\section{Introduction} Generalization from limited data is a basic cognitive ability of intelligence~\cite{sims2018efficient}. To achieve class-level generalization abilities, researchers have developed zero-shot learning (ZSL)~\cite{xian2018zero}, which aims to identify unseen classes without any available images during training. ZSL can also be extended to a more general setting called generalized ZSL (GZSL), which tries to identify both seen and unseen classes at test time~\cite{chao2016empirical}. \begin{figure}[htbp] \centering \includegraphics[width=0.95\linewidth]{figures_banner.pdf} \caption{Comparison between traditional embedding ZSL methods and our Re-balanced MSE (ReMSE). Traditional embedding ZSL methods generally produce imbalanced error losses on semantic prediction, which have a high correlation with the label value. Our ReMSE decreases this correlation and obtains a re-balanced error distribution.} \label{fig:banner} \end{figure} Generally, existing ZSL methods utilize various semantic labels, e.g. word2vec~\cite{xian2018zero} and attributes~\cite{lampert2013attribute}, as auxiliary information to obtain knowledge that transfers from seen to unseen classes. Benefiting from these semantic labels, researchers can build a learnable mapping between the visual space and the semantic space and classify unseen classes in the mapping space, an approach known as embedding ZSL. Recently, ZSL methods have mainly focused on exploiting various complex modules and architectures, such as GCN~\cite{xie2020region} and Transformer~\cite{chen2022transzero}, to extract powerful visual features and improve visual-semantic interaction. However, the high computational demands caused by such complex structures pose challenges in practical applications. In this work, we address the zero-shot problem from a brand new perspective, namely how semantic labels affect the performance of ZSL methods. Specifically, we find that almost all existing ZSL models suffer from imbalanced semantic prediction, i.e. the model can accurately predict some semantics but may not for others. We argue that such a problem could significantly limit the performance of ZSL. In fact, treating every semantic fairly is a prominent topic in current visual recognition systems~\cite{ramaswamy2021fair, park2022fair}. Essentially, an ideal ZSL model should treat all semantic labels with equal importance. However, borrowing the notion of imbalanced regression defined in~\cite{yang2021delving}, we find that imbalanced semantic prediction in ZSL unfortunately poses two unique challenges. On one hand, as shown in Fig.~\ref{fig:banner} (a), the imbalanced predictions in ZSL are highly correlated with the values of the semantic labels, rather than the number of samples as in traditional imbalanced learning~\cite{yang2021delving,ren2021balanced,hu2020learning}; on the other hand, ZSL is multi-label imbalanced, i.e. different semantics follow quite different error distributions across classes. As such, existing general imbalanced regression methods are not suitable for the ZSL scenario. For instance, the recent Balanced MSE~\cite{ren2021balanced}, an important general imbalanced learning method, is rather limited in dealing with ZSL, which is also empirically demonstrated in our experiments. To this end, we first statistically examine the effect of semantic labels on imbalanced semantic prediction.
Next, leveraging a novel notion of the class-averaged semantic error matrix, we develop a simple yet effective re-balancing strategy, the \textbf{Re-balanced Mean-Squared Error (ReMSE)} loss, which mitigates the inherent imbalance observed in ZSL. To penalize under-fitting semantics that potentially incur more errors, ReMSE relies on the statistics of semantic errors to generate re-weighting factors. Besides, to adjust different error distributions, we design two-level re-weighting factors, i.e. 1) class-level re-weighting that compares errors of the same semantic across different classes, and 2) semantic-level re-weighting that compares errors of the same class across different semantics. Analytically, we show that minimizing the ReMSE loss tends to minimize the mean error loss as well as the standard deviation of error losses across different semantics, thus certifying the effectiveness of ReMSE in learning a re-balanced visual-semantic mapping. In summary, our main contributions are three-fold: \begin{enumerate} \item[(1)] This is a first attempt to theoretically and empirically show that ZSL is an imbalanced regression problem affected by semantic label values, thus offering a new insight into ZSL. \item[(2)] A novel loss function, ReMSE, is designed for ZSL, which dynamically perceives multiple error distributions and focuses on under-fitting semantics without increasing inference cost. Furthermore, we show that minimizing the ReMSE loss tends to minimize the mean and variance of the error distributions, leading to a rebalanced ZSL. \item[(3)] Extensive experiments on three ZSL benchmarks show that our ReMSE effectively alleviates the imbalanced regression problem. Without bells and whistles, our approach outperforms many state-of-the-art ZSL methods (e.g. Transformer-based ones), as well as the representative imbalanced regression method Balanced MSE. \end{enumerate} \section{Related Work} \label{sec:rel} \subsection{Zero-shot Learning} Existing ZSL methods can be mainly divided into generative methods and embedding methods. Generative methods utilize generative models, e.g. the Generative Adversarial Network~(GAN)~\cite{li2019alleviating,ye2019sr}, the Variational AutoEncoder~(VAE)~\cite{li2021generalized}, and Flow models~\cite{shen2020invertible}, to synthesize unseen visual features. In this paper, we focus on the embedding methods, which typically learn the visual-semantic mapping and classify unseen classes in the mapping space. Most recent embedding ZSL proposals study ZSL from the model perspective, i.e. by introducing more complex modules (e.g. GNN)~\cite{fu2017zero} or frameworks (e.g. Transformer)~\cite{chen2022transzero} to extract highly powerful visual features. However, there are just a few investigations of ZSL from the data perspective. For example, \cite{akata2015label} shows that $l_{2}$ normalization can compress the noise of semantic labels. Most of these works do not explicitly analyze the impact of semantics. In contrast, we focus on how semantic annotations impact learning difficulty. Identifying the semantic prediction imbalance of ZSL, we propose the novel ReMSE algorithm. Supported by the rebalancing strategy, simple CNN-based models can achieve superior performance, even on par with more complex models. \subsection{Imbalanced Learning} Imbalanced learning has been widely studied and has received increasing attention recently due to its extensive applications in practical scenarios.
While most imbalanced learning focuses on classification~\cite{cao2019learning,hu2020learning}, imbalanced regression, which involves continuous and infinite target values, was first defined in~\cite{yang2021delving}. Along this line of research, Balanced MSE~\cite{ren2021balanced} assumes a balanced distribution to represent the true training and testing distributions. Despite its good performance, existing imbalanced regression work attributes imbalanced optimization to the imbalanced sample distribution. Differently, we find that imbalanced values of semantic labels also play a major role in ZSL. As such, we propose the ReMSE method, which can obtain re-balanced prediction error distributions across classes. \section{Preliminary} The main goal of ZSL is to obtain a classifier that can distinguish visual samples (i.e. images) $\mathcal{X}^{u}$ of unseen classes $\mathcal{C}^{u}$ from the images $\mathcal{X}^{s}$ of seen classes $\mathcal{C}^{s}$; only the latter appear in the training set, and $\mathcal{C}^{s} \cap \mathcal{C}^{u} = \emptyset$. Since existing methods utilize the class-level semantic labels $\mathcal{S}$ (e.g. attributes or word2vec) to bridge the gap between seen and unseen classes, we define the training set as $\mathcal{D}^{tr}=\{(\mathbf{x}_i, \mathbf{s}_i, y_i)| \mathbf{x}_i \in \mathcal{X}^s, \mathbf{s}_i \in \mathcal{S}, y_i \in \mathcal{C}^{s}\}$, where $\mathbf{x}_i$ and $\mathbf{s}_i$ represent the image of the $i$-th sample and its semantic vector, respectively. The number of samples in the training set is denoted by $n_{tr}$. Similarly, we can denote the test set by $\mathcal{D}^{te}=\{(\mathbf{x}_i,\mathbf{s}_i,y_i)| \mathbf{x}_i \in \mathcal{X}^u, \mathbf{s}_i \in \mathcal{S}, y_i \in \mathcal{C}^{u} \}$, where the testing samples are from unseen classes in the ZSL setting. In the GZSL setting, the testing samples may also be taken from seen classes, in which case $\mathcal{X}^u$ and $\mathcal{C}^{u}$ are replaced by $\mathcal{X}^u \cup \mathcal{X}^s$ and $\mathcal{C}^u \cup \mathcal{C}^s$, respectively. We denote by $d_{s}$ and $d_{v}$ the dimensions of the semantic labels and visual features, respectively. It is worth mentioning that the values of semantic labels vary across datasets. For example, on the dataset CUB~\cite{wah2011caltech}, the 312 semantic values range from $0$ to $100$. We focus on embedding ZSL~\cite{xie2019attentive,liu2021goal} in our work. Such methods first use a pre-trained model, such as ResNet, as the backbone for extracting the visual feature $\tilde{\mathbf{v}}_{i}$ of the image $\mathbf{x}_i$. Then a fully connected network (semantic predictor) with parameter $\mathbf{W}\in \mathbb{R}^{d_{s} \times d_{v}}$ solves the ZSL task by semantic prediction: \begin{equation} f(\tilde{\mathbf{v}}_i) = \mathbf{W} \tilde{\mathbf{v}}_{i}= \tilde{\mathbf{s}}_{i}.
\end{equation} Typically, some recent work~\cite{xu2020attribute,liu2021goal,du2022boosting} trains embedding ZSL models by leveraging the so-called Semantic Cross-Entropy (SCE) loss: \begin{equation} \label{eq:SCE} \mathcal{L}_{SCE} = -\log p_{y_i}(\mathbf{x}_{i}), \end{equation} \begin{equation} \label{eq:CP} p_{y_i}(\mathbf{x}_i) = \frac{ e^{ \tau\cos \theta (\tilde{\mathbf{s}}_i, \mathbf{s}_i) } } { \sum_{c \in \mathcal{C}_s } e^{ \tau\cos \theta (\tilde{\mathbf{s}}_i, \mathbf{s}_c) } } , \end{equation} in which $p_{y_i}(\mathbf{x}_i)$ is the predicted probability of class $y_i$ for the sample $\mathbf{x}_i$, $\tau$ is a scale hyper-parameter, and $\theta (\tilde{\mathbf{s}}_i, \mathbf{s}_c)$ is defined as the angle between the semantic prediction $\tilde{\mathbf{s}}_i$ of sample $i$ and the semantic label $\mathbf{s}_c$ of class $c$, i.e. $\cos \theta (\tilde{\mathbf{s}}_i, \mathbf{s}_c) = \tilde{\mathbf{t}}_i^{T} \mathbf{t}_c$, where $\tilde{\mathbf{t}}_i$ and $\mathbf{t}_c$ denote the $l_2$-normalized semantic prediction and class semantic label, i.e. \begin{align} \tilde{\mathbf{t}}_i = \tilde{\mathbf{s}}_i/\|\tilde{\mathbf{s}}_i\|_2 , \quad \mathbf{t}_c = \mathbf{s}_c/\|\mathbf{s}_c\|_2. \end{align} Specifically, for ZSL, the test sample $\mathbf{x}_{i} \in \mathcal{X}^u$ is assigned to the best matching class $c^{\prime}$ from the unseen classes $\mathcal{C}^u$: \begin{equation} c^{\prime} = \underset{c \in \mathcal{C}^u}{\arg \max} \,\cos\theta (\tilde{\mathbf{s}}_i, \mathbf{s}_c). \end{equation} In the GZSL setting, the candidate set $\mathcal{C}^u$ is replaced by $\mathcal{C}^u \cup \mathcal{C}^s$. \begin{figure*}[htbp] \centering \begin{minipage}[t]{0.48\linewidth} \centering \includegraphics[width=9cm]{figures_ErrorDistribution.pdf} \caption{Class-averaged semantic error matrix on the test sets of AWA2 and CUB under the GZSL setting (rescaled by a logarithmic function). It shows that (1) the error distributions are not balanced; (2) the same semantic has different prediction errors on different classes; and (3) the same class also has different prediction errors on different semantics.} \label{fig:ErrorDistribution} \end{minipage} \quad \begin{minipage}[t]{0.47\linewidth} \centering \includegraphics[width=8cm]{figures_correspondence.pdf} \caption{Correlation between semantic label value $t$ and averaged semantic prediction error $m$, which exhibits a strong positive linear relationship in terms of the Pearson Correlation Coefficient (PCC).} \label{fig:correspondence} \end{minipage} \end{figure*} \section{Main Methodology} In this section, we first treat ZSL as a regression problem. Next, we verify that imbalanced semantic prediction persists in existing methods. We then study imbalanced semantic prediction statistically by examining how semantic label values affect the averaged semantic prediction error. Finally, we propose our Rebalanced MSE, which can well mitigate the imbalanced semantic prediction issue. Our Rebalanced MSE ensures that the model not only minimizes the prediction errors, but also tends to equalize or balance these prediction errors while converging. Furthermore, to enhance the model's representation ability on different semantics, we design a novel attention-based baseline, named AttentionNet, which generates semantic-specific attention maps during both the training and testing stages.
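For concreteness, the following is a minimal PyTorch-style sketch of the embedding pipeline in the Preliminary section, i.e. the linear semantic predictor $f(\tilde{\mathbf{v}}_i)=\mathbf{W}\tilde{\mathbf{v}}_i$ trained with the SCE loss $\mathcal{L}_{SCE}$. The class and function names are illustrative, not taken from any released code.

\begin{verbatim}
import torch
import torch.nn.functional as F

# Illustrative sketch (not the authors' released code): a linear semantic
# predictor on top of backbone features, trained with the SCE loss.
class SemanticPredictor(torch.nn.Module):
    def __init__(self, d_v, d_s):
        super().__init__()
        self.W = torch.nn.Linear(d_v, d_s, bias=False)  # W in R^{d_s x d_v}

    def forward(self, v):                 # v: (N, d_v) visual features
        return self.W(v)                  # (N, d_s) semantic predictions

def sce_loss(s_pred, class_semantics, labels, tau=20.0):
    # s_pred: (N, d_s); class_semantics: (C, d_s) class labels s_c;
    # labels: (N,) seen-class indices; tau: scale hyper-parameter.
    t_pred = F.normalize(s_pred, dim=-1)          # l2-normalized predictions
    t_cls = F.normalize(class_semantics, dim=-1)  # l2-normalized class labels
    logits = tau * t_pred @ t_cls.T               # tau * cos(theta)
    return F.cross_entropy(logits, labels)
\end{verbatim}

At test time, classification reduces to taking the $\arg\max$ of the same cosine similarities over the candidate classes.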
\subsection{Tackling ZSL via Regression} \label{sec:tackling} Compared to training ZSL models with SCE, treating ZSL as a regression task has a unique advantage: SCE entangles the gradient of one semantic with those of the other semantics, whereas regression losses (e.g. MSE, MAE and so on) do not. Specifically, by representing the $k^{th}$ row vector of $\mathbf{W}$ as $\mathbf{w}_{k}$, responsible for the $k^{th}$ semantic prediction, we can derive the gradient as \begin{align} \label{eq:SCEgrad} \frac{\partial \mathcal{L}_{SCE}}{\partial \mathbf{w}_{k} } & = \sum_{c\in \mathcal{C}^{s}} \frac{\partial \mathcal{L}_{SCE}}{\partial \tilde{\mathbf{t}}_i^{\top} \mathbf{t}_{c} } \frac{\partial \tilde{\mathbf{t}}_i^{\top} \mathbf{t}_{c} } {\partial \mathbf{w}_{k} } \nonumber \\ & = \sum_{c \in \mathcal{C}^{s}} \underbrace{ \left( p_{c}(\mathbf{x}_i) - \mathds{1}_{[c=y_i]} \right) }_{\text{class probability term}} \frac{\partial \tilde{\mathbf{t}}_i^{\top} \mathbf{t}_{c} } {\partial \mathbf{w}_{k}}. \end{align} Obviously, the class probability is determined by the whole semantic prediction. Thus, even if some semantic predictions are not good, these semantics can hardly be further optimized once the class probability is close to 1, since the gradient then vanishes. In contrast, regression losses can explicitly perceive the performance of every semantic. For example, writing the MSE as \begin{equation} \mathcal{L}_{MSE} = \frac{1}{N} \sum_{i = 1}^N \|\tilde{\mathbf{s}}_i - \mathbf{s}_i\|_2^2, \end{equation} we can obtain the gradient for $\mathbf{w}_{k}$ on a single sample $i$: \begin{equation} \label{eq:MSE} \frac{\partial \mathcal{L}_{MSE}}{\partial \mathbf{w}_{k} } = 2(\mathbf{w}_{k}^\top \tilde{\mathbf{v}}_i - s_{ik}) \tilde{\mathbf{v}}_i. \end{equation} It is evident that the optimization of $\mathbf{w}_{k}$ is not interfered with by the other semantics. It is worth mentioning that many works~\cite{romera2015embarrassingly, qiao2016less, xu2020attribute, liu2021goal} have utilized the regression loss $\mathcal{L}_{MSE}$ as a compensatory loss in ZSL, i.e., \begin{equation} \mathcal{L} = \mathcal{L}_{SCE} + \lambda \mathcal{L}_{MSE}, \end{equation} where $\lambda$ denotes a hyper-parameter. Although $\mathcal{L}_{MSE}$ penalizes the discrepancy between semantic labels and semantic predictions with the Euclidean distance, it is incompatible with the cosine distance optimized by the SCE loss in ZSL. We have the following proposition: \begin{shaded} \begin{proposition} \label{prop:norm} For any regressor with the MSE loss and parameter $\mathbf{W}$, given any example $(\mathbf{x}_i,\mathbf{s}_i,y_i)$, when the model tends to the optimum, we have $\|\tilde{\mathbf{s}}_i\|_{2} \rightarrow \|\mathbf{s}_i\|_2 \cos \theta$, where $\theta$ is the angle between the semantic label and its prediction. \end{proposition} \end{shaded} \begin{proof} Recalling the above formulation, the MSE loss can be expanded as follows: \begin{align} \mathcal{L}_{MSE} = \frac{1}{N} \sum_{i = 1}^N (\|\tilde{\mathbf{s}}_i\|_2^2 - 2\|\tilde{\mathbf{s}}_i\|_2 \|\mathbf{s}_i\|_2 \cos \theta + \|\mathbf{s}_i\|_2^2).
\nonumber \end{align} By directly calculating the gradient of the MSE loss with respect to the parameter $\mathbf{W}$, we obtain \begin{align} \label{eq:MSEgrad} \frac{ \partial \mathcal{L}_{MSE} }{ \partial \mathbf{W} } & = \frac{1}{N} \sum_{i = 1}^N \frac{ \partial \mathcal{L}_{MSE}} { \partial \|\tilde{\mathbf{s}}_i \|_2 } \frac{ \partial \|\tilde{\mathbf{s}}_i \|_2}{\partial \mathbf{W} } \nonumber \\ & = \frac{1}{N} \sum_{i = 1}^N 2(\|\tilde{\mathbf{s}}_i\|_2 - \|\mathbf{s}_i\|_2 \cos \theta) \frac{ \partial \|\tilde{\mathbf{s}}_i \|_2 }{\partial \mathbf{W} }. \nonumber \end{align} Hence, when the gradient tends to $0$ while $\frac{ \partial \|\tilde{\mathbf{s}}_i \|_2 }{\partial \mathbf{W} }$ does not vanish, we have $\|\tilde{\mathbf{s}}_i\|_2 \rightarrow \|\mathbf{s}_i\|_2 \cos \theta$. \end{proof} This proposition shows that the original MSE may not precisely measure how well the semantics fit: even if $\|\tilde{\mathbf{s}}_i\|_2 \rightarrow \|\mathbf{s}_i\|_2 \cos \theta$, $\tilde{\mathbf{s}}_i$ and $\mathbf{s}_i$ may still form a large angle. Therefore, the predicted and ground-truth semantics can still be very inconsistent. Thus, we normalize both semantic labels and predictions in MSE to obtain the Normalized MSE $\mathcal{L}_{NMSE}(\tilde{\mathbf{s}}_i,\mathbf{s}_i) = \mathcal{L}_{MSE}(\tilde{\mathbf{t}}_i,\mathbf{t}_i)$, which proves fairly compatible with $\mathcal{L}_{SCE}$, since $\mathcal{L}_{NMSE}= 2 - 2\cos\theta(\tilde{\mathbf{s}}_i, \mathbf{s}_i)$. It is noted that Balanced MSE~\cite{ren2021balanced} handles imbalanced error distributions by assuming a balanced label distribution at test time. Its batch-based formulation is \begin{equation} \label{eq:BalancedMSE} \mathcal{L}_{balMSE} = -\log \frac{e^{-\|\tilde{\mathbf{t}}_{i}-\mathbf{t}_{i}\|^2_2/\sigma } }{\sum_{\mathbf{t}_{j} \in B_{\mathbf{t}}} e^{-\|\tilde{\mathbf{t}}_{i}-\mathbf{t}_{j}\|^2_2/\sigma} }, \end{equation} where $B_{\mathbf{t}}$ is a batch of normalized semantic labels and $\sigma$ is a learnable parameter. It can be viewed as $\mathcal{L}_{NMSE}$ with a regularization term that balances the sample distribution. However, as discussed in the next section, we find that the imbalanced error distribution in ZSL is caused by the imbalanced semantic values, rather than the sample size. Thus, Balanced MSE does not perform well in the ZSL setting. \subsection{Imbalanced Semantic Prediction} \label{sec:ISP} Now, we consider the imbalanced semantic prediction problem in ZSL. During the training process, we find that previous methods often generate imbalanced semantic error distributions. To illustrate this qualitatively, in the GZSL setting\footnote{Note that the test samples of unseen classes in ZSL are the same as in the GZSL setting. Thus we only need to visualize error matrices under the GZSL setting.}, we exploit GEMZSL (an advanced ZSL method) to visualize the class-averaged semantic error matrix $M \in \mathbb{R}^{|\mathcal{C}^u \cup \mathcal{C}^s| \times d_{s}}$ on AWA2 and CUB. Specifically, we collect the test samples of every class $l \in \mathcal{C}^u \cup \mathcal{C}^s$ and compute the mean prediction error for each class and each semantic: \begin{equation} \label{eqn:prediction_loss} m_{lj} = \frac{1}{|\mathcal{D}_{l}|} \sum_{(\mathbf{x}_i,\mathbf{s}_i,y_i) \in \mathcal{D}_{l}} (\tilde{t}_{ij}-t_{ij})^2, \end{equation} where $|\mathcal{D}_l|$ is the number of samples in $\mathcal{D}_l$ and $t_{ij}$ is the $j^{th}$ element of the vector $\mathbf{t}_i$.
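As a reference for how this statistic can be computed, below is a hypothetical helper mirroring the class-averaged error matrix above; the function name and tensor layout are our own assumptions.

\begin{verbatim}
import torch
import torch.nn.functional as F

# Hypothetical helper: M[l, j] is the mean squared error of the normalized
# j-th semantic over the samples of class l, as in the definition above.
def class_averaged_error_matrix(s_pred, s_true, labels, num_classes):
    t_pred = F.normalize(s_pred, dim=-1)
    t_true = F.normalize(s_true, dim=-1)
    err = (t_pred - t_true) ** 2                   # (N, d_s) per-sample errors
    M = torch.zeros(num_classes, err.shape[1])
    cnt = torch.zeros(num_classes, 1)
    M.index_add_(0, labels, err)                   # sum errors per class
    cnt.index_add_(0, labels, torch.ones(len(labels), 1))
    return M / cnt.clamp(min=1)                    # class-wise mean
\end{verbatim}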
The entry $m_{lj}$ thus indicates the averaged error loss on the $l^{th}$ class for the $j^{th}$ semantic. The visualization under GZSL on CUB and AWA2 is shown in Fig.~\ref{fig:ErrorDistribution}. The results in Fig.~\ref{fig:ErrorDistribution} reveal three interesting observations: (1) ZSL models perform unevenly on semantic prediction: the model may fit a certain semantic of a certain class well, yet do poorly on other semantics and classes; (2) the same semantic has different prediction errors on different classes; and (3) the same class also has different prediction errors on different semantics. To investigate further, in Fig.~\ref{fig:correspondence}, we employ the Pearson Correlation Coefficient (PCC) to quantitatively measure the correlation between the semantic prediction errors $m_{lj}$ and the semantic label values on three benchmarks. We find that the error loss tends to have a high positive correlation with the semantic label value. In other words, the larger the semantic label value, the larger the semantic prediction error. \subsection{ReMSE} Given the above, one might resort to existing imbalanced regression methods to balance the semantic errors. However, in our experiments, previous imbalanced methods~\cite{ren2021balanced,yang2021delving} (e.g. $\mathcal{L}_{balMSE}$) are not suitable for the imbalanced semantic prediction problem. They all rely on the hypothesis that imbalanced test error is caused by an imbalanced sample distribution, while in ZSL the imbalance is related to the imbalanced semantic label values, as verified in our work. Thus, unlike these methods, we directly adjust the loss, which explicitly measures the semantic prediction performance. \begin{figure*}[ht] \centering \includegraphics[width=0.9\linewidth]{figures_ReMSE.pdf} \caption{Overview of our ReMSE method: The prediction error is computed after both the semantics and their predicted values are normalized. Our goal is to minimize the weighted prediction error, where the weighting factors are derived from the class-averaged prediction errors and comprise class-level factors and semantic-level factors. It is worth noting that once a new prediction error is obtained, its weight needs to be recalculated. } \label{fig:ReMSE} \end{figure*} First, we design another class-averaged semantic error matrix $M^{\prime} \in \mathbb{R}^{|\mathcal{C}^s| \times d_{s}}$ within every training batch to establish a common scale for error magnitudes and re-balance every error per class and per semantic. As shown in Fig.~\ref{fig:ErrorDistribution}, the under-fitting of semantic predictions can differ markedly across datasets: the same semantic has different prediction errors on different classes, and the same class also has different prediction errors on different semantics. Thus, given a class $l$ and a semantic $j$, on one hand, we adopt a class-level re-weighting factor $p_{lj}$ to balance its error against those of other classes on the same semantic; on the other hand, we adopt a semantic-level re-weighting factor $q_{lj}$ to balance it against other semantics of the same class. The class-level balancing factor $p_{lj}$ is designed as \begin{equation} p_{lj} = \left(\log \frac{ m^{\prime}_{lj} }{ \min_{c \in \mathcal{C}^s} m^{\prime}_{cj} } + 1 \right)^{\alpha}, \quad \alpha \ge 0. \end{equation} A logarithmic function is used here to avoid potential ratio explosions, and a parameter $\alpha$ is taken to control the scale of re-weighting.
Similarly, the semantic-level balancing factor $q_{lj}$ is calculated as \begin{equation} q_{lj} = \left(\log \frac{ m^{\prime}_{lj} }{ \min_{1 \le k \le d_s} m^{\prime}_{l k} } + 1 \right)^{\beta}, \quad \beta \ge 0. \end{equation} Finally, we obtain the Rebalanced MSE loss function: \begin{equation} \label{eq:ReMSE} \mathcal{L}_{ReMSE} = \frac{1}{N} \sum^{N}_{i=1} \sum^{d_{s}}_{j=1} p_{y_i j} q_{y_i j} e_{ij}, \end{equation} where $ e_{ij}= (\tilde{t}_{ij}-t_{ij})^2$. To better understand the properties of our ReMSE, we reduce the problem to the simplest case, proving that the prediction errors of the classes become equal when the algorithm converges. \begin{shaded} \begin{theorem} \label{theorem:rmes} Given two classes of one-dimensional data $\{(x,c_1),(y,c_2)\}$, let $\tilde{x}$ and $\tilde{y}$ be the predictions of the semantic values $x$ and $y$, respectively. The loss function becomes \begin{equation} \mathcal{L}_{ReMSE} = w(x - \tilde{x})^2 + v(y - \tilde{y})^2, \end{equation} where $w = p_{1 1}q_{1 1}$ and $v = p_{2 1}q_{2 1}$. Minimizing $\mathcal{L}_{ReMSE}$ by gradient descent minimizes the prediction error of each class as well as the variance of the prediction errors. \end{theorem} \end{shaded} \begin{proof} At the $t$-th iteration, we have $w_t \ge 1, v_t = 1$ or $w_t = 1, v_t \ge 1$. The prediction errors for the two samples at the $t$-th iteration are $(\tilde{x}_t - x)^2$ and $(\tilde{y}_t - y)^2$. Without loss of generality, assume that $(\tilde{x}_t - x)^2 < (\tilde{y}_t - y)^2$; then we have $w_t = 1$ and $v_t > 1$. We update $\tilde{x}$ and $\tilde{y}$ by gradient descent with a stepsize of $r$ (a small positive constant), and get \begin{align} \tilde{x}_{t + 1} &= \tilde{x}_t - 2 r w_t (\tilde{x}_t - x), \\ \tilde{y}_{t + 1} &= \tilde{y}_t - 2 r v_t (\tilde{y}_t - y). \end{align} Therefore, we get the new prediction errors \begin{align} (\tilde{x}_{t + 1} - x)^2 &= (1 - 2 r w_t)^2 (\tilde{x}_t - x)^2, \\ (\tilde{y}_{t + 1} - y)^2 &= (1 - 2 r v_t)^2 (\tilde{y}_t - y)^2. \end{align} At this point, since $(1 - 2 r w_t)^2 > (1 - 2 r v_t)^2$, we have \begin{align} & (\tilde{y}_{t + 1} - y)^2 - (\tilde{x}_{t + 1} - x)^2 \nonumber \\ & = (1 - 2 r v_t)^2 (\tilde{y}_t - y)^2 - (1 - 2 r w_t)^2 (\tilde{x}_t - x)^2 \nonumber \\ & < (1 - 2 r w_t)^2 \left((\tilde{y}_t - y)^2 - (\tilde{x}_t - x)^2 \right). \end{align} Since $(1 - 2 r w_t)^2 < 1$, after one gradient descent step, the difference in prediction error between the two classes becomes smaller, that is, the losses become more balanced. The weights $w_t$ and $v_t$ are continuously adjusted until $(\tilde{x}_t - x)^2 = (\tilde{y}_t - y)^2$, at which point we have $w_t = 1$, $v_t = 1$ and $(\tilde{x}_{t + 1} - x)^2 = (\tilde{y}_{t + 1} - y)^2$. Finally, it is worth noting that after the $t$-th iteration, we have $(\tilde{x}_{t + 1} - x)^2 \le (\tilde{x}_t - x)^2$ and $(\tilde{y}_{t + 1} - y)^2 \le (\tilde{y}_t - y)^2$. Namely, the prediction error of each class decreases as the iterations progress. Our ReMSE algorithm adjusts the weights so that the errors end up being balanced. Ideally, when our loss is minimized, the average prediction error of each class is roughly equal. \end{proof}
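To make the two-level re-weighting concrete, the following is a minimal sketch of the ReMSE loss, assuming the batch-wise class-averaged error matrix $M^{\prime}$ is computed as in the helper above; the small stabilizing constant and the handling of classes absent from the batch are implementation assumptions of ours.

\begin{verbatim}
import torch
import torch.nn.functional as F

# Sketch of ReMSE: weight each squared error e_ij by the class-level factor
# p (same semantic across classes) and the semantic-level factor q (same
# class across semantics), both computed from the batch error matrix M'.
def remse_loss(s_pred, s_true, labels, num_classes, alpha=1.0, beta=1.0):
    t_pred = F.normalize(s_pred, dim=-1)
    t_true = F.normalize(s_true, dim=-1)
    err = (t_pred - t_true) ** 2                    # e_ij, shape (N, d_s)

    with torch.no_grad():                           # weights are statistics,
        M = torch.zeros(num_classes, err.shape[1])  # not back-propagated
        cnt = torch.zeros(num_classes, 1)
        M.index_add_(0, labels, err)
        cnt.index_add_(0, labels, torch.ones(len(labels), 1))
        M = M / cnt.clamp(min=1) + 1e-12            # assumed epsilon for stability
        seen = cnt.squeeze(1) > 0                   # classes present in the batch
        p = (torch.log(M / M[seen].min(dim=0).values) + 1) ** alpha
        q = (torch.log(M / M.min(dim=1, keepdim=True).values) + 1) ** beta
        w = (p * q)[labels]                         # per-sample weights

    return (w * err).sum(dim=1).mean()
\end{verbatim}

Note that the weights are recomputed from the current errors at every step, matching the remark in Fig.~\ref{fig:ReMSE} that each new prediction error requires its weight to be recalculated.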
It is worth noting that, considering the separability of the loss function along the class dimension and the semantic dimension, if the optimization of the loss function $\mathcal{L}_{ReMSE}$ is regarded as $|\mathcal{C}^s| \times d_{s}$ independent one-dimensional regression problems, then the above conclusion easily extends to the multi-class and multi-semantic situation. \subsection{AttentionNet} \begin{figure}[ht] \centering \includegraphics[width=0.95\linewidth]{figures_Innovation.pdf} \caption{Comparison between previous attention-based ZSL methods and our AttentionNet. The symbols I, F, A and S denote the images, feature maps, attentive features and predicted semantics, respectively. (a) GEMZSL~\cite{liu2021goal} \& APN~\cite{xu2020attribute} use only the attention branch to train their backbone, and do not use attention at the testing stage. (b) SGMA~\cite{zhu2019semantic} uses an attention branch and a global branch to predict semantics either locally or globally. (c) In contrast, our AttentionNet utilizes the attention branch in both the training and testing stages to produce semantic-specific attentional visual features. Furthermore, we add a semantic-specific feature fusion to avoid the degradation of attentional features.} \label{fig:innovation} \end{figure} \begin{figure*}[ht] \centering \includegraphics[width=0.95\linewidth]{figures_AttentionNet.pdf} \caption{Architecture of our AttentionNet. It contains two innovations: (1) a cross-modal semantic-specific attention module and (2) the semantic-specific fusion. The cross-modal semantic-specific attention module allows the model to attend to different local regions according to different semantic attributes, fully exploiting both the visual and textual semantics. During both training and testing, our model learns global and local image features end-to-end.} \label{fig:attentionNet} \end{figure*} To further improve the accuracy of semantic prediction, we additionally utilize the attention mechanism to extract semantic-specific visual features. However, we observe that many existing attention-based ZSL methods do not fully exploit the capacity of the attention mechanism. Thus, we design a novel ZSL embedding model called AttentionNet. The comparison between existing attention-based ZSL methods and our AttentionNet is illustrated in Fig.~\ref{fig:innovation}. Several remarks are highlighted as follows. (1) The traditional methods APN~\cite{xu2020attribute} and GEMZSL~\cite{liu2021goal} only use the attention branch to train the backbone, while abandoning it at the testing stage. Obviously, the attention branch then cannot help the model localize discriminative regions at the testing stage. (2) The method SGMA~\cite{zhu2019semantic} utilizes a global branch and an attention branch to extract global and local features and predict semantics individually. However, a vast amount of research demonstrates that features fusing local and global information are more powerful~\cite{guo2020augfpn,gao2019res2net}. In addition, the predefined fusion ratio also increases the cost of hyper-parameter search. (3) In contrast, our AttentionNet utilizes the attention branch in both the training and testing stages so as to fully exploit the capacity of attention. Besides, our attention branch generates semantic-specific attentive visual features for different semantics, as sketched below.
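As a concrete reference for remark (3) and the architectural description that follows, we give a minimal sketch of a semantic-specific cross-modal attention module with global fusion; the layer names, the scaled dot product, and the mean-pooled, additively fused global feature are illustrative assumptions rather than the exact released architecture.

\begin{verbatim}
import torch

# Sketch: Q from the word vectors of the d_s semantics, K/V from the W*H
# regional visual features; the output is one attended feature per semantic,
# fused (here by addition, an assumption) with a global feature.
class CrossModalAttention(torch.nn.Module):
    def __init__(self, d_w2v, d_v, d_lat=512):
        super().__init__()
        self.query = torch.nn.Linear(d_w2v, d_lat)  # semantic word vectors -> Q
        self.key = torch.nn.Linear(d_v, d_lat)      # regional features -> K
        self.value = torch.nn.Linear(d_v, d_v)      # regional features -> V

    def forward(self, wv, feat):
        # wv: (d_s, d_w2v) word vectors; feat: (N, R, d_v), R = W*H regions
        Q = self.query(wv)                                   # (d_s, d_lat)
        K = self.key(feat)                                   # (N, R, d_lat)
        attn = torch.softmax(K @ Q.T / Q.shape[-1] ** 0.5,
                             dim=1)                          # (N, R, d_s)
        local = attn.transpose(1, 2) @ self.value(feat)      # (N, d_s, d_v)
        glob = feat.mean(dim=1, keepdim=True)                # (N, 1, d_v)
        return local + glob            # semantic-specific fused features
\end{verbatim}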
Moreover, to avoid the degradation of attentional features, we add a feature fusion operator between the attentive features and the global features, which shows high effectiveness, as empirically demonstrated in our experiments. The structural details of our AttentionNet are shown in Fig.~\ref{fig:attentionNet}. Specifically, AttentionNet is divided into two branches: the upper branch synthesizes semantic features for image attention, while the lower branch generates global visual (image) features. In the upper branch, we first adopt the word representation model GloVe~\cite{pennington2014glove} to obtain word vectors for all $d_{s}$ semantics ($d_{w2v}$ denotes their dimension). For visual feature extraction, we adopt ResNet101~\cite{he2016deep} as our backbone. It extracts $W\times H$ regional visual features with $d_{v}$ channels, which are then fed to our cross-modal attention module. We use three fully connected (FC) layers, namely the query, key, and value layers, to project the semantic word vectors and visual features into a latent space where vision and semantics are aligned. Matrix multiplication between $Q$ and $K$ yields the strength of attention (normalized by softmax). Once the attention map is obtained, the attention network drives the semantic items to automatically focus on specific regions. Next, our network augments the $d_v$-dimensional feature to obtain $d_s \times d_v$ features, which are then merged with the upper branch as the input of the semantic prediction network. Our cross-modal semantic-specific attention module between semantics and images allows our model to implicitly establish relationships between semantic attributes and image features, taking full advantage of the visual and textual modalities. With the help of the attention mechanism, our model obtains better local image features, which are related to the predicted semantics and are fused with the global features in the testing phase to obtain better semantic predictions. \section{Experiments} \begin{table*}[htbp] \centering \caption{Statistics of datasets.} \label{table:dataset} \begin{tabular}{c|c|c|c|c|c|c|c} \hline Dataset&\tabincell{c}{Semantic\\dimension} &\tabincell{c}{Semantic\\range}&\tabincell{c}{$\#$ Seen\\classes}&\tabincell{c}{$\#$ Unseen\\classes}&\tabincell{c}{$\#$ Images\\(total)}&\tabincell{c}{$\#$ Images\\(train+val)}&\tabincell{c}{$\#$ Images\\(test unseen/seen)}\\ \hline AWA2~\cite{lampert2013attribute}&85 & $[0,100] \cup \{-1\}$ &40&10&30475&19832&4958/5685\\ \hline CUB~\cite{wah2011caltech}&312& $[0,100]$&150&50&11788&7057&2679/1764\\ \hline SUN~\cite{patterson2012sun}&102& $[0,1]$ &645&72&14340&10320&1440/2580\\ \hline \end{tabular} \end{table*} \begin{table*}[htb] \centering \caption{Overall comparison with SOTAs in the settings of ZSL and GZSL. In ZSL, T1 represents the top-1 accuracy (\%) for unseen classes. In GZSL, $U$, $S$ and $H$ represent the top-1 accuracy (\%) of unseen classes, seen classes, and their harmonic mean, respectively. The symbol $^\star$ denotes the results of our implemented version. The best and next best results among embedding methods are marked with \textcolor{red}{\textbf{red}} and \textcolor{blue}{\textbf{blue}}.
} \begin{tabular}{p{2.5cm}|p{1.0cm}|p{0.05cm}p{0.05cm}p{0.05cm}|p{0.05cm}p{0.05cm}p{0.05cm}p{0.05cm}p{0.05cm}p{0.05cm}p{0.05cm}p{0.05cm}p{0.05cm}} \hline & &\multicolumn{3}{|c}{Zero-shot Learning} & \multicolumn{9}{|c}{Generalized Zero-shot Learning}\\ \hline & & \multicolumn{1}{|c}{AWA2} & \multicolumn{1}{|c}{CUB} & \multicolumn{1}{|c|}{SUN} & \multicolumn{3}{|c}{AWA2} & \multicolumn{3}{|c}{CUB} &\multicolumn{3}{|c}{SUN} \\ \hline Approach & Refer & \multicolumn{1}{|c}{T1} & \multicolumn{1}{|c}{T1} & \multicolumn{1}{|c|}{T1} & \multicolumn{1}{|c}{U} & \multicolumn{1}{c}{S} & \multicolumn{1}{c|}{H} & \multicolumn{1}{|c}{U} & \multicolumn{1}{c}{S} & \multicolumn{1}{c|}{H} & \multicolumn{1}{|c}{U} & \multicolumn{1}{c}{S} & \multicolumn{1}{c}{H} \\ \hline \multicolumn{14}{c}{Embedding approaches}\\ \hline AREN~\cite{xie2019attentive} & CVPR19 & \multicolumn{1}{|c|}{66.9} & \multicolumn{1}{|c|}{72.5} & \multicolumn{1}{|c|}{60.6} & \multicolumn{1}{|c}{54.7} & \multicolumn{1}{c}{79.1} & \multicolumn{1}{c|}{64.7} & \multicolumn{1}{|c}{63.2} & \multicolumn{1}{c}{69.0} & \multicolumn{1}{c|}{66.0} & \multicolumn{1}{|c}{40.3} & \multicolumn{1}{c}{32.3} & \multicolumn{1}{c}{35.9}\\ DUET~\cite{jia2019deep} & TIP19 & \multicolumn{1}{|c|}{\textcolor{red}{\textbf{72.6}}} & \multicolumn{1}{|c|}{72.4} & \multicolumn{1}{|c|}{-} & \multicolumn{1}{|c}{48.2} & \multicolumn{1}{c}{\textcolor{red}{\textbf{90.2}}} & \multicolumn{1}{c|}{63.4} & \multicolumn{1}{|c}{39.7} & \multicolumn{1}{c}{80.1} & \multicolumn{1}{c|}{53.1} & \multicolumn{1}{|c}{-} & \multicolumn{1}{c}{-} & \multicolumn{1}{c}{-} \\ SGMA~\cite{zhu2019semantic}& NeurIPS19 & \multicolumn{1}{|c|}{68.8} & \multicolumn{1}{|c|}{71.0} & \multicolumn{1}{|c|}{-} & \multicolumn{1}{|c}{37.6} & \multicolumn{1}{c}{\textcolor{blue}{\textbf{87.1}}} & \multicolumn{1}{c|}{52.5} & \multicolumn{1}{|c}{36.7} & \multicolumn{1}{c}{71.3} & \multicolumn{1}{c|}{48.5} & \multicolumn{1}{|c}{-} & \multicolumn{1}{c}{-} & \multicolumn{1}{c}{-} \\ APN & NeurIPS20 & \multicolumn{1}{|c|}{68.4} & \multicolumn{1}{|c|}{72.0} & \multicolumn{1}{|c|}{61.6} & \multicolumn{1}{|c}{56.5} & \multicolumn{1}{c}{78.0} & \multicolumn{1}{c|}{65.5} & \multicolumn{1}{|c}{65.3} & \multicolumn{1}{c}{69.3} & \multicolumn{1}{c|}{67.2} & \multicolumn{1}{|c}{41.9} & \multicolumn{1}{c}{34.0} & \multicolumn{1}{c}{37.6}\\ GEMZSL & CVPR21 & \multicolumn{1}{|c|}{67.3} & \multicolumn{1}{|c|}{77.8} & \multicolumn{1}{|c|}{62.8} & \multicolumn{1}{|c}{64.8} & \multicolumn{1}{c}{77.5} & \multicolumn{1}{c|}{70.6} & \multicolumn{1}{|c}{64.8} & \multicolumn{1}{c}{\textcolor{red}{\textbf{77.1}}} & \multicolumn{1}{c|}{70.4} & \multicolumn{1}{|c}{38.1} & \multicolumn{1}{c}{\textcolor{blue}{\textbf{35.7}}} & \multicolumn{1}{c}{36.9} \\ LSG~\cite{xu2021semi}& TIP21 & \multicolumn{1}{|c|}{61.1} & \multicolumn{1}{|c|}{52.9} & \multicolumn{1}{|c|}{53.4} & \multicolumn{1}{|c}{60.4} & \multicolumn{1}{c}{84.9} & \multicolumn{1}{c|}{70.6} & \multicolumn{1}{|c}{49.6} & \multicolumn{1}{c}{50.4} & \multicolumn{1}{c|}{50.0} & \multicolumn{1}{|c}{\textcolor{red}{\textbf{52.8}}} & \multicolumn{1}{c}{23.1} & \multicolumn{1}{c}{32.2} \\ TransZero~\cite{chen2022transzero} & AAAI22 & \multicolumn{1}{|c|}{70.1} & \multicolumn{1}{|c|}{76.8} & \multicolumn{1}{|c|}{\textcolor{red}{\textbf{65.6}}} & \multicolumn{1}{|c}{61.3} & \multicolumn{1}{c}{82.3} & \multicolumn{1}{c|}{70.2} & \multicolumn{1}{|c}{69.3} & \multicolumn{1}{c}{68.3} & \multicolumn{1}{c|}{68.8} & \multicolumn{1}{|c}{\textcolor{blue}{\textbf{52.6}}} & \multicolumn{1}{c}{33.4} &
\multicolumn{1}{c}{\textcolor{red}{\textbf{40.8}}} \\ \hline APN$^\star$~\cite{liu2021goal}~\cite{xu2020attribute}& NeurIPS20 & \multicolumn{1}{|c|}{68.2} & \multicolumn{1}{|c|}{71.9} & \multicolumn{1}{|c|}{61.0} & \multicolumn{1}{|c}{59.8} & \multicolumn{1}{c}{75.1} & \multicolumn{1}{c|}{66.6} & \multicolumn{1}{|c}{64.4} & \multicolumn{1}{c}{67.8} & \multicolumn{1}{c|}{66.0} & \multicolumn{1}{|c}{41.1} & \multicolumn{1}{c}{34.0} & \multicolumn{1}{c}{37.2} \\ ~~~~+Balanced MSE& CVPR22 &\multicolumn{1}{|c|}{68.1} & \multicolumn{1}{|c|}{68.5} & \multicolumn{1}{|c|}{60.6} & \multicolumn{1}{|c}{58.3} & \multicolumn{1}{c}{78.9} & \multicolumn{1}{c|}{67.1} & \multicolumn{1}{|c}{57.0} & \multicolumn{1}{c}{65.5} & \multicolumn{1}{c|}{60.9} & \multicolumn{1}{|c}{41.3} & \multicolumn{1}{c}{34.3} & \multicolumn{1}{c}{37.4} \\ ~~~~\textbf{+ReMSE}& \textbf{Ours} &\multicolumn{1}{|c|}{68.3} & \multicolumn{1}{|c|}{72.1} & \multicolumn{1}{|c|}{61.5} & \multicolumn{1}{|c}{63.2} & \multicolumn{1}{c}{74.9} & \multicolumn{1}{c|}{68.5} & \multicolumn{1}{|c}{67.8} & \multicolumn{1}{c}{64.7} & \multicolumn{1}{c|}{66.2} & \multicolumn{1}{|c}{42.9} & \multicolumn{1}{c}{33.7} & \multicolumn{1}{c}{37.7} \\ \hline GEMZSL$^\star$~\cite{liu2021goal} & CVPR21 &\multicolumn{1}{|c|}{65.7} & \multicolumn{1}{|c|}{75.8} & \multicolumn{1}{|c|}{62.2} & \multicolumn{1}{|c}{\textcolor{blue}{\textbf{62.0}}} & \multicolumn{1}{c}{79.9} & \multicolumn{1}{c|}{69.8} & \multicolumn{1}{|c}{69.9} & \multicolumn{1}{c}{73.2} & \multicolumn{1}{c|}{71.5} & \multicolumn{1}{|c}{37.3} & \multicolumn{1}{c}{\textcolor{red}{\textbf{37.9}}} & \multicolumn{1}{c}{37.6} \\ ~~~~+Balanced MSE& CVPR22 &\multicolumn{1}{|c|}{65.3} & \multicolumn{1}{|c|}{75.3} & \multicolumn{1}{|c|}{61.7} & \multicolumn{1}{|c}{60.7} & \multicolumn{1}{c}{81.2} & \multicolumn{1}{c|}{69.5} & \multicolumn{1}{|c}{67.1} & \multicolumn{1}{c}{\textcolor{blue}{\textbf{75.5}}} & \multicolumn{1}{c|}{71.1} & \multicolumn{1}{|c}{46.3} & \multicolumn{1}{c}{30.9} & \multicolumn{1}{c}{37.1} \\ ~~~~\textbf{+ReMSE} & \textbf{Ours} & \multicolumn{1}{|c|}{66.1} & \multicolumn{1}{|c|}{76.6} & \multicolumn{1}{|c|}{63.1} & \multicolumn{1}{|c}{61.4} & \multicolumn{1}{c}{81.9} & \multicolumn{1}{c|}{70.2} & \multicolumn{1}{|c}{69.0} & \multicolumn{1}{c}{75.2} & \multicolumn{1}{c|}{72.0} & \multicolumn{1}{|c}{48.8} & \multicolumn{1}{c}{33.6} & \multicolumn{1}{c}{39.8} \\ \hline \textbf{AttentionNet} & \textbf{Ours} & \multicolumn{1}{|c|}{69.3} & \multicolumn{1}{|c|}{\textcolor{blue}{\textbf{80.2}}} & \multicolumn{1}{|c|}{62.8} & \multicolumn{1}{|c}{\textcolor{red}{\textbf{63.8}}} & \multicolumn{1}{c}{84.6} & \multicolumn{1}{c|}{\textcolor{blue}{\textbf{72.8}}} & \multicolumn{1}{c}{\textcolor{blue}{\textbf{71.9}}} & \multicolumn{1}{c}{74.6} & \multicolumn{1}{c|}{\textcolor{blue}{\textbf{72.9}}} & \multicolumn{1}{|c}{47.1} & \multicolumn{1}{c}{32.8} & \multicolumn{1}{c}{38.6} \\ ~~~~+Balanced MSE& CVPR22 & \multicolumn{1}{|c|}{66.4} & \multicolumn{1}{|c|}{79.6} & \multicolumn{1}{|c|}{62.4} & \multicolumn{1}{|c}{59.9} & \multicolumn{1}{c}{83.4} & \multicolumn{1}{c|}{69.7} & \multicolumn{1}{c}{70.7} & \multicolumn{1}{c}{74.9} & \multicolumn{1}{c|}{72.7} & \multicolumn{1}{|c}{47.9} & \multicolumn{1}{c}{33.7} & \multicolumn{1}{c}{39.6} \\ ~~~~\textbf{+ReMSE}& \textbf{Ours} & \multicolumn{1}{|c|}{\textcolor{blue}{\textbf{70.9}}} & \multicolumn{1}{|c|}{\textcolor{red}{\textbf{80.9}}} & \multicolumn{1}{|c|}{\textcolor{blue}{\textbf{63.2}}} & \multicolumn{1}{|c}{\textcolor{red}{\textbf{63.8}}} & 
\multicolumn{1}{c}{85.6} & \multicolumn{1}{c|}{\textcolor{red}{\textbf{73.1}}} & \multicolumn{1}{c}{\textcolor{red}{\textbf{72.8}}} & \multicolumn{1}{c}{74.8} & \multicolumn{1}{c|}{\textcolor{red}{\textbf{73.8}}} & \multicolumn{1}{|c}{47.4} & \multicolumn{1}{c}{34.8} & \multicolumn{1}{c}{\textcolor{blue}{\textbf{40.1}}} \\ \hline \rowcolor{gray!14} \multicolumn{14}{c}{Generative approaches}\\ \hline \rowcolor{gray!10} f-CLSWGAN~\cite{xian2018feature}& CVPR18 & \multicolumn{1}{|c|}{-} & \multicolumn{1}{|c|}{57.3} & \multicolumn{1}{|c|}{60.8} & \multicolumn{1}{|c}{56.1} & \multicolumn{1}{c}{65.5} & \multicolumn{1}{c|}{60.4} & \multicolumn{1}{|c}{43.7} & \multicolumn{1}{c}{57.7} & \multicolumn{1}{c|}{49.7} & \multicolumn{1}{|c}{42.6} & \multicolumn{1}{c}{36.6} & \multicolumn{1}{c}{39.4}\\ \rowcolor{gray!10} DCRGAN~\cite{ye2021disentangling} & TMM21 & \multicolumn{1}{|c|}{-} & \multicolumn{1}{|c|}{61.0} & \multicolumn{1}{|c|}{63.7} & \multicolumn{1}{|c}{-} & \multicolumn{1}{c}{-} & \multicolumn{1}{c|}{-} & \multicolumn{1}{|c}{55.8} & \multicolumn{1}{c}{66.8} & \multicolumn{1}{c|}{60.8} & \multicolumn{1}{|c}{47.1} & \multicolumn{1}{c}{38.5} & \multicolumn{1}{c}{42.4} \\ \rowcolor{gray!10} HSVA~\cite{chen2021hsva} & NeurIPS21 & \multicolumn{1}{|c|}{-} & \multicolumn{1}{|c|}{62.8} & \multicolumn{1}{|c|}{63.8} & \multicolumn{1}{|c}{56.7} & \multicolumn{1}{c}{79.8} & \multicolumn{1}{c|}{66.3} & \multicolumn{1}{|c}{52.7} & \multicolumn{1}{c}{58.3} & \multicolumn{1}{c|}{55.3} & \multicolumn{1}{|c}{48.6} & \multicolumn{1}{c}{39.0} & \multicolumn{1}{c}{43.3} \\ \rowcolor{gray!10} CE-GZSL~\cite{han2021contrastive} & CVPR21& \multicolumn{1}{|c|}{70.4} & \multicolumn{1}{|c|}{77.5} & \multicolumn{1}{|c|}{63.3} & \multicolumn{1}{|c}{63.1} & \multicolumn{1}{c}{78.6} & \multicolumn{1}{c|}{70.0} & \multicolumn{1}{|c}{63.9} & \multicolumn{1}{c}{66.8} & \multicolumn{1}{c|}{65.3} & \multicolumn{1}{|c}{48.8} & \multicolumn{1}{c}{38.6} & \multicolumn{1}{c}{43.1} \\ \hline \end{tabular} \label{table:OverallComparison} \end{table*} \begin{table}[htbp] \centering \caption{AUSUC for GZSL.
A larger value means a better trade-off between seen and unseen accuracy.} \begin{tabular}{p{0.1cm}|p{0.1cm}|p{0.1cm}|p{0.1cm}|} \hline & \multicolumn{1}{c|}{AWA2} & \multicolumn{1}{|c|}{CUB} & \multicolumn{1}{|c}{SUN}\\ \hline \multicolumn{1}{c|}{APN} & \multicolumn{1}{|c|}{0.5784} & \multicolumn{1}{|c|}{0.5545} & \multicolumn{1}{|c}{0.2056}\\ \multicolumn{1}{c|}{+Balanced MSE} & \multicolumn{1}{|c|}{0.5825} & \multicolumn{1}{|c|}{0.4973} & \multicolumn{1}{|c}{0.2064}\\ \multicolumn{1}{c|}{\textbf{+ReMSE (Ours)}} & \multicolumn{1}{|c|}{\textbf{0.5840}} & \multicolumn{1}{|c|}{\textbf{0.5552}} & \multicolumn{1}{|c}{\textbf{0.2114}}\\ \hline \multicolumn{1}{c|}{GEMZSL} & \multicolumn{1}{|c|}{0.5823} & \multicolumn{1}{|c|}{0.6178} & \multicolumn{1}{|c}{0.2211}\\ \multicolumn{1}{c|}{+Balanced MSE} & \multicolumn{1}{|c|}{0.6017} & \multicolumn{1}{|c|}{0.6137} & \multicolumn{1}{|c}{0.1991}\\ \multicolumn{1}{c|}{\textbf{+ReMSE (Ours)}} & \multicolumn{1}{|c|}{\textbf{0.6067}} & \multicolumn{1}{|c|}{\textbf{0.6230}} & \multicolumn{1}{|c}{\textbf{0.2275}}\\ \hline \multicolumn{1}{c|}{{AttentionNet}} & \multicolumn{1}{|c|}{0.6275} & \multicolumn{1}{|c|}{0.6390} & \multicolumn{1}{|c}{0.2145}\\ \multicolumn{1}{c|}{+Balanced MSE} & \multicolumn{1}{|c|}{0.6007} & \multicolumn{1}{|c|}{0.6373} & \multicolumn{1}{|c}{0.2192}\\ \multicolumn{1}{c|}{\textbf{+ReMSE (Ours)}} & \multicolumn{1}{|c|}{\textbf{0.6476}} & \multicolumn{1}{|c|}{\textbf{0.6516}} & \multicolumn{1}{|c}{\textbf{0.2338}}\\ \hline \end{tabular} \label{table:AUSUC} \end{table} \begin{table*}[htbp] \centering \caption{Results of ablation analysis in the setting of ZSL and GZSL. AB and GB denote the attention branch and global branch, respectively. In ZSL, T1 represents the top-1 accuracy (\%) for unseen classes. In GZSL, $U$, $S$ and $H$ represent the top-1 accuracy (\%) of unseen classes, seen classes, and their harmonic mean, respectively. 
} \begin{tabular}{p{2.5cm}|p{1.7cm}|p{0.04cm}p{0.04cm}p{0.04cm}|p{0.04cm}p{0.04cm}p{0.04cm}p{0.04cm}p{0.04cm}p{0.04cm}p{0.04cm}p{0.04cm}p{0.04cm}} \hline & &\multicolumn{3}{|c}{Zero-shot Learning} & \multicolumn{9}{|c}{Generalized Zero-shot Learning}\\ \hline & & \multicolumn{1}{|c}{AWA2} & \multicolumn{1}{|c}{CUB} & \multicolumn{1}{|c|}{SUN} & \multicolumn{3}{|c}{AWA2} & \multicolumn{3}{|c}{CUB} &\multicolumn{3}{|c}{SUN} \\ \hline Structure & Loss & \multicolumn{1}{|c}{T1} & \multicolumn{1}{|c}{T1} & \multicolumn{1}{|c|}{T1} & \multicolumn{1}{|c}{U} & \multicolumn{1}{c}{S} & \multicolumn{1}{c|}{H} & \multicolumn{1}{|c}{U} & \multicolumn{1}{c}{S} & \multicolumn{1}{c|}{H} & \multicolumn{1}{|c}{U} & \multicolumn{1}{c}{S} & \multicolumn{1}{c}{H} \\ \hline AttentionNet w/o AB & $\mathcal{L}_{SCE}$ & \multicolumn{1}{|c|}{67.0} & \multicolumn{1}{|c|}{74.8} & \multicolumn{1}{|c|}{62.7} & \multicolumn{1}{|c}{62.1} & \multicolumn{1}{c}{83.1} & \multicolumn{1}{c|}{71.1} & \multicolumn{1}{|c}{66.9} & \multicolumn{1}{c}{72.0} & \multicolumn{1}{c|}{69.3} & \multicolumn{1}{|c}{47.5} & \multicolumn{1}{c}{31.2} & \multicolumn{1}{c}{37.6}\\ AttentionNet w/o GB & $\mathcal{L}_{SCE}$ & \multicolumn{1}{|c|}{64.2} & \multicolumn{1}{|c|}{79.7} & \multicolumn{1}{|c|}{60.9} & \multicolumn{1}{|c}{58.3} & \multicolumn{1}{c}{68.5} & \multicolumn{1}{c|}{68.4} & \multicolumn{1}{|c}{71.5} & \multicolumn{1}{c}{73.9} & \multicolumn{1}{c|}{72.7} & \multicolumn{1}{|c}{46.7} & \multicolumn{1}{c}{30.6} & \multicolumn{1}{c}{37.0}\\ AttentionNet & $\mathcal{L}_{SCE}$ & \multicolumn{1}{|c|}{69.3} & \multicolumn{1}{|c|}{80.2} & \multicolumn{1}{|c|}{62.8} & \multicolumn{1}{|c}{63.8} & \multicolumn{1}{c}{84.6} & \multicolumn{1}{c|}{72.8} & \multicolumn{1}{|c}{71.2} & \multicolumn{1}{c}{74.6} & \multicolumn{1}{c|}{72.9} & \multicolumn{1}{|c}{47.1} & \multicolumn{1}{c}{32.8} & \multicolumn{1}{c}{38.6}\\ AttentionNet & $\mathcal{L}_{SCE}$+$\mathcal{L}_{ReMSE}$ & \multicolumn{1}{|c|}{70.9} & \multicolumn{1}{|c|}{80.9} & \multicolumn{1}{|c|}{63.2} & \multicolumn{1}{|c}{63.8} & \multicolumn{1}{c}{85.6} & \multicolumn{1}{c|}{73.1} & \multicolumn{1}{|c}{72.8} & \multicolumn{1}{c}{74.8} & \multicolumn{1}{c|}{73.8} & \multicolumn{1}{|c}{47.4} & \multicolumn{1}{c}{34.8} & \multicolumn{1}{c}{40.1}\\ \hline \end{tabular} \label{table:Ablation} \end{table*} \begin{figure*}[htbp] \centering \includegraphics[width=0.95\linewidth]{figures_AUSUC.pdf} \caption{Visualization of the Area Under Unseen-Seen Accuracy (AUSUC). Our ReMSE's AUSUC is mostly higher than that of the baseline model (GEMZSL).} \label{fig:AUSUC} \end{figure*} \begin{figure*}[htbp] \centering \includegraphics[width=0.95\linewidth]{figures_AverageErrorMatrix.pdf} \caption{Class-averaged semantic error matrix on the test sets of the datasets CUB and AWA2 (rescaled by a logarithmic function).} \label{fig:OverallErrorDistribution} \end{figure*} To demonstrate the effectiveness of our ReMSE, we implement various SOTA ZSL methods and evaluate our ReMSE on multiple metrics in both the ZSL and GZSL settings over three popular benchmark datasets. With extensive studies, we show that our ReMSE improves various SOTA models by a significant margin, as seen in Sec.~\ref{sec:SOTA}. We also present the imbalanced semantic regression performance in Sec.~\ref{sec:Regression}. Finally, we demonstrate the effectiveness of intra-class re-weighting and intra-semantic re-weighting in Sec.~\ref{sec:Ablation}.
\textbf{Datasets.} We conduct extensive experiments to evaluate the proposed method on three ZSL benchmarks, namely (1) the coarse-grained dataset AWA2~\cite{lampert2013attribute}, a large animal dataset composed of 37,322 images from 50 classes (40 seen and 10 unseen) with 85-dim attributes ranging from $0$ to $100$ ($-1$ denotes missing data); (2) the fine-grained bird dataset CUB~\cite{wah2011caltech}, containing 11,788 images in 200 (150 seen and 50 unseen) classes with 312 semantics ranging from 0 to 100; and (3) the fine-grained dataset SUN~\cite{patterson2012sun}, a large-scale scene dataset including 14,340 images from 717 classes (645 seen and 72 unseen) with 102 attributes ranging from 0 to 1. We divide these data into training and testing sets following~\cite{xian2018zero}, the split widely used by present methods. \textbf{Baselines \& Implementation Details.} We examine our ReMSE strategy on three ZSL baselines, i.e., APN, GEMZSL\footnote{It is worth mentioning that the model GEMZSL only utilizes the gaze embedding to build the attention maps; its ability to recognize unseen classes relies only on the semantics provided by the benchmark~\cite{xian2018zero}.}, and AttentionNet. For a fair comparison, we implement APN and GEMZSL following their original training configurations, including batch size, learning rate, sampling strategy and so on. For AttentionNet, we only use the SCE and our proposed ReMSE losses. We adopt ResNet101~\cite{he2016deep} pretrained on ImageNet1K~\cite{deng2009imagenet} as the backbone. AttentionNet is optimized with a stochastic gradient descent optimizer with a learning rate of 0.0005, momentum of 0.9, and weight decay of 0.0001. The batch size for all datasets is set to 32. All the experiments were run on an NVIDIA Quadro RTX 8000 graphics card with 48GB of memory. \textbf{Evaluation Protocols.} We adopt a variety of metrics for comparison. Specifically, for ZSL, we calculate the top-1 classification accuracy (T1) for unseen classes. For GZSL, we calculate three kinds of top-1 accuracies, namely the accuracy for unseen classes (denoted as $U$), the accuracy for seen classes ($S$), and their harmonic mean: \begin{equation} H = \frac{2 \times U \times S}{U + S}. \end{equation} Besides, for GZSL, we report the performance based on the Area Under Seen-Unseen accuracy Curve (AUSUC)~\cite{chao2016empirical}, which evaluates the degree of trade-off between $U$ and $S$. Finally, we exploit two new metrics, the mean and standard deviation of the class-averaged semantic error matrix, to evaluate the imbalance in semantic regression, i.e. how well the ZSL models fit the semantic labels and how balanced the error distribution is, respectively. \subsection{Comparison with SOTAs} \label{sec:SOTA} For the ZSL and GZSL tasks, we focus on embedding methods and compare our approach with the classical AREN (CVPR19), DUET (TIP19) and SGMA (NeurIPS19), the more recent APN (NeurIPS20), GEMZSL (CVPR21) and LSG (TIP21), and the state-of-the-art TransZero (AAAI22). We also report the performance of various generative methods, including f-CLSWGAN (CVPR18), DCRGAN (TMM21), HSVA (NeurIPS21), and CE-GZSL (CVPR21), for comprehensive reference. For imbalanced regression, we apply Balanced MSE (CVPR22) to the three embedding methods (i.e. APN, GEMZSL, and AttentionNet), which may be the first attempt at multi-label imbalanced regression in ZSL. The results are reported in Table~\ref{table:OverallComparison}. We highlight three main observations: 1) ReMSE improves the baselines consistently.
For example, on the CUB dataset for the ZSL task, our ReMSE endows the vanilla APN, GEMZSL and AttentionNet with 0.2\%, 0.8\%, and 0.7\% performance gains, respectively, confirming that ReMSE can effectively learn a better visual-semantic mapping. 2) On the CUB dataset, ReMSE combined with AttentionNet achieves the highest score by a considerable gap, i.e. at least 4.1\% higher than all the other SOTAs for ZSL (ours $80.9\%$ vs. TransZero $76.8\%$), and at least $2.3\%$ (w.r.t. H) higher for GZSL (ours $73.8\%$ vs. GEMZSL $71.5\%$). 3) Balanced MSE may perform unstably. In some cases, it brings improvements (e.g. when it is integrated into APN for GZSL on SUN and AWA2), but in other cases, it may degrade the performance. Likewise, comparisons on the AUSUC metric (as seen in Table~\ref{table:AUSUC}) also validate that our ReMSE consistently improves all the models, again demonstrating the advantage of the rebalancing strategy. A visualization of the Area Under Unseen-Seen Accuracy (AUSUC), which evaluates the ability of ZSL to trade off unseen accuracy against seen accuracy, is shown in Fig.~\ref{fig:AUSUC}. We can see that our ReMSE's AUSUC is mostly higher than that of the baseline model (GEMZSL). \begin{figure*}[htbp] \centering \includegraphics[width=0.95\linewidth]{figures_ErrorCurve.pdf} \caption{Mean and standard deviation of the error distributions on the test set. ReMSE leads to significant drops in both the mean and standard deviation of the error distributions.} \label{fig:ErrorCurve} \end{figure*} \begin{figure*}[htbp] \centering \includegraphics[width=0.95\linewidth]{figures_PCCCurve} \caption{Visualization of the Pearson Correlation Coefficient (PCC) between semantic prediction errors and semantic label values during training. Compared with the baseline model, ReMSE makes the PCC drop significantly, showing that our method greatly reduces the correlation between semantic prediction errors and semantic label values.} \label{fig:PCCCurve} \end{figure*} \subsection{Validation of Rebalancing Property} \label{sec:Regression} To validate that our approach can indeed rebalance errors across different classes and different semantics, we conduct several additional experiments. First, we visualize the variations of the error distribution on the testing set of CUB in Fig.~\ref{fig:OverallErrorDistribution}. Darker red means more errors, while darker blue means fewer errors. We can see that with ReMSE, the distribution of prediction errors changes from darker red to blue overall. This clearly shows that our re-weighting can effectively suppress prediction errors without negatively affecting other well-fitting semantic regions. Second, conducting experiments on GEMZSL, we perform two quantitative comparisons of the mean and standard deviation of the error distributions, as shown in Fig.~\ref{fig:ErrorCurve}. It is evident that once ReMSE is applied, for both seen and unseen classes, the means and standard deviations drop significantly, implying that the errors are indeed balanced. We have also verified that the semantic predictions of most existing models are unbalanced, and that imbalanced prediction errors are often associated with semantic label values. To demonstrate that our ReMSE can reduce such undesired correlations, we visualize the Pearson Correlation Coefficient (PCC) between semantic prediction errors and semantic label values, as shown in Fig.~\ref{fig:PCCCurve}.
We can observe that our ReMSE leads to a significant drop in the PCC compared to the baseline model, which indicates that our ReMSE can indeed greatly reduce the linear relationship between semantic prediction errors and semantic label values, thereby alleviating the imbalanced semantic prediction issue. \begin{figure*}[htbp] \centering \includegraphics[width=0.95\linewidth]{figures_ablation.pdf} \caption{Effects of the re-weighting hyper-parameters $\alpha$ (class-level) and $\beta$ (semantic-level).} \label{fig:ablation} \end{figure*} \subsection{Ablation Study} \label{sec:Ablation} \subsubsection{Component analysis} \label{sec:ComAna} We conduct ablation studies to verify the effectiveness of our approach. Table~\ref{table:Ablation} shows the impact of each component. We first use the SCE loss to train models that contain only the global branch (GB) or only the attention branch (AB). Next, we fuse the two branches to form the full AttentionNet. After that, our ReMSE is added to AttentionNet. From the table, we can draw three conclusions: (1) The attention branch alone might cause some degradation; for example, the model without AB performs better than the model without GB on AWA2 and SUN. (2) The first three rows indicate that combining global and attentive features improves the expressiveness of the features and allows the model to predict semantics more accurately. (3) Remarkably, our proposed ReMSE improves the T1 of ZSL over the model trained by SCE by 1.6\% (AWA2), 0.7\% (CUB) and 0.4\% (SUN), respectively, and the harmonic mean accuracy (H) of GZSL by 0.3\% (AWA2), 0.9\% (CUB) and 1.5\% (SUN), respectively. This verifies the effectiveness of our ReMSE, which not only decreases the mean of the errors but also reduces their variance. \subsubsection{Sensitivity Analysis} \label{sec:Sensitivity} We take the SUN and CUB datasets to analyze the sensitivity of the hyper-parameters $\alpha$ and $\beta$ used in the rebalancing method. As shown in Fig.~\ref{fig:ablation}, a proper $\alpha$ or $\beta$ brings an improvement in T1 over the vanilla method. \begin{figure*}[htbp] \centering \includegraphics[width=0.9\linewidth]{figures_attention_vis1.pdf} \caption{Visualization of attention maps produced by our AttentionNet for different semantics of unseen images on the dataset CUB. The attention map has a resolution of $7 \times 7$ and is reshaped to $224 \times 224$ to match the image size.} \label{fig:attention_vis1} \end{figure*} \begin{figure*}[htbp] \centering \includegraphics[width=0.95\linewidth]{figures_attention_vis2.pdf} \caption{ The effect of our ReMSE algorithm on the attention maps. The settings are the same as in Fig.~\ref{fig:attention_vis1}, but the model produces more accurate attention maps for difficult semantics.} \label{fig:attention_vis2} \end{figure*} \subsection{Visualization of Attention} We also visualize the attention maps of our AttentionNet to qualitatively verify its effectiveness, as shown in Fig.~\ref{fig:attention_vis1}. The figure shows different attention maps according to various semantics. Obviously, our AttentionNet adaptively detects the semantic regions that are beneficial for prediction. For example, when the semantics are related to the crown, eye and bill, the attention is distributed to the heads of the birds. When the semantics are related to the wing or upperparts, the attended regions become the bodies. Moreover, we also verify the effectiveness of our ReMSE for the attention, as shown in Fig.~\ref{fig:attention_vis2}.
We can see that with the help of our ReMSE, the model corrects its out-of-focus regions. For instance, for the semantics related to bills, the baseline model incorrectly focuses on (a) the legs, (b) the throat, and (c) the chest; with the help of ReMSE, it focuses exactly on the bills. Generally, these figures illustrate that our ReMSE plays a key role in accurately predicting hard semantics. \section{Conclusion} In this work, we address the zero-shot learning problem from a brand new perspective of imbalanced learning. We propose the ReMSE strategy, which focuses on re-balancing the imbalanced error distribution across different classes and different semantics. We present a series of analyses, both theoretical and empirical, to validate the rationale of ReMSE in ZSL. Extensive experiments on three benchmark datasets show that ReMSE consistently improves the three baselines, achieving competitive performance even compared to sophisticated ZSL methods. \section*{Acknowledgments} This work was partially supported by ``Qing Lan Project'' in Jiangsu universities, National Natural Science Foundation of China under nos. 61876155 and 62106081, and Jiangsu Science and Technology Programme under no. BE2020006-4. \newpage
\section{Introduction} In the case of a synchronously-orbiting, highly-irradiated planet, if the opacity of the atmosphere is lower at optical wavelengths than at infrared wavelengths, the vertical temperature profile of the dayside hemisphere is expected to decrease with decreasing pressure close to the infrared photosphere. This is essentially because most of the incident stellar radiation is at optical wavelengths. Therefore, if the optical opacity of the atmosphere is lower than the infrared opacity, the stellar radiation will be primarily deposited below the infrared photosphere, heating the atmosphere at those higher pressures. Conversely, if the optical opacity is higher than the infrared opacity, most of the heating by the host star will occur above the infrared photosphere, resulting in a thermal inversion. As such, thermal inversions are valuable diagnostics of the radiative processes at play in a planetary atmosphere, and in particular, the relative strength of absorption at optical versus infrared wavelengths. Observationally, thermal inversions can be inferred by measuring the planetary emission spectrum and detecting opacity bands as emission rather than absorption features. The first detection of a spectrally-resolved emission feature for an exoplanet was made by \cite{2017Natur.548...58E} for WASP-121b, an ultrahot Jupiter discovered by \cite{2016MNRAS.tmp..312D}. This was done by observing a secondary eclipse with the \textit{Hubble Space Telescope} (HST) Wide Field Camera 3 (WFC3). The resulting dayside spectrum derived from these data revealed an H$_2$O emission band spanning the $\sim 1.3$-$1.6\,\mu\textnormal{m}$ wavelength range, providing strong evidence for a dayside thermal inversion \citep{2017Natur.548...58E}. Additional observations made with the \textit{Spitzer Space Telescope} \citep{2020AJ....159..137G}, ground-based photometry \citep{2016MNRAS.tmp..312D,2019A&A...625A..80K}, HST \citep{2019MNRAS.488.2222M}, and the \textit{Transiting Exoplanet Survey Satellite} (TESS) \citep{2019arXiv190903010B,2019arXiv190903000D} have since extended the wavelength coverage of the WASP-121b emission spectrum considerably. In addition to the H$_2$O emission band, this combined dataset shows evidence for H$^{-}$ and CO emission, with retrieval analyses inferring a temperature profile that increases from $\sim 2500$\,K to $\sim 2800$\,K across the $\sim 30$ to $5$\,mbar pressure range \citep{2019MNRAS.488.2222M}. It has not yet been possible to identify optical opacity source(s) in the dayside atmosphere of WASP-121b and definitively link them to the thermal inversion. Early studies focused on the strong optical absorbers TiO and VO as likely candidates for generating thermal inversions in highly-irradiated atmospheres, in which the temperatures are high enough ($\gtrsim 2000$\,K) for these species to be in the gas phase \citep{2003ApJ...594.1011H,2008ApJ...678.1419F}. Indeed, evidence for VO absorption has been uncovered in the transmission spectrum of WASP-121b, but not TiO \citep{2018AJ....156..283E}. However, recent theoretical work has highlighted that for ultrahot Jupiters with temperatures $\gtrsim 2700$\,K such as WASP-121b, much of the TiO and VO will likely be thermally dissociated on the dayside hemisphere, reducing their potency as thermal inversion drivers \citep{2018A&A...617A.110P,2018ApJ...866...27L}.
Instead, for these hottest planets, thermal inversions may be generated by heavy metals in the gas phase, such as Fe and Mg, which have strong absorption lines in the near-ultraviolet and optical \citep{2018ApJ...866...27L}. Statistically significant detections of FeI, FeII, and MgII have been made in the transmission spectrum of WASP-121b \citep{2019AJ....158...91S,2020MNRAS.tmp..220G,2020arXiv200106836B,2020arXiv200107196C}, supporting this hypothesis. Other near-ultraviolet/optical absorbers such as NaH, MgH, FeH, SiO, AlO, and CaO have also been suggested \citep{2018ApJ...866...27L,2018A&A...617A.110P,2019MNRAS.485.5817G}, but no evidence has been uncovered for their presence in the atmosphere of WASP-121b to date. This paper presents follow-up secondary eclipse observations for WASP-121b made with HST WFC3 that allow us to refine the dayside emission spectrum across the $1.12$-$1.64\,\mu\textnormal{m}$ wavelength range. In Section \ref{sec:datared} we describe the observations and data reduction procedures, followed by the light curve fitting methodology in Section \ref{sec:lcfitting}. We discuss the results in Section \ref{sec:discussion} and give our conclusions in Section \ref{sec:conclusion}. \section{Observations and data reduction} \label{sec:datared} Two full-orbit phase curves of WASP-121b were observed with HST/WFC3 on 2018 March 12-13 and 2019 February 3-4 (G.O.\ 15134; P.I.s\ Mikal-Evans \& Kataria). For both visits, the target was observed for approximately 40.3 hours over 26 contiguous HST orbits. Each visit was scheduled to include two consecutive secondary eclipses. Here, we present an analysis of the four secondary eclipses acquired in this manner. The full phase curve will be presented in a future publication (Mikal-Evans et al., \textit{in prep}). A similar observing setup to that used previously in \cite{2016ApJ...822L...4E,2017Natur.548...58E} was adopted. Observations for both visits used the G141 grism, which encompasses the $1.12$-$1.64\,\mu\textnormal{m}$ wavelength range with a spectral resolving power of $R \sim 130$ at $\lambda = 1.4\,\mu\textnormal{m}$. The forward spatial-scanning mode was used and only a $256 \times 256$ subarray was read out from the detector with the SPARS10 sampling sequence and 15 non-destructive reads per exposure ($\textnormal{NSAMP}=15$), corresponding to exposure times of 103\,s. The only difference between \cite{2016ApJ...822L...4E,2017Natur.548...58E} and the current observing setup was that a slower spatial scan rate of $0.073$\,arcsec\,s$^{-1}$ was used, compared to $0.120$\,arcsec\,s$^{-1}$ for the earlier observations. This resulted in shorter scans across approximately 60 pixel-rows of the cross-dispersion axis, leaving more space on the detector for background estimation. With this setup, we obtained 15 exposures in the first HST orbit following acquisition and 16 exposures in each subsequent HST orbit. Typical peak frame counts were $\sim 37,000$ electrons per pixel for both visits, which is within the recommended range derived from an ensemble analysis of WFC3 spatial-scan data spanning eight years \citep{2019wfc..rept...12S}. Spectra were extracted from the raw data frames using a custom-built Python pipeline, which has been described previously \citep{2016ApJ...822L...4E,2017Natur.548...58E,2019MNRAS.488.2222M} and is similar to others employed in the field \citep[e.g.][]{2014Natur.505...66K,2014Natur.505...69K,2017Sci...356..628W,2018AJ....155...29W,2018MNRAS.474.1705N}. 
For each exposure, we took the difference between successive non-destructive reads and applied a 50-pixel-wide top-hat filter along the cross-dispersion axis, before summing to produce final reconstructed images. The top-hat filter applied in this way has the effect of removing contamination from nearby sources and most cosmic ray strikes on the detector. The target spectrum was then extracted from each image by integrating the flux within a rectangular aperture spanning the full dispersion axis and 100 pixels along the cross-dispersion axis, centered on the central cross-dispersion row of the scan. Background fluxes were assumed to be wavelength-independent and subtracted from each spectrum. These were estimated by taking the median pixel count within a $10 \times 170$ pixel box located away from the target spectrum on the 2D reconstructed image, with typical background levels integrated over the full $103\,$s exposures starting at $\sim 150$\,electrons\,pixel$^{-1}$ and dropping to $\sim 110$\,electrons\,pixel$^{-1}$ over each HST orbit. The wavelength solution was determined by cross-correlating the final spectrum of each visit against a model stellar spectrum, as described in \cite{2016ApJ...822L...4E}. \section{Light Curve Fitting} \label{sec:lcfitting} \begin{figure} \centering \includegraphics[width=\columnwidth]{{fig1}.pdf} \caption{\textit{(Top panel)} Phase-folded white light curve with eclipse model after removing systematics components of a joint fit to all five eclipses. \textit{(Bottom panel)} Model residuals.} \label{fig:whitelc} \end{figure} White light curves were produced by summing the flux of each spectrum across the full wavelength range. We then fit the resulting light curves using the methodology described in \cite{2019MNRAS.488.2222M}, using a Gaussian process (GP) model to account for instrumental systematics. In addition to the four new eclipses presented in this study, we also analyzed an eclipse acquired as part of an earlier HST program (GO-14767; P.I.s\ Sing \& Lopez-Morales) that was originally published in \cite{2017Natur.548...58E}. We refer to the latter eclipse as G141v0ec0; the two eclipses observed as part of the 2018 visit as G141v1ec1 and G141v1ec2; and the two eclipses observed as part of the 2019 visit as G141v2ec3 and G141v2ec4. All five eclipses were fit simultaneously, with separate systematics models and eclipse mid-times ($T_{\textnormal{mid}}$) for each dataset, and a shared eclipse depth. We set the orbital period equal to 1.2749247646 day \citep{2019AJ....158...91S}, and the normalized semimajor axis ($a/\Rs$) and impact parameter ($b$) were fixed to the same values adopted in \cite{2019MNRAS.488.2222M}: namely, $a/\Rs=3.86$ and $b=0.06$. The resulting light curve fit is shown in Figure \ref{fig:whitelc} and the inferred eclipse parameters are reported in Table \ref{table:whitefit}. \begin{table} \begin{minipage}{\columnwidth} \centering \caption{MCMC results for the joint fit to all five eclipse white light curves. Quoted values are the posterior medians and uncertainties give the $\pm 34$\% credible intervals about the median. 
\label{table:whitefit}} \begin{tabular}{ccc} \hline \\ Parameter & Dataset & Value \medskip \\ \cline{1-3} && \\ Eclipse depth (ppm) & All & $1150_{-19}^{+21}$ \\ $T_{\textnormal{mid}}$ (JD$_{\textnormal{UTC}}$) & G141v0ec0 & $2457703.45707_{-0.00081}^{+0.00077}$ \smallskip \\ & G141v1ec1 & $2458190.47621_{-0.00115}^{+0.00142}$ \smallskip \\ & G141v1ec2 & $2458191.75090_{-0.00055}^{+0.00049}$ \smallskip \\ & G141v2ec3 & $2458518.13064_{-0.00073}^{+0.00073}$ \smallskip \\ & G141v2ec4 & $2458519.40663_{-0.00056}^{+0.00058}$ \\ \\ \hline \end{tabular} \end{minipage} \end{table} \begin{figure} \centering \includegraphics[width=\columnwidth]{{fig2}.pdf} \caption{The same as Figure \ref{fig:whitelc}, but for an example spectroscopic light curve spanning $1.231$-$1.249\,\mu\textnormal{m}$ in wavelength.} \label{fig:speclc} \end{figure} Next, we generated spectroscopic light curves in 28 wavelength channels, using the method described in \cite{2019MNRAS.488.2222M}. Each of these light curves was then fit with the same method used for the white light curve fit, but with the eclipse mid-times held fixed to the best-fit values obtained from the latter. An example light curve fit is shown in Figure \ref{fig:speclc}. For all datasets and spectroscopic light curves, the residual scatter was consistent with being photon noise limited. Inferred eclipse depths are reported in Table \ref{table:specfits}. \begin{table} \begin{minipage}{\columnwidth} \centering \caption{Eclipse depths inferred for each spectroscopic channel, quoted as median and $\pm 34$\% credible intervals from the MCMC fits. \label{table:specfits}} \begin{tabular}{cc} \hline \\ Wavelength ($\mu\textnormal{m}$) & Eclipse depth (ppm) \medskip \\ \cline{1-2} & \\ $1.120$-$1.138$ & $903_{-52}^{+53}$ \smallskip \\ $1.138$-$1.157$ & $991_{-60}^{+59}$ \smallskip \\ $1.157$-$1.175$ & $1002_{-56}^{+58}$ \smallskip \\ $1.175$-$1.194$ & $1029_{-49}^{+50}$ \smallskip \\ $1.194$-$1.212$ & $1066_{-58}^{+58}$ \smallskip \\ $1.212$-$1.231$ & $983_{-57}^{+54}$ \smallskip \\ $1.231$-$1.249$ & $1031_{-51}^{+48}$ \smallskip \\ $1.249$-$1.268$ & $1015_{-50}^{+55}$ \smallskip \\ $1.268$-$1.286$ & $994_{-48}^{+49}$ \smallskip \\ $1.286$-$1.305$ & $1028_{-53}^{+55}$ \smallskip \\ $1.305$-$1.323$ & $1008_{-60}^{+55}$ \smallskip \\ $1.323$-$1.342$ & $1077_{-56}^{+55}$ \smallskip \\ $1.342$-$1.360$ & $1160_{-52}^{+51}$ \smallskip \\ $1.360$-$1.379$ & $1110_{-57}^{+61}$ \smallskip \\ $1.379$-$1.397$ & $1262_{-59}^{+57}$ \smallskip \\ $1.397$-$1.416$ & $1360_{-56}^{+57}$ \smallskip \\ $1.416$-$1.434$ & $1193_{-56}^{+54}$ \smallskip \\ $1.434$-$1.453$ & $1304_{-54}^{+51}$ \smallskip \\ $1.453$-$1.471$ & $1331_{-63}^{+58}$ \smallskip \\ $1.471$-$1.490$ & $1342_{-57}^{+57}$ \smallskip \\ $1.490$-$1.508$ & $1304_{-60}^{+62}$ \smallskip \\ $1.508$-$1.527$ & $1276_{-62}^{+62}$ \smallskip \\ $1.527$-$1.545$ & $1210_{-65}^{+66}$ \smallskip \\ $1.545$-$1.564$ & $1307_{-60}^{+62}$ \smallskip \\ $1.564$-$1.582$ & $1388_{-66}^{+63}$ \smallskip \\ $1.582$-$1.601$ & $1299_{-69}^{+69}$ \smallskip \\ $1.601$-$1.619$ & $1270_{-63}^{+64}$ \smallskip \\ $1.619$-$1.638$ & $1286_{-68}^{+68}$ \\ \\ \hline \end{tabular} \end{minipage} \end{table} \section{Discussion} \label{sec:discussion} \begin{figure*} \centering \includegraphics[width=0.8\linewidth]{{fig3}.pdf} \caption{\textit{(a)} Published eclipse depth measurements for WASP-121b across the red optical and near-infrared wavelength range covered by the TESS and HST WFC3 passbands.
Note in particular the improved WFC3 G141 signal-to-noise achieved for the present study with five eclipse observations, compared to the original data presented in \citet{2017Natur.548...58E} for a single eclipse observation. \textit{(b)} Corresponding planetary emission extending out to longer wavelengths including the \textit{Spitzer} IRAC passbands. The errorbars for the IRAC measurements are not much larger than the marker symbols on this vertical scale. In both panels, the dark yellow line shows the spectrum assuming the planet radiates as a blackbody with a best-fit temperature of 2700\,K and the pale yellow envelope indicates blackbody spectra for temperatures of 2330\,K and 2970\,K. The latter encompass a plausible range of emission under limiting assumptions for the albedo and day-night heat recirculation. As labeled in panel \textit{(a)}, other solid lines show best-fit models obtained for the three retrieval analyses described in the main text, which all incorporate the updated WFC3 G141 spectrum, along with that obtained for our previous retrieval analysis published in \citet{2019MNRAS.488.2222M}. Spectral emission features due to H$^-$ and H$_2$O are also labeled in panel \textit{(a)}. } \label{fig:emspec} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.98\linewidth]{{fig4}.pdf} \caption{Marginalized posterior distributions for the retrieval analyses described in the main text, which all adopt the updated WFC3 G141 emission spectrum presented in this work. Good agreement is obtained for all three cases. Note the lower bound on $[\textnormal{C}/\textnormal{H}]$ remains poorly constrained, as the available data do not spectrally resolve any carbon-bearing species. The upper bound of $[\textnormal{M}/\textnormal{H}]$ also remains unconstrained, due to the imposition of a hard limit of $<2$\,dex in the retrievals reported here, but future analyses should consider relaxing this assumption.} \label{fig:posteriors} \end{figure*} \begin{table*} \begin{minipage}{\linewidth} \centering \caption{Retrieval uniform prior ranges and MCMC marginalized posterior distribution medians and $\pm$34\% credible intervals \label{table:posteriors}} \begin{tabular}{ccccccc} \hline & & & & \multicolumn{3}{c}{Fitting to updated WFC3 G141$^{\dagger}$} \\ \cline{5-7} Parameter & Unit & Allowed range$^{\S}$ & \cite{2019MNRAS.488.2222M} & Case 1 & Case 2 & Case 3$^{\star}$ \medskip \\ \cline{1-7} &&& \\ $[\textnormal{M}/\textnormal{H}]$ & dex & $-2$ to $2$ & ${1.09}_{-0.69}^{+0.57}$ & $1.57_{-0.94}^{+0.30}$ & $1.38_{-1.12}^{+0.42}$ & $1.50_{-0.75}^{+0.31}$ \smallskip \\ $[\textnormal{C}/\textnormal{H}]$ & dex & $-2$ to $2$ & ${-0.29}_{-0.48}^{+0.61}$ & $0.05_{-1.38}^{+0.96}$ & $-0.13_{-1.20}^{+0.85}$ & $0.29_{-1.29}^{+0.70}$ \smallskip \\ $[\textnormal{O}/\textnormal{H}]$ & dex & $-2$ to $2$ & ${0.18}_{-0.60}^{+0.64}$ & $0.78_{-1.22}^{+0.44}$ & $0.56_{-1.03}^{+0.49}$ & $0.71_{-0.72}^{+0.41}$ \smallskip \\ $\log_{10}(\kappa_\textnormal{IR})$ & dex\,cm$^2$\,g$^{-1}$ & $-5$ to $0.5$ & ${-3.01}_{-0.62}^{+0.56}$ & $-2.70_{-0.36}^{+0.22}$ & $-2.85_{-0.55}^{+0.27}$ & $-2.61_{-0.31}^{+0.19}$ \smallskip \\ $\log_{10}(\gamma)$ & dex & $-4$ to $1.5$ & ${0.64}_{-0.16}^{+0.19}$ & $0.73_{-0.14}^{+0.12}$ & $0.74_{-0.14}^{+0.15}$ & $0.73_{-0.10}^{+0.10}$ \smallskip \\ $\psi$ & --- & $0$ to $2$ & ${0.99}_{-0.09}^{+0.06}$ & $0.95_{-0.05}^{+0.04}$ & $0.95_{-0.06}^{+0.05}$ & $0.97_{-0.03}^{+0.03}$ \medskip \\ \cline{1-7} \multicolumn{3}{r}{Best-fit model reduced $\chi^2$} & 0.85 & 0.78 & 0.94 & 0.79 \\ \hline 
\end{tabular} \vspace{5pt} \\ \raggedright $^{\dagger}$\, \footnotesize The three cases are those described in the main text: (1) fitting to the same dataset as \cite{2019MNRAS.488.2222M}, but using the updated WFC3 G141 spectrum derived in this work; (2) also including the \cite{2019arXiv190903000D} \textit{TESS} measurement; and (3) adopting the \cite{2019arXiv190903010B} \textit{TESS} measurement instead. \\ $^{\S}$\, \footnotesize Note that the allowed range for each of $[\textnormal{M}/\textnormal{H}]$, $[\textnormal{C}/\textnormal{H}]$, and $[\textnormal{O}/\textnormal{H}]$ was $-1$ to $2$ in \cite{2019MNRAS.488.2222M}. \\ $^{\star}$\, \footnotesize Our favored analysis, owing to it providing the tightest model parameter constraints and achieving the best fit quality (as quantified by the reduced $\chi^2$), while including all available data. \end{minipage} \end{table*} The updated WASP-121b emission spectrum is shown in Figure \ref{fig:emspec}. The new G141 data is in good overall agreement with the original spectrum presented in \cite{2017Natur.548...58E}. However, unlike the \cite{2017Natur.548...58E} spectrum, the revised G141 spectrum does not exhibit a bump at $1.25\mu\textnormal{m}$, which our previous investigations had failed to replicate with physically-plausible atmosphere models \citep{2017Natur.548...58E,2019MNRAS.488.2222M}. This suggests the $1.25\mu\textnormal{m}$ bump was either a statistical fluctuation or a systematic artefact specific to the G141v0ec0 dataset, demonstrating the benefit gained by observing multiple eclipses well separated in time. Furthermore, the median eclipse depth uncertainty across the spectroscopic channels has improved from 90\,ppm \citep{2017Natur.548...58E} to 60\,ppm, bringing the H$_2$O emission band into sharper focus across the $\sim 1.3$-$1.6\,\mu\textnormal{m}$ wavelength range. Despite the smaller uncertainties, the updated emission spectrum agrees with the best-fit retrieval model presented in \cite{2019MNRAS.488.2222M} (hereafter, ME19), reproduced in Figure \ref{fig:emspec}, which assumes equilibrium chemistry and accounts for the effects of thermal ionization and dissociation of molecules. This model has a temperature inversion, departing from a blackbody spectrum shortward of $\sim 1.3\,\mu\textnormal{m}$ due to H$^{-}$ bound-free emission, between $\sim 1.3$-$1.6\,\mu\textnormal{m}$ due to H$_2$O emission, and within the $4.5\mu\textnormal{m}$ IRAC passband due to CO emission. Remarkably, with the revised WFC3 G141 spectrum, the $\chi^2$ value has improved from $43.6$ to $35.5$ for $42$ degrees of freedom, without any further tuning of the model. In Section \ref{sec:discussion:retrieval} below, we describe updated retrieval analyses performed on the revised dataset. Phase curve measurements for WASP-121b made using TESS have also recently been reported by \cite{2019arXiv190903010B} (B19) and \cite{2019arXiv190903000D} (D19), covering the $0.6$-$1\,\mu\textnormal{m}$ red optical wavelength range. For the planet-to-star dayside emission in the TESS passband, B19 obtain $419_{-42}^{+47}$ ppm, which agrees with the ME19 best-fit retrieval model shown in Figure \ref{fig:emspec} at the $0.4\sigma$ level. D19 report a somewhat higher dayside emission value of $534_{-43}^{+42}$ ppm, which is $2.3\sigma$ above the prediction of the best-fit ME19 model. However, D19 used the same retrieval methodology as ME19 and presented a best-fit model that is consistent with their TESS data point at the $0.7\sigma$ level. 
Given the posterior distributions of the ME19 and D19 retrieval analyses are consistent to within $1\sigma$ for all free parameters (i.e. $[\textnormal{C}/\textnormal{H}]$, $[\textnormal{O}/\textnormal{H}]$, $[\textnormal{M}/\textnormal{H}]$, $\kappa_\textnormal{IR}$, $\gamma$, $\psi$), we deduce that the difference between the two available TESS analyses does not significantly affect the overall interpretation of the WASP-121b dayside spectrum. Although the best-fit models vary slightly depending on which TESS analysis is adopted, the posterior distributions are affected minimally and the conclusion that the dayside atmosphere of WASP-121b has a thermal inversion remains unchanged. This is explored further in Section \ref{sec:discussion:retrieval}. Finally, if we assume the planet radiates as an isothermal blackbody and adopt the B19 TESS data point,\footnote{Note that for these calculations, we use a planet-to-star radius ratio of $\Rp/\Rs=0.1205$, approximately corresponding to the lowest point of the \cite{2018AJ....156..283E} transmission spectrum.} we obtain a best-fit temperature of $2703 \pm 6$\,K. However, the fit to the data is poor (Figure \ref{fig:emspec}) and can be ruled out at $6.6\sigma$ confidence. If instead we adopt the D19 TESS data point, the best-fit temperature is indistinguishable ($2704 \pm 6$\,K) and can be ruled out at $7.6\sigma$ confidence. The updated G141 dataset presented here and the TESS measurements recently reported in the literature therefore reinforce the conclusion that the dayside emission of WASP-121b is strongly inconsistent with an isothermal blackbody and is instead well explained by an atmosphere model including a thermal inversion. \subsection{Retrieval analyses} \label{sec:discussion:retrieval} Using the updated WFC3 G141 emission spectrum, we repeated the retrieval analysis described in ME19 for three separate dataset combinations: (Case 1) the WFC3 G102 and G141 spectrophotometry, \textit{Spitzer} IRAC photometry \citep{2020AJ....159..137G}, and published ground-based photometry \citep{2016MNRAS.tmp..312D,2019A&A...625A..80K}; (Case 2) the same, but also including the \cite{2019arXiv190903000D} \textit{TESS} eclipse measurement; and (Case 3) the same again, but instead adopting the \cite{2019arXiv190903010B} \textit{TESS} measurement. Our retrieval framework utilizes the \texttt{ATMO} code of \cite{2015ApJ...804L..17T}, which has been further developed by \cite{2016ApJ...817L..19T,2017ApJ...841...30T,2017ApJ...850...46T,2019ApJ...876..144T}, \cite{2014A&A...564A..59A}, \cite{2016A&A...594A..69D}, \cite{2018MNRAS.474.5158G,2019MNRAS.482.4503G}, and \cite{2020arXiv200313717P}, and employed in numerous other exoplanet retrieval analyses \citep[e.g.][]{2017Natur.548...58E,2018AJ....156..283E,2018AJ....156..298A,2018Natur.557..526N,2018MNRAS.474.1705N,2020MNRAS.tmp.1223C,2017Sci...356..628W,2018AJ....155...29W}. As in ME19, the free parameters of our model were: the carbon abundance ($[\textnormal{C}/\textnormal{H}]$); oxygen abundance ($[\textnormal{O}/\textnormal{H}]$); metallicity of all other heavy elements ($[\textnormal{M}/\textnormal{H}]$); infrared opacity ($\kappa_\textnormal{IR}$); ratio of the visible-to-infrared opacity ($\gamma=\kappa_{\textnormal{V}}/\kappa_\textnormal{IR}$); and an irradiation efficiency factor ($\psi$). For additional details, refer to ME19. The best-fit spectra obtained for each retrieval are all in close agreement and plotted in Figure \ref{fig:emspec}.
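To give a concrete sense of how $\kappa_\textnormal{IR}$, $\gamma$, and $\psi$ shape the temperature profile, the following sketch evaluates a standard analytic two-stream profile (of the type introduced by Guillot 2010). We stress that this is an illustrative stand-in rather than the actual \texttt{ATMO} calculation; the default values are taken loosely from the Case 3 posterior medians, with an assumed surface gravity and irradiation temperature, and are for demonstration only:
\begin{verbatim}
import numpy as np

def analytic_T(P, kappa_IR=10**-2.6, gamma=10**0.73, psi=0.97,
               T_irr=3300.0, T_int=200.0, g=9.4):
    # Schematic Guillot (2010)-type profile. P in bar, g in m/s^2,
    # kappa_IR in cm^2/g; gamma = kappa_V / kappa_IR; psi rescales
    # the irradiation (heat redistribution / albedo). Illustrative.
    tau = kappa_IR * 1e-1 * P * 1e5 / g   # infrared optical depth
    s3 = np.sqrt(3.0)
    T_eq4 = psi * (T_irr / np.sqrt(2.0))**4
    bracket = (2.0/3.0 + 1.0/(gamma*s3)
               + (gamma/s3 - 1.0/(gamma*s3)) * np.exp(-gamma*s3*tau))
    T4 = 0.75 * T_int**4 * (2.0/3.0 + tau) + 0.75 * T_eq4 * bracket
    return T4**0.25

P = np.logspace(-5, 1, 100)   # pressure grid in bar
T = analytic_T(P)             # gamma > 1: T rises with altitude
\end{verbatim}
In this parameterization, $\gamma>1$ yields a temperature that increases with decreasing pressure, i.e. a thermal inversion, consistent with the posterior constraint $\log_{10}(\gamma)\approx 0.7$ reported below.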
The marginalized posterior distributions for the model parameters are reported in Table \ref{table:posteriors} and shown in Figure \ref{fig:posteriors}. All three retrievals performed using the updated WFC3 G141 spectrum improve the constraints on the parameters controlling the PT profile (i.e.\ $\kappa_\textnormal{IR}$, $\gamma$, $\psi$). This can be appreciated in Figure \ref{fig:pt}, which shows the PT profile distributions obtained for each retrieval analysis, along with the normalized contribution functions of the \textit{TESS}, \textit{HST}, and \textit{Spitzer} passbands. For the parameters controlling elemental abundances (i.e. $[\textnormal{M}/\textnormal{H}]$, $[\textnormal{C}/\textnormal{H}]$, $[\textnormal{O}/\textnormal{H}]$), the results are also consistent with those reported in ME19. However, in the present work, we allowed the abundances to vary between $-2$ and 2 dex, whereas the lower bounds were set to $-1$ dex in ME19. Consequently, the posterior distributions we obtain here for $[\textnormal{M}/\textnormal{H}]$, $[\textnormal{C}/\textnormal{H}]$, and $[\textnormal{O}/\textnormal{H}]$ are typically broader than those reported in ME19. Despite this, for $[\textnormal{M}/\textnormal{H}]$ and $[\textnormal{O}/\textnormal{H}]$ the upper bounds are better constrained for each of the three retrievals performed in this work (Table \ref{table:posteriors}). This is likely due to two main reasons. First, the WFC3 G141 spectrum is dominated by an H$_2$O band, but does not encompass any strong bands due to carbon-based species. Hence, improving the precision on the WFC3 G141 spectrum results in a better constraint for $[\textnormal{O}/\textnormal{H}]$ while providing little additional information for $[\textnormal{C}/\textnormal{H}]$. Second, the better constrained H$_2$O abundance provides a reference level for the H$^-$ bound-free continuum, which spans the WFC3 G102 passband and short-wavelength half of the WFC3 G141 passband (e.g.\ see Figure 1 of \citealp{2018ApJ...855L..30A} and Figure 12 of ME19). This serves to calibrate the H$^{-}$ abundance, and hence the free electron abundance of the atmosphere, which is closely linked to $[\textnormal{M}/\textnormal{H}]$ via ionized heavy elements such as Na and K. We also note that our retrieval results are overall consistent with those presented in D19. This is to be expected, as identical retrieval methodologies were employed in both studies and the same data were analyzed, with the exception of the updated WFC3 G141 spectrum adopted in this work. Additionally, our retrieval analyses are complementary to that presented in B19. In particular, the B19 study allowed chemical abundances to vary freely, whereas our retrieval analyses enforced chemical equilibrium while allowing $[\textnormal{M}/\textnormal{H}]$, $[\textnormal{C}/\textnormal{H}]$, and $[\textnormal{O}/\textnormal{H}]$ to vary. Despite these differences, visual inspection suggests the retrieved PT profile of B19 is in good agreement with those shown in Figure \ref{fig:pt}, increasing from $\sim 2200$K at 100\,mbar to $\sim 2900$\,K at 10\,mbar. \begin{figure} \centering \includegraphics[width=\linewidth]{{fig5}.pdf} \caption{(a) Retrieved PT profiles obtained for the three retrieval cases described in the main text. Solid lines indicate the median temperature at each pressure level across all PT profiles sampled by the MCMC analyses. Shaded regions indicate the temperature ranges at each pressure level encompassing $\pm$34\% of MCMC samples about the median. 
The corresponding PT distribution obtained in \citet{2019MNRAS.488.2222M} is also shown for comparison. (b) Normalized contribution functions corresponding to the best-fit model shown in Figure \ref{fig:emspec} for the Case 3 retrieval (i.e.\ including all latest available data and adopting the \citealp{2019arXiv190903010B} \textit{TESS} data point).} \label{fig:pt} \end{figure} \section{Conclusion} \label{sec:conclusion} We presented four new secondary eclipse observations of WASP-121b made with HST/WFC3 using the G141 grism, adding to the single eclipse observation previously reported in \cite{2017Natur.548...58E}. The additional data significantly increases the signal-to-noise of the measured dayside emission spectrum, with the median eclipse depth uncertainty reducing from 90\,ppm to 60\,ppm in 28 spectroscopic channels spanning the $1.12$-$1.64\,\mu\textnormal{m}$ wavelength range. The updated spectrum is in excellent agreement with the best-fit model presented in \cite{2019MNRAS.488.2222M}, exhibiting an H$_2$O emission feature in the G141 passband, muted in amplitude due to thermal dissociation. Retrieval analyses performed using the updated WFC3 G141 spectrum allow tighter constraints to be placed on the PT profile in particular. These results reinforce the conclusion of previous studies \citep{2017Natur.548...58E,2019MNRAS.488.2222M,2019arXiv190903010B,2019arXiv190903000D} that the dayside hemisphere of WASP-121b has a thermal inversion. \section*{Acknowledgements} The authors are grateful to the anonymous referee for their constructive feedback. Based on observations made with the NASA/ESA Hubble Space Telescope, obtained from the data archive at the Space Telescope Science Institute. STScI is operated by the Association of Universities for Research in Astronomy, Inc. under NASA contract NAS 5-26555. Support for this work was provided by NASA through grant number GO-15134 from the Space Telescope Science Institute, which is operated by AURA, Inc., under NASA contract NAS 5-26555. \bibliographystyle{apj}
\section*{Introduction} \addcontentsline{toc}{section}{\hspace*{2.3em}Introduction} In this paper, we consider elementary properties (i.e., properties which are definable in the first order language) of endomorphism rings of Abelian $p$-groups. The first result on the relationship between elementary properties of some models and elementary properties of derivative models was proved by A.~I.~Maltsev in 1961 in~\cite{Maltsev}. He proved that the groups $G_n(K)$ and $G_m(L)$ (where $G=\mathrm{GL},\mathrm{SL},\mathrm{PGL},\mathrm{PSL}$, $n,m\ge 3$, $K$,~$L$ are fields of characteristic~$0$) are elementarily equivalent if and only if $m=n$ and the fields $K$~and~$L$ are elementarily equivalent. This line of research was continued in 1992, when, with the help of the ultraproduct construction and the isomorphism theorem~\cite{Keisler}, C.~I.~Beidar and A.~V.~Mikhalev in~\cite{BeiMikh} formulated a~general approach to problems of elementary equivalence of various algebraic structures and generalized Maltsev's theorem to the case where $K$~and~$L$ are skewfields and associative rings. In 1998--2001 E.~I.~Bunina continued to study some problems of this type (see \cite{Bun1,Bun2,Bun3,Bun4}). She generalized the results of A.~I.~Maltsev for unitary linear groups over skewfields and associative rings with involution, and also for Chevalley groups over fields. In 2000 V.~Tolstykh in~\cite{Tolstyh} studied a~relationship between second order properties of skewfields and first order properties of automorphism groups of infinite-dimensional linear spaces over them. In 2003 (see~\cite{categories}) the authors studied a~relationship between second order properties of associative rings and first order properties of categories of modules, endomorphism rings, automorphism groups, and projective spaces of infinite rank over these rings. In this paper, we study a~relationship between second order properties of Abelian $p$-groups and first order properties of their endomorphism rings. The first section includes some basic notions from set theory and model theory: definitions of first order and second order languages, models of a~language, deducibility, and interpretability, as well as basic notions of set theory, which will be needed in the next sections. The second section contains all notions and statements about Abelian groups which will be needed for our future constructions. We have taken them mainly from~\cite{Fuks}. In the third section, we show how to extend the results of S.~Shelah from~\cite{Shelah} on interpreting set theory in a~category to the case of the endomorphism ring of some special Abelian $p$-group, which is a~direct sum of cyclic groups of the same order. In Sec.~4, we describe the second order group language~$\mathcal L_2$, and also its restriction $\mathcal L_2^\varkappa$ by some cardinal number~$\varkappa$, and then in Sec.~4.2 we introduce the \emph{expressible rank} $r_{\mathrm{exp}}$ of an Abelian group~$A$, represented as the direct sum $D\oplus G$ of its divisible and reduced components, as the maximum of the powers of the group~$D$ and some basic subgroup~$B$ of~$A$, i.e., $r_{\mathrm{exp}}=\max (r(D), r(B))$. In Sec.~4.2, we also formulate the main theorem of this work.
{\bf Theorem 1.} \emph{For any infinite $p$-groups $A_1$~and~$A_2$ elementary equivalence of endomorphism rings $\mathop{\mathrm{End}}\nolimits(A_1)$ and $\mathop{\mathrm{End}}\nolimits(A_2)$ implies coincidence of the second order theories $\mathrm{Th}_2^{r_{\mathrm{exp}}(A_1)}(A_1)$ and $\mathrm{Th}_2^{r_{\mathrm{exp}}(A_2)}(A_2)$ of the groups $A_1$~and~$A_2$, bounded by the cardinal numbers $r_{\mathrm{exp}}(A_1)$ and $r_{\mathrm{exp}}(A_2)$, respectively.} \smallskip Note that $r_{\mathrm{exp}}(A)=|A|$ in all cases except the case where $|D|<|G|$, any basic subgroup of~$A$ is countable, and the group~$G$ itself is uncountable. In this case, $r_{\mathrm{exp}}(A)=\omega$. In Sec.~4.3, we prove two ``inverse implications'' of the main theorem. {\bf Theorem 2.} \emph{For any Abelian groups $A_1$~and~$A_2$, if the groups $A_1$~and~$A_2$ are equivalent in the second order logic~$\mathcal L_2$, then the rings $\mathop{\mathrm{End}}\nolimits(A_1)$ and $\mathop{\mathrm{End}}\nolimits(A_2)$ are elementarily equivalent.} \smallskip {\bf Theorem 3.} \emph{If Abelian groups $A_1$~and~$A_2$ are reduced and their basic subgroups are countable, then $\mathrm{Th}_2^\omega(A_1)=\mathrm{Th}_2^\omega(A_2)$ implies elementary equivalence of the rings $\mathop{\mathrm{End}}\nolimits(A_1)$ and $\mathop{\mathrm{End}}\nolimits(A_2)$.} \smallskip Therefore, for all Abelian groups except the case where $A=D\oplus G$, $D\ne 0$, $|D|< |G|$, $|G|> \omega$, and a~basic subgroup of~$A$ is countable, elementary equivalence of the rings $\mathop{\mathrm{End}}\nolimits(A_1)$ and $\mathop{\mathrm{End}}\nolimits(A_2)$ is equivalent to $$ \mathrm{Th}_2^{r_{\mathrm{exp}}(A_1)}(A_1) = \mathrm{Th}_2^{r_{\mathrm{exp}}(A_2)}(A_2). $$ In Sec.~4.4, we divide the proof of the main theorem into three cases: \begin{enumerate} \item $A_1$ and $A_2$ are bounded; \item $A_1=D_1\oplus G_1$, $A_2=D_2\oplus G_2$, $D_1$~and~$D_2$ are divisible, $G_1$~and~$G_2$ are bounded; \item $A_1$~and~$A_2$ have unbounded basic subgroups. \end{enumerate} In Secs.~5--7, these three cases are considered. In Sec.~8, we prove the main theorem, combining all three cases in one proof. \section{Basic Notions from Model Theory} \subsection{First Order Languages}\label{ss1.1} A~\emph{first order language} $\mathcal L$ is a~collection of symbols. It consists of (1)~parentheses ${(}$,~${)}$; (2)~connectives $\land$~(``and'') and $\neg$~(``not''); (3)~the quantifier~$\forall$ (for all); (4)~the binary relation symbol~$=$ (identity); (5)~a countable set of variables~$x_i$; (6)~a~finite or countable set of relation symbols~$Q_i^n$ ($n\ge 1$); (7)~a~finite or countable set of function symbols~$F_i^n$ ($n\ge 1$); (8)~a~finite or countable set of constant symbols~$c_i$. Now we introduce the examples of first order languages most important for us: the group language $\mathcal L_G$ and the ring language~$\mathcal L_R$. We assume that in the group language there are neither function nor constant symbols, and there is a~unique 3-place relation symbol~$Q^3$, which corresponds to multiplication. Instead of $Q^3(x_1,x_2,x_3)$ we shall write $x_1=x_2\cdot x_3$, or $x_1=x_2 x_3$. For the ring language we shall also suppose that there are neither function nor constant symbols, and there are two relation symbols: a~3-place symbol of multiplication~$Q_1^3$ (instead of $Q_1^3(x_1,x_2,x_3)$ we shall write $x_1=x_2\cdot x_3$, or $x_1=x_2 x_3$) and a~3-place symbol of addition~$Q_2^3$ (instead of $Q_2^3(x_1,x_2,x_3)$ we shall write $x_1=x_2+x_3$).
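To illustrate this purely relational notation (the example is ours and anticipates the definition of formulas given below), the associativity law of a~group can be written in~$\mathcal L_G$ as $$ \forall x_1\, \forall x_2\, \forall x_3\, \forall x_4\, \forall x_5\, \forall x_6\, ((x_4=x_1 x_2 \wedge x_5=x_4 x_3 \wedge x_6=x_2 x_3)\Rightarrow x_5=x_1 x_6), $$ where $x_4$, $x_5$, and~$x_6$ serve as names for the products $x_1 x_2$, $(x_1 x_2) x_3$, and $x_2 x_3$, respectively; no function symbol for multiplication is needed.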
A~\emph{symbol-string} is defined as follows: (1)~every symbol~$\alpha$ of the language~$\mathcal L$ is a~\emph{symbol-string}; (2)~if $\sigma$~and~$\rho$ are symbol-strings, then $\sigma\rho$ is a~symbol-string. A~\emph{designating symbol-string~$\sigma$ for a~symbol-string~$\rho$} is the symbol-string $\sigma \mathbin{{:}\!=} \rho$, or $\rho \mathbin{{:}\!=} \sigma$ (\emph{$\sigma$~is a~designation for~$\rho$}). If a~symbol-string~$\rho$ is a~part of a~symbol-string~$\sigma$, standing in one of the following three positions: $\ldots \rho$, $\rho \ldots$, $\ldots \rho \ldots$, then $\rho$ is an \emph{occurrence in~$\sigma$}. Some symbol-strings constructed from the symbols of the language~$\mathcal L$ are called \emph{terms} and \emph{formulas} of this language. \emph{Terms} are defined as follows: \begin{enumerate} \item a~variable is a~term; \item a~constant symbol is a~term; \item if $F^n$ is an $n$-place function symbol and $t_1,\dots,t_{n}$ are terms, then $F^n(t_1,\dots,t_{n})$ is a~term; \item a~symbol-string is a~term only if it can be shown to be a~term by a~finite number of applications of (1)--(3). \end{enumerate} In the cases of the languages $\mathcal L_G$~and~$\mathcal L_R$, terms have the form~$x_i$. The \emph{elementary formulas} of the language~$\mathcal L$ are symbol-strings of the form given below: \begin{enumerate} \item if $t_1$ and $t_2$ are terms of the language~$\mathcal L$, then $t_1=t_2$ is an elementary formula; \item if $Q^n$ is an $n$-place relation symbol and $t_1,\dots,t_{n}$ are terms, then the symbol-string $Q^n(t_1,\dots,t_{n})$ is an elementary formula. \end{enumerate} For the language $\mathcal L_G$ the elementary formulas have the form $x_i=x_j$ and $x_i=x_j\cdot x_k$, and for the language~$\mathcal L_R$ they have the form $x_i=x_j$, $x_i=x_j\cdot x_k$, and $x_i=x_j+x_k$. Finally, the \emph{formulas} of the language~$\mathcal L$ are defined as follows: \begin{enumerate} \item an elementary formula is a~formula; \item if $\varphi$~and~$\psi$ are formulas and $x$~is a~variable, then $(\neg \varphi)$, $(\varphi\land \psi)$, and $(\forall x \, \varphi)$ are formulas; \item a~symbol-string is a~formula only if it can be shown to be a~formula by a~finite number of applications of (1)--(2). \end{enumerate} Let us introduce the following abbreviations: \begin{itemize} \item[] $(\varphi\lor \psi)$ stands for $(\neg ((\neg \varphi)\land(\neg \psi)))$; \item[] $(\varphi\Rightarrow \psi)$ stands for $((\neg \varphi)\lor \psi)$; \item[] $(\varphi\Leftrightarrow \psi)$ stands for $((\varphi\Rightarrow \psi)\land (\psi\Rightarrow \varphi))$; \item[] $(\exists x\, \varphi)$ is an abbreviation for $(\neg (\forall x\, (\neg \varphi)))$; \item[] $\varphi_1\vee \varphi_2\vee\dots \vee\varphi_n$ stands for $(\varphi_1\vee (\varphi_2\vee \dots \vee \varphi_n))$; \item[] $\varphi_1\wedge \varphi_2\wedge \dots \wedge \varphi_n$ stands for $(\varphi_1\wedge (\varphi_2\wedge \dots \wedge \varphi_n))$; \item[] $\forall x_1\dots \forall x_n \varphi$ stands for $(\forall x_1)\dots (\forall x_n)\varphi$; \item[] $\exists x_1\dots \exists x_n \varphi$ stands for $(\exists x_1)\dots (\exists x_n)\varphi$. \end{itemize} Let us introduce the notions of \emph{free} and \emph{bound} occurrences of a~variable in a~formula. \begin{enumerate} \renewcommand{\labelenumi}{\theenumi.} \item All occurrences of all variables in elementary formulas are free occurrences.
\item Every free (bound) occurrence of a~variable~$x$ in a~formula~$\varphi$ is a~free (bound) occurrence of the variable~$x$ in the formulas $(\neg \varphi)$, $(\varphi\land \psi)$, and $(\psi \land \varphi)$. \item For any occurrence of a~variable~$x$ in a~formula~$\varphi$, the occurrence of the variable~$x$ in the formula $\forall x\, \varphi$ is bound. If an occurrence of a~variable~$x$ in a~formula~$\varphi$ is free (bound), then the occurrence of~$x$ in $\forall x'\, \varphi$ is free (bound). \end{enumerate} Therefore, one variable can have free and bound occurrences in the same formula. A~variable is called a~\emph{free} (\emph{bound}) \emph{variable} in a~given formula if there exist free (bound) occurrences of this variable in this formula. Thus a~variable can be free and bound at the same time. A~\emph{sentence} is a~formula with no free variables. Let $\varphi$ be a~formula, $t$ be a~term, and $x$ be a~variable. The \emph{substitution} of a~term~$t$ into the formula~$\varphi$ for the variable~$x$ is the formula $\varphi(t\mid x)$, obtained by replacing every free occurrence of the variable~$x$ in~$\varphi$ by the term~$t$. The substitution $\varphi(t\mid x)$ is called \emph{admissible} if for every variable~$x'$ occurring in the term~$t$ no free occurrence of~$x$ in~$\varphi$ is a~part of a~subformula $\forall x'\, \psi(x')$ or $\exists x'\, \psi(x')$ of the formula~$\varphi$. For example, in the case of the group language~$\mathcal L_G$ terms are variables. If we have a~formula ${\forall x_1\, (x_2=x_1)}$, then the substitution of the term~$x_1$ for~$x_2$ is not admissible, and the substitution of the term~$x_3$ for~$x_2$ is admissible. For the formula $\forall x_1\, (x_2=x_1\cdot x_3)$ the substitution $x_1\mid x_2$ is not admissible, and the substitution $x_3\mid x_2$ is admissible. Now let us introduce the following convention of notation: we use $t(x_1,\dots,x_n)$ to denote a~term~$t$ whose variables form a~subset of $\{ x_1,\dots,x_n\}$. Similarly, we use $\varphi (x_1,\dots,x_n)$ to denote a~formula whose \emph{free} variables form a~subset of $\{ x_1,\dots,x_n\}$. We need \emph{logical axioms} and \emph{rules of inference} to construct a~formal system. The logical axioms are listed below. Purely logical axioms. \begin{enumerate} \renewcommand{\labelenumi}{\theenumi.} \item $\varphi\Rightarrow (\psi\Rightarrow \varphi)$. \item $(\varphi\Rightarrow (\psi\Rightarrow \chi))\Rightarrow ((\varphi \Rightarrow \psi)\Rightarrow (\varphi\Rightarrow \chi))$. \item $(\neg \psi\Rightarrow \neg \varphi)\Rightarrow ((\neg \psi \Rightarrow \varphi)\Rightarrow \psi)$. \item $\forall x\, \varphi(x)\Rightarrow \varphi(t\mid x)$ if $t$ is a~term such that the substitution $t\mid x$ is admissible. \item $(\forall x (\psi\Rightarrow \varphi(x)))\Rightarrow (\psi\Rightarrow (\forall x \varphi))$ if $\psi$ does not contain any free occurrences of~$x$.% \setcounter{lastla}{\value{enumi}} \end{enumerate} Identity axioms. \begin{enumerate} \renewcommand{\labelenumi}{\theenumi.} \item $x=x$. \item $y=z\Rightarrow t(x_1,\dots,x_{i-1},y,x_{i+1},\dots,x_n)= t(x_1,\dots,x_{i-1},z,x_{i+1},\dots,x_n)$. \item $y=z\Rightarrow (\varphi(x_1,\dots,x_{i-1},y,x_{i+1},\dots,x_n) \Leftrightarrow \varphi (x_1,\dots,x_{i-1},z,x_{i+1},\dots,x_n))$, where $x_1,\dots,x_n,y,z$ are variables, $t$~is a~term, and $\varphi(x_1,\dots,x_n)$ is an elementary formula.% \setcounter{lastar}{\value{enumi}} \end{enumerate} There are two inference rules.
\begin{enumerate} \renewcommand{\labelenumi}{\theenumi.} \item The rule of detachment (modus ponens or MP): from $\varphi$ and $\varphi\Rightarrow \psi$ infer $\psi$. \item The rule of generalization: from $\varphi$ infer $\forall x\, \varphi$. \end{enumerate} Let $\Sigma$ be a~collection of formulas and $\psi$ be a~formula of the language~$\mathcal L$. A~sequence $(\varphi_1,\dots, \varphi_n)$ of formulas of the language~$\mathcal L$ is called a~\emph{deduction of the formula~$\psi$ from the collection~$\Sigma$} if $\varphi_n=\psi$ and for any $1\le i\le n$ one of the following conditions is fulfilled: \begin{enumerate} \item $\varphi_i$ belongs to~$\Sigma$ or is a~logical axiom; \item there exist $1\le k< j< i$ such that $\varphi_j$ is $(\varphi_k \Rightarrow \varphi_i)$, i.e., $\varphi_i$~is obtained from $\varphi_k$ and $\varphi_k\Rightarrow \varphi_i$ by the inference rule MP; \item there exists $1\le j< i$ such that $\varphi_i$ is $\forall x\, \varphi_j$, where $x$ is not a~free variable of any formula from~$\Sigma$. \end{enumerate} Denote this deduction by $ (\varphi_1,\dots,\varphi_n)\colon \Sigma\vdash \psi$. If there exists a~deduction $(\varphi_1,\dots,\varphi_n)\colon \Sigma\vdash \psi$, then the formula~$\psi$ is called \emph{deducible in the language~$\mathcal L$ from the set~$ \Sigma$}, and the deduction $(\varphi_1,\dots,\varphi_n)$~is called a~\emph{proof of~$\psi$}. A~(\emph{first order}) \emph{theory}~$T$ in the language~$\mathcal L$ is some set of sentences of the language~$\mathcal L$. A~\emph{set of axioms} of the theory~$T$ is any set of sentences which has the same consequences as~$T$. \subsection{Theory of Classes and Sets NBG}\label{ss1.2} The set theory of von Neumann, Bernays, and G\"odel NBG (see~\cite{Mendelson}), which will be a~base for all our constructions, has a~single relation symbol~$P^2$, which denotes a~2-place relation, and no function or constant symbols. We shall use Latin letters $X$, $Y$, and~$Z$ with subscripts as variables of this system. We also introduce the abbreviations $X\in Y$ for $P(X,Y)$ and $X\notin Y$ for $\neg P(X,Y)$. The sign~${\in}$ can be interpreted as the symbol of belonging. The formula $X\subseteq Y$ is an abbreviation for the formula $\forall Z\, (Z\in X\Rightarrow Z\in Y)$ (\emph{inclusion}), $X\subset Y$ is an abbreviation for $X\subseteq Y\wedge X\ne Y$ (\emph{proper inclusion}). Objects of the theory NBG are called \emph{classes}. A class is called a~\emph{set} if it is an element of some class. A class which is not a~set is called a~\emph{proper class}. We introduce small Latin letters $x$, $y$, and~$z$ with subscripts as special variables bounded by sets. This means that the formula $\forall x\, A(x)$ is an abbreviation for $\forall X\, (\text{$X$ is a~set}\Rightarrow A(X))$, and it has the sense ``$A$~is true for all sets'', and $\exists x\, A(x)$ is an abbreviation for $\exists X\, (\text{$X$~is a~set} \logic\wedge A(X))$, and it has the sense ``$A$~is true for some set.'' \begin{description} \item[\hspace*{-\parindent}A1 {\mdseries (\emph{the extensionality axiom}).}] $\forall X\, \forall Y\, (X=Y\Leftrightarrow \forall Z\, (Z\in X\Leftrightarrow Z\in Y))$. Intuitively, $X=Y$ if and only if $X$~and~$Y$ have the same elements. \item[\hspace*{-\parindent}A2 {\mdseries (\emph{the pair axiom}).}] $\forall x\, \forall y\, \exists z\, \forall u\, (u\in z\Leftrightarrow u=x \logic\vee u=y)$, i.e., for all sets $x$~and~$y$ there exists a~set~$z$ such that $x$~and~$y$ are the only elements of~$z$.
\item[\hspace*{-\parindent}A3 {\mdseries (\emph{the empty set axiom}).}] $\exists x\, \forall y\, \neg (y\in x)$, i.e., there exists a~set which does not contain any elements. \end{description} Axioms \textbf{A1}~and~\textbf{A3} imply that this set is unique, i.e., we can introduce a~constant symbol~$\varnothing$ (or~$0$), with the condition $\forall y\, (y\notin \varnothing)$. Also we can introduce a~new function symbol $f(x,y)$ for the pair, and write it in the form $\{ x,y\}$. Further, let $\{ x\}=\{ x,x\}$. The set $\langle x,y\rangle \equiv \{ \{ x\},\{ x,y\}\}$ is called the \emph{ordered pair} of sets $x$~and~$y$. \begin{proposition}\label{prop1} $\vdash \forall x\,\forall y\, \forall u\, \forall v\, (\langle x,y\rangle =\langle u,v\rangle \Rightarrow x=u\logic\wedge y=v)$. \end{proposition} In the same way we can introduce \emph{ordered triplets of sets, ordered quadruplets of sets}, and so on. \begin{description} \item[\hspace*{-\parindent}AS4 {\mdseries (\emph{the axiom scheme of existence of classes}).}] Let $$ \varphi(X_1,\dots,X_n,Y_1,\dots,Y_m) $$ be a~formula. We shall call this formula \emph{predicative} if only variables for sets are bound in it (i.e., if it can be transformed to this form with the help of abbreviations). For every predicative formula $\varphi(X_1,\dots,X_n,Y_1,\dots,Y_m)$ $$ \exists Z\, \forall x_1\dots \forall x_n\, (\langle x_1,\dots,x_n\rangle \in Z \Leftrightarrow \varphi(x_1,\dots,x_n,Y_1,\dots,Y_m)). $$ \end{description} The class~$Z$ which exists by the axiom scheme~\textbf{AS4} will be denoted by $$ \{ x_1,\dots,x_n\mid \varphi(x_1,\dots,x_n,Y_1,\dots,Y_m)\}. $$ Now, by the axiom scheme \textbf{AS4}, we can define for arbitrary classes $X$~and~$Y$ the following derivative classes: \begin{itemize} \item[] $X\cap Y\equiv \{ u\mid u\in X \logic\land u\in Y\}$ (\emph{the intersection of classes $X$~and~$Y$}); \item[] $X\cup Y\equiv \{ u\mid u\in X \logic\lor u\in Y\}$ (\emph{the union of classes $X$~and~$Y$}); \item[] $\bar X\equiv \{ u\mid u\notin X\}$ (\emph{the complement of a~class~$X$}); \item[] $V\equiv \{ u\mid u=u\}$ (\emph{the universal class}); \item[] $X\setminus Y\equiv \{ u\mid u\in X \logic\land u\notin Y\}$ (\emph{the difference of classes $X$~and~$Y$}); \item[] $\mathrm{Dom}(X)\equiv \{ u\mid \exists v\, (\langle u,v\rangle \in X)\}$ (\emph{the domain of a~class~$X$}); \item[] $\mathop{\mathrm{Rng}}\nolimits(X)\equiv \{ u\mid \exists v (\langle v,u\rangle \in X)\}$ (\emph{the image of a~class~$X$}); \item[] $X\times Y\equiv \{ u\mid \exists x\, \exists y\, (u=\langle x,y\rangle \logic\land x\in X \logic\land y\in Y)\}$ (\emph{the Cartesian product of classes $X$~and~$Y$}); \item[] $\mathcal P(X)\equiv \{ u\mid u\subseteq X\}$ (\emph{the class of all subsets of a~class~$X$}); \item[] $\mathop{{\cup}} X\equiv \{ u\mid \exists v\, (u\in v \logic\land v\in X)\}$ (\emph{the union of all elements of a~class~$X$}). \end{itemize} Now we introduce further axioms. \begin{description} \item[\hspace*{-\parindent}A5 {\mdseries (\emph{the union axiom}).}] $\forall x\, \exists y\, \forall u\, (u\in y\Leftrightarrow \exists v\, (u\in v \logic\land v\in x))$. \end{description} This axiom states that the union $\mathop{{\cup}} x$ of all elements of a~set~$x$ is also a~set. \begin{description} \item[\hspace*{-\parindent}A6 {\mdseries (\emph{the power set axiom}).}] $\forall x\, \exists y\, \forall u\, (u\in y\Leftrightarrow u\subseteq x)$.
\end{description} This axiom states that the class of all subsets of a~set~$x$ is a~set, which will be called the \emph{power set of~$x$}. \begin{description} \item[\hspace*{-\parindent}A7 {\mdseries (\emph{the separation axiom}).}] $\forall x\, \forall Y\, \exists z\, \forall u\, (u\in z\Leftrightarrow u\in x \logic\wedge u\in Y)$. \end{description} This axiom states that the intersection of a~class and a~set is a~set. Denote the class $X\times X$ by $X^2$, the class $X\times X\times X$ by~$X^3$, and so on. Let the formula $\mathop{\mathrm{Rel}}(X)$ be an abbreviation for the formula $X\subseteq V^2$ (\emph{$X$~is a~relation}), $\mathrm{Un}(X)$ be an abbreviation for the formula $\forall x\, \forall y\, \forall z\, (\langle x,y\rangle \in X \logic\wedge \langle x,z\rangle \in X\Rightarrow y=z)$ (\emph{$X$~is functional}), and $\mathop{\mathrm{Fnc}}(X)$ be an abbreviation for $X\subseteq V^2 \logic\wedge \mathrm{Un}(X)$ (\emph{$X$~is a~function}). \begin{description} \item[\hspace*{-\parindent}A8 {\mdseries (\emph{the replacement axiom}).}] $\forall X\, \forall x\, (\mathrm{Un}(X)\Rightarrow \exists y\, \forall u\, (u\in y \Leftrightarrow \exists v\, (\langle v,u\rangle \in X \logic\wedge v\in x)))$. \end{description} This axiom states that if the class~$X$ is functional, then the class of second components of pairs from~$X$ such that the first component belongs to~$x$ is a~set. The following axiom postulates existence of an infinite set. \begin{description} \item[\hspace*{-\parindent}A9 {\mdseries (\emph{the infinity axiom}).}] $\exists x\, (0\in x \logic\wedge \forall u\, (u\in x\Rightarrow u\cup \{ u\}\in x))$. It is clear that for such a~set~$x$ we have $\{ 0\}\in x$, $\{ 0,\{ 0\}\}\in x$, $\{ 0,\{ 0\},\{ 0,\{ 0\}\}\}\in x$,\ldots{} If we now set $1\mathbin{{:}\!=} \{0\}$, $2\mathbin{{:}\!=} \{ 0,1\}$,\ldots, $n\mathbin{{:}\!=} \{ 0,1,\dots,n-1\}$, then for every integer $n\ge 0$ the condition $n\in x$ is fulfilled and $0\ne 1$, $0\ne 2$, $1\ne 2$,\ldots \item[\hspace*{-\parindent}A10 {\mdseries (\emph{the regularity axiom}).}] $\forall X\, (X\ne \varnothing\Rightarrow \exists x\in X\, (x\cap X = \varnothing))$. \end{description} This axiom states that every nonempty set is disjoint from one of its elements. \begin{description} \item[\hspace*{-\parindent}A11 {\mdseries (\emph{the axiom of choice AC}).}] For every set~$x$ there exists a~mapping~$f$ such that for every nonempty subset $y\subseteq x$ we have $f(y)\in y$ (this mapping is called a~\emph{choice} mapping for~$x$). \end{description} The list of axioms of the theory NBG is finished. A~class~$P$ is called \emph{ordered by a~binary relation~$\le$ on~$P$}, if the following conditions hold \begin{enumerate} \item $\forall p\in P\, (p\le p) $; \item $\forall p,q\in P\, (p\le q \logic\land q\le p\Rightarrow p=q)$; \item $\forall p,q,r\in P\, (p\le q \logic\land q\le r\Rightarrow p\le r)$. \end{enumerate} If, in addition,\nopagebreak \begin{enumerate} \setcounter{enumi}{3} \item $\forall p,q\in P\, (p\le q \logic\lor q\le p)$, \end{enumerate} then the relation~$\le$ is called a~\emph{linear order} on the class~$P$. An ordered class~$P$ is called \emph{well-ordered} if \begin{enumerate} \setcounter{enumi}{4} \item $\forall q\, (\varnothing \ne q\subseteq P\Rightarrow \exists x\in q\, (\forall y\in q\, (x\le y)))$, i.e., every nonempty subset of the class~$P$ has the smallest element. \end{enumerate} A~class~$S$ is called \emph{transitive} if $\forall x\, (x\in S\Rightarrow x\subseteq S)$. 
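For instance (an added illustration), every natural number in the above coding is a~transitive set: $$ 3=\{0,1,2\},\qquad 0=\varnothing\subseteq 3,\quad 1=\{0\}\subseteq 3,\quad 2=\{0,1\}\subseteq 3. $$ By contrast, the set $\{1\}$ is not transitive, since $1\in \{1\}$ but $0\in 1$ while $0\notin \{1\}$, so $1\not\subseteq \{1\}$.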
A~class (a~set)~$S$ is called an \emph{ordinal} (an \emph{ordinal number}) if $S$ is transitive and well-ordered by the relation ${\in}\cup {=}$ on~$S$. Ordinal numbers are usually denoted by Greek letters $\alpha$, $\beta$, $\gamma$, and so on. The class of all ordinal numbers is denoted by~$\mathrm{On}$. The natural ordering of the class of ordinal numbers is the relation ${\alpha\le \beta} \mathbin{{:}\!=} \allowbreak \alpha=\beta \logic\lor \alpha\in \beta$. The class $\mathrm{On}$ is transitive and linearly ordered by the relation~$\le$. There are some simple assertions about ordinal numbers: \begin{enumerate} \item if $\alpha$ is an ordinal number, $a$ is a~set, and $a\in \alpha$, then $a$ is an ordinal number; \item $\alpha+1\equiv \alpha\cup \{ \alpha\}$ is the smallest ordinal number that is greater than~$\alpha$; \item every nonempty set of ordinal numbers has the smallest element. \end{enumerate} Therefore the ordered class $\mathrm{On}$ is well-ordered. Thus $\mathrm{On}$ is an ordinal. An ordinal number $\alpha$ is called a~\emph{successor} if $\alpha=\beta+1$ for some ordinal number~$\beta$. In the opposite case $\alpha$ is called a~\emph{limit ordinal number}. The smallest (in the class $\mathrm{On}$) nonzero limit ordinal is denoted by~$\omega$. Ordinals which are smaller than~$\omega$ are called \emph{natural numbers}. Classes~$F$ which are functions with domains equal to~$\omega$ are called \emph{infinite sequences}. Functions with domains equal to $n\in \omega$ are called \emph{finite sequences}. Sets $a$~and~$b$ are called \emph{equivalent} ($a \sim b$) if there exists a~bijective function $u\colon a\to b$. An ordinal number~$\alpha$ is called a~\emph{cardinal} if for every ordinal number~$\beta$ the conditions $\beta \leq \alpha$ and $\beta\sim\alpha$ imply $\beta =\alpha$. The class of all cardinal numbers is denoted by~$\mathrm{Cn}$. The class~$\mathrm{Cn}$ with the order induced from the class~$\mathrm{On}$ is well-ordered. The axiom of choice implies that for every set~$a$ there exists a~unique cardinal number~$\alpha$ such that $a\sim \alpha$. This number~$\alpha$ is called the \emph{power of the set}~$a$ (denoted by $|a|$, or $\mathop{\mathrm{card}}\nolimits a$). A~set of power~$\omega$ is called \emph{countable}. A~set of power $n\in \omega$ is called \emph{finite}. A~set is called \emph{infinite} if it is not finite. A~set is called \emph{uncountable} if it is neither countable, nor finite. The cardinal number $c\mathbin{{:}\!=} |\mathcal P(\omega)|$ is called the power of \emph{continuum}. To denote cardinals we shall use small Greek letters (as in the case of ordinals): $\xi$th infinite cardinal will be denoted by~$\omega_\xi$ (i.e., the cardinal number~$\omega$ will also be denoted by~$\omega_0$). A~set~$X$ is said to be \emph{cofinal} in~$\alpha$ if $X\subset \alpha$ and $\alpha=\mathop{{\cup}} X$. The cofinality of~$\alpha$, written $\mathrm{cf} \alpha$, is the least cardinal~$\beta$ such that a~set of power~$\beta$ is cofinal in~$\alpha$. A~cardinal~$\varkappa$ is said to be \emph{regular} if $\mathrm{cf} \varkappa=\varkappa$, i.e., for every ordinal number~$\beta$ for which there exists a~function $f\colon \beta\to \varkappa$ such that $\mathop{{\cup}} \mathop{\mathrm{rng}}\nolimits f=\varkappa$ the inequality $\varkappa \le \beta$ holds, where $\mathop{{\cup}} \mathop{\mathrm{rng}}\nolimits f=\varkappa$ means that for every $y\in \varkappa$ there exists $x\in \beta$ such that $y< f(x)$. 
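For example (an added illustration), the cardinal~$\omega$ is regular: the range of any function from a~natural number~$n$ into~$\omega$ is finite and therefore bounded in~$\omega$. On the other hand, the function $$ f\colon \omega\to \omega_\omega,\qquad f(n)=\omega_n, $$ satisfies $\mathop{{\cup}} \mathop{\mathrm{rng}}\nolimits f=\omega_\omega$, whence $\mathrm{cf}\, \omega_\omega=\omega$ and $\omega_\omega$ is not regular.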
A~cardinal~$\varkappa$ is said to be \emph{singular} if it is not regular. The continuum hypothesis states that $|\mathcal P(\omega)|=\omega_1$, i.e., the power of continuum is the smallest uncountable cardinal number. We shall assume the continuum hypothesis if we need it. \subsection{Models, Satisfaction, and Elementary Equivalence}\label{ss1.3} We now suppose that all our constructions are made in the theory NBG. A~\emph{model of a~first order language~$\mathcal L$} is a~pair~$\mathcal U=\langle A,I\rangle$ consisting of a~universe~$A$ (i.e., some class or set of the theory NBG) and some correspondence~$I$ that assigns to every relation symbol~$Q^n$ some $n$-place relation $R\subset A^n$ on~$A$, to every function symbol~$F^m$ some $m$-place function $G\colon A^m\to A$, and to every constant symbol~$c$ some element of~$A$. A simple example of a~model of the group language is the set $1\mathbin{{:}\!=} \{\varnothing\}=\{ 0\}$, where $I(Q^3)=\{ \langle 0,0,0\rangle\}$. Another simple example of a~model of the group language is the set $2\mathbin{{:}\!=}\{ \varnothing,\{ \varnothing\}\}=\{ 0,1\}$, where $I(Q^3)=\{ \langle 0,0,0\rangle, \langle 0,1,1\rangle, \langle 1,0,1\rangle, \langle 1,1,0\rangle\}$. The \emph{power} of a~model $\mathcal U=\langle A,I\rangle$ is the cardinal number~$|A|$ (if the universe~$A$ is a~set). For all models which will be considered in this paper, the universe~$A$ is a~set. A~model~$\mathcal U$ is called finite, countable, or uncountable if $|A|$ is finite, countable, or uncountable, respectively. Models $\mathcal U$~and~$\mathcal U'$ of a~language~$\mathcal L$ are called \emph{isomorphic}, if there exists a~bijective mapping~$f$ of the set (universe)~$A$ onto the set~$A'$ satisfying the following conditions: \begin{enumerate} \item for each $n$-place relation symbol~$Q^n$ and any $a_1,\dots,a_n$ from~$A$ $$ \langle a_1,\dots,a_n\rangle\in I(Q^n)\text{ if and only if } \langle f(a_1),\dots,f(a_n)\rangle \in I'(Q^n); $$ \item for each $m$-place function symbol~$F^m$ of the language~$\mathcal L$ and any $a_1,\dots,a_m \in A$ $$ f(I(F^m)(\langle a_1,\dots,a_m\rangle))= I'(F^m)(\langle f(a_1),\dots,f(a_m)\rangle); $$ \item for each constant symbol~$c$ of the language~$\mathcal L$ $$ f(I(c))=I'(c). $$ \end{enumerate} Every mapping~$f$ satisfying these conditions is called an \emph{isomorphism of the model~$\mathcal U$ onto the model~$\mathcal U'$} or an \emph{isomorphism between the models $\mathcal U$~and~$\mathcal U'$}. The fact that $f$ is an isomorphism of the model~$\mathcal U$ onto the model~$\mathcal U'$ will be denoted by $f\colon \mathcal U\cong \mathcal U'$, and the formula $\mathcal U\cong \mathcal U'$ means that the models $\mathcal U$~and~$\mathcal U'$ are isomorphic. For convenience we use~$\cong$ to denote the isomorphism relation between models. Indeed, unless we wish to consider the particular structure of each element of $A$~or~$A'$, for all practical purposes $\mathcal U$~and~$\mathcal U'$ are the same if they are isomorphic. Now we shall give a~formal definition of satisfiability. Let $\varphi$ be an arbitrary formula of a~language~$\mathcal L$, let all its variables, free and bound, be contained in the set $x_1,\dots,x_q$, and let $a_1,\dots,a_q$ be an arbitrary sequence of elements of the set~$A$. We define the predicate $$ \text{\emph{$\varphi$ is true on the sequence $a_1,\dots,a_q$ in the model~$\mathcal U$}, or \emph{$a_1,\dots,a_q$ satisfies the formula~$\varphi$ in~$\mathcal U$}.} $$ The definition proceeds in three stages.
Let $\mathcal U$ be a~fixed model for~$\mathcal L$. 1. The value of a~term $t(x_1,\dots,x_q)$ at $a_1,\dots,a_q$ is defined as follows (we let $t[a_1,\dots,a_q]$ denote this value): \begin{enumerate} \item if $t$ is a~variable $x_i$, then $t[a_1,\dots,a_q]=a_i$; \item if $t$ is a~constant symbol~$c$, then $t[a_1,\dots,a_q]=I(c)$; \item if $t$ is $F^m(t_1,\dots,t_m)$, where $t_1(x_1,\dots,x_q),\dots, \allowbreak t_m(x_1,\dots,x_q)$ are terms, then $$ t[a_1,\dots,a_q]= I(F^m)(\langle t_1[a_1,\dots,a_q],\dots,t_m[a_1,\dots,a_q]\rangle). $$ \end{enumerate} 2. \begin{enumerate} \item Suppose that $\varphi(x_1,\dots\mskip-1mu,x_q)$ is an elementary formula $t_1\!=\!t_2$, where $t_1(x_1,\dots\mskip-1mu,x_q)$ and $t_2(x_1,\dots,x_q)$ are terms. Then $a_1,\dots,a_q$ satisfies $\varphi$ if and only if $$ t_1[a_1,\dots,a_q]=t_2[a_1,\dots,a_q]. $$ \item Suppose that $\varphi(x_1,\dots,x_q)$ is an elementary formula $Q^n(t_1,\dots,t_n)$, where $Q^n$ is an $n$-place relation symbol and $t_1(x_1,\dots,x_q),\dots, t_n(x_1,\dots,x_q)$ are terms. Then $a_1,\dots,a_q$ satisfies~$\varphi$ if and only if $$ \langle t_1[a_1,\dots, a_q],\dots , t_n[a_1,\dots,a_q]\rangle \in I(Q^n). $$ \end{enumerate} For brevity, we write $$ \mathcal U\vDash \varphi [a_1,\dots, a_q] $$ for: $a_1,\dots,a_q$ satisfies~$\varphi$ in~$\mathcal U$. 3. Now suppose that $\varphi$ is any formula of~$\mathcal L$ and all its free and bound variables are among $x_1,\dots,x_q$. \begin{enumerate} \item If $\varphi$ is $\theta_1\land \theta_2$, then $$ \mathcal U\vDash \varphi[a_1,\dots,a_q]\text{ if and only if } \mathcal U\vDash \theta_1 [a_1,\dots,a_q]\text{ and }\mathcal U\vDash \theta_2 [a_1,\dots, a_q]. $$ \item If $\varphi$ is $\neg \theta$, then $$ \mathcal U\vDash \varphi[a_1,\dots,a_q] \text{ if and only if it is not true that } \mathcal U\vDash \theta [a_1,\dots,a_q]. $$ \item If $\varphi$ is $\forall x_i\, \psi$, where $i\le q$, then $$ \mathcal U\vDash \varphi[a_1,\dots,a_q]\text{ if and only if } \mathcal U\vDash \psi [a_1,\dots,a_{i-1},a, a_{i+1},\dots,a_q] \text{ for any } a\in A. $$ \end{enumerate} It is easy to check that the abbreviations $\lor$, $\Rightarrow$, $\Leftrightarrow$, and $\exists$ have their usual meanings. In particular, if $\varphi$ is $\exists x_i\, \psi$, where $i\le q$, then $\mathcal U\vDash \varphi[a_1,\dots,a_q]$ if and only if there exists $a\in A$ such that $$ \mathcal U\vDash \psi[a_1,\dots,a_{i-1},a,a_{i+1},\dots,a_q]. $$ The following proposition shows that the relation $$ \mathcal U\models \varphi(x_1,\dots,x_p)[a_1,\dots,a_q] $$ depends only on $a_1,\dots,a_p$, where $p\le q$. \begin{proposition}\label{prop2} \begin{enumerate} \item Let $t(x_1,\dots,x_p)$ be a~term, and let $a_1,\dots,a_q$ and $b_1,\dots,b_r$ be two sequences of elements such that $p\le q$, $p\le r$, and $a_i=b_i$ whenever $x_i$ is a~free variable of the term~$t$. Then $$ t[a_1,\dots,a_q]=t[b_1,\dots,b_r]. $$ \item Let $\varphi$ be a~formula, let all its variables, free and bound, belong to the set $x_1,\dots,x_p$, and let $a_1,\dots,a_q$ and $b_1,\dots,b_r$ be two sequences of elements such that $p\le q$, $p\le r$, and $a_i=b_i$ whenever $x_i$ is a~free variable in the formula~$\varphi$. Then $$ \mathcal U\models \varphi [a_1,\dots,a_q]\quad \text{if and only if}\quad \mathcal U\models \varphi [b_1,\dots,b_r]. $$ \end{enumerate} \end{proposition} This proposition allows us to give the following definition.
Let $\varphi(x_1,\dots,x_p)$ be a~formula, and let all its variables, free and bound, be contained in the set $x_1,\dots,x_q$, where $p\le q$. Let $a_1,\dots, a_p$ be a~sequence of elements of the set~$A$. We shall say that \emph{$\varphi$~is true in~$\mathcal U$ on $a_1,\dots,a_p$}, $$ \mathcal U\models \varphi[a_1,\dots,a_p] $$ if $\varphi$ is true in~$\mathcal U$ on $a_1,\dots,a_p,\dots,a_q$ with some (or, equivalently, any) sequence $a_{p+1},\dots,a_q$. Let $\varphi$ be a~sentence, and let all its bound variables be contained in the set $x_1,\dots,x_q$. We shall say that \emph{$\varphi$~is true in the model~$\mathcal U$} (notation: $\mathcal U\models \varphi$) if $\varphi$~is true in~$\mathcal U$ on some (equivalently, any) sequence $a_1,\dots,a_q$. This last phrase is equivalent to each of the following phrases: \begin{align*} & \text{$\varphi$ holds in $\mathcal U$;}\\ & \text{$\mathcal U$ satisfies $\varphi$;}\\ & \text{$\mathcal U$ is a~model of $\varphi$.} \end{align*} In the case where $\varphi$ is not true in~$\mathcal U$, we say that $\varphi$ is \emph{false} in~$\mathcal U$, or that $\varphi$ \emph{does not hold} in~$\mathcal U$, or that $\mathcal U$~\emph{is a~model of the sentence} $\neg \varphi$. If we have a~set~$\Sigma$ of sentences, we say that $\mathcal U$ is a~model of this set if $\mathcal U$ is a~model of every sentence $\sigma\in \Sigma$. It is useful to denote this concept by $\mathcal U\models \Sigma$. As we have said above, a~\emph{theory}~$T$ of the language~$\mathcal L$ is a~collection of sentences of the language~$\mathcal L$. A~\emph{theory} of a~model~$\mathcal U$ (of the language~$\mathcal L$) is the set of all sentences which hold in~$\mathcal U$. Two models $\mathcal U$~and~$\mathcal V$ for~$\mathcal L$ are called \emph{elementarily equivalent} if every sentence that is true in~$\mathcal U$ is true in~$\mathcal V$, and vice versa. We express this relationship between models by~$\equiv$. It is easy to see that $\equiv$ is indeed an equivalence relation. We can note that two models are elementarily equivalent if and only if their theories coincide. Any two isomorphic models of the same language are elementarily equivalent. If two models of the same language are elementarily equivalent and one of them is finite, then these models are isomorphic. If models are infinite and elementarily equivalent, they are not necessarily isomorphic. For example, the field~$\mathbb C$ of all complex numbers and the field $\bar{\mathbb Q}$ of all algebraic numbers are elementarily equivalent, but not isomorphic. Together with first order languages we need to consider second order languages, in which we can also quantify predicates, i.e., use predicate variables along with object variables. Such languages will be described in Sec.~\ref{ss1.4}. \subsection{Second Order Languages and Models}\label{ss1.4} Now we shall introduce all notions, similar to the notions of Secs.\ \ref{ss1.1}~and~\ref{ss1.3}, for second order languages and models. A~\emph{second order language}~$\mathcal L_2$ is a~collection of symbols, consisting of (1)~parentheses ${(}$,~${)}$; (2)~connectives $\land$~(``and'') and $\neg$~(``not''); (3)~the quantifier $\forall$~(for all); (4)~the binary relation symbol ${=}$~(identity); (5)~a~countable set of object variables~$x_i$; (6)~a~countable set of predicate variables~$P_i^l$; (7)~a~finite or countable set of relation symbols~$Q_i^n$ ($n\ge 1$); (8)~a~finite or countable set of function symbols~$F_i^n$ ($n\ge 1$); (9)~a~finite or countable set of constant symbols~$c_i$.
\emph{Terms} of the language $\mathcal L_2$ are defined as follows: \begin{enumerate} \item a~variable is a~term; \item a~constant symbol is a~term; \item if $F^n$ is an $n$-place function symbol and $t_1,\dots,t_{n}$ are terms, then $F^n(t_1,\dots,t_{n})$ is a~term; \item a~symbol-string is a~term only if it can be shown to be a~term by a~finite number of applications of (1)--(3). \end{enumerate} Therefore terms of the language~$\mathcal L_2$ coincide with terms of the language~$\mathcal L$. \emph{Elementary formulas} of the language~$\mathcal L_2$ are symbol-strings of the form given below: \begin{enumerate} \item if $t_1$~and~$t_2$ are terms of the language~$\mathcal L_2$, then $t_1=t_2$ is an elementary formula; \item if $P^l$ is a~predicate variable and $t_1,\dots,t_l$ are terms, then the symbol-string $P^l(t_1,\dots,t_l)$ is an elementary formula; \item if $Q^n$ is an $n$-place relation symbol, and $t_1,\dots,t_{n}$ are terms, then the symbol-string $Q^n(t_1,\dots,t_{n})$ is an elementary formula. \end{enumerate} Therefore elementary formulas of the second order group language have the form $x_i=x_j$, $x_i=x_j\cdot x_k$, and $P^l(x_{i_1},\dots,x_{i_l})$, where $l\ge 1$. \emph{Formulas} of the language~$\mathcal L_2$ are defined as follows: \begin{enumerate} \item an elementary formula is a~formula; \item if $\varphi$ and $\psi$ are formulas and $x$~is an object variable, then $(\neg \varphi)$, $(\varphi\land \psi)$, and $(\forall x \, \varphi)$ are formulas; \item if $P^l$ is a~predicate variable and $\varphi$ is a~formula, then the symbol-string $(\forall P^l(v_1,\dots,v_l)\varphi)$ is a~formula; \item a~symbol-string is a~formula only if it can be shown to be a~formula by a~finite number of applications of (1)--(3). \end{enumerate} Let us introduce the following abbreviations: \begin{itemize} \item[] $\exists P^l(v_1,\dots,v_l)\, \varphi$ is an abbreviation for $\neg (\forall P^l(v_1,\dots,v_l)\, (\neg \varphi))$; \item[] $\forall P^{l_1}_1(v_1,\dots,v_{l_1})\dots \forall P_n^{l_n}(v_1,\dots,v_{l_n})\, \varphi$ is an abbreviation for $$ (\forall P^{l_1}_1(v_1,\dots,v_{l_1}))\dots (\forall P_n^{l_n}(v_1,\dots,v_{l_n}))\, \varphi; $$ \item[] $\exists P^{l_1}_1(v_1,\dots,v_{l_1})\dots \exists P_n^{l_n}(v_1,\dots,v_{l_n})\, \varphi$ is an abbreviation for $$ (\exists P^{l_1}_1(v_1,\dots,v_{l_1}))\dots (\exists P_n^{l_n}(v_1,\dots,v_{l_n}))\, \varphi. $$ \end{itemize} Introduce the notions of \emph{free} and \emph{bound} occurrence of a~predicate variable in a~formula of the language~$\mathcal L_2$. \begin{enumerate} \item All occurrences of all predicate variables in elementary formulas are free occurrences. \item Every free (bound) occurrence of a~variable~$P^l$ in a~formula~$\varphi$ is a~free (bound) occurrence of a~variable~$P^l$ in the formulas $(\neg \varphi)$, $(\varphi\land \psi)$, and $(\psi \land \varphi)$. \item For any occurrence of a~variable~$P^l$ in a~formula~$\varphi$, the occurrence of the variable~$P^l$ in the formula $\forall P^l(v_1,\dots,v_l)\, \varphi$ is bound. If an occurrence of a~variable~$P_1^l$ in a~formula~$\varphi$ is free (bound), then the occurrences of~$P_1^l$ in the formulas $\forall x\, \varphi$ and $\forall P_2^m (v_1,\dots,v_m)\, \varphi$ are free (bound). \end{enumerate} As in Sec.~\ref{ss1.1}, any formula such that all its \emph{free} object and predicate variables are among the set $\{ x_1,\dots,x_n, P_1^{l_1},\dots,P_k^{l_k}\}$ will be denoted by $\varphi(x_1,\dots,x_n,P_1^{l_1},\dots,P_k^{l_k})$. 
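To illustrate the expressive power of predicate variables, consider a~second order language with a~single binary relation symbol~$Q^2$, and let us read $Q^2(x,y)$ as $x\le y$. Then the sentence
$$
\forall P^1(v_1)\, (\exists x\, P^1(x)\Rightarrow \exists x\, (P^1(x) \logic\land \forall y\, (P^1(y)\Rightarrow Q^2(x,y))))
$$
states that every nonempty subset of the universe has the smallest element with respect to~$\le$ (cf.\ the definition of a~well-ordered class above). This property of a~model cannot be expressed by any first order sentence of the same language, which is one of the reasons for passing to second order languages.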
To our five purely logical axioms from Sec.~\ref{ss1.1} we shall add the sixth purely logical axiom: \begin{enumerate} \setcounter{enumi}{\value{lastla}} \renewcommand{\labelenumi}{\theenumi.} \item $(\forall P^n(v_1,\dots,v_n)\, (\psi\Rightarrow \varphi)\Rightarrow (\psi\Rightarrow (\forall P^n(v_1,\dots,v_n)\, \varphi)))$ if $\psi$ does not contain any free occurrences of the variable~$P^n$. \end{enumerate} To the identity axioms we add the fourth identity axiom: \begin{enumerate} \setcounter{enumi}{\value{lastar}} \renewcommand{\labelenumi}{\theenumi.} \item $\forall P^n(v_1,\dots,v_n)\, (y=z\Rightarrow (P^n(x_1,\dots,x_{i-1},y,x_{i+1},\dots,x_n)\Leftrightarrow P^n (x_1, \dots, x_{i-1},z,x_{i+1},\dots,x_n)))$. \end{enumerate} The rule of generalization can be changed to ``from~$\varphi$ infer $\forall x\, \varphi$ and $\forall P^n(v_1,\dots,v_n)\, \varphi$.'' A \emph{model of a~second order language~$\mathcal L_2$} (see Sec.~\ref{ss1.3}) is a~pair~$\mathcal U=\langle A,I\rangle$ consisting of a~universe~$A$ (i.e., some class or set of the theory NBG) and some correspondence~$I$ that assigns to every relation symbol~$Q^n$ some $n$-place relation in~$A$, to every function symbol~$F^n$ some $n$-place function in~$A$, and to every constant symbol~$c$ some element of~$A$. Now we shall give a~definition of satisfaction. Let $\varphi$ be any formula of the language~$\mathcal L_2$ such that all its free and bound variables are among $x_1,\dots,x_q,P_1^{l_1},\dots,P_s^{l_s}$, and let $a_1,\dots,a_q, b_1^{l_1},\dots,b_s^{l_s}$ be any sequence, where $a_1,\dots,a_q$ are elements of the set~$A$, $b_i^{l_i} \subset A^{l_i}$. We define the predicate $$ \text{\emph{$\varphi$ is satisfied by the sequence $a_1,\dots,a_q,b_1^{l_1},\dots,b_s^{l_s}$ in the model~$\mathcal U$}}. $$ 1. The value of a~term $t(x_1,\dots,x_q,P_1^{l_1},\dots,P_s^{l_s})$ at $a_1,\dots,a_q,\allowbreak b_1^{l_1},\dots,b_s^{l_s}$ is defined as follows (we let $t[a_1,\dots,a_q,b_1^{l_1},\dots,b_s^{l_s}]$ denote this value): \begin{enumerate} \item if $t$ is a~variable $x_i$, then $t[a_1,\dots,a_q,b_1^{l_1},\dots,b_s^{l_s}]=a_i$; \item if $t$ is a~constant symbol~$c$, then $t[a_1,\dots,a_q,b_1^{l_1},\dots,b_s^{l_s}]=I(c)$; \item if $t$ is $F^m(t_1,\dots,t_m)$, where $t_1(x_1,\dots,x_q,P_1^{l_1},\dots,P_s^{l_s}),\dots,\allowbreak t_m(x_1,\dots,x_q, P_1^{l_1},\dots,P_s^{l_s})$ are terms, then $t[a_1,\dots,a_q,b_1^{l_1},\dots,b_s^{l_s}]= I(F^m)(\langle t_1[a_1,\dots,a_q,b_1^{l_1},\dots,b_s^{l_s}],\dots, t_m[a_1,\dots,a_q,b_1^{l_1},\dots,b_s^{l_s}]\rangle)$. \end{enumerate} 2. \begin{enumerate} \item Suppose that $\varphi(x_1,\dots,x_q,P_1^{l_1},\dots,P_s^{l_s})$ is an elementary formula $t_1=t_2$, where $t_1(x_1,\dots,x_q,\allowbreak P_1^{l_1},\dots,P_s^{l_s})$ and $t_2(x_1,\dots,x_q,P_1^{l_1},\dots,P_s^{l_s})$ are terms. Then $a_1,\dots,a_q,b_1^{l_1},\dots,b_s^{l_s}$ satisfies $\varphi$ if and only if $$ t_1[a_1,\dots,a_q,b_1^{l_1},\dots,b_s^{l_s}]= t_2[a_1,\dots,a_q,b_1^{l_1},\dots,b_s^{l_s}]. $$ \item Suppose that $\varphi(x_1,\dots,x_q,P_1^{l_1},\dots,P_s^{l_s})$ is an elementary formula $Q^n(t_1,\dots,t_n)$, where $Q^n$ is an $n$-place relation symbol and $t_1(x_1,\dots,x_q,P_1^{l_1},\dots,P_s^{l_s}),\dots,\allowbreak t_n(x_1,\dots,x_q,P_1^{l_1},\dots,P_s^{l_s})$ are terms. Then $a_1,\dots,a_q,b_1^{l_1},\dots,b_s^{l_s}$ satisfies $\varphi$ if and only if $$ \langle t_1[a_1,\dots, a_q,b_1^{l_1},\dots,b_s^{l_s}],\dots, t_n[a_1,\dots,a_q,b_1^{l_1},\dots,b_s^{l_s}]\rangle \in I(Q^n).
$$ \item\sloppy Suppose that $\varphi(x_1,\dots,x_q,\allowbreak P_1^{l_1},\dots,P_s^{l_s})$ is an elementary formula $P_i^{l_i}(t_1,\dots,t_{l_i})$, where $t_1(x_1,\dots,x_q,\allowbreak P_1^{l_1},\dots,P_s^{l_s}),\dots,\allowbreak t_{l_i}(x_1,\dots,x_q,P_1^{l_1},\dots,P_s^{l_s})$ are terms. Then $a_1,\dots,a_q,b_1^{l_1},\dots,b_s^{l_s}$ satisfies $\varphi$ if and only if $$ \langle t_1[a_1,\dots, a_q,b_1^{l_1},\dots,b_s^{l_s}],\dots, t_{l_i}[a_1,\dots,a_q,b_1^{l_1},\dots,b_s^{l_s}]\rangle \in b_i^{l_i}. $$ \end{enumerate} 3. Now suppose that $\varphi$ is any formula such that all its free and bound variables are among $x_1,\dots,x_q,\allowbreak P_1^{l_1},\dots,P_s^{l_s}$. \begin{enumerate} \item If $\varphi$ is $\theta_1\land \theta_2$, then \begin{multline*} \mathcal U\vDash \varphi[a_1,\dots,a_q,b_1^{l_1},\dots,b_s^{l_s}] \text{ if and only if}\\[-\jot] \mathcal U\vDash \theta_1 [a_1,\dots,a_q,b_1^{l_1},\dots,b_s^{l_s}] \text{ and }\mathcal U\vDash \theta_2 [a_1,\dots, a_q,b_1^{l_1},\dots,b_s^{l_s}]. \end{multline*} \item If $\varphi$ is $\neg \theta$, then $$ \mathcal U\vDash \varphi[a_1,\dots,a_q,b_1^{l_1},\dots,b_s^{l_s}] \text{ if and only if it is false that } \mathcal U\vDash \theta [a_1,\dots,a_q,b_1^{l_1},\dots,b_s^{l_s}]. $$ \item If $\varphi$ is $\forall x_i\, \psi$, where $i\le q$, then \begin{multline*} \mathcal U\vDash \varphi[a_1,\dots,a_q,b_1^{l_1},\dots,b_s^{l_s}] \text{ if and only if}\\[-\jot] \mathcal U\vDash \psi [a_1,\dots,a_{i-1},a, a_{i+1},\dots,a_q,b_1^{l_1},\dots,b_s^{l_s}] \text{ for any } a\in A. \end{multline*} \item If $\varphi$ is $\forall P_i^{l_i}(v_1,\dots,v_{l_i})\, \psi$, where $i\le s$, then \begin{multline*} \mathcal U\vDash \varphi[a_1,\dots,a_q,b_1^{l_1},\dots,b_s^{l_s}] \text{ if and only if}\\[-\jot] \mathcal U\vDash \psi [a_1,\dots,a_q,b_1^{l_1},\dots,b_{i-1}^{l_{i-1}}, b,b_{i+1}^{l_{i+1}},\dots,b_s^{l_s}] \text{ for any } b\subset A^{l_i}. \end{multline*} \end{enumerate} The proposition that $$ \mathcal U\models \varphi(x_1,\dots,x_p,P_1^{l_1},\dots,P_t^{l_t}) [a_1,\dots,a_q,b_1^{l_1},\dots,b_s^{l_s}] $$ depends only on the values $a_1,\dots,a_p,b_1^{l_1},\dots,b_t^{l_t}$, where $p\le q$, $t\le s$, is completely analogous to Proposition~\ref{prop2}. All other definitions are also similar to definitions from Sec.~\ref{ss1.3}. We shall say that two models of the second order language~$\mathcal L_2$ are equivalent in~$\mathcal L_2$ if for every sentence of this language the sentence is true in one model if and only if it is true in the other model. \section{Basic Concepts about Abelian Groups} \subsection{Preliminaries}\label{ss2.1} The word ``group'' will mean, throughout, an additively written Abelian (i.e., commutative) group. That is, by group we shall understand a~set~$A$ such that with every pair of elements $a,b\in A$ there is associated an element $a+b$ of~$A$, which is called the \emph{sum} of elements $a$~and~$b$; there is an element $0\in A$, the \emph{zero}, such that $a+0=a$ for every $a\in A$; for each $a\in A$ there is an $x\in A$ with the property $a+x=0$, this $x=-a$ is called the \emph{inverse} (\emph{opposite}) to~$a$; finally, we have both commutative and associative laws: $a+b=b+a$, $(a+b)+c=a+(b+c)$ for every $a,b,c\in A$. A~sum $a+\dots+a$ ($n$~times) is abbreviated as $na$, and $-a-\dots-a$ ($n$~times) as~$-na$. By the \emph{order} of a~group~$A$ we mean the power~$|A|$ of the set of its elements. If the power~$|A|$ is a~finite (countable) cardinal, then the group~$A$ is called \emph{finite} (\emph{countable}).
A~subset~$B$ of~$A$ is a~\emph{subgroup} if $0\in B$ and $\forall b_1,b_2\in B\, (b_1-b_2\in B)$. If $B$ is a~subgroup consisting of the zero alone or of all elements of~$A$, then $B$ is a~\emph{trivial} subgroup of~$A$; but a~subgroup of~$A$ that is different from~$A$ is called a~\emph{proper} subgroup of~$A$. We shall write $B\lhd A$ to indicate that $B$ is a~subgroup of~$A$. Let $B\lhd A$ and $a\in A$. The set $a+B=\{ a+b\mid b\in B\}$ is said to be a~\emph{coset of~$A$ modulo~$B$}. An element of the coset is called a~\emph{representative} of this coset. A~set consisting of representatives, one for each coset of~$A$ modulo~$B$, is called the \emph{complete system of representatives of cosets modulo~$B$}. Its power is called the \emph{index} of~$B$ in~$A$ and is denoted by $|A:B|$. The cosets of~$A$ modulo~$B$ form a~group $A/B$ known as the \emph{quotient group} (of~$A$ with respect to~$B$). If $S$ is any subset in~$A$, then by $\langle S\rangle$ we denote the subgroup of~$A$, \emph{generated} by~$S$, i.e., the intersection of all subgroups of~$A$ containing~$S$. In particular, if $S$ consists of the elements $a_i$ ($i\in I$), we also write $\langle S\rangle=\langle \dots, a_i,\dots \rangle_{i\in I}$ or simply $\langle S\rangle =\langle a_i\rangle_{i\in I}$. The subgroup $\langle S\rangle$ consists exactly of all finite linear combinations of the elements of~$S$, i.e., of all sums $n_1a_1+\dots+n_ka_k$ with $a_i\in S$, $n_i$ integers, and $k$ an arbitrary nonnegative integer. If $S$ is empty, then we put $\langle S\rangle=0$. In the case $\langle S\rangle=A$ we say $S$ is a~\emph{generating system} of~$A$ and the elements of~$S$ are \emph{generators} of~$A$. If there is a~finite generating system, then $A$~is said to be a~\emph{finitely generated} group. If $B$ and $C$ are two subgroups of~$A$, then the subgroup $\langle B,C\rangle$ generated by them consists of all elements of~$A$ of the form $b+c$, where $b\in B$ and $c\in C$. Therefore we shall write $\langle B,C\rangle=B+C$. Similarly, for some set of subgroups~$B_i$ of~$A$ we shall write $B=\langle B_i\rangle_{i\in I}=\sum\limits_{i\in I} B_i$. The group $\langle a\rangle$ is the \emph{cyclic} group generated by~$a$. The order of $\langle a\rangle$ is also called the \emph{order} of~$a$ (notation: $o(a)$). If every element of~$A$ is of finite order, then $A$ is called a~\emph{torsion} or \emph{periodic} group. If all elements of~$A$, except~$0$, are of infinite order, then $A$ is \emph{torsion free}. \emph{Mixed groups} contain both nonzero elements of finite order and elements of infinite order. A~\emph{primary group} or \emph{$p$-group} is defined to be a~group, the orders of whose elements are powers of a~fixed prime~$p$. Given $a\in A$, the greatest nonnegative integer~$r$ for which $p^rx=a$ has a~solution $x\in A$ is called the \emph{$p$-height} $h_p(a)$ of~$a$. If $p^r x=a$ is solvable whatever $r$ is, then $a$ is of \emph{infinite $p$-height}, $h_{p}(a)=\infty$. If it is completely clear from the context which prime~$p$ is meant, then we call $h_p(a)$ simply the \emph{height} of~$a$ and write $h(a)$. For a~group~$A$ and an integer $n> 0$, let $nA=\{ na\mid a\in A\}$ and $A[n]=\{ a\mid a\in A,\ na=0\}$. A~map $\alpha\colon A\to B$ is called a~\emph{homomorphism} (of~$A$ into~$B$) if $$ \forall a_1,a_2\in A \quad \alpha(a_1+a_2)=\alpha a_1+\alpha a_2. $$ The \emph{kernel} of~$\alpha$ ($\mathop{\mathrm{Ker}}\nolimits \alpha \lhd A$) is the set of all elements $a\in A$ such that $\alpha a=0$.
The \emph{image} of~$\alpha$ ($\mathop{\mathrm{Im}}\nolimits \alpha \lhd B$) consists of all $b\in B$ such that $\alpha a=b$ for some $a\in A$. If $\mathop{\mathrm{Im}}\nolimits\alpha =B$, then $\alpha$ is called a~\emph{surjective} homomorphism, or an \emph{epimorphism}. If $\mathop{\mathrm{Ker}}\nolimits\alpha=0$, then $\alpha$ is called an \emph{injective} homomorphism, or a~\emph{monomorphism}. A~homomorphism that is injective and surjective simultaneously is called an \emph{isomorphism}. Now we consider the most important types of Abelian groups. Cyclic groups were defined above as groups $\langle a \rangle$ for some~$a$. Note that all subgroups of cyclic groups are likewise cyclic. For a~fixed prime~$p$, consider the $p^n$th complex roots of unity, $n\in \mathbb N \cup \{ 0\}$. They form an infinite multiplicative group; in accordance with our convention, we switch to the additive notation. This group, called a~\emph{quasicyclic} group or a~\emph{group of type~$p^\infty$} (notation: $\mathbb Z(p^\infty)$), can be defined as follows. It is generated by elements $c_1,c_2,\dots,c_n,\ldots$, such that $pc_1=0$, $pc_2=c_1,\dots,\allowbreak pc_{n+1}=c_n,\dots$. Here $o(c_n)=p^n$, and every element of $\mathbb Z(p^\infty)$ is a~multiple of some~$c_n$. It is clear that all the quasicyclic groups corresponding to the same prime~$p$ are isomorphic. Let $p$ be a~prime, and $Q_p$ be the ring of rational numbers whose denominators are prime to~$p$. The nonzero ideals of~$Q_p$ are principal ideals generated by~$p^k$ with $k=0,1,\ldots$. If one considers the ideals $(p^k)$ as a~fundamental system of neighborhoods of~$0$, then $Q_p$ becomes a~topological ring, and we may form the completion $Q_p^*$ of~$Q_p$ in this topology. $Q_p^*$~is again a~ring whose ideals are $(p^k)$ with $k=0,1,\ldots$, and which is complete in the topology defined by its ideals. The elements of $Q_p^*$ may be represented as follows: let $\{ t_0,t_1,\dots,t_{p-1}\}$ be a~complete set of representatives of the cosets of~$Q_p$ modulo~$pQ_p$ (e.g., $t_i=i$); then $\{ p^kt_0,p^kt_1,\dots,p^kt_{p-1}\}$ is a~complete set of representatives of the cosets of $p^kQ_p$ modulo $p^{k+1}Q_p$. Let $\pi \in Q_p^*$, and let $a_n\in Q_p$ be a~sequence tending to~$\pi$. According to the definition of a~fundamental sequence, almost all its elements (i.e., all with a~finite number of exceptions) belong to the same coset modulo $pQ_p$, say, to that represented by~$s_0$. Almost all differences $a_n-s_0$ belonging to $pQ_p$ belong to the same coset of the ring $pQ_p$ modulo $p^2Q_p$, say, to that represented by~$ps_1$. So proceeding, $\pi$~uniquely defines a~sequence $s_0,ps_1,\ldots$, and we associate with~$\pi$ the formal infinite series $s_0+s_1p+\ldots$. Its partial sums $b_n=s_0+s_1p+\dots+s_np^n$ form in $Q_p$ a~fundamental sequence which converges in $Q_p^*$ to~$\pi$, in view of $\pi-b_n\in p^{n+1}Q_p^*$. From the uniqueness of limits it follows that, in this way, different elements of $Q_p^*$ are associated with different series. Therefore we can identify the elements~$\pi$ of the ring~$Q_p^*$ with the formal series $s_0+s_1p+s_2p^2+\dots$ with coefficients from $\{ 0,1,\dots, p-1\}$ and write $$ \pi=s_0+s_1p+s_2p^2+\dots \quad (s_n=0,1,\dots,p-1). $$ The arising ring is a~\emph{commutative integral domain} (i.e., a~commutative ring without zero divisors) and is called the \emph{ring of $p$-adic integers}. \subsection{Direct Sums}\label{ss2.3} Let $B$ and~$C$ be two subgroups of~$A$ such that: \begin{enumerate} \item $B+C=A$; \item $B\cap C=0$. \end{enumerate} Then we call $A$ the \emph{direct sum} of its subgroups $B$~and~$C$ ($A=B\oplus C$).
If the condition~(2) is satisfied, then we say that the groups $B$~and~$C$ \emph{are disjoint}. If $B_i$ ($i\in I$) is a~family of subgroups of~$A$ such that \begin{enumerate} \item $\sum\limits_{i\in I} B_i=A$; \item $\forall i\in I\ B_i\cap \sum\limits_{j\ne i} B_j=0$, \end{enumerate} then we say that the group~$A$ is a~\emph{direct sum} of its subgroups~$B_i$, and write $A=\bigoplus\limits_{i\in I} B_i$, or $A=B_1\oplus \dots \oplus B_n$, if $I=\{ 1,\dots,n\}$. A~subgroup~$B$ of the group~$A$ is called a~\emph{direct summand} of~$A$ if there exists a~subgroup $C\lhd A$ such that $A=B\oplus C$. In this case, $C$ is called the \emph{complementary direct summand} or simply the \emph{complementation}. Two direct decompositions of~$A$, $A=\bigoplus\limits_i B_i$ and $A=\bigoplus\limits_j C_j$, are called \emph{isomorphic} if the components $B_i$~and~$C_j$ may be brought into a~one-to-one correspondence such that the corresponding components are isomorphic. If we have two groups $B$~and~$C$, then the set of all pairs $(b,c)$, where $b\in B$ and $c\in C$, forms a~group~$A$ if we set $(b_1,c_1)=(b_2,c_2)$ if and only if $b_1=b_2$ and $c_1=c_2$, and $(b_1,c_1)+(b_2,c_2)=(b_1+b_2,c_1+c_2)$. The correspondences $b\mapsto (b,0)$ and $c\mapsto (0,c)$ are isomorphisms between the groups $B$,~$C$ and the subgroups $B'=\{ (b,0)\mid b\in B\}$,~$C'=\{ (0,c)\mid c\in C\}$ of~$A$. We can write $A=B\oplus C$ and call $A$ the (\emph{external}) \emph{direct sum} of $B$~and~$C$. Let $B_i$ ($i\in I$) be a~set of groups. A~\emph{vector} $(\dots,b_i,\dots)$ over the groups~$B_i$ is a~family whose $i$th coordinate, for every $i\in I$, is an element $b_i\in B_i$. Equality and addition of vectors are defined coordinatewise. In this way, the set of all vectors becomes a~group~$C$, called the \emph{direct product} of the groups~$B_i$, $$ C=\prod_{i\in I} B_i. $$ The correspondence $\rho_i\colon b_i\mapsto (\dots,0,b_i,0,\dots)$, where $b_i$ stands on the $i$th place, and $0$ everywhere else, is an isomorphism of the group~$B_i$ with a~subgroup~$B_i'$ of~$C$. The groups~$B_i'$ ($i\in I$) generate in~$C$ the group~$A$ of all vectors $(\dots,b_i,\dots)$ with $b_i=0$ for almost all $i\in I$, and $A=\bigoplus\limits_{i\in I} B_i'$. The group~$A$ is also called the (\emph{external}) \emph{direct sum} of~$B_i$. If $A$ is a~group and $\varkappa$ is a~cardinal number, then by $\bigoplus\limits_{\varkappa} A$ we shall denote the direct sum of $\varkappa$~groups isomorphic to~$A$, and by $\smash[b]{\prod\limits_{\varkappa}}A=A^\varkappa$ we shall denote the direct product of $\varkappa$~such groups. \begin{proposition}[\cite{Fuks}]\label{p2.1} Every torsion group~$A$ is a~direct sum of $p$-groups~$A_p$ belonging to different primes~$p$. The groups~$A_p$ are uniquely determined by~$A$. \end{proposition} The subgroups~$A_p$ are called the $p$-\emph{components} of~$A$. By virtue of Proposition~\ref{p2.1}, the theory of torsion groups is essentially reduced to that of primary groups. \begin{proposition}[\cite{Fuks}]\label{p2.2} If there exists a~projection~$\pi$ of the group~$A$ onto its subgroup~$B$, then $B$ is a~direct summand in~$A$. \end{proposition} \subsection{Direct Sums of Cyclic Groups}\label{ss2.4} \begin{proposition}[\cite{Fuks}]\label{p2.3} For a~group~$A$ the following conditions are equivalent\textup{:} \begin{enumerate} \item $A$ is a~finitely generated group\textup{;} \item $A$ is a~direct sum of a~finite number of cyclic groups.
\end{enumerate} \end{proposition} A~system $\{ a_1,\dots,a_k\}$ of nonzero elements of a~group~$A$ is called \emph{linearly independent} if $$ n_1a_1+\dots+n_ka_k=0\quad (n_i\in \mathbb Z) $$ implies $$ n_1a_1=\dots=n_ka_k=0. $$ A~system of elements is \emph{dependent} if it is not independent. An infinite system $L=\{ a_i\}_{i\in I}$ of elements of the group~$A$ is called \emph{independent} if every finite subsystem of~$L$ is independent. An independent system~$M$ of~$A$ is \emph{maximal} if there is no independent system in~$A$ containing~$M$ properly. By the \emph{rank} $r(A)$ of a~group~$A$ we mean the cardinality of a~maximal independent system containing only elements of infinite and prime power orders. \begin{proposition}\label{p2.4} The rank $r(A)$ of a~group~$A$ is an invariant of this group. \end{proposition} \begin{theorem}[\cite{Prufer2,Baer1}]\label{t2.1} A~bounded group is a~direct sum of cyclic groups. \end{theorem} \begin{theorem}[\cite{Fuks}]\label{t2.2} Any two decompositions of a~group into direct sums of cyclic groups of infinite and prime power orders are isomorphic. \end{theorem} \begin{theorem}[\cite{Kulikov2}]\label{t2.3} Subgroups of direct sums of cyclic groups are again direct sums of cyclic groups. \end{theorem} \subsection{Divisible Groups}\label{ss2.5} We shall say that an element~$a$ of~$A$ is \emph{divisible by} a~positive integer~$n$ ($n\mid a$) if there is an element $x\in A$ such that $nx=a$. A~group~$D$ is called \emph{divisible} if $n\mid a$ for all $a\in D$ and all natural~$n$. The groups $\mathbb Q$ and $\mathbb Z(p^\infty)$ are examples of divisible groups. \begin{theorem}\label{t2.4} Any divisible group~$D$ is a~direct sum of quasicyclic groups and full rational groups~$\mathbb Q$. The powers of the sets of components $\mathbb Z(p^\infty)$ \textup{(}for every~$p$\textup{)} and~$\mathbb Q$ form a~complete and independent system of invariants for the group~$D$. \end{theorem} \begin{corol} Any divisible $p$-group $D$ is a~direct sum of the groups $\mathbb Z(p^\infty)$. The power of the set of components $\mathbb Z(p^\infty)$ is the only invariant of~$D$. \end{corol} \begin{theorem}[\cite{Fuks}]\label{t2.5} For a~group~$D$ the following conditions are equivalent\textup{:} \begin{enumerate} \item $D$ is a~divisible group\textup{;} \item $D$ is a~direct summand in every group containing~$D$. \end{enumerate} \end{theorem} \subsection{Pure Subgroups}\label{ss2.6} A~subgroup $G$ of~$A$ is called \emph{pure} if the equation $nx=g\in G$ is solvable in~$G$ whenever it is solvable in the whole group~$A$. In other words, $G$~is pure if and only if $$ \forall n\in \mathbb Z\quad nG=G\cap nA. $$ \begin{proposition}[\cite{Sele6}]\label{p2.5} Assume that a~subgroup~$B$ of~$A$ is a~direct sum of cyclic groups of the same order~$p^k$. Then the following statements are equivalent\textup{:} \begin{enumerate} \item $B$ is a~pure subgroup of~$A$\textup{;} \item $B$ is a~direct summand of~$A$. \end{enumerate} \end{proposition} \begin{corol} Every element of order~$p$ and of finite height can be embedded in a~finite cyclic direct summand of the group. \end{corol} \begin{theorem}[\cite{Kulikov1}]\label{t2.6} A~bounded pure subgroup is a~direct summand. \end{theorem} \begin{corollary}[\cite{Erdeli1}] A~$p$-subgroup~$B$ of a~group~$A$ can be embedded in a~bounded direct summand of~$A$ if and only if the heights of the nonzero elements of~$B$ \textup{(}relative to~$A$\textup{)} are bounded.
\end{corollary} \begin{corollary} An element~$a$ of prime power order belongs to a~finite direct summand of~$A$ if and only if $\langle a\rangle$ contains no elements of infinite height. \end{corollary} \subsection{Basic Subgroups}\label{ss2.7} A~subgroup~$B$ of a~group~$A$ is called a~$p$-\emph{basic subgroup} if it satisfies the following conditions: \begin{enumerate} \item $B$ is a~direct sum of cyclic $p$-groups and infinite cyclic groups; \item $B$ is pure in~$A$; \item $A/B$ is $p$-divisible. \end{enumerate} According to this definition, $B$ possesses a~basis which is said to be a~\emph{$p$-basis} of~$A$. Every group, for every prime~$p$, contains $p$-basic subgroups~\cite{Fuks9}. We now focus our attention on $p$-groups, where $p$-basic subgroups are particularly important. If $A$ is a~$p$-group and $q$ is a~prime different from~$p$, then evidently $A$ has only one $q$-basic subgroup, namely~$0$. Therefore, in $p$-groups we may refer to the $p$-basic subgroups simply as \emph{basic} subgroups, without danger of confusion. \begin{theorem}[\cite{Sele7}]\label{t2.7} Assume that $B$ is a~subgroup of a~$p$-group~$A$, $B=\bigoplus\limits_{n=1}^\infty B_n$, and $B_n$ is a~direct sum of groups $\mathbb Z(p^n)$. Then $B$~is a~basic subgroup of~$A$ if and only if for every integer $n> 0$ the subgroup $B_1\oplus \dots\oplus B_n$ is a~maximal $p^n$-bounded direct summand of~$A$. \end{theorem} \begin{theorem}[Baer, Boyer \cite{Bojer1}]\label{Fuks32.4} Suppose that $B$ is a~subgroup of a~$p$-group~$A$, $$ B=B_1\oplus B_2\oplus\dots\oplus B_n\oplus \dots, $$ where $$ B_n\cong \bigoplus_{\mu_n} \mathbb Z(p^n). $$ The subgroup $B$ is a~basic subgroup of~$A$ if and only if $$ A=B_1\oplus B_2\oplus \dots\oplus B_n\oplus (B_n^*+p^n A), $$ where $n\in \mathbb N$, $$ B_n^*=B_{n+1}\oplus B_{n+2}\oplus \dots. $$ \end{theorem} Since the group $B$ has a~basis and the factor group $A/B$ is a~direct sum of groups isomorphic to $\mathbb Z(p^\infty)$ (i.e., $A/B$ also has a~generating system which can be easily described), it is natural to combine these generating systems and to obtain one for~$A$. We write $$ B=\bigoplus_{i\in I} \langle a_i\rangle\ \ \text{and}\ \ A/B=\bigoplus_{j\in J} C_j^*,\ \ \text{where}\ \ C_j^*=\mathbb Z(p^\infty). $$ If the direct summand $C_j^*$ is generated by cosets $c_{j1}^*,\dots, c_{jn}^*,\dots$ modulo~$B$ with $pc_{j1}^*=0$ and $pc_{j,n+1}^*=c_{jn}^*$ ($n=1,2,\dots$), then, by the purity of~$B$ in~$A$, in the group~$A$ we can pick out $c_{jn}\in c_{jn}^*$ of the same order as~$c_{jn}^*$. Then we get the following set of relations: $$ pc_{j1}=0,\ \ pc_{j,n+1}-c_{jn}=b_{jn}\quad (n\ge 1,\ b_{jn}\in B), $$ where $b_{jn}$ must be of order at most~$p^n$, since $o(c_{jn})=p^n$. The system $\{ a_i,c_{jn}\}_{i\in I,\ j\in J,\ n\in \omega}$ will be called a~\emph{quasibasis} of~$A$. \begin{proposition}[\cite{Fuks2}]\label{p2.6} If $\{ a_i,c_{jn}\}$ is a~quasibasis of the $p$-group~$A$, then every $a\in A$ can be written in the following form: \begin{equation}\label{e2.1} a=s_1a_{i_1}+\dots+s_ma_{i_m}+t_1 c_{j_1n_1}+\dots+t_r c_{j_rn_r}, \end{equation} where $s_i$ and $t_j$ are integers, no $t_j$ is divisible by~$p$, and the indices $i_1,\dots,i_m$ as well as $j_1,\dots,j_r$ are distinct. The expression~\eqref{e2.1} is unique in the sense that $a$ uniquely defines the terms $s a_i$ and $tc_{jn}$. \end{proposition} \begin{theorem}[Kulikov \cite{Kulikov3}]\label{Fuks34.4s} If $B$ is a~basic subgroup of a~reduced $p$-group~$A$, then $$ |A|\le |B|^\omega.
$$ \end{theorem} The \emph{final rank} of a~basic subgroup~$B$ of a~$p$-group~$A$ is the infimum of the cardinals $r(p^nB)$. Note that if the rank of~$B$ is equal to~$\mu_1$ and the final rank of~$B$ is equal to~$\mu_2$, then $A=A_1\oplus A_2$, where the group~$A_1$ is bounded and has the rank~$\mu_1$, and a~basic subgroup of the group~$A_2$ has the rank~$\mu_2$ which coincides with its final rank. \begin{theorem}\label{4.endom} If two endomorphisms of a~reduced Abelian group coincide on one of its basic subgroups, then they are equal. \end{theorem} \subsection{Endomorphism Rings of Abelian Groups}\label{ss2.8} If we have an Abelian group~$A$, then its endomorphisms form a~ring with respect to the operations of addition and composition of homomorphisms. This ring will be denoted by $\mathop{\mathrm{End}}\nolimits(A)$. We need some facts about the ring $\mathop{\mathrm{End}}\nolimits(A)$. \begin{enumerate} \item There exists a~one-to-one correspondence between finite direct decompositions $$ A=A_1\oplus \dots\oplus A_n $$ of the group~$A$ and decompositions of the ring $\mathop{\mathrm{End}}\nolimits(A)$ in finite direct sums of left ideals $$ \mathop{\mathrm{End}}\nolimits(A)=L_1\oplus \dots \oplus L_n; $$ namely, if $A_i=e_iA$, where $e_1,\dots,e_n$ are mutually orthogonal idempotents, then $L_i =\mathop{\mathrm{End}}\nolimits(A)e_i$. \item An idempotent $e\ne 0$ is called primitive if it cannot be represented as a~sum of two nonzero orthogonal idempotents. If $e\ne 0$ is an idempotent of the ring $\mathop{\mathrm{End}}\nolimits(A)$, then $eA$ is an indecomposable direct summand of~$A$ if and only if $e$ is a~primitive idempotent. \item Let $A=B\oplus C$ and $A=B'\oplus C'$ be direct decompositions of the group~$A$, and let $e\colon A\to B$ and $e'\colon A\to B'$ be the corresponding projections. Then $B\cong B'$ if and only if there exist elements $\alpha,\beta\in \mathop{\mathrm{End}}\nolimits(A)$ such that $$ \alpha\beta=e\text{ and } \beta\alpha=e'. $$ \end{enumerate} \begin{theorem}[Baer \cite{Baer9}, Kaplansky \cite{Kaplans2}]\label{t2.8} If $A$~and~$C$ are torsion groups, and $\mathop{\mathrm{End}}\nolimits(A)\cong \mathop{\mathrm{End}}\nolimits(C)$, then $A\cong C$. \end{theorem} \begin{theorem}[Charles \cite{Sharl1}, Kaplansky \cite{Kaplans2}]\label{t2.9} The center of the endomorphism ring $\mathop{\mathrm{End}}\nolimits(A)$ of a~$p$-group~$A$ is the ring of $p$-adic integers or the residue class ring of the integers modulo~$p^k$, depending on whether $A$ is unbounded or bounded with $p^k$ as the least upper bound for the orders of its elements. \end{theorem} \section{Beautiful Linear Combinations}\label{sec3} In this section, we shall completely follow the paper~\cite{Shelah} of S.~Shelah. Suppose that we have some fixed Abelian $p$-group $A\cong \smash[b]{\bigoplus\limits_{\mu} \mathbb Z(p^l)}$, where $\mu$ is an infinite cardinal number and $\mathop{\mathrm{End}}\nolimits(A)$ is its endomorphism ring. For any $f\in \mathop{\mathrm{End}}\nolimits(A)$ let $\mathop{\mathrm{Rng}}\nolimits f$ be its range in~$A$ and $\mathop{\mathrm{Cl}}\nolimits B$ (or $\langle B\rangle$) be the closure of~$B\subset A$ in~$A$, i.e., the minimal subgroup in~$A$ containing~$B$. As usual, $\vec x$ denotes a~finite sequence of variables $\vec x=\langle x_1,\dots, x_n\rangle$. A~linear combination $k_1 x_1+\dots +k_n x_n$, where $k_i\in \mathbb Z$, will also be denoted by $\tau (x_1,\dots,x_n)$, or $\tau(\vec x)$.
Such a~combination will be called \emph{reduced} if all~$k_i$ are distinct from zero. Let $\{ a_i\mid i\in I\}$ be some independent subset of elements of order~$p^l$ in the group~$A$. It is clear that every function $h\colon \{ a_i\mid i\in I\}\to A$ has a~unique extension $$ \tilde h\in \mathop{\mathrm{Hom}}\nolimits (\mathop{\mathrm{Cl}}\nolimits\{ a_i\mid i\in I\},A). $$ Let $B$ be some set and $h$ be a~function from~$B$ into~$B$. For every $x\in B$, we define its \emph{depth} ($\mathop{\mathrm{Dp}}\nolimits(x)=\mathop{\mathrm{Dp}}\nolimits(x,h)$) as the least ordinal number (or infinity) satisfying the following conditions: \begin{enumerate} \item $\mathop{\mathrm{Dp}}\nolimits(x)\ge 0$ if and only if $x\in B$; \item $\mathop{\mathrm{Dp}}\nolimits(x)\ge \delta$ if and only if $\mathop{\mathrm{Dp}}\nolimits(x)\ge \alpha$ for every $\alpha\in \delta$, where $\delta$ is a~limit ordinal number; \item $\mathop{\mathrm{Dp}}\nolimits(x)\ge \alpha+1$ if and only if for some $y\in B$ we have $h(y)=x$ and $\mathop{\mathrm{Dp}}\nolimits(y)\ge \alpha$. \end{enumerate} \begin{lemma}\label{shel_l1_1} Let $\{ a_i\mid i\in I\}\subset A$ be an independent set consisting of elements of order~$p^l$, and $h$ be a~function from~$I$ into~$I$. Define~$\tilde h$ by the formula $$ \tilde h (k_1 a_{t_1}+\dots+k_n a_{t_n})= k_1 a_{h(t_1)}+\dots +k_n a_{h(t_n)}. $$ Then \begin{enumerate} \renewcommand{\theenumi}{\alph{enumi}} \item $\tilde h\in \mathop{\mathrm{End}}\nolimits(\mathop{\mathrm{Cl}}\nolimits\{ a_i\mid i\in I\})$\textup{;} \item $\mathop{\mathrm{Dp}}\nolimits(k_1 a_{t_1}+\dots +k_n a_{t_n},\tilde h)\ge \min\limits_{i\in \{ 1,\dots,n\}}\mathop{\mathrm{Dp}}\nolimits(t_i,h)$\textup{;} \item if in~\textup{(b)} the linear combination $k_1 a_{t_1}+\dots+k_n a_{t_n}$ is reduced and $t_i$ are distinct, then the equality holds. \end{enumerate} \end{lemma} \begin{proof} (a) Immediate. (b) We prove by induction on ordinal numbers~$\alpha$ that $$ \min_{i\in \{ 1,\dots,n\}} \mathop{\mathrm{Dp}}\nolimits(t_i,h)\ge \alpha \Rightarrow \mathop{\mathrm{Dp}}\nolimits(k_1 a_{t_1}+\dots+k_n a_{t_n},\tilde h)\ge \alpha. $$ This suffices for the proof of~(b). If $\alpha=0$ or $\alpha$ is a~limit ordinal, then this is trivial. For $\alpha=\beta+1$, by the assumption and definition of depth, there are $s_i\in I$ such that $h(s_i)=t_i$ and $\mathop{\mathrm{Dp}}\nolimits(s_i,h)\ge \beta$. Then $\min\limits_i \mathop{\mathrm{Dp}}\nolimits(s_i,h)\ge \beta$, whence by the induction hypothesis $$ \mathop{\mathrm{Dp}}\nolimits(k_1 a_{s_1}+\dots+k_n a_{s_n},\tilde h)\ge \beta, $$ but $$ \tilde h(k_1 a_{s_1}+\dots+k_n a_{s_n})= k_1 a_{t_1}+\dots+k_na_{t_n}, $$ whence $$ \mathop{\mathrm{Dp}}\nolimits (k_1 a_{t_1}+\dots+k_n a_{t_n},\tilde h)\ge \beta+1=\alpha. $$ (c) It suffices to prove by induction on~$\alpha$ that $$ \mathop{\mathrm{Dp}}\nolimits(k_1 a_{t_1}+\dots+k_n a_{t_n},\tilde h)\ge \alpha \Rightarrow \mathop{\mathrm{Dp}}\nolimits (t_i,h)\ge \alpha $$ for all~$i$. If $\alpha=0$ or $\alpha$ is a~limit ordinal, then this is trivial. For $\alpha =\beta+1$, by the definition of depth, there are a~reduced linear combination $l_1 a_{s_1}+\dots+l_m a_{s_m}$ and distinct~$s_i$ such that \begin{enumerate} \item $\tilde h(l_1 a_{s_1} +\dots +l_m a_{s_m})= k_1 a_{t_1}+\dots+ k_n a_{t_n}$ and \item $\mathop{\mathrm{Dp}}\nolimits(l_1 a_{s_1}+\dots +l_m a_{s_m},\tilde h)\ge\beta$. \end{enumerate} By (2) and the induction hypothesis, for $i=1,\dots,m$ we have $\mathop{\mathrm{Dp}}\nolimits (s_i,h)\ge \beta$.
By (1) and the definition of~$\tilde h$, $$ l_1 a_{h(s_1)}+\dots +l_m a_{h(s_m)}= k_1 a_{t_1}+\dots+k_n a_{t_n}. $$ As the linear combination $k_1 a_{t_1}+\dots+k_n a_{t_n}$ is reduced and the~$t_i$ are distinct, $$ \{ t_1,\dots,t_n\}\subseteq \{ h(s_1),\dots,h(s_m)\}. $$ Thus, for each $i=1,\dots,n$ there is $j_i$, $1\le j_i\le m$, such that $t_i=h(s_{j_i})$. Hence $$ \mathop{\mathrm{Dp}}\nolimits(t_i,h)\ge \mathop{\mathrm{Dp}}\nolimits(s_{j_i},h)+1\ge \beta+1=\alpha.\qed $$ \renewcommand{\qed}{} \end{proof} \begin{lemma}\label{shel_l1_2} Let $h_1$ and~$h_2$ be commuting functions from~$B$ into~$B$. Then for any $x\in B$ we have $$ \mathop{\mathrm{Dp}}\nolimits(x,h_1)\le \mathop{\mathrm{Dp}}\nolimits(h_2(x),h_1). $$ \end{lemma} \begin{proof} We prove by induction on~$\alpha$ that $$ \mathop{\mathrm{Dp}}\nolimits (x,h_1)\ge \alpha\Rightarrow \mathop{\mathrm{Dp}}\nolimits(h_2(x),h_1)\ge \alpha. $$ If $\alpha=0$ or $\alpha$ is a~limit ordinal, this is immediate. Now we consider $\alpha=\beta+1$. If $\mathop{\mathrm{Dp}}\nolimits(x,h_1)\ge \beta+1$, then for some $y\in B$ we have $h_1(y)=x$ and $\mathop{\mathrm{Dp}}\nolimits(y,h_1)\ge \beta$. Thus $h_1(h_2(y))=h_1\circ h_2(y)=h_2\circ h_1(y)=h_2(x)$, and by the induction hypothesis $\mathop{\mathrm{Dp}}\nolimits(h_2(y),h_1)\ge \beta$ (since $\mathop{\mathrm{Dp}}\nolimits(y,h_1)\ge \beta$), whence $\mathop{\mathrm{Dp}}\nolimits(h_2(x),h_1)\ge \mathop{\mathrm{Dp}}\nolimits(h_2(y),h_1)+1\ge \beta+1=\alpha$. \end{proof} \begin{lemma}\label{shel_l1_3} Let $\{ a_i\mid i\in I\}\subset A$ be an independent set with elements of the same order~$p^l$ and let $A'=\mathop{\mathrm{Cl}}\nolimits\{ a_i\mid i\in I\}$. Let $J\subseteq I$, $|I\setminus J|=|I|$, $J=\smash[b]{\bigcup\limits_{\alpha\in \alpha(0)} J_\alpha}$, and $B=\mathop{\mathrm{Cl}}\nolimits\{ a_i\mid i\in J\}$. Then we can find $f\in \mathop{\mathrm{End}}\nolimits(A')$ such that \begin{enumerate} \renewcommand{\theenumi}{\alph{enumi}} \item $i\in J_\alpha\Rightarrow \mathop{\mathrm{Dp}}\nolimits (a_i,f)=\alpha$\textup{;} \item if $g\in \mathop{\mathrm{End}}\nolimits(A')$, $g\circ f=f\circ g$, and $g$ maps~$B$ into~$B$, then for every $\alpha\in \alpha(0)$ $$ i\in J_\alpha\Rightarrow g(a_i)\in \mathop{\mathrm{Cl}}\nolimits\biggl\{ a_j \biggm| j\in \bigcup\limits_{\alpha\le\beta < \alpha(0)} J_\beta\biggr\}; $$ \item every function $g\colon \{ a_i\mid i\in J\}\to B$ satisfying the condition~\textup{(b)} can be extended to an endomorphism of~$A'$ that commutes with~$f$ and maps~$B$ into~$B$. \end{enumerate} \end{lemma} \begin{proof} By renaming we can assume that $I\setminus J=I_0\cup \{ \langle 0,t,\eta\rangle \mid t\in J_\alpha,\ \alpha< \alpha(0),\ l(\eta)> 0$, $\eta$~is a~decreasing sequence of ordinals, $\eta_0< \alpha\} \cup \{ \langle 1,t,n\rangle \mid 0< n< \omega\}$ and we identify $\langle 0,t,\langle\ \rangle \rangle$ and $\langle 1,t,0\rangle$ with~$t$. Let us define a~function $h$ on~$I$: \begin{enumerate} \item for $s\in I_0$ $$ h(s)=s; $$ \item for $t\in J$, $\langle 0,t, \langle \eta, \beta\rangle \rangle\in I$, where $\beta$ is an ordinal number, $$ h(\langle 0,t,\langle \eta, \beta\rangle \rangle)=\langle 0,t,\eta\rangle; $$ \item for $t\in J$ $$ h(\langle 1,t,n\rangle)=\langle 1,t,n+1\rangle . $$ \end{enumerate} Let $f=\tilde h$. It is easy to prove that if $t\in J$ and $l(\eta)=n$, then $\mathop{\mathrm{Dp}}\nolimits (\langle 0,t,\eta\rangle, h)=\eta_{n-1}$.
Hence (a)~follows immediately by~(c) of Lemma~\ref{shel_l1_1}, and $f\in \mathop{\mathrm{End}}\nolimits(A')$ by~(a) of Lemma~\ref{shel_l1_1}. Let now $g\in \mathop{\mathrm{End}}\nolimits(A')$ commute with~$f$, $g$ map~$B$ into~$B$, $i\in J_\alpha$, $\alpha\in \alpha(0)$. Suppose that the linear combination $g(a_i)=k_1 a_{s_1}+\dots+k_n a_{s_n}$ is reduced. Since $g$ maps~$B$ into~$B$ and $g(a_i)\in B$, we see that $s_k\in J$ for $k=1,\dots, n$. Therefore we can assume that $s_k\in J_{\alpha_k}$. By~(c) from Lemma~\ref{shel_l1_1} and the remarks above, $$ \mathop{\mathrm{Dp}}\nolimits(g(a_i),f)=\mathop{\mathrm{Dp}}\nolimits(k_1 a_{s_1}+\dots+k_n a_{s_n}, f)= \min_k \mathop{\mathrm{Dp}}\nolimits (s_k, h)=\min_k \alpha_k. $$ On the other hand, by Lemma~\ref{shel_l1_2}, as $g$ commutes with~$f$, $$ \alpha\le \mathop{\mathrm{Dp}}\nolimits(a_i,f)\le \mathop{\mathrm{Dp}}\nolimits(g(a_i),f). $$ Combining both, we get $\alpha \le \alpha_k$ for $k=1,\dots,n$. Hence $g(a_i)\in \mathop{\mathrm{Cl}}\nolimits \{ a_i\mid i\in J_\beta,\ \alpha \le \beta< \alpha(0)\}$, so we have proved~(b). (c) Extend $g$ to a~function $g_1\colon \{ a_i\mid i\in I\}\to A'$ in the following way: if $g(a_t)=k_1 a_{t_1}+\dots+k_na_{t_n}$, then let \begin{align*} g_1(a_{\langle 0,t,\eta\rangle}) &= k_1 a_{\langle 0,t_1, \eta\rangle}+ \dots+ k_n a_{\langle 0,t_n,\eta\rangle},\\ g_1(a_{\langle 1,t,m\rangle}) &= k_1 a_{\langle 1,t_1,m\rangle}+\dots+ k_n a_{\langle 1,t_n,m\rangle},\\ g_1(a_s) &=a_s \text{ for } s\in I_0. \end{align*} It is easy to check that $g_1$ is well defined (because $t\in J_\alpha$ and $t_k\in J_\beta$ imply $\alpha\le \beta$) and it has a~unique extension to $g_2\in \mathop{\mathrm{End}}\nolimits(A')$. In order to check that $g_2$~and~$f$ commute, it suffices to prove that for every $s\in I$ $$ f\circ g_2(a_s)=g_2\circ f(a_s), $$ and this is quite easy. \end{proof} A~linear combination $\tau(x_1,\dots,x_n)=k_1 x_1+\dots+k_nx_n$ is called \emph{beautiful} (see~\cite{Shelah}, where similar terms were called \emph{beautiful terms}) if \begin{enumerate} \item we have $\tau(\tau(x_1^1,\dots,x_n^1),\tau(x_1^2,\dots, x_n^2),\dots, \tau(x_1^n,\dots, x_n^n))= \tau(x_1^1,x_2^2,\dots,x_n^n)$; \item we have $\tau(x,\dots,x)=x$. \end{enumerate} The condition~(2) implies $k_1+\dots+k_n=1$. It is clear that the condition~(1) implies $k_i k_j=\delta_{ij} k_i$ for all $i,j=1,\dots,n$. We can see that all $k_j$, except one (let it be~$k_i$), are equal to zero, and $k_i=1$, i.e., all beautiful linear combinations have the form~$x_i$ for some $i\in I$. Therefore the following lemma is trivial. \begin{lemma}\label{shel_l3_1} The set of beautiful linear combinations is closed under substitution. \end{lemma} \begin{theorem}\label{shel_t4_1} There is a~formula $\tilde \varphi({\dots})$ such that the following holds. Let $\{ f_i\}_{i\in \mu}$ be a~set of elements of $\mathop{\mathrm{End}}\nolimits(A')$. Then we can find a~vector~$\bar g$ such that the formula $\tilde \varphi( f,\bar g)$ holds in $\mathop{\mathrm{End}}\nolimits(A')$ if and only if $ f=f_i$ for some $i\in \mu$. \end{theorem} \begin{proof} Suppose that a~set $\{ a_i\mid i\in I^*\}\subset A$ is independent, consists of elements of the same order, and $\mathop{\mathrm{Cl}}\nolimits\{ a_i\mid i\in I^*\}=A'$. Let $J\subseteq I^*$ and $\mu =|J|=|I^*\setminus J|$. For notational simplicity let $J=\{ \langle \alpha,\beta\rangle\mid {\alpha,\beta\in \mu}\}$, $a_{\langle \alpha,\beta\rangle}=a_\alpha^\beta$.
\begin{lemma}\label{shel_l4_2} There is a~formula $\varphi(f)$ with one free variable~$f$ such that $\varphi(f)$ holds in $\mathop{\mathrm{End}}\nolimits(A')$ if and only if there is an ordinal number $\alpha\in \mu$ such that for all $\beta \in \mu$ $$ f(a_0^\beta)=a_\alpha^\beta. $$ \end{lemma} Now we shall give the proof of Theorem~\ref{shel_t4_1} with the help of Lemma~\ref{shel_l4_2}. Let a~function $f_0^*\colon A'\to A'$ map the set $\{ a_0^\alpha\mid \alpha\in \mu\}$ onto the set $\{ a_t\mid t\in I^*\}$, and let $f_0^*(a_\alpha^\beta)=f_0^*(a_0^\beta)$. Suppose that we have a~set $\{ f_i\}_{i\in \mu}$ and let the function $f^*\colon A'\to A'$ be such that $$ f^*(a_\alpha^\beta)=f_\alpha\circ f_0^*(a_\alpha^\beta). $$ Let $f_1^*\colon A'\to A'$ map the set $\{ a_t\mid t\in I^*\}$ onto the set $\{ a_0^\beta\mid \beta\in \mu\}$. Let the formula $\tilde \varphi(f',{\dots})$ say that there exists $f\in \mathop{\mathrm{End}}\nolimits(A')$ such that \begin{enumerate} \item $\varphi(f)$; \item $f'\circ f_0^*\circ f_1^*=f^*\circ f\circ f_1^*$. \end{enumerate} Then $\vDash \varphi(f)$ if and only if there exists $\alpha\in \mu$ such that $$ \forall \beta\in \mu\ f(a_0^\beta)=a_{\alpha}^{\beta}. $$ Therefore \begin{multline*} f'\circ f_0^*\circ f_1^*(a_t)=f^*\circ f\circ f_1^*(a_t) \Leftrightarrow f'\circ f_0^*(a_0^\beta)=f^*\circ f(a_0^\beta)\\ \Leftrightarrow f'\circ f_0^*(a_0^\beta)=f^*(a_{\alpha}^\beta) \Leftrightarrow f'\circ f_0^*(a_0^\beta)=f_{\alpha}\circ f_0^*(a_0^\beta). \end{multline*} Let $f_0^*(a_0^\beta)=a_{t_\beta}$. Then $$ f'(a_{t_\beta})= f_{\alpha}(a_{t_\beta}), $$ which is what we needed. \end{proof} \begin{proof}[Proof of Lemma~\ref{shel_l4_2}] We partition our proof into three cases. Case I. $\mu=\omega$. Introduce the following mappings: \begin{enumerate} \item $f_2^*\in \mathop{\mathrm{End}}\nolimits(A')$ such that $f_2^*(a_t)=a_0^0$ for all $t\in I^*$; \item $f_3^*\in \mathop{\mathrm{End}}\nolimits(A')$ such that $t\in I^*\setminus J\Rightarrow f_3^*(a_t)=a_0^0$, $t\in J\Rightarrow f_3^*(a_n^m)=a_{n+1}^m$; \item $f_4^*\in \mathop{\mathrm{End}}\nolimits(A')$ such that $t\in I^*\setminus J\Rightarrow f_4^*(a_t)=a_0^0$, $t\in J\Rightarrow f_4^*(a_n^m)=a_n^{m+1}$; \item $f_5^*\in \mathop{\mathrm{End}}\nolimits(A')$ such that $t\in I^*\setminus J\Rightarrow f_5^*(a_t)=a_0^0$, $t\in J\Rightarrow f_5^*(a_n^m)=a_m^n$; \item $f_6^*\in \mathop{\mathrm{End}}\nolimits(A')$ such that $t\in I^*\setminus J\Rightarrow f_6^*(a_t)=a_0^0$, $t\in J\Rightarrow f_6^* (a_n^m)=a_n^n$; \item $f_7^*\in \mathop{\mathrm{End}}\nolimits(A')$ such that $t\in I^*\setminus J\Rightarrow f_7^*(a_t)=a_0^0$, $t\in J\Rightarrow f_7^*(a_n^m)=a_n^0$. \end{enumerate} Suppose that \begin{align*} B_0&=\mathop{\mathrm{Cl}}\nolimits\{ a_0^0\},\\ B_1&= \mathop{\mathrm{Cl}}\nolimits\{ a_n^0\mid n\in \omega\},\\ B_2&= \mathop{\mathrm{Cl}}\nolimits\{ a_n^m\mid n,m\in \omega\},\\ B_3&= \mathop{\mathrm{Cl}}\nolimits\{ a_0^m\mid m\in \omega\}. \end{align*} Introduce now the following first order formulas. \begin{enumerate} \renewcommand{\labelenumi}{\theenumi.} \item $\varphi_1(f;\dots)$ says that the restriction of the function~$f$ on the set~$B_1$ is a~function with the image in~$B_1$, commuting with $f_3^*|_{B_1}$. As a~formula we have the following: $$ \varphi_1(f;\dots)= (\rho_{B_1}\circ f\circ \rho_{B_1}=f\circ \rho_{B_1}) \logic\land (f\circ \rho_{B_1}\circ f_3^*\circ \rho_{B_1}= f_3^*\circ \rho_{B_1}\circ f\circ \rho_{B_1}), $$ where $\rho_{B_1}$ is the projection onto~$B_1$.
\item Similarly, $\varphi_2(f;\dots)$ says that the formula $\varphi_1(f;\dots)$ holds and the image of $f|_{B_2}$ is included in~$B_2$ and $f|_{B_2}$ commutes with $f_4^*|_{B_2}$. \item The formula $\varphi_3(f;\dots)$ says that the formula $\varphi_2(f;\dots)$ holds and $(f_5^*\circ f\circ f_5^*\circ f)|_{B_0}=(f_6^*\circ f)|_{B_0}$. \item The formula $\varphi_4(f;\dots)$ says that the formula $\varphi_3(f;\dots)$ holds and $(f_2^*\circ f)|_{B_0}=f_2^*|_{B_0}$. \item The formula $\varphi_5(f;\dots)$ says that there exists $f'\in \mathop{\mathrm{End}}\nolimits(A')$ such that $f'|_{B_3}=f|_{B_3}$ and the formula $\varphi_4(f';\dots)$ holds. \end{enumerate} Now we note the following. 1. The formula $\varphi_1(f;\dots)$ holds if and only if $$ f(a_n^0)=q_1a_{n+l_1}^0+\dots+q_k a_{n+l_k}^0 $$ for some $q_1,\dots,q_k\in \mathbb Z$, $l_1,\dots,l_k\in \omega$ for any $n\in \omega$. Actually, $f\colon B_1\to B_1$ means that $f(a_n^0)=s_1 a_{n_1}^0+\dots+s_k a_{n_k}^0$, and $f|_{B_1}\circ f_3^*|_{B_1}=f_3^*|_{B_1}\circ f|_{B_1}$ is equivalent to the condition that for any $n\in \omega$ $$ f\circ f_3^*(a_n^0)=f_3^*\circ f(a_n^0). $$ Let $f(a_0^0)=r_1 a_{l_1}^0+\dots+r_k a_{l_k}^0$. Then $f(a_1^0)=f_3^*\circ f(a_0^0)=r_1 a_{1+l_1}^0+\dots+r_k a_{1+l_k}^0$ and so on by induction. 2. The formula $\varphi_2(f;\dots)$ holds if and only if $$ f(a_n^m)=r_1 a_{n+l_1}^m+\dots+r_k a_{n+l_k}^m. $$ Actually, from $\varphi_1(f)$ it follows that for any $n\in \omega$ $$ f(a_n^0)=r_1 a_{n+l_1}^0+\dots+ r_k a_{n+l_k}^0, $$ and $$ f(a_n^1)=f\circ f_4^*(a_n^0)=f_4^* \circ f(a_n^0)= f_4^* (r_1 a_{n+l_1}^0+\dots+ r_k a_{n+l_k}^0)= r_1 a_{n+l_1}^1+\dots+r_k a_{n+l_k}^1, $$ and so on by induction. 3. The formula $\varphi_3(f;\dots)$ holds if and only if the formula $\varphi_2(f;\dots)$ holds, where $l_1,\dots,l_k$ are distinct and the corresponding linear combination~$\tau$ satisfies the condition~(1) of the definition of beautiful linear combination. Actually, let the formula $\varphi_2(f)$ be true and $\tau$ be the corresponding linear combination. Then \begin{align*} & f_5^*\circ f\circ f_5^*\circ f|_{B_0}= f_6^*\circ f|_{B_0}\Leftrightarrow f_5^*\circ f\circ f_5^* \circ f(a_0^0)=f_6^*\circ f(a_0^0)\\ & \quad {}\Leftrightarrow f_5^*\circ f\circ f_5^*(\tau(a_{l_1}^0,\dots,a_{l_k}^0))= f_6^*(\tau(a_{l_1}^0,\dots,a_{l_k}^0))\Leftrightarrow f_5^*\circ f(\tau(a_0^{l_1},\dots,a_0^{l_k}))= \tau(a_{l_1}^{l_1},\dots,a_{l_k}^{l_k})\\ & \quad {}\Leftrightarrow f_5^*(\tau(\tau(a_{l_1}^{l_1},\dots,a_{l_1}^{l_k}),\dots, \tau(a_{l_k}^{l_1},\dots, a_{l_k}^{l_k})))= \tau(a_{l_1}^{l_1},\dots,a_{l_k}^{l_k})\\ & \quad {}\Leftrightarrow \tau (\tau(a_{l_1}^{l_1},\dots, a_{l_k}^{l_1}),\dots, \tau(a_{l_1}^{l_k},\dots,a_{l_k}^{l_k}))= \tau(a_{l_1}^{l_1},\dots,a_{l_k}^{l_k}). \end{align*} It is equivalent to the condition~(1) from the definition of a~beautiful linear combination. The converse condition is proved in the same manner. 4. The formula $\varphi_4(f;\dots)$ holds if and only if $$ f(a_n^m)=\tau(a_{n+l_1}^m,\dots,a_{n+l_k}^m), $$ where $\tau(x_1,\dots,x_k)$ is a~beautiful linear combination, i.e., $f(a_n^m)=a_{n+l_s}^m$. Since $\varphi_4(f)\Rightarrow \varphi_3(f)$, we only need to show that $a_0^0=\tau(a_0^0,\dots,a_0^0)$. Actually, $$ f_2^*\circ f|_{B_0}=f_2^*|_{B_0}\Leftrightarrow f_2^*\circ f(a_0^0)= f_2^*(a_0^0) \Leftrightarrow f_2^*(\tau(a_{l_1}^0,\dots,a_{l_k}^0))=a_0^0\Leftrightarrow \tau(a_0^0,\dots,a_0^0)=a_0^0. $$ 5.
The formula $\varphi_5(f)$ holds if and only if $$ f(a_0^n)=\tau(a_{l_1}^n,\dots,a_{l_k}^n) $$ for some beautiful linear combination~$\tau$ and $l_1,\dots,l_k\in \omega$ for all $n\in \omega$. This follows immediately from $f_5^*\circ f_7^*\circ f_5^*(a_0^m)=f_5^*\circ f_7^*(a_m^0)=a_0^m$. \smallskip Case II. The cardinal number $\mu=|J|$ is regular and $\mu> \omega$. Let $I^*\setminus J=I_0\cup \{ \langle \alpha,\delta,n\rangle\mid \alpha\in \mu,\ \delta\in \mu,\ \mathrm{cf}\delta=\omega,\ n\in \omega\}$, $|I_0|=\mu$, and let us denote $a_\alpha^{\beta,n}=a_{\langle \alpha,\beta,n\rangle}$. For every limit ordinal $\delta\in \mu$ such that $\mathrm{cf} \delta=\omega$, choose an increasing sequence $(\delta_n)_{n\in \omega}$ of ordinals less than~$\delta$ such that their limit is~$\delta$ and for each $\beta\in \mu$, $n\in \omega$ the set $\{ \delta\in \mu\mid \beta=\delta_n\}$ is a~stationary subset in~$\mu$ (see~\cite{Solovay}). Let us define some $f_i^*$ by defining $f_i^*(a_t)$ for some $t\in I^*$ and understanding that when $f_i^*(a_t)$ is not explicitly defined, it is $a_0^0$. So let for $\alpha,\beta\in \mu$ $$ f_2^*(a_\alpha^\beta)=a_0^0;\quad f_3^*(a_\alpha^\beta)=a_\alpha^0;\quad f_4^*(a_\alpha^\beta)=a_0^\beta;\quad f_5^*(a_\alpha^\beta)=a_\beta^\alpha;\quad f_6^*(a_\alpha^\beta)=a_\alpha^\alpha; $$ let for $\delta\in \mu$, $\mathrm{cf} \delta=\omega$ $$ f_7^*(a_\alpha^\delta) =a_\alpha^{\delta,0}; $$ and let for $\delta\in\mu$, $\mathrm{cf} \delta\ne\omega$ $$ f_7^*(a_\alpha^\delta)=a_\alpha^\delta; $$ and, further, \begin{gather*} f_8^*(a_\alpha^\beta)= a_\alpha^\beta,\quad f_8^*(a_\alpha^{\delta,n})=a_\alpha^{\delta,n+1};\\ f_9^*(a_\alpha^\beta)=a_\alpha^\beta,\quad f_9^*(a_\alpha^{\delta,n})=a_\alpha^{\delta_n}. \end{gather*} Let \begin{align*} B_0&= \mathop{\mathrm{Cl}}\nolimits\{ a_0^0\},\\ B_1&= \mathop{\mathrm{Cl}}\nolimits\{ a_\alpha^0\mid \alpha\in \mu\},\\ B_2&= \mathop{\mathrm{Cl}}\nolimits\{ a_0^\beta\mid \beta\in \mu\},\\ B_3&= \mathop{\mathrm{Cl}}\nolimits\{ a_\alpha^\beta\mid \alpha,\beta\in \mu\},\\ B_4&= \mathop{\mathrm{Cl}}\nolimits\{ a_\alpha^\beta,a_\alpha^{\beta,n}\mid \alpha,\beta\in \mu,\ n\in \omega\},\\ B_5&= \mathop{\mathrm{Cl}}\nolimits\{ a_0^\beta,a_0^{\beta,n}\mid \beta\in \mu,\ n\in \omega\},\\ B_6&= \mathop{\mathrm{Cl}}\nolimits\{ a_0^\beta\mid \beta\in \mu,\ \mathrm{cf} \beta=\omega\},\\ B_7&= \mathop{\mathrm{Cl}}\nolimits\{ a_\alpha^\beta\mid \alpha,\beta\in \mu,\ \mathrm{cf} \beta=\omega\}. \end{align*} Clearly $f_3^*$, $f_4^*$, and $f_9^*$ are projections onto $B_1$, $B_2$, and $B_3$, respectively. Let $f_{10}^*$, $f_{11}^*$, and $f_{12}^*$ be projections onto $B_4$, $B_5$, and $B_6$, respectively. Now we apply Lemma~\ref{shel_l1_3} with $J=\{ \langle \alpha,\beta\rangle\mid \alpha,\beta \in \mu\}$, $J_\beta=\{ \langle \alpha,\beta\rangle\mid \alpha\in \mu\}$, $I=I^*$, and $f=f_{13}^*\in \mathop{\mathrm{End}}\nolimits(A')$. Let the first order formula $\varphi^1(f,g;\dots)$ say that \begin{enumerate} \item $f$ and $g$ are conjugate to~$f_2^*$; \item $\mathop{\mathrm{Rng}}\nolimits f, \mathop{\mathrm{Rng}}\nolimits g\subset B_3$; \item $\exists h\in \mathop{\mathrm{End}}\nolimits(A')\ (h\circ f_{13}^*=f_{13}^*\circ h \logic\land \mathop{\mathrm{Rng}}\nolimits h|_{B_3}\subseteq B_3 \logic\land h\circ f =g)$. \end{enumerate} We shall write $\varphi^1(f,g;\dots)$ also in the form $f\le g$. 
If $f$ and $g$ are conjugate to $f_2^*$, $f(a_0^0)=a_\alpha^\beta$, and $g(a_0^0)=\tau(a_{\alpha_1}^{\beta_1},\dots, a_{\alpha_k}^{\beta_k})$, then $f\le g$ if and only if $\beta\le \beta_1,\dots,\allowbreak \beta\le \beta_k$, which is easy to see from Lemma~\ref{shel_l1_3}. Let the first order formula $\varphi_2(f)$ say that \begin{enumerate} \item $\mathop{\mathrm{Rng}}\nolimits f|_{B_2}\subseteq B_3$; $\mathop{\mathrm{Rng}}\nolimits f|_{B_5}\subseteq B_4$; $\mathop{\mathrm{Rng}}\nolimits f|_{B_0}\subseteq B_1$; $\mathop{\mathrm{Rng}}\nolimits f|_{B_6}\subseteq B_7$; \item for any $g\in \mathop{\mathrm{End}}\nolimits (A')$ conjugate to~$f_2^*$, if $\mathop{\mathrm{Rng}}\nolimits g\subseteq B_2$, then $g\le f\circ g$; \item $f|_{B_5}$ commutes with $f_7^*$, $f_8^*$, and $f_9^*$; \item $f|_{B_3}$ commutes with $f_3^*$. \end{enumerate} \begin{statement}\label{shel_s4_3} The formula $\varphi_2(f)$ holds in $\mathop{\mathrm{End}}\nolimits(A')$ if and only if for any $\beta\in \mu$ and $n\in \omega$ $$ f(a_0^\beta)=\tau(a_{\alpha_1}^\beta,\dots,a_{\alpha_k}^\beta)\ \ \text{and}\ \ f(a_0^{\beta,n})=\tau(a_{\alpha_1}^{\beta,n},\dots,a_{\alpha_k}^{\beta,n}) $$ for some linear combination~$\tau$ and ordinal numbers $\alpha_1,\dots,\alpha_k \in \mu$ \textup{(}which do not depend on~$\beta$\textup{)}. \end{statement} \begin{proof} Assume that $\varphi_2(f)$ holds for a~given function~$f$, and let $$ f(a_0^\beta)= \tau_\beta(a_{\alpha_{\beta,1}}^{\gamma_{\beta,1}},\dots, a_{\alpha_{\beta,k_\beta}}^{\gamma_{\beta,k_\beta}}). $$ Without loss of generality we can assume that the ordinal numbers $\alpha_{\beta,i}$ grow as $i$~grows. Choose $g$ such that $g(a_t)=a_0^\beta$ for all $t\in I^*$. Then $g$ is conjugate to $f_2^*$, $\mathop{\mathrm{Rng}}\nolimits g\subseteq B_2$, and therefore, by~(2), $g\le f\circ g$. Since $g(a_0^0)=a_0^\beta$ and $f\circ g(a_0^0)=f(a_0^\beta)=\tau_\beta(a_{\alpha_{\beta,1}}^{\gamma_{\beta,1}},\dots, a_{\alpha_{\beta,k_\beta}}^{\gamma_{\beta,k_\beta}})$, we have that $\beta\le \gamma_{\beta,i}$ for all $i=1,\dots,k_\beta$. Since the cardinal number~$\mu$ is regular and $\mu> \omega$, we have that for any $\beta_0< \mu$ $$ \sup \{ \gamma_{\beta,i}\mid i=1,\dots,k_\beta;\ \beta\in \beta_0\}< \mu . $$ Hence the set $$ S=\{ \beta_0\mid \beta_0\in \mu \logic\land \forall \beta\, \forall i\, (\beta\in \beta_0 \logic\land 1\le i\le k_\beta\Rightarrow \gamma_{\beta,i}\in \beta_0)\} $$ is closed in~$\mu$ by its definition and unbounded (indeed, for any $\beta_0\in \mu$ we can put $\beta_1= \sup\{ \gamma_{\beta,i}\mid i=1,\dots,k_\beta,\ \beta\in \beta_0\}$, $\beta_2= \sup \{ \gamma_{\beta,i}\mid i=1,\dots,k_\beta,\ \beta\in \beta_1\}$,~\dots, $\bar \beta=\bigcup\limits_{l\in \omega} \beta_l$, whence $\bar \beta< \mu$, $\bar \beta \in S$, and $\bar \beta\ge \beta_0$). Now we shall prove that for $\delta\in S$, $\mathrm{cf}\delta=\omega$ implies $\gamma_{\delta,i}=\delta$. Suppose that $\gamma=\gamma_{\delta,i_0}\ne \delta$ for some~$i_0$, which yields $\delta< \gamma_{\delta,i_0}$ as was said above. As will be checked in a~moment, $\gamma_{\delta,i_0}$ has cofinality~$\omega$; since the chosen sequence $(\gamma_{\delta,i_0}(n))_{n\in \omega}$ is increasing and its limit is $\gamma_{\delta,i_0}> \delta$ (here $\gamma(n)$ denotes the $n$-th member of the sequence chosen for a~limit ordinal~$\gamma$ of cofinality~$\omega$), for some $n\in \omega$ big enough, \begin{equation}\label{shel1} \delta< \gamma_{\delta,i_0}(n) \end{equation} and $\gamma_{\delta,i_1}\ne \gamma_{\delta,i_2}\Rightarrow \gamma_{\delta,i_1}(n)\ne \gamma_{\delta,i_2}(n)$.
Since $\mathop{\mathrm{Rng}}\nolimits f|_{B_6}\subseteq B_7$, we see that necessarily $\gamma_{\delta,i}$ has cofinality~$\omega$, and as $f|_{B_5}$ commutes with~$f_7^*$, we see that $$ f(a_0^{\delta,0})=f\circ f_7^*(a_0^\delta)= f_7^*\circ f(a_0^\delta)= \tau_\delta(a_{\alpha_{\delta,1}}^{\gamma_{\delta,1},0},\dots, a_{\alpha_{\delta,s}}^{\gamma_{\delta,s},0}). $$ As $f|_{B_5}$ commutes with $f_8^*$, $$ f(a_0^{\delta,n})= f\circ f_8^*(a_0^{\delta,n-1})= f_8^*\circ f(a_0^{\delta,n-1}), $$ whence $$ f(a_0^{\delta,n})= \tau_\delta (a_{\alpha_{\delta,1}}^{\gamma_{\delta,1},n},\dots). $$ Since $f|_{B_5}$ commutes with $f_9^*$, we have $$ f(a_0^{\delta_n})= \tau_\delta(a_{\alpha_{\delta,1}}^{\gamma_{\delta,1}(n)},\dots) $$ and the linear combination $\tau_\delta(a_{\alpha_{\delta,1}}^{\gamma_{\delta,1}(n)},\dots)$ is reduced. But $f(a_0^{\delta_n})= \tau_{\delta_n}(a_{\alpha_{\delta_n,1}}^{\gamma_{\delta_n,1}},\dots)$, where the linear combination on the right-hand side of the equality is also reduced. Hence $$ \gamma_{\delta,i_0}(n)\in \{ \gamma_{\delta_n,i}\mid 1\le i\le k_{\delta_n}\}, $$ but on the one hand, $\delta< \gamma_{\delta,i_0}(n)$ by~\eqref{shel1}, and on the other hand, $\delta_n< \delta\Rightarrow \gamma_{\delta_n,i}< \delta$, as $\delta\in S$, contradiction. Consequently, $\gamma_{\delta,i}=\delta$ for all $\delta\in S$ with $\mathrm{cf}\,\delta=\omega$. We know that for all $\beta\in \mu$ and $n\in \omega$ the set $\{ \delta\in \mu\mid \mathrm{cf} \delta=\omega \logic\land \delta_n=\beta\}$ is stationary, so there is $\delta\in S$, where $\mathrm{cf} \delta=\omega$, such that $\delta_n=\beta$. As before, we can show that $$ f(a_0^{\delta_n})= \tau_\delta(a_{\alpha_{\delta,1}}^{\delta_n},\dots, a_{\alpha_{\delta,k_\delta}}^{\delta_n})= \tau_{\delta_n}(a_{\alpha_{\delta_n,1}}^{\gamma_{\delta_n,1}},\dots, a_{\alpha_{\delta_n,k_{\delta_n}}}^{\gamma_{\delta_n,k_{\delta_n}}}), $$ where the linear combination $\tau_{\delta_n}$ is reduced. Therefore $\gamma_{\delta_n,i}\in \{ \delta_n\}$ for all~$i$, so $\gamma_{\delta_n,i}=\delta_n$, i.e., $\gamma_{\beta,i}=\beta$ for all~$i$. As the linear combination $\tau_\beta(a_{\alpha_{\beta,1}}^{\gamma_{\beta,1}},\dots)$ is reduced, we have that the ordinals $\alpha_{\beta,i}$, where $1\le i\le k_\beta$, are distinct. As $f|_{B_3}$ commutes with~$f_3^*$, for every~$\beta$ $$ \tau_\beta(a_{\alpha_{\beta,1}}^0,\dots, a_{\alpha_{\beta,k_\beta}}^0)= \tau_0(a_{\alpha_{0,1}}^0,\dots, a_{\alpha_{0,k_0}}^0). $$ As the $\alpha_{\beta,i}$ are distinct, necessarily $$ \{ \alpha_{\beta,i}\mid 1\le i\le k_\beta\}= \{ \alpha_{0,i}\mid 1\le i\le k_0\}, $$ but as $\alpha_{\beta,i}$ is increasing with~$i$ (for each~$\beta$, by the choice of $\alpha_{\beta,i}$), necessarily $\alpha_{\beta,i}=\alpha_{0,i}$ and $k_\beta=k_0$, i.e., $\tau=\tau_0$. Therefore $f(a_0^{\beta,n})=\tau_0(a_{\alpha_{0,1}}^{\beta,n},\dots)$. The other direction of this assertion is immediate. \end{proof} Let $\varphi_3(f)$ say that \begin{enumerate} \item $\mathop{\mathrm{Rng}}\nolimits f|_{B_2}\subseteq B_3$; \item $\exists f_1\in \mathop{\mathrm{End}}\nolimits(A')\ (f_1|_{B_2}=f|_{B_2}\logic\land \varphi_2(f_1))$. \end{enumerate} The formula $\varphi_3(f)$ holds if and only if $$ f(a_0^\beta)=\tau(a_{\gamma_1}^\beta,\dots,a_{\gamma_k}^\beta) $$ for any $\beta\in \mu$ and some $\tau, \gamma_1,\dots,\gamma_k$. This follows immediately from Statement~\ref{shel_s4_3}.
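For orientation, let us recall (repeating what case~I already provides, not a~new condition) that a~beautiful linear combination acts as a~projection onto one of its variables; thus the formula $\varphi_4$ introduced below cuts the functions described by $\varphi_3$ down to exactly the form required in Lemma~\ref{shel_l4_2}: $$ \tau(x_1,\dots,x_k)=x_s,\qquad f(a_0^\beta)=a_{\gamma_s}^\beta\quad \text{for all }\beta\in \mu. $$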
Let the formula $\varphi_4(f)$ say that \begin{enumerate} \item $\mathop{\mathrm{Rng}}\nolimits f|_{B_2}\subseteq B_3$; \item $\varphi_3(f)$; \item $\forall g\in \mathop{\mathrm{End}}\nolimits(A')\ (\varphi_3(g)\Rightarrow g\circ f_5^*\circ f|_{\tilde a_0^0}= f_5^*\circ f\circ f_5^*\circ g|_{\tilde a_0^0}) \logic\land f_5^*\circ f\circ f_5^*\circ f|_{\tilde a_0^0}= f_6^*\circ f|_{\tilde a_0^0} \logic\land f_2^*\circ f|_{\tilde a_0^0}=f_2^*|_{\tilde a_0^0}$. \end{enumerate} As in case~I, we can check that the formula $\varphi_4(f)$ holds in $\mathop{\mathrm{End}}\nolimits(A')$ if and only if $$ f(a_0^\beta)=\tau(a_{\gamma_1}^\beta,\dots, a_{\gamma_k}^\beta), $$ where $\tau$ is a~beautiful linear combination. \smallskip Case III. $\mu$ is a~singular cardinal number. Let $\mu_1< \mu$, where $\mu_1$ is a~regular cardinal number and $\mu_1> \omega$. Let $I^*\setminus J=I_0\cup \{ \langle \alpha,\delta,n\rangle\mid \alpha\in \mu,\allowbreak\ {\delta\in \mu_1},\allowbreak\ \mathrm{cf} \delta=\omega,\ n\in \omega\}$, $|I_0|=\mu$, $a_\alpha^{\beta,n}=a_{\langle \alpha,\beta,n\rangle}$. For every limit ordinal $\delta\in \mu_1$ such that $\mathrm{cf} \delta=\omega$, similarly to the previous case, choose an increasing sequence $(\delta_n)_{n\in\omega}$ of ordinal numbers less than $\delta$, with limit~$\delta$, such that for any $\beta\in \mu_1$ and $n\in \omega$ the set $\{ \delta\in \mu_1\mid \beta=\delta_n\}$ is a~stationary subset in~$\mu_1$. Let \begin{align*} B_1&=\mathop{\mathrm{Cl}}\nolimits\{ a_{\alpha}^0\mid \alpha\in \mu\},\\* B_2&= \mathop{\mathrm{Cl}}\nolimits\{ a_0^\beta\mid \beta\in \mu_1\},\\* B_3&=\mathop{\mathrm{Cl}}\nolimits\{ a_\alpha^\beta\mid \alpha\in \mu,\ \beta\in \mu_1\}. \end{align*} As in case~II, we can define the functions~$f_i^*$ in such a~way that for some $\varphi^0(\dots)$ the formula $\varphi^0(f;\dots)$ holds in $\mathop{\mathrm{End}}\nolimits(A')$ if and only if there exist a~linear combination~$\tau$ and distinct ordinal numbers ${\alpha_1,\dots,\alpha_n\in \mu}$ such that for every $\beta\in \mu_1$ $$ f(a_0^\beta)=\tau(a_{\alpha_1}^\beta,\dots,a_{\alpha_n}^\beta). $$ Let the formula $\varphi^1(f)$ say that \begin{enumerate} \item $\mathop{\mathrm{Rng}}\nolimits f|_{\tilde a_0^0}\subseteq B_2$; \item for every $g\in \mathop{\mathrm{End}}\nolimits(A')$ $\varphi^0(g)\Rightarrow (f\circ g)|_{\tilde a_0^0}=(g\circ f)|_{\tilde a_0^0}$. \end{enumerate} It is easy to check that the formula $\varphi^1(f)$ holds if and only if there exist a~linear combination~$\sigma$ and distinct ordinal numbers $\beta_1,\dots,\beta_m\in \mu_1$ such that for any $\alpha\in \mu$ $$ f(a_\alpha^0)=\sigma(a_\alpha^{\beta_1},\dots,a_\alpha^{\beta_m}). $$ As the cardinal number $\mu_1$ is regular, we can use case~II. Thus, there is a~formula $\varphi^2(f;\dots)$ such that $\varphi^2(f)$ holds in $\mathop{\mathrm{End}}\nolimits(A')$ if and only if there exist a~beautiful linear combination~$\sigma$ and distinct $\beta_i\in \mu_1$ such that for any $\alpha\in \mu$ $$ f(a_\alpha^0)=\sigma(a_\alpha^{\beta_1},\dots,a_\alpha^{\beta_m}). $$ Let $\mu=\bigcup\limits_{i\in \mathrm{cf} \mu} \mu_i$, where $\mu_i\in \mu$ and the sequence $(\mu_i)$ increases.
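As a~minimal illustration of this decomposition (the particular choice of the sequence is unimportant for the argument): for the singular cardinal $\mu=\aleph_\omega$ we have $\mathrm{cf}\,\mu=\omega$, and one can take $$ \mu_i=\aleph_i\quad (i\in \omega),\qquad \aleph_\omega=\bigcup_{i\in \omega}\aleph_i. $$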
We shall first prove that for every $\gamma\in \mathrm{cf} \mu$ there is a~function $\bar f_\gamma^*$ such that \begin{enumerate} \item the formula $\varphi^2[f,\bar f_\gamma^*]$ holds in $\mathop{\mathrm{End}}\nolimits(A')$ if and only if there exist a~beautiful linear combination~$\sigma$ and distinct $\beta_i\in \mu_\gamma^+$ such that for all $\alpha\in \mu$ $$ f(a_\alpha^0)=\sigma(a_\alpha^{\beta_1},\dots,a_\alpha^{\beta_m}); $$ \item $(\bar f_\gamma^*)_0$ is a~projection onto $$ \mathop{\mathrm{Cl}}\nolimits\{ a_\alpha^\beta\mid \alpha\in \mu,\ \beta\in \mu_\gamma^+\}. $$ \end{enumerate} Choose $\mu_1=\mu_\gamma^+$ and consider the same $\varphi^2(f)$ as in case~II, with $(\bar f_\gamma^*)_0$ being a~projection onto $\mathop{\mathrm{Cl}}\nolimits\{ a_\alpha^\beta\mid {\alpha\in \mu},\allowbreak\ \beta\in \mu_1\}$. Let $\tau$ and $\sigma$ be beautiful linear combinations, $\beta_1,\dots,\beta_m \in \mu_{\gamma_k}^+$, $k=1,\dots,n$, and for every $\alpha\in \mu$ $$ f(a_\alpha^0)=\sigma(a_\alpha^{\beta_1},\dots,a_\alpha^{\beta_m}). $$ We show that in this case we have $$ \varphi^2(f, \tau(\bar f_{\gamma_1}^*,\dots,\bar f_{\gamma_n}^*)). $$ The formula $\varphi^2(f,\tau(\bar f_{\gamma_1}^*,\dots, \bar f_{\gamma_n}^*))$ holds if and only if $f(a_\alpha^0)=\sigma(a_\alpha^{\beta_1},\dots,a_\alpha^{\beta_m})$ for all $\alpha\in \mu$ and distinct $\beta_1,\dots,\beta_m\in \mu_{\gamma_1}$, which is true. Further, we have that $\tau((\bar f_{\gamma_1}^*)_0,\dots,(\bar f_{\gamma_n}^*)_0)$ is a~projection onto the set $\mathop{\mathrm{Cl}}\nolimits \{ a_\alpha^\beta\mid {\alpha\in \mu},\allowbreak\ \beta\in \mu_{\gamma_1}^+\}$ because $$ \tau(x_1,\dots,x_n)=x_s. $$ Recall how we proved Theorem~\ref{shel_t4_1} from Lemma~\ref{shel_l4_2}. This proof easily implies that there exist a~formula $\varphi^3$ and a~vector of functions $\bar g^*$ such that the formula $\varphi^3(\bar f,\bar g^*)$ holds if and only if $\bar f =\bar f_\gamma^*$ for some $\gamma\in \mathrm{cf}\,\mu$. Let now the formula $\varphi^4(f,\bar g^*)$ say that there exists $\bar f_1$ such that $\varphi^3(\bar f_1,\bar g^*)$ and for every~$\bar f_2$ satisfying $\varphi^3(\bar f_2,\bar g^*)$ and $\mathop{\mathrm{Rng}}\nolimits (\bar f_1)_0 \subseteq \mathop{\mathrm{Rng}}\nolimits (\bar f_2)_0$, the formula $\varphi^2(f,\bar f_2)$ also holds. If the formula $\varphi^4(f,\bar g^*)$ holds, then there exists $\bar f_1=\bar f_{\gamma}^*$ for some $\gamma\in \mathrm{cf}\,\mu$, and for every $\bar f_2=\bar f_\lambda^*$ (where $\lambda\ge \gamma$) we have $$ f(a_\alpha^0)=\sigma(a_\alpha^{\beta_1},\dots,a_\alpha^{\beta_m}), $$ where $\beta_1,\dots,\beta_m< \mu_{\lambda}^+$. Let $f$ be such that $$ f(a_\alpha^0)=\sigma(a_\alpha^{\beta_1},\dots,a_\alpha^{\beta_m}),\quad \beta_1,\dots,\beta_m\in \mu. $$ Then $\beta_1,\dots,\beta_m\in \mu_\gamma^+$ for some $\gamma\in \mathrm{cf} \mu$ and therefore the formula $\varphi^4(f,\bar g^*)$ holds for some~$\bar g^*$. Now we only need to consider the formula $\varphi^4(f_5^*\circ f\circ f_5^*)$, which is the required formula. \end{proof} \section{Formulation of the Main Theorem, Converse Theorems, Different Cases} \subsection{Second Order Language of Abelian Groups} As we mentioned above, we shall consider second order models of Abelian groups, i.e., consider the second order group language, where the 3-place symbol will denote not multiplication, but addition (i.e., we shall write $x_1=x_2+x_3$ instead of $P^3(x_1,x_2,x_3)$).
As we see, formulas $\varphi({\dots})$ of the language $\mathcal L_2$ consist of the following subformulas: \begin{enumerate} \item $\forall x$ ($\exists x$); \item $x_1=x_2$ and $x_1=x_2+x_3$, where every variable $x_1$, $x_2$, and~$x_3$ either is a~free variable of the formula~$\varphi$ or is defined in the formula~$\varphi$ with the help of the subformulas $\forall x_i$ or $\exists x_i$, $i=1,2,3$; \item $\forall P(v_1,\dots,v_n)$ ($\exists P(v_1,\dots,v_n)$), $n> 0$; \item $P(x_1,\dots,x_n)$, where every variable $x_1,\dots,x_n$, and also every ``predicate'' variable $P(v_1,\dots,v_n)$ either is a~free variable of the formula~$\varphi$, or is defined in this formula with the help of the subformula $\forall x_i$, $\exists x_i$, $\forall P(v_1,\dots,v_n)$, $\exists P(v_1,\dots,v_n)$. \end{enumerate} Equivalence of two Abelian groups $A_1$~and~$A_2$ in the language~$\mathcal L_2$ will be denoted by $$ A_1\equiv_{\mathcal L_2} A_2,\ \ \text{or}\ \ A_1\equiv_2 A_2. $$ As we remember, the \emph{theory} of a~model~$\mathcal U$ of a~language~$\mathcal L$ is the set of all sentences of the language~$\mathcal L$ which are true in this model. In some cases we shall consider together with theories $\mathrm{Th}_2(A)=\mathrm{Th}_{\mathcal L_2}(A)$ also theories $\mathrm{Th}_2^\varkappa (A)$, which contain those sentences~$\varphi$ of the language~$\mathcal L_2$ that are true for an arbitrary sequence $$ \langle a_1,\dots, a_q,b_1^{l_1},\dots,b_s^{l_s}\rangle, $$ where $a_1,\dots,a_q\in A$, $b_i^{l_i}\subset A^{l_i}$, and $|b_i^{l_i}|\le \varkappa$. If $\varkappa\ge |A|$, then $\mathrm{Th}_2(A)$ and $\mathrm{Th}_2^\varkappa(A)$ coincide. \subsection{Formulation of the Main Theorem}\label{sec4_2} If $A=D\oplus G$, where the group~$D$ is divisible and the group~$G$ is reduced, then the \emph{expressible rank of the group~$A$} is the cardinal number $$ r_{\mathrm{exp}}=\mu=\max(\mu_D,\mu_G), $$ where $\mu_D$ is the rank of~$D$ and $\mu_G$ is the rank of the basic subgroup of~$G$. We want to prove the following theorem. \begin{theorem}\label{t4.1} For any infinite $p$-groups $A_1$~and~$A_2$ elementary equivalence of endomorphism rings $\mathop{\mathrm{End}}\nolimits(A_1)$ and $\mathop{\mathrm{End}}\nolimits(A_2)$ implies coincidence of the second order theories $\mathrm{Th}_2^{r_{\mathrm{exp}}(A_1)}(A_1)$ and $\mathrm{Th}_2^{r_{\mathrm{exp}}(A_2)}(A_2)$ of the groups $A_1$~and~$A_2$, bounded by the cardinal numbers $r_{\mathrm{exp}}(A_1)$ and $r_{\mathrm{exp}}(A_2)$, respectively. \end{theorem} In Secs.~\ref{sec5}--\ref{sec7}, we shall separately prove this theorem for Abelian groups $A_1$~and~$A_2$ with various properties, and in Sec.~\ref{sec8} we shall combine these results and prove the main theorem. Note that if the group~$A$ is finite, then the ring $\mathop{\mathrm{End}}\nolimits(A)$ is also finite. Since in the case of finite models elementary equivalence (and also equivalence in the language~$\mathcal L_2$) is the same as isomorphism, in the case where one of the groups $A_1$~and~$A_2$ is finite Theorem~\ref{t4.1} follows from Theorem~\ref{t2.8}. Therefore we now suppose that the groups $A_1$~and~$A_2$ are infinite. \subsection{Proofs of ``Converse'' Theorems} Let us prove two theorems which are, in some sense, converse to our main theorem.
\begin{theorem}\label{t4.2} For any Abelian groups $A_1$~and~$A_2$, if the groups $A_1$~and~$A_2$ are equivalent in the second order logic~$\mathcal L_2$, then the rings $\mathop{\mathrm{End}}\nolimits(A_1)$ and $\mathop{\mathrm{End}}\nolimits(A_2)$ are elementarily equivalent. \end{theorem} \begin{proof} Every 2-place predicate variable $P(v_1,v_2)$ will be called a~\emph{correspondence on the group~$A$}. A~correspondence $P(v_1,v_2)$ on the group~$A$ will be called a~\emph{function on the group~$A$} (notation: $\mathrm{Func}(P(v_1,v_2))$, or simply $\mathrm{Func}(P)$) if it satisfies the condition $$ (\forall x\, \exists y\, P(x,y)) \logic\land (\forall x\, \forall y_1\, \forall y_2\, (P(x,y_1) \logic\land P(x,y_2)\Rightarrow y_1=y_2)). $$ A~function $P(v_1,v_2)$ will be called an \emph{endomorphism on the group~$A$} (notation: $\mathrm{Endom}(P(v_1,v_2))$, or simply $\mathrm{Endom}(P)$) if it satisfies the additional condition $$ \forall x_1\, \forall x_2\, \forall y_1\, \forall y_2\, (P(x_1,y_1) \land P(x_2,y_2)\Rightarrow P(x_1+x_2,y_1+y_2)). $$ Now consider an arbitrary sentence~$\varphi$ of the first order ring language. This sentence can contain the subformulas \begin{enumerate} \item $\forall x$; \item $\exists x$; \item $x_1=x_2$; \item $x_1=x_2+x_3$; \item $x_1=x_2\cdot x_3$. \end{enumerate} Let us translate this sentence into a~sentence~$\tilde \varphi$ of the second order group language by the following algorithm: \begin{enumerate} \item the subformula $\forall x\, ({\dots})$ is translated to the subformula $$ \forall P^x(v_1,v_2)\, (\mathrm{Endom}(P^x)\Rightarrow {\dots}); $$ \item the subformula $\exists x\, ({\dots})$ is translated to the subformula $$ \exists P^x(v_1,v_2)\, (\mathrm{Endom}(P^x)\land {\dots}); $$ \item the subformula $x_1=x_2$ is translated to the subformula $$ \forall y_1\, \forall y_2\, (P^{x_1}(y_1,y_2) \Leftrightarrow P^{x_2}(y_1,y_2)); $$ \item the subformula $x_1=x_2+x_3$ is translated to the subformula $$ \forall y\, \forall z_1\, \forall z_2\, \forall z_3\, (P^{x_2}(y,z_2)\land P^{x_3}(y,z_3) \Rightarrow (P^{x_1}(y,z_1)\Leftrightarrow z_1=z_2+z_3)); $$ \item the subformula $x_1=x_2\cdot x_3$ is translated to the subformula $$ \forall y\, \forall z\, (P^{x_1}(y,z) \Rightarrow \exists t\, (P^{x_2}(y,t)\land P^{x_3}(t,z))). $$ \end{enumerate} We need to show that a~sentence~$\varphi$ holds in the model $\mathop{\mathrm{End}}\nolimits(A)$ if and only if the sentence~$\tilde \varphi$ holds in the model~$A$. If $A$ is a~model of an Abelian group, then the model $\mathop{\mathrm{End}}\nolimits(A)$ consists of sets~$x$ of pairs of elements of the model~$A$, $x\subseteq \{ \langle u_1,u_2\rangle \mid u_1,u_2\in A\}$, satisfying the conditions \begin{enumerate} \item $\forall u_1\, \exists u_2\, \langle u_1,u_2\rangle\in x$; \item $\forall u_1\, \forall u_2\, \forall u_3\, (\langle u_1,u_2\rangle \in x \logic\land \langle u_1,u_3\rangle \in x\Rightarrow u_2=u_3)$; \item $\forall u_1\, \forall u_2\, \forall u_3\, \forall u_4\, (\langle u_1,u_3\rangle\in x \logic\land \langle u_2,u_4\rangle \in x \Rightarrow \langle u_1+u_2,u_3+u_4\rangle \in x)$. \end{enumerate} Therefore a~sequence $a_1,\dots,a_q$ for which the formula~$\varphi$ is satisfied in the model $\mathop{\mathrm{End}}\nolimits(A)$ is a~sequence consisting of sets of pairs of elements from~$A$ satisfying the conditions (1)--(3). Let us identify every element of $\mathop{\mathrm{End}}\nolimits(A)$ with the corresponding set of pairs of the model~$A$.
Let an element~$a_i$ of the model $\mathop{\mathrm{End}}\nolimits(A)$ correspond to a~set $A_i\subset A\times A$. 1. If the formula~$\varphi$ has the form $x_i=x_j$, then $\varphi$ holds for a~sequence $a_1,\dots,a_q$ if and only if $a_i=a_j$, i.e., $a_i$~and~$a_j$ are equal endomorphisms of the model $\mathop{\mathrm{End}}\nolimits(A)$, and the sets $A_i$~and~$A_j$ consist of the same elements, i.e., in the model~$A$ for the sequence $A_1,\dots,A_q$ the formula $$ \forall y_1\, \forall y_2\, (P^{x_i}(y_1,y_2)\Leftrightarrow P^{x_j}(y_1,y_2)) $$ is true. 2. If the formula $\varphi$ has the form $x_i=x_j+x_k$, then $\varphi$ holds for a~sequence $a_1,\dots,a_q$ if and only if $a_i=a_j+a_k$, i.e., an endomorphism~$a_i$ is the sum of endomorphisms $a_j$~and~$a_k$, and this means that in the model~$A$ for every element $b\in A$ and for every $b_1,b_2,b_3\in A$ such that $\langle b,b_1\rangle\in A_i$, $\langle b,b_2\rangle\in A_j$, $\langle b,b_3\rangle\in A_k$, we have $b_1=b_2+b_3$ (i.e., formally speaking, $\langle b_1,b_2,b_3\rangle\in I(Q_1^3)$). It is equivalent to $A\vDash \tilde \varphi$. 3. If the formula~$\varphi$ has the form $x_i=x_j\cdot x_k$, then the formula~$\varphi$ is true for a~sequence $a_1,\dots,a_q$ if and only if $a_i=a_j\cdot a_k$, i.e., the endomorphism~$a_i$ is a~composition of endomorphisms $a_j$~and~$a_k$, and this means that in the model~$A$ for every $b_1\in A$ and for every $b_2\in A$ such that $\langle b_1,b_2\rangle\in A_i$, there exists $b_3\in A$ such that $\langle b_1,b_3\rangle\in A_j$ and $\langle b_3,b_2\rangle \in A_k$. This is equivalent to $A\vDash \tilde \varphi$. 4. If $\varphi$ has the form $\theta_1\land \theta_2$, and each of $\theta_1$~and~$\theta_2$ is true in the model $\mathop{\mathrm{End}}\nolimits(A)$ for a~sequence $a_1,\dots,a_q$ if and only if $\tilde \theta_1$ (respectively, $\tilde \theta_2$) is true in the model~$A$ for the sequence $A_1,\dots,A_q$, then it is clear that the formula~$\varphi$ is true in the model $\mathop{\mathrm{End}}\nolimits(A)$ for a~sequence $a_1,\dots,a_q$ if and only if the formula~$\tilde \varphi$ is true in the model~$A$ for the sequence $A_1,\dots,A_q$, because $$ \widetilde{\theta_1\land \theta_2}=\tilde \theta_1\land \tilde \theta_2. $$ 5. The case where the formula~$\varphi$ has the form $\neg \theta$ is similar, because $$ \widetilde{\neg \theta}=\neg \tilde \theta. $$ 6. Finally, suppose that the formula~$\varphi$ has the form $\forall x_i\, \psi$. The formula~$\varphi$ is true in the model $\mathop{\mathrm{End}}\nolimits(A)$ for a~sequence $a_1,\dots,a_q$ if and only if the formula~$\psi$ is true in the model $\mathop{\mathrm{End}}\nolimits(A)$ for the sequence $a_1,\dots,a_{i-1},a,a_{i+1},\dots,a_q$ for any $a\in \mathop{\mathrm{End}}\nolimits(A)$, i.e., the formula~$\tilde \psi$ is true in the model~$A$ for the sequence $A_1,\dots,A_{i-1}$, $\bar A,A_{i+1},\dots,A_q$ for every set $\bar A\subset A\times A$ which is an endomorphism of the group~$A$, i.e., which satisfies the formula $\mathrm{Endom}$. Therefore the formula~$\varphi$ is true in the model $\mathop{\mathrm{End}}\nolimits(A)$ for a~sequence $a_1,\dots,a_q$ if and only if the formula $$ \widetilde{\forall x_i\, \psi} \mathbin{{:}\!=} \forall P^{x_i}(v_1,v_2)\, (\mathrm{Endom}(P^{x_i})\Rightarrow \tilde \psi) $$ is true for the sequence $A_1,\dots,A_q$ in the model~$A$. Suppose now that Abelian groups $A_1$~and~$A_2$ are equivalent in the language~$\mathcal L_2$.
Consider an arbitrary sentence~$\varphi$ of the first order ring language, which is true in $\mathop{\mathrm{End}}\nolimits(A_1)$. Then the sentence~$\tilde \varphi$ is true in the group~$A_1$ and hence in the group~$A_2$. Consequently, the sentence~$\varphi$ is true in the ring $\mathop{\mathrm{End}}\nolimits(A_2)$. Therefore the rings $\mathop{\mathrm{End}}\nolimits(A_1)$ and $\mathop{\mathrm{End}}\nolimits(A_2)$ are elementarily equivalent. \end{proof} For the next theorem we need some formulas. 1. The formula $$ \mathrm{Gr}(P(v))\mathbin{{:}\!=} \forall a\, \forall b\, (P(a)\land P(b) \Rightarrow \exists c\, (c=a+b\logic\land P(c))) \logic\land P(0) \logic\land \forall a\, (P(a)\Rightarrow \exists b\, (b=-a \logic\land P(b))) $$ holds for those sets $\{ a\in A\mid P(a)\}$ that are subgroups in~$A$, and only for them. 2. The formula $$ \mathrm{Cycl}(P(v))\mathbin{{:}\!=} \mathrm{Gr}(P(v)) \logic\land \exists a\, (P(a) \logic\land \forall P_a(v)\, (\mathrm{Gr}(P_a(v)) \land P_a(a)\Rightarrow \forall b\, (P(b)\Rightarrow P_a(b)))) $$ characterizes cyclic subgroups in~$A$. 3. The formula \begin{multline*} \mathrm{DCycl}(P(v))\mathbin{{:}\!=} \mathrm{Gr}(P(v)) \logic\land \forall a\, (P(a)\Rightarrow \exists P_1(v)\, \exists P_2(v)\, (P_1(a) \logic\land \mathrm{Cycl}(P_1(v))\\ {}\logic\land \forall b\, \neg (P_1(b) \land P_2(b)) \logic\land \forall b\, (P(b)\Rightarrow \exists b_1\, \exists b_2\, (P_1(b_1) \logic\land P_2(b_2) \logic\land b=b_1+b_2)))) \end{multline*} characterizes those subgroups in~$A$ that are direct sums of cyclic subgroups. 4. For every $a,a_1,a_2\in A$ the formula $$ \mathrm{Gr}_a(P_a(v)) \mathbin{{:}\!=} P_a(a) \logic\land \mathrm{Gr}(P_a(v)) \logic\land \forall P(v)\, (P(a)\land \mathrm{Gr}(P(v))\Rightarrow \forall b\, (P_a(b)\Rightarrow P(b))) $$ defines in~$A$ the subgroup $\{ b\in A\mid P_a(b)\}$ of all multiples of the element~$a$; the formula \begin{align*} & (o(a_1)\le o(a_2))\\ & \quad \mathbin{{:}\!=} \exists P_1(v)\, \exists P_2(v)\, \exists P(v_1,v_2)\, (\mathrm{Gr}_{a_1}(P_1) \logic\land \mathrm{Gr}_{a_2}(P_2) \logic\land \forall b_1\, (P_1(b_1)\Rightarrow \exists b_2\, (P_2(b_2)\land P(b_1,b_2)))\\ & \quad \logic\land \forall b_1\, \forall b_2\, \forall c_1\, \forall c_2\, (P_1(b_1) \logic\land P_1(c_1) \logic\land b_1\ne c_1 \logic\land P_2(b_2) \logic\land P_2(c_2) \logic\land P(b_1,b_2) \logic\land P(c_1,c_2)\Rightarrow b_2\ne c_2)) \end{align*} holds if and only if the order of~$a_1$ is not greater than the order of~$a_2$; the formula $$ (o(a_1)=o(a_2)) \mathbin{{:}\!=} (o(a_1)\le o(a_2)) \logic\land (o(a_2)\le o(a_1)) $$ shows that the orders of $a_1$~and~$a_2$ coincide; the formula $$ (o(a_1)< o(a_2))\mathbin{{:}\!=} (o(a_1)\le o(a_2)) \logic\land \neg (o(a_2)\le o(a_1)) $$ shows that the order of~$a_1$ is strictly smaller than the order of~$a_2$. 5. For every $a\in A$ the formula $$ \mathrm{GOrd}_a(P(v))\mathbin{{:}\!=} \mathrm{Gr}(P) \logic\land \forall b\, (P(b)\Rightarrow o(b)\le o(a)) $$ holds for those subgroups that are bounded by the order of~$a$, and only for them. 6.
The formula \begin{align*} & \mathrm{Mult}_a(x,b) \mathbin{{:}\!=} \exists P(v)\, \exists P_{x,b}(v_1,v_2)\, ( \mathrm{Cycl}(P) \logic\land P(x) \logic\land P(b) \logic\land{} \forall b_1\, ( P(b_1)\Rightarrow \exists b_2\, (P(b_2) \land P_{x,b}(b_1,b_2))) \\ & \quad {}\logic\land ( \forall b_1\, \forall b_2\, \forall b_3\, P(b_1) \land P_{x,b}(b_1,b_2) \land P_{x,b}(b_1,b_3) \Rightarrow b_2=b_3 ) \\ & \quad {}\logic\land ( \forall b_1\, \forall b_2\, \forall b_3\, \forall c_1\, \forall c_2\, \forall c_3\, P(b_1) \logic\land P(b_2) \logic\land P(b_3) \\ & \quad {}\logic\land b_3=b_1+b_2 \logic\land c_3=c_1+c_2 \logic\land P_{x,b}(b_1,c_1) \logic\land P_{x,b}(b_2,c_2) \Rightarrow P_{x,b}(b_3,c_3) ) \\ & \quad {}\logic\land P_{x,b}(x,0) \logic\land \forall y\, ( P(y) \land py=x \Rightarrow \neg P_{x,b}(y,0) ) \logic\land \exists c\, ( P_{x,b}(b,c) \land o(c)=o(a) )) \end{align*} holds for elements $x$ and~$b$ with the property $x=o(a)\cdot b$, and only for them. 7. The formula $$ \mathrm{Serv}(P(v)) \mathbin{{:}\!=} \mathrm{Gr}(P) \logic\land \forall a\, \forall x\, (P(x) \logic\land \exists b\, \mathrm{Mult}_a(x,b)\Rightarrow \exists c\, (P(c)\land \mathrm{Mult}_a(x,c))) $$ holds for pure subgroups of the group~$A$, and only for them. 8. The formula $$ \mathrm{FD}(P(v))\mathbin{{:}\!=} \mathrm{Gr}(P) \logic\land \forall a\,\exists b\, \exists x_1\,\exists x_2\, (P(x_1) \logic\land P(x_2) \logic\land a+x_1=p(b+x_2)) $$ holds for subgroups $G=\{ x\mid P(x)\}$ such that the quotient group $A/G$ is divisible, and only for them. 9. The formula $$ \mathrm{Base}(P(v)) \mathbin{{:}\!=} \mathrm{Gr}(P) \land \mathrm{DCycl}(P) \land \mathrm{Serv}(P) \land \mathrm{FD}(P) $$ defines basic subgroups in~$A$. It is clear that if we have some subgroup~$G'$ of the group~$A$, then we similarly can write the formula $\mathrm{Base}_{G'}(P)$ which holds for basic subgroups of the group~$G'$, and only for them. 10. The formula $$ \mathrm{D}(P(v)) \mathbin{{:}\!=} \mathrm{Gr}(P) \logic\land \forall a\, ( P(a) \Rightarrow \exists b\, ( P(b) \logic\land a=pb )) $$ defines divisible subgroups in~$A$. 11. The sentence \begin{align*} & \mathrm{Exept} \mathbin{{:}\!=} \forall P\, (\mathrm{Gr}(P) \logic\land \exists a\, (P(a)\logic\land a\ne 0) \Rightarrow \neg ( \mathrm{D}(P) )) \\ & \quad \logic\land \forall P(v)\, ( \mathrm{Base}(P)\Rightarrow \neg ( \exists F(v_1,v_2)\, ( \forall a\, ( P(a)\Rightarrow \exists b\, (F(a,b))) \\ & \quad \logic\land \forall b\, \exists a\, (P(a)\land F(a,b)) \logic\land \forall a\, \forall b\, (F(a,b) \Rightarrow P(a)) \\ & \quad \logic\land \forall a_1\, \forall a_2\, \forall b_1\, \forall b_2\, ( a_1\ne a_2 \logic\land F(a_1,b_1) \logic\land F(a_2,b_2) \Rightarrow b_1\ne b_2 ) \\* & \quad \logic\land \forall b_1\, \forall b_2\, \forall a_1\, \forall a_2\, ( b_1\ne b_2 \logic\land F(a_1,b_1) \logic\land F(a_2,b_2) \Rightarrow a_1\ne a_2 ) ))) \end{align*} is true for reduced $p$-groups such that their basic subgroups have smaller power (and therefore are countable), and only for them. Thus if $B_1$ is a~basic subgroup of the group~$A_1$, $B_2$~is a~basic subgroup of~$A_2$, $\varkappa_1=|B_1|$, and $\varkappa_2=|B_2|$, then $$ \mathrm{Th}_2^{\varkappa_1}(A_1)=\mathrm{Th}_2^{\varkappa_2}(A_2) $$ implies that either the groups $A_1$~and~$A_2$ are reduced, their basic subgroups are countable, and they themselves are uncountable, or this holds for neither of the groups $A_1$~and~$A_2$. In the first case $\varkappa_1=\varkappa_2=\omega$.
\begin{theorem}\label{t4.2d} If Abelian groups $A_1$~and~$A_2$ are reduced and their basic subgroups are countable, then $\mathrm{Th}_2^\omega(A_1)=\mathrm{Th}_2^\omega(A_2)$ implies elementary equivalence of the rings $\mathop{\mathrm{End}}\nolimits(A_1)$ and $\mathop{\mathrm{End}}\nolimits(A_2)$. \end{theorem} \begin{proof} We know (see Theorem~\ref{4.endom}) that for a~reduced $p$-group~$A$ the action of any endomorphism $\varphi\in \mathop{\mathrm{End}}\nolimits(A)$ is completely defined by its action on a~basic subgroup~$B$. Furthermore, let $A'\subset A$ and let $B$~be also a~basic subgroup of~$A'$. Then any $\varphi\colon A'\to A$ is also completely defined by its action on~$B$. Indeed, if $\varphi_1,\varphi_2\colon A'\to A$ and $\varphi_1(b)=\varphi_2(b)$ for all $b\in B$, then for $\varphi \mathbin{{:}\!=} \varphi_1-\varphi_2\colon A'\to A$ we have $\varphi(b)=0$ for all $b\in B$. Hence $\varphi$ induces a~homomorphism $\tilde \varphi\colon A'/B\to A$. But the group $A'/B$ is divisible and the group~$A$ is reduced, whence $\tilde \varphi= 0$. Consequently, $\varphi=0$. Note that for every element $a\in A$ there exists a~countable subgroup $A'\subset A$ containing~$a$ and the group~$B$ as a~basic subgroup. Indeed, consider a~quasibasis of the group~$A$ having the form $$ \{ a_i,c_{j,n}\}_{i\in \omega,\ j\in \varkappa,\ n\in \omega}, $$ where $\{ a_i\}$ is a~basis of~$B$, $p c_{j,1}=0$, $pc_{j,n+1}=c_{j,n}-b_{j,n}$, $b_{j,n}\in B$, $o(b_{j,n})\le p^n$, and $o(c_{j,n})=p^n$. As we remember, every element $a\in A$ can be written in the form $$ a=s_1 a_{i_1}+\dots+s_m a_{i_m}+t_1 c_{j_1,n_1}+\dots+t_r c_{j_r,n_r}, $$ where $s_i$~and~$t_j$ are integers, none of~$t_j$ is divisible by~$p$, and the indices $i_1,\dots,i_m$, $j_1,\dots,j_r$ are all distinct. Further, this form is unique in the sense that the members $sa_i$ and $tc_{j,n}$ are uniquely defined. Consider such a~decomposition of our element~$a$ and the subgroup of~$A$ generated by the group~$B$ and all~$c_{k,n}$, where $n\in \omega$ and $k\in \{ j_1,\dots,j_r\}$. This group~$A'$ is countable, it contains~$a$, and $B\subset A'$ is its basic subgroup. Let now a~predicate $B(v)$ satisfy in~$A$ the formula $\mathrm{Base}(B)$, i.e., $B(v)$ defines in~$A$ a~basic subgroup $B=\{ x\mid B(x)\}$. A~correspondence $P(v_1,v_2)$ is called a~\emph{homomorphism of the group~$B$ into the group~$A$} (notation: $\mathop{\mathrm{Hom}}\nolimits_B(P)$) if \begin{multline*} \forall x\, (B(x)\Leftrightarrow \exists y\, (P(x,y))) \logic\land \forall x\, \forall y_1\, \forall y_2\, (P(x,y_1)\land P(x,y_2) \Rightarrow y_1=y_2) \\ \logic\land \forall x_1\, \forall x_2\, \forall y_1\, \forall y_2\, (P(x_1,y_1) \land P(x_2,y_2) \Rightarrow P(x_1+x_2,y_1+y_2)). \end{multline*} It is clear that such a~predicate $P(v_1,v_2)$ can be used in sentences from $\mathrm{Th}_2^\omega(A)$, because the group~$B$ is countable. Consider some $B(v)$ such that the formula $\mathrm{Base}(B)$ is true, a~predicate $\Phi(v_1,v_2)$ such that $\mathop{\mathrm{Hom}}\nolimits_B(\Phi)$, and $a\in A$.
We shall write $b=\Phi(a)$ if \begin{enumerate} \item $B(a)\land \Phi(a,b)$ or \item $\neg B(a) \logic\land \forall G(v)\, ( \mathrm{Gr}(G) \logic\land G(a) \logic\land \forall x\, (B(x)\Rightarrow G(x)) \logic\land \mathrm{Base}_G(B) \Rightarrow \exists \varphi (v_1,v_2)\, ( \forall x\, ( G(x)\Leftrightarrow \exists y\, (\varphi(x,y))) \logic\land \forall x\, \forall y_1\, \forall y_2\, (\varphi(x,y_1)\land \varphi(x,y_2)\Rightarrow y_1=y_2) \logic\land \forall x_1\, \forall x_2\, \forall y_1\, \forall y_2\, (\varphi(x_1,y_1)\land \varphi(x_2,y_2)\Rightarrow \varphi(x_1+x_2,y_1+y_2)) \logic\land \forall x\, \forall y\, (\Phi(x,y) \Rightarrow \varphi(x,y)) \logic\land \varphi(a,b) ))$. \end{enumerate} It is clear that for every $a\in A$ there exists at most one $b\in A$ such that $b=\Phi(a)$, and if the homomorphism $\Phi\colon B\to A$ can be extended to an endomorphism $A\to A$, then such an element~$b$ necessarily exists. Now we shall consider $\Phi(v_1,v_2)$ such that \begin{multline*} \mathrm{Endom}_B(\Phi) \mathbin{{:}\!=} \mathop{\mathrm{Hom}}\nolimits_B(\Phi) \logic\land \forall a\, \exists b\, (b=\Phi(a)) \\ {}\logic\land \forall a_1\, \forall a_2\, \forall b_1\, \forall b_2\, (b_1=\Phi(a_1) \logic\land b_2=\Phi(a_2) \Rightarrow b_1+b_2=\Phi(a_1+a_2)). \end{multline*} In our case these $\Phi(v_1,v_2)$ define endomorphisms from $\mathop{\mathrm{End}}\nolimits(A)$. Let us describe the algorithm of translation of formulas in this case. A~sentence~$\varphi$ is translated to the sentence $$ \tilde \varphi=\exists B(v)\, (\mathrm{Base}(B)\land \varphi'(B)), $$ where the formula~$\varphi'$ is obtained from the sentence~$\varphi$ in the following way: \begin{enumerate} \item the subformula $\forall x\, ({\dots})$ is translated to the subformula $$ \forall \Phi^x(v_1,v_2)\, (\mathrm{Endom}_B(\Phi^x)\Rightarrow \ldots); $$ \item the subformula $\exists x\, ({\dots})$ is translated to the subformula $$ \exists \Phi^x(v_1,v_2)\, (\mathrm{Endom}_B(\Phi^x)\land \ldots); $$ \item the subformula $x_1=x_2$ is translated to the subformula $$ \forall y_1\, \forall y_2\, (\Phi^{x_1}(y_1,y_2) \Leftrightarrow \Phi^{x_2}(y_1,y_2)); $$ \item the subformula $x_1=x_2+x_3$ is translated to the subformula $$ \forall y\, \forall z_1\, \forall z_2\, \forall z_3\, (\Phi^{x_2}(y,z_2)\land \Phi^{x_3}(y,z_3) \Rightarrow (\Phi^{x_1}(y,z_1)\Leftrightarrow z_1=z_2+z_3)); $$ \item the subformula $x_1=x_2\cdot x_3$ is translated to the subformula $$ \forall y\, \forall z\, (\Phi^{x_1}(y,z) \Rightarrow \exists t\, (\Phi^{x_2}(y,t)\land z=\Phi^{x_3}(t))). $$ \end{enumerate} Now the proof is similar to the proof of the previous theorem. \end{proof} \subsection{Different Cases of the Problem}\label{sec4_4} Following Theorem~\ref{t2.8}, we divide the class of all Abelian $p$-groups into the following three subclasses: \begin{enumerate} \item bounded $p$-groups; \item the groups $D\oplus G$, where $D$~is a~nonzero divisible group and $G$~is a~bounded group; \item groups with unbounded basic subgroups. \end{enumerate} Now we shall show how to find sentences which distinguish groups from different subclasses. If a~group $A$ is bounded, then there exists a~natural number $n=p^k$ such that $$ \forall a\, (na=0) $$ in the group~$A$. This means that in the ring $\mathop{\mathrm{End}}\nolimits(A)$ we also have $$ \forall x\, (nx=0). $$ But if a~group~$A$ is unbounded, then for every~$n$ this sentence is false. Therefore we can distinguish case~(1) from the other cases. Moreover, we can now find the supremum of orders of elements from~$A$.
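For instance (a~simple illustration with a~hypothetical group, not used in the sequel): if $A=\bigoplus\limits_{\omega}\mathbb Z(p^2)\oplus \mathbb Z(p^3)$, then in the ring $\mathop{\mathrm{End}}\nolimits(A)$ $$ \forall x\, (p^3x=0)\ \ \text{is true, while}\ \ \forall x\, (p^2x=0)\ \ \text{is false}, $$ so the exact bound $p^3$ of the group~$A$ is recovered as the least $n=p^k$ for which the sentence $\forall x\, (nx=0)$ holds in $\mathop{\mathrm{End}}\nolimits(A)$.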
We shall denote this sentence $\forall x\, (nx=0)$ by~$\varphi_n$. Now consider the sentence \begin{align*} & \psi_n \mathbin{{:}\!=} \exists \rho_1\, \exists \rho_2\, ( \rho_1\rho_2=\rho_2\rho_1=0 \logic\land \rho_1^2=\rho_1 \logic\land \rho_2^2=\rho_2 \logic\land \rho_1+\rho_2=1 \\ & \quad \logic\land \forall x\, (n\cdot \rho_2x\rho_2 =0) \logic\land \forall \rho\, \forall \rho'\, ( \rho^2=\rho \logic\land {\rho'}^2=\rho' \logic\land \rho\rho'=\rho'\rho=0 \logic\land \rho+\rho'=\rho_1 \\ & \quad \logic\land \forall \tau_1\, \forall \tau_2\, (\tau_1^2=\tau_1 \logic\land \tau_2^2=\tau_2 \logic\land \tau_1\ne 0 \logic\land \tau_2\ne 0 \logic\land \tau_1\tau_2=\tau_2\tau_1=0 \Rightarrow \tau_1+\tau_2\ne \rho) \Rightarrow \forall x\, (\rho x\rho =0 \logic\lor p(\rho x \rho)\ne 0) )). \end{align*} Let us explain in words what this sentence means. 1. There exist orthogonal projections $\rho_1$~and~$\rho_2$; their sum is~$1$ in the ring $\mathop{\mathrm{End}}\nolimits(A)$. This means that $A=\rho_1A\oplus \rho_2 A$, $\rho_1 \mathop{\mathrm{End}}\nolimits(A)\rho_1=\mathop{\mathrm{End}}\nolimits(\rho_1 A)$, and $\rho_2 \mathop{\mathrm{End}}\nolimits(A)\rho_2=\mathop{\mathrm{End}}\nolimits(\rho_2 A)$. 2. The condition $\forall x\, (n\cdot \rho_2 x \rho_2=0)$ means that in the ring $\mathop{\mathrm{End}}\nolimits(\rho_2 A)$ all elements are bounded by a~number $n=p^k$, i.e., the group $\rho_2 A$ is bounded. 3. The last part of the sentence~$\psi_n$ states that if in the ring $\mathop{\mathrm{End}}\nolimits(\rho_1A)$ we consider a~primitive idempotent~$\rho$ (i.e., a~projection onto an indecomposable direct summand $\rho A=\rho \rho_1 A$), then this direct summand has no nonzero endomorphisms of order~$p$. Therefore in the group $\rho_1 A$ there are no cyclic direct summands, and so it is divisible. Consequently, a~ring $\mathop{\mathrm{End}}\nolimits(A)$ satisfies the sentence~$\psi_n$ if and only if $A=D\oplus G$, where the group~$D$ is divisible and the group~$G$ is bounded by the number~$n$. Hence for any two groups $A_1$~and~$A_2$ from different subclasses there exists a~sentence which distinguishes the rings $\mathop{\mathrm{End}}\nolimits(A_1)$ and $\mathop{\mathrm{End}}\nolimits(A_2)$. Thus we can now assume that if rings $\mathop{\mathrm{End}}\nolimits(A_1)$ and $\mathop{\mathrm{End}}\nolimits(A_2)$ are elementarily equivalent, then the groups $A_1$~and~$A_2$ belong to the same subclass, and if both of them belong to the first or the second subclass, then their reduced parts are bounded by the same number $n=p^k$, which is supposed to be fixed. \section[Bounded $p$-Groups]{Bounded $\boldsymbol{p}$-Groups}\label{sec5} \subsection{Separating Idempotents}\label{sec5_1} As we have seen above (see Sec.~\ref{sec4_4}), the property of $\rho \in \mathop{\mathrm{End}}\nolimits(A)$ to be an idempotent which is a~projection onto an indecomposable direct summand is a~first order property. Let us denote the formula expressing this property by $\mathrm{Idem}^*(\rho)$, while the formula expressing the property of $\rho\in \mathop{\mathrm{End}}\nolimits(A)$ to be simply an idempotent (not necessarily indecomposable) will be denoted by $\mathrm{Idem}(\rho)$. We consider the group $A=\sum\limits_{i=1}^k A_i$, where $A_i\cong \bigoplus\limits_{\mu_i} \mathbb Z (p^i)$. Since the group~$A$ is infinite, we have that $\mu_l=\max\limits_{i=1,\dots,k} \mu_i$ is infinite and coincides with~$|A|$. Consider for every $i=1,\dots,k$ the formula $$ \mathrm{Idem}_i^*(\rho)=\mathrm{Idem}^*(\rho)\land p^{i-1} \rho\ne 0\land p^i \cdot \rho=0. $$ For every~$i$ this formula is true for projections on direct summands of the group~$A$ which are isomorphic to $\mathbb Z(p^i)$, and only for them. Now consider the following formula: \begin{align*} & \mathrm{Comp}(\rho_1,\dots,\rho_k)= (\rho_1+\dots+\rho_k=1) \logic\land \biggl(\,\bigwedge_{i\ne j} \rho_i \rho_j=\rho_j\rho_i=0\biggr) \\ & \quad \logic\land \biggl(\,\bigwedge_{i=1}^k \rho_i^2=\rho_i\biggr) \logic\land \biggl(\,\bigwedge_{i=1}^k p^i \rho_i=0\biggr) \logic\land \biggl(\,\bigwedge_{i=1}^k p^{i-1} \rho_i\ne 0\biggr) \\ & \quad \logic\land \biggl(\,\bigwedge_{i=1}^k \forall \rho\, ( \mathrm{Idem}^*(\rho) \logic\land \exists \rho'\, (\rho+\rho'=\rho_i \logic\land \rho\rho'=\rho' \rho=0 \logic\land \mathrm{Idem}(\rho')) \Rightarrow \mathrm{Idem}^*_i(\rho)) \biggr). \end{align*} This formula says that the group~$A$ is decomposed into a~direct sum ${\rho_1A\oplus \rho_2 A\oplus\dots\oplus \rho_k A=A}$, and in every subgroup $\rho_iA$ all indecomposable direct summands have the order~$p^i$. Therefore $\rho_1A\oplus \dots\oplus \rho_k A$ is a~decomposition of~$A$, isomorphic to the decomposition $\sum\limits_{i=1}^k A_i$. Let us assume that the projections $\rho_1,\dots,\rho_k$ from the formula $\mathrm{Comp}({\dots})$ are fixed. To distinguish them from other idempotents we shall denote them by $\bar \rho_1,\dots,\bar \rho_k$. Having fixed idempotents $\bar \rho_1,\dots,\bar \rho_k$ of the ring $\mathop{\mathrm{End}}\nolimits(A)$, we have also its fixed subrings $$ \bar E_i=\bar \rho_i \mathop{\mathrm{End}}\nolimits(A)\bar \rho_i, $$ each of which is isomorphic to the ring $$ \mathop{\mathrm{End}}\nolimits(\bar \rho_iA)\cong \mathop{\mathrm{End}}\nolimits(A_i). $$ Given an idempotent~$\rho$ satisfying the formula $\mathrm{Idem}_i^*(\rho)$ ($\mathrm{Idem}_i(\rho)$), the formula expressing for~$\rho$ the fact that $\rho A$ is a~direct summand in the group $\bar \rho_i A$ (i.e., $\exists \rho'\, (\mathrm{Idem}(\rho') \logic\land \rho\rho'=\rho' \rho=0 \logic\land \rho+\rho'=\bar \rho_i)$) will be written as $\overline{\mathrm{Idem}_i^*}(\rho)$ ($\overline{\mathrm{Idem}_i}(\rho)$). This formula means that the subgroup $\rho A$ is a~direct summand in the group $\bar \rho_i A=A_i$. The number~$l$ from the set $\{ 1,\dots,k\}$ which satisfies the sentence $$ \mathrm{Card}_l=\bigwedge_{i=1,\,i\ne l}^k \exists a\, \forall \rho\, (\overline{\mathrm{Idem}_i^*}(\rho) \Rightarrow \rho a\bar \rho_l\ne 0) $$ is the number of the group~$A_l$ with $|A_l|=|A|=\mu$, because this sentence means that there exists an endomorphism $a\in \mathop{\mathrm{End}}\nolimits(A)$ mapping~$A_l$ into~$A$ in such a~way that its composition with the projection onto every indecomposable direct summand of~$A_i$ is nonzero. This means that $|A_l|\ge |A_i|$, i.e., the power $|A_l|$ is maximal. Let us assume that also the number~$l$ is fixed. The formula $\mathrm{Card}_l$ shows that we can write formulas that determine for every two projections $\rho_1$~and~$\rho_2$ whether $|\rho_1A|< |\rho_2A|$, or $|\rho_1A|> |\rho_2A|$, or $|\rho_1A|=|\rho_2A|$ is true. Let us denote these formulas by $|\rho_1|< |\rho_2|$, $|\rho_1|> |\rho_2|$, and $|\rho_1|=|\rho_2|$, respectively. The formula $$ \mathop{\mathrm{Fin}}\nolimits(\rho) \mathbin{{:}\!=} \forall \rho_1\, \forall \rho_2\, (\mathrm{Idem}(\rho_1) \logic\land \mathrm{Idem}(\rho_2) \logic\land \mathrm{Idem}(\rho) \logic\land \rho_2\ne 0 \logic\land \rho_1=\rho+\rho_2 \logic\land \rho\rho_2=\rho_2\rho=0 \Rightarrow |\rho|< |\rho_1|) $$ means that the group $\rho A$ is finitely generated.
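To illustrate the formulas $\mathrm{Card}_l$ and $|\rho_1|< |\rho_2|$ (a~side remark with a~hypothetical group): let $$ A=\bigoplus_{\omega}\mathbb Z(p)\oplus \bigoplus_{\omega_1}\mathbb Z(p^2),\qquad k=2,\quad \mu_1=\omega,\quad \mu_2=\omega_1=\mu. $$ Here $\mathrm{Card}_2$ holds: there is an endomorphism mapping $A_2$ onto~$A_1$, and its composition with the projection onto every indecomposable direct summand of~$A_1$ is nonzero. On the other hand, $\mathrm{Card}_1$ fails: the image of the countably generated group~$A_1$ meets only countably many independent direct summands of~$A_2$ nontrivially, so for every~$a$ there is a~projection~$\rho$ with $\overline{\mathrm{Idem}_2^*}(\rho)$ and $\rho a\bar \rho_1=0$.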
Accordingly, the formula $$ \mathrm{Inf}(\rho)\mathbin{{:}\!=} \mathrm{Idem}(\rho) \logic\land \neg \mathop{\mathrm{Fin}}\nolimits(\rho) $$ holds for projections on infinitely generated groups. The formula $$ \mathrm{Count}(\rho) \mathbin{{:}\!=} \mathrm{Inf}(\rho) \logic\land \forall \rho_1 \, (\mathrm{Inf}(\rho_1)\Rightarrow |\rho|\le |\rho_1|) $$ is true for projections on countably generated groups and only for them. Finally, we need the formula $$ \overline{\mathrm{Idem}_l^\omega}(\rho)= \overline{\mathrm{Idem}_l}(\rho)\land \mathrm{Count}(\rho), $$ which means that the group $\rho A$ is a~countably generated direct summand of the group~$A_l$. \subsection{Special Sets}\label{sec5_2} First we shall formulate which special sets we want to have. We must obtain two sets. One of them must contain, for every $i=1,\dots ,k$, $\mu_i$~independent projections on indecomposable direct summands of~$A_i$; the other set must contain $\mu=\mu_l$ projections on independent countably generated direct summands of the group~$A_l$. By Theorem~\ref{shel_t4_1}, we see that there exists a~formula $\varphi(\bar g;f)$ satisfying the following condition. If $\{ f_i\}_{i\in \mu}$ is a~set of elements from $\mathop{\mathrm{End}}\nolimits(A')$, then there exists a~vector~$\bar g$ such that the formula $\varphi(\bar g;f)$ is true in $\mathop{\mathrm{End}}\nolimits(A')$ if and only if $f=f_i$ for some $i\in \mu$. We fix this formula~$\varphi$. Suppose that we have some fixed $i\in \{ 1,\dots,k\}$. We have already shown that from the ring $\mathop{\mathrm{End}}\nolimits(A)$ we can pass to the ring $\mathop{\mathrm{End}}\nolimits(A_i)$. Suppose that we argue in the ring $\mathop{\mathrm{End}}\nolimits(A_i)$ (which satisfies the conditions of Theorem~\ref{shel_t4_1}). In this ring let us consider the following formula: \begin{align*} & \tilde \varphi_i(\bar g) \mathbin{{:}\!=} \forall f'\, (\varphi(\bar g,f')\Rightarrow \overline{\mathrm{Idem}_i^*}(f')) \\ & \quad \logic\land \forall f'\, (\overline{\mathrm{Idem}_i}(f') \logic\land \forall f_1\, (\varphi(\bar g,f_1) \Rightarrow \exists f_2\, (\overline{\mathrm{Idem}_i}(f_2) \logic\land f_1f_2=f_2f_1=0 \logic\land f_1+f_2=f') )\Rightarrow |f'|=|\rho_i|) \\ & \quad \logic\land \forall f'\, ( \varphi(\bar g,f')\Rightarrow ( \exists f\, ( \overline{\mathrm{Idem}_i}(f) \logic\land \forall f_1\, (\varphi(\bar g,f_1) \logic\land f_1\ne f'\Rightarrow f_1 f=ff_1=f_1) \logic\land ff'=f'f=0 ))). \end{align*} The part $\forall f'\, (\varphi(\bar g,f')\Rightarrow \overline{\mathrm{Idem}_i^*}(f'))$ means that the vector~$\bar g$ is such that the formula $\varphi(\bar g,f)$ is true only for projections~$f$ on indecomposable direct summands of the group~$A_i$. The part $\forall f'\, (\overline{\mathrm{Idem}_i}(f') \logic\land \forall f_1\, (\varphi(\bar g,f_1)\Rightarrow \exists f_2\, (\overline{\mathrm{Idem}_i}(f_2) \logic\land f_1f_2=f_2f_1=0 \logic\land f_1+f_2=f'))\Rightarrow {|f'|=|\rho_i|})$ means that those direct summands of the group~$A_i$ that contain all summands~$fA$ satisfying $\varphi (\bar g,f)$ have the same power as~$A_i$, i.e., this part means that the power of the set of these~$f$ is equal to~$\mu_i$. The last part of the formula means that for every~$f'$ satisfying $\varphi(\bar g,f')$, the summand generated by the images of all other~$f$ satisfying $\varphi(\bar g,f)$ intersects~$f'A$ trivially, i.e., the set of all $f$ satisfying $\varphi(\bar g,f)$ is independent. This set will be denoted by~$\mathbf F_i$.
It consists of~$\mu_i$ independent projections on indecomposable direct summands of the group~$A_i$. Naturally, this set can be obtained for every vector~$\bar g_i$ satisfying the formula $\tilde \varphi_i(\bar g_i)$; therefore we have to write not~$\mathbf F_i$, but $\mathbf F_i(\bar g_i)$, and we shall do so in what follows. But in the cases where parameters are not so important we shall omit them. The union of all~$\mathbf F_i$ for $i=1,\dots,k$ will be denoted by~$\mathbf F$. The set~$\mathbf F$ depends on the parameter $\bar g=(\bar g_1,\dots,\bar g_k)$. Now we need to obtain a~set~$\mathbf F'$ consisting of $\mu_l=\mu$ independent projections on countably generated direct summands of~$A_l$. This will be done similarly to the previous case; we only need to replace in the formula $\tilde \varphi_l(\bar g')$ the subformula $\overline{\mathrm{Idem}_l^*}(f)$ by $\overline{\mathrm{Idem}_l^\omega}(f)$; besides, we shall consider vectors~$\bar g'$ such that \begin{enumerate} \item $\forall f \in \mathbf F_l\, (\exists f'\, (\varphi(\bar g',f')\logic\land ff'=f))$, i.e., for every cyclic direct summand $fA$ (where $f\in \mathbf F_l$) of~$A_l$ there exists a~countably generated summand $f'A$ of~$A_l$ such that $\varphi (\bar g',f')$ and $fA\subset f'A$; \item (we shall describe it in words, because we do not want to write complicated formulas) every direct summand in~$A_l$ which contains all~$fA$ for all projections~$f$ such that $\varphi(\bar g_l,f)$, contains all~$f'A$ such that $\varphi(\bar g',f')$. \end{enumerate} Denote the corresponding formula by $\Tilde{\Tilde \varphi}_l(\bar g')$ and the obtained set of projections by $\mathbf F'=\mathbf F'(\bar g')$. \subsection[Interpretation of the Group $A$ for Every Element $\protect\mathbf F'$]% {Interpretation of the Group $\boldsymbol{A}$ for Every Element $\boldsymbol{\mathbf F'}$}\label{sec5_3} By interpretation of the group~$A$ for every element from~$\mathbf F'$ we understand the following. We have $\mu$~independent direct summands ${\mathcal F_i =f_iA}$ ($i\in \mu$) each of which is a~direct sum of a~countable set of cyclic groups of order~$p^l$. Every endomorphism of the group~$A$ acts independently on every summand~$\mathcal F_i$; hence, if we can make every suitable homomorphism of~$\mathcal F_i$ into~$A$ correspond to an element of~$A$, then we shall be able to map every endomorphism $\varphi \in \mathop{\mathrm{End}}\nolimits(A)$ to a~set of $\mu$ elements of~$A$. This is what we need below to obtain the second order theory of the group~$A$. So in this section, we shall concentrate on a~bijective correspondence between some homomorphisms from the group~$\mathcal F_i$ into the group~$A$, and elements of the group~$A$, and introduce on the set of homomorphisms an operation~$\oplus$ that under this bijection corresponds to the addition of the group~$A$. Let us fix some projection $g\in \mathbf F'$.
Consider the set $\mathop{\mathrm{End}}\nolimits_g$ of all those homomorphisms $h\colon gA\to A$ that satisfy the following conditions: \begin{enumerate} \item $\forall f\in \mathbf F_l\, (fg=f\Rightarrow (hf=0 \logic\lor \exists f'\in \mathbf F\, (hf=f'hf\ne 0)))$; this means that for every projection~$f$ from our special set~$\mathbf F_l$, if $fA$ is an indecomposable direct summand of the group~$gA$ (i.e., $fg=f$), then either $h(fA)=0$ or $h(fA)\subset f'A$ for some projection $f'\in \mathbf F$; \item $\exists f\, (\mathop{\mathrm{Fin}}\nolimits(f)\land \mathrm{Idem}(f)\land fh=h)$; this means that the image of the subgroup~$gA$ under the endomorphism~$h$ is finitely generated; \item $\bigwedge\limits_{i=1}^k \forall f\in \mathbf F_i\, \neg \Bigl(\exists f_1\dots \exists f_{p^i}\, \Bigl(\,\bigwedge\limits_{q\ne s} f_q\ne f_s \logic\land f_1,\dots,f_{p^i} \in \mathbf F_l \logic\land f_1g=f_1 \logic\land \dots \logic\land f_{p^i}g=f_{p^i} \logic\land hf_1=fhf_1\ne 0 \logic\land \dots \logic\land hf_{p^i}=fhf_{p^i}\ne 0\Bigr)\Bigr)$; this means that for every $i=1,\dots,k$ the inverse image of each $fA\subset A_i$, where $f\in \mathbf F_i$, cannot contain more than $p^i-1$ different summands $f_mA$, where $f_mA\subset gA$ and $f_m\in \mathbf F_l$. \end{enumerate} Two elements $h_1$~and~$h_2$ from the set $\mathop{\mathrm{End}}\nolimits_g$ are said to be equivalent ($h_1\sim h_2$) if they satisfy the following formula: \begin{multline*} \exists f_1\, \exists f_2\, ((gf_1g)\cdot (gf_2g)=(gf_2g)\cdot (gf_1g)=g \\ \logic\land \forall f\in \mathbf F_l\, (fg=f\Rightarrow \forall f'\in \mathbf F\, (h_1 f=f' h_1f\ne 0\Leftrightarrow (gf_1gh_2)f=f'(gf_1gh_2)f\ne 0))). \end{multline*} This means that there exists an automorphism $gf_1g$ of the group~$gA$ which maps $h_2$ to an endomorphism $(gf_1g\cdot h_2)$ such that for every $\rho\in \mathbf F_l$ with $\rho g=\rho$ (i.e., $\rho A\subset gA$), both endomorphisms $h_1$ and $gf_1gh_2$ map the subgroup $\rho A$ either to zero or into the same $f'A$ ($f'\in \mathbf F$). The obtained set $\mathop{\mathrm{End}}\nolimits_g/{\sim}$ will be denoted by $\widetilde{\mathrm{End}}_g$. We can interpret elements of this set as finite multisets of projections from~$\mathbf F$ in which every projection from~$\mathbf F_i$ occurs at most $p^i-1$ times. Accordingly, every element of the set $\widetilde{\mathrm{End}}_g$ can be interpreted as a~set of pairs, where the first element in a~pair is a~projection~$f$ from~$\mathbf F$ and the second element is an integer from~$0$ to~$p^i-1$, where $i$ is such that $f\in \mathbf F_i$, and almost all (all except for a~finite number) second components of the pairs are equal to~$0$. Now we can construct a~bijective mapping between the set $\widetilde{\mathrm{End}}_g$ and the group~$A$, where the image of the described set $\{ \langle f_j,l_j\rangle\mid j\in J\}$ is the element $\sum\limits_{j\in J} l_j\xi_j=a\in A$, where $\xi_j$ is some fixed generator of the cyclic group~$f_jA$. Now we only need to introduce addition on the set $\widetilde{\mathrm{End}}_g$ to make the obtained bijective mapping an isomorphism of Abelian groups.
We shall introduce addition by the formula ($h_1,h_2,h_3\in \widetilde{\mathrm{End}}_g$) \begin{align*} & (h_3=h_1\oplus h_2) \mathbin{{:}\!=} \bigwedge_{i=1}^k \forall f\in \mathbf F_i\, \biggl(\,\bigwedge_{j=0}^{p^i-1} \exists g_1\dots \exists g_j\in \mathbf F_l\, \bigwedge_{q\ne s} (g_q\ne g_s \logic\land g_qg=g_q \\ & \quad {}\logic\land h_3 g_q=fh_3g_q\ne 0) \logic\land \neg \biggl(\exists g_1\dots \exists g_{j+1}\in \mathbf F_l\, \bigwedge_{q\ne s} (g_q\ne g_s \logic\land g_qg=g_q \logic\land h_3g_q=fh_3g_q\ne 0 )\biggr) \\ & \quad {}\Rightarrow \biggl(\,\bigvee_{m=0}^j \exists g_1\dots \exists g_m\in \mathbf F_l\, \bigwedge_{q\ne s} (g_q\ne g_s \logic\land g_qg=g_q \logic\land h_1g_q=fh_1g_q\ne 0) \\ & \quad {}\logic\land \neg \biggl(\exists g_1\dots \exists g_{m+1}\in \mathbf F_l\, \bigwedge_{q\ne s} (g_q\ne g_s \logic\land g_qg=g_q \logic\land h_1g_q= fh_1g_q\ne 0)\biggr) \\ & \quad {}\logic\land \exists g_1\dots \exists g_{\gamma(j,m)}\in \mathbf F_l\, \bigwedge_{q\ne s} (g_q\ne g_s \logic\land g_qg=g_q \logic\land h_2 g_q=fh_2g_q\ne 0) \\ & \quad {}\logic\land \neg \biggl(\exists g_1\dots \exists g_{\gamma(j,m)+1}\in \mathbf F_l\, \bigwedge_{q\ne s} ( g_q\ne g_s \logic\land g_qg=g_q \logic\land h_2 g_q=fh_2g_q\ne 0)\biggr)\!\biggr)\!\biggr), \end{align*} where $\gamma(j,m)=j-m$ if $j\ge m$, and $\gamma(j,m)=p^i+j-m$ if $j< m$. Thus $m+\gamma(j,m)\equiv j \pmod{p^i}$; for instance, for $p^i=4$, $j=1$, and $m=3$ we get $\gamma(1,3)=4+1-3=2$, in accordance with $3+2\equiv 1 \pmod 4$. Now we see that for every $g\in \mathbf F'$ we have a~definable set $\widetilde{\mathrm{End}}_g$ with the addition operation~$\oplus$, which is isomorphic to the group~$A$. \subsection{Proof of the First Case in the Theorem}\label{sec5_4} \begin{proposition}\label{p5.1} For any two infinite Abelian $p$-groups $A_1$~and~$A_2$ bounded by the number~$p^k$, elementary equivalence of the rings $\mathop{\mathrm{End}}\nolimits(A_1)$ and $\mathop{\mathrm{End}}\nolimits(A_2)$ implies equivalence of the groups $A_1$~and~$A_2$ in the language~$\mathcal L_2$. \end{proposition} \begin{proof} For every $\tilde g\in \mathbf F'$ by $\mathrm{Resp}_{\tilde g}(h)$ we shall denote the following formula: $$ \mathrm{Resp}_{\tilde g}(h) \mathbin{{:}\!=} \forall g\in \mathbf F'\, \exists h'\, ((\tilde g hg)(gh'\tilde g)= \tilde g \logic\land (gh'\tilde g)(\tilde ghg)=g). $$ This formula means that an endomorphism~$h$ isomorphically maps every summand $gA$ (where $g\in \mathbf F'$) to the summand $\tilde gA$. As above, let us consider an arbitrary sentence~$\psi$ in the second order group language and show an algorithm translating this sentence~$\psi$ into a~sentence~$\tilde \psi$ of the first order ring language so that $\mathop{\mathrm{End}}\nolimits(A)\vDash \tilde \psi$ if and only if $A\vDash \psi$.
Let us translate the sentence $\psi$ to the sentence $$ \exists \bar g_1\dots \exists \bar g_k\, ( \tilde \varphi_1(\bar g_1) \logic\land \dots \logic\land \tilde \varphi_k(\bar g_k) \\ {}\logic\land \exists \bar g'\, ( \Tilde {\Tilde \varphi}_l (\bar g',\bar g_l) \logic\land \exists \tilde g\in \mathbf F'(\bar g')\, \exists h \, (\mathrm{Resp}_{\tilde g}(h) \logic\land \psi'(\bar g_1,\dots,\bar g_k,\bar g',\tilde g,h)) )), $$ where the formula $\psi'({\dots})$ is obtained from the sentence~$\psi$ with the help of the following translations of subformulas of~$\psi$: \begin{enumerate} \item the subformula $\forall x$ is translated to the subformula $\forall x\in \widetilde{\mathrm{End}}_{\tilde g}$; \item the subformula $\exists x$ is translated to the subformula $\exists x\in \widetilde{\mathrm{End}}_{\tilde g}$; \item the subformula $\forall P_m(v_1,\dots,v_m)\, ({\dots})$ is translated to the subformula $$ \forall f_1^P\dots \forall f_m^P\, \biggl(\forall g\in \mathbf F'(\bar g')\, \biggl(\,\bigwedge_{i=1}^m (f_i^Pg\in \mathop{\mathrm{End}}\nolimits_g)\biggr)\Rightarrow \ldots\biggr); $$ \item the subformula $\exists P_m(v_1,\dots,v_m)\, ({\dots})$ is translated to the subformula $$ \exists f_1^P\dots \exists f_m^P\, \biggl( \forall g\in \mathbf F'(\bar g')\, \biggl(\,\bigwedge_{i=1}^m (f_i^Pg\in \mathop{\mathrm{End}}\nolimits_g)\biggr)\logic\land \ldots\biggr); $$ \item the subformula $x_1=x_2$ is translated to the subformula $x_1\sim x_2$; \item the subformula $x_1=x_2+x_3$ is translated to the subformula $x_1\sim x_2\oplus x_3$; \item the subformula $P_m(x_1,\dots,x_m)$ is translated to the subformula $$ \exists g\in \mathbf F'(\bar g')\, \biggl(\, \bigwedge_{i=1}^m f_i^Pg=x_i hg\biggr). $$ \end{enumerate} Let us explain in words what these translations mean. By the existence of the set~$\mathbf F'$, we have $\mu$~groups $\widetilde{\mathrm{End}}_g$ for $g\in \mathbf F'$, each of which is isomorphic to the group~$A$. We fix one chosen element $\tilde g\in \mathbf F'$, and therefore we fix one group $\widetilde{\mathrm{End}}_{\tilde g}$, isomorphic to~$A$. Naturally, all subformulas $\forall x$, $\exists x$, $x_1=x_2$, $x_1=x_2+x_3$ (of first order logic) will be translated to the corresponding subformulas for the group $\widetilde{\mathrm{End}}_{\tilde g}$. Now we need to interpret an arbitrary relation $P_m(v_1,\dots,v_m)$ on~$A$ in the ring $\mathop{\mathrm{End}}\nolimits(A)$. Such a~relation is some subset in $A^m$, i.e., a~set of ordered $m$-tuples of elements from~$A$. There are at most~$\mu$ such $m$-tuples, therefore the set $P_m(v_1,\dots,v_m)$ can be considered as a~set of~$\mu$ $m$-tuples of elements from~$A$ (some of them can coincide). We consider $m$~endomorphisms $f_1^P,\dots,f_m^P\in \mathop{\mathrm{End}}\nolimits(A)$ such that the restriction of each of them on any $gA$ (where $g\in \mathbf F'$) is an element of $\widetilde{\mathrm{End}}_g$. Thus for every $g\in \mathbf F'$ the restriction of the endomorphisms $f_1^P,\dots,f_m^P$ on~$gA$ is an $m$-tuple of elements of the group $\widetilde{\mathrm{End}}_g$~($\cong A$), where an isomorphism between $\widetilde{\mathrm{End}}_{\tilde g}$ and $\widetilde{\mathrm{End}}_g$ is given by the fixed mapping~$h$ which isomorphically maps every module $gA$ to $\tilde gA$. So we can see that the sentence~$\psi$ is true in~$A$ if and only if the sentence~$\tilde \psi$ is true in the ring $\mathop{\mathrm{End}}\nolimits(A)$. Therefore, as in the previous section, the proof is complete.
\end{proof} \section[Direct Sums of Divisible and Bounded $p$-Groups]{Direct Sums of Divisible and Bounded $\boldsymbol{p}$-Groups}\label{sec6} \subsection{Finitely Generated Groups} Every infinite finitely generated Abelian $p$-group~$A$ has the form $D\oplus G$, where $D$ is a~divisible finitely generated group and $G$~is a~finite group. There is no need to prove the following proposition. \begin{proposition}\label{p6.1} If Abelian $p$-groups $A_1$~and~$A_2$ are finitely generated, then elementary equivalence of their endomorphism rings $\mathop{\mathrm{End}}\nolimits(A_1)$ and $\mathop{\mathrm{End}}\nolimits(A_2)$ implies that the groups $A_1$~and~$A_2$ are isomorphic. \end{proposition} \subsection{Infinitely Generated Divisible Groups}\label{sec6_2} As in Sec.~\ref{sec5_1}, the formula $\mathrm{Idem}^*(\rho)$ will denote the property of an endomorphism~$\rho$ to be an indecomposable idempotent, while $\mathrm{Idem}(\rho)\mathbin{{:}\!=} (\rho^2=\rho)$. If in a~divisible group~$D$ we have $\mathrm{Idem}^*(\rho)$, then $\rho A\cong \mathbb Z(p^\infty)$. Note that, despite the fact that in Sec.~\ref{sec3} we considered direct sums of cyclic groups of the same order, the Shelah theorem remains true also for divisible groups, because a~divisible group is a~union of the groups $$ \bigoplus_\mu \mathbb Z(p)\subset \bigoplus_\mu \mathbb Z(p^2)\subset \dots \subset \bigoplus_\mu \mathbb Z(p^n)\subset \ldots $$ (proof of an even more general case is given later in Sec.~\ref{sec7}). Therefore, similarly to Sec.~\ref{sec5_2}, we have a~definable set $\mathbf F=\mathbf F(\bar g)$ consisting of $\mu$~indecomposable projections on linearly independent direct summands of~$D$, and also a~definable set $\mathbf F'=\mathbf F'(\bar g')$ consisting of $\mu$~projections on linearly independent countably generated direct summands of~$D$. Let us fix some element $g\in \mathbf F'$ and construct (as in Sec.~\ref{sec5}) an interpretation of the group~$D$ for this element~$g$. Namely, let us consider the set $\mathop{\mathrm{End}}\nolimits_g$ of all homomorphisms $h\colon gA\to A$ satisfying the following conditions: \begin{enumerate} \item $\forall f\in \mathbf F\, (fg=f\Rightarrow (hf=0 \logic\lor \exists \tilde f\in \mathbf F\, (hf=\tilde fhf\ne 0)))$; this means that for every projection~$f$ from~$\mathbf F$ such that $fA\subset gA$, we have either $h(fA)=0$ or $h(fA)\subset \tilde fA$ for some $\tilde f\in \mathbf F$; \item $\exists f\, (\mathop{\mathrm{Fin}}\nolimits(f) \logic\land \mathrm{Idem}(f) \logic\land fh=h)$; this means that the image of~$gA$ under the endomorphism~$h$ is finitely generated; \item $\forall f\in \mathbf F\, (\exists f'\in \mathbf F\, (f'g=f' \logic\land hf'=fhf'\ne 0)\Rightarrow \exists \tilde f\in \mathbf F\, (\tilde fg=\tilde f \logic\land h\tilde f=fh\tilde f\ne 0) \logic\land \forall {f'\in \mathbf F}\allowbreak\, (f'g=f' \logic\land hf'=fhf'\ne 0 \Rightarrow \exists \alpha\, (\alpha hf'=h\tilde f)))$; this means that for every element $f\in \mathbf F$ either the inverse image of~$fA$ is empty or it contains an element $\tilde f A\subset gA$ (where $\tilde f\in \mathbf F$) such that the kernel of the mapping~$h$ on~$\tilde f A$ has the maximal order among all kernels of~$h$ on the subgroups~$f'A$ such that $f'\in \mathbf F$ and $f'A\subset gA$. \setcounter{slast}{\value{enumi}} \end{enumerate} Before the last condition we shall introduce some new notation. Let $h$ be some endomorphism and $f_1,f_2\in \mathbf F$.
We shall write $f_1\sim_h f_2$ if and only if the formula $$ \exists \alpha\, (\alpha^2=1 \logic\land \alpha f_1=f_2\alpha \logic\land \alpha f_2=f_1\alpha \logic\land hf_1=h\alpha f_1 \logic\land hf_2=h\alpha f_2) $$ is true. This formula means that the images of the groups $f_1A$ and $f_2A$ under the endomorphism~$h$ coincide and the kernels of~$h$ on the groups $f_1A$ and $f_2A$ are isomorphic. Now for an endomorphism~$h$ and projections $f_1,f_2\in \mathbf F$ we shall introduce the formula $$ \exists \alpha\, (\alpha^2=1 \logic\land \alpha f_1=f_2\alpha \logic\land \alpha f_2=f_1\alpha \logic\land hf_1=ph\alpha f_1). $$ This formula states that the images of the groups $f_1A$ and $f_2A$ under the endomorphism~$h$ coincide and the kernel on~$f_1A$ is $p$~times greater than the kernel on~$f_2A$. This formula will be denoted by $f_1\sim_h f_2+1$. Now we shall introduce the last condition: \begin{enumerate} \setcounter{enumi}{\value{slast}} \item $ \neg \Bigl(\exists f_1,\dots,f_p\in \mathbf F\, \Bigl(\Bigl(\,\bigwedge\limits_{i\ne j} f_i\ne f_j\Bigr) \logic\land f_1g=f_1 \logic\land \dots \logic\land f_pg=f_p \logic\land hf_1=fhf_1\ne 0 \logic\land \dots \logic\land hf_p= fhf_p\ne 0 \logic\land \Bigl(\,\bigwedge\limits_{i\ne j} f_i\sim_h f_j\Bigr)\Bigr)\Bigr)$; this means that there exist at most $p-1$ such projections from~$\mathbf F$ onto subgroups of~$gA$ that their kernels are isomorphic. \end{enumerate} Two elements $h_1$~and~$h_2$ of the set $\mathop{\mathrm{End}}\nolimits_g$ are said to be equivalent ($h_1\sim h_2$) if the following formula holds: $$ \exists f_1\, \exists f_2\, ((gf_1g)\cdot (gf_2g)=(gf_2g)\cdot (gf_1g)=g \logic\land \forall f\in \mathbf F\, (fg=f\Rightarrow h_1f=(gf_1g)h_2f \logic\land h_2f=(gf_2g)h_1f)). $$ This means that there exist mutually inverse automorphisms $gf_1g$ and $gf_2g$ of the group $gA$ which map $h_2$~and~$h_1$ to automorphisms $(gf_1g) h_2$ and $(gf_2g)h_1$ such that we have $h_2=(gf_2g)h_1$ and $h_1=(gf_1g)h_2$ on~$gA$ for every projection from~$\mathbf F$ projecting onto a~subgroup of~$gA$. The obtained set $\mathop{\mathrm{End}}\nolimits_g/{\sim}$ will be denoted by $\widetilde{\mathrm{End}}_g$. Suppose that we have two quasicyclic groups $C$~and~$C'$, one of which has generators $c_1,\dots,c_n,\dots$ ($pc_1=0$, $pc_{n+1}=c_n$) and the other one has $c_1',\dots,c_n',\dots$ ($pc_1'=0$, $pc_{n+1}'=c_n'$). Consider the set of homomorphisms $\mathop{\mathrm{Hom}}\nolimits(C,C')$. Two homomorphisms $\alpha_1,\alpha_2\in \mathop{\mathrm{Hom}}\nolimits(C,C')$ correspond to each other under some automorphism of the group~$C$ if and only if their kernels are isomorphic, i.e., have the same order. Thus, all homomorphisms from $\mathop{\mathrm{Hom}}\nolimits(C,C')$ are divided into a~countable number of classes, and every class uniquely corresponds to a~nonnegative integer~$i$ such that $|\mathop{\mathrm{Ker}}\nolimits \alpha|=p^i$. Consequently, every class $h\in \widetilde{\mathrm{End}}_g$ can be mapped to a~finite set of finite sequences $$ \langle f,m(f),l_1(f),\dots,l_{m(f)}(f)\rangle, $$ where $f\in \mathbf F$, $m(f)\in \mathbb N$, and $l_i(f)=0,\dots,p-1$. It is clear that endomorphisms from the same equivalence class are mapped to the same sets, and endomorphisms from different classes are mapped to different sets. Moreover, it is clear that every finite set of sequences is mapped to some class of endomorphisms.
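For instance (the concrete representatives below are given only as an illustration), for every $i\ge 0$ one can define $\alpha_i\in \mathop{\mathrm{Hom}}\nolimits(C,C')$ by $$ \alpha_i(c_n)=c_{n-i}'\ \text{ for }n> i,\qquad \alpha_i(c_n)=0\ \text{ for }n\le i; $$ then $\mathop{\mathrm{Ker}}\nolimits \alpha_i=\langle c_i\rangle$ has order~$p^i$, and, by the classification just described, every nonzero homomorphism from $\mathop{\mathrm{Hom}}\nolimits(C,C')$ is obtained from exactly one~$\alpha_i$ by composition with an automorphism of~$C$.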
Now every such set of sequences will be mapped to an element $$ \sum_{f\in \mathbf F} \bigl(l_1(f)c_1(f)+\dots+l_{m(f)}(f)c_{m(f)}(f)\bigr), $$ where $c_1(f),\dots,c_n(f),\dots$ are some fixed generators of~$fA$. Therefore we have obtained a~bijection between the set $\widetilde{\mathrm{End}}_g$ and the group $A\cong \bigoplus\limits_\mu \mathbb Z(p^\infty)$. Now let us introduce on the set $\widetilde{\mathrm{End}}_g$ an addition ($h_3=h_1\oplus h_2$) in such a~way that this bijection becomes an isomorphism between Abelian groups. Let $h_1,h_2,h_3\in \widetilde{\mathrm{End}}_g$. \begin{align*} & (h_3=h_1\oplus h_2) \mathbin{{:}\!=} \forall f\in \mathbf F\, \biggl( \forall f'\in \mathbf F\, \biggl( f'g=f' \logic\land h_1f'=fh_1f'\ne 0 \\ & \quad {}\Rightarrow \bigwedge_{i=0}^{p-1} \biggl( \exists f_1\dots \exists f_i\in \mathbf F\, \bigwedge_{q\ne s} (f_q\ne f_s \logic\land f_q\sim_{h_1} f' \logic\land f_q g=f_q \logic\land h_1f_q=fh_1f_q\ne 0) \\ & \quad \logic\land \neg \biggl(\exists f_1\dots \exists f_{i+1}\in \mathbf F\, \bigwedge_{q\ne s} (f_q\ne f_s \logic\land f_q \sim_{h_1} f' \logic\land f_qg=f_q \logic\land h_1f_q=fh_1f_q\ne 0) \biggr) \\ & \quad {}\Rightarrow \bigvee_{j=0}^{p-1} \biggl(\exists f_1\dots \exists f_j\in \mathbf F\, \bigwedge_{q\ne s} (f_q\ne f_s \logic\land f_q\sim_{h_2} f' \logic\land f_qg=f_q \logic\land h_2f_q=fh_2f_q\ne 0) \\ & \quad \logic\land \neg \biggl(\exists f_1\dots\exists f_{j+1}\in \mathbf F\, \bigwedge_{q\ne s}(f_q\ne f_s \logic\land f_q\sim_{h_2} f' \logic\land f_qg=f_q \logic\land h_2f_q=fh_2f_q\ne 0)\biggr) \\ & \quad \logic\land \biggl(\!\biggl( \exists f_1\dots \exists f_{(i+j) \bmod p}\in \mathbf F\, \bigwedge_{q\ne s} (f_q \ne f_s \logic\land f_q\sim_{h_3} f' \logic\land f_qg=f_q \logic\land h_3f_q=fh_3f_q\ne 0) \\ & \quad \logic\land \neg \biggl(\exists f_1\dots \exists f_{(i+j) \bmod p +1}\in \mathbf F\, \bigwedge_{q\ne s}(f_q\ne f_s \logic\land f_q\sim_{h_3} f' \logic\land f_qg=f_q \logic\land h_3f_q=fh_3f_q\ne 0)\biggr) \\ & \quad \logic\land \neg \biggl(\exists f_1\dots \exists f_p\in \mathbf F\, \bigwedge_{q\ne s} (f_q\ne f_s \logic\land f_q\sim_{h_3} f'+1 \\ & \quad \logic\land f_qg=f_q \logic\land (h_1f_q=fh_1f_q \ne 0 \logic\lor h_2f_q=fh_2f_q\ne 0))\biggr)\!\biggr) \\ & \quad \logic\lor \biggl(\exists f_1\dots \exists f_{(i+j) \bmod p+1}\in \mathbf F\, \bigwedge_{q\ne s} (f_q \ne f_s \logic\land f_q\sim_{h_3} f' \logic\land f_qg=f_q \logic\land h_3f_q=fh_3f_q \ne 0) \\ & \quad \logic\land \neg \biggl(\exists f_1\dots \exists f_{(i+j) \bmod p +2}\in \mathbf F\, \bigwedge_{q\ne s} (f_q\ne f_s \logic\land f_q\sim_{h_3} f' \logic\land f_qg=f_q \logic\land h_3f_q=fh_3f_q\ne 0)\biggr) \\ & \quad \logic\land \exists f_1\dots \exists f_p\in \mathbf F\, \bigwedge_{q\ne s} (f_q\ne f_s \logic\land f_q\sim_{h_3} f'+1 \logic\land f_qg=f_q \\ & \quad \logic\land (h_1f_q=fh_1f_q\ne 0 \logic\lor h_2f_q=fh_2f_q\ne 0))\biggr)\! \biggr)\!\biggr)\!\biggr)\!\biggr)\!\biggr). \end{align*} Hence for every $g\in \mathbf F'$ there is a~definable set $\widetilde{\mathrm{End}}_g$ with the addition operation~$\oplus$, which is isomorphic to~$A$. \begin{proposition}\label{p6.2} For two infinitely generated divisible $p$-groups $A_1$~and~$A_2$, elementary equivalence of the rings $\mathop{\mathrm{End}}\nolimits(A_1)$ and $\mathop{\mathrm{End}}\nolimits(A_2)$ implies equivalence of the groups $A_1$~and~$A_2$ in the language~$\mathcal L_2$.
\end{proposition} \begin{proof} Since we have obtained an interpretation of the group~$A$ for every $g\in \mathbf F'$, the proof of this proposition is completely similar to the proof of Proposition~\ref{p5.1}. \end{proof} \subsection[Direct Sums of Divisible $p$-Groups and Bounded $p$-Groups of Not Greater Power]{Direct Sums of Divisible $\boldsymbol{p}$-Groups and Bounded $\boldsymbol{p}$-Groups of Not Greater Power}\label{sec6_3} In this section, we consider the groups of the form $D\oplus G$, where $D$ is an infinitely generated divisible group, $G$~is a~group bounded by the number~$p^k$, and $|G|\le |D|$. This case is practically the union of the previous two cases. Namely, let us have idempotents $\rho_D$~and~$\rho_G$ from the formula~$\psi_{p^k}$ of Sec.~\ref{sec4_4}, i.e., idempotents which are projections on divisible and bounded parts of~$A$, respectively, and also idempotents $\rho_1,\dots,\rho_k$, where $\rho_1+\dots+\rho_k=\rho_G$, which are projections on direct summands of the form $\smash[b]{\bigoplus\limits_{\mu_1} \mathbb Z(p),\dots, \bigoplus\limits_{\mu_k}\mathbb Z(p^k)}$, respectively. Let $|A|=|D|=\mu$. As before, we have the following definable sets: \begin{enumerate} \item $\mathbf F=\mathbf F(\bar g)$ is a~set of $\mu$~indecomposable projections on linearly independent direct summands of the group~$D$; \item the set $\mathbf F'=\mathbf F'(\bar g')$ consists of $\mu$~projections on linearly independent countably generated direct summands of~$D$; \item for every $i=1,\dots,k$ the set $\mathbf F_i=\mathbf F_i(\bar g_i)$ consists of $\mu_i$~projections on independent indecomposable direct summands of $\rho_i A$; \item an endomorphism $\varphi\in \mathop{\mathrm{End}}\nolimits(A)$ satisfying the following formula: \begin{align*} & \forall f\in \mathbf F(\bar g)\, (\varphi f\in \mathbf F) \logic\land \bigwedge_{i=1}^k (\rho_D\varphi \rho_i= \varphi \rho_i \logic\land \forall f_i\in \mathbf F_i(\bar g_i)\, \exists f\in \mathbf F(\bar g)\, (\varphi f_i=f\varphi f_i\ne 0)) \\ & \quad \logic\land \forall f_1,f_2\in \mathbf F(\bar g)\, (f_1\ne f_2\Rightarrow \forall f_1',f_2'\in \mathbf F(\bar g) \, (\varphi f_1=f_1'\varphi f_1\ne 0 \logic\land \varphi f_2=f_2'\varphi f_2\ne 0\Rightarrow f_1'\ne f_2') \\ & \quad \logic\land \bigwedge_{i=1}^k \forall f\in \mathbf F(\bar g)\, \forall f_i\in \mathbf F_i(\bar g_i)\, \forall f'\in \mathbf F(\bar g)\, (f'=\varphi f\Rightarrow f'\varphi f_i=0) \\ & \quad \logic\land \bigwedge_{i,j=1}^k \forall f_1,f_2\in \mathbf F(\bar g)\, \forall f_i\in \mathbf F_i(\bar g_i)\, \forall f_j\in \mathbf F_j(\bar g_j) \\ & \quad (f_i\ne f_j \logic\land \varphi f_i=f_1\varphi f_i\ne 0 \logic\land \varphi f_j=f_2\varphi f_j\ne 0\Rightarrow f_1\ne f_2)). \end{align*} \end{enumerate} We see that such an endomorphism~$\varphi$ embeds the set $$ \mathbf F(\bar g)\cup \mathbf F_1(\bar g_1)\cup\dots \cup \mathbf F_k(\bar g_k) $$ into the set~$\mathbf F(\bar g)$. Therefore for a~given~$\varphi$ we can consider the sets \begin{align*} \mathbf F^D&=\mathbf F^D(\bar g, \varphi)= \{ f\in \mathbf F(\bar g)\mid \exists f'\in \mathbf F(\bar g)\, (f\varphi f'=f\varphi=\varphi f'\ne 0)\},\\ \mathbf F^D_1&=\mathbf F^D_1(\bar g_1, \varphi)= \{ f\in \mathbf F(\bar g)\mid \exists f'\in \mathbf F_1(\bar g_1)\, (f\varphi f'=f\varphi=\varphi f'\ne 0)\},\\ & \hspace{4.5cm}\ldots\\ \mathbf F^D_k&=\mathbf F_k^D(\bar g_k, \varphi)= \{ f\in \mathbf F(\bar g)\mid \exists f'\in \mathbf F_k(\bar g_k)\, (f\varphi f'=f\varphi=\varphi f' \ne 0)\}.
\end{align*} The sets $\mathbf F^D,\mathbf F_1^D,\dots,\mathbf F_k^D$ consist of $\mu,\mu_1,\dots,\mu_k$ projections on indecomposable linearly independent direct summands of the group~$D$, respectively. We shall write them in formulas, sometimes omitting parameters, but meaning that they depend on the parameters $\bar g,\bar g_1,\dots,\bar g_k,\varphi$. Let us fix some element $g\in \mathbf F'$ and construct an interpretation of the group $A=D\oplus G$ for this element~$g$. Namely, let us consider the set $\mathop{\mathrm{End}}\nolimits_g$ of all homomorphisms $h\colon gA\to A$ satisfying the following conditions: \begin{enumerate} \item $\forall f\in \mathbf F\, \Bigl(fg=f\Rightarrow \Bigl(hf=0 \logic\lor \exists \tilde f\in \mathbf F^D\, (hf=\tilde f hf\ne 0) \logic\lor \bigvee\limits_{i=1}^k \exists \tilde f\in \mathbf F_i^D\, (\tilde f h=\tilde f hf=hf\ne 0)\Bigr)\Bigr)$; this means that for every projection~$f$ from~$\mathbf F$, if $fA\subset gA$, then we have either $h(fA)=0$, or $h(fA)\subset \tilde fA$ for some $\tilde f\in \mathbf F^D$, or $h(fA)=\tilde fA$ for some $\tilde f\in \mathbf F_i^D$; \item $\exists f\, (\mathop{\mathrm{Fin}}\nolimits(f) \logic\land \mathrm{Idem}(f) \logic\land fh=h)$; this means that the image of~$gA$ under~$h$ is finitely generated; \item $\bigwedge\limits_{i=1}^k \forall f\in \mathbf F_i^D\, \neg \Bigl(\exists f_1\dots \exists f_{p^i}\in \mathbf F\, \Bigl(\Bigl(\,\bigwedge\limits_{q\ne s} f_q\ne f_s\Bigr) \logic\land f_1 g=f_1 \logic\land \dots \logic\land f_{p^i} g=f_{p^i} \logic\land fhf_1=fh=hf_1\ne 0 \logic\land \dots \logic\land fhf_{p^i}=fh=hf_{p^i}\ne 0\Bigr)\Bigr)$; this means that for every $i=1,\dots,k$ the inverse image of every $fA\subset D$, where $f\in \mathbf F_i^D$, contains at most $p^i-1$ distinct elements $f_mA$ such that $f_mA\subset gA$ and $f_m\in \mathbf F$; \item $\forall f\in \mathbf F^D\, (\exists f'\in \mathbf F\, (f'g=f' \logic\land hf'=fhf'\ne 0)\Rightarrow \exists \tilde f\in \mathbf F \, (\tilde fg=\tilde f \logic\land h\tilde f= fh\tilde f\ne 0) \logic\land \forall f'\in \mathbf F\, (f'g=f' \logic\land hf'=fhf'\ne 0\Rightarrow \exists \alpha\, (\alpha hf'=h\tilde f)))$; this means that for every element $f\in \mathbf F^D$ either the inverse image of~$fA$ is empty or it contains $\tilde f\in \mathbf F$ such that $\tilde f A\subset gA$ and the kernel of the mapping~$h$ on~$\tilde f A$ has the maximal order of all kernels of~$h$ on the subgroups~$f'A$ for $f'\in \mathbf F$ such that $f'A\subset gA$; \item $ \neg \Bigl(\exists f_1,\dots,f_p\in \mathbf F\, \Bigl(\Bigl(\,\bigwedge\limits_{q\ne s} f_q\ne f_s\Bigr) \logic\land f_1g=f_1 \logic\land \dots \logic\land f_pg=f_p \logic\land hf_1=fhf_1\ne 0 \logic\land \dots \logic\land hf_p=fhf_p\ne 0 \logic\land \Bigl(\bigwedge\limits_{q\ne s} f_q\sim_h f_s\Bigr)\Bigr)\Bigr)$; this means that there exist at most $p-1$ projections from~$\mathbf F$ onto subgroups from~$gA$ such that their images are included in~$\mathbf F^D$ and their kernels are isomorphic.
\end{enumerate} Two elements $h_1$~and~$h_2$ from the set $\mathop{\mathrm{End}}\nolimits_g$ are said to be equivalent ($h_1\sim h_2$) if and only if the formula $$ \exists f_1\, \exists f_2\, ( (gf_1g)\cdot (gf_2g)=(gf_2g)\cdot (gf_1g)=g \logic\land \forall f\in \mathbf F\, (fg=f\Rightarrow h_1f=(gf_1g)h_2f \logic\land h_2f=(gf_2g)h_1f)) $$ is true, i.e., there exist mutually inverse automorphisms $gf_1g$ and $gf_2g$ of~$gA$ which map $h_2$~and~$h_1$ to automorphisms $(gf_1g) h_2$ and $(gf_2g)h_1$ such that we have $h_2=(gf_2g)h_1$ and $h_1=(gf_1g)h_2$ on~$gA$ for every projection from~$\mathbf F$ that projects onto a~subgroup of~$gA$. Again the obtained set $\mathop{\mathrm{End}}\nolimits_g/{\sim}$ will be denoted by $\widetilde{\mathrm{End}}_g$. Every class $h\in \widetilde{\mathrm{End}}_g$ can be mapped to a~set consisting of the following $k+1$ components: its $i$th component (where $i=1,\dots,k$) is a~set of pairs $$ M_i=\{ \langle f,m(f)\rangle\mid f\in \mathbf F_i^D,\ m=1,\dots,p^i-1\}, $$ where $m(f)$ is the dimension of the inverse image of~$fA$ for $f\in \mathbf F_i^D$, if it is not equal to zero; its $(k+1)$th component is a~set of sequences $$ M=\{ \langle f,m(f),l_1(f),\dots,l_{m(f)}(f)\rangle\mid f\in \mathbf F^D,\ m\in \mathbb N,\ l_1,\dots,l_m=0,\dots,p-1\}, $$ where $p^m$ is the maximal order of the kernel of the inverse image of~$fA$ which is included in~$gA$ and has the form $f'A$ for $f'\in \mathbf F$, and $l_i$~is the number of those inverse images of~$fA$ that belong to $gA$, have the form $f'A$ for $f'\in \mathbf F$, and whose kernels have the order $p^i$ under~$h$. {\sloppy It is clear that endomorphisms from one equivalence class are mapped to the same sets $M_1,\dots, M_k, M$, and endomorphisms from different classes are mapped to different sets. Further, every sequence $M_1,\dots,M_k,M$ of finite sets of the described form is mapped to some class from $\widetilde{\mathrm{End}}_g$. Such a~sequence of sets $M_1,\dots,M_k,M$ is mapped to the element $$ \sum_{\langle f,m(f)\rangle\in M_1} m(f)a(f)+\dots + \sum_{\langle f,m(f)\rangle\in M_k} m(f)a(f)+ \sum_{\langle f,m(f),l_1,\dots,l_{m(f)}\rangle \in M} \bigl(l_1 c_1(f)+\dots+l_{m(f)} c_{m(f)}(f)\bigr), $$ where $a(f)$ is a~fixed generator of the cyclic group~$fA$ for $f\in \mathbf F_1\cup\dots\cup \mathbf F_k$, and $c_1(f),\dots,c_n(f),\dots$ are fixed generators of the quasicyclic group $fA$ for $f\in \mathbf F$. } Therefore we obtain a~bijection between the set $\widetilde{\mathrm{End}}_g$ and the group~$A$. The addition ($h_3=h_1\oplus h_2$) on the set $\widetilde{\mathrm{End}}_g$ is introduced by a~formula which is similar to the union of the formulas from Secs.\ \ref{sec5_3}~and~\ref{sec6_2}, so we shall not write it here. \begin{proposition}\label{p6.3} Let $A_1=D_1\oplus G_1$, $A_2=D_2\oplus G_2$, the groups $D_1$~and~$D_2$ be divisible and infinitely generated, the groups $G_1$~and~$G_2$ be bounded by the number~$p^k$, $|D_1|\ge |G_1|$, and $|D_2|\ge |G_2|$. Then $\mathop{\mathrm{End}}\nolimits (A_1)\equiv \mathop{\mathrm{End}}\nolimits(A_2)\Rightarrow A_1\equiv_{\mathcal L_2} A_2$. \end{proposition} \begin{proof} The proof is completely similar to the proof of Proposition~\ref{p6.2}, therefore we shall not write it here.
\end{proof} \subsection[Direct Sums of Divisible $p$-Groups and Bounded $p$-Groups of Greater Power]{Direct Sums of Divisible $\boldsymbol{p}$-Groups and Bounded $\boldsymbol{p}$-Groups of Greater Power} This case differs from the two previous cases; it is closer to the case of bounded $p$-groups, but it is more complicated. We shall consider groups of the form $D\oplus G$, where $D$ is an infinitely generated divisible group, $G$~is a~group bounded by the number~$p^k$, $|G|> |D|$, $\smash[b]{G=\sum\limits_{i=1}^k G_i}$, $G_i\cong \smash[b]{\bigoplus\limits_{\mu_i} \mathbb Z(p^i)}$, $\mu_l= \smash[b]{\max\limits_{i=1,\dots,k} \mu_i}$, and $D\cong \bigoplus\limits_{\mu} \mathbb Z(p^\infty)$, $\mu< \mu_l$. Assume that (as in the previous section) we have idempotents $\rho_D$~and~$\rho_G$ which are projections on the divisible and bounded parts of the group~$A$, respectively; and also projections $\rho_1,\dots,\rho_k$, where $\rho_1+\dots+\rho_k=\rho_G$, on the direct summands $G_1,\dots,G_k$. Further, assume that the summand~$G_l$ with the maximal power~$\mu_l$, equal to the power of the whole group~$A$, is known. As above, we introduce various definable sets: \begin{enumerate} \item the set $\mathbf F=\mathbf F(\bar g)$ consists of $\mu$~indecomposable projections on linearly independent direct summands of~$D$; \item for every $i=1,\dots,k$ the set $\mathbf F_i=\mathbf F_i(\bar g_i)$ consists of $\mu_i$ projections on independent indecomposable direct summands of the group $G_i=\rho_i A$; \item the set $\mathbf F'=\mathbf F'(\bar g')$ consists of $\mu_l$~projections on linearly independent countably generated direct summands of~$G_l$; \item an idempotent~$\gamma$ satisfies the following condition: $$ \Gamma(\gamma) \mathbin{{:}\!=} (\gamma \rho_l=\gamma \logic\land \gamma^2=\gamma \logic\land \forall f\in \mathbf F'\, \exists \beta\, (f \gamma=\gamma f=\beta \logic\land \mathrm{Idem}^\omega(\beta))). $$ This condition means that $\gamma$ is a~projection on such a~direct summand in~$G_l$ that its intersection with every subgroup $fA$, where $f\in \mathbf F'$, is a~countably generated summand of~$G_l$; \item for every idempotent~$\gamma$ satisfying the formula $\Gamma(\gamma)$, by $\Gamma_\gamma$ we shall denote the set $\{ f\in \mathbf F_l\mid f\gamma=f\}$, and by $\Gamma_\gamma(g)$ for $g\in \mathbf F'$ we shall denote the set $\{ f\in \mathbf F_l \mid f\gamma=f \logic\land fg=f\}$. Let us fix two of these idempotents $\gamma_0$~and~$\gamma_1$ with the conditions: (1)~$\Gamma_{\gamma_0}\cap \Gamma_{\gamma_1}=\varnothing$; (2)~for every $g\in \mathbf F'$ the set $(\mathbf F_l\setminus (\Gamma_{\gamma_0}\cup \Gamma_{\gamma_1}))\cap \{ f\in \mathbf F_l\mid fg=f\}$ is countable. \setcounter{slast}{\value{enumi}} \end{enumerate} Denote $\Gamma_{\gamma_0}$ by~$\Gamma_0$ and $\Gamma_{\gamma_1}$ by~$\Gamma_1$.
\begin{enumerate} \setcounter{enumi}{\value{slast}} \item Fix an endomorphism $\varphi\in \mathop{\mathrm{End}}\nolimits(A)$ satisfying the following formula: \begin{align*} & \Phi(\varphi) \mathbin{{:}\!=} \forall h\, (\mathrm{Idem}(h) \logic\land \forall g\in \mathbf F'\, (hg=gh=0) \Rightarrow \varphi h=h\varphi=0) \\ & \quad \logic\land \forall g\in \mathbf F'\, (\forall f\in \Gamma_0(g)\, (\varphi f=f \logic\land \forall f\in \Gamma_1(g)\, \exists f' \in \mathbf F_l\, (f'\notin \Gamma_1(g) \logic\land f'\notin \Gamma_0(g) \\ & \quad \logic\land f'g=f' \logic\land f'\varphi f=\varphi f\ne 0) \logic\land \forall f\in \mathbf F_l\, (f\notin \Gamma_0(g) \logic\land fg=f \\ & \quad {}\Rightarrow \exists f'\in \mathbf F_l\, (f'\ne f \logic\land f'\notin \Gamma_0(g) \logic\land f'\notin \Gamma_1(g) \logic\land f'g=f' \logic\land f'\varphi f=\varphi f\ne 0) \\ & \quad \logic\land \forall f_1,f_2\in \mathbf F_l\, (f_1\ne f_2 \logic\land f_1 g=f_1 \logic\land f_2 g=f_2 \\ & \quad {}\Rightarrow \neg (\exists f'\in \mathbf F_l\, (f'g=f' \logic\land f'\varphi f_1=\varphi f_1 \logic\land f'\varphi f_2=\varphi f_2)) \\ & \quad {}\logic\land \forall f'\in \mathbf F_l\, (f'g=f' \logic\land f'\in \Gamma_1(g)\Rightarrow \exists f\in \mathbf F_l\, (fg=f \logic\land f'\varphi=f'\varphi f=\varphi f)) \\ & \quad \logic\land \forall h\, (\mathrm{Idem}(h) \logic\land hg=h \logic\land h\gamma_0=\gamma_0 h=0 \logic\land \forall f\, (\mathrm{Idem}^*(f) \logic\land fg=f \logic\land fh=f \\* & \quad {}\Rightarrow \exists f'\, (\mathrm{Idem}^*(f') \logic\land f' g=f' \logic\land f'h=f' \logic\land f'\varphi f=\varphi f)\Rightarrow \mathrm{Idem}^\omega (f)))). \end{align*} \setcounter{slast}{\value{enumi}} \end{enumerate} This condition introduces an endomorphism~$\varphi$ that maps the complementary direct summand for $\sum\limits_{g\in \mathbf F'} gA$ to~$0$, i.e., acts only on $\sum\limits_{g\in \mathbf F'} gA$ in the following way: for every $g\in \mathbf F'$ the elements $\alpha A$, where $\alpha\in \Gamma_0(g)$, are mapped into themselves, and the elements $\alpha A$, where $\alpha\in \Gamma_1(g)$, are mapped to elements of $\mathbf F_lA\setminus (\Gamma_0(g)A\cup\Gamma_1(g)A)$ which are included in~$gA$. Further, $\varphi$ is a~monomorphism on~$gA$, and its image is $$ \langle \Gamma_0(g)A\rangle\oplus \langle \{ fA\mid f\in \mathbf F_l \logic\land fg=f \logic\land f\notin \Gamma_1(g)\}\rangle. $$ Outside $\langle \Gamma_0(g)A\rangle$ there are no finite-dimensional proper subspaces of this endomorphism. Therefore, we can enumerate all elements from $\mathbf F_l$ that project on subgroups from~$gA$ (we shall denote this set by $\mathbf F_l(g)$) by the following: $f_i^j=f_i^j(g)$, $i=0,1,\dots$, $j=1,\dots$, and \begin{enumerate} \renewcommand{\theenumi}{\alph{enumi}} \item $f_0^j\in \Gamma_0(g)$; \item $f_1^j\in \Gamma_1(g)$; \item $\varphi(f_i^jA)=f_{i+1}^jA$ if $i> 0$; \item $\varphi(f_0^jA)=f_0^jA$. \end{enumerate} We shall denote the set $\{ f_i^j\}_{j=1,\dots}$ by $\Gamma_i(g)$ (note that for an arbitrary~$i$ this set is not definable). \begin{enumerate} \setcounter{enumi}{\value{slast}} \item The union $\bigcup\limits_{g\in \mathbf F'} \mathbf F_l(g)$ will be denoted by~$\mathbf F_l'$. This set is definable; \item note that on the group $B=\langle \mathbf F_l'A\rangle$ the endomorphism~$\varphi$ has a~left inverse endomorphism~$\psi$ such that $\psi\circ \varphi=1_B$.
For every~$g$ we shall define~$\psi$ on~$gA$ as follows: $$ \begin{cases} \psi(f_0^jA)=f_0^jA,\\ \psi(f_i^jA)=f_{i-1}^jA\text{ if }i>1,\\ \psi(f_1^jA)\text{ can be arbitrary}. \end{cases} $$ We shall consider $\psi$ with the condition $\psi(f_1^jA)=0$. Then two elements $f_1,f_2\in \mathbf F_l(g)\setminus \Gamma_0(g)$ (or, more generally, $\mathbf F_l'\setminus \Gamma_0$) will be called \emph{$\varphi$-equivalent} ($f_1\sim_{\varphi}f_2$) if \begin{align*} & \exists h_1\, \exists h_2\, \exists \alpha\, \biggl(h_1g=h_1 \logic\land h_2g=h_2 \logic\land \mathrm{Idem}(h_1) \logic\land \mathrm{Idem}(h_2) \logic\land \alpha^2=1 \\[1mm] & \quad \logic\land \smash[t]{\bigwedge_{i=1}^2 \forall f}\, (\mathrm{Idem}^*(f) \logic\land fg=f \logic\land fh_i=f \Rightarrow \exists f'\, (\mathrm{Idem}^*(f') \logic\land f'g=f' \logic\land f'h_i=f' \logic\land f'\psi f=\psi f)) \\ & \quad \logic\land f_1 h_1=f_1 \logic\land f_2 h_2=f_2 \logic\land \smash[t]{\bigwedge_{i=1}^2 \forall h}\, (\mathrm{Idem}(h) \logic\land hg=h \logic\land \forall f\, (\mathrm{Idem}^*(f) \logic\land fg=f \logic\land fh=f \\ & \quad \Rightarrow \exists f'\, (\mathrm{Idem}^*(f') \logic\land f'g=f' \logic\land f'h=f' \logic\land f'\psi f=\psi f) \logic\land f_ih=f_i \Rightarrow h_ih=h_i)) \\ & \quad \logic\land h_1\alpha h_2=\alpha h_2=h_1\alpha \logic\land h_2 \alpha h_1=h_2\alpha\biggr). \end{align*} \end{enumerate} This means that minimal proper subspaces of the endomorphism~$\psi$ ($h_1A$ and $h_2A$), containing the groups $f_1A$ and $f_2A$, respectively, have the same power, i.e., $f_1A,f_2A\subset \mathop{\mathrm{Ker}}\nolimits \psi^m\setminus \mathop{\mathrm{Ker}}\nolimits \psi^{m-1}$ for some natural~$m$, or, equivalently, $f_1A,f_2A\subset \varphi^{m-1}(\langle \Gamma_1 A\rangle)$. It is clear that in this case $f_1=f_m^j(g)$ and $f_2=f_m^i(g)$ (or, in the more general case, $f_1=f_m^j(g_1)$ and $f_2=f_m^j(g_2)$). We shall call an element $f_1\in \mathbf F_l(g)$ a~\emph{$\varphi$-successor} of the element $f_2\in \mathbf F_l(g)$ ($f_1\sim_\varphi f_2+1$) if $$ \exists f\, (f\sim_\varphi f_1 \logic\land f\varphi f_2=\varphi f_2=f\varphi). $$ A~similar formula will introduce a~notion of a~\emph{$\varphi$-greater element} ($f_1>_\varphi f_2$) as an element for which a~minimal proper subspace of an endomorphism~$\psi$ containing $f_1A$ has greater power than the corresponding subspace for~$f_2A$. Let us fix $g\in \mathbf F'$ and construct an interpretation of the group $A=D\oplus G$ for this~$g$.
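Before listing the defining conditions, it may help to picture the action of $\varphi$~and~$\psi$ on the columns $f_i^j$; the following display merely collects conditions (a)--(d) above and the definition of~$\psi$, and introduces no new objects: $$ \varphi(f_0^jA)=f_0^jA,\qquad \varphi(f_i^jA)=f_{i+1}^jA\ (i\ge 1),\qquad \psi(f_0^jA)=f_0^jA,\qquad \psi(f_i^jA)=f_{i-1}^jA\ (i>1),\qquad \psi(f_1^jA)=0. $$ In this picture $f_1\sim_\varphi f_2$ says that $f_1$~and~$f_2$ lie on the same level~$i$ (possibly in different columns, or even in different subgroups~$gA$), and $f_1\sim_\varphi f_2+1$ says that the level of~$f_1$ exceeds the level of~$f_2$ by one.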
Consider the set $\mathop{\mathrm{End}}\nolimits_g$ of all homomorphisms $h\colon gA\to A$ satisfying the following conditions: \begin{enumerate} \item $\forall f\in \mathbf F_l\, \Bigl(fg=f\Rightarrow \Bigl(hf=0 \logic\lor \exists \tilde f\in \mathbf F^D\, (hf=\tilde f hf\ne 0) \logic\lor \Bigl(\,\bigvee\limits_{i=1}^k \exists \tilde f\in \mathbf F_i\, (hf=\tilde fhf\ne 0)\Bigr)\Bigr)\Bigr)$; \item $\exists f\, (\mathop{\mathrm{Fin}}\nolimits(f) \logic\land \mathrm{Idem}(f) \logic\land fh=h)$; \item $\bigwedge\limits_{i=1}^k \forall f\in \mathbf F_i\, \Bigl(\neg \Bigl(\exists f_1\dots \exists f_{p^i}\in \mathbf F_l\, \Bigl(\,\bigwedge\limits_{q\ne s} f_q\ne f_s \logic\land f_1,\dots,f_{p^i}\in \Gamma_0(g) \logic\land hf_1=fhf_1\ne 0 \logic\land \dots \logic\land hf_{p^i}=fhf_{p^i}\ne 0\Bigr)\Bigr) \logic\land \forall f'\in \mathbf F_l\, (hf'=fhf'\ne 0\Rightarrow f'\in \Gamma_0(g))\Bigr)$; this means that the inverse images of the elements of the bounded summand can be contained only in the set $\Gamma_0(g)$, and every such $fA$ has at most $p^i-1$ inverse images; \item $ \forall f\in \mathbf F^D\, \forall f'\in \mathbf F_l\, (hf'=fhf'\ne 0\Rightarrow f'\notin \Gamma_0(g))$; contrary to the previous assertion, this means that the inverse images of the elements of the divisible summand cannot be contained in~$\Gamma_0(g)$; \item $\forall f\in \mathbf F^D\, \neg \Bigl(\exists f_1\dots \exists f_p\in \mathbf F_l\, \Bigl(\Bigl(\, \bigwedge\limits_{q\ne s} f_q\ne f_s \logic\land f_q\sim_\varphi f_s\Bigr) \logic\land f_1g=f_1 \logic\land\dots \logic\land f_pg=f_p \logic\land hf_1=fhf_1\ne 0 \logic\land \dots \logic\land hf_p=fhf_p\ne 0\Bigr)\Bigr)$; this means that no element from~$\mathbf F^D$ can have more than $p-1$ $\varphi$-equivalent inverse images; \item $\forall f\in \mathbf F^D\, \exists f'\in \mathbf F_l\, \neg (\exists f''\in \mathbf F_l\, (f''>_\varphi f' \logic\land hf''=fhf''\ne 0))$, i.e., every element from~$\mathbf F^D$ has only a~finite number of inverse images in~$\mathbf F_l$. \end{enumerate} Two elements $h_1$~and~$h_2$ of the set $\mathop{\mathrm{End}}\nolimits_g$ are said to be equivalent (notation: $h_1\sim h_2$) if the following formula holds: \begin{align*} & \exists f_1\, \exists f_2\, ((gf_1g)\cdot (gf_2g)=(gf_2g)\cdot (gf_1g)=g \\ & \quad \logic\land \forall f\in \mathbf F_l\, (fg=f \Rightarrow \forall f'\in \mathbf F^D\cup \mathbf F_1\cup \dots \cup \mathbf F_k\, (h_1f=f'h_1f\ne 0\Leftrightarrow (gf_1gh_2)f=f'(gf_1gh_2)f\ne 0)) \\ & \quad \logic\land \forall f\in \mathbf F_l\, (gf_1g\cdot f\sim_\varphi f)). \end{align*} The obtained set $\mathop{\mathrm{End}}\nolimits_g/{\sim}$ will be denoted by $\widetilde{\mathrm{End}}_g$. Now we shall show how to find a~bijection between the set $\widetilde{\mathrm{End}}_g$ and the group~$A$. Let us consider some $h\in \widetilde{\mathrm{End}}_g$. For every $f\in \mathbf F_i$ let us consider the intersection of the inverse image of $fA$ with the set $\Gamma_0(g)A$. Suppose that this intersection contains $m_f$~elements. Thus, we get the set $$ M_G =\bigcup\limits_{i=1}^k \{ \langle f,m_f\rangle\mid f\in \mathbf F_i,\ m_f=1,\dots ,p^i-1\}. $$ For every $f\in \mathbf F^D$ and every natural~$j$ let us consider the intersection of the inverse image of~$fA$ with the set $\Gamma_j(g)A$. Suppose that this intersection contains $l_f^j$~elements, and the maximal nonzero~$j$ is equal to $\gamma_f$. Then we get the set $$ M_D=\{ \langle f,\gamma_f,l_f^1,\dots,l_f^{\gamma_f}\rangle\mid f\in \mathbf F^D,\ \gamma_f\in \mathbb N,\ l_f^1,\dots,l_f^{\gamma_f}\in \{ 0,\dots,p-1\}\}.
$$ Now an element~$h$ will be mapped to the following sum: $$ \sum_{\langle f,m_f\rangle\in M_G} m_f a(f)+\sum_{\langle f,\gamma_f,l_f^1,\dots,l_f^{\gamma_f}\rangle\in M_D} \bigl(l_f^1c_1(f)+\dots+ l_f^{\gamma_f}c_{\gamma_f}(f)\bigr)\in A. $$ It is clear that such a~mapping is a~bijection between the sets $\widetilde{\mathrm{End}}_g$ and~$A$, and this bijection becomes an isomorphism if and only if we introduce addition on the set $\widetilde{\mathrm{End}}_g$ with the help of a~formula similar to the formulas from Secs.\ \ref{sec5_3}~and~\ref{sec6_2}. Therefore we have the following proposition. \begin{proposition}\label{p6.4} Let $A_1=D_1\oplus G_1$, $A_2=D_2\oplus G_2$, the groups $D_1$~and~$D_2$ be divisible, the groups $G_1$~and~$G_2$ be infinite and bounded by the number~$p^k$, $|D_1|< |G_1|$, and $|D_2|< |G_2|$. Then $\mathop{\mathrm{End}}\nolimits (A_1)\equiv \mathop{\mathrm{End}}\nolimits(A_2)\Rightarrow A_1\equiv_{\mathcal L_2} A_2$. \end{proposition} \section{Groups with Unbounded Basic Subgroups}\label{sec7} \subsection[The Case $A=D\oplus G$, Where $|D|\ge |G|$, and Other Cases]{The Case $\boldsymbol{A=D\oplus G}$, Where $\boldsymbol{|D|\ge |G|}$, and Other Cases}\label{sec7_1} Let us separate our problem into three cases. 1. $A=D\oplus G$, where $|D|\ge |G|$ and $G$~is any unbounded group. We shall not consider this case in detail, because its proof is similar to the proof from Sec.~\ref{sec6_2}. This case resembles the case $A=D\oplus G$, where $|D|\ge |G|$, $D$~is a~divisible group, and $G$~is a~bounded group (see Sec.~\ref{sec6_3}). Here we give only a~sketch of the proof. Since $|G|\le |D|$, we have that a~basic subgroup of the group~$G$ (or the group~$A$) is of power not greater than the power of~$D$. Hence there exists an embedding $\varphi_1\colon B\to D_1$, where $D=D_1\oplus D_2\oplus D_3$ and $|D|=|D_1|=|D_2|=|D_3|$. This embedding will be fixed, and after that we can assume that the group~$B$ is a~subgroup of~$D_1$. Further, $|G|\le |D|$ implies $|G/B|\le |D|$, whence there exists an embedding $\varphi_2\colon G/B\to D_2$ (i.e., a~mapping from~$G$ into~$D_2$ which is equal to zero on~$B$). Thus the group $G/B$ can also be considered as a~subgroup of~$D_2$. Now we shall find some definable sets: \begin{enumerate} \item the set~$\mathbf F_1$ consists of $|B|$~independent indecomposable projections on quasicyclic direct summands in a~minimal direct summand of~$D_1$, containing $\varphi_1(B)$ as a~subgroup; \item the set~$\mathbf F_2$ consists of $|G/B|$ independent indecomposable projections on quasicyclic direct summands of $\varphi_2(G/B)$; \item the sets $\mathbf F$~and~$\mathbf F_3$ consist of $\mu=|D|$ independent projections on quasicyclic direct summands of the groups $D$~and~$D_3$, respectively; \item the set $\mathbf F'$ consists of $\mu$~independent projections on countably generated direct summands of the group~$D$. \end{enumerate} For every $g\in \mathbf F'$ an interpretation of the group~$A$ will be constructed in the following way: we shall consider homomorphisms $h\colon gA\to A$ such that the images of the subgroups $fA$ ($f\in \mathbf F$, $fA\subset gA$) are either zero or are contained in~$f'A$ ($f'\in \mathbf F_1\cup \mathbf F_2\cup \mathbf F_3$), and $h(gA)$ is finite-dimensional.
The inverse images of~$fA$, where $f\in \mathbf F_1$, will interpret the summands from~$B$ in the decomposition of the element $a\in A$ in a~quasibasis; the inverse images of~$fA$, where $f\in \mathbf F_2$, are the summands from $G/B$, i.e., $c_{i,j}$ for $i\in \omega$, $j\in |G/B|$; the inverse images of~$fA$, where $f\in \mathbf F_3$, are the summands from~$D$. The rest of the procedure is similar to that from the previous sections. So we have given a~sketch of the proof of the following proposition. \begin{proposition}\label{p7.1} Let $A_1=D_1\oplus G_1$, $A_2=D_2\oplus G_2$, where the groups $D_1$,~$D_2$ are divisible, the groups $G_1$,~$G_2$ are reduced, $|D_1|\ge |G_1|$, and $|D_2|\ge |G_2|$. Then $\mathop{\mathrm{End}}\nolimits(A_1)\equiv \mathop{\mathrm{End}}\nolimits(A_2)\Rightarrow A_1\equiv_{\mathcal L_2} A_2$. \end{proposition} In the rest of this section, we shall assume that $A=D\oplus G$, where $|D|< |G|$. \smallskip 2. $A=D\oplus G$, where $|D|< |G|$, $B$ is a~basic subgroup in~$G$, and $r(B)=r_{\mathrm{fin}}(B)$. The case $r_{\mathrm{fin}}(B)> \omega$ will be considered in Sec.~\ref{sec7_4}, and the case $r_{\mathrm{fin}}(B)=\omega$ will be considered in Sec.~\ref{sec7_5}. If $r(B)> \omega$, then $|A|=r(B)$ and $|D|< r(B)$. If $r(B)=\omega$, then $|A|\le |\mathcal P(\omega)|$, therefore if we do not assume the continuum hypothesis, then we can meet the situation where $\omega< |D| < |A|\le 2^\omega$, which is not good for us. Thus, for the simplicity of the arguments, we shall assume the continuum hypothesis. Hence, if $A=D\oplus G$, where $|D|< |G|$, $r(B)=r_{\mathrm{fin}}(B)$, then we shall interpret the theory $\mathrm{Th}_2^{r(B)}(A)$ in the ring $\mathop{\mathrm{End}}\nolimits(A)$. \smallskip 3. $A=D\oplus G$, where $|D|< |G|$ and $r(B)\ne r_{\mathrm{fin}}(B)$. If in this case $r_{\mathrm{fin}}(B)> \omega$, then we can obtain the full second order theory of the group~$A$. This case is considered in Sec.~\ref{sec7_4}. If $r_{\mathrm{fin}}(B)=\omega$, then, assuming the continuum hypothesis, we can find in the group~$A$ a~bounded direct summand of power~$|A|$, and in this case we can define the complete second order theory of the group~$A$. This case is considered in Sec.~\ref{sec7_6}. In Secs.\ \ref{sec7_2}~and~\ref{sec7_3}, we shall find some definable objects, which are important for all cases. \subsection{Definable Objects}\label{sec7_2} In this section, we assume that $A=D\oplus G$, the group~$D$ is divisible (it can be zero), the group~$G$ is reduced and has an unbounded basic subgroup~$B$, $$ B=B_1\oplus \dots\oplus B_n\oplus \dots, $$ where $$ B_n\cong \bigoplus_{\mu_n} \mathbb Z(p^n), $$ $r(D)=\mu_D$, $|B|=\bigcup\limits_{n\in \mathbb N} \mu_n=\mu_B$, $|G|=\mu_G$ (if $\mu_B> \omega$, then $\mu_G=\mu_B$), and $\mu=|A|=\max(\mu_D,\mu_G)$. We suppose that projections $\rho_D$~and~$\rho_G$ on the summands $D$~and~$G$ of the group~$A$, respectively, are fixed. By~$Z$ we shall denote the center of the ring $\mathop{\mathrm{End}}\nolimits(A)$. As we remember (see Theorem~\ref{t2.9}), each of its elements multiplies all elements of~$A$ by some fixed $p$-adic number. For any indecomposable projections $\rho_1$~and~$\rho_2$ from~$\mathop{\mathrm{End}}\nolimits(G)$ we shall write $o(\rho_1)\le o(\rho_2)$ if $$ \forall c\in Z\, (c\rho_2=0\Rightarrow c\rho_1=0). $$ It is clear that this formula holds if and only if the order of the finite cyclic direct summand~$\rho_1A$ is not greater than the order of the summand~$\rho_2A$.
Similarly, \begin{align*} (o(\rho_1)< o(\rho_2))&\mathbin{{:}\!=} (o(\rho_1)\le o(\rho_2)) \logic\land \neg (o(\rho_2)\le o(\rho_1)),\\ (o(\rho_1)= o(\rho_2))&\mathbin{{:}\!=} (o(\rho_1)\le o(\rho_2)) \logic\land (o(\rho_2)\le o(\rho_1)). \end{align*} For every indecomposable projection~$\rho$ we shall consider the following formula sets. 1. The formula $$ \mathrm{Ord}_{\rho}(f)\mathbin{{:}\!=} \mathrm{Idem}(f) \logic\land \forall f'\, (\mathrm{Idem}^*(f') \logic\land f'f=f'\Rightarrow o(f')=o(\rho)) $$ defines the projections~$f$ on direct summands $fA$ in~$A$ which are direct sums of cyclic groups of order~$o(\rho A)$. 2. The formula $$ \mathrm{MaxOrd}_{\rho}(f)\mathbin{{:}\!=} \mathrm{Idem}(f) \logic\land \mathrm{Ord}_\rho(f) \logic\land \forall f'\, (\mathrm{Ord}_\rho(f') \logic\land f'\ne f \Rightarrow \neg (ff'=f)) $$ defines the projections~$f$ on maximal direct summands~$fA$ in~$A$ which are direct sums of cyclic groups of order~$o(\rho A)$. 3. The formula $$ \mathrm{Rest}_{\rho}(f) \mathbin{{:}\!=} \mathrm{Idem}(f) \logic\land \forall f'\, (\mathrm{Idem}^*(f') \logic\land f'f=f'\Rightarrow o(f')\le o(\rho)) $$ defines the projections~$f$ on direct summands $fA$ in~$A$ which are direct sums of cyclic groups of order at most $o(\rho A)$. 4. The formula $$ \mathrm{MaxRest}_{\rho}(f) \mathbin{{:}\!=} \mathrm{Idem}(f) \logic\land \mathrm{Rest}_\rho(f) \logic\land \forall f'\, (\mathrm{Rest}_\rho(f') \logic\land f'\ne f \Rightarrow \neg (ff'=f)) $$ defines the projections~$f$ on maximal direct summands $fA$ in~$A$ which are direct sums of cyclic groups of order at most $o(\rho A)$. 5. The formula $$ \overline{\mathrm{Base}}(\varphi) \mathbin{{:}\!=} \forall \rho\, \exists f\, (\mathrm{MaxRest}_\rho(f) \logic\land \forall f'\, (\mathrm{Idem}^*(f')\Rightarrow (f'f=f'\Leftrightarrow \forall c\in Z\, (cf'\ne 0\Rightarrow c(f'\varphi)\ne 0)))) $$ postulates that for every natural~$n$ there exists a~maximal $p^n$-bounded direct summand of the group~$A$ which is included in~$\varphi A$. Therefore, the group~$\varphi A$ necessarily contains some basic subgroup of the group~$A$. 6. The formula \begin{multline*} \mathrm{Base}(\varphi) \mathbin{{:}\!=} \overline{\mathrm{Base}}(\varphi) \logic\land \forall f^*\, (\mathrm{Idem}^*(f^*) \logic\land f^*\varphi \ne 0\Rightarrow \exists \rho\, \exists f\, (\mathrm{MaxRest}_\rho(f) \\ \logic\land \forall f'\, (\mathrm{Idem}^*(f')\Rightarrow (f'f=f'\Leftrightarrow \forall c\in Z\, (cf'\ne 0\Rightarrow c(f'\varphi)\ne 0))) \logic\land f^*f=f^*)) \end{multline*} is true for every endomorphism $\varphi\in \mathop{\mathrm{End}}\nolimits(A)$ whose image is a~basic subgroup in~$A$. Let us suppose that we have a~fixed endomorphism~$\varphi_B$ such that $\mathrm{Base}(\varphi_B)$. \subsection{Definable Special Sets}\label{sec7_3} We shall consider three different cases: \begin{enumerate} \item $\mu_B=\omega$; \item $\mu_B> \omega$ and $\forall k\in \mathbb N\, \exists n\in \mathbb N\, (n> k \logic\land \mu_n=\mu_B)$. This is always true if $\mathrm{cf} \mu_B >\omega$; \item $\mu_B> \omega$, $\mathrm{cf} \mu_B=\omega$, $\forall n\in \mathbb N\, (\mu_n<\mu_B)$. \end{enumerate} \smallskip Case 1. $\mu_B=\omega$.
Let us consider the formula \begin{align*} & \mathrm{Intr}(f) \mathbin{{:}\!=} [\forall f' \, (\mathrm{Idem}(f') \logic\land f'\varphi_B\ne 0\Rightarrow f'f\ne 0)]\logic\land [\forall f_1\, \forall f_2\, (\mathrm{Idem}^*(f_1) \logic\land \mathrm{Idem}^*(f_2) \logic\land o(f_1)=o(f_2) \\ & \quad {}\logic\land \forall c\in Z\, (cf_1\ne 0\Rightarrow cf_1f\ne 0) \logic\land \forall c\in Z\, (cf_2\ne 0\Rightarrow cf_2 f\ne 0)\Rightarrow f_1f_2\ne 0 \logic\land f_2f_1\ne 0)] \\ & \quad {}\logic\land [\forall \rho'\, (\mathrm{Idem}^*(\rho')\Rightarrow \exists f'\, (\mathrm{Idem}^*(f') \logic\land o(f') > o(\rho') \logic\land \forall c\in Z\, (cf'\ne 0\Rightarrow cf'f\ne 0)))]. \end{align*} The first part of this formula, enclosed in square brackets, postulates $fA\subset \varphi_B A=B$. The second part states that the image of~$f$ contains at most one cyclic direct summand of each order. The third part states that the orders of direct summands in $fA$ are unbounded. Therefore this formula gives us an endomorphism~$f$ with image~$B'$ being a~cyclic direct summand in~$B$, $$ B'\cong \bigoplus_{i\in \mathbb N} \mathbb Z(p^{n_i}), $$ where $(n_i)$ is an increasing sequence. This endomorphism is supposed to be fixed and is denoted by~$f_B$. Now we shall consider homomorphisms from~$B'$ into~$A$. Namely, we shall consider only functions~$f$ with the condition $$ \forall \rho\, (\mathrm{Idem}(\rho) \logic\land \rho f_B=0\Rightarrow\rho f=0). $$ Two functions $f_1$~and~$f_2$ satisfying this condition are said to be equivalent if $$ \forall \rho\, (\mathrm{Idem}^*(\rho) \logic\land \forall c\in Z\, (c\rho\ne 0\Rightarrow c\rho f_B\ne 0) \Rightarrow f_1 \rho=f_2\rho), $$ i.e., if they coincide on the group~$B'$. Consequently, if we factorize the set of all described functions by this equivalence, then we obtain the group $\mathop{\mathrm{Hom}}\nolimits(B',A)$. Introduce now the formula $$ o(\rho_1)\ge o(\rho_2)^2 $$ for two indecomposable idempotents $\rho_1$~and~$\rho_2$ as follows: $$ \forall c\in Z\, (c\rho_2\ne 0\Rightarrow c^2 \rho_1\ne0). $$ This formula means that $|\rho_1A|\ge |\rho_2A|^2$. Similarly we can introduce the formulas $$ o(\rho_1)> o(\rho_2)^2\ \ \text{and}\ \ o(\rho_1)=o(\rho_2)^2. $$ Suppose now that our function~$f_B$ satisfies the additional condition \begin{align*} & \forall f'\, (\mathrm{Idem}^*(f') \logic\land \forall c\in Z\, (cf'\ne 0\Rightarrow cf'f_B\ne 0)\Rightarrow pf'\ne 0) \\* & \quad \logic\land \forall \rho'\, \forall f'\, (\mathrm{Idem}^*(f') \logic\land \forall c\in Z\, (cf'\ne 0\Rightarrow cf'f_B\ne 0) \logic\land o(f')=o(\rho') \\* & \quad \Rightarrow \forall f\, (\mathrm{Idem}^*(f) \logic\land \forall c\in Z\, (cf\ne 0\Rightarrow cff_B\ne 0) \logic\land o(f)> o(\rho')\Rightarrow o(f)> o(\rho')^2)). \end{align*} This condition means that \begin{enumerate} \item every cyclic direct summand in~$B'$ of the smallest order has order greater than~$p$ (i.e., not smaller than~$p^2$); \item for every direct cyclic summand in~$B'$ of order~$p^k$ the next cyclic summand of greater order has order greater than~$p^{2k}$. \end{enumerate} Therefore, $$ B'\cong \bigoplus_{i\in \omega} \mathbb Z(p^{n_i}), $$ where $n_1\ge 2$, $n_{i+1}> 2n_i$.
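For instance, the sequence $n_i=3\cdot 2^{i-1}-1$ (that is, $2,5,11,23,\dots$) satisfies both conditions: $n_1=2\ge 2$ and $n_{i+1}=2n_i+1>2n_i$ (this concrete sequence is given only as an illustration; any sequence with these two properties would do). The rapid growth of the orders is exactly what is used in the computation with the endomorphism~$\psi$ below.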
Now consider the formula \begin{align*} & \mathrm{Ins}(\psi) \mathbin{{:}\!=} [\exists f\, (\mathrm{Idem}^*(f) \logic\land \forall c\in Z\, (cf\ne 0 \Rightarrow cff_B\ne 0) \\ & \quad \logic\land \forall f'\, (\mathrm{Idem}^*(f') \logic\land \forall c\in Z\, (cf'\ne 0\Rightarrow cf'f_B\ne 0) \Rightarrow o(f)\le o(f')) \logic\land \psi f=pf)] \\ & \quad \logic\land [\forall f_1\, \forall c_1\in Z\, (\mathrm{Idem}^*(f_1) \logic\land \forall c\in Z\, (cf_1\ne 0\Rightarrow cf_1f_B\ne 0) \\ & \quad {} \logic\land (\psi f_1=c_1f_1\Rightarrow \exists f_2\, (\mathrm{Idem}^*(f_2) \logic\land \forall c\in Z\, (cf_2\ne 0\Rightarrow cf_2f_B\ne 0) \logic\land o(f_2)> o(f_1) \\ & \quad \logic\land \forall f'\, (\mathrm{Idem}^*(f') \logic\land \forall c\in Z\, (cf'\ne 0\Rightarrow cf'f_B\ne 0) \Rightarrow o(f')\le o(f_1) \logic\lor o(f')\ge o(f_2)) \logic\land \psi f_2=pc_1 f_2)))]. \end{align*} The condition from the first square brackets states that there exists a~cyclic summand of the smallest order such that the action of the endomorphism~$\psi$ on it is multiplication by~$p$. The second condition states that for every natural~$i$ there exists a~direct cyclic summand $\langle a_i \rangle$ of order~$p^{n_i}$ such that the action of~$\psi$ on it is multiplication by~$p^i$. Suppose that for some different cyclic direct summands $\langle a_i\rangle$ and $\langle b_i\rangle$ from~$B'$ the action of~$\psi$ on them is multiplication by~$p^i$. Let $b_i=\sum \alpha_k a_k +\sum \beta_l a_l + a_i$, where $o(a_k)< o(b_i)$, $o(a_l)> o(b_i)$, $k< i$, $l> i$, $$ \psi(b_i)=p^i b_i=\sum p^k \alpha_k a_k +\sum p^l \beta_l a_l+ p^i a_i= \sum p^i \alpha_k a_k+\sum p^i \beta_l a_l +p^i a_i. $$ Let $k< i$. Then we have $p^i\alpha_ka_k=p^k\alpha_ka_k$, whence $p^k\alpha_ka_k=0$, i.e., $\alpha_k$~is divisible by~$p^{n_k-k}$, and $p^{n_k-k}$ is divisible by~$p^k$. Therefore, we can write $$ b_i=\sum \alpha_k p^ka_k+\sum \beta_l p^{n_l-n_i}a_l + a_i. $$ We have $p^i\beta_l p^{n_l-n_i}a_l=p^l \beta_l p^{n_l-n_i}a_l=0$. Note that every cyclic direct summand $\langle a\rangle$ either uniquely corresponds to an element of the center $c\in Z$ such that $\psi a=ca$, or there is no such element. We can consider only those summands that correspond to elements of the center. Let us suppose that we have some homomorphism $f\colon B'\to A$ such that $o(f(a_i))\le p^i$. Let $\psi(a_i)=p^ia_i$ and $\psi(b_i)=p^i b_i$. Let us find $f(b_i)$: $$ f(b_i)=\sum \alpha_k p^k f(a_k)+\sum \beta_l p^{n_l-n_i} f(a_l)+f(a_i). $$ Since $o(f(a_k))\le p^k$, we have $\sum \alpha_k p^k f(a_k)=0$. Since $o(f(a_l))\le p^l$, we have $\sum \beta_l p^{n_l-n_i} f(a_l)=0$. Therefore $f(b_i)= f(a_i)$. Thus every element of the center~$Z$ of the form $p^n\cdot E$ is mapped (under this homomorphism $f\colon B'\to A$) to some uniquely defined element $a\in A$ with the condition $o(a)\le p^n$. \smallskip Case 2. $\forall k\in \omega\, \exists n\in\omega\, (n > k \logic\land \mu_n =\mu_B)$. Consider the formula \begin{multline*} \mathrm{ECard}(\rho) \mathbin{{:}\!=} \mathrm{Idem}^*(\rho) \logic\land \exists \psi\, \forall f\, (\mathrm{Idem}^*(f)\logic\land \forall c\in Z\, (cf\ne 0\Rightarrow cf\varphi_B\ne 0) \\ \Rightarrow \exists f'\, (\mathrm{Idem}^*(f') \logic\land \forall c\in Z\, (cf'\ne 0\Rightarrow cf'\varphi_B\ne 0) \logic\land o(f')=o(\rho) \logic\land f\psi f'\ne 0)).
\end{multline*} This formula states that the set of independent cyclic summands of order $o(\rho)$ has the same power as the whole group~$B$, because there exists a~homomorphism~$\psi$ from a~direct summand of the group~$B$ (which is isomorphic to the sum of cyclic groups of order~$o(\rho)$) such that its image intersects every cyclic summand of~$B$. Therefore, $\mu_{o(\rho)}=\mu_B$. Now let us consider the formula \begin{align*} & \mathrm{Fine}(f) \mathbin{{:}\!=} [\forall f'\, (\mathrm{Idem}(f') \logic\land f'\varphi_B\ne 0 \Rightarrow f'f\ne 0)] \\ & \quad {}\logic\land [\forall f_1\, \forall f_2\, (\mathrm{Idem}^*(f_1) \logic\land \mathrm{Idem}^*(f_2) \logic\land o(f_1)=o(f_2) \\ & \quad {}\logic\land \forall c\in Z\, (cf_1\ne 0\Rightarrow cf_1 f\ne 0) \logic\land \forall c\in Z\, (cf_2\ne 0\Rightarrow cf_2 f\ne 0) \Rightarrow f_1f_2\ne 0 \logic\land f_2f_1\ne 0)] \\ & \quad \logic\land [\forall \rho'\, (\mathrm{ECard}(\rho')\Leftrightarrow \exists f'\, (\mathrm{Idem}^*(f') \logic\land o(f')= o(\rho') \logic\land \forall c\in Z\, (cf'\ne 0\Rightarrow cf'f\ne 0)))]. \end{align*} The first part of the formula, enclosed in square brackets, postulates $fA\subset \varphi_B A=B$. The second part, enclosed in the second square brackets, states that the image of $fA$ does not contain more than one cyclic direct summand of any order. The third part states that the direct cyclic summands of~$fA$ have exactly the orders~$p^n$ for which $\mu_n=\mu_B$. To make the further constructions, we need to recall Sec.~\ref{sec3}. The formulation of Theorem~\ref{shel_t4_1} need not be changed, but Lemma~\ref{shel_l4_2} and the proof of the theorem with the help of this lemma must be changed a~little for our case. Here is the new formulation of Lemma~\ref{shel_l4_2}: \textit{there exists a~formula $\varphi(f)$ with one free variable~$f$ such that $\varphi(f)$ holds in $\mathop{\mathrm{End}}\nolimits(B')$ if and only if there exists an ordinal number $\alpha\in \mu$ such that for all $\beta \in \mu$ and all $m\in \omega$} $$ f(a_{\langle 0,\beta\rangle}^m)=a_{\langle \alpha,\beta\rangle}^m. $$ Now we shall write the proof of the theorem with the help of the lemma. Let a~function $f_0^*$ map (for every $m\in \omega$) the set $\{ a_{\langle 0,\alpha\rangle}^m\mid \alpha\in \mu\}$ onto the set $\{ a_t^m\mid t\in I^*\}$, and $f_0^*(a_{\langle \alpha,\beta\rangle}^m)= f_0^*(a_{\langle 0,\beta\rangle}^m)$. Suppose that we have a~set $\{ f_i\}_{i\in \mu}$ and let the function~$f^*$ be such that $$ f^*(a_{\langle \alpha,\beta\rangle}^m)= f_\alpha\circ f_0^*(a_{\langle \alpha,\beta\rangle}^m). $$ Let $f_1^*$ map (for every~$m\in \omega$) the set $\{ a_t^m\mid t\in I^*\}$ onto the set $\{ a_{\langle 0,\beta\rangle}^m\mid \beta\in \mu\}$. Let the formula $\tilde \varphi(f',\dots)$ say that there exists~$f$ such that \begin{enumerate} \item $\varphi(f)$; \item $f'\circ f_0^*\circ f_1^*=f^*\circ f\circ f_1^*$. \end{enumerate} Then $\mathop{\mathrm{End}}\nolimits(B')\vDash \varphi(f)$ if and only if there exists $\alpha\in \mu$ such that $$ \forall \beta\in \mu\, \forall m\in \omega\, f(a_{\langle 0,\beta\rangle}^m)=a_{\langle \alpha,\beta\rangle}^m.
$$ Therefore \begin{multline*} f'\circ f_0^*\circ f_1^*(a_t^m)=f^*\circ f\circ f_1^*(a_t^m) \Leftrightarrow f'\circ f_0^*(a_{\langle 0,\beta\rangle}^m)= f^*\circ f(a_{\langle 0,\beta\rangle}^m) \\ {}\Leftrightarrow f'\circ f_0^*(a_{\langle 0,\beta\rangle}^m)= f^*(a_{\langle \alpha,\beta\rangle}^m) \Leftrightarrow f'\circ f_0^*(a_{\langle 0,\beta\rangle}^m)= f_{\alpha}\circ f_0^*(a_{\langle 0,\beta\rangle}^m). \end{multline*} Let $f_0^*(a_{\langle 0,\beta\rangle}^m)=a_{t_\beta}^m$. Then $$ f'(a_{t_\beta}^m)= f_{\alpha}(a_{t_\beta}^m), $$ which is what we needed. Now we need to change the proof of the lemma. The case $\mu=\omega$ is not interesting for us, therefore we shall begin with the second case. Suppose that the cardinal number~$\mu_B$ is regular. Represent the index set (of power~$\mu_B$) as the union of the sets $I_0$, $I$, and~$J$, where $|I_0|=|J|=|I|=\mu_B$, $J=\{ \langle \alpha,\beta\rangle\mid \alpha,\beta\in \mu\}\cup \{ 0\}$, $I=\{ \langle \alpha,\delta,n\rangle\mid \alpha\in \mu,\allowbreak\ {\delta\in \mu},\allowbreak \ {\mathrm{cf} \delta=\omega},\allowbreak\ n\in \omega\}$, and $a_\alpha^{\beta,n}=a_{\langle \alpha,\beta,n\rangle}$. As in Sec.~\ref{sec3}, for every limit ordinal $\delta\in \mu$ such that $\mathrm{cf} \delta=\omega$, we choose an increasing sequence $(\delta_n)_{n\in \omega}$ of ordinal numbers less than~$\delta$ such that their limit is~$\delta$ and for each $\beta\in \mu$ and $n\in \omega$ the set $\{ \delta\in \mu\mid \beta=\delta_n\}$ is a~stationary subset in~$\mu$. Consider an independent set of generators of cyclic direct summands from~$B$ such that \begin{enumerate} \item the order of every generator from this set is equal to~$p^n$, where $\mu_n=\mu_B$; \item for every $n\in \omega$ such that $\mu_n=\mu_B$, the set of all elements of order~$p^n$ from this set has the power~$\mu_B$. \end{enumerate} Let us denote this set by $$ \{ a_t^n\mid t\in J\cup I_0\cup I=I^*,\ o(a_t^n)=p^n,\ \mu_n=\mu_B\}. $$ Let us define the functions $f_0^*,\dots,f_{14}^*$, similarly to the functions from case~II of Sec.~\ref{sec3}, but with some additions: $f_0^*(a_t^m)=a_0^m$, $f_1^*(a_t^m)=a_t^{m-1}$ for $m> 0$, $f_1^*(a_t^0)=0$, $f_2^*(a_{\langle \alpha,\beta\rangle}^m)=a_{\langle 0,0\rangle}^m$, $f_3^*(a_{\langle \alpha,\beta\rangle}^m)=a_{\langle \alpha,0\rangle}^m$, $f_4^*(a_{\langle \alpha,\beta\rangle}^m)=a_{\langle 0,\beta\rangle}^m$, $f_5^*(a_{\langle \alpha,\beta\rangle}^m)=a_{\langle \beta,\alpha\rangle}^m$, $f_6^*(a_{\langle \alpha,\beta\rangle}^m)= a_{\langle \alpha,\alpha\rangle}^m$, for $\delta\in \mu_B$, $\mathrm{cf} \delta=\omega$, $f_7^*(a_{\langle \alpha,\delta\rangle}^m)= a_{\langle\alpha,\delta,0\rangle}^m$, for $\delta\in \mu_B$, $\mathrm{cf} \delta\ne \omega$, $f_7^*(a_{\langle \alpha,\delta\rangle}^m)= a_{\langle \alpha,\delta\rangle}^m$, $f_8^*(a_{\langle \alpha,\beta\rangle}^m)=a_{\langle \alpha,\beta\rangle}^m$, $f_8^*(a_{\langle \alpha,\delta,n\rangle}^m)= a_{\langle \alpha,\delta, n+1\rangle}^m$, $f_9^*(a_{\langle \alpha,\beta\rangle}^m)=a_{\langle \alpha,\beta\rangle}^m$, $f_9^*(a_{\langle \alpha,\delta,n\rangle}^m)= a_{\langle \alpha,\delta_n\rangle}^m$.
Let, as above, \begin{align*} B_0&= \langle \{ a_{\langle 0,0\rangle}^m\mid m\in \omega\}\rangle,\\ B_1&= \langle \{ a_{\langle \alpha,0\rangle}^m\mid \alpha\in \mu,\ m\in \omega\}\rangle,\\ B_2&= \langle \{ a_{\langle 0,\beta\rangle}^m\mid \beta\in \mu,\ m\in \omega\}\rangle,\\ B_3&= \langle \{ a_{\langle \alpha,\beta\rangle}^m\mid \alpha,\beta\in \mu,\ m\in \omega\}\rangle,\\ B_4&= \langle \{ a_{\langle \alpha,\beta\rangle}^m, a_{\langle \alpha,\beta,n\rangle}^m\mid \alpha,\beta\in \mu,\ n,m\in \omega\}\rangle,\\ B_5&= \langle\{ a_{\langle 0,\beta\rangle}^m,a_{\langle 0,\beta,n\rangle}^m \mid \beta\in \mu,\ n,m\in \omega\}\rangle,\\ B_6&= \langle \{ a_{\langle 0,\beta\rangle}^m\mid \beta\in \mu,\ \mathrm{cf} \beta=\omega,\ m\in \omega \}\rangle,\\ B_7&= \langle \{ a_{\langle \alpha,\beta\rangle}^m\mid \alpha,\beta\in \mu,\ \mathrm{cf} \beta =\omega,\ m\in \omega\}\rangle. \end{align*} It is clear that $f_3^*$, $f_4^*$, and $f_9^*$ are projections onto $B_1$, $B_2$, and~$B_3$, respectively. Let $f_{10}^*$, $f_{11}^*$, $f_{12}^*$, and $f_{13}^*$ be projections onto $B_4$, $B_5$, $B_6$, and $B_7$, respectively. All functions which are considered later satisfy the following formula $\varphi^0(f,\dots)$: $$ f_0^* f=f f_0^*=f_0^* \logic\land ff_1^*=f_1^* f. $$ Let us see what this formula means. Its first part gives us $f(a_0^m)=ff_0^*(a_0^m)=f_0^*(a_0^m)=a_0^m$ and $f_0^* f(a_i^m)=f_0^*(a_i^m)=a_0^m$, therefore $f(a_i^m)=\alpha_1 a_{i_1}^m+\dots+\alpha_k a_{i_k}^m$. The second part gives $$ ff_1^*(a_t^m)=f_1^* f(a_t^m)\Leftrightarrow f(a_t^{m-1})=f_1^*(\alpha_1(m)a_{t_1(m)}^m+\dots+\alpha_k(m) a_{t_k(m)}^m)= \alpha_1(m) a_{t_1(m)}^{m-1}+\dots+\alpha_k(m)a_{t_k(m)}^{m-1}, $$ therefore for every $t\in I^*$ and for all $l,m\in \omega$ we have $\alpha_i(m)=\alpha_i(l)$ and $t_i(m)=t_i(l)$. Now we apply Lemma~\ref{shel_l1_3} with $J=\{ \langle \alpha,\beta\rangle\mid \alpha,\beta \in \mu\}$, $J_\beta=\{ \langle \alpha,\beta\rangle\mid \alpha\in \mu\}$, $I=I^*$, and $f=f_{14}^*$. The formula \begin{multline*} \varphi^1(f,g;f_1^*,\dots,f_{14}^*) \mathbin{{:}\!=} \varphi^0(f) \logic\land \varphi^0(g) \logic\land \exists h_1\, \exists h_2\, (h_1 f h_1^{-1} =f_2^* \logic\land h_2 gh_2^{-1}=f_2^*) \\ {}\logic\land f_9^* f=f \logic\land f_9^* g=g \logic\land \exists h\, (f f_{14}^*=f_{14}^* h \logic\land h f_9^* =f_9^* h f_9^* \logic\land hf=g) \end{multline*} says that \begin{enumerate} \item $f$ and $g$ are conjugate to~$f_2^*$; \item $\mathop{\mathrm{Rng}}\nolimits f, \mathop{\mathrm{Rng}}\nolimits g\subset B_3$; \item $\exists h\, (h\circ f_{14}^*=f_{14}^*\circ h \logic\land \mathop{\mathrm{Rng}}\nolimits h|_{B_3}\subseteq B_3 \logic\land h\circ f=g)$. \end{enumerate} We shall write $\varphi^1(f,g;\dots)$ also in the form $f\le g$. If $f$ and $g$ are conjugate to $f_2^*$, $f(a_{\langle 0,0\rangle}^m)=a_{\langle \alpha,\beta\rangle}^m$ and $g(a_{\langle 0,0\rangle}^m)= \tau(a_{\langle \alpha_1, \beta_1\rangle}^m,\dots, a_{\langle \alpha_k,\beta_k\rangle}^m)$, then $f\le g$ if and only if $\beta\le \beta_1,\dots,\allowbreak \beta\le \beta_k$.
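For instance, if $f(a_{\langle 0,0\rangle}^m)=a_{\langle 3,5\rangle}^m$ and $g(a_{\langle 0,0\rangle}^m)=a_{\langle 1,7\rangle}^m+a_{\langle 2,9\rangle}^m$, then $f\le g$, since $5\le 7$ and $5\le 9$: the relation compares only the second coordinates of the indices.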
The formula \begin{align*} & \varphi_2(f,f_1^*,\dots,f_{14}^*) \mathbin{{:}\!=} \varphi^0(f) \logic\land (ff_4^*=f_9^* ff_4^*) \logic\land (ff_{11}^*=f_{10}^* ff_{11}^*) \logic\land (ff_2^*=f_3^* ff_2^*) \logic\land (ff_{12}^*=f_{13}^*ff_{12}^*) \\* & \quad \logic\land \forall g\, (\varphi^0(g) \logic\land \exists h\, (hgh^{-1}=f_2^*) \logic\land f_4^* g=g \Rightarrow \varphi^1(f,fg;f_1^*,\dots,f_{14}^*)) \\* & \quad \logic\land (ff_{11}^* f_7^*=f_7^* ff_{11}^*) \logic\land (ff_{11}^* f_8^*=f_8^* ff_{11}^*) \logic\land (ff_{11}^* f_9^*=f_9^* ff_{11}^*) \logic\land (ff_9^*f_3^*=f_3^*ff_9^*) \end{align*} says that \begin{enumerate} \item $\mathop{\mathrm{Rng}}\nolimits f|_{B_2}\subseteq B_3$; $\mathop{\mathrm{Rng}}\nolimits f|_{B_5}\subseteq B_4$; $\mathop{\mathrm{Rng}}\nolimits f|_{B_0}\subseteq B_1$; $\mathop{\mathrm{Rng}}\nolimits f|_{B_6}\subseteq B_7$; \item for any~$g$ satisfying $\varphi^0(g)$ and conjugate to~$f_2^*$, if $\mathop{\mathrm{Rng}}\nolimits g\subseteq B_2$, then $g\le f\circ g$; \item $f|_{B_5}$ commutes with $f_7^*$, $f_8^*$, and $f_9^*$; \item $f|_{B_3}$ commutes with~$f_3^*$. \end{enumerate} Then, similarly to Statement~\ref{shel_s4_3}, we can prove that the formula $\varphi_2(f,\dots)$ holds in $\mathop{\mathrm{End}}\nolimits(B')$ if and only if for any $\beta\in \mu$ $$ f(a_{\langle 0,\beta\rangle}^m)= \tau(a_{\langle \alpha_1,\beta\rangle}^m,\dots, a_{\langle \alpha_k,\beta\rangle}^m)\ \ \text{and}\ \ f(a_{\langle 0,\beta,n\rangle}^m)= \tau(a_{\langle \alpha_1,\beta,n\rangle}^m,\dots, a_{\langle \alpha_k,\beta,n\rangle}^m) $$ for some linear combination~$\tau$ and ordinal numbers $\alpha_1,\dots,\alpha_k \in \mu$ (which do not depend on~$\beta$). The formula $$ \varphi_3(f,f_1^*,\dots,f_{14}^*)\mathbin{{:}\!=} (ff_4^*=f_9^*ff_4^*) \logic\land \exists f_1\, (f_1f_4^*=ff_4^* \logic\land \varphi_2(f_1,f_1^*,\dots,f_{14}^*)) $$ says that \begin{enumerate} \item $\mathop{\mathrm{Rng}}\nolimits f|_{B_2}\subseteq B_3$; \item $\exists f_1 (f_1|_{B_2}=f|_{B_2} \logic\land \varphi_2(f_1))$. \end{enumerate} The formula $\varphi_3(f,\dots)$ holds if and only if $$ f(a_{\langle 0,\beta\rangle}^m)= \tau(a_{\langle \gamma_1,\beta\rangle}^m,\dots, a_{\langle \gamma_k,\beta\rangle}^m) $$ for every $\beta\in \mu$ and some $\tau; \gamma_1,\dots,\gamma_k$. This follows immediately from Statement~\ref{shel_s4_3}. Let the formula $\varphi_4(f)$ say that \begin{enumerate} \item $\mathop{\mathrm{Rng}}\nolimits f|_{B_2}\subseteq B_3$; \item $\varphi_3(f)$; \item $\forall g\, (\varphi_3(g)\Rightarrow g\circ f_5^*\circ f|_{B_0}=f_5^*\circ f\circ f_5^*\circ g|_{B_0} \logic\land f_5^*\circ f\circ f_5^*\circ f|_{B_0}=f_6^*\circ f|_{B_0} \logic\land f_2^*\circ f|_{B_0}=f_2^*|_{B_0})$. \end{enumerate} The formula $\varphi_4(f)$ holds if and only if $$ f(a_{\langle 0,\beta\rangle}^m)= \tau(a_{\langle \gamma_1,\beta\rangle}^m,\dots, a_{\langle \gamma_k,\beta\rangle}^m), $$ where $\tau$ is a~beautiful linear combination. Now suppose that $\mu_B$ is a~singular cardinal number. Choose $\mu_1< \mu$, where $\mu_1$ is a~regular cardinal and $\mu_1> \omega$. Let $I^*\setminus J=I_0\cup \{ \langle \alpha,\delta,n\rangle\mid \alpha\in \mu,\allowbreak\ {\delta\in \mu_1},\allowbreak\ {\mathrm{cf} \delta=\omega},\allowbreak\ n\in \omega\}$, $|I_0|=\mu$.
For every limit ordinal $\delta\in \mu_1$ such that $\mathrm{cf} \delta=\omega$, similarly to the previous case we shall choose an increasing sequence $(\delta_n)_{n \in\omega}$ of ordinal numbers less than~$\delta$, with limit~$\delta$, such that for any $\beta\in \mu_1$ and $n\in \omega$ the set $\{ \delta\in \mu_1\mid \beta=\delta_n\}$ is a~stationary subset in~$\mu_1$. Let \begin{align*} B_1&=\langle \{ a_{\langle \alpha,0\rangle}^m\mid \alpha\in \mu,\ m\in \omega \}\rangle,\\ B_2&= \langle \{ a_{\langle 0,\beta\rangle}^m\mid \beta\in \mu_1,\ m\in \omega\}\rangle,\\ B_3&=\langle \{ a_{\langle\alpha,\beta\rangle}^m\mid \alpha\in \mu,\ \beta\in \mu_1,\ m\in \omega\}\rangle. \end{align*} As in the previous case, we can define the functions~$f_i^*$ in such a~way that for some $\varphi'({\dots})$ the formula $\varphi'(f;\dots)$ holds in $\mathop{\mathrm{End}}\nolimits(B')$ if and only if there exists an ordinal number $\alpha\in \mu$ such that for every $\beta\in \mu_1$ and every $m\in \omega$ $$ f(a_{\langle 0,\beta\rangle}^m)=a_{\langle \alpha,\beta\rangle}^m. $$ Let the formula $\varphi^1(f,\dots)$ say that \begin{enumerate} \item $\mathop{\mathrm{Rng}}\nolimits f|_{B_0}\subseteq B_2$; \item for every~$g$ we have $\varphi^0(g)\Rightarrow (f\circ g)|_{B_0}=(g\circ f)|_{B_0}$. \end{enumerate} It is easy to check that the formula $\varphi^1(f,\dots)$ holds if and only if there exist a~linear combination~$\sigma$ and distinct ordinal numbers $\beta_1,\dots,\beta_k\in \mu_1$ such that for every $\alpha\in \mu$ and every $m\in \omega$ $$ f(a_{\langle \alpha,0\rangle}^m)= \sigma(a_{\langle \alpha,\beta_1\rangle}^m,\dots, a_{\langle \alpha,\beta_k\rangle}^m). $$ As the cardinal number~$\mu_1$ is regular, we can use the previous case. Thus, there is a~formula $\varphi^2(f;\dots)$ such that $\varphi^2(f)$ holds in $\mathop{\mathrm{End}}\nolimits(B')$ if and only if there exists $\beta\in \mu_1$ such that for every $\alpha\in \mu$ and every $m\in \omega$ $$ f(a_{\langle \alpha,0\rangle}^m)=a_{\langle \alpha,\beta\rangle}^m. $$ Let $\mu=\bigcup\limits_{i\in \mathrm{cf} \mu} \mu_i$, where $\mu_i\in \mu$ and the sequence $(\mu_i)$ increases. We have just proved that for every $\gamma\in \mathrm{cf} \mu$ there exists a~function $\bar f_\gamma^*$ such that \begin{enumerate} \item the formula $\varphi^2[f,\bar f_\gamma^*]$ holds in $\mathop{\mathrm{End}}\nolimits(B')$ if and only if there exists $\beta\in \mu_\gamma^+$ such that for all $\alpha\in \mu$ and all $m\in \omega$ $$ f(a_{\langle \alpha,0\rangle}^m)=a_{\langle \alpha,\beta\rangle}^m; $$ \item $(\bar f_\gamma^*)_0$ is a~projection onto $$ \langle \{ a_{\langle \alpha,\beta\rangle}^m\mid \alpha\in \mu,\ \beta\in \mu_\gamma^+\}\rangle. $$ \end{enumerate} Further, there exists a~formula~$\varphi^3$ and a~vector of functions~$\bar g^*$ such that the formula $\varphi^3(\bar f,\bar g^*)$ holds if and only if $\bar f=\bar f_{\gamma}^*$ for some $\gamma\in \mathrm{cf} \mu$. Let now the formula $\varphi^4(f,\bar g^*)$ say that there exists~$\bar f_1$ such that $\varphi^3(\bar f_1,\bar g^*)$ and for every~$\bar f_2$ satisfying the formulas $\varphi^3(\bar f_2,\bar g^*)$ and $\mathop{\mathrm{Rng}}\nolimits (\bar f_1)_0\subseteq \mathop{\mathrm{Rng}}\nolimits (\bar f_2)_0$ we have also $\varphi^2(f,\bar f_2)$.
If the formula $\varphi^4(f,\bar g^*)$ is true, then there exists $\bar f_1=\bar f_{\gamma}^*$ for some $\gamma\in \mathrm{cf} \mu$ and for every $\bar f_2=\bar f_{\lambda}^*$ (where $\lambda\ge \gamma$) we have the formula $$ f(a_{\langle \alpha,0\rangle}^m)=a_{\langle \alpha,\beta\rangle}^m, $$ where $\beta< \mu_{\lambda}^+$. Let $f$ be such that $$ f(a_{\langle \alpha,0\rangle}^m)=a_{\langle \alpha,\beta\rangle}^m,\quad \beta \in \mu. $$ Then $\beta\in \mu_\gamma^+$ for some $\gamma\in \mathrm{cf} \mu$, and so the formula $\varphi^4(f,\bar g^*)$ is true for some~$\bar g^*$. Now we only need to consider the formula $\varphi^4(f_5^*\circ f\circ f_5^*)$, which is the required formula. Therefore case~2 is completely studied: in this case we have (similarly to Sec.~\ref{sec5}) a~formula (with parameters) which holds for a~set of $\mu_B$ independent projections~$f$ satisfying the formula $\mathrm{Fine}(f)$. Thus we can suppose that we have a~set of $\mu_B$~projections onto independent direct summands of the group~$B$ isomorphic to the group $$ \bigoplus_{i\in \omega} \mathbb Z(p^{n_i}). $$ This set will be denoted by~$\mathbf F$. \smallskip Case 3. $\forall n\in \omega\, (\mu_n< \mu_B)$ and $\mu_B> \omega$. Naturally, in this case the cardinal number~$\mu_B$ is singular and $\mathrm{cf} \mu_B=\omega$. Choose in the sequence $(\mu_i)_{i\in \omega}$ an increasing subsequence $(\mu_{n_i})_{i\in \omega}$ with limit~$\mu_B$. For every natural~$i$, if the number $\mu_{n_i}$ is regular, then by~$\mu^i$ we shall denote $\mu_{n_i}$, and if it is not regular, then by $\mu^i$ we shall denote some regular cardinal number smaller than $\mu_{n_i}$ and greater than~$\omega$, in such a~way that, as a result, the limit of the sequence $(\mu^i)_{i\in\omega}$ is also equal to~$\mu_B$. For every natural~$i$ suppose that we have a~set~$I_i^*$ of power~$\mu^i$ which is the union of the following sets: $J_i=\{ \langle \alpha,\beta\rangle\mid \alpha,\beta\in \mu^i\}\cup \{0\}$, a~set~$I_i^0$ with $|I^0_i|=\mu^i$, and $I_i=\{ \langle \alpha,\delta,n\rangle \mid \alpha\in \mu^i,\ \delta\in \mu^i,\ \mathrm{cf} \delta=\omega,\ n\in \omega\}$. Let us for every $i\in \omega$ have $\mu^i$~independent generators of cyclic direct summands of order~$p^{n_i}$, denoted by~$a_t^i$, where $t\in I_i^*$. Let $\langle \{ a_t^i\mid t\in I_i^*,\ i\in \omega\}\rangle =B'$. Again we need to change the formulation of Theorem~\ref{shel_t4_1}, and Lemma~\ref{shel_l4_2} will be corrected again: {\itshape there exists a~formula $\varphi(f)$ with one free variable~$f$ such that $\varphi(f)$ holds in $\mathop{\mathrm{End}}\nolimits(B')$ if and only if there exists a~sequence of ordinal numbers $(\alpha_i)_{i\in \omega}$, where $\alpha_i\in \mu^i$, such that for every $m\in \omega$ and every $\beta_m \in \mu^m$} $$ f(a_{\langle 0,\beta_m\rangle}^m)=a_{\langle \alpha_m,\beta_m\rangle}^m. $$ All changes in the proof of the theorem with the help of the lemma are clear, so we shall not write them here. In the proof of the lemma we need only the third case. As above, for every limit ordinal $\delta\in \mu^i$ such that $\mathrm{cf} \delta=\omega$, we shall choose an increasing sequence $(\delta_n)_{n\in \omega}$ of ordinal numbers less than~$\delta$, with limit~$\delta$, such that for any $\beta\in \mu^i$ and $n\in \omega$ the set $\{ \delta\in \mu^i\mid \beta=\delta_n\}$ is a~stationary subset in~$\mu^i$. We shall again introduce the functions $f_0^*,\dots,f_{14}^*$, which will differ a~little from the similar functions from the previous case.
Namely, let $f_0^*(a_t^m)=a_0^m$, $f_2^*(a_{\langle \alpha,\beta\rangle}^m)=a_{\langle 0,0\rangle}^m$, $f_3^*(a_{\langle \alpha,\beta\rangle}^m)=a_{\langle \alpha,0\rangle}^m$, $f_4^*(a_{\langle \alpha,\beta\rangle}^m)=a_{\langle 0,\beta\rangle}^m$, $f_5^*(a_{\langle \alpha,\beta\rangle}^m)=a_{\langle \beta,\alpha\rangle}^m$, $f_6^*(a_{\langle \alpha,\beta\rangle}^m)=a_{\langle \alpha,\alpha\rangle}^m$, for $\delta\in \mu_B$, $\mathrm{cf} \delta=\omega$, $f_7^*(a_{\langle \alpha,\delta\rangle}^m)= a_{\langle\alpha,\delta,0\rangle}^m$, for $\delta\in \mu_B$, $\mathrm{cf} \delta\ne \omega$, $f_7^*(a_{\langle \alpha,\delta\rangle}^m)= a_{\langle \alpha,\delta\rangle}^m$, $f_8^*(a_{\langle \alpha,\beta\rangle}^m)=a_{\langle \alpha,\beta\rangle}^m$, $f_8^*(a_{\langle \alpha,\delta,n\rangle}^m)= a_{\langle \alpha,\delta, n+1\rangle}^m$, $f_9^*(a_{\langle \alpha,\beta\rangle}^m)=a_{\langle \alpha,\beta\rangle}^m$, $f_9^*(a_{\langle \alpha,\delta,n\rangle}^m)= a_{\langle \alpha,\delta_n\rangle}^m$. Let, as above, \begin{align*} B_0&= \langle \{ a_{\langle 0,0\rangle}^m\mid m\in \omega\}\rangle,\\ B_1&= \langle \{ a_{\langle \alpha,0\rangle}^m\mid \alpha\in \mu^m,\ m\in \omega\}\rangle,\\ B_2&= \langle \{ a_{\langle 0,\beta\rangle}^m\mid \beta\in \mu^m,\ m\in \omega\}\rangle,\\ B_3&= \langle \{ a_{\langle \alpha,\beta\rangle}^m\mid \alpha,\beta\in \mu^m,\ m\in \omega\}\rangle,\\ B_4&= \langle \{ a_{\langle \alpha,\beta\rangle}^m, a_{\langle \alpha,\beta,n\rangle}^m\mid \alpha,\beta\in \mu^m,\ n,m\in \omega\}\rangle,\\ B_5&= \langle\{ a_{\langle 0,\beta\rangle}^m,a_{\langle 0,\beta,n\rangle}^m \mid \beta\in \mu^m,\ n,m\in \omega\}\rangle,\\ B_6&= \langle \{ a_{\langle 0,\beta\rangle}^m\mid \beta\in \mu^m,\ \mathrm{cf} \beta=\omega,\ m\in \omega \}\rangle,\\ B_7&= \langle \{ a_{\langle \alpha,\beta\rangle}^m\mid \alpha,\beta\in \mu^m,\ \mathrm{cf} \beta =\omega,\ m\in \omega\}\rangle. \end{align*} Let $f_{10}^*$, $f_{11}^*$, $f_{12}^*$, and $f_{13}^*$ be projections onto $B_4$, $B_5$, $B_6$, and $B_7$, respectively. All functions which will be considered later satisfy the following formula $\varphi^0(f,\dots)$: $$ f_0^* f=f f_0^*=f_0^*. $$ This formula implies $f(a_0^m)=ff_0^*(a_0^m)=f_0^*(a_0^m)=a_0^m$ and $f_0^* f(a_i^m)=f_0^*(a_i^m)=a_0^m$, and therefore $f(a_i^m)=\alpha_1 a_{i_1}^m+\dots+\alpha_k a_{i_k}^m$. For every $m\in \omega$ we apply Lemma~\ref{shel_l1_3} with $J^m=\{ \langle \alpha,\beta\rangle\mid \alpha,\beta \in \mu^m\}$, $J_\beta^m=\{ \langle \alpha,\beta\rangle\mid \alpha\in \mu^m\}$, and $I^m=I^*_m$. Let us for every $m\in \omega$ have the corresponding function~$f^m$ on the group $B^m=\langle \{ a_t^m\mid t\in I^*_m\}\rangle$. Construct with its help the function~$f_{14}^*$, which coincides with~$f^m$ on every subgroup~$B^m$. As above, the formula \begin{multline*} f\le g \mathbin{{:}\!=} \varphi^1(f,g;f_1^*,\dots,f_{14}^*) \mathbin{{:}\!=} \varphi^0(f) \logic\land \varphi^0(g) \logic\land \exists h_1\, \exists h_2\, (h_1 f h_1^{-1} =f_2^* \logic\land h_2 gh_2^{-1}=f_2^*) \\ \logic\land f_9^* f=f \logic\land f_9^* g=g \logic\land \exists h\, (f f_{14}^*=f_{14}^* h \logic\land h f_9^* =f_9^* h f_9^* \logic\land hf=g) \end{multline*} says that \begin{enumerate} \item $f$ and $g$ are conjugate to~$f_2^*$; \item $\mathop{\mathrm{Rng}}\nolimits f, \mathop{\mathrm{Rng}}\nolimits g\subset B_3$; \item $\exists h\, (h\circ f_{14}^*=f_{14}^*\circ h \logic\land \mathop{\mathrm{Rng}}\nolimits h|_{B_3}\subseteq B_3 \logic\land h\circ f=g)$.
\end{enumerate} If $f$ and $g$ are conjugate to~$f_2^*$, $f(a_{\langle 0,0\rangle}^m)=a_{\langle \alpha^m,\beta^m\rangle}^m$, and $g(a_{\langle 0,0\rangle}^m)= \tau^m(a_{\langle \alpha_1^m, \beta_1^m\rangle}^m,\dots, a_{\langle \alpha_{k_m}^m,\beta_{k_m}^m\rangle}^m)$, then $f\le g$ if and only if $\beta^m\le \beta_1^m,\dots,\allowbreak \beta^m \le \beta_{k_m}^m$. The formula $\varphi_2(f,f_1^*,\dots,f_{14}^*)$ says that \begin{enumerate} \item $\mathop{\mathrm{Rng}}\nolimits f|_{B_2}\subseteq B_3$; $\mathop{\mathrm{Rng}}\nolimits f|_{B_5}\subseteq B_4$; $\mathop{\mathrm{Rng}}\nolimits f|_{B_0}\subseteq B_1$; $\mathop{\mathrm{Rng}}\nolimits f|_{B_6}\subseteq B_7$; \item for every~$g$ satisfying the formula $\varphi^0(g)$ and conjugate to~$f_2^*$, if $\mathop{\mathrm{Rng}}\nolimits g\subseteq B_2$, then $g\le f\circ g$; \item $f|_{B_5}$ commutes with $f_7^*$, $f_8^*$, and $f_9^*$; \item $f|_{B_3}$ commutes with $f_3^*$. \end{enumerate} The formula $\varphi_2(f,\dots)$ holds in $\mathop{\mathrm{End}}\nolimits(B')$ if and only if for every $m\in \omega$ and every $\beta\in \mu^m$ $$ f(a_{\langle 0,\beta\rangle}^m)= \tau^m(a_{\langle \alpha_1^m,\beta\rangle}^m,\dots, a_{\langle \alpha_{k_m}^m,\beta\rangle}^m)\ \ \text{and}\ \ f(a_{\langle 0,\beta,n\rangle}^m)= \tau^m(a_{\langle \alpha_1^m,\beta,n\rangle}^m,\dots, a_{\langle \alpha_{k_m}^m,\beta,n\rangle}^m) $$ for some linear combination~$\tau^m$ and ordinal numbers $\alpha_1^m,\dots,\alpha_{k_m}^m \in \mu^m$, which do not depend on~$\beta$. The formula $\varphi_3(f,f_1^*,\dots,f_{14}^*)$ says that \begin{enumerate} \item $\mathop{\mathrm{Rng}}\nolimits f|_{B_2}\subseteq B_3$; \item $\exists f_1\, (f_1|_{B_2}=f|_{B_2} \logic\land \varphi_2(f_1))$. \end{enumerate} The formula $\varphi_3(f,\dots)$ holds if and only if for every $m\in \omega$ $$ f(a_{\langle 0,\beta\rangle}^m)= \tau^m(a_{\langle \gamma_1^m,\beta\rangle}^m,\dots, a_{\langle \gamma_{k_m}^m,\beta\rangle}^m) $$ for every $\beta\in \mu^m$ and some $\tau^m; \gamma_1^m,\dots,\gamma_{k_m}^m$. The formula $\varphi_4(f)$ says that \begin{enumerate} \item $\mathop{\mathrm{Rng}}\nolimits f|_{B_2}\subseteq B_3$; \item $\varphi_3(f)$; \item $\forall g\, (\varphi_3(g)\Rightarrow g\circ f_5^*\circ f|_{B_0}=f_5^*\circ f\circ f_5^*\circ g|_{B_0} \logic\land f_5^*\circ f\circ f_5^*\circ f|_{B_0}=f_6^*\circ f|_{B_0} \logic\land f_2^*\circ f|_{B_0}=f_2^*|_{B_0})$. \end{enumerate} The formula $\varphi_4(f)$ holds if and only if for every $m\in \omega$ $$ f(a_{\langle 0,\beta\rangle}^m)= \tau^m(a_{\langle \gamma_1^m,\beta\rangle}^m,\dots, a_{\langle \gamma_{k_m}^m,\beta\rangle}^m), $$ where each $\tau^m$ is a~beautiful linear combination, i.e., there exists a~sequence $\gamma^1,\dots, \gamma^n,\dots$, where $\gamma^i\in \mu^i$, such that $$ f(a_{\langle 0,\beta\rangle}^m)=a_{\langle \gamma^m,\beta\rangle}^m $$ for all $m\in \omega$ and $\beta\in \mu^m$. Therefore we suppose that a~set of $\mu_B$ independent projections~$f$ satisfying the formula $\mathrm{Fine}(f)$ is fixed. This set will also be denoted by~$\mathbf F$. In what follows we shall not distinguish the second and the third cases.
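(In both cases, then, we have at our disposal a~definable set~$\mathbf F$ of $\mu_B$~independent projections onto direct summands of the group~$B$ isomorphic to $\bigoplus_{i\in \omega} \mathbb Z(p^{n_i})$, and only this set will be used in the sequel.)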
\subsection{Final Rank of the Basic Subgroup Is Uncountable}\label{sec7_4} Let us change the formula $\mathrm{Fine}(f)$ a~little: \begin{align*} & \mathrm{Fine}(f) \mathbin{{:}\!=} [\forall f'\, (\mathrm{Idem}(f') \logic\land f'\varphi_B\ne 0\Rightarrow f'f\ne 0)] \\ & \quad \logic\land [\forall f_1\, \forall f_2\, (\mathrm{Idem}^*(f_1) \logic\land \mathrm{Idem}^*(f_2) \logic\land o(f_1)=o(f_2) \\ & \quad \logic\land \forall c\in Z\, (cf_1\ne 0\Rightarrow cf_1f\ne 0) \logic\land \forall c\in Z\, (cf_2\ne 0\Rightarrow cf_2f\ne 0)\Rightarrow f_1f_2\ne 0 \logic\land f_2f_1\ne 0)] \\ & \quad \logic\land [\forall \rho'\, (\mathrm{Idem}^*(\rho')\Rightarrow \exists f'\, (\mathrm{Idem}^*(f') \logic\land o(f')> o(\rho') \logic\land \forall c\in Z\, (cf'\ne 0\Rightarrow cf'f\ne 0)))] \\ & \quad \logic\land [\forall f'\, (\mathrm{Idem}^*(f') \logic\land \forall c\in Z\, (cf'\ne 0\Rightarrow cf'f\ne 0)\Rightarrow pf'\ne 0)] \\ & \quad \logic\land [\forall \rho'\, \forall f'\, (\mathrm{Idem}^*(f') \logic\land \forall c\in Z\, (cf'\ne 0\Rightarrow cf'f\ne 0) \logic\land o(f')=o(\rho') \\ & \quad \Rightarrow \forall f''\, (\mathrm{Idem}^*(f'') \logic\land \forall c\in Z\, (cf''\ne 0\Rightarrow cf''f\ne 0) \logic\land o(f'')> o(\rho')\Rightarrow o(f'')> o(\rho')^2))]. \end{align*} The first part of the formula, which is enclosed in square brackets, postulates $fA\subset \varphi_B A=B$. The second part, which is enclosed in the second square brackets, states that the image~$fA$ contains at most one cyclic direct summand of each order. The third part states that the orders of direct summands of~$fA$ are unbounded. The fourth part states that the cyclic summand of~$fA$ of the smallest order has order greater than~$p$ (i.e., not smaller than~$p^2$). Finally, the fifth part states that for every direct cyclic summand in~$fA$ of order~$p^k$ the next cyclic summand of greater order has order greater than~$p^{2k}$. We shall again write the formula $\mathrm{Ins}(\psi)$ from case~1 of Sec.~\ref{sec7_3}, which says that \begin{enumerate} \item for every group $fA$, where $f\in \mathbf F$, there exists a~direct cyclic summand of the smallest order such that the action of~$\psi$ on it is multiplication by~$p$; \item for every natural~$i$ and every $f\in \mathbf F$ there exists a~direct cyclic summand $\langle a_i \rangle\subset fA$ of order~$p^{n_i}$ such that the action of~$\psi$ on it is multiplication by~$p^i$. \end{enumerate} Let us fix some endomorphism~$\Psi$ that satisfies the formula $\mathrm{Ins}(\Psi)$.
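For instance, the order conditions in the last two parts of $\mathrm{Fine}(f)$ are satisfied exactly by sequences of orders $p^{n_0},p^{n_1},\dots$ with $n_0\ge 2$ and $n_{i+1}> 2n_i$; a~concrete example is $n_i=2,5,11,23,\dots$ (with $n_{i+1}=2n_i+1$), where indeed $p^5>(p^2)^2$, $p^{11}>(p^5)^2$, and so on. This is precisely the shape of the sequence $(n_i)$ fixed below.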
Further, fix an endomorphism $\Gamma\colon B'\to B'$ that for every $f\in \mathbf F$ satisfies the following conditions: \begin{enumerate} \item $f \Gamma f=f \Gamma =\Gamma f$, i.e., the endomorphism $\Gamma$ maps $fA$ into $fA$; \item $\forall \rho\, (\mathrm{Idem}^*(\rho) \logic\land \rho f=\rho \logic\land \forall \rho'\, (\mathrm{Idem}^*(\rho') \logic\land \rho' f=\rho'\Rightarrow o(\rho')\ge o(\rho)) \Rightarrow \Gamma \rho =0)$, i.e., the endomorphism $\Gamma$ maps a~cyclic summand of the smallest order of~$fA$ into zero; \item $\forall \rho_1\, (\mathrm{Idem}^*(\rho_1) \logic\land \rho_1 f=\rho_1\Rightarrow \exists \rho_2\, (\mathrm{Idem}^*(\rho_2) \logic\land \rho_2 f=\rho_2 \logic\land o(\rho_1 )< o(\rho_2) \logic\land \forall \rho\, (\mathrm{Idem}^*(\rho) \logic\land {\rho f=\rho}\Rightarrow \neg (o(\rho)>o(\rho_1) \logic\land o(\rho)< o(\rho_2))) \logic\land \rho_1 \Gamma \rho_2=\Gamma \rho_2 \logic\land \forall c\in Z\, (c\rho_1\ne 0\Rightarrow c\rho_1 \Gamma \rho_2\ne 0)))$; this means that $\Gamma$ maps every generator~$a_i$ of a~cyclic direct summand of the group~$fA$ (isomorphic to $\mathbb Z(p^{n_i})$) into the generator~$a_{i-1}$ of a~cyclic direct summand of~$fA$ (isomorphic to $\mathbb Z(p^{n_{i-1}})$). \end{enumerate} This endomorphism gives us a~correspondence between generators of cyclic summands in the group $fA$, for every $f\in \mathbf F$. We shall assume that it is fixed. First, suppose for simplicity that the final rank of a~basic subgroup of~$A$ coincides with its rank, and that it is uncountable. Then $|A|=|B|=\mu$. We suppose that the set~$\mathbf F$ of $\mu$~independent projections on direct summands of the group~$B$, isomorphic to $$ \bigoplus_{i\in \omega} \mathbb Z(p^{n_i}), $$ where the sequence $(n_i)$ is such that $n_0\ge 2$, $n_{i+1}> 2n_i$, is fixed. Let us fix $f\in \mathbf F$ and interpret the group~$A$ on~$f$. Let us consider the set $\mathop{\mathrm{End}}\nolimits_f$ of all homomorphisms $h\colon fA\to A$ that satisfy the following condition: \begin{align*} & \exists \rho (\mathrm{Idem}^*(\rho) \logic\land \exists c\in Z\, (\Psi \rho=c\rho) \logic\land \rho f=\rho \\ & \quad \logic\land \forall \rho'\, (\mathrm{Idem}^*(\rho') \logic\land \exists c\in Z\, (\Psi \rho'=c\rho') \logic\land \rho' f=\rho' \logic\land (o(\rho')< o(\rho) \logic\lor o(\rho')> o(\rho))\Rightarrow h\rho'=0) \\ & \quad \logic\land \forall c\in Z\, (\Psi \rho=c\rho\Rightarrow pc h \rho=0)). \end{align*} This condition means that \begin{enumerate} \item there exists $i\in \omega$ such that $h(a_k)=0$ for every $k\ne i$; \item $o(h(a_i))\le p^i$. \end{enumerate} Naturally, two such homomorphisms $h_1$~and~$h_2$ are said to be equivalent if $h_1(a_i)=h_2(a_j)\ne 0$ for some $i,j\in \omega$.
Therefore two homomorphisms $h_1$~and~$h_2$ from $\mathop{\mathrm{End}}\nolimits_f$ are said to be equivalent if there exists a~homomorphism~$h$ that satisfies the following two conditions: \begin{enumerate} \item $\forall \rho\, (\mathrm{Idem}^*(\rho) \logic\land \rho f=\rho \logic\land \exists c\in Z\, (\Psi \rho =c\rho) \logic\land h_1 \rho\ne 0\Rightarrow h\rho =h_1\rho) \logic\land \forall \rho\, (\mathrm{Idem}^*(\rho) \logic\land \rho f=\rho \logic\land \exists c\in Z\, (\Psi \rho =c\rho) \logic\land h_2 \rho\ne 0\Rightarrow h\rho =h_2\rho)$; this means that the homomorphism~$h$ coincides with~$h_1$ on the element~$a_i$ that satisfies $h_1(a_i)\ne 0$, and it coincides with~$h_2$ on the element~$a_j$ that satisfies $h_2(a_j)\ne 0$; \item $\forall \rho\, (\mathrm{Idem}^*(\rho) \logic\land \rho f=\rho \logic\land \exists c\in Z\, (\Psi \rho=c\rho) \logic\land \exists \rho_1\, \exists \rho_2\, (\mathrm{Idem}^*(\rho_1) \logic\land \mathrm{Idem}^*(\rho_2) \logic\land \rho_1 f=\rho_1 \logic\land \rho_2 f=\rho_2 \logic\land \exists c_1\in Z\, (\Psi \rho_1=c_1\rho_1) \logic\land \exists c_2\in Z\, (\Psi \rho_2=c_2\rho_2) \logic\land h_1\rho_1\ne 0 \logic\land h_2 \rho_2\ne 0 \logic\land {o(\rho)> o(\rho_1)} \logic\land o(\rho)\le o(\rho_2))\Rightarrow h\Gamma \rho=h\rho)$; this means that $h(a_j)=h(a_{j-1})=\dots=h(a_{i+1})=h(a_i)$. \end{enumerate} Thus $h_1(a_i)=h(a_i)=h(a_j)=h_2(a_j)$, which is what we need. If we factorize the set $\mathop{\mathrm{End}}\nolimits_f$ by this equivalence, we shall obtain the set $\widetilde{\mathrm{End}}_f$. A~bijection between this set and the group~$A$ is easy to establish. We only need to introduce addition. Namely, \begin{multline*} (h_3=h_1\oplus h_2) \mathbin{{:}\!=} \exists h_1'\, \exists h_2'\, (h_1'\sim h_1 \logic\land h_2'\sim h_2 \\ \logic\land h_3=h_1'+h_2' \logic\land \forall \rho\, (\mathrm{Idem}^*(\rho) \logic\land \rho f=\rho \logic\land \exists c\in Z\, (\Psi \rho =c\rho)\Rightarrow (h_1' \rho =0\Leftrightarrow h_2'\rho=0))). \end{multline*} We have interpreted the group~$A$ for every $f\in \mathbf F$, and so we can prove the main theorem for this case. \begin{proposition}\label{p7.2} Suppose that $p$-groups $A_1$~and~$A_2$ are direct sums $D_1\oplus G_1$ and $D_2\oplus G_2$, where the groups $D_1$~and~$D_2$ are divisible, the groups $G_1$~and~$G_2$ are reduced and unbounded, $|D_1|\le |G_1|$, $|D_2|\le |G_2|$, $B_1$~and~$B_2$ are basic subgroups of the groups $A_1$~and~$A_2$, respectively, and the final ranks of the groups $B_1$~and~$B_2$ coincide with their ranks and are uncountable. Then elementary equivalence of the rings $\mathop{\mathrm{End}}\nolimits(A_1)$ and $\mathop{\mathrm{End}}\nolimits(A_2)$ implies equivalence of the groups $A_1$~and~$A_2$ in the language~$\mathcal L_2$. \end{proposition} \begin{proof} As usual, we shall consider an arbitrary sentence~$\psi$ of the second order group language and show an algorithm which translates this sentence~$\psi$ to the sentence~$\tilde \psi$ of the first order ring language such that $\tilde \psi$ holds in $\mathop{\mathrm{End}}\nolimits(A)$ if and only if $\psi$ holds in~$A$. Consider the formula \begin{multline*} \mathrm{Min} (f) \mathbin{{:}\!=} f\in \mathbf F \logic\land \forall f'\, (f'\in \mathbf F \Rightarrow \forall c\, \forall \rho'\, \forall \rho\, (c\in Z \logic\land \mathrm{Idem}^*(\rho') \logic\land \mathrm{Idem}^*(\rho) \\ \logic\land \rho f=\rho \logic\land \rho' f'=\rho' \logic\land \Psi \rho'=c\rho' \logic\land \Psi \rho =c\rho\Rightarrow o(\rho)\le o(\rho'))).
\end{multline*} This formula gives us a~summand $f(A)$ in~$B$ such that for every $i\in \omega$ the number $n_i(f)$ is minimal among all $n_i(f')$ for $f'\in \mathbf F$. Consider the formula \begin{multline*} \mathrm{Basic}(\Lambda) \mathbin{{:}\!=} \exists f\, (f\in \mathbf F \logic\land \mathrm{Min}(f) \logic\land \forall f'\, \forall c\, \forall \rho'\, (f'\in \mathbf F \logic\land c\in Z \logic\land \mathrm{Idem}^*(\rho') \logic\land \rho' f'=\rho' \\ \logic\land \Psi \rho'=c\rho'\Rightarrow \exists \rho\, (\mathrm{Idem}^*(\rho) \logic\land \rho f=\rho \logic\land \Psi \rho=c\rho \logic\land \rho \Lambda \rho' =\Lambda \rho' \logic\land \forall c'\in Z\, (c' \rho \ne 0\Rightarrow c'\rho \Lambda \rho'\ne 0)))). \end{multline*} This formula defines an endomorphism~$\Lambda$ that maps $a_t^i$ to~$a_0^i$ for every $i\in \omega$ and every $t\in \mu$. The corresponding~$f$ will be denoted by~$f_{\mathrm{min}}$. Translate the sentence~$\psi$ to the sentence $$ \tilde \psi \mathbin{{:}\!=} \exists \bar g\, \exists \Gamma\, \exists \Psi\, \exists \Lambda\, \exists f_{\mathrm{min}}\in \mathbf F\, \psi'(\bar g,\Gamma,\Psi,\Lambda, f_{\mathrm{min}}), $$ where the formula $\psi'(\dots)$ is obtained from the sentence~$\psi$ with the help of the following translations of subformulas from~$\psi$: \begin{enumerate} \item the subformula $\forall x$ is translated to the subformula $\forall x\in \widetilde{\mathrm{End}}_{f_{\mathrm{min}}}$; \item the subformula $\exists x$ is translated to the subformula $\exists x\in \widetilde{\mathrm{End}}_{f_{\mathrm{min}}}$; \item the subformula $\forall P_m(v_1,\dots,v_m)({\dots})$ is translated to the subformula $$ \forall f_1^P\dots \forall f_m^P\, \biggl(\forall g\in \mathbf F\, \biggl(\,\bigwedge_{i=1}^m (f_i^Pg\in \mathop{\mathrm{End}}\nolimits_g)\biggr)\Rightarrow \ldots\biggr); $$ \item the subformula $\exists P_m(v_1,\dots,v_m)({\dots})$ is translated to the subformula $$ \exists f_1^P\dots \exists f_m^P\, \biggl(\forall g\in \mathbf F\, \biggl(\,\bigwedge_{i=1}^m (f_i^Pg\in \mathop{\mathrm{End}}\nolimits_g)\biggr) \logic\land \ldots\biggr); $$ \item the subformula $x_1=x_2$ is translated to the subformula $x_1\sim x_2$; \item the subformula $x_1=x_2+x_3$ is translated to the subformula $x_1\sim x_2\oplus x_3$; \item the subformula $P_m(x_1,\dots,x_m)$ is translated to the subformula $$ \exists g\in \mathbf F\, \biggl(\,\bigwedge_{i=1}^m (f_i^Pg=x_i \Lambda g)\biggr). $$ \end{enumerate} The rest of the proof is similar to the previous cases. \end{proof} Now we shall consider the case where the final rank of~$B$ is greater than~$\omega$ and does not coincide with the rank of~$B$. In this case, $A=G\oplus G'$, where the group~$G$ satisfies the conditions of the previous proposition, the group~$G'$ is bounded, and its power is greater than~$|G|$. Let $|G|=\mu$ and $|A|=|G'|=\mu'$. Let us define, for the group~$G$, the set~$\mathbf F$ from Proposition~\ref{p7.2}, and the set~$\mathbf F'$ of $\mu'$~independent projections on countably generated subgroups of the group~$G'$, from Sec.~\ref{sec5}. The formula $$ \mathrm{Add}(\varphi) \mathbin{{:}\!=} \forall f'\in \mathbf F'\, \exists f\in \mathbf F\, \forall \rho'\, (\mathrm{Idem}^*(\rho') \logic\land \rho' f'=\rho' \Rightarrow \exists \rho\, (\mathrm{Idem}^*(\rho) \logic\land \rho f=\rho \logic\land \varphi \rho'=\rho \varphi \rho'\ne 0)) $$ defines a~function from the set~$\mathbf F'$ onto the set~$\mathbf F$.
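The translation in the proof of Proposition~\ref{p7.2} is purely syntactic; as a~simple illustration (a~sketch using only rules (1), (2), and~(6) above), the group sentence $\forall x\, \exists y\, (x=y+y)$ is translated to $$ \exists \bar g\, \exists \Gamma\, \exists \Psi\, \exists \Lambda\, \exists f_{\mathrm{min}}\in \mathbf F\, \forall x\in \widetilde{\mathrm{End}}_{f_{\mathrm{min}}}\, \exists y\in \widetilde{\mathrm{End}}_{f_{\mathrm{min}}}\, (x\sim y\oplus y). $$ The next proposition extends this scheme by the functions satisfying the formula $\mathrm{Add}$.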
\begin{proposition}\label{p7.3} Suppose that $p$-groups $A_1$~and~$A_2$ are direct sums $D_1\oplus G_1$ and $D_2\oplus G_2$, where the groups $D_1$~and~$D_2$ are divisible, the groups $G_1$~and~$G_2$ are reduced and unbounded, $|D_1|< |G_1|$, $|D_2|< |G_2|$, $B_1$~and~$B_2$ are basic subgroups of the groups $A_1$~and~$A_2$, respectively, and the final ranks of the groups $B_1$~and~$B_2$ do not coincide with their ranks and are uncountable. Then elementary equivalence of the rings $\mathop{\mathrm{End}}\nolimits(A_1)$ and $\mathop{\mathrm{End}}\nolimits(A_2)$ implies equivalence of the groups $A_1$~and~$A_2$ in the language~$\mathcal L_2$. \end{proposition} \begin{proof} We only write the translation algorithm. Let us translate the sentence~$\psi$ to the sentence $$ \tilde \psi \mathbin{{:}\!=} \exists \bar g\, \exists \Gamma\, \exists \Psi\, \exists \Lambda\, \exists f_{\mathrm{min}} \in \mathbf F\, \exists \bar g_1 \dots \exists \bar g_k\, \exists \bar g'\, \exists \tilde g\in \mathbf F'\, \psi'(\bar g,\Gamma,\Psi,\Lambda, f_{\mathrm{min}}), $$ where the formula $\psi'({\dots})$ is obtained from the sentence~$\psi$ with the help of the following translations of subformulas from~$\psi$: \begin{enumerate} \item the subformula $\forall x$ is translated to the subformula $\forall x\in \widetilde{\mathrm{End}}_{f_{\mathrm{min}}}\, \forall x'\in \widetilde{\mathrm{End}}_{\tilde g}$; \item the subformula $\exists x$ is translated to the subformula $\exists x\in \widetilde{\mathrm{End}}_{f_{\mathrm{min}}}\, \exists x'\in \widetilde{\mathrm{End}}_{\tilde g}$; \item the subformula $\forall P_m(v_1,\dots,v_m)({\dots})$ is translated to the subformula \begin{multline*} \forall f_1^P\dots \forall f_m^P\, \forall {f_1^P}'\dots \forall {f_m^P}'\, \forall \varphi_1^P\dots \forall \varphi_m^P\, \biggl(\,\bigwedge_{i=1}^m \mathrm{Add}(\varphi_i^P) \\ \logic\land \forall g\in \mathbf F\, \biggl(\,\bigwedge_{i=1}^m f_i^Pg\in \mathop{\mathrm{End}}\nolimits_g\biggr) \logic\land \forall g\in \mathbf F'\, \biggl(\,\bigwedge_{i=1}^m {f_i^P}'g\in \mathop{\mathrm{End}}\nolimits_g\biggr)\Rightarrow \ldots\biggr); \end{multline*} \item the subformula $\exists P_m(v_1,\dots,v_m)({\dots})$ is translated to the subformula \begin{multline*} \exists f_1^P\dots \exists f_m^P\, \exists {f_1^P}'\dots \exists {f_m^P}'\, \exists \varphi_1^P\dots \exists \varphi_m^P\, \biggl(\,\bigwedge_{i=1}^m \mathrm{Add}(\varphi_i^P) \\ \logic\land \forall g\in \mathbf F\, \biggl(\,\bigwedge_{i=1}^m f_i^Pg\in \mathop{\mathrm{End}}\nolimits_g\biggr)\logic\land \forall g\in \mathbf F'\, \biggl(\,\bigwedge_{i=1}^m {f_i^P}'g\in \mathop{\mathrm{End}}\nolimits_g\biggr) \logic\land \ldots\biggr); \end{multline*} \item the subformula $x_1=x_2$ is translated to the subformula $x_1\sim x_2 \logic\land x_1'\sim x_2'$; \item the subformula $x_1=x_2+x_3$ is translated to the subformula $x_1\sim x_2\oplus x_3 \logic\land x_1'\sim x_2'\oplus x_3'$; \item the subformula $P_m(x_1,\dots,x_m)$ is translated to the subformula $$ \exists g\in \mathbf F\, \exists g'\in \mathbf F'\, \biggl(\,\bigwedge_{i=1}^m (\varphi_i^P(g')=g \logic\land f_i^Pg=x_i \Lambda g \logic\land {f_i^P}'g'=x_i'\tilde g h)\biggr).\qed $$ \end{enumerate} \renewcommand{\qed}{} \end{proof} \subsection{The Countable Restriction of the Second Order Theory of the Group in the Case Where the Rank of the Basic Subgroup Is Countable} \label{sec7_5} Let the group~$A$ have a~countable basic subgroup~$B$. As above, we suppose that an endomorphism~$\varphi_B$ with the image~$B$ is fixed.
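Recall that $\mathrm{Th}_2^{\omega}(A)$ denotes the restriction of the second order theory of~$A$ in which the predicate variables range over at most countable relations; it is this restricted theory that will be interpreted in the endomorphism ring in the present subsection.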
Further, we suppose that an endomorphism~$f_B$ considered in case~1 of Sec.~\ref{sec7_3} and an endomorphism~$\Psi$ satisfying the formula $\mathrm{Ins}(\Psi)$ from the same section are fixed. As we remember, $$ B=\varphi_B (A)\supset B'=f_B (A)\cong \bigoplus_{i\in \omega}\mathbb Z (p^{n_i}), $$ where $n_0\ge 2$, $n_{i+1}> 2n_i$. Generators of cyclic summands of $f_B(A)$, where $\Psi(a_i)=p^i a_i$, will be denoted by~$a_i$ ($i\in \omega$). Further, as in Sec.~\ref{sec7_4}, we fix a~homomorphism $\Gamma\colon B'\to B'$ satisfying the conditions \begin{enumerate} \item $\forall f\, (\mathrm{Idem}^*(f) \logic\land \forall c\in Z\, (cf\ne 0\Rightarrow cff_B\ne 0)\Rightarrow f\Gamma f=f\Gamma=\Gamma f)$, i.e., $\Gamma$ maps~$B'$ into~$B'$; \item $\forall \rho\, (\mathrm{Idem}^*(\rho) \logic\land \forall c\in Z\, (c\rho\ne 0\Rightarrow c\rho f_B\ne 0) \logic\land \forall \rho'\, (\mathrm{Idem}^*(\rho') \logic\land \forall c\in Z\, (c\rho'\ne 0\Rightarrow c\rho' f_B\ne 0)\Rightarrow o(\rho')\ge o(\rho))\Rightarrow \Gamma \rho=0)$, i.e., the endomorphism~$\Gamma$ maps a~cyclic summand of the smallest order (in the group) into~$0$; \item $\forall \rho_1\, (\mathrm{Idem}^*(\rho_1) \logic\land \forall c\in Z\, (c\rho_1\ne 0\Rightarrow c\rho_1 f_B\ne 0)\Rightarrow \exists \rho_2\, (\mathrm{Idem}^*(\rho_2) \logic\land \forall c\in Z\, ({c\rho_2\ne 0} \Rightarrow {c\rho_2 f_B \ne 0}) \logic\land o(\rho_1)< o(\rho_2) \logic\land \forall \rho\, (\mathrm{Idem}^*(\rho) \logic\land \forall c\in Z\, (c\rho\ne 0\Rightarrow c\rho f_B \ne 0)\Rightarrow \neg ({o(\rho)> o(\rho_1)} \logic\land o(\rho)< o(\rho_2))) \logic\land \rho_1\Gamma=\Gamma \rho_2 =\rho_1\Gamma \rho_2 \logic\land \forall c\in Z\, (c\rho_1\ne 0\Rightarrow c\rho_1 \Gamma \rho_2 \ne 0)))$, i.e., $\Gamma$ maps every generator~$a_i$ of a~cyclic direct summand of the group~$B'$ (isomorphic to $\mathbb Z(p^{n_i})$) into the generator~$a_{i-1}$ of a~cyclic direct summand (isomorphic to $\mathbb Z(p^{n_{i-1}})$). \end{enumerate} It is clear that for interpretation of the group~$A$ for our function~$f_B$ we can use the sets $\mathop{\mathrm{End}}\nolimits_{f_B}$ and $\widetilde{\mathrm{End}}_{f_B}$ from the previous subsection. Therefore, every element $a\in A$ is mapped to a~class of homomorphisms $g\colon B'\to A$ satisfying the conditions $g(a_i)=a$ and $g(a_j)=0$ if $j\ne i$, where $i$ is such that $p^i\ge o(a)$. Now suppose that we want to interpret some set $X=\{ x_i\}_{i\in I}\subset A$, where $|X|\le \omega$, in the ring $\mathop{\mathrm{End}}\nolimits(A)$. It is clear that there exists a~sequence $(k_i)$, $i\in I$, and a~homomorphism $h\colon B'\to A$ such that $$ h(a_{k_i})=x_i,\quad i\in I, $$ and $o(x_i)\le p^{k_i}$. The set $\{ x_i\}$ is uniquely defined by the homomorphism~$h$ and the sequence~$(k_i)$. Therefore, every set $\{ x_i\}$ can be mapped to a~pair of endomorphisms consisting of a~projection onto the subgroup $\langle \{ a_{k_i}\mid i\in I\}\rangle$ and a~homomorphism~$h$. Similarly, every $n$-place relation in~$A$ is mapped to a~projection on $\langle \{ a_{k_i}\mid i\in I\}\rangle$ and an $n$-tuple of homomorphisms $h_1,\dots,h_n$.
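For instance, to encode the two-element set $X=\{x_1,x_2\}$ with $o(x_1)=p^2$ and $o(x_2)=p^4$, we may take $k_1=2$, $k_2=4$, the projection onto $\langle a_{k_1}\rangle\oplus \langle a_{k_2}\rangle$, and a~homomorphism~$h$ with $h(a_{k_1})=x_1$, $h(a_{k_2})=x_2$, and $h(a_j)=0$ for $j\ne k_1,k_2$; the pair consisting of this projection and~$h$ recovers~$X$ uniquely.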
Introduce the formulas \begin{align*} & \mathrm{Proj}(\rho) \mathbin{{:}\!=} \forall \rho'\, (\mathrm{Idem}^*(\rho') \logic\land \forall c\in Z\, (c\rho'\ne 0\Rightarrow c\rho'\rho\ne 0) \\ & \quad {}\Rightarrow \forall c\in Z\, (c\rho'\ne 0\Rightarrow c\rho'f_B\ne 0) \logic\land \exists \rho''\, (\mathrm{Idem}^*(\rho'') \logic\land \forall c\in Z\, (c\rho''\ne 0\Rightarrow c\rho''\rho\ne0) \\ & \quad \logic\land o(\rho'')=o(\rho') \logic\land \exists c\in Z\, (\Psi \rho''=c\rho'')) \logic\land \exists c\in Z\, (\Psi \rho'=c\rho')) \end{align*} (a~projection on a~direct summand in~$B'$, generated by $\{ a_{k_i}\mid i\in I\}$) and $$ \mathrm{Hom}(h) \mathbin{{:}\!=} \forall \rho'\, (\mathrm{Idem}^*(\rho') \logic\land \forall c\in Z\, (c\rho'\ne 0\Rightarrow c\rho' f_B\ne 0)\Rightarrow \forall c\in Z\, (\Psi \rho'=c\rho'\Rightarrow pch\rho'=0)). $$ Now we are ready to prove the following proposition. \begin{proposition}\label{p7.4} Let $p$-groups $A_1$~and~$A_2$ be unbounded and have countable basic subgroups. Then $\mathop{\mathrm{End}}\nolimits(A_1)\equiv \mathop{\mathrm{End}}\nolimits(A_2)$ implies $\mathrm{Th}_2^\omega(A_1)=\mathrm{Th}_2^\omega(A_2)$. \end{proposition} \begin{proof} Suppose that we have a~sentence $\psi\in \mathrm{Th}_2^\omega(A_1)$. Then for every predicate variable $P_n(v_1,\dots,v_n)$ included in~$\psi$, the set $\{ \langle a_1,\dots,a_n\rangle\in A^n\mid P(a_1,\dots,a_n)\}$ is at most countable. We shall show a~translation of the sentence~$\psi$ into the first order sentence $\tilde \psi \in \mathrm{Th}_1(\mathop{\mathrm{End}}\nolimits(A_1))$. Let us translate the sentence~$\psi$ to the sentence $$ \tilde \psi=\exists \varphi_B\, \exists f_B\, \exists \Psi\, \exists \Gamma\, \psi'(\varphi_B,f_B,\Psi,\Gamma), $$ where the formula $\psi'({\dots})$ is obtained from the sentence~$\psi$ with the help of the following translations of subformulas from~$\psi$: \begin{enumerate} \item the subformula $\forall x$ is translated to the subformula $\forall x\in \widetilde{\mathrm{End}}_{f_B}$; \item the subformula $\exists x$ is translated to the subformula $\exists x\in \widetilde{\mathrm{End}}_{f_B}$; \item the subformula $\forall P_m(v_1,\dots,v_m)({\dots})$ is translated to the subformula $$ \forall \rho^P\, \forall h_1^P \dots \forall h_m^P\, (\mathrm{Proj}(\rho^P) \logic\land \mathrm{Hom}(h_1^P)\land \dots \land \mathrm{Hom}(h_m^P)\Rightarrow \ldots); $$ \item the subformula $\exists P_m(v_1,\dots,v_m)({\dots})$ is translated to the subformula $$ \exists \rho^P\, \exists h_1^P \dots \exists h_m^P\, (\mathrm{Proj}(\rho^P) \logic\land \mathrm{Hom}(h_1^P) \land\dots \land \mathrm{Hom}(h_m^P) \logic\land \ldots); $$ \item the subformula $x_1=x_2$ is translated to the subformula $x_1\sim x_2$; \item the subformula $x_1=x_2+x_3$ is translated to the subformula $x_1\sim x_2\oplus x_3$; \item the subformula $ P_m(x_1,\dots,x_m)$ is translated to the subformula $$ \exists \rho\, (\mathrm{Idem}^*(\rho) \logic\land \rho \rho^P=\rho \logic\land \exists c\in Z\, (\Psi\rho =c\rho) \logic\land h_1^P \rho=x_1 \land \dots \land h_m^P \rho =x_m), $$ i.e., there exists a~cyclic summand $\langle a_{k_i}\rangle$ in~$B'$ such that $i\in I$ and $h_1(a_{k_i})=x_1,\dots,\allowbreak h_m(a_{k_i})=x_m$.
\end{enumerate} \end{proof} \subsection[The Final Rank of the Basic Subgroup Is Equal to $\omega$ and Does Not Coincide\\ with Its Rank]{The Final Rank of the Basic Subgroup Is Equal to $\boldsymbol{\omega}$ and Does Not Coincide with Its Rank}\label{sec7_6} As above, we suppose that $A=D\oplus G$, where the group~$D$ is divisible, the group~$G$ is reduced and unbounded, and $|D|< |G|$. Let a~basic subgroup~$\bar B$ of~$A$ (and of~$G$) have the form $B\oplus B'$, where $|B'|=|\bar B|$, $|B|=\omega$, and $B'$ is bounded. Note that from $|D|< |G|$ it follows that $|D|\le \omega$, because we assume the continuum hypothesis, which implies $|B'|=|\bar B|=\omega_1=2^\omega=c$. The condition $|D|\le \omega$ means that if the groups $A_1=D_1\oplus G_1$ and $A_2=D_2\oplus G_2$ have the described type and $\mathop{\mathrm{End}}\nolimits(A_1)\equiv \mathop{\mathrm{End}}\nolimits(A_2)$, then $D_1\cong D_2$, and in order to prove $A_1\equiv_{\mathcal L_2} A_2$ we only need to prove $G_1\equiv_{\mathcal L_2} G_2$ (with the condition that an endomorphism between $D$ and~$G$ is fixed, see Proposition~\ref{p7.3}). Therefore for simplicity of arguments we suppose that the group~$A$ is reduced, i.e., $A=G$ and $D=0$. We fix an endomorphism~$\varphi_B$ with the image~$B$. Further, let us fix an endomorphism~$f_B$ with the image~$B''$ which is a~direct summand in~$B$ isomorphic to $$ \bigoplus_{i\in \omega} \mathbb Z(p^{n_i}), $$ where $n_0\ge 2$, $n_{i+1}> 2n_i$. Naturally, we also suppose that endomorphisms $\Psi$~and~$\Gamma$, described in Secs.\ \ref{sec7_3}~and~\ref{sec7_5}, are fixed. Let $\rho_1$~and~$\rho_2$ be indecomposable projections onto cyclic direct summands of the group~$B''$ satisfying the formulas $\exists c\in Z\, (\Psi \rho_1=c\rho_1) \logic\land \exists c\in Z\, (\Psi \rho_2=c\rho_2)$ and $o(\rho_1)> o(\rho_2)$. Then we shall write $\gamma\in \langle\Gamma \rangle_{\rho_1,\rho_2}$ if an endomorphism $\gamma$~satisfies the formula \begin{multline*} \exists \gamma'\, (\gamma' \rho_2=\rho_2 \logic\land \forall \rho\, (\mathrm{Idem}^*(\rho) \logic\land \forall c\in Z\, (c\rho\ne 0\Rightarrow c\rho f_B\ne 0) \\ {}\logic\land \exists c\in Z\, (\Psi \rho =c\rho) \logic\land o(\rho)\le o(\rho_1 ) \logic\land o(\rho)>o(\rho_2)\Rightarrow \gamma' \rho=\gamma'\Gamma \rho) \logic\land \gamma \rho_1 =\gamma' \rho_1). \end{multline*} This formula means that there exists an endomorphism $\gamma'\colon B''\to B''$ such that, if $a_i$ is a~generator in~$\rho_2 A$ and $a_{i+k}$ is a~generator in~$\rho_1 A$, then we have \begin{enumerate} \item $\gamma'(a_i)=a_i$; \item $\gamma'(a_{i+k})=\gamma'(a_{i+k-1})=\dots=\gamma'(a_{i+1})=\gamma'(a_i)=a_i$. \end{enumerate} Further, $\gamma(a_{i+k})=\gamma'(a_{i+k})=a_i$; thus we have $\gamma (a_{i+k})=a_i$.
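For example, if $\rho_2$ is the projection onto $\langle a_2\rangle$ and $\rho_1$ is the projection onto $\langle a_5\rangle$ (so that $i=2$ and $k=3$), then every $\gamma\in \langle \Gamma\rangle_{\rho_1,\rho_2}$ satisfies $\gamma(a_5)=a_2$: the formula simply forces $\gamma$ to agree on $\rho_1 A$ with the threefold application of~$\Gamma$.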
Now consider the formula \begin{align*} & \mathrm{Onto}(\Lambda)\mathbin{{:}\!=} \biggl[\forall \bar \rho\, \forall \bar c\, \biggl(\mathrm{Idem}^*(\bar \rho) \logic\land \forall c\in Z\, (c\bar \rho \ne 0\Rightarrow c\bar \rho \varphi_B \ne 0) \\ & \quad \logic\land \bar c \in Z \logic\land \bar c \bar \rho \ne 0 \Rightarrow \exists \rho_1 \dots \exists \rho_{p-1}\, \biggl(\biggl(\,\bigwedge_{i,j=1;\ i\ne j}^{p-1} \rho_i\rho_j=\rho_j\rho_i=0\biggr) \logic\land \mathrm{Idem}^*(\rho_1)\land \dots \land \mathrm{Idem}^*(\rho_{p-1}) \\ & \quad \logic\land \biggl(\,\bigwedge_{i=1}^{p-1} \forall c\in Z\, (c\rho_i \ne 0\Rightarrow c\rho_i f_B\ne 0)\biggr) \logic\land \biggl(\,\bigwedge_{i=1}^{p-1} \exists c\in Z\, (\Psi \rho_i=c\rho_i) \biggr) \\ & \quad \logic\land \biggl(\,\bigwedge_{i=1}^{p-1} \bar c\bar \rho \Lambda \rho_i\ne 0\biggr) \logic\land \biggl(\,\bigwedge_{i=1}^{p-1} \forall c\in Z\, (c\bar c\bar \rho\ne 0\Rightarrow c\bar c \bar \rho\Lambda \rho_i\ne 0)\biggr) \\ & \quad \logic\land \biggl(\,\bigwedge_{i,j=1;\ i\ne j}^{p-1} (o(\rho_i)> o(\rho_j)\Rightarrow \forall \gamma\in \langle \Gamma\rangle_{\rho_i,\rho_j}\, (\Lambda \gamma \rho_i\ne \Lambda \rho_i)) \biggr)\!\biggr)\!\biggr)\!\biggr] \\ & \quad \logic\land [\mathop{\mathrm{Hom}}\nolimits(\Lambda)] \logic\land [\forall \rho_1\, \forall \rho_2\, ( \mathrm{Idem}^*(\rho_1) \logic\land \mathrm{Idem}^*(\rho_2) \\ & \quad \logic\land \forall c\in Z\, (c\rho_1 \ne 0\Rightarrow c\rho_1f_B\ne 0) \logic\land \forall c\in Z\, (c\rho_2\ne 0\Rightarrow c\rho_2 f_B\ne 0) \\ & \quad \logic\land \exists c\in Z\, (\Psi \rho_1=c\rho_1) \logic\land \exists c\in Z\, (\Psi \rho_2=c\rho_2) \logic\land o(\rho_1)\ge o(\rho_2) \\ & \quad \Rightarrow \exists \rho_3 (\mathrm{Idem}^*(\rho_3)\logic\land \forall c\in Z\, (c\rho_3\ne 0 \Rightarrow c\rho_3 f_B\ne 0) \logic\land \exists c\in Z\, (\Psi \rho_3=c\rho_3) \\ & \quad \logic\land ((o(\rho_3)> o(\rho_1) \logic\land o(\rho_3)> o(\rho_2) \logic\land \exists \gamma_1\in \langle \Gamma\rangle_{\rho_3,\rho_1}\, \exists \gamma_2 \in \langle \Gamma\rangle_{\rho_3,\rho_2}\, (\Lambda \rho_3=\Lambda \gamma_1 \rho_3+\Lambda \gamma_2 \rho_3)) \\ & \quad \logic\lor (o(\rho_3)< o(\rho_1) \logic\land o(\rho_3)< o(\rho_2) \logic\land \exists \gamma_2\in \langle \Gamma\rangle_{\rho_1,\rho_2}\, \exists \gamma_3 \in \langle \Gamma\rangle_{\rho_1,\rho_3}\, (\Lambda \gamma_3\rho_1=\Lambda \rho_1+\Lambda \gamma_2 \rho_1)) \\ & \quad \logic\lor (o(\rho_3)< o(\rho_1) \logic\land o(\rho_3)> o(\rho_2) \logic\land \exists \gamma_2\in \langle \Gamma\rangle_{\rho_1,\rho_2}\, \exists \gamma_3 \in \langle \Gamma\rangle_{\rho_1,\rho_3}\, (\Lambda \gamma_3\rho_1=\Lambda \rho_1 +\Lambda \gamma_2 \rho_1)))))] \\ & \quad \logic\land [\forall \rho_1\, \forall \rho_2\, (\mathrm{Idem}^*(\rho_1) \logic\land \mathrm{Idem}^*(\rho_2) \logic\land \forall c\in Z\, (c\rho_1\ne 0\Rightarrow c\rho_1f_B\ne 0) \\ & \quad \logic\land \forall c\in Z\, (c\rho_2\ne 0\Rightarrow c\rho_2 f_B\ne 0) \logic\land \exists c\in Z\, (\Psi \rho_1=c\rho_1) \logic\land \exists c\in Z\, (\Psi \rho_2=c\rho_2) \\ & \quad \logic\land o(\rho_1)> o(\rho_2)\Rightarrow \exists \rho\, (\mathrm{Idem}^*(\rho) \logic\land \forall c\in Z\, (c\rho\ne 0\Rightarrow c\rho \varphi_B\ne 0) \logic\land \rho\Lambda \rho_2\ne 0) \\ & \quad \logic\land \forall \gamma\in \langle \Gamma \rangle_{\rho_1,\rho_2}\, (\Lambda \rho_1\ne \Lambda \gamma \rho_1))].
\end{align*} The first condition in brackets means that for every generator~$b$ of the cyclic direct summand $\langle b\rangle$ of the group~$B$ and for every integer $p$-adic number~$c$ there exist at least $p-1$ numbers $i_1,\dots,i_{p-1}\in \omega$ such that $\Lambda (a_{i_k})=\xi_k c\cdot b$, where $\xi_k$ are distinct and not divisible by~$p$, $\xi_k c\cdot b\ne \xi_l c\cdot b$ for all $k\ne l$. The following conditions mean that for any $a_i$~and~$a_j$ there exists~$a_k$ such that $\Lambda(a_k)=\Lambda(a_i)+\Lambda(a_j)$. Therefore the endomorphism~$\Lambda$ is an epimorphism $B''\to B$. The last condition in brackets means that $\Lambda$ induces a~bijection between the set $\{ a_i\mid i\in \omega\}$ and the group~$B$, with the condition $o(\Lambda(a_i))\le p^i$. Let us fix an endomorphism~$\Lambda$. Now recall that we have a~bounded group~$B'$ of some uncountable power~$\mu$ that has a~definable set $\mathbf F=\mathbf F(\bar g)$ consisting of $\mu$~independent indecomposable projections onto direct summands of the group~$B'$, and a~set $\mathbf F'=\mathbf F'(\bar g')$ consisting of $\mu$~independent projections onto countably generated direct summands of the group~$B'$, and for every $f\in \mathbf F'$ the set consisting of $f_t\in \mathbf F$ such that $f_tA$ is a~direct summand in~$fA$ is countable. Denote the subset $\{ f_t\in \mathbf F\mid f_tA\subset fA\}$ of~$\mathbf F$ by~$\mathbf F_f$. It is clear that the set $\mathbf F_f$ is definable. Let us fix endomorphisms $\Pi_1$ and~$\Pi_2$ introducing the order on the set $f(A)$, where $f\in \mathbf F'$: \begin{align*} & \mathrm{Order}(\Pi_1,\Pi_2) \mathbin{{:}\!=} \forall f\in \mathbf F'\, ( \exists ! f_0\in \mathbf F_f\, ( \Pi_2 f_0=0 \\ & \quad \logic\land \forall f_1\, ( f_1\in \mathbf F_f \logic\land f_1\ne f_0 \Rightarrow \exists f_2\in \mathbf F_f\, (f_1\ne f_2 \\ & \quad \logic\land f_2 \Pi_2=\Pi_2f_1=f_2\Pi_2 f_1 \logic\land \forall c\in Z\, (cf_1\ne 0\Rightarrow c\Pi_2 f_1\ne 0) \\ & \quad \logic\land f_1\Pi_1 =\Pi_1 f_2=f_1\Pi_1f_2 \logic\land \forall c\in Z\, (cf_2\ne 0\Rightarrow c\Pi_1 f_2\ne 0)) \\ & \quad \logic\land \forall f_1\, \forall f_2\in \mathbf F_f\, \biggl( f_1\ne f_2\Rightarrow \bigwedge_{i=1}^2 \forall f_3,f_4\in \mathbf F_f \\ & \quad (f_3\Pi_i f_1=f_3\Pi_i=\Pi_i f_1\ne 0 \logic\land f_4\Pi_i f_2=f_4\Pi_i= \Pi_i f_2\ne 0\Rightarrow f_3\ne f_4)\biggr) \\ & \quad \logic\land \forall f_1\in \mathbf F_f\, \exists f_2\in \mathbf F_f\, (\Pi_2 f_2=f_1\Pi_2=f_1\Pi_2 f_2 \ne 0) \logic\land \forall f_1 \in \mathbf F_f\, (\Pi_2\Pi_1 f_1=f_1 \\ & \quad \logic\land \forall f'\, (\mathrm{Idem}^*(f') \logic\land f'f=f' \logic\land \mathop{\mathrm{Fin}}\nolimits(f') \logic\land f'\ne 0 \\ & \quad \Rightarrow \exists f_1,f_2\in \mathbf F_f\, (f_1 f=f_1 \logic\land f_2 f=f_2 \logic\land f_2\Pi_2=f_2\Pi_2 f_1=\Pi_2 f_1\ne 0))) ))). \end{align*} Up to the order~$\Pi_1$ we can suppose that for every $f_t\in \mathbf F'$ ($t\in \mu$) we have a~basis in $f_t(A)$ consisting of $f_t^0,f_t^1,\dots$, where $\Pi_1(f_t^iA)=f_t^{i+1}A$. Now let us write the conditions for the homomorphism~$\Delta$. 1. For every function $f=f_t\in \mathbf F'$ we shall introduce (by a~formula) a~function $$ \rho_{\mathrm{on},f}\colon B''\to f_tA $$ such that $$ \rho_{\mathrm{on},f}(a_i)=f_t^i $$ for all natural~$i$.
The condition for the homomorphism~$\Delta$ now has the form $$ \forall f\in \mathbf F'\, \exists \beta\, (\mathop{\mathrm{Hom}}\nolimits(\beta) \logic\land (1) \logic\land (2) \logic\land (3) \logic\land (4)), $$ where (1) $\forall \rho\, (\mathrm{Idem}^* (\rho) \logic\land \forall c\in Z\, (c\rho\ne 0\Rightarrow c\rho f_B\ne 0) \logic\land \exists c\in Z\, (\Psi \rho = c \rho)\Rightarrow \forall c\in Z\, (\Psi \rho=c\rho \Rightarrow c\beta \rho\ne 0 \logic\land pc\beta \rho \ne 0))$, and this means that $o(\beta(a_i))=p^i$; (2) $\forall \rho\, \forall \rho'\, (\mathrm{Idem}^*(\rho) \logic\land \mathrm{Idem}^*(\rho') \logic\land \forall c\in Z\, (c\rho\ne 0\Rightarrow c\rho f_B\ne 0) \logic\land \forall c\in Z\, (c\rho'\ne 0\Rightarrow c\rho' \varphi_B\ne 0) \logic\land \exists c\in Z\, (\Psi \rho =c\rho)\Rightarrow \neg (\rho' \beta \rho=\beta \rho))$, and this means that $\beta (a_i)\notin B$; (3) $\forall \rho\, (\mathrm{Idem}^*(\rho) \logic\land \forall c\in Z\, (c\rho \ne 0\Rightarrow c\rho f_B\ne 0) \logic\land \exists c\in Z\, (\Psi \rho =c\rho)\Rightarrow p\beta \rho = \beta \Gamma \rho+\Lambda \Delta \rho_{\mathrm{on},f} \rho)$, and this means that $p\beta(a_i)=\beta (a_{i-1})+\Lambda \Delta (f_t^i)$; (4) $\forall c\in Z\, (c\Lambda \Delta \rho_{\mathrm{on},f} \rho \ne 0\Rightarrow c\beta \rho \ne 0)$, and this means that $o(\Lambda \Delta (f_t^i))\le p^i$. This condition means that for every $f_t$, where $t\in \mu$, $\Delta$ is a~mapping from $\{ f_t^i\mid i\in \omega\}$ into $\{ a_i\mid i\in \omega\}$ such that there is a~sequence $c_{1,t},\dots,c_{m,t},\dots\in A$ such that $c_{i,t}\notin B$, $o(c_{i,t})=p^i$, $pc_{i,t}=c_{i-1,t}+b_{i,t}$, where $b_{i,t}\in B$, $b_{i,t}=\Lambda(\Delta(f_t^i))$, and $o(b_{i,t})\le p^i$. As we know, such a~sequence can be considered as a~sequence of the elements of a~quasibasis of~$A$, and it can be assumed to be uniquely defined with the help of the sequence $$ (b_{i,t}\mid i\in \omega,\ b_{i,t}=\Lambda(\Delta(f_t^i))). $$ An endomorphism~$\beta$ is uniquely defined for every $f\in \mathbf F'$; for abbreviation we shall write $\mathrm{Quasi}_f(\beta)$. 2. Every set of elements from~$\mathbf F'$ gives us a~set of sequences of elements of the quasibasis $$ \{ f_t\mid t\in J\}\leftrightarrow \{ (c_{1,t},\dots,c_{i,t},\dots)\mid t\in J\}=C_J, $$ and the set $C_J$ gives us a~linear space $\bar C_J=\langle C_J\rangle$ such that we can determine whether some sequence belongs to this linear space or not. We do not want to write the formulas, because they are too complicated, but we shall write only the conditions: (a)~every existing sequence belongs to~$\bar C_\mu$; (b)~the linear spaces $\langle C_{J_1}\rangle$ and $\langle C_{J_2}\rangle$ ($J_1\cap J_2=\varnothing$) intersect exactly in~$B$. Therefore, our homomorphism $\Delta$ is a~bijection between the set $\{f_t^i\mid t\in \mu,\ i\in \omega\}$ and the quasibasis $\{ c_{i,t}\mid i\in \omega,\ t\in \mu\}$ of the group~$A$. We assume that $\Delta$ is fixed. Now we can interpret the second order theory of the group~$A$. We shall do the following. In addition to the set~$\mathbf F'$, consisting of $\mu$~independent projections onto countably generated direct summands of~$B'$, we shall also consider a~similar set~$\mathbf G$ with the only additional condition that for every $g\in \mathbf G$ we shall select one fixed projection~$g_0$ onto a~countably generated direct summand $g_0A$ of~$gA$ such that if $gA=g_0A\oplus A'$, then $A'$ is also countably generated.
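For instance, with $p=3$ the quasibasis relation $3c_{i,t}=c_{i-1,t}+b_{i,t}$ shows that four copies of~$c_{i,t}$ reduce to $c_{i,t}+c_{i-1,t}+b_{i,t}$; this base-$p$ ``carrying'' is exactly the addition rule used below.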
Let us fix some $g\in \mathbf G$ and consider a~homomorphism~$h$ such that $h(g_0A)\subseteq \langle a_i\rangle$ for some $i\in \omega$, and if $g_t\in \mathbf G_g\setminus \mathbf G_g^0$ (here $\mathbf G_g$ is defined similarly to~$\mathbf F_f$, and $\mathbf G_g^0=\{g_0\}$), then either $h(g_tA)\subset fA$ for some $f\in \mathbf F$ or $h(g_tA)=0$. Let the image of~$gA$ under~$h$ be finitely generated and the inverse image of every $f\in \mathbf F$ contain at most $p-1$ elements of~$\mathbf G_g$. Then every such~$h$ is mapped to an element from~$A$ as follows: if the image of $\mathbf G_g\setminus \mathbf G_g^0$ under~$h$ is a~finite subset $\mathcal F$ of $\{ f_t^i\mid t\in \mu,\ i\in \omega\}$, and $h(g_0)=a_k$, then we obtain an element $$ \sum_{f_t^i\in \mathcal F} \alpha_{i,t}c_{i,t}+b, $$ where $\alpha_{i,t}$ is the multiplicity of the inverse image of~$f_t^i$ and $b=\Lambda(a_k)$. It is clear that, as above, for such a~mapping~$h$ we can write $h\in \mathop{\mathrm{End}}\nolimits_g$. Two elements $h_1,h_2\in \mathop{\mathrm{End}}\nolimits_g$ are said to be equivalent if there exists an automorphism~$\alpha$ of the group~$gA$ permuting the elements of~$\mathbf G_g$ and preserving $g_0\in \mathbf G_g^0$ such that $h_1\alpha$ and $h_2$ coincide. As usual, the set $\mathop{\mathrm{End}}\nolimits_g$ factorized by such an equivalence is denoted by $\widetilde{\mathrm{End}}_g$. Addition on the set $\widetilde{\mathrm{End}}_g$ is defined as follows: the images of the element~$g_0$ are added, the numbers of inverse images of every~$f_t^i$ are added, and if the resulting number is greater than~$p-1$, then we remove an excess of~$p$ inverse images of~$f_t^i$, add one more inverse image of~$f_t^{i-1}$, and add the known $b_{i,t}=\Lambda(\Delta (f_t^i))$ to the image of~$g_0$. The rest of the proof is similar to the previous cases, because we have $\mu$~independent elements $g_t \in \mathbf G$ and for each of them we can interpret the theory $\mathrm{Th}(A)$. \section{The Main Theorem}\label{sec8} We recall (see Sec.~\ref{sec4_2}) that if $A=D\oplus G$, where $D$ is divisible and $G$ is reduced, then the \emph{expressible rank of the group~$A$} is the cardinal number $$ r_{\mathrm{exp}}=\mu=\max(\mu_D,\mu_G), $$ where $\mu_D$ is the rank of~$D$, and $\mu_G$ is the rank of a~basic subgroup of~$G$. {\bf Theorem 1.} \emph{For any infinite $p$-groups $A_1$~and~$A_2$ elementary equivalence of endomorphism rings $\mathop{\mathrm{End}}\nolimits(A_1)$ and $\mathop{\mathrm{End}}\nolimits(A_2)$ implies coincidence of the second order theories $\mathrm{Th}_2^{r_{\mathrm{exp}}(A_1)}(A_1)$ and $\mathrm{Th}_2^{r_{\mathrm{exp}}(A_2)}(A_2)$ of the groups $A_1$~and~$A_2$, bounded by the cardinal numbers $r_{\mathrm{exp}}(A_1)$ and $r_{\mathrm{exp}}(A_2)$, respectively. } \smallskip \begin{proof} Since the rings $\mathop{\mathrm{End}}\nolimits(A_1)$ and $\mathop{\mathrm{End}}\nolimits(A_2)$ are elementarily equivalent, they satisfy the same first order sentences. If in the ring $\mathop{\mathrm{End}}\nolimits(A_1)$ for some natural~$k$ the sentence $$ \forall x\, (p^k x=0) \logic\land \exists x\, (p^{k-1} x\ne 0) $$ holds, then the group~$A_1$ is bounded and the maximum of the orders of its elements is equal to~$p^k$. It is clear that in this case the same holds also for the group~$A_2$, and the theorem follows from Proposition~\ref{p5.1} (Sec.~\ref{sec5_4}). Now suppose that neither the group~$A_1$ nor the group~$A_2$ is bounded. Suppose that for some natural~$k$ the sentence $\psi_{p^k}$ (from Sec.~\ref{sec4_4}) holds in the ring $\mathop{\mathrm{End}}\nolimits(A_1)$.
The rest of the proof is similar to the previous cases, because we have $\mu$~independent elements $g_t \in \mathbf G$ and for each of them we can interpret the theory $\mathrm{Th}(A)$.

\section{The Main Theorem}\label{sec8}

We recall (see Sec.~\ref{sec4_2}) that if $A=D\oplus G$, where $D$ is divisible and $G$ is reduced, then the \emph{expressible rank of the group~$A$} is the cardinal number
$$
r_{\mathrm{exp}}=\mu=\max(\mu_D,\mu_G),
$$
where $\mu_D$ is the rank of~$D$, and $\mu_G$ is the rank of a~basic subgroup of~$G$.

{\bf Theorem 1.} \emph{For any infinite $p$-groups $A_1$~and~$A_2$, elementary equivalence of the endomorphism rings $\mathop{\mathrm{End}}\nolimits(A_1)$ and $\mathop{\mathrm{End}}\nolimits(A_2)$ implies coincidence of the second order theories $\mathrm{Th}_2^{r_{\mathrm{exp}}(A_1)}(A_1)$ and $\mathrm{Th}_2^{r_{\mathrm{exp}}(A_2)}(A_2)$ of the groups $A_1$~and~$A_2$, bounded by the cardinal numbers $r_{\mathrm{exp}}(A_1)$ and $r_{\mathrm{exp}}(A_2)$, respectively.}

\smallskip

\begin{proof}
Since the rings $\mathop{\mathrm{End}}\nolimits(A_1)$ and $\mathop{\mathrm{End}}\nolimits(A_2)$ are elementarily equivalent, they satisfy the same first order sentences. If in the ring $\mathop{\mathrm{End}}\nolimits(A_1)$ for some natural~$k$ the sentence
$$
\forall x\, (p^k x=0) \logic\land \exists x\, (p^{k-1} x\ne 0)
$$
holds, then the group~$A_1$ is bounded and the maximum of the orders of its elements is equal to~$p^k$. It is clear that in this case the same holds also for the group~$A_2$, and the theorem follows from Proposition~\ref{p5.1} (Sec.~\ref{sec5_4}). Now suppose that neither the group~$A_1$ nor the group~$A_2$ is bounded. Let for some natural~$k$ the sentence $\psi_{p^k}$ (from Sec.~\ref{sec4_4}) hold in the ring $\mathop{\mathrm{End}}\nolimits(A_1)$. Then this sentence holds also in the ring $\mathop{\mathrm{End}}\nolimits(A_2)$, and therefore the groups $A_1$ and $A_2$ are direct sums $D_1\oplus G_1$ and $D_2\oplus G_2$, respectively, where the groups $D_1$~and~$D_2$ are divisible and the groups $G_1$~and~$G_2$ are bounded by the number~$p^k$. Further, the sentence $\psi_{p^k}$ fixes the projections $\rho_D$~and~$\rho_G$ onto the groups $D$~and~$G$, respectively. If $\rho_G=0$, then the groups $A_1$~and~$A_2$ are divisible, and in this case the theorem follows from Proposition~\ref{p6.2}. Let the rings $\mathop{\mathrm{End}}\nolimits(A_1)$ and $\mathop{\mathrm{End}}\nolimits(A_2)$ satisfy the sentence
\begin{align*}
& \tilde \psi_{p^k}^2 \mathbin{{:}\!=} \exists \rho_D\, \exists \rho_G\, ( \psi_{p^k}(\rho_D,\rho_G) \logic\land \exists h\, (\rho_D h\rho_G=h\rho_G \\
& \quad \logic\land \forall \rho_1\,\forall \rho_2\, (\mathrm{Idem}^*(\rho_1) \logic\land \mathrm{Idem}^*(\rho_2) \logic\land \rho_1\rho_G=\rho_1 \logic\land \rho_2\rho_G=\rho_2 \logic\land \rho_1\rho_2=\rho_2\rho_1=0 \\
& \quad \Rightarrow \exists \rho_1'\, \exists \rho_2'\, (\mathrm{Idem}^*(\rho_1') \logic\land \mathrm{Idem}^*(\rho_2') \logic\land \rho_1'\rho_D =\rho_1' \logic\land \rho_2'\rho_D=\rho_2' \\
& \quad \logic\land \rho_1'\rho_2'=\rho_2'\rho_1'=0 \logic\land \rho_1'h\rho_1=h\rho_1\ne 0 \logic\land \rho_2'h\rho_2=h\rho_2\ne 0)))).
\end{align*}
This sentence (in addition to the conditions of the sentence $\psi_{p^k}$) says that there exists an endomorphism~$h$ of the group~$A$ such that $h$ maps~$G$ into~$D$ and any two independent cyclic summands $\rho_1A$ and $\rho_2A$ of the group~$G$ are mapped into independent quasicyclic summands $\rho_1'A$ and $\rho_2'A$ of~$D$, i.e., there exists an embedding of~$G$ into the group~$D$. This implies that $|G|\le |D|$, i.e., if the sentence $\tilde \psi_{p^k}^2$ holds in $\mathop{\mathrm{End}}\nolimits(A_1)$ and $\mathop{\mathrm{End}}\nolimits(A_2)$, then the groups $A_1$~and~$A_2$ are isomorphic to direct sums $D_1\oplus G_1$ and $D_2\oplus G_2$, where $|D_1|\ge |G_1|$ and $|D_2|\ge |G_2|$. In this case, the theorem follows from Proposition~\ref{p6.3}. If the sentence $\psi_{p^k}$ holds in $\mathop{\mathrm{End}}\nolimits(A_1)$ and $\mathop{\mathrm{End}}\nolimits(A_2)$, but the sentence $\tilde \psi_{p^k}^2$ is false, then the groups $A_1$~and~$A_2$ are direct sums $D_1\oplus G_1$ and $D_2\oplus G_2$, where $|D_1|< |G_1|$ and $|D_2|< |G_2|$. In this case, the theorem follows from Proposition~\ref{p6.4}. If for no natural~$k$ the sentence $\psi_{p^k}$ belongs to the theory $\mathrm{Th}(\mathop{\mathrm{End}}\nolimits(A_1))$, then for no natural~$k$ the sentence $\psi_{p^k}$ belongs to the theory $\mathrm{Th}(\mathop{\mathrm{End}}\nolimits(A_2))$, and therefore both groups $A_1$~and~$A_2$ have unbounded basic subgroups. Let us consider the formula
\begingroup
\setlength{\multlinegap}{0pt}
\begin{multline*}
\psi(\rho_D,\rho_G) \mathbin{{:}\!=} \mathrm{Idem}(\rho_D) \logic\land \mathrm{Idem}(\rho_G) \logic\land (\rho_D\rho_G=\rho_G\rho_D=0) \logic\land (\rho_D+\rho_G=1) \\*
\logic\land \forall x\, (\rho_D x\rho_D =0 \logic\lor p(\rho_D x \rho_D)\ne 0) \logic\land \forall \rho'\, (\mathrm{Idem}^*(\rho') \logic\land \rho'\rho_G =\rho'\Rightarrow \neg (\forall x\, (\rho'x\rho'=0 \logic\lor p(\rho' x\rho')\ne 0))).
\end{multline*}
\endgroup%
This formula says that $A$ is the direct sum of its subgroups $\rho_DA$ and $\rho_GA$, the group $\rho_DA$ is divisible, and the group $\rho_GA$ is reduced.
Let us consider the sentence
$$
\psi^2 \mathbin{{:}\!=} \exists \rho_D\, \exists \rho_G\, \exists h\, (\psi(\rho_D,\rho_G) \logic\land \rho_D h\rho_G=h\rho_G \logic\land \forall \rho\, (\mathrm{Idem}^*(\rho) \logic\land \rho \rho_G=\rho\Rightarrow \forall c\in Z\, (c\rho\ne 0\Rightarrow ch\rho\ne 0))).
$$
This sentence says that~$A$ is the direct sum of a~divisible subgroup $D=\rho_DA$ and a~reduced subgroup $G=\rho_GA$, and there exists an embedding $h\colon G\to D$. Therefore $|G|\le |D|$. If the rings $\mathop{\mathrm{End}}\nolimits(A_1)$ and $\mathop{\mathrm{End}}\nolimits(A_2)$ satisfy the sentence~$\psi^2$, then for the groups $A_1$~and~$A_2$ we have $A_1=D_1\oplus G_1$, $A_2=D_2\oplus G_2$, $|D_1|\ge |G_1|$, $|D_2|\ge |G_2|$, and the theorem follows from Proposition~\ref{p7.1}. Now suppose that the rings $\mathop{\mathrm{End}}\nolimits(A_1)$ and $\mathop{\mathrm{End}}\nolimits(A_2)$ do not satisfy the sentence~$\psi^2$. In this case, $A_1=D_1\oplus G_1$, $A_2=D_2\oplus G_2$, $|D_1|< |G_1|$, and $|D_2|< |G_2|$. Recall the formulas from Sec.~\ref{sec7_2}. Let us consider the sentence
\begin{align*}
& \psi^3\mathbin{{:}\!=} \exists \rho_D\, \exists \rho_G\, (\psi(\rho_D,\rho_G) \logic\land \neg \psi^2 \logic\land \exists \varphi_B\, (\mathrm{Base}(\varphi_B) \logic\land \forall \rho\, (\mathrm{Idem}^*(\rho)\Rightarrow \exists \rho'\, (\mathrm{Idem}^*(\rho') \logic\land o(\rho')> o(\rho) \\
& \quad \logic\land \exists f\, (\mathrm{Ord}_{\rho'}(f) \logic\land \forall f'\, (\mathrm{Idem}^*(f') \logic\land f'f=f'\Rightarrow \forall c\in Z\, (cf'\ne 0\Rightarrow cf'\varphi_B\ne 0)) \\
& \quad \logic\land \exists h\, (\forall f_1\, (\mathrm{Idem}^*(f_1) \logic\land \forall c\in Z\, (cf_1\ne 0\Rightarrow cf_1\varphi_B\ne 0) \\
& \quad \Rightarrow \exists f_2\, (\mathrm{Idem}^*(f_2) \logic\land f_2f=f_2 \logic\land f_1h=hf_2=f_1hf_2\ne 0)))))))).
\end{align*}
This sentence says that
\begin{enumerate}
\item $A=\rho_DA\oplus \rho_GA=D\oplus G$, where $D$ is divisible, $G$~is reduced, and $|D|< |G|$;
\item $\varphi_B$ is an endomorphism with the image $\varphi_B(A)$ coinciding with some basic subgroup~$B$;
\item for every natural~$k$ there exists a~natural~$n>k$ such that in the group~$B$ there exists a~direct summand (which is a~sum of cyclic groups of order~$p^n$) having the same power as the group~$B$.
\end{enumerate}
Therefore the sentence~$\psi^3$ says that the final rank of a~basic subgroup of the group~$G$ coincides with its rank. The sentence
\begin{multline*}
\psi^4 \mathbin{{:}\!=} \exists \varphi_B\, (\mathrm{Base}(\varphi_B) \logic\land \forall \rho\, (\mathrm{Idem}^*(\rho) \Rightarrow \forall f\, (\mathrm{Ord}_\rho(f)\Rightarrow \mathop{\mathrm{Fin}}\nolimits(f) \logic\lor \exists h\, (\forall f_1\, (\mathrm{Idem}^*(f_1) \\
\logic\land \forall c\in Z\, (cf_1\ne 0\Rightarrow cf_1 \varphi_B\ne 0)\Rightarrow \exists f_2\, (\mathrm{Idem}^*(f_2) \logic\land f_2f=f_2 \logic\land f_1h=hf_2=f_1hf_2\ne 0))))))
\end{multline*}
means that in a~basic subgroup~$B$ for every natural~$n$ every direct summand which is a~direct sum of cyclic groups of order~$p^n$ either is finite or has the same power as the group~$B$, so the group~$B$ is countable.
Therefore if the rings $\mathop{\mathrm{End}}\nolimits(A_1)$ and $\mathop{\mathrm{End}}\nolimits(A_2)$ satisfy the sentence $\psi^3 \logic\land \neg \psi^4$, then the groups $A_1$~and~$A_2$ are direct sums $D_1\oplus G_1$ and $D_2\oplus G_2$, where $|D_1|< |G_1|$ and $|D_2|< |G_2|$, and the final ranks of basic subgroups of $A_1$~and~$A_2$ coincide with their ranks and are uncountable. In this case the theorem follows from Proposition~\ref{p7.1}. If the rings $\mathop{\mathrm{End}}\nolimits(A_1)$ and $\mathop{\mathrm{End}}\nolimits(A_2)$ satisfy the sentence $\psi^3 \logic\land \psi^4$, then their basic subgroups are countable, and in this case the theorem follows from Proposition~\ref{p7.4}. Now we have only two cases left, and to separate them we shall write the sentence
\begin{align*}
& \psi^5 \mathbin{{:}\!=} \exists \varphi_B\, \exists \bar \rho\, (\mathrm{Base}(\varphi_B) \logic\land \mathrm{Idem}^*(\bar \rho) \logic\land \forall \rho\, (\mathrm{Idem}^*(\rho) \logic\land o(\rho)> o(\bar \rho) \\
& \quad \Rightarrow \forall f\, (\mathrm{Ord}_\rho(f)\Rightarrow \mathop{\mathrm{Fin}}\nolimits(f) \logic\lor \exists h\, (\forall f_1(\mathrm{Idem}^*(f_1) \logic\land o(f_1)> o(\rho) \\
& \quad \logic\land \forall c\in Z\, (cf_1\ne 0\Rightarrow cf_1\varphi_B\ne 0) \Rightarrow \exists f_2\, (\mathrm{Idem}^*(f_2) \logic\land f_2f=f_2 \logic\land f_1h=hf_2=f_1hf_2\ne 0)))))),
\end{align*}
which means that there exists a~number~$k$ such that in a~basic subgroup~$B$ for every natural $n>k$, every direct summand which is a~sum of cyclic groups of order~$p^n$ either is finite or has the same power as the direct summand of the group~$B$ generated by all generators of order greater than~$p^k$. Naturally, this means that the final rank of the group~$B$ is countable. Now if the rings $\mathop{\mathrm{End}}\nolimits(A_1)$ and $\mathop{\mathrm{End}}\nolimits(A_2)$ satisfy the sentence $\neg \psi^3 \logic\land \neg \psi^5$, then the final ranks of basic subgroups of $A_1$~and~$A_2$ are uncountable and do not coincide with their ranks. In this case, the theorem follows from Proposition~\ref{p7.3}. If the rings $\mathop{\mathrm{End}}\nolimits(A_1)$ and $\mathop{\mathrm{End}}\nolimits(A_2)$ satisfy the sentence $\neg \psi^3 \logic\land \psi^5$, then the final ranks of basic subgroups of $A_1$~and~$A_2$ are countable and do not coincide with their ranks, and in this case the theorem follows from the results of Sec.~\ref{sec7_6}.
\end{proof}
\section{Introduction}
\IEEEPARstart{T}{he} Industry 4.0 initiative is advocating smart manufacturing as the industrial revolution leading to global economic growth~\cite{1,2,3,4}. Many countries, corporations, and research institutions have embraced the concept of Industry 4.0, in particular the United States, the European Union, and East Asia~\cite{5}. Some industries have begun a transformation from the digital era to the intelligent era. Manufacturing represents a large segment of the global economy, and the interest in smart manufacturing keeps expanding~\cite{6}. The progress in information and communication technologies, for example, the Internet of Things (IoT)~\cite{7,8}, artificial intelligence (AI)~\cite{9,10}, and big data~\cite{11,12} for manufacturing applications, has impacted smart manufacturing~\cite{13}. In the broad context of manufacturing, customized manufacturing (CM) offers a value-added paradigm for smart manufacturing~\cite{14}, as it refers to personalized products and services. The benefits of CM have been highlighted by multinational companies. Today, information and communication technologies are the base of smart manufacturing~\cite{15, 16}, and intelligent systems driven by AI are the core of CM~\cite{17}. With the development of AI technologies, new theories, models, algorithms, and applications - towards simulating, extending, and enhancing human intelligence - are continuously being developed. The progress of big data analysis and deep learning has accelerated AI to enter the 2.0 era~\cite{18,19,20}. AI 2.0 manifests itself as data-driven deep reinforcement learning intelligence~\cite{21}, network-based swarm intelligence~\cite{22}, technology-oriented hybrid intelligence of human-machine and brain-machine interaction~\cite{Martinez-Garcia2018,23,martinez2019memory}, cross-media reasoning intelligence~\cite{24, 25}, etc. Therefore, AI 2.0 offers significant potential to smart manufacturing, especially CM in smart factories~\cite{26}. Typically, AI solutions can be applied to several aspects of smart manufacturing. AI algorithms can run the manufacturing of personalized products in a smart factory~\cite{27, 28}. AI-assisted CM aims to construct smart manufacturing systems supported by cognitive computing, machine status sensing, real-time data analysis, and autonomous decision-making~\cite{29, 30}. AI permeates every link of CM value chains, such as design, production, management, and service~\cite{31, 32}. Based on these insights into CM and AI, the focus of this paper is on the implementation of AI in the smart factory for CM, involving architecture, manufacturing equipment, information exchange, flexible production line, and smart manufacturing services. The contributions of the research presented in this paper are as follows.
\begin{itemize}
\item The architecture of AI-assisted CM for smart factories is developed by merging smart devices and industrial networks with big data analysis.
\item State-of-the-art AI technologies are reviewed and discussed.
\item The key AI-enabled technologies in CM are validated with a prototype platform of a customized candy packaging line.
\item The challenges and possible solutions brought by the introduction of AI into CM are discussed.
\end{itemize}
The remainder of the paper is organized as follows. In Section~\ref{sec:cm-ai}, the relationship between CM and AI is discussed. The general architecture of AI-assisted CM is presented in Section~\ref{sec:arch}.
Section~\ref{sec:devices} illustrates the implementation of AI in intelligent manufacturing equipment. The intelligent information exchange process, flexible production line, and smart manufacturing services in the AI-assisted CM are proposed in Sections~\ref{sec:interaction} and~\ref{sec:manu-line}. A case study is provided in Section~\ref{sec:case}. The challenges and possible solutions to the AI-assisted intelligent manufacturing factory are discussed in Section~\ref{sec:challenges}. Section~\ref{sec:conc} concludes the paper.

\section{Customized Manufacturing and Artificial Intelligence}
\label{sec:cm-ai}
This section first summarizes the characteristics of customized manufacturing in Section~\ref{subsec:Char-CM} and then discusses the opportunities brought by AI-driven customized manufacturing in Section~\ref{subsec:AI-CM}.

\subsection{Characteristics of customized manufacturing}
\label{subsec:Char-CM}
Despite the progress made, the manufacturing industry faces a number of challenges: traditional mass production is not able to adapt to the rapid production of personalized products, while resource limitations, environmental pollution, global warming, and an aging global population have become more prominent. Therefore, a new manufacturing paradigm to address these challenges is needed. The customer-to-manufacture concept reflects the characteristics of customized production, where a manufacturing system directly interacts with a customer to meet his/her personalized needs. The goal is to realize the rapid customization of personalized products. The new generation of intelligent manufacturing technology offers improved flexibility, transparency, resource utilization, and efficiency of manufacturing processes. It has led to new programs, e.g., the Factory of the Future in Europe~\cite{33}, Industry 4.0 in Germany~\cite{1}, and Made in China 2025~\cite{34}. Moreover, the United States has accelerated research and development programs~\cite{35}. Compared with mass production, the production organization of CM is more complex, quality control is more difficult, and the energy consumption needs attention. In classical automation, the production boundaries were rigid to ensure quality, cost, and efficiency. Compared with traditional production, CM has the following characteristics.
\begin{itemize}
\item \emph{Smart interconnectivity.} Smart manufacturing embraces a cyber-physical environment, e.g., processing/detection/assembly equipment and storage, all operating in a heterogeneous industrial network. The Industrial IoT has progressed from the original industrial sensor networks to the Narrow Band-Internet of Things (NB-IoT), LoRa WAN, and LTE Cat M1 with increased coverage at reduced power consumption~\cite{36}. Edge computing units are deployed to improve system intelligence. Cognitive technology ensures the context awareness and semantic understanding of the industrial IoT~\cite{37}. The intelligent industrial IoT is thus a key technology widely used in intelligent manufacturing.
\item \emph{Dynamic reconfiguration.} The concept of a smart factory aims at the rapid manufacturing of a variety of products in small batches. Since the product types may change dynamically, system resources need to be dynamically reorganized. A multi-agent system~\cite{38} is introduced to negotiate a new system configuration.
\item \emph{Massive volumes of data.} An intelligent manufacturing system includes interconnected devices generating data such as device status and process parameters. Cloud computing and big data science make data analysis feasible in failure prediction, active preventive maintenance, and decision making.
\item \emph{Deep integration.} The underlying intelligent manufacturing entities, cloud platforms, edge servers, and upper monitoring terminals are closely connected. Data processing, control, and operations can be performed simultaneously in Cyber-Physical Systems (CPS), where the information barriers are broken down, thereby realizing the deep integration of the physical and information environments.
\end{itemize}
\begin{figure*}[h]
\centering
\begin{subfigure}[t]{0.45\textwidth}
\includegraphics[height=6.3cm]{fig/1a.pdf}
\caption{}
\label{fig:fig1a}
\end{subfigure}
\begin{subfigure}[t]{0.45\textwidth}
\includegraphics[height=6.3cm]{fig/1b.pdf}
\caption{}
\label{fig:fig1b}
\end{subfigure}%
\caption{The AI and customized manufacturing. (a) AI technologies include perception, machine learning, deep learning, reinforcement learning, and decision making as well as AI-enabled applications like computer vision, natural language processing, intelligent robots, and recommendation systems. (b) AI can foster customized manufacturing in aspects such as customized product design, customized product manufacturing, manufacturing maintenance, customer management, logistics, after-sales service, and market analysis.}
\label{figure1}
\end{figure*}

\subsection{Overview of AI technologies}
AI embraces theories, methods, technologies, and applications to augment human intelligence. It includes not only AI techniques such as perception, machine learning (ML), deep learning (DL), reinforcement learning, and decision making, but also AI-enabled applications like computer vision, natural language processing, intelligent robots, and recommendation systems, as shown in Fig.~\ref{fig:fig1a}. ML has outperformed traditional statistical methods in tasks such as classification, regression, clustering, and rule extraction~\cite{hndai:EIS19}. Typical ML algorithms include decision trees, support vector machines, regression analysis, Bayesian networks, and deep neural networks. As a subset of ML algorithms, DL algorithms often achieve superior performance to other ML algorithms. The recent success of DL algorithms mainly owes to three factors: 1) the availability of massive data; 2) the advances in computing capability achieved by new computer architectures and hardware, such as Graphics Processing Units (GPUs); 3) the advances in diverse DL algorithms such as convolutional neural networks (CNNs), long short-term memory (LSTM) networks, and their variants. Different from ML methods, which require substantial efforts in feature engineering when processing raw industrial data, DL methods combine feature extraction and the learning process, thereby achieving outstanding performance. However, DL algorithms also have their disadvantages. First, DL algorithms often require a huge amount of data to train DL models to achieve better performance than other ML algorithms. Second, the training of DL models requires substantial computing resources (e.g., expensive GPUs and other computer hardware devices). Third, DL algorithms suffer from poor interpretability, i.e., a DL model is like an uncontrollable ``black box'' that may not behave as predicted. The poor interpretability of DL models may prevent their wide adoption in industrial systems, especially in critical tasks like fault diagnosis~\cite{HWang:TII20}, despite recent advances in improving the interpretability of DL models~\cite{XZhang:TII20}.
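To make the difference between the two workflows concrete, the following minimal Python sketch (our own illustration on synthetic vibration data; it is not taken from any cited system) trains a classical classifier on hand-crafted statistical features and a small neural network directly on the raw signal:

\begin{verbatim}
# Toy fault-detection task: contrast feature-engineered ML with
# end-to-end learning on raw signals. All data here are synthetic.
import numpy as np
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 128)
# "Faulty" windows carry an extra high-frequency component.
healthy = np.sin(2*np.pi*5*t) + 0.3*rng.standard_normal((500, 128))
faulty = (np.sin(2*np.pi*5*t) + 0.5*np.sin(2*np.pi*40*t)
          + 0.3*rng.standard_normal((500, 128)))
X_raw = np.vstack([healthy, faulty])
y = np.array([0]*500 + [1]*500)

# Classical ML: hand-crafted features (RMS, peak, 4th-moment proxy).
feats = np.stack([X_raw.std(axis=1),
                  np.abs(X_raw).max(axis=1),
                  ((X_raw - X_raw.mean(axis=1, keepdims=True))**4).mean(axis=1)],
                 axis=1)
Xf_tr, Xf_te, y_tr, y_te = train_test_split(feats, y, random_state=0)
print("SVM on features:", SVC().fit(Xf_tr, y_tr).score(Xf_te, y_te))

# End-to-end learning: the model learns its own features from raw samples.
Xr_tr, Xr_te, _, _ = train_test_split(X_raw, y, random_state=0)
mlp = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                    random_state=0).fit(Xr_tr, y_tr)
print("MLP on raw signal:", mlp.score(Xr_te, y_te))
\end{verbatim}

The sketch only illustrates the division of labor: in the first pipeline the features are designed by hand, whereas in the second the model must learn them from raw samples, which is also why end-to-end methods need more data and computation.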
\subsection{AI-driven customized manufacturing}
\label{subsec:AI-CM}
\begin{figure*}[!t]
\centering
\includegraphics[scale= 0.7]{fig/2.pdf}
\caption{The architecture of AI-assisted customized manufacturing includes smart devices, smart interaction, the AI layer, and smart services.}
\label{figure2}
\end{figure*}
As AI technologies have demonstrated their potential in areas such as customized product design, customized product manufacturing, manufacturing management, manufacturing maintenance, customer management, logistics, after-sales service, and market analysis, as shown in Fig.~\ref{fig:fig1b}, industrial practitioners and researchers have begun to implement them. For example, the work~\cite{zuo:2016prediction} presents a Bayesian network-based approach to analyze consumers' purchase behavior using RFID data collected from RFID tags attached to in-store shopping carts. Moreover, a deep learning method is adopted to identify possible machine faults through analyzing machinery data collected from real industrial environments, such as induction motors, gearboxes, and bearings~\cite{SShao:TII19}. Therefore, the introduction of AI technologies can potentially realize customized manufacturing; we refer to such customized manufacturing as AI-driven CM. In summary, AI-driven CM has the following advantages~\cite{39,40}.
\begin{enumerate}
\item \emph{Improved production efficiency and product quality.} In CM factories, automated devices can potentially make decisions with reduced human intervention. Technologies such as ML and computer vision are enablers of cognitive capabilities, learning, and reasoning (e.g., analysis of order quantities, lead time, faults, errors, and downtime). Product defects and process anomalies can be identified using computer vision and foreign object detection. Human operators can be alerted to process deviations.
\item \emph{Facilitating predictive maintenance.} Scheduled maintenance ensures that the equipment is in the best state. Sensors installed on a production line collect data for analysis with ML algorithms, including convolutional neural networks. For example, the wear and tear of a machine can be detected in real time and a notification can be issued.
\item \emph{Developing smart supply chains.} The variability and uncertainty of supply chains for CM can be predicted with ML algorithms. Moreover, the insights obtained can be used to predict sudden changes in customer demands.
\end{enumerate}
In short, the incorporation of AI and the industrial IoT brings benefits to smart manufacturing. AI-assisted tools improve manufacturing efficiency. Meanwhile, higher value-added products can be introduced to the market. However, we cannot deny that AI technologies still have limitations when they are adopted in real-world manufacturing scenarios. On the one hand, AI and ML algorithms often have stringent requirements on computing facilities. For example, high-performance computing servers equipped with GPUs are often required to accelerate the training process on massive data~\cite{VSze:PIEEE17}, while existing manufacturing facilities may not fulfill this stringent requirement on computing capability.
Therefore, the common practice is to outsource (or upload) the manufacturing data to cloud computing service providers who can conduct the computing-intensive tasks. Nevertheless, outsourcing the manufacturing data to a third party may lead to the risk of leaking confidential data (e.g., customized product designs) or exposing private customer data to others. On the other hand, transferring the manufacturing data to remote clouds inevitably leads to high latency, thereby failing to fulfill the real-time requirement of time-sensitive tasks.

\section{Architecture of an AI-Assisted Customized Manufacturing Factory}
\label{sec:arch}
This section first presents an AI-assisted customized manufacturing (AIaCM) framework in Section~\ref{subsec:AIaCM} and then gives a brief comparison of the proposed AIaCM framework with the state-of-the-art literature in Section~\ref{subsec:survey}.

\subsection{AI-Assisted Customized Manufacturing Factory}
\label{subsec:AIaCM}
Different frameworks have been presented to increase interactivity and improve resource management~\cite{41,42,43}. Most studies have focused on information communications~\cite{44} or big data processing~\cite{45,46,47}. So far, research proposing generic AI-based CM frameworks is limited. System performance metrics, e.g., flexibility, efficiency, scalability, and sustainability, can be improved by adopting AI technologies such as ML, knowledge graphs, and human-computer interaction (HCI). This is especially true in sensing, interaction, resource optimization, operations, and maintenance in a smart CM factory~\cite{48, 49}. Since cloud computing, edge computing, and local computing paradigms have their unique strengths and limitations, they should be integrated to maximize their effectiveness. At the same time, the corresponding AI algorithms should be redesigned to match the corresponding computing paradigm. Cloud intelligence is responsible for making comprehensive, time-insensitive analyses and decisions, while edge and local-node intelligence is applicable to context-aware or time-sensitive environments. Intelligent manufacturing systems include smart manufacturing devices, realize intelligent information interaction, and provide intelligent manufacturing services by merging AI technologies. Fig.~\ref{figure2} shows an AI-assisted CM framework that includes smart devices, smart interaction, an AI layer, and smart services. We explain this framework in detail as follows.
\subsubsection{Smart devices} include robots, conveyors, and other basic controlled platforms. Smart devices serve as ``the physical layer'' for the entire AIaCM. Specifically, different devices and equipment, such as robots and processing tools, are controlled by their corresponding automatic control systems. Therefore, it is crucial to meet the real-time requirement for the device layer in an AIaCM system. To achieve this goal, ML algorithms can be implemented at the device layer in low-power devices such as FPGAs. The interconnection of the physical devices, e.g., machines and conveyors, is implemented at the device layer~\cite{50,51} using edge computing servers.
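As a toy illustration of device-layer intelligence (a sketch under our own assumptions; the weights are invented and would in practice come from offline training), a fixed integer-weight linear classifier can run a safety check within a tight real-time budget and is simple enough to be ported to an FPGA or a microcontroller:

\begin{verbatim}
# Device-layer safety check with integer-only arithmetic,
# suitable for very constrained real-time hardware.
W = (12, -7, 3)   # hypothetical weights for (torque, vibration, temperature)
B = -40           # bias from offline training (invented here)

def device_check(torque, vibration, temp):
    """Return True if the drive may continue operating."""
    score = W[0]*torque + W[1]*vibration + W[2]*temp + B
    return score > 0

print(device_check(torque=8, vibration=3, temp=25))  # -> True
\end{verbatim}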
\begin{table*}[t]
\centering
\caption{Summary of the most relevant state-of-the-art literature}
\renewcommand{\arraystretch}{1.5}
\label{tab:comp}
\begin{tabular}{|c|c|c|c|c||p{2.7cm}|p{2.7cm}|p{2.8cm}|}
\hline
\textbf{Refs.} & \textbf{Smart devices} & \textbf{Smart interaction} & \textbf{AI} & \textbf{Smart services} & \textbf{Pros} & \textbf{Cons} & \textbf{Applications} \\
\hline\hline
\cite{41} & \checkmark & $\times$ & \checkmark & \checkmark & Integration of sensors with cloud services & No edge computing considered & Machine status monitoring (primitive ML methods were used) \\
\hline
\cite{42} & \checkmark & \checkmark & \checkmark & \checkmark & Service-oriented smart manufacturing & No edge computing considered & Milk production from buffalo pasture \\
\hline
\cite{43} & \checkmark & \checkmark & $\times$ & $\times$ & Integration of CPS with smart manufacturing & No edge computing or in-depth analysis of AI algorithms & Several cases from product design to manufacturing control \\
\hline
\cite{44} & \checkmark & $\times$ & $\times$ & \checkmark & Comprehensive consideration of the entire industrial network & Neither AI nor edge computing considered & No specific application \\
\hline
\cite{46} & \checkmark & $\times$ & \checkmark & \checkmark & Integration of sensors with cloud services & No edge computing considered & Light gauge steel production line \\
\hline
\cite{47} & \checkmark & \checkmark & \checkmark & $\times$ & Integration of CPS with AI & No edge computing considered & Production line and factory management \\
\hline
\cite{48} & $\times$ & $\times$ & \checkmark & $\times$ & Diverse AI algorithms were used & No consideration of smart devices, interactions, and services & Cold spray additive manufacturing, augmented reality-guided inspection and surface stress estimation \\
\hline
\end{tabular}
\end{table*}
\subsubsection{Smart interaction} links the device layer, AI layer, and services layer~\cite{52,53}. It represents a bridge between different layers of the proposed architecture. The smart interaction layer is composed of two vital modules. The first module includes basic network devices such as access points, switches, routers, and network controllers, which are generally supported by different network operating systems or equipped with different network functions. The basic network devices constitute the core of the network layer~\cite{54,55}. Different from the first module, which is fixed or static, the second module consists of the dynamic elements, including network/communications protocols, information interaction, and persistent or transient data storage. These dynamic elements are essentially information carriers that connect different manufacturing processes. The dynamic module runs on top of the static one. AI is utilized in the prediction of wireless channels, the optimization of mobile network handoffs, and the control of network congestion. Recurrent Neural Networks (RNNs) or Reservoir Computing (RC) are candidate solutions due to their advantages in analyzing temporal network data.
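As one hedged example of reservoir computing in this role, the following self-contained sketch (a toy traffic trace and sizes of our own choosing, not a deployed system) trains an echo-state network to predict the next value of a network-load time series; only the linear readout is fitted, by ridge regression, while the random reservoir stays fixed:

\begin{verbatim}
# Echo-state network (reservoir computing) for one-step network-load
# prediction on a synthetic trace.
import numpy as np

rng = np.random.default_rng(1)
load = np.sin(np.linspace(0, 30, 600)) + 0.1*rng.standard_normal(600)

n_res = 100
W_in = rng.uniform(-0.5, 0.5, n_res)
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1

states = np.zeros((len(load) - 1, n_res))
x = np.zeros(n_res)
for k in range(len(load) - 1):
    x = np.tanh(W_in*load[k] + W @ x)            # reservoir update
    states[k] = x

targets = load[1:]                               # predict the next sample
readout = np.linalg.solve(states.T @ states + 1e-3*np.eye(n_res),
                          states.T @ targets)    # ridge regression
pred = states @ readout
print("one-step RMSE:", np.sqrt(np.mean((pred - targets)**2)))
\end{verbatim}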
\subsubsection{The AI layer} includes algorithms running at different computing platforms such as edge or cloud servers~\cite{46,56}. The computing environment consists of cloud and edge computing servers running MapReduce, Hadoop, and Spark. AI algorithms are adopted at different levels of computing paradigms in the AIaCM architecture. For instance, training a deep learning model for image processing can be conducted in the cloud. Then, edge computing servers are responsible for running the trained DL model and executing relatively simple algorithms for specific manufacturing tasks.
\subsubsection{Smart manufacturing services} include data visualization, system maintenance, predictions, and market analysis. For example, a recommender system can provide customers with details of CM products, as well as information on the performance of a production line, market trends, and the efficiency of the supply chain.

\subsection{Overview on state-of-the-art manufacturing methods}
\label{subsec:survey}
Recently, substantial research efforts have been made to improve the interactivity and elasticity of existing manufacturing factories~\cite{41,42,43,44,45,46,47,48,49}. Table~\ref{tab:comp} summarizes the most relevant state-of-the-art literature. We can observe from Table~\ref{tab:comp} that most of the references concentrate only on one or a few aspects of CM. For example, the work~\cite{41} presents a cloud manufacturing framework to analyze and process manufacturing data. Similarly, cloud-based manufacturing equipment~\cite{46} is proposed to provide users with on-demand services. However, outsourcing manufacturing data to cloud service providers, which are often third parties, can also bring the risks of leaking customers' private data and exposing confidential manufacturing data (e.g., product design models). Although it considers most of these aspects, the work~\cite{42} ignores critical issues such as the edge computing paradigm and advanced AI technologies. In contrast, our AIaCM framework includes all these aspects of CM: smart devices, smart interaction, AI technologies, and smart services. Meanwhile, our AIaCM framework also considers the advent of edge computing, software-defined networks, and advanced AI technologies. Moreover, we present a full-fledged prototype to further demonstrate the effectiveness of the proposed framework (please refer to Section~\ref{sec:case} for more details). The implementation details of the AIaCM architecture are discussed next.

\section{Intelligent Manufacturing Devices}
\label{sec:devices}
\subsection{Edge computing-assisted intelligent agent construction}
In the customized production paradigm, manufacturing devices should be capable of rapid restructuring and reuse for small batches of personalized products~\cite{57,58}. However, it is challenging to achieve elastic and rapid control over massive numbers of manufacturing devices. Agent-based systems have been considered a solution to this challenge~\cite{WShen:TSMC06,khan2019agent}. Agents can autonomously and continuously function in a collaborative system~\cite{59}. A multi-agent system can be constructed to take autonomous actions. Different types of agents have been constructed in~\cite{60,61,kovalenko2019model}. Although a single agent may have sensing, computing, and reasoning capabilities, it alone can only accomplish relatively simple tasks. Smart manufacturing may involve complex tasks, for instance, image-based personalized product recognition, which is expected from the emerging multi-agent systems~\cite{62,63}. However, multi-agent systems alone are deficient in processing massive data. Recent advances in edge computing can meet this emerging need~\cite{64,65,66}. As shown in Fig.~\ref{figure3}, a variety of decentralized manufacturing agents are connected to edge computing servers via high-speed industrial networks.
The edge computing-assisted manufacturing agents embrace the device layer, agent layer, edge computing layer, and AI layer. An agent is equipped with a reasoning module and a knowledge base, offering basic AI functionalities such as inferencing and computing. Moreover, with the support of new communication technologies (e.g., 5G mobile networks and high-speed industrial wired networks), all agents and edge computing servers can be interconnected.
\begin{figure}[!t]
\centering
\includegraphics[width=8.8cm]{fig/3.pdf}
\caption{Edge computing-assisted manufacturing devices. This architecture includes the device layer, agent layer, edge computing layer, and AI layer.}
\label{figure3}
\end{figure}
Agents run on edge computing servers to guarantee low-latency services for data analytics; the agent edge servers are interconnected by the high-speed industrial IoT to achieve low latency. Generally, edge computing servers support a variety of AI applications. An example of such a system is personalized product identification based on deep-learning image recognition. First, a multiple-agent subsystem is constructed for producing personalized products. Then, a single agent records image or video data at different stages of the CM process. Next, the edge computing server runs image recognition algorithms, such as a convolutional neural network (CNN), R-CNN, Fast R-CNN, Faster R-CNN, YOLO, or the Single Shot Detector (SSD), all of which have demonstrated their advantages in computer vision tasks. The identification results are rapidly transferred to the devices. When a single edge computing server cannot meet the real-time requirements, multiple agent edge servers may work collaboratively to complete specific tasks such as product identification. Indeed, during this process, the master-slave or auction mode can be adopted for coordination, according to the status analysis of each edge server. Additionally, with the help of edge computing, it is possible to establish a quantitative energy-aware model with a multi-agent system for load balancing, collaborative processing of complex tasks, and scheduling optimization in a smart factory~\cite{67}. The above procedure can also optimize the production line with better logistics while ensuring flexibility and manufacturing efficiency.
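The auction mode mentioned above can be illustrated by a small sketch (our own simplification; the bid weights and server names are invented): each edge server bids for an identification task with a cost derived from its current load and link latency, and the lowest bid wins:

\begin{verbatim}
# Auction-style coordination among edge servers (toy model).
def run_auction(task, servers):
    """servers: {name: {"load": 0..1, "latency_ms": float}}."""
    def bid(s):
        return 0.7*s["load"] + 0.3*s["latency_ms"]/100.0  # made-up weights
    return min(servers, key=lambda name: bid(servers[name]))

servers = {"edge_A": {"load": 0.8, "latency_ms": 5},
           "edge_B": {"load": 0.3, "latency_ms": 12}}
print(run_auction("identify product 42", servers))  # -> edge_B
\end{verbatim}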
\subsection{Manufacturing resource description based on ontology}
Intelligent manufacturing benefits greatly from the integration of distributed competitive resources (e.g., manpower and diverse automated technologies), so that resource sharing between enterprises and the flexibility to respond to market changes (i.e., CM) become possible. Therefore, in smart manufacturing, it is imperative to realize dynamic configurations of manufacturing resources~\cite{68,69}. CM can optimize lead time and manufacturing quality under various real-world constraints of a dynamic nature (resource and manpower limitations, market demand, etc.). There are several strategies for describing manufacturing resources, such as databases, object-oriented methods~\cite{zhang1999object}, and the unified manufacturing resource model~\cite{vichare2009unified}. In contrast to the conventional resource description methods, the ontology-based description is one of the most prominent methods. An ontology represents an explicit specification of a conceptual model~\cite{70}, by way of a classical symbolic AI reasoning method (i.e., an expert system). Modeling application domain knowledge through an expert system provides a conceptual hierarchy that supports system integration and interoperability in an interpretable way~\cite{71,72}.
\begin{figure}[!t]
\centering
\includegraphics[width=9.0cm]{fig/4.pdf}
\caption{Manufacturing resources from the device function perspective. The CM resources of a product can be mapped into computing, cutting, conveying, and other functions.}
\label{figure4}
\end{figure}
In our previous work~\cite{73}, the device resources of smart manufacturing were integrated through an ontology-based integration framework to describe the intelligent manufacturing resources. The architecture consisted of four layers, namely, the data layer, the rule layer, the knowledge layer, and the resource layer. The resource layer represented the entities of intelligent manufacturing equipment (e.g., manipulators, conveyor belts, PLCs), which were essentially the field devices. The knowledge layer was essentially the information model composed of intelligent devices, which was integrated into the domain knowledge base through the OWL language~\cite{74}. The rule layer was used to gather the intelligent characteristics of intelligent equipment, such as decision-making and reasoning. The data layer included a distributed database for real-time data storage, and a relational database was used to associate the real-time data. Due to the massive amount of data generated from manufacturing devices, it is nearly impossible to consider all the manufacturing device resources. Thus, it is important to construct a new manufacturing description model to realize the reconfiguration of various manufacturing resources. In this model, the resources can be easily adjusted by running the model. Therefore, ontology modeling is conducted on a device and the related attributes of an intelligent production line in CM. The manufacturing resources are mapped to different functions with different attributes. For instance, the time constraint of product manufacturing is divided into a number of time slots, taking the features of processes and devices into consideration. Then, the CM resources of a product can be mapped into computing, cutting, conveying, and other functions within the limited time slots, as shown in Fig.~\ref{figure4}. Next, a customized product can be produced by different devices under different time constraints. Accordingly, a product can be represented by ontology functions. Meanwhile, after making a reasonable arrangement of different manufacturing functions at different time slots, a DL algorithm can forecast the time slots of working states. The time slots of working states are important for the reconfiguration of manufacturing resources. Therefore, in actual applications, different attributes of devices and customized products can be employed as constraint conditions.
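The following small sketch (a deliberate simplification of the ontology model, with hypothetical device names) illustrates how resources mapped to functions and free time slots can be queried to plan a customized product:

\begin{verbatim}
# Ontology-style query over function/time-slot attributes (toy model).
devices = [
    {"name": "robot_1", "functions": {"conveying"}, "free_slots": {1, 2, 5}},
    {"name": "cnc_1",   "functions": {"cutting"},   "free_slots": {2, 3}},
    {"name": "edge_1",  "functions": {"computing"}, "free_slots": {1, 3, 4}},
]

def select(function, slot):
    """Pick a device that offers `function` and is free in `slot`."""
    for d in devices:
        if function in d["functions"] and slot in d["free_slots"]:
            return d["name"]
    return None

# A customized product as ordered (function, time slot) requirements:
product = [("computing", 1), ("cutting", 2), ("conveying", 5)]
print([(f, s, select(f, s)) for f, s in product])
\end{verbatim}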
\subsection{Edge Computing in Intelligent Sensing}
\label{subsec:edge-sensing}
The concept of ubiquitous intelligent sensing is a cornerstone of smart manufacturing in the Industry 4.0 framework. Numerous research studies have been conducted in monitoring manufacturing environments~\cite{75,76,77}. Most published results adopt a preconfigured sensing system that only accepts static sensing parameters. Obviously, this results in inflexibility, and the sensing parameters are difficult to adjust to fulfill different requirements. Moreover, although some studies claim dynamic parameter tuning, the absence of a prediction function is still an issue. Existing environment sensing (monitoring) cannot adjust the sensing parameters in advance to achieve a more intelligent manufacturing response.
\begin{figure}[!t]
\centering
\includegraphics[width=8.8cm]{fig/5.pdf}
\caption{Intelligent sensing based on edge AI computing. Sensor nodes collect ambient data while edge computing nodes can preprocess and cache the collected data, which can be further transferred to remote cloud servers for in-depth data analysis.}
\label{figure5}
\end{figure}
As shown in Fig.~\ref{figure5}, intelligent sensing of the manufacturing environment based on the edge AI computing framework includes two components: sensor nodes and edge computing nodes~\cite{78}. Generally, smart sensor nodes are equipped with different sensors, processors, and storage and communication modules. The sensors are responsible for converting the physical status of the manufacturing environment into digital signals, and the communication module delivers the sensing data to the edge server or remote data centers. The edge computing servers (nodes) include stronger processing units, larger memories, and more storage space. These servers are connected to different sensor nodes and deployed in proximity to the devices, providing data storage and smart computing services by running different AI algorithms. Meanwhile, the edge computing servers are interconnected with each other to exchange information and knowledge. In particular, the sensing parameters can be adjusted in a flexible monitoring subsystem in the manufacturing environment, according to different application requirements and the task priority. To achieve rapid responses for high-priority events, the edge AI servers should have access to the sensing data and the capability to categorize the status of the CM environment. This can be done by processing the data features through ML classification algorithms such as logistic regression, SVM, and classification trees. When the data is out of the safety range, a certain risk may exist in the manufacturing environment. For instance, if an anomalous temperature event happens in the CM area, the edge server can drive the affected nodes to increase their temperature sensing frequency in order to obtain more environmental details and to make proactive forecasts and decisions (a minimal sketch of this policy is given at the end of this subsection). The delivery of environmental sensing data is another important component in CM. With the development of smart manufacturing, a sensing node not only performs sensing but also transmits the data. With the proliferation of massive sensing data, sensor nodes have been facing more challenges from the perspectives of data volume and data heterogeneity. In order to collect environment data effectively, it is necessary to introduce new AI technologies. The sensor nodes can realize intelligent routing and communications by adjusting the network parameters, assigning different network loads and priorities to different types of data packets. With this optimized sensing transfer strategy, the AI methods can make adequate forecasts with reduced bandwidth usage.
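A minimal sketch of the adaptive temperature-sensing policy mentioned above (with synthetic training data and invented thresholds, not a deployed system) could look as follows: the edge server classifies each window of readings and shortens the sampling period when a risk is detected:

\begin{verbatim}
# Adaptive sensing: classify a temperature window, then pick the
# sampling period. Training data and periods are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
# Features of past windows (mean, variance), labelled safe(0)/risky(1).
X = np.vstack([rng.normal([40, 1], 1, (200, 2)),
               rng.normal([70, 6], 2, (200, 2))])
y = np.array([0]*200 + [1]*200)
clf = LogisticRegression().fit(X, y)

def next_sampling_period(window):
    feat = [[np.mean(window), np.var(window)]]
    return 1.0 if clf.predict(feat)[0] else 30.0  # seconds between samples

print(next_sampling_period([41, 40, 39]))   # calm  -> slow sampling (30.0)
print(next_sampling_period([71, 75, 69]))   # risky -> fast sampling (1.0)
\end{verbatim}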
\textbf{Discussion.} We have presented intelligent manufacturing devices from the perspectives of edge computing-assisted intelligent agent construction, ontology-based manufacturing resource description, and edge computing in intelligent sensing. It is a challenge to upgrade the existing manufacturing devices to improve interoperability and interconnectivity. Retrofitting instead of replacing all the legacy machines may be an alternative strategy in this regard. Legacy manufacturing equipment can be connected to the Internet by additionally mounting sensors or IoT nodes in proximity to existing manufacturing devices~\cite{KZhang:TII20,JHuang:TII20}. Moreover, monitors can be attached to existing machinery to visualize the monitoring process. It is worth mentioning that retrofitting strategies may apply to sensing or monitoring scenarios, while they are less suitable for cases requiring active actions (such as control or movement). Furthermore, a comprehensive plan should be made in advance rather than arbitrarily adding sensors to the existing production line~\cite{pueo2019design}. Retrofitting strategies also have their limitations; for example, only a limited number of internal physical quantities can be monitored in a retrofitted asset compared with a newly designed smart machine.

\section{Intelligent Information Interaction in a Smart Factory}
\label{sec:interaction}
In the CM domain, the information exchange system needs to support the dynamic adjustment of network resources so as to produce multiple customized products in parallel. In order to obtain optimal strategies, many studies have focused on this topic and proposed insightful algorithms as well as strategies~\cite{79}. However, there are still two open issues: a network framework for dynamically adjusting network resources, and end-to-end (E2E) data delivery. In this section, we present software-defined industrial networks and AI-assisted E2E communication to tackle these two challenges.
\begin{figure}[!t]
\centering
\includegraphics[scale=0.6]{fig/6.pdf}
\caption{Software-defined industrial networks consist of network coordination nodes, SDN routers, SDN controllers, data centers, and cloud computing servers which can support intensive computing tasks of AI algorithms.}
\label{figure6}
\end{figure}
\subsection{Software-defined industrial networks}
Industrial networks are a crucial component in CM, and customized product manufacturing groups can be understood as subnets. Via an industrial network (consisting of base stations, access points, network gateways, network switches, network routers, and terminals), the CM equipment and devices are closely interconnected with each other and can be supported by edge or cloud computing paradigms~\cite{80}. Taking full advantage of AI-driven software-defined industrial networks and relevant networking technologies is an important method to achieve intelligent information sharing in CM~\cite{81,82}. In conventional industrial networks, network control functions have been fixed at network nodes (e.g., gateways, routers, switches). Consequently, industrial networks cannot be adapted to dynamic and elastic network environments, especially in customized manufacturing. The software-defined networking (SDN) technology can separate the conventional network into the data plane and the control plane. In this manner, SDN can achieve flexible and efficient network control for industrial networks. It has been reported that a software-defined industrial network can increase the flexibility of a dynamic network system while decreasing the cost of constructing a new network infrastructure~\cite{83}. The introduction of AI technologies to SDN can further bestow network nodes with intelligence. As demonstrated in Fig.~\ref{figure6}, AI technologies are introduced into traditional SDN so as to form a novel software-defined industrial network (SDIN).
The proposed SDIN contains a number of mapping network nodes, SDIN-related devices, data centers, and cloud computing servers to support the intensive computing tasks of AI algorithms. Manufacturing devices are connected by their communication modules, and they are mapped to different network terminal nodes. On the SDIN level, key devices such as coordination nodes and SDIN controllers construct the SDIN layer. First of all, coordination nodes are linked with the ordinary nodes and deliver network control messages from other SDN devices. Second, the SDN routers are the key devices that realize the separation of the data flow and control flow of the entire manufacturing network. In addition, the SDIN controller is directly connected to the AI server, and the AI server provides network decisions directly to the SDN controller. In the network information process, AI algorithms, such as deep neural networks, reinforcement learning, SVM, and other ML algorithms, can be executed on a server according to the state of the network devices, such as load information, communication rate, received signal strength indicator, and other data. Then, the AI server returns the optimized results to the SDN controller, and the results are divided into different instructions for different network devices in the light of a specific CM task. Following the above steps, the SDN controllers send a set of instructions to the routers and the coordination nodes. Finally, network terminals readjust the related parameters (e.g., communication bandwidth and transmit power) to complete the data communication process. Intelligent optimization algorithms (e.g., ant colony or particle swarm optimization) can find optimal data transfer strategies based on the network parameters provided by the SDIN or on the constraints of data interaction. These algorithms can adjust the latency and energy consumption requirements. Thus, the SDIN can improve the information management processes within a CM industry framework, reducing the cost of dynamically adjusting or reconfiguring network resources. Moreover, it can improve the overall manufacturing intelligence. Additionally, by adopting an AI-assisted SDIN, the production efficiency can be further improved.
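A toy version of this decision loop (our own illustration; it does not use any real SDN controller API, and the policy thresholds are invented) maps reported device states to per-device parameter instructions:

\begin{verbatim}
# SDIN decision loop (toy): the AI server turns reported device state
# into instructions that the SDN controller pushes to the network.
def ai_server_decide(states):
    """states: {device: {"load": 0..1, "rssi_dbm": float}}."""
    instructions = {}
    for dev, s in states.items():
        bw = 2 if s["load"] < 0.5 else 8            # Mbit/s (made-up policy)
        tx = -10 if s["rssi_dbm"] > -60 else 0      # dBm transmit power
        instructions[dev] = {"bandwidth_mbps": bw, "tx_power_dbm": tx}
    return instructions

controller_view = {"agv_3": {"load": 0.7, "rssi_dbm": -72},
                   "cam_1": {"load": 0.2, "rssi_dbm": -55}}
print(ai_server_decide(controller_view))
\end{verbatim}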
\subsection{End-to-End communication}
End-to-end (E2E) or device-to-device communication between manufacturing entities is a convenient communication strategy in industrial networks~\cite{84,85}. E2E communication provides communication services with lower latency and higher reliability, as compared to a centralized approach~\cite{86}. With effective information interaction via E2E communication, the entire system can achieve full connectivity. In the context of CM, data transmission with different real-time constraints has become a critical requirement~\cite{87}. The E2E industrial communication approach optimizes the usage of network resources (e.g., network access and bandwidth allocation) through data communication of varying latency~\cite{88,89}. Meanwhile, in order to realize E2E communication in the industrial domain, a hybrid E2E communication network, based on AI technology and the SDIN, is constructed here by exploiting different media, communication protocols, and strategies. The hybrid E2E-based communication mechanism with AI assistance can be divided into three layers: the physical layer, the medium access control (MAC) layer, and the routing layer.
\begin{figure*}[!t]
\centering
\includegraphics[width=17.3cm]{fig/7.pdf}
\caption{Cooperative multiple agents. The strategy of cooperative multiple agents can be divided into i) order submission, ii) task decomposition, iii) cooperative group formation, and iv) subgroup assignment.}
\label{figure7}
\end{figure*}
In the physical layer, according to the advantages and disadvantages of the involved communication technologies, different communication media are exploited, including optical fiber~\cite{90}, network cables~\cite{91}, and wireless radio~\cite{92}. Generally, industrial communications can be divided into wired and wireless communications. On the one hand, wired communications typically exhibit high stability and low latency. A representative case is industrial Ethernet, which is based on common Ethernet and runs improved Ethernet protocols, such as EtherCAT~\cite{93}, EtherNet/IP~\cite{94}, and Powerlink~\cite{95}. On the other hand, wireless networks have been adopted in applications with relatively high flexibility~\cite{96,97}. Nowadays, an increasing number of mobile elements have been incorporated in manufacturing systems; therefore, wireless media have been widely exploited in mobile communications~\cite{98}. Conventional strategies for fixed and static industrial networks may not fulfill the emerging requirements of flexible network configurations. AI and related technologies, such as deep reinforcement learning, optimization theory, and game theory, can play significant roles in improving the communication efficiency of the physical layer, e.g., determining the optimal choice between wired and wireless communication while achieving a good balance between network operational cost and network performance.

In the MAC layer, different devices have different requirements for E2E communications according to their specific functions. Although many different MAC protocols have been proposed (e.g., Carrier Sense Multiple Access (CSMA)~\cite{99}, Code Division Multiple Access (CDMA)~\cite{100}, and Time Division Multiple Access (TDMA)~\cite{101}) along with their improved versions, these methods still lack flexibility and do not fulfill the emerging requirements of industrial applications. Generally, industrial E2E communications can be divided into two categories: periodic communications and aperiodic communications. Similarly, AI plays an important role in the MAC layer. An example is a hybrid approach that combines CSMA and TDMA with an intelligent optimization method to improve the efficiency of E2E communication. In particular, the two categories of communication requirements (high and low real-time, or periodic and aperiodic communications) are classified by an AI-based method (e.g., naïve Bayes). Next, an improved hybrid MAC is constructed on top of CSMA and TDMA: the TDMA and CSMA schemes deal with the periodic and aperiodic data flows of the E2E communications, respectively. The configuration of this proposed mechanism can be adjusted in accordance with the AI-optimized results of a real application.
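The hybrid MAC idea can be sketched as follows (a simplification under our own assumptions; slot counts are invented, and the periodic/aperiodic labels would in practice come from the AI-based classifier): periodic flows receive dedicated TDMA slots, and the rest of each superframe is left as a CSMA contention window:

\begin{verbatim}
# Hybrid TDMA/CSMA superframe builder (toy model).
SUPERFRAME_SLOTS = 16

def build_superframe(flows):
    """flows: list of (name, 'periodic'|'aperiodic', slots_needed)."""
    schedule, used = {}, 0
    for name, kind, need in flows:
        # Dedicated TDMA slots for periodic flows, keeping >= 2 CSMA slots.
        if kind == "periodic" and used + need <= SUPERFRAME_SLOTS - 2:
            schedule[name] = list(range(used, used + need))
            used += need
    # Aperiodic flows contend in the remaining CSMA window.
    schedule["CSMA window"] = list(range(used, SUPERFRAME_SLOTS))
    return schedule

flows = [("servo sync", "periodic", 4), ("sensor batch", "aperiodic", 3),
         ("vision stream", "periodic", 6)]
print(build_superframe(flows))
\end{verbatim}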
Network routing is another key component of E2E communications. The key nodes of a routing path play an important role in E2E communications as well. However, the performance of key routing nodes is impacted by the workload, for instance, the amount of forwarded data. Similarly, AI plays a significant role in the routing layer. State parameters of key nodes, such as the communication rate and network load, can be predicted from historical network node status data by algorithms such as deep neural networks or deep reinforcement learning (e.g., deep Q-learning).

\section{Flexible Manufacturing Line}
\label{sec:manu-line}
A flexible manufacturing production line realizes customization. AI-driven production line strategies and technologies, such as collective intelligence, autonomous intelligence, and cross-media reasoning intelligence, have accelerated the global manufacturing process. Therefore, the subjects of cooperative operation between multiple agents, dynamic reconfiguration of manufacturing, and self-organizing scheduling based on production tasks are presented in this section.
\subsection{Cooperative multiple agents}
Cooperation among multiple agents is necessary to dynamically construct collaborative groups for the completion of customized production tasks~\cite{102}. As discussed in Section~\ref{sec:devices}, multiple agents with edge computing provide a better option than a single device to build a collaborative operation to realize CM~\cite{67,103}. Therefore, by combining the edge computing-assisted intelligent agents and different AI algorithms, a novel cooperative operation can be constructed as shown in Fig.~\ref{figure7}. The strategy of cooperative operation by multiple agents can be divided into order submission, task decomposition, cooperative group formation, and subgroup assignment. The working process of a flexible manufacturing production line can be described as follows. First, according to the customers' requirements, the CM product orders are issued to the manufacturing system through the recommender system. After receiving the product orders, the AI-assisted task decomposition algorithms take the product orders as the input, the device working procedure as the output, and the product manufacturing time as a constraint; these algorithms are mainly executed at the remote cloud server. A product order can be divided into multiple subtasks, which are sent to all the agents via the industrial network. After the negotiation, agents return the answers to the edge server, which handles the working subtasks according to the corresponding conditions and constraints. Next, the AI-assisted cost-evaluation algorithm calculates the cost of a producing group (i.e., a cooperative manufacturing group) from the historical data. Then, the edge agents intelligently select suitable device agents to finish the product order after considering the overall cooperative group performance, such as production time and product quality. Moreover, the edge agents send the selection result to the device agents chosen to take part in producing the order. The main cooperative group is constructed based on the working steps. The main cooperative group may not be well suited for real applications, especially for complicated CM tasks. Therefore, an AI-based method for constructing suitably sized cooperative subgroups is an important step for dealing with this problem. A possible strategy is to use cognitive approaches such as the Adaptive Control of Thought-Rational (ACT-R) model~\cite{taatgen_lebiere_anderson_2005}. These subtask cooperative groups can be mapped to the digital space (i.e., edge agents) and form even lower-level subgroups, all interconnected by the conveyor, logistics systems, and industrial communication systems.
The main cooperative group is constructed based on the working steps. However, the main cooperative group may not be well suited for real applications, especially for complicated CM tasks. Therefore, an AI-based method for constructing cooperative subgroups of suitable size is an important step for dealing with this problem. A possible strategy is to use cognitive approaches such as the Adaptive Control of Thought—Rational (ACT-R) model~\cite{taatgen_lebiere_anderson_2005}. These subtask cooperative groups can be mapped to the digital space (i.e., the edge agent) and form even lower-level subgroups, all interconnected by the conveyor, logistics systems, and industrial communication systems. Each subgroup can be delegated to the same edge agent, which provides the management and the customers with manufacturing services. The characteristics of the subgroups are partly derived from the process constraints and the physical constraints of the plant. In principle, the more constraints there are, the deeper the task tree expands, from more abstract tasks down to particular atomic targets achievable by the present devices. This structure can be replicated with a probabilistic graphical model or with a fuzzy tree. After all the agents have been assigned subtasks, they form two-level cooperative groups. The formation of these cooperative groups is beneficial to resource management. Then, according to the manufacturing task attributes, the multiple agents complete the production task. During this period, the corresponding device agents send their status data to the edge servers in a timely manner, and the manufacturing process can be monitored across the entire system by analyzing these data. In contrast to the AI-driven cooperative operation between multiple agents, conventional methods often rely on human operators who participate in the whole process, or on computer-assisted operations that still require human intervention. These methods inevitably result in huge operational expenditures. \subsection{Dynamic reconfiguration of manufacturing systems} With the development of the industrial market and manufacturing equipment, industrial devices present diverse performance requirements and a clear trend toward multi-functionality~\cite{104}. For instance, the latest Computer Numerical Control (CNC) machine tools can complete a wide range of tasks, from turning to milling. On the other hand, a dedicated manufacturing line does not meet new industrial requirements, especially for customized production~\cite{105}. The trend today is towards reconfiguration and reprogrammability of manufacturing processes~\cite{106}. Although several studies have investigated the problem and presented meaningful results~\cite{107,108}, most of them lack the intelligent design needed to fulfill the emerging requirements of dynamic reconfiguration of manufacturing systems, especially for customized manufacturing. In particular, the work~\cite{107} focuses on the communications between agents, while \cite{108} investigates the relationship between manufacturing flexibility and demands. Thus, AI technologies have seldom been adopted in these studies. At present, ontology (as shown in Section~\ref{sec:devices}) offers insights into the dynamic reconfiguration of manufacturing resources~\cite{86,109}. \begin{figure}[!t] \centering \includegraphics[width=8.8cm]{fig/8.pdf} \caption{Dynamic reconfiguration of manufacturing resources: 1) device selection by ontology reasoning based on the device function; 2) the CM production line is constructed; 3) automatic switching from heavily loaded or broken devices to other devices.} \label{figure8} \end{figure} A schematic of the dynamic reconfiguration process based on ontology inference is shown in Fig.~\ref{figure8}. Each customized product invokes several processing procedures. First, a device related to manufacturing the personalized product (such as a cutting or materials-handling device) is selected by ontology reasoning based on the device function. Then, a second selection of the devices involved in the manufacturing is made according to the ontology results with respect to the related manufacturing process, manufacturing time, manufacturing quality, and other device parameters. Finally, a CM production line is constructed. Specifically, when the production line receives a production task, the raw material for a specific type of product is delivered from an autonomous warehouse. Then, the production line completes the manufacturing tasks in the process sequence. Furthermore, when one of the manufacturing devices breaks down during the process, automatic switching to related machining equipment is conducted by ontology inference. This reasoning mechanism demonstrates the reconfiguration capability of a flexible production line. The presented approach leads to optimized process planning and functional reconfiguration, and it shows the strengths of ontology modeling and reasoning. In practice, only the ontology and the constraints need to be established according to the above description. Following Jena syntax\footnote{Jena syntax defines a set of rules, principles, and procedures to specify the semantic web framework of Apache Jena (\url{https://jena.apache.org/getting_started/index.html}).}, the corresponding API can be invoked to meet the task requirements of this scenario; a toy illustration of function-based device selection is given below. In the future, other AI algorithms are expected to be integrated with ontology inference.
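Apache Jena itself is a Java framework; purely for illustration, the following minimal sketch expresses the same kind of function-based device selection in Python with the rdflib library, over a toy ontology. The namespace, device names, and properties are hypothetical placeholders.

\begin{verbatim}
# Minimal sketch: select candidate devices by function with a SPARQL
# query over a toy RDF ontology. Namespace and names are hypothetical.
from rdflib import Graph, Namespace, RDF

CM = Namespace("http://example.org/cm#")
g = Graph()
g.add((CM.laser_1, RDF.type, CM.Device))
g.add((CM.laser_1, CM.hasFunction, CM.Cutting))
g.add((CM.agv_2, RDF.type, CM.Device))
g.add((CM.agv_2, CM.hasFunction, CM.MaterialsHandling))

query = """
SELECT ?d WHERE {
    ?d a cm:Device ;
       cm:hasFunction cm:Cutting .
}
"""
for row in g.query(query, initNs={"cm": CM}):
    print(row.d)  # -> http://example.org/cm#laser_1
\end{verbatim}

A second selection pass would then rank these candidates by manufacturing time, quality, and other device parameters, as described above.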
\begin{figure}[!t] \centering \includegraphics[width=8.9cm]{fig/9.pdf} \caption{Self-organization of schedules of multiple production tasks consists of three steps: task analysis, task decomposition, and task execution.} \label{figure9} \end{figure} \subsection{Self-organizing Schedules of Multiple Production Tasks} Product orders generally have stochastic and intermittent characteristics, as the arrival time of orders is usually uncertain~\cite{110}. This may result in having to share production resources among multiple tasks. Therefore, creating self-organizing, time-slot-based schedules for multiple production tasks with multiple agents is paramount~\cite{61}. The mechanism of self-organizing schedules for multiple production tasks can be divided into three steps: task analysis, task decomposition, and task execution. As shown in Fig.~\ref{figure9}, at initialization, when a new production task arrives at the multi-tasking production line, it is divided into multiple working steps by an AI-based method executed at the cloud. Additionally, according to the process lead time, the production period can be decomposed into time slots of different lengths. Moreover, for one working step in a time slot, the edge agents select all idle device agents by comparing the mapping relationship between working steps and device agent functions. This processed time slot information is then broadcast to all the agents simultaneously. Then, idle device agents choose the working step by price bidding or negotiating with others, according to the manufacturing requirements and their own conditions (e.g., manufacturing time and quality); a minimal sketch of this bidding step is given below. These results are broadcast to the other agents, including the different servers. Next, the edge agents update the working state of the idle device agents in the corresponding time slot. These procedures are repeated until the new task steps are allocated within a certain or fixed time. Lastly, the multiple agents finish the scheduling of the new production task in a self-organizing manner.
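As an illustration, the following minimal sketch implements a lowest-bid allocation of working steps to idle device agents over discrete time slots; as a simplification, a winning agent is marked busy for all later slots. All agent names and bid values are hypothetical.

\begin{verbatim}
# Minimal sketch: lowest-bid (first-price) allocation of working steps
# to idle device agents over discrete time slots. Values hypothetical.
steps = ["grasp", "wrap", "box"]          # working steps of the new task
idle = {                                  # agent -> bid (est. time) per step
    "robot_1": {"grasp": 4.0, "wrap": 7.0},
    "robot_2": {"grasp": 5.0, "wrap": 6.0, "box": 3.0},
    "robot_3": {"box": 2.5},
}

schedule = {}                             # time slot index -> (step, agent)
for slot, step in enumerate(steps):
    bidders = {a: bids[step] for a, bids in idle.items() if step in bids}
    winner = min(bidders, key=bidders.get)
    schedule[slot] = (step, winner)
    idle.pop(winner)                      # simplification: winner stays busy

print(schedule)
# -> {0: ('grasp', 'robot_1'), 1: ('wrap', 'robot_2'), 2: ('box', 'robot_3')}
\end{verbatim}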
\begin{figure*}[!t] \centering \subfloat[Framework for case study]{\includegraphics[width=3.4in]{fig/10a-new.pdf}% \label{fig_first_case}} \hfil \centering \subfloat[Implementation scenario of CM]{\includegraphics[width=3.6in]{fig/10b-new.pdf}% \label{fig_second_case}} \caption{The framework of customized candy wrapping smart production. (a) The framework includes CM devices, the industrial network, a conveyor, and a cyber-physical system. (b) The implementation of the customized candy wrapping line consists of diverse devices.} \label{fig_sim} \end{figure*} Self-organization of schedules with multiple agents and time slots can effectively complete simultaneous production tasks using a flexible production line, and production line efficiency is thereby improved. Consequently, all manufacturing resources, including different devices and subsystems, can finish multiple production tasks autonomously and more intelligently. In contrast, conventional methods often require huge human resources for scheduling and planning production tasks~\cite{MCKAY20035}. Despite the recent advances in computer-aided methods~\cite{BIEL2016243}, they still require substantial human intervention and cannot meet the flexibility requirements. However, we have to admit that AI-driven self-organization of schedules does not remove humans from the loop of the entire production process. The main goal of AI-driven methods is to save unnecessary human resource consumption and mitigate other operational expenditures. In this manner, human workers can concentrate on planning and optimizing the overall production procedure instead of conducting tedious and repetitive tasks. Meanwhile, appropriate human intervention is still necessary when full automation is not achievable or is only partially implemented. In this sense, AI-driven methods can also assist human workers in making intelligent decisions. \section{Case Study} \label{sec:case} In this section, a case study is presented, showcasing the following aspects: prototype platform construction, big data analysis using AI technology for preventive maintenance, and cloud-assisted customization services. \subsection{Prototype platform construction} We implement a prototype of the AI-assisted CM framework, namely a customized candy wrapping production line. As shown in Fig.~\ref{fig_first_case}, the framework includes the following components: CM devices, the industrial network, a conveyor, and a cyber-physical system. All components are connected through the industrial network, i.e., OPC Unified Architecture (OPC UA) and Data Distribution Service (DDS). Fig.~\ref{fig_second_case} illustrates the implementation of the customized candy wrapping line. The candy packing line mainly includes the production stations and the logistics transmission system. In the logistics transmission system, the packing boxes are continuously transferred by conveyor belts or AGVs. The production stations are distributed discretely between the main line and the branch line, and RFID tags are adopted to obtain the operation information. The equipment types of the production stations include material feeding, candy grasping, box delivery, and finished-goods storage. The presented system meets the requirements of small-batch production. In particular, the packaged candy follows the taste, flavor, and color preferences given by the customers. The system includes four layers, all of which are connected by the industrial IoT with different link functions.
The first layer is the device layer, including five robots, two AGVs, a conveyor system, and a warehouse. The device layer performs the basic functions of an intelligent production line, such as carrying, clipping, loading raw material, and unloading final products. Cognitive robots can be vertically integrated into a cyber-physical system in smart manufacturing~\cite{111}. The industrial network layer (second layer) plays a key role in the information interaction and intelligent connection of different communication technologies, e.g., industrial wireless local area networks (Wi-Fi, ZigBee), industrial Ethernet, industrial NFC (Near Field Communication), and mobile communications. There are three sub-networks fulfilling communication functions with different latency requirements~\cite{112}. Specifically, wired industrial networks are employed for inner-equipment communication to achieve higher real-time performance, wireless industrial networks are mainly adopted in the monitoring system, and mobile wireless local networks help to achieve higher-level flexibility~\cite{113}; for instance, mobile wireless nodes are dynamically deployed to monitor the status of the industrial environment. The third layer is the computing layer, which is mainly involved with the analysis, computing, and knowledge mining of big data. A commercial solution has been adopted to build the cloud platform: XenServer, developed by Citrix, is used to virtualize the server cluster into multiple virtual machines and to manage these virtual machines. Meanwhile, we also establish a big data analytics framework, which is a software architecture based on the cloud platform for big data storage and distributed computing. Apache Hadoop, an open-source solution, provides the non-relational database HBase and the computing architecture YARN (Yet Another Resource Negotiator). On top of the big data framework, AI-assisted optimization algorithms (such as deep learning models) have been deployed to realize intelligent applications. To meet different latency requirements in the platform, a hybrid computing paradigm, orchestrating the cloud and edge computing paradigms, is adopted. Explicitly, edge computing is used to deal with real-time tasks, while cloud computing focuses on completing time-insensitive tasks, such as historical data processing. The deployment of edge computing enables cloud service characteristics such as mobile computing, scalability, and privacy protection~\cite{114}. The fourth layer is the service layer. At this level, a large number of manufacturing resources are stored at the cloud platform, which offers different AI services. Pattern recognition, accurate modeling, knowledge discovery, reasoning, and decision-making capabilities are provided. \begin{figure*}[!t] \centering \includegraphics[width=12.6cm]{fig/11.pdf} \caption{Big data analysis using AI technology for smart preventive maintenance. The smart preventive maintenance system consists of CM devices, industrial networks, a big data processing center, and applications for preventive maintenance. The data pipeline consists of three main steps: data collection, data processing, and data mining and analysis using AI.} \label{figure11} \end{figure*} The working process of the platform is as follows. First, customers select candy products according to their preferences, including the color, taste, quantity, and variety of the candies, in an AI recommender web service system.
Then, these proposed schemes and candy order parameters are delivered to the manufacturing cloud through the web service; the web server is connected to the cloud via the Internet. The related product orders are created according to the submitted information. These orders are decomposed into different working steps by the ontology-based manufacturing system. Next, the multiple agents complete the production tasks in a self-organized way: after obtaining the working steps, the manufacturing devices are assembled into collaborative groups to finish all tasks. Thereafter, the platform finishes the candy wrapping task. During the product manufacturing process, the manufacturing data is collected by sensors and is then transmitted to the cloud or nearby edge servers. The analyzed results provide key information for product monitoring. More importantly, these results can be used to adjust the processes and procedures to ensure higher quality and increase the production efficiency of the whole system. The model-driven method with ontology proposed in~\cite{115} was used to achieve interoperability and knowledge sharing in a manufacturing system across multiple platforms in the product lifecycle. When multiple tasks needed to be finished in the platform, the manufacturing resource reconfiguration methods were employed for production scheduling. The cloud-based manufacturing semantic model proposed in~\cite{116} was used to obtain general task construction and task matching. After implementation, three candy-wrapping tasks with ten different candies were processed in the AI-assisted platform at the same time, which represented a typical production line model for mass wrapping, and the first-in-first-out (FIFO) scheme was adopted accordingly. \subsection{Big data analysis using AI technology for preventive maintenance} Preventive maintenance for smart manufacturing has received attention in the literature~\cite{117,118,119,120,121,122}. A system architecture for an active preventive maintenance system was proposed in~\cite{123}. Based on this architecture, an improved preventive maintenance mechanism was constructed by merging cloud computing and edge computing with deep learning, as shown in Fig.~\ref{figure11}. The smart preventive maintenance system was composed of CM devices, industrial networks, a big data processing center, and applications for preventive maintenance. The data pipeline consisted of three main steps: data collection, data processing, and data mining and analysis using AI. From the perspective of preventive maintenance, the related data collection represented the fundamental step for the subsequent analysis. Different data, including environmental data, product processing data, device working status, and device logs, were collected and transmitted to the computing servers, such as cloud and edge computing servers, through the industrial network or industrial IoT. The edge computing and cloud computing paradigms presented in~\cite{124} were employed to provide elastic and virtual manufacturing resources, which provided opportunities for real-time monitoring of production Key Performance Indicators (KPIs) and smart inventory management. The computing servers were responsible for data processing and device maintenance. The data include different types of manufacturing device-related data. First, redundant and misleading data were removed during the data collection process.
Then, the abstracted data were used for real-time or historical big data analysis for equipment maintenance. AI-based techniques (e.g., deep learning) are regarded as the most effective way to recognize system and equipment faults from big data, so these techniques were adopted in this step (a toy illustration is given at the end of this subsection). Consequently, the manufacturing maintenance knowledge base was built on top of big data. Note that AI techniques play an important role in extracting knowledge from massive and heterogeneous manufacturing data, consequently achieving intelligence~\cite{Fricke:2018}. The manufacturing data include 1) structured data, such as data stored in a relational database, and 2) unstructured data, such as text, documents, sound, images, and video. Diverse AI algorithms such as ontology learning, natural language processing, and deep learning can be leveraged. Furthermore, in our previous work~\cite{123}, a big data solution for active preventive maintenance in manufacturing environments was proposed. This approach essentially combined a real-time active maintenance mechanism with an offline prediction method. Real-time performance was considered a key feature of a manufacturing system, especially for equipment maintenance. Therefore, to achieve the system maintenance tasks, the hybrid edge and cloud computing paradigm for big data analysis represented a better option for preventive maintenance. Specifically, the equipment maintenance tasks can be divided into online and offline tasks. On the one hand, since edge servers deployed close to the equipment can provide low-latency (real-time) service, edge computing was adopted for processing the data online. On the other hand, since cloud servers have powerful computing capabilities, offline big data processing was executed at the cloud layer. Moreover, the analysis results were delivered to the management or data visualization system. The segmented model provided in~\cite{125} for preventive maintenance of semiconductor manufacturing equipment, including both parametric and non-parametric models, was adopted. Consequently, big data with AI is a key technology for equipment maintenance in smart manufacturing. Big data helps to build comprehensive condition monitoring and prediction systems, which can provide preventive maintenance scheduling according to the equipment status. Integrated with AI-based methods, big data analysis can construct a maintenance knowledge library and decrease the cost of operation and maintenance management of a smart manufacturing system.
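As a toy illustration of the data mining step, the following minimal sketch flags anomalous equipment readings with an Isolation Forest. It merely stands in for the deep learning models used on the platform, and the vibration and temperature features are hypothetical.

\begin{verbatim}
# Minimal sketch: flag anomalous sensor readings as candidates for
# preventive maintenance. Stands in for the platform's deep learning
# models; all sensor values below are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [vibration RMS, motor temperature (deg C)], normal operation
normal = np.array([[0.21, 55.0], [0.19, 54.2], [0.23, 56.1], [0.20, 55.5],
                   [0.22, 54.8], [0.18, 55.9], [0.24, 56.4], [0.21, 55.1]])
detector = IsolationForest(contamination=0.1, random_state=0).fit(normal)

incoming = np.array([[0.22, 55.3],   # normal-looking reading
                     [0.95, 71.0]])  # worn-bearing / overheating pattern
for reading, label in zip(incoming, detector.predict(incoming)):
    status = "OK" if label == 1 else "schedule maintenance"
    print(reading, "->", status)
\end{verbatim}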
\subsection{Cloud-assisted customization service} Unlike traditional manufacturing, CM can provide customized services. In other words, customers can participate in the process of intelligent manufacturing~\cite{126,127}. Recently, cloud computing has been proven to support customers taking part in the production process and to drive customization services in a seamless manner~\cite{112,128}, allowing customers to quickly access different data services. Therefore, integrating cloud computing with customization services can improve the user experience; we name such integration cloud-assisted customization services. As shown in recent studies~\cite{129,ZHANG201912}, cloud-assisted customization services are well suited to offer a better user experience to the customer. In particular, cloud-assisted customization services are user-centric, demand-driven, and service-oriented. Based on the multiple different services provided by cloud computing, the customization service can be achieved from the production cycle perspective, as shown in Fig.~\ref{figure12}. The entire production cycle is divided into three stages: the early, middle, and later stages. In the early stage of product production, customers can select more suitable products through the intelligent recommender system, where big data analysis is adopted to integrate the order data, the production data, and the packing line status. Spark MLlib is adopted to realize the personalized recommendation, with advantages in algorithms, usability, and performance; a minimal sketch of such a recommender is given at the end of this subsection. Moreover, using cloud-assisted design, customers without a professional background can also design the manufactured products in a digital simulation environment, virtual reality (VR), and/or augmented reality (AR). In the middle stage of production, CM can provide other personal customization services. For instance, customers can remotely monitor the details of the production process via a digital twin or a virtual production line system. Then, logistics information of different stages can be sent to specific users via cloud computing and the mobile Internet. At the later stage, customers can provide feedback on their user experience; this feedback can be used to improve the manufacturing process. \begin{figure}[!t] \centering \includegraphics[width=8.3cm]{fig/12.pdf} \caption{The cloud-assisted customization services cover the entire product life cycle: the early, middle, and later stages.} \label{figure12} \end{figure} An important difference between intelligent manufacturing and traditional manufacturing lies in the fact that, in intelligent manufacturing, users can participate in the production process via cloud-assisted technologies. Through industrial cloud computing, CM can provide customers and users with customization services. This type of customization service is intended to motivate participation and construct a new production and shopping experience. In terms of market trend prediction, the more market data are gathered, the more accurate the results. Different algorithms can assist in the data-driven decision-making process. For example, shape mining is a framework based on engineering design data~\cite{130} and has been applied to passenger car design. Further, the Apriori and C5.0 algorithms for data mining are able to extract rules from databases~\cite{131}. The platform used in the experiment can not only finish on-demand candy production based on the customization parameters but also adapt to changes in the market. The constructed platform can increase the efficiency of the entire candy wrapping system. This methodology, in combination with the aforementioned decision-making algorithms, constitutes a proof of concept of a seamless CM production pipeline.
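For concreteness, the following is a minimal sketch of a collaborative-filtering recommender built with Spark MLlib's ALS, in the spirit of the recommender system described above; the customer and candy IDs, ratings, and parameter values are hypothetical toy data.

\begin{verbatim}
# Minimal sketch: collaborative filtering with Spark MLlib's ALS.
# Customer/candy IDs, ratings, and parameters are hypothetical.
from pyspark.sql import SparkSession
from pyspark.ml.recommendation import ALS

spark = SparkSession.builder.appName("candy-recs").getOrCreate()
ratings = spark.createDataFrame(
    [(0, 0, 4.0), (0, 1, 2.0), (1, 1, 3.0), (1, 2, 5.0), (2, 0, 1.0)],
    ["customerId", "candyId", "rating"])

als = ALS(userCol="customerId", itemCol="candyId", ratingCol="rating",
          rank=8, maxIter=10, regParam=0.1, coldStartStrategy="drop")
model = als.fit(ratings)

# Top-3 candy recommendations per customer
model.recommendForAllUsers(3).show(truncate=False)
\end{verbatim}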
\textbf{Discussion.} It is worth mentioning that, although cloud-assisted customization services can greatly improve user experience~\cite{FELS2017410}, the number of interactions between customization and production should also be limited. In practice, there is a trade-off between customization interactions and product manufacturing. From the user experience perspective, more customization interactions lead to a better user experience. However, an increased number of customization interactions may also increase the expenditure and prolong the production time of products, which in turn degrades user experience. Therefore, providing users with well-designed product samples accompanied by tutorials in advance can enhance customization services. \section{Challenges and Advances} \label{sec:challenges} While the convergence of AI technologies, IoT, SDN, and other new ICTs in smart factories can significantly increase the flexibility, intelligence, and efficiency of CM systems, it also poses challenges. In this section, we discuss these challenges as well as recent advances. \subsection{Smarter devices in customized manufacturing} In a CM environment, the equipment must not only perform the basic functions of automation but also be intelligent and flexible. From the point of view of stand-alone equipment, the devices should have the following functions: parameter sensing, data storage, logical inference, information interaction, self-diagnostics, hybrid computing support, and preventive maintenance. From the perspective of the physical layer, the realization of CM is increasingly complex, as standard devices or equipment cannot accomplish this complicated task. Therefore, devices need both collaborative ability and swarm intelligence to collaborate with each other to complete complex tasks. Devices with the above functions can meet the requirements of personalized production and adapt to the trend towards automation and intelligence in manufacturing. Therefore, CM devices must be based on the integration and deep fusion of advanced manufacturing technologies, automation technologies, information technology, image technology, communication technologies, and AI technologies to realize local and swarm intelligence. In this aspect, to be realizable in a generic context, further progress needs to be made in AI and in AI/robotics integration. However, as shown in Section~\ref{sec:case}, CM is already realizable in restricted environments. Extensive studies have been conducted on the respective technologies to support CM~\cite{132,133,134}: intelligent connectivity, data-driven intelligence, the cognitive Internet of things, and industrial big data. Unfortunately, most of these studies have mainly focused on single-device intelligence. Although these studies provide useful suggestions towards smarter devices, there is still a huge gap between realistic smart devices and the current solutions. We consider that an ideal approach to realizing smart devices in manufacturing should integrate a series of methods to provide CM with edge intelligence that offers adaptability and responsiveness to a wide range of scenarios. These different methods or algorithms can be orchestrated by a central AI unit, which decides which algorithm is more useful for a particular task (combining a symbolic AI approach with machine learning). Meanwhile, a good balance between intelligence and cost is still a challenge that needs to be addressed in the future. \subsection{Information interaction in CM} Most manufacturing frameworks are distributed systems, where highly efficient communications between different components are needed~\cite{135,136}. On the one hand, different components of CM need not only effective connections but also highly efficient information interactions. As shown in earlier parts of this paper, the CM systems must incorporate different information interaction technologies and algorithms.
On the other hand, due to device heterogeneity and the increased communication demands, the CM information interaction systems have to transmit massive amounts of data with different latency requirements. Consequently, the networks of CM need to be optimized to fulfill the diverse requirements of different applications, such as medium access, mobility handover, and congestion control. In other words, CM requires, as a basis, a highly efficient information interaction system, which integrates multiple information technologies and absorbs deep learning (DL) and other AI algorithms. Such a system particularly depends on efficient and well-designed information interaction. With the development of manufacturing techniques, an increasing number of studies have focused on the massive volume of communications and the large number of connections~\cite{137}. Based on network performance optimization methods, researchers have developed several industrial networks~\cite{138,139}. As different information transfer protocols exist with different requirements, such as time-sensitive and time-insensitive data transfer, the construction of a more efficient interactive industrial network represents another important challenge for future research. Existing studies mostly focus on high-speed communications, rather than on efficient interaction with intelligent networks. These methods usually only meet the basic requirements of industrial networks, such as bandwidth and delay. Recently, wireless mobile communications (e.g., 5G systems) and optical fiber communications have developed greatly~\cite{140,141}. However, 5G technology for smart factories is still in its infancy, and a number of issues, such as deployment, access, and spectrum management, remain to be addressed. A hybrid industrial network combining the latest communication technologies with AI (including deep learning, ensemble learning, transfer learning, etc.) can be a promising solution for information interaction, which should consider different data flows according to the different applications of intelligent CM. \subsection{Dynamic reconfiguration of manufacturing resources} Intelligent manufacturing, especially CM, involves a dynamic reorganization of the available resources and extreme flexibility. The essence of CM is to provide customized products, which have the characteristics of small batches, short processing cycles, and flexible production. Therefore, manufacturing resources need constant readjustment and reorganization~\cite{142}. In addition, customized products involve a wide variety of processing crafts. Moreover, the increasing complexity of industrial environments makes the management of the resources even more difficult. A factory nowadays is a system consisting of many sub-systems, which can produce emergent behaviors through the interaction of these subsystems. Therefore, the dynamic reconfiguration of manufacturing resources is one of the main challenges towards achieving a generic CM facility. In this aspect, a number of strategies have been proposed in this domain~\cite{143,144}. Recently, knowledge reasoning, knowledge graphs, transfer learning, and other AI algorithms have attained great progress~\cite{145,146,147}. In our opinion, hybrid AI methods combining the latest knowledge reasoning technologies with swarm intelligence can be a promising solution for dynamic reconfiguration, which may consider different application scenarios of CM.
Process optimization based on ML technologies may be one effective method of reorganizing manufacturing resources. Moreover, digital twin technologies may be the driving technologies to improve resource reconfiguration~\cite{148,149,150}. \subsection{Practical deployment and knowledge transfer} Although the AI-assisted CM framework is promising for fostering smart manufacturing, several challenges arise in industrial practice before the formal adoption of this framework. The first non-technical challenge mainly lies in the cost of upgrading existing manufacturing machinery, digitizing manufacturing equipment, and purchasing computing facilities as well as AI services. This huge cost may not be affordable for small and medium-sized enterprises (SMEs). With respect to upgrading manufacturing production lines for SMEs, retrofitting legacy machines can be an economical solution, as discussed in Section~\ref{subsec:edge-sensing}. In particular, diverse sensors and IoT nodes can be attached to existing manufacturing equipment to collect diverse manufacturing data. These sensors and IoT nodes can be connected to the Internet so as to improve the interconnectivity of legacy machines. For example, Raspberry Pi boards mounted with sensors can be deployed in workrooms to collect ambient data~\cite{YWang:TIA19,KZhang:TII20}. Besides hardware upgrades, software tools as well as AI services should also be purchased and adopted by manufacturing enterprises. Similarly, an economical solution for SMEs is to outsource manufacturing data to cloud service providers or Machine-Learning-as-a-Service (MLaaS) providers who can offer on-demand computing services. Nevertheless, outsourcing confidential data to untrusted third parties may increase the risks of security and privacy leakage. Thus, it is a prerequisite to enforce privacy and security protection schemes on manufacturing data before outsourcing. Moreover, the expenditure of operating the system and training personnel should not be ignored in practical deployment. Another challenge is effective technology transfer from research institutions to enterprises. Technology transfer involves many non-technical factors and multiple parties. The non-technical issues of technology transfer include market analysis, intellectual property management, technical invention protection, commercialization, and financial returns. Many frontier technological innovations end up with unsuccessful technology transfer due to neglect of these non-technical factors~\cite{JPan:IJPR19}. One of the main obstacles in technology transfer lies in the technology readiness level (TRL) gap between research and industrial practice. In particular, research institutions often focus on research results at TRL 1-3, implying basic feasibility and effectiveness, while industrial enterprises often require transferred technologies at TRL 7-8 or even higher levels, meaning prototype demonstration and real deployment~\cite{RYBICKA20161001}. We admit that the AI-assisted CM framework still has a long way to go before reaching TRL 7-8. The investigation of the technology transfer of AI-assisted CM will be a future direction. Both researchers and industrial practitioners are expected to work together to realize AI-assisted CM. \section{Conclusion} \label{sec:conc} Recent advances in AI technologies have had an impact on the manufacturing industry, especially within customized smart manufacturing.
In this article, AI-assisted customized manufacturing architectures, incorporating IoT, edge intelligence, and cloud computing paradigms, have been proposed. These key AI-enabled technologies have been validated in an industrial packaging scenario. Further, each of the aspects composing these architectures has been carefully reviewed. The fusion of AI and manufacturing provides a potential solution for customized manufacturing. Future research will be directed towards tackling the challenges related to smart manufacturing devices, effective information interaction, dynamic reconfiguration of manufacturing resources, and practical deployment issues. \appendices \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran}
\section{Introduction} The existence and construction of cusp forms is a fundamental problem in the modern theory of automorphic forms (\cite{arthur}, \cite{sel}, \cite{wmuller1}, \cite{gold}, \cite{gold-1}). In this paper we address the issue of existence of cusp forms for almost simple Lie groups using the approach of \cite{Muic1} combined with some local information on supercuspidal representations of $p$-adic groups (\cite{MoyPr1}, \cite{MoyPr}). In view of recent developments in analytic number theory (\cite{gold}, \cite{gold-1}) we pay special attention to the case of $SL_M$. \medskip Suppose $G$ is a simply connected, absolutely almost simple algebraic group defined over ${\mathbb{Q}}$, and $G_\infty := G({\mathbb{R}} )$ is not compact. Let ${\mathbb{A}}$ and ${\mathbb{A}}_f$ denote respectively the ring of adeles and the ring of finite adeles of $\Bbb Q$. For each prime $p$, let ${\mathbb{Z}}_p$ denote the ring of $p$-adic integers inside ${\mathbb{Q}}_p$. Recall that for almost all primes $p$, the group $G$ is unramified over ${\mathbb{Q}}_p$. Thus, $G$ is a group scheme over ${\mathbb{Z}}_p$, and $G({\mathbb{Z}}_p)$ is a hyperspecial maximal compact subgroup of $G({\mathbb{Q}}_p)$ (\cite{Tits}, 3.9.1). The group $G({\mathbb{A}}_f)$ has a basis of neighborhoods of the identity consisting of open compact subgroups. Suppose $L \subset G({\mathbb{A}}_f)$ is an open--compact subgroup. Set \begin{equation}\label{int-1} \Gamma_{L} \ := \ G({\mathbb{Q}}) \cap L \subset G({\mathbb{A}}_f), \end{equation} where we identify $G(\Bbb Q)$ with its image under the diagonal embedding into $G({\mathbb{A}}_f)$. Identifying $G(\Bbb Q)$ likewise with its image under the diagonal embedding into $G({\mathbb{A}})$, the projection of $\Gamma_{L} \subset G({\mathbb{A}} )$ to $G_\infty$ is a discrete subgroup. We continue to denote this discrete subgroup by $\Gamma_{L}$. It is called a congruence subgroup of $G( {\mathbb{Q}} )$ (\cite{BJ}). We write ${\mathcal{A}}_{cusp}(\Gamma_{L}\backslash G_\infty)$ and $L^2_{cusp}(\Gamma_{L} \backslash G_\infty)$ for the space of cusp forms and its $L^2$-closure (\cite{BJ}). We recall the notion of $\Gamma_{L}$--cuspidality here: a continuous function $\varphi: \ \Gamma_{L}\backslash G_\infty\rightarrow \mathbb C$ is $\Gamma_{L}$--cuspidal if $$ \int_{U_P(\mathbb R)\cap \Gamma_{L}\backslash U_P(\mathbb R)}\varphi (ug)\, du=0, \ \ g\in G_\infty, $$ where $U_P$ is the unipotent radical of any proper $\mathbb Q$--parabolic subgroup. \vskip 0.2in Recall that the assumptions that $G$ is simply connected, absolutely almost simple, and $G_{\infty}$ is non-compact mean that $G$ satisfies the strong approximation property (\cite{Platonov}, \cite{BJ} \S 4.7), i.e., $G(\Bbb Q)$ is dense in $G(\Bbb A_f )$, and so for any open compact subgroup $L \subset G(\Bbb A_f)$: $$ G(\Bbb A_f)=G(\Bbb Q) L \ . $$ \noindent We consider a finite family of open compact subgroups \begin{equation}\label{finite-family} {\mathcal F} = \{ L \} \end{equation} satisfying the following assumptions: \medskip \begin{Assumptions}\label{assumptions} \ \smallskip \begin{itemize} \item[(i)] Under the partial ordering of inclusion there exists a subgroup $L_{\text{\rm{min}}} \in {\mathcal F}$ that is a subgroup of all the others. \smallskip \item[(ii)] The groups $L\in {\mathcal F}$ are factorizable, i.e., $L={\underset p \prod} L_p$, and for all but finitely many $p$'s, the group $L_p$ is the maximal compact subgroup $K_p := G({\mathbb{Z}}_p)$.
\smallskip \item[(iii)] There exists a non-empty finite set of primes $T$ such that for each $p\in T$ the group $G$ has a Borel subgroup $B=AU$ with maximal torus $A$ defined over $\Bbb Z_p$, and there exists a supercuspidal representation $\pi_p$ of $G(\Bbb Q_p)$ such that $\pi^{L_{\text{\rm{min}}, p}}_p\neq 0$; moreover, for each $L\neq L_{\text{\rm{min}}}$ there exists $p\in T$ such that $\pi^{L_p}_p=0$. \end{itemize} \end{Assumptions} \noindent We note that a simply connected split almost simple group defined over ${\Bbb Z}$, e.g., $SL_M$ or $Sp_{2M}$, satisfies the above assumptions. \bigskip \begin{Thm}\label{intr-thm} Suppose $G$ is a simply connected, absolutely almost simple algebraic group defined over ${\Bbb Q}$, such that $G_{\infty}$ is non-compact, and ${\mathcal F} = \{ L \}$ is a finite set of open compact subgroups of $G({{\mathbb{A}}}_{f})$ satisfying assumptions \eqref{assumptions}. Then, the orthogonal complement of $$ \sum_{\substack{ L \in {\mathcal F} \\ L_{\text{\rm{min}}} \subsetneq L }} L^2_{cusp}(\Gamma_L \backslash G_\infty) $$ in $L^2_{cusp}(\Gamma_{L_{\text{\rm{min}}}} \backslash G_\infty)$ is a direct sum of infinitely many irreducible unitary representations of $G_\infty$. \end{Thm} \bigskip Theorem \ref{intr-thm} is proved in Section \ref{sec-1}. For general $G$, in Sections \ref{sec-2} and \ref{sec-3} we give examples of families ${\mathcal F}$ satisfying assumptions \eqref{assumptions} using Moy--Prasad filtration subgroups (\cite{MoyPr1}, \cite{MoyPr}). \bigskip In the case $G=SL_M$, for suitable open compact subgroups $L \subset G({\mathbb{A}}_{f} )$, the congruence subgroups $\Gamma_{L}$ are the principal congruence subgroups $\Gamma(m)$ (see (\ref{principal-congruence})), and the main theorem has the following form: \begin{Cor}\label{intr-thm-cor} Let $G=SL_M$. Let $n\ge 2$ be an integer. Then, the orthogonal complement of $$ \sum_{\substack{m| n\\ m< n }} L^2_{cusp}(\Gamma(m)\backslash G_\infty) $$ in $L^2_{cusp}(\Gamma(n)\backslash G_\infty)$ is a direct sum of infinitely many irreducible unitary representations of $G_\infty$. \end{Cor} \begin{proof} This follows directly from the examples in Section \ref{sec-3}. \end{proof} \vskip .2in In Section \ref{sec-4}, we refine Corollary \ref{intr-thm-cor} (see Theorem \ref{nthm}). As a result, we obtain a generalization of the compact quotient case (obtained in \cite{Muic3}). The corresponding results are contained in Corollaries \ref{ncor-1}, \ref{ncor-2}, and \ref{ncor-3}. For example, in Corollary \ref{ncor-1}, we prove that for sufficiently large $n$ we can take infinitely many spherical representations. Corollary \ref{ncor-3} improves (\cite{Muic2}, Theorem 0-2). \bigskip The initial research and first draft of the paper were completed when the second author visited the Mathematics Department of The Hong Kong University of Science and Technology in Spring 2010. The final work and writing of the paper were done during a visit by the first author to the Mathematics Department of the University of Zagreb in Spring 2012. The authors thank both Departments for their hospitality. \bigskip \section{Proof of Theorem \ref{intr-thm}}\label{sec-1} \bigskip We recall some results from \cite{Muic1}. For $f \in C^\infty_c(G({\mathbb{A}}))$, the adelic compactly supported Poincar{\'{e}} series $ P(f)$ is defined as: \begin{equation}\label{e-1} P(f)(g) \ = \ \sum_{\gamma\in G({\mathbb{Q}})} f(\gamma\cdot g). \end{equation} \noindent Write $g \in G({\mathbb{A}}) = G_\infty\times G({\mathbb{A}}_f)$ as $g=(g_\infty, g_f)$.
We have the following: \begin{equation}\label{adel-arch-CCC-PPP} P(f)(g_\infty, 1)=\sum_{\gamma\in G({\mathbb{Q}} )} f (\gamma \cdot g_\infty, \gamma). \end{equation} \noindent The next lemma (\cite{Muic1}, Proposition 3.2) describes the restriction of the Poincar{\'{e}} series (\ref{e-1}) to $G_\infty$. \begin{Lem}\label{lem-1} Let $f \in C_c^\infty(G({\mathbb{A}}))$. Assume that $L$ is an open compact subgroup of $G({\mathbb{A}}_f)$ such that $f$ is right--invariant under $L$. Define the congruence subgroup $\Gamma_{L}$ of $G_\infty$ as in (\ref{int-1}). Then: \begin{itemize} \item[(i)] The function in (\ref{adel-arch-CCC-PPP}) is a compactly supported Poincar{\'{e}} series on $G_\infty$ for $\Gamma_L$. \smallskip \item[(ii)] If $P(f)$ is cuspidal, then the function in (\ref{adel-arch-CCC-PPP}) is cuspidal for $\Gamma_L$ (see the Introduction for the definition). \end{itemize} \end{Lem} We recall that $P(f)$ is cuspidal if $$ \int_{U_P(\mathbb Q)\backslash U_P(\mathbb A)}P(f)(ug)du=0, \ \ g\in G(\mathbb A), $$ where $U_P$ is the unipotent radical of any proper $\mathbb Q$--parabolic subgroup. Let $S$ be a finite set of places, containing $\infty$, and large enough so that $G$ is defined over $\Bbb Z_p$ for $p\not\in S$. We use the decomposition of $G({\mathbb{A}})$ given by: \begin{equation}\label{e-2} G({\mathbb{A}})=G_S\times G^S \, , {\text{\rm{ where }}} G_S := {\underset {p\in S} \prod} G({\mathbb{Q}}_p) \, , {\text{\rm{ and }}} G^S={\underset {p \not\in S} {{\prod}\,'}} G({\mathbb{Q}}_p) \ . \end{equation} \noindent Set $G^S({\mathbb{Z}}_p) :={\underset {p \not\in S} \prod} G({\mathbb{Z}}_p)$, and \begin{equation}\label{e-200} \Gamma (S) \ := \ G^S({\mathbb{Z}}_p) \cap G({\mathbb{Q}}) \ \ \text{(the intersection is taken in $G^S$).} \end{equation} We view $\Gamma (S) \subset G({\mathbb{Q}} ) \subset G({\mathbb{A}} )$. Set $$ \Gamma_{S} \ = \ {\text{\rm{image of $\Gamma (S)$ under the projection map $G({\mathbb{A}} ) = G_S \times G^S \rightarrow G_S$}}}\, . $$ Since $G({\mathbb{Q}})$ is a discrete subgroup of $G({\mathbb{A}})$, it follows that $\Gamma_S$ is a discrete subgroup of $G_S$. \bigskip Set \begin{equation}\label{finte-places-of-S} S_{f} \ := \ {\text{\rm{ the set of finite places in $S$.}}} \end{equation} \noindent For each $p\in S_{f}$, we choose an open--compact subgroup $L_p$, and we set \begin{equation}\label{e-20} \begin{aligned} L \ &= \ \Big( \, {\underset {p\in S_{f}} \prod} L_p \ \Big) \ \times \ G^S({\mathbb{Z}}_{p})\\ \Gamma_{L} \ &= \ L\cap G({\mathbb{Q}}) \ = \ \Big( \, \prod_{p\in S_{f}} L_{p} \ \Big) \ \cap \ \Gamma_S \ . \end{aligned} \end{equation} The group $\Gamma_{L}$ is a discrete subgroup of $G_\infty$. \medskip Let $\mathfrak g_\infty$ be the (real) Lie algebra of $G_{\infty}$, and $K_\infty$ a maximal compact subgroup. We have the following non-vanishing criterion (\cite{Muic1}, Theorem 4.2): \medskip \begin{Lem}\label{lem-2} Assume that for each prime $p$ we have a function $f_{p} \in C_c^\infty(G({\mathbb{Q}}_{p}))$ so that $f_{p}(1)\neq 0$, and $f_{p}=1_{G({\mathbb{Z}}_{p})}$ is the characteristic function of $G({\mathbb{Z}}_{p})$ for all $p\not\in S$. Assume further that for each $p \in S_{f}$, $L_p$ is an open compact subgroup such that $f_p$ is right-invariant under $L_p$. Note: since the set $\big( \, K_\infty \times {\underset {p\in S_{f}} \prod} {\mathrm{supp}}{\ (f_p)} \, \big)$ is compact, the intersection $\Gamma_S \cap \big( \, K_\infty \times {\underset {p\in S_{f}} \prod} {\mathrm{supp}}{\ (f_p)} \, \big)$ is a finite set.
It can be written as follows: \begin{equation}\label{e-3} {\underset {j=1} {\overset l \bigcup}} \ \ \gamma_j \cdot (K_\infty\cap \Gamma_{L}). \end{equation} Set $$ c_j \ = \ {\underset {p\in S_{f}} \prod} \ f_{p}(\gamma_j). $$ Then, the $K_\infty$-invariant map $C^\infty(K_\infty)\longrightarrow C^\infty(K_\infty\cap \Gamma_{L}\setminus K_\infty)$ given by $\alpha \mapsto \hat{\alpha}$, where \begin{equation}\label{e-4} \hat{\alpha}(k) \ := \ \sum_{j=1}^l\sum_{\gamma\in K_\infty\cap \Gamma_{L}} \ c_j\cdot \alpha(\gamma_j \gamma \cdot k), \end{equation} is non--trivial, and for every $\delta\in \hat{K}_\infty$ contributing to the decomposition of the closure of the image of (\ref{e-4}) in $L^2(K_\infty\cap \Gamma_{L} \setminus K_\infty)$, we can find a non-trivial $f_\infty \in C_c^\infty(G_\infty)$ so that the following hold: \begin{itemize} \item[(i)] $E_\delta(f_\infty)=f_\infty$. \item[(ii)] The Poincar\' e series $P(f)$ and its restriction to $G_\infty$ (which is a Poincar\' e series for $\Gamma_L$) are non--trivial, where $f \, := \, f_\infty\otimes_{p} f_{p} \in C_c^\infty(G({\mathbb{A}}))$. \item[(iii)] $E_\delta(P(f))=P(f)$ and $P(f)$ is right--invariant under $L$. \item[(iv)] The support of $P(f)|_{G_\infty}$ is contained in a set of the form $\Gamma_{L}\cdot C$, where $C$ is a compact set which is right--invariant under $K_\infty$, and $\Gamma_{L}\cdot C$ is not the whole of $G_\infty$. \end{itemize} \end{Lem} \vskip .2in We begin the proof of Theorem \ref{intr-thm}. We apply the above considerations to $L_{\text{\rm{min}}}$. By hypothesis, this group is factorizable. Take $S$ sufficiently large so that it contains $T$, and so that if $p\not\in S$ then the group $G$ is unramified over $\Bbb Q_p$, is defined over $\Bbb Z_p$, and $L_{\text{\rm{min}}, p}=G(\Bbb Z_p)$. Thus, (\ref{e-20}) holds for $L=L_{\text{\rm{min}}}$. We apply Lemma \ref{lem-2}. To do this, we construct functions $f_{p} \in C_c^\infty(G({\mathbb{Q}}_{p}))$ such that $f_{p}(1)\neq 0$, $f_p$ is $L_p$-invariant on the right, and $f_{p}=1_{G({\mathbb{Z}}_{p})}$ for all $p\not\in S$. We need to define $f_p$ for $p\in S_{f}$. We let $f_p=1_{L_{\text{\rm{min}}, p}}$ for $p \in S_{f} \, \backslash \, T$. For $p\in T$, we use our assumption that there exists a supercuspidal representation $\pi_p$ such that $\pi^{L_{\text{\rm{min}}, p}}_p\neq 0$. We let $f_p$ be a matrix coefficient of $\pi_p$ such that $f_{p}(1)\neq 0$ and $f_p$ is $L_{\text{\rm{min}}, p}$-invariant on the right. \medskip The functions $f_p$ constructed for all finite $p$ satisfy the assumptions of Lemma \ref{lem-2}. Therefore, by that lemma, we can select $\delta\in \hat{K}_\infty$ and $f_\infty \in C_c^\infty(G_\infty)$ such that (i)--(iv) of Lemma \ref{lem-2} hold. \vskip .2in We decompose the cuspidal part $L_{cusp}^2(G({\mathbb{Q}})\setminus G({\mathbb{A}}))$ into closed irreducible $G({\mathbb{A}})$-invariant subspaces: \begin{equation}\label{e-50} L_{cusp}^2(G({\mathbb{Q}})\setminus G({\mathbb{A}}))=\oplus_j \ {\mathfrak H}_j \ . \end{equation} We prove the following lemma: \begin{Lem}\label{lem-3} We keep the above assumptions.
Then, $$ P(f)\in L_{cusp}^2(G({\mathbb{Q}})\setminus G({\mathbb{A}}))^{L_{\text{\rm{min}}}}, $$ and we can decompose it according to (\ref{e-50}): \begin{equation}\label{e-5} P(f)= \sum_j \psi_j, \ \ \psi_j\in {\mathfrak H}_j \ . \end{equation} Then, we have the following: \begin{itemize} \item [(i)] For all $j$, $\psi_j\in \cal A_{cusp}(G({\mathbb{Q}})\setminus G({\mathbb{A}}))$ is right--invariant under $L_{\text{\rm{min}}}$, and transforms according to $\delta$, i.e., $E_\delta(\psi_j)=\psi_j$. \item [(ii)] Assume $\psi_j\neq 0$, and write ${\mathfrak H}_j$ as a restricted tensor product $\widehat{\otimes}_v \, \pi_v^j$. Then $\pi_{p}^j\simeq \pi_{p}$ for all $p \in T$. \item [(iii)] The number of indices $j$ in (\ref{e-5}) such that $\psi_j\neq 0$ is infinite. \item [(iv)] The closure of the $G_\infty$--invariant subspace in $L^2_{cusp}(\Gamma_{L_{\text{\rm{min}}}}\setminus G_\infty)$ generated by $P(f)|_{G_\infty}$ is an orthogonal direct sum of infinitely many inequivalent irreducible unitary representations of $G_\infty$ which contain $\delta$. \end{itemize} \end{Lem} \begin{proof} This follows from (\cite{Muic1}, Proposition 5.3 and Theorem 7.2). We remark that the formulation of (iv) follows from the proof of (\cite{Muic1}, Theorem 7.2). \end{proof} \vskip .2in \begin{Lem}\label{lem-4} Let $L\in \mathcal F$. Then, the restriction map gives an isomorphism of unitary $G_\infty$-representations $$ L^2_{cusp}(G({\mathbb{Q}})\setminus G({\mathbb{A}}))^L\simeq L^2_{cusp}(\Gamma_{L}\setminus G_\infty), $$ which is norm preserving up to the scalar $vol_{G({\mathbb{A}}_f)}(L)$, i.e., $$ \int_{G({\mathbb{Q}})\setminus G({\mathbb{A}})} |\psi(g)|^2 dg=vol_{G({\mathbb{A}}_f)}(L) \int_{\Gamma_{L}\setminus G_\infty} |\psi(g_\infty)|^2 dg_\infty. $$ \end{Lem} \begin{proof} Since we assume that $G$ is simply connected, absolutely almost simple over ${\mathbb{Q}}$, and $G_\infty$ is not compact, $G$ satisfies the strong approximation property: we have $G(\Bbb A_f)=G(\Bbb Q) L$. Now, the claim is a particular case of the computations contained in the proof of (\cite{Muic1}, Theorem 7-2; see (7-6) there). \end{proof} \vskip .2in \begin{Lem}\label{lem-5} Let $L\in \mathcal F$. Then, the natural embedding $$ L^2_{cusp}(\Gamma_{L}\setminus G_\infty) \hookrightarrow L^2_{cusp}(\Gamma_{L_{\text{\rm{min}}}}\setminus G_\infty) $$ is an embedding of unitary $G_\infty$--representations. This means it is norm preserving up to the scalar $\left[ \Gamma_L:\Gamma_{L_{\text{\rm{min}}}}\right]$, i.e., $$ \int_{\Gamma_{L_{\text{\rm{min}}}}\setminus G_\infty} |\psi(g_\infty)|^2 dg_\infty=\left[ \Gamma_L:\Gamma_{L_{\text{\rm{min}}}}\right] \int_{\Gamma_{L}\setminus G_\infty} |\psi(g_\infty)|^2 dg_\infty, $$ for $\psi \in L^2_{cusp}(\Gamma_{L}\setminus G_\infty)$. \end{Lem} \begin{proof} This is \cite{Muic2}, Lemma 1-9. \end{proof} \vskip .2in \begin{Lem}\label{lem-6} Let $L\in \mathcal F$. Then, we have $$ \left[ \Gamma_L:\Gamma_{L_{\text{\rm{min}}}}\right]=vol_{G({\mathbb{A}}_f)}(L)/vol_{G({\mathbb{A}}_f)}(L_{\text{\rm{min}}}). $$ \end{Lem} \begin{proof} Obviously, we have $$ \left[L: L_{\text{\rm{min}}}\right]=vol_{G({\mathbb{A}}_f)}(L)/vol_{G({\mathbb{A}}_f)}(L_{\text{\rm{min}}}). $$ But, as coset spaces, $$L/ L_{\text{\rm{min}}}= \left(L\cap G(\Bbb Q)\right)/\left(L_{\text{\rm{min}}}\cap G(\Bbb Q)\right)= \Gamma_L/\Gamma_{L_{\text{\rm{min}}}}, $$ since $$ G(\Bbb A_f)=G(\Bbb Q) L_{\text{\rm{min}}}.
$$ \end{proof} \vskip .2in \begin{Lem}\label{lem-7} We have the following commutative diagram of unitary $G_\infty$--representations: $$ \begin{CD} L^2_{cusp}(G({\mathbb{Q}})\setminus G({\mathbb{A}}))^L@>>>L^2_{cusp}(G({\mathbb{Q}})\setminus G({\mathbb{A}}))^{L_{\text{\rm{min}}}}\\ @V\simeq VV@V\simeq VV\\ L^2_{cusp}(\Gamma_{L}\setminus G_\infty) @>>>L^2_{cusp}(\Gamma_{L_{\text{\rm{min}}}}\setminus G_\infty),\\ \end{CD} $$ where the horizontal maps are inclusions, the vertical maps are isomorphisms, and the inner products are normalized as follows: (i) on the spaces in the first row, we take the usual Petersson inner product $\int_{G({\mathbb{Q}})\setminus G({\mathbb{A}})} \psi(g)\overline{\varphi(g)} dg$, and (ii) on $L^2_{cusp}(\Gamma_{U}\setminus G_\infty)$, the inner product is the normalized integral $vol_{G({\mathbb{A}}_f)}(U)\int_{\Gamma_{U}\setminus G_\infty} \psi(g_\infty)\overline{\varphi(g_\infty)} dg_\infty$, where $U\in \{L_{\text{\rm{min}}}, L\}$. \end{Lem} \begin{proof} The lemma follows immediately from Lemmas \ref{lem-4}, \ref{lem-5}, and \ref{lem-6}. \end{proof} \vskip .2in \begin{Lem}\label{lem-8} Let $L\in \mathcal F$, $L\neq L_{\text{\rm{min}}}$. Then, $P(f)|_{G_\infty}$ is orthogonal to $L^2_{cusp}(\Gamma_{L}\setminus G_\infty)$ in $L^2_{cusp}(\Gamma_{L_{\text{\rm{min}}}}\setminus G_\infty)$ if and only if $P(f)$ is orthogonal to $L^2_{cusp}(G({\mathbb{Q}})\setminus G({\mathbb{A}}))^L$. \end{Lem} \begin{proof} This follows from Lemma \ref{lem-7}. \end{proof} \vskip .2in By Lemma \ref{lem-3} (iv), the closure $\cal U$ of the $G_\infty$-invariant subspace in $L^2_{cusp}(\Gamma_{L_{\text{\rm{min}}}}\setminus G_\infty)$ generated by $P(f)|_{G_\infty}$ is an orthogonal direct sum of infinitely many inequivalent irreducible unitary representations of $G_\infty$. By Lemma \ref{lem-8}, $\cal U$ is orthogonal to $$ \sum_{\substack{L\in \mathcal F\\ L\neq L_{\text{\rm{min}}}}}\ \ L^2_{cusp}(\Gamma_{L}\setminus G_\infty) $$ if and only if $P(f)$ is orthogonal to $L^2_{cusp}(G({\mathbb{Q}})\setminus G({\mathbb{A}}))^L$ for all $L\in \mathcal F$, $L\neq L_{\text{\rm{min}}}$. The following lemma completes the proof of Theorem \ref{intr-thm}. \vskip .2in \begin{Lem}\label{lem-9} Let $L\in \mathcal F$, $L\neq L_{\text{\rm{min}}}$. Then, $P(f)$ is orthogonal to $L^2_{cusp}(G({\mathbb{Q}})\setminus G({\mathbb{A}}))^L$. \end{Lem} \begin{proof} Here we use the last part of assumption \eqref{assumptions}\,(iii): for each $L\in {\mathcal F}$ with $L\neq L_{\text{\rm{min}}}$ there exists $p\in T$ such that $\pi_p^{L_p}=0$. We remind the reader that $f_p$ is a matrix coefficient of the supercuspidal representation $\pi_p$ for $p\in T$. Since $L_{\text{\rm{min}}}\subset L$, we have $L_{\text{\rm{min}}, p }\subset L_p$ for all primes $p$ (the groups are factorizable). For $p\in T$ with $\pi_p^{L_p}=0$, integrating the matrix coefficient $f_p$ over $L_p$ computes a matrix coefficient of the projection onto the space of $L_p$-fixed vectors in $\pi_p$, and hence vanishes. We obtain that $$ \int_{L} f(g l)\, dl =0, \ \ \text{for all $g\in G({\mathbb{A}})$.} $$ Hence $$ \int_{L} P(f)(g l)\, dl =0, \ \ \text{for all $g\in G({\mathbb{A}})$.} $$ Finally, let $\varphi \in L_{cusp}^2(G({\mathbb{Q}})\setminus G({\mathbb{A}}))^{L}$. Then we compute \begin{align*} & vol_{G({\mathbb{A}}_f)}(L)\cdot \int_{G({\mathbb{Q}})\setminus G({\mathbb{A}})} P(f)(g)\overline{\varphi(g)}dg=\\ & \int_{G({\mathbb{Q}})\setminus G({\mathbb{A}})} \int_{L} P(f)(gl)\overline{\varphi(gl)}dldg =\\ & \int_{G({\mathbb{Q}})\setminus G({\mathbb{A}})} \left(\int_{L} P(f)(gl)dl\right) \overline{\varphi(g)}dg=0. \end{align*} This proves the lemma.
\end{proof} \bigskip \bigskip \section{Open compact subgroups of $G({{\mathbb{Q}}}_p)$, congruence subgroups, and cusp forms}\label{sec-2} \medskip \subsection{The case of SL$_M$}\label{case-slm} \ \smallskip Fix a positive integer $M$, and consider the algebraic reductive group $G=\SL{M}$. In $\SL{M}({\mathbb Z})$, we consider an alternative to the usual principal congruence subgroup \begin{equation}\label{principal-congruence} \Gamma (n) \ := \ \{ \, g = (g_{i,j}) \in \SL{M}({\mathbb Z}) \ | \ g_{i,j} \equiv \delta_{i,j} \text{\rm{{\ }mod{\ }}} n \, \} \ . \end{equation} \noindent For a prime power $p^k$ ($k > 0$), we set \begin{equation}\label{iwahori-congruence} \aligned \Gamma_{1} (p^k) \ := \ \{ \, g = (g_{i,j}) \in \SL{M}({\mathbb Z}) \ | \ p^{(k-1)} \, &| \, g_{i,j} {\text{\rm{ for $i<j$}}} \\ p^k \, &| \, (g_{i,j} - \delta_{i,j}) {\text{\rm{ for $i \ge j$}}} \ \} . \endaligned \end{equation} \noindent $\Gamma_{1} (p^k)$ is the set of elements in $\Gamma (p^{(k-1)})$ which modulo $p^k$ are unipotent upper triangular. If $n$ is a positive integer, with prime factorization $n={p_{1}}^{e_{1}} \cdots {p_{s}}^{e_{s}}$, set \begin{equation} \Gamma_{1} (n) \ := \ {\underset {i} \bigcap } \ \Gamma_{1} ({p_{i}}^{e_{i}}) \ . \end{equation} \bigskip \noindent Recall that $\Gamma (n)$ is a subgroup of $\Gamma (m)$ if and only if $m$ divides $n$. Moreover, when $\Gamma (n)$ is a subgroup of $\Gamma (m)$, it is a normal subgroup. The situation for the groups $\Gamma_{1}(m)$ is slightly more complicated. We note that if $m$ divides $n$, then $\Gamma_{1}(n)$ is a subgroup of $\Gamma_{1}(m)$. However, for $n > 1$, we also note that $\Gamma_{1} (n)$ is not a normal subgroup of $\Gamma (1) = \SL{M}({\mathbb Z})$, and that $\Gamma_{1}(n)$ is not necessarily a normal subgroup of $\Gamma_{1}(m)$ when $m | n$. Define $n$ to be a strong multiple of $m$ (or $m$ divides $n$ strongly) if $n$ is a multiple of $m$ and every prime $p$ that occurs in the factorization of $n$ also occurs in the factorization of $m$. We use the notation $m |_s n$. For instance, $12 \,|_s\, 72$, since $72 = 2^{3} \cdot 3^{2}$ and both primes $2$ and $3$ occur in $12 = 2^{2} \cdot 3$; on the other hand, $2$ divides $6$, but not strongly, since the prime $3$ occurs in $6$ but not in $2$. The following elementary result then provides a criterion for when $\Gamma_1(n)$ is a normal subgroup of $\Gamma_{1}(m)$. \medskip \begin{Prop}\label{gamma-one-1} \begin{itemize} \item[(i)] For $k \ge 1$, the group $\Gamma_{1}(p^{k})$ is a normal subgroup of $\Gamma_{1}(p)$. \smallskip \item[(ii)] If $m |_s n$, then $\Gamma_{1}(n)$ is a normal subgroup of $\Gamma_{1}(m)$. \end{itemize} \end{Prop} \smallskip For $k$ a positive integer, define open compact subgroups of $\SL{M}({\mathbb Q}_p)$ as follows: \begin{equation}\label{iwahori-k} \aligned {\mathcal K}_{p,k} \ :&= \ \{ \, (g_{i,j}) \in \SL{M}({\mathbb Z}_{p}) \ \big| \ p^{k} \, | \, (g_{i,j} - \delta_{i,j}) \ \forall \, i,j \ \} \ , \ {\text{\rm{ and }}} \\ &\ \\ {\mathcal I}_{p,k^{+}} \ :&= \ \{ \, (g_{i,j}) \in \SL{M}({\mathbb Z}_{p}) \ \big| \ p^{(k-1)} \, | \, g_{i,j} {\text{\rm{ for $i<j$,}}} \ {\text{\rm{and}}} \ p^k \, | \, (g_{i,j} - \delta_{i,j}) {\text{\rm{ for $i \ge j$}}} \ \} . \endaligned \end{equation} \noindent The group ${\mathcal K}_{p,k}$ is the well-known $k$-th congruence subgroup of $\SL{M}({\mathbb Z}_{p})$. With $n={p_{1}}^{e_{1}} \cdots {p_{s}}^{e_{s}}$ as above, set \begin{equation}\label{open-compact-SL} \aligned K_n \, :&= \ {\underset {p_i | n} \prod } \ {\mathcal K}_{p_i, e_i} \ \ {\underset {p {\nmid} n} \prod } \ G({\mathbb Z}_p) \ , \ \ {\text{\rm{and}}} \ \ I_n \, := \ {\underset {p_i | n} \prod } \ {\mathcal I}_{p_i, (e_i)^{+}} \ \ {\underset {p {\nmid} n} \prod } \ G({\mathbb Z}_p) \ . \endaligned \end{equation}
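\smallskip \noindent For example, unwinding the definitions when $M=2$ gives $$ {\mathcal I}_{p,k^{+}} \ = \ \left\{ \begin{pmatrix} a & b \\ c & d \end{pmatrix} \in \SL{2}({\mathbb Z}_{p}) \ \Big| \ b \in p^{(k-1)}{\mathbb Z}_{p}, \ {\text{\rm{ and }}} \ a-1, \, d-1, \, c \in p^{k}{\mathbb Z}_{p} \right\} \ , $$ so that ${\mathcal I}_{p,1^{+}}$ consists of the elements of $\SL{2}({\mathbb Z}_{p})$ which are unipotent upper triangular modulo $p$.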
\endaligned \end{equation} \smallskip \noindent A consequence of the Chinese remainder theorem is the following: \begin{Prop}\label{gamma-one-2} $$ \aligned \Gamma (n) \ = \ K_n \ \cap \ G( {\mathbb Q}) \, , \ \ {\text{\rm{and}}} \ \ \Gamma_1 (n) \ = \ I_n \ \cap \ G( {\mathbb Q}) \ . \endaligned $$ \end{Prop} \smallskip \noindent The groups ${\mathcal I}_{p,k^{+}}$ are the same as certain Moy-Prasad filtration subgroups. Let ${\mathcal I}_{p}$ denote the subgroup consisting of elements in $\SL{M}({\mathbb Z}_{p})$ which are upper triangular modulo $p$. It is an Iwahori subgroup of $\SL{M}({\mathbb Q}_{p})$. Let $\mathcal B (\SL{M}({\mathbb Q}_{p}))$ be the Bruhat-Tits building of $\SL{M}({\mathbb Q}_{p})$. Let $C$ be the alcove in $\mathcal B (\SL{M}({\mathbb Q}_{p}))$ fixed by ${\mathcal I}_{p}$. Then, for any $x \in C$, and $k \in {\mathbb N}$, we have ${\mathcal I}_{p,k^{+}} = {\big(}{\SL{M}({\mathbb Q}_{p})}{\big)}_{x,k^{+}}$. \medskip We prove a cusp form result related to the group ${\mathcal K}_{p,k}$. As a preliminary, we recall the definition of cusp forms for a $p$-adic or finite field semi-simple group $\mathcal G$ and its Lie algebra ${\mathfrak g}$: \begin{itemize} \item[(i)] A function $\phi \in C^{\infty}_{0} ({\mathfrak g} )$ is a cusp form if for any $X \in {\mathfrak g}$ and unipotent radical ${\mathfrak n}$ of any proper parabolic subalgebra ${\mathfrak p} \subsetneq {\mathfrak g}$, the integral \begin{equation}\label{lie-algebra-cusp-form-1} \int_{{\mathfrak n}} \ \phi \, (X +n ) \ dn \ {\text{\rm{ is zero.}}} \end{equation} \item[(ii)] A function $f \in C^{\infty}_{0} (\mathcal G )$ is a cusp form if for any $X,Y \in \mathcal G$ and unipotent radical $\mathcal U$ of any proper parabolic subgroup $\mathcal P \subsetneq \mathcal G$, the integral \begin{equation}\label{lie-algebra-cusp-form-2} \int_{\mathcal U} \ f(X \, u \, Y) \ du \ {\text{\rm{ is zero.}}} \end{equation} \end{itemize} As noted earlier, the $k$-th congruence subgroup ${\mathcal K}_{p,k}$ is normal in $\SL{M}({{\mathbb{Z}}}_p)$. For $k >0$, we now formulate and prove a result on cusp forms associated to certain characters of the group ${\mathcal K}_{p,k}/{\mathcal K}_{p,(k+1)}$. To simplify the notation (since $p$ will be fixed), we abbreviate ${\mathcal K}_{p,k}$ to ${\mathcal K}_{k}$. Set ${{\mathcal{L}}} = {{\mathfrak s} {\mathfrak l}}_{M}({{\mathbb{Z}}}_p)$, and \begin{equation} {{\mathcal{L}}}_{k} \ := \ \{ \ (x_{i,j}) \in {{\mathcal{L}}} \ \big| \ p^k \, | \, \, x_{i,j} \ \} \ = \ p^k {{\mathcal{L}}} \ . \end{equation} The quotient ${{\mathcal{L}}}/{{\mathcal{L}}}_1$ is naturally the Lie algebra ${{\mathfrak s} {\mathfrak l}}_{M}({{\mathbb{F}}}_p)$. The map $X \rightarrow p^kX$ gives a natural isomorphism $\tau_{k}$ of ${{\mathcal{L}}}/{{\mathcal{L}}}_1$ with ${{\mathcal{L}}}_k/{{\mathcal{L}}}_{(k+1)}$. Recall there is an isomorphism \begin{equation} \label{cusp-K-one} \theta \ : \ {{\mathcal{L}}}_k/{{\mathcal{L}}}_{(k+1)} \ \rightarrow \ {\mathcal K}_{k}/{\mathcal K}_{(k+1)} \end{equation} so that for $x \in {{\mathcal{L}}}_k$, $$ \theta (x) \ = \ 1 + x {\text{\rm{ mod }}} {{\mathcal{K}}}_{(k+1)} {\text{\rm{ \ in \ ${\mathrm{GL}}_{M} ({\mathbb{Z}}_p )$}}} \ . $$ \noindent Therefore, there is a natural isomorphism of ${\mathcal K}_{k}/{\mathcal K}_{(k+1)}$ with ${{\mathfrak s} {\mathfrak l}}_{M}({{\mathbb{F}}}_p)$.
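\medskip \noindent To see why $\theta$ is well defined (a priori $1+x$ lies only in ${\mathrm{GL}}_{M}({\mathbb{Z}}_p)$), we record the following elementary computation for the reader's convenience. Writing $x = p^k X$ with $X \in {{\mathfrak s} {\mathfrak l}}_{M}({{\mathbb{Z}}}_p)$, we have $$ \det (1+p^k X) \ = \ 1 \ + \ p^k \, {\mathbf{tr}}(X) \ + \ O(p^{2k}) \ = \ 1 + O(p^{2k}) \ \equiv \ 1 \ {\text{\rm{{\ }mod{\ }}}} p^{k+1} \qquad (k \ge 1), $$ since ${\mathbf{tr}}(X)=0$. Hence the coset of $1+x$ modulo ${\mathcal K}_{(k+1)}$ contains an element of $\SL{M}({{\mathbb{Z}}}_p)$, and that element lies in ${\mathcal K}_{k}$.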
Any choice of a non-trivial additive character $\psi$ of ${{\mathbb{F}}}_p$ gives an identification of the Pontryagin dual of ${{\mathfrak s} {\mathfrak l}}_{M}({{\mathbb{F}}}_p)$ with ${{\mathfrak g} {\mathfrak l}}_{M}({{\mathbb{F}}}_p)/{{\mathbb{F}}}_p$, via the composition of the pairing \begin{equation} \label{cusp-K-two} \aligned {{\mathfrak s} {\mathfrak l}}_{M}({{\mathbb{F}}}_p) \ \times \ {{\mathfrak g} {\mathfrak l}}_{M}({{\mathbb{F}}}_p) \ &\longrightarrow \ {{\mathbb{F}}}_p \\ ( \, X \, , \, Y \, ) \qquad &\longrightarrow \ {\mathbf{tr}} ( \, XY \, ) \endaligned \end{equation} \noindent with $\psi$. Write \begin{equation} \label{cusp-K-three} \aligned \psi_{Y} \ : \ {{\mathfrak s} {\mathfrak l}}_{M}({{\mathbb{F}}}_p) \ \longrightarrow& \ {{\mathbb{C}}}^{\times} \\ X \ {\overset {\psi_{Y}} \longrightarrow}& \ \psi ( \, {\mathbf{tr}} ( \, XY \, ) \, ) \ . \endaligned \end{equation} Take $Y \in {{\mathfrak g} {\mathfrak l}}_{M}({{\mathbb{F}}}_p)$ to be an element whose characteristic polynomial is irreducible; for instance, one may take $Y$ to be the companion matrix of an irreducible monic polynomial of degree $M$ over ${{\mathbb{F}}}_p$. Such an element exists since there is a finite extension of ${{\mathbb{F}}}_p$ of degree $M$. The following proposition is elementary. \begin{Prop} \label{SL-cusp-forms} Suppose $Y \in {{\mathfrak g} {\mathfrak l}}_{M}({{\mathbb{F}}}_p)$ has irreducible characteristic polynomial: \smallskip \begin{itemize} \item[(i)] If ${\mathfrak p} ({{\mathbb{F}}}_p) \subsetneq {{\mathfrak g} {\mathfrak l}}_{M}({{\mathbb{F}}}_p)$ is any parabolic subalgebra of ${{\mathfrak g} {\mathfrak l}}_{M}({{\mathbb{F}}}_p)$, then $Y \notin {\mathfrak p} ({{\mathbb{F}}}_p)$. \smallskip \item[(ii)] The character $\psi_{Y}$ of ${{\mathfrak s} {\mathfrak l}}_{M}({{\mathbb{F}}}_p)$ is a cusp form. \smallskip \item[(iii)] The inflation of $\psi_{Y}$ to ${{\mathcal{K}}}_k$ via \eqref{cusp-K-one}, when extended to $\SL{M}({\mathbb{Q}}_p)$ by setting it to be zero off ${{\mathcal{K}}}_k$, is a cusp form. \smallskip \item[(iv)] For each positive integer $k$, there exists an irreducible supercuspidal representation $(\rho ,W_{\rho})$ which has a non-zero ${{\mathcal{K}}}_{(k+1)}$-fixed vector, but no non-zero ${{\mathcal{K}}}_{k}$-fixed vector. \end{itemize} \end{Prop} \smallskip \begin{proof} To prove part (i), suppose ${\mathfrak p} ({{\mathbb{F}}}_p) \subsetneq {{\mathfrak g} {\mathfrak l}}_{M}({{\mathbb{F}}}_p)$ is any parabolic subalgebra and ${\mathfrak p} ({{\mathbb{F}}}_p) = {\mathfrak m} ({{\mathbb{F}}}_p) + {\mathfrak u} ({{\mathbb{F}}}_p)$ is a Levi decomposition. For any $Z \in {\mathfrak p} ({{\mathbb{F}}}_p)$, let $Z_{{\mathfrak m} ({{\mathbb{F}}}_p)}$ be the projection of $Z$ to ${{\mathfrak m} ({{\mathbb{F}}}_p)}$. Then $Z$ and $Z_{{\mathfrak m} ({{\mathbb{F}}}_p)}$ have the same characteristic polynomial, and the latter characteristic polynomial is clearly not irreducible. Thus, if $Y \in {{\mathfrak g} {\mathfrak l}}_{M}({{\mathbb{F}}}_p)$ has irreducible characteristic polynomial, it cannot lie in any ${\mathfrak p} ({{\mathbb{F}}}_p) \subsetneq {{\mathfrak g} {\mathfrak l}}_{M}({{\mathbb{F}}}_p)$. \medskip To prove (ii), suppose $x \in {{\mathfrak s} {\mathfrak l}}_{M}({{\mathbb{F}}}_p)$, and ${\mathfrak p} ({{\mathbb{F}}}_p) = {\mathfrak m} ({{\mathbb{F}}}_p) + {\mathfrak u} ({{\mathbb{F}}}_p)$ is a proper parabolic subalgebra.
Then $$ \int_{{\mathfrak u} ({{\mathbb{F}}}_p)} \psi_{Y} (x+n) \, dn \ = \ \psi_{Y}(x) \ \int_{{\mathfrak u} ({{\mathbb{F}}}_p)} \psi_{Y} (n) \, dn \ . $$ Since $Y$ is not contained in any proper parabolic subalgebra of ${{\mathfrak g} {\mathfrak l}}_{M}({{\mathbb{F}}}_p)$, the integrand on the right side restricts to a non-trivial character of ${{\mathfrak u} ({{\mathbb{F}}}_p)}$ (otherwise $Y$ would lie in the annihilator of ${\mathfrak u} ({{\mathbb{F}}}_p)$ with respect to the trace pairing, which is the proper parabolic subalgebra ${\mathfrak p} ({{\mathbb{F}}}_p)$ itself), and therefore the integral is zero. Whence, $\psi_{Y}$ is a cusp form. \medskip To prove (iii), denote the inflation of $\psi_{Y}$ by $\psi_{Y} \circ \theta^{-1}$. Suppose $P \subset \SL{M}({{\mathbb{Q}}}_p)$ is a proper parabolic subgroup. Then $P$ is conjugate to a standard `block upper triangular' parabolic subgroup $Q=M_{Q}N_{Q}$ of $\SL{M}({{\mathbb{Q}}}_p)$, i.e., $$ P \ = \ g^{-1} Q g \ = \ ( \, g^{-1}M_{Q}g \, ) \ ( \, g^{-1}N_{Q}g \, ) \ {\text{\rm{ \ and \ $U_{P} = g^{-1}N_{Q}g$. }}} $$ \noindent Since $\SL{M}({{\mathbb{Q}}}_p) = Q{\mathcal K}$, where ${\mathcal K} = \SL{M}({{\mathbb{Z}}}_p)$, express $g$ as $g = v_{g} k_{g}$ with $v_{g} \in Q$, and $k_{g} \in {\mathcal K}$. Then, $$ \aligned \int_{U_{P}} \ \psi_{Y} \circ \theta^{-1}(xn) \ dn \ &= \ \int_{N_{Q}} \ \psi_{Y} \circ \theta^{-1}(xg^{-1}ug) \ du \\ &= \ \int_{N_{Q}} \ \psi_{Y} \circ \theta^{-1}(xk_{g}^{-1} v_{g}^{-1} u v_{g} k_{g}) \ du \\ &= \ c \ \int_{N_{Q}} \ \psi_{Y} \circ \theta^{-1}(xk_{g}^{-1} u k_{g}) \ du \ \ {\text{\rm{(suitable constant $c$)}}}\\ &= \ c \ \int_{k_{g}^{-1}N_{Q}k_{g}} \ \psi_{Y} \circ \theta^{-1}(x u ) \ du \ \\ \endaligned $$ \noindent From the last line, since $\psi_{Y} \circ \theta^{-1}$ has support in ${\mathcal K}_k$, to prove the integral vanishes, it suffices to do so when $x \in {\mathcal K}_k$. In this situation the integral vanishes by part (ii). Thus $\psi_{Y} \circ \theta^{-1}$ is a cusp form on $\SL{M}({{\mathbb{Q}}}_p)$. \medskip In regards to part (iv), set $\chi = ( \psi_{Y} \circ \theta^{-1} )$, and let $V_{\chi}$ be the representation of $\mathcal G = \SL{M}({{\mathbb{Q}}}_p)$ generated by the left translates of the cusp form $\chi$. It is the induced representation ${\text{\rm{c-Ind}}}^{\mathcal G}_{{\mathcal K}_{k}} ( \chi )$. As a representation inside the unitary representation $L^{2}( \mathcal G )$, $V_{\chi}$ is unitarizable and therefore completely reducible. The Hecke algebra $$ \aligned {\mathcal H}(\mathcal G //{{\mathcal K}_{k}} \, , \, \chi^{-1} ) \ :&= \ \{ \ f \in C^{\infty}_{0}( \mathcal G ) \ | \ \forall \, m_a,m_b \in {\mathcal K}_{k} {\text{\rm{ and }}} g \in \mathcal G \ , \ \\ &\qquad \quad f(m_a \, g \, m_b) \, = \, \chi (m_a)^{-1} \, f(g) \, \chi (m_b)^{-1} \ \} \ , \endaligned $$ is the endomorphism algebra of the unitary representation $V_{\chi}$. It follows from the fact that $\chi$ is a cusp form and the Cartan decomposition $\mathcal G = {\mathcal K} A^{+} {\mathcal K}$ ($A$ the subgroup of diagonal matrices) that there is a sufficiently large compact subset $C \subset \mathcal G$ so that the support of any $f \in {\mathcal H}(\mathcal G //{{\mathcal K}_{k}} \, , \, \chi^{-1} )$ is contained in $C$. The dimension of ${\mathcal H}(\mathcal G //{{\mathcal K}_{k}} \, , \, \chi^{-1} )$ is therefore finite (see also \cite{harish-chandra} page 28), and consequently the (unitarizable) representation $V_{\chi}$ has finite length. Whence, $V_{\chi}$ is a finite direct sum of irreducible representations. By Frobenius reciprocity, any irreducible subrepresentation $\sigma$ of ${\text{\rm{c-Ind}}}^{\mathcal G}_{{\mathcal K}_{k}} ( \chi )$ contains the character ${\chi}$.
Let $\langle , \rangle$ be the unitary form $\sigma$ inherits as a subrepresentation of $L^{2} (\mathcal G )$. Write $\lambda$ for the left translation action, and take a non-zero $v$ so that $$ \forall \ m \in {\mathcal K}_{k} : \quad \lambda (m) v \ = \ \chi (m) v \ . $$ Consider the (non-zero) matrix coefficient $L_{v}$ defined as $L_{v}(g) = \langle \, \lambda (g) v \, , \, v \rangle$. We claim, as a consequence of $\chi$ being a cusp form, that $L_{v}$ is a cusp form too. Indeed, for any $X \in \mathcal G$, and ${\mathcal U}$ the unipotent radical of any proper parabolic group ${\mathcal P} \subset \mathcal G$, we have: $$ \aligned \int_{\mathcal U \cap \mathcal K_{k}} L_{v}(Xu) \ du &= \ \int_{\mathcal U \cap \mathcal K_{k}} \ \langle \, \lambda (X) \, \lambda (u) v \, , \, v \, \rangle \ du \\ &= \ \langle \, \lambda (X) ( \, \int_{\mathcal U \cap \mathcal K_{k}} \lambda (u) v \, du \, ) \, , \, v \, \rangle \ . \\ \endaligned $$ Since $( \int_{\mathcal U \cap \mathcal K_{k}} \chi (u) \, du )$ is zero (by the cusp form property of $\chi$), the integral $\int_{\mathcal U \cap \mathcal K_{k}} \lambda (u) v \ du = ( \int_{\mathcal U \cap \mathcal K_{k}} \chi (u) \, du ) \, v$ is zero. So, $\sigma$ is cuspidal. Also, by \cite{MoyPr1}, the depth of $\sigma$ is $k$, and therefore (by \cite{MoyPr1}, Theorem 5.2) $\sigma$ cannot contain a non-zero ${\mathcal K}_{k}$-fixed vector. \end{proof} \medskip A similar result can be shown with the group ${\mathcal K}_{p,k}$ replaced by the group ${\mathcal I}_{p,k^{+}}$. We explain it in the next subsection, where we treat the more general setting of a split simple group. \bigskip \subsection{The case of a split simple group}\label{case-split-simple} \ \smallskip Suppose $G$ is a split simple algebraic group defined over ${\mathbb{Z}}_{p}$. Let $B$ be a Borel subgroup of $G$, and $A$ a maximal torus contained in $B$. Set \begin{equation} \aligned {\mathscr G} :&= G({\mathbb Q}_p) {\text{\rm{ \ and \ }}} {\mathcal K} := G({\mathbb Z}_p) \ {\text{\rm{a maximal compact subgroup of ${\mathscr G}$.}}} \endaligned \end{equation} \noindent We can choose the torus $A \subset B$ so that the point in the building fixed by ${\mathcal K}$ lies in the apartment of $A$. Then, $B$ determines an Iwahori subgroup ${\mathcal I} \subset {\mathcal K}$. \medskip \smallskip Let $\mathcal B ({\mathscr G})$ be the Bruhat-Tits building of ${\mathscr G}$. Let $C = \mathcal B ({\mathscr G})^{\mathcal I}$ be the fixed points of the Iwahori subgroup ${\mathcal I}$. It is an alcove in $\mathcal B ({\mathscr G})$. Take $x_{0}$ to be the barycenter of $C$, and let $\ell$ be the rank of $G$. Set \begin{equation}\label{moy-prasad-0} k' \ := \ k+ ({\frac{1}{\ell+1}}) \quad {\text{\rm{and}}} \quad k'' \ := \ k+ ({\frac{2}{\ell+1}}) \end{equation} \noindent Then, in terms of the Moy-Prasad filtration subgroups, define \begin{equation}\label{moy-prasad-1} {\mathcal I}_{k} \ := \ {\mathscr G}_{{x_{0}},k} \quad {\text{\rm{and}}} \quad {\mathcal I}_{k^{+}} \ := \ {\mathscr G}_{{x_{0}},k'} \ . \end{equation} \smallskip \noindent For ${\mathscr G} = SL_{M}({\mathbb{Q}}_{p})$, the group ${\mathcal I}_{k^{+}}$ is identical with the group defined in \eqref{iwahori-k} by the same symbol.
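\medskip As an illustration, we unwind these definitions in the simplest case; the explicit description below follows directly from \eqref{iwahori-k} and \eqref{moy-prasad-0}, and is recorded only for orientation. For ${\mathscr G} = \SL{2}({\mathbb{Q}}_{p})$ we have $\ell = 1$, so that $k' = k + \tfrac{1}{2}$ and $k'' = k+1$, and, for $k \ge 1$, $$ {\mathcal I}_{k^{+}} \ = \ {\mathcal I}_{p,k^{+}} \ = \ \left\{ \, \begin{pmatrix} a & b \\ c & d \end{pmatrix} \in \SL{2}({\mathbb{Z}}_{p}) \ \Big| \ p^{k-1} \, | \, b \, , \ \ p^{k} \, | \, c \, , \ \ p^{k} \, | \, (a-1) \, , \ \ p^{k} \, | \, (d-1) \, \right\} \ . $$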
\smallskip Let $\Delta$ and $\Delta^{\text{\rm{aff}}}$ be the simple roots and simple affine roots with respect to the Borel subgroup $B$ and the Iwahori subgroup ${\mathcal I}$, respectively. We recall that every $\alpha \in \Delta$ is the gradient part of a unique root $\psi \in \Delta^{\text{\rm{aff}}}$. In this way, we view $\Delta$ as a subset of $\Delta^{\text{\rm{aff}}}$. We recall \begin{equation} {\text{\rm{ the quotient }}} \ \ {\mathscr G}_{{x_{0}},k'} / {\mathscr G}_{{x_{0}},k''} \ \ {\text{\rm{ is canonically }}} {\underset {\psi \in \Delta^{\text{\rm{aff}}}}{\prod}} U_{(\psi+k)} / U_{(\psi+k+1 )} \ . \end{equation} \noindent We further recall that a character $\chi$ of ${\mathscr G}_{{x_{0}},k'} / {\mathscr G}_{{x_{0}},k''}$ is non-degenerate if the restriction of $\chi$ to any $U_{(\psi+k)}$ is non-trivial. In particular, it is clear there exists a non-degenerate character $\chi$ of ${\mathscr G}_{{x_{0}},k'} / {\mathscr G}_{{x_{0}},k''}$ for any integer $k \ge 0$. For convenience, we identify a function on ${\mathscr G}_{{x_{0}},k'} / {\mathscr G}_{{x_{0}},k''}$ with its inflation to the group ${\mathscr G}_{{x_{0}},k'}$. \begin{Lem} \label{iwahori-lemma} Let $p$ be a prime such that $G$ is unramified over ${\mathbb{Q}}_{p}$. Let ${\mathcal I}_{k^{+}}$ ($k \ge 0$) denote the subgroup in \eqref{moy-prasad-1}, and let $\chi$ be a non-degenerate character of ${\mathscr G}_{{x_{0}},k'} / {\mathscr G}_{{x_{0}},k''}$. Then, \smallskip \begin{itemize} \item[(i)] The inflation of $\chi$ to ${\mathscr G}_{{x_{0}},k'}$, when extended to ${\mathscr G}$ by zero outside ${\mathscr G}_{{x_{0}},k'}$, is a cusp form of ${\mathscr G}$. \smallskip \item[(ii)] For each $k \ge 0$, there exists an irreducible supercuspidal representation $(\rho_{p}, W_{p})$ which has a non-zero ${\mathcal I}_{(k+1)^{+}}$--invariant vector but no non-zero ${\mathcal I}_{k^{+}}$--invariant vector. \end{itemize} \end{Lem} \begin{proof} To prove part (i), suppose $x \in {\mathscr G}=G({\mathbb Q}_p)$, and ${\mathscr U}=U({\mathbb Q}_p)$ is the unipotent radical of a proper parabolic subgroup ${\mathscr P}=P({\mathbb Q}_p)$ of ${\mathscr G}$. We need to show \begin{equation}\label{local-cusp-from-1} \int_{{\mathscr U}} \chi (xu) \ du \ = \ 0 \, . \end{equation} \noindent Take $Q \subset G$ to be a ${\mathbb Z}_p$-defined parabolic subgroup so that ${\mathscr Q}= Q({\mathbb Q}_p)$ is ${\mathscr G}$-conjugate to ${\mathscr P}$, i.e., ${\mathscr P} = g{\mathscr Q}g^{-1}$, with $g \in {\mathscr G}$. Let $V$ and ${\mathscr V}$ denote the unipotent radical of $Q$ and its group of ${\mathbb Q}_p$-rational points, respectively. We use the Iwasawa decomposition ${\mathscr G} = {\mathcal K} {\mathscr Q}$ to write $g$ as $g = k h$ with $k \in {\mathcal K}$ and $h \in {\mathscr Q}$. Then, \begin{equation} \aligned \int_{{\mathscr U}} \chi(xu) \ du \ &= \ \int_{{\mathscr V}} \chi( \, x \, g v g^{-1} \, ) \ dv \ \ {\text{\rm{($u=gvg^{-1}$)}}}\\ &= \ \int_{{\mathscr V}} \chi( \, x \, k h v h^{-1} k^{-1} \, ) \ dv \\ &= \ c \, \int_{{\mathscr V}} \chi( \, x \, k v k^{-1} \, ) \ dv \ \ {\text{\rm{(for a suitable constant $c$)}}} \\ &= \ c \, \int_{k{\mathscr V}k^{-1}} \chi( \, x \, n \, ) \ dn \ . \endaligned \end{equation} \noindent In particular, we can reduce to the case where the parabolic $P$ is a ${\mathbb Z}_p$-defined subgroup of ${\mathscr G}$. But then $P$ is ${\mathcal K}$-conjugate to a standard parabolic subgroup of ${\mathscr G}$ with respect to the maximal split torus $A$. So, we can and do assume $P$ is a standard parabolic.
\smallskip Observe that since ${\text{\rm{supp}}}(\chi) = {\mathscr G}_{{x_0},k'}$, to show \eqref{local-cusp-from-1}, it suffices to take $x \in {\mathscr G}_{{x_0},k'}$. Then, $xu \in {\mathscr G}_{{x_0},k'}$ if and only if $u \in {\mathscr G}_{{x_0},k'} \, \cap \, {\mathscr U}$, so \begin{equation}\label{integral-iwahori-1} \aligned \int_{{\mathscr U}} \chi (xu) \ du \ &= \ \int_{{\mathscr G}_{{x_0},k'} \ \cap \ {\mathscr U}} \chi (xu) \ du \\ \endaligned \end{equation} \noindent The intersection ${{\mathscr G}_{{x_0},k'} \, \cap \, {\mathscr U}}$ is a product of affine root subgroups. Combining this with the fact that $\chi$ is a character, we see that the integral over ${{\mathscr G}_{{x_0},k'} \ \cap \ {\mathscr U}}$ is a product of integrals over the affine root subgroups. Since $U$ is the radical of a proper standard parabolic subgroup, at least one of the $A$-roots in $U$ is the gradient of an affine root $\psi \in \Delta^{\text{\rm{aff}}}$. But then \begin{equation}\label{integral-iwahori-2} \aligned \int_{{\mathscr G}_{{x_0},k'} \ \cap \ {\mathscr U}_{\psi}} \chi (xu) \ du \ = \ 0 \\ \endaligned \end{equation} \noindent since $\chi$ is a non-trivial character of ${\mathscr U}_{(\psi +k)} = {\mathscr G}_{{x_0},k'} \ \cap \ {\mathscr U}_{\psi}$. Thus, $\chi$ is a cusp form. This completes the proof of part (i). \medskip To prove part (ii), let $V_{\chi}$ denote the vector space spanned by left translations of $\chi$. That $\chi$ is a cusp form of ${\mathscr G}$ means $V_{\chi}$, as a representation of ${\mathscr G}$, is a direct sum of finitely many irreducible cuspidal representations, and by Frobenius reciprocity each irreducible cuspidal representation $\sigma$ which appears contains, upon restriction to ${\mathscr G}_{x_{0},k'}$, the character $\chi$. In particular, $\sigma$ contains a non-zero ${\mathscr G}_{x_{0},k''}$-fixed vector, whence a non-zero ${\mathcal I}_{(k+1)^{+}}$-fixed vector. The fact that $\sigma$ contains the non-degenerate character $\chi$ and that $\sigma$ is irreducible means it cannot have a ${\mathscr G}_{x_{0},k'}={\mathcal I}_{k^{+}}$-fixed vector. So (ii) holds. \end{proof} \bigskip \section{Examples of families ${{\mathcal{F}}}$ of open compact subgroups satisfying assumptions \eqref{assumptions}}\label{sec-3} \medskip We produce examples of finite sets ${{\mathcal{F}}}$ of open compact subgroups of $G({{\mathbb{A}}}_f)$ satisfying the assumptions \eqref{assumptions}. \medskip Suppose $G = \SL{M}$. \smallskip \begin{itemize} \item[$\bullet$] Fix a positive integer $D$. For each positive divisor $d$ of $D$, set $K_d$ as in \eqref{open-compact-SL}. Then, as a consequence of Proposition \ref{SL-cusp-forms}, the finite family $$ {{\mathcal{F}}} \ = \ \{ \, K_d \ \big| \ \ d \, | \, D \ \} $$ satisfies the assumptions in \eqref{assumptions}. Whence, Theorem \ref{intr-thm} applies to this family. As already mentioned in Proposition \ref{gamma-one-2}, $K_d \cap \SL{M}({{\mathbb{Q}}})$ is the principal congruence subgroup $\Gamma (d)$ of \eqref{principal-congruence}. \smallskip \item[$\bullet$] Fix a positive integer $D$. For each positive divisor $d$ of $D$, set $I_d$ as in \eqref{open-compact-SL}. Then, as a consequence of Lemma \ref{iwahori-lemma}, the finite family $$ {{\mathcal{F}}} \ = \ \{ \, I_d \ \big| \ \ d \, | \, D \ \} $$ satisfies the assumptions in \eqref{assumptions}. Whence, Theorem \ref{intr-thm} applies to this family. Here, $I_d \cap \SL{M}({{\mathbb{Q}}})$ is the subgroup $\Gamma_1 (d)$ of \eqref{iwahori-congruence}.
\end{itemize} Recall that we have been assuming $G$ is simply connected, absolutely almost simple over ${\mathbb{Q}}$, and that $G_\infty$ is not compact. Let $S_{f} = \{ p_1, \dots , p_r, p_{r+1}, \dots , p_{r+s} \}$ be a finite set of primes satisfying the following: \smallskip \begin{itemize} \item[(i)] For $v \notin S_{f}$, the group $G$ is unramified at $v$. \smallskip \item[(ii)] For $v \in S_{f}$, we consider two cases: \begin{itemize} \item[(ii.1)] For $1 \le i \le r$, we are given open compact subgroups ${\mathcal L}_{p_i} \subset G({{\mathbb{Q}}}_{p_i})$. \smallskip \item[(ii.2)] For $(r+1) \le i \le (r+s)$, the group $G$ is unramified at $p_i$. For each $p_i$, take $C_i$ to be an alcove in the Bruhat-Tits building of $G({{\mathbb{Q}}}_{p_i})$ and $x({C_i})$ the barycenter of $C_i$. \end{itemize} \end{itemize} \smallskip \noindent Fix exponents $e_{r+1}, \dots , e_{r+s}$, and set $$ D \ = \ p_{r+1}^{e_{r+1}} \cdots p_{r+s}^{e_{r+s}} \ . $$ For $d = p_{r+1}^{\alpha_{r+1}} \cdots p_{r+s}^{\alpha_{r+s}}$ a divisor of $D$, set \begin{equation}\label{open-compact-3} L_{d} \ := \ {\underset {i=1} {\overset {r} \prod}} \ {\mathcal L}_{p_i} \ \ {\underset {i=(r+1)} {\overset {(r+s)} \prod} } \ G({{\mathbb{Q}}_{p_i}})_{x({C_i}),\alpha_{i}'} \ \ {\underset {p \notin S_f} {\prod}} \ G({{\mathbb{Z}}}_p) \ . \end{equation} Here, $\alpha_{i}'$ is defined as in \eqref{moy-prasad-0}. Then, ${\mathcal{F}} = \{ \ L_d \ \big| \ \ d \, | \, D \ \}$ is a family of open compact subgroups of $G({{\mathbb{A}}}_f)$ satisfying the assumptions \eqref{assumptions}. Whence, Theorem \ref{intr-thm} applies to this family. \smallskip We note that if we had selected a different choice of alcoves $C_{i}^{\bullet}$, then the groups $G({{\mathbb{Q}}_{p_i}})_{x({C_i}),\alpha_{i}'}$ and $G({{\mathbb{Q}}_{p_i}})_{x({C_{i}^{\bullet}}),\alpha_{i}'}$ are conjugate in $G({{\mathbb{Q}}_{p_i}})$, say $g_{p,i} \, G({{\mathbb{Q}}_{p_i}})_{x({C_i}),\alpha_{i}'} g_{p,i}^{-1} = G({{\mathbb{Q}}_{p_i}})_{x({C_{i}^{\bullet}}),\alpha_{i}'}$. Denote by $L_{d}^{\bullet}$ the open compact subgroup of $G({{\mathbb{A}}}_f)$ obtained in \eqref{open-compact-3} by replacing $G({{\mathbb{Q}}_{p_i}})_{x({C_{i}}),\alpha_{i}'}$ with $G({{\mathbb{Q}}_{p_i}})_{x({C_{i}^{\bullet}}),\alpha_{i}'}$. In $G({{\mathbb{A}}}_f)$ the element $$ g = \ {\underset {i=1} {\overset {r} \prod}} \ 1_{G({{\mathbb{Q}}_{p_i}})} \ \ {\underset {i=(r+1)} {\overset {(r+s)} \prod} } \ g_{p,i} \ \ {\underset {p \notin S_f} {\prod}} \ 1_{G({{\mathbb{Q}}}_p)} $$ conjugates $L_{d}$ to $L_{d}^{\bullet}$. Since $G$ satisfies strong approximation, we have $G({{\mathbb{A}}}_f) = G({{\mathbb{Q}}}) L_{d}$, so $g = g_{{\mathbb{Q}}} h_{L_{d}}$, with $g_{{\mathbb{Q}}} \in G({{\mathbb{Q}}})$ and $h_{L_{d}} \in L_{d}$. It follows that $L_{d}$ and $L_{d}^{\bullet}$ are conjugate by the element $g_{{\mathbb{Q}}}$. In particular, the intersections $$ L_{d} \cap G({{\mathbb{Q}}}) \ {\text{\rm{\ and \ }}} \ L_{d}^{\bullet} \cap G({{\mathbb{Q}}}) $$ are conjugate by the element $g_{{\mathbb{Q}}}$ in $G({{\mathbb{Q}}})$. \bigskip \section{Some Additional Results for $SL_{M}$}\label{sec-4} In this section we let $G=SL_M$. Set $G_{\infty} := SL_{M}({\mathbb{R}} )$, and $K_{\infty} := \mathrm{SO} (M)$. \medskip We prove some simple properties of the intersection of the principal congruence subgroups $\Gamma (m)$ with $K_\infty$. \begin{Lem}\label{nlem-2} Let $m\ge 1$. If $g\in \Gamma(m)$ is a diagonal element, then $g_{ii}\in \{\pm 1\}$ for all $i=1, \ldots, M$. \end{Lem} \begin{proof} Since the $g_{ii}$ are integers and $g_{11}g_{22}\cdots g_{MM}=1$, the claim follows.
\end{proof} \vskip .2in \begin{Lem}\label{nlem-3} Let $m\ge 3$. If $g\in \Gamma(m)$ is a diagonal element, then $g=I_{M \times M}$. \end{Lem} \begin{proof} By Lemma \ref{nlem-2}, $g_{ii}\in \{\pm 1\}$ for all $i=1, \ldots, M$. Since $g_{ii}\equiv 1 \ (mod \ m)$, we obtain $m| (g_{ii} -1)$; if $g_{ii}=-1$, this forces $m\,|\,2$, which is impossible for $m\ge 3$. Hence $g_{ii}= 1$ for all $i=1, \ldots, M$. \end{proof} \vskip .2in \begin{Lem}\label{nlem-4} Let $m\ge 3$. Then $\Gamma(m)\cap K_\infty$ is the identity subgroup. \end{Lem} \begin{proof} Let $g=(g_{ij})\in \Gamma(m)\cap K_\infty$. Then, by definition of $\Gamma(m)$, $m| g_{ij}$ for $i\neq j$. But, since $g\in K_\infty=\mathrm{SO}(M)$ has orthonormal rows, $|g_{ij}|\le 1< m$. Hence $g_{ij}=0$ for $i\neq j$. Thus, $g$ is a diagonal element of $\Gamma(m)$. Hence, Lemma \ref{nlem-3} implies the claim. \end{proof} \bigskip We now use Proposition \ref{SL-cusp-forms}. \begin{Lem}\label{nlem-1} Suppose $i>1$ is an integer. Consider the open compact subgroups ${\mathcal K}_{p,i-1}$ and ${\mathcal K}_{p,i}$ in $SL_{M}({\mathbb{Q}}_p )$ (notation as in \eqref{iwahori-k}). Then, there exists a function $f_p$ on $SL_{M}({\mathbb{Q}}_p )$ so that: \begin{itemize} \item[(i)] $f_p$ is a cusp form with support ${\mathcal K}_{p,i-1}$, \item[(ii)] $f_p$ is right ${\mathcal K}_{p,i}$-invariant, \item[(iii)] $f_p(1) \neq 0$. \end{itemize} \end{Lem} \begin{proof} Apply Proposition \ref{SL-cusp-forms}. \end{proof} \bigskip We now use Lemma \ref{nlem-1} to considerably improve Lemma \ref{lem-2}. \smallskip \begin{Lem}\label{nlem-5} Suppose $n>1$ is an integer. Let $T$ denote the set of primes dividing $n$, and suppose \begin{equation}\label{condition-3p} n \, \ge \, 3 \, {\underset {p\in T} \prod} \, p \ . \end{equation} Then, for any $\delta\in \hat{K}_\infty$, there exists $f_\infty\in C_c^\infty(G_\infty)$ so that the following holds: \begin{itemize} \item[(i)] $E_\delta(f_\infty)=f_\infty$. \item[(ii)] For $$ f \ =\ f_\infty \, \otimes_p \, f_p $$ \noindent where $f_p$ is as in Lemma \ref{nlem-1} (applied with $i=\nu_p(n)$) when $p \in T$, and $f_{p}=char_{SL_{M}({{\mathbb{Z}}_{p})}}$ when $p\notin T$, the Poincar\' e series $P(f)$ and its restriction to $G_\infty$, which is a Poincar\' e series for $\Gamma(n)$, are non--zero. \item[(iii)] $E_\delta(P(f))=P(f)$, $E_\delta(P(f)|_{G_\infty})=P(f)|_{G_\infty}$, and $P(f)$ is right--invariant under $K_{n}$. \item[(iv)] The support of $P(f)|_{G_\infty}$ is contained in a set of the form $\Gamma(n)\cdot C$, where $C$ is a compact set which is right--invariant under $K_\infty$, and $\Gamma(n)\cdot C$ is not the whole of $G_\infty$. \item[(v)] $P(f)$ is cuspidal and $P(f)|_{G_\infty}$ is $\Gamma(n)$--cuspidal. \end{itemize} \end{Lem} \begin{proof} We use Lemmas \ref{lem-1} and \ref{lem-2}, together with the notation introduced in and before Lemma \ref{nlem-1}. This choice meets all assumptions of Lemma \ref{lem-2} (with $L=K_{n}$). We let $S=T\cup \{\infty\}$. We need to study the intersection (\ref{e-3}). In our case it is given by \begin{equation} \label{n-2} \Gamma_S \cap \left[K_\infty \times \prod_{p\in T} {\mathrm{supp}}{\ (f_p)}\right]. \end{equation} Thanks to Lemma \ref{nlem-1}, this is a subset of $$ \Gamma_S \cap \left[K_\infty \times \prod_{p\in T} {\mathcal K}_{p,\nu_p(n)-1}\right]. $$ But projecting down to the first factor, this intersection becomes $$ K_\infty\cap \Gamma(n/\prod_{p\in T}p). $$ \noindent By Lemma \ref{nlem-4} and our assumption $n \, \ge \, 3 {\underset {p\in T} \prod} \, p$ (so that $n/\prod_{p\in T}p \ge 3$), it is trivial. Whence, (\ref{n-2}) consists of the identity only.
In particular, in (\ref{e-3}), we have $K_\infty\cap \Gamma=\{1\}$, $l=1$, $\gamma_1=1$, and $c_1\neq 0$. We remark that $\Gamma=\Gamma(n)$ (see (\ref{e-20})). \smallskip Next, by Lemma \ref{lem-2}, we need to study the map (\ref{e-4}). Thanks to the above computations, this map is $\alpha\mapsto c_1\cdot \alpha$. Hence, it is essentially the identity. Now, (i)--(iv) of the lemma follow from (i)--(iv) of Lemma \ref{lem-2} for any $K_\infty$--type $\delta$. Finally, (v) follows from Lemma \ref{lem-1} and (\cite{Muic1}, Proposition 5.3). \end{proof} \bigskip We now prove the main result of this section. It is analogous to, and generalizes, the main result of \cite{Muic3} (see \cite{Muic3}, Theorem 0-1). \bigskip \begin{Thm}\label{nthm} Suppose $n >1$ is an integer satisfying the condition \eqref{condition-3p}, i.e., $n \ge 3 {\underset {p \in T} \prod} p$, where $T$ is the set of primes dividing $n$. Then, for any $\delta\in \hat{K}_\infty$, the orthogonal complement of $$ \sum_{\substack{m| n\\ m< n }} L^2_{cusp}(\Gamma(m)\backslash G_\infty) $$ in $L^2_{cusp}(\Gamma(n)\backslash G_\infty)$ contains a direct sum of infinitely many inequivalent irreducible unitary representations of $G_\infty$ all containing $\delta$. \end{Thm} \begin{proof} The (finite) family of open compact subgroups $$ {\mathcal F} = \{ \ K_{m} \ | \ \ 1 \le m\le n, \ m|n \ \} $$ meets all the assumptions \eqref{assumptions}, with the group $K_{n}$ contained in all the other $K_{m}\in {\mathcal F}$ (see Section \ref{sec-3}). Now, the proof is the same as the proof of Theorem \ref{intr-thm}. We leave the details to the reader. \end{proof} \vskip .2in \begin{Cor}\label{ncor-1} Suppose $n >1$ satisfies the condition \eqref{condition-3p}. Then the orthogonal complement of $$ \sum_{\substack{m| n\\ m< n }} L^2_{cusp}(\Gamma(m)\backslash G_\infty) $$ in $L^2_{cusp}(\Gamma(n)\backslash G_\infty)$ contains a direct sum of infinitely many inequivalent irreducible unitary $K_\infty$--spherical representations of $G_\infty$. \end{Cor} \vskip .2in \begin{Cor}\label{ncor-2} Suppose $n >1$ satisfies the condition \eqref{condition-3p}. Then, for every $\delta\in \hat{K}_\infty$, the orthogonal complement of $$ \sum_{\substack{m| n\\ m< n }} L^2_{cusp}(\Gamma(m)\backslash G_\infty) $$ in $L^2_{cusp}(\Gamma(n)\backslash G_\infty)$ contains a direct sum of infinitely many inequivalent irreducible unitary representations of $G_\infty$ all containing $\delta$ which are not in the discrete series or in the limits of discrete series for $G_\infty$. \end{Cor} \begin{proof} As in (\cite{Muic3}, Proposition 4.2). \end{proof} \vskip .2in Let $P_\infty=M_\infty A_\infty N_\infty$ be the Langlands decomposition of a minimal parabolic subgroup of $G_\infty$. We let ${\mathfrak{a}}_\infty$ be the real Lie algebra of $A_\infty$ and ${\mathfrak{a}}^*_\infty$ its complex dual. We use Vogan's theory of minimal $K_\infty$--types (\cite{vog}, \cite{vog-1}). Any $\epsilon\in \hat{M}_\infty$ is fine (\cite{vog-1}, Definition 4.3.8). \smallskip Let $\epsilon\in \hat{M}_\infty$. Following (\cite{vog-1}, Definition 4.3.15), we let $A(\epsilon)$ be the set of $K_\infty$--types $\delta$ such that $\delta$ is fine (\cite{vog-1}, Definition 4.3.9) and $\epsilon$ occurs in $\delta|_{M_\infty}$.
Applying (\cite{vog-1}, Theorem 4.3.16), we obtain that $A(\epsilon)$ is not empty and for $\delta \in A(\epsilon)$, we have the following: \begin{equation}\label{K-ind} \delta|_{M_\infty}= \oplus_{\epsilon'\in \{w(\epsilon); \ w\in W\}} \epsilon', \end{equation} where $W=N_{K_\infty}(A_\infty)/M_\infty$ is the Weyl group of $A_\infty$ in $G_\infty$. Since restriction to $K_\infty$ gives ${\mathrm{Ind}}_{M_\infty A_\infty N_\infty}^{G_\infty}(\epsilon \otimes \exp{\nu(\ )})\simeq {\mathrm{Ind}}_{M_\infty}^{K_\infty}(\epsilon)$ as $K_\infty$--representations, by Frobenius reciprocity and (\ref{K-ind}) we see that for every $\nu \in {\mathfrak{a}}^*_\infty$ there exists a unique irreducible subquotient $J_{\epsilon\otimes \nu}(\delta)$ of ${\mathrm{Ind}}_{M_\infty A_\infty N_\infty}^{G_\infty}(\epsilon \otimes \exp{\nu(\ )})$ containing the $K_\infty$--type $\delta$. \vskip .2in One important example is the case $\epsilon={\mathbf{1}}_{M_\infty}$. Then $\delta={\mathbf{1}}_{K_\infty}\in A({\mathbf{1}}_{M_\infty})$, and $J_{\epsilon\otimes \nu}(\delta)$ is the unique $K_\infty$--spherical irreducible subquotient of ${\mathrm{Ind}}_{M_\infty A_\infty N_\infty }^{G_\infty}(\epsilon \otimes \exp{\nu(\ )})$. \vskip .2in \begin{Cor} \label{ncor-3} Suppose $n >1$ satisfies the condition \eqref{condition-3p}. Let $\epsilon\in \hat{M}_\infty$. Then, for every $\delta \in A(\epsilon)$, there exist infinitely many $\nu \in {\mathfrak{a}}^*_\infty$ such that $J_{\epsilon\otimes \nu}(\delta)$ appears in the orthogonal complement of $$ \sum_{\substack{m| n\\ m< n }} L^2_{cusp}(\Gamma(m)\backslash G_\infty) $$ in $L^2_{cusp}(\Gamma(n)\backslash G_\infty)$. \end{Cor} \begin{proof} As in (\cite{Muic3}, Theorem 4.8). \end{proof} \bigskip \bigskip
\section{\textbf{Introduction}} \label{intro} \subsection{\textbf{Robust Denoising in Deep Learning}} Deep learning techniques have rapidly entered the field of image processing. One of the most popular methods was the Denoising Autoencoder (DA) motivated by~\cite{vincent2008extracting}. It used the reference data to learn a compressed representation (encoder) for the dataset. One extension of DA was presented in~\cite{xie2012image}, which exploited sparsity regularization and the reconstruction loss to avoid over-fitting. Other developments, such as~\cite{zhang2017beyond}, made use of the residual network architecture to improve the quality of denoised images. In addition,~\cite{agostinelli2013adaptive} combined several sparse Denoising Autoencoders to enhance the robustness under different noise. The Generative Adversarial Network (GAN) recently gained popularity and provides a promising new approach for image denoising. GAN was proposed by~\cite{goodfellow2014generative}, and is mainly composed of two parts: the generator ($G$: generates new samples) and the discriminator ($D$: determines whether the samples are real or generated (fake)). The original GAN~(\cite{goodfellow2014generative}) aimed to minimize the Jensen-Shannon (JS) divergence between the distributions of the generated samples and the true samples, hence called JS-GAN. Various GANs were then studied, and in particular,~\cite{arjovsky2017wasserstein} proposed the Wasserstein GAN (WGAN), which replaced the JS divergence with the Wasserstein distance.~\cite{gulrajani2017improved} further improved the WGAN with a gradient penalty that stabilized the model training. For the image denoising problem, GAN could better describe the distribution of the original data by exploiting the common information of samples. Consequently, GANs were widely applied to the image denoising problem (\cite{tran2020gan,tripathi2018correction,yang2018low,chen2018image,dong2020optical}). Recently,~\cite{gao2018robust,gao2019generative} showed that a general family of GANs ($\beta$-GANs, including JS-GAN and TV-GAN) enjoyed robust reconstruction when the data sets contain outliers under Huber contamination models~(\cite{huber1992robust}). In this case, observed samples are drawn from a complex distribution, which is a mixture of a contamination distribution and the real data distribution. A particular example is provided by Cryo-Electron Microscopy (Cryo-EM) imaging, where the original noisy images are likely contaminated with outliers such as broken particles or non-particles. The main challenges of Cryo-EM image denoising are summarized in the subsequent section. \subsection{\textbf{Challenges of Cryo-EM Image Denoising}} Cryo-Electron Microscopy (Cryo-EM) has become one of the most popular techniques to resolve atomic structures. In the past, Cryo-EM was limited to large complexes or low-resolution models. Recently the development of new detector hardware has dramatically improved the resolution in Cryo-EM~(\cite{kuhlbrandt2014resolution}), which made Cryo-EM widely used in a variety of research fields. Different from X-ray crystallography~(\cite{warren1990x}), Cryo-EM had the advantage of preventing the recrystallization of inherent water and re-contamination. Also, Cryo-EM was superior to Nuclear Magnetic Resonance spectroscopy~(\cite{wuthrich1986nmr}) in solving macromolecules in the native state.
In addition, both X-ray crystallography and Nuclear Magnetic Resonance spectroscopy required large amounts of relatively pure samples, whereas Cryo-EM required much fewer samples~(\cite{bai2015cryo}). For this celebrated development of Cryo-EM for the high-resolution structure determination of biomolecules in solution, the Nobel Prize in Chemistry in 2017 was awarded to three pioneers in this field~(\cite{shen20182017}). \begin{figure}[htbp] \centering \includegraphics[width=6in]{imgs/exper_image.PNG} \caption{\footnotesize (a) a noisy Cryo-EM image; (b) a reference image \label{fig:demo}} \end{figure} However, processing raw Cryo-EM images remains a computational challenge, due to heterogeneity in molecular conformations and high noise. Macromolecules in natural conditions are usually heterogeneous, i.e., multiple metastable structures might coexist in the experimental samples (\cite{frank2006three,scheres2016processing}). Such conformational heterogeneity adds extra difficulty to the structural reconstruction, as we need to assign each 2D image not only to the correct projection angle but also to its corresponding conformation. This imposes a computational challenge: one needs to denoise the Cryo-EM images without losing the key features of their corresponding conformations. Moreover, in the process of generating Cryo-EM images, one images samples in frozen condition with the electron microscope. Thus there are two types of noise: one is from the ice, and the other is from the electron microscope. Both contribute significantly to the high noise in Cryo-EM images and make the detection of particle structures difficult (Fig. \ref{fig:demo} shows a typical noisy Cryo-EM image with its reference image; the particle is totally non-identifiable to human eyes). In extreme cases, some experimental images even do not contain any particles, rendering particle picking difficult, either manually or automatically~(\cite{wang2016deeppicker}). How to achieve robust denoising against such kind of contamination thus becomes a critical problem. Therefore, it is a great challenge to develop robust denoising methods for Cryo-EM images to reconstruct heterogeneous biomolecular structures. There is a plethora of denoising methods developed in applied mathematics and machine learning that could be applied to Cryo-EM image denoising. Most of them in Cryo-EM are based on unsupervised learning, which does not need any reference image data for training.~\cite{wang2013zernike} proposed a filtering method based on non-local means, which made use of the rotational symmetry of some biological molecules. Also,~\cite{wei2010optimized} designed the adaptive non-local filter, which made use of a wide range of pixels to estimate the denoised pixel values. Besides,~\cite{xian2018data} compared a transform domain filtering method, BM3D (\cite{dabov2007image}), and a dictionary learning method, KSVD (\cite{aharon2006k}), on the denoising problem in Cryo-EM. However, all of these did not work well in low Signal-to-Noise Ratio (SNR) situations. In addition, Covariance Wiener Filtering (CWF) (\cite{bhamre2016denoising}) was proposed for image denoising. However, CWF needed a large sample size in order to estimate the covariance matrix correctly, although it had an attractive denoising effect. Therefore, a robust denoising method for Cryo-EM images was needed.
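To make the notion of a low SNR concrete, the following minimal Python sketch (our illustration only: the helper name \texttt{add\_noise\_at\_snr}, the box-shaped toy particle, and the additive white Gaussian noise model are simplifying assumptions, not a method from the literature reviewed above) synthesizes a noisy image at a prescribed SNR, defined here as the ratio of signal variance to noise variance:
\begin{verbatim}
import numpy as np

def add_noise_at_snr(clean, snr, rng):
    """Corrupt a clean image with additive Gaussian noise at a target SNR,
    where SNR = var(signal) / var(noise); snr = 0.1 means the noise
    variance is ten times the signal variance, a regime typical of raw
    Cryo-EM micrographs."""
    noise_std = np.sqrt(clean.var() / snr)
    return clean + rng.normal(0.0, noise_std, size=clean.shape)

rng = np.random.default_rng(0)
x = np.zeros((64, 64))
x[24:40, 24:40] = 1.0              # toy "particle"
y = add_noise_at_snr(x, snr=0.1)   # barely identifiable by eye
\end{verbatim}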
\subsection{\textbf{Outline}} In this chapter, we propose a robust denoising scheme for Cryo-EM images by exploiting joint training of both Autoencoders and a new family of GANs, the $\beta$-GANs. Our main results are summarized as follows. \begin{itemize} \item Both Autoencoder and GANs help each other for Cryo-EM denoising in low Signal-to-Noise Ratio scenarios. On the one hand, Autoencoder helps stabilize GANs during training, without which the training processes of GANs often collapse due to high noise; on the other hand, GANs help Autoencoder in denoising by sharing information in similar samples via distribution learning. For example, WGAN combined with Autoencoder often achieves state-of-the-art performance due to its ability to exploit information in similar samples for denoising. \item To achieve robustness against partial contamination of samples, one needs to choose both a robust reconstruction loss for the Autoencoder (e.g., the $\ell_1$ loss) and a robust $\beta$-GAN (e.g., the $(.5,.5)$-GAN or $(1,1)$-GAN\footnote{$\beta$-GAN has two parameters: $\alpha$ and $\beta$, written as $(\alpha, \beta)$-GAN in this chapter.}, which are provably robust against Huber contamination); these achieve competitive performance with WGANs in contamination-free scenarios, but do not deteriorate that much with data contamination. \item Numerical experiments are conducted with both a heterogeneous conformational dataset on the Thermus aquaticus RNA Polymerase (RNAP) and a homogeneous dataset on the Plasmodium falciparum 80S ribosome dataset (EMPIAR-10028). The experiments on those datasets show the validity of the proposed methodology and suggest that while WGAN, $(.5,.5)$-GAN, and $(1,1)$-GAN combined with the $\ell_1$-Autoencoder are among the best choices in contamination-free cases, the latter two are overall the most recommended for robust denoising. \end{itemize} To achieve the goals above, this chapter provides an overview of various developments of GANs and their robustness properties. After that we focus on the application to the challenging problem of robust Cryo-EM image denoising. The chapter is structured as follows. In section ``\nameref{sec:background}", we provide a general overview of Autoencoder and GAN. In section ``\nameref{sec:noisy modeling}", we first model the traditional denoising problem based on Huber contamination and discuss $\beta$-GAN and its statistical properties. Finally, we give our algorithm based on the combination of $\beta$-GAN and Autoencoder, whose training is stable. The evaluation of the algorithm on Cryo-EM data is shown in the section ``\nameref{sec:application}". The section ``\nameref{sec:conclusion}" concludes the chapter. In addition, we present supplementary experiments in the section ``\nameref{sec:appendix}". \section{\textbf{Background: Data Representation and Mapping}} \label{sec:background} Efficient representation learning of the data distribution is crucial for many machine-learning-based models. For a set of real data samples $X$, the classical way to learn the probability distribution of these data ($P_r$) is to find $P_\theta$ by minimizing the distance between $P_r$ and $P_{\theta}$, such as the Kullback-Leibler divergence $D_{KL}(P_r||P_{\theta})$.
This means we can pass a random variable through a parametric function to generate samples following a certain distribution $P_\theta$ instead of directly estimating the unknown distribution $P_r$. By varying $\theta$, we can change this distribution and make it close to the real data distribution $P_r$. Autoencoder and GANs are two well-known methods for data representation. Autoencoder is good at learning low-dimensional representations of data with an explicit characterization of $P_\theta$, while GAN offers flexibility in defining the objective function (such as the Jensen-Shannon divergence) by directly generating samples without explicitly formulating $P_\theta$. \subsection{\textbf{Autoencoder}} Autoencoder~(\cite{baldi2012autoencoders}) is a type of neural network used to learn efficient codings of unlabeled data. It learns a representation (encoding) for a set of data, typically for dimensionality reduction, by training the network. An Autoencoder has two main parts: an encoder and a decoder. The encoder maps the input data $x$ ($\in X$) into a latent representation $z$, while the decoder maps the latent representation back to the data space: \begin{align} &z\sim \text{Enc}(x) \\ &\hat{x} \sim \text{Dec}(z). \end{align} Autoencoders are trained to minimize reconstruction errors, such as: $\mathcal{L}(x,\hat{x})=||x-\hat{x}||^2$. Various techniques have been developed to improve the data representation ability of Autoencoders, such as imposing regularization on the encoding layer: \begin{align} \mathcal{L}(x,\hat{x})+\Omega(h), \end{align} where $h$ is the mapping function of the encoding layer, and $\Omega(h)$ is the regularization term. The Autoencoder is good at data denoising and dimension reduction. \subsection{\textbf{GAN}} The Generative Adversarial Network (GAN), first proposed by Goodfellow (\cite{goodfellow2014generative}, called JS-GAN), is a class of machine learning frameworks. The goal of GAN is to learn to generate new data with the same statistics as the training set. Though the original GAN was proposed as a form of generative model for unsupervised learning, GAN has proven useful for semi-supervised learning, fully supervised learning, and reinforcement learning~(\cite{hua2019gan,sarmad2019rl,dai2017good}). Although GAN has shown great success in machine learning, the training of GAN is not easy, and is known to be slow and unstable. The problems of GAN~(\cite{bau2019seeing,arjovsky2017wasserstein}) include: \begin{itemize} \item \textit{Hard to achieve Nash equilibrium.} The alternating updates of the generator and the discriminator are not guaranteed to converge. \item \textit{Vanishing gradient.} The gradient update is slow when the discriminator is well trained. \item \textit{Mode collapse.} The generator fails to generate sufficiently representative samples. \end{itemize} \subsubsection{\textbf{JS-GAN}} JS-GAN, proposed in~\cite{goodfellow2014generative}, adopted the Jensen-Shannon (JS) divergence to measure the difference between data distributions. The mathematical expression is as follows: \begin{align} \label{JSGAN} \min\limits_{G}\max\limits_{D}\mathbb{E}_{x\sim P(X),z\sim P(Z)}[\log D(x) + \log (1-D(G(z)))], \end{align} where $G$ is a generator which maps disentangled noise $z\sim P(Z)$ (usually Gaussian $N(0,I)$) to fake image data with the purpose of confusing the discriminator $D$ with real data.
The discriminator $D$ is simply a classifier, which makes an effort to distinguish real data from the fake data generated by $G$. $P(X)$ is the input data distribution, and $P(Z)$ is the noise distribution used for data generation. GAN trains through an adversarial process by alternately updating the generator and the discriminator, where the generator is trained to succeed in fooling the discriminator. \subsubsection{\textbf{WGAN and WGANgp}} Wasserstein GAN~(\cite{arjovsky2017wasserstein}) replaced the JS divergence with the Wasserstein distance: \begin{equation} \label{eq:wgan} \min_{G}\max_{D}\mathbb{E}_{x\sim P(X),z\sim P(Z)}[D(x) - D(G(z))]. \end{equation} In practice, WGAN applied weight clipping to the neural network so that the discriminator satisfies the Lipschitz condition. Moreover,~\cite{gulrajani2017improved} proposed WGANgp based on WGAN, which introduced a gradient penalty to stabilize the training: \begin{equation} \min_{G}\max_{D}\mathbb{E}_{x\sim P(X),z\sim P(Z)}\{D(x) - D(G(z))+ \mu \mathbb{E}_{\tilde{x}} (\Vert \bigtriangledown_{\tilde{x}} D(\tilde{x})\Vert _2 - 1)^2 \}, \end{equation} where $\tilde{x}$ is uniformly sampled along straight lines connecting pairs of generated and real samples, and $\mu$ is a weighting parameter. In WGANgp, the last sigmoid layer of the discriminator network is removed. Thus $D$'s output ranges over the whole real line $\mathbb{R}$, but its gradient is kept close to $1$ to achieve the Lipschitz-$1$ condition. \section{\textbf{Robust Denoising Method}} \label{sec:noisy modeling} \subsection{\textbf{Huber Contamination Noise Model}} Let $x\in \mathbb{R}^{d_1\times d_2}$ be a clean image, often called the reference image in the sequel. The generative model of the noisy image $y\in \mathbb{R}^{d_1\times d_2}$ under the linear, weak phase approximation~(\cite{bhamre2016denoising}) can be described by \begin{equation} \label{eq:forward_model} y= a * x + \zeta, \end{equation} where $*$ denotes the convolution operation, $a$ is the point spread function of the microscope convolving with the clean image, and $\zeta$ is an additive noise, usually assumed to be Gaussian, that corrupts the image. In order to remove the noise the microscope brings, a traditional Denoising Autoencoder could be exploited to learn from examples $(y_i,x_i)_{i=1,\ldots,n}$ the inverse mapping $a^{-1}$ from the noisy image $y$ to the clean image $x$. However, this model is not sufficient in the real case. In experimental data, the contamination will significantly affect the denoising efficiency if the denoising methods continuously depend on the sample outliers. Therefore we introduce the following Huber contamination model to extend the image formation model (see Eq. \eqref{eq:forward_model}). Consider that the pair of reference image and experimental image $(x, y)$ is subject to the following mixture distribution $P_{\epsilon}$: \begin{equation} \label{eq:contam_huber} P_{\epsilon} = (1-\epsilon)P_0 + \epsilon Q, \ \ \ \epsilon\in [0,1], \end{equation} a mixture of the true distribution $P_0$ of probability $(1-\epsilon)$ and an arbitrary contamination distribution $Q$ of probability $\epsilon$. $P_0$ is characterized by Eq. \eqref{eq:forward_model}, and $Q$ accounts for the unknown contamination distribution, possibly due to ice, broken particles, and so on, such that the image sample does not contain any particle information. This is called the Huber contamination model in statistics~\cite{huber1992robust}.
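As a concrete illustration of Eqs. \eqref{eq:forward_model} and \eqref{eq:contam_huber}, the following Python sketch draws one reference--noisy pair from the contaminated mixture; the helper name \texttt{sample\_pair}, the Gaussian noise, and the pure-noise stand-in for $Q$ are placeholder assumptions for illustration only:
\begin{verbatim}
import numpy as np
from scipy.signal import fftconvolve

def sample_pair(x, psf, eps, noise_std, rng):
    """Draw (x, y) from P_eps = (1 - eps) P_0 + eps Q.

    P_0:  y = psf * x + zeta   (the forward model, Eq. forward_model)
    Q  :  a pure-noise outlier, standing in for ice or broken particles.
    """
    if rng.random() < eps:     # contamination branch Q
        y = rng.normal(0.0, noise_std, size=x.shape)
    else:                      # clean branch P_0
        y = (fftconvolve(x, psf, mode="same")
             + rng.normal(0.0, noise_std, size=x.shape))
    return x, y
\end{verbatim}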
Our purpose is, given $n$ samples $(x_i,y_i)\sim P_{\epsilon}$ ($i=1,\ldots,n$), possibly contaminated with unknown $Q$, to learn a robust inverse map $a^{-1}(y)$. \subsection{\textbf{Robust Denoising Method}} \label{sec:robust denoise} We exploit a neural network to approximate the robust inverse mapping $G_\theta: \mathbb{R}^{d_1\times d_2} \to \mathbb{R}^{d_1\times d_2}$. The neural network is parameterized by $\theta\in \Theta$. The goal is to ensure that the discrepancy between the reference image $x$ and the reconstructed image $\widehat{x}=G_\theta(y)$ is small. Such a discrepancy is usually measured by some non-negative loss function $\ell(x,\widehat{x})$. Therefore, the denoising problem minimizes the following expected loss, \begin{equation} \arg\min_{\theta\in \Theta}\mathcal{L}(\theta):=\mathbb{E}_{x,y}[\ell(x,G_{\theta}(y)) ]. \end{equation} In practice, given a set of training samples $S=\{(x_i,y_i):i=1,\ldots,n\}$, we aim to solve the following empirical loss minimization problem, \begin{equation} \arg\min_{\theta\in \Theta}\widehat{\mathcal{L}}_S(\theta):=\frac{1}{n}\sum_{i=1}^n \ell(x_i,G_{\theta}(y_i)). \end{equation} The following choices of loss functions are considered: \begin{itemize} \item ({\textbf{$\ell_2$-Autoencoder}}) $\ell(x,\widehat{x})=\frac{1}{2}\|x-\widehat{x}\|_2^2:=\frac{1}{2}\sum_{i,j} (x_{ij}-\widehat{x}_{ij})^2$, or $\mathbb{E}\ell(x,\widehat{x})=D_{KL}(p(x)\|q(\widehat{x}_\theta))$ equivalently, where $\widehat{x}_\theta\sim \mathcal{N}(x,\sigma^2 I_D)$; \item ({\textbf{$\ell_1$-Autoencoder}}) $\ell(x,\widehat{x})=\|x-\widehat{x}\|_1:=\sum_{i,j} |x_{ij}-\widehat{x}_{ij}|$, or $\mathbb{E}\ell(x,\widehat{x})=D_{KL}(p(x)\|q(\widehat{x}_\theta)) $ equivalently, where $\widehat{x}_\theta\sim \mathrm{Laplace}(x,b)$; \item ({\textbf{Wasserstein-GAN}}) $\ell(x,\widehat{x})=W_1(p(x),q_\theta(\widehat{x}))$, where $W_1$ is the 1-Wasserstein distance between distributions of $x$ and $\widehat{x}$; \item ({\textbf{$\beta$-GAN}}) $\ell(x,\widehat{x})=D(p(x)\|q_\theta(\widehat{x}))$, where $D$ is some divergence function to be discussed below between distributions of $x$ and $\widehat{x}$. \end{itemize} Both the $\ell_2$ and $\ell_1$ losses consider the reconstruction error of $G_\theta$. The $\ell_2$-loss above is equivalent to assuming that $G_\theta(y|x)$ follows a Gaussian distribution $\mathcal{N}(x,\sigma^2 I_{D})$, and the $\ell_1$-loss instead assumes a Laplacian distribution centered at $x$. As a result, the $\ell_2$-loss pushes the reconstructed image $\widehat{x}$ toward the mean by averaging out the details and thus blurs the image. On the other hand, the $\ell_1$-loss pushes $\widehat{x}$ toward the coordinate-wise median, keeping the majority of details while ignoring some large deviations. It improves the reconstructed image, and is more robust than the $\ell_2$ loss against large outliers. Although the $\ell_1$-Autoencoder has a more robust loss than the $\ell_2$, neither is sufficient to handle the contamination. In the framework of the Huber contamination model (Eq. \eqref{eq:contam_huber}), $\beta$-GAN is introduced below. \subsection{\textbf{Robust Recovery via $\beta$-GAN}} Recently,~\cite{gao2018robust,gao2019generative} came up with a more general family of GANs, the $\beta$-GANs.
It aims to solve the following minimax optimization problem to find $G_\theta$: \begin{equation}\label{minmax equation} \min_{G_\theta}\max_{D}\mathbb{E}[S(D(x),1) +S(D(G_\theta(y)), 0)], \end{equation} where $S(t,1) = -\int_t^1 c^{\alpha-1}(1-c)^{\beta} dc$, $S(t,0) = -\int_0^t c^{\alpha}(1-c)^{\beta-1} dc$, $\alpha, \beta\in [-1,1]$. For simplicity, we denote this family with parameters $\alpha, \beta$ by ($\alpha,\beta$)-GAN in this chapter. The family of ($\alpha,\beta$)-GANs includes many popular members. For example, when $\alpha = 0, \beta = 0$, it becomes the JS-GAN~(\cite{goodfellow2014generative}), which aims to solve the minimax problem (Eq. \eqref{JSGAN}) whose loss is the Jensen-Shannon divergence. When $\alpha = 1, \beta =1$, the loss is a simple mean square loss; when $\alpha =-0.5, \beta = -0.5$, the loss is the boosting score. However, the Wasserstein GAN (WGAN) is not a member of this family. By formally taking $S(t,1)=t$ and $S(t,0)=-t$, we recover WGAN as in Eq.~\eqref{eq:wgan}. \subsubsection{\textbf{Robust Recovery Theory}} \label{robust theo} We extend the traditional image generative model to a Huber contamination model and exploit the $\beta$-GAN toward robust denoising under unknown contamination. Below is a brief introduction to the robust $\beta$-GAN, which achieves provably robust estimation or recovery under the Huber contamination model. Recently, Gao et al. established the statistical optimality of $\beta$-GANs for robust estimation of the mean (location) and covariance (scatter) of general elliptical distributions (\cite{gao2018robust},~\cite{gao2019generative}). Here we introduce the main results. \begin{definition}[Elliptical Distribution] A random vector $X \in \mathbb{R}^p$ follows an elliptical distribution if and only if it has the representation $X = \theta + \xi AU$, where $\theta \in \mathbb{R}^p$ and $A \in \mathbb{R}^{p \times r}$ are model parameters. The random variable $U$ is distributed uniformly on the unit sphere \{$u \in \mathbb{R}^r: \parallel u \parallel = 1$\} and $\xi \geq 0$ is a random variable in $\mathbb{R}$ independent of $U$. The vector $\theta$ and the matrix $\Sigma = AA^T$ are called the location and the scatter of the elliptical distribution. \end{definition} The normal distribution is a member of this family, characterized by the mean $\theta$ and the covariance matrix $\Sigma$. The Cauchy distribution is another member of this family, whose moments do not exist. \begin{definition}[Huber contamination model] $X_1,..., X_n \overset{\text{iid}}{\sim} (1-\epsilon)P_{ell} + \epsilon Q$, where $P_{ell}$ is an elliptical distribution in its canonical form. \end{definition} A more general data generating process than the Huber contamination model, called the strong contamination model below, is given by the $TV$-neighborhood of a given elliptical distribution $P_{ell}$: \begin{definition}[Strong contamination model] \label{def:TV-ambiguity} $X_1,..., X_n \overset{\text{iid}}{\sim} P$, for some $P$ satisfying $$TV(P, P_{ell}) < \epsilon .$$ \end{definition} \begin{definition}[Discriminator Class] Let ${\sf sigmoid}(x)=\frac{1}{1+e^{-x}}$, ${\sf ramp}(x)=\max(\min(x+1/2,1),0)$, and ${\sf ReLU}(x)=\max(x,0)$. Define a general discriminator class of deep neural nets: first define a ramp bottom layer \begin{equation} \mathcal{G}_{{\sf ramp}} = \{ g(x) = {\sf ramp}(u^Tx+b), \ u\in \mathbb{R}^p, \ b \in \mathbb{R}\}.
\end{equation} Then, with $\mathcal{G}_1(B) = \mathcal{G}_{{\sf ramp}}$, inductively define \begin{equation} \mathcal{G}_{l+1}(B) = \Bigg \{ g(x)={\sf ReLU} \bigg (\sum_{h\geq1} v_h g_h(x) \bigg): \sum_{h\geq1}|v_h| \leq B, g_h \in \mathcal{G}_l(B) \Bigg \}. \end{equation} Note that neighboring layers are connected via ReLU activation functions. Finally, the network structure is defined by \begin{equation} \label{discri class} \mathcal{D}^L(\kappa, B) = \Bigg \{ D(x) = {\sf sigmoid} \bigg( \sum_{j\geq1}w_jg_j(x) \bigg): \sum_{j\geq1}|w_j|\leq \kappa, g_j \in \mathcal{G}_L(B) \Bigg \}. \end{equation} This is a network architecture consisting of $L$ hidden layers. \end{definition} Now consider the following $\beta$-GAN induced by a proper scoring rule $S:[0,1]\times \{0,1\} \to \mathbb{R}$ with the discriminator class above: \begin{equation} \label{estimation para} (\hat{\theta}, \hat{\Sigma}) = \arg\min_{(\theta, \Sigma)}\max_{D \in \mathcal{D}^L(\kappa,B)}\frac{1}{n}\sum_{i=1}^nS(D(x_i),1) +\mathbb{E}_{x \sim P_{ell}(\theta, \Sigma)}S(D(x), 0). \end{equation} The following theorem shows that such a $\beta$-GAN may give a statistically optimal estimate of the location and scatter of the general family of elliptical distributions under strong contamination models. \begin{thm}[Gao-Yao-Zhu~(\cite{gao2019generative})]\label{thm1} Consider the $(\alpha,\beta)$-GANs with $|\alpha-\beta|<1$. The discriminator class $D=\mathcal{D}^L(\kappa, B)$ is specified by Eq. \eqref{discri class}. Assume $\frac{p}{n} + \epsilon ^2 \leq c$ for some sufficiently small constant $c > 0$. Set $1 \leq L = O(1), 1 \leq B = O(1)$, and $\kappa = O(\sqrt{\frac{p}{n}} + \epsilon)$. Then for $X_1,...,X_n \overset{\text{iid}}{\sim} P$ with any $P$ satisfying $TV(P, P_{ell}) < \epsilon$ for small enough $\epsilon$, we have: \begin{equation} \begin{split} &\| \hat{\theta} -\theta \|^2 < C (\frac{p}{n} \vee \epsilon ^2), \\ &\| \hat{\Sigma} -\Sigma \|_{op}^2 < C (\frac{p}{n} \vee \epsilon ^2), \end{split} \end{equation} with probability at least $1 - e^{-C'(p+n\epsilon^2)}$ (for universal constants $C$ and $C'$) uniformly over all $\theta \in \mathbb{R}^p$ and all $\|\Sigma\|_{op}\leq M$. \end{thm} The theorem establishes that for all $|\alpha-\beta|<1$, the $(\alpha,\beta)$-GAN family is robust in the sense that one can learn a distribution $P_{ell}$ from contaminated distributions $P_{\epsilon}$ such that $TV(P_{\epsilon},P_{ell})<\epsilon$, which includes the Huber contamination model as a special case. Therefore an $(\alpha,\beta)$-GAN with a suitable choice of network architecture can robustly learn the generative model from arbitrary contamination $Q$ when $\epsilon $ is small (e.g. no more than $1/3$). In the current case, the denoising Autoencoder network is modified to $G_\theta(y)$, providing us a universal approximation of the location (mean) of the inverse generative model in Eq. \eqref{eq:forward_model}, where the noise can be any member of the elliptical distribution family. Moreover, the discriminator is adapted to the image classification problem in the current case. Equipped with this design, the proposed $(\alpha,\beta)$-GAN may help enhance the robustness of the denoising Autoencoder against unknown contamination, e.g. the Huber contamination model for real contamination in the image data. The experimental results in fact confirm the efficacy of such a design. In addition, the Wasserstein GAN (WGAN) is not a member of this $\beta$-GAN family.
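For concreteness, the scoring rules in Eq. \eqref{minmax equation} admit simple closed forms for the members highlighted above; the following sketch (our own illustration with hypothetical helper names, not the authors' released code) evaluates the discriminator objective $S(D(x),1)+S(D(G_\theta(y)),0)$ for the $(0,0)$-GAN (JS-GAN) and the $(1,1)$-GAN:
\begin{verbatim}
import numpy as np

def S(t, label, alpha, beta):
    """Scoring rule of the (alpha, beta)-GAN:
       S(t,1) = -int_t^1 c^(alpha-1) (1-c)^beta dc,
       S(t,0) = -int_0^t c^alpha (1-c)^(beta-1) dc.
    Closed forms for the two members discussed in the text."""
    if (alpha, beta) == (0, 0):    # JS-GAN: log t and log(1 - t)
        return np.log(t) if label == 1 else np.log(1.0 - t)
    if (alpha, beta) == (1, 1):    # mean-square-type loss
        return -0.5 * (1.0 - t) ** 2 if label == 1 else -0.5 * t ** 2
    raise NotImplementedError("integrate numerically for other (alpha, beta)")

def discriminator_objective(d_real, d_fake, alpha=0, beta=0):
    """Empirical objective ascended by D: mean S(D(x),1) + mean S(D(G(y)),0)."""
    return (np.mean([S(t, 1, alpha, beta) for t in d_real])
            + np.mean([S(t, 0, alpha, beta) for t in d_fake]))
\end{verbatim}
Formally substituting $S(t,1)=t$ and $S(t,0)=-t$ in the same objective yields the WGAN criterion of Eq. \eqref{eq:wgan}, which falls outside this family.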
In addition, recall that the Wasserstein GAN (WGAN) is not a member of this $\beta$-GAN family. Compared to JS-GAN, WGAN aims to minimize the Wasserstein distance between the sample distribution and the generator distribution. Therefore, WGAN is not robust in the sense of the contamination models above, as an arbitrary $\epsilon$-portion of outliers can lie far away from the main distribution $P_{ell}$, making the Wasserstein distance arbitrarily large. \subsection{\textbf{Stabilized Robust Denoising by Joint Autoencoder and $\beta$-GAN}}\label{sec:joint algorithm} Although $\beta$-GAN can robustly recover model parameters from contaminated samples, as a zero-sum game involving a non-convex-concave minimax optimization problem, training GANs is notoriously unstable, with typical cyclic dynamics and possible mode collapse entrapped by local optima (\cite{arjovsky2017wasserstein}). However, in this section we show that the introduction of an Autoencoder loss is able to stabilize the training and avoid mode collapse. In particular, the Autoencoder can help stabilize the GAN during training, without which the training process of a GAN often oscillates and sometimes collapses due to the presence of high noise. \HLL{Compared with the autoencoder, $\beta$-GAN can further help denoising by exploiting common information in similar samples when learning the distribution.} In a GAN, the divergence or Wasserstein distance between the reference image set and the denoised image set is minimized; similar images can therefore help boost signals for each other. For these reasons, a combined loss is proposed with both the $\beta$-GAN and Autoencoder reconstruction losses, \begin{equation} \widehat{\mathcal{L}}_{GAN}(x,\widehat{x})+ \lambda \|x-\widehat{x}\|_p^p, \end{equation} where $p\in\{1,2\}$ and $\lambda\geq 0$ is a trade-off parameter for the $\ell_p$ reconstruction loss. Algorithm \ref{mainalg} summarizes the procedure of joint training of Autoencoder and GAN, which will be denoted ``GAN$+\ell_p$'' in the experimental section, depending on the choice of GAN and $p$. \begin{algorithm} \footnotesize \caption{Joint training of $(\alpha,\beta)$-GAN and $\ell_p$-Autoencoder.}\label{mainalg} {\bf Input:} \\ 1. $(\alpha,\beta)$ for $S(t,1) = -\int_t^1 c^{\alpha-1}(1-c)^{\beta}$dc, $S(t,0) = -\int_0^t c^{\alpha} (1-c)^{\beta-1}$dc, \\ \hspace*{0.05in} or $S(t,1)=t$, $S(t,0)=-t$ for WGAN \\ 2. $\lambda$: regularization parameter of the $\ell_p$-Autoencoder \\ 3. $k_d$: number of iterations for the discriminator, $k_g$: number of iterations for the generator\\ 4. $\eta_d$: learning rate of the discriminator, $\eta_g$: learning rate of the generator\\ 5. $\omega$: weights of the discriminator, $\theta$: weights of the generator \begin{algorithmic}[1] \FOR{number of training iterations}{} \STATE $\bullet$ Sample a minibatch of $m$ examples $\{(x^{(1)},y^{(1)}),\ldots,(x^{(m)},y^{(m)})\}$ from reference-noisy image pairs.
\FOR{$k =1,2,\ldots,k_d$}{} \STATE $\bullet$ Update the discriminator by gradient ascent: \STATE $g_{\omega} \xleftarrow{} \frac{1}{m} \sum_{i=1}^m \nabla_\omega [ S(D_\omega(x_i),1) + S(D_\omega(G_\theta(y_i)),0)+ \mu (\Vert \nabla_{\tilde{x}} D_\omega(\tilde{x}_i)\Vert _2 - 1)^2 ]$ \\ \ \ \ \ \ \ where $\mu>0$ for WGANgp only; \STATE $\omega \xleftarrow{} \omega + \eta_d g_{\omega} $ \ENDFOR \FOR{$k =1,2,\ldots,k_g$}{} \STATE $\bullet$ Update the generator by gradient descent: \STATE $g_{\theta} \xleftarrow{} \frac{1}{m} \sum_{i=1}^m \nabla_\theta [ S(D_\omega(G_\theta(y_i)),0) + \lambda \lvert G_\theta(y_i) - x_i \rvert^p]$, $p\in \{1,2\}$; \STATE $\theta \xleftarrow{} \theta - \eta_g g_{\theta} $ \ENDFOR \ENDFOR \end{algorithmic} \textbf{Return}: Denoised images $\widehat{x}_i=G_\theta(y_i)$ \label{algorithm} \end{algorithm} \subsubsection{\textbf{Stability of combining Autoencoder into GAN}} \label{sec:stability} We illustrate that the Autoencoder is indispensable for stabilizing the training in the joint Autoencoder-GAN scheme. \begin{figure}[htbp] \centering \includegraphics[width=6in, height=65mm]{imgs/loss.PNG} \caption{Comparison between JS-GAN (black) and joint JS-GAN-$\ell_1$ Autoencoder (blue). \HLL{(a) and (b) show the evolution of the MSE on the training and test data, respectively.} Joint training of JS-GAN-$\ell_1$ Autoencoder is much more stable than pure JS-GAN training, which oscillates heavily. } \label{fig: l1-nonl1} \end{figure} As an illustration, Fig. \ref{fig: l1-nonl1} shows the comparison between training a JS-GAN and a joint JS-GAN + $\ell_1$-Autoencoder. Training and test mean square error curves are plotted against iteration number for the RNAP data under $SNR = 0.1$ in Fig. \ref{fig: l1-nonl1}. It shows that JS-GAN training suffers from drastic oscillations, while joint training of JS-GAN + $\ell_1$-Autoencoder exhibits a stable process. In fact, with the aid of the Autoencoder here, one does not need the popular ``$\log D$ trick'' in JS-GAN. \section{\textbf{Application: Robust Denoising of Cryo-EM Images}} \label{sec:application} \subsection{\textbf{Datasets}} \subsubsection{\textbf{RNAP: Simulation Dataset}} We design a conformationally heterogeneous dataset obtained by simulations. We use \textit{Thermus aquaticus} RNA Polymerase (RNAP) in complex with the $\sigma^A$ factor (\textit{Taq} holoenzyme) for our dataset. RNAP is the enzyme that transcribes RNA from DNA (transcription) in the cell. During the initiation of transcription, the holoenzyme must bind to the DNA, then separate the double-stranded DNA into single strands (\cite{browning2004regulation}). The \textit{Taq} holoenzyme has a crab-claw-like structure with two flexible domains, the clamp and the $\beta$ pincers. The clamp, especially, has been suggested to play an important role in initiation, as it has been captured in various conformations by Cryo-EM during initiation (\cite{chen2020stepwise}). Thus, we focus on the movement of the clamp in this study. To generate the heterogeneous dataset, we start with two crystal structures of the \textit{Taq} holoenzyme which differ in their clamp conformation: open clamp (PDB ID: 1L9U (\cite{murakami2002structural})) and closed clamp (PDB ID: 4XLN (\cite{bae2015structure})). For the closed-clamp structure, we remove the DNA and RNA in the crystal structure, leaving only the RNAP and $\sigma^A$ for our dataset. The \textit{Taq} holoenzyme has a molecular weight of about 370 kDa.
We then generate the clamp intermediate structures between the open and closed clamp using multiple-basin coarse-grained (CG) molecular dynamics (MD) simulations (\cite{okazaki2006multiple,kenzaki2011cafemol}). CG-MD simulations simplify the system such that the atoms in each amino acid are represented by one particle. The structures from CG-MD simulations are refined back to all-atom structures using PD2 ca2main (\cite{moore2013high}) and SCWRL4 (\cite{krivov2009improved}). Five structures with equally spaced clamp opening angles are chosen for our heterogeneous dataset (shown in Fig. \ref{fig:5conf}). Then, we convert the atomic structures to $128 \times 128 \times 128$ volumes using the \texttt{Xmipp} package (\cite{marabini1996xmipp}) and generate 2D projections with an image size of $128 \times 128$ pixels. We further contaminate those clean images with additive Gaussian noise at different Signal-to-Noise Ratios (SNRs), e.g. $SNR =0.05$. The SNR is defined as the ratio of the signal power to the noise power in real space. For simplicity, we did not apply the contrast transfer function (CTF) to the datasets, and all the images are centered. Fig. \ref{fig:5conf} shows the five \HLL{conformations.} The training set consists of $25000$ paired images (noisy and reference images); the test set used to calculate the MSE, PSNR, and SSIM consists of another $1500$ paired images. \begin{figure}[htbp] \centering \includegraphics[width=6in]{imgs/5conformation.PNG} \caption{Five conformations in the RNAP heterogeneous dataset; from left to right, the clamp moves from the closed conformation to open conformations of increasing angle. \label{fig:5conf}} \end{figure} \subsubsection{\textbf{EMPIAR-10028: Real Dataset}} This is a real-world experimental dataset, the Plasmodium falciparum 80S ribosome dataset (EMPIAR-10028), first studied in \cite{wong2014cryo}. The authors recovered the Cryo-EM structure of the cytoplasmic ribosome from the human malaria parasite, \textit{Plasmodium falciparum}, in complex with emetine, an anti-protozoan drug, at $3.2 \mathring{A}$ resolution. The ribosome is the essential enzyme that translates RNA to protein molecules, the second step of the central dogma. The inhibition of the ribosome activity of \textit{Plasmodium falciparum} would effectively kill the parasite (\cite{wong2014cryo}). We can regard this dataset as conformationally homogeneous. It contains $105247$ noisy particles with an image size of $360 \times 360$ pixels. To reduce the computational cost, we crop the central square of each image with a size of $256 \times 256$; since the surrounding area of the image contains no particle signal, no information is lost in this preprocessing. The $256 \times 256$ images are then fed as the input of the $G_\theta$-network (Fig. \ref{fig:architecture}). Since the GAN-based method needs clean images as references, we prepare the clean counterparts in the following way: we first use cryoSPARC v1.0 (\cite{punjani2017cryosparc}) to build a $3.2\,\mathring{A}$ resolution volume and then rotate the 3D volume by the Euler angles obtained by cryoSPARC to get the projected 2D images. The training set size is $19500$, and the test set size is $500$. \begin{figure}[htbp] \includegraphics[width=6in]{imgs/archi.PNG} \caption{The architectures of (a) the discriminator $D$ and (b) the generator $G$, which adopt the residual structure. The input image size ($128 \times 128$) here is adapted to the RNAP dataset, while the input image size for the EMPIAR-10028 dataset is $256 \times 256$.
\label{fig:architecture}} \end{figure} \subsection{\textbf{Evaluation Method}} \label{sec: evaluation} We use the following three metrics to assess the quality of a denoising result: the Mean Square Error (MSE), the Peak Signal-to-Noise Ratio (PSNR), and the Structural Similarity Index Measure (SSIM). \begin{itemize} \item (\textbf{MSE}) For images of size $d_1\times d_2$, the Mean Square Error (MSE) between the reference image $x$ and the denoised image $\widehat{x}$ is defined as \begin{align*} \mbox{MSE} := \frac{1}{d_1d_2}\sum_{i=1}^{d_1}\sum_{j=1}^{d_2} (x(i,j)-\widehat{x}(i,j))^2. \end{align*} The smaller the MSE, the better the denoising result. \item (\textbf{PSNR}) Similarly, the Peak Signal-to-Noise Ratio (PSNR) between the reference image $x$ and the denoised image $\widehat{x}$, whose pixel value range is $[0,t]$ ($t=1$ by default), is defined by \begin{align*} \mbox{PSNR} :=10\log_{10}\frac{t^2}{\frac{1}{d_1d_2}\sum_{i=1}^{d_1}\sum_{j=1}^{d_2} (x(i,j)-\widehat{x}(i,j))^2}. \end{align*} The larger the PSNR, the better the denoising result. \item (\textbf{SSIM}) The third criterion is the Structural Similarity Index Measure (SSIM) between the reference image $x$ and the denoised image $\widehat{x}$, defined in~\cite{wang2004image} as \begin{align*} \mbox{SSIM}= \frac{(2\mu_x \mu_{\widehat{x}} +c_1)(2 \sigma_x \sigma_{\widehat{x}} +c_2)(\sigma_{x \widehat{x}} +c_3)}{(\mu_x ^2 + \mu_{\widehat{x}}^2 + c_1)(\sigma_x^2 + \sigma_{\widehat{x}}^2 + c_2)(\sigma_{x} \sigma_{\widehat{x}} +c_3)}, \end{align*} where $\mu_x$ ($\mu_{\widehat{x}}$) and $\sigma_x$ ($\sigma_{\widehat{x}}$) are the mean and standard deviation of $x$ ($\widehat{x}$), respectively, $\sigma_{x \widehat{x}}$ is the covariance of $x$ and $\widehat{x}$, and $c_1= (K_1L)^2$, $c_2=(K_2L)^2$, $c_3=\frac{c_2}{2}$ are three constants that stabilize the division when the denominator is weak ($K_1 =0.01$, $K_2=0.03$ by default); $L$ is the dynamic range of the pixel values (1 by default). The value of SSIM lies in $[0,1]$, and the closer it is to 1, the better the result. \end{itemize} Although these metrics are widely used in image denoising, they might not be the best metrics for Cryo-EM images. Appendix ``\nameref{sec:lambda}'' shows an example in which the best-reconstructed images do not attain the best MSE/PSNR/SSIM values. In addition to these metrics, we consider 3D reconstruction based on the denoised images. In particular, we use 3D reconstruction by RELION to validate the denoising results. The procedure of our RELION reconstruction is as follows: first create the 3D initial model, then perform 3D classification, followed by 3D auto-refinement. Moreover, for the heterogeneous conformations in the simulation data, we further cast the evaluation of denoising as a clustering problem, whose details are discussed in Appendix ``\nameref{sec:cluster}''. \subsection{\textbf{Network Architecture and Hyperparameter}} \label{sec:net} In the experiments of this chapter, the best results come from the ResNet architecture (\cite{su2018generative}) shown in Fig. \ref{fig:architecture}, which has been successfully applied to biological problems such as predicting protein-RNA binding. The generator in such GANs adopts the Autoencoder network architecture, while the discriminator is a binary classification ResNet.
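To make Algorithm \ref{mainalg} concrete, here is a minimal PyTorch-style sketch (ours, for illustration; \texttt{G} and \texttt{D} stand for any generator and discriminator modules with the architectures above, and \texttt{D} is assumed to end in a sigmoid) of one joint training step for the $(1,1)$-GAN, whose scoring rules have the closed forms $S(t,1)=-(1-t)^2/2$ and $S(t,0)=-t^2/2$:
\begin{verbatim}
import torch

def S1(t):
    # (1,1)-GAN scoring rule: S(t,1) = -(1 - t)^2 / 2
    return -(1 - t).pow(2) / 2

def S0(t):
    # (1,1)-GAN scoring rule: S(t,0) = -t^2 / 2
    return -t.pow(2) / 2

def joint_step(G, D, opt_d, opt_g, x, y, lam=10.0):
    # Discriminator: gradient ascent on S(D(x),1) + S(D(G(y)),0).
    d_obj = S1(D(x)).mean() + S0(D(G(y).detach())).mean()
    opt_d.zero_grad()
    (-d_obj).backward()
    opt_d.step()
    # Generator: gradient descent on S(D(G(y)),0) plus the l1
    # reconstruction penalty, i.e. one "GAN + l1" update.
    x_hat = G(y)
    g_loss = S0(D(x_hat)).mean() + lam * (x_hat - x).abs().mean()
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return g_loss.item()
\end{verbatim}
Repeating the two updates $k_d$ and $k_g$ times, respectively, recovers the loop structure of Algorithm \ref{mainalg}.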
In Appendix ``\nameref{sec:convnet}'' and ``\nameref{sec:PGGAN}'', we also discuss a convolutional network without residual blocks and the PGGAN (\cite{karras2018progressive}) strategy, respectively, together with their experimental results. We chose Adam (\cite{kingma2015adam}) for the optimization. The learning rate of the discriminator is $\eta_d=0.001$, and the learning rate of the generator is $\eta_g=0.01$. We choose a batch size of $m=20$, with $k_d=1$ and $k_g=2$ in Algorithm \ref{algorithm}. For the $(\alpha,\beta)$-GAN, we report two choices: (1) $\alpha=1$, $\beta= 1$; (2) $\alpha=0.5$, $\beta=0.5$, since they show the best results in our experiments; the other settings are collected in Appendix ``\nameref{sec:betagans}''. For WGAN, the gradient penalty with parameter $\mu=10$ is used to accelerate convergence, and the algorithm is hence denoted WGANgp below. The trade-off (regularization) parameter of the $\ell_1$ or $\ell_2$ reconstruction loss is set to $\lambda=10$ throughout this section, while an ablation study varying $\lambda$ is discussed in Appendix ``\nameref{sec:lambda}''. \subsection{\textbf{Results for RNAP}} \label{sec:RNAP} \subsubsection{\textbf{Denoising without contamination}} In this part, we denoise the noisy images without contamination (i.e., $\epsilon=0$ in \HLL{Eq. \eqref{eq:contam_huber}}). To demonstrate the advantage of GANs, we compare the denoising results of different methods. Table \ref{tbl: sim_denoise} shows the MSE, PSNR, and SSIM of the different methods at SNR $0.1$ and $0.05$. The traditional methods such as KSVD, BM3D, Non-local means, and CWF can remove the noise partially and extract the general outline, but they still leave unclear patches. Deep learning methods perform much better. Specifically, we observe that GAN-based methods, especially WGANgp $+\ell_1$ loss and $(.5,.5)$-GAN $+\ell_1$ loss, perform better than the denoising Autoencoder methods, which only optimize the $\ell_1$ or $\ell_2$ loss. The adversarial process improves the generation, and the additional $\ell_1$ loss speeds up the convergence toward the reference images. Notably, WGANgp and the $(.5,.5)$- and $(1,1)$-GANs are among the best methods, where the best mean performance up to one standard deviation is marked in bold font. Specifically, compared with the $(.5,.5)$-GAN, WGANgp obtains better PSNR and SSIM at SNR 0.1; the $(.5,.5)$-GAN shows an advantage in PSNR and SSIM at SNR $0.05$, while the $(1,1)$-GAN is competitive within one standard deviation. Fig. \ref{fig4}(a) presents the denoised images of the various methods at SNR $0.05$. For ease of comparison, we choose a clear open conformation \HLL{(the rightmost conformation of Fig. \ref{fig:5conf})}; WGANgp and the $(\alpha,\beta)$-GANs grasp the ``open'' shape completely and produce clearer pictures than the other methods. Furthermore, to test the denoised results of the $\beta$-GAN, we reconstruct the 3D volume by RELION from 200000 images at SNR 0.1 denoised by $(.5,.5)$-GAN + $\ell_1$. The values of pixel size, amplitude contrast, spherical aberration, and voltage are 1.6, 2.26, 0.1, and 300, respectively; for the other terms, we retain the default settings of the RELION software. Fig. \ref{fig4}(b) and (c) show the 3D volumes recovered from the clean images and from the denoised images, respectively. The related FSC curves are shown in Fig. \ref{fig4}(d).
Specifically, the blue curve, which represents the images denoised by $(.5,.5)$-GAN + $\ell_1$, is close to the red curve representing the clean images. We use the 0.143 cutoff criterion from the literature (the resolution at which the Fourier shell correlation reaches 0.143, shown by dashed lines in Fig. \ref{fig4}(d)) to choose the final resolution: 3.39$\mathring{A}$. The structure recovered from the $(.5,.5)$-GAN + $\ell_1$ denoised images and its FSC curve are as good as those of the original structure, which illustrates that the denoised result of $\beta$-GAN preserves the details of the images and is helpful for 3D reconstruction. \begin{table}[t]\tiny \renewcommand\arraystretch{1.5} \centering \caption{Denoising results without contamination on the simulated RNAP dataset: MSE, PSNR, and SSIM of different models, such as BM3D (\cite{dabov2007image}), KSVD (\cite{aharon2006k}), Non-local means (\cite{wei2010optimized}), CWF (\cite{bhamre2016denoising}), denoising Autoencoders, and GAN-based methods. } \begin{tabular}{|c|c|c|c|c|c|c|} \hline & \multicolumn{2}{c|}{MSE} & \multicolumn{2}{c|}{PSNR}& \multicolumn{2}{c|}{SSIM} \\ \hline Method/SNR & 0.1 &0.05 &0.1 & 0.05 &0.1 & 0.05 \\ \hline BM3D & 3.52\text{e-}2 (7.81\text{e-}3)& 5.87\text{e-}2(9.91\text{e-}3) & 14.54(0.15) & 12.13(0.14)&0.20(0.01)&0.08(0.01)\\ \hline KSVD & 1.84\text{e-}2(6.58\text{e-}3) & 3.49\text{e-}2(7.62\text{e-}3)& 17.57(0.16)& 14.61(0.14) &0.33(0.01)&0.19(0.01) \\ \hline Non-local means & 5.02\text{e-}2(5.51\text{e-}3)&5.81\text{e-}2(8.94\text{e-}3)&13.04(0.50)&12.40(0.65)&0.18(0.01)&0.09(0.01) \\ \hline CWF & 2.53\text{e-}2(2.03\text{e-}3) & 9.28\text{e-}3(8.81\text{e-}4)& 16.06(0.33)& 20.31(0.41)&0.25(0.01)&0.08(0.01) \\ \hline $\ell_2$-Autoencoder\tablefootnote{$\ell_2$-Autoencoder denotes the Autoencoder trained with the $\ell_2$ reconstruction loss.} & 3.13\text{e-}3(7.97\text{e-}5) & 4.02\text{e-}3(1.48\text{e-}4)& 25.10(0.11)& 23.67(0.77)&0.79(0.02)&\textbf{0.79(0.01)} \\ \hline $\ell_1$-Autoencoder\tablefootnote{$\ell_1$-Autoencoder denotes the Autoencoder trained with the $\ell_1$ reconstruction loss.} & 3.16\text{e-}3(7.05\text{e-}5) & 4.23\text{e-}3(1.32\text{e-}4)& 25.05(0.09)& 23.80(0.13) &0.77(0.02)&0.76(0.01)\\ \hline $(0,0)$-GAN + $\ell_1$ \tablefootnote{GAN + $\ell_1$ denotes adding the $\ell_1$ reconstruction regularization to the GAN generator loss.} & 3.06\text{e-}3(5.76\text{e-}5) &\textbf{ 4.02\text{e-}3(5.67\text{e-}4)} & 25.25(0.04) & 24.00(0.06)&0.78(0.03)&\textbf{0.78(0.03)} \\ \hline WGANgp + $\ell_1$ & \textbf{2.95\text{e-}3(1.41\text{e-}5)} & \textbf{4.00\text{e-}3(8.12\text{e-}5)} & \textbf{25.42(0.04)} & \textbf{24.06(0.05)}&\textbf{0.83(0.02)}&\textbf{0.80(0.03) }\\ \hline $(1,1)$-GAN + $\ell_1$ & 2.99\text{e-}3(3.51\text{e-}5) & \textbf{4.01\text{e-}3(1.54\text{e-}4)} & 25.30(0.05) & \textbf{24.07(0.16)}&\textbf{0.82(0.03)}&\textbf{0.79(0.03)} \\ \hline $(.5,.5)$-GAN+ $\ell_1$ & 3.01\text{e-}3(2.81\text{e-}5) & \textbf{ 3.98\text{e-}3(4.60\text{e-}5)}& 25.27(0.04) & \textbf{24.07(0.05)}&0.79(0.04)&\textbf{0.80(0.03)} \\ \hline \end{tabular} \label{tbl: sim_denoise} \end{table} \begin{figure}[htbp] \centering \includegraphics[width=6in]{imgs/figure4.PNG} \caption{Results for the RNAP dataset. (a) Denoised images from the different methods (from left to right, top to bottom): Clean, Noisy, BM3D, KSVD, Non-local means, CWF, $\ell_1$-Autoencoder, $\ell_2$-Autoencoder, $(1,1)$-GAN + $\ell_1$, $(0,0)$-GAN + $\ell_1$, $(.5,.5)$-GAN + $\ell_1$, and WGANgp + $\ell_1$. (b) and (c) Reconstructions from the clean images and from the $(.5,.5)$-GAN + $\ell_1$ denoised images, respectively. (d) FSC curves of (b) and (c).
(e), (f), and (g) Robustness tests of the various methods under contamination of proportion $\epsilon\in \{0.1, 0.2, 0.3\}$ of three types: (e) Type A, replacing the reference images with random noise; (f) Type B, replacing the noisy images with random noise; (g) Type C, replacing both with random noise. (h) and (j) Reconstructions of images denoised with $(.5,.5)$-GAN + $\ell_1$ and with the $\ell_2$-Autoencoder under type A contamination, respectively, where the $\ell_2$-Autoencoder totally fails but $(.5,.5)$-GAN + $\ell_1$ is robust. (i) FSC curves of (h) and (j).} \label{fig4} \end{figure} In addition, Appendix ``\nameref{sec:cluster}'' shows an example in which GAN with $\ell_1$-Autoencoder helps heterogeneous conformation clustering. \subsubsection{\textbf{Robustness under contamination}} \label{sec:robust} We also consider the contamination model with $\epsilon\neq 0$ and $Q$ given by purely noisy images. We randomly replace part of the samples of our RNAP training dataset by noise to test whether our model is robust. There are three types of contamination to test: (A) Replacing only the clean reference images. This means the reference images are wrong or missing, so that we have no reference images to compare against; this is the worst contamination case. (B) Replacing only the noisy images. This means the Cryo-EM images produced by the machine are broken. (C) Replacing both, i.e. both A and B happen. The latter two are mild contamination cases, especially C, which replaces both reference and noisy images by Gaussian noise whose $\ell_1$ or $\ell_2$ loss is thus well controlled. Here we test the robustness of various deep-learning-based methods using the RNAP data at SNR 0.1, where the \HLL{three types of contamination} above are applied by randomly replacing samples in a proportion $\epsilon\in \{0.1, 0.2, 0.3\}$ of the whole dataset. Fig. \ref{fig4}(e), (f), and (g) compare the robustness of the different methods. In all cases, some $\beta$-GANs (the $(.5,.5)$- and $(1,1)$-GANs) with $\ell_1$-Autoencoder exhibit relatively universal robustness. In particular: (1) The MSE with the $\ell_1$ loss is less than the MSE with the $\ell_2$ loss, showing that the $\ell_1$ loss is more robust, as desired. (2) The $\ell_2$-Autoencoder and WGANgp show certain robustness in cases B and C but are strongly affected by contamination in case A (shown in Fig. \ref{fig4}(e)), indicating that the most serious damage arises from type A, i.e. replacing only the reference images by Gaussian noise. The reason is that the $\ell_2$-Autoencoder and WGANgp are confused by the wrong reference images, so they cannot accurately learn the mapping from the data distribution to the reference distribution. (3) In type C, the standard deviations of the five best models are larger than in the other two types: contaminating both the noisy images $y$ and the clean images $x$ affects the stability of the models more than the other two types do.
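As a concrete illustration of the three contamination types, here is a minimal NumPy sketch (ours; the unit noise scale is an arbitrary assumption, since the chapter does not fix the variance of the replacement noise):
\begin{verbatim}
import numpy as np

def contaminate(x, y, eps, mode, rng=None):
    # Replace an eps-fraction of (reference x, noisy y) training pairs
    # with pure Gaussian noise; mode "A" corrupts x only, "B" y only,
    # "C" both, mirroring contamination types A, B, C above.
    rng = rng or np.random.default_rng()
    x, y = x.copy(), y.copy()
    idx = rng.random(len(x)) < eps
    if mode in ("A", "C"):
        x[idx] = rng.normal(0.0, 1.0, size=x[idx].shape)
    if mode in ("B", "C"):
        y[idx] = rng.normal(0.0, 1.0, size=y[idx].shape)
    return x, y
\end{verbatim}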
Furthermore, we take an example of type A contamination with $\epsilon=0.1$ for 3D reconstruction. The 3D reconstructions from images denoised with $(.5,.5)$-GAN + $\ell_1$ and with the $\ell_2$-Autoencoder are shown in Fig.~\ref{fig4}(h) and (j), and the related FSC curves are shown in \HLL{Fig.~\ref{fig4}(i)}. Specifically, on the one hand, the blue FSC curve of the $\ell_2$-Autoencoder does not drop, which corresponds to the failed reconstruction; on the other hand, the red FSC curve of $(.5,.5)$-GAN + $\ell_1$ drops quickly but begins to rise again, because some unclear structural details mix with the angular information in the reconstruction. Applying the 0.143 cutoff criterion (dashed line in the FSC curve), the resolution of $(.5,.5)$-GAN + $\ell_1$ is about 4$\mathring{A}$. Although the reconstruction and the final resolution are not as good as those from the clean images, they are much better than those of the $\ell_2$-Autoencoder, which totally fails in the contamination case. The outcome of the reconstruction demonstrates that $(.5,.5)$-GAN + $\ell_1$ is relatively robust, with a 3D result consistent with the clean-image reconstruction. \HLL{In summary, some $(\alpha,\beta)$-GAN methods, such as the $(.5,.5)$-GAN and $(1,1)$-GAN with $\ell_1$-Autoencoder, are more resistant to sample contamination and are therefore better suited to the denoising of Cryo-EM data.} \subsection{\textbf{Results for EMPIAR-10028}} \label{sec:empire} Fig. \ref{fig5}(a) and (b) show the denoising results of different deep learning methods on the experimental data: $\ell_1$- or $\ell_2$-Autoencoders, JS-GAN ($(0,0)$-GAN), WGANgp, and $(\alpha,\beta)$-GAN, where we add the $\ell_1$ loss to all of the GAN-based structures. Although the Autoencoder can grasp the shape of the macromolecule, it is a little blurry in some parts. \HLL{Moreover, WGANgp and the $(.5,.5)$-GAN perform better than the other deep learning methods according to MSE and PSNR, which is largely consistent with the results on the RNAP dataset.} The improvement of such GANs over pure Autoencoders lies in their ability to utilize structural information among similar images to learn the data distribution better. Finally, \HLL{we perform reconstruction} via RELION from 100000 images denoised by $(.5,.5)$-GAN +$\ell_1$. \HLL{The parameters are the same as those set in the paper~(\cite{wong2014cryo}).} The reconstruction results are shown in Fig. \ref{fig5}(c). The final resolution is 3.20$\mathring{A}$, derived from the FSC curve in Fig. \ref{fig5}(d) using the same 0.143 cutoff (dashed line) to choose the final resolution. We note that the final resolution by RELION after denoising is as good as the original resolution of 3.20$\mathring{A}$ reported in~\cite{wong2014cryo}. \begin{figure}[t] \centering \includegraphics[width=5.5in]{imgs/figure5.PNG} \caption{Results for EMPIAR-10028. (a) Comparison of different deep learning methods on the EMPIAR-10028 dataset (from left to right, top to bottom): clean image, noisy image, $\ell_1$-Autoencoder, $\ell_2$-Autoencoder, $(0,0)$-GAN + $\ell_1$, $(1,1)$-GAN + $\ell_1$, $(.5,.5)$-GAN + $\ell_1$, WGANgp + $\ell_1$. (b) MSE, PSNR, and SSIM of the different denoising methods. (c) and (d) The 3D reconstruction from the images denoised by $(.5,.5)$-GAN + $\ell_1$ and the FSC curve, respectively. The resolution of the reconstruction from the $(.5,.5)$-GAN + $\ell_1$ denoised images is 3.20$\mathring{A}$, as good as the original resolution.} \label{fig5} \end{figure} \section{\textbf{Conclusion and Discussion}} \label{sec:conclusion} In this chapter, we set up a connection between the traditional image forward model and the Huber contamination model to address the complex contamination in Cryo-EM datasets.
\HLL{The joint training of Autoencoder and GAN has been shown to substantially improve the performance of Cryo-EM image denoising. In this joint training scheme, the reconstruction loss of the Autoencoder helps the GAN avoid mode collapse and stabilizes training, while the GAN further helps the Autoencoder denoise by exploiting the highly correlated Cryo-EM images, which are 2D projections of one or a few 3D molecular conformations}. To overcome the low Signal-to-Noise-Ratio challenge in Cryo-EM images, joint training of an $\ell_1$-Autoencoder combined with the $(.5,.5)$-GAN, the $(1,1)$-GAN, or WGAN with gradient penalty is often among the best performers in terms of MSE, PSNR, and SSIM when the data is contamination-free. However, when a portion of the data is contaminated, especially the reference data, WGAN with $\ell_1$-Autoencoder may suffer a significant deterioration of reconstruction accuracy. Therefore, a robust $\ell_1$-Autoencoder combined with robust GANs (the $(.5,.5)$-GAN and $(1,1)$-GAN) is the overall best choice for robust denoising of contaminated, high-noise datasets. \HLL{Part of the results in this chapter is based on a technical report (\cite{gu20gan}). Most deep-learning-based image denoising techniques need reference data, which limits their application to Cryo-EM denoising.} For example, in our experimental dataset EMPIAR-10028, the reference data is generated by cryoSPARC, which itself becomes problematic for highly heterogeneous conformations. Therefore, the reference images we learn from may follow a fake distribution. How to denoise without reference images thus becomes a significant problem, and it remains open how to adapt to different experiments, including those without reference images. To overcome this drawback, an idea called ``image-blind denoising'' was offered in the literature\HLL{~(\cite{lehtinen2018noise2noise,krull2019noise2void})}, which views a noisy image or a void image as the reference image for denoising. Besides,~\cite{chen2018image} tried to extract the noise distribution from noisy images and obtain denoised images by removing that noise;~\cite{quan2020self2self} augmented the data by Bernoulli sampling and denoised images with dropout. Nevertheless, all of these methods require noise that is independent of the signal itself, so it is hard to remove noise in Cryo-EM, where the noise from the ice and the machine is related to the particles. In addition, for reconstruction problems in Cryo-EM,~\cite{zhong2020reconstructing} \HLL{proposed an end-to-end} network-based 3D reconstruction approach from Cryo-EM images, which borrows the Variational Autoencoder (VAE) to approximate the forward reconstruction model and recover the 3D structure directly by combining the angular and image information learned from the data. This is one future direction to pursue. \section{\textbf{Appendix}}\label{sec:appendix} \subsection*{\textbf{Influence of the parameters $(\alpha, \beta)$ in $\beta$-GAN}} \label{sec:betagans} \HLL{In this chapter,} we have applied $\beta$-GAN to the denoising problem, so how to pick good parameters $(\alpha, \beta)$ in the $\beta$-GAN becomes an important issue. Therefore, \HLL{we investigate} the impact of the parameters $(\alpha, \beta)$ on the denoising results. We choose eight representative settings of $(\alpha, \beta)$; the results are shown in Table \ref{tbl: alpha-beta-result}. The differences among these settings are not large.
The best results appear at $\alpha= 1, \beta =1$ and $\alpha= 0.5, \beta =0.5$. \begin{table}[H]\scriptsize \renewcommand\arraystretch{1.5} \centering \caption{Results of $\beta$-GANs with the ResNet architecture: MSE, PSNR, and SSIM for different $(\alpha, \beta)$ in $\beta$-GAN under various levels of Gaussian noise corruption on the RNAP dataset.} \begin{tabular}{|c|c|c|c|c|c|c|} \hline & \multicolumn{2}{c|}{MSE} & \multicolumn{2}{c|}{PSNR} & \multicolumn{2}{c|}{SSIM} \\ \hline Parameter/SNR & 0.1 &0.05 &0.1 & 0.05 &0.1 & 0.05 \\ \hline $\alpha =1, \beta=1$ &\textbf{2.99\text{e-}3(3.51\text{e-}5)} & 4.01\text{e-}3(1.54\text{e-}4) & \textbf{25.30(0.05) }& \textbf{24.07(0.16)}&\textbf{0.82(0.03)}&0.79(0.03) \\ \hline $\alpha =0.5, \beta=0.5$ & 3.01\text{e-}3(2.81\text{e-}5) & \textbf{ 3.98\text{e-}3(4.60\text{e-}5)}& 25.27(0.04) & \textbf{24.07(0.05)}&0.79(0.04)&\textbf{0.80(0.03)} \\ \hline $\alpha =-0.5, \beta=-0.5$ & 3.02\text{e-}3(1.69\text{e-}5) & 4.15\text{e-}3(5.05\text{e-}5)& 25.27(0.02)& 23.91(0.05) &0.80(0.03)&\textbf{0.80(0.03)} \\ \hline $\alpha =-1, \beta=-1$ & 3.05\text{e-}3(3.54\text{e-}5) & 4.12\text{e-}3(8.30\text{e-}5) & 25.23(0.05) & 23.93(0.08) &0.80(0.05)&0.77(0.04) \\ \hline $\alpha =1, \beta=-1$& 3.05\text{e-}3(4.30\text{e-}5) & 4.10\text{e-}3(5.80\text{e-}5) & 25.24(0.06) & 23.96(0.06) &\textbf{0.82(0.02)}&0.76(0.03) \\ \hline $\alpha =0.5, \beta=-0.5$& 3.09\text{e-}3(6.79\text{e-}5) & 4.05\text{e-}3(6.10\text{e-}5)& 25.17(0.04) & 24.01(0.06)&0.79(0.04)&0.77(0.05) \\ \hline $\alpha =0, \beta=0$ & 3.06\text{e-}3(5.76\text{e-}5) & 4.02\text{e-}3(5.67\text{e-}4) & 25.23(0.04) & 24.00(0.06)&0.78(0.03)&0.78(0.03) \\ \hline $\alpha =0.1, \beta=-0.1$ & 3.07\text{e-}3(5.62\text{e-}5) & 4.05\text{e-}3(8.55\text{e-}5)& 25.23(0.08) & 23.98(0.04) &0.78(0.02)&0.79(0.03) \\ \hline \end{tabular} \label{tbl: alpha-beta-result} \end{table} \subsection*{\textbf{Clustering to resolve conformational heterogeneity}} \label{sec:cluster} In this part, we analyze whether the denoised results help resolve the conformational heterogeneity in the simulated RNAP dataset. Specifically, among the heterogeneous conformations in the simulation data, we choose the following two typical conformations: the \emph{open} and \emph{closed} conformations \HLL{(the leftmost and rightmost conformations in Fig. \ref{fig:5conf})} as our test data. Our goal is to distinguish these two classes of conformations. However, unlike the paper\HLL{~(\cite{xian2018data})}, we do not have template images to calculate a distance matrix, so we resort to unsupervised learning -- clustering.
Our clustering method first uses manifold learning, Isomap (\cite{tenenbaum2000global}), to reduce the dimension of the denoised images, and then applies $k$-Means ($k=2$) to group the different conformations; a sketch of this pipeline is given below. Fig.~\ref{fig: dif_clustering}(a) displays the 2D visualizations of the two conformations and the clustering effect for the different denoising methods. \HLL{Here the SNR of the noisy data is 0.05.} In correspondence with those visualizations, the accuracies of the competitive methods are: $(1,1)$-GAN$+\ell_1$: $54/60$ (54 out of 60 clustered correctly), WGANgp$+\ell_1$: $54/60$, $\ell_2$-Autoencoder: $44/60$, BM3D: $34/60$, and KSVD: $36/60$. This result shows that the clean images separate well; the $(\alpha,\beta)$-GAN and WGANgp with $\ell_1$-Autoencoder can largely distinguish the open and closed structures, although several points are misclassified; the $\ell_2$-Autoencoder and the traditional techniques perform poorly because the clamp shape is hard to detect in their outputs. Furthermore, we use Isomap because it performs best in our case; comparisons of different manifold learning methods are shown in Fig.~\ref{fig: dif_clustering}(b), where the blue and red points separate most clearly in the Isomap plot. Specifically, the accuracies of the four methods are $50/60$ (spectral method), $46/60$ (MDS), $46/60$ (TSNE), and $54/60$ (Isomap), showing that Isomap distinguishes the images of the two structures best compared with the other methods: the spectral method (\cite{ng2002spectral}), MDS (\cite{cox2008multidimensional}), and TSNE (\cite{maaten2008visualizing}). \begin{figure}[htbp] \centering \includegraphics[width=6in]{imgs/new_clustering.PNG} \caption{2D visualization of the two-conformation images via manifold learning. Red and blue points represent the open and closed conformations, respectively. (a) 2D visualization of the two-conformation images by Isomap for different methods (from left to right, top to bottom): clean image, BM3D, KSVD, $\ell_2$-Autoencoder, $(1,1)$-GAN+ $\ell_1$, WGANgp+ $\ell_1$. (b) 2D visualization of the two-conformation images for different manifold learning methods (from left to right): spectral method, MDS, TSNE, and Isomap.} \label{fig: dif_clustering} \end{figure}
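For reference, the clustering pipeline sketched above can be written in a few lines with scikit-learn (a minimal sketch of ours; the number of Isomap neighbors is an assumed hyperparameter, as it is not specified in the text):
\begin{verbatim}
import numpy as np
from sklearn.manifold import Isomap
from sklearn.cluster import KMeans

def cluster_conformations(denoised, n_neighbors=5):
    # denoised: (n_images, H, W) array of denoised particle images.
    X = denoised.reshape(len(denoised), -1)
    # Nonlinear dimension reduction to 2D via Isomap.
    emb = Isomap(n_neighbors=n_neighbors, n_components=2).fit_transform(X)
    # Group into k = 2 clusters: open vs. closed clamp conformations.
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(emb)
    return emb, labels
\end{verbatim}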
\subsection*{\textbf{Convolution network}} \label{sec:convnet} We present the results of a simple deep convolutional network (removing the ResNet blocks); the performance under all criteria is worse than that of the residual architecture. Table \ref{tab:conv} compares the MSE and PSNR of the various methods on the RNAP dataset at SNR $0.1$ and $0.05$, and Fig. \ref{fig:dcgan}(a) displays the denoised images of the different methods on the RNAP dataset at SNR $0.05$. This shows the advantage of the residual structure for our GAN-based Cryo-EM denoising problem. \begin{table}[H] \footnotesize \renewcommand\arraystretch{1.5} \centering \caption{\label{tab:conv}MSE and PSNR of different models under various levels of Gaussian noise corruption on the RNAP dataset, where the architecture of the GANs and Autoencoders is a simple convolutional network.} \begin{tabular}{|c|c|c|c|c|} \hline & \multicolumn{2}{c|}{MSE} & \multicolumn{2}{c|}{PSNR} \\ \hline Method/SNR & 0.1 &0.05 &0.1 & 0.05 \\ \hline BM3D & 3.5\text{e-}2(7.8\text{e-}3)& 5.9\text{e-}2(9.9\text{e-}3) & 14.535(0.1452) & 12.134(0.1369) \\ \hline KSVD & 1.8\text{e-}2(6.6\text{e-}3) & 3.5\text{e-}2(7.6\text{e-}3)& 17.570(0.1578)& 14.609(0.1414) \\ \hline Non-local means & 5.0\text{e-}2(5.5\text{e-}3)&5.8\text{e-}2(8.9\text{e-}3)&13.040(0.4935)&12.404(0.6498) \\ \hline CWF & 2.5\text{e-}2(2.0\text{e-}3) & 9.3\text{e-}3(8.8\text{e-}4)& 16.059(0.3253)& 20.314(0.4129) \\ \hline $\ell_2$-Autoencoder & 4.0\text{e-}3(6.0\text{e-}4) & 6.7\text{e-}3(9.0\text{e-}4)& 24.202(0.6414)& 21.739(0.7219) \\ \hline $(0,0)$-GAN +$\ell_1$ & 3.8\text{e-}3(6.0\text{e-}4) & 5.6\text{e-}3(8.0\text{e-}4) & 24.265(0.6537) & 22.594(0.6314) \\ \hline WGANgp+$\ell_1$& \textbf{3.1\text{e-}3(5.0\text{e-}4)} & 5.0\text{e-}3(8.0\text{e-}4) & \textbf{25.086(0.6458)} & 23.010(0.6977) \\ \hline $(1,-1)$-GAN +$\ell_1$& 3.4\text{e-}3(5.0\text{e-}4) & \textbf{4.9\text{e-}3(9.0\text{e-}4)} & 24.748(0.7233) & \textbf{23.116(0.7399)} \\ \hline $(.5,-.5)$-GAN +$\ell_1$ & 3.5\text{e-}3(5.0\text{e-}4) & 5.6\text{e-}3(9.0\text{e-}4)& 24.556(0.6272) & 22.575(0.6441) \\ \hline \end{tabular} \end{table} \begin{figure}[t] \centering \includegraphics[width=6in]{imgs/figure8.PNG} \caption{(a) Denoised images using the convolutional network without the ResNet structure for different methods on the RNAP dataset at SNR 0.05 (from left to right, top to bottom): clean, noisy, BM3D, $\ell_2$-Autoencoder, KSVD, JS-GAN + $\ell_1$, WGANgp + $\ell_1$, $(1,-1)$-GAN + $\ell_1$, $(.5, -.5)$-GAN + $\ell_1$. (b) Denoised and reference images for different regularization strengths $\lambda$ (using $(.5,.5)$-GAN + $\lambda\,\ell_1$ as an example), corresponding to Table \ref{tab:lambda}. From left to right, top to bottom: clean image, $\lambda=0.1$, $\lambda=1$, $\lambda=5$, $\lambda=10$, $\lambda=50$, $\lambda=100$, $\lambda=500$, $\lambda=10000$.} \label{fig:dcgan} \end{figure} \subsection*{\textbf{Test RNAP dataset with the PGGAN strategy}} \label{sec:PGGAN} \HLL{PGGAN~(\cite{karras2018progressive}) is a popular method to generate high-resolution images from low-resolution ones by gradually adding layers to the generator and discriminator, which accelerates and stabilizes model training. Since Cryo-EM images have a large pixel size that fits the PGGAN method well, here we choose its structure\footnote{We set the same architecture and parameters as \url{https://github.com/nashory/pggan-pytorch} and the input image size is $128 \times 128$.} instead of the ResNet and convolution structures above to denoise Cryo-EM images.} Our experiments partially demonstrate two things: 1) the denoised images are sharper, though the MSE becomes higher; 2) we do not need to add the $\ell_1$ regularization to make model training stable, and outlier images can be detected for both real and simulated data without regularization.
\HLL{In detail, based on the PGGAN architecture and parameters, we test the following two objective functions developed in the section ``\nameref{sec:noisy modeling}'': WGANgp and WGANgp + $\ell_1$, on the RNAP simulated dataset at SNR $0.05$ as an illustrative example.} The denoised images are presented in Fig. \ref{fig:pggan}; we note that the model hardly collapses, regardless of whether the $\ell_1$ regularization is added. The MSE with regularization is $8.09\text{e-}3(1.46\text{e-}3)$, which is less than the $1.01\text{e-}2(1.81\text{e-}3)$ obtained without regularization. Nevertheless, neither exceeds the results based on the ResNet structure above, which shows that the PGGAN architecture is not more powerful than the ResNet structure here. An advantage of PGGAN, however, lies in its training efficiency, so it is an interesting problem to improve PGGAN toward the accuracy of the ResNet structure. Another point to highlight is that MSE may not be a good criterion, because the images denoised by PGGAN are clearer in some details than those of the preceding methods; this phenomenon also appears in Appendix ``\nameref{sec:lambda}''. How to find a better criterion to evaluate the models and how to combine the strengths of ResNet-GAN and PGGAN remain to be explored. \begin{figure}[htbp] \centering \includegraphics[width=6in]{imgs/pggan.PNG} \caption{\HLL{Denoised and reference images by PGGAN instead of the ResNet and convolution structures above on the RNAP dataset at SNR 0.05. The PGGAN strategy is tested with two objective functions: WGANgp + $\ell_1$ and WGANgp. (a) and (b) are denoised and reference images using PGGAN with WGANgp + $\ell_1$; (c) and (d) are denoised and reference images using PGGAN with WGANgp, respectively. The images highlighted in red show the structural differences between denoised and reference images, demonstrating that the PGGAN-denoised images can deviate structurally from the reference images.}} \label{fig:pggan} \end{figure} \subsection*{\textbf{Influence of the regularization parameter $\lambda$}} \label{sec:lambda} In this chapter, we add the $\ell_1$ regularization to make the model stable, but how to choose the $\lambda$ of the $\ell_1$ regularization is a significant question. Here we take the $(.5, .5)$-GAN to denoise the RNAP dataset at SNR $0.1$. From the results for different $\lambda$ in Table \ref{tab:lambda}, we find that as $\lambda$ tends to infinity, the MSE tends to that of the $\ell_1$-Autoencoder, which is reasonable; the MSE is smallest at $\lambda=10$. \HLL{Interestingly, a much clearer result is obtained at $\lambda=100$ than at $\lambda=10$, although the MSE is not the best (shown in Fig. \ref{fig:dcgan}(b)).} \begin{table}[H] \centering \caption{MSE, PSNR, and SSIM for different $\lambda$ in $(.5,.5)$-GAN + $\lambda\,\ell_1$ on the RNAP dataset.
\label{tab:lambda}} \begin{tabular}{|c|c|c|c|} \hline $\lambda$/criterion& MSE& PSNR&SSIM \\ \hline 0.1 & 3.06\text{e-}3(4.50\text{e-}5) & 25.22(0.07)& \textbf{0.82(0.06)}\\ \hline 1 & 3.05\text{e-}3(4.49\text{e-}5) & 25.24(0.06)& 0.81(0.05)\\ \hline 5 & 3.03\text{e-}3(2.80\text{e-}5) & 25.26(0.04) & 0.80(0.04) \\ \hline 10 & \textbf{3.01\text{e-}3(2.81\text{e-}5)} & \textbf{25.27(0.04)} & 0.79(0.04) \\ \hline 50 & 3.07\text{e-}3(3.95\text{e-}5) & 25.20(0.06)& 0.79(0.02) \\ \hline 100 & 3.11\text{e-}3(5.96\text{e-}5) &25.15(0.06) & 0.80(0.02) \\ \hline 500 & 3.17\text{e-}3(5.83\text{e-}5) &25.01(0.07)&0.78(0.04) \\ \hline 10000 & 3.17\text{e-}3(2.90\text{e-}5) &25.03(0.04) &0.79(0.04) \\ \hline \end{tabular} \end{table} \label{app:robusttable} \addcontentsline{toc}{section}{\textbf{Reference}} \bibliographystyle{spbasic}
\section{Independent and correlated errors} \label{corr} Before presenting a study of correlated errors, it is worth discussing exactly what is and what is not a correlated error. For illustrative purposes, we center the discussion around a hypothetical quantum computer consisting of a 2-D array of mobile spins on a cooled substrate. A global magnetic field $B_z$ sets the energy difference between $|0\rangle$ and $|1\rangle$. Local solenoids above and below the default location of each spin provide localized AC and DC fields to drive arbitrary single-qubit rotations. Pairs of spins are moved into close proximity to raise the strength of the magnetic dipole interaction and implement two-qubit entangling gates. For simplicity, we also imagine the solenoids can be used, when desired, as sensitive magnetic field detectors for qubit readout. See Fig.~\ref{arch}. This example maps well to architectures based on superconducting qubits \cite{Bare13a}, spin qubits \cite{Holl06}, and quantum dots \cite{Loss98}, and has features common to architectures based on ion traps \cite{Kiel02}, optical lattices \cite{Bren99}, and many others. \begin{figure} \begin{center} \includegraphics[width=75mm]{arch} \end{center} \caption{(Color online) Hypothetical quantum computer architecture consisting of mobile spins on a cold substrate, each spin with its own hypothetical solenoid for readout and single-qubit gates, with two-qubit gates achieved by bringing neighboring spins closer together to increase the strength of the magnetic dipole interaction. This example has features in common with many physical architectures under investigation, especially those based on superconducting qubits.}\label{arch} \end{figure} We now consider various error sources, and whether they are, or are not, correlated error sources. Firstly, we consider small fluctuations in the global magnetic field $B_z$, which will lead to small undesired systematic $Z$ rotations on all qubits. At first glance, this may seem like the ultimate correlated error. However, provided the fluctuations are small and error detection is frequent, each individual small-angle $Z$ rotation will just look like a small probability of a $Z$ error on each qubit. When performing error detection, most of the time no errors will be detected, as unwanted phase rotations will be removed by observation the majority of the time. This is a special case of the quantum Zeno effect \cite{Misr77}. Any detected errors will appear random and independent. A global fluctuating field leads to a correlated \emph{probability} of error $p$ on all qubits, but the errors themselves will not be correlated, and the probability of errors from this noise source on any given pair of qubits will be $p^2$. Note that it is critical that the fluctuations are small and error detection frequent --- for example, if a global $\pi$ rotation accumulates, this will indeed lead to a global correlated error.
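To put rough numbers on this projection argument (a back-of-envelope sketch of ours, using the standard fact that measuring the stabilizers after an unwanted rotation $R_z(\theta)=\cos(\theta/2)I - i\sin(\theta/2)Z$ projects onto a $Z$ error with probability $\sin^2(\theta/2)$):
\begin{verbatim}
import numpy as np

theta = 0.02                # small unwanted Z-rotation per round (radians)
p = np.sin(theta / 2) ** 2  # probability detection projects onto a Z error
print(p)       # ~1e-4: a small, independent error probability per qubit
print(p ** 2)  # ~1e-8: chance of errors on any given pair of qubits
\end{verbatim}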
Secondly, consider crosstalk when driving a single-qubit gate. Under the assumption of widely separated spins and small solenoids, each solenoid will look like a magnetic dipole and the field seen by other spins will decay cubically with separation. The driving field will therefore induce cubically decaying small-angle rotations in all spins in the computer. For the same reason that small fluctuations in the global field $B_z$ do not lead to correlated errors, this polynomially decaying crosstalk will also not lead to correlated errors. Provided the total error seen by any given qubit as a result of the sum of all crosstalk from all other actively manipulated qubits remains small, errors seen by the quantum error detection machinery will remain independent and sufficiently rare to be correctable. Thirdly, consider the possibility that our hypothetical quantum computer is unshielded and located near an infrequent but energetic radiation source. Consider a hypothetical energetic particle that locally strongly heats the substrate on impact, but otherwise causes no physical degradation of the system. Imagine that the heating thermally randomizes spins in some neighborhood of the impact, with the neighborhood size proportional to the energy of the impact, and the probability distribution of increasingly energetic impacts decaying exponentially. Suppose furthermore that the cooling power per unit area of the substrate is sufficiently high to remove the excess heat in a small constant amount of time. This hypothetical scenario would lead to spatially correlated large-area errors, with larger areas exponentially suppressed. Noise of this generic form shall be considered in Section~\ref{sc}, which also considers the possibility of only polynomial suppression of larger-area errors. Fourthly, consider direct magnetic dipole spin-spin interactions. A pair of antiparallel spins can spontaneously flip with no increase or decrease in the energy of the system. The probability of this occurring is proportional to the interaction strength, which decays cubically. Pairwise noise of this form is two-body correlated noise. Note that each pairwise noise event requires the exchange of a virtual photon, so multiple pairwise noise events are random and uncorrelated. We shall consider noise of this generic form in Section~\ref{2q}. Finally, imagine that spins are sufficiently separated to make the direct dipole-dipole interaction negligible; however, there are elements in the physical construction that behave like inductive loops around each column of spins. These could be control lines or long-range qubit-qubit coupling elements. Now any pair of initially antiparallel spins in a given column can flip. Such a noise source would not be suppressed with increasing qubit separation. This form of noise shall also be considered in Section~\ref{2q}. Undoubtedly other forms of noise could be considered; however, we feel that the four correlated error classes listed above, namely 1) large-area exponentially decaying, 2) large-area polynomially decaying, 3) arbitrary qubit pairs polynomially decaying, and 4) qubit pairs in columns non-decaying, cover the vast majority of basic behaviors likely to be found in physical devices. We would be happy to extend our work to cover other error classes of interest to the community, and welcome suggestions. \section{Surface code performance with local many-qubit errors} \label{sc} For our purposes, a distance $d$ surface code is simply a $(2d-1)\times (2d-1)$ 2-D array of qubits capable of protecting a single qubit of data by periodically executing a particular quantum circuit designed to detect errors \cite{Fowl12f}. If we assume that each quantum gate in the periodic circuit has an error rate $p$, then given a distance $d$ surface code we can use simulations to calculate the probability of a logical error per round of error detection $p_L$, namely the probability $p_L$ that we fail to protect the single qubit of data distributed across the lattice of qubits.
Fig.~\ref{logx_ft_c} shows $p_L$ as a function of $p$ and $d$ using asymptotically optimal error suppression techniques \cite{Fowl13g}. This is our baseline performance; introducing large-area errors will degrade it. \begin{figure} \begin{center} \begin{tikzpicture} \node[anchor=south west,inner sep=0] at (0,0) {\includegraphics[width=85mm, viewport=60 60 545 430, clip=true]{logicalx_ft_c}}; \node [below right] at (1.5,6) {no correlated errors}; \end{tikzpicture} \end{center} \caption{(Color online) Probability of logical $X$ error per round of fault-tolerant error detection $p_L$ as a function of the depolarizing error probability $p$ for a range of distances $d=3, \ldots, 25$ when exploiting knowledge of correlations between $X$ and $Z$ errors. Referring to the left of the figure, the distance increases top to bottom. Quadratic, cubic, and quartic lines (dashed) have been drawn through the lowest distance 3, 5, 7 data points obtained, respectively.}\label{logx_ft_c} \end{figure} Consider Fig.~\ref{didj}, which defines two quantities $\Delta i$, $\Delta j$ that have meaning during the application of a quantum gate and will enable us to define our error models. We shall consider two particularly severe models of many-qubit errors, each with a single tunable parameter $n$ determining its strength. Unlike in Section~\ref{corr}, where large-area errors were motivated by a particle impact example, we shall associate such errors with every application of every quantum gate. When applying a gate with error rate $p$, a single random number $x$ is generated. If $x<p$, the qubits involved in the gate will suffer random equally likely Pauli errors (with no chance of an identity error). Every other qubit in the surface code will suffer random equally likely errors $I$, $X$, $Y$, $Z$ if at the location of the qubit $x<p/n^{\Delta i+\Delta j}$ (exponential model) or $x<0.1p/r^n$, where $r=\sqrt{\Delta i^2+\Delta j^2}$ (polynomial model). The motivation behind the exponential model's use of a non-Euclidean metric is qubits with a negligible direct qubit-qubit interaction that must instead be coupled via physical devices that are themselves non-interacting. In this scenario, qubits are physically well separated. The hypothetical energetic particle discussed in Section~\ref{corr} should be imagined as significantly raising the temperature or photon count of a specific component, and each successive device should provide additional isolation, leading to Manhattan-distance exponential suppression of the unwanted effects. The polynomial model is motivated by qubits that are closely spaced, with thermal errors radiating through the substrate. All gates, including initialization, Hadamard, CNOT, measurement, and identity, are assumed to have a non-zero probability of suffering from such large-area errors during their implementation. \begin{figure} \begin{center} \includegraphics[width=55mm]{didj} \end{center} \caption{Each dot represents a qubit. If a quantum gate is applied to the two qubits within the vertical rectangle, the qubit within the square is said to have $\Delta i=2$ and $\Delta j=2$, namely the minimum $(i, j)$ coordinate differences with any qubit acted on by the gate.}\label{didj} \end{figure}
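Under these definitions, sampling one gate's error footprint in either model can be sketched as follows (a minimal NumPy sketch of ours, not the simulation code used for the results below):
\begin{verbatim}
import numpy as np

def large_area_error_mask(shape, gate_qubits, p, n, model="exp", rng=None):
    # Sample one gate's error footprint on a (2d-1) x (2d-1) array.
    # gate_qubits: (i, j) coordinates of the qubits acted on by the gate.
    rng = rng or np.random.default_rng()
    x = rng.random()  # a single random number per gate application
    ii, jj = np.indices(shape)
    # Delta i, Delta j: minimum coordinate differences to any gate qubit.
    di = np.min([abs(ii - i) for i, _ in gate_qubits], axis=0)
    dj = np.min([abs(jj - j) for _, j in gate_qubits], axis=0)
    if model == "exp":
        thresh = p / n ** (di + dj)  # Manhattan-metric suppression
    else:
        r = np.sqrt(di ** 2 + dj ** 2)
        with np.errstate(divide="ignore"):
            # Gate qubits (r = 0) err at the base rate p.
            thresh = np.where(r > 0, 0.1 * p / r ** n, p)
    return x < thresh  # qubits at which a random Pauli error is drawn

# Example: a two-qubit gate near the middle of a d = 5 code (9 x 9 array).
mask = large_area_error_mask((9, 9), [(4, 4), (5, 4)], p=1e-3, n=10)
\end{verbatim}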
Figure~\ref{exp} shows the performance of the surface code with exponential-model large-area errors and $n=2$, 10, 100, and 1000. It can be seen that performance is still measurably degraded even for $n=1000$; however, strong exponential suppression of logical error at fixed $p$ can still be achieved even for $n=10$. To be quantitative, at an operating error rate of $p=10^{-3}$, in the absence of large-area errors (Fig.~\ref{logx_ft_c}), a distance $d=7$ surface code achieves a logical error rate per round of error detection of $p_L=2.0\times 10^{-6}$. For $n=1000$, this is degraded to $p_L=2.4\times 10^{-6}$. This level of degradation would have negligible practical impact, with very slightly larger code distances required to compensate. Even for $n=10$, where the logical error rate is degraded to $p_L=6.7\times 10^{-5}$, the degradation can be fully compensated by using a larger $d=11$ code, leading to an approximate factor of $(11/7)^2\sim 2.5$ additional qubits, independent of the size of the quantum computation protected in this manner. A factor of 2.5 overhead is significant but not excessively onerous, and we therefore claim that even quite moderate exponential suppression of large-area errors is tolerable in a practical manner when using the surface code.
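As a quick arithmetic check of this overhead estimate (ours, using the $(2d-1)\times(2d-1)$ array size defined above):
\begin{verbatim}
# Physical qubits in a distance-d surface code patch: (2d-1)^2.
def qubits(d):
    return (2 * d - 1) ** 2

print(qubits(7), qubits(11), qubits(11) / qubits(7))
# 169 441 ~2.6, consistent with the (11/7)^2 ~ 2.5 estimate.
\end{verbatim}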
If large-area errors are only quadratically suppressed, adding an additional ring of qubits at distance $r$ from any given qubit adds an $O(1/r)$ amount of error to that qubit (a ring at distance $r$ contains $O(r)$ qubits, each contributing an error of order $p/r^2$), and the accumulated error $\sum_{r} O(1/r)$ diverges logarithmically with lattice size; hence larger lattices of qubits will always be more error-prone and no threshold error rate will exist. For any rate of suppression greater than quadratic, arbitrarily reliable quantum computation can be achieved in principle, as the total error seen by any given qubit in an infinite lattice of qubits is bounded by a multiple of $p$. At a moderately high error rate such as $p=10^{-3}$ and modest code distances, the dominant logical error contribution is from multiple temporally local errors. Such logical errors are exponentially suppressed with increasing code distance. To be explicit, for $n=4$, the polynomial of best fit through the data at $p=10^{-3}$ is order 8 in $d$, and for $n=5$ the best fit polynomial is order 16, clearly demonstrating that, in the high $p$ low $d$ regime, logical errors from single large-area physical errors are not dominant. At very large code distances, the quadratic growth of the number of gates per round of error detection and the exponential suppression of logical errors from multiple temporally local gate errors are expected to lead to weak $O(1/d^{n-2})$ suppression of logical error due to single very large area errors; however, this regime is outside what we can currently reach with simulations. Based only on currently accessible parameter ranges, at $p=10^{-3}$ the polynomial $n=4$ and $n=5$ overhead to achieve a given logical error rate is similar to the exponential $n=10$ overhead. If the computation being protected by the surface code is not too large, it therefore may well be the case that the desired logical error rate can be reached without excessive overhead with only polynomial suppression of large-area errors at the physical level. Formally, however, it should be noted that the resources required to achieve computation with logical error $\epsilon$ would grow polynomially with $1/\epsilon$ for sufficiently small $\epsilon$, which is not efficient in the computer science sense. \begin{figure*} \begin{center} \begin{tikzpicture} \node [anchor=south west, inner sep=0] at (0,7) {\includegraphics[width=85mm, viewport=60 60 545 430, clip=true]{poly2}}; \node [below right] at (0,13) {a)}; \node [below right] at (1.5,13) {$n=2$}; \node [anchor=south west, inner sep=0] at (9.5,7) {\includegraphics[width=85mm, viewport=60 60 545 430, clip=true]{poly3}}; \node [below right] at (9.5,13) {b)}; \node [below right] at (11,13) {$n=3$}; \node [anchor=south west, inner sep=0] at (0,0) {\includegraphics[width=85mm, viewport=60 60 545 430, clip=true]{poly4}}; \node [below right] at (0,6) {c)}; \node [below right] at (1.5,6) {$n=4$}; \node [anchor=south west, inner sep=0] at (9.5,0) {\includegraphics[width=85mm, viewport=60 60 545 430, clip=true]{poly5}}; \node [below right] at (9.5,6) {d)}; \node [below right] at (11,6) {$n=5$}; \end{tikzpicture} \end{center} \caption{(Color online) Probability of surface code logical $X$ error per round of error detection for various code distances $d$ and physical error rates $p$ when increasingly large area errors occur with probability a) $0.1p/r^{2}$, b) $0.1p/r^{3}$, c) $0.1p/r^{4}$, d) $0.1p/r^{5}$. Referring to the left of each graph, distance increases top to bottom. When suppression is quadratic, arbitrarily low logical error rates cannot be achieved at any finite value of $p$.
For higher order polynomial suppression, arbitrarily low logical error rates can be achieved; however, logical error is only suppressed polynomially with code distance, which may in some cases lead to unacceptable qubit overhead.}\label{poly} \end{figure*} \section{Surface code performance with non-local two-qubit errors} \label{2q} Only two-body interactions are observed in nature between fundamental particles, meaning the large-area multi-qubit errors considered in the previous Section could only arise from uncontrolled \emph{engineered} multi-qubit interactions within a quantum computer or from other exotic effects such as the radiation heating model described in Section~\ref{corr}. Unwanted two-body interactions, such as uncompensated Coulomb or magnetic dipole interactions, give rise to qualitatively and quantitatively different behavior. In this Section, we shall focus on long-range effects, and will therefore not consider interactions that decay exponentially quickly. As we shall see, even weakly polynomially decaying long-range interactions are quite tolerable, further justifying not considering exponentially decaying two-body interactions. Any interaction between qubits is a potential source of unwanted evolution and hence error. When simulating the surface code using an array of qubits with polynomially decaying two-body interactions, if the characteristic gate error rate is $p$, at the beginning of each round of error detection each pair of qubits shall be modeled as suffering two-qubit depolarizing noise with probability $Ap/r^n$. We shall focus on the most severe $n=2$ case, and two values $A=1$ and $A=0.1$. The performance of the surface code with these two different levels of additional noise is shown in Fig.~\ref{poly2q}. \begin{figure} \begin{center} \begin{tikzpicture} \node [anchor=south west, inner sep=0] at (0,7) {\includegraphics[width=85mm, viewport=60 60 545 430, clip=true]{poly2bodyA1}}; \node [below right] at (0,13) {a)}; \node [below right] at (1.5,13) {$p/r^2$}; \node [anchor=south west, inner sep=0] at (0,0) {\includegraphics[width=85mm, viewport=60 60 545 430, clip=true]{poly2bodyA0_1}}; \node [below right] at (0,6) {b)}; \node [below right] at (1.5,6) {$0.1p/r^2$}; \end{tikzpicture} \end{center} \caption{(Color online) Probability of surface code logical $X$ error per round of error detection for various code distances $d$ and physical error rates $p$ when all pairs of qubits suffer two-qubit noise once per round of error detection with probability a) $p/r^2$, and b) $0.1p/r^2$. Referring to the left of each graph, distance increases top to bottom.}\label{poly2q} \end{figure} It should be stressed that, as with quadratically suppressed large-area errors, any qubit in an infinite 2-D lattice of qubits will suffer unbounded error and the surface code will fail. However, it can be seen that for the finite-size qubit arrays considered in simulations, robust suppression of logical error can be achieved even for the most severe $A=1$ case. The effect of the lack of a threshold error rate can be observed at $p=2\times 10^{-3}$, where the $d=25$ logical error rate is higher than that for $d=11$. Nevertheless, at error rates $p<10^{-3}$, the observed logical error rate suppression trend with increasing code distance suggests that extremely low logical error rates can be achieved before further increases in code distance become counterproductive.
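The pairwise channel just described is simple to state precisely in code. Below is a minimal Python sketch, assuming a $d \times d$ grid and uniform depolarizing noise over the 15 non-identity two-qubit Paulis; the names and data layout are ours, not those of the simulations reported here.
\begin{verbatim}
import itertools
import math
import random

# the 15 non-identity two-qubit Pauli labels
TWO_QUBIT_PAULIS = [a + b for a in "IXYZ" for b in "IXYZ"][1:]

def long_range_pair_errors(d, p, A=1.0, n=2):
    """Sample the start-of-round long-range errors on a d x d lattice:
    each qubit pair suffers two-qubit depolarizing noise with
    probability A * p / r**n, r the Euclidean separation."""
    qubits = list(itertools.product(range(d), repeat=2))
    errors = []
    for a, b in itertools.combinations(qubits, 2):
        r = math.dist(a, b)
        if random.random() < A * p / r ** n:
            errors.append((a, b, random.choice(TWO_QUBIT_PAULIS)))
    return errors
\end{verbatim}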
Note that for $A=1$ and $p\leq 5\times 10^{-4}$ the observed logical error rates are less than or equal to those observed for $n=10$ exponential large-area errors, meaning the overhead will be less than the factor of 2.5 calculated in the previous Section, for moderate values of $d$. By making the computer quasi 2-D, namely a finite-width 1-D strip, the physical error seen at any given qubit would only grow logarithmically with increasing strip length, very likely permitting a usefully large number of logical qubits with usefully low logical error rates to be achieved. Other techniques such as building an array with carefully arranged walls capable of shielding the problematic interaction, or coupling widely separated finite arrays with other types of quantum communication are also possible. In short, even severe long-range two-qubit quantum errors that are only suppressed quadratically with increasing qubit separation can be handled with practical overhead. The final class of errors we shall consider consists of those arising from large-scale coupling elements that interact with many qubits, specifically entire columns of the surface code in the situation we shall model. Our basic motivating system is a chain of spins in a global magnetic field with a shared inductive loop. Any pair of antiparallel spins can spontaneously flip, so we shall model this as a probability $Ap$ of error for every qubit pair in each column. Note that there is no suppression of this error with increasing qubit separation. Since any given qubit in a column has an increasing number of potential partners to flip with as the size of the surface code grows, there will again be no formal threshold error rate. We again focus on $A=1$ and $A=0.1$. Data is shown in Fig.~\ref{const2q}. \begin{figure} \begin{center} \begin{tikzpicture} \node [anchor=south west, inner sep=0] at (0,7) {\includegraphics[width=85mm, viewport=60 60 545 430, clip=true]{const2bodyA1}}; \node [below right] at (0,13) {a)}; \node [below right] at (1.5,13) {$p$}; \node [anchor=south west, inner sep=0] at (0,0) {\includegraphics[width=85mm, viewport=60 60 545 430, clip=true]{const2bodyA0_1}}; \node [below right] at (0,6) {b)}; \node [below right] at (1.5,6) {$0.1p$}; \end{tikzpicture} \end{center} \caption{(Color online) Probability of surface code logical $X$ error per round of error detection for various code distances $d$ and physical error rates $p$ when all pairs of qubits in each column of the surface code suffer two-qubit noise once per round of error detection with probability a) $p$, and b) $0.1p$. Referring to the left of each graph, distance increases top to bottom.}\label{const2q} \end{figure} For $A=1$ (Fig.~\ref{const2q}a), it can be seen that at $p=10^{-3}$ the lowest possible logical error rate is achieved with a distance 15 code. Fig.~\ref{poly2q} and Fig.~\ref{const2q} are qualitatively very similar: in both an array of qubits with quadratically suppressed interactions between all pairs of qubits and an array of qubits whose columns are coupled by single devices introducing errors with no suppression with distance, the total error seen by any given qubit grows without bound as the code distance increases. When $A=0.1$ (Fig.~\ref{const2q}b), at $p=10^{-3}$ it can be seen that very low logical error rates can be achieved with modest code distances. Again, despite the lack of a threshold error rate, it can be seen that this class of errors is tolerable with low overhead in practice.
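The common feature of these two channels, an unbounded per-qubit error budget, can be checked with a few lines of arithmetic. The sketch below is a rough union-bound tally, under our assumed square-lattice layout, of the pairwise error probability seen by a central qubit in each model.
\begin{verbatim}
import itertools

def pairwise_quadratic_load(d, p=1e-3, A=1.0):
    """Sum of A*p/r^2 over all partners of the central qubit of a
    d x d lattice (the per-qubit budget of the all-pairs model)."""
    c = d // 2
    total = 0.0
    for i, j in itertools.product(range(d), repeat=2):
        r2 = (i - c) ** 2 + (j - c) ** 2
        if r2 > 0:
            total += A * p / r2
    return total

def column_load(d, p=1e-3, A=1.0):
    """Per-qubit budget of the column model: d - 1 partners, each A*p."""
    return (d - 1) * A * p

for d in (5, 11, 25, 51):
    print(d, round(pairwise_quadratic_load(d), 5), column_load(d))
\end{verbatim}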
\section{Conclusion} \label{conc} We have shown that moderate exponential suppression of large-area errors is sufficient to observe strong exponential suppression of logical error with increasing code distance $d$. A factor of 10 suppression of each successively larger area class of physical errors leads to just a factor of 2.5 additional qubits to achieve the same logical error observed without large-area errors when gates have characteristic error $p=10^{-3}$. Overhead is negligible ($<$10\%) for moderately large algorithm sizes and a factor of suppression of $10^3$. Since 5+ body errors are expected to be exceedingly rare in most physical setups, it is reasonable to expect that this higher level of error suppression is experimentally achievable and that large-area errors can therefore mostly be ignored when analyzing the surface code. A second class of errors, namely long-range two-qubit errors, has been shown to be remarkably tolerable, with even lower overhead than the exponentially suppressed large-area errors for $p\lesssim 10^{-3}$. This is surprising as such noise, from a formal point of view, results in no threshold error rate, meaning arbitrarily reliable quantum computation cannot be achieved at any finite error rate. Nevertheless, sufficiently low logical error rates for practical purposes can be achieved with modest code distances. In all cases where a threshold error rate exists, it remains well above $10^{-3}$ and in most cases does not stray far from the baseline threshold error rate of approximately 0.5\%. This is in line with expectations as the correlated errors introduced in the simulations are typically at least an order of magnitude less likely than the baseline gate errors, meaning they have low impact around the threshold error rate. The only exception to this is $n=2$ exponentially suppressed large-area errors, where weight 5 errors, for example, are only half as likely as single-qubit errors, resulting in a degradation of the threshold error rate to just above $10^{-3}$. Collectively, these results imply that large-area and long-range errors pose no fundamental barriers to practical large-scale quantum computation, as both classes of error, from a practical point of view, can be well handled by the surface code. Experimentally, the implication is that, in a large device, one should focus on the gate error rate observed when the maximum possible number of qubits in the array are being actively manipulated in parallel. The parallel error rate is the figure of merit required to determine whether a physical device can be used to achieve low logical error rates. \section{Acknowledgements} \label{ack} We thank Daniel Gottesman for suggesting this project, and Rami Barends, Julian Kelly, Daniel Sank, Evan Jeffrey, Ted White, and John Preskill for helpful discussions. This research was funded by the US Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), through the US Army Research Office grant No. W911NF-10-1-0334. Supported in part by the Australian Research Council Centre of Excellence for Quantum Computation and Communication Technology (CE110001027) and the U.S. Army Research Office (W911NF-13-1-0024). All statements of fact, opinion or conclusions contained herein are those of the authors and should not be construed as representing the official views or policies of IARPA, the ODNI, or the US Government.
\section{Introduction} \label{section:introduction} \subsection{Fingerprint features, matching and individuality} Fingerprint individuality is the study of the extent of uniqueness of fingerprints and is the central premise for expert testimony in court. Assessment of individuality is made based on comparing features from two fingerprint images such as in Figure~\ref{fig:impostor:match}. A fingerprint image consists of alternating dark and light smooth flow lines termed ridges and valleys. The fingerprint feature formed when a ridge occasionally either terminates or bifurcates is called a minutia; the minutia types ``ending'' and ``bifurcation'' correspond to the type of ridge anomaly that occurs. The location of a minutia is denoted by $x$, where $x \in D$ and $D$ is the image domain. Figure~\ref{fig:impostor:match} identifies all minutiae as white squares in the two fingerprint images; the images are from the publicly available database FVC2002. Each white line emanating from the center of the square represents the minutia direction, denoted by $u$ with $u \in (0,2\pi]$, which is (i) the direction of the merged ridge flow for a minutia bifurcation, and (ii) the direction pointing away from the ridge flow for a minutia ending; see Figure~\ref{fig:impostor:match}. Minutia information of a fingerprint consists of the collection of locations and directions, $(x,u)$, of all minutiae in the image. \begin{figure} \includegraphics{734f01.eps} \caption{A total of $35$ and $49$ minutiae were detected in the left and right images, respectively, and $w=7$ correspondences (i.e., matches) were found. The white squares and lines, respectively, represent the minutia location and direction. Note that for a minutia ending, the white line points away from the ridge flow, whereas for a minutia bifurcation, the white line points along the direction of the merged ridge. Images taken from the publicly available database FVC2002 DB2.} \label{fig:impostor:match} \end{figure} Minutiae in fingerprint images are extracted using pattern recognition algorithms. In this paper, we used the algorithm described in Zhu, Dass and Jain (\citeyear{ZDJ07}) for minutia extraction. Minutia information is easy to extract, and believed to be permanent and unique; that is, minutia information is believed to stay the same over time, and different individuals are believed to have distinct minutia patterns, making minutiae a popular means of identifying individuals in the forensics community. For a pair of prints, a minutia $(x,u)$ in one print is said to match a minutia $(y,v)$ in another print if \begin{equation} \label{distances} |x-y|_{d} < r_0 \quad\mbox{and}\quad |u-v|_{a}< u_0 \end{equation} for prespecified small positive numbers $r_0$ and $u_0$, where $|x-y|_{d}$ and $|u-v|_{a}$, respectively, denote the Euclidean distance in $R^{2}$ and the angular distance \begin{equation} \label{angulardistance} |u-v|_{a} = \operatorname{min}\bigl\{|u-v|,2\pi-|u-v|\bigr\}. \end{equation} We subsequently assume that a set of minutiae has already been extracted (or detected) for every fingerprint image under study. Fingerprint-based authentication proceeds by determining the highest possible number of minutia matches between a pair of prints. This is achieved by an optimal rigid transformation that brings the two sets of minutiae as close to each other as possible and then counting the number of minutia pairs $(x,u)$ and $(y,v)$ that satisfy (\ref{distances}).
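The match criterion in (\ref{distances})--(\ref{angulardistance}) is easy to state in code. The following is a minimal Python sketch of the test for a single pair of minutiae; the function names are ours, and $r_0$, $u_0$ are left as inputs since they are treated as prespecified constants.
\begin{verbatim}
import math

def angular_distance(u, v):
    """|u - v|_a = min(|u - v|, 2*pi - |u - v|), directions in (0, 2*pi]."""
    return min(abs(u - v), 2 * math.pi - abs(u - v))

def minutiae_match(x, u, y, v, r0, u0):
    """True if minutia (x, u) matches (y, v): locations within r0
    (Euclidean) and directions within u0 (angular)."""
    location_close = math.hypot(x[0] - y[0], x[1] - y[1]) < r0
    return location_close and angular_distance(u, v) < u0
\end{verbatim}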
For example, the number of minutia matches between the prints in the left and right panels in Figure~\ref{fig:impostor:match} is $w \equiv 7$. A reasonably high degree of match (high $w$) between the two sets of minutiae leads forensic experts to testify irrefutably that the owner of the two prints is one and the same person. Central to establishing an identity based on fingerprint evidence is the assumption of discernible uniqueness; fingerprint minutiae of different individuals are observably different and, therefore, when two prints share many common minutiae, the experts conclude that the two different prints are from the same person. The primary concern here is what constitutes a ``reasonably high degree of match.'' For example, in Figure~\ref{fig:impostor:match}, is $w=7$ large enough to conclude that the two prints come from the same person? When fingerprint evidence is presented in a court, the testimony of experts is almost always included and, on cross-examination, the foundations and basis of this testimony are rarely questioned. However, in the case of Daubert vs. Merrell Dow Pharmaceuticals (\citeyear{Daubert}), the U.S. Supreme Court ruled that in order for expert forensic testimony to be allowed in courts, it had to be subject to the criteria of scientific validation [see Pankanti, Prabhakar and Jain (\citeyear{PPJ02}) for details]. Following the Daubert ruling, forensic evidence based on fingerprints was first challenged in the case of U.S. vs. Byron C. Mitchell (\citeyear{USvsByronCMitchell}) and, subsequently, in 20 other cases involving fingerprint evidence. The main concern with an expert's testimony as well as admissibility of fingerprint evidence is the problem of individualization. The fundamental premise for asserting the extent to which the prints match each other (i.e., the extent of uniqueness of fingerprints) has not been scientifically validated and matching error rates are unknown [Pankanti, Prabhakar and Jain (\citeyear{PPJ02}), Zhu, Dass and Jain (\citeyear{ZDJ07})]; see also the National Academy of Sciences (NAS) report (\citeyear{NAS:2009}). The central question in a court of law is ``What is the uncertainty associated with the experts' judgement when matches are decided by fingerprint evidence?'' How likely is it that an erroneous decision is made for a given latent print? The main issue with expert testimony is the lack of quantification of this uncertainty in the decision. To address these concerns, several research investigations have proposed measures that characterize the extent of fingerprint individuality; see Pankanti, Prabhakar and Jain (\citeyear{PPJ02}), Zhu, Dass and Jain (\citeyear{ZDJ07}) and the references therein. The primary aim of these measures is to capture the inherent variability and uncertainty (in the expert's assessment of a ``match'') when an individual is identified based on fingerprint evidence. A measure of fingerprint individuality is given by the probability of a random correspondence (PRC), which is the probability that two sets of minutiae, one from the query containing $m_1$ minutiae and the other from the template fingerprint containing $m_2$ minutiae, randomly correspond to each other with at least $w$ matches. Since large (resp., small) $w$ is a measure of the extent of similarity (resp., dissimilarity) between a pair of fingerprints, the PRC should be a decreasing function of~$w$.
Mathematically, the PRC corresponding to $w$ matches is given by \begin{equation} \label{PRC} \operatorname{PRC}(w | m_1, m_2) = P(\mathcal{S} \ge w | m_1, m_2), \end{equation} where the random variable $\mathcal{S}$ denotes the number of minutia matches that result when two arbitrary fingerprints from a target population are paired with each other; the notation used in (\ref{PRC}) also emphasizes the dependence of the PRC on the total number of minutiae detected in the two prints, that is, $m_1$ and $m_2$. It is clear from the above formula that $\operatorname{PRC}(w | m_1, m_2)$ decreases as $w$ increases for fixed $m_1$ and~$m_2$. The distribution of $\mathcal{S}$ is governed by the extent to which minutia matches occur randomly, an unknown quantity which has been modeled in a variety of ways in the literature. The main aim of research in fingerprint individuality is to obtain reliable inference on the PRC, which is also unknown as a result of the unknown extent of random minutia matches. \subsection{Minutia matching models} \label{subsection:databasevalidation} Several fingerprint minutia matching models and expressions for the PRC have been developed from a completely theoretical perspective [see Pankanti, Prabhakar and Jain (\citeyear{PPJ02}) for a detailed account of these models]. These models are distributions elicited for minutia occurrences, that is, the distributions that arise from viewing each minutia $(x,u)$ as a random occurrence in the fingerprint, occurring independently of each other. A major drawback of these works is that the models for minutia occurrences, and consequently matching probabilities and PRCs, are not validated based on actual fingerprint images. This effort was first carried out in Pankanti, Prabhakar and Jain (\citeyear{PPJ02}) based on real fingerprint databases, albeit for a simple minutia matching model. Subsequent improvements on validating matching models based on actual fingerprint databases have been carried out in Zhu, Dass and Jain (\citeyear{ZDJ07}) and several other works. Zhu, Dass and Jain (\citeyear{ZDJ07}), for example, demonstrated that when the number of minutiae in the prints is moderate to large, the distribution of $\mathcal{S}$ in (\ref{PRC}) can be approximated by a Poisson distribution with mean $\lambda$, the expected number of random matches, given by \begin{equation} \label{lambdaf1f2} \lambda=\lambda(f_1,f_2)= m_1 m_2 \biggl( 2\pi r_{0}^{2} u_{0} \int_{D} \int_{(0,2\pi]} f_{1}(x,u) f_{2}(x,u) \,dx \,du \biggr), \end{equation} based on the minutia distributions $f_1$ and $f_2$ for prints $1$ and $2$, respectively; in~(\ref{lambdaf1f2}), $m_1$ and $m_2$ are, respectively, the number of minutiae in prints $1$ and $2$, and $r_0$ and $u_0$ are as in (\ref{distances}). The above formula for the mean number of random matches can be understood in the following way: The total number of pairings available for random matching from $m_1$ minutiae in print $1$ and $m_2$ minutiae in print $2$ is $m_1 m_2$. The expression $2\pi r_{0}^{2} u_{0} \int_{D} \int_{(0,2\pi]} f_{1}(x,u) f_{2}(x,u) \,dx \,du$ represents the probability of a single random match when $r_0$ and $u_0$ are small compared to the image size. It follows from the expected value of a binomial distribution that $\lambda$ in (\ref{lambdaf1f2}) is the mean number of successes (i.e., random matches).
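Under this Poisson approximation, the PRC in (\ref{PRC}) reduces to an upper-tail Poisson probability, and the bracketed single-match probability in (\ref{lambdaf1f2}) can be estimated by Monte Carlo once minutia densities are available. The sketch below is illustrative only: \texttt{f1\_sampler} and \texttt{f2\_pdf} are hypothetical stand-ins for fitted mixture densities, and the numeric value of the single-match probability in the example is made up.
\begin{verbatim}
import math
import numpy as np

def prc_poisson(w, lam):
    """PRC(w | m1, m2) = P(S >= w) with S ~ Poisson(lam)."""
    return 1.0 - sum(math.exp(-lam) * lam**k / math.factorial(k)
                     for k in range(w))

def single_match_prob(f2_pdf, f1_sampler, r0, u0, n=100_000):
    """Monte Carlo estimate of the single-match probability:
    2*pi*r0^2*u0 * E_{(x,u)~f1}[ f2(x, u) ]."""
    xs, us = f1_sampler(n)
    return 2 * np.pi * r0**2 * u0 * float(np.mean(f2_pdf(xs, us)))

# e.g. m1 = 35, m2 = 49 as in the prints shown earlier, with a
# purely hypothetical single-match probability of 0.002:
lam = 35 * 49 * 0.002
print(prc_poisson(7, lam))  # probability of at least w = 7 random matches
\end{verbatim}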
The contribution of Zhu, Dass and Jain (\citeyear{ZDJ07}) was to characterize the distribution of $\mathcal{S}$ (the extent of random matching in the target population) by a single number, namely, the probability of a (single) random match between two minutiae, one from each of the prints in question. Since the probability of a random match \begin{equation} \label{singlematch} p(f_1,f_2) = 2\pi r_{0}^{2} u_{0} \int_{D} \int_{(0,2\pi]} f_{1}(x,u) f_{2}(x,u) \,dx \,du \end{equation} depends on the minutia distributions $f_1$ and $f_2$, Zhu, Dass and Jain (\citeyear{ZDJ07}) inferred these distributions from actual fingerprint databases by representing them as a finite mixture of normals. \begin{figure} \includegraphics{734f02.eps} \caption{Fingerprint images with \textup{(a)} good, \textup{(b)} moderate and \textup{(c)} poor quality. White squares and lines indicate locations and directions of detected minutiae; in panel \textup{(c)}, some of these detections may not be true minutiae. Images taken from the publicly available FVC2002 databases.} \label{fig:imagequality} \end{figure} Factors other than the minutia distributions also govern the extent of random matches. In the case of forensic testimony, for example, it is reasonable to believe that expert matching is more prone to error if the latent prints are of poor quality. Here, poor quality images mean poor resolution (or clarity) of the ridge-valley structures (e.g., in the presence of smudges, sweaty fingers, cuts and bruises) due to which the detection of true minutiae can be missed and spurious minutiae can be detected; see Figure~\ref{fig:imagequality} for examples of good, moderate and poor quality fingerprint images. In other words, the extent of individualization should be smaller when the underlying image quality is poor. One critical issue, therefore, is to be able to quantify the increase in PRC with respect to quality degradation for each matching number $w$. This quantification is crucial, for example, for latent prints lifted from crime scenes, which are known to have inferior image quality. In the presence of poor quality, automatic as well as manual extraction of fingerprint features is prone to more errors through (i)~increased likelihood of detecting spurious minutiae and (ii)~missing true ones. Previous work has not quantified the effects of (i) and (ii) on fingerprint individuality; see \citet{Dass10}.
Since the number of fingerprint images in real databases is typically large, inference based on Bayesian computational algorithms such as the Gibbs sampler is extremely slow due to increased dimensions of the parameter space. To alleviate this problem, the asymptotic Laplace approximation of the GLMM likelihood is used instead. To summarize, \textit{contributions of this paper are as follows}: For fingerprint-based authentication, a procedure is developed for obtaining the PRC based on explicitly modeling the occurrence of spurious minutiae under image quality degradation. Statistical contributions include (i) an innovative application of GLMM to fingerprint-based authentication, and (ii) the development of computationally fast algorithms achieved by approximating the GLMM likelihood (with associated theoretical and numerical validation). Inference results (point estimates and credible intervals) on the PRC are also given (see the experimental results in Section~\ref{section:realdataanalysis}). The rest of this paper is organized as follows: Section~\ref{section:glmm} discusses the log-linear GLMM model that is used to study how PRCs change as a function of the underlying image quality. Section~\ref{section:BayesianInference} presents the procedure for fitting GLMMs in a Bayesian framework. Section~\ref{section:PRC} develops the inference procedure (point estimates and credible intervals) on the PRC. Section~\ref{section:realdataanalysis} obtains the estimates of fingerprint individuality based on the FVC2002 and FVC2006 databases and gives an analysis of the numerical results. Section~\ref{simulation:results} validates the Laplace approximation and various other approximations used in this paper for efficient computation. Section~\ref{section:discussion} presents the summary, conclusions and other relevant discussions. \section{Fingerprint databases and empirical findings}\label{sec2} \subsection{Fingerprint databases} Fingerprint databases that have been used in fingerprint analysis include the FVC (e.g., FVC2000, 2002, 2004 and 2006) and NIST databases (e.g., NIST Special Database 4, 9, 27, etc.), which are publicly available for download and analysis. These fingerprint databases typically consist of images acquired from $F$ different fingers, obtained by placing the finger onto the sensing plate of an image acquisition device. Each finger is sensed multiple times, in general possibly a different number of times for different fingers; the databases considered here have a common number of acquisitions, say, $L$, for each finger, resulting in a total of $F L$ images in the database. Due to the different sensors used as well as varying placement of the finger, per individual, onto the sensing plate, the images acquired exhibit significant variability (even for the same finger); see Figure~\ref{fig:variousimages} for several examples from two of the databases used in this paper, namely, FVC2002 (DB1 and DB2) and FVC2006 (DB3). For FVC2002 DB1 and DB2, $F=100$ and $L=8$, whereas for FVC2006 DB3, $F=150$ with $L=15$. Figure~\ref{fig:variousimages} gives an idea of the intra-finger variability (variability due to multiple impressions) that has to be taken into account in addition to image quality variability. Intra-finger variability gives different matching numbers when different impressions are used for the matching between two fingers.
\begin{figure} \includegraphics{734f03.eps} \caption{Sample images from the FVC2002 DB1, DB2 and FVC2006 DB3 databases; impression $f\mbox{--}l$ refers to the $l$th impression of the $f$th finger. Top, middle and bottom rows are images from the FVC2002 DB1, DB2 and FVC2006 DB3 databases, respectively; all images are publicly available.} \label{fig:variousimages} \end{figure} \subsection{Empirical findings for varying image quality} \label{empiricalfindings} In practice, the quality of an image can be either ordinal (i.e., taking values in an ordered label set) or quantitative (i.e., taking values in a continuum, usually in a bounded interval $[a,b]$). We have considered one specific choice of each type of quality measure: (i) The categorical quality extractor ``NFIQ,'' which is an implementation of the ``NIST Image Quality'' algorithm based on neural networks described in Tabassi, Wilson and Watson (\citeyear{TWW04}), and (ii) the minutia quality extractor obtained from the feature extraction algorithm \texttt{mindtct} from NIST (see the Home Office Automatic Fingerprint Recognition System [HOAFRS (\citeyear{LIC93})]). These algorithms are all publicly available. For the NFIQ quality extractor, the output $Q_0$ is a quality label numbered $1,2,\ldots,5$, with $1$ and $5$ corresponding to the best and worst quality images, respectively. For the continuous quality extractor by \texttt{mindtct}, a real number $Q_{\mathrm{con}}$ between $0$ and $1$ is obtained with higher values indicating better quality images. To maintain consistency between the two quality measures, we relabel the categorical measure as $Q_{\mathrm{cat}} = Q_{\mathrm{max}} + 1 - Q_{0}$, where $Q_{\mathrm{max}}$ is the maximum label for a given database, so that higher labels indicate better quality images. For a pair of prints, $1$ and $2$ say, with $m_1$ and $m_2$ minutiae, respectively, let $Y$ denote the observed number of minutia matches between them. We consider only impostor matches (matches between impressions of different fingers), which are equivalent to the notion of random matching when assessing fingerprint individuality. The total number of impostor pairs in a database with $F$ fingers and $L$ impressions per finger is $F(F-1)L^{2}/2$ (since the order of the prints is immaterial). Suppose $(Q_1,Q_2)$ are the labels for the categorical measure $Q_{{\mathrm{cat}}}$. Table~\ref{tableYFVC2002} gives the average number (and standard deviations) of matches $\bar{Y}$ for each quality bin pair based on FVC2002 DB1 and DB2. Note that the average number of matches is an increasing function of the quality labels, that is, the average number of random matches increases as the image quality becomes better. This seems counterintuitive initially, but we note that the average increases because the total number of minutiae extracted from the two prints also increases as quality gets better. With more minutiae available for random pairings, the number of matches based on such pairings should also increase on the average. Table~\ref{tablemnFVC2002} gives the mean (and standard deviations) of the number of possible random pairings $m_1 m_2$ for each quality bin pair for the two databases considered, illustrating this trend. The minutia extraction algorithm \texttt{mindtct} detects (or extracts) a minutia only if its computed reliability measure (computed within the program) is above a certain threshold. For better quality images, more minutiae have a reliability index above the threshold, which explains the higher number extracted in better quality images.
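As a quick sense of scale, the impostor-pair counts implied by $F(F-1)L^{2}/2$ for the databases used here are easy to tabulate; the snippet below is just that arithmetic.
\begin{verbatim}
def impostor_pairs(F, L):
    """Number of impostor pairs F(F-1)L^2/2 (order of prints immaterial)."""
    return F * (F - 1) * L**2 // 2

# FVC2002 DB1 and DB2: F = 100, L = 8; FVC2006 DB3: F = 150, L = 15
print(impostor_pairs(100, 8))    # 316800
print(impostor_pairs(150, 15))   # 2514375
\end{verbatim}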
\begin{table} \caption{Mean and standard deviations (in parentheses) of $Y$ (the observed number of random matches) for $Q_{\mathrm{cat}}$ pairs for FVC2002 DB1 and DB2 in panels \textup{(a)} and \textup{(b)}} \label{tableYFVC2002} \centering \begin{tabular}{@{}cc@{}} \begin{tabular}{@{}lccc@{}}\\ \hline \multicolumn{1}{@{}l}{$\bolds{Q_1/Q_2}$} & \multicolumn{1}{c}{$\bolds{1}$} & \multicolumn{1}{c}{$\bolds{2}$} & \multicolumn{1}{c@{}}{$\bolds{3}$} \\ \hline $1$ & 3.36 & 3.43 & 4.15 \\ & (1.11) & (1.14) & (1.36) \\ $2$ & 3.73 & 4.02 & 4.94 \\ & (1.13) & (1.19) & (1.41) \\ $3$ & 4.57 & 4.96 & 6.38 \\ & (1.40) & (1.43) & (1.71) \\ \hline \end{tabular} & \begin{tabular}{@{}lcccc@{}} \hline \multicolumn{1}{@{}l}{$\bolds{Q_1/Q_2}$} & \multicolumn{1}{c}{$\bolds{1}$} & \multicolumn{1}{c}{$\bolds{2}$} & \multicolumn{1}{c}{$\bolds{3}$} & \multicolumn{1}{c@{}}{$\bolds{4}$} \\ \hline $1$ & 2.20 & 3.02 & 3.36 & 3.76\\ & (0.42) & (0.76) & (0.90) & (1.00) \\ $2$ & 2.96 & 3.91 & 4.21 & 5.12 \\ & (0.75) & (1.30) & (1.39) & (1.71) \\ $3$ & 3.19 & 4.26 & 4.71 & 5.77 \\ & (0.82) & (1.26) & (1.32) & (1.57)\\ $4$ & 3.74 & 5.26 & 5.96 & 7.47 \\ & (0.99) & (1.51) & (1.58) & (1.91) \\ \hline\\ \end{tabular}\\ \textup{(a)}& \textup{(b)} \end{tabular} \end{table} \begin{table} \caption{Mean and standard deviations (in parentheses) of $m_1 m_2$ (the total number of possible random pairings) for $Q_{\mathrm{cat}}$ pairs for FVC2002 DB1 and DB2 in panels \textup{(a)} and \textup{(b)}} \label{tablemnFVC2002} \centering \begin{tabular}{@{}cc@{}} \begin{tabular}{@{}lccc@{}}\\\\ \hline \multicolumn{1}{@{}l}{$\bolds{Q_1/Q_2}$} & \multicolumn{1}{c}{$\bolds{1}$} & \multicolumn{1}{c}{$\bolds{2}$} & \multicolumn{1}{c@{}}{$\bolds{3}$} \\ \hline $1$ & 357 & 403 & 641 \\ & (239) & (219) & (336) \\ $2$ & 490 & 594 & 947 \\ & (260) & (221) & (327) \\ $3$ & 768 & 921 & 1459 \\ & (383) & (301) & (440) \\ \hline \end{tabular} & \begin{tabular}{@{}lcccc@{}} \hline \multicolumn{1}{@{}l}{$\bolds{Q_1/Q_2}$} & \multicolumn{1}{c}{$\bolds{1}$} & \multicolumn{1}{c}{$\bolds{2}$} & \multicolumn{1}{c}{$\bolds{3}$} & \multicolumn{1}{c@{}}{$\bolds{4}$} \\ \hline $1$ & 177 & 366 & 482 & 705\\ & (53) & (144) & (148) & (203) \\ $2$ & 338 & 684 & 835 & 1295 \\ & (178) & (395) & (445) & (634) \\ $3$ & 414 & 845 & 1036 & 1596 \\ & (138) & (368) & (350) & (485)\\ $4$ & 650 & 1342 & 1682 & 2546 \\ & (192) & (527) & (491) & (655) \\ \hline \end{tabular}\\ \textup{(a)}& \textup{(b)} \end{tabular} \end{table} \begin{table}[b] \tabcolsep=3pt \caption{Mean and standard deviations (in parentheses) of $Y/(m_1 m_2)$ (the probability of a random pairing) for $Q_{\mathrm{cat}}$ pairs for FVC2002 DB1 and DB2 in panels \textup{(a)} and \textup{(b)}} \label{tablepFVC2002} \centering \begin{tabular}{@{}cc@{}} \begin{tabular}{@{}lccc@{}}\\\\ \hline \multicolumn{1}{@{}l}{$\bolds{Q_1/Q_2}$} & \multicolumn{1}{c}{$\bolds{1}$} & \multicolumn{1}{c}{$\bolds{2}$} & \multicolumn{1}{c@{}}{$\bolds{3}$} \\ \hline $1$ & 0.0127 & 0.0105 & 0.0077 \\ & (0.0083) & (0.0057) & (0.0036) \\ $2$ & 0.0092 & 0.0075 & 0.0056 \\ & (0.0046) & (0.0030) & (0.0019) \\ $3$ & 0.0069 & 0.0058 & 0.0046 \\ & (0.0029) & (0.0018) & (0.0013) \\ \hline \end{tabular} & \begin{tabular}{@{}lcccc@{}} \hline \multicolumn{1}{@{}l}{$\bolds{Q_1/Q_2}$} & \multicolumn{1}{c}{$\bolds{1}$} & \multicolumn{1}{c}{$\bolds{2}$} & \multicolumn{1}{c}{$\bolds{3}$} & \multicolumn{1}{c@{}}{$\bolds{4}$} \\ \hline $1$ & 0.0131 & 0.0090 & 0.0074 & 0.0056\\ & (0.0032) & (0.0029) & (0.0024) & (0.0021) \\ $2$ & 0.0104 & 0.0068 & 0.0058 &
0.0044 \\ & (0.0042) & (0.0028) & (0.0023) & (0.0017) \\ $3$ & 0.0083 & 0.0056 & 0.0049 & 0.0038 \\ & (0.0028) & (0.0021) & (0.0016) & (0.0012)\\ $4$ & 0.0060 & 0.0043 & 0.0037 & 0.0030 \\ & (0.0017) & (0.0014) & (0.0010) & (0.0009) \\ \hline \end{tabular}\\ \textup{(a)}& \textup{(b)} \end{tabular} \end{table} Consequently, a better quantity to model for varying image quality is $Y/(m_1 m_2)$, which is an estimate of the probability of a random match based on the Poisson model of Zhu, Dass and Jain (\citeyear{ZDJ07}) [see (\ref{lambdaf1f2}) and (\ref{singlematch})]. Table~\ref{tablepFVC2002} gives the average value of $Y/(m_1 m_2)$ (and standard deviations) for the different quality bins. Note that now the averages are \textit{decreasing} as a function of $Q_{\mathrm{cat}}$. Intuitively, this is expected since it is less likely to obtain a random match when the image quality is high for a pair of impostor fingerprints. More importantly, we can attribute the larger values of $Y/(m_1 m_2)$ for poor quality images to the extraction of spurious minutiae. The probability of a random match based on true minutiae is intrinsic to the two prints in question and depends only on the distributions $f_1$ and $f_2$. Thus, this probability should not depend on quality if all true minutiae are correctly identified. The probability of a random match, $p$, in the presence of noisy (both spurious and true) minutiae is the sum of four component probabilities: \begin{equation} \label{componentprobabilities} p = p^{(0,0)} + p^{(0,1)} + p^{(1,0)} + p^{(1,1)}, \end{equation} where $p^{(u,v)}$ is the probability of a random match between a minutia of type $u$ from print~$1$ and a minutia of type $v$ from print $2$, with type $0$ denoting a true minutia and type $1$ a spurious one. From earlier discussion, it follows that $p^{(0,0)}$ does not depend on $(Q_1,Q_2)$, whereas $p^{(0,1)}, p^{(1,0)}$ and $p^{(1,1)}$ all increase as either or both quality labels $(Q_1,Q_2)$ decrease. In the GLMM framework, the dependence of $p^{(0,1)}$, $p^{(1,0)}$ and $p^{(1,1)}$ on quality is modeled explicitly. The above discussion is presented for the categorical quality measure $Q_{\mathrm{cat}}$, but similar empirical findings are also obtained for $Q_{\mathrm{con}}$ by binning the values in the range $[0,1]$. \section{GLMM framework for fingerprint individuality} \label{section:glmm} Let a fingerprint database consist of $F$ fingers and $L$ impressions per finger. Each fingerprint image in the database corresponds to a fingerprint impression denoted by the index pair $(f,l)$ with $1\le f \le F$ and $1 \le l \le L$; here $f$ and $l$, respectively, are the indices of the finger and the impression. An impostor pair of fingerprint images $(i,j)$ with $i\equiv(f,l)$ and $j \equiv(f',l')$ arises from different fingers, that is, $f\ne f'$. The collection of all impostor pairs arising from the database is denoted by $\mathcal{I}$, with $|\mathcal{I}| = N \equiv F(F-1) L^2/2$. For each impostor pair $(i,j) \equiv ((f,l),(f',l'))$, the matching algorithm presented in Section~\ref{section:introduction} [see (\ref{distances}) and (\ref{angulardistance})] computes the observed number of minutia matches, $Y_{ij}$, between impressions $i$ and $j$. The image quality of $i$ and $j$ is obtained by a quality extractor $Q$ that outputs the ordered pair $(Q_i,Q_j)$ which for the moment is taken to be continuous (the categorical quality case is discussed later).
Based on the discussion in the previous section, the total number of matches $Y_{ij}$ is expressed as a sum of four components: \begin{equation} \label{sumY} Y_{ij} = \sum_{u=0}^{1} \sum_{v=0}^{1} Y_{ij}^{(u,v)}, \end{equation} where $Y_{ij}^{(u,v)}$ is the number of matches obtained based on minutia of type $u$ for impression $i$ and minutia of type $v$ for impression $j$ [see (\ref{componentprobabilities}) and the ensuing discussion in Section~\ref{empiricalfindings}]. The rest of the GLMM model specification is \begin{equation} \label{simple:model:2} Y_{ij}^{(u,v)} \sim \operatorname{Poisson} \bigl(\lambda_{ij}^{(u,v)} \bigr), \end{equation} independently for each combination of $(u,v)$, \begin{equation} \label{mimj} \lambda_{ij}^{(u,v)} = m_i m_j \operatorname{exp} \bigl\{ b_{f} + b_{f'} + \eta_{ij}^{(u,v)} \bigr\} \end{equation} with $\eta_{ij}^{(u,v)}$ for various combinations $(u,v)$ given as \begin{eqnarray} \label{sim1} \eta_{ij}^{(0,0)} &=& 2\beta_0, \\ \label{sim2} \eta_{ij}^{(0,1)} &=& \beta_0 + \theta_0 + \theta_1 {Q_j}, \\ \label{sim3} \eta_{ij}^{(1,0)} &=& \beta_0 + \theta_0 + \theta_1 Q_{i} \quad\mbox{and} \\ \label{sim4} \eta_{ij}^{(1,1)} &=& 2\theta_0 + \theta_1 (Q_i + Q_j). \end{eqnarray} In (\ref{mimj}), $m_i$ and $m_j$ are, respectively, the number of extracted minutiae from $i$ and~$j$. The GLMM model of (\ref{sumY})--(\ref{sim4}) consists of unknown fixed effects parameters $(\theta_0,\theta_1,\beta_0)$ and random effect parameters $b_{f}$ with $b_{f} \sim N(0,\sigma^{2})$ independently for $f=1,2,\ldots,F$ for unknown variance $\sigma^{2}>0$. Comparing (\ref{lambdaf1f2}) and (\ref{mimj}), we note that the probability of a random match $p(f_1,f_2)$ [see (\ref{singlematch})] is modeled as $\operatorname{exp}\{ b_{f} + b_{f'} + \eta_{ij}^{(u,v)}\}$ with random effects $b_{f}$ and $b_{f'}$ in the GLMM framework. The reason for this is as follows: The assessment of fingerprint individuality is typically carried out for a target population with different individuals. Hence, $f_1$ and $f_2$ are random realizations of prints from the target population. While each $f_1$ and $f_2$ is modeled as a mixture of normals, Zhu, Dass and Jain (\citeyear{ZDJ07}) subsequently proceed with a clustering of these estimated $f_1$ and $f_2$ for a given database. The assumption made is that the target population (and, hence, the database which is considered a representative of the target population) consists of an unknown number $K$ of different clusters of hyperdistributions (a distribution on the mixtures) from which $f_1$ and $f_2$ are realized. Subsequent development of the clustering of mixtures of normals is reported in \citet{D11} where the uncertainty of estimating the hyperdistribution is accounted for in the assessment of PRC. In the present context, the random effects $b_f$ (and $b_{f'}$) account for variability due to different fingers (each finger $f$ has a distribution on its minutiae) which are assumed to be realizations from the target population. The fixed effect parameters $(\theta_0,\theta_1,\beta_0,\sigma^{2})$ are target population specific: Their values can change when we move from one database to another. It follows that the parameters $(\theta_0,\theta_1,\beta_0)$ should all be negative. More elaborate restrictions can be placed on the random and fixed effects parameters jointly by requiring that $b_{f} + b_{f'} + \eta_{ij}^{(u,v)}\le0$ for all $(u,v)$ and $(i,j)$, but we choose not to pursue estimation and subsequent Bayesian inference in this complicated feasibility region.
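For intuition, the generative side of (\ref{sumY})--(\ref{sim4}) is straightforward to simulate. The sketch below draws the four latent match counts for a single impostor pair under the continuous quality specification; all parameter values in the usage line are hypothetical, chosen only for illustration.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def simulate_matches(m_i, m_j, q_i, q_j, b_f, b_fp, beta0, theta0, theta1):
    """Draw Y_ij as the sum of its four independent Poisson components."""
    eta = {
        (0, 0): 2 * beta0,
        (0, 1): beta0 + theta0 + theta1 * q_j,
        (1, 0): beta0 + theta0 + theta1 * q_i,
        (1, 1): 2 * theta0 + theta1 * (q_i + q_j),
    }
    y = {uv: rng.poisson(m_i * m_j * np.exp(b_f + b_fp + e))
         for uv, e in eta.items()}
    return sum(y.values()), y

# hypothetical parameter values, for illustration only
y_total, y_parts = simulate_matches(m_i=35, m_j=49, q_i=0.6, q_j=0.4,
                                    b_f=-0.05, b_fp=0.02,
                                    beta0=-4.0, theta0=-4.5, theta1=-0.5)
\end{verbatim}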
The posterior estimates of $(\theta_0,\theta_1,\beta_0,\sigma^{2})$ obtained in the experimental results section demonstrate that the restrictions are satisfied naturally: Estimates of $(\theta_0,\theta_1,\beta_0)$ are found to be negative for all the databases we worked with. Further, we found the estimate of $\sigma$ to be so small relative to $(\theta_0,\theta_1,\beta_0)$ that the restrictions $b_{f} + b_{f'} + \eta_{ij}^{(u,v)} \le0$ are satisfied automatically for all realized values of $b_f$ and $b_{f'}$ when computing the PRC. For a categorical quality measure labeled in increasing order of image quality, (\ref{sim1})--(\ref{sim4}) in the GLMM framework are replaced by the following four equations for $\eta_{ij}^{(u,v)}$: \begin{eqnarray} \label{sim11} \eta_{ij}^{(0,0)} &=& 2\beta_0, \\ \label{sim21} \eta_{ij}^{(0,1)} &=& \beta_0 + \theta_0 + \theta_1 +\cdots+\theta_{(Q_j - 1)}, \\ \label{sim31} \eta_{ij}^{(1,0)} &=& \beta_0 + \theta_0 + \theta_1 + \cdots+\theta_{(Q_{i}-1)}\quad \mbox{and} \\ \label{sim41} \eta_{ij}^{(1,1)} &=& 2\theta_0 + \theta_1 + \cdots+\theta_{(Q_i-1)} + \theta_1 + \cdots+ \theta_{(Q_j-1)}, \end{eqnarray} where the fixed effects parameters are now $(\theta_0,\theta_1,\ldots,\theta_{(Q_{\mathrm{max}}-1)})$ and $\beta_0$ which are all negative. Equations (\ref{sim1})--(\ref{sim4}) for a continuous quality measure and (\ref{sim11})--(\ref{sim41}) for a categorical quality measure imply that matches between type $u$ minutiae from print $1$ and type $v$ minutiae from print $2$ occur independently across the four type combinations $(u,v)$. To see this, note that the probability of a random match in (\ref{componentprobabilities}), $p=p^{(0,0)}+p^{(0,1)} + p^{(1,0)} + p^{(1,1)}$, is the sum of four component probabilities depending on the type of minutiae being matched. It follows that each normalized term ${p^{(u,v)}}/{p}$ gives the multinomial probability that type $u$ minutia is paired with type $v$ minutia, conditional on the fact that a random pairing has occurred. For the GLMM framework, we get ${p^{(u,v)}}/{p} = {e^{\eta_{ij}^{(u,v)}}}/{\sum_{u'=0}^{1}\sum_{v'=0}^{1} e^{\eta_{ij}^{(u',v')}}}$. Consequently, the odds ratio $\frac{p^{(1,1)}p^{(0,0)}}{p^{(0,1)}p^{(1,0)}} = 1$ for both the continuous and categorical quality measures, since $\eta_{ij}^{(1,1)} + \eta_{ij}^{(0,0)} = \eta_{ij}^{(0,1)} + \eta_{ij}^{(1,0)}$ in both (\ref{sim1})--(\ref{sim4}) and (\ref{sim11})--(\ref{sim41}). In other words, spurious minutia locations are dispersed ``evenly'' in between the true minutiae. No region in the print is more prone to spurious detection in relation to the distribution of true minutiae in the fingerprint impression. We further note that $\frac{p^{(u,1)}}{p^{(u,0)}} = e^{\theta_0 + \theta_1 Q_{j} - \beta_0}$ for the continuous and $\frac{p^{(u,1)}}{p^{(u,0)}} = e^{\theta_0 + \theta_1 + \cdots+ \theta_{(Q_{j}-1)} - \beta_0}$ for the categorical quality measures. These expressions suggest that $\theta_1$ for the continuous and $\theta_1, \ldots, \theta_{(Q_{\mathrm{max}}-1)}$ for categorical quality measures should all be negative: As $Q_{j}$ increases (better quality image), the odds of pairing with spurious minutiae ($v=1$ versus $v=0$) should decrease for each minutia type $u=0,1$. For both the categorical and continuous quality measures, equation (\ref{mimj}) can be rewritten in the general log-linear form with respect to the fixed and random effects parameters.
For each fixed $(u,v)$, this is given by \begin{equation} \label{random:effects:model} \operatorname{log} \lambda_{ij}^{(u,v)} = K_{ij} + \delta_{ij}(u,v), \end{equation} where $K_{ij} = \operatorname{log}(m_i m_j)$ and $\delta_{ij}{(u,v)} = x_{ij}'(u,v)\utheta+ z_{ij}'{\mathbf{b}}$ for appropriate choices of the $1 \times p$ row vector $x_{ij}'(u,v)$ and $1\times F$ row vector $z_{ij}'$ [which is independent of $(u,v)$]; $\utheta$ and $\mathbf{b} = (b_1,b_2,\ldots,b_{F})'$, respectively, are $p \times1$ and $F \times1$ column vectors representing the collection of fixed and random effects parameters. For the continuous quality measure, the parameter vector ${\utheta}= (\theta_0,\theta_1,\beta_0)'$ with $p=3$, whereas ${\utheta}= (\theta_0,\theta_1,\ldots,\theta_{(Q_{\mathrm{max}}-1)},\beta_0)'$ with $p=Q_{\mathrm{max}}+1$ for the categorical quality measure. We also denote $\utau= (\utheta',\sigma^{2})'$ to be the $(p+1)\times1$ vector consisting of all unknown parameters, namely, the fixed effects and the variance component. In matrix notation, the GLMM for each fixed $(u,v)$ is given by \begin{equation} \label{matrix:notation:0}\qquad Y_{ij}^{(u,v)} \sim \operatorname{Poisson} \bigl( e^{K_{ij}+\delta_{ij}{(u,v)}} \bigr)\qquad\mbox{independently for each $(i,j) \in \mathcal{I}$} \end{equation} and \begin{equation} \label{matrix:notation} {\udelta} {(u,v)} = \mathbf{X}(u,v)\utheta+ \mathbf{Z} \mathbf{b}, \end{equation} where ${\udelta}{(u,v)}$ is the $N \times1$ vector consisting of $\delta_{ij}{(u,v)}$s, $\mathbf{X}(u,v)$ is the $N \times p$ matrix with rows consisting of $x_{ij}'(u,v)$, and $\mathbf{Z}$ is the $N \times F$ matrix with rows consisting of $z_{ij}'$. \section{Inference methodology} \label{section:BayesianInference} The subsequent subsections develop inference methodology for $\utau \equiv(\utheta',\sigma^{2})'$ in a Bayesian framework. \subsection{Exact GLMM likelihood} The following notation is developed for the ensuing discussion. Let $\mathbf{Y}_{uv} = \{ Y_{ij}^{(u,v)}\dvtx (i,j) \in\mathcal{I} \}$ denote the missing data component consisting of minutia matches of type $(u,v)$ for all impostor pairs. We also denote $\mathbf{Y}_{\mathrm{mis}}$ and $\mathbf{Y}_{\mathrm{obs}}$ to be the collection of all missing and observed matching numbers, that is, $\mathbf{Y}_{\mathrm{mis}} = ( \mathbf{Y}_{00}, \mathbf{Y}_{01}, \mathbf{Y}_{10},\mathbf{Y}_{11} )$ and $\mathbf{Y}_{\mathrm{obs}} = \{ Y_{ij}, (i,j)\in\mathcal{I} \}$, respectively. The set of feasible values for $\mathbf{Y}_{\mathrm{mis}}$ is given by \begin{equation} \label{missinglabel} \mathcal{M} = \Biggl\{ \mathbf{Y}_{\mathrm{mis}}\dvtx \sum_{u=0}^{1}\sum_{v=0}^{1} Y_{ij}^{(u,v)} = Y_{ij} \mbox{ for all } (i,j) \in \mathcal{I} \Biggr\}. \end{equation} All subsequent analysis is conditional on the random effects $\mathbf{b}$. The missing data component $\mathbf{Y}_{uv}$ has a likelihood $\ell_{uv}(\mathbf{Y}_{uv} | \utheta,\mathbf{b}) = e^{-h_{uv}(\mathbf{Y}_{uv}, \utheta, \mathbf{b})}$, where \begin{eqnarray} \label{huv} h_{uv}(\mathbf{Y}_{uv}, \utheta, \mathbf{b})&=& - \sum_{(i,j)\in\mathcal{I}} \bigl\{ \bigl( x_{ij}'(u,v)\utheta+ z_{ij}' \mathbf{b}+K_{ij} \bigr) Y_{ij}^{(u,v)} \nonumber \\[-8pt] \\[-8pt] \nonumber &&\hspace*{41pt}{} - \operatorname{exp} \bigl(x_{ij}'(u,v)\utheta+ z_{ij}'\mathbf{b}+K_{ij} \bigr) - \operatorname{log} \bigl(Y_{ij}^{(u,v)}! \bigr) \bigr\} \nonumber \end{eqnarray}
Since the $(u,v)$ pairs are independent of each other, the complete likelihood, or the likelihood of $\mathbf {Y}_{\mathrm{mis}}$, is \begin{equation} \label{completelikelihood} \ell_{c}(\mathbf{Y}_{\mathrm{mis}} | \utheta, \mathbf{b}) = \operatorname{exp} \Biggl(-\sum_{u=0}^{1} \sum_{v=0}^{1} h_{uv}( \mathbf{Y}_{uv},\utheta,\mathbf {b}) \Biggr). \end{equation} The observed likelihood, or the likelihood of $\mathbf{Y}_{\mathrm{obs}}$, is the marginal of $\ell_{c}$ summing over $\mathbf{Y}_{\mathrm{mis}}\in\mathcal{M}$. Thus, we have \begin{eqnarray} \label{likelihoodobs} \ell_{\mathrm{obs}}(\mathbf{Y}_{\mathrm{obs}} | \utheta, \mathbf{b}) &=& \sum_{\mathbf {Y}_{\mathrm{mis}}\in\mathcal{M}} \ell_{c}( \mathbf{Y}_{\mathrm{mis}} | \utheta,\mathbf{b}) = e^{ - h(\utheta,\mathbf{b})} \end{eqnarray} with \begin{eqnarray*} h(\utheta,\mathbf{b})& =&- \sum_{(i,j) \in\mathcal{I}} \bigl\{ \bigl(H_{ij}(\utheta) + z_{ij}'\mathbf{b} + K_{ij} \bigr)Y_{ij} \\ &&\hspace*{41pt}{}- \operatorname {exp} \bigl(x_{ij}'(u,v) \utheta+ z_{ij}'\mathbf{b}+K_{ij} \bigr) - \operatorname {log} \bigl(Y_{ij}^{}! \bigr) \bigr\} \end{eqnarray*} and \begin{equation} \label{Hfunction} H_{ij}(\utheta) = \operatorname{log} \sum _{u=0}^{1}\sum_{v=0}^{1} \operatorname {exp} \bigl\{ x_{ij}'(u,v)\utheta \bigr\}; \end{equation} the last equality in (\ref{likelihoodobs}) results from simplifications based on the well-known multinomial formula. Finally, marginalizing over $\mathbf{b}$, the marginal likelihood for $\uY_{\mathrm{obs}}$ given $\utau= (\utheta ',\sigma ^{2})'$, denoted by $\ell(\utau)$, becomes \begin{equation} \label{likelihood} \ell(\utau) = \int_{R^{F}} \ell_{\mathrm{obs}}( \mathbf{Y}_{\mathrm{obs}} | \utheta, \mathbf{b}) \frac{e^{-{\mathbf{b}'\mathbf{b}}/{(2\sigma^{2})}}}{ (2\pi)^{F/2}\sigma^{F}} \,d\mathbf{b} = \int_{R^{F}} \operatorname{exp} \bigl\{ -g(\utau,\mathbf{b}) \bigr\} \,d \mathbf{b}, \end{equation} where \begin{eqnarray} \label{gdefinition} g(\utau,\mathbf{b}) &= &h(\utheta,\mathbf{b}) + h_{1} \bigl(\sigma ^{2},\mathbf{b} \bigr), \\ \label{h1definition} h_{1} \bigl(\sigma^{2},\mathbf{b} \bigr) &=& \frac{1}{2\sigma^{2}}\mathbf {b}'\mathbf {b} + \frac{F}{2} \operatorname{log} \sigma^{2} + \frac{F}{2}\operatorname {log}(2 \pi) \end{eqnarray} and the dependence of $\ell(\utau)$ on $\uY_{\mathrm{obs}}$ is suppressed for convenience of the subsequent presentation. \subsection{Laplace approximation of the likelihood and Bayesian inference} \label{section:laplaceapproximationandbayesianinference} The typical approach for obtaining inference on $\utau$ in a Bayesian framework is to utilize the Gibbs sampler. The Gibbs sampler augments the random effects vector $\mathbf{b}$ as additional parameters to be estimated and, on convergence, gives samples from the posterior distribution of $(\utau,\mathbf{b})$. This parameter augmentation step avoids computing the integral in the likelihood (\ref{likelihood}). However, in the case of fingerprint databases, $F$, the number of fingers in a database, is typically large. As a result, parameter augmentation schemes such as the Gibbs sampler take a considerably long time to run until convergence and hinder any possibility of real time inference. We avoid using any parameter augmentation scheme for the inference on $\utau$. 
Our approach is to derive an approximation of the GLMM likelihood $\ell(\utau)$ based on the Laplace expansion given by \begin{equation} \label{approximatelikelihood} \ell_{a}(\utau) = e^{-g(\utau, \hat{\mathbf{b}}(\utau) )} \biggl[ \operatorname{det} \biggl(\frac{1}{2\pi} \frac{\partial^{2} g(\utau, \hat{\mathbf{b}}(\utau))}{\partial\mathbf{b}^{2}} \biggr) \biggr]^{-1/2}, \end{equation} where $g(\utau,\mathbf{b})$ is the function defined in (\ref{likelihood}), $\hat{\mathbf{b}}(\utau)$ is the maximum likelihood estimate of $\mathbf{b}$ for fixed $\utau$, and ${\partial^{2} g(\utau, \hat{\mathbf{b}}(\utau))}/{\partial\mathbf{b}^{2}}$ is the matrix of second order partial derivatives with respect to $\mathbf{b}$ evaluated at $\mathbf{b} = \hat{\mathbf{b}}(\utau)$; see \citet{SM95}. In the supplemental article [\citet{supp}], we show that \begin{equation} \label{orderofapproximation} \operatorname{log} \bigl(\ell(\utau) \bigr) = \operatorname{log} \bigl( \ell_{a}(\utau) \bigr) \biggl(1 + O \biggl(\frac{1}{F} \biggr) \biggr) \end{equation} as $F\rightarrow\infty$, meaning that the Laplace approximation $\operatorname{log}(\ell_{a}(\utau))$ is accurate up to order $O(1/F)$ as $F\rightarrow\infty$. Equation (\ref{orderofapproximation}) justifies the use of the Laplace-based approximate likelihood in place of (\ref{likelihood}) when $F$ is large. A further approximation to $\operatorname{log}(\ell_{a}(\utau))$ is obtained by observing that \begin{eqnarray} \label{approximatelikelihoodsum} \operatorname{log} \bigl(\ell_{a}(\utau) \bigr) &=& -g \bigl(\utau , \hat{\mathbf{b}}(\utau) \bigr) - \frac{1}{2} \operatorname{log \, det} \biggl( \frac{1}{2\pi} \frac{\partial^{2} g(\utau, \hat{\mathbf{b}}(\utau))}{\partial\mathbf{b}^{2}} \biggr) \nonumber \\[-8pt] \\[-8pt] \nonumber &\equiv&(A) + (B) \end{eqnarray} is the sum of two terms: the first term $(A)$ involves a summation over $F(F-1)L^{2}/2$ terms and, hence, is of that order as $F\rightarrow \infty$. The order of the second term $(B)$ is $F \operatorname{log}(F-1 + 1/\sigma^{2})$ which follows from ${\partial^{2} g({\utau},\hat{\mathbf{b}}({\utau}))}/{\partial\mathbf{b}^{2}} \sim(F-1 + 1/\sigma^{2})\uI_F$, where $\uI_F$ is the $F \times F$ identity matrix. This is because each diagonal entry of ${\partial^{2} g({\utau},\hat{\mathbf{b}}({\utau}))}/{\partial\mathbf{b}^{2}}$ involves a sum over $F-1$ terms together with one term involving $1/\sigma^{2}$, whereas the off-diagonal entries are each of order $1$. We retain the term $1/\sigma^{2}$ in the case when $1/\sigma^{2}$ is of the same order as $F$ for small~$\sigma^{2}$. Nevertheless, $F(F-1)L^{2}/2$ still dominates over $F \operatorname{log}(F-1 + 1/\sigma^{2})$ for large $F$ and small $\sigma^{2}$ (where $\sigma^{2} \sim F^{-1}$). This implies that $(A)$ dominates $(B)$ for large $F$ and small $\sigma^{2}$, which motivates the further approximation of $\operatorname{log}(\ell_{a}(\utau))$ by $(A)$. In Section~\ref{section:realdataanalysis}, we show that the above approximation is valid based on the estimated values of $\sigma^{2}$ and choices of $F$ and $L$ for each database we worked with. The reader is also referred to further details on the approximation presented in the supplemental article [\citet{supp}]. We assume that the maximum likelihood estimate of $\utau$, $\hat{\utau}$, is available for the moment.
Expanding $(A)$ in a Taylor series around $\hat{\utau}$, we get \begin{equation} \label{normalapproximatontau} g \bigl(\utau,\hat{\mathbf{b}}(\utau) \bigr)\approx g \bigl(\hat{ \utau},\hat {\mathbf {b}}(\hat{\utau}) \bigr) + \frac{1}{2}(\utau- \hat{ \utau})^{\prime} \biggl(\frac {\partial^{2} g(\hat{\utau},\hat{\mathbf{b}}(\hat{\utau }))}{\partial \utau^{2}} \biggr) (\utau- \hat{\utau}), \end{equation} where ${\partial^{2} g(\hat{\utau},\hat{\mathbf{b}}(\hat{\utau }))}/{\partial\utau^{2}}$ is the matrix ${\partial^{2} g({\utau },\hat {\mathbf{b}}({\utau}))}/{\partial\utau^{2}}$\vspace*{1pt} evaluated at $\utau= \hat {\utau}$. It is challenging to numerically evaluate $\hat{\utau}$ and ${\partial^{2} g(\hat{\utau},\hat{\mathbf{b}}(\hat{\utau }))}/{\partial \utau^{2}}$ in real time. This problem is addressed in greater detail in Section~\ref{sectionmletau}.\vadjust{\goodbreak} For inference in the Bayesian framework, we consider a prior $\pi_0$ on $\utau$ of the product form: $\pi_0(\utau) = \pi_0(\utheta)\pi _0(\sigma ^{2})$ with $\pi_0(\utheta) \propto1$ and $\pi_0(\sigma^{2}) \propto (1/\sigma^{2}) I(\sigma^{2} > 0)$. These are standard noninformative priors used on location and scale parameters in the Bayesian literature. Based on the likelihood and prior specifications, the exact and approximate posteriors of $\utau$ (conditional on $\mathbf{Y}_{\mathrm{obs}}$) are \begin{eqnarray} \label{posterior} \pi(\utau| \mathbf{Y}_{\mathrm{obs}}) &=& \frac{\ell(\utau)\times\pi _0(\utau)}{ \int_{\utau} \ell(\utau)\times\pi_0(\utau) \,d\utau} \\ &\approx& \frac{\ell_{a}(\utau)\times\pi_0(\utau)}{ \int_{\utau} \ell_{a}(\utau)\times\pi_0(\utau) \,d\utau} \\ \label{approximateposterior}&=&\pi_{a}(\utau| \mathbf{Y}_{\mathrm{obs}}), \end{eqnarray} say, based on equation (\ref{orderofapproximation}). The first likelihood $\ell(\utau)$ is difficult to evaluate due to the integral with respect to $\mathbf{b}$, whereas the second likelihood $\ell _{a}(\utau)$ is easier to evaluate for each $\utau$ but difficult to simulate from. However, based on equations (\ref {approximatelikelihoodsum}) and (\ref{normalapproximatontau}), we obtain samples of $\utau$ from $\tilde{\pi}(\utau)$, the multivariate normal distribution with mean $\hat{\utau}$ and covariance matrix $[{\partial^{2} g(\hat{\utau},\hat{\mathbf{b}}(\hat{\utau }))}/{\partial \utau^{2}}]^{-1}$. An importance sampling scheme is then used to convert these realizations to posterior samples from $\pi_{a}(\utau| \mathbf {Y}_{\mathrm{obs}} )$. More specifically, suppose $\utau_{h}^{*}$, $h=1,2,\ldots,H$ are $H$ samples from $\tilde{\pi}$. Define the $H$ weights $w_{h}^{*}$, $h=1,2,\ldots,H$ by \begin{equation} \label{weights} w_{h}^{*} = \frac{\pi_{a}(\utau_{h}^{*})}{D \tilde{\pi}(\utau_{h}^{*})}, \end{equation} where $D = \sum_{h=1}^{H} {\pi_{a}(\utau_{h}^{*})}/{\tilde{\pi }(\utau _{h}^{*})}$ is the normalizing constant so that $\sum_{h=1}^{H} w_{h}^{*}=1$. To obtain a sample from $\pi_a(\utau| \mathbf {Y}_{\mathrm{obs}} )$ in (\ref{approximateposterior}), we resample from the collection $\utau_{h}^{*}$ with weights $w_{h}^{*}$ for $h=1,2,\ldots,H$. This procedure is repeated $R$ times to obtain $R$ samples from $\pi_a(\utau| \mathbf{Y}_{\mathrm{obs}})$. Numerical analysis showed that the weights $w_{h}^{*}$ are approximately uniform, each taking a value close to $1/H$, which indicates the effectiveness of approximating the target posterior density $\pi_{a}$ by the Gaussian approximation~$\tilde{\pi}$.
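The sampling-importance-resampling step above can be summarized in a few lines of Python; in the sketch below, \texttt{log\_post} stands in for $\operatorname{log} \pi_{a}$ (up to an additive constant), the Gaussian proposal plays the role of $\tilde{\pi}$, and all dimensions and seeds are illustrative.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def sir_samples(log_post, tau_hat, cov, H=2000, R=200):
    """Sampling-importance-resampling: draw H proposals tau_h^* from
    N(tau_hat, cov), weight by pi_a / pi_tilde, then resample R draws."""
    L = np.linalg.cholesky(cov)
    props = tau_hat + rng.standard_normal((H, len(tau_hat))) @ L.T
    z = np.linalg.solve(L, (props - tau_hat).T)   # proposal log-density,
    log_prop = -0.5 * np.sum(z ** 2, axis=0)      # up to a constant
    log_w = np.array([log_post(t) for t in props]) - log_prop
    w = np.exp(log_w - log_w.max())
    w /= w.sum()                                  # normalized weights w_h^*
    return props[rng.choice(H, size=R, replace=True, p=w)], w

# toy check: a Gaussian "posterior" makes the weights essentially 1/H
draws, w = sir_samples(lambda t: -0.5 * t @ t, np.zeros(2), np.eye(2))
print(w.min(), w.max())   # both close to 1/H = 5e-4
\end{verbatim}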
\subsection{Obtaining \texorpdfstring{$\hat{\tau}$}{hat tau} using the EM algorithm} \label{sectionmletau} The previous sections assume the availability of the maximum likelihood estimate $\hat{\utau}$ of $\utau$; we now describe how to obtain it. The estimator $\hat{\utau}$ is obtained by maximizing the function $-g(\utau,\hat{\mathbf{b}}(\utau))$. From equations (\ref{orderofapproximation}) and~(\ref{approximatelikelihoodsum}), it is clear that $\hat{\utau}$ also approximately maximizes the log-likelihood function $\operatorname{log}(\ell_{a}(\utau))$ and, subsequently, $\operatorname{log}(\ell(\utau))$, for large $F$. Ignoring constant terms, we note that \begin{eqnarray} -g \bigl(\utau,\hat{\mathbf{b}}(\utau) \bigr) &=& -h \bigl(\utheta,\hat{\mathbf {b}}(\utau) \bigr) - h_{1} \bigl(\sigma^{2},\hat{\mathbf{b}}(\utau) \bigr) \\ \label{eqnEM} &=& \operatorname{log} \biggl(\sum_{\mathbf{Y}_{\mathrm{mis}}\in \mathcal {M}} \ell_{c} \bigl(\mathbf{Y}_{\mathrm{mis}} | \utheta,\hat{\mathbf{b}}( \utau ) \bigr) \biggr) - h_{1} \bigl(\sigma^{2},\hat{ \mathbf{b}}(\utau) \bigr), \end{eqnarray} where $h(\utheta,\mathbf{b})$ and $\ell_{c}$ are as defined in (\ref{likelihoodobs}) and (\ref{completelikelihood}), respectively, and \begin{equation} \label{h1} h_{1} \bigl(\sigma^{2},\mathbf{b} \bigr) = \frac{1}{2\sigma^{2}}\mathbf {b}'\mathbf {b} + \frac{F}{2} \operatorname{log} \sigma^{2}.\vadjust{\goodbreak} \end{equation} Equation (\ref{eqnEM}) sets the stage for an EM algorithm to be used: Start with an initial estimate $\utau=\utau_{0}=(\utheta_{0},\sigma _{0}^{2})$ and obtain $\utau= \utau_{k} = (\utheta_{k},\sigma ^{2}_{k})$ at the end of the $k$th step. At step $(k+1)$, the $E$-step is carried out by noting that the conditional distribution of $ ( Y_{ij}^{(u,v)},u=0,1, v=0,1 )$ is multinomial with total number of trials $Y_{ij}$ and multinomial probabilities \[ p_{ij,k}^{(u,v)} = \frac{e^{x_{ij}'(u,v)\utheta_{k}}}{\sum_{u=0}^{1}\sum_{v=0}^{1} e^{x_{ij}'(u,v)\utheta_{k}}} \] independently for each $(i,j) \in\mathcal{I}$. Subsequently, we plug in the expected value of each $Y_{ij}^{(u,v)}$, $Y_{ij,k}^{(u,v)} \equiv Y_{ij} p_{ij,k}^{(u,v)}$, in place of $Y_{ij}^{(u,v)}$ in (\ref {huv}) and (\ref{completelikelihood}). The $M$-step now entails maximizing the objective function $-g_{c}(\utau,\hat{\mathbf {b}}(\utau ))$ with respect to $\utau$, where \begin{equation} \label{objectivefunctionEM} g_{c}(\utau,\mathbf{b}) = h_{c}(\utheta, \mathbf{b}) + h_{1} \bigl(\sigma ^{2},\mathbf{b} \bigr) \end{equation} and $h_{c}(\utheta,\mathbf{b})$ is given by \begin{eqnarray} \label{hc}&& h_{c}(\utheta,\mathbf{b}) = \sum _{u=0}^{1}\sum_{v=0}^{1} \sum_{(i,j)\in \mathcal{I}} \bigl\{ -\bigl( x_{ij}'(u,v) \utheta+ z_{ij}'\mathbf{b} +K_{ij} \bigr) Y_{ij,k}^{(u,v)} \nonumber \\[-8pt] \\[-8pt] \nonumber &&\hspace*{114pt}{} + \operatorname{exp} \bigl(x_{ij}'(u,v) \utheta+ z_{ij}'\mathbf {b}+K_{ij} \bigr) \bigr \}. \end{eqnarray} This maximization yields $\utau= \utau_{k+1}$. Proceeding with $k=1,2,\ldots $ gives $\utau= \hat{\utau}$ at convergence. The $M$-step, or maximization of $-g_{c}(\utau,\hat{\mathbf {b}}(\utau ))$ with respect to $\utau$, is carried out using the Newton--Raphson procedure involving the first and second order partial derivatives of $g_{c}$ with respect to $\utau$. At step $(k+1)$ of the EM algorithm, we start with the initial value $\utau\equiv\utau_{k+1}^{0} \equiv \utau_{k}$ as defined above.
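Before stating the Newton--Raphson update used in the $M$-step, here is a minimal Python sketch of the $E$-step for a single pair $(i,j)$: the multinomial probabilities $p_{ij,k}^{(u,v)}$ are a softmax of the four linear predictors, and the expected counts are $Y_{ij}$ times these probabilities. The design rows and parameter values are illustrative placeholders.
\begin{verbatim}
import numpy as np

def e_step_counts(Y_ij, X_ij, theta_k):
    """E-step for pair (i, j): split Y_ij into expected counts
    Y_{ij,k}^{(u,v)} with probabilities p ~ exp(x_ij(u,v)' theta_k),
    normalized over the four (u, v) configurations."""
    eta = X_ij @ theta_k        # one row of X_ij per (u, v)
    p = np.exp(eta - eta.max()) # numerically stable softmax
    p /= p.sum()                # p_{ij,k}^{(u,v)}
    return Y_ij * p             # expected counts, summing to Y_ij

X_ij = np.array([[1.0, 0.2], [1.0, 0.5], [1.0, 0.5], [1.0, 0.8]])
print(e_step_counts(10, X_ij, np.array([-1.0, -2.0])))  # illustrative
\end{verbatim}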
At step $(l+1)$, the current value $\utau _{k+1}^{l+1}$ is obtained from $\utau_{k+1}^{l}$ using the equation \begin{equation} \label{NRiteration} \utau_{k+1}^{l+1} = \utau_{k+1}^{l} - \biggl[\frac{\partial^{2} g_{c}}{\partial\utau^{2}} \bigl(\utau_{k+1}^{l},\hat{ \mathbf{b}} \bigl(\utau _{k+1}^{l} \bigr) \bigr) \biggr]^{-1} \frac{\partial g_{c}}{ \partial\utau } \bigl(\utau _{k+1}^{l}, \hat{\mathbf{b}} \bigl(\utau_{k+1}^{l} \bigr) \bigr), \end{equation} where $\frac{\partial^{2} g_{c}}{\partial\utau^{2}}(\utau _{k+1}^{l},\hat{\mathbf{b}}(\utau_{k+1}^{l}))$ and $\frac{\partial g_{c}}{ \partial\utau}(\utau_{k+1}^{l},\hat{\mathbf{b}}(\utau _{k+1}^{l}))$ are, respectively, the second and first order partial derivatives of $g_{c}$ evaluated at $\utau=\utau_{k+1}^{l}$. The explicit expressions for the first and second order partial derivatives of $g_{c}(\utau,\hat{\mathbf{b}}(\utau))$ with respect to $\utau$ are given in the supplemental article \citet{supp}. These expressions, in turn, involve the first and second order partial derivatives of $\hat{\mathbf{b}}(\utau)$ with respect to $\utau$. Since no analytical form of $\hat{\mathbf{b}}(\utau)$ as a function of $\utau $ is available, one has to resort to numerical methods to estimate $\hat {\mathbf{b}}(\utau)$ and its first and second order partial derivatives at each iterative step $k \ge0$ and $l\ge0$. A fast and effective way of obtaining these numerical estimates is outlined in the supplemental article \citet{supp} for the interested reader. On convergence at $\utau=\hat{\utau}$, one obtains the numerical estimate of the matrix ${\partial^{2} g(\hat{\utau},\hat{\mathbf{b}}(\hat {\utau }))}/{\partial\utau^{2}}$ by a similar method. The reader is referred to the supplemental article \citet{supp} for details. \section{Bayesian inference for the PRC} \label{section:PRC} Suppose $w$ minutia matches are observed for a fingerprint pair with total numbers of detected minutiae $m_1$ and $m_2$, and with quality measures $Q_{1}$ and $Q_2$ (assumed continuous for the moment). Due to varying image quality, not all of the $w$ matches correspond to matches between genuine (true) minutiae. The model developed in Section~\ref{section:glmm} gives the number of true minutia matches, $Y^{(0,0)}$, to be binomially distributed with parameters $w$ and $p_{00} \equiv e^{\eta^{(0,0)}}/\sum_{u=0}^{1}\sum_{v=0}^{1} e^{\eta^{(u,v)}}$ for the number of trials and success probability, respectively, where $\eta ^{(u,v)}$ are as defined in (\ref{sim1})--(\ref{sim4}). The binomial distribution for $Y^{(0,0)}$ results from observing that the conditional distribution of independent Poisson\vspace*{1pt} random variables $( Y^{(u,v)}, u,v=\{0,1\} )$ given their sum $Y= \sum_{u=0}^{1}\sum_{v=0}^{1} Y^{(u,v)}=w$ is multinomial with total number of trials $w$ and probabilities $p_{uv}\equiv e^{\eta^{(u,v)}}/\sum_{u=0}^{1}\sum_{v=0}^{1} e^{\eta^{(u,v)}}$ summing to one. It follows that the marginal distribution of each $Y^{(u,v)}$ is binomial for each $(u,v) = (0,0), (0,1), (1,0)$ and $(1,1)$. Assuming $Y^{(0,0)}$ is known, the PRC corresponding to $Y^{(0,0)}$ matches is given by \begin{equation} \label{PRCexpression2} \operatorname{PRC}^{*} \bigl(Y^{(0,0)} | b_1,b_2,m_1,m_2, \utau \bigr) = P \bigl(\mathcal{S} \ge Y^{(0,0)} \bigr), \end{equation} where $\mathcal{S}^{} \sim\operatorname{Poisson}(m_1 m_2 e^{\eta})$ and $\eta= 2\beta_{0} + b_{1} + b_{2}$ with $b_{1}$ and $b_{2}$ distributed as independent $N(0,\sigma^{2})$ random variables.
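As a minimal numerical sketch of (\ref{PRCexpression2}), the conditional PRC is simply a Poisson survival probability; all numerical inputs below are illustrative rather than estimates from any of the databases.
\begin{verbatim}
import numpy as np
from scipy.stats import poisson

def prc_conditional(y00, m1, m2, beta0, b1, b2):
    """PRC*(y00 | b1, b2, m1, m2, tau): P(S >= y00), where
    S ~ Poisson(m1 * m2 * exp(eta)) and eta = 2*beta0 + b1 + b2."""
    rate = m1 * m2 * np.exp(2.0 * beta0 + b1 + b2)
    return poisson.sf(y00 - 1, rate)   # sf(y00 - 1) = P(S >= y00)

print(prc_conditional(12, 38, 38, beta0=-2.9, b1=0.0, b2=0.0))
\end{verbatim}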
The notation of $\operatorname{PRC}^{*}(\cdot| \cdots)$ in (\ref{PRCexpression2}) emphasizes its dependence on the GLMM parameters $\utau$, the unobserved matches between genuine minutiae, $Y^{(0,0)}$, and the random effects parameters $b_1$ and $b_2$. Since $(Y^{(0,0)},b_1,b_2)$ are unknown, the unconditional PRC is obtained by integrating out all unknown random parameters, that is, \begin{equation} \label{PRCun} \operatorname{PRC}(w | m_1,m_2,\utau) = E_{b_1,b_2,Y^{(0,0)}} \bigl[\operatorname{PRC}^{*} \bigl(Y^{(0,0)} | b_1,b_2,m_1,m_2,\utau \bigr) \bigr], \end{equation} where the expectation is taken over the joint distribution of $(Y^{(0,0)},b_{1},b_{2})$ given~$\utau$. The random variables $b_1$ and $b_2$ are independent of each other based on our modeling assumptions, but note further that $Y^{(0,0)}$ is independent of $(b_1,b_2)$. This is because the $p_{00}$ parameter of the binomial distribution of $Y^{(0,0)}$, conditional on $b_1$ and $b_2$, is independent of $b_1$ and $b_2$ (they cancel out from the numerator and denominator expressions of $p_{00}$). Thus, for given $\utau$, it is easy to sample from the joint distribution of $(Y^{(0,0)},b_{1},b_{2})$ and estimate the expectation using a Monte Carlo sum. To obtain inference for the PRC, assume $R$ samples from the posterior $\pi _{a}(\utau| \uY_{\mathrm{obs}})$, $\utau_{1},\utau_{2},\ldots,\utau_{R}$, are available. For $r=1,2,\ldots,R$, we obtain the $r$th sample of the PRC, $\operatorname{PRC}_{r} \equiv \operatorname{PRC}(w | m_1,m_2,\utau_{r})$, by plugging $\utau=\utau _{r}$ into~(\ref{PRCun}). These $R$ posterior samples are then used to obtain the PRC posterior mean, standard deviation and the $100(1-\alpha )\%$ credible intervals (mean $\pm z_{1-\alpha/2} $ sd) in Section~\ref{section:realdataanalysis}. Note that $\operatorname{PRC}(w | m_1,m_2,\utau)$ in (\ref{PRCun}) should be the same value whether we use the combination $(Q_1,Q_{2})$ or $(Q_2,Q_{1})$. Although this symmetry can be established mathematically, small deviations away from symmetry arise due to sampling error when (i) approximating the expected value in (\ref {PRCun}) using Monte Carlo, and (ii) using different random samples from the posterior of $\utau$ for the two combinations $(Q_1,Q_2)$ and $(Q_2,Q_1)$. Thus, a common estimate, the average of the estimates obtained for the two combinations $(Q_1,Q_2)$ and $(Q_2,Q_1)$, is reported in Section~\ref{section:realdataanalysis}. \section{Data analysis} \label{section:realdataanalysis} The publicly available databases provided by the Fingerprint Verification Competitions [FVCs, \citet{finger:FVC2002} and \citet{FVC2006}] are considered here. Specifically, subsets DB1 and DB2 of FVC2002 and subset DB3 of FVC2006 are used. The FVC2002 DB1 database consists of fingerprint images of $F=100$ different fingers and $L=8$ impressions per finger obtained using the optical sensor ``TouchView II'' by Identix with image size $388 \times374$ and resolution $500$ dots per inch (dpi). The FVC2002 DB2 database consists of $L=8$ impressions from $F=100$ fingers collected from an optical sensor ``FX2000'' by Biometrika with image size $296 \times560$ and resolution $569$ dpi. Fingerprint images in the DB1 and DB2 subsets were collected under exaggerated distortions but were of good quality in general. The FVC2006 database comprises four subsets, DB1 through DB4, each consisting of $F=150$ fingers with $L=12$ impressions per finger.
Fingerprint image data collection in FVC2006 was carried out using a thermal sweeping sensor with resulting image size $400\times500$ and resolution $500$ dpi. Examples of images from the FVC2002 and FVC2006 databases are shown in Figure~\ref{fig:variousimages}. Note the variability in the image acquisition process due to using different sensors. Our methodology is applied to each of the three subsets with two quality measures: (i) the NFIQ categorical quality measure and (ii) the continuous quality measure described in Section~\ref{empiricalfindings}. Results of the parameter inference methodology for components of $\utau$ are as given in Tables~\ref{table:parameterestimation:DB1}, \ref {table:parameterestimation:DB2} and \ref{table:parameterestimation:DB3} based on $R=200$ samples from the posterior distribution $\pi_{a}( \utau| \uY_{\mathrm{obs}} )$. \begin{table} \tabcolsep=0pt \caption{Posterior means ($M$), standard deviations ($\mathit{SD}$) and $99.9\%$ credible intervals ($\mathit{CI}$) for components of $\utau$ based on FVC2002 DB1} \label{table:parameterestimation:DB1} \begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}lccclccc@{}} \hline &\multicolumn{3}{c}{\textbf{Continuous,} $\bolds{Q_{\mathrm{con}}}$} & &\multicolumn{3}{c@{}}{\textbf{Categorical,} $\bolds{Q_{\mathrm{cat}}}$}\\ [-6pt] &\multicolumn{3}{c}{\hrulefill} & &\multicolumn{3}{c@{}}{\hrulefill}\\ $\bolds{\utau}$ & \textbf{M} & \textbf{SD} & \textbf{CI} & $\bolds{\utau}$ & \textbf{M} & \textbf{SD} & \textbf{CI}\\ \hline $\theta_0$ & $-1.2801$ & $0.0061$ & $[-1.3003, -1.2599]$ & $\theta_0$ & $-3.4857$ & $0.0049$ & $[-3.5018,-3.4697]$\\ $\theta_1$ & $-5.8520$ & $0.0137$ & $[-5.8969,-5.8070]$ & $\theta_1$ & $-0.7429$ & $0.0013$ & $[-0.7472,-0.7386]$ \\ $-$ & $-$ & $-$ & $-$ & $\theta_2$ & $-1.6144$ & $0.0001$ & $[-1.6145,-1.6143]$\\ $\beta_0$ & $-2.9047$ & $0.0065$ & $[-2.9259,-2.8835]$ & $\beta_0$ & $-2.7297$ & $0.0064$ & $[-2.7507,-2.7087]$\\ $\operatorname{log}(\sigma^{2})$ & $-4.9518$ & $0.1460$ & $[-5.4321,-4.4716]$ & $\operatorname{log}(\sigma^{2})$ & $-4.9537$ & $0.1231$ & $[-5.3588,-4.5486]$ \\ \hline \end{tabular*} \end{table} \begin{table} \tabcolsep=0pt \caption{Posterior means ($M$), standard deviations ($\mathit{SD}$) and $99.9\%$ credible intervals ($\mathit{CI}$) for components of $\utau$ based on FVC2002 DB2} \label{table:parameterestimation:DB2} \begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}lccclccc@{}} \hline &\multicolumn{3}{c}{\textbf{Continuous,} $\bolds{Q_{\mathrm{con}}}$} & &\multicolumn{3}{c@{}}{\textbf{Categorical,} $\bolds{Q_{\mathrm{cat}}}$}\\ [-6pt] &\multicolumn{3}{c}{\hrulefill} & &\multicolumn{3}{c@{}}{\hrulefill}\\ $\bolds{\utau}$ & \textbf{M} & \textbf{SD} & \textbf{CI} & $\bolds{\utau}$ & \textbf{M} & \textbf{SD} & \textbf{CI}\\ \hline $\theta_0$ & $-0.6346$ & $0.0020$ & $[-0.6413,-0.6280]$ & $\theta_0$ & $-2.9255$ & $0.0019$ & $[-2.9319,-2.9191]$\\ $\theta_1$ & $-9.2982$ & $0.0024$ & $[-9.3062,-9.2901]$ & $\theta_1$ & $-1.1496$ & $0.0011$ & $[-1.1532,-1.1460]$ \\ $-$ & $-$ & $-$ & $-$ & $\theta_2$ & $-0.9676$ & $0.0007$ & $[-0.9698,-0.9653]$\\ $-$ & $-$ & $-$ & $-$ & $\theta_3$ & $-4.2827$ & $0.0007$ & $[-4.2850,-4.2804]$\\ $\beta_0$ & $-2.8721$ & $0.0015$ & $[-2.8769,-2.8672]$ & $\beta_0$ & $-2.8810$ & $0.0001$ & $[-2.8811,-2.8810]$\\ $\operatorname{log}(\sigma^{2})$ & $-4.2974$ & $0.0185$ & $[-4.3581,-4.2366]$ & $\operatorname{log}(\sigma^{2})$ & $-4.7817$ & $0.0132$ & $[-4.8252,-4.7382]$ \\ \hline \end{tabular*} \end{table} \begin{table} \tabcolsep=0pt \caption{Posterior means 
($M$), standard deviations ($\mathit{SD}$) and $99.9\%$ credible intervals ($\mathit{CI}$) for components of $\utau$ based on FVC2006 DB3} \label{table:parameterestimation:DB3} \begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}lccclccc@{}} \hline &\multicolumn{3}{c}{\textbf{Continuous,} $\bolds{Q_{\mathrm{con}}}$} & &\multicolumn{3}{c@{}}{\textbf{Categorical,} $\bolds{Q_{\mathrm{cat}}}$}\\ [-6pt] &\multicolumn{3}{c}{\hrulefill} & &\multicolumn{3}{c@{}}{\hrulefill}\\ $\bolds{\utau}$ & \textbf{M} & \textbf{SD} & \textbf{CI} & $\bolds{\utau}$ & \textbf{M} & \textbf{SD} & \textbf{CI}\\ \hline $\theta_0$ & $-6.1444$ & $0.0025$ & $[-6.1526,-6.1362]$ & $\theta_0$ & $-5.4501$ & $0.0014$ & $[-5.4547,-5.4456]$\\ $\theta_1$ & $-3.6746$ & $0.0042$ & $[-3.6885,-3.6607]$ & $\theta_1$ & $-0.2537$ & $0.0009$ & $[-0.2565,-0.2509]$ \\ $-$ & $-$ & $-$ & $-$ & $\theta_2$ & $-1.1425$ & $0.0011$ & $[-1.1462,-1.1389]$\\ $-$ & $-$ & $-$ & $-$ & $\theta_3$ & $-2.3753$ & $0.0032$ & $[-2.3859,-2.3647]$\\ $-$ & $-$ & $-$ & $-$ & $\theta_4$ & $-2.9186$ & $0.0029$ & $[-2.9281,-2.9091]$\\ $\beta_0$ & $-3.1624$ & $0.0011$ & $[-3.1660,-3.1587]$ & $\beta_0$ & $-3.1734$ & $0.0009$ & $[-3.1763,-3.1705]$\\ $\operatorname{log}(\sigma^{2})$ & $-3.9248$ & $0.0127$ & $[-3.9666,-3.8831]$ & $\operatorname{log}(\sigma^{2})$ & $-4.2314$ & $0.0065$ & $[-4.2527,-4.2101]$ \\ \hline \end{tabular*} \end{table} \begin{table} \caption{Inference on $\operatorname{PRC}(12 | 38,38)$ based on $Q_{\mathrm{cat}}$ and FVC2002 DB1: $M$ and $\mathit{CI}$ are, respectively, the posterior means and $99.9\%$ credible intervals} \label{table:real:categorical:PRC:2002:db1} \begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}lcccccc@{}} \hline & \multicolumn{2}{c}{$\mathbf{1}$} & \multicolumn {2}{c}{$\mathbf{2}$} & \multicolumn{2}{c@{}}{$\mathbf{3}$} \\[-6pt] & \multicolumn{2}{c}{\hrulefill} & \multicolumn {2}{c}{\hrulefill} & \multicolumn{2}{c@{}}{\hrulefill} \\ $\bolds{Q_1\setminus Q_2}$& $\mathbf{M}$ & $\mathbf{CI}$ & $\mathbf{M}$ & $\mathbf{CI}$ & $\mathbf{M}$ & $\mathbf{CI}$ \\ \hline $1$ & $0.4082$ & $[0.3759$, & $0.2653$ & $[0.2424$, & $0.1541$ & $[0.1325$,\\ & & $0.4404 ]$ & & $0.2882]$ & & $0.1758]$\\ $2$ & $0.2653$ & $[0.2424$, & $0.1383$ & $[0.1113$, & $0.0565$ & $[0.0451$,\\ & & $0.2882]$ & & $0.1654]$ & & $0.0679]$ \\ $3$ & $0.1541$ & $[0.1325$, & $0.0565$ & $[0.0451$, & $0.0130$ & $[0.0095$,\\ & & $0.1758]$ & & $0.0679]$ & & $0.0164]$\\ \hline \end{tabular*} \end{table} \begin{table}[b] \caption{Inference on $\operatorname{PRC}(12 | 40,40)$ based on $Q_{\mathrm{cat}}$ and FVC2002 DB2: $M$ and $\mathit{CI}$ are, respectively, the posterior means and $99.9\%$ credible intervals} \label{table:real:categorical:PRC:2002:db2} \begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}lcccccccc@{}} \hline & \multicolumn{2}{c}{$\mathbf{1}$} & \multicolumn {2}{c}{$\mathbf{2}$} & \multicolumn{2}{c}{$\mathbf{3}$} &\multicolumn{2}{c@{}}{$\mathbf{4}$} \\[-6pt] & \multicolumn{2}{c}{\hrulefill} & \multicolumn{2}{c}{\hrulefill} & \multicolumn{2}{c}{\hrulefill}& \multicolumn{2}{c@{}}{\hrulefill} \\ $\bolds{Q_1\setminus Q_2}$& $\mathbf{M}$ & $\mathbf{CI}$ & $\mathbf{M}$ & $\mathbf{CI}$ & $\mathbf{M}$ & $\mathbf{CI}$ &$\mathbf{M}$ & $\mathbf{CI}$\\ \hline $1$ & $0.7958$ & $[0.7873$, & $0.5858$ & $[0.5733$, & $0.4734$ & $[0.4605$, & $0.3881$ & $[0.3750$,\\ & & $0.8044]$ & & $0.5983]$ & & $0.4862]$ & & $0.4013]$\\ $2$ & $0.5858$ & $[0.5733$, & $0.2725$ & $[0.2609$, & $0.1556$ & $[0.1469$, & $0.0902$ & $[0.0838$,\\ & & $0.5983]$ & & $0.2841]$ & & $0.1643]$ & & 
$0.0967]$\\ $3$ & $0.4734$ & $[0.4605$, & $0.1556$ & $[0.1469$, & $0.0667$ & $[0.0608$, & $0.0275$ & $[0.0245$,\\ & & $0.4862]$ & & $0.1643]$ & & $0.0726]$ & & $0.0306]$ \\ $4$ & $0.3881$ & $[0.3750$, & $0.0902$ & $[0.0838$, & $0.0275$ & $[0.0245$, & $0.0073$ & $[0.0062$,\\ & & $0.4013]$ & & $0.0967]$ & & $0.0306]$ & & $0.0085]$\\ \hline \end{tabular*} \end{table} \begin{table} \caption{Inference on $\operatorname{PRC}(12 | 38,38)$ based on $Q_{\mathrm{con}}$ and FVC2002 DB1: $M$ and $\mathit{CI}$ are, respectively, the posterior means and $99.9\%$ credible intervals} \label{table:real:continuous:PRC:2002:db1} \begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}lcccccc@{}} \hline & \multicolumn{2}{c}{$\bolds{0.3}$} & \multicolumn {2}{c}{$\bolds{0.4}$} & \multicolumn{2}{c@{}}{$\bolds{0.5}$} \\[-6pt] & \multicolumn{2}{c}{\hrulefill} & \multicolumn {2}{c}{\hrulefill} & \multicolumn{2}{c@{}}{\hrulefill} \\ $\bolds{Q_1\setminus Q_2}$& $\mathbf{M}$ & $\mathbf{CI}$ & $\mathbf{M}$ & $\mathbf{CI}$ & $\mathbf{M}$ & $\mathbf{CI}$ \\ \hline $0.3$ & $0.5358$ & $[0.5126$, & $0.3908$ & $[0.3671$, & $0.2868$ & $[0.2650$,\\ & & $0.5591]$ & & $0.4146]$ & & $0.3087]$\\ $0.4$ & $0.3908$ & $[0.3671$, & $0.2383$ & $[0.2174$, & $0.1455$ & $[0.1280$, \\ & & $0.4146]$ & & $0.2592]$ & & $0.1631]$ \\ $0.5$ & $0.2868$ & $[0.2650$, & $0.1455$ & $[0.1280$, & $0.0724$ & $[0.0610$,\\ & & $0.3087]$ & & $0.1631]$ & & $0.0838]$\\ \hline \end{tabular*} \end{table} \textit{Inference for the PRC}: We report the PRC corresponding to $w=12$ matches using the method outlined in Section~\ref{section:PRC}. Note that $w=12$ is used for illustrative purposes only; similar inference results on the PRC can be obtained for any observed number of minutia matches $w$. The values of $m_1$ and $m_2$, the total numbers of extracted minutiae in the two prints, are fixed at the mean values for each database: These are $(m_1,m_2)=(38,38), (40,40)$ and $(84,84)$ for FVC2002 DB1, FVC2002 DB2 and FVC2006 DB3, respectively. The mean numbers for FVC2002 DB1 and DB2 are similar, and are much smaller than for FVC2006 DB3. Tables~\ref{table:real:categorical:PRC:2002:db1} and \ref {table:real:categorical:PRC:2002:db2} give the results for FVC2002 DB1 and DB2 based on the categorical quality measure, whereas Tables~\ref{table:real:continuous:PRC:2002:db1} and \ref {table:real:continuous:PRC:2002:db2} give the results for the continuous quality measure. Similar reports and conclusions are obtained for FVC2006 DB3 as for FVC2002 DB1 and DB2, so the relevant tables are not presented here but relegated to the supplemental article [\citet{supp}]. The posterior mean estimates of the $\operatorname{PRC}$ are monotonically decreasing in the quality measures (for both NFIQ and the continuous quality measure), as should be expected. As quality improves, erroneous decisions due to spurious minutia matches are reduced. In turn, this component of uncertainty in the PRC evaluation is also decreased. Thus, for high quality images, the PRC essentially captures the inherent inter-finger variability in the population. The results reported in Tables~\ref{table:real:categorical:PRC:2002:db1}--\ref {table:real:continuous:PRC:2002:db2} show the importance of having good quality images when a decision of a positive match has to be made. With the requirement of $\operatorname{PRC} \approx0.05$, we see from the four tables that this is achieved when both prints are of the best quality.
The PRC increases quickly as the image quality degrades, as indicated by all four tables, signifying the importance of very good quality images for PRC assessment. For poor quality images, one has to be extra cautious in interpreting the extent of a fingerprint match, as the uncertainty associated with a positive match is very large. \begin{table}[b] \caption{Inference on $\operatorname{PRC}(12 | 40,40)$ based on $Q_{\mathrm{con}}$ and FVC2002 DB2: $M$ and $\mathit{CI}$ are, respectively, the posterior means and $99.9\%$ credible intervals} \label{table:real:continuous:PRC:2002:db2} \begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}lcccccccc@{}} \hline & \multicolumn{2}{c}{$\bolds{0.2}$} & \multicolumn {2}{c}{$\bolds{0.4}$} & \multicolumn{2}{c}{$\bolds{0.5}$} &\multicolumn{2}{c@{}}{$\bolds{0.6}$} \\[-6pt] & \multicolumn{2}{c}{\hrulefill} & \multicolumn{2}{c}{\hrulefill} & \multicolumn{2}{c}{\hrulefill}& \multicolumn{2}{c@{}}{\hrulefill} \\ $\bolds{Q_1\setminus Q_2}$& $\mathbf{M}$ & $\mathbf{CI}$ & $\mathbf{M}$ & $\mathbf{CI}$ & $\mathbf{M}$ & $\mathbf{CI}$&$\mathbf{M}$ & $\mathbf{CI}$ \\ \hline $0.2$ & $0.9152$ & $[0.9087$, & $0.6967$ & $[0.6811$, & $0.6292$ & $[0.6134$, & $0.5965$ & $[0.5797$, \\ & & $0.9217]$ & & $0.7123]$ & & $0.6451]$ & & $0.6134]$\\ $0.4$ & $0.6967$ & $[0.6811$, & $0.1956$ & $[0.1816$, & $0.1158$ & $[0.1051$, & $0.0873$ & $[0.0787$,\\ & & $0.7123]$ & & $0.2096]$ & & $0.1265]$ & & $0.0959]$\\ $0.5$ & $0.6292$ & $[0.6134$, & $0.1158$ & $[0.1051$, & $0.0559$ & $[0.0489$, & $0.0371$ & $[0.0323$, \\ & & $0.6451]$ & & $0.1265]$ & & $0.0630]$ & & $0.0420]$ \\ $0.6$ & $0.5965$ & $[0.5797$, & $0.0873$ & $[0.0787$, & $0.0371$ & $[0.0323$, & $0.0228$ & $[0.0193$,\\ & & $0.6134]$ & & $0.0959]$ & & $0.0420]$ & & $0.0264]$\\ \hline \end{tabular*} \end{table} Tables~\ref{table:real:categorical:PRC:2002:db1}--\ref {table:real:continuous:PRC:2002:db2} invite some comparisons. For DB1, the average values of $Q_{\mathrm{con}}$ for each value of $Q_{\mathrm{cat}}=1,2,3$ are $0.32,0.45,0.51$, respectively; for DB2, the average values of $Q_{\mathrm{con}}$ for each value of $Q_{\mathrm{cat}}=1,2,3,4$ are $0.24$, $0.38$, $0.53$ and $0.58$, respectively. The levels of $Q_{\mathrm{con}}$ chosen in Tables~\ref{table:real:continuous:PRC:2002:db1} and \ref {table:real:continuous:PRC:2002:db2} are taken to reflect these average values. So, one would expect the PRC results presented for each database to be consistent over $Q_{\mathrm{cat}}$ and $Q_{\mathrm{con}}$, especially when the value of $\operatorname{PRC}$ is small, say, less than $0.05$. This is generally the case when we compare the entries of Tables~\ref{table:real:categorical:PRC:2002:db1} and \ref {table:real:continuous:PRC:2002:db1} for DB1 and Tables~\ref{table:real:categorical:PRC:2002:db2} and \ref {table:real:continuous:PRC:2002:db2} for DB2. Minor differences in the PRCs can be attributed to the significant variability of $Q_{\mathrm{con}}$ for each level of $Q_{\mathrm{cat}}$ in the three databases. While we would expect the $Q_{\mathrm{con}}$ values to differ across the levels of $Q_{\mathrm{cat}}$, we do not find this to be the case empirically. There is significant overlap between the $Q_{\mathrm{con}}$ levels for the different $Q_{\mathrm{cat}}$ values.
In the case when $Q_{\mathrm{cat}}=3$ for DB1, the range of $Q_{\mathrm{con}}$ is from $0.32$ to $0.67$, whereas it is $0.30$ to $0.64$ for $Q_{\mathrm{cat}}=2$; see Figure~\ref{fig:boxplot} for an idea of the range of these values for $Q_{\mathrm{con}}$. The PRCs corresponding to $Q_{\mathrm{con}}$ and $Q_{\mathrm{cat}}$ are much larger for DB3 [see Tables~1 and 2 in the supplemental article \citet{supp}] compared to DB1 and DB2, because $(m_1,m_2)=(84,84)$, the mean values for DB3, are much larger compared to the mean minutia numbers for DB1 and DB2. As a result, it is much more likely for random pairings to occur in DB3 since more minutiae are available for pairing. \begin{figure} \includegraphics{734f04.eps} \caption{Boxplots showing the distribution of $Q_{\mathrm{con}}$ for each level of $Q_{\mathrm{cat}}=1, 2$ and $3$ for DB1.} \label{fig:boxplot} \end{figure} \textit{Analysis and interpretation for the forensic practitioner}: We illustrate how PRCs can be obtained for a pair of prints under investigation in forensic practice. The pair in Figure~\ref{fig:impostor:match} has $w=7$ matches with $m_1=35$ and $m_2=49$, and $Q_{\mathrm{cat}}=2$ and $3$ for the left and right images, respectively. Since these images are from FVC2002 DB2, we base our inference on the results from this database. Corresponding to $w=7$, the mean PRC is $0.6395$ with $99.9\%$ credible interval $[0.6263, 0.6523]$. The mean value implies that about $64\%$ of random pairings of impostor fingerprints will have $7$ or more minutia matches. Thus, there is no evidence to suggest that the pair in Figure~\ref{fig:impostor:match} is something other than a random match (namely, an impostor pair). The forensic practitioner can also perform a ``what if'' analysis. If the quality of the images were good, with $Q_{\mathrm{cat}}$ taking the value $4$ for both images in the pair, then the PRC is reduced to $0.3112$, which still indicates that this is more likely a random pairing. With this analysis, a practitioner can eventually conclude that the pair in question is very likely a random match even in the best-case scenario where the image qualities are perfect. The GLMM procedure can also be utilized to provide a guideline for choices of $w$ that will give desired PRC values for forensic testimony; for example, $\operatorname{PRC}=0.01$ means that the error we make in declaring a positive match when in fact it is not one is at most $0.01$. Table~\ref{table:designw} provides the smallest values of $w$ guaranteeing $\operatorname{PRC}(w | m_1,m_2) \le0.01$ for different $Q_{\mathrm{cat}}$ combinations when $m_1=35$ and $m_2=49$ based on DB2, with the maximum possible number of matches $w$ being ${\operatorname{min}}(m_1,m_2)=35$. Based on the $(2,3)$ [or $(3,2)$] entry of the table, we see that the desired $w$ to make a reliable decision for the above prints in question should be $21$. Thus, $w=7$ is too low for making a reliable positive match decision. The $*$ entries corresponding to the lowest quality combinations indicate that even for the maximum value of $w=35$, the PRC never goes below $0.01$. This again emphasizes that positive identification decisions cannot and should not be made with very low quality images. Corresponding results can be obtained for DB1 and DB3 similarly, and for $Q_{\mathrm{con}}$, and are therefore not presented here. \begin{table} \tablewidth=200pt \caption{Smallest $w$ required for $\operatorname{PRC}(w | 35,49)\le0.01$ based on $Q_{\mathrm{cat}}$ and DB2.
$*$ indicates no such $w$ exists} \label{table:designw} \begin{tabular*}{200pt}{@{\extracolsep{\fill}}lcccc@{}} \hline $\bolds{Q_1 / Q_2}$ & $\bolds{1}$ & $\bolds{2}$ & $\bolds{3}$ & $\bolds{4}$\\ \hline $1$ & $*$ & $*$ & $33$ & $29$\\ $2$ & $*$ & $25$ & $21$ & $18$\\ $3$ & $33$ & $21$ & $17$ & $15$\\ $4$ & $29$ & $18$ & $15$ & $13$\\ \hline \end{tabular*} \end{table} \section{Validation based on simulation} \label{simulation:results} We performed simulation and validation experiments for DB1 and DB2 with $F=100$ fingers and $L=8$ impressions per finger. $Q_{\mathrm{cat}}$ and $Q_{\mathrm{con}}$ levels were fixed as in the respective databases, but the total numbers of minutia matches $Y_{ij}$ for each pair of prints $(i,j)$ (see Section~\ref{section:glmm} for the GLMM model and related notation) were simulated from the GLMM model with fixed and known parameter values $\utau= (\utheta',\operatorname{log}(\sigma^{2}))'$. The parameter values were fixed at the estimated values obtained from the real data for DB1 and DB2, as given in Tables~\ref{table:parameterestimation:DB1} and \ref {table:parameterestimation:DB2}. Our inference procedure yields posterior mean and standard deviation estimates as well as $99.9\%$ credible intervals, as described in Section~\ref{section:BayesianInference}. The coverage probabilities of each credible interval were calculated for all the true parameter values as well as the true PRC values based on $50$ runs. We use the coverage probabilities as a measure of how well the Laplace method approximates the exact GLMM likelihood for large $F$. Based on the simulations, we find that the average coverage probabilities for parameters and PRC values (averaged over all three databases) are approximately $98\%$ and $98.5\%$, respectively. There is some underestimation of the coverage probability but not grossly so, indicating that the Laplace approximation to the GLMM likelihood is good even for $F=100$. \section{Summary and discussion} \label{section:discussion} To assess the extent of fingerprint individuality for different image qualities of prints, a GLMM framework and a Bayesian inference procedure are developed. Our inference scheme is able to provide a point estimate as well as a credible interval for the PRC depending on the image quality and the observed number of matches between a pair of prints. Numerical reports of the PRC are obtained which can be used to validate forensic testimony. Further, we provide the smallest number of minutia matches $w$ needed to keep the PRC around a prespecified (small) number. These matching numbers serve as a guideline and a safeguard against falsely deciding a positive match when the quality of the prints is unreliable. The most reliable assessment of fingerprint individuality is obtained when both prints are of very good quality, or when at least one is of very good quality and the other is of good quality. No inference on individuality should be made when either print is of moderate to poor quality, as observed from our experimental results; having poor quality latent prints, for example, severely hampers reliable matching results. As far as we know, previous work has not considered including image quality in fingerprint individuality assessment in a quantitative manner and in a formal statistical framework. Our work can be used as a baseline in forensic applications for acquiring a quantitative assessment of individuality given two prints in question. Regardless of whether the prints are of poor or good quality, the methodology outlined will output a PRC value.
This PRC value could serve as a baseline and as a first guide to the extent to which there is evidence beyond a reasonable doubt. The subsection of Section~\ref{section:realdataanalysis} titled ``Analysis and interpretation for the forensic practitioner'' highlights the kinds of analysis that may be conducted based on the GLMM methodology. The GLMM method still does not utilize other fingerprint features such as the type of minutiae detected, ridge lengths and the class of fingerprint, which could potentially further decrease the PRC and increase the extent of individualization. This will be addressed in future work. \textit{Intra-finger correlations and inter-finger variability}: The incorporation of the random effects $b_f$ is very important in any statistical analysis involving different fingers and multiple impressions per finger. The random effects $b_f$ serve to model both intra-finger correlations as well as inter-finger variability. Models that incorporate correlations due to multiple impressions of the same finger give very different results compared to models that do not account for this. For example, \citet{D10} show that the upper and lower confidence bounds are misleadingly narrower if correlations, when present, are not taken into account. Similarly, the upper and lower credible bounds for the PRC will be affected if one does not account for intra-finger correlations. The random effects $b_{f}$ also serve another purpose. They model inter-finger variability as we move across different fingers in the target population; the database is merely a representative sample of the target population of fingers, for which the true population PRC is unknown and has to be estimated. In Zhu, Dass and Jain (\citeyear{ZDJ07}), the PRC was calculated for each pair of prints based on the minutia distributions of the respective fingers. However, this is not a representative PRC for the \textit{entire} fingerprint database. Thus, Zhu, Dass and Jain (\citeyear{ZDJ07}) considered clustering the minutia distributions for the entire fingerprint database. A more formal approach for obtaining inference for population level PRCs was addressed in \citet{D11}. The random effects $b_{f}$ in the present context represent deviations in the target population, thus enabling inference for this overall population PRC to be made. \textit{Alignment of prints}: The alignment prior to matching a pair of prints is not a separate issue but a function of the detected minutiae. Based on the detected minutiae in a print, the alignment with the other print is done by first aligning a pair of detected minutiae (one from each print), then finding the optimal translation (via the paired minutiae locations) and rotation (via the paired minutiae directions) that exactly aligns the two. So, one can see that the alignment is also a function of image quality: For poor quality images, more spurious minutiae are detected and, hence, the alignment can be more random (compared to the true alignment based on genuine minutiae). The randomness in the alignment also contributes to increasing the number of minutia matches, $Y_{ij}$, for poor quality images. We do not model the alignment separately because its randomness, which depends on spurious minutiae, is captured in the observed $Y_{ij}$. \textit{Fusion schemes}: The difference in the estimates of fingerprint individuality for the three databases can be attributed to the intrinsic nature of the images in these separate databases.
For the databases considered, the intrinsic variability arises due to the different sensors used, the extent of distortion due to varying skin elasticity and the composition of the subjects (manual workers versus aged population and others). Where forensic application is concerned, it is usually the practice to match a latent print to template prints in a database. The sensor used to acquire the templates is known and, therefore, it makes sense to report sensor-specific estimates of fingerprint individuality. However, sensor-specific estimates should not be too different from each other: A significant difference in the reported PRCs (using sensor-specific models) indicates a systematic bias of that particular sensor toward or against random matching. As observed earlier, there is significant variability (as well as overlap) in the $Q_{\mathrm{con}}$ values corresponding to different $Q_{\mathrm{cat}}$ levels. This motivates us to consider GLMMs and inference for PRCs based on combining two or more quality covariates that measure different aspects of image clarity. It is possible to arrive at a single measure of fingerprint individuality by incorporating additional random effects (for different fingerprint databases and sensors) and fixed effects (for two or more quality measures) in the GLMM formulation. We will investigate these fusion issues in our future research. \section*{Acknowledgments} \label{section:acknowledgment} The authors thank the Editor, Associate Editor and referees for their valuable suggestions in improving this paper. \begin{supplement}[id=suppA] \stitle{Supplement to ``A generalized mixed model framework for assessing fingerprint individuality in presence of varying image quality''} \slink[doi]{10.1214/14-AOAS734SUPP} \sdatatype{.pdf} \sfilename{aoas734\_supp.pdf} \sdescription{The results quoted in the main text are proved in Section~1 and the tables of PRC results for DB3 are in Section~2.} \end{supplement}
\section{Introduction} \label{sec:intro} Double-diffusive instabilities commonly occur in any astrophysical fluid that is stable according to the Ledoux criterion, as long as the entropy and chemical stratifications have opposing contributions to the dynamical stability of the system. They drive weak forms of convection described below, and can cause substantial heat and compositional mixing in circumstances reviewed in this paper. Two cases can be distinguished. In {\it fingering convection}, entropy is stably stratified ($\nabla - \nabla_{\rm ad} < 0)$, but chemical composition is unstably stratified $(\nabla_\mu<0)$; it is often referred to as {\it thermohaline} convection by analogy with the oceanographic context in which the instability was first discovered. In {\it oscillatory double-diffusive convection} (ODDC), entropy is unstably stratified ($\nabla - \nabla_{\rm ad} > 0)$, but chemical composition is stably stratified $(\nabla_\mu>0)$; it is related to semiconvection, but can occur even when the opacity is independent of composition. Fingering convection can naturally occur at late stages of stellar evolution, notably in giants, but also in Main Sequence stars that have been polluted by planetary infall (as first proposed by Vauclair, \cite{Vauclair04}), or by material transferred from a more evolved companion star. ODDC on the other hand is naturally found in stars in the vicinity of convective nuclear-burning regions, including high-mass core-burning Main Sequence or red clump stars, and shell-burning RGB and AGB stars. It is also thought to be common in the interior of giant planets that have been formed through the core-accretion scenario. \section{Linear theory and general considerations about vertical mixing} \label{sec:linear} Beyond competing entropy and compositional gradients, a necessary condition for double-diffusive instabilities to occur is $\tau \equiv \kappa_\mu/\kappa_T < 1$ ($\kappa_\mu$ and $\kappa_T$ are the microscopic compositional and thermal diffusivities). This is usually the case in astrophysical fluids, where $\tau$ is typically {\it much} smaller than one owing to the added contribution of photon and electron transport to the thermal diffusivity. The somewhat counter-intuitive manner in which a high thermal diffusivity can be destabilizing is illustrated in Figure \ref{fig:instab}. In the fingering case, a small $\tau$ ensures that any small displaced fluid element rapidly adjusts to the ambient temperature of its new position, while retaining its original composition. An element displaced downward thus finds itself denser than the surrounding fluid and continues to sink; the opposite occurs for an element displaced upward. In the case of ODDC, thermal diffusion can progressively amplify any internal gravity wave passing through, by heating a fluid element at the lowest position of its displacement and cooling it near the highest. In both cases, the efficient development of the instability is conditional on the fluid element being small enough for thermal diffusion to take place. Double-diffusive convection is therefore a process driven on very small scales, usually orders of magnitude smaller than a pressure scaleheight. 
\begin{figure}[t] \includegraphics[width=\textwidth]{Fingering-ODDC.pdf} \caption{\small Illustration of the fingering (left) and oscillatory (right) double-diffusive instabilities (see main text for detail).} \label{fig:instab} \end{figure} Consequently, a common way of studying fingering and ODD convection is by a {\it local} linear stability analysis, in which the background gradients of entropy (related to $\frac{dT_0}{dr} - \frac{dT_0^{\rm ad}}{dr} = T_0 (\nabla - \nabla_{\rm ad}) \frac{d \ln p_0}{dr}$) and composition $\frac{d\mu_0}{dr}$ are approximated as being constant (Baines \& Gill, \cite{BainesGill69}). The governing equations in the Boussinesq approximation are then: \begin{eqnarray} && \frac{1}{{\rm Pr}}\left(\frac{\partial {\bf u}}{\partial t} + {\bf u} \cdot \nabla {\bf u}\right) = -\nabla p + (T-\mu) {\bf e}_z +\nabla^2 {\bf u} \mbox{ , } \quad \nabla \cdot {\bf u} = 0 \mbox{ , } \nonumber \\ && \frac{\partial T}{\partial t} + {\bf u} \cdot \nabla T \pm w = \nabla^2 T \mbox{ , } \quad \frac{\partial \mu}{\partial t} + {\bf u} \cdot \nabla \mu \pm R_0^{-1} w = \tau \nabla^2 \mu \mbox{ , } \label{eq:goveqs} \end{eqnarray} where ${\bf u} = (u,v,w)$, $p$, $T$ and $\mu$ are the non-dimensional velocity field, pressure, temperature and mean molecular weight perturbations of the fluid around the background state, Pr$=\nu/\kappa_T$ is the Prandtl number (and $\nu$ is the viscosity), and \begin{equation} R_0 = \frac{\nabla - \nabla_{\rm ad}}{\frac{\phi}{\delta} \nabla_\mu} \mbox{ where } \phi = \left( \frac{\partial \ln \rho}{\partial \ln \mu} \right)_{p, T} \mbox{ and } \delta = -\left( \frac{\partial \ln \rho}{\partial \ln T} \right)_{p, \mu} \end{equation} is called the {\it density ratio}. Here the unit lengthscale used is the typical horizontal scale $d$ of the basic instability, the unit time is $d^2/\kappa_T$, the unit temperature is $\Delta T_0 = |\frac{dT_0}{dr} - \frac{dT_0^{\rm ad}}{dr}|d$ and the unit compositional perturbation is $\frac{ \Delta T_0}{T_0}\frac{\delta}{\phi} \mu_0$. The $+$ sign in the temperature and composition equations should be used to model fingering convection, while the $-$ sign should be used to model ODDC. Mathematically speaking, this sign change is the only difference between the two processes. Assuming perturbations have a spatio-temporal structure of the form $q({\bf x},t) = \hat q e^{i {\bf k} \cdot {\bf x} + \lambda t}$ where $q$ is either one of the dependent variables, ${\bf k}$ is the wavenumber of the perturbation and $\lambda$ its growth rate (which could be complex), $\lambda$ satisfies a cubic equation: \begin{eqnarray} && \lambda^3 + k^2 (1 + {\rm Pr}+ \tau) \lambda^2 \\ && + \left[ k^4 (\tau +{\rm Pr} + \tau{\rm Pr}) \pm {\rm Pr} \frac{l^2}{k^2} (1-R_0^{-1}) \right] \lambda + \left[ k^6 {\rm Pr} \tau \pm l^2 {\rm Pr} (\tau - R_0^{-1}) \right] = 0 \mbox{ , } \nonumber \label{eq:cubic} \end{eqnarray} where $\pm$ again refers to $+$ for fingering convection and $-$ for ODDC, $k= |{\bf k}|$ and $l$ is the norm of the horizontal component of ${\bf k}$. $\lambda$ is real in the case of fingering convection but complex in the case of ODDC, as expected from the physical description of the mechanism driving the instability. The fastest growing mode in both cases is vertically invariant. Its growth rate $\lambda_{\rm fgm}$ and horizontal wavenumber $l_{\rm fgm}$ can be obtained by maximizing $Re(\lambda)$ over all possible ${\bf k}$. 
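As an illustration, the short Python script below solves this cubic numerically over vertically invariant modes ($k = l$) and returns the fastest-growing one; the wavenumber grid and its upper bound are ad hoc choices, and the function name is ours.
\begin{verbatim}
import numpy as np

def fastest_growing_mode(Pr, tau, R0, sign=+1, lmax=2.0, n=2000):
    """Maximize Re(lambda) of the cubic dispersion relation over
    vertically invariant modes (k = l).
    sign=+1 for fingering convection, sign=-1 for ODDC."""
    best = (-np.inf, None, None)
    for l in np.linspace(1e-4, lmax, n):
        k2 = l ** 2
        coeffs = [1.0,
                  k2 * (1 + Pr + tau),
                  k2 ** 2 * (tau + Pr + tau * Pr)
                  + sign * Pr * (1 - 1.0 / R0),
                  k2 ** 3 * Pr * tau
                  + sign * k2 * Pr * (tau - 1.0 / R0)]
        roots = np.roots(coeffs)
        growth = roots.real.max()
        if growth > best[0]:
            best = (growth, l, roots[np.argmax(roots.real)])
    return best   # (Re(lambda_fgm), l_fgm, complex lambda_fgm)

# fingering example at Pr = tau = 0.03, R0 = 1.1
# (the parameters of the fingering run shown in the figure below)
print(fastest_growing_mode(0.03, 0.03, 1.1, sign=+1))
\end{verbatim}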
Finally, setting $Re(\lambda_{\rm fgm})=0$ identifies marginal stability, and reveals the parameter range for double-diffusive instabilities to be: \begin{eqnarray} && 1 \le R_0 \le \tau^{-1} \mbox{ for fingering convection,} \nonumber \\ && \frac{{\rm Pr} + \tau}{{\rm Pr} + 1} \le R_0 \le 1 \mbox{ for ODDC.} \label{eq:criteria} \end{eqnarray} Note that $R_0 = 1$ in both cases corresponds to the Ledoux criterion. While linear theory is useful to identify {\it when} double-diffusive convection occurs, nonlinear calculations are needed to determine how the latter saturates, and how much mixing it causes. Vertical mixing is often measured via non-dimensional vertical fluxes, called Nusselt numbers. The temperature and compositional Nusselt numbers are defined here as \begin{equation} {\rm Nu}_T = 1 - \frac{F_T}{ \kappa_T \left(\frac{dT_0}{dr} - \frac{dT_0^{\rm ad}}{dr}\right)} \mbox{ and }{\rm Nu}_\mu = 1 - \frac{F_\mu}{ \kappa_\mu \frac{d\mu_0}{dr} } \mbox{ , } \end{equation} where $F_T$ and $F_\mu$ are the dimensional temperature and compositional turbulent fluxes. To reconstruct the dimensional {\it total} fluxes of heat and composition ${\cal F}_T$ and ${\cal F}_\mu$, we have (Wood \etal, \cite{Woodal13}) \begin{equation} {\cal F}_T = - k_T \frac{d T_0}{dr} - ({\rm Nu}_T - 1)k_T \left(\frac{dT_0}{dr} - \frac{dT_0^{\rm ad}}{dr}\right) \mbox{ and } {\cal F}_\mu = - {\rm Nu}_\mu \kappa_\mu \frac{d\mu_0}{dr} \mbox{ , } \label{eq:totalfluxes} \end{equation} where $k_T = \rho_0 c_p \kappa_T$ is the thermal conductivity, and $c_p$ is the specific heat at constant pressure. It is worth noting that ${\rm Nu}_\mu$ can also be interpreted as the ratio of the effective to microscopic compositional diffusivities. Direct Numerical Simulations, which solve the fully nonlinear set of equations (\ref{eq:goveqs}) for given parameter values Pr, $\tau$ and $R_0$ from the onset of instability onward, can in principle be run to estimate the functions ${\rm Nu}_T(R_0;{\rm Pr},\tau)$ and ${\rm Nu}_\mu(R_0;{\rm Pr},\tau)$. However, the actual nonlinear behavior of double-diffusive systems reveals a number of surprises that must be adequately studied before a complete theory for mixing can be put forward. \section{The emergence of large-scale structures} \label{sec:large} \subsection{Large-scale gravity waves and staircases} It has long been known in oceanography that double-diffusive convection has a tendency to drive the growth of structures on scales much larger than that of the basic instability (cf. Stern, \cite{Stern69}). This tendency was recently confirmed in the astrophysical context as well (Rosenblum \etal, \cite{Rosenblumal11}; Brown \etal, \cite{Brownal13}). These structures either take the form of large-scale internal gravity waves or thermo-compositional staircases, as shown in Figure \ref{fig:large-scale}. \begin{figure} \begin{center} \includegraphics[width=0.9\textwidth]{Large-scale-instabs.pdf} \end{center} \caption{\small Top: Fingering simulation for Pr $=\tau=0.03$, $R_0 = 1.1$. The basic instability first saturates into a near-homogeneous state of turbulence, but later develops large-scale gravity waves. Bottom: ODDC simulation for Pr $=\tau=0.03$, $R_0 = 0.66$. The basic instability first saturates into a near-homogeneous state, but later develops into a thermo-compositional staircase, whose steps gradually merge until only one is left.
The mean Nusselt numbers increase somewhat in the presence of waves in the fingering case, and quite significantly when the staircase forms, and at each merger, in the ODDC case. } \label{fig:large-scale} \end{figure} For density ratios close to one, fingering convection tends to excite large-scale gravity waves, through a process called the {\it collective instability} first discovered by Stern (\cite{Stern69}). These waves grow to significant amplitudes, and enhance mixing by fingering convection when they break. The same is true for ODDC, but the latter can sometimes also form thermo-compositional staircases excited by a process called the $\gamma-${\it instability} (Radko, \cite{Radko03}). The staircases spontaneously emerge from the homogeneously turbulent state, and appear as a stack of fully convective, well-mixed regions (the layers) separated by thin strongly stratified interfaces. The layers have a tendency to merge rather rapidly after they form. Vertical mixing increases significantly when layers form, and with each merger. For these reasons, quantifying transport by double-diffusive convection requires understanding not only how and at what amplitude the basic small-scale instabilities saturate, but also under which circumstances large-scale structures may emerge and how the latter affect mixing. \subsection{Mean-field theory} Given their ubiquity in fingering and ODD convection, it is natural to seek a unified explanation for the emergence of large-scale structures that is applicable to both regimes. Mean-field hydrodynamics is a natural way to proceed, as it can capitalize on the separation of scales between the primary instability and the gravity waves or staircases. To understand how mean-field instabilities can be triggered, first note that the intensity of vertical mixing in double-diffusive convection is naturally smaller if the system is closer to being stable, and vice-versa. If a homogeneously turbulent state is spatially modulated by large-scale (but small amplitude) perturbations in temperature or chemical composition, then vertical mixing will be more efficient in regions where the {\it local} density ratio is closer to one, and smaller in regions where it is further from one. The spatial convergence or divergence of these turbulent fluxes can, under the right conditions, enhance the initial perturbations in a positive feedback loop, in which case a mean-field instability occurs. First discussed separately in oceanography, the collective and $\gamma$-instabilities were later discovered to be different unstable modes of the same mean-field equations by Traxler \etal~(\cite{Traxleral11a}), in the context of fingering convection. Their work has successfully been extended to explain the emergence of thermo-compositional layers in ODDC in astrophysical systems by Mirouh \etal~(\cite{Mirouhal12}). A formal stability analysis of the mean-field equations shows that they are unstable to the $\gamma-$instability (the layering instability) whenever the {\it flux ratio} \begin{equation} \gamma = \frac{R_0}{\tau} \frac{{\rm Nu}_T(R_0;{\rm Pr},\tau)}{{\rm Nu}_\mu(R_0;{\rm Pr},\tau)} \label{eq:gamma} \end{equation} is a {\it decreasing} function of $R_0$ (Radko, \cite{Radko03}).
Similarly, a necessary condition for the collective instability was given by Stern \etal~(\cite{Sternal01}), who argued\footnote{It is worth noting that whether this criterion, which was derived in the context of physical oceanography, applies {\it as is} in the astrophysical regime has not yet been verified.} that large-scale gravity waves can develop whenever \begin{equation} A = \frac{({\rm Nu}_T-1)(\gamma_{\rm turb}^{-1}-1)}{{\rm Pr}(1-R_0^{-1})} > 1 \mbox{ , } \label{eq:A} \end{equation} where $\gamma_{\rm turb} = (R_0/\tau) ({\rm Nu}_T-1)/({\rm Nu}_\mu-1)$. This criterion is often much less restrictive than the one for the development of the $\gamma-$instability. Note that ${\rm Nu}_T$ and ${\rm Nu}_\mu$ in (\ref{eq:gamma}) and (\ref{eq:A}) are the Nusselt numbers associated with the small-scale turbulence present {\it before} any large-scale structure has emerged. \section{Fingering (thermohaline) convection} \subsection{Saturation of the primary instability} Traxler \etal~(\cite{Traxleral11b}) were the first to run a systematic sweep of parameter space to study fingering convection in astrophysics, and to measure ${\rm Nu}_T(R_0;{\rm Pr},\tau)$ and ${\rm Nu}_\mu(R_0;{\rm Pr},\tau)$ in 3D numerical experiments. However, they were not able to achieve very low values of Pr and $\tau$. Brown \etal~(\cite{Brownal13}) later presented new simulations with Pr and $\tau$ as low as $10^{-2}$, but this is still orders of magnitude larger than in stellar interiors, where Pr and $\tau$ typically range from $10^{-8}$ to $10^{-3}$. To bridge the gap between numerical experiments and stellar conditions, Brown \etal~(\cite{Brownal13}) derived a compelling semi-analytical prescription for transport by small-scale fingering convection that reproduces their numerical results and can be extrapolated to much lower Pr and $\tau$. Their model attributes the saturation of the fingering instability to the development of shearing instabilities between adjacent fingers (see also Denissenkov, \cite{Denissenkov10} and Radko \& Smith, \cite{RadkoSmith12}). For a given set of governing parameters Pr, $\tau$ and $R_0$, the growth rate and horizontal wavenumber of the fastest-growing fingers can be calculated from linear theory (see Section \ref{sec:linear}). Meanwhile, the growth rate $\sigma$ of shearing instabilities developing between neighboring fingers is proportional to the velocity of the fluid within the finger times its wavenumber (a result that naturally emerges from dimensional analysis, but can also be shown formally using Floquet theory). Stating that shearing instabilities can disrupt the continued growth of fingers requires $\sigma$ and $\lambda_{\rm fgm}$ to be of the same order. This sets the velocity within the finger to be $W = C \lambda_{\rm fgm}/l_{\rm fgm}$, where $C$ is a universal constant of order one. Meanwhile, linear stability theory also relates the temperature and compositional fluctuations $T$ and $\mu$ within a finger to $W$. The turbulent fluxes can thus be estimated {\it only using linear theory}: \begin{equation} {\rm Nu}_T =1 + \frac{1}{ l_{\rm fgm}^2 } \frac{C^2\lambda_{\rm fgm}^2}{\lambda_{\rm fgm} + l_{\rm fgm}^2} \mbox{ , } {\rm Nu}_\mu = 1 + \frac{1}{\tau l_{\rm fgm}^2 } \frac{C^2\lambda_{\rm fgm}^2}{\lambda_{\rm fgm} + \tau l_{\rm fgm}^2} \mbox{ . } \label{eq:Nusfinger} \end{equation} Comparison of these formulae with the data helps calibrate $C$.
Brown \etal~(\cite{Brownal13}) found that $C = 7$ reproduces most of their data very satisfactorily, within a factor of order one or better, except when Pr $< \tau$ (which is rarely the case in stellar interiors anyway). Equation (\ref{eq:Nusfinger}) implies that for low Pr and $\tau$, turbulent heat transport is negligible, while turbulent compositional transport is significant only when $R_0$ is close to one. However, the values of ${\rm Nu}_\mu$ obtained by Brown \etal~(\cite{Brownal13}) are still not large enough to account for the mixing rates required by Charbonnel \& Zahn (\cite{CharbonnelZahn07}) to explain surface abundances in giants. Such large values of $D_\mu$ might on the other hand be achieved if mean-field instabilities take place. \subsection{Mean-field instabilities in fingering convection} As discussed in Section \ref{sec:large}, one simply needs to estimate $\gamma$ and $A$ in order to determine in which parameter regime mean-field instabilities can occur. Using (\ref{eq:gamma}) and (\ref{eq:Nusfinger}) it can be shown that $\gamma$ is always an {\it increasing} function of $R_0$ at low Pr and $\tau$. This implies that fingering convection is stable to the $\gamma-$instability, and therefore not likely to transition {\it spontaneously} to a state of layered convection in astrophysics. The simulations of Brown \etal~(\cite{Brownal13}) generally confirm this statement, except in a few cases discussed below. By contrast, fingering convection does appear to be prone to the collective instability (as shown in Figure \ref{fig:large-scale}) for sufficiently low $R_0$. By calculating $A$ for a typical stellar fluid with Pr $\sim 10^{-6}$ and $\tau \sim 10^{-7}$, we find for instance that gravity waves should emerge when $R_0 < 100$ or so. In this regime, we expect transport to be somewhat larger than for small-scale fingering convection alone, although probably not by more than a factor of 10 (see Figure \ref{fig:large-scale}). Nevertheless, a first-principles theory for the vertical mixing rate in fingering convection, in the presence of internal gravity waves, remains to be derived. Finally, as first hypothesized by Stern (\cite{Stern69}) and found in preliminary work by Brown \etal~(\cite{Brownal13}), it is possible that these large-scale gravity waves could break on a global scale and mechanically drive the formation of layers. If this is indeed the case, transport could be much larger than estimated in (\ref{eq:Nusfinger}) in the region of parameter space for which $A<1$. This, however, remains to be confirmed. \subsection{A plausible 1D mixing prescription for fingering convection} Until we gain a better understanding of the various effects of gravity waves described above, (\ref{eq:Nusfinger}) is our best current estimate for transport by fingering convection in astrophysical objects. An example of the numerical implementation of the model by Brown \etal~(\cite{Brownal13}) is now available in MESA, and consists of the following steps: (1) estimate the local properties of the star, and calculate all governing parameters/diffusivities; (2) estimate the properties of the fastest-growing fingering modes using linear theory (see Section \ref{sec:linear}); and (3) apply (\ref{eq:Nusfinger}) to calculate ${\rm Nu}_\mu$, and then (\ref{eq:totalfluxes}) to calculate ${\cal F}_\mu$. Turbulent heat transport is negligible, so ${\cal F}_T = - k_T dT_0/dr$.
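For reference, a minimal sketch of this pipeline is given below (Python). The linear-theory solve is left as a placeholder, the numbers are purely illustrative, and the final step uses the standard identification of the turbulent compositional diffusivity $D_\mu = \kappa_\mu({\rm Nu}_\mu-1)$:
\begin{verbatim}
def fastest_growing_finger(Pr, tau, R0):
    """Placeholder for the linear-theory solve of Section (linear):
    returns (lambda_fgm, l_fgm) of the fastest-growing mode.  A real
    implementation maximizes the growth rate of the dispersion
    relation over the horizontal wavenumber."""
    return 0.05, 0.3   # illustrative values only

def fingering_D_mu(Pr, tau, R0, kappa_mu, C=7.0):
    lam, l = fastest_growing_finger(Pr, tau, R0)            # step (2)
    Nu_mu = 1.0 + (C**2 * lam**2) / (tau * l**2 * (lam + tau * l**2))
    return kappa_mu * (Nu_mu - 1.0)                         # step (3)

print(fingering_D_mu(Pr=1e-6, tau=1e-7, R0=5.0, kappa_mu=1.0))
\end{verbatim}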
\section{Oscillatory double-diffusive convection (semiconvection)} \subsection{Saturation of the primary instability} 3D numerical simulations of ODDC were first presented by Rosenblum \etal~(\cite{Rosenblumal11}) and Mirouh \etal~(\cite{Mirouhal12}). Both explored parameter space to measure, as in the case of fingering convection, the functions ${\rm Nu}_T(R_0;{\rm Pr},\tau)$ and ${\rm Nu}_\mu(R_0;{\rm Pr},\tau)$ after saturation of the basic ODD instability. The values of Pr and $\tau$ achieved, however, were not very low, and models are needed once again to extrapolate these results to parameters relevant for stellar interiors. Mirouh \etal~(\cite{Mirouhal12}) proposed an empirical formula for ${\rm Nu}_T(R_0;{\rm Pr},\tau)$ (and ${\rm Nu}_\mu(R_0;{\rm Pr},\tau)$, via $\gamma$), whose parameters were fitted to the experimental data. However, a theory based on first principles is more desirable. We have recently succeeded in applying a very similar method to the one used by Brown \etal~(\cite{Brownal13}) to model transport by small-scale ODDC. As described in Moll \etal~(\cite{Mollal14}), a simple approximate estimate for temperature and compositional Nusselt numbers can also be derived from the linear theory for the fastest-growing mode (see Section \ref{sec:linear}), this time in the form of \begin{equation} {\rm Nu}_T = 1 + \frac{\lambda_R \tilde{\lambda} }{l_{\rm fgm}^2} \frac{ \lambda_R + l_{\rm fgm}^2 }{( \lambda_R + l_{\rm fgm}^2)^2 + \lambda_I^2} \mbox{ , } {\rm Nu}_\mu = 1 + \frac{\lambda_R \tilde{\lambda} }{\tau l_{\rm fgm}^2} \frac{\lambda_R + \tau l_{\rm fgm}^2 }{( \lambda_R + \tau l_{\rm fgm}^2)^2 + \lambda_I^2} \mbox{ , } \label{eq:NusODDC} \end{equation} where $\lambda_R= Re(\lambda_{\rm fgm})$, $\lambda_I=Im(\lambda_{\rm fgm})$ and $\tilde{\lambda} = \sqrt{K_1^2 \lambda_R^2 + K_2^2 \lambda_I^2}$. The constants $K_1$ and $K_2$ must again be fitted to the existing data; preliminary results suggest that $K_1 \simeq 50$ and $K_2 \simeq 7$. By contrast with fingering convection, ODDC is subject to both $\gamma-$ and collective instabilities. Both modify the vertical heat and compositional fluxes quite significantly, so (\ref{eq:NusODDC}) should {\it not} be used {\it as is} to model mixing by ODDC. It is used, on the other hand, to determine when mean-field instabilities occur. \subsection{Layered convection} The region of parameter space unstable to layering can again be determined by calculating $\gamma$ (using \ref{eq:NusODDC} this time), and checking when $d\gamma/dR_0 < 0$. Moll \etal~(\cite{Mollal14}) (see also Mirouh \etal, \cite{Mirouhal12}) showed that layering is always possible for Pr and $\tau$ below one, provided $R_0 \in [R_c({\rm Pr},\tau), 1]$. The critical value $R_c({\rm Pr},\tau)$ is fairly close to zero in the parameter regime appropriate for stellar interiors, as $R_c \propto {\rm Pr}^{1/2}$ in the limit $\tau \sim {\rm Pr} \rightarrow 0$. This implies that the region of parameter space unstable to layer formation spans nearly the entire ODDC range. As discussed in Section \ref{sec:large}, the vertical heat and compositional fluxes increase dramatically when a staircase first emerges, and then again at each layer merger.
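Since the layering test just described requires evaluating (\ref{eq:NusODDC}) repeatedly, it is worth noting how little that evaluation involves; a minimal sketch (Python, with made-up mode properties and the preliminary values of $K_1$ and $K_2$ quoted above) is:
\begin{verbatim}
import numpy as np

def nusselt_oddc(lam, l, tau, K1=50.0, K2=7.0):
    """Eq. (NusODDC): Nusselt numbers for small-scale ODDC from the
    complex growth rate lam of the fastest-growing oscillatory mode."""
    lR, lI = lam.real, lam.imag
    ltil = np.sqrt(K1**2 * lR**2 + K2**2 * lI**2)
    Nu_T  = 1.0 + (lR*ltil/l**2) * (lR + l**2) \
                  / ((lR + l**2)**2 + lI**2)
    Nu_mu = 1.0 + (lR*ltil/(tau*l**2)) * (lR + tau*l**2) \
                  / ((lR + tau*l**2)**2 + lI**2)
    return Nu_T, Nu_mu

# Illustrative complex growth rate of an oscillatory mode:
print(nusselt_oddc(lam=0.02 + 0.5j, l=0.4, tau=0.1))
\end{verbatim}
These values then feed the $\gamma$-based layering test $d\gamma/dR_0 < 0$ used below.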
This increase in transport was studied by Wood \etal~(\cite{Woodal13}), who ran and analyzed simulations for $R_0 \in [R_c,1]$, and argued that their results are consistent with the following empirical transport laws for layered convection: \begin{equation} {\rm Nu}_T = 1 + g_T(R_0,\tau) ({\rm Ra} {\rm Pr})^{1/3} \mbox{ and } {\rm Nu}_\mu = 1 + g_\mu(R_0,\tau) {\rm Ra}^{0.37} {\rm Pr}^{1/4} \tau^{-1} \mbox{ , } \label{eq:Nuslayers} \end{equation} where $g_T$ and $g_\mu$ are slowly varying functions of $R_0$ and $\tau$, and where the Rayleigh number is ${\rm Ra} = \left| \frac{\delta }{\rho_0} \frac{d p_0}{dr} \left(\frac{d \ln T_0}{dr} - \frac{d \ln T_{\rm ad}}{dr} \right) \frac{H^4}{ \kappa_T \nu} \right| $, with $H$ the mean step height in the staircase. For numerically achievable Pr and $\tau$, Wood \etal~(\cite{Woodal13}) estimated that $g_T \simeq 0.1$ and $g_\mu \simeq 0.03$. Equation (\ref{eq:Nuslayers}) has two important consequences. The first is that turbulent heat transport can be significant in layered convection, provided $H$ is large enough. The second is that both Nusselt numbers are (roughly) proportional to $H^{4/3}$, but nothing so far has enabled us to determine what $H$ may be in stellar interiors. Indeed, in {\it all} existing simulations of layered ODDC to date, layers were seen to merge fairly rapidly until a single one was left. Whether staircases in stellar interiors evolve in the same way, or eventually reach a stationary state with a layer height smaller than the thickness of the unstable region itself, is difficult to determine without further modeling, and remains the subject of current investigations. \subsection{Large-scale gravity waves} ODD systems which do not transition into staircases ($R_0 < R_c$) also usually evolve further with time after saturation of the basic instability, with the small-scale wave-turbulence gradually giving way to larger-scale gravity waves. Whether the latter are always excited by the collective instability, or could be promoted by other types of nonlinear interactions between modes that transfer energy to larger scales, remains to be determined. In all cases, these large-scale waves have significant amplitudes and regularly break. This enhances transport, as it did in the case of fingering convection. Moll \etal~(\cite{Mollal14}) found that the resulting Nusselt numbers are between 1.2 and 2 across most of the unstable range, regardless of Pr or $\tau$. These results are still quite preliminary, however, and their dependence on the domain size (which sets the scale of the longest waves) remains to be determined. \subsection{A plausible 1D mixing prescription for ODDC} Based on the results obtained so far, and summarized above, a plausible mixing prescription for ODDC can be obtained by applying the following steps: (1) estimate the local properties of the star, and calculate all governing parameters/diffusivities; (2) estimate the properties of the fastest-growing modes using linear theory (see Section \ref{sec:linear}); (3) determine whether layers are expected to form by calculating $\gamma$ (using \ref{eq:NusODDC}) for neighboring values of $R_0$, and evaluating $d\gamma/dR_0$; (4) if the system is layered, then {\it assume a layer height} (for instance, some small fraction of a pressure scaleheight), and calculate the heat and compositional fluxes using (\ref{eq:totalfluxes}) with (\ref{eq:Nuslayers}).
If the system is not expected to form layers, then calculate these fluxes using (\ref{eq:totalfluxes}) and ${\rm Nu}_T \sim {\rm Nu}_\mu \sim 1.2$--$2$ instead. The unknown layer height is the only remaining free parameter of this model, and will hopefully be constrained in the future by comparison of the model predictions with asteroseismic results. \begin{figure} \begin{center} \includegraphics[width=0.92\textwidth]{Regimes.pdf} \end{center} \caption{\small Illustration of the various regimes of double-diffusive convection in astrophysics discussed in this paper, as we understand them today.} \label{fig:regimes} \end{figure}
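Putting the pieces together, the branch structure of this prescription can be sketched as follows (Python; the flux-ratio function and the Rayleigh number built from the {\it assumed} layer height are supplied by the stellar model, and the numerical constants are those quoted above):
\begin{verbatim}
def oddc_nusselt(Pr, tau, R0, Ra_H, gamma_of_R0,
                 g_T=0.1, g_mu=0.03, dR0=1e-3):
    """Steps (3)-(4): decide between layered and wave-dominated ODDC
    and return (Nu_T, Nu_mu).  gamma_of_R0 evaluates the flux ratio
    from Eq. (NusODDC); Ra_H uses an *assumed* layer height H."""
    dgamma = (gamma_of_R0(R0 + dR0) - gamma_of_R0(R0 - dR0)) / (2*dR0)
    if dgamma < 0:                   # layered convection, Eq. (Nuslayers)
        Nu_T  = 1.0 + g_T  * (Ra_H * Pr)**(1.0/3.0)
        Nu_mu = 1.0 + g_mu * Ra_H**0.37 * Pr**0.25 / tau
    else:                            # large-scale gravity waves
        Nu_T = Nu_mu = 1.5           # anywhere in the range 1.2-2
    return Nu_T, Nu_mu

# Toy call with a decreasing flux ratio, hence the layered branch:
print(oddc_nusselt(Pr=1e-6, tau=1e-7, R0=0.5, Ra_H=1e12,
                   gamma_of_R0=lambda r: 1.0 / r))
\end{verbatim}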
\section{Introduction} Although the current experimental data show no clear evidence of new physics beyond the Standard Model (SM), the minimal supersymmetric (SUSY) extension of the SM (MSSM) is still a primary candidate for new physics. As has been well-known and intensively studied, the MSSM not only provides us with a solution to the gauge hierarchy problem but also offers a variety of interesting phenomenologies, such as the origin of the electroweak symmetry breaking from SUSY breaking, the SM-like Higgs boson mass prediction with soft SUSY breaking parameters, the lightest superpartner (LSP) as a natural candidate for the dark matter (DM) in our universe, and the grand unified theory paradigm with the successful unification of the three SM gauge couplings at the scale of ${\cal O}(10^{16}\, {\rm GeV})$. Many ongoing and planned experiments will continue searching for the MSSM or, more generally, supersymmetric theories beyond the SM. In phenomenologically viable models, SUSY is spontaneously broken in the hidden sector and the SUSY breaking effects are mediated to the MSSM sector by a certain mechanism for generating soft SUSY breaking terms in the MSSM. Associated with spontaneous SUSY breaking, a massless fermion called the goldstino emerges due to the Nambu-Goldstone theorem, and it is absorbed into the spin-$1/2$ component of the spin-$3/2$ massive gravitino in supergravity. The gravitino mass is characterized by the SUSY breaking order parameter $f$ and the reduced Planck mass $M_P=2.43 \times 10^{18}$ GeV as $m_{3/2} \simeq f/M_P$. It is possible that SUSY breaking occurs at a very low energy (see, for example, Ref.~\cite{Brignole:1996fn}). If this is the case, the gravitino becomes the LSP and is involved in phenomenology at low energies. For example, if the SUSY breaking scale lies in the multi-TeV range, the LSP gravitino is extremely light, with its mass of ${\cal O}({\rm meV})$. Assuming the decoupling of the hidden sector fields except for the light gravitino (or, equivalently, the goldstino), the low energy effective theory involving the very light gravitino can be described by employing a goldstino chiral superfield $X$ with the nilpotent condition $X^2=0$ \cite{Rocek:1978nb, Lindstrom:1979kq, Komargodski:2009rz}. With this formalism, the phenomenology of the MSSM with the goldstino superfield has been studied in detail \cite{Antoniadis:2010hs, Antoniadis:2012zz, Antoniadis:2014eta} (see also Ref.~\cite{Dudas:2012fa} for the phenomenology in a more general setup). This framework is the so-called nonlinear MSSM (NL-MSSM). Interestingly, it has been shown that if the SUSY breaking scale lies in the multi-TeV range, the SM-like Higgs boson receives a sizable contribution to its mass at the tree level after eliminating the $F$-component of the goldstino superfield, and as a result the Higgs boson mass of around $125$ GeV can be achieved by the tree-level Higgs potential. This is in sharp contrast with the usual MSSM, in which the 125 GeV SM-like Higgs boson mass is reproduced by quantum corrections through scalar top quarks with masses in the multi-TeV range or above. From the viewpoint of collider physics, the NL-MSSM has the advantage that the scalar top quarks can be sufficiently light to be explored in the near future. The SUSY breaking order parameter $\sqrt{f}\lesssim {\cal O}(100)$ TeV gives an extremely light gravitino with mass $m_{3/2} \lesssim 10$ eV in the NL-MSSM.
Although such a light gravitino is harmless from the phenomenological point of view (see, for example, Ref.~\cite{Feng:2010ij}), its relic density is far below the observed dark matter (DM) density. Even if the observed relic density is achieved by some non-standard thermal history of the universe, the very light gravitino is likely to be hot DM and would prevent the formation of the observed structure of the universe. Therefore, for the completion of the NL-MSSM, we should consider an extension of the model which supplements it with a suitable DM candidate. In this paper, we propose a minimal extension of the NL-MSSM by introducing a $Z_2$-parity odd SM gauge singlet chiral superfield $\Phi$ and show that the lightest scalar component in $\Phi$ plays the role of the Higgs-portal DM \cite{McDonald:1993ex, Burgess:2000yq}\footnote{For a recent review, see Ref.~\cite{Arcadi:2019lka} and references therein.} through its coupling with the MSSM Higgs doublets induced by the goldstino superfield. With a suitable choice of the model parameters, we can realize a phenomenologically viable Higgs-portal DM scenario. If the SUSY breaking scale lies in the multi-TeV range, the SM-like Higgs boson mass of 125 GeV can be achieved by the tree-level Higgs potential through the low-scale SUSY breaking effect. \section{NL-MSSM and the Higgs boson mass}\label{Sec:two} We first present the basic formalism of the NL-MSSM and show how the 125 GeV SM-like Higgs boson mass can be achieved in this framework. We begin with the goldstino effective Lagrangian of the form \cite{Komargodski:2009rz}: \begin{eqnarray} {\cal L}_X=\int d^4\theta X^\dagger X +\left(\int d^2\theta fX +{\rm h.c.} \right)\,, \label{eq:Xlag} \end{eqnarray} where $X$ is a goldstino chiral superfield, and $f$ is the SUSY breaking order parameter in the hidden sector. Although the stability of the hidden sector scalar potential needs an extension of the above minimal K\"ahler potential, this Lagrangian is enough to understand the essence of the formalism. The goldstino chiral superfield is subject to the nilpotent condition \cite{Rocek:1978nb, Lindstrom:1979kq, Komargodski:2009rz}, \begin{eqnarray} X^2=0\,, \end{eqnarray} which leads us to the expression of the superfield in components, \begin{eqnarray} X={\psi_X \psi_X \over 2 F_X}+\sqrt{2}\theta\psi_X+\theta\theta F_X \,. \label{eq:Xsol} \end{eqnarray} The scalar component of the goldstino superfield is to be integrated out in the low energy effective theory, and under the nilpotent condition, it is replaced by the bilinear term of the goldstino fields. In fact, substituting Eq.~(\ref{eq:Xsol}) into Eq.~(\ref{eq:Xlag}) and eliminating the auxiliary field $F_X$, we recover the Volkov-Akulov Lagrangian \cite{Volkov:1973ix}. In the superfield formalism, the spurion technique is a simple way to introduce the soft SUSY breaking terms into the MSSM Lagrangian. We introduce a dimensionless, SM-singlet spurion field of the form $Y=\theta^2 m_{\rm soft}$, where $m_{\rm soft}$ is a generic notation for the soft terms (denoted $m_{1,2,3}$, $m_{\Psi}$, $m_{\lambda_a}$ in the following), and attach it to the SUSY operators in the MSSM. The recipe to obtain the NL-MSSM is to replace the spurion by the goldstino superfield as \cite{Komargodski:2009rz}% \begin{eqnarray} Y\rightarrow {m_{\rm soft} \over f}X\,.
\label{eq:replace} \end{eqnarray} We apply this rule and write the NL-MSSM Lagrangian as follows \cite{Antoniadis:2010hs}: \begin{eqnarray} {\cal L}={\cal L}_0+{\cal L}_X+{\cal L}_H+{\cal L}_m+{\cal L}_{AB}+{\cal L}_g\,. \label{eq:lagNL} \end{eqnarray} On the right-hand side, the first term ${\cal L}_0$ denotes the supersymmetric part of the MSSM Lagrangian given by \footnote{For a concise review of the MSSM and the standard notation, see, for example, Ref.~\cite{Martin:1997ns}.} \begin{eqnarray} {\cal L}_0&=&\sum_{\Psi, H_u, H_d}\int d^4\theta~\Psi^\dagger e^{V} \Psi \nonumber \\ &&+\left\{\int d^2\theta~[\mu_H H_dH_u+\lambda_u H_uQU^c+\lambda_d QD^cH_d+\lambda_e LE^cH_d]+{\rm h.c.} \right\} \nonumber \\ &&+\sum_{\rm a=1}^3{1 \over 4 g_a^2\kappa}\int d^2\theta~{\rm Tr}[W_a^\alpha W_{a\alpha}]+{\rm h.c.}\,, \label{eq:SUSY_L} \end{eqnarray} where $\Psi=Q, U^c, D^c, L, E^c$, the index $a=1,2,3$ denotes the SM gauge groups $SU(3)$, $SU(2)$ and $U(1)$, $g_a$ are the corresponding gauge couplings, and $\kappa=1$ for $U(1)$ and $1/2$ for $SU(3)$ and $SU(2)$. The vector superfield $V$ in the K\"ahler potential for the chiral superfields stands, for example, for $V=2V_3+2V_2+ {1 \over 3} V_1$ in the case of $Q$, where $V_a$ $(a=1,2,3)$ denote the vector superfields of the corresponding SM gauge groups. ${\cal L}_X$ is the hidden sector Lagrangian already introduced in Eq.~(\ref{eq:Xlag}). ${\cal L}_H$ is the Higgs sector Lagrangian involving the goldstino superfield: \begin{eqnarray} {\cal L}_H=-{m_1^2 \over f^2}\int d^4\theta \left( X^\dagger X \right) H_d^\dagger e^{V}H_d -{m_2^2 \over f^2}\int d^4\theta \left(X^\dagger X \right) H_u^\dagger e^{V}H_u\,. \end{eqnarray} The matter field Lagrangian involving the goldstino superfield is given by \begin{eqnarray} {\cal L}_m= - \sum_\Psi \left(m_\Psi^2 \over f^2\right)\int d^4\theta \left(X^\dagger X \right) \Psi^\dagger e^V \Psi\,. \end{eqnarray} The bilinear and trilinear SUSY breaking couplings are given by ${\cal L}_{AB}$: \begin{eqnarray} {\cal L}_{AB}&=&{m_3^2 \over f}\int d^2\theta~XH_dH_u + {\rm h.c.} \nonumber \\ &&+\int d^2\theta \, X \left\{ \lambda_u \left( \frac{A_u}{f} \right) U^c H_u Q + \lambda_d \left( \frac{A_d}{f} \right) D^c H_d Q+ \lambda_e \left( \frac{A_e}{f} \right) E^c H_d L \right \} +{\rm h.c.} \end{eqnarray} The last term ${\cal L}_g$ denotes the gauge sector Lagrangian given by \begin{eqnarray} {\cal L}_g=\sum_{a=1}^3{1 \over 4 g_a^2\kappa}{2 m_{\lambda_a} \over f}\int d^2\theta~X{\rm Tr}[W^\alpha_a W_{a\alpha}]+{\rm h.c.} \end{eqnarray} We focus on the Higgs potential in the NL-MSSM, which is read off from ${\cal L}_0+{\cal L}_X+{\cal L}_H+{\cal L}_{AB}$: \begin{eqnarray} V=V_{\rm SUSY} + V_{\rm soft}\,, \label{eq:potMSSM} \end{eqnarray} where \begin{eqnarray} V_{\rm SUSY}&=& \mu_H^2(|H_u|^2+|H_d|^2) + {g_Z^2 \over 8}(|H_u|^2-|H_d|^2)^2+{g_2^2 \over 2}|H_u^\dagger H_d|^2\,, \\ V_{\rm soft}&=&{\left|f+{m_3^2 \over f}H_uH_d\right|^2 \over 1-{m_1^2 \over f^2}|H_d|^2-{m_2^2 \over f^2}|H_u|^2}\,, \label{eq:soft} \end{eqnarray} with $g_Z^2 \equiv g_1^2+g_2^2$. We express the up-type Higgs and down-type Higgs doublets as \begin{eqnarray} H_u&=& \begin{pmatrix} H^+ \\ {1\over \sqrt{2}}(v_u+R_u+i I_u) \end{pmatrix} \, , \label{eq:expand1} \\ H_d&=& \begin{pmatrix} {1\over \sqrt{2}}(v_d+R_d+i I_d) \\ H^- \\ \end{pmatrix}\,, \label{eq:expand2} \end{eqnarray} where $v_u=v\sin\beta$, $v_d=v\cos\beta$ with $v=246$ GeV, $H^{\pm}$ are charged Higgs fields, and $R_u, I_u, R_d, I_d$ are real scalar fields.
Substituting them into the Higgs potential, we derive the stationary conditions: \begin{eqnarray} {\partial V \over \partial R_u}{\Big |}_0&=&{v \over 4}\left\{ 4\mu_H^2\sin\beta-2M_Z^2 \cos2\beta\sin\beta -{2{\cal A} m_3^2\cos\beta \over {\cal B}}+{ {\cal A}^2 m_2^2\sin\beta \over {\cal B}^2} \right\}=0\,, \label{eq:st1} \\ {\partial V \over \partial R_d}{\Big |}_0&=&{v \over 4}\left\{ 4\mu_H^2\cos\beta+M_Z^2(\cos\beta+\cos 3\beta) -{2 {\cal A} m_3^2\sin\beta \over {\cal B}}+{{\cal A}^2 m_1^2\cos\beta \over {\cal B}^2} \right\}=0\,, \label{eq:st2} \end{eqnarray} % where ${|_0}$ means that all the fields are taken to be zero, and \begin{eqnarray} M_Z^2 &= & \frac{1}{4} g_Z^2 v^2 \, , \nonumber \\ {\cal A} & = & 4 f^2+m_3^2 v^2\sin 2\beta \, , \nonumber \\ {\cal B} &=& -2f^2+m_1^2 v^2 \cos^2\beta+m_2^2v^2\sin^2\beta \, . \end{eqnarray} The other stationary conditions such as ${\partial V \over \partial I_u}{\Big |}_0$ are automatically satisfied. The mass matrix of the CP-even Higgs bosons is given by \begin{eqnarray} {\cal M}_{\rm CP-even}=\begin{pmatrix} {\partial^2 V \over \partial R_d^2}{\big |}_0 & {\partial^2 V \over \partial R_d \partial R_u}{\big |}_0 \\ {\partial^2 V \over \partial R_u \partial R_d}{\big |}_0 & {\partial^2 V \over \partial R_u^2}{\big |}_0\, \end{pmatrix}\,, \label{eq:mass1} \end{eqnarray} while the mass matrices for the CP-odd Higgs bosons and the charged Higgs bosons are \begin{eqnarray} {\cal M}_{\rm CP-odd}=\begin{pmatrix} {\partial^2 V \over \partial I_d^2} {\big |}_0 & {\partial^2 V \over \partial I_d \partial I_u}{\big |}_0 \\ {\partial^2 V \over \partial I_u \partial I_d}{\big |}_0 & {\partial^2 V \over \partial I_u^2}{\big |}_0 \\ \end{pmatrix}\,,~~~~ {\cal M}_{\rm charged}=\begin{pmatrix} {\partial^2 V \over \partial H^- \partial H^{-*}}{\big |}_0 & {\partial^2 V \over \partial H^- \partial H^+}{\big |}_0 \\ {\partial^2 V \over \partial H^{-*} \partial H^{+*}}{\big |}_0 & {\partial^2 V \over \partial H^{+}\partial H^{+*}}{\big |}_0 \end{pmatrix}\,. \label{eq:mass2} \end{eqnarray} \begin{figure}[tb] \centering \begin{minipage}{0.48\hsize} \includegraphics[scale=0.6, angle=0]{SM-Higgs.pdf} \caption{ The SM-like Higgs boson mass ($m_h$) at the tree-level as a function of $\sqrt{f}$ (solid line), along with the standard MSSM prediction at the tree-level (dashed line) and the (green) horizontal line indicating $m_h=125$ GeV. In this plot, we have taken $m_1^2=1000^2$ GeV$^2$, $m_2^2=- (2005)^2$ GeV$^2$ and $\tan\beta=10$. } \label{fig:SM-Higgs} \end{minipage} \hspace{3mm} \begin{minipage}{0.48\hsize} \includegraphics[scale=0.6, angle=0]{Heavy-Higgs.pdf} \caption{ Same as Fig.~\ref{fig:SM-Higgs} but for the CP-even heavy Higgs boson mass ($m_H$) (solid line), the CP-odd Higgs boson mass ($m_A$) (dashed line) and the charged Higgs boson mass ($m_{H^\pm}$) (dotted line). } \vspace{4.5mm} \label{fig:Heavy-Higgs} \end{minipage} \end{figure} By using the above formulas, we numerically calculate the Higgs boson mass spectra. First, we choose appropriate values for $m_1$, $m_2$, $\tan \beta$ and $\sqrt{f}$ as the input parameters and solve the stationary conditions of Eqs.~(\ref{eq:st1}) and (\ref{eq:st2}) to fix the values of $\mu_H$ and $m_3^2$. We then substitute them into the Higgs potential and calculate the Higgs boson mass eigenvalues from Eqs.~(\ref{eq:mass1}) and (\ref{eq:mass2}). Our results are shown in Figs.~\ref{fig:SM-Higgs} and \ref{fig:Heavy-Higgs}.
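A minimal numerical sketch of this procedure is given below (Python with NumPy/SciPy). The potential is restricted to the neutral CP-even directions, the derivatives are taken by finite differences rather than from the closed-form expressions above, and the sign convention of the $H_u H_d$ contraction as well as the root-finder initial guess are illustrative assumptions; one should recover the qualitative behavior shown in the figures:
\begin{verbatim}
import numpy as np
from scipy.optimize import fsolve

# Illustrative inputs as in Fig. 1 (GeV units):
v, tanb = 246.0, 10.0
beta    = np.arctan(tanb)
m1sq, m2sq, f = 1000.0**2, -2005.0**2, 3990.0**2
gZsq    = 4.0 * 91.19**2 / v**2        # fixes g_Z from M_Z

def V(Ru, Rd, muHsq, m3sq):
    hu = (v*np.sin(beta) + Ru) / np.sqrt(2.0)
    hd = (v*np.cos(beta) + Rd) / np.sqrt(2.0)
    Vsusy = muHsq*(hu**2 + hd**2) + gZsq/8.0*(hu**2 - hd**2)**2
    Vsoft = (f + m3sq/f*hu*hd)**2 \
            / (1.0 - m1sq/f**2*hd**2 - m2sq/f**2*hu**2)
    return Vsusy + Vsoft

def stationarity(p, e=1e-2):           # Eqs. (st1)-(st2)
    return [(V(+e, 0, *p) - V(-e, 0, *p)) / (2*e),
            (V(0, +e, *p) - V(0, -e, *p)) / (2*e)]

muHsq, m3sq = fsolve(stationarity, x0=[1.0e6, -1.0e6])

# CP-even mass matrix = Hessian of V at the origin:
e  = 1.0
V0 = V(0, 0, muHsq, m3sq)
Huu = (V(+e,0,muHsq,m3sq) - 2*V0 + V(-e,0,muHsq,m3sq)) / e**2
Hdd = (V(0,+e,muHsq,m3sq) - 2*V0 + V(0,-e,muHsq,m3sq)) / e**2
Hud = (V(+e,+e,muHsq,m3sq) - V(+e,-e,muHsq,m3sq)
       - V(-e,+e,muHsq,m3sq) + V(-e,-e,muHsq,m3sq)) / (4*e**2)
mh, mH = np.sqrt(np.linalg.eigvalsh([[Hdd, Hud], [Hud, Huu]]))
print(f"m_h ~ {mh:.1f} GeV, m_H ~ {mH:.1f} GeV")
\end{verbatim}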
The solid line in Fig.~\ref{fig:SM-Higgs} shows the mass of the SM-like Higgs boson ($m_h$) as a function of $\sqrt{f}$, where we have fixed $m_1^2=1000^2$ GeV$^2$, $m_2^2=- (2005)^2$ GeV$^2$ and $\tan\beta=10$. As $\sqrt{f}$ decreases, the SM-like Higgs boson mass increases from the standard MSSM prediction at the tree level, $m_h \simeq M_Z\lvert\cos 2\beta\rvert$ (dashed line), obtained in the limit of $\sqrt{f} \to \infty$. The (green) horizontal line indicates $m_h=125$ GeV. We find that the main contribution for increasing the SM-like Higgs boson mass comes from the quartic coupling $(m_2^2 |H_u|^2/f)^2$ in the series expansion of Eq.~(\ref{eq:soft}), and the resultant Higgs boson mass is approximately expressed as \begin{eqnarray} m_h^2 \simeq M_Z^2\cos^2 2\beta+ 2 \left({m_2^2 \over f}\right)^2 v^2\sin^2\beta\,. \end{eqnarray} Therefore, if the SUSY breaking scale is low enough, the SM-like Higgs boson mass of 125 GeV is achieved by the Higgs potential at the tree level. As shown in Fig.~\ref{fig:SM-Higgs}, we have obtained $m_h=125$ GeV for $\sqrt{f}=3990$ GeV. If the SUSY breaking scale is larger, the hidden sector effect on the SM-like Higgs boson mass is negligible, so that quantum corrections through heavy scalar top quarks play the crucial role in reproducing $m_h=125$ GeV, as usual in the MSSM. Fig.~\ref{fig:Heavy-Higgs} shows the masses of the heavy neutral Higgs and the charged Higgs bosons as a function of $\sqrt{f}$ with the same inputs as in Fig.~\ref{fig:SM-Higgs}. The solid line depicts the mass of the heavy CP-even Higgs boson ($m_H$), while the dashed and dotted lines correspond to the CP-odd Higgs boson mass ($m_A$) and the charged Higgs boson mass ($m_{H^\pm}$), respectively. \section{Minimal extension with Higgs-portal dark matter} If the SUSY breaking scale is $\sqrt{f}\lesssim 100$ TeV, the gravitino mass is found to be $m_{3/2}\lesssim 10$ eV. Although such a light gravitino (goldstino) is harmless from the phenomenological point of view, it cannot be the dominant component of the DM in our universe, and therefore a suitable DM candidate should be supplemented to the NL-MSSM. In order to solve this problem, we propose a minimal extension of the NL-MSSM to incorporate a dark matter candidate, namely, the (scalar) Higgs-portal DM. The Higgs-portal DM scenario is one of the simplest extensions of the SM that provides a dark matter candidate. For a recent review, see Ref.~\cite{Arcadi:2019lka} and references therein. In the simplest setup, we introduce an SM-singlet real scalar ($S$) along with a $Z_2$ symmetry. The stability of this scalar is ensured by assigning odd parity to it, while all the SM fields are $Z_2$-even. At the renormalizable level, the Lagrangian is given by \begin{eqnarray} {\cal L}={\cal L}_{\rm SM}+{1 \over 2} (\partial_\mu S) (\partial^\mu S) -{1 \over 2}M_S^2S^2-{1 \over 4}\lambda_SS^4- \frac{1}{4}\lambda_{HSS} (H^\dagger H) S^2, \label{HP-int} \end{eqnarray} where ${\cal L}_{\rm SM}$ is the Lagrangian of the SM, and $H$ is the SM Higgs doublet field. After the electroweak symmetry breaking, the Lagrangian becomes \begin{eqnarray} {\cal L}={\cal L}_{\rm SM}+{1 \over 2} (\partial_\mu S) (\partial^\mu S) -{1 \over 2}m_{DM}^2 S^2-{1 \over 4}\lambda_SS^4- \frac{1}{4}\lambda_{HSS} \, v \, h \, S^2-{1 \over 8}\lambda_{HSS}h^2S^2, \end{eqnarray} where $h$ is the physical Higgs boson, and the DM mass $m_{DM}$ is given by \begin{eqnarray} m_{DM}^2=M_S^2+{1 \over 4}\lambda_{HSS}v^2.
\end{eqnarray} Here, the vacuum expectation value of the Higgs field is set to be $\langle H \rangle=(0,v)^T/\sqrt{2}$ with $v=246$ GeV. The DM phenomenology in this Higgs-portal DM scenario is controlled by only two free parameters: $m_{DM}$ and $\lambda_{HSS}$. \begin{figure}[tb] \centering \includegraphics[scale=0.2, angle=0]{cross.pdf} \caption{Feynman diagrams for dark matter annihilations.} \label{Fig:cross} \end{figure} The scalar DM $S$ annihilates into the SM particles through its coupling with the Higgs boson. The annihilation processes are shown in Fig.~\ref{Fig:cross}, where $W(Z)$ is the charged (neutral) weak gauge boson, and $f$ represents quarks and leptons in the SM. We evaluate the DM relic density by solving the Boltzmann equation \cite{Gondolo:1990dk}: \begin{eqnarray} \frac{d Y}{d x} = -\frac{s(m_{DM})}{H(m_{DM})} \, \frac{\langle\sigma v_{\rm rel} \rangle}{x^2} \, (Y^2-Y_{EQ}^2) , \label{eq:BE} \end{eqnarray} where the temperature of the universe is normalized by the DM mass as $x=m_{DM}/T$, $H(m_{DM})$ and $s(m_{DM})$ are the Hubble parameter and the entropy density of the universe at $T=m_{DM}$, respectively, $Y= n/s$ is the DM yield (the ratio of the DM number density ($n$) to the entropy density ($s$)), $Y_{EQ}$ is the yield of the DM particle in thermal equilibrium, and $\langle \sigma v_{\rm rel} \rangle$ is the thermal-averaged DM annihilation cross section times relative velocity ($v_{\rm rel}$). The formulas for the quantities in the Boltzmann equation are given as follows: \begin{eqnarray} s(T)= \frac{2 \pi^2}{45} g_\star T^3 , \; \; H(T) = \sqrt{\frac{\pi^2}{90} g_\star} \frac{T^2}{M_P}, \; \; n_{EQ}=s \, Y_{EQ}= \frac{g_{DM}}{2 \pi^2} \frac{m_{DM}^3}{x} K_2(x), \end{eqnarray} where $M_P=2.43 \times 10^{18}$ GeV is the reduced Planck mass, $g_{DM}=1$ is the number of degrees of freedom for the Higgs-portal DM, $g_\star$ is the effective number of total degrees of freedom for the particles in thermal equilibrium ($g_\star=106.75$ for the SM particles), and $K_2$ is the modified Bessel function of the second kind. The thermal-averaged annihilation cross section is calculated by \begin{eqnarray} \langle \sigma v_{\rm rel} \rangle = \frac{1}{n_{EQ}} \, g_{DM}^2 \, \frac{m_{DM}}{64 \pi^4 x} \int_{4 m_{DM}^2}^\infty ds \, 2 (s- 4 m_{DM}^2) \, \sigma(s) \, \sqrt{s} K_1 \left(\frac{x \sqrt{s}}{m_{DM}}\right) , \label{ThAvgSigma} \end{eqnarray} where $\sigma(s)$ is the DM pair annihilation cross section corresponding to the processes in Fig.~\ref{Fig:cross}, and $K_1$ is the modified Bessel function of the first kind. By using the asymptotic value of the yield $Y(\infty)$, the DM relic density $\Omega_{\rm DM}h^2$ is expressed by \begin{eqnarray} \Omega_{\rm DM}h^2={m_{\rm DM}s_0 Y(\infty) \over \rho_c/h^2}, \label{eq:CPDM} \end{eqnarray} where $s_0=2890~{\rm cm}^{-3}$ is the entropy density of the present universe, and $\rho_c/h^2=1.05\times 10^{-5}~{\rm GeV}~{\rm cm}^{-3}$ is the critical density. The resultant DM relic density is controlled by two free parameters ($m_{DM}$ and $\lambda_{HSS}$), and their relation is determined so as to reproduce the observed DM relic density of $ \Omega_{\rm DM}h^2=0.12$ \cite{Aghanim:2018eyx}. In addition to the DM relic density, the parameter space of $m_{DM}$ and $\lambda_{HSS}$ is constrained by the direct/indirect DM particle search results and by the Higgs-portal DM searches at the Large Hadron Collider (LHC). After all the constraints are taken into account, the allowed parameter region is identified.
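A minimal numerical sketch of this computation is given below (Python with SciPy). For brevity, a constant $\langle\sigma v_{\rm rel}\rangle$ replaces the full thermal average (\ref{ThAvgSigma}), and all numerical inputs are illustrative:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import kn

MP, gstar, gDM = 2.43e18, 106.75, 1.0   # GeV; SM dof; scalar DM dof
mDM  = 62.5                             # GeV, near M_h/2
sigv = 3.0e-9                           # GeV^-2, constant stand-in

s_m = 2*np.pi**2/45 * gstar * mDM**3    # s(m_DM)
H_m = np.sqrt(np.pi**2/90 * gstar) * mDM**2 / MP

def Yeq(x):                             # n_EQ(x) / s(T), with T = mDM/x
    return gDM/(2*np.pi**2) * mDM**3 * x**2 * kn(2, x) / s_m

def dYdx(x, Y):                         # Eq. (BE)
    return -(s_m/H_m) * sigv/x**2 * (Y**2 - Yeq(x)**2)

sol  = solve_ivp(dYdx, (1.0, 1000.0), [Yeq(1.0)],
                 method="Radau", rtol=1e-8, atol=1e-15)
Yinf = sol.y[0, -1]

s0, rhoc_h2 = 2890.0, 1.05e-5           # cm^-3; GeV cm^-3
print("Omega h^2 =", mDM * s0 * Yinf / rhoc_h2)   # Eq. (CPDM)
\end{verbatim}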
For the result, see, for example, Fig.~19 in Ref.~\cite{Arcadi:2019lka}. It has been found that the Higgs-portal DM scenario is phenomenologically viable, but the allowed parameter region is very limited: $m_{DM} \simeq M_h/2$ with the SM Higgs boson mass $M_h=125$ GeV and $10^{-4}\lesssim |\lambda_{HSS}| \lesssim 10^{-3}$. Now we introduce an SM-singlet chiral superfield $\Phi$ along with a $Z_2$ symmetry, assigning odd parity to it while all the MSSM fields are even. Hence, the lightest component field in $\Phi$ is stable and serves as the DM candidate. The SUSY Lagrangian ${\cal L}_0$ in Eq.~(\ref{eq:SUSY_L}) is then extended to be \begin{eqnarray} {\cal L}_0 \to {\cal L}_0 + \int d^4\theta \, \Phi^\dagger \Phi + \left\{ \int d^2 \theta \, \mu_\Phi \Phi^2 + {\rm h.c.} \right\} \,, \label{eq:SUSY_Phi} \end{eqnarray} where $\mu_\Phi$ is a mass parameter. Similar to ${\cal L}_H$ and ${\cal L}_m$, a new Lagrangian for $\Phi$ involving the goldstino chiral superfield is given by \begin{eqnarray} {\cal L}_\Phi =- {m_\Phi^2 \over f^2} \int d^4\theta \left( X^\dagger X \right) \Phi^\dagger \Phi \,, \label{eq:soft1_Phi} \end{eqnarray} where $m_\Phi$ denotes a soft SUSY breaking mass. Finally, ${\cal L}_{AB}$ is extended to be \begin{eqnarray} {\cal L}_{AB} \to {\cal L}_{AB} + \left\{- {B_\Phi \over 2f} \int d^2\theta \, X \Phi^2+{\rm h.c.} \right\} \, . \label{eq:soft2_Phi} \end{eqnarray} In the following, we assume that $B_\Phi$ is real and positive. We now read off the scalar potential relevant to the Higgs-portal DM scenario by eliminating the auxiliary fields: \begin{eqnarray} V=V_{\rm SUSY} + V_{\rm soft}\,, \end{eqnarray} where \begin{eqnarray} V_{\rm SUSY}&=& \mu_H^2(|H_u|^2+|H_d|^2)+\mu_\Phi^2|\Phi|^2 +{g_Z^2 \over 8}(|H_u|^2-|H_d|^2)^2+{g_2^2 \over 2}|H_u^\dagger H_d|^2 \,, \\ V_{\rm soft}&=&{\left|f+{m_3^2 \over f}H_uH_d- {B_\Phi \over 2f}\Phi^2\right|^2 \over 1-{m_1^2 \over f^2}|H_d|^2-{m_2^2 \over f^2}|H_u|^2-{m_{\Phi}^2 \over f^2}|\Phi|^2}\,. \end{eqnarray} Although the complete form of the scalar potential includes all the sfermions in the MSSM, we have kept only the potential terms involving the MSSM Higgs doublets and the SM-singlet scalar $\Phi$. This is because the sfermions should be heavy to satisfy the current LHC constraints, and their couplings with the Higgs-portal DM have little effect on the DM physics for $m_{DM} \simeq M_h/2$. For the physics of the Higgs-portal DM scenario, only the bilinear terms with respect to $\Phi$ are important. To extract them from the scalar potential, we expand $V_{\rm soft}$ to ${\cal O}(1/f^2)$ and then obtain \begin{eqnarray} V & \supset & \left[ \left( \mu_\Phi^2 + m_{\Phi}^2 \right) + \left\{\left({m_3^2 \over f^2}H_uH_d+ {\rm h.c.} \right)+2{m_1^2 \over f^2}|H_d|^2+2{m_2^2 \over f^2}|H_u|^2\right\}m_{\Phi}^2 \right]|\Phi|^2 \nonumber \\ &&-\left\{\left(1+{m_3^2 \over f^2}H_uH_d+{m_1^2 \over f^2}|H_d|^2+{m_2^2 \over f^2}|H_u|^2\right){B_\Phi \over 2}\Phi^2+{\rm h.c.}\right\} \,. \label{eq:int} \end{eqnarray} Substituting \begin{eqnarray} \Phi&=&{1 \over \sqrt{2}}(\phi+i \eta) \end{eqnarray} into Eq.~(\ref{eq:int}), we can find the mass spectrum of the real scalars, $\phi$ and $\eta$, and their couplings with the Higgs bosons.
First, we obtain the mass spectrum to be \begin{eqnarray} m_{\phi/\eta}^2&=& \mu_\Phi^2 + m_\Phi^2 + \left( {m_1^2 \over f} \cos^2\beta + {m_2^2 \over f} \sin^2\beta + {m_3^2 \over f} \sin\beta \cos\beta \right) {m_\Phi^2 \over f} v^2 \nonumber \\ && \mp \left\{ 1+ \left( {m_1^2 \over f} \cos^2\beta + {m_2^2 \over f} \sin^2\beta + {m_3^2 \over f} \sin\beta \cos\beta \right) {v^2 \over f} \right\} B_\Phi \nonumber \\ & \simeq & \mu_\Phi^2 + m_\Phi^2 \mp B_\Phi. \label{DM-Mass} \end{eqnarray} In the last expression, we have used $|m_{1,2,3}^2|, f \gg v^2$ and $m_\Phi^2 < f$ for theoretical consistency. We see that $m_\phi < m_\eta$ and thus the real scalar $\phi$ is the DM candidate. Since all the Higgs bosons except for the SM-like Higgs boson are heavy, the DM physics is mainly controlled by the coupling of $\phi$ with the SM-like Higgs boson. For a large $\tan \beta$ value, such as $\tan \beta=10$ as we have used in Figs.~\ref{fig:SM-Higgs} and \ref{fig:Heavy-Higgs}, the up-type Higgs doublet is approximately identified as the SM-like Higgs doublet. By employing this approximation $H_u \simeq H$, we can easily extract the coupling of $\phi$ with the SM-like Higgs doublet from Eq.~(\ref{eq:int}) such that \begin{eqnarray} {\cal L}_{int} \simeq - {m_2^2 \over f^2} \left( m_\Phi^2- \frac{B_\Phi}{2} \right) (H^\dagger H) \phi^2 \, . \end{eqnarray} This is the formula to be compared with Eq.~(\ref{HP-int}) with the identification of $S=\phi$. Therefore, in the decoupling limit of the heavy Higgs bosons and all the MSSM sparticles, we have obtained the Higgs-portal DM scenario as the low energy effective theory. In terms of our model parameters, the two parameters $m_{DM}=m_\phi$ and $\lambda_{HSS}$, which control the Higgs-portal DM physics, are approximately expressed as \begin{eqnarray} m_{DM}^2 &\simeq& \mu_\Phi^2+ m_\Phi^2 -B_\Phi \, , \nonumber \\ \lambda_{HSS} &\simeq& 4 \, {m_2^2 \over f^2} \left( m_\Phi^2- \frac{B_\Phi}{2} \right) \,. \label{eq:para} \end{eqnarray} As in Fig.~\ref{fig:SM-Higgs}, we may fix $m_2^2=- (2005)^2$ GeV$^2$ and $\sqrt{f} \geq 3990$ GeV so as to yield $m_h \leq 125$ GeV at the tree level. Even after this choice, we still have three free parameters, $\mu_\Phi$, $m_\Phi$ and $B_\Phi$, and we can arrange them to satisfy the phenomenological constraints, $m_{DM} \simeq M_h/2$ and $10^{-4} \lesssim |\lambda_{HSS}| \lesssim 10^{-3}$, for the Higgs-portal DM scenario.\footnote{ For a parameter choice to predict $m_h < 125$ GeV at the tree level, $M_h=125$ GeV should be reproduced by $M_h^2 =m_h^2 + \Delta m_h^2$ with $\Delta m_h^2$ from quantum corrections through scalar top quarks, as usual in the MSSM.} For example, we may set $\mu_\Phi^2 \simeq m_\Phi^2 \simeq B_\Phi/2 = {\cal O}(1 \, {\rm TeV}^2)$ but tune their differences so as to reproduce the allowed values of $m_{DM}^2 \ll 1 \, {\rm TeV}^2$ and $ |\lambda_{HSS}| \ll 1$. \section{Conclusion} If SUSY is broken at a low energy, the NL-MSSM with the goldstino chiral superfield is a very useful description for incorporating the hidden sector effects into the MSSM. The NL-MSSM may be particularly interesting if the SUSY breaking scale lies in the multi-TeV range. In this case, the SM-like Higgs boson mass $m_h=125$ GeV is achieved by the Higgs potential at the tree level after eliminating the $F$-component of the goldstino superfield. However, such a low scale SUSY breaking predicts a milli-eV gravitino LSP, which is too light to be the main component of the DM in our universe.
Thus, a suitable DM candidate is missing in the NL-MSSM. To solve this problem, we have proposed a minimal extension of the NL-MSSM by introducing the SM-singlet chiral superfield ($\Phi$) along with the $Z_2$ symmetry. The stability of the lightest component field in $\Phi$ is ensured by assigning odd parity to $\Phi$ while all the MSSM superfields are even. We have shown that in the decoupling limit of the sparticles and heavy Higgs bosons, our low energy effective theory is nothing but the Higgs-portal DM scenario with the lightest $Z_2$-odd real scalar being the DM candidate. With a suitable choice of the model parameters, we can reproduce the allowed parameter region of the Higgs-portal DM scenario. Here we comment on a general property of our model. Since the main point of this paper is to propose the minimal extension of the NL-MSSM to incorporate a suitable DM candidate, we have focused on a special parameter region, for which our model at low energies is reduced to the simplest Higgs-portal DM scenario (plus an extremely light gravitino), namely, the SM with the real scalar DM being odd under the $Z_2$ symmetry. In general, we have a wide variety of parameter choices to realize a viable dark matter scenario. For example, we may take a very small value of $B_\Phi$ in Eq.~(\ref{DM-Mass}) so that the mass splitting between $\phi$ and $\eta$ is negligibly small. In this case, we identify the complex scalar $\Phi$ with the DM particle. This is a complex scalar extension of the simplest Higgs-portal DM scenario with only one real scalar. This extension has been studied in \cite{McDonald:1993ex, Barger:2010yn, Gonderinger:2012rd, Chiang:2012qa, Coimbra:2013qq, Costa:2015llh, Wu:2016mbe}. Since the MSSM includes two Higgs doublets, our Higgs-portal DM scenario is basically a two-Higgs-doublet extension of the Higgs-portal DM scenario. The two-Higgs-doublet extension of the SM supplemented by the Higgs-portal DM has been studied in \cite{Bird:2006jd, He:2007tt, He:2008qm, Grzadkowski:2009iz, Aoki:2009pf, Li:2011ja, Cai:2011kb, He:2011gc, Bai:2012nv, He:2013suk, Greljo:2013wja, Wang:2014elb, Drozd:2014yla, Okada:2014usa, Campbell:2015fra, Drozd:2015gda}. In this model, the heavy Higgs bosons can play an important role in the DM physics, for example, through an enhancement of the DM pair annihilations by the heavy Higgs boson resonances. While the allowed parameter region of the simplest Higgs-portal DM scenario is very limited, a wide parameter space can be phenomenologically viable in the two-Higgs-doublet extension. Although we focused on the interaction between the DM particle and the Higgs boson, the full Lagrangian includes interactions between the DM particle and sfermions, which are also derived by integrating out the $F$-component of the goldstino superfield. If the DM particle is heavier than the sfermions, the DM pair annihilation processes through the interaction between the DM particle and sfermions become important in evaluating the DM relic density. We leave such a general analysis for future work. Finally, let us consider a crucial difference of our model from the standard neutralino dark matter scenario in the MSSM. Neutralino dark matter is an $R$-parity odd particle, while in our scenario the dark matter is an $R$-parity even particle. This fact leads to distinctive phenomenologies. For example, in collider phenomenology, a neutralino dark matter is produced through cascade decays of heavier sparticles due to $R$-parity conservation.
On the other hand, the Higgs-portal dark matter in our scenario is not produced by sparticle decays. A pair of Higgs-portal DM particles can be produced in Higgs boson decays. Since the gravitino is an $R$-parity odd particle, it is produced from cascade decays of sparticles. In our scenario, the gravitino is almost massless, and the missing energy distributions associated with gravitino production are very different from those associated with neutralino production. \subsection*{Acknowledgments} N.O. would like to thank the High Energy Theory Group in Yamagata University for the hospitality during his visit. This work is supported in part by the United States Department of Energy Grant No.~DE-SC0012447 (N.O.).
\section*{Acknowledgements} The authors are grateful to Randolf Sch{\"a}rfig for help with generating the experimental datasets and rendering the results, to Daniele Panozzo for help with isotropic remeshing, to Fernando De Goes, David Gu, and Maks Ovsjanikov for helpful discussion on their prior works, and to Klaus Glashov and Artiom Kovnatsky for their useful comments on an early version of the manuscript. This research was supported by ERC Starting Grant no. 307047 (COMET). \section*{Appendix: Derivatives of the energy} \fontsize{8}{9} In this section, we derive the gradient of the energy used in Section~\ref{sec:num}, considering a generic energy of the form $\mathcal{E}(\bb{\ell}) = \lVert \bb{H}\bb{Q}(\bb{\ell}) \bb{K}-\bb{J} \rVert_\mathrm{F}^2$, with $\bb{Q}(\bb{\ell})$ being either $\bb{A}(\bb{\ell})$ or $\bb{W}(\bb{\ell})$. By the chain rule, \[ \frac{\partial \mathcal{E}(\bb{\ell})}{\partial \bb{\ell}} = \frac{\partial \mathcal{E}(\bb{\ell})}{\partial \bb{Q}(\bb{\ell})} \frac{\partial \bb{Q}(\bb{\ell})}{\partial \bb{\ell}}, \] where \[ \frac{\partial \mathcal{E}(\bb{\ell})}{\partial \bb{Q}(\bb{\ell})} = 2 \bb{H}^\top (\bb{H}\bb{Q}(\bb{\ell})\bb{K}-\bb{J})\bb{K}^\top \] is an $n\times n$ matrix, which is row-stacked into a $1 \times n^2$ vector. The gradient $\frac{\partial \bb{Q}(\bb{\ell})}{\partial \bb{\ell}}$ can be represented as an $n^2 \times |E|$ matrix, which, when multiplied by $\frac{\partial \mathcal{E}(\bb{\ell})}{\partial \bb{Q}(\bb{\ell})}$, produces a vector of size $\lvert E \rvert$. For $\bb{Q} = \bb{A}$, a diagonal matrix of area elements~(\ref{eq:lap_area}), the computation of the gradient is based on the derivative of the area element~(\ref{eq:heron}), \[ \frac{\partial A_{ijk}}{\partial \ell_{i'j'}} = \begin{cases} \gamma(\ell_{ij},\ell_{jk},\ell_{ki}) & i'j' = ij \\ \gamma(\ell_{jk},\ell_{ij},\ell_{ki}) & i'j' = jk \\ \gamma(\ell_{ki},\ell_{ij},\ell_{jk}) & i'j' = ki \\ 0 & \text{else} \end{cases} \] where $\gamma$ is defined as \[ \begin{aligned} \gamma(x,y,z) = \frac{1}{4 A_{ijk}} \Big{[} & (s-x)(s-y)(s-z) + s(s-x)(s-y) \\ & + s(s-x)(s-z) - s(s-y)(s-z) \Big{]}. \end{aligned} \] Then, \[ \begin{aligned} \frac{\partial a_{ij} }{\partial \ell_{i'j'} } = \left\{ \begin{array}{cc} 0 & i\neq j \\ \frac{1}{3}\sum_{kl : ikl\in F} \frac{\partial A_{ikl}}{\partial \ell_{i'j'}} & i=j \end{array} \right. \end{aligned} \] for $i'j' \in E$. For $\bb{Q} = \bb{W}$, we have \[ \frac{\partial w_{ij} }{\partial \ell_{i'j'} } = \begin{cases} \frac{\ell_{ij}}{2} \left[ \frac{ \frac{\partial A_{ijk}}{\partial \ell_{i'j'}} \ell_{ij} - 2A_{ijk} }{A_{ijk}^2 } + \frac{ \frac{\partial A_{ijh}}{\partial \ell_{i'j'}} \ell_{ij} - 2 A_{ijh} }{A_{ijh}^2 } \right] & i'j' = ij \\ \ell_{ij} \frac{ \frac{\partial A_{ijk}}{\partial \ell_{i'j'}} \ell_{ij} - 2 A_{ijk} }{2A_{ijk}^2 } & i'j' \in \{ ik, jk \} \\ \ell_{ij} \frac{ \frac{\partial A_{ijh}}{\partial \ell_{i'j'}} \ell_{ij} - 2 A_{ijh} }{2A_{ijh}^2 } & i'j' \in \{ ih, jh \} \\ 0 & \text{else} \end{cases} \] for $i\neq j$, and \[ \frac{\partial w_{ii} }{\partial \ell_{i'j'} } = - \sum_{j \neq i} \frac{\partial w_{ij} }{\partial \ell_{i'j'} } \] for the diagonal elements, where $i'j' \in E$. \section{Background} \label{sec:backgr} \subsection{Basic definitions} Throughout the paper, we denote by $\bb{A} = (a_{ij})$, $\bb{a} = (a_i)$, and $a$ matrices, vectors, and scalars, respectively. $\|\bb{A}\|_\mathrm{F} = \sqrt{ \sum_{ij} \lvert a_{ij} \rvert^2}$ denotes the Frobenius norm of a matrix. We model a $3$D shape as a simply-connected smooth compact two-dimensional surface $X$ without boundary.
We denote by $T_x X$ the tangent space at point $x$ and define the Riemannian metric as the inner product $\langle \cdot, \cdot \rangle_{T_x X} \colon T_x X \times T_x X \to \mathbb{R}$ on the tangent space. We denote by $L^2(X)$ the space of square-integrable functions and by $H^1(X)$ the Sobolev space of weakly differentiable functions on $X$, respectively, and define the standard inner products \begin{align} \label{eq:inprod1} \langle f, g \rangle_{L^2(X)} &= \int_X f(x) g(x) da(x); \\ \langle f, g \rangle_{H^1(X)} &= \int_X \langle \nabla f(x) , \nabla g(x)\rangle_{T_x X} da(x) \label{eq:inprod} \end{align} on these spaces (here $da$ denotes the area element induced by the Riemannian metric). The \emph{Laplace-Beltrami operator} $\Delta f$ is defined through the \emph{Stokes formula}, \begin{equation} \label{eq:stokes} \langle \Delta f, g \rangle_{L^2(X)} = \langle f, g \rangle_{H^1(X)}, \end{equation} and is \emph{intrinsic}, i.e., expressible entirely in terms of the Riemannian metric. \begin{figure}[t!] \centering \begin{tikzpicture}[x=1cm,y=1cm] \coordinate[label=below:$\bb{x}_i$] (xi) at (0,0) {}; \coordinate[label=above:$\bb{x}_j$] (xj) at (0,2) {}; \coordinate[label=left:$\bb{x}_k$] (xk) at (-2,1) {}; \coordinate[label=right:$\bb{x}_h$] (xh) at (3,1) {}; \draw (xi)--(xj)--(xk)--cycle; \tkzLabelSegment[right=2pt](xi,xj){$\ell_{ij}$} \tkzLabelSegment[above left=2pt](xj,xk){$\ell_{jk}$} \tkzLabelSegment[below left=2pt](xk,xi){$\ell_{ik}$} \draw (xi)--(xj)--(xh)--cycle; \tkzLabelSegment[above right=2pt](xj,xh){$\ell_{jh}$} \tkzLabelSegment[below right=2pt](xh,xi){$\ell_{ih}$} \begin{scope} \path[clip] (xk)--(xi)--(xj); \fill[gray, opacity=0.25, draw=black] (xk) circle (4mm); \node at ($(xk)+(0:7mm)$) {$\alpha_{ij}$}; \end{scope} \begin{scope} \path[clip] (xh)--(xj)--(xi); \fill[gray, opacity=0.25, draw=black] (xh) circle (4mm); \node at ($(xh)+(0:-7mm)$) {$\beta_{ij}$}; \end{scope} \end{tikzpicture} \caption{Definitions used in the paper: edge $ij$ has length $\ell_{ij}$. Angles $\alpha_{ij}$ and $\beta_{ij}$ are opposite to edge $ij$. Triangle $ikj$ has area $A_{ijk}$. } \label{fig:cot_weights} \end{figure} \subsection{Discrete metrics and Laplacians} In the discrete setting, the surface $X$ is approximated by a manifold triangular mesh $(V,E,F)$ with vertices $V = \{1, \hdots, n\}$, in which each edge $ij \in E$ is shared by exactly two triangular faces ($ikj$ and $ihj \in F$; see Figure~\ref{fig:cot_weights} for this and the following definitions). A real function $f \colon X \to \mathbb{R}$ on the surface is sampled on the vertices of the mesh and can be identified with an $n$-dimensional vector $\bb{f} = (f_1, \hdots, f_n)^\top$. A discrete Riemannian metric is defined by assigning each edge $ij$ a \emph{length} $\ell_{ij} > 0$, satisfying the strong triangle inequality\footnote{ We require a strong version of the triangle inequality to avoid flat triangles. }, \begin{equation} \label{eq:triangineq} \begin{aligned} \ell_{ij} + \ell_{jk} - \ell_{ki} &> 0, \\ \ell_{jk} + \ell_{ki} - \ell_{ij} &> 0, \\ \ell_{ki} + \ell_{ij} - \ell_{jk} &> 0, \end{aligned} \end{equation} for all $ijk \in F$. We denote by $\bb{\ell} = (\ell_{ij \in E})$ the vector of edge lengths of size $\lvert E \rvert$, representing the discrete metric. 
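These definitions are straightforward to operationalize; the following minimal sketch (Python, with a toy one-triangle mesh as hypothetical input) extracts the discrete metric from an embedding and verifies the strong triangle inequality~(\ref{eq:triangineq}):
\begin{verbatim}
import numpy as np

def discrete_metric(X, E):
    """Edge lengths induced by an embedding X (n x 3) on edges E."""
    return {e: np.linalg.norm(X[e[0]] - X[e[1]]) for e in E}

def strong_triangle_inequality(l, F):
    """Check Eq. (triangineq) for every face ijk."""
    def L(i, j): return l[(i, j)] if (i, j) in l else l[(j, i)]
    return all(L(i,j) + L(j,k) > L(k,i) and
               L(j,k) + L(k,i) > L(i,j) and
               L(k,i) + L(i,j) > L(j,k) for i, j, k in F)

# Toy mesh: a single triangle embedded in R^3.
X = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
E = [(0, 1), (1, 2), (2, 0)]
F = [(0, 1, 2)]
print(strong_triangle_inequality(discrete_metric(X, E), F))  # True
\end{verbatim}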
The standard inner product on the space of functions on the mesh is discretized as $\langle \bb{f}, \bb{g} \rangle_{L^2(X)} = \bb{f}^\top \bb{A} \bb{g}$, where \begin{equation} \label{eq:lap_area} \begin{aligned} \bb{A} &= \mathrm{diag}(a_1, \hdots, a_n) \\ a_i &= \frac{1}{3} \sum_{jk: ijk \in F} A_{ijk} \end{aligned} \end{equation} is the local area element equal to one third of the sum of the areas of triangles sharing the vertex $i$, and $A_{ijk}$ denotes the area of triangle $ijk$. Using Heron's formula for triangle area, we can express \begin{equation} \label{eq:heron} \begin{aligned} A_{ijk} &= \sqrt{s(s-\ell_{ik})(s-\ell_{kj})(s-\ell_{ij})},\\ s &= (\ell_{ik} + \ell_{kj} +\ell_{ij})/2, \end{aligned} \end{equation} entirely in terms of the discrete metric. The discrete version of the Laplace-Beltrami operator is given as an $n\times n$ matrix $\bb{L} = \bb{A}^{-1}\bb{W}$, where $\bb{W}$ is a matrix of edge-wise weights (also referred to as {\em stiffness matrix}), satisfying $w_{ij} = 0$ if $ij \notin E$ and $w_{ii} = -\sum_{j\neq i}w_{ij}$. Such a matrix has a constant eigenvector with the corresponding null eigenvalue. In particular, we are interested in Laplacian operators that are intrinsic, i.e., expressible entirely in terms of the edge lengths $\bb{\ell}$, and consider weights given by \begin{equation} \label{eq:cotan_} w_{ij}(\bb{\ell}) = \frac{ -\ell_{ij}^2 + \ell_{jk}^2 + \ell_{ki}^2 }{8 A_{ijk}} + \frac{ -\ell_{ij}^2 + \ell_{jh}^2 + \ell_{hi}^2 }{8 A_{ijh}} \end{equation} for $ij \in E$. An \emph{embedding} is the geometric realization of the mesh $(V,E,F)$ in $\mathbb{R}^3$ specified by providing the three-dimensional coordinates $\bb{x}_i$ for each vertex $i \in V$ (we will hereinafter represent the embedding by an $n\times 3$ matrix $\bb{X}$). Such an embedding induces a metric \begin{equation} \bb{\ell}(\bb{X}) = (\| \bb{x}_i - \bb{x}_j \| : ij \in E). \end{equation} With this metric, it is easy to verify that formula~(\ref{eq:cotan_}) becomes the standard cotangent weight \cite{Pinkall1993,meyer2003:ddg} \begin{equation} \label{eq:cotan} w_{ij} = \begin{cases} (\cot \alpha _{ij} + \cot \beta _{ij})/2 & i\neq j; \\ -\sum_{k\neq i} w_{ik} & i = j, \end{cases} \end{equation} since $\cot \alpha_{ij} = (-\ell_{ij}^2 + \ell_{jk}^2 + \ell_{ki}^2 )/(4 A_{ijk})$ \cite{Jacobson2012}. Thus, an embedding $\bb{X}$ defines a discrete metric $\bb{\ell}(\bb{X})$, and consequently, a discrete Laplacian $\bb{W}(\bb{\ell}(\bb{X}))$. In the following, with slight abuse of notation, we will use $X$ to also refer to the triangular mesh approximating the underlying smooth surface, depending on the context. Also, we will use $X$ and $\bb{X}$ interchangeably when referring to a 3D shape. \subsection{Metric-from-Laplacian} Several recent works considered the reconstruction of shape intrinsic geometry from a Laplacian operator. Zeng et al. \cite{Zeng2012} showed that the cotangent Laplacian and the discrete Riemannian metric (unique up to a scaling) represented by edge lengths are mutually defined by each other, and that the set of all discrete metrics that can be defined on a triangular mesh is convex. 
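The forward map from a discrete metric to the pair $(\bb{A}, \bb{W})$, which the metric-from-Laplacian problem inverts, can be sketched in a few lines (Python; dense matrices are used for clarity, and a sparse implementation would follow the same structure):
\begin{verbatim}
import numpy as np

def heron(a, b, c):                       # Eq. (heron)
    s = 0.5 * (a + b + c)
    return np.sqrt(s * (s - a) * (s - b) * (s - c))

def intrinsic_laplacian(n, F, l):
    """Mass matrix A (lap_area) and stiffness matrix W (cotan_) built
    purely from edge lengths l[(i,j)]; no embedding is required."""
    W, a = np.zeros((n, n)), np.zeros(n)
    def L(i, j): return l[(i, j)] if (i, j) in l else l[(j, i)]
    for i, j, k in F:
        lij, ljk, lki = L(i, j), L(j, k), L(k, i)
        A = heron(lij, ljk, lki)
        for v in (i, j, k):
            a[v] += A / 3.0
        # one half-cotangent contribution per edge of this face:
        for (p, q, e0, e1, e2) in ((i, j, lij, ljk, lki),
                                   (j, k, ljk, lki, lij),
                                   (k, i, lki, lij, ljk)):
            w = (-e0**2 + e1**2 + e2**2) / (8.0 * A)
            W[p, q] += w
            W[q, p] += w
    W -= np.diag(W.sum(axis=1))           # w_ii = -sum_{j != i} w_ij
    return np.diag(a), W

# For an equilateral triangle, each off-diagonal weight is cot(60)/2:
A_mat, W_mat = intrinsic_laplacian(3, [(0, 1, 2)],
                                   {(0, 1): 1.0, (1, 2): 1.0, (2, 0): 1.0})
\end{verbatim}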
Zeng et al. further showed that it is possible to find a discrete metric $\bb{\ell}$ that realizes a given `reference' Laplacian defined through edge weights $\bb{\bar{W}} = (\bar{w}_{ij})$, by minimizing the convex energy given implicitly by \begin{equation}\label{eq:energy_variational} \mathcal{E}_{\mathrm{imp}}(\bb{\ell}) = \int_{\bb{\ell}_0}^{\bb{\ell}} \sum_{ij} (\bar{w}_{ij} - w_{ij}(\bb{\ell})) \, d\ell_{ij} \end{equation} where $w_{ij}(\bb{\ell})$ are the metric-dependent weights defined according to~(\ref{eq:cotan_}). De Goes et al. \cite{deGoes2014} derived a closed-form expression of (\ref{eq:energy_variational}), which turns out to be the classical \emph{conformal energy} \begin{equation}\label{eq:conformal_energy} \mathcal{E}_{\mathrm{conf}}(\bb\ell) = \frac{1}{2}\sum_{ij}(w_{ij}(\bb{\ell})-\bar{w}_{ij})\ell^2_{ij} . \end{equation} \subsection{Shape difference operators} Let us now consider two shapes $X$ and $Y$ related by a point-wise bijective map $t \colon Y \to X$. Ovsjanikov et al. \cite{ovsjanikov2012functional} showed that $t$ induces a linear \emph{functional map} $F \colon L^2(X) \to L^2(Y)$, by which a function $f$ on $X$ is translated into a function $F f = f \circ t$ on $Y$. Note that in general $F$ is not necessarily a point-wise map, in the sense that a delta-function on $X$ can be mapped to a `blob' on $Y$. Rustamov et al. \cite{Rustamov2013} showed that the difference between shapes $X$ and $Y$ can be represented in the form of a linear operator on $L^2(X)$ that describes how the respective inner products change under the functional map. Let $\langle \cdot, \cdot \rangle_{X}$ and $\langle \cdot, \cdot \rangle_{Y}$ denote some inner products on $L^2(X)$ and $L^2(Y)$, respectively. Then, the \emph{shape difference operator} is the unique linear operator $D_{X,Y} \colon L^2(X) \to L^2(X)$ satisfying \begin{equation} \label{eq:shapediff} \langle f, D_{X,Y}g \rangle_X = \langle F f ,F g \rangle_Y, \end{equation} for all $f, g\in L^2(X)$. The shape difference operator depends on the choice of the inner products $\langle \cdot, \cdot \rangle_{X}$ and $\langle \cdot, \cdot \rangle_{Y}$. Rustamov et al. \cite{Rustamov2013} considered the two inner products~(\ref{eq:inprod1}) and~(\ref{eq:inprod}). The former gives rise to the \emph{area-based} shape difference denoted by $V_{X,Y}$, while the latter results in the \emph{conformal} shape difference $R_{X,Y}$ (we refer the reader to \cite{Rustamov2013} for derivations and technical details). In the discrete setting, shapes $X$ and $Y$ are represented as triangular meshes with $n$ and $m$ vertices, respectively. The functional correspondence is represented by an $m \times n$ matrix $\bb{F}$, and the inner products are discretized as $\langle \bb{f}, \bb{g} \rangle_{X} = \bb{f}^\top \bb{H}_X \bb{g}$ and $\langle \bb{p}, \bb{q} \rangle_{Y} = \bb{p}^\top \bb{H}_Y \bb{q}$ (here $\bb{H}_X$ and $\bb{H}_Y$ are $n\times n$ and $m\times m$ positive-definite matrices, respectively). For the two aforementioned choices of inner products, the area-based difference operator is given by an $n\times n$ matrix \begin{equation} \bb{V}_{X,Y} = \bb{A}_X^{-1}\bb{F}^\top \bb{A}_Y\bb{F}, \end{equation} where $\bb{A}$ is defined as in (\ref{eq:lap_area}).
The conformal shape difference operator is \begin{equation} \bb{R}_{X,Y} = \bb{W}_X^{\dagger}\bb{F}^\top \bb{W}_Y\bb{F}, \end{equation} where $\bb{W}$ is the matrix of cotangent weights~(\ref{eq:cotan}) and $^\dagger$ denotes the Moore-Penrose pseudoinverse.\footnote{ Note that while the matrix $\bb{A}$ is invertible, $\bb{W}$ is rank-deficient (it has one zero eigenvalue) and thus is only pseudo-invertible. } We note that both operators $\bb{V}_{X,Y}$ and $\bb{R}_{X,Y}$ are intrinsic, since the matrices $\bb{A}$ and $\bb{W}$ are expressed only in terms of edge lengths. \section{Discussion and conclusions}\label{sec:disc} We presented a framework for reconstructing shapes from intrinsic operators, focusing on the problem of shape-from-difference operator, as this problem includes other important problems, such as shape-from-Laplacian and shape exaggeration, as its particular instances. In our experiments, we have encountered two important factors that affect the quality of the obtained results, and which can be considered limitations of our approach. \textbf{Sensitivity to mesh quality.} The definition of cotangent weights~(\ref{eq:cotan}) produces $w_{ij} < 0$ if $\alpha_{ij} + \beta_{ij} > \pi$, an issue known to be problematic in many applications (e.g. in harmonic parametrization and texture mapping, where it leads to triangle flips \cite{bobenko2007discrete}). In our case, negative weights have an adverse effect on the convergence of the MoF optimization, since the step size $\mu$ in Algorithm~\ref{algo:mfo} becomes very small. In Figure \ref{fig:remeshing}, we exemplify this behavior by showing the plot of the energy $\mathcal{E}_\mathrm{dif}$ as a function of the internal iteration (for simplicity, only MoF iterations are shown) in the shape-from-Laplacian problem. When the mesh is completely isotropic (all triangles are acute, Figure \ref{fig:remeshing}, left), convergence is very fast (red curve). Adding a few triangles with obtuse angles produces negative weights (marked in red in Figure \ref{fig:remeshing}, center) and slows down the convergence of the algorithm (green). Finally, when too many obtuse triangles are present (Figure \ref{fig:remeshing}, right), the algorithm fails to converge (blue). \textbf{Sensitivity to functional map quality.} Rustamov et al. \cite{Rustamov2013} remark that ``the quality of the information one gets from [shape difference operators] depends on the quality and density of the shape maps''. We also found that the result of shape synthesis with our approach largely depends on the accuracy of the functional maps between the shapes. To illustrate this sensitivity, we show in Figure \ref{fig:funcmaps} the result of shape-from-Laplacian reconstruction using as shapes $A$ and $B$ the faces from Figure \ref{fig:shapes_from_Laplacian} (second row) and varying the quality of the functional map between them. Functional maps were approximated in the bases of the first $K$ Laplace-Beltrami eigenfunctions according to \cite{ovsjanikov2012functional}. Larger values of $K$ result in a better expansion and consequently in better maps (Figure \ref{fig:funcmaps}, top). Figure \ref{fig:funcmaps} (bottom) shows the shape reconstruction result. The output quality is good as long as the functional maps are accurate, and deteriorates only when the map becomes very rough.
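This sensitivity is easy to see structurally: both discrete difference operators are assembled directly from $\bb{F}$ and the mass/stiffness matrices, so any inaccuracy in $\bb{F}$ enters the energy quadratically. A minimal sketch of the assembly (Python, dense matrices) is:
\begin{verbatim}
import numpy as np

def shape_differences(A_X, W_X, A_Y, W_Y, F):
    """Discrete area-based and conformal shape difference operators
    for an m x n functional map F from L^2(X) to L^2(Y)."""
    V = np.linalg.solve(A_X, F.T @ A_Y @ F)       # A_X^{-1} F^T A_Y F
    R = np.linalg.pinv(W_X) @ (F.T @ W_Y @ F)     # W_X^dag F^T W_Y F
    return V, R

# Sanity check: mapping a shape to itself by the identity map gives
# V = I, and R equal to the identity on the complement of the
# constant functions (the null space of W).
\end{verbatim}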
\input{./sections/fig_caricaturization.tex} \input{./sections/fig_shapes_from_Laplacian.tex} \input{./sections/fig_convergence_plot.tex} \input{./sections/fig_funcmaps.tex} \section{Introduction} Shape reconstruction problems, colloquially known as `Shape-from-X', have been a topic of intensive research in computer vision, graphics, and geometry processing for several decades. Classical examples of `X' include motion \cite{Poelman1997,Kanatani1985,Snavely2006}, shading \cite{IkeuchiHorn1981,Valgaerts2012,Yu2013}, photometric stereo \cite{Woodham1989}, as well as more exotic examples such as texture \cite{ikeuchi1984,Rosenholtz1997,Forsyth01}, contour \cite{Witkin1980,brady1984extremum} and sketches \cite{Karpenko2006}. A recent line of works by Maks Ovsjanikov and co-authors has brought operator-based approaches to geometric processing and analysis problems such as correspondence \cite{ovsjanikov2012functional}, signal processing on manifolds \cite{azencot2013operator}, and quantifying differences between shapes \cite{Rustamov2013}. In the latter paper, shape differences are modeled by an intrinsic linear operator, which allows one to tell in a convenient way not only how different two shapes are, but also where and in which way they are different. A particularly appealing use of shape difference operators is to describe {\em shape analogies}, i.e. to tell how similar the difference between shapes $A$ and $B$ is to the difference between $C$ and $D$, even if $A$ and $C$ themselves are very different (for example, a sphere and a cylinder in Figure~\ref{fig:teaser}). \footnote{Roughly speaking, the shape difference operator defines the notion of `$B-A$' for shapes. The shapes are analogous if `$B-A = D-C$'. The problem of shape analogy synthesis is how to define `$D = C + (B-A)$'. } However, while the authors show convincingly in their work how to use difference operators to {\em describe} analogies between given shapes, the challenging question of how to {\em generate} such analogies (i.e., given $A, B$ and $C$, synthesize $D$) remains unanswered. \textbf{Main contributions.} In this paper, we study the problem of shape reconstruction from intrinsic differential operators, such as Laplacians or the aforementioned shape difference operators. By `shape reconstruction' we mean finding an embedding of the shape in 3D space that induces a Riemannian metric, which, in turn, induces intrinsic operators with the desired properties. Particularly interesting instances of our shape-from-operator problem (SfO) include the synthesis of shape analogies, shape-from-Laplacian reconstruction, and shape exaggeration. Numerically, we approach the SfO problem by splitting it into two optimization sub-problems: metric-from-operator (reconstruction of the Riemannian metric from the intrinsic operator, which, in the case of shapes discretized as triangular meshes, is represented by edge lengths) and embedding-from-metric (finding a shape embedding that realizes a given metric). These sub-problems are solved in an alternating way, producing the desired shape (see example in Figure~\ref{fig:teaser}). The rest of the paper is organized as follows. In Section~\ref{sec:related}, we overview some of the related works. Section~\ref{sec:backgr} introduces the notation and mathematical setting of our problem. In Section~\ref{sec:prob} we formulate the shape-from-operator problem, and consider two of its particular settings: shape-from-Laplacian and shape-from-difference operator.
We also discuss a numerical optimization scheme for solving this problem. Section~\ref{sec:res} provides experimental validation of the proposed approach. We show examples of shape reconstruction from Laplacian, shape analogy synthesis, and shape caricaturization. Limitations and failure cases are discussed in Section~\ref{sec:disc}, which concludes the paper. \section{Related work} \label{sec:related} As already noted, shape-from-X problems have been of interest in various communities for a long time, and our problem can be regarded as another animal in this zoo. Recently, a few works have appeared questioning what structures can be recovered from the Laplacian. A well-known fact in differential geometry is that the Laplace-Beltrami operator is fully determined by the Riemannian metric, and, conversely, the metric is determined by the Laplace-Beltrami operator (or a heat kernel constructed from it) \cite{rosenberg1997laplacian}. In the discrete setting, the lengths of the edges of a triangular mesh play the role of the metric and fully determine intrinsic discrete Laplacians, e.g. cotangent weights \cite{Pinkall1993,meyer2003:ddg}. Zeng et al. \cite{Zeng2012} showed that the converse also holds for discrete metrics, and formulated the problem of discrete metric reconstruction from the Laplacian. It was shown later by \cite{deGoes2014} that this problem boils down to minimizing the conformal energy. At the other end, we have problems generally referred to as multidimensional scaling (MDS) \cite{kruskal1964multidimensional,borg2005modern,aflalo2013spectral}, consisting of finding a configuration of points in the Euclidean space that realizes, as isometrically as possible, some given distance structure. In our terminology, MDS problems can be regarded as problems of shape-from-metric reconstruction. In a sense, our SfO problem is a marriage between these two problems. Several applications we discuss in relation to our problem have been considered from other perspectives. Methods for shape deformation and pose transfer have been proposed by \cite{Sumner2004,sorkine2004laplacian,Rong2008}. Analysis and transfer of shape style have been presented by \cite{Welnicka2011,Ma2014,alhashim_sig14}. Finally, shape exaggeration and caricaturization have been studied in several recent works (see e.g., \cite{Lewiner2011,Clarke2011}). \section{Shape-from-Operator} \label{sec:prob} Rustamov et al. \cite{Rustamov2013} employed the shape difference operator framework to study shape analogies. Let $A$ and $B$ be shapes related by a functional map $\bb{F}$, giving rise to the shape difference operator $\bb{D}_{A,B}$ (area-based, conformal, or both), and let $C$ be another shape related to $A$ by a functional map $\bb{G}$. Then, one would like to know: what shape $X$ would make the difference $\bb{D}_{C,X}$ equal to $\bb{D}_{A,B}$ under the functional map $\bb{G}$? In other words, one wants to find an {\em analogy} of the difference between $A$ and $B$ (see Figure~\ref{fig:teaser}). To find such analogies, Rustamov et al. \cite{Rustamov2013} considered a finite collection of shapes $X_1, \hdots, X_K$ and picked the shape minimizing the energy \begin{multline} \label{eq:analogy} X^* = \mathop{\mathrm{argmin}}_{X \in \{X_1, \hdots, X_K\} } \| \bb{V}_{C,X}\bb{G} - \bb{G}\bb{V}_{A,B} \|_{\mathrm{F}}^2 \\ + \| \bb{R}_{C,X}\bb{G} - \bb{G}\bb{R}_{A,B} \|_{\mathrm{F}}^2.
\end{multline} The important question of how to {\em generate} $X$ from the given difference operator (rather than browsing through a collection of shapes) was left open. \subsection{Problem formulation} This question, together with the works of \cite{Zeng2012,deGoes2014}, is the main inspiration for our present work. More broadly, we consider the following problem, which we call {\em shape-from-operator} (SfO): find an embedding $\bb{X}$ of the shape such that the discrete metric $\bb{\ell}(\bb{X})$ it induces makes an intrinsic operator $\bb{Q}(\bb{\ell}(\bb{X}))$ satisfy some property or a set of properties (for example, one may wish to make $\bb{Q}(\bb{\ell}(\bb{X}))$ as similar as possible to some given reference operator $\bar{\bb{Q}}$). In this paper, we consider a class of SfO problems of the form \begin{multline} \label{eq:sfo} \bb{X}^* = \mathop{\mathrm{argmin}}_{\bb{X} \in \mathbb{R}^{n\times 3} } \lambda\| \bb{H}_1 \bb{A}(\bb{\ell}(\bb{X})) \bb{K}_1 - \bb{J}_1 \|_{\mathrm{F}}^2 \\ + (1- \lambda)\| \bb{H}_2\bb{W}(\bb{\ell}(\bb{X})) \bb{K}_2 - \bb{J}_2 \|_{\mathrm{F}}^2, \end{multline} where $0 \leq \lambda \leq 1$ and $\bb{H}_i$, $\bb{K}_i$, and $\bb{J}_i$ are some given matrices of dimensions $m\times n$, $n\times l$, and $m \times l$, respectively. We denote the energy minimized in~(\ref{eq:sfo}) by $\mathcal{E}(\bb{X})$. \textbf{Shape-from-Laplacian} is a particular setting of the SfO problem, wherein one is given a pair of meshes $A, B$ related by the functional map $\bb{F}$. The embedding of $B$ is not given; instead, we are given its cotangent weight matrix $\bb{W}_B$. The goal is to find an embedding $\bb{X}$ (which can be regarded as a deformation of shape $A$) that induces a Laplacian $\bb{W}(\bb{\ell}(\bb{X}))$ as close as possible to $\bb{W}_B$ under the functional map $\bb{F}$, by minimizing \begin{equation} \mathcal{E}_{\mathrm{lap}}(\bb{X}) = \| \bb{W}(\bb{\ell}(\bb{X}))\bb{F} - \bb{W}_B \bb{F}\|_{\mathrm{F}}^2 \label{eq:sflap} \end{equation} It is easy to see that problem~(\ref{eq:sflap}) is a particular case of~(\ref{eq:sfo}) with $\lambda = 0$, $\bb{H}_2 = \bb{I}$, $\bb{K}_2 = \bb{F}$, and $\bb{J}_2 = \bb{W}_{B}\bb{F}$. Note that our shape-from-Laplacian problem differs from the metric-from-Laplacian problems considered by \cite{Zeng2012} and \cite{deGoes2014} in two ways: first, we additionally look for an embedding that realizes the discrete metric; second, we allow for an arbitrary (not necessarily bijective) correspondence between $A$ and $B$. \textbf{Shape-from-difference operator} is the problem of synthesizing the shape analogy~(\ref{eq:analogy}), where we use the embedding of the given shape $C$ as an initialization and try to deform it to obtain the desired shape $X$. Importantly, this means that the two meshes $C$ and $X$ are compatible and the functional correspondence between them is the identity.
We thus have simpler expressions, $\bb{V}_{C,X}(\bb{X}) = \bb{A}_{C}^{-1}\bb{A}(\bb{\ell}(\bb{X}))$ and $\bb{R}_{C,X} = \bb{W}_{C}^{\dagger}\bb{W}(\bb{\ell}(\bb{X}))$, leading to the energy \begin{multline} \label{eq:analogy_} \mathcal{E}_{\mathrm{dif}}(\bb{X}) = \lambda\| \bb{A}_{C}^{-1}\bb{A}(\bb{\ell}(\bb{X})) \bb{G} - \bb{G}\bb{V}_{A,B} \|_{\mathrm{F}}^2 \\ + (1- \lambda)\| \bb{W}_{C}^\dagger\bb{W}(\bb{\ell}(\bb{X})) \bb{G} - \bb{G}\bb{R}_{A,B} \|_{\mathrm{F}}^2 \end{multline} which is also a particular case of~(\ref{eq:sfo}) with $\bb{H}_1 = \bb{A}_C^{-1}$, $\bb{H}_2 = \bb{W}_C^{\dagger}$, $\bb{K}_1 = \bb{K}_2 = \bb{G}$, $\bb{J}_1 = \bb{G}\bb{V}_{A,B}$, and $\bb{J}_2 = \bb{G}\bb{R}_{A,B}$. \subsection{Numerical optimization } \label{sec:num} In the SfO problem~(\ref{eq:sfo}), we have a two-level dependence ($\bb{W}$ or $\bb{A}$ depends on $\bb{\ell}$, which in turn depends on the embedding $\bb{X}$), making direct optimization w.r.t. the embedding coordinates $\bb{X}$ extremely hard. Instead, we split the problem into two stages: first, optimize $\mathcal{E}$ w.r.t. the discrete metric $\bb{\ell}$, and then recover the embedding $\bb{X}$ from the metric $\bb{\ell}$. \textbf{Embedding-from-metric} is a special setting of the {\em multidimensional scaling} (MDS) problem \cite{kruskal1964multidimensional,borg2005modern}: given a metric $\bb{\ell}$, find its Euclidean realization by minimizing the {\em stress} \begin{equation} \begin{aligned} \label{eq:mds} \bb{X}^* &= \mathop{\mathrm{argmin}}_{\bb{X}\in \mathbb{R}^{n\times 3}} \sum_{ij \in E} (\| \bb{x}_{i} - \bb{x}_j \| - \ell_{ij})^2 \\ &= \mathop{\mathrm{argmin}}_{\bb{X}\in \mathbb{R}^{n\times 3}} \sum_{i>j} v_{ij} (\| \bb{x}_{i} - \bb{x}_j \| - \ell_{ij})^2, \end{aligned} \end{equation} where $v_{ij} = 1$ if $ij \in E$ and zero otherwise. A classical approach for solving~(\ref{eq:mds}) is the iterative SMACOF Algorithm~\ref{algo:smacof} \cite{de1977applications}, based on the fixed-point iteration of the form \begin{equation} \bb{X} \leftarrow \bb{Z}^\dagger \bb{B}(\bb{X})\bb{X}, \label{eq:smacof} \end{equation} where $\bb{Z} = (z_{ij})$ and $\bb{B}(\bb{X}) = (b_{ij})$ are the $n\times n$ matrices with entries \begin{equation} z_{ij} = \begin{cases} -v_{ij} & i\neq j \\ \sum_{k\neq i} v_{ik} & i=j \end{cases} \end{equation} \begin{equation} b_{ij} = \begin{cases} -\frac{v_{ij}\ell_{ij}}{\|\bb{x}_i - \bb{x}_j\|} & i\neq j \,\,\, \text{and} \,\,\, \bb{x}_i \neq \bb{x}_j \\ 0 & i\neq j \,\,\, \text{and} \,\,\, \bb{x}_i = \bb{x}_j \\ -\sum_{k\neq i} b_{ik} & i=j \end{cases} \end{equation} (the matrix $\bb{Z}^\dagger$ depends only on the mesh connectivity and is pre-computed). It can be shown \cite{bronstein2006multigrid} that iteration~(\ref{eq:smacof}) is equivalent to steepest descent with a constant step size, and is guaranteed to produce a non-increasing sequence of stress values \cite{borg2005modern}. The complexity of a SMACOF iteration is $\mathcal{O}(n^2)$. \textbf{Metric-from-operator (MfO)} is the problem \begin{equation} \label{eq:metric_rec} \bb{\ell} = \mathop{\mathrm{argmin}}_{\bb{\ell}\in \mathbb{R}^{|E|}} \mathcal{E}(\bb{\ell}) \,\,\, \text{s.t.} \,\,\, (\ref{eq:triangineq}), \end{equation} where we have to restrict the search to all the valid discrete metrics\footnote{If the triangle inequality is violated, the energy $\mathcal{E}$ is not well-defined, as Heron's formula would produce imaginary triangle area values.} satisfying the triangle inequality~(\ref{eq:triangineq}).
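For concreteness, the two numerical building blocks just introduced can be sketched in a few lines of NumPy (a sketch of ours under simplified dense-matrix assumptions, not the authors' MATLAB code): the SMACOF update~(\ref{eq:smacof}) and the metric validity check on which the safeguarded step of Algorithm~\ref{algo:mfo} below relies.
\begin{verbatim}
import numpy as np

def smacof_step(X, L, V, Z_pinv):
    # One SMACOF update X <- Z^+ B(X) X.
    # X: (n,3) embedding; L: (n,n) target lengths l_ij;
    # V: (n,n) 0/1 edge indicator; Z_pinv: precomputed Z^+.
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    B = np.zeros_like(D)
    mask = (V > 0) & (D > 0)
    B[mask] = -L[mask] / D[mask]          # b_ij = -v_ij l_ij / ||x_i - x_j||
    np.fill_diagonal(B, 0.0)
    np.fill_diagonal(B, -B.sum(axis=1))   # b_ii = -sum_{j != i} b_ij
    return Z_pinv @ (B @ X)

def metric_is_valid(ell, triangles):
    # Triangle inequality check for a discrete metric given as an
    # (n,n) symmetric array of edge lengths and (T,3) triangle indices.
    i, j, k = triangles[:, 0], triangles[:, 1], triangles[:, 2]
    a, b, c = ell[i, j], ell[j, k], ell[k, i]
    return bool(np.all((a + b > c) & (b + c > a) & (c + a > b)))
\end{verbatim}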
Metric-from-Laplacian restoration of \cite{Zeng2012} and \cite{deGoes2014} can be seen as particular settings of this problem. Optimization~(\ref{eq:metric_rec}) can be carried out using a standard gradient descent-type algorithm. The gradient of the energy $\mathcal{E}(\bb{\ell})$ is given in the Appendix; the overall complexity of the computation of the energy and its gradient is $\mathcal{O}(n^3)$. The main complication in solving~(\ref{eq:metric_rec}) is guaranteeing that the metric $\bb{\ell}$ remains valid throughout all the optimization iterations. At the same time, we note that all metrics $\bb{\ell}(\bb{X})$ arising from Euclidean embeddings $\bb{X}$ satisfy the triangle inequality by definition. This brings us to adopting the following optimization scheme: \textbf{Alternating optimization. } We perform optimization w.r.t. the metric~(\ref{eq:metric_rec}) and the embedding~(\ref{eq:mds}) in an alternating way. Initializing $\bb{X}$ with the embedding coordinates $\bb{C}$ of the shape $C$, we compute the metric $\bb{\ell}(\bb{X})$. We make $N_\mathrm{MfO}$ steps of the safeguarded optimization Algorithm~\ref{algo:mfo} to improve the metric. Then, we compute the embedding of the improved metric by performing $N_\mathrm{MDS}$ steps of the SMACOF Algorithm~\ref{algo:smacof}, starting with the current embedding. These steps are repeated for $N$ outer iterations, as outlined in Algorithm~\ref{algo:alt}. A convergence example of such a scheme in the shape-from-difference operator problem is shown in Figure~\ref{fig:teaser} (the green curve shows the values of the stress at each internal SMACOF iteration; the red curve shows the values of the energy $\mathcal{E}_\mathrm{dif}$ at each internal iteration of Algorithm~\ref{algo:mfo}). We should note that our optimization problem~(\ref{eq:analogy_}) is non-convex, and thus the described optimization method does not guarantee global convergence. However, we typically have a good initialization ($\bb{X}$ is initialized by the embedding $\bb{C}$), which in practice leads to good convergence. \begin{algorithm}[!htb] \caption{Safeguarded optimization for metric-from-operator recovery (internal iterations of the alternating minimization Algorithm~\ref{algo:alt}). }\label{algo:mfo} \begin{algorithmic}[0] \State \textbf{Inputs:} initial valid metric $\bb{\ell}_0$, initial step size $\mu_0$ \State \textbf{Output:} improved valid metric $\bb{\ell}$ \vspace{0.1cm} \end{algorithmic} \begin{algorithmic}[1] \State Initialize $\bb{\ell} \leftarrow \bb{\ell}_0$ \State \textbf{for} $k = 1, \hdots, N_\mathrm{MfO}$ \textbf{do} \State \hspace{0.45cm} Set step size $\mu \leftarrow \mu_0$ \State \hspace{0.45cm} \textbf{while} $\bb{\ell} - \mu \nabla \mathcal{E}(\bb{\ell})$ invalid \textbf{or} $\mathcal{E}( \bb{\ell} ) < \mathcal{E}( \bb{\ell} - \mu \nabla \mathcal{E}(\bb{\ell}) )$ \textbf{do} \State \hspace{1.35cm} $\mu \leftarrow \mu/2$ \State \hspace{0.45cm} \textbf{end while} \State \hspace{0.45cm}$\bb{\ell} \leftarrow \bb{\ell} - \mu \nabla \mathcal{E}(\bb{\ell})$ \State \textbf{end for} \vspace{0.1cm} \end{algorithmic} \end{algorithm} \begin{algorithm}[!htb] \caption{SMACOF algorithm for embedding-from-metric recovery (internal iterations of the alternating minimization Algorithm~\ref{algo:alt}).
}\label{algo:smacof} \begin{algorithmic}[0] \State \textbf{Inputs:} metric $\bb{\ell}$, initial embedding $\bb{X}_0$ \State \textbf{Output:} embedding $\bb{X}$ \vspace{0.1cm} \end{algorithmic} \begin{algorithmic}[1] \State Initialize $\bb{X} \leftarrow \bb{X}_0$ \State \textbf{for} $k = 1, \hdots, N_\mathrm{MDS}$ \textbf{do} \State \hspace{0.45cm} $\bb{X} \leftarrow \bb{Z}^\dagger \bb{B}(\bb{X})\bb{X}$ \State \textbf{end for} \vspace{0.1cm} \end{algorithmic} \end{algorithm} \begin{algorithm}[!htb] \caption{Alternating minimization scheme for shape-from-operator synthesis. }\label{algo:alt} \begin{algorithmic}[0] \State \textbf{Inputs:} shape difference operators $\bb{V}_{A,B}$, $\bb{R}_{A,B}$, functional maps $\bb{F}$, $\bb{G}$, embedding $\bb{C}$ of shape $C$. \State \textbf{Output:} embedding $\bb{X}$ \vspace{0.1cm} \end{algorithmic} \begin{algorithmic}[1] \State Initialize the embedding $\bb{X} \leftarrow \bb{C}$ \State \textbf{for} $i = 1, \hdots, N$ \textbf{do} \State \hspace{0.45cm} Compute metric $\bb{\ell}(\bb{X})$ from the embedding $\bb{X}$ \State \hspace{0.45cm} Improve metric $\bb{\ell}$ using Algorithm~\ref{algo:mfo} \State \hspace{0.45cm} Compute embedding $\bb{X}$ from metric $\bb{\ell}$ using Algorithm~\ref{algo:smacof} \vspace{0.1cm} \State \textbf{end for} \end{algorithmic} \end{algorithm} \section{Results and applications}\label{sec:res} \input{./sections/fig_shape_analogies_results.tex} In this section, we show the applications of our approach to shape synthesis from intrinsic differential operators, considering the shape-from-Laplacian and shape-from-difference operator problems described in Section~\ref{sec:prob}. As noted, both problems are part of the same framework, so we use the general shape-from-difference operator problem in various settings. We used shapes from the TOSCA \cite{tosca} and AIM@SHAPE \cite{aimatshape} datasets, as well as from Gabriel Peyr{\'e}'s graph toolbox \cite{Peyre_toolbox}. All the shapes were downsampled and isotropically remeshed to $1$K--$3.5$K vertices. Shape deformations were created using Blender. The optimization scheme of Section~\ref{sec:num} was implemented in MATLAB and executed on a MacPro machine with 2.6GHz CPU and 16GB RAM. Typical timing for a mesh with $3.5$K vertices was 3.43 sec for a SMACOF iteration and 2.48 sec for an MfO iteration. In most of our experiments, we used the values $\lambda = 0.5$, $N_\mathrm{MfO}=1$--$10$, and $N_\mathrm{MDS}=10$. \textbf{Shape-from-difference operator } amounts to computing an unknown shape $X$ from a shape $C$, given a difference operator between the analogous shapes $A$ and $B$, as described in Section~\ref{sec:prob}. Figure \ref{fig:analogies1} presents some shape analogies synthesized using our algorithm. Figures \ref{fig:analogies2} and \ref{fig:teaser} additionally show the intermediate steps of the optimization, where, for visualization purposes, we plot the vertex-wise contribution to the energy $\mathcal{E}_\mathrm{dif}(\bb{X})$, \begin{equation} \epsilon_{i}(\bb{X}) = \lambda \sum_{j} \left( \lvert e_{ij} \rvert + \lvert e_{ji} \rvert \right) + (1 - \lambda) \sum_{j} \left( \lvert e'_{ij} \rvert + \lvert e'_{ji} \rvert \right), \end{equation} where $\bb{E} = (e_{ij}) = \bb{A}_{C}^{-1}\bb{A}(\bb{\ell}(\bb{X})) \bb{G} - \bb{G} \bb{V}_{A,B} $ and $\bb{E}' = (e'_{ij}) = \bb{W}_{C}^{\dagger}\bb{W}(\bb{\ell}(\bb{X})) \bb{G} - \bb{G} \bb{R}_{A,B} $.
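These per-vertex contributions are directly computable from the residual matrices; a small sketch of ours, in the same NumPy style as above and assuming compatible meshes so that both residuals are square:
\begin{verbatim}
import numpy as np

def vertexwise_energy(E, E_prime, lam=0.5):
    # eps_i = lam * sum_j (|e_ij| + |e_ji|)
    #       + (1 - lam) * sum_j (|e'_ij| + |e'_ji|)
    contrib = lambda M: np.abs(M).sum(axis=1) + np.abs(M).sum(axis=0)
    return lam * contrib(E) + (1.0 - lam) * contrib(E_prime)
\end{verbatim}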
Note that the examples in the first two rows of Figure~\ref{fig:analogies2} include only intrinsic differences (or `style') of shapes $A, B$ (man vs woman or thin man vs fat man) that are transferred to a different pose $C$, resulting in $X$ being a shape with the style of $B$ in the pose of $C$. In the last row, however, the difference between shapes $A$ and $B$ includes both `style' (thin vs fat man) and pose differences (standing vs running). Shape $C$ is a woman in a standing pose, and the synthesized analogy $X$ is a fat woman in a running pose (though the pose of $X$ is attenuated compared to $B$). Obviously, since the operators in our problem are intrinsic, if the pose transformation were a perfect isometry, the pose of $C$ would not change. However, real pose transformations result in local non-isometric deformations around the knee joint, affecting the metric and the resulting difference operators. Consequently, our optimization tries to account for such a difference by bending the leg of the woman $X$. This result is consistent with the experiments of \cite{Rustamov2013}, who were able to capture extrinsic shape transformations (poses) by means of intrinsic shape difference operators. \input{./sections/fig_shape_analogies.tex} \textbf{Shape exaggeration} is an interesting setting of the shape-from-difference operator problem where $B=C$. In this case, the shape difference between $A$ and $B$ is applied to $B$ itself, `caricaturizing' this difference. Repeating the process several times, an even stronger effect is obtained. Figure \ref{fig:caricaturization} shows an example of such caricaturization of the difference between a man and a woman (top row), resulting in an exaggeratedly female shape with large breasts and hips, and of the difference between a thin and a fat man (bottom row), resulting in a very fat man. \textbf{Shape-from-Laplacian } amounts to deforming shape $A$ into a new shape $X$ (with embedding $\bb{X}$) in such a way that the resulting $\bb{W}({\bb{\ell}(\bb{X})}) \approx \bb{W}_B$, as described in Section~\ref{sec:prob}. Figure \ref{fig:shapes_from_Laplacian} shows the results of our experiments. The initial shapes $A$ are shown in the leftmost column, and the shapes $B$ used to compute the reference Laplacian are shown in the second column from the left. The result of the optimization $X$ is shown in the rightmost column. Intermediate steps of the optimization with vertex-wise energy are shown in columns 3-6 (in this case, $\epsilon_{i}(\bb{X})$ simply boils down to the mismatch between the $i$th row and column of $\bb{W}(\bb{\ell}(\bb{X})) - \bb{W}_B$). The examples of rows 1-6 show that the Laplacian encodes the `style' of the shape, in the sense that if we e.g. start from a human head and deform it to make its Laplacian close to that of a gorilla, we obtain a gorilla's head. The example of row 8 shows a pose transformation (open vs bent fingers). As we observed in our shape-from-difference operator experiments, since such deformations are not perfectly isometric, the reconstructed result $X$ also has a bent finger. Finally, row 7 shows a combination of `style' and pose transformation (a thin man in a standing pose used as initialization $A$ vs a fat man in a running pose whose Laplacian is used as a reference $B$). Here again, the reconstructed $X$ has both the style and the pose (albeit attenuated) of the reference shape $B$.
\section{Introduction} In recent years, it has been demonstrated that quantum cascade lasers (QCLs) \cite{faist1994} are very well suited to develop broadband sources of coherent radiation in the terahertz (THz) spectral region \cite{turcinkova2011,Roesch2014}. The concept of heterogeneous QCLs, first introduced in the mid-infrared \cite{Gmachl2002}, has proven to be very powerful in the THz region as well. It relies on stacking different active regions with individually designed emission frequencies into a single waveguide. Such heterogeneous active regions offer high design freedom to shape the optical gain and thereby allow achieving lasing bandwidths well beyond 1\,THz \cite{turcinkova2011,Roesch2014}. Furthermore, it was shown that the group velocity dispersion (GVD) in such devices is sufficiently low so they can operate as frequency combs (FCs) without any dispersion compensation \cite{Roesch2014,Roesch2016}. Applications for compact and efficient FCs based on THz QCLs are manifold and range from spectroscopy to metrology. However, a non-uniform power distribution of the individual lasing modes, a limited spectral bandwidth and higher order lateral modes often hamper the performance of such lasers, especially in continuous wave operation \cite{turcinkova2011,Burghoff2014}. Another interesting field of use for heterogeneous THz QC gain media is the development of broadband amplifiers \cite{bachmann2015}, the coherent detection of their emitted radiation \cite{oustinov2010,bachmann2015} and ultra-short pulse generation by mode-locking \cite{barbieri2011,Freeman2013,Wang2015}. In contrast to previous contributions, which all rely on homogeneous active regions, heterogeneous lasers potentially allow the generation of sub-ps long THz pulses with bandwidths exceeding 1\,THz. The metal-metal waveguide \cite{williams2003} is the resonator of choice for all broadband concepts mentioned above. Its essential properties include the close to unity confinement of the optical mode and an almost homogeneous field distribution across the active region \cite{scalari2008review,kohen2005}. Additionally, and particularly important for FC operation, it has nearly constant waveguide losses and group velocity dispersion between 2 and 4\,THz \cite{Roesch2014}. These advantages come along with some undesired properties such as the significantly lower optical output powers \cite{williams2007} and the more divergent beam patterns \cite{Adam2006,Orlova2006} compared to lasers with single plasmon waveguides\,\cite{KOE02}. Besides the mentioned extraction problem, another major issue of metal-metal waveguides is higher order lateral modes that are not fully suppressed. Such modes are undesired for short pulse generation and also hinder FC operation. The generation of short pulses in broadband THz QC gain media is to date limited by the formation of sub-pulse structures, favored by the short upper state lifetime of the gain medium, that hinder regular pulse train formation \cite{bachmann2015}. So far, the origin of this sub-pulse structure has not been fully understood. As the spectra of such time traces indicate the presence of higher order lateral modes, they might be the source of this sub-pulse formation. Higher order lateral modes do not encounter the same effective refractive index as the fundamental modes, resulting in different group velocities and the build-up of sub-pulses.
A successful suppression of higher order lateral modes is therefore of paramount importance in order to generate ultra-short pulses down to the ps level. Furthermore, higher order lasing modes cause a very uneven power distribution over the fundamental cavity modes due to mode competition. Even though the modal overlap of the excited lateral modes with the active region is reduced for narrow ridges with widths of the order of 50\,$\mu$m \cite{Maineult2008}, this still does not completely prevent them from lasing. An elegant way to achieve a full suppression is to increase the difference in losses between the fundamental and the higher order lateral modes by implementing side-absorbers in the metal-metal waveguide \cite{Fan2008,xu2013,Tanvir2013}. This can be done by slightly reducing the top metalization width with respect to the laser ridge width (set-back) and an underlying highly doped, and therefore very lossy, epitaxially grown GaAs layer. In this article, we present a very simple but versatile way to control and fully suppress higher order lateral modes in broadband metal-metal THz QCLs and THz QC amplifiers. In contrast to the work reported in Refs. \cite{Fan2008,xu2013,Tanvir2013,Gellie2008}, we introduce a set-back along with additional losses given by a thin metallic layer, which is deposited onto functional devices, instead of using a highly-doped epitaxial layer. Finite element simulations are used to obtain frequency dependent optical losses in order to choose the right set-back width with respect to the laser waveguide width. The correct implementation eliminates mode competition effects and thus tremendously improves the lasing bandwidth and the power distribution between individual modes. Furthermore, it ensures that only fundamental cavity modes are excited by injection seeding with broadband THz pulses. As a consequence, a clean train of transform-limited pulses is formed, achieving record short pulse widths. \section{Lateral mode control} \label{sec:sim} To fully understand the impact of lossy side-absorbers on QC structures, threshold gain calculations were performed. In contrast to the models reported in Refs.\,\cite{Fan2008,xu2013}, we performed frequency dependent simulations in order to adapt them to broadband lasers. The threshold gain $g_{th}$ is defined as \cite{faist2013book} \begin{equation} g_{th}(\nu)=\frac{\alpha_m(\nu)+\alpha_w(\nu)}{\Gamma(\nu)}, \label{eq:threshold} \end{equation} \\ where $\alpha_m$ and $\alpha_w$ are the mirror and waveguide losses of the laser cavity. The mirror losses have been evaluated for a 2\,mm long and 13\,$\mu$m high laser with different ridge widths using a 3D finite element solver (\textsc{CST Microwave Studio$^{\circledR}$}). The mirror losses of a 50\,$\mu$m wide ridge are displayed in Fig.\,\ref{fig:sim1}(a). For simplicity, we used the same mirror losses for the first ($\text{TM}_{00}$) and second ($\text{TM}_{01}$) lateral mode in the waveguide.
$\Gamma$\,in \eqref{eq:threshold} represents the overlap factor of the electric field with the active region and is defined as the ratio of the optical intensity inside the active region to the total intensity \cite{faist2013book} \begin{equation} \Gamma(\nu)=\frac{\int_{active}dzdxE_z^2(z,x,\nu)\epsilon(z,x,\nu)}{\int_{z,x=-\infty}^{\infty}dzdxE^2(z,x,\nu)\epsilon(z,x,\nu)}, \end{equation} \\ where $E(z,x,\nu)$ is the electric field at position $(z,x)$ and frequency $\nu$, $E_z(z,x,\nu)$ is the part of the electric field coupling to the intersubband transition and $\epsilon(z,x,\nu)$ is the dielectric function at the frequency $\nu$ for the material at position $(z,x)$. $\Gamma$ has been calculated in a 2D simulation using the finite element solver \textsc{Comsol Multiphysics$^{\circledR}$} (Fig.\,\ref{fig:sim1}(b)). The dielectric function of GaAs has been approximated using a Lorentz oscillator model. The calculations have been performed for the $\text{TM}_{00}$ (solid lines) as well as for the $\text{TM}_{01}$ modes (dashed lines). The corresponding electric field distributions of these modes inside the waveguide are shown in the inset of Fig.\,\ref{fig:sim1}(b). Different waveguide configurations have been simulated. For the blue curves in Fig.\,\ref{fig:sim1}(b), the top metalization has the same width as the active region below, i.e. there are no lossy side-absorbers. For the other two configurations, the width of the top metalization is reduced on both sides by 2\,$\mu$m/4\,$\mu$m (red/green lines). In addition, in order to get lossy side-absorbers, 5\,nm of nickel has been added on top of these set-backs. As can be seen in Fig.\,\ref{fig:sim1}(b), the side-absorbers change the overlap factor only by a small amount. For both the $\text{TM}_{00}$ and $\text{TM}_{01}$ modes, $\Gamma$ is close to unity for all investigated configurations. The same 2D simulations have been used to calculate the complex refractive index $\tilde{n}(\nu)=n-ik$ of the modes propagating in such a waveguide. The extinction coefficient $k$ is directly linked to the waveguide losses by \cite{saleh_teich_book} \begin{equation} \alpha_w(\nu)=\frac{4\pi}{\lambda}k(\nu). \end{equation} \\ These losses have been inserted into \eqref{eq:threshold} together with the intersubband losses of the active region as reported in Ref. \cite{Roesch2014} to calculate the threshold gain. As shown in Fig.\,\ref{fig:sim1}(c), for a 50 $\mu$m wide waveguide without set-back, the $\text{TM}_{00}$ and the $\text{TM}_{01}$ modes have an almost identical threshold gain over the entire frequency range of the laser (blue curves). It is therefore not surprising that in such a device not only the fundamental modes reach lasing threshold. The resulting gain competition will lead to a very inhomogeneous laser spectrum. If one adds a set-back of 2\,$\mu$m per side with 5\,nm of nickel on top, the threshold gain for the $\text{TM}_{01}$ mode is increased up to 22 $\text{cm}^{-1}$, while that of the $\text{TM}_{00}$ mode increases only slightly, to approximately 14 $\text{cm}^{-1}$ (red curves). As a consequence, only the fundamental modes will reach lasing threshold. This results in a much more homogeneous power distribution and a clearly defined mode spacing, set by the cavity round-trip frequency, as there are no more $\text{TM}_{01}$ modes and thus no gain competition.
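For orientation, the chain from extinction coefficient to threshold gain can be written compactly; the following Python sketch is ours and uses placeholder inputs for the frequency-dependent quantities (mirror losses, extinction coefficients, overlap factors), which in this work come from the finite element simulations.
\begin{verbatim}
import numpy as np

C0 = 2.99792458e8  # vacuum speed of light (m/s)

def waveguide_losses(freq, k_ext):
    # alpha_w = (4*pi/lambda) * k with lambda = c0/freq the vacuum
    # wavelength; returns 1/m (divide by 100 for cm^-1).
    return 4.0 * np.pi * k_ext * freq / C0

def threshold_gain(freq, alpha_m, k_ext, gamma):
    # g_th = (alpha_m + alpha_w) / Gamma, per Eq. (1)
    return (alpha_m + waveguide_losses(freq, k_ext)) / gamma

# With the side-absorber present, k_ext of the TM01 mode rises much
# more strongly than that of the TM00 mode, which raises its g_th
# above the available gain and suppresses it.
\end{verbatim}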
However, if the set-back is chosen too wide, only part of the laser bandwidth will reach threshold (green curves in Fig.\,\ref{fig:sim1}(c)), and an even further increase of the threshold gain will result in no lasing at all. The waveguide losses are strongly dependent on the width of the laser. Since the overlap of the electric field with the lossy side-absorbers is smaller for wider ridges, they require a broader set-back in order to achieve the same shift in threshold gain, compared to narrow ridges. From our simulations, we get an optimal trade-off between the threshold gain of $\text{TM}_{00}$ and $\text{TM}_{01}$ as follows: for 50\,-\,70\,$\mu$m wide ridges, a set-back of 2\,-\,4\,$\mu$m per side is optimal, whereas for wider ridges of 100\,-\,150\,$\mu$m, a set-back of 10\,-\,12\,$\mu$m is required. \section{Sample fabrication and characterization} \label{sec:fabrication} The sample fabrication is based on a standard metal-metal processing technique that relies on a thermo-compression wafer bonding step of the 13\,$\mu$m thick active region onto a highly doped GaAs substrate \cite{williams2003}. The employed heterogeneous active region is identical to the one presented in Ref. \cite{Roesch2014} and consists of three different GaAs/AlGaAs quantum cascade heterostructures. These are designed for center frequencies of 2.3, 2.6 and 2.9 THz, and the total structure is able to generate octave-spanning laser emission\,\cite{Roesch2014}. The highly doped GaAs top contact layer was removed and the laser cavities were defined by a reactive ion etching (RIE) process, resulting in almost vertical side walls. Thus, the devices do not possess a significant amount of unpumped active region, which is beneficial for the thermal management, particularly relevant for continuous wave operation. Additionally, dry etched resonators are more suitable for long (2-4\,mm) and narrow (40-80\,$\mu$m) lasers. These feature a larger number of fundamental Fabry-Perot modes, important for spectroscopy applications, while reducing the probability of higher order lateral modes reaching laser threshold. The lossy side-absorbers were implemented on devices for amplification and pulse generation. The geometry is based on a coupled cavity system that is designed for THz time-domain spectroscopy experiments, where a short section acts as an integrated emitter of broadband THz pulses and a long section as an amplifier for the injected THz radiation \cite{Martl2011,Bachmann2014}. The 3\,$\mu$m wide (each side) set-back of the top metal contact layer (Ti/Au) with respect to the total laser ridge width of 75\,$\mu$m was defined by a sacrificial silicon nitride (SiNx) etch mask, used on the amplifier section only. As a final step, the required amount of side-absorber loss is introduced by evaporating a thin layer (5\,nm) of nickel on top of the fully processed device. The choice of an appropriate set-back width and nickel thickness, for a given resonator width, is the crucial point for good spectral performance, as discussed in section \ref{sec:sim}. Figure \ref{fig:sem-cc}(a) shows a scanning electron microscope (SEM) image of such a coupled cavity device. In order to better understand the effect of side-absorbers on the spectral performance as well as the effects on FC operation, a second device geometry has been fabricated. It consists of a single QCL ridge, and the lossy side-absorbers were introduced as a post-processing step.
The set-back was realized by removing a stripe of the top metalization, on either one or both waveguide edges, by a focused ion beam (FIB; System: Helios 600i). As for the previous geometry, losses were induced by evaporating 5\,nm of nickel onto the region of the ridge now uncovered. This approach is very powerful, because it allows the set-back width to be adapted on demand. Furthermore, it permits a "before and after" comparison on the very same laser, directly revealing the impact of this mode control scheme. The technique of FIB cutting can also be used for realizing integrated planar horn structures in order to improve the outcoupling efficiency and the far-field pattern of metal-metal THz QCLs, as was shown in \cite{wang2016}. Figure \ref{fig:sem-cc}(b) displays the light-current-voltage characteristics (LIV) of a 2\,mm x 50\,$\mu$m dry etched laser with (red line) and without (blue line) a lossy side-absorber, consisting of a 3\,$\mu$m wide set-back on one side and 5\,nm of nickel. The measurements were performed in pulsed operation at a heat-sink temperature of 10\,K. Consistent with the simulations, we observe a slight increase of the threshold current density as a result of the increased waveguide losses due to the FIB/evaporation treatment. Additionally, the THz output power is about 30\% higher, which we mainly attribute to an improved collection efficiency, resulting from a more directed far-field since the higher order lateral modes are suppressed. This is discussed in more detail in the following section. \section{Laser spectrum and far-field pattern} While for wet etched lasers a set-back of the top metalization is routinely implemented to prevent underetching, this is not necessary for dry etching, as good etch conditioning will result in almost vertical side walls. A laser spectrum of a typical dry etched device (2\,mm\,x\,70\,$\mu$m) without any set-back is displayed in Fig.\,\ref{fig:spectrum-setback}(a). It is apparent that such a laser cannot be used as a frequency comb source. It has only a few modes with an irregular spacing, and the power is distributed very unevenly between them. To directly show the effect of side-absorbers on the very same laser, a set-back of 3\,$\mu$m (on one side only) has been removed from the top metal using a FIB. After depositing an additional 5\,nm of nickel, the laser emission was measured again. The spectrum with the implemented side-absorber is shown in Fig.\,\ref{fig:spectrum-setback}(b). Not only is the lasing of higher order modes suppressed, but the overall shape of the spectrum is also significantly improved. The presence of the side-absorber ensures a smooth mode intensity envelope, leading to a bandwidth of 720\,GHz within a 10\,dB limit. Furthermore, the total spectral bandwidth increases, demonstrating octave-spanning laser emission ranging from 1.63 to 3.37\,THz. As already shown in Ref.\,\cite{Roesch2014}, the weak modes on the edges of the spectrum are indeed lasing modes and are by no means measurement artifacts. The laser spectra in Fig.\,\ref{fig:spectrum-setback}(a) and (b) were measured in continuous wave operation at a heat-sink temperature of 20\,K. Fig.\,\ref{fig:spectrum-setback}(c) shows the effect of side-absorbers in pulsed operation on a device with emitter section, as discussed later.
\newpage Analyzing the electrical intermode beatnote of the laser without emitter after the implementation of side-absorbers in Fig.\,\ref{fig:BNmap} shows two regimes with narrow beatnotes, indicating FC operation of the laser \cite{hugi2012,Roesch2014}. The corresponding spectral bandwidth extends up to 442\,GHz (see supplementary data for details). The side-absorbers not only improve the spectral properties but also promote the generation of FCs. A more detailed study of the dependence of the side-absorber performance on the ridge width can be found in the supplementary data. We also measure more optical power when the side-absorber is present, as shown in the LIV comparison in Fig.\,\ref{fig:sem-cc}(b). This increase in power can be attributed to a change of the far-field. Even though the beam patterns of standard metal-metal waveguides are highly non-directional due to their sub-wavelength outcoupling facet, we can still observe a change in the far-field after implementing the side-absorber. Figure\,\ref{fig:farfield} displays the measured far-field before (a) and after (b) the fabrication of the side-absorber. It is evident that due to the absence of $\text{TM}_{01}$ modes the beam pattern of the laser with a side-absorber is more directional in the $\phi$-angle. This is explained by the fact that the far-field of $\text{TM}_{00}$ modes is more directional than that of $\text{TM}_{01}$ modes \cite{Gellie2008}. Therefore, such a change in the far-field can be expected. As the detector used for the THz power measurements in Fig.\,\ref{fig:sem-cc}(b) covers a limited solid angle only, more power is detected in the case of a more directional far-field. Thus, the total laser power is not necessarily increased by the use of side-absorbers. Having proven the successful suppression of higher order lateral modes in standard THz QCLs and knowing the right amount of set-back, either from simulations or experiments, we are now able to directly fabricate a laser with the suitable set-back using a SiNx hard mask for the dry etching step, including an emitter section for THz pulse generation (as described in section \ref{sec:fabrication}). Figure\,\ref{fig:spectrum-setback}(c) shows a laser spectrum where the set-back has been implemented directly and the device includes an emitter section, as shown in Fig.\,\ref{fig:sem-cc}(a). Comparing Fig.\,\ref{fig:spectrum-setback}(b) and (c) emphasizes that the two different fabrication procedures (direct and post-processing) of the side-absorbers lead to very similar results. The smaller mode spacing in Fig.\,\ref{fig:spectrum-setback}(c) is due to the longer cavity of this laser (2.93\,mm instead of 2\,mm). The modes at 2.64 and 2.78\,THz are attenuated by residual water vapor (black line), since this spectrum was measured in a nitrogen-purged Fourier transform infrared spectroscopy (FTIR) system while the other spectra were measured using a vacuum-pumped FTIR. This also explains the lower signal-to-noise ratio (SNR) in Fig.\,\ref{fig:spectrum-setback}(c), resulting in a smaller bandwidth above the noise floor. \section{Pulse generation and amplification} Apart from their use as lasers, THz QC heterostructures are obvious candidates for the generation and amplification of ultra-short pulses due to their broad bandwidth. The requirements for coherent and phase-resolved detection of THz QCL radiation can be met by THz time-domain spectroscopy (TDS) systems \cite{kroell2007}.
Hence, it was shown that THz QCLs can emit a stable train of pulses by injection seeding with a weak THz pulse \cite{Maysonnave2012}, by active mode-locking \cite{barbieri2011}, or a combination of both \cite{Freeman2013}. In all three cases, a bound-to-continuum (BTC) based QCL fabricated with a single-plasmon waveguide was used. The generated pulses exhibited lengths (full width at half-maximum - FWHM) of 9\,ps (11 lasing modes within $\sim$\,200\,GHz) \cite{Maysonnave2012}, 10\,ps (10 lasing modes within $\sim$\,150\,GHz) \cite{barbieri2011}, and 15\,ps (6 lasing modes within $\sim$\,100\,GHz) \cite{Freeman2013}, respectively. However, achieving pulse lengths on the order of 1\,ps requires a bandwidth of $\sim$1\,THz. It is therefore essential to use heterogeneous active regions based on inherently broadband resonant longitudinal optical phonon depopulation (RP) designs, such as the one used in this work. Using the coupled cavity device geometry, as discussed in section \ref{sec:fabrication} (without side-absorbers), and a similar version of the heterogeneous active region presented in this work, we were previously able to achieve broadband amplification over a bandwidth of 500\,GHz and amplification factors exceeding 20\,dB \cite{bachmann2015}. The amplification process relies on gain switching that is initiated by an RF pulse. Injection seeding permits the coherent detection of the THz electric field pulse train, separated by the round-trip time of the QCL cavity \cite{kroell2007,oustinov2010}. After a few round-trips, the individual pulses merge and a rich sub-pulse fine structure is formed \cite{bachmann2015}. Its presence can be explained by higher order lateral modes that are seeded by the integrated THz emitter. Different lateral modes have the tendency to form separate pulses, propagating with the group velocities of their respective mode orders. Therefore, the presented mode selection helps to improve the THz amplifier performance. To investigate the influence of lossy side-absorbers on the pulse formation dynamics, we performed injection seeding experiments. They are based on the ones presented in Ref.\,\cite{bachmann2015}, but with the optimized active region reported in Ref.\,\cite{Roesch2014}. The seeding experimental setup features a Ti:Sapphire laser based THz-TDS system adapted to the coupled cavity geometry of our samples \cite{Martl2011,Bachmann2014}. Near-infrared femtosecond (fs) laser pulses (790\,nm, $\sim$\,80\,fs) are used to generate broadband THz seeding pulses in the emitter section, which are directly coupled into the amplifier section. A hyper-hemispherical silicon lens attached to the THz amplifier output facet is used to improve the collection efficiency and achieve a better THz focus on the 1\,mm thick ZnTe detection crystal. The measurements were done at a heat-sink temperature of 10\,K. The seeding experiment is performed for two different operating modes of the QC amplifier section. In the reference mode (REF), the amplifier section is driven just above threshold in order to guarantee transparency conditions. In the amplification mode (AMP), the amplifier section is driven $\sim$\,20\% below threshold and a sub-nanosecond long RF pulse with a peak power of +28\,dBm is used to switch the gain while a THz seed pulse is injected into its cavity. First, we have investigated a coupled cavity device without lossy side-absorbers, with dimensions of 2.96\,mm\,x\,70\,$\mu$m.
Figure\,\ref{fig:pulse-train}(a) shows the injection seeded part of the QC amplifier THz electric field output (AMP mode, blue line) along with the decreasing pulse train amplitudes of the REF mode (red line). The generated pulses are spaced by the round-trip time through the amplifier waveguide. In agreement with our previous experiments, the coherent QCL emission does not exhibit a train of distinct round-trip pulses \cite{bachmann2015}; instead, a very complex and highly dynamic fine structure on a ps time scale is formed. Similar behavior was observed in a homogeneous RP active region metal-metal THz QCL \cite{Wang2015}. In contrast to that, the results of the seeding experiment change dramatically if a coupled cavity device of comparable size (2.48\,mm\,x\,75\,$\mu$m), but with side-absorbers (see Fig.\,\ref{fig:sem-cc}(a)), is used. As can be seen in Fig.\,\ref{fig:pulse-train}(b), if the THz amplifier is driven in the AMP mode (blue line), a train of clearly separated THz pulses is formed. A comparison of the two waveguide configurations, seen in Fig.\,\ref{fig:pulse-train}(a) and (b), clearly demonstrates the influence of higher order lateral modes on QC heterostructure based THz pulse generation. The significantly lower effective refractive index of the $\text{TM}_{01}$ modes with respect to the $\text{TM}_{00}$ modes leads to the formation of sub-pulses due to the different group velocities, and no clean THz pulse train can be formed. The parasitic pulses appearing $\sim$\,22\,ps before the main pulses are residues of an air-side mode that is bound to the metal-air interface \cite{Martl2011} and result in an interleaved pulse train besides the desired waveguide mode. Figure\,\ref{fig:pulse-train}(c) displays the squared electric field pulse train of Fig.\,\ref{fig:pulse-train}(b), and the inset shows the temporal profiles of the three intensity pulses in the main panel. The pulse centered at 239\,ps (P1) exhibits a Gaussian-like pulse shape and a record ultra-short pulse length of 2.5\,ps (FWHM). The increasing pulse widths and the varying tails of the subsequent pulses indicate significant higher order dispersion of the $\text{TM}_{00}$ modes in the laser cavity. In order to check the seeded QCL bandwidth and the influence of excited lateral modes on the spectral properties, 360\,ps long versions of the TDS time traces shown in Fig.\,\ref{fig:pulse-train}(a),\,(b) were Fourier transformed to the frequency domain. The corresponding intensity spectra are shown in Fig.\,\ref{fig:spectrum-tds}(a),\,(b). Independent of side-absorber usage, the two AMP spectra verify that all three active regions, centered at 2.3, 2.6 and 2.9 THz, are seeded and demonstrate an SNR of 50\,dB. Comparing the AMP with the REF spectra reveals amplification bandwidths exceeding 1\,THz and amplification factors of $\sim$\,30\,dB. In addition, the seeded spectra display a significantly improved power distribution, compared to our previous results \cite{bachmann2015}, with a fairly flat top between 2.1 and 2.8\,THz (20\,dB intensity range). The device without side-absorbers, shown in Fig.\,\ref{fig:spectrum-tds}(a), displays a spectrum with non-uniform mode spacing including higher order lateral modes, especially for frequencies above 2.6\,THz. Similar irregularities in the seeded TDS spectrum of a 60\,$\mu$m wide RP based metal-metal QCL with an integrated planar horn antenna were also seen in Ref.\,\cite{wang2016}.
On the contrary, and as expected from the clean pulse train in Fig.\,\ref{fig:pulse-train}(b), the spectrum of the QC amplifier with lossy side-absorbers, shown in Fig.\,\ref{fig:spectrum-tds}(b), exhibits a constant mode spacing over the entire bandwidth with a total number of 60 fundamental Fabry-Perot modes. Assuming a Gaussian intensity profile for the pulses in Fig.\,\ref{fig:pulse-train}(c) and using a transform-limited pulse length of 2.5\,ps leads to a corresponding bandwidth of 180\,GHz (from the Gaussian time-bandwidth product $\Delta\nu\,\Delta t \approx 0.441$, i.e. $\Delta\nu \approx 0.441/2.5\,\mathrm{ps} \approx 180$\,GHz). This agrees very well with the measured FWHM bandwidth in Fig.\,\ref{fig:spectrum-tds}(b). For comparison, Fig.\,\ref{fig:spectrum-tds}(c) displays a non-seeded spectrum of the spectrally optimized device from Fig.\,\ref{fig:spectrum-tds}(b). The laser is operated at the negative differential resistance (NDR) point, and the spectrum is acquired by an FTIR spectrometer with a spectral resolution of 2.25\,GHz. The differences between the spectra in Fig.\,\ref{fig:spectrum-tds}(b) and \ref{fig:spectrum-tds}(c) at frequencies above 2.8\,THz are possibly related to higher order dispersion in this part of the spectrum, affecting the pulse propagation as visible in the inset of Fig.\,\ref{fig:pulse-train}(c). \section{Discussion and Conclusion} The suppression of any kind of higher order lateral modes in broadband THz QC lasers is an essential requirement for many applications, especially for frequency comb operation and short pulse generation. To increase the waveguide losses of higher order lateral modes and thereby prevent them from lasing, we added side-absorbers to the metal-metal resonators. The reported fabrication technique, using a focused ion beam to fabricate set-backs in combination with the deposition of a several-nm-thick nickel layer, enables a convenient implementation of side-absorbers onto processed and functioning lasers. Frequency dependent threshold gain calculations were used to verify the broadband mode suppression mechanism and to adapt the total amount of side-absorber losses with respect to the used laser ridge width. Several devices, with different ridge and side-absorber widths, have been fabricated and examined, demonstrating the versatility and reliability of the presented side-absorber technique. The performed simulations explain the experimental results qualitatively very well. The optimized devices show significantly improved spectral properties in terms of lasing bandwidth and uniformity of the power distribution, with a flat top exceeding 700\,GHz within a 10\,dB range. This is attributed to the suppressed gain competition of fundamental and higher order lateral modes. Lasers with side-absorbers also exhibit an increase in output power of about 30\%, which is mainly due to a more directional far-field and a finite collection solid angle of the optical power measurement. Implementing side-absorbers on injection seeded THz QCLs leads to a manifold improvement of the pulse generation properties, resulting in the formation of a clean train of transform-limited THz pulses with a record ultra-short pulse length of 2.5\,ps. Together with the seeded bandwidth of $\sim$\,1\,THz, this makes such devices very useful as powerful sources for THz-TDS systems, especially for frequencies above 2\,THz, where the power and SNR of conventional TDS systems rapidly decrease. The presented mode control of broadband THz QCLs supports the development of octave-spanning frequency combs with flat-topped lasing spectra and the generation of sub-picosecond long THz pulses.
\section*{Funding Information} The authors acknowledge partial financial support by the European Union via the FET-Open grant TERACOMB (ICT-296500), the Austrian Science Fund (FWF) through the SFB NextLite (F4902), the DK Solids4Fun (W1243) and the Swiss National Science Foundation (SNF) through project 200020 165639. \section*{Acknowledgments} The authors acknowledge the Center for Micro- and Nanostructures (ZMNS) at TU Wien and the joint labs FIRST and ScopeM at ETH Zurich for sample processing. We further thank C. Bonzon for the support on the 2D simulations, K. Otani for the support and discussions concerning the device fabrication and C. Derntl for the support with the SEM. \bibliographystyle{unsrt}
\section{Introduction} Topological superfluids and superconductors have long been a fascinating subject. Early explorations of topological superfluids were at least partially motivated by their close connections to quantum Hall physics \cite{Volovik88,Volovik}. Read and Green further pointed out the unique roles played by Majorana edge states in time reversal symmetry breaking (TRB) topological states and in phase transitions between topological and non-topological states \cite{Read00}. Possible non-abelian statistical properties in topological states \cite{Moore91,Nayak96,Read96,Ivanov01} have made topological superfluids and superconductors one of the most promising candidates for topological quantum computers, an idea put forward by Kitaev \cite{Kitaev01,Kitaev03}. Time reversal invariant (TRI) topological superfluids and superconductors studied in the more recent literature are relatively young members of topological states \cite{Roy08,Qi09,Qi11,Bernevig, Zhang13,Mizushima16,Sato17}. These studies are also related to the developments in topological insulators \cite{Kane05,Bernevig06a,Bernevig06b,Fu07,Fu07b,Moore07,Qi08, Hasan10}. The possibility of having topological superconducting states in heterostructures has also generated enormous excitement and interest \cite{Fu08,Qi10a,Lutchyn10,Chung11,Nakosai12}. Impressive efforts have been made to systematically classify these states and characterize them in terms of elementary Fermi surface properties \cite{Schnyder08,Kitaev09,Qi10,Teo10}. These research efforts generalize the notion of topological states beyond the previously known examples of topological superfluids or superconductors and perhaps provide very broad search criteria for potentially realizing them in quantum materials. They open a door to many new studies of topological matter, both theoretical and experimental. More recently, there has also been growing interest in nodal topological superfluids and superconductors. In these cases, topological invariants can be defined on momentum space submanifolds enclosing the nodes \cite{Sato06,Beri10}. Various nodal structures, such as point nodes, line nodes and surface nodes, have been investigated, and the nodal phases are classified by symmetries of Hamiltonians \cite{Zhao13, Kobayashi14, Zhao16}. For example, in analogy to Weyl semimetals \cite{Wan11, Burkov11, Burkov18, Armitage18}, where topologically protected point nodes exist on Fermi surfaces, Weyl superconductors with similar point nodes have been proposed to exist when time reversal symmetry (TRS) is broken \cite{Meng12,Cho12,Yang14,Bednik15}. Meanwhile, topological properties such as surface states of topological phases with line nodes have been discussed mostly in the context of noncentrosymmetric superconductors \cite{Yada11, Sato11, Schnyder11,Schnyder12}. Possible realizations of superconducting phases with surface nodes have also been proposed in multiband systems with broken TRS \cite{Agterberg17}. In superfluids and superconductors, there can be phase transitions between various gapped phases, or between gapped and gapless nodal phases, that have the same local order parameters and break the same symmetries spontaneously but differ in global topology. {``\em How does one phase make a phase transition to another topologically distinct phase?''} is the key question we want to explore in this study.
These transitions are obviously beyond the standard Landau paradigm of order-disorder phase transitions \cite{Landau} and usually occur when various symmetries such as gauge symmetries are still spontaneously broken. How are they different from the order-disorder transitions, and what are the upper critical dimensions of these transitions, below which strong correlations can emerge? More concretely, TRI topological superfluids and superconductors are robust, gapped states that are well protected, and their surface or edge states remain gapless if weak external fields are also time reversal invariant \cite{Mizushima16}. However, in the presence of TRB fields, either due to spin exchange fields or pairing exchange effects, surface states can be gapped at any finite TRB field. The bulk of a gapped topological state can either simply have a smooth crossover to a gapped topologically trivial superfluid or superconducting state, or, in more generic cases that we will focus on below, undergo phase transitions to nodal phases with various distinct nodal structures that can also be topological states. The main motivation of this article is to identify these quantum transitions, which turn out to exist only at $T=0$, i.e. at zero temperature, and hence can be conveniently characterized as quantum critical points (QCPs) \cite{Sachdev}. These QCPs also naturally define scaling properties of thermal states in quantum critical regimes. Phenomenologically, we can always classify the nodal phases into at least three categories: (A) nodal point phases (NPPs), (B) nodal line phases (NLPs), and (C) nodal surface phases (NSPs). A gapped topological state can undergo a continuous phase transition into one of these phases, and correspondingly there should be at least three different universality classes specifying these transitions. Here, we do not attempt an exhaustive classification of all possible QCPs that may exist in superfluids or superconductors. Rather, we will focus on QCPs that are characterized by three classes of emergent extended quantum Lifshitz Majorana fields (QLMF) with distinct scaling properties. Real Majorana fields appear naturally because of the breaking of gauge symmetries at the points of transitions, while Lifshitz fields \cite{Lifshitz41} are induced as precursors of nodal structures. These QLMFs describe a large variety of QCPs between gapped and nodal phases. We will attribute three types of quantum fields, QLMFA, QLMFB, and QLMFC, to transitions from a gapped phase into (A) NPP, (B) NLP, and (C) NSP, respectively. These three fields, which all break the Lorentz symmetry, together with relativistic quantum Majorana fields with full emergent Lorentz symmetry, appear to form a set of quantum fields naturally emerging at generic QCPs in topological superfluids (TSFs) and topological superconductors (TSCs). They capture the low energy physics of a very broad set of QCPs that can exist in TSFs and TSCs. The emergent Lorentz invariant Majorana fields have been pointed out to appear at a beyond-Landau paradigm transition between a TSF and a non-topological superfluid that spontaneously break the same symmetries \cite{Yang19}. Other beyond-Landau paradigm quantum phase transitions have also been studied in various models. In 2D and 3D topological insulators, topological quantum criticality has been discussed recently \cite{BJYang14, Cho16,Isobe16}.
One of the most celebrated examples of beyond-Landau paradigm transitions is a class of quantum phase transitions between states with different order parameters or spontaneously breaking different types of symmetries \cite{Senthil04a,Senthil04b}. It was suggested that such QCPs, denoted as {\it deconfined} QCPs, can possess new emergent symmetries and novel particles which do not exist in either of the ordered phases separated by the QCP. Scale invariant quantum spin liquids can naturally appear at deconfined QCPs, which adds an interesting new member to the earlier family of gapped spin liquids and quantum spin valence bond solids \cite{Haldane88,Rokhsar88,Read91,Moessner01,Senthil04a,Senthil04b}. The topological quantum phase transitions discussed in this article are driven by changes in global topology as mentioned before. However, the unique feature that distinguishes them from other beyond-Landau paradigm transitions is that U(1) symmetry is broken spontaneously and charges are not conserved at these topological QCPs in superfluids and superconductors. This aspect plays a paramount role in the following construction of effective field theories for these QCPs. The rest of this article is organized as follows. In Sec. \ref{general}, we introduce the basic notions of Majorana fields and QLMFs, and general phenomenologies of three classes of QCPs described by these QLMFs. We will present the main results, such as the order of phase transitions and the existence of surface QCPs with TRB fields. In Sec. \ref{pwave}, we will focus on the applications to the simplest $p$-wave superfluids and further zoom in to examine the QCPs of $p$-wave superfluids at $T=0$. In Sec. \ref{TQCP}, we discuss the properties of these QCPs at finite temperature. In Sec. \ref{Dirac}, we generalize our studies to TSCs with extra orbital degrees of freedom. We present applications to TSCs of Dirac fermions and classify all possible fields that can drive topological quantum phase transitions. In Sec. \ref{CBS}, as another application, we further present results on QCPs in the TSC of Cu$_x$Bi$_2$Se$_3$ that was pointed out previously \cite{Fu10, Sasaki11}. \section{QLMF\lowercase{s} and quantum criticality}\label{general} \subsection{General Phenomenology at QCPs} As stated in the Introduction, the specific transitions of our interest in the current article, as well as in a previous article by the authors (Ref. \cite{Yang19}), are either entirely driven by the change of global topologies or involve drastic changes of global topologies. They are either entirely absent in conventional non-topological superconductors or distinctly different from their counterparts in conventional superconductors. More importantly, all these QCPs {\em do not} appear in the Landau paradigm of order-disorder phase transitions because the states on both sides of the QCPs and the QCPs themselves all break the same symmetries spontaneously. All transitions considered below and in Ref. \cite{Yang19} do not involve condensation of new bosonic quantum fields or particles. It is exactly the global topology and topological distinctions between different states that result in such a new class of QCPs. For this reason, we refer to them as topological QCPs. To illustrate the possibility of a continuum quantum field theory representation of topological quantum criticality, we apply an adiabatic theorem to the situation under our consideration.
A gapped fermionic topological state (allowed by global symmetries) can maintain its topological distinction under small Hamiltonian deformations if the gap remains open. So a change in global topologies requires the closing of a gap. The gap closing, which is necessary for changes of global topologies, indicates coalescing of gapped particles into the ground state near topological QCPs. The gap-closing phenomena in topological phase transitions have some unique features compared with those in many conventional non-topological quantum phase transitions. In topological superconductors and fermionic superfluids, elementary emergent particles are fermionic. In fact, they are real Majorana fermions due to the emergent charge conjugation symmetry at U(1) symmetry breaking QCPs. So generically, these topological quantum phase transitions occur with coalescing of real fermions into the ground state without {\em condensation} of new quantum fields or spontaneously breaking additional symmetries. This aspect is obviously fundamentally different from the Landau paradigm of order-disorder transitions. On the other hand, the above observation does explicitly suggest an effective quantum (Majorana) field theory representation for the coalescence dynamics of gapped particles and therefore for topological quantum criticality. Furthermore, different topologies of quantum phases naturally require different quantum field theory representations and hence result in different universality classes. For instance, a nodal point topology demands a very different quantum Lifshitz field theory than a nodal line topology would demand in the bulk. Meanwhile, perhaps the most remarkable consequence of the gap closing in topological states is the proliferation of surface states into the interior of topological matter. This is essential in topological transitions so that a state can topologically reconstruct in the bulk and boundary simultaneously across a transition. Physically, these bulk transitions are always accompanied or even heralded by surface quantum criticality, a very distinct aspect of topological quantum criticality. These two particular aspects, one reflecting the bulk topology and the other more on its consequences at boundaries, are both absent in other gap closing phenomena in conventional non-topological systems as well as in the standard Landau paradigm. Indeed, it is these two aspects that make the topological QCPs stand out. Therefore, any appropriate quantum field theory representation for topological QCPs has to be in a class of quantum field theories with robust boundary states reflecting a corresponding change of topologies across the QCPs. This is indeed one of the guiding principles in our explicit constructions below, apart from more standard symmetry considerations. The unique role of changes of topologies in these transitions is therefore encoded in these quantum fields. An alternative and more practical way to think about these unique transitions is to compare them with what happens in a non-topological $s$-wave superconductor when one varies the same parameters. For instance, if one increases the interaction strength from weak to strong, there are no transitions in conventional $s$-wave superconductors, because the superconductor has the same type of local order and breaks the same U(1) symmetry throughout. This is not the case in topological superconductors because there can be a change of global topology when interactions increase, even though states are the same locally \cite{Yang19}.
This results in a bulk transition (although of a relatively high order), in addition to surface quantum transitions. So the very existence of such transitions entirely and crucially relies on the notion of global topology of underlying superconductors and topological distinction of different states. In other words, global topologies and their characterization add a new dimension to the parameter space along which phase transitions can occur. This new dimension is beyond the standard Landau paradigm. Because of broken gauge symmetry, a generic transition from a gapped topological phase to nodal phases in TSFs and TSCs can be characterized by an effective QLMF, i.e., dynamics of real fermions. The emergence of nodal structures in the low energy limit after transitions generally requires different scaling properties along different spatial-temporal directions near QCPs. This suggests the relevance of extended quantum Lifshitz Majorana fields. For transitions to NPPs, the effective field theory construction that takes into account the nodal point feature suggests the following universality class that we name QLMFA. In QLMF theories, the fundamental fields are $2N$-component Majorana fields or real fermions defined as \begin{eqnarray} && \chi({\bf x}) =(\chi_1, \chi_2, ...,\chi_{2N})^T, \nonumber \\ && \chi^\dagger_i ({\bf x}) =\chi_i({\bf x}), \nonumber \\ && \{\chi_i({\bf x}),\chi_j({\bf x'})\}=\delta({\bf x}-{\bf x'}) \delta_{ij}, \quad i, j=1,2,...,2N. \end{eqnarray} In terms of fundamental Majorana fields, QLMFA has the following simple generic structure in $d$ dimensions, \begin{eqnarray}\label{HA} &&{ H}_\text{A} =\frac12\int d^d{\bf x}\chi^T [ \Gamma_1 (-\partial^2_{x_1} + m) +\sum_{\alpha=2}^{d} \Gamma_\alpha i \partial_{x_\alpha} ] \chi +{ H}_I,\nonumber\\ && \{ \Gamma_\nu,\Gamma_\rho \}=\delta_{\nu\rho},\quad \nu,\rho=1,2,...,d, \nonumber \\ && \Gamma_1^\dagger=\Gamma_1=-\Gamma^T_1, \quad \Gamma_\alpha^\dagger=\Gamma_\alpha=\Gamma_\alpha^T, \end{eqnarray} where $\Gamma_\alpha$, $\alpha=2,...,d$, are $2N\times 2N$ real symmetric Hermitian matrices while $\Gamma_1$ is a purely imaginary antisymmetric Hermitian matrix. $\Gamma_\nu$, $\nu=1,2,...,d$, all anti-commute with each other. These symmetries of the $\Gamma$ matrices are required to preserve the charge conjugation symmetry of Majorana fermions. In general, if $N'$ is the number of bands involved, we should have $N' \geq N \geq 1$. Detailed structures of $\Gamma$ matrices will be shown in the following sections for specific models. $m$ is the mass term defining the transition. At QCPs, we have $m=0$. ${ H}_I$ describes the interactions between Majorana fermions and background dynamic fluctuations represented by real scalar fields $\phi_\gamma$, $\gamma=1,2,...,Q$.
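Before specifying ${ H}_I$, it may help to see the free part of ${ H}_\text{A}$ concretely. The minimal Python sketch below is only illustrative: the $2N=4$ representation is one hypothetical choice, and we use the conventional normalization $\{\Gamma_\nu,\Gamma_\rho\}=2\delta_{\nu\rho}$ (i.e. $\Gamma_\nu^2=1$), so that no extra numerical factors appear in the dispersion. It verifies the required matrix structure and the anisotropic spectrum $E({\bf k})=\pm\sqrt{(k_1^2+m)^2+\sum_{\alpha\geq2}k_\alpha^2}$ in $d=3$:
\begin{verbatim}
import numpy as np
from itertools import combinations

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# One concrete 2N = 4 choice of anticommuting Gamma matrices:
# Gamma_1 purely imaginary and antisymmetric, Gamma_{2,3} real
# and symmetric, as required by charge conjugation symmetry.
G1, G2, G3 = np.kron(sy, s0), np.kron(sx, s0), np.kron(sz, sx)

assert np.allclose(G1, -G1.T) and np.allclose(G1, G1.conj().T)
for G in (G2, G3):
    assert np.allclose(G, G.T) and np.allclose(G.imag, 0)
for A, B in combinations((G1, G2, G3), 2):
    assert np.allclose(A @ B + B @ A, 0)  # mutual anticommutation

def H_A(k, m):
    """Free QLMFA Bloch Hamiltonian in d = 3."""
    return (k[0]**2 + m) * G1 + k[1] * G2 + k[2] * G3

k, m = (0.3, 0.2, -0.5), 0.1
E = np.sqrt((k[0]**2 + m)**2 + k[1]**2 + k[2]**2)
assert np.allclose(np.linalg.eigvalsh(H_A(k, m)), [-E, -E, E, E])
\end{verbatim}
The quadratic dependence on $k_1$ versus the linear dependence on $k_{2,3}$ is precisely what the anisotropic scale transformation introduced below encodes. We now return to the interaction ${ H}_I$.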
The coupling can be conveniently expressed as \begin{eqnarray} && { H}_I ={ H}_\phi + { H}_{\phi\chi}, \nonumber \\ && { H}_\phi=\frac12\int d^d{\bf x}\sum_\gamma [ \pi^2_\gamma + (\nabla \phi_\gamma)^2 +m_\phi^2 \phi^2_\gamma ], \nonumber \\ && { H}_{\phi\chi}=\int d^d{\bf x} \sum_{i,j,\gamma}\phi_\gamma \chi_i g^\gamma_{ij} \chi_j, \nonumber\\ && g^\gamma_{ij}=-g^\gamma_{ji}, \quad {g^\gamma_{ij}}^*=-g^\gamma_{ij}, \nonumber\\ && [\phi_\gamma({\bf x}), \phi_\zeta({\bf x'})]=0,\quad [\phi_\gamma({\bf x}), \pi_\zeta({\bf x'})] =i \delta_{\gamma\zeta} \delta({\bf x}-{\bf x'}).\nonumber\\ \end{eqnarray} Here $g^\gamma_{ij}$ are elements of $G^\gamma$, a set of purely imaginary anti-symmetric tensors that couple the Majorana fermion field $\chi_i$, $i=1,2,...,2N$, to the real field $\phi_\gamma$, $\gamma=1,2,...,Q$. $\pi_\gamma$ is the momentum field conjugate to $\phi_\gamma$. In the limit of massive scalar fields and for $2N \geq 4$, ${ H}_I$ can be further cast, in the low energy limit, into an effective form of four-fermion operators, \begin{eqnarray} && { H}_I= \int d^d{\bf x}\sum_{i,j,k,l} \chi_i \chi_j \chi_k \chi_l \Pi_{i,j,k,l}+..., \nonumber \\ && i,j,k,l=1,2,...,2N, \label{4F} \end{eqnarray} where $\Pi_{i,j,k,l}$ is an antisymmetric tensor under the interchange of any pair of indices, e.g. $\Pi_{i,j,k,l}=-\Pi_{j,i,k,l}=-\Pi_{i,k,j,l}=-\Pi_{i,j,l,k}$, etc. Only four-fermion interaction terms, which are the most relevant, are kept here; other less relevant terms are muted in the form of the ellipsis. However, for $2N=2$ the four-fermion (local) operator vanishes and minimal models must involve the dynamic fields $\phi$. In the same limit, the Hamiltonian ${ H}_\text{A}$ has an emergent scale symmetry at a QCP if one introduces the following scale transformation \begin{eqnarray} && t \rightarrow t'=\lambda^2 t, \nonumber \\ && x_1 \rightarrow {x'_1}= \lambda x_1, \nonumber \\ && x_\alpha \rightarrow {x'_\alpha} =\lambda^2 x_\alpha, \quad \alpha=2,...,d, \end{eqnarray} and the Majorana field transforms accordingly \begin{eqnarray} \chi ({\bf x}) \rightarrow \chi'({\bf x'})=\lambda^{-\eta_A} \chi({\bf x}), \end{eqnarray} with $\eta_A=d-1/2$ for a free field theory QCP. By the same token, we list the main properties of QLMFB, which can characterize the QCPs for transitions into NLPs, \begin{eqnarray} && { H}_\text{B} =\frac12\int d^d{\bf x}\chi^T [ \Gamma_1 (-\partial^2_{x_1}-\partial^2_{x_2} +m ) + \sum_{\alpha=3}^{d} \Gamma_\alpha i \partial_{x_\alpha}] \chi +{ H}_I, \nonumber \\ && \{ \Gamma_\nu,\Gamma_\rho \}=\delta_{\nu\rho},\quad \nu,\rho=1,2,..., d, \nonumber \\ && \Gamma_{1}^\dagger=\Gamma_{1}=-\Gamma^T_{1},\quad \Gamma_\alpha^\dagger=\Gamma_\alpha=\Gamma_\alpha^T, \quad \alpha=3,...,d. \end{eqnarray} The Hamiltonian is scale invariant at a QCP if one introduces the following scale transformation \begin{eqnarray} && t \rightarrow t'=\lambda^2 t, \nonumber \\ && x_{1,2} \rightarrow {x'_{1,2}}= \lambda x_{1,2}, \nonumber \\ && x_\alpha \rightarrow {x'_\alpha} =\lambda^2 x_\alpha, \quad \alpha=3,...,d, \end{eqnarray} and the Majorana field transforms accordingly \begin{eqnarray} \chi ({\bf x}) \rightarrow \chi'({\bf x'})=\lambda^{-\eta_B} \chi({\bf x}), \end{eqnarray} with $\eta_B=d-1$ for a free field theory QCP. Finally, one can also have QLMFC, which only involves one antisymmetric $\Gamma$ matrix, \begin{eqnarray} && { H}_\text{C} =\frac12\int d^d{\bf x}\chi^T \Gamma_1 (-\nabla^2 +m) \chi +{ H}_I, \nonumber \\ && \Gamma_{1}^\dagger=\Gamma_{1}=-\Gamma^T_{1}.
\end{eqnarray} The scaling property of QLMFC is identical to that of a non-relativistic field theory with Galilean invariance. However, it has the additional charge conjugation symmetry of Majorana fermions and therefore always couples to an antisymmetric, purely imaginary $\Gamma$ matrix. The Hamiltonian is scale invariant at a QCP if one introduces the following scale transformation \begin{eqnarray} && t \rightarrow t'=\lambda^2 t, \nonumber \\ && x_{\alpha} \rightarrow {x'_{\alpha}}= \lambda x_{\alpha}, \quad \alpha=1,...,d, \end{eqnarray} and the Majorana field transforms accordingly \begin{eqnarray} \chi ({\bf x}) \rightarrow \chi'({\bf x'})=\lambda^{-\eta_C} \chi({\bf x}), \end{eqnarray} with $\eta_C=d/2$ for a free field theory QCP. Finally, the four-fermion interaction $\Pi$ in Eq. (\ref{4F}) transforms as \begin{eqnarray} \Pi \rightarrow \Pi'=\lambda^{2\eta_{A,B,C}-2} \Pi. \end{eqnarray} The above equation indicates that the upper critical dimension $D_u$, at which ${ H}_I$ becomes a marginal operator, is given by the condition $2\eta_{A,B,C}=2$. So for QLMFA, QLMFB and QLMFC, $D_u=3/2$, $2$ and $2$, respectively. In contrast, Majorana fermions with Lorentz symmetry naturally appear in quantum phase transitions between topological superfluids and non-topological superfluids, with upper critical dimension $D_u=1$ \cite{Yang19}. The QLMF classes with symmetries lower than the Lorentz symmetry generally have higher upper critical dimensions. In particular, for the QLMFB and QLMFC classes, the QCPs in 2D are strongly coupled, implying conformal fields of Majorana fermions. Before ending this part of the discussion, let us comment on the subtle role of global symmetries in the construction of our QLMF theories. As all the topological transitions studied here occur in a U(1) symmetry breaking state, QCPs here always have the charge conjugation symmetry $\mathcal{C}$, which directly suggests a Majorana representation. But parity symmetry $\mathcal{P}$ or time reversal symmetry $\mathcal{T}$ is usually broken at QCPs, as transitions happen only when external exchange fields that break one or both of these two symmetries are applied. As we are mainly interested in transitions to various nodal phases that belong to symmetry protected topological (SPT) phases \cite{Chen10, Chen12, Chen13}, we have assumed that the additional global symmetries, beyond $\mathcal{T}$ or $\mathcal{P}$ symmetries, can also be present in specific quantum materials to physically protect those topological phases (see Sec. \ref{Dirac}). The effective field theories of quantum Lifshitz Majorana fermions are introduced in this particular context if the phases are protected by appropriate global symmetries and corresponding phase transitions do exist. So the number of components of Majorana fermions involved, or the dimension of $\Gamma$ matrices, further depends on the concrete global symmetries needed to define specific stable nodal phases. However, if the only global symmetry at transitions, apart from the basic charge conjugation symmetry, is $\mathcal{P}$ or $\mathcal{T}$ and a nodal phase is indeed well protected by one of these two symmetries, then QCPs shall only be described by an effective QLMF model with a definitive $N$. For instance, if a nodal phase is fully protected by $\mathcal{P}$ symmetry with $\mathcal{T}$ symmetry broken, such as in some NPPs, QCPs should be expected to possess $\mathcal{P}$ symmetry only. The transition to such an NPP, if it occurs, should be described by QLMF models with $N=1$ only.
The QCPs of $N=1$ QLMFs represent stable gapless states as long as $\mathcal{P}$ symmetry is present. In other words, QCPs of $N=1$ are protected by the global symmetry $\mathcal{P}$. In the same context where only $\mathcal P$ symmetry is present, quantum critical states implied by QLMF models with $N\geq2$ do exist, but they are expected to be unstable and their existence requires further fine tuning of multiple relevant terms. In the presence of those relevant $\mathcal P$ symmetric perturbations, we anticipate that QCPs with $N\geq2$ collapse to the universality of $N=1$ QLMFs, and this defines the universality classes of transitions with both $\mathcal C$ and $\mathcal P$ symmetries but with $\mathcal T$ symmetry broken. This aspect is consistent with the general consensus that universality only depends on symmetries and is independent of representations of a symmetry group. If the only global symmetries are $\mathcal C$ and $\mathcal P$, without other protecting symmetries QLMF models with $N\geq2$ are better considered as appropriate models for topological {\it multicritical} points rather than conventional QCPs. The other possibility is that the gapless states in $N\geq2$ QLMF models are fully gapped in the presence of relevant perturbations, but this perhaps further implies that the corresponding nodal phases no longer exist and there are no transitions at all. In the rest of the discussion, however, we will always assume the gapless nodal phases involved are sufficiently protected by additional global symmetries and so the transitions have to occur. The same global symmetries should also be naturally present at QCPs to be consistent. This is encoded implicitly in the dimension $2N\geq4$, as well as in the structure of anticommuting $\Gamma$ matrices. From this point of view, we will simply treat QLMFs with general $N$, $N\geq2$, as different QCPs in the presence of different topologically protecting symmetries in quantum materials. We will come back to this issue when discussing concrete models. \subsection{General Scaling Properties} The emergent QLMFs imply unique scaling properties at the QCPs and determine the order of phase transitions. For ${ H}_\text{A,B,C}$ defined up to an ultra-violet (UV) {\it energy} scale $\Lambda$, the mass and temperature dependence of the grand potential $\Omega$ can always be expressed as \begin{eqnarray} && \frac{\Omega}{V} =\Lambda^{\eta_0+1} \tilde{\Omega}( \tilde{m}, \tilde{{ G}}; \tilde{T}), \end{eqnarray} where $V$ is the volume of the system, and $\eta_0$ takes the values of $\eta_{A,B,C}$ for free fields of QLMFA, QLMFB, and QLMFC, respectively. $\tilde{m}=m /\Lambda$, $\tilde{T}=T/\Lambda$, and $\tilde{{ G}}={ G} \Lambda^{\eta_0-1}$ are the dimensionless mass, temperature, and interaction tensor, respectively. As the critical physics given by the infrared behavior of the grand potential should not depend on the UV scale of the effective theory, we can set either $|\tilde{m}|=1$ or $\tilde{T}=1$, leading to scaling properties. We further associate a fixed point under the scale transformation to a QCP by setting $\tilde{{ G}}={ G}^*$, independent of the UV scale $\Lambda$. Below the upper critical dimension of QLMFs, a QCP is a strong coupling fixed point with $G^*$ only dependent on the spatial dimensionality; above the upper critical dimension, the QCPs are described by free theories with ${ G}^*=0$.
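For bookkeeping, the free-field exponents quoted above can be collected in a few lines of sympy. This is only a sketch that restates the scaling $\Pi\to\lambda^{2\eta_0-2}\Pi$ and the values $\eta_{A,B,C}$ derived earlier; it also tabulates the leading non-analytic power $\eta_0+1$ in $d=3$ that is obtained from the scaling argument below:
\begin{verbatim}
import sympy as sp

d = sp.symbols('d', positive=True)

# Free-field Majorana scaling dimensions for the three classes
eta = {'QLMFA': d - sp.Rational(1, 2), 'QLMFB': d - 1, 'QLMFC': d / 2}

for name, e in eta.items():
    # Four-fermion coupling scales as lambda**(2*eta - 2):
    # marginal when the exponent vanishes -> upper critical dimension
    D_u = sp.solve(sp.Eq(2 * e - 2, 0), d)[0]
    # Leading non-analytic power of the grand potential: eta_0 + 1
    power_3d = (e + 1).subs(d, 3)
    print(f'{name}: D_u = {D_u}, Omega ~ |m|^({power_3d}) in d = 3')
# QLMFA: D_u = 3/2, |m|^(7/2); QLMFB: 2, |m|^3; QLMFC: 2, |m|^(5/2)
\end{verbatim}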
In either case, the above scaling argument indicates that at $T=0$ and near a QCP, the leading non-analytical term of $\Omega$ is given by \begin{eqnarray} \frac{\Omega^\text{NA}}{V} =|m|^{\eta_0+1} \tilde{\Omega}( \text{sgn}(m), { G}^*; 0). \end{eqnarray} Note that $\tilde{\Omega}$ is a constant of order unity but may further carry a logarithmic dependence on $|m|$ at the upper critical dimension. The scaling exponents $\eta_0+1$ are universal and independent of details of the topological states involved. For instance, for QCPs associated with the free theory of QLMFA, $\eta_0+1=d+1/2$; for 3D TSCs and TSFs, the transition can thus be named as $7/2$th order. For QLMFB, $\eta_0+1=d$, and in 3D these QCPs are of the third order. For QLMFC, $\eta_0+1=d/2+1$, and in 3D the QCPs coincide with the well-known $5/2$th order Lifshitz transitions \cite{Lifshitz60}. Although the above analyses of thermodynamics are applicable to QCPs characterized by either free Majorana fermions or strongly interacting conformal fields of Majorana fermions, the dynamics, such as transport phenomena and hydrodynamics, strongly depend on whether the QCPs are strong coupling fixed points or not. For this reason, the upper critical dimensions computed in the previous subsection are important. We have shown that the upper critical dimensions for different QLMFs are \begin{eqnarray} &&D_u=\left\{ \begin{array}{cc} 3/2, & \mbox{QLMFA}; \\ 2, & \mbox{QLMFB};\\ 2, & \mbox{QLMFC}. \end{array} \right. \end{eqnarray} Below or at the upper critical dimensions, one should expect the emergence of strong coupling Majorana fixed points, which are analogues of the Wilson-Fisher fixed point in the more conventional order-disorder phase transitions in the standard Landau paradigm \cite{Peshkin}. At finite temperature and in the quantum critical regime where $T \gg |m|$, thermal and other properties are also dominated by these QCPs. For instance, \begin{eqnarray} \frac{\Omega}{V} =T^{\eta_0+1} \tilde{\Omega}(0, { G}^*; 1) +... \end{eqnarray} where, as one can easily see, the same set of indices appears in the thermal properties near QCPs. $\Omega$ discussed above is a non-analytical function of $m$ only at $T=0$, as a result of QCPs. For QLMFB and QLMFC in $d=2$, the interactions are marginally relevant and further induce a logarithmic singularity in $m$, which indicates (2+1)D conformal field theory fixed points. However, at any finite temperature, $\Omega$ turns out to be a smooth function of $m$, signifying quantum criticality. Finally, let us also contrast the discussions above with transitions between topological and non-topological superfluids. Those transitions are described by relativistic Majorana fields with an emergent Lorentz symmetry. The transition there is always of $(d+1)$th order, and the corresponding index $\eta_0+1=d+1$ differs from that of all the QCPs discussed here \cite{Yang19}. In the following sections, we will illustrate these emergent QLMFs at a variety of QCPs in TSFs and TSCs where nodal phases appear. \subsection{Surface Quantum Criticality} 3D TSFs and TSCs support gapless helical states on their surfaces. Opposite surfaces carry states with opposite handedness that are well separated by the bulk and are effectively decoupled. Topological surface states can respond to TRB fields in dramatic ways by opening up a gap at any finite field. When this happens, the surface states can be thought of as {\em quantum critical} with respect to these TRB fields.
Since the topological states are robust against TRI fields, such surface quantum criticality is unique to certain TRB fields, although not all TRB fields result in gapped surface states immediately. For those TRB fields which do lead to surface quantum criticality near zero field and well before a possible bulk phase transition, surface quantum criticality can also be thought of as a precursor to the later bulk transition. The effective Hamiltonian for surface criticality can be cast into a simple form of \begin{eqnarray}\label{SQCP} &&{ H}_\text{surf}=\frac12\int d^d{\bf x}\chi^T (s_x i\partial_z -s_z i\partial_x+m s_y ) \chi +{ H}_\text{SI}, \nonumber\\ &&\begin{split} { H}_\text{SI}=&\frac12\int d^d{\bf x}(\pi^2 +\nabla \phi \cdot \nabla \phi +m_\phi^2 \phi^2)\\ & +g_Y\int d^d{\bf x}\phi \chi^T s_y \chi, \end{split} \end{eqnarray} where the $s_\alpha$'s are Pauli matrices in spin space, $\chi^T=(\chi_1,\chi_2)$ is a two-component Majorana field, and ${ H}_\text{SI}$ describes the interactions between the $\chi$ field and a real scalar field $\phi=\phi^\dagger$ in a Yukawa form. Note that because $\chi$ only has two components, the four-fermion operator vanishes in the case of surface Majorana fermions and $\chi$ can only interact with the dynamic field $\phi$. On the 2D surface when $m_\phi$ is finite, there is only a free field theory fixed point with $g_Y=0$. The surface criticality at $T=0$ is given by the following cusp structure in the grand potential, similar to the discussions on QCPs in 2D, \begin{eqnarray} \frac{\Omega_\text{surf}^\text{NA}}{S}=|m|^3 \tilde{\Omega}_s(\text{sgn}(m),0;0) \label{SQC} \end{eqnarray} where $S$ is the total surface area and $\tilde{\Omega}_s(\tilde{m}, \tilde{g}_Y; \tilde{T})$ is a general function of the dimensionless mass $\tilde m$, interaction constant $\tilde g_Y$, and $\tilde T$. For surface criticality, we further have \begin{equation} \tilde{\Omega}_s(\text{sgn}(m),0; 0)=\tilde{\Omega}_s(1, 0; 0), \end{equation} since the energy spectrum is symmetric for $\pm m$. Remarkably, following Eq. (\ref{SQC}) where $m$ represents Zeeman fields, the non-analytical part of the surface spin susceptibility is given by \begin{eqnarray} \chi_M^\text{NA} =-6|m| \tilde{\Omega}(1,0;0), \end{eqnarray} which is even in $m$ but varies linearly as a function of $|m|$. This is in contrast to the more conventional cases where $\chi_M$ can usually be expanded as an even analytical function of $m$ as $\chi_M\sim\chi_M^{(0)}+\chi_M^{(1)}m^2+...$. At any finite $T$, the susceptibility is analytical and scales as \begin{eqnarray} \chi_M \sim T, \end{eqnarray} as $m$ approaches zero. In the limit $m_\phi\to0$, there can be further emergent supersymmetries in addition to scale symmetries. We do not pursue this topic in the current article. For examples of emergent supersymmetries in condensed matter systems, we refer readers to Refs. \cite{Lee07,Grover14,Yue10,Zerf16,Li17,Jian17}. In the following sections, we will illustrate the realization of these QCPs and universality classes in a few different TSFs and TSCs. In the concrete models we examine below, we intend to demonstrate explicitly the physical parameters that one can vary to drive quantum transitions leading to the emergence of QCPs described in the general phenomenology. We will also connect the effective QLMFs to microscopic Majorana quasi-particles in topological states and explore the physical consequences both at $T=0$ and in finite temperature quantum critical regimes.
Specifically, we will identify: (1) a few relevant fields that can lead to potential observation of QLMF physics in TSFs/TSCs; (2) detailed structures of anti-commuting $\Gamma$ matrices, which effectively define QLMFs in concrete topological states, and the corresponding projection operators that lead to the effective field theory description near QCPs. In particular, we discuss three physical models where QLMFs can potentially emerge: (1) topological $p$-wave superfluids in both 3D and 2D in Sec. \ref{pwave}; (2) topological odd-parity pairing states in Dirac semimetals in Sec. \ref{Dirac}; and (3) the Fu-Berg model for Cu$_x$Bi$_2$Se$_3$ in Sec. \ref{CBS}. \section{Topological QCP\lowercase{s} in $p$-wave superfluid model at zero temperature}\label{pwave} \subsection{The model}\label{pwavemodel} The simplest one-band model that supports fully gapped topological states is perhaps the 2D $p+ip$ superfluid of spinless fermions. However, without breaking charge conjugation symmetry, the only phase transition in this model happens between two fully gapped phases. The phase transition is driven by chemical potential or interactions, and the effective Hamiltonian near the critical point is Lorentz invariant \cite{Yang19}. A minimal model that hosts QLMF QCPs is a topological $p$-wave superfluid involving two bands labeled by spin indices. Let us define Majorana operators \begin{eqnarray} && \chi_{+, s} ({\bf r}) =\frac{1}{\sqrt{2}}\left(c_ s({\bf r})+c^\dagger_ s({\bf r})\right), \nonumber \\ && \chi_{-, s} ({\bf r})=\frac{1}{i\sqrt{2}}\left(c_ s({\bf r})-c^\dagger_ s({\bf r})\right), \end{eqnarray} where $ s=\uparrow,\downarrow$ is the spin index, and $c_ s$ and $c_ s^\dagger$ are conventional complex fermionic operators. In analogy to Nambu spinors, we define \begin{eqnarray} \chi_{\bf k}=(\chi_{+,\uparrow}({\bf k}), \chi_{+,\downarrow}({\bf k}), \chi_{-,\uparrow}({\bf k}), \chi_{-,\downarrow}({\bf k}))^T. \end{eqnarray} Notice that in momentum space, the anticommutation relation of Majorana fermions becomes \begin{align} \{\chi_i({\bf k}),\chi_j({\bf k'})\}=\delta({\bf k}+{\bf k'})\delta_{ij}, &&\text{with} && \chi_{\bf k}^\dagger=\chi_{-\bf k}^T. \end{align} We start with a TRI $p$-wave superfluid with order parameter $\Delta_p({\bf k})=v{\bf s}\cdot{\bf k}(i s_y)$. Here we fix the gauge such that $v>0$. This choice of order parameter corresponds to a superfluid phase with an isotropic gap. In the Majorana representation, the Hamiltonian can be written as \begin{equation}\label{Hamiltonian} H=\frac12\sum_{\bf k}\chi_{-\bf k}^T\mathcal{H}({\bf k})\chi_{\bf k}+{H}_I, \end{equation} where \begin{equation}\label{Hp} \mathcal{H}({\bf k})=v(-\tau_z\otimes s_z k_x+\tau_x\otimes I k_y+\tau_z\otimes s_x k_z)-\tau_y\otimes I(\epsilon_k-\mu). \end{equation} Here $\epsilon_k=k^2/2$, $\mu$ is the chemical potential, the $\tau_\alpha$'s are Pauli matrices in the $\chi_+$, $\chi_-$ Majorana space, the $s_\alpha$'s are Pauli matrices in spin space, and $I$ is the $2\times2$ identity matrix. As a result of charge conjugation symmetry, all terms of odd powers of ${\bf k}$ are coupled to real matrices and all terms of even powers of ${\bf k}$ are coupled to purely imaginary matrices in $\mathcal{H}({\bf k})$. All interactions are included in ${H}_I$; they are irrelevant operators for the transitions and are muted for most discussions in 3D. It is well-known that topological phase transitions happen at $\mu=0$ between a fully gapped topological phase with $\mu>0$ and a non-topological phase with $\mu<0$ \cite{Qi09}.
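As a quick numerical consistency check on Eq. (\ref{Hp}), the minimal sketch below (assuming standard Pauli matrices, with $v$ set to $1$ and arbitrary illustrative parameter values) verifies the doubly degenerate isotropic spectrum $E_{\bf k}=\pm\sqrt{v^2k^2+(\epsilon_k-\mu)^2}$ and the gap closing at the transition point $\mu=0$, ${\bf k}=0$:
\begin{verbatim}
import numpy as np

I2 = np.eye(2, dtype=complex)
px = np.array([[0, 1], [1, 0]], dtype=complex)
py = np.array([[0, -1j], [1j, 0]], dtype=complex)
pz = np.array([[1, 0], [0, -1]], dtype=complex)
# tau (Majorana space) and s (spin space) are both Pauli matrices

def H(k, mu, v=1.0):
    kx, ky, kz = k
    eps = 0.5 * (kx**2 + ky**2 + kz**2)
    return (v * (-kx * np.kron(pz, pz) + ky * np.kron(px, I2)
                 + kz * np.kron(pz, px))
            - (eps - mu) * np.kron(py, I2))

k, mu = (0.4, -0.1, 0.7), 0.3
eps = 0.5 * sum(q * q for q in k)
E = np.sqrt(sum(q * q for q in k) + (eps - mu)**2)
assert np.allclose(np.linalg.eigvalsh(H(k, mu)), [-E, -E, E, E])

# Gap closes only at the topological transition mu = 0, k = 0
assert np.allclose(np.linalg.eigvalsh(H((0, 0, 0), 0.0)), 0)
\end{verbatim}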
The topological phase is protected by TRS with robust gapless helical states on the surface. In our previous work (see Ref. \cite{Yang19}), we have identified that these transitions are described by a Lorentz invariant free Majorana field theory near the critical point. If the symmetry allows other mass fields such as $s$-wave pairing or spin exchange, it is possible to have other phases with different topology. Let us classify all possible mass fields by charge conjugation symmetry $\mathcal C$, time-reversal symmetry $\mathcal T$, and parity symmetry $\mathcal P$. Under these symmetries, the Hamiltonian transforms as \begin{equation} \mathcal T\mathcal H({\bf k})\mathcal T^{-1}=\mathcal H(-{\bf k}),\qquad \mathcal{T}^2=-1, \end{equation} \begin{equation} \mathcal C\mathcal H({\bf k})\mathcal C^{-1}=-\mathcal H({-\bf k}),\qquad \mathcal{C}^2=1, \end{equation} \begin{equation} \mathcal P\mathcal H({\bf k})\mathcal P^{-1}=\mathcal H(-{\bf k}),\qquad \mathcal P^2=1. \end{equation} In the Majorana representation, we have $\mathcal T=\tau_z\otimes(i s_y) K$ and $\mathcal C=K$, with $K$ being the complex conjugation operator. For the $p$-wave superfluid model, we have $\mathcal P=\tau_y$. For Majorana fermions, all mass fields must be purely imaginary and antisymmetric to preserve charge conjugation symmetry. In total, there are six possible mass fields. Among them, $\tau_y\otimes I$ has been associated with the chemical potential. This leaves us with five other fields that can be attributed to two different types of physics: (1) $s$-wave pairing $\tau_z\otimes s_y$ and $\tau_x\otimes s_y$; (2) the Zeeman field term $-{\bf S}\cdot{\bf B}$, which can be written in Cartesian components: $\tau_y\otimes s_x B_x$, $-I\otimes s_y B_y$, and $\tau_y\otimes s_z B_z$, with ${\bf S}$ defined as \begin{equation} {\bf S}=(-\tau_y\otimes s_x, I\otimes s_y, -\tau_y\otimes s_z). \end{equation} The additional $s$-wave pairings will lead to transitions or crossovers to fully gapped non-topological superconducting states. These transitions or crossovers are not described by QLMFs. For this reason, we only include them in Appendix \ref{sp}. In contrast, Zeeman fields will lead to NPPs and QCPs belonging to the QLMFA universality class. These NPPs are protected by the parity symmetry $\mathcal P$. In the following, we will discuss the effect of Zeeman fields in detail. \subsection{Phase diagram and phase transitions due to Zeeman field at $T=0$}\label{EFT} Being charge neutral, Majorana fields do not directly couple to the gauge potential of magnetic fields as electrons do. But they can couple to Zeeman fields through spin. These Zeeman fields can be either external fields or due to internal spin exchange. The presence of Zeeman fields in superfluids has two major effects. First, TRS is broken. Therefore, we would expect changes in topology even when the field is weak. A direct consequence of this is the response of surface modes, which will be discussed in Sec. \ref{surface}. Second, a Zeeman field defines a preferred direction for spins, which breaks the isotropy of the spectrum. This anisotropy eventually leads to NPPs when the Zeeman field is strong enough. We first obtain the phase diagram by examining the bulk spectrum in the presence of Zeeman fields.
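As an aside, the counting of six mass fields above is easy to verify numerically. The sketch below (again assuming the standard Pauli representation) enumerates all sixteen products $\tau_\mu\otimes s_\nu$ and keeps exactly the purely imaginary, antisymmetric ones allowed by charge conjugation:
\begin{verbatim}
import numpy as np
from itertools import product

pauli = {'0': np.eye(2, dtype=complex),
         'x': np.array([[0, 1], [1, 0]], dtype=complex),
         'y': np.array([[0, -1j], [1j, 0]], dtype=complex),
         'z': np.array([[1, 0], [0, -1]], dtype=complex)}

# Charge conjugation (C = K) only allows purely imaginary,
# antisymmetric (hence Hermitian) mass matrices.
masses = [f'tau_{a} (x) s_{b}' for a, b in product('0xyz', repeat=2)
          if np.allclose(np.kron(pauli[a], pauli[b]).real, 0)
          and np.allclose(np.kron(pauli[a], pauli[b]),
                          -np.kron(pauli[a], pauli[b]).T)]
print(len(masses), masses)
# 6: tau_0 (x) s_y, tau_x (x) s_y, tau_y (x) s_0,
#    tau_y (x) s_x, tau_y (x) s_z, tau_z (x) s_y
\end{verbatim}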
For a Zeeman field ${\bf B}$ along an arbitrary direction, the Zeeman Hamiltonian is \begin{equation} H_\text{ZM}=-\frac12\sum_{\bf k}\chi^T_{-{\bf k}}({\bf S}\cdot{\bf B})\chi_{\bf k}, \end{equation} which leads to an anisotropic spectrum \begin{equation} E_{\bf k}^{(\pm)}=\sqrt{v^2{\bf k}^2_\perp+(\sqrt{v^2k_\parallel^2+(\epsilon_k-\mu)^2}\pm B)^2}. \end{equation} Here, $k_\parallel$ (${\bf k}_\perp$) is the momentum parallel (perpendicular) to ${\bf B}$, and $B=|{\bf B}|$. The Zeeman field lifts the spin degeneracy, resulting in two non-degenerate energy bands labeled by the superscript. The bulk gap closes at point nodes in the lower bands $\pm E_{\bf k}^{(-)}$ at ${\bf k_\perp}=0$ and $v^2k_\parallel^2+(k^2_\parallel/2-\mu)^2=B^2$. The number of point nodes in the spectrum depends on the ratio $v^2/\mu$ (Fig. \ref{phase}). In the $p$-wave superfluid model, $v$ is proportional to the pairing amplitude $\Delta_p({\bf k})$, which further depends on the coupling strength between fermions. Therefore, we need to consider strong ($v^2>\mu$) and weak ($v^2<\mu$) couplings separately. (1) On the weak coupling side $v^2<\mu$, the chemical potential is always positive. The TRI $p$-wave superfluid is topological in the absence of a Zeeman field. For given $\mu$, the bulk spectrum suggests two transitions as $B$ increases. The first one happens at $B_c=\sqrt{\mu^2-(\mu-v^2)^2}$ between a gapped phase and an NPP. At the transition $B=B_c$, the bulk gap closes at the two minima of the lower band at ${\bf k_\perp}=0$, $k_\parallel=\pm\sqrt{2(\mu-v^2)}$. For weak field $B<B_c$, the bulk is fully gapped. For intermediate field $B_c<B<\mu$, the bulk is gapless with four point nodes at ${\bf k_\perp}=0$, $k_\parallel=\pm\sqrt{2[(\mu-v^2)\pm\sqrt{(v^2-\mu)^2+(B^2-\mu^2)}]}$. As the Zeeman field increases further, a second transition happens at $B=\mu$ between two NPPs with different numbers of point nodes. At this transition, the two point nodes in the middle merge into one at ${\bf k}=0$ and then annihilate each other. The other two point nodes persist. For strong field $B>\mu$, the bulk has only two point nodes at ${\bf k_\perp}=0$, $k_\parallel=\pm\sqrt{2[(\mu-v^2)+\sqrt{(v^2-\mu)^2+(B^2-\mu^2)}]}$. The topological quantum phase transitions between the gapped phase and the NPP in the weak coupling limit are driven by the deformation of the Fermi surface and appear to fall outside the effective field theories listed in Sec. \ref{general}. For this reason, we only discuss these transitions in Appendix \ref{weaktransitions}. (2) On the strong coupling side $v^2>\mu$, $\mu$ can be either positive or negative. For given $\mu$, as $B$ increases there is only one phase transition between a gapped phase and an NPP at $B=|\mu|$. For $B>|\mu|$, the bulk is gapless with two point nodes at ${\bf k}_\perp=0$, $k_\parallel=\pm\sqrt{2[(\mu-v^2)+\sqrt{(v^2-\mu)^2+(B^2-\mu^2)}]}$. The same NPP exists for both positive and negative $\mu$. For $B<|\mu|$, the bulk is fully gapped. Note that the TRI topological state with $B=0$ can be smoothly connected to the gapped states with $\mu>B>0$. In the following, we focus on the phase transitions on the strong coupling side with $\mu>0$. We illustrate the bulk spectra and phase diagrams for both strong and weak coupling cases in Fig. \ref{phase}. The NPPs induced by the Zeeman field are protected by the parity symmetry $\mathcal P=\tau_y$. \begin{figure*} \includegraphics[width=\textwidth]{phase} \caption{Phase diagrams and bulk spectra of 3D $p$-wave superfluids in Zeeman fields.
(a) Phase diagram on the strong coupling side $v^2>\mu$. Without loss of generality, we choose the Zeeman field to be along the $y$-direction, ${\bf B}=B_y{\bf \hat y}$ and $B=|{\bf B}|$. Superfluids are gapped in the shaded area $B<|\mu|$ and gapless with two point nodes in the unshaded area. In the nodal point phase (NPP), the number of point nodes is indicated by the superscript. (b) Phase diagram on the weak coupling side $v^2<\mu$, $\mu>0$. Superfluids are gapped in the shaded area $B< B_c$ and gapless with point nodes in the unshaded area. The dashed lines at $\mu=|B|$ represent the transitions between two different NPPs: the NPP with four point nodes (NPP$^4$) for $B_c<B<\mu$ and the NPP with two point nodes (NPP$^2$) for $B>\mu$. (c) Bulk spectrum on the strong coupling side $v^2>\mu$. (d) Bulk spectrum on the weak coupling side $v^2<\mu$. We set ${\bf k}_\perp=0$ and only plot the lower energy bands $\pm E_{\bf k}^{(-)}$ in both (c) and (d). The higher energy bands $\pm E_{\bf k}^{(+)}$ are always gapped (see text). \label{phase}} \end{figure*} \subsection{Effective field theory for QLMFA transitions} For the purpose of studying phase transitions, we can use a low energy effective field theory. The effective field theory construction relies on the separation between low energy and high energy degrees of freedom. This approach focuses entirely on the low energy degrees of freedom after the high energy components are integrated out. Therefore, the correct effective field theory should include various important renormalization effects due to couplings between high energy and low energy degrees of freedom. In the following, we construct the effective field theory explicitly for $\mu>0$ in the strong coupling limit and show that it belongs to the QLMFA universality class. Without loss of generality, we choose the Zeeman field to be along the $y$-direction. The topological phase transition happens at $B_y=\mu>0$. In the strong coupling limit $v^2\gg\mu$, we can construct the low energy effective field theory by first projecting the Hamiltonian onto the low energy subspace using the projection operator \begin{equation} P=P^\tau_{y,+}P^s_{y,+}+P^\tau_{y,-}P^s_{y,-}, \end{equation} where \begin{equation} P^\tau_{\alpha,\pm}=\frac{I\pm\tau_\alpha}2, \qquad P^ s_{\alpha,\pm}=\frac{I\pm s_\alpha}2. \end{equation} Near the phase transition, the projected Hamiltonian can be written as \begin{equation}\label{strongproj} \mathcal{H}_\text{proj}({\bf k})=\Gamma_y\Big(\mu-B_y-\epsilon_k\Big)+\Gamma_xvk_x+\Gamma_zvk_z, \end{equation} where \begin{align} &&\Gamma_x=P(-\tau_z\otimes s_z) P, && \Gamma_z=P(\tau_z\otimes s_x) P,\nonumber\\ &&\Gamma_y=P(I\otimes s_y)P. \end{align} The projected Hamiltonian is incomplete for the effective field theory, as we also need to include the renormalization effect due to the couplings between high energy and low energy degrees of freedom. After integrating out the high energy degrees of freedom, the leading order effect of these couplings can be written as \begin{equation} \mathcal H^{(2)}({\bf k})=\frac{v^2}{2\mu} k_y^2\Gamma_y. \end{equation} Combining $\mathcal H_\text{proj}({\bf k})$ and $\mathcal H^{(2)}({\bf k})$ and keeping the leading order term of each momentum component, we find the effective Hamiltonian \begin{multline}\label{strongeff} {H}_\text{eff}=\frac12\sum_{\bf k}\chi^T_{\bf -k}\Bigg[\Gamma_y\Big(\mu-B_y+\frac{v^2-\mu}{2\mu}k_y^2\Big)\\ +\Gamma_xvk_x+\Gamma_zvk_z\Bigg]\chi_{\bf k}, \end{multline} where we have dropped the irrelevant interactions.
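As a sanity check on Eq. (\ref{strongeff}), the short sketch below (with hypothetical parameter values; the agreement improves as $v^2/\mu$ grows) compares the dispersion of the effective Hamiltonian with the exact lower band $E^{(-)}_{\bf k}$ near a point node:
\begin{verbatim}
import numpy as np

v, mu, B = 3.0, 0.5, 0.6   # strong coupling v**2 >> mu, just past B = mu

def E_exact(kx, ky, kz):
    eps = 0.5 * (kx**2 + ky**2 + kz**2)
    return np.sqrt(v**2 * (kx**2 + kz**2)
                   + (np.sqrt(v**2 * ky**2 + (eps - mu)**2) - B)**2)

def E_eff(kx, ky, kz):     # spectrum of the projected Hamiltonian
    return np.sqrt(v**2 * (kx**2 + kz**2)
                   + ((mu - B) + (v**2 - mu) / (2 * mu) * ky**2)**2)

k0 = np.sqrt(2 * mu * (B - mu) / (v**2 - mu))  # effective-theory node
assert np.isclose(E_eff(0.0, k0, 0.0), 0.0)

for dk in (0.01, 0.02, 0.05):                  # agreement near the node
    print(f'dk={dk}: exact={E_exact(dk, k0 + dk, dk):.4f}, '
          f'effective={E_eff(dk, k0 + dk, dk):.4f}')
\end{verbatim}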
This effective Hamiltonian belongs to the universality class of QLMFA. One can recover the phenomenologically constructed Hamiltonian (\ref{HA}) by identifying $m=2\mu(\mu-B_y)/(v^2-\mu)$ and absorbing the coefficients in front of $k_\alpha$ into $\Gamma_\alpha$. In the strong coupling limit, this low energy effective Hamiltonian is valid for $E_{\bf k}^{(-)}\ll\mu $. \subsection{Order of phase transitions} The order of these phase transitions can be computed directly from the zero temperature grand potential \begin{equation} \Omega_0=-\frac12\sum_{{\bf k},i}E_{\bf k}^{(i)}, \end{equation} where $i$ labels quantum numbers such as spin, orbital, etc. In the strong coupling limit, the energy spectrum of the effective Hamiltonian is \begin{equation}\label{effE} E^\text{eff}_{\bf k}=\sqrt{v^2(k_x^2+k_z^2)+\Big[(\mu- B_y)+\frac{v^2-\mu}{2 \mu}k _y^2\Big]^2}. \end{equation} Near the phase transitions, the grand potential at $T=0$ is non-analytical, with the leading non-analytical term \begin{equation} \frac{\Omega_0^\text{NA}}V=\frac8{105\pi^2v^2}|B-\mu|^{7/2}\sqrt{\frac{2\mu}{v^2-\mu}}\theta(B-\mu), \end{equation} where $\theta(\cdot)$ is the step function. Therefore, the phase transitions can be named as $7/2$th order, which agrees with the result we obtained using scaling analyses in Sec. \ref{general}. \subsection{Surface states in Zeeman fields}\label{surface} To illustrate the change of topology, we study how surface modes respond to Zeeman fields. We use a cubic geometry such that the superfluids fill the space $x_0<x<0$, $y_0<y<0$, $z_0<z<0$. It is vacuum in the rest of the space, which can be modeled by setting the chemical potential to $\mu\to-\infty$. We focus on the strong coupling side $v^2>\mu$. \subsubsection{Weak Zeeman fields} We first examine the surface states in the gapped phase $B<\mu$, $\mu>0$. Let us first consider the surface at $y=0$. Without a Zeeman field, there exist gapless helical surface states, \begin{gather} \phi^y_1=\mathcal{N}\begin{pmatrix}1\\ 0\end{pmatrix}_\tau\otimes\begin{pmatrix}\sin({\theta_y}/2)\\\cos(\theta_y/2)\end{pmatrix}_ s e^{\frac1{v}\int_0^y\mu dy}e^{i{\bf k}_\perp\cdot{\bf r}}, \end{gather} \begin{gather} \phi^y_2=\mathcal{N}\begin{pmatrix}1\\ 0\end{pmatrix}_\tau\otimes\begin{pmatrix}\cos({\theta_y}/2)\\ -\sin(\theta_y/2)\end{pmatrix}_ s e^{\frac1{v}\int_0^y\mu dy}e^{i{\bf k}_\perp\cdot{\bf r}}, \end{gather} where $\cot\theta_y=k_x/k_z$ and $\mathcal{N}$ is a normalization factor. The surface Hamiltonian is gapless, \begin{equation} H^{(B=0)}_\text{surf}=\frac12\sum_{\bf k}\psi^T_{-{\bf k},y}(-vk_x s_z+vk_z s_x)\psi_{{\bf k},y}, \end{equation} where $\psi_{{\bf k},y}=P^\tau_{z,+}\chi_{\bf k}$ is the Majorana operator on this surface. In the presence of a Zeeman field, it is convenient to define an effective chemical potential at ${\bf k}=0$, \begin{equation}\label{mueff} \mu_\text{eff}^{(\pm)}({\bf k}=0)=\mu\pm B, \end{equation} where the superscript $\pm$ corresponds to the two energy bands $E^{(\pm)}({\bf k})$. In the fully gapped phase where $0<B<\mu$, $\mu^{(\pm)}_\text{eff}({\bf k}=0)$ is positive for both bands. Thus, both bands can support surface states. However, the surface states can be gapped by the Zeeman field due to broken TRS. For a weak Zeeman field ${\bf B}$ along an arbitrary direction, the surface Hamiltonian for $y=0$ becomes \begin{equation}\label{surfaceH} H_\text{surf}=\frac12\sum_{\bf k}\psi^T_{-{\bf k},y}(-vk_x s_z+vk_z s_x-B_y s_y)\psi_{{\bf k},y}, \end{equation} to linear order in $B$.
In this linear approximation, the surface modes are gapped by the Zeeman field {\it perpendicular} to this surface but not by the field parallel to it. In other words, surfaces perpendicular to the Zeeman field become gapped, while the helical Majorana modes on surfaces parallel to the field remain gapless in this approximation (Fig. \ref{3D}). This result agrees with the discussions on superfluid ${}^3$He-B in weak magnetic fields in Refs. \cite{Nagato09,Chung09}. \subsubsection{Strong Zeeman fields} Next, we examine the surface states in the NPP with strong Zeeman field $B>|\mu|$. Note that in the NPP, $\mu$ can be either positive or negative. Without loss of generality, let us consider Zeeman fields along the $y$-direction. In the NPP, we have a gapped band $E^{(+)}$ and a nodal band $E^{(-)}$. The effective chemical potential $\mu_\text{eff}^{(+)}({\bf k}=0)$ for the gapped band is still positive. Therefore, the gapped band can still support surface states. For the nodal band, we can define a generalized momentum-dependent effective chemical potential $\mu_\text{eff}^{(-)}(k_y)$. This effective chemical potential changes sign in momentum space. Let us label the point nodes as $(0,\pm k_0,0)$, $k_0>0$. The effective chemical potential $\mu_\text{eff}^{(-)}(k_y)$ is positive (negative) for $|k_y|>k_0$ ($|k_y|<k_0$). The sign of $\mu_\text{eff}^{(-)}(k_y)$ can be deduced from the effective Hamiltonian (\ref{strongeff}). For any given $k_y$, the effective Hamiltonian (\ref{strongeff}) describes a 2D chiral superfluid in the $xz$-plane with Hamiltonian $\mathcal{H}^\text{eff}_{k_y}(k_x,k_z)=\mathcal{H}_\text{eff}({\bf k})$ and effective chemical potential $\mu_\text{eff}^{(-)}(k_y)$. Near the transition and near the point nodes, we have \begin{equation} \mu_\text{eff}^{(-)}(k_y)\approx\mu-B_y+\frac{v^2}{2\mu}k_y^2. \end{equation} Here, we have taken the strong coupling limit $v^2\gg\mu$. In the same limit, we also have \begin{equation} k_0^2\approx2\mu(B_y-\mu)/v^2. \end{equation} Therefore, $\mu_\text{eff}^{(-)}(k_y)$ is positive (negative) if $k_y^2>k_0^2$ ($k_y^2<k_0^2$) near the point nodes. Away from these nodes, the band gap of the effective 2D Hamiltonian $\mathcal{H}^\text{eff}_{k_y}(k_x,k_z)$ is given by $\mu_\text{eff}^{(-)}(k_y)$. Since the band gap only closes at $k_y^2=k_0^2$, the sign of $\mu_\text{eff}^{(-)}(k_y)$ does not change except at $k_y=\pm k_0$. The sign change at $k_y=\pm k_0$ suggests domain wall structures of $\mu_\text{eff}^{(-)}(k_y)$ in momentum space. It is well-known that domain wall structures of $\mu$ in real space signal a change of topology across the surface. Similarly, the domain wall structures of $\mu_\text{eff}^{(-)}(k_y)$ in the momentum space signal a change of topology in the momentum space across a surface perpendicular to $k_y$ and containing the point nodes. If one only focuses on the nodal band $E^{(-)}$, the change of topology in the momentum space would give rise to Fermi arcs for $k_y^2>k_0^2$. However, to obtain the complete surface states, one needs to also take into account the gapped band $E^{(+)}$. We can solve for the gapless surface states explicitly. Let us take the surface at $z=0$ as an example. For $|k_y|<k_0$, only the gapped band can support gapless surface modes, \begin{equation}\label{phiz} \phi^z_+=\mathcal{N}(0,1,-1,0)^Te^{\frac1{v}\int_0^z(\mu+B)dz}e^{ik_xx}.
\end{equation} The surface Hamiltonian is gapless, \begin{equation} {H}^z_\text{surf}=\frac12\sum_{\bf k}\psi^T_{-{\bf k},z}(vk_x)\psi_{{\bf k},z}, \end{equation} where the surface Majorana operator is given by $\psi_{{\bf k},z}=(\chi_{+,\downarrow}({\bf k})-\chi_{-,\uparrow}({\bf k}))/\sqrt2$. For $|k_y|>k_0$, both bands can support surface modes, and we need to take into account the hybridization of these modes. First, let us consider the zero energy surface modes associated with each band without hybridization. The surface state associated with the gapped band is still given by Eq. (\ref{phiz}). The surface state associated with the nodal band is \begin{equation} \phi^z_-=\mathcal{N}(1,0,0,-1)^Te^{\frac1{v}\int_0^z\mu_\text{eff}^{(-)}(k_y) dz}e^{-ik_xx}, \quad |k_y|>k_0. \end{equation} Notice that $\phi^z_+$ and $\phi^z_-$ have finite coupling for any $|k_y|>k_0$, \begin{equation} \langle\phi^z_+|\mathcal{H}({\bf k})|\phi^z_-\rangle\sim -vk_y+O(k_y^2). \end{equation} This coupling results in the hybridization of these two states, and the resultant surface states have finite energy for $|k_y|>k_0$. Therefore, gapless surface modes only exist between the two point nodes, i.e., at $|k_y|<k_0$. In the momentum space, the zero energy states form a Fermi arc between the point nodes. In the real space, the gapless surface states are manifested as chiral Majorana modes on surfaces parallel to the Zeeman field. The Zeeman field and these gapless chiral surface modes on parallel surfaces form a right-hand grip. The surfaces perpendicular to the Zeeman field remain gapped (Fig. \ref{3D}). \begin{figure} \includegraphics[width=\columnwidth]{3D_surface} \caption{Surface states of 3D $p$-wave superfluids in Zeeman field ${\bf B}$. Without loss of generality, we choose ${\bf B}$ to be along the $y$-direction. (a) For $B<\mu$, $\mu>0$, the bulk is fully gapped. Surfaces perpendicular to the Zeeman field (shaded) are gapped. In an approximation to linear order in $B$, helical modes on surfaces parallel to the Zeeman field remain gapless. (b) For $B>|\mu|$, the bulk is in the nodal point phase. Surfaces perpendicular to the Zeeman field remain gapped. Surfaces parallel to the Zeeman field have gapless chiral surface states. The chiral surface modes and the Zeeman field form a right-hand grip. \label{3D}} \end{figure} \subsection{Topological phase transitions on the surface} The surface Hamiltonian is critical at $B_\perp=0$, where $B_\perp$ is the Zeeman field perpendicular to the surface [see e.g. Eq. (\ref{surfaceH})]. On either side of the surface critical point, the surface states are gapped with broken TRS (Fig. \ref{stransitions}). Therefore, one can define a Chern number as the topological invariant for the gapped surface Hamiltonian. Topological phase transitions between these two gapped phases with different Chern numbers take place on surfaces as the Zeeman field perpendicular to the surface sweeps across zero. Across these transitions, the order parameter does not change but the Chern number changes by one. These transitions belong to the universality class of Lorentz invariant free Majorana fermions in 2D. The grand potential for the surface states at $T=0$ is \begin{equation} \begin{split}\label{SurfaceOmega} \frac{\Omega_{0,s}}S&=-\frac12\int\frac{d^2{\bf k}}{(2\pi)^2}\sqrt{v^2k^2+B_\perp^2}\\ &=\frac{|B_\perp|^3}{12\pi v^2}+\text{analytical terms}. \end{split} \end{equation} Thus, the surface topological phase transitions are of the third order.
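The cubic non-analyticity in Eq. (\ref{SurfaceOmega}), and the susceptibility it implies, can be reproduced symbolically. The sketch below (assuming $B_\perp>0$ and a momentum cutoff $\Lambda$) extracts the $|B_\perp|^3$ coefficient and differentiates it twice:
\begin{verbatim}
import sympy as sp

v, B, Lam = sp.symbols('v B Lambda', positive=True)
k = sp.symbols('k', positive=True)

# Surface grand potential per area, cut off at |k| = Lambda
Omega = -sp.integrate(k * sp.sqrt(v**2 * k**2 + B**2),
                      (k, 0, Lam)) / (4 * sp.pi)

# Expanding around B = 0: all terms are analytic in B**2 except
# the cubic one, whose coefficient is 1/(12*pi*v**2).
cubic = sp.series(Omega, B, 0, 4).removeO().coeff(B, 3)
assert sp.simplify(cubic - 1 / (12 * sp.pi * v**2)) == 0

# Non-analytic part of the spin susceptibility: -d^2(Omega)/dB^2
print(-sp.diff(B**3 / (12 * sp.pi * v**2), B, 2))  # -B/(2*pi*v**2)
\end{verbatim}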
As a result, the surface spin susceptibility has a non-analytical part \begin{equation} \chi_M^\text{NA}=-\frac{|B_\perp|}{2\pi v^2}, \end{equation} which varies linearly in $|B_\perp|$. The overall susceptibility is an even function of $B_\perp$ with additional analytical terms, \begin{equation} \chi_M=\chi_M^{(0)}+\chi_M^\text{NA}+\chi_M^{(1)}B_\perp^2+..., \end{equation} where $\chi_M^{(0)}$, $\chi_M^{(1)}$, ... are non-universal and depend on the microscopic properties of the superfluids. \begin{figure} \includegraphics[width=\columnwidth]{SurfaceTransition} \caption{Surface spectrum in the presence of a perpendicular Zeeman field $B_\perp$. Topological phase transitions happen on the surface between two gapped phases when $B_\perp$ crosses zero. The Chern number changes by one across these transitions. \label{stransitions}} \end{figure} \subsection{$p$-wave superfluids in 2D} Similar QLMFA QCPs also exist in 2D $p$-wave superfluids. The QCPs are still described by a free theory, since the upper critical dimension for QLMFA is $D_u=3/2$. By setting $k_z=0$ in Eq. (\ref{Hp}), we obtain the Hamiltonian for 2D TRI $p$-wave superfluids in the $xy$-plane. In this case, fermions with $ s_z=+1$ and $ s_z=-1$ are paired in the $p_x-ip_y$ and $p_x+ip_y$ channels, respectively. In 2D, Zeeman fields parallel and perpendicular to the superfluid plane have different effects. \subsubsection{In-plane Zeeman fields} A Zeeman field parallel to the superfluid plane (in-plane Zeeman field) can drive transitions to NPPs. In the strong coupling limit, these transitions belong to the QLMFA class. In this case, the effective Hamiltonian is given by Eq. (\ref{strongeff}) with $k_z=0$. We find, at $T=0$, the leading non-analytical term in the grand potential to be \begin{equation} \frac{\Omega^\text{NA}_\text{2D}}S=\frac2{15\pi v}\left|B-\mu\right|^{5/2}\sqrt{\frac{2 \mu}{v^2-\mu}}, \end{equation} which suggests a $5/2$th order transition. Edge states respond to in-plane Zeeman fields similarly to the 3D case: they are gapped by the Zeeman fields perpendicular to the edge. The edge Hamiltonians are critical when the Zeeman field perpendicular to the edge is zero. Topological phase transitions happen on the edges when the Zeeman field perpendicular to them is tuned across zero. Let us take a Zeeman field along the $y$-direction as an example to illustrate the edge states (Fig. \ref{2D}). For $\mu>0$, $|B_y|<\mu$, the edges perpendicular to the Zeeman field are gapped. Under the same linear approximation used for Eq. (\ref{surfaceH}), the edges parallel to the Zeeman field have gapless counter-propagating Majorana edge modes with opposite spins. For $|B_y|>|\mu|$, the edges perpendicular to the Zeeman field remain gapped, while each edge parallel to the Zeeman field has a zero-energy flat band. \begin{figure} \includegraphics[width=\columnwidth]{2D_edge} \caption{Edge states of 2D $p$-wave superfluids in the presence of Zeeman field ${\bf B}$, $B=|{\bf B}|$. (a) Zeeman field parallel to the superfluid plane (in-plane Zeeman field). For $B<\mu$, $\mu>0$, edges perpendicular to the Zeeman field (shaded) are gapped. In an approximation to linear order in $B$, edges parallel to the Zeeman field support counter-propagating gapless edge modes with opposite spins. For $B>|\mu|$, edges perpendicular to the Zeeman field remain gapped. Edges parallel to the Zeeman field support zero-energy flat-band states. (b) Zeeman field perpendicular to the superfluid plane (out-of-plane Zeeman field).
For $B<\mu$, $\mu>0$, there are gapless helical edge modes on all edges. For $B>|\mu|$, there is only one gapless chiral edge mode with spin along the Zeeman field. The chiral edge mode and the Zeeman field form a right-hand grip. \label{2D}} \end{figure} \subsubsection{Out-of-plane Zeeman fields} In contrast, a Zeeman field perpendicular to the superfluid plane (i.e., along the $z$-direction) does not lead to nodal phases. The bulk spectrum \begin{equation} E_{{\bf k},z}^{(\pm)}=\sqrt{v^2(k_x^2+k_y^2)+(\epsilon_k-\mu\pm B_z)^2} \end{equation} is always gapped except at the transitions $B_z=\pm\mu$, when the gap closes at ${\bf k}=0$. For $\mu>0$ and $|B_z|<\mu$, despite the broken TRS, the helical edge modes are still present. This gapped phase can be smoothly connected to the 2D TRI topological superfluids at $B=0$. For $|B_z|>|\mu|$, there exists only one chiral edge mode with spin along $B_z$, which forms a right-hand grip with the Zeeman field. This phase can be smoothly connected to the chiral superfluids in 2D (Fig. \ref{2D}). \section{Topological quantum criticality}\label{TQCP} In this section, we will discuss the properties of 3D TSFs/TSCs near the topological QCPs at finite temperature. These discussions not only apply to the $p$-wave superfluids discussed in Sec. \ref{pwave}, but also to the TSC models in the following sections. \subsection{Finite temperature} First, we show that the transitions in the bulk and on the surface discussed in this article only exist at $T=0$. As shown in Sec. \ref{general}, all classes of QLMF QCPs in 3D, as well as the surface QCPs, are described by free field theories. Therefore, we can compute the grand potential near QCPs using simple thermodynamic relations for non-interacting systems. The grand potential $\Omega$ can be split into two parts $\Omega=\Omega_0+F$, where $\Omega_0$ is the zero temperature grand potential and $F$ is the thermal free energy. Utilizing standard thermodynamic relations for fermions, we have \begin{equation} \Omega=-T\sum_{{\bf k},i}\Bigg[\ln\left(1+e^{-\beta E_{\bf k}^{(i)}}\right)+\frac{\beta E_{\bf k}^{(i)}}2\Bigg], \end{equation} where $\beta=1/T$. For the purpose of understanding QCPs, only the infrared behavior of $\Omega$ is relevant. We can expand the grand potential into a power series of $\beta E_{\bf k}^{(i)}$ as \begin{equation}\label{OmegaT} \Omega=T\sum_{{\bf k},i}\left[-\ln 2+\sum_{l=1}^\infty\frac{\left(\beta E_{\bf k}^{(i)}\right)^{2l}}{(2l)!}Li_{1-2l}(-1)\right], \end{equation} with $Li_l(\cdot)$ being the polylogarithm function. Notice that, apart from an overall constant, this expansion only contains even powers of $E_{\bf k}^{(i)}$. Since $(E_{\bf k}^{(i)})^2$ is an analytical function of $m$ for all the transitions we are interested in, $\Omega$ is also an analytical function of $m$ at any finite temperature. Therefore, all transitions discussed in this article, which are described by either QLMFs or Lorentz invariant Majorana fields, only exist at zero temperature. \begin{figure} \includegraphics[width=\columnwidth]{QCP} \caption{Finite temperature phase diagram. Topological phase transitions discussed in this article only exist at zero temperature, at $m=0$. The point $T=0$, $m=0$ corresponds to a quantum critical point (QCP). In the quantum critical regime where $T\gg |m|$, the temperature scaling of thermodynamic quantities in superfluids and superconductors is universal and dictated by the QCP. Outside the universal quantum critical regime where $T\ll |m|$, the states have similar properties to the zero temperature phases.
For QLMF QCPs, the zero temperature phases are gapped (nodal) for $m>0$ ($m<0$). For surface QCPs, we have $m=B_\perp$ and the QCP is at $B_\perp=0$. $B_\perp$ is the Zeeman field perpendicular to the surface. The zero temperature phases on the surface are gapped on both sides of the QCP. \label{QCP}} \end{figure} The QCP divides the phase diagram into three smoothly connected regions with qualitatively different behaviors (Fig. \ref{QCP}). In the quantum critical regime $T\gg|m|$, the temperature scaling of thermodynamic quantities in the superfluids is universal and dictated by the QCP (see also Sec. \ref{general}). The scaling exponents are unique to each universality class. These scaling behaviors can be used to probe the QCPs at zero temperature. Outside the quantum critical regime where $T\ll|m|$, the low temperature properties of the superfluids are similar to the zero temperature states on either side of the QCP. \subsection{Surface quantum criticality}\label{SQCPs} Let us now focus on the surface quantum criticality induced by Zeeman or spin exchange fields. In this case, $m=B_\perp$ and the QCP is at $B_\perp=0$. The surface spectrum is fully gapped except at the QCP. We compare the spin susceptibility inside the quantum critical regime $T\gg |B_\perp|$ and outside the regime $T\ll |B_\perp|$. Inside the quantum critical regime where $T\gg |B_\perp|$, the temperature is much higher than the excitation gap, and thermal excitations exhibit universal properties. Thus, susceptibilities should have a universal dependence on $T$. The surface spin susceptibility is analytical in the quantum critical regime and scales linearly in $T$ at leading order, \begin{equation} \chi_M\sim T. \end{equation} Outside the quantum critical regime where $T\ll |B_\perp|$, the excitation gap is much higher than the temperature. Therefore, all thermal excitations are suppressed exponentially, and the thermal free energy $F$ is exponentially small. The total grand potential is mainly given by the zero temperature contribution $\Omega\approx\Omega_{0}$. Therefore, when $|B_\perp|\gg T$, the surface spin susceptibility has a similar scaling to the zero temperature case. In contrast to the $T=0$ case, however, the surface susceptibility is always analytical at finite $T$. Finally, we would like to emphasize that all the discussions on finite temperature properties near QCPs are not only applicable to the $p$-wave superfluid model in Sec. \ref{pwave}, but also to the TSC models in the following sections. These properties are universal and robust. \section{TSC of Dirac fermions}\label{Dirac} In the $p$-wave superfluid model discussed in Sec. \ref{pwave}, only QLMFA QCPs exist between the gapped phase and the NPP. More degrees of freedom (e.g., different orbitals) need to be introduced to realize all three classes of QLMF QCPs in a given model. In this section, we discuss a TSC model of Dirac fermions with four bands labeled by spin and orbital degrees of freedom. In this model, all three classes of QLMF QCPs exist. In the next section, we will show that similar physics also exists in the TSC model of Cu$_x$Bi$_2$Se$_3$. Topological classifications of NPPs, NLPs and NSPs can be found in Refs. \cite{Zhao13,Kobayashi14, Zhao16}, and we refer readers to these references for general discussions. Generally speaking, the topological stability of these nodal phases in TSFs/TSCs further depends on additional global symmetries. We will discuss the symmetries that protect these nodal phases in concrete examples below.
In our discussions of QCPs, we assume that the nodal phases are either protected by global symmetries and topology, or arise due to the absence of other gapping couplings in materials as a result of specific energetic, non-topological reasons. We focus on the dynamics of QCPs between gapped TSCs and nodal phases, assuming these phases are present and the phase transitions do exist. The particular transitions we describe below offer concrete realizations of different QCPs in TSCs and detailed energetics of how to drive the corresponding transitions in terms of generalized mass fields. Before considering pairing fields, we first introduce a low energy effective Hamiltonian for Dirac semimetals \cite{Young12, Armitage18} with two orbitals \begin{equation} H_0=\sum_{\bf k}C^\dagger_{\bf k}\mathcal H_0({\bf k})C_{\bf k}, \end{equation} where $C_{\bf k}=(c_{1,\uparrow,{\bf k}},c_{1,\downarrow,{\bf k}},c_{2,\uparrow,{\bf k}},c_{2,\downarrow,{\bf k}})^T$, 1 and 2 are orbital indices, and \begin{equation} \mathcal{H}_0({\bf k})=v\sigma_z\otimes(s_xk_x+s_yk_y+s_zk_z). \end{equation} Here, the $\sigma_\alpha$'s are Pauli matrices in orbital space. Let us rewrite the Hamiltonian in the Majorana representation. We define \begin{gather} \chi_{\bf k}=\begin{pmatrix}\chi_{1,{\bf k}}\\\chi_{2,{\bf k}}\end{pmatrix}, \end{gather} where \begin{eqnarray} \chi_{j,{\bf k}}=(\chi_{j,+,\uparrow}({\bf k}), \chi_{j,+,\downarrow}({\bf k}), \chi_{j,-,\uparrow}({\bf k}), \chi_{j,-,\downarrow}({\bf k}))^T, \end{eqnarray} and $j=1,2$. Then the Hamiltonian of the Dirac semimetal becomes \begin{equation} H_0=\frac12\sum_{\bf k}\chi^T_{-\bf k}\mathcal H_0^M({\bf k})\chi_{\bf k}, \end{equation} where \begin{equation} \mathcal H_0^M({\bf k})=v\sigma_z\otimes (I\otimes s_xk_x-\tau_y\otimes s_yk_y+I\otimes s_zk_z). \end{equation} TSCs can be generated by introducing an odd-parity TRI intraorbital spin singlet pairing \begin{equation} \mathcal{H}_\Delta=\sigma_z\otimes\tau_x\otimes s_y\Delta. \end{equation} For convenience, we choose $\Delta>0$. The Hamiltonian of Dirac TSCs is \begin{eqnarray} &&H=\frac12\sum_{\bf k}\chi^T_{-\bf k}\mathcal{H}({\bf k})\chi_{\bf k}+H_I,\nonumber\\ && \mathcal{H}({\bf k})=\mathcal H_0^M({\bf k})+\mathcal{H}_\Delta. \end{eqnarray} The interactions represented by $H_I$ are irrelevant operators in 3D. Therefore, they are muted in the following discussions of QCPs. The Hamiltonian is invariant under parity $\mathcal{P}=\sigma_x\otimes\tau_y$. The bulk spectrum is fully gapped and isotropic. This topological phase is protected by TRS. Only TRB or sufficiently large TRI mass fields that commute with $\mathcal{H}_\Delta$ can drive quantum phase transitions.
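To make the tensor-product notation concrete, the following short numerical sketch (our illustration, not part of the original analysis; Python with NumPy assumed) assembles the $8\times 8$ matrix $\mathcal{H}({\bf k})=\mathcal H_0^M({\bf k})+\mathcal{H}_\Delta$ from Kronecker products of Pauli matrices and confirms the fully gapped, isotropic spectrum $\pm\sqrt{v^2k^2+\Delta^2}$:
\begin{verbatim}
import numpy as np

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def kron3(a, b, c):   # slot order: sigma (x) tau (x) s
    return np.kron(a, np.kron(b, c))

def H(kx, ky, kz, v=1.0, Delta=0.5):
    H0 = v * (kx * kron3(sz, I2, sx)
              - ky * kron3(sz, sy, sy)    # middle Pauli factor plays tau_y
              + kz * kron3(sz, I2, sz))
    HD = Delta * kron3(sz, sx, sy)        # sigma_z (x) tau_x (x) s_y
    return H0 + HD

k = np.array([0.3, -0.2, 0.1])
print(np.linalg.eigvalsh(H(*k)))          # fourfold +/- sqrt(v^2 k^2 + Delta^2)
print(np.sqrt(k @ k + 0.5 ** 2))
\end{verbatim}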
\begin{table*} \center \begin{tabular}{|c|c|c|c|} \hline Description &Matrix Operator & Dirac TSCs& Cu$_x$Bi$_2$Se$_3$\\ \hline {interorbital spin conserved hopping} & $\sigma_y\otimes I\otimes I$&Gapped$^*$&NPP\\ \hline \multirow{2}{*}{interorbital spin exchange} & $\sigma_x\otimes\tau_y\otimes s_x$&Gapped$^*$&NPP\\ & $-\sigma_x\otimes I\otimes s_y$&Gapped$^*$&NPP\\ \hline \multirow{3}{*}{\makecell{Zeeman-type\\intraorbital spin exchange}} & $I\otimes\tau_y\otimes s_x$&NPP&Gapped$^*$\\ & $-I\otimes I\otimes s_y$&NPP&Gapped$^*$\\ & $I\otimes\tau_y\otimes s_z$&NPP&NPP\\ \hline \multirow{3}{*}{\makecell{orbital dependent \\intraorbital spin exchange}} & $\sigma_z\otimes\tau_y\otimes s_x$&NPP&NPP\\ & $-\sigma_z\otimes I\otimes s_y$&NPP&NPP\\ & $\sigma_z\otimes\tau_y\otimes s_z$&NPP&Gapped$^*$\\ \hline \end{tabular} \caption{List of $U(1)$ symmetry invariant mass fields that lead to NPPs in the Dirac TSC and/or Cu$_x$Bi$_2$Se$_3$ model. NPPs are realized when these fields are sufficiently large. All these fields break TRS. The interorbital spin conserved hopping and orbital dependent intraorbital spin exchange fields also break the existing parity symmetry of $\mathcal{H}({\bf k})$, while the rest do not. In some cases, the superconducting gap remains open regardless of the strength of these fields, and we label these phases as `Gapped$^*$'. Here, the asterisk means the gap never closes and there is no phase transition.}\label{NPPtable} \end{table*} \begin{table*} \center \begin{tabular}{|c|c|c|c|} \hline Description& Matrix Operator & Dirac TSCs& Cu$_x$Bi$_2$Se$_3$\\ \hline {orbital dependent chemical potential} & $\sigma_z\otimes\tau_y\otimes I$ &Gapped$^*$ &NLP \\ \hline \multirow{3}{*}{interorbital spin exchange} & $\sigma_y\otimes I\otimes s_x$ &NLP &NLP \\ & $-\sigma_y\otimes\tau_y\otimes s_y$ &NLP &NLP\\ & $\sigma_y\otimes I\otimes s_z$ &NLP &Gapped$^*$\\ \hline \end{tabular} \caption{List of $U(1)$ symmetry invariant mass fields that lead to NLPs in the Dirac TSC and/or Cu$_x$Bi$_2$Se$_3$ model. NLPs are realized when these fields are sufficiently large. All these fields are TRI but break the existing parity symmetry of $\mathcal{H}({\bf k})$. In some cases, the superconducting gap remains open regardless of the strength of these fields, and we label these phases as `Gapped$^*$'. Here, the asterisk means the gap never closes and there is no phase transition.}\label{NLPtable} \end{table*} \begin{table*} \center \begin{tabular}{|c|c|c|c|} \hline Description&Matrix Operator & Dirac TSCs& Cu$_x$Bi$_2$Se$_3$\\ \hline {interorbital spin singlet pairing}& $\sigma_x\otimes\tau_z\otimes s_y$&NSP&NSP\\ \hline intraorbital spin singlet pairing& $\sigma_z\otimes\tau_z\otimes s_y$&Gapped$^*$ &NPP\\ \hline \multirow{3}{*}{interorbital spin triplet pairing} & $\sigma_y\otimes\tau_z\otimes I$&NPP&NPP\\ & $\sigma_y\otimes\tau_x\otimes s_z$&NPP&NPP\\ & $\sigma_y\otimes\tau_x\otimes s_x$&NPP&Gapped$^*$\\ \hline \end{tabular} \caption{List of $U(1)$ symmetry breaking mass fields that lead to NSPs and NPPs in the Dirac TSC and/or Cu$_x$Bi$_2$Se$_3$ model. Nodal phases are realized when these terms are sufficiently large. All these terms break TRS. The interorbital spin singlet pairing field breaks the existing parity symmetry of $\mathcal{H}({\bf k})$, while the rest do not. In some cases, the superconducting gap remains open regardless of the strength of these fields, and we label these phases as `Gapped$^*$'.
Here, the asterisk means the gap never closes and there is no phase transition.}\label{pairing} \end{table*} \begin{table*} \center \begin{tabular}{|c|c|c|c|} \hline Description & Matrix Operator & Dirac TSCs& Cu$_x$Bi$_2$Se$_3$\\ \hline intraorbital spin singlet pairing & $I\otimes\tau_x\otimes s_y$ & Gapped & Gapped\\ \hline interorbital spin conserved hopping & $\sigma_x\otimes\tau_y\otimes I$ & Gapped & Gapped\\ \hline \end{tabular} \caption{List of mass fields that lead to transitions to different gapped phases when these mass fields are sufficiently large. All these fields are TRI. The intraorbital spin singlet pairing breaks the existing parity symmetry of $\mathcal{H}({\bf k})$, while the interorbital spin conserved hopping does not. }\label{gap} \end{table*} \subsection{Quantum criticality in the bulk}\label{DiracEFT} This TSC model of Dirac fermions can host all three classes of QLMF QCPs and all three types of nodal superconducting phases. In the following, we show explicitly which mass fields will lead to these nodal phases and the corresponding QCPs. We first consider phase transitions driven by $U(1)$ invariant non-pairing mass fields. In this case, the order parameter does not change across the transitions. It is worth noting that weak magnetic fields cannot penetrate into the bulk of superconductors due to the Meissner effect \cite{SCbook}. Our discussions below mainly apply to the effects of various internal spin-exchange fields ${\bf J}$. We find that both NPPs and NLPs exist as a result of spin exchange fields. In particular, NPPs only exist if TRS is broken, while NLPs only exist if TRS is preserved (see Tables \ref{NPPtable} and \ref{NLPtable}). To realize NSPs in this model, it is necessary to introduce additional pairing fields that also break the $U(1)$ gauge symmetry. In fact, introducing additional pairing fields can lead to both NSPs and NPPs when TRS is broken (see Table \ref{pairing}). In addition to nodal phases, additional mass fields (pairing or non-pairing) can also lead to phase transitions to different gapped phases when these mass fields are large enough (see Table \ref{gap}). For example, the TRI even-parity intraorbital spin singlet pairing $I\otimes\tau_x\otimes s_y\Delta'$ drives a transition to a non-topological superconducting phase when $\Delta'>\Delta$. The TRI interorbital hopping $\sigma_x\otimes\tau_y\otimes It$ drives a transition to a topological insulating phase when $t>\Delta$. Following the arguments in Sec. \ref{TQCP}, we find that all the phase transitions discussed in this section only exist at zero temperature and correspond to topological QCPs. For other mass fields allowed by charge conjugation symmetry but not listed in the Tables, the superconducting gap remains open regardless of the magnitude of these fields. Thus, these mass fields do not lead to phase transitions, and we will not discuss them in detail. \subsubsection{QLMFA and NPPs.} TRB intraorbital spin exchange fields can lead to NPPs. We list them in Table \ref{NPPtable}. The bulk spectrum near phase transitions in the presence of these fields becomes \begin{equation} E_{\bf k}=\sqrt{v^2{\bf k}^2_\perp+(\sqrt{v^2k_\parallel^2+\Delta^2}\pm J)^2}, \end{equation} which has two point nodes at ${\bf k}_\perp=0$ and $k_\parallel=\pm\sqrt{J^2-\Delta^2}/v$ when $J>\Delta$. Here, $k_\parallel$ (${\bf k}_\perp$) is the momentum parallel (perpendicular) to the spin exchange field ${\bf J}$, and $J=|{\bf J}|$. The NPPs are SPT states.
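Continuing the numerical sketch above (same Pauli matrices, \texttt{kron3} and \texttt{H}; again our illustration, with Python assumed), one can add the orbital dependent spin exchange field $\sigma_z\otimes\tau_y\otimes s_z J$ from Table \ref{NPPtable} and check that the gap closes precisely at the predicted point nodes ${\bf k}_\perp=0$, $k_\parallel=\pm\sqrt{J^2-\Delta^2}/v$ once $J>\Delta$:
\begin{verbatim}
# Reuses np, I2, sx, sy, sz, kron3 and H from the previous snippet.
v, Delta, J = 1.0, 0.5, 0.8                # J > Delta: nodal point phase
M = J * kron3(sz, sy, sz)                  # sigma_z (x) tau_y (x) s_z

kz_node = np.sqrt(J**2 - Delta**2) / v     # predicted node along the field
for kz in (kz_node, 0.9 * kz_node):
    E = np.linalg.eigvalsh(H(0.0, 0.0, kz, v, Delta) + M)
    print(kz, np.abs(E).min())             # ~0 at the node, finite away from it
\end{verbatim}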
We take the orbital dependent spin exchange field $\sigma_z\otimes \tau_y\otimes s_zJ$ as an example. In this case, both $\mathcal T$ and $\mathcal P$ are broken, but the Hamiltonian has a combined $\mathcal{TP}$ symmetry. In addition, there also exists a reflection symmetry across the $z$-axis $M_z=\sigma_z\otimes\tau_y\otimes s_z$, under which the Hamiltonian transforms as \begin{equation}\label{Mz} M_z\mathcal{H}({\bf k})M_z^{-1}=\mathcal{H}(-k_x,-k_y,k_z). \end{equation} The point nodes are protected by both $\mathcal{TP}$ and $M_z$. We can construct the effective Hamiltonian at low energy using procedures similar to those described in Sec. \ref{EFT}. The effective Hamiltonian near $\Delta=J$ can be written as \begin{multline}\label{mzeff} H_\text{eff}=\frac12\sum_{\bf k}\chi^T_{\bf -k}\Bigg[\Gamma_z\left(\Delta-J+\frac{v^2k_z^2}{2\Delta}\right)\\ +\Gamma_xvk_x+\Gamma_yvk_y\Bigg]\chi_{\bf k}, \end{multline} where \begin{align} &&\Gamma_x=P(\sigma_z\otimes I\otimes s_x)P, && \Gamma_y=P(-\sigma_z\otimes \tau_y\otimes s_y)P, \nonumber\\ &&\Gamma_z=P(\sigma_z\otimes\tau_x\otimes s_y)P, \end{align} and \begin{equation} P=P^\tau_{z,+}P^s_{x,-}+P^\tau_{z,-}P^s_{x,+}. \end{equation} The effective Hamiltonian belongs to the QLMFA class. These phase transitions are $7/2$th order in 3D. We notice that Eq. (\ref{Mz}) implies an emergent parity symmetry $\mathcal P$ in the effective Hamiltonian (\ref{mzeff}), which is absent in the original Hamiltonian with $J$ included. Similarly, the TRB interorbital spin triplet pairing fields listed in Table \ref{pairing} also lead to QLMFA QCPs. \subsubsection{QLMFB and NLPs.} TRI spin exchange fields can lead to NLPs. We list them in Table \ref{NLPtable}. The bulk spectrum near transitions, taking these fields into account, is \begin{equation} E_{\bf k}=\sqrt{v^2k^2_\parallel+(\sqrt{v^2{\bf k}^2_\perp+\Delta^2}\pm J)^2}, \end{equation} which has line nodes at $k_\parallel=0$ and ${\bf k}_\perp^2=(J^2-\Delta^2)/v^2$ when $J>\Delta$. The NLPs are SPT states. For example, for $-\sigma_y\otimes\tau_y\otimes s_yJ$, the $\mathcal P$ symmetry is broken. The NLP is protected by TRS and the mirror reflection with respect to the $xz$-plane $M_{xz}=\sigma_y\otimes\tau_y\otimes s_y$, under which the Hamiltonian transforms as \begin{equation}\label{Mxz} M_{xz}\mathcal{H}({\bf k})M_{xz}^{-1}=\mathcal{H}(k_x,-k_y,k_z). \end{equation} The effective Hamiltonian can be constructed similarly, \begin{multline}\label{mxzeff} H_\text{eff}=\frac12\sum_{\bf k}\chi^T_{\bf -k}\Bigg[\Gamma_x\left(\frac{v^2}{2\Delta}(k_x^2+k_z^2)+\Delta-J\right)\\ +\Gamma_yvk_y\Bigg]\chi_{\bf k}, \end{multline} where \begin{align} &&\Gamma_x=P(\sigma_z\otimes\tau_x\otimes s_y)P, && \Gamma_y=P(-\sigma_z\otimes\tau_y\otimes s_y)P, \end{align} and \begin{equation} P=P^\sigma_{x,+}P^\tau_{z,+}+P^\sigma_{x,-}P^\tau_{z,-}. \end{equation} Here $P^\sigma_{\alpha,\pm}=(1\pm\sigma_\alpha)/2$. The effective Hamiltonian belongs to the QLMFB universality class. The phase transitions are 3rd order in 3D. We again note that Eq. (\ref{Mxz}) implies an emergent parity symmetry $\mathcal P$ in the effective Hamiltonian (\ref{mxzeff}), which is absent in the original Hamiltonian with $J$ included. \subsubsection{QLMFC and NSPs.} With the TRB interorbital spin singlet pairing $\sigma_x\otimes\tau_z\otimes s_yD$, the bulk spectrum near transitions is \begin{equation} E_{\bf k}=\left|\sqrt{v^2k^2+\Delta^2}\pm D\right|, \end{equation} which has surface nodes at $k^2=(D^2-\Delta^2)/v^2$ when $D>\Delta$. Nodal surface states are generally less stable.
Here we simply treat the QCP as a multicritical point. The effective Hamiltonian \begin{equation} H_\text{eff}=\frac12\sum_{\bf k}\chi^T_{\bf -k}\Gamma\left(\frac{v^2k^2}{2\Delta}+\Delta-D\right)\chi_{\bf k}, \end{equation} belongs to the QLMFC class with \begin{equation} \Gamma=P(\sigma_z\otimes\tau_x\otimes s_y)P \end{equation} and \begin{equation} P=P^\sigma_{y,+}P^\tau_{y,-}+P^\sigma_{y,-}P^\tau_{y,+}. \end{equation} These phase transitions are $5/2$th order in 3D. \subsection{Surface quantum criticality} Gapless helical Majorana states exist on the surfaces of TRI Dirac TSCs. In the presence of Zeeman-type or orbital dependent intraorbital spin exchange fields that lead to NPPs, the surface states can be gapped by the field $J_\perp$ perpendicular to the surface. Topological phase transitions happen on the surface between two gapped states when $J_\perp$ is tuned across zero. On a given surface, $J_\perp=0$, $T=0$ corresponds to a surface QCP. \section{Topological superconducting C\lowercase{u$_x$}B\lowercase{i}$_2$S\lowercase{e}$_3$ model}\label{CBS} As another example, we discuss the QLMF QCPs in the TSC Cu{$_x$}Bi$_2$Se$_3$ model. We first tune the topological insulating gap and chemical potential to zero to obtain a semimetal Hamiltonian \cite{Zhang09}, which can be written in the Majorana representation at low energy as \begin{equation} \mathcal{H}'_0{}^M({\bf k})=v\sigma_z\otimes(\tau_y\otimes s_y k_x+I\otimes s_x k_y)+v_z\sigma_y\otimes\tau_y\otimes I k_z, \end{equation} where $v\neq v_z$ due to the crystal symmetry. Following the criterion given by Fu and Berg \cite{Fu10}, the TRI odd-parity interorbital spin triplet pairing \begin{equation} \mathcal{H}'_\Delta=\sigma_y\otimes\tau_z\otimes s_x\Delta \end{equation} should generate a fully gapped TSC. The TSC Hamiltonian \begin{equation} H=\frac12\sum_{\bf k}\chi^T_{-\bf k}(\mathcal H'_0{}^M({\bf k})+\mathcal{H}'_\Delta)\chi_{\bf k}+H_I, \end{equation} is invariant under the parity transformation $\mathcal{P}=\sigma_x\otimes\tau_y$. The interactions in $H_I$ are irrelevant operators and will be muted for the discussions of QCPs. The topological phase is protected by TRS. \subsection{QCPs in the bulk} In this model, we also have all three classes of QLMF QCPs. Similar to the Dirac TSC model, NPPs and NLPs can be generated by $U(1)$ invariant non-pairing mass fields. NPPs exist when TRS is broken; NLPs exist when TRS is preserved. NSPs must be generated by an additional TRB pairing field. We list all mass fields that lead to nodal phases in Tables \ref{NPPtable}, \ref{NLPtable} and \ref{pairing}. These nodal phases are protected by relevant symmetries depending on the operators driving the transitions. We do not discuss each case individually. The QCPs associated with transitions to these nodal phases belong to their corresponding QLMF universality classes. The low energy effective field theories near the QCPs can be obtained similarly to Sec. \ref{DiracEFT}, and we do not list them here. \subsection{Surface QCPs} Surface QCPs also exist in TSC Cu$_x$Bi$_2$Se$_3$. The gapless Majorana surface states in TSC Cu$_x$Bi$_2$Se$_3$ can be gapped by some TRB mass fields that lead to NPPs. The mass fields that open gaps on the surface depend on the orientation of these surfaces. For example, the Zeeman-type spin exchange field along the $z$-direction $I\otimes\tau_y\otimes s_zJ$ and the TRB interorbital hopping $\sigma_y\otimes I\otimes It$ can open gaps on surfaces perpendicular to the $z$-axis with any finite strength.
Surface states are quantum critical when these fields are zero. \section{Conclusions} In conclusion, we have investigated a broad set of QCPs in TSFs and TSCs. These QCPs define quantum phase transitions driven by generalized mass fields between fully gapped TSFs/TSCs and nodal phases. Phases on the two sides of the transition can break the same symmetries and have the same local ordering but with different global topologies. These QCPs therefore are beyond the standard Landau paradigm of order-disorder phase transitions. $U(1)$ symmetry is also spontaneously broken at these QCPs. We have identified three main universality classes that have distinct scaling properties and are characterized by generalized QLMFs. The main conclusions are: (1) All the QCPs studied here can naturally emerge when generalized Zeeman (spin exchange) fields or other relevant fields are varied. QCPs separate states with different global topologies. The upper critical dimensions of these QCPs are either $D_u=3/2$ or $D_u=2$, depending on the class (QLMFA, QLMFB, or QLMFC) to which they belong. Below or at the upper critical dimension, QCPs are described by strong coupling conformal field theory fixed points, while above it, free quantum Lifshitz Majorana fermions are robustly stable. (2) These QCPs induce various subtle non-analytical cusp structures in bulk quantities such as generalized susceptibilities. Each QLMF class has its own unique bulk signatures as a smoking gun of topological QCPs. For instance, in 3D the non-analytical structures associated with QCPs are of 7/2th, 3rd and 5/2th order for QLMFA, QLMFB, and QLMFC, respectively. They are generally smoother than a typical 2nd order phase transition. (3) For transitions driven by the generalized mass fields that lead to QLMFA QCPs, as precursors to transitions in the bulk, surface states can be gapped by arbitrarily small fields perpendicular to the surface. This critical behavior of surface states, i.e., the surface states being quantum critical at zero field, leads to non-analytical surface spin susceptibilities that can potentially be studied in experiments. In fact, the susceptibility itself, being an even function of the field, has a non-analytical part that is proportional to the magnitude of the Zeeman field, indicating a cusp structure. (4) There exist no finite temperature transitions between the phases we have studied. All the cusp structures disappear once temperatures become finite, and the non-analytical structures are replaced with smooth crossovers. The physics in the quantum critical regime is completely dictated by the underlying quantum criticality, and the temperature scaling of thermodynamic quantities is distinct for each class of QLMFs. This can be potentially important, as in practical situations it is likely the distinct temperature scaling dictated by QCPs in quantum critical regimes, rather than the $T=0$ cusp structures, that can be measured. (5) In a few concrete models, such as $p$-wave superfluids and TSCs of Dirac fermions and Cu$_x$Bi$_2$Se$_3$, we have found detailed realizations of the QCPs and universality classes discussed above. These concrete studies are intended to bring the physics of QCPs one step closer to physical reality. There are a few very exciting issues we plan to explore in the near future. The first one is related to the strong coupling conformal field theory (CFT) fixed point in (2+1)D.
As we have stated in the article, although there may be no clear distinctions between a free field theory QCP and a QCP of a CFT fixed point, the transport properties and hydrodynamics in these two classes of QCPs should be very different. The transport properties and hydrodynamics near a QCP of a CFT fixed point remain to be understood; we speculate that they are highly universal as well. The second issue is perhaps the relation between the Gross-Neveu strong coupling fixed point in (1+1)D relativistic theory \cite{Gross74} and the relevant interactions in (2+1)D QLMFB/QLMFC implied by the scaling argument or a simple 1-loop calculation. [In (2+1)D, QLMFB and QLMFC are identical.] It is possible that QLMFB/QLMFC presents a generalization of the Gross-Neveu CFT, but in two spatial dimensions. If this is true, QLMFB/QLMFC may be a new candidate for a CFT in (2+1)D. This remains to be investigated in the future, perhaps in the context of a large-$N$ expansion. \begin{acknowledgments} We would like to thank Ian Affleck, Zheng-Cheng Gu, Hae-Young Kee, Sung-Sik Lee, and Xiao-Gang Wen for helpful discussions. This work is partially supported by the Canadian Institute for Advanced Research. F. Y. is supported by a four-year doctoral fellowship from UBC. F. Z. would also like to acknowledge the hospitality of the 2020 CUHK winter workshop on Quantum criticality and topological phases, during which many fascinating gapless topological states were highlighted and debated. \end{acknowledgments}
\section{Introduction} It has long been appreciated that Markov chains can be employed as an effective computational tool to sample from probability measures. Starting from a desired `target' probability distribution $\mu$ on a space $\mathbb{H}$, one seeks a Markov transition kernel $P$ for which $\mu$ is an invariant measure and which moreover maintains desirable mixing properties with respect to this $\mu$. In particular, in Bayesian statistics \cite{kaipio2005statistical, marzouk2007stochastic, martin2012stochastic, stuart2010inverse, dashti2017bayesian, borggaard2018bayesian} and in computational chemistry \cite{ChandlerWolynes, Miller2005,Miller2005a,Craig2004,Craig2005,Craig2005a,Habershon2008, Habershon2013,Lu2018,KoBoMi2019}, such Markov chain Monte Carlo (MCMC) methods play a critical role by efficiently resolving high-dimensional distributions possessing the complex multimodal and correlation structures which typically arise. However, notwithstanding their broad use in a variety of applications, the mixing rates of these chains remain poorly understood both in theory and in practice. The initial mathematical foundation of MCMC methods was set in the late 40's by Metropolis and Ulam in \cite{Metropolis49}, and later improved with the development of the Metropolis-Hastings algorithm in \cite{metropolis1953equation,hastings1970monte}. Further notable developments in the late 80's and 90's derived MCMC algorithms based on suitable Hamiltonian \cite{DuKePeRo1987, neal1993probabilistic} and Langevin dynamical systems \cite{grenander1994representations, besag1994comments}. See e.g. \cite{Betancourt2019, Li2008, robert2013monte} for a further general overview of the field. In view of exciting applications for the Bayesian approach to PDE inverse problems and in transition path sampling \cite{hairer2005analysis, hairer2007analysis, ReVa2005, HaStVo2009, HSV2011, stuart2010inverse, martin2012stochastic, bui2014analysis, dashti2017bayesian, PetraEtAl2014, BuiNguyen2016, borggaard2018bayesian}, an important recent advance in the MCMC literature \cite{tierney1998note,BeRoStVo2008, BePiSaSt2011, cotter2013mcmc} concerns the development of algorithms which are well defined on infinite-dimensional spaces. These methods have the scope to partially beat the `curse of dimensionality' since one expects the number of samples required to effectively resolve the target distribution to be independent of the degree of numerical discretization. However, validating such claims of efficacy concerning this recently discovered class of infinite dimensional algorithms, both in theory and in practice, is an exciting and rapidly developing direction of current research. This work provides an analysis of mixing rates for one particular class of methods among the MCMC algorithms mentioned above, known as Hybrid or Hamiltonian Monte Carlo (HMC) sampling; cf. \cite{DuKePeRo1987,Li2008,neal2011mcmc,BePiSaSt2011}. In HMC sampling, the general idea consists in taking advantage of a Hamiltonian dynamics tailored to the structure of the target $\mu$, a distribution which functions as the marginal onto position space of the Gibbs measure associated to the dynamics. As such, this `Hamiltonian approach' produces nonlocal and nonsymmetric moves on the state space, allowing for more effective sampling from distributions with complex correlation structures in comparison to more traditional random walk based methods.
Indeed, the efficacy of the HMC approach has led to its widespread adoption in the statistics community, as exemplified for example by the success of the STAN software package \cite{GelmanLeeGuo2015, Stan2016}. However, notwithstanding notable recent work, the theoretical understanding of optimal mixing rates for HMC based methods remains rather incomplete, both in terms of optimal tuning of algorithmic parameters and in terms of the allowable structure of the target measure admitted by the theory \cite{BoEbPHMC, BoEbZi2018, LivingstoneEtAl2019, BoSaActaN2018, DurmusEtAl2017, BeskosEtAl2013, BeGiShiFaStu2017, BeKaPa2013, BuiNguyen2016, BoSa2016, MaPiSmi2018, MaSmi2017, MaSmi2019}. We are particularly focused here on a version of HMC introduced in \cite{BePiSaSt2011}, where the authors consider a preconditioned Hamiltonian dynamics in order to derive a sampler which is well defined in the infinite-dimensional Hilbert space setting. While recent work \cite{BuiNguyen2016, BeGiShiFaStu2017, borggaard2018bayesian} has shown that this `infinite-dimensional' algorithm can be quite effective in practice, the question of rigorous justification of mixing rates, posed in \cite{BePiSaSt2011} as an open problem, has only very recently been addressed in the work \cite{BoEbPHMC} in the case of \textit{exact} (i.e. non-temporally-discretized) and preconditioned HMC. In \cite{BoEbPHMC}, the authors follow an approach based on an exact coupling method recently considered in \cite{EbGuZi2016, BoEbZi2018}. Here we develop an alternative approach to establishing mixing rates for preconditioned HMC based on the so called weak Harris theorem \cite{hairer2008spectral, hairer2011asymptotic, hairer2014spectral, butkovsky2018generalized}, combined with a suitable `nudging' in the velocity variable which plays an analogous role to that provided by the classical Foias-Prodi estimate in the ergodic theory of certain classes of nonlinear SPDEs; cf. \cite{mattingly2002exponential, kuksin2012mathematics, GlattHoltzMattinglyRichards2016}. As such, we believe the alternative approach considered here to be more flexible in certain ways, providing a basis for further future analysis of MCMC algorithms. Furthermore, the approach for the exact dynamics developed here can be modified to derive mixing rates in the more interesting and practical case of discretized HMC. This latter challenge will be taken up in future work. Our main results can be summarized as follows. We show exponential mixing rates for the exact preconditioned HMC with respect to an appropriate Wasserstein distance in the space of probability measures on $\mathbb{H}$. For suitable observables, we show that this mixing implies a strong law of large numbers and a central limit theorem. In addition, we use very similar arguments to obtain a novel proof of mixing rates for the finite-dimensional HMC. Finally, the second part of the paper is concerned with the application of the theoretical mixing result to the PDE inverse problem of determining a background flow from partial observations of a passive scalar that is advected by the flow. A careful analysis of this inverse problem within a Bayesian framework is carried out in \cite{borggaard2018bayesian}, where the authors also provide numerical simulations showing the effectiveness of the infinite-dimensional HMC algorithm from \cite{BePiSaSt2011} in approximating the target distribution in this case.
Here our task is to show that this example, for suitable observations of the passive scalar, satisfies all the conditions needed for our theoretical mixing result to hold, thus complementing the numerical experiments in \cite{borggaard2018bayesian} with rigorous mixing rates. In the sequel we provide a more detailed summary of the results obtained in the bulk of this manuscript. \subsection{Overview of the Main Results} \label{sec:Main:Thm:Overview} The preconditioned Hamiltonian Monte Carlo algorithm from \cite{BePiSaSt2011} which we analyze here can be described as follows. Fix a separable Hilbert space $\mathbb{H}$ with norm $| \cdot |$ and inner product $\langle \cdot, \cdot \rangle$. Let $\mathcal{B}(\mathbb{H})$ denote the associated Borel $\sigma$-algebra and let $\text{Pr}(\mathbb{H})$ denote the set of Borel probability measures on $\mathbb{H}$. Suppose we wish to consider a target measure $\mu \in \text{Pr}(\mathbb{H})$ which is given in the Gibbsian form \begin{align}\label{eq:mu} \mu(d\mathbf{q} ) \propto \exp( - U(\mathbf{q})) \mu_0(d\mathbf{q}), \end{align} where $U: \mathbb{H} \to \mathbb{R}$ is a potential function. Here $\mu_0$ is a probability measure on $\mathbb{H}$, typically corresponding to the prior distribution when we consider a $\mu$ derived as a Bayesian posterior. Following a standard formulation in the infinite dimensional setting, we assume in what follows that $\mu_0$ is a centered Gaussian distribution on $\mathbb{H}$, i.e. $\mu_0 = \mathcal{N}(0, \mathcal{C})$, with $\mathcal{C}$ being a symmetric, strictly positive-definite, trace-class linear operator on $\mathbb{H}$. Consider the following preconditioned Hamiltonian dynamics \begin{align} \label{eq:dynamics:xxx} \frac{d\mathbf{q}_t}{dt} = \mathbf{v}_t, \quad \frac{d\mathbf{v}_t}{dt} = - \mathbf{q}_t - \mathcal{C} D U(\mathbf{q}_t), \quad \mbox{ with initial condition } (\mathbf{q}_0, \mathbf{v}_0) \in \mathbb{H} \times \mathbb{H}, \end{align} where $\mathbf{v} \in \mathbb{H}$ denotes a `velocity' variable, so that \eqref{eq:dynamics:xxx} describes the evolution of the `position-velocity' pair $(\mathbf{q}, \mathbf{v})$ in the extended phase space $\mathbb{H} \times \mathbb{H}$. Here we adopt the notation $\mathbf{q}_t$ and $\mathbf{v}_t$ to denote the value at time $t$ of the variables $\mathbf{q}$ and $\mathbf{v}$, respectively. The associated Hamiltonian function, a formal invariant of the flow in \eqref{eq:dynamics:xxx}, is given by \begin{align*} H(\mathbf{q}, \mathbf{v}) = \frac{1}{2}\langle \mathcal{C}^{-1} \mathbf{q}, \mathbf{q} \rangle + U(\mathbf{q}) + \frac{1}{2}\langle \mathcal{C}^{-1} \mathbf{v}, \mathbf{v} \rangle \quad \text{ for suitable } (\mathbf{q}, \mathbf{v}) \in \mathbb{H} \times \mathbb{H}. \end{align*} The exact preconditioned HMC algorithm works as follows. Starting from any $\mathbf{q}_0 \in \mathbb{H}$, draw $\mathbf{v}_0 \sim \mathcal{N}(0, \mathcal{C})$ and run the Hamiltonian dynamics with initial condition $(\mathbf{q}_0, \mathbf{v}_0)$ for a chosen temporal duration $T > 0$. Thus a forward step is proposed as the projection onto the $\mathbf{q}$-coordinate of the solution of \eqref{eq:dynamics:xxx} starting from $(\mathbf{q}_0, \mathbf{v}_0)$ at time $T$, i.e. $\mathbf{q}_T(\mathbf{q}_0, \mathbf{v}_0)$.
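For orientation, the following is a minimal finite-dimensional sketch of one such step (our illustration, not from \cite{BePiSaSt2011}; Python assumed, with $\mathcal{C}$ represented by its vector of eigenvalues in a spectral truncation, and with the exact flow replaced by a fine St\"ormer--Verlet integration purely for demonstration -- the analysis in this paper concerns the exact flow, for which no accept-reject correction is required; the function name \texttt{hmc\_step} is hypothetical):
\begin{verbatim}
import numpy as np

def hmc_step(q0, grad_U, C_eigs, T=1.0, dt=1e-3, rng=np.random.default_rng(0)):
    """One draw from the kernel P(q0, .): integrate dq/dt = v,
    dv/dt = -q - C DU(q) for time T from (q0, v0), v0 ~ N(0, C),
    and return q_T (Verlet used as a stand-in for the exact flow)."""
    v = np.sqrt(C_eigs) * rng.standard_normal(q0.shape)   # v0 ~ N(0, C)
    q = q0.copy()
    force = lambda q: -q - C_eigs * grad_U(q)             # -q - C DU(q)
    for _ in range(int(T / dt)):
        v_half = v + 0.5 * dt * force(q)
        q = q + dt * v_half
        v = v_half + 0.5 * dt * force(q)
    return q

# Toy example: U(q) = |q|^2/2, truncated to 50 modes with eigenvalues j^{-2}.
C_eigs = 1.0 / np.arange(1, 51, dtype=float) ** 2
q = np.zeros(50)
for _ in range(100):
    q = hmc_step(q, lambda q: q, C_eigs)
\end{verbatim}
In the toy example, the quadratic potential trivially satisfies the type of global Hessian bound and dissipativity condition imposed on $U$ below.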
The associated Markov transition kernel $P: \mathbb{H} \times \mathcal{B}(\mathbb{H}) \to [0,1]$ is then given as \begin{align} P(\mathbf{q}_0, A) = \mathbb{P}( \mathbf{q}_T(\mathbf{q}_0, \mathbf{v}_0) \in A) \quad \text{ with } \mathbf{v}_0 \sim \mathcal{N}(0,\mathcal{C}), \label{eq:HMC:kernel:overview} \end{align} for every $A \in \mathcal{B}(\mathbb{H})$. We adopt the notation $P^n$ for $n$ steps of the Markov kernel $P$ and recall that $P$ acts as \begin{align*} \nu P(\cdot) = \int P(\mathbf{q}, \cdot) \nu(d \mathbf{q}), \quad P\Phi(\cdot) = \int \Phi(\mathbf{q}) P(\cdot, d\mathbf{q}) \end{align*} on measures $\nu \in \text{Pr}(\mathbb{H})$ and observables $\Phi: \mathbb{H} \to \mathbb{R}$, respectively. This kernel $P$ leaves invariant the desired target probability measure $\mu$ given in \eqref{eq:mu}, namely $\mu P = \mu$, as was demonstrated in \cite{BePiSaSt2011} and recalled in \cref{eq:invariance} below. Clearly, in practice, one is not able to integrate \eqref{eq:dynamics:xxx} exactly, so that one must instead resort to suitable numerical discretizations. These numerical integration schemes are designed so as to ensure that fundamental properties of Hamiltonian dynamics are preserved, such as time reversibility and volume-preservation or `symplecticness'; see e.g. \cite{BoSaActaN2018} for a survey. In this work we only analyze the exact dynamics, as the discretized case requires additional techniques and will be the subject of future work. Let us now sketch a simplified version of our main result, given in rigorous and complete detail in \cref{thm:weak:harris} below. Our mixing result for the Markov kernel $P$ defined in \eqref{eq:HMC:kernel:overview} is given with respect to a suitably constructed Wasserstein distance on $\text{Pr}(\mathbb{H})$. Namely, starting from $\varepsilon > 0$ and $\eta > 0$, consider $\tilde \rho: \mathbb{H} \times \mathbb{H} \to \mathbb{R}^+$ defined as \begin{align} \tilde{\rho}(\mathbf{q}, \tilde{\mathbf{q}}) := \sqrt{\left( \frac{| \mathbf{q} - \tilde{\mathbf{q}} |}{\varepsilon } \wedge 1 \right) \left(1 + \exp(\eta |\mathbf{q}|^2) + \exp(\eta |\tilde{\mathbf{q}}|^2) \right) }. \label{eq:t:rho:def} \end{align} Here $\varepsilon$ corresponds to the small scales at which we can match small perturbations in the initial position $\mathbf{q}_0$ with a corresponding perturbation in the initial velocity $\mathbf{v}_0$ in \eqref{eq:dynamics:xxx}. On the other hand, for sufficiently small $\eta >0$, the function $V(\mathbf{q}) = \exp(\eta |\mathbf{q}|^2)$ is a \emph{Foster-Lyapunov} (or, simply, \emph{Lyapunov}) function for $P$ in the sense of \cref{def:Lyap} and \cref{prop:FL} below. The mapping $\tilde \rho$ is a \textit{distance-like function} in $\mathbb{H}$, i.e. it is a symmetric and lower-semicontinuous non-negative function such that $\tilde \rho(\mathbf{q}, \tilde{\mathbf{q}}) = 0$ holds if and only if $\mathbf{q} = \tilde{\mathbf{q}}$. We denote by $\mathcal{W}_{\tilde{\rho}}: \text{Pr}(\mathbb{H}) \times \text{Pr}(\mathbb{H}) \to \mathbb{R}^+ \cup \{\infty\}$ the following extension of $\tilde \rho$ to $\text{Pr}(\mathbb{H})$: \begin{align} \mathcal{W}_{\tilde{\rho}} ( \nu_1, \nu_2) = \inf_{\Gamma \in \mathfrak{C}(\nu_1, \nu_2)} \int_{\mathbb{H} \times \mathbb{H}} \tilde \rho(\mathbf{q}, \tilde{\mathbf{q}}) \Gamma(d \mathbf{q}, d \tilde{\mathbf{q}}), \label{eq:Wass:Def} \end{align} where $\mathfrak{C}(\nu_1,\nu_2)$ denotes the set of all \emph{couplings} of $\nu_1$ and $\nu_2$, i.e.
the set of all measures $\Gamma \in \text{Pr}(\mathbb{H} \times \mathbb{H})$ with marginals $\nu_1$ and $\nu_2$. We notice that, on the other hand, the mapping $\rho(\mathbf{q},\tilde{\mathbf{q}}) = (|\mathbf{q} - \tilde{\mathbf{q}}|/\varepsilon) \wedge 1$ defines a standard metric in $\mathbb{H}$. As such, its associated extension $\mathcal{W}_\rho$ to $\text{Pr}(\mathbb{H})$ coincides with the usual Wasserstein-1 distance \cite{villani2008optimal}. With the above notation, we have the following convergence result. For the complete, detailed and general formulation, see \cref{thm:weak:harris} below. \begin{Thm} \label{thm:Main:Result:Overview} Suppose that $\mathcal{C}$ is a symmetric, strictly positive-definite, trace class operator and that $U \in C^2(\mathbb{H})$ satisfies the global bound \begin{align} L_1 := \sup_{\mathbf{q} \in \mathbb{H}} |D^2U(\mathbf{q})| < \infty \label{eq:intro:global:hessian:c} \end{align} and the following dissipativity condition \begin{align} | \mathbf{q}|^2 + \langle \mathbf{q}, \mathcal{C} DU(\mathbf{q})\rangle \geq L_2 |\mathbf{q}|^2 - L_3 \quad \mbox{ for all } \mathbf{q} \in \mathbb{H}, \label{eq:disp:cond:over} \end{align} for some constants $L_2 > 0$ and $L_3 \geq 0$. Let $\lambda_1$ denote the largest eigenvalue of $\mathcal{C}$. Then, there exists an integration time $T = T(\lambda_1, L_1, L_2)$ for which the associated Markov kernel $P$ as defined in \eqref{eq:HMC:kernel:overview} satisfies, with respect to $\tilde \rho$ defined in \eqref{eq:t:rho:def}, \begin{align}\label{ineq:main:result} \mathcal{W}_{\tilde \rho}( \nu_1 P^n, \nu_2 P^n) \le c_1 e^{-c_2 n} \mathcal{W}_{\tilde \rho}( \nu_1, \nu_2) \quad \mbox{ for any } \nu_1, \nu_2 \in \text{Pr}(\mathbb{H}) \mbox{ and } n \in \mathbb{N}, \end{align} for some $\varepsilon > 0$ as in \eqref{eq:t:rho:def} and some positive constants $c_1, c_2$ which depend only on the integration time $T > 0$, the constants $L_i$, $i=1,2,3$, associated to the potential function $U$, and the covariance operator $\mathcal{C}$. In particular, \eqref{ineq:main:result} implies that $\mu$ defined in \eqref{eq:mu} is the unique invariant measure for $P$. Moreover, taking $\nu_1 = \delta_{\mathbf{q}_0}$, the Dirac delta concentrated at some $\mathbf{q}_0 \in \mathbb{H}$, and $\nu_2 = \mu$, it follows from \eqref{ineq:main:result} that $P^n(\mathbf{q}_0,\cdot)$ converges exponentially to $\mu$ with respect to $\mathcal{W}_{\tilde \rho}$ as $n \to \infty$. In addition, for any suitably regular observable $\Phi: \mathbb{H} \to \mathbb{R}$, \begin{align*} \left| P^n \Phi(\mathbf{q}_0) - \int \Phi(\mathbf{q}') \mu(d\mathbf{q}') \right| \leq L_\Phi c_1 e^{-n c_2} \int \sqrt{1 + \exp(\eta |\mathbf{q}_0|^2) + \exp(\eta |\mathbf{q}'|^2)} \mu(d \mathbf{q}') \quad \mbox{ for all } n \in \mathbb{N}, \end{align*} for some $\eta > 0$ and $L_\Phi >0$. Further, taking $\{Q_n(\mathbf{q}_0) \}_{n \in \mathbb{N}}$ to be the process associated to $\{P^n\}_{n \in \mathbb{N}}$ starting from $\mathbf{q}_0 \in \mathbb{H}$, i.e.
$Q_n(\mathbf{q}_0) \sim P(Q_{n-1}(\mathbf{q}_0), \cdot)$, we have, for any $\mathbf{q}_0 \in \mathbb{H}$ and any suitably regular observable $\Phi: \mathbb{H} \to \mathbb{R}$, that \begin{align*} X_n := \frac{1}{n} \sum_{k =1}^n \Phi( Q_k(\mathbf{q}_0)) - \int \Phi(\mathbf{q}) \mu(d \mathbf{q}) \rightarrow 0 \quad \mbox{ as $n \to \infty$ almost surely} \end{align*} and that \begin{align*} \mathbb{P}( a < \sqrt{n} X_n \leq b) \to \frac{1}{\sqrt{2 \pi \sigma^2}}\int_a^b e^{-\frac{x^2}{2 \sigma^2}} dx \quad \mbox{ as $n \to \infty$ for any } a,b \in \mathbb{R} \mbox{ with } a < b, \end{align*} where $\sigma = \sigma(\Phi)$. In other words, $\{Q_n(\mathbf{q}_0) \}_{n \geq 0}$ satisfies a strong law of large numbers (SLLN) and a central limit theorem (CLT). \end{Thm} With arguments similar to those used in the proof of \cref{thm:Main:Result:Overview} (cf. \cref{thm:weak:harris}), we can also provide a new proof of mixing rates for the classical finite-dimensional HMC algorithm, as specified by the dynamics \eqref{eq:FD:H:dynam}. This is carried out in \cref{thm:main:FD} below, and complemented by further comparisons with the assumptions in the main infinite-dimensional result in \cref{rmk:FD}. Having formulated our mixing result for the exact HMC algorithm associated with \eqref{eq:mu}, we would like to demonstrate that the conditions \eqref{eq:intro:global:hessian:c}-\eqref{eq:disp:cond:over} which we impose on the potential $U$ can be verified in concrete examples, specifically as they would apply to the Bayesian approach to PDE inverse problems. Here, as an illustrative example, we consider the problem of recovering a divergence free fluid flow $\mathbf{q}$ from the sparse and noisy observation of a passive solute $\theta(\mathbf{q})$, as was recently studied in \cite{borggaard2018bayesian, borggaard2018Consistency}. To be specific, let \begin{align} \partial_t \theta + \mathbf{q} \cdot \nabla \theta = \kappa \Delta \theta, \quad \theta(0) = \theta_0 \label{eq:ad:eqn} \end{align} where the solution evolves on the periodic box $\mathbb{T}^2$, namely $\theta : [0,\infty) \times \mathbb{T}^2 \rightarrow \mathbb{R}$, and $\kappa > 0$ is a fixed diffusion parameter. Given a sufficiently regular initial condition $\theta_0 : \mathbb{T}^2 \to \mathbb{R}$, which we take to be known in advance, we specify the (linear) observation procedure \begin{align} \mathcal{O}(\theta) := \left\{ \int_0^\infty \int_{\mathbb{T}^2} \theta (t,x) K_j(t,x) dx dt \right\}_{j = 1}^m \label{eq:gen:linear:form} \end{align} where $m \geq 1$ represents the number of separate observations of $\theta$ and the $K_j$ are the associated `observation kernels'. Positing an additive observation noise $\eta$, we have the following statistical model linking any suitably regular, divergence free $\mathbf{q}: \mathbb{T}^2 \to \mathbb{R}^2$ with a resulting data set $\mathcal{Y}$ as \begin{align*} \mathcal{Y} = \mathcal{O}(\theta(\mathbf{q})) + \eta, \end{align*} where $\theta(\mathbf{q})$ represents the solution of \eqref{eq:ad:eqn} corresponding to $\mathbf{q}$, so that $\theta(\mathbf{q})$ sits in an appropriate solution space which we specify in rigorous detail below in \cref{def:adr_weak}.
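To fix ideas, here is a minimal numerical sketch of this forward map (our illustration only; Python with NumPy assumed, and the function names \texttt{theta\_T} and \texttt{observe} are hypothetical). It advances \eqref{eq:ad:eqn} on a periodic grid with a pseudo-spectral, semi-implicit Euler scheme and evaluates a simplified observation operator by grid quadrature against kernels at the final time, in place of the space-time integrals of \eqref{eq:gen:linear:form}:
\begin{verbatim}
import numpy as np

def theta_T(theta0, qx, qy, kappa=0.1, T=1.0, dt=1e-3):
    # d_t theta + q . grad theta = kappa * Lap theta on [0, 2*pi)^2;
    # advection treated explicitly, diffusion implicitly (in Fourier space).
    n = theta0.shape[0]
    k = np.fft.fftfreq(n, d=1.0 / n)                  # integer wavenumbers
    kx, ky = np.meshgrid(k, k, indexing="ij")
    lap = -(kx ** 2 + ky ** 2)
    th = np.fft.fft2(theta0)
    for _ in range(int(T / dt)):
        gx = np.real(np.fft.ifft2(1j * kx * th))      # d theta / dx
        gy = np.real(np.fft.ifft2(1j * ky * th))      # d theta / dy
        th = (th - dt * np.fft.fft2(qx * gx + qy * gy)) / (1.0 - dt * kappa * lap)
    return np.real(np.fft.ifft2(th))

def observe(theta, kernels):
    # O(theta): quadrature of theta against each kernel K_j on the grid
    w = (2 * np.pi / theta.shape[0]) ** 2             # cell area
    return np.array([w * np.sum(theta * K) for K in kernels])
\end{verbatim}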
Following the Bayesian statistical inversion formalism \cite{kaipio2005statistical, dashti2017bayesian}, given a fixed observation $\mathcal{Y} \in \mathbb{R}^m$, a prior distribution $\mu_0$ on a suitable Hilbert space of divergence free, periodic vector fields and a probability density function $p_\eta: \mathbb{R}^m \to \mathbb{R}$ for the observation noise $\eta$, we obtain a posterior distribution \begin{align} \mu^\mathcal{Y}(d\mathbf{q}) \propto \exp(- U(\mathbf{q}))\mu_0(d \mathbf{q}) \quad \text{ where } \quad U(\mathbf{q}) = - \log(p_\eta(\mathcal{Y} - \mathcal{O}(\theta(\mathbf{q})))); \label{eq:post:passive:scal} \end{align} see e.g. \cite{dashti2017bayesian} and \cite[Appendix C]{borggaard2018bayesian}. For simplicity of presentation, we focus here on the typical situation where $\eta \sim N(0, \Gamma)$, with $\Gamma$ a symmetric, strictly positive definite covariance operator on $\mathbb{R}^m$. In this case $U$ takes the form \begin{align} U(\mathbf{q}) = \frac{1}{2}| \Gamma^{-1/2}(\mathcal{Y} - \mathcal{O}(\theta(\mathbf{q})))|^2, \label{eq:pot:passive:scal:g:n} \end{align} where $| \cdot |$ represents the usual Euclidean norm on $\mathbb{R}^m$. Our main results here, \cref{prop:DU:DsqU:Bnds} and \cref{cor:DU:DsqU:Bnd:w:C}, show that in the case of `spectral observations', i.e. when \begin{align*} |\mathcal{O}( \theta)| \leq c_0 \sup_{t \leq t^*} \left(\int_{\mathbb{T}^2}|\theta(t,x)|^2 dx\right)^{1/2}, \end{align*} for some $t^* \geq 0$, we can verify the conditions imposed on the potential function $U$ (cf. \eqref{eq:intro:global:hessian:c} and more generally \cref{B123} below) and in particular establish suitable global bounds on $D^2U$. On the other hand, for the interesting cases of `point-observations' where \begin{align} |\mathcal{O}( \theta)| \leq c_0 \sup_{t \leq t^*, x \in \mathbb{T}^2} |\theta(t,x)| \label{eq:pnt:obs:informal} \end{align} for some $t^* \geq 0$, or for observations involving gradients or other higher order derivatives of $\theta$, we can only show local bounds on $D^2U$. \subsection*{Overview of the Proof} Our proof follows the approach of the weak Harris theorem developed in \cite{hairer2011asymptotic}, which is an elegant generalization of the classical Harris mixing results \cite{harris1956existence, meyn2012markov, hairer2011yet}. It establishes sufficient conditions for two point contraction at small, intermediate and large scales in a fashion well adapted to the Wasserstein metric, a notion of distance which is crucially needed for many types of processes evolving on infinite dimensional spaces. We should emphasize that the authors in \cite{hairer2011asymptotic} provide clarity and flexibility in their approach by developing a class of distance-like functions (cf. \eqref{eq:t:rho:def}) which allows one to establish global contractivity directly, thus avoiding the need for intricate pathwise coupling constructions considered elsewhere in the literature. As such, the main difficulties here lie in showing that the necessary assumptions of the weak Harris theorem are valid in our context. These assumptions amount to showing, with respect to $\rho: \mathbb{H} \times \mathbb{H} \to [0,1]$ defined as $\rho (\mathbf{q}, \tilde{\mathbf{q}}) = 1 \wedge(|\mathbf{q} - \tilde{\mathbf{q}}|/\varepsilon)$, with $\varepsilon > 0$ fixed, that the following is true: there exists $m \in \mathbb{N}$ sufficiently large such that \begin{enumerate}[(i)] \item $P^m$ is $\rho$-contracting, i.e.
there exists $0 < \delta_1 < 1$ such that \begin{align} \label{oview:contr} \mathcal{W}_\rho(P^m(\mathbf{q}_0, \cdot), P^m(\tilde{\mathbf{q}}_0, \cdot)) \leq \delta_1 \rho(\mathbf{q}_0, \tilde{\mathbf{q}}_0) \quad \mbox{ for all } \mathbf{q}_0, \tilde{\mathbf{q}}_0 \in \mathbb{H} \mbox{ with } \rho(\mathbf{q}_0, \tilde{\mathbf{q}}_0) < 1; \end{align} \item For level sets of the form $A_K:= \{\mathbf{q} \in \mathbb{H} \,: \, | \mathbf{q} | \leq K\}$, for $K > 0$, $A_K$ is \emph{$\rho$-small} for $P^m$, i.e. there exist $0 < \delta_2 < 1$ and $m \geq 1$ such that \begin{align}\label{oview:small} \mathcal{W}_\rho (P^m(\mathbf{q}_0, \cdot), P^m(\tilde{\mathbf{q}}_0, \cdot)) \leq 1 - \delta_2 \quad \mbox{ for all } \mathbf{q}_0, \tilde{\mathbf{q}}_0 \in A_K. \end{align} \end{enumerate} Finally we need a Lyapunov condition: \begin{itemize} \item[(iii)] For a suitable $V: \mathbb{H} \to \mathbb{R}^+$, \begin{align}\label{oview:Lyap} P^n V(\mathbf{q}) \leq \kappa_V^n V(\mathbf{q}) + K_V, \end{align} for every $\mathbf{q} \in \mathbb{H}$ and $n \geq 1$, where $\kappa_V \in (0,1)$ and $K_V > 0$ are independent of $\mathbf{q}$ and $n$. \end{itemize} Roughly speaking, the conditions (i)--(iii) correspond to establishing a two-point contraction at small, intermediate and large scales, respectively. Following an approach developed in the stochastic PDE literature \cite{mattingly2002exponential, kuksin2012mathematics, GlattHoltzMattinglyRichards2016, hairer2008spectral,hairer2011asymptotic, butkovsky2018generalized}, the idea consists in establishing (i) and (ii) above without explicitly constructing a coupling between $P^m(\mathbf{q}_0, \cdot)$ and $P^m(\tilde{\mathbf{q}}_0, \cdot)$. Instead, we construct an `approximate' coupling by defining a modified process $\tilde P(\mathbf{q}_0, \tilde{\mathbf{q}}_0, \cdot)$ in a control-like approach. We define the process $\widetilde{P}$ by imposing a suitable `shift' in the initial velocity $\mathbf{v}_0$ in \eqref{eq:HMC:kernel:overview} depending on the initial positions $\mathbf{q}_0$, $\tilde{\mathbf{q}}_0$. Namely, for a fixed integration time $T > 0$, we take \begin{align}\label{oview:def:tP} \widetilde{P}(\mathbf{q}_0, \tilde{\mathbf{q}}_0, A) := \mathbb{P} (\mathbf{q}_T (\tilde{\mathbf{q}}_0, \tilde\mathbf{v}_0) \in A) \quad \mbox{ with }\tilde\mathbf{v}_0 = \mathbf{v}_0 + \mathcal{S}(\mathbf{q}_0, \tilde{\mathbf{q}}_0), \quad \mathbf{v}_0 \sim \mathcal{N}(0, \mathcal{C}), \end{align} for every $A \in \mathcal{B}(\mathbb{H})$. Here we consider a shift $\mathcal{S}(\mathbf{q}_0, \tilde{\mathbf{q}}_0)$ which is inspired by estimates developed in \cite{BoEbZi2018}; $\mathcal{S}$ is defined so as to ensure a suitable contraction between two solutions of \eqref{eq:dynamics:xxx} starting from $(\mathbf{q}_0, \mathbf{v}_0)$ and $(\tilde{\mathbf{q}}_0, \tilde \mathbf{v}_0)$ at the final time $T > 0$. Since $\rho$ is a metric in $\mathbb{H}$, the corresponding extension $\mathcal{W}_\rho$ is a metric in $\text{Pr}(\mathbb{H})$ and in fact coincides with the Wasserstein-1 distance.
Thus, by the triangle inequality, \begin{align}\label{ineq:Wrho:TI} \mathcal{W}_\rho(P^m(\mathbf{q}_0, \cdot), P^m(\tilde{\mathbf{q}}_0, \cdot)) \leq \mathcal{W}_\rho(P^m(\mathbf{q}_0, \cdot), \widetilde{P}^m(\mathbf{q}_0, \tilde{\mathbf{q}}_0, \cdot)) + \mathcal{W}_\rho(\widetilde{P}^m(\mathbf{q}_0, \tilde{\mathbf{q}}_0, \cdot), P^m(\tilde{\mathbf{q}}_0, \cdot)), \end{align} where $\widetilde{P}^m$ denotes the $m$-fold iteration of $\widetilde{P}$, corresponding to a sequence $(\mathbf{v}_0^{(1)}, \ldots, \mathbf{v}_0^{(m)})$ of initial velocities drawn from $\mathcal{N}(0, \mathcal{C})$ and shifted as in \eqref{oview:def:tP}, with $\mathbf{q}_0, \tilde{\mathbf{q}}_0$ replaced with the starting positions from each iteration. In view of establishing \eqref{oview:contr} and \eqref{oview:small}, the first term on the right-hand side of \eqref{ineq:Wrho:TI} is estimated by first showing a contraction result between two solutions of \eqref{eq:dynamics:xxx} starting from $(\mathbf{q}_0, \mathbf{v}_0)$ and $(\tilde{\mathbf{q}}_0, \tilde \mathbf{v}_0)$ with respect to $\rho$ in $\mathbb{H}$, which is then extended to $\mathcal{W}_\rho$ in $\text{Pr}(\mathbb{H})$. Such a contraction result follows solely from assumption \eqref{eq:intro:global:hessian:c} on the potential function $U$ together with a smallness assumption on the integration time $T$; see \cref{prop:FP:Type:contract} below. Moreover, assumption \eqref{eq:intro:global:hessian:c} implies that the only possible source of nonlinearity in the dynamics \eqref{eq:dynamics:xxx}, i.e. $DU$, is Lipschitz, which in particular guarantees the well-posedness of \eqref{eq:dynamics:xxx}, as we detail in \cref{prop:well:posed}. The second term on the right-hand side of \eqref{ineq:Wrho:TI} represents a `cost of control' term and in fact the tuning parameter $\varepsilon$ appearing in $\rho$ specifies the scales at which this cost does not `become too large'. We estimate this term with the help of Girsanov's theorem, from which we obtain a bound in terms of the Radon-Nikodym derivative between the law $\sigma_m$ of the velocity path $(\mathbf{v}_0^{(1)}, \ldots, \mathbf{v}_0^{(m)})$ and the law $\tilde \sigma_m$ of the associated shifted velocity path $(\tilde\mathbf{v}_0^{(1)}, \ldots, \tilde \mathbf{v}_0^{(m)})$, i.e. Girsanov provides us with $d \sigma_m/ d \tilde\sigma_m$. Here we notice that, in order to guarantee that $d \sigma_m/ d \tilde\sigma_m$ is well-defined, we define the shift $\mathcal{S}$ in \eqref{oview:def:tP} to be in a finite-dimensional subspace of $\mathbb{H}$ (cf. \eqref{def:Phin}). Indeed, looking at the case $m = 1$ for simplicity, notice that if $\mathbf{v}_0 \sim \mathcal{N}(0,\mathcal{C})$ then $\tilde \mathbf{v}_0 \sim \mathcal{N}(\mathcal{S}(\mathbf{q}_0, \tilde{\mathbf{q}}_0), \mathcal{C})$ and, by the Feldman-Hajek theorem (see, e.g., \cite[Theorem 2.23]{DaZa2014}), $\mathcal{N}(0,\mathcal{C})$ and $\mathcal{N}(\mathcal{S}(\mathbf{q}_0, \tilde{\mathbf{q}}_0), \mathcal{C})$ are mutually singular unless $\mathcal{S}(\mathbf{q}_0, \tilde{\mathbf{q}}_0)$ belongs to the Cameron-Martin space of $\mathcal{N}(0,\mathcal{C})$. Notably, the Cameron-Martin space of $\mathcal{N}(0,\mathcal{C})$ has $\mathcal{N}(0,\mathcal{C})$-measure zero when $\mathbb{H}$ is infinite-dimensional. This illustrates the fact that two measures in an infinite-dimensional space are frequently mutually singular.
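The following toy computation (ours, for illustration; Python with NumPy assumed) makes this dichotomy concrete: for $\mathcal{C}$ with eigenvalues $\lambda_j = j^{-2}$, the Cameron-Martin norm $|\mathcal{C}^{-1/2}h|^2 = \sum_j h_j^2/\lambda_j$ of a generic shift $h \in \mathbb{H}$ diverges, signaling mutual singularity, while a shift supported on finitely many modes keeps this norm, and hence the Radon-Nikodym derivative, under control:
\begin{verbatim}
import numpy as np

lam = 1.0 / np.arange(1, 10001, dtype=float) ** 2   # eigenvalues of C

h_bad = 1.0 / np.arange(1, 10001, dtype=float)      # h_j = 1/j: in H, not in CM
h_good = np.zeros(10000); h_good[:20] = 0.1         # shift in a 20-dim subspace

for h in (h_bad, h_good):
    cm_sq = np.cumsum(h ** 2 / lam)                 # truncations of |C^{-1/2}h|^2
    print(cm_sq[[99, 999, 9999]])                   # diverges vs. stabilizes
\end{verbatim}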
However, by considering a velocity shift $\mathcal{S}$ that belongs to an $N$-dimensional subspace $\mathbb{H}_N \subset \mathbb{H}$, for some $N \in \mathbb{N}$, we can show that $\sigma_m$ and $\tilde\sigma_m$ are mutually absolutely continuous, with an estimate of $d\sigma_m/d\tilde \sigma_m$, and thus of the second term in \eqref{ineq:Wrho:TI}, that depends on the dimension $N$. Here $N$ is chosen so as to obtain a suitable contraction between different trajectories of \eqref{eq:dynamics:xxx} and hence to provide a useful estimate of the first term in \eqref{ineq:Wrho:TI} (see \cref{prop:FP:Type:contract} and \cref{prop:local_contr}). For this purpose, $N$ must be chosen to be sufficiently large, but is nevertheless a fixed parameter depending only on the potential function $U$ through the constant $L_1$ from \eqref{eq:intro:global:hessian:c} (see \eqref{cond:t:N:contr} below). The third part of the proof consists in showing that a suitable function $V$ as in (iii) is indeed a Lyapunov function for $P$; this is carried out in \cref{prop:FL} below. Here, in addition to the quadratic exponential function $V(\mathbf{q}) = \exp(\eta |\mathbf{q}|^2)$ as in \eqref{eq:t:rho:def}, we in fact show that any function of the form $V(\mathbf{q}) = |\mathbf{q}|^i$, $i \in \mathbb{N}$, is also a Lyapunov function. The result of \cref{prop:FL} follows from both assumptions \eqref{eq:intro:global:hessian:c} and \eqref{eq:disp:cond:over} on the potential $U$ together with a smallness assumption on the integration time $T$. Notably, assumption \eqref{eq:disp:cond:over} on $U$ is only imposed in order to obtain this Lyapunov structure. Indeed, condition \eqref{eq:disp:cond:over} provides a coercivity-like property for $DU$ in \eqref{eq:dynamics:xxx} which, when complemented with the smallness assumption on $T$, allows us to show the required exponential decay of such functions $V$ modulo a constant, thus proving the Lyapunov property. It remains to leverage the spectral gap now established, \eqref{ineq:main:result}, to prove a Law of Large Numbers (LLN) and Central Limit Theorem (CLT) type result for the associated Markov process. While this implication is extensively developed in the literature, and has recently been generalized to the situation where the spectral gap holds in the Wasserstein sense \cite{komorowski2012central,kulik2017ergodic}, it was not immediately clear that these results could be applied as a black box to our situation. Instead, for clarity of presentation, we provide an independent proof of the LLN and CLT in an appendix which is carefully adapted to our situation, where the $\tilde{\rho}$ in \eqref{ineq:main:result} is only distance-like. While we are in particular following the road map laid out in \cite{komorowski2012central}, we believe our proof may be of some independent interest. \subsection*{Organization of the Manuscript} The rest of the manuscript is organized as follows. In \cref{sec:Preliminaries} we provide the complete details of our mathematical setting, including the assumptions on the covariance operator $\mathcal{C}$ and the potential $U$ in \eqref{eq:dynamics:xxx}. \cref{sec:apriori} provides certain a priori bounds on \eqref{eq:dynamics:xxx} and concludes with the low-mode nudging bound that we use to synchronize the positions of two processes by suitably coupling their momenta. Lyapunov estimates on the exact Hamiltonian Monte Carlo dynamics are given in \cref{sec:f:l:bnd}.
In \cref{sec:Ptwise:contract:MD} we combine the bounds in the previous two sections to establish the pointwise contractivity of the Markovian dynamics, namely the so-called $\rho$-small and $\rho$-contractivity conditions. The main result on geometric ergodicity is stated rigorously in \cref{sec:Proof:Main}, followed by its proof using the weak Harris theorem \cite{hairer2011asymptotic}. \cref{sec:finite:dim:HMC} details how our approach also provides a novel proof for the finite-dimensional setting. Finally, in \cref{sec:bayes:AD}, we establish that the conditions of the main theorem apply to the Bayesian statistical inversion problem of estimating a divergence-free vector field $\mathbf{q}$ from the partial observation of a scalar quantity advected by the flow. \cref{sec:LLN:CLT} shows how the law of large numbers and the central limit theorem follow in our setting from our main result on spectral gaps. \section{Preliminaries} \label{sec:Preliminaries} This section collects various mathematical preliminaries and sets down the precise assumptions which we use below in the statements of the main results of the paper. \subsection{The Gaussian reference measure} Let $\mathbb{H}$ be a real, separable Hilbert space with inner product $\langle \cdot, \cdot \rangle$ and norm $| \cdot |$. We take $\mathcal{N}(0, \mathcal{C})$ to denote the centered normal distribution on $\mathbb{H}$ with covariance operator $\mathcal{C}$. See e.g. \cite{bogachev1998gaussian, DaZa2014} for generalities concerning Gaussian measures on Hilbert spaces. In this paper we {\em always} assume that $\mathcal{C}$ satisfies the following conditions. \begin{assumption} \label{A123} $\mathcal{C}: \mathbb{H} \to \mathbb{H}$ is a trace class, symmetric and strictly positive definite linear operator. Thus, by the spectral theorem, we have a complete orthonormal basis $\{ \mathbf{e}_i \}_{i \in \mathbb{N}}$ of $\mathbb{H}$ consisting of eigenfunctions of $\mathcal{C}$. We write the corresponding eigenvalues $\{ \lambda_i \}_{i \in \mathbb{N}}$ in non-increasing order and note that the trace class condition amounts to \begin{align} \operatorname{Tr}(\mathcal{C}) := \sum_i \lambda_i < \infty. \label{eq:Tr:Def} \end{align} \end{assumption} \noindent We will also make frequent use of fractional powers of $\mathcal{C}$ which we define as follows. \begin{Def} \label{def:frack:pow} For any $\gamma \in \mathbb{R}$, we define the fractional power $\mathcal{C}^\gamma$ of $\mathcal{C}$ by \begin{align*} \mathcal{C}^{\gamma} \mathbf{f} = \sum_{i} \lambda_i^{\gamma} \langle \mathbf{f}, \mathbf{e}_i \rangle \mathbf{e}_i \;, \end{align*} which makes sense for any $\mathbf{f} \in \mathbb{H}_{-\gamma}$. Here $\mathbb{H}_\gamma$ is defined as \begin{align}\label{def:Hgamma} \mathbb{H}_{\gamma} = \{ \mathbf{f} \in \mathbb{H} \,:\, | \mathbf{f} |_{\gamma} < \infty\} \quad \text{ where } | \mathbf{f} |_{\gamma}^2 := |\mathcal{C}^{-\gamma} \mathbf{f} |^2 = \sum_i \lambda_i^{-2\gamma} \langle \mathbf{f}, \mathbf{e}_i \rangle^2 \end{align} when $\gamma \geq 0$. For $\gamma < 0$, $\mathbb{H}_\gamma$ is defined as the dual of $\mathbb{H}_{-\gamma}$ relative to $\mathbb{H}$. In addition, for every $\gamma \in \mathbb{R}$, we define the inner product $\langle \cdot, \cdot \rangle_{\gamma} = \langle \mathcal{C}^{-\gamma} \cdot, \mathcal{C}^{-\gamma} \cdot \rangle$.
\end{Def} \noindent According to \cref{def:frack:pow}, it follows that $\mathbb{H}_{-\tilde{\gamma}} \subseteq \mathbb{H}_{-\gamma}$ for every $\gamma, \tilde{\gamma} \in \mathbb{R}$ with $\gamma \geq \tilde{\gamma}$. Moreover, note that $\mathbb{H}_{1/2}$ is the Cameron-Martin space associated with $\mathcal{N}(0, \mathcal{C})$ with inner product $\langle \cdot, \cdot \rangle_{1/2} = \langle \mathcal{C}^{-1/2} \cdot, \mathcal{C}^{-1/2} \cdot \rangle$ and norm $| \cdot |_{1/2} = | \mathcal{C}^{-1/2} \cdot |$; see \cite[Chapter 2]{DaZa2014}. In terms of these fractional spaces $\mathbb{H}_\gamma$ we have the following `Poincar\'e' and `reverse-Poincar\'e' inequalities. For this purpose and for later use we define, for $N \geq 1$, \begin{align} \Pi_N \mathbf{f} = \sum_{j \leq N} \langle \mathbf{f}, \mathbf{e}_j \rangle \mathbf{e}_j, \quad \Pi^N \mathbf{f} = \sum_{j > N} \langle \mathbf{f}, \mathbf{e}_j \rangle \mathbf{e}_j, \label{eq:get:high:low} \end{align} namely the projections of $\mathbf{f} \in \mathbb{H}$ onto the `low' and `high' modes, respectively. \begin{Lem}\label{lem:ineq:C} Given any $\gamma, \tilde{\gamma} \in \mathbb{R}$ with $\gamma \geq \tilde{\gamma}$, the following hold: \begin{align} \norm{\mathcal{C}^{\gamma} \mathbf{f}} \leq \lambda_1^{(\gamma - \tilde{\gamma})} \norm{\mathcal{C}^{\tilde{\gamma}} \mathbf{f}}, \label{eq:Poincare} \end{align} when $\mathbf{f} \in \mathbb{H}_{-\tilde{\gamma}}$. Moreover, for any $N \geq 1$, \begin{align} \norm{\mathcal{C}^{\gamma} \Pi^N \mathbf{f}} \leq \lambda_{N+1}^{(\gamma - \tilde{\gamma})} \norm{\mathcal{C}^{\tilde{\gamma}} \Pi^N \mathbf{f} }, \label{eq:Inv:Poincare} \end{align} for any $\mathbf{f} \in \mathbb{H}_{-\tilde{\gamma}}$. \end{Lem} In certain applications, one may wish to define the Markovian dynamics associated to \eqref{eq:dynamics:xxx} only on $\mathbb{H}_\gamma$ for some $\gamma \in (0, 1/2)$, which is a strict subset of $\mathbb{H}$. For this reason, in what follows we consider our underlying phase space to be more generally given by $\mathbb{H}_\gamma$, for some $\gamma \in [0, 1/2)$. This leads us to introduce the following additional assumption which will sometimes be imposed: \begin{assumption} \label{ass:higher:reg:C} For some $\gamma \in [0,1/2)$, $\mathcal{C}^{1-2\gamma}$ is trace class. Namely, \begin{align} \operatorname{Tr}(\mathcal{C}^{1 - 2 \gamma}) := \sum_i \lambda_i^{1- 2\gamma} < \infty. \label{eq:Tr:Def:b} \end{align} \end{assumption} \noindent Under \cref{ass:higher:reg:C} we have the following regularity property: \begin{Lem} \label{lem:diff:gauss:for:diff:folks} Suppose that $\mu_0 = \mathcal{N}(0,\mathcal{C})$ is defined on $\mathbb{H}$ with $\mathcal{C}$ satisfying \cref{A123} and \cref{ass:higher:reg:C}. Then $\mu_0$, regarded as a measure on $\mathbb{H}_\gamma$, coincides with $\mathcal{N}(0,\mathcal{C}^{1 -2\gamma})$. \end{Lem} \begin{Rmk} We typically think of the covariance $\mathcal{C}$ as a `smoothing operator'. A simple example of $\mathcal{C}$ satisfying the above assumptions is $A^{-1}$ where $A = - \partial_{xx}$ is the negative of the second derivative operator on $[0, \pi]$ endowed with Dirichlet boundary conditions. Note that, with this choice of $\mathcal{C}$, the spaces $\mathbb{H}_\gamma$ correspond to the usual $L^2$-based Sobolev spaces $H^{2\gamma}$, with the Cameron-Martin space given by $H^1$. A more involved variation on this theme will be considered below in \cref{sec:bayes:AD} when we consider an application to a PDE inverse problem.
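Concretely, under this illustrative choice the eigenfunctions of $A$ are $\mathbf{e}_i(x) = \sqrt{2/\pi} \sin(i x)$ with eigenvalues $i^2$, $i \in \mathbb{N}$, so that $\lambda_i = i^{-2}$ and \begin{align*} \operatorname{Tr}(\mathcal{C}) = \sum_{i \geq 1} i^{-2} = \frac{\pi^2}{6} < \infty, \qquad \operatorname{Tr}(\mathcal{C}^{1 - 2 \gamma}) = \sum_{i \geq 1} i^{-2(1 - 2\gamma)} < \infty \; \Longleftrightarrow \; \gamma < \frac{1}{4}. \end{align*} In particular, \cref{A123} always holds in this example, while the additional condition \cref{ass:higher:reg:C} holds precisely for $\gamma \in [0, 1/4)$.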
\end{Rmk} \subsection{Conditions on the potential} In what follows we impose the following regularity conditions on the potential energy function $U$ from \eqref{eq:mu}. Note that, in particular, assumption (B1) below is compatible with the setting imposed in \cite{BePiSaSt2011}; see \cref{rmk:Comp:beskos} below. \begin{assumption}\label{B123} For a fixed value of $\gamma \in [0,1/2)$, the potential $U: \mathbb{H}_\gamma \to \mathbb{R}$ in \eqref{eq:dynamics:xxx} is twice Fr\'echet differentiable and \begin{itemize} \item[(B1)] There exists $L_1 >0$ such that \begin{align} \label{Hess:bound} | D^2 U(\mathbf{f}) |_{\mathcal{L}_2(\mathbb{H}_\gamma)} = | \mathcal{C}^{\gamma} D^2 U(\mathbf{f}) \mathcal{C}^{\gamma}|_{\mathcal{L}_2(\mathbb{H})} \leq L_1 \end{align} for any $\mathbf{f} \in \mathbb{H}_\gamma$, where $|\cdot|_{\mathcal{L}_2(\mathbb{H}_\gamma)}$ and $| \cdot|_{\mathcal{L}_2(\mathbb{H})}$ denote the usual operator norms for real valued bilinear operators defined on $\mathbb{H}_\gamma \times \mathbb{H}_\gamma$ and on $\mathbb{H} \times \mathbb{H}$, respectively. \item[(B2)] There exist $L_2 >0$ and $L_3 \ge 0$ such that, for this value of $\gamma \in [0,1/2)$, \begin{align} \label{dissip:new} \nga{\mathbf{f}}^2 + \langle \mathbf{f}, \mathcal{C} D U(\mathbf{f}) \rangle_{\gamma} \ge L_2 \nga{\mathbf{f}}^2 - L_3 \end{align} for every $\mathbf{f} \in \mathbb{H}_\gamma$. \end{itemize} \end{assumption} A number of remarks are in order regarding \cref{B123}: \begin{Rmk} \label{rmk:B123:Consequences} \mbox{} \begin{itemize} \item[(i)] \cref{B123} (B1) and the mean value theorem imply that \begin{align} \ngad{DU(\mathbf{f})-DU(\mathbf{g})} \le L_1 \nga{\mathbf{f} - \mathbf{g}} \label{lipschitz:new} \end{align} for any $\mathbf{f}, \mathbf{g} \in \mathbb{H}_\gamma$ and, in particular, \begin{align} \ngad{D U (\mathbf{f})} \leq L_1 \nga{\mathbf{f}} + L_0 \label{cond:sublin:U} \end{align} for every $\mathbf{f} \in \mathbb{H}_\gamma$, where $L_0 = \ngad{D U (0)}$. Inequalities \eqref{lipschitz:new} and \eqref{cond:sublin:U} will be used extensively in the analysis below. \item[(ii)] If $U$ satisfies, in addition, the following property: \begin{itemize} \item[(B3)] There exist $L_4 \in [0,\lambda_1^{-1 + 2 \gamma})$ and $L_5 \geq 0$ such that \begin{align} \ngad{D U(\mathbf{f})} \leq L_4 \nga{\mathbf{f}} + L_5, \quad \text{ for any } \mathbf{f} \in \mathbb{H}_\gamma, \label{eq:B3a} \end{align} \end{itemize} then (B2) is automatically satisfied. Indeed, we have \begin{align}\label{B3:to:B2} \nga{\mathbf{f}}^2 + \langle \mathbf{f}, \mathcal{C} D U(\mathbf{f}) \rangle_{\gamma} &\geq \nga{\mathbf{f}}^2 - | \langle \mathbf{f}, \mathcal{C} D U(\mathbf{f}) \rangle_{\gamma}| \geq \nga{\mathbf{f}}^2 - \nga{\mathbf{f}} |\mathcal{C}^{1 - \gamma} D U (\mathbf{f})| \notag\\ &\geq \nga{\mathbf{f}}^2 - \lambda_1^{1 - 2 \gamma} \nga{\mathbf{f}} \ngad{D U (\mathbf{f})}, \end{align} where the last inequality follows from \cref{lem:ineq:C} and the fact that $\gamma \in [0,1/2)$. Using \eqref{eq:B3a} in \eqref{B3:to:B2} and Young's inequality, we obtain \begin{align*} \nga{\mathbf{f}}^2 + \langle \mathbf{f}, \mathcal{C} D U (\mathbf{f}) \rangle_{\gamma} &\geq (1 - \lambda_1^{1 - 2 \gamma} L_4 ) \nga{\mathbf{f}}^2 - \lambda_1^{1 - 2 \gamma} L_5 \nga{\mathbf{f}} \notag \\ &\geq \frac{1 - \lambda_1^{1 - 2 \gamma} L_4}{2} \nga{\mathbf{f}}^2 - C, \end{align*} where $C \in \mathbb{R}^+$ is a constant depending on $\lambda_1^{1 - 2\gamma}$, $L_4$, $L_5$.
Notice that, in particular, if $U$ satisfies (B1) with $L_1 \in [0,\lambda_1^{-1 + 2 \gamma})$, then (B3) is verified with $L_4 = L_1$ and $L_5 = L_0$ (cf. \eqref{cond:sublin:U}). \item[(iii)] Assumptions (B1) and (B2) imply that the constants $L_1$ and $L_2$ satisfy the following relation: \begin{align} \label{ineq:K:L} L_2 \leq 1 + \lambda_1^{1 - 2 \gamma} L_1. \end{align} Indeed, from (B2), \cref{lem:ineq:C} and \eqref{cond:sublin:U}, we obtain that \begin{align*} (L_2 - 1) \nga{\mathbf{f}}^2 - L_3 &\leq \langle \mathbf{f}, \mathcal{C} D U (\mathbf{f}) \rangle_{\gamma} \leq \lambda_1^{1 - 2 \gamma} \nga{\mathbf{f}} \ngad{D U (\mathbf{f})} \leq \lambda_1^{1 - 2 \gamma} L_1 \nga{\mathbf{f}}^2 + L_0 \lambda_1^{1 - 2 \gamma} \nga{\mathbf{f}} \\ &\leq (\delta + \lambda_1^{1 - 2 \gamma} L_1) \nga{\mathbf{f}}^2 + \frac{(L_0 \lambda_1^{1 - 2 \gamma})^2}{4 \delta}, \end{align*} for any $\delta > 0$, so that \begin{align*} (L_2 - 1 - \lambda_1^{1 - 2 \gamma} L_1 - \delta) \nga{\mathbf{f}}^2 \leq L_3 + \frac{(L_0 \lambda_1^{1 - 2 \gamma})^2}{4 \delta} \end{align*} holds for any $\mathbf{f} \in \mathbb{H}_{\gamma}$, and every $\delta > 0$, which implies \eqref{ineq:K:L}. \end{itemize} \end{Rmk} This paper is concerned with sampling from probability distributions on $\mathbb{H}$ that have a density with respect to $\mathcal{N}(0, \mathcal{C})$ of the form \eqref{eq:mu}. In order to guarantee that this is indeed the case, and furthermore to ensure the invariance of $\mu$ under the Markovian dynamics associated with \eqref{eq:dynamics:xxx}, we assume the following condition. \begin{assumption} \label{ass:integrability:cond} Taking $\gamma \in [0,1/2)$ as in \cref{B123}, we suppose that, for any $\varepsilon > 0$, there exists $M = M(\varepsilon) \geq 0$ such that \begin{align*} U(\mathbf{f}) \geq M - \varepsilon | \mathbf{f} |_{\gamma}^2 \quad \text{ for any } \mathbf{f} \in \mathbb{H}_\gamma . \end{align*} \end{assumption} \begin{Rmk} \label{rmk:Comp:beskos} We notice that \cref{B123} (B1) and \cref{ass:integrability:cond} above are equivalent to Conditions 3.2 and 3.3 imposed in \cite{BePiSaSt2011}. Indeed, such assumptions are applied there in order to show the well-posedness of the dynamics in \eqref{eq:dynamics:xxx} as well as to show that the measure $\mu$ defined in \eqref{eq:mu} is an invariant measure associated to \eqref{eq:dynamics:xxx}. Such results are recalled in \cref{prop:well:posed} and \cref{eq:invariance} below, respectively. However, as pointed out in the introduction, condition (B2) of \cref{B123} is further imposed in our setting in order to obtain the Lyapunov structure \eqref{oview:Lyap}, which together with the contractivity and smallness properties \eqref{oview:contr}-\eqref{oview:small} allows us to obtain our main convergence result, \cref{thm:weak:harris} below. \end{Rmk} \subsection{Well-Posedness of the Hamiltonian Dynamics} \label{subsec:well:pos} In the following proposition, we recall a well-posedness result for the Hamiltonian dynamics in \eqref{eq:dynamics:xxx}, as shown in \cite{BePiSaSt2011}. We consider the usual norm on the product space $\mathbb{H}_{\gamma} \times \mathbb{H}_{\gamma}$ with the slight abuse of notation: \begin{align} \label{def:nprod} \nga{(\mathbf{q},\mathbf{v})} := \nga{\mathbf{q}} + \nga{\mathbf{v}} \quad \mbox{ for all } (\mathbf{q},\mathbf{v}) \in \mathbb{H}_{\gamma} \times \mathbb{H}_{\gamma}.
\end{align} \begin{Prop} \label{prop:well:posed} Suppose $\mathcal{C}$ satisfies \cref{A123} and that $U$ maintains \cref{B123}, (B1). Let $\gamma \in [0,1/2)$ be as in \cref{B123}. \begin{itemize} \item[(i)] For any $(\mathbf{q}_0, \mathbf{v}_0) \in \mathbb{H}_\gamma \times \mathbb{H}_\gamma$, there exists a unique $(\mathbf{q},\mathbf{v}) = (\mathbf{q}(\mathbf{q}_0, \mathbf{v}_0), \mathbf{v}(\mathbf{q}_0, \mathbf{v}_0))$ with \begin{align} (\mathbf{q},\mathbf{v}) \in C^{1}(\mathbb{R}; \mathbb{H}_\gamma \times \mathbb{H}_\gamma) \label{eq:sol:reg} \end{align} and obeying \eqref{eq:dynamics:xxx}. The resulting solution operators $\{\Xi_t\}_{t \in \mathbb{R}}$ defined via \begin{align*} \Xi_t(\mathbf{q}_0,\mathbf{v}_0) = \mathbf{q}_t(\mathbf{q}_0,\mathbf{v}_0) \end{align*} are all continuous maps from $\mathbb{H}_\gamma \times \mathbb{H}_\gamma$ to $\mathbb{H}_\gamma$. \item[(ii)] Under the additional restriction on $\mathcal{C}$ of \cref{ass:higher:reg:C}, and fixing an integration time $T > 0$, the random variable \begin{align*} Q_1(\mathbf{q}_0) = \mathbf{q}_T(\mathbf{q}_0, \mathbf{v}_0), \quad \mathbf{v}_0 \sim \mathcal{N}(0,\mathcal{C}) \end{align*} is well defined in $\mathbb{H}_\gamma$ for any $\mathbf{q}_0 \in \mathbb{H}_\gamma$. Moreover \begin{align} P(\mathbf{q}_0, A) := \mathbb{P}( Q_1(\mathbf{q}_0) \in A) \label{eq:PEHMC:kernel:def} \end{align} defines a Feller Markov transition kernel on $\mathbb{H}_\gamma$. \end{itemize} \end{Prop} \begin{proof} The first item follows from a standard Banach fixed point argument, i.e. it suffices to show that, given any $(\mathbf{q}_0, \mathbf{v}_0) \in \mathbb{H}_\gamma \times \mathbb{H}_\gamma$ and any $t_0 \in \mathbb{R}$, the mapping \begin{align*} G(\mathbf{p},\mathbf{u})(t) := (\mathbf{q}_0, \mathbf{v}_0) + \int_{t_0}^{t} (\mathbf{u}(s), - \mathbf{p}(s) - \mathcal{C} DU(\mathbf{p}(s))) \, ds \end{align*} is a contraction mapping on the space of continuous $(\mathbb{H}_\gamma \times \mathbb{H}_{\gamma})$-valued functions defined on $I := [t_0 -\delta,t_0 + \delta] \subset \mathbb{R}$, that is on $C(I; \mathbb{H}_\gamma \times \mathbb{H}_\gamma)$, for some $\delta> 0$ sufficiently small, independent of $(\mathbf{q}_0, \mathbf{v}_0)$ and $t_0$. Observe that, with \eqref{lipschitz:new} and \eqref{eq:Poincare}, \begin{align} |\mathcal{C}^{1-\gamma} (DU(\mathbf{p}) - DU(\tilde{\mathbf{p}}))| \leq \lambda_1^{1-2\gamma} L_1 |\mathcal{C}^{-\gamma} (\mathbf{p} - \tilde{\mathbf{p}})| \quad \mbox{ for all } \mathbf{p}, \tilde{\mathbf{p}} \in \mathbb{H}_{\gamma}. \label{eq:NLT:Lip:gam:est} \end{align} Thus, for any $(\mathbf{p},\mathbf{u}), (\tilde{\mathbf{p}},\tilde{\mathbf{u}}) \in C(I; \mathbb{H}_\gamma \times \mathbb{H}_\gamma)$, using \eqref{def:nprod} and \eqref{eq:NLT:Lip:gam:est}, \begin{align*} \sup_{t \in I}\nga{G(\mathbf{p},\mathbf{u})(t) - G(\tilde{\mathbf{p}},\tilde{\mathbf{u}})(t)} \leq \delta (1+\lambda_1^{1-2\gamma} L_1 ) \sup_{t \in I} \nga{(\mathbf{p},\mathbf{u}) - (\tilde{\mathbf{p}},\tilde{\mathbf{u}})}. \end{align*} Therefore, $G$ is a contraction mapping on $C(I; \mathbb{H}_\gamma \times \mathbb{H}_\gamma)$ for $\delta < (1+ \lambda_1^{1-2\gamma} L_1)^{-1}$. Similar argumentation establishes the desired continuity of $\Xi_t$, thus completing the proof.
\end{proof} \subsection{Formulation of the Preconditioned Hamiltonian Monte Carlo Chain} \label{sec:HMC:def} Having fixed an integration time $T > 0$, we denote by $Q_n(\mathbf{q}_0)$ the random variable arising from the $n$-step dynamics of the exact Preconditioned Hamiltonian Monte Carlo (PHMC) chain \eqref{eq:PEHMC:kernel:def} starting from $\mathbf{q}_0 \in \mathbb{H}_\gamma$. Namely, we iteratively draw $Q_n(\mathbf{q}_0) \sim P(Q_{n-1}(\mathbf{q}_0), \cdot)$ for $n \geq 1$ starting from $Q_0(\mathbf{q}_0) = \mathbf{q}_0$. We can write $Q_n(\mathbf{q}_0)$ more explicitly as a transformation of the sequence of Gaussian draws for the velocity as follows: Let $\mathbb{H}^{\otimes n}$ denote the product of $n$ copies of $\mathbb{H}$. Given a sequence $\{\mathbf{v}_0^{(j)}\}_{j \in \mathbb{N}}$ of i.i.d. draws from $\mathcal{N}(0,\mathcal{C})$, we denote by $\mathbf{V}_0^{(n)}$ the noise path \begin{align} \mathbf{V}_0^{(n)} := (\mathbf{v}_0^{(1)},\ldots, \mathbf{v}_0^{(n)}) \sim \mathcal{N}(0,\mathcal{C})^{\otimes n}, \label{eq:noise:path:notation} \end{align} where $\mathcal{N}(0,\mathcal{C})^{\otimes n}$ denotes the measure on $\mathbb{H}^{\otimes n}$ given as the product of $n$ copies of $\mathcal{N}(0,\mathcal{C})$. Taking $\mathcal{B}(\mathbb{H})$ to be the Borel $\sigma$-algebra on $\mathbb{H}$, we define $Q_1(\mathbf{q}_0): \mathbb{H} \to \mathbb{H}$ to be the Borel random variable given by \begin{align*} Q_1(\mathbf{q}_0)(\mathbf{v}_0^{(1)}) = \mathbf{q}_T(\mathbf{q}_0, \mathbf{v}_0^{(1)}) \quad \mbox{ where } \mathbf{v}_0^{(1)} \sim \mathcal{N}(0, \mathcal{C}). \end{align*} Iteratively, we define for every $n \geq 2$ the Borel random variable $Q_n(\mathbf{q}_0): \mathbb{H}^{\otimes n} \to \mathbb{H}$ given by \begin{align} Q_n(\mathbf{q}_0)(\mathbf{V}_0^{(n)}) = \mathbf{q}_T(Q_{n-1}(\mathbf{q}_0)(\mathbf{V}_0^{(n-1)}), \mathbf{v}_0^{(n)}) \quad \mbox{ where } \mathbf{V}_0^{(n)} \sim \mathcal{N}(0,\mathcal{C})^{\otimes n}. \label{eq:n:step:chain:Noise:p} \end{align} With these notations we can write the $n$-step iterated transition kernels as \begin{align} P^n(\mathbf{q}_0,A) := \mathbb{P}(Q_n(\mathbf{q}_0) \in A) \label{eq:n:step:PHMC:kernel} \end{align} for any $\mathbf{q}_0 \in \mathbb{H}_\gamma$ and $A \in \mathcal{B}(\mathbb{H}_\gamma)$. Or, equivalently, $P^n(\mathbf{q}_0, \cdot)$ is the push-forward of $\mathcal{N}(0,\mathcal{C})^{\otimes n}$ by the mapping $Q_n(\mathbf{q}_0)$, i.e. \begin{align} P^n(\mathbf{q}_0, A) = Q_n(\mathbf{q}_0)^*\mathcal{N}(0,\mathcal{C})^{\otimes n}(A) = \mathcal{N}(0,\mathcal{C})^{\otimes n}(Q_n(\mathbf{q}_0)^{-1}(A)) \label{def:Pn:Qn} \end{align} for every $\mathbf{q}_0 \in \mathbb{H}_\gamma$ and $A \in \mathcal{B}(\mathbb{H}_\gamma)$.
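For the reader's orientation, we note that the chain $Q_n(\mathbf{q}_0)$ admits a simple numerical rendering once the problem is truncated to finitely many spectral modes of $\mathcal{C}$. The following sketch is purely illustrative: the truncation dimension, the spectrum $\lambda_i = i^{-2}$, the placeholder gradient $DU$, and the fine leapfrog discretization standing in for the exact flow of \eqref{eq:dynamics:xxx} are all assumptions of the example and are not part of the analysis below.

```python
import numpy as np

# A minimal, illustrative sketch of the chain Q_n(q_0); the spectrum,
# the placeholder DU, and the leapfrog stand-in for the exact flow are
# assumptions of the example, not prescribed by the paper.

d = 50
lam = 1.0 / np.arange(1, d + 1) ** 2       # eigenvalues of C: lambda_i = i^{-2}
rng = np.random.default_rng(0)

def DU(q):
    # hypothetical potential gradient; D^2 U = diag(sech^2) is bounded, cf. (B1)
    return np.tanh(q)

def exact_flow(q, v, T, n_steps=200):
    """Approximate q_T(q_0, v_0) for dq/dt = v, dv/dt = -q - C DU(q)."""
    q, v = q.copy(), v.copy()
    h = T / n_steps
    for _ in range(n_steps):
        v -= 0.5 * h * (q + lam * DU(q))   # C acts diagonally: (C f)_i = lambda_i f_i
        q += h * v
        v -= 0.5 * h * (q + lam * DU(q))
    return q

def Q_n(q0, n, T=0.5):
    """n steps of the PHMC chain: redraw v_0 ~ N(0, C) before each flight."""
    q = q0.copy()
    for _ in range(n):
        v0 = np.sqrt(lam) * rng.standard_normal(d)
        q = exact_flow(q, v0, T)
    return q

q_sample = Q_n(np.zeros(d), n=100)         # one (approximate) draw targeting mu
```

In this rendering, each call to `exact_flow` plays the role of the exact solution map over the flight time $T$, and the redraw of `v0` corresponds to the fresh Gaussian velocity drawn at each Markov step.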
We recall an invariance result for \eqref{eq:mu} from \cite{BePiSaSt2011} in our setting. \begin{Prop} \label{eq:invariance} Under the conditions given in \cref{prop:well:posed} and additionally imposing \cref{ass:integrability:cond} we have that \begin{align*} \mathfrak{M}(d\mathbf{q}, d\mathbf{v}) \propto e^{-U(\mathbf{q})}\mu_0(d\mathbf{q}) \times \mu_0(d\mathbf{v}) \end{align*} defines a probability measure on $\mathbb{H}_\gamma \times \mathbb{H}_\gamma$ which is invariant under the phase-space flow $(\mathbf{q}_t, \mathbf{v}_t)$ of \eqref{eq:dynamics:xxx}, namely \begin{align*} \int_{\mathbb{H}_\gamma \times \mathbb{H}_\gamma} f(\mathbf{q}_t(\mathbf{q},\mathbf{v}), \mathbf{v}_t(\mathbf{q},\mathbf{v})) \mathfrak{M}(d\mathbf{q}, d\mathbf{v}) = \int_{\mathbb{H}_\gamma \times \mathbb{H}_\gamma}f(\mathbf{q},\mathbf{v}) \mathfrak{M}(d\mathbf{q}, d\mathbf{v}) \end{align*} holds for every $f \in C_b( \mathbb{H}_\gamma \times \mathbb{H}_\gamma)$ and every $t \geq 0$. As a consequence, $\mu$ given in \eqref{eq:mu} is a Borel probability measure on $\mathbb{H}_\gamma$ which is invariant for $P$ defined by \eqref{eq:PEHMC:kernel:def}. \end{Prop} \section{A Priori Bounds for the Deterministic Dynamics} \label{sec:apriori} This section provides various a priori bounds on the dynamics specified by \eqref{eq:dynamics:xxx}. The proofs rely solely on the bound on $D^2 U$ given in \eqref{Hess:bound}. In fact, they are obtained by using inequalities \eqref{lipschitz:new} and \eqref{cond:sublin:U}, which follow as a consequence of \eqref{Hess:bound}. \begin{Prop} \label{prop:basic:b:apriori} Impose \cref{A123} and \cref{B123}, (B1) and fix any $T \in \mathbb{R}^+$ satisfying \begin{align} \label{cond:t:apriori} T \leq (1 + \lambda_1^{1 - 2\gamma} L_1)^{-1/2}, \end{align} where the constant $L_1$ is given in \eqref{lipschitz:new} and $\lambda_1$ is the top eigenvalue of $\mathcal{C}$. Then the dynamics defined by \eqref{eq:dynamics:xxx} maintains the bounds \begin{align} \label{sup:qs} \sup_{t\in [0, T]} \nga{ \mathbf{q}_t(\mathbf{q}_0,\mathbf{v}_0)-(\mathbf{q}_0+t \mathbf{v}_0)} \leq (1 + \lambda_1^{1 - 2\gamma} L_1) T^2 \max\{\nga{\mathbf{q}_0},\nga{\mathbf{q}_0 + T \mathbf{v}_0} \} + \lambda_1^{1 - 2\gamma} L_0 T^2 \end{align} and \begin{multline} \label{sup:vs} \sup_{t\in [0, T]} \nga{\mathbf{v}_t - \mathbf{v}_0} \leq (1 + \lambda_1^{1 - 2\gamma} L_1) T [ 1 + (1 + \lambda_1^{1 - 2\gamma} L_1)T^2] \max \left\{ \nga{\mathbf{q}_0}, \nga{\mathbf{q}_0 + T \mathbf{v}_0} \right\} \\ + \lambda_1^{1 - 2\gamma} L_0 T [ 1 + (1 + \lambda_1^{1 - 2\gamma} L_1) T^2 ], \end{multline} for any $(\mathbf{q}_0,\mathbf{v}_0) \in \mathbb{H}_\gamma \times \mathbb{H}_\gamma$, with $L_0$ as given in \eqref{cond:sublin:U}. \end{Prop} \begin{proof} Integrating the first equation in \eqref{eq:dynamics:xxx} twice and then applying the operator $\mathcal{C}^{-\gamma}$, we obtain \begin{align}\label{int:dyn} \mathcal{C}^{-\gamma} \mathbf{q}_t = \mathcal{C}^{-\gamma} (\mathbf{q}_0 + t \mathbf{v}_0) - \int_0^t \int_0^{s} \left[ \mathcal{C}^{-\gamma} \mathbf{q}_\tau + \mathcal{C}^{1 - \gamma} D U(\mathbf{q}_\tau) \right] d \tau ds, \end{align} for each $t \in [0,T]$.
From \cref{lem:ineq:C} and inequality \eqref{cond:sublin:U}, we obtain \begin{align} \nga{\mathbf{q}_t - (\mathbf{q}_0 + t \mathbf{v}_0)} \leq& (1 + \lambda_1^{1 - 2\gamma} L_1) \int_0^t \int_0^{s} \nga{\mathbf{q}_{\tau}} d \tau d s + \lambda_1^{1 - 2\gamma} L_0 \frac{T^2}{2} \notag\\ \leq& (1 + \lambda_1^{1 - 2\gamma} L_1) \int_0^t \int_0^{s} \nga{\mathbf{q}_\tau - (\mathbf{q}_0 + \tau \mathbf{v}_0)} d \tau d s \notag\\ &\quad \qquad + (1 + \lambda_1^{1 - 2\gamma} L_1) \int_0^t \int_0^{s} \nga{\mathbf{q}_0 + \tau \mathbf{v}_0} d \tau d s + \lambda_1^{1 - 2\gamma} L_0 \frac{T^2}{2} \notag\\ \leq& (1 + \lambda_1^{1 - 2\gamma} L_1) \frac{T^2}{2} \sup_{\tau \in [0,T]} \nga{\mathbf{q}_\tau - (\mathbf{q}_0 + \tau \mathbf{v}_0)} \notag\\ &\quad \qquad + (1 + \lambda_1^{1 - 2\gamma} L_1) \frac{T^2}{2} \max \{ \nga{\mathbf{q}_0}, \nga{\mathbf{q}_0 + T \mathbf{v}_0}\} + \lambda_1^{1 - 2\gamma} L_0 \frac{T^2}{2}. \label{bound:qs} \end{align} Here note that, using the convexity of the function $f(\tau) = \nga{\mathbf{q}_0 + \tau \mathbf{v}_0}$, we have \begin{align} \sup_{\tau \in [0,T]} \nga{\mathbf{q}_0 + \tau \mathbf{v}_0} \leq \max \{ \nga{\mathbf{q}_0}, \nga{\mathbf{q}_0 + T \mathbf{v}_0} \} \label{eq:stupid:convex:bnd} \end{align} which we used in the final bound in \eqref{bound:qs}. Thus, using assumption \eqref{cond:t:apriori} and taking the supremum with respect to $t \in [0,T]$ in \eqref{bound:qs}, we conclude the first bound \eqref{sup:qs}. Turning next to the second bound \eqref{sup:vs}, we integrate the second equation in \eqref{eq:dynamics:xxx} once and use \cref{lem:ineq:C} and inequality \eqref{cond:sublin:U} again to obtain \begin{align} \label{sup:vs:a} \nga{\mathbf{v}_t - \mathbf{v}_0} \leq (1 + \lambda_1^{1 - 2\gamma} L_1) \int_0^t \nga{ \mathbf{q}_s} d s + \lambda_1^{1 - 2\gamma} L_0 t \leq (1 + \lambda_1^{1 - 2\gamma} L_1) T \sup_{s \in [0,T]} \nga{\mathbf{q}_s} + \lambda_1^{1 - 2\gamma} L_0 T \end{align} for every $t \in [0,T]$. From \eqref{sup:qs}, it follows that \begin{align} \label{sup:qs:b} \sup_{s\in [0, T]} \nga{\mathbf{q}_s} \leq [1 + (1 + \lambda_1^{1 - 2\gamma} L_1) T^2] \max\{\nga{\mathbf{q}_0},\nga{\mathbf{q}_0 + T \mathbf{v}_0} \} + \lambda_1^{1 - 2\gamma} L_0 T^2. \end{align} Hence, we conclude \eqref{sup:vs} from \eqref{sup:vs:a} and \eqref{sup:qs:b}, completing the proof. \end{proof} \begin{Prop}\label{lem:contr:1} Impose \cref{A123}, \cref{B123}, (B1) and consider any $T \in \mathbb{R}^+$ satisfying \begin{align} \label{cond:t:contr:1} T \leq (1 + \lambda_1^{1 - 2 \gamma} L_1)^{-1/2}, \end{align} where $L_1$ is as in \eqref{Hess:bound} and $\lambda_1$ is the top eigenvalue of $\mathcal{C}$. Then, for any $(\mathbf{q}_0,\mathbf{v}_0), (\tilde \mathbf{q}_0, \tilde \mathbf{v}_0) \in \mathbb{H}_\gamma \times \mathbb{H}_\gamma$, \begin{multline} \sup_{t\in[0, T]} \nga{\mathbf{q}_t(\mathbf{q}_0,\mathbf{v}_0) - \mathbf{q}_t(\tilde \mathbf{q}_0, \tilde \mathbf{v}_0) -(\mathbf{q}_0-\tilde \mathbf{q}_0)-t(\mathbf{v}_0-\tilde \mathbf{v}_0)} \\ \leq (1+ \lambda_1^{1 - 2 \gamma} L_1) T^2 \max\left\{\nga{\mathbf{q}_0-\tilde \mathbf{q}_0}, \nga{(\mathbf{q}_0-\tilde \mathbf{q}_0) + T(\mathbf{v}_0-\tilde \mathbf{v}_0)}\right\}.
\label{eq:Hgamma:lip:bnd} \end{multline} \end{Prop} \begin{Rmk} \label{rmk:taking:stock:lip_1} Observe that, given any $\mathbf{q}_0, \tilde{\mathbf{q}}_0, \mathbf{v}_0 \in \mathbb{H}_\gamma$, by choosing \begin{align} \tilde{\mathbf{v}}_0 := \mathbf{v}_0 + \frac{1}{T} (\mathbf{q}_0 - \tilde{\mathbf{q}}_0), \label{eq:naive:nudge:IC} \end{align} then under \eqref{eq:Hgamma:lip:bnd} we obtain \begin{align} |\mathcal{C}^{-\gamma} \left[ \mathbf{q}_T(\mathbf{q}_0,\mathbf{v}_0)-\mathbf{q}_T(\tilde \mathbf{q}_0, \tilde \mathbf{v}_0)\right]| \leq (1+ \lambda_1^{1 - 2 \gamma} L_1) T^2 |\mathcal{C}^{-\gamma}(\mathbf{q}_0-\tilde \mathbf{q}_0)|, \label{eq:full:monte:lip:bnd} \end{align} which thus yields a contraction when $T < (1+ \lambda_1^{1 - 2 \gamma} L_1)^{-1/2}$. This observation for the initial conditions in \eqref{eq:naive:nudge:IC} has previously been employed in \cite{BoEbZi2018} and, in the finite-dimensional case where $\mathbb{H} = \mathbb{R}^k$ for some $k \in \mathbb{N}$, this bound can be used directly as a crucial step towards establishing the $\rho$-smallness and $\rho$-contraction conditions for the weak Harris theorem in \cite{hairer2011asymptotic}, as we illustrate below in \cref{sec:finite:dim:HMC}. The idea behind definition \eqref{eq:naive:nudge:IC} comes from the fact that for the simplified version of the dynamics in \eqref{eq:dynamics:xxx} where $d\mathbf{v}_t/dt = 0$, the positions of two associated trajectories starting from $(\mathbf{q}_0, \mathbf{v}_0)$ and $(\tilde{\mathbf{q}}_0, \tilde \mathbf{v}_0)$, with $\tilde \mathbf{v}_0$ as in \eqref{eq:naive:nudge:IC}, will coincide at time $T$. With a similar line of reasoning, one could consider a slightly better approximation of the dynamics in \eqref{eq:dynamics:xxx} by assuming instead $U = 0$, in which case the associated dynamics $d\mathbf{q}_t/dt = \mathbf{v}_t$, $d\mathbf{v}_t/dt = -\mathbf{q}_t$ describes a simple harmonic oscillator, with explicit solution $\mathbf{q}_t = \mathbf{q}_0 \cos t + \mathbf{v}_0 \sin t$. Here by defining $\tilde \mathbf{v}_0 = \mathbf{v}_0 + (\mathbf{q}_0 - \tilde{\mathbf{q}}_0) (\cos T/\sin T)$ one again concludes that the positions of two trajectories starting from $(\mathbf{q}_0, \mathbf{v}_0)$ and $(\tilde{\mathbf{q}}_0, \tilde \mathbf{v}_0)$ coincide after time $T$. While we could obtain similar results by using the latter approach, this would require the same type of assumptions we already impose in the first case, thus not showing a significant difference at least at the theoretical level. For simplicity, we therefore chose the first approach for our presentation. We remark, however, that the second approach, being associated with a better approximation of \eqref{eq:dynamics:xxx}, could lead to slightly less stringent constants in the conditions on the integration time $T$ in comparison to \eqref{cond:t:contr:1}. More generally, we may view \eqref{eq:naive:nudge:IC} as addressing a control problem. In fact, the methodology of the weak Harris theorem developed here could in principle allow the use of a wide variety of controls. More specifically, we are interested in any `reasonable' mapping $\Psi: \mathbb{H}_\gamma \times \mathbb{H}_\gamma \times \mathbb{H}_\gamma \to \mathbb{H}_\gamma$ such that, for any $\mathbf{q}_0, \tilde{\mathbf{q}}_0, \mathbf{v}_0 \in \mathbb{H}_\gamma$ and any suitable value of $T>0$, one would have \begin{align*} \mathbf{q}_T(\mathbf{q}_0,\mathbf{v}_0) \approx \mathbf{q}_T(\tilde \mathbf{q}_0, \Psi(\mathbf{q}_0, \tilde{\mathbf{q}}_0,\mathbf{v}_0 )).
\end{align*} In this connection one might hope to make a more delicate use of the Hamiltonian dynamics, presumably tailored to the fine properties of a particular potential $U$ of interest, to obtain refined results on convergence to equilibrium. In particular, we expect that the constraints imposed on $T$ by \cref{prop:basic:b:apriori} are overly stringent, and could potentially be improved by a different type of control. On the other hand, in the infinite dimensional Hilbert space setting which we are primarily focused on here, even \eqref{eq:naive:nudge:IC} is insufficient for the aim of establishing contractivity in the Markovian dynamics, as the law of this choice of $\tilde{\mathbf{v}}_0$ is not generically absolutely continuous with respect to the law of $\mathbf{v}_0$; cf. \cref{prop:local_contr} and \cref{prop:smallness} below. We proceed instead by using the refinement \eqref{eq:kinetic_coupling_idea} which is shown to produce a contraction in \cref{prop:FP:Type:contract}. Here we are making use of some of the intuition and approach to ergodicity in the stochastic fluids literature, cf. \cite{mattingly2002exponential, kuksin2012mathematics, GlattHoltzMattinglyRichards2016}. In these works one modifies the noise path on low modes with the expectation that, if one induces a contraction on the large scale dynamics for sufficiently many low frequency modes, then the high frequencies (or small scales) will also contract, being enslaved to the behavior of the system at large scales. This effect, sometimes referred to as a Foias-Prodi bound \cite{foias1967comportement}, is widely observed in the fluids and infinite dimensional dynamical systems literature. \end{Rmk} \begin{proof}[Proof of \cref{lem:contr:1}] Let $\mathbf{z}_t = \mathbf{q}_t(\mathbf{q}_0,\mathbf{v}_0) - \mathbf{q}_t(\widetilde{\mathbf{q}}_0, \widetilde{\mathbf{v}}_0)$ and $\mathbf{w}_t = d\mathbf{z}_t/ dt$. Then, for any $t > 0$, $\mathbf{z}_t$ satisfies \begin{align} \label{d2:zt} \frac{d^2 \mathbf{z}_t}{dt^2} = - \mathbf{z}_t - \mathcal{C} g(t) \end{align} where \begin{align} \label{def:g} g(t) := D U(\mathbf{q}_t(\mathbf{q}_0,\mathbf{v}_0)) - DU(\mathbf{q}_t(\widetilde{\mathbf{q}}_0, \widetilde{\mathbf{v}}_0)). \end{align} Therefore, for every $t \geq 0$, \begin{align*} \mathcal{C}^{- \gamma} \mathbf{z}_t = \mathcal{C}^{-\gamma}(\mathbf{z}_0 + t \mathbf{w}_0) - \int_0^t \int_0^{s} [\mathcal{C}^{-\gamma} \mathbf{z}_\tau + \mathcal{C}^{1 - \gamma} g(\tau)] d \tau ds. \end{align*} By using \cref{lem:ineq:C} and inequality \eqref{lipschitz:new}, we obtain \begin{align*} \nga{ \mathbf{z}_t - (\mathbf{z}_0 + t \mathbf{w}_0)} &\leq \int_0^t \int_0^{s} \left[ \nga{\mathbf{z}_\tau} + \lambda_1^{1 - 2 \gamma}\ngad{ g(\tau)} \right] d \tau d s \\ &\leq (1 + \lambda_1^{1 - 2 \gamma} L_1) \int_0^t \int_0^{s} \nga{ \mathbf{z}_\tau} d \tau ds. \end{align*} The remaining portion of the proof follows analogously to the proof of \eqref{sup:qs}. \end{proof} In view of \cref{rmk:taking:stock:lip_1}, the bounds in \cref{lem:contr:1} are not sufficient for our application to prove the $\rho$-contractivity and $\rho$-smallness conditions for the weak Harris theorem below in \cref{sec:Ptwise:contract:MD}. For this purpose we consider a modified version of \eqref{eq:naive:nudge:IC} where the shift only involves a low-mode, finite-dimensional approximation of $\mathbf{q}_0 - \tilde{\mathbf{q}}_0$; a numerical sketch of this nudged coupling is given immediately below.
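Under the same illustrative assumptions as in the sketch of the chain above (spectral truncation with $\lambda_i = i^{-2}$, a placeholder gradient $DU$ with $L_1 = 1$, and a fine leapfrog discretization standing in for the exact flow), this low-mode nudging may be rendered numerically as follows. We caution that the contraction established in \cref{prop:FP:Type:contract} below is with respect to the weighted norm $|\cdot|_{\gamma, \alpha}$ introduced in \eqref{eq:wn:gamma:alpha}; the unweighted norms printed here are merely indicative.

```python
import numpy as np

# Illustrative low-mode nudging: couple two trajectories by shifting only
# the first N velocity modes.  All parameters are placeholder choices.

d, N, T = 50, 10, 0.5                        # N low modes are nudged
lam = 1.0 / np.arange(1, d + 1) ** 2
rng = np.random.default_rng(1)

def DU(q):
    return np.tanh(q)                        # |D^2 U| <= 1, so L_1 = 1 here

def exact_flow(q, v, T=T, n_steps=400):
    q, v = q.copy(), v.copy()
    h = T / n_steps
    for _ in range(n_steps):
        v -= 0.5 * h * (q + lam * DU(q))
        q += h * v
        v -= 0.5 * h * (q + lam * DU(q))
    return q

q0, qt0 = rng.standard_normal(d), rng.standard_normal(d)
v0 = np.sqrt(lam) * rng.standard_normal(d)   # common draw from N(0, C)

shift = np.zeros(d)
shift[:N] = (q0[:N] - qt0[:N]) / T           # Pi_N component only; Pi^N is zero

qT = exact_flow(q0, v0)
qTt = exact_flow(qt0, v0 + shift)            # nudged copy

print(np.linalg.norm(q0 - qt0), np.linalg.norm(qT - qTt))
```

With these placeholder parameters, the second condition in \eqref{cond:t:N:contr} holds comfortably for $\gamma = 0$ and $L_1 = 1$ (indeed $\lambda_{N+1} = 11^{-2} \leq 1/4$), and one expects the nudged trajectories to approach one another over repeated Markov steps.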
Before proceeding, let us introduce some notation. Split $\mathbb{H}$ into the space $\mathbb{H}_N := \operatorname{span}\{\mathbf{e}_1, \cdots, \mathbf{e}_N\}$ and its orthogonal complement $\mathbb{H}^N$, so that $\mathbb{H} = \mathbb{H}_N \oplus \mathbb{H}^N$, where $N$ satisfies the second condition in \eqref{cond:t:N:contr} below. Recall, as in \eqref{eq:get:high:low}, that, given $\mathbf{f} \in \mathbb{H}$, we denote by $\Pi_N \mathbf{f}$ and $\Pi^N \mathbf{f}$ the orthogonal projections onto $\mathbb{H}_N$ and $\mathbb{H}^N$, respectively. This splitting is defined such that the Lipschitz constant of the projection of $- \mathcal{C} D U(\mathbf{f})$ onto $\mathbb{H}^N$ is at most $1/4$. For any $\gamma \in [0,1/2)$ and $\alpha \in \mathbb{R}^+$, we consider the following auxiliary norm: \begin{align} |\mathbf{f}|_{\gamma, \alpha} := |\Pi_N \mathbf{f}|_{\gamma} + \alpha | \Pi^N \mathbf{f}|_{\gamma}, \; \text{for any } \mathbf{f} \in \mathbb{H}_{\gamma}. \label{eq:wn:gamma:alpha} \end{align} \begin{Rmk} \label{rmk:equiv:norms} Notice that $\normga{\,\cdot\,}$ is equivalent to $\nga{\,\cdot\,}$ and \begin{align} \label{equiv:norms} \min \{1,\alpha\} \nga{\mathbf{f}} \leq \normga{\mathbf{f}} \leq \sqrt{2} \max \{1, \alpha\} \nga{\mathbf{f}}, \; \text{ for all } \mathbf{f} \in \mathbb{H}_{\gamma}. \end{align} In particular, for $\alpha$ defined as in \eqref{eq:alpha:lip:def} below, we have \begin{align} \label{equiv:norms:b} \nga{\mathbf{f}} \leq \normga{\mathbf{f}} \leq \sqrt{2} \alpha \nga{\mathbf{f}}, \; \text{ for all } \mathbf{f} \in \mathbb{H}_{\gamma}. \end{align} \end{Rmk} \begin{Prop} \label{prop:FP:Type:contract} Impose \cref{A123}, \cref{B123}, (B1). Let $(\mathbf{q}_0,\mathbf{v}_0), (\tilde \mathbf{q}_0, \tilde \mathbf{v}_0) \in \mathbb{H}_\gamma \times \mathbb{H}_\gamma$ be such that \begin{align} \label{eq:kinetic_coupling_idea} \Pi^N \tilde \mathbf{v}_0 = \Pi^N \mathbf{v}_0 \quad \text{and} \quad \Pi_N \tilde \mathbf{v}_0 = \Pi_N \mathbf{v}_0 + T^{-1} (\Pi_N \mathbf{q}_0 - \Pi_N \tilde \mathbf{q}_0). \end{align} Assume that $T \in \mathbb{R}^+$ and $N \in \mathbb{N}$ satisfy \begin{align} \label{cond:t:N:contr} T \leq \frac{1}{[2(1 + \lambda_1^{1 -2 \gamma} L_1)]^{1/2}} \quad \mbox{and} \quad \lambda_{N+1}^{1 - 2 \gamma} \leq \frac{1}{4 L_1}, \end{align} and let \begin{align} \label{eq:alpha:lip:def} \alpha = 4(1 + \lambda_1^{1 - 2 \gamma} L_1). \end{align} Here $\gamma$ is specified in \cref{B123}, $L_1$ is as in \eqref{Hess:bound} and $\lambda_j$ denote the eigenvalues of $\mathcal{C}$ in non-increasing order as in \cref{A123}. Then, \begin{align} \normga{\mathbf{q}_T(\mathbf{q}_0,\mathbf{v}_0) - \mathbf{q}_T(\tilde \mathbf{q}_0, \tilde \mathbf{v}_0)} \leq \kappa_1 \normga{\mathbf{q}_0 - \tilde \mathbf{q}_0}, \label{eq:FP:type:cont:Multi} \end{align} where $|\cdot|_{\gamma, \alpha}$ is the norm defined in \eqref{eq:wn:gamma:alpha} and \begin{align*} \kappa_1 = 1 - \frac{T^2}{12}. \end{align*} \end{Prop} \begin{proof} As in the proof of \cref{lem:contr:1}, let us denote $\mathbf{z}_t := \mathbf{q}_t(\mathbf{q}_0,\mathbf{v}_0) - \mathbf{q}_t(\tilde \mathbf{q}_0, \tilde \mathbf{v}_0)$ and $\mathbf{w}_t = d\mathbf{z}_t/ dt$, for all $t \geq 0$. Notice that \begin{align} \label{eq:coupling} \Pi_N \mathbf{z}_0 + T \Pi_N \mathbf{w}_0 = 0 \quad \mbox{and} \quad \Pi^N \mathbf{w}_0 = 0.
\end{align} Applying $\mathcal{C}^{-\gamma}$ to \eqref{d2:zt}, projecting onto $\mathbb{H}_N$ and integrating, yields \begin{align*} \mathcal{C}^{-\gamma} \Pi_N \mathbf{z}_T = -\int_0^T \int_0^s \left[ \mathcal{C}^{-\gamma} \Pi_N \mathbf{z}_\tau + \mathcal{C}^{1-\gamma} \Pi_N g(\tau) \right] d \tau ds, \end{align*} with $g(\cdot)$ defined as in \eqref{def:g}. Thus, using \eqref{eq:Poincare} in \cref{lem:ineq:C} and \eqref{Hess:bound} of \cref{B123}, we estimate \begin{align} \nga{ \Pi_N \mathbf{z}_T} \leq& \int_0^T \! \int_0^s \left[ \nga{ \mathbf{z}_\tau} + \lambda_1^{1 - 2 \gamma} \ngad{g(\tau) } \right] d \tau ds \leq (1 + \lambda_1^{1 - 2 \gamma} L_1) \frac{T^2}{2} \sup_{s \in [0,T]} \nga{ \mathbf{z}_s} \notag\\ =& \frac{\alpha T^2}{8} \sup_{s \in [0,T]} \nga{ \mathbf{z}_s}. \label{est:ztl} \end{align} On the other hand, by Duhamel's formula, we have \begin{align*} \mathbf{z}_T = \mathbf{z}_0 \cos(T) + \mathbf{w}_0 \sin(T) - \int_0^T \sin(T - s) \, \mathcal{C} g(s) ds, \end{align*} and hence, with \eqref{eq:coupling}, \begin{align*} \mathcal{C}^{-\gamma} \Pi^N \mathbf{z}_T = \mathcal{C}^{-\gamma} \Pi^N \mathbf{z}_0 \cos(T) - \int_0^T \sin(T - s) \, \mathcal{C}^{1 - \gamma} \Pi^N g(s) ds. \end{align*} Now, using \eqref{eq:Inv:Poincare} of \cref{lem:ineq:C} and (B1) of \cref{B123}, we estimate \begin{align*} \nga{\Pi^N \mathbf{z}_T} \leq& \nga{ \Pi^N \mathbf{z}_0} \cos(T) + \lambda_{N+1}^{1 - 2 \gamma} L_1 \int_0^T \sin(T - s) \nga{ \mathbf{z}_s} ds \\ \leq& \nga{\Pi^N \mathbf{z}_0} \cos(T) + \frac{1 - \cos(T)}{4} \sup_{s \in [0,T]} \nga{\mathbf{z}_s}, \end{align*} where for the final inequality we used the second condition in \eqref{cond:t:N:contr}. Therefore, using that $\cos(s) \leq 1 - s^2/2 + s^4/24$ and $1 - \cos(s) \leq s^2/2$ for every $s \in \mathbb{R}$, we obtain \begin{align} \label{est:zth} \nga{\Pi^N \mathbf{z}_T} \leq \left( 1 - \frac{T^2}{2} + \frac{T^4}{24} \right)\nga{\Pi^N \mathbf{z}_0} + \frac{T^2}{8} \sup_{s \in [0,T]} \nga{ \mathbf{z}_s}. \end{align} From \cref{lem:contr:1} and a bound as in \eqref{eq:stupid:convex:bnd} it follows that \begin{align*} \sup_{s \in [0,T]} \nga{ \mathbf{z}_s } \leq [1 + (1 + \lambda_1^{1 -2 \gamma} L_1) T^2] \max \left\{ \nga{ \mathbf{z}_0}, \nga{\mathbf{z}_0 + T \mathbf{w}_0} \right\}. \end{align*} However, from \eqref{eq:coupling} we have $\mathbf{z}_0 + T \mathbf{w}_0 = \Pi^N \mathbf{z}_0$, so that $\max \{ \nga{ \mathbf{z}_0}, \nga{\mathbf{z}_0 + T \mathbf{w}_0}\} = \nga{ \mathbf{z}_0}$. With this and the first condition in \eqref{cond:t:N:contr}, we therefore obtain \begin{align} \sup_{s \in [0,T]} \nga{ \mathbf{z}_s } \leq [1 + (1 + \lambda_1^{1 -2 \gamma} L_1)T^2] \nga{ \mathbf{z}_0} \leq \frac{3}{2} \nga{ \mathbf{z}_0}. \label{unif:bound:z} \end{align} Using \eqref{unif:bound:z} in \eqref{est:ztl} and in \eqref{est:zth}, we obtain \begin{align*} \nga{\Pi_N \mathbf{z}_T} \leq \frac{3 \alpha T^2}{16} \nga{\mathbf{z}_0} \end{align*} and \begin{align*} \nga{\Pi^N \mathbf{z}_T} \leq \left( 1 - \frac{T^2}{2} + \frac{T^4}{24} \right)\nga{\Pi^N \mathbf{z}_0} + \frac{3 T^2}{16} \nga{\mathbf{z}_0}, \end{align*} so that finally \begin{align} \normga{\mathbf{z}_T} =& \nga{ \Pi_N \mathbf{z}_T} + \alpha \nga{\Pi^N \mathbf{z}_T} \leq \frac{3\alpha T^2}{8} \nga{ \mathbf{z}_0} + \alpha \left( 1 - \frac{T^2}{2} + \frac{T^4}{24} \right) \nga{ \Pi^N \mathbf{z}_0} \notag\\ \leq& \frac{3 \alpha T^2}{8} \nga{\Pi_N \mathbf{z}_0} + \alpha \left( 1 - \frac{T^2}{8} + \frac{T^4}{24} \right) \nga{ \Pi^N \mathbf{z}_0}.
\label{est:normga:zt} \end{align} From the first condition in \eqref{cond:t:N:contr} and the definition of $\alpha$ in \eqref{eq:alpha:lip:def}, it follows in particular that $\alpha T^2 \leq 2$ and also $T \leq 1$, so that $T^4 \leq T^2$. Therefore, from \eqref{est:normga:zt}, we have \begin{align*} \normga{\mathbf{z}_T} \leq \frac{3}{4} \nga{\Pi_N \mathbf{z}_0} + \alpha \left( 1 - \frac{T^2}{12} \right) \nga{\Pi^N \mathbf{z}_0} \leq \max \left\{ 1 - \frac{T^2}{12}, \frac{3}{4} \right\} \normga{\mathbf{z}_0} = \left( 1 - \frac{T^2}{12} \right) \normga{\mathbf{z}_0}, \end{align*} where the equality above follows again from the fact that $T \leq 1$, by the first condition in \eqref{cond:t:N:contr}. This completes the proof. \end{proof} \section{Foster-Lyapunov Structure} \label{sec:f:l:bnd} This section provides the details of the Foster-Lyapunov structure for the Markov kernel $P$ defined by \eqref{eq:PEHMC:kernel:def} under \cref{ass:higher:reg:C}, \cref{B123}. First, we recall the underlying definition: \begin{Def}\label{def:Lyap} We say that $V: \mathbb{H}_{\gamma} \to \mathbb{R}^+$ is a \emph{Foster-Lyapunov} (or, simply, a \emph{Lyapunov}) function for the Markov kernel $P$ if $V$ is integrable with respect to $P^n(\mathbf{q}, \cdot)$ for every $\mathbf{q} \in \mathbb{H}_\gamma$ and $n \in \mathbb{N}$, and satisfies the following inequality \begin{align}\label{ineq:Lyap} P^nV(\mathbf{q}) \leq \kappa_V^n V(\mathbf{q}) +K_V \quad \mbox{ for all } \mathbf{q} \in \mathbb{H}_\gamma \mbox{ and } n \in \mathbb{N}, \end{align} for some $\kappa_V \in [0,1)$ and $K_V > 0$. \end{Def} With this definition in hand the main result of this section is as follows: \begin{Prop}\label{prop:FL} Impose \cref{A123}, \cref{ass:higher:reg:C} and \cref{B123} and suppose that $T \in \mathbb{R}^+$ satisfies \begin{align} \label{cond:t:Lyap} T \leq \min \left\{ \frac{1}{[2(1 + \lambda_1^{1 - 2\gamma} L_1)]^{1/2}}, \frac{L_2^{1/2}}{2\sqrt{6} (1 + \lambda_1^{1 - 2\gamma} L_1)} \right\}, \end{align} where $L_1$ and $L_2$ are defined as in \eqref{Hess:bound}, \eqref{dissip:new}, respectively, and $\lambda_1$ is the largest eigenvalue of $\mathcal{C}$. Then, the functions \begin{align}\label{eq:FL:1} V_{1,i}(\mathbf{q}) = \nga{\mathbf{q}}^i, \quad i \in \mathbb{N}, \end{align} and \begin{align}\label{eq:FL:2} V_{2, \eta}(\mathbf{q}) = \exp(\eta \nga{\mathbf{q}}^2), \end{align} with $\eta \in \mathbb{R}^+$ satisfying \begin{align} \eta < \left[ c \operatorname{Tr}(\mathcal{C}^{1 - 2 \gamma}) \left( L_2^{-1} + T^2 \right) \right]^{-1}, \label{cond:eta} \end{align} for a suitable absolute constant $c \in \mathbb{R}^+$, are Lyapunov functions for the Markov kernel $P$ defined in \eqref{eq:PEHMC:kernel:def}. \end{Prop} \begin{proof} We start by showing that $V_{1,2}(\mathbf{q}) = \nga{\mathbf{q}}^2$ is a Lyapunov function for $P$. First, notice that $\frac{d}{dt}\nga{\mathbf{q}_t}^2 = 2 \langle \mathbf{q}_t, \mathbf{v}_t \rangle_{\gamma}$, so that \begin{align} \label{est:norm:q} \nga{\mathbf{q}_T}^2 = \nga{\mathbf{q}_0}^2 + 2 \int_0^T \langle \mathbf{q}_s, \mathbf{v}_s \rangle_{\gamma} ds. \end{align} Moreover, from \eqref{eq:dynamics:xxx} \begin{align}\label{eq:der:q:v} \frac{d}{ds} \langle \mathbf{q}_s, \mathbf{v}_s \rangle_{\gamma} = \nga{\mathbf{v}_s}^2 - \nga{\mathbf{q}_s}^2 - \langle \mathbf{q}_s, \mathcal{C} D U (\mathbf{q}_s) \rangle_{\gamma}.
\end{align} Hence, using \cref{B123}, (B2), \begin{align}\label{est:q:v} \langle \mathbf{q}_s, \mathbf{v}_s \rangle_{\gamma} &= \langle \mathbf{q}_0, \mathbf{v}_0 \rangle_{\gamma} + \int_0^s \left[ \nga{\mathbf{v}_\tau}^2 - \nga{\mathbf{q}_\tau}^2 - \langle \mathbf{q}_\tau, \mathcal{C} D U (\mathbf{q}_\tau) \rangle_{\gamma} \right] d\tau \nonumber\\ &\leq \langle \mathbf{q}_0, \mathbf{v}_0 \rangle_{\gamma} + \int_0^s \left[ \nga{\mathbf{v}_\tau}^2 - L_2 \nga{\mathbf{q}_\tau}^2 + L_3 \right] d\tau, \end{align} for any $s \geq 0$. Using \eqref{est:q:v} in \eqref{est:norm:q}, we obtain \begin{align} \label{bound:qt} \nga{\mathbf{q}_T}^2 \leq \nga{\mathbf{q}_0}^2 + 2T \langle \mathbf{q}_0, \mathbf{v}_0 \rangle_{\gamma} + 2 \int_0^T \int_0^s \left[ \nga{\mathbf{v}_\tau}^2 - L_2 \nga{\mathbf{q}_\tau}^2 + L_3 \right] d\tau ds. \end{align} From \cref{prop:basic:b:apriori}, \eqref{sup:vs} and hypothesis \eqref{cond:t:Lyap}, it follows that \begin{align*} \nga{\mathbf{v}_\tau} \leq \frac{7}{4}\nga{\mathbf{v}_0} + \frac{3}{2} (1 + \lambda_1^{1 - 2\gamma} L_1) \tau \nga{\mathbf{q}_0} + \frac{3}{2} \lambda_1^{1 - 2 \gamma} L_0 \tau, \end{align*} so that \begin{align} \label{bound:vtau} \nga{\mathbf{v}_\tau}^2 \leq \frac{49}{8} \nga{\mathbf{v}_0}^2 + 9(1 + \lambda_1^{1 - 2\gamma} L_1)^2 \tau^2 \nga{\mathbf{q}_0}^2 + 9 (\lambda_1^{1 - 2\gamma} L_0)^2 \tau^2, \end{align} which holds for any $\tau \geq 0$. Moreover, from \eqref{sup:qs} and using hypothesis \eqref{cond:t:Lyap} again, we obtain that \begin{align*} \nga{\mathbf{q}_\tau - (\mathbf{q}_0 + \tau \mathbf{v}_0)} \leq \frac{\nga{\mathbf{q}_0}}{2} + \frac{\tau}{2} \nga{\mathbf{v}_0} + \lambda_1^{1 - 2\gamma} L_0 \tau^2, \end{align*} so that \begin{align*} \nga{\mathbf{q}_\tau} \geq \frac{\nga{\mathbf{q}_0}}{2} - \frac{3}{2} \tau \nga{\mathbf{v}_0} - \lambda_1^{1 - 2\gamma} L_0 \tau^2 \end{align*} and, consequently, \begin{align*} 2 \nga{\mathbf{q}_\tau}^2 \geq \frac{\nga{\mathbf{q}_0}^2}{4} - 9 \tau^2 \nga{\mathbf{v}_0}^2 - 4 (\lambda_1^{1 - 2\gamma} L_0)^2 \tau^4. \end{align*} Thus, from \eqref{ineq:K:L} and \eqref{cond:t:Lyap}, it follows that \begin{align} - 2 L_2 \nga{\mathbf{q}_\tau}^2 &\leq -\frac{L_2}{4} \nga{\mathbf{q}_0}^2 + 9 L_2 \tau^2 \nga{\mathbf{v}_0}^2 + 4 L_2 (\lambda_1^{1 - 2\gamma} L_0)^2 \tau^4 \nonumber \\ &\leq -\frac{L_2}{4} \nga{\mathbf{q}_0}^2 + 9 (1 + \lambda_1^{1- 2 \gamma} L_1) \tau^2 \nga{\mathbf{v}_0}^2 + 4 (1 + \lambda_1^{1- 2 \gamma} L_1) (\lambda_1^{1 - 2\gamma} L_0)^2 \tau^4 \nonumber \\ & \leq -\frac{L_2}{4} \nga{\mathbf{q}_0}^2 + \frac{9}{2} \nga{\mathbf{v}_0}^2 + 2 (\lambda_1^{1 - 2\gamma} L_0)^2 \tau^2, \label{bound:qtau} \end{align} for any $\tau \geq 0$. Using \eqref{bound:vtau} and \eqref{bound:qtau} in \eqref{bound:qt} yields \begin{align} \label{bound:qt:2} \nga{\mathbf{q}_T}^2 \leq& \left( 1 + \frac{3}{2} (1 + \lambda_1^{1 - 2\gamma} L_1)^2 T^4 - \frac{L_2}{8} T^2 \right)\nga{\mathbf{q}_0}^2 \notag\\ &\qquad + 2T \langle \mathbf{q}_0, \mathbf{v}_0 \rangle_{\gamma} + \frac{67}{8} T^2 \nga{\mathbf{v}_0}^2 + \frac{5}{3} (\lambda_1^{1 - 2\gamma} L_0)^2 T^4 + L_3 T^2. \end{align} By hypothesis \eqref{cond:t:Lyap}, we have that $3(1 + \lambda_1^{1 - 2\gamma} L_1)^2 T^4/2 \leq L_2 T^2/16$. Thus, \begin{align} \label{est:term:qt} 1 + \frac{3}{2} (1 + \lambda_1^{1 - 2\gamma} L_1)^2 T^4 - \frac{L_2}{8} T^2 \leq 1 - \frac{L_2}{16}T^2 \leq e^{- \frac{L_2 T^2}{16}}, \end{align} where we used the fact that $1 - x \leq e^{-x}$, for every $x \geq 0$.
Using \eqref{est:term:qt} in \eqref{bound:qt:2}, taking expected values on both sides of the resulting inequality, and noting that, by symmetry, $\mathbb{E}\langle \mathbf{q}_0, \mathbf{v}_0 \rangle_{\gamma} = 0$, we obtain \begin{align} \label{quad:Lyap} P V_{1,2}(\mathbf{q}_0) = \mathbb{E} \nga{\mathbf{q}_T}^2 \leq e^{- \frac{L_2 T^2}{16}} \nga{\mathbf{q}_0}^2 + \left( \frac{67}{8} \operatorname{Tr}(\mathcal{C}^{1 - 2 \gamma}) + \frac{5}{3} (\lambda_1^{1 - 2\gamma} L_0)^2 T^2 + L_3 \right)T^2. \end{align} Hence, after iterating on the result in \eqref{quad:Lyap} $n$ times, we have \begin{align} P^n V_{1,2}(\mathbf{q}_0) =& \mathbb{E} \nga{Q_n(\mathbf{q}_0)}^2 \notag\\ \leq& e^{- \frac{n L_2 T^2}{16}} \nga{\mathbf{q}_0}^2 + \left( \frac{67}{8} \operatorname{Tr}(\mathcal{C}^{1 - 2 \gamma}) + \frac{5}{3} (\lambda_1^{1 - 2\gamma} L_0)^2 T^2 + L_3 \right) T^2 \sum_{j=0}^{n-1} e^{- \frac{jL_2 T^2}{16}}. \label{quad:Lyap:n} \end{align} Notice that \begin{align*} T^2 \sum_{j=0}^{n-1} e^{- \frac{jL_2 T^2}{16}} \leq \frac{T^2}{1 - e^{- \frac{L_2 T^2}{16}}} \leq \frac{48}{L_2}, \end{align*} where in the last inequality we used that $x/(1 - e^{-x}) \leq e \leq 3$, for every $0 \leq x \leq 1$. Thus, \begin{align*} P^n V_{1,2}(\mathbf{q}_0) \leq e^{- \frac{n L_2 T^2}{16}} \nga{\mathbf{q}_0}^2 + \left( \frac{67}{8} \operatorname{Tr}(\mathcal{C}^{1 - 2 \gamma}) + \frac{5}{3} (\lambda_1^{1 - 2\gamma} L_0)^2 T^2 + L_3 \right) \frac{48}{L_2}, \end{align*} which shows \eqref{ineq:Lyap} for $V_{1,2}$. We now turn to establishing \eqref{ineq:Lyap} in the general case of $V_{1,i}$, for any $i \in \mathbb{N}$. Here, invoking Young's inequality to estimate the term $ 2T \langle \mathbf{q}_0, \mathbf{v}_0 \rangle_\gamma$ in \eqref{bound:qt:2} as \begin{align*} 2T \langle \mathbf{q}_0, \mathbf{v}_0 \rangle_{\gamma} \leq \frac{L_2 T^2}{32} \nga{\mathbf{q}_0}^2 + \frac{32}{L_2} \nga{\mathbf{v}_0}^2, \end{align*} and using again that $3(1 + \lambda_1^{1 - 2\gamma} L_1)^2 T^4/2 \leq L_2 T^2/16$, it follows from \eqref{bound:qt:2} that \begin{align} \label{bound:qt:3} \nga{\mathbf{q}_T}^2 \leq \left( 1 - \frac{L_2 T^2}{32} \right)\nga{\mathbf{q}_0}^2 + \left( \frac{67}{8} T^2 + \frac{32}{L_2} \right) \nga{\mathbf{v}_0}^2 + \frac{5}{3} (\lambda_1^{1 - 2\gamma} L_0)^2 T^4 + L_3 T^2. \end{align} Invoking the basic inequalities $1 - x \leq e^{-x}$ and $(x + y)^{1/2} \leq x^{1/2} + y^{1/2}$, valid for every $x,y \geq 0$, we obtain, for any $i \geq 1$, \begin{align}\label{V1i:Lyap} \nga{\mathbf{q}_T}^i &\leq e^{-\frac{L_2 T^2 i}{64}} \nga{\mathbf{q}_0}^i + C \sum_{j = 1}^i \left( e^{-\frac{L_2 T^2 }{64}}\nga{\mathbf{q}_0} \right)^j \left( \nga{\mathbf{v}_0}^{i-j} + 1 \right) \notag \\ &\leq e^{-\frac{L_2 T^2 i}{65}} \nga{\mathbf{q}_0}^i + \tilde{C} \left( \nga{\mathbf{v}_0}^i + 1 \right), \end{align} where in the second inequality we invoked Young's inequality to estimate each term inside the sum, and with $C$ and $\tilde C$ being positive constants depending on $i, \lambda_1, \gamma, T, L_0, L_2$ and $L_3$. Since $\mathbf{v}_0 \sim \mathcal{N}(0, \mathcal{C})$, by Fernique's theorem (see, e.g., \cite[Theorem 2.7]{DaZa2014}) we have that $\mathbb{E} \nga{\mathbf{v}_0}^i < \infty$ for every $i \in \mathbb{N}$. Therefore, we conclude the result for $V_{1,i}$ after taking expected values in \eqref{V1i:Lyap} and iterating $n$ times on the resulting inequality. Finally, let us show \eqref{ineq:Lyap} for $V_{2,\eta}$ as in \eqref{eq:FL:2}.
Multiplying by $\eta$, taking the exponential and expected value on both sides of \eqref{bound:qt:3}, it follows that \begin{multline}\label{ineq:exp:Lyap} P V_{2,\eta}(\mathbf{q}_0) = \mathbb{E} \exp \left( \eta \nga{\mathbf{q}_T}^2 \right) \\ \leq \exp\left( \eta \left( 1 - \frac{L_2 T^2}{32} \right) \nga{\mathbf{q}_0}^2 \right) \exp \left( \frac{5}{3} \eta (\lambda_1^{1 - 2 \gamma} L_0)^2 T^4 + \eta L_3 T^2 \right) \mathbb{E} \exp \left[\eta \left( \frac{32}{L_2} + \frac{67}{8} T^2 \right) \nga{\mathbf{v}_0}^2 \right]. \end{multline} Recalling $\mathbf{v}_0 \sim \mathcal{N}(0, \mathcal{C})$ and the assumption $\eta < \left[ 2 \operatorname{Tr}(\mathcal{C}^{1 - 2 \gamma}) \left( \frac{32}{L_2} + \frac{67}{8} T^2 \right) \right]^{-1}$, we have, again by Fernique's theorem \cite[Proposition 2.17]{DaZa2014}, and \cref{lem:ineq:C} that \begin{align} \label{est:Gaussian} \mathbb{E} \exp \left[ \eta \left( \frac{32}{L_2} + \frac{67}{8} T^2 \right) \nga{\mathbf{v}_0}^2 \right] \leq \left[ 1 - 2 \eta \left( \frac{32}{L_2} + \frac{67}{8} T^2 \right) \operatorname{Tr}(\mathcal{C}^{1 - 2 \gamma}) \right]^{-1/2}. \end{align} Thus, denoting $\tilde \kappa_2 = 1 - L_2 T^2/32$ and \begin{align*} R = \exp \left( \frac{5}{3} \eta (\lambda_1^{1 - 2 \gamma} L_0)^2 T^4 + \eta L_3 T^2 \right) \left[ 1 - 2 \eta \left( \frac{32}{L_2} + \frac{67}{8} T^2 \right) \operatorname{Tr}(\mathcal{C}^{1 - 2 \gamma}) \right]^{-1/2}, \end{align*} we obtain from \eqref{ineq:exp:Lyap} and \eqref{est:Gaussian} that \begin{align}\label{ineq:exp:Lyap:2} P V_{2,\eta}(\mathbf{q}_0) &\leq R \exp \left( \eta \tilde \kappa_2 \nga{\mathbf{q}_0}^2 \right) = R \exp \left( \eta \nga{\mathbf{q}_0}^2 \right)^{\tilde \kappa_2} \notag \\ &\leq \tilde \kappa_2 V_{2,\eta}(\mathbf{q}_0) + R^{\frac{1}{1- \tilde \kappa_2}} (1 - \tilde \kappa_2) = \tilde \kappa_2 V_{2,\eta}(\mathbf{q}_0) + R^{\frac{32}{L_2 T^2}} \frac{L_2 T^2}{32} \notag \\ &\leq e^{- \frac{L_2 T^2}{32}} V_{2,\eta}(\mathbf{q}_0) + R^{\frac{32}{L_2 T^2}} \frac{L_2 T^2}{32} \end{align} where the second estimate follows by Young's inequality. We conclude \eqref{ineq:Lyap} for $V_{2,\eta}$ after using \eqref{ineq:exp:Lyap:2} $n$ times iteratively. The proof is now complete. \end{proof} \section{Pointwise Contractivity Bounds for the Markovian Dynamics} \label{sec:Ptwise:contract:MD} This section details two pointwise contractivity bounds for the Markovian dynamics of the PHMC chain \eqref{eq:PEHMC:kernel:def} in a suitably tuned Wasserstein-Kantorovich metric. These bounds provide crucial ingredients needed for the weak Harris theorem, namely the so-called `$\rho$-contractivity' and `$\rho$-smallness' conditions, which, together with the Lyapunov structure identified in \cref{prop:FL}, form the core of the proof of \cref{thm:weak:harris}. Our contraction results are given with respect to an underlying metric $\rho: \mathbb{H}_\gamma \times \mathbb{H}_\gamma \to [0,1]$ defined as \begin{align} \label{eq:rho:def} \rho(\mathbf{q}, \tilde{\mathbf{q}}) := \frac{ \nga{ \mathbf{q} - \tilde{\mathbf{q}} }}{\varepsilon} \wedge 1, \end{align} where $\gamma$ is given in \cref{B123}. Here, $\varepsilon > 0$ is a tuning parameter which specifies the small scales in our problem; it is determined in \eqref{def:smc} in such a fashion as to produce a contraction in \eqref{eq:rho:contract}. Recall that the Wasserstein distance on the space of probability measures on $\mathbb{H}_\gamma$ induced by $\rho$ is given as in \eqref{eq:Wass:Def} with $\tilde{\rho}$ replaced by $\rho$, and denoted by $\mathcal{W}_\rho$.
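In the same illustrative truncated setting as the earlier sketches, the distance-like function $\rho$ may be rendered numerically as follows, together with the elementary bound $\mathcal{W}_\rho(\mu_1, \mu_2) \leq \mathbb{E}\, \rho(X, Y)$, valid for any coupling $(X, Y)$ of $\mu_1$ and $\mu_2$; the parameters $d$, $\gamma$, $\varepsilon$ and the spectrum are placeholder choices.

```python
import numpy as np

# Illustrative helpers for the metric rho of (eq:rho:def), truncated to d
# spectral modes; d, gamma, eps and the spectrum are placeholder choices.

d, gamma, eps = 50, 0.0, 0.1
lam = 1.0 / np.arange(1, d + 1) ** 2             # eigenvalues of C

def rho(q, qt):
    """rho(q, qt) = min(|q - qt|_gamma / eps, 1)."""
    dist = np.linalg.norm(lam ** (-gamma) * (q - qt))
    return min(dist / eps, 1.0)

def wasserstein_rho_upper_bound(coupled_pairs):
    """Monte Carlo estimate of E rho(X, Y) over coupled pairs (X, Y);
    since W_rho is an infimum over couplings, E rho(X, Y) dominates
    W_rho for the corresponding marginal laws."""
    return float(np.mean([rho(q, qt) for q, qt in coupled_pairs]))
```

Such helpers mirror the way $\mathcal{W}_\rho$ is bounded in the proofs below: one never computes the infimum over couplings directly, but instead exhibits a particular coupling, namely the shifted noise path construction, and averages $\rho$ along it.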
The first result yielding `$\rho$-contractivity' (cf. \cite[Definition 4.6]{hairer2011asymptotic}) is given as follows: \begin{Prop} \label{prop:local_contr} Suppose \cref{A123}, \cref{ass:higher:reg:C} and \cref{B123} are satisfied and choose an integration time $T >0$ and $N \in \mathbb{N}$ satisfying the condition \eqref{cond:t:N:contr}. Fix any $\varepsilon > 0$ and define the associated metric $\rho$ as in \eqref{eq:rho:def}. Then, for every $n \in \mathbb{N}$ and for every $\mathbf{q}_0, \tilde{\mathbf{q}}_0 \in \mathbb{H}_{\gamma}$ such that $\rho( \mathbf{q}_0, \tilde{\mathbf{q}}_0) < 1$, we have \begin{align} \mathcal{W}_{\rho}(P^n(\mathbf{q}_0, \cdot), P^n(\tilde{\mathbf{q}}_0, \cdot)) \leq \kappa_3 \rho(\mathbf{q}_0, \tilde{\mathbf{q}}_0), \label{eq:rho:contract} \end{align} where $P^n$ denotes $n$ steps of the PHMC kernel \eqref{eq:n:step:PHMC:kernel} and $\mathcal{W}_\rho$ is the Wasserstein distance, as in \eqref{ineq:main:result}, associated with $\rho$. Here \begin{align} \label{def:smc} \kappa_3 = \kappa_3(n) := \kappa_2(n) + \frac{ 2 \sqrt{2} \lambda_N^{-\frac{1}{2} + \gamma} (1 + \lambda_1^{1 - 2 \gamma}L_1)\varepsilon } {T (1 - \kappa_1^2)^{1/2}} =\kappa_2(n) + \frac{ \sqrt{2} \lambda_N^{-\frac{1}{2} + \gamma} \alpha \varepsilon } {2 T (1 - \kappa_1^2)^{1/2}}, \end{align} where \begin{align} \label{def:sma:smb} \kappa_2(n) := 4 \sqrt{2}(1 + \lambda_1^{1 - 2 \gamma} L_1) \kappa_1^n = \sqrt{2} \alpha \kappa_1^n, \quad \kappa_1:= 1 - \frac{T^2}{12}, \end{align} $T >0$ is the integration time in \eqref{eq:PEHMC:kernel:def}, $L_1$ is the Lipschitz constant of $DU$ as in \eqref{Hess:bound}, and $\lambda_1$ is the largest eigenvalue of $\mathcal{C}$; in regards to $\alpha$, recall \eqref{eq:alpha:lip:def}. \end{Prop} \begin{Rmk} If $N \in \mathbb{N}$ is the smallest natural number for which the corresponding condition in \eqref{cond:t:N:contr} holds, i.e. \begin{align*} N = \min \left\{ n \in \mathbb{N} \,:\, \lambda_{n+1}^{1 - 2 \gamma} \leq \frac{1}{4 L_1} \right\}, \end{align*} then $\kappa_3$ from \eqref{def:smc} above can be given in the more explicit form \begin{align*} \kappa_3 = \kappa_3(n) := \kappa_2(n) + \frac{ 4 \sqrt{2} L_1^{1/2} (1 + \lambda_1^{1 - 2 \gamma}L_1)\varepsilon } {T (1 - \kappa_1^2)^{1/2}} = \kappa_2(n) + \frac{ \sqrt{2} L_1^{1/2} \alpha \varepsilon } {T (1 - \kappa_1^2)^{1/2}}, \end{align*} with $\kappa_2$ defined exactly as in \eqref{def:sma:smb} above. \end{Rmk} Our second main result, corresponding to `$\rho$-smallness' (cf. \cite[Definition 4.4]{hairer2011asymptotic}), is given as: \begin{Prop}\label{prop:smallness} Assume the same hypotheses as in \cref{prop:local_contr}. Let $M \geq 0$ and take \begin{align*} A = \left\{ \mathbf{q} \in \mathbb{H}_{\gamma} \,: \, \nga{\mathbf{q}} \leq M \right\}.
\end{align*} Then, for every $n \in \mathbb{N}$ and every $\varepsilon > 0$ we have for the corresponding $\rho$ defined by \eqref{eq:rho:def} that \begin{align}\label{Wass:small} \mathcal{W}_{\rho} (P^n(\mathbf{q}_0, \cdot), P^n(\tilde{\mathbf{q}}_0, \cdot)) \leq 1 - \kappa_4 \end{align} for every $\mathbf{q}_0, \tilde{\mathbf{q}}_0 \in A$, where \begin{align*} \kappa_4 = \kappa_4(n) &:= \frac{1}{2} \exp \left( - \frac{ 256 L_1 (1 + \lambda_1^{1 - 2 \gamma} L_1)^2 M^2} {T^2 (1 - \kappa_1^2)} \right) - \frac{2 M \kappa_2(n) }{\varepsilon} \\ &= \frac{1}{2} \exp \left( - \frac{ 16 L_1 \alpha^2 M^2} {T^2 (1 - \kappa_1^2)} \right) - \frac{2 M \kappa_2(n) }{\varepsilon}, \end{align*} with $\kappa_1$ and $\kappa_2$ as defined in \eqref{def:sma:smb}, and $\alpha$ as defined in \eqref{eq:alpha:lip:def}. \end{Prop} Before proceeding with the proofs of \cref{prop:local_contr} and \cref{prop:smallness}, we introduce some further preliminary terminology and general background. Set an integration time $T > 0$ in the definition of the transition kernel $P$ of the PHMC chain, \eqref{eq:PEHMC:kernel:def}. For each $n \in \mathbb{N}$, let $\mathbb{H}^{\otimes n}$ denote the space given as the product of $n$ copies of $\mathbb{H}$. Moreover, given a sequence $\{\mathbf{v}_0^{(j)}\}_{ j \in \mathbb{N}}$ of i.i.d. draws from $\mathcal{N}(0,\mathcal{C})$, we denote by $\mathbf{V}_0^{(n)} = (\mathbf{v}_0^{(1)}, \ldots, \mathbf{v}_0^{(n)})$ the noise path for the first $n \geq 1$ steps, as in \eqref{eq:noise:path:notation}. We then have $\mathbf{V}_0^{(n)} \sim \mathcal{N}(0,\mathcal{C})^{\otimes n}$, with $\mathcal{N}(0,\mathcal{C})^{\otimes n}$ denoting the product of $n$ independent copies of $\mathcal{N}(0, \mathcal{C})$. For simplicity of notation, we set from now on \begin{align*} \sigma := \mathcal{N}(0, \mathcal{C}), \quad \sigma_n:= \mathcal{N}(0,\mathcal{C})^{\otimes n}. \end{align*} For every $\mathbf{q}_0, \tilde{\mathbf{q}}_0 \in \mathbb{H}_\gamma$, with $\gamma$ as in \eqref{eq:Tr:Def:b}, \eqref{Hess:bound}, and $N \in \mathbb{N}$ as in \cref{prop:FP:Type:contract}, we consider $\widetilde{Q}_1(\mathbf{q}_0, \tilde{\mathbf{q}}_0): \mathbb{H} \to \mathbb{H}$ to be the random variable defined as \begin{align*} \widetilde Q_1(\mathbf{q}_0, \tilde{\mathbf{q}}_0)(\mathbf{v}_0^{(1)}) = \mathbf{q}_T(\tilde{\mathbf{q}}_0, \mathbf{v}_0^{(1)} + T^{-1} \Pi_N (\mathbf{q}_0 - \tilde{\mathbf{q}}_0)) \end{align*} where $\mathbf{v}_0^{(1)} \sim \sigma$. Iteratively we define, for $n \geq 2$, the random variables $\widetilde{Q}_n(\mathbf{q}_0, \tilde{\mathbf{q}}_0):$ $\mathbb{H}^{\otimes n}$ $\to \mathbb{H}$ as \begin{align} \widetilde{Q}_n(\mathbf{q}_0, \tilde{\mathbf{q}}_0)(\mathbf{V}_0^{(n)}) := \mathbf{q}_T(\widetilde{Q}_{n-1}(\mathbf{q}_0, \tilde{\mathbf{q}}_0)( \mathbf{V}_0^{(n-1)}), \mathbf{v}_0^{(n)} + \mathcal{S}_n(\mathbf{V}_0^{(n-1)})), \label{eq:shift:RV} \end{align} where $\mathbf{V}_0^{(n)} \sim \sigma_n$, and \begin{align} \label{def:Phin} \mathcal{S}_n(\mathbf{V}_0^{(n-1)}) := T^{-1}\Pi_N [Q_{n-1}(\mathbf{q}_0)(\mathbf{V}_0^{(n-1)}) - \widetilde{Q}_{n-1}(\mathbf{q}_0, \tilde{\mathbf{q}}_0)(\mathbf{V}_0^{(n-1)})]. \end{align} We therefore obtain the shifted noise path \begin{align} \tilde\mathbf{V}_0^{(n)} = ( \mathbf{v}_0^{(1)} + \mathcal{S}_1, \mathbf{v}_0^{(2)} + \mathcal{S}_2(\mathbf{V}_0^{(1)}), \ldots, \mathbf{v}_0^{(n)} + \mathcal{S}_n(\mathbf{V}_0^{(n-1)})), \quad \mbox{ where } \mathcal{S}_1 = T^{-1} \Pi_N (\mathbf{q}_0 - \tilde{\mathbf{q}}_0).
\label{eq:shift:Noise} \end{align} Let $\tilde{\sigma}_n := \text{Law}(\tilde \mathbf{V}_0^{(n)})$. In order to simplify notation, let us denote \begin{align} \label{def:bPhi} \boldsymbol{\mathcal{S}}_n(\mathbf{V}_0^{(n)}) = (\mathcal{S}_1, \mathcal{S}_2(\mathbf{V}_0^{(1)}), \ldots, \mathcal{S}_n(\mathbf{V}_0^{(n-1)})) \end{align} and \begin{align} \label{def:bPsi} \boldsymbol{\mathcal{R}}_n(\mathbf{V}_0^{(n)}) = \mathbf{V}_0^{(n)} + \boldsymbol{\mathcal{S}}_n(\mathbf{V}_0^{(n)}), \end{align} so that $\tilde\mathbf{V}_0^{(n)} = \boldsymbol{\mathcal{R}}_n(\mathbf{V}_0^{(n)})$. Thus, $\tilde{\sigma}_n$ is the push-forward of $\sigma_n$ by the mapping $\boldsymbol{\mathcal{R}}_n: \mathbb{H}^{\otimes n} \to \mathbb{H}^{\otimes n}$, i.e. $\tilde \sigma_n = \boldsymbol{\mathcal{R}}_n^\ast \sigma_n$. Now put, for every $n \in \mathbb{N}$ and $A \in \mathcal{B}(\mathbb{H})$, \begin{align} \widetilde{P}^n(\mathbf{q}_0, \tilde{\mathbf{q}}_0, A) = \widetilde Q_n(\mathbf{q}_0, \tilde{\mathbf{q}}_0)^\ast \sigma_n(A) = \sigma_n( \widetilde{Q}_n(\mathbf{q}_0, \tilde{\mathbf{q}}_0)^{-1}(A)). \label{eq:Shifted:MK} \end{align} Notice that $\widetilde{P}^n(\mathbf{q}_0, \tilde{\mathbf{q}}_0, \cdot)$ can be equivalently written as \begin{align} \widetilde{P}^n(\mathbf{q}_0, \tilde{\mathbf{q}}_0, A) = Q_n(\tilde{\mathbf{q}}_0)^\ast (\boldsymbol{\mathcal{R}}_n^\ast \sigma_n)(A) = Q_n(\tilde{\mathbf{q}}_0)^\ast \tilde \sigma_n (A). \label{eq:tPn} \end{align} With these notations in place we have the following estimate, which we will use several times below in establishing \cref{prop:local_contr} and \cref{prop:smallness}. The proof follows immediately from \cref{prop:FP:Type:contract} and \cref{rmk:equiv:norms}. \begin{Lem}\label{lem:contract} Assume the same hypotheses as in \cref{prop:local_contr}. Then, starting from any $\mathbf{q}_0, \tilde{\mathbf{q}}_0 \in \mathbb{H}_{\gamma}$ we have that for all $n \geq 1$, \begin{align*} \nga{Q_n(\mathbf{q}_0)(\mathbf{V}_0^{(n)}) - \widetilde{Q}_n(\mathbf{q}_0, \tilde{\mathbf{q}}_0)(\mathbf{V}_0^{(n)})} \leq \kappa_2 \nga{\mathbf{q}_0 - \tilde{\mathbf{q}}_0} \quad \mbox{ for every } \mathbf{V}_0^{(n)} \in \mathbb{H}^{\otimes n}, \end{align*} where $Q_n$ and $\widetilde{Q}_n$ are defined as in \eqref{eq:n:step:chain:Noise:p} and \eqref{eq:shift:RV}, respectively, and $\kappa_2$ is as in \eqref{def:sma:smb}. Therefore, \begin{align} \mathbb{E} \nga{Q_n(\mathbf{q}_0) - \widetilde{Q}_n(\mathbf{q}_0, \tilde{\mathbf{q}}_0)} \leq \kappa_2 \nga{\mathbf{q}_0 - \tilde{\mathbf{q}}_0}. \label{eq:nudged:ave:contr} \end{align} \end{Lem} We also recall additional notions of distances in the space of Borel probability measures on a given complete metric space $(X,d)$, denoted $\text{Pr}(X)$, with the associated Borel $\sigma$-algebra denoted as $\mathcal{B}(X)$. Namely, the \textit{total variation} distance is defined as \begin{align} \tv{\nu - \tilde{\nu}} := \sup_{A \in \mathcal{B}(X)} | \nu(A) - \tilde{\nu}(A)| \label{def:tv:dist} \end{align} for any $\nu, \tilde{\nu} \in \text{Pr}(X)$. On the other hand, when $\tilde{\nu} \ll \nu$, i.e. when $\tilde{\nu}$ is absolutely continuous with respect to $\nu$, the \textit{Kullback-Leibler Divergence} is defined as \begin{align} \KL (\tilde{\nu} | \nu) := \int_{X} \log\left( \frac{d \tilde{\nu}}{d \nu}(\mathbf{V}) \right) \tilde{\nu}(d \mathbf{V}).
\label{eq:KL:div} \end{align} Recall that for the trivial metric \begin{align*} \rho_0(\mathbf{q}, \tilde{\mathbf{q}}) := \begin{cases} 1&\text{ if } \mathbf{q} \not = \tilde{\mathbf{q}}\\ 0&\text{ if } \mathbf{q} = \tilde{\mathbf{q}}, \end{cases} \end{align*} the associated Wasserstein distance $\mathcal{W}_{\rho_0}$ coincides with the total variation distance. On the other hand, Pinsker's inequality (see e.g. \cite{Tsybakov2009}) states that \begin{align} \tv{\nu - \tilde{\nu} } \leq \sqrt{\frac{1}{2} \KL (\tilde{\nu} | \nu)}, \label{ineq:TV:KL:1} \end{align} for any $\nu, \tilde \nu \in \text{Pr}(X)$ with $\tilde{\nu} \ll \nu$. Moreover, as shown e.g. in \cite[Appendix]{butkovsky2018generalized}, \begin{align} \tv{\nu - \tilde{\nu}} \leq 1 - \frac{1}{2} \exp\left( - \KL(\tilde \nu | \nu ) \right) \label{ineq:TV:KL:2} \end{align} for all $\nu, \tilde \nu \in \text{Pr}(X)$ with $\tilde{\nu} \ll \nu$. \begin{proof}[Proof of \cref{prop:local_contr}] Fix any $\mathbf{q}_0, \tilde{\mathbf{q}}_0 \in \mathbb{H}_{\gamma}$ such that $\rho( \mathbf{q}_0, \tilde{\mathbf{q}}_0) < 1$. Then, recalling the notation \eqref{eq:Shifted:MK} and using that $\rho$ is a metric on $\mathbb{H}_\gamma$, we have \begin{align} \label{Wass:triang:ineq} \mathcal{W}_{\rho}(P^n(\mathbf{q}_0, \cdot), P^n(\tilde{\mathbf{q}}_0, \cdot)) \leq \mathcal{W}_{\rho}(P^n(\mathbf{q}_0, \cdot), \widetilde{P}^n(\mathbf{q}_0,\tilde{\mathbf{q}}_0, \cdot)) + \mathcal{W}_{\rho}( \widetilde{P}^n(\mathbf{q}_0,\tilde{\mathbf{q}}_0, \cdot), P^n(\tilde{\mathbf{q}}_0, \cdot)). \end{align} Notice that \begin{align} \mathcal{W}_{\rho}(P^n(\mathbf{q}_0, \cdot), \widetilde{P}^n(\mathbf{q}_0,\tilde{\mathbf{q}}_0, \cdot)) \leq& \mathbb{E} \rho(Q_n(\mathbf{q}_0), \widetilde{Q}_n(\mathbf{q}_0, \tilde{\mathbf{q}}_0)) \leq \frac{1}{\varepsilon} \mathbb{E} \nga{Q_n(\mathbf{q}_0) - \widetilde{Q}_n(\mathbf{q}_0, \tilde{\mathbf{q}}_0)} \notag\\ \leq& \frac{\kappa_2}{\varepsilon} \nga{\mathbf{q}_0 - \tilde{\mathbf{q}}_0} = \kappa_2 \rho(\mathbf{q}_0, \tilde{\mathbf{q}}_0), \label{Wass:P:tP} \end{align} where the last inequality follows from Lemma \ref{lem:contract}. For the second term in \eqref{Wass:triang:ineq}, it follows from the coupling lemma (see e.g. \cite[Lemma 1.2.24]{kuksin2012mathematics}) and the fact that $\rho \leq 1$ that \begin{align} \mathcal{W}_{\rho}( \widetilde{P}^n(\mathbf{q}_0,\tilde{\mathbf{q}}_0, \cdot), P^n(\tilde{\mathbf{q}}_0, \cdot)) \leq \tv{\widetilde{P}^n(\mathbf{q}_0,\tilde{\mathbf{q}}_0, \cdot) - P^n(\tilde{\mathbf{q}}_0, \cdot)}. \label{Wass:tP:P:0a} \end{align} From \eqref{def:Pn:Qn} and \eqref{eq:tPn}, we have \begin{align*} \tv{\widetilde{P}^n(\mathbf{q}_0,\tilde{\mathbf{q}}_0, \cdot) - P^n(\tilde{\mathbf{q}}_0, \cdot)} = \tv{Q_n(\tilde{\mathbf{q}}_0)^\ast \tilde{\sigma}_n - Q_n(\tilde{\mathbf{q}}_0)^\ast \sigma_n}. \end{align*} Moreover, from the definition of the total variation distance in \eqref{def:tv:dist} and inequality \eqref{ineq:TV:KL:1}, we infer \begin{align} \tv{Q_n(\tilde{\mathbf{q}}_0)^\ast \tilde{\sigma}_n - Q_n(\tilde{\mathbf{q}}_0)^\ast \sigma_n} \leq \tv{\tilde{\sigma}_n - \sigma_n} \leq \sqrt{\frac{1}{2} \KL (\tilde{\sigma}_n | \sigma_n)}.
\label{Wass:tP:P:0} \end{align} As a consequence of Girsanov's Theorem, we obtain \begin{align} \label{Girsanov} \frac{d \sigma_n}{d \tilde{\sigma}_n}(\boldsymbol{\mathcal{R}}_n(\mathbf{V})) = \exp \left( \frac{1}{2} |\mathcal{C}^{-1/2} \mathbf{V}|^2 - \frac{1}{2} |\mathcal{C}^{-1/2} \boldsymbol{\mathcal{R}}_n(\mathbf{V})|^2 \right) \quad \text{ for any } \mathbf{V} \in \mathbb{H}_{1/2}^{\otimes n}, \end{align} with $\boldsymbol{\mathcal{R}}_n$ as defined in \eqref{def:bPsi}. Thus, \begin{align} \KL(\tilde{\sigma}_n | \sigma_n) =& \int \log \left( \frac{d \tilde{\sigma}_n}{d \sigma_n}(\mathbf{V}) \right) \tilde{\sigma}_n(d\mathbf{V}) = - \int \log \left( \frac{d \sigma_n}{d \tilde{\sigma}_n}(\mathbf{V}) \right) \tilde{\sigma}_n(d\mathbf{V}) \notag\\ =& - \int \log \left( \frac{d \sigma_n}{d \tilde{\sigma}_n}(\boldsymbol{\mathcal{R}}_n(\mathbf{V})) \right) \sigma_n(d \mathbf{V}) = \int \left( - \frac{1}{2} |\mathbf{V}|_{1/2}^2 + \frac{1}{2} |\boldsymbol{\mathcal{R}}_n(\mathbf{V})|_{1/2}^2 \right) \sigma_n(d \mathbf{V}) \notag\\ =& \int \left( \langle \boldsymbol{\mathcal{S}}_n(\mathbf{V}), \mathbf{V} \rangle_{1/2} + \frac{1}{2} |\boldsymbol{\mathcal{S}}_n(\mathbf{V})|_{1/2}^2 \right) \sigma_n(d \mathbf{V}) = \frac{1}{2} \int |\boldsymbol{\mathcal{S}}_n(\mathbf{V})|_{1/2}^2 \sigma_n(d \mathbf{V}) \notag\\ =& \frac{1}{2} \sum_{j=1}^n \mathbb{E} |\mathcal{S}_j(\cdot)|_{1/2}^2. \label{ineq:KL} \end{align} Here note that, taking $\mathbf{V} = (\mathbf{v}_1, \ldots, \mathbf{v}_n)$ and $\mathbf{V}^j = (\mathbf{v}_1, \ldots, \mathbf{v}_j)$ for $j \leq n$, we have \begin{align*} \int \langle \boldsymbol{\mathcal{S}}_n(\mathbf{V}), \mathbf{V} \rangle_{1/2} \sigma_n(d \mathbf{V}) =& \sum_{j = 1}^n \int \langle \mathcal{S}_j(\mathbf{V}^{j-1}), \mathbf{v}_j \rangle_{1/2} \sigma_n(d \mathbf{V})\\ =& \sum_{j = 1}^n \int \int \langle \mathcal{S}_j(\mathbf{V}^{j-1}), \mathbf{v}_j \rangle_{1/2} \sigma(d \mathbf{v}_j) \sigma_{j-1}(d\mathbf{V}^{j-1}) = 0, \end{align*} which justifies dropping this term in \eqref{ineq:KL}. Now, from the definition of $\mathcal{S}_j$ in \eqref{def:Phin}, \eqref{eq:Inv:Poincare} in \cref{lem:ineq:C} and \eqref{equiv:norms:b}, it follows that \begin{align*} |\mathcal{S}_j(\mathbf{V}_0^{(j-1)})|_{1/2}^2 \leq& \lambda_N^{-1 + 2 \gamma} \nga{\mathcal{S}_j(\mathbf{V}_0^{(j-1)})}^2 \leq \lambda_N^{-1 + 2 \gamma} \normga{\mathcal{S}_j(\mathbf{V}_0^{(j-1)})}^2 \leq T^{-2} \lambda_N^{-1 + 2 \gamma} \kappa_1^{2(j-1)} \normga{\mathbf{q}_0 - \tilde{\mathbf{q}}_0}^2 \\ \leq& T^{-2} \lambda_N^{-1 + 2 \gamma} \kappa_1^{2(j-1)} 2 \alpha^2 \nga{\mathbf{q}_0 - \tilde{\mathbf{q}}_0}^2, \end{align*} for each $j \geq 1$, with $\alpha$ as defined in \eqref{eq:alpha:lip:def}.
Therefore, \begin{align} \KL(\tilde{\sigma}_n | \sigma_n) \leq \frac{\lambda_N^{-1 + 2 \gamma} \alpha^2}{T^2} \nga{\mathbf{q}_0 - \tilde{\mathbf{q}}_0}^2 \sum_{j=1}^n \kappa_1^{2(j-1)} \leq \frac{\lambda_N^{-1 + 2 \gamma} \alpha^2}{T^2 (1 - \kappa_1^2)} \nga{\mathbf{q}_0 - \tilde{\mathbf{q}}_0}^2, \label{eq:D:KL:diff} \end{align} so that, combining this observation with \eqref{Wass:tP:P:0a}-\eqref{Wass:tP:P:0}, and our standing assumption that $\rho(\mathbf{q}_0, \tilde{\mathbf{q}}_0) < 1$, \begin{align} \label{Wass:tP:P} \mathcal{W}_{\rho}(\widetilde{P}^n(\mathbf{q}_0,\tilde{\mathbf{q}}_0, \cdot), P^n(\tilde{\mathbf{q}}_0, \cdot)) \leq \frac{\lambda_N^{-\frac{1}{2} + \gamma} \alpha } {\sqrt{2} T (1 - \kappa_1^2)^{1/2}} \nga{\mathbf{q}_0 - \tilde{\mathbf{q}}_0} = \frac{\lambda_N^{-\frac{1}{2} + \gamma} \alpha \varepsilon }{\sqrt{2} T (1 - \kappa_1^2)^{1/2}} \rho(\mathbf{q}_0, \tilde{\mathbf{q}}_0). \end{align} We therefore conclude \eqref{eq:rho:contract} from \eqref{Wass:triang:ineq}, \eqref{Wass:P:tP} and \eqref{Wass:tP:P}, completing the proof of \cref{prop:local_contr}. \end{proof} \begin{proof}[Proof of \cref{prop:smallness}] We proceed similarly to the proof of Proposition \ref{prop:local_contr}, starting with the splitting \eqref{Wass:triang:ineq}. Fix any $\mathbf{q}_0, \tilde{\mathbf{q}}_0 \in A$. The first term on the right-hand side of \eqref{Wass:triang:ineq} is estimated exactly as in \eqref{Wass:P:tP}, so that \begin{align*} \mathcal{W}_{\rho}(P^n(\mathbf{q}_0, \cdot), \widetilde{P}^n(\mathbf{q}_0,\tilde{\mathbf{q}}_0, \cdot)) \leq \frac{\kappa_2}{\varepsilon} \nga{\mathbf{q}_0 - \tilde{\mathbf{q}}_0} \leq \frac{2 M \kappa_2 }{\varepsilon}. \end{align*} The second term in \eqref{Wass:triang:ineq} is estimated by using \eqref{ineq:TV:KL:2} and \eqref{eq:D:KL:diff} as \begin{align*} \mathcal{W}_{\rho}(\widetilde{P}^n(\mathbf{q}_0,\tilde{\mathbf{q}}_0, \cdot), P^n(\tilde{\mathbf{q}}_0, \cdot)) \leq& \tv{\tilde{\sigma}_n - \sigma_n} \leq 1 - \frac{1}{2} \exp \left( - \KL(\tilde{\sigma}_n | \sigma_n) \right) \\ \leq& 1 - \frac{1}{2} \exp \left( - \frac{\lambda_N^{-1 + 2 \gamma} \alpha^2}{T^2 (1 - \kappa_1^2)} \nga{\mathbf{q}_0 - \tilde{\mathbf{q}}_0}^2 \right), \end{align*} with $\alpha$ as defined in \eqref{eq:alpha:lip:def}. Hence, together with \eqref{Wass:triang:ineq} and using that $\mathbf{q}_0, \tilde{\mathbf{q}}_0 \in A$, we conclude \eqref{Wass:small}. \end{proof} \section{Main Result} \label{sec:Proof:Main} Having obtained in the previous sections a Foster-Lyapunov structure \eqref{ineq:Lyap} together with the smallness and contractivity properties \eqref{eq:rho:contract}-\eqref{Wass:small} for the Markov kernel $P$ in \eqref{eq:HMC:kernel:overview}, we are now ready to proceed with the proof of our main result. As pointed out in the introduction, the spectral gap \eqref{Wass:trhoe} below follows as a consequence of the weak Harris theorem given the aforementioned properties. We provide a self-contained presentation of the weak Harris approach in this section, both for completeness and in order to make some of the constants in the proof more explicit. We start by noticing that it is enough to show \eqref{Wass:trhoe} for $\nu_1, \nu_2$ being Dirac measures, say concentrated at points $\mathbf{q}_0, \tilde{\mathbf{q}}_0 \in \mathbb{H}_\gamma$.
The proof is then split into three possible cases for such points: $\rho(\mathbf{q}_0, \tilde{\mathbf{q}}_0) < 1$ (`close to each other'); $\rho(\mathbf{q}_0, \tilde{\mathbf{q}}_0) = 1$ with $V(\mathbf{q}_0) + V(\tilde{\mathbf{q}}_0) > 4 K_V$ (`far from the origin'); and $\rho(\mathbf{q}_0, \tilde{\mathbf{q}}_0) = 1$ with $V(\mathbf{q}_0) + V(\tilde{\mathbf{q}}_0) \leq 4 K_V$ (`close to the origin'). The first case follows from the contraction result in \cref{prop:local_contr} together with the Lyapunov structure from \cref{prop:FL}. The second case follows entirely from the Lyapunov property. Lastly, the third case follows by invoking the smallness result in \cref{prop:smallness} as well as the Lyapunov structure. Finally, the second part of our main result, namely \eqref{ineq:obs:2}-\eqref{main:thm:CLT}, follows essentially from the spectral gap \eqref{Wass:trhoe} by invoking \cref{lem:Kantor:dual}, \cref{prop:CLT:LLN} and \cref{prop:Lip:2:obs}, which are all proved in detail in \cref{sec:LLN:CLT}. \begin{Thm}\label{thm:weak:harris} Fix $\gamma \in [0,1/2)$. Suppose \cref{A123}, \cref{ass:higher:reg:C}, \cref{B123} and \cref{ass:integrability:cond} are satisfied and choose an integration time $T >0$ such that \begin{align} T \leq \min \left\{ \frac{1}{[2(1 + \lambda_1^{1 - 2\gamma}L_1)]^{1/2}}, \frac{L_2^{1/2}}{2\sqrt{6} (1 + \lambda_1^{1 - 2\gamma} L_1)} \right\}. \label{eq:time:restrict:basic} \end{align} Here the constants $L_1, L_2$ are as in \eqref{Hess:bound} and \eqref{dissip:new}, and $\lambda_1$ is the largest eigenvalue of the covariance operator $\mathcal{C}$ defined as in \cref{A123}. Let $V: \mathbb{H}_\gamma \to \mathbb{R}^+$ be a Lyapunov function for the Markov kernel $P$ defined in \eqref{eq:PEHMC:kernel:def} of the form \eqref{eq:FL:1} or \eqref{eq:FL:2}. Then, there exist $\varepsilon > 0$, $C_1 > 0$ and $C_2 > 0$ such that, for every $\nu_1, \nu_2 \in \text{Pr}(\mathbb{H})$ with support included in $\mathbb{H}_{\gamma}$, \begin{align}\label{Wass:trhoe} \mathcal{W}_{\tilde{\rho}} (\nu_1 P^n, \nu_2 P^n) \leq C_1 e^{-C_2 n} \mathcal{W}_{\tilde{\rho}}(\nu_1, \nu_2) \quad \mbox{ for all } n \in \mathbb{N}, \end{align} where $\tilde{\rho}: \mathbb{H}_{\gamma} \times \mathbb{H}_{\gamma} \to \mathbb{R}^+$ is the distance-like function given by \begin{align*} \tilde{\rho}(\mathbf{q}, \tilde \mathbf{q}) = \sqrt{\rho(\mathbf{q},\tilde \mathbf{q}) (1 + V(\mathbf{q}) + V(\tilde \mathbf{q}))} \quad \mbox{ for all } \mathbf{q}, \tilde \mathbf{q} \in \mathbb{H}_{\gamma}, \end{align*} with $\rho$ as defined in \eqref{eq:rho:def}. Moreover, with respect to $\mu$ defined in \eqref{eq:mu}, i.e. the invariant measure for $P$ (cf. \cref{eq:invariance}), the following results hold: for any observable $\Phi: \mathbb{H}_\gamma \to \mathbb{R}$ such that \begin{align}\label{main:thm:Lip:const} L_\Phi := \sup_{\mathbf{q} \in \mathbb{H}_\gamma} \frac{\max\{ 2 |\Phi(\mathbf{q})|, \sqrt{\varepsilon} |D \Phi(\mathbf{q})|_{\mathcal{L}(\mathbb{H}_\gamma)} \}}{\sqrt{1+ V(\mathbf{q})}} < \infty, \end{align} with $|\cdot|_{\mathcal{L}(\mathbb{H}_\gamma)}$ denoting the standard operator norm of a linear functional on $\mathbb{H}_\gamma$, we have \begin{align} \left| P^n \Phi(\mathbf{q}) - \int \Phi(\mathbf{q}') \mu(d\mathbf{q}') \right| \leq L_\Phi C_1 e^{-n C_2} \int \sqrt{1 + V(\mathbf{q}) + V(\mathbf{q}')} \mu(d \mathbf{q}'), \label{ineq:obs:2} \end{align} for every $n \in \mathbb{N}$ and $\mathbf{q} \in \mathbb{H}_\gamma$.
On the other hand, taking $\{Q_k(\mathbf{q}_0)\}_{k \geq 0}$ to be any process associated to $\{P^k(\mathbf{q}_0, \cdot)\}_{k \geq 0}$ as in \eqref{def:Pn:Qn}, we have, for any measurable observable satisfying \eqref{main:thm:Lip:const}, that \begin{align}\label{main:thm:LLN} \lim_{n \to \infty} \frac{1}{n} \sum_{k=1}^n \Phi(Q_k(\mathbf{q})) = \int \Phi(\mathbf{q}') \mu(d\mathbf{q}'), \quad \text{ almost surely}, \end{align} for all $\mathbf{q} \in \mathbb{H}_\gamma$. Furthermore, \begin{align}\label{main:thm:CLT} \sqrt{n} \left[ \frac{1}{n} \sum_{k=1}^n \Phi(Q_k(\mathbf{q})) - \int \Phi(\mathbf{q}') \mu (d \mathbf{q}') \right] \Rightarrow \mathcal{N}(0, \sigma^2(\Phi)) \quad \mbox{ as } n \to \infty, \end{align} for all $\mathbf{q} \in \mathbb{H}_\gamma$, i.e. the expression on the left-hand side of \eqref{main:thm:CLT} converges weakly to a real-valued Gaussian random variable with mean zero and covariance $\sigma^2(\Phi)$, where $\sigma^2(\Phi)$ is specified explicitly as \eqref{eq:CLT:sig:def} below, with $\mu^*$ replaced by $\mu$. \end{Thm} \begin{proof} We claim it suffices to show that there exist $\varepsilon > 0$, $C_1 > 0$ and $C_2 > 0$ such that \begin{align}\label{Wass:points} \mathcal{W}_{\tilde{\rho}}(P^n(\mathbf{q}_0,\cdot), P^n(\tilde \mathbf{q}_0,\cdot)) \leq C_1 e^{-C_2 n} \tilde{\rho}(\mathbf{q}_0, \tilde \mathbf{q}_0) \quad \mbox{ for all } \mathbf{q}_0, \tilde \mathbf{q}_0 \in \mathbb{H}_{\gamma} \mbox{ and } n \in \mathbb{N}. \end{align} Indeed, since $\tilde{\rho}$ is lower-semicontinuous and non-negative, it follows from \cite[Theorem 4.8]{villani2008optimal} that \begin{align*} \mathcal{W}_{\tilde{\rho}}(\nu_1 P^n, \nu_2 P^n ) \leq \int \mathcal{W}_{\tilde{\rho}}(P^n(\mathbf{q}_0,\cdot), P^n( \tilde \mathbf{q}_0, \cdot)) \Gamma (d \mathbf{q}_0, d \tilde \mathbf{q}_0) \quad \mbox{ for all } \Gamma \in \mathfrak{C}(\nu_1 , \nu_2) \mbox{ and } n \in \mathbb{N}, \end{align*} where $\mathfrak{C}(\nu_1, \nu_2)$ denotes the set of couplings of $\nu_1$ and $\nu_2$. Clearly, if $\nu_1$ and $\nu_2$ have supports included in $\mathbb{H}_{\gamma}$, then any $\Gamma \in \mathfrak{C}(\nu_1 , \nu_2)$ has support included in $\mathbb{H}_\gamma \times \mathbb{H}_\gamma$. Hence, if \eqref{Wass:points} holds then \begin{align}\label{Wass:points:2} \mathcal{W}_{\tilde{\rho}}(\nu_1 P^n, \nu_2 P^n) \leq C_1 e^{-C_2 n} \int \tilde{\rho}(\mathbf{q}_0, \tilde \mathbf{q}_0) \Gamma(d \mathbf{q}_0, d \tilde \mathbf{q}_0) \quad \mbox{ for all } \Gamma \in \mathfrak{C}(\nu_1, \nu_2) \mbox{ and } n \in \mathbb{N}, \end{align} which implies \eqref{Wass:trhoe}. In order to show \eqref{Wass:points}, we consider an auxiliary metric defined as \begin{align*} \tilde{\rho}_{\beta}(\mathbf{q}, \tilde{\mathbf{q}}) = \sqrt{\rho(\mathbf{q},\tilde \mathbf{q}) (1 + \beta V(\mathbf{q}) + \beta V(\tilde \mathbf{q}))}, \quad \mbox{ for all } \mathbf{q}, \tilde \mathbf{q} \in \mathbb{H}_{\gamma}, \end{align*} with the additional parameter $\beta > 0$ to be appropriately chosen below; cf. \eqref{cond:eps:beta}. Notice that $\tilde{\rho}$ and $\tilde{\rho}_{\beta}$ are equivalent. Indeed, \begin{align}\label{equiv:trhoe:trhob} \left( \min\{1,\beta\}\right)^{1/2} \tilde{\rho}(\mathbf{q},\tilde \mathbf{q}) \leq \tilde{\rho}_{\beta}(\mathbf{q}, \tilde \mathbf{q}) \leq \left( \max\{1,\beta\} \right)^{1/2} \tilde{\rho}(\mathbf{q}, \tilde \mathbf{q}), \quad \mbox{ for all } \mathbf{q}, \tilde \mathbf{q} \in \mathbb{H}_{\gamma}.
\end{align} We now show that \begin{align}\label{Wass:trhob} \mathcal{W}_{\tilde{\rho}_{\beta}}(P^n(\mathbf{q}_0, \cdot), P^n(\tilde \mathbf{q}_0, \cdot)) \leq \kappa_5(n) \tilde{\rho}_{\beta}(\mathbf{q}_0, \tilde \mathbf{q}_0) \quad \mbox{ for all } n \geq 1 \mbox{ and } \mathbf{q}_0, \tilde \mathbf{q}_0 \in \mathbb{H}_{\gamma}, \end{align} where, for suitably chosen $\varepsilon > 0$ and $\beta > 0$, and for $n_0 \in \mathbb{N}$ sufficiently large, we have $\kappa_5(n) < 1$ for every $n \geq n_0$. We then subsequently use this bound to establish \eqref{Wass:points} as in \eqref{def:C1:C2} below. The analysis leading to \eqref{Wass:trhob} is split into three cases: \noindent\textit{\underline{Case 1:} Suppose that $\rho(\mathbf{q}_0, \tilde \mathbf{q}_0) < 1$, so that $\rho(\mathbf{q}_0, \tilde \mathbf{q}_0) = \nga{\mathbf{q}_0 - \tilde \mathbf{q}_0} \varepsilon^{-1}$.} By H\"older's inequality, we obtain \begin{align}\label{Wass:Holder} \mathcal{W}_{\tilde{\rho}_{\beta}}(P^n(\mathbf{q}_0, \cdot), P^n(\tilde \mathbf{q}_0, \cdot))^2 &\leq \inf_{\Gamma \in \mathfrak{C}(\delta_{\mathbf{q}_0} P^n,\delta_{\tilde \mathbf{q}_0} P^n)} \left\{ \! \left( \int \rho(\mathbf{q}, \tilde \mathbf{q}) \Gamma(d\mathbf{q}, d\tilde \mathbf{q}) \right) \! \! \left( \int (1 + \beta V(\mathbf{q}) + \beta V(\tilde \mathbf{q})) \Gamma(d\mathbf{q}, d\tilde \mathbf{q}) \right) \! \right\} \nonumber\\ &= \left( 1 + \beta P^n V(\mathbf{q}_0) + \beta P^n V(\tilde \mathbf{q}_0)\right) \mathcal{W}_{\rho}(P^n(\mathbf{q}_0, \cdot), P^n(\tilde \mathbf{q}_0, \cdot)). \end{align} From \cref{prop:FL} and \cref{prop:local_contr}, it follows that \begin{align}\label{Wass:case:1} \mathcal{W}_{\tilde{\rho}_{\beta}}(P^n(\mathbf{q}_0, \cdot), P^n(\tilde \mathbf{q}_0, \cdot))^2 &\leq \left( 1 + \beta \kappa_V^n V(\mathbf{q}_0) + \beta \kappa_V^n V(\tilde \mathbf{q}_0) + 2 \beta K_V \right) \kappa_3 \rho(\mathbf{q}_0, \tilde \mathbf{q}_0)\nonumber \\ &\leq \left( 1 + \beta V(\mathbf{q}_0) + \beta V(\tilde \mathbf{q}_0) + 2 \beta K_V \right) \kappa_3 \rho(\mathbf{q}_0, \tilde \mathbf{q}_0) \nonumber \\ &\leq \kappa_3 (1 + 2 \beta K_V) \left( 1 + \beta V(\mathbf{q}_0) + \beta V(\tilde \mathbf{q}_0) \right) \rho(\mathbf{q}_0, \tilde \mathbf{q}_0) \nonumber \\ &= \kappa_3 (1 + 2 \beta K_V) \left( \tilde{\rho}_{\beta}(\mathbf{q}_0, \tilde \mathbf{q}_0) \right)^2.
\end{align} \noindent\textit{\underline{Case 2:} Suppose that $\rho(\mathbf{q}_0, \tilde \mathbf{q}_0) = 1$ and $V(\mathbf{q}_0) + V(\tilde \mathbf{q}_0) > 4 K_V$.} Using that $\rho(\cdot, \cdot) \leq 1$ and again invoking \cref{prop:FL}, we obtain \begin{align}\label{Wass:case:2} \mathcal{W}_{\tilde{\rho}_{\beta}}(P^n(\mathbf{q}_0, \cdot), P^n(\tilde \mathbf{q}_0, \cdot))^2 &\leq 1 + \beta P^n V(\mathbf{q}_0) + \beta P^n V(\tilde \mathbf{q}_0) \notag \\ &\leq 1 + \beta \kappa_V^n V(\mathbf{q}_0) + \beta \kappa_V^n V(\tilde \mathbf{q}_0) + 2 \beta K_V \notag \\ &= \frac{1 + 2 \beta K_V}{1 + 3 \beta K_V} (1 + 3 \beta K_V) + \kappa_V^n \beta ( V(\mathbf{q}_0) + V(\tilde \mathbf{q}_0)) \notag \\ &\leq \max \left\{ \frac{1 + 2 \beta K_V}{1 + 3 \beta K_V}, 4 \kappa_V^n \right\} \left( 1 + 3 \beta K_V + \frac{\beta}{4} (V(\mathbf{q}_0) + V(\tilde \mathbf{q}_0)) \right) \notag \\ &< \max \left\{ \frac{1 + 2 \beta K_V}{1 + 3 \beta K_V}, 4 \kappa_V^n \right\} \left( 1 + \beta V(\mathbf{q}_0) + \beta V(\tilde \mathbf{q}_0) \right) \notag\\ &= \max \left\{ \frac{1 + 2 \beta K_V}{1 + 3 \beta K_V}, 4 \kappa_V^n \right\} \left( \tilde{\rho}_{\beta}(\mathbf{q}_0, \tilde \mathbf{q}_0) \right)^2. \end{align} \noindent\textit{\underline{Case 3:} Suppose that $\rho(\mathbf{q}_0, \tilde \mathbf{q}_0) = 1$ and $V(\mathbf{q}_0) + V(\tilde \mathbf{q}_0) \leq 4 K_V$.} We proceed as in \eqref{Wass:Holder}, but now use Proposition \ref{prop:smallness} to estimate the term $\mathcal{W}_{\rho}(P^n(\mathbf{q}_0, \cdot), P^n(\tilde \mathbf{q}_0, \cdot))$. First, let $M_V > 0 $ be such that \begin{align*} \left\{ \mathbf{q} \in \mathbb{H}_{\gamma} \,:\, V(\mathbf{q}) \leq 4 K_V \right\} = \left\{ \mathbf{q} \in \mathbb{H}_{\gamma} \,:\, \nga{\mathbf{q}} \leq M_V \right\}. \end{align*} Notice that the specific definition of $M_V$ depends on the choice of Lyapunov function $V$ (which defines the constant $K_V$, cf. \eqref{eq:FL:1}-\eqref{eq:FL:2}). Thus, for any $\mathbf{q}_0, \tilde \mathbf{q}_0 \in \left\{ \mathbf{q} \in \mathbb{H}_{\gamma} \,:\, V(\mathbf{q}) \leq 4 K_V \right\}$, it follows from Proposition \ref{prop:smallness} that \begin{align*} \mathcal{W}_{\rho}(P^n(\mathbf{q}_0, \cdot), P^n(\tilde \mathbf{q}_0, \cdot)) \leq 1 - \kappa_4, \end{align*} where \begin{align}\label{def:smd} \kappa_4 = \kappa_4(n) := \frac{1}{2} \exp \left( - \frac{ 16 L_1 \alpha^2 M_V^2}{T^2 (1 - \kappa_1^2)} \right) - \frac{2 M_V \kappa_2(n)}{\varepsilon}, \end{align} with $\kappa_1$ and $\kappa_2$ as defined in \eqref{def:sma:smb} and $\alpha = 4 (1 + \lambda_1^{1 - 2 \gamma} L_1)$ (cf. \eqref{eq:alpha:lip:def}). Hence, \begin{align}\label{Wass:case:3} \mathcal{W}_{\tilde{\rho}_{\beta}}(P^n(\mathbf{q}_0, \cdot), P^n(\tilde \mathbf{q}_0, \cdot))^2 &\leq (1 - \kappa_4) \left( 1 + \beta \kappa_V^n \left( V(\mathbf{q}_0) + V(\tilde \mathbf{q}_0) \right) + 2 \beta K_V\right) \nonumber \\ &\leq (1 - \kappa_4) (1 + 2 (1 + 2 \kappa_V^n) \beta K_V) \nonumber \\ &\leq (1 - \kappa_4) (1 + 2 (1 + 2 \kappa_V^n) \beta K_V) \left( \tilde{\rho}_{\beta}(\mathbf{q}_0, \tilde \mathbf{q}_0) \right)^2. \end{align} From \eqref{Wass:case:1}, \eqref{Wass:case:2} and \eqref{Wass:case:3}, we now obtain the bound \eqref{Wass:trhob} with $\kappa_5 = \kappa_5(n)$ defined as \begin{align} \biggl( \max \biggl\{ (1 + 2 \beta K_V) \kappa_3(n), \max \left\{\frac{1 + 2\beta K_V}{1 + 3 \beta K_V}, 4 \kappa_V^n\right\}, (1 - \kappa_4(n)) (1 + 2 (1 + 2 \kappa_V^n) \beta K_V) \biggr\} \biggr)^{1/2}.
\label{def:C3} \end{align} We claim that if we now choose $\varepsilon > 0 $, $\beta > 0$ satisfying \begin{align}\label{cond:eps:beta} \varepsilon \leq \frac{T(1 - \kappa_1^2)^{1/2}}{8 \sqrt{2} \alpha L_1^{1/2}} \quad \mbox{ and } \quad \beta \leq \frac{1}{12 K_V} \exp \left( - \frac{ 16 L_1 \alpha^2 M_V^2}{T^2 (1 - \kappa_1^2)} \right), \end{align} and $n_0 \in \mathbb{N}$ satisfying \begin{align}\label{cond:n0} \kappa_1^{n_0} \leq \min \left\{ \frac{1}{4 \sqrt{2} \alpha} , \frac{\varepsilon}{8 \sqrt{2} \alpha M_V} \exp \left( - \frac{ 16 L_1 \alpha^2 M_V^2}{T^2 (1 - \kappa_1^2)} \right) \right\} \quad \mbox{ and } \quad \kappa_V^{n_0} \leq \frac{1}{8}, \end{align} then indeed we have \begin{align}\label{def:sme} \kappa_5(n) \leq \kappa_5(n_0) \leq \left(\max \left\{ \frac{1 + 2 \beta K_V}{1 + 3 \beta K_V}, 1 - \frac{1}{16} \exp \left( - \frac{ 32 L_1 \alpha^2 M_V^2} {T^2 (1 - \kappa_1^2)} \right) \right\} \right)^{1/2} < 1 \quad \mbox{ for all } n \geq n_0, \end{align} as desired for the estimate \eqref{Wass:trhob}. To see the bound in \eqref{def:sme}, observe that, since $\kappa_1^{n_0} \leq (4 \sqrt{2} \alpha)^{-1}$ and $\varepsilon$ satisfies the first inequality in \eqref{cond:eps:beta}, it follows from the definitions of $\kappa_2$ and $\kappa_3$ in \eqref{def:sma:smb} and \eqref{def:smc}, respectively, that \begin{align}\label{ineq:smc} \kappa_2(n) \leq \frac{1}{4} \quad \mbox{ and } \quad \kappa_3(n) \leq \frac{3}{8} \quad \mbox{ for all } n \geq n_0. \end{align} From \eqref{cond:eps:beta}, we have in particular that $\beta \leq (12 K_V)^{-1}$. Together with \eqref{ineq:smc}, this yields \begin{align}\label{ineq:first:C3} (1 + 2 \beta K_V) \kappa_3(n) \leq \frac{1}{2} \quad \mbox{ for all } n \geq n_0. \end{align} Moreover, since $\kappa_V^{n_0} \leq 1/8$, we have \begin{align}\label{ineq:second:C3} \max \left\{\frac{1 + 2\beta K_V}{1 + 3 \beta K_V}, 4 \kappa_V^n \right\} \leq \max \left\{\frac{1 + 2\beta K_V}{1 + 3 \beta K_V}, \frac{1}{2} \right\} = \frac{1 + 2\beta K_V}{1 + 3 \beta K_V}. \end{align} Also, from the definition of $\kappa_2$ in \eqref{def:sma:smb} and the first condition in \eqref{cond:n0}, it follows that $\kappa_4$, defined in \eqref{def:smd}, satisfies \begin{align}\label{ineq:smd} \kappa_4(n) \geq \frac{1}{4} \exp \left( - \frac{ 16 L_1 \alpha^2 M_V^2}{T^2 (1 - \kappa_1^2)} \right) \quad \mbox{ for all } n \geq n_0. \end{align} Thus, with condition \eqref{cond:eps:beta} on $\beta$, we obtain \begin{align}\label{ineq:third:C3} (1 - \kappa_4(n))(1 + 3 \beta K_V) \leq 1 - \frac{1}{16} \exp \left( - \frac{ 32 L_1 \alpha^2 M_V^2}{T^2 (1 - \kappa_1^2)} \right) \quad \mbox{ for all } n \geq n_0. \end{align} Combining \eqref{def:C3}, \eqref{ineq:first:C3}, \eqref{ineq:second:C3} and \eqref{ineq:third:C3}, we conclude \eqref{def:sme}. We turn now to show that \eqref{Wass:trhob} implies \eqref{Wass:points} and, consequently, \eqref{Wass:trhoe}. First note that, by the same arguments as in \eqref{Wass:points}-\eqref{Wass:points:2}, we have that \eqref{Wass:trhob} implies $\mathcal{W}_{\tilde{\rho}_{\beta}}(\nu_1 P^n, \nu_2 P^n) \leq \kappa_5(n) \mathcal{W}_{\tilde{\rho}_{\beta}}(\nu_1 , \nu_2)$ for all $n \geq n_0$ and $\nu_1 , \nu_2 \in \text{Pr}(\mathbb{H})$ with support included in $\mathbb{H}_\gamma$. Now, for any $n \in \mathbb{N}$, we can write $n = m n_0 + k$, for some $m, k \in \mathbb{N}$ with $k \leq n_0 - 1$.
Thus, \begin{align*} \mathcal{W}_{\tilde{\rho}_{\beta}} (P^n (\mathbf{q}_0, \cdot), P^n(\tilde \mathbf{q}_0, \cdot)) &= \mathcal{W}_{\tilde{\rho}_{\beta}}(P^{m n_0 + k}(\mathbf{q}_0, \cdot), P^{m n_0 + k}(\tilde \mathbf{q}_0, \cdot)) \leq \kappa_5(n_0)^m \mathcal{W}_{\tilde{\rho}_{\beta}} (P^k (\mathbf{q}_0, \cdot), P^k (\tilde \mathbf{q}_0, \cdot)) \nonumber \\ & \leq \kappa_5(n_0)^m \kappa_5(k) \tilde{\rho}_{\beta}(\mathbf{q}_0, \tilde \mathbf{q}_0) \leq \kappa_5(n_0)^{\frac{n}{n_0} -1} \kappa_5(n_0 - 1) \tilde{\rho}_{\beta}(\mathbf{q}_0, \tilde \mathbf{q}_0), \end{align*} where in the last inequality we used that $\kappa_5$ is a non-increasing function of $n$. Moreover, from the equivalence between $\tilde{\rho}$ and $\tilde{\rho}_{\beta}$ in \eqref{equiv:trhoe:trhob}, we obtain \begin{align*} \mathcal{W}_{\tilde{\rho}}(P^n(\mathbf{q}_0, \cdot), P^n(\tilde \mathbf{q}_0, \cdot)) &\leq \left( \frac{\max\{1, \beta\}}{\min\{1, \beta\}} \right)^{1/2} \kappa_5(n_0)^{\frac{n}{n_0} -1} \kappa_5(n_0 - 1) \tilde{\rho} (\mathbf{q}_0, \tilde \mathbf{q}_0) \nonumber \\ &\leq \left( \frac{\max\{1, \beta\}}{\min\{1, \beta\}}\right)^{1/2} \frac{ \kappa_5(n_0 - 1)}{\kappa_5(n_0)} \exp\left( n \log \left(\kappa_5(n_0)^{\frac{1}{n_0}} \right) \right) \tilde{\rho} (\mathbf{q}_0, \tilde \mathbf{q}_0) \quad \mbox{ for all } n \in \mathbb{N}. \end{align*} Therefore, with the constants \begin{align}\label{def:C1:C2} C_1 := \left( \frac{\max\{1, \beta\}} {\min\{1, \beta\}} \right)^{1/2} \frac{ \kappa_5(n_0 - 1)}{\kappa_5(n_0)} \quad \mbox{ and } \quad C_2 := - \log \left( \kappa_5(n_0)^{\frac{1}{n_0}} \right), \end{align} \eqref{Wass:points} and consequently \eqref{Wass:trhoe} are now established. Finally, the second part of the proof, namely \eqref{ineq:obs:2}-\eqref{main:thm:CLT} under assumption \eqref{main:thm:Lip:const}, follows as a direct consequence of \cref{lem:Kantor:dual} and \cref{prop:CLT:LLN} combined with \cref{prop:Lip:2:obs}. \end{proof} \section{Implications for the Finite Dimensional Setting} \label{sec:finite:dim:HMC} The approach given above can be modified in a straightforward fashion to provide a novel proof of the ergodicity of the exact HMC algorithm in finite dimensions. We detail this connection in this section. We abuse notation and use the same terminology for the analogous constants and operators from the infinite-dimensional case introduced in the previous sections. We take our phase space to be $\mathbb{H} = \mathbb{R}^k$, $k \in \mathbb{N}$, endowed with the Euclidean inner product and norm, which are denoted by $\langle \cdot, \cdot \rangle$ and $|\cdot|$, respectively. Similarly to \eqref{eq:mu} above, we fix a target probability measure of the form \begin{align}\label{eq:mu:FD} \mu(d\mathbf{q} ) \propto \exp( - U(\mathbf{q})) \mu_0(d\mathbf{q}) \quad \text{ with } \mu_0 = \mathcal{N}(0,\mathcal{C}), \end{align} where $\mathcal{C}$ is a symmetric strictly positive-definite covariance matrix.
Here we aim to sample from $\mu$ using the dynamics \begin{align} \frac{d \mathbf{q}}{d t} = \mathcal{M}^{-1} \mathbf{p}, \quad \frac{d \mathbf{p}}{d t} = -\mathcal{C}^{-1} \mathbf{q} - DU(\mathbf{q}), \label{eq:FD:H:dynam} \end{align} corresponding to the Hamiltonian \begin{align} H(\mathbf{q},\mathbf{p}) = \frac{1}{2} \langle \mathcal{C}^{-1} \mathbf{q}, \mathbf{q} \rangle + U(\mathbf{q}) + \frac{1}{2} \langle \mathcal{M}^{-1} \mathbf{p}, \mathbf{p} \rangle, \label{eq:FD:Ham} \end{align} where $\mathcal{M}$ is a user-specified `mass matrix' which we suppose to be symmetric and strictly positive definite, and $U: \mathbb{R}^k \to \mathbb{R}$ is a $C^2$ potential function. Let us denote by $\lambda_{\mM}$ and $\Lambda_{\mM}$ the smallest and largest eigenvalues of $\mathcal{M}$. Analogously, let $\lambda_{\cC}$ and $\Lambda_{\cC}$ be the smallest and largest eigenvalues of $\mathcal{C}$. We impose the following conditions on the potential function $U$ (cf. \cref{B123} above): \begin{assumption} \label{ass:potential:FD} \begin{itemize} \item[(F1)] There exists a constant $L_1 \geq 0$ such that \begin{align} \label{Hess:bound:FD} | D^2 U(\mathbf{f}) | \leq L_1 \quad \text{ for any } \mathbf{f} \in \mathbb{R}^k. \end{align} \item[(F2)] There exist constants $L_2 > 0$ and $L_3 \ge 0$ such that \begin{align} \label{dissip:FD} |\mathcal{M}^{-1/2} \mathcal{C}^{-1/2} \mathbf{f}|^2 + \langle \mathbf{f},\mathcal{M}^{-1} D U(\mathbf{f}) \rangle \ge L_2 |\mathcal{M}^{-1/2} \mathcal{C}^{-1/2} \mathbf{f}|^2 - L_3 \quad \text{ for any } \mathbf{f} \in \mathbb{R}^k. \end{align} \end{itemize} \end{assumption} Note that under \eqref{Hess:bound:FD}, $DU$ is globally Lipschitz, so that \eqref{eq:FD:H:dynam} yields a well-defined dynamical system on $C^1(\mathbb{R}, \mathbb{R}^k)$ as above in \cref{prop:well:posed}. Furthermore, similarly to \cref{rmk:B123:Consequences}, we have: \begin{enumerate}[(i)] \item From \eqref{Hess:bound:FD}, it follows that \begin{align}\label{bound:DU:FD} |DU(\mathbf{f})| \leq L_1 |\mathbf{f}| + L_0 \quad \mbox{ for every } \mathbf{f} \in \mathbb{R}^k, \end{align} where $L_0 = |DU(0)|$. \item If $|DU(\mathbf{f})| \leq L_4 |\mathbf{f}| + L_5$ for some $L_4 \in [0, \lambda_{\mM} (\Lambda_{\mM} \Lambda_{\cC})^{-1})$ and $L_5 \geq 0$, then \eqref{dissip:FD} follows. \item Assumptions $(F1)$ and $(F2)$ imply that \begin{align}\label{ineq:tUa:tUb} L_2 \leq 1 + \Lambda_{\mM} \Lambda_{\cC} \lambda_{\mM}^{-1} L_1. \end{align} \end{enumerate} Fixing an integration time $T > 0$, and under the given conditions on $\mathcal{C}, \mathcal{M}$ and $U$ in \eqref{eq:FD:H:dynam}, we have a well-defined Feller Markov transition kernel defined as \begin{align} P(\mathbf{q}_0, A) = \mathbb{P}( \mathbf{q}_T(\mathbf{q}_0, \mathbf{p}_0) \in A) \label{eq:finite:dim:TK_1} \end{align} for any $\mathbf{q}_0 \in \mathbb{R}^k$ and any Borel set $A \subset \mathbb{R}^k$, where \begin{align} \mathbf{p}_0 \sim \mathcal{N}(0,\mathcal{M}). \label{eq:finite:dim:TK_2} \end{align} Here, following previous notation, $\mathbf{q}_T(\mathbf{q}_0, \mathbf{p}_0)$ is the solution of \eqref{eq:FD:H:dynam} at time $T$ starting from the initial position $\mathbf{q}_0 \in \mathbb{R}^k$ and momentum $\mathbf{p}_0 \in \mathbb{R}^k$. The $n$-fold iteration of the kernel $P$ is denoted as $P^n$. As in \cref{thm:weak:harris}, we measure the convergence of $P^n$ using a suitable Wasserstein distance.
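To make the kernel \eqref{eq:finite:dim:TK_1}-\eqref{eq:finite:dim:TK_2} concrete, the following Python sketch simulates one step of $P$ together with one step of the nudged synchronous coupling employed below (cf. \eqref{def:Phin:FD}). Since the exact flow of \eqref{eq:FD:H:dynam} is not available in closed form for a general $U$, the sketch substitutes a fine leapfrog discretization; this substitution, the quadratic test potential, and all parameter values are assumptions made for illustration only.
\begin{verbatim}
import numpy as np

def flow_qT(q0, p0, T, M_inv, C_inv, grad_U, n_sub=1000):
    # Approximates q_T for dq/dt = M^{-1} p, dp/dt = -C^{-1} q - DU(q)
    # via leapfrog with n_sub substeps (a stand-in for the exact flow).
    h = T / n_sub
    q, p = q0.copy(), p0.copy()
    force = lambda y: -(C_inv @ y) - grad_U(y)
    p = p + 0.5 * h * force(q)
    for _ in range(n_sub - 1):
        q = q + h * (M_inv @ p)
        p = p + h * force(q)
    q = q + h * (M_inv @ p)
    p = p + 0.5 * h * force(q)
    return q

def hmc_step(q0, T, M, M_inv, C_inv, grad_U, rng):
    # One draw from P(q0, .): resample p0 ~ N(0, M), return q_T(q0, p0).
    p0 = rng.multivariate_normal(np.zeros(len(q0)), M)
    return flow_qT(q0, p0, T, M_inv, C_inv, grad_U)

def coupled_step(q0, qt0, T, M, M_inv, C_inv, grad_U, rng):
    # Synchronous noise, with the momentum shift T^{-1} M (q0 - qt0)
    # applied to the nudged chain, as in the coupling construction.
    p0 = rng.multivariate_normal(np.zeros(len(q0)), M)
    q1 = flow_qT(q0, p0, T, M_inv, C_inv, grad_U)
    qt1 = flow_qT(qt0, p0 + (M @ (q0 - qt0)) / T, T, M_inv, C_inv, grad_U)
    return q1, qt1

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    k, T = 10, 0.5
    M = M_inv = C_inv = np.eye(k)
    grad_U = lambda y: 0.1 * y   # assumed potential gradient, L1 = 0.1
    q, qt = rng.standard_normal(k), rng.standard_normal(k)
    for n in range(5):
        q, qt = coupled_step(q, qt, T, M, M_inv, C_inv, grad_U, rng)
        print(n + 1, np.linalg.norm(q - qt))
\end{verbatim}
With these assumed parameters the hypothesis $2 \lambda_{\mM}^{-1} (\lambda_{\cC}^{-1} + L_1) T^2 \leq 1$ holds, and the printed distances contract geometrically, in line with the iterated contraction estimate \eqref{ineq:contr:FD:2} established below.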
In this case, we take \begin{align} \tilde\rho(\mathbf{q}, \tilde \mathbf{q}) = \sqrt{\rho( \mathbf{q}, \tilde \mathbf{q}) (1 + V(\mathbf{q}) + V(\tilde \mathbf{q}))} \quad \text{ where } \quad \rho( \mathbf{q}, \tilde \mathbf{q}) = \frac{| \mathbf{q} - \tilde \mathbf{q} |}{\varepsilon} \wedge 1 \label{eq:semi:met:FD} \end{align} and $V$ is a Foster-Lyapunov function defined as either $V(\mathbf{q}) = V_{1,i}(\mathbf{q}) = |\mathbf{q}|^i$, $i \in \mathbb{N}$, or as $V(\mathbf{q}) = V_{2,\eta}(\mathbf{q}) = \exp(\eta |\mathbf{q}|^2)$, with $\eta > 0$ satisfying \begin{align}\label{cond:eta:FD} \eta < \left[ 2 \operatorname{Tr}(\mathcal{M}) \left( \frac{67}{8}T^2 + \frac{32}{L_2 (\Lambda_{\mM} \Lambda_{\cC})^{-1}} \right) \lambda_{\mM}^{-2} \right]^{-1}. \end{align} We then consider the corresponding Wasserstein distance $\mathcal{W}_{\tilde\rho}$ and prove the theorem below concerning the exact HMC kernel $P$. \begin{Thm}\label{thm:main:FD} Consider the Markov kernel $P$ defined as \eqref{eq:finite:dim:TK_1}, \eqref{eq:finite:dim:TK_2} from the dynamics \eqref{eq:FD:H:dynam}. We suppose that $\mathcal{M}$ and $\mathcal{C}$ in \eqref{eq:FD:H:dynam} are both symmetric and strictly positive definite and we assume that the potential function $U$ satisfies \cref{ass:potential:FD}. In addition, we impose the following condition on the integration time $T >0$: \begin{align}\label{fin:dim:cond:T} T \leq \min \left\{ \frac{1}{\left[ 2 \lambda_{\mM}^{-1}(\lambda_{\cC}^{-1} + L_1) \right]^{1/2}}, \frac{L_2^{1/2} (\Lambda_{\mM} \Lambda_{\cC})^{-1/2}}{2 \sqrt{6} \lambda_{\mM}^{-1} (\lambda_{\cC}^{-1} + L_1)} \right\}, \end{align} where $\lambda_{\mM}$ and $\Lambda_{\mM}$ denote the smallest and largest eigenvalues of $\mathcal{M}$, while $\lambda_{\cC}$ and $\Lambda_{\cC}$ denote the smallest and largest eigenvalues of $\mathcal{C}$, respectively. Then $P$ has a unique ergodic invariant measure given by $\mu$ in \eqref{eq:mu:FD}. Moreover, $P$ satisfies the following spectral gap condition with respect to the Wasserstein distance $\mathcal{W}_{\tilde \rho}$ associated to $\tilde \rho$ defined in \eqref{eq:semi:met:FD}: for all Borel probability measures $\nu_1, \nu_2$ on $\mathbb{R}^k$, \begin{align} \mathcal{W}_{\tilde{\rho}}(\nu_1 P^n, \nu_2 P^n) \leq C_1 e^{- C_2 n} \mathcal{W}_{\tilde{\rho}}(\nu_1, \nu_2) \quad \mbox{ for all } n \in \mathbb{N}, \label{eq:Wass:conv:FD} \end{align} where the constants $C_1, C_2, \varepsilon >0$ are independent of $\nu_1, \nu_2$ and $k$, and can be given explicitly as depending exclusively on $L_1$, $L_2$, $L_3$, $T$, $\mathcal{M}$ and $\mathcal{C}$. \end{Thm} \begin{Rmk} Similarly to \cref{thm:weak:harris}, we can also show that \eqref{eq:Wass:conv:FD} implies a convergence result with respect to suitable observables as in \eqref{ineq:obs:2}, as well as a strong law of large numbers and a central limit theorem analogous to \eqref{main:thm:LLN}-\eqref{main:thm:CLT}. \end{Rmk} \begin{proof} The proof follows very similar steps to the results from Sections \ref{sec:apriori}, \ref{sec:f:l:bnd}, \ref{sec:Ptwise:contract:MD} and \ref{sec:Proof:Main}, so we only point out the main differences.
From \eqref{eq:FD:H:dynam}, it follows that \begin{align*} \frac{d^2 \mathbf{q}}{dt^2} = - \mathcal{M}^{-1}\mathcal{C}^{-1}\mathbf{q} - \mathcal{M}^{-1} D U(\mathbf{q}), \end{align*} so that, after integrating with respect to $t \in [0,T]$ twice, we have \begin{align}\label{eq:qt:FD} \mathbf{q}_t - (\mathbf{q}_0 + t \mathcal{M}^{-1} \mathbf{p}_0) = - \int_0^t \int_0^s \left( \mathcal{M}^{-1} \mathcal{C}^{-1} \mathbf{q}_{\tau} + \mathcal{M}^{-1} D U(\mathbf{q}_{\tau}) \right) d\tau ds. \end{align} Using that \begin{align*} | \mathcal{M}^{-1} \mathbf{f}| \leq \lambda_{\mM}^{-1} |\mathbf{f}| \quad \mbox{ and } \quad |\mathcal{C}^{-1}\mathbf{f}| \leq \lambda_{\cC}^{-1} |\mathbf{f}| \quad \mbox{ for every }\mathbf{f} \in \mathbb{R}^k, \end{align*} together with \eqref{bound:DU:FD} and the condition $T \leq [\lambda_{\mM}^{-1}(\lambda_{\cC}^{-1} + L_1)]^{-1/2}$, one obtains, analogously to \eqref{sup:qs} and \eqref{sup:vs}, \begin{align}\label{bound:q:FD} \sup_{t \in [0,T]} |\mathbf{q}_t - (\mathbf{q}_0 + t \mathcal{M}^{-1} \mathbf{p}_0)| \leq \lambda_{\mM}^{-1} (\lambda_{\cC}^{-1} + L_1) T^2 \max \left\{ |\mathbf{q}_0|, |\mathbf{q}_0 + T \mathcal{M}^{-1}\mathbf{p}_0| \right\} + \lambda_{\mM}^{-1} L_0 T^2 \end{align} and \begin{align} \sup_{t \in [0,T]} |\mathbf{p}_t - \mathbf{p}_0| \leq& (\lambda_{\cC}^{-1} + L_1)T \left[ 1 + \lambda_{\mM}^{-1}(\lambda_{\cC}^{-1} + L_1)T^2\right] \max \left\{ |\mathbf{q}_0|, |\mathbf{q}_0 + T \mathcal{M}^{-1}\mathbf{p}_0| \right\} \notag\\ &+ L_0 T \left[ 1 + \lambda_{\mM}^{-1}(\lambda_{\cC}^{-1} + L_1) T^2 \right]. \label{bound:p:FD} \end{align} Moreover, analogously to \eqref{eq:Hgamma:lip:bnd}, we obtain that for every $(\mathbf{q}_0, \mathbf{p}_0), (\tilde \mathbf{q}_0, \tilde \mathbf{p}_0) \in \mathbb{R}^k \times \mathbb{R}^k$, \begin{multline} \sup_{t \in [0,T]} |\mathbf{q}_t(\mathbf{q}_0, \mathbf{p}_0) - \mathbf{q}_t(\tilde \mathbf{q}_0, \tilde \mathbf{p}_0) - [(\mathbf{q}_0 - \tilde \mathbf{q}_0) + t \mathcal{M}^{-1}(\mathbf{p}_0 - \tilde \mathbf{p}_0)] | \nonumber \\ \leq \lambda_{\mM}^{-1} (\lambda_{\cC}^{-1} + L_1) T^2 \max \left\{ |\mathbf{q}_0 - \tilde \mathbf{q}_0|, | \mathbf{q}_0 - \tilde \mathbf{q}_0 + T \mathcal{M}^{-1} (\mathbf{p}_0 - \tilde \mathbf{p}_0)| \right\}. \end{multline} In particular, if $\tilde \mathbf{p}_0 = \mathbf{p}_0 + \mathcal{M} (\mathbf{q}_0 - \tilde \mathbf{q}_0) T^{-1}$ then \begin{align}\label{ineq:contr:FD} |\mathbf{q}_T(\mathbf{q}_0, \mathbf{p}_0) - \mathbf{q}_T(\tilde \mathbf{q}_0, \tilde \mathbf{p}_0)| \leq \lambda_{\mM}^{-1} (\lambda_{\cC}^{-1} + L_1) T^2 |\mathbf{q}_0 - \tilde \mathbf{q}_0| \leq \frac{1}{2} |\mathbf{q}_0 - \tilde \mathbf{q}_0|. \end{align} We also show that the functions $V(\mathbf{q}) = |\mathbf{q}|^i$, with $i \geq 1$, and $V(\mathbf{q}) = \exp(\eta |\mathbf{q}|^2)$, with $\eta > 0$ satisfying \eqref{cond:eta:FD}, all verify a Foster-Lyapunov structure as in \cref{def:Lyap}. The proof follows as in Proposition \ref{prop:FL}, with the difference starting from \eqref{eq:der:q:v}, which is now written as \begin{align}\label{eq:der:q:p} \frac{d}{ds} \langle \mathbf{q}_s, \mathcal{M}^{-1} \mathbf{p}_s \rangle = |\mathcal{M}^{-1} \mathbf{p}_s|^2 - |\mathcal{M}^{-1/2} \mathcal{C}^{-1/2} \mathbf{q}_s|^2 - \langle \mathbf{q}_s, \mathcal{M}^{-1} D U (\mathbf{q}_s) \rangle.
\end{align} Using now $(F2)$ from \cref{ass:potential:FD} and the inequalities \begin{align*} |\mathcal{M}^{-1/2} \mathbf{f}| \geq \Lambda_{\mM}^{-1/2}|\mathbf{f}| \quad \mbox{ and } \quad |\mathcal{C}^{-1/2} \mathbf{f}| \geq \Lambda_{\cC}^{-1/2} |\mathbf{f}| \quad \mbox{ for all } \mathbf{f} \in \mathbb{R}^k, \end{align*} we obtain from \eqref{eq:der:q:p} that \begin{align}\label{ineq:qT:FD} |\mathbf{q}_T|^2 \leq |\mathbf{q}_0|^2 + 2 T \langle \mathbf{q}_0, \mathcal{M}^{-1} \mathbf{p}_0 \rangle + 2 \int_0^T \int_0^s \left[ \lambda_{\mM}^{-2} |\mathbf{p}_\tau|^2 - L_2 (\Lambda_{\mM} \Lambda_{\cC})^{-1} |\mathbf{q}_\tau|^2 + L_3 \right] d\tau ds. \end{align} Then, with \eqref{ineq:tUa:tUb}, the a priori bounds \eqref{bound:q:FD}-\eqref{bound:p:FD} and the fact that $2 \lambda_{\mM}^{-1} (\lambda_{\cC}^{-1} + L_1) T^2 \leq 1$ from hypothesis \eqref{fin:dim:cond:T}, we arrive at \begin{align} |\mathbf{q}_T|^2 \leq& \left( 1 + \frac{3}{2} \lambda_{\mM}^{-2} (\lambda_{\cC}^{-1} + L_1)^2 T^4 - \frac{L_2}{8} (\Lambda_{\mM} \Lambda_{\cC})^{-1} T^2 \right) |\mathbf{q}_0|^2 + 2T \langle \mathbf{q}_0, \mathcal{M}^{-1} \mathbf{p}_0 \rangle \notag\\ &+ \frac{67}{8} \lambda_{\mM}^{-2} T^2 |\mathbf{p}_0|^2 + \frac{3}{2} L_0^2 \lambda_{\mM}^{-2} T^4 + \frac{L_0^2}{6} \lambda_{\mM}^{-2} T^4 + L_3 T^2. \label{ineq:qT:fin:dim} \end{align} From the second condition in hypothesis \eqref{fin:dim:cond:T} it follows that $(3/2)\lambda_{\mM}^{-2} (\lambda_{\cC}^{-1} + L_1)^2 T^4 \leq (L_2/16) (\Lambda_{\mM} \Lambda_{\cC})^{-1} T^2$, so that after taking expected values in \eqref{ineq:qT:fin:dim} we obtain \begin{align*} \mathbb{E} |\mathbf{q}_T|^2 \leq \exp \left( - \frac{L_2}{16} (\Lambda_{\mM} \Lambda_{\cC})^{-1} T^2 \right) |\mathbf{q}_0|^2 + \left( \frac{67}{8} \lambda_{\mM}^{-2} \operatorname{Tr}(\mathcal{M}) + \frac{5}{3} \lambda_{\mM}^{-2} L_0^2 T^2 + L_3 \right) T^2. \end{align*} Now, proceeding analogously to \eqref{quad:Lyap:n}-\eqref{ineq:exp:Lyap:2}, we obtain that for $V: \mathbb{R}^k \to \mathbb{R}$ given either as $V(\mathbf{q}) = |\mathbf{q}|^i$, $i \in \mathbb{N}$, or $V(\mathbf{q}) = \exp(\eta|\mathbf{q}|^2)$, with $\eta > 0$ satisfying \eqref{cond:eta:FD}, there exist constants $\kappa_V \in [0,1)$ and $K_V > 0$ such that \begin{align}\label{ineq:FL:FD} P^n V(\mathbf{q}_0) \leq \kappa_V^n V(\mathbf{q}_0) + K_V \quad \mbox{ for all } \mathbf{q}_0 \in \mathbb{R}^k, \mbox{ for all } n \in \mathbb{N}, \end{align} i.e. these are Lyapunov functions for $P$. Let $(\mathbb{R}^k)^n$ denote the product of $n$ copies of $\mathbb{R}^k$ and let $\mathcal{N}(0, \mathcal{M})^{\otimes n}$ denote the product of $n$ copies of $\mathcal{N}(0, \mathcal{M})$. Analogously to Section \ref{sec:Ptwise:contract:MD}, given $\mathbf{q}_0 \in \mathbb{R}^k$ and a sequence $\{\mathbf{p}_0^{(j)}\}_{j \in \mathbb{N}}$ of i.i.d. draws from $\mathcal{N}(0, \mathcal{M})$, we denote $\mathbf{P}_0^{(n)} = (\mathbf{p}_0^{(1)}, \ldots, \mathbf{p}_0^{(n)})$, for all $n \in \mathbb{N}$, and take $Q_n(\mathbf{q}_0, \cdot): (\mathbb{R}^k)^n \to \mathbb{R}^k$ according to \begin{align*} Q_1(\mathbf{q}_0, \mathbf{p}_0^{(1)}) = \mathbf{q}_T(\mathbf{q}_0, \mathbf{p}_0^{(1)}), \quad Q_n(\mathbf{q}_0,\mathbf{P}_0^{(n)}) = \mathbf{q}_T(Q_{n-1}(\mathbf{q}_0,\mathbf{P}_0^{(n-1)}),\mathbf{p}_0^{(n)}) \quad \mbox{ for all } n \geq 2.
\end{align*} Similarly, given any $\mathbf{q}_0, \tilde{\mathbf{q}}_0 \in \mathbb{R}^k$, we take $\widetilde{Q}_n(\mathbf{q}_0, \tilde \mathbf{q}_0, \cdot): (\mathbb{R}^k)^n \to \mathbb{R}^k$ to be the random variables starting from \begin{align*} \widetilde{Q}_1(\mathbf{q}_0, \tilde \mathbf{q}_0, \mathbf{p}_0^{(1)}) = \mathbf{q}_T(\tilde \mathbf{q}_0, \mathbf{p}_0^{(1)} + T^{-1} \mathcal{M}(\mathbf{q}_0 - \tilde \mathbf{q}_0)), \end{align*} then defined for each integer $n \geq 2$ as \begin{align*} \widetilde{Q}_n(\mathbf{q}_0, \tilde \mathbf{q}_0, \mathbf{P}_0^{(n)}) = \mathbf{q}_T(\widetilde{Q}_{n-1}(\mathbf{q}_0, \tilde \mathbf{q}_0, \mathbf{P}_0^{(n-1)}), \mathbf{p}_0^{(n)} + \mathcal{S}_n(\mathbf{P}_0^{(n-1)})) \end{align*} with \begin{align}\label{def:Phin:FD} \mathcal{S}_n(\mathbf{P}_0^{(n-1)}) = T^{-1} \mathcal{M} \left[ Q_{n-1}(\mathbf{q}_0, \mathbf{P}_0^{(n-1)}) - \widetilde{Q}_{n-1} (\mathbf{q}_0, \tilde \mathbf{q}_0, \mathbf{P}_0^{(n-1)}) \right] \quad \mbox{ for all } n \geq 2. \end{align} We also denote \begin{align*} \boldsymbol{\mathcal{S}}_n(\mathbf{P}_0^{(n)}) = (\mathcal{S}_1, \mathcal{S}_2(\mathbf{P}_0^{(1)}), \ldots, \mathcal{S}_n(\mathbf{P}_0^{(n-1)})), \quad \mbox{ with } \mathcal{S}_1 = T^{-1} \mathcal{M} (\mathbf{q}_0 - \tilde \mathbf{q}_0), \end{align*} and $\boldsymbol{\Psi}_n(\mathbf{P}_0^{(n)}) = \mathbf{P}_0^{(n)} + \boldsymbol{\mathcal{S}}_n(\mathbf{P}_0^{(n)})$. Thus, by using inequality \eqref{ineq:contr:FD} $n$ times iteratively, we obtain that \begin{align}\label{ineq:contr:FD:2} |Q_n(\mathbf{q}_0,\mathbf{P}_0^{(n)}) - \widetilde{Q}_n(\mathbf{q}_0, \tilde \mathbf{q}_0,\mathbf{P}_0^{(n)})| \leq \frac{1}{2^n} |\mathbf{q}_0 - \tilde \mathbf{q}_0|, \end{align} for all $\mathbf{P}_0^{(n)} \in (\mathbb{R}^k)^n$. Let $\sigma_n = \text{Law} (\mathbf{P}_0^{(n)}) = \mathcal{N}(0,\mathcal{M})^{\otimes n}$ and $\tilde \sigma_n = \text{Law}(\boldsymbol{\Psi}_n(\mathbf{P}_0^{(n)})) = \boldsymbol{\Psi}_n^\ast \sigma_n$. Analogously to Proposition \ref{prop:local_contr} and Proposition \ref{prop:smallness}, we obtain that the distance-like function $\rho$ defined in \eqref{eq:semi:met:FD} satisfies contractivity and smallness properties with respect to the Markov operator $P^n$ for $n$ sufficiently large. Here, the main difference lies in the estimate of the Kullback-Leibler Divergence $\KL (\tilde{\sigma}_n | \sigma_n)$, \eqref{eq:KL:div}. Proceeding similarly to \eqref{Girsanov}-\eqref{ineq:KL}, we arrive at \begin{align*} \KL(\tilde{\sigma}_n | \sigma_n) \leq \frac{1}{2} \sum_{j=1}^n \mathbb{E} |\mathcal{M}^{-1/2} \mathcal{S}_j (\cdot)|^2. \end{align*} Using \eqref{ineq:contr:FD:2}, it follows that for every $j \in \{1, \ldots, n\}$ and $\mathbf{P}_0^{(j-1)} \in (\mathbb{R}^k)^{j-1}$ \begin{align*} |\mathcal{M}^{-1/2}\mathcal{S}_j(\mathbf{P}_0^{(j-1)})|^2 \leq& \lambda_{\mM}^{-1} |\mathcal{S}_j(\mathbf{P}_0^{(j-1)})|^2 \leq \lambda_{\mM}^{-1} \Lambda_{\mM}^2 T^{-2} |Q_{j-1} (\mathbf{q}_0, \mathbf{P}_0^{(j-1)}) - \widetilde{Q}_{j-1} (\mathbf{q}_0, \tilde \mathbf{q}_0, \mathbf{P}_0^{(j-1)})|^2 \\ \leq& \frac{\lambda_{\mM}^{-1} \Lambda_{\mM}^2 T^{-2}}{2^{2(j-1)}} |\mathbf{q}_0 - \tilde \mathbf{q}_0|^2, \end{align*} where in the second inequality we used that $|\mathcal{M} \cdot|^2 \leq \Lambda_{\mM}^2 |\cdot|^2$.
Hence, \begin{align}\label{ineq:KL:FD} \KL(\tilde{\sigma}_n | \sigma_n) \leq \frac{\lambda_{\mM}^{-1} \Lambda_{\mM}^2 T^{-2}}{2} |\mathbf{q}_0 - \tilde \mathbf{q}_0|^2 \sum_{j=1}^n \frac{1}{2^{2(j-1)}} \leq \frac{4 \Lambda_{\mM}^2}{\lambda_{\mM} T^2} |\mathbf{q}_0 - \tilde \mathbf{q}_0|^2. \end{align} By using \eqref{ineq:KL:FD}, one obtains analogously to Proposition \ref{prop:local_contr} that for every $n \in \mathbb{N}$ and for every $\mathbf{q}_0, \tilde \mathbf{q}_0 \in \mathbb{R}^k$ such that $\rho(\mathbf{q}_0, \tilde \mathbf{q}_0) < 1$, we have \begin{align}\label{ineq:contr:Wass:FD} \mathcal{W}_{\rho}(P^n(\mathbf{q}_0, \cdot), P^n(\tilde \mathbf{q}_0, \cdot)) \leq \left( \frac{1}{2^n} + \frac{\sqrt{2} \Lambda_{\mM} \varepsilon}{\lambda_{\mM}^{1/2} T } \right) \rho(\mathbf{q}_0, \tilde \mathbf{q}_0). \end{align} Moreover, analogously to Proposition \ref{prop:smallness}, we obtain that, given $M \geq 0$, for every $\mathbf{q}_0, \tilde \mathbf{q}_0 \in A := \{ \mathbf{q} \in \mathbb{R}^k \,:\, |\mathbf{q}| \leq M \}$, it holds: \begin{align}\label{ineq:small:FD} \mathcal{W}_{\rho} (P^n(\mathbf{q}_0, \cdot), P^n(\tilde \mathbf{q}_0, \cdot)) \leq 1 - \frac{1}{2} \exp \left( -\frac{16 \Lambda_{\mM}^2 M^2}{\lambda_{\mM} T^2} \right) + \frac{M}{2^{n-1} \varepsilon}. \end{align} The remaining portion of the proof now follows as for Theorem \ref{thm:weak:harris}, by combining \eqref{ineq:FL:FD}, \eqref{ineq:contr:Wass:FD} and \eqref{ineq:small:FD}. \end{proof} \begin{Rmk}\label{rmk:FD} From the condition \eqref{fin:dim:cond:T} on the integration time $T$, we see how the upper bound could potentially degenerate to zero in case the eigenvalues of $\mathcal{C}$ and/or the eigenvalues of $\mathcal{M}$ decay to zero as the dimension of $\mathbb{R}^k$ increases. Moreover, if the eigenvalues of $\mathcal{M}$ decrease to zero (i.e. $\lambda_{\mM} \to 0$) or increase to infinity (i.e. $\Lambda_{\mM} \to \infty$) with respect to $k$, then, for fixed $n$, $\varepsilon$ and $T$, the upper bound in \eqref{ineq:contr:Wass:FD} increases to infinity, and the first two terms in the upper bound in \eqref{ineq:small:FD} increase to $1$. This would imply that the convergence rate in \eqref{eq:Wass:conv:FD}, which is directly proportional to the upper bounds in \eqref{ineq:contr:Wass:FD}-\eqref{ineq:small:FD} and inversely proportional to $T$, would become `slower' as $k$ increases. In other words, the number $n$ of iterations necessary for the distance between $\nu_1 P^n$ and $\nu_2 P^n$ to decay within a given $\delta > 0$ would increase with the dimension $k$. This type of behavior is commonly known as the `curse of dimensionality'. A natural choice for the mass matrix $\mathcal{M}$ to avoid such unwanted behavior is given by $\mathcal{M} = \mathcal{C}^{-1}$; this is the idea behind preconditioning in \cite{BePiSaSt2011}, which leads us to consider \eqref{eq:dynamics:xxx} in the infinite dimensional formulation.
In this preconditioned case, one could use that $\lambda_{\mM} = \Lambda_{\cC}^{-1}$ and $\Lambda_{\mM} = \lambda_{\cC}^{-1}$ directly in \eqref{fin:dim:cond:T} to obtain \begin{align}\label{cond:T:rough} T \leq \min \left\{ \frac{1}{[ 2 \Lambda_{\cC} (\lambda_{\cC}^{-1} + L_1) ]^{1/2}}, \frac{L_2^{1/2} (\lambda_{\cC}^{-1} \Lambda_{\cC} )^{-1/2}} {2 \sqrt{6} \Lambda_{\cC} (\lambda_{\cC}^{-1} + L_1)} \right\}, \end{align} where the upper bound actually still degenerates to zero in case $\lambda_{\cC} \to 0$ as $k \to \infty$ (corresponding to the trace-class assumption on $\mathcal{C}$ in the infinite-dimensional case). However, the inequalities that lead to the condition on $T$ as in \eqref{cond:T:rough} would in fact be rough overestimates in this case. Indeed, for $\mathcal{M} = \mathcal{C}^{-1}$, the term $\mathcal{M}^{-1}\mathcal{C}^{-1} \mathbf{q}_{\tau}$ in \eqref{eq:qt:FD} is simply equal to $\mathbf{q}_\tau$, and thus it is no longer estimated from above by $\lambda_{\mM}^{-1}\lambda_{\cC}^{-1} |\mathbf{q}_\tau|$ as in \eqref{bound:q:FD}. Similarly, the term $|\mathcal{M}^{-1/2} \mathcal{C}^{-1/2} \mathbf{q}_s|^2$ in \eqref{eq:der:q:p} is simply $|\mathbf{q}_s|^2$ and thus no longer estimated from below by $(\Lambda_{\mM} \Lambda_{\cC})^{-1} |\mathbf{q}_\tau|^2$ as in \eqref{ineq:qT:FD}. With these changes, $T$ is required to satisfy instead \begin{align*} T \leq \min \left\{ \frac{1}{[2 (1 + \Lambda_{\cC} L_1)]^{1/2}}, \frac{L_2^{1/2}}{2 \sqrt{6} (1 + \Lambda_{\cC} L_1)} \right\}, \end{align*} which is consistent with condition \eqref{eq:time:restrict:basic} for $\Lambda_{\cC} = \lambda_1$ (when $\gamma = 0$), and thus independent of $k$ when $\Lambda_{\cC}$ is uniformly bounded with respect to $k$. On the other hand, replacing $\Lambda_{\mM}$ with $\lambda_{\cC}^{-1}$ in \eqref{ineq:contr:Wass:FD} and \eqref{ineq:small:FD}, we see that the same unwanted behavior is not removed here when $\lambda_{\cC} \to 0$ as $k \to \infty$; i.e. the convergence rate would still degenerate with the dimension $k$. This emphasizes the need for considering `shifts' in the momentum (or velocity) paths for the modified process $\widetilde{Q}_n(\mathbf{q}_0, \tilde{\mathbf{q}}_0, \cdot)$, $\mathbf{q}_0, \tilde{\mathbf{q}}_0 \in \mathbb{R}^k$, that are restricted to a fixed number of directions in $\mathbb{R}^k$, for every $k$, as done in \eqref{def:Phin} through the projection operator $\Pi_N$, with $N$ sufficiently large but fixed (cf. \eqref{def:Phin:FD}). \end{Rmk} \section{Application to the Bayesian estimation of divergence free flows from a passive scalar} \label{sec:bayes:AD} In this section we establish some results concerning the degree of applicability of \cref{thm:weak:harris} to the PDE inverse problem of estimating a divergence free flow from a passive scalar as described above in the introduction, cf. \eqref{eq:ad:eqn}, \eqref{eq:gen:linear:form}, \eqref{eq:post:passive:scal}. For this purpose, according to the conditions required in \cref{B123}, we wish to establish suitable bounds on $U$, $DU$ and $D^2U$. Of course such bounds are expected to depend crucially on the form of the observation operator $\mathcal{O}$.
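For orientation, recall that for the Gaussian observation noise model the potential in \eqref{eq:pot:passive:scal:g:n} is of nonlinear least-squares type,
\begin{align*}
U(\mathbf{q}) = |\Gamma^{-1/2} (\mathcal{Y} - \mathcal{O}(\theta(\mathbf{q})))|^2,
\end{align*}
so that the expressions for its first and second directional derivatives displayed next follow from the chain rule together with the linearity of $\mathcal{O}$; this least-squares structure is what produces the formulas \eqref{eq:DD:U:xi}-\eqref{eq:DD:U:xi:txi} below.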
Here, adopting the notations $U^{\xi} = \langle DU, \xi \rangle$ and $U^{\xi,\tilde{\xi}} = \langle D^2U\xi, \tilde{\xi} \rangle$ for directional derivatives of $U$ with respect to vectors $\xi, \tilde{\xi}$ in the phase space, we have that \begin{align} U^{\xi} (\mathbf{q}) = -2\langle \Gamma^{-1/2} (\mathcal{Y} - \mathcal{O}(\theta(\mathbf{q}))), \Gamma^{-1/2} \mathcal{O}(\psi^\xi(\mathbf{q})) \rangle \label{eq:DD:U:xi} \end{align} and \begin{align} U^{\xi, \tilde{\xi}} (\mathbf{q}) = 2\langle \Gamma^{-1/2} \mathcal{O} (\psi^{\tilde{\xi}}(\mathbf{q})), \Gamma^{-1/2} \mathcal{O}(\psi^\xi(\mathbf{q})) \rangle -2\langle \Gamma^{-1/2} (\mathcal{Y} - \mathcal{O}(\theta(\mathbf{q}))), \Gamma^{-1/2} \mathcal{O}(\psi^{\xi, \tilde{\xi}}(\mathbf{q})) \rangle \label{eq:DD:U:xi:txi} \end{align} where $\psi^\xi(\mathbf{q}) = \psi^\xi(t;\mathbf{q})$ obeys \begin{align} \partial_t \psi^\xi + \mathbf{q} \cdot \nabla \psi^\xi = \kappa \Delta \psi^\xi - \xi \cdot \nabla \theta(\mathbf{q}), \quad \psi^\xi(0; \mathbf{q}) =0 \label{eq:grad:the:xi} \end{align} and $\psi^{\xi, \tilde{\xi}}(\mathbf{q}) = \psi^{\xi, \tilde{\xi}}(t; \mathbf{q})$ satisfies \begin{align} \partial_t \psi^{\xi, \tilde{\xi}} + \mathbf{q} \cdot \nabla \psi^{\xi, \tilde{\xi}} = \kappa \Delta\psi^{\xi, \tilde{\xi}} - \tilde{\xi} \cdot \nabla \psi^\xi - \xi \cdot \nabla \psi^{\tilde{\xi}}, \quad \psi^{\xi, \tilde{\xi}} (0; \mathbf{q}) = 0, \label{eq:grad:the:xitxi} \end{align} for any suitable $\xi, \tilde{\xi}$. \subsubsection{Mathematical Setting of the Advection Diffusion Equation, Associated Bounds} \label{sec:AD:Math} In order to place \eqref{eq:post:passive:scal} in a rigorous functional setting we adapt some results from \cite{borggaard2018Consistency, borggaard2018bayesian}. In view of \eqref{eq:grad:the:xi}, \eqref{eq:grad:the:xitxi} we consider a slightly more general version of \eqref{eq:ad:eqn} where we include an external forcing term $f: [0,T] \times \mathbb{T}^2 \rightarrow \mathbb{R}$, namely, \begin{align} \partial_t \phi + \mathbf{q} \cdot \nabla \phi = \kappa \Delta \phi + f, \quad \phi(0) = \phi_0. \label{eq:ad:forced} \end{align} Specifically, we need to estimate terms appearing in the gradient and Hessian of $U$ involving solutions of \eqref{eq:ad:forced} with certain forcing terms; cf. \eqref{eq:grad:the:xi}, \eqref{eq:grad:the:xitxi} above. We adopt the notation $H^s(\mathbb{T}^2)$ for the Sobolev space of periodic functions with $s \geq 0$ derivatives in $L^2$. Here we denote $\Lambda^s = (- \Delta)^{s/2}$. Thus, the associated $H^s(\mathbb{T}^2)$ norms are given by $\| \cdot \|_s = \| \Lambda^s \cdot \|_0$ where $\| \cdot \|_0$ is the usual $L^2(\mathbb{T}^2)$ norm. We also make use of the negative Sobolev spaces $H^{-s}(\mathbb{T}^2)$ for $s \geq 0$ defined via duality relative to $L^2(\mathbb{T}^2)$ with the norms reading as \begin{align} \| f \|_{-s} = \sup_{\|\xi\|_{s} = 1} \langle f, \xi \rangle \label{eq:neg:sob:space} \end{align} where $\langle \cdot, \cdot \rangle$ is the usual duality pairing so that $\langle f, \xi \rangle = \int_{\mathbb{T}^2} f \xi dx$ when $f \in L^2(\mathbb{T}^2)$. All other norms are denoted as $\| \cdot \|_X$ where $X$ is the associated space, e.g. $X = L^\infty$. We abuse notation and use the same naming convention $H^s(\mathbb{T}^2)$ and associated norm $\| \cdot \|_s$ for periodic, divergence free vector fields with $s$ derivatives in $L^2(\mathbb{T}^2)$.
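Before turning to rigorous estimates, we remark that the quantities above are straightforward to realize numerically. The following is a minimal sketch of ours (it is not taken from \cite{borggaard2018bayesian}): it discretizes \eqref{eq:ad:eqn} and \eqref{eq:grad:the:xi} by an IMEX Euler pseudo-spectral scheme on $\mathbb{T}^2$, takes $\Gamma$ to be the identity, uses two spatial (volumetric) averages of the scalar at the final time as the observation, and checks the value of $U^\xi$ from \eqref{eq:DD:U:xi} against a centered finite difference of $U$. All parameter values, field choices and the synthetic data vector are hypothetical.
\begin{verbatim}
import numpy as np

# Assumed discretization parameters (illustrative only).
N, kappa, T_obs, dt = 32, 0.5, 0.5, 1.0e-3
nsteps = int(T_obs / dt)
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
k = np.fft.fftfreq(N, d=1.0 / N)               # integer wavenumbers
KX, KY = np.meshgrid(k, k, indexing="ij")
decay = 1.0 / (1.0 + dt * kappa * (KX**2 + KY**2))  # implicit diffusion

def grad(f):
    fh = np.fft.fft2(f)
    return (np.real(np.fft.ifft2(1j * KX * fh)),
            np.real(np.fft.ifft2(1j * KY * fh)))

def stream_to_vec(psi):
    # divergence-free field (-d_y psi, d_x psi) from a streamfunction
    px, py = grad(psi)
    return (-py, px)

theta0 = np.sin(X)                             # assumed initial scalar

def solve_theta_psi(q, xi):
    # co-evolve theta and its linearization psi^xi:
    # d_t psi + q.grad psi = kappa Lap psi - xi.grad theta, psi(0) = 0
    th, ps = theta0.copy(), np.zeros_like(theta0)
    for _ in range(nsteps):
        thx, thy = grad(th)
        psx, psy = grad(ps)
        th_new = np.real(np.fft.ifft2(decay * np.fft.fft2(
            th - dt * (q[0] * thx + q[1] * thy))))
        ps_new = np.real(np.fft.ifft2(decay * np.fft.fft2(
            ps - dt * (q[0] * psx + q[1] * psy + xi[0] * thx + xi[1] * thy))))
        th, ps = th_new, ps_new
    return th, ps

def observe(phi):
    # two volumetric averages at the final time
    return np.array([np.mean(phi * np.cos(X)), np.mean(phi * np.sin(Y))])

Ydata = np.array([0.3, -0.1])                  # synthetic data; Gamma = Id

def U_and_DUxi(q, xi):
    th, ps = solve_theta_psi(q, xi)
    r = Ydata - observe(th)
    return np.dot(r, r), -2.0 * np.dot(r, observe(ps))  # U and U^xi

q = stream_to_vec(np.sin(X) * np.sin(Y))
xi = stream_to_vec(np.cos(X) * np.sin(Y))
_, dU = U_and_DUxi(q, xi)

eps = 1.0e-4                                   # centered difference check
Up, _ = U_and_DUxi((q[0] + eps * xi[0], q[1] + eps * xi[1]), xi)
Um, _ = U_and_DUxi((q[0] - eps * xi[0], q[1] - eps * xi[1]), xi)
print(dU, (Up - Um) / (2.0 * eps))             # agree up to O(eps^2)
\end{verbatim}
Since the discrete update for $\psi^\xi$ is the exact linearization of the discrete update for $\theta$ with respect to the advecting field, the two printed numbers agree up to $O(\varepsilon^2)$ and roundoff.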
We have the following proposition adapted from \cite{borggaard2018Consistency}: \begin{Prop}[Well-Posedness and Continuity of the solution map for \eqref{eq:ad:forced}] \label{def:adr_weak} \mbox{} \begin{itemize} \item[(i)] Fix any $s \geq 0$ and suppose that $\mathbf{q} \in H^{s}(\mathbb{T}^2)$, $\phi_0 \in H^{s}(\mathbb{T}^2) \cap L^\infty(\mathbb{T}^2)$ and $f \in L^2_{loc}([0,\infty); H^{s-1}(\mathbb{T}^2))$. Then there exists a unique $\phi = \phi(\mathbf{q}, \phi_0, f)$ such that \begin{align} \phi &\in L^2_{loc}([0,\infty); H^{s+1}(\mathbb{T}^2)) \cap L^\infty([0,\infty); H^{s}(\mathbb{T}^2)), \quad \frac{\partial \phi}{\partial t} \in L^2_{loc}([0,\infty); H^{s-1}(\mathbb{T}^2)) \label{eq:AD:gen:reg:1} \end{align} so that in particular\footnote{See e.g. \cite[Lemma 3.1.2]{Temam2001}.} \begin{align*} \phi \in C([0,\infty); H^{s}(\mathbb{T}^2)) \end{align*} and where $\phi$ solves~\eqref{eq:ad:forced} at least weakly. Additionally, $\phi$ maintains the bounds \begin{align} &\frac{d}{dt} \| \phi \|^2_0 + 2\kappa \| \phi \|_1^2 = 2 \int f \phi dx, \label{eq:L2:balance}\\ &\sup_{t \in [0,t^*]} \| \phi(t) \|_{L^\infty} \leq \| \phi_0 \|_{L^\infty} + \int_0^{t^*} \|f\|_{L^\infty} dt, \text{ for any } t^* > 0. \label{eq:Linfty:bnd} \end{align} When $s > 0$ we have \begin{align} \frac{d}{dt} \| \phi \|^2_s + \kappa \| \phi \|_{s+1}^2 \leq c \| \phi \|^2_s \| \mathbf{q}\|^{a}_s + 2 \int \Lambda^s f \Lambda^s \phi \, dx \label{eq:Hs:balance} \end{align} where the constants $c = c(\kappa, s)$, $a = a(\kappa, s)$ are independent of $\mathbf{q}$. \item[(ii)] Let $\phi^{(j)} = \phi(\mathbf{q}_j,\phi_{0,j},f_j)$ for $j = 1,2$ be two solutions of \eqref{eq:ad:forced} corresponding to data $\mathbf{q}_j,\phi_{0,j},f_j$ satisfying the conditions in part (i). Then, taking $\psi = \phi^{(1)} - \phi^{(2)}$, $\mathbf{p} = \mathbf{q}_1 - \mathbf{q}_2$, we have \begin{align} \frac{d}{dt} \|\psi \|_0^2 +\kappa \|\psi \|_1^2 \leq c \| \mathbf{p}\|_0^2 \|\phi^{(1)}\|_{L^\infty}^2 + c\| f_1 - f_2\|^2_{-1} \label{eq:L2:cont:est:AD} \end{align} with $c = c(\kappa)$ independent of $\mathbf{q}_1, \mathbf{q}_2$. Furthermore, in the case when $s > 0$ we have \begin{align} \frac{d}{dt}\|\psi \|_s^2 +\kappa \|\psi \|_{s+1}^2 \leq c \|\psi \|_s^2 \|\mathbf{q}_1\|_s^{a} + c \|\mathbf{p}\|_s^{2} \| \phi^{(2)} \|_{s+1}^2 + c \|f_1 - f_2\|_{s-1}^2, \label{eq:Hs:cont:est:AD} \end{align} where the constants $c = c(\kappa, s)$, $a = a(\kappa, s)$ are again independent of $\mathbf{q}_1, \mathbf{q}_2$. \end{itemize} \end{Prop} \cref{def:adr_weak} immediately yields quantitative bounds on derivatives of $\theta(\mathbf{q})$ in its advecting flow $\mathbf{q}$ which solve \eqref{eq:grad:the:xi}, \eqref{eq:grad:the:xitxi}. In turn these bounds provide the quantitative foundation for the estimates on $DU$ and $D^2U$ below in \cref{prop:DU:DsqU:Bnds}, \cref{cor:DU:DsqU:Bnd:w:C}. \begin{Prop} \label{prop:grad:xi:apriori} Fix any $s > 0$ and $\theta_0 \in L^\infty(\mathbb{T}^2) \cap H^s(\mathbb{T}^2)$. Then the map from $H^s(\mathbb{T}^2)$ to $C([0,\infty); H^s(\mathbb{T}^2))$ that associates to each $\mathbf{q} \in H^s(\mathbb{T}^2)$ the corresponding solution $\theta(\mathbf{q}) := \theta(\cdot; \mathbf{q}, \theta_0)$ of \eqref{eq:ad:eqn} is a $C^2$ function. Denote by $\psi^\xi(\mathbf{q})$ and $\psi^{\xi,\tilde{\xi}}(\mathbf{q})$ the directional derivatives of $\theta$ in the directions $\xi, \tilde{\xi} \in H^s(\mathbb{T}^2)$.
Then $\psi^{\xi}(\mathbf{q})$ and $\psi^{\xi,\tilde{\xi}}(\mathbf{q})$ obey \eqref{eq:grad:the:xi} and \eqref{eq:grad:the:xitxi}, respectively, with regularity \eqref{eq:AD:gen:reg:1} in the sense of \cref{def:adr_weak}. Furthermore, \begin{itemize} \item[(i)] For any $\mathbf{q}, \xi \in H^s(\mathbb{T}^2)$, $t^* > 0$ we have \begin{align} \sup_{t\leq t^*} \| \psi^\xi(t;\mathbf{q})\|_0^2 + \int_0^{t^*}\| \psi^\xi(t;\mathbf{q})\|_1^2 dt \leq c t^* \|\xi\|_0^2 \label{eq:psi:xi:Bnd:L2} \end{align} and \begin{align} \sup_{t\leq t^*} \| \psi^\xi(t;\mathbf{q})\|_0^2 + \int_0^{t^*}\| \psi^\xi(t;\mathbf{q})\|_1^2 dt \leq c \|\xi\|_s^2 \label{eq:psi:xi:Bnd:L2:b} \end{align} where $c = c(\|\theta_0\|_{L^\infty},\kappa)$ is independent of $\mathbf{q}$, $\xi$ and $t^*$. Furthermore, \begin{align} \sup_{t \leq t^*} \| \psi^\xi(t; \mathbf{q})\|_s^2 + \int_0^{t^*} \| \psi^\xi(t; \mathbf{q})\|_{s+1}^2 dt \leq c \|\xi\|_s^2 \exp( c t^* \|\mathbf{q}\|^{a}_s) \label{eq:psi:xi:Bnd:Hs} \end{align} where the constant $c = c(s, \|\theta_0\|_s, \kappa)$ is independent of $\mathbf{q}$, $\xi$ and $t^* > 0$; and $a$ is precisely the constant from \eqref{eq:Hs:balance}. \item[(ii)] On the other hand, given any $\mathbf{q}, \xi, \tilde{\xi} \in H^s(\mathbb{T}^2)$, $t^* > 0$ \begin{align} \sup_{t \leq t^*} \| \psi^{\xi,\tilde{\xi}}(t;\mathbf{q})\|_0^2 + \int_0^{t^*}\| \psi^{\xi,\tilde{\xi}}(t;\mathbf{q})\|_1^2 dt \leq c(\|\xi\|_s^4 + \|\tilde{\xi}\|_s^4) \label{eq:psi:xi:txiBnd:L2} \end{align} where $c= c(s,\|\theta_0\|_{L^\infty}, \|\theta_0\|_s, \kappa)$ is independent of $\mathbf{q}, \xi, \tilde{\xi}$ and $t^*$. Moreover, \begin{align} \sup_{t \leq t^*} \| \psi^{\xi,\tilde{\xi}}(t; \mathbf{q})\|_s^2 \leq c(\|\xi\|^4_s + \|\tilde{\xi}\|_s^4) \exp( t^* c \| \mathbf{q}\|^{a}_s ) \label{eq:psi:xi:txiBnd:H2} \end{align} for a constant $c = c(s,\|\theta_0\|_{L^\infty}, \|\theta_0\|_s, \kappa)$ independent of $\mathbf{q}, \xi, \tilde{\xi}$ and $t^* > 0$. \end{itemize} \end{Prop} Before turning to the details of the proof let us recall some useful inequalities. Firstly the Sobolev embedding theorem in dimension $d = 2$ is given as \begin{align} \| g \|_{L^p} \leq c \| g \|_{H^{r}} \quad \text{ for any } r \geq 1 - \frac{2}{p}, \text{ with } 2 \leq p < \infty, \label{eq:sob:embedding} \end{align} for any $g: \mathbb{T}^2 \to \mathbb{R}$ in $H^r(\mathbb{T}^2)$, where the universal constant $c$ depends only on $p$ and $r$. We also make use of the Leibniz-Kato-Ponce inequality which takes the general form \begin{align} \| \Lambda^{r} (fg) \|_{L^m} \leq C ( \| \Lambda^{r} f\|_{L^{p_1}} \| g \|_{L^{q_1}} + \| f\|_{L^{p_2}} \| \Lambda^{r} g \|_{L^{q_2}} ) \label{eq:Kato:Ponce} \end{align} valid for any $r \geq 0$, $1 < m < \infty$ and $1 < p_i, q_i \leq \infty$ with $m^{-1} = p^{-1}_j + q^{-1}_j$ for $j = 1,2$ and where $C$ is a positive constant depending only on $r, m, p_1, q_1, p_2, q_2$. \begin{proof} The claimed regularity for $\psi^\xi$, $\psi^{\xi,\tilde{\xi}}$ follows from \cref{def:adr_weak} and the forthcoming formal estimates leading to \eqref{eq:psi:xi:Bnd:L2}--\eqref{eq:psi:xi:txiBnd:H2} which can be justified in the context of an appropriate regularization scheme. We begin by showing \eqref{eq:psi:xi:Bnd:L2}. From \eqref{eq:L2:balance}, namely multiplying \eqref{eq:grad:the:xi} by $\psi^\xi$ and integrating we have \begin{align} \frac{1}{2}\frac{d}{dt} \| \psi^\xi\|_0^2 + \kappa \| \nabla \psi^\xi\|_0^2 = - \int_{\mathbb{T}^2} \xi \cdot \nabla \theta(\mathbf{q}) \psi^\xi dx. 
\label{eq:energy:psixi} \end{align} Integrating by parts and using that $\xi$ is divergence free, \begin{align} \left| \int_{\mathbb{T}^2} \xi \cdot \nabla \theta(\mathbf{q}) \psi^\xi dx \right| = \left| \int_{\mathbb{T}^2} \xi \cdot \nabla \psi^\xi \theta(\mathbf{q}) dx \right| \leq \|\theta(\mathbf{q}) \|_{L^\infty} \|\nabla \psi^\xi\|_0 \|\xi\|_0. \label{est:int:advec} \end{align} Invoking the maximum principle as in \eqref{eq:Linfty:bnd}, we obtain that \begin{align} \| \theta(t;\mathbf{q}) \|_{L^\infty} \leq \| \theta_0\|_{L^\infty} \quad \text{ for any } t \geq 0, \label{eq:Max:Prin:Th} \end{align} and hence \begin{align*} \frac{d}{dt} \| \psi^\xi\|_0^2 + \kappa \| \nabla \psi^\xi\|_0^2 \leq c\|\xi\|_0^2. \end{align*} This immediately implies the first estimate \eqref{eq:psi:xi:Bnd:L2}. For showing \eqref{eq:psi:xi:Bnd:L2:b}, we estimate \eqref{est:int:advec} differently, namely \begin{align*} \left| \int_{\mathbb{T}^2} \xi \cdot \nabla \theta(\mathbf{q}) \psi^\xi dx \right| \leq \|\xi\|_{L^p} \|\nabla \theta(\mathbf{q}) \|_0 \|\psi^\xi\|_{L^q} \end{align*} with $1 < p,q < \infty$ such that $\frac{1}{p} + \frac{1}{q} = \frac{1}{2}$. With the Sobolev inequality \eqref{eq:sob:embedding} and noting that $q \to 2$ when $p \to \infty$ we can find $p$ and $q$ in this range such that \begin{align*} \left| \int_{\mathbb{T}^2} \xi \cdot \nabla \theta(\mathbf{q}) \psi^\xi dx \right| & \leq \|\xi\|_s \|\nabla \theta(\mathbf{q}) \|_0 \|\nabla \psi^\xi\|_0 \\ & \leq \frac{\kappa}{2} \|\nabla \psi^\xi\|_0^2 + c \|\xi\|_s^2 \|\nabla \theta(\mathbf{q}) \|_0^2, \end{align*} which in combination with \eqref{eq:energy:psixi} yields \begin{align} \frac{d}{dt} \| \psi^\xi\|_0^2 + \kappa \| \nabla \psi^\xi\|_0^2 \leq c\|\xi\|_{s}^2 \|\nabla \theta(\mathbf{q})\|_0^2. \label{ineq:energy:psixi} \end{align} Integrating \eqref{eq:L2:balance} for $f = 0$ with respect to time, we have \begin{align} \sup_{t \leq t^*} \|\theta(\mathbf{q})\|^2_0 + \kappa \int_0^{t^*} \|\nabla \theta(\mathbf{q})\|^2_0 dt \leq \|\theta_0\|^2_0. \label{eq:stupid:L2:bnd} \end{align} Hence from \eqref{ineq:energy:psixi} and \eqref{eq:stupid:L2:bnd}, it follows that \begin{align*} \sup_{t\leq t^*} \|\psi^\xi\|_0^2 + \kappa \int_0^{t^*}\| \nabla \psi^\xi\|_0^2 dt \leq c \|\xi\|_s^2 \int_0^{t^*} \|\nabla \theta(\mathbf{q})\|_0^2 dt \leq c \|\theta_0\|_0^2 \|\xi\|_s^2, \end{align*} finishing the proof of \eqref{eq:psi:xi:Bnd:L2:b}. Turning to the $H^s(\mathbb{T}^2)$ estimates, we refer to \eqref{eq:Hs:balance} which translates to \begin{align} \frac{d}{dt} \| \psi^\xi \|^2_s + \kappa \| \nabla \psi^\xi \|_s^2 \leq c\| \psi^\xi \|^2_s \| \mathbf{q}\|^{a}_s - 2 \int \Lambda^s (\xi \cdot \nabla \theta(\mathbf{q})) \Lambda^s \psi^\xi \, dx. \label{eq:Hs:inequal:psi:xi} \end{align} Invoking H\"older's inequality and the Leibniz bound \eqref{eq:Kato:Ponce} we estimate \begin{align} \left | \int \Lambda^s (\xi \cdot \nabla \theta(\mathbf{q})) \Lambda^s \psi^\xi \, dx \right| \leq c \| \Lambda^s \psi^\xi\|_{L^p} ( \| \Lambda^s \xi \|_0 \|\Lambda^1 \theta(\mathbf{q})\|_{L^q} + \| \xi \|_{L^q} \|\Lambda^{s+1} \theta(\mathbf{q})\|_0) \label{eq:K:P:App:1} \end{align} valid whenever $1 < p, q < \infty$ satisfy $1 - \tfrac{1}{p} = \tfrac{1}{2} + \tfrac{1}{q}$, i.e. $q = 2p/(p-2)$.
Again with the Sobolev inequality \eqref{eq:sob:embedding} and noting that $q \to 2$ when $p \to \infty$ we can find $p$ and $q$ in this range such that \begin{align} \left | \int \Lambda^s (\xi \cdot \nabla \theta(\mathbf{q})) \Lambda^s \psi^\xi \, dx \right| &\leq c \| \Lambda^{s+1} \psi^\xi\|_0 \| \Lambda^s \xi \|_0 \|\Lambda^{s+1} \theta(\mathbf{q})\|_0 \notag\\ &\leq \frac{\kappa}{4} \| \Lambda^{s+1} \psi^\xi\|_0^2 + c\| \Lambda^s \xi \|_0^2 \|\Lambda^{s+1} \theta(\mathbf{q})\|_0^2. \label{eq:K:P:App:2} \end{align} Combining this bound with \eqref{eq:Hs:inequal:psi:xi} yields the inequality \begin{align} \frac{d}{dt} \| \psi^\xi \|^2_s+ \frac{\kappa}{2} \| \nabla \psi^\xi \|_s^2 \leq c\| \psi^\xi \|^2_s \| \mathbf{q}\|^{a}_s + c \| \xi \|_s^2 \|\theta(\mathbf{q})\|_{s+1}^2 \label{eq:psi:xi:HS:est:1} \end{align} so that with the Gronwall inequality we obtain \begin{align*} \sup_{r \leq t^*} \| \psi^\xi \|^2_s \leq \|\xi\|_s^2 \exp(c t^* \|\mathbf{q}\|_s^{a}) \int_0^{t^*} \|\theta(\mathbf{q})\|_{s+1}^2 dt \end{align*} A second application of \eqref{eq:Hs:balance}, this time with $f = 0$, yields \begin{align} \kappa \int_0^{t^*} \|\theta(\mathbf{q})\|_{s+1}^2 dt &\leq c t^* \| \mathbf{q}\|^{a}_s \sup_{t \leq t^*}\|\theta(\mathbf{q})\|_{s}^2 \leq c t^* \| \mathbf{q}\|^{a}_s \exp( ct^*\|\mathbf{q}\|_s^{a}) \|\theta_0\|_s^2 \notag\\ &\leq c \exp( c t^*\|\mathbf{q}\|_s^{a}) \|\theta_0\|_s^2. \label{eq:psi:xi:HS:est:2} \end{align} Combining the previous two bounds we find, for any $t^* \geq 0$, \begin{align} \sup_{t \leq t^*}\| \psi^\xi\|_s^2 \leq c \exp(c t^* \|\mathbf{q}\|_s^{a}) \|\xi\|_s^2 \|\theta_0\|_s^2. \label{eq:psi:xi:HS:est:3} \end{align} Integrating \eqref{eq:psi:xi:HS:est:1} in time and invoking \eqref{eq:psi:xi:HS:est:2}, \eqref{eq:psi:xi:HS:est:3} \begin{align*} \kappa \int_0^{t^*} \| \nabla \psi^\xi \|_s^2 dt \leq c t^* \sup_{t \leq t^*} \| \psi^\xi \|^2_s \| \mathbf{q}\|^{a}_s + c \| \xi \|_s^2 \int_0^{t^*}\|\theta(\mathbf{q})\|_{s+1}^2 dt \leq c \|\theta_0\|_s^2 \|\xi\|_s^2 \exp(ct^* \|\mathbf{q}\|_s^{a}) \end{align*} and hence we now obtain \eqref{eq:psi:xi:Bnd:Hs}. We next provide estimates for $\psi^{\xi, \tilde{\xi}}$. As before we begin by addressing the $L^2$ case, namely \eqref{eq:psi:xi:txiBnd:L2}. We take the inner product in $L^2$ of \eqref{eq:grad:the:xitxi} with $\psi^{\xi, \tilde{\xi}}$ and integrate to obtain, as in \eqref{eq:L2:balance}, \begin{align} \frac{1}{2}\frac{d}{dt} \| \psi^{\xi, \tilde{\xi}}\|_0^2 + \kappa \| \nabla \psi^{\xi, \tilde{\xi}}\|_0^2 = - \int \tilde{\xi} \cdot \nabla \psi^\xi \psi^{\xi, \tilde{\xi}} - \int \xi \cdot \nabla \psi^{\tilde{\xi}} \psi^{\xi, \tilde{\xi}} := I. \label{eq:L2:xitxi:en:bal} \end{align} Integrating by parts and using H\"older's inequality the right hand side is estimated as \begin{align*} |I| \leq (\|\xi\|_{L^p} + \| \tilde{\xi}\|_{L^p}) ( \|\psi^{\xi}\|_{L^q} + \| \psi^{\tilde{\xi}} \|_{L^q}) \|\nabla\psi^{\xi, \tilde{\xi}}\|_0 \end{align*} for $p^{-1} + q^{-1} =2^{-1}$. Choosing $p$, $q$ appropriately and then applying the Sobolev embedding, \eqref{eq:sob:embedding}, we find \begin{align*} |I| \leq (\|\xi\|_s + \| \tilde{\xi}\|_s) ( \|\psi^{\xi}\|_{1} + \| \psi^{\tilde{\xi}} \|_{1}) \|\nabla\psi^{\xi, \tilde{\xi}}\|_0 \leq c(\|\xi\|_s^2 + \| \tilde{\xi}\|_s^2) ( \|\psi^{\xi}\|_1^2 + \| \psi^{\tilde{\xi}} \|_1^2) + \frac{\kappa}{2} \|\nabla\psi^{\xi, \tilde{\xi}}\|_0^2. 
\end{align*} Hence, using this bound with \eqref{eq:L2:xitxi:en:bal} and then applying \eqref{eq:psi:xi:Bnd:L2:b} we infer \eqref{eq:psi:xi:txiBnd:L2}. We turn finally to the $H^s(\mathbb{T}^2)$ estimates for $\psi^{\xi, \tilde{\xi}}$. Here \eqref{eq:Hs:balance} becomes \begin{align} \frac{d}{dt} \| \psi^{\xi, \tilde{\xi}} \|^2_s + \kappa \| \nabla \psi^{\xi, \tilde{\xi}} \|_s^2 \leq c\| \psi^{\xi, \tilde{\xi}} \|^2_s \| \mathbf{q}\|^{a}_s - 2 \int \Lambda^s (\tilde{\xi} \cdot \nabla \psi^\xi +\xi \cdot \nabla \psi^{\tilde{\xi}}) \Lambda^s \psi^{\xi, \tilde{\xi}} \, dx. \label{eq:psi:xi:txi:Hs:bnd:1} \end{align} Estimating the last term above in a similar fashion as in \eqref{eq:K:P:App:2} leads to \begin{align} &\left| \int \Lambda^s (\tilde{\xi} \cdot \nabla \psi^\xi +\xi \cdot \nabla \psi^{\tilde{\xi}}) \Lambda^s \psi^{\xi, \tilde{\xi}} \, dx \right| \leq \frac{\kappa}{2} \| \psi^{\xi, \tilde{\xi}} \|_{s+1}^2 + c(\|\xi\|^2_s + \|\tilde{\xi}\|_s^2) (\|\psi^\xi\|_{s+1}^2 + \| \psi^{\tilde{\xi}}\|_{s+1}^2). \label{eq:psi:xi:txi:Hs:bnd:2} \end{align} Combining the previous two bounds \eqref{eq:psi:xi:txi:Hs:bnd:1}, \eqref{eq:psi:xi:txi:Hs:bnd:2} and then making use of the Gronwall inequality and \eqref{eq:psi:xi:Bnd:Hs} we obtain \begin{align} \sup_{r \leq t^*} \| \psi^{\xi, \tilde{\xi}} \|^2_s \leq& c(\|\xi\|^2_s + \|\tilde{\xi}\|_s^2) \exp( ct^* \| \mathbf{q}\|^{a}_s ) \int_0^{t^*} (\|\psi^\xi\|_{s+1}^2 + \|\psi^{\tilde{\xi}}\|_{s+1}^2) dt \notag\\ \leq& c(\|\xi\|^4_s + \|\tilde{\xi}\|_s^4) \exp( c t^* \| \mathbf{q}\|^{a}_s ), \notag \end{align} which establishes the final bound, \eqref{eq:psi:xi:txiBnd:H2}, completing the proof. \end{proof} \subsubsection{Bounds on the Potential $U$ and its Derivatives} \label{sec:d:d2:pot:bnds} With these preliminary bounds on \eqref{eq:ad:forced} and hence \cref{prop:grad:xi:apriori} in hand we turn to provide estimates for $U$ defined as in \eqref{eq:post:passive:scal}. Recall that we seek to determine the extent to which \cref{B123}, \cref{ass:integrability:cond} apply to the classes of potentials $U$ which arise in this example, namely \eqref{eq:pot:passive:scal:g:n}, subject to conditions on the observation operator \eqref{eq:gen:linear:form}. Of course, since $U$ is positive, \cref{ass:integrability:cond} holds regardless of our assumptions on $\mathcal{O}$. Regarding the assumptions on $\mathcal{O}$ we consider the following three situations. Fix an observation time window $t^* > 0$. Firstly, in the case of spectral observations or that of spatial (volumetric) averages, we may suppose \begin{align} |\mathcal{O}( \phi)| \leq c_0 \sup_{t \leq t^*} \| \phi(t)\|_0 \label{eq:obs:cond:spec} \end{align} for $\phi \in C([0,t^*]; L^2(\mathbb{T}^2))$. On the other hand, the case of pointwise spatial-temporal measurement yields the condition: \begin{align} |\mathcal{O}( \phi)| \leq c_0 \sup_{t \leq t^*} \|\phi(t)\|_{L^\infty} \label{eq:obs:cond:node} \end{align} for $\phi \in C([0,t^*] \times \mathbb{T}^2)$. Finally, for estimates involving gradients or other derivatives of $\phi$ we assume that, for some $s > 0$, \begin{align} |\mathcal{O}( \phi)| \leq c_0 \sup_{t \leq t^*} \|\phi(t)\|_{H^s} \label{eq:obs:cond:grad} \end{align} valid for $\phi \in C([0,t^*]; H^s(\mathbb{T}^2))$.
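To fix ideas, we note some concrete operators satisfying these conditions (the following choices are merely illustrative on our part). Given times $t_1, \ldots, t_m \leq t^*$, $L^2$-normalized Fourier modes $e_{k_1}, \ldots, e_{k_m}$ and points $x_1, \ldots, x_m \in \mathbb{T}^2$, the maps
\begin{align*}
\mathcal{O}(\phi) = \big( \langle \phi(t_j), e_{k_j} \rangle \big)_{j=1}^m \quad \mbox{ and } \quad \mathcal{O}(\phi) = \big( \phi(t_j, x_j) \big)_{j=1}^m
\end{align*}
satisfy \eqref{eq:obs:cond:spec} and \eqref{eq:obs:cond:node}, respectively, with $c_0 = \sqrt{m}$, using the Cauchy-Schwarz inequality in the first case. Similarly, observing local averages $\langle \nabla \phi(t_j), \chi_j \rangle$ against fixed normalized window functions $\chi_j \in L^2(\mathbb{T}^2)$ falls under \eqref{eq:obs:cond:grad} with $s = 1$.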
Let us begin with estimates on $DU$ and $D^2U$ in negative Sobolev spaces, which in turn yield the conditions in \cref{B123} on the $\mathbb{H}_\gamma$ spaces, \eqref{def:Hgamma}, defined relative to a covariance operator $\mathcal{C}$ of the Gaussian prior $\mu_0$ in \eqref{eq:post:passive:scal}. \begin{Prop} \label{prop:DU:DsqU:Bnds} Let $U$ be defined as in \eqref{eq:pot:passive:scal:g:n} for a fixed $\mathcal{Y} \in \mathbb{R}^m$ and $\Gamma$ a symmetric strictly positive definite matrix. \begin{itemize} \item[(i)] When $\mathcal{O}$ satisfies \eqref{eq:obs:cond:spec}, $U$ is twice Fr\'echet differentiable in $H^{s'}(\mathbb{T}^2)$ for any $s' > 0$. In this case, for any $s' \geq 0$, \begin{align} \| D U(\mathbf{q})\|_{-s'} \leq M_1 < \infty \label{eq:Grad:bnd:spec} \end{align} for a constant $M_1 = M_1(s', \kappa, t^*, \theta_0, c_0, \mathcal{Y}, \Gamma)$ which is independent of $\mathbf{q}$.\footnote{Note furthermore that $M_1$ is independent of $t^*$ in the case when $s' > 0$, cf. \eqref{eq:psi:xi:Bnd:L2}, \eqref{eq:psi:xi:Bnd:L2:b}.} Furthermore, assuming now that $s' > 0$ we have \begin{align} \| D^2 U(\mathbf{q})\|_{\mathcal{L}_2(H^{s'}(\mathbb{T}^2))} \leq M_2 < \infty, \label{eq:Hess:bnd:spec} \end{align} where $\|\cdot\|_{\mathcal{L}_2(H^{s'}(\mathbb{T}^2))}$ denotes the standard operator norm of a real-valued bilinear operator on $H^{s'}(\mathbb{T}^2) \times H^{s'}(\mathbb{T}^2)$ (see \eqref{eq:C:gamma:Hess:U}), and $M_2 = M_2(s', \kappa, \theta_0, c_0, \mathcal{Y}, \Gamma)$ is a constant independent of $\mathbf{q}$. \item[(ii)] In the case \eqref{eq:obs:cond:grad} for $\mathcal{O}$ we have once again that $U$ is twice Fr\'echet differentiable in $H^{s}(\mathbb{T}^2)$ for the given value of $s > 0$ in \eqref{eq:obs:cond:grad}. Here, for any $s' \geq s$, \begin{align} \| D U(\mathbf{q})\|_{-s'} \leq M \exp(c \|\mathbf{q}\|_{s'}^{a}) \label{eq:Grad:bnd:point} \end{align} and \begin{align} \| D^2 U(\mathbf{q})\|_{\mathcal{L}_2(H^{s'}(\mathbb{T}^2))} \leq M \exp(c \|\mathbf{q}\|_{s'}^{a}) \label{eq:Hess:bnd:point} \end{align} where $c = c(s', \kappa, t^*, \theta_0, c_0, \mathcal{Y}, \Gamma)$, $M = M(s', \kappa, t^*, \theta_0, c_0, \mathcal{Y}, \Gamma)$ are independent of $\mathbf{q}$ and $a > 0$ is precisely the constant appearing in \eqref{eq:Hs:balance}. \item[(iii)] Finally under the assumption that $\mathcal{O}$ obeys \eqref{eq:obs:cond:node}, $U$ is twice Fr\'echet differentiable in $H^{s'}(\mathbb{T}^2)$ for any $s' > 1$. In this case, when $s' > 1$, we again have the bounds \eqref{eq:Grad:bnd:point}, \eqref{eq:Hess:bnd:point}. \end{itemize} \end{Prop} \begin{proof} We start with the proof of \eqref{eq:Grad:bnd:spec}. Notice that, referring back to \eqref{eq:DD:U:xi} and using the condition \eqref{eq:obs:cond:spec}, we have \begin{align*} |U^{\xi} (\mathbf{q})| \leq c (1 + \sup_{t \leq t^*} \| \theta(\mathbf{q})\|_0) \cdot \sup_{t \leq t^*} \| \psi^\xi(\mathbf{q}) \|_0, \end{align*} for any $\mathbf{q} \in L^2(\mathbb{T}^2)$, $\xi \in H^{s'}(\mathbb{T}^2)$ and $c = c(\Gamma^{-1/2}, \mathcal{Y}, c_0)$. Observe that for any $s' \geq 0$ we have \begin{align} \| D U(\mathbf{q}) \|_{-s'} = \sup_{\|\xi \|_{s'} = 1} |U^\xi(\mathbf{q})|. \label{eq:C:gamma:grad:U} \end{align} Thus, invoking the bounds \eqref{eq:stupid:L2:bnd}, \eqref{eq:psi:xi:Bnd:L2} when $s' = 0$ or \eqref{eq:psi:xi:Bnd:L2:b} for the case $s' > 0$, we obtain \eqref{eq:Grad:bnd:spec}. We turn next to the proof of \eqref{eq:Hess:bnd:spec}.
In this case, working from \eqref{eq:DD:U:xi:txi} and again making use of the condition \eqref{eq:obs:cond:spec}, \begin{align*} |U^{\xi, \tilde{\xi}} (\mathbf{q})| \leq c \sup_{t \leq t^*} \| \psi^{\tilde{\xi}}(\mathbf{q})\|_0 \cdot \sup_{t \leq t^*} \| \psi^{\xi}(\mathbf{q}) \|_0 + c (1 + \sup_{t \leq t^*} \| \theta(\mathbf{q})\|_0 ) \cdot \sup_{t \leq t^*} \| \psi^{\xi, \tilde{\xi}} (\mathbf{q}) \|_0 \end{align*} for any $\xi, \tilde{\xi} \in H^{s'}(\mathbb{T}^2)$, where $c = c(\Gamma^{-1/2}, \mathcal{Y}, c_0)$. Here using \begin{align} \| D^2 U(\mathbf{q}) \|_{\mathcal{L}_2(H^{s'}(\mathbb{T}^2))} = \sup_{\|\xi \|_{s'} = \|\tilde{\xi}\|_{s'} = 1} |U^{\xi, \tilde{\xi}}(\mathbf{q})| \label{eq:C:gamma:Hess:U} \end{align} and the bounds \eqref{eq:stupid:L2:bnd}, \eqref{eq:psi:xi:Bnd:L2:b}, \eqref{eq:psi:xi:txiBnd:L2}, the desired estimate \eqref{eq:Hess:bnd:spec} now follows. We next address \eqref{eq:Grad:bnd:point}, \eqref{eq:Hess:bnd:point}. Here \eqref{eq:DD:U:xi} and \eqref{eq:obs:cond:grad} result in \begin{align} |U^{\xi} (\mathbf{q})| \leq c (1 + \sup_{t \leq t^*} \| \theta(\mathbf{q})\|_s) \cdot \sup_{t \leq t^*} \| \psi^\xi(\mathbf{q}) \|_s \label{eq:U:xi:Hs:Obs:bnd} \end{align} and similarly, with \eqref{eq:DD:U:xi:txi}, \begin{align} |U^{\xi,\tilde{\xi}} (\mathbf{q})| \leq c \sup_{t \leq t^*} \| \psi^{\tilde{\xi}}(\mathbf{q})\|_{s} \cdot \sup_{t \leq t^*} \| \psi^{\xi}(\mathbf{q}) \|_{s} + c (1 + \sup_{t \leq t^*} \| \theta(\mathbf{q})\|_{s} ) \cdot \sup_{t \leq t^*} \| \psi^{\xi, \tilde{\xi}} (\mathbf{q}) \|_{s} \label{eq:U:txi:Hs:Obs:bnd} \end{align} for any $\xi,\tilde{\xi} \in H^{s'}(\mathbb{T}^2)$, $s' \geq s$. Thus, invoking \eqref{eq:Hs:balance} (with $f \equiv 0$), \eqref{eq:psi:xi:Bnd:Hs}, \eqref{eq:psi:xi:txiBnd:H2} with \eqref{eq:C:gamma:grad:U}-\eqref{eq:U:txi:Hs:Obs:bnd}, we obtain \eqref{eq:Grad:bnd:point}, \eqref{eq:Hess:bnd:point}, establishing the second item. Regarding the final item (iii), observe that, in view of \eqref{eq:DD:U:xi}, \eqref{eq:DD:U:xi:txi} and the Sobolev embedding $H^s(\mathbb{T}^2) \subset L^\infty(\mathbb{T}^2)$ valid when $s > 1$, we obtain bounds as in \eqref{eq:U:xi:Hs:Obs:bnd}, \eqref{eq:U:txi:Hs:Obs:bnd} under \eqref{eq:obs:cond:node} for any $s > 1$. We therefore conclude this final item by arguing as in the previous case. The proof is now complete. \end{proof} Drawing upon \cref{prop:DU:DsqU:Bnds} we now derive certain conclusions on the scope of applicability of \cref{B123} to \eqref{eq:post:passive:scal}. For this purpose suppose $\mathcal{C}$ is a symmetric, positive, trace class operator on $L^2(\mathbb{T}^2)$. Following the notations introduced above in \eqref{def:Hgamma} we consider the fractional powers of $\mathcal{C}$ and associated spaces $\mathbb{H}_\gamma$ with norm $|\mathbf{q} |_{\gamma} = \| \mathcal{C}^{-\gamma} \mathbf{q}\|_{0}$ for $\gamma \geq 0$, so that in particular we have the notation $|\mathbf{q}| = \|\mathbf{q}\|_{0}$. We have the following corollary: \begin{Cor} \label{cor:DU:DsqU:Bnd:w:C} Let $\mathcal{C}$ be a symmetric, positive, trace class operator on $L^2(\mathbb{T}^2)$. Assume that, for some $s > 0$ and some $\gamma \in (0,1/2)$, there is a constant $c_1$ such that \begin{align} \| \mathbf{q}\|_s \leq c_1 |\mathbf{q} |_\gamma = c_1 \|\mathcal{C}^{-\gamma} \mathbf{q}\|_0 \quad \mbox{ for all } \mathbf{q} \in \mathbb{H}_\gamma, \label{eq:C:reg:cond:1} \end{align} so that $\mathbb{H}_\gamma \subset H^s(\mathbb{T}^2)$.
\begin{itemize} \item[(i)] Under the spectral observation assumption, \eqref{eq:obs:cond:spec}, \cref{B123} and \cref{ass:integrability:cond} hold for $U$ and the given $\mathcal{C}$. Additionally, if for this value of $\gamma$, $\mathcal{C}^{1-2\gamma}$ is trace class in the sense of \eqref{eq:Tr:Def:b}, so that \cref{ass:higher:reg:C} holds, then \cref{thm:weak:harris} applies to \eqref{eq:post:passive:scal}. \item[(ii)] Under \eqref{eq:obs:cond:grad}, assuming that \eqref{eq:C:reg:cond:1} holds for the value of $s > 0$ in \eqref{eq:obs:cond:grad}, we have that \begin{align} |DU(\mathbf{q})|_{-\gamma} \leq M \exp( c |\mathbf{q}|_{\gamma}^a) \label{eq:tran:to:gamma:sp:DU} \end{align} and that \begin{align} \|\mathcal{C}^\gamma D^2U(\mathbf{q}) \mathcal{C}^\gamma\|_{\mathcal{L}_2(\mathbb{H}_0)} \leq M \exp( c |\mathbf{q}|_{\gamma}^a) \label{eq:tran:to:gamma:sp:DU:b} \end{align} where $\|\cdot\|_{\mathcal{L}_2(\mathbb{H}_0)}$ here denotes the standard operator norm of a real-valued bilinear operator on $\mathbb{H}_0 \times \mathbb{H}_0$, and again the constants $c = c(s', \kappa, t^*, \theta_0, c_0, c_1, \mathcal{Y}, \Gamma)$, $M = M(s', \kappa, t^*, \theta_0, c_0, c_1, \mathcal{Y}, \Gamma)$ are independent of $\mathbf{q}$ and $a > 0$ is as in \eqref{eq:Hs:balance}. \item[(iii)] In the case \eqref{eq:obs:cond:node}, if \eqref{eq:C:reg:cond:1} holds for some $s > 1$, then we again have the bounds \eqref{eq:tran:to:gamma:sp:DU}, \eqref{eq:tran:to:gamma:sp:DU:b} for the corresponding values of $\gamma$. \end{itemize} \end{Cor} \begin{proof} Regarding the first item we proceed to establish the conditions \eqref{Hess:bound} and \eqref{dissip:new}. Observe that under \eqref{eq:C:reg:cond:1} \begin{align} c_1^2 \| D^2U (\mathbf{q}) \|_{\mathcal{L}_2(H^s(\mathbb{T}^2))} \geq \|\mathcal{C}^\gamma D^2U(\mathbf{q}) \mathcal{C}^\gamma \|_{\mathcal{L}_2(\mathbb{H}_0)} \label{eq:obv:upp:bnd:1} \end{align} so that with \eqref{eq:Hess:bnd:spec} we infer \eqref{Hess:bound}. For \eqref{dissip:new} we demonstrate the stronger condition \eqref{eq:B3a}. Again, due to \eqref{eq:C:reg:cond:1} we have \begin{align} c_1 \| D U(\mathbf{q})\|_{-s} \geq | D U(\mathbf{q}) |_{-\gamma} \label{eq:obv:upp:bnd:2} \end{align} so that \eqref{eq:B3a} follows from \eqref{eq:Grad:bnd:spec}. Regarding the second and third items we simply apply \eqref{eq:obv:upp:bnd:1}, \eqref{eq:obv:upp:bnd:2} now in combination with \eqref{eq:Grad:bnd:point} and \eqref{eq:Hess:bnd:point}. The proof is complete. \end{proof} \begin{Rmk} \label{rmk:crux:matter} Let $A$ be the Stokes operator in dimension $2$ with periodic boundary conditions. Of course for any given $s > 0$ the condition \eqref{eq:C:reg:cond:1} is fulfilled when $\mathcal{C} = A^{-\kappa/2}$ for any $\kappa$ such that $\kappa \geq s/\gamma$. Here note, in regards to \cref{ass:higher:reg:C}, that $\mathcal{C} = A^{-\kappa/2}$ has the eigenvalues $\lambda_j \approx j^{-\kappa/2}$. Thus \eqref{eq:Tr:Def:b} entails the additional requirement $\kappa > 2/(1 - 2 \gamma)$. Note however that the examples considered in \cite{borggaard2018bayesian} involved a covariance $\mathcal{C}$ with exponentially decaying spectrum, so that \eqref{eq:C:reg:cond:1} applies for any $s \geq 0$ and \eqref{eq:Tr:Def:b} for any $0 \leq \gamma < 1/2$. \end{Rmk} \begin{Rmk}[Improved bounds in the time independent case] We expect that improved, $\mathbf{q}$-independent bounds on \eqref{eq:grad:the:xi} and \eqref{eq:grad:the:xitxi} can be achieved through more sophisticated parabolic regularity techniques.
In turn this could improve bounds obtainable for $DU$ and $D^2U$ in the case of point observations \eqref{eq:obs:cond:node}. Whatever the mechanism, we note that the numerical results in \cite{borggaard2018bayesian} suggest good mixing occurs for the Hamiltonian Monte Carlo algorithm in this case of point observations, notwithstanding the fact that our current results do not cover this situation. In this connection it is notable that a global bound on $DU$ and $D^2U$, and hence the conditions for \cref{thm:weak:harris}, can be achieved for point observations in the time-stationary analogue of \eqref{eq:ad:forced} thanks to \cite{Berestycki_etal_09}. Consider \begin{align} \mathbf{q} \cdot \nabla \theta = \kappa \Delta \theta + f \label{eq:AD:stat} \end{align} on $\mathbb{T}^2$ for a given fixed $f: \mathbb{T}^2 \to \mathbb{R}$, $\kappa >0$. We can consider, similarly to above, the statistical inversion problem of recovering a divergence free $\mathbf{q}$ from the sparse observation of the resulting solution $\theta: \mathbb{T}^2 \to \mathbb{R}$. In this case, following the Bayesian approach we again obtain a posterior measure of the form \eqref{eq:post:passive:scal} with $U$ given analogously to \eqref{eq:pot:passive:scal:g:n} in the case of Gaussian observation noise. As previously, the task of estimating $DU$ and $D^2U$ entails suitable estimates for \begin{align*} \mathbf{q} \cdot \nabla \psi^\xi = \kappa \Delta \psi^\xi - \xi \cdot \nabla \theta(\mathbf{q}), \end{align*} and \begin{align*} \mathbf{q} \cdot \nabla \psi^{\xi, \tilde{\xi}} = \kappa \Delta\psi^{\xi, \tilde{\xi}} - \tilde{\xi} \cdot \nabla \psi^\xi - \xi \cdot \nabla \psi^{\tilde{\xi}}, \end{align*} over suitable directions $\xi, \tilde{\xi}$. Suppose that $\phi$ obeys \begin{align} \mathbf{q} \cdot \nabla \phi = \kappa \Delta \phi + g \label{eq:gen:AD:stat:eqn} \end{align} for some divergence free $\mathbf{q}: \mathbb{T}^2 \to \mathbb{R}^2$ and $g:\mathbb{T}^2 \to \mathbb{R}$. According to \cite[Lemma 1.3]{Berestycki_etal_09} we have that\footnote{The result \cite{Berestycki_etal_09} is stated for \eqref{eq:AD:stat} supplemented with Dirichlet boundary conditions, but upon inspecting the proof it is clear that this bound also applies in the spatially periodic case.} \begin{align} \|\phi\|_{L^\infty} \leq c \| g \|_{L^p} \label{eq:int:bnd} \end{align} for any $p >1$ where crucially the constant $c = c(p, \kappa)$ is independent of $\mathbf{q}$. Applying \eqref{eq:int:bnd} and carrying out other standard manipulations we have that \begin{align} \| \theta(\mathbf{q}) \|_{L^\infty}^2 + \|\nabla \theta(\mathbf{q})\|^2_{0} \leq c \|f\|_{0}^2 \label{eq:th:base:bnd} \end{align} for $c = c(\kappa)$ independent of $\mathbf{q}$. As such, a second application of \eqref{eq:int:bnd}, the Sobolev embedding \eqref{eq:sob:embedding} and \eqref{eq:th:base:bnd} yields \begin{align} \| \psi^{\xi} \|_{L^\infty} \leq c \| \xi \|_{s} \|f\|_{0} \label{eq:phi:stat:xi:2} \end{align} for any $s > 0$ where the constant $c = c(s, \kappa)$ is again independent of $\mathbf{q}$. Moreover, using that $\mathbf{q}$ is divergence free and \eqref{eq:th:base:bnd}, \begin{align} \|\nabla \psi^{\xi} \|_{0} \leq c \|\xi\|_{0} \| f\|_{0} \label{eq:phi:stat:xi:3} \end{align} with $c = c(s,\kappa)$ independent of $\mathbf{q}$. Finally, a further application of \eqref{eq:int:bnd}, now in combination with \eqref{eq:phi:stat:xi:3}, yields \begin{align} \| \psi^{\xi, \tilde{\xi}} \|_{L^\infty} \leq c (\|\xi\|_{s}^2 + \| \tilde{\xi} \|^2_{s}) \label{eq:phi:stat:xi:4} \end{align} for any $s > 0$, where $c = c(s, \kappa)$ does not depend on $\mathbf{q}$.
Thus, arguing as in \cref{prop:grad:xi:apriori} but making use of \eqref{eq:phi:stat:xi:2}, \eqref{eq:phi:stat:xi:4}, we can conclude that whenever \begin{align*} |\mathcal{O}(\phi)| \leq c_0 \|\phi\|_{L^\infty}, \end{align*} bounds as in \eqref{eq:Grad:bnd:spec}, \eqref{eq:Hess:bnd:spec} must hold. \end{Rmk} \section{Outlook} This work provides an illustration of the power and efficacy of the weak Harris theorem as a tool for the analysis of mixing in infinite-dimensional MCMC methods. Specifically, our work addresses a Hilbert space version, from \cite{BePiSaSt2011}, of the Hamiltonian Monte Carlo method. Notwithstanding recent progress in this setting of infinite dimensional MCMC algorithms, the understanding of mixing rates, and with it the optimal choice of algorithmic parameters, remains in its infancy. Let us therefore point out a number of interesting questions remaining to be studied which we plan to address in future work. One immediate avenue concerns the analysis of numerically discretized versions of the HMC algorithm \eqref{eq:PEHMC:kernel:def} which must be used in practice. Here the Metropolization step, which is used to correct for the bias introduced by the discretization of \eqref{eq:dynamics:xxx}, must be accounted for. In a similar vein it would be useful to have error bounds between the adjusted and unadjusted versions of the algorithm. It is also worth noting that there are a number of variations on the infinite dimensional HMC algorithm from \cite{BePiSaSt2011} now available in the literature, whose mixing properties are poorly understood, particularly as we regard these different algorithms in comparative perspective. For example, we note the Second-Order Langevin Hamiltonian (SOLHMC) methods in \cite{ottobre2016} and the Riemannian (geometric) HMC approach developed in \cite{BeByLiGi2017, BeGiShiFaStu2017}. Although the above analysis is a nontrivial first step towards a better understanding of \eqref{eq:HMC:kernel:overview}, one may nevertheless view the time step condition \eqref{eq:time:restrict:basic} as restricting the scope of our analysis to a perturbation of the linear Gaussian case; cf. \cref{rmk:taking:stock:lip_1}. It is notable that a similar small time step condition also appears in all the other recent studies of the HMC algorithm that we are aware of \cite{DurmusEtAl2017,LivingstoneEtAl2019,BoEbZi2018, BoEbPHMC}. We conjecture that for many problems of interest this restriction on $T$ may be far from optimal from the point of view of mixing rates. Indeed this bound on $T$ \eqref{eq:time:restrict:basic} turns on our treatment of the Lyapunov structure in \cref{prop:FL} and on the nudging scheme in \cref{prop:FP:Type:contract}, which could presumably be improved with a more delicate treatment of the Hamiltonian dynamics \eqref{eq:dynamics:xxx}. As a starting point it would be of great interest to find some simple settings in finite dimensions where this could be carried out. As already noted above in the introduction, a primary motivation for considering infinite dimensional MCMC methods concerns the Bayesian approach to PDE inverse problems. While several large scale numerical studies have been carried out for some specific problems, a more systematic gallery of examples on which the performance of algorithms has been experimentally tested would be desirable.
Here our results presented in \cref{sec:bayes:AD} show that the analysis of conditions on the potential $U$ in \eqref{eq:dynamics:xxx}, as arising from the Bayesian approach to PDE inverse problems, can be quite involved. Indeed, in the case of the advection-diffusion problem we consider here, it is not clear that we can obtain a global Hessian bound on $U$ for interesting classes of observations, such as space-time point observations. Thus it would be useful to develop an analysis that only requires $U$ to be locally Lipschitz. More broadly, further examples of PDE inverse problems as found in e.g. \cite{stuart2010inverse} should be analytically studied in this context to obtain a broader sense of the variety of relevant conditions on $U$.
\section{Preliminaries} Let $n\in\mathbb{N}$ and $q=p^h$, with $p$ prime and $h\in\mathbb{N}\setminus\{0\}$. Let PG($n,q$) be the $n$-dimensional Desarguesian projective space over the finite field of order $q$. In line with other articles, we define \[ \theta_n=\frac{q^{n+1}-1}{q-1}\textnormal{,} \] with the extension that $\theta_m=0$ if $m\in\mathbb{Z}\setminus\mathbb{N}$. Denote the set of all points of PG($n,q$) by $\mathcal{P}(n,q)$ and the set of all hyperplanes by $\mathcal{H}(n,q)$. Let $\mathcal V(n,q)$ be the $p$-ary vector space of functions from $\mathcal{P}(n,q)$ to $\mathbb{F}_p$; thus $\mathcal V(n,q)=\mathbb{F}_p^{\mathcal{P}(n,q)}$. Denote by $\boldsymbol{1}$ the function that maps all points to $1$. \begin{definition} Let $v\in \mathcal V(n,q)$. Define the \emph{support} of $v$ as $\supp{v}=\{P\in\mathcal{P}(n,q):v(P)\neq0\}$ and the \emph{weight} of $v$ as $\wt{v}=|\supp{v}|$. We will call all points of $\mathcal{P}(n,q)\setminus\supp{v}$ the \emph{holes} of $v$. \end{definition} We can identify each hyperplane $H\in\mathcal{H}(n,q)$ with the function $H\in \mathcal V(n,q)$ such that \[ H(P)=\begin{cases}1\quad\textnormal{if }P\in H\textnormal{,}\\ 0\quad\textnormal{otherwise.}\end{cases} \] If a hyperplane $H$ is identified as a function, its representation as a vector will be called the \emph{incidence vector} of the hyperplane $H$. It should be clear from the context whether we mean an actual hyperplane or such a function/vector. \begin{definition} The $p$-ary linear code $C_{n-1}(n,q)$ is the subspace of $\mathcal V(n,q)$ generated by $\mathcal{H}(n,q)$, where we interpret the elements of the latter as functions in $\mathcal V(n,q)$. The elements of $C_{n-1}(n,q)$ are called \emph{code words}. \end{definition} Define the \emph{scalar product} of two functions $v,w\in \mathcal V(n,q)$ as \[ v\cdot w=\sum_{P\in\mathcal{P}(n,q)}v(P)w(P)\textnormal{.} \] \begin{definition} We define the \emph{dual code} of ${C_{n-1}(n,q)}$ as its orthogonal complement with respect to the above scalar product: \[ {C_{n-1}(n,q)}^\bot=\big\{v\in \mathcal V(n,q):\big(\forall c\in C_{n-1}(n,q)\big)\big(c\cdot v=0\big)\big\}\textnormal{.} \] \end{definition} \begin{definition} Let $v\in \mathcal V(n,q)$ and take a $k$-space $\kappa$ in PG($n,q$). If we let $\kappa$ play the role of PG($k,q$), we can naturally define the \emph{restriction} of $v$ to the space $\kappa$ as the function $\restr{v}{\kappa}\in \mathcal V(k,q)$ restricted to the point set $\mathcal{P}(k,q)\subseteq \mathcal{P}(n,q)$. \end{definition} \begin{definition} Let $s$ be a line in PG($n,q$) and $v\in \mathcal V(n,q)$. If $s$ intersects $\supp{v}$ in $\alpha$ points ($0\leqslant\alpha\leqslant q+1$), we will call $s$ an $\alpha$\emph{-secant} to $\supp{v}$. Furthermore, \begin{itemize} \item if $\alpha\leqslant3$, $s$ will be called a \emph{short secant}, \item if $\alpha\geqslant q-1$, $s$ will be called a \emph{long secant}. \end{itemize} \end{definition} \section{Known results} \subsection{Results in general dimension} The minimum weight of the code $C_{n-1}(n,q)$ equals $\theta_{n-1}$. The code words corresponding to this weight have been characterised. \begin{theorem}[\cite{assmuskey,macwilliams}]\label{MinimumWeight} The code words of $C_{n-1}(n,q)$ having minimum weight are the scalar multiples of the incidence vectors of hyperplanes. \end{theorem} Bagchi and Inamdar \cite[Theorem 1]{bagchiminweight} gave a geometrical proof of this theorem, using \emph{blocking sets}.
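For small parameters these objects are easily generated explicitly, and Theorem \ref{MinimumWeight} can be checked by brute force. The following sketch is an illustration of ours (with $q = p = 3$; it is not part of the cited references): it builds the incidence vectors of all lines of PG($2,3$), spans $C_1(2,3)$ over $\mathbb{F}_3$ by Gauss-Jordan elimination, and verifies that the minimum weight equals $\theta_1 = 4$ and that exactly the $13 \cdot (3-1) = 26$ scalar multiples of lines attain it.
\begin{verbatim}
import numpy as np
from itertools import product

p = 3  # a small prime, so q = p and C_1(2,3) is the code of points and lines

# Points of PG(2,p): one representative per projective point, normalized
# so that the first nonzero coordinate equals 1.
pts = [v for v in product(range(p), repeat=3)
       if any(v) and v[min(i for i in range(3) if v[i])] == 1]
assert len(pts) == p**2 + p + 1                  # theta_2 = 13 points

# Lines of PG(2,p), via their normal vectors n (same representatives):
# the incidence vector has a 1 at each point P with <n, P> = 0 (mod p).
inc = np.array([[int(sum(a * b for a, b in zip(n, P)) % p == 0) for P in pts]
                for n in pts], dtype=np.int64)

def row_basis(M, p):
    # Gauss-Jordan elimination over F_p; returns a basis of the row space
    M, r = M.copy() % p, 0
    for c in range(M.shape[1]):
        piv = next((i for i in range(r, M.shape[0]) if M[i, c]), None)
        if piv is None:
            continue
        M[[r, piv]] = M[[piv, r]]
        M[r] = (M[r] * pow(int(M[r, c]), p - 2, p)) % p  # Fermat inverse
        for i in range(M.shape[0]):
            if i != r and M[i, c]:
                M[i] = (M[i] - M[i, c] * M[r]) % p
        r += 1
    return M[:r]

B = row_basis(inc, p)                            # rank is 7 for p = 3
words = np.array(list(product(range(p), repeat=B.shape[0]))) @ B % p
wt = (words != 0).sum(axis=1)                    # all 3**7 code words
assert wt[wt > 0].min() == p + 1                 # minimum weight = theta_1
assert (wt == p + 1).sum() == (p**2 + p + 1) * (p - 1)   # 26 = lines x scalars
assert not np.any((wt > p + 1) & (wt < 2 * p))   # no weights in ]4, 6[
\end{verbatim}
The final assertion shows, for this small case, that no words of weight five occur, consistent with the gap in the weight spectrum recalled next.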
Recently, Polverino and Zullo \cite{polverino} characterised all code words up to the second smallest (non-zero) weight: \begin{theorem}[{\cite[4]{polverino}}]\label{ResultsPolverinoZullo} Let $q = p^h$ with $p$ prime. \begin{enumerate} \item There are no code words of $C_{n-1}(n,q)$ with weight in the interval $]\theta_{n-1},2q^{n-1}[$. \item The code words of weight $2q^{n-1}$ in $C_{n-1}(n,q)$ are the scalar multiples of the difference of the incidence vectors of two distinct hyperplanes of PG($n,q$). \end{enumerate} \end{theorem} So far, Theorem \ref{ResultsPolverinoZullo} summarises the best results known concerning the characterisation of small weight code words in $C_{n-1}(n,q)$ in case $n\geqslant3$. \bigskip As a final note, we keep the following lemmata in mind. \begin{lemma}[{\cite[Chapter $6$]{assmuskey}, \cite[Lemma $2$]{polverino}}]\label{SumOfCoefficients} Let $c\in C_{n-1}(n,q)$, $c=\sum_i\alpha_iH_i$ for some $\alpha_i\in\mathbb{F}_p\setminus\{0\}$ and $H_i\in\mathcal{H}(n,q)$, and let $\kappa$ be a $k$-space of PG($n,q$), $1\leqslant k\leqslant n$. Then $\kappa\cdot c=\sum_i\alpha_i$. \end{lemma} \begin{lemma}[{\cite[Remark 3.1]{polverino}}] \label{InductiveLemma} Let $c\in C_{n-1}(n,q)$ be a code word and $\kappa$ a $k$-space of PG($n,q$), $1\leqslant k\leqslant n$. Then $\restr{c}{\kappa}$ is a code word of $C_{k-1}(k,q)$. \end{lemma} \subsection{Results in the plane} Historically, most of the work done on this topic focuses on the planar case, i.e. the code of points and lines $C_1(2,q)$. Some early results on small weight code words in this particular code were those of Chouinard. In his PhD thesis \cite{chouinardprime}, he proved that, when $q=p$ prime, code words up to weight $2p$ are linear combinations of at most two lines. When $q=9$, he proved that code words having a weight in the interval $]q+1,2q[$ do not exist \cite{chouinard9}.\\ Fack et al.\ \cite{fack} improved the prime case. More specifically, these authors proved that, if $q=p\geqslant11$, all code words of weight up to $2p+\frac{p-1}{2}$ are linear combinations of at most two lines. They cleverly made use of the existence of a Moorhouse base \cite{moorhouse}.\\ The prime case kept on inspiring more mathematicians. Next in line were Bagchi \cite{bagchiweight} on the one hand, and Sz\H onyi and Weiner \cite{szonyi} on the other hand (see Theorem \ref{ResultsSzonyiWeinerPrime}). Bagchi proved the following: \begin{theorem}[{\cite[Theorem $1.1$]{bagchiweight}}]\label{ResultsBagchi} Let $p\geqslant5$. Then, the fourth smallest weight of $C_1(2,p)$ is $3p-3$. The only words of $C_1(2,p)$ of Hamming weight smaller than $3p-3$ are the linear combinations of at most two lines in the plane. \end{theorem} Bagchi knew this bound was sharp, as he discovered a code word of weight $3p-3$ which \emph{cannot} be constructed as a linear combination of at most two lines when $p>3$ \cite{bagchicodeword}. This code word was independently discovered by De Boeck and Vandendriessche \cite{deboeck} as well.
\begin{example}[{\cite{bagchicodeword},\cite[Example $10.3.4$]{deboeck}}]\label{OddCodeword} Choose a coordinate system for PG($2,p$) and let $c$ be a vector of $\mathcal V(2,p)$, $p\neq2$ a prime, such that \[ c(P)=\begin{cases} a\quad&\textnormal{if}\quad P=(0,1,a)\textnormal{,}\\ b\quad&\textnormal{if}\quad P=(1,0,b)\textnormal{,}\\ -c\quad&\textnormal{if}\quad P=(1,1,c)\textnormal{,}\\ 0\quad&\textnormal{otherwise.} \end{cases} \] Remark that $\supp{c}$ is covered by the three concurrent lines $m: X_0=0$, $m': X_1=0$ and $m'': X_0=X_1$. \end{example} The proof of $c$ being a code word of $C_1(2,p)$ relies on proving that $c$ belongs to ${C_1(2,p)}^\bot\subseteq C_1(2,p)$. As each of the three lines $m$, $m'$ and $m''$ contains $p-1$ points with pairwise different, non-zero values, it is easy to see that such a code word can never be written as a linear combination of less than $p-1$ different lines.\\ As noted by Sz\H onyi and Weiner \cite{szonyi}, the above example can be generalised as follows: \begin{example}({\cite[Example $4.7$]{szonyi}})\label{OddCodewordGen} Let $c$ be the code word in Example \ref{OddCodeword}, with corresponding lines $m$, $m'$ and $m''$ considered as incidence vectors. Suppose $\pi$ is an arbitrary collineation of PG($2,p$) and let $\gamma\in\mathbb{F}_p\setminus\{0\}$ and $\lambda,\lambda',\lambda''\in\mathbb{F}_p$. Then \[ d=(\gamma c+\lambda m+\lambda'm'+\lambda''m'')^\pi \] is a code word of weight $3p-3$ or $3p-2$, depending on the value of $\lambda+\lambda'+\lambda''$. \end{example} By construction, it is easy to see that this generalised example has some interesting properties. \begin{proposition}\label{PropOddCodeword} Suppose $d$ is the code word as constructed in Example \ref{OddCodewordGen}. Let $S=(m\cap m'\cap m'')^\pi$. Then \[ \wt{d}=\begin{cases} 3p-3\quad\textnormal{if}\quad d(S)=0\textnormal{,}\\ 3p-2\quad\textnormal{if}\quad d(S)\neq0\textnormal{.} \end{cases} \] \end{proposition} For somewhat larger values of $p$, Sz\H onyi and Weiner \cite{szonyi} improved Bagchi's result: \begin{theorem}[{\cite[Theorem $4.8$ and Corollary $4.10$]{szonyi}}]\label{ResultsSzonyiWeinerPrime} Let $c$ be a code word of $C_1(2,p)$, $p>17$ prime. If $\wt{c}\leqslant\max\{3p+1,4p-22\}$, then $c$ is either the linear combination of at most three lines or given by Example \ref{OddCodewordGen}. \end{theorem} The same authors have proven the following results for $q$ not prime, proving for large values of $q$ that the code word described in Example \ref{OddCodewordGen} can only exist when $q$ is prime. \begin{theorem}[{\cite[Theorem $4.3$]{szonyi}}]\label{ResultsSzonyiWeinerNonPrime} Let $c$ be a code word of $C_1(2,q)$, with $27 < q$, $q = p^h$, $p$ prime. If \begin{itemize} \item $\wt{c}<(\lfloor\sqrt{q}\rfloor+1)(q+1-\lfloor\sqrt{q}\rfloor)$, when $2<h$, or \item $\wt{c}<\frac{(p-1)(p-4)(p^2+1)}{2p-1}$, when $h=2$, \end{itemize} then $c$ is a linear combination of exactly $\big\lceil\frac{\wt{c}}{q+1}\big\rceil$ different lines. \end{theorem} We can now summarise these results concerning $C_1(2,q)$ in one corollary: \begin{corollary}\label{PlaneResults} Let $c$ be a code word of $C_1(2,q)$, with $q=p^h$, $p$ prime, and $q\notin\{8,9,16,25,27,49\}$. \begin{itemize} \item If $\wt{c}\leqslant3q-4$, then $c$ is a linear combination of at most two lines. \item If $\wt{c}\leqslant3q+1$ and $q=121$, then $c$ is a linear combination of at most three lines. 
\item If $\wt{c}\leqslant\max\{3q+1,4q-22\}$ and $q>17$, $q\neq121$, then $c$ is a linear combination of at most three lines or given by Example \ref{OddCodewordGen}. \end{itemize} \end{corollary} \begin{proof} If $q\leqslant4$, then $3q-4\leqslant2q$ and we can use Theorem \ref{ResultsPolverinoZullo}. If $q>4$ and $q$ is prime, the proof immediately follows from Theorem \ref{ResultsBagchi} and Theorem \ref{ResultsSzonyiWeinerPrime}.\\ Suppose $q>4$ is not prime. Then, by assumption, $q>27$, which means that $\max\{3q+1,4q-22\}=4q-22$. To apply Theorem \ref{ResultsSzonyiWeinerNonPrime}, we only have to check the weight assumptions. One can verify that \begin{itemize} \item $4q-22 < (\lfloor\sqrt{q}\rfloor+1)(q+1-\lfloor\sqrt{q}\rfloor)$ if $q\geqslant10$, \item $3p^2+1 < \frac{(p-1)(p-4)(p^2+1)}{2p-1}$ if $q=p^2\geqslant121$, \item $4p^2-22 < \frac{(p-1)(p-4)(p^2+1)}{2p-1}$ if $q=p^2\geqslant144$. \end{itemize} We conclude that $c$ is a linear combination of at most $\lceil\frac{4q-22}{q+1}\rceil = 4$ lines. If $c$ is a linear combination of precisely $4$ lines, then its weight is at least $4\cdot\big((q+1)-3\big)=4q-8$, a contradiction. \end{proof} \section{The main theorem} Throughout this section, let $n\in\mathbb{N}\setminus\{0,1\}$ and $q=p^h$, with $p$ prime and $h\in\mathbb{N}\setminus\{0\}$. Let $c\in C_{n-1}(n,q)$ be an arbitrary code word. Furthermore, define \[ B_{n,q}=\begin{cases} \qquad\qquad2q^{n-1}\quad&\textnormal{if }q<7\textnormal{ or }q\in\{8,9,16,25,27,49\}\textnormal{,}\\ \Big(3q-\sqrt{6q}-\frac{1}{2}\Big)q^{n-2}\quad&\textnormal{if }q\in\{7,11,13,17\}\textnormal{,}\\ \Big(3q-\sqrt{6q}+\frac{9}{2}\Big)q^{n-2}\quad&\textnormal{if }q\in\{19,121\}\textnormal{,}\\ \Big(4q-4\sqrt{q}-\frac{25}{2}\Big)q^{n-2}\quad&\textnormal{if }q\in\{29,31,32\}\textnormal{,}\\ \Big(4q-\sqrt{8q}-\frac{33}{2}\Big)q^{n-2}\quad&\textnormal{otherwise.}\\ \end{cases} \] This will be the assumed upper bound on the weight of $c$. By Theorem \ref{ResultsPolverinoZullo}, we can always assume that $q\geqslant7$ and $q\notin\{8,9,16,25,27,49\}$. \subsection{Preliminaries} Using Corollary \ref{PlaneResults} and Lemma \ref{InductiveLemma}, we can distinguish several types of small weight code words: \begin{definition}\label{DefPlane} Let $c$ be a code word, and $\pi$ a plane. We will call $\restr{c}{\pi}$ \begin{itemize} \item a \emph{code word of type} $T_w$ if $\restr{c}{\pi}$ is a linear combination of at most two lines, with $w$ the weight of $\restr{c}{\pi}$. \item a \emph{code word of type} ${T^{\textnormal{odd}}}$ if $\restr{c}{\pi}$ is a code word as described in Example \ref{OddCodewordGen}. \item a \emph{code word of type} ${T^{\boldsymbol{\triangle}}}$ if $\restr{c}{\pi}$ is a linear combination of three nonconcurrent lines. \item a \emph{code word of type} ${T^{\bigstar}}$ if $\restr{c}{\pi}$ is a linear combination of three concurrent lines. \item a \emph{code word of type} $\boldsymbol{\mathcal{T}}=\{{T_0},{T_{q+1}},{T_{2q}},{T_{2q+1}},{T^{\textnormal{odd}}},{T^{\boldsymbol{\triangle}}},{T^{\bigstar}}\}$ if $\restr{c}{\pi}$ is a code word of one of the types mentioned above. \item a \emph{code word of type} $\boldsymbol{\mathcal{O}}$ if $\restr{c}{\pi}$ is \emph{not} a code word of one of the types mentioned above. \end{itemize} We will often make no distinction between the code word $\restr{c}{\pi}$ and the plane $\pi$: if $\restr{c}{\pi}$ is a code word of a certain type $T$, we will call $\pi$ a \emph{plane of type} $T$. 
\end{definition} \begin{proposition}\label{LinesIncharacterisedPlanes} If $\pi$ is a plane of type $\boldsymbol{\mathcal{T}}$ in PG($n,q$), then all lines of $\pi$ are either short or long secants to $\supp{c}$. If the type of $\pi$ is an element of $\{T_0,T_{q+1},T_{2q},T_{2q+1}\}$, then all lines intersect $\supp{c}$ in at most $2$ or in at least $q$ points.\qed \end{proposition} The following is a generalisation of Definition \ref{DefPlane} to arbitrary dimension. \begin{definition}\label{DefHyperplane} Let $\Pi$ be a $k$-space of PG($n,q$), $2\leqslant k\leqslant n$. We will call the code word $\restr{c}{\Pi}$ \emph{a code word of type} $T\in\boldsymbol{\mathcal{T}}$ if the following is true: \begin{itemize} \item there exists a $(k-3)$-space $\kappa$ in $\Pi$ such that $\restr{c}{\kappa}$ is a scalar multiple of $\boldsymbol{1}$. \item there exists a plane $\pi$ in $\Pi$ of type $T$, disjoint to $\kappa$. \item for all points $P\in\Pi\setminus\kappa$, $c(P)=c\big(\spann{\kappa}{P}\cap\pi\big)$. \end{itemize} If $\restr{c}{\Pi}$ is a code word of type $T$, we will often call the space $\Pi$ a space of type $T$ as well. If the type $T\in\boldsymbol{\mathcal{T}}$ is not known, we will call the code word $\restr{c}{\Pi}$ (or the space $\Pi$) \emph{of type} $\boldsymbol{\mathcal{T}}$. Remark that, if $k=2$, the above definition coincides with Definition \ref{DefPlane}. \end{definition} The upcoming theorem is the main theorem of this article, which is an improvement of Theorem \ref{ResultsPolverinoZullo} when $q\geqslant7$ and $q\notin\{8,9,16,25,27,49\}$: \begin{theorem}\label{THETHEOREM} Let $c$ be a code word of $C_{n-1}(n,q)$, with $n\geqslant3$, $q$ a prime power and $\wt{c}\leqslant B_{n,q}$. Then $c$ can be written as a linear combination of incidence vectors of hyperplanes through a fixed $(n-3)$-space. \end{theorem} \begin{corollary}\label{THETHEOREMCOR} Let $c$ be a code word of $C_{n-1}(n,q)$, with $n\geqslant3$, $q$ a prime power and $\wt{c}\leqslant B_{n,q}$. Then $c$ is a code word of type $\boldsymbol{\mathcal{T}}$. \end{corollary} \begin{proof} By Theorem \ref{THETHEOREM}, $c$ can be written as a linear combination of the incidence vectors of hyperplanes through a fixed $(n-3)$-space $\kappa$. If all planes disjoint to $\kappa$ are planes of type $\boldsymbol{\mathcal{O}}$, Lemma \ref{LinesAndPlanes} implies that $\wt{c}\geqslant\frac{1}{2}q^{n-2}(q^2-3q-2)$, a contradiction for all $q$. All other properties of Definition \ref{DefHyperplane} can easily be checked. \end{proof} \subsection{The proof} \subsubsection{The weight spectrum concerning lines and planes} In this subsection, we will prove some intermediate results, the first one stating that all lines contain few or many points of $\supp{c}$. \begin{lemma}\label{LinesAndPlanes} Suppose $\wt{c}\leqslant B_{n,q}$. Then \begin{enumerate} \item all lines are either short or long secants to $\supp{c}$. If $q\leqslant17$, then all lines intersect $\supp{c}$ in at most $2$ or in at least $q$ points. \item all planes of type $\boldsymbol{\mathcal{O}}$ contain at least $\frac{1}{2}q(q-1)$ points of $\supp{c}$. 
\end{enumerate} \end{lemma} \begin{proof} First of all, define the values \[ A_q=\begin{cases} 3q-3\quad&\textnormal{if }q\in\{7,11,13,17\}\textnormal{,}\\ 3q+2\quad&\textnormal{if }q\in\{19,121\}\textnormal{,}\\ 4q-21\quad&\textnormal{otherwise,}\\ \end{cases} \quad\textnormal{and}\quad i=\begin{cases} 3\quad&\textnormal{if }q\in\{7,11,13,17\}\textnormal{,}\\ 4\quad&\textnormal{otherwise.}\\ \end{cases} \] Suppose, on the contrary, that $s$ is an $m$-secant to $\supp{c}$, with $i\leqslant m\leqslant q+2-i$. By Proposition \ref{LinesIncharacterisedPlanes} and Corollary \ref{PlaneResults}, all planes through $s$ contain at least $A_q$ points of $\supp{c}$. We get \begin{align} \wt{c}&\geqslant A_q\theta_{n-2}-m(\theta_{n-2}-1)\nonumber\\ \Leftrightarrow m&\geqslant \frac{A_q\theta_{n-2}-\wt{c}}{\theta_{n-2}-1}\label{LowerBoundM}\\ \Rightarrow q+2-i&\geqslant\frac{A_q\theta_{n-2}-\wt{c}}{\theta_{n-2}-1}\textnormal{.}\label{UpperBoundOnLowerBound} \end{align} From \eqref{LowerBoundM} and \eqref{UpperBoundOnLowerBound}, we can conclude that all lines intersect $\supp{c}$ in at most $i-1$ or in at least $\frac{A_q\theta_{n-2}-\wt{c}}{\theta_{n-2}-1}$ points. Let $\pi\supseteq s$ be an arbitrary plane, thus $\wt{\restr{c}{\pi}}\geqslant A_q$, and define $j=\min{\{\wt{\restr{c}{l}}:l\subseteq\pi,\wt{\restr{c}{l}}\geqslant i\}}$. Choose a point $P\in s\cap\supp{c}$. If all other $q$ lines in $\pi$ through $P$ contain at most $i-1$ points of $\supp{c}$, then $\wt{\restr{c}{\pi}}\leqslant (i-2)q+m\leqslant(i-1)q-1<A_q$, a contradiction. Thus, through each point on $s\cap\supp{c}$ we find at least one line in $\pi$, other than $s$, containing at least $j$ points of $\supp{c}$. We find at least $m\geqslant j$ such lines, meaning that \begin{equation}\label{TelescopicSum} \wt{\restr{c}{\pi}}\geqslant j + (j-1) + \dots + 1 = \frac{1}{2}j(j+1)\textnormal{.} \end{equation} This holds for all planes through an $m$-secant with $i\leqslant m\leqslant q+2-i$, in particular for all planes through a $j$-secant in $\pi$. As such, we get \[ \wt{c}\geqslant\bigg(\frac{1}{2}j(j+1)-j\bigg)\theta_{n-2}+j\textnormal{.} \] When combining this result with $j\geqslant\frac{A_q\theta_{n-2}-\wt{c}}{\theta_{n-2}-1}$, we get a condition on $\wt{c}$, eventually leading to $\wt{c}>B_{n,q}$, a contradiction. We refer to Appendix \ref{Appendix} for the arithmetic details.\\ Let $\pi$ be a plane of type $\boldsymbol{\mathcal{O}}$. If no long secant is contained in this plane, $\wt{\restr{c}{\pi}}\leqslant2q+1<A_q$, a contradiction. Repeating the previous arguments, we get the same result as \eqref{TelescopicSum}, for $j\geqslant q-1$. This concludes the proof. \end{proof} Using this result, we can deduce the following. \begin{lemma}\label{q-1Implies3} Suppose $\wt{c}\leqslant B_{n,q}$. If there exists a $(q-1)$-secant to $\supp{c}$, then there exists a $3$-secant to $\supp{c}$ as well. \end{lemma} \begin{proof} Let $s$ be a $(q-1)$-secant and suppose, on the contrary, that no $3$-secants exist. Remark that planes of type $\boldsymbol{\mathcal{T}}$ containing a $(q-1)$-secant always contain a $3$-secant. Hence, by Lemma \ref{LinesAndPlanes}, all planes through $s$ contain at least $\frac{1}{2}q(q-1)$ points of $\supp{c}$. We get \[ B_{n,q}\geqslant\wt{c}\geqslant\bigg(\frac{1}{2}q(q-1)-(q-1)\bigg)\theta_{n-2}+(q-1)\textnormal{,} \] which is a contradiction for all values of $q$. \end{proof} \begin{lemma}\label{Lines2AndQ} Suppose $\wt{c}\leqslant\min\big\{(3q-6)\theta_{n-2}+2,B_{n,q}\big\}$. 
Then all lines intersect $\supp{c}$ in at most $2$ or in at least $q$ points. \end{lemma} \begin{proof} By Lemma \ref{LinesAndPlanes} and Lemma \ref{q-1Implies3}, it suffices to prove that $3$-secants to $\supp{c}$ cannot exist. Suppose there exists a $3$-secant to $\supp{c}$. By Corollary \ref{PlaneResults}, all planes containing this $3$-secant have at least $3q-3$ points in common with $\supp{c}$. This gives us the following contradiction: \[ (3q-6)\theta_{n-2}+2\geqslant \wt{c}\geqslant(3q-3-3)\theta_{n-2}+3\textnormal{.}\qedhere \] \end{proof} \subsubsection{Code words of weight $\boldsymbol{2q^{n-1}+\theta_{n-2}}$}\label{SecThirdWeight} In this section, we will prove Theorem \ref{TwoHyperplaneTheorem}, which essentially states that, if $\wt{c}\leqslant\min\big\{(3q-6)\theta_{n-2}+2,B_{n,q}\big\}$, the code word $c$ corresponds to a linear combination of at most \emph{two} hyperplanes. \begin{lemma}\label{ClassificationOf012QQ+1Secants} Assume that $S$ is a point set in PG($n,q$), $q \geqslant 7$, and every line intersects $S$ in at most $2$ or in at least $q$ points. Then one of the following holds: \begin{enumerate} \item[\textnormal{(1)}] $|S| \leqslant 2q^{n-1} + \theta_{n-2}$. \item[\textnormal{(2)}] The complement of $S$, denoted by $S^c$, is contained in a hyperplane. \end{enumerate} \end{lemma} \begin{proof} We prove this by induction on $n$. Note that the statement is trivial for $n=1$, so assume that $n \geqslant 2$. Furthermore, we can inductively assume that for every hyperplane $\Pi$, either $|S \cap \Pi| \leqslant 2q^{n-2} + \theta_{n-3}$, in which case we call $\Pi$ a \emph{small hyperplane}, or $S^c \cap \Pi$ is contained in an $(n-2)$-subspace of $\Pi$, in which case we call $\Pi$ a \emph{large hyperplane}.\\ \underline{Case $1$: There exist two large hyperplanes $\Pi_1$ and $\Pi_2$, and a point $P \in S \setminus (\Pi_1 \cup \Pi_2)$.} Consider the lines through $P$. At most $q^{n-2}$ of these lines intersect $\Pi_i \setminus \Pi_{3-i}$ in a point of $S^c$, and $\theta_{n-2}$ of these lines intersect $\Pi_1 \cap \Pi_2$. Hence, at least $\theta_{n-1} - 2q^{n-2} - \theta_{n-2} = q^{n-1} - 2q^{n-2}$ of these lines intersect $\Pi_1$ and $\Pi_2$ in distinct points of $S$. As $P \in S$, each of these lines contains at least three points of $S$. Therefore, they must contain at least $q$ points of $S$, thus at least $q-3$ points of $S \setminus\big(\Pi_1 \cup \Pi_2 \cup \{P\}\big)$. Since $\Pi_1 \cup \Pi_2$ contains at least $2q^{n-1}-q^{n-2}$ points of $S$, we know that \[ |S| \geqslant (q^{n-1}-2q^{n-2})(q-3) + (2q^{n-1} - q^{n-2}) + 1 = q^n - 3q^{n-1} + 5q^{n-2} + 1. \] \underline{Case $2$: There exists a small hyperplane $\Pi$ and a point $P \in S^c \setminus\Pi$.} The small hyperplane $\Pi$ must contain at least $\theta_{n-1} - (2q^{n-2} + \theta_{n-3}) = q^{n-1} - q^{n-2}$ points of $S^c$. Every line through $P$ and a point of $\Pi \cap S^c$ intersects $S^c$ in at least 2, thus in at least $q-1$ points of $S^c$. This yields that \[ |S^c| \geqslant (q^{n-1} - q^{n-2})(q-2) + 1 = q^n - 3q^{n-1} + 2q^{n-2} + 1. \] If both cases would occur simultaneously, then \begin{align*} \theta_n = |S| + |S^c| & \geqslant (q^n - 3q^{n-1} + 5q^{n-2} + 1) + (q^n - 3q^{n-1} + 2q^{n-2} + 1) \\ & = 2q^n - 6q^{n-1} + 7q^{n-2} + 2 \textnormal{,} \end{align*} which is a contradiction if $q \geqslant 7$. Note that the existence of three large hyperplanes implies Case $1$, and the existence of two small hyperplanes implies Case $2$. Therefore, exactly one of these cases holds. 
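For completeness, we verify the arithmetic behind the contradiction above, using only $\theta_n = q^n + \theta_{n-1}$ and $\theta_{n-1} < \frac{q^n}{q-1}$: the inequality $2q^n - 6q^{n-1} + 7q^{n-2} + 2 > \theta_n$ is equivalent to
\[
q^n - 6q^{n-1} + 7q^{n-2} + 2 > \theta_{n-1}\textnormal{,}
\]
for which it suffices that $q^2 - 6q + 7 \geqslant \frac{q^2}{q-1}$. As $\frac{q^2}{q-1} \leqslant q+2$ for $q \geqslant 2$ and $q^2 - 6q + 7 - (q+2) = q^2 - 7q + 5 > 0$ for $q \geqslant 7$, this indeed holds whenever $q \geqslant 7$.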
Assume that Case $1$ holds. Hence, Case $2$ cannot hold, so if there exists a small hyperplane, it has to contain the entirety of $S^c$ and the proof is done. As such, we can assume that all hyperplanes are large. Take a hyperplane $\Pi$. If the points of $S^c \cap \Pi$ span an $(n-2)$-space $\Sigma$, then $S^c \subseteq \Sigma \subseteq \Pi$. Indeed, if a point $P \in S^c$ were to lie outside of $\Sigma$, then $\spann{\Sigma}{P}$ would be a (necessarily large) hyperplane, spanned by elements of $S^c$, a contradiction. In this way, we see that either some hyperplane contains all points of $S^c$, or for every hyperplane $\Pi$, $S^c \cap \Pi$ is contained in an $(n-3)$-subspace of $\Pi$. We can now use the same reasoning to prove that either some hyperplane contains all points of $S^c$, or for every hyperplane $\Pi$, $S^c \cap \Pi$ is contained in an $(n-4)$-space. Inductively repeating this process proves the lemma. Assume that Case $2$ holds. Then there are at most two large hyperplanes, otherwise Case $1$ would hold. Consider the set $V = \{(P,\Pi) : P \textnormal{ a point}, \, \Pi \textnormal{ a hyperplane}, \, P \in S\cap\Pi\}$. Counting the elements of $V$ in two ways yields \[ |S| \theta_{n-1} = |V| \leqslant 2 \theta_{n-1} + (\theta_n - 2)( 2q^{n-2} + \theta_{n-3} ). \] Note that the right-hand side is essentially the size of $V$ in case $S$ is the union of two hyperplanes: a direct computation shows that it equals $(2q^{n-1}+\theta_{n-2}) \theta_{n-1} + q^{n-2}(q-1)$. As $q^{n-2}(q-1) < \theta_{n-1}$, this yields $|S| < 2q^{n-1}+\theta_{n-2}+1$ and hence, $|S|$ being an integer, $|S| \leqslant 2q^{n-1}+\theta_{n-2}$.
Since $\supp{c}\neq\emptyset$, there must exist an $(n-2)$-space $\Pi_{n-2}$ intersecting $\supp{c}$ in $q^{n-2}$ or $\theta_{n-2}$ points, such that all hyperplanes through $\Pi_{n-2}$ contain either zero or $q^{n-2}$ points of $\supp{c}\setminus\Pi_{n-2}$. This yields \[ \wt{c}\leqslant\theta_{n-2}+(q+1)q^{n-2} = \theta_{n-1} + q^{n-2} < 2q^{n-1}\textnormal{,} \] a contradiction.\\ So consider a hyperplane $\Pi_{n-1}$, containing more than $2q^{n-2}+\theta_{n-3}$ points of $\supp{c}$. Due to Lemma \ref{ClassificationOf012QQ+1Secants}, $\wt{\restr{c}{\Pi_{n-1}}}\geqslant q^{n-1}$ and the holes of $\Pi_{n-1}$ are contained in an $(n-2)$-space $H_{n-2}$ of $\Pi_{n-1}$. By Lemma \ref{Exists2Secant}, we find a 2-secant $l$ to $\supp{c}$. Let $P$ and $Q$ be the points in $l \cap \supp{c}$ and let $\alpha=c(P)$.\\ \underline{Case $1$: $P, Q \notin \Pi_{n-1}$.} Suppose there is at most one hyperplane of type ${T_{2q}}$ or ${T_{2q+1}}$ through $l$. Fix an $(n-2)$-space $\Pi_{n-2}$ through $l$. By Lemma \ref{ClassificationOf012QQ+1Secants}, at least $q$ hyperplanes through $\Pi_{n-2}$ each contain at least $q^{n-1}$ points of $\supp{c}$, thus \[ \wt{c}\geqslant q^{n-1} + (q-1)\cdot\big(q^{n-1} - \theta_{n-2}\big) = q^n - q^{n-1} + 1\textnormal{,} \] which exceeds the imposed upper bound on $\wt{c}$ for all prime powers $q$, a contradiction.\\ Hence, we can choose a ${T_{2q}}$- or ${T_{2q+1}}$-typed hyperplane $\Sigma_{n-1}$ through $l$, different from the hyperplane $\spann{H_{n-2}}{l}$. Therefore, all holes in $\Sigma_{n-1} \cap \Pi_{n-1}$ are contained in the $(n-3)$-space $\Sigma_{n-1}\cap H_{n-2}$. As $\supp{\restr{c}{\Sigma_{n-1}}}$ is the union or symmetric difference of precisely two $(n-2)$-subspaces and as $\Sigma_{n-1} \cap \Pi_{n-1}$ must be one of these two, the latter contains either $P$ or $Q$, contrary to the assumption of this case.\\ \underline{Case $2$: $P \in \Pi_{n-1}$.} Remark that, due to Lemma \ref{ClassificationOf012QQ+1Secants}, $\wt{c}\leqslant2q^{n-1}+\theta_{n-2}$. From this, we get that there are at least $q^{n-2}$ planes through $l$ containing at most $2q+1$ points of $\supp{c}$. Otherwise, we would have \[ 2q^{n-1}+\theta_{n-2}\geqslant\wt{c}> q^{n-2}\cdot(2q-2)+(\theta_{n-2}-q^{n-2})q^2+2\textnormal{,} \] a contradiction whenever $q>2$.\\ The space $\Pi_{n-1}$ contains $\theta_{n-3}$ planes through a fixed line, so there exists a plane $\pi$ through $l$, not contained in $\Pi_{n-1}$, having at most $2q+1$ points of $\supp{c}$. If $Q \in \Pi_{n-1}$, we could choose another 2-secant lying in such an `external' plane to $\Pi_{n-1}$ and replace $l$ (and $Q$ correspondingly) with this 2-secant. In this way, we may assume that $Q\in\pi\setminus\Pi_{n-1}$. Note that every line through $P$ containing at least two holes of $\Pi_{n-1}$ lies in $H_{n-2}$. Therefore, there are at most $\theta_{n-3}$ such lines through $P$. Every plane through $l$ intersects $\Pi_{n-1}$ in a line through $P$, hence there must be at least $q^{n-2} - \theta_{n-3}$ planes through $l$ of type ${T_{2q}}$ or ${T_{2q+1}}$, resulting in at least $q^{n-2} - \theta_{n-3}$ lines in $\Pi_{n-1}$, through $P$, each containing at least $q$ points all having the same non-zero value $\alpha$ in $c$. This yields at least \[ (q^{n-2} - \theta_{n-3})(q-1) + 1>\frac{1}{2}\theta_{n-1} \] points in $\Pi_{n-1}$ with value $\alpha$.\\ Now suppose, on the contrary, that $c$ is a code word of minimal weight such that $c$ cannot be written as a linear combination of at most two hyperplanes.
Then $\wt{c-\alpha \Pi_{n-1}}<\wt{c}$, thus the code word $c-\alpha \Pi_{n-1}$ is a linear combination of exactly two hyperplanes. As a consequence, $c$ must be a linear combination of precisely three hyperplanes, implying that $\wt{c}\geqslant3(q^{n-1}-q^{n-2})$, contradicting the weight assumptions. \end{proof} \subsubsection{Going higher on the weight spectrum} It will turn out that we can go further than the code words of weight $2q^{n-1}+\theta_{n-2}$. Moreover, we will be able to prove that a code word of weight at most $B_{n,q}$ corresponds to a linear combination of hyperplanes through a fixed $(n-3)$-space (Theorem \ref{THETHEOREM}). \bigskip Due to Theorem \ref{TwoHyperplaneTheorem}, we can assume the following on the weight of the code word $c$: \[ (3q-6)\theta_{n-2}+3\leqslant\wt{c}\leqslant B_{n,q}\textnormal{.} \] As we are mainly interested in the case $n\geqslant3$, the inequality above implies that $q\geqslant29$, which we will keep in mind for the remainder of this section. \begin{lemma}\label{Exist3Secant} Suppose $(3q-6)\theta_{n-2}+3\leqslant\wt{c}\leqslant B_{n,q}$. Then there exists a $3$-secant to $\supp{c}$. \end{lemma} \begin{proof} Suppose that there does not exist a $3$-secant to $\supp{c}$. By Lemma \ref{LinesAndPlanes} and Lemma \ref{q-1Implies3}, all lines intersect $\supp{c}$ in at most $2$ or in at least $q$ points. Applying Lemma \ref{ClassificationOf012QQ+1Secants}, we obtain that $\wt{c}\leqslant2q^{n-1}+\theta_{n-2}$ or $\wt{c}\geqslant q^n$, contradicting our weight assumptions. \end{proof} \begin{lemma}\label{PlanesThrough3Secant} Suppose $(3q-6)\theta_{n-2}+3\leqslant\wt{c} \leqslant B_{n,q}$. Then all planes containing a $3$-secant are planes of type $\boldsymbol{\mathcal{T}}$. \end{lemma} \begin{proof} Suppose that $\sigma$ is a plane of type $\boldsymbol{\mathcal{O}}$ containing a $3$-secant $t$ and suppose that $\Sigma$ is a solid containing $\sigma$. We claim that $\wt{\restr{c}{\Sigma}}\geqslant \frac{1}{4}q^3$.\\ In the first case, suppose that all planes in $\Sigma$ through $t$ are planes of type $\boldsymbol{\mathcal{O}}$. By Lemma \ref{LinesAndPlanes}, \begin{align*} \wt{\restr c \Sigma}&\geqslant\Big(\frac{1}{2}q^2-\frac{1}{2}q-3\Big)(q+1)+3\\ &=\frac{1}{2}q^3-\frac{7}{2}q\geqslant\frac{1}{4}q^3\textnormal{,} \end{align*} the last inequality being valid whenever $q>3$.\\ In the second case, suppose there exists a plane $\pi$ of type $\boldsymbol{\mathcal{T}}$ in $\Sigma$ through $t$. By Corollary \ref{PlaneResults}, as $\pi$ contains a $3$-secant, $\pi$ is either a plane of type ${T^{\textnormal{odd}}}$, type ${T^{\boldsymbol{\triangle}}}$ or type ${T^{\bigstar}}$. Regardless of this type, $\pi$ always contains another $3$-secant $t'$ such that $t\cap t'\notin\supp{c}$.\\ Let $y$ be the number of type-$\boldsymbol{\mathcal{T}}$ planes in $\Sigma$ through $t'$. Remark that such a plane intersects $\sigma$ in at most three points of $\supp{c}$. Indeed, should a $\boldsymbol{\mathcal T}$-typed plane in $\Sigma$ through $t'$ intersect $\sigma$ in at least $4$, thus in at least $q-1$ points (Lemma \ref{LinesAndPlanes}), then one of the three points of $t'\cap\supp{c}$ must lie on this intersection line (as $\pi$ is a plane of type $\boldsymbol{\mathcal{T}}$). But then $t'\cap\sigma\in\supp{c}$, in contradiction with $t\cap t'\notin\supp{c}$. 
In this way, we get \begin{align*} \frac{1}{2}q(q-1)\leqslant\wt{\restr{c}{\sigma}}&\leqslant y\cdot3+(q+1-y)q\\ &=q^2+q-y(q-3)\textnormal{,} \end{align*} which implies $y\leqslant\frac{1}{2}(q+7)$, as $q\geqslant29$.\\ Thus we get that $t'$ is contained in at least $q+1-\frac{1}{2}(q+7)=\frac{1}{2}(q-5)$ planes of type $\boldsymbol{\mathcal{O}}$ (all lying in $\Sigma$). As each $\boldsymbol{\mathcal{T}}$-typed plane in $\Sigma$ through $t'$ contains at least $3q-3$ points of $\supp{c}$, we get \begin{align*} \wt{\restr c \Sigma}&\geqslant\bigg\lceil\frac{1}{2}(q-5)\bigg\rceil\cdot\Big(\frac{1}{2}q(q-1)-3\Big)+\bigg\lfloor\frac{1}{2}(q+7)\bigg\rfloor\cdot(3q-3-3)+3\\ &\geqslant\Big(\frac{1}{2}(q-5)\Big)\cdot\Big(\frac{1}{2}q(q-1)-3\Big)+\Big(\frac{1}{2}(q+6)\Big)\cdot(3q-3-3)+3\\ &=\frac{1}{4}q^3+\frac{23}{4}q-\frac{15}{2}\geqslant\frac{1}{4}q^3\textnormal{.} \end{align*} As the above claim holds for all solids containing $\sigma$, we get \[ \wt{c}\geqslant\theta_{n-3}\Big(\frac{1}{4}q^3\Big)-(\theta_{n-3}-1)(q^2+q+1)\textnormal{.} \] One can easily check this implies $\wt{c}\geqslant B_{n,q}$ for all prime powers $q$, a contradiction. \end{proof} We can generalise the above lemma, which will prove its usefulness when using induction. \begin{lemma}\label{SpacesThrough3Secant} Suppose $(3q-6)\theta_{n-2}+3\leqslant\wt{c}\leqslant B_{n,q}$. Let $\psi$ be a $k$-space, $2\leqslant k<n$, containing a $3$-secant $s$. Then $\wt{\restr{c}{\psi}}\leqslant B_{k,q}$. \end{lemma} \begin{proof} By Lemma \ref{PlanesThrough3Secant}, we know that all planes in $\psi$ through $s$ contain at most $3q+1$ points of $\supp{c}$ (Corollary \ref{PlaneResults}). This implies that $\wt{\restr c \psi}\leqslant\theta_{k-2}(3q+1-3)+3\leqslant B_{k,q}$. \end{proof} Remark that the last inequality in the above proof is the reason why the bound $B_{n,q}$ differs in value for $q\in\{29,31,32\}$. \bigskip We can now present some properties about certain types of subspaces sharing a common $3$-secant. \begin{lemma}\label{OneTypeCat} Suppose $(3q-6)\theta_{n-2}+3\leqslant\wt{c}\leqslant B_{n,q}$. Let $\Pi_1$ and $\Pi_2$ be two $k$-spaces, $2\leqslant k<n$, of type $T_1,T_2\in\{{T^{\textnormal{odd}}},{T^{\boldsymbol{\triangle}}},{T^{\bigstar}}\}$, respectively, having a $3$-secant $s$ in common. Then at least one of the following holds: \begin{enumerate} \item[\textnormal{(1)}] $T_1={T^{\bigstar}}$. \item[\textnormal{(2)}] $T_2={T^{\bigstar}}$. \item[\textnormal{(3)}] $T_1=T_2$. \end{enumerate} Furthermore, if $T_1=T_2$, then $\wt{\restr{c}{\Pi_1}}=\wt{\restr{c}{\Pi_2}}$. \end{lemma} \begin{proof} In each subspace $\Pi_i$, choose a plane $\pi_i$ through $s$, disjoint to the vertex corresponding with the cone $\supp{\restr{c}{\Pi_i}}$. By definition, $\pi_i$ is a plane of type $T_i$. Define $\Sigma=\spann{\pi_1}{\pi_2}$.\\ Furthermore, let $P_\alpha$, $P_\beta$ and $P_\gamma$ be the points in $s\cap\supp{c}$ with corresponding non-zero values $\alpha$, $\beta$ and $\gamma$ in $c$. Let $l_\alpha^{(i)}$, $l_\beta^{(i)}$ and $l_\gamma^{(i)}$ be the unique long secants in $\pi_i$ through $P_\alpha$, $P_\beta$ and $P_\gamma$, respectively ($i=1,2$).\\ \underline{Case $1$: $T_1={T^{\textnormal{odd}}}$ and $T_2={T^{\boldsymbol{\triangle}}}$.} Suppose that $\pi$ is a plane in $\Sigma$ going through $l_\alpha^{(2)}$. Remark that $l_\alpha^{(2)}$ is a long secant, containing $q-1$ points having non-zero value $\alpha$, one point having value $\alpha+\beta$ and one point having value $\alpha+\gamma$.
From this, we know that the plane $\pi$ \emph{cannot} be \begin{itemize} \item a plane of type ${T_0}$, as $\alpha\neq0$. \item a plane of type ${T_{q+1}}$, ${T_{2q}}$, ${T_{2q+1}}$ or ${T^{\bigstar}}$, else $\alpha+\beta=\alpha$ or $\alpha+\gamma=\alpha$. \item a plane of type ${T^{\textnormal{odd}}}$, as $l_\alpha^{(2)}$ contains at least three points with the same value $\alpha$. \item a plane of type ${T^{\boldsymbol{\triangle}}}$, unless $\wt{\restr{c}{\pi}}=\wt{\restr{c}{\pi_2}}$. Indeed, $l_\alpha^{(2)}$ contains two points $l_\alpha^{(2)}\cap l_\beta^{(2)}$ and $l_\alpha^{(2)}\cap l_\gamma^{(2)}$ with corresponding values $\alpha+\beta$ and $\alpha+\gamma$, respectively, unambiguously fixing the weight $\wt{\restr{c}{\pi}}$. \end{itemize} However, $\pi$ can only be a plane of type ${T^{\boldsymbol{\triangle}}}$ in some cases. Suppose that $\pi$ is a plane of type ${T^{\boldsymbol{\triangle}}}$ and suppose that $\pi$ intersects $\pi_1$ in a $3$-secant $t$. One of the points of $t\cap\supp{c}$ is obviously $P_\alpha$, as this point belongs to both $l_\alpha^{(2)}$ and $\pi_1$. The other two points of $t\cap\supp{c}$ lie on $l_\beta^{(1)}$ and $l_\gamma^{(1)}$ and must have corresponding values $\beta$ and $\gamma$, as $\wt{\restr{c}{\pi}}=\wt{\restr{c}{\pi_2}}$. As $\pi_1$ is a plane of type ${T^{\textnormal{odd}}}$, there are only two possibilities for $\pi$ to intersect $\pi_1$, namely when the $\beta$-valued point of $t$ lies on $l_\beta^{(1)}$ (then $\pi=\pi_2$), or when the $\beta$-valued point of $t$ lies on $l_\gamma^{(1)}$. Conclusion: of the at least $q-2$ planes through $l_\alpha^{(2)}$ in $\Sigma$, intersecting $\pi_1$ in a $3$-secant, at least $q-4$ of them cannot be a plane of type ${T^{\boldsymbol{\triangle}}}$, and thus must be planes of type $\boldsymbol{\mathcal{O}}$. In addition, the plane $\spann{l_\alpha^{(1)}}{l_\alpha^{(2)}}$ cannot be a plane of type ${T^{\boldsymbol{\triangle}}}$ either, as $l_\alpha^{(1)}$ contains many distinctly valued points. Thus, we find at least $q-3$ planes of type $\boldsymbol{\mathcal{O}}$ in $\Sigma$ through $l_\alpha^{(2)}$, each containing at least $\frac{1}{2}q(q-1)$ points of $\supp{c}$ (Lemma \ref{LinesAndPlanes}). The other planes in $\Sigma$ through $l_\alpha^{(2)}$, of which there are at most four, contain at least $3q-3$ points of $\supp{c}$. We get \begin{align*} \wt{\restr{c}{\Sigma}}&\geqslant\Big(\frac{1}{2}q(q-1)\Big)(q-3)+4\cdot(3q-3)-q\cdot(q+1)\\ &=\frac{1}{2}q^3-3q^2+\frac{25}{2}q-12>B_{3,q}\textnormal{,} \end{align*} which is, if $n=3$, a direct contradiction or, if $n>3$, a contradiction with Lemma \ref{SpacesThrough3Secant}, as $\Sigma$ contains the $3$-secant $s$.\\ \underline{Case $2$: $T_1=T_2$.} Suppose, on the contrary, that $\wt{\restr{c}{\Pi_1}}\neq\wt{\restr{c}{\Pi_2}}$. W.l.o.g. we can assume that $\wt{\restr{c}{\pi_1}}\neq\wt{\restr{c}{\pi_2}}$ as well. Assume, in the first case, that $T_1\in\{{T^{\textnormal{odd}}},{T^{\bigstar}}\}$. By observing the types of these planes and by Proposition \ref{PropOddCodeword}, $\wt{\restr{c}{\pi_1}}\neq\wt{\restr{c}{\pi_2}}$ implies that both $\alpha+\beta+\gamma=0$ and $\alpha+\beta+\gamma\neq0$, a contradiction.\\ Now assume $T_1={T^{\boldsymbol{\triangle}}}$. Considering the plane $\pi_i$, we know that the lines $l_\alpha^{(i)}$, $l_\beta^{(i)}$ and $l_\gamma^{(i)}$ are not concurrent.
As $\wt{\restr{c}{\pi_1}}\neq\wt{\restr{c}{\pi_2}}$, we know, without loss of generality, that the value of the point $l_\alpha^{(1)}\cap l_\beta^{(1)}$ is zero, while the value of the point $l_\alpha^{(2)}\cap l_\beta^{(2)}$ is not zero. This implies that $\alpha+\beta$ is both zero and non-zero, a contradiction. \end{proof} \begin{lemma}\label{UniqueLongSecants} Suppose $q>3$ and let $\pi$ be a plane of type $T\in\{{T^{\textnormal{odd}}},{T^{\boldsymbol{\triangle}}}\}$. Then all planes $\sigma$ of type $\boldsymbol{\mathcal{T}}$ intersecting $\pi$ in a long secant are planes of type $T$ as well. Moreover, $\wt{\restr{c}{\sigma}}=\wt{\restr{c}{\pi}}$. \end{lemma} \begin{proof} Suppose the plane $\sigma$ is a plane of type $T_\sigma\in\boldsymbol{\mathcal{T}}$; let $l$ be the long secant $\pi\cap\sigma$. As $T\in\{{T^{\textnormal{odd}}},{T^{\boldsymbol{\triangle}}}\}$, no $q$ points on $l$ have the same non-zero value in $c$. As a consequence, $T_\sigma\notin\{{T_0},{T_{q+1}},{T_{2q}},{T_{2q+1}},{T^{\bigstar}}\}$. If $T={T^{\textnormal{odd}}}$, we find at least $q$ points on $l$ having pairwise different values in $c$. If $T={T^{\boldsymbol{\triangle}}}$, the points on $l$ take at most $3$ pairwise different values. Hence, if $T_\sigma\neq T$, then $q\leqslant3$, a contradiction. Furthermore, it is not hard to check that the set of values of points on $l$ fixes the weight of $\restr{c}{\sigma}$. \end{proof} \begin{lemma}\label{NotAllCatBPlane} Suppose that $n=3$ and $3q^2-3q-3\leqslant\wt{c}\leqslant B_{3,q}$. Then a $3$-secant is never contained in $q+1$ planes of the same type $T\in\{{T^{\textnormal{odd}}},{T^{\boldsymbol{\triangle}}}\}$. \end{lemma} \begin{proof} Suppose, on the contrary, that $t$ is such a $3$-secant. Fix a plane $\pi$ through $t$. By Lemma \ref{OneTypeCat}, the weight of the code word $c$ is known, as we can count: $\wt{c}=(q+1)\big(\wt{\restr{c}{\pi}}-3\big)+3=(q+1)\wt{\restr{c}{\pi}}-3q$.\\ Remark that, as $\pi$ is a plane of either type ${T^{\textnormal{odd}}}$ or ${T^{\boldsymbol{\triangle}}}$, we can always find a $1$- or $2$-secant $r$ in $\pi$ such that $t$ and $r$ intersect in a point $Q$ of $\supp{c}$. Indeed, \begin{itemize} \item if $\pi$ is a plane of type ${T^{\textnormal{odd}}}$, we can simply connect two points: a hole lying on a long secant in $\pi$, different from the intersection point of the three long secants in $\pi$, with a point of $t\cap\supp{c}$ on another long secant in $\pi$. \item if $\pi$ is a plane of type ${T^{\boldsymbol{\triangle}}}$, we can connect a point lying on two long secants with the unique point of $t$ lying on the third long secant. \end{itemize} Let $\sigma$ be a plane through $r$, not equal to $\pi$. Choose a long secant $s$ in $\sigma$ through $Q$. This is possible since every plane of type $\boldsymbol{\mathcal{T}}\setminus\{{T_0}\}$ obviously contains a long secant, and planes of type $\boldsymbol{\mathcal{O}}$ contain long secants as well (cf. Lemma \ref{LinesAndPlanes}). The plane $\spann{t}{s}$ contains the $3$-secant $t$, thus this plane has to be of the same type as $\pi$. In particular, this means that $\spann{t}{s}$ is a plane of type ${T^{\textnormal{odd}}}$ or ${T^{\boldsymbol{\triangle}}}$.
However, by Lemma \ref{UniqueLongSecants}, the plane $\spann{t}{s}$ then has to be of the same type as $\sigma$ as well, as they share the long secant $s$, unless $\sigma$ is a plane of type $\boldsymbol{\mathcal{O}}$.\\ Therefore, all planes $\sigma$ through $r$ satisfy either $\wt{\restr{c}{\sigma}} = \wt{\restr{c}{\pi}}$ (if $\sigma$ is a plane of type $\boldsymbol{\mathcal{T}}$), or $\wt{\restr{c}{\sigma}} \geqslant \frac{1}{2}q(q-1) > \wt{\restr{c}{\pi}}$ (if $\sigma$ is a plane of type $\boldsymbol{\mathcal{O}}$, by Lemma \ref{LinesAndPlanes}). In both cases, this yields the following lower bound on $\wt{c}$: \[ (q+1)\wt{\restr{c}{\pi}}-3q=\wt{c}\geqslant(q+1)\big(\wt{\restr{c}{\pi}}-2\big)+2\textnormal{,} \] a contradiction. \end{proof} The following proposition is a consequence of the way code words of type $\boldsymbol{\mathcal{T}}$ are defined (Definition \ref{DefHyperplane}). \begin{proposition}\label{TypesOfPlanesAndHyperplanes} Suppose that $\Pi$ is a hyperplane of type $T\in\boldsymbol{\mathcal{T}}$, with $\kappa$ the $(n-4)$-dimensional vertex of $\supp{\restr{c}{\Pi}}$. Suppose that $t$ is a $3$-secant contained in $\Pi$. Then $t$ is disjoint to $\kappa$ and all $q^{n-3}$ planes in $\Pi$ that contain $t$ but that are disjoint to $\kappa$ are planes of type $T$. The other $\theta_{n-4}$ planes in $\Pi$ through $t$ intersect $\kappa$ in a point and are all planes of type ${T^{\bigstar}}$. \end{proposition} \begin{lemma}\label{NotAllCatBHyper} Suppose that $(3q-6)\theta_{n-2}+3\leqslant\wt{c}\leqslant B_{n,q}$. Then a $3$-secant is never contained in $\theta_{n-2}$ hyperplanes of the same type $T\in\{{T^{\textnormal{odd}}},{T^{\boldsymbol{\triangle}}}\}$. \end{lemma} \begin{proof} By Lemma \ref{NotAllCatBPlane}, we can assume that $n>3$. Suppose that $t$ is a $3$-secant with the described property. Now define \[ S=\{(\pi,\Pi):t\subseteq\pi\subseteq\Pi, \pi\textnormal{ a plane},\Pi\textnormal{ a hyperplane},\textnormal{both of type }T\}\textnormal{.} \] Fix an arbitrary $T$-typed plane $\pi_0\supseteq t$. As all hyperplanes through $t$ are of the same type $T$, all hyperplanes through $\pi_0$ have this property as well. Thus, the number of elements in $S$ with a fixed first argument $\pi_0$ equals $\theta_{n-3}$.\\ Fix an arbitrary $T$-typed hyperplane $\Pi_0\supseteq t$. By Proposition \ref{TypesOfPlanesAndHyperplanes}, the number of elements in $S$ with a fixed second argument $\Pi_0$ equals $q^{n-3}$ (the number of planes in $\Pi_0$ through $t$, disjoint to an $(n-4)$-subspace not intersecting $t$).\\ Let $x_\pi$ be the number of $T$-typed planes through $t$. By double counting, we get: \[ x_\pi\cdot\theta_{n-3}=|S|=\theta_{n-2}\cdot q^{n-3}\quad\Longleftrightarrow\quad x_\pi=\frac{q^{n-1}-1}{q^{n-2}-1}q^{n-3}=q^{n-2}+1-\frac{q^{n-3}-1}{q^{n-2}-1}\textnormal{.} \] As $x_\pi$ is known to be an integer, this is only valid when the fraction on the right is an integer. As $n>3$, this is never the case. \end{proof} \begin{lemma}\label{VerticesAreEqual} Suppose that $(3q-6)\theta_{n-2}+3\leqslant\wt{c}\leqslant B_{n,q}$. Let $\Pi_1$ and $\Pi_2$ be two hyperplanes of type ${T^{\bigstar}}$ and let $\mathcal{C}_i$ be the union of the three $(n-2)$-subspaces present in the linear combination $\restr{c}{\Pi_i}$, thus intersecting in a common $(n-3)$-space $\kappa_i$ ($i=1,2$). Suppose that $\mathcal{C}_1$ and $\mathcal{C}_2$ have an $(n-2)$-subspace in common.
Then either \begin{itemize} \item $\kappa_1=\kappa_2$, or \item $n>3$ and there exists a solid $S$ containing a long secant that is only contained in planes in $S$ of type $\mathcal{T}$. \end{itemize} \end{lemma} \begin{proof} Let $\Sigma$ be the $(n-2)$-space that $\mathcal{C}_1$ and $\mathcal{C}_2$ have in common. As $q>3$, $\Sigma$ must be one of the three subspaces present in the linear combination of both $\restr{c}{\Pi_1}$ and $\restr{c}{\Pi_2}$.\\ Suppose that $\kappa_1\neq\kappa_2$. As these are spaces of the same dimension, we can find a point $P_1\in\kappa_1\setminus\kappa_2$ and a point $P_2\in\kappa_2\setminus\kappa_1$; define $l=\spann{P_1}{P_2}$. Remark that $l$ must be a $(q+1)$-secant to $\supp{c}$. This follows from the fact that every point of $l$ lies in $\Sigma \setminus \kappa_i$, for at least one choice of $i$. Looking in $\mathcal C_i$, we see that all points of $\Sigma \setminus \kappa_i$ lie in $\supp c$. Now take planes $\pi_i$ in $\Pi_i$, for $i = 1, 2$, through $l$, not contained in $\Sigma$. Due to this choice, it is clear that the plane $\pi_i$ will intersect each $(n-2)$-subspace of $\mathcal{C}_i$ in a line (through $P_i$). Define $S=\spann{\pi_1}{\pi_2}$.\\ Choose a $(q+1)$-secant $s$ in $\pi_1$, different from $l$. As $P_1\neq P_2$, all planes in $S$ through $s$ (not equal to $\pi_1$) intersect $\pi_2$ in a $3$-secant and thus, by Lemma \ref{PlanesThrough3Secant}, are planes of type $\boldsymbol{\mathcal{T}}$. As $\pi_1$ is a plane of type $\boldsymbol{\mathcal{T}}$ as well, we know that all planes in $S$ through $s$ are planes of type $\boldsymbol{\mathcal{T}}$.\\ If $n=3$, we get that $(3q-6)(q+1)+3\leqslant\wt{c}=\wt{\restr{c}{S}}\leqslant q\cdot(2q)+(3q+1)=2q^2+3q+1$, which is only valid if $q<7$, contrary to the assumptions. \end{proof} The following theorem connects all previous results and proves Theorem \ref{THETHEOREM}. \begin{theorem}\label{LastTheorem} Suppose $(3q-6)\theta_{n-2}+3\leqslant\wt{c}\leqslant B_{n,q}$. Then there exists a plane $\pi$ of type $T\in\{{T^{\textnormal{odd}}},{T^{\boldsymbol{\triangle}}},{T^{\bigstar}}\}$ and an $(n-3)$-space $\kappa$ such that $\pi\cap\kappa=\emptyset$ and \[ c=\sum_{P\in\pi}c(P)\cdot\spann{\kappa}{P}\textnormal{.} \] \end{theorem} \begin{proof} We will prove this by induction on $n$. When $n = 2$, we can choose $\kappa=\emptyset$ and refer to Corollary \ref{PlaneResults}. Now assume $n\geqslant3$ and suppose the statement is true for $c$ restricted to any $k$-space, $2\leqslant k<n$. By Lemma \ref{Exist3Secant}, we can choose a $3$-secant $t$ with corresponding non-zero values $\alpha$, $\beta$ and $\gamma$. By the induction hypothesis and Lemma \ref{SpacesThrough3Secant}, each hyperplane through $t$ is a hyperplane of type $\boldsymbol{\mathcal{T}}$ and by Lemma \ref{OneTypeCat}, we know that there exist two specific types $T_A={T^{\bigstar}}$ and $T_B\in\{{T^{\textnormal{odd}}},{T^{\boldsymbol{\triangle}}},{T^{\bigstar}}\}$ such that all hyperplanes through $t$ are either of type $T_A$ or type $T_B$. Furthermore, by Lemma \ref{NotAllCatBHyper}, we know that there exists at least one hyperplane through $t$ of type $T_A$; consider such a hyperplane $\Pi$. Remark that by Proposition \ref{TypesOfPlanesAndHyperplanes}, all planes through $t$ are planes of type $T_A$ or $T_B$ as well. We can now fix a certain plane $\pi$ as follows: if all planes through $t$ are planes of type $T_A$, choose $\pi$ to be an arbitrary plane through $t$, not contained in $\Pi$. 
Else, choose $\pi$ to be a plane through $t$ of type $T_B$. By Proposition \ref{TypesOfPlanesAndHyperplanes}, $\pi$ cannot be contained in $\Pi$.\\ Furthermore, we know that $\restr{c}{\Pi}$ is a linear combination of three different $(n-2)$-subspaces of $\Pi$ through an $(n-3)$-space. Choose $\kappa$ to be this $(n-3)$-space. As all lines in $\Pi$, not disjoint to $\kappa$, are either $0$-, $1$-, $q$- or $(q+1)$-secants, we know that $\kappa$ must be disjoint to the $3$-secant $t$ and, furthermore, disjoint from the plane $\pi\supseteq t$, as that plane is not contained in $\Pi$. \bigskip For each point $P\in\kappa$, it is easy to see that $c(P)$ is equal to the sum of the values of the points on the $3$-secant $t$ (which is $\alpha+\beta+\gamma$).\\ As $\restr{c}{\Pi}$ is a linear combination of three different $(n-2)$-spaces of $\Pi$ having the space $\kappa$ in common, we can choose one of those $(n-2)$-spaces $\Psi_1$; w.l.o.g. this space corresponds to the value $\alpha$. Choose an arbitrary $3$-secant $t_1$ in $\pi$ through the point $\Psi_1\cap\pi$, thus having corresponding non-zero values $\alpha$, $\beta_1$ and $\gamma_1$. By the induction hypothesis and Lemma \ref{SpacesThrough3Secant}, $\Pi_1=\spann{\Psi_1}{t_1}$ is a hyperplane of type $\boldsymbol{\mathcal{T}}$. We claim that $\Pi_1$ is a hyperplane of type $T_A$. Indeed, let $\pi_1$ be a plane in $\Pi_1$ through $t_1$, thus intersecting $\Pi$ in a line of $\Psi_1$. Then this intersection line must be a $q$- or $(q+1)$-secant. By Lemma \ref{PlanesThrough3Secant}, $\pi_1$ has to be a plane of type $\boldsymbol{\mathcal{T}}$ and, more specifically, a plane of type $T_A$ (Lemma \ref{UniqueLongSecants}). As such, all planes in $\Pi_1$ through $t_1$ are planes of type $T_A$, thus $\Pi_1$ contains at least $\theta_{n-3}$ planes of type $T_A$ through a fixed $3$-secant ($t_1$). By Proposition \ref{TypesOfPlanesAndHyperplanes}, at least one of these planes is of the same type as $\Pi_1$, thus this hyperplane must be of type $T_A$.\\ Let $\kappa_1$ be the $(n-3)$-subspace of $\Pi_1$ in which the three hyperplanes of $\restr{c}{\Pi_1}$ intersect. By Lemma \ref{VerticesAreEqual}, we know that $\kappa=\kappa_1$. In this way, it is easy to see that all points in $\Pi_1\setminus\kappa$ fulfil the desired property.\\ We can now repeat the above process by choosing another $(n-2)$-space $\Psi_2$ in one of the linear combinations of $\restr{c}{\Pi}$ or $\restr{c}{\Pi_1}$ and considering the span $\Pi_2=\spann{\Psi_2}{t_2}$, with $t_2$ an arbitrary $3$-secant in $\pi$ through the point $\Psi_2\cap\pi$. All points in $\Pi_2\setminus\kappa$ will fulfil the desired property as well.\\ To conclude, if, for each point $P$ in $\pi$, there exists a sequence of $3$-secants $t_1,t_2,\dots,t_m \ni P$ in $\pi$ such that $t\cap t_1\in\supp{c}$ and $t_i\cap t_{i+1}\in\supp{c}$ for all $i\in\{1,2,\dots,m-1\}$, then this theorem is proven by consecutively repeating the above arguments. Unfortunately, not all points in $\pi$ satisfy this property. However, if a point $P\in\pi$ does not lie on such a (sequence of) $3$-secant(s), we can easily prove it lies on a $0$-, $1$- or $2$-secant $r$ in $\pi$ having $q$ points that do satisfy this first property. Thus, we already know the value of a lot of points in the hyperplane $\spann{\kappa}{r}$, namely of precisely $|\spann{\kappa}{r}|-|\spann{\kappa}{P}|+|\kappa|=\theta_{n-1}-q^{n-2}$ points. 
Furthermore, $\wt{\restr{c}{\spann{\kappa}{r}}}\leqslant2q^{n-2}+\theta_{n-3}+\wt{\restr{c}{\spann{\kappa}{P}}}-\wt{\restr{c}{\kappa}}\leqslant3q^{n-2}+\theta_{n-3}\leqslant B_{n-1,q}$. Thus, by the induction hypothesis, this hyperplane is a hyperplane of type $\boldsymbol{\mathcal{T}}$. It is easy to see that all points in $\spann{\kappa}{P}$ must satisfy the property of the theorem. \end{proof} \textbf{Acknowledgement.} Special thanks to Maarten De Boeck for revising these results with great care and eye for detail.
\section{Introduction} Let $K$ be a number field. Let $X$ be a smooth projective curve over $K$ given by an equation $y^N=f(x)$, where $f(x) \in K[x]$ is a monic separable polynomial of degree $m>N>1$ and where $\gcd(m,N)=1$. We call such a curve a superelliptic curve of type $(N,m)$ over $K$. The curve $X$ has a unique point $o$ at infinity, which is $K$-rational. The genus $g$ of $X$ equals $g = \frac{1}{2}(N-1)(m-1)$, by Riemann-Hurwitz; in particular $g$ is positive. Let $v$ be a place of $K$, and fix a $v$-adic absolute value $|\cdot|_v$ on $K_v$. We are interested in local integrals: \[ \lambda_v(p) = \frac{1}{N} \int_{X_v} \log |x-x(p)|_v \, \mu_v \] for $o \neq p \in X(K_v)$. The measure space $(X_v,\mu_v)$ is a suitable ``analytic space'' associated to $X$ at $v$; in fact $X_v$ is the Berkovich analytic space associated to $X \otimes \hat{\overline{K_v}}$ if $v$ is non-archimedean, and $X_v$ is the complex analytic space $X(\overline{K_v})$ if $v$ is archimedean. Here $\overline{K_v}$ is an algebraic closure of $K_v$. The measure $\mu_v$ is the canonical Arakelov measure on $X_v$, as defined in Section~\ref{potential} below. Let $o \neq p \in X(K)$. Then $\lambda_v(p)$ vanishes for almost all $v$ and we put: \[ h_{X,x}(p) = \frac{1}{[K \colon \mathbb{Q}]} \sum_v n_v \, \lambda_v(p) = \frac{1}{[K \colon \mathbb{Q}]} \frac{1}{N} \sum_v n_v \int_{X_v} \log |x-x(p)|_v \, \mu_v \, , \] where the $n_v$ are suitable local factors. We put $h_{X,x}(o)=0$. Let $\overline{K}$ be an algebraic closure of $K$; then $h_{X,x}$ extends naturally to a real-valued function on $X(\overline{K})$. Our first result is as follows. \begin{thmA} The function $h_{X,x}$ is a Weil height on $X(\overline{K})$ associated to the line bundle $\mathcal{O}_X(o)$. In fact, let $J=\mathrm{Pic}^0 X$ be the jacobian of $X$ and let $h_J$ be the canonical (N\'eron-Tate) height on $J(\overline{K})$. Then the formula: \[ h_{X,x}(p) = \frac{1}{g} h_J([p-o]) \] holds for all $p \in X(\overline{K})$. \end{thmA} Our second result says that, except possibly when $p$ is a Weierstrass point, the local integrals $\lambda_v(p)$ for $p \in X(K)$ can be computed by averaging over the $n$-division points of $X$. To introduce these, consider on $J$ the subscheme $\Theta$ represented by the classes $[q_1+\cdots+q_{g-1} - (g-1)o]$ with $q_1,\ldots,q_{g-1}$ running over $X$. Then $\Theta$ is a symmetric theta divisor on $J$. Now if $n \geq g$ is an integer we define a subscheme $H_n$ of $X$ by putting: \[ H_n=\iota^* [n]^* \Theta \, . \] Here $[n]$ denotes multiplication by $n$ on $J$, and $\iota \colon X \to J$ is the natural embedding sending $p$ to $[p-o]$. It can be shown that $H_n$ is an effective divisor on $X$ of degree $gn^2$. The points in the support of $H_n$ are called the $n$-division points of $X$; they naturally generalise the notion of $n$-torsion points on elliptic curves. We mostly view $H_n$ as a multi-set of $\overline{K}$-points of $X$ of cardinality $gn^2$. \begin{thmB} Let $o \neq p \in X(K)$, and assume that $p$ is not a Weierstrass point of~$X$. Let $v$ be a place of $K$. Then: \begin{equation} \label{limit} \frac{1}{gn^2} \sum_{q \in H_n \atop x(q) \neq x(p),\infty} \log|x(p)-x(q)|_v \longrightarrow \int_{X_v} \log|x - x(p)|_v \, \mu_v \end{equation} for a suitable sequence of natural numbers $n$ tending to infinity. Here the points in $H_n$ are counted with multiplicity. \end{thmB} Note that the left hand side of (\ref{limit}) is well-defined. 
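To fix ideas, here is the simplest instance of these definitions. Let $(N,m)=(2,3)$, so that $(X,o)$ is an elliptic curve, and write $X[n]$ for its group of $n$-torsion points. Then $g=\frac{1}{2}(2-1)(3-1)=1$, the divisor $\Theta$ is the origin of $J$, and:
\[ H_n = \iota^* [n]^* \Theta = X[n] \, , \quad \textrm{a multi-set of cardinality} \quad gn^2=n^2 \, , \]
each $n$-torsion point occurring with multiplicity one, as $[n]$ is \'etale in characteristic zero.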
Recall that a $\overline{K}$-point $p$ of $X$ is a Weierstrass point of $X$ if there is a non-constant rational function on $X\otimes \overline{K}$ which is regular outside $p$ and has a pole of multiplicity at most $g$ at $p$. If $g \geq 2$, each ramification point of $x \colon X \to \mathbb{P}^1_K$ is a Weierstrass point of $X$. If $X$ is a hyperelliptic curve (so $N=2$) then conversely each Weierstrass point is a hyperelliptic ramification point. There are at most $g^3-g$ Weierstrass points in $X(\overline{K})$. Theorem~A and Theorem~B are known in the case that $(X,o)$ is an elliptic curve, i.e., when $m=3, N=2$. In this case the function $\lambda_v \colon X(K_v) - \{o\} \to \mathbb{R}$ is just the unique N\'eron function for $o$ on $X(K_v)$ normalised by the condition: \[ \lambda_v(p) - \frac{1}{2} \log |x(p)|_v \to 0 \quad \textrm{as} \quad p \to o \, . \] This can be seen for example by integrating the quasi-parallelogram law satisfied by the N\'eron function: \begin{equation} \label{parallel} \lambda_v(p+q) + \lambda_v(p-q)=2\lambda_v(p) + 2 \lambda_v(q) -\log|x(p)-x(q)|_v \end{equation} against $\mu_v(q)$, using the translation invariance of $\mu_v$: by that invariance the three integrals $\int_{X_v} \lambda_v(p+q) \, \mu_v(q)$, $\int_{X_v} \lambda_v(p-q) \, \mu_v(q)$ and $\int_{X_v} \lambda_v(q) \, \mu_v(q)$ coincide, so that one is left with $\lambda_v(p) = \frac{1}{2} \int_{X_v} \log|x(p)-x(q)|_v \, \mu_v(q)$. Theorem~A is then obtained from the usual formula expressing the canonical (N\'eron-Tate) height on an elliptic curve as a sum of local N\'eron functions. Theorem B and variations of Theorem B in the elliptic curve case were proved in the 1990s by Everest and n\' i Flath\'uin \cite{enf} and Everest and Ward \cite{ew}, and more recently by Baker, Ih and Rumely \cite{bir} and Szpiro and Tucker \cite{st2}. The proofs are based on classical diophantine approximation results (linear forms in logarithms, and Roth's theorem). We note that the divisors $H_n$ are exactly the divisors of Weierstrass points of the line bundles $\mathcal{O}_X(o)^{\otimes n+g-1}$. A theorem of Neeman \cite{ne} therefore implies that, at least if $v$ is archimedean, the multi-sets $H_n$ are equidistributed with respect to $\mu_v$ in the weak topology, i.e., the average of each continuous function on $X_v$ over $H_n$ converges to its integral against $\mu_v$, as $n \to \infty$. Theorem B asserts that such a convergence result holds as well for certain functions on $X_v$ with logarithmic singularities, in both the archimedean and the non-archimedean case. The proof of Theorem A (given in Section~\ref{theoA}) is based on expressing the local integrals $\lambda_v$ within Zhang's theory of admissible local pairing \cite{zh}, using potential theory on $X_v$, and on exploiting the connection discovered by Hriljac and Faltings between global intersection pairing on $X$ and the canonical height on $J$ (i.e., the arithmetic Hodge index theorem). The bit of potential theory on $X_v$ that we need is developed in Section~\ref{potential}, based on the thesis of Thuillier \cite{th}. In Section \ref{complements} we discuss some additional results in the context of hyperelliptic curves. This features a formula for the admissible self-intersection of the relative dualising sheaf of a hyperelliptic curve as a sum of local double integrals over the places of $K$. The proof of Theorem B (given in Section~\ref{theoB}) is based on a diophantine approximation result of Faltings \cite{fa} on abelian varieties, and a formula relating the ``division polynomial'' of $H_n$ to $\lambda_v$ and a suitable N\'eron function $\lambda_{\Theta,v}$ on $J$. Incidentally, we show that this formula can be used to prove that the average height of division points remains bounded.
This also follows from results of Burnol \cite{bu}. We expect that the statement of Theorem B remains true even when $p \neq o$ is a Weierstrass point on $X$. We prove this in Section \ref{cantor} when $X$ is a hyperelliptic curve. The proof consists of a direct computation, using a determinantal formula due to D. Cantor \cite{ca} for hyperelliptic division polynomials. A corollary of this computation is a generalisation of a result on ``one half log discriminant'' due to Szpiro and Tucker~\cite{st1}. As an application of Theorems A and B we prove in Section~\ref{sectionapplication} a finiteness result for divisors $H_n$ which are integral with respect to a given algebraic point. It would be interesting to have function field analogues of the results in this paper, and to extend the results to more general types of curves. We finish the Introduction by mentioning some recent results similar in spirit to Theorems A and B. First, let $\varphi \colon \mathbb{P}^1_K \to \mathbb{P}^1_K$ be a surjective morphism of degree $d>1$, i.e., a dynamical system on the projective line over $K$. Call and Silverman \cite{cs} prove that there exists a canonical height $h_\varphi \colon \mathbb{P}^1(\overline{K}) \to \mathbb{R}$ associated to $\varphi$, which is non-negative, and satisfies the functional equation $h_\varphi(\varphi(\alpha))=dh_\varphi(\alpha)$ for all $\alpha \in \mathbb{P}^1(\overline{K})$. An example is the ``usual'' height $h$ given by $ h(\alpha)= \frac{1}{[K \colon \mathbb{Q}]} \sum_v n_v \log \max \{ 1, |\alpha|_v\} $ for $\infty \neq \alpha \in \mathbb{P}^1(K)$, which is the canonical height associated to the squaring map $\varphi(x)=x^2$ of degree~$2$. If $\infty$ is a fixed point for $\varphi$, then Favre and Rivera-Letelier \cite{fr} and Pineiro, Szpiro and Tucker \cite{pst} prove that for $\infty \neq \alpha \in \mathbb{P}^1(K)$ one has a ``Mahler type'' formula: \begin{equation} \label{mahler} h_\varphi(\alpha)= \frac{1}{[K\colon \mathbb{Q}]} \sum_v n_v \int_{\mathbb{P}^1_v} \log |x-\alpha|_v \, \mu_{\varphi,v} \end{equation} for the canonical height associated to $\varphi$. In \cite{pst} the expressions $\int_{\mathbb{P}^1_v} \log |x-\alpha|_v \mu_{\varphi,v}$ are certain natural local factors defined using intersection theory on suitable models of $\mathbb{P}^1_K$, whereas in \cite{fr} it is shown that the expressions $\int_{\mathbb{P}^1_v} \log |x-\alpha|_v \mu_{\varphi,v}$ can be taken to be actual integrals over the Berkovich projective line, for a canonical measure $\mu_{\varphi,v}$ on $\mathbb{P}^1_v$ associated to $\varphi$. Formula (\ref{mahler}) specialises to a well-known formula of Mahler in the case that $h_\varphi$ is the usual height $h$ associated to~$\varphi \colon x \mapsto x^2$. In \cite{st2} Szpiro and Tucker prove two results in the context of dynamical systems analogous to Theorem~B, cf. \cite{st2}, Theorem 4.6 and Theorem 4.7. First, for $\infty \neq \alpha \in \mathbb{P}^1(K)$ and for any place $v$ of $K$ one has: \[ \frac{1}{d^n} \sum_{w : \varphi^n(w)=w \atop w \neq \alpha, \infty} \log |w - \alpha|_v \longrightarrow \int_{\mathbb{P}^1_v} \log |x - \alpha|_v \, \mu_{\varphi,v} \] as $n \to \infty$.
Second, if $\alpha_0 \in \mathbb{P}^1(K)$ is not an exceptional point for $\varphi$ (i.e., the set $\cup_{n=1}^\infty (\varphi^n)^{-1}(\alpha_0)$ is infinite) then: \[ \frac{1}{d^n} \sum_{w : \varphi^n(w)=\alpha_0 \atop w \neq \alpha, \infty} \log |w -\alpha|_v \longrightarrow \int_{\mathbb{P}^1_v} \log |x - \alpha|_v \, \mu_{\varphi,v} \] for each $\infty \neq \alpha \in \mathbb{P}^1(K)$, as $n \to \infty$. In both cases, the points $w$ are counted with multiplicity. The proofs in \cite{st2} are based upon Roth's theorem. In \cite{clt}, Th\'eor\`eme~1.2, Chambert-Loir and Thuillier prove a logarithmic equidistribution result in the general context of polarised projective varieties $(X,L)$ over~$K$. The result asserts the equidistribution, at each place $v$ of $K$, of any generic sequence of Galois orbits of ``small'' points, with respect to any function with logarithmic singularities along an effective Cartier divisor on $X$ whose canonical height is equal to the canonical height of $X$. Here, both the adelic integration measure and the canonical height are determined by the polarisation $L$, and again one has a ``Mahler type'' formula for the canonical height, cf. \cite{clt}, Th\'eor\`eme 1.4. Interestingly, the proof of the result of Chambert-Loir and Thuillier does not rely on diophantine approximation, but instead on a weak equidistribution result of Zhang for polarised projective varieties, combined with a result on arithmetic volumes due to Yuan. We note finally that our Theorem B does not seem to be a direct consequence of the logarithmic equidistribution results \cite{clt} \cite{st2} discussed above. Indeed, in both \cite{clt} and \cite{st2} the logarithmic equidistribution is asserted of an infinite sequence of points whose heights become ``small'' with respect to the canonical height. Now Theorem A says that the canonical height on $X$ is essentially the restriction to $X$ of the canonical height on the jacobian of $X$. But the Bogomolov conjecture, first proved by Ullmo \cite{ul}, implies that $X$ contains no infinite sequence of ``small'' points, if $g \geq 2$. \section{Potential theory on analytic curves} \label{potential} Let $X$ be a geometrically connected smooth projective curve over a local field $(k,|\cdot|)$, let $\overline{k}$ be an algebraic closure of $k$, and let $\hat{\overline{k}}$ be the completion of $\overline{k}$. The absolute value $|\cdot|$ extends in a unique way to $\hat{\overline{k}}$. One has associated to $X$ a locally ringed space $\mathfrak{X}=(|\mathfrak{X}|,\mathcal{O}_\mathfrak{X})$ where the underlying topological space $|\mathfrak{X}|$ has the following properties: $|\mathfrak{X}|$ is compact, metrisable, path-connected, and contains $X(\overline{k})$ with its topology induced from $|\cdot|$ naturally as a dense subspace. If $k$ is archimedean, we just take $X(\overline{k})$ itself, with its structure of complex analytic space; if $k$ is non-archimedean we let $\mathfrak{X}$ be the Berkovich analytic space associated to $X \otimes \hat{\overline{k}}$, see \cite{be}. Our purpose in this section is to put a canonical probability measure $\mu_\mathfrak{X}$ on $|\mathfrak{X}|$, and to discuss some elements of a potential theory on $\mathfrak{X}$. This is standard for archimedean $k$; for non-archimedean $k$ we base our discussion on Thuillier's thesis \cite{th}, Chapter~3. See also Baker's expository paper \cite{bak}. 
As an application we derive a useful formula for the integral $\int_\mathfrak{X} \log |f| \mu_\mathfrak{X}$ where $f$ is a non-zero rational function on $X \otimes \overline{k}$. This formula establishes a link with Zhang's theory of local admissible pairing \cite{zh}. We start by considering the natural exact sequence: \[ 0 \longrightarrow \mathcal{H} \longrightarrow \mathcal{A}^0 \, {\buildrel \, \mathrm{d} \mathrm{d}^c\, \over \longrightarrow} \, \mathcal{A}^1 \longrightarrow 0 \] of sheaves of $\mathbb{R}$-vector spaces on $\mathfrak{X}$. Here $\mathcal{H}$ is the sheaf of harmonic functions on $\mathfrak{X}$, $\mathcal{A}^0$ is the sheaf of smooth functions on $\mathfrak{X}$, $\mathcal{A}^1$ is the sheaf of smooth forms on $\mathfrak{X}$, and $\, \mathrm{d} \mathrm{d}^c\, $ is the Laplace operator. The sheaf $\mathcal{A}^0$ is in fact a sheaf of $\mathbb{R}$-algebras and the sheaf $\mathcal{A}^1$ is naturally a sheaf of modules over $\mathcal{A}^0$. We let $\mathcal{A}^0(\mathfrak{X})$ and $\mathcal{A}^1(\mathfrak{X})$ be the spaces of global sections of $\mathcal{A}^0$ and $\mathcal{A}^1$. Further we put $\mathrm{D}^0(\mathfrak{X}) = \mathcal{A}^1(\mathfrak{X})^*$ and $\mathrm{D}^1(\mathfrak{X}) = \mathcal{A}^0(\mathfrak{X})^*$ for their $\mathbb{R}$-linear duals. We have a natural $\mathbb{R}$-linear integration map $\int_\mathfrak{X} \colon \mathcal{A}^1(\mathfrak{X}) \to \mathbb{R}$ and a natural $\mathbb{R}$-bilinear pairing $\mathcal{A}^0(\mathfrak{X}) \times \mathcal{A}^1(\mathfrak{X}) \to \mathbb{R}$ given by $(\varphi,\omega) \mapsto \int_\mathfrak{X} \varphi \, \omega$. This pairing yields a natural commutative diagram: \[ \xymatrix{ \mathrm{D}^0(\mathfrak{X}) \ar[r] & \mathrm{D}^1(\mathfrak{X}) \\ \mathcal{A}^0(\mathfrak{X}) \ar[u] \ar[r]^\, \mathrm{d} \mathrm{d}^c\, & \mathcal{A}^1(\mathfrak{X}) \ar[u] } \] where the upward arrows are injections. The map $\mathrm{D}^0(\mathfrak{X}) \to \mathrm{D}^1(\mathfrak{X})$ is the dual of the map $\mathcal{A}^0(\mathfrak{X})\, {\buildrel \, \mathrm{d} \mathrm{d}^c\, \over \longrightarrow} \, \mathcal{A}^1(\mathfrak{X})$, and we shall also denote it by $\mathrm{d} \mathrm{d}^c$. The kernel of $\, \mathrm{d} \mathrm{d}^c\, \colon \mathrm{D}^0(\mathfrak{X}) \to \mathrm{D}^1(\mathfrak{X})$ is a one-dimensional $\mathbb{R}$-vector space, to be identified with the set of constant functions on $\mathfrak{X}$. Elements of $\mathrm{D}^\alpha(\mathfrak{X})$ are called $(\alpha,\alpha)$-currents; $(1,1)$-currents can be viewed as measures on $|\mathfrak{X}|$. The unit element of $\mathcal{A}^0(\mathfrak{X})$ gives, under the natural map $\mathcal{A}^0(\mathfrak{X}) \to \mathrm{D}^1(\mathfrak{X})^*$, a natural $\mathbb{R}$-linear integration map $\int_\mathfrak{X} \colon \mathrm{D}^1(\mathfrak{X}) \to \mathbb{R}$, extending $\int_\mathfrak{X}$ on $\mathcal{A}^1(\mathfrak{X})$. Associated to each non-zero rational function $f$ on $X \otimes \overline{k}$ we have a natural $(0,0)$-current $\log |f| \in \mathrm{D}^0(\mathfrak{X})$. For each closed point $p$ on $X\otimes \overline{k}$ we have a Dirac measure $\delta_p \in \mathrm{D}^1(\mathfrak{X})$, and by linearity we obtain a measure $\delta_D \in \mathrm{D}^1(\mathfrak{X})$ for each divisor $D$ on $X\otimes \overline{k}$. \begin{prop} \label{properties} (i) (Poincar\'e-Lelong equation) Let $f$ be a non-zero rational function on $X \otimes \overline{k}$. Then the equation: \[ \, \mathrm{d} \mathrm{d}^c\, \log |f| - \delta_{\mathrm{div} f} = 0 \] holds in $\mathrm{D}^1(\mathfrak{X})$. 
\\ (ii) (The Laplace operator is self-adjoint) We have: \[ \int_\mathfrak{X} \varphi \, \mathrm{d} \mathrm{d}^c\, \psi = \int_\mathfrak{X} \psi \, \mathrm{d} \mathrm{d}^c\, \varphi \] for all $\varphi, \psi \in \mathrm{D}^0(\mathfrak{X})$. \\ (iii) (Existence and uniqueness of Green's functions) Let $\mu \in \mathcal{A}^1(\mathfrak{X})$ be a smooth measure with $\int_\mathfrak{X} \mu=1$, and let $p \in X(\overline{k})$. Then there exists a unique $g_{\mu,p} \in \mathrm{D}^0(\mathfrak{X})$ such that: \[ \, \mathrm{d} \mathrm{d}^c\, g_{\mu,p} = \mu - \delta_p \, , \quad \int_\mathfrak{X} g_{\mu,p} \, \mu = 0 \, . \] The symmetry relation $g_{\mu,p}(q) = g_{\mu,q}(p)$ holds for all $p \neq q \in X(\overline{k})$. \end{prop} \begin{proof} This is well-known for archimedean $k$. We find (i)--(iii) respectively in \cite{th}, Proposition 3.3.15, Proposition 3.2.12 and Proposition 3.3.13, for non-archimedean~$k$. \end{proof} We construct a canonical probability measure $\mu_\mathfrak{X} \in \mathcal{A}^1(\mathfrak{X})$ as follows. If $k$ is archimedean we let $\mu_\mathfrak{X}$ be the canonical Arakelov probability measure on $X(\overline{k})$. One way of giving $\mu_\mathfrak{X}$ is as follows: let $\iota \colon X(\overline{k}) \to J(\overline{k})$ be an immersion of $X(\overline{k})$ into the complex torus $J(\overline{k})$, where $J= \mathrm{Pic}^0 X$ is the jacobian of $X$. Then $\mu_\mathfrak{X} = \frac{1}{g} \iota^* \nu$ where $\nu$ is the unique translation-invariant $(1,1)$-form representing the first Chern class of the line bundle defining the principal polarisation on $J(\overline{k})$. Now suppose that $k$ is non-archimedean. We let $\mathcal{R}$ be the reduction graph of Chinburg-Rumely \cite{cr} of $X$. This is a metrised graph, receiving a canonical surjective continuous specialisation map $\mathrm{sp} \colon |\mathfrak{X}| \to \mathcal{R}$. In particular $\mathcal{R}$ is compact and path-connected. The map $\mathrm{sp}$ has a canonical section $i \colon \mathcal{R} \to |\mathfrak{X}|$; in fact $i \circ \mathrm{sp} \colon \mathfrak{X} \to i(\mathcal{R})$ is a deformation retraction. The Laplace operator on $\mathfrak{X}$ extends the Laplace operator on $\mathcal{R}$ in a natural way. In \cite{zh} a canonical probability measure $\mu_\mathcal{R}$ is constructed on the reduction graph $\mathcal{R}$ of $X$, called the admissible measure. It satisfies properties analogous to the Arakelov measure in the archimedean setting; in particular, it gives rise to an adjunction formula. We define the measure $\mu_\mathfrak{X}$ on $|\mathfrak{X}|$ by putting $\mu_\mathfrak{X} = i_* \mu_\mathcal{R}$. Note that we obtain a canonical symmetric pairing $(,)_a$ on $X(\overline{k})$ by putting $(p,q)_a = g_{\mu_\mathfrak{X},p}(q)$ for $p \neq q$. This pairing coincides with the admissible pairing $(,)_a$ constructed in \cite{zh}, Section~4 using Green's functions on $\mathcal{R}$ with respect to $\mu_\mathcal{R}$, and the specialisation map. For example, if $X$ is an elliptic curve with $j$-invariant $j_X$ in $k$ then $\mathcal{R}$ is a point if $|j_X| \leq 1$, and $\mathcal{R}$ is a circle of circumference $\log|j_X|$ if $|j_X|>1$. A detailed discussion of the associated Berkovich space, including formulas for $(,)_a$, can be found for example in \cite{pet}. The following proposition calculates the integrals $\int_\mathfrak{X} \log |f| \mu_\mathfrak{X}$ in terms of admissible pairing. \begin{prop} \label{formulaintegral} Let $f$ be a non-zero rational function on $X \otimes \overline{k}$.
Then the formula: \[ \int_\mathfrak{X} \log |f| \, \mu_\mathfrak{X} = \log |f|(r) + (\mathrm{div} f, r)_a \] holds. Here $r$ is an arbitrary point in $X(\overline{k})$ outside the support of $\mathrm{div} f$. \end{prop} \begin{proof} By Proposition \ref{properties}(i) we have: \[ \, \mathrm{d} \mathrm{d}^c\, \log |f| = \delta_{\mathrm{div} f} \, . \] By integrating against $g_{\mu_\mathfrak{X},r}$ we obtain: \[ \int_\mathfrak{X} g_{\mu_\mathfrak{X},r} \, \mathrm{d} \mathrm{d}^c\, \log |f| = g_{\mu_\mathfrak{X},r}(\mathrm{div} f)=(\mathrm{div} f, r)_a \, . \] On the other hand, by Proposition \ref{properties}(ii) and (iii) we have: \begin{align*} \int_\mathfrak{X} g_{\mu_\mathfrak{X},r} \, \mathrm{d} \mathrm{d}^c\, \log |f| & = \int_\mathfrak{X} (\log |f|) \, \mathrm{d} \mathrm{d}^c\, g_{\mu_\mathfrak{X},r} \\ & = \int_\mathfrak{X} (\log |f|) ( \mu_\mathfrak{X} - \delta_r) \\ & = -\log|f|(r)+\int_\mathfrak{X} \log|f| \, \mu_\mathfrak{X} \, . \end{align*} The proposition follows. \end{proof} Compare with \cite{zh}, Theorem 4.6(iii) which states that $\log|f|(r)+ (\mathrm{div} f,r)_a$ is constant outside the support of $\mathrm{div} f$. Using potential theory we are thus able to interpret this constant as a suitable integral over $\mathfrak{X}$. \section{Proof of Theorem A} \label{theoA} In this section we prove Theorem A. So let $X$ be a superelliptic curve of type $(N,m)$ over $K$ with equation $y^N=f(x)$ and point at infinity $o$. We recall that this implies that $m=\deg(f)$ with $m>N>1$ and $\gcd(m,N)=1$. If $v$ is a place of $K$ we denote by $\overline{K_v}$ an algebraic closure of $K_v$. We endow each $K_v$ with a (standard) absolute value $|\cdot|_v$ as follows. If $v$ is archimedean, we take the euclidean norm on $K_v$. If $v$ is non-archimedean, we choose $|\cdot|_v$ such that $|\pi|_v=\mathrm{e}^{-1}$, where $\pi$ is a uniformiser of $K_v$. We let $X_v$ be the analytic space associated to $X \otimes \hat{\overline{K_v}}$, and $\mu_v$ be the canonical measure on $X_v$, as in Section~\ref{potential}. For each $o \neq p \in X(\overline{K_v})$ we thus have a well-defined element: \begin{equation} \lambda_v(p) = \frac{1}{N} \int_{X_v} \log |x-x(p)|_v \, \mu_v \end{equation} of $\mathbb{R}$. Let $(,)_a$ be the local admissible pairing on $X(\overline{K_v})$ discussed in Section~\ref{potential} and let $\sigma \in \mathrm{Aut}(X \otimes \overline{K_v})$ be a generator of the automorphism group of $x \colon X \to \mathbb{P}^1_K$ over $\overline{K_v}$. Note that $\mathrm{div}(x-x(p))=-No + \sum_{i=0}^{N-1} \sigma^i(p)$. From Proposition \ref{formulaintegral} we obtain therefore that: \begin{equation} \label{keyequation} N\lambda_v(p) = \log|x(p)-x(r)|_v + \sum_{i=0}^{N-1} \left( \sigma^i(p)-o,r \right)_a \, . \end{equation} Here $r \in X(\overline{K_v})$ is an arbitrary point, not in the support of $\mathrm{div}(x-x(p))$. \begin{prop} \label{equationslambda} The function $\lambda_v \colon X(\overline{K_v}) - \{o\} \to \mathbb{R}$ extends uniquely as an element of $\mathrm{D}^0(X_v)$. As such it satisfies the $\mathrm{d} \mathrm{d}^c$-equation: \[ \, \mathrm{d} \mathrm{d}^c\, \lambda_v = \mu_v - \delta_o \, . \] Furthermore we have: \[ \lambda_v(p) - \frac{1}{N} \log |x(p)|_v \to 0 \] as $p \to o$ on $X(K_v)$. In particular $\lambda_v$ defines a local Weil function with respect to the divisor $o$ on $X$. \end{prop} \begin{proof} We consider equation (\ref{keyequation}) with $p$ as a variable and $r$ fixed. Both $(\sigma^i(p)-o,r)_a$ and $\log|x(p)-x(r)|_v$ extend as $(0,0)$-currents over $X_v$, hence so does $\lambda_v$.
The extension is unique, as $X(\overline{K_v})$ is dense in $X_v$. To prove the first formula, note that $(,)_a$ is canonical, hence invariant under $\mathrm{Aut}(X \otimes \overline{K_v})$. We can thus rewrite (\ref{keyequation}) as: \[ N \lambda_v(p) = \log|x(p)-x(r)|_v + \sum_{i=0}^{N-1} \left(p-o,\sigma^i(r)\right)_a \, . \] Taking $\mathrm{d} \mathrm{d}^c$ we have, by Proposition \ref{properties}: \[ N \, \mathrm{d} \mathrm{d}^c\, \lambda_v= \sum_{i=0}^{N-1} \left( \delta_{\sigma^i(r)} - \delta_o \right) + \sum_{i=0}^{N-1} \left( \mu_v - \delta_{\sigma^i(r)} \right) \, . \] It follows that $\, \mathrm{d} \mathrm{d}^c\, \lambda_v = \mu_v - \delta_o$ as required. To prove the second formula, we let $p \to o$ in (\ref{keyequation}). Then the sum $\sum_{i=0}^{N-1} \left( \sigma^i(p)-o,r \right)_a$ converges to zero. \end{proof} Now let $v$ run over all places of $K$ and take a point $p \in X(K)$ with $p \neq o$. From (\ref{keyequation}) it follows that $\lambda_v(p) = 0$ for almost all $v$, as the right hand side has this property. Hence the function $h_{X,x}$ on $X(K)$ given by: \[ h_{X,x}(p) = \frac{1}{[K \colon \mathbb{Q}]} \sum_v n_v \lambda_v(p) = \frac{1}{[K \colon \mathbb{Q}]} \frac{1}{N} \sum_v n_v \int_{X_v} \log |x-x(p)|_v \, \mu_v\] together with $h_{X,x}(o)=0$, is well-defined. Here $n_v$ is a (standard) local factor defined as follows: if $v$ is real, then $n_v=1$; if $v$ is complex, then $n_v=2$; if $v$ is non-archimedean, then $n_v$ is the logarithm of the cardinality of the residue field at $v$. Now let $\overline{K}$ be an algebraic closure of $K$. Using (\ref{keyequation}) with another base field we see that we can extend $h_{X,x}$ compatibly over all finite extensions of $K$ contained in $\overline{K}$, hence we can extend $h_{X,x}$ to $X(\overline{K})$. We call $h_{X,x}$ the canonical height on $X(\overline{K})$ associated to $x \colon X \to \mathbb{P}^1_K$. Let $g$ be the genus of $X$. By Riemann-Hurwitz we have $g=\frac{1}{2}(N-1)(m-1)$; in particular $g$ is positive. The following is a restatement of Theorem A. \begin{thm} \label{theheights} The function $h_{X,x}$ on $X(\overline{K})$ is a Weil height with respect to the line bundle $\mathcal{O}_X(o)$ on $X$. Let $J=\mathrm{Pic}^0 X$ be the jacobian of $X$ and let $h_J$ be the canonical (N\'eron-Tate) height on $J(\overline{K})$. Then the formula: \[ h_J([p-o]) = g h_{X,x}(p) \] holds for all $p \in X(\overline{K})$. \end{thm} \begin{proof} We can assume without loss of generality that $K$ contains a root of unity of order $N$, and that $p \in X(K)$. From (\ref{keyequation}) we find by summing over all places $v$ of $K$ that: \begin{equation} \label{first} N h_{X,x}(p) = \frac{1}{[K:\mathbb{Q}]} \sum_{i=0}^{N-1} \left( \sigma^i(p) - o,r \right)_a \end{equation} for any $r \in X(K)$ which is not in the support of $\mathrm{div}(x-x(p))$. Here $(,)_a$ now denotes global admissible pairing. Since we can vary $r$ it follows that $\otimes_{i=0}^{N-1} \mathcal{O}_X(\sigma^i(p)-o)$, viewed as an admissible line bundle, is trivial, with a constant metric at all places $v$. Hence in (\ref{first}) we can also take $r=o$. By invariance of $(,)_a$ under $\mathrm{Aut}(X)$ we arrive at the simple formula: \begin{equation} \label{second} h_{X,x}(p) = \frac{1}{[K \colon \mathbb{Q}]} (p-o,o)_a \end{equation} for all $p \in X(K)$. This shows that $h_{X,x}$ defines a Weil height with respect to $\mathcal{O}_X(o)$, by \cite{zh}, Theorem 4.6(2). Recall that we have an equation $y^N=f(x)$ for $X$ giving $(X,o)$ the structure of a superelliptic curve of type $(N,m)$ over $K$.
A small computation shows that the differential: \[ \omega_{1,N-1}=-\frac{\mathrm{d} x}{N y^{N-1}} = -\frac{\mathrm{d} y}{f'(x)} \] has divisor equal to $2(g-1)o$ on $X$: since $x$ and $y$ have poles at $o$ of orders $N$ and $m$ respectively, the first expression shows that $\omega_{1,N-1}$ vanishes at $o$ to order $m(N-1)-N-1=2g-2$, while the two expressions together show that it has no zero or pole away from $o$. Thus, if we let $\omega$ be the admissible canonical line bundle on $X$, then $2(g-1)o - \omega$ is a trivial line bundle, with a constant metric at all places $v$. In particular we have $(p-o, 2(g-1)o - \omega)_a = 0 $ for all $p \in X(K)$. By writing out we obtain: \[ -(p,\omega)_a = -2(g-1)(p,o)_a + 2(g-1)(o,o)_a - (o,\omega)_a \, . \] Now by the adjunction formula (cf. \cite{zh}, 5.2) we have $-(o,\omega)_a=(o,o)_a$ so that: \begin{equation} \label{third} -(p,\omega)_a = -2(g-1)(p,o)_a + (2g-1)(o,o)_a \, . \end{equation} The arithmetic Hodge index theorem for admissible pairing (cf. \cite{zh}, 5.4) implies that: \[ -2[K\colon \mathbb{Q}]h_J([p-o]) = (p-o,p-o)_a = (p,p)_a -2(p,o)_a+(o,o)_a \, . \] By the adjunction formula we can write $(p,p)_a=-(p,\omega)_a$ and with (\ref{third}) we arrive at $ [K\colon \mathbb{Q}]h_J([p-o]) = g (p-o,o)_a $. By (\ref{second}) we find $h_J([p-o])=gh_{X,x}(p)$. \end{proof} \section{Complements on hyperelliptic curves} \label{complements} We want to make some additional remarks about the local Weil functions $\lambda_v$ in the case that $N=2$, i.e., in the case that $(X,o)$ is an elliptic or pointed hyperelliptic curve. As an application we express the admissible self-intersection of the relative dualising sheaf (cf. \cite{zh}, 5.4) of a hyperelliptic curve as a sum of local \emph{double} integrals over the places of~$K$. We start by giving a formula for the special value of $\lambda_v$ at a hyperelliptic ramification point. Let $(X,o)$ be an elliptic or pointed hyperelliptic curve over $K$ given by an equation $y^2=f(x)$, where $f(x) \in K[x]$ is monic and separable of degree $m=2g+1$. Fix a place $v$ of $K$, as well as an algebraic closure $\overline{K_v}$ of $K_v$. Keep the $v$-adic absolute value on $K_v$ and $\overline{K_v}$ as defined in Section~\ref{theoA}. \begin{prop} \label{atalpha} Let $o \neq p \in X(\overline{K_v})$ be a hyperelliptic ramification point of $X$ and let $\alpha = x(p)$ in $\overline{K_v}$. Then the formula: \[ 2\lambda_v(p) = \int_{X_v} \log|x-\alpha|_v \, \mu_v = \frac{1}{2g} \log |f'(\alpha)|_v \] holds. \end{prop} Note that this result is especially remarkable if $v$ is an archimedean place: it says that $\lambda_v$, which is a certain transcendental function on $X(\overline{K_v})-\{o\}$, assumes at each hyperelliptic ramification point of $X$ a special value which is the logarithm of the absolute value of an algebraic number. \begin{proof} We distinguish between the case $g=1$ and the case $g \geq 2$. In the case $g=1$, let $p_i, p_j, p_k$ be the non-trivial hyperelliptic ramification points, i.e., the non-trivial $2$-torsion points of $(X,o)$, and let $\alpha_i, \alpha_j, \alpha_k$ in $\overline{K_v}$ be the corresponding roots of $f$. By the quasi-parallelogram law (\ref{parallel}) we find: \[ 2\lambda_v(p_i) = 2\lambda_v(p_j)+2\lambda_v(p_k) - \log |\alpha_j-\alpha_k|_v \, . \] By cyclic permutation of $(i,j,k)$ we obtain two other linear relations between $\lambda_v(p_i)$, $\lambda_v(p_j)$ and $\lambda_v(p_k)$. By elimination we find, using that $f'(\alpha_i)=(\alpha_i-\alpha_j)(\alpha_i-\alpha_k)$: \begin{align*} 2\lambda_v(p_i) & = \frac{1}{2} \log \left( |\alpha_i-\alpha_j|_v |\alpha_i - \alpha_k|_v \right) \\ &= \frac{1}{2} \log |f'(\alpha_i)|_v \, , \end{align*} settling the case $g=1$. For $g \geq 2$ we use a result on the arithmetic of symmetric roots from \cite{dj}.
Let $\alpha_1,\ldots,\alpha_{2g+2}$ on $\mathbb{P}^1(\overline{K_v})$ be the branch points of $x$ (this includes $\infty$). The symmetric root of a triple $(\alpha_i,\alpha_j,\alpha_k)$ of distinct branch points is then defined to be an element: \[ \ell_{ijk} = \frac{\alpha_i - \alpha_k}{\alpha_j - \alpha_k} \sqrt[2g]{ - \frac{f'(\alpha_j)}{f'(\alpha_i)}} \] of $\overline{K_v}^*$. The actual choice of $2g$-th root will be immaterial in what follows. If $\alpha_j$ equals infinity, the formula should be read as follows: \begin{equation} \label{symmrootinfinity} \ell_{i \infty k} = \left( \alpha_i - \alpha_k \right) \sqrt[2g]{-f'(\alpha_i)}^{-1} \end{equation} (recall that $f$ is monic). Now let $w_1,\ldots,w_{2g+2}$ on $X(\overline{K_v})$ be the hyperelliptic ramification points corresponding to $\alpha_1, \ldots, \alpha_{2g+2}$. Theorem C of \cite{dj} then states that if $(w_i,w_j,w_k)$ is a triple of distinct ramification points, the formula: \begin{equation} \label{rootandpairing} (w_i-w_j,w_k)_a = -\frac{1}{2} \log |\ell_{ijk}|_v \end{equation} holds. Here, as before, $(,)_a$ denotes Zhang's local admissible pairing on $X(\overline{K_v})$. Applying Proposition \ref{formulaintegral} to the rational function $x-\alpha_i$, with $\alpha_i$ a finite branch point, we find: \[ \int_{X_v} \log|x-\alpha_i|_v \, \mu_v = \log|x(p)-\alpha_i|_v + 2(w_i-o,p)_a \] for any $p \neq o,w_i$. Taking $p=w_k$ and applying (\ref{rootandpairing}) we find: \begin{align*} \int_{X_v} \log|x-\alpha_i|_v \, \mu_v & = \log|\alpha_i-\alpha_k|_v + 2(w_i-o,w_k)_a \\ & = \log|\alpha_i - \alpha_k|_v - \log |\ell_{i \infty k}|_v \, . \end{align*} Hence by (\ref{symmrootinfinity}): \[ 2\lambda_v(w_i)=\int_{X_v} \log|x-\alpha_i|_v \, \mu_v=\frac{1}{2g} \log|f'(\alpha_i)|_v \, . \] The proposition is proven. \end{proof} Note that $\lambda_v$ depends on the choice of monic equation $f$ for the pointed curve $(X,o)$. Let $\Delta = 2^{4g}\Delta(f)$ where $\Delta(f)$ is the discriminant of $f$. We refer to \cite{lo} for properties of $\Delta$. The discriminant $\Delta$ generalises the usual definition $\Delta = 2^4 \Delta(f)$ in the case where $(X,o)$ is an elliptic curve. We renormalise $\lambda_v$ by putting: \[ \hat{\lambda}_v(p) = \lambda_v(p) - \frac{1}{4g(2g+1)} \log |\Delta|_v \, . \] Then $\hat{\lambda}_v$ is independent of the choice of monic equation $f$ for $(X,o)$, as one checks by replacing $x$ by $u^2x+t$ for $u \in K^*$, $t \in K$; a verification is given below. We obtain the familiar relation: \[ \hat{\lambda}_v = \lambda_v - \frac{1}{12} \log |\Delta|_v \] in the case where $(X,o)$ is an elliptic curve. In that case, the $\hat{\lambda}_v$ have the additional property that $\int_{X_v} \hat{\lambda}_v \, \mu_v = 0$ for each place $v$. This is no longer true in general when $(X,o)$ is hyperelliptic, though we have the following result. Assume that $g \geq 2$, and let $i$ be an index with $1 \leq i \leq 2g+2$. Then put: \[ \chi(X_v) = -2g\left( \log|2|_v + \sum_{k \neq i} (w_i,w_k)_a \right) \, . \] We proved in \cite{dj}, Theorem B that $\chi(X_v)$ is independent of the choice of $i$, hence is an invariant of $X_v$, and that $\chi(X_v) \geq 0$ if $v$ is non-archimedean. We conjecture that $\chi(X_v) \geq 0$ even if $v$ is archimedean, but this is proved only if $g=2$. We have $\chi(X_v)=0$ if $v$ is non-archimedean and $X$ has potentially good reduction at $v$.
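Let us carry out the verification of the invariance of $\hat{\lambda}_v$ promised above; the computation is short and is not needed in the sequel. Replacing $x$ by $u^2x+t$ amounts to passing to the coordinate $\tilde{x}=(x-t)/u^2$, giving a second monic equation $\tilde{y}^2=\tilde{f}(\tilde{x})$ for $(X,o)$ whose roots are the $(\alpha_i-t)/u^2$, with $\alpha_i$ running through the roots of $f$. As $\mu_v$ is a probability measure we find: \[ \tilde{\lambda}_v(p) = \frac{1}{2} \int_{X_v} \log|\tilde{x}-\tilde{x}(p)|_v \, \mu_v = \lambda_v(p) - \log|u|_v \, . \] On the other hand $\Delta(\tilde{f}) = u^{-4g(2g+1)}\Delta(f)$, since $\Delta(\tilde{f})$ is the product of the $g(2g+1)$ squared differences of the roots of $\tilde{f}$, each of which equals the corresponding squared difference for $f$ times $u^{-4}$. Hence: \[ \frac{1}{4g(2g+1)} \log |\tilde{\Delta}|_v = \frac{1}{4g(2g+1)} \log |\Delta|_v - \log|u|_v \, , \] and the two shifts cancel in $\hat{\lambda}_v$, as asserted.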
The relevance of $\chi(X_v)$ is that it gives a formula in terms of local invariants for the admissible self-intersection of the relative dualising sheaf $(\omega,\omega)_a$ of~$X$, namely: \begin{equation} \label{omegasquared} (\omega,\omega)_a = \frac{2g-2}{2g+1} \sum_v n_v \chi(X_v) \, , \end{equation} where $v$ runs over the places of $K$. The following result relates $\int_{X_v} \hat{\lambda}_v \, \mu_v$ to $\chi(X_v)$. We believe that this formula could be interesting from the point of view of potential theory on graphs. \begin{prop} \label{integrallambdahat} Let $v$ be a place of $K$. Then the formula: \[ \int_{X_v} \hat{\lambda}_v \, \mu_v = \frac{1}{2g(2g+1)} \chi(X_v) \] holds. \end{prop} \begin{proof} From Proposition \ref{equationslambda} we obtain that $\lambda_v$ equals $(p,o)_a$ up to an additive constant, more precisely: \[ \lambda_v(p) = (p,o)_a + \int_{X_v} \lambda_v \, \mu_v \] for each $o \neq p \in X(\overline{K_v})$. By taking $p=w_i$ and using Proposition \ref{atalpha} we find: \[ \frac{1}{4g} \log |f'(\alpha_i)|_v = \lambda_v(w_i)= (w_i,o)_a + \int_{X_v} \lambda_v \, \mu_v \, . \] Assume that $\alpha_{2g+2}=\infty$. By summing over $i=1,\ldots,2g+1$ we arrive at: \[ \frac{1}{4g} \log |\Delta(f)|_v = -\frac{1}{2g} \chi(X_v) - \log|2|_v + (2g+1) \int_{X_v} \lambda_v \, \mu_v \, , \] using the definition of $\chi(X_v)$ above. Rewriting a bit gives: \[ \log|\Delta|_v = -2 \chi(X_v) + 4g(2g+1) \int_{X_v} \lambda_v \, \mu_v \] and hence: \[ \chi(X_v) = 2g(2g+1) \int_{X_v} \hat{\lambda}_v \, \mu_v \] as required. \end{proof} \begin{cor} \label{doubleintegral} The formula: \[ (\omega,\omega)_a = 2g(g-1) \sum_v n_v \int_{X_v} \int_{X_v} \log |x(p)-x(q)|_v \, \mu_v(p) \mu_v(q) \] holds. \end{cor} \begin{proof} Using equation (\ref{omegasquared}), Proposition \ref{integrallambdahat}, the product formula, and the definition of $\lambda_v$ we find: \begin{align*} (\omega,\omega)_a & = \frac{2g-2}{2g+1} \sum_v n_v \chi(X_v) \\ & = 4g(g-1) \sum_v n_v \int_{X_v} \hat{\lambda}_v \, \mu_v \\ & = 4g(g-1) \sum_v n_v \int_{X_v} \lambda_v \, \mu_v \\ & = 2g(g-1) \sum_v n_v \int_{X_v} \int_{X_v} \log |x(p)-x(q)|_v \, \mu_v(p) \mu_v(q) \end{align*} which gives the corollary. \end{proof} It would be interesting to have, for non-archimedean $v$, explicit formulas for $\lambda_v(p)$ \`a la the ones of Tate for elliptic curves (cf. \cite{sil2}, Chapter VI), given the type of the reduction graph $\mathcal{R}_v$ of $X$ at $v$ and the specialisation of $p$ on $\mathcal{R}_v$. A natural case to start would be the case where $X$ becomes a Mumford curve at $v$. This occurs if the branch points of $X$ come in pairs of points closer to one another than to the other branch points in a suitable $v$-adic metric on the projective line \cite{br}. If $v$ is archimedean, an explicit formula can be given using complex uniformisation, and a certain holomorphic function $\sigma_\sharp$ introduced in \cite{on}. Our formula generalises the formula given in \cite{sil2}, Chapter VI, Theorem 3.2. Write: \[ f(x)=a_0x^{2g+1}+a_1x^{2g} + \cdots + a_{2g}x + a_{2g+1} \, , \] where we view all $a_i$ as complex numbers using the embedding $v$. Note that $a_0=1$. We let $\omega_i = x^{i-1} \mathrm{d} x/(2y)$ for $i=1,\ldots,g$; then $(\omega_1,\ldots,\omega_g)$ is a basis of holomorphic $1$-forms on the compact Riemann surface $X_v$. Let $(\omega'|\omega'') \in M_{g \times 2g}(\mathbb{C})$ be the period matrix of $(\omega_1,\ldots,\omega_g)$ on a canonical symplectic basis $e$ of $\mathrm{H}_1(X_v,\mathbb{Z})$.
Let $\Lambda_v=(\omega'|\omega'')\cdot \mathbb{Z}^{2g}$ be the associated period lattice in $\mathbb{C}^g$ (vectors are considered as column vectors). Let $\kappa \colon \mathbb{C}^g \to \mathbb{C}^g/\Lambda_v$ be the projection and let: \[ \eta_i = \frac{1}{2y} \sum_{j=i}^{2g-i} (j+1-i)a_{2g-i-j} x^j \mathrm{d} x \] for $i=1,\ldots,g$. The $\eta_i$ are standard meromorphic differential forms on $X_v$ with poles only at $o$, and with vanishing residues. Let $(\eta'|\eta'') \in M_{g \times 2g}(\mathbb{C})$ be the period matrix of $(\eta_1,\ldots,\eta_g)$ on $e$ and let: \[ \delta'={}^t ( \frac{1}{2},\ldots,\frac{1}{2} ) \, , \quad \delta'' = {}^t( \frac{g}{2}, \frac{g-1}{2} , \ldots, \frac{1}{2} ) \, , \quad \delta = \left[ \delta' \atop \delta'' \right] \, . \] We put $\tau = \omega'^{-1} \omega''$ and consider the following hyperelliptic sigma-function: \[ \sigma(z) = \gamma \cdot \exp( -\frac{1}{2} {}^t z \eta' \omega'^{-1} z) \theta[\delta] (\omega'^{-1}z,\tau) \] on $\mathbb{C}^g$ with $\theta[\delta]$ the standard theta function with characteristic $\delta$. The constant $\gamma$ will be fixed in the proof of Proposition \ref{lambda_v_complex} below. The function $\sigma(z)$ is holomorphic, and satisfies the functional equation: \begin{equation} \label{functional} \sigma (z+ \ell) = \chi(\ell) \sigma(z) \exp( L(z+\frac{\ell}{2},\ell )) \end{equation} for $\ell \in \Lambda_v$ where: \begin{align*} \chi(\ell) & = \exp( 2\pi i ({}^t \ell' \delta'' - {}^t \ell'' \delta') - \pi i {}^t \ell' \ell'') \, , \\ L(w,z) & = {}^t w \cdot (\eta' z' + \eta'' z'') \, , \end{align*} according to \cite{on}, Lemma 3.3. Here, the vectors $z',z'' \in \mathbb{R}^g$ and $\ell',\ell'' \in \mathbb{Z}^g$ are uniquely determined by the equations $ z = \omega' z' + \omega'' z'' $ and $ \ell = \omega' \ell' + \omega'' \ell''$. It follows that the real-valued function $\|\sigma\|$ given by: \[ \| \sigma \|(z) = |\sigma(z) | \exp(-\frac{1}{2} \mathrm{Re} \, L(z,z)) \] descends to $\mathbb{C}^g/\Lambda_v$. In order to introduce $\sigma_\sharp$ write: \[ \sigma_{ij \cdots k}(z) = \frac{\partial}{\partial z_i} \frac{\partial}{\partial z_j} \cdots \frac{\partial}{\partial z_k} \sigma(z) \] for any tuple $(ij \ldots k)$ of integers between~$1$ and~$g$. The $\sigma_\sharp$-function is then defined to be: \[ \sigma_\sharp(z) = \left\{ \begin{array}{ll} \sigma_{24\cdots g} (z) & \quad \mbox{if $g$ is even} \, , \\ \sigma_{24\cdots g-1} (z) & \quad \mbox{if $g$ is odd} \, . \end{array} \right. \] In particular the function $\sigma_\sharp$ coincides with $\sigma$ if $g=1$. As is proved in \cite{on}, Lemma~6.4 the $\sigma_\sharp$-function satisfies the functional equation (\ref{functional}) for $z$ restricted to $\kappa^{-1}(\iota(X_v))$, where $\iota \colon X_v \to \mathbb{C}^g/\Lambda_v$ is the standard immersion given by $p \mapsto \int_o^p {}^t (\omega_1,\ldots, \omega_g)$. Furthermore, by \cite{on}, Proposition 6.6 the function $\sigma_\sharp$ is non-vanishing for $z$ in $\kappa^{-1}(\iota(X_v-\{o\}))$ and if $U \subset \kappa^{-1}(\iota(X_v))$ is an open neighbourhood of $0$ analytically isomorphic to a small open disc around $o$ on $X_v$ then $\sigma_\sharp$ restricted to $U$ vanishes at $0$ with multiplicity equal to $g$. We put: \[ \|\sigma_\sharp \|(z) = |\sigma_\sharp(z)| \exp(-\frac{1}{2} \mathrm{Re} \, L(z,z)) \] for $z$ in $\kappa^{-1}(\iota(X_v))$. By what we have said above the function $\|\sigma_\sharp\|$ descends to a real-valued continuous function on $X_v$, vanishing only at $o$. 
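To illustrate the definition: $\sigma_\sharp = \sigma$ for $g=1$, $\sigma_\sharp = \sigma_2$ for $g=2$ and $g=3$, and $\sigma_\sharp = \sigma_{24}$ for $g=4$ and $g=5$. We also note that for $g=1$ one has $\delta'=\delta''=\frac{1}{2}$, so that $\theta[\delta]$ is the odd theta function and $\sigma(z)$ agrees, up to normalisation, with the classical Weierstrass sigma function of the lattice $\Lambda_v$.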
The following proposition says that we can express $\lambda_v$ in terms of $\sigma_\sharp$. \begin{prop} \label{lambda_v_complex} Assume that $(X,o)$ is an elliptic or pointed hyperelliptic curve of genus $g$ and let $v$ be an archimedean place of $K$. Let $\iota \colon X_v \to \mathbb{C}^g/\Lambda_v$ be the immersion of $X_v$ in the complex torus $\mathbb{C}^g/\Lambda_v$ as described above, and let $\kappa \colon \mathbb{C}^g \to \mathbb{C}^g/\Lambda_v$ be the projection. Then for all $p \in X(\overline{K_v}) - \{ o \}$ the formula: \[ \lambda_v(p) = -\frac{1}{g} \log \| \sigma_\sharp \|(z) \] holds, if $\kappa(z) = \iota(p)$. \end{prop} \begin{proof} Let $U$ be an open disc on the universal abelian cover $\kappa^{-1}(\iota(X_v))$ of $X_v$, homeomorphic to a disc on $X_v$ not containing $o$. Then we have: \[ -\, \mathrm{d} \mathrm{d}^c\, \log \|\sigma_\sharp\|(z) = - \frac{\partial \overline{\partial}}{\pi i} \frac{1}{2} \mathrm{Re} \, L(z,z) = \widetilde{\nu}|_U \] on $U$ where $\widetilde{\nu}$ on $\mathbb{C}^g$ is a lift of the unique translation-invariant $(1,1)$-form $\nu$ on $\mathbb{C}^g/\Lambda_v$ representing the first Chern class of the line bundle defining the canonical principal polarisation on $\mathbb{C}^g/\Lambda_v$. As $\iota^*\nu = g\mu_v$ and $U$ is allowed to vary we find: \[ -\, \mathrm{d} \mathrm{d}^c\, \log \| \sigma_\sharp \| = g (\mu_v - \delta_o) \] as $(1,1)$-currents on $X_v$. Recall that a germ of $\sigma_\sharp$ around $o$ vanishes at $o$ with multiplicity $g$. By Proposition \ref{equationslambda} we conclude that $\lambda_v = -\frac{1}{g} \log \|\sigma_\sharp\| + c$ for some $c \in \mathbb{R}$. We determine $c$ by looking at the Taylor series expansion of $\sigma_\sharp$ around $o$. It turns out, by \cite{belhyp}, Section~2.1.1, that for a suitable choice of $\gamma$ such that $\gamma^8 = \pi^{-4g}(\det \omega')^4 \Delta(f)$ the homogeneous term of least total degree of the Taylor series expansion of $\sigma(z)$ around the origin of $\mathbb{C}^g$ equals the Hankel determinant: \[ H(z_1,\ldots,z_g)=\left| \begin{array}{cccc} z_1 & z_2 & \cdots & z_{(g+1)/2} \\ z_2 & \adots & \adots & \vdots \\ \vdots & \adots & \adots & z_{g-1} \\ z_{(g+1)/2} & \cdots & z_{g-1} & z_g \end{array} \right| \] if $g$ is odd, and: \[ H(z_1,\ldots,z_g)=\left| \begin{array}{cccc} z_1 & z_2 & \cdots & z_{g/2} \\ z_2 & \adots & \adots & \vdots \\ \vdots & \adots & \adots & z_{g-2} \\ z_{g/2} & \cdots & z_{g-2} & z_{g-1} \end{array} \right| \] if $g$ is even. From \cite{on}, Proposition 6.6 it follows then that, up to a sign, the expansion $\sigma_\sharp(t) = t^g(1+O(t))$ holds in the local coordinate $t=\frac{x^g}{y}$ around $o$ on $X_v$. Since $x=t^{-2}(1+O(t))$ and $\lambda_v(p) - \frac{1}{2} \log|x(p)|_v \to 0$ as $p \to o$ again by Proposition \ref{equationslambda} we find that $c=0$. \end{proof} We expect that analogues of the results in this section hold true for superelliptic curves in general. \section{Proof of Theorem B} \label{theoB} In this section we prove Theorem B. We will make use of the following general diophantine approximation result due to Faltings (cf. \cite{fa}, Theorem II): \begin{thm} \label{faltings} Let $A$ be an abelian variety over $K$ and let $D$ be an ample divisor on $A$. Let $v$ be a place of $K$ and let $\lambda_{D,v}$ be a N\'eron function on $A(K_v)$ with respect to $D$. Let $h$ be a height on $A(\overline{K})$ associated to an ample line bundle on $A$, and let $\kappa \in \mathbb{R}_{>0}$. 
Then there exist only finitely many $K$-rational points $z$ in $A-D$ such that $\lambda_{D,v}(z) > \kappa \cdot h(z)$. \end{thm} Let $X$ be a superelliptic curve of type $(N,m)$ over $K$ of genus $g$ with equation $y^N=f(x)$ and with point at infinity $o$. We recall the divisors $H_n$ on $X$ from the Introduction: let $J=\mathrm{Pic}^0 X$ be the jacobian of $X$, and let $\iota \colon X \to J$ be the embedding given by $p \mapsto [p-o]$. We have a theta divisor $\Theta$ on $J$ represented by the classes $[q_1+\cdots+q_{g-1} - (g-1)o]$ for $q_1,\ldots,q_{g-1}$ running through $X$. We note that $\Theta$ is symmetric since, as noted above (cf. the proof of Theorem \ref{theheights}), the divisor $2(g-1)o$ is a canonical divisor on~$X$. If $n \geq g$ is an integer we define $H_n$ to be the subscheme $\iota^* [n]^* \Theta$ of $X$. This is an effective $K$-divisor on $X$ of degree $gn^2$, as can be seen for example by noting that $H_n$ coincides with the scheme of Weierstrass points of the line bundle $\mathcal{O}_X(o)^{\otimes (n+g-1)}$ \cite{ne}. The points in the support of $H_n$ are called $n$-division points. For $p \in X(K)$ we put $T(p) = \{ n \in \mathbb{Z}_{\geq g} | p \notin H_n \} = \{ n \in \mathbb{Z}_{\geq g} | n[p-o] \notin \Theta \}$. \begin{lem} \label{infiniteset} Let $p \in X(K)$. If $T(p)$ is not empty, then $T(p)$ contains infinitely many elements. \end{lem} \begin{proof} For $p$ such that $[p-o]$ is torsion in $J$ the statement is immediate: assume $n_0[p-o] \notin \Theta$, then if $k$ is the order of $[p-o]$ we can take those $n \geq g$ such that $n \equiv n_0 \bmod k$. Assume therefore that $[p-o]$ is not torsion in $J$. We prove that infinitely many points of the form $n[p-o]$ where $n$ is an integer are not in $\Theta$. This is sufficient by the symmetry of $\Theta$. Let $Z$ be the Zariski closure in $J$ of the subgroup $\{ n[p-o] | n \in \mathbb{Z} \}$ of $J(K)$. Then $Z$ is a closed algebraic subgroup of $J$, by Lemma \ref{isagroup} below. Suppose that only finitely many of the $n[p-o]$ are outside $\Theta$. Then $Z$ is the union of a finite positive number of isolated points and a closed subset of $\Theta$. In particular $Z$ has an isolated point; as $Z$ is an algebraic group, hence homogeneous, it follows that $Z$ has dimension zero, contradicting the assumption that $[p-o]$ is not torsion. \end{proof} \begin{lem} \label{isagroup} Let $G$ be an algebraic group variety over a field $k$ and let $H$ be a subgroup of $G$. Then the Zariski closure of $H$ in $G$ is an algebraic subgroup of $G$. \end{lem} \begin{proof} Let $Z$ be the Zariski closure of $H$ in $G$ and for every $h$ in $H$ denote by $t_hZ$ the left translate of $Z$ under $h$ in $G$. As $t_hZ$ is closed in $G$ and contains $H$ we find that $t_hZ$ contains $Z$ and in fact $t_hZ = Z$. This implies that $H$ is contained in the stabiliser $\mathrm{Stab}(Z)$ of $Z$, which is a closed algebraic subgroup of $G$. We conclude that $Z$ is contained in $\mathrm{Stab}(Z)$; as moreover $Z^{-1}=Z$, inversion being a homeomorphism of $G$ fixing $H$, it follows that $Z$ is itself an algebraic subgroup of $G$. \end{proof} Note that $T(p)$ can be empty for $p \neq o$: for example if $g \geq 2$ and $p \neq o$ is a ramification point of $x \colon X \to \mathbb{P}^1_K$. We have the following theorem. \begin{thm} \label{main} Assume that $T(p)$ is not empty, hence infinite. Let $v$ be a place of~$K$. Then one has: \[ \frac{1}{gn^2} \sum_{q \in H_n \atop x(q) \neq \infty} \log|x(p)-x(q)|_v \longrightarrow \int_{X_v} \log|x - x(p)|_v \, \mu_v \] as $n \to \infty$ over $T(p)$. In the sum, points are counted with multiplicity. \end{thm} Note that Theorem \ref{main} implies Theorem B.
Indeed, if $p\neq o$ is not a Weierstrass point we have $g[p-o] \notin \Theta$ so that $T(p)$ is not empty. Moreover if $n \in T(p)$ then automatically $x(q) \neq x(p)$ for all $q \in H_n$ since $H_n$ is acted upon by the automorphism group of $x \colon X \to \mathbb{P}^1_K$ over $\overline{K}$. The proof of Theorem \ref{main} is based on the existence of an identity: \begin{equation} \label{identity} \log|a(n)|_v + \frac{1}{N} \sum_{q \in H_n \atop x(q) \neq \infty} \log|x(p)-x(q)|_v = -\lambda_{\Theta,v}(n[p-o]) + gn^2 \lambda_v(p) \end{equation} of functions on $X(K_v)-\{o\}$ where $\lambda_{\Theta,v}$ is a suitable N\'eron function with respect to $\Theta$ on $J(K_v)$ and where $a(n)$ is a function with at most polynomial growth in $n$. We obtain Theorem \ref{main} by dividing by $gn^2$ and letting $n$ tend to infinity over $T(p)$, using Faltings's result to see that $ \lim_{n \to \infty} \lambda_{\Theta,v}(n[p-o])/gn^2 = 0 $. The existence of an identity of the type (\ref{identity}) for each $v$ can be seen from general principles; however, we would like to exhibit concrete functions $a$ and $\lambda_{\Theta,v}$ such that (\ref{identity}) holds, in order to achieve uniformity in $v$, cf. Proposition \ref{identityprop} below. The constructions are based on a paper \cite{belrat} by Buchstaber, Enolskii and Leykin. We start by constructing a certain polynomial $\sigma_{N,m} \in \mathbb{Q}[z_1,\ldots,z_g]$, where $z_1,\ldots,z_g$ are indeterminates, following \cite{belrat}, Sections~1--4. Let $W_{N,m}$ be the set of positive integers of the form $w = -\alpha N + \beta m$, where $\alpha >0$, and $N > \beta > 0$. Then $W_{N,m}$ is the Weierstrass gap sequence at $o$ on $X$, consisting of $g=\frac{1}{2}(N-1)(m-1)$ elements. Write $W_{N,m} = \{w_1,\ldots,w_g \}$ where $1=w_1 < \cdots < w_g = 2g-1$, and put $\pi_k = w_{g-k+1} + k-g$ for $k=1,\ldots,g$. Then $\pi=(\pi_k)$ is a non-increasing $g$-tuple of positive integers, the ``partition'' associated to $(N,m)$. According to \cite{belrat}, Lemma~2.4 the partition $\pi$ is self-conjugate. Let $e_1,\ldots,e_g$ be the elementary symmetric functions in the variables $x_1,\ldots,x_g$. We call, as is customary: \[ s_\pi = \det( e_{\pi_i -i+j})_{1 \leq i,j \leq g} \] in $\mathbb{Q}[x_1,\ldots,x_g]$ the Schur polynomial associated to $\pi$. Let $p_r = \frac{1}{r} \sum_{i=1}^g x_i^r$ for $r=1,2,\ldots$ be the Newton polynomials in the variables $x_1,\ldots,x_g$. According to \cite{belrat}, Theorem~4.1 there exists then a unique polynomial $\sigma$ in $\mathbb{Q}[z_1,\ldots,z_g]$ such that $s_\pi=\sigma(p_{w_1},\ldots,p_{w_g})$. We call this polynomial $\sigma_{N,m}$. If one puts $\mathrm{wt}(z_i)=w_i$ for $i=1,\ldots,g$ then one sees that $\sigma_{N,m}$ is homogeneous, of total weight: \[ \mathrm{length}(\pi) = \sum_{k=1}^g (w_k-k+1) = w + g \, , \] where $w=\sum_{k=1}^g (w_k-k)$ is the Weierstrass weight of $o$ on $X$. One can compute that $w+g=(N^2-1)(m^2-1)/24$. For instance, for $(N,m)=(2,3)$ one finds $W_{2,3}=\{1\}$, $\pi=(1)$ and $s_\pi=e_1=p_1$, so that $\sigma_{2,3}=z_1$ and $w+g=1$. Let $(\alpha_0,\beta_0)$ be the unique pair of integers with $\alpha_0>0$ and $N > \beta_0 >0$ such that $-\alpha_0N+\beta_0 m=w_1=1$. We define $t$ to be the element $x^{\alpha_0} y^{-\beta_0}$ in the function field of $X$. For each pair of integers $(\alpha,\beta)$ with $\alpha >0$ and $N > \beta >0$ such that $-\alpha N + \beta m >0$ we define $\omega_{\alpha,\beta}$ to be the rational differential form: \begin{equation} \label{theomegas} \omega_{\alpha,\beta} = -\frac{x^{\alpha -1}}{N y^\beta} \mathrm{d} x = -\frac{x^{\alpha-1} y^{N-1-\beta}}{f'(x)} \mathrm{d} y \, .
\end{equation} Note that we have seen the special case $\omega_{1,N-1}$ before in Section~\ref{theoA}. \begin{lem} \label{tandomega} The element $t$ is a local coordinate at $o$ on $X$. Written in the local coordinate $t$, the Laurent series expansions: \[ x = t^{-N}(1+O(t)) \, , \quad y = t^{-m}(1+O(t)) \] hold. Let $w_{\alpha,\beta}=-\alpha N+\beta m$. Then $\omega_{\alpha,\beta}$ can be written as: \[ \omega_{\alpha,\beta} = t^{w_{\alpha,\beta}-1} (1+O(t)) \, \mathrm{d} t \] in the local coordinate $t$. The $\omega_{\alpha,\beta}$ give rise to a basis of regular differential forms on $X$. \end{lem} \begin{proof} As $x$ has a pole of order $N$ at $o$ and $y$ has a pole of order $m$ at $o$, it follows that $t$ is indeed a local coordinate at $o$. The Laurent series expansions: \[ x = t^{-N}(1+O(t)) \, , \quad y = t^{-m}(1+O(t)) \] follow using the relation $y^N=f(x)=x^m(1+O(t))$. The formula: \[ \omega_{\alpha,\beta} = t^{w_{\alpha,\beta}-1} (1+O(t)) \, \mathrm{d} t \] follows then directly from the definition (\ref{theomegas}) of $\omega_{\alpha,\beta}$. Since the $w_{\alpha,\beta}$ are positive, we see that the $\omega_{\alpha,\beta}$ are regular at $o$. Further it is clear from (\ref{theomegas}) that the $\omega_{\alpha,\beta}$ are also regular away from $o$. Finally, the $\omega_{\alpha,\beta}$ are linearly independent since the $w_{\alpha,\beta}$ are distinct. As the set of $w_{\alpha,\beta}$ consists of $g$ elements (as we have noted above, they form the Weierstrass gap sequence at $o$), the $\omega_{\alpha,\beta}$ give rise to a basis of regular differential forms on $X$. \end{proof} In a sense to be explained below, the $\sigma_{N,m}$ are suitable ``degenerations'' of local equations of theta divisors on jacobians of superelliptic curves of type $(N,m)$. Consider the $g$-fold self-product $X^g$ of $X$ with itself. Let $t^{(j)}$ be the local coordinate~$t$ around $o$ on the $j$-th factor of $X^g$. Let $\omega_i$ for $i=1,\ldots,g$ be the set of $\omega_{\alpha,\beta}$ as in Lemma \ref{tandomega}, ordered such that $\omega_i$ vanishes with multiplicity $w_i-1$ at $o$. For each $i,j$ with $1\leq i,j \leq g$ let $\int_o \omega_i^{(j)}$ be the power series in $K[[t^{(j)}]]$ obtained by formally integrating $\omega_i$, seen as an element of $K[[t^{(j)}]] \, \mathrm{d}t^{(j)}$, taking constant term equal to zero. As $\omega_i^{(j)} = (t^{(j)})^{w_i-1}(1+O(t^{(j)})) \, \mathrm{d} t^{(j)}$ by Lemma \ref{tandomega} we find $\int_o \omega_i^{(j)} = \frac{1}{w_i} (t^{(j)})^{w_i}(1+O(t^{(j)}))$ in $K[[t^{(j)}]]$. For each $i=1,\ldots,g$ we then let: \[ z_i = \sum_{j=1}^g \int_o \omega_i^{(j)} \] in $K[[t^{(1)},\ldots,t^{(g)}]]$. Note that the latter can naturally be identified with $\widehat{\mathcal{O}}_{X^g,o}$, the completion of the local ring of $X^g$ at $(o,o,\ldots,o)$. Via the Abel-Jacobi map $X^g \to J$ we obtain $\widehat{\mathcal{O}}_{J,0}$, the completion of the local ring of $J$ at $0$, naturally as the subring $K[[z_1,\ldots,z_g]]$ of $K[[t^{(1)},\ldots,t^{(g)}]]$. We have the following result. \begin{prop} \label{localeqn} (Buchstaber, Enolskii, Leykin \cite{belrat} \cite{belsig}) Assign the weight $w_i$ to the variable $z_i$ for $i=1,\ldots,g$. Then up to a scalar in $K^*$, the lowest weight homogeneous part of a local equation for $\Theta$ in $\widehat{\mathcal{O}}_{J,0}$, written in terms of $z_1,\ldots,z_g$, equals $\sigma_{N,m}$. \end{prop} This proposition allows us to make two (implicit) definitions.
First, it follows that for integers $n \geq g$ the function $\iota^*[n]^*\sigma_{N,m}(z_1,\ldots,z_g)$ is a local equation for $H_n=\iota^*[n]^*\Theta$ in $\widehat{\mathcal{O}}_{X,o}$, the completion of the local ring of $X$ at $o$. We define $a(u) \in \mathbb{Q}[u]$ to be the unique polynomial such that: \begin{equation} \label{defa} \sigma_{N,m}(\ldots, \frac{n}{w_i} t^{w_i}(1+O(t)), \ldots)=a(n) \cdot t^{w+g}(1+O(t)) \end{equation} for all $n \in \mathbb{Z}$. As $\sigma_{N,m}$ is homogeneous of total weight $w+g$ in the $z_i$ if each $z_i$ is given the weight $w_i$, cf. our remarks above, the polynomial $a(u)$ is well-defined. As $[n]^*z_i \equiv nz_i \bmod (z_1,\ldots,z_g)^2$ and $\iota^* z_i = \frac{1}{w_i}t^{w_i}(1+O(t))$ for $i=1,\ldots,g$, we have that: \begin{equation} \label{pullbacksigma} \iota^*[n]^*\sigma_{N,m} = a(n) \cdot t^{w+g}(1+O(t)) \end{equation} in $\widehat{\mathcal{O}}_{X,o}$ if $\sigma_{N,m}=\sigma_{N,m}(z_1,\ldots,z_g)$ is seen as a function in $\widehat{\mathcal{O}}_{J,0}$. In particular the multiplicity of $o$ in $H_n$ is constant, equal to $w+g$, for all $n \geq g$. Second, let $v$ be a place of $K$. Then we let $\lambda_{\Theta,v}$ be the unique N\'eron function with respect to $\Theta$ on $J(K_v)$ such that: \begin{equation} \label{deflambdaTheta} \lambda_{\Theta,v}(u) + \log|\sigma_{N,m}(z_1(u),\ldots,z_g(u))|_v \to 0 \end{equation} as $u\to 0$ in $(J-\Theta)(K_v)$. It follows from Proposition \ref{localeqn} that this is well-defined too. We claim that identity (\ref{identity}) holds with the above (implicitly) defined polynomial $a$ and N\'eron function $\lambda_{\Theta,v}$. \begin{prop} \label{identityprop} For all integers $n$ with $n \geq g$, for all places $v$ of $K$, and for all $p \in X(K_v)$ with $p \notin H_n$, the equality: \[ \log|a(n)|_v + \frac{1}{N} \sum_{q \in H_n \atop x(q) \neq \infty} \log|x(p)-x(q)|_v = -\lambda_{\Theta,v}(n[p-o]) + gn^2 \lambda_v(p) \] holds, where $\lambda_{\Theta,v}$ is the N\'eron function defined in (\ref{deflambdaTheta}) and where $a$ is the polynomial in $\mathbb{Q}[u]$ defined in (\ref{defa}). In the sum, points are counted with their multiplicity. \end{prop} \begin{proof} Write $\ell_{n,v}(p)$ as a shorthand for $\lambda_{\Theta,v}(n[p-o])$. One can view $L=\mathcal{O}_J(\Theta)$ as an adelic line bundle on $J$ by putting $\|1\|_{L,v}(z) = \exp(-\lambda_{\Theta,v}(z))$ where $1$ is the canonical global section of $\mathcal{O}_J(\Theta)$. By pullback one obtains a structure of adelic line bundle on each $L_n=\mathcal{O}_X(H_n) =\iota^*[n]^*\mathcal{O}_J(\Theta)$ given by $\|1\|_{L_n,v}(p) = \exp(-\ell_{n,v}(p))$ where now $1$ is the canonical global section of $\mathcal{O}_X(H_n)$. By \cite{zh}, (4.7) the resulting adelic metric is admissible; in particular $\ell_{n,v}(p)$ equals the admissible pairing $(p,H_n)_a$ up to an additive constant. As a result $\ell_{n,v}$ extends to $\mathrm{D}^0(X_v)$, the space of $(0,0)$-currents on $X_v$. As the other terms in the equality to be proven do so as well, we try to prove the equality as an identity in $\mathrm{D}^0(X_v)$. By Proposition \ref{properties}(iii) we are done once we prove that both sides of the claimed equality have the same image under $\mathrm{d} \mathrm{d}^c$, and the difference of both sides tends to zero as $p$ tends to $o$ over $X(K_v)$, avoiding $H_n$.
From the observation that $\|1\|_{L_n,v}(p) = \exp(-\ell_{n,v}(p))$ defines an admissible metric on $L_n$ we obtain first of all that: \[ \, \mathrm{d} \mathrm{d}^c\, \ell_{n,v} = (\deg H_n) \mu_v - \delta_{H_n} = gn^2 \mu_v - \delta_{H_n} \, . \] Next we recall from Proposition \ref{equationslambda} that $ \, \mathrm{d} \mathrm{d}^c\, \lambda_v = \mu_v - \delta_o $. As by the Poincar\'e-Lelong equation Proposition \ref{properties}(i) we have: \[ \, \mathrm{d} \mathrm{d}^c\, \frac{1}{N} \sum_{q \in H_n \atop x(q) \neq \infty} \log|x(p)-x(q)|_v = \delta_{H_n} - gn^2 \delta_o \, ,\] the first step follows. To see the second step, Proposition \ref{equationslambda} tells us that: \[ \lambda_v(p) - \frac{1}{N} \log |x(p)|_v \to 0 \quad \textrm{as} \quad p \to o \, , \] that is, by Lemma \ref{tandomega}: \[ \lambda_v(p) + \log |t(p)|_v \to 0 \quad \textrm{as} \quad p \to o \, . \] Next, as by definition: \[ \lambda_{\Theta,v}(u) + \log|\sigma_{N,m}(z_1(u),\ldots,z_g(u))|_v \to 0 \quad \textrm{as} \quad u \to 0 \] in $(J-\Theta)(K_v)$ we have, upon pulling back along $[n]$ and $\iota$ and using (\ref{defa}) that: \[ \log|a(n)|_v + \ell_{n,v}(p) + (w+g)\log|t(p)|_v \to 0 \quad \textrm{as} \quad p \to o \] outside $H_n$, by (\ref{pullbacksigma}). Finally by applying once again Lemma \ref{tandomega} we have: \[ \frac{1}{N} \sum_{q \in H_n \atop x(q) \neq \infty} \log|x(p)-x(q)|_v + (gn^2-w-g) \log|t(p)|_v \to 0 \quad \textrm{as} \quad p \to o \] outside $H_n$. The second step follows by combining these equalities. \end{proof} We can now prove Theorem \ref{main}. \begin{proof}[Proof of Theorem \ref{main}] Let $p \in X(K)$ be a point such that $T(p)$ is infinite, and let $v$ be a place of $K$. By Proposition \ref{identityprop} we are done once we prove that $\log|a(n)|_v/n^2 \to 0$ as $n \to \infty$ and $\lambda_{\Theta,v}(n[p-o])/n^2 \to 0$ as $n\to \infty$ over $T(p)$. The first statement is immediate since $a(n)$ is a polynomial in $n$. As to the second statement, note that it follows immediately if $[p-o]$ is torsion since then the set of values $\lambda_{\Theta,v}(n[p-o])$ as $n$ ranges over $T(p)$ is bounded. Assume therefore that $[p-o]$ is not torsion. Then the $n[p-o]$ with $n$ running through $T(p)$ form an infinite set of $K$-rational points of $J-\Theta$. Since: \[ \frac{\lambda_{\Theta,v}(n[p-o])}{n^2} = h_J([p-o]) \cdot \frac{\lambda_{\Theta,v}(n[p-o])}{h_J(n[p-o])} \] with $h_J([p-o])>0$ Faltings's Theorem \ref{faltings} can be applied to give: \[ \limsup_{n \to \infty \atop n \in T(p)} \frac{\lambda_{\Theta,v}(n[p-o])}{n^2} \leq 0 \, . \] On the other hand $\lambda_{\Theta,v}$ is bounded from below so that: \[ \liminf_{n \to \infty \atop n \in T(p)} \frac{\lambda_{\Theta,v}(n[p-o])}{n^2} \geq 0 \, . \] Theorem \ref{main} follows. \end{proof} As we remarked above the divisor $H_n$ equals the divisor of Weierstrass points of the line bundle $\mathcal{O}_X(o)^{n+g-1}$. From \cite{bu}, 3.2.2 we obtain therefore that the average height of $n$-division points remains bounded, as $n \to \infty$. We refer to \cite{sil}, Theorem~1.1 for a similar (but weaker) result in the context of hyperelliptic curves. From Proposition~\ref{identityprop} we may reobtain this boundedness of the average height, in the following form. \begin{prop} \label{averageheight} Let $h_{X,x}$ be the canonical height on $X(\overline{K})$ defined in Section \ref{theoA}. 
Then the estimates: \[ \limsup_{n \to \infty} \frac{1}{gn^2} \sum_{q \in H_n} h_{X,x}(q) \leq \frac{1}{[K:\mathbb{Q}]} \sum_v n_v \int_{X_v} \lambda_v \, \mu_v < \infty\] hold. Here $v$ runs over all places of $K$, and points in $H_n$ are counted with multiplicity. \end{prop} \begin{proof} We take the identity from Proposition \ref{identityprop}, integrate both sides against $\mu_v$, and divide by $gn^2$. This gives: \[ \frac{1}{gn^2} \log|a(n)|_v + \frac{1}{gn^2} \sum_{q \in H_n \atop q \neq o} \lambda_v(q) = - \frac{1}{gn^2} \int_{X_v} \lambda_{\Theta,v}(n[p-o]) \mu_v + \int_{X_v} \lambda_v \, \mu_v \] for each $n \geq g$ and for each place $v$ of $K$. As $a(n)$ is a polynomial in $n$ we have $\log|a(n)|_v/n^2 \to 0$ as $n \to \infty$ and as $\lambda_{\Theta,v}$ is bounded from below we have $\liminf_{n \to \infty} \lambda_{\Theta,v}(n[p-o])/n^2 \geq 0$. By Fatou's Lemma we find: \begin{align*} \limsup_{n \to \infty} - \frac{1}{gn^2} \int_{X_v} \lambda_{\Theta,v}(n[p-o]) \mu_v & = - \liminf_{n \to \infty} \frac{1}{gn^2} \int_{X_v} \lambda_{\Theta,v}(n[p-o]) \mu_v \\ & \leq - \int_{X_v} \liminf_{n \to \infty} \frac{1}{gn^2} \lambda_{\Theta,v}(n[p-o]) \mu_v \leq 0 \, . \end{align*} Hence we have: \[ \limsup_{n \to \infty} \frac{1}{gn^2} \sum_{q \in H_n \atop q \neq o} \lambda_v(q) \leq \int_{X_v} \lambda_v \, \mu_v \, . \] Summing over the places $v$ of $K$ we obtain: \begin{align*} \limsup_{n \to \infty} \frac{1}{gn^2} \sum_{q \in H_n \atop q \neq o} \sum_v n_v \lambda_v(q) & \leq \sum_v \limsup_{n \to \infty} \frac{1}{gn^2} n_v \sum_{q \in H_n \atop q \neq o} \lambda_v(q) \\ & \leq \sum_v n_v \int_{X_v} \lambda_v \, \mu_v \, , \end{align*} which is the first estimate. The equality $\lambda_v(p) = (p,o)_a + \int_{X_v} \lambda_v \, \mu_v$ for $p \in X(K_v)$ finally shows that $\int_{X_v} \lambda_v \, \mu_v$ vanishes for almost all $v$. \end{proof} As is explained in \cite{bu}, 3.2.4 the boundedness of the average height of $n$-division points implies that the degrees of the fields generated by the $H_n$ are not bounded as $n \to \infty$. The boundedness of the average height of $n$-division points also implies that the affine logarithmic height of the ``division polynomial'' $\prod_{q \in H_n, q \neq o} (x-x(q))$ in $K[x]$ is $O(n^2)$ as $n \to \infty$. This generalises a known result in the context of elliptic curves, cf. \cite{la}, Theorem 3.1 and Theorem 3.2. \section{The case of a Weierstrass point} \label{cantor} In this section we prove that the asymptotic formula of Theorem B also holds when $p \neq o$ is a Weierstrass point on a hyperelliptic curve. Write $x(p)=\alpha$. By Proposition \ref{atalpha} we have: \[ \int_{X_v} \log|x-\alpha|_v \, \mu_v = \frac{1}{2g} \log |f'(\alpha)|_v \, , \] so we are done once we prove that: \[ \frac{1}{n^2} \sum_{q \in H_n \atop x(q) \neq \alpha, \infty} \log|x(q)-\alpha|_v \longrightarrow \frac{1}{2} \log |f'(\alpha)|_v \] as $n \to \infty$. We will prove in fact a somewhat stronger statement. Let $g \geq 1$ be an integer and let $k$ be a field of characteristic $\ell$ where either $\ell=0$ or $\ell \geq 2g+1$. In particular $\ell \neq 2$. Let: \[ T(k,g)=\{ n \in \mathbb{Z}_{\geq g} | \ell \nmid (n-g+1)\cdots(n+g-1) \} \, ;\] note that this is an infinite set. Let $f(x) \in k[x]$ be a monic and separable polynomial of degree $2g+1$, and let $(X,o)$ be the elliptic or pointed hyperelliptic curve of genus $g$ over $k$ given by $f$. For each integer $n \in T(k,g)$ we have effective $k$-divisors $H_n$ on $X$ of degree $gn^2$, as before. 
They are invariant under the hyperelliptic involution of $(X,o)$ and split over a separable algebraic closure $k^s$ of $k$. It will be convenient to view the $H_n$ as multi-sets of $k^s$-points of $X$ of cardinality $gn^2$. Let $\alpha \in k$ be a root of $f$. \begin{thm} \label{mainatalpha} Assume that $k$ is endowed with an absolute value $|\cdot|$. Then one has: \begin{equation} \label{onehalflogfprime} \frac{1}{n^2} \sum_{q \in H_n \atop x(q) \neq \alpha, \infty} \log |x(q)-\alpha| \longrightarrow \frac{1}{2} \log|f'(\alpha)| \end{equation} for $n$ in $T(k,g)$ tending to infinity. The points in $H_n$ are counted with multiplicity. \end{thm} Note that the left hand side of (\ref{onehalflogfprime}) is well-defined. Our proof relies on a rather intricate determinantal formula for the division polynomials related to $H_n$ due to D. Cantor \cite{ca}, which we introduce first. For each $n \geq g$ we put: \[ H_n^* = \left\{ \begin{array}{ll} H_n - H_g & n \equiv g \bmod 2 \, , \\ H_n - H_{g+1} & n \equiv g+1 \bmod 2 \, . \end{array} \right. \] It can be shown that these $H_n^*$ are effective $k$-divisors on $X$ with support away from~$o$. Note that: \[ \deg H_n^* = \left\{ \begin{array}{ll} g(n^2-g^2) & n \equiv g \bmod 2 \, , \\ g(n^2-(g+1)^2) & n \equiv g+1 \bmod 2 \, . \end{array} \right. \] Also note that: \[ H_g = \frac{g(g-1)}{2} R + g o \, , \quad H_{g+1} = \frac{g(g+1)}{2} R \, , \] where $R$ denotes the reduced divisor of degree $2g+2$ on $X$ consisting of the hyperelliptic ramification points of $X$. Now let $\mathcal{R}$ be the commutative ring $\mathbb{Z}[a_1,\ldots,a_{2g+1}][1/2]$ where $a_1,\ldots,a_{2g+1}$ are indeterminates. Let $F(x)$ be the polynomial $x^{2g+1}+a_{1}x^{2g} + \cdots+a_{2g} x + a_{2g+1}$ in $\mathcal{R}[x]$, and let $\Delta \in \mathcal{R}$ be the discriminant of $F$. Let $y$ be a variable satisfying $y^2 =F(x)$, and let $E_1(z)$ be the polynomial $E_1(z)=(F(x-z)-y^2)/z$ in $\mathcal{R}[x,z]$. Next put $S(z)=(-1)^{g+1} y \sqrt{1+zE_1(z)/y^2}$ where $\sqrt{1+zE_1(z)/y^2}$ is the power series in $\mathcal{R}[x,\frac{1}{y}][[z]]$ obtained by binomial expansion of the square root of $1+zE_1(z)/y^2$. Note that $S(z)^2 = F(x-z)$ and that $ S(z) = \sum_{j=0}^\infty P_j(x)(2y)^{1-2j} z^j $ for suitable $P_j(x) \in \mathcal{R}[x]$ of degree $2jg$ (cf. \cite{ca}, Section~8). For each integer $n \geq g$ we define a polynomial $\psi_n$ in $\mathcal{R}[x]$ by: \begin{equation} \label{defpsi} \psi_n = \left\{ \begin{array}{cl} \left| \begin{array}{cccc} P_{g+1} & P_{g+2} & \cdots & P_{(n+g)/2} \\ P_{g+2} & \adots & \adots & \vdots \\ \vdots & \adots & \adots & P_{n-2} \\ P_{(n+g)/2} & \cdots & P_{n-2} & P_{n-1} \end{array} \right| & n \equiv g \bmod 2 \, , \\ \left| \begin{array}{cccc} P_{g+2} & P_{g+3} & \cdots & P_{(n+g+1)/2} \\ P_{g+3} & \adots & \adots & \vdots \\ \vdots & \adots & \adots & P_{n-2} \\ P_{(n+g+1)/2} & \cdots & P_{n-2} & P_{n-1} \end{array} \right| & n \equiv g+1 \bmod 2 \, . \end{array} \right. \end{equation} Here, for $n=g$ resp. $n=g+1$ we understand that $\psi_n$ is the unit element. We have: \[ \deg \psi_n = \left\{ \begin{array}{ll} g(n^2-g^2)/2 & n \equiv g \bmod 2 \, , \\ g(n^2-(g+1)^2)/2 & n \equiv g+1 \bmod 2 \, . \end{array} \right. \] The result of D. Cantor is that the $H_n^*$ are given by the $\psi_n$. Let $b(n)$ in $\mathcal{R}$ be the leading coefficient of $\psi_n$. \begin{thm} (D. Cantor \cite{ca}) The function $b \colon \mathbb{Z}_{\geq g} \to \mathcal{R}$ has positive integral values, and is represented by a numerical polynomial.
The function $b$ satisfies $\ell \nmid (n-g+1)\cdots(n+g-1) \Rightarrow \ell \nmid b(n)$ for all prime numbers $\ell$ and all integers $n \geq g$. Furthermore $\psi_n$ is a universal $n$-division polynomial, in the following sense: if $\, \bar{} \, \colon \mathcal{R} \to k$ is a ring homomorphism from $\mathcal{R}$ to a field $k$ such that $\overline{\Delta}$ is non-zero in $k$, then for the pointed curve $(X,o)$ over $k$ given by $\overline{F}$ one has $H_n^* = Z(\psi_n)$, where $Z(\psi_n)$ is the zero divisor of $\psi_n$ on $X$, for each $n \in T(k,g)$. \end{thm} We deduce Theorem \ref{mainatalpha} from D. Cantor's theorem by evaluating the determinants on the right hand side of identity (\ref{defpsi}) at $\alpha$. \begin{proof}[Proof of Theorem \ref{mainatalpha}] Let $\alpha$ be a root of $F$ in an algebraic closure $\overline{Q(\mathcal{R})}$ of the fraction field $Q(\mathcal{R})$ of $\mathcal{R}$. Let $c_m = \frac{1}{2m+1} {2m+1 \choose m}$ for $m \geq 0$ be the $m$-th Catalan number. We claim: \begin{lem} The identity: \[ P_j(\alpha) = (-1)^g \cdot c_{j-1} \cdot F'(\alpha)^j \] holds in $\mathcal{R}[\alpha]$ for all integers $j \geq 1$. \end{lem} \begin{proof} We recall the relations: \[ S(z)= \sum_{j=0}^\infty \frac{P_j(x)}{(2y)^{2j-1}}z^j \, , \quad S(z)^2 = F(x-z) \, . \] We claim that: \begin{equation} \label{byinduction} \frac{1}{j!} \frac{ \mathrm{d}^j S(z)}{\mathrm{d} \, z^j} = \frac{R_j(x,z)}{(2S(z))^{2j-1}} \end{equation} for some $R_j(x,z) \in Q(\mathcal{R})[x,z]$ with $R_j(\alpha,0)= -c_{j-1} \cdot F'(\alpha)^j $, for all $j \geq 1$. This gives what we want since $S(0)=(-1)^{g+1} y$, so that $P_j(x)=(-1)^{g+1}R_j(x,0)$. To prove the claim we argue by induction on $j$. We have $\frac{\mathrm{d} S}{\mathrm{d}z} = - \frac{F'(x-z)}{2S(z)}$ which settles the case $j=1$ with $R_1(x,z)=-F'(x-z)$. Now assume that (\ref{byinduction}) holds with $R_j(x,z) \in Q(\mathcal{R})[x,z]$, and with $R_j(\alpha,0)=-c_{j-1} \cdot F'(\alpha)^j$ for a certain $j \geq 1$. Then a small calculation yields: \[ \frac{1}{(j+1)!} \frac{\mathrm{d}^{j+1} S}{\mathrm{d}z^{j+1}} = \frac{1}{j+1} \frac{\mathrm{d}}{\mathrm{d} z} \frac{R_j(x,z)}{(2S(z))^{2j-1}} = \frac{R_{j+1}(x,z)}{(2S(z))^{2j+1}} \] with: \[ R_{j+1}(x,z) = \frac{2}{j+1} \left( 2 \left( \frac{\mathrm{d}}{\mathrm{d} z} R_j(x,z) \right) F(x-z) + (2j-1)R_j(x,z) F'(x-z) \right) \, . \] We find $R_{j+1}(x,z) \in Q(\mathcal{R})[x,z]$ and, using that $F(\alpha)=0$: \begin{align*} R_{j+1}(\alpha,0) & = \frac{2(2j-1)}{j+1} R_j(\alpha,0) \cdot F'(\alpha) \\ & = -\frac{2(2j-1)}{j+1} c_{j-1} \cdot F'(\alpha)^{j+1} \\ & = -c_j \cdot F'(\alpha)^{j+1} \end{align*} by the induction hypothesis and the recursion $c_j = \frac{2(2j-1)}{j+1} c_{j-1}$ for the Catalan numbers. This completes the induction step. \end{proof} We can now specialise $(\ref{defpsi})$ at $\alpha$. This yields the equality: \begin{equation} \label{psi_atalpha} \psi_n(\alpha) = b_0(n) \cdot F'(\alpha)^{d^*(n)} \end{equation} in $\mathcal{R}[\alpha]$ where: \begin{equation} \label{b_zero_n} b_0(n) = \left\{ \begin{array}{cl} \left| \begin{array}{cccc} c_g & c_{g+1} & \cdots & c_{(n+g)/2-1} \\ c_{g+1} & \adots & \adots & \vdots \\ \vdots & \adots & \adots & c_{n-3} \\ c_{(n+g)/2-1} & \cdots & c_{n-3} & c_{n-2} \end{array} \right| & n \equiv g \bmod 2 \, , \\ \left| \begin{array}{cccc} c_{g+1} & c_{g+2} & \cdots & c_{(n+g-1)/2} \\ c_{g+2} & \adots & \adots & \vdots \\ \vdots & \adots & \adots & c_{n-3} \\ c_{(n+g-1)/2} & \cdots & c_{n-3} & c_{n-2} \end{array} \right| & n \equiv g+1 \bmod 2 \, , \end{array} \right.
\end{equation} at least up to a sign, and where $d^*(n) \in \mathbb{Z}$ is given by: \[ 2d^*(n) = \left\{ \begin{array}{ll} (n^2-g^2)/2 & n \equiv g \bmod 2 \, , \\ (n^2-(g+1)^2)/2 & n \equiv g+1 \bmod 2 \, . \end{array} \right. \] To obtain Theorem \ref{mainatalpha} we need that $b_0(n)$ is non-vanishing in $k$ if $n \in T(k,g)$, and that $b_0(n)$ is not ``too large'' in $\mathbb{Z}$. To achieve this, we can apply a general result due to Desainte-Catherine and Viennot (cf. \cite{dcv}, Section~6), stating that for arbitrary integers $l,m \geq 1$ we have the Hankel determinant: \begin{equation} \label{desainteviennot} \left| \begin{array}{cccc} c_l & c_{l+1} & \cdots & c_{l+m-1} \\ c_{l+1} & \adots & \adots & \vdots \\ \vdots & \adots & \adots & c_{l+2m-3} \\ c_{l+m-1} & \cdots & c_{l+2m-3} & c_{l+2m-2} \end{array} \right| = \prod_{1 \leq i \leq j \leq l-1} \frac{i+j +2m}{i+j} \, . \end{equation} Applying this to (\ref{b_zero_n}) we infer that $\ell \nmid (n-g+1)\cdots (n+g-1) \Rightarrow \ell \nmid b_0(n)$ holds for every prime number $\ell$ and every integer $n$ and that $b_0(n)$ is represented by a numerical polynomial. In particular $b_0(n)$ is non-vanishing in $k$ if $n \in T(k,g)$, and $b_0(n)$ is not ``too large'' in $\mathbb{Z}$. Let us now place ourselves in the situation of Theorem \ref{mainatalpha}. From Cantor's theorem and equation (\ref{psi_atalpha}) we obtain the identity: \begin{equation} \label{mainformula} b(n)^2 \prod_{q \in H_n^*} (x(q)-\alpha) = \psi_n(\alpha)^2 = b_0(n)^2 \cdot f'(\alpha)^{2d^*(n)} \end{equation} in $k$. Since $f'(\alpha)$ and $b_0(n)$ are both non-zero in $k$ we deduce that $\prod_{q \in H_n^*} (x(q)-\alpha)$ is non-zero in $k$ as well (we knew already that $b(n)$ is non-zero in $k$). In particular $H_n^*$ has support disjoint from the hyperelliptic ramification point corresponding to~$\alpha$. By multiplying left and right hand side by: \[ \prod_{q \in H_n-H_n^* \atop x(q) \neq \alpha,\infty} (x(q)-\alpha) = \left\{ \begin{array}{ll} f'(\alpha)^{g(g-1)/2} & n \equiv g \bmod 2 \, , \\ f'(\alpha)^{g(g+1)/2} & n \equiv g+1 \bmod 2 \end{array} \right. \] we obtain: \[ b(n)^2 \prod_{q \in H_n \atop x(q) \neq \alpha, \infty} (x(q)-\alpha) = b_0(n)^2 \cdot f'(\alpha)^{2d(n)} \, , \] where: \[ 2d(n) = \left\{ \begin{array}{ll} (n^2-g)/2 & n \equiv g \bmod 2 \, , \\ (n^2-g-1)/2 & n \equiv g+1 \bmod 2 \, . \end{array} \right. \] Taking absolute values and then logarithms (which we can do since both sides are non-zero in $k$) we obtain: \[ \frac{2}{n^2} \log |b(n)| + \frac{1}{n^2} \sum_{q \in H_n \atop x(q) \neq \alpha, \infty} \log |x(q)-\alpha| = \frac{2}{n^2} \log|b_0(n)| + \frac{2d(n)}{n^2} \log |f'(\alpha)| \, . \] As both $b(n)$ and $b_0(n)$ are represented by polynomials in $n$, the terms $ \frac{2}{n^2} \log |b(n)|$ and $\frac{2}{n^2} \log |b_0(n)|$ tend to zero as $n \to \infty$ over $T(k,g)$. Finally we clearly have $\frac{2d(n)}{n^2} \to \frac{1}{2}$ as $n \to \infty$. We end up with: \[ \frac{1}{n^2} \sum_{q \in H_n \atop x(q) \neq \alpha, \infty} \log |x(q)-\alpha| \longrightarrow \frac{1}{2} \log |f'(\alpha)| \] as $n \to \infty$ over $T(k,g)$, as required. \end{proof} \begin{remark} It is possible to give closed expressions for $b(n)$ and $b_0(n)$. First of all for $n \geq g$ put: \[ \sigma_{n,g} = \left\{ \begin{array}{ll} {n+1 \choose 3} {n+3 \choose 7} \cdots {n+g-1 \choose 2g-1} & \quad \mbox{if $g$ is even} \, , \\ { n \choose 1}{n+2 \choose 5} \cdots {n+g-1 \choose 2g-1 } & \quad \mbox{if $g$ is odd} \, . \end{array} \right. 
\] Then by \cite{ca}, Theorem 4.1 we have: \[ b(n) = \left\{ \begin{array}{ll} \frac{\sigma_{n,g}}{\sigma_{g,g}} & n \equiv g \bmod 2 \, , \\ \frac{\sigma_{n,g}}{\sigma_{g,g}} 2^{-g} & n \equiv g+1 \bmod 2 \, . \end{array} \right. \] Next, from (\ref{desainteviennot}) we obtain, by some rewriting: \[ b_0(n) = \left\{ \begin{array}{ll} \frac{\sigma_{n,g}}{\sigma_{g,g}} \frac{ 1 \cdot 3 \cdots (2g-1)}{(n-g+1)(n-g+3) \cdots (n+g-1)} & n \equiv g \bmod 2 \, , \\ \frac{\sigma_{n,g}}{\sigma_{g,g}} \cdot 2^{-g} & n \equiv g+1 \bmod 2 \, . \end{array} \right. \] Note that $b(n)=b_0(n)$ if $n \equiv g+1 \bmod 2$. It would be interesting to investigate whether $H_n$ is a non-zero divisor on $X$ for $n \equiv g+1 \bmod 2$ and $n \geq g+1$, even if the characteristic $\ell$ of $k$ is positive and smaller than $2g+1$. \end{remark} \begin{remark} Silverman \cite{sil} proves a number of results which are very similar to the ones above; especially compare formula (21) in \cite{sil} with our formula (\ref{mainformula}). Formula (\ref{mainformula}) shows that for $n \in T(k,g)$ the divisor $H_n^*$ has support outside the hyperelliptic ramification locus. This generalises Corollary 1.4 of \cite{sil}. The main difference with \cite{sil} is that we use a slightly more general notion of division point. The sequence of divisors considered in \cite{sil} is basically the sequence of $H_n$ such that $n=(2k-1)(g-1)$ for some $k \geq 2$. \end{remark} By summing over the finite branch points of $(X,o)$ we obtain the following result. \begin{cor} Take the assumptions of Theorem \ref{mainatalpha}. Then: \[ \frac{1}{n^2} \sum_{q \in H_n \atop f(x(q)) \neq 0, \infty} \log |f(x(q))| \longrightarrow \frac{1}{2} \log |\Delta(f)| \] as $n \to \infty$ over $T(k,g)$. Here $\Delta(f)$ is the discriminant of $f$. \end{cor} This result was shown by Szpiro and Tucker in \cite{st1} for the case that $(X,o)$ is an elliptic curve and $k$ is a discrete valuation field such that $X$ has semistable reduction over $k$. The proof in \cite{st1} uses the geometry of the special fiber of the minimal regular model of $X$ over $k$; such considerations seem to be absent from the arguments above. \section{An application} \label{sectionapplication} In this section we deduce a finiteness result from Theorem A and Theorem B. The line of reasoning is inspired by \cite{bir}, Introduction, and \cite{clt}, Section~8. Let $X$ be any geometrically connected projective curve over the number field $K$. Let $S$ be a finite set of places of $K$, including the archimedean ones. Let $\mathcal{X}$ be a proper model of $X$ over the ring of $S$-integers of $K$ and let $D$ be an effective $K$-divisor on $X$. A point $p \in X(\overline{K})$ is called $S$-integral with respect to $D$ if the Zariski closures of $D$ and of the $\mathrm{Gal}(\overline{K}/K)$-orbit of $p$ in $\mathcal{X}$ are disjoint. If $(D_n)_{n \in \mathbb{N}}$ is a sequence of effective $K$-divisors on $X$ we are interested in the question whether the number of $n \in \mathbb{N}$ such that $p$ is $S$-integral with respect to $D_n$ is finite or infinite. Note that the answer to this question does not depend on the choice of the model $\mathcal{X}$. We focus on the special case that $X$ is a superelliptic curve over $K$, with equation $y^N=f(x)$ and with point at infinity $o$. We have the sequence $(H_n)_{n \geq g}$ of divisors of $n$-division points on $X$. For $p \in X(\overline{K})$ we put $T(p) = \{ n \in \mathbb{Z}_{\geq g} | p \notin H_n \}$.
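For instance, if $(X,o)$ is an elliptic curve then $g=1$, the theta divisor on $J \cong X$ is the origin, and $H_n$ is simply the divisor of $n$-torsion points of $X$; in this case $T(p)$ consists of the integers $n \geq 1$ with $n[p-o] \neq 0$, and the question above specialises to the setting considered in \cite{bir}.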
\begin{prop} \label{application} Let $p \in X(\overline{K})$. Assume that $[p-o]$ is not torsion in the jacobian of $X$. Then there are only finitely many $n \in T(p)$ such that $p$ is $S$-integral with respect to $H_n$. \end{prop} \begin{proof} Without loss of generality we may enlarge $S$ until it contains all primes of bad reduction of $X$. Also we may assume that $T(p)$ is infinite, and that $p \in X(K)$. By Theorem \ref{theheights} and Theorem \ref{main} we have, as $[p-o]$ is not torsion: \begin{align*} 0 < h_J([p-o]) &= g \sum_v n_v \int_{X_v} \log |x-x(p)|_v \, \mu_v \\ & = g \sum_v n_v \lim_{n \to \infty \atop n \in T(p)} \frac{1}{n^2} \sum_{q \in H_n \atop x(q) \neq x(p), \infty} \log |x(p)-x(q)|_v \, . \end{align*} Let $T'(p)$ be the set of $n \in T(p)$ such that $p$ is $S$-integral with respect to $H_n$. We assume, for contradiction, that $T'(p)$ is infinite. Then we may write: \begin{equation} \label{secondinproof} 0 < \sum_v n_v \lim_{n \to \infty \atop n \in T'(p)} \frac{1}{n^2} \sum_{q \in H_n \atop x(q) \neq x(p), \infty} \log |x(p)-x(q)|_v \, . \end{equation} However, if $p$ is $S$-integral with respect to $H_n$ then $x(p)$ and $x(H_n)$ are disjoint mod $v$ for all $v \notin S$, as $H_n$ is invariant under the automorphism group of $x \colon X \to \mathbb{P}^1_K$ over $\overline{K}$. It follows that the contribution in (\ref{secondinproof}) to the sum over all $v$ vanishes for $v \notin S$, so that we can conclude: \[ 0 < \sum_{v \in S} n_v \lim_{n \to \infty \atop n \in T'(p)} \frac{1}{n^2} \sum_{q \in H_n \atop x(q) \neq x(p), \infty} \log |x(p)-x(q)|_v \, . \] We can now interchange the first summation and the limit, yielding: \[ 0 < \lim_{n \to \infty \atop n \in T'(p)} \frac{1}{n^2} \sum_{q \in H_n \atop x(q) \neq x(p), \infty} \sum_{v } n_v \log |x(p)-x(q)|_v \, , \] the latter sum being again over all places of $K$. We arrive at a contradiction since: \[ \sum_{q \in H_n \atop x(q) \neq x(p), \infty} \sum_{v } n_v \log |x(p)-x(q)|_v =0 \] for each $n \in T'(p)$ by the product formula. \end{proof} Note that by the Manin-Mumford conjecture (proved by Raynaud) the number of $p \in X(\overline{K})$ such that $[p-o]$ is torsion is finite. It would be interesting to know whether Proposition \ref{application} could be strengthened to the following statement, generalizing ``Ih's conjecture'' (cf.~\cite{bir}, Section~3): assume $[p-o]$ is not torsion; then there are only finitely many division points $\xi_n \in H_n$ such that $p$ is $S$-integral with respect to the Galois orbit of $\xi_n$. Such a statement would follow by the arguments above if one had a version of Theorem~\ref{main} with the $H_n$ replaced by the Galois orbits of a sequence of distinct $\xi_n \in H_n$. Unfortunately, proving such a statement seems to be hard; for example, one does not seem to know whether $H_n$ (minus the contribution from the branch points of $x$) is composed of ``large enough'' Galois orbits as $n \to \infty$ (i.e., whether the division polynomials associated to $H_n$ have ``large enough'' prime factors). See also \cite{sil} for a discussion of this problem in the context of hyperelliptic curves. In \cite{bir}, Theorem 0.2, the stronger version of Proposition \ref{application} stated above is proved in the case that $(X,o)$ is an elliptic curve. An analogue of Proposition \ref{application} in the context of dynamical systems on $\mathbb{P}^1_K$ is stated in \cite{st2}, Proposition~6.3.
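\begin{remark} As a purely computational aside, the Hankel determinant identity (\ref{desainteviennot}) is easy to test numerically if one takes, as in the result of Desainte-Catherine and Viennot, $c_n$ to be the $n$-th Catalan number (this reading of the $c_n$ is an assumption of this sketch, not a statement from the text above). The following Python fragment verifies the identity exactly, in rational arithmetic, for small $l$ and $m$:
\begin{verbatim}
from fractions import Fraction
from math import comb

def catalan(n):
    return comb(2 * n, n) // (n + 1)

def hankel_det(l, m):
    # exact determinant of the m x m matrix (c_{l+i+j}), 0 <= i, j < m,
    # by Gaussian elimination over the rationals (pivots are non-zero
    # because the matrix is totally positive)
    a = [[Fraction(catalan(l + i + j)) for j in range(m)] for i in range(m)]
    det = Fraction(1)
    for k in range(m):
        det *= a[k][k]
        for i in range(k + 1, m):
            f = a[i][k] / a[k][k]
            for j in range(k, m):
                a[i][j] -= f * a[k][j]
    return det

def dcv_product(l, m):
    # right-hand side: product over 1 <= i <= j <= l-1 of (i+j+2m)/(i+j)
    p = Fraction(1)
    for i in range(1, l):
        for j in range(i, l):
            p *= Fraction(i + j + 2 * m, i + j)
    return p

assert all(hankel_det(l, m) == dcv_product(l, m)
           for l in range(1, 7) for m in range(1, 7))
\end{verbatim}
For instance, $l=2$, $m=2$ gives determinant $c_2 c_4 - c_3^2 = 28 - 25 = 3$, matching the product $(1+1+4)/(1+1) = 3$. \end{remark}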
\title{The influence of Galactic wind upon the star formation histories of Local Group galaxies} \author{H. Hirashita\thanks{E-mail: [email protected]}, H. Kamaya, S. Mineshige \\ {\it Department of Astronomy, Kyoto University,} \\ {\it Sakyo-ku, Kyoto 606-01, Japan}} \date{Accepted by {\sl MNRAS}} \begin{document} \maketitle \begin{abstract} We examine the possibility that ram pressure exerted by the galactic wind from the Galaxy could have stripped gas from the Local Group dwarf galaxies, thereby affecting their star formation histories. Whether gas stripping occurs or not depends on the relative magnitudes of two counteracting forces acting on the gas in a dwarf galaxy: the ram pressure force exerted by the wind and the gravitational binding force of the dwarf galaxy itself. We suggest that the galactic wind could have stripped the gas from any dwarf galaxy located within a distance of $R_{\rm c}\simeq 120(r_{\rm s}/1\,\mbox{kpc})^{3/2} ({\cal E}_{\rm b}/10^{50}\,\mbox{erg} )^{-1/2}$ kpc from the Galaxy (where $r_{\rm s}$ is the surface radius and ${\cal E}_{\rm b}$ is the total binding energy of the dwarf galaxy) within a timescale of $\sim 1$ Gyr, thereby preventing star formation there. Our result, based on this Galactic wind model, explains the recent observation that dwarfs located close to the Galaxy experienced star formation only in the early phase of their lifetimes, whereas distant dwarfs are still undergoing star formation. The present star formation in the Large Magellanic Cloud can also be explained through our Galactic wind model. \end{abstract} \section{INTRODUCTION} It is widely accepted that the hot interstellar gas, mainly originating from supernova explosions, will eventually escape from the galactic disk as a galactic wind (Shapiro \& Field 1976; Habe \& Ikeuchi 1980; Tenorio-Tagle \& Bodenheimer 1988; Norman \& Ikeuchi 1989). Such a wind can have various effects on the disk-halo system. For example, it can supply energy and gas to the halo (Chevalier \& Oegerle 1979; Habe \& Ikeuchi 1980; Li \& Ikeuchi 1992). Here, we consider whether ram pressure exerted by the galactic wind from the Galaxy could have stripped gas from the Local Group dwarf galaxies. Such ram pressure stripping may influence the star formation histories of the dwarfs. The effects of ram pressure are extensively discussed by Portnoy, Pistinner, \& Shaviv (1993). We summarize various properties of Local Group dwarf galaxies in Table 1, where $R$ is the distance from the Galaxy (the galactocentric distance), and $r_{\rm s}$ and ${\cal E}_{\rm b}$ are the surface radius and the binding energy calculated from observations of velocity dispersions, respectively. In the fifth column, SF (star formation), we distinguish the following two categories: \noindent A: Dwarf galaxies in which the stellar population is dominated by the initial bursts of star formation; \noindent B: Dwarf galaxies which experienced recent star formation as well.
Van den Bergh (1994) has asserted that the star formation histories of Local Group dwarf galaxies correlate with the distance from the Galaxy ($R$): dwarf galaxies near the Galaxy, such as Ursa Minor and Draco, experienced star formation only in the early phase of their lifetimes ($\sim 12$ Gyr), while there is observational evidence for more recent (or present) star formation in distant dwarfs (see van den Bergh 1994 and references therein). He also implied that star formation in dwarfs is strongly suppressed in the presence of mass outflow from the Galaxy. \begin{table} {\small \caption{Local Group dwarf galaxies. See text for the definitions.} \begin{tabular}{lccccccc} Name & $R$(kpc) & $r_{\rm s}$(kpc) & ${\cal E}_{\rm b}(10^{50}\mbox{erg})$ & SF & $R_{\rm c}$(kpc) & $t_{\rm cross}$(Gyr) & References\footnote{(1) van den Bergh 1994; (2) Saito 1979a; (3) Da Costa 1984; (4) Mighell \& Rich 1996; (5) Carignan, Demers, \& C\^{o}t\'{e} 1991; (6) van de Rydt, Demers, \& Kunkel 1991; (7) Lo, Sargent, \& Young 1993; (8) Fisher \& Tully 1979.} \\ Ursa Minor & 65 & 1.38 & 0.121 & A & 550 & 0.2 & 1, 2 \\ Draco & 76 & 0.795 & 0.136 & A & 240 & 0.2 & 1, 2 \\ Sculptor & 78 & 1.70 & 20.9 & B & 50 & 0.2 & 1, 2, 3 \\ Sextans & 82 & $\cdots$ & $\cdots$ & A & $\cdots$ & 0.2 & 1 \\ Fornax & 133 & 3.89 & 86.6 & B & 100 & 0.4 & 1, 2 \\ Leo II & 217 & 0.717 & 0.665 & B & 90 & 0.6 & 1, 2, 4 \\ Leo I & 277 & 1.20 & 7.19 & B & 50 & 0.8 & 1, 2 \\ Phoenix & 390 & 1 & 0.8 & B & 140 & 1 & 1, 5, 6 \\ DDO 210 & 794 & 0.7 & 10 & B & 20 & 2 & 1, 7, 8 \end{tabular} } \end{table} The plan of this Letter is as follows. First of all, in the next section, we define the critical radius $(R_{\rm c})$ in such a way that a dwarf galaxy within $R_{\rm c}$ of the Galaxy suffers ram pressure stripping by the wind, and derive the general expression for $R_{\rm c}$. We then discuss in \S 3 the interpretation of the observations by van den Bergh (1994) and one exceptional case, the Large Magellanic Cloud, whose star formation history is also explained with our galactic wind model. In the final section, we shall conclude that our hypothesis can qualitatively account for the observations. \section{THE CRITICAL RADIUS FOR GAS STRIPPING} In this section we compare the ram pressure force exerted by the wind with the gravitational binding force of a dwarf galaxy itself. We shall then evaluate the critical radius, $R_{\rm c}$, within which the ram pressure exceeds the gravitational force. First we evaluate the mass loss rate of the Galaxy. The hot gas component of the Galaxy typically has a density of $10^{-2.5}\, \mbox{cm}^{-3}$ and a temperature of $10^{5.7}$ K (McKee \& Ostriker 1977). The hot gas will flow out of the Galaxy (Shapiro \& Field 1976), because its thermal energy is larger than the gravitational binding energy of the Galaxy (Cox \& Smith 1974). Using the calculation by Habe \& Ikeuchi (1980), we estimate the mass ejection (escape) rate as $\dot{M}\sim 1\, M_\odot\,\mbox{yr}^{-1}$. Assuming a steady, spherically symmetric flow, we can write approximately \begin{equation} \dot{M}\simeq 4\pi R^2\rho v_{\rm esc} , \end{equation} where $R$ is the galactocentric distance, and $\rho$ and $v_{\rm esc}$ are the density and the velocity of the escaping wind at distance $R$, respectively (see also Wang 1995). From equation (1), $\rho$ is expressed as follows: \begin{equation} \rho\simeq\frac{\dot{M}}{4\pi R^2v_{\rm esc}}. \end{equation} Next we consider a dwarf galaxy which is located at the distance $R$ from the Galaxy.
Ram pressure by the wind is $\sim\rho v_{\rm esc}^2$. For the ram pressure to remove the gas from the dwarf, we require \begin{equation} \rho v_{\rm esc}^2\pi r_{\rm s}^2\ga {\cal E}_{\rm b}/r_{\rm s}, \end{equation} where $r_{\rm s}$ and ${\cal E}_{\rm b}$ are the surface radius and the binding energy of the dwarf, respectively; both are calculated from observations of velocity dispersions (Saito 1979a; see also Table 1). The left-hand side of (3) approximately represents the total ram pressure force on the dwarf, while the right-hand side represents the total gravitational force. Combining (2) and (3), we find the following condition, \begin{equation} R^2\la \frac{\dot{M}r_{\rm s}^3v_{\rm esc}}{4{\cal E}_{\rm b}} \equiv R_{\rm c}^2. \end{equation} Here, $R_{\rm c}$ is the critical radius. If $R<R_{\rm c}$, the ram pressure exceeds the binding force, so that the wind can strip the gas from the dwarf. The velocity of the wind escaping from the Galaxy is $v_{\rm esc}\simeq 300\,\mbox{km}\,\mbox{s}^{-1}$ (Habe \& Ikeuchi 1980). We finally derive \begin{eqnarray} R_{\rm c} &\simeq & 120\left( \frac{\dot{M}}{1\, M_\odot\,\mbox{yr}^{-1}} \right)^{1/2}\left(\frac{r_{\rm s}}{1\,\mbox{kpc}} \right)^{3/2} \left(\frac{v_{\rm esc}}{300\,\mbox{km}\,\mbox{s}^{-1}} \right)^{1/2} \left(\frac{{\cal E}_{\rm b}}{10^{50}\,\mbox{erg}} \right)^{-1/2}\mbox{kpc}. \end{eqnarray} The calculated critical radii are listed in Table 1. Moreover, we calculate and list the crossing time, $t_{\rm cross}\equiv R/v_{\rm esc}$, which is typically \begin{equation} t_{\rm cross}\simeq 0.3 \left(\frac{R}{100\,\mbox{kpc}}\right) \left(\frac{v_{\rm esc}}{300\,\mbox{km\,s}^{-1}}\right)^{-1} \,\mbox{Gyr}. \end{equation} \section{DISCUSSIONS} \subsection{Star formation histories} Observationally, Ursa Minor, Draco and Sextans are classified as category A in Table 1: their stellar populations are dominated by early bursts of star formation. For Ursa Minor and Draco, we find $R<R_{\rm c}$. Thus it seems highly probable that the galactic wind stripped the gas from these dwarfs within $\sim 1$ Gyr after the wind formed in the Galaxy. The other dwarf galaxies in Table 1 belong to category B. Indeed, they satisfy the condition $R>R_{\rm c}$, and the ram pressure of the wind has little influence on them. In the two most distant dwarfs in particular, Phoenix and DDO 210, star formation is still going on (van den Bergh 1994). This is because they feel practically no ram pressure from the wind. We also comment that for dwarf galaxies in category A, a radial `pumping' mode, in which mass flows quasi-periodically into the core of the galaxy (Balsara, Livio, \& O'Dea 1994), is not permitted because of their shallow gravitational potential, although this mode may exist for the more distant dwarfs in category B (Comins 1984). \subsection{The Large Magellanic Cloud} Though the Galactic wind should have a considerable influence on the Large Magellanic Cloud (LMC), one of the nearest galaxies to the Galaxy, there is evidence of recent star formation in the LMC (e.g., Massey 1990). One might think that the LMC contradicts our model. So far, however, we have only discussed a steady galactic wind. In reality, the supernova rate (star formation rate) decreases as time goes on; according to Larson \& Tinsley (1978), its decay timescale may be about 1 Gyr.
In recent epochs, the OB-star formation rate has also decreased, and the Galactic wind has accordingly become weaker. In other words, the Galactic halo is now of the bound and cooled type (Habe \& Ikeuchi 1980), because OB-star heating in the Galactic disk has become insignificant [a galactic fountain picture as in Shapiro \& Field (1976)]. This makes the recent star formation in the LMC possible, since the LMC is now free from the Galactic wind. Indeed, the LMC has a bimodal age distribution of stellar populations, with almost all of the star clusters in the LMC being either younger than 3 Gyr or older than 12 Gyr (Da Costa 1991). The first episode of star formation was halted by the Galactic wind, and a second episode became possible after the Galactic wind had stopped. We should note that the LMC has a much deeper gravitational potential than the galaxies listed in Table 1, which makes the second episode of star formation easier. Thus, through our Galactic wind model, we can understand the proximity effect on the star formation rates of nearby galaxies. \subsection{The importance of Galactic wind} In the previous two subsections, we have demonstrated the importance of the galactic wind in the following three respects: \noindent [1] Gas in dwarf galaxies located within $R_{\rm c}$ of the Galaxy ($R_{\rm c}$ is defined in Eq. 5) can be stripped by ram pressure from the Galactic wind. \noindent [2] The process of ram pressure stripping by the Galactic wind must be considered in studying the star formation histories of Local Group dwarf galaxies. \noindent [3] The star formation history of the LMC can be explained consistently by adopting a Galactic wind model (see \S 3.2). Thus, our Galactic wind model is successful in explaining the star formation histories of Local Group galaxies. \section{CONCLUSION} On the basis of the observational results in van den Bergh (1994), we suggest that the star formation histories of Local Group dwarf galaxies depend on the distance from the Galaxy. If the distance from the Galaxy is less than the critical radius $R_{\rm c}$, which is given in equation (5), the gas of the dwarf galaxy is stripped by ram pressure exerted by the Galactic wind, since the ram pressure force of the wind exceeds the gravitational binding force of the dwarf itself. Thus we conclude that the galactic wind from a giant galaxy generally has a considerable influence on the star formation histories of nearby dwarf galaxies. \section*{Acknowledgments} We thank T. T. Takeuchi for useful discussions and comments.
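As a quick numerical cross-check of equations (5) and (6), the following minimal Python sketch (not part of the original analysis; the fiducial $\dot{M}=1\,M_\odot\,{\rm yr}^{-1}$ and $v_{\rm esc}=300\,{\rm km\,s^{-1}}$ are assumed) reproduces, for example, the Ursa Minor entries of Table 1 to within rounding:
\begin{verbatim}
def R_c(r_s_kpc, E_b_1e50erg, Mdot_Msun_yr=1.0, v_esc_kms=300.0):
    """Critical stripping radius in kpc, equation (5)."""
    return (120.0 * Mdot_Msun_yr**0.5 * r_s_kpc**1.5
            * (v_esc_kms / 300.0)**0.5 * E_b_1e50erg**-0.5)

def t_cross(R_kpc, v_esc_kms=300.0):
    """Wind crossing time in Gyr, equation (6)."""
    return 0.3 * (R_kpc / 100.0) * (300.0 / v_esc_kms)

# Ursa Minor: R = 65 kpc, r_s = 1.38 kpc, E_b = 0.121e50 erg
print(round(R_c(1.38, 0.121)))   # ~559 kpc  (Table 1: 550)
print(round(t_cross(65.0), 2))   # ~0.20 Gyr (Table 1: 0.2)
\end{verbatim}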
\section{Introduction} How matter is distributed around galaxies is one of the fundamental questions in cosmology. Gravitational lensing provides a powerful method to map the matter distribution at small and large length scales~\citep[for a review, see][]{BS01}. Recent large galaxy redshift surveys, such as the Sloan Digital Sky Survey \citep[SDSS;][]{York00} and the COSMOS survey, have allowed us to explore the mean surface density profile of galaxies through weak lensing techniques \citep[e.g., ][]{Sheldon04,Mandelbaum06,MSFR,Leauthaud11}. \citet[][hereafter MSFR]{MSFR} measured the mean surface matter density profile of the SDSS main galaxies, with mean redshift $\langle z \rangle=0.36$, through the gravitational lensing magnification of background quasars (QSOs). They calculated the cross-correlation between the number density of foreground galaxies and the flux magnification of background QSOs. The cross-correlation function was then converted to the surface matter density profile $\Sigma_m$ of the lens galaxies as a function of the projected distance $R$ from the galactic center. The derived profile is well approximated as $\Sigma_m\propto R^{-0.8}$ at $10{\rm kpc}\la R\la10{\rm Mpc}$. MSFR also detected a systematic offset between the five SDSS photometric bands in the magnification of background QSOs: at shorter wavelengths, QSOs appear less magnified. This is interpreted as reddening due to dust in and around the foreground galaxies. Adopting the Small Magellanic Cloud type dust model for the sample galaxies, they derived the mean surface dust density profile of galaxies $\Sigma_d(R)$ from the galaxy-QSO color cross-correlation function. The shape of the derived $\Sigma_d$ is very similar to that of $\Sigma_m$ at $10{\rm kpc}\la R\la10{\rm Mpc}$, suggesting that there is a substantial amount of dust in galactic halos (see also Chelouche et al. 2007; McGee \& Balogh 2010). Theoretical models are needed to properly interpret the observationally inferred dust distribution. In this Letter, we develop an analytic model based on the so-called halo approach to study the distribution of dust around galaxies. Earlier, in \cite{MFY}, we used cosmological $N$-body simulations to study in detail the matter distribution around galaxies. There, we showed that the observed surface density profile can be used to determine the characteristic mass of the sample lens galaxies, and that the mass distributed beyond the galaxies' virial radii contributes about half of the global mass density. We provide a physical model for the dust distribution in this Letter. Our model is characterized by two key physical parameters: one is the host halo mass of the galaxies and the other is the extent of the dust distribution. The former is determined from the observed matter profile \cite[e.g.,][]{Mandelbaum06,HW08,Leauthaud11,MFY}, while the latter can be inferred from the dust distribution. How far dust is transported from the galaxies is indeed a highly interesting question. The observed dust profile is well described by a single power law over a wide range of distances, from 10 kpc to 10 Mpc. We show that the profile can be decomposed into two parts, the so-called one-halo and two-halo terms. We parametrize the one-halo term such that dust is distributed out to $\alpha R_{\rm vir}$, where $R_{\rm vir}$ is the galaxy's virial radius. Through model fitting, we determine the extension parameter $\alpha$ to be greater than unity. We discuss the implications for the mechanisms of dust production and transport into intergalactic space.
\section{THE MODEL} \subsection{Halo approach} We present a simple formulation to calculate the surface dust density profile. Our model is based on the so-called halo approach \citep{Seljak00,CS02}. The mean surface density $\Sigma_d(R)$ is divided into two terms: \begin{equation} \Sigma_d(R)=\Sigma_d^{\rm 1h}(R)+\Sigma_d^{\rm 2h}(R), \end{equation} where $R$ is the distance in the projected two-dimensional plane. The one-halo term $\Sigma_d^{\rm 1h}(R)$ arises from the central halo, and the two-halo term $\Sigma_d^{\rm 2h}(R)$ from the neighbouring halos. The contribution from an individual galaxy halo with mass $M_h$ to the one-halo term $\Sigma_d^{\rm 1h}(R)$ is given by the projection of the halo dust density profile $\rho_d(r|M_h)$ along the line-of-sight $\chi$: \begin{eqnarray} \Sigma_d(R|M_h)=\int^\infty_{-\infty} {\rm d}\chi\, \rho_d(r=\sqrt{\chi^2+R^2}~|M_h). \end{eqnarray} The one-halo term is then a number-weighted average of $\Sigma_d(R|M_h)$: \begin{eqnarray} \Sigma_d^{\rm 1h}(R)&=&\frac{1}{n_{\rm halo}}\int_{M_{\rm min}}^\infty {\rm d}M_h\frac{{\rm d}n}{{\rm d}M_h}\Sigma_d(R|M_h),\\ n_{\rm halo}&=&\int_{M_{\rm min}}^\infty {\rm d}M_h\frac{{\rm d}n}{{\rm d}M_h}, \end{eqnarray} where ${\rm d}n/{\rm d}M_h$ is the halo mass function and $M_{\rm min}$ is the threshold halo mass for the sample galaxies. The threshold mass corresponds to the typical host halo mass of the observed galaxies. We calculate the two-halo term power spectrum $P_d^{\rm 2h}(k)$ as follows: \begin{eqnarray} \label{pk} P_d^{\rm 2h}(k)&=&P_{\rm lin}(k)\nonumber \\ &\times&\left[\frac{1}{\bar\rho_d}\int_0^\infty {\rm d}M_h\frac{{\rm d}n}{{\rm d}M_h}M_d(M_h)b(M_h)u_d(k|M_h)\right]\nonumber \\ &\times&\left[\frac{1}{n_{\rm halo}}\int_{M_{\rm min}}^\infty {\rm d}M_h\frac{{\rm d}n}{{\rm d}M_h}b(M_h)u_d(k|M_h)\right], \end{eqnarray} where $\bar\rho_d$ is the mean cosmic dust density, $P_{\rm lin}(k)$ is the linear matter power spectrum, $b(M_h)$ is the halo bias factor, $M_d(M_h)$ is the dust mass in and around a halo with mass $M_h$, and $u_d(k|M_h)$ is the Fourier transform of the density profile $\rho_d$ normalized by its dust mass. The power spectrum is converted to the two-point correlation function via \begin{equation} \xi_d^{\rm 2h}(r)=\frac{1}{2\pi^2}\int_0^\infty {\rm d}k\, k^2\frac{\sin(kr)}{kr}P_d^{\rm 2h}(k). \end{equation} Then we obtain the two-halo term of the mean surface density profile \begin{eqnarray} \Sigma_d^{\rm 2h}(R)&=&\bar\rho_d\int^\infty_{-\infty}{\rm d}\chi\,\xi_d^{\rm 2h}(r=\sqrt{\chi^2+R^2})\nonumber\\ &=&2\bar\rho_d\int_R^\infty {\rm d}r\frac{r\xi_d^{\rm 2h}}{\sqrt{r^2-R^2}}. \label{eq:twohalo} \end{eqnarray} We adopt a flat-$\Lambda$CDM cosmology, with $\Omega_m=0.272, \Omega_\Lambda=0.728, H_0=70.2{\rm km~s^{-1}~Mpc^{-1}}, n_s=0.961$ \citep{wmap7}. We use the code {\it CAMB} to obtain the linear matter power spectrum \citep{CAMB} and utilize the halo mass function and bias given by \cite{ST99} at $z=0.36$, the mean redshift of the galaxy sample used in MSFR. \subsection{Dust distribution profile} We assume that the spatial distribution of dust within and around a halo is described by \begin{eqnarray} \rho_d(r|M_h)&\propto&\frac{1}{r^2}\exp\left(-\frac{r}{\alpha R_{\rm vir}}\right), \label{eq:dust_1halo} \end{eqnarray} where $R_{\rm vir}$ is the virial radius. Within the virial radius $R_{\rm vir}$, the mean internal matter density is $\Delta\times\rho_{\rm crit}$, where $\Delta$ is given by \cite{BN98}.
Essentially, we assume that the dust distribution follows a singular isothermal sphere (SIS) profile with an exponential cut-off at $r=\alpha R_{\rm vir}$. One of our aims in this Letter is to determine the value of $\alpha$, i.e., how far dust is distributed from galaxies. Fig.~\ref{rho_dust} shows the shape of the dust density profile $\rho_{\rm d}$ for a halo with mass $M_h=10^{13} h^{-1} M_{\odot}$ computed in the above manner. The dependence on $\alpha$ is clearly visible. The power-law shape is motivated by the fact that the observationally derived surface dust profile itself is well fit by a simple power law, $\Sigma_d \propto R^{-0.8}$, similarly to the matter distribution (MSFR). Also, detailed calculations of dust ejection and radiation-driven transport by \citet{BF05} yield an approximately power-law distribution for the resulting gas metallicity through dust sputtering. \begin{figure} \includegraphics[width=8.5cm]{rho_dust.eps} \caption{The model dust density profile as a function of the spatial distance from the center. The black, red and blue lines represent the model profiles with $\alpha=0.1, 1$ and 10, respectively. For comparison, we also show the NFW \citep{NFW} and the untruncated SIS profiles with grey lines. Note that the amplitudes are arbitrary in this figure.} \label{rho_dust} \end{figure} We have also examined other profiles, of the form $r^{-3}$ and $r^{-1}$, with a similar exponential cut-off. However, we have found that neither the steeper nor the shallower profile reproduces the observed dust profile at small distances well. We therefore adopt the profile of equation (\ref{eq:dust_1halo}) in our model. The Fourier transform of $\rho_d(r)$ is given by \begin{equation} u_d(k|M_h)=\int_0^{\infty} {\rm d}r\, 4\pi r^2\frac{\sin(kr)}{kr}\frac{\rho_d(r|M_h)}{M_d(M_h)}. \end{equation} Note that the value of $u_d$ should be unity in the small-$k$ limit. We determine the amplitude of $\rho_d$ by setting the dust mass associated with a halo to a certain value $M_{\rm d}$. To this end, we first consider the total amount of dust around galaxies in the local universe. \cite{F11} estimated the total amount, in units of the cosmic density parameter, to be \begin{equation} \Omega_{\rm galaxy~dust}=4.7\times10^{-6}. \label{eq:globaldust1} \end{equation} Interestingly, this value is close to the difference between the estimated amount of dust produced and shed by stars over the age of the universe and the summed amount found in local galactic discs \cite[see also][]{IK03}. Suppose that the comoving density of the total halo dust remains constant in the local universe. Then the mean cosmic density of dust in galactic halos is given by \begin{equation} \bar\rho_d(z)=\Omega_{\rm galaxy~dust}\, \rho_{\rm crit}(z)(1+z)^3. \label{eq:globaldust2} \end{equation} We set the dust mass associated with a halo to be \begin{eqnarray} \int^\infty_0 {\rm d}r \, 4\pi r^2 \rho_d(r|M_h)=M_d=\Gamma\times M_h, \label{amp} \end{eqnarray} where $\Gamma$ is the dust-to-halo mass ratio. We integrate the dust mass weighted by the halo mass function to obtain the global dust density. We normalize $\rho_{\rm d}$, or equivalently $\Gamma$, by matching the global dust density to equation (\ref{eq:globaldust2}). Note that $\Gamma$ is not necessarily a constant but can be a function of halo mass. \subsubsection{Dust-to-halo mass ratio $\Gamma$} An essential physical quantity in our model is the dust-to-halo mass ratio $\Gamma$ in equation (\ref{amp}). We propose two simple models.
The first is the {\it constant model}, i.e., $\Gamma$ is independent of halo mass. The dust-to-halo mass ratio is then simply the global density ratio \begin{equation} \Gamma=1.73\times10^{-5}=\Omega_{\rm galaxy~dust}/\Omega_{\rm m}. \end{equation} Because the heavy elements that constitute dust are produced by stars, it may be reasonable to expect that the dust mass is proportional to the stellar mass. Intriguingly, \cite{Takeuchi10} used data from AKARI and GALEX to show a moderate correlation between the stellar mass and a dust attenuation indicator for their sample galaxies (see their Figure 16). In our second model, we use the observed galaxy stellar-halo mass relation to model the halo mass dependence of $\Gamma$; we call this the {\it mass dependent model}. \cite{Leauthaud11} recently studied the stellar-halo mass relation from a joint analysis of galaxy-galaxy weak lensing, galaxy clustering and galaxy number densities using the COSMOS survey data. We use their functional form with the best-fit parameters at $z\approx0.37$, \begin{eqnarray} \log_{10}(M_h)&=&\log_{10}(M_1)+\beta\log_{10}\left(\frac{M_*}{M_{*,0}}\right)\nonumber\\ &~&+\frac{(M_*/M_{*,0})^\delta}{1+(M_*/M_{*,0})^{-\gamma}}-0.5, \label{Mh-Ms} \end{eqnarray} where $M_*$ is the galaxy stellar mass, $\log_{10}(M_1/M_\odot)=12.52,\log_{10}(M_{*,0}/M_\odot)=10.92,\beta=0.46,\delta=0.57,$ and $\gamma=1.5$ \citep[see also][]{Behroozi10}. We then relate the dust mass to the stellar mass as \begin{equation} M_d(M_h) \propto M_*(M_h). \label{md-ms} \end{equation} The normalization constant, determined by integrating this relation weighted by the halo mass function and matching the resulting global dust mass density to equation (\ref{eq:globaldust2}), is $3.05\times10^{-3}$. Fig.~\ref{Gamma} compares $\Gamma$ for our two models. \begin{figure} \includegraphics[width=8.5cm]{Gamma.eps} \caption{Two models for the dust-to-halo mass ratio as a function of halo mass. The dashed and the solid lines are the ratio $\Gamma$ for the constant and the mass dependent model, respectively.} \label{Gamma} \end{figure} The shape of $\Gamma$ for the mass dependent model reflects the stellar-halo mass relation. The peak value of $\Gamma$, at $\sim 6\times 10^{11} h^{-1}M_{\odot}$, is $\simeq10^{-4}$. Overall, $\Gamma$ for the mass dependent model is larger than that for the constant model at the characteristic mass of the sample galaxies (see Section 3). We are now able to calculate the dust surface density profile. In Fig.~\ref{sigma_dust_alpha}, we show the dependence of the surface dust density profile on the extension parameter $\alpha$. \begin{figure} \includegraphics[width=8.5cm]{sigma_dust_alpha.eps} \caption{The surface dust density profile as a function of the projected radius for $\alpha = 0.1, 1$ and 10. The projected radius is the physical distance at $z=0.36$. The constant model for the dust-to-halo mass ratio is adopted. The dotted and the dashed lines represent the one-halo and the two-halo terms, respectively. The solid lines show the sum of the two terms.} \label{sigma_dust_alpha} \end{figure} For this figure, the threshold halo mass is fixed at $2\times10^{12}h^{-1}M_\odot$. We compare three cases: $\alpha=0.1,~1$ and 10. The dotted lines show the one-halo term. Clearly, the extension parameter $\alpha$ significantly affects the amplitude and the shape of the one-halo term. The central surface density is larger for smaller $\alpha$.
This can be easily understood by noting that the total dust mass associated with a halo is fixed by equation (\ref{amp}). On the other hand, $\alpha$ does not much affect the two-halo term at $R\ga1{\rm Mpc}$. The amplitude of the two-halo term is essentially set by the halo bias $b(M_{\rm h})$ (see equation [5]). \section{RESULTS} We fit our dust distribution model to the observed surface dust density through chi-squared minimization. We have two physical parameters in our model, $M_{\rm min}$ and $\alpha$. We have found that only poor constraints are obtained when both of them are treated as free parameters. Because $M_{\rm min}$ has already been estimated to be $2\times10^{12}h^{-1}M_\odot$ in \cite{MFY}, through a detailed comparison of the observed matter distribution with the results of $N$-body simulations, it is sensible to fit our dust distribution model by treating only $\alpha$ as a free parameter. Namely, the characteristic halo mass can be determined from the gravitational lensing measurement of the matter distribution, whereas the extent of the dust distribution can be inferred from the observed dust profile. We evaluate the likelihood of a specific model by the $\chi^2$ value of the model fit to the observed quantities. The best-fit $\alpha$ for the constant and the mass dependent models are, respectively, \begin{eqnarray} \alpha&=&1.16^{+0.203}_{-0.155}~(1\sigma)~~{\rm for~the~constant~model},\\ \alpha&=&2.88^{+0.450}_{-0.355}~(1\sigma)~~{\rm for~the~mass~dependent~model}. \end{eqnarray} Fig.~\ref{sig_dust} shows the best-fit dust profile of the constant model with $\alpha=1.16$ and that of the mass dependent model with $\alpha=2.88$. \begin{figure} \includegraphics[width=8.5cm]{sigma_dust_sis.eps} \caption{The mean surface dust density profile as a function of the physical projected distance from the center of galaxies at $z=0.36$. The results from our constant model and mass dependent model are shown by the green and the black lines, respectively. The dotted, the dashed and the solid lines are the one-halo term, the two-halo term and the total, respectively.} \label{sig_dust} \end{figure} The data points are from MSFR. Both models for $\Gamma$ reproduce the observed profile fairly well. It is interesting to compare these two equally good models. The mass dependent model requires a larger $\alpha$, owing to the difference in the typical value of $\Gamma$ between the two models. At $M_h > 10^{11}h^{-1}M_\odot$, $\Gamma$ for the mass dependent model is higher than that for the constant model. Because the one-halo term is dominated by halos with masses $\sim M_{\rm min} = 2\times 10^{12} h^{-1}M_{\odot}$, the best-fit $\alpha$ must be larger for the mass dependent model to match the observed inner dust surface density profile (see Fig.~\ref{sigma_dust_alpha}). Overall, our simple models reproduce the observed dust profile very well. Intriguingly, both of our models suggest $\alpha\sim{\cal O}(1)$, i.e., halo dust is distributed over a few hundred kiloparsecs from the galaxies. It is also important that the observed power-law surface density $\Sigma_d\propto R^{-0.8}$ at $R = 10 {\rm kpc} - 10 {\rm Mpc}$ can be explained with $\alpha\sim{\cal O}(1)$. The apparent large-scale dust distribution is explained by the two-halo contributions. Dust is distributed out to $\sim R_{\rm vir}$ or somewhat beyond from a galaxy, but not necessarily up to 10 Mpc, as one might naively expect from the observed dust profile. It is worth discussing the total dust budget in the universe.
The amplitude of the two-halo term depends largely on the mean cosmic density of intergalactic dust, $\bar{\rho_{\rm d}}$ in equation (\ref{eq:twohalo})\footnote{The halo bias $b(M_{\rm h})$ is also a critical factor. However, the characteristic halo mass, and hence $b(M_{\rm h})$, is well constrained from the observed matter density profile, as shown in \cite{MFY}.}. The excellent agreement at large separations between the observed dust density profile and our model prediction shown in Fig.~\ref{sig_dust} implies that $\Omega_{\rm galaxy~dust}$ should indeed be of order $10^{-6}$. Clearly, a significant amount of dust exists around galaxies. Such ``halo dust'' can close the cosmic dust budget, as discussed more quantitatively by \citet{F11}. Intergalactic dust could cause non-negligible extinction and thus compromise cosmological studies with distant supernovae \citep{Menard10b}. We calculate the mean extinction by intergalactic dust following \cite{Zu11} (see their equation [2]). With our model mean cosmic density of $\Omega_{\rm galaxy~dust}=4.7\times10^{-6}$, the predicted mean extinction is $\langle A_V \rangle = 0.0090~{\rm mag}$ out to $z=0.5$. Such an opacity is not completely negligible even in current SNe surveys, and will become important for future surveys that aim at determining cosmological parameters with sub-percent precision \citep{Menard10b}. \section{Discussion} We have shown that our halo model can reproduce the dust profile around galaxies measured by MSFR. By fitting the model to the observed dust profile, we infer that dust is distributed beyond the virial radius of a galaxy. Several authors have proposed radiation-driven transport of dust from galactic discs into the intergalactic medium at high redshifts \citep{Aguirre01,BF05}. \cite{Zu11} showed that galactic winds can disperse dust into the intergalactic medium efficiently. Such studies generally suggest that dust can travel up to a few $\times$ 100 kpc from galaxies if the ejection velocity is $\simeq100{\rm km~s^{-1}}$. The relatively large extent of the dust distribution in our mass dependent model requires very efficient transport mechanisms. Note also that the dust must survive on its way through the galactic halos. Dust in a large, group-size halo could be destroyed by thermal sputtering in hot gas (Bianchi \& Ferrara 2005; McGee \& Balogh 2010). Clearly, detailed theoretical studies of dust transport are needed. Fluctuations of the cosmic infrared background (CIB) provide insight into the dust distribution around galaxies \cite[e.g.,][]{Viero09,Amblard11}. \cite{Viero09} used BLAST data to measure the CIB power spectrum. Using a halo approach, they found that the observed power spectrum at small angular scales is reproduced if halo dust extends up to a few times the virial radius of galactic halos. This is remarkably consistent with our conclusion in this Letter. \cite{Amblard11} compared their measurements of the CIB anisotropies from Herschel wide-area surveys with \cite{Viero09}. The two power spectra are consistent with each other at small scales. Our dust distribution model may provide a key element for studies of the CIB. Although our model reproduces the observed dust profiles very well, a few improvements can certainly be made. The dust extension $\alpha$ and the dust-to-halo mass ratio $\Gamma$ are likely to depend on the halo mass, galaxy type, etc.\ (McGee \& Balogh 2010).
One may need to consider the distribution of satellite galaxies within a halo by using, for example, the halo occupation distribution (HOD). Indeed, we see slight discrepancies between the model predictions and the observations in the dust profiles at $\sim1$ Mpc (Fig.~\ref{sig_dust}), where the contribution from satellite galaxies is non-negligible \citep[for more detailed modeling, see e.g.,][]{Leauthaud11}. In principle, the HOD parameters can be inferred from the lensing magnification measurement presented by MSFR. However, in order to derive the parameters accurately, one needs to use additional information from observations of the galaxy-galaxy correlation function \citep{Leauthaud11}. Including these effects in our model is beyond the scope of this Letter, but will be needed in order to interpret data from future wide-field galaxy surveys. \section*{ACKNOWLEDGMENTS} We thank R.S. Asano, M. Fukugita, A. Leauthaud, B. M{\'e}nard and T.T. Takeuchi for helpful discussions. SM is supported by a JSPS Research Fellowship. NY acknowledges financial support from a Grant-in-Aid for Young Scientists (S 20674003) from JSPS. The work is supported in part by KMI and GCOE at Nagoya University, by the WPI Initiative of MEXT, and by a Grant-in-Aid for Scientific Research on Priority Areas (No. 467).
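As an illustrative aside (not part of the fitting pipeline described above), the one-halo projection of equations (2), (8) and (12) can be sketched in a few lines of Python, assuming NumPy/SciPy; the halo mass, $\Gamma$, $R_{\rm vir}$ and $\alpha$ below are example inputs, not fitted values:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def rho_d(r, A, R_vir, alpha):
    # truncated SIS dust profile, eq. (8): A r^-2 exp(-r/(alpha R_vir))
    return A * np.exp(-r / (alpha * R_vir)) / r**2

def amplitude(M_d, R_vir, alpha):
    # normalization from eq. (12): integrating 4 pi r^2 rho_d over
    # 0 < r < infinity gives 4 pi A alpha R_vir = M_d
    return M_d / (4.0 * np.pi * alpha * R_vir)

def sigma_1h(R, M_d, R_vir, alpha):
    # line-of-sight projection, eq. (2); the integrand is symmetric
    # in chi, so integrate over chi > 0 and double
    A = amplitude(M_d, R_vir, alpha)
    val, _ = quad(lambda chi: rho_d(np.hypot(chi, R), A, R_vir, alpha),
                  0.0, np.inf)
    return 2.0 * val

# example: M_h = 2e12 Msun/h with the constant-model Gamma = 1.73e-5,
# R_vir ~ 0.3 Mpc/h (illustrative), alpha = 1
M_d = 1.73e-5 * 2.0e12
for R in (0.01, 0.1, 1.0):          # projected radius in Mpc/h
    print(R, sigma_1h(R, M_d, 0.3, 1.0))
\end{verbatim}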
\section{\sc ARS, reward, and dopamine in the evolution of life} The starting point of these notes is a talk I heard a few years back in Milan, Italy. The speaker, Giuseppe Boccignone, showed us the pair of images in Figure~\ref{wow}, which represent two paths in two dimensions. It is not hard to see that the macroscopic characteristics of these images are much the same. Their most evident macroscopic feature is that they are heavily clustered: there is a bunch of points in a very restricted area and then suddenly the path jumps to a different area, where a new cluster of points is created. \begin{figure}[bth] \begin{center} \VInsert{graphs/saccade.jpg}{0.3\columnwidth} \end{center} \caption{\small Two images that, apparently, describe the same phenomenon. While the paths displayed in the two images are qualitatively similar, their origin is quite different. The path on the right is a recording of the saccadic eye movements of a person looking at the image on which it is superimposed. The path on the left is that of a spider monkey (a monkey common in the Yucatan peninsula of Mexico) looking for food. The two paths are examples of a common behavior found throughout the animal kingdom: \textsl{Area-Restricted Search}.} \label{wow} \end{figure} The images are so similar that one would have little trouble believing that they had been created by two instances of the same physical phenomenon. Yet, much to my amazement, Boccignone told us that this was not the case. The figure on the right shows the saccadic eye movements of a person looking at the picture that you can faintly see in the background; the picture on the left is the path followed by a spider monkey of the Yucatan peninsula while looking for food. It has nothing to do with the picture and was superimposed on it only to make the point more forcefully. To find such a similarity in completely unrelated activities of two different species is as striking as it would be to find out that the ritualistic chant of a remote tribe in Papua has the same harmonic structure as the Goldberg Variations. Just as in the case of the tribe we would look for an explanation (maybe a previous contact with some Bach-loving explorer that has been incorporated into the rituals of the tribe), so in this case it is not too far-fetched to start looking for some common underlying mechanism. We are encouraged in our endeavor by the fact that the two behaviors do indeed have something in common: they are both examples of \textsl{search}. Search for visual information in one case, search for food in the other. So, we are on the hunt for a common mechanism that guides \textsl{search} in a wide variety of species under the most diverse circumstances. The mechanism must be very general, since it should apply not only to different species but also to very different levels of abstraction (from search for food in physical space to search for information in conceptual space). The behavior that we observe in these two examples is commonly known as \textsl{ARS (Area-Restricted Search)}, a strategy that consists in \dqt{a concentration of searching effort around areas where rewards in the form of specific resources have been found in the past. When resources are encountered less frequently, behavior changes such that the search becomes less sensitive, but covers more space} \cite{hills:06}.
As we shall see, the same basic mechanism permits ARS in a variety of cases and circumstances, from the foraging behavior of the nematode \textsl{C.elegans} to goal-directed cognition in people. You can have a personal experience of ARS by looking at Figure~\ref{ARS-exp} and following the instructions in the caption (read the caption before looking at the pictures). \begin{figure}[bth] \HInsert{graphs/ARS_exp.jpg}{\columnwidth} \caption{\small Paying attention to where your eyes look, begin in the left figure and look for the upside-down triangle (there is only one). Once you have found it, move to the figure on the right and look for the upside-down triangle there. Go ahead and do that now, before you read the rest of the caption. Where did you look first when you were looking for the triangle in the second figure? Did you look first near the black hearts? If you did, then you were performing ARS. You focused your attention on the area where you expected (based on your previous experience) the reward (the upside-down triangle) to be found and then, once you found out that your \dqt{confidence} area did not contain the resource you were looking for, you started a rapid scan of the rest of the image, until you found the sought-after triangle.} \label{ARS-exp} \end{figure} ARS is incredibly widespread. Some form of it has been found in all major eumetazoan clades. To get an idea of what this entails, in Figure~\ref{taxonomy} I have drawn a very partial taxonomy of the animal kingdom. \begin{figure} \begin{center} {\small $\displaystyle \xymatrix@R=1em@C=4em{ & & \mbox{metazoa} \ar@{-}[dr] \ar@{-}[dl] \\ & \mbox{porifera} & & \mbox{eumetazoa} \ar@{-}[dr] \ar@{-}[dl] \\ & & \mbox{bilateria} \ar@{-}[dll] \ar@{-}[dl] \ar@{-}[d] \ar@{-}[drr] & & \mbox{radiata} \\ \mbox{lophotrochozoa} \ar@{-}[dd]& \mbox{platyzoa} & \mbox{ecdysozoa} \ar@{-}[dr] \ar@{-}[dl] \ar@{-}[d] & & \mbox{deuterostomia} \ar@{-}[d] \\ & {} \ar@{..}[r] & \mbox{nematoda}\ar@{..}[r] & {} & \mbox{chordata} \ar@{-}[dr] \ar@{-}[dl] \ar@{-}[d] \\ (mollusks) & & & {} \ar@{..}[r] & \mbox{craniata} \ar@{-}[dr] \ar@{-}[dl] \ar@{..}[r] & {} \\ & & & \mbox{vertebrata} & & \mbox{myxini} } $ } \end{center} \caption{\small A (very small) fragment of the taxonomy of \textsl{metazoa} (i.e.\ animals). The group of \textsl{porifera} is composed of animals without tissue, and consists pretty much of sponges and little else. The clade of the eumetazoa contains all other animals, and in this whole clade ARS has been observed.} \label{taxonomy} \end{figure} The clade on the left of the root, the \textsl{porifera}, is composed of animals that do not have real tissue: sponges and little else. The other clade, \textsl{eumetazoa}, contains all other animals, from worms to mollusks to you and me. ARS can be observed, in some form or another, in the whole eumetazoa clade. This broad presence indicates that the mechanism behind ARS must have evolved quite early, since the major divisions of the eumetazoa clade are very ancient, and it is reasonable to assume that all forms of ARS derive from a mechanism that was put in place before these divisions occurred. ARS is, in other words, one of the basic mechanisms of life. ARS might even be more basic than the eumetazoa: there are molecular mechanisms in single-celled organisms that could be precursors of ARS. The most primitive example is the \dqt{run and tumble} movement of \textsl{E.coli} and \textsl{Salmonella typhimurium}. The movement of these bacteria is controlled by a flagellar motor.
\textsl{Runs} consist of forward motion (longish stretches of resource search), while \textsl{tumbles} are made of random turns that keep the bacterium more or less in the same place (exploiting local resources while they last). Receptor proteins in the membrane couple these behaviors to external stimuli. The mechanism on which ARS is based is, in its essential structure, fairly consistent across the whole spectrum of eumetazoans. Its fully formed presence in organisms with limited learning capabilities, such as \textsl{C.elegans}, suggests that learning is not involved or, to the extent that it is (such as in mammals), that it is built on a fully formed pre-learning machinery. This is what makes ARS so interesting: it is a basic mechanism that very different forms of life have adopted as a strategy to solve problems as diverse as looking for food in a Petri dish or trying to prove a mathematical theorem. Its omnipresence derives from the optimality of ARS as a search strategy in cases in which resources are \dqt{clumpy} and the information about the locations of the \dqt{clumps} is limited (we'll see that in the next section). In the case of foraging animals, the resource is food, and the reward is finding something to eat. In the case of somebody looking at an image, the resource is information about its content, and the reward is understanding what the image is about\footnote{Things are more complex in the case of looking at images: the actual paths that people's gaze follows depend on what they are looking for. Given an image of an interior, the actual paths are different if the subjects are asked e.g.\ \dqt{From what epoch is the interior?} or if they are asked e.g.\ \dqt{What are the people in the image doing?} This is not important for our present considerations: all these paths, different as they may be, exhibit the features of ARS.}. In all these cases, the animal or person moves locally around the same clump as long as it is rewarding to do so (viz.\ as long as one locally finds more food or new information), then starts moving rapidly to explore new territory quickly in search of a new clump.
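Before turning to the neurochemistry, here is a toy sketch of the two-regime rule just described (my own illustration, not taken from the ARS literature; all numeric parameters are arbitrary). The walker turns sharply while \dqt{exploiting} and runs nearly straight otherwise, which is enough to generate clustered paths like those of Figure~\ref{wow}:
\begin{verbatim}
import math, random

random.seed(0)
x = y = theta = 0.0
path = [(x, y)]
exploiting = True
for _ in range(2000):
    if random.random() < 0.01:       # regime switches on a random schedule;
        exploiting = not exploiting  # in a real forager, resource encounters
                                     # drive the switch
    sigma = 2.5 if exploiting else 0.05  # turning-angle spread (radians)
    theta += random.gauss(0.0, sigma)
    x += math.cos(theta)
    y += math.sin(theta)
    path.append((x, y))
# plotting `path` shows tight clusters joined by long, nearly straight hops
\end{verbatim}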
\begin{figure}[bthp] \begin{center} \setlength{\unitlength}{0.75em} \begin{picture}(56,6)(0,0) \put(0,0){ \put(2,1){\line(2,1){2}} \put(2,6){\line(2,-1){2}} \put(4,2){\line(0,1){3}} \put(4,5){\line(2,1){2}} \put(6,6){\line(2,-1){2}} \put(8,5){\line(0,-1){3}} \put(8,2){\line(-2,-1){2}} \put(6,1){\line(-2,1){2}} \put(4.4,2.4){\line(0,1){2.2}} \put(6,5.6){\line(2,-1){1.6}} \put(6,1.4){\line(2,1){1.6}} \put(8,5){\line(2,1){2}} \put(10,6){\line(2,-1){2}} \put(12,5){\line(0,-1){2.5}} \put(12,1.8){\makebox(0,0){NH$_2$}} \put(1.8,1){\makebox(0,0)[r]{HO}} \put(1.8,6){\makebox(0,0)[r]{HO}} \put(6,-2){\makebox(0,0){dopamine}} } \put(20,0){ \multiput(1,3)(4,0){3}{\line(2,1){2}} \multiput(3,4)(4,0){3}{\line(2,-1){2}} \multiput(0,0)(8,0){2}{ \put(2.8,4.2){\line(0,1){1.4}} \put(3.2,4.2){\line(0,1){1.4}} \put(3,6){\makebox(0,0)[b]{O}} } \put(9,3){\line(0,-1){2}} \put(8.75,1){\makebox(0,0)[lt]{NH$_3$ (+)}} \put(13.2,3){\makebox(0,0)[l]{O (-)}} \put(0.8,3){\makebox(0,0)[r]{(-) O}} \put(8.75,5.3){\makebox(0,0)[b]{H}} \put(9,3){\line(0,1){2}} \put(9,3){\line(-1,4){0.5}} \put(9,5){\line(-1,0){0.5}} \put(7,-2){\makebox(0,0){glutamate}} } \put(40,0){ \multiput(2,3)(4,0){3}{\line(2,1){2}} \multiput(4,4)(4,0){2}{\line(2,-1){2}} \put(3.8,4.2){\line(0,1){1.4}} \put(4.2,4.2){\line(0,1){1.4}} \put(4,6){\makebox(0,0)[b]{O}} \put(1.8,3){\makebox(0,0)[r]{HO}} \put(12.2,4){\makebox(0,0)[l]{NH$_2$}} \put(8,-2){\makebox(0,0){GABA}} } \end{picture} \end{center} \caption{\small Three neurotransmitters that will figure in our discussion. Simplifying things very (very!) much, glutamate has an essentially excitatory action: its release in a synapse will move the postsynaptic neuron closer to firing. GABA is, on the contrary, inhibitory: its release will make the postsynaptic neuron less likely to fire. Dopamine is a modulator of glutamate.} \label{dopamine} \end{figure} The \dqt{reward} that one is after can be something very concrete (food) or something very abstract (the \dqt{Eureka} moment of proving a theorem), but in all cases the nervous system makes the reward substantial by encoding it as the release of a very specific chemical: \textsl{dopamine} (Figure~\ref{dopamine}). The organism in which the molecular basis of ARS is best understood is the nematode \textsl{C.elegans} \cite{hills:04}. The neural circuitry consists of eight sensory neurons presynaptic to eight interneurons that co\"ordinate forward and backward movements. The sensory neurons alter the turning frequency by releasing dopamine onto the interneurons, modulating the reception of glutamate (Figure~\ref{elegans}). \begin{figure}[bthp] \begin{center} \VInsert{graphs/elegans_da.jpg}{15em} \end{center} \caption{\small Dopaminergic action in \textsl{C.elegans}. Glu and Da represent glutamatergic and dopaminergic presynaptic neurons, respectively. The release of dopamine from the dopaminergic neurons alters the postsynaptic neuron's response to glutamate. It is not known whether this is due to a presynaptic response of the glutamatergic neuron and/or to a postsynaptic response of the locomotory neurons (from \cite{hills:04}).} \label{elegans} \end{figure} External administration of dopamine increases the turning frequency, while administration of a dopamine antagonist reduces it \cite{hills:04}.
A reasonable model of \textsl{C.elegans} behavior suggests that, while on food, the sensory neurons release dopamine, which, via the action of glutamate, leads to increased switching behavior in the interneurons, resulting in more turns and, consequently, in a trajectory that stays local. When off food, the dopaminergic activity is reduced, and the interneurons reduce their switching frequency, leading to fewer turns and more ground covered. Although \textsl{C.elegans} is the only organism for which the neuromolecular mechanism of ARS is well understood, there is strong evidence of dopaminergic modulation of glutamatergic synapses throughout the major clades of the eumetazoans \cite{acerbo:02,cleland:97}. In insects, for example, dopaminergic neurons in the abdominal ganglion are sparsely distributed, but show large branching patterns, indicative of neuromodulation \cite{nassel:96}. The relation between dopamine and ARS has been documented throughout the invertebrates, especially in the fruit fly \textsl{Drosophila melanogaster} \cite{bainton:00}, in crustaceans \cite{harriswarrick:95}, in \textsl{Aplysia} \cite{due:04}, etc. In all these cases, ARS is limited to food search, which is, clearly, the search problem for which ARS first evolved. In vertebrates, the modification of behavior by dopamine increases in complexity and begins to involve behavior not directly related to food. For example, in frogs and toads dopamine modulation is involved in the visuomotor focus on prey \cite{buxbaumconradi:99}; similar dopaminergic involvement in visuomotor coordination can be found in rats and humans \cite{barrett:04,dursun:99,evenden:93}. This finding is significant in that it indicates a strong relation between ARS and \textsl{inhibition of return}: the fact that viewers show significant latency in revisiting objects or regions of a scene that have already been investigated, together with the lingering of saccadic movements in regions of interest \cite{tipper:94}. The important change in vertebrates is the extension of ARS-like behavior to cover not only actions with an immediate reward, such as the search for food, but also situations in which the reward is projected or even in which the reward itself is a neural state. The detachment from immediate food rewards is what makes it possible to adapt ARS to abstract functions such as \textsl{goal-directed cognition}. It seems, in other words, that when new problems arose that had the same abstract structure as the search for food, animals, rather than developing a new mechanism, co\"opted the dopaminergic modulation that guided food search to work on the new problem. The most important neural structures associated with goal-directed cognition are the \textsl{basal ganglia} and, more specifically, the \textsl{striatum} \cite{delong:90,reiner:94} (Figure~\ref{striatum}). \begin{figure} \setlength{\unitlength}{1em} \begin{picture}(0,0)(0,0) \put(1,-4){\makebox(0,0)[l]{1. lateral medial}} \put(1,-6){\makebox(0,0)[l]{2. globus pallidus}} \put(1,-8){\makebox(0,0)[l]{3. striatum}} \end{picture} \begin{center} \VInsert{graphs/basal-ganglia.jpg}{15em} \end{center} \caption{\small The basal ganglia and, specifically, the striatum have a large number of dopaminergic inputs from other parts of the brain, and are involved in ARS.
While in simple animals like \textsl{C.elegans} ARS is implemented in a simple pathway from sensory input to motor neurons, in mammals the input comes from other areas of the brain, and the output acts on the cortex, making it possible to use ARS for more abstract problems than the immediate search for food.} \label{striatum} \end{figure} Information enters the basal ganglia through the striatum, and a great number of the inputs to the system are dopaminergic \cite{reiner:98}. The structure of the basal ganglia and much of their connectivity are maintained across vertebrates \cite{salas:03}. The major change from anamniotes (fish and amphibians) to amniotes is the proliferation of dopaminergic neurons that project to the striatum \cite{reiner:98}, while the structure of the striatum stays pretty much the same. The balance between glutamate and dopamine in the striatum is key to the proper functioning of ARS-like activities, and an imbalance between the two neurotransmitters is suspected in a number of pathologies affecting goal-directed cognition, including Parkinson's, schizophrenia, and addiction. Many of these conditions can be regarded as a radicalization of ARS in one direction or another (too local or too global) due to imperfect dopamine control (see Figure~\ref{diseases}). \begin{figure} \begin{center} \setlength{\unitlength}{1.5em} \begin{picture}(20,8)(0,0) \put(0,3){ \multiput(0,0)(0,2){2}{\line(1,0){20}} \multiput(0,0)(10,0){3}{\line(0,1){2}} \put(5,1){\makebox(0,0){{\large Focused}}} \put(15,1){\makebox(0,0){{\large Diffused}}} \put(0.1,0.1){\makebox(0,0)[lb]{{\small too much DA}}} \put(19.9,0.1){\makebox(0,0)[rb]{{\small too little DA}}} \put(6,2.2){\vector(1,0){8}} \multiput(8,2.7)(0.5,0){8}{\line(1,0){0.25}} \put(10,2.9){\makebox(0,0)[b]{schizophrenia}} } \put(0,2){ \put(0,0){\vector(0,1){0.8}} \put(0,-0.1){\makebox(0,0)[t]{autism}} } \put(7,2){ \put(0,0){\vector(0,1){0.8}} \put(0,-0.1){\makebox(0,0)[t]{addiction}} } \put(18,2){ \put(0,0){\vector(0,1){0.8}} \put(0,-0.1){\makebox(0,0)[t]{Parkinson's}} } \put(3,6){ \put(0,0){\vector(0,-1){0.8}} \put(0,0.1){\makebox(0,0)[b]{OCD, TS}} } \put(16,6){ \put(0,0){\vector(0,-1){0.8}} \put(0,0.1){\makebox(0,0)[b]{ADHD}} } \end{picture} \end{center} \caption{\small The continuous arrow at the top represents the normal temporal progression of dopaminergic activity in ARS, modulating behavior from focused (i.e.\ local) search to diffuse (i.e.\ global). The placement of the pathologies is qualitative and not based on a model but on the fact that they are treated either with dopamine or with dopamine antagonists. Although dopamine seems to be a factor in these diseases, there are clearly more factors at play. Schizophrenia is the most emblematic case in which the mechanism is not well understood, as reflected by its ambiguous positioning via the dashed line. OCD is \textsl{Obsessive-Compulsive Disorder}, TS \textsl{Tourette Syndrome}, ADHD \textsl{Attention Deficit Hyperactivity Disorder} (from \cite{hills:06}).} \label{diseases} \end{figure} In the striatum, dopaminergic neurons modulate the glutamatergic input at the tips of spiny neurons.
The dopaminergic inputs appear to perform a neuromodulation of the strength of the glutamatergic inputs (Figure~\ref{mammalian}). The mechanism is similar to that described for \textsl{C.elegans} in Figure~\ref{elegans}, but in the mammalian striatum the shape of the spiny neuron has specialized for this function, and the inputs, several orders of magnitude more numerous, come primarily from connections to cortical neurons rather than directly from sensory neurons as in \textsl{C.elegans} \cite{white:86}. However, at the level of the microcircuit, little has changed from nematodes to amniote vertebrates. The origin of the dopaminergic input, on the other hand, marks a fundamental evolutionary shift in the activity range of ARS-like behavior. While in \textsl{C.elegans} or \textsl{Drosophila} the afferent dopaminergic signal is reliably related to the presence or absence of food, in higher vertebrates the signal may represent the \textsl{expectation of a reward}. The critical transition here is from a concrete (directly sensed) reward to its neural representation, that is, from a physical reward to the abstract idea of a reward \cite{schultz:95}. As Hills puts it: \dqt{the evolutionary theory $[$of ARS$]$ is therefore completely consistent with the reward theory of dopamine, but adds the evolutionary hypothesis that the initial reward represented by the release of dopamine were food. Only later was this system co-opted to represent the expectation of a reward, which allows for goal-directed cognition} \cite{hills:06}. To conclude this brief \textsl{excursus} on ARS, we consider a region of relatively recent evolution that, outside of the basal ganglia, is heavily involved in goal-directed behavior: the \textsl{prefrontal cortex} (PFC). The PFC has clearly evolved much later than ARS; nevertheless, it is heavily involved in goal-directed behavior via massive connections to the striatum \cite{odonnell:95}. Dopamine has been shown to be a factor in the sustained activation of the PFC \cite{seamans:98,wang:04}. Most models of the PFC see the r\^ole of dopamine as holding objects in attention long enough for appropriate behavior to be activated \cite{braver:00}. Consistent with ARS, already known solutions mediated by the PFC are the ones most typically tried when a problem has to be solved in a new situation \cite{eichenbaum:01}. The context in which goal-directed cognition takes place includes external and internal stimuli; ARS depends in part on the alignment of external stimuli with previous expectations. This is likely to be controlled by the connections between the PFC and the \textsl{Nucleus Accumbens} (NAcc) in the striatum, which modulate attention, eye movements, and the maintenance of working memory \cite{bertolucci:90,floresco:99,schultz:04}; dopamine has been identified as one of the main influences in the modulation of NAcc activity \cite{floresco:96}: novel stimuli lead to an increase in dopamine in the NAcc and in the PFC \cite{berns:01}.
\begin{figure}
\begin{center}
\VInsert{graphs/mammalian_da.jpg}{18em}
\vspace{-3em}
\end{center}
\caption{\small The dopaminergic-glutamatergic interaction in the synapses of spiny neurons in the mammalian striatum.
Glu and DA represent glutamatergic and dopaminergic pre-synaptic neurons, respectively (from \cite{hills:06}, redrawn from \cite{dani:04}).}
\label{mammalian}
\end{figure}
So, in the evolution of vertebrates, we see a progressive extension of the r\^ole of ARS, from the dopaminergic control of visuomotor coordination in frogs and toads to the similarity-mediated maintenance of ideas in working memory \cite{schultz:95}. ARS appears therefore to be one of the fundamental strategies in the animal kingdom, co-opted and adapted to a number of situations, from the \dqt{run and tumble} behavior of \textsl{E.coli} to the way we focus on and later abandon ideas when we think about a problem.
\separate
This brief explanation of the evolutionary basis of ARS has been centered on its molecular mechanism, especially on the r\^ole of dopamine as a neuromodulator. From now on, however, our focus will change: we shall try to understand the \textsl{exterior} characteristics of the behavior. ARS leads to well-identified patterns of motion, whether in physical space (in the case of foraging), in visual space (scanning an image), or in any number of abstract spaces. We shall study these patterns of motion mathematically and try to characterize them. Our methods will be based mostly on the study of random walks, of diffusion, and of the kinds of anomalous diffusion to which ARS leads.
\section{\sc Optimality of ARS}
\label{genetic}
The evolutionary success of ARS entails, according to the theory of natural selection, that ARS is an optimal strategy---if not globally, at least locally---for a large set of problems. In abstract terms, we have a space with certain resources placed in different parts of it; we need a strategy to navigate this space collecting the greatest amount of resources. This must be done without information on the placement of the resources. (If we can sense from afar where the resources are located, we simply walk there and get them: no search strategy is necessary.) The nature of the resources can be the most diverse: in the case of foraging (to which we shall mostly make reference), the resource is food; in the case of saccadic movements, it is the visual information that we get from the visual field, and so on. ARS is optimal if the resources are \dqt{patchy,} that is, if they are organized in resource-rich patches separated by areas of small or zero resource concentration. In the model that we shall develop in this section, we assume that the resources are consumed in the course of the activity, and that they are not replenished while the activity goes on. Resources are consumed simply by moving on top of them (assuming that, after walking on them, they would be consumed only with a certain probability would not substantially change the model). In the example of food, this means that we have food distributed in patches (a grove, a pond, a herd, a school of fish...) and the forager moves inside the patch and between patches, eating what it finds. We make a number of hypotheses. Firstly, we assume that the food doesn't move around or, if it does (as is the case with animal prey), that its movement is not significant, so that food can be modeled as static. Secondly, we assume that the forager will eat all the food it can find as soon as it finds it (its eyesight is perfect and its appetite endless). Finally, food doesn't grow back: once it has been eaten at a particular location, that location will remain barren for the rest of the forager's activity.
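To fix ideas, the following minimal sketch (in Python; all names are ours, and the patch geometry anticipates the parameters $p$, $q$, and $\rho$ defined in the next paragraphs) implements such a patchy, consume-on-contact world:

{\small\begin{verbatim}
import math

class PatchyWorld:
    # Food pellets arranged in p x p patches, one patch in the
    # lower-left corner of each q x q plot, so that the density
    # of food is rho ~ p^2/q^2.  Pellets are consumed on contact
    # and never grow back.  (p = 16 as in the tests below; the
    # default rho is a placeholder.)
    def __init__(self, p=16, rho=0.25):
        self.p = p
        self.q = math.ceil(p / math.sqrt(rho))
        self.eaten = set()   # locations whose pellet is gone

    def eat(self, x, y):
        # Return 1 (and consume the pellet) if (x, y) falls on a
        # pellet that has not been eaten yet; return 0 otherwise.
        cx, cy = int(math.floor(x)), int(math.floor(y))
        if (cx, cy) in self.eaten:
            return 0
        if cx % self.q < self.p and cy % self.q < self.p:
            self.eaten.add((cx, cy))
            return 1
        return 0
\end{verbatim}}

The score of a forager is then simply the sum of \texttt{eat(x,y)} over the positions it visits; note that nothing needs to be stored for plots that have never been walked on, which is what makes the foraging field virtually infinite.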
In the case of saccades, we assume (as is often the case) that there are patchy areas in the visual field that are rich in information useful to interpret the scene (relevant or telling objects, faces, etc.). We also assume that once we have analyzed the information in a given area of the visual field, that information is remembered and it is not necessary to analyze it again. This is equivalent to the hypothesis that the food doesn't grow back once it has been eaten.
\separate
In this section, we want to check whether ARS emerges as an optimal solution to the foraging problem. We shall do this by implementing a genetic algorithm based on a competition among individuals whose characteristics are encoded in a string of bits called a \textsl{gene} \cite{mitchell:98}. Individuals move around a \textsl{foraging area} under the guidance of their gene and collect food. Their \textsl{score}, which determines their fitness for survival, is the amount of food they have collected. The world in which these individuals move is a regular grid of patches of food of $p\times{p}$ ($p\in{\mathbb{N}}$) \textsl{pellets}, separated by barren areas without food (see Figure~\ref{feedenvironment}). Each pellet is an atomic unit of food, that is, it is either consumed entirely or not consumed at all. The consumption of each pellet increases the survival fitness by one unit.
\begin{figure}[btp]
\begin{center}
\setlength{\unitlength}{1.1em}
\begin{picture}(21,21)(-3,-3)
\newsavebox{\patch}
\savebox{\patch}{
\thicklines
\multiput(0,0)(3,0){2}{\line(0,1){3}}
\multiput(0,0)(0,3){2}{\line(1,0){3}}
\thinlines
\multiput(0.25,0.25)(0.25,0){11}{
\multiput(0,0)(0,0.25){11}{\circle*{0.0001}}
}
}
\multiput(0,0)(0,5){3}{
\multiput(0,0)(5,0){3}{\usebox{\patch}}%
}
\multiput(5,5)(0,5){2}{
\multiput(0,0)(0.5,0){10}{%
\multiput(0,0)(0,0.05){2}{\line(1,0){0.25}}}%
}
\multiput(5,5)(5,0){2}{
\multiput(0,0)(0,0.5){10}{%
\multiput(0,0)(0.05,0){2}{\line(0,1){0.25}}}%
}
\multiput(-1,1.5)(0,5){3}{%
\multiput(0,0)(-1,0){3}{\circle*{0.1}}
}
\multiput(14,1.5)(0,5){3}{%
\multiput(0,0)(1,0){3}{\circle*{0.1}}
}
\multiput(1.5,-1)(5,0){3}{%
\multiput(0,0)(0,-1){3}{\circle*{0.1}}
}
\multiput(1.5,14)(5,0){3}{%
\multiput(0,0)(0,1){3}{\circle*{0.1}}
}
\put(5,5){%
\circle*{0.15}%
\put(-0.1,-0.1){\makebox(0,0)[tr]{$(0,0)$}} %
\multiput(0,-0.75)(3,0){2}{\line(0,1){0.5}}%
\put(0,-0.5){\line(1,0){3}}%
\put(1.5,-0.55){\makebox(0,0)[t]{$p$}} %
\multiput(0,-1.75)(5,0){2}{\line(0,1){0.5}}%
\put(0,-1.5){\line(1,0){5}}%
\put(2.5,-1.45){\makebox(0,0)[b]{$q$}} %
\multiput(-0.75,0)(0,3){2}{\line(1,0){0.5}}%
\put(-0.5,0){\line(0,1){3}}%
\put(-0.55,1.5){\makebox(0,0)[r]{$p$}} %
\multiput(-1.75,0)(0,5){2}{\line(1,0){0.5}}%
\put(-1.5,0){\line(0,1){5}}%
\put(-1.45,2.5){\makebox(0,0)[l]{$q$}}
\put(1.5,1.5){\circle*{0.2}}
\put(6.5,4){\vector(-2,-1){4.8}}
\put(6.5,4){\makebox(0,0)[lb]{$(x_0,y_0)$}}
}
\end{picture}
\end{center}
\caption{\small The environment for the application of genetic algorithms to the evolution of ARS. The dotted squares represent the patches composed of $p\times{p}$ pellets of food, where $p$ is a program parameter. The distance between the patches, $q$, is determined by the desired density of food, $\rho$ (also a program parameter) through the relation $q=\lceil{p/\sqrt{\rho}}\rceil$. In the implementation, patches are generated dynamically the first time that the forager walks on them so that the foraging field is virtually infinite.
}
\label{feedenvironment}
\end{figure}
Each patch is separated from the others by being placed in the lower-left corner of a larger square of $q\times{q}$ units ($q\in{\mathbb{N}}$, $q>p$) called a \textsl{plot}. The density of the food is $\rho=p^2/q^2$. The plots are created dynamically at run time as the walker steps on them for the first time, so that the foraging field is virtually infinite. In all the tests discussed below, $p$ is kept fixed ($p=16$), $\rho$ is a parameter that varies from $\rho=0.01$ to $\rho=0.95$, and $q$ is determined as $q=\lceil{p/\sqrt{\rho}}\rceil$. Each individual does a random walk (specified by certain parameters, as described below) starting at $(x_0,y_0)=(p/2,p/2)$, that is, in the center of the patch whose lower-left corner is the origin.
\subsection{The walk parameters}
Each time the walker walks on a position containing a pellet, it \dqt{eats} it, incrementing its score (which determines its evolutionary fitness) by one. The pellet is removed, so that further visits to the location will not provide any food (Figure~\ref{eating}). The movement of each individual is a random walk whose statistical features depend on whether the individual is currently eating (status: on-food) or whether it has been without food for some time (status: off-food). The individual doesn't go \dqt{off-food} immediately when it steps on a location with no food: the individual has memory, so that it gradually changes its status from on-food to off-food over a certain number of time steps. The amount of time without food that it takes to reach the off-food status is controlled by a parameter in the gene of the individual.
\begin{figure}[tbhp]
\begin{center}
\setlength{\unitlength}{1.5em}
\begin{picture}(25,7)(0,-1)
\savebox{\patch}{
\thicklines
\multiput(0,0)(1,0){2}{\line(0,1){1}}
\multiput(0.1,0)(1,0){2}{\line(0,1){1}}
\multiput(0,0)(0,1){2}{\line(1,0){1}}
\multiput(0.1,0)(0,1){2}{\line(1,0){1}}
\thinlines
\multiput(0.125,0.125)(0.125,0){7}{
\multiput(0,0)(0,0.125){7}{\circle*{0.0001}}
}
}
\multiput(2,0)(0,1){7}{\line(1,0){6}}
\multiput(2,0)(1,0){7}{\line(0,1){6}}
\multiput(-0.5,2.5)(0.25,0){16}{\line(1,0){0.125}}
\multiput(3.5,2.5)(0.125,0.125){8}{\circle*{0.001}}
\multiput(4.5,3.5)(0.125,-0.125){16}{\circle*{0.001}}
\multiput(6.5,1.5)(0.125,0.125){8}{\circle*{0.001}}
\multiput(7.5,2.5)(0,0.25){8}{\line(0,1){0.125}}
\multiput(7.5,4.5)(0.25,0){8}{\line(1,0){0.125}}
\put(9,4){\makebox(0,0)[tl]{path}}
\put(14,0){
\multiput(0,0)(0,1){7}{\line(1,0){6}}
\multiput(0,0)(1,0){7}{\line(0,1){6}}
\multiput(0,0)(1,0){6}{\usebox{\patch}}
\put(0,1){\usebox{\patch}}
\put(1,1){\usebox{\patch}}
\put(2,1){\usebox{\patch}}
\put(3,1){\usebox{\patch}}
\put(5,1){\usebox{\patch}}
\put(2,2){\usebox{\patch}}
\put(4,2){\usebox{\patch}}
\put(0,3){\usebox{\patch}}
\put(1,3){\usebox{\patch}}
\put(3,3){\usebox{\patch}}
\put(4,3){\usebox{\patch}}
\multiput(0,4)(1,0){5}{\usebox{\patch}}
\multiput(0,5)(1,0){6}{\usebox{\patch}}
\put(3,-0.5){\makebox(0,0)[t]{patch after the walk}}
\put(3,-1){\makebox(0,0)[t]{(score: 8)}}
}
\end{picture}
\end{center}
\caption{\small A walk of an individual through a patch of food: each square crossed by the individual is \dqt{eaten,} and is removed from the patch.
In this case, the individual passes over 8 pellets: after the walk, its score is eight, and eight pellets are removed from the patch.}
\label{eating}
\end{figure}
The general behavior of the random walk is the same regardless of whether the individual is on-food or off-food (the only thing that changes in the two cases is the numerical value of the parameters). Consider a generic situation in which the parameters are $(\alpha_0,l_0)$. The individual is coming from a direction $\theta$, being currently at location $(x,y)$. The individual chooses a deviation angle $\alpha$, selected from a Gaussian distribution centered at $\alpha_0$, and a length $l$, selected from an exponential distribution with average $l_0$, and performs a jump in a direction at an angle $\alpha$ from its current direction, and for a length $l$ (Figure~\ref{jump}). The parameters $(\alpha_0,l_0)$ characterize the statistics of the jump; they are encoded in the individual's gene, and they take different values depending on whether the individual is on-food or off-food. When the individual is on-food, the jumps are done according to the parameters $(\alpha_f,l_f)$, while when the individual has been for some time off-food, the jumps are done according to $(\alpha_n,l_n)$. The switch from the on-food parameters to the off-food ones is done gradually when the individual is in an area without food. The gene defines a memory threshold $\tau$ for the switch from the on-food to the off-food behavior. If $t$ is the number of steps that the individual has been off-food ($t=0$ if the individual is on-food), then the parameters for the next jump will be
\begin{equation}
(\alpha_0,l_0) =
\left\{
\begin{array}{ll}
(\alpha_f,l_f) & \mbox{on food} \\
~ \\
\displaystyle \Bigl(\alpha_f + \frac{t}{\tau}(\alpha_n-\alpha_f),\, l_f + \frac{t}{\tau} (l_n-l_f)\Bigr) & \mbox{off food, } t < \tau \\
~ \\
(\alpha_n,l_n) & \mbox{off food, } t \ge \tau
\end{array}
\right.
\end{equation}
\begin{figure}[btp]
\begin{center}
\setlength{\unitlength}{1em}
\begin{picture}(45,15)(0,-3.5)
\thicklines
\put(0,0){\line(1,0){13}}
\thinlines
\put(1,0){\line(3,1){6}}
\multiput(7,2)(0.3,0.1){18}{\circle*{0.001}}
\put(4.5,0.3){\makebox(0,0)[b]{$\theta$}}
\put(7,1.8){\makebox(0,0)[tl]{$(x,y)$}}
\put(7,2){\circle*{0.3}}
\thicklines
\put(7,2){\line(1,2){2}}
\put(7.1,2.1){\line(1,2){2}}
\put(6.5,4.5){\vector(1,2){1}}
\put(6.5,4.5){\vector(-1,-2){1}}
\put(6.4,4.6){\makebox(0,0)[rb]{$l$}}
\put(8.5,3.5){\makebox(0,0)[l]{$\alpha$}}
\put(6,-3.5){\makebox(0,0){\pmb{(a)}}}
\put(21,-3.5){\makebox(0,0){\pmb{(b)}}}
\put(36,-3.5){\makebox(0,0){\pmb{(c)}}}
\put(15,-3){
\setlength{\unitlength}{2.5em}
\begin{picture}(5,5)(-2.5,-2.5)
\put(-2.5,0){\line(1,0){5}}
\put(0,-2.5){\line(0,1){5}}
% [panel (b): point-by-point plot of the Gaussian angle distribution in polar form, with reference circles of radius 1 and 2; coordinate data omitted]
\put(0,0){\line(1,2){1}}
\put(1.1,2.1){\makebox(0,0)[lb]{$\alpha_0$}}
\put(2,-0.1){\makebox(0,0)[t]{\small{2.0}}}
\put(-2,-0.1){\makebox(0,0)[t]{\small{-2.0}}}
\put(1,-0.1){\makebox(0,0)[t]{\small{1.0}}}
\put(-1,-0.1){\makebox(0,0)[t]{\small{-1.0}}}
\end{picture}
}
\put(30,-2){
\setlength{\unitlength}{2.5em}
\begin{picture}(6,4)(0,0)
\put(0,0){\line(1,0){6}}
\put(0,0){\line(0,1){4}}
% [panel (c): point-by-point plot of the exponential length distribution; coordinate data omitted]
\put( -0.200, 3.000){\makebox(0,0)[r]{$l_0$}}
\multiput( 3.000, 0.000)(0,0.25){8}{\line(0,1){0.125}}
\put( 3.000, -0.200){\makebox(0,0)[t]{$1/l_0$}}
\end{picture}
}
\end{picture}
\end{center}
\caption{\small Generation of a single jump of the random walk. In (a), a walker is coming to point $(x,y)$ following a trajectory with direction $\theta$.
An angle $\alpha$ is chosen from the Gaussian angle distribution in (b), centered around the angle $\alpha_0$: the direction of the new jump is $\alpha$ from the previous direction (viz.\ $\theta+\alpha$ absolute direction). The length of the jump is drawn from the exponential distribution with average $l_0$ in (c).}
\label{jump}
\end{figure}
The parameters of the jump are reset to $(\alpha_f,l_f)$ as soon as the individual finds food. That is, there is a lingering memory that food was around there even if currently no food is found---a memory that fades away in a time $\tau$---but the absence of food is forgotten as soon as new food is found. Any moral or philosophical conclusion, be it positive or negative, that can be drawn from this hypothesis is beyond the scope of these notes.
\subsection{Gene definition}
Each individual is therefore characterized by five parameters: $(\alpha_f,l_f,\alpha_n,l_n,\tau)$. We represent each one as an 8-bit value (these values are scaled in order to compute the actual values of the parameters) and collect them in a 40-bit \dqt{gene.} We try to keep related parameters in nearby positions of the gene (this is believed to speed up the convergence of the algorithm). Calling $A_F$, $L_F$, $A_N$, $L_N$, and $T$ the 8-bit representations of the parameters, we have the genetic representation of an individual in Figure~\ref{gene}.
\begin{figure}[tbh]
\begin{center}
\setlength{\unitlength}{1.5em}
\begin{picture}(25,3)(0,2)
\multiput(0,4)(0,1){2}{\line(1,0){25}}
\multiput(0,4)(5,0){6}{\line(0,1){1}}
\multiput(0,3)(5,0){6}{\line(0,1){0.75}}
\put(0,2.8){\makebox(0,0)[t]{0}}
\put(5,2.8){\makebox(0,0)[t]{7}}
\put(10,2.8){\makebox(0,0)[t]{15}}
\put(15,2.8){\makebox(0,0)[t]{23}}
\put(20,2.8){\makebox(0,0)[t]{31}}
\put(25,2.8){\makebox(0,0)[t]{39}}
\put(2.5,4.5){\makebox(0,0){$A_F$}}
\put(7.5,4.5){\makebox(0,0){$L_F$}}
\put(12.5,4.5){\makebox(0,0){$T$}}
\put(17.5,4.5){\makebox(0,0){$A_N$}}
\put(22.5,4.5){\makebox(0,0){$L_N$}}
\put(0,2){\vector(1,0){1}}
\put(1.1,2){\makebox(0,0)[l]{bits}}
\end{picture}
\end{center}
\caption{\small The genetic representation of an individual. The five 8-bit parameters are scaled to provide the jump parameters $(\alpha_f,l_f,\alpha_n,l_n,\tau)$.}
\label{gene}
\end{figure}
The jump parameters are derived from these 8-bit integers as
\begin{equation}
\label{normal}
\begin{array}{ccc}
\displaystyle \alpha_f=2\pi\frac{A_F}{256} & \displaystyle l_f = \frac{L_F}{4} & \displaystyle \tau = \frac{T}{10} \\
& \\
\displaystyle \alpha_n=2\pi\frac{A_N}{256} & \displaystyle l_n = \frac{L_N}{4}
\end{array}
\end{equation}
The scaling factors (except those for $\alpha_f$ and $\alpha_n$, which are derived from geometric considerations) have been determined by trial and error.
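Putting Figure~\ref{gene} and Eq.~(\ref{normal}) together, the decoding of a gene and the generation of a single jump can be sketched as follows (in Python; the bit order within the gene and the unit variance of the Gaussian are our own assumptions, since the text only fixes the mean $\alpha_0$):

{\small\begin{verbatim}
import math
import random

def decode_gene(gene):
    # Split the 40-bit integer into the five 8-bit fields A_F, L_F,
    # T, A_N, L_N (here bit 0 is taken as the most significant; the
    # opposite convention works equally well, as long as encoding
    # and decoding agree), then scale them as in Eq. (normal).
    A_F = (gene >> 32) & 0xFF
    L_F = (gene >> 24) & 0xFF
    T   = (gene >> 16) & 0xFF
    A_N = (gene >>  8) & 0xFF
    L_N =  gene        & 0xFF
    return (2 * math.pi * A_F / 256,   # alpha_f
            L_F / 4,                   # l_f
            2 * math.pi * A_N / 256,   # alpha_n
            L_N / 4,                   # l_n
            T / 10)                    # tau

def next_jump(x, y, theta, t, gene):
    # One step of the walk: interpolate the jump parameters
    # according to the time t spent off-food, then draw the
    # deviation angle from a Gaussian centered at alpha_0 and the
    # length from an exponential with mean l_0.
    alpha_f, l_f, alpha_n, l_n, tau = decode_gene(gene)
    w = min(t / tau, 1.0) if tau > 0 else 1.0
    alpha_0 = alpha_f + w * (alpha_n - alpha_f)
    l_0 = l_f + w * (l_n - l_f)
    alpha = random.gauss(alpha_0, 1.0)  # unit variance: an assumption
    l = random.expovariate(1 / l_0) if l_0 > 0 else 0.0
    theta += alpha
    return x + l * math.cos(theta), y + l * math.sin(theta), theta
\end{verbatim}}

During a walk, $t$ is reset to zero whenever food is found, which also resets the parameters to $(\alpha_f,l_f)$ as required.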
\begin{figure}[tbhp]
\begin{center}
\setlength{\unitlength}{1.5em}
\begin{picture}(20,12)(0,0)
\put(4,1){\line(0,1){11}}
\put(11,1){\line(0,1){11}}
\put(4,0.8){\makebox(0,0)[t]{a}}
\put(11,0.8){\makebox(0,0)[t]{b}}
\put(0,2){
\thicklines
\multiput(0,0)(0,1){2}{\line(1,0){14}}
\multiput(0,0)(14,0){2}{\line(0,1){1}}
\thinlines
\put(2,0.5){\makebox(0,0){$B_1$}}
\put(7.5,0.5){\makebox(0,0){$A_2$}}
\put(12.5,0.5){\makebox(0,0){$B_3$}}
\put(16,0.5){\makebox(0,0)[l]{offspring 2}}
}
\put(0,4){
\thicklines
\multiput(0,0)(0,1){2}{\line(1,0){14}}
\multiput(0,0)(14,0){2}{\line(0,1){1}}
\thinlines
\put(2,0.5){\makebox(0,0){$A_1$}}
\put(7.5,0.5){\makebox(0,0){$B_2$}}
\put(12.5,0.5){\makebox(0,0){$A_3$}}
\put(16,0.5){\makebox(0,0)[l]{offspring 1}}
}
\put(0,7){
\thicklines
\multiput(0,0)(0,1){2}{\line(1,0){14}}
\multiput(0,0)(14,0){2}{\line(0,1){1}}
\thinlines
\put(2,0.5){\makebox(0,0){$B_1$}}
\put(7.5,0.5){\makebox(0,0){$B_2$}}
\put(12.5,0.5){\makebox(0,0){$B_3$}}
\put(16,0.5){\makebox(0,0)[l]{parent B}}
}
\put(0,9){
\thicklines
\multiput(0,0)(0,1){2}{\line(1,0){14}}
\multiput(0,0)(14,0){2}{\line(0,1){1}}
\thinlines
\put(2,0.5){\makebox(0,0){$A_1$}}
\put(7.5,0.5){\makebox(0,0){$A_2$}}
\put(12.5,0.5){\makebox(0,0){$A_3$}}
\put(16,0.5){\makebox(0,0)[l]{parent A}}
}
\end{picture}
\end{center}
\caption{\small Double cut for the generation of offspring. Two parents generate two offspring, mixing the bits of their genes as represented.}
\label{offsprings}
\end{figure}
\subsection{The algorithm}
The genetic algorithm is pretty standard. A \textsl{generation} is a set of individuals. Each individual is placed in the environment in the same initial position, and does a random walk of predetermined length, according to the parameters encoded in its gene, collecting pellets of food as specified above. The environment is restored between individuals, so that each one has the same initial supply of food (this entails that there is no competition among the individuals). A point is scored for each pellet that is eaten. As a result, after all individuals have executed a random walk, individual number $k$, characterized by gene $\gamma_k$, has a score $s_k$, with $k=1,\ldots,G$, where $G$ is the number of individuals in a generation. There are several methods to create the following generation of individuals. Since the performance of the algorithm seems to have little dependence on the specific method used, we use one of the simplest, based on the creation of an intermediate \textsl{gene pool}. The gene pool is a set ${\mathcal{P}}$ of $P$ individuals, possibly with repetitions (generally $P=G$: the pool has the same size as a generation), such that the number of \dqt{copies} of an individual in the pool is proportional to its score.
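The crossing and mutation operators used below (the \textsl{double cut} of Figure~\ref{offsprings} and a bit-flip mutation, both described in detail in the following paragraphs) admit a compact sketch, with genes represented as 40-bit integers (in Python; the names and the placeholder value of the mutation probability $p_m$ are ours):

{\small\begin{verbatim}
import random

GENE_BITS = 40

def double_cut(parent_a, parent_b):
    # Choose two cut points a < b and swap the middle segment
    # of the two genes, as in the figure above.
    a, b = sorted(random.sample(range(GENE_BITS), 2))
    mid = ((1 << b) - 1) ^ ((1 << a) - 1)   # mask for bits a..b-1
    off1 = (parent_a & ~mid) | (parent_b & mid)
    off2 = (parent_b & ~mid) | (parent_a & mid)
    return off1, off2

def mutate(gene, p_m=0.01):
    # With (small) probability p_m, flip one random bit.
    if random.random() < p_m:
        gene ^= 1 << random.randrange(GENE_BITS)
    return gene
\end{verbatim}}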
An easy algorithm for generating a pool is the \textsl{tournament}: we make $P$ comparisons between pairs of individuals drawn at random from the generation; in each comparison, the individual with the higher score goes into the pool (here ${\tt rnd}(1,G)$ returns a uniformly distributed random integer in $[1,G]$):

\parbox{\columnwidth}{
{\tt \cb
\> ${\mathcal{P}}$ $\leftarrow$ $\emptyset$ \\
\> \cmd{for} k=1 \cmd{to} $P$ \cmd{do} \\
\> \> i $\leftarrow$ rnd(1,G) \\
\> \> j $\leftarrow$ rnd(1,G) \\
\> \> \cmd{if} $s_i\ge{s_j}$ \cmd{then} \\
\> \> \> ${\mathcal{P}}$ $\leftarrow$ ${\mathcal{P}} \cup \{\gamma_i\}$ \\
\> \> \cmd{else} \\
\> \> \> ${\mathcal{P}}$ $\leftarrow$ ${\mathcal{P}} \cup \{\gamma_j\}$ \\
\> \> \cmd{fi} \\
\> \cmd{od} \ce
}
}

In order to build the next generation, pairs of genes are taken at random from the pool (with uniform distribution) and crossed to create two new individuals that will go into the next generation (this requires that $G$ be even). We use the method of the \textsl{double cut} to cross the genes. Two values $a,b\in[0,39]$ are chosen randomly. The two offspring are then generated as in Figure~\ref{offsprings}, in which we assume $a<b$. We also define a small mutation probability: for each new gene, with a (small) probability $p$, we pick a random bit and flip it.

Note that this method doesn't guarantee that the best individual of a generation will pass unchanged to the next, so we actually use the crossing to create $G-2$ individuals, to which we add the two best performers of the previous generation (a simple form of elitism).
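Under the same conventions as above (a gene is a list of 40 bits), the double cut of Figure~\ref{offsprings}, the mutation, and the elitist assembly of the next generation might look as follows; again, this is a sketch, and the function names are ours:

\begin{verbatim}
import random

def double_cut(parent_a, parent_b):
    # Two distinct cut points a < b in [0, 39]; the offspring
    # swap the middle segment, as in Figure "offsprings".
    a, b = sorted(random.sample(range(40), 2))
    child_1 = parent_a[:a] + parent_b[a:b] + parent_a[b:]
    child_2 = parent_b[:a] + parent_a[a:b] + parent_b[b:]
    return child_1, child_2

def mutate(gene, p):
    # With (small) probability p, flip one random bit.
    if random.random() < p:
        gene[random.randrange(40)] ^= 1
    return gene

def next_generation(pool, elite, p):
    # elite: the two best genes of the previous generation,
    # passed through unchanged; the other G - 2 individuals
    # are created by crossing pairs drawn from the pool.
    G = len(pool)
    new = [gene[:] for gene in elite]
    while len(new) < G:
        pa, pb = random.choice(pool), random.choice(pool)
        for child in double_cut(pa, pb):
            new.append(mutate(child, p))
    return new[:G]
\end{verbatim}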
\subsection{Results}
Figure~\ref{paths} shows typical paths of the best individual for various values of the density of food, while Table~\ref{params} shows the values of the parameters for the same individual.

\begin{figure}[btp]
\setlength{\unitlength}{15em}
\begin{center}
\begin{tabular}{cc}
\setlength{\unitlength}{00.02em}
\begin{picture}(800,800)(-160,-480)
\ifx\envbox\undefined\newsavebox{\envbox}\fi
\multiput(-160,-480)(800,0){2}{\line(0,1){800}}
\multiput(-160,-480)(0,800){2}{\line(1,0){800}}
\savebox{\envbox}{
\multiput(0,0)(16,0){2}{\line(0,1){16}}
\multiput(0,0)(0,16){2}{\line(1,0){16}}
}
% Machine-generated content of the plot (the \put commands that
% place the food cells and the several thousand \put commands
% that draw the path) omitted here.
\put(57.98,-224.86){\circle*{0.000001}} \put(58.69,-225.57){\circle*{0.000001}} \put(59.40,-226.27){\circle*{0.000001}} \put(60.10,-226.27){\circle*{0.000001}} \put(60.81,-226.98){\circle*{0.000001}} \put(61.52,-226.98){\circle*{0.000001}} \put(62.23,-227.69){\circle*{0.000001}} \put(62.93,-227.69){\circle*{0.000001}} \put(63.64,-228.40){\circle*{0.000001}} \put(63.64,-228.40){\circle*{0.000001}} \put(64.35,-228.40){\circle*{0.000001}} \put(65.05,-228.40){\circle*{0.000001}} \put(65.76,-228.40){\circle*{0.000001}} \put(66.47,-228.40){\circle*{0.000001}} \put(67.18,-228.40){\circle*{0.000001}} \put(67.88,-228.40){\circle*{0.000001}} \put(68.59,-229.10){\circle*{0.000001}} \put(69.30,-229.10){\circle*{0.000001}} \put(70.00,-229.10){\circle*{0.000001}} \put(70.71,-229.10){\circle*{0.000001}} \put(71.42,-229.10){\circle*{0.000001}} \put(72.12,-229.10){\circle*{0.000001}} \put(72.83,-229.10){\circle*{0.000001}} \put(73.54,-229.10){\circle*{0.000001}} \put(74.25,-229.10){\circle*{0.000001}} \put(74.95,-229.10){\circle*{0.000001}} \put(75.66,-229.10){\circle*{0.000001}} \put(76.37,-229.10){\circle*{0.000001}} \put(77.07,-229.81){\circle*{0.000001}} \put(77.78,-229.81){\circle*{0.000001}} \put(78.49,-229.81){\circle*{0.000001}} \put(79.20,-229.81){\circle*{0.000001}} \put(79.90,-229.81){\circle*{0.000001}} \put(80.61,-229.81){\circle*{0.000001}} \put(81.32,-229.81){\circle*{0.000001}} \put(82.02,-229.81){\circle*{0.000001}} \put(82.73,-229.81){\circle*{0.000001}} \put(83.44,-229.81){\circle*{0.000001}} \put(84.15,-229.81){\circle*{0.000001}} \put(84.85,-229.81){\circle*{0.000001}} \put(85.56,-229.81){\circle*{0.000001}} \put(86.27,-230.52){\circle*{0.000001}} \put(86.97,-230.52){\circle*{0.000001}} \put(87.68,-230.52){\circle*{0.000001}} \put(88.39,-230.52){\circle*{0.000001}} \put(89.10,-230.52){\circle*{0.000001}} \put(89.80,-230.52){\circle*{0.000001}} \put(90.51,-230.52){\circle*{0.000001}} \put(91.22,-230.52){\circle*{0.000001}} \put(91.92,-230.52){\circle*{0.000001}} \put(92.63,-230.52){\circle*{0.000001}} \put(93.34,-230.52){\circle*{0.000001}} \put(94.05,-230.52){\circle*{0.000001}} \put(94.75,-231.22){\circle*{0.000001}} \put(95.46,-231.22){\circle*{0.000001}} \put(96.17,-231.22){\circle*{0.000001}} \put(96.87,-231.22){\circle*{0.000001}} \put(97.58,-231.22){\circle*{0.000001}} \put(98.29,-231.22){\circle*{0.000001}} \put(98.99,-231.22){\circle*{0.000001}} \put(99.70,-231.22){\circle*{0.000001}} \put(100.41,-231.22){\circle*{0.000001}} \put(101.12,-231.22){\circle*{0.000001}} \put(101.82,-231.22){\circle*{0.000001}} \put(102.53,-231.22){\circle*{0.000001}} \put(103.24,-231.93){\circle*{0.000001}} \put(103.94,-231.93){\circle*{0.000001}} \put(104.65,-231.93){\circle*{0.000001}} \put(105.36,-231.93){\circle*{0.000001}} \put(106.07,-231.93){\circle*{0.000001}} \put(106.77,-231.93){\circle*{0.000001}} \put(107.48,-231.93){\circle*{0.000001}} \put(107.48,-231.93){\circle*{0.000001}} \put(108.19,-231.22){\circle*{0.000001}} \put(108.19,-230.52){\circle*{0.000001}} \put(108.89,-229.81){\circle*{0.000001}} \put(108.89,-229.10){\circle*{0.000001}} \put(109.60,-228.40){\circle*{0.000001}} \put(109.60,-227.69){\circle*{0.000001}} \put(110.31,-226.98){\circle*{0.000001}} \put(110.31,-226.27){\circle*{0.000001}} \put(111.02,-225.57){\circle*{0.000001}} \put(111.02,-224.86){\circle*{0.000001}} \put(111.72,-224.15){\circle*{0.000001}} \put(111.72,-223.45){\circle*{0.000001}} \put(112.43,-222.74){\circle*{0.000001}} \put(112.43,-222.03){\circle*{0.000001}} \put(113.14,-221.32){\circle*{0.000001}} 
\put(113.84,-220.62){\circle*{0.000001}} \put(113.84,-219.91){\circle*{0.000001}} \put(114.55,-219.20){\circle*{0.000001}} \put(114.55,-218.50){\circle*{0.000001}} \put(115.26,-217.79){\circle*{0.000001}} \put(115.26,-217.08){\circle*{0.000001}} \put(115.97,-216.37){\circle*{0.000001}} \put(115.97,-215.67){\circle*{0.000001}} \put(116.67,-214.96){\circle*{0.000001}} \put(116.67,-214.25){\circle*{0.000001}} \put(117.38,-213.55){\circle*{0.000001}} \put(117.38,-212.84){\circle*{0.000001}} \put(118.09,-212.13){\circle*{0.000001}} \put(118.09,-211.42){\circle*{0.000001}} \put(118.79,-210.72){\circle*{0.000001}} \put(119.50,-210.01){\circle*{0.000001}} \put(119.50,-209.30){\circle*{0.000001}} \put(120.21,-208.60){\circle*{0.000001}} \put(120.21,-207.89){\circle*{0.000001}} \put(120.92,-207.18){\circle*{0.000001}} \put(120.92,-206.48){\circle*{0.000001}} \put(121.62,-205.77){\circle*{0.000001}} \put(121.62,-205.06){\circle*{0.000001}} \put(122.33,-204.35){\circle*{0.000001}} \put(122.33,-203.65){\circle*{0.000001}} \put(123.04,-202.94){\circle*{0.000001}} \put(123.04,-202.23){\circle*{0.000001}} \put(123.74,-201.53){\circle*{0.000001}} \put(124.45,-200.82){\circle*{0.000001}} \put(124.45,-200.11){\circle*{0.000001}} \put(125.16,-199.40){\circle*{0.000001}} \put(125.16,-198.70){\circle*{0.000001}} \put(125.87,-197.99){\circle*{0.000001}} \put(125.87,-197.28){\circle*{0.000001}} \put(126.57,-196.58){\circle*{0.000001}} \put(126.57,-195.87){\circle*{0.000001}} \put(127.28,-195.16){\circle*{0.000001}} \put(127.28,-194.45){\circle*{0.000001}} \put(127.99,-193.75){\circle*{0.000001}} \put(127.99,-193.04){\circle*{0.000001}} \put(128.69,-192.33){\circle*{0.000001}} \put(128.69,-191.63){\circle*{0.000001}} \put(129.40,-190.92){\circle*{0.000001}} \put(85.56,-187.38){\circle*{0.000001}} \put(86.27,-187.38){\circle*{0.000001}} \put(86.97,-187.38){\circle*{0.000001}} \put(87.68,-187.38){\circle*{0.000001}} \put(88.39,-187.38){\circle*{0.000001}} \put(89.10,-187.38){\circle*{0.000001}} \put(89.80,-187.38){\circle*{0.000001}} \put(90.51,-188.09){\circle*{0.000001}} \put(91.22,-188.09){\circle*{0.000001}} \put(91.92,-188.09){\circle*{0.000001}} \put(92.63,-188.09){\circle*{0.000001}} \put(93.34,-188.09){\circle*{0.000001}} \put(94.05,-188.09){\circle*{0.000001}} \put(94.75,-188.09){\circle*{0.000001}} \put(95.46,-188.09){\circle*{0.000001}} \put(96.17,-188.09){\circle*{0.000001}} \put(96.87,-188.09){\circle*{0.000001}} \put(97.58,-188.09){\circle*{0.000001}} \put(98.29,-188.09){\circle*{0.000001}} \put(98.99,-188.80){\circle*{0.000001}} \put(99.70,-188.80){\circle*{0.000001}} \put(100.41,-188.80){\circle*{0.000001}} \put(101.12,-188.80){\circle*{0.000001}} \put(101.82,-188.80){\circle*{0.000001}} \put(102.53,-188.80){\circle*{0.000001}} \put(103.24,-188.80){\circle*{0.000001}} \put(103.94,-188.80){\circle*{0.000001}} \put(104.65,-188.80){\circle*{0.000001}} \put(105.36,-188.80){\circle*{0.000001}} \put(106.07,-188.80){\circle*{0.000001}} \put(106.77,-188.80){\circle*{0.000001}} \put(107.48,-188.80){\circle*{0.000001}} \put(108.19,-189.50){\circle*{0.000001}} \put(108.89,-189.50){\circle*{0.000001}} \put(109.60,-189.50){\circle*{0.000001}} \put(110.31,-189.50){\circle*{0.000001}} \put(111.02,-189.50){\circle*{0.000001}} \put(111.72,-189.50){\circle*{0.000001}} \put(112.43,-189.50){\circle*{0.000001}} \put(113.14,-189.50){\circle*{0.000001}} \put(113.84,-189.50){\circle*{0.000001}} \put(114.55,-189.50){\circle*{0.000001}} \put(115.26,-189.50){\circle*{0.000001}} \put(115.97,-189.50){\circle*{0.000001}} 
\put(116.67,-190.21){\circle*{0.000001}} \put(117.38,-190.21){\circle*{0.000001}} \put(118.09,-190.21){\circle*{0.000001}} \put(118.79,-190.21){\circle*{0.000001}} \put(119.50,-190.21){\circle*{0.000001}} \put(120.21,-190.21){\circle*{0.000001}} \put(120.92,-190.21){\circle*{0.000001}} \put(121.62,-190.21){\circle*{0.000001}} \put(122.33,-190.21){\circle*{0.000001}} \put(123.04,-190.21){\circle*{0.000001}} \put(123.74,-190.21){\circle*{0.000001}} \put(124.45,-190.21){\circle*{0.000001}} \put(125.16,-190.92){\circle*{0.000001}} \put(125.87,-190.92){\circle*{0.000001}} \put(126.57,-190.92){\circle*{0.000001}} \put(127.28,-190.92){\circle*{0.000001}} \put(127.99,-190.92){\circle*{0.000001}} \put(128.69,-190.92){\circle*{0.000001}} \put(129.40,-190.92){\circle*{0.000001}} \put(51.62,-214.96){\circle*{0.000001}} \put(52.33,-214.25){\circle*{0.000001}} \put(53.03,-213.55){\circle*{0.000001}} \put(53.74,-213.55){\circle*{0.000001}} \put(54.45,-212.84){\circle*{0.000001}} \put(55.15,-212.13){\circle*{0.000001}} \put(55.86,-211.42){\circle*{0.000001}} \put(56.57,-210.72){\circle*{0.000001}} \put(57.28,-210.72){\circle*{0.000001}} \put(57.98,-210.01){\circle*{0.000001}} \put(58.69,-209.30){\circle*{0.000001}} \put(59.40,-208.60){\circle*{0.000001}} \put(60.10,-207.89){\circle*{0.000001}} \put(60.81,-207.18){\circle*{0.000001}} \put(61.52,-207.18){\circle*{0.000001}} \put(62.23,-206.48){\circle*{0.000001}} \put(62.93,-205.77){\circle*{0.000001}} \put(63.64,-205.06){\circle*{0.000001}} \put(64.35,-204.35){\circle*{0.000001}} \put(65.05,-204.35){\circle*{0.000001}} \put(65.76,-203.65){\circle*{0.000001}} \put(66.47,-202.94){\circle*{0.000001}} \put(67.18,-202.23){\circle*{0.000001}} \put(67.88,-201.53){\circle*{0.000001}} \put(68.59,-201.53){\circle*{0.000001}} \put(69.30,-200.82){\circle*{0.000001}} \put(70.00,-200.11){\circle*{0.000001}} \put(70.71,-199.40){\circle*{0.000001}} \put(71.42,-198.70){\circle*{0.000001}} \put(72.12,-197.99){\circle*{0.000001}} \put(72.83,-197.99){\circle*{0.000001}} \put(73.54,-197.28){\circle*{0.000001}} \put(74.25,-196.58){\circle*{0.000001}} \put(74.95,-195.87){\circle*{0.000001}} \put(75.66,-195.16){\circle*{0.000001}} \put(76.37,-195.16){\circle*{0.000001}} \put(77.07,-194.45){\circle*{0.000001}} \put(77.78,-193.75){\circle*{0.000001}} \put(78.49,-193.04){\circle*{0.000001}} \put(79.20,-192.33){\circle*{0.000001}} \put(79.90,-192.33){\circle*{0.000001}} \put(80.61,-191.63){\circle*{0.000001}} \put(81.32,-190.92){\circle*{0.000001}} \put(82.02,-190.21){\circle*{0.000001}} \put(82.73,-189.50){\circle*{0.000001}} \put(83.44,-188.80){\circle*{0.000001}} \put(84.15,-188.80){\circle*{0.000001}} \put(84.85,-188.09){\circle*{0.000001}} \put(85.56,-187.38){\circle*{0.000001}} \put(51.62,-214.96){\circle*{0.000001}} \put(52.33,-215.67){\circle*{0.000001}} \put(53.03,-216.37){\circle*{0.000001}} \put(53.74,-216.37){\circle*{0.000001}} \put(54.45,-217.08){\circle*{0.000001}} \put(55.15,-217.79){\circle*{0.000001}} \put(55.86,-218.50){\circle*{0.000001}} \put(56.57,-219.20){\circle*{0.000001}} \put(57.28,-219.20){\circle*{0.000001}} \put(57.98,-219.91){\circle*{0.000001}} \put(58.69,-220.62){\circle*{0.000001}} \put(59.40,-221.32){\circle*{0.000001}} \put(60.10,-222.03){\circle*{0.000001}} \put(60.81,-222.03){\circle*{0.000001}} \put(61.52,-222.74){\circle*{0.000001}} \put(62.23,-223.45){\circle*{0.000001}} \put(62.93,-224.15){\circle*{0.000001}} \put(63.64,-224.86){\circle*{0.000001}} \put(64.35,-224.86){\circle*{0.000001}} \put(65.05,-225.57){\circle*{0.000001}} 
\put(65.76,-226.27){\circle*{0.000001}} \put(66.47,-226.98){\circle*{0.000001}} \put(67.18,-227.69){\circle*{0.000001}} \put(67.88,-227.69){\circle*{0.000001}} \put(68.59,-228.40){\circle*{0.000001}} \put(69.30,-229.10){\circle*{0.000001}} \put(70.00,-229.81){\circle*{0.000001}} \put(70.71,-230.52){\circle*{0.000001}} \put(71.42,-231.22){\circle*{0.000001}} \put(72.12,-231.22){\circle*{0.000001}} \put(72.83,-231.93){\circle*{0.000001}} \put(73.54,-232.64){\circle*{0.000001}} \put(74.25,-233.35){\circle*{0.000001}} \put(74.95,-234.05){\circle*{0.000001}} \put(75.66,-234.05){\circle*{0.000001}} \put(76.37,-234.76){\circle*{0.000001}} \put(77.07,-235.47){\circle*{0.000001}} \put(77.78,-236.17){\circle*{0.000001}} \put(78.49,-236.88){\circle*{0.000001}} \put(79.20,-236.88){\circle*{0.000001}} \put(79.90,-237.59){\circle*{0.000001}} \put(80.61,-238.29){\circle*{0.000001}} \put(81.32,-239.00){\circle*{0.000001}} \put(82.02,-239.71){\circle*{0.000001}} \put(82.73,-239.71){\circle*{0.000001}} \put(83.44,-240.42){\circle*{0.000001}} \put(84.15,-241.12){\circle*{0.000001}} \put(84.85,-241.83){\circle*{0.000001}} \put(85.56,-242.54){\circle*{0.000001}} \put(86.27,-242.54){\circle*{0.000001}} \put(86.97,-243.24){\circle*{0.000001}} \put(87.68,-243.95){\circle*{0.000001}} \put(89.80,-289.91){\circle*{0.000001}} \put(89.80,-289.21){\circle*{0.000001}} \put(89.80,-288.50){\circle*{0.000001}} \put(89.80,-287.79){\circle*{0.000001}} \put(89.80,-287.09){\circle*{0.000001}} \put(89.80,-286.38){\circle*{0.000001}} \put(89.80,-285.67){\circle*{0.000001}} \put(89.80,-284.96){\circle*{0.000001}} \put(89.80,-284.26){\circle*{0.000001}} \put(89.80,-283.55){\circle*{0.000001}} \put(89.80,-282.84){\circle*{0.000001}} \put(89.10,-282.14){\circle*{0.000001}} \put(89.10,-281.43){\circle*{0.000001}} \put(89.10,-280.72){\circle*{0.000001}} \put(89.10,-280.01){\circle*{0.000001}} \put(89.10,-279.31){\circle*{0.000001}} \put(89.10,-278.60){\circle*{0.000001}} \put(89.10,-277.89){\circle*{0.000001}} \put(89.10,-277.19){\circle*{0.000001}} \put(89.10,-276.48){\circle*{0.000001}} \put(89.10,-275.77){\circle*{0.000001}} \put(89.10,-275.06){\circle*{0.000001}} \put(89.10,-274.36){\circle*{0.000001}} \put(89.10,-273.65){\circle*{0.000001}} \put(89.10,-272.94){\circle*{0.000001}} \put(89.10,-272.24){\circle*{0.000001}} \put(89.10,-271.53){\circle*{0.000001}} \put(89.10,-270.82){\circle*{0.000001}} \put(89.10,-270.11){\circle*{0.000001}} \put(89.10,-269.41){\circle*{0.000001}} \put(89.10,-268.70){\circle*{0.000001}} \put(89.10,-267.99){\circle*{0.000001}} \put(89.10,-267.29){\circle*{0.000001}} \put(88.39,-266.58){\circle*{0.000001}} \put(88.39,-265.87){\circle*{0.000001}} \put(88.39,-265.17){\circle*{0.000001}} \put(88.39,-264.46){\circle*{0.000001}} \put(88.39,-263.75){\circle*{0.000001}} \put(88.39,-263.04){\circle*{0.000001}} \put(88.39,-262.34){\circle*{0.000001}} \put(88.39,-261.63){\circle*{0.000001}} \put(88.39,-260.92){\circle*{0.000001}} \put(88.39,-260.22){\circle*{0.000001}} \put(88.39,-259.51){\circle*{0.000001}} \put(88.39,-258.80){\circle*{0.000001}} \put(88.39,-258.09){\circle*{0.000001}} \put(88.39,-257.39){\circle*{0.000001}} \put(88.39,-256.68){\circle*{0.000001}} \put(88.39,-255.97){\circle*{0.000001}} \put(88.39,-255.27){\circle*{0.000001}} \put(88.39,-254.56){\circle*{0.000001}} \put(88.39,-253.85){\circle*{0.000001}} \put(88.39,-253.14){\circle*{0.000001}} \put(88.39,-252.44){\circle*{0.000001}} \put(88.39,-251.73){\circle*{0.000001}} \put(87.68,-251.02){\circle*{0.000001}} 
\put(87.68,-250.32){\circle*{0.000001}} \put(87.68,-249.61){\circle*{0.000001}} \put(87.68,-248.90){\circle*{0.000001}} \put(87.68,-248.19){\circle*{0.000001}} \put(87.68,-247.49){\circle*{0.000001}} \put(87.68,-246.78){\circle*{0.000001}} \put(87.68,-246.07){\circle*{0.000001}} \put(87.68,-245.37){\circle*{0.000001}} \put(87.68,-244.66){\circle*{0.000001}} \put(87.68,-243.95){\circle*{0.000001}} \put(89.80,-289.91){\circle*{0.000001}} \put(89.10,-289.21){\circle*{0.000001}} \put(88.39,-288.50){\circle*{0.000001}} \put(87.68,-287.79){\circle*{0.000001}} \put(86.97,-287.09){\circle*{0.000001}} \put(86.27,-286.38){\circle*{0.000001}} \put(85.56,-285.67){\circle*{0.000001}} \put(84.85,-284.96){\circle*{0.000001}} \put(84.15,-284.26){\circle*{0.000001}} \put(83.44,-283.55){\circle*{0.000001}} \put(82.73,-282.84){\circle*{0.000001}} \put(82.02,-282.14){\circle*{0.000001}} \put(82.02,-281.43){\circle*{0.000001}} \put(81.32,-280.72){\circle*{0.000001}} \put(80.61,-280.01){\circle*{0.000001}} \put(79.90,-279.31){\circle*{0.000001}} \put(79.20,-278.60){\circle*{0.000001}} \put(78.49,-277.89){\circle*{0.000001}} \put(77.78,-277.19){\circle*{0.000001}} \put(77.07,-276.48){\circle*{0.000001}} \put(76.37,-275.77){\circle*{0.000001}} \put(75.66,-275.06){\circle*{0.000001}} \put(74.95,-274.36){\circle*{0.000001}} \put(74.25,-273.65){\circle*{0.000001}} \put(73.54,-272.94){\circle*{0.000001}} \put(72.83,-272.24){\circle*{0.000001}} \put(72.12,-271.53){\circle*{0.000001}} \put(71.42,-270.82){\circle*{0.000001}} \put(70.71,-270.11){\circle*{0.000001}} \put(70.00,-269.41){\circle*{0.000001}} \put(69.30,-268.70){\circle*{0.000001}} \put(68.59,-267.99){\circle*{0.000001}} \put(67.88,-267.29){\circle*{0.000001}} \put(67.18,-266.58){\circle*{0.000001}} \put(66.47,-265.87){\circle*{0.000001}} \put(65.76,-265.17){\circle*{0.000001}} \put(65.76,-264.46){\circle*{0.000001}} \put(65.05,-263.75){\circle*{0.000001}} \put(64.35,-263.04){\circle*{0.000001}} \put(63.64,-262.34){\circle*{0.000001}} \put(62.93,-261.63){\circle*{0.000001}} \put(62.23,-260.92){\circle*{0.000001}} \put(61.52,-260.22){\circle*{0.000001}} \put(60.81,-259.51){\circle*{0.000001}} \put(60.10,-258.80){\circle*{0.000001}} \put(59.40,-258.09){\circle*{0.000001}} \put(58.69,-257.39){\circle*{0.000001}} \put(57.98,-256.68){\circle*{0.000001}} \put(57.98,-256.68){\circle*{0.000001}} \put(58.69,-255.97){\circle*{0.000001}} \put(58.69,-255.27){\circle*{0.000001}} \put(59.40,-254.56){\circle*{0.000001}} \put(59.40,-253.85){\circle*{0.000001}} \put(60.10,-253.14){\circle*{0.000001}} \put(60.10,-252.44){\circle*{0.000001}} \put(60.81,-251.73){\circle*{0.000001}} \put(61.52,-251.02){\circle*{0.000001}} \put(61.52,-250.32){\circle*{0.000001}} \put(62.23,-249.61){\circle*{0.000001}} \put(62.23,-248.90){\circle*{0.000001}} \put(62.93,-248.19){\circle*{0.000001}} \put(62.93,-247.49){\circle*{0.000001}} \put(63.64,-246.78){\circle*{0.000001}} \put(64.35,-246.07){\circle*{0.000001}} \put(64.35,-245.37){\circle*{0.000001}} \put(65.05,-244.66){\circle*{0.000001}} \put(65.05,-243.95){\circle*{0.000001}} \put(65.76,-243.24){\circle*{0.000001}} \put(65.76,-242.54){\circle*{0.000001}} \put(66.47,-241.83){\circle*{0.000001}} \put(67.18,-241.12){\circle*{0.000001}} \put(67.18,-240.42){\circle*{0.000001}} \put(67.88,-239.71){\circle*{0.000001}} \put(67.88,-239.00){\circle*{0.000001}} \put(68.59,-238.29){\circle*{0.000001}} \put(68.59,-237.59){\circle*{0.000001}} \put(69.30,-236.88){\circle*{0.000001}} \put(70.00,-236.17){\circle*{0.000001}} 
\put(70.00,-235.47){\circle*{0.000001}} \put(70.71,-234.76){\circle*{0.000001}} \put(70.71,-234.05){\circle*{0.000001}} \put(71.42,-233.35){\circle*{0.000001}} \put(72.12,-232.64){\circle*{0.000001}} \put(72.12,-231.93){\circle*{0.000001}} \put(72.83,-231.22){\circle*{0.000001}} \put(72.83,-230.52){\circle*{0.000001}} \put(73.54,-229.81){\circle*{0.000001}} \put(73.54,-229.10){\circle*{0.000001}} \put(74.25,-228.40){\circle*{0.000001}} \put(74.95,-227.69){\circle*{0.000001}} \put(74.95,-226.98){\circle*{0.000001}} \put(75.66,-226.27){\circle*{0.000001}} \put(75.66,-225.57){\circle*{0.000001}} \put(76.37,-224.86){\circle*{0.000001}} \put(76.37,-224.15){\circle*{0.000001}} \put(77.07,-223.45){\circle*{0.000001}} \put(77.78,-222.74){\circle*{0.000001}} \put(77.78,-222.03){\circle*{0.000001}} \put(78.49,-221.32){\circle*{0.000001}} \put(78.49,-220.62){\circle*{0.000001}} \put(79.20,-219.91){\circle*{0.000001}} \put(79.20,-219.20){\circle*{0.000001}} \put(79.90,-218.50){\circle*{0.000001}} \put(35.36,-227.69){\circle*{0.000001}} \put(36.06,-227.69){\circle*{0.000001}} \put(36.77,-227.69){\circle*{0.000001}} \put(37.48,-226.98){\circle*{0.000001}} \put(38.18,-226.98){\circle*{0.000001}} \put(38.89,-226.98){\circle*{0.000001}} \put(39.60,-226.98){\circle*{0.000001}} \put(40.31,-226.98){\circle*{0.000001}} \put(41.01,-226.27){\circle*{0.000001}} \put(41.72,-226.27){\circle*{0.000001}} \put(42.43,-226.27){\circle*{0.000001}} \put(43.13,-226.27){\circle*{0.000001}} \put(43.84,-226.27){\circle*{0.000001}} \put(44.55,-225.57){\circle*{0.000001}} \put(45.25,-225.57){\circle*{0.000001}} \put(45.96,-225.57){\circle*{0.000001}} \put(46.67,-225.57){\circle*{0.000001}} \put(47.38,-224.86){\circle*{0.000001}} \put(48.08,-224.86){\circle*{0.000001}} \put(48.79,-224.86){\circle*{0.000001}} \put(49.50,-224.86){\circle*{0.000001}} \put(50.20,-224.86){\circle*{0.000001}} \put(50.91,-224.15){\circle*{0.000001}} \put(51.62,-224.15){\circle*{0.000001}} \put(52.33,-224.15){\circle*{0.000001}} \put(53.03,-224.15){\circle*{0.000001}} \put(53.74,-224.15){\circle*{0.000001}} \put(54.45,-223.45){\circle*{0.000001}} \put(55.15,-223.45){\circle*{0.000001}} \put(55.86,-223.45){\circle*{0.000001}} \put(56.57,-223.45){\circle*{0.000001}} \put(57.28,-223.45){\circle*{0.000001}} \put(57.98,-222.74){\circle*{0.000001}} \put(58.69,-222.74){\circle*{0.000001}} \put(59.40,-222.74){\circle*{0.000001}} \put(60.10,-222.74){\circle*{0.000001}} \put(60.81,-222.74){\circle*{0.000001}} \put(61.52,-222.03){\circle*{0.000001}} \put(62.23,-222.03){\circle*{0.000001}} \put(62.93,-222.03){\circle*{0.000001}} \put(63.64,-222.03){\circle*{0.000001}} \put(64.35,-222.03){\circle*{0.000001}} \put(65.05,-221.32){\circle*{0.000001}} \put(65.76,-221.32){\circle*{0.000001}} \put(66.47,-221.32){\circle*{0.000001}} \put(67.18,-221.32){\circle*{0.000001}} \put(67.88,-221.32){\circle*{0.000001}} \put(68.59,-220.62){\circle*{0.000001}} \put(69.30,-220.62){\circle*{0.000001}} \put(70.00,-220.62){\circle*{0.000001}} \put(70.71,-220.62){\circle*{0.000001}} \put(71.42,-219.91){\circle*{0.000001}} \put(72.12,-219.91){\circle*{0.000001}} \put(72.83,-219.91){\circle*{0.000001}} \put(73.54,-219.91){\circle*{0.000001}} \put(74.25,-219.91){\circle*{0.000001}} \put(74.95,-219.20){\circle*{0.000001}} \put(75.66,-219.20){\circle*{0.000001}} \put(76.37,-219.20){\circle*{0.000001}} \put(77.07,-219.20){\circle*{0.000001}} \put(77.78,-219.20){\circle*{0.000001}} \put(78.49,-218.50){\circle*{0.000001}} \put(79.20,-218.50){\circle*{0.000001}} 
\put(79.90,-218.50){\circle*{0.000001}} \put(23.33,-272.94){\circle*{0.000001}} \put(23.33,-272.24){\circle*{0.000001}} \put(24.04,-271.53){\circle*{0.000001}} \put(24.04,-270.82){\circle*{0.000001}} \put(24.04,-270.11){\circle*{0.000001}} \put(24.04,-269.41){\circle*{0.000001}} \put(24.75,-268.70){\circle*{0.000001}} \put(24.75,-267.99){\circle*{0.000001}} \put(24.75,-267.29){\circle*{0.000001}} \put(24.75,-266.58){\circle*{0.000001}} \put(25.46,-265.87){\circle*{0.000001}} \put(25.46,-265.17){\circle*{0.000001}} \put(25.46,-264.46){\circle*{0.000001}} \put(25.46,-263.75){\circle*{0.000001}} \put(26.16,-263.04){\circle*{0.000001}} \put(26.16,-262.34){\circle*{0.000001}} \put(26.16,-261.63){\circle*{0.000001}} \put(26.87,-260.92){\circle*{0.000001}} \put(26.87,-260.22){\circle*{0.000001}} \put(26.87,-259.51){\circle*{0.000001}} \put(26.87,-258.80){\circle*{0.000001}} \put(27.58,-258.09){\circle*{0.000001}} \put(27.58,-257.39){\circle*{0.000001}} \put(27.58,-256.68){\circle*{0.000001}} \put(27.58,-255.97){\circle*{0.000001}} \put(28.28,-255.27){\circle*{0.000001}} \put(28.28,-254.56){\circle*{0.000001}} \put(28.28,-253.85){\circle*{0.000001}} \put(28.28,-253.14){\circle*{0.000001}} \put(28.99,-252.44){\circle*{0.000001}} \put(28.99,-251.73){\circle*{0.000001}} \put(28.99,-251.02){\circle*{0.000001}} \put(28.99,-250.32){\circle*{0.000001}} \put(29.70,-249.61){\circle*{0.000001}} \put(29.70,-248.90){\circle*{0.000001}} \put(29.70,-248.19){\circle*{0.000001}} \put(30.41,-247.49){\circle*{0.000001}} \put(30.41,-246.78){\circle*{0.000001}} \put(30.41,-246.07){\circle*{0.000001}} \put(30.41,-245.37){\circle*{0.000001}} \put(31.11,-244.66){\circle*{0.000001}} \put(31.11,-243.95){\circle*{0.000001}} \put(31.11,-243.24){\circle*{0.000001}} \put(31.11,-242.54){\circle*{0.000001}} \put(31.82,-241.83){\circle*{0.000001}} \put(31.82,-241.12){\circle*{0.000001}} \put(31.82,-240.42){\circle*{0.000001}} \put(31.82,-239.71){\circle*{0.000001}} \put(32.53,-239.00){\circle*{0.000001}} \put(32.53,-238.29){\circle*{0.000001}} \put(32.53,-237.59){\circle*{0.000001}} \put(33.23,-236.88){\circle*{0.000001}} \put(33.23,-236.17){\circle*{0.000001}} \put(33.23,-235.47){\circle*{0.000001}} \put(33.23,-234.76){\circle*{0.000001}} \put(33.94,-234.05){\circle*{0.000001}} \put(33.94,-233.35){\circle*{0.000001}} \put(33.94,-232.64){\circle*{0.000001}} \put(33.94,-231.93){\circle*{0.000001}} \put(34.65,-231.22){\circle*{0.000001}} \put(34.65,-230.52){\circle*{0.000001}} \put(34.65,-229.81){\circle*{0.000001}} \put(34.65,-229.10){\circle*{0.000001}} \put(35.36,-228.40){\circle*{0.000001}} \put(35.36,-227.69){\circle*{0.000001}} \put(23.33,-272.94){\circle*{0.000001}} \put(23.33,-272.24){\circle*{0.000001}} \put(23.33,-271.53){\circle*{0.000001}} \put(23.33,-270.82){\circle*{0.000001}} \put(23.33,-270.11){\circle*{0.000001}} \put(22.63,-269.41){\circle*{0.000001}} \put(22.63,-268.70){\circle*{0.000001}} \put(22.63,-267.99){\circle*{0.000001}} \put(22.63,-267.29){\circle*{0.000001}} \put(22.63,-266.58){\circle*{0.000001}} \put(22.63,-265.87){\circle*{0.000001}} \put(22.63,-265.17){\circle*{0.000001}} \put(22.63,-264.46){\circle*{0.000001}} \put(22.63,-263.75){\circle*{0.000001}} \put(21.92,-263.04){\circle*{0.000001}} \put(21.92,-262.34){\circle*{0.000001}} \put(21.92,-261.63){\circle*{0.000001}} \put(21.92,-260.92){\circle*{0.000001}} \put(21.92,-260.22){\circle*{0.000001}} \put(21.92,-259.51){\circle*{0.000001}} \put(21.92,-258.80){\circle*{0.000001}} \put(21.92,-258.09){\circle*{0.000001}} 
\put(21.92,-257.39){\circle*{0.000001}} \put(21.92,-256.68){\circle*{0.000001}} \put(21.21,-255.97){\circle*{0.000001}} \put(21.21,-255.27){\circle*{0.000001}} \put(21.21,-254.56){\circle*{0.000001}} \put(21.21,-253.85){\circle*{0.000001}} \put(21.21,-253.14){\circle*{0.000001}} \put(21.21,-252.44){\circle*{0.000001}} \put(21.21,-251.73){\circle*{0.000001}} \put(21.21,-251.02){\circle*{0.000001}} \put(21.21,-250.32){\circle*{0.000001}} \put(20.51,-249.61){\circle*{0.000001}} \put(20.51,-248.90){\circle*{0.000001}} \put(20.51,-248.19){\circle*{0.000001}} \put(20.51,-247.49){\circle*{0.000001}} \put(20.51,-246.78){\circle*{0.000001}} \put(20.51,-246.07){\circle*{0.000001}} \put(20.51,-245.37){\circle*{0.000001}} \put(20.51,-244.66){\circle*{0.000001}} \put(20.51,-243.95){\circle*{0.000001}} \put(19.80,-243.24){\circle*{0.000001}} \put(19.80,-242.54){\circle*{0.000001}} \put(19.80,-241.83){\circle*{0.000001}} \put(19.80,-241.12){\circle*{0.000001}} \put(19.80,-240.42){\circle*{0.000001}} \put(19.80,-239.71){\circle*{0.000001}} \put(19.80,-239.00){\circle*{0.000001}} \put(19.80,-238.29){\circle*{0.000001}} \put(19.80,-237.59){\circle*{0.000001}} \put(19.80,-236.88){\circle*{0.000001}} \put(19.09,-236.17){\circle*{0.000001}} \put(19.09,-235.47){\circle*{0.000001}} \put(19.09,-234.76){\circle*{0.000001}} \put(19.09,-234.05){\circle*{0.000001}} \put(19.09,-233.35){\circle*{0.000001}} \put(19.09,-232.64){\circle*{0.000001}} \put(19.09,-231.93){\circle*{0.000001}} \put(19.09,-231.22){\circle*{0.000001}} \put(19.09,-230.52){\circle*{0.000001}} \put(18.38,-229.81){\circle*{0.000001}} \put(18.38,-229.10){\circle*{0.000001}} \put(18.38,-228.40){\circle*{0.000001}} \put(18.38,-227.69){\circle*{0.000001}} \put(18.38,-226.98){\circle*{0.000001}} \put(25.46,-268.70){\circle*{0.000001}} \put(25.46,-267.99){\circle*{0.000001}} \put(25.46,-267.29){\circle*{0.000001}} \put(24.75,-266.58){\circle*{0.000001}} \put(24.75,-265.87){\circle*{0.000001}} \put(24.75,-265.17){\circle*{0.000001}} \put(24.75,-264.46){\circle*{0.000001}} \put(24.75,-263.75){\circle*{0.000001}} \put(24.75,-263.04){\circle*{0.000001}} \put(24.04,-262.34){\circle*{0.000001}} \put(24.04,-261.63){\circle*{0.000001}} \put(24.04,-260.92){\circle*{0.000001}} \put(24.04,-260.22){\circle*{0.000001}} \put(24.04,-259.51){\circle*{0.000001}} \put(24.04,-258.80){\circle*{0.000001}} \put(23.33,-258.09){\circle*{0.000001}} \put(23.33,-257.39){\circle*{0.000001}} \put(23.33,-256.68){\circle*{0.000001}} \put(23.33,-255.97){\circle*{0.000001}} \put(23.33,-255.27){\circle*{0.000001}} \put(23.33,-254.56){\circle*{0.000001}} \put(22.63,-253.85){\circle*{0.000001}} \put(22.63,-253.14){\circle*{0.000001}} \put(22.63,-252.44){\circle*{0.000001}} \put(22.63,-251.73){\circle*{0.000001}} \put(22.63,-251.02){\circle*{0.000001}} \put(22.63,-250.32){\circle*{0.000001}} \put(21.92,-249.61){\circle*{0.000001}} \put(21.92,-248.90){\circle*{0.000001}} \put(21.92,-248.19){\circle*{0.000001}} \put(21.92,-247.49){\circle*{0.000001}} \put(21.92,-246.78){\circle*{0.000001}} \put(21.92,-246.07){\circle*{0.000001}} \put(21.21,-245.37){\circle*{0.000001}} \put(21.21,-244.66){\circle*{0.000001}} \put(21.21,-243.95){\circle*{0.000001}} \put(21.21,-243.24){\circle*{0.000001}} \put(21.21,-242.54){\circle*{0.000001}} \put(21.21,-241.83){\circle*{0.000001}} \put(20.51,-241.12){\circle*{0.000001}} \put(20.51,-240.42){\circle*{0.000001}} \put(20.51,-239.71){\circle*{0.000001}} \put(20.51,-239.00){\circle*{0.000001}} \put(20.51,-238.29){\circle*{0.000001}} 
\put(20.51,-237.59){\circle*{0.000001}} \put(19.80,-236.88){\circle*{0.000001}} \put(19.80,-236.17){\circle*{0.000001}} \put(19.80,-235.47){\circle*{0.000001}} \put(19.80,-234.76){\circle*{0.000001}} \put(19.80,-234.05){\circle*{0.000001}} \put(19.80,-233.35){\circle*{0.000001}} \put(19.09,-232.64){\circle*{0.000001}} \put(19.09,-231.93){\circle*{0.000001}} \put(19.09,-231.22){\circle*{0.000001}} \put(19.09,-230.52){\circle*{0.000001}} \put(19.09,-229.81){\circle*{0.000001}} \put(19.09,-229.10){\circle*{0.000001}} \put(18.38,-228.40){\circle*{0.000001}} \put(18.38,-227.69){\circle*{0.000001}} \put(18.38,-226.98){\circle*{0.000001}} \put(-17.68,-275.77){\circle*{0.000001}} \put(-16.97,-275.77){\circle*{0.000001}} \put(-16.26,-275.77){\circle*{0.000001}} \put(-15.56,-275.77){\circle*{0.000001}} \put(-14.85,-275.06){\circle*{0.000001}} \put(-14.14,-275.06){\circle*{0.000001}} \put(-13.44,-275.06){\circle*{0.000001}} \put(-12.73,-275.06){\circle*{0.000001}} \put(-12.02,-275.06){\circle*{0.000001}} \put(-11.31,-275.06){\circle*{0.000001}} \put(-10.61,-274.36){\circle*{0.000001}} \put(-9.90,-274.36){\circle*{0.000001}} \put(-9.19,-274.36){\circle*{0.000001}} \put(-8.49,-274.36){\circle*{0.000001}} \put(-7.78,-274.36){\circle*{0.000001}} \put(-7.07,-274.36){\circle*{0.000001}} \put(-6.36,-273.65){\circle*{0.000001}} \put(-5.66,-273.65){\circle*{0.000001}} \put(-4.95,-273.65){\circle*{0.000001}} \put(-4.24,-273.65){\circle*{0.000001}} \put(-3.54,-273.65){\circle*{0.000001}} \put(-2.83,-273.65){\circle*{0.000001}} \put(-2.12,-272.94){\circle*{0.000001}} \put(-1.41,-272.94){\circle*{0.000001}} \put(-0.71,-272.94){\circle*{0.000001}} \put( 0.00,-272.94){\circle*{0.000001}} \put( 0.71,-272.94){\circle*{0.000001}} \put( 1.41,-272.94){\circle*{0.000001}} \put( 2.12,-272.24){\circle*{0.000001}} \put( 2.83,-272.24){\circle*{0.000001}} \put( 3.54,-272.24){\circle*{0.000001}} \put( 4.24,-272.24){\circle*{0.000001}} \put( 4.95,-272.24){\circle*{0.000001}} \put( 5.66,-272.24){\circle*{0.000001}} \put( 6.36,-271.53){\circle*{0.000001}} \put( 7.07,-271.53){\circle*{0.000001}} \put( 7.78,-271.53){\circle*{0.000001}} \put( 8.49,-271.53){\circle*{0.000001}} \put( 9.19,-271.53){\circle*{0.000001}} \put( 9.90,-271.53){\circle*{0.000001}} \put(10.61,-270.82){\circle*{0.000001}} \put(11.31,-270.82){\circle*{0.000001}} \put(12.02,-270.82){\circle*{0.000001}} \put(12.73,-270.82){\circle*{0.000001}} \put(13.44,-270.82){\circle*{0.000001}} \put(14.14,-270.82){\circle*{0.000001}} \put(14.85,-270.11){\circle*{0.000001}} \put(15.56,-270.11){\circle*{0.000001}} \put(16.26,-270.11){\circle*{0.000001}} \put(16.97,-270.11){\circle*{0.000001}} \put(17.68,-270.11){\circle*{0.000001}} \put(18.38,-270.11){\circle*{0.000001}} \put(19.09,-269.41){\circle*{0.000001}} \put(19.80,-269.41){\circle*{0.000001}} \put(20.51,-269.41){\circle*{0.000001}} \put(21.21,-269.41){\circle*{0.000001}} \put(21.92,-269.41){\circle*{0.000001}} \put(22.63,-269.41){\circle*{0.000001}} \put(23.33,-268.70){\circle*{0.000001}} \put(24.04,-268.70){\circle*{0.000001}} \put(24.75,-268.70){\circle*{0.000001}} \put(25.46,-268.70){\circle*{0.000001}} \put(-58.69,-295.57){\circle*{0.000001}} \put(-57.98,-295.57){\circle*{0.000001}} \put(-57.28,-294.86){\circle*{0.000001}} \put(-56.57,-294.86){\circle*{0.000001}} \put(-55.86,-294.16){\circle*{0.000001}} \put(-55.15,-294.16){\circle*{0.000001}} \put(-54.45,-293.45){\circle*{0.000001}} \put(-53.74,-293.45){\circle*{0.000001}} \put(-53.03,-292.74){\circle*{0.000001}} \put(-52.33,-292.74){\circle*{0.000001}} 
\put(-51.62,-292.04){\circle*{0.000001}} \put(-50.91,-292.04){\circle*{0.000001}} \put(-50.20,-291.33){\circle*{0.000001}} \put(-49.50,-291.33){\circle*{0.000001}} \put(-48.79,-290.62){\circle*{0.000001}} \put(-48.08,-290.62){\circle*{0.000001}} \put(-47.38,-289.91){\circle*{0.000001}} \put(-46.67,-289.91){\circle*{0.000001}} \put(-45.96,-289.21){\circle*{0.000001}} \put(-45.25,-289.21){\circle*{0.000001}} \put(-44.55,-288.50){\circle*{0.000001}} \put(-43.84,-288.50){\circle*{0.000001}} \put(-43.13,-287.79){\circle*{0.000001}} \put(-42.43,-287.79){\circle*{0.000001}} \put(-41.72,-287.09){\circle*{0.000001}} \put(-41.01,-287.09){\circle*{0.000001}} \put(-40.31,-286.38){\circle*{0.000001}} \put(-39.60,-286.38){\circle*{0.000001}} \put(-38.89,-285.67){\circle*{0.000001}} \put(-38.18,-285.67){\circle*{0.000001}} \put(-37.48,-285.67){\circle*{0.000001}} \put(-36.77,-284.96){\circle*{0.000001}} \put(-36.06,-284.96){\circle*{0.000001}} \put(-35.36,-284.26){\circle*{0.000001}} \put(-34.65,-284.26){\circle*{0.000001}} \put(-33.94,-283.55){\circle*{0.000001}} \put(-33.23,-283.55){\circle*{0.000001}} \put(-32.53,-282.84){\circle*{0.000001}} \put(-31.82,-282.84){\circle*{0.000001}} \put(-31.11,-282.14){\circle*{0.000001}} \put(-30.41,-282.14){\circle*{0.000001}} \put(-29.70,-281.43){\circle*{0.000001}} \put(-28.99,-281.43){\circle*{0.000001}} \put(-28.28,-280.72){\circle*{0.000001}} \put(-27.58,-280.72){\circle*{0.000001}} \put(-26.87,-280.01){\circle*{0.000001}} \put(-26.16,-280.01){\circle*{0.000001}} \put(-25.46,-279.31){\circle*{0.000001}} \put(-24.75,-279.31){\circle*{0.000001}} \put(-24.04,-278.60){\circle*{0.000001}} \put(-23.33,-278.60){\circle*{0.000001}} \put(-22.63,-277.89){\circle*{0.000001}} \put(-21.92,-277.89){\circle*{0.000001}} \put(-21.21,-277.19){\circle*{0.000001}} \put(-20.51,-277.19){\circle*{0.000001}} \put(-19.80,-276.48){\circle*{0.000001}} \put(-19.09,-276.48){\circle*{0.000001}} \put(-18.38,-275.77){\circle*{0.000001}} \put(-17.68,-275.77){\circle*{0.000001}} \put(-58.69,-295.57){\circle*{0.000001}} \put(-57.98,-296.28){\circle*{0.000001}} \put(-57.28,-296.28){\circle*{0.000001}} \put(-56.57,-296.98){\circle*{0.000001}} \put(-55.86,-296.98){\circle*{0.000001}} \put(-55.15,-297.69){\circle*{0.000001}} \put(-54.45,-297.69){\circle*{0.000001}} \put(-53.74,-298.40){\circle*{0.000001}} \put(-53.03,-299.11){\circle*{0.000001}} \put(-52.33,-299.11){\circle*{0.000001}} \put(-51.62,-299.81){\circle*{0.000001}} \put(-50.91,-299.81){\circle*{0.000001}} \put(-50.20,-300.52){\circle*{0.000001}} \put(-49.50,-301.23){\circle*{0.000001}} \put(-48.79,-301.23){\circle*{0.000001}} \put(-48.08,-301.93){\circle*{0.000001}} \put(-47.38,-301.93){\circle*{0.000001}} \put(-46.67,-302.64){\circle*{0.000001}} \put(-45.96,-302.64){\circle*{0.000001}} \put(-45.25,-303.35){\circle*{0.000001}} \put(-44.55,-304.06){\circle*{0.000001}} \put(-43.84,-304.06){\circle*{0.000001}} \put(-43.13,-304.76){\circle*{0.000001}} \put(-42.43,-304.76){\circle*{0.000001}} \put(-41.72,-305.47){\circle*{0.000001}} \put(-41.01,-305.47){\circle*{0.000001}} \put(-40.31,-306.18){\circle*{0.000001}} \put(-39.60,-306.88){\circle*{0.000001}} \put(-38.89,-306.88){\circle*{0.000001}} \put(-38.18,-307.59){\circle*{0.000001}} \put(-37.48,-307.59){\circle*{0.000001}} \put(-36.77,-308.30){\circle*{0.000001}} \put(-36.06,-309.01){\circle*{0.000001}} \put(-35.36,-309.01){\circle*{0.000001}} \put(-34.65,-309.71){\circle*{0.000001}} \put(-33.94,-309.71){\circle*{0.000001}} \put(-33.23,-310.42){\circle*{0.000001}} 
\put(-32.53,-310.42){\circle*{0.000001}} \put(-31.82,-311.13){\circle*{0.000001}} \put(-31.11,-311.83){\circle*{0.000001}} \put(-30.41,-311.83){\circle*{0.000001}} \put(-29.70,-312.54){\circle*{0.000001}} \put(-28.99,-312.54){\circle*{0.000001}} \put(-28.28,-313.25){\circle*{0.000001}} \put(-27.58,-313.25){\circle*{0.000001}} \put(-26.87,-313.96){\circle*{0.000001}} \put(-26.16,-314.66){\circle*{0.000001}} \put(-25.46,-314.66){\circle*{0.000001}} \put(-24.75,-315.37){\circle*{0.000001}} \put(-24.04,-315.37){\circle*{0.000001}} \put(-23.33,-316.08){\circle*{0.000001}} \put(-22.63,-316.78){\circle*{0.000001}} \put(-21.92,-316.78){\circle*{0.000001}} \put(-21.21,-317.49){\circle*{0.000001}} \put(-20.51,-317.49){\circle*{0.000001}} \put(-19.80,-318.20){\circle*{0.000001}} \put(-19.09,-318.20){\circle*{0.000001}} \put(-18.38,-318.91){\circle*{0.000001}} \put( 1.41,-356.38){\circle*{0.000001}} \put( 0.71,-355.67){\circle*{0.000001}} \put( 0.71,-354.97){\circle*{0.000001}} \put( 0.00,-354.26){\circle*{0.000001}} \put( 0.00,-353.55){\circle*{0.000001}} \put(-0.71,-352.85){\circle*{0.000001}} \put(-0.71,-352.14){\circle*{0.000001}} \put(-1.41,-351.43){\circle*{0.000001}} \put(-1.41,-350.72){\circle*{0.000001}} \put(-2.12,-350.02){\circle*{0.000001}} \put(-2.12,-349.31){\circle*{0.000001}} \put(-2.83,-348.60){\circle*{0.000001}} \put(-2.83,-347.90){\circle*{0.000001}} \put(-3.54,-347.19){\circle*{0.000001}} \put(-3.54,-346.48){\circle*{0.000001}} \put(-4.24,-345.78){\circle*{0.000001}} \put(-4.24,-345.07){\circle*{0.000001}} \put(-4.95,-344.36){\circle*{0.000001}} \put(-5.66,-343.65){\circle*{0.000001}} \put(-5.66,-342.95){\circle*{0.000001}} \put(-6.36,-342.24){\circle*{0.000001}} \put(-6.36,-341.53){\circle*{0.000001}} \put(-7.07,-340.83){\circle*{0.000001}} \put(-7.07,-340.12){\circle*{0.000001}} \put(-7.78,-339.41){\circle*{0.000001}} \put(-7.78,-338.70){\circle*{0.000001}} \put(-8.49,-338.00){\circle*{0.000001}} \put(-8.49,-337.29){\circle*{0.000001}} \put(-9.19,-336.58){\circle*{0.000001}} \put(-9.19,-335.88){\circle*{0.000001}} \put(-9.90,-335.17){\circle*{0.000001}} \put(-9.90,-334.46){\circle*{0.000001}} \put(-10.61,-333.75){\circle*{0.000001}} \put(-10.61,-333.05){\circle*{0.000001}} \put(-11.31,-332.34){\circle*{0.000001}} \put(-11.31,-331.63){\circle*{0.000001}} \put(-12.02,-330.93){\circle*{0.000001}} \put(-12.73,-330.22){\circle*{0.000001}} \put(-12.73,-329.51){\circle*{0.000001}} \put(-13.44,-328.80){\circle*{0.000001}} \put(-13.44,-328.10){\circle*{0.000001}} \put(-14.14,-327.39){\circle*{0.000001}} \put(-14.14,-326.68){\circle*{0.000001}} \put(-14.85,-325.98){\circle*{0.000001}} \put(-14.85,-325.27){\circle*{0.000001}} \put(-15.56,-324.56){\circle*{0.000001}} \put(-15.56,-323.85){\circle*{0.000001}} \put(-16.26,-323.15){\circle*{0.000001}} \put(-16.26,-322.44){\circle*{0.000001}} \put(-16.97,-321.73){\circle*{0.000001}} \put(-16.97,-321.03){\circle*{0.000001}} \put(-17.68,-320.32){\circle*{0.000001}} \put(-17.68,-319.61){\circle*{0.000001}} \put(-18.38,-318.91){\circle*{0.000001}} \put(-41.72,-369.82){\circle*{0.000001}} \put(-41.01,-369.82){\circle*{0.000001}} \put(-40.31,-369.11){\circle*{0.000001}} \put(-39.60,-369.11){\circle*{0.000001}} \put(-38.89,-369.11){\circle*{0.000001}} \put(-38.18,-368.40){\circle*{0.000001}} \put(-37.48,-368.40){\circle*{0.000001}} \put(-36.77,-368.40){\circle*{0.000001}} \put(-36.06,-368.40){\circle*{0.000001}} \put(-35.36,-367.70){\circle*{0.000001}} \put(-34.65,-367.70){\circle*{0.000001}} \put(-33.94,-367.70){\circle*{0.000001}} 
\put(-33.23,-366.99){\circle*{0.000001}} \put(-32.53,-366.99){\circle*{0.000001}} \put(-31.82,-366.99){\circle*{0.000001}} \put(-31.11,-366.28){\circle*{0.000001}} \put(-30.41,-366.28){\circle*{0.000001}} \put(-29.70,-366.28){\circle*{0.000001}} \put(-28.99,-365.57){\circle*{0.000001}} \put(-28.28,-365.57){\circle*{0.000001}} \put(-27.58,-365.57){\circle*{0.000001}} \put(-26.87,-364.87){\circle*{0.000001}} \put(-26.16,-364.87){\circle*{0.000001}} \put(-25.46,-364.87){\circle*{0.000001}} \put(-24.75,-364.87){\circle*{0.000001}} \put(-24.04,-364.16){\circle*{0.000001}} \put(-23.33,-364.16){\circle*{0.000001}} \put(-22.63,-364.16){\circle*{0.000001}} \put(-21.92,-363.45){\circle*{0.000001}} \put(-21.21,-363.45){\circle*{0.000001}} \put(-20.51,-363.45){\circle*{0.000001}} \put(-19.80,-362.75){\circle*{0.000001}} \put(-19.09,-362.75){\circle*{0.000001}} \put(-18.38,-362.75){\circle*{0.000001}} \put(-17.68,-362.04){\circle*{0.000001}} \put(-16.97,-362.04){\circle*{0.000001}} \put(-16.26,-362.04){\circle*{0.000001}} \put(-15.56,-361.33){\circle*{0.000001}} \put(-14.85,-361.33){\circle*{0.000001}} \put(-14.14,-361.33){\circle*{0.000001}} \put(-13.44,-361.33){\circle*{0.000001}} \put(-12.73,-360.62){\circle*{0.000001}} \put(-12.02,-360.62){\circle*{0.000001}} \put(-11.31,-360.62){\circle*{0.000001}} \put(-10.61,-359.92){\circle*{0.000001}} \put(-9.90,-359.92){\circle*{0.000001}} \put(-9.19,-359.92){\circle*{0.000001}} \put(-8.49,-359.21){\circle*{0.000001}} \put(-7.78,-359.21){\circle*{0.000001}} \put(-7.07,-359.21){\circle*{0.000001}} \put(-6.36,-358.50){\circle*{0.000001}} \put(-5.66,-358.50){\circle*{0.000001}} \put(-4.95,-358.50){\circle*{0.000001}} \put(-4.24,-357.80){\circle*{0.000001}} \put(-3.54,-357.80){\circle*{0.000001}} \put(-2.83,-357.80){\circle*{0.000001}} \put(-2.12,-357.80){\circle*{0.000001}} \put(-1.41,-357.09){\circle*{0.000001}} \put(-0.71,-357.09){\circle*{0.000001}} \put( 0.00,-357.09){\circle*{0.000001}} \put( 0.71,-356.38){\circle*{0.000001}} \put( 1.41,-356.38){\circle*{0.000001}} \put(-31.82,-414.36){\circle*{0.000001}} \put(-31.82,-413.66){\circle*{0.000001}} \put(-31.82,-412.95){\circle*{0.000001}} \put(-32.53,-412.24){\circle*{0.000001}} \put(-32.53,-411.54){\circle*{0.000001}} \put(-32.53,-410.83){\circle*{0.000001}} \put(-32.53,-410.12){\circle*{0.000001}} \put(-33.23,-409.41){\circle*{0.000001}} \put(-33.23,-408.71){\circle*{0.000001}} \put(-33.23,-408.00){\circle*{0.000001}} \put(-33.23,-407.29){\circle*{0.000001}} \put(-33.23,-406.59){\circle*{0.000001}} \put(-33.94,-405.88){\circle*{0.000001}} \put(-33.94,-405.17){\circle*{0.000001}} \put(-33.94,-404.47){\circle*{0.000001}} \put(-33.94,-403.76){\circle*{0.000001}} \put(-34.65,-403.05){\circle*{0.000001}} \put(-34.65,-402.34){\circle*{0.000001}} \put(-34.65,-401.64){\circle*{0.000001}} \put(-34.65,-400.93){\circle*{0.000001}} \put(-34.65,-400.22){\circle*{0.000001}} \put(-35.36,-399.52){\circle*{0.000001}} \put(-35.36,-398.81){\circle*{0.000001}} \put(-35.36,-398.10){\circle*{0.000001}} \put(-35.36,-397.39){\circle*{0.000001}} \put(-36.06,-396.69){\circle*{0.000001}} \put(-36.06,-395.98){\circle*{0.000001}} \put(-36.06,-395.27){\circle*{0.000001}} \put(-36.06,-394.57){\circle*{0.000001}} \put(-36.06,-393.86){\circle*{0.000001}} \put(-36.77,-393.15){\circle*{0.000001}} \put(-36.77,-392.44){\circle*{0.000001}} \put(-36.77,-391.74){\circle*{0.000001}} \put(-36.77,-391.03){\circle*{0.000001}} \put(-37.48,-390.32){\circle*{0.000001}} \put(-37.48,-389.62){\circle*{0.000001}} \put(-37.48,-388.91){\circle*{0.000001}} 
\put(-37.48,-388.20){\circle*{0.000001}} \put(-37.48,-387.49){\circle*{0.000001}} \put(-38.18,-386.79){\circle*{0.000001}} \put(-38.18,-386.08){\circle*{0.000001}} \put(-38.18,-385.37){\circle*{0.000001}} \put(-38.18,-384.67){\circle*{0.000001}} \put(-38.89,-383.96){\circle*{0.000001}} \put(-38.89,-383.25){\circle*{0.000001}} \put(-38.89,-382.54){\circle*{0.000001}} \put(-38.89,-381.84){\circle*{0.000001}} \put(-38.89,-381.13){\circle*{0.000001}} \put(-39.60,-380.42){\circle*{0.000001}} \put(-39.60,-379.72){\circle*{0.000001}} \put(-39.60,-379.01){\circle*{0.000001}} \put(-39.60,-378.30){\circle*{0.000001}} \put(-40.31,-377.60){\circle*{0.000001}} \put(-40.31,-376.89){\circle*{0.000001}} \put(-40.31,-376.18){\circle*{0.000001}} \put(-40.31,-375.47){\circle*{0.000001}} \put(-40.31,-374.77){\circle*{0.000001}} \put(-41.01,-374.06){\circle*{0.000001}} \put(-41.01,-373.35){\circle*{0.000001}} \put(-41.01,-372.65){\circle*{0.000001}} \put(-41.01,-371.94){\circle*{0.000001}} \put(-41.72,-371.23){\circle*{0.000001}} \put(-41.72,-370.52){\circle*{0.000001}} \put(-41.72,-369.82){\circle*{0.000001}} \put(-75.66,-413.66){\circle*{0.000001}} \put(-74.95,-413.66){\circle*{0.000001}} \put(-74.25,-413.66){\circle*{0.000001}} \put(-73.54,-413.66){\circle*{0.000001}} \put(-72.83,-413.66){\circle*{0.000001}} \put(-72.12,-413.66){\circle*{0.000001}} \put(-71.42,-413.66){\circle*{0.000001}} \put(-70.71,-413.66){\circle*{0.000001}} \put(-70.00,-413.66){\circle*{0.000001}} \put(-69.30,-413.66){\circle*{0.000001}} \put(-68.59,-413.66){\circle*{0.000001}} \put(-67.88,-413.66){\circle*{0.000001}} \put(-67.18,-413.66){\circle*{0.000001}} \put(-66.47,-413.66){\circle*{0.000001}} \put(-65.76,-413.66){\circle*{0.000001}} \put(-65.05,-413.66){\circle*{0.000001}} \put(-64.35,-413.66){\circle*{0.000001}} \put(-63.64,-413.66){\circle*{0.000001}} \put(-62.93,-413.66){\circle*{0.000001}} \put(-62.23,-413.66){\circle*{0.000001}} \put(-61.52,-413.66){\circle*{0.000001}} \put(-60.81,-413.66){\circle*{0.000001}} \put(-60.10,-413.66){\circle*{0.000001}} \put(-59.40,-413.66){\circle*{0.000001}} \put(-58.69,-413.66){\circle*{0.000001}} \put(-57.98,-413.66){\circle*{0.000001}} \put(-57.28,-413.66){\circle*{0.000001}} \put(-56.57,-413.66){\circle*{0.000001}} \put(-55.86,-413.66){\circle*{0.000001}} \put(-55.15,-413.66){\circle*{0.000001}} \put(-54.45,-413.66){\circle*{0.000001}} \put(-53.74,-413.66){\circle*{0.000001}} \put(-53.03,-414.36){\circle*{0.000001}} \put(-52.33,-414.36){\circle*{0.000001}} \put(-51.62,-414.36){\circle*{0.000001}} \put(-50.91,-414.36){\circle*{0.000001}} \put(-50.20,-414.36){\circle*{0.000001}} \put(-49.50,-414.36){\circle*{0.000001}} \put(-48.79,-414.36){\circle*{0.000001}} \put(-48.08,-414.36){\circle*{0.000001}} \put(-47.38,-414.36){\circle*{0.000001}} \put(-46.67,-414.36){\circle*{0.000001}} \put(-45.96,-414.36){\circle*{0.000001}} \put(-45.25,-414.36){\circle*{0.000001}} \put(-44.55,-414.36){\circle*{0.000001}} \put(-43.84,-414.36){\circle*{0.000001}} \put(-43.13,-414.36){\circle*{0.000001}} \put(-42.43,-414.36){\circle*{0.000001}} \put(-41.72,-414.36){\circle*{0.000001}} \put(-41.01,-414.36){\circle*{0.000001}} \put(-40.31,-414.36){\circle*{0.000001}} \put(-39.60,-414.36){\circle*{0.000001}} \put(-38.89,-414.36){\circle*{0.000001}} \put(-38.18,-414.36){\circle*{0.000001}} \put(-37.48,-414.36){\circle*{0.000001}} \put(-36.77,-414.36){\circle*{0.000001}} \put(-36.06,-414.36){\circle*{0.000001}} \put(-35.36,-414.36){\circle*{0.000001}} \put(-34.65,-414.36){\circle*{0.000001}} 
\put(29.70,-189.50){\circle*{0.000001}} \put(30.41,-190.21){\circle*{0.000001}} \put(31.11,-190.21){\circle*{0.000001}} \put(31.82,-190.21){\circle*{0.000001}} \put(32.53,-190.92){\circle*{0.000001}} \put(33.23,-190.92){\circle*{0.000001}} \put(33.94,-190.92){\circle*{0.000001}} \put(34.65,-191.63){\circle*{0.000001}} \put(35.36,-191.63){\circle*{0.000001}} \put(36.06,-191.63){\circle*{0.000001}} \put(36.77,-192.33){\circle*{0.000001}} \put(37.48,-192.33){\circle*{0.000001}} \put(-48.79,-169.71){\circle*{0.000001}} \put(-48.08,-169.71){\circle*{0.000001}} \put(-47.38,-169.71){\circle*{0.000001}} \put(-46.67,-170.41){\circle*{0.000001}} \put(-45.96,-170.41){\circle*{0.000001}} \put(-45.25,-170.41){\circle*{0.000001}} \put(-44.55,-170.41){\circle*{0.000001}} \put(-43.84,-170.41){\circle*{0.000001}} \put(-43.13,-171.12){\circle*{0.000001}} \put(-42.43,-171.12){\circle*{0.000001}} \put(-41.72,-171.12){\circle*{0.000001}} \put(-41.01,-171.12){\circle*{0.000001}} \put(-40.31,-171.12){\circle*{0.000001}} \put(-39.60,-171.83){\circle*{0.000001}} \put(-38.89,-171.83){\circle*{0.000001}} \put(-38.18,-171.83){\circle*{0.000001}} \put(-37.48,-171.83){\circle*{0.000001}} \put(-36.77,-171.83){\circle*{0.000001}} \put(-36.06,-172.53){\circle*{0.000001}} \put(-35.36,-172.53){\circle*{0.000001}} \put(-34.65,-172.53){\circle*{0.000001}} \put(-33.94,-172.53){\circle*{0.000001}} \put(-33.23,-172.53){\circle*{0.000001}} \put(-32.53,-173.24){\circle*{0.000001}} \put(-31.82,-173.24){\circle*{0.000001}} \put(-31.11,-173.24){\circle*{0.000001}} \put(-30.41,-173.24){\circle*{0.000001}} \put(-29.70,-173.24){\circle*{0.000001}} \put(-28.99,-173.95){\circle*{0.000001}} \put(-28.28,-173.95){\circle*{0.000001}} \put(-27.58,-173.95){\circle*{0.000001}} \put(-26.87,-173.95){\circle*{0.000001}} \put(-26.16,-173.95){\circle*{0.000001}} \put(-25.46,-174.66){\circle*{0.000001}} \put(-24.75,-174.66){\circle*{0.000001}} \put(-24.04,-174.66){\circle*{0.000001}} \put(-23.33,-174.66){\circle*{0.000001}} \put(-22.63,-174.66){\circle*{0.000001}} \put(-21.92,-175.36){\circle*{0.000001}} \put(-21.21,-175.36){\circle*{0.000001}} \put(-20.51,-175.36){\circle*{0.000001}} \put(-19.80,-175.36){\circle*{0.000001}} \put(-19.09,-175.36){\circle*{0.000001}} \put(-18.38,-176.07){\circle*{0.000001}} \put(-17.68,-176.07){\circle*{0.000001}} \put(-16.97,-176.07){\circle*{0.000001}} \put(-16.26,-176.07){\circle*{0.000001}} \put(-15.56,-176.07){\circle*{0.000001}} \put(-14.85,-176.78){\circle*{0.000001}} \put(-14.14,-176.78){\circle*{0.000001}} \put(-13.44,-176.78){\circle*{0.000001}} \put(-12.73,-176.78){\circle*{0.000001}} \put(-12.02,-176.78){\circle*{0.000001}} \put(-11.31,-177.48){\circle*{0.000001}} \put(-10.61,-177.48){\circle*{0.000001}} \put(-9.90,-177.48){\circle*{0.000001}} \put(-9.19,-177.48){\circle*{0.000001}} \put(-8.49,-177.48){\circle*{0.000001}} \put(-7.78,-178.19){\circle*{0.000001}} \put(-7.07,-178.19){\circle*{0.000001}} \put(-6.36,-178.19){\circle*{0.000001}} \put(-5.66,-178.19){\circle*{0.000001}} \put(-4.95,-178.19){\circle*{0.000001}} \put(-4.24,-178.90){\circle*{0.000001}} \put(-3.54,-178.90){\circle*{0.000001}} \put(-2.83,-178.90){\circle*{0.000001}} \put(-91.92,-155.56){\circle*{0.000001}} \put(-91.22,-155.56){\circle*{0.000001}} \put(-90.51,-156.27){\circle*{0.000001}} \put(-89.80,-156.27){\circle*{0.000001}} \put(-89.10,-156.27){\circle*{0.000001}} \put(-88.39,-156.98){\circle*{0.000001}} \put(-87.68,-156.98){\circle*{0.000001}} \put(-86.97,-156.98){\circle*{0.000001}} \put(-86.27,-157.68){\circle*{0.000001}} 
\put(-85.56,-157.68){\circle*{0.000001}} \put(-84.85,-157.68){\circle*{0.000001}} \put(-84.15,-158.39){\circle*{0.000001}} \put(-83.44,-158.39){\circle*{0.000001}} \put(-82.73,-158.39){\circle*{0.000001}} \put(-82.02,-159.10){\circle*{0.000001}} \put(-81.32,-159.10){\circle*{0.000001}} \put(-80.61,-159.10){\circle*{0.000001}} \put(-79.90,-159.81){\circle*{0.000001}} \put(-79.20,-159.81){\circle*{0.000001}} \put(-78.49,-159.81){\circle*{0.000001}} \put(-77.78,-160.51){\circle*{0.000001}} \put(-77.07,-160.51){\circle*{0.000001}} \put(-76.37,-160.51){\circle*{0.000001}} \put(-75.66,-161.22){\circle*{0.000001}} \put(-74.95,-161.22){\circle*{0.000001}} \put(-74.25,-161.22){\circle*{0.000001}} \put(-73.54,-161.93){\circle*{0.000001}} \put(-72.83,-161.93){\circle*{0.000001}} \put(-72.12,-161.93){\circle*{0.000001}} \put(-71.42,-162.63){\circle*{0.000001}} \put(-70.71,-162.63){\circle*{0.000001}} \put(-70.00,-162.63){\circle*{0.000001}} \put(-69.30,-162.63){\circle*{0.000001}} \put(-68.59,-163.34){\circle*{0.000001}} \put(-67.88,-163.34){\circle*{0.000001}} \put(-67.18,-163.34){\circle*{0.000001}} \put(-66.47,-164.05){\circle*{0.000001}} \put(-65.76,-164.05){\circle*{0.000001}} \put(-65.05,-164.05){\circle*{0.000001}} \put(-64.35,-164.76){\circle*{0.000001}} \put(-63.64,-164.76){\circle*{0.000001}} \put(-62.93,-164.76){\circle*{0.000001}} \put(-62.23,-165.46){\circle*{0.000001}} \put(-61.52,-165.46){\circle*{0.000001}} \put(-60.81,-165.46){\circle*{0.000001}} \put(-60.10,-166.17){\circle*{0.000001}} \put(-59.40,-166.17){\circle*{0.000001}} \put(-58.69,-166.17){\circle*{0.000001}} \put(-57.98,-166.88){\circle*{0.000001}} \put(-57.28,-166.88){\circle*{0.000001}} \put(-56.57,-166.88){\circle*{0.000001}} \put(-55.86,-167.58){\circle*{0.000001}} \put(-55.15,-167.58){\circle*{0.000001}} \put(-54.45,-167.58){\circle*{0.000001}} \put(-53.74,-168.29){\circle*{0.000001}} \put(-53.03,-168.29){\circle*{0.000001}} \put(-52.33,-168.29){\circle*{0.000001}} \put(-51.62,-169.00){\circle*{0.000001}} \put(-50.91,-169.00){\circle*{0.000001}} \put(-50.20,-169.00){\circle*{0.000001}} \put(-49.50,-169.71){\circle*{0.000001}} \put(-48.79,-169.71){\circle*{0.000001}} \put(-91.92,-155.56){\circle*{0.000001}} \put(-92.63,-154.86){\circle*{0.000001}} \put(-92.63,-154.15){\circle*{0.000001}} \put(-93.34,-153.44){\circle*{0.000001}} \put(-94.05,-152.74){\circle*{0.000001}} \put(-94.05,-152.03){\circle*{0.000001}} \put(-94.75,-151.32){\circle*{0.000001}} \put(-95.46,-150.61){\circle*{0.000001}} \put(-96.17,-149.91){\circle*{0.000001}} \put(-96.17,-149.20){\circle*{0.000001}} \put(-96.87,-148.49){\circle*{0.000001}} \put(-97.58,-147.79){\circle*{0.000001}} \put(-97.58,-147.08){\circle*{0.000001}} \put(-98.29,-146.37){\circle*{0.000001}} \put(-98.99,-145.66){\circle*{0.000001}} \put(-98.99,-144.96){\circle*{0.000001}} \put(-99.70,-144.25){\circle*{0.000001}} \put(-100.41,-143.54){\circle*{0.000001}} \put(-100.41,-142.84){\circle*{0.000001}} \put(-101.12,-142.13){\circle*{0.000001}} \put(-101.82,-141.42){\circle*{0.000001}} \put(-102.53,-140.71){\circle*{0.000001}} \put(-102.53,-140.01){\circle*{0.000001}} \put(-103.24,-139.30){\circle*{0.000001}} \put(-103.94,-138.59){\circle*{0.000001}} \put(-103.94,-137.89){\circle*{0.000001}} \put(-104.65,-137.18){\circle*{0.000001}} \put(-105.36,-136.47){\circle*{0.000001}} \put(-105.36,-135.76){\circle*{0.000001}} \put(-106.07,-135.06){\circle*{0.000001}} \put(-106.77,-134.35){\circle*{0.000001}} \put(-106.77,-133.64){\circle*{0.000001}} \put(-107.48,-132.94){\circle*{0.000001}} 
\put(-108.19,-132.23){\circle*{0.000001}} \put(-108.89,-131.52){\circle*{0.000001}} \put(-108.89,-130.81){\circle*{0.000001}} \put(-109.60,-130.11){\circle*{0.000001}} \put(-110.31,-129.40){\circle*{0.000001}} \put(-110.31,-128.69){\circle*{0.000001}} \put(-111.02,-127.99){\circle*{0.000001}} \put(-111.72,-127.28){\circle*{0.000001}} \put(-111.72,-126.57){\circle*{0.000001}} \put(-112.43,-125.87){\circle*{0.000001}} \put(-113.14,-125.16){\circle*{0.000001}} \put(-113.14,-124.45){\circle*{0.000001}} \put(-113.84,-123.74){\circle*{0.000001}} \put(-114.55,-123.04){\circle*{0.000001}} \put(-115.26,-122.33){\circle*{0.000001}} \put(-115.26,-121.62){\circle*{0.000001}} \put(-115.97,-120.92){\circle*{0.000001}} \put(-116.67,-120.21){\circle*{0.000001}} \put(-116.67,-119.50){\circle*{0.000001}} \put(-117.38,-118.79){\circle*{0.000001}} \put(-117.38,-118.79){\circle*{0.000001}} \put(-116.67,-118.79){\circle*{0.000001}} \put(-115.97,-118.09){\circle*{0.000001}} \put(-115.26,-118.09){\circle*{0.000001}} \put(-114.55,-117.38){\circle*{0.000001}} \put(-113.84,-117.38){\circle*{0.000001}} \put(-113.14,-117.38){\circle*{0.000001}} \put(-112.43,-116.67){\circle*{0.000001}} \put(-111.72,-116.67){\circle*{0.000001}} \put(-111.02,-116.67){\circle*{0.000001}} \put(-110.31,-115.97){\circle*{0.000001}} \put(-109.60,-115.97){\circle*{0.000001}} \put(-108.89,-115.26){\circle*{0.000001}} \put(-108.19,-115.26){\circle*{0.000001}} \put(-107.48,-115.26){\circle*{0.000001}} \put(-106.77,-114.55){\circle*{0.000001}} \put(-106.07,-114.55){\circle*{0.000001}} \put(-105.36,-113.84){\circle*{0.000001}} \put(-104.65,-113.84){\circle*{0.000001}} \put(-103.94,-113.84){\circle*{0.000001}} \put(-103.24,-113.14){\circle*{0.000001}} \put(-102.53,-113.14){\circle*{0.000001}} \put(-101.82,-113.14){\circle*{0.000001}} \put(-101.12,-112.43){\circle*{0.000001}} \put(-100.41,-112.43){\circle*{0.000001}} \put(-99.70,-111.72){\circle*{0.000001}} \put(-98.99,-111.72){\circle*{0.000001}} \put(-98.29,-111.72){\circle*{0.000001}} \put(-97.58,-111.02){\circle*{0.000001}} \put(-96.87,-111.02){\circle*{0.000001}} \put(-96.17,-111.02){\circle*{0.000001}} \put(-95.46,-110.31){\circle*{0.000001}} \put(-94.75,-110.31){\circle*{0.000001}} \put(-94.05,-109.60){\circle*{0.000001}} \put(-93.34,-109.60){\circle*{0.000001}} \put(-92.63,-109.60){\circle*{0.000001}} \put(-91.92,-108.89){\circle*{0.000001}} \put(-91.22,-108.89){\circle*{0.000001}} \put(-90.51,-108.19){\circle*{0.000001}} \put(-89.80,-108.19){\circle*{0.000001}} \put(-89.10,-108.19){\circle*{0.000001}} \put(-88.39,-107.48){\circle*{0.000001}} \put(-87.68,-107.48){\circle*{0.000001}} \put(-86.97,-107.48){\circle*{0.000001}} \put(-86.27,-106.77){\circle*{0.000001}} \put(-85.56,-106.77){\circle*{0.000001}} \put(-84.85,-106.07){\circle*{0.000001}} \put(-84.15,-106.07){\circle*{0.000001}} \put(-83.44,-106.07){\circle*{0.000001}} \put(-82.73,-105.36){\circle*{0.000001}} \put(-82.02,-105.36){\circle*{0.000001}} \put(-81.32,-104.65){\circle*{0.000001}} \put(-80.61,-104.65){\circle*{0.000001}} \put(-79.90,-104.65){\circle*{0.000001}} \put(-79.20,-103.94){\circle*{0.000001}} \put(-78.49,-103.94){\circle*{0.000001}} \put(-77.78,-103.94){\circle*{0.000001}} \put(-77.07,-103.24){\circle*{0.000001}} \put(-76.37,-103.24){\circle*{0.000001}} \put(-75.66,-102.53){\circle*{0.000001}} \put(-74.95,-102.53){\circle*{0.000001}} \put(-74.95,-102.53){\circle*{0.000001}} \put(-74.25,-101.82){\circle*{0.000001}} \put(-74.25,-101.12){\circle*{0.000001}} \put(-73.54,-100.41){\circle*{0.000001}} 
\put(-73.54,-99.70){\circle*{0.000001}} \put(-72.83,-98.99){\circle*{0.000001}} \put(-72.83,-98.29){\circle*{0.000001}} \put(-72.12,-97.58){\circle*{0.000001}} \put(-71.42,-96.87){\circle*{0.000001}} \put(-71.42,-96.17){\circle*{0.000001}} \put(-70.71,-95.46){\circle*{0.000001}} \put(-70.71,-94.75){\circle*{0.000001}} \put(-70.00,-94.05){\circle*{0.000001}} \put(-70.00,-93.34){\circle*{0.000001}} \put(-69.30,-92.63){\circle*{0.000001}} \put(-68.59,-91.92){\circle*{0.000001}} \put(-68.59,-91.22){\circle*{0.000001}} \put(-67.88,-90.51){\circle*{0.000001}} \put(-67.88,-89.80){\circle*{0.000001}} \put(-67.18,-89.10){\circle*{0.000001}} \put(-67.18,-88.39){\circle*{0.000001}} \put(-66.47,-87.68){\circle*{0.000001}} \put(-65.76,-86.97){\circle*{0.000001}} \put(-65.76,-86.27){\circle*{0.000001}} \put(-65.05,-85.56){\circle*{0.000001}} \put(-65.05,-84.85){\circle*{0.000001}} \put(-64.35,-84.15){\circle*{0.000001}} \put(-64.35,-83.44){\circle*{0.000001}} \put(-63.64,-82.73){\circle*{0.000001}} \put(-62.93,-82.02){\circle*{0.000001}} \put(-62.93,-81.32){\circle*{0.000001}} \put(-62.23,-80.61){\circle*{0.000001}} \put(-62.23,-79.90){\circle*{0.000001}} \put(-61.52,-79.20){\circle*{0.000001}} \put(-61.52,-78.49){\circle*{0.000001}} \put(-60.81,-77.78){\circle*{0.000001}} \put(-60.10,-77.07){\circle*{0.000001}} \put(-60.10,-76.37){\circle*{0.000001}} \put(-59.40,-75.66){\circle*{0.000001}} \put(-59.40,-74.95){\circle*{0.000001}} \put(-58.69,-74.25){\circle*{0.000001}} \put(-58.69,-73.54){\circle*{0.000001}} \put(-57.98,-72.83){\circle*{0.000001}} \put(-57.28,-72.12){\circle*{0.000001}} \put(-57.28,-71.42){\circle*{0.000001}} \put(-56.57,-70.71){\circle*{0.000001}} \put(-56.57,-70.00){\circle*{0.000001}} \put(-55.86,-69.30){\circle*{0.000001}} \put(-55.86,-68.59){\circle*{0.000001}} \put(-55.15,-67.88){\circle*{0.000001}} \put(-54.45,-67.18){\circle*{0.000001}} \put(-54.45,-66.47){\circle*{0.000001}} \put(-53.74,-65.76){\circle*{0.000001}} \put(-53.74,-65.05){\circle*{0.000001}} \put(-53.03,-64.35){\circle*{0.000001}} \put(-53.03,-63.64){\circle*{0.000001}} \put(-52.33,-62.93){\circle*{0.000001}} \put(-88.39,-84.85){\circle*{0.000001}} \put(-87.68,-84.15){\circle*{0.000001}} \put(-86.97,-84.15){\circle*{0.000001}} \put(-86.27,-83.44){\circle*{0.000001}} \put(-85.56,-83.44){\circle*{0.000001}} \put(-84.85,-82.73){\circle*{0.000001}} \put(-84.15,-82.02){\circle*{0.000001}} \put(-83.44,-82.02){\circle*{0.000001}} \put(-82.73,-81.32){\circle*{0.000001}} \put(-82.02,-81.32){\circle*{0.000001}} \put(-81.32,-80.61){\circle*{0.000001}} \put(-80.61,-79.90){\circle*{0.000001}} \put(-79.90,-79.90){\circle*{0.000001}} \put(-79.20,-79.20){\circle*{0.000001}} \put(-78.49,-78.49){\circle*{0.000001}} \put(-77.78,-78.49){\circle*{0.000001}} \put(-77.07,-77.78){\circle*{0.000001}} \put(-76.37,-77.78){\circle*{0.000001}} \put(-75.66,-77.07){\circle*{0.000001}} \put(-74.95,-76.37){\circle*{0.000001}} \put(-74.25,-76.37){\circle*{0.000001}} \put(-73.54,-75.66){\circle*{0.000001}} \put(-72.83,-75.66){\circle*{0.000001}} \put(-72.12,-74.95){\circle*{0.000001}} \put(-71.42,-74.25){\circle*{0.000001}} \put(-70.71,-74.25){\circle*{0.000001}} \put(-70.00,-73.54){\circle*{0.000001}} \put(-69.30,-73.54){\circle*{0.000001}} \put(-68.59,-72.83){\circle*{0.000001}} \put(-67.88,-72.12){\circle*{0.000001}} \put(-67.18,-72.12){\circle*{0.000001}} \put(-66.47,-71.42){\circle*{0.000001}} \put(-65.76,-71.42){\circle*{0.000001}} \put(-65.05,-70.71){\circle*{0.000001}} \put(-64.35,-70.00){\circle*{0.000001}} 
\put(-63.64,-70.00){\circle*{0.000001}} \put(-62.93,-69.30){\circle*{0.000001}} \put(-62.23,-69.30){\circle*{0.000001}} \put(-61.52,-68.59){\circle*{0.000001}} \put(-60.81,-67.88){\circle*{0.000001}} \put(-60.10,-67.88){\circle*{0.000001}} \put(-59.40,-67.18){\circle*{0.000001}} \put(-58.69,-66.47){\circle*{0.000001}} \put(-57.98,-66.47){\circle*{0.000001}} \put(-57.28,-65.76){\circle*{0.000001}} \put(-56.57,-65.76){\circle*{0.000001}} \put(-55.86,-65.05){\circle*{0.000001}} \put(-55.15,-64.35){\circle*{0.000001}} \put(-54.45,-64.35){\circle*{0.000001}} \put(-53.74,-63.64){\circle*{0.000001}} \put(-53.03,-63.64){\circle*{0.000001}} \put(-52.33,-62.93){\circle*{0.000001}} \put(-67.88,-121.62){\circle*{0.000001}} \put(-68.59,-120.92){\circle*{0.000001}} \put(-68.59,-120.21){\circle*{0.000001}} \put(-69.30,-119.50){\circle*{0.000001}} \put(-69.30,-118.79){\circle*{0.000001}} \put(-70.00,-118.09){\circle*{0.000001}} \put(-70.00,-117.38){\circle*{0.000001}} \put(-70.71,-116.67){\circle*{0.000001}} \put(-70.71,-115.97){\circle*{0.000001}} \put(-71.42,-115.26){\circle*{0.000001}} \put(-72.12,-114.55){\circle*{0.000001}} \put(-72.12,-113.84){\circle*{0.000001}} \put(-72.83,-113.14){\circle*{0.000001}} \put(-72.83,-112.43){\circle*{0.000001}} \put(-73.54,-111.72){\circle*{0.000001}} \put(-73.54,-111.02){\circle*{0.000001}} \put(-74.25,-110.31){\circle*{0.000001}} \put(-74.25,-109.60){\circle*{0.000001}} \put(-74.95,-108.89){\circle*{0.000001}} \put(-75.66,-108.19){\circle*{0.000001}} \put(-75.66,-107.48){\circle*{0.000001}} \put(-76.37,-106.77){\circle*{0.000001}} \put(-76.37,-106.07){\circle*{0.000001}} \put(-77.07,-105.36){\circle*{0.000001}} \put(-77.07,-104.65){\circle*{0.000001}} \put(-77.78,-103.94){\circle*{0.000001}} \put(-77.78,-103.24){\circle*{0.000001}} \put(-78.49,-102.53){\circle*{0.000001}} \put(-79.20,-101.82){\circle*{0.000001}} \put(-79.20,-101.12){\circle*{0.000001}} \put(-79.90,-100.41){\circle*{0.000001}} \put(-79.90,-99.70){\circle*{0.000001}} \put(-80.61,-98.99){\circle*{0.000001}} \put(-80.61,-98.29){\circle*{0.000001}} \put(-81.32,-97.58){\circle*{0.000001}} \put(-82.02,-96.87){\circle*{0.000001}} \put(-82.02,-96.17){\circle*{0.000001}} \put(-82.73,-95.46){\circle*{0.000001}} \put(-82.73,-94.75){\circle*{0.000001}} \put(-83.44,-94.05){\circle*{0.000001}} \put(-83.44,-93.34){\circle*{0.000001}} \put(-84.15,-92.63){\circle*{0.000001}} \put(-84.15,-91.92){\circle*{0.000001}} \put(-84.85,-91.22){\circle*{0.000001}} \put(-85.56,-90.51){\circle*{0.000001}} \put(-85.56,-89.80){\circle*{0.000001}} \put(-86.27,-89.10){\circle*{0.000001}} \put(-86.27,-88.39){\circle*{0.000001}} \put(-86.97,-87.68){\circle*{0.000001}} \put(-86.97,-86.97){\circle*{0.000001}} \put(-87.68,-86.27){\circle*{0.000001}} \put(-87.68,-85.56){\circle*{0.000001}} \put(-88.39,-84.85){\circle*{0.000001}} \put(-91.92,-159.81){\circle*{0.000001}} \put(-91.22,-159.10){\circle*{0.000001}} \put(-91.22,-158.39){\circle*{0.000001}} \put(-90.51,-157.68){\circle*{0.000001}} \put(-89.80,-156.98){\circle*{0.000001}} \put(-89.80,-156.27){\circle*{0.000001}} \put(-89.10,-155.56){\circle*{0.000001}} \put(-89.10,-154.86){\circle*{0.000001}} \put(-88.39,-154.15){\circle*{0.000001}} \put(-87.68,-153.44){\circle*{0.000001}} \put(-87.68,-152.74){\circle*{0.000001}} \put(-86.97,-152.03){\circle*{0.000001}} \put(-86.27,-151.32){\circle*{0.000001}} \put(-86.27,-150.61){\circle*{0.000001}} \put(-85.56,-149.91){\circle*{0.000001}} \put(-85.56,-149.20){\circle*{0.000001}} \put(-84.85,-148.49){\circle*{0.000001}} 
\put(-84.15,-147.79){\circle*{0.000001}} \put(-84.15,-147.08){\circle*{0.000001}} \put(-83.44,-146.37){\circle*{0.000001}} \put(-82.73,-145.66){\circle*{0.000001}} \put(-82.73,-144.96){\circle*{0.000001}} \put(-82.02,-144.25){\circle*{0.000001}} \put(-82.02,-143.54){\circle*{0.000001}} \put(-81.32,-142.84){\circle*{0.000001}} \put(-80.61,-142.13){\circle*{0.000001}} \put(-80.61,-141.42){\circle*{0.000001}} \put(-79.90,-140.71){\circle*{0.000001}} \put(-79.20,-140.01){\circle*{0.000001}} \put(-79.20,-139.30){\circle*{0.000001}} \put(-78.49,-138.59){\circle*{0.000001}} \put(-77.78,-137.89){\circle*{0.000001}} \put(-77.78,-137.18){\circle*{0.000001}} \put(-77.07,-136.47){\circle*{0.000001}} \put(-77.07,-135.76){\circle*{0.000001}} \put(-76.37,-135.06){\circle*{0.000001}} \put(-75.66,-134.35){\circle*{0.000001}} \put(-75.66,-133.64){\circle*{0.000001}} \put(-74.95,-132.94){\circle*{0.000001}} \put(-74.25,-132.23){\circle*{0.000001}} \put(-74.25,-131.52){\circle*{0.000001}} \put(-73.54,-130.81){\circle*{0.000001}} \put(-73.54,-130.11){\circle*{0.000001}} \put(-72.83,-129.40){\circle*{0.000001}} \put(-72.12,-128.69){\circle*{0.000001}} \put(-72.12,-127.99){\circle*{0.000001}} \put(-71.42,-127.28){\circle*{0.000001}} \put(-70.71,-126.57){\circle*{0.000001}} \put(-70.71,-125.87){\circle*{0.000001}} \put(-70.00,-125.16){\circle*{0.000001}} \put(-70.00,-124.45){\circle*{0.000001}} \put(-69.30,-123.74){\circle*{0.000001}} \put(-68.59,-123.04){\circle*{0.000001}} \put(-68.59,-122.33){\circle*{0.000001}} \put(-67.88,-121.62){\circle*{0.000001}} \put(-91.92,-159.81){\circle*{0.000001}} \put(-91.22,-159.81){\circle*{0.000001}} \put(-90.51,-159.81){\circle*{0.000001}} \put(-89.80,-159.81){\circle*{0.000001}} \put(-89.10,-159.10){\circle*{0.000001}} \put(-88.39,-159.10){\circle*{0.000001}} \put(-87.68,-159.10){\circle*{0.000001}} \put(-86.97,-159.10){\circle*{0.000001}} \put(-86.27,-159.10){\circle*{0.000001}} \put(-85.56,-159.10){\circle*{0.000001}} \put(-84.85,-159.10){\circle*{0.000001}} \put(-84.15,-158.39){\circle*{0.000001}} \put(-83.44,-158.39){\circle*{0.000001}} \put(-82.73,-158.39){\circle*{0.000001}} \put(-82.02,-158.39){\circle*{0.000001}} \put(-81.32,-158.39){\circle*{0.000001}} \put(-80.61,-158.39){\circle*{0.000001}} \put(-79.90,-158.39){\circle*{0.000001}} \put(-79.20,-157.68){\circle*{0.000001}} \put(-78.49,-157.68){\circle*{0.000001}} \put(-77.78,-157.68){\circle*{0.000001}} \put(-77.07,-157.68){\circle*{0.000001}} \put(-76.37,-157.68){\circle*{0.000001}} \put(-75.66,-157.68){\circle*{0.000001}} \put(-74.95,-157.68){\circle*{0.000001}} \put(-74.25,-156.98){\circle*{0.000001}} \put(-73.54,-156.98){\circle*{0.000001}} \put(-72.83,-156.98){\circle*{0.000001}} \put(-72.12,-156.98){\circle*{0.000001}} \put(-71.42,-156.98){\circle*{0.000001}} \put(-70.71,-156.98){\circle*{0.000001}} \put(-70.00,-156.98){\circle*{0.000001}} \put(-69.30,-156.27){\circle*{0.000001}} \put(-68.59,-156.27){\circle*{0.000001}} \put(-67.88,-156.27){\circle*{0.000001}} \put(-67.18,-156.27){\circle*{0.000001}} \put(-66.47,-156.27){\circle*{0.000001}} \put(-65.76,-156.27){\circle*{0.000001}} \put(-65.05,-156.27){\circle*{0.000001}} \put(-64.35,-155.56){\circle*{0.000001}} \put(-63.64,-155.56){\circle*{0.000001}} \put(-62.93,-155.56){\circle*{0.000001}} \put(-62.23,-155.56){\circle*{0.000001}} \put(-61.52,-155.56){\circle*{0.000001}} \put(-60.81,-155.56){\circle*{0.000001}} \put(-60.10,-155.56){\circle*{0.000001}} \put(-59.40,-154.86){\circle*{0.000001}} \put(-58.69,-154.86){\circle*{0.000001}} 
\put(-57.98,-154.86){\circle*{0.000001}} \put(-57.28,-154.86){\circle*{0.000001}} \put(-56.57,-154.86){\circle*{0.000001}} \put(-55.86,-154.86){\circle*{0.000001}} \put(-55.15,-154.86){\circle*{0.000001}} \put(-54.45,-154.15){\circle*{0.000001}} \put(-53.74,-154.15){\circle*{0.000001}} \put(-53.03,-154.15){\circle*{0.000001}} \put(-52.33,-154.15){\circle*{0.000001}} \put(-51.62,-154.15){\circle*{0.000001}} \put(-50.91,-154.15){\circle*{0.000001}} \put(-50.20,-154.15){\circle*{0.000001}} \put(-49.50,-153.44){\circle*{0.000001}} \put(-48.79,-153.44){\circle*{0.000001}} \put(-48.08,-153.44){\circle*{0.000001}} \put(-47.38,-153.44){\circle*{0.000001}} \put(-47.38,-153.44){\circle*{0.000001}} \put(-46.67,-153.44){\circle*{0.000001}} \put(-45.96,-152.74){\circle*{0.000001}} \put(-45.25,-152.74){\circle*{0.000001}} \put(-44.55,-152.03){\circle*{0.000001}} \put(-43.84,-152.03){\circle*{0.000001}} \put(-43.13,-151.32){\circle*{0.000001}} \put(-42.43,-151.32){\circle*{0.000001}} \put(-41.72,-150.61){\circle*{0.000001}} \put(-41.01,-150.61){\circle*{0.000001}} \put(-40.31,-149.91){\circle*{0.000001}} \put(-39.60,-149.91){\circle*{0.000001}} \put(-38.89,-149.20){\circle*{0.000001}} \put(-38.18,-149.20){\circle*{0.000001}} \put(-37.48,-148.49){\circle*{0.000001}} \put(-36.77,-148.49){\circle*{0.000001}} \put(-36.06,-148.49){\circle*{0.000001}} \put(-35.36,-147.79){\circle*{0.000001}} \put(-34.65,-147.79){\circle*{0.000001}} \put(-33.94,-147.08){\circle*{0.000001}} \put(-33.23,-147.08){\circle*{0.000001}} \put(-32.53,-146.37){\circle*{0.000001}} \put(-31.82,-146.37){\circle*{0.000001}} \put(-31.11,-145.66){\circle*{0.000001}} \put(-30.41,-145.66){\circle*{0.000001}} \put(-29.70,-144.96){\circle*{0.000001}} \put(-28.99,-144.96){\circle*{0.000001}} \put(-28.28,-144.25){\circle*{0.000001}} \put(-27.58,-144.25){\circle*{0.000001}} \put(-26.87,-143.54){\circle*{0.000001}} \put(-26.16,-143.54){\circle*{0.000001}} \put(-25.46,-143.54){\circle*{0.000001}} \put(-24.75,-142.84){\circle*{0.000001}} \put(-24.04,-142.84){\circle*{0.000001}} \put(-23.33,-142.13){\circle*{0.000001}} \put(-22.63,-142.13){\circle*{0.000001}} \put(-21.92,-141.42){\circle*{0.000001}} \put(-21.21,-141.42){\circle*{0.000001}} \put(-20.51,-140.71){\circle*{0.000001}} \put(-19.80,-140.71){\circle*{0.000001}} \put(-19.09,-140.01){\circle*{0.000001}} \put(-18.38,-140.01){\circle*{0.000001}} \put(-17.68,-139.30){\circle*{0.000001}} \put(-16.97,-139.30){\circle*{0.000001}} \put(-16.26,-138.59){\circle*{0.000001}} \put(-15.56,-138.59){\circle*{0.000001}} \put(-14.85,-138.59){\circle*{0.000001}} \put(-14.14,-137.89){\circle*{0.000001}} \put(-13.44,-137.89){\circle*{0.000001}} \put(-12.73,-137.18){\circle*{0.000001}} \put(-12.02,-137.18){\circle*{0.000001}} \put(-11.31,-136.47){\circle*{0.000001}} \put(-10.61,-136.47){\circle*{0.000001}} \put(-9.90,-135.76){\circle*{0.000001}} \put(-9.19,-135.76){\circle*{0.000001}} \put(-8.49,-135.06){\circle*{0.000001}} \put(-7.78,-135.06){\circle*{0.000001}} \put(-7.07,-134.35){\circle*{0.000001}} \put(-6.36,-134.35){\circle*{0.000001}} \put(-5.66,-133.64){\circle*{0.000001}} \put(-4.95,-133.64){\circle*{0.000001}} \put(-4.95,-133.64){\circle*{0.000001}} \put(-4.24,-133.64){\circle*{0.000001}} \put(-3.54,-132.94){\circle*{0.000001}} \put(-2.83,-132.94){\circle*{0.000001}} \put(-2.12,-132.23){\circle*{0.000001}} \put(-1.41,-132.23){\circle*{0.000001}} \put(-0.71,-131.52){\circle*{0.000001}} \put( 0.00,-131.52){\circle*{0.000001}} \put( 0.71,-130.81){\circle*{0.000001}} \put( 1.41,-130.81){\circle*{0.000001}} \put( 
2.12,-130.81){\circle*{0.000001}} \put( 2.83,-130.11){\circle*{0.000001}} \put( 3.54,-130.11){\circle*{0.000001}} \put( 4.24,-129.40){\circle*{0.000001}} \put( 4.95,-129.40){\circle*{0.000001}} \put( 5.66,-128.69){\circle*{0.000001}} \put( 6.36,-128.69){\circle*{0.000001}} \put( 7.07,-127.99){\circle*{0.000001}} \put( 7.78,-127.99){\circle*{0.000001}} \put( 8.49,-127.99){\circle*{0.000001}} \put( 9.19,-127.28){\circle*{0.000001}} \put( 9.90,-127.28){\circle*{0.000001}} \put(10.61,-126.57){\circle*{0.000001}} \put(11.31,-126.57){\circle*{0.000001}} \put(12.02,-125.87){\circle*{0.000001}} \put(12.73,-125.87){\circle*{0.000001}} \put(13.44,-125.16){\circle*{0.000001}} \put(14.14,-125.16){\circle*{0.000001}} \put(14.85,-125.16){\circle*{0.000001}} \put(15.56,-124.45){\circle*{0.000001}} \put(16.26,-124.45){\circle*{0.000001}} \put(16.97,-123.74){\circle*{0.000001}} \put(17.68,-123.74){\circle*{0.000001}} \put(18.38,-123.04){\circle*{0.000001}} \put(19.09,-123.04){\circle*{0.000001}} \put(19.80,-122.33){\circle*{0.000001}} \put(20.51,-122.33){\circle*{0.000001}} \put(21.21,-121.62){\circle*{0.000001}} \put(21.92,-121.62){\circle*{0.000001}} \put(22.63,-121.62){\circle*{0.000001}} \put(23.33,-120.92){\circle*{0.000001}} \put(24.04,-120.92){\circle*{0.000001}} \put(24.75,-120.21){\circle*{0.000001}} \put(25.46,-120.21){\circle*{0.000001}} \put(26.16,-119.50){\circle*{0.000001}} \put(26.87,-119.50){\circle*{0.000001}} \put(27.58,-118.79){\circle*{0.000001}} \put(28.28,-118.79){\circle*{0.000001}} \put(28.99,-118.79){\circle*{0.000001}} \put(29.70,-118.09){\circle*{0.000001}} \put(30.41,-118.09){\circle*{0.000001}} \put(31.11,-117.38){\circle*{0.000001}} \put(31.82,-117.38){\circle*{0.000001}} \put(32.53,-116.67){\circle*{0.000001}} \put(33.23,-116.67){\circle*{0.000001}} \put(33.94,-115.97){\circle*{0.000001}} \put(34.65,-115.97){\circle*{0.000001}} \put(34.65,-115.97){\circle*{0.000001}} \put(34.65,-115.26){\circle*{0.000001}} \put(35.36,-114.55){\circle*{0.000001}} \put(35.36,-113.84){\circle*{0.000001}} \put(36.06,-113.14){\circle*{0.000001}} \put(36.06,-112.43){\circle*{0.000001}} \put(36.06,-111.72){\circle*{0.000001}} \put(36.77,-111.02){\circle*{0.000001}} \put(36.77,-110.31){\circle*{0.000001}} \put(37.48,-109.60){\circle*{0.000001}} \put(37.48,-108.89){\circle*{0.000001}} \put(37.48,-108.19){\circle*{0.000001}} \put(38.18,-107.48){\circle*{0.000001}} \put(38.18,-106.77){\circle*{0.000001}} \put(38.89,-106.07){\circle*{0.000001}} \put(38.89,-105.36){\circle*{0.000001}} \put(38.89,-104.65){\circle*{0.000001}} \put(39.60,-103.94){\circle*{0.000001}} \put(39.60,-103.24){\circle*{0.000001}} \put(39.60,-102.53){\circle*{0.000001}} \put(40.31,-101.82){\circle*{0.000001}} \put(40.31,-101.12){\circle*{0.000001}} \put(41.01,-100.41){\circle*{0.000001}} \put(41.01,-99.70){\circle*{0.000001}} \put(41.01,-98.99){\circle*{0.000001}} \put(41.72,-98.29){\circle*{0.000001}} \put(41.72,-97.58){\circle*{0.000001}} \put(42.43,-96.87){\circle*{0.000001}} \put(42.43,-96.17){\circle*{0.000001}} \put(42.43,-95.46){\circle*{0.000001}} \put(43.13,-94.75){\circle*{0.000001}} \put(43.13,-94.05){\circle*{0.000001}} \put(43.84,-93.34){\circle*{0.000001}} \put(43.84,-92.63){\circle*{0.000001}} \put(43.84,-91.92){\circle*{0.000001}} \put(44.55,-91.22){\circle*{0.000001}} \put(44.55,-90.51){\circle*{0.000001}} \put(45.25,-89.80){\circle*{0.000001}} \put(45.25,-89.10){\circle*{0.000001}} \put(45.25,-88.39){\circle*{0.000001}} \put(45.96,-87.68){\circle*{0.000001}} \put(45.96,-86.97){\circle*{0.000001}} 
\put(46.67,-86.27){\circle*{0.000001}} \put(46.67,-85.56){\circle*{0.000001}} \put(46.67,-84.85){\circle*{0.000001}} \put(47.38,-84.15){\circle*{0.000001}} \put(47.38,-83.44){\circle*{0.000001}} \put(47.38,-82.73){\circle*{0.000001}} \put(48.08,-82.02){\circle*{0.000001}} \put(48.08,-81.32){\circle*{0.000001}} \put(48.79,-80.61){\circle*{0.000001}} \put(48.79,-79.90){\circle*{0.000001}} \put(48.79,-79.20){\circle*{0.000001}} \put(49.50,-78.49){\circle*{0.000001}} \put(49.50,-77.78){\circle*{0.000001}} \put(50.20,-77.07){\circle*{0.000001}} \put(50.20,-76.37){\circle*{0.000001}} \put(50.20,-75.66){\circle*{0.000001}} \put(50.91,-74.95){\circle*{0.000001}} \put(50.91,-74.25){\circle*{0.000001}} \put(51.62,-73.54){\circle*{0.000001}} \put(51.62,-72.83){\circle*{0.000001}} \put( 6.36,-68.59){\circle*{0.000001}} \put( 7.07,-68.59){\circle*{0.000001}} \put( 7.78,-68.59){\circle*{0.000001}} \put( 8.49,-68.59){\circle*{0.000001}} \put( 9.19,-68.59){\circle*{0.000001}} \put( 9.90,-68.59){\circle*{0.000001}} \put(10.61,-69.30){\circle*{0.000001}} \put(11.31,-69.30){\circle*{0.000001}} \put(12.02,-69.30){\circle*{0.000001}} \put(12.73,-69.30){\circle*{0.000001}} \put(13.44,-69.30){\circle*{0.000001}} \put(14.14,-69.30){\circle*{0.000001}} \put(14.85,-69.30){\circle*{0.000001}} \put(15.56,-69.30){\circle*{0.000001}} \put(16.26,-69.30){\circle*{0.000001}} \put(16.97,-69.30){\circle*{0.000001}} \put(17.68,-69.30){\circle*{0.000001}} \put(18.38,-70.00){\circle*{0.000001}} \put(19.09,-70.00){\circle*{0.000001}} \put(19.80,-70.00){\circle*{0.000001}} \put(20.51,-70.00){\circle*{0.000001}} \put(21.21,-70.00){\circle*{0.000001}} \put(21.92,-70.00){\circle*{0.000001}} \put(22.63,-70.00){\circle*{0.000001}} \put(23.33,-70.00){\circle*{0.000001}} \put(24.04,-70.00){\circle*{0.000001}} \put(24.75,-70.00){\circle*{0.000001}} \put(25.46,-70.71){\circle*{0.000001}} \put(26.16,-70.71){\circle*{0.000001}} \put(26.87,-70.71){\circle*{0.000001}} \put(27.58,-70.71){\circle*{0.000001}} \put(28.28,-70.71){\circle*{0.000001}} \put(28.99,-70.71){\circle*{0.000001}} \put(29.70,-70.71){\circle*{0.000001}} \put(30.41,-70.71){\circle*{0.000001}} \put(31.11,-70.71){\circle*{0.000001}} \put(31.82,-70.71){\circle*{0.000001}} \put(32.53,-70.71){\circle*{0.000001}} \put(33.23,-71.42){\circle*{0.000001}} \put(33.94,-71.42){\circle*{0.000001}} \put(34.65,-71.42){\circle*{0.000001}} \put(35.36,-71.42){\circle*{0.000001}} \put(36.06,-71.42){\circle*{0.000001}} \put(36.77,-71.42){\circle*{0.000001}} \put(37.48,-71.42){\circle*{0.000001}} \put(38.18,-71.42){\circle*{0.000001}} \put(38.89,-71.42){\circle*{0.000001}} \put(39.60,-71.42){\circle*{0.000001}} \put(40.31,-71.42){\circle*{0.000001}} \put(41.01,-72.12){\circle*{0.000001}} \put(41.72,-72.12){\circle*{0.000001}} \put(42.43,-72.12){\circle*{0.000001}} \put(43.13,-72.12){\circle*{0.000001}} \put(43.84,-72.12){\circle*{0.000001}} \put(44.55,-72.12){\circle*{0.000001}} \put(45.25,-72.12){\circle*{0.000001}} \put(45.96,-72.12){\circle*{0.000001}} \put(46.67,-72.12){\circle*{0.000001}} \put(47.38,-72.12){\circle*{0.000001}} \put(48.08,-72.83){\circle*{0.000001}} \put(48.79,-72.83){\circle*{0.000001}} \put(49.50,-72.83){\circle*{0.000001}} \put(50.20,-72.83){\circle*{0.000001}} \put(50.91,-72.83){\circle*{0.000001}} \put(51.62,-72.83){\circle*{0.000001}} \put(-32.53,-89.80){\circle*{0.000001}} \put(-31.82,-89.10){\circle*{0.000001}} \put(-31.11,-89.10){\circle*{0.000001}} \put(-30.41,-88.39){\circle*{0.000001}} \put(-29.70,-88.39){\circle*{0.000001}} \put(-28.99,-87.68){\circle*{0.000001}} 
\put(-28.28,-87.68){\circle*{0.000001}} \put(-27.58,-86.97){\circle*{0.000001}} \put(-26.87,-86.97){\circle*{0.000001}} \put(-26.16,-86.27){\circle*{0.000001}} \put(-25.46,-86.27){\circle*{0.000001}} \put(-24.75,-85.56){\circle*{0.000001}} \put(-24.04,-84.85){\circle*{0.000001}} \put(-23.33,-84.85){\circle*{0.000001}} \put(-22.63,-84.15){\circle*{0.000001}} \put(-21.92,-84.15){\circle*{0.000001}} \put(-21.21,-83.44){\circle*{0.000001}} \put(-20.51,-83.44){\circle*{0.000001}} \put(-19.80,-82.73){\circle*{0.000001}} \put(-19.09,-82.73){\circle*{0.000001}} \put(-18.38,-82.02){\circle*{0.000001}} \put(-17.68,-82.02){\circle*{0.000001}} \put(-16.97,-81.32){\circle*{0.000001}} \put(-16.26,-80.61){\circle*{0.000001}} \put(-15.56,-80.61){\circle*{0.000001}} \put(-14.85,-79.90){\circle*{0.000001}} \put(-14.14,-79.90){\circle*{0.000001}} \put(-13.44,-79.20){\circle*{0.000001}} \put(-12.73,-79.20){\circle*{0.000001}} \put(-12.02,-78.49){\circle*{0.000001}} \put(-11.31,-78.49){\circle*{0.000001}} \put(-10.61,-77.78){\circle*{0.000001}} \put(-9.90,-77.78){\circle*{0.000001}} \put(-9.19,-77.07){\circle*{0.000001}} \put(-8.49,-76.37){\circle*{0.000001}} \put(-7.78,-76.37){\circle*{0.000001}} \put(-7.07,-75.66){\circle*{0.000001}} \put(-6.36,-75.66){\circle*{0.000001}} \put(-5.66,-74.95){\circle*{0.000001}} \put(-4.95,-74.95){\circle*{0.000001}} \put(-4.24,-74.25){\circle*{0.000001}} \put(-3.54,-74.25){\circle*{0.000001}} \put(-2.83,-73.54){\circle*{0.000001}} \put(-2.12,-73.54){\circle*{0.000001}} \put(-1.41,-72.83){\circle*{0.000001}} \put(-0.71,-72.12){\circle*{0.000001}} \put( 0.00,-72.12){\circle*{0.000001}} \put( 0.71,-71.42){\circle*{0.000001}} \put( 1.41,-71.42){\circle*{0.000001}} \put( 2.12,-70.71){\circle*{0.000001}} \put( 2.83,-70.71){\circle*{0.000001}} \put( 3.54,-70.00){\circle*{0.000001}} \put( 4.24,-70.00){\circle*{0.000001}} \put( 4.95,-69.30){\circle*{0.000001}} \put( 5.66,-69.30){\circle*{0.000001}} \put( 6.36,-68.59){\circle*{0.000001}} \put(-1.41,-122.33){\circle*{0.000001}} \put(-2.12,-121.62){\circle*{0.000001}} \put(-2.83,-120.92){\circle*{0.000001}} \put(-3.54,-120.21){\circle*{0.000001}} \put(-4.24,-119.50){\circle*{0.000001}} \put(-4.95,-118.79){\circle*{0.000001}} \put(-5.66,-118.09){\circle*{0.000001}} \put(-6.36,-117.38){\circle*{0.000001}} \put(-7.07,-116.67){\circle*{0.000001}} \put(-7.78,-115.97){\circle*{0.000001}} \put(-8.49,-115.26){\circle*{0.000001}} \put(-9.19,-114.55){\circle*{0.000001}} \put(-9.19,-113.84){\circle*{0.000001}} \put(-9.90,-113.14){\circle*{0.000001}} \put(-10.61,-112.43){\circle*{0.000001}} \put(-11.31,-111.72){\circle*{0.000001}} \put(-12.02,-111.02){\circle*{0.000001}} \put(-12.73,-110.31){\circle*{0.000001}} \put(-13.44,-109.60){\circle*{0.000001}} \put(-14.14,-108.89){\circle*{0.000001}} \put(-14.85,-108.19){\circle*{0.000001}} \put(-15.56,-107.48){\circle*{0.000001}} \put(-16.26,-106.77){\circle*{0.000001}} \put(-16.97,-106.07){\circle*{0.000001}} \put(-17.68,-105.36){\circle*{0.000001}} \put(-18.38,-104.65){\circle*{0.000001}} \put(-19.09,-103.94){\circle*{0.000001}} \put(-19.80,-103.24){\circle*{0.000001}} \put(-20.51,-102.53){\circle*{0.000001}} \put(-21.21,-101.82){\circle*{0.000001}} \put(-21.92,-101.12){\circle*{0.000001}} \put(-22.63,-100.41){\circle*{0.000001}} \put(-23.33,-99.70){\circle*{0.000001}} \put(-24.04,-98.99){\circle*{0.000001}} \put(-24.75,-98.29){\circle*{0.000001}} \put(-24.75,-97.58){\circle*{0.000001}} \put(-25.46,-96.87){\circle*{0.000001}} \put(-26.16,-96.17){\circle*{0.000001}} \put(-26.87,-95.46){\circle*{0.000001}} 
\put(-27.58,-94.75){\circle*{0.000001}} \put(-28.28,-94.05){\circle*{0.000001}} \put(-28.99,-93.34){\circle*{0.000001}} \put(-29.70,-92.63){\circle*{0.000001}} \put(-30.41,-91.92){\circle*{0.000001}} \put(-31.11,-91.22){\circle*{0.000001}} \put(-31.82,-90.51){\circle*{0.000001}} \put(-32.53,-89.80){\circle*{0.000001}} \put(-36.77,-147.79){\circle*{0.000001}} \put(-36.06,-147.08){\circle*{0.000001}} \put(-35.36,-147.08){\circle*{0.000001}} \put(-34.65,-146.37){\circle*{0.000001}} \put(-33.94,-145.66){\circle*{0.000001}} \put(-33.23,-144.96){\circle*{0.000001}} \put(-32.53,-144.96){\circle*{0.000001}} \put(-31.82,-144.25){\circle*{0.000001}} \put(-31.11,-143.54){\circle*{0.000001}} \put(-30.41,-143.54){\circle*{0.000001}} \put(-29.70,-142.84){\circle*{0.000001}} \put(-28.99,-142.13){\circle*{0.000001}} \put(-28.28,-141.42){\circle*{0.000001}} \put(-27.58,-141.42){\circle*{0.000001}} \put(-26.87,-140.71){\circle*{0.000001}} \put(-26.16,-140.01){\circle*{0.000001}} \put(-25.46,-139.30){\circle*{0.000001}} \put(-24.75,-139.30){\circle*{0.000001}} \put(-24.04,-138.59){\circle*{0.000001}} \put(-23.33,-137.89){\circle*{0.000001}} \put(-22.63,-137.89){\circle*{0.000001}} \put(-21.92,-137.18){\circle*{0.000001}} \put(-21.21,-136.47){\circle*{0.000001}} \put(-20.51,-135.76){\circle*{0.000001}} \put(-19.80,-135.76){\circle*{0.000001}} \put(-19.09,-135.06){\circle*{0.000001}} \put(-18.38,-134.35){\circle*{0.000001}} \put(-17.68,-134.35){\circle*{0.000001}} \put(-16.97,-133.64){\circle*{0.000001}} \put(-16.26,-132.94){\circle*{0.000001}} \put(-15.56,-132.23){\circle*{0.000001}} \put(-14.85,-132.23){\circle*{0.000001}} \put(-14.14,-131.52){\circle*{0.000001}} \put(-13.44,-130.81){\circle*{0.000001}} \put(-12.73,-130.81){\circle*{0.000001}} \put(-12.02,-130.11){\circle*{0.000001}} \put(-11.31,-129.40){\circle*{0.000001}} \put(-10.61,-128.69){\circle*{0.000001}} \put(-9.90,-128.69){\circle*{0.000001}} \put(-9.19,-127.99){\circle*{0.000001}} \put(-8.49,-127.28){\circle*{0.000001}} \put(-7.78,-126.57){\circle*{0.000001}} \put(-7.07,-126.57){\circle*{0.000001}} \put(-6.36,-125.87){\circle*{0.000001}} \put(-5.66,-125.16){\circle*{0.000001}} \put(-4.95,-125.16){\circle*{0.000001}} \put(-4.24,-124.45){\circle*{0.000001}} \put(-3.54,-123.74){\circle*{0.000001}} \put(-2.83,-123.04){\circle*{0.000001}} \put(-2.12,-123.04){\circle*{0.000001}} \put(-1.41,-122.33){\circle*{0.000001}} \put(-14.85,-189.50){\circle*{0.000001}} \put(-15.56,-188.80){\circle*{0.000001}} \put(-15.56,-188.09){\circle*{0.000001}} \put(-16.26,-187.38){\circle*{0.000001}} \put(-16.26,-186.68){\circle*{0.000001}} \put(-16.97,-185.97){\circle*{0.000001}} \put(-16.97,-185.26){\circle*{0.000001}} \put(-17.68,-184.55){\circle*{0.000001}} \put(-17.68,-183.85){\circle*{0.000001}} \put(-18.38,-183.14){\circle*{0.000001}} \put(-18.38,-182.43){\circle*{0.000001}} \put(-19.09,-181.73){\circle*{0.000001}} \put(-19.09,-181.02){\circle*{0.000001}} \put(-19.80,-180.31){\circle*{0.000001}} \put(-19.80,-179.61){\circle*{0.000001}} \put(-20.51,-178.90){\circle*{0.000001}} \put(-20.51,-178.19){\circle*{0.000001}} \put(-21.21,-177.48){\circle*{0.000001}} \put(-21.21,-176.78){\circle*{0.000001}} \put(-21.92,-176.07){\circle*{0.000001}} \put(-22.63,-175.36){\circle*{0.000001}} \put(-22.63,-174.66){\circle*{0.000001}} \put(-23.33,-173.95){\circle*{0.000001}} \put(-23.33,-173.24){\circle*{0.000001}} \put(-24.04,-172.53){\circle*{0.000001}} \put(-24.04,-171.83){\circle*{0.000001}} \put(-24.75,-171.12){\circle*{0.000001}} \put(-24.75,-170.41){\circle*{0.000001}} 
\put(-25.46,-169.71){\circle*{0.000001}} \put(-25.46,-169.00){\circle*{0.000001}} \put(-26.16,-168.29){\circle*{0.000001}} \put(-26.16,-167.58){\circle*{0.000001}} \put(-26.87,-166.88){\circle*{0.000001}} \put(-26.87,-166.17){\circle*{0.000001}} \put(-27.58,-165.46){\circle*{0.000001}} \put(-27.58,-164.76){\circle*{0.000001}} \put(-28.28,-164.05){\circle*{0.000001}} \put(-28.28,-163.34){\circle*{0.000001}} \put(-28.99,-162.63){\circle*{0.000001}} \put(-28.99,-161.93){\circle*{0.000001}} \put(-29.70,-161.22){\circle*{0.000001}} \put(-30.41,-160.51){\circle*{0.000001}} \put(-30.41,-159.81){\circle*{0.000001}} \put(-31.11,-159.10){\circle*{0.000001}} \put(-31.11,-158.39){\circle*{0.000001}} \put(-31.82,-157.68){\circle*{0.000001}} \put(-31.82,-156.98){\circle*{0.000001}} \put(-32.53,-156.27){\circle*{0.000001}} \put(-32.53,-155.56){\circle*{0.000001}} \put(-33.23,-154.86){\circle*{0.000001}} \put(-33.23,-154.15){\circle*{0.000001}} \put(-33.94,-153.44){\circle*{0.000001}} \put(-33.94,-152.74){\circle*{0.000001}} \put(-34.65,-152.03){\circle*{0.000001}} \put(-34.65,-151.32){\circle*{0.000001}} \put(-35.36,-150.61){\circle*{0.000001}} \put(-35.36,-149.91){\circle*{0.000001}} \put(-36.06,-149.20){\circle*{0.000001}} \put(-36.06,-148.49){\circle*{0.000001}} \put(-36.77,-147.79){\circle*{0.000001}} \put(-50.91,-214.96){\circle*{0.000001}} \put(-50.20,-214.25){\circle*{0.000001}} \put(-49.50,-214.25){\circle*{0.000001}} \put(-48.79,-213.55){\circle*{0.000001}} \put(-48.08,-212.84){\circle*{0.000001}} \put(-47.38,-212.13){\circle*{0.000001}} \put(-46.67,-212.13){\circle*{0.000001}} \put(-45.96,-211.42){\circle*{0.000001}} \put(-45.25,-210.72){\circle*{0.000001}} \put(-44.55,-210.72){\circle*{0.000001}} \put(-43.84,-210.01){\circle*{0.000001}} \put(-43.13,-209.30){\circle*{0.000001}} \put(-42.43,-209.30){\circle*{0.000001}} \put(-41.72,-208.60){\circle*{0.000001}} \put(-41.01,-207.89){\circle*{0.000001}} \put(-40.31,-207.18){\circle*{0.000001}} \put(-39.60,-207.18){\circle*{0.000001}} \put(-38.89,-206.48){\circle*{0.000001}} \put(-38.18,-205.77){\circle*{0.000001}} \put(-37.48,-205.77){\circle*{0.000001}} \put(-36.77,-205.06){\circle*{0.000001}} \put(-36.06,-204.35){\circle*{0.000001}} \put(-35.36,-203.65){\circle*{0.000001}} \put(-34.65,-203.65){\circle*{0.000001}} \put(-33.94,-202.94){\circle*{0.000001}} \put(-33.23,-202.23){\circle*{0.000001}} \put(-32.53,-202.23){\circle*{0.000001}} \put(-31.82,-201.53){\circle*{0.000001}} \put(-31.11,-200.82){\circle*{0.000001}} \put(-30.41,-200.82){\circle*{0.000001}} \put(-29.70,-200.11){\circle*{0.000001}} \put(-28.99,-199.40){\circle*{0.000001}} \put(-28.28,-198.70){\circle*{0.000001}} \put(-27.58,-198.70){\circle*{0.000001}} \put(-26.87,-197.99){\circle*{0.000001}} \put(-26.16,-197.28){\circle*{0.000001}} \put(-25.46,-197.28){\circle*{0.000001}} \put(-24.75,-196.58){\circle*{0.000001}} \put(-24.04,-195.87){\circle*{0.000001}} \put(-23.33,-195.16){\circle*{0.000001}} \put(-22.63,-195.16){\circle*{0.000001}} \put(-21.92,-194.45){\circle*{0.000001}} \put(-21.21,-193.75){\circle*{0.000001}} \put(-20.51,-193.75){\circle*{0.000001}} \put(-19.80,-193.04){\circle*{0.000001}} \put(-19.09,-192.33){\circle*{0.000001}} \put(-18.38,-192.33){\circle*{0.000001}} \put(-17.68,-191.63){\circle*{0.000001}} \put(-16.97,-190.92){\circle*{0.000001}} \put(-16.26,-190.21){\circle*{0.000001}} \put(-15.56,-190.21){\circle*{0.000001}} \put(-14.85,-189.50){\circle*{0.000001}} \put(-50.91,-214.96){\circle*{0.000001}} \put(-50.20,-215.67){\circle*{0.000001}} 
\put(-49.50,-215.67){\circle*{0.000001}} \put(-48.79,-216.37){\circle*{0.000001}} \put(-48.08,-216.37){\circle*{0.000001}} \put(-47.38,-217.08){\circle*{0.000001}} \put(-46.67,-217.08){\circle*{0.000001}} \put(-45.96,-217.79){\circle*{0.000001}} \put(-45.25,-217.79){\circle*{0.000001}} \put(-44.55,-218.50){\circle*{0.000001}} \put(-43.84,-218.50){\circle*{0.000001}} \put(-43.13,-219.20){\circle*{0.000001}} \put(-42.43,-219.20){\circle*{0.000001}} \put(-41.72,-219.91){\circle*{0.000001}} \put(-41.01,-219.91){\circle*{0.000001}} \put(-40.31,-220.62){\circle*{0.000001}} \put(-39.60,-220.62){\circle*{0.000001}} \put(-38.89,-221.32){\circle*{0.000001}} \put(-38.18,-221.32){\circle*{0.000001}} \put(-37.48,-222.03){\circle*{0.000001}} \put(-36.77,-222.03){\circle*{0.000001}} \put(-36.06,-222.74){\circle*{0.000001}} \put(-35.36,-222.74){\circle*{0.000001}} \put(-34.65,-223.45){\circle*{0.000001}} \put(-33.94,-223.45){\circle*{0.000001}} \put(-33.23,-224.15){\circle*{0.000001}} \put(-32.53,-224.15){\circle*{0.000001}} \put(-31.82,-224.86){\circle*{0.000001}} \put(-31.11,-224.86){\circle*{0.000001}} \put(-30.41,-225.57){\circle*{0.000001}} \put(-29.70,-225.57){\circle*{0.000001}} \put(-28.99,-226.27){\circle*{0.000001}} \put(-28.28,-226.27){\circle*{0.000001}} \put(-27.58,-226.98){\circle*{0.000001}} \put(-26.87,-226.98){\circle*{0.000001}} \put(-26.16,-227.69){\circle*{0.000001}} \put(-25.46,-227.69){\circle*{0.000001}} \put(-24.75,-228.40){\circle*{0.000001}} \put(-24.04,-228.40){\circle*{0.000001}} \put(-23.33,-229.10){\circle*{0.000001}} \put(-22.63,-229.10){\circle*{0.000001}} \put(-21.92,-229.81){\circle*{0.000001}} \put(-21.21,-229.81){\circle*{0.000001}} \put(-20.51,-230.52){\circle*{0.000001}} \put(-19.80,-230.52){\circle*{0.000001}} \put(-19.09,-231.22){\circle*{0.000001}} \put(-18.38,-231.22){\circle*{0.000001}} \put(-17.68,-231.93){\circle*{0.000001}} \put(-16.97,-231.93){\circle*{0.000001}} \put(-16.26,-232.64){\circle*{0.000001}} \put(-15.56,-232.64){\circle*{0.000001}} \put(-14.85,-233.35){\circle*{0.000001}} \put(-14.14,-233.35){\circle*{0.000001}} \put(-13.44,-234.05){\circle*{0.000001}} \put(-12.73,-234.05){\circle*{0.000001}} \put(-12.02,-234.76){\circle*{0.000001}} \put(-11.31,-234.76){\circle*{0.000001}} \put(-10.61,-235.47){\circle*{0.000001}} \put( 3.54,-279.31){\circle*{0.000001}} \put( 3.54,-278.60){\circle*{0.000001}} \put( 2.83,-277.89){\circle*{0.000001}} \put( 2.83,-277.19){\circle*{0.000001}} \put( 2.83,-276.48){\circle*{0.000001}} \put( 2.12,-275.77){\circle*{0.000001}} \put( 2.12,-275.06){\circle*{0.000001}} \put( 2.12,-274.36){\circle*{0.000001}} \put( 1.41,-273.65){\circle*{0.000001}} \put( 1.41,-272.94){\circle*{0.000001}} \put( 1.41,-272.24){\circle*{0.000001}} \put( 0.71,-271.53){\circle*{0.000001}} \put( 0.71,-270.82){\circle*{0.000001}} \put( 0.71,-270.11){\circle*{0.000001}} \put( 0.00,-269.41){\circle*{0.000001}} \put( 0.00,-268.70){\circle*{0.000001}} \put( 0.00,-267.99){\circle*{0.000001}} \put( 0.00,-267.29){\circle*{0.000001}} \put(-0.71,-266.58){\circle*{0.000001}} \put(-0.71,-265.87){\circle*{0.000001}} \put(-0.71,-265.17){\circle*{0.000001}} \put(-1.41,-264.46){\circle*{0.000001}} \put(-1.41,-263.75){\circle*{0.000001}} \put(-1.41,-263.04){\circle*{0.000001}} \put(-2.12,-262.34){\circle*{0.000001}} \put(-2.12,-261.63){\circle*{0.000001}} \put(-2.12,-260.92){\circle*{0.000001}} \put(-2.83,-260.22){\circle*{0.000001}} \put(-2.83,-259.51){\circle*{0.000001}} \put(-2.83,-258.80){\circle*{0.000001}} \put(-3.54,-258.09){\circle*{0.000001}} 
\put(359.21,-190.21){\circle*{0.000001}} \put(359.21,-189.50){\circle*{0.000001}} \put(358.50,-188.80){\circle*{0.000001}} \put(357.80,-188.09){\circle*{0.000001}} \put(357.80,-187.38){\circle*{0.000001}} \put(357.09,-186.68){\circle*{0.000001}} \put(356.38,-185.97){\circle*{0.000001}} \put(356.38,-185.26){\circle*{0.000001}} \put(355.67,-184.55){\circle*{0.000001}} \put(354.97,-183.85){\circle*{0.000001}} \put(354.97,-183.14){\circle*{0.000001}} \put(354.26,-182.43){\circle*{0.000001}} \put(353.55,-181.73){\circle*{0.000001}} \put(353.55,-181.02){\circle*{0.000001}} \put(352.85,-180.31){\circle*{0.000001}} \put(352.14,-179.61){\circle*{0.000001}} \put(352.14,-178.90){\circle*{0.000001}} \put(351.43,-178.19){\circle*{0.000001}} \put(350.72,-177.48){\circle*{0.000001}} \put(350.72,-176.78){\circle*{0.000001}} \put(350.02,-176.07){\circle*{0.000001}} \put(349.31,-175.36){\circle*{0.000001}} \put(349.31,-174.66){\circle*{0.000001}} \put(348.60,-173.95){\circle*{0.000001}} \put(347.90,-173.24){\circle*{0.000001}} \put(347.90,-172.53){\circle*{0.000001}} \put(347.19,-171.83){\circle*{0.000001}} \put(346.48,-171.12){\circle*{0.000001}} \put(346.48,-170.41){\circle*{0.000001}} \put(345.78,-169.71){\circle*{0.000001}} \put(345.07,-169.00){\circle*{0.000001}} \put(345.07,-168.29){\circle*{0.000001}} \put(344.36,-167.58){\circle*{0.000001}} \put(352.85,-245.37){\circle*{0.000001}} \put(352.85,-244.66){\circle*{0.000001}} \put(353.55,-243.95){\circle*{0.000001}} \put(353.55,-243.24){\circle*{0.000001}} \put(354.26,-242.54){\circle*{0.000001}} \put(354.26,-241.83){\circle*{0.000001}} \put(354.97,-241.12){\circle*{0.000001}} \put(354.97,-240.42){\circle*{0.000001}} \put(354.97,-239.71){\circle*{0.000001}} \put(355.67,-239.00){\circle*{0.000001}} \put(355.67,-238.29){\circle*{0.000001}} \put(356.38,-237.59){\circle*{0.000001}} \put(356.38,-236.88){\circle*{0.000001}} \put(357.09,-236.17){\circle*{0.000001}} \put(357.09,-235.47){\circle*{0.000001}} \put(357.09,-234.76){\circle*{0.000001}} \put(357.80,-234.05){\circle*{0.000001}} \put(357.80,-233.35){\circle*{0.000001}} \put(358.50,-232.64){\circle*{0.000001}} \put(358.50,-231.93){\circle*{0.000001}} \put(359.21,-231.22){\circle*{0.000001}} \put(359.21,-230.52){\circle*{0.000001}} \put(359.21,-229.81){\circle*{0.000001}} \put(359.92,-229.10){\circle*{0.000001}} \put(359.92,-228.40){\circle*{0.000001}} \put(360.62,-227.69){\circle*{0.000001}} \put(360.62,-226.98){\circle*{0.000001}} \put(361.33,-226.27){\circle*{0.000001}} \put(361.33,-225.57){\circle*{0.000001}} \put(361.33,-224.86){\circle*{0.000001}} \put(362.04,-224.15){\circle*{0.000001}} \put(362.04,-223.45){\circle*{0.000001}} \put(362.75,-222.74){\circle*{0.000001}} \put(362.75,-222.03){\circle*{0.000001}} \put(363.45,-221.32){\circle*{0.000001}} \put(363.45,-220.62){\circle*{0.000001}} \put(363.45,-219.91){\circle*{0.000001}} \put(364.16,-219.20){\circle*{0.000001}} \put(364.16,-218.50){\circle*{0.000001}} \put(364.87,-217.79){\circle*{0.000001}} \put(364.87,-217.08){\circle*{0.000001}} \put(365.57,-216.37){\circle*{0.000001}} \put(365.57,-215.67){\circle*{0.000001}} \put(365.57,-214.96){\circle*{0.000001}} \put(366.28,-214.25){\circle*{0.000001}} \put(366.28,-213.55){\circle*{0.000001}} \put(366.99,-212.84){\circle*{0.000001}} \put(366.99,-212.13){\circle*{0.000001}} \put(367.70,-211.42){\circle*{0.000001}} \put(367.70,-210.72){\circle*{0.000001}} \put(367.70,-210.01){\circle*{0.000001}} \put(368.40,-209.30){\circle*{0.000001}} \put(368.40,-208.60){\circle*{0.000001}} 
\put(369.11,-207.89){\circle*{0.000001}} \put(369.11,-207.18){\circle*{0.000001}} \put(369.82,-206.48){\circle*{0.000001}} \put(369.82,-205.77){\circle*{0.000001}} \put(352.85,-245.37){\circle*{0.000001}} \put(353.55,-244.66){\circle*{0.000001}} \put(354.26,-244.66){\circle*{0.000001}} \put(354.97,-243.95){\circle*{0.000001}} \put(355.67,-243.95){\circle*{0.000001}} \put(356.38,-243.24){\circle*{0.000001}} \put(357.09,-243.24){\circle*{0.000001}} \put(357.80,-242.54){\circle*{0.000001}} \put(358.50,-241.83){\circle*{0.000001}} \put(359.21,-241.83){\circle*{0.000001}} \put(359.92,-241.12){\circle*{0.000001}} \put(360.62,-241.12){\circle*{0.000001}} \put(361.33,-240.42){\circle*{0.000001}} \put(362.04,-240.42){\circle*{0.000001}} \put(362.75,-239.71){\circle*{0.000001}} \put(363.45,-239.71){\circle*{0.000001}} \put(364.16,-239.00){\circle*{0.000001}} \put(364.87,-238.29){\circle*{0.000001}} \put(365.57,-238.29){\circle*{0.000001}} \put(366.28,-237.59){\circle*{0.000001}} \put(366.99,-237.59){\circle*{0.000001}} \put(367.70,-236.88){\circle*{0.000001}} \put(368.40,-236.88){\circle*{0.000001}} \put(369.11,-236.17){\circle*{0.000001}} \put(369.82,-235.47){\circle*{0.000001}} \put(370.52,-235.47){\circle*{0.000001}} \put(371.23,-234.76){\circle*{0.000001}} \put(371.94,-234.76){\circle*{0.000001}} \put(372.65,-234.05){\circle*{0.000001}} \put(373.35,-234.05){\circle*{0.000001}} \put(374.06,-233.35){\circle*{0.000001}} \put(374.77,-233.35){\circle*{0.000001}} \put(375.47,-232.64){\circle*{0.000001}} \put(376.18,-231.93){\circle*{0.000001}} \put(376.89,-231.93){\circle*{0.000001}} \put(377.60,-231.22){\circle*{0.000001}} \put(378.30,-231.22){\circle*{0.000001}} \put(379.01,-230.52){\circle*{0.000001}} \put(379.72,-230.52){\circle*{0.000001}} \put(380.42,-229.81){\circle*{0.000001}} \put(381.13,-229.10){\circle*{0.000001}} \put(381.84,-229.10){\circle*{0.000001}} \put(382.54,-228.40){\circle*{0.000001}} \put(383.25,-228.40){\circle*{0.000001}} \put(383.96,-227.69){\circle*{0.000001}} \put(384.67,-227.69){\circle*{0.000001}} \put(385.37,-226.98){\circle*{0.000001}} \put(386.08,-226.98){\circle*{0.000001}} \put(386.79,-226.27){\circle*{0.000001}} \put(387.49,-225.57){\circle*{0.000001}} \put(388.20,-225.57){\circle*{0.000001}} \put(388.91,-224.86){\circle*{0.000001}} \put(389.62,-224.86){\circle*{0.000001}} \put(390.32,-224.15){\circle*{0.000001}} \put(391.03,-224.15){\circle*{0.000001}} \put(391.74,-223.45){\circle*{0.000001}} \put(391.74,-223.45){\circle*{0.000001}} \put(392.44,-222.74){\circle*{0.000001}} \put(393.15,-222.03){\circle*{0.000001}} \put(393.86,-221.32){\circle*{0.000001}} \put(394.57,-220.62){\circle*{0.000001}} \put(395.27,-220.62){\circle*{0.000001}} \put(395.98,-219.91){\circle*{0.000001}} \put(396.69,-219.20){\circle*{0.000001}} \put(397.39,-218.50){\circle*{0.000001}} \put(398.10,-217.79){\circle*{0.000001}} \put(398.81,-217.08){\circle*{0.000001}} \put(399.52,-216.37){\circle*{0.000001}} \put(400.22,-215.67){\circle*{0.000001}} \put(400.93,-214.96){\circle*{0.000001}} \put(401.64,-214.25){\circle*{0.000001}} \put(402.34,-214.25){\circle*{0.000001}} \put(403.05,-213.55){\circle*{0.000001}} \put(403.76,-212.84){\circle*{0.000001}} \put(404.47,-212.13){\circle*{0.000001}} \put(405.17,-211.42){\circle*{0.000001}} \put(405.88,-210.72){\circle*{0.000001}} \put(406.59,-210.01){\circle*{0.000001}} \put(407.29,-209.30){\circle*{0.000001}} \put(408.00,-208.60){\circle*{0.000001}} \put(408.71,-208.60){\circle*{0.000001}} \put(409.41,-207.89){\circle*{0.000001}} 
\put(410.12,-207.18){\circle*{0.000001}} \put(410.83,-206.48){\circle*{0.000001}} \put(411.54,-205.77){\circle*{0.000001}} \put(412.24,-205.06){\circle*{0.000001}} \put(412.95,-204.35){\circle*{0.000001}} \put(413.66,-203.65){\circle*{0.000001}} \put(414.36,-202.94){\circle*{0.000001}} \put(415.07,-202.94){\circle*{0.000001}} \put(415.78,-202.23){\circle*{0.000001}} \put(416.49,-201.53){\circle*{0.000001}} \put(417.19,-200.82){\circle*{0.000001}} \put(417.90,-200.11){\circle*{0.000001}} \put(418.61,-199.40){\circle*{0.000001}} \put(419.31,-198.70){\circle*{0.000001}} \put(420.02,-197.99){\circle*{0.000001}} \put(420.73,-197.28){\circle*{0.000001}} \put(421.44,-196.58){\circle*{0.000001}} \put(422.14,-196.58){\circle*{0.000001}} \put(422.85,-195.87){\circle*{0.000001}} \put(423.56,-195.16){\circle*{0.000001}} \put(424.26,-194.45){\circle*{0.000001}} \put(424.97,-193.75){\circle*{0.000001}} \put(384.67,-166.88){\circle*{0.000001}} \put(385.37,-167.58){\circle*{0.000001}} \put(386.08,-167.58){\circle*{0.000001}} \put(386.79,-168.29){\circle*{0.000001}} \put(387.49,-169.00){\circle*{0.000001}} \put(388.20,-169.00){\circle*{0.000001}} \put(388.91,-169.71){\circle*{0.000001}} \put(389.62,-170.41){\circle*{0.000001}} \put(390.32,-170.41){\circle*{0.000001}} \put(391.03,-171.12){\circle*{0.000001}} \put(391.74,-171.83){\circle*{0.000001}} \put(392.44,-171.83){\circle*{0.000001}} \put(393.15,-172.53){\circle*{0.000001}} \put(393.86,-173.24){\circle*{0.000001}} \put(394.57,-173.24){\circle*{0.000001}} \put(395.27,-173.95){\circle*{0.000001}} \put(395.98,-174.66){\circle*{0.000001}} \put(396.69,-174.66){\circle*{0.000001}} \put(397.39,-175.36){\circle*{0.000001}} \put(398.10,-176.07){\circle*{0.000001}} \put(398.81,-176.07){\circle*{0.000001}} \put(399.52,-176.78){\circle*{0.000001}} \put(400.22,-177.48){\circle*{0.000001}} \put(400.93,-177.48){\circle*{0.000001}} \put(401.64,-178.19){\circle*{0.000001}} \put(402.34,-178.90){\circle*{0.000001}} \put(403.05,-178.90){\circle*{0.000001}} \put(403.76,-179.61){\circle*{0.000001}} \put(404.47,-180.31){\circle*{0.000001}} \put(405.17,-180.31){\circle*{0.000001}} \put(405.88,-181.02){\circle*{0.000001}} \put(406.59,-181.73){\circle*{0.000001}} \put(407.29,-181.73){\circle*{0.000001}} \put(408.00,-182.43){\circle*{0.000001}} \put(408.71,-183.14){\circle*{0.000001}} \put(409.41,-183.14){\circle*{0.000001}} \put(410.12,-183.85){\circle*{0.000001}} \put(410.83,-184.55){\circle*{0.000001}} \put(411.54,-184.55){\circle*{0.000001}} \put(412.24,-185.26){\circle*{0.000001}} \put(412.95,-185.97){\circle*{0.000001}} \put(413.66,-185.97){\circle*{0.000001}} \put(414.36,-186.68){\circle*{0.000001}} \put(415.07,-187.38){\circle*{0.000001}} \put(415.78,-187.38){\circle*{0.000001}} \put(416.49,-188.09){\circle*{0.000001}} \put(417.19,-188.80){\circle*{0.000001}} \put(417.90,-188.80){\circle*{0.000001}} \put(418.61,-189.50){\circle*{0.000001}} \put(419.31,-190.21){\circle*{0.000001}} \put(420.02,-190.21){\circle*{0.000001}} \put(420.73,-190.92){\circle*{0.000001}} \put(421.44,-191.63){\circle*{0.000001}} \put(422.14,-191.63){\circle*{0.000001}} \put(422.85,-192.33){\circle*{0.000001}} \put(423.56,-193.04){\circle*{0.000001}} \put(424.26,-193.04){\circle*{0.000001}} \put(424.97,-193.75){\circle*{0.000001}} \put(384.67,-166.88){\circle*{0.000001}} \put(384.67,-166.17){\circle*{0.000001}} \put(383.96,-165.46){\circle*{0.000001}} \put(383.96,-164.76){\circle*{0.000001}} \put(383.96,-164.05){\circle*{0.000001}} \put(383.25,-163.34){\circle*{0.000001}} 
\put(383.25,-162.63){\circle*{0.000001}} \put(382.54,-161.93){\circle*{0.000001}} \put(382.54,-161.22){\circle*{0.000001}} \put(382.54,-160.51){\circle*{0.000001}} \put(381.84,-159.81){\circle*{0.000001}} \put(381.84,-159.10){\circle*{0.000001}} \put(381.84,-158.39){\circle*{0.000001}} \put(381.13,-157.68){\circle*{0.000001}} \put(381.13,-156.98){\circle*{0.000001}} \put(381.13,-156.27){\circle*{0.000001}} \put(380.42,-155.56){\circle*{0.000001}} \put(380.42,-154.86){\circle*{0.000001}} \put(379.72,-154.15){\circle*{0.000001}} \put(379.72,-153.44){\circle*{0.000001}} \put(379.72,-152.74){\circle*{0.000001}} \put(379.01,-152.03){\circle*{0.000001}} \put(379.01,-151.32){\circle*{0.000001}} \put(379.01,-150.61){\circle*{0.000001}} \put(378.30,-149.91){\circle*{0.000001}} \put(378.30,-149.20){\circle*{0.000001}} \put(378.30,-148.49){\circle*{0.000001}} \put(377.60,-147.79){\circle*{0.000001}} \put(377.60,-147.08){\circle*{0.000001}} \put(376.89,-146.37){\circle*{0.000001}} \put(376.89,-145.66){\circle*{0.000001}} \put(376.89,-144.96){\circle*{0.000001}} \put(376.18,-144.25){\circle*{0.000001}} \put(376.18,-143.54){\circle*{0.000001}} \put(376.18,-142.84){\circle*{0.000001}} \put(375.47,-142.13){\circle*{0.000001}} \put(375.47,-141.42){\circle*{0.000001}} \put(374.77,-140.71){\circle*{0.000001}} \put(374.77,-140.01){\circle*{0.000001}} \put(374.77,-139.30){\circle*{0.000001}} \put(374.06,-138.59){\circle*{0.000001}} \put(374.06,-137.89){\circle*{0.000001}} \put(374.06,-137.18){\circle*{0.000001}} \put(373.35,-136.47){\circle*{0.000001}} \put(373.35,-135.76){\circle*{0.000001}} \put(373.35,-135.06){\circle*{0.000001}} \put(372.65,-134.35){\circle*{0.000001}} \put(372.65,-133.64){\circle*{0.000001}} \put(371.94,-132.94){\circle*{0.000001}} \put(371.94,-132.23){\circle*{0.000001}} \put(371.94,-131.52){\circle*{0.000001}} \put(371.23,-130.81){\circle*{0.000001}} \put(371.23,-130.11){\circle*{0.000001}} \put(371.23,-129.40){\circle*{0.000001}} \put(370.52,-128.69){\circle*{0.000001}} \put(370.52,-127.99){\circle*{0.000001}} \put(370.52,-127.28){\circle*{0.000001}} \put(369.82,-126.57){\circle*{0.000001}} \put(369.82,-125.87){\circle*{0.000001}} \put(369.11,-125.16){\circle*{0.000001}} \put(369.11,-124.45){\circle*{0.000001}} \put(369.11,-123.74){\circle*{0.000001}} \put(368.40,-123.04){\circle*{0.000001}} \put(368.40,-122.33){\circle*{0.000001}} \put(368.40,-122.33){\circle*{0.000001}} \put(369.11,-122.33){\circle*{0.000001}} \put(369.82,-122.33){\circle*{0.000001}} \put(370.52,-122.33){\circle*{0.000001}} \put(371.23,-122.33){\circle*{0.000001}} \put(371.94,-122.33){\circle*{0.000001}} \put(372.65,-122.33){\circle*{0.000001}} \put(373.35,-123.04){\circle*{0.000001}} \put(374.06,-123.04){\circle*{0.000001}} \put(374.77,-123.04){\circle*{0.000001}} \put(375.47,-123.04){\circle*{0.000001}} \put(376.18,-123.04){\circle*{0.000001}} \put(376.89,-123.04){\circle*{0.000001}} \put(377.60,-123.04){\circle*{0.000001}} \put(378.30,-123.04){\circle*{0.000001}} \put(379.01,-123.04){\circle*{0.000001}} \put(379.72,-123.04){\circle*{0.000001}} \put(380.42,-123.04){\circle*{0.000001}} \put(381.13,-123.04){\circle*{0.000001}} \put(381.84,-123.04){\circle*{0.000001}} \put(382.54,-123.74){\circle*{0.000001}} \put(383.25,-123.74){\circle*{0.000001}} \put(383.96,-123.74){\circle*{0.000001}} \put(384.67,-123.74){\circle*{0.000001}} \put(385.37,-123.74){\circle*{0.000001}} \put(386.08,-123.74){\circle*{0.000001}} \put(386.79,-123.74){\circle*{0.000001}} \put(387.49,-123.74){\circle*{0.000001}} 
\put(388.20,-123.74){\circle*{0.000001}} \put(388.91,-123.74){\circle*{0.000001}} \put(389.62,-123.74){\circle*{0.000001}} \put(390.32,-123.74){\circle*{0.000001}} \put(391.03,-123.74){\circle*{0.000001}} \put(391.74,-124.45){\circle*{0.000001}} \put(392.44,-124.45){\circle*{0.000001}} \put(393.15,-124.45){\circle*{0.000001}} \put(393.86,-124.45){\circle*{0.000001}} \put(394.57,-124.45){\circle*{0.000001}} \put(395.27,-124.45){\circle*{0.000001}} \put(395.98,-124.45){\circle*{0.000001}} \put(396.69,-124.45){\circle*{0.000001}} \put(397.39,-124.45){\circle*{0.000001}} \put(398.10,-124.45){\circle*{0.000001}} \put(398.81,-124.45){\circle*{0.000001}} \put(399.52,-124.45){\circle*{0.000001}} \put(400.22,-125.16){\circle*{0.000001}} \put(400.93,-125.16){\circle*{0.000001}} \put(401.64,-125.16){\circle*{0.000001}} \put(402.34,-125.16){\circle*{0.000001}} \put(403.05,-125.16){\circle*{0.000001}} \put(403.76,-125.16){\circle*{0.000001}} \put(404.47,-125.16){\circle*{0.000001}} \put(405.17,-125.16){\circle*{0.000001}} \put(405.88,-125.16){\circle*{0.000001}} \put(406.59,-125.16){\circle*{0.000001}} \put(407.29,-125.16){\circle*{0.000001}} \put(408.00,-125.16){\circle*{0.000001}} \put(408.71,-125.16){\circle*{0.000001}} \put(409.41,-125.87){\circle*{0.000001}} \put(410.12,-125.87){\circle*{0.000001}} \put(410.83,-125.87){\circle*{0.000001}} \put(411.54,-125.87){\circle*{0.000001}} \put(412.24,-125.87){\circle*{0.000001}} \put(412.95,-125.87){\circle*{0.000001}} \put(413.66,-125.87){\circle*{0.000001}} \put(413.66,-125.87){\circle*{0.000001}} \put(414.36,-125.16){\circle*{0.000001}} \put(414.36,-124.45){\circle*{0.000001}} \put(415.07,-123.74){\circle*{0.000001}} \put(415.78,-123.04){\circle*{0.000001}} \put(416.49,-122.33){\circle*{0.000001}} \put(416.49,-121.62){\circle*{0.000001}} \put(417.19,-120.92){\circle*{0.000001}} \put(417.90,-120.21){\circle*{0.000001}} \put(418.61,-119.50){\circle*{0.000001}} \put(418.61,-118.79){\circle*{0.000001}} \put(419.31,-118.09){\circle*{0.000001}} \put(420.02,-117.38){\circle*{0.000001}} \put(420.73,-116.67){\circle*{0.000001}} \put(420.73,-115.97){\circle*{0.000001}} \put(421.44,-115.26){\circle*{0.000001}} \put(422.14,-114.55){\circle*{0.000001}} \put(422.85,-113.84){\circle*{0.000001}} \put(422.85,-113.14){\circle*{0.000001}} \put(423.56,-112.43){\circle*{0.000001}} \put(424.26,-111.72){\circle*{0.000001}} \put(424.97,-111.02){\circle*{0.000001}} \put(424.97,-110.31){\circle*{0.000001}} \put(425.68,-109.60){\circle*{0.000001}} \put(426.39,-108.89){\circle*{0.000001}} \put(426.39,-108.19){\circle*{0.000001}} \put(427.09,-107.48){\circle*{0.000001}} \put(427.80,-106.77){\circle*{0.000001}} \put(428.51,-106.07){\circle*{0.000001}} \put(428.51,-105.36){\circle*{0.000001}} \put(429.21,-104.65){\circle*{0.000001}} \put(429.92,-103.94){\circle*{0.000001}} \put(430.63,-103.24){\circle*{0.000001}} \put(430.63,-102.53){\circle*{0.000001}} \put(431.34,-101.82){\circle*{0.000001}} \put(432.04,-101.12){\circle*{0.000001}} \put(432.75,-100.41){\circle*{0.000001}} \put(432.75,-99.70){\circle*{0.000001}} \put(433.46,-98.99){\circle*{0.000001}} \put(434.16,-98.29){\circle*{0.000001}} \put(434.87,-97.58){\circle*{0.000001}} \put(434.87,-96.87){\circle*{0.000001}} \put(435.58,-96.17){\circle*{0.000001}} \put(436.28,-95.46){\circle*{0.000001}} \put(436.99,-94.75){\circle*{0.000001}} \put(436.99,-94.05){\circle*{0.000001}} \put(437.70,-93.34){\circle*{0.000001}} \put(438.41,-92.63){\circle*{0.000001}} \put(439.11,-91.92){\circle*{0.000001}} \put(439.11,-91.22){\circle*{0.000001}} 
\put(439.82,-90.51){\circle*{0.000001}} \put(404.47,-58.69){\circle*{0.000001}} \put(405.17,-59.40){\circle*{0.000001}} \put(405.88,-60.10){\circle*{0.000001}} \put(406.59,-60.81){\circle*{0.000001}} \put(407.29,-61.52){\circle*{0.000001}} \put(408.00,-61.52){\circle*{0.000001}} \put(408.71,-62.23){\circle*{0.000001}} \put(409.41,-62.93){\circle*{0.000001}} \put(410.12,-63.64){\circle*{0.000001}} \put(410.83,-64.35){\circle*{0.000001}} \put(411.54,-65.05){\circle*{0.000001}} \put(412.24,-65.76){\circle*{0.000001}} \put(412.95,-66.47){\circle*{0.000001}} \put(413.66,-67.18){\circle*{0.000001}} \put(414.36,-67.88){\circle*{0.000001}} \put(415.07,-67.88){\circle*{0.000001}} \put(415.78,-68.59){\circle*{0.000001}} \put(416.49,-69.30){\circle*{0.000001}} \put(417.19,-70.00){\circle*{0.000001}} \put(417.90,-70.71){\circle*{0.000001}} \put(418.61,-71.42){\circle*{0.000001}} \put(419.31,-72.12){\circle*{0.000001}} \put(420.02,-72.83){\circle*{0.000001}} \put(420.73,-73.54){\circle*{0.000001}} \put(421.44,-74.25){\circle*{0.000001}} \put(422.14,-74.25){\circle*{0.000001}} \put(422.85,-74.95){\circle*{0.000001}} \put(423.56,-75.66){\circle*{0.000001}} \put(424.26,-76.37){\circle*{0.000001}} \put(424.97,-77.07){\circle*{0.000001}} \put(425.68,-77.78){\circle*{0.000001}} \put(426.39,-78.49){\circle*{0.000001}} \put(427.09,-79.20){\circle*{0.000001}} \put(427.80,-79.90){\circle*{0.000001}} \put(428.51,-80.61){\circle*{0.000001}} \put(429.21,-80.61){\circle*{0.000001}} \put(429.92,-81.32){\circle*{0.000001}} \put(430.63,-82.02){\circle*{0.000001}} \put(431.34,-82.73){\circle*{0.000001}} \put(432.04,-83.44){\circle*{0.000001}} \put(432.75,-84.15){\circle*{0.000001}} \put(433.46,-84.85){\circle*{0.000001}} \put(434.16,-85.56){\circle*{0.000001}} \put(434.87,-86.27){\circle*{0.000001}} \put(435.58,-86.97){\circle*{0.000001}} \put(436.28,-86.97){\circle*{0.000001}} \put(436.99,-87.68){\circle*{0.000001}} \put(437.70,-88.39){\circle*{0.000001}} \put(438.41,-89.10){\circle*{0.000001}} \put(439.11,-89.80){\circle*{0.000001}} \put(439.82,-90.51){\circle*{0.000001}} \put(404.47,-58.69){\circle*{0.000001}} \put(404.47,-57.98){\circle*{0.000001}} \put(405.17,-57.28){\circle*{0.000001}} \put(405.17,-56.57){\circle*{0.000001}} \put(405.88,-55.86){\circle*{0.000001}} \put(405.88,-55.15){\circle*{0.000001}} \put(406.59,-54.45){\circle*{0.000001}} \put(406.59,-53.74){\circle*{0.000001}} \put(407.29,-53.03){\circle*{0.000001}} \put(407.29,-52.33){\circle*{0.000001}} \put(408.00,-51.62){\circle*{0.000001}} \put(408.00,-50.91){\circle*{0.000001}} \put(408.71,-50.20){\circle*{0.000001}} \put(408.71,-49.50){\circle*{0.000001}} \put(409.41,-48.79){\circle*{0.000001}} \put(409.41,-48.08){\circle*{0.000001}} \put(410.12,-47.38){\circle*{0.000001}} \put(410.12,-46.67){\circle*{0.000001}} \put(410.83,-45.96){\circle*{0.000001}} \put(410.83,-45.25){\circle*{0.000001}} \put(411.54,-44.55){\circle*{0.000001}} \put(411.54,-43.84){\circle*{0.000001}} \put(412.24,-43.13){\circle*{0.000001}} \put(412.24,-42.43){\circle*{0.000001}} \put(412.95,-41.72){\circle*{0.000001}} \put(412.95,-41.01){\circle*{0.000001}} \put(413.66,-40.31){\circle*{0.000001}} \put(413.66,-39.60){\circle*{0.000001}} \put(414.36,-38.89){\circle*{0.000001}} \put(414.36,-38.18){\circle*{0.000001}} \put(415.07,-37.48){\circle*{0.000001}} \put(415.07,-36.77){\circle*{0.000001}} \put(415.78,-36.06){\circle*{0.000001}} \put(415.78,-35.36){\circle*{0.000001}} \put(416.49,-34.65){\circle*{0.000001}} \put(416.49,-33.94){\circle*{0.000001}} 
\put(417.19,-33.23){\circle*{0.000001}} \put(417.19,-32.53){\circle*{0.000001}} \put(417.90,-31.82){\circle*{0.000001}} \put(417.90,-31.11){\circle*{0.000001}} \put(418.61,-30.41){\circle*{0.000001}} \put(418.61,-29.70){\circle*{0.000001}} \put(419.31,-28.99){\circle*{0.000001}} \put(419.31,-28.28){\circle*{0.000001}} \put(420.02,-27.58){\circle*{0.000001}} \put(420.02,-26.87){\circle*{0.000001}} \put(420.73,-26.16){\circle*{0.000001}} \put(420.73,-25.46){\circle*{0.000001}} \put(421.44,-24.75){\circle*{0.000001}} \put(421.44,-24.04){\circle*{0.000001}} \put(422.14,-23.33){\circle*{0.000001}} \put(422.14,-22.63){\circle*{0.000001}} \put(422.85,-21.92){\circle*{0.000001}} \put(422.85,-21.21){\circle*{0.000001}} \put(423.56,-20.51){\circle*{0.000001}} \put(423.56,-19.80){\circle*{0.000001}} \put(424.26,-19.09){\circle*{0.000001}} \put(424.26,-18.38){\circle*{0.000001}} \put(424.97,-17.68){\circle*{0.000001}} \put(424.97,-16.97){\circle*{0.000001}} \put(392.44,-47.38){\circle*{0.000001}} \put(393.15,-46.67){\circle*{0.000001}} \put(393.86,-45.96){\circle*{0.000001}} \put(394.57,-45.25){\circle*{0.000001}} \put(395.27,-44.55){\circle*{0.000001}} \put(395.98,-43.84){\circle*{0.000001}} \put(396.69,-43.13){\circle*{0.000001}} \put(397.39,-42.43){\circle*{0.000001}} \put(398.10,-42.43){\circle*{0.000001}} \put(398.81,-41.72){\circle*{0.000001}} \put(399.52,-41.01){\circle*{0.000001}} \put(400.22,-40.31){\circle*{0.000001}} \put(400.93,-39.60){\circle*{0.000001}} \put(401.64,-38.89){\circle*{0.000001}} \put(402.34,-38.18){\circle*{0.000001}} \put(403.05,-37.48){\circle*{0.000001}} \put(403.76,-36.77){\circle*{0.000001}} \put(404.47,-36.06){\circle*{0.000001}} \put(405.17,-35.36){\circle*{0.000001}} \put(405.88,-34.65){\circle*{0.000001}} \put(406.59,-33.94){\circle*{0.000001}} \put(407.29,-33.23){\circle*{0.000001}} \put(408.00,-32.53){\circle*{0.000001}} \put(408.71,-32.53){\circle*{0.000001}} \put(409.41,-31.82){\circle*{0.000001}} \put(410.12,-31.11){\circle*{0.000001}} \put(410.83,-30.41){\circle*{0.000001}} \put(411.54,-29.70){\circle*{0.000001}} \put(412.24,-28.99){\circle*{0.000001}} \put(412.95,-28.28){\circle*{0.000001}} \put(413.66,-27.58){\circle*{0.000001}} \put(414.36,-26.87){\circle*{0.000001}} \put(415.07,-26.16){\circle*{0.000001}} \put(415.78,-25.46){\circle*{0.000001}} \put(416.49,-24.75){\circle*{0.000001}} \put(417.19,-24.04){\circle*{0.000001}} \put(417.90,-23.33){\circle*{0.000001}} \put(418.61,-22.63){\circle*{0.000001}} \put(419.31,-21.92){\circle*{0.000001}} \put(420.02,-21.92){\circle*{0.000001}} \put(420.73,-21.21){\circle*{0.000001}} \put(421.44,-20.51){\circle*{0.000001}} \put(422.14,-19.80){\circle*{0.000001}} \put(422.85,-19.09){\circle*{0.000001}} \put(423.56,-18.38){\circle*{0.000001}} \put(424.26,-17.68){\circle*{0.000001}} \put(424.97,-16.97){\circle*{0.000001}} \put(392.44,-47.38){\circle*{0.000001}} \put(393.15,-47.38){\circle*{0.000001}} \put(393.86,-46.67){\circle*{0.000001}} \put(394.57,-46.67){\circle*{0.000001}} \put(395.27,-46.67){\circle*{0.000001}} \put(395.98,-46.67){\circle*{0.000001}} \put(396.69,-45.96){\circle*{0.000001}} \put(397.39,-45.96){\circle*{0.000001}} \put(398.10,-45.96){\circle*{0.000001}} \put(398.81,-45.25){\circle*{0.000001}} \put(399.52,-45.25){\circle*{0.000001}} \put(400.22,-45.25){\circle*{0.000001}} \put(400.93,-44.55){\circle*{0.000001}} \put(401.64,-44.55){\circle*{0.000001}} \put(402.34,-44.55){\circle*{0.000001}} \put(403.05,-44.55){\circle*{0.000001}} \put(403.76,-43.84){\circle*{0.000001}} 
\put(404.47,-43.84){\circle*{0.000001}} \put(405.17,-43.84){\circle*{0.000001}} \put(405.88,-43.13){\circle*{0.000001}} \put(406.59,-43.13){\circle*{0.000001}} \put(407.29,-43.13){\circle*{0.000001}} \put(408.00,-43.13){\circle*{0.000001}} \put(408.71,-42.43){\circle*{0.000001}} \put(409.41,-42.43){\circle*{0.000001}} \put(410.12,-42.43){\circle*{0.000001}} \put(410.83,-41.72){\circle*{0.000001}} \put(411.54,-41.72){\circle*{0.000001}} \put(412.24,-41.72){\circle*{0.000001}} \put(412.95,-41.72){\circle*{0.000001}} \put(413.66,-41.01){\circle*{0.000001}} \put(414.36,-41.01){\circle*{0.000001}} \put(415.07,-41.01){\circle*{0.000001}} \put(415.78,-40.31){\circle*{0.000001}} \put(416.49,-40.31){\circle*{0.000001}} \put(417.19,-40.31){\circle*{0.000001}} \put(417.90,-39.60){\circle*{0.000001}} \put(418.61,-39.60){\circle*{0.000001}} \put(419.31,-39.60){\circle*{0.000001}} \put(420.02,-39.60){\circle*{0.000001}} \put(420.73,-38.89){\circle*{0.000001}} \put(421.44,-38.89){\circle*{0.000001}} \put(422.14,-38.89){\circle*{0.000001}} \put(422.85,-38.18){\circle*{0.000001}} \put(423.56,-38.18){\circle*{0.000001}} \put(424.26,-38.18){\circle*{0.000001}} \put(424.97,-38.18){\circle*{0.000001}} \put(425.68,-37.48){\circle*{0.000001}} \put(426.39,-37.48){\circle*{0.000001}} \put(427.09,-37.48){\circle*{0.000001}} \put(427.80,-36.77){\circle*{0.000001}} \put(428.51,-36.77){\circle*{0.000001}} \put(429.21,-36.77){\circle*{0.000001}} \put(429.92,-36.77){\circle*{0.000001}} \put(430.63,-36.06){\circle*{0.000001}} \put(431.34,-36.06){\circle*{0.000001}} \put(432.04,-36.06){\circle*{0.000001}} \put(432.75,-35.36){\circle*{0.000001}} \put(433.46,-35.36){\circle*{0.000001}} \put(434.16,-35.36){\circle*{0.000001}} \put(434.87,-34.65){\circle*{0.000001}} \put(435.58,-34.65){\circle*{0.000001}} \put(436.28,-34.65){\circle*{0.000001}} \put(436.99,-34.65){\circle*{0.000001}} \put(437.70,-33.94){\circle*{0.000001}} \put(438.41,-33.94){\circle*{0.000001}} \put(438.41,-33.94){\circle*{0.000001}} \put(439.11,-33.23){\circle*{0.000001}} \put(439.11,-32.53){\circle*{0.000001}} \put(439.82,-31.82){\circle*{0.000001}} \put(440.53,-31.11){\circle*{0.000001}} \put(440.53,-30.41){\circle*{0.000001}} \put(441.23,-29.70){\circle*{0.000001}} \put(441.94,-28.99){\circle*{0.000001}} \put(442.65,-28.28){\circle*{0.000001}} \put(442.65,-27.58){\circle*{0.000001}} \put(443.36,-26.87){\circle*{0.000001}} \put(444.06,-26.16){\circle*{0.000001}} \put(444.06,-25.46){\circle*{0.000001}} \put(444.77,-24.75){\circle*{0.000001}} \put(445.48,-24.04){\circle*{0.000001}} \put(445.48,-23.33){\circle*{0.000001}} \put(446.18,-22.63){\circle*{0.000001}} \put(446.89,-21.92){\circle*{0.000001}} \put(447.60,-21.21){\circle*{0.000001}} \put(447.60,-20.51){\circle*{0.000001}} \put(448.31,-19.80){\circle*{0.000001}} \put(449.01,-19.09){\circle*{0.000001}} \put(449.01,-18.38){\circle*{0.000001}} \put(449.72,-17.68){\circle*{0.000001}} \put(450.43,-16.97){\circle*{0.000001}} \put(450.43,-16.26){\circle*{0.000001}} \put(451.13,-15.56){\circle*{0.000001}} \put(451.84,-14.85){\circle*{0.000001}} \put(452.55,-14.14){\circle*{0.000001}} \put(452.55,-13.44){\circle*{0.000001}} \put(453.26,-12.73){\circle*{0.000001}} \put(453.96,-12.02){\circle*{0.000001}} \put(453.96,-11.31){\circle*{0.000001}} \put(454.67,-10.61){\circle*{0.000001}} \put(455.38,-9.90){\circle*{0.000001}} \put(455.38,-9.19){\circle*{0.000001}} \put(456.08,-8.49){\circle*{0.000001}} \put(456.79,-7.78){\circle*{0.000001}} \put(457.50,-7.07){\circle*{0.000001}} \put(457.50,-6.36){\circle*{0.000001}} 
\put(458.21,-5.66){\circle*{0.000001}} \put(458.91,-4.95){\circle*{0.000001}} \put(458.91,-4.24){\circle*{0.000001}} \put(459.62,-3.54){\circle*{0.000001}} \put(460.33,-2.83){\circle*{0.000001}} \put(460.33,-2.12){\circle*{0.000001}} \put(461.03,-1.41){\circle*{0.000001}} \put(461.74,-0.71){\circle*{0.000001}} \put(462.45, 0.00){\circle*{0.000001}} \put(462.45, 0.71){\circle*{0.000001}} \put(463.15, 1.41){\circle*{0.000001}} \put(463.86, 2.12){\circle*{0.000001}} \put(463.86, 2.83){\circle*{0.000001}} \put(464.57, 3.54){\circle*{0.000001}} \put(422.85,-14.85){\circle*{0.000001}} \put(423.56,-14.85){\circle*{0.000001}} \put(424.26,-14.14){\circle*{0.000001}} \put(424.97,-14.14){\circle*{0.000001}} \put(425.68,-13.44){\circle*{0.000001}} \put(426.39,-13.44){\circle*{0.000001}} \put(427.09,-12.73){\circle*{0.000001}} \put(427.80,-12.73){\circle*{0.000001}} \put(428.51,-12.02){\circle*{0.000001}} \put(429.21,-12.02){\circle*{0.000001}} \put(429.92,-12.02){\circle*{0.000001}} \put(430.63,-11.31){\circle*{0.000001}} \put(431.34,-11.31){\circle*{0.000001}} \put(432.04,-10.61){\circle*{0.000001}} \put(432.75,-10.61){\circle*{0.000001}} \put(433.46,-9.90){\circle*{0.000001}} \put(434.16,-9.90){\circle*{0.000001}} \put(434.87,-9.90){\circle*{0.000001}} \put(435.58,-9.19){\circle*{0.000001}} \put(436.28,-9.19){\circle*{0.000001}} \put(436.99,-8.49){\circle*{0.000001}} \put(437.70,-8.49){\circle*{0.000001}} \put(438.41,-7.78){\circle*{0.000001}} \put(439.11,-7.78){\circle*{0.000001}} \put(439.82,-7.07){\circle*{0.000001}} \put(440.53,-7.07){\circle*{0.000001}} \put(441.23,-7.07){\circle*{0.000001}} \put(441.94,-6.36){\circle*{0.000001}} \put(442.65,-6.36){\circle*{0.000001}} \put(443.36,-5.66){\circle*{0.000001}} \put(444.06,-5.66){\circle*{0.000001}} \put(444.77,-4.95){\circle*{0.000001}} \put(445.48,-4.95){\circle*{0.000001}} \put(446.18,-4.24){\circle*{0.000001}} \put(446.89,-4.24){\circle*{0.000001}} \put(447.60,-4.24){\circle*{0.000001}} \put(448.31,-3.54){\circle*{0.000001}} \put(449.01,-3.54){\circle*{0.000001}} \put(449.72,-2.83){\circle*{0.000001}} \put(450.43,-2.83){\circle*{0.000001}} \put(451.13,-2.12){\circle*{0.000001}} \put(451.84,-2.12){\circle*{0.000001}} \put(452.55,-1.41){\circle*{0.000001}} \put(453.26,-1.41){\circle*{0.000001}} \put(453.96,-1.41){\circle*{0.000001}} \put(454.67,-0.71){\circle*{0.000001}} \put(455.38,-0.71){\circle*{0.000001}} \put(456.08, 0.00){\circle*{0.000001}} \put(456.79, 0.00){\circle*{0.000001}} \put(457.50, 0.71){\circle*{0.000001}} \put(458.21, 0.71){\circle*{0.000001}} \put(458.91, 0.71){\circle*{0.000001}} \put(459.62, 1.41){\circle*{0.000001}} \put(460.33, 1.41){\circle*{0.000001}} \put(461.03, 2.12){\circle*{0.000001}} \put(461.74, 2.12){\circle*{0.000001}} \put(462.45, 2.83){\circle*{0.000001}} \put(463.15, 2.83){\circle*{0.000001}} \put(463.86, 3.54){\circle*{0.000001}} \put(464.57, 3.54){\circle*{0.000001}} \put(441.94,-56.57){\circle*{0.000001}} \put(441.94,-55.86){\circle*{0.000001}} \put(441.23,-55.15){\circle*{0.000001}} \put(441.23,-54.45){\circle*{0.000001}} \put(440.53,-53.74){\circle*{0.000001}} \put(440.53,-53.03){\circle*{0.000001}} \put(439.82,-52.33){\circle*{0.000001}} \put(439.82,-51.62){\circle*{0.000001}} \put(439.11,-50.91){\circle*{0.000001}} \put(439.11,-50.20){\circle*{0.000001}} \put(438.41,-49.50){\circle*{0.000001}} \put(438.41,-48.79){\circle*{0.000001}} \put(438.41,-48.08){\circle*{0.000001}} \put(437.70,-47.38){\circle*{0.000001}} \put(437.70,-46.67){\circle*{0.000001}} \put(436.99,-45.96){\circle*{0.000001}} 
\put(436.99,-45.25){\circle*{0.000001}} \put(436.28,-44.55){\circle*{0.000001}} \put(436.28,-43.84){\circle*{0.000001}} \put(435.58,-43.13){\circle*{0.000001}} \put(435.58,-42.43){\circle*{0.000001}} \put(434.87,-41.72){\circle*{0.000001}} \put(434.87,-41.01){\circle*{0.000001}} \put(434.16,-40.31){\circle*{0.000001}} \put(434.16,-39.60){\circle*{0.000001}} \put(434.16,-38.89){\circle*{0.000001}} \put(433.46,-38.18){\circle*{0.000001}} \put(433.46,-37.48){\circle*{0.000001}} \put(432.75,-36.77){\circle*{0.000001}} \put(432.75,-36.06){\circle*{0.000001}} \put(432.04,-35.36){\circle*{0.000001}} \put(432.04,-34.65){\circle*{0.000001}} \put(431.34,-33.94){\circle*{0.000001}} \put(431.34,-33.23){\circle*{0.000001}} \put(430.63,-32.53){\circle*{0.000001}} \put(430.63,-31.82){\circle*{0.000001}} \put(430.63,-31.11){\circle*{0.000001}} \put(429.92,-30.41){\circle*{0.000001}} \put(429.92,-29.70){\circle*{0.000001}} \put(429.21,-28.99){\circle*{0.000001}} \put(429.21,-28.28){\circle*{0.000001}} \put(428.51,-27.58){\circle*{0.000001}} \put(428.51,-26.87){\circle*{0.000001}} \put(427.80,-26.16){\circle*{0.000001}} \put(427.80,-25.46){\circle*{0.000001}} \put(427.09,-24.75){\circle*{0.000001}} \put(427.09,-24.04){\circle*{0.000001}} \put(426.39,-23.33){\circle*{0.000001}} \put(426.39,-22.63){\circle*{0.000001}} \put(426.39,-21.92){\circle*{0.000001}} \put(425.68,-21.21){\circle*{0.000001}} \put(425.68,-20.51){\circle*{0.000001}} \put(424.97,-19.80){\circle*{0.000001}} \put(424.97,-19.09){\circle*{0.000001}} \put(424.26,-18.38){\circle*{0.000001}} \put(424.26,-17.68){\circle*{0.000001}} \put(423.56,-16.97){\circle*{0.000001}} \put(423.56,-16.26){\circle*{0.000001}} \put(422.85,-15.56){\circle*{0.000001}} \put(422.85,-14.85){\circle*{0.000001}} \put(399.52,-73.54){\circle*{0.000001}} \put(400.22,-73.54){\circle*{0.000001}} \put(400.93,-72.83){\circle*{0.000001}} \put(401.64,-72.83){\circle*{0.000001}} \put(402.34,-72.12){\circle*{0.000001}} \put(403.05,-72.12){\circle*{0.000001}} \put(403.76,-72.12){\circle*{0.000001}} \put(404.47,-71.42){\circle*{0.000001}} \put(405.17,-71.42){\circle*{0.000001}} \put(405.88,-70.71){\circle*{0.000001}} \put(406.59,-70.71){\circle*{0.000001}} \put(407.29,-70.71){\circle*{0.000001}} \put(408.00,-70.00){\circle*{0.000001}} \put(408.71,-70.00){\circle*{0.000001}} \put(409.41,-69.30){\circle*{0.000001}} \put(410.12,-69.30){\circle*{0.000001}} \put(410.83,-69.30){\circle*{0.000001}} \put(411.54,-68.59){\circle*{0.000001}} \put(412.24,-68.59){\circle*{0.000001}} \put(412.95,-67.88){\circle*{0.000001}} \put(413.66,-67.88){\circle*{0.000001}} \put(414.36,-67.88){\circle*{0.000001}} \put(415.07,-67.18){\circle*{0.000001}} \put(415.78,-67.18){\circle*{0.000001}} \put(416.49,-66.47){\circle*{0.000001}} \put(417.19,-66.47){\circle*{0.000001}} \put(417.90,-66.47){\circle*{0.000001}} \put(418.61,-65.76){\circle*{0.000001}} \put(419.31,-65.76){\circle*{0.000001}} \put(420.02,-65.05){\circle*{0.000001}} \put(420.73,-65.05){\circle*{0.000001}} \put(421.44,-65.05){\circle*{0.000001}} \put(422.14,-64.35){\circle*{0.000001}} \put(422.85,-64.35){\circle*{0.000001}} \put(423.56,-63.64){\circle*{0.000001}} \put(424.26,-63.64){\circle*{0.000001}} \put(424.97,-63.64){\circle*{0.000001}} \put(425.68,-62.93){\circle*{0.000001}} \put(426.39,-62.93){\circle*{0.000001}} \put(427.09,-62.23){\circle*{0.000001}} \put(427.80,-62.23){\circle*{0.000001}} \put(428.51,-62.23){\circle*{0.000001}} \put(429.21,-61.52){\circle*{0.000001}} \put(429.92,-61.52){\circle*{0.000001}} 
\put(430.63,-60.81){\circle*{0.000001}} \put(431.34,-60.81){\circle*{0.000001}} \put(432.04,-60.81){\circle*{0.000001}} \put(432.75,-60.10){\circle*{0.000001}} \put(433.46,-60.10){\circle*{0.000001}} \put(434.16,-59.40){\circle*{0.000001}} \put(434.87,-59.40){\circle*{0.000001}} \put(435.58,-59.40){\circle*{0.000001}} \put(436.28,-58.69){\circle*{0.000001}} \put(436.99,-58.69){\circle*{0.000001}} \put(437.70,-57.98){\circle*{0.000001}} \put(438.41,-57.98){\circle*{0.000001}} \put(439.11,-57.98){\circle*{0.000001}} \put(439.82,-57.28){\circle*{0.000001}} \put(440.53,-57.28){\circle*{0.000001}} \put(441.23,-56.57){\circle*{0.000001}} \put(441.94,-56.57){\circle*{0.000001}} \put(405.88,-119.50){\circle*{0.000001}} \put(405.88,-118.79){\circle*{0.000001}} \put(405.88,-118.09){\circle*{0.000001}} \put(405.88,-117.38){\circle*{0.000001}} \put(405.17,-116.67){\circle*{0.000001}} \put(405.17,-115.97){\circle*{0.000001}} \put(405.17,-115.26){\circle*{0.000001}} \put(405.17,-114.55){\circle*{0.000001}} \put(405.17,-113.84){\circle*{0.000001}} \put(405.17,-113.14){\circle*{0.000001}} \put(405.17,-112.43){\circle*{0.000001}} \put(404.47,-111.72){\circle*{0.000001}} \put(404.47,-111.02){\circle*{0.000001}} \put(404.47,-110.31){\circle*{0.000001}} \put(404.47,-109.60){\circle*{0.000001}} \put(404.47,-108.89){\circle*{0.000001}} \put(404.47,-108.19){\circle*{0.000001}} \put(404.47,-107.48){\circle*{0.000001}} \put(404.47,-106.77){\circle*{0.000001}} \put(403.76,-106.07){\circle*{0.000001}} \put(403.76,-105.36){\circle*{0.000001}} \put(403.76,-104.65){\circle*{0.000001}} \put(403.76,-103.94){\circle*{0.000001}} \put(403.76,-103.24){\circle*{0.000001}} \put(403.76,-102.53){\circle*{0.000001}} \put(403.76,-101.82){\circle*{0.000001}} \put(403.05,-101.12){\circle*{0.000001}} \put(403.05,-100.41){\circle*{0.000001}} \put(403.05,-99.70){\circle*{0.000001}} \put(403.05,-98.99){\circle*{0.000001}} \put(403.05,-98.29){\circle*{0.000001}} \put(403.05,-97.58){\circle*{0.000001}} \put(403.05,-96.87){\circle*{0.000001}} \put(402.34,-96.17){\circle*{0.000001}} \put(402.34,-95.46){\circle*{0.000001}} \put(402.34,-94.75){\circle*{0.000001}} \put(402.34,-94.05){\circle*{0.000001}} \put(402.34,-93.34){\circle*{0.000001}} \put(402.34,-92.63){\circle*{0.000001}} \put(402.34,-91.92){\circle*{0.000001}} \put(401.64,-91.22){\circle*{0.000001}} \put(401.64,-90.51){\circle*{0.000001}} \put(401.64,-89.80){\circle*{0.000001}} \put(401.64,-89.10){\circle*{0.000001}} \put(401.64,-88.39){\circle*{0.000001}} \put(401.64,-87.68){\circle*{0.000001}} \put(401.64,-86.97){\circle*{0.000001}} \put(400.93,-86.27){\circle*{0.000001}} \put(400.93,-85.56){\circle*{0.000001}} \put(400.93,-84.85){\circle*{0.000001}} \put(400.93,-84.15){\circle*{0.000001}} \put(400.93,-83.44){\circle*{0.000001}} \put(400.93,-82.73){\circle*{0.000001}} \put(400.93,-82.02){\circle*{0.000001}} \put(400.93,-81.32){\circle*{0.000001}} \put(400.22,-80.61){\circle*{0.000001}} \put(400.22,-79.90){\circle*{0.000001}} \put(400.22,-79.20){\circle*{0.000001}} \put(400.22,-78.49){\circle*{0.000001}} \put(400.22,-77.78){\circle*{0.000001}} \put(400.22,-77.07){\circle*{0.000001}} \put(400.22,-76.37){\circle*{0.000001}} \put(399.52,-75.66){\circle*{0.000001}} \put(399.52,-74.95){\circle*{0.000001}} \put(399.52,-74.25){\circle*{0.000001}} \put(399.52,-73.54){\circle*{0.000001}} \put(359.92,-110.31){\circle*{0.000001}} \put(360.62,-110.31){\circle*{0.000001}} \put(361.33,-110.31){\circle*{0.000001}} \put(362.04,-111.02){\circle*{0.000001}} \put(362.75,-111.02){\circle*{0.000001}} 
\put(363.45,-111.02){\circle*{0.000001}} \put(364.16,-111.02){\circle*{0.000001}} \put(364.87,-111.02){\circle*{0.000001}} \put(365.57,-111.72){\circle*{0.000001}} \put(366.28,-111.72){\circle*{0.000001}} \put(366.99,-111.72){\circle*{0.000001}} \put(367.70,-111.72){\circle*{0.000001}} \put(368.40,-111.72){\circle*{0.000001}} \put(369.11,-112.43){\circle*{0.000001}} \put(369.82,-112.43){\circle*{0.000001}} \put(370.52,-112.43){\circle*{0.000001}} \put(371.23,-112.43){\circle*{0.000001}} \put(371.94,-112.43){\circle*{0.000001}} \put(372.65,-113.14){\circle*{0.000001}} \put(373.35,-113.14){\circle*{0.000001}} \put(374.06,-113.14){\circle*{0.000001}} \put(374.77,-113.14){\circle*{0.000001}} \put(375.47,-113.14){\circle*{0.000001}} \put(376.18,-113.84){\circle*{0.000001}} \put(376.89,-113.84){\circle*{0.000001}} \put(377.60,-113.84){\circle*{0.000001}} \put(378.30,-113.84){\circle*{0.000001}} \put(379.01,-113.84){\circle*{0.000001}} \put(379.72,-114.55){\circle*{0.000001}} \put(380.42,-114.55){\circle*{0.000001}} \put(381.13,-114.55){\circle*{0.000001}} \put(381.84,-114.55){\circle*{0.000001}} \put(382.54,-114.55){\circle*{0.000001}} \put(383.25,-115.26){\circle*{0.000001}} \put(383.96,-115.26){\circle*{0.000001}} \put(384.67,-115.26){\circle*{0.000001}} \put(385.37,-115.26){\circle*{0.000001}} \put(386.08,-115.26){\circle*{0.000001}} \put(386.79,-115.97){\circle*{0.000001}} \put(387.49,-115.97){\circle*{0.000001}} \put(388.20,-115.97){\circle*{0.000001}} \put(388.91,-115.97){\circle*{0.000001}} \put(389.62,-115.97){\circle*{0.000001}} \put(390.32,-116.67){\circle*{0.000001}} \put(391.03,-116.67){\circle*{0.000001}} \put(391.74,-116.67){\circle*{0.000001}} \put(392.44,-116.67){\circle*{0.000001}} \put(393.15,-116.67){\circle*{0.000001}} \put(393.86,-117.38){\circle*{0.000001}} \put(394.57,-117.38){\circle*{0.000001}} \put(395.27,-117.38){\circle*{0.000001}} \put(395.98,-117.38){\circle*{0.000001}} \put(396.69,-117.38){\circle*{0.000001}} \put(397.39,-118.09){\circle*{0.000001}} \put(398.10,-118.09){\circle*{0.000001}} \put(398.81,-118.09){\circle*{0.000001}} \put(399.52,-118.09){\circle*{0.000001}} \put(400.22,-118.09){\circle*{0.000001}} \put(400.93,-118.79){\circle*{0.000001}} \put(401.64,-118.79){\circle*{0.000001}} \put(402.34,-118.79){\circle*{0.000001}} \put(403.05,-118.79){\circle*{0.000001}} \put(403.76,-118.79){\circle*{0.000001}} \put(404.47,-119.50){\circle*{0.000001}} \put(405.17,-119.50){\circle*{0.000001}} \put(405.88,-119.50){\circle*{0.000001}} \put(325.98,-79.90){\circle*{0.000001}} \put(326.68,-80.61){\circle*{0.000001}} \put(327.39,-81.32){\circle*{0.000001}} \put(328.10,-82.02){\circle*{0.000001}} \put(328.80,-82.73){\circle*{0.000001}} \put(329.51,-82.73){\circle*{0.000001}} \put(330.22,-83.44){\circle*{0.000001}} \put(330.93,-84.15){\circle*{0.000001}} \put(331.63,-84.85){\circle*{0.000001}} \put(332.34,-85.56){\circle*{0.000001}} \put(333.05,-86.27){\circle*{0.000001}} \put(333.75,-86.97){\circle*{0.000001}} \put(334.46,-87.68){\circle*{0.000001}} \put(335.17,-88.39){\circle*{0.000001}} \put(335.88,-89.10){\circle*{0.000001}} \put(336.58,-89.10){\circle*{0.000001}} \put(337.29,-89.80){\circle*{0.000001}} \put(338.00,-90.51){\circle*{0.000001}} \put(338.70,-91.22){\circle*{0.000001}} \put(339.41,-91.92){\circle*{0.000001}} \put(340.12,-92.63){\circle*{0.000001}} \put(340.83,-93.34){\circle*{0.000001}} \put(341.53,-94.05){\circle*{0.000001}} \put(342.24,-94.75){\circle*{0.000001}} \put(342.95,-94.75){\circle*{0.000001}} \put(343.65,-95.46){\circle*{0.000001}} 
\put(344.36,-96.17){\circle*{0.000001}} \put(345.07,-96.87){\circle*{0.000001}} \put(345.78,-97.58){\circle*{0.000001}} \put(346.48,-98.29){\circle*{0.000001}} \put(347.19,-98.99){\circle*{0.000001}} \put(347.90,-99.70){\circle*{0.000001}} \put(348.60,-100.41){\circle*{0.000001}} \put(349.31,-101.12){\circle*{0.000001}} \put(350.02,-101.12){\circle*{0.000001}} \put(350.72,-101.82){\circle*{0.000001}} \put(351.43,-102.53){\circle*{0.000001}} \put(352.14,-103.24){\circle*{0.000001}} \put(352.85,-103.94){\circle*{0.000001}} \put(353.55,-104.65){\circle*{0.000001}} \put(354.26,-105.36){\circle*{0.000001}} \put(354.97,-106.07){\circle*{0.000001}} \put(355.67,-106.77){\circle*{0.000001}} \put(356.38,-107.48){\circle*{0.000001}} \put(357.09,-107.48){\circle*{0.000001}} \put(357.80,-108.19){\circle*{0.000001}} \put(358.50,-108.89){\circle*{0.000001}} \put(359.21,-109.60){\circle*{0.000001}} \put(359.92,-110.31){\circle*{0.000001}} \put(325.98,-79.90){\circle*{0.000001}} \put(325.98,-79.20){\circle*{0.000001}} \put(325.98,-78.49){\circle*{0.000001}} \put(325.98,-77.78){\circle*{0.000001}} \put(325.98,-77.07){\circle*{0.000001}} \put(326.68,-76.37){\circle*{0.000001}} \put(326.68,-75.66){\circle*{0.000001}} \put(326.68,-74.95){\circle*{0.000001}} \put(326.68,-74.25){\circle*{0.000001}} \put(326.68,-73.54){\circle*{0.000001}} \put(326.68,-72.83){\circle*{0.000001}} \put(326.68,-72.12){\circle*{0.000001}} \put(326.68,-71.42){\circle*{0.000001}} \put(326.68,-70.71){\circle*{0.000001}} \put(327.39,-70.00){\circle*{0.000001}} \put(327.39,-69.30){\circle*{0.000001}} \put(327.39,-68.59){\circle*{0.000001}} \put(327.39,-67.88){\circle*{0.000001}} \put(327.39,-67.18){\circle*{0.000001}} \put(327.39,-66.47){\circle*{0.000001}} \put(327.39,-65.76){\circle*{0.000001}} \put(327.39,-65.05){\circle*{0.000001}} \put(327.39,-64.35){\circle*{0.000001}} \put(328.10,-63.64){\circle*{0.000001}} \put(328.10,-62.93){\circle*{0.000001}} \put(328.10,-62.23){\circle*{0.000001}} \put(328.10,-61.52){\circle*{0.000001}} \put(328.10,-60.81){\circle*{0.000001}} \put(328.10,-60.10){\circle*{0.000001}} \put(328.10,-59.40){\circle*{0.000001}} \put(328.10,-58.69){\circle*{0.000001}} \put(328.10,-57.98){\circle*{0.000001}} \put(328.10,-57.28){\circle*{0.000001}} \put(328.80,-56.57){\circle*{0.000001}} \put(328.80,-55.86){\circle*{0.000001}} \put(328.80,-55.15){\circle*{0.000001}} \put(328.80,-54.45){\circle*{0.000001}} \put(328.80,-53.74){\circle*{0.000001}} \put(328.80,-53.03){\circle*{0.000001}} \put(328.80,-52.33){\circle*{0.000001}} \put(328.80,-51.62){\circle*{0.000001}} \put(328.80,-50.91){\circle*{0.000001}} \put(329.51,-50.20){\circle*{0.000001}} \put(329.51,-49.50){\circle*{0.000001}} \put(329.51,-48.79){\circle*{0.000001}} \put(329.51,-48.08){\circle*{0.000001}} \put(329.51,-47.38){\circle*{0.000001}} \put(329.51,-46.67){\circle*{0.000001}} \put(329.51,-45.96){\circle*{0.000001}} \put(329.51,-45.25){\circle*{0.000001}} \put(329.51,-44.55){\circle*{0.000001}} \put(330.22,-43.84){\circle*{0.000001}} \put(330.22,-43.13){\circle*{0.000001}} \put(330.22,-42.43){\circle*{0.000001}} \put(330.22,-41.72){\circle*{0.000001}} \put(330.22,-41.01){\circle*{0.000001}} \put(330.22,-40.31){\circle*{0.000001}} \put(330.22,-39.60){\circle*{0.000001}} \put(330.22,-38.89){\circle*{0.000001}} \put(330.22,-38.18){\circle*{0.000001}} \put(330.93,-37.48){\circle*{0.000001}} \put(330.93,-36.77){\circle*{0.000001}} \put(330.93,-36.06){\circle*{0.000001}} \put(330.93,-35.36){\circle*{0.000001}} \put(330.93,-34.65){\circle*{0.000001}} 
\put(279.31,-23.33){\circle*{0.000001}} \put(279.31,-22.63){\circle*{0.000001}} \put(280.01,-21.92){\circle*{0.000001}} \put(280.01,-21.21){\circle*{0.000001}} \put(280.72,-20.51){\circle*{0.000001}} \put(280.72,-19.80){\circle*{0.000001}} \put(281.43,-19.09){\circle*{0.000001}} \put(281.43,-18.38){\circle*{0.000001}} \put(282.14,-17.68){\circle*{0.000001}} \put(282.14,-16.97){\circle*{0.000001}} \put(282.84,-16.26){\circle*{0.000001}} \put(282.84,-15.56){\circle*{0.000001}} \put(283.55,-14.85){\circle*{0.000001}} \put(283.55,-14.14){\circle*{0.000001}} \put(284.26,-13.44){\circle*{0.000001}} \put(284.26,-12.73){\circle*{0.000001}} \put(284.96,-12.02){\circle*{0.000001}} \put(284.96,-11.31){\circle*{0.000001}} \put(285.67,-10.61){\circle*{0.000001}} \put(285.67,-9.90){\circle*{0.000001}} \put(266.58,-49.50){\circle*{0.000001}} \put(267.29,-48.79){\circle*{0.000001}} \put(267.99,-48.79){\circle*{0.000001}} \put(268.70,-48.08){\circle*{0.000001}} \put(269.41,-48.08){\circle*{0.000001}} \put(270.11,-47.38){\circle*{0.000001}} \put(270.82,-47.38){\circle*{0.000001}} \put(271.53,-46.67){\circle*{0.000001}} \put(272.24,-46.67){\circle*{0.000001}} \put(272.94,-45.96){\circle*{0.000001}} \put(273.65,-45.25){\circle*{0.000001}} \put(274.36,-45.25){\circle*{0.000001}} \put(275.06,-44.55){\circle*{0.000001}} \put(275.77,-44.55){\circle*{0.000001}} \put(276.48,-43.84){\circle*{0.000001}} \put(277.19,-43.84){\circle*{0.000001}} \put(277.89,-43.13){\circle*{0.000001}} \put(278.60,-42.43){\circle*{0.000001}} \put(279.31,-42.43){\circle*{0.000001}} \put(280.01,-41.72){\circle*{0.000001}} \put(280.72,-41.72){\circle*{0.000001}} \put(281.43,-41.01){\circle*{0.000001}} \put(282.14,-41.01){\circle*{0.000001}} \put(282.84,-40.31){\circle*{0.000001}} \put(283.55,-40.31){\circle*{0.000001}} \put(284.26,-39.60){\circle*{0.000001}} \put(284.96,-38.89){\circle*{0.000001}} \put(285.67,-38.89){\circle*{0.000001}} \put(286.38,-38.18){\circle*{0.000001}} \put(287.09,-38.18){\circle*{0.000001}} \put(287.79,-37.48){\circle*{0.000001}} \put(288.50,-37.48){\circle*{0.000001}} \put(289.21,-36.77){\circle*{0.000001}} \put(289.91,-36.06){\circle*{0.000001}} \put(290.62,-36.06){\circle*{0.000001}} \put(291.33,-35.36){\circle*{0.000001}} \put(292.04,-35.36){\circle*{0.000001}} \put(292.74,-34.65){\circle*{0.000001}} \put(293.45,-34.65){\circle*{0.000001}} \put(294.16,-33.94){\circle*{0.000001}} \put(294.86,-33.94){\circle*{0.000001}} \put(295.57,-33.23){\circle*{0.000001}} \put(296.28,-32.53){\circle*{0.000001}} \put(296.98,-32.53){\circle*{0.000001}} \put(297.69,-31.82){\circle*{0.000001}} \put(298.40,-31.82){\circle*{0.000001}} \put(299.11,-31.11){\circle*{0.000001}} \put(299.81,-31.11){\circle*{0.000001}} \put(300.52,-30.41){\circle*{0.000001}} \put(301.23,-29.70){\circle*{0.000001}} \put(301.93,-29.70){\circle*{0.000001}} \put(302.64,-28.99){\circle*{0.000001}} \put(303.35,-28.99){\circle*{0.000001}} \put(304.06,-28.28){\circle*{0.000001}} \put(304.76,-28.28){\circle*{0.000001}} \put(305.47,-27.58){\circle*{0.000001}} \put(306.18,-27.58){\circle*{0.000001}} \put(306.88,-26.87){\circle*{0.000001}} \put(306.88,-26.87){\circle*{0.000001}} \put(306.88,-26.16){\circle*{0.000001}} \put(306.88,-25.46){\circle*{0.000001}} \put(306.88,-24.75){\circle*{0.000001}} \put(306.88,-24.04){\circle*{0.000001}} \put(306.88,-23.33){\circle*{0.000001}} \put(306.88,-22.63){\circle*{0.000001}} \put(306.88,-21.92){\circle*{0.000001}} \put(306.88,-21.21){\circle*{0.000001}} \put(306.88,-20.51){\circle*{0.000001}} 
\put(306.88,-19.80){\circle*{0.000001}} \put(306.88,-19.09){\circle*{0.000001}} \put(306.88,-18.38){\circle*{0.000001}} \put(306.88,-17.68){\circle*{0.000001}} \put(306.88,-16.97){\circle*{0.000001}} \put(306.88,-16.26){\circle*{0.000001}} \put(306.88,-15.56){\circle*{0.000001}} \put(306.88,-14.85){\circle*{0.000001}} \put(306.88,-14.14){\circle*{0.000001}} \put(306.88,-13.44){\circle*{0.000001}} \put(306.88,-12.73){\circle*{0.000001}} \put(306.88,-12.02){\circle*{0.000001}} \put(306.88,-11.31){\circle*{0.000001}} \put(306.88,-10.61){\circle*{0.000001}} \put(306.88,-9.90){\circle*{0.000001}} \put(306.88,-9.19){\circle*{0.000001}} \put(306.88,-8.49){\circle*{0.000001}} \put(306.88,-7.78){\circle*{0.000001}} \put(306.88,-7.07){\circle*{0.000001}} \put(306.88,-6.36){\circle*{0.000001}} \put(306.88,-5.66){\circle*{0.000001}} \put(306.88,-4.95){\circle*{0.000001}} \put(306.88,-4.24){\circle*{0.000001}} \put(306.88,-3.54){\circle*{0.000001}} \put(306.88,-2.83){\circle*{0.000001}} \put(306.88,-2.12){\circle*{0.000001}} \put(306.88,-1.41){\circle*{0.000001}} \put(306.88,-0.71){\circle*{0.000001}} \put(306.88, 0.00){\circle*{0.000001}} \put(306.88, 0.71){\circle*{0.000001}} \put(306.88, 1.41){\circle*{0.000001}} \put(306.88, 2.12){\circle*{0.000001}} \put(306.88, 2.83){\circle*{0.000001}} \put(306.88, 3.54){\circle*{0.000001}} \put(306.88, 4.24){\circle*{0.000001}} \put(306.88, 4.95){\circle*{0.000001}} \put(306.88, 5.66){\circle*{0.000001}} \put(306.88, 6.36){\circle*{0.000001}} \put(306.88, 7.07){\circle*{0.000001}} \put(306.88, 7.78){\circle*{0.000001}} \put(306.88, 8.49){\circle*{0.000001}} \put(306.88, 9.19){\circle*{0.000001}} \put(306.88, 9.90){\circle*{0.000001}} \put(306.88,10.61){\circle*{0.000001}} \put(306.88,11.31){\circle*{0.000001}} \put(306.88,12.02){\circle*{0.000001}} \put(306.88,12.73){\circle*{0.000001}} \put(306.88,13.44){\circle*{0.000001}} \put(306.88,14.14){\circle*{0.000001}} \put(306.88,14.85){\circle*{0.000001}} \put(306.88,15.56){\circle*{0.000001}} \put(306.88,16.26){\circle*{0.000001}} \put(306.88,16.97){\circle*{0.000001}} \put(306.88,17.68){\circle*{0.000001}} \put(306.88,17.68){\circle*{0.000001}} \put(307.59,16.97){\circle*{0.000001}} \put(308.30,16.97){\circle*{0.000001}} \put(309.01,16.26){\circle*{0.000001}} \put(309.71,16.26){\circle*{0.000001}} \put(310.42,15.56){\circle*{0.000001}} \put(311.13,14.85){\circle*{0.000001}} \put(311.83,14.85){\circle*{0.000001}} \put(312.54,14.14){\circle*{0.000001}} \put(313.25,14.14){\circle*{0.000001}} \put(313.96,13.44){\circle*{0.000001}} \put(314.66,13.44){\circle*{0.000001}} \put(315.37,12.73){\circle*{0.000001}} \put(316.08,12.02){\circle*{0.000001}} \put(316.78,12.02){\circle*{0.000001}} \put(317.49,11.31){\circle*{0.000001}} \put(318.20,11.31){\circle*{0.000001}} \put(318.91,10.61){\circle*{0.000001}} \put(319.61, 9.90){\circle*{0.000001}} \put(320.32, 9.90){\circle*{0.000001}} \put(321.03, 9.19){\circle*{0.000001}} \put(321.73, 9.19){\circle*{0.000001}} \put(322.44, 8.49){\circle*{0.000001}} \put(323.15, 7.78){\circle*{0.000001}} \put(323.85, 7.78){\circle*{0.000001}} \put(324.56, 7.07){\circle*{0.000001}} \put(325.27, 7.07){\circle*{0.000001}} \put(325.98, 6.36){\circle*{0.000001}} \put(326.68, 6.36){\circle*{0.000001}} \put(327.39, 5.66){\circle*{0.000001}} \put(328.10, 4.95){\circle*{0.000001}} \put(328.80, 4.95){\circle*{0.000001}} \put(329.51, 4.24){\circle*{0.000001}} \put(330.22, 4.24){\circle*{0.000001}} \put(330.93, 3.54){\circle*{0.000001}} \put(331.63, 2.83){\circle*{0.000001}} \put(332.34, 
2.83){\circle*{0.000001}} \put(333.05, 2.12){\circle*{0.000001}} \put(333.75, 2.12){\circle*{0.000001}} \put(334.46, 1.41){\circle*{0.000001}} \put(335.17, 0.71){\circle*{0.000001}} \put(335.88, 0.71){\circle*{0.000001}} \put(336.58, 0.00){\circle*{0.000001}} \put(337.29, 0.00){\circle*{0.000001}} \put(338.00,-0.71){\circle*{0.000001}} \put(338.70,-0.71){\circle*{0.000001}} \put(339.41,-1.41){\circle*{0.000001}} \put(340.12,-2.12){\circle*{0.000001}} \put(340.83,-2.12){\circle*{0.000001}} \put(341.53,-2.83){\circle*{0.000001}} \put(342.24,-2.83){\circle*{0.000001}} \put(342.95,-3.54){\circle*{0.000001}} \put(342.95,-3.54){\circle*{0.000001}} \put(343.65,-3.54){\circle*{0.000001}} \put(344.36,-4.24){\circle*{0.000001}} \put(345.07,-4.24){\circle*{0.000001}} \put(345.78,-4.95){\circle*{0.000001}} \put(346.48,-4.95){\circle*{0.000001}} \put(347.19,-4.95){\circle*{0.000001}} \put(347.90,-5.66){\circle*{0.000001}} \put(348.60,-5.66){\circle*{0.000001}} \put(349.31,-5.66){\circle*{0.000001}} \put(350.02,-6.36){\circle*{0.000001}} \put(350.72,-6.36){\circle*{0.000001}} \put(351.43,-7.07){\circle*{0.000001}} \put(352.14,-7.07){\circle*{0.000001}} \put(352.85,-7.07){\circle*{0.000001}} \put(353.55,-7.78){\circle*{0.000001}} \put(354.26,-7.78){\circle*{0.000001}} \put(354.97,-7.78){\circle*{0.000001}} \put(355.67,-8.49){\circle*{0.000001}} \put(356.38,-8.49){\circle*{0.000001}} \put(357.09,-9.19){\circle*{0.000001}} \put(357.80,-9.19){\circle*{0.000001}} \put(358.50,-9.19){\circle*{0.000001}} \put(359.21,-9.90){\circle*{0.000001}} \put(359.92,-9.90){\circle*{0.000001}} \put(360.62,-10.61){\circle*{0.000001}} \put(361.33,-10.61){\circle*{0.000001}} \put(362.04,-10.61){\circle*{0.000001}} \put(362.75,-11.31){\circle*{0.000001}} \put(363.45,-11.31){\circle*{0.000001}} \put(364.16,-11.31){\circle*{0.000001}} \put(364.87,-12.02){\circle*{0.000001}} \put(365.57,-12.02){\circle*{0.000001}} \put(366.28,-12.73){\circle*{0.000001}} \put(366.99,-12.73){\circle*{0.000001}} \put(367.70,-12.73){\circle*{0.000001}} \put(368.40,-13.44){\circle*{0.000001}} \put(369.11,-13.44){\circle*{0.000001}} \put(369.82,-13.44){\circle*{0.000001}} \put(370.52,-14.14){\circle*{0.000001}} \put(371.23,-14.14){\circle*{0.000001}} \put(371.94,-14.85){\circle*{0.000001}} \put(372.65,-14.85){\circle*{0.000001}} \put(373.35,-14.85){\circle*{0.000001}} \put(374.06,-15.56){\circle*{0.000001}} \put(374.77,-15.56){\circle*{0.000001}} \put(375.47,-16.26){\circle*{0.000001}} \put(376.18,-16.26){\circle*{0.000001}} \put(376.89,-16.26){\circle*{0.000001}} \put(377.60,-16.97){\circle*{0.000001}} \put(378.30,-16.97){\circle*{0.000001}} \put(379.01,-16.97){\circle*{0.000001}} \put(379.72,-17.68){\circle*{0.000001}} \put(380.42,-17.68){\circle*{0.000001}} \put(381.13,-18.38){\circle*{0.000001}} \put(381.84,-18.38){\circle*{0.000001}} \put(382.54,-18.38){\circle*{0.000001}} \put(383.25,-19.09){\circle*{0.000001}} \put(383.96,-19.09){\circle*{0.000001}} \put(384.67,-19.09){\circle*{0.000001}} \put(385.37,-19.80){\circle*{0.000001}} \put(386.08,-19.80){\circle*{0.000001}} \put(386.79,-20.51){\circle*{0.000001}} \put(387.49,-20.51){\circle*{0.000001}} \put(387.49,-20.51){\circle*{0.000001}} \put(388.20,-20.51){\circle*{0.000001}} \put(388.91,-20.51){\circle*{0.000001}} \put(389.62,-20.51){\circle*{0.000001}} \put(390.32,-20.51){\circle*{0.000001}} \put(391.03,-19.80){\circle*{0.000001}} \put(391.74,-19.80){\circle*{0.000001}} \put(392.44,-19.80){\circle*{0.000001}} \put(393.15,-19.80){\circle*{0.000001}} \put(393.86,-19.80){\circle*{0.000001}} 
\put(394.57,-19.80){\circle*{0.000001}} \put(395.27,-19.80){\circle*{0.000001}} \put(395.98,-19.80){\circle*{0.000001}} \put(396.69,-19.80){\circle*{0.000001}} \put(397.39,-19.80){\circle*{0.000001}} \put(398.10,-19.09){\circle*{0.000001}} \put(398.81,-19.09){\circle*{0.000001}} \put(399.52,-19.09){\circle*{0.000001}} \put(400.22,-19.09){\circle*{0.000001}} \put(400.93,-19.09){\circle*{0.000001}} \put(401.64,-19.09){\circle*{0.000001}} \put(402.34,-19.09){\circle*{0.000001}} \put(403.05,-19.09){\circle*{0.000001}} \put(403.76,-19.09){\circle*{0.000001}} \put(404.47,-18.38){\circle*{0.000001}} \put(405.17,-18.38){\circle*{0.000001}} \put(405.88,-18.38){\circle*{0.000001}} \put(406.59,-18.38){\circle*{0.000001}} \put(407.29,-18.38){\circle*{0.000001}} \put(408.00,-18.38){\circle*{0.000001}} \put(408.71,-18.38){\circle*{0.000001}} \put(409.41,-18.38){\circle*{0.000001}} \put(410.12,-18.38){\circle*{0.000001}} \put(410.83,-18.38){\circle*{0.000001}} \put(411.54,-17.68){\circle*{0.000001}} \put(412.24,-17.68){\circle*{0.000001}} \put(412.95,-17.68){\circle*{0.000001}} \put(413.66,-17.68){\circle*{0.000001}} \put(414.36,-17.68){\circle*{0.000001}} \put(415.07,-17.68){\circle*{0.000001}} \put(415.78,-17.68){\circle*{0.000001}} \put(416.49,-17.68){\circle*{0.000001}} \put(417.19,-17.68){\circle*{0.000001}} \put(417.90,-17.68){\circle*{0.000001}} \put(418.61,-16.97){\circle*{0.000001}} \put(419.31,-16.97){\circle*{0.000001}} \put(420.02,-16.97){\circle*{0.000001}} \put(420.73,-16.97){\circle*{0.000001}} \put(421.44,-16.97){\circle*{0.000001}} \put(422.14,-16.97){\circle*{0.000001}} \put(422.85,-16.97){\circle*{0.000001}} \put(423.56,-16.97){\circle*{0.000001}} \put(424.26,-16.97){\circle*{0.000001}} \put(424.97,-16.26){\circle*{0.000001}} \put(425.68,-16.26){\circle*{0.000001}} \put(426.39,-16.26){\circle*{0.000001}} \put(427.09,-16.26){\circle*{0.000001}} \put(427.80,-16.26){\circle*{0.000001}} \put(428.51,-16.26){\circle*{0.000001}} \put(429.21,-16.26){\circle*{0.000001}} \put(429.92,-16.26){\circle*{0.000001}} \put(430.63,-16.26){\circle*{0.000001}} \put(431.34,-16.26){\circle*{0.000001}} \put(432.04,-15.56){\circle*{0.000001}} \put(432.75,-15.56){\circle*{0.000001}} \put(433.46,-15.56){\circle*{0.000001}} \put(434.16,-15.56){\circle*{0.000001}} \put(434.87,-15.56){\circle*{0.000001}} \put(434.87,-15.56){\circle*{0.000001}} \put(435.58,-15.56){\circle*{0.000001}} \put(436.28,-14.85){\circle*{0.000001}} \put(436.99,-14.85){\circle*{0.000001}} \put(437.70,-14.85){\circle*{0.000001}} \put(438.41,-14.85){\circle*{0.000001}} \put(439.11,-14.14){\circle*{0.000001}} \put(439.82,-14.14){\circle*{0.000001}} \put(440.53,-14.14){\circle*{0.000001}} \put(441.23,-13.44){\circle*{0.000001}} \put(441.94,-13.44){\circle*{0.000001}} \put(442.65,-13.44){\circle*{0.000001}} \put(443.36,-12.73){\circle*{0.000001}} \put(444.06,-12.73){\circle*{0.000001}} \put(444.77,-12.73){\circle*{0.000001}} \put(445.48,-12.73){\circle*{0.000001}} \put(446.18,-12.02){\circle*{0.000001}} \put(446.89,-12.02){\circle*{0.000001}} \put(447.60,-12.02){\circle*{0.000001}} \put(448.31,-11.31){\circle*{0.000001}} \put(449.01,-11.31){\circle*{0.000001}} \put(449.72,-11.31){\circle*{0.000001}} \put(450.43,-10.61){\circle*{0.000001}} \put(451.13,-10.61){\circle*{0.000001}} \put(451.84,-10.61){\circle*{0.000001}} \put(452.55,-10.61){\circle*{0.000001}} \put(453.26,-9.90){\circle*{0.000001}} \put(453.96,-9.90){\circle*{0.000001}} \put(454.67,-9.90){\circle*{0.000001}} \put(455.38,-9.19){\circle*{0.000001}} \put(456.08,-9.19){\circle*{0.000001}} 
\put(456.79,-9.19){\circle*{0.000001}} \put(457.50,-8.49){\circle*{0.000001}} \put(458.21,-8.49){\circle*{0.000001}} \put(458.91,-8.49){\circle*{0.000001}} \put(459.62,-8.49){\circle*{0.000001}} \put(460.33,-7.78){\circle*{0.000001}} \put(461.03,-7.78){\circle*{0.000001}} \put(461.74,-7.78){\circle*{0.000001}} \put(462.45,-7.07){\circle*{0.000001}} \put(463.15,-7.07){\circle*{0.000001}} \put(463.86,-7.07){\circle*{0.000001}} \put(464.57,-6.36){\circle*{0.000001}} \put(465.28,-6.36){\circle*{0.000001}} \put(465.98,-6.36){\circle*{0.000001}} \put(466.69,-6.36){\circle*{0.000001}} \put(467.40,-5.66){\circle*{0.000001}} \put(468.10,-5.66){\circle*{0.000001}} \put(468.81,-5.66){\circle*{0.000001}} \put(469.52,-4.95){\circle*{0.000001}} \put(470.23,-4.95){\circle*{0.000001}} \put(470.93,-4.95){\circle*{0.000001}} \put(471.64,-4.24){\circle*{0.000001}} \put(472.35,-4.24){\circle*{0.000001}} \put(473.05,-4.24){\circle*{0.000001}} \put(473.76,-4.24){\circle*{0.000001}} \put(474.47,-3.54){\circle*{0.000001}} \put(475.18,-3.54){\circle*{0.000001}} \put(475.88,-3.54){\circle*{0.000001}} \put(476.59,-2.83){\circle*{0.000001}} \put(477.30,-2.83){\circle*{0.000001}} \put(477.30,-2.83){\circle*{0.000001}} \put(477.30,-2.12){\circle*{0.000001}} \put(478.00,-1.41){\circle*{0.000001}} \put(478.00,-0.71){\circle*{0.000001}} \put(478.00, 0.00){\circle*{0.000001}} \put(478.71, 0.71){\circle*{0.000001}} \put(478.71, 1.41){\circle*{0.000001}} \put(478.71, 2.12){\circle*{0.000001}} \put(479.42, 2.83){\circle*{0.000001}} \put(479.42, 3.54){\circle*{0.000001}} \put(479.42, 4.24){\circle*{0.000001}} \put(480.13, 4.95){\circle*{0.000001}} \put(480.13, 5.66){\circle*{0.000001}} \put(480.13, 6.36){\circle*{0.000001}} \put(480.83, 7.07){\circle*{0.000001}} \put(480.83, 7.78){\circle*{0.000001}} \put(480.83, 8.49){\circle*{0.000001}} \put(481.54, 9.19){\circle*{0.000001}} \put(481.54, 9.90){\circle*{0.000001}} \put(481.54,10.61){\circle*{0.000001}} \put(482.25,11.31){\circle*{0.000001}} \put(482.25,12.02){\circle*{0.000001}} \put(482.25,12.73){\circle*{0.000001}} \put(482.95,13.44){\circle*{0.000001}} \put(482.95,14.14){\circle*{0.000001}} \put(482.95,14.85){\circle*{0.000001}} \put(483.66,15.56){\circle*{0.000001}} \put(483.66,16.26){\circle*{0.000001}} \put(483.66,16.97){\circle*{0.000001}} \put(484.37,17.68){\circle*{0.000001}} \put(484.37,18.38){\circle*{0.000001}} \put(484.37,19.09){\circle*{0.000001}} \put(485.08,19.80){\circle*{0.000001}} \put(485.08,20.51){\circle*{0.000001}} \put(485.08,21.21){\circle*{0.000001}} \put(485.78,21.92){\circle*{0.000001}} \put(485.78,22.63){\circle*{0.000001}} \put(485.78,23.33){\circle*{0.000001}} \put(486.49,24.04){\circle*{0.000001}} \put(486.49,24.75){\circle*{0.000001}} \put(486.49,25.46){\circle*{0.000001}} \put(487.20,26.16){\circle*{0.000001}} \put(487.20,26.87){\circle*{0.000001}} \put(487.20,27.58){\circle*{0.000001}} \put(487.90,28.28){\circle*{0.000001}} \put(487.90,28.99){\circle*{0.000001}} \put(487.90,29.70){\circle*{0.000001}} \put(488.61,30.41){\circle*{0.000001}} \put(488.61,31.11){\circle*{0.000001}} \put(488.61,31.82){\circle*{0.000001}} \put(489.32,32.53){\circle*{0.000001}} \put(489.32,33.23){\circle*{0.000001}} \put(489.32,33.94){\circle*{0.000001}} \put(490.02,34.65){\circle*{0.000001}} \put(490.02,35.36){\circle*{0.000001}} \put(490.02,36.06){\circle*{0.000001}} \put(490.73,36.77){\circle*{0.000001}} \put(490.73,37.48){\circle*{0.000001}} \put(490.73,38.18){\circle*{0.000001}} \put(491.44,38.89){\circle*{0.000001}} \put(491.44,39.60){\circle*{0.000001}} 
\put(471.64,-0.71){\circle*{0.000001}} \put(471.64, 0.00){\circle*{0.000001}} \put(472.35, 0.71){\circle*{0.000001}} \put(472.35, 1.41){\circle*{0.000001}} \put(473.05, 2.12){\circle*{0.000001}} \put(473.05, 2.83){\circle*{0.000001}} \put(473.76, 3.54){\circle*{0.000001}} \put(473.76, 4.24){\circle*{0.000001}} \put(474.47, 4.95){\circle*{0.000001}} \put(474.47, 5.66){\circle*{0.000001}} \put(475.18, 6.36){\circle*{0.000001}} \put(475.18, 7.07){\circle*{0.000001}} \put(475.88, 7.78){\circle*{0.000001}} \put(475.88, 8.49){\circle*{0.000001}} \put(476.59, 9.19){\circle*{0.000001}} \put(476.59, 9.90){\circle*{0.000001}} \put(477.30,10.61){\circle*{0.000001}} \put(477.30,11.31){\circle*{0.000001}} \put(478.00,12.02){\circle*{0.000001}} \put(478.00,12.73){\circle*{0.000001}} \put(478.71,13.44){\circle*{0.000001}} \put(478.71,14.14){\circle*{0.000001}} \put(479.42,14.85){\circle*{0.000001}} \put(479.42,15.56){\circle*{0.000001}} \put(480.13,16.26){\circle*{0.000001}} \put(480.13,16.97){\circle*{0.000001}} \put(480.83,17.68){\circle*{0.000001}} \put(480.83,18.38){\circle*{0.000001}} \put(481.54,19.09){\circle*{0.000001}} \put(481.54,19.80){\circle*{0.000001}} \put(482.25,20.51){\circle*{0.000001}} \put(482.25,21.21){\circle*{0.000001}} \put(482.95,21.92){\circle*{0.000001}} \put(482.95,22.63){\circle*{0.000001}} \put(483.66,23.33){\circle*{0.000001}} \put(483.66,24.04){\circle*{0.000001}} \put(484.37,24.75){\circle*{0.000001}} \put(484.37,25.46){\circle*{0.000001}} \put(485.08,26.16){\circle*{0.000001}} \put(485.08,26.87){\circle*{0.000001}} \put(485.78,27.58){\circle*{0.000001}} \put(485.78,28.28){\circle*{0.000001}} \put(486.49,28.99){\circle*{0.000001}} \put(486.49,29.70){\circle*{0.000001}} \put(487.20,30.41){\circle*{0.000001}} \put(487.20,31.11){\circle*{0.000001}} \put(487.90,31.82){\circle*{0.000001}} \put(487.90,32.53){\circle*{0.000001}} \put(488.61,33.23){\circle*{0.000001}} \put(488.61,33.94){\circle*{0.000001}} \put(489.32,34.65){\circle*{0.000001}} \put(489.32,35.36){\circle*{0.000001}} \put(490.02,36.06){\circle*{0.000001}} \put(490.02,36.77){\circle*{0.000001}} \put(490.73,37.48){\circle*{0.000001}} \put(490.73,38.18){\circle*{0.000001}} \put(491.44,38.89){\circle*{0.000001}} \put(491.44,39.60){\circle*{0.000001}} \put(471.64,-0.71){\circle*{0.000001}} \put(472.35, 0.00){\circle*{0.000001}} \put(473.05, 0.00){\circle*{0.000001}} \put(473.76, 0.71){\circle*{0.000001}} \put(474.47, 0.71){\circle*{0.000001}} \put(475.18, 1.41){\circle*{0.000001}} \put(475.88, 2.12){\circle*{0.000001}} \put(476.59, 2.12){\circle*{0.000001}} \put(477.30, 2.83){\circle*{0.000001}} \put(478.00, 2.83){\circle*{0.000001}} \put(478.71, 3.54){\circle*{0.000001}} \put(479.42, 4.24){\circle*{0.000001}} \put(480.13, 4.24){\circle*{0.000001}} \put(480.83, 4.95){\circle*{0.000001}} \put(481.54, 5.66){\circle*{0.000001}} \put(482.25, 5.66){\circle*{0.000001}} \put(482.95, 6.36){\circle*{0.000001}} \put(483.66, 6.36){\circle*{0.000001}} \put(484.37, 7.07){\circle*{0.000001}} \put(485.08, 7.78){\circle*{0.000001}} \put(485.78, 7.78){\circle*{0.000001}} \put(486.49, 8.49){\circle*{0.000001}} \put(487.20, 8.49){\circle*{0.000001}} \put(487.90, 9.19){\circle*{0.000001}} \put(488.61, 9.90){\circle*{0.000001}} \put(489.32, 9.90){\circle*{0.000001}} \put(490.02,10.61){\circle*{0.000001}} \put(490.73,10.61){\circle*{0.000001}} \put(491.44,11.31){\circle*{0.000001}} \put(492.15,12.02){\circle*{0.000001}} \put(492.85,12.02){\circle*{0.000001}} \put(493.56,12.73){\circle*{0.000001}} \put(494.27,13.44){\circle*{0.000001}} 
\put(494.97,13.44){\circle*{0.000001}} \put(495.68,14.14){\circle*{0.000001}} \put(496.39,14.14){\circle*{0.000001}} \put(497.10,14.85){\circle*{0.000001}} \put(497.80,15.56){\circle*{0.000001}} \put(498.51,15.56){\circle*{0.000001}} \put(499.22,16.26){\circle*{0.000001}} \put(499.92,16.26){\circle*{0.000001}} \put(500.63,16.97){\circle*{0.000001}} \put(501.34,17.68){\circle*{0.000001}} \put(502.05,17.68){\circle*{0.000001}} \put(502.75,18.38){\circle*{0.000001}} \put(503.46,18.38){\circle*{0.000001}} \put(504.17,19.09){\circle*{0.000001}} \put(504.87,19.80){\circle*{0.000001}} \put(505.58,19.80){\circle*{0.000001}} \put(506.29,20.51){\circle*{0.000001}} \put(507.00,21.21){\circle*{0.000001}} \put(507.70,21.21){\circle*{0.000001}} \put(508.41,21.92){\circle*{0.000001}} \put(509.12,21.92){\circle*{0.000001}} \put(509.82,22.63){\circle*{0.000001}} \put(509.82,22.63){\circle*{0.000001}} \put(509.82,23.33){\circle*{0.000001}} \put(509.82,24.04){\circle*{0.000001}} \put(509.12,24.75){\circle*{0.000001}} \put(509.12,25.46){\circle*{0.000001}} \put(509.12,26.16){\circle*{0.000001}} \put(509.12,26.87){\circle*{0.000001}} \put(509.12,27.58){\circle*{0.000001}} \put(508.41,28.28){\circle*{0.000001}} \put(508.41,28.99){\circle*{0.000001}} \put(508.41,29.70){\circle*{0.000001}} \put(508.41,30.41){\circle*{0.000001}} \put(507.70,31.11){\circle*{0.000001}} \put(507.70,31.82){\circle*{0.000001}} \put(507.70,32.53){\circle*{0.000001}} \put(507.70,33.23){\circle*{0.000001}} \put(507.70,33.94){\circle*{0.000001}} \put(507.00,34.65){\circle*{0.000001}} \put(507.00,35.36){\circle*{0.000001}} \put(507.00,36.06){\circle*{0.000001}} \put(507.00,36.77){\circle*{0.000001}} \put(507.00,37.48){\circle*{0.000001}} \put(506.29,38.18){\circle*{0.000001}} \put(506.29,38.89){\circle*{0.000001}} \put(506.29,39.60){\circle*{0.000001}} \put(506.29,40.31){\circle*{0.000001}} \put(505.58,41.01){\circle*{0.000001}} \put(505.58,41.72){\circle*{0.000001}} \put(505.58,42.43){\circle*{0.000001}} \put(505.58,43.13){\circle*{0.000001}} \put(505.58,43.84){\circle*{0.000001}} \put(504.87,44.55){\circle*{0.000001}} \put(504.87,45.25){\circle*{0.000001}} \put(504.87,45.96){\circle*{0.000001}} \put(504.87,46.67){\circle*{0.000001}} \put(504.87,47.38){\circle*{0.000001}} \put(504.17,48.08){\circle*{0.000001}} \put(504.17,48.79){\circle*{0.000001}} \put(504.17,49.50){\circle*{0.000001}} \put(504.17,50.20){\circle*{0.000001}} \put(504.17,50.91){\circle*{0.000001}} \put(503.46,51.62){\circle*{0.000001}} \put(503.46,52.33){\circle*{0.000001}} \put(503.46,53.03){\circle*{0.000001}} \put(503.46,53.74){\circle*{0.000001}} \put(502.75,54.45){\circle*{0.000001}} \put(502.75,55.15){\circle*{0.000001}} \put(502.75,55.86){\circle*{0.000001}} \put(502.75,56.57){\circle*{0.000001}} \put(502.75,57.28){\circle*{0.000001}} \put(502.05,57.98){\circle*{0.000001}} \put(502.05,58.69){\circle*{0.000001}} \put(502.05,59.40){\circle*{0.000001}} \put(502.05,60.10){\circle*{0.000001}} \put(502.05,60.81){\circle*{0.000001}} \put(501.34,61.52){\circle*{0.000001}} \put(501.34,62.23){\circle*{0.000001}} \put(501.34,62.93){\circle*{0.000001}} \put(501.34,63.64){\circle*{0.000001}} \put(500.63,64.35){\circle*{0.000001}} \put(500.63,65.05){\circle*{0.000001}} \put(500.63,65.76){\circle*{0.000001}} \put(500.63,66.47){\circle*{0.000001}} \put(500.63,67.18){\circle*{0.000001}} \put(499.92,67.88){\circle*{0.000001}} \put(499.92,68.59){\circle*{0.000001}} \put(499.92,69.30){\circle*{0.000001}} \put(499.92,69.30){\circle*{0.000001}} \put(500.63,68.59){\circle*{0.000001}} 
\put(501.34,67.88){\circle*{0.000001}} \put(502.05,67.18){\circle*{0.000001}} \put(502.75,66.47){\circle*{0.000001}} \put(503.46,65.76){\circle*{0.000001}} \put(504.17,65.05){\circle*{0.000001}} \put(504.87,64.35){\circle*{0.000001}} \put(505.58,63.64){\circle*{0.000001}} \put(506.29,62.93){\circle*{0.000001}} \put(507.00,62.23){\circle*{0.000001}} \put(507.70,61.52){\circle*{0.000001}} \put(508.41,60.81){\circle*{0.000001}} \put(509.12,60.10){\circle*{0.000001}} \put(509.82,59.40){\circle*{0.000001}} \put(510.53,58.69){\circle*{0.000001}} \put(511.24,57.98){\circle*{0.000001}} \put(511.95,57.28){\circle*{0.000001}} \put(512.65,56.57){\circle*{0.000001}} \put(513.36,55.86){\circle*{0.000001}} \put(514.07,55.15){\circle*{0.000001}} \put(514.77,54.45){\circle*{0.000001}} \put(515.48,53.74){\circle*{0.000001}} \put(516.19,53.03){\circle*{0.000001}} \put(516.90,52.33){\circle*{0.000001}} \put(517.60,51.62){\circle*{0.000001}} \put(518.31,50.91){\circle*{0.000001}} \put(519.02,50.20){\circle*{0.000001}} \put(519.72,49.50){\circle*{0.000001}} \put(520.43,48.79){\circle*{0.000001}} \put(521.14,48.08){\circle*{0.000001}} \put(521.84,47.38){\circle*{0.000001}} \put(522.55,46.67){\circle*{0.000001}} \put(523.26,45.96){\circle*{0.000001}} \put(523.97,45.25){\circle*{0.000001}} \put(524.67,44.55){\circle*{0.000001}} \put(525.38,43.84){\circle*{0.000001}} \put(526.09,43.13){\circle*{0.000001}} \put(526.79,42.43){\circle*{0.000001}} \put(527.50,41.72){\circle*{0.000001}} \put(528.21,41.01){\circle*{0.000001}} \put(528.92,40.31){\circle*{0.000001}} \put(529.62,39.60){\circle*{0.000001}} \put(530.33,38.89){\circle*{0.000001}} \put(530.33,-5.66){\circle*{0.000001}} \put(530.33,-4.95){\circle*{0.000001}} \put(530.33,-4.24){\circle*{0.000001}} \put(530.33,-3.54){\circle*{0.000001}} \put(530.33,-2.83){\circle*{0.000001}} \put(530.33,-2.12){\circle*{0.000001}} \put(530.33,-1.41){\circle*{0.000001}} \put(530.33,-0.71){\circle*{0.000001}} \put(530.33, 0.00){\circle*{0.000001}} \put(530.33, 0.71){\circle*{0.000001}} \put(530.33, 1.41){\circle*{0.000001}} \put(530.33, 2.12){\circle*{0.000001}} \put(530.33, 2.83){\circle*{0.000001}} \put(530.33, 3.54){\circle*{0.000001}} \put(530.33, 4.24){\circle*{0.000001}} \put(530.33, 4.95){\circle*{0.000001}} \put(530.33, 5.66){\circle*{0.000001}} \put(530.33, 6.36){\circle*{0.000001}} \put(530.33, 7.07){\circle*{0.000001}} \put(530.33, 7.78){\circle*{0.000001}} \put(530.33, 8.49){\circle*{0.000001}} \put(530.33, 9.19){\circle*{0.000001}} \put(530.33, 9.90){\circle*{0.000001}} \put(530.33,10.61){\circle*{0.000001}} \put(530.33,11.31){\circle*{0.000001}} \put(530.33,12.02){\circle*{0.000001}} \put(530.33,12.73){\circle*{0.000001}} \put(530.33,13.44){\circle*{0.000001}} \put(530.33,14.14){\circle*{0.000001}} \put(530.33,14.85){\circle*{0.000001}} \put(530.33,15.56){\circle*{0.000001}} \put(530.33,16.26){\circle*{0.000001}} \put(530.33,16.97){\circle*{0.000001}} \put(530.33,17.68){\circle*{0.000001}} \put(530.33,18.38){\circle*{0.000001}} \put(530.33,19.09){\circle*{0.000001}} \put(530.33,19.80){\circle*{0.000001}} \put(530.33,20.51){\circle*{0.000001}} \put(530.33,21.21){\circle*{0.000001}} \put(530.33,21.92){\circle*{0.000001}} \put(530.33,22.63){\circle*{0.000001}} \put(530.33,23.33){\circle*{0.000001}} \put(530.33,24.04){\circle*{0.000001}} \put(530.33,24.75){\circle*{0.000001}} \put(530.33,25.46){\circle*{0.000001}} \put(530.33,26.16){\circle*{0.000001}} \put(530.33,26.87){\circle*{0.000001}} \put(530.33,27.58){\circle*{0.000001}} \put(530.33,28.28){\circle*{0.000001}} 
\put(530.33,28.99){\circle*{0.000001}} \put(530.33,29.70){\circle*{0.000001}} \put(530.33,30.41){\circle*{0.000001}} \put(530.33,31.11){\circle*{0.000001}} \put(530.33,31.82){\circle*{0.000001}} \put(530.33,32.53){\circle*{0.000001}} \put(530.33,33.23){\circle*{0.000001}} \put(530.33,33.94){\circle*{0.000001}} \put(530.33,34.65){\circle*{0.000001}} \put(530.33,35.36){\circle*{0.000001}} \put(530.33,36.06){\circle*{0.000001}} \put(530.33,36.77){\circle*{0.000001}} \put(530.33,37.48){\circle*{0.000001}} \put(530.33,38.18){\circle*{0.000001}} \put(530.33,38.89){\circle*{0.000001}} \put(530.33,-5.66){\circle*{0.000001}} \put(529.62,-4.95){\circle*{0.000001}} \put(528.92,-4.24){\circle*{0.000001}} \put(528.92,-3.54){\circle*{0.000001}} \put(528.21,-2.83){\circle*{0.000001}} \put(527.50,-2.12){\circle*{0.000001}} \put(526.79,-1.41){\circle*{0.000001}} \put(526.79,-0.71){\circle*{0.000001}} \put(526.09, 0.00){\circle*{0.000001}} \put(525.38, 0.71){\circle*{0.000001}} \put(524.67, 1.41){\circle*{0.000001}} \put(523.97, 2.12){\circle*{0.000001}} \put(523.97, 2.83){\circle*{0.000001}} \put(523.26, 3.54){\circle*{0.000001}} \put(522.55, 4.24){\circle*{0.000001}} \put(521.84, 4.95){\circle*{0.000001}} \put(521.84, 5.66){\circle*{0.000001}} \put(521.14, 6.36){\circle*{0.000001}} \put(520.43, 7.07){\circle*{0.000001}} \put(519.72, 7.78){\circle*{0.000001}} \put(519.02, 8.49){\circle*{0.000001}} \put(519.02, 9.19){\circle*{0.000001}} \put(518.31, 9.90){\circle*{0.000001}} \put(517.60,10.61){\circle*{0.000001}} \put(516.90,11.31){\circle*{0.000001}} \put(516.90,12.02){\circle*{0.000001}} \put(516.19,12.73){\circle*{0.000001}} \put(515.48,13.44){\circle*{0.000001}} \put(514.77,14.14){\circle*{0.000001}} \put(514.07,14.85){\circle*{0.000001}} \put(514.07,15.56){\circle*{0.000001}} \put(513.36,16.26){\circle*{0.000001}} \put(512.65,16.97){\circle*{0.000001}} \put(511.95,17.68){\circle*{0.000001}} \put(511.24,18.38){\circle*{0.000001}} \put(511.24,19.09){\circle*{0.000001}} \put(510.53,19.80){\circle*{0.000001}} \put(509.82,20.51){\circle*{0.000001}} \put(509.12,21.21){\circle*{0.000001}} \put(509.12,21.92){\circle*{0.000001}} \put(508.41,22.63){\circle*{0.000001}} \put(507.70,23.33){\circle*{0.000001}} \put(507.00,24.04){\circle*{0.000001}} \put(506.29,24.75){\circle*{0.000001}} \put(506.29,25.46){\circle*{0.000001}} \put(505.58,26.16){\circle*{0.000001}} \put(504.87,26.87){\circle*{0.000001}} \put(504.17,27.58){\circle*{0.000001}} \put(504.17,28.28){\circle*{0.000001}} \put(503.46,28.99){\circle*{0.000001}} \put(502.75,29.70){\circle*{0.000001}} \put(502.75,29.70){\circle*{0.000001}} \put(502.75,30.41){\circle*{0.000001}} \put(503.46,31.11){\circle*{0.000001}} \put(503.46,31.82){\circle*{0.000001}} \put(504.17,32.53){\circle*{0.000001}} \put(504.17,33.23){\circle*{0.000001}} \put(504.87,33.94){\circle*{0.000001}} \put(504.87,34.65){\circle*{0.000001}} \put(505.58,35.36){\circle*{0.000001}} \put(505.58,36.06){\circle*{0.000001}} \put(506.29,36.77){\circle*{0.000001}} \put(506.29,37.48){\circle*{0.000001}} \put(507.00,38.18){\circle*{0.000001}} \put(507.00,38.89){\circle*{0.000001}} \put(507.70,39.60){\circle*{0.000001}} \put(507.70,40.31){\circle*{0.000001}} \put(508.41,41.01){\circle*{0.000001}} \put(508.41,41.72){\circle*{0.000001}} \put(509.12,42.43){\circle*{0.000001}} \put(509.12,43.13){\circle*{0.000001}} \put(509.82,43.84){\circle*{0.000001}} \put(509.82,44.55){\circle*{0.000001}} \put(510.53,45.25){\circle*{0.000001}} \put(510.53,45.96){\circle*{0.000001}} \put(511.24,46.67){\circle*{0.000001}} 
\put(511.24,47.38){\circle*{0.000001}} \put(511.95,48.08){\circle*{0.000001}} \put(511.95,48.79){\circle*{0.000001}} \put(512.65,49.50){\circle*{0.000001}} \put(512.65,50.20){\circle*{0.000001}} \put(512.65,50.91){\circle*{0.000001}} \put(513.36,51.62){\circle*{0.000001}} \put(513.36,52.33){\circle*{0.000001}} \put(514.07,53.03){\circle*{0.000001}} \put(514.07,53.74){\circle*{0.000001}} \put(514.77,54.45){\circle*{0.000001}} \put(514.77,55.15){\circle*{0.000001}} \put(515.48,55.86){\circle*{0.000001}} \put(515.48,56.57){\circle*{0.000001}} \put(516.19,57.28){\circle*{0.000001}} \put(516.19,57.98){\circle*{0.000001}} \put(516.90,58.69){\circle*{0.000001}} \put(516.90,59.40){\circle*{0.000001}} \put(517.60,60.10){\circle*{0.000001}} \put(517.60,60.81){\circle*{0.000001}} \put(518.31,61.52){\circle*{0.000001}} \put(518.31,62.23){\circle*{0.000001}} \put(519.02,62.93){\circle*{0.000001}} \put(519.02,63.64){\circle*{0.000001}} \put(519.72,64.35){\circle*{0.000001}} \put(519.72,65.05){\circle*{0.000001}} \put(520.43,65.76){\circle*{0.000001}} \put(520.43,66.47){\circle*{0.000001}} \put(521.14,67.18){\circle*{0.000001}} \put(521.14,67.88){\circle*{0.000001}} \put(521.84,68.59){\circle*{0.000001}} \put(521.84,69.30){\circle*{0.000001}} \put(522.55,70.00){\circle*{0.000001}} \put(522.55,70.71){\circle*{0.000001}} \put(501.34,31.82){\circle*{0.000001}} \put(502.05,32.53){\circle*{0.000001}} \put(502.05,33.23){\circle*{0.000001}} \put(502.75,33.94){\circle*{0.000001}} \put(502.75,34.65){\circle*{0.000001}} \put(503.46,35.36){\circle*{0.000001}} \put(503.46,36.06){\circle*{0.000001}} \put(504.17,36.77){\circle*{0.000001}} \put(504.17,37.48){\circle*{0.000001}} \put(504.87,38.18){\circle*{0.000001}} \put(504.87,38.89){\circle*{0.000001}} \put(505.58,39.60){\circle*{0.000001}} \put(506.29,40.31){\circle*{0.000001}} \put(506.29,41.01){\circle*{0.000001}} \put(507.00,41.72){\circle*{0.000001}} \put(507.00,42.43){\circle*{0.000001}} \put(507.70,43.13){\circle*{0.000001}} \put(507.70,43.84){\circle*{0.000001}} \put(508.41,44.55){\circle*{0.000001}} \put(508.41,45.25){\circle*{0.000001}} \put(509.12,45.96){\circle*{0.000001}} \put(509.12,46.67){\circle*{0.000001}} \put(509.82,47.38){\circle*{0.000001}} \put(510.53,48.08){\circle*{0.000001}} \put(510.53,48.79){\circle*{0.000001}} \put(511.24,49.50){\circle*{0.000001}} \put(511.24,50.20){\circle*{0.000001}} \put(511.95,50.91){\circle*{0.000001}} \put(511.95,51.62){\circle*{0.000001}} \put(512.65,52.33){\circle*{0.000001}} \put(512.65,53.03){\circle*{0.000001}} \put(513.36,53.74){\circle*{0.000001}} \put(513.36,54.45){\circle*{0.000001}} \put(514.07,55.15){\circle*{0.000001}} \put(514.77,55.86){\circle*{0.000001}} \put(514.77,56.57){\circle*{0.000001}} \put(515.48,57.28){\circle*{0.000001}} \put(515.48,57.98){\circle*{0.000001}} \put(516.19,58.69){\circle*{0.000001}} \put(516.19,59.40){\circle*{0.000001}} \put(516.90,60.10){\circle*{0.000001}} \put(516.90,60.81){\circle*{0.000001}} \put(517.60,61.52){\circle*{0.000001}} \put(517.60,62.23){\circle*{0.000001}} \put(518.31,62.93){\circle*{0.000001}} \put(519.02,63.64){\circle*{0.000001}} \put(519.02,64.35){\circle*{0.000001}} \put(519.72,65.05){\circle*{0.000001}} \put(519.72,65.76){\circle*{0.000001}} \put(520.43,66.47){\circle*{0.000001}} \put(520.43,67.18){\circle*{0.000001}} \put(521.14,67.88){\circle*{0.000001}} \put(521.14,68.59){\circle*{0.000001}} \put(521.84,69.30){\circle*{0.000001}} \put(521.84,70.00){\circle*{0.000001}} \put(522.55,70.71){\circle*{0.000001}} \put(501.34,31.82){\circle*{0.000001}} 
\put(501.34,32.53){\circle*{0.000001}} \put(501.34,33.23){\circle*{0.000001}} \put(502.05,33.94){\circle*{0.000001}} \put(502.05,34.65){\circle*{0.000001}} \put(502.05,35.36){\circle*{0.000001}} \put(502.05,36.06){\circle*{0.000001}} \put(502.75,36.77){\circle*{0.000001}} \put(502.75,37.48){\circle*{0.000001}} \put(502.75,38.18){\circle*{0.000001}} \put(502.75,38.89){\circle*{0.000001}} \put(503.46,39.60){\circle*{0.000001}} \put(503.46,40.31){\circle*{0.000001}} \put(503.46,41.01){\circle*{0.000001}} \put(503.46,41.72){\circle*{0.000001}} \put(504.17,42.43){\circle*{0.000001}} \put(504.17,43.13){\circle*{0.000001}} \put(504.17,43.84){\circle*{0.000001}} \put(504.17,44.55){\circle*{0.000001}} \put(504.87,45.25){\circle*{0.000001}} \put(504.87,45.96){\circle*{0.000001}} \put(504.87,46.67){\circle*{0.000001}} \put(504.87,47.38){\circle*{0.000001}} \put(505.58,48.08){\circle*{0.000001}} \put(505.58,48.79){\circle*{0.000001}} \put(505.58,49.50){\circle*{0.000001}} \put(505.58,50.20){\circle*{0.000001}} \put(506.29,50.91){\circle*{0.000001}} \put(506.29,51.62){\circle*{0.000001}} \put(506.29,52.33){\circle*{0.000001}} \put(506.29,53.03){\circle*{0.000001}} \put(507.00,53.74){\circle*{0.000001}} \put(507.00,54.45){\circle*{0.000001}} \put(507.00,55.15){\circle*{0.000001}} \put(507.00,55.86){\circle*{0.000001}} \put(507.70,56.57){\circle*{0.000001}} \put(507.70,57.28){\circle*{0.000001}} \put(507.70,57.98){\circle*{0.000001}} \put(507.70,58.69){\circle*{0.000001}} \put(508.41,59.40){\circle*{0.000001}} \put(508.41,60.10){\circle*{0.000001}} \put(508.41,60.81){\circle*{0.000001}} \put(508.41,61.52){\circle*{0.000001}} \put(509.12,62.23){\circle*{0.000001}} \put(509.12,62.93){\circle*{0.000001}} \put(509.12,63.64){\circle*{0.000001}} \put(509.12,64.35){\circle*{0.000001}} \put(509.82,65.05){\circle*{0.000001}} \put(509.82,65.76){\circle*{0.000001}} \put(509.82,66.47){\circle*{0.000001}} \put(509.82,67.18){\circle*{0.000001}} \put(510.53,67.88){\circle*{0.000001}} \put(510.53,68.59){\circle*{0.000001}} \put(510.53,69.30){\circle*{0.000001}} \put(510.53,70.00){\circle*{0.000001}} \put(511.24,70.71){\circle*{0.000001}} \put(511.24,71.42){\circle*{0.000001}} \put(511.24,72.12){\circle*{0.000001}} \put(511.24,72.83){\circle*{0.000001}} \put(511.95,73.54){\circle*{0.000001}} \put(511.95,74.25){\circle*{0.000001}} \put(511.95,74.95){\circle*{0.000001}} \put(479.42,42.43){\circle*{0.000001}} \put(480.13,43.13){\circle*{0.000001}} \put(480.83,43.84){\circle*{0.000001}} \put(481.54,44.55){\circle*{0.000001}} \put(482.25,45.25){\circle*{0.000001}} \put(482.95,45.96){\circle*{0.000001}} \put(483.66,46.67){\circle*{0.000001}} \put(484.37,47.38){\circle*{0.000001}} \put(485.08,48.08){\circle*{0.000001}} \put(485.78,48.79){\circle*{0.000001}} \put(486.49,49.50){\circle*{0.000001}} \put(487.20,50.20){\circle*{0.000001}} \put(487.90,50.91){\circle*{0.000001}} \put(488.61,51.62){\circle*{0.000001}} \put(489.32,52.33){\circle*{0.000001}} \put(490.02,53.03){\circle*{0.000001}} \put(490.73,53.74){\circle*{0.000001}} \put(491.44,54.45){\circle*{0.000001}} \put(492.15,55.15){\circle*{0.000001}} \put(492.85,55.86){\circle*{0.000001}} \put(493.56,56.57){\circle*{0.000001}} \put(494.27,57.28){\circle*{0.000001}} \put(494.97,57.98){\circle*{0.000001}} \put(495.68,58.69){\circle*{0.000001}} \put(496.39,59.40){\circle*{0.000001}} \put(497.10,60.10){\circle*{0.000001}} \put(497.80,60.81){\circle*{0.000001}} \put(498.51,61.52){\circle*{0.000001}} \put(499.22,62.23){\circle*{0.000001}} \put(499.92,62.93){\circle*{0.000001}} 
\put(500.63,63.64){\circle*{0.000001}} \put(501.34,64.35){\circle*{0.000001}} \put(502.05,65.05){\circle*{0.000001}} \put(502.75,65.76){\circle*{0.000001}} \put(503.46,66.47){\circle*{0.000001}} \put(504.17,67.18){\circle*{0.000001}} \put(504.87,67.88){\circle*{0.000001}} \put(505.58,68.59){\circle*{0.000001}} \put(506.29,69.30){\circle*{0.000001}} \put(507.00,70.00){\circle*{0.000001}} \put(507.70,70.71){\circle*{0.000001}} \put(508.41,71.42){\circle*{0.000001}} \put(509.12,72.12){\circle*{0.000001}} \put(509.82,72.83){\circle*{0.000001}} \put(510.53,73.54){\circle*{0.000001}} \put(511.24,74.25){\circle*{0.000001}} \put(511.95,74.95){\circle*{0.000001}} \put(479.42,42.43){\circle*{0.000001}} \put(480.13,42.43){\circle*{0.000001}} \put(480.83,43.13){\circle*{0.000001}} \put(481.54,43.13){\circle*{0.000001}} \put(482.25,43.13){\circle*{0.000001}} \put(482.95,43.84){\circle*{0.000001}} \put(483.66,43.84){\circle*{0.000001}} \put(484.37,44.55){\circle*{0.000001}} \put(485.08,44.55){\circle*{0.000001}} \put(485.78,44.55){\circle*{0.000001}} \put(486.49,45.25){\circle*{0.000001}} \put(487.20,45.25){\circle*{0.000001}} \put(487.90,45.25){\circle*{0.000001}} \put(488.61,45.96){\circle*{0.000001}} \put(489.32,45.96){\circle*{0.000001}} \put(490.02,45.96){\circle*{0.000001}} \put(490.73,46.67){\circle*{0.000001}} \put(491.44,46.67){\circle*{0.000001}} \put(492.15,47.38){\circle*{0.000001}} \put(492.85,47.38){\circle*{0.000001}} \put(493.56,47.38){\circle*{0.000001}} \put(494.27,48.08){\circle*{0.000001}} \put(494.97,48.08){\circle*{0.000001}} \put(495.68,48.08){\circle*{0.000001}} \put(496.39,48.79){\circle*{0.000001}} \put(497.10,48.79){\circle*{0.000001}} \put(497.80,49.50){\circle*{0.000001}} \put(498.51,49.50){\circle*{0.000001}} \put(499.22,49.50){\circle*{0.000001}} \put(499.92,50.20){\circle*{0.000001}} \put(500.63,50.20){\circle*{0.000001}} \put(501.34,50.20){\circle*{0.000001}} \put(502.05,50.91){\circle*{0.000001}} \put(502.75,50.91){\circle*{0.000001}} \put(503.46,50.91){\circle*{0.000001}} \put(504.17,51.62){\circle*{0.000001}} \put(504.87,51.62){\circle*{0.000001}} \put(505.58,52.33){\circle*{0.000001}} \put(506.29,52.33){\circle*{0.000001}} \put(507.00,52.33){\circle*{0.000001}} \put(507.70,53.03){\circle*{0.000001}} \put(508.41,53.03){\circle*{0.000001}} \put(509.12,53.03){\circle*{0.000001}} \put(509.82,53.74){\circle*{0.000001}} \put(510.53,53.74){\circle*{0.000001}} \put(511.24,53.74){\circle*{0.000001}} \put(511.95,54.45){\circle*{0.000001}} \put(512.65,54.45){\circle*{0.000001}} \put(513.36,55.15){\circle*{0.000001}} \put(514.07,55.15){\circle*{0.000001}} \put(514.77,55.15){\circle*{0.000001}} \put(515.48,55.86){\circle*{0.000001}} \put(516.19,55.86){\circle*{0.000001}} \put(516.90,55.86){\circle*{0.000001}} \put(517.60,56.57){\circle*{0.000001}} \put(518.31,56.57){\circle*{0.000001}} \put(519.02,57.28){\circle*{0.000001}} \put(519.72,57.28){\circle*{0.000001}} \put(520.43,57.28){\circle*{0.000001}} \put(521.14,57.98){\circle*{0.000001}} \put(521.84,57.98){\circle*{0.000001}} \put(521.84,57.98){\circle*{0.000001}} \put(521.84,58.69){\circle*{0.000001}} \put(521.14,59.40){\circle*{0.000001}} \put(521.14,60.10){\circle*{0.000001}} \put(520.43,60.81){\circle*{0.000001}} \put(520.43,61.52){\circle*{0.000001}} \put(520.43,62.23){\circle*{0.000001}} \put(519.72,62.93){\circle*{0.000001}} \put(519.72,63.64){\circle*{0.000001}} \put(519.02,64.35){\circle*{0.000001}} \put(519.02,65.05){\circle*{0.000001}} \put(519.02,65.76){\circle*{0.000001}} \put(518.31,66.47){\circle*{0.000001}} 
\put(518.31,67.18){\circle*{0.000001}} \put(517.60,67.88){\circle*{0.000001}} \put(517.60,68.59){\circle*{0.000001}} \put(517.60,69.30){\circle*{0.000001}} \put(516.90,70.00){\circle*{0.000001}} \put(516.90,70.71){\circle*{0.000001}} \put(516.19,71.42){\circle*{0.000001}} \put(516.19,72.12){\circle*{0.000001}} \put(516.19,72.83){\circle*{0.000001}} \put(515.48,73.54){\circle*{0.000001}} \put(515.48,74.25){\circle*{0.000001}} \put(514.77,74.95){\circle*{0.000001}} \put(514.77,75.66){\circle*{0.000001}} \put(514.77,76.37){\circle*{0.000001}} \put(514.07,77.07){\circle*{0.000001}} \put(514.07,77.78){\circle*{0.000001}} \put(513.36,78.49){\circle*{0.000001}} \put(513.36,79.20){\circle*{0.000001}} \put(513.36,79.90){\circle*{0.000001}} \put(512.65,80.61){\circle*{0.000001}} \put(512.65,81.32){\circle*{0.000001}} \put(511.95,82.02){\circle*{0.000001}} \put(511.95,82.73){\circle*{0.000001}} \put(511.95,83.44){\circle*{0.000001}} \put(511.24,84.15){\circle*{0.000001}} \put(511.24,84.85){\circle*{0.000001}} \put(510.53,85.56){\circle*{0.000001}} \put(510.53,86.27){\circle*{0.000001}} \put(510.53,86.97){\circle*{0.000001}} \put(509.82,87.68){\circle*{0.000001}} \put(509.82,88.39){\circle*{0.000001}} \put(509.12,89.10){\circle*{0.000001}} \put(509.12,89.80){\circle*{0.000001}} \put(509.12,90.51){\circle*{0.000001}} \put(508.41,91.22){\circle*{0.000001}} \put(508.41,91.92){\circle*{0.000001}} \put(507.70,92.63){\circle*{0.000001}} \put(507.70,93.34){\circle*{0.000001}} \put(507.70,94.05){\circle*{0.000001}} \put(507.00,94.75){\circle*{0.000001}} \put(507.00,95.46){\circle*{0.000001}} \put(506.29,96.17){\circle*{0.000001}} \put(506.29,96.87){\circle*{0.000001}} \put(506.29,97.58){\circle*{0.000001}} \put(505.58,98.29){\circle*{0.000001}} \put(505.58,98.99){\circle*{0.000001}} \put(504.87,99.70){\circle*{0.000001}} \put(504.87,100.41){\circle*{0.000001}} \put(504.87,100.41){\circle*{0.000001}} \put(505.58,99.70){\circle*{0.000001}} \put(506.29,99.70){\circle*{0.000001}} \put(507.00,98.99){\circle*{0.000001}} \put(507.70,98.29){\circle*{0.000001}} \put(508.41,98.29){\circle*{0.000001}} \put(509.12,97.58){\circle*{0.000001}} \put(509.82,96.87){\circle*{0.000001}} \put(510.53,96.87){\circle*{0.000001}} \put(511.24,96.17){\circle*{0.000001}} \put(511.95,95.46){\circle*{0.000001}} \put(512.65,95.46){\circle*{0.000001}} \put(513.36,94.75){\circle*{0.000001}} \put(514.07,94.05){\circle*{0.000001}} \put(514.77,94.05){\circle*{0.000001}} \put(515.48,93.34){\circle*{0.000001}} \put(516.19,92.63){\circle*{0.000001}} \put(516.90,92.63){\circle*{0.000001}} \put(517.60,91.92){\circle*{0.000001}} \put(518.31,91.22){\circle*{0.000001}} \put(519.02,91.22){\circle*{0.000001}} \put(519.72,90.51){\circle*{0.000001}} \put(520.43,89.80){\circle*{0.000001}} \put(521.14,89.80){\circle*{0.000001}} \put(521.84,89.10){\circle*{0.000001}} \put(522.55,88.39){\circle*{0.000001}} \put(523.26,88.39){\circle*{0.000001}} \put(523.97,87.68){\circle*{0.000001}} \put(524.67,87.68){\circle*{0.000001}} \put(525.38,86.97){\circle*{0.000001}} \put(526.09,86.27){\circle*{0.000001}} \put(526.79,86.27){\circle*{0.000001}} \put(527.50,85.56){\circle*{0.000001}} \put(528.21,84.85){\circle*{0.000001}} \put(528.92,84.85){\circle*{0.000001}} \put(529.62,84.15){\circle*{0.000001}} \put(530.33,83.44){\circle*{0.000001}} \put(531.04,83.44){\circle*{0.000001}} \put(531.74,82.73){\circle*{0.000001}} \put(532.45,82.02){\circle*{0.000001}} \put(533.16,82.02){\circle*{0.000001}} \put(533.87,81.32){\circle*{0.000001}} \put(534.57,80.61){\circle*{0.000001}} 
% [Figure residue removed: auto-generated picture-environment plot points and box scaffolding.
%  Recoverable panel label: $\rho: 0.01$]
\put(-79.20,33.23){\circle*{0.000001}} \put(-78.49,32.53){\circle*{0.000001}} \put(-77.78,32.53){\circle*{0.000001}} \put(-77.07,32.53){\circle*{0.000001}} \put(-76.37,32.53){\circle*{0.000001}} \put(-75.66,31.82){\circle*{0.000001}} \put(-74.95,31.82){\circle*{0.000001}} \put(-106.77,40.31){\circle*{0.000001}} \put(-106.07,40.31){\circle*{0.000001}} \put(-105.36,40.31){\circle*{0.000001}} \put(-104.65,39.60){\circle*{0.000001}} \put(-103.94,39.60){\circle*{0.000001}} \put(-103.24,39.60){\circle*{0.000001}} \put(-102.53,39.60){\circle*{0.000001}} \put(-101.82,39.60){\circle*{0.000001}} \put(-101.12,39.60){\circle*{0.000001}} \put(-100.41,38.89){\circle*{0.000001}} \put(-99.70,38.89){\circle*{0.000001}} \put(-98.99,38.89){\circle*{0.000001}} \put(-98.29,38.89){\circle*{0.000001}} \put(-97.58,38.89){\circle*{0.000001}} \put(-96.87,38.18){\circle*{0.000001}} \put(-96.17,38.18){\circle*{0.000001}} \put(-95.46,38.18){\circle*{0.000001}} \put(-94.75,38.18){\circle*{0.000001}} \put(-94.05,38.18){\circle*{0.000001}} \put(-93.34,37.48){\circle*{0.000001}} \put(-92.63,37.48){\circle*{0.000001}} \put(-91.92,37.48){\circle*{0.000001}} \put(-91.22,37.48){\circle*{0.000001}} \put(-90.51,37.48){\circle*{0.000001}} \put(-89.80,37.48){\circle*{0.000001}} \put(-89.10,36.77){\circle*{0.000001}} \put(-88.39,36.77){\circle*{0.000001}} \put(-87.68,36.77){\circle*{0.000001}} \put(-86.97,36.77){\circle*{0.000001}} \put(-86.27,36.77){\circle*{0.000001}} \put(-85.56,36.06){\circle*{0.000001}} \put(-84.85,36.06){\circle*{0.000001}} \put(-84.15,36.06){\circle*{0.000001}} \put(-83.44,36.06){\circle*{0.000001}} \put(-82.73,36.06){\circle*{0.000001}} \put(-82.02,35.36){\circle*{0.000001}} \put(-81.32,35.36){\circle*{0.000001}} \put(-80.61,35.36){\circle*{0.000001}} \put(-79.90,35.36){\circle*{0.000001}} \put(-79.20,35.36){\circle*{0.000001}} \put(-78.49,35.36){\circle*{0.000001}} \put(-77.78,34.65){\circle*{0.000001}} \put(-77.07,34.65){\circle*{0.000001}} \put(-76.37,34.65){\circle*{0.000001}} \put(-75.66,34.65){\circle*{0.000001}} \put(-74.95,34.65){\circle*{0.000001}} \put(-74.25,33.94){\circle*{0.000001}} \put(-73.54,33.94){\circle*{0.000001}} \put(-72.83,33.94){\circle*{0.000001}} \put(-110.31,27.58){\circle*{0.000001}} \put(-109.60,27.58){\circle*{0.000001}} \put(-108.89,27.58){\circle*{0.000001}} \put(-108.19,28.28){\circle*{0.000001}} \put(-107.48,28.28){\circle*{0.000001}} \put(-106.77,28.28){\circle*{0.000001}} \put(-106.07,28.28){\circle*{0.000001}} \put(-105.36,28.28){\circle*{0.000001}} \put(-104.65,28.28){\circle*{0.000001}} \put(-103.94,28.99){\circle*{0.000001}} \put(-103.24,28.99){\circle*{0.000001}} \put(-102.53,28.99){\circle*{0.000001}} \put(-101.82,28.99){\circle*{0.000001}} \put(-101.12,28.99){\circle*{0.000001}} \put(-100.41,28.99){\circle*{0.000001}} \put(-99.70,29.70){\circle*{0.000001}} \put(-98.99,29.70){\circle*{0.000001}} \put(-98.29,29.70){\circle*{0.000001}} \put(-97.58,29.70){\circle*{0.000001}} \put(-96.87,29.70){\circle*{0.000001}} \put(-96.17,29.70){\circle*{0.000001}} \put(-95.46,30.41){\circle*{0.000001}} \put(-94.75,30.41){\circle*{0.000001}} \put(-94.05,30.41){\circle*{0.000001}} \put(-93.34,30.41){\circle*{0.000001}} \put(-92.63,30.41){\circle*{0.000001}} \put(-91.92,30.41){\circle*{0.000001}} \put(-91.22,31.11){\circle*{0.000001}} \put(-90.51,31.11){\circle*{0.000001}} \put(-89.80,31.11){\circle*{0.000001}} \put(-89.10,31.11){\circle*{0.000001}} \put(-88.39,31.11){\circle*{0.000001}} \put(-87.68,31.11){\circle*{0.000001}} \put(-86.97,31.82){\circle*{0.000001}} 
\put(-86.27,31.82){\circle*{0.000001}} \put(-85.56,31.82){\circle*{0.000001}} \put(-84.85,31.82){\circle*{0.000001}} \put(-84.15,31.82){\circle*{0.000001}} \put(-83.44,31.82){\circle*{0.000001}} \put(-82.73,32.53){\circle*{0.000001}} \put(-82.02,32.53){\circle*{0.000001}} \put(-81.32,32.53){\circle*{0.000001}} \put(-80.61,32.53){\circle*{0.000001}} \put(-79.90,32.53){\circle*{0.000001}} \put(-79.20,32.53){\circle*{0.000001}} \put(-78.49,33.23){\circle*{0.000001}} \put(-77.78,33.23){\circle*{0.000001}} \put(-77.07,33.23){\circle*{0.000001}} \put(-76.37,33.23){\circle*{0.000001}} \put(-75.66,33.23){\circle*{0.000001}} \put(-74.95,33.23){\circle*{0.000001}} \put(-74.25,33.94){\circle*{0.000001}} \put(-73.54,33.94){\circle*{0.000001}} \put(-72.83,33.94){\circle*{0.000001}} \put(-110.31,27.58){\circle*{0.000001}} \put(-109.60,28.28){\circle*{0.000001}} \put(-109.60,28.99){\circle*{0.000001}} \put(-108.89,29.70){\circle*{0.000001}} \put(-108.89,30.41){\circle*{0.000001}} \put(-108.19,31.11){\circle*{0.000001}} \put(-108.19,31.82){\circle*{0.000001}} \put(-107.48,32.53){\circle*{0.000001}} \put(-107.48,33.23){\circle*{0.000001}} \put(-106.77,33.94){\circle*{0.000001}} \put(-106.77,34.65){\circle*{0.000001}} \put(-106.07,35.36){\circle*{0.000001}} \put(-106.07,36.06){\circle*{0.000001}} \put(-105.36,36.77){\circle*{0.000001}} \put(-105.36,37.48){\circle*{0.000001}} \put(-104.65,38.18){\circle*{0.000001}} \put(-104.65,38.89){\circle*{0.000001}} \put(-103.94,39.60){\circle*{0.000001}} \put(-103.94,40.31){\circle*{0.000001}} \put(-103.24,41.01){\circle*{0.000001}} \put(-103.24,41.72){\circle*{0.000001}} \put(-102.53,42.43){\circle*{0.000001}} \put(-102.53,43.13){\circle*{0.000001}} \put(-101.82,43.84){\circle*{0.000001}} \put(-101.82,44.55){\circle*{0.000001}} \put(-101.12,45.25){\circle*{0.000001}} \put(-100.41,45.96){\circle*{0.000001}} \put(-100.41,46.67){\circle*{0.000001}} \put(-99.70,47.38){\circle*{0.000001}} \put(-99.70,48.08){\circle*{0.000001}} \put(-98.99,48.79){\circle*{0.000001}} \put(-98.99,49.50){\circle*{0.000001}} \put(-98.29,50.20){\circle*{0.000001}} \put(-98.29,50.91){\circle*{0.000001}} \put(-97.58,51.62){\circle*{0.000001}} \put(-97.58,52.33){\circle*{0.000001}} \put(-96.87,53.03){\circle*{0.000001}} \put(-96.87,53.74){\circle*{0.000001}} \put(-96.17,54.45){\circle*{0.000001}} \put(-96.17,55.15){\circle*{0.000001}} \put(-95.46,55.86){\circle*{0.000001}} \put(-95.46,56.57){\circle*{0.000001}} \put(-94.75,57.28){\circle*{0.000001}} \put(-94.75,57.98){\circle*{0.000001}} \put(-94.05,58.69){\circle*{0.000001}} \put(-94.05,59.40){\circle*{0.000001}} \put(-93.34,60.10){\circle*{0.000001}} \put(-93.34,60.81){\circle*{0.000001}} \put(-92.63,61.52){\circle*{0.000001}} \put(-92.63,62.23){\circle*{0.000001}} \put(-91.92,62.93){\circle*{0.000001}} \put(-91.92,62.93){\circle*{0.000001}} \put(-91.22,62.93){\circle*{0.000001}} \put(-90.51,62.93){\circle*{0.000001}} \put(-90.51,62.93){\circle*{0.000001}} \put(-90.51,63.64){\circle*{0.000001}} \put(-89.80,64.35){\circle*{0.000001}} \put(-89.80,64.35){\circle*{0.000001}} \put(-89.80,65.05){\circle*{0.000001}} \put(-90.51,65.76){\circle*{0.000001}} \put(-90.51,65.76){\circle*{0.000001}} \put(-89.80,65.05){\circle*{0.000001}} \put(-89.10,65.05){\circle*{0.000001}} \put(-88.39,64.35){\circle*{0.000001}} \put(-87.68,64.35){\circle*{0.000001}} \put(-86.97,63.64){\circle*{0.000001}} \put(-86.97,61.52){\circle*{0.000001}} \put(-86.97,62.23){\circle*{0.000001}} \put(-86.97,62.93){\circle*{0.000001}} \put(-86.97,63.64){\circle*{0.000001}} 
\put(-86.97,61.52){\circle*{0.000001}} \put(-86.27,61.52){\circle*{0.000001}} \put(-85.56,60.81){\circle*{0.000001}} \put(-89.80,59.40){\circle*{0.000001}} \put(-89.10,59.40){\circle*{0.000001}} \put(-88.39,60.10){\circle*{0.000001}} \put(-87.68,60.10){\circle*{0.000001}} \put(-86.97,60.10){\circle*{0.000001}} \put(-86.27,60.81){\circle*{0.000001}} \put(-85.56,60.81){\circle*{0.000001}} \put(-89.80,59.40){\circle*{0.000001}} \put(-84.85,53.74){\circle*{0.000001}} \put(-85.56,54.45){\circle*{0.000001}} \put(-86.27,55.15){\circle*{0.000001}} \put(-86.97,55.86){\circle*{0.000001}} \put(-86.97,56.57){\circle*{0.000001}} \put(-87.68,57.28){\circle*{0.000001}} \put(-88.39,57.98){\circle*{0.000001}} \put(-89.10,58.69){\circle*{0.000001}} \put(-89.80,59.40){\circle*{0.000001}} \put(-93.34,60.81){\circle*{0.000001}} \put(-92.63,60.10){\circle*{0.000001}} \put(-91.92,59.40){\circle*{0.000001}} \put(-91.22,59.40){\circle*{0.000001}} \put(-90.51,58.69){\circle*{0.000001}} \put(-89.80,57.98){\circle*{0.000001}} \put(-89.10,57.28){\circle*{0.000001}} \put(-88.39,56.57){\circle*{0.000001}} \put(-87.68,55.86){\circle*{0.000001}} \put(-86.97,55.86){\circle*{0.000001}} \put(-86.27,55.15){\circle*{0.000001}} \put(-85.56,54.45){\circle*{0.000001}} \put(-84.85,53.74){\circle*{0.000001}} \put(-93.34,60.81){\circle*{0.000001}} \put(-92.63,60.10){\circle*{0.000001}} \put(-91.92,59.40){\circle*{0.000001}} \put(-91.92,58.69){\circle*{0.000001}} \put(-91.92,59.40){\circle*{0.000001}} \put(-91.92,58.69){\circle*{0.000001}} \put(-91.22,58.69){\circle*{0.000001}} \put(-90.51,58.69){\circle*{0.000001}} \put(-89.80,58.69){\circle*{0.000001}} \put(-89.10,58.69){\circle*{0.000001}} \put(-88.39,58.69){\circle*{0.000001}} \put(-89.80,56.57){\circle*{0.000001}} \put(-89.10,57.28){\circle*{0.000001}} \put(-89.10,57.98){\circle*{0.000001}} \put(-88.39,58.69){\circle*{0.000001}} \put(-89.80,56.57){\circle*{0.000001}} \put(-89.10,56.57){\circle*{0.000001}} \put(-88.39,55.86){\circle*{0.000001}} \put(-88.39,55.86){\circle*{0.000001}} \put(-88.39,56.57){\circle*{0.000001}} \put(-87.68,57.28){\circle*{0.000001}} \put(-87.68,57.98){\circle*{0.000001}} \put(-87.68,57.98){\circle*{0.000001}} \put(-86.97,53.74){\circle*{0.000001}} \put(-86.97,54.45){\circle*{0.000001}} \put(-86.97,55.15){\circle*{0.000001}} \put(-86.97,55.86){\circle*{0.000001}} \put(-87.68,56.57){\circle*{0.000001}} \put(-87.68,57.28){\circle*{0.000001}} \put(-87.68,57.98){\circle*{0.000001}} \put(-86.97,53.74){\circle*{0.000001}} \put(-86.27,53.74){\circle*{0.000001}} \put(-85.56,53.74){\circle*{0.000001}} \put(-85.56,53.74){\circle*{0.000001}} \put(-84.85,53.74){\circle*{0.000001}} \put(-84.15,53.74){\circle*{0.000001}} \put(-83.44,53.74){\circle*{0.000001}} \put(-82.73,53.74){\circle*{0.000001}} \put(-82.02,53.74){\circle*{0.000001}} \put(-81.32,53.74){\circle*{0.000001}} \put(-80.61,53.74){\circle*{0.000001}} \put(-79.90,45.96){\circle*{0.000001}} \put(-79.90,46.67){\circle*{0.000001}} \put(-79.90,47.38){\circle*{0.000001}} \put(-79.90,48.08){\circle*{0.000001}} \put(-79.90,48.79){\circle*{0.000001}} \put(-79.90,49.50){\circle*{0.000001}} \put(-80.61,50.20){\circle*{0.000001}} \put(-80.61,50.91){\circle*{0.000001}} \put(-80.61,51.62){\circle*{0.000001}} \put(-80.61,52.33){\circle*{0.000001}} \put(-80.61,53.03){\circle*{0.000001}} \put(-80.61,53.74){\circle*{0.000001}} \put(-79.90,45.96){\circle*{0.000001}} \put(-79.20,45.96){\circle*{0.000001}} \put(-78.49,46.67){\circle*{0.000001}} \put(-77.78,46.67){\circle*{0.000001}} \put(-77.07,47.38){\circle*{0.000001}} 
\put(-76.37,47.38){\circle*{0.000001}} \put(-75.66,47.38){\circle*{0.000001}} \put(-74.95,48.08){\circle*{0.000001}} \put(-74.25,48.08){\circle*{0.000001}} \put(-73.54,48.79){\circle*{0.000001}} \put(-72.83,48.79){\circle*{0.000001}} \put(-72.12,49.50){\circle*{0.000001}} \put(-71.42,49.50){\circle*{0.000001}} \put(-71.42,49.50){\circle*{0.000001}} \put(-70.71,48.79){\circle*{0.000001}} \put(-70.00,48.79){\circle*{0.000001}} \put(-69.30,48.08){\circle*{0.000001}} \put(-68.59,47.38){\circle*{0.000001}} \put(-67.88,46.67){\circle*{0.000001}} \put(-67.18,46.67){\circle*{0.000001}} \put(-66.47,45.96){\circle*{0.000001}} \put(-65.76,45.25){\circle*{0.000001}} \put(-65.05,44.55){\circle*{0.000001}} \put(-64.35,44.55){\circle*{0.000001}} \put(-63.64,43.84){\circle*{0.000001}} \put(-62.93,43.13){\circle*{0.000001}} \put(-62.23,42.43){\circle*{0.000001}} \put(-61.52,42.43){\circle*{0.000001}} \put(-60.81,41.72){\circle*{0.000001}} \put(-74.95,38.89){\circle*{0.000001}} \put(-74.25,38.89){\circle*{0.000001}} \put(-73.54,38.89){\circle*{0.000001}} \put(-72.83,39.60){\circle*{0.000001}} \put(-72.12,39.60){\circle*{0.000001}} \put(-71.42,39.60){\circle*{0.000001}} \put(-70.71,39.60){\circle*{0.000001}} \put(-70.00,39.60){\circle*{0.000001}} \put(-69.30,40.31){\circle*{0.000001}} \put(-68.59,40.31){\circle*{0.000001}} \put(-67.88,40.31){\circle*{0.000001}} \put(-67.18,40.31){\circle*{0.000001}} \put(-66.47,40.31){\circle*{0.000001}} \put(-65.76,41.01){\circle*{0.000001}} \put(-65.05,41.01){\circle*{0.000001}} \put(-64.35,41.01){\circle*{0.000001}} \put(-63.64,41.01){\circle*{0.000001}} \put(-62.93,41.01){\circle*{0.000001}} \put(-62.23,41.72){\circle*{0.000001}} \put(-61.52,41.72){\circle*{0.000001}} \put(-60.81,41.72){\circle*{0.000001}} \put(-74.95,38.89){\circle*{0.000001}} \put(-74.95,39.60){\circle*{0.000001}} \put(-75.66,40.31){\circle*{0.000001}} \put(-75.66,41.01){\circle*{0.000001}} \put(-76.37,41.72){\circle*{0.000001}} \put(-76.37,42.43){\circle*{0.000001}} \put(-77.07,43.13){\circle*{0.000001}} \put(-77.07,43.84){\circle*{0.000001}} \put(-77.78,44.55){\circle*{0.000001}} \put(-77.78,45.25){\circle*{0.000001}} \put(-78.49,45.96){\circle*{0.000001}} \put(-78.49,46.67){\circle*{0.000001}} \put(-79.20,47.38){\circle*{0.000001}} \put(-79.20,48.08){\circle*{0.000001}} \put(-79.90,48.79){\circle*{0.000001}} \put(-79.90,49.50){\circle*{0.000001}} \put(-80.61,50.20){\circle*{0.000001}} \put(-80.61,50.91){\circle*{0.000001}} \put(-81.32,51.62){\circle*{0.000001}} \put(-81.32,52.33){\circle*{0.000001}} \put(-82.02,53.03){\circle*{0.000001}} \put(-82.02,53.74){\circle*{0.000001}} \put(-82.73,54.45){\circle*{0.000001}} \put(-88.39,35.36){\circle*{0.000001}} \put(-88.39,36.06){\circle*{0.000001}} \put(-87.68,36.77){\circle*{0.000001}} \put(-87.68,37.48){\circle*{0.000001}} \put(-87.68,38.18){\circle*{0.000001}} \put(-87.68,38.89){\circle*{0.000001}} \put(-86.97,39.60){\circle*{0.000001}} \put(-86.97,40.31){\circle*{0.000001}} \put(-86.97,41.01){\circle*{0.000001}} \put(-86.27,41.72){\circle*{0.000001}} \put(-86.27,42.43){\circle*{0.000001}} \put(-86.27,43.13){\circle*{0.000001}} \put(-85.56,43.84){\circle*{0.000001}} \put(-85.56,44.55){\circle*{0.000001}} \put(-85.56,45.25){\circle*{0.000001}} \put(-85.56,45.96){\circle*{0.000001}} \put(-84.85,46.67){\circle*{0.000001}} \put(-84.85,47.38){\circle*{0.000001}} \put(-84.85,48.08){\circle*{0.000001}} \put(-84.15,48.79){\circle*{0.000001}} \put(-84.15,49.50){\circle*{0.000001}} \put(-84.15,50.20){\circle*{0.000001}} \put(-83.44,50.91){\circle*{0.000001}} 
\put(-83.44,51.62){\circle*{0.000001}} \put(-83.44,52.33){\circle*{0.000001}} \put(-83.44,53.03){\circle*{0.000001}} \put(-82.73,53.74){\circle*{0.000001}} \put(-82.73,54.45){\circle*{0.000001}} \put(-88.39,35.36){\circle*{0.000001}} \put(-87.68,34.65){\circle*{0.000001}} \put(-86.97,34.65){\circle*{0.000001}} \put(-86.27,33.94){\circle*{0.000001}} \put(-85.56,33.23){\circle*{0.000001}} \put(-84.85,33.23){\circle*{0.000001}} \put(-84.15,32.53){\circle*{0.000001}} \put(-83.44,32.53){\circle*{0.000001}} \put(-82.73,31.82){\circle*{0.000001}} \put(-82.02,31.11){\circle*{0.000001}} \put(-81.32,31.11){\circle*{0.000001}} \put(-80.61,30.41){\circle*{0.000001}} \put(-79.90,29.70){\circle*{0.000001}} \put(-79.20,29.70){\circle*{0.000001}} \put(-78.49,28.99){\circle*{0.000001}} \put(-77.78,28.99){\circle*{0.000001}} \put(-77.07,28.28){\circle*{0.000001}} \put(-76.37,27.58){\circle*{0.000001}} \put(-75.66,27.58){\circle*{0.000001}} \put(-74.95,26.87){\circle*{0.000001}} \put(-74.25,26.16){\circle*{0.000001}} \put(-73.54,26.16){\circle*{0.000001}} \put(-72.83,25.46){\circle*{0.000001}} \put(-72.12,25.46){\circle*{0.000001}} \put(-71.42,24.75){\circle*{0.000001}} \put(-70.71,24.04){\circle*{0.000001}} \put(-70.00,24.04){\circle*{0.000001}} \put(-69.30,23.33){\circle*{0.000001}} \put(-87.68,39.60){\circle*{0.000001}} \put(-86.97,38.89){\circle*{0.000001}} \put(-86.27,38.18){\circle*{0.000001}} \put(-85.56,37.48){\circle*{0.000001}} \put(-84.85,36.77){\circle*{0.000001}} \put(-84.15,36.77){\circle*{0.000001}} \put(-83.44,36.06){\circle*{0.000001}} \put(-82.73,35.36){\circle*{0.000001}} \put(-82.02,34.65){\circle*{0.000001}} \put(-81.32,33.94){\circle*{0.000001}} \put(-80.61,33.23){\circle*{0.000001}} \put(-79.90,32.53){\circle*{0.000001}} \put(-79.20,31.82){\circle*{0.000001}} \put(-78.49,31.82){\circle*{0.000001}} \put(-77.78,31.11){\circle*{0.000001}} \put(-77.07,30.41){\circle*{0.000001}} \put(-76.37,29.70){\circle*{0.000001}} \put(-75.66,28.99){\circle*{0.000001}} \put(-74.95,28.28){\circle*{0.000001}} \put(-74.25,27.58){\circle*{0.000001}} \put(-73.54,26.87){\circle*{0.000001}} \put(-72.83,26.16){\circle*{0.000001}} \put(-72.12,26.16){\circle*{0.000001}} \put(-71.42,25.46){\circle*{0.000001}} \put(-70.71,24.75){\circle*{0.000001}} \put(-70.00,24.04){\circle*{0.000001}} \put(-69.30,23.33){\circle*{0.000001}} \put(-85.56,13.44){\circle*{0.000001}} \put(-85.56,14.14){\circle*{0.000001}} \put(-85.56,14.85){\circle*{0.000001}} \put(-85.56,15.56){\circle*{0.000001}} \put(-85.56,16.26){\circle*{0.000001}} \put(-85.56,16.97){\circle*{0.000001}} \put(-85.56,17.68){\circle*{0.000001}} \put(-86.27,18.38){\circle*{0.000001}} \put(-86.27,19.09){\circle*{0.000001}} \put(-86.27,19.80){\circle*{0.000001}} \put(-86.27,20.51){\circle*{0.000001}} \put(-86.27,21.21){\circle*{0.000001}} \put(-86.27,21.92){\circle*{0.000001}} \put(-86.27,22.63){\circle*{0.000001}} \put(-86.27,23.33){\circle*{0.000001}} \put(-86.27,24.04){\circle*{0.000001}} \put(-86.27,24.75){\circle*{0.000001}} \put(-86.27,25.46){\circle*{0.000001}} \put(-86.27,26.16){\circle*{0.000001}} \put(-86.97,26.87){\circle*{0.000001}} \put(-86.97,27.58){\circle*{0.000001}} \put(-86.97,28.28){\circle*{0.000001}} \put(-86.97,28.99){\circle*{0.000001}} \put(-86.97,29.70){\circle*{0.000001}} \put(-86.97,30.41){\circle*{0.000001}} \put(-86.97,31.11){\circle*{0.000001}} \put(-86.97,31.82){\circle*{0.000001}} \put(-86.97,32.53){\circle*{0.000001}} \put(-86.97,33.23){\circle*{0.000001}} \put(-86.97,33.94){\circle*{0.000001}} \put(-86.97,34.65){\circle*{0.000001}} 
\put(-87.68,35.36){\circle*{0.000001}} \put(-87.68,36.06){\circle*{0.000001}} \put(-87.68,36.77){\circle*{0.000001}} \put(-87.68,37.48){\circle*{0.000001}} \put(-87.68,38.18){\circle*{0.000001}} \put(-87.68,38.89){\circle*{0.000001}} \put(-87.68,39.60){\circle*{0.000001}} \put(-85.56,13.44){\circle*{0.000001}} \put(-84.85,13.44){\circle*{0.000001}} \put(-84.15,14.14){\circle*{0.000001}} \put(-83.44,14.14){\circle*{0.000001}} \put(-82.73,14.14){\circle*{0.000001}} \put(-82.02,14.85){\circle*{0.000001}} \put(-81.32,14.85){\circle*{0.000001}} \put(-80.61,15.56){\circle*{0.000001}} \put(-79.90,15.56){\circle*{0.000001}} \put(-79.20,15.56){\circle*{0.000001}} \put(-78.49,16.26){\circle*{0.000001}} \put(-77.78,16.26){\circle*{0.000001}} \put(-77.07,16.26){\circle*{0.000001}} \put(-76.37,16.97){\circle*{0.000001}} \put(-75.66,16.97){\circle*{0.000001}} \put(-74.95,16.97){\circle*{0.000001}} \put(-74.25,17.68){\circle*{0.000001}} \put(-73.54,17.68){\circle*{0.000001}} \put(-72.83,17.68){\circle*{0.000001}} \put(-72.12,18.38){\circle*{0.000001}} \put(-71.42,18.38){\circle*{0.000001}} \put(-70.71,19.09){\circle*{0.000001}} \put(-70.00,19.09){\circle*{0.000001}} \put(-69.30,19.09){\circle*{0.000001}} \put(-68.59,19.80){\circle*{0.000001}} \put(-67.88,19.80){\circle*{0.000001}} \put(-67.18,19.80){\circle*{0.000001}} \put(-66.47,20.51){\circle*{0.000001}} \put(-65.76,20.51){\circle*{0.000001}} \put(-65.05,20.51){\circle*{0.000001}} \put(-64.35,21.21){\circle*{0.000001}} \put(-63.64,21.21){\circle*{0.000001}} \put(-62.93,21.21){\circle*{0.000001}} \put(-62.23,21.92){\circle*{0.000001}} \put(-61.52,21.92){\circle*{0.000001}} \put(-60.81,22.63){\circle*{0.000001}} \put(-60.10,22.63){\circle*{0.000001}} \put(-59.40,22.63){\circle*{0.000001}} \put(-58.69,23.33){\circle*{0.000001}} \put(-57.98,23.33){\circle*{0.000001}} \put(-54.45,-5.66){\circle*{0.000001}} \put(-54.45,-4.95){\circle*{0.000001}} \put(-54.45,-4.24){\circle*{0.000001}} \put(-54.45,-3.54){\circle*{0.000001}} \put(-54.45,-2.83){\circle*{0.000001}} \put(-55.15,-2.12){\circle*{0.000001}} \put(-55.15,-1.41){\circle*{0.000001}} \put(-55.15,-0.71){\circle*{0.000001}} \put(-55.15, 0.00){\circle*{0.000001}} \put(-55.15, 0.71){\circle*{0.000001}} \put(-55.15, 1.41){\circle*{0.000001}} \put(-55.15, 2.12){\circle*{0.000001}} \put(-55.15, 2.83){\circle*{0.000001}} \put(-55.86, 3.54){\circle*{0.000001}} \put(-55.86, 4.24){\circle*{0.000001}} \put(-55.86, 4.95){\circle*{0.000001}} \put(-55.86, 5.66){\circle*{0.000001}} \put(-55.86, 6.36){\circle*{0.000001}} \put(-55.86, 7.07){\circle*{0.000001}} \put(-55.86, 7.78){\circle*{0.000001}} \put(-55.86, 8.49){\circle*{0.000001}} \put(-56.57, 9.19){\circle*{0.000001}} \put(-56.57, 9.90){\circle*{0.000001}} \put(-56.57,10.61){\circle*{0.000001}} \put(-56.57,11.31){\circle*{0.000001}} \put(-56.57,12.02){\circle*{0.000001}} \put(-56.57,12.73){\circle*{0.000001}} \put(-56.57,13.44){\circle*{0.000001}} \put(-56.57,14.14){\circle*{0.000001}} \put(-57.28,14.85){\circle*{0.000001}} \put(-57.28,15.56){\circle*{0.000001}} \put(-57.28,16.26){\circle*{0.000001}} \put(-57.28,16.97){\circle*{0.000001}} \put(-57.28,17.68){\circle*{0.000001}} \put(-57.28,18.38){\circle*{0.000001}} \put(-57.28,19.09){\circle*{0.000001}} \put(-57.28,19.80){\circle*{0.000001}} \put(-57.98,20.51){\circle*{0.000001}} \put(-57.98,21.21){\circle*{0.000001}} \put(-57.98,21.92){\circle*{0.000001}} \put(-57.98,22.63){\circle*{0.000001}} \put(-57.98,23.33){\circle*{0.000001}} \put(-54.45,-5.66){\circle*{0.000001}} \put(-53.74,-5.66){\circle*{0.000001}} 
\put(-53.03,-6.36){\circle*{0.000001}} \put(-52.33,-6.36){\circle*{0.000001}} \put(-51.62,-7.07){\circle*{0.000001}} \put(-50.91,-7.07){\circle*{0.000001}} \put(-50.20,-7.78){\circle*{0.000001}} \put(-49.50,-7.78){\circle*{0.000001}} \put(-48.79,-7.78){\circle*{0.000001}} \put(-48.08,-8.49){\circle*{0.000001}} \put(-47.38,-8.49){\circle*{0.000001}} \put(-46.67,-9.19){\circle*{0.000001}} \put(-45.96,-9.19){\circle*{0.000001}} \put(-45.25,-9.90){\circle*{0.000001}} \put(-44.55,-9.90){\circle*{0.000001}} \put(-43.84,-9.90){\circle*{0.000001}} \put(-43.13,-10.61){\circle*{0.000001}} \put(-42.43,-10.61){\circle*{0.000001}} \put(-41.72,-11.31){\circle*{0.000001}} \put(-41.01,-11.31){\circle*{0.000001}} \put(-40.31,-12.02){\circle*{0.000001}} \put(-39.60,-12.02){\circle*{0.000001}} \put(-38.89,-12.02){\circle*{0.000001}} \put(-38.18,-12.73){\circle*{0.000001}} \put(-37.48,-12.73){\circle*{0.000001}} \put(-36.77,-13.44){\circle*{0.000001}} \put(-36.06,-13.44){\circle*{0.000001}} \put(-35.36,-14.14){\circle*{0.000001}} \put(-34.65,-14.14){\circle*{0.000001}} \put(-33.94,-14.14){\circle*{0.000001}} \put(-33.23,-14.85){\circle*{0.000001}} \put(-32.53,-14.85){\circle*{0.000001}} \put(-31.82,-15.56){\circle*{0.000001}} \put(-31.11,-15.56){\circle*{0.000001}} \put(-30.41,-16.26){\circle*{0.000001}} \put(-29.70,-16.26){\circle*{0.000001}} \put(-28.99,-16.26){\circle*{0.000001}} \put(-28.28,-16.97){\circle*{0.000001}} \put(-27.58,-16.97){\circle*{0.000001}} \put(-26.87,-17.68){\circle*{0.000001}} \put(-26.16,-17.68){\circle*{0.000001}} \put(-25.46,-18.38){\circle*{0.000001}} \put(-24.75,-18.38){\circle*{0.000001}} \put(-60.81,-22.63){\circle*{0.000001}} \put(-60.10,-22.63){\circle*{0.000001}} \put(-59.40,-22.63){\circle*{0.000001}} \put(-58.69,-22.63){\circle*{0.000001}} \put(-57.98,-22.63){\circle*{0.000001}} \put(-57.28,-21.92){\circle*{0.000001}} \put(-56.57,-21.92){\circle*{0.000001}} \put(-55.86,-21.92){\circle*{0.000001}} \put(-55.15,-21.92){\circle*{0.000001}} \put(-54.45,-21.92){\circle*{0.000001}} \put(-53.74,-21.92){\circle*{0.000001}} \put(-53.03,-21.92){\circle*{0.000001}} \put(-52.33,-21.92){\circle*{0.000001}} \put(-51.62,-21.21){\circle*{0.000001}} \put(-50.91,-21.21){\circle*{0.000001}} \put(-50.20,-21.21){\circle*{0.000001}} \put(-49.50,-21.21){\circle*{0.000001}} \put(-48.79,-21.21){\circle*{0.000001}} \put(-48.08,-21.21){\circle*{0.000001}} \put(-47.38,-21.21){\circle*{0.000001}} \put(-46.67,-21.21){\circle*{0.000001}} \put(-45.96,-21.21){\circle*{0.000001}} \put(-45.25,-20.51){\circle*{0.000001}} \put(-44.55,-20.51){\circle*{0.000001}} \put(-43.84,-20.51){\circle*{0.000001}} \put(-43.13,-20.51){\circle*{0.000001}} \put(-42.43,-20.51){\circle*{0.000001}} \put(-41.72,-20.51){\circle*{0.000001}} \put(-41.01,-20.51){\circle*{0.000001}} \put(-40.31,-20.51){\circle*{0.000001}} \put(-39.60,-19.80){\circle*{0.000001}} \put(-38.89,-19.80){\circle*{0.000001}} \put(-38.18,-19.80){\circle*{0.000001}} \put(-37.48,-19.80){\circle*{0.000001}} \put(-36.77,-19.80){\circle*{0.000001}} \put(-36.06,-19.80){\circle*{0.000001}} \put(-35.36,-19.80){\circle*{0.000001}} \put(-34.65,-19.80){\circle*{0.000001}} \put(-33.94,-19.80){\circle*{0.000001}} \put(-33.23,-19.09){\circle*{0.000001}} \put(-32.53,-19.09){\circle*{0.000001}} \put(-31.82,-19.09){\circle*{0.000001}} \put(-31.11,-19.09){\circle*{0.000001}} \put(-30.41,-19.09){\circle*{0.000001}} \put(-29.70,-19.09){\circle*{0.000001}} \put(-28.99,-19.09){\circle*{0.000001}} \put(-28.28,-19.09){\circle*{0.000001}} \put(-27.58,-18.38){\circle*{0.000001}} 
\put(-26.87,-18.38){\circle*{0.000001}} \put(-26.16,-18.38){\circle*{0.000001}} \put(-25.46,-18.38){\circle*{0.000001}} \put(-24.75,-18.38){\circle*{0.000001}} \put(-60.81,-22.63){\circle*{0.000001}} \put(-60.81,-21.92){\circle*{0.000001}} \put(-60.10,-21.21){\circle*{0.000001}} \put(-60.10,-20.51){\circle*{0.000001}} \put(-60.10,-19.80){\circle*{0.000001}} \put(-59.40,-19.09){\circle*{0.000001}} \put(-59.40,-18.38){\circle*{0.000001}} \put(-59.40,-17.68){\circle*{0.000001}} \put(-58.69,-16.97){\circle*{0.000001}} \put(-58.69,-16.26){\circle*{0.000001}} \put(-58.69,-15.56){\circle*{0.000001}} \put(-58.69,-14.85){\circle*{0.000001}} \put(-57.98,-14.14){\circle*{0.000001}} \put(-57.98,-13.44){\circle*{0.000001}} \put(-57.98,-12.73){\circle*{0.000001}} \put(-57.28,-12.02){\circle*{0.000001}} \put(-57.28,-11.31){\circle*{0.000001}} \put(-57.28,-10.61){\circle*{0.000001}} \put(-56.57,-9.90){\circle*{0.000001}} \put(-56.57,-9.19){\circle*{0.000001}} \put(-56.57,-8.49){\circle*{0.000001}} \put(-55.86,-7.78){\circle*{0.000001}} \put(-55.86,-7.07){\circle*{0.000001}} \put(-55.86,-6.36){\circle*{0.000001}} \put(-55.15,-5.66){\circle*{0.000001}} \put(-55.15,-4.95){\circle*{0.000001}} \put(-55.15,-4.24){\circle*{0.000001}} \put(-55.15,-3.54){\circle*{0.000001}} \put(-54.45,-2.83){\circle*{0.000001}} \put(-54.45,-2.12){\circle*{0.000001}} \put(-54.45,-1.41){\circle*{0.000001}} \put(-53.74,-0.71){\circle*{0.000001}} \put(-53.74, 0.00){\circle*{0.000001}} \put(-53.74, 0.71){\circle*{0.000001}} \put(-53.03, 1.41){\circle*{0.000001}} \put(-53.03, 2.12){\circle*{0.000001}} \put(-53.03, 2.83){\circle*{0.000001}} \put(-52.33, 3.54){\circle*{0.000001}} \put(-52.33, 4.24){\circle*{0.000001}} \put(-52.33, 4.95){\circle*{0.000001}} \put(-51.62, 5.66){\circle*{0.000001}} \put(-51.62, 6.36){\circle*{0.000001}} \put(-51.62, 7.07){\circle*{0.000001}} \put(-51.62, 7.78){\circle*{0.000001}} \put(-50.91, 8.49){\circle*{0.000001}} \put(-50.91, 9.19){\circle*{0.000001}} \put(-50.91, 9.90){\circle*{0.000001}} \put(-50.20,10.61){\circle*{0.000001}} \put(-50.20,11.31){\circle*{0.000001}} \put(-50.20,12.02){\circle*{0.000001}} \put(-49.50,12.73){\circle*{0.000001}} \put(-49.50,13.44){\circle*{0.000001}} \put(-49.50,13.44){\circle*{0.000001}} \put(-49.50,14.14){\circle*{0.000001}} \put(-49.50,14.85){\circle*{0.000001}} \put(-48.79,15.56){\circle*{0.000001}} \put(-48.79,16.26){\circle*{0.000001}} \put(-48.79,16.97){\circle*{0.000001}} \put(-53.74,18.38){\circle*{0.000001}} \put(-53.03,18.38){\circle*{0.000001}} \put(-52.33,17.68){\circle*{0.000001}} \put(-51.62,17.68){\circle*{0.000001}} \put(-50.91,17.68){\circle*{0.000001}} \put(-50.20,17.68){\circle*{0.000001}} \put(-49.50,16.97){\circle*{0.000001}} \put(-48.79,16.97){\circle*{0.000001}} \put(-53.74,18.38){\circle*{0.000001}} \put(-53.03,19.09){\circle*{0.000001}} \put(-53.03,19.80){\circle*{0.000001}} \put(-52.33,20.51){\circle*{0.000001}} \put(-51.62,21.21){\circle*{0.000001}} \put(-51.62,21.92){\circle*{0.000001}} \put(-50.91,22.63){\circle*{0.000001}} \put(-50.20,23.33){\circle*{0.000001}} \put(-49.50,24.04){\circle*{0.000001}} \put(-49.50,24.75){\circle*{0.000001}} \put(-48.79,25.46){\circle*{0.000001}} \put(-48.79,25.46){\circle*{0.000001}} \put(-48.79,26.16){\circle*{0.000001}} \put(-48.08,26.87){\circle*{0.000001}} \put(-48.08,27.58){\circle*{0.000001}} \put(-47.38,28.28){\circle*{0.000001}} \put(-47.38,28.99){\circle*{0.000001}} \put(-46.67,29.70){\circle*{0.000001}} \put(-46.67,30.41){\circle*{0.000001}} \put(-45.96,31.11){\circle*{0.000001}} 
\put(-45.96,31.82){\circle*{0.000001}} \put(-45.25,32.53){\circle*{0.000001}} \put(-45.25,33.23){\circle*{0.000001}} \put(-44.55,33.94){\circle*{0.000001}} \put(-44.55,34.65){\circle*{0.000001}} \put(-44.55,34.65){\circle*{0.000001}} \put(-44.55,35.36){\circle*{0.000001}} \put(-45.25,36.06){\circle*{0.000001}} \put(-45.25,36.77){\circle*{0.000001}} \put(-45.25,37.48){\circle*{0.000001}} \put(-45.96,38.18){\circle*{0.000001}} \put(-45.96,38.89){\circle*{0.000001}} \put(-45.96,39.60){\circle*{0.000001}} \put(-46.67,40.31){\circle*{0.000001}} \put(-46.67,41.01){\circle*{0.000001}} \put(-46.67,41.72){\circle*{0.000001}} \put(-47.38,42.43){\circle*{0.000001}} \put(-47.38,43.13){\circle*{0.000001}} \put(-48.08,43.84){\circle*{0.000001}} \put(-48.08,44.55){\circle*{0.000001}} \put(-48.08,45.25){\circle*{0.000001}} \put(-48.79,45.96){\circle*{0.000001}} \put(-48.79,46.67){\circle*{0.000001}} \put(-48.79,47.38){\circle*{0.000001}} \put(-49.50,48.08){\circle*{0.000001}} \put(-49.50,48.79){\circle*{0.000001}} \put(-65.76,41.72){\circle*{0.000001}} \put(-65.05,41.72){\circle*{0.000001}} \put(-64.35,42.43){\circle*{0.000001}} \put(-63.64,42.43){\circle*{0.000001}} \put(-62.93,43.13){\circle*{0.000001}} \put(-62.23,43.13){\circle*{0.000001}} \put(-61.52,43.84){\circle*{0.000001}} \put(-60.81,43.84){\circle*{0.000001}} \put(-60.10,43.84){\circle*{0.000001}} \put(-59.40,44.55){\circle*{0.000001}} \put(-58.69,44.55){\circle*{0.000001}} \put(-57.98,45.25){\circle*{0.000001}} \put(-57.28,45.25){\circle*{0.000001}} \put(-56.57,45.96){\circle*{0.000001}} \put(-55.86,45.96){\circle*{0.000001}} \put(-55.15,46.67){\circle*{0.000001}} \put(-54.45,46.67){\circle*{0.000001}} \put(-53.74,46.67){\circle*{0.000001}} \put(-53.03,47.38){\circle*{0.000001}} \put(-52.33,47.38){\circle*{0.000001}} \put(-51.62,48.08){\circle*{0.000001}} \put(-50.91,48.08){\circle*{0.000001}} \put(-50.20,48.79){\circle*{0.000001}} \put(-49.50,48.79){\circle*{0.000001}} \put(-65.76,41.72){\circle*{0.000001}} \put(-65.76,42.43){\circle*{0.000001}} \put(-65.76,43.13){\circle*{0.000001}} \put(-65.76,43.84){\circle*{0.000001}} \put(-65.76,44.55){\circle*{0.000001}} \put(-65.76,45.25){\circle*{0.000001}} \put(-65.76,45.96){\circle*{0.000001}} \put(-65.76,46.67){\circle*{0.000001}} \put(-65.76,47.38){\circle*{0.000001}} \put(-65.76,48.08){\circle*{0.000001}} \put(-65.76,48.79){\circle*{0.000001}} \put(-65.76,49.50){\circle*{0.000001}} \put(-65.76,50.20){\circle*{0.000001}} \put(-65.76,50.91){\circle*{0.000001}} \put(-65.76,51.62){\circle*{0.000001}} \put(-65.76,52.33){\circle*{0.000001}} \put(-66.47,53.03){\circle*{0.000001}} \put(-66.47,53.74){\circle*{0.000001}} \put(-66.47,54.45){\circle*{0.000001}} \put(-66.47,55.15){\circle*{0.000001}} \put(-66.47,55.86){\circle*{0.000001}} \put(-66.47,56.57){\circle*{0.000001}} \put(-66.47,57.28){\circle*{0.000001}} \put(-66.47,57.98){\circle*{0.000001}} \put(-66.47,58.69){\circle*{0.000001}} \put(-66.47,59.40){\circle*{0.000001}} \put(-66.47,60.10){\circle*{0.000001}} \put(-66.47,60.81){\circle*{0.000001}} \put(-66.47,61.52){\circle*{0.000001}} \put(-66.47,62.23){\circle*{0.000001}} \put(-66.47,62.93){\circle*{0.000001}} \put(-66.47,62.93){\circle*{0.000001}} \put(-67.18,63.64){\circle*{0.000001}} \put(-67.88,64.35){\circle*{0.000001}} \put(-67.88,65.05){\circle*{0.000001}} \put(-68.59,65.76){\circle*{0.000001}} \put(-69.30,66.47){\circle*{0.000001}} \put(-70.00,67.18){\circle*{0.000001}} \put(-70.00,67.88){\circle*{0.000001}} \put(-70.71,68.59){\circle*{0.000001}} \put(-71.42,69.30){\circle*{0.000001}} 
\put(-72.12,70.00){\circle*{0.000001}} \put(-72.12,70.71){\circle*{0.000001}} \put(-72.83,71.42){\circle*{0.000001}} \put(-73.54,72.12){\circle*{0.000001}} \put(-74.25,72.83){\circle*{0.000001}} \put(-74.95,73.54){\circle*{0.000001}} \put(-74.95,74.25){\circle*{0.000001}} \put(-75.66,74.95){\circle*{0.000001}} \put(-76.37,75.66){\circle*{0.000001}} \put(-77.07,76.37){\circle*{0.000001}} \put(-77.07,77.07){\circle*{0.000001}} \put(-77.78,77.78){\circle*{0.000001}} \put(-78.49,78.49){\circle*{0.000001}} \put(-79.20,79.20){\circle*{0.000001}} \put(-79.20,79.90){\circle*{0.000001}} \put(-79.90,80.61){\circle*{0.000001}} \put(-80.61,81.32){\circle*{0.000001}} \put(-69.30,53.74){\circle*{0.000001}} \put(-69.30,54.45){\circle*{0.000001}} \put(-70.00,55.15){\circle*{0.000001}} \put(-70.00,55.86){\circle*{0.000001}} \put(-70.71,56.57){\circle*{0.000001}} \put(-70.71,57.28){\circle*{0.000001}} \put(-70.71,57.98){\circle*{0.000001}} \put(-71.42,58.69){\circle*{0.000001}} \put(-71.42,59.40){\circle*{0.000001}} \put(-72.12,60.10){\circle*{0.000001}} \put(-72.12,60.81){\circle*{0.000001}} \put(-72.83,61.52){\circle*{0.000001}} \put(-72.83,62.23){\circle*{0.000001}} \put(-72.83,62.93){\circle*{0.000001}} \put(-73.54,63.64){\circle*{0.000001}} \put(-73.54,64.35){\circle*{0.000001}} \put(-74.25,65.05){\circle*{0.000001}} \put(-74.25,65.76){\circle*{0.000001}} \put(-74.25,66.47){\circle*{0.000001}} \put(-74.95,67.18){\circle*{0.000001}} \put(-74.95,67.88){\circle*{0.000001}} \put(-75.66,68.59){\circle*{0.000001}} \put(-75.66,69.30){\circle*{0.000001}} \put(-75.66,70.00){\circle*{0.000001}} \put(-76.37,70.71){\circle*{0.000001}} \put(-76.37,71.42){\circle*{0.000001}} \put(-77.07,72.12){\circle*{0.000001}} \put(-77.07,72.83){\circle*{0.000001}} \put(-77.07,73.54){\circle*{0.000001}} \put(-77.78,74.25){\circle*{0.000001}} \put(-77.78,74.95){\circle*{0.000001}} \put(-78.49,75.66){\circle*{0.000001}} \put(-78.49,76.37){\circle*{0.000001}} \put(-79.20,77.07){\circle*{0.000001}} \put(-79.20,77.78){\circle*{0.000001}} \put(-79.20,78.49){\circle*{0.000001}} \put(-79.90,79.20){\circle*{0.000001}} \put(-79.90,79.90){\circle*{0.000001}} \put(-80.61,80.61){\circle*{0.000001}} \put(-80.61,81.32){\circle*{0.000001}} \put(-69.30,53.74){\circle*{0.000001}} \put(-68.59,54.45){\circle*{0.000001}} \put(-67.88,55.15){\circle*{0.000001}} \put(-67.18,55.86){\circle*{0.000001}} \put(-66.47,56.57){\circle*{0.000001}} \put(-65.76,57.28){\circle*{0.000001}} \put(-65.76,57.98){\circle*{0.000001}} \put(-65.05,58.69){\circle*{0.000001}} \put(-64.35,59.40){\circle*{0.000001}} \put(-63.64,60.10){\circle*{0.000001}} \put(-62.93,60.81){\circle*{0.000001}} \put(-62.23,61.52){\circle*{0.000001}} \put(-61.52,62.23){\circle*{0.000001}} \put(-60.81,62.93){\circle*{0.000001}} \put(-60.10,63.64){\circle*{0.000001}} \put(-59.40,64.35){\circle*{0.000001}} \put(-59.40,65.05){\circle*{0.000001}} \put(-58.69,65.76){\circle*{0.000001}} \put(-57.98,66.47){\circle*{0.000001}} \put(-57.28,67.18){\circle*{0.000001}} \put(-56.57,67.88){\circle*{0.000001}} \put(-55.86,68.59){\circle*{0.000001}} \put(-55.15,69.30){\circle*{0.000001}} \put(-54.45,70.00){\circle*{0.000001}} \put(-53.74,70.71){\circle*{0.000001}} \put(-53.03,71.42){\circle*{0.000001}} \put(-52.33,72.12){\circle*{0.000001}} \put(-52.33,72.83){\circle*{0.000001}} \put(-51.62,73.54){\circle*{0.000001}} \put(-50.91,74.25){\circle*{0.000001}} \put(-50.20,74.95){\circle*{0.000001}} \put(-49.50,75.66){\circle*{0.000001}} \put(-48.79,76.37){\circle*{0.000001}} \put(-48.79,76.37){\circle*{0.000001}} 
\put(-48.08,76.37){\circle*{0.000001}} \put(-47.38,76.37){\circle*{0.000001}} \put(-46.67,76.37){\circle*{0.000001}} \put(-45.96,76.37){\circle*{0.000001}} \put(-45.25,76.37){\circle*{0.000001}} \put(-44.55,75.66){\circle*{0.000001}} \put(-43.84,75.66){\circle*{0.000001}} \put(-43.13,75.66){\circle*{0.000001}} \put(-42.43,75.66){\circle*{0.000001}} \put(-41.72,75.66){\circle*{0.000001}} \put(-41.01,75.66){\circle*{0.000001}} \put(-40.31,75.66){\circle*{0.000001}} \put(-39.60,75.66){\circle*{0.000001}} \put(-38.89,75.66){\circle*{0.000001}} \put(-38.18,75.66){\circle*{0.000001}} \put(-37.48,74.95){\circle*{0.000001}} \put(-36.77,74.95){\circle*{0.000001}} \put(-36.06,74.95){\circle*{0.000001}} \put(-35.36,74.95){\circle*{0.000001}} \put(-34.65,74.95){\circle*{0.000001}} \put(-33.94,74.95){\circle*{0.000001}} \put(-33.23,74.95){\circle*{0.000001}} \put(-32.53,74.95){\circle*{0.000001}} \put(-31.82,74.95){\circle*{0.000001}} \put(-31.11,74.95){\circle*{0.000001}} \put(-30.41,74.95){\circle*{0.000001}} \put(-29.70,74.25){\circle*{0.000001}} \put(-28.99,74.25){\circle*{0.000001}} \put(-28.28,74.25){\circle*{0.000001}} \put(-27.58,74.25){\circle*{0.000001}} \put(-26.87,74.25){\circle*{0.000001}} \put(-26.16,74.25){\circle*{0.000001}} \put(-25.46,74.25){\circle*{0.000001}} \put(-24.75,74.25){\circle*{0.000001}} \put(-24.04,74.25){\circle*{0.000001}} \put(-23.33,74.25){\circle*{0.000001}} \put(-22.63,74.25){\circle*{0.000001}} \put(-21.92,73.54){\circle*{0.000001}} \put(-21.21,73.54){\circle*{0.000001}} \put(-20.51,73.54){\circle*{0.000001}} \put(-19.80,73.54){\circle*{0.000001}} \put(-19.09,73.54){\circle*{0.000001}} \put(-18.38,73.54){\circle*{0.000001}} \put(-17.68,73.54){\circle*{0.000001}} \put(-16.97,73.54){\circle*{0.000001}} \put(-16.26,73.54){\circle*{0.000001}} \put(-15.56,73.54){\circle*{0.000001}} \put(-14.85,72.83){\circle*{0.000001}} \put(-14.14,72.83){\circle*{0.000001}} \put(-13.44,72.83){\circle*{0.000001}} \put(-12.73,72.83){\circle*{0.000001}} \put(-12.02,72.83){\circle*{0.000001}} \put(-11.31,72.83){\circle*{0.000001}} \put(-29.70,38.89){\circle*{0.000001}} \put(-28.99,39.60){\circle*{0.000001}} \put(-28.99,40.31){\circle*{0.000001}} \put(-28.28,41.01){\circle*{0.000001}} \put(-28.28,41.72){\circle*{0.000001}} \put(-27.58,42.43){\circle*{0.000001}} \put(-27.58,43.13){\circle*{0.000001}} \put(-26.87,43.84){\circle*{0.000001}} \put(-26.87,44.55){\circle*{0.000001}} \put(-26.16,45.25){\circle*{0.000001}} \put(-26.16,45.96){\circle*{0.000001}} \put(-25.46,46.67){\circle*{0.000001}} \put(-25.46,47.38){\circle*{0.000001}} \put(-24.75,48.08){\circle*{0.000001}} \put(-24.04,48.79){\circle*{0.000001}} \put(-24.04,49.50){\circle*{0.000001}} \put(-23.33,50.20){\circle*{0.000001}} \put(-23.33,50.91){\circle*{0.000001}} \put(-22.63,51.62){\circle*{0.000001}} \put(-22.63,52.33){\circle*{0.000001}} \put(-21.92,53.03){\circle*{0.000001}} \put(-21.92,53.74){\circle*{0.000001}} \put(-21.21,54.45){\circle*{0.000001}} \put(-21.21,55.15){\circle*{0.000001}} \put(-20.51,55.86){\circle*{0.000001}} \put(-19.80,56.57){\circle*{0.000001}} \put(-19.80,57.28){\circle*{0.000001}} \put(-19.09,57.98){\circle*{0.000001}} \put(-19.09,58.69){\circle*{0.000001}} \put(-18.38,59.40){\circle*{0.000001}} \put(-18.38,60.10){\circle*{0.000001}} \put(-17.68,60.81){\circle*{0.000001}} \put(-17.68,61.52){\circle*{0.000001}} \put(-16.97,62.23){\circle*{0.000001}} \put(-16.97,62.93){\circle*{0.000001}} \put(-16.26,63.64){\circle*{0.000001}} \put(-16.26,64.35){\circle*{0.000001}} \put(-15.56,65.05){\circle*{0.000001}} 
\put(-14.85,65.76){\circle*{0.000001}} \put(-14.85,66.47){\circle*{0.000001}} \put(-14.14,67.18){\circle*{0.000001}} \put(-14.14,67.88){\circle*{0.000001}} \put(-13.44,68.59){\circle*{0.000001}} \put(-13.44,69.30){\circle*{0.000001}} \put(-12.73,70.00){\circle*{0.000001}} \put(-12.73,70.71){\circle*{0.000001}} \put(-12.02,71.42){\circle*{0.000001}} \put(-12.02,72.12){\circle*{0.000001}} \put(-11.31,72.83){\circle*{0.000001}} \put(-70.71,31.82){\circle*{0.000001}} \put(-70.00,31.82){\circle*{0.000001}} \put(-69.30,31.82){\circle*{0.000001}} \put(-68.59,32.53){\circle*{0.000001}} \put(-67.88,32.53){\circle*{0.000001}} \put(-67.18,32.53){\circle*{0.000001}} \put(-66.47,32.53){\circle*{0.000001}} \put(-65.76,32.53){\circle*{0.000001}} \put(-65.05,32.53){\circle*{0.000001}} \put(-64.35,33.23){\circle*{0.000001}} \put(-63.64,33.23){\circle*{0.000001}} \put(-62.93,33.23){\circle*{0.000001}} \put(-62.23,33.23){\circle*{0.000001}} \put(-61.52,33.23){\circle*{0.000001}} \put(-60.81,33.23){\circle*{0.000001}} \put(-60.10,33.94){\circle*{0.000001}} \put(-59.40,33.94){\circle*{0.000001}} \put(-58.69,33.94){\circle*{0.000001}} \put(-57.98,33.94){\circle*{0.000001}} \put(-57.28,33.94){\circle*{0.000001}} \put(-56.57,33.94){\circle*{0.000001}} \put(-55.86,34.65){\circle*{0.000001}} \put(-55.15,34.65){\circle*{0.000001}} \put(-54.45,34.65){\circle*{0.000001}} \put(-53.74,34.65){\circle*{0.000001}} \put(-53.03,34.65){\circle*{0.000001}} \put(-52.33,34.65){\circle*{0.000001}} \put(-51.62,35.36){\circle*{0.000001}} \put(-50.91,35.36){\circle*{0.000001}} \put(-50.20,35.36){\circle*{0.000001}} \put(-49.50,35.36){\circle*{0.000001}} \put(-48.79,35.36){\circle*{0.000001}} \put(-48.08,36.06){\circle*{0.000001}} \put(-47.38,36.06){\circle*{0.000001}} \put(-46.67,36.06){\circle*{0.000001}} \put(-45.96,36.06){\circle*{0.000001}} \put(-45.25,36.06){\circle*{0.000001}} \put(-44.55,36.06){\circle*{0.000001}} \put(-43.84,36.77){\circle*{0.000001}} \put(-43.13,36.77){\circle*{0.000001}} \put(-42.43,36.77){\circle*{0.000001}} \put(-41.72,36.77){\circle*{0.000001}} \put(-41.01,36.77){\circle*{0.000001}} \put(-40.31,36.77){\circle*{0.000001}} \put(-39.60,37.48){\circle*{0.000001}} \put(-38.89,37.48){\circle*{0.000001}} \put(-38.18,37.48){\circle*{0.000001}} \put(-37.48,37.48){\circle*{0.000001}} \put(-36.77,37.48){\circle*{0.000001}} \put(-36.06,37.48){\circle*{0.000001}} \put(-35.36,38.18){\circle*{0.000001}} \put(-34.65,38.18){\circle*{0.000001}} \put(-33.94,38.18){\circle*{0.000001}} \put(-33.23,38.18){\circle*{0.000001}} \put(-32.53,38.18){\circle*{0.000001}} \put(-31.82,38.18){\circle*{0.000001}} \put(-31.11,38.89){\circle*{0.000001}} \put(-30.41,38.89){\circle*{0.000001}} \put(-29.70,38.89){\circle*{0.000001}} \put(-70.71,31.82){\circle*{0.000001}} \put(-70.71,32.53){\circle*{0.000001}} \put(-70.71,33.23){\circle*{0.000001}} \put(-70.71,33.94){\circle*{0.000001}} \put(-70.71,34.65){\circle*{0.000001}} \put(-70.71,35.36){\circle*{0.000001}} \put(-70.71,36.06){\circle*{0.000001}} \put(-70.71,36.77){\circle*{0.000001}} \put(-70.71,37.48){\circle*{0.000001}} \put(-70.71,38.18){\circle*{0.000001}} \put(-70.71,38.89){\circle*{0.000001}} \put(-70.71,39.60){\circle*{0.000001}} \put(-70.71,40.31){\circle*{0.000001}} \put(-70.71,41.01){\circle*{0.000001}} \put(-70.71,41.72){\circle*{0.000001}} \put(-70.71,42.43){\circle*{0.000001}} \put(-70.71,43.13){\circle*{0.000001}} \put(-70.71,43.84){\circle*{0.000001}} \put(-70.71,44.55){\circle*{0.000001}} \put(-70.71,45.25){\circle*{0.000001}} \put(-70.71,45.96){\circle*{0.000001}} 
\put(-70.71,46.67){\circle*{0.000001}} \put(-70.71,47.38){\circle*{0.000001}} \put(-70.71,48.08){\circle*{0.000001}} \put(-70.71,48.79){\circle*{0.000001}} \put(-70.71,49.50){\circle*{0.000001}} \put(-70.71,50.20){\circle*{0.000001}} \put(-70.71,50.91){\circle*{0.000001}} \put(-70.71,51.62){\circle*{0.000001}} \put(-70.71,52.33){\circle*{0.000001}} \put(-70.71,53.03){\circle*{0.000001}} \put(-70.71,53.74){\circle*{0.000001}} \put(-70.71,54.45){\circle*{0.000001}} \put(-70.71,55.15){\circle*{0.000001}} \put(-70.71,55.86){\circle*{0.000001}} \put(-70.71,56.57){\circle*{0.000001}} \put(-70.71,57.28){\circle*{0.000001}} \put(-70.71,57.98){\circle*{0.000001}} \put(-70.71,58.69){\circle*{0.000001}} \put(-70.71,59.40){\circle*{0.000001}} \put(-70.71,60.10){\circle*{0.000001}} \put(-70.71,60.81){\circle*{0.000001}} \put(-70.71,61.52){\circle*{0.000001}} \put(-70.71,62.23){\circle*{0.000001}} \put(-70.71,62.93){\circle*{0.000001}} \put(-70.71,63.64){\circle*{0.000001}} \put(-70.71,64.35){\circle*{0.000001}} \put(-70.71,65.05){\circle*{0.000001}} \put(-70.71,65.76){\circle*{0.000001}} \put(-70.71,66.47){\circle*{0.000001}} \put(-70.71,67.18){\circle*{0.000001}} \put(-70.71,67.88){\circle*{0.000001}} \put(-70.71,68.59){\circle*{0.000001}} \put(-70.71,69.30){\circle*{0.000001}} \put(-70.71,70.00){\circle*{0.000001}} \put(-70.71,70.71){\circle*{0.000001}} \put(-70.71,71.42){\circle*{0.000001}} \put(-70.71,72.12){\circle*{0.000001}} \put(-70.71,72.83){\circle*{0.000001}} \put(-70.71,73.54){\circle*{0.000001}} \put(-70.71,74.25){\circle*{0.000001}} \put(-70.71,74.95){\circle*{0.000001}} \put(-70.71,75.66){\circle*{0.000001}} \put(-70.71,75.66){\circle*{0.000001}} \put(-70.71,76.37){\circle*{0.000001}} \put(-71.42,77.07){\circle*{0.000001}} \put(-71.42,77.78){\circle*{0.000001}} \put(-71.42,78.49){\circle*{0.000001}} \put(-72.12,79.20){\circle*{0.000001}} \put(-72.12,79.90){\circle*{0.000001}} \put(-72.12,80.61){\circle*{0.000001}} \put(-72.83,81.32){\circle*{0.000001}} \put(-72.83,82.02){\circle*{0.000001}} \put(-72.83,82.73){\circle*{0.000001}} \put(-73.54,83.44){\circle*{0.000001}} \put(-73.54,84.15){\circle*{0.000001}} \put(-73.54,84.85){\circle*{0.000001}} \put(-74.25,85.56){\circle*{0.000001}} \put(-74.25,86.27){\circle*{0.000001}} \put(-74.25,86.97){\circle*{0.000001}} \put(-74.95,87.68){\circle*{0.000001}} \put(-74.95,88.39){\circle*{0.000001}} \put(-74.95,89.10){\circle*{0.000001}} \put(-75.66,89.80){\circle*{0.000001}} \put(-75.66,90.51){\circle*{0.000001}} \put(-75.66,91.22){\circle*{0.000001}} \put(-76.37,91.92){\circle*{0.000001}} \put(-76.37,92.63){\circle*{0.000001}} \put(-76.37,93.34){\circle*{0.000001}} \put(-77.07,94.05){\circle*{0.000001}} \put(-77.07,94.75){\circle*{0.000001}} \put(-77.07,95.46){\circle*{0.000001}} \put(-77.78,96.17){\circle*{0.000001}} \put(-77.78,96.87){\circle*{0.000001}} \put(-77.78,97.58){\circle*{0.000001}} \put(-78.49,98.29){\circle*{0.000001}} \put(-78.49,98.99){\circle*{0.000001}} \put(-79.20,99.70){\circle*{0.000001}} \put(-79.20,100.41){\circle*{0.000001}} \put(-79.20,101.12){\circle*{0.000001}} \put(-79.90,101.82){\circle*{0.000001}} \put(-79.90,102.53){\circle*{0.000001}} \put(-79.90,103.24){\circle*{0.000001}} \put(-80.61,103.94){\circle*{0.000001}} \put(-80.61,104.65){\circle*{0.000001}} \put(-80.61,105.36){\circle*{0.000001}} \put(-81.32,106.07){\circle*{0.000001}} \put(-81.32,106.77){\circle*{0.000001}} \put(-81.32,107.48){\circle*{0.000001}} \put(-82.02,108.19){\circle*{0.000001}} \put(-82.02,108.89){\circle*{0.000001}} 
\put(95.46,-66.47){\circle*{0.000001}} \put(96.17,-66.47){\circle*{0.000001}} \put(96.87,-66.47){\circle*{0.000001}} \put(97.58,-66.47){\circle*{0.000001}} \put(98.29,-66.47){\circle*{0.000001}} \put(98.99,-66.47){\circle*{0.000001}} \put(99.70,-66.47){\circle*{0.000001}} \put(100.41,-66.47){\circle*{0.000001}} \put(101.12,-66.47){\circle*{0.000001}} \put(101.82,-66.47){\circle*{0.000001}} \put(102.53,-66.47){\circle*{0.000001}} \put(103.24,-66.47){\circle*{0.000001}} \put(103.94,-66.47){\circle*{0.000001}} \put(104.65,-66.47){\circle*{0.000001}} \put(105.36,-66.47){\circle*{0.000001}} \put(106.07,-66.47){\circle*{0.000001}} \put(106.77,-66.47){\circle*{0.000001}} \put(107.48,-66.47){\circle*{0.000001}} \put(108.19,-66.47){\circle*{0.000001}} \put(108.89,-66.47){\circle*{0.000001}} \put(109.60,-66.47){\circle*{0.000001}} \put(110.31,-66.47){\circle*{0.000001}} \put(111.02,-66.47){\circle*{0.000001}} \put(111.72,-66.47){\circle*{0.000001}} \put(112.43,-66.47){\circle*{0.000001}} \put(113.14,-66.47){\circle*{0.000001}} \put(113.84,-66.47){\circle*{0.000001}} \put(114.55,-66.47){\circle*{0.000001}} \put(115.26,-66.47){\circle*{0.000001}} \put(115.97,-66.47){\circle*{0.000001}} \put(116.67,-66.47){\circle*{0.000001}} \put(117.38,-66.47){\circle*{0.000001}} \put(118.09,-66.47){\circle*{0.000001}} \put(118.79,-66.47){\circle*{0.000001}} \put(119.50,-66.47){\circle*{0.000001}} \put(120.21,-66.47){\circle*{0.000001}} \put(120.92,-66.47){\circle*{0.000001}} \put(121.62,-66.47){\circle*{0.000001}} \put(122.33,-66.47){\circle*{0.000001}} \put(123.04,-66.47){\circle*{0.000001}} \put(123.74,-66.47){\circle*{0.000001}} \put(124.45,-66.47){\circle*{0.000001}} \put(125.16,-66.47){\circle*{0.000001}} \put(125.87,-66.47){\circle*{0.000001}} \put(126.57,-66.47){\circle*{0.000001}} \put(127.28,-66.47){\circle*{0.000001}} \put(127.99,-66.47){\circle*{0.000001}} \put(128.69,-66.47){\circle*{0.000001}} \put(129.40,-66.47){\circle*{0.000001}} \put(130.11,-66.47){\circle*{0.000001}} \put(130.81,-66.47){\circle*{0.000001}} \put(131.52,-66.47){\circle*{0.000001}} \put(132.23,-66.47){\circle*{0.000001}} \put(132.94,-66.47){\circle*{0.000001}} \put(133.64,-66.47){\circle*{0.000001}} \put(134.35,-66.47){\circle*{0.000001}} \put(135.06,-66.47){\circle*{0.000001}} \put(135.76,-66.47){\circle*{0.000001}} \put(136.47,-66.47){\circle*{0.000001}} \put(137.18,-66.47){\circle*{0.000001}} \put(137.89,-66.47){\circle*{0.000001}} \put(138.59,-66.47){\circle*{0.000001}} \put(139.30,-66.47){\circle*{0.000001}} \put(140.01,-66.47){\circle*{0.000001}} \put(140.71,-66.47){\circle*{0.000001}} \put(141.42,-66.47){\circle*{0.000001}} \put(142.13,-66.47){\circle*{0.000001}} \put(142.84,-66.47){\circle*{0.000001}} \put(143.54,-66.47){\circle*{0.000001}} \put(144.25,-66.47){\circle*{0.000001}} \put(144.96,-66.47){\circle*{0.000001}} \put(145.66,-66.47){\circle*{0.000001}} \put(146.37,-66.47){\circle*{0.000001}} \put(108.89,-111.72){\circle*{0.000001}} \put(109.60,-111.02){\circle*{0.000001}} \put(110.31,-110.31){\circle*{0.000001}} \put(110.31,-109.60){\circle*{0.000001}} \put(111.02,-108.89){\circle*{0.000001}} \put(111.72,-108.19){\circle*{0.000001}} \put(112.43,-107.48){\circle*{0.000001}} \put(113.14,-106.77){\circle*{0.000001}} \put(113.84,-106.07){\circle*{0.000001}} \put(113.84,-105.36){\circle*{0.000001}} \put(114.55,-104.65){\circle*{0.000001}} \put(115.26,-103.94){\circle*{0.000001}} \put(115.97,-103.24){\circle*{0.000001}} \put(116.67,-102.53){\circle*{0.000001}} \put(117.38,-101.82){\circle*{0.000001}} 
\put(117.38,-101.12){\circle*{0.000001}} \put(118.09,-100.41){\circle*{0.000001}} \put(118.79,-99.70){\circle*{0.000001}} \put(119.50,-98.99){\circle*{0.000001}} \put(120.21,-98.29){\circle*{0.000001}} \put(120.92,-97.58){\circle*{0.000001}} \put(120.92,-96.87){\circle*{0.000001}} \put(121.62,-96.17){\circle*{0.000001}} \put(122.33,-95.46){\circle*{0.000001}} \put(123.04,-94.75){\circle*{0.000001}} \put(123.74,-94.05){\circle*{0.000001}} \put(124.45,-93.34){\circle*{0.000001}} \put(124.45,-92.63){\circle*{0.000001}} \put(125.16,-91.92){\circle*{0.000001}} \put(125.87,-91.22){\circle*{0.000001}} \put(126.57,-90.51){\circle*{0.000001}} \put(127.28,-89.80){\circle*{0.000001}} \put(127.28,-89.10){\circle*{0.000001}} \put(127.99,-88.39){\circle*{0.000001}} \put(128.69,-87.68){\circle*{0.000001}} \put(129.40,-86.97){\circle*{0.000001}} \put(130.11,-86.27){\circle*{0.000001}} \put(130.81,-85.56){\circle*{0.000001}} \put(130.81,-84.85){\circle*{0.000001}} \put(131.52,-84.15){\circle*{0.000001}} \put(132.23,-83.44){\circle*{0.000001}} \put(132.94,-82.73){\circle*{0.000001}} \put(133.64,-82.02){\circle*{0.000001}} \put(134.35,-81.32){\circle*{0.000001}} \put(134.35,-80.61){\circle*{0.000001}} \put(135.06,-79.90){\circle*{0.000001}} \put(135.76,-79.20){\circle*{0.000001}} \put(136.47,-78.49){\circle*{0.000001}} \put(137.18,-77.78){\circle*{0.000001}} \put(137.89,-77.07){\circle*{0.000001}} \put(137.89,-76.37){\circle*{0.000001}} \put(138.59,-75.66){\circle*{0.000001}} \put(139.30,-74.95){\circle*{0.000001}} \put(140.01,-74.25){\circle*{0.000001}} \put(140.71,-73.54){\circle*{0.000001}} \put(141.42,-72.83){\circle*{0.000001}} \put(141.42,-72.12){\circle*{0.000001}} \put(142.13,-71.42){\circle*{0.000001}} \put(142.84,-70.71){\circle*{0.000001}} \put(143.54,-70.00){\circle*{0.000001}} \put(144.25,-69.30){\circle*{0.000001}} \put(144.96,-68.59){\circle*{0.000001}} \put(144.96,-67.88){\circle*{0.000001}} \put(145.66,-67.18){\circle*{0.000001}} \put(146.37,-66.47){\circle*{0.000001}} \put(61.52,-143.54){\circle*{0.000001}} \put(62.23,-142.84){\circle*{0.000001}} \put(62.93,-142.84){\circle*{0.000001}} \put(63.64,-142.13){\circle*{0.000001}} \put(64.35,-141.42){\circle*{0.000001}} \put(65.05,-141.42){\circle*{0.000001}} \put(65.76,-140.71){\circle*{0.000001}} \put(66.47,-140.01){\circle*{0.000001}} \put(67.18,-140.01){\circle*{0.000001}} \put(67.88,-139.30){\circle*{0.000001}} \put(68.59,-138.59){\circle*{0.000001}} \put(69.30,-138.59){\circle*{0.000001}} \put(70.00,-137.89){\circle*{0.000001}} \put(70.71,-137.18){\circle*{0.000001}} \put(71.42,-137.18){\circle*{0.000001}} \put(72.12,-136.47){\circle*{0.000001}} \put(72.83,-135.76){\circle*{0.000001}} \put(73.54,-135.76){\circle*{0.000001}} \put(74.25,-135.06){\circle*{0.000001}} \put(74.95,-134.35){\circle*{0.000001}} \put(75.66,-134.35){\circle*{0.000001}} \put(76.37,-133.64){\circle*{0.000001}} \put(77.07,-132.94){\circle*{0.000001}} \put(77.78,-132.94){\circle*{0.000001}} \put(78.49,-132.23){\circle*{0.000001}} \put(79.20,-131.52){\circle*{0.000001}} \put(79.90,-131.52){\circle*{0.000001}} \put(80.61,-130.81){\circle*{0.000001}} \put(81.32,-130.11){\circle*{0.000001}} \put(82.02,-130.11){\circle*{0.000001}} \put(82.73,-129.40){\circle*{0.000001}} \put(83.44,-128.69){\circle*{0.000001}} \put(84.15,-128.69){\circle*{0.000001}} \put(84.85,-127.99){\circle*{0.000001}} \put(85.56,-127.28){\circle*{0.000001}} \put(86.27,-126.57){\circle*{0.000001}} \put(86.97,-126.57){\circle*{0.000001}} \put(87.68,-125.87){\circle*{0.000001}} 
\put(88.39,-125.16){\circle*{0.000001}} \put(89.10,-125.16){\circle*{0.000001}} \put(89.80,-124.45){\circle*{0.000001}} \put(90.51,-123.74){\circle*{0.000001}} \put(91.22,-123.74){\circle*{0.000001}} \put(91.92,-123.04){\circle*{0.000001}} \put(92.63,-122.33){\circle*{0.000001}} \put(93.34,-122.33){\circle*{0.000001}} \put(94.05,-121.62){\circle*{0.000001}} \put(94.75,-120.92){\circle*{0.000001}} \put(95.46,-120.92){\circle*{0.000001}} \put(96.17,-120.21){\circle*{0.000001}} \put(96.87,-119.50){\circle*{0.000001}} \put(97.58,-119.50){\circle*{0.000001}} \put(98.29,-118.79){\circle*{0.000001}} \put(98.99,-118.09){\circle*{0.000001}} \put(99.70,-118.09){\circle*{0.000001}} \put(100.41,-117.38){\circle*{0.000001}} \put(101.12,-116.67){\circle*{0.000001}} \put(101.82,-116.67){\circle*{0.000001}} \put(102.53,-115.97){\circle*{0.000001}} \put(103.24,-115.26){\circle*{0.000001}} \put(103.94,-115.26){\circle*{0.000001}} \put(104.65,-114.55){\circle*{0.000001}} \put(105.36,-113.84){\circle*{0.000001}} \put(106.07,-113.84){\circle*{0.000001}} \put(106.77,-113.14){\circle*{0.000001}} \put(107.48,-112.43){\circle*{0.000001}} \put(108.19,-112.43){\circle*{0.000001}} \put(108.89,-111.72){\circle*{0.000001}} \put(60.81,-145.66){\circle*{0.000001}} \put(60.81,-144.96){\circle*{0.000001}} \put(61.52,-144.25){\circle*{0.000001}} \put(61.52,-143.54){\circle*{0.000001}} \put(60.81,-145.66){\circle*{0.000001}} \put(61.52,-146.37){\circle*{0.000001}} \put(59.40,-145.66){\circle*{0.000001}} \put(60.10,-145.66){\circle*{0.000001}} \put(60.81,-146.37){\circle*{0.000001}} \put(61.52,-146.37){\circle*{0.000001}} \put(59.40,-145.66){\circle*{0.000001}} \put(60.10,-144.96){\circle*{0.000001}} \put(60.10,-144.25){\circle*{0.000001}} \put(60.81,-143.54){\circle*{0.000001}} \put(60.81,-143.54){\circle*{0.000001}} \put(57.98,-145.66){\circle*{0.000001}} \put(58.69,-144.96){\circle*{0.000001}} \put(59.40,-144.96){\circle*{0.000001}} \put(60.10,-144.25){\circle*{0.000001}} \put(60.81,-143.54){\circle*{0.000001}} \put(56.57,-146.37){\circle*{0.000001}} \put(57.28,-146.37){\circle*{0.000001}} \put(57.98,-145.66){\circle*{0.000001}} \put(56.57,-147.79){\circle*{0.000001}} \put(56.57,-147.08){\circle*{0.000001}} \put(56.57,-146.37){\circle*{0.000001}} \put(56.57,-147.79){\circle*{0.000001}} \put(57.28,-147.08){\circle*{0.000001}} \put(57.98,-146.37){\circle*{0.000001}} \put(58.69,-145.66){\circle*{0.000001}} \put(58.69,-145.66){\circle*{0.000001}} \put(59.40,-145.66){\circle*{0.000001}} \put(60.10,-144.96){\circle*{0.000001}} \put(60.81,-144.96){\circle*{0.000001}} \put(59.40,-145.66){\circle*{0.000001}} \put(60.10,-145.66){\circle*{0.000001}} \put(60.81,-144.96){\circle*{0.000001}} \put(59.40,-145.66){\circle*{0.000001}} \put(60.10,-145.66){\circle*{0.000001}} \put(60.81,-146.37){\circle*{0.000001}} \put(61.52,-146.37){\circle*{0.000001}} \put(62.23,-146.37){\circle*{0.000001}} \put(62.93,-147.08){\circle*{0.000001}} \put(63.64,-147.08){\circle*{0.000001}} \put(64.35,-147.08){\circle*{0.000001}} \put(65.05,-147.79){\circle*{0.000001}} \put(65.76,-147.79){\circle*{0.000001}} \put(66.47,-147.79){\circle*{0.000001}} \put(67.18,-148.49){\circle*{0.000001}} \put(67.88,-148.49){\circle*{0.000001}} \put(55.86,-156.98){\circle*{0.000001}} \put(56.57,-156.27){\circle*{0.000001}} \put(57.28,-156.27){\circle*{0.000001}} \put(57.98,-155.56){\circle*{0.000001}} \put(58.69,-154.86){\circle*{0.000001}} \put(59.40,-154.15){\circle*{0.000001}} \put(60.10,-154.15){\circle*{0.000001}} \put(60.81,-153.44){\circle*{0.000001}} 
\put(61.52,-152.74){\circle*{0.000001}} \put(62.23,-152.74){\circle*{0.000001}} \put(62.93,-152.03){\circle*{0.000001}} \put(63.64,-151.32){\circle*{0.000001}} \put(64.35,-151.32){\circle*{0.000001}} \put(65.05,-150.61){\circle*{0.000001}} \put(65.76,-149.91){\circle*{0.000001}} \put(66.47,-149.20){\circle*{0.000001}} \put(67.18,-149.20){\circle*{0.000001}} \put(67.88,-148.49){\circle*{0.000001}} \put(34.65,-155.56){\circle*{0.000001}} \put(35.36,-155.56){\circle*{0.000001}} \put(36.06,-155.56){\circle*{0.000001}} \put(36.77,-155.56){\circle*{0.000001}} \put(37.48,-155.56){\circle*{0.000001}} \put(38.18,-155.56){\circle*{0.000001}} \put(38.89,-155.56){\circle*{0.000001}} \put(39.60,-155.56){\circle*{0.000001}} \put(40.31,-156.27){\circle*{0.000001}} \put(41.01,-156.27){\circle*{0.000001}} \put(41.72,-156.27){\circle*{0.000001}} \put(42.43,-156.27){\circle*{0.000001}} \put(43.13,-156.27){\circle*{0.000001}} \put(43.84,-156.27){\circle*{0.000001}} \put(44.55,-156.27){\circle*{0.000001}} \put(45.25,-156.27){\circle*{0.000001}} \put(45.96,-156.27){\circle*{0.000001}} \put(46.67,-156.27){\circle*{0.000001}} \put(47.38,-156.27){\circle*{0.000001}} \put(48.08,-156.27){\circle*{0.000001}} \put(48.79,-156.27){\circle*{0.000001}} \put(49.50,-156.27){\circle*{0.000001}} \put(50.20,-156.27){\circle*{0.000001}} \put(50.91,-156.98){\circle*{0.000001}} \put(51.62,-156.98){\circle*{0.000001}} \put(52.33,-156.98){\circle*{0.000001}} \put(53.03,-156.98){\circle*{0.000001}} \put(53.74,-156.98){\circle*{0.000001}} \put(54.45,-156.98){\circle*{0.000001}} \put(55.15,-156.98){\circle*{0.000001}} \put(55.86,-156.98){\circle*{0.000001}} \put(34.65,-155.56){\circle*{0.000001}} \put(34.65,-154.86){\circle*{0.000001}} \put(35.36,-154.15){\circle*{0.000001}} \put(35.36,-153.44){\circle*{0.000001}} \put(35.36,-152.74){\circle*{0.000001}} \put(35.36,-152.03){\circle*{0.000001}} \put(36.06,-151.32){\circle*{0.000001}} \put(36.06,-150.61){\circle*{0.000001}} \put(36.06,-149.91){\circle*{0.000001}} \put(36.06,-149.20){\circle*{0.000001}} \put(36.77,-148.49){\circle*{0.000001}} \put(36.77,-147.79){\circle*{0.000001}} \put(36.77,-147.08){\circle*{0.000001}} \put(37.48,-146.37){\circle*{0.000001}} \put(37.48,-145.66){\circle*{0.000001}} \put(37.48,-144.96){\circle*{0.000001}} \put(37.48,-144.25){\circle*{0.000001}} \put(38.18,-143.54){\circle*{0.000001}} \put(38.18,-142.84){\circle*{0.000001}} \put(38.18,-142.13){\circle*{0.000001}} \put(38.18,-141.42){\circle*{0.000001}} \put(38.89,-140.71){\circle*{0.000001}} \put(38.89,-140.01){\circle*{0.000001}} \put(38.89,-139.30){\circle*{0.000001}} \put(38.89,-138.59){\circle*{0.000001}} \put(39.60,-137.89){\circle*{0.000001}} \put(39.60,-137.18){\circle*{0.000001}} \put(39.60,-136.47){\circle*{0.000001}} \put(40.31,-135.76){\circle*{0.000001}} \put(40.31,-135.06){\circle*{0.000001}} \put(40.31,-134.35){\circle*{0.000001}} \put(40.31,-133.64){\circle*{0.000001}} \put(41.01,-132.94){\circle*{0.000001}} \put(41.01,-132.23){\circle*{0.000001}} \put(41.01,-131.52){\circle*{0.000001}} \put(41.01,-130.81){\circle*{0.000001}} \put(41.72,-130.11){\circle*{0.000001}} \put(41.72,-129.40){\circle*{0.000001}} \put(41.72,-129.40){\circle*{0.000001}} \put(42.43,-128.69){\circle*{0.000001}} \put(42.43,-127.99){\circle*{0.000001}} \put(43.13,-127.28){\circle*{0.000001}} \put(43.13,-126.57){\circle*{0.000001}} \put(43.84,-125.87){\circle*{0.000001}} \put(44.55,-125.16){\circle*{0.000001}} \put(44.55,-124.45){\circle*{0.000001}} \put(45.25,-123.74){\circle*{0.000001}} 
\put(45.96,-123.04){\circle*{0.000001}} \put(45.96,-122.33){\circle*{0.000001}} \put(46.67,-121.62){\circle*{0.000001}} \put(46.67,-120.92){\circle*{0.000001}} \put(47.38,-120.21){\circle*{0.000001}} \put(48.08,-119.50){\circle*{0.000001}} \put(48.08,-118.79){\circle*{0.000001}} \put(48.79,-118.09){\circle*{0.000001}} \put(48.79,-117.38){\circle*{0.000001}} \put(49.50,-116.67){\circle*{0.000001}} \put(50.20,-115.97){\circle*{0.000001}} \put(50.20,-115.26){\circle*{0.000001}} \put(50.91,-114.55){\circle*{0.000001}} \put(51.62,-113.84){\circle*{0.000001}} \put(51.62,-113.14){\circle*{0.000001}} \put(52.33,-112.43){\circle*{0.000001}} \put(52.33,-111.72){\circle*{0.000001}} \put(53.03,-111.02){\circle*{0.000001}} \put(53.74,-110.31){\circle*{0.000001}} \put(53.74,-109.60){\circle*{0.000001}} \put(54.45,-108.89){\circle*{0.000001}} \put(54.45,-108.19){\circle*{0.000001}} \put(55.15,-107.48){\circle*{0.000001}} \put(55.86,-106.77){\circle*{0.000001}} \put(55.86,-106.07){\circle*{0.000001}} \put(56.57,-105.36){\circle*{0.000001}} \put(57.28,-104.65){\circle*{0.000001}} \put(57.28,-103.94){\circle*{0.000001}} \put(57.98,-103.24){\circle*{0.000001}} \put(57.98,-102.53){\circle*{0.000001}} \put(58.69,-101.82){\circle*{0.000001}} \put(58.69,-101.82){\circle*{0.000001}} \put(58.69,-101.12){\circle*{0.000001}} \put(58.69,-100.41){\circle*{0.000001}} \put(58.69,-99.70){\circle*{0.000001}} \put(58.69,-98.99){\circle*{0.000001}} \put(58.69,-98.99){\circle*{0.000001}} \put(58.69,-98.29){\circle*{0.000001}} \put(57.98,-97.58){\circle*{0.000001}} \put(57.98,-97.58){\circle*{0.000001}} \put(57.98,-97.58){\circle*{0.000001}} \put(57.98,-96.87){\circle*{0.000001}} \put(58.69,-96.17){\circle*{0.000001}} \put(58.69,-95.46){\circle*{0.000001}} \put(58.69,-94.75){\circle*{0.000001}} \put(59.40,-94.05){\circle*{0.000001}} \put(59.40,-93.34){\circle*{0.000001}} \put(59.40,-93.34){\circle*{0.000001}} \put(59.40,-92.63){\circle*{0.000001}} \put(60.10,-91.92){\circle*{0.000001}} \put(60.10,-91.22){\circle*{0.000001}} \put(60.81,-90.51){\circle*{0.000001}} \put(60.81,-90.51){\circle*{0.000001}} \put(60.81,-89.80){\circle*{0.000001}} \put(60.81,-89.10){\circle*{0.000001}} \put(60.81,-88.39){\circle*{0.000001}} \put(60.81,-87.68){\circle*{0.000001}} \put(60.81,-87.68){\circle*{0.000001}} \put(60.10,-86.97){\circle*{0.000001}} \put(60.10,-86.27){\circle*{0.000001}} \put(59.40,-85.56){\circle*{0.000001}} \put(53.74,-87.68){\circle*{0.000001}} \put(54.45,-87.68){\circle*{0.000001}} \put(55.15,-86.97){\circle*{0.000001}} \put(55.86,-86.97){\circle*{0.000001}} \put(56.57,-86.97){\circle*{0.000001}} \put(57.28,-86.27){\circle*{0.000001}} \put(57.98,-86.27){\circle*{0.000001}} \put(58.69,-85.56){\circle*{0.000001}} \put(59.40,-85.56){\circle*{0.000001}} \put(52.33,-86.97){\circle*{0.000001}} \put(53.03,-86.97){\circle*{0.000001}} \put(53.74,-87.68){\circle*{0.000001}} \put(52.33,-86.97){\circle*{0.000001}} \put(52.33,-86.27){\circle*{0.000001}} \put(52.33,-85.56){\circle*{0.000001}} \put(51.62,-84.85){\circle*{0.000001}} \put(51.62,-84.15){\circle*{0.000001}} \put(51.62,-83.44){\circle*{0.000001}} \put(46.67,-86.97){\circle*{0.000001}} \put(47.38,-86.27){\circle*{0.000001}} \put(48.08,-86.27){\circle*{0.000001}} \put(48.79,-85.56){\circle*{0.000001}} \put(49.50,-84.85){\circle*{0.000001}} \put(50.20,-84.15){\circle*{0.000001}} \put(50.91,-84.15){\circle*{0.000001}} \put(51.62,-83.44){\circle*{0.000001}} \put(38.89,-91.92){\circle*{0.000001}} \put(39.60,-91.22){\circle*{0.000001}} \put(40.31,-91.22){\circle*{0.000001}} 
\put(41.01,-90.51){\circle*{0.000001}} \put(41.72,-89.80){\circle*{0.000001}} \put(42.43,-89.80){\circle*{0.000001}} \put(43.13,-89.10){\circle*{0.000001}} \put(43.84,-89.10){\circle*{0.000001}} \put(44.55,-88.39){\circle*{0.000001}} \put(45.25,-87.68){\circle*{0.000001}} \put(45.96,-87.68){\circle*{0.000001}} \put(46.67,-86.97){\circle*{0.000001}} \put(25.46,-85.56){\circle*{0.000001}} \put(26.16,-85.56){\circle*{0.000001}} \put(26.87,-86.27){\circle*{0.000001}} \put(27.58,-86.27){\circle*{0.000001}} \put(28.28,-86.97){\circle*{0.000001}} \put(28.99,-86.97){\circle*{0.000001}} \put(29.70,-87.68){\circle*{0.000001}} \put(30.41,-87.68){\circle*{0.000001}} \put(31.11,-88.39){\circle*{0.000001}} \put(31.82,-88.39){\circle*{0.000001}} \put(32.53,-89.10){\circle*{0.000001}} \put(33.23,-89.10){\circle*{0.000001}} \put(33.94,-89.80){\circle*{0.000001}} \put(34.65,-89.80){\circle*{0.000001}} \put(35.36,-90.51){\circle*{0.000001}} \put(36.06,-90.51){\circle*{0.000001}} \put(36.77,-91.22){\circle*{0.000001}} \put(37.48,-91.22){\circle*{0.000001}} \put(38.18,-91.92){\circle*{0.000001}} \put(38.89,-91.92){\circle*{0.000001}} \put(25.46,-85.56){\circle*{0.000001}} \put(26.16,-85.56){\circle*{0.000001}} \put(26.87,-85.56){\circle*{0.000001}} \put(27.58,-85.56){\circle*{0.000001}} \put(28.28,-85.56){\circle*{0.000001}} \put(28.99,-85.56){\circle*{0.000001}} \put(29.70,-85.56){\circle*{0.000001}} \put(30.41,-86.27){\circle*{0.000001}} \put(31.11,-86.27){\circle*{0.000001}} \put(31.82,-86.27){\circle*{0.000001}} \put(32.53,-86.27){\circle*{0.000001}} \put(33.23,-86.27){\circle*{0.000001}} \put(33.94,-86.27){\circle*{0.000001}} \put(34.65,-86.27){\circle*{0.000001}} \put(35.36,-86.27){\circle*{0.000001}} \put(36.06,-86.27){\circle*{0.000001}} \put(36.77,-86.27){\circle*{0.000001}} \put(37.48,-86.27){\circle*{0.000001}} \put(38.18,-86.27){\circle*{0.000001}} \put(38.89,-86.27){\circle*{0.000001}} \put(39.60,-86.97){\circle*{0.000001}} \put(40.31,-86.97){\circle*{0.000001}} \put(41.01,-86.97){\circle*{0.000001}} \put(41.72,-86.97){\circle*{0.000001}} \put(42.43,-86.97){\circle*{0.000001}} \put(43.13,-86.97){\circle*{0.000001}} \put(43.84,-86.97){\circle*{0.000001}} \put(28.99,-99.70){\circle*{0.000001}} \put(29.70,-98.99){\circle*{0.000001}} \put(30.41,-98.29){\circle*{0.000001}} \put(31.11,-97.58){\circle*{0.000001}} \put(31.82,-97.58){\circle*{0.000001}} \put(32.53,-96.87){\circle*{0.000001}} \put(33.23,-96.17){\circle*{0.000001}} \put(33.94,-95.46){\circle*{0.000001}} \put(34.65,-94.75){\circle*{0.000001}} \put(35.36,-94.05){\circle*{0.000001}} \put(36.06,-93.34){\circle*{0.000001}} \put(36.77,-93.34){\circle*{0.000001}} \put(37.48,-92.63){\circle*{0.000001}} \put(38.18,-91.92){\circle*{0.000001}} \put(38.89,-91.22){\circle*{0.000001}} \put(39.60,-90.51){\circle*{0.000001}} \put(40.31,-89.80){\circle*{0.000001}} \put(41.01,-89.10){\circle*{0.000001}} \put(41.72,-89.10){\circle*{0.000001}} \put(42.43,-88.39){\circle*{0.000001}} \put(43.13,-87.68){\circle*{0.000001}} \put(43.84,-86.97){\circle*{0.000001}} \put( 3.54,-108.89){\circle*{0.000001}} \put( 4.24,-108.89){\circle*{0.000001}} \put( 4.95,-108.19){\circle*{0.000001}} \put( 5.66,-108.19){\circle*{0.000001}} \put( 6.36,-108.19){\circle*{0.000001}} \put( 7.07,-107.48){\circle*{0.000001}} \put( 7.78,-107.48){\circle*{0.000001}} \put( 8.49,-106.77){\circle*{0.000001}} \put( 9.19,-106.77){\circle*{0.000001}} \put( 9.90,-106.77){\circle*{0.000001}} \put(10.61,-106.07){\circle*{0.000001}} \put(11.31,-106.07){\circle*{0.000001}} 
\put(12.02,-106.07){\circle*{0.000001}} \put(12.73,-105.36){\circle*{0.000001}} \put(13.44,-105.36){\circle*{0.000001}} \put(14.14,-105.36){\circle*{0.000001}} \put(14.85,-104.65){\circle*{0.000001}} \put(15.56,-104.65){\circle*{0.000001}} \put(16.26,-104.65){\circle*{0.000001}} \put(16.97,-103.94){\circle*{0.000001}} \put(17.68,-103.94){\circle*{0.000001}} \put(18.38,-103.24){\circle*{0.000001}} \put(19.09,-103.24){\circle*{0.000001}} \put(19.80,-103.24){\circle*{0.000001}} \put(20.51,-102.53){\circle*{0.000001}} \put(21.21,-102.53){\circle*{0.000001}} \put(21.92,-102.53){\circle*{0.000001}} \put(22.63,-101.82){\circle*{0.000001}} \put(23.33,-101.82){\circle*{0.000001}} \put(24.04,-101.82){\circle*{0.000001}} \put(24.75,-101.12){\circle*{0.000001}} \put(25.46,-101.12){\circle*{0.000001}} \put(26.16,-100.41){\circle*{0.000001}} \put(26.87,-100.41){\circle*{0.000001}} \put(27.58,-100.41){\circle*{0.000001}} \put(28.28,-99.70){\circle*{0.000001}} \put(28.99,-99.70){\circle*{0.000001}} \put( 3.54,-108.89){\circle*{0.000001}} \put( 3.54,-108.19){\circle*{0.000001}} \put( 2.83,-107.48){\circle*{0.000001}} \put( 2.83,-106.77){\circle*{0.000001}} \put( 2.12,-106.07){\circle*{0.000001}} \put( 2.12,-105.36){\circle*{0.000001}} \put( 1.41,-104.65){\circle*{0.000001}} \put( 1.41,-103.94){\circle*{0.000001}} \put( 0.71,-103.24){\circle*{0.000001}} \put( 0.71,-102.53){\circle*{0.000001}} \put( 0.00,-101.82){\circle*{0.000001}} \put( 0.00,-101.12){\circle*{0.000001}} \put(-0.71,-100.41){\circle*{0.000001}} \put(-0.71,-99.70){\circle*{0.000001}} \put(-1.41,-98.99){\circle*{0.000001}} \put(-1.41,-98.29){\circle*{0.000001}} \put(-2.12,-97.58){\circle*{0.000001}} \put(-2.12,-96.87){\circle*{0.000001}} \put(-2.12,-96.17){\circle*{0.000001}} \put(-2.83,-95.46){\circle*{0.000001}} \put(-2.83,-94.75){\circle*{0.000001}} \put(-3.54,-94.05){\circle*{0.000001}} \put(-3.54,-93.34){\circle*{0.000001}} \put(-4.24,-92.63){\circle*{0.000001}} \put(-4.24,-91.92){\circle*{0.000001}} \put(-4.95,-91.22){\circle*{0.000001}} \put(-4.95,-90.51){\circle*{0.000001}} \put(-5.66,-89.80){\circle*{0.000001}} \put(-5.66,-89.10){\circle*{0.000001}} \put(-6.36,-88.39){\circle*{0.000001}} \put(-6.36,-87.68){\circle*{0.000001}} \put(-7.07,-86.97){\circle*{0.000001}} \put(-7.07,-86.27){\circle*{0.000001}} \put(-7.78,-85.56){\circle*{0.000001}} \put(-7.78,-84.85){\circle*{0.000001}} \put(-8.49,-84.15){\circle*{0.000001}} \put(-8.49,-83.44){\circle*{0.000001}} \put(-22.63,-111.72){\circle*{0.000001}} \put(-22.63,-111.02){\circle*{0.000001}} \put(-21.92,-110.31){\circle*{0.000001}} \put(-21.92,-109.60){\circle*{0.000001}} \put(-21.21,-108.89){\circle*{0.000001}} \put(-21.21,-108.19){\circle*{0.000001}} \put(-20.51,-107.48){\circle*{0.000001}} \put(-20.51,-106.77){\circle*{0.000001}} \put(-19.80,-106.07){\circle*{0.000001}} \put(-19.80,-105.36){\circle*{0.000001}} \put(-19.09,-104.65){\circle*{0.000001}} \put(-19.09,-103.94){\circle*{0.000001}} \put(-18.38,-103.24){\circle*{0.000001}} \put(-18.38,-102.53){\circle*{0.000001}} \put(-17.68,-101.82){\circle*{0.000001}} \put(-17.68,-101.12){\circle*{0.000001}} \put(-16.97,-100.41){\circle*{0.000001}} \put(-16.97,-99.70){\circle*{0.000001}} \put(-16.26,-98.99){\circle*{0.000001}} \put(-16.26,-98.29){\circle*{0.000001}} \put(-15.56,-97.58){\circle*{0.000001}} \put(-15.56,-96.87){\circle*{0.000001}} \put(-14.85,-96.17){\circle*{0.000001}} \put(-14.85,-95.46){\circle*{0.000001}} \put(-14.14,-94.75){\circle*{0.000001}} \put(-14.14,-94.05){\circle*{0.000001}} \put(-13.44,-93.34){\circle*{0.000001}} 
\put(-13.44,-92.63){\circle*{0.000001}} \put(-12.73,-91.92){\circle*{0.000001}} \put(-12.73,-91.22){\circle*{0.000001}} \put(-12.02,-90.51){\circle*{0.000001}} \put(-12.02,-89.80){\circle*{0.000001}} \put(-11.31,-89.10){\circle*{0.000001}} \put(-11.31,-88.39){\circle*{0.000001}} \put(-10.61,-87.68){\circle*{0.000001}} \put(-10.61,-86.97){\circle*{0.000001}} \put(-9.90,-86.27){\circle*{0.000001}} \put(-9.90,-85.56){\circle*{0.000001}} \put(-9.19,-84.85){\circle*{0.000001}} \put(-9.19,-84.15){\circle*{0.000001}} \put(-8.49,-83.44){\circle*{0.000001}} \put(-31.82,-144.96){\circle*{0.000001}} \put(-31.82,-144.25){\circle*{0.000001}} \put(-31.11,-143.54){\circle*{0.000001}} \put(-31.11,-142.84){\circle*{0.000001}} \put(-31.11,-142.13){\circle*{0.000001}} \put(-31.11,-141.42){\circle*{0.000001}} \put(-30.41,-140.71){\circle*{0.000001}} \put(-30.41,-140.01){\circle*{0.000001}} \put(-30.41,-139.30){\circle*{0.000001}} \put(-30.41,-138.59){\circle*{0.000001}} \put(-29.70,-137.89){\circle*{0.000001}} \put(-29.70,-137.18){\circle*{0.000001}} \put(-29.70,-136.47){\circle*{0.000001}} \put(-28.99,-135.76){\circle*{0.000001}} \put(-28.99,-135.06){\circle*{0.000001}} \put(-28.99,-134.35){\circle*{0.000001}} \put(-28.99,-133.64){\circle*{0.000001}} \put(-28.28,-132.94){\circle*{0.000001}} \put(-28.28,-132.23){\circle*{0.000001}} \put(-28.28,-131.52){\circle*{0.000001}} \put(-27.58,-130.81){\circle*{0.000001}} \put(-27.58,-130.11){\circle*{0.000001}} \put(-27.58,-129.40){\circle*{0.000001}} \put(-27.58,-128.69){\circle*{0.000001}} \put(-26.87,-127.99){\circle*{0.000001}} \put(-26.87,-127.28){\circle*{0.000001}} \put(-26.87,-126.57){\circle*{0.000001}} \put(-26.87,-125.87){\circle*{0.000001}} \put(-26.16,-125.16){\circle*{0.000001}} \put(-26.16,-124.45){\circle*{0.000001}} \put(-26.16,-123.74){\circle*{0.000001}} \put(-25.46,-123.04){\circle*{0.000001}} \put(-25.46,-122.33){\circle*{0.000001}} \put(-25.46,-121.62){\circle*{0.000001}} \put(-25.46,-120.92){\circle*{0.000001}} \put(-24.75,-120.21){\circle*{0.000001}} \put(-24.75,-119.50){\circle*{0.000001}} \put(-24.75,-118.79){\circle*{0.000001}} \put(-24.04,-118.09){\circle*{0.000001}} \put(-24.04,-117.38){\circle*{0.000001}} \put(-24.04,-116.67){\circle*{0.000001}} \put(-24.04,-115.97){\circle*{0.000001}} \put(-23.33,-115.26){\circle*{0.000001}} \put(-23.33,-114.55){\circle*{0.000001}} \put(-23.33,-113.84){\circle*{0.000001}} \put(-23.33,-113.14){\circle*{0.000001}} \put(-22.63,-112.43){\circle*{0.000001}} \put(-22.63,-111.72){\circle*{0.000001}} \put(-39.60,-182.43){\circle*{0.000001}} \put(-39.60,-181.73){\circle*{0.000001}} \put(-39.60,-181.02){\circle*{0.000001}} \put(-38.89,-180.31){\circle*{0.000001}} \put(-38.89,-179.61){\circle*{0.000001}} \put(-38.89,-178.90){\circle*{0.000001}} \put(-38.89,-178.19){\circle*{0.000001}} \put(-38.89,-177.48){\circle*{0.000001}} \put(-38.18,-176.78){\circle*{0.000001}} \put(-38.18,-176.07){\circle*{0.000001}} \put(-38.18,-175.36){\circle*{0.000001}} \put(-38.18,-174.66){\circle*{0.000001}} \put(-38.18,-173.95){\circle*{0.000001}} \put(-37.48,-173.24){\circle*{0.000001}} \put(-37.48,-172.53){\circle*{0.000001}} \put(-37.48,-171.83){\circle*{0.000001}} \put(-37.48,-171.12){\circle*{0.000001}} \put(-36.77,-170.41){\circle*{0.000001}} \put(-36.77,-169.71){\circle*{0.000001}} \put(-36.77,-169.00){\circle*{0.000001}} \put(-36.77,-168.29){\circle*{0.000001}} \put(-36.77,-167.58){\circle*{0.000001}} \put(-36.06,-166.88){\circle*{0.000001}} \put(-36.06,-166.17){\circle*{0.000001}} \put(-36.06,-165.46){\circle*{0.000001}} 
\put(-36.06,-164.76){\circle*{0.000001}} \put(-36.06,-164.05){\circle*{0.000001}} \put(-35.36,-163.34){\circle*{0.000001}} \put(-35.36,-162.63){\circle*{0.000001}} \put(-35.36,-161.93){\circle*{0.000001}} \put(-35.36,-161.22){\circle*{0.000001}} \put(-35.36,-160.51){\circle*{0.000001}} \put(-34.65,-159.81){\circle*{0.000001}} \put(-34.65,-159.10){\circle*{0.000001}} \put(-34.65,-158.39){\circle*{0.000001}} \put(-34.65,-157.68){\circle*{0.000001}} \put(-34.65,-156.98){\circle*{0.000001}} \put(-33.94,-156.27){\circle*{0.000001}} \put(-33.94,-155.56){\circle*{0.000001}} \put(-33.94,-154.86){\circle*{0.000001}} \put(-33.94,-154.15){\circle*{0.000001}} \put(-33.23,-153.44){\circle*{0.000001}} \put(-33.23,-152.74){\circle*{0.000001}} \put(-33.23,-152.03){\circle*{0.000001}} \put(-33.23,-151.32){\circle*{0.000001}} \put(-33.23,-150.61){\circle*{0.000001}} \put(-32.53,-149.91){\circle*{0.000001}} \put(-32.53,-149.20){\circle*{0.000001}} \put(-32.53,-148.49){\circle*{0.000001}} \put(-32.53,-147.79){\circle*{0.000001}} \put(-32.53,-147.08){\circle*{0.000001}} \put(-31.82,-146.37){\circle*{0.000001}} \put(-31.82,-145.66){\circle*{0.000001}} \put(-31.82,-144.96){\circle*{0.000001}} \put(-36.77,-224.86){\circle*{0.000001}} \put(-36.77,-224.15){\circle*{0.000001}} \put(-36.77,-223.45){\circle*{0.000001}} \put(-36.77,-222.74){\circle*{0.000001}} \put(-36.77,-222.03){\circle*{0.000001}} \put(-36.77,-221.32){\circle*{0.000001}} \put(-36.77,-220.62){\circle*{0.000001}} \put(-36.77,-219.91){\circle*{0.000001}} \put(-37.48,-219.20){\circle*{0.000001}} \put(-37.48,-218.50){\circle*{0.000001}} \put(-37.48,-217.79){\circle*{0.000001}} \put(-37.48,-217.08){\circle*{0.000001}} \put(-37.48,-216.37){\circle*{0.000001}} \put(-37.48,-215.67){\circle*{0.000001}} \put(-37.48,-214.96){\circle*{0.000001}} \put(-37.48,-214.25){\circle*{0.000001}} \put(-37.48,-213.55){\circle*{0.000001}} \put(-37.48,-212.84){\circle*{0.000001}} \put(-37.48,-212.13){\circle*{0.000001}} \put(-37.48,-211.42){\circle*{0.000001}} \put(-37.48,-210.72){\circle*{0.000001}} \put(-37.48,-210.01){\circle*{0.000001}} \put(-37.48,-209.30){\circle*{0.000001}} \put(-38.18,-208.60){\circle*{0.000001}} \put(-38.18,-207.89){\circle*{0.000001}} \put(-38.18,-207.18){\circle*{0.000001}} \put(-38.18,-206.48){\circle*{0.000001}} \put(-38.18,-205.77){\circle*{0.000001}} \put(-38.18,-205.06){\circle*{0.000001}} \put(-38.18,-204.35){\circle*{0.000001}} \put(-38.18,-203.65){\circle*{0.000001}} \put(-38.18,-202.94){\circle*{0.000001}} \put(-38.18,-202.23){\circle*{0.000001}} \put(-38.18,-201.53){\circle*{0.000001}} \put(-38.18,-200.82){\circle*{0.000001}} \put(-38.18,-200.11){\circle*{0.000001}} \put(-38.18,-199.40){\circle*{0.000001}} \put(-38.18,-198.70){\circle*{0.000001}} \put(-38.89,-197.99){\circle*{0.000001}} \put(-38.89,-197.28){\circle*{0.000001}} \put(-38.89,-196.58){\circle*{0.000001}} \put(-38.89,-195.87){\circle*{0.000001}} \put(-38.89,-195.16){\circle*{0.000001}} \put(-38.89,-194.45){\circle*{0.000001}} \put(-38.89,-193.75){\circle*{0.000001}} \put(-38.89,-193.04){\circle*{0.000001}} \put(-38.89,-192.33){\circle*{0.000001}} \put(-38.89,-191.63){\circle*{0.000001}} \put(-38.89,-190.92){\circle*{0.000001}} \put(-38.89,-190.21){\circle*{0.000001}} \put(-38.89,-189.50){\circle*{0.000001}} \put(-38.89,-188.80){\circle*{0.000001}} \put(-38.89,-188.09){\circle*{0.000001}} \put(-39.60,-187.38){\circle*{0.000001}} \put(-39.60,-186.68){\circle*{0.000001}} \put(-39.60,-185.97){\circle*{0.000001}} \put(-39.60,-185.26){\circle*{0.000001}} 
\put(-39.60,-184.55){\circle*{0.000001}} \put(-39.60,-183.85){\circle*{0.000001}} \put(-39.60,-183.14){\circle*{0.000001}} \put(-39.60,-182.43){\circle*{0.000001}} \put(-8.49,-262.34){\circle*{0.000001}} \put(-9.19,-261.63){\circle*{0.000001}} \put(-9.90,-260.92){\circle*{0.000001}} \put(-9.90,-260.22){\circle*{0.000001}} \put(-10.61,-259.51){\circle*{0.000001}} \put(-11.31,-258.80){\circle*{0.000001}} \put(-12.02,-258.09){\circle*{0.000001}} \put(-12.02,-257.39){\circle*{0.000001}} \put(-12.73,-256.68){\circle*{0.000001}} \put(-13.44,-255.97){\circle*{0.000001}} \put(-14.14,-255.27){\circle*{0.000001}} \put(-14.14,-254.56){\circle*{0.000001}} \put(-14.85,-253.85){\circle*{0.000001}} \put(-15.56,-253.14){\circle*{0.000001}} \put(-16.26,-252.44){\circle*{0.000001}} \put(-16.26,-251.73){\circle*{0.000001}} \put(-16.97,-251.02){\circle*{0.000001}} \put(-17.68,-250.32){\circle*{0.000001}} \put(-18.38,-249.61){\circle*{0.000001}} \put(-18.38,-248.90){\circle*{0.000001}} \put(-19.09,-248.19){\circle*{0.000001}} \put(-19.80,-247.49){\circle*{0.000001}} \put(-20.51,-246.78){\circle*{0.000001}} \put(-20.51,-246.07){\circle*{0.000001}} \put(-21.21,-245.37){\circle*{0.000001}} \put(-21.92,-244.66){\circle*{0.000001}} \put(-22.63,-243.95){\circle*{0.000001}} \put(-22.63,-243.24){\circle*{0.000001}} \put(-23.33,-242.54){\circle*{0.000001}} \put(-24.04,-241.83){\circle*{0.000001}} \put(-24.75,-241.12){\circle*{0.000001}} \put(-24.75,-240.42){\circle*{0.000001}} \put(-25.46,-239.71){\circle*{0.000001}} \put(-26.16,-239.00){\circle*{0.000001}} \put(-26.87,-238.29){\circle*{0.000001}} \put(-26.87,-237.59){\circle*{0.000001}} \put(-27.58,-236.88){\circle*{0.000001}} \put(-28.28,-236.17){\circle*{0.000001}} \put(-28.99,-235.47){\circle*{0.000001}} \put(-28.99,-234.76){\circle*{0.000001}} \put(-29.70,-234.05){\circle*{0.000001}} \put(-30.41,-233.35){\circle*{0.000001}} \put(-31.11,-232.64){\circle*{0.000001}} \put(-31.11,-231.93){\circle*{0.000001}} \put(-31.82,-231.22){\circle*{0.000001}} \put(-32.53,-230.52){\circle*{0.000001}} \put(-33.23,-229.81){\circle*{0.000001}} \put(-33.23,-229.10){\circle*{0.000001}} \put(-33.94,-228.40){\circle*{0.000001}} \put(-34.65,-227.69){\circle*{0.000001}} \put(-35.36,-226.98){\circle*{0.000001}} \put(-35.36,-226.27){\circle*{0.000001}} \put(-36.06,-225.57){\circle*{0.000001}} \put(-36.77,-224.86){\circle*{0.000001}} \put(-8.49,-262.34){\circle*{0.000001}} \put(-7.78,-262.34){\circle*{0.000001}} \put(-7.07,-261.63){\circle*{0.000001}} \put(-6.36,-261.63){\circle*{0.000001}} \put(-5.66,-260.92){\circle*{0.000001}} \put(-4.95,-260.92){\circle*{0.000001}} \put(-4.24,-260.22){\circle*{0.000001}} \put(-3.54,-260.22){\circle*{0.000001}} \put(-2.83,-259.51){\circle*{0.000001}} \put(-2.12,-259.51){\circle*{0.000001}} \put(-1.41,-258.80){\circle*{0.000001}} \put(-0.71,-258.80){\circle*{0.000001}} \put( 0.00,-258.09){\circle*{0.000001}} \put( 0.71,-258.09){\circle*{0.000001}} \put( 1.41,-257.39){\circle*{0.000001}} \put( 2.12,-257.39){\circle*{0.000001}} \put( 2.83,-256.68){\circle*{0.000001}} \put( 3.54,-256.68){\circle*{0.000001}} \put( 4.24,-255.97){\circle*{0.000001}} \put( 4.95,-255.97){\circle*{0.000001}} \put( 5.66,-255.27){\circle*{0.000001}} \put( 6.36,-255.27){\circle*{0.000001}} \put( 7.07,-254.56){\circle*{0.000001}} \put( 7.78,-254.56){\circle*{0.000001}} \put( 8.49,-253.85){\circle*{0.000001}} \put( 9.19,-253.85){\circle*{0.000001}} \put( 9.90,-253.14){\circle*{0.000001}} \put(10.61,-253.14){\circle*{0.000001}} \put(11.31,-252.44){\circle*{0.000001}} 
\put(12.02,-252.44){\circle*{0.000001}} \put(12.73,-251.73){\circle*{0.000001}} \put(13.44,-251.73){\circle*{0.000001}} \put(14.14,-251.02){\circle*{0.000001}} \put(14.85,-251.02){\circle*{0.000001}} \put(15.56,-250.32){\circle*{0.000001}} \put(16.26,-250.32){\circle*{0.000001}} \put(16.97,-249.61){\circle*{0.000001}} \put(17.68,-249.61){\circle*{0.000001}} \put(18.38,-248.90){\circle*{0.000001}} \put(19.09,-248.90){\circle*{0.000001}} \put(19.80,-248.19){\circle*{0.000001}} \put(20.51,-248.19){\circle*{0.000001}} \put(21.21,-247.49){\circle*{0.000001}} \put(21.92,-247.49){\circle*{0.000001}} \put(22.63,-246.78){\circle*{0.000001}} \put(23.33,-246.78){\circle*{0.000001}} \put(24.04,-246.07){\circle*{0.000001}} \put(24.75,-246.07){\circle*{0.000001}} \put(25.46,-245.37){\circle*{0.000001}} \put(26.16,-245.37){\circle*{0.000001}} \put(26.87,-244.66){\circle*{0.000001}} \put(27.58,-244.66){\circle*{0.000001}} \put(28.28,-243.95){\circle*{0.000001}} \put(28.99,-243.95){\circle*{0.000001}} \put(29.70,-243.24){\circle*{0.000001}} \put(30.41,-243.24){\circle*{0.000001}} \put(31.11,-242.54){\circle*{0.000001}} \put(31.82,-242.54){\circle*{0.000001}} \put(32.53,-241.83){\circle*{0.000001}} \put(33.23,-241.83){\circle*{0.000001}} \put(33.94,-241.12){\circle*{0.000001}} \put(34.65,-241.12){\circle*{0.000001}} \put(35.36,-240.42){\circle*{0.000001}} \put(36.06,-240.42){\circle*{0.000001}} \put(36.77,-239.71){\circle*{0.000001}} \put( 8.49,-286.38){\circle*{0.000001}} \put( 9.19,-285.67){\circle*{0.000001}} \put( 9.19,-284.96){\circle*{0.000001}} \put( 9.90,-284.26){\circle*{0.000001}} \put( 9.90,-283.55){\circle*{0.000001}} \put(10.61,-282.84){\circle*{0.000001}} \put(11.31,-282.14){\circle*{0.000001}} \put(11.31,-281.43){\circle*{0.000001}} \put(12.02,-280.72){\circle*{0.000001}} \put(12.02,-280.01){\circle*{0.000001}} \put(12.73,-279.31){\circle*{0.000001}} \put(13.44,-278.60){\circle*{0.000001}} \put(13.44,-277.89){\circle*{0.000001}} \put(14.14,-277.19){\circle*{0.000001}} \put(14.14,-276.48){\circle*{0.000001}} \put(14.85,-275.77){\circle*{0.000001}} \put(15.56,-275.06){\circle*{0.000001}} \put(15.56,-274.36){\circle*{0.000001}} \put(16.26,-273.65){\circle*{0.000001}} \put(16.97,-272.94){\circle*{0.000001}} \put(16.97,-272.24){\circle*{0.000001}} \put(17.68,-271.53){\circle*{0.000001}} \put(17.68,-270.82){\circle*{0.000001}} \put(18.38,-270.11){\circle*{0.000001}} \put(19.09,-269.41){\circle*{0.000001}} \put(19.09,-268.70){\circle*{0.000001}} \put(19.80,-267.99){\circle*{0.000001}} \put(19.80,-267.29){\circle*{0.000001}} \put(20.51,-266.58){\circle*{0.000001}} \put(21.21,-265.87){\circle*{0.000001}} \put(21.21,-265.17){\circle*{0.000001}} \put(21.92,-264.46){\circle*{0.000001}} \put(21.92,-263.75){\circle*{0.000001}} \put(22.63,-263.04){\circle*{0.000001}} \put(23.33,-262.34){\circle*{0.000001}} \put(23.33,-261.63){\circle*{0.000001}} \put(24.04,-260.92){\circle*{0.000001}} \put(24.04,-260.22){\circle*{0.000001}} \put(24.75,-259.51){\circle*{0.000001}} \put(25.46,-258.80){\circle*{0.000001}} \put(25.46,-258.09){\circle*{0.000001}} \put(26.16,-257.39){\circle*{0.000001}} \put(26.16,-256.68){\circle*{0.000001}} \put(26.87,-255.97){\circle*{0.000001}} \put(27.58,-255.27){\circle*{0.000001}} \put(27.58,-254.56){\circle*{0.000001}} \put(28.28,-253.85){\circle*{0.000001}} \put(28.28,-253.14){\circle*{0.000001}} \put(28.99,-252.44){\circle*{0.000001}} \put(29.70,-251.73){\circle*{0.000001}} \put(29.70,-251.02){\circle*{0.000001}} \put(30.41,-250.32){\circle*{0.000001}} 
\put(31.11,-249.61){\circle*{0.000001}} \put(31.11,-248.90){\circle*{0.000001}} \put(31.82,-248.19){\circle*{0.000001}} \put(31.82,-247.49){\circle*{0.000001}} \put(32.53,-246.78){\circle*{0.000001}} \put(33.23,-246.07){\circle*{0.000001}} \put(33.23,-245.37){\circle*{0.000001}} \put(33.94,-244.66){\circle*{0.000001}} \put(33.94,-243.95){\circle*{0.000001}} \put(34.65,-243.24){\circle*{0.000001}} \put(35.36,-242.54){\circle*{0.000001}} \put(35.36,-241.83){\circle*{0.000001}} \put(36.06,-241.12){\circle*{0.000001}} \put(36.06,-240.42){\circle*{0.000001}} \put(36.77,-239.71){\circle*{0.000001}} \put(-47.38,-290.62){\circle*{0.000001}} \put(-46.67,-290.62){\circle*{0.000001}} \put(-45.96,-290.62){\circle*{0.000001}} \put(-45.25,-290.62){\circle*{0.000001}} \put(-44.55,-290.62){\circle*{0.000001}} \put(-43.84,-290.62){\circle*{0.000001}} \put(-43.13,-290.62){\circle*{0.000001}} \put(-42.43,-289.91){\circle*{0.000001}} \put(-41.72,-289.91){\circle*{0.000001}} \put(-41.01,-289.91){\circle*{0.000001}} \put(-40.31,-289.91){\circle*{0.000001}} \put(-39.60,-289.91){\circle*{0.000001}} \put(-38.89,-289.91){\circle*{0.000001}} \put(-38.18,-289.91){\circle*{0.000001}} \put(-37.48,-289.91){\circle*{0.000001}} \put(-36.77,-289.91){\circle*{0.000001}} \put(-36.06,-289.91){\circle*{0.000001}} \put(-35.36,-289.91){\circle*{0.000001}} \put(-34.65,-289.91){\circle*{0.000001}} \put(-33.94,-289.91){\circle*{0.000001}} \put(-33.23,-289.21){\circle*{0.000001}} \put(-32.53,-289.21){\circle*{0.000001}} \put(-31.82,-289.21){\circle*{0.000001}} \put(-31.11,-289.21){\circle*{0.000001}} \put(-30.41,-289.21){\circle*{0.000001}} \put(-29.70,-289.21){\circle*{0.000001}} \put(-28.99,-289.21){\circle*{0.000001}} \put(-28.28,-289.21){\circle*{0.000001}} \put(-27.58,-289.21){\circle*{0.000001}} \put(-26.87,-289.21){\circle*{0.000001}} \put(-26.16,-289.21){\circle*{0.000001}} \put(-25.46,-289.21){\circle*{0.000001}} \put(-24.75,-289.21){\circle*{0.000001}} \put(-24.04,-288.50){\circle*{0.000001}} \put(-23.33,-288.50){\circle*{0.000001}} \put(-22.63,-288.50){\circle*{0.000001}} \put(-21.92,-288.50){\circle*{0.000001}} \put(-21.21,-288.50){\circle*{0.000001}} \put(-20.51,-288.50){\circle*{0.000001}} \put(-19.80,-288.50){\circle*{0.000001}} \put(-19.09,-288.50){\circle*{0.000001}} \put(-18.38,-288.50){\circle*{0.000001}} \put(-17.68,-288.50){\circle*{0.000001}} \put(-16.97,-288.50){\circle*{0.000001}} \put(-16.26,-288.50){\circle*{0.000001}} \put(-15.56,-288.50){\circle*{0.000001}} \put(-14.85,-288.50){\circle*{0.000001}} \put(-14.14,-287.79){\circle*{0.000001}} \put(-13.44,-287.79){\circle*{0.000001}} \put(-12.73,-287.79){\circle*{0.000001}} \put(-12.02,-287.79){\circle*{0.000001}} \put(-11.31,-287.79){\circle*{0.000001}} \put(-10.61,-287.79){\circle*{0.000001}} \put(-9.90,-287.79){\circle*{0.000001}} \put(-9.19,-287.79){\circle*{0.000001}} \put(-8.49,-287.79){\circle*{0.000001}} \put(-7.78,-287.79){\circle*{0.000001}} \put(-7.07,-287.79){\circle*{0.000001}} \put(-6.36,-287.79){\circle*{0.000001}} \put(-5.66,-287.79){\circle*{0.000001}} \put(-4.95,-287.09){\circle*{0.000001}} \put(-4.24,-287.09){\circle*{0.000001}} \put(-3.54,-287.09){\circle*{0.000001}} \put(-2.83,-287.09){\circle*{0.000001}} \put(-2.12,-287.09){\circle*{0.000001}} \put(-1.41,-287.09){\circle*{0.000001}} \put(-0.71,-287.09){\circle*{0.000001}} \put( 0.00,-287.09){\circle*{0.000001}} \put( 0.71,-287.09){\circle*{0.000001}} \put( 1.41,-287.09){\circle*{0.000001}} \put( 2.12,-287.09){\circle*{0.000001}} \put( 2.83,-287.09){\circle*{0.000001}} \put( 
3.54,-287.09){\circle*{0.000001}} \put( 4.24,-286.38){\circle*{0.000001}} \put( 4.95,-286.38){\circle*{0.000001}} \put( 5.66,-286.38){\circle*{0.000001}} \put( 6.36,-286.38){\circle*{0.000001}} \put( 7.07,-286.38){\circle*{0.000001}} \put( 7.78,-286.38){\circle*{0.000001}} \put( 8.49,-286.38){\circle*{0.000001}} \put(-47.38,-290.62){\circle*{0.000001}} \put(-47.38,-289.91){\circle*{0.000001}} \put(-48.79,-294.86){\circle*{0.000001}} \put(-48.79,-294.16){\circle*{0.000001}} \put(-48.08,-293.45){\circle*{0.000001}} \put(-48.08,-292.74){\circle*{0.000001}} \put(-48.08,-292.04){\circle*{0.000001}} \put(-48.08,-291.33){\circle*{0.000001}} \put(-47.38,-290.62){\circle*{0.000001}} \put(-47.38,-289.91){\circle*{0.000001}} \put(-48.79,-294.86){\circle*{0.000001}} \put(-48.08,-294.86){\circle*{0.000001}} \put(-47.38,-294.86){\circle*{0.000001}} \put(-47.38,-294.86){\circle*{0.000001}} \put(-47.38,-294.16){\circle*{0.000001}} \put(-47.38,-293.45){\circle*{0.000001}} \put(-47.38,-293.45){\circle*{0.000001}} \put(-46.67,-293.45){\circle*{0.000001}} \put(-45.96,-294.86){\circle*{0.000001}} \put(-45.96,-294.16){\circle*{0.000001}} \put(-46.67,-293.45){\circle*{0.000001}} \put(-45.96,-294.86){\circle*{0.000001}} \put(-45.25,-294.16){\circle*{0.000001}} \put(-45.25,-293.45){\circle*{0.000001}} \put(-44.55,-292.74){\circle*{0.000001}} \put(-44.55,-292.74){\circle*{0.000001}} \put(-49.50,-289.91){\circle*{0.000001}} \put(-48.79,-290.62){\circle*{0.000001}} \put(-48.08,-290.62){\circle*{0.000001}} \put(-47.38,-291.33){\circle*{0.000001}} \put(-46.67,-291.33){\circle*{0.000001}} \put(-45.96,-292.04){\circle*{0.000001}} \put(-45.25,-292.04){\circle*{0.000001}} \put(-44.55,-292.74){\circle*{0.000001}} \put(-49.50,-289.91){\circle*{0.000001}} \put(-48.79,-289.91){\circle*{0.000001}} \put(-48.08,-289.91){\circle*{0.000001}} \put(-47.38,-289.91){\circle*{0.000001}} \put(-46.67,-289.91){\circle*{0.000001}} \put(-45.96,-289.91){\circle*{0.000001}} \put(-45.25,-289.91){\circle*{0.000001}} \put(-44.55,-289.91){\circle*{0.000001}} \put(-43.84,-289.91){\circle*{0.000001}} \put(-43.13,-289.91){\circle*{0.000001}} \put(-42.43,-289.91){\circle*{0.000001}} \put(-41.72,-289.91){\circle*{0.000001}} \put(-41.01,-289.91){\circle*{0.000001}} \put(-40.31,-289.91){\circle*{0.000001}} \put(-39.60,-289.91){\circle*{0.000001}} \put(-39.60,-289.91){\circle*{0.000001}} \put(-38.89,-290.62){\circle*{0.000001}} \put(-38.18,-290.62){\circle*{0.000001}} \put(-37.48,-291.33){\circle*{0.000001}} \put(-36.77,-291.33){\circle*{0.000001}} \put(-36.06,-292.04){\circle*{0.000001}} \put(-35.36,-292.74){\circle*{0.000001}} \put(-34.65,-292.74){\circle*{0.000001}} \put(-33.94,-293.45){\circle*{0.000001}} \put(-33.23,-294.16){\circle*{0.000001}} \put(-32.53,-294.16){\circle*{0.000001}} \put(-31.82,-294.86){\circle*{0.000001}} \put(-31.11,-294.86){\circle*{0.000001}} \put(-30.41,-295.57){\circle*{0.000001}} \put(-43.84,-290.62){\circle*{0.000001}} \put(-43.13,-290.62){\circle*{0.000001}} \put(-42.43,-291.33){\circle*{0.000001}} \put(-41.72,-291.33){\circle*{0.000001}} \put(-41.01,-291.33){\circle*{0.000001}} \put(-40.31,-292.04){\circle*{0.000001}} \put(-39.60,-292.04){\circle*{0.000001}} \put(-38.89,-292.74){\circle*{0.000001}} \put(-38.18,-292.74){\circle*{0.000001}} \put(-37.48,-292.74){\circle*{0.000001}} \put(-36.77,-293.45){\circle*{0.000001}} \put(-36.06,-293.45){\circle*{0.000001}} \put(-35.36,-293.45){\circle*{0.000001}} \put(-34.65,-294.16){\circle*{0.000001}} \put(-33.94,-294.16){\circle*{0.000001}} \put(-33.23,-294.86){\circle*{0.000001}} 
\put(-138.59,-224.86){\circle*{0.000001}} \put(-137.89,-224.15){\circle*{0.000001}} \put(-137.89,-223.45){\circle*{0.000001}} \put(-137.89,-222.74){\circle*{0.000001}} \put(-137.89,-222.03){\circle*{0.000001}} \put(-137.89,-221.32){\circle*{0.000001}} \put(-137.89,-220.62){\circle*{0.000001}} \put(-137.89,-219.91){\circle*{0.000001}} \put(-137.89,-219.20){\circle*{0.000001}} \put(-137.89,-218.50){\circle*{0.000001}} \put(-137.18,-217.79){\circle*{0.000001}} \put(-137.18,-217.08){\circle*{0.000001}} \put(-137.18,-216.37){\circle*{0.000001}} \put(-137.18,-215.67){\circle*{0.000001}} \put(-137.18,-214.96){\circle*{0.000001}} \put(-137.18,-214.25){\circle*{0.000001}} \put(-137.18,-213.55){\circle*{0.000001}} \put(-137.18,-212.84){\circle*{0.000001}} \put(-136.47,-212.13){\circle*{0.000001}} \put(-136.47,-211.42){\circle*{0.000001}} \put(-136.47,-210.72){\circle*{0.000001}} \put(-136.47,-210.01){\circle*{0.000001}} \put(-136.47,-209.30){\circle*{0.000001}} \put(-136.47,-208.60){\circle*{0.000001}} \put(-136.47,-207.89){\circle*{0.000001}} \put(-136.47,-207.18){\circle*{0.000001}} \put(-136.47,-206.48){\circle*{0.000001}} \put(-135.76,-205.77){\circle*{0.000001}} \put(-135.76,-205.06){\circle*{0.000001}} \put(-135.76,-204.35){\circle*{0.000001}} \put(-135.76,-203.65){\circle*{0.000001}} \put(-135.76,-202.94){\circle*{0.000001}} \put(-135.76,-202.94){\circle*{0.000001}} \put(-135.76,-202.23){\circle*{0.000001}} \put(-136.47,-201.53){\circle*{0.000001}} \put(-136.47,-200.82){\circle*{0.000001}} \put(-137.18,-200.11){\circle*{0.000001}} \put(-137.18,-199.40){\circle*{0.000001}} \put(-137.18,-198.70){\circle*{0.000001}} \put(-137.89,-197.99){\circle*{0.000001}} \put(-137.89,-197.28){\circle*{0.000001}} \put(-138.59,-196.58){\circle*{0.000001}} \put(-138.59,-195.87){\circle*{0.000001}} \put(-139.30,-195.16){\circle*{0.000001}} \put(-139.30,-194.45){\circle*{0.000001}} \put(-139.30,-193.75){\circle*{0.000001}} \put(-140.01,-193.04){\circle*{0.000001}} \put(-140.01,-192.33){\circle*{0.000001}} \put(-140.71,-191.63){\circle*{0.000001}} \put(-140.71,-190.92){\circle*{0.000001}} \put(-140.71,-190.21){\circle*{0.000001}} \put(-141.42,-189.50){\circle*{0.000001}} \put(-141.42,-188.80){\circle*{0.000001}} \put(-142.13,-188.09){\circle*{0.000001}} \put(-142.13,-187.38){\circle*{0.000001}} \put(-142.84,-186.68){\circle*{0.000001}} \put(-142.84,-185.97){\circle*{0.000001}} \put(-142.84,-185.26){\circle*{0.000001}} \put(-143.54,-184.55){\circle*{0.000001}} \put(-143.54,-183.85){\circle*{0.000001}} \put(-144.25,-183.14){\circle*{0.000001}} \put(-144.25,-182.43){\circle*{0.000001}} \put(-144.25,-181.73){\circle*{0.000001}} \put(-144.96,-181.02){\circle*{0.000001}} \put(-144.96,-180.31){\circle*{0.000001}} \put(-145.66,-179.61){\circle*{0.000001}} \put(-145.66,-178.90){\circle*{0.000001}} \put(-146.37,-178.19){\circle*{0.000001}} \put(-146.37,-177.48){\circle*{0.000001}} \put(-146.37,-176.78){\circle*{0.000001}} \put(-147.08,-176.07){\circle*{0.000001}} \put(-147.08,-175.36){\circle*{0.000001}} \put(-147.79,-174.66){\circle*{0.000001}} \put(-147.79,-173.95){\circle*{0.000001}} \put(-178.90,-189.50){\circle*{0.000001}} \put(-178.19,-189.50){\circle*{0.000001}} \put(-177.48,-188.80){\circle*{0.000001}} \put(-176.78,-188.80){\circle*{0.000001}} \put(-176.07,-188.09){\circle*{0.000001}} \put(-175.36,-188.09){\circle*{0.000001}} \put(-174.66,-187.38){\circle*{0.000001}} \put(-173.95,-187.38){\circle*{0.000001}} \put(-173.24,-186.68){\circle*{0.000001}} \put(-172.53,-186.68){\circle*{0.000001}} 
\put(-171.83,-185.97){\circle*{0.000001}} \put(-171.12,-185.97){\circle*{0.000001}} \put(-170.41,-185.26){\circle*{0.000001}} \put(-169.71,-185.26){\circle*{0.000001}} \put(-169.00,-184.55){\circle*{0.000001}} \put(-168.29,-184.55){\circle*{0.000001}} \put(-167.58,-183.85){\circle*{0.000001}} \put(-166.88,-183.85){\circle*{0.000001}} \put(-166.17,-183.14){\circle*{0.000001}} \put(-165.46,-183.14){\circle*{0.000001}} \put(-164.76,-182.43){\circle*{0.000001}} \put(-164.05,-182.43){\circle*{0.000001}} \put(-163.34,-181.73){\circle*{0.000001}} \put(-162.63,-181.73){\circle*{0.000001}} \put(-161.93,-181.02){\circle*{0.000001}} \put(-161.22,-181.02){\circle*{0.000001}} \put(-160.51,-180.31){\circle*{0.000001}} \put(-159.81,-180.31){\circle*{0.000001}} \put(-159.10,-179.61){\circle*{0.000001}} \put(-158.39,-179.61){\circle*{0.000001}} \put(-157.68,-178.90){\circle*{0.000001}} \put(-156.98,-178.90){\circle*{0.000001}} \put(-156.27,-178.19){\circle*{0.000001}} \put(-155.56,-178.19){\circle*{0.000001}} \put(-154.86,-177.48){\circle*{0.000001}} \put(-154.15,-177.48){\circle*{0.000001}} \put(-153.44,-176.78){\circle*{0.000001}} \put(-152.74,-176.78){\circle*{0.000001}} \put(-152.03,-176.07){\circle*{0.000001}} \put(-151.32,-176.07){\circle*{0.000001}} \put(-150.61,-175.36){\circle*{0.000001}} \put(-149.91,-175.36){\circle*{0.000001}} \put(-149.20,-174.66){\circle*{0.000001}} \put(-148.49,-174.66){\circle*{0.000001}} \put(-147.79,-173.95){\circle*{0.000001}} \put(-210.72,-165.46){\circle*{0.000001}} \put(-210.01,-166.17){\circle*{0.000001}} \put(-209.30,-166.88){\circle*{0.000001}} \put(-208.60,-166.88){\circle*{0.000001}} \put(-207.89,-167.58){\circle*{0.000001}} \put(-207.18,-168.29){\circle*{0.000001}} \put(-206.48,-169.00){\circle*{0.000001}} \put(-205.77,-169.00){\circle*{0.000001}} \put(-205.06,-169.71){\circle*{0.000001}} \put(-204.35,-170.41){\circle*{0.000001}} \put(-203.65,-171.12){\circle*{0.000001}} \put(-202.94,-171.12){\circle*{0.000001}} \put(-202.23,-171.83){\circle*{0.000001}} \put(-201.53,-172.53){\circle*{0.000001}} \put(-200.82,-173.24){\circle*{0.000001}} \put(-200.11,-173.24){\circle*{0.000001}} \put(-199.40,-173.95){\circle*{0.000001}} \put(-198.70,-174.66){\circle*{0.000001}} \put(-197.99,-175.36){\circle*{0.000001}} \put(-197.28,-175.36){\circle*{0.000001}} \put(-196.58,-176.07){\circle*{0.000001}} \put(-195.87,-176.78){\circle*{0.000001}} \put(-195.16,-177.48){\circle*{0.000001}} \put(-194.45,-177.48){\circle*{0.000001}} \put(-193.75,-178.19){\circle*{0.000001}} \put(-193.04,-178.90){\circle*{0.000001}} \put(-192.33,-179.61){\circle*{0.000001}} \put(-191.63,-179.61){\circle*{0.000001}} \put(-190.92,-180.31){\circle*{0.000001}} \put(-190.21,-181.02){\circle*{0.000001}} \put(-189.50,-181.73){\circle*{0.000001}} \put(-188.80,-181.73){\circle*{0.000001}} \put(-188.09,-182.43){\circle*{0.000001}} \put(-187.38,-183.14){\circle*{0.000001}} \put(-186.68,-183.85){\circle*{0.000001}} \put(-185.97,-183.85){\circle*{0.000001}} \put(-185.26,-184.55){\circle*{0.000001}} \put(-184.55,-185.26){\circle*{0.000001}} \put(-183.85,-185.97){\circle*{0.000001}} \put(-183.14,-185.97){\circle*{0.000001}} \put(-182.43,-186.68){\circle*{0.000001}} \put(-181.73,-187.38){\circle*{0.000001}} \put(-181.02,-188.09){\circle*{0.000001}} \put(-180.31,-188.09){\circle*{0.000001}} \put(-179.61,-188.80){\circle*{0.000001}} \put(-178.90,-189.50){\circle*{0.000001}} \put(-190.92,-204.35){\circle*{0.000001}} \put(-191.63,-203.65){\circle*{0.000001}} \put(-191.63,-202.94){\circle*{0.000001}} 
\put(-192.33,-202.23){\circle*{0.000001}} \put(-192.33,-201.53){\circle*{0.000001}} \put(-193.04,-200.82){\circle*{0.000001}} \put(-193.04,-200.11){\circle*{0.000001}} \put(-193.75,-199.40){\circle*{0.000001}} \put(-193.75,-198.70){\circle*{0.000001}} \put(-194.45,-197.99){\circle*{0.000001}} \put(-194.45,-197.28){\circle*{0.000001}} \put(-195.16,-196.58){\circle*{0.000001}} \put(-195.16,-195.87){\circle*{0.000001}} \put(-195.87,-195.16){\circle*{0.000001}} \put(-195.87,-194.45){\circle*{0.000001}} \put(-196.58,-193.75){\circle*{0.000001}} \put(-196.58,-193.04){\circle*{0.000001}} \put(-197.28,-192.33){\circle*{0.000001}} \put(-197.28,-191.63){\circle*{0.000001}} \put(-197.99,-190.92){\circle*{0.000001}} \put(-197.99,-190.21){\circle*{0.000001}} \put(-198.70,-189.50){\circle*{0.000001}} \put(-198.70,-188.80){\circle*{0.000001}} \put(-199.40,-188.09){\circle*{0.000001}} \put(-199.40,-187.38){\circle*{0.000001}} \put(-200.11,-186.68){\circle*{0.000001}} \put(-200.11,-185.97){\circle*{0.000001}} \put(-200.82,-185.26){\circle*{0.000001}} \put(-200.82,-184.55){\circle*{0.000001}} \put(-201.53,-183.85){\circle*{0.000001}} \put(-201.53,-183.14){\circle*{0.000001}} \put(-202.23,-182.43){\circle*{0.000001}} \put(-202.23,-181.73){\circle*{0.000001}} \put(-202.94,-181.02){\circle*{0.000001}} \put(-202.94,-180.31){\circle*{0.000001}} \put(-203.65,-179.61){\circle*{0.000001}} \put(-203.65,-178.90){\circle*{0.000001}} \put(-204.35,-178.19){\circle*{0.000001}} \put(-204.35,-177.48){\circle*{0.000001}} \put(-205.06,-176.78){\circle*{0.000001}} \put(-205.06,-176.07){\circle*{0.000001}} \put(-205.77,-175.36){\circle*{0.000001}} \put(-205.77,-174.66){\circle*{0.000001}} \put(-206.48,-173.95){\circle*{0.000001}} \put(-206.48,-173.24){\circle*{0.000001}} \put(-207.18,-172.53){\circle*{0.000001}} \put(-207.18,-171.83){\circle*{0.000001}} \put(-207.89,-171.12){\circle*{0.000001}} \put(-207.89,-170.41){\circle*{0.000001}} \put(-208.60,-169.71){\circle*{0.000001}} \put(-208.60,-169.00){\circle*{0.000001}} \put(-209.30,-168.29){\circle*{0.000001}} \put(-209.30,-167.58){\circle*{0.000001}} \put(-210.01,-166.88){\circle*{0.000001}} \put(-210.01,-166.17){\circle*{0.000001}} \put(-210.72,-165.46){\circle*{0.000001}} \put(-190.92,-204.35){\circle*{0.000001}} \put(-190.21,-203.65){\circle*{0.000001}} \put(-189.50,-202.94){\circle*{0.000001}} \put(-188.80,-202.94){\circle*{0.000001}} \put(-188.09,-202.23){\circle*{0.000001}} \put(-187.38,-201.53){\circle*{0.000001}} \put(-186.68,-200.82){\circle*{0.000001}} \put(-185.97,-200.11){\circle*{0.000001}} \put(-185.26,-199.40){\circle*{0.000001}} \put(-184.55,-199.40){\circle*{0.000001}} \put(-183.85,-198.70){\circle*{0.000001}} \put(-183.14,-197.99){\circle*{0.000001}} \put(-182.43,-197.28){\circle*{0.000001}} \put(-181.73,-196.58){\circle*{0.000001}} \put(-181.02,-195.87){\circle*{0.000001}} \put(-180.31,-195.87){\circle*{0.000001}} \put(-179.61,-195.16){\circle*{0.000001}} \put(-178.90,-194.45){\circle*{0.000001}} \put(-178.19,-193.75){\circle*{0.000001}} \put(-177.48,-193.04){\circle*{0.000001}} \put(-176.78,-192.33){\circle*{0.000001}} \put(-176.07,-192.33){\circle*{0.000001}} \put(-175.36,-191.63){\circle*{0.000001}} \put(-174.66,-190.92){\circle*{0.000001}} \put(-173.95,-190.21){\circle*{0.000001}} \put(-173.24,-189.50){\circle*{0.000001}} \put(-172.53,-188.80){\circle*{0.000001}} \put(-171.83,-188.80){\circle*{0.000001}} \put(-171.12,-188.09){\circle*{0.000001}} \put(-170.41,-187.38){\circle*{0.000001}} \put(-169.71,-186.68){\circle*{0.000001}} 
\put(-169.00,-185.97){\circle*{0.000001}} \put(-168.29,-185.26){\circle*{0.000001}} \put(-167.58,-185.26){\circle*{0.000001}} \put(-166.88,-184.55){\circle*{0.000001}} \put(-166.17,-183.85){\circle*{0.000001}} \put(-165.46,-183.14){\circle*{0.000001}} \put(-164.76,-182.43){\circle*{0.000001}} \put(-164.05,-181.73){\circle*{0.000001}} \put(-163.34,-181.73){\circle*{0.000001}} \put(-162.63,-181.02){\circle*{0.000001}} \put(-161.93,-180.31){\circle*{0.000001}} \put(-161.22,-179.61){\circle*{0.000001}} \put(-160.51,-178.90){\circle*{0.000001}} \put(-159.81,-178.19){\circle*{0.000001}} \put(-159.10,-178.19){\circle*{0.000001}} \put(-158.39,-177.48){\circle*{0.000001}} \put(-157.68,-176.78){\circle*{0.000001}} \put(-156.98,-176.07){\circle*{0.000001}} \put(-156.27,-175.36){\circle*{0.000001}} \put(-155.56,-174.66){\circle*{0.000001}} \put(-154.86,-174.66){\circle*{0.000001}} \put(-154.15,-173.95){\circle*{0.000001}} \put(-153.44,-173.24){\circle*{0.000001}} \put(-152.74,-172.53){\circle*{0.000001}} \put(-127.99,-222.74){\circle*{0.000001}} \put(-127.99,-222.03){\circle*{0.000001}} \put(-128.69,-221.32){\circle*{0.000001}} \put(-128.69,-220.62){\circle*{0.000001}} \put(-129.40,-219.91){\circle*{0.000001}} \put(-129.40,-219.20){\circle*{0.000001}} \put(-130.11,-218.50){\circle*{0.000001}} \put(-130.11,-217.79){\circle*{0.000001}} \put(-130.81,-217.08){\circle*{0.000001}} \put(-130.81,-216.37){\circle*{0.000001}} \put(-131.52,-215.67){\circle*{0.000001}} \put(-131.52,-214.96){\circle*{0.000001}} \put(-132.23,-214.25){\circle*{0.000001}} \put(-132.23,-213.55){\circle*{0.000001}} \put(-132.94,-212.84){\circle*{0.000001}} \put(-132.94,-212.13){\circle*{0.000001}} \put(-133.64,-211.42){\circle*{0.000001}} \put(-133.64,-210.72){\circle*{0.000001}} \put(-134.35,-210.01){\circle*{0.000001}} \put(-134.35,-209.30){\circle*{0.000001}} \put(-135.06,-208.60){\circle*{0.000001}} \put(-135.06,-207.89){\circle*{0.000001}} \put(-135.76,-207.18){\circle*{0.000001}} \put(-135.76,-206.48){\circle*{0.000001}} \put(-136.47,-205.77){\circle*{0.000001}} \put(-136.47,-205.06){\circle*{0.000001}} \put(-137.18,-204.35){\circle*{0.000001}} \put(-137.18,-203.65){\circle*{0.000001}} \put(-137.89,-202.94){\circle*{0.000001}} \put(-137.89,-202.23){\circle*{0.000001}} \put(-138.59,-201.53){\circle*{0.000001}} \put(-138.59,-200.82){\circle*{0.000001}} \put(-139.30,-200.11){\circle*{0.000001}} \put(-139.30,-199.40){\circle*{0.000001}} \put(-140.01,-198.70){\circle*{0.000001}} \put(-140.01,-197.99){\circle*{0.000001}} \put(-140.71,-197.28){\circle*{0.000001}} \put(-140.71,-196.58){\circle*{0.000001}} \put(-141.42,-195.87){\circle*{0.000001}} \put(-141.42,-195.16){\circle*{0.000001}} \put(-142.13,-194.45){\circle*{0.000001}} \put(-142.13,-193.75){\circle*{0.000001}} \put(-142.84,-193.04){\circle*{0.000001}} \put(-142.84,-192.33){\circle*{0.000001}} \put(-143.54,-191.63){\circle*{0.000001}} \put(-143.54,-190.92){\circle*{0.000001}} \put(-144.25,-190.21){\circle*{0.000001}} \put(-144.25,-189.50){\circle*{0.000001}} \put(-144.96,-188.80){\circle*{0.000001}} \put(-144.96,-188.09){\circle*{0.000001}} \put(-145.66,-187.38){\circle*{0.000001}} \put(-145.66,-186.68){\circle*{0.000001}} \put(-146.37,-185.97){\circle*{0.000001}} \put(-146.37,-185.26){\circle*{0.000001}} \put(-147.08,-184.55){\circle*{0.000001}} \put(-147.08,-183.85){\circle*{0.000001}} \put(-147.79,-183.14){\circle*{0.000001}} \put(-147.79,-182.43){\circle*{0.000001}} \put(-148.49,-181.73){\circle*{0.000001}} \put(-148.49,-181.02){\circle*{0.000001}} 
\put(-149.20,-180.31){\circle*{0.000001}} \put(-149.20,-179.61){\circle*{0.000001}} \put(-149.91,-178.90){\circle*{0.000001}} \put(-149.91,-178.19){\circle*{0.000001}} \put(-150.61,-177.48){\circle*{0.000001}} \put(-150.61,-176.78){\circle*{0.000001}} \put(-151.32,-176.07){\circle*{0.000001}} \put(-151.32,-175.36){\circle*{0.000001}} \put(-152.03,-174.66){\circle*{0.000001}} \put(-152.03,-173.95){\circle*{0.000001}} \put(-152.74,-173.24){\circle*{0.000001}} \put(-152.74,-172.53){\circle*{0.000001}} \put(-127.99,-222.74){\circle*{0.000001}} \put(-127.28,-222.74){\circle*{0.000001}} \put(-126.57,-222.03){\circle*{0.000001}} \put(-125.87,-222.03){\circle*{0.000001}} \put(-125.16,-221.32){\circle*{0.000001}} \put(-124.45,-221.32){\circle*{0.000001}} \put(-123.74,-220.62){\circle*{0.000001}} \put(-123.04,-220.62){\circle*{0.000001}} \put(-122.33,-219.91){\circle*{0.000001}} \put(-121.62,-219.91){\circle*{0.000001}} \put(-120.92,-219.20){\circle*{0.000001}} \put(-120.21,-219.20){\circle*{0.000001}} \put(-119.50,-218.50){\circle*{0.000001}} \put(-118.79,-218.50){\circle*{0.000001}} \put(-118.09,-217.79){\circle*{0.000001}} \put(-117.38,-217.79){\circle*{0.000001}} \put(-116.67,-217.79){\circle*{0.000001}} \put(-115.97,-217.08){\circle*{0.000001}} \put(-115.26,-217.08){\circle*{0.000001}} \put(-114.55,-216.37){\circle*{0.000001}} \put(-113.84,-216.37){\circle*{0.000001}} \put(-113.14,-215.67){\circle*{0.000001}} \put(-112.43,-215.67){\circle*{0.000001}} \put(-111.72,-214.96){\circle*{0.000001}} \put(-111.02,-214.96){\circle*{0.000001}} \put(-110.31,-214.25){\circle*{0.000001}} \put(-109.60,-214.25){\circle*{0.000001}} \put(-108.89,-213.55){\circle*{0.000001}} \put(-108.19,-213.55){\circle*{0.000001}} \put(-107.48,-212.84){\circle*{0.000001}} \put(-106.77,-212.84){\circle*{0.000001}} \put(-106.07,-212.84){\circle*{0.000001}} \put(-105.36,-212.13){\circle*{0.000001}} \put(-104.65,-212.13){\circle*{0.000001}} \put(-103.94,-211.42){\circle*{0.000001}} \put(-103.24,-211.42){\circle*{0.000001}} \put(-102.53,-210.72){\circle*{0.000001}} \put(-101.82,-210.72){\circle*{0.000001}} \put(-101.12,-210.01){\circle*{0.000001}} \put(-100.41,-210.01){\circle*{0.000001}} \put(-99.70,-209.30){\circle*{0.000001}} \put(-98.99,-209.30){\circle*{0.000001}} \put(-98.29,-208.60){\circle*{0.000001}} \put(-97.58,-208.60){\circle*{0.000001}} \put(-96.87,-207.89){\circle*{0.000001}} \put(-96.17,-207.89){\circle*{0.000001}} \put(-95.46,-207.89){\circle*{0.000001}} \put(-94.75,-207.18){\circle*{0.000001}} \put(-94.05,-207.18){\circle*{0.000001}} \put(-93.34,-206.48){\circle*{0.000001}} \put(-92.63,-206.48){\circle*{0.000001}} \put(-91.92,-205.77){\circle*{0.000001}} \put(-91.22,-205.77){\circle*{0.000001}} \put(-90.51,-205.06){\circle*{0.000001}} \put(-89.80,-205.06){\circle*{0.000001}} \put(-89.10,-204.35){\circle*{0.000001}} \put(-88.39,-204.35){\circle*{0.000001}} \put(-87.68,-203.65){\circle*{0.000001}} \put(-86.97,-203.65){\circle*{0.000001}} \put(-86.27,-202.94){\circle*{0.000001}} \put(-85.56,-202.94){\circle*{0.000001}} \put(-84.85,-202.94){\circle*{0.000001}} \put(-84.15,-202.23){\circle*{0.000001}} \put(-83.44,-202.23){\circle*{0.000001}} \put(-82.73,-201.53){\circle*{0.000001}} \put(-82.02,-201.53){\circle*{0.000001}} \put(-81.32,-200.82){\circle*{0.000001}} \put(-80.61,-200.82){\circle*{0.000001}} \put(-79.90,-200.11){\circle*{0.000001}} \put(-79.20,-200.11){\circle*{0.000001}} \put(-78.49,-199.40){\circle*{0.000001}} \put(-77.78,-199.40){\circle*{0.000001}} \put(-77.07,-198.70){\circle*{0.000001}} 
\put(-76.37,-198.70){\circle*{0.000001}} \put(-75.66,-197.99){\circle*{0.000001}} \put(-74.95,-197.99){\circle*{0.000001}} \put(-124.45,-225.57){\circle*{0.000001}} \put(-123.74,-224.86){\circle*{0.000001}} \put(-123.04,-224.86){\circle*{0.000001}} \put(-122.33,-224.15){\circle*{0.000001}} \put(-121.62,-224.15){\circle*{0.000001}} \put(-120.92,-223.45){\circle*{0.000001}} \put(-120.21,-223.45){\circle*{0.000001}} \put(-119.50,-222.74){\circle*{0.000001}} \put(-118.79,-222.74){\circle*{0.000001}} \put(-118.09,-222.03){\circle*{0.000001}} \put(-117.38,-221.32){\circle*{0.000001}} \put(-116.67,-221.32){\circle*{0.000001}} \put(-115.97,-220.62){\circle*{0.000001}} \put(-115.26,-220.62){\circle*{0.000001}} \put(-114.55,-219.91){\circle*{0.000001}} \put(-113.84,-219.91){\circle*{0.000001}} \put(-113.14,-219.20){\circle*{0.000001}} \put(-112.43,-219.20){\circle*{0.000001}} \put(-111.72,-218.50){\circle*{0.000001}} \put(-111.02,-217.79){\circle*{0.000001}} \put(-110.31,-217.79){\circle*{0.000001}} \put(-109.60,-217.08){\circle*{0.000001}} \put(-108.89,-217.08){\circle*{0.000001}} \put(-108.19,-216.37){\circle*{0.000001}} \put(-107.48,-216.37){\circle*{0.000001}} \put(-106.77,-215.67){\circle*{0.000001}} \put(-106.07,-215.67){\circle*{0.000001}} \put(-105.36,-214.96){\circle*{0.000001}} \put(-104.65,-214.25){\circle*{0.000001}} \put(-103.94,-214.25){\circle*{0.000001}} \put(-103.24,-213.55){\circle*{0.000001}} \put(-102.53,-213.55){\circle*{0.000001}} \put(-101.82,-212.84){\circle*{0.000001}} \put(-101.12,-212.84){\circle*{0.000001}} \put(-100.41,-212.13){\circle*{0.000001}} \put(-99.70,-212.13){\circle*{0.000001}} \put(-98.99,-211.42){\circle*{0.000001}} \put(-98.29,-210.72){\circle*{0.000001}} \put(-97.58,-210.72){\circle*{0.000001}} \put(-96.87,-210.01){\circle*{0.000001}} \put(-96.17,-210.01){\circle*{0.000001}} \put(-95.46,-209.30){\circle*{0.000001}} \put(-94.75,-209.30){\circle*{0.000001}} \put(-94.05,-208.60){\circle*{0.000001}} \put(-93.34,-207.89){\circle*{0.000001}} \put(-92.63,-207.89){\circle*{0.000001}} \put(-91.92,-207.18){\circle*{0.000001}} \put(-91.22,-207.18){\circle*{0.000001}} \put(-90.51,-206.48){\circle*{0.000001}} \put(-89.80,-206.48){\circle*{0.000001}} \put(-89.10,-205.77){\circle*{0.000001}} \put(-88.39,-205.77){\circle*{0.000001}} \put(-87.68,-205.06){\circle*{0.000001}} \put(-86.97,-204.35){\circle*{0.000001}} \put(-86.27,-204.35){\circle*{0.000001}} \put(-85.56,-203.65){\circle*{0.000001}} \put(-84.85,-203.65){\circle*{0.000001}} \put(-84.15,-202.94){\circle*{0.000001}} \put(-83.44,-202.94){\circle*{0.000001}} \put(-82.73,-202.23){\circle*{0.000001}} \put(-82.02,-202.23){\circle*{0.000001}} \put(-81.32,-201.53){\circle*{0.000001}} \put(-80.61,-200.82){\circle*{0.000001}} \put(-79.90,-200.82){\circle*{0.000001}} \put(-79.20,-200.11){\circle*{0.000001}} \put(-78.49,-200.11){\circle*{0.000001}} \put(-77.78,-199.40){\circle*{0.000001}} \put(-77.07,-199.40){\circle*{0.000001}} \put(-76.37,-198.70){\circle*{0.000001}} \put(-75.66,-198.70){\circle*{0.000001}} \put(-74.95,-197.99){\circle*{0.000001}} \put(-124.45,-225.57){\circle*{0.000001}} \put(-125.16,-224.86){\circle*{0.000001}} \put(-125.16,-224.15){\circle*{0.000001}} \put(-125.87,-223.45){\circle*{0.000001}} \put(-125.87,-222.74){\circle*{0.000001}} \put(-126.57,-222.03){\circle*{0.000001}} \put(-126.57,-221.32){\circle*{0.000001}} \put(-127.28,-220.62){\circle*{0.000001}} \put(-127.28,-219.91){\circle*{0.000001}} \put(-127.99,-219.20){\circle*{0.000001}} \put(-127.99,-218.50){\circle*{0.000001}} 
\put(-128.69,-217.79){\circle*{0.000001}} \put(-129.40,-217.08){\circle*{0.000001}} \put(-129.40,-216.37){\circle*{0.000001}} \put(-130.11,-215.67){\circle*{0.000001}} \put(-130.11,-214.96){\circle*{0.000001}} \put(-130.81,-214.25){\circle*{0.000001}} \put(-130.81,-213.55){\circle*{0.000001}} \put(-131.52,-212.84){\circle*{0.000001}} \put(-131.52,-212.13){\circle*{0.000001}} \put(-132.23,-211.42){\circle*{0.000001}} \put(-132.23,-210.72){\circle*{0.000001}} \put(-132.94,-210.01){\circle*{0.000001}} \put(-133.64,-209.30){\circle*{0.000001}} \put(-133.64,-208.60){\circle*{0.000001}} \put(-134.35,-207.89){\circle*{0.000001}} \put(-134.35,-207.18){\circle*{0.000001}} \put(-135.06,-206.48){\circle*{0.000001}} \put(-135.06,-205.77){\circle*{0.000001}} \put(-135.76,-205.06){\circle*{0.000001}} \put(-135.76,-204.35){\circle*{0.000001}} \put(-136.47,-203.65){\circle*{0.000001}} \put(-136.47,-202.94){\circle*{0.000001}} \put(-137.18,-202.23){\circle*{0.000001}} \put(-137.18,-201.53){\circle*{0.000001}} \put(-137.89,-200.82){\circle*{0.000001}} \put(-138.59,-200.11){\circle*{0.000001}} \put(-138.59,-199.40){\circle*{0.000001}} \put(-139.30,-198.70){\circle*{0.000001}} \put(-139.30,-197.99){\circle*{0.000001}} \put(-140.01,-197.28){\circle*{0.000001}} \put(-140.01,-196.58){\circle*{0.000001}} \put(-140.71,-195.87){\circle*{0.000001}} \put(-140.71,-195.16){\circle*{0.000001}} \put(-141.42,-194.45){\circle*{0.000001}} \put(-141.42,-193.75){\circle*{0.000001}} \put(-142.13,-193.04){\circle*{0.000001}} \put(-142.84,-192.33){\circle*{0.000001}} \put(-142.84,-191.63){\circle*{0.000001}} \put(-143.54,-190.92){\circle*{0.000001}} \put(-143.54,-190.21){\circle*{0.000001}} \put(-144.25,-189.50){\circle*{0.000001}} \put(-144.25,-188.80){\circle*{0.000001}} \put(-144.96,-188.09){\circle*{0.000001}} \put(-144.96,-187.38){\circle*{0.000001}} \put(-145.66,-186.68){\circle*{0.000001}} \put(-145.66,-185.97){\circle*{0.000001}} \put(-146.37,-185.26){\circle*{0.000001}} \put(-147.08,-184.55){\circle*{0.000001}} \put(-147.08,-183.85){\circle*{0.000001}} \put(-147.79,-183.14){\circle*{0.000001}} \put(-147.79,-182.43){\circle*{0.000001}} \put(-148.49,-181.73){\circle*{0.000001}} \put(-148.49,-181.02){\circle*{0.000001}} \put(-149.20,-180.31){\circle*{0.000001}} \put(-149.20,-179.61){\circle*{0.000001}} \put(-149.91,-178.90){\circle*{0.000001}} \put(-149.91,-178.19){\circle*{0.000001}} \put(-150.61,-177.48){\circle*{0.000001}} \put(-202.94,-153.44){\circle*{0.000001}} \put(-202.23,-153.44){\circle*{0.000001}} \put(-201.53,-154.15){\circle*{0.000001}} \put(-200.82,-154.15){\circle*{0.000001}} \put(-200.11,-154.86){\circle*{0.000001}} \put(-199.40,-154.86){\circle*{0.000001}} \put(-198.70,-155.56){\circle*{0.000001}} \put(-197.99,-155.56){\circle*{0.000001}} \put(-197.28,-156.27){\circle*{0.000001}} \put(-196.58,-156.27){\circle*{0.000001}} \put(-195.87,-156.98){\circle*{0.000001}} \put(-195.16,-156.98){\circle*{0.000001}} \put(-194.45,-157.68){\circle*{0.000001}} \put(-193.75,-157.68){\circle*{0.000001}} \put(-193.04,-157.68){\circle*{0.000001}} \put(-192.33,-158.39){\circle*{0.000001}} \put(-191.63,-158.39){\circle*{0.000001}} \put(-190.92,-159.10){\circle*{0.000001}} \put(-190.21,-159.10){\circle*{0.000001}} \put(-189.50,-159.81){\circle*{0.000001}} \put(-188.80,-159.81){\circle*{0.000001}} \put(-188.09,-160.51){\circle*{0.000001}} \put(-187.38,-160.51){\circle*{0.000001}} \put(-186.68,-161.22){\circle*{0.000001}} \put(-185.97,-161.22){\circle*{0.000001}} \put(-185.26,-161.22){\circle*{0.000001}} 
\put(-184.55,-161.93){\circle*{0.000001}} \put(-183.85,-161.93){\circle*{0.000001}} \put(-183.14,-162.63){\circle*{0.000001}} \put(-182.43,-162.63){\circle*{0.000001}} \put(-181.73,-163.34){\circle*{0.000001}} \put(-181.02,-163.34){\circle*{0.000001}} \put(-180.31,-164.05){\circle*{0.000001}} \put(-179.61,-164.05){\circle*{0.000001}} \put(-178.90,-164.76){\circle*{0.000001}} \put(-178.19,-164.76){\circle*{0.000001}} \put(-177.48,-165.46){\circle*{0.000001}} \put(-176.78,-165.46){\circle*{0.000001}} \put(-176.07,-165.46){\circle*{0.000001}} \put(-175.36,-166.17){\circle*{0.000001}} \put(-174.66,-166.17){\circle*{0.000001}} \put(-173.95,-166.88){\circle*{0.000001}} \put(-173.24,-166.88){\circle*{0.000001}} \put(-172.53,-167.58){\circle*{0.000001}} \put(-171.83,-167.58){\circle*{0.000001}} \put(-171.12,-168.29){\circle*{0.000001}} \put(-170.41,-168.29){\circle*{0.000001}} \put(-169.71,-169.00){\circle*{0.000001}} \put(-169.00,-169.00){\circle*{0.000001}} \put(-168.29,-169.71){\circle*{0.000001}} \put(-167.58,-169.71){\circle*{0.000001}} \put(-166.88,-169.71){\circle*{0.000001}} \put(-166.17,-170.41){\circle*{0.000001}} \put(-165.46,-170.41){\circle*{0.000001}} \put(-164.76,-171.12){\circle*{0.000001}} \put(-164.05,-171.12){\circle*{0.000001}} \put(-163.34,-171.83){\circle*{0.000001}} \put(-162.63,-171.83){\circle*{0.000001}} \put(-161.93,-172.53){\circle*{0.000001}} \put(-161.22,-172.53){\circle*{0.000001}} \put(-160.51,-173.24){\circle*{0.000001}} \put(-159.81,-173.24){\circle*{0.000001}} \put(-159.10,-173.24){\circle*{0.000001}} \put(-158.39,-173.95){\circle*{0.000001}} \put(-157.68,-173.95){\circle*{0.000001}} \put(-156.98,-174.66){\circle*{0.000001}} \put(-156.27,-174.66){\circle*{0.000001}} \put(-155.56,-175.36){\circle*{0.000001}} \put(-154.86,-175.36){\circle*{0.000001}} \put(-154.15,-176.07){\circle*{0.000001}} \put(-153.44,-176.07){\circle*{0.000001}} \put(-152.74,-176.78){\circle*{0.000001}} \put(-152.03,-176.78){\circle*{0.000001}} \put(-151.32,-177.48){\circle*{0.000001}} \put(-150.61,-177.48){\circle*{0.000001}} \put(-202.94,-153.44){\circle*{0.000001}} \put(-202.23,-153.44){\circle*{0.000001}} \put(-201.53,-154.15){\circle*{0.000001}} \put(-200.82,-154.15){\circle*{0.000001}} \put(-200.11,-154.15){\circle*{0.000001}} \put(-199.40,-154.86){\circle*{0.000001}} \put(-198.70,-154.86){\circle*{0.000001}} \put(-197.99,-155.56){\circle*{0.000001}} \put(-197.28,-155.56){\circle*{0.000001}} \put(-196.58,-155.56){\circle*{0.000001}} \put(-195.87,-156.27){\circle*{0.000001}} \put(-195.16,-156.27){\circle*{0.000001}} \put(-194.45,-156.27){\circle*{0.000001}} \put(-193.75,-156.98){\circle*{0.000001}} \put(-193.04,-156.98){\circle*{0.000001}} \put(-192.33,-157.68){\circle*{0.000001}} \put(-191.63,-157.68){\circle*{0.000001}} \put(-190.92,-157.68){\circle*{0.000001}} \put(-190.21,-158.39){\circle*{0.000001}} \put(-189.50,-158.39){\circle*{0.000001}} \put(-188.80,-158.39){\circle*{0.000001}} \put(-188.09,-159.10){\circle*{0.000001}} \put(-187.38,-159.10){\circle*{0.000001}} \put(-186.68,-159.81){\circle*{0.000001}} \put(-185.97,-159.81){\circle*{0.000001}} \put(-185.26,-159.81){\circle*{0.000001}} \put(-184.55,-160.51){\circle*{0.000001}} \put(-183.85,-160.51){\circle*{0.000001}} \put(-183.14,-160.51){\circle*{0.000001}} \put(-182.43,-161.22){\circle*{0.000001}} \put(-181.73,-161.22){\circle*{0.000001}} \put(-181.02,-161.93){\circle*{0.000001}} \put(-180.31,-161.93){\circle*{0.000001}} \put(-179.61,-161.93){\circle*{0.000001}} \put(-178.90,-162.63){\circle*{0.000001}} 
\put(-178.19,-162.63){\circle*{0.000001}} \put(-177.48,-162.63){\circle*{0.000001}} \put(-176.78,-163.34){\circle*{0.000001}} \put(-176.07,-163.34){\circle*{0.000001}} \put(-175.36,-163.34){\circle*{0.000001}} \put(-174.66,-164.05){\circle*{0.000001}} \put(-173.95,-164.05){\circle*{0.000001}} \put(-173.24,-164.76){\circle*{0.000001}} \put(-172.53,-164.76){\circle*{0.000001}} \put(-171.83,-164.76){\circle*{0.000001}} \put(-171.12,-165.46){\circle*{0.000001}} \put(-170.41,-165.46){\circle*{0.000001}} \put(-169.71,-165.46){\circle*{0.000001}} \put(-169.00,-166.17){\circle*{0.000001}} \put(-168.29,-166.17){\circle*{0.000001}} \put(-167.58,-166.88){\circle*{0.000001}} \put(-166.88,-166.88){\circle*{0.000001}} \put(-166.17,-166.88){\circle*{0.000001}} \put(-165.46,-167.58){\circle*{0.000001}} \put(-164.76,-167.58){\circle*{0.000001}} \put(-164.05,-167.58){\circle*{0.000001}} \put(-163.34,-168.29){\circle*{0.000001}} \put(-162.63,-168.29){\circle*{0.000001}} \put(-161.93,-169.00){\circle*{0.000001}} \put(-161.22,-169.00){\circle*{0.000001}} \put(-160.51,-169.00){\circle*{0.000001}} \put(-159.81,-169.71){\circle*{0.000001}} \put(-159.10,-169.71){\circle*{0.000001}} \put(-158.39,-169.71){\circle*{0.000001}} \put(-157.68,-170.41){\circle*{0.000001}} \put(-156.98,-170.41){\circle*{0.000001}} \put(-156.27,-171.12){\circle*{0.000001}} \put(-155.56,-171.12){\circle*{0.000001}} \put(-154.86,-171.12){\circle*{0.000001}} \put(-154.15,-171.83){\circle*{0.000001}} \put(-153.44,-171.83){\circle*{0.000001}} \put(-152.74,-171.83){\circle*{0.000001}} \put(-152.03,-172.53){\circle*{0.000001}} \put(-151.32,-172.53){\circle*{0.000001}} \put(-150.61,-173.24){\circle*{0.000001}} \put(-149.91,-173.24){\circle*{0.000001}} \put(-149.20,-173.24){\circle*{0.000001}} \put(-148.49,-173.95){\circle*{0.000001}} \put(-147.79,-173.95){\circle*{0.000001}} \put(-201.53,-147.79){\circle*{0.000001}} \put(-200.82,-147.79){\circle*{0.000001}} \put(-200.11,-148.49){\circle*{0.000001}} \put(-199.40,-148.49){\circle*{0.000001}} \put(-198.70,-149.20){\circle*{0.000001}} \put(-197.99,-149.20){\circle*{0.000001}} \put(-197.28,-149.91){\circle*{0.000001}} \put(-196.58,-149.91){\circle*{0.000001}} \put(-195.87,-150.61){\circle*{0.000001}} \put(-195.16,-150.61){\circle*{0.000001}} \put(-194.45,-151.32){\circle*{0.000001}} \put(-193.75,-151.32){\circle*{0.000001}} \put(-193.04,-152.03){\circle*{0.000001}} \put(-192.33,-152.03){\circle*{0.000001}} \put(-191.63,-152.74){\circle*{0.000001}} \put(-190.92,-152.74){\circle*{0.000001}} \put(-190.21,-153.44){\circle*{0.000001}} \put(-189.50,-153.44){\circle*{0.000001}} \put(-188.80,-154.15){\circle*{0.000001}} \put(-188.09,-154.15){\circle*{0.000001}} \put(-187.38,-154.86){\circle*{0.000001}} \put(-186.68,-154.86){\circle*{0.000001}} \put(-185.97,-155.56){\circle*{0.000001}} \put(-185.26,-155.56){\circle*{0.000001}} \put(-184.55,-156.27){\circle*{0.000001}} \put(-183.85,-156.27){\circle*{0.000001}} \put(-183.14,-156.98){\circle*{0.000001}} \put(-182.43,-156.98){\circle*{0.000001}} \put(-181.73,-157.68){\circle*{0.000001}} \put(-181.02,-157.68){\circle*{0.000001}} \put(-180.31,-158.39){\circle*{0.000001}} \put(-179.61,-158.39){\circle*{0.000001}} \put(-178.90,-159.10){\circle*{0.000001}} \put(-178.19,-159.10){\circle*{0.000001}} \put(-177.48,-159.81){\circle*{0.000001}} \put(-176.78,-159.81){\circle*{0.000001}} \put(-176.07,-160.51){\circle*{0.000001}} \put(-175.36,-160.51){\circle*{0.000001}} \put(-174.66,-160.51){\circle*{0.000001}} \put(-173.95,-161.22){\circle*{0.000001}} 
\put(-173.24,-161.22){\circle*{0.000001}} \put(-172.53,-161.93){\circle*{0.000001}} \put(-171.83,-161.93){\circle*{0.000001}} \put(-171.12,-162.63){\circle*{0.000001}} \put(-170.41,-162.63){\circle*{0.000001}} \put(-169.71,-163.34){\circle*{0.000001}} \put(-169.00,-163.34){\circle*{0.000001}} \put(-168.29,-164.05){\circle*{0.000001}} \put(-167.58,-164.05){\circle*{0.000001}} \put(-166.88,-164.76){\circle*{0.000001}} \put(-166.17,-164.76){\circle*{0.000001}} \put(-165.46,-165.46){\circle*{0.000001}} \put(-164.76,-165.46){\circle*{0.000001}} \put(-164.05,-166.17){\circle*{0.000001}} \put(-163.34,-166.17){\circle*{0.000001}} \put(-162.63,-166.88){\circle*{0.000001}} \put(-161.93,-166.88){\circle*{0.000001}} \put(-161.22,-167.58){\circle*{0.000001}} \put(-160.51,-167.58){\circle*{0.000001}} \put(-159.81,-168.29){\circle*{0.000001}} \put(-159.10,-168.29){\circle*{0.000001}} \put(-158.39,-169.00){\circle*{0.000001}} \put(-157.68,-169.00){\circle*{0.000001}} \put(-156.98,-169.71){\circle*{0.000001}} \put(-156.27,-169.71){\circle*{0.000001}} \put(-155.56,-170.41){\circle*{0.000001}} \put(-154.86,-170.41){\circle*{0.000001}} \put(-154.15,-171.12){\circle*{0.000001}} \put(-153.44,-171.12){\circle*{0.000001}} \put(-152.74,-171.83){\circle*{0.000001}} \put(-152.03,-171.83){\circle*{0.000001}} \put(-151.32,-172.53){\circle*{0.000001}} \put(-150.61,-172.53){\circle*{0.000001}} \put(-149.91,-173.24){\circle*{0.000001}} \put(-149.20,-173.24){\circle*{0.000001}} \put(-148.49,-173.95){\circle*{0.000001}} \put(-147.79,-173.95){\circle*{0.000001}} \put(-201.53,-147.79){\circle*{0.000001}} \put(-200.82,-147.79){\circle*{0.000001}} \put(-200.11,-147.79){\circle*{0.000001}} \put(-199.40,-147.79){\circle*{0.000001}} \put(-199.40,-147.79){\circle*{0.000001}} \put(-198.70,-148.49){\circle*{0.000001}} \put(-198.70,-148.49){\circle*{0.000001}} \put(-198.70,-147.79){\circle*{0.000001}} \put(-198.70,-147.08){\circle*{0.000001}} \put(-198.70,-146.37){\circle*{0.000001}} \put(-199.40,-146.37){\circle*{0.000001}} \put(-198.70,-146.37){\circle*{0.000001}} \put(-199.40,-146.37){\circle*{0.000001}} \put(-199.40,-145.66){\circle*{0.000001}} \put(-198.70,-144.96){\circle*{0.000001}} \put(-198.70,-144.96){\circle*{0.000001}} \put(-198.70,-144.25){\circle*{0.000001}} \put(-198.70,-143.54){\circle*{0.000001}} \put(-198.70,-143.54){\circle*{0.000001}} \put(-201.53,-142.84){\circle*{0.000001}} \put(-200.82,-142.84){\circle*{0.000001}} \put(-200.11,-142.84){\circle*{0.000001}} \put(-199.40,-143.54){\circle*{0.000001}} \put(-198.70,-143.54){\circle*{0.000001}} \put(-201.53,-142.84){\circle*{0.000001}} \put(-200.82,-142.13){\circle*{0.000001}} \put(-200.82,-141.42){\circle*{0.000001}} \put(-200.11,-140.71){\circle*{0.000001}} \put(-200.11,-140.01){\circle*{0.000001}} \put(-199.40,-139.30){\circle*{0.000001}} \put(-199.40,-139.30){\circle*{0.000001}} \put(-199.40,-138.59){\circle*{0.000001}} \put(-199.40,-137.89){\circle*{0.000001}} \put(-198.70,-137.18){\circle*{0.000001}} \put(-198.70,-136.47){\circle*{0.000001}} \put(-198.70,-136.47){\circle*{0.000001}} \put(-198.70,-135.76){\circle*{0.000001}} \put(-199.40,-135.06){\circle*{0.000001}} \put(-199.40,-134.35){\circle*{0.000001}} \put(-200.11,-133.64){\circle*{0.000001}} \put(-200.11,-132.94){\circle*{0.000001}} \put(-200.82,-132.23){\circle*{0.000001}} \put(-200.82,-131.52){\circle*{0.000001}} \put(-206.48,-136.47){\circle*{0.000001}} \put(-205.77,-135.76){\circle*{0.000001}} \put(-205.06,-135.06){\circle*{0.000001}} \put(-204.35,-134.35){\circle*{0.000001}} 
\put(-203.65,-134.35){\circle*{0.000001}} \put(-202.94,-133.64){\circle*{0.000001}} \put(-202.23,-132.94){\circle*{0.000001}} \put(-201.53,-132.23){\circle*{0.000001}} \put(-200.82,-131.52){\circle*{0.000001}} \put(-214.96,-144.96){\circle*{0.000001}} \put(-214.25,-144.25){\circle*{0.000001}} \put(-213.55,-143.54){\circle*{0.000001}} \put(-212.84,-142.84){\circle*{0.000001}} \put(-212.13,-142.13){\circle*{0.000001}} \put(-211.42,-141.42){\circle*{0.000001}} \put(-210.72,-140.71){\circle*{0.000001}} \put(-210.01,-140.01){\circle*{0.000001}} \put(-209.30,-139.30){\circle*{0.000001}} \put(-208.60,-138.59){\circle*{0.000001}} \put(-207.89,-137.89){\circle*{0.000001}} \put(-207.18,-137.18){\circle*{0.000001}} \put(-206.48,-136.47){\circle*{0.000001}} \put(-224.86,-158.39){\circle*{0.000001}} \put(-224.15,-157.68){\circle*{0.000001}} \put(-224.15,-156.98){\circle*{0.000001}} \put(-223.45,-156.27){\circle*{0.000001}} \put(-222.74,-155.56){\circle*{0.000001}} \put(-222.03,-154.86){\circle*{0.000001}} \put(-222.03,-154.15){\circle*{0.000001}} \put(-221.32,-153.44){\circle*{0.000001}} \put(-220.62,-152.74){\circle*{0.000001}} \put(-219.91,-152.03){\circle*{0.000001}} \put(-219.91,-151.32){\circle*{0.000001}} \put(-219.20,-150.61){\circle*{0.000001}} \put(-218.50,-149.91){\circle*{0.000001}} \put(-217.79,-149.20){\circle*{0.000001}} \put(-217.79,-148.49){\circle*{0.000001}} \put(-217.08,-147.79){\circle*{0.000001}} \put(-216.37,-147.08){\circle*{0.000001}} \put(-215.67,-146.37){\circle*{0.000001}} \put(-215.67,-145.66){\circle*{0.000001}} \put(-214.96,-144.96){\circle*{0.000001}} \put(-226.98,-176.78){\circle*{0.000001}} \put(-226.98,-176.07){\circle*{0.000001}} \put(-226.98,-175.36){\circle*{0.000001}} \put(-226.98,-174.66){\circle*{0.000001}} \put(-226.98,-173.95){\circle*{0.000001}} \put(-226.27,-173.24){\circle*{0.000001}} \put(-226.27,-172.53){\circle*{0.000001}} \put(-226.27,-171.83){\circle*{0.000001}} \put(-226.27,-171.12){\circle*{0.000001}} \put(-226.27,-170.41){\circle*{0.000001}} \put(-226.27,-169.71){\circle*{0.000001}} \put(-226.27,-169.00){\circle*{0.000001}} \put(-226.27,-168.29){\circle*{0.000001}} \put(-226.27,-167.58){\circle*{0.000001}} \put(-225.57,-166.88){\circle*{0.000001}} \put(-225.57,-166.17){\circle*{0.000001}} \put(-225.57,-165.46){\circle*{0.000001}} \put(-225.57,-164.76){\circle*{0.000001}} \put(-225.57,-164.05){\circle*{0.000001}} \put(-225.57,-163.34){\circle*{0.000001}} \put(-225.57,-162.63){\circle*{0.000001}} \put(-225.57,-161.93){\circle*{0.000001}} \put(-224.86,-161.22){\circle*{0.000001}} \put(-224.86,-160.51){\circle*{0.000001}} \put(-224.86,-159.81){\circle*{0.000001}} \put(-224.86,-159.10){\circle*{0.000001}} \put(-224.86,-158.39){\circle*{0.000001}} \put(-226.98,-176.78){\circle*{0.000001}} \put(-226.27,-176.78){\circle*{0.000001}} \put(-225.57,-177.48){\circle*{0.000001}} \put(-224.86,-177.48){\circle*{0.000001}} \put(-224.15,-178.19){\circle*{0.000001}} \put(-223.45,-178.19){\circle*{0.000001}} \put(-222.74,-178.90){\circle*{0.000001}} \put(-222.03,-178.90){\circle*{0.000001}} \put(-221.32,-179.61){\circle*{0.000001}} \put(-220.62,-179.61){\circle*{0.000001}} \put(-219.91,-180.31){\circle*{0.000001}} \put(-219.20,-180.31){\circle*{0.000001}} \put(-218.50,-180.31){\circle*{0.000001}} \put(-217.79,-181.02){\circle*{0.000001}} \put(-217.08,-181.02){\circle*{0.000001}} \put(-216.37,-181.73){\circle*{0.000001}} \put(-215.67,-181.73){\circle*{0.000001}} \put(-214.96,-182.43){\circle*{0.000001}} \put(-214.25,-182.43){\circle*{0.000001}} 
\put(-213.55,-183.14){\circle*{0.000001}} \put(-212.84,-183.14){\circle*{0.000001}} \put(-212.13,-183.85){\circle*{0.000001}} \put(-211.42,-183.85){\circle*{0.000001}} \put(-210.72,-183.85){\circle*{0.000001}} \put(-210.01,-184.55){\circle*{0.000001}} \put(-209.30,-184.55){\circle*{0.000001}} \put(-208.60,-185.26){\circle*{0.000001}} \put(-207.89,-185.26){\circle*{0.000001}} \put(-207.18,-185.97){\circle*{0.000001}} \put(-206.48,-185.97){\circle*{0.000001}} \put(-205.77,-186.68){\circle*{0.000001}} \put(-205.06,-186.68){\circle*{0.000001}} \put(-204.35,-187.38){\circle*{0.000001}} \put(-203.65,-187.38){\circle*{0.000001}} \put(-229.81,-184.55){\circle*{0.000001}} \put(-229.10,-184.55){\circle*{0.000001}} \put(-228.40,-184.55){\circle*{0.000001}} \put(-227.69,-184.55){\circle*{0.000001}} \put(-226.98,-184.55){\circle*{0.000001}} \put(-226.27,-185.26){\circle*{0.000001}} \put(-225.57,-185.26){\circle*{0.000001}} \put(-224.86,-185.26){\circle*{0.000001}} \put(-224.15,-185.26){\circle*{0.000001}} \put(-223.45,-185.26){\circle*{0.000001}} \put(-222.74,-185.26){\circle*{0.000001}} \put(-222.03,-185.26){\circle*{0.000001}} \put(-221.32,-185.26){\circle*{0.000001}} \put(-220.62,-185.26){\circle*{0.000001}} \put(-219.91,-185.97){\circle*{0.000001}} \put(-219.20,-185.97){\circle*{0.000001}} \put(-218.50,-185.97){\circle*{0.000001}} \put(-217.79,-185.97){\circle*{0.000001}} \put(-217.08,-185.97){\circle*{0.000001}} \put(-216.37,-185.97){\circle*{0.000001}} \put(-215.67,-185.97){\circle*{0.000001}} \put(-214.96,-185.97){\circle*{0.000001}} \put(-214.25,-185.97){\circle*{0.000001}} \put(-213.55,-185.97){\circle*{0.000001}} \put(-212.84,-186.68){\circle*{0.000001}} \put(-212.13,-186.68){\circle*{0.000001}} \put(-211.42,-186.68){\circle*{0.000001}} \put(-210.72,-186.68){\circle*{0.000001}} \put(-210.01,-186.68){\circle*{0.000001}} \put(-209.30,-186.68){\circle*{0.000001}} \put(-208.60,-186.68){\circle*{0.000001}} \put(-207.89,-186.68){\circle*{0.000001}} \put(-207.18,-186.68){\circle*{0.000001}} \put(-206.48,-187.38){\circle*{0.000001}} \put(-205.77,-187.38){\circle*{0.000001}} \put(-205.06,-187.38){\circle*{0.000001}} \put(-204.35,-187.38){\circle*{0.000001}} \put(-203.65,-187.38){\circle*{0.000001}} \put(-229.81,-184.55){\circle*{0.000001}} \put(-229.10,-183.85){\circle*{0.000001}} \put(-228.40,-183.85){\circle*{0.000001}} \put(-227.69,-183.14){\circle*{0.000001}} \put(-226.98,-182.43){\circle*{0.000001}} \put(-226.27,-181.73){\circle*{0.000001}} \put(-225.57,-181.73){\circle*{0.000001}} \put(-224.86,-181.02){\circle*{0.000001}} \put(-224.15,-180.31){\circle*{0.000001}} \put(-223.45,-179.61){\circle*{0.000001}} \put(-222.74,-179.61){\circle*{0.000001}} \put(-222.03,-178.90){\circle*{0.000001}} \put(-221.32,-178.19){\circle*{0.000001}} \put(-220.62,-177.48){\circle*{0.000001}} \put(-219.91,-177.48){\circle*{0.000001}} \put(-219.20,-176.78){\circle*{0.000001}} \put(-218.50,-176.07){\circle*{0.000001}} \put(-217.79,-175.36){\circle*{0.000001}} \put(-217.08,-175.36){\circle*{0.000001}} \put(-216.37,-174.66){\circle*{0.000001}} \put(-215.67,-173.95){\circle*{0.000001}} \put(-214.96,-173.24){\circle*{0.000001}} \put(-214.25,-173.24){\circle*{0.000001}} \put(-213.55,-172.53){\circle*{0.000001}} \put(-212.84,-171.83){\circle*{0.000001}} \put(-212.13,-171.12){\circle*{0.000001}} \put(-211.42,-171.12){\circle*{0.000001}} \put(-210.72,-170.41){\circle*{0.000001}} \put(-210.01,-169.71){\circle*{0.000001}} \put(-209.30,-169.00){\circle*{0.000001}} \put(-208.60,-169.00){\circle*{0.000001}} 
\put(-207.89,-168.29){\circle*{0.000001}} \put(-207.18,-167.58){\circle*{0.000001}} \put(-206.48,-166.88){\circle*{0.000001}} \put(-205.77,-166.88){\circle*{0.000001}} \put(-205.06,-166.17){\circle*{0.000001}} \put(-204.35,-165.46){\circle*{0.000001}} \put(-183.85,-192.33){\circle*{0.000001}} \put(-184.55,-191.63){\circle*{0.000001}} \put(-185.26,-190.92){\circle*{0.000001}} \put(-185.26,-190.21){\circle*{0.000001}} \put(-185.97,-189.50){\circle*{0.000001}} \put(-186.68,-188.80){\circle*{0.000001}} \put(-187.38,-188.09){\circle*{0.000001}} \put(-187.38,-187.38){\circle*{0.000001}} \put(-188.09,-186.68){\circle*{0.000001}} \put(-188.80,-185.97){\circle*{0.000001}} \put(-189.50,-185.26){\circle*{0.000001}} \put(-189.50,-184.55){\circle*{0.000001}} \put(-190.21,-183.85){\circle*{0.000001}} \put(-190.92,-183.14){\circle*{0.000001}} \put(-191.63,-182.43){\circle*{0.000001}} \put(-191.63,-181.73){\circle*{0.000001}} \put(-192.33,-181.02){\circle*{0.000001}} \put(-193.04,-180.31){\circle*{0.000001}} \put(-193.75,-179.61){\circle*{0.000001}} \put(-193.75,-178.90){\circle*{0.000001}} \put(-194.45,-178.19){\circle*{0.000001}} \put(-195.16,-177.48){\circle*{0.000001}} \put(-195.87,-176.78){\circle*{0.000001}} \put(-196.58,-176.07){\circle*{0.000001}} \put(-196.58,-175.36){\circle*{0.000001}} \put(-197.28,-174.66){\circle*{0.000001}} \put(-197.99,-173.95){\circle*{0.000001}} \put(-198.70,-173.24){\circle*{0.000001}} \put(-198.70,-172.53){\circle*{0.000001}} \put(-199.40,-171.83){\circle*{0.000001}} \put(-200.11,-171.12){\circle*{0.000001}} \put(-200.82,-170.41){\circle*{0.000001}} \put(-200.82,-169.71){\circle*{0.000001}} \put(-201.53,-169.00){\circle*{0.000001}} \put(-202.23,-168.29){\circle*{0.000001}} \put(-202.94,-167.58){\circle*{0.000001}} \put(-202.94,-166.88){\circle*{0.000001}} \put(-203.65,-166.17){\circle*{0.000001}} \put(-204.35,-165.46){\circle*{0.000001}} \put(-183.85,-192.33){\circle*{0.000001}} \put(-183.85,-191.63){\circle*{0.000001}} \put(-183.85,-190.92){\circle*{0.000001}} \put(-183.85,-190.21){\circle*{0.000001}} \put(-184.55,-189.50){\circle*{0.000001}} \put(-184.55,-188.80){\circle*{0.000001}} \put(-184.55,-188.09){\circle*{0.000001}} \put(-184.55,-187.38){\circle*{0.000001}} \put(-184.55,-186.68){\circle*{0.000001}} \put(-184.55,-185.97){\circle*{0.000001}} \put(-184.55,-185.26){\circle*{0.000001}} \put(-185.26,-184.55){\circle*{0.000001}} \put(-185.26,-183.85){\circle*{0.000001}} \put(-185.26,-183.14){\circle*{0.000001}} \put(-185.26,-182.43){\circle*{0.000001}} \put(-185.26,-181.73){\circle*{0.000001}} \put(-185.26,-181.02){\circle*{0.000001}} \put(-185.26,-180.31){\circle*{0.000001}} \put(-185.97,-179.61){\circle*{0.000001}} \put(-185.97,-178.90){\circle*{0.000001}} \put(-185.97,-178.19){\circle*{0.000001}} \put(-185.97,-177.48){\circle*{0.000001}} \put(-185.97,-176.78){\circle*{0.000001}} \put(-185.97,-176.07){\circle*{0.000001}} \put(-185.97,-175.36){\circle*{0.000001}} \put(-186.68,-174.66){\circle*{0.000001}} \put(-186.68,-173.95){\circle*{0.000001}} \put(-186.68,-173.24){\circle*{0.000001}} \put(-186.68,-172.53){\circle*{0.000001}} \put(-186.68,-171.83){\circle*{0.000001}} \put(-186.68,-171.12){\circle*{0.000001}} \put(-186.68,-170.41){\circle*{0.000001}} \put(-187.38,-169.71){\circle*{0.000001}} \put(-187.38,-169.00){\circle*{0.000001}} \put(-187.38,-168.29){\circle*{0.000001}} \put(-187.38,-167.58){\circle*{0.000001}} \put(-187.38,-166.88){\circle*{0.000001}} \put(-187.38,-166.17){\circle*{0.000001}} \put(-187.38,-165.46){\circle*{0.000001}} 
% Figure placeholder: picture-environment dot plot; only recoverable label: $\rho:0.1$
\put(-79.20,101.12){\circle*{0.000001}} \put(-79.90,101.82){\circle*{0.000001}} \put(-79.90,102.53){\circle*{0.000001}} \put(-79.90,103.24){\circle*{0.000001}} \put(-80.61,103.94){\circle*{0.000001}} \put(-80.61,104.65){\circle*{0.000001}} \put(-80.61,105.36){\circle*{0.000001}} \put(-81.32,106.07){\circle*{0.000001}} \put(-81.32,106.77){\circle*{0.000001}} \put(-82.02,107.48){\circle*{0.000001}} \put(-82.02,108.19){\circle*{0.000001}} \put(-82.02,108.89){\circle*{0.000001}} \put(-82.73,109.60){\circle*{0.000001}} \put(-82.73,110.31){\circle*{0.000001}} \put(-61.52,50.91){\circle*{0.000001}} \put(-60.81,50.91){\circle*{0.000001}} \put(-60.81,50.91){\circle*{0.000001}} \put(-51.62,-7.78){\circle*{0.000001}} \put(-51.62,-7.07){\circle*{0.000001}} \put(-51.62,-6.36){\circle*{0.000001}} \put(-51.62,-5.66){\circle*{0.000001}} \put(-52.33,-4.95){\circle*{0.000001}} \put(-52.33,-4.24){\circle*{0.000001}} \put(-52.33,-3.54){\circle*{0.000001}} \put(-52.33,-2.83){\circle*{0.000001}} \put(-52.33,-2.12){\circle*{0.000001}} \put(-52.33,-1.41){\circle*{0.000001}} \put(-53.03,-0.71){\circle*{0.000001}} \put(-53.03, 0.00){\circle*{0.000001}} \put(-53.03, 0.71){\circle*{0.000001}} \put(-53.03, 1.41){\circle*{0.000001}} \put(-53.03, 2.12){\circle*{0.000001}} \put(-53.03, 2.83){\circle*{0.000001}} \put(-53.74, 3.54){\circle*{0.000001}} \put(-53.74, 4.24){\circle*{0.000001}} \put(-53.74, 4.95){\circle*{0.000001}} \put(-53.74, 5.66){\circle*{0.000001}} \put(-53.74, 6.36){\circle*{0.000001}} \put(-53.74, 7.07){\circle*{0.000001}} \put(-53.74, 7.78){\circle*{0.000001}} \put(-54.45, 8.49){\circle*{0.000001}} \put(-54.45, 9.19){\circle*{0.000001}} \put(-54.45, 9.90){\circle*{0.000001}} \put(-54.45,10.61){\circle*{0.000001}} \put(-54.45,11.31){\circle*{0.000001}} \put(-54.45,12.02){\circle*{0.000001}} \put(-55.15,12.73){\circle*{0.000001}} \put(-55.15,13.44){\circle*{0.000001}} \put(-55.15,14.14){\circle*{0.000001}} \put(-55.15,14.85){\circle*{0.000001}} \put(-55.15,15.56){\circle*{0.000001}} \put(-55.15,16.26){\circle*{0.000001}} \put(-55.15,16.97){\circle*{0.000001}} \put(-55.86,17.68){\circle*{0.000001}} \put(-55.86,18.38){\circle*{0.000001}} \put(-55.86,19.09){\circle*{0.000001}} \put(-55.86,19.80){\circle*{0.000001}} \put(-55.86,20.51){\circle*{0.000001}} \put(-55.86,21.21){\circle*{0.000001}} \put(-56.57,21.92){\circle*{0.000001}} \put(-56.57,22.63){\circle*{0.000001}} \put(-56.57,23.33){\circle*{0.000001}} \put(-56.57,24.04){\circle*{0.000001}} \put(-56.57,24.75){\circle*{0.000001}} \put(-56.57,25.46){\circle*{0.000001}} \put(-57.28,26.16){\circle*{0.000001}} \put(-57.28,26.87){\circle*{0.000001}} \put(-57.28,27.58){\circle*{0.000001}} \put(-57.28,28.28){\circle*{0.000001}} \put(-57.28,28.99){\circle*{0.000001}} \put(-57.28,29.70){\circle*{0.000001}} \put(-57.28,30.41){\circle*{0.000001}} \put(-57.98,31.11){\circle*{0.000001}} \put(-57.98,31.82){\circle*{0.000001}} \put(-57.98,32.53){\circle*{0.000001}} \put(-57.98,33.23){\circle*{0.000001}} \put(-57.98,33.94){\circle*{0.000001}} \put(-57.98,34.65){\circle*{0.000001}} \put(-58.69,35.36){\circle*{0.000001}} \put(-58.69,36.06){\circle*{0.000001}} \put(-58.69,36.77){\circle*{0.000001}} \put(-58.69,37.48){\circle*{0.000001}} \put(-58.69,38.18){\circle*{0.000001}} \put(-58.69,38.89){\circle*{0.000001}} \put(-58.69,39.60){\circle*{0.000001}} \put(-59.40,40.31){\circle*{0.000001}} \put(-59.40,41.01){\circle*{0.000001}} \put(-59.40,41.72){\circle*{0.000001}} \put(-59.40,42.43){\circle*{0.000001}} \put(-59.40,43.13){\circle*{0.000001}} 
\put(-59.40,43.84){\circle*{0.000001}} \put(-60.10,44.55){\circle*{0.000001}} \put(-60.10,45.25){\circle*{0.000001}} \put(-60.10,45.96){\circle*{0.000001}} \put(-60.10,46.67){\circle*{0.000001}} \put(-60.10,47.38){\circle*{0.000001}} \put(-60.10,48.08){\circle*{0.000001}} \put(-60.81,48.79){\circle*{0.000001}} \put(-60.81,49.50){\circle*{0.000001}} \put(-60.81,50.20){\circle*{0.000001}} \put(-60.81,50.91){\circle*{0.000001}} \put(-110.31,-2.83){\circle*{0.000001}} \put(-109.60,-2.83){\circle*{0.000001}} \put(-108.89,-2.83){\circle*{0.000001}} \put(-108.19,-2.83){\circle*{0.000001}} \put(-107.48,-2.83){\circle*{0.000001}} \put(-106.77,-2.83){\circle*{0.000001}} \put(-106.07,-3.54){\circle*{0.000001}} \put(-105.36,-3.54){\circle*{0.000001}} \put(-104.65,-3.54){\circle*{0.000001}} \put(-103.94,-3.54){\circle*{0.000001}} \put(-103.24,-3.54){\circle*{0.000001}} \put(-102.53,-3.54){\circle*{0.000001}} \put(-101.82,-3.54){\circle*{0.000001}} \put(-101.12,-3.54){\circle*{0.000001}} \put(-100.41,-3.54){\circle*{0.000001}} \put(-99.70,-3.54){\circle*{0.000001}} \put(-98.99,-3.54){\circle*{0.000001}} \put(-98.29,-3.54){\circle*{0.000001}} \put(-97.58,-4.24){\circle*{0.000001}} \put(-96.87,-4.24){\circle*{0.000001}} \put(-96.17,-4.24){\circle*{0.000001}} \put(-95.46,-4.24){\circle*{0.000001}} \put(-94.75,-4.24){\circle*{0.000001}} \put(-94.05,-4.24){\circle*{0.000001}} \put(-93.34,-4.24){\circle*{0.000001}} \put(-92.63,-4.24){\circle*{0.000001}} \put(-91.92,-4.24){\circle*{0.000001}} \put(-91.22,-4.24){\circle*{0.000001}} \put(-90.51,-4.24){\circle*{0.000001}} \put(-89.80,-4.24){\circle*{0.000001}} \put(-89.10,-4.95){\circle*{0.000001}} \put(-88.39,-4.95){\circle*{0.000001}} \put(-87.68,-4.95){\circle*{0.000001}} \put(-86.97,-4.95){\circle*{0.000001}} \put(-86.27,-4.95){\circle*{0.000001}} \put(-85.56,-4.95){\circle*{0.000001}} \put(-84.85,-4.95){\circle*{0.000001}} \put(-84.15,-4.95){\circle*{0.000001}} \put(-83.44,-4.95){\circle*{0.000001}} \put(-82.73,-4.95){\circle*{0.000001}} \put(-82.02,-4.95){\circle*{0.000001}} \put(-81.32,-4.95){\circle*{0.000001}} \put(-80.61,-5.66){\circle*{0.000001}} \put(-79.90,-5.66){\circle*{0.000001}} \put(-79.20,-5.66){\circle*{0.000001}} \put(-78.49,-5.66){\circle*{0.000001}} \put(-77.78,-5.66){\circle*{0.000001}} \put(-77.07,-5.66){\circle*{0.000001}} \put(-76.37,-5.66){\circle*{0.000001}} \put(-75.66,-5.66){\circle*{0.000001}} \put(-74.95,-5.66){\circle*{0.000001}} \put(-74.25,-5.66){\circle*{0.000001}} \put(-73.54,-5.66){\circle*{0.000001}} \put(-72.83,-5.66){\circle*{0.000001}} \put(-72.12,-6.36){\circle*{0.000001}} \put(-71.42,-6.36){\circle*{0.000001}} \put(-70.71,-6.36){\circle*{0.000001}} \put(-70.00,-6.36){\circle*{0.000001}} \put(-69.30,-6.36){\circle*{0.000001}} \put(-68.59,-6.36){\circle*{0.000001}} \put(-67.88,-6.36){\circle*{0.000001}} \put(-67.18,-6.36){\circle*{0.000001}} \put(-66.47,-6.36){\circle*{0.000001}} \put(-65.76,-6.36){\circle*{0.000001}} \put(-65.05,-6.36){\circle*{0.000001}} \put(-64.35,-6.36){\circle*{0.000001}} \put(-63.64,-7.07){\circle*{0.000001}} \put(-62.93,-7.07){\circle*{0.000001}} \put(-62.23,-7.07){\circle*{0.000001}} \put(-61.52,-7.07){\circle*{0.000001}} \put(-60.81,-7.07){\circle*{0.000001}} \put(-60.10,-7.07){\circle*{0.000001}} \put(-59.40,-7.07){\circle*{0.000001}} \put(-58.69,-7.07){\circle*{0.000001}} \put(-57.98,-7.07){\circle*{0.000001}} \put(-57.28,-7.07){\circle*{0.000001}} \put(-56.57,-7.07){\circle*{0.000001}} \put(-55.86,-7.07){\circle*{0.000001}} \put(-55.15,-7.78){\circle*{0.000001}} 
\put(-54.45,-7.78){\circle*{0.000001}} \put(-53.74,-7.78){\circle*{0.000001}} \put(-53.03,-7.78){\circle*{0.000001}} \put(-52.33,-7.78){\circle*{0.000001}} \put(-51.62,-7.78){\circle*{0.000001}} \put(-130.81,-59.40){\circle*{0.000001}} \put(-130.81,-58.69){\circle*{0.000001}} \put(-130.11,-57.98){\circle*{0.000001}} \put(-130.11,-57.28){\circle*{0.000001}} \put(-130.11,-56.57){\circle*{0.000001}} \put(-129.40,-55.86){\circle*{0.000001}} \put(-129.40,-55.15){\circle*{0.000001}} \put(-128.69,-54.45){\circle*{0.000001}} \put(-128.69,-53.74){\circle*{0.000001}} \put(-128.69,-53.03){\circle*{0.000001}} \put(-127.99,-52.33){\circle*{0.000001}} \put(-127.99,-51.62){\circle*{0.000001}} \put(-127.99,-50.91){\circle*{0.000001}} \put(-127.28,-50.20){\circle*{0.000001}} \put(-127.28,-49.50){\circle*{0.000001}} \put(-127.28,-48.79){\circle*{0.000001}} \put(-126.57,-48.08){\circle*{0.000001}} \put(-126.57,-47.38){\circle*{0.000001}} \put(-125.87,-46.67){\circle*{0.000001}} \put(-125.87,-45.96){\circle*{0.000001}} \put(-125.87,-45.25){\circle*{0.000001}} \put(-125.16,-44.55){\circle*{0.000001}} \put(-125.16,-43.84){\circle*{0.000001}} \put(-125.16,-43.13){\circle*{0.000001}} \put(-124.45,-42.43){\circle*{0.000001}} \put(-124.45,-41.72){\circle*{0.000001}} \put(-124.45,-41.01){\circle*{0.000001}} \put(-123.74,-40.31){\circle*{0.000001}} \put(-123.74,-39.60){\circle*{0.000001}} \put(-123.04,-38.89){\circle*{0.000001}} \put(-123.04,-38.18){\circle*{0.000001}} \put(-123.04,-37.48){\circle*{0.000001}} \put(-122.33,-36.77){\circle*{0.000001}} \put(-122.33,-36.06){\circle*{0.000001}} \put(-122.33,-35.36){\circle*{0.000001}} \put(-121.62,-34.65){\circle*{0.000001}} \put(-121.62,-33.94){\circle*{0.000001}} \put(-121.62,-33.23){\circle*{0.000001}} \put(-120.92,-32.53){\circle*{0.000001}} \put(-120.92,-31.82){\circle*{0.000001}} \put(-120.92,-31.11){\circle*{0.000001}} \put(-120.21,-30.41){\circle*{0.000001}} \put(-120.21,-29.70){\circle*{0.000001}} \put(-119.50,-28.99){\circle*{0.000001}} \put(-119.50,-28.28){\circle*{0.000001}} \put(-119.50,-27.58){\circle*{0.000001}} \put(-118.79,-26.87){\circle*{0.000001}} \put(-118.79,-26.16){\circle*{0.000001}} \put(-118.79,-25.46){\circle*{0.000001}} \put(-118.09,-24.75){\circle*{0.000001}} \put(-118.09,-24.04){\circle*{0.000001}} \put(-118.09,-23.33){\circle*{0.000001}} \put(-117.38,-22.63){\circle*{0.000001}} \put(-117.38,-21.92){\circle*{0.000001}} \put(-116.67,-21.21){\circle*{0.000001}} \put(-116.67,-20.51){\circle*{0.000001}} \put(-116.67,-19.80){\circle*{0.000001}} \put(-115.97,-19.09){\circle*{0.000001}} \put(-115.97,-18.38){\circle*{0.000001}} \put(-115.97,-17.68){\circle*{0.000001}} \put(-115.26,-16.97){\circle*{0.000001}} \put(-115.26,-16.26){\circle*{0.000001}} \put(-115.26,-15.56){\circle*{0.000001}} \put(-114.55,-14.85){\circle*{0.000001}} \put(-114.55,-14.14){\circle*{0.000001}} \put(-113.84,-13.44){\circle*{0.000001}} \put(-113.84,-12.73){\circle*{0.000001}} \put(-113.84,-12.02){\circle*{0.000001}} \put(-113.14,-11.31){\circle*{0.000001}} \put(-113.14,-10.61){\circle*{0.000001}} \put(-113.14,-9.90){\circle*{0.000001}} \put(-112.43,-9.19){\circle*{0.000001}} \put(-112.43,-8.49){\circle*{0.000001}} \put(-112.43,-7.78){\circle*{0.000001}} \put(-111.72,-7.07){\circle*{0.000001}} \put(-111.72,-6.36){\circle*{0.000001}} \put(-111.02,-5.66){\circle*{0.000001}} \put(-111.02,-4.95){\circle*{0.000001}} \put(-111.02,-4.24){\circle*{0.000001}} \put(-110.31,-3.54){\circle*{0.000001}} \put(-110.31,-2.83){\circle*{0.000001}} \put(-130.81,-59.40){\circle*{0.000001}} 
\put(-130.11,-59.40){\circle*{0.000001}} \put(-129.40,-60.10){\circle*{0.000001}} \put(-128.69,-60.10){\circle*{0.000001}} \put(-127.99,-60.81){\circle*{0.000001}} \put(-127.99,-60.81){\circle*{0.000001}} \put(-127.99,-60.81){\circle*{0.000001}} \put(-127.28,-60.10){\circle*{0.000001}} \put(-126.57,-59.40){\circle*{0.000001}} \put(-125.87,-59.40){\circle*{0.000001}} \put(-125.16,-58.69){\circle*{0.000001}} \put(-124.45,-57.98){\circle*{0.000001}} \put(-123.74,-57.28){\circle*{0.000001}} \put(-123.04,-56.57){\circle*{0.000001}} \put(-122.33,-56.57){\circle*{0.000001}} \put(-121.62,-55.86){\circle*{0.000001}} \put(-120.92,-55.15){\circle*{0.000001}} \put(-120.21,-54.45){\circle*{0.000001}} \put(-119.50,-53.74){\circle*{0.000001}} \put(-118.79,-53.74){\circle*{0.000001}} \put(-118.09,-53.03){\circle*{0.000001}} \put(-117.38,-52.33){\circle*{0.000001}} \put(-116.67,-51.62){\circle*{0.000001}} \put(-115.97,-51.62){\circle*{0.000001}} \put(-115.26,-50.91){\circle*{0.000001}} \put(-114.55,-50.20){\circle*{0.000001}} \put(-113.84,-49.50){\circle*{0.000001}} \put(-113.14,-48.79){\circle*{0.000001}} \put(-112.43,-48.79){\circle*{0.000001}} \put(-111.72,-48.08){\circle*{0.000001}} \put(-111.02,-47.38){\circle*{0.000001}} \put(-110.31,-46.67){\circle*{0.000001}} \put(-109.60,-45.96){\circle*{0.000001}} \put(-108.89,-45.96){\circle*{0.000001}} \put(-108.19,-45.25){\circle*{0.000001}} \put(-107.48,-44.55){\circle*{0.000001}} \put(-106.77,-43.84){\circle*{0.000001}} \put(-106.07,-43.13){\circle*{0.000001}} \put(-105.36,-43.13){\circle*{0.000001}} \put(-104.65,-42.43){\circle*{0.000001}} \put(-103.94,-41.72){\circle*{0.000001}} \put(-103.24,-41.01){\circle*{0.000001}} \put(-102.53,-40.31){\circle*{0.000001}} \put(-101.82,-40.31){\circle*{0.000001}} \put(-101.12,-39.60){\circle*{0.000001}} \put(-100.41,-38.89){\circle*{0.000001}} \put(-99.70,-38.18){\circle*{0.000001}} \put(-98.99,-37.48){\circle*{0.000001}} \put(-98.29,-37.48){\circle*{0.000001}} \put(-97.58,-36.77){\circle*{0.000001}} \put(-96.87,-36.06){\circle*{0.000001}} \put(-96.17,-35.36){\circle*{0.000001}} \put(-95.46,-34.65){\circle*{0.000001}} \put(-94.75,-34.65){\circle*{0.000001}} \put(-94.05,-33.94){\circle*{0.000001}} \put(-93.34,-33.23){\circle*{0.000001}} \put(-92.63,-32.53){\circle*{0.000001}} \put(-91.92,-32.53){\circle*{0.000001}} \put(-91.22,-31.82){\circle*{0.000001}} \put(-90.51,-31.11){\circle*{0.000001}} \put(-89.80,-30.41){\circle*{0.000001}} \put(-89.10,-29.70){\circle*{0.000001}} \put(-88.39,-29.70){\circle*{0.000001}} \put(-87.68,-28.99){\circle*{0.000001}} \put(-86.97,-28.28){\circle*{0.000001}} \put(-86.27,-27.58){\circle*{0.000001}} \put(-85.56,-26.87){\circle*{0.000001}} \put(-84.85,-26.87){\circle*{0.000001}} \put(-84.15,-26.16){\circle*{0.000001}} \put(-83.44,-25.46){\circle*{0.000001}} \put(-82.73,-24.75){\circle*{0.000001}} \put(-82.02,-24.04){\circle*{0.000001}} \put(-81.32,-24.04){\circle*{0.000001}} \put(-80.61,-23.33){\circle*{0.000001}} \put(-79.90,-22.63){\circle*{0.000001}} \put(-79.90,-22.63){\circle*{0.000001}} \put(-79.20,-22.63){\circle*{0.000001}} \put(-78.49,-22.63){\circle*{0.000001}} \put(-77.78,-22.63){\circle*{0.000001}} \put(-76.37,-24.75){\circle*{0.000001}} \put(-77.07,-24.04){\circle*{0.000001}} \put(-77.07,-23.33){\circle*{0.000001}} \put(-77.78,-22.63){\circle*{0.000001}} \put(-100.41,-42.43){\circle*{0.000001}} \put(-99.70,-41.72){\circle*{0.000001}} \put(-98.99,-41.72){\circle*{0.000001}} \put(-98.29,-41.01){\circle*{0.000001}} \put(-97.58,-40.31){\circle*{0.000001}} 
\put(-96.87,-39.60){\circle*{0.000001}} \put(-96.17,-39.60){\circle*{0.000001}} \put(-95.46,-38.89){\circle*{0.000001}} \put(-94.75,-38.18){\circle*{0.000001}} \put(-94.05,-37.48){\circle*{0.000001}} \put(-93.34,-37.48){\circle*{0.000001}} \put(-92.63,-36.77){\circle*{0.000001}} \put(-91.92,-36.06){\circle*{0.000001}} \put(-91.22,-35.36){\circle*{0.000001}} \put(-90.51,-35.36){\circle*{0.000001}} \put(-89.80,-34.65){\circle*{0.000001}} \put(-89.10,-33.94){\circle*{0.000001}} \put(-88.39,-33.94){\circle*{0.000001}} \put(-87.68,-33.23){\circle*{0.000001}} \put(-86.97,-32.53){\circle*{0.000001}} \put(-86.27,-31.82){\circle*{0.000001}} \put(-85.56,-31.82){\circle*{0.000001}} \put(-84.85,-31.11){\circle*{0.000001}} \put(-84.15,-30.41){\circle*{0.000001}} \put(-83.44,-29.70){\circle*{0.000001}} \put(-82.73,-29.70){\circle*{0.000001}} \put(-82.02,-28.99){\circle*{0.000001}} \put(-81.32,-28.28){\circle*{0.000001}} \put(-80.61,-27.58){\circle*{0.000001}} \put(-79.90,-27.58){\circle*{0.000001}} \put(-79.20,-26.87){\circle*{0.000001}} \put(-78.49,-26.16){\circle*{0.000001}} \put(-77.78,-25.46){\circle*{0.000001}} \put(-77.07,-25.46){\circle*{0.000001}} \put(-76.37,-24.75){\circle*{0.000001}} \put(-100.41,-42.43){\circle*{0.000001}} \put(-100.41,-42.43){\circle*{0.000001}} \put(-99.70,-41.72){\circle*{0.000001}} \put(-98.99,-41.01){\circle*{0.000001}} \put(-98.29,-40.31){\circle*{0.000001}} \put(-97.58,-39.60){\circle*{0.000001}} \put(-96.87,-38.89){\circle*{0.000001}} \put(-96.17,-38.18){\circle*{0.000001}} \put(-95.46,-37.48){\circle*{0.000001}} \put(-94.75,-36.77){\circle*{0.000001}} \put(-94.05,-36.06){\circle*{0.000001}} \put(-93.34,-35.36){\circle*{0.000001}} \put(-93.34,-34.65){\circle*{0.000001}} \put(-92.63,-33.94){\circle*{0.000001}} \put(-91.92,-33.23){\circle*{0.000001}} \put(-91.22,-32.53){\circle*{0.000001}} \put(-90.51,-31.82){\circle*{0.000001}} \put(-89.80,-31.11){\circle*{0.000001}} \put(-89.10,-30.41){\circle*{0.000001}} \put(-88.39,-29.70){\circle*{0.000001}} \put(-87.68,-28.99){\circle*{0.000001}} \put(-86.97,-28.28){\circle*{0.000001}} \put(-86.27,-27.58){\circle*{0.000001}} \put(-85.56,-26.87){\circle*{0.000001}} \put(-84.85,-26.16){\circle*{0.000001}} \put(-84.15,-25.46){\circle*{0.000001}} \put(-83.44,-24.75){\circle*{0.000001}} \put(-82.73,-24.04){\circle*{0.000001}} \put(-82.02,-23.33){\circle*{0.000001}} \put(-81.32,-22.63){\circle*{0.000001}} \put(-80.61,-21.92){\circle*{0.000001}} \put(-79.90,-21.21){\circle*{0.000001}} \put(-79.90,-20.51){\circle*{0.000001}} \put(-79.20,-19.80){\circle*{0.000001}} \put(-78.49,-19.09){\circle*{0.000001}} \put(-77.78,-18.38){\circle*{0.000001}} \put(-77.07,-17.68){\circle*{0.000001}} \put(-76.37,-16.97){\circle*{0.000001}} \put(-75.66,-16.26){\circle*{0.000001}} \put(-74.95,-15.56){\circle*{0.000001}} \put(-74.25,-14.85){\circle*{0.000001}} \put(-73.54,-14.14){\circle*{0.000001}} \put(-72.83,-13.44){\circle*{0.000001}} \put(-72.12,-12.73){\circle*{0.000001}} \put(-71.42,-12.02){\circle*{0.000001}} \put(-70.71,-11.31){\circle*{0.000001}} \put(-70.00,-10.61){\circle*{0.000001}} \put(-69.30,-9.90){\circle*{0.000001}} \put(-68.59,-9.19){\circle*{0.000001}} \put(-67.88,-8.49){\circle*{0.000001}} \put(-67.18,-7.78){\circle*{0.000001}} \put(-66.47,-7.07){\circle*{0.000001}} \put(-65.76,-6.36){\circle*{0.000001}} \put(-65.76,-5.66){\circle*{0.000001}} \put(-65.05,-4.95){\circle*{0.000001}} \put(-64.35,-4.24){\circle*{0.000001}} \put(-63.64,-3.54){\circle*{0.000001}} \put(-62.93,-2.83){\circle*{0.000001}} \put(-62.23,-2.12){\circle*{0.000001}} 
\put(-61.52,-1.41){\circle*{0.000001}} \put(-60.81,-0.71){\circle*{0.000001}} \put(-60.10, 0.00){\circle*{0.000001}} \put(-59.40, 0.71){\circle*{0.000001}} \put(-58.69, 1.41){\circle*{0.000001}} \put(-58.69, 1.41){\circle*{0.000001}} \put(-58.69, 2.12){\circle*{0.000001}} \put(-57.98, 2.83){\circle*{0.000001}} \put(-57.98, 2.83){\circle*{0.000001}} \put(-57.28, 2.83){\circle*{0.000001}} \put(-56.57, 2.83){\circle*{0.000001}} \put(-55.86, 2.83){\circle*{0.000001}} \put(-55.86, 2.83){\circle*{0.000001}} \put(-55.86, 2.83){\circle*{0.000001}} \put(-56.57, 3.54){\circle*{0.000001}} \put(-56.57, 4.24){\circle*{0.000001}} \put(-57.28, 4.95){\circle*{0.000001}} \put(-57.28, 5.66){\circle*{0.000001}} \put(-57.98, 6.36){\circle*{0.000001}} \put(-58.69, 7.07){\circle*{0.000001}} \put(-58.69, 7.78){\circle*{0.000001}} \put(-59.40, 8.49){\circle*{0.000001}} \put(-59.40, 9.19){\circle*{0.000001}} \put(-60.10, 9.90){\circle*{0.000001}} \put(-60.81,10.61){\circle*{0.000001}} \put(-60.81,11.31){\circle*{0.000001}} \put(-61.52,12.02){\circle*{0.000001}} \put(-61.52,12.73){\circle*{0.000001}} \put(-62.23,13.44){\circle*{0.000001}} \put(-62.93,14.14){\circle*{0.000001}} \put(-62.93,14.85){\circle*{0.000001}} \put(-63.64,15.56){\circle*{0.000001}} \put(-63.64,16.26){\circle*{0.000001}} \put(-64.35,16.97){\circle*{0.000001}} \put(-65.05,17.68){\circle*{0.000001}} \put(-65.05,18.38){\circle*{0.000001}} \put(-65.76,19.09){\circle*{0.000001}} \put(-65.76,19.80){\circle*{0.000001}} \put(-66.47,20.51){\circle*{0.000001}} \put(-67.18,21.21){\circle*{0.000001}} \put(-67.18,21.92){\circle*{0.000001}} \put(-67.88,22.63){\circle*{0.000001}} \put(-67.88,23.33){\circle*{0.000001}} \put(-68.59,24.04){\circle*{0.000001}} \put(-69.30,24.75){\circle*{0.000001}} \put(-69.30,25.46){\circle*{0.000001}} \put(-70.00,26.16){\circle*{0.000001}} \put(-70.00,26.87){\circle*{0.000001}} \put(-70.71,27.58){\circle*{0.000001}} \put(-71.42,28.28){\circle*{0.000001}} \put(-71.42,28.99){\circle*{0.000001}} \put(-72.12,29.70){\circle*{0.000001}} \put(-72.12,30.41){\circle*{0.000001}} \put(-72.83,31.11){\circle*{0.000001}} \put(-73.54,31.82){\circle*{0.000001}} \put(-73.54,32.53){\circle*{0.000001}} \put(-74.25,33.23){\circle*{0.000001}} \put(-74.25,33.94){\circle*{0.000001}} \put(-74.95,34.65){\circle*{0.000001}} \put(-75.66,35.36){\circle*{0.000001}} \put(-75.66,36.06){\circle*{0.000001}} \put(-76.37,36.77){\circle*{0.000001}} \put(-76.37,37.48){\circle*{0.000001}} \put(-77.07,38.18){\circle*{0.000001}} \put(-77.78,38.89){\circle*{0.000001}} \put(-77.78,39.60){\circle*{0.000001}} \put(-78.49,40.31){\circle*{0.000001}} \put(-78.49,41.01){\circle*{0.000001}} \put(-79.20,41.72){\circle*{0.000001}} \put(-79.90,42.43){\circle*{0.000001}} \put(-79.90,43.13){\circle*{0.000001}} \put(-80.61,43.84){\circle*{0.000001}} \put(-80.61,44.55){\circle*{0.000001}} \put(-81.32,45.25){\circle*{0.000001}} \put(-82.02,45.96){\circle*{0.000001}} \put(-82.02,46.67){\circle*{0.000001}} \put(-82.73,47.38){\circle*{0.000001}} \put(-82.73,48.08){\circle*{0.000001}} \put(-83.44,48.79){\circle*{0.000001}} \put(-84.15,49.50){\circle*{0.000001}} \put(-84.15,50.20){\circle*{0.000001}} \put(-84.85,50.91){\circle*{0.000001}} \put(-84.85,51.62){\circle*{0.000001}} \put(-85.56,52.33){\circle*{0.000001}} \put(-84.85,50.91){\circle*{0.000001}} \put(-84.85,51.62){\circle*{0.000001}} \put(-85.56,52.33){\circle*{0.000001}} \put(-84.85,50.91){\circle*{0.000001}} \put(-84.85,51.62){\circle*{0.000001}} \put(-84.85,48.79){\circle*{0.000001}} \put(-84.85,49.50){\circle*{0.000001}} 
\put(-84.85,50.20){\circle*{0.000001}} \put(-84.85,50.91){\circle*{0.000001}} \put(-84.85,51.62){\circle*{0.000001}} \put(-84.85,48.79){\circle*{0.000001}} \put(-84.15,48.79){\circle*{0.000001}} \put(-83.44,48.79){\circle*{0.000001}} \put(-82.73,48.79){\circle*{0.000001}} \put(-82.73,48.79){\circle*{0.000001}} \put(-82.02,48.08){\circle*{0.000001}} \put(-81.32,47.38){\circle*{0.000001}} \put(-81.32,47.38){\circle*{0.000001}} \put(-81.32,47.38){\circle*{0.000001}} \put(-82.02,48.08){\circle*{0.000001}} \put(-82.02,48.79){\circle*{0.000001}} \put(-82.73,49.50){\circle*{0.000001}} \put(-83.44,50.20){\circle*{0.000001}} \put(-84.15,50.91){\circle*{0.000001}} \put(-84.15,51.62){\circle*{0.000001}} \put(-84.85,52.33){\circle*{0.000001}} \put(-85.56,53.03){\circle*{0.000001}} \put(-86.27,53.74){\circle*{0.000001}} \put(-86.27,54.45){\circle*{0.000001}} \put(-86.97,55.15){\circle*{0.000001}} \put(-87.68,55.86){\circle*{0.000001}} \put(-88.39,56.57){\circle*{0.000001}} \put(-88.39,57.28){\circle*{0.000001}} \put(-89.10,57.98){\circle*{0.000001}} \put(-89.80,58.69){\circle*{0.000001}} \put(-90.51,59.40){\circle*{0.000001}} \put(-90.51,60.10){\circle*{0.000001}} \put(-91.22,60.81){\circle*{0.000001}} \put(-91.92,61.52){\circle*{0.000001}} \put(-92.63,62.23){\circle*{0.000001}} \put(-92.63,62.93){\circle*{0.000001}} \put(-93.34,63.64){\circle*{0.000001}} \put(-94.05,64.35){\circle*{0.000001}} \put(-94.75,65.05){\circle*{0.000001}} \put(-94.75,65.76){\circle*{0.000001}} \put(-95.46,66.47){\circle*{0.000001}} \put(-96.17,67.18){\circle*{0.000001}} \put(-96.87,67.88){\circle*{0.000001}} \put(-96.87,68.59){\circle*{0.000001}} \put(-97.58,69.30){\circle*{0.000001}} \put(-98.29,70.00){\circle*{0.000001}} \put(-98.99,70.71){\circle*{0.000001}} \put(-98.99,71.42){\circle*{0.000001}} \put(-99.70,72.12){\circle*{0.000001}} \put(-100.41,72.83){\circle*{0.000001}} \put(-101.12,73.54){\circle*{0.000001}} \put(-101.12,74.25){\circle*{0.000001}} \put(-101.82,74.95){\circle*{0.000001}} \put(-102.53,75.66){\circle*{0.000001}} \put(-103.24,76.37){\circle*{0.000001}} \put(-103.24,77.07){\circle*{0.000001}} \put(-103.94,77.78){\circle*{0.000001}} \put(-104.65,78.49){\circle*{0.000001}} \put(-105.36,79.20){\circle*{0.000001}} \put(-105.36,79.90){\circle*{0.000001}} \put(-106.07,80.61){\circle*{0.000001}} \put(-106.77,81.32){\circle*{0.000001}} \put(-107.48,82.02){\circle*{0.000001}} \put(-107.48,82.73){\circle*{0.000001}} \put(-108.19,83.44){\circle*{0.000001}} \put(-108.89,84.15){\circle*{0.000001}} \put(-109.60,84.85){\circle*{0.000001}} \put(-109.60,85.56){\circle*{0.000001}} \put(-110.31,86.27){\circle*{0.000001}} \put(-111.02,86.97){\circle*{0.000001}} \put(-111.72,87.68){\circle*{0.000001}} \put(-111.72,88.39){\circle*{0.000001}} \put(-112.43,89.10){\circle*{0.000001}} \put(-113.14,89.80){\circle*{0.000001}} \put(-113.84,90.51){\circle*{0.000001}} \put(-113.84,91.22){\circle*{0.000001}} \put(-114.55,91.92){\circle*{0.000001}} \put(-115.26,92.63){\circle*{0.000001}} \put(-115.97,93.34){\circle*{0.000001}} \put(-115.97,94.05){\circle*{0.000001}} \put(-116.67,94.75){\circle*{0.000001}} \put(-117.38,95.46){\circle*{0.000001}} \put(-117.38,95.46){\circle*{0.000001}} \put(-117.38,96.17){\circle*{0.000001}} \put(-117.38,96.87){\circle*{0.000001}} \put(-117.38,97.58){\circle*{0.000001}} \put(-117.38,98.29){\circle*{0.000001}} \put(-116.67,98.99){\circle*{0.000001}} \put(-116.67,99.70){\circle*{0.000001}} \put(-116.67,100.41){\circle*{0.000001}} \put(-116.67,101.12){\circle*{0.000001}} 
\put(-116.67,101.82){\circle*{0.000001}} \put(-116.67,102.53){\circle*{0.000001}} \put(-116.67,103.24){\circle*{0.000001}} \put(-116.67,103.94){\circle*{0.000001}} \put(-115.97,104.65){\circle*{0.000001}} \put(-115.97,105.36){\circle*{0.000001}} \put(-115.97,106.07){\circle*{0.000001}} \put(-115.97,106.77){\circle*{0.000001}} \put(-115.97,107.48){\circle*{0.000001}} \put(-115.97,108.19){\circle*{0.000001}} \put(-115.97,108.89){\circle*{0.000001}} \put(-115.97,109.60){\circle*{0.000001}} \put(-115.26,110.31){\circle*{0.000001}} \put(-115.26,111.02){\circle*{0.000001}} \put(-115.26,111.72){\circle*{0.000001}} \put(-115.26,112.43){\circle*{0.000001}} \put(-115.26,113.14){\circle*{0.000001}} \put(-115.26,113.84){\circle*{0.000001}} \put(-115.26,114.55){\circle*{0.000001}} \put(-115.26,115.26){\circle*{0.000001}} \put(-114.55,115.97){\circle*{0.000001}} \put(-114.55,116.67){\circle*{0.000001}} \put(-114.55,117.38){\circle*{0.000001}} \put(-114.55,118.09){\circle*{0.000001}} \put(-114.55,118.79){\circle*{0.000001}} \put(-114.55,119.50){\circle*{0.000001}} \put(-114.55,120.21){\circle*{0.000001}} \put(-114.55,120.92){\circle*{0.000001}} \put(-113.84,121.62){\circle*{0.000001}} \put(-113.84,122.33){\circle*{0.000001}} \put(-113.84,123.04){\circle*{0.000001}} \put(-113.84,123.74){\circle*{0.000001}} \put(-113.84,124.45){\circle*{0.000001}} \put(-113.84,125.16){\circle*{0.000001}} \put(-113.84,125.87){\circle*{0.000001}} \put(-113.84,126.57){\circle*{0.000001}} \put(-113.84,127.28){\circle*{0.000001}} \put(-113.14,127.99){\circle*{0.000001}} \put(-113.14,128.69){\circle*{0.000001}} \put(-113.14,129.40){\circle*{0.000001}} \put(-113.14,130.11){\circle*{0.000001}} \put(-113.14,130.81){\circle*{0.000001}} \put(-113.14,131.52){\circle*{0.000001}} \put(-113.14,132.23){\circle*{0.000001}} \put(-113.14,132.94){\circle*{0.000001}} \put(-112.43,133.64){\circle*{0.000001}} \put(-112.43,134.35){\circle*{0.000001}} \put(-112.43,135.06){\circle*{0.000001}} \put(-112.43,135.76){\circle*{0.000001}} \put(-112.43,136.47){\circle*{0.000001}} \put(-112.43,137.18){\circle*{0.000001}} \put(-112.43,137.89){\circle*{0.000001}} \put(-112.43,138.59){\circle*{0.000001}} \put(-111.72,139.30){\circle*{0.000001}} \put(-111.72,140.01){\circle*{0.000001}} \put(-111.72,140.71){\circle*{0.000001}} \put(-111.72,141.42){\circle*{0.000001}} \put(-111.72,142.13){\circle*{0.000001}} \put(-111.72,142.84){\circle*{0.000001}} \put(-111.72,143.54){\circle*{0.000001}} \put(-111.72,144.25){\circle*{0.000001}} \put(-111.02,144.96){\circle*{0.000001}} \put(-111.02,145.66){\circle*{0.000001}} \put(-111.02,146.37){\circle*{0.000001}} \put(-111.02,147.08){\circle*{0.000001}} \put(-111.02,147.79){\circle*{0.000001}} \put(-111.02,148.49){\circle*{0.000001}} \put(-111.02,149.20){\circle*{0.000001}} \put(-111.02,149.91){\circle*{0.000001}} \put(-110.31,150.61){\circle*{0.000001}} \put(-110.31,151.32){\circle*{0.000001}} \put(-110.31,152.03){\circle*{0.000001}} \put(-110.31,152.74){\circle*{0.000001}} \put(-110.31,153.44){\circle*{0.000001}} \put(-110.31,153.44){\circle*{0.000001}} \put(-109.60,152.74){\circle*{0.000001}} \put(-108.89,152.74){\circle*{0.000001}} \put(-108.19,152.03){\circle*{0.000001}} \put(-107.48,152.03){\circle*{0.000001}} \put(-106.77,151.32){\circle*{0.000001}} \put(-106.07,150.61){\circle*{0.000001}} \put(-105.36,150.61){\circle*{0.000001}} \put(-104.65,149.91){\circle*{0.000001}} \put(-103.94,149.91){\circle*{0.000001}} \put(-103.24,149.20){\circle*{0.000001}} \put(-102.53,148.49){\circle*{0.000001}} 
\put(-101.82,148.49){\circle*{0.000001}} \put(-101.12,147.79){\circle*{0.000001}} \put(-100.41,147.79){\circle*{0.000001}} \put(-99.70,147.08){\circle*{0.000001}} \put(-98.99,147.08){\circle*{0.000001}} \put(-98.29,146.37){\circle*{0.000001}} \put(-97.58,145.66){\circle*{0.000001}} \put(-96.87,145.66){\circle*{0.000001}} \put(-96.17,144.96){\circle*{0.000001}} \put(-95.46,144.96){\circle*{0.000001}} \put(-94.75,144.25){\circle*{0.000001}} \put(-94.05,143.54){\circle*{0.000001}} \put(-93.34,143.54){\circle*{0.000001}} \put(-92.63,142.84){\circle*{0.000001}} \put(-91.92,142.84){\circle*{0.000001}} \put(-91.22,142.13){\circle*{0.000001}} \put(-90.51,141.42){\circle*{0.000001}} \put(-89.80,141.42){\circle*{0.000001}} \put(-89.10,140.71){\circle*{0.000001}} \put(-88.39,140.71){\circle*{0.000001}} \put(-87.68,140.01){\circle*{0.000001}} \put(-86.97,139.30){\circle*{0.000001}} \put(-86.27,139.30){\circle*{0.000001}} \put(-85.56,138.59){\circle*{0.000001}} \put(-84.85,138.59){\circle*{0.000001}} \put(-84.15,137.89){\circle*{0.000001}} \put(-83.44,137.89){\circle*{0.000001}} \put(-82.73,137.18){\circle*{0.000001}} \put(-82.02,136.47){\circle*{0.000001}} \put(-81.32,136.47){\circle*{0.000001}} \put(-80.61,135.76){\circle*{0.000001}} \put(-79.90,135.76){\circle*{0.000001}} \put(-79.20,135.06){\circle*{0.000001}} \put(-78.49,134.35){\circle*{0.000001}} \put(-77.78,134.35){\circle*{0.000001}} \put(-77.07,133.64){\circle*{0.000001}} \put(-76.37,133.64){\circle*{0.000001}} \put(-75.66,132.94){\circle*{0.000001}} \put(-74.95,132.23){\circle*{0.000001}} \put(-74.25,132.23){\circle*{0.000001}} \put(-73.54,131.52){\circle*{0.000001}} \put(-72.83,131.52){\circle*{0.000001}} \put(-72.12,130.81){\circle*{0.000001}} \put(-71.42,130.11){\circle*{0.000001}} \put(-70.71,130.11){\circle*{0.000001}} \put(-70.00,129.40){\circle*{0.000001}} \put(-69.30,129.40){\circle*{0.000001}} \put(-68.59,128.69){\circle*{0.000001}} \put(-67.88,127.99){\circle*{0.000001}} \put(-67.18,127.99){\circle*{0.000001}} \put(-66.47,127.28){\circle*{0.000001}} \put(-65.76,127.28){\circle*{0.000001}} \put(-65.05,126.57){\circle*{0.000001}} \put(-64.35,126.57){\circle*{0.000001}} \put(-63.64,125.87){\circle*{0.000001}} \put(-62.93,125.16){\circle*{0.000001}} \put(-62.23,125.16){\circle*{0.000001}} \put(-61.52,124.45){\circle*{0.000001}} \put(-60.81,124.45){\circle*{0.000001}} \put(-60.10,123.74){\circle*{0.000001}} \put(-59.40,123.04){\circle*{0.000001}} \put(-58.69,123.04){\circle*{0.000001}} \put(-57.98,122.33){\circle*{0.000001}} \put(-57.28,122.33){\circle*{0.000001}} \put(-56.57,121.62){\circle*{0.000001}} \put(-57.98,118.79){\circle*{0.000001}} \put(-57.98,119.50){\circle*{0.000001}} \put(-57.28,120.21){\circle*{0.000001}} \put(-57.28,120.92){\circle*{0.000001}} \put(-56.57,121.62){\circle*{0.000001}} \put(-57.98,118.79){\circle*{0.000001}} \put(-57.28,118.79){\circle*{0.000001}} \put(-56.57,118.79){\circle*{0.000001}} \put(-55.86,118.79){\circle*{0.000001}} \put(-54.45,116.67){\circle*{0.000001}} \put(-55.15,117.38){\circle*{0.000001}} \put(-55.15,118.09){\circle*{0.000001}} \put(-55.86,118.79){\circle*{0.000001}} \put(-54.45,116.67){\circle*{0.000001}} \put(-53.74,116.67){\circle*{0.000001}} \put(-53.74,116.67){\circle*{0.000001}} \put(-53.74,116.67){\circle*{0.000001}} \put(-53.03,117.38){\circle*{0.000001}} \put(-53.03,118.09){\circle*{0.000001}} \put(-52.33,118.79){\circle*{0.000001}} \put(-52.33,119.50){\circle*{0.000001}} \put(-51.62,120.21){\circle*{0.000001}} \put(-50.91,120.92){\circle*{0.000001}} 
\put(-50.91,121.62){\circle*{0.000001}} \put(-50.20,122.33){\circle*{0.000001}} \put(-50.20,123.04){\circle*{0.000001}} \put(-49.50,123.74){\circle*{0.000001}} \put(-48.79,124.45){\circle*{0.000001}} \put(-48.79,125.16){\circle*{0.000001}} \put(-48.08,125.87){\circle*{0.000001}} \put(-48.08,126.57){\circle*{0.000001}} \put(-47.38,127.28){\circle*{0.000001}} \put(-46.67,127.99){\circle*{0.000001}} \put(-46.67,128.69){\circle*{0.000001}} \put(-45.96,129.40){\circle*{0.000001}} \put(-45.96,130.11){\circle*{0.000001}} \put(-45.25,130.81){\circle*{0.000001}} \put(-44.55,131.52){\circle*{0.000001}} \put(-44.55,132.23){\circle*{0.000001}} \put(-43.84,132.94){\circle*{0.000001}} \put(-43.84,133.64){\circle*{0.000001}} \put(-43.13,134.35){\circle*{0.000001}} \put(-42.43,135.06){\circle*{0.000001}} \put(-42.43,135.76){\circle*{0.000001}} \put(-41.72,136.47){\circle*{0.000001}} \put(-41.72,137.18){\circle*{0.000001}} \put(-41.01,137.89){\circle*{0.000001}} \put(-40.31,138.59){\circle*{0.000001}} \put(-40.31,139.30){\circle*{0.000001}} \put(-39.60,140.01){\circle*{0.000001}} \put(-39.60,140.71){\circle*{0.000001}} \put(-38.89,141.42){\circle*{0.000001}} \put(-38.18,142.13){\circle*{0.000001}} \put(-38.18,142.84){\circle*{0.000001}} \put(-37.48,143.54){\circle*{0.000001}} \put(-37.48,144.25){\circle*{0.000001}} \put(-36.77,144.96){\circle*{0.000001}} \put(-36.06,145.66){\circle*{0.000001}} \put(-36.06,146.37){\circle*{0.000001}} \put(-35.36,147.08){\circle*{0.000001}} \put(-35.36,147.79){\circle*{0.000001}} \put(-34.65,148.49){\circle*{0.000001}} \put(-33.94,149.20){\circle*{0.000001}} \put(-33.94,149.91){\circle*{0.000001}} \put(-33.23,150.61){\circle*{0.000001}} \put(-33.23,151.32){\circle*{0.000001}} \put(-32.53,152.03){\circle*{0.000001}} \put(-31.82,152.74){\circle*{0.000001}} \put(-31.82,153.44){\circle*{0.000001}} \put(-31.11,154.15){\circle*{0.000001}} \put(-31.11,154.86){\circle*{0.000001}} \put(-30.41,155.56){\circle*{0.000001}} \put(-29.70,156.27){\circle*{0.000001}} \put(-29.70,156.98){\circle*{0.000001}} \put(-28.99,157.68){\circle*{0.000001}} \put(-28.99,158.39){\circle*{0.000001}} \put(-28.28,159.10){\circle*{0.000001}} \put(-27.58,159.81){\circle*{0.000001}} \put(-27.58,160.51){\circle*{0.000001}} \put(-26.87,161.22){\circle*{0.000001}} \put(-26.87,161.93){\circle*{0.000001}} \put(-26.16,162.63){\circle*{0.000001}} \put(-25.46,163.34){\circle*{0.000001}} \put(-25.46,164.05){\circle*{0.000001}} \put(-24.75,164.76){\circle*{0.000001}} \put(-24.75,165.46){\circle*{0.000001}} \put(-24.04,166.17){\circle*{0.000001}} \put(-23.33,166.88){\circle*{0.000001}} \put(-23.33,167.58){\circle*{0.000001}} \put(-22.63,168.29){\circle*{0.000001}} \put(-22.63,169.00){\circle*{0.000001}} \put(-21.92,169.71){\circle*{0.000001}} \put(-21.92,169.71){\circle*{0.000001}} \put(-21.92,170.41){\circle*{0.000001}} \put(-22.63,164.76){\circle*{0.000001}} \put(-22.63,165.46){\circle*{0.000001}} \put(-22.63,166.17){\circle*{0.000001}} \put(-22.63,166.88){\circle*{0.000001}} \put(-22.63,167.58){\circle*{0.000001}} \put(-21.92,168.29){\circle*{0.000001}} \put(-21.92,169.00){\circle*{0.000001}} \put(-21.92,169.71){\circle*{0.000001}} \put(-21.92,170.41){\circle*{0.000001}} \put(-22.63,164.76){\circle*{0.000001}} \put(-21.92,164.05){\circle*{0.000001}} \put(-21.21,164.05){\circle*{0.000001}} \put(-20.51,163.34){\circle*{0.000001}} \put(-21.92,161.93){\circle*{0.000001}} \put(-21.21,162.63){\circle*{0.000001}} \put(-20.51,163.34){\circle*{0.000001}} \put(-21.92,159.81){\circle*{0.000001}} 
\put(-21.92,160.51){\circle*{0.000001}} \put(-21.92,161.22){\circle*{0.000001}} \put(-21.92,161.93){\circle*{0.000001}} \put(-75.66,177.48){\circle*{0.000001}} \put(-74.95,177.48){\circle*{0.000001}} \put(-74.25,176.78){\circle*{0.000001}} \put(-73.54,176.78){\circle*{0.000001}} \put(-72.83,176.78){\circle*{0.000001}} \put(-72.12,176.07){\circle*{0.000001}} \put(-71.42,176.07){\circle*{0.000001}} \put(-70.71,176.07){\circle*{0.000001}} \put(-70.00,175.36){\circle*{0.000001}} \put(-69.30,175.36){\circle*{0.000001}} \put(-68.59,175.36){\circle*{0.000001}} \put(-67.88,174.66){\circle*{0.000001}} \put(-67.18,174.66){\circle*{0.000001}} \put(-66.47,174.66){\circle*{0.000001}} \put(-65.76,173.95){\circle*{0.000001}} \put(-65.05,173.95){\circle*{0.000001}} \put(-64.35,173.95){\circle*{0.000001}} \put(-63.64,173.24){\circle*{0.000001}} \put(-62.93,173.24){\circle*{0.000001}} \put(-62.23,173.24){\circle*{0.000001}} \put(-61.52,172.53){\circle*{0.000001}} \put(-60.81,172.53){\circle*{0.000001}} \put(-60.10,172.53){\circle*{0.000001}} \put(-59.40,171.83){\circle*{0.000001}} \put(-58.69,171.83){\circle*{0.000001}} \put(-57.98,171.83){\circle*{0.000001}} \put(-57.28,171.12){\circle*{0.000001}} \put(-56.57,171.12){\circle*{0.000001}} \put(-55.86,171.12){\circle*{0.000001}} \put(-55.15,170.41){\circle*{0.000001}} \put(-54.45,170.41){\circle*{0.000001}} \put(-53.74,170.41){\circle*{0.000001}} \put(-53.03,169.71){\circle*{0.000001}} \put(-52.33,169.71){\circle*{0.000001}} \put(-51.62,169.71){\circle*{0.000001}} \put(-50.91,169.00){\circle*{0.000001}} \put(-50.20,169.00){\circle*{0.000001}} \put(-49.50,169.00){\circle*{0.000001}} \put(-48.79,169.00){\circle*{0.000001}} \put(-48.08,168.29){\circle*{0.000001}} \put(-47.38,168.29){\circle*{0.000001}} \put(-46.67,168.29){\circle*{0.000001}} \put(-45.96,167.58){\circle*{0.000001}} \put(-45.25,167.58){\circle*{0.000001}} \put(-44.55,167.58){\circle*{0.000001}} \put(-43.84,166.88){\circle*{0.000001}} \put(-43.13,166.88){\circle*{0.000001}} \put(-42.43,166.88){\circle*{0.000001}} \put(-41.72,166.17){\circle*{0.000001}} \put(-41.01,166.17){\circle*{0.000001}} \put(-40.31,166.17){\circle*{0.000001}} \put(-39.60,165.46){\circle*{0.000001}} \put(-38.89,165.46){\circle*{0.000001}} \put(-38.18,165.46){\circle*{0.000001}} \put(-37.48,164.76){\circle*{0.000001}} \put(-36.77,164.76){\circle*{0.000001}} \put(-36.06,164.76){\circle*{0.000001}} \put(-35.36,164.05){\circle*{0.000001}} \put(-34.65,164.05){\circle*{0.000001}} \put(-33.94,164.05){\circle*{0.000001}} \put(-33.23,163.34){\circle*{0.000001}} \put(-32.53,163.34){\circle*{0.000001}} \put(-31.82,163.34){\circle*{0.000001}} \put(-31.11,162.63){\circle*{0.000001}} \put(-30.41,162.63){\circle*{0.000001}} \put(-29.70,162.63){\circle*{0.000001}} \put(-28.99,161.93){\circle*{0.000001}} \put(-28.28,161.93){\circle*{0.000001}} \put(-27.58,161.93){\circle*{0.000001}} \put(-26.87,161.22){\circle*{0.000001}} \put(-26.16,161.22){\circle*{0.000001}} \put(-25.46,161.22){\circle*{0.000001}} \put(-24.75,160.51){\circle*{0.000001}} \put(-24.04,160.51){\circle*{0.000001}} \put(-23.33,160.51){\circle*{0.000001}} \put(-22.63,159.81){\circle*{0.000001}} \put(-21.92,159.81){\circle*{0.000001}} \put(-127.99,207.89){\circle*{0.000001}} \put(-127.28,207.18){\circle*{0.000001}} \put(-126.57,207.18){\circle*{0.000001}} \put(-125.87,206.48){\circle*{0.000001}} \put(-125.16,206.48){\circle*{0.000001}} \put(-124.45,205.77){\circle*{0.000001}} \put(-123.74,205.77){\circle*{0.000001}} \put(-123.04,205.06){\circle*{0.000001}} 
\put(-122.33,204.35){\circle*{0.000001}} \put(-121.62,204.35){\circle*{0.000001}} \put(-120.92,203.65){\circle*{0.000001}} \put(-120.21,203.65){\circle*{0.000001}} \put(-119.50,202.94){\circle*{0.000001}} \put(-118.79,202.23){\circle*{0.000001}} \put(-118.09,202.23){\circle*{0.000001}} \put(-117.38,201.53){\circle*{0.000001}} \put(-116.67,201.53){\circle*{0.000001}} \put(-115.97,200.82){\circle*{0.000001}} \put(-115.26,200.82){\circle*{0.000001}} \put(-114.55,200.11){\circle*{0.000001}} \put(-113.84,199.40){\circle*{0.000001}} \put(-113.14,199.40){\circle*{0.000001}} \put(-112.43,198.70){\circle*{0.000001}} \put(-111.72,198.70){\circle*{0.000001}} \put(-111.02,197.99){\circle*{0.000001}} \put(-110.31,197.28){\circle*{0.000001}} \put(-109.60,197.28){\circle*{0.000001}} \put(-108.89,196.58){\circle*{0.000001}} \put(-108.19,196.58){\circle*{0.000001}} \put(-107.48,195.87){\circle*{0.000001}} \put(-106.77,195.87){\circle*{0.000001}} \put(-106.07,195.16){\circle*{0.000001}} \put(-105.36,194.45){\circle*{0.000001}} \put(-104.65,194.45){\circle*{0.000001}} \put(-103.94,193.75){\circle*{0.000001}} \put(-103.24,193.75){\circle*{0.000001}} \put(-102.53,193.04){\circle*{0.000001}} \put(-101.82,193.04){\circle*{0.000001}} \put(-101.12,192.33){\circle*{0.000001}} \put(-100.41,191.63){\circle*{0.000001}} \put(-99.70,191.63){\circle*{0.000001}} \put(-98.99,190.92){\circle*{0.000001}} \put(-98.29,190.92){\circle*{0.000001}} \put(-97.58,190.21){\circle*{0.000001}} \put(-96.87,189.50){\circle*{0.000001}} \put(-96.17,189.50){\circle*{0.000001}} \put(-95.46,188.80){\circle*{0.000001}} \put(-94.75,188.80){\circle*{0.000001}} \put(-94.05,188.09){\circle*{0.000001}} \put(-93.34,188.09){\circle*{0.000001}} \put(-92.63,187.38){\circle*{0.000001}} \put(-91.92,186.68){\circle*{0.000001}} \put(-91.22,186.68){\circle*{0.000001}} \put(-90.51,185.97){\circle*{0.000001}} \put(-89.80,185.97){\circle*{0.000001}} \put(-89.10,185.26){\circle*{0.000001}} \put(-88.39,184.55){\circle*{0.000001}} \put(-87.68,184.55){\circle*{0.000001}} \put(-86.97,183.85){\circle*{0.000001}} \put(-86.27,183.85){\circle*{0.000001}} \put(-85.56,183.14){\circle*{0.000001}} \put(-84.85,183.14){\circle*{0.000001}} \put(-84.15,182.43){\circle*{0.000001}} \put(-83.44,181.73){\circle*{0.000001}} \put(-82.73,181.73){\circle*{0.000001}} \put(-82.02,181.02){\circle*{0.000001}} \put(-81.32,181.02){\circle*{0.000001}} \put(-80.61,180.31){\circle*{0.000001}} \put(-79.90,179.61){\circle*{0.000001}} \put(-79.20,179.61){\circle*{0.000001}} \put(-78.49,178.90){\circle*{0.000001}} \put(-77.78,178.90){\circle*{0.000001}} \put(-77.07,178.19){\circle*{0.000001}} \put(-76.37,178.19){\circle*{0.000001}} \put(-75.66,177.48){\circle*{0.000001}} \put(-127.99,207.89){\circle*{0.000001}} \put(-127.28,207.18){\circle*{0.000001}} \put(-126.57,206.48){\circle*{0.000001}} \put(-128.69,206.48){\circle*{0.000001}} \put(-127.99,206.48){\circle*{0.000001}} \put(-127.28,206.48){\circle*{0.000001}} \put(-126.57,206.48){\circle*{0.000001}} \put(-128.69,206.48){\circle*{0.000001}} \put(-127.99,207.18){\circle*{0.000001}} \put(-127.28,207.18){\circle*{0.000001}} \put(-126.57,207.89){\circle*{0.000001}} \put(-126.57,207.89){\circle*{0.000001}} \put(-125.87,207.89){\circle*{0.000001}} \put(-125.16,208.60){\circle*{0.000001}} \put(-124.45,208.60){\circle*{0.000001}} \put(-124.45,208.60){\circle*{0.000001}} \put(-123.74,209.30){\circle*{0.000001}} \put(-123.74,209.30){\circle*{0.000001}} \put(-123.74,210.01){\circle*{0.000001}} \put(-123.74,210.72){\circle*{0.000001}} 
\put(-124.45,209.30){\circle*{0.000001}} \put(-124.45,210.01){\circle*{0.000001}} \put(-123.74,210.72){\circle*{0.000001}} \put(-125.87,210.72){\circle*{0.000001}} \put(-125.16,210.01){\circle*{0.000001}} \put(-124.45,209.30){\circle*{0.000001}} \put(-125.87,210.72){\circle*{0.000001}} \put(-125.16,210.01){\circle*{0.000001}} \put(-124.45,209.30){\circle*{0.000001}} \put(-84.85,161.93){\circle*{0.000001}} \put(-85.56,162.63){\circle*{0.000001}} \put(-86.27,163.34){\circle*{0.000001}} \put(-86.97,164.05){\circle*{0.000001}} \put(-86.97,164.76){\circle*{0.000001}} \put(-87.68,165.46){\circle*{0.000001}} \put(-88.39,166.17){\circle*{0.000001}} \put(-89.10,166.88){\circle*{0.000001}} \put(-89.80,167.58){\circle*{0.000001}} \put(-90.51,168.29){\circle*{0.000001}} \put(-90.51,169.00){\circle*{0.000001}} \put(-91.22,169.71){\circle*{0.000001}} \put(-91.92,170.41){\circle*{0.000001}} \put(-92.63,171.12){\circle*{0.000001}} \put(-93.34,171.83){\circle*{0.000001}} \put(-94.05,172.53){\circle*{0.000001}} \put(-94.05,173.24){\circle*{0.000001}} \put(-94.75,173.95){\circle*{0.000001}} \put(-95.46,174.66){\circle*{0.000001}} \put(-96.17,175.36){\circle*{0.000001}} \put(-96.87,176.07){\circle*{0.000001}} \put(-97.58,176.78){\circle*{0.000001}} \put(-97.58,177.48){\circle*{0.000001}} \put(-98.29,178.19){\circle*{0.000001}} \put(-98.99,178.90){\circle*{0.000001}} \put(-99.70,179.61){\circle*{0.000001}} \put(-100.41,180.31){\circle*{0.000001}} \put(-101.12,181.02){\circle*{0.000001}} \put(-101.12,181.73){\circle*{0.000001}} \put(-101.82,182.43){\circle*{0.000001}} \put(-102.53,183.14){\circle*{0.000001}} \put(-103.24,183.85){\circle*{0.000001}} \put(-103.94,184.55){\circle*{0.000001}} \put(-104.65,185.26){\circle*{0.000001}} \put(-104.65,185.97){\circle*{0.000001}} \put(-105.36,186.68){\circle*{0.000001}} \put(-106.07,187.38){\circle*{0.000001}} \put(-106.77,188.09){\circle*{0.000001}} \put(-107.48,188.80){\circle*{0.000001}} \put(-108.19,189.50){\circle*{0.000001}} \put(-108.19,190.21){\circle*{0.000001}} \put(-108.89,190.92){\circle*{0.000001}} \put(-109.60,191.63){\circle*{0.000001}} \put(-110.31,192.33){\circle*{0.000001}} \put(-111.02,193.04){\circle*{0.000001}} \put(-111.72,193.75){\circle*{0.000001}} \put(-111.72,194.45){\circle*{0.000001}} \put(-112.43,195.16){\circle*{0.000001}} \put(-113.14,195.87){\circle*{0.000001}} \put(-113.84,196.58){\circle*{0.000001}} \put(-114.55,197.28){\circle*{0.000001}} \put(-115.26,197.99){\circle*{0.000001}} \put(-115.26,198.70){\circle*{0.000001}} \put(-115.97,199.40){\circle*{0.000001}} \put(-116.67,200.11){\circle*{0.000001}} \put(-117.38,200.82){\circle*{0.000001}} \put(-118.09,201.53){\circle*{0.000001}} \put(-118.79,202.23){\circle*{0.000001}} \put(-118.79,202.94){\circle*{0.000001}} \put(-119.50,203.65){\circle*{0.000001}} \put(-120.21,204.35){\circle*{0.000001}} \put(-120.92,205.06){\circle*{0.000001}} \put(-121.62,205.77){\circle*{0.000001}} \put(-122.33,206.48){\circle*{0.000001}} \put(-122.33,207.18){\circle*{0.000001}} \put(-123.04,207.89){\circle*{0.000001}} \put(-123.74,208.60){\circle*{0.000001}} \put(-124.45,209.30){\circle*{0.000001}} \put(-85.56,160.51){\circle*{0.000001}} \put(-85.56,161.22){\circle*{0.000001}} \put(-84.85,161.93){\circle*{0.000001}} \put(-85.56,160.51){\circle*{0.000001}} \put(-85.56,161.22){\circle*{0.000001}} \put(-85.56,161.93){\circle*{0.000001}} \put(-84.85,160.51){\circle*{0.000001}} \put(-84.85,161.22){\circle*{0.000001}} \put(-85.56,161.93){\circle*{0.000001}} \put(-84.85,160.51){\circle*{0.000001}} 
\put(-332.34,-31.11){\circle*{0.000001}} \put(-332.34,-30.41){\circle*{0.000001}} \put(-331.63,-29.70){\circle*{0.000001}} \put(-331.63,-28.99){\circle*{0.000001}} \put(-331.63,-28.28){\circle*{0.000001}} \put(-331.63,-27.58){\circle*{0.000001}} \put(-340.83,-85.56){\circle*{0.000001}} \put(-340.12,-85.56){\circle*{0.000001}} \put(-339.41,-85.56){\circle*{0.000001}} \put(-338.70,-85.56){\circle*{0.000001}} \put(-338.70,-86.97){\circle*{0.000001}} \put(-338.70,-86.27){\circle*{0.000001}} \put(-338.70,-85.56){\circle*{0.000001}} \put(-338.70,-86.97){\circle*{0.000001}} \put(-338.70,-86.97){\circle*{0.000001}} \put(-338.70,-86.27){\circle*{0.000001}} \put(-338.70,-85.56){\circle*{0.000001}} \put(-338.70,-84.85){\circle*{0.000001}} \put(-338.70,-84.15){\circle*{0.000001}} \put(-338.70,-83.44){\circle*{0.000001}} \put(-338.70,-82.73){\circle*{0.000001}} \put(-338.70,-82.02){\circle*{0.000001}} \put(-338.70,-81.32){\circle*{0.000001}} \put(-338.70,-80.61){\circle*{0.000001}} \put(-338.70,-79.90){\circle*{0.000001}} \put(-338.70,-79.20){\circle*{0.000001}} \put(-338.00,-78.49){\circle*{0.000001}} \put(-338.00,-77.78){\circle*{0.000001}} \put(-338.00,-77.07){\circle*{0.000001}} \put(-338.00,-76.37){\circle*{0.000001}} \put(-338.00,-75.66){\circle*{0.000001}} \put(-338.00,-74.95){\circle*{0.000001}} \put(-338.00,-74.25){\circle*{0.000001}} \put(-338.00,-73.54){\circle*{0.000001}} \put(-338.00,-72.83){\circle*{0.000001}} \put(-338.00,-72.12){\circle*{0.000001}} \put(-338.00,-71.42){\circle*{0.000001}} \put(-338.00,-70.71){\circle*{0.000001}} \put(-338.00,-70.71){\circle*{0.000001}} \put(-337.29,-70.71){\circle*{0.000001}} \put(-336.58,-70.71){\circle*{0.000001}} \put(-335.88,-70.71){\circle*{0.000001}} \put(-335.17,-70.71){\circle*{0.000001}} \put(-334.46,-70.71){\circle*{0.000001}} \put(-333.75,-70.71){\circle*{0.000001}} \put(-333.05,-70.71){\circle*{0.000001}} \put(-332.34,-70.71){\circle*{0.000001}} \put(-331.63,-70.00){\circle*{0.000001}} \put(-330.93,-70.00){\circle*{0.000001}} \put(-330.22,-70.00){\circle*{0.000001}} \put(-329.51,-70.00){\circle*{0.000001}} \put(-328.80,-70.00){\circle*{0.000001}} \put(-328.10,-70.00){\circle*{0.000001}} \put(-327.39,-70.00){\circle*{0.000001}} \put(-326.68,-70.00){\circle*{0.000001}} \put(-325.98,-70.00){\circle*{0.000001}} \put(-325.27,-70.00){\circle*{0.000001}} \put(-324.56,-70.00){\circle*{0.000001}} \put(-323.85,-70.00){\circle*{0.000001}} \put(-323.15,-70.00){\circle*{0.000001}} \put(-322.44,-70.00){\circle*{0.000001}} \put(-321.73,-70.00){\circle*{0.000001}} \put(-321.03,-70.00){\circle*{0.000001}} \put(-320.32,-69.30){\circle*{0.000001}} \put(-319.61,-69.30){\circle*{0.000001}} \put(-318.91,-69.30){\circle*{0.000001}} \put(-318.20,-69.30){\circle*{0.000001}} \put(-317.49,-69.30){\circle*{0.000001}} \put(-316.78,-69.30){\circle*{0.000001}} \put(-316.08,-69.30){\circle*{0.000001}} \put(-315.37,-69.30){\circle*{0.000001}} \put(-314.66,-69.30){\circle*{0.000001}} \put(-301.93,-98.99){\circle*{0.000001}} \put(-301.93,-98.29){\circle*{0.000001}} \put(-302.64,-97.58){\circle*{0.000001}} \put(-302.64,-96.87){\circle*{0.000001}} \put(-303.35,-96.17){\circle*{0.000001}} \put(-303.35,-95.46){\circle*{0.000001}} \put(-304.06,-94.75){\circle*{0.000001}} \put(-304.06,-94.05){\circle*{0.000001}} \put(-304.06,-93.34){\circle*{0.000001}} \put(-304.76,-92.63){\circle*{0.000001}} \put(-304.76,-91.92){\circle*{0.000001}} \put(-305.47,-91.22){\circle*{0.000001}} \put(-305.47,-90.51){\circle*{0.000001}} \put(-306.18,-89.80){\circle*{0.000001}} 
\put(-306.18,-89.10){\circle*{0.000001}} \put(-306.18,-88.39){\circle*{0.000001}} \put(-306.88,-87.68){\circle*{0.000001}} \put(-306.88,-86.97){\circle*{0.000001}} \put(-307.59,-86.27){\circle*{0.000001}} \put(-307.59,-85.56){\circle*{0.000001}} \put(-308.30,-84.85){\circle*{0.000001}} \put(-308.30,-84.15){\circle*{0.000001}} \put(-308.30,-83.44){\circle*{0.000001}} \put(-309.01,-82.73){\circle*{0.000001}} \put(-309.01,-82.02){\circle*{0.000001}} \put(-309.71,-81.32){\circle*{0.000001}} \put(-309.71,-80.61){\circle*{0.000001}} \put(-310.42,-79.90){\circle*{0.000001}} \put(-310.42,-79.20){\circle*{0.000001}} \put(-310.42,-78.49){\circle*{0.000001}} \put(-311.13,-77.78){\circle*{0.000001}} \put(-311.13,-77.07){\circle*{0.000001}} \put(-311.83,-76.37){\circle*{0.000001}} \put(-311.83,-75.66){\circle*{0.000001}} \put(-312.54,-74.95){\circle*{0.000001}} \put(-312.54,-74.25){\circle*{0.000001}} \put(-312.54,-73.54){\circle*{0.000001}} \put(-313.25,-72.83){\circle*{0.000001}} \put(-313.25,-72.12){\circle*{0.000001}} \put(-313.96,-71.42){\circle*{0.000001}} \put(-313.96,-70.71){\circle*{0.000001}} \put(-314.66,-70.00){\circle*{0.000001}} \put(-314.66,-69.30){\circle*{0.000001}} \put(-301.93,-98.99){\circle*{0.000001}} \put(-301.23,-98.29){\circle*{0.000001}} \put(-301.23,-97.58){\circle*{0.000001}} \put(-300.52,-96.87){\circle*{0.000001}} \put(-299.81,-96.17){\circle*{0.000001}} \put(-299.11,-95.46){\circle*{0.000001}} \put(-299.11,-94.75){\circle*{0.000001}} \put(-298.40,-94.05){\circle*{0.000001}} \put(-297.69,-93.34){\circle*{0.000001}} \put(-297.69,-92.63){\circle*{0.000001}} \put(-296.98,-91.92){\circle*{0.000001}} \put(-296.28,-91.22){\circle*{0.000001}} \put(-296.28,-90.51){\circle*{0.000001}} \put(-295.57,-89.80){\circle*{0.000001}} \put(-294.86,-89.10){\circle*{0.000001}} \put(-294.16,-88.39){\circle*{0.000001}} \put(-294.16,-87.68){\circle*{0.000001}} \put(-293.45,-86.97){\circle*{0.000001}} \put(-292.74,-86.27){\circle*{0.000001}} \put(-292.74,-85.56){\circle*{0.000001}} \put(-292.04,-84.85){\circle*{0.000001}} \put(-291.33,-84.15){\circle*{0.000001}} \put(-291.33,-83.44){\circle*{0.000001}} \put(-290.62,-82.73){\circle*{0.000001}} \put(-289.91,-82.02){\circle*{0.000001}} \put(-289.21,-81.32){\circle*{0.000001}} \put(-289.21,-80.61){\circle*{0.000001}} \put(-288.50,-79.90){\circle*{0.000001}} \put(-287.79,-79.20){\circle*{0.000001}} \put(-287.79,-78.49){\circle*{0.000001}} \put(-287.09,-77.78){\circle*{0.000001}} \put(-286.38,-77.07){\circle*{0.000001}} \put(-286.38,-76.37){\circle*{0.000001}} \put(-285.67,-75.66){\circle*{0.000001}} \put(-284.96,-74.95){\circle*{0.000001}} \put(-284.26,-74.25){\circle*{0.000001}} \put(-284.26,-73.54){\circle*{0.000001}} \put(-283.55,-72.83){\circle*{0.000001}} \put(-282.84,-72.12){\circle*{0.000001}} \put(-282.84,-71.42){\circle*{0.000001}} \put(-282.14,-70.71){\circle*{0.000001}} \put(-281.43,-70.00){\circle*{0.000001}} \put(-281.43,-69.30){\circle*{0.000001}} \put(-280.72,-68.59){\circle*{0.000001}} \put(-280.01,-67.88){\circle*{0.000001}} \put(-279.31,-67.18){\circle*{0.000001}} \put(-279.31,-66.47){\circle*{0.000001}} \put(-278.60,-65.76){\circle*{0.000001}} \put(-278.60,-65.76){\circle*{0.000001}} \put(-277.89,-66.47){\circle*{0.000001}} \put(-277.19,-66.47){\circle*{0.000001}} \put(-276.48,-67.18){\circle*{0.000001}} \put(-275.77,-67.88){\circle*{0.000001}} \put(-275.06,-68.59){\circle*{0.000001}} \put(-274.36,-68.59){\circle*{0.000001}} \put(-273.65,-69.30){\circle*{0.000001}} \put(-272.94,-70.00){\circle*{0.000001}} 
\put(-272.24,-70.71){\circle*{0.000001}} \put(-271.53,-70.71){\circle*{0.000001}} \put(-270.82,-71.42){\circle*{0.000001}} \put(-270.11,-72.12){\circle*{0.000001}} \put(-269.41,-72.12){\circle*{0.000001}} \put(-268.70,-72.83){\circle*{0.000001}} \put(-267.99,-73.54){\circle*{0.000001}} \put(-267.29,-74.25){\circle*{0.000001}} \put(-266.58,-74.25){\circle*{0.000001}} \put(-265.87,-74.95){\circle*{0.000001}} \put(-265.17,-75.66){\circle*{0.000001}} \put(-264.46,-75.66){\circle*{0.000001}} \put(-263.75,-76.37){\circle*{0.000001}} \put(-263.04,-77.07){\circle*{0.000001}} \put(-262.34,-77.78){\circle*{0.000001}} \put(-261.63,-77.78){\circle*{0.000001}} \put(-260.92,-78.49){\circle*{0.000001}} \put(-260.22,-79.20){\circle*{0.000001}} \put(-259.51,-79.90){\circle*{0.000001}} \put(-258.80,-79.90){\circle*{0.000001}} \put(-258.09,-80.61){\circle*{0.000001}} \put(-257.39,-81.32){\circle*{0.000001}} \put(-256.68,-81.32){\circle*{0.000001}} \put(-255.97,-82.02){\circle*{0.000001}} \put(-255.27,-82.73){\circle*{0.000001}} \put(-254.56,-83.44){\circle*{0.000001}} \put(-253.85,-83.44){\circle*{0.000001}} \put(-253.14,-84.15){\circle*{0.000001}} \put(-252.44,-84.85){\circle*{0.000001}} \put(-251.73,-85.56){\circle*{0.000001}} \put(-251.02,-85.56){\circle*{0.000001}} \put(-250.32,-86.27){\circle*{0.000001}} \put(-249.61,-86.97){\circle*{0.000001}} \put(-248.90,-86.97){\circle*{0.000001}} \put(-248.19,-87.68){\circle*{0.000001}} \put(-247.49,-88.39){\circle*{0.000001}} \put(-246.78,-89.10){\circle*{0.000001}} \put(-246.07,-89.10){\circle*{0.000001}} \put(-245.37,-89.80){\circle*{0.000001}} \put(-244.66,-90.51){\circle*{0.000001}} \put(-243.95,-90.51){\circle*{0.000001}} \put(-243.24,-91.22){\circle*{0.000001}} \put(-242.54,-91.92){\circle*{0.000001}} \put(-241.83,-92.63){\circle*{0.000001}} \put(-241.12,-92.63){\circle*{0.000001}} \put(-240.42,-93.34){\circle*{0.000001}} \put(-239.71,-94.05){\circle*{0.000001}} \put(-239.00,-94.75){\circle*{0.000001}} \put(-238.29,-94.75){\circle*{0.000001}} \put(-237.59,-95.46){\circle*{0.000001}} \put(-237.59,-95.46){\circle*{0.000001}} \put(-236.88,-95.46){\circle*{0.000001}} \put(-236.17,-96.17){\circle*{0.000001}} \put(-235.47,-96.17){\circle*{0.000001}} \put(-234.76,-96.17){\circle*{0.000001}} \put(-234.05,-96.17){\circle*{0.000001}} \put(-233.35,-96.87){\circle*{0.000001}} \put(-232.64,-96.87){\circle*{0.000001}} \put(-231.93,-96.87){\circle*{0.000001}} \put(-231.22,-97.58){\circle*{0.000001}} \put(-230.52,-97.58){\circle*{0.000001}} \put(-229.81,-97.58){\circle*{0.000001}} \put(-229.10,-97.58){\circle*{0.000001}} \put(-228.40,-98.29){\circle*{0.000001}} \put(-227.69,-98.29){\circle*{0.000001}} \put(-226.98,-98.29){\circle*{0.000001}} \put(-226.27,-98.99){\circle*{0.000001}} \put(-225.57,-98.99){\circle*{0.000001}} \put(-224.86,-98.99){\circle*{0.000001}} \put(-224.15,-98.99){\circle*{0.000001}} \put(-223.45,-99.70){\circle*{0.000001}} \put(-222.74,-99.70){\circle*{0.000001}} \put(-222.03,-99.70){\circle*{0.000001}} \put(-221.32,-100.41){\circle*{0.000001}} \put(-220.62,-100.41){\circle*{0.000001}} \put(-219.91,-100.41){\circle*{0.000001}} \put(-219.20,-101.12){\circle*{0.000001}} \put(-218.50,-101.12){\circle*{0.000001}} \put(-217.79,-101.12){\circle*{0.000001}} \put(-217.08,-101.12){\circle*{0.000001}} \put(-216.37,-101.82){\circle*{0.000001}} \put(-215.67,-101.82){\circle*{0.000001}} \put(-214.96,-101.82){\circle*{0.000001}} \put(-214.25,-102.53){\circle*{0.000001}} \put(-213.55,-102.53){\circle*{0.000001}} \put(-212.84,-102.53){\circle*{0.000001}} 
\put(-212.13,-102.53){\circle*{0.000001}} \put(-211.42,-103.24){\circle*{0.000001}} \put(-210.72,-103.24){\circle*{0.000001}} \put(-210.01,-103.24){\circle*{0.000001}} \put(-209.30,-103.94){\circle*{0.000001}} \put(-208.60,-103.94){\circle*{0.000001}} \put(-207.89,-103.94){\circle*{0.000001}} \put(-207.18,-103.94){\circle*{0.000001}} \put(-206.48,-104.65){\circle*{0.000001}} \put(-205.77,-104.65){\circle*{0.000001}} \put(-205.06,-104.65){\circle*{0.000001}} \put(-204.35,-105.36){\circle*{0.000001}} \put(-203.65,-105.36){\circle*{0.000001}} \put(-202.94,-105.36){\circle*{0.000001}} \put(-202.23,-105.36){\circle*{0.000001}} \put(-201.53,-106.07){\circle*{0.000001}} \put(-200.82,-106.07){\circle*{0.000001}} \put(-200.11,-106.07){\circle*{0.000001}} \put(-199.40,-106.77){\circle*{0.000001}} \put(-198.70,-106.77){\circle*{0.000001}} \put(-197.99,-106.77){\circle*{0.000001}} \put(-197.28,-106.77){\circle*{0.000001}} \put(-196.58,-107.48){\circle*{0.000001}} \put(-195.87,-107.48){\circle*{0.000001}} \put(-195.16,-107.48){\circle*{0.000001}} \put(-194.45,-108.19){\circle*{0.000001}} \put(-193.75,-108.19){\circle*{0.000001}} \put(-193.04,-108.19){\circle*{0.000001}} \put(-192.33,-108.89){\circle*{0.000001}} \put(-191.63,-108.89){\circle*{0.000001}} \put(-190.92,-108.89){\circle*{0.000001}} \put(-190.21,-108.89){\circle*{0.000001}} \put(-189.50,-109.60){\circle*{0.000001}} \put(-188.80,-109.60){\circle*{0.000001}} \put(-188.09,-109.60){\circle*{0.000001}} \put(-187.38,-110.31){\circle*{0.000001}} \put(-186.68,-110.31){\circle*{0.000001}} \put(-185.97,-110.31){\circle*{0.000001}} \put(-185.26,-110.31){\circle*{0.000001}} \put(-184.55,-111.02){\circle*{0.000001}} \put(-183.85,-111.02){\circle*{0.000001}} \put(-183.14,-111.02){\circle*{0.000001}} \put(-182.43,-111.72){\circle*{0.000001}} \put(-181.73,-111.72){\circle*{0.000001}} \put(-181.02,-111.72){\circle*{0.000001}} \put(-180.31,-111.72){\circle*{0.000001}} \put(-179.61,-112.43){\circle*{0.000001}} \put(-178.90,-112.43){\circle*{0.000001}} \put(-178.90,-112.43){\circle*{0.000001}} \put(-190.92,-118.79){\circle*{0.000001}} \put(-190.21,-118.09){\circle*{0.000001}} \put(-189.50,-118.09){\circle*{0.000001}} \put(-188.80,-117.38){\circle*{0.000001}} \put(-188.09,-117.38){\circle*{0.000001}} \put(-187.38,-116.67){\circle*{0.000001}} \put(-186.68,-116.67){\circle*{0.000001}} \put(-185.97,-115.97){\circle*{0.000001}} \put(-185.26,-115.97){\circle*{0.000001}} \put(-184.55,-115.26){\circle*{0.000001}} \put(-183.85,-115.26){\circle*{0.000001}} \put(-183.14,-114.55){\circle*{0.000001}} \put(-182.43,-114.55){\circle*{0.000001}} \put(-181.73,-113.84){\circle*{0.000001}} \put(-181.02,-113.84){\circle*{0.000001}} \put(-180.31,-113.14){\circle*{0.000001}} \put(-179.61,-113.14){\circle*{0.000001}} \put(-178.90,-112.43){\circle*{0.000001}} \put(-195.87,-149.91){\circle*{0.000001}} \put(-195.87,-149.20){\circle*{0.000001}} \put(-195.87,-148.49){\circle*{0.000001}} \put(-195.87,-147.79){\circle*{0.000001}} \put(-195.16,-147.08){\circle*{0.000001}} \put(-195.16,-146.37){\circle*{0.000001}} \put(-195.16,-145.66){\circle*{0.000001}} \put(-195.16,-144.96){\circle*{0.000001}} \put(-195.16,-144.25){\circle*{0.000001}} \put(-195.16,-143.54){\circle*{0.000001}} \put(-194.45,-142.84){\circle*{0.000001}} \put(-194.45,-142.13){\circle*{0.000001}} \put(-194.45,-141.42){\circle*{0.000001}} \put(-194.45,-140.71){\circle*{0.000001}} \put(-194.45,-140.01){\circle*{0.000001}} \put(-194.45,-139.30){\circle*{0.000001}} \put(-193.75,-138.59){\circle*{0.000001}} 
\put(-193.75,-137.89){\circle*{0.000001}} \put(-193.75,-137.18){\circle*{0.000001}} \put(-193.75,-136.47){\circle*{0.000001}} \put(-193.75,-135.76){\circle*{0.000001}} \put(-193.75,-135.06){\circle*{0.000001}} \put(-193.75,-134.35){\circle*{0.000001}} \put(-193.04,-133.64){\circle*{0.000001}} \put(-193.04,-132.94){\circle*{0.000001}} \put(-193.04,-132.23){\circle*{0.000001}} \put(-193.04,-131.52){\circle*{0.000001}} \put(-193.04,-130.81){\circle*{0.000001}} \put(-193.04,-130.11){\circle*{0.000001}} \put(-192.33,-129.40){\circle*{0.000001}} \put(-192.33,-128.69){\circle*{0.000001}} \put(-192.33,-127.99){\circle*{0.000001}} \put(-192.33,-127.28){\circle*{0.000001}} \put(-192.33,-126.57){\circle*{0.000001}} \put(-192.33,-125.87){\circle*{0.000001}} \put(-191.63,-125.16){\circle*{0.000001}} \put(-191.63,-124.45){\circle*{0.000001}} \put(-191.63,-123.74){\circle*{0.000001}} \put(-191.63,-123.04){\circle*{0.000001}} \put(-191.63,-122.33){\circle*{0.000001}} \put(-191.63,-121.62){\circle*{0.000001}} \put(-190.92,-120.92){\circle*{0.000001}} \put(-190.92,-120.21){\circle*{0.000001}} \put(-190.92,-119.50){\circle*{0.000001}} \put(-190.92,-118.79){\circle*{0.000001}} \put(-195.87,-149.91){\circle*{0.000001}} \put(-195.16,-149.91){\circle*{0.000001}} \put(-194.45,-149.20){\circle*{0.000001}} \put(-193.75,-149.20){\circle*{0.000001}} \put(-193.04,-148.49){\circle*{0.000001}} \put(-192.33,-148.49){\circle*{0.000001}} \put(-192.33,-148.49){\circle*{0.000001}} \put(-191.63,-148.49){\circle*{0.000001}} \put(-193.75,-151.32){\circle*{0.000001}} \put(-193.04,-150.61){\circle*{0.000001}} \put(-193.04,-149.91){\circle*{0.000001}} \put(-192.33,-149.20){\circle*{0.000001}} \put(-191.63,-148.49){\circle*{0.000001}} \put(-192.33,-155.56){\circle*{0.000001}} \put(-192.33,-154.86){\circle*{0.000001}} \put(-193.04,-154.15){\circle*{0.000001}} \put(-193.04,-153.44){\circle*{0.000001}} \put(-193.04,-152.74){\circle*{0.000001}} \put(-193.75,-152.03){\circle*{0.000001}} \put(-193.75,-151.32){\circle*{0.000001}} \put(-192.33,-155.56){\circle*{0.000001}} \put(-191.63,-154.86){\circle*{0.000001}} \put(-190.92,-154.86){\circle*{0.000001}} \put(-190.21,-154.15){\circle*{0.000001}} \put(-189.50,-153.44){\circle*{0.000001}} \put(-243.95,-124.45){\circle*{0.000001}} \put(-243.24,-125.16){\circle*{0.000001}} \put(-242.54,-125.16){\circle*{0.000001}} \put(-241.83,-125.87){\circle*{0.000001}} \put(-241.12,-125.87){\circle*{0.000001}} \put(-240.42,-126.57){\circle*{0.000001}} \put(-239.71,-126.57){\circle*{0.000001}} \put(-239.00,-127.28){\circle*{0.000001}} \put(-238.29,-127.28){\circle*{0.000001}} \put(-237.59,-127.99){\circle*{0.000001}} \put(-236.88,-127.99){\circle*{0.000001}} \put(-236.17,-128.69){\circle*{0.000001}} \put(-235.47,-128.69){\circle*{0.000001}} \put(-234.76,-129.40){\circle*{0.000001}} \put(-234.05,-129.40){\circle*{0.000001}} \put(-233.35,-130.11){\circle*{0.000001}} \put(-232.64,-130.81){\circle*{0.000001}} \put(-231.93,-130.81){\circle*{0.000001}} \put(-231.22,-131.52){\circle*{0.000001}} \put(-230.52,-131.52){\circle*{0.000001}} \put(-229.81,-132.23){\circle*{0.000001}} \put(-229.10,-132.23){\circle*{0.000001}} \put(-228.40,-132.94){\circle*{0.000001}} \put(-227.69,-132.94){\circle*{0.000001}} \put(-226.98,-133.64){\circle*{0.000001}} \put(-226.27,-133.64){\circle*{0.000001}} \put(-225.57,-134.35){\circle*{0.000001}} \put(-224.86,-134.35){\circle*{0.000001}} \put(-224.15,-135.06){\circle*{0.000001}} \put(-223.45,-135.06){\circle*{0.000001}} \put(-222.74,-135.76){\circle*{0.000001}} 
\put(-222.03,-136.47){\circle*{0.000001}} \put(-221.32,-136.47){\circle*{0.000001}} \put(-220.62,-137.18){\circle*{0.000001}} \put(-219.91,-137.18){\circle*{0.000001}} \put(-219.20,-137.89){\circle*{0.000001}} \put(-218.50,-137.89){\circle*{0.000001}} \put(-217.79,-138.59){\circle*{0.000001}} \put(-217.08,-138.59){\circle*{0.000001}} \put(-216.37,-139.30){\circle*{0.000001}} \put(-215.67,-139.30){\circle*{0.000001}} \put(-214.96,-140.01){\circle*{0.000001}} \put(-214.25,-140.01){\circle*{0.000001}} \put(-213.55,-140.71){\circle*{0.000001}} \put(-212.84,-140.71){\circle*{0.000001}} \put(-212.13,-141.42){\circle*{0.000001}} \put(-211.42,-141.42){\circle*{0.000001}} \put(-210.72,-142.13){\circle*{0.000001}} \put(-210.01,-142.84){\circle*{0.000001}} \put(-209.30,-142.84){\circle*{0.000001}} \put(-208.60,-143.54){\circle*{0.000001}} \put(-207.89,-143.54){\circle*{0.000001}} \put(-207.18,-144.25){\circle*{0.000001}} \put(-206.48,-144.25){\circle*{0.000001}} \put(-205.77,-144.96){\circle*{0.000001}} \put(-205.06,-144.96){\circle*{0.000001}} \put(-204.35,-145.66){\circle*{0.000001}} \put(-203.65,-145.66){\circle*{0.000001}} \put(-202.94,-146.37){\circle*{0.000001}} \put(-202.23,-146.37){\circle*{0.000001}} \put(-201.53,-147.08){\circle*{0.000001}} \put(-200.82,-147.08){\circle*{0.000001}} \put(-200.11,-147.79){\circle*{0.000001}} \put(-199.40,-148.49){\circle*{0.000001}} \put(-198.70,-148.49){\circle*{0.000001}} \put(-197.99,-149.20){\circle*{0.000001}} \put(-197.28,-149.20){\circle*{0.000001}} \put(-196.58,-149.91){\circle*{0.000001}} \put(-195.87,-149.91){\circle*{0.000001}} \put(-195.16,-150.61){\circle*{0.000001}} \put(-194.45,-150.61){\circle*{0.000001}} \put(-193.75,-151.32){\circle*{0.000001}} \put(-193.04,-151.32){\circle*{0.000001}} \put(-192.33,-152.03){\circle*{0.000001}} \put(-191.63,-152.03){\circle*{0.000001}} \put(-190.92,-152.74){\circle*{0.000001}} \put(-190.21,-152.74){\circle*{0.000001}} \put(-189.50,-153.44){\circle*{0.000001}} \put(-243.95,-124.45){\circle*{0.000001}} \put(-243.24,-124.45){\circle*{0.000001}} \put(-242.54,-124.45){\circle*{0.000001}} \put(-241.83,-123.74){\circle*{0.000001}} \put(-241.12,-123.74){\circle*{0.000001}} \put(-240.42,-123.74){\circle*{0.000001}} \put(-240.42,-123.74){\circle*{0.000001}} \put(-239.71,-124.45){\circle*{0.000001}} \put(-239.00,-124.45){\circle*{0.000001}} \put(-238.29,-125.16){\circle*{0.000001}} \put(-237.59,-125.87){\circle*{0.000001}} \put(-237.59,-125.87){\circle*{0.000001}} \put(-237.59,-125.16){\circle*{0.000001}} \put(-237.59,-124.45){\circle*{0.000001}} \put(-237.59,-124.45){\circle*{0.000001}} \put(-236.88,-124.45){\circle*{0.000001}} \put(-238.29,-128.69){\circle*{0.000001}} \put(-238.29,-127.99){\circle*{0.000001}} \put(-237.59,-127.28){\circle*{0.000001}} \put(-237.59,-126.57){\circle*{0.000001}} \put(-237.59,-125.87){\circle*{0.000001}} \put(-236.88,-125.16){\circle*{0.000001}} \put(-236.88,-124.45){\circle*{0.000001}} \put(-238.29,-128.69){\circle*{0.000001}} \put(-237.59,-128.69){\circle*{0.000001}} \put(-236.88,-128.69){\circle*{0.000001}} \put(-236.17,-128.69){\circle*{0.000001}} \put(-235.47,-128.69){\circle*{0.000001}} \put(-248.90,-118.79){\circle*{0.000001}} \put(-248.19,-119.50){\circle*{0.000001}} \put(-247.49,-119.50){\circle*{0.000001}} \put(-246.78,-120.21){\circle*{0.000001}} \put(-246.07,-120.92){\circle*{0.000001}} \put(-245.37,-121.62){\circle*{0.000001}} \put(-244.66,-121.62){\circle*{0.000001}} \put(-243.95,-122.33){\circle*{0.000001}} \put(-243.24,-123.04){\circle*{0.000001}} 
\put(-242.54,-123.74){\circle*{0.000001}} \put(-241.83,-123.74){\circle*{0.000001}} \put(-241.12,-124.45){\circle*{0.000001}} \put(-240.42,-125.16){\circle*{0.000001}} \put(-239.71,-125.87){\circle*{0.000001}} \put(-239.00,-125.87){\circle*{0.000001}} \put(-238.29,-126.57){\circle*{0.000001}} \put(-237.59,-127.28){\circle*{0.000001}} \put(-236.88,-127.99){\circle*{0.000001}} \put(-236.17,-127.99){\circle*{0.000001}} \put(-235.47,-128.69){\circle*{0.000001}} \put(-248.90,-118.79){\circle*{0.000001}} \put(-248.90,-118.09){\circle*{0.000001}} \put(-248.90,-117.38){\circle*{0.000001}} \put(-248.90,-116.67){\circle*{0.000001}} \put(-249.61,-115.97){\circle*{0.000001}} \put(-249.61,-115.26){\circle*{0.000001}} \put(-249.61,-114.55){\circle*{0.000001}} \put(-249.61,-113.84){\circle*{0.000001}} \put(-249.61,-113.14){\circle*{0.000001}} \put(-249.61,-112.43){\circle*{0.000001}} \put(-249.61,-111.72){\circle*{0.000001}} \put(-250.32,-111.02){\circle*{0.000001}} \put(-250.32,-110.31){\circle*{0.000001}} \put(-250.32,-109.60){\circle*{0.000001}} \put(-250.32,-108.89){\circle*{0.000001}} \put(-250.32,-108.19){\circle*{0.000001}} \put(-250.32,-107.48){\circle*{0.000001}} \put(-251.02,-106.77){\circle*{0.000001}} \put(-251.02,-106.07){\circle*{0.000001}} \put(-251.02,-105.36){\circle*{0.000001}} \put(-251.02,-104.65){\circle*{0.000001}} \put(-251.02,-103.94){\circle*{0.000001}} \put(-251.02,-103.24){\circle*{0.000001}} \put(-251.02,-102.53){\circle*{0.000001}} \put(-251.73,-101.82){\circle*{0.000001}} \put(-251.73,-101.12){\circle*{0.000001}} \put(-251.73,-100.41){\circle*{0.000001}} \put(-251.73,-99.70){\circle*{0.000001}} \put(-251.73,-98.99){\circle*{0.000001}} \put(-251.73,-98.29){\circle*{0.000001}} \put(-251.73,-97.58){\circle*{0.000001}} \put(-252.44,-96.87){\circle*{0.000001}} \put(-252.44,-96.17){\circle*{0.000001}} \put(-252.44,-95.46){\circle*{0.000001}} \put(-252.44,-94.75){\circle*{0.000001}} \put(-252.44,-94.05){\circle*{0.000001}} \put(-252.44,-93.34){\circle*{0.000001}} \put(-253.14,-92.63){\circle*{0.000001}} \put(-253.14,-91.92){\circle*{0.000001}} \put(-253.14,-91.22){\circle*{0.000001}} \put(-253.14,-90.51){\circle*{0.000001}} \put(-253.14,-89.80){\circle*{0.000001}} \put(-253.14,-89.10){\circle*{0.000001}} \put(-253.14,-88.39){\circle*{0.000001}} \put(-253.85,-87.68){\circle*{0.000001}} \put(-253.85,-86.97){\circle*{0.000001}} \put(-253.85,-86.27){\circle*{0.000001}} \put(-253.85,-85.56){\circle*{0.000001}} \put(-296.98,-70.71){\circle*{0.000001}} \put(-296.28,-70.71){\circle*{0.000001}} \put(-295.57,-71.42){\circle*{0.000001}} \put(-294.86,-71.42){\circle*{0.000001}} \put(-294.16,-71.42){\circle*{0.000001}} \put(-293.45,-72.12){\circle*{0.000001}} \put(-292.74,-72.12){\circle*{0.000001}} \put(-292.04,-72.12){\circle*{0.000001}} \put(-291.33,-72.83){\circle*{0.000001}} \put(-290.62,-72.83){\circle*{0.000001}} \put(-289.91,-72.83){\circle*{0.000001}} \put(-289.21,-73.54){\circle*{0.000001}} \put(-288.50,-73.54){\circle*{0.000001}} \put(-287.79,-73.54){\circle*{0.000001}} \put(-287.09,-74.25){\circle*{0.000001}} \put(-286.38,-74.25){\circle*{0.000001}} \put(-285.67,-74.95){\circle*{0.000001}} \put(-284.96,-74.95){\circle*{0.000001}} \put(-284.26,-74.95){\circle*{0.000001}} \put(-283.55,-75.66){\circle*{0.000001}} \put(-282.84,-75.66){\circle*{0.000001}} \put(-282.14,-75.66){\circle*{0.000001}} \put(-281.43,-76.37){\circle*{0.000001}} \put(-280.72,-76.37){\circle*{0.000001}} \put(-280.01,-76.37){\circle*{0.000001}} \put(-279.31,-77.07){\circle*{0.000001}} 
\put(-278.60,-77.07){\circle*{0.000001}} \put(-277.89,-77.07){\circle*{0.000001}} \put(-277.19,-77.78){\circle*{0.000001}} \put(-276.48,-77.78){\circle*{0.000001}} \put(-275.77,-77.78){\circle*{0.000001}} \put(-275.06,-78.49){\circle*{0.000001}} \put(-274.36,-78.49){\circle*{0.000001}} \put(-273.65,-78.49){\circle*{0.000001}} \put(-272.94,-79.20){\circle*{0.000001}} \put(-272.24,-79.20){\circle*{0.000001}} \put(-271.53,-79.20){\circle*{0.000001}} \put(-270.82,-79.90){\circle*{0.000001}} \put(-270.11,-79.90){\circle*{0.000001}} \put(-269.41,-79.90){\circle*{0.000001}} \put(-268.70,-80.61){\circle*{0.000001}} \put(-267.99,-80.61){\circle*{0.000001}} \put(-267.29,-80.61){\circle*{0.000001}} \put(-266.58,-81.32){\circle*{0.000001}} \put(-265.87,-81.32){\circle*{0.000001}} \put(-265.17,-81.32){\circle*{0.000001}} \put(-264.46,-82.02){\circle*{0.000001}} \put(-263.75,-82.02){\circle*{0.000001}} \put(-263.04,-82.73){\circle*{0.000001}} \put(-262.34,-82.73){\circle*{0.000001}} \put(-261.63,-82.73){\circle*{0.000001}} \put(-260.92,-83.44){\circle*{0.000001}} \put(-260.22,-83.44){\circle*{0.000001}} \put(-259.51,-83.44){\circle*{0.000001}} \put(-258.80,-84.15){\circle*{0.000001}} \put(-258.09,-84.15){\circle*{0.000001}} \put(-257.39,-84.15){\circle*{0.000001}} \put(-256.68,-84.85){\circle*{0.000001}} \put(-255.97,-84.85){\circle*{0.000001}} \put(-255.27,-84.85){\circle*{0.000001}} \put(-254.56,-85.56){\circle*{0.000001}} \put(-253.85,-85.56){\circle*{0.000001}} \put(-354.97,-80.61){\circle*{0.000001}} \put(-354.26,-80.61){\circle*{0.000001}} \put(-353.55,-80.61){\circle*{0.000001}} \put(-352.85,-79.90){\circle*{0.000001}} \put(-352.14,-79.90){\circle*{0.000001}} \put(-351.43,-79.90){\circle*{0.000001}} \put(-350.72,-79.90){\circle*{0.000001}} \put(-350.02,-79.90){\circle*{0.000001}} \put(-349.31,-79.90){\circle*{0.000001}} \put(-348.60,-79.20){\circle*{0.000001}} \put(-347.90,-79.20){\circle*{0.000001}} \put(-347.19,-79.20){\circle*{0.000001}} \put(-346.48,-79.20){\circle*{0.000001}} \put(-345.78,-79.20){\circle*{0.000001}} \put(-345.07,-79.20){\circle*{0.000001}} \put(-344.36,-78.49){\circle*{0.000001}} \put(-343.65,-78.49){\circle*{0.000001}} \put(-342.95,-78.49){\circle*{0.000001}} \put(-342.24,-78.49){\circle*{0.000001}} \put(-341.53,-78.49){\circle*{0.000001}} \put(-340.83,-78.49){\circle*{0.000001}} \put(-340.12,-77.78){\circle*{0.000001}} \put(-339.41,-77.78){\circle*{0.000001}} \put(-338.70,-77.78){\circle*{0.000001}} \put(-338.00,-77.78){\circle*{0.000001}} \put(-337.29,-77.78){\circle*{0.000001}} \put(-336.58,-77.78){\circle*{0.000001}} \put(-335.88,-77.07){\circle*{0.000001}} \put(-335.17,-77.07){\circle*{0.000001}} \put(-334.46,-77.07){\circle*{0.000001}} \put(-333.75,-77.07){\circle*{0.000001}} \put(-333.05,-77.07){\circle*{0.000001}} \put(-332.34,-77.07){\circle*{0.000001}} \put(-331.63,-76.37){\circle*{0.000001}} \put(-330.93,-76.37){\circle*{0.000001}} \put(-330.22,-76.37){\circle*{0.000001}} \put(-329.51,-76.37){\circle*{0.000001}} \put(-328.80,-76.37){\circle*{0.000001}} \put(-328.10,-76.37){\circle*{0.000001}} \put(-327.39,-75.66){\circle*{0.000001}} \put(-326.68,-75.66){\circle*{0.000001}} \put(-325.98,-75.66){\circle*{0.000001}} \put(-325.27,-75.66){\circle*{0.000001}} \put(-324.56,-75.66){\circle*{0.000001}} \put(-323.85,-74.95){\circle*{0.000001}} \put(-323.15,-74.95){\circle*{0.000001}} \put(-322.44,-74.95){\circle*{0.000001}} \put(-321.73,-74.95){\circle*{0.000001}} \put(-321.03,-74.95){\circle*{0.000001}} \put(-320.32,-74.95){\circle*{0.000001}} 
\put(-319.61,-74.25){\circle*{0.000001}} \put(-318.91,-74.25){\circle*{0.000001}} \put(-318.20,-74.25){\circle*{0.000001}} \put(-317.49,-74.25){\circle*{0.000001}} \put(-316.78,-74.25){\circle*{0.000001}} \put(-316.08,-74.25){\circle*{0.000001}} \put(-315.37,-73.54){\circle*{0.000001}} \put(-314.66,-73.54){\circle*{0.000001}} \put(-313.96,-73.54){\circle*{0.000001}} \put(-313.25,-73.54){\circle*{0.000001}} \put(-312.54,-73.54){\circle*{0.000001}} \put(-311.83,-73.54){\circle*{0.000001}} \put(-311.13,-72.83){\circle*{0.000001}} \put(-310.42,-72.83){\circle*{0.000001}} \put(-309.71,-72.83){\circle*{0.000001}} \put(-309.01,-72.83){\circle*{0.000001}} \put(-308.30,-72.83){\circle*{0.000001}} \put(-307.59,-72.83){\circle*{0.000001}} \put(-306.88,-72.12){\circle*{0.000001}} \put(-306.18,-72.12){\circle*{0.000001}} \put(-305.47,-72.12){\circle*{0.000001}} \put(-304.76,-72.12){\circle*{0.000001}} \put(-304.06,-72.12){\circle*{0.000001}} \put(-303.35,-72.12){\circle*{0.000001}} \put(-302.64,-71.42){\circle*{0.000001}} \put(-301.93,-71.42){\circle*{0.000001}} \put(-301.23,-71.42){\circle*{0.000001}} \put(-300.52,-71.42){\circle*{0.000001}} \put(-299.81,-71.42){\circle*{0.000001}} \put(-299.11,-71.42){\circle*{0.000001}} \put(-298.40,-70.71){\circle*{0.000001}} \put(-297.69,-70.71){\circle*{0.000001}} \put(-296.98,-70.71){\circle*{0.000001}} \put(-354.97,-80.61){\circle*{0.000001}} \put(-354.97,-79.90){\circle*{0.000001}} \put(-355.67,-79.20){\circle*{0.000001}} \put(-355.67,-78.49){\circle*{0.000001}} \put(-355.67,-80.61){\circle*{0.000001}} \put(-355.67,-79.90){\circle*{0.000001}} \put(-355.67,-79.20){\circle*{0.000001}} \put(-355.67,-78.49){\circle*{0.000001}} \put(-355.67,-80.61){\circle*{0.000001}} \put(-354.97,-80.61){\circle*{0.000001}} \put(-354.26,-80.61){\circle*{0.000001}} \put(-353.55,-80.61){\circle*{0.000001}} \put(-352.85,-82.73){\circle*{0.000001}} \put(-352.85,-82.02){\circle*{0.000001}} \put(-353.55,-81.32){\circle*{0.000001}} \put(-353.55,-80.61){\circle*{0.000001}} \put(-352.85,-82.73){\circle*{0.000001}} \put(-352.85,-82.02){\circle*{0.000001}} \put(-352.85,-81.32){\circle*{0.000001}} \put(-352.85,-80.61){\circle*{0.000001}} \put(-353.55,-81.32){\circle*{0.000001}} \put(-352.85,-80.61){\circle*{0.000001}} \put(-353.55,-83.44){\circle*{0.000001}} \put(-353.55,-82.73){\circle*{0.000001}} \put(-353.55,-82.02){\circle*{0.000001}} \put(-353.55,-81.32){\circle*{0.000001}} \put(-353.55,-83.44){\circle*{0.000001}} \put(-352.85,-83.44){\circle*{0.000001}} \put(-352.14,-83.44){\circle*{0.000001}} \put(-351.43,-83.44){\circle*{0.000001}} \put(-356.38,-87.68){\circle*{0.000001}} \put(-355.67,-86.97){\circle*{0.000001}} \put(-354.97,-86.27){\circle*{0.000001}} \put(-354.26,-85.56){\circle*{0.000001}} \put(-353.55,-85.56){\circle*{0.000001}} \put(-352.85,-84.85){\circle*{0.000001}} \put(-352.14,-84.15){\circle*{0.000001}} \put(-351.43,-83.44){\circle*{0.000001}} \put(-356.38,-87.68){\circle*{0.000001}} \put(-356.38,-87.68){\circle*{0.000001}} \put(-355.67,-86.97){\circle*{0.000001}} \put(-354.97,-86.27){\circle*{0.000001}} \put(-354.26,-85.56){\circle*{0.000001}} \put(-353.55,-84.85){\circle*{0.000001}} \put(-352.85,-84.15){\circle*{0.000001}} \put(-352.85,-83.44){\circle*{0.000001}} \put(-352.14,-82.73){\circle*{0.000001}} \put(-351.43,-82.02){\circle*{0.000001}} \put(-350.72,-81.32){\circle*{0.000001}} \put(-350.02,-80.61){\circle*{0.000001}} \put(-349.31,-79.90){\circle*{0.000001}} \put(-348.60,-79.20){\circle*{0.000001}} \put(-347.90,-78.49){\circle*{0.000001}} 
\put(-347.19,-77.78){\circle*{0.000001}} \put(-346.48,-77.07){\circle*{0.000001}} \put(-346.48,-76.37){\circle*{0.000001}} \put(-345.78,-75.66){\circle*{0.000001}} \put(-345.07,-74.95){\circle*{0.000001}} \put(-344.36,-74.25){\circle*{0.000001}} \put(-343.65,-73.54){\circle*{0.000001}} \put(-342.95,-72.83){\circle*{0.000001}} \put(-342.24,-72.12){\circle*{0.000001}} \put(-341.53,-71.42){\circle*{0.000001}} \put(-340.83,-70.71){\circle*{0.000001}} \put(-340.12,-70.00){\circle*{0.000001}} \put(-339.41,-69.30){\circle*{0.000001}} \put(-339.41,-68.59){\circle*{0.000001}} \put(-338.70,-67.88){\circle*{0.000001}} \put(-338.00,-67.18){\circle*{0.000001}} \put(-337.29,-66.47){\circle*{0.000001}} \put(-336.58,-65.76){\circle*{0.000001}} \put(-335.88,-65.05){\circle*{0.000001}} \put(-335.17,-64.35){\circle*{0.000001}} \put(-334.46,-63.64){\circle*{0.000001}} \put(-333.75,-62.93){\circle*{0.000001}} \put(-333.05,-62.23){\circle*{0.000001}} \put(-332.34,-61.52){\circle*{0.000001}} \put(-332.34,-60.81){\circle*{0.000001}} \put(-331.63,-60.10){\circle*{0.000001}} \put(-330.93,-59.40){\circle*{0.000001}} \put(-330.22,-58.69){\circle*{0.000001}} \put(-329.51,-57.98){\circle*{0.000001}} \put(-328.80,-57.28){\circle*{0.000001}} \put(-328.10,-56.57){\circle*{0.000001}} \put(-327.39,-55.86){\circle*{0.000001}} \put(-326.68,-55.15){\circle*{0.000001}} \put(-325.98,-54.45){\circle*{0.000001}} \put(-325.98,-53.74){\circle*{0.000001}} \put(-325.27,-53.03){\circle*{0.000001}} \put(-324.56,-52.33){\circle*{0.000001}} \put(-323.85,-51.62){\circle*{0.000001}} \put(-323.15,-50.91){\circle*{0.000001}} \put(-322.44,-50.20){\circle*{0.000001}} \put(-321.73,-49.50){\circle*{0.000001}} \put(-321.03,-48.79){\circle*{0.000001}} \put(-320.32,-48.08){\circle*{0.000001}} \put(-319.61,-47.38){\circle*{0.000001}} \put(-318.91,-46.67){\circle*{0.000001}} \put(-318.91,-45.96){\circle*{0.000001}} \put(-318.20,-45.25){\circle*{0.000001}} \put(-317.49,-44.55){\circle*{0.000001}} \put(-316.78,-43.84){\circle*{0.000001}} \put(-316.08,-43.13){\circle*{0.000001}} \put(-315.37,-42.43){\circle*{0.000001}} \put(-315.37,-42.43){\circle*{0.000001}} \put(-314.66,-41.72){\circle*{0.000001}} \put(-314.66,-41.72){\circle*{0.000001}} \put(-314.66,-41.72){\circle*{0.000001}} \put(-313.96,-41.72){\circle*{0.000001}} \put(-313.25,-41.01){\circle*{0.000001}} \put(-312.54,-41.01){\circle*{0.000001}} \put(-311.83,-41.01){\circle*{0.000001}} \put(-311.13,-40.31){\circle*{0.000001}} \put(-310.42,-40.31){\circle*{0.000001}} \put(-309.71,-40.31){\circle*{0.000001}} \put(-309.01,-39.60){\circle*{0.000001}} \put(-308.30,-39.60){\circle*{0.000001}} \put(-307.59,-39.60){\circle*{0.000001}} \put(-306.88,-38.89){\circle*{0.000001}} \put(-306.18,-38.89){\circle*{0.000001}} \put(-305.47,-38.89){\circle*{0.000001}} \put(-304.76,-38.89){\circle*{0.000001}} \put(-304.06,-38.18){\circle*{0.000001}} \put(-303.35,-38.18){\circle*{0.000001}} \put(-302.64,-38.18){\circle*{0.000001}} \put(-301.93,-37.48){\circle*{0.000001}} \put(-301.23,-37.48){\circle*{0.000001}} \put(-300.52,-37.48){\circle*{0.000001}} \put(-299.81,-36.77){\circle*{0.000001}} \put(-299.11,-36.77){\circle*{0.000001}} \put(-298.40,-36.77){\circle*{0.000001}} \put(-297.69,-36.06){\circle*{0.000001}} \put(-296.98,-36.06){\circle*{0.000001}} \put(-296.28,-36.06){\circle*{0.000001}} \put(-295.57,-35.36){\circle*{0.000001}} \put(-294.86,-35.36){\circle*{0.000001}} \put(-294.86,-35.36){\circle*{0.000001}} \put(-302.64,-28.99){\circle*{0.000001}} \put(-301.93,-29.70){\circle*{0.000001}} 
\put(-301.23,-30.41){\circle*{0.000001}} \put(-300.52,-30.41){\circle*{0.000001}} \put(-299.81,-31.11){\circle*{0.000001}} \put(-299.11,-31.82){\circle*{0.000001}} \put(-298.40,-32.53){\circle*{0.000001}} \put(-297.69,-33.23){\circle*{0.000001}} \put(-296.98,-33.94){\circle*{0.000001}} \put(-296.28,-33.94){\circle*{0.000001}} \put(-295.57,-34.65){\circle*{0.000001}} \put(-294.86,-35.36){\circle*{0.000001}} \put(-322.44,-10.61){\circle*{0.000001}} \put(-321.73,-11.31){\circle*{0.000001}} \put(-321.03,-12.02){\circle*{0.000001}} \put(-320.32,-12.73){\circle*{0.000001}} \put(-319.61,-13.44){\circle*{0.000001}} \put(-318.91,-14.14){\circle*{0.000001}} \put(-318.20,-14.85){\circle*{0.000001}} \put(-317.49,-14.85){\circle*{0.000001}} \put(-316.78,-15.56){\circle*{0.000001}} \put(-316.08,-16.26){\circle*{0.000001}} \put(-315.37,-16.97){\circle*{0.000001}} \put(-314.66,-17.68){\circle*{0.000001}} \put(-313.96,-18.38){\circle*{0.000001}} \put(-313.25,-19.09){\circle*{0.000001}} \put(-312.54,-19.80){\circle*{0.000001}} \put(-311.83,-20.51){\circle*{0.000001}} \put(-311.13,-21.21){\circle*{0.000001}} \put(-310.42,-21.92){\circle*{0.000001}} \put(-309.71,-22.63){\circle*{0.000001}} \put(-309.01,-23.33){\circle*{0.000001}} \put(-308.30,-24.04){\circle*{0.000001}} \put(-307.59,-24.04){\circle*{0.000001}} \put(-306.88,-24.75){\circle*{0.000001}} \put(-306.18,-25.46){\circle*{0.000001}} \put(-305.47,-26.16){\circle*{0.000001}} \put(-304.76,-26.87){\circle*{0.000001}} \put(-304.06,-27.58){\circle*{0.000001}} \put(-303.35,-28.28){\circle*{0.000001}} \put(-302.64,-28.99){\circle*{0.000001}} \put(-322.44,-10.61){\circle*{0.000001}} \put(-321.73,-10.61){\circle*{0.000001}} \put(-321.03,-11.31){\circle*{0.000001}} \put(-320.32,-11.31){\circle*{0.000001}} \put(-319.61,-11.31){\circle*{0.000001}} \put(-318.91,-12.02){\circle*{0.000001}} \put(-318.20,-12.02){\circle*{0.000001}} \put(-317.49,-12.02){\circle*{0.000001}} \put(-316.78,-12.73){\circle*{0.000001}} \put(-316.08,-12.73){\circle*{0.000001}} \put(-315.37,-12.73){\circle*{0.000001}} \put(-314.66,-13.44){\circle*{0.000001}} \put(-313.96,-13.44){\circle*{0.000001}} \put(-313.25,-14.14){\circle*{0.000001}} \put(-312.54,-14.14){\circle*{0.000001}} \put(-311.83,-14.14){\circle*{0.000001}} \put(-311.13,-14.85){\circle*{0.000001}} \put(-310.42,-14.85){\circle*{0.000001}} \put(-309.71,-14.85){\circle*{0.000001}} \put(-309.01,-15.56){\circle*{0.000001}} \put(-308.30,-15.56){\circle*{0.000001}} \put(-307.59,-15.56){\circle*{0.000001}} \put(-306.88,-16.26){\circle*{0.000001}} \put(-306.18,-16.26){\circle*{0.000001}} \put(-305.47,-16.26){\circle*{0.000001}} \put(-304.76,-16.97){\circle*{0.000001}} \put(-304.06,-16.97){\circle*{0.000001}} \put(-303.35,-16.97){\circle*{0.000001}} \put(-302.64,-17.68){\circle*{0.000001}} \put(-301.93,-17.68){\circle*{0.000001}} \put(-301.23,-17.68){\circle*{0.000001}} \put(-300.52,-18.38){\circle*{0.000001}} \put(-299.81,-18.38){\circle*{0.000001}} \put(-299.11,-18.38){\circle*{0.000001}} \put(-298.40,-19.09){\circle*{0.000001}} \put(-297.69,-19.09){\circle*{0.000001}} \put(-296.98,-19.09){\circle*{0.000001}} \put(-296.28,-19.80){\circle*{0.000001}} \put(-295.57,-19.80){\circle*{0.000001}} \put(-294.86,-20.51){\circle*{0.000001}} \put(-294.16,-20.51){\circle*{0.000001}} \put(-293.45,-20.51){\circle*{0.000001}} \put(-292.74,-21.21){\circle*{0.000001}} \put(-292.04,-21.21){\circle*{0.000001}} \put(-291.33,-21.21){\circle*{0.000001}} \put(-290.62,-21.92){\circle*{0.000001}} \put(-289.91,-21.92){\circle*{0.000001}} 
\put(-289.21,-21.92){\circle*{0.000001}} \put(-288.50,-22.63){\circle*{0.000001}} \put(-287.79,-22.63){\circle*{0.000001}} \put(-288.50,-23.33){\circle*{0.000001}} \put(-287.79,-22.63){\circle*{0.000001}} \put(-288.50,-23.33){\circle*{0.000001}} \put(-287.79,-22.63){\circle*{0.000001}} \put(-287.09,-21.92){\circle*{0.000001}} \put(-286.38,-21.92){\circle*{0.000001}} \put(-285.67,-21.21){\circle*{0.000001}} \put(-284.96,-20.51){\circle*{0.000001}} \put(-284.26,-19.80){\circle*{0.000001}} \put(-283.55,-19.09){\circle*{0.000001}} \put(-282.84,-18.38){\circle*{0.000001}} \put(-282.14,-18.38){\circle*{0.000001}} \put(-281.43,-17.68){\circle*{0.000001}} \put(-280.72,-16.97){\circle*{0.000001}} \put(-280.01,-16.26){\circle*{0.000001}} \put(-279.31,-15.56){\circle*{0.000001}} \put(-278.60,-15.56){\circle*{0.000001}} \put(-277.89,-14.85){\circle*{0.000001}} \put(-277.19,-14.14){\circle*{0.000001}} \put(-276.48,-13.44){\circle*{0.000001}} \put(-275.77,-12.73){\circle*{0.000001}} \put(-275.06,-12.02){\circle*{0.000001}} \put(-274.36,-12.02){\circle*{0.000001}} \put(-273.65,-11.31){\circle*{0.000001}} \put(-272.94,-10.61){\circle*{0.000001}} \put(-272.24,-9.90){\circle*{0.000001}} \put(-271.53,-9.19){\circle*{0.000001}} \put(-270.82,-9.19){\circle*{0.000001}} \put(-270.11,-8.49){\circle*{0.000001}} \put(-269.41,-7.78){\circle*{0.000001}} \put(-268.70,-7.07){\circle*{0.000001}} \put(-267.99,-6.36){\circle*{0.000001}} \put(-267.29,-5.66){\circle*{0.000001}} \put(-266.58,-5.66){\circle*{0.000001}} \put(-265.87,-4.95){\circle*{0.000001}} \put(-265.17,-4.24){\circle*{0.000001}} \put(-264.46,-3.54){\circle*{0.000001}} \put(-263.75,-2.83){\circle*{0.000001}} \put(-263.04,-2.83){\circle*{0.000001}} \put(-262.34,-2.12){\circle*{0.000001}} \put(-261.63,-1.41){\circle*{0.000001}} \put(-260.92,-0.71){\circle*{0.000001}} \put(-260.22, 0.00){\circle*{0.000001}} \put(-259.51, 0.71){\circle*{0.000001}} \put(-258.80, 0.71){\circle*{0.000001}} \put(-258.09, 1.41){\circle*{0.000001}} \put(-257.39, 2.12){\circle*{0.000001}} \put(-256.68, 2.83){\circle*{0.000001}} \put(-255.97, 3.54){\circle*{0.000001}} \put(-255.27, 3.54){\circle*{0.000001}} \put(-254.56, 4.24){\circle*{0.000001}} \put(-253.85, 4.95){\circle*{0.000001}} \put(-253.14, 5.66){\circle*{0.000001}} \put(-252.44, 6.36){\circle*{0.000001}} \put(-251.73, 7.07){\circle*{0.000001}} \put(-251.02, 7.07){\circle*{0.000001}} \put(-250.32, 7.78){\circle*{0.000001}} \put(-249.61, 8.49){\circle*{0.000001}} \put(-248.90, 9.19){\circle*{0.000001}} \put(-248.19, 9.90){\circle*{0.000001}} \put(-247.49, 9.90){\circle*{0.000001}} \put(-246.78,10.61){\circle*{0.000001}} \put(-246.07,11.31){\circle*{0.000001}} \put(-245.37,12.02){\circle*{0.000001}} \put(-244.66,12.73){\circle*{0.000001}} \put(-243.95,13.44){\circle*{0.000001}} \put(-243.24,13.44){\circle*{0.000001}} \put(-242.54,14.14){\circle*{0.000001}} \put(-241.83,14.85){\circle*{0.000001}} \put(-241.83,14.85){\circle*{0.000001}} \put(-241.12,15.56){\circle*{0.000001}} \put(-241.12,16.26){\circle*{0.000001}} \put(-240.42,16.97){\circle*{0.000001}} \put(-217.79,-36.77){\circle*{0.000001}} \put(-217.79,-36.06){\circle*{0.000001}} \put(-218.50,-35.36){\circle*{0.000001}} \put(-218.50,-34.65){\circle*{0.000001}} \put(-219.20,-33.94){\circle*{0.000001}} \put(-219.20,-33.23){\circle*{0.000001}} \put(-219.91,-32.53){\circle*{0.000001}} \put(-219.91,-31.82){\circle*{0.000001}} \put(-219.91,-31.11){\circle*{0.000001}} \put(-220.62,-30.41){\circle*{0.000001}} \put(-220.62,-29.70){\circle*{0.000001}} 
\put(-221.32,-28.99){\circle*{0.000001}} \put(-221.32,-28.28){\circle*{0.000001}} \put(-221.32,-27.58){\circle*{0.000001}} \put(-222.03,-26.87){\circle*{0.000001}} \put(-222.03,-26.16){\circle*{0.000001}} \put(-222.74,-25.46){\circle*{0.000001}} \put(-222.74,-24.75){\circle*{0.000001}} \put(-223.45,-24.04){\circle*{0.000001}} \put(-223.45,-23.33){\circle*{0.000001}} \put(-223.45,-22.63){\circle*{0.000001}} \put(-224.15,-21.92){\circle*{0.000001}} \put(-224.15,-21.21){\circle*{0.000001}} \put(-224.86,-20.51){\circle*{0.000001}} \put(-224.86,-19.80){\circle*{0.000001}} \put(-225.57,-19.09){\circle*{0.000001}} \put(-225.57,-18.38){\circle*{0.000001}} \put(-225.57,-17.68){\circle*{0.000001}} \put(-226.27,-16.97){\circle*{0.000001}} \put(-226.27,-16.26){\circle*{0.000001}} \put(-226.98,-15.56){\circle*{0.000001}} \put(-226.98,-14.85){\circle*{0.000001}} \put(-226.98,-14.14){\circle*{0.000001}} \put(-227.69,-13.44){\circle*{0.000001}} \put(-227.69,-12.73){\circle*{0.000001}} \put(-228.40,-12.02){\circle*{0.000001}} \put(-228.40,-11.31){\circle*{0.000001}} \put(-229.10,-10.61){\circle*{0.000001}} \put(-229.10,-9.90){\circle*{0.000001}} \put(-229.10,-9.19){\circle*{0.000001}} \put(-229.81,-8.49){\circle*{0.000001}} \put(-229.81,-7.78){\circle*{0.000001}} \put(-230.52,-7.07){\circle*{0.000001}} \put(-230.52,-6.36){\circle*{0.000001}} \put(-231.22,-5.66){\circle*{0.000001}} \put(-231.22,-4.95){\circle*{0.000001}} \put(-231.22,-4.24){\circle*{0.000001}} \put(-231.93,-3.54){\circle*{0.000001}} \put(-231.93,-2.83){\circle*{0.000001}} \put(-232.64,-2.12){\circle*{0.000001}} \put(-232.64,-1.41){\circle*{0.000001}} \put(-232.64,-0.71){\circle*{0.000001}} \put(-233.35, 0.00){\circle*{0.000001}} \put(-233.35, 0.71){\circle*{0.000001}} \put(-234.05, 1.41){\circle*{0.000001}} \put(-234.05, 2.12){\circle*{0.000001}} \put(-234.76, 2.83){\circle*{0.000001}} \put(-234.76, 3.54){\circle*{0.000001}} \put(-234.76, 4.24){\circle*{0.000001}} \put(-235.47, 4.95){\circle*{0.000001}} \put(-235.47, 5.66){\circle*{0.000001}} \put(-236.17, 6.36){\circle*{0.000001}} \put(-236.17, 7.07){\circle*{0.000001}} \put(-236.88, 7.78){\circle*{0.000001}} \put(-236.88, 8.49){\circle*{0.000001}} \put(-236.88, 9.19){\circle*{0.000001}} \put(-237.59, 9.90){\circle*{0.000001}} \put(-237.59,10.61){\circle*{0.000001}} \put(-238.29,11.31){\circle*{0.000001}} \put(-238.29,12.02){\circle*{0.000001}} \put(-238.29,12.73){\circle*{0.000001}} \put(-239.00,13.44){\circle*{0.000001}} \put(-239.00,14.14){\circle*{0.000001}} \put(-239.71,14.85){\circle*{0.000001}} \put(-239.71,15.56){\circle*{0.000001}} \put(-240.42,16.26){\circle*{0.000001}} \put(-240.42,16.97){\circle*{0.000001}} \put(-217.79,-36.77){\circle*{0.000001}} \put(-217.79,-36.06){\circle*{0.000001}} \put(-217.08,-35.36){\circle*{0.000001}} \put(-217.08,-34.65){\circle*{0.000001}} \put(-217.08,-33.94){\circle*{0.000001}} \put(-216.37,-33.23){\circle*{0.000001}} \put(-216.37,-32.53){\circle*{0.000001}} \put(-216.37,-32.53){\circle*{0.000001}} \put(-217.08,-31.82){\circle*{0.000001}} \put(-217.08,-31.11){\circle*{0.000001}} \put(-217.79,-30.41){\circle*{0.000001}} \put(-218.50,-29.70){\circle*{0.000001}} \put(-260.92,11.31){\circle*{0.000001}} \put(-260.22,10.61){\circle*{0.000001}} \put(-259.51, 9.90){\circle*{0.000001}} \put(-258.80, 9.19){\circle*{0.000001}} \put(-258.09, 8.49){\circle*{0.000001}} \put(-257.39, 7.78){\circle*{0.000001}} \put(-256.68, 7.07){\circle*{0.000001}} \put(-255.97, 6.36){\circle*{0.000001}} \put(-255.27, 5.66){\circle*{0.000001}} \put(-254.56, 
[Figure: plot panel, label $\rho: 0.5$.]
\put(-52.33,-0.71){\circle*{0.000001}} \put(-51.62,-0.71){\circle*{0.000001}} \put(-50.91,-0.71){\circle*{0.000001}} \put(-50.20,-0.71){\circle*{0.000001}} \put(-49.50,-1.41){\circle*{0.000001}} \put(-48.79,-1.41){\circle*{0.000001}} \put(-48.08,-1.41){\circle*{0.000001}} \put(-47.38,-1.41){\circle*{0.000001}} \put(-110.31,-5.66){\circle*{0.000001}} \put(-109.60,-5.66){\circle*{0.000001}} \put(-108.89,-4.95){\circle*{0.000001}} \put(-108.19,-4.95){\circle*{0.000001}} \put(-107.48,-4.95){\circle*{0.000001}} \put(-106.77,-4.24){\circle*{0.000001}} \put(-106.07,-4.24){\circle*{0.000001}} \put(-105.36,-4.24){\circle*{0.000001}} \put(-104.65,-4.24){\circle*{0.000001}} \put(-103.94,-3.54){\circle*{0.000001}} \put(-103.24,-3.54){\circle*{0.000001}} \put(-102.53,-3.54){\circle*{0.000001}} \put(-101.82,-2.83){\circle*{0.000001}} \put(-101.12,-2.83){\circle*{0.000001}} \put(-100.41,-2.83){\circle*{0.000001}} \put(-99.70,-2.12){\circle*{0.000001}} \put(-98.99,-2.12){\circle*{0.000001}} \put(-98.29,-2.12){\circle*{0.000001}} \put(-97.58,-1.41){\circle*{0.000001}} \put(-96.87,-1.41){\circle*{0.000001}} \put(-96.17,-1.41){\circle*{0.000001}} \put(-95.46,-1.41){\circle*{0.000001}} \put(-94.75,-0.71){\circle*{0.000001}} \put(-94.05,-0.71){\circle*{0.000001}} \put(-93.34,-0.71){\circle*{0.000001}} \put(-92.63, 0.00){\circle*{0.000001}} \put(-91.92, 0.00){\circle*{0.000001}} \put(-91.22, 0.00){\circle*{0.000001}} \put(-90.51, 0.71){\circle*{0.000001}} \put(-89.80, 0.71){\circle*{0.000001}} \put(-89.10, 0.71){\circle*{0.000001}} \put(-88.39, 1.41){\circle*{0.000001}} \put(-87.68, 1.41){\circle*{0.000001}} \put(-86.97, 1.41){\circle*{0.000001}} \put(-86.27, 2.12){\circle*{0.000001}} \put(-85.56, 2.12){\circle*{0.000001}} \put(-84.85, 2.12){\circle*{0.000001}} \put(-84.15, 2.12){\circle*{0.000001}} \put(-83.44, 2.83){\circle*{0.000001}} \put(-82.73, 2.83){\circle*{0.000001}} \put(-82.02, 2.83){\circle*{0.000001}} \put(-81.32, 3.54){\circle*{0.000001}} \put(-80.61, 3.54){\circle*{0.000001}} \put(-100.41,-37.48){\circle*{0.000001}} \put(-100.41,-36.77){\circle*{0.000001}} \put(-101.12,-36.06){\circle*{0.000001}} \put(-101.12,-35.36){\circle*{0.000001}} \put(-101.12,-34.65){\circle*{0.000001}} \put(-101.82,-33.94){\circle*{0.000001}} \put(-101.82,-33.23){\circle*{0.000001}} \put(-101.82,-32.53){\circle*{0.000001}} \put(-101.82,-31.82){\circle*{0.000001}} \put(-102.53,-31.11){\circle*{0.000001}} \put(-102.53,-30.41){\circle*{0.000001}} \put(-102.53,-29.70){\circle*{0.000001}} \put(-103.24,-28.99){\circle*{0.000001}} \put(-103.24,-28.28){\circle*{0.000001}} \put(-103.24,-27.58){\circle*{0.000001}} \put(-103.94,-26.87){\circle*{0.000001}} \put(-103.94,-26.16){\circle*{0.000001}} \put(-103.94,-25.46){\circle*{0.000001}} \put(-104.65,-24.75){\circle*{0.000001}} \put(-104.65,-24.04){\circle*{0.000001}} \put(-104.65,-23.33){\circle*{0.000001}} \put(-105.36,-22.63){\circle*{0.000001}} \put(-105.36,-21.92){\circle*{0.000001}} \put(-105.36,-21.21){\circle*{0.000001}} \put(-105.36,-20.51){\circle*{0.000001}} \put(-106.07,-19.80){\circle*{0.000001}} \put(-106.07,-19.09){\circle*{0.000001}} \put(-106.07,-18.38){\circle*{0.000001}} \put(-106.77,-17.68){\circle*{0.000001}} \put(-106.77,-16.97){\circle*{0.000001}} \put(-106.77,-16.26){\circle*{0.000001}} \put(-107.48,-15.56){\circle*{0.000001}} \put(-107.48,-14.85){\circle*{0.000001}} \put(-107.48,-14.14){\circle*{0.000001}} \put(-108.19,-13.44){\circle*{0.000001}} \put(-108.19,-12.73){\circle*{0.000001}} \put(-108.19,-12.02){\circle*{0.000001}} 
\put(-108.89,-11.31){\circle*{0.000001}} \put(-108.89,-10.61){\circle*{0.000001}} \put(-108.89,-9.90){\circle*{0.000001}} \put(-108.89,-9.19){\circle*{0.000001}} \put(-109.60,-8.49){\circle*{0.000001}} \put(-109.60,-7.78){\circle*{0.000001}} \put(-109.60,-7.07){\circle*{0.000001}} \put(-110.31,-6.36){\circle*{0.000001}} \put(-110.31,-5.66){\circle*{0.000001}} \put(-134.35,-43.84){\circle*{0.000001}} \put(-133.64,-43.84){\circle*{0.000001}} \put(-132.94,-43.84){\circle*{0.000001}} \put(-132.23,-43.13){\circle*{0.000001}} \put(-131.52,-43.13){\circle*{0.000001}} \put(-130.81,-43.13){\circle*{0.000001}} \put(-130.11,-43.13){\circle*{0.000001}} \put(-129.40,-43.13){\circle*{0.000001}} \put(-128.69,-43.13){\circle*{0.000001}} \put(-127.99,-42.43){\circle*{0.000001}} \put(-127.28,-42.43){\circle*{0.000001}} \put(-126.57,-42.43){\circle*{0.000001}} \put(-125.87,-42.43){\circle*{0.000001}} \put(-125.16,-42.43){\circle*{0.000001}} \put(-124.45,-41.72){\circle*{0.000001}} \put(-123.74,-41.72){\circle*{0.000001}} \put(-123.04,-41.72){\circle*{0.000001}} \put(-122.33,-41.72){\circle*{0.000001}} \put(-121.62,-41.72){\circle*{0.000001}} \put(-120.92,-41.01){\circle*{0.000001}} \put(-120.21,-41.01){\circle*{0.000001}} \put(-119.50,-41.01){\circle*{0.000001}} \put(-118.79,-41.01){\circle*{0.000001}} \put(-118.09,-41.01){\circle*{0.000001}} \put(-117.38,-41.01){\circle*{0.000001}} \put(-116.67,-40.31){\circle*{0.000001}} \put(-115.97,-40.31){\circle*{0.000001}} \put(-115.26,-40.31){\circle*{0.000001}} \put(-114.55,-40.31){\circle*{0.000001}} \put(-113.84,-40.31){\circle*{0.000001}} \put(-113.14,-39.60){\circle*{0.000001}} \put(-112.43,-39.60){\circle*{0.000001}} \put(-111.72,-39.60){\circle*{0.000001}} \put(-111.02,-39.60){\circle*{0.000001}} \put(-110.31,-39.60){\circle*{0.000001}} \put(-109.60,-38.89){\circle*{0.000001}} \put(-108.89,-38.89){\circle*{0.000001}} \put(-108.19,-38.89){\circle*{0.000001}} \put(-107.48,-38.89){\circle*{0.000001}} \put(-106.77,-38.89){\circle*{0.000001}} \put(-106.07,-38.89){\circle*{0.000001}} \put(-105.36,-38.18){\circle*{0.000001}} \put(-104.65,-38.18){\circle*{0.000001}} \put(-103.94,-38.18){\circle*{0.000001}} \put(-103.24,-38.18){\circle*{0.000001}} \put(-102.53,-38.18){\circle*{0.000001}} \put(-101.82,-37.48){\circle*{0.000001}} \put(-101.12,-37.48){\circle*{0.000001}} \put(-100.41,-37.48){\circle*{0.000001}} \put(-153.44,-71.42){\circle*{0.000001}} \put(-152.74,-70.71){\circle*{0.000001}} \put(-152.74,-70.00){\circle*{0.000001}} \put(-152.03,-69.30){\circle*{0.000001}} \put(-151.32,-68.59){\circle*{0.000001}} \put(-151.32,-67.88){\circle*{0.000001}} \put(-150.61,-67.18){\circle*{0.000001}} \put(-149.91,-66.47){\circle*{0.000001}} \put(-149.20,-65.76){\circle*{0.000001}} \put(-149.20,-65.05){\circle*{0.000001}} \put(-148.49,-64.35){\circle*{0.000001}} \put(-147.79,-63.64){\circle*{0.000001}} \put(-147.79,-62.93){\circle*{0.000001}} \put(-147.08,-62.23){\circle*{0.000001}} \put(-146.37,-61.52){\circle*{0.000001}} \put(-146.37,-60.81){\circle*{0.000001}} \put(-145.66,-60.10){\circle*{0.000001}} \put(-144.96,-59.40){\circle*{0.000001}} \put(-144.96,-58.69){\circle*{0.000001}} \put(-144.25,-57.98){\circle*{0.000001}} \put(-143.54,-57.28){\circle*{0.000001}} \put(-142.84,-56.57){\circle*{0.000001}} \put(-142.84,-55.86){\circle*{0.000001}} \put(-142.13,-55.15){\circle*{0.000001}} \put(-141.42,-54.45){\circle*{0.000001}} \put(-141.42,-53.74){\circle*{0.000001}} \put(-140.71,-53.03){\circle*{0.000001}} \put(-140.01,-52.33){\circle*{0.000001}} 
\put(-140.01,-51.62){\circle*{0.000001}} \put(-139.30,-50.91){\circle*{0.000001}} \put(-138.59,-50.20){\circle*{0.000001}} \put(-138.59,-49.50){\circle*{0.000001}} \put(-137.89,-48.79){\circle*{0.000001}} \put(-137.18,-48.08){\circle*{0.000001}} \put(-136.47,-47.38){\circle*{0.000001}} \put(-136.47,-46.67){\circle*{0.000001}} \put(-135.76,-45.96){\circle*{0.000001}} \put(-135.06,-45.25){\circle*{0.000001}} \put(-135.06,-44.55){\circle*{0.000001}} \put(-134.35,-43.84){\circle*{0.000001}} \put(-153.44,-71.42){\circle*{0.000001}} \put(-152.74,-70.71){\circle*{0.000001}} \put(-152.03,-70.71){\circle*{0.000001}} \put(-151.32,-70.00){\circle*{0.000001}} \put(-150.61,-70.00){\circle*{0.000001}} \put(-149.91,-69.30){\circle*{0.000001}} \put(-149.20,-68.59){\circle*{0.000001}} \put(-148.49,-68.59){\circle*{0.000001}} \put(-147.79,-67.88){\circle*{0.000001}} \put(-147.08,-67.88){\circle*{0.000001}} \put(-146.37,-67.18){\circle*{0.000001}} \put(-145.66,-66.47){\circle*{0.000001}} \put(-144.96,-66.47){\circle*{0.000001}} \put(-144.25,-65.76){\circle*{0.000001}} \put(-143.54,-65.76){\circle*{0.000001}} \put(-142.84,-65.05){\circle*{0.000001}} \put(-142.13,-64.35){\circle*{0.000001}} \put(-141.42,-64.35){\circle*{0.000001}} \put(-140.71,-63.64){\circle*{0.000001}} \put(-140.01,-63.64){\circle*{0.000001}} \put(-139.30,-62.93){\circle*{0.000001}} \put(-138.59,-62.93){\circle*{0.000001}} \put(-137.89,-62.23){\circle*{0.000001}} \put(-137.18,-61.52){\circle*{0.000001}} \put(-136.47,-61.52){\circle*{0.000001}} \put(-135.76,-60.81){\circle*{0.000001}} \put(-135.06,-60.81){\circle*{0.000001}} \put(-134.35,-60.10){\circle*{0.000001}} \put(-133.64,-59.40){\circle*{0.000001}} \put(-132.94,-59.40){\circle*{0.000001}} \put(-132.23,-58.69){\circle*{0.000001}} \put(-131.52,-58.69){\circle*{0.000001}} \put(-130.81,-57.98){\circle*{0.000001}} \put(-130.11,-57.28){\circle*{0.000001}} \put(-129.40,-57.28){\circle*{0.000001}} \put(-128.69,-56.57){\circle*{0.000001}} \put(-127.99,-56.57){\circle*{0.000001}} \put(-127.28,-55.86){\circle*{0.000001}} \put(-126.57,-55.15){\circle*{0.000001}} \put(-125.87,-55.15){\circle*{0.000001}} \put(-125.16,-54.45){\circle*{0.000001}} \put(-124.45,-54.45){\circle*{0.000001}} \put(-123.74,-53.74){\circle*{0.000001}} \put(-123.74,-53.74){\circle*{0.000001}} \put(-123.74,-53.03){\circle*{0.000001}} \put(-124.45,-52.33){\circle*{0.000001}} \put(-124.45,-51.62){\circle*{0.000001}} \put(-124.45,-50.91){\circle*{0.000001}} \put(-125.16,-50.20){\circle*{0.000001}} \put(-125.16,-49.50){\circle*{0.000001}} \put(-125.16,-48.79){\circle*{0.000001}} \put(-125.87,-48.08){\circle*{0.000001}} \put(-125.87,-47.38){\circle*{0.000001}} \put(-125.87,-46.67){\circle*{0.000001}} \put(-125.87,-45.96){\circle*{0.000001}} \put(-126.57,-45.25){\circle*{0.000001}} \put(-126.57,-44.55){\circle*{0.000001}} \put(-126.57,-43.84){\circle*{0.000001}} \put(-127.28,-43.13){\circle*{0.000001}} \put(-127.28,-42.43){\circle*{0.000001}} \put(-127.28,-41.72){\circle*{0.000001}} \put(-127.99,-41.01){\circle*{0.000001}} \put(-127.99,-40.31){\circle*{0.000001}} \put(-127.99,-39.60){\circle*{0.000001}} \put(-128.69,-38.89){\circle*{0.000001}} \put(-128.69,-38.18){\circle*{0.000001}} \put(-128.69,-37.48){\circle*{0.000001}} \put(-129.40,-36.77){\circle*{0.000001}} \put(-129.40,-36.06){\circle*{0.000001}} \put(-129.40,-35.36){\circle*{0.000001}} \put(-130.11,-34.65){\circle*{0.000001}} \put(-130.11,-33.94){\circle*{0.000001}} \put(-130.11,-33.23){\circle*{0.000001}} \put(-130.81,-32.53){\circle*{0.000001}} 
\put(-130.81,-31.82){\circle*{0.000001}} \put(-130.81,-31.11){\circle*{0.000001}} \put(-130.81,-30.41){\circle*{0.000001}} \put(-131.52,-29.70){\circle*{0.000001}} \put(-131.52,-28.99){\circle*{0.000001}} \put(-131.52,-28.28){\circle*{0.000001}} \put(-132.23,-27.58){\circle*{0.000001}} \put(-132.23,-26.87){\circle*{0.000001}} \put(-132.23,-26.16){\circle*{0.000001}} \put(-132.94,-25.46){\circle*{0.000001}} \put(-132.94,-24.75){\circle*{0.000001}} \put(-132.94,-24.04){\circle*{0.000001}} \put(-133.64,-23.33){\circle*{0.000001}} \put(-133.64,-22.63){\circle*{0.000001}} \put(-133.64,-22.63){\circle*{0.000001}} \put(-132.94,-22.63){\circle*{0.000001}} \put(-132.23,-22.63){\circle*{0.000001}} \put(-131.52,-23.33){\circle*{0.000001}} \put(-130.81,-23.33){\circle*{0.000001}} \put(-130.11,-23.33){\circle*{0.000001}} \put(-129.40,-23.33){\circle*{0.000001}} \put(-128.69,-23.33){\circle*{0.000001}} \put(-127.99,-24.04){\circle*{0.000001}} \put(-127.28,-24.04){\circle*{0.000001}} \put(-126.57,-24.04){\circle*{0.000001}} \put(-125.87,-24.04){\circle*{0.000001}} \put(-125.16,-24.04){\circle*{0.000001}} \put(-124.45,-24.04){\circle*{0.000001}} \put(-123.74,-24.75){\circle*{0.000001}} \put(-123.04,-24.75){\circle*{0.000001}} \put(-122.33,-24.75){\circle*{0.000001}} \put(-121.62,-24.75){\circle*{0.000001}} \put(-120.92,-24.75){\circle*{0.000001}} \put(-120.21,-25.46){\circle*{0.000001}} \put(-119.50,-25.46){\circle*{0.000001}} \put(-118.79,-25.46){\circle*{0.000001}} \put(-118.09,-25.46){\circle*{0.000001}} \put(-117.38,-25.46){\circle*{0.000001}} \put(-116.67,-26.16){\circle*{0.000001}} \put(-115.97,-26.16){\circle*{0.000001}} \put(-115.26,-26.16){\circle*{0.000001}} \put(-114.55,-26.16){\circle*{0.000001}} \put(-113.84,-26.16){\circle*{0.000001}} \put(-113.14,-26.87){\circle*{0.000001}} \put(-112.43,-26.87){\circle*{0.000001}} \put(-111.72,-26.87){\circle*{0.000001}} \put(-111.02,-26.87){\circle*{0.000001}} \put(-110.31,-26.87){\circle*{0.000001}} \put(-109.60,-27.58){\circle*{0.000001}} \put(-108.89,-27.58){\circle*{0.000001}} \put(-108.19,-27.58){\circle*{0.000001}} \put(-107.48,-27.58){\circle*{0.000001}} \put(-106.77,-27.58){\circle*{0.000001}} \put(-106.07,-27.58){\circle*{0.000001}} \put(-105.36,-28.28){\circle*{0.000001}} \put(-104.65,-28.28){\circle*{0.000001}} \put(-103.94,-28.28){\circle*{0.000001}} \put(-103.24,-28.28){\circle*{0.000001}} \put(-102.53,-28.28){\circle*{0.000001}} \put(-101.82,-28.99){\circle*{0.000001}} \put(-101.12,-28.99){\circle*{0.000001}} \put(-100.41,-28.99){\circle*{0.000001}} \put(-100.41,-28.99){\circle*{0.000001}} \put(-99.70,-28.28){\circle*{0.000001}} \put(-98.99,-28.28){\circle*{0.000001}} \put(-98.29,-27.58){\circle*{0.000001}} \put(-97.58,-26.87){\circle*{0.000001}} \put(-96.87,-26.87){\circle*{0.000001}} \put(-96.17,-26.16){\circle*{0.000001}} \put(-95.46,-25.46){\circle*{0.000001}} \put(-94.75,-24.75){\circle*{0.000001}} \put(-94.05,-24.75){\circle*{0.000001}} \put(-93.34,-24.04){\circle*{0.000001}} \put(-92.63,-23.33){\circle*{0.000001}} \put(-91.92,-23.33){\circle*{0.000001}} \put(-91.22,-22.63){\circle*{0.000001}} \put(-90.51,-21.92){\circle*{0.000001}} \put(-89.80,-21.92){\circle*{0.000001}} \put(-89.10,-21.21){\circle*{0.000001}} \put(-88.39,-20.51){\circle*{0.000001}} \put(-87.68,-20.51){\circle*{0.000001}} \put(-86.97,-19.80){\circle*{0.000001}} \put(-86.27,-19.09){\circle*{0.000001}} \put(-85.56,-18.38){\circle*{0.000001}} \put(-84.85,-18.38){\circle*{0.000001}} \put(-84.15,-17.68){\circle*{0.000001}} \put(-83.44,-16.97){\circle*{0.000001}} 
\put(-82.73,-16.97){\circle*{0.000001}} \put(-82.02,-16.26){\circle*{0.000001}} \put(-81.32,-15.56){\circle*{0.000001}} \put(-80.61,-15.56){\circle*{0.000001}} \put(-79.90,-14.85){\circle*{0.000001}} \put(-79.20,-14.14){\circle*{0.000001}} \put(-78.49,-14.14){\circle*{0.000001}} \put(-77.78,-13.44){\circle*{0.000001}} \put(-77.07,-12.73){\circle*{0.000001}} \put(-76.37,-12.02){\circle*{0.000001}} \put(-75.66,-12.02){\circle*{0.000001}} \put(-74.95,-11.31){\circle*{0.000001}} \put(-74.25,-10.61){\circle*{0.000001}} \put(-73.54,-10.61){\circle*{0.000001}} \put(-72.83,-9.90){\circle*{0.000001}} \put(-101.82, 4.95){\circle*{0.000001}} \put(-101.12, 4.24){\circle*{0.000001}} \put(-100.41, 4.24){\circle*{0.000001}} \put(-99.70, 3.54){\circle*{0.000001}} \put(-98.99, 3.54){\circle*{0.000001}} \put(-98.29, 2.83){\circle*{0.000001}} \put(-97.58, 2.83){\circle*{0.000001}} \put(-96.87, 2.12){\circle*{0.000001}} \put(-96.17, 2.12){\circle*{0.000001}} \put(-95.46, 1.41){\circle*{0.000001}} \put(-94.75, 1.41){\circle*{0.000001}} \put(-94.05, 0.71){\circle*{0.000001}} \put(-93.34, 0.71){\circle*{0.000001}} \put(-92.63, 0.00){\circle*{0.000001}} \put(-91.92, 0.00){\circle*{0.000001}} \put(-91.22,-0.71){\circle*{0.000001}} \put(-90.51,-0.71){\circle*{0.000001}} \put(-89.80,-1.41){\circle*{0.000001}} \put(-89.10,-1.41){\circle*{0.000001}} \put(-88.39,-2.12){\circle*{0.000001}} \put(-87.68,-2.12){\circle*{0.000001}} \put(-86.97,-2.83){\circle*{0.000001}} \put(-86.27,-2.83){\circle*{0.000001}} \put(-85.56,-3.54){\circle*{0.000001}} \put(-84.85,-3.54){\circle*{0.000001}} \put(-84.15,-4.24){\circle*{0.000001}} \put(-83.44,-4.24){\circle*{0.000001}} \put(-82.73,-4.95){\circle*{0.000001}} \put(-82.02,-4.95){\circle*{0.000001}} \put(-81.32,-5.66){\circle*{0.000001}} \put(-80.61,-5.66){\circle*{0.000001}} \put(-79.90,-6.36){\circle*{0.000001}} \put(-79.20,-6.36){\circle*{0.000001}} \put(-78.49,-7.07){\circle*{0.000001}} \put(-77.78,-7.07){\circle*{0.000001}} \put(-77.07,-7.78){\circle*{0.000001}} \put(-76.37,-7.78){\circle*{0.000001}} \put(-75.66,-8.49){\circle*{0.000001}} \put(-74.95,-8.49){\circle*{0.000001}} \put(-74.25,-9.19){\circle*{0.000001}} \put(-73.54,-9.19){\circle*{0.000001}} \put(-72.83,-9.90){\circle*{0.000001}} \put(-133.64,14.85){\circle*{0.000001}} \put(-132.94,14.85){\circle*{0.000001}} \put(-132.23,14.14){\circle*{0.000001}} \put(-131.52,14.14){\circle*{0.000001}} \put(-130.81,14.14){\circle*{0.000001}} \put(-130.11,13.44){\circle*{0.000001}} \put(-129.40,13.44){\circle*{0.000001}} \put(-128.69,13.44){\circle*{0.000001}} \put(-127.99,13.44){\circle*{0.000001}} \put(-127.28,12.73){\circle*{0.000001}} \put(-126.57,12.73){\circle*{0.000001}} \put(-125.87,12.73){\circle*{0.000001}} \put(-125.16,12.02){\circle*{0.000001}} \put(-124.45,12.02){\circle*{0.000001}} \put(-123.74,12.02){\circle*{0.000001}} \put(-123.04,11.31){\circle*{0.000001}} \put(-122.33,11.31){\circle*{0.000001}} \put(-121.62,11.31){\circle*{0.000001}} \put(-120.92,10.61){\circle*{0.000001}} \put(-120.21,10.61){\circle*{0.000001}} \put(-119.50,10.61){\circle*{0.000001}} \put(-118.79, 9.90){\circle*{0.000001}} \put(-118.09, 9.90){\circle*{0.000001}} \put(-117.38, 9.90){\circle*{0.000001}} \put(-116.67, 9.90){\circle*{0.000001}} \put(-115.97, 9.19){\circle*{0.000001}} \put(-115.26, 9.19){\circle*{0.000001}} \put(-114.55, 9.19){\circle*{0.000001}} \put(-113.84, 8.49){\circle*{0.000001}} \put(-113.14, 8.49){\circle*{0.000001}} \put(-112.43, 8.49){\circle*{0.000001}} \put(-111.72, 7.78){\circle*{0.000001}} \put(-111.02, 
7.78){\circle*{0.000001}} \put(-110.31, 7.78){\circle*{0.000001}} \put(-109.60, 7.07){\circle*{0.000001}} \put(-108.89, 7.07){\circle*{0.000001}} \put(-108.19, 7.07){\circle*{0.000001}} \put(-107.48, 6.36){\circle*{0.000001}} \put(-106.77, 6.36){\circle*{0.000001}} \put(-106.07, 6.36){\circle*{0.000001}} \put(-105.36, 6.36){\circle*{0.000001}} \put(-104.65, 5.66){\circle*{0.000001}} \put(-103.94, 5.66){\circle*{0.000001}} \put(-103.24, 5.66){\circle*{0.000001}} \put(-102.53, 4.95){\circle*{0.000001}} \put(-101.82, 4.95){\circle*{0.000001}} \put(-163.34, 1.41){\circle*{0.000001}} \put(-162.63, 1.41){\circle*{0.000001}} \put(-161.93, 2.12){\circle*{0.000001}} \put(-161.22, 2.12){\circle*{0.000001}} \put(-160.51, 2.83){\circle*{0.000001}} \put(-159.81, 2.83){\circle*{0.000001}} \put(-159.10, 3.54){\circle*{0.000001}} \put(-158.39, 3.54){\circle*{0.000001}} \put(-157.68, 4.24){\circle*{0.000001}} \put(-156.98, 4.24){\circle*{0.000001}} \put(-156.27, 4.95){\circle*{0.000001}} \put(-155.56, 4.95){\circle*{0.000001}} \put(-154.86, 4.95){\circle*{0.000001}} \put(-154.15, 5.66){\circle*{0.000001}} \put(-153.44, 5.66){\circle*{0.000001}} \put(-152.74, 6.36){\circle*{0.000001}} \put(-152.03, 6.36){\circle*{0.000001}} \put(-151.32, 7.07){\circle*{0.000001}} \put(-150.61, 7.07){\circle*{0.000001}} \put(-149.91, 7.78){\circle*{0.000001}} \put(-149.20, 7.78){\circle*{0.000001}} \put(-148.49, 7.78){\circle*{0.000001}} \put(-147.79, 8.49){\circle*{0.000001}} \put(-147.08, 8.49){\circle*{0.000001}} \put(-146.37, 9.19){\circle*{0.000001}} \put(-145.66, 9.19){\circle*{0.000001}} \put(-144.96, 9.90){\circle*{0.000001}} \put(-144.25, 9.90){\circle*{0.000001}} \put(-143.54,10.61){\circle*{0.000001}} \put(-142.84,10.61){\circle*{0.000001}} \put(-142.13,11.31){\circle*{0.000001}} \put(-141.42,11.31){\circle*{0.000001}} \put(-140.71,11.31){\circle*{0.000001}} \put(-140.01,12.02){\circle*{0.000001}} \put(-139.30,12.02){\circle*{0.000001}} \put(-138.59,12.73){\circle*{0.000001}} \put(-137.89,12.73){\circle*{0.000001}} \put(-137.18,13.44){\circle*{0.000001}} \put(-136.47,13.44){\circle*{0.000001}} \put(-135.76,14.14){\circle*{0.000001}} \put(-135.06,14.14){\circle*{0.000001}} \put(-134.35,14.85){\circle*{0.000001}} \put(-133.64,14.85){\circle*{0.000001}} \put(-146.37,-25.46){\circle*{0.000001}} \put(-147.08,-24.75){\circle*{0.000001}} \put(-147.08,-24.04){\circle*{0.000001}} \put(-147.79,-23.33){\circle*{0.000001}} \put(-148.49,-22.63){\circle*{0.000001}} \put(-148.49,-21.92){\circle*{0.000001}} \put(-149.20,-21.21){\circle*{0.000001}} \put(-149.20,-20.51){\circle*{0.000001}} \put(-149.91,-19.80){\circle*{0.000001}} \put(-150.61,-19.09){\circle*{0.000001}} \put(-150.61,-18.38){\circle*{0.000001}} \put(-151.32,-17.68){\circle*{0.000001}} \put(-152.03,-16.97){\circle*{0.000001}} \put(-152.03,-16.26){\circle*{0.000001}} \put(-152.74,-15.56){\circle*{0.000001}} \put(-152.74,-14.85){\circle*{0.000001}} \put(-153.44,-14.14){\circle*{0.000001}} \put(-154.15,-13.44){\circle*{0.000001}} \put(-154.15,-12.73){\circle*{0.000001}} \put(-154.86,-12.02){\circle*{0.000001}} \put(-155.56,-11.31){\circle*{0.000001}} \put(-155.56,-10.61){\circle*{0.000001}} \put(-156.27,-9.90){\circle*{0.000001}} \put(-156.98,-9.19){\circle*{0.000001}} \put(-156.98,-8.49){\circle*{0.000001}} \put(-157.68,-7.78){\circle*{0.000001}} \put(-157.68,-7.07){\circle*{0.000001}} \put(-158.39,-6.36){\circle*{0.000001}} \put(-159.10,-5.66){\circle*{0.000001}} \put(-159.10,-4.95){\circle*{0.000001}} \put(-159.81,-4.24){\circle*{0.000001}} 
\put(-160.51,-3.54){\circle*{0.000001}} \put(-160.51,-2.83){\circle*{0.000001}} \put(-161.22,-2.12){\circle*{0.000001}} \put(-161.22,-1.41){\circle*{0.000001}} \put(-161.93,-0.71){\circle*{0.000001}} \put(-162.63, 0.00){\circle*{0.000001}} \put(-162.63, 0.71){\circle*{0.000001}} \put(-163.34, 1.41){\circle*{0.000001}} \put(-169.00,-51.62){\circle*{0.000001}} \put(-168.29,-50.91){\circle*{0.000001}} \put(-167.58,-50.20){\circle*{0.000001}} \put(-166.88,-49.50){\circle*{0.000001}} \put(-166.88,-48.79){\circle*{0.000001}} \put(-166.17,-48.08){\circle*{0.000001}} \put(-165.46,-47.38){\circle*{0.000001}} \put(-164.76,-46.67){\circle*{0.000001}} \put(-164.05,-45.96){\circle*{0.000001}} \put(-163.34,-45.25){\circle*{0.000001}} \put(-162.63,-44.55){\circle*{0.000001}} \put(-161.93,-43.84){\circle*{0.000001}} \put(-161.93,-43.13){\circle*{0.000001}} \put(-161.22,-42.43){\circle*{0.000001}} \put(-160.51,-41.72){\circle*{0.000001}} \put(-159.81,-41.01){\circle*{0.000001}} \put(-159.10,-40.31){\circle*{0.000001}} \put(-158.39,-39.60){\circle*{0.000001}} \put(-157.68,-38.89){\circle*{0.000001}} \put(-157.68,-38.18){\circle*{0.000001}} \put(-156.98,-37.48){\circle*{0.000001}} \put(-156.27,-36.77){\circle*{0.000001}} \put(-155.56,-36.06){\circle*{0.000001}} \put(-154.86,-35.36){\circle*{0.000001}} \put(-154.15,-34.65){\circle*{0.000001}} \put(-153.44,-33.94){\circle*{0.000001}} \put(-153.44,-33.23){\circle*{0.000001}} \put(-152.74,-32.53){\circle*{0.000001}} \put(-152.03,-31.82){\circle*{0.000001}} \put(-151.32,-31.11){\circle*{0.000001}} \put(-150.61,-30.41){\circle*{0.000001}} \put(-149.91,-29.70){\circle*{0.000001}} \put(-149.20,-28.99){\circle*{0.000001}} \put(-148.49,-28.28){\circle*{0.000001}} \put(-148.49,-27.58){\circle*{0.000001}} \put(-147.79,-26.87){\circle*{0.000001}} \put(-147.08,-26.16){\circle*{0.000001}} \put(-146.37,-25.46){\circle*{0.000001}} \put(-169.00,-51.62){\circle*{0.000001}} \put(-168.29,-50.91){\circle*{0.000001}} \put(-168.29,-50.20){\circle*{0.000001}} \put(-167.58,-49.50){\circle*{0.000001}} \put(-166.88,-48.79){\circle*{0.000001}} \put(-166.17,-48.08){\circle*{0.000001}} \put(-166.17,-47.38){\circle*{0.000001}} \put(-165.46,-46.67){\circle*{0.000001}} \put(-164.76,-45.96){\circle*{0.000001}} \put(-164.05,-45.25){\circle*{0.000001}} \put(-164.05,-44.55){\circle*{0.000001}} \put(-163.34,-43.84){\circle*{0.000001}} \put(-162.63,-43.13){\circle*{0.000001}} \put(-161.93,-42.43){\circle*{0.000001}} \put(-161.93,-41.72){\circle*{0.000001}} \put(-161.22,-41.01){\circle*{0.000001}} \put(-160.51,-40.31){\circle*{0.000001}} \put(-159.81,-39.60){\circle*{0.000001}} \put(-159.81,-38.89){\circle*{0.000001}} \put(-159.10,-38.18){\circle*{0.000001}} \put(-158.39,-37.48){\circle*{0.000001}} \put(-157.68,-36.77){\circle*{0.000001}} \put(-157.68,-36.06){\circle*{0.000001}} \put(-156.98,-35.36){\circle*{0.000001}} \put(-156.27,-34.65){\circle*{0.000001}} \put(-155.56,-33.94){\circle*{0.000001}} \put(-155.56,-33.23){\circle*{0.000001}} \put(-154.86,-32.53){\circle*{0.000001}} \put(-154.15,-31.82){\circle*{0.000001}} \put(-153.44,-31.11){\circle*{0.000001}} \put(-153.44,-30.41){\circle*{0.000001}} \put(-152.74,-29.70){\circle*{0.000001}} \put(-152.03,-28.99){\circle*{0.000001}} \put(-151.32,-28.28){\circle*{0.000001}} \put(-151.32,-27.58){\circle*{0.000001}} \put(-150.61,-26.87){\circle*{0.000001}} \put(-149.91,-26.16){\circle*{0.000001}} \put(-149.20,-25.46){\circle*{0.000001}} \put(-149.20,-24.75){\circle*{0.000001}} \put(-148.49,-24.04){\circle*{0.000001}} 
\put(-147.79,-23.33){\circle*{0.000001}} \put(-179.61,-14.85){\circle*{0.000001}} \put(-178.90,-14.85){\circle*{0.000001}} \put(-178.19,-15.56){\circle*{0.000001}} \put(-177.48,-15.56){\circle*{0.000001}} \put(-176.78,-15.56){\circle*{0.000001}} \put(-176.07,-15.56){\circle*{0.000001}} \put(-175.36,-16.26){\circle*{0.000001}} \put(-174.66,-16.26){\circle*{0.000001}} \put(-173.95,-16.26){\circle*{0.000001}} \put(-173.24,-16.26){\circle*{0.000001}} \put(-172.53,-16.97){\circle*{0.000001}} \put(-171.83,-16.97){\circle*{0.000001}} \put(-171.12,-16.97){\circle*{0.000001}} \put(-170.41,-16.97){\circle*{0.000001}} \put(-169.71,-17.68){\circle*{0.000001}} \put(-169.00,-17.68){\circle*{0.000001}} \put(-168.29,-17.68){\circle*{0.000001}} \put(-167.58,-18.38){\circle*{0.000001}} \put(-166.88,-18.38){\circle*{0.000001}} \put(-166.17,-18.38){\circle*{0.000001}} \put(-165.46,-18.38){\circle*{0.000001}} \put(-164.76,-19.09){\circle*{0.000001}} \put(-164.05,-19.09){\circle*{0.000001}} \put(-163.34,-19.09){\circle*{0.000001}} \put(-162.63,-19.09){\circle*{0.000001}} \put(-161.93,-19.80){\circle*{0.000001}} \put(-161.22,-19.80){\circle*{0.000001}} \put(-160.51,-19.80){\circle*{0.000001}} \put(-159.81,-19.80){\circle*{0.000001}} \put(-159.10,-20.51){\circle*{0.000001}} \put(-158.39,-20.51){\circle*{0.000001}} \put(-157.68,-20.51){\circle*{0.000001}} \put(-156.98,-21.21){\circle*{0.000001}} \put(-156.27,-21.21){\circle*{0.000001}} \put(-155.56,-21.21){\circle*{0.000001}} \put(-154.86,-21.21){\circle*{0.000001}} \put(-154.15,-21.92){\circle*{0.000001}} \put(-153.44,-21.92){\circle*{0.000001}} \put(-152.74,-21.92){\circle*{0.000001}} \put(-152.03,-21.92){\circle*{0.000001}} \put(-151.32,-22.63){\circle*{0.000001}} \put(-150.61,-22.63){\circle*{0.000001}} \put(-149.91,-22.63){\circle*{0.000001}} \put(-149.20,-22.63){\circle*{0.000001}} \put(-148.49,-23.33){\circle*{0.000001}} \put(-147.79,-23.33){\circle*{0.000001}} \put(-209.30,-30.41){\circle*{0.000001}} \put(-208.60,-29.70){\circle*{0.000001}} \put(-207.89,-29.70){\circle*{0.000001}} \put(-207.18,-28.99){\circle*{0.000001}} \put(-206.48,-28.99){\circle*{0.000001}} \put(-205.77,-28.28){\circle*{0.000001}} \put(-205.06,-28.28){\circle*{0.000001}} \put(-204.35,-27.58){\circle*{0.000001}} \put(-203.65,-27.58){\circle*{0.000001}} \put(-202.94,-26.87){\circle*{0.000001}} \put(-202.23,-26.87){\circle*{0.000001}} \put(-201.53,-26.16){\circle*{0.000001}} \put(-200.82,-26.16){\circle*{0.000001}} \put(-200.11,-25.46){\circle*{0.000001}} \put(-199.40,-25.46){\circle*{0.000001}} \put(-198.70,-24.75){\circle*{0.000001}} \put(-197.99,-24.75){\circle*{0.000001}} \put(-197.28,-24.04){\circle*{0.000001}} \put(-196.58,-24.04){\circle*{0.000001}} \put(-195.87,-23.33){\circle*{0.000001}} \put(-195.16,-23.33){\circle*{0.000001}} \put(-194.45,-22.63){\circle*{0.000001}} \put(-193.75,-21.92){\circle*{0.000001}} \put(-193.04,-21.92){\circle*{0.000001}} \put(-192.33,-21.21){\circle*{0.000001}} \put(-191.63,-21.21){\circle*{0.000001}} \put(-190.92,-20.51){\circle*{0.000001}} \put(-190.21,-20.51){\circle*{0.000001}} \put(-189.50,-19.80){\circle*{0.000001}} \put(-188.80,-19.80){\circle*{0.000001}} \put(-188.09,-19.09){\circle*{0.000001}} \put(-187.38,-19.09){\circle*{0.000001}} \put(-186.68,-18.38){\circle*{0.000001}} \put(-185.97,-18.38){\circle*{0.000001}} \put(-185.26,-17.68){\circle*{0.000001}} \put(-184.55,-17.68){\circle*{0.000001}} \put(-183.85,-16.97){\circle*{0.000001}} \put(-183.14,-16.97){\circle*{0.000001}} \put(-182.43,-16.26){\circle*{0.000001}} 
\put(-181.73,-16.26){\circle*{0.000001}} \put(-181.02,-15.56){\circle*{0.000001}} \put(-180.31,-15.56){\circle*{0.000001}} \put(-179.61,-14.85){\circle*{0.000001}} \put(-194.45,-61.52){\circle*{0.000001}} \put(-194.45,-60.81){\circle*{0.000001}} \put(-195.16,-60.10){\circle*{0.000001}} \put(-195.16,-59.40){\circle*{0.000001}} \put(-195.87,-58.69){\circle*{0.000001}} \put(-195.87,-57.98){\circle*{0.000001}} \put(-196.58,-57.28){\circle*{0.000001}} \put(-196.58,-56.57){\circle*{0.000001}} \put(-197.28,-55.86){\circle*{0.000001}} \put(-197.28,-55.15){\circle*{0.000001}} \put(-197.99,-54.45){\circle*{0.000001}} \put(-197.99,-53.74){\circle*{0.000001}} \put(-198.70,-53.03){\circle*{0.000001}} \put(-198.70,-52.33){\circle*{0.000001}} \put(-199.40,-51.62){\circle*{0.000001}} \put(-199.40,-50.91){\circle*{0.000001}} \put(-200.11,-50.20){\circle*{0.000001}} \put(-200.11,-49.50){\circle*{0.000001}} \put(-200.82,-48.79){\circle*{0.000001}} \put(-200.82,-48.08){\circle*{0.000001}} \put(-201.53,-47.38){\circle*{0.000001}} \put(-201.53,-46.67){\circle*{0.000001}} \put(-201.53,-45.96){\circle*{0.000001}} \put(-202.23,-45.25){\circle*{0.000001}} \put(-202.23,-44.55){\circle*{0.000001}} \put(-202.94,-43.84){\circle*{0.000001}} \put(-202.94,-43.13){\circle*{0.000001}} \put(-203.65,-42.43){\circle*{0.000001}} \put(-203.65,-41.72){\circle*{0.000001}} \put(-204.35,-41.01){\circle*{0.000001}} \put(-204.35,-40.31){\circle*{0.000001}} \put(-205.06,-39.60){\circle*{0.000001}} \put(-205.06,-38.89){\circle*{0.000001}} \put(-205.77,-38.18){\circle*{0.000001}} \put(-205.77,-37.48){\circle*{0.000001}} \put(-206.48,-36.77){\circle*{0.000001}} \put(-206.48,-36.06){\circle*{0.000001}} \put(-207.18,-35.36){\circle*{0.000001}} \put(-207.18,-34.65){\circle*{0.000001}} \put(-207.89,-33.94){\circle*{0.000001}} \put(-207.89,-33.23){\circle*{0.000001}} \put(-208.60,-32.53){\circle*{0.000001}} \put(-208.60,-31.82){\circle*{0.000001}} \put(-209.30,-31.11){\circle*{0.000001}} \put(-209.30,-30.41){\circle*{0.000001}} \put(-217.79,-85.56){\circle*{0.000001}} \put(-217.08,-84.85){\circle*{0.000001}} \put(-216.37,-84.15){\circle*{0.000001}} \put(-215.67,-83.44){\circle*{0.000001}} \put(-214.96,-82.73){\circle*{0.000001}} \put(-214.25,-82.02){\circle*{0.000001}} \put(-213.55,-81.32){\circle*{0.000001}} \put(-212.84,-80.61){\circle*{0.000001}} \put(-212.13,-79.90){\circle*{0.000001}} \put(-211.42,-79.20){\circle*{0.000001}} \put(-210.72,-78.49){\circle*{0.000001}} \put(-210.01,-77.78){\circle*{0.000001}} \put(-209.30,-77.07){\circle*{0.000001}} \put(-208.60,-76.37){\circle*{0.000001}} \put(-207.89,-75.66){\circle*{0.000001}} \put(-207.18,-74.95){\circle*{0.000001}} \put(-206.48,-74.25){\circle*{0.000001}} \put(-206.48,-73.54){\circle*{0.000001}} \put(-205.77,-72.83){\circle*{0.000001}} \put(-205.06,-72.12){\circle*{0.000001}} \put(-204.35,-71.42){\circle*{0.000001}} \put(-203.65,-70.71){\circle*{0.000001}} \put(-202.94,-70.00){\circle*{0.000001}} \put(-202.23,-69.30){\circle*{0.000001}} \put(-201.53,-68.59){\circle*{0.000001}} \put(-200.82,-67.88){\circle*{0.000001}} \put(-200.11,-67.18){\circle*{0.000001}} \put(-199.40,-66.47){\circle*{0.000001}} \put(-198.70,-65.76){\circle*{0.000001}} \put(-197.99,-65.05){\circle*{0.000001}} \put(-197.28,-64.35){\circle*{0.000001}} \put(-196.58,-63.64){\circle*{0.000001}} \put(-195.87,-62.93){\circle*{0.000001}} \put(-195.16,-62.23){\circle*{0.000001}} \put(-194.45,-61.52){\circle*{0.000001}} \put(-217.79,-85.56){\circle*{0.000001}} \put(-217.08,-85.56){\circle*{0.000001}} 
\put(-216.37,-85.56){\circle*{0.000001}} \put(-215.67,-86.27){\circle*{0.000001}} \put(-214.96,-86.27){\circle*{0.000001}} \put(-214.25,-86.27){\circle*{0.000001}} \put(-213.55,-86.27){\circle*{0.000001}} \put(-212.84,-86.27){\circle*{0.000001}} \put(-212.13,-86.27){\circle*{0.000001}} \put(-211.42,-86.97){\circle*{0.000001}} \put(-210.72,-86.97){\circle*{0.000001}} \put(-210.01,-86.97){\circle*{0.000001}} \put(-209.30,-86.97){\circle*{0.000001}} \put(-208.60,-86.97){\circle*{0.000001}} \put(-207.89,-87.68){\circle*{0.000001}} \put(-207.18,-87.68){\circle*{0.000001}} \put(-206.48,-87.68){\circle*{0.000001}} \put(-205.77,-87.68){\circle*{0.000001}} \put(-205.06,-87.68){\circle*{0.000001}} \put(-204.35,-87.68){\circle*{0.000001}} \put(-203.65,-88.39){\circle*{0.000001}} \put(-202.94,-88.39){\circle*{0.000001}} \put(-202.23,-88.39){\circle*{0.000001}} \put(-201.53,-88.39){\circle*{0.000001}} \put(-200.82,-88.39){\circle*{0.000001}} \put(-200.11,-88.39){\circle*{0.000001}} \put(-199.40,-89.10){\circle*{0.000001}} \put(-198.70,-89.10){\circle*{0.000001}} \put(-197.99,-89.10){\circle*{0.000001}} \put(-197.28,-89.10){\circle*{0.000001}} \put(-196.58,-89.10){\circle*{0.000001}} \put(-195.87,-89.80){\circle*{0.000001}} \put(-195.16,-89.80){\circle*{0.000001}} \put(-194.45,-89.80){\circle*{0.000001}} \put(-193.75,-89.80){\circle*{0.000001}} \put(-193.04,-89.80){\circle*{0.000001}} \put(-192.33,-89.80){\circle*{0.000001}} \put(-191.63,-90.51){\circle*{0.000001}} \put(-190.92,-90.51){\circle*{0.000001}} \put(-190.21,-90.51){\circle*{0.000001}} \put(-189.50,-90.51){\circle*{0.000001}} \put(-188.80,-90.51){\circle*{0.000001}} \put(-188.09,-91.22){\circle*{0.000001}} \put(-187.38,-91.22){\circle*{0.000001}} \put(-186.68,-91.22){\circle*{0.000001}} \put(-185.97,-91.22){\circle*{0.000001}} \put(-185.26,-91.22){\circle*{0.000001}} \put(-184.55,-91.22){\circle*{0.000001}} \put(-183.85,-91.92){\circle*{0.000001}} \put(-183.14,-91.92){\circle*{0.000001}} \put(-182.43,-91.92){\circle*{0.000001}} \put(-182.43,-91.92){\circle*{0.000001}} \put(-181.73,-91.22){\circle*{0.000001}} \put(-181.73,-90.51){\circle*{0.000001}} \put(-181.02,-89.80){\circle*{0.000001}} \put(-181.02,-89.10){\circle*{0.000001}} \put(-180.31,-88.39){\circle*{0.000001}} \put(-180.31,-87.68){\circle*{0.000001}} \put(-179.61,-86.97){\circle*{0.000001}} \put(-179.61,-86.27){\circle*{0.000001}} \put(-178.90,-85.56){\circle*{0.000001}} \put(-178.90,-84.85){\circle*{0.000001}} \put(-178.19,-84.15){\circle*{0.000001}} \put(-178.19,-83.44){\circle*{0.000001}} \put(-177.48,-82.73){\circle*{0.000001}} \put(-177.48,-82.02){\circle*{0.000001}} \put(-176.78,-81.32){\circle*{0.000001}} \put(-176.78,-80.61){\circle*{0.000001}} \put(-176.07,-79.90){\circle*{0.000001}} \put(-176.07,-79.20){\circle*{0.000001}} \put(-175.36,-78.49){\circle*{0.000001}} \put(-175.36,-77.78){\circle*{0.000001}} \put(-174.66,-77.07){\circle*{0.000001}} \put(-173.95,-76.37){\circle*{0.000001}} \put(-173.95,-75.66){\circle*{0.000001}} \put(-173.24,-74.95){\circle*{0.000001}} \put(-173.24,-74.25){\circle*{0.000001}} \put(-172.53,-73.54){\circle*{0.000001}} \put(-172.53,-72.83){\circle*{0.000001}} \put(-171.83,-72.12){\circle*{0.000001}} \put(-171.83,-71.42){\circle*{0.000001}} \put(-171.12,-70.71){\circle*{0.000001}} \put(-171.12,-70.00){\circle*{0.000001}} \put(-170.41,-69.30){\circle*{0.000001}} \put(-170.41,-68.59){\circle*{0.000001}} \put(-169.71,-67.88){\circle*{0.000001}} \put(-169.71,-67.18){\circle*{0.000001}} \put(-169.00,-66.47){\circle*{0.000001}} 
\put(-169.00,-65.76){\circle*{0.000001}} \put(-168.29,-65.05){\circle*{0.000001}} \put(-168.29,-64.35){\circle*{0.000001}} \put(-167.58,-63.64){\circle*{0.000001}} \put(-182.43,-95.46){\circle*{0.000001}} \put(-182.43,-94.75){\circle*{0.000001}} \put(-181.73,-94.05){\circle*{0.000001}} \put(-181.73,-93.34){\circle*{0.000001}} \put(-181.02,-92.63){\circle*{0.000001}} \put(-181.02,-91.92){\circle*{0.000001}} \put(-180.31,-91.22){\circle*{0.000001}} \put(-180.31,-90.51){\circle*{0.000001}} \put(-179.61,-89.80){\circle*{0.000001}} \put(-179.61,-89.10){\circle*{0.000001}} \put(-178.90,-88.39){\circle*{0.000001}} \put(-178.90,-87.68){\circle*{0.000001}} \put(-178.19,-86.97){\circle*{0.000001}} \put(-178.19,-86.27){\circle*{0.000001}} \put(-177.48,-85.56){\circle*{0.000001}} \put(-177.48,-84.85){\circle*{0.000001}} \put(-177.48,-84.15){\circle*{0.000001}} \put(-176.78,-83.44){\circle*{0.000001}} \put(-176.78,-82.73){\circle*{0.000001}} \put(-176.07,-82.02){\circle*{0.000001}} \put(-176.07,-81.32){\circle*{0.000001}} \put(-175.36,-80.61){\circle*{0.000001}} \put(-175.36,-79.90){\circle*{0.000001}} \put(-174.66,-79.20){\circle*{0.000001}} \put(-174.66,-78.49){\circle*{0.000001}} \put(-173.95,-77.78){\circle*{0.000001}} \put(-173.95,-77.07){\circle*{0.000001}} \put(-173.24,-76.37){\circle*{0.000001}} \put(-173.24,-75.66){\circle*{0.000001}} \put(-172.53,-74.95){\circle*{0.000001}} \put(-172.53,-74.25){\circle*{0.000001}} \put(-172.53,-73.54){\circle*{0.000001}} \put(-171.83,-72.83){\circle*{0.000001}} \put(-171.83,-72.12){\circle*{0.000001}} \put(-171.12,-71.42){\circle*{0.000001}} \put(-171.12,-70.71){\circle*{0.000001}} \put(-170.41,-70.00){\circle*{0.000001}} \put(-170.41,-69.30){\circle*{0.000001}} \put(-169.71,-68.59){\circle*{0.000001}} \put(-169.71,-67.88){\circle*{0.000001}} \put(-169.00,-67.18){\circle*{0.000001}} \put(-169.00,-66.47){\circle*{0.000001}} \put(-168.29,-65.76){\circle*{0.000001}} \put(-168.29,-65.05){\circle*{0.000001}} \put(-167.58,-64.35){\circle*{0.000001}} \put(-167.58,-63.64){\circle*{0.000001}} \put(-182.43,-95.46){\circle*{0.000001}} \put(-182.43,-94.75){\circle*{0.000001}} \put(-182.43,-94.05){\circle*{0.000001}} \put(-182.43,-93.34){\circle*{0.000001}} \put(-182.43,-92.63){\circle*{0.000001}} \put(-182.43,-91.92){\circle*{0.000001}} \put(-181.73,-91.22){\circle*{0.000001}} \put(-181.73,-90.51){\circle*{0.000001}} \put(-181.73,-89.80){\circle*{0.000001}} \put(-181.73,-89.10){\circle*{0.000001}} \put(-181.73,-88.39){\circle*{0.000001}} \put(-181.73,-87.68){\circle*{0.000001}} \put(-181.73,-86.97){\circle*{0.000001}} \put(-181.73,-86.27){\circle*{0.000001}} \put(-181.73,-85.56){\circle*{0.000001}} \put(-181.73,-84.85){\circle*{0.000001}} \put(-181.73,-84.15){\circle*{0.000001}} \put(-181.73,-83.44){\circle*{0.000001}} \put(-181.02,-82.73){\circle*{0.000001}} \put(-181.02,-82.02){\circle*{0.000001}} \put(-181.02,-81.32){\circle*{0.000001}} \put(-181.02,-80.61){\circle*{0.000001}} \put(-181.02,-79.90){\circle*{0.000001}} \put(-181.02,-79.20){\circle*{0.000001}} \put(-181.02,-78.49){\circle*{0.000001}} \put(-181.02,-77.78){\circle*{0.000001}} \put(-181.02,-77.07){\circle*{0.000001}} \put(-181.02,-76.37){\circle*{0.000001}} \put(-181.02,-75.66){\circle*{0.000001}} \put(-180.31,-74.95){\circle*{0.000001}} \put(-180.31,-74.25){\circle*{0.000001}} \put(-180.31,-73.54){\circle*{0.000001}} \put(-180.31,-72.83){\circle*{0.000001}} \put(-180.31,-72.12){\circle*{0.000001}} \put(-180.31,-71.42){\circle*{0.000001}} \put(-180.31,-70.71){\circle*{0.000001}} 
\put(-180.31,-70.00){\circle*{0.000001}} \put(-180.31,-69.30){\circle*{0.000001}} \put(-180.31,-68.59){\circle*{0.000001}} \put(-180.31,-67.88){\circle*{0.000001}} \put(-180.31,-67.18){\circle*{0.000001}} \put(-179.61,-66.47){\circle*{0.000001}} \put(-179.61,-65.76){\circle*{0.000001}} \put(-179.61,-65.05){\circle*{0.000001}} \put(-179.61,-64.35){\circle*{0.000001}} \put(-179.61,-63.64){\circle*{0.000001}} \put(-179.61,-62.93){\circle*{0.000001}} \put(-173.95,-93.34){\circle*{0.000001}} \put(-173.95,-92.63){\circle*{0.000001}} \put(-173.95,-91.92){\circle*{0.000001}} \put(-174.66,-91.22){\circle*{0.000001}} \put(-174.66,-90.51){\circle*{0.000001}} \put(-174.66,-89.80){\circle*{0.000001}} \put(-174.66,-89.10){\circle*{0.000001}} \put(-174.66,-88.39){\circle*{0.000001}} \put(-174.66,-87.68){\circle*{0.000001}} \put(-175.36,-86.97){\circle*{0.000001}} \put(-175.36,-86.27){\circle*{0.000001}} \put(-175.36,-85.56){\circle*{0.000001}} \put(-175.36,-84.85){\circle*{0.000001}} \put(-175.36,-84.15){\circle*{0.000001}} \put(-176.07,-83.44){\circle*{0.000001}} \put(-176.07,-82.73){\circle*{0.000001}} \put(-176.07,-82.02){\circle*{0.000001}} \put(-176.07,-81.32){\circle*{0.000001}} \put(-176.07,-80.61){\circle*{0.000001}} \put(-176.78,-79.90){\circle*{0.000001}} \put(-176.78,-79.20){\circle*{0.000001}} \put(-176.78,-78.49){\circle*{0.000001}} \put(-176.78,-77.78){\circle*{0.000001}} \put(-176.78,-77.07){\circle*{0.000001}} \put(-176.78,-76.37){\circle*{0.000001}} \put(-177.48,-75.66){\circle*{0.000001}} \put(-177.48,-74.95){\circle*{0.000001}} \put(-177.48,-74.25){\circle*{0.000001}} \put(-177.48,-73.54){\circle*{0.000001}} \put(-177.48,-72.83){\circle*{0.000001}} \put(-178.19,-72.12){\circle*{0.000001}} \put(-178.19,-71.42){\circle*{0.000001}} \put(-178.19,-70.71){\circle*{0.000001}} \put(-178.19,-70.00){\circle*{0.000001}} \put(-178.19,-69.30){\circle*{0.000001}} \put(-178.90,-68.59){\circle*{0.000001}} \put(-178.90,-67.88){\circle*{0.000001}} \put(-178.90,-67.18){\circle*{0.000001}} \put(-178.90,-66.47){\circle*{0.000001}} \put(-178.90,-65.76){\circle*{0.000001}} \put(-178.90,-65.05){\circle*{0.000001}} \put(-179.61,-64.35){\circle*{0.000001}} \put(-179.61,-63.64){\circle*{0.000001}} \put(-179.61,-62.93){\circle*{0.000001}} \put(-209.30,-100.41){\circle*{0.000001}} \put(-208.60,-100.41){\circle*{0.000001}} \put(-207.89,-100.41){\circle*{0.000001}} \put(-207.18,-99.70){\circle*{0.000001}} \put(-206.48,-99.70){\circle*{0.000001}} \put(-205.77,-99.70){\circle*{0.000001}} \put(-205.06,-99.70){\circle*{0.000001}} \put(-204.35,-99.70){\circle*{0.000001}} \put(-203.65,-98.99){\circle*{0.000001}} \put(-202.94,-98.99){\circle*{0.000001}} \put(-202.23,-98.99){\circle*{0.000001}} \put(-201.53,-98.99){\circle*{0.000001}} \put(-200.82,-98.99){\circle*{0.000001}} \put(-200.11,-98.29){\circle*{0.000001}} \put(-199.40,-98.29){\circle*{0.000001}} \put(-198.70,-98.29){\circle*{0.000001}} \put(-197.99,-98.29){\circle*{0.000001}} \put(-197.28,-98.29){\circle*{0.000001}} \put(-196.58,-97.58){\circle*{0.000001}} \put(-195.87,-97.58){\circle*{0.000001}} \put(-195.16,-97.58){\circle*{0.000001}} \put(-194.45,-97.58){\circle*{0.000001}} \put(-193.75,-97.58){\circle*{0.000001}} \put(-193.04,-96.87){\circle*{0.000001}} \put(-192.33,-96.87){\circle*{0.000001}} \put(-191.63,-96.87){\circle*{0.000001}} \put(-190.92,-96.87){\circle*{0.000001}} \put(-190.21,-96.87){\circle*{0.000001}} \put(-189.50,-96.17){\circle*{0.000001}} \put(-188.80,-96.17){\circle*{0.000001}} \put(-188.09,-96.17){\circle*{0.000001}} 
\put(-187.38,-96.17){\circle*{0.000001}} \put(-186.68,-96.17){\circle*{0.000001}} \put(-185.97,-95.46){\circle*{0.000001}} \put(-185.26,-95.46){\circle*{0.000001}} \put(-184.55,-95.46){\circle*{0.000001}} \put(-183.85,-95.46){\circle*{0.000001}} \put(-183.14,-95.46){\circle*{0.000001}} \put(-182.43,-94.75){\circle*{0.000001}} \put(-181.73,-94.75){\circle*{0.000001}} \put(-181.02,-94.75){\circle*{0.000001}} \put(-180.31,-94.75){\circle*{0.000001}} \put(-179.61,-94.75){\circle*{0.000001}} \put(-178.90,-94.05){\circle*{0.000001}} \put(-178.19,-94.05){\circle*{0.000001}} \put(-177.48,-94.05){\circle*{0.000001}} \put(-176.78,-94.05){\circle*{0.000001}} \put(-176.07,-94.05){\circle*{0.000001}} \put(-175.36,-93.34){\circle*{0.000001}} \put(-174.66,-93.34){\circle*{0.000001}} \put(-173.95,-93.34){\circle*{0.000001}} \put(-211.42,-133.64){\circle*{0.000001}} \put(-211.42,-132.94){\circle*{0.000001}} \put(-211.42,-132.23){\circle*{0.000001}} \put(-211.42,-131.52){\circle*{0.000001}} \put(-211.42,-130.81){\circle*{0.000001}} \put(-211.42,-130.11){\circle*{0.000001}} \put(-211.42,-129.40){\circle*{0.000001}} \put(-211.42,-128.69){\circle*{0.000001}} \put(-210.72,-127.99){\circle*{0.000001}} \put(-210.72,-127.28){\circle*{0.000001}} \put(-210.72,-126.57){\circle*{0.000001}} \put(-210.72,-125.87){\circle*{0.000001}} \put(-210.72,-125.16){\circle*{0.000001}} \put(-210.72,-124.45){\circle*{0.000001}} \put(-210.72,-123.74){\circle*{0.000001}} \put(-210.72,-123.04){\circle*{0.000001}} \put(-210.72,-122.33){\circle*{0.000001}} \put(-210.72,-121.62){\circle*{0.000001}} \put(-210.72,-120.92){\circle*{0.000001}} \put(-210.72,-120.21){\circle*{0.000001}} \put(-210.72,-119.50){\circle*{0.000001}} \put(-210.72,-118.79){\circle*{0.000001}} \put(-210.72,-118.09){\circle*{0.000001}} \put(-210.72,-117.38){\circle*{0.000001}} \put(-210.01,-116.67){\circle*{0.000001}} \put(-210.01,-115.97){\circle*{0.000001}} \put(-210.01,-115.26){\circle*{0.000001}} \put(-210.01,-114.55){\circle*{0.000001}} \put(-210.01,-113.84){\circle*{0.000001}} \put(-210.01,-113.14){\circle*{0.000001}} \put(-210.01,-112.43){\circle*{0.000001}} \put(-210.01,-111.72){\circle*{0.000001}} \put(-210.01,-111.02){\circle*{0.000001}} \put(-210.01,-110.31){\circle*{0.000001}} \put(-210.01,-109.60){\circle*{0.000001}} \put(-210.01,-108.89){\circle*{0.000001}} \put(-210.01,-108.19){\circle*{0.000001}} \put(-210.01,-107.48){\circle*{0.000001}} \put(-210.01,-106.77){\circle*{0.000001}} \put(-210.01,-106.07){\circle*{0.000001}} \put(-209.30,-105.36){\circle*{0.000001}} \put(-209.30,-104.65){\circle*{0.000001}} \put(-209.30,-103.94){\circle*{0.000001}} \put(-209.30,-103.24){\circle*{0.000001}} \put(-209.30,-102.53){\circle*{0.000001}} \put(-209.30,-101.82){\circle*{0.000001}} \put(-209.30,-101.12){\circle*{0.000001}} \put(-209.30,-100.41){\circle*{0.000001}} \put(-241.83,-124.45){\circle*{0.000001}} \put(-241.12,-124.45){\circle*{0.000001}} \put(-240.42,-125.16){\circle*{0.000001}} \put(-239.71,-125.16){\circle*{0.000001}} \put(-239.00,-125.16){\circle*{0.000001}} \put(-238.29,-125.87){\circle*{0.000001}} \put(-237.59,-125.87){\circle*{0.000001}} \put(-236.88,-125.87){\circle*{0.000001}} \put(-236.17,-125.87){\circle*{0.000001}} \put(-235.47,-126.57){\circle*{0.000001}} \put(-234.76,-126.57){\circle*{0.000001}} \put(-234.05,-126.57){\circle*{0.000001}} \put(-233.35,-127.28){\circle*{0.000001}} \put(-232.64,-127.28){\circle*{0.000001}} \put(-231.93,-127.28){\circle*{0.000001}} \put(-231.22,-127.99){\circle*{0.000001}} \put(-230.52,-127.99){\circle*{0.000001}} 
\put(-160.51,-272.94){\circle*{0.000001}} \put(-160.51,-272.24){\circle*{0.000001}} \put(-159.81,-271.53){\circle*{0.000001}} \put(-159.81,-270.82){\circle*{0.000001}} \put(-159.10,-270.11){\circle*{0.000001}} \put(-159.10,-269.41){\circle*{0.000001}} \put(-158.39,-268.70){\circle*{0.000001}} \put(-158.39,-267.99){\circle*{0.000001}} \put(-157.68,-267.29){\circle*{0.000001}} \put(-157.68,-266.58){\circle*{0.000001}} \put(-157.68,-265.87){\circle*{0.000001}} \put(-156.98,-265.17){\circle*{0.000001}} \put(-156.98,-264.46){\circle*{0.000001}} \put(-156.27,-263.75){\circle*{0.000001}} \put(-156.27,-263.04){\circle*{0.000001}} \put(-155.56,-262.34){\circle*{0.000001}} \put(-155.56,-261.63){\circle*{0.000001}} \put(-154.86,-260.92){\circle*{0.000001}} \put(-154.86,-260.22){\circle*{0.000001}} \put(-154.15,-259.51){\circle*{0.000001}} \put(-154.15,-258.80){\circle*{0.000001}} \put(-153.44,-258.09){\circle*{0.000001}} \put(-153.44,-257.39){\circle*{0.000001}} \put(-152.74,-256.68){\circle*{0.000001}} \put(-152.74,-255.97){\circle*{0.000001}} \put(-166.88,-286.38){\circle*{0.000001}} \put(-166.88,-285.67){\circle*{0.000001}} \put(-166.88,-284.96){\circle*{0.000001}} \put(-166.88,-284.26){\circle*{0.000001}} \put(-167.58,-283.55){\circle*{0.000001}} \put(-167.58,-282.84){\circle*{0.000001}} \put(-167.58,-282.14){\circle*{0.000001}} \put(-167.58,-281.43){\circle*{0.000001}} \put(-167.58,-280.72){\circle*{0.000001}} \put(-167.58,-280.01){\circle*{0.000001}} \put(-168.29,-279.31){\circle*{0.000001}} \put(-168.29,-278.60){\circle*{0.000001}} \put(-168.29,-277.89){\circle*{0.000001}} \put(-168.29,-277.19){\circle*{0.000001}} \put(-168.29,-276.48){\circle*{0.000001}} \put(-168.29,-275.77){\circle*{0.000001}} \put(-168.29,-275.06){\circle*{0.000001}} \put(-169.00,-274.36){\circle*{0.000001}} \put(-169.00,-273.65){\circle*{0.000001}} \put(-169.00,-272.94){\circle*{0.000001}} \put(-169.00,-272.24){\circle*{0.000001}} \put(-169.00,-271.53){\circle*{0.000001}} \put(-169.00,-270.82){\circle*{0.000001}} \put(-169.00,-270.11){\circle*{0.000001}} \put(-169.71,-269.41){\circle*{0.000001}} \put(-169.71,-268.70){\circle*{0.000001}} \put(-169.71,-267.99){\circle*{0.000001}} \put(-169.71,-267.29){\circle*{0.000001}} \put(-169.71,-266.58){\circle*{0.000001}} \put(-169.71,-265.87){\circle*{0.000001}} \put(-170.41,-265.17){\circle*{0.000001}} \put(-170.41,-264.46){\circle*{0.000001}} \put(-170.41,-263.75){\circle*{0.000001}} \put(-170.41,-263.04){\circle*{0.000001}} \put(-170.41,-262.34){\circle*{0.000001}} \put(-170.41,-261.63){\circle*{0.000001}} \put(-170.41,-260.92){\circle*{0.000001}} \put(-171.12,-260.22){\circle*{0.000001}} \put(-171.12,-259.51){\circle*{0.000001}} \put(-171.12,-258.80){\circle*{0.000001}} \put(-171.12,-258.09){\circle*{0.000001}} \put(-171.12,-257.39){\circle*{0.000001}} \put(-171.12,-256.68){\circle*{0.000001}} \put(-171.83,-255.97){\circle*{0.000001}} \put(-171.83,-255.27){\circle*{0.000001}} \put(-171.83,-254.56){\circle*{0.000001}} \put(-171.83,-253.85){\circle*{0.000001}} \put(-171.83,-253.85){\circle*{0.000001}} \put(-171.12,-253.85){\circle*{0.000001}} \put(-170.41,-254.56){\circle*{0.000001}} \put(-169.71,-254.56){\circle*{0.000001}} \put(-169.00,-255.27){\circle*{0.000001}} \put(-168.29,-255.27){\circle*{0.000001}} \put(-167.58,-255.27){\circle*{0.000001}} \put(-166.88,-255.97){\circle*{0.000001}} \put(-166.17,-255.97){\circle*{0.000001}} \put(-165.46,-255.97){\circle*{0.000001}} \put(-164.76,-256.68){\circle*{0.000001}} \put(-164.05,-256.68){\circle*{0.000001}} 
\put(-163.34,-257.39){\circle*{0.000001}} \put(-162.63,-257.39){\circle*{0.000001}} \put(-161.93,-257.39){\circle*{0.000001}} \put(-161.22,-258.09){\circle*{0.000001}} \put(-160.51,-258.09){\circle*{0.000001}} \put(-159.81,-258.80){\circle*{0.000001}} \put(-159.10,-258.80){\circle*{0.000001}} \put(-158.39,-258.80){\circle*{0.000001}} \put(-157.68,-259.51){\circle*{0.000001}} \put(-156.98,-259.51){\circle*{0.000001}} \put(-156.27,-259.51){\circle*{0.000001}} \put(-155.56,-260.22){\circle*{0.000001}} \put(-154.86,-260.22){\circle*{0.000001}} \put(-154.15,-260.92){\circle*{0.000001}} \put(-153.44,-260.92){\circle*{0.000001}} \put(-152.74,-260.92){\circle*{0.000001}} \put(-152.03,-261.63){\circle*{0.000001}} \put(-151.32,-261.63){\circle*{0.000001}} \put(-150.61,-262.34){\circle*{0.000001}} \put(-149.91,-262.34){\circle*{0.000001}} \put(-149.20,-262.34){\circle*{0.000001}} \put(-148.49,-263.04){\circle*{0.000001}} \put(-147.79,-263.04){\circle*{0.000001}} \put(-147.08,-263.75){\circle*{0.000001}} \put(-146.37,-263.75){\circle*{0.000001}} \put(-145.66,-263.75){\circle*{0.000001}} \put(-144.96,-264.46){\circle*{0.000001}} \put(-144.25,-264.46){\circle*{0.000001}} \put(-143.54,-264.46){\circle*{0.000001}} \put(-142.84,-265.17){\circle*{0.000001}} \put(-142.13,-265.17){\circle*{0.000001}} \put(-141.42,-265.87){\circle*{0.000001}} \put(-140.71,-265.87){\circle*{0.000001}} \put(-140.71,-265.87){\circle*{0.000001}} \put(-140.01,-265.87){\circle*{0.000001}} \put(-139.30,-265.17){\circle*{0.000001}} \put(-138.59,-265.17){\circle*{0.000001}} \put(-137.89,-265.17){\circle*{0.000001}} \put(-137.18,-265.17){\circle*{0.000001}} \put(-136.47,-264.46){\circle*{0.000001}} \put(-135.76,-264.46){\circle*{0.000001}} \put(-135.06,-264.46){\circle*{0.000001}} \put(-134.35,-263.75){\circle*{0.000001}} \put(-133.64,-263.75){\circle*{0.000001}} \put(-132.94,-263.75){\circle*{0.000001}} \put(-132.23,-263.75){\circle*{0.000001}} \put(-131.52,-263.04){\circle*{0.000001}} \put(-130.81,-263.04){\circle*{0.000001}} \put(-130.11,-263.04){\circle*{0.000001}} \put(-129.40,-262.34){\circle*{0.000001}} \put(-128.69,-262.34){\circle*{0.000001}} \put(-127.99,-262.34){\circle*{0.000001}} \put(-127.28,-262.34){\circle*{0.000001}} \put(-126.57,-261.63){\circle*{0.000001}} \put(-125.87,-261.63){\circle*{0.000001}} \put(-125.16,-261.63){\circle*{0.000001}} \put(-124.45,-260.92){\circle*{0.000001}} \put(-123.74,-260.92){\circle*{0.000001}} \put(-123.04,-260.92){\circle*{0.000001}} \put(-122.33,-260.22){\circle*{0.000001}} \put(-121.62,-260.22){\circle*{0.000001}} \put(-120.92,-260.22){\circle*{0.000001}} \put(-120.21,-260.22){\circle*{0.000001}} \put(-119.50,-259.51){\circle*{0.000001}} \put(-118.79,-259.51){\circle*{0.000001}} \put(-118.09,-259.51){\circle*{0.000001}} \put(-117.38,-258.80){\circle*{0.000001}} \put(-116.67,-258.80){\circle*{0.000001}} \put(-115.97,-258.80){\circle*{0.000001}} \put(-115.26,-258.80){\circle*{0.000001}} \put(-114.55,-258.09){\circle*{0.000001}} \put(-113.84,-258.09){\circle*{0.000001}} \put(-113.14,-258.09){\circle*{0.000001}} \put(-112.43,-257.39){\circle*{0.000001}} \put(-111.72,-257.39){\circle*{0.000001}} \put(-111.02,-257.39){\circle*{0.000001}} \put(-110.31,-257.39){\circle*{0.000001}} \put(-109.60,-256.68){\circle*{0.000001}} \put(-108.89,-256.68){\circle*{0.000001}} \put(-108.89,-256.68){\circle*{0.000001}} \put(-108.19,-255.97){\circle*{0.000001}} \put(-108.19,-255.27){\circle*{0.000001}} \put(-107.48,-254.56){\circle*{0.000001}} \put(-106.77,-253.85){\circle*{0.000001}} 
\put(-106.07,-253.14){\circle*{0.000001}} \put(-106.07,-252.44){\circle*{0.000001}} \put(-105.36,-251.73){\circle*{0.000001}} \put(-104.65,-251.02){\circle*{0.000001}} \put(-104.65,-250.32){\circle*{0.000001}} \put(-103.94,-249.61){\circle*{0.000001}} \put(-103.24,-248.90){\circle*{0.000001}} \put(-103.24,-248.19){\circle*{0.000001}} \put(-102.53,-247.49){\circle*{0.000001}} \put(-101.82,-246.78){\circle*{0.000001}} \put(-101.12,-246.07){\circle*{0.000001}} \put(-101.12,-245.37){\circle*{0.000001}} \put(-100.41,-244.66){\circle*{0.000001}} \put(-99.70,-243.95){\circle*{0.000001}} \put(-99.70,-243.24){\circle*{0.000001}} \put(-98.99,-242.54){\circle*{0.000001}} \put(-98.29,-241.83){\circle*{0.000001}} \put(-97.58,-241.12){\circle*{0.000001}} \put(-97.58,-240.42){\circle*{0.000001}} \put(-96.87,-239.71){\circle*{0.000001}} \put(-96.17,-239.00){\circle*{0.000001}} \put(-96.17,-238.29){\circle*{0.000001}} \put(-95.46,-237.59){\circle*{0.000001}} \put(-94.75,-236.88){\circle*{0.000001}} \put(-94.05,-236.17){\circle*{0.000001}} \put(-94.05,-235.47){\circle*{0.000001}} \put(-93.34,-234.76){\circle*{0.000001}} \put(-92.63,-234.05){\circle*{0.000001}} \put(-92.63,-233.35){\circle*{0.000001}} \put(-91.92,-232.64){\circle*{0.000001}} \put(-91.22,-231.93){\circle*{0.000001}} \put(-91.22,-231.22){\circle*{0.000001}} \put(-90.51,-230.52){\circle*{0.000001}} \put(-89.80,-229.81){\circle*{0.000001}} \put(-89.10,-229.10){\circle*{0.000001}} \put(-89.10,-228.40){\circle*{0.000001}} \put(-88.39,-227.69){\circle*{0.000001}} \put(-111.72,-251.73){\circle*{0.000001}} \put(-111.02,-251.02){\circle*{0.000001}} \put(-110.31,-250.32){\circle*{0.000001}} \put(-109.60,-249.61){\circle*{0.000001}} \put(-108.89,-248.90){\circle*{0.000001}} \put(-108.19,-248.19){\circle*{0.000001}} \put(-107.48,-247.49){\circle*{0.000001}} \put(-106.77,-246.78){\circle*{0.000001}} \put(-106.07,-246.07){\circle*{0.000001}} \put(-105.36,-245.37){\circle*{0.000001}} \put(-104.65,-244.66){\circle*{0.000001}} \put(-103.94,-243.95){\circle*{0.000001}} \put(-103.24,-243.24){\circle*{0.000001}} \put(-102.53,-242.54){\circle*{0.000001}} \put(-101.82,-241.83){\circle*{0.000001}} \put(-101.12,-241.12){\circle*{0.000001}} \put(-100.41,-240.42){\circle*{0.000001}} \put(-100.41,-239.71){\circle*{0.000001}} \put(-99.70,-239.00){\circle*{0.000001}} \put(-98.99,-238.29){\circle*{0.000001}} \put(-98.29,-237.59){\circle*{0.000001}} \put(-97.58,-236.88){\circle*{0.000001}} \put(-96.87,-236.17){\circle*{0.000001}} \put(-96.17,-235.47){\circle*{0.000001}} \put(-95.46,-234.76){\circle*{0.000001}} \put(-94.75,-234.05){\circle*{0.000001}} \put(-94.05,-233.35){\circle*{0.000001}} \put(-93.34,-232.64){\circle*{0.000001}} \put(-92.63,-231.93){\circle*{0.000001}} \put(-91.92,-231.22){\circle*{0.000001}} \put(-91.22,-230.52){\circle*{0.000001}} \put(-90.51,-229.81){\circle*{0.000001}} \put(-89.80,-229.10){\circle*{0.000001}} \put(-89.10,-228.40){\circle*{0.000001}} \put(-88.39,-227.69){\circle*{0.000001}} \put(-111.72,-251.73){\circle*{0.000001}} \put(-111.02,-251.73){\circle*{0.000001}} \put(-110.31,-251.73){\circle*{0.000001}} \put(-109.60,-251.73){\circle*{0.000001}} \put(-108.89,-251.73){\circle*{0.000001}} \put(-108.19,-251.02){\circle*{0.000001}} \put(-107.48,-251.02){\circle*{0.000001}} \put(-106.77,-251.02){\circle*{0.000001}} \put(-106.07,-251.02){\circle*{0.000001}} \put(-105.36,-251.02){\circle*{0.000001}} \put(-104.65,-251.02){\circle*{0.000001}} \put(-103.94,-251.02){\circle*{0.000001}} \put(-103.24,-251.02){\circle*{0.000001}} 
\put(-102.53,-250.32){\circle*{0.000001}} \put(-101.82,-250.32){\circle*{0.000001}} \put(-101.12,-250.32){\circle*{0.000001}} \put(-100.41,-250.32){\circle*{0.000001}} \put(-99.70,-250.32){\circle*{0.000001}} \put(-98.99,-250.32){\circle*{0.000001}} \put(-98.29,-250.32){\circle*{0.000001}} \put(-97.58,-250.32){\circle*{0.000001}} \put(-96.87,-250.32){\circle*{0.000001}} \put(-96.17,-249.61){\circle*{0.000001}} \put(-95.46,-249.61){\circle*{0.000001}} \put(-94.75,-249.61){\circle*{0.000001}} \put(-94.05,-249.61){\circle*{0.000001}} \put(-93.34,-249.61){\circle*{0.000001}} \put(-92.63,-249.61){\circle*{0.000001}} \put(-91.92,-249.61){\circle*{0.000001}} \put(-91.22,-249.61){\circle*{0.000001}} \put(-90.51,-248.90){\circle*{0.000001}} \put(-89.80,-248.90){\circle*{0.000001}} \put(-89.10,-248.90){\circle*{0.000001}} \put(-88.39,-248.90){\circle*{0.000001}} \put(-87.68,-248.90){\circle*{0.000001}} \put(-86.97,-248.90){\circle*{0.000001}} \put(-86.27,-248.90){\circle*{0.000001}} \put(-85.56,-248.90){\circle*{0.000001}} \put(-84.85,-248.90){\circle*{0.000001}} \put(-84.15,-248.19){\circle*{0.000001}} \put(-83.44,-248.19){\circle*{0.000001}} \put(-82.73,-248.19){\circle*{0.000001}} \put(-82.02,-248.19){\circle*{0.000001}} \put(-81.32,-248.19){\circle*{0.000001}} \put(-80.61,-248.19){\circle*{0.000001}} \put(-79.90,-248.19){\circle*{0.000001}} \put(-79.20,-248.19){\circle*{0.000001}} \put(-78.49,-247.49){\circle*{0.000001}} \put(-77.78,-247.49){\circle*{0.000001}} \put(-77.07,-247.49){\circle*{0.000001}} \put(-76.37,-247.49){\circle*{0.000001}} \put(-75.66,-247.49){\circle*{0.000001}} \put(-75.66,-247.49){\circle*{0.000001}} \put(-74.95,-246.78){\circle*{0.000001}} \put(-74.25,-246.07){\circle*{0.000001}} \put(-73.54,-245.37){\circle*{0.000001}} \put(-72.83,-244.66){\circle*{0.000001}} \put(-72.12,-243.95){\circle*{0.000001}} \put(-71.42,-243.24){\circle*{0.000001}} \put(-70.71,-242.54){\circle*{0.000001}} \put(-70.00,-241.83){\circle*{0.000001}} \put(-70.00,-241.12){\circle*{0.000001}} \put(-69.30,-240.42){\circle*{0.000001}} \put(-68.59,-239.71){\circle*{0.000001}} \put(-67.88,-239.00){\circle*{0.000001}} \put(-67.18,-238.29){\circle*{0.000001}} \put(-66.47,-237.59){\circle*{0.000001}} \put(-65.76,-236.88){\circle*{0.000001}} \put(-65.05,-236.17){\circle*{0.000001}} \put(-64.35,-235.47){\circle*{0.000001}} \put(-63.64,-234.76){\circle*{0.000001}} \put(-62.93,-234.05){\circle*{0.000001}} \put(-62.23,-233.35){\circle*{0.000001}} \put(-61.52,-232.64){\circle*{0.000001}} \put(-60.81,-231.93){\circle*{0.000001}} \put(-60.10,-231.22){\circle*{0.000001}} \put(-59.40,-230.52){\circle*{0.000001}} \put(-58.69,-229.81){\circle*{0.000001}} \put(-57.98,-229.10){\circle*{0.000001}} \put(-57.98,-228.40){\circle*{0.000001}} \put(-57.28,-227.69){\circle*{0.000001}} \put(-56.57,-226.98){\circle*{0.000001}} \put(-55.86,-226.27){\circle*{0.000001}} \put(-55.15,-225.57){\circle*{0.000001}} \put(-54.45,-224.86){\circle*{0.000001}} \put(-53.74,-224.15){\circle*{0.000001}} \put(-53.03,-223.45){\circle*{0.000001}} \put(-52.33,-222.74){\circle*{0.000001}} \put(-84.85,-211.42){\circle*{0.000001}} \put(-84.15,-211.42){\circle*{0.000001}} \put(-83.44,-212.13){\circle*{0.000001}} \put(-82.73,-212.13){\circle*{0.000001}} \put(-82.02,-212.13){\circle*{0.000001}} \put(-81.32,-212.84){\circle*{0.000001}} \put(-80.61,-212.84){\circle*{0.000001}} \put(-79.90,-212.84){\circle*{0.000001}} \put(-79.20,-213.55){\circle*{0.000001}} \put(-78.49,-213.55){\circle*{0.000001}} \put(-77.78,-213.55){\circle*{0.000001}} 
\put(-77.07,-214.25){\circle*{0.000001}} \put(-76.37,-214.25){\circle*{0.000001}} \put(-75.66,-214.96){\circle*{0.000001}} \put(-74.95,-214.96){\circle*{0.000001}} \put(-74.25,-214.96){\circle*{0.000001}} \put(-73.54,-215.67){\circle*{0.000001}} \put(-72.83,-215.67){\circle*{0.000001}} \put(-72.12,-215.67){\circle*{0.000001}} \put(-71.42,-216.37){\circle*{0.000001}} \put(-70.71,-216.37){\circle*{0.000001}} \put(-70.00,-216.37){\circle*{0.000001}} \put(-69.30,-217.08){\circle*{0.000001}} \put(-68.59,-217.08){\circle*{0.000001}} \put(-67.88,-217.08){\circle*{0.000001}} \put(-67.18,-217.79){\circle*{0.000001}} \put(-66.47,-217.79){\circle*{0.000001}} \put(-65.76,-217.79){\circle*{0.000001}} \put(-65.05,-218.50){\circle*{0.000001}} \put(-64.35,-218.50){\circle*{0.000001}} \put(-63.64,-218.50){\circle*{0.000001}} \put(-62.93,-219.20){\circle*{0.000001}} \put(-62.23,-219.20){\circle*{0.000001}} \put(-61.52,-219.20){\circle*{0.000001}} \put(-60.81,-219.91){\circle*{0.000001}} \put(-60.10,-219.91){\circle*{0.000001}} \put(-59.40,-220.62){\circle*{0.000001}} \put(-58.69,-220.62){\circle*{0.000001}} \put(-57.98,-220.62){\circle*{0.000001}} \put(-57.28,-221.32){\circle*{0.000001}} \put(-56.57,-221.32){\circle*{0.000001}} \put(-55.86,-221.32){\circle*{0.000001}} \put(-55.15,-222.03){\circle*{0.000001}} \put(-54.45,-222.03){\circle*{0.000001}} \put(-53.74,-222.03){\circle*{0.000001}} \put(-53.03,-222.74){\circle*{0.000001}} \put(-52.33,-222.74){\circle*{0.000001}} \put(-116.67,-201.53){\circle*{0.000001}} \put(-115.97,-201.53){\circle*{0.000001}} \put(-115.26,-202.23){\circle*{0.000001}} \put(-114.55,-202.23){\circle*{0.000001}} \put(-113.84,-202.23){\circle*{0.000001}} \put(-113.14,-202.94){\circle*{0.000001}} \put(-112.43,-202.94){\circle*{0.000001}} \put(-111.72,-202.94){\circle*{0.000001}} \put(-111.02,-202.94){\circle*{0.000001}} \put(-110.31,-203.65){\circle*{0.000001}} \put(-109.60,-203.65){\circle*{0.000001}} \put(-108.89,-203.65){\circle*{0.000001}} \put(-108.19,-204.35){\circle*{0.000001}} \put(-107.48,-204.35){\circle*{0.000001}} \put(-106.77,-204.35){\circle*{0.000001}} \put(-106.07,-205.06){\circle*{0.000001}} \put(-105.36,-205.06){\circle*{0.000001}} \put(-104.65,-205.06){\circle*{0.000001}} \put(-103.94,-205.77){\circle*{0.000001}} \put(-103.24,-205.77){\circle*{0.000001}} \put(-102.53,-205.77){\circle*{0.000001}} \put(-101.82,-206.48){\circle*{0.000001}} \put(-101.12,-206.48){\circle*{0.000001}} \put(-100.41,-206.48){\circle*{0.000001}} \put(-99.70,-206.48){\circle*{0.000001}} \put(-98.99,-207.18){\circle*{0.000001}} \put(-98.29,-207.18){\circle*{0.000001}} \put(-97.58,-207.18){\circle*{0.000001}} \put(-96.87,-207.89){\circle*{0.000001}} \put(-96.17,-207.89){\circle*{0.000001}} \put(-95.46,-207.89){\circle*{0.000001}} \put(-94.75,-208.60){\circle*{0.000001}} \put(-94.05,-208.60){\circle*{0.000001}} \put(-93.34,-208.60){\circle*{0.000001}} \put(-92.63,-209.30){\circle*{0.000001}} \put(-91.92,-209.30){\circle*{0.000001}} \put(-91.22,-209.30){\circle*{0.000001}} \put(-90.51,-210.01){\circle*{0.000001}} \put(-89.80,-210.01){\circle*{0.000001}} \put(-89.10,-210.01){\circle*{0.000001}} \put(-88.39,-210.01){\circle*{0.000001}} \put(-87.68,-210.72){\circle*{0.000001}} \put(-86.97,-210.72){\circle*{0.000001}} \put(-86.27,-210.72){\circle*{0.000001}} \put(-85.56,-211.42){\circle*{0.000001}} \put(-84.85,-211.42){\circle*{0.000001}} \put(-148.49,-191.63){\circle*{0.000001}} \put(-147.79,-191.63){\circle*{0.000001}} \put(-147.08,-192.33){\circle*{0.000001}} \put(-146.37,-192.33){\circle*{0.000001}} 
\put(-145.66,-192.33){\circle*{0.000001}} \put(-144.96,-193.04){\circle*{0.000001}} \put(-144.25,-193.04){\circle*{0.000001}} \put(-143.54,-193.04){\circle*{0.000001}} \put(-142.84,-193.04){\circle*{0.000001}} \put(-142.13,-193.75){\circle*{0.000001}} \put(-141.42,-193.75){\circle*{0.000001}} \put(-140.71,-193.75){\circle*{0.000001}} \put(-140.01,-194.45){\circle*{0.000001}} \put(-139.30,-194.45){\circle*{0.000001}} \put(-138.59,-194.45){\circle*{0.000001}} \put(-137.89,-195.16){\circle*{0.000001}} \put(-137.18,-195.16){\circle*{0.000001}} \put(-136.47,-195.16){\circle*{0.000001}} \put(-135.76,-195.87){\circle*{0.000001}} \put(-135.06,-195.87){\circle*{0.000001}} \put(-134.35,-195.87){\circle*{0.000001}} \put(-133.64,-196.58){\circle*{0.000001}} \put(-132.94,-196.58){\circle*{0.000001}} \put(-132.23,-196.58){\circle*{0.000001}} \put(-131.52,-196.58){\circle*{0.000001}} \put(-130.81,-197.28){\circle*{0.000001}} \put(-130.11,-197.28){\circle*{0.000001}} \put(-129.40,-197.28){\circle*{0.000001}} \put(-128.69,-197.99){\circle*{0.000001}} \put(-127.99,-197.99){\circle*{0.000001}} \put(-127.28,-197.99){\circle*{0.000001}} \put(-126.57,-198.70){\circle*{0.000001}} \put(-125.87,-198.70){\circle*{0.000001}} \put(-125.16,-198.70){\circle*{0.000001}} \put(-124.45,-199.40){\circle*{0.000001}} \put(-123.74,-199.40){\circle*{0.000001}} \put(-123.04,-199.40){\circle*{0.000001}} \put(-122.33,-200.11){\circle*{0.000001}} \put(-121.62,-200.11){\circle*{0.000001}} \put(-120.92,-200.11){\circle*{0.000001}} \put(-120.21,-200.11){\circle*{0.000001}} \put(-119.50,-200.82){\circle*{0.000001}} \put(-118.79,-200.82){\circle*{0.000001}} \put(-118.09,-200.82){\circle*{0.000001}} \put(-117.38,-201.53){\circle*{0.000001}} \put(-116.67,-201.53){\circle*{0.000001}} \put(-181.73,-175.36){\circle*{0.000001}} \put(-181.02,-175.36){\circle*{0.000001}} \put(-180.31,-176.07){\circle*{0.000001}} \put(-179.61,-176.07){\circle*{0.000001}} \put(-178.90,-176.78){\circle*{0.000001}} \put(-178.19,-176.78){\circle*{0.000001}} \put(-177.48,-177.48){\circle*{0.000001}} \put(-176.78,-177.48){\circle*{0.000001}} \put(-176.07,-178.19){\circle*{0.000001}} \put(-175.36,-178.19){\circle*{0.000001}} \put(-174.66,-178.90){\circle*{0.000001}} \put(-173.95,-178.90){\circle*{0.000001}} \put(-173.24,-179.61){\circle*{0.000001}} \put(-172.53,-179.61){\circle*{0.000001}} \put(-171.83,-180.31){\circle*{0.000001}} \put(-171.12,-180.31){\circle*{0.000001}} \put(-170.41,-181.02){\circle*{0.000001}} \put(-169.71,-181.02){\circle*{0.000001}} \put(-169.00,-181.73){\circle*{0.000001}} \put(-168.29,-181.73){\circle*{0.000001}} \put(-167.58,-182.43){\circle*{0.000001}} \put(-166.88,-182.43){\circle*{0.000001}} \put(-166.17,-183.14){\circle*{0.000001}} \put(-165.46,-183.14){\circle*{0.000001}} \put(-164.76,-183.85){\circle*{0.000001}} \put(-164.05,-183.85){\circle*{0.000001}} \put(-163.34,-184.55){\circle*{0.000001}} \put(-162.63,-184.55){\circle*{0.000001}} \put(-161.93,-185.26){\circle*{0.000001}} \put(-161.22,-185.26){\circle*{0.000001}} \put(-160.51,-185.97){\circle*{0.000001}} \put(-159.81,-185.97){\circle*{0.000001}} \put(-159.10,-186.68){\circle*{0.000001}} \put(-158.39,-186.68){\circle*{0.000001}} \put(-157.68,-187.38){\circle*{0.000001}} \put(-156.98,-187.38){\circle*{0.000001}} \put(-156.27,-188.09){\circle*{0.000001}} \put(-155.56,-188.09){\circle*{0.000001}} \put(-154.86,-188.80){\circle*{0.000001}} \put(-154.15,-188.80){\circle*{0.000001}} \put(-153.44,-189.50){\circle*{0.000001}} \put(-152.74,-189.50){\circle*{0.000001}} 
\put(-152.03,-190.21){\circle*{0.000001}} \put(-151.32,-190.21){\circle*{0.000001}} \put(-150.61,-190.92){\circle*{0.000001}} \put(-149.91,-190.92){\circle*{0.000001}} \put(-149.20,-191.63){\circle*{0.000001}} \put(-148.49,-191.63){\circle*{0.000001}} \put(-181.73,-175.36){\circle*{0.000001}} \put(-182.43,-174.66){\circle*{0.000001}} \put(-183.14,-173.95){\circle*{0.000001}} \put(-183.85,-173.24){\circle*{0.000001}} \put(-184.55,-172.53){\circle*{0.000001}} \put(-185.26,-171.83){\circle*{0.000001}} \put(-185.97,-171.12){\circle*{0.000001}} \put(-186.68,-170.41){\circle*{0.000001}} \put(-187.38,-169.71){\circle*{0.000001}} \put(-187.38,-169.00){\circle*{0.000001}} \put(-188.09,-168.29){\circle*{0.000001}} \put(-188.80,-167.58){\circle*{0.000001}} \put(-189.50,-166.88){\circle*{0.000001}} \put(-190.21,-166.17){\circle*{0.000001}} \put(-190.92,-165.46){\circle*{0.000001}} \put(-191.63,-164.76){\circle*{0.000001}} \put(-192.33,-164.05){\circle*{0.000001}} \put(-193.04,-163.34){\circle*{0.000001}} \put(-193.75,-162.63){\circle*{0.000001}} \put(-194.45,-161.93){\circle*{0.000001}} \put(-195.16,-161.22){\circle*{0.000001}} \put(-195.87,-160.51){\circle*{0.000001}} \put(-196.58,-159.81){\circle*{0.000001}} \put(-197.28,-159.10){\circle*{0.000001}} \put(-197.99,-158.39){\circle*{0.000001}} \put(-198.70,-157.68){\circle*{0.000001}} \put(-199.40,-156.98){\circle*{0.000001}} \put(-199.40,-156.27){\circle*{0.000001}} \put(-200.11,-155.56){\circle*{0.000001}} \put(-200.82,-154.86){\circle*{0.000001}} \put(-201.53,-154.15){\circle*{0.000001}} \put(-202.23,-153.44){\circle*{0.000001}} \put(-202.94,-152.74){\circle*{0.000001}} \put(-203.65,-152.03){\circle*{0.000001}} \put(-204.35,-151.32){\circle*{0.000001}} \put(-205.06,-150.61){\circle*{0.000001}} \put(-205.77,-149.91){\circle*{0.000001}} \put(-205.77,-149.91){\circle*{0.000001}} \put(-205.77,-149.20){\circle*{0.000001}} \put(-205.77,-148.49){\circle*{0.000001}} \put(-205.06,-147.79){\circle*{0.000001}} \put(-205.06,-147.08){\circle*{0.000001}} \put(-205.06,-146.37){\circle*{0.000001}} \put(-205.06,-145.66){\circle*{0.000001}} \put(-205.06,-144.96){\circle*{0.000001}} \put(-204.35,-144.25){\circle*{0.000001}} \put(-204.35,-143.54){\circle*{0.000001}} \put(-204.35,-142.84){\circle*{0.000001}} \put(-204.35,-142.13){\circle*{0.000001}} \put(-203.65,-141.42){\circle*{0.000001}} \put(-203.65,-140.71){\circle*{0.000001}} \put(-203.65,-140.01){\circle*{0.000001}} \put(-203.65,-139.30){\circle*{0.000001}} \put(-203.65,-138.59){\circle*{0.000001}} \put(-202.94,-137.89){\circle*{0.000001}} \put(-202.94,-137.18){\circle*{0.000001}} \put(-202.94,-136.47){\circle*{0.000001}} \put(-202.94,-135.76){\circle*{0.000001}} \put(-202.94,-135.06){\circle*{0.000001}} \put(-202.23,-134.35){\circle*{0.000001}} \put(-202.23,-133.64){\circle*{0.000001}} \put(-202.23,-132.94){\circle*{0.000001}} \put(-202.23,-132.23){\circle*{0.000001}} \put(-201.53,-131.52){\circle*{0.000001}} \put(-201.53,-130.81){\circle*{0.000001}} \put(-201.53,-130.11){\circle*{0.000001}} \put(-201.53,-129.40){\circle*{0.000001}} \put(-201.53,-128.69){\circle*{0.000001}} \put(-200.82,-127.99){\circle*{0.000001}} \put(-200.82,-127.28){\circle*{0.000001}} \put(-200.82,-126.57){\circle*{0.000001}} \put(-200.82,-125.87){\circle*{0.000001}} \put(-200.82,-125.16){\circle*{0.000001}} \put(-200.11,-124.45){\circle*{0.000001}} \put(-200.11,-123.74){\circle*{0.000001}} \put(-200.11,-123.04){\circle*{0.000001}} \put(-200.11,-122.33){\circle*{0.000001}} \put(-199.40,-121.62){\circle*{0.000001}} 
\put(-199.40,-120.92){\circle*{0.000001}} \put(-199.40,-120.21){\circle*{0.000001}} \put(-199.40,-119.50){\circle*{0.000001}} \put(-199.40,-118.79){\circle*{0.000001}} \put(-198.70,-118.09){\circle*{0.000001}} \put(-198.70,-117.38){\circle*{0.000001}} \put(-198.70,-116.67){\circle*{0.000001}} \put(-208.60,-150.61){\circle*{0.000001}} \put(-208.60,-149.91){\circle*{0.000001}} \put(-207.89,-149.20){\circle*{0.000001}} \put(-207.89,-148.49){\circle*{0.000001}} \put(-207.89,-147.79){\circle*{0.000001}} \put(-207.89,-147.08){\circle*{0.000001}} \put(-207.18,-146.37){\circle*{0.000001}} \put(-207.18,-145.66){\circle*{0.000001}} \put(-207.18,-144.96){\circle*{0.000001}} \put(-206.48,-144.25){\circle*{0.000001}} \put(-206.48,-143.54){\circle*{0.000001}} \put(-206.48,-142.84){\circle*{0.000001}} \put(-206.48,-142.13){\circle*{0.000001}} \put(-205.77,-141.42){\circle*{0.000001}} \put(-205.77,-140.71){\circle*{0.000001}} \put(-205.77,-140.01){\circle*{0.000001}} \put(-205.06,-139.30){\circle*{0.000001}} \put(-205.06,-138.59){\circle*{0.000001}} \put(-205.06,-137.89){\circle*{0.000001}} \put(-204.35,-137.18){\circle*{0.000001}} \put(-204.35,-136.47){\circle*{0.000001}} \put(-204.35,-135.76){\circle*{0.000001}} \put(-204.35,-135.06){\circle*{0.000001}} \put(-203.65,-134.35){\circle*{0.000001}} \put(-203.65,-133.64){\circle*{0.000001}} \put(-203.65,-132.94){\circle*{0.000001}} \put(-202.94,-132.23){\circle*{0.000001}} \put(-202.94,-131.52){\circle*{0.000001}} \put(-202.94,-130.81){\circle*{0.000001}} \put(-202.94,-130.11){\circle*{0.000001}} \put(-202.23,-129.40){\circle*{0.000001}} \put(-202.23,-128.69){\circle*{0.000001}} \put(-202.23,-127.99){\circle*{0.000001}} \put(-201.53,-127.28){\circle*{0.000001}} \put(-201.53,-126.57){\circle*{0.000001}} \put(-201.53,-125.87){\circle*{0.000001}} \put(-201.53,-125.16){\circle*{0.000001}} \put(-200.82,-124.45){\circle*{0.000001}} \put(-200.82,-123.74){\circle*{0.000001}} \put(-200.82,-123.04){\circle*{0.000001}} \put(-200.11,-122.33){\circle*{0.000001}} \put(-200.11,-121.62){\circle*{0.000001}} \put(-200.11,-120.92){\circle*{0.000001}} \put(-199.40,-120.21){\circle*{0.000001}} \put(-199.40,-119.50){\circle*{0.000001}} \put(-199.40,-118.79){\circle*{0.000001}} \put(-199.40,-118.09){\circle*{0.000001}} \put(-198.70,-117.38){\circle*{0.000001}} \put(-198.70,-116.67){\circle*{0.000001}} \put(-208.60,-150.61){\circle*{0.000001}} \put(-208.60,-149.91){\circle*{0.000001}} \put(-209.30,-149.20){\circle*{0.000001}} \put(-209.30,-148.49){\circle*{0.000001}} \put(-209.30,-147.79){\circle*{0.000001}} \put(-210.01,-147.08){\circle*{0.000001}} \put(-210.01,-146.37){\circle*{0.000001}} \put(-210.01,-145.66){\circle*{0.000001}} \put(-210.72,-144.96){\circle*{0.000001}} \put(-210.72,-144.25){\circle*{0.000001}} \put(-210.72,-143.54){\circle*{0.000001}} \put(-211.42,-142.84){\circle*{0.000001}} \put(-211.42,-142.13){\circle*{0.000001}} \put(-211.42,-141.42){\circle*{0.000001}} \put(-212.13,-140.71){\circle*{0.000001}} \put(-212.13,-140.01){\circle*{0.000001}} \put(-212.13,-139.30){\circle*{0.000001}} \put(-212.84,-138.59){\circle*{0.000001}} \put(-212.84,-137.89){\circle*{0.000001}} \put(-212.84,-137.18){\circle*{0.000001}} \put(-213.55,-136.47){\circle*{0.000001}} \put(-213.55,-135.76){\circle*{0.000001}} \put(-213.55,-135.06){\circle*{0.000001}} \put(-214.25,-134.35){\circle*{0.000001}} \put(-214.25,-133.64){\circle*{0.000001}} \put(-214.25,-132.94){\circle*{0.000001}} \put(-214.25,-132.23){\circle*{0.000001}} \put(-214.96,-131.52){\circle*{0.000001}} 
\put(-214.96,-130.81){\circle*{0.000001}} \put(-214.96,-130.11){\circle*{0.000001}} \put(-215.67,-129.40){\circle*{0.000001}} \put(-215.67,-128.69){\circle*{0.000001}} \put(-215.67,-127.99){\circle*{0.000001}} \put(-216.37,-127.28){\circle*{0.000001}} \put(-216.37,-126.57){\circle*{0.000001}} \put(-216.37,-125.87){\circle*{0.000001}} \put(-217.08,-125.16){\circle*{0.000001}} \put(-217.08,-124.45){\circle*{0.000001}} \put(-217.08,-123.74){\circle*{0.000001}} \put(-217.79,-123.04){\circle*{0.000001}} \put(-217.79,-122.33){\circle*{0.000001}} \put(-217.79,-121.62){\circle*{0.000001}} \put(-218.50,-120.92){\circle*{0.000001}} \put(-218.50,-120.21){\circle*{0.000001}} \put(-218.50,-119.50){\circle*{0.000001}} \put(-219.20,-118.79){\circle*{0.000001}} \put(-219.20,-118.09){\circle*{0.000001}} \put(-219.20,-117.38){\circle*{0.000001}} \put(-219.91,-116.67){\circle*{0.000001}} \put(-219.91,-115.97){\circle*{0.000001}} \put(-219.91,-115.97){\circle*{0.000001}} \put(-219.20,-115.26){\circle*{0.000001}} \put(-218.50,-115.26){\circle*{0.000001}} \put(-217.79,-114.55){\circle*{0.000001}} \put(-217.08,-113.84){\circle*{0.000001}} \put(-216.37,-113.84){\circle*{0.000001}} \put(-215.67,-113.14){\circle*{0.000001}} \put(-214.96,-113.14){\circle*{0.000001}} \put(-214.25,-112.43){\circle*{0.000001}} \put(-213.55,-111.72){\circle*{0.000001}} \put(-212.84,-111.72){\circle*{0.000001}} \put(-212.13,-111.02){\circle*{0.000001}} \put(-211.42,-110.31){\circle*{0.000001}} \put(-210.72,-110.31){\circle*{0.000001}} \put(-210.01,-109.60){\circle*{0.000001}} \put(-209.30,-108.89){\circle*{0.000001}} \put(-208.60,-108.89){\circle*{0.000001}} \put(-207.89,-108.19){\circle*{0.000001}} \put(-207.18,-108.19){\circle*{0.000001}} \put(-206.48,-107.48){\circle*{0.000001}} \put(-205.77,-106.77){\circle*{0.000001}} \put(-205.06,-106.77){\circle*{0.000001}} \put(-204.35,-106.07){\circle*{0.000001}} \put(-203.65,-105.36){\circle*{0.000001}} \put(-202.94,-105.36){\circle*{0.000001}} \put(-202.23,-104.65){\circle*{0.000001}} \put(-201.53,-104.65){\circle*{0.000001}} \put(-200.82,-103.94){\circle*{0.000001}} \put(-200.11,-103.24){\circle*{0.000001}} \put(-199.40,-103.24){\circle*{0.000001}} \put(-198.70,-102.53){\circle*{0.000001}} \put(-197.99,-101.82){\circle*{0.000001}} \put(-197.28,-101.82){\circle*{0.000001}} \put(-196.58,-101.12){\circle*{0.000001}} \put(-195.87,-100.41){\circle*{0.000001}} \put(-195.16,-100.41){\circle*{0.000001}} \put(-194.45,-99.70){\circle*{0.000001}} \put(-193.75,-99.70){\circle*{0.000001}} \put(-193.04,-98.99){\circle*{0.000001}} \put(-192.33,-98.29){\circle*{0.000001}} \put(-191.63,-98.29){\circle*{0.000001}} \put(-190.92,-97.58){\circle*{0.000001}} \put(-190.92,-97.58){\circle*{0.000001}} \put(-190.92,-96.87){\circle*{0.000001}} \put(-191.63,-96.17){\circle*{0.000001}} \put(-191.63,-95.46){\circle*{0.000001}} \put(-192.33,-94.75){\circle*{0.000001}} \put(-192.33,-94.05){\circle*{0.000001}} \put(-192.33,-93.34){\circle*{0.000001}} \put(-193.04,-92.63){\circle*{0.000001}} \put(-193.04,-91.92){\circle*{0.000001}} \put(-193.75,-91.22){\circle*{0.000001}} \put(-193.75,-90.51){\circle*{0.000001}} \put(-193.75,-89.80){\circle*{0.000001}} \put(-194.45,-89.10){\circle*{0.000001}} \put(-194.45,-88.39){\circle*{0.000001}} \put(-195.16,-87.68){\circle*{0.000001}} \put(-195.16,-86.97){\circle*{0.000001}} \put(-195.16,-86.27){\circle*{0.000001}} \put(-195.87,-85.56){\circle*{0.000001}} \put(-195.87,-84.85){\circle*{0.000001}} \put(-196.58,-84.15){\circle*{0.000001}} \put(-196.58,-83.44){\circle*{0.000001}} 
\put(-196.58,-82.73){\circle*{0.000001}} \put(-197.28,-82.02){\circle*{0.000001}} \put(-197.28,-81.32){\circle*{0.000001}} \put(-197.99,-80.61){\circle*{0.000001}} \put(-197.99,-79.90){\circle*{0.000001}} \put(-198.70,-79.20){\circle*{0.000001}} \put(-198.70,-78.49){\circle*{0.000001}} \put(-198.70,-77.78){\circle*{0.000001}} \put(-199.40,-77.07){\circle*{0.000001}} \put(-199.40,-76.37){\circle*{0.000001}} \put(-200.11,-75.66){\circle*{0.000001}} \put(-200.11,-74.95){\circle*{0.000001}} \put(-200.11,-74.25){\circle*{0.000001}} \put(-200.82,-73.54){\circle*{0.000001}} \put(-200.82,-72.83){\circle*{0.000001}} \put(-201.53,-72.12){\circle*{0.000001}} \put(-201.53,-71.42){\circle*{0.000001}} \put(-201.53,-70.71){\circle*{0.000001}} \put(-202.23,-70.00){\circle*{0.000001}} \put(-202.23,-69.30){\circle*{0.000001}} \put(-202.94,-68.59){\circle*{0.000001}} \put(-202.94,-67.88){\circle*{0.000001}} \put(-202.94,-67.18){\circle*{0.000001}} \put(-203.65,-66.47){\circle*{0.000001}} \put(-203.65,-65.76){\circle*{0.000001}} \put(-204.35,-65.05){\circle*{0.000001}} \put(-204.35,-64.35){\circle*{0.000001}} \put(-204.35,-64.35){\circle*{0.000001}} \put(-203.65,-64.35){\circle*{0.000001}} \put(-202.94,-63.64){\circle*{0.000001}} \put(-202.23,-63.64){\circle*{0.000001}} \put(-201.53,-62.93){\circle*{0.000001}} \put(-200.82,-62.93){\circle*{0.000001}} \put(-200.11,-62.23){\circle*{0.000001}} \put(-199.40,-62.23){\circle*{0.000001}} \put(-198.70,-61.52){\circle*{0.000001}} \put(-197.99,-61.52){\circle*{0.000001}} \put(-197.28,-60.81){\circle*{0.000001}} \put(-196.58,-60.81){\circle*{0.000001}} \put(-195.87,-60.10){\circle*{0.000001}} \put(-195.16,-60.10){\circle*{0.000001}} \put(-194.45,-59.40){\circle*{0.000001}} \put(-193.75,-59.40){\circle*{0.000001}} \put(-193.04,-58.69){\circle*{0.000001}} \put(-192.33,-58.69){\circle*{0.000001}} \put(-191.63,-57.98){\circle*{0.000001}} \put(-190.92,-57.98){\circle*{0.000001}} \put(-190.21,-57.28){\circle*{0.000001}} \put(-189.50,-57.28){\circle*{0.000001}} \put(-188.80,-56.57){\circle*{0.000001}} \put(-188.09,-56.57){\circle*{0.000001}} \put(-187.38,-55.86){\circle*{0.000001}} \put(-186.68,-55.86){\circle*{0.000001}} \put(-185.97,-55.15){\circle*{0.000001}} \put(-185.26,-55.15){\circle*{0.000001}} \put(-184.55,-54.45){\circle*{0.000001}} \put(-183.85,-54.45){\circle*{0.000001}} \put(-183.14,-53.74){\circle*{0.000001}} \put(-182.43,-53.74){\circle*{0.000001}} \put(-181.73,-53.03){\circle*{0.000001}} \put(-181.02,-53.03){\circle*{0.000001}} \put(-180.31,-52.33){\circle*{0.000001}} \put(-179.61,-52.33){\circle*{0.000001}} \put(-178.90,-51.62){\circle*{0.000001}} \put(-178.19,-51.62){\circle*{0.000001}} \put(-177.48,-50.91){\circle*{0.000001}} \put(-176.78,-50.91){\circle*{0.000001}} \put(-176.07,-50.20){\circle*{0.000001}} \put(-175.36,-50.20){\circle*{0.000001}} \put(-174.66,-49.50){\circle*{0.000001}} \put(-173.95,-49.50){\circle*{0.000001}} \put(-173.95,-49.50){\circle*{0.000001}} \put(-174.66,-48.79){\circle*{0.000001}} \put(-175.36,-48.08){\circle*{0.000001}} \put(-176.07,-47.38){\circle*{0.000001}} \put(-176.07,-46.67){\circle*{0.000001}} \put(-176.78,-45.96){\circle*{0.000001}} \put(-177.48,-45.25){\circle*{0.000001}} \put(-178.19,-44.55){\circle*{0.000001}} \put(-178.90,-43.84){\circle*{0.000001}} \put(-179.61,-43.13){\circle*{0.000001}} \put(-179.61,-42.43){\circle*{0.000001}} \put(-180.31,-41.72){\circle*{0.000001}} \put(-181.02,-41.01){\circle*{0.000001}} \put(-181.73,-40.31){\circle*{0.000001}} \put(-182.43,-39.60){\circle*{0.000001}} 
\put(-183.14,-38.89){\circle*{0.000001}} \put(-183.14,-38.18){\circle*{0.000001}} \put(-183.85,-37.48){\circle*{0.000001}} \put(-184.55,-36.77){\circle*{0.000001}} \put(-185.26,-36.06){\circle*{0.000001}} \put(-185.97,-35.36){\circle*{0.000001}} \put(-186.68,-34.65){\circle*{0.000001}} \put(-187.38,-33.94){\circle*{0.000001}} \put(-187.38,-33.23){\circle*{0.000001}} \put(-188.09,-32.53){\circle*{0.000001}} \put(-188.80,-31.82){\circle*{0.000001}} \put(-189.50,-31.11){\circle*{0.000001}} \put(-190.21,-30.41){\circle*{0.000001}} \put(-190.92,-29.70){\circle*{0.000001}} \put(-190.92,-28.99){\circle*{0.000001}} \put(-191.63,-28.28){\circle*{0.000001}} \put(-192.33,-27.58){\circle*{0.000001}} \put(-193.04,-26.87){\circle*{0.000001}} \put(-193.75,-26.16){\circle*{0.000001}} \put(-194.45,-25.46){\circle*{0.000001}} \put(-194.45,-24.75){\circle*{0.000001}} \put(-195.16,-24.04){\circle*{0.000001}} \put(-195.87,-23.33){\circle*{0.000001}} \put(-196.58,-22.63){\circle*{0.000001}} \put(-196.58,-22.63){\circle*{0.000001}} \put(-195.87,-21.92){\circle*{0.000001}} \put(-195.87,-21.21){\circle*{0.000001}} \put(-195.16,-20.51){\circle*{0.000001}} \put(-194.45,-19.80){\circle*{0.000001}} \put(-194.45,-19.09){\circle*{0.000001}} \put(-193.75,-18.38){\circle*{0.000001}} \put(-193.04,-17.68){\circle*{0.000001}} \put(-192.33,-16.97){\circle*{0.000001}} \put(-192.33,-16.26){\circle*{0.000001}} \put(-191.63,-15.56){\circle*{0.000001}} \put(-190.92,-14.85){\circle*{0.000001}} \put(-190.92,-14.14){\circle*{0.000001}} \put(-190.21,-13.44){\circle*{0.000001}} \put(-189.50,-12.73){\circle*{0.000001}} \put(-189.50,-12.02){\circle*{0.000001}} \put(-188.80,-11.31){\circle*{0.000001}} \put(-188.09,-10.61){\circle*{0.000001}} \put(-188.09,-9.90){\circle*{0.000001}} \put(-187.38,-9.19){\circle*{0.000001}} \put(-186.68,-8.49){\circle*{0.000001}} \put(-185.97,-7.78){\circle*{0.000001}} \put(-185.97,-7.07){\circle*{0.000001}} \put(-185.26,-6.36){\circle*{0.000001}} \put(-184.55,-5.66){\circle*{0.000001}} \put(-184.55,-4.95){\circle*{0.000001}} \put(-183.85,-4.24){\circle*{0.000001}} \put(-183.14,-3.54){\circle*{0.000001}} \put(-183.14,-2.83){\circle*{0.000001}} \put(-182.43,-2.12){\circle*{0.000001}} \put(-181.73,-1.41){\circle*{0.000001}} \put(-181.73,-0.71){\circle*{0.000001}} \put(-181.02, 0.00){\circle*{0.000001}} \put(-180.31, 0.71){\circle*{0.000001}} \put(-179.61, 1.41){\circle*{0.000001}} \put(-179.61, 2.12){\circle*{0.000001}} \put(-178.90, 2.83){\circle*{0.000001}} \put(-178.19, 3.54){\circle*{0.000001}} \put(-178.19, 4.24){\circle*{0.000001}} \put(-177.48, 4.95){\circle*{0.000001}} \put(-209.30,-10.61){\circle*{0.000001}} \put(-208.60,-10.61){\circle*{0.000001}} \put(-207.89,-9.90){\circle*{0.000001}} \put(-207.18,-9.90){\circle*{0.000001}} \put(-206.48,-9.19){\circle*{0.000001}} \put(-205.77,-9.19){\circle*{0.000001}} \put(-205.06,-8.49){\circle*{0.000001}} \put(-204.35,-8.49){\circle*{0.000001}} \put(-203.65,-7.78){\circle*{0.000001}} \put(-202.94,-7.78){\circle*{0.000001}} \put(-202.23,-7.07){\circle*{0.000001}} \put(-201.53,-7.07){\circle*{0.000001}} \put(-200.82,-6.36){\circle*{0.000001}} \put(-200.11,-6.36){\circle*{0.000001}} \put(-199.40,-5.66){\circle*{0.000001}} \put(-198.70,-5.66){\circle*{0.000001}} \put(-197.99,-4.95){\circle*{0.000001}} \put(-197.28,-4.95){\circle*{0.000001}} \put(-196.58,-4.24){\circle*{0.000001}} \put(-195.87,-4.24){\circle*{0.000001}} \put(-195.16,-3.54){\circle*{0.000001}} \put(-194.45,-3.54){\circle*{0.000001}} \put(-193.75,-2.83){\circle*{0.000001}} 
\put(-193.04,-2.83){\circle*{0.000001}} \put(-192.33,-2.12){\circle*{0.000001}} \put(-191.63,-2.12){\circle*{0.000001}} \put(-190.92,-1.41){\circle*{0.000001}} \put(-190.21,-1.41){\circle*{0.000001}} \put(-189.50,-0.71){\circle*{0.000001}} \put(-188.80,-0.71){\circle*{0.000001}} \put(-188.09, 0.00){\circle*{0.000001}} \put(-187.38, 0.00){\circle*{0.000001}} \put(-186.68, 0.71){\circle*{0.000001}} \put(-185.97, 0.71){\circle*{0.000001}} \put(-185.26, 1.41){\circle*{0.000001}} \put(-184.55, 1.41){\circle*{0.000001}} \put(-183.85, 2.12){\circle*{0.000001}} \put(-183.14, 2.12){\circle*{0.000001}} \put(-182.43, 2.83){\circle*{0.000001}} \put(-181.73, 2.83){\circle*{0.000001}} \put(-181.02, 3.54){\circle*{0.000001}} \put(-180.31, 3.54){\circle*{0.000001}} \put(-179.61, 4.24){\circle*{0.000001}} \put(-178.90, 4.24){\circle*{0.000001}} \put(-178.19, 4.95){\circle*{0.000001}} \put(-177.48, 4.95){\circle*{0.000001}} \put(-193.75,-37.48){\circle*{0.000001}} \put(-194.45,-36.77){\circle*{0.000001}} \put(-194.45,-36.06){\circle*{0.000001}} \put(-195.16,-35.36){\circle*{0.000001}} \put(-195.16,-34.65){\circle*{0.000001}} \put(-195.87,-33.94){\circle*{0.000001}} \put(-195.87,-33.23){\circle*{0.000001}} \put(-196.58,-32.53){\circle*{0.000001}} \put(-197.28,-31.82){\circle*{0.000001}} \put(-197.28,-31.11){\circle*{0.000001}} \put(-197.99,-30.41){\circle*{0.000001}} \put(-197.99,-29.70){\circle*{0.000001}} \put(-198.70,-28.99){\circle*{0.000001}} \put(-199.40,-28.28){\circle*{0.000001}} \put(-199.40,-27.58){\circle*{0.000001}} \put(-200.11,-26.87){\circle*{0.000001}} \put(-200.11,-26.16){\circle*{0.000001}} \put(-200.82,-25.46){\circle*{0.000001}} \put(-200.82,-24.75){\circle*{0.000001}} \put(-201.53,-24.04){\circle*{0.000001}} \put(-202.23,-23.33){\circle*{0.000001}} \put(-202.23,-22.63){\circle*{0.000001}} \put(-202.94,-21.92){\circle*{0.000001}} \put(-202.94,-21.21){\circle*{0.000001}} \put(-203.65,-20.51){\circle*{0.000001}} \put(-203.65,-19.80){\circle*{0.000001}} \put(-204.35,-19.09){\circle*{0.000001}} \put(-205.06,-18.38){\circle*{0.000001}} \put(-205.06,-17.68){\circle*{0.000001}} \put(-205.77,-16.97){\circle*{0.000001}} \put(-205.77,-16.26){\circle*{0.000001}} \put(-206.48,-15.56){\circle*{0.000001}} \put(-207.18,-14.85){\circle*{0.000001}} \put(-207.18,-14.14){\circle*{0.000001}} \put(-207.89,-13.44){\circle*{0.000001}} \put(-207.89,-12.73){\circle*{0.000001}} \put(-208.60,-12.02){\circle*{0.000001}} \put(-208.60,-11.31){\circle*{0.000001}} \put(-209.30,-10.61){\circle*{0.000001}} \put(-211.42,-66.47){\circle*{0.000001}} \put(-210.72,-65.76){\circle*{0.000001}} \put(-210.72,-65.05){\circle*{0.000001}} \put(-210.01,-64.35){\circle*{0.000001}} \put(-210.01,-63.64){\circle*{0.000001}} \put(-209.30,-62.93){\circle*{0.000001}} \put(-208.60,-62.23){\circle*{0.000001}} \put(-208.60,-61.52){\circle*{0.000001}} \put(-207.89,-60.81){\circle*{0.000001}} \put(-207.89,-60.10){\circle*{0.000001}} \put(-207.18,-59.40){\circle*{0.000001}} \put(-206.48,-58.69){\circle*{0.000001}} \put(-206.48,-57.98){\circle*{0.000001}} \put(-205.77,-57.28){\circle*{0.000001}} \put(-205.06,-56.57){\circle*{0.000001}} \put(-205.06,-55.86){\circle*{0.000001}} \put(-204.35,-55.15){\circle*{0.000001}} \put(-204.35,-54.45){\circle*{0.000001}} \put(-203.65,-53.74){\circle*{0.000001}} \put(-202.94,-53.03){\circle*{0.000001}} \put(-202.94,-52.33){\circle*{0.000001}} \put(-202.23,-51.62){\circle*{0.000001}} \put(-202.23,-50.91){\circle*{0.000001}} \put(-201.53,-50.20){\circle*{0.000001}} \put(-200.82,-49.50){\circle*{0.000001}} 
\put(-200.82,-48.79){\circle*{0.000001}} \put(-200.11,-48.08){\circle*{0.000001}} \put(-200.11,-47.38){\circle*{0.000001}} \put(-199.40,-46.67){\circle*{0.000001}} \put(-198.70,-45.96){\circle*{0.000001}} \put(-198.70,-45.25){\circle*{0.000001}} \put(-197.99,-44.55){\circle*{0.000001}} \put(-197.28,-43.84){\circle*{0.000001}} \put(-197.28,-43.13){\circle*{0.000001}} \put(-196.58,-42.43){\circle*{0.000001}} \put(-196.58,-41.72){\circle*{0.000001}} \put(-195.87,-41.01){\circle*{0.000001}} \put(-195.16,-40.31){\circle*{0.000001}} \put(-195.16,-39.60){\circle*{0.000001}} \put(-194.45,-38.89){\circle*{0.000001}} \put(-194.45,-38.18){\circle*{0.000001}} \put(-193.75,-37.48){\circle*{0.000001}} \put(-211.42,-66.47){\circle*{0.000001}} \put(-210.72,-65.76){\circle*{0.000001}} \put(-210.01,-65.05){\circle*{0.000001}} \put(-209.30,-65.05){\circle*{0.000001}} \put(-208.60,-64.35){\circle*{0.000001}} \put(-207.89,-63.64){\circle*{0.000001}} \put(-207.18,-62.93){\circle*{0.000001}} \put(-206.48,-62.23){\circle*{0.000001}} \put(-205.77,-61.52){\circle*{0.000001}} \put(-205.06,-61.52){\circle*{0.000001}} \put(-204.35,-60.81){\circle*{0.000001}} \put(-203.65,-60.10){\circle*{0.000001}} \put(-202.94,-59.40){\circle*{0.000001}} \put(-202.23,-58.69){\circle*{0.000001}} \put(-201.53,-58.69){\circle*{0.000001}} \put(-200.82,-57.98){\circle*{0.000001}} \put(-200.11,-57.28){\circle*{0.000001}} \put(-199.40,-56.57){\circle*{0.000001}} \put(-198.70,-55.86){\circle*{0.000001}} \put(-197.99,-55.86){\circle*{0.000001}} \put(-197.28,-55.15){\circle*{0.000001}} \put(-196.58,-54.45){\circle*{0.000001}} \put(-195.87,-53.74){\circle*{0.000001}} \put(-195.16,-53.03){\circle*{0.000001}} \put(-194.45,-52.33){\circle*{0.000001}} \put(-193.75,-52.33){\circle*{0.000001}} \put(-193.04,-51.62){\circle*{0.000001}} \put(-192.33,-50.91){\circle*{0.000001}} \put(-191.63,-50.20){\circle*{0.000001}} \put(-190.92,-49.50){\circle*{0.000001}} \put(-190.21,-49.50){\circle*{0.000001}} \put(-189.50,-48.79){\circle*{0.000001}} \put(-188.80,-48.08){\circle*{0.000001}} \put(-188.09,-47.38){\circle*{0.000001}} \put(-187.38,-46.67){\circle*{0.000001}} \put(-186.68,-45.96){\circle*{0.000001}} \put(-185.97,-45.96){\circle*{0.000001}} \put(-185.26,-45.25){\circle*{0.000001}} \put(-184.55,-44.55){\circle*{0.000001}} \put(-184.55,-44.55){\circle*{0.000001}} \put(-185.26,-43.84){\circle*{0.000001}} \put(-185.26,-43.13){\circle*{0.000001}} \put(-185.97,-42.43){\circle*{0.000001}} \put(-186.68,-41.72){\circle*{0.000001}} \put(-186.68,-41.01){\circle*{0.000001}} \put(-187.38,-40.31){\circle*{0.000001}} \put(-188.09,-39.60){\circle*{0.000001}} \put(-188.09,-38.89){\circle*{0.000001}} \put(-188.80,-38.18){\circle*{0.000001}} \put(-189.50,-37.48){\circle*{0.000001}} \put(-189.50,-36.77){\circle*{0.000001}} \put(-190.21,-36.06){\circle*{0.000001}} \put(-190.92,-35.36){\circle*{0.000001}} \put(-190.92,-34.65){\circle*{0.000001}} \put(-191.63,-33.94){\circle*{0.000001}} \put(-192.33,-33.23){\circle*{0.000001}} \put(-192.33,-32.53){\circle*{0.000001}} \put(-193.04,-31.82){\circle*{0.000001}} \put(-193.75,-31.11){\circle*{0.000001}} \put(-193.75,-30.41){\circle*{0.000001}} \put(-194.45,-29.70){\circle*{0.000001}} \put(-194.45,-28.99){\circle*{0.000001}} \put(-195.16,-28.28){\circle*{0.000001}} \put(-195.87,-27.58){\circle*{0.000001}} \put(-195.87,-26.87){\circle*{0.000001}} \put(-196.58,-26.16){\circle*{0.000001}} \put(-197.28,-25.46){\circle*{0.000001}} \put(-197.28,-24.75){\circle*{0.000001}} \put(-197.99,-24.04){\circle*{0.000001}} 
\put(-420.02,420.02){\circle*{0.000001}} \put(-420.73,420.73){\circle*{0.000001}} \put(-420.73,421.44){\circle*{0.000001}} \put(-421.44,422.14){\circle*{0.000001}} \put(-422.14,422.85){\circle*{0.000001}} \put(-422.14,423.56){\circle*{0.000001}} \put(-422.85,424.26){\circle*{0.000001}} \put(-422.85,424.97){\circle*{0.000001}} \put(-423.56,425.68){\circle*{0.000001}} \put(-424.26,426.39){\circle*{0.000001}} \put(-424.26,427.09){\circle*{0.000001}} \put(-424.97,427.80){\circle*{0.000001}} \put(-424.97,428.51){\circle*{0.000001}} \put(-425.68,429.21){\circle*{0.000001}} \put(-426.39,429.92){\circle*{0.000001}} \put(-426.39,430.63){\circle*{0.000001}} \put(-427.09,431.34){\circle*{0.000001}} \put(-427.09,432.04){\circle*{0.000001}} \put(-427.80,432.75){\circle*{0.000001}} \put(-427.80,432.75){\circle*{0.000001}} \put(-427.09,433.46){\circle*{0.000001}} \put(-427.09,434.16){\circle*{0.000001}} \put(-426.39,434.87){\circle*{0.000001}} \put(-425.68,435.58){\circle*{0.000001}} \put(-425.68,436.28){\circle*{0.000001}} \put(-424.97,436.99){\circle*{0.000001}} \put(-424.26,437.70){\circle*{0.000001}} \put(-424.26,438.41){\circle*{0.000001}} \put(-423.56,439.11){\circle*{0.000001}} \put(-422.85,439.82){\circle*{0.000001}} \put(-422.85,440.53){\circle*{0.000001}} \put(-422.14,441.23){\circle*{0.000001}} \put(-421.44,441.94){\circle*{0.000001}} \put(-421.44,442.65){\circle*{0.000001}} \put(-420.73,443.36){\circle*{0.000001}} \put(-420.02,444.06){\circle*{0.000001}} \put(-420.02,444.77){\circle*{0.000001}} \put(-419.31,445.48){\circle*{0.000001}} \put(-418.61,446.18){\circle*{0.000001}} \put(-418.61,446.89){\circle*{0.000001}} \put(-417.90,447.60){\circle*{0.000001}} \put(-417.90,448.31){\circle*{0.000001}} \put(-417.19,449.01){\circle*{0.000001}} \put(-416.49,449.72){\circle*{0.000001}} \put(-416.49,450.43){\circle*{0.000001}} \put(-415.78,451.13){\circle*{0.000001}} \put(-415.07,451.84){\circle*{0.000001}} \put(-415.07,452.55){\circle*{0.000001}} \put(-414.36,453.26){\circle*{0.000001}} \put(-413.66,453.96){\circle*{0.000001}} \put(-413.66,454.67){\circle*{0.000001}} \put(-412.95,455.38){\circle*{0.000001}} \put(-412.24,456.08){\circle*{0.000001}} \put(-412.24,456.79){\circle*{0.000001}} \put(-411.54,457.50){\circle*{0.000001}} \put(-410.83,458.21){\circle*{0.000001}} \put(-410.83,458.91){\circle*{0.000001}} \put(-410.12,459.62){\circle*{0.000001}} \put(-409.41,460.33){\circle*{0.000001}} \put(-409.41,461.03){\circle*{0.000001}} \put(-408.71,461.74){\circle*{0.000001}} \put(-429.92,432.75){\circle*{0.000001}} \put(-429.21,433.46){\circle*{0.000001}} \put(-429.21,434.16){\circle*{0.000001}} \put(-428.51,434.87){\circle*{0.000001}} \put(-427.80,435.58){\circle*{0.000001}} \put(-427.09,436.28){\circle*{0.000001}} \put(-427.09,436.99){\circle*{0.000001}} \put(-426.39,437.70){\circle*{0.000001}} \put(-425.68,438.41){\circle*{0.000001}} \put(-424.97,439.11){\circle*{0.000001}} \put(-424.97,439.82){\circle*{0.000001}} \put(-424.26,440.53){\circle*{0.000001}} \put(-423.56,441.23){\circle*{0.000001}} \put(-422.85,441.94){\circle*{0.000001}} \put(-422.85,442.65){\circle*{0.000001}} \put(-422.14,443.36){\circle*{0.000001}} \put(-421.44,444.06){\circle*{0.000001}} \put(-421.44,444.77){\circle*{0.000001}} \put(-420.73,445.48){\circle*{0.000001}} \put(-420.02,446.18){\circle*{0.000001}} \put(-419.31,446.89){\circle*{0.000001}} \put(-419.31,447.60){\circle*{0.000001}} \put(-418.61,448.31){\circle*{0.000001}} \put(-417.90,449.01){\circle*{0.000001}} \put(-417.19,449.72){\circle*{0.000001}} 
\put(-417.19,450.43){\circle*{0.000001}} \put(-416.49,451.13){\circle*{0.000001}} \put(-415.78,451.84){\circle*{0.000001}} \put(-415.78,452.55){\circle*{0.000001}} \put(-415.07,453.26){\circle*{0.000001}} \put(-414.36,453.96){\circle*{0.000001}} \put(-413.66,454.67){\circle*{0.000001}} \put(-413.66,455.38){\circle*{0.000001}} \put(-412.95,456.08){\circle*{0.000001}} \put(-412.24,456.79){\circle*{0.000001}} \put(-411.54,457.50){\circle*{0.000001}} \put(-411.54,458.21){\circle*{0.000001}} \put(-410.83,458.91){\circle*{0.000001}} \put(-410.12,459.62){\circle*{0.000001}} \put(-409.41,460.33){\circle*{0.000001}} \put(-409.41,461.03){\circle*{0.000001}} \put(-408.71,461.74){\circle*{0.000001}} \put(-429.92,432.75){\circle*{0.000001}} \put(-429.21,433.46){\circle*{0.000001}} \put(-429.21,434.16){\circle*{0.000001}} \put(-428.51,434.87){\circle*{0.000001}} \put(-428.51,435.58){\circle*{0.000001}} \put(-427.80,436.28){\circle*{0.000001}} \put(-427.80,436.99){\circle*{0.000001}} \put(-427.09,437.70){\circle*{0.000001}} \put(-427.09,438.41){\circle*{0.000001}} \put(-426.39,439.11){\circle*{0.000001}} \put(-426.39,439.82){\circle*{0.000001}} \put(-425.68,440.53){\circle*{0.000001}} \put(-425.68,441.23){\circle*{0.000001}} \put(-424.97,441.94){\circle*{0.000001}} \put(-424.97,442.65){\circle*{0.000001}} \put(-424.26,443.36){\circle*{0.000001}} \put(-424.26,444.06){\circle*{0.000001}} \put(-423.56,444.77){\circle*{0.000001}} \put(-423.56,445.48){\circle*{0.000001}} \put(-422.85,446.18){\circle*{0.000001}} \put(-422.85,446.89){\circle*{0.000001}} \put(-422.14,447.60){\circle*{0.000001}} \put(-421.44,448.31){\circle*{0.000001}} \put(-421.44,449.01){\circle*{0.000001}} \put(-420.73,449.72){\circle*{0.000001}} \put(-420.73,450.43){\circle*{0.000001}} \put(-420.02,451.13){\circle*{0.000001}} \put(-420.02,451.84){\circle*{0.000001}} \put(-419.31,452.55){\circle*{0.000001}} \put(-419.31,453.26){\circle*{0.000001}} \put(-418.61,453.96){\circle*{0.000001}} \put(-418.61,454.67){\circle*{0.000001}} \put(-417.90,455.38){\circle*{0.000001}} \put(-417.90,456.08){\circle*{0.000001}} \put(-417.19,456.79){\circle*{0.000001}} \put(-417.19,457.50){\circle*{0.000001}} \put(-416.49,458.21){\circle*{0.000001}} \put(-416.49,458.91){\circle*{0.000001}} \put(-415.78,459.62){\circle*{0.000001}} \put(-415.78,460.33){\circle*{0.000001}} \put(-415.07,461.03){\circle*{0.000001}} \put(-415.07,461.74){\circle*{0.000001}} \put(-414.36,462.45){\circle*{0.000001}} \put(-429.92,430.63){\circle*{0.000001}} \put(-429.92,431.34){\circle*{0.000001}} \put(-429.21,432.04){\circle*{0.000001}} \put(-429.21,432.75){\circle*{0.000001}} \put(-428.51,433.46){\circle*{0.000001}} \put(-428.51,434.16){\circle*{0.000001}} \put(-427.80,434.87){\circle*{0.000001}} \put(-427.80,435.58){\circle*{0.000001}} \put(-427.09,436.28){\circle*{0.000001}} \put(-427.09,436.99){\circle*{0.000001}} \put(-426.39,437.70){\circle*{0.000001}} \put(-426.39,438.41){\circle*{0.000001}} \put(-425.68,439.11){\circle*{0.000001}} \put(-425.68,439.82){\circle*{0.000001}} \put(-424.97,440.53){\circle*{0.000001}} \put(-424.97,441.23){\circle*{0.000001}} \put(-424.26,441.94){\circle*{0.000001}} \put(-424.26,442.65){\circle*{0.000001}} \put(-423.56,443.36){\circle*{0.000001}} \put(-423.56,444.06){\circle*{0.000001}} \put(-422.85,444.77){\circle*{0.000001}} \put(-422.85,445.48){\circle*{0.000001}} \put(-422.14,446.18){\circle*{0.000001}} \put(-422.14,446.89){\circle*{0.000001}} \put(-421.44,447.60){\circle*{0.000001}} \put(-421.44,448.31){\circle*{0.000001}} 
\put(-420.73,449.01){\circle*{0.000001}} \put(-420.73,449.72){\circle*{0.000001}} \put(-420.02,450.43){\circle*{0.000001}} \put(-420.02,451.13){\circle*{0.000001}} \put(-419.31,451.84){\circle*{0.000001}} \put(-419.31,452.55){\circle*{0.000001}} \put(-418.61,453.26){\circle*{0.000001}} \put(-418.61,453.96){\circle*{0.000001}} \put(-417.90,454.67){\circle*{0.000001}} \put(-417.90,455.38){\circle*{0.000001}} \put(-417.19,456.08){\circle*{0.000001}} \put(-417.19,456.79){\circle*{0.000001}} \put(-416.49,457.50){\circle*{0.000001}} \put(-416.49,458.21){\circle*{0.000001}} \put(-415.78,458.91){\circle*{0.000001}} \put(-415.78,459.62){\circle*{0.000001}} \put(-415.07,460.33){\circle*{0.000001}} \put(-415.07,461.03){\circle*{0.000001}} \put(-414.36,461.74){\circle*{0.000001}} \put(-414.36,462.45){\circle*{0.000001}} \put(-429.92,430.63){\circle*{0.000001}} \put(-429.92,431.34){\circle*{0.000001}} \put(-429.92,432.04){\circle*{0.000001}} \put(-429.92,432.75){\circle*{0.000001}} \put(-429.21,433.46){\circle*{0.000001}} \put(-429.21,434.16){\circle*{0.000001}} \put(-429.21,434.87){\circle*{0.000001}} \put(-429.21,435.58){\circle*{0.000001}} \put(-429.21,436.28){\circle*{0.000001}} \put(-429.21,436.99){\circle*{0.000001}} \put(-429.21,437.70){\circle*{0.000001}} \put(-428.51,438.41){\circle*{0.000001}} \put(-428.51,439.11){\circle*{0.000001}} \put(-428.51,439.82){\circle*{0.000001}} \put(-428.51,440.53){\circle*{0.000001}} \put(-428.51,441.23){\circle*{0.000001}} \put(-428.51,441.94){\circle*{0.000001}} \put(-428.51,442.65){\circle*{0.000001}} \put(-427.80,443.36){\circle*{0.000001}} \put(-427.80,444.06){\circle*{0.000001}} \put(-427.80,444.77){\circle*{0.000001}} \put(-427.80,445.48){\circle*{0.000001}} \put(-427.80,446.18){\circle*{0.000001}} \put(-427.80,446.89){\circle*{0.000001}} \put(-427.80,447.60){\circle*{0.000001}} \put(-427.80,448.31){\circle*{0.000001}} \put(-427.09,449.01){\circle*{0.000001}} \put(-427.09,449.72){\circle*{0.000001}} \put(-427.09,450.43){\circle*{0.000001}} \put(-427.09,451.13){\circle*{0.000001}} \put(-427.09,451.84){\circle*{0.000001}} \put(-427.09,452.55){\circle*{0.000001}} \put(-427.09,453.26){\circle*{0.000001}} \put(-426.39,453.96){\circle*{0.000001}} \put(-426.39,454.67){\circle*{0.000001}} \put(-426.39,455.38){\circle*{0.000001}} \put(-426.39,456.08){\circle*{0.000001}} \put(-426.39,456.79){\circle*{0.000001}} \put(-426.39,457.50){\circle*{0.000001}} \put(-426.39,458.21){\circle*{0.000001}} \put(-425.68,458.91){\circle*{0.000001}} \put(-425.68,459.62){\circle*{0.000001}} \put(-425.68,460.33){\circle*{0.000001}} \put(-425.68,461.03){\circle*{0.000001}} \put(-425.68,461.74){\circle*{0.000001}} \put(-425.68,462.45){\circle*{0.000001}} \put(-425.68,463.15){\circle*{0.000001}} \put(-424.97,463.86){\circle*{0.000001}} \put(-424.97,464.57){\circle*{0.000001}} \put(-424.97,465.28){\circle*{0.000001}} \put(-424.97,465.98){\circle*{0.000001}} \put(-427.80,428.51){\circle*{0.000001}} \put(-427.80,429.21){\circle*{0.000001}} \put(-427.80,429.92){\circle*{0.000001}} \put(-427.80,430.63){\circle*{0.000001}} \put(-427.80,431.34){\circle*{0.000001}} \put(-427.80,432.04){\circle*{0.000001}} \put(-427.80,432.75){\circle*{0.000001}} \put(-427.09,433.46){\circle*{0.000001}} \put(-427.09,434.16){\circle*{0.000001}} \put(-427.09,434.87){\circle*{0.000001}} \put(-427.09,435.58){\circle*{0.000001}} \put(-427.09,436.28){\circle*{0.000001}} \put(-427.09,436.99){\circle*{0.000001}} \put(-427.09,437.70){\circle*{0.000001}} \put(-427.09,438.41){\circle*{0.000001}} 
\put(-427.09,439.11){\circle*{0.000001}} \put(-427.09,439.82){\circle*{0.000001}} \put(-427.09,440.53){\circle*{0.000001}} \put(-427.09,441.23){\circle*{0.000001}} \put(-427.09,441.94){\circle*{0.000001}} \put(-426.39,442.65){\circle*{0.000001}} \put(-426.39,443.36){\circle*{0.000001}} \put(-426.39,444.06){\circle*{0.000001}} \put(-426.39,444.77){\circle*{0.000001}} \put(-426.39,445.48){\circle*{0.000001}} \put(-426.39,446.18){\circle*{0.000001}} \put(-426.39,446.89){\circle*{0.000001}} \put(-426.39,447.60){\circle*{0.000001}} \put(-426.39,448.31){\circle*{0.000001}} \put(-426.39,449.01){\circle*{0.000001}} \put(-426.39,449.72){\circle*{0.000001}} \put(-426.39,450.43){\circle*{0.000001}} \put(-426.39,451.13){\circle*{0.000001}} \put(-426.39,451.84){\circle*{0.000001}} \put(-425.68,452.55){\circle*{0.000001}} \put(-425.68,453.26){\circle*{0.000001}} \put(-425.68,453.96){\circle*{0.000001}} \put(-425.68,454.67){\circle*{0.000001}} \put(-425.68,455.38){\circle*{0.000001}} \put(-425.68,456.08){\circle*{0.000001}} \put(-425.68,456.79){\circle*{0.000001}} \put(-425.68,457.50){\circle*{0.000001}} \put(-425.68,458.21){\circle*{0.000001}} \put(-425.68,458.91){\circle*{0.000001}} \put(-425.68,459.62){\circle*{0.000001}} \put(-425.68,460.33){\circle*{0.000001}} \put(-425.68,461.03){\circle*{0.000001}} \put(-424.97,461.74){\circle*{0.000001}} \put(-424.97,462.45){\circle*{0.000001}} \put(-424.97,463.15){\circle*{0.000001}} \put(-424.97,463.86){\circle*{0.000001}} \put(-424.97,464.57){\circle*{0.000001}} \put(-424.97,465.28){\circle*{0.000001}} \put(-424.97,465.98){\circle*{0.000001}} \put(-427.80,428.51){\circle*{0.000001}} \put(-427.80,429.21){\circle*{0.000001}} \put(-428.51,429.92){\circle*{0.000001}} \put(-428.51,430.63){\circle*{0.000001}} \put(-428.51,431.34){\circle*{0.000001}} \put(-429.21,432.04){\circle*{0.000001}} \put(-429.21,432.75){\circle*{0.000001}} \put(-429.92,433.46){\circle*{0.000001}} \put(-429.92,434.16){\circle*{0.000001}} \put(-429.92,434.87){\circle*{0.000001}} \put(-430.63,435.58){\circle*{0.000001}} \put(-430.63,436.28){\circle*{0.000001}} \put(-430.63,436.99){\circle*{0.000001}} \put(-431.34,437.70){\circle*{0.000001}} \put(-431.34,438.41){\circle*{0.000001}} \put(-431.34,439.11){\circle*{0.000001}} \put(-432.04,439.82){\circle*{0.000001}} \put(-432.04,440.53){\circle*{0.000001}} \put(-432.75,441.23){\circle*{0.000001}} \put(-432.75,441.94){\circle*{0.000001}} \put(-432.75,442.65){\circle*{0.000001}} \put(-433.46,443.36){\circle*{0.000001}} \put(-433.46,444.06){\circle*{0.000001}} \put(-433.46,444.77){\circle*{0.000001}} \put(-434.16,445.48){\circle*{0.000001}} \put(-434.16,446.18){\circle*{0.000001}} \put(-434.16,446.89){\circle*{0.000001}} \put(-434.87,447.60){\circle*{0.000001}} \put(-434.87,448.31){\circle*{0.000001}} \put(-434.87,449.01){\circle*{0.000001}} \put(-435.58,449.72){\circle*{0.000001}} \put(-435.58,450.43){\circle*{0.000001}} \put(-436.28,451.13){\circle*{0.000001}} \put(-436.28,451.84){\circle*{0.000001}} \put(-436.28,452.55){\circle*{0.000001}} \put(-436.99,453.26){\circle*{0.000001}} \put(-436.99,453.96){\circle*{0.000001}} \put(-436.99,454.67){\circle*{0.000001}} \put(-437.70,455.38){\circle*{0.000001}} \put(-437.70,456.08){\circle*{0.000001}} \put(-437.70,456.79){\circle*{0.000001}} \put(-438.41,457.50){\circle*{0.000001}} \put(-438.41,458.21){\circle*{0.000001}} \put(-439.11,458.91){\circle*{0.000001}} \put(-439.11,459.62){\circle*{0.000001}} \put(-439.11,460.33){\circle*{0.000001}} \put(-439.82,461.03){\circle*{0.000001}} 
\put(-439.82,461.74){\circle*{0.000001}} \put(-439.82,461.74){\circle*{0.000001}} \put(-439.11,461.74){\circle*{0.000001}} \put(-438.41,462.45){\circle*{0.000001}} \put(-437.70,462.45){\circle*{0.000001}} \put(-436.99,462.45){\circle*{0.000001}} \put(-436.28,463.15){\circle*{0.000001}} \put(-435.58,463.15){\circle*{0.000001}} \put(-434.87,463.15){\circle*{0.000001}} \put(-434.16,463.86){\circle*{0.000001}} \put(-433.46,463.86){\circle*{0.000001}} \put(-432.75,463.86){\circle*{0.000001}} \put(-432.04,464.57){\circle*{0.000001}} \put(-431.34,464.57){\circle*{0.000001}} \put(-430.63,464.57){\circle*{0.000001}} \put(-429.92,465.28){\circle*{0.000001}} \put(-429.21,465.28){\circle*{0.000001}} \put(-428.51,465.28){\circle*{0.000001}} \put(-427.80,465.98){\circle*{0.000001}} \put(-427.09,465.98){\circle*{0.000001}} \put(-426.39,465.98){\circle*{0.000001}} \put(-425.68,466.69){\circle*{0.000001}} \put(-424.97,466.69){\circle*{0.000001}} \put(-424.26,466.69){\circle*{0.000001}} \put(-423.56,467.40){\circle*{0.000001}} \put(-422.85,467.40){\circle*{0.000001}} \put(-422.14,468.10){\circle*{0.000001}} \put(-421.44,468.10){\circle*{0.000001}} \put(-420.73,468.10){\circle*{0.000001}} \put(-420.02,468.81){\circle*{0.000001}} \put(-419.31,468.81){\circle*{0.000001}} \put(-418.61,468.81){\circle*{0.000001}} \put(-417.90,469.52){\circle*{0.000001}} \put(-417.19,469.52){\circle*{0.000001}} \put(-416.49,469.52){\circle*{0.000001}} \put(-415.78,470.23){\circle*{0.000001}} \put(-415.07,470.23){\circle*{0.000001}} \put(-414.36,470.23){\circle*{0.000001}} \put(-413.66,470.93){\circle*{0.000001}} \put(-412.95,470.93){\circle*{0.000001}} \put(-412.24,470.93){\circle*{0.000001}} \put(-411.54,471.64){\circle*{0.000001}} \put(-410.83,471.64){\circle*{0.000001}} \put(-410.12,471.64){\circle*{0.000001}} \put(-409.41,472.35){\circle*{0.000001}} \put(-408.71,472.35){\circle*{0.000001}} \put(-408.71,472.35){\circle*{0.000001}} \put(-408.71,473.05){\circle*{0.000001}} \put(-408.71,473.76){\circle*{0.000001}} \put(-408.71,474.47){\circle*{0.000001}} \put(-408.71,475.18){\circle*{0.000001}} \put(-408.71,475.88){\circle*{0.000001}} \put(-408.71,476.59){\circle*{0.000001}} \put(-408.71,477.30){\circle*{0.000001}} \put(-408.71,478.00){\circle*{0.000001}} \put(-408.71,478.71){\circle*{0.000001}} \put(-408.71,479.42){\circle*{0.000001}} \put(-408.71,480.13){\circle*{0.000001}} \put(-408.00,480.83){\circle*{0.000001}} \put(-408.00,481.54){\circle*{0.000001}} \put(-408.00,482.25){\circle*{0.000001}} \put(-408.00,482.95){\circle*{0.000001}} \put(-408.00,483.66){\circle*{0.000001}} \put(-408.00,484.37){\circle*{0.000001}} \put(-408.00,485.08){\circle*{0.000001}} \put(-408.00,485.78){\circle*{0.000001}} \put(-408.00,486.49){\circle*{0.000001}} \put(-408.00,487.20){\circle*{0.000001}} \put(-408.00,487.90){\circle*{0.000001}} \put(-408.00,488.61){\circle*{0.000001}} \put(-408.00,489.32){\circle*{0.000001}} \put(-408.00,490.02){\circle*{0.000001}} \put(-408.00,490.73){\circle*{0.000001}} \put(-408.00,491.44){\circle*{0.000001}} \put(-408.00,492.15){\circle*{0.000001}} \put(-408.00,492.85){\circle*{0.000001}} \put(-408.00,493.56){\circle*{0.000001}} \put(-408.00,494.27){\circle*{0.000001}} \put(-408.00,494.97){\circle*{0.000001}} \put(-408.00,495.68){\circle*{0.000001}} \put(-408.00,496.39){\circle*{0.000001}} \put(-408.00,497.10){\circle*{0.000001}} \put(-407.29,497.80){\circle*{0.000001}} \put(-407.29,498.51){\circle*{0.000001}} \put(-407.29,499.22){\circle*{0.000001}} \put(-407.29,499.92){\circle*{0.000001}} 
\put(-407.29,500.63){\circle*{0.000001}} \put(-407.29,501.34){\circle*{0.000001}} \put(-407.29,502.05){\circle*{0.000001}} \put(-407.29,502.75){\circle*{0.000001}} \put(-407.29,503.46){\circle*{0.000001}} \put(-407.29,504.17){\circle*{0.000001}} \put(-407.29,504.87){\circle*{0.000001}} \put(-407.29,505.58){\circle*{0.000001}} \put(-383.96,479.42){\circle*{0.000001}} \put(-384.67,480.13){\circle*{0.000001}} \put(-385.37,480.83){\circle*{0.000001}} \put(-386.08,481.54){\circle*{0.000001}} \put(-386.79,482.25){\circle*{0.000001}} \put(-386.79,482.95){\circle*{0.000001}} \put(-387.49,483.66){\circle*{0.000001}} \put(-388.20,484.37){\circle*{0.000001}} \put(-388.91,485.08){\circle*{0.000001}} \put(-389.62,485.78){\circle*{0.000001}} \put(-390.32,486.49){\circle*{0.000001}} \put(-391.03,487.20){\circle*{0.000001}} \put(-391.74,487.90){\circle*{0.000001}} \put(-392.44,488.61){\circle*{0.000001}} \put(-392.44,489.32){\circle*{0.000001}} \put(-393.15,490.02){\circle*{0.000001}} \put(-393.86,490.73){\circle*{0.000001}} \put(-394.57,491.44){\circle*{0.000001}} \put(-395.27,492.15){\circle*{0.000001}} \put(-395.98,492.85){\circle*{0.000001}} \put(-396.69,493.56){\circle*{0.000001}} \put(-397.39,494.27){\circle*{0.000001}} \put(-398.10,494.97){\circle*{0.000001}} \put(-398.81,495.68){\circle*{0.000001}} \put(-398.81,496.39){\circle*{0.000001}} \put(-399.52,497.10){\circle*{0.000001}} \put(-400.22,497.80){\circle*{0.000001}} \put(-400.93,498.51){\circle*{0.000001}} \put(-401.64,499.22){\circle*{0.000001}} \put(-402.34,499.92){\circle*{0.000001}} \put(-403.05,500.63){\circle*{0.000001}} \put(-403.76,501.34){\circle*{0.000001}} \put(-404.47,502.05){\circle*{0.000001}} \put(-404.47,502.75){\circle*{0.000001}} \put(-405.17,503.46){\circle*{0.000001}} \put(-405.88,504.17){\circle*{0.000001}} \put(-406.59,504.87){\circle*{0.000001}} \put(-407.29,505.58){\circle*{0.000001}} \put(-376.89,445.48){\circle*{0.000001}} \put(-376.89,446.18){\circle*{0.000001}} \put(-376.89,446.89){\circle*{0.000001}} \put(-377.60,447.60){\circle*{0.000001}} \put(-377.60,448.31){\circle*{0.000001}} \put(-377.60,449.01){\circle*{0.000001}} \put(-377.60,449.72){\circle*{0.000001}} \put(-377.60,450.43){\circle*{0.000001}} \put(-378.30,451.13){\circle*{0.000001}} \put(-378.30,451.84){\circle*{0.000001}} \put(-378.30,452.55){\circle*{0.000001}} \put(-378.30,453.26){\circle*{0.000001}} \put(-378.30,453.96){\circle*{0.000001}} \put(-379.01,454.67){\circle*{0.000001}} \put(-379.01,455.38){\circle*{0.000001}} \put(-379.01,456.08){\circle*{0.000001}} \put(-379.01,456.79){\circle*{0.000001}} \put(-379.72,457.50){\circle*{0.000001}} \put(-379.72,458.21){\circle*{0.000001}} \put(-379.72,458.91){\circle*{0.000001}} \put(-379.72,459.62){\circle*{0.000001}} \put(-379.72,460.33){\circle*{0.000001}} \put(-380.42,461.03){\circle*{0.000001}} \put(-380.42,461.74){\circle*{0.000001}} \put(-380.42,462.45){\circle*{0.000001}} \put(-380.42,463.15){\circle*{0.000001}} \put(-380.42,463.86){\circle*{0.000001}} \put(-381.13,464.57){\circle*{0.000001}} \put(-381.13,465.28){\circle*{0.000001}} \put(-381.13,465.98){\circle*{0.000001}} \put(-381.13,466.69){\circle*{0.000001}} \put(-381.13,467.40){\circle*{0.000001}} \put(-381.84,468.10){\circle*{0.000001}} \put(-381.84,468.81){\circle*{0.000001}} \put(-381.84,469.52){\circle*{0.000001}} \put(-381.84,470.23){\circle*{0.000001}} \put(-381.84,470.93){\circle*{0.000001}} \put(-382.54,471.64){\circle*{0.000001}} \put(-382.54,472.35){\circle*{0.000001}} \put(-382.54,473.05){\circle*{0.000001}} 
\put(-382.54,473.76){\circle*{0.000001}} \put(-383.25,474.47){\circle*{0.000001}} \put(-383.25,475.18){\circle*{0.000001}} \put(-383.25,475.88){\circle*{0.000001}} \put(-383.25,476.59){\circle*{0.000001}} \put(-383.25,477.30){\circle*{0.000001}} \put(-383.96,478.00){\circle*{0.000001}} \put(-383.96,478.71){\circle*{0.000001}} \put(-383.96,479.42){\circle*{0.000001}} \put(-408.71,450.43){\circle*{0.000001}} \put(-408.00,450.43){\circle*{0.000001}} \put(-407.29,450.43){\circle*{0.000001}} \put(-406.59,450.43){\circle*{0.000001}} \put(-405.88,449.72){\circle*{0.000001}} \put(-405.17,449.72){\circle*{0.000001}} \put(-404.47,449.72){\circle*{0.000001}} \put(-403.76,449.72){\circle*{0.000001}} \put(-403.05,449.72){\circle*{0.000001}} \put(-402.34,449.72){\circle*{0.000001}} \put(-401.64,449.01){\circle*{0.000001}} \put(-400.93,449.01){\circle*{0.000001}} \put(-400.22,449.01){\circle*{0.000001}} \put(-399.52,449.01){\circle*{0.000001}} \put(-398.81,449.01){\circle*{0.000001}} \put(-398.10,449.01){\circle*{0.000001}} \put(-397.39,449.01){\circle*{0.000001}} \put(-396.69,448.31){\circle*{0.000001}} \put(-395.98,448.31){\circle*{0.000001}} \put(-395.27,448.31){\circle*{0.000001}} \put(-394.57,448.31){\circle*{0.000001}} \put(-393.86,448.31){\circle*{0.000001}} \put(-393.15,448.31){\circle*{0.000001}} \put(-392.44,447.60){\circle*{0.000001}} \put(-391.74,447.60){\circle*{0.000001}} \put(-391.03,447.60){\circle*{0.000001}} \put(-390.32,447.60){\circle*{0.000001}} \put(-389.62,447.60){\circle*{0.000001}} \put(-388.91,447.60){\circle*{0.000001}} \put(-388.20,446.89){\circle*{0.000001}} \put(-387.49,446.89){\circle*{0.000001}} \put(-386.79,446.89){\circle*{0.000001}} \put(-386.08,446.89){\circle*{0.000001}} \put(-385.37,446.89){\circle*{0.000001}} \put(-384.67,446.89){\circle*{0.000001}} \put(-383.96,446.89){\circle*{0.000001}} \put(-383.25,446.18){\circle*{0.000001}} \put(-382.54,446.18){\circle*{0.000001}} \put(-381.84,446.18){\circle*{0.000001}} \put(-381.13,446.18){\circle*{0.000001}} \put(-380.42,446.18){\circle*{0.000001}} \put(-379.72,446.18){\circle*{0.000001}} \put(-379.01,445.48){\circle*{0.000001}} \put(-378.30,445.48){\circle*{0.000001}} \put(-377.60,445.48){\circle*{0.000001}} \put(-376.89,445.48){\circle*{0.000001}} \put(-443.36,454.67){\circle*{0.000001}} \put(-442.65,454.67){\circle*{0.000001}} \put(-441.94,454.67){\circle*{0.000001}} \put(-441.23,454.67){\circle*{0.000001}} \put(-440.53,454.67){\circle*{0.000001}} \put(-439.82,453.96){\circle*{0.000001}} \put(-439.11,453.96){\circle*{0.000001}} \put(-438.41,453.96){\circle*{0.000001}} \put(-437.70,453.96){\circle*{0.000001}} \put(-436.99,453.96){\circle*{0.000001}} \put(-436.28,453.96){\circle*{0.000001}} \put(-435.58,453.96){\circle*{0.000001}} \put(-434.87,453.96){\circle*{0.000001}} \put(-434.16,453.26){\circle*{0.000001}} \put(-433.46,453.26){\circle*{0.000001}} \put(-432.75,453.26){\circle*{0.000001}} \put(-432.04,453.26){\circle*{0.000001}} \put(-431.34,453.26){\circle*{0.000001}} \put(-430.63,453.26){\circle*{0.000001}} \put(-429.92,453.26){\circle*{0.000001}} \put(-429.21,453.26){\circle*{0.000001}} \put(-428.51,452.55){\circle*{0.000001}} \put(-427.80,452.55){\circle*{0.000001}} \put(-427.09,452.55){\circle*{0.000001}} \put(-426.39,452.55){\circle*{0.000001}} \put(-425.68,452.55){\circle*{0.000001}} \put(-424.97,452.55){\circle*{0.000001}} \put(-424.26,452.55){\circle*{0.000001}} \put(-423.56,452.55){\circle*{0.000001}} \put(-422.85,451.84){\circle*{0.000001}} \put(-422.14,451.84){\circle*{0.000001}} 
\put(-421.44,451.84){\circle*{0.000001}} \put(-420.73,451.84){\circle*{0.000001}} \put(-420.02,451.84){\circle*{0.000001}} \put(-419.31,451.84){\circle*{0.000001}} \put(-418.61,451.84){\circle*{0.000001}} \put(-417.90,451.84){\circle*{0.000001}} \put(-417.19,451.13){\circle*{0.000001}} \put(-416.49,451.13){\circle*{0.000001}} \put(-415.78,451.13){\circle*{0.000001}} \put(-415.07,451.13){\circle*{0.000001}} \put(-414.36,451.13){\circle*{0.000001}} \put(-413.66,451.13){\circle*{0.000001}} \put(-412.95,451.13){\circle*{0.000001}} \put(-412.24,451.13){\circle*{0.000001}} \put(-411.54,450.43){\circle*{0.000001}} \put(-410.83,450.43){\circle*{0.000001}} \put(-410.12,450.43){\circle*{0.000001}} \put(-409.41,450.43){\circle*{0.000001}} \put(-408.71,450.43){\circle*{0.000001}} \put(-471.64,435.58){\circle*{0.000001}} \put(-470.93,436.28){\circle*{0.000001}} \put(-470.23,436.28){\circle*{0.000001}} \put(-469.52,436.99){\circle*{0.000001}} \put(-468.81,437.70){\circle*{0.000001}} \put(-468.10,437.70){\circle*{0.000001}} \put(-467.40,438.41){\circle*{0.000001}} \put(-466.69,439.11){\circle*{0.000001}} \put(-465.98,439.11){\circle*{0.000001}} \put(-465.28,439.82){\circle*{0.000001}} \put(-464.57,440.53){\circle*{0.000001}} \put(-463.86,440.53){\circle*{0.000001}} \put(-463.15,441.23){\circle*{0.000001}} \put(-462.45,441.94){\circle*{0.000001}} \put(-461.74,441.94){\circle*{0.000001}} \put(-461.03,442.65){\circle*{0.000001}} \put(-460.33,443.36){\circle*{0.000001}} \put(-459.62,443.36){\circle*{0.000001}} \put(-458.91,444.06){\circle*{0.000001}} \put(-458.21,444.77){\circle*{0.000001}} \put(-457.50,444.77){\circle*{0.000001}} \put(-456.79,445.48){\circle*{0.000001}} \put(-456.08,446.18){\circle*{0.000001}} \put(-455.38,446.89){\circle*{0.000001}} \put(-454.67,446.89){\circle*{0.000001}} \put(-453.96,447.60){\circle*{0.000001}} \put(-453.26,448.31){\circle*{0.000001}} \put(-452.55,448.31){\circle*{0.000001}} \put(-451.84,449.01){\circle*{0.000001}} \put(-451.13,449.72){\circle*{0.000001}} \put(-450.43,449.72){\circle*{0.000001}} \put(-449.72,450.43){\circle*{0.000001}} \put(-449.01,451.13){\circle*{0.000001}} \put(-448.31,451.13){\circle*{0.000001}} \put(-447.60,451.84){\circle*{0.000001}} \put(-446.89,452.55){\circle*{0.000001}} \put(-446.18,452.55){\circle*{0.000001}} \put(-445.48,453.26){\circle*{0.000001}} \put(-444.77,453.96){\circle*{0.000001}} \put(-444.06,453.96){\circle*{0.000001}} \put(-443.36,454.67){\circle*{0.000001}} \put(-471.64,435.58){\circle*{0.000001}} \put(-470.93,435.58){\circle*{0.000001}} \put(-470.23,435.58){\circle*{0.000001}} \put(-469.52,434.87){\circle*{0.000001}} \put(-468.81,434.87){\circle*{0.000001}} \put(-468.10,434.87){\circle*{0.000001}} \put(-467.40,434.87){\circle*{0.000001}} \put(-466.69,434.87){\circle*{0.000001}} \put(-465.98,434.87){\circle*{0.000001}} \put(-465.28,434.16){\circle*{0.000001}} \put(-464.57,434.16){\circle*{0.000001}} \put(-463.86,434.16){\circle*{0.000001}} \put(-463.15,434.16){\circle*{0.000001}} \put(-462.45,434.16){\circle*{0.000001}} \put(-461.74,434.16){\circle*{0.000001}} \put(-461.03,433.46){\circle*{0.000001}} \put(-460.33,433.46){\circle*{0.000001}} \put(-459.62,433.46){\circle*{0.000001}} \put(-458.91,433.46){\circle*{0.000001}} \put(-458.21,433.46){\circle*{0.000001}} \put(-457.50,432.75){\circle*{0.000001}} \put(-456.79,432.75){\circle*{0.000001}} \put(-456.08,432.75){\circle*{0.000001}} \put(-455.38,432.75){\circle*{0.000001}} \put(-454.67,432.75){\circle*{0.000001}} \put(-453.96,432.75){\circle*{0.000001}} 
\put(-453.26,432.04){\circle*{0.000001}} \put(-452.55,432.04){\circle*{0.000001}} \put(-451.84,432.04){\circle*{0.000001}} \put(-451.13,432.04){\circle*{0.000001}} \put(-450.43,432.04){\circle*{0.000001}} \put(-449.72,431.34){\circle*{0.000001}} \put(-449.01,431.34){\circle*{0.000001}} \put(-448.31,431.34){\circle*{0.000001}} \put(-447.60,431.34){\circle*{0.000001}} \put(-446.89,431.34){\circle*{0.000001}} \put(-446.18,431.34){\circle*{0.000001}} \put(-445.48,430.63){\circle*{0.000001}} \put(-444.77,430.63){\circle*{0.000001}} \put(-444.06,430.63){\circle*{0.000001}} \put(-443.36,430.63){\circle*{0.000001}} \put(-442.65,430.63){\circle*{0.000001}} \put(-441.94,430.63){\circle*{0.000001}} \put(-441.23,429.92){\circle*{0.000001}} \put(-440.53,429.92){\circle*{0.000001}} \put(-439.82,429.92){\circle*{0.000001}} \put(-439.82,429.92){\circle*{0.000001}} \put(-439.82,430.63){\circle*{0.000001}} \put(-439.11,431.34){\circle*{0.000001}} \put(-439.11,432.04){\circle*{0.000001}} \put(-438.41,432.75){\circle*{0.000001}} \put(-438.41,433.46){\circle*{0.000001}} \put(-437.70,434.16){\circle*{0.000001}} \put(-437.70,434.87){\circle*{0.000001}} \put(-436.99,435.58){\circle*{0.000001}} \put(-436.99,436.28){\circle*{0.000001}} \put(-436.28,436.99){\circle*{0.000001}} \put(-436.28,437.70){\circle*{0.000001}} \put(-435.58,438.41){\circle*{0.000001}} \put(-435.58,439.11){\circle*{0.000001}} \put(-434.87,439.82){\circle*{0.000001}} \put(-434.87,440.53){\circle*{0.000001}} \put(-434.16,441.23){\circle*{0.000001}} \put(-434.16,441.94){\circle*{0.000001}} \put(-433.46,442.65){\circle*{0.000001}} \put(-433.46,443.36){\circle*{0.000001}} \put(-432.75,444.06){\circle*{0.000001}} \put(-432.75,444.77){\circle*{0.000001}} \put(-432.04,445.48){\circle*{0.000001}} \put(-432.04,446.18){\circle*{0.000001}} \put(-431.34,446.89){\circle*{0.000001}} \put(-431.34,447.60){\circle*{0.000001}} \put(-430.63,448.31){\circle*{0.000001}} \put(-430.63,449.01){\circle*{0.000001}} \put(-429.92,449.72){\circle*{0.000001}} \put(-429.92,450.43){\circle*{0.000001}} \put(-429.21,451.13){\circle*{0.000001}} \put(-429.21,451.84){\circle*{0.000001}} \put(-428.51,452.55){\circle*{0.000001}} \put(-428.51,453.26){\circle*{0.000001}} \put(-427.80,453.96){\circle*{0.000001}} \put(-427.80,454.67){\circle*{0.000001}} \put(-427.09,455.38){\circle*{0.000001}} \put(-427.09,456.08){\circle*{0.000001}} \put(-426.39,456.79){\circle*{0.000001}} \put(-426.39,457.50){\circle*{0.000001}} \put(-425.68,458.21){\circle*{0.000001}} \put(-425.68,458.91){\circle*{0.000001}} \put(-424.97,459.62){\circle*{0.000001}} \put(-446.89,431.34){\circle*{0.000001}} \put(-446.18,432.04){\circle*{0.000001}} \put(-445.48,432.75){\circle*{0.000001}} \put(-445.48,433.46){\circle*{0.000001}} \put(-444.77,434.16){\circle*{0.000001}} \put(-444.06,434.87){\circle*{0.000001}} \put(-443.36,435.58){\circle*{0.000001}} \put(-443.36,436.28){\circle*{0.000001}} \put(-442.65,436.99){\circle*{0.000001}} \put(-441.94,437.70){\circle*{0.000001}} \put(-441.23,438.41){\circle*{0.000001}} \put(-440.53,439.11){\circle*{0.000001}} \put(-440.53,439.82){\circle*{0.000001}} \put(-439.82,440.53){\circle*{0.000001}} \put(-439.11,441.23){\circle*{0.000001}} \put(-438.41,441.94){\circle*{0.000001}} \put(-438.41,442.65){\circle*{0.000001}} \put(-437.70,443.36){\circle*{0.000001}} \put(-436.99,444.06){\circle*{0.000001}} \put(-436.28,444.77){\circle*{0.000001}} \put(-436.28,445.48){\circle*{0.000001}} \put(-435.58,446.18){\circle*{0.000001}} \put(-434.87,446.89){\circle*{0.000001}} 
\put(-434.16,447.60){\circle*{0.000001}} \put(-433.46,448.31){\circle*{0.000001}} \put(-433.46,449.01){\circle*{0.000001}} \put(-432.75,449.72){\circle*{0.000001}} \put(-432.04,450.43){\circle*{0.000001}} \put(-431.34,451.13){\circle*{0.000001}} \put(-431.34,451.84){\circle*{0.000001}} \put(-430.63,452.55){\circle*{0.000001}} \put(-429.92,453.26){\circle*{0.000001}} \put(-429.21,453.96){\circle*{0.000001}} \put(-428.51,454.67){\circle*{0.000001}} \put(-428.51,455.38){\circle*{0.000001}} \put(-427.80,456.08){\circle*{0.000001}} \put(-427.09,456.79){\circle*{0.000001}} \put(-426.39,457.50){\circle*{0.000001}} \put(-426.39,458.21){\circle*{0.000001}} \put(-425.68,458.91){\circle*{0.000001}} \put(-424.97,459.62){\circle*{0.000001}} \put(-446.89,431.34){\circle*{0.000001}} \put(-446.18,432.04){\circle*{0.000001}} \put(-445.48,432.75){\circle*{0.000001}} \put(-444.77,433.46){\circle*{0.000001}} \put(-444.06,434.16){\circle*{0.000001}} \put(-443.36,434.87){\circle*{0.000001}} \put(-442.65,435.58){\circle*{0.000001}} \put(-442.65,436.28){\circle*{0.000001}} \put(-441.94,436.99){\circle*{0.000001}} \put(-441.23,437.70){\circle*{0.000001}} \put(-440.53,438.41){\circle*{0.000001}} \put(-439.82,439.11){\circle*{0.000001}} \put(-439.11,439.82){\circle*{0.000001}} \put(-438.41,440.53){\circle*{0.000001}} \put(-437.70,441.23){\circle*{0.000001}} \put(-436.99,441.94){\circle*{0.000001}} \put(-436.28,442.65){\circle*{0.000001}} \put(-435.58,443.36){\circle*{0.000001}} \put(-434.87,444.06){\circle*{0.000001}} \put(-434.87,444.77){\circle*{0.000001}} \put(-434.16,445.48){\circle*{0.000001}} \put(-433.46,446.18){\circle*{0.000001}} \put(-432.75,446.89){\circle*{0.000001}} \put(-432.04,447.60){\circle*{0.000001}} \put(-431.34,448.31){\circle*{0.000001}} \put(-430.63,449.01){\circle*{0.000001}} \put(-429.92,449.72){\circle*{0.000001}} \put(-429.21,450.43){\circle*{0.000001}} \put(-428.51,451.13){\circle*{0.000001}} \put(-427.80,451.84){\circle*{0.000001}} \put(-427.09,452.55){\circle*{0.000001}} \put(-427.09,453.26){\circle*{0.000001}} \put(-426.39,453.96){\circle*{0.000001}} \put(-425.68,454.67){\circle*{0.000001}} \put(-424.97,455.38){\circle*{0.000001}} \put(-424.26,456.08){\circle*{0.000001}} \put(-423.56,456.79){\circle*{0.000001}} \put(-422.85,457.50){\circle*{0.000001}} \put(-449.72,444.77){\circle*{0.000001}} \put(-449.01,444.77){\circle*{0.000001}} \put(-448.31,445.48){\circle*{0.000001}} \put(-447.60,445.48){\circle*{0.000001}} \put(-446.89,446.18){\circle*{0.000001}} \put(-446.18,446.18){\circle*{0.000001}} \put(-445.48,446.89){\circle*{0.000001}} \put(-444.77,446.89){\circle*{0.000001}} \put(-444.06,447.60){\circle*{0.000001}} \put(-443.36,447.60){\circle*{0.000001}} \put(-442.65,448.31){\circle*{0.000001}} \put(-441.94,448.31){\circle*{0.000001}} \put(-441.23,449.01){\circle*{0.000001}} \put(-440.53,449.01){\circle*{0.000001}} \put(-439.82,449.72){\circle*{0.000001}} \put(-439.11,449.72){\circle*{0.000001}} \put(-438.41,450.43){\circle*{0.000001}} \put(-437.70,450.43){\circle*{0.000001}} \put(-436.99,451.13){\circle*{0.000001}} \put(-436.28,451.13){\circle*{0.000001}} \put(-435.58,451.13){\circle*{0.000001}} \put(-434.87,451.84){\circle*{0.000001}} \put(-434.16,451.84){\circle*{0.000001}} \put(-433.46,452.55){\circle*{0.000001}} \put(-432.75,452.55){\circle*{0.000001}} \put(-432.04,453.26){\circle*{0.000001}} \put(-431.34,453.26){\circle*{0.000001}} \put(-430.63,453.96){\circle*{0.000001}} \put(-429.92,453.96){\circle*{0.000001}} \put(-429.21,454.67){\circle*{0.000001}} 
\put(-428.51,454.67){\circle*{0.000001}} \put(-427.80,455.38){\circle*{0.000001}} \put(-427.09,455.38){\circle*{0.000001}} \put(-426.39,456.08){\circle*{0.000001}} \put(-425.68,456.08){\circle*{0.000001}} \put(-424.97,456.79){\circle*{0.000001}} \put(-424.26,456.79){\circle*{0.000001}} \put(-423.56,457.50){\circle*{0.000001}} \put(-422.85,457.50){\circle*{0.000001}} \put(-425.68,419.31){\circle*{0.000001}} \put(-426.39,420.02){\circle*{0.000001}} \put(-427.09,420.73){\circle*{0.000001}} \put(-427.80,421.44){\circle*{0.000001}} \put(-428.51,422.14){\circle*{0.000001}} \put(-429.21,422.85){\circle*{0.000001}} \put(-429.92,423.56){\circle*{0.000001}} \put(-430.63,424.26){\circle*{0.000001}} \put(-431.34,424.97){\circle*{0.000001}} \put(-431.34,425.68){\circle*{0.000001}} \put(-432.04,426.39){\circle*{0.000001}} \put(-432.75,427.09){\circle*{0.000001}} \put(-433.46,427.80){\circle*{0.000001}} \put(-434.16,428.51){\circle*{0.000001}} \put(-434.87,429.21){\circle*{0.000001}} \put(-435.58,429.92){\circle*{0.000001}} \put(-436.28,430.63){\circle*{0.000001}} \put(-436.99,431.34){\circle*{0.000001}} \put(-437.70,432.04){\circle*{0.000001}} \put(-438.41,432.75){\circle*{0.000001}} \put(-439.11,433.46){\circle*{0.000001}} \put(-439.82,434.16){\circle*{0.000001}} \put(-440.53,434.87){\circle*{0.000001}} \put(-441.23,435.58){\circle*{0.000001}} \put(-441.94,436.28){\circle*{0.000001}} \put(-442.65,436.99){\circle*{0.000001}} \put(-443.36,437.70){\circle*{0.000001}} \put(-443.36,438.41){\circle*{0.000001}} \put(-444.06,439.11){\circle*{0.000001}} \put(-444.77,439.82){\circle*{0.000001}} \put(-445.48,440.53){\circle*{0.000001}} \put(-446.18,441.23){\circle*{0.000001}} \put(-446.89,441.94){\circle*{0.000001}} \put(-447.60,442.65){\circle*{0.000001}} \put(-448.31,443.36){\circle*{0.000001}} \put(-449.01,444.06){\circle*{0.000001}} \put(-449.72,444.77){\circle*{0.000001}} \put(-403.76,393.86){\circle*{0.000001}} \put(-404.47,394.57){\circle*{0.000001}} \put(-405.17,395.27){\circle*{0.000001}} \put(-405.88,395.98){\circle*{0.000001}} \put(-405.88,396.69){\circle*{0.000001}} \put(-406.59,397.39){\circle*{0.000001}} \put(-407.29,398.10){\circle*{0.000001}} \put(-408.00,398.81){\circle*{0.000001}} \put(-408.71,399.52){\circle*{0.000001}} \put(-409.41,400.22){\circle*{0.000001}} \put(-410.12,400.93){\circle*{0.000001}} \put(-410.12,401.64){\circle*{0.000001}} \put(-410.83,402.34){\circle*{0.000001}} \put(-411.54,403.05){\circle*{0.000001}} \put(-412.24,403.76){\circle*{0.000001}} \put(-412.95,404.47){\circle*{0.000001}} \put(-413.66,405.17){\circle*{0.000001}} \put(-414.36,405.88){\circle*{0.000001}} \put(-414.36,406.59){\circle*{0.000001}} \put(-415.07,407.29){\circle*{0.000001}} \put(-415.78,408.00){\circle*{0.000001}} \put(-416.49,408.71){\circle*{0.000001}} \put(-417.19,409.41){\circle*{0.000001}} \put(-417.90,410.12){\circle*{0.000001}} \put(-418.61,410.83){\circle*{0.000001}} \put(-419.31,411.54){\circle*{0.000001}} \put(-419.31,412.24){\circle*{0.000001}} \put(-420.02,412.95){\circle*{0.000001}} \put(-420.73,413.66){\circle*{0.000001}} \put(-421.44,414.36){\circle*{0.000001}} \put(-422.14,415.07){\circle*{0.000001}} \put(-422.85,415.78){\circle*{0.000001}} \put(-423.56,416.49){\circle*{0.000001}} \put(-423.56,417.19){\circle*{0.000001}} \put(-424.26,417.90){\circle*{0.000001}} \put(-424.97,418.61){\circle*{0.000001}} \put(-425.68,419.31){\circle*{0.000001}} \put(-403.76,359.92){\circle*{0.000001}} \put(-403.76,360.62){\circle*{0.000001}} \put(-403.76,361.33){\circle*{0.000001}} 
\put(-403.76,362.04){\circle*{0.000001}} \put(-403.76,362.75){\circle*{0.000001}} \put(-403.76,363.45){\circle*{0.000001}} \put(-403.76,364.16){\circle*{0.000001}} \put(-403.76,364.87){\circle*{0.000001}} \put(-403.76,365.57){\circle*{0.000001}} \put(-403.76,366.28){\circle*{0.000001}} \put(-403.76,366.99){\circle*{0.000001}} \put(-403.76,367.70){\circle*{0.000001}} \put(-403.76,368.40){\circle*{0.000001}} \put(-403.76,369.11){\circle*{0.000001}} \put(-403.76,369.82){\circle*{0.000001}} \put(-403.76,370.52){\circle*{0.000001}} \put(-403.76,371.23){\circle*{0.000001}} \put(-403.76,371.94){\circle*{0.000001}} \put(-403.76,372.65){\circle*{0.000001}} \put(-403.76,373.35){\circle*{0.000001}} \put(-403.76,374.06){\circle*{0.000001}} \put(-403.76,374.77){\circle*{0.000001}} \put(-403.76,375.47){\circle*{0.000001}} \put(-403.76,376.18){\circle*{0.000001}} \put(-403.76,376.89){\circle*{0.000001}} \put(-403.76,377.60){\circle*{0.000001}} \put(-403.76,378.30){\circle*{0.000001}} \put(-403.76,379.01){\circle*{0.000001}} \put(-403.76,379.72){\circle*{0.000001}} \put(-403.76,380.42){\circle*{0.000001}} \put(-403.76,381.13){\circle*{0.000001}} \put(-403.76,381.84){\circle*{0.000001}} \put(-403.76,382.54){\circle*{0.000001}} \put(-403.76,383.25){\circle*{0.000001}} \put(-403.76,383.96){\circle*{0.000001}} \put(-403.76,384.67){\circle*{0.000001}} \put(-403.76,385.37){\circle*{0.000001}} \put(-403.76,386.08){\circle*{0.000001}} \put(-403.76,386.79){\circle*{0.000001}} \put(-403.76,387.49){\circle*{0.000001}} \put(-403.76,388.20){\circle*{0.000001}} \put(-403.76,388.91){\circle*{0.000001}} \put(-403.76,389.62){\circle*{0.000001}} \put(-403.76,390.32){\circle*{0.000001}} \put(-403.76,391.03){\circle*{0.000001}} \put(-403.76,391.74){\circle*{0.000001}} \put(-403.76,392.44){\circle*{0.000001}} \put(-403.76,393.15){\circle*{0.000001}} \put(-403.76,393.86){\circle*{0.000001}} \put(-439.82,363.45){\circle*{0.000001}} \put(-439.11,363.45){\circle*{0.000001}} \put(-438.41,363.45){\circle*{0.000001}} \put(-437.70,363.45){\circle*{0.000001}} \put(-436.99,363.45){\circle*{0.000001}} \put(-436.28,363.45){\circle*{0.000001}} \put(-435.58,362.75){\circle*{0.000001}} \put(-434.87,362.75){\circle*{0.000001}} \put(-434.16,362.75){\circle*{0.000001}} \put(-433.46,362.75){\circle*{0.000001}} \put(-432.75,362.75){\circle*{0.000001}} \put(-432.04,362.75){\circle*{0.000001}} \put(-431.34,362.75){\circle*{0.000001}} \put(-430.63,362.75){\circle*{0.000001}} \put(-429.92,362.75){\circle*{0.000001}} \put(-429.21,362.75){\circle*{0.000001}} \put(-428.51,362.04){\circle*{0.000001}} \put(-427.80,362.04){\circle*{0.000001}} \put(-427.09,362.04){\circle*{0.000001}} \put(-426.39,362.04){\circle*{0.000001}} \put(-425.68,362.04){\circle*{0.000001}} \put(-424.97,362.04){\circle*{0.000001}} \put(-424.26,362.04){\circle*{0.000001}} \put(-423.56,362.04){\circle*{0.000001}} \put(-422.85,362.04){\circle*{0.000001}} \put(-422.14,362.04){\circle*{0.000001}} \put(-421.44,361.33){\circle*{0.000001}} \put(-420.73,361.33){\circle*{0.000001}} \put(-420.02,361.33){\circle*{0.000001}} \put(-419.31,361.33){\circle*{0.000001}} \put(-418.61,361.33){\circle*{0.000001}} \put(-417.90,361.33){\circle*{0.000001}} \put(-417.19,361.33){\circle*{0.000001}} \put(-416.49,361.33){\circle*{0.000001}} \put(-415.78,361.33){\circle*{0.000001}} \put(-415.07,361.33){\circle*{0.000001}} \put(-414.36,360.62){\circle*{0.000001}} \put(-413.66,360.62){\circle*{0.000001}} \put(-412.95,360.62){\circle*{0.000001}} \put(-412.24,360.62){\circle*{0.000001}} 
\put(-411.54,360.62){\circle*{0.000001}} \put(-410.83,360.62){\circle*{0.000001}} \put(-410.12,360.62){\circle*{0.000001}} \put(-409.41,360.62){\circle*{0.000001}} \put(-408.71,360.62){\circle*{0.000001}} \put(-408.00,360.62){\circle*{0.000001}} \put(-407.29,359.92){\circle*{0.000001}} \put(-406.59,359.92){\circle*{0.000001}} \put(-405.88,359.92){\circle*{0.000001}} \put(-405.17,359.92){\circle*{0.000001}} \put(-404.47,359.92){\circle*{0.000001}} \put(-403.76,359.92){\circle*{0.000001}} \put(-464.57,344.36){\circle*{0.000001}} \put(-463.86,345.07){\circle*{0.000001}} \put(-463.15,345.78){\circle*{0.000001}} \put(-462.45,345.78){\circle*{0.000001}} \put(-461.74,346.48){\circle*{0.000001}} \put(-461.03,347.19){\circle*{0.000001}} \put(-460.33,347.90){\circle*{0.000001}} \put(-459.62,347.90){\circle*{0.000001}} \put(-458.91,348.60){\circle*{0.000001}} \put(-458.21,349.31){\circle*{0.000001}} \put(-457.50,350.02){\circle*{0.000001}} \put(-456.79,350.02){\circle*{0.000001}} \put(-456.08,350.72){\circle*{0.000001}} \put(-455.38,351.43){\circle*{0.000001}} \put(-454.67,352.14){\circle*{0.000001}} \put(-453.96,352.85){\circle*{0.000001}} \put(-453.26,352.85){\circle*{0.000001}} \put(-452.55,353.55){\circle*{0.000001}} \put(-451.84,354.26){\circle*{0.000001}} \put(-451.13,354.97){\circle*{0.000001}} \put(-450.43,354.97){\circle*{0.000001}} \put(-449.72,355.67){\circle*{0.000001}} \put(-449.01,356.38){\circle*{0.000001}} \put(-448.31,357.09){\circle*{0.000001}} \put(-447.60,357.80){\circle*{0.000001}} \put(-446.89,357.80){\circle*{0.000001}} \put(-446.18,358.50){\circle*{0.000001}} \put(-445.48,359.21){\circle*{0.000001}} \put(-444.77,359.92){\circle*{0.000001}} \put(-444.06,359.92){\circle*{0.000001}} \put(-443.36,360.62){\circle*{0.000001}} \put(-442.65,361.33){\circle*{0.000001}} \put(-441.94,362.04){\circle*{0.000001}} \put(-441.23,362.04){\circle*{0.000001}} \put(-440.53,362.75){\circle*{0.000001}} \put(-439.82,363.45){\circle*{0.000001}} \put(-464.57,344.36){\circle*{0.000001}} \put(-463.86,343.65){\circle*{0.000001}} \put(-463.15,343.65){\circle*{0.000001}} \put(-462.45,342.95){\circle*{0.000001}} \put(-461.74,342.24){\circle*{0.000001}} \put(-461.03,342.24){\circle*{0.000001}} \put(-460.33,341.53){\circle*{0.000001}} \put(-459.62,341.53){\circle*{0.000001}} \put(-458.91,340.83){\circle*{0.000001}} \put(-458.21,340.12){\circle*{0.000001}} \put(-457.50,340.12){\circle*{0.000001}} \put(-456.79,339.41){\circle*{0.000001}} \put(-456.08,338.70){\circle*{0.000001}} \put(-455.38,338.70){\circle*{0.000001}} \put(-454.67,338.00){\circle*{0.000001}} \put(-453.96,337.29){\circle*{0.000001}} \put(-453.26,337.29){\circle*{0.000001}} \put(-452.55,336.58){\circle*{0.000001}} \put(-451.84,336.58){\circle*{0.000001}} \put(-451.13,335.88){\circle*{0.000001}} \put(-450.43,335.17){\circle*{0.000001}} \put(-449.72,335.17){\circle*{0.000001}} \put(-449.01,334.46){\circle*{0.000001}} \put(-448.31,333.75){\circle*{0.000001}} \put(-447.60,333.75){\circle*{0.000001}} \put(-446.89,333.05){\circle*{0.000001}} \put(-446.18,333.05){\circle*{0.000001}} \put(-445.48,332.34){\circle*{0.000001}} \put(-444.77,331.63){\circle*{0.000001}} \put(-444.06,331.63){\circle*{0.000001}} \put(-443.36,330.93){\circle*{0.000001}} \put(-442.65,330.22){\circle*{0.000001}} \put(-441.94,330.22){\circle*{0.000001}} \put(-441.23,329.51){\circle*{0.000001}} \put(-440.53,328.80){\circle*{0.000001}} \put(-439.82,328.80){\circle*{0.000001}} \put(-439.11,328.10){\circle*{0.000001}} \put(-438.41,328.10){\circle*{0.000001}} 
\put(-412.24,98.29){\circle*{0.000001}} \put(-411.54,97.58){\circle*{0.000001}} \put(-410.83,97.58){\circle*{0.000001}} \put(-410.12,96.87){\circle*{0.000001}} \put(-409.41,96.17){\circle*{0.000001}} \put(-408.71,96.17){\circle*{0.000001}} \put(-408.00,95.46){\circle*{0.000001}} \put(-407.29,95.46){\circle*{0.000001}} \put(-406.59,94.75){\circle*{0.000001}} \put(-405.88,94.05){\circle*{0.000001}} \put(-405.17,94.05){\circle*{0.000001}} \put(-404.47,93.34){\circle*{0.000001}} \put(-403.76,92.63){\circle*{0.000001}} \put(-403.05,92.63){\circle*{0.000001}} \put(-402.34,91.92){\circle*{0.000001}} \put(-401.64,91.92){\circle*{0.000001}} \put(-400.93,91.22){\circle*{0.000001}} \put(-400.22,90.51){\circle*{0.000001}} \put(-399.52,90.51){\circle*{0.000001}} \put(-398.81,89.80){\circle*{0.000001}} \put(-398.10,89.10){\circle*{0.000001}} \put(-397.39,89.10){\circle*{0.000001}} \put(-396.69,88.39){\circle*{0.000001}} \put(-395.98,88.39){\circle*{0.000001}} \put(-395.27,87.68){\circle*{0.000001}} \put(-394.57,86.97){\circle*{0.000001}} \put(-393.86,86.97){\circle*{0.000001}} \put(-393.15,86.27){\circle*{0.000001}} \put(-392.44,85.56){\circle*{0.000001}} \put(-391.74,85.56){\circle*{0.000001}} \put(-391.03,84.85){\circle*{0.000001}} \put(-390.32,84.85){\circle*{0.000001}} \put(-389.62,84.15){\circle*{0.000001}} \put(-388.91,83.44){\circle*{0.000001}} \put(-388.20,83.44){\circle*{0.000001}} \put(-387.49,82.73){\circle*{0.000001}} \put(-442.65,116.67){\circle*{0.000001}} \put(-441.94,115.97){\circle*{0.000001}} \put(-441.23,115.97){\circle*{0.000001}} \put(-440.53,115.26){\circle*{0.000001}} \put(-439.82,115.26){\circle*{0.000001}} \put(-439.11,114.55){\circle*{0.000001}} \put(-438.41,113.84){\circle*{0.000001}} \put(-437.70,113.84){\circle*{0.000001}} \put(-436.99,113.14){\circle*{0.000001}} \put(-436.28,113.14){\circle*{0.000001}} \put(-435.58,112.43){\circle*{0.000001}} \put(-434.87,111.72){\circle*{0.000001}} \put(-434.16,111.72){\circle*{0.000001}} \put(-433.46,111.02){\circle*{0.000001}} \put(-432.75,111.02){\circle*{0.000001}} \put(-432.04,110.31){\circle*{0.000001}} \put(-431.34,109.60){\circle*{0.000001}} \put(-430.63,109.60){\circle*{0.000001}} \put(-429.92,108.89){\circle*{0.000001}} \put(-429.21,108.89){\circle*{0.000001}} \put(-428.51,108.19){\circle*{0.000001}} \put(-427.80,107.48){\circle*{0.000001}} \put(-427.09,107.48){\circle*{0.000001}} \put(-426.39,106.77){\circle*{0.000001}} \put(-425.68,106.07){\circle*{0.000001}} \put(-424.97,106.07){\circle*{0.000001}} \put(-424.26,105.36){\circle*{0.000001}} \put(-423.56,105.36){\circle*{0.000001}} \put(-422.85,104.65){\circle*{0.000001}} \put(-422.14,103.94){\circle*{0.000001}} \put(-421.44,103.94){\circle*{0.000001}} \put(-420.73,103.24){\circle*{0.000001}} \put(-420.02,103.24){\circle*{0.000001}} \put(-419.31,102.53){\circle*{0.000001}} \put(-418.61,101.82){\circle*{0.000001}} \put(-417.90,101.82){\circle*{0.000001}} \put(-417.19,101.12){\circle*{0.000001}} \put(-416.49,101.12){\circle*{0.000001}} \put(-415.78,100.41){\circle*{0.000001}} \put(-467.40,137.89){\circle*{0.000001}} \put(-466.69,137.18){\circle*{0.000001}} \put(-465.98,136.47){\circle*{0.000001}} \put(-465.28,135.76){\circle*{0.000001}} \put(-464.57,135.76){\circle*{0.000001}} \put(-463.86,135.06){\circle*{0.000001}} \put(-463.15,134.35){\circle*{0.000001}} \put(-462.45,133.64){\circle*{0.000001}} \put(-461.74,132.94){\circle*{0.000001}} \put(-461.03,132.23){\circle*{0.000001}} \put(-460.33,131.52){\circle*{0.000001}} \put(-459.62,131.52){\circle*{0.000001}} 
\put(-458.91,130.81){\circle*{0.000001}} \put(-458.21,130.11){\circle*{0.000001}} \put(-457.50,129.40){\circle*{0.000001}} \put(-456.79,128.69){\circle*{0.000001}} \put(-456.08,127.99){\circle*{0.000001}} \put(-455.38,127.28){\circle*{0.000001}} \put(-454.67,127.28){\circle*{0.000001}} \put(-453.96,126.57){\circle*{0.000001}} \put(-453.26,125.87){\circle*{0.000001}} \put(-452.55,125.16){\circle*{0.000001}} \put(-451.84,124.45){\circle*{0.000001}} \put(-451.13,123.74){\circle*{0.000001}} \put(-450.43,123.04){\circle*{0.000001}} \put(-449.72,123.04){\circle*{0.000001}} \put(-449.01,122.33){\circle*{0.000001}} \put(-448.31,121.62){\circle*{0.000001}} \put(-447.60,120.92){\circle*{0.000001}} \put(-446.89,120.21){\circle*{0.000001}} \put(-446.18,119.50){\circle*{0.000001}} \put(-445.48,118.79){\circle*{0.000001}} \put(-444.77,118.79){\circle*{0.000001}} \put(-444.06,118.09){\circle*{0.000001}} \put(-443.36,117.38){\circle*{0.000001}} \put(-442.65,116.67){\circle*{0.000001}} \put(-467.40,137.89){\circle*{0.000001}} \put(-468.10,138.59){\circle*{0.000001}} \put(-468.81,139.30){\circle*{0.000001}} \put(-468.81,140.01){\circle*{0.000001}} \put(-469.52,140.71){\circle*{0.000001}} \put(-470.23,141.42){\circle*{0.000001}} \put(-470.93,142.13){\circle*{0.000001}} \put(-471.64,142.84){\circle*{0.000001}} \put(-472.35,143.54){\circle*{0.000001}} \put(-472.35,144.25){\circle*{0.000001}} \put(-473.05,144.96){\circle*{0.000001}} \put(-473.76,145.66){\circle*{0.000001}} \put(-474.47,146.37){\circle*{0.000001}} \put(-475.18,147.08){\circle*{0.000001}} \put(-475.88,147.79){\circle*{0.000001}} \put(-475.88,148.49){\circle*{0.000001}} \put(-476.59,149.20){\circle*{0.000001}} \put(-477.30,149.91){\circle*{0.000001}} \put(-478.00,150.61){\circle*{0.000001}} \put(-478.71,151.32){\circle*{0.000001}} \put(-479.42,152.03){\circle*{0.000001}} \put(-479.42,152.74){\circle*{0.000001}} \put(-480.13,153.44){\circle*{0.000001}} \put(-480.83,154.15){\circle*{0.000001}} \put(-481.54,154.86){\circle*{0.000001}} \put(-482.25,155.56){\circle*{0.000001}} \put(-482.95,156.27){\circle*{0.000001}} \put(-482.95,156.98){\circle*{0.000001}} \put(-483.66,157.68){\circle*{0.000001}} \put(-484.37,158.39){\circle*{0.000001}} \put(-485.08,159.10){\circle*{0.000001}} \put(-485.78,159.81){\circle*{0.000001}} \put(-486.49,160.51){\circle*{0.000001}} \put(-486.49,161.22){\circle*{0.000001}} \put(-487.20,161.93){\circle*{0.000001}} \put(-487.90,162.63){\circle*{0.000001}} \put(-488.61,163.34){\circle*{0.000001}} \put(-488.61,163.34){\circle*{0.000001}} \put(-487.90,164.05){\circle*{0.000001}} \put(-487.90,164.76){\circle*{0.000001}} \put(-487.20,165.46){\circle*{0.000001}} \put(-487.20,166.17){\circle*{0.000001}} \put(-486.49,166.88){\circle*{0.000001}} \put(-485.78,167.58){\circle*{0.000001}} \put(-485.78,168.29){\circle*{0.000001}} \put(-485.08,169.00){\circle*{0.000001}} \put(-485.08,169.71){\circle*{0.000001}} \put(-484.37,170.41){\circle*{0.000001}} \put(-483.66,171.12){\circle*{0.000001}} \put(-483.66,171.83){\circle*{0.000001}} \put(-482.95,172.53){\circle*{0.000001}} \put(-482.25,173.24){\circle*{0.000001}} \put(-482.25,173.95){\circle*{0.000001}} \put(-481.54,174.66){\circle*{0.000001}} \put(-481.54,175.36){\circle*{0.000001}} \put(-480.83,176.07){\circle*{0.000001}} \put(-480.13,176.78){\circle*{0.000001}} \put(-480.13,177.48){\circle*{0.000001}} \put(-479.42,178.19){\circle*{0.000001}} \put(-479.42,178.90){\circle*{0.000001}} \put(-478.71,179.61){\circle*{0.000001}} \put(-478.00,180.31){\circle*{0.000001}} 
\put(-478.00,181.02){\circle*{0.000001}} \put(-477.30,181.73){\circle*{0.000001}} \put(-477.30,182.43){\circle*{0.000001}} \put(-476.59,183.14){\circle*{0.000001}} \put(-475.88,183.85){\circle*{0.000001}} \put(-475.88,184.55){\circle*{0.000001}} \put(-475.18,185.26){\circle*{0.000001}} \put(-474.47,185.97){\circle*{0.000001}} \put(-474.47,186.68){\circle*{0.000001}} \put(-473.76,187.38){\circle*{0.000001}} \put(-473.76,188.09){\circle*{0.000001}} \put(-473.05,188.80){\circle*{0.000001}} \put(-472.35,189.50){\circle*{0.000001}} \put(-472.35,190.21){\circle*{0.000001}} \put(-471.64,190.92){\circle*{0.000001}} \put(-471.64,191.63){\circle*{0.000001}} \put(-470.93,192.33){\circle*{0.000001}} \put(-503.46,182.43){\circle*{0.000001}} \put(-502.75,182.43){\circle*{0.000001}} \put(-502.05,183.14){\circle*{0.000001}} \put(-501.34,183.14){\circle*{0.000001}} \put(-500.63,183.14){\circle*{0.000001}} \put(-499.92,183.85){\circle*{0.000001}} \put(-499.22,183.85){\circle*{0.000001}} \put(-498.51,183.85){\circle*{0.000001}} \put(-497.80,183.85){\circle*{0.000001}} \put(-497.10,184.55){\circle*{0.000001}} \put(-496.39,184.55){\circle*{0.000001}} \put(-495.68,184.55){\circle*{0.000001}} \put(-494.97,185.26){\circle*{0.000001}} \put(-494.27,185.26){\circle*{0.000001}} \put(-493.56,185.26){\circle*{0.000001}} \put(-492.85,185.97){\circle*{0.000001}} \put(-492.15,185.97){\circle*{0.000001}} \put(-491.44,185.97){\circle*{0.000001}} \put(-490.73,185.97){\circle*{0.000001}} \put(-490.02,186.68){\circle*{0.000001}} \put(-489.32,186.68){\circle*{0.000001}} \put(-488.61,186.68){\circle*{0.000001}} \put(-487.90,187.38){\circle*{0.000001}} \put(-487.20,187.38){\circle*{0.000001}} \put(-486.49,187.38){\circle*{0.000001}} \put(-485.78,188.09){\circle*{0.000001}} \put(-485.08,188.09){\circle*{0.000001}} \put(-484.37,188.09){\circle*{0.000001}} \put(-483.66,188.80){\circle*{0.000001}} \put(-482.95,188.80){\circle*{0.000001}} \put(-482.25,188.80){\circle*{0.000001}} \put(-481.54,188.80){\circle*{0.000001}} \put(-480.83,189.50){\circle*{0.000001}} \put(-480.13,189.50){\circle*{0.000001}} \put(-479.42,189.50){\circle*{0.000001}} \put(-478.71,190.21){\circle*{0.000001}} \put(-478.00,190.21){\circle*{0.000001}} \put(-477.30,190.21){\circle*{0.000001}} \put(-476.59,190.92){\circle*{0.000001}} \put(-475.88,190.92){\circle*{0.000001}} \put(-475.18,190.92){\circle*{0.000001}} \put(-474.47,190.92){\circle*{0.000001}} \put(-473.76,191.63){\circle*{0.000001}} \put(-473.05,191.63){\circle*{0.000001}} \put(-472.35,191.63){\circle*{0.000001}} \put(-471.64,192.33){\circle*{0.000001}} \put(-470.93,192.33){\circle*{0.000001}} \put(-509.82,147.79){\circle*{0.000001}} \put(-509.82,148.49){\circle*{0.000001}} \put(-509.82,149.20){\circle*{0.000001}} \put(-509.12,149.91){\circle*{0.000001}} \put(-509.12,150.61){\circle*{0.000001}} \put(-509.12,151.32){\circle*{0.000001}} \put(-509.12,152.03){\circle*{0.000001}} \put(-509.12,152.74){\circle*{0.000001}} \put(-509.12,153.44){\circle*{0.000001}} \put(-508.41,154.15){\circle*{0.000001}} \put(-508.41,154.86){\circle*{0.000001}} \put(-508.41,155.56){\circle*{0.000001}} \put(-508.41,156.27){\circle*{0.000001}} \put(-508.41,156.98){\circle*{0.000001}} \put(-507.70,157.68){\circle*{0.000001}} \put(-507.70,158.39){\circle*{0.000001}} \put(-507.70,159.10){\circle*{0.000001}} \put(-507.70,159.81){\circle*{0.000001}} \put(-507.70,160.51){\circle*{0.000001}} \put(-507.70,161.22){\circle*{0.000001}} \put(-507.00,161.93){\circle*{0.000001}} \put(-507.00,162.63){\circle*{0.000001}} 
\put(-507.00,163.34){\circle*{0.000001}} \put(-507.00,164.05){\circle*{0.000001}} \put(-507.00,164.76){\circle*{0.000001}} \put(-506.29,165.46){\circle*{0.000001}} \put(-506.29,166.17){\circle*{0.000001}} \put(-506.29,166.88){\circle*{0.000001}} \put(-506.29,167.58){\circle*{0.000001}} \put(-506.29,168.29){\circle*{0.000001}} \put(-505.58,169.00){\circle*{0.000001}} \put(-505.58,169.71){\circle*{0.000001}} \put(-505.58,170.41){\circle*{0.000001}} \put(-505.58,171.12){\circle*{0.000001}} \put(-505.58,171.83){\circle*{0.000001}} \put(-505.58,172.53){\circle*{0.000001}} \put(-504.87,173.24){\circle*{0.000001}} \put(-504.87,173.95){\circle*{0.000001}} \put(-504.87,174.66){\circle*{0.000001}} \put(-504.87,175.36){\circle*{0.000001}} \put(-504.87,176.07){\circle*{0.000001}} \put(-504.17,176.78){\circle*{0.000001}} \put(-504.17,177.48){\circle*{0.000001}} \put(-504.17,178.19){\circle*{0.000001}} \put(-504.17,178.90){\circle*{0.000001}} \put(-504.17,179.61){\circle*{0.000001}} \put(-504.17,180.31){\circle*{0.000001}} \put(-503.46,181.02){\circle*{0.000001}} \put(-503.46,181.73){\circle*{0.000001}} \put(-503.46,182.43){\circle*{0.000001}} \put(-509.82,147.79){\circle*{0.000001}} \put(-510.53,148.49){\circle*{0.000001}} \put(-510.53,149.20){\circle*{0.000001}} \put(-511.24,149.91){\circle*{0.000001}} \put(-511.95,150.61){\circle*{0.000001}} \put(-512.65,151.32){\circle*{0.000001}} \put(-512.65,152.03){\circle*{0.000001}} \put(-513.36,152.74){\circle*{0.000001}} \put(-514.07,153.44){\circle*{0.000001}} \put(-514.07,154.15){\circle*{0.000001}} \put(-514.77,154.86){\circle*{0.000001}} \put(-515.48,155.56){\circle*{0.000001}} \put(-516.19,156.27){\circle*{0.000001}} \put(-516.19,156.98){\circle*{0.000001}} \put(-516.90,157.68){\circle*{0.000001}} \put(-517.60,158.39){\circle*{0.000001}} \put(-517.60,159.10){\circle*{0.000001}} \put(-518.31,159.81){\circle*{0.000001}} \put(-519.02,160.51){\circle*{0.000001}} \put(-519.02,161.22){\circle*{0.000001}} \put(-519.72,161.93){\circle*{0.000001}} \put(-520.43,162.63){\circle*{0.000001}} \put(-521.14,163.34){\circle*{0.000001}} \put(-521.14,164.05){\circle*{0.000001}} \put(-521.84,164.76){\circle*{0.000001}} \put(-522.55,165.46){\circle*{0.000001}} \put(-522.55,166.17){\circle*{0.000001}} \put(-523.26,166.88){\circle*{0.000001}} \put(-523.97,167.58){\circle*{0.000001}} \put(-524.67,168.29){\circle*{0.000001}} \put(-524.67,169.00){\circle*{0.000001}} \put(-525.38,169.71){\circle*{0.000001}} \put(-526.09,170.41){\circle*{0.000001}} \put(-526.09,171.12){\circle*{0.000001}} \put(-526.79,171.83){\circle*{0.000001}} \put(-527.50,172.53){\circle*{0.000001}} \put(-528.21,173.24){\circle*{0.000001}} \put(-528.21,173.95){\circle*{0.000001}} \put(-528.92,174.66){\circle*{0.000001}} \put(-528.92,174.66){\circle*{0.000001}} \put(-528.21,175.36){\circle*{0.000001}} \put(-527.50,175.36){\circle*{0.000001}} \put(-526.79,176.07){\circle*{0.000001}} \put(-526.09,176.78){\circle*{0.000001}} \put(-525.38,176.78){\circle*{0.000001}} \put(-524.67,177.48){\circle*{0.000001}} \put(-523.97,178.19){\circle*{0.000001}} \put(-523.26,178.19){\circle*{0.000001}} \put(-522.55,178.90){\circle*{0.000001}} \put(-521.84,179.61){\circle*{0.000001}} \put(-521.14,179.61){\circle*{0.000001}} \put(-520.43,180.31){\circle*{0.000001}} \put(-519.72,181.02){\circle*{0.000001}} \put(-519.02,181.02){\circle*{0.000001}} \put(-518.31,181.73){\circle*{0.000001}} \put(-517.60,182.43){\circle*{0.000001}} \put(-516.90,182.43){\circle*{0.000001}} \put(-516.19,183.14){\circle*{0.000001}} 
\put(-515.48,183.85){\circle*{0.000001}} \put(-514.77,183.85){\circle*{0.000001}} \put(-514.07,184.55){\circle*{0.000001}} \put(-513.36,185.26){\circle*{0.000001}} \put(-512.65,185.97){\circle*{0.000001}} \put(-511.95,185.97){\circle*{0.000001}} \put(-511.24,186.68){\circle*{0.000001}} \put(-510.53,187.38){\circle*{0.000001}} \put(-509.82,187.38){\circle*{0.000001}} \put(-509.12,188.09){\circle*{0.000001}} \put(-508.41,188.80){\circle*{0.000001}} \put(-507.70,188.80){\circle*{0.000001}} \put(-507.00,189.50){\circle*{0.000001}} \put(-506.29,190.21){\circle*{0.000001}} \put(-505.58,190.21){\circle*{0.000001}} \put(-504.87,190.92){\circle*{0.000001}} \put(-504.17,191.63){\circle*{0.000001}} \put(-503.46,191.63){\circle*{0.000001}} \put(-502.75,192.33){\circle*{0.000001}} \put(-502.05,193.04){\circle*{0.000001}} \put(-501.34,193.04){\circle*{0.000001}} \put(-500.63,193.75){\circle*{0.000001}} \put(-500.63,193.75){\circle*{0.000001}} \put(-500.63,194.45){\circle*{0.000001}} \put(-501.34,195.16){\circle*{0.000001}} \put(-501.34,195.87){\circle*{0.000001}} \put(-501.34,196.58){\circle*{0.000001}} \put(-502.05,197.28){\circle*{0.000001}} \put(-502.05,197.99){\circle*{0.000001}} \put(-502.75,198.70){\circle*{0.000001}} \put(-502.75,199.40){\circle*{0.000001}} \put(-502.75,200.11){\circle*{0.000001}} \put(-503.46,200.82){\circle*{0.000001}} \put(-503.46,201.53){\circle*{0.000001}} \put(-503.46,202.23){\circle*{0.000001}} \put(-504.17,202.94){\circle*{0.000001}} \put(-504.17,203.65){\circle*{0.000001}} \put(-504.17,204.35){\circle*{0.000001}} \put(-504.87,205.06){\circle*{0.000001}} \put(-504.87,205.77){\circle*{0.000001}} \put(-505.58,206.48){\circle*{0.000001}} \put(-505.58,207.18){\circle*{0.000001}} \put(-505.58,207.89){\circle*{0.000001}} \put(-506.29,208.60){\circle*{0.000001}} \put(-506.29,209.30){\circle*{0.000001}} \put(-506.29,210.01){\circle*{0.000001}} \put(-507.00,210.72){\circle*{0.000001}} \put(-507.00,211.42){\circle*{0.000001}} \put(-507.00,212.13){\circle*{0.000001}} \put(-507.70,212.84){\circle*{0.000001}} \put(-507.70,213.55){\circle*{0.000001}} \put(-508.41,214.25){\circle*{0.000001}} \put(-508.41,214.96){\circle*{0.000001}} \put(-508.41,215.67){\circle*{0.000001}} \put(-509.12,216.37){\circle*{0.000001}} \put(-509.12,217.08){\circle*{0.000001}} \put(-509.12,217.79){\circle*{0.000001}} \put(-509.82,218.50){\circle*{0.000001}} \put(-509.82,219.20){\circle*{0.000001}} \put(-509.82,219.91){\circle*{0.000001}} \put(-510.53,220.62){\circle*{0.000001}} \put(-510.53,221.32){\circle*{0.000001}} \put(-511.24,222.03){\circle*{0.000001}} \put(-511.24,222.74){\circle*{0.000001}} \put(-511.24,223.45){\circle*{0.000001}} \put(-511.95,224.15){\circle*{0.000001}} \put(-511.95,224.86){\circle*{0.000001}} \put(-511.95,224.86){\circle*{0.000001}} \put(-511.24,224.86){\circle*{0.000001}} \put(-510.53,224.86){\circle*{0.000001}} \put(-509.82,225.57){\circle*{0.000001}} \put(-509.12,225.57){\circle*{0.000001}} \put(-508.41,225.57){\circle*{0.000001}} \put(-507.70,225.57){\circle*{0.000001}} \put(-507.00,225.57){\circle*{0.000001}} \put(-506.29,226.27){\circle*{0.000001}} \put(-505.58,226.27){\circle*{0.000001}} \put(-504.87,226.27){\circle*{0.000001}} \put(-504.17,226.27){\circle*{0.000001}} \put(-503.46,226.98){\circle*{0.000001}} \put(-502.75,226.98){\circle*{0.000001}} \put(-502.05,226.98){\circle*{0.000001}} \put(-501.34,226.98){\circle*{0.000001}} \put(-500.63,226.98){\circle*{0.000001}} \put(-499.92,227.69){\circle*{0.000001}} \put(-499.22,227.69){\circle*{0.000001}} 
\put(-498.51,227.69){\circle*{0.000001}} \put(-497.80,227.69){\circle*{0.000001}} \put(-497.10,227.69){\circle*{0.000001}} \put(-496.39,228.40){\circle*{0.000001}} \put(-495.68,228.40){\circle*{0.000001}} \put(-494.97,228.40){\circle*{0.000001}} \put(-494.27,228.40){\circle*{0.000001}} \put(-493.56,229.10){\circle*{0.000001}} \put(-492.85,229.10){\circle*{0.000001}} \put(-492.15,229.10){\circle*{0.000001}} \put(-491.44,229.10){\circle*{0.000001}} \put(-490.73,229.10){\circle*{0.000001}} \put(-490.02,229.81){\circle*{0.000001}} \put(-489.32,229.81){\circle*{0.000001}} \put(-488.61,229.81){\circle*{0.000001}} \put(-487.90,229.81){\circle*{0.000001}} \put(-487.20,229.81){\circle*{0.000001}} \put(-486.49,230.52){\circle*{0.000001}} \put(-485.78,230.52){\circle*{0.000001}} \put(-485.08,230.52){\circle*{0.000001}} \put(-484.37,230.52){\circle*{0.000001}} \put(-483.66,231.22){\circle*{0.000001}} \put(-482.95,231.22){\circle*{0.000001}} \put(-482.25,231.22){\circle*{0.000001}} \put(-481.54,231.22){\circle*{0.000001}} \put(-480.83,231.22){\circle*{0.000001}} \put(-480.13,231.93){\circle*{0.000001}} \put(-479.42,231.93){\circle*{0.000001}} \put(-478.71,231.93){\circle*{0.000001}} \put(-478.71,231.93){\circle*{0.000001}} \put(-478.00,232.64){\circle*{0.000001}} \put(-477.30,233.35){\circle*{0.000001}} \put(-476.59,234.05){\circle*{0.000001}} \put(-475.88,234.76){\circle*{0.000001}} \put(-475.18,235.47){\circle*{0.000001}} \put(-474.47,236.17){\circle*{0.000001}} \put(-473.76,236.88){\circle*{0.000001}} \put(-473.05,237.59){\circle*{0.000001}} \put(-472.35,238.29){\circle*{0.000001}} \put(-471.64,239.00){\circle*{0.000001}} \put(-470.93,239.71){\circle*{0.000001}} \put(-470.23,240.42){\circle*{0.000001}} \put(-469.52,241.12){\circle*{0.000001}} \put(-468.81,241.83){\circle*{0.000001}} \put(-468.10,242.54){\circle*{0.000001}} \put(-467.40,243.24){\circle*{0.000001}} \put(-467.40,243.95){\circle*{0.000001}} \put(-466.69,244.66){\circle*{0.000001}} \put(-465.98,245.37){\circle*{0.000001}} \put(-465.28,246.07){\circle*{0.000001}} \put(-464.57,246.78){\circle*{0.000001}} \put(-463.86,247.49){\circle*{0.000001}} \put(-463.15,248.19){\circle*{0.000001}} \put(-462.45,248.90){\circle*{0.000001}} \put(-461.74,249.61){\circle*{0.000001}} \put(-461.03,250.32){\circle*{0.000001}} \put(-460.33,251.02){\circle*{0.000001}} \put(-459.62,251.73){\circle*{0.000001}} \put(-458.91,252.44){\circle*{0.000001}} \put(-458.21,253.14){\circle*{0.000001}} \put(-457.50,253.85){\circle*{0.000001}} \put(-456.79,254.56){\circle*{0.000001}} \put(-456.08,255.27){\circle*{0.000001}} \put(-455.38,255.97){\circle*{0.000001}} \put(-484.37,273.65){\circle*{0.000001}} \put(-483.66,272.94){\circle*{0.000001}} \put(-482.95,272.94){\circle*{0.000001}} \put(-482.25,272.24){\circle*{0.000001}} \put(-481.54,272.24){\circle*{0.000001}} \put(-480.83,271.53){\circle*{0.000001}} \put(-480.13,270.82){\circle*{0.000001}} \put(-479.42,270.82){\circle*{0.000001}} \put(-478.71,270.11){\circle*{0.000001}} \put(-478.00,270.11){\circle*{0.000001}} \put(-477.30,269.41){\circle*{0.000001}} \put(-476.59,268.70){\circle*{0.000001}} \put(-475.88,268.70){\circle*{0.000001}} \put(-475.18,267.99){\circle*{0.000001}} \put(-474.47,267.29){\circle*{0.000001}} \put(-473.76,267.29){\circle*{0.000001}} \put(-473.05,266.58){\circle*{0.000001}} \put(-472.35,266.58){\circle*{0.000001}} \put(-471.64,265.87){\circle*{0.000001}} \put(-470.93,265.17){\circle*{0.000001}} \put(-470.23,265.17){\circle*{0.000001}} \put(-469.52,264.46){\circle*{0.000001}} 
\put(-468.81,264.46){\circle*{0.000001}} \put(-468.10,263.75){\circle*{0.000001}} \put(-467.40,263.04){\circle*{0.000001}} \put(-466.69,263.04){\circle*{0.000001}} \put(-465.98,262.34){\circle*{0.000001}} \put(-465.28,262.34){\circle*{0.000001}} \put(-464.57,261.63){\circle*{0.000001}} \put(-463.86,260.92){\circle*{0.000001}} \put(-463.15,260.92){\circle*{0.000001}} \put(-462.45,260.22){\circle*{0.000001}} \put(-461.74,259.51){\circle*{0.000001}} \put(-461.03,259.51){\circle*{0.000001}} \put(-460.33,258.80){\circle*{0.000001}} \put(-459.62,258.80){\circle*{0.000001}} \put(-458.91,258.09){\circle*{0.000001}} \put(-458.21,257.39){\circle*{0.000001}} \put(-457.50,257.39){\circle*{0.000001}} \put(-456.79,256.68){\circle*{0.000001}} \put(-456.08,256.68){\circle*{0.000001}} \put(-455.38,255.97){\circle*{0.000001}} \put(-484.37,273.65){\circle*{0.000001}} \put(-484.37,274.36){\circle*{0.000001}} \put(-485.08,275.06){\circle*{0.000001}} \put(-485.08,275.77){\circle*{0.000001}} \put(-485.78,276.48){\circle*{0.000001}} \put(-485.78,277.19){\circle*{0.000001}} \put(-486.49,277.89){\circle*{0.000001}} \put(-486.49,278.60){\circle*{0.000001}} \put(-486.49,279.31){\circle*{0.000001}} \put(-487.20,280.01){\circle*{0.000001}} \put(-487.20,280.72){\circle*{0.000001}} \put(-487.90,281.43){\circle*{0.000001}} \put(-487.90,282.14){\circle*{0.000001}} \put(-487.90,282.84){\circle*{0.000001}} \put(-488.61,283.55){\circle*{0.000001}} \put(-488.61,284.26){\circle*{0.000001}} \put(-489.32,284.96){\circle*{0.000001}} \put(-489.32,285.67){\circle*{0.000001}} \put(-490.02,286.38){\circle*{0.000001}} \put(-490.02,287.09){\circle*{0.000001}} \put(-490.02,287.79){\circle*{0.000001}} \put(-490.73,288.50){\circle*{0.000001}} \put(-490.73,289.21){\circle*{0.000001}} \put(-491.44,289.91){\circle*{0.000001}} \put(-491.44,290.62){\circle*{0.000001}} \put(-492.15,291.33){\circle*{0.000001}} \put(-492.15,292.04){\circle*{0.000001}} \put(-492.15,292.74){\circle*{0.000001}} \put(-492.85,293.45){\circle*{0.000001}} \put(-492.85,294.16){\circle*{0.000001}} \put(-493.56,294.86){\circle*{0.000001}} \put(-493.56,295.57){\circle*{0.000001}} \put(-494.27,296.28){\circle*{0.000001}} \put(-494.27,296.98){\circle*{0.000001}} \put(-494.27,297.69){\circle*{0.000001}} \put(-494.97,298.40){\circle*{0.000001}} \put(-494.97,299.11){\circle*{0.000001}} \put(-495.68,299.81){\circle*{0.000001}} \put(-495.68,300.52){\circle*{0.000001}} \put(-495.68,301.23){\circle*{0.000001}} \put(-496.39,301.93){\circle*{0.000001}} \put(-496.39,302.64){\circle*{0.000001}} \put(-497.10,303.35){\circle*{0.000001}} \put(-497.10,304.06){\circle*{0.000001}} \put(-497.80,304.76){\circle*{0.000001}} \put(-497.80,305.47){\circle*{0.000001}} \put(-497.80,305.47){\circle*{0.000001}} \put(-497.10,305.47){\circle*{0.000001}} \put(-496.39,306.18){\circle*{0.000001}} \put(-495.68,306.18){\circle*{0.000001}} \put(-494.97,306.18){\circle*{0.000001}} \put(-494.27,306.18){\circle*{0.000001}} \put(-493.56,306.88){\circle*{0.000001}} \put(-492.85,306.88){\circle*{0.000001}} \put(-492.15,306.88){\circle*{0.000001}} \put(-491.44,306.88){\circle*{0.000001}} \put(-490.73,307.59){\circle*{0.000001}} \put(-490.02,307.59){\circle*{0.000001}} \put(-489.32,307.59){\circle*{0.000001}} \put(-488.61,308.30){\circle*{0.000001}} \put(-487.90,308.30){\circle*{0.000001}} \put(-487.20,308.30){\circle*{0.000001}} \put(-486.49,308.30){\circle*{0.000001}} \put(-485.78,309.01){\circle*{0.000001}} \put(-485.08,309.01){\circle*{0.000001}} \put(-484.37,309.01){\circle*{0.000001}} 
\put(-483.66,309.71){\circle*{0.000001}} \put(-482.95,309.71){\circle*{0.000001}} \put(-482.25,309.71){\circle*{0.000001}} \put(-481.54,309.71){\circle*{0.000001}} \put(-480.83,310.42){\circle*{0.000001}} \put(-480.13,310.42){\circle*{0.000001}} \put(-479.42,310.42){\circle*{0.000001}} \put(-478.71,310.42){\circle*{0.000001}} \put(-478.00,311.13){\circle*{0.000001}} \put(-477.30,311.13){\circle*{0.000001}} \put(-476.59,311.13){\circle*{0.000001}} \put(-475.88,311.83){\circle*{0.000001}} \put(-475.18,311.83){\circle*{0.000001}} \put(-474.47,311.83){\circle*{0.000001}} \put(-473.76,311.83){\circle*{0.000001}} \put(-473.05,312.54){\circle*{0.000001}} \put(-472.35,312.54){\circle*{0.000001}} \put(-471.64,312.54){\circle*{0.000001}} \put(-470.93,313.25){\circle*{0.000001}} \put(-470.23,313.25){\circle*{0.000001}} \put(-469.52,313.25){\circle*{0.000001}} \put(-468.81,313.25){\circle*{0.000001}} \put(-468.10,313.96){\circle*{0.000001}} \put(-467.40,313.96){\circle*{0.000001}} \put(-466.69,313.96){\circle*{0.000001}} \put(-465.98,313.96){\circle*{0.000001}} \put(-465.28,314.66){\circle*{0.000001}} \put(-464.57,314.66){\circle*{0.000001}} \put(-464.57,314.66){\circle*{0.000001}} \put(-464.57,315.37){\circle*{0.000001}} \put(-463.86,316.08){\circle*{0.000001}} \put(-463.86,316.78){\circle*{0.000001}} \put(-463.86,317.49){\circle*{0.000001}} \put(-463.15,318.20){\circle*{0.000001}} \put(-463.15,318.91){\circle*{0.000001}} \put(-463.15,319.61){\circle*{0.000001}} \put(-462.45,320.32){\circle*{0.000001}} \put(-462.45,321.03){\circle*{0.000001}} \put(-461.74,321.73){\circle*{0.000001}} \put(-461.74,322.44){\circle*{0.000001}} \put(-461.74,323.15){\circle*{0.000001}} \put(-461.03,323.85){\circle*{0.000001}} \put(-461.03,324.56){\circle*{0.000001}} \put(-461.03,325.27){\circle*{0.000001}} \put(-460.33,325.98){\circle*{0.000001}} \put(-460.33,326.68){\circle*{0.000001}} \put(-460.33,327.39){\circle*{0.000001}} \put(-459.62,328.10){\circle*{0.000001}} \put(-459.62,328.80){\circle*{0.000001}} \put(-459.62,329.51){\circle*{0.000001}} \put(-458.91,330.22){\circle*{0.000001}} \put(-458.91,330.93){\circle*{0.000001}} \put(-458.21,331.63){\circle*{0.000001}} \put(-458.21,332.34){\circle*{0.000001}} \put(-458.21,333.05){\circle*{0.000001}} \put(-457.50,333.75){\circle*{0.000001}} \put(-457.50,334.46){\circle*{0.000001}} \put(-457.50,335.17){\circle*{0.000001}} \put(-456.79,335.88){\circle*{0.000001}} \put(-456.79,336.58){\circle*{0.000001}} \put(-456.79,337.29){\circle*{0.000001}} \put(-456.08,338.00){\circle*{0.000001}} \put(-456.08,338.70){\circle*{0.000001}} \put(-456.08,339.41){\circle*{0.000001}} \put(-455.38,340.12){\circle*{0.000001}} \put(-455.38,340.83){\circle*{0.000001}} \put(-454.67,341.53){\circle*{0.000001}} \put(-454.67,342.24){\circle*{0.000001}} \put(-454.67,342.95){\circle*{0.000001}} \put(-453.96,343.65){\circle*{0.000001}} \put(-453.96,344.36){\circle*{0.000001}} \put(-475.88,315.37){\circle*{0.000001}} \put(-475.18,316.08){\circle*{0.000001}} \put(-474.47,316.78){\circle*{0.000001}} \put(-474.47,317.49){\circle*{0.000001}} \put(-473.76,318.20){\circle*{0.000001}} \put(-473.05,318.91){\circle*{0.000001}} \put(-472.35,319.61){\circle*{0.000001}} \put(-472.35,320.32){\circle*{0.000001}} \put(-471.64,321.03){\circle*{0.000001}} \put(-470.93,321.73){\circle*{0.000001}} \put(-470.23,322.44){\circle*{0.000001}} \put(-470.23,323.15){\circle*{0.000001}} \put(-469.52,323.85){\circle*{0.000001}} \put(-468.81,324.56){\circle*{0.000001}} \put(-468.10,325.27){\circle*{0.000001}} 
\put(-468.10,325.98){\circle*{0.000001}} \put(-467.40,326.68){\circle*{0.000001}} \put(-466.69,327.39){\circle*{0.000001}} \put(-465.98,328.10){\circle*{0.000001}} \put(-465.98,328.80){\circle*{0.000001}} \put(-465.28,329.51){\circle*{0.000001}} \put(-464.57,330.22){\circle*{0.000001}} \put(-463.86,330.93){\circle*{0.000001}} \put(-463.86,331.63){\circle*{0.000001}} \put(-463.15,332.34){\circle*{0.000001}} \put(-462.45,333.05){\circle*{0.000001}} \put(-461.74,333.75){\circle*{0.000001}} \put(-461.74,334.46){\circle*{0.000001}} \put(-461.03,335.17){\circle*{0.000001}} \put(-460.33,335.88){\circle*{0.000001}} \put(-459.62,336.58){\circle*{0.000001}} \put(-459.62,337.29){\circle*{0.000001}} \put(-458.91,338.00){\circle*{0.000001}} \put(-458.21,338.70){\circle*{0.000001}} \put(-457.50,339.41){\circle*{0.000001}} \put(-457.50,340.12){\circle*{0.000001}} \put(-456.79,340.83){\circle*{0.000001}} \put(-456.08,341.53){\circle*{0.000001}} \put(-455.38,342.24){\circle*{0.000001}} \put(-455.38,342.95){\circle*{0.000001}} \put(-454.67,343.65){\circle*{0.000001}} \put(-453.96,344.36){\circle*{0.000001}} \put(-475.88,315.37){\circle*{0.000001}} \put(-475.18,316.08){\circle*{0.000001}} \put(-474.47,316.78){\circle*{0.000001}} \put(-474.47,317.49){\circle*{0.000001}} \put(-473.76,318.20){\circle*{0.000001}} \put(-473.05,318.91){\circle*{0.000001}} \put(-472.35,319.61){\circle*{0.000001}} \put(-472.35,320.32){\circle*{0.000001}} \put(-471.64,321.03){\circle*{0.000001}} \put(-470.93,321.73){\circle*{0.000001}} \put(-470.23,322.44){\circle*{0.000001}} \put(-469.52,323.15){\circle*{0.000001}} \put(-469.52,323.85){\circle*{0.000001}} \put(-468.81,324.56){\circle*{0.000001}} \put(-468.10,325.27){\circle*{0.000001}} \put(-467.40,325.98){\circle*{0.000001}} \put(-466.69,326.68){\circle*{0.000001}} \put(-466.69,327.39){\circle*{0.000001}} \put(-465.98,328.10){\circle*{0.000001}} \put(-465.28,328.80){\circle*{0.000001}} \put(-464.57,329.51){\circle*{0.000001}} \put(-464.57,330.22){\circle*{0.000001}} \put(-463.86,330.93){\circle*{0.000001}} \put(-463.15,331.63){\circle*{0.000001}} \put(-462.45,332.34){\circle*{0.000001}} \put(-461.74,333.05){\circle*{0.000001}} \put(-461.74,333.75){\circle*{0.000001}} \put(-461.03,334.46){\circle*{0.000001}} \put(-460.33,335.17){\circle*{0.000001}} \put(-459.62,335.88){\circle*{0.000001}} \put(-458.91,336.58){\circle*{0.000001}} \put(-458.91,337.29){\circle*{0.000001}} \put(-458.21,338.00){\circle*{0.000001}} \put(-457.50,338.70){\circle*{0.000001}} \put(-456.79,339.41){\circle*{0.000001}} \put(-456.79,340.12){\circle*{0.000001}} \put(-456.08,340.83){\circle*{0.000001}} \put(-455.38,341.53){\circle*{0.000001}} \put(-481.54,319.61){\circle*{0.000001}} \put(-480.83,320.32){\circle*{0.000001}} \put(-480.13,321.03){\circle*{0.000001}} \put(-479.42,321.73){\circle*{0.000001}} \put(-478.71,321.73){\circle*{0.000001}} \put(-478.00,322.44){\circle*{0.000001}} \put(-477.30,323.15){\circle*{0.000001}} \put(-476.59,323.85){\circle*{0.000001}} \put(-475.88,324.56){\circle*{0.000001}} \put(-475.18,325.27){\circle*{0.000001}} \put(-474.47,325.27){\circle*{0.000001}} \put(-473.76,325.98){\circle*{0.000001}} \put(-473.05,326.68){\circle*{0.000001}} \put(-472.35,327.39){\circle*{0.000001}} \put(-471.64,328.10){\circle*{0.000001}} \put(-470.93,328.80){\circle*{0.000001}} \put(-470.23,328.80){\circle*{0.000001}} \put(-469.52,329.51){\circle*{0.000001}} \put(-468.81,330.22){\circle*{0.000001}} \put(-468.10,330.93){\circle*{0.000001}} \put(-467.40,331.63){\circle*{0.000001}} 
\put(-466.69,332.34){\circle*{0.000001}} \put(-465.98,332.34){\circle*{0.000001}} \put(-465.28,333.05){\circle*{0.000001}} \put(-464.57,333.75){\circle*{0.000001}} \put(-463.86,334.46){\circle*{0.000001}} \put(-463.15,335.17){\circle*{0.000001}} \put(-462.45,335.88){\circle*{0.000001}} \put(-461.74,335.88){\circle*{0.000001}} \put(-461.03,336.58){\circle*{0.000001}} \put(-460.33,337.29){\circle*{0.000001}} \put(-459.62,338.00){\circle*{0.000001}} \put(-458.91,338.70){\circle*{0.000001}} \put(-458.21,339.41){\circle*{0.000001}} \put(-457.50,339.41){\circle*{0.000001}} \put(-456.79,340.12){\circle*{0.000001}} \put(-456.08,340.83){\circle*{0.000001}} \put(-455.38,341.53){\circle*{0.000001}} \put(-481.54,319.61){\circle*{0.000001}} \put(-480.83,319.61){\circle*{0.000001}} \put(-480.13,319.61){\circle*{0.000001}} \put(-479.42,319.61){\circle*{0.000001}} \put(-478.71,319.61){\circle*{0.000001}} \put(-478.00,320.32){\circle*{0.000001}} \put(-477.30,320.32){\circle*{0.000001}} \put(-476.59,320.32){\circle*{0.000001}} \put(-475.88,320.32){\circle*{0.000001}} \put(-475.18,320.32){\circle*{0.000001}} \put(-474.47,320.32){\circle*{0.000001}} \put(-473.76,320.32){\circle*{0.000001}} \put(-473.05,320.32){\circle*{0.000001}} \put(-472.35,321.03){\circle*{0.000001}} \put(-471.64,321.03){\circle*{0.000001}} \put(-470.93,321.03){\circle*{0.000001}} \put(-470.23,321.03){\circle*{0.000001}} \put(-469.52,321.03){\circle*{0.000001}} \put(-468.81,321.03){\circle*{0.000001}} \put(-468.10,321.03){\circle*{0.000001}} \put(-467.40,321.03){\circle*{0.000001}} \put(-466.69,321.73){\circle*{0.000001}} \put(-465.98,321.73){\circle*{0.000001}} \put(-465.28,321.73){\circle*{0.000001}} \put(-464.57,321.73){\circle*{0.000001}} \put(-463.86,321.73){\circle*{0.000001}} \put(-463.15,321.73){\circle*{0.000001}} \put(-462.45,321.73){\circle*{0.000001}} \put(-461.74,321.73){\circle*{0.000001}} \put(-461.03,322.44){\circle*{0.000001}} \put(-460.33,322.44){\circle*{0.000001}} \put(-459.62,322.44){\circle*{0.000001}} \put(-458.91,322.44){\circle*{0.000001}} \put(-458.21,322.44){\circle*{0.000001}} \put(-457.50,322.44){\circle*{0.000001}} \put(-456.79,322.44){\circle*{0.000001}} \put(-456.08,322.44){\circle*{0.000001}} \put(-455.38,323.15){\circle*{0.000001}} \put(-454.67,323.15){\circle*{0.000001}} \put(-453.96,323.15){\circle*{0.000001}} \put(-453.26,323.15){\circle*{0.000001}} \put(-452.55,323.15){\circle*{0.000001}} \put(-451.84,323.15){\circle*{0.000001}} \put(-451.13,323.15){\circle*{0.000001}} \put(-450.43,323.15){\circle*{0.000001}} \put(-449.72,323.85){\circle*{0.000001}} \put(-449.01,323.85){\circle*{0.000001}} \put(-448.31,323.85){\circle*{0.000001}} \put(-447.60,323.85){\circle*{0.000001}} \put(-447.60,323.85){\circle*{0.000001}} \put(-446.89,324.56){\circle*{0.000001}} \put(-446.18,325.27){\circle*{0.000001}} \put(-445.48,325.98){\circle*{0.000001}} \put(-445.48,326.68){\circle*{0.000001}} \put(-444.77,327.39){\circle*{0.000001}} \put(-444.06,328.10){\circle*{0.000001}} \put(-443.36,328.80){\circle*{0.000001}} \put(-442.65,329.51){\circle*{0.000001}} \put(-441.94,330.22){\circle*{0.000001}} \put(-441.23,330.93){\circle*{0.000001}} \put(-441.23,331.63){\circle*{0.000001}} \put(-440.53,332.34){\circle*{0.000001}} \put(-439.82,333.05){\circle*{0.000001}} \put(-439.11,333.75){\circle*{0.000001}} \put(-438.41,334.46){\circle*{0.000001}} \put(-437.70,335.17){\circle*{0.000001}} \put(-436.99,335.88){\circle*{0.000001}} \put(-436.99,336.58){\circle*{0.000001}} \put(-436.28,337.29){\circle*{0.000001}} 
\put(-435.58,338.00){\circle*{0.000001}} \put(-434.87,338.70){\circle*{0.000001}} \put(-434.16,339.41){\circle*{0.000001}} \put(-433.46,340.12){\circle*{0.000001}} \put(-432.75,340.83){\circle*{0.000001}} \put(-432.04,341.53){\circle*{0.000001}} \put(-432.04,342.24){\circle*{0.000001}} \put(-431.34,342.95){\circle*{0.000001}} \put(-430.63,343.65){\circle*{0.000001}} \put(-429.92,344.36){\circle*{0.000001}} \put(-429.21,345.07){\circle*{0.000001}} \put(-428.51,345.78){\circle*{0.000001}} \put(-427.80,346.48){\circle*{0.000001}} \put(-427.80,347.19){\circle*{0.000001}} \put(-427.09,347.90){\circle*{0.000001}} \put(-426.39,348.60){\circle*{0.000001}} \put(-425.68,349.31){\circle*{0.000001}} \put(-459.62,345.78){\circle*{0.000001}} \put(-458.91,345.78){\circle*{0.000001}} \put(-458.21,345.78){\circle*{0.000001}} \put(-457.50,345.78){\circle*{0.000001}} \put(-456.79,345.78){\circle*{0.000001}} \put(-456.08,346.48){\circle*{0.000001}} \put(-455.38,346.48){\circle*{0.000001}} \put(-454.67,346.48){\circle*{0.000001}} \put(-453.96,346.48){\circle*{0.000001}} \put(-453.26,346.48){\circle*{0.000001}} \put(-452.55,346.48){\circle*{0.000001}} \put(-451.84,346.48){\circle*{0.000001}} \put(-451.13,346.48){\circle*{0.000001}} \put(-450.43,346.48){\circle*{0.000001}} \put(-449.72,346.48){\circle*{0.000001}} \put(-449.01,347.19){\circle*{0.000001}} \put(-448.31,347.19){\circle*{0.000001}} \put(-447.60,347.19){\circle*{0.000001}} \put(-446.89,347.19){\circle*{0.000001}} \put(-446.18,347.19){\circle*{0.000001}} \put(-445.48,347.19){\circle*{0.000001}} \put(-444.77,347.19){\circle*{0.000001}} \put(-444.06,347.19){\circle*{0.000001}} \put(-443.36,347.19){\circle*{0.000001}} \put(-442.65,347.19){\circle*{0.000001}} \put(-441.94,347.90){\circle*{0.000001}} \put(-441.23,347.90){\circle*{0.000001}} \put(-440.53,347.90){\circle*{0.000001}} \put(-439.82,347.90){\circle*{0.000001}} \put(-439.11,347.90){\circle*{0.000001}} \put(-438.41,347.90){\circle*{0.000001}} \put(-437.70,347.90){\circle*{0.000001}} \put(-436.99,347.90){\circle*{0.000001}} \put(-436.28,347.90){\circle*{0.000001}} \put(-435.58,348.60){\circle*{0.000001}} \put(-434.87,348.60){\circle*{0.000001}} \put(-434.16,348.60){\circle*{0.000001}} \put(-433.46,348.60){\circle*{0.000001}} \put(-432.75,348.60){\circle*{0.000001}} \put(-432.04,348.60){\circle*{0.000001}} \put(-431.34,348.60){\circle*{0.000001}} \put(-430.63,348.60){\circle*{0.000001}} \put(-429.92,348.60){\circle*{0.000001}} \put(-429.21,348.60){\circle*{0.000001}} \put(-428.51,349.31){\circle*{0.000001}} \put(-427.80,349.31){\circle*{0.000001}} \put(-427.09,349.31){\circle*{0.000001}} \put(-426.39,349.31){\circle*{0.000001}} \put(-425.68,349.31){\circle*{0.000001}} \put(-480.83,319.61){\circle*{0.000001}} \put(-480.13,320.32){\circle*{0.000001}} \put(-479.42,321.03){\circle*{0.000001}} \put(-479.42,321.73){\circle*{0.000001}} \put(-478.71,322.44){\circle*{0.000001}} \put(-478.00,323.15){\circle*{0.000001}} \put(-477.30,323.85){\circle*{0.000001}} \put(-476.59,324.56){\circle*{0.000001}} \put(-476.59,325.27){\circle*{0.000001}} \put(-475.88,325.98){\circle*{0.000001}} \put(-475.18,326.68){\circle*{0.000001}} \put(-474.47,327.39){\circle*{0.000001}} \put(-473.76,328.10){\circle*{0.000001}} \put(-473.05,328.80){\circle*{0.000001}} \put(-473.05,329.51){\circle*{0.000001}} \put(-472.35,330.22){\circle*{0.000001}} \put(-471.64,330.93){\circle*{0.000001}} \put(-470.93,331.63){\circle*{0.000001}} \put(-470.23,332.34){\circle*{0.000001}} \put(-470.23,333.05){\circle*{0.000001}} 
\put(-469.52,333.75){\circle*{0.000001}} \put(-468.81,334.46){\circle*{0.000001}} \put(-468.10,335.17){\circle*{0.000001}} \put(-467.40,335.88){\circle*{0.000001}} \put(-467.40,336.58){\circle*{0.000001}} \put(-466.69,337.29){\circle*{0.000001}} \put(-465.98,338.00){\circle*{0.000001}} \put(-465.28,338.70){\circle*{0.000001}} \put(-464.57,339.41){\circle*{0.000001}} \put(-463.86,340.12){\circle*{0.000001}} \put(-463.86,340.83){\circle*{0.000001}} \put(-463.15,341.53){\circle*{0.000001}} \put(-462.45,342.24){\circle*{0.000001}} \put(-461.74,342.95){\circle*{0.000001}} \put(-461.03,343.65){\circle*{0.000001}} \put(-461.03,344.36){\circle*{0.000001}} \put(-460.33,345.07){\circle*{0.000001}} \put(-459.62,345.78){\circle*{0.000001}} \put(-480.83,319.61){\circle*{0.000001}} \put(-480.13,319.61){\circle*{0.000001}} \put(-479.42,319.61){\circle*{0.000001}} \put(-478.71,319.61){\circle*{0.000001}} \put(-478.00,319.61){\circle*{0.000001}} \put(-477.30,319.61){\circle*{0.000001}} \put(-476.59,319.61){\circle*{0.000001}} \put(-475.88,319.61){\circle*{0.000001}} \put(-475.18,319.61){\circle*{0.000001}} \put(-474.47,319.61){\circle*{0.000001}} \put(-473.76,319.61){\circle*{0.000001}} \put(-473.05,319.61){\circle*{0.000001}} \put(-472.35,319.61){\circle*{0.000001}} \put(-471.64,319.61){\circle*{0.000001}} \put(-470.93,319.61){\circle*{0.000001}} \put(-470.23,319.61){\circle*{0.000001}} \put(-469.52,319.61){\circle*{0.000001}} \put(-468.81,319.61){\circle*{0.000001}} \put(-468.10,319.61){\circle*{0.000001}} \put(-467.40,319.61){\circle*{0.000001}} \put(-466.69,319.61){\circle*{0.000001}} \put(-465.98,319.61){\circle*{0.000001}} \put(-465.28,319.61){\circle*{0.000001}} \put(-464.57,319.61){\circle*{0.000001}} \put(-463.86,319.61){\circle*{0.000001}} \put(-463.15,319.61){\circle*{0.000001}} \put(-462.45,319.61){\circle*{0.000001}} \put(-461.74,319.61){\circle*{0.000001}} \put(-461.03,319.61){\circle*{0.000001}} \put(-460.33,319.61){\circle*{0.000001}} \put(-459.62,319.61){\circle*{0.000001}} \put(-458.91,319.61){\circle*{0.000001}} \put(-458.21,319.61){\circle*{0.000001}} \put(-457.50,319.61){\circle*{0.000001}} \put(-456.79,319.61){\circle*{0.000001}} \put(-456.08,319.61){\circle*{0.000001}} \put(-455.38,319.61){\circle*{0.000001}} \put(-454.67,319.61){\circle*{0.000001}} \put(-453.96,319.61){\circle*{0.000001}} \put(-453.26,319.61){\circle*{0.000001}} \put(-452.55,319.61){\circle*{0.000001}} \put(-451.84,319.61){\circle*{0.000001}} \put(-451.13,319.61){\circle*{0.000001}} \put(-450.43,319.61){\circle*{0.000001}} \put(-449.72,319.61){\circle*{0.000001}} \put(-449.01,319.61){\circle*{0.000001}} \put(-448.31,319.61){\circle*{0.000001}} \put(-447.60,319.61){\circle*{0.000001}} \put(-446.89,319.61){\circle*{0.000001}} \put(-446.18,319.61){\circle*{0.000001}} \put(-445.48,319.61){\circle*{0.000001}} \put(-444.77,319.61){\circle*{0.000001}} \put(-444.06,319.61){\circle*{0.000001}} \put(-443.36,319.61){\circle*{0.000001}} \put(-443.36,319.61){\circle*{0.000001}} \put(-442.65,320.32){\circle*{0.000001}} \put(-441.94,321.03){\circle*{0.000001}} \put(-441.23,321.03){\circle*{0.000001}} \put(-440.53,321.73){\circle*{0.000001}} \put(-439.82,322.44){\circle*{0.000001}} \put(-439.11,323.15){\circle*{0.000001}} \put(-438.41,323.85){\circle*{0.000001}} \put(-437.70,323.85){\circle*{0.000001}} \put(-436.99,324.56){\circle*{0.000001}} \put(-436.28,325.27){\circle*{0.000001}} \put(-435.58,325.98){\circle*{0.000001}} \put(-434.87,326.68){\circle*{0.000001}} \put(-434.16,326.68){\circle*{0.000001}} 
\put(-433.46,327.39){\circle*{0.000001}} \put(-432.75,328.10){\circle*{0.000001}} \put(-432.04,328.80){\circle*{0.000001}} \put(-431.34,329.51){\circle*{0.000001}} \put(-430.63,329.51){\circle*{0.000001}} \put(-429.92,330.22){\circle*{0.000001}} \put(-429.21,330.93){\circle*{0.000001}} \put(-428.51,331.63){\circle*{0.000001}} \put(-427.80,331.63){\circle*{0.000001}} \put(-427.09,332.34){\circle*{0.000001}} \put(-426.39,333.05){\circle*{0.000001}} \put(-425.68,333.75){\circle*{0.000001}} \put(-424.97,334.46){\circle*{0.000001}} \put(-424.26,334.46){\circle*{0.000001}} \put(-423.56,335.17){\circle*{0.000001}} \put(-422.85,335.88){\circle*{0.000001}} \put(-422.14,336.58){\circle*{0.000001}} \put(-421.44,337.29){\circle*{0.000001}} \put(-420.73,337.29){\circle*{0.000001}} \put(-420.02,338.00){\circle*{0.000001}} \put(-419.31,338.70){\circle*{0.000001}} \put(-418.61,339.41){\circle*{0.000001}} \put(-417.90,340.12){\circle*{0.000001}} \put(-417.19,340.12){\circle*{0.000001}} \put(-416.49,340.83){\circle*{0.000001}} \put(-415.78,341.53){\circle*{0.000001}} \put(-415.78,341.53){\circle*{0.000001}} \put(-416.49,342.24){\circle*{0.000001}} \put(-416.49,342.95){\circle*{0.000001}} \put(-417.19,343.65){\circle*{0.000001}} \put(-417.90,344.36){\circle*{0.000001}} \put(-417.90,345.07){\circle*{0.000001}} \put(-418.61,345.78){\circle*{0.000001}} \put(-419.31,346.48){\circle*{0.000001}} \put(-420.02,347.19){\circle*{0.000001}} \put(-420.02,347.90){\circle*{0.000001}} \put(-420.73,348.60){\circle*{0.000001}} \put(-421.44,349.31){\circle*{0.000001}} \put(-421.44,350.02){\circle*{0.000001}} \put(-422.14,350.72){\circle*{0.000001}} \put(-422.85,351.43){\circle*{0.000001}} \put(-422.85,352.14){\circle*{0.000001}} \put(-423.56,352.85){\circle*{0.000001}} \put(-424.26,353.55){\circle*{0.000001}} \put(-424.97,354.26){\circle*{0.000001}} \put(-424.97,354.97){\circle*{0.000001}} \put(-425.68,355.67){\circle*{0.000001}} \put(-426.39,356.38){\circle*{0.000001}} \put(-426.39,357.09){\circle*{0.000001}} \put(-427.09,357.80){\circle*{0.000001}} \put(-427.80,358.50){\circle*{0.000001}} \put(-427.80,359.21){\circle*{0.000001}} \put(-428.51,359.92){\circle*{0.000001}} \put(-429.21,360.62){\circle*{0.000001}} \put(-429.92,361.33){\circle*{0.000001}} \put(-429.92,362.04){\circle*{0.000001}} \put(-430.63,362.75){\circle*{0.000001}} \put(-431.34,363.45){\circle*{0.000001}} \put(-431.34,364.16){\circle*{0.000001}} \put(-432.04,364.87){\circle*{0.000001}} \put(-432.75,365.57){\circle*{0.000001}} \put(-432.75,366.28){\circle*{0.000001}} \put(-433.46,366.99){\circle*{0.000001}} \put(-434.16,367.70){\circle*{0.000001}} \put(-434.87,368.40){\circle*{0.000001}} \put(-434.87,369.11){\circle*{0.000001}} \put(-435.58,369.82){\circle*{0.000001}} \put(-435.58,369.82){\circle*{0.000001}} \put(-434.87,370.52){\circle*{0.000001}} \put(-434.16,371.23){\circle*{0.000001}} \put(-433.46,371.94){\circle*{0.000001}} \put(-432.75,371.94){\circle*{0.000001}} \put(-432.04,372.65){\circle*{0.000001}} \put(-431.34,373.35){\circle*{0.000001}} \put(-430.63,374.06){\circle*{0.000001}} \put(-429.92,374.77){\circle*{0.000001}} \put(-429.21,375.47){\circle*{0.000001}} \put(-428.51,375.47){\circle*{0.000001}} \put(-427.80,376.18){\circle*{0.000001}} \put(-427.09,376.89){\circle*{0.000001}} \put(-426.39,377.60){\circle*{0.000001}} \put(-425.68,378.30){\circle*{0.000001}} \put(-424.97,379.01){\circle*{0.000001}} \put(-424.26,379.01){\circle*{0.000001}} \put(-423.56,379.72){\circle*{0.000001}} \put(-422.85,380.42){\circle*{0.000001}} 
\put(-145.66,552.96){\circle*{0.000001}} \put(-144.96,553.66){\circle*{0.000001}} \put(-166.88,527.50){\circle*{0.000001}} \put(-166.17,527.50){\circle*{0.000001}} \put(-165.46,527.50){\circle*{0.000001}} \put(-164.76,527.50){\circle*{0.000001}} \put(-164.05,527.50){\circle*{0.000001}} \put(-163.34,527.50){\circle*{0.000001}} \put(-162.63,527.50){\circle*{0.000001}} \put(-161.93,527.50){\circle*{0.000001}} \put(-161.22,527.50){\circle*{0.000001}} \put(-160.51,527.50){\circle*{0.000001}} \put(-159.81,527.50){\circle*{0.000001}} \put(-159.10,527.50){\circle*{0.000001}} \put(-158.39,527.50){\circle*{0.000001}} \put(-157.68,527.50){\circle*{0.000001}} \put(-156.98,527.50){\circle*{0.000001}} \put(-156.27,527.50){\circle*{0.000001}} \put(-155.56,527.50){\circle*{0.000001}} \put(-154.86,527.50){\circle*{0.000001}} \put(-154.15,527.50){\circle*{0.000001}} \put(-153.44,527.50){\circle*{0.000001}} \put(-152.74,527.50){\circle*{0.000001}} \put(-152.03,527.50){\circle*{0.000001}} \put(-151.32,527.50){\circle*{0.000001}} \put(-150.61,527.50){\circle*{0.000001}} \put(-149.91,526.79){\circle*{0.000001}} \put(-149.20,526.79){\circle*{0.000001}} \put(-148.49,526.79){\circle*{0.000001}} \put(-147.79,526.79){\circle*{0.000001}} \put(-147.08,526.79){\circle*{0.000001}} \put(-146.37,526.79){\circle*{0.000001}} \put(-145.66,526.79){\circle*{0.000001}} \put(-144.96,526.79){\circle*{0.000001}} \put(-144.25,526.79){\circle*{0.000001}} \put(-143.54,526.79){\circle*{0.000001}} \put(-142.84,526.79){\circle*{0.000001}} \put(-142.13,526.79){\circle*{0.000001}} \put(-141.42,526.79){\circle*{0.000001}} \put(-140.71,526.79){\circle*{0.000001}} \put(-140.01,526.79){\circle*{0.000001}} \put(-139.30,526.79){\circle*{0.000001}} \put(-138.59,526.79){\circle*{0.000001}} \put(-137.89,526.79){\circle*{0.000001}} \put(-137.18,526.79){\circle*{0.000001}} \put(-136.47,526.79){\circle*{0.000001}} \put(-135.76,526.79){\circle*{0.000001}} \put(-135.06,526.79){\circle*{0.000001}} \put(-134.35,526.79){\circle*{0.000001}} \put(-133.64,526.79){\circle*{0.000001}} \put(-544,-293){\makebox(0,0)[lt]{$\rho: 0.95$}} \end{picture} \end{tabular} \end{center} \caption{\small A typical path of the best individual for various values of the density. Note the different macroscopic behavior for high density and for low density. In the case of high density, the path is practically always contained in some patch, and the off-food status is practically unused. The low density situation shows the characteristics of ARS.} \label{paths} \end{figure} \begin{table} \begin{center} \begin{tabular}{|l|c|c|c|c|c|} \hline \hline $\delta$ & $\alpha_f$ & $l_f$ & $\tau$ & $\alpha_n$ & $l_n$ \\ \hline \hline 0.95 & 0.00 & 136.94 & 57 & 34.25 & 33.50 \\ \hline 0.5 & 199.06 & 15.53 & 0 & 2.00 & 60.50 \\ \hline 0.1 & 309.18 & 1.75 & 18 & 285.18 & 58.00 \\ \hline 0.01 & 227.29 & 1.50 & 37 & 345.88 & 45.50 \\ \hline \end{tabular} \end{center} \caption{\small The values of $A_F,L_F,T,A_N,L_F$ for the best individual for different values of the food density. 
The values are normalized as in (\ref{normal}) to obtain the walk
parameters $(\alpha_f,l_f,\tau,\alpha_n,l_n)$.}
\label{params}
\end{table}
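The exponents quoted in the fits of the figure below can be obtained as
the slope of the ``walk'' curve on log-log axes. The following is a
minimal sketch of such a fit (illustrative only; it assumes NumPy, and
the helper name \texttt{fit\_power\_law} is hypothetical rather than
taken from the original code):

\begin{verbatim}
import numpy as np

def fit_power_law(t, y):
    # Least-squares fit of y ~ c * t**a on log-log axes;
    # the slope of the fitted line is the exponent a.
    t, y = np.asarray(t, float), np.asarray(y, float)
    keep = (t > 0) & (y > 0)            # log undefined at 0
    a, log_c = np.polyfit(np.log(t[keep]), np.log(y[keep]), 1)
    return a

# Sanity check on synthetic data with a known exponent
# (cf. the rho = 0.1 panel below, whose fit is x^{1.8}):
t = np.arange(1, 46)                    # the plots span t = 0..45
print(fit_power_law(t, 3.0 * t**1.8))   # ~ 1.8
\end{verbatim}

Note how the fitted exponent decreases toward $1$ (essentially linear
growth in $t$) as the food density increases.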
\begin{figure}[tbp]
\begin{center}
\begin{tabular}{cc}
% [Plots omitted: four machine-generated gnuplot pictures, one per food
%  density, each showing the measured ``walk'' curve as a function of
%  time $t$ (t = 0..45) together with a power-law fit:
%    density: $\rho=0.01$ -- fit $x^{3.78}$;
%    density: $\rho=0.1$  -- fit $x^{1.8}$;
%    density: $\rho=0.5$  -- fit $x^{1.1}$;
%    density: $\rho=0.95$ -- fit $x^{0.95}$.]
\multiput(367.59,327.00)(0.488,0.560){13}{\rule{0.117pt}{0.550pt}} \multiput(366.17,327.00)(8.000,7.858){2}{\rule{0.400pt}{0.275pt}} \multiput(375.00,336.59)(0.494,0.488){13}{\rule{0.500pt}{0.117pt}} \multiput(375.00,335.17)(6.962,8.000){2}{\rule{0.250pt}{0.400pt}} \multiput(383.00,344.59)(0.494,0.488){13}{\rule{0.500pt}{0.117pt}} \multiput(383.00,343.17)(6.962,8.000){2}{\rule{0.250pt}{0.400pt}} \multiput(391.00,352.59)(0.494,0.488){13}{\rule{0.500pt}{0.117pt}} \multiput(391.00,351.17)(6.962,8.000){2}{\rule{0.250pt}{0.400pt}} \multiput(399.00,360.59)(0.494,0.488){13}{\rule{0.500pt}{0.117pt}} \multiput(399.00,359.17)(6.962,8.000){2}{\rule{0.250pt}{0.400pt}} \multiput(407.00,368.59)(0.494,0.488){13}{\rule{0.500pt}{0.117pt}} \multiput(407.00,367.17)(6.962,8.000){2}{\rule{0.250pt}{0.400pt}} \multiput(415.00,376.59)(0.494,0.488){13}{\rule{0.500pt}{0.117pt}} \multiput(415.00,375.17)(6.962,8.000){2}{\rule{0.250pt}{0.400pt}} \multiput(423.00,384.59)(0.494,0.488){13}{\rule{0.500pt}{0.117pt}} \multiput(423.00,383.17)(6.962,8.000){2}{\rule{0.250pt}{0.400pt}} \multiput(431.00,392.59)(0.494,0.488){13}{\rule{0.500pt}{0.117pt}} \multiput(431.00,391.17)(6.962,8.000){2}{\rule{0.250pt}{0.400pt}} \multiput(439.00,400.59)(0.494,0.488){13}{\rule{0.500pt}{0.117pt}} \multiput(439.00,399.17)(6.962,8.000){2}{\rule{0.250pt}{0.400pt}} \multiput(447.00,408.59)(0.494,0.488){13}{\rule{0.500pt}{0.117pt}} \multiput(447.00,407.17)(6.962,8.000){2}{\rule{0.250pt}{0.400pt}} \multiput(455.00,416.59)(0.494,0.488){13}{\rule{0.500pt}{0.117pt}} \multiput(455.00,415.17)(6.962,8.000){2}{\rule{0.250pt}{0.400pt}} \multiput(463.00,424.59)(0.494,0.488){13}{\rule{0.500pt}{0.117pt}} \multiput(463.00,423.17)(6.962,8.000){2}{\rule{0.250pt}{0.400pt}} \multiput(471.00,432.59)(0.494,0.488){13}{\rule{0.500pt}{0.117pt}} \multiput(471.00,431.17)(6.962,8.000){2}{\rule{0.250pt}{0.400pt}} \multiput(479.00,440.59)(0.494,0.488){13}{\rule{0.500pt}{0.117pt}} \multiput(479.00,439.17)(6.962,8.000){2}{\rule{0.250pt}{0.400pt}} \multiput(487.00,448.59)(0.569,0.485){11}{\rule{0.557pt}{0.117pt}} \multiput(487.00,447.17)(6.844,7.000){2}{\rule{0.279pt}{0.400pt}} \multiput(495.00,455.59)(0.494,0.488){13}{\rule{0.500pt}{0.117pt}} \multiput(495.00,454.17)(6.962,8.000){2}{\rule{0.250pt}{0.400pt}} \multiput(503.00,463.59)(0.494,0.488){13}{\rule{0.500pt}{0.117pt}} \multiput(503.00,462.17)(6.962,8.000){2}{\rule{0.250pt}{0.400pt}} \multiput(511.00,471.59)(0.494,0.488){13}{\rule{0.500pt}{0.117pt}} \multiput(511.00,470.17)(6.962,8.000){2}{\rule{0.250pt}{0.400pt}} \multiput(519.00,479.59)(0.494,0.488){13}{\rule{0.500pt}{0.117pt}} \multiput(519.00,478.17)(6.962,8.000){2}{\rule{0.250pt}{0.400pt}} \multiput(527.00,487.59)(0.494,0.488){13}{\rule{0.500pt}{0.117pt}} \multiput(527.00,486.17)(6.962,8.000){2}{\rule{0.250pt}{0.400pt}} \multiput(535.00,495.59)(0.494,0.488){13}{\rule{0.500pt}{0.117pt}} \multiput(535.00,494.17)(6.962,8.000){2}{\rule{0.250pt}{0.400pt}} \multiput(543.00,503.59)(0.494,0.488){13}{\rule{0.500pt}{0.117pt}} \multiput(543.00,502.17)(6.962,8.000){2}{\rule{0.250pt}{0.400pt}} \multiput(551.00,511.59)(0.569,0.485){11}{\rule{0.557pt}{0.117pt}} \multiput(551.00,510.17)(6.844,7.000){2}{\rule{0.279pt}{0.400pt}} \multiput(559.00,518.59)(0.494,0.488){13}{\rule{0.500pt}{0.117pt}} \multiput(559.00,517.17)(6.962,8.000){2}{\rule{0.250pt}{0.400pt}} \multiput(567.00,526.59)(0.494,0.488){13}{\rule{0.500pt}{0.117pt}} \multiput(567.00,525.17)(6.962,8.000){2}{\rule{0.250pt}{0.400pt}} \multiput(575.00,534.59)(0.560,0.488){13}{\rule{0.550pt}{0.117pt}} 
\multiput(575.00,533.17)(7.858,8.000){2}{\rule{0.275pt}{0.400pt}} \multiput(584.00,542.59)(0.494,0.488){13}{\rule{0.500pt}{0.117pt}} \multiput(584.00,541.17)(6.962,8.000){2}{\rule{0.250pt}{0.400pt}} \multiput(592.00,550.59)(0.569,0.485){11}{\rule{0.557pt}{0.117pt}} \multiput(592.00,549.17)(6.844,7.000){2}{\rule{0.279pt}{0.400pt}} \multiput(600.00,557.59)(0.494,0.488){13}{\rule{0.500pt}{0.117pt}} \multiput(600.00,556.17)(6.962,8.000){2}{\rule{0.250pt}{0.400pt}} \multiput(608.00,565.59)(0.494,0.488){13}{\rule{0.500pt}{0.117pt}} \multiput(608.00,564.17)(6.962,8.000){2}{\rule{0.250pt}{0.400pt}} \multiput(616.00,573.59)(0.494,0.488){13}{\rule{0.500pt}{0.117pt}} \multiput(616.00,572.17)(6.962,8.000){2}{\rule{0.250pt}{0.400pt}} \multiput(624.00,581.59)(0.494,0.488){13}{\rule{0.500pt}{0.117pt}} \multiput(624.00,580.17)(6.962,8.000){2}{\rule{0.250pt}{0.400pt}} \multiput(632.00,589.59)(0.569,0.485){11}{\rule{0.557pt}{0.117pt}} \multiput(632.00,588.17)(6.844,7.000){2}{\rule{0.279pt}{0.400pt}} \multiput(640.00,596.59)(0.494,0.488){13}{\rule{0.500pt}{0.117pt}} \multiput(640.00,595.17)(6.962,8.000){2}{\rule{0.250pt}{0.400pt}} \multiput(648.00,604.59)(0.494,0.488){13}{\rule{0.500pt}{0.117pt}} \multiput(648.00,603.17)(6.962,8.000){2}{\rule{0.250pt}{0.400pt}} \multiput(656.00,612.59)(0.494,0.488){13}{\rule{0.500pt}{0.117pt}} \multiput(656.00,611.17)(6.962,8.000){2}{\rule{0.250pt}{0.400pt}} \multiput(664.00,620.59)(0.569,0.485){11}{\rule{0.557pt}{0.117pt}} \multiput(664.00,619.17)(6.844,7.000){2}{\rule{0.279pt}{0.400pt}} \multiput(672.00,627.59)(0.494,0.488){13}{\rule{0.500pt}{0.117pt}} \multiput(672.00,626.17)(6.962,8.000){2}{\rule{0.250pt}{0.400pt}} \multiput(680.00,635.59)(0.494,0.488){13}{\rule{0.500pt}{0.117pt}} \multiput(680.00,634.17)(6.962,8.000){2}{\rule{0.250pt}{0.400pt}} \multiput(688.00,643.59)(0.569,0.485){11}{\rule{0.557pt}{0.117pt}} \multiput(688.00,642.17)(6.844,7.000){2}{\rule{0.279pt}{0.400pt}} \multiput(696.00,650.59)(0.494,0.488){13}{\rule{0.500pt}{0.117pt}} \multiput(696.00,649.17)(6.962,8.000){2}{\rule{0.250pt}{0.400pt}} \multiput(704.00,658.59)(0.494,0.488){13}{\rule{0.500pt}{0.117pt}} \multiput(704.00,657.17)(6.962,8.000){2}{\rule{0.250pt}{0.400pt}} \multiput(712.00,666.59)(0.494,0.488){13}{\rule{0.500pt}{0.117pt}} \multiput(712.00,665.17)(6.962,8.000){2}{\rule{0.250pt}{0.400pt}} \multiput(720.00,674.59)(0.569,0.485){11}{\rule{0.557pt}{0.117pt}} \multiput(720.00,673.17)(6.844,7.000){2}{\rule{0.279pt}{0.400pt}} \multiput(728.00,681.59)(0.494,0.488){13}{\rule{0.500pt}{0.117pt}} \multiput(728.00,680.17)(6.962,8.000){2}{\rule{0.250pt}{0.400pt}} \multiput(736.00,689.59)(0.494,0.488){13}{\rule{0.500pt}{0.117pt}} \multiput(736.00,688.17)(6.962,8.000){2}{\rule{0.250pt}{0.400pt}} \multiput(744.00,697.59)(0.569,0.485){11}{\rule{0.557pt}{0.117pt}} \multiput(744.00,696.17)(6.844,7.000){2}{\rule{0.279pt}{0.400pt}} \multiput(752.00,704.59)(0.494,0.488){13}{\rule{0.500pt}{0.117pt}} \multiput(752.00,703.17)(6.962,8.000){2}{\rule{0.250pt}{0.400pt}} \multiput(760.00,712.59)(0.494,0.488){13}{\rule{0.500pt}{0.117pt}} \multiput(760.00,711.17)(6.962,8.000){2}{\rule{0.250pt}{0.400pt}} \multiput(768.00,720.59)(0.569,0.485){11}{\rule{0.557pt}{0.117pt}} \multiput(768.00,719.17)(6.844,7.000){2}{\rule{0.279pt}{0.400pt}} \multiput(776.00,727.59)(0.494,0.488){13}{\rule{0.500pt}{0.117pt}} \multiput(776.00,726.17)(6.962,8.000){2}{\rule{0.250pt}{0.400pt}} \multiput(784.00,735.59)(0.494,0.488){13}{\rule{0.500pt}{0.117pt}} \multiput(784.00,734.17)(6.962,8.000){2}{\rule{0.250pt}{0.400pt}} 
\multiput(792.00,743.59)(0.569,0.485){11}{\rule{0.557pt}{0.117pt}} \multiput(792.00,742.17)(6.844,7.000){2}{\rule{0.279pt}{0.400pt}} \multiput(800.00,750.59)(0.494,0.488){13}{\rule{0.500pt}{0.117pt}} \multiput(800.00,749.17)(6.962,8.000){2}{\rule{0.250pt}{0.400pt}} \multiput(808.00,758.59)(0.494,0.488){13}{\rule{0.500pt}{0.117pt}} \multiput(808.00,757.17)(6.962,8.000){2}{\rule{0.250pt}{0.400pt}} \multiput(816.00,766.59)(0.569,0.485){11}{\rule{0.557pt}{0.117pt}} \multiput(816.00,765.17)(6.844,7.000){2}{\rule{0.279pt}{0.400pt}} \multiput(824.00,773.59)(0.494,0.488){13}{\rule{0.500pt}{0.117pt}} \multiput(824.00,772.17)(6.962,8.000){2}{\rule{0.250pt}{0.400pt}} \multiput(832.00,781.59)(0.569,0.485){11}{\rule{0.557pt}{0.117pt}} \multiput(832.00,780.17)(6.844,7.000){2}{\rule{0.279pt}{0.400pt}} \multiput(840.00,788.59)(0.560,0.488){13}{\rule{0.550pt}{0.117pt}} \multiput(840.00,787.17)(7.858,8.000){2}{\rule{0.275pt}{0.400pt}} \multiput(849.00,796.59)(0.494,0.488){13}{\rule{0.500pt}{0.117pt}} \multiput(849.00,795.17)(6.962,8.000){2}{\rule{0.250pt}{0.400pt}} \multiput(857.00,804.59)(0.569,0.485){11}{\rule{0.557pt}{0.117pt}} \multiput(857.00,803.17)(6.844,7.000){2}{\rule{0.279pt}{0.400pt}} \multiput(865.00,811.59)(0.494,0.488){13}{\rule{0.500pt}{0.117pt}} \multiput(865.00,810.17)(6.962,8.000){2}{\rule{0.250pt}{0.400pt}} \multiput(873.00,819.59)(0.569,0.485){11}{\rule{0.557pt}{0.117pt}} \multiput(873.00,818.17)(6.844,7.000){2}{\rule{0.279pt}{0.400pt}} \multiput(881.00,826.59)(0.494,0.488){13}{\rule{0.500pt}{0.117pt}} \multiput(881.00,825.17)(6.962,8.000){2}{\rule{0.250pt}{0.400pt}} \multiput(889.00,834.59)(0.494,0.488){13}{\rule{0.500pt}{0.117pt}} \multiput(889.00,833.17)(6.962,8.000){2}{\rule{0.250pt}{0.400pt}} \multiput(897.00,842.59)(0.569,0.485){11}{\rule{0.557pt}{0.117pt}} \multiput(897.00,841.17)(6.844,7.000){2}{\rule{0.279pt}{0.400pt}} \multiput(905.00,849.59)(0.494,0.488){13}{\rule{0.500pt}{0.117pt}} \multiput(905.00,848.17)(6.962,8.000){2}{\rule{0.250pt}{0.400pt}} \multiput(913.00,857.59)(0.569,0.485){11}{\rule{0.557pt}{0.117pt}} \multiput(913.00,856.17)(6.844,7.000){2}{\rule{0.279pt}{0.400pt}} \multiput(921.00,864.59)(0.494,0.488){13}{\rule{0.500pt}{0.117pt}} \multiput(921.00,863.17)(6.962,8.000){2}{\rule{0.250pt}{0.400pt}} \multiput(929.00,872.59)(0.569,0.485){11}{\rule{0.557pt}{0.117pt}} \multiput(929.00,871.17)(6.844,7.000){2}{\rule{0.279pt}{0.400pt}} \multiput(937.00,879.59)(0.494,0.488){13}{\rule{0.500pt}{0.117pt}} \multiput(937.00,878.17)(6.962,8.000){2}{\rule{0.250pt}{0.400pt}} \multiput(945.00,887.59)(0.494,0.488){13}{\rule{0.500pt}{0.117pt}} \multiput(945.00,886.17)(6.962,8.000){2}{\rule{0.250pt}{0.400pt}} \multiput(953.00,895.59)(0.569,0.485){11}{\rule{0.557pt}{0.117pt}} \multiput(953.00,894.17)(6.844,7.000){2}{\rule{0.279pt}{0.400pt}} \multiput(961.00,902.59)(0.494,0.488){13}{\rule{0.500pt}{0.117pt}} \multiput(961.00,901.17)(6.962,8.000){2}{\rule{0.250pt}{0.400pt}} \multiput(969.00,910.59)(0.569,0.485){11}{\rule{0.557pt}{0.117pt}} \multiput(969.00,909.17)(6.844,7.000){2}{\rule{0.279pt}{0.400pt}} \put(182.0,131.0){\rule[-0.200pt]{0.400pt}{191.515pt}} \put(182.0,131.0){\rule[-0.200pt]{191.515pt}{0.400pt}} \put(977.0,131.0){\rule[-0.200pt]{0.400pt}{191.515pt}} \put(182.0,926.0){\rule[-0.200pt]{191.515pt}{0.400pt}} \end{picture} \end{tabular} \end{center} \caption{\small The behavior of $\langle{X^2(t)}\rangle$ as a function of $t$ for various values of the resource density $\rho$. Together with the curve, the figure shows its best approximation as $\langle{X^2(t)}\rangle\sim{x^\nu}$. 
The coefficient $\nu$ of the best approximation varies with $\rho$; on the significance of this variation, see the text.}
\label{variances}
\end{figure}

We note that in the case of high food density the random walk does not exhibit the characteristics of ARS (this observation will be made more formal later on), simply because the individual is always, or almost always, on food. As the density decreases, the behavior becomes more characteristic of ARS. In order to study the characteristics of these walks, we consider the mean square displacement from the initial position:
\begin{equation}
\langle X^2 \rangle \stackrel{\triangle}{=} \langle (x-x_0)^2 + (y-y_0)^2 \rangle.
\end{equation}
Let the individual be fixed (viz.\ let it be the best performing individual). We execute, with this individual, $N$ random walks on the environment with the prescribed density, each of length $T$. Let
\begin{equation}
w^i = [p_0^i,\ldots,p_t^i,\ldots,p_T^i]
\end{equation}
be the $i$th random walk, where $p_t^i=(x_t^i-x_0,y_t^i-y_0)$. We are interested in knowing how far the individual has gone, on average, from its initial position at time $t$, that is, in studying the function
\begin{equation}
\langle X^2(t) \rangle = \frac{1}{N} \sum_{i=1}^N \|p_t^i\|^2.
\end{equation}
Figure~\ref{variances} shows the behavior of $\langle{X^2(t)}\rangle$ as a function of $t$ for various values of the density $\rho$, together with the best approximation of the form $\langle{X^2(t)}\rangle\sim{t^\nu}$, where the exponent $\nu$ depends on $\rho$:

\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
$\rho$ & 0.01 & 0.1 & 0.5 & 0.9\\
\hline
$\nu$ & 3.78 & 1.8 & 1.1 & 0.95 \\
\hline
\end{tabular}
\end{center}

Figure~\ref{varplot} shows the behavior of the exponent $\nu$ as a function of $\rho$. As the density approaches 1 or 0, that is, as the environment becomes more homogeneous (either with a lot of food or with very little), the exponent approaches 1, and we approach a situation in which $\langle{X^2(t)}\rangle\sim{t}$. This, as we shall see, is the behavior characteristic of Brownian motion, as well as of several other types of random walks. This should not come as a surprise: when food can be found everywhere, the individuals have no particular reason to modify their behavior, and will simply move to and fro in a haphazard manner: they will do a random walk. The same is true if there is very little food: the patches are so small and far apart that the in-patch behavior lasts for a very short time and does not significantly change the characteristics of the walk, which becomes a random walk from patch to patch in search of resources. When the food is patchy, on the other hand, the random walk of the individual does not follow the standard Brownian model.

In the following sections I shall consider the foraging walk more closely from a mathematical point of view. We shall begin, in the next section, by considering the switching from the on-patch to the off-patch behavior, without taking into account the spatial characteristics of the environment. Then, in the section after that, we shall study random walks in search of a model that fits the characteristics of a forager on patchy resources. We shall see that such a model is given by the so-called \textsl{L\'evy walks}.
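Before moving on, and for concreteness only, the following Python sketch reproduces the measurement of $\nu$ in a simplified setting: a plain lattice random walk stands in for the evolved individual (neither the environment nor the genetic controller is reproduced here), so the fitted exponent comes out close to the Brownian value $\nu\approx{1}$; the values of $N$, $T$ and the seed are arbitrary. Applied to the walks of the evolved individuals, the same log-log fit yields the exponents reported in the table above.

\begin{verbatim}
import numpy as np

def mean_square_displacement(walks):
    # walks: array of shape (N, T + 1, 2) with positions p_t^i - p_0^i
    return (walks ** 2).sum(axis=2).mean(axis=0)   # <X^2(t)>, t = 0 .. T

rng = np.random.default_rng(0)
N, T = 500, 2000
moves = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]])  # lattice directions
steps = rng.integers(0, 4, size=(N, T))               # N walks of length T
walks = np.concatenate([np.zeros((N, 1, 2)),
                        np.cumsum(moves[steps], axis=1)], axis=1)

msd = mean_square_displacement(walks)
t = np.arange(1, T + 1)
nu, _ = np.polyfit(np.log(t), np.log(msd[1:]), 1)     # slope in log-log scale
print(f"fitted exponent nu = {nu:.2f}")               # ~1 for Brownian motion
\end{verbatim}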
\begin{figure}[thbp]
\begin{center}
% [gnuplot-generated picture code omitted: the plot shows the exponent $\nu$
% (vertical axis, from 1 to 3) against the density $\rho$ (horizontal axis,
% from 0 to 0.8), with the simulated values drawn as squares and the spline
% interpolation as a continuous line, under the title
% ``Exponent $\nu$ of $\langle{X^2(t)}\rangle\sim{t^\nu}$''.]
\end{center}
\caption{\small
Exponent of the best approximation $\langle{X^2(t)}\rangle\sim{t^\nu}$ as a function of $\rho$. The squares are the values calculated from the simulation (in each case the best gene resulting from the genetic algorithm has been used); the continuous line is a spline interpolation \cite{press:86}. The value $\nu\sim{1}$ is characteristic of standard Brownian motion. For very low and very high $\rho$, the walk is essentially Brownian: when the density is very low, the forager wanders long distances and spends comparatively little time on each patch; when $\rho\sim{1}$, there is no ARS, as food is everywhere. The region $0.1<\rho<0.4$ is that in which ARS is clearly taking place.}
\label{varplot}
\end{figure}

\section{\sc Should I Stay or Should I Go?}

In the next section, I shall analyze the global characteristics of ARS exploration, considering it as a random walk such as those that emerge from our genetic experiment. Before that, in this section I shall consider a more basic problem. Suppose that you are in a patch. For a while, you stay there happily eating, as there is plenty of food. After a while, the food begins to dwindle: the resources of the patch begin to be exhausted. When is it a good time to leave? You are confronted with two contrasting criteria. On the one hand, staying means that you can continue eating without having to make a possibly long journey without food before you find another patch; in the long run, you want to spend more time in patches and less between them. On the other hand, the next patch that you find will have plenty of food, so it might be a good idea to move now to greener pastures instead of half starving in this half-barren patch. When is it a good time to leave? This is the question I want to answer in this section.

I shall consider a very simple model: I analyze only the time that an individual spends on a patch ($t_p$) and the time that it spends between patches ($t_b$), and how to optimize them for maximum foraging efficiency. In this, I follow essentially the techniques developed for \textsl{optimal foraging theory} \cite{stephens:86}. Suppose that a forager searches for food for a certain (long) amount of time. It spends a total time $T_b$ looking for the next patch, and a total time $T_p$ staying on patches and eating (all the symbols used in this section are shown in Table~\ref{forasymbols}).

\begin{table}[bht]
\begin{center}
\begin{tabular}{|l|p{25em}|}
\hline
\multicolumn{1}{|c|}{Symbol} & \multicolumn{1}{|c|}{Meaning} \\
\hline
$T_b$ & Total time spent looking for a patch, \\
$T_p$ & total time spent on a patch, \\
$t_b$ & average time spent looking for a patch, \\
$t_p$ & average time spent on a patch, \\
$G$ & total resource gain, \\
$g$ & average gain per patch, \\
$g(t)$ & gain per patch as a function of the time spent on a patch, \\
$R$ & rate of reward: average gain per time unit, \\
$\pi$ & ${=g/t_p}$, profitability of a patch (resource per unit time when on the patch), \\
$\lambda$ & average number of patches found per unit time, \\
$P$ & number of types of patches, \\
$p_i$ & probability of using a resource of type $i$.
\\
\hline
\end{tabular}
\end{center}
\caption{\small Symbols used in this section.}
\label{forasymbols}
\end{table}

If the total gain of the activity is $G$, then the rate of reward (reward per unit time) is
\begin{equation}
\label{totalgain}
R = \frac{G}{T_b+T_p}.
\end{equation}
This equation is inconvenient, as it depends on the total times $T_p$ and $T_b$ and on the total gain $G$ (if the time spent foraging goes to infinity, both the numerator and the denominator go to infinity). One can derive a more convenient equation, independent of the actual foraging time, by considering the average on-patch time $t_p$, the average time between patches $t_b$, and the average gain per patch $g$. The rate at which patches are discovered, that is, the number of patches discovered per unit time, is $\lambda=1/t_b$. The total number of patches discovered during foraging is therefore $\lambda{T_b}$. The total gain and the total time spent on patches depend on this number, that is:
\begin{equation}
\begin{aligned}
G &= \lambda T_b g \\
T_p &= \lambda T_b t_p
\end{aligned}
\end{equation}
Introducing these values in (\ref{totalgain}) we have
\begin{equation}
R = \frac{\lambda T_b g}{T_b + \lambda T_b t_p} = \frac{\lambda g}{1 + \lambda t_p}
\end{equation}
or, in terms of average times,
\begin{equation}
R = \frac{g/t_b}{1 + t_p/t_b} = \frac{g}{t_b+t_p}.
\end{equation}
This equation is known as the \textsl{Holling disk} equation \cite{holling:59}%
\footnote{The name \dqt{disk equation} has nothing to do with the properties of the equation: Holling developed his model by studying the behavior of a blindfolded research assistant who was given the task of picking up randomly scattered sandpaper disks.}%
. Define the profitability of a patch as $\pi=g/t_p$, that is, the gain per unit of time spent on the patch. With this definition we have:
\begin{equation}
R = \frac{\pi}{{\displaystyle 1+\frac{t_b}{t_p}}}.
\end{equation}
When the patches become denser and denser, $t_b\rightarrow{0}$, and
\begin{equation}
\label{pilimit}
\lim_{t_b\rightarrow{0}} \frac{\pi}{{\displaystyle 1+\frac{t_b}{t_p}}} = \pi.
\end{equation}

This simple model can be extended in several ways. One very useful extension is to consider that a patch has \textsl{diminishing returns}: as the forager spends time on a patch, its resources become depleted, so it becomes harder to get rewards, and the profitability of the patch is reduced. That is, the reward $g$ is a function of $t$, $g(t)$, that tells us how much reward one accumulates while foraging on a patch for a time equal to $t$. The profitability is also a function of time: $\pi(t)=g(t)/t$. Physical considerations place certain constraints on these functions. The gain is positive and cumulative (you never lose what you have gained), which implies $g(t)\ge{0}$ and $g'(t)\ge{0}$; the first inequality also entails $\pi(t)\ge{0}$. On the other hand, it is reasonable to assume that, as time goes by and the resources become depleted, it will take longer and longer to amass the same amount of reward; this entails $\pi'(t)\le{0}$. These two relations imply $\lim_{t\rightarrow\infty}\pi(t)=C\ge{0}$. The condition $\pi'(t)\le{0}$ imposes conditions on $g'(t)$. From $\pi(t)=g(t)/t$, we have
\begin{equation}
\pi'(t) = \frac{1}{t^2}\Bigl[g'(t)t-g(t)\Bigr] \le 0,
\end{equation}
that is,
\begin{equation}
\label{gprime}
g'(t)\le \frac{g(t)}{t} = \pi(t).
\end{equation}
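As a quick numerical sanity check of these constraints, the Python sketch below verifies $g'(t)\ge{0}$, $\pi'(t)\le{0}$ and $g'(t)\le\pi(t)$ on a sampled grid for an illustrative saturating gain $g(t)=g_\infty(1-e^{-t/\tau})$; this particular $g$, and the values of $g_\infty$ and $\tau$, are assumptions made for the example only, not part of the model.

\begin{verbatim}
import numpy as np

# Illustrative saturating gain (an assumption, not prescribed by the text):
# g(t) = g_inf * (1 - exp(-t/tau)), so g' >= 0, g'' <= 0, g -> g_inf.
g_inf, tau = 1.0, 1.0
t = np.linspace(1e-3, 20.0, 2000)
g   = g_inf * (1.0 - np.exp(-t / tau))
gp  = (g_inf / tau) * np.exp(-t / tau)     # g'(t)
pi  = g / t                                # profitability pi(t) = g(t)/t
pip = (gp * t - g) / t**2                  # pi'(t)

assert np.all(gp >= 0.0)                   # gain never decreases
assert np.all(pip <= 1e-12)                # profitability never increases
assert np.all(gp <= pi + 1e-12)            # g'(t) <= g(t)/t = pi(t)
print("constraints hold on the sampled grid")
\end{verbatim}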
I assume certain regularity conditions: in particular, that $\pi(t)$ decreases without \dqt{bumps}, that is, that $\pi'$ is monotonically increasing, which entails $\pi''\ge{0}$; similarly, I assume $g''(t)\le{0}$. Note that in the most common case the patch will eventually be depleted, that is,
\begin{equation}
\lim_{t\rightarrow\infty} g(t) = g_\infty > 0,
\end{equation}
but the analysis applies to the more general case in which $g(t)$ goes to infinity more slowly than a linear function.

Given the average between-patches time $t_b$, we are interested in finding the optimal time $t_p$ that the forager should spend on a patch in order to maximize the reward $R$. The idea is that if you spend too little time on a patch, you do not take full advantage of its resources and spend comparatively too much time between patches, where you have no reward: your rate of gain is reduced. On the other hand, since the resources get depleted as you stay on a patch, if you spend too much time there you waste your time on a depleted patch that does not yield much, while it would be more convenient to invest some time ($t_b$) to find a new patch with a better yield. Given the equality
\begin{equation}
R(t) = \frac{g(t)}{t_b+t},
\end{equation}
compute the derivative
\begin{equation}
\frac{\partial R}{\partial t} = \frac{g'(t)(t_b+t)-g(t)}{(t_b+t)^2}.
\end{equation}
It is $\partial{R}/\partial{t}=0$ if $g'(t)(t_b+t)-g(t)=0$, that is, if
\begin{equation}
g'(t)=\frac{g(t)}{t_b+t}=R.
\end{equation}
This result is known as \textsl{Charnov's Marginal Value Theorem} \cite{charnov:76}. The equation has a simple geometric interpretation: the average gain $R$ results in a straight line in a $t$-$g$ diagram (Figure~\ref{patchdiag}).
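To make the theorem concrete, the sketch below solves Charnov's condition $g'(t)=g(t)/(t_b+t)$ for the same illustrative saturating gain as above: with $u=t/\tau$ the condition reduces to $e^u-u-1=t_b/\tau$, whose left-hand side is increasing for $u>0$, so plain bisection suffices. The function name and the parameter values are, again, chosen only for the example.

\begin{verbatim}
import math

def optimal_patch_time(t_b, tau=1.0, g_inf=1.0):
    """Solve Charnov's condition g'(t) = g(t)/(t_b + t) for the gain
    g(t) = g_inf*(1 - exp(-t/tau)); with u = t/tau this reduces to
    e^u - u - 1 = t_b/tau, solved here by bisection."""
    beta = t_b / tau
    lo, hi = 1e-9, 50.0
    for _ in range(200):                   # e^u - u - 1 grows with u > 0
        mid = 0.5 * (lo + hi)
        if math.exp(mid) - mid - 1.0 > beta:
            hi = mid
        else:
            lo = mid
    t_p = tau * 0.5 * (lo + hi)
    R = g_inf * (1.0 - math.exp(-t_p / tau)) / (t_b + t_p)
    return t_p, R

for t_b in (0.1, 1.0, 10.0):
    t_p, R = optimal_patch_time(t_b)
    print(f"t_b = {t_b:5.1f} -> optimal t_p = {t_p:.3f}, rate R = {R:.3f}")
\end{verbatim}

As expected, the scarcer the patches (the longer $t_b$), the longer the optimal stay $t_p$ on each patch.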
\begin{figure}
\begin{center}
\setlength{\unitlength}{1em}
\begin{picture}(17.746531,10.000000)(-6.760408,0)
\thicklines
\put(-6.760408,0){\line(1,0){17.746531}}
\put(0,0){\line(0,1){10.000000}}
\thinlines
% [gnuplot-generated dot data for the gain curve $g(t)$ omitted]
\put(-4.506939,0){\line(3,2){15.493061}}
\multiput(5.493061,-1)(0,0.766667){12}{\line(0,1){0.383333}}
\put(5.493061,-0.8){\makebox(0,0)[t]{$t_p$}}
\put(-4.506939,-0.8){\makebox(0,0)[t]{$-t_b$}}
\put(10.986123,8.444444){\makebox(0,0)[tl]{$g(t)$}}
\put(10.986123,10.328708){\makebox(0,0)[tl]{$R$}}
\put(-3.605551,0.600925){\line(0,1){1}}
\put(-3.605551,1.600925){\line(1,0){1.500000}}
\put(-3.805551,0.800925){\makebox(0,0)[br]{$R$}}
\end{picture}
\end{center}
\caption{\small A graphical illustration of Charnov's Marginal Value Theorem. The patch time $t_p$ that maximizes the rate of gain $R$ occurs when the line with slope $R$ is tangent to the function $g(t)$, that is, when the instantaneous rate of gain on the patch equals the average rate of gain $R$.}
\label{patchdiag}
\end{figure}

The patch gain is a curve that stays at zero for a time $t_b$ and then grows as $g(t)$; the optimal $t_p$ occurs where the slope of the curve $g(t)$ equals the average rate of gain. Note that there are two ways in which the environment can change so that $R$ increases. First, the profitability of the patch may increase, that is, the curve $g(t)$ may be pulled upward (Figure~\ref{increase_patch}.a). Second, the patches may become denser, reducing $t_b$ (Figure~\ref{increase_patch}.b). As $t_b\rightarrow{0}$, the rate $R$ approaches $\pi$ (the profitability of the patch), as per (\ref{pilimit}).
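Both effects can be verified with the \texttt{optimal\_patch\_time} sketch above (still under the same illustrative gain): increasing $g_\infty$ pulls $g(t)$ upward and scales $R$ proportionally, while reducing $t_b$ both raises $R$ and shortens the optimal stay.

\begin{verbatim}
# Assumes optimal_patch_time from the previous sketch is in scope.
for g_inf in (1.0, 2.0):                 # richer patch: g pulled upward
    _, R = optimal_patch_time(t_b=1.0, g_inf=g_inf)
    print(f"g_inf = {g_inf}: R = {R:.3f}")
for t_b in (1.0, 0.1):                   # denser patches: t_b reduced
    t_p, R = optimal_patch_time(t_b=t_b)
    print(f"t_b = {t_b}: t_p = {t_p:.3f}, R = {R:.3f}")
\end{verbatim}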
\begin{figure}
\begin{center}
% [plots omitted: panel (a) shows the gain curve $g(t)$ pulled upward to a more profitable curve, raising the tangent slope from $R$ to $R'>R$; panel (b) shows the same curve $g(t)$ with a shorter between-patch time, which also raises the tangent slope from $R$ to $R'>R$]
\end{center}
\caption{\small As a consequence of Charnov's Marginal Value Theorem, there are two ways to increase the average rate of gain $R$: one can either increase the profitability of a patch, i.e.\ raising the gain curve $g(t)$ as in (a), or make the patches denser, thereby decreasing the average between-patch time $t_b$, as in (b).}
\label{increase_patch}
\end{figure}

\example Suppose we are looking for a low-priced hotel in Paris. We have several web sites available, and our strategy is to log in to one, check prices for a while looking for the cheapest one, then either move to another site looking for a better price or simply stop looking and accept the lowest price we have found. The question is: how long should we stay on a site and keep looking before we move on?
When we are on a page, the important events are the prices that we look at, so, for the sake of convenience, we take as the time unit the time that it takes to move from one hotel to the next one on the same page, so that at time $t=n$ we have looked at $n+1$ prices (we look at the first price at $t=0$). The actual length of this unit depends on what we are looking for: if we look just for the best price, the interval is very small; if we look for a price subject to certain constraints (hotel with a bar, with a sauna, etc.), it will take longer. In any case, we look at one new hotel (on the same page) per unit of time. In these units, let $t_b$ be the time that it takes to get set up on a page (including typing the address, logging in, etc.).

We begin by determining the expected value of the minimum price that we have observed if we have observed $n$ prices. In this simple example we shall favor simplicity over plausibility and we shall assume that the prices of the hotels are uniformly distributed in the interval $[\mu-a,\mu+a]$. To begin with, we shall answer an even simpler question: given $n$ observations $X=\{x_1,\ldots,x_n\}$ of $n$ independent random variables uniformly distributed in $[-1,1]$, what is the expected value of $\min(X)$? The cumulative distribution and the density for each of the $x_i$ are
\begin{equation}
\Phi(x) =
\begin{cases}
0 & x \le -1 \\
\frac{x+1}{2} & -1<x<1 \\
1 & x\ge 1
\end{cases}
\end{equation}
and
\begin{equation}
\phi(x) =
\begin{cases}
\frac{1}{2} & -1\le{x}\le{1} \\
0 & \mbox{otherwise}
\end{cases}
\end{equation}
respectively. The probability density for the minimum is given by (\ref{mindense}):
\begin{equation}
\phi_{\min}(x) = n[1 - \Phi(x)]^{n-1}\phi(x) =
\begin{cases}
\frac{n}{2^n}[1-x]^{n-1} & (-1\le{x}\le{1}) \\
0 & \mbox{otherwise}
\end{cases}
\end{equation}
and its expected value is (writing $u=1-(1-u)$ and then substituting $u\mapsto 1-u$)
\begin{equation}
\begin{aligned}
m(n) &= \int_{-1}^{1} u \phi_{\min}(u) du = \frac{n}{2^n} \int_{-1}^{1} u(1-u)^{n-1} du \\
&= \frac{n}{2^n} \left[ \int_{-1}^{1} (1-u)^{n-1}du - \int_{-1}^{1}(1-u)^ndu\right] \\
&= \frac{n}{2^n} \left[ - \int_{2}^{0} u^{n-1}du + \int_{2}^{0}u^n du\right] \\
&= \frac{n}{2^n} \left[ \frac{2^n}{n} - \frac{2^{n+1}}{n+1} \right] \\
&= \frac{1-n}{1+n}
\end{aligned}
\end{equation}
Scaling and shifting, one obtains, for variables distributed in $[\mu-a,\mu+a]$:
\begin{equation}
m(n) = \mu - a \frac{n-1}{n+1}
\end{equation}
Note that if we only observe one price, the expected value of the minimum is $\mu$. As we observe more prices, the expected value decreases, with
\begin{equation}
\lim_{t\rightarrow\infty} m(t) = \mu-a
\end{equation}
Let us consider that we start exploring at the time when we observe the first price, and that everything that comes before that is preparation that is included in the time $t_b$. At time $t$, we have looked at $t+1$ prices, and the expected value for the minimum price is $m(t+1)$.
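As a quick sanity check on this formula, the following simulation sketch (sample sizes are arbitrary) compares $m(n)=\frac{1-n}{1+n}$ with the empirical mean of the minimum of $n$ uniform draws:
\begin{verbatim}
# Monte Carlo check of m(n) = (1-n)/(1+n) for the expected minimum
# of n i.i.d. prices uniform on [-1,1]; sample sizes are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
for n in (1, 2, 5, 10, 50):
    x = rng.uniform(-1.0, 1.0, size=(200000, n))
    empirical = x.min(axis=1).mean()    # average observed minimum
    exact = (1 - n) / (1 + n)
    print(n, round(empirical, 3), round(exact, 3))
# Shifting and scaling to [mu-a, mu+a] gives mu - a*(n-1)/(n+1).
\end{verbatim}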
The gain that we have obtained is the difference between the first price that we saw (expected value equal to $\mu$) and the current one:
\begin{equation}
\label{gdef}
g(t) = m(1) - m(t+1) = a \frac{t}{t+2}
\end{equation}
Up to now, I have considered $t$ as a discrete variable (the number of prices observed); from now on we regard it as a continuous variable, so I can take the derivative:
\begin{equation}
g'(t) = \frac{2a}{(t+2)^2}
\end{equation}
I now apply Charnov's Marginal Value Theorem: the optimal time to spend on the site is given by
\begin{equation}
g'(t) = \frac{2a}{(t+2)^2} = \frac{at}{t+2} \frac{1}{t+t_b} = \frac{g(t)}{t+t_b}
\end{equation}
which has solution $\tau_1=\sqrt{2t_b}$%
\footnote{Dimensionally, the equation is sound: the factor $t+2$ in (\ref{gdef}) entails that the term 2 has the dimensions of a time.}%
. The corresponding rate of reward is
\begin{equation}
\label{t1bum}
R_1 = g'(\tau_1) = \frac{2a}{(\sqrt{2t_b}+2)^2} = \frac{a}{(\sqrt{t_b}+\sqrt{2})^2}.
\end{equation}
Note that the time that we spend on a site grows sub-linearly with the time it takes to get set up on it: if it takes twice as long to get set up on the web page, the time we should spend on it grows like $\sqrt{2}$, that is, we should stay about 40\% longer.
~~\\\hfill(end of example)\\\bigskip

\example The considerations of the previous example are valid for the first \dqt{patch}, that is, for the first site that we visit. Suppose now that we want to visit a second site (the time necessary to do the switch is assumed to be $t_b$). Now we already have a minimum, the one that we found in the first patch, namely
\begin{equation}
m_1 \stackrel{\triangle}{=} \mu - a\frac{\tau_1}{\tau_1+2} = \mu - a \frac{\sqrt{t_b}}{\sqrt{t_b}+\sqrt{2}} \stackrel{\triangle}{=} \mu - a\alpha
\end{equation}
with
\begin{equation}
\alpha \stackrel{\triangle}{=} \frac{\sqrt{t_b}}{\sqrt{t_b}+\sqrt{2}} < 1
\end{equation}
While we explore the second site, as long as the minimum price that we find there is greater than $m_1$, our gain is zero. Assume that the second patch has the same average price as the first, but a larger spread, that is, in the second patch the prices are uniformly distributed in $[\mu-b,\mu+b]$, with $b>a$. Also, define $\gamma=b/a>1$ (I shall need it later). The expected minimum after we have explored the second site for a time $t$ is
\begin{equation}
\tilde{m}(t) = \mu - b\frac{t}{t+2}
\end{equation}
The expected minimum in the second site equals the minimum found in the first at a time $t_0$ such that $\tilde{m}(t_0)=m_1$, that is
\begin{equation}
t_0 = \frac{2a\alpha}{b-a\alpha} = \frac{2\alpha}{\gamma-\alpha}
\end{equation}
The gain in the second site is given by the savings we obtain over the previous minimum, that is
\begin{equation}
g_2(t) =
\begin{cases}
0 & t<t_0 \\
b\frac{t}{t+2} - a\alpha & t \ge t_0
\end{cases}
\end{equation}
and
\begin{equation}
g_2^\prime(t) =
\begin{cases}
0 & t<t_0 \\
\frac{2b}{(t+2)^2} & t \ge t_0
\end{cases}
\end{equation}
The situation is depicted schematically in figure~\ref{patchdiag2}.
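To make these closed-form quantities concrete, here is a small numerical sketch; the values $a=1$, $b=1.5$ (so $\gamma=1.5$) and $t_b=4$ are invented for illustration only:
\begin{verbatim}
# Numerical sketch of the closed-form results of the two examples,
# for the illustrative values a = 1, b = 1.5 (gamma = 1.5), t_b = 4.
import math

a, b, t_b = 1.0, 1.5, 4.0
gamma = b / a

tau1 = math.sqrt(2 * t_b)                    # optimal time on site 1
R1 = a / (math.sqrt(t_b) + math.sqrt(2))**2  # rate of reward on site 1
alpha = math.sqrt(t_b) / (math.sqrt(t_b) + math.sqrt(2))
t0 = 2 * alpha / (gamma - alpha)             # time before site 2 beats m_1

print(tau1, R1, alpha, t0)
# Consistency: R1 is just g'(tau1) = 2a/(tau1+2)^2 in another form.
assert abs(R1 - 2 * a / (tau1 + 2)**2) < 1e-12
\end{verbatim}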
\begin{figure}
\begin{center}
\setlength{\unitlength}{1.5em}
\begin{picture}(24.722816,6.630996)(0,0)
\put(0,0){\line(1,0){24.722816}}
% [plot data omitted: gain curve $g_1(t)$ on the first site and the tangent line of slope $R_1$]
\put(6.264911,1.937129){\makebox(0,0)[lt]{$g_1(t)$}}
\put(4.500000,1.788612){\makebox(0,0)[rb]{$R_1$}}
\put(8.162278,0){\line(0,1){3.369158}}
\put(8.162278,3.062871){\line(1,0){6.240750}}
\put(0,0){\line(0,-1){0.500000}}
\put(5.000000,0){\line(0,-1){0.500000}}
\put(8.162278,0){\line(0,-1){0.500000}}
\put(13.162278,0){\line(0,-1){0.500000}}
\put(2.500000,-0.300000){\makebox(0,0){$t_b$}}
\put(6.581139,-0.300000){\makebox(0,0){$\tau_1$}}
\put(10.662278,-0.300000){\makebox(0,0){$t_b$}}
\put(8.262278,2.144009){\makebox(0,0)[l]{$m_1$}}
% [plot data omitted: gain curve $g_2(t)$ on the second site, starting after the second setup time $t_b$]
\put(16.455971,4.977535){\circle*{0.0001}} \put(16.552844,5.031852){\circle*{0.0001}} \put(16.649717,5.084250){\circle*{0.0001}} \put(16.746591,5.134831){\circle*{0.0001}} \put(16.843464,5.183687){\circle*{0.0001}} \put(16.940337,5.230904){\circle*{0.0001}} \put(17.037211,5.276565){\circle*{0.0001}} \put(17.134084,5.320744){\circle*{0.0001}} \put(17.230957,5.363512){\circle*{0.0001}} \put(17.327831,5.404937){\circle*{0.0001}} \put(17.424704,5.445080){\circle*{0.0001}} \put(17.521577,5.484000){\circle*{0.0001}} \put(17.618451,5.521752){\circle*{0.0001}} \put(17.715324,5.558388){\circle*{0.0001}} \put(17.812197,5.593956){\circle*{0.0001}} \put(17.909071,5.628503){\circle*{0.0001}} \put(18.005944,5.662072){\circle*{0.0001}} \put(18.102817,5.694704){\circle*{0.0001}} \put(18.199691,5.726437){\circle*{0.0001}} \put(18.296564,5.757309){\circle*{0.0001}} \put(18.393437,5.787354){\circle*{0.0001}} \put(18.490311,5.816604){\circle*{0.0001}} \put(18.587184,5.845091){\circle*{0.0001}} \put(18.684057,5.872844){\circle*{0.0001}} \put(18.780931,5.899891){\circle*{0.0001}} \put(18.877804,5.926259){\circle*{0.0001}} \put(18.974677,5.951974){\circle*{0.0001}} \put(19.071551,5.977058){\circle*{0.0001}} \put(19.168424,6.001535){\circle*{0.0001}} \put(19.265297,6.025427){\circle*{0.0001}} \put(19.362171,6.048755){\circle*{0.0001}} \put(19.459044,6.071538){\circle*{0.0001}} \put(19.555917,6.093795){\circle*{0.0001}} \put(19.652791,6.115544){\circle*{0.0001}} \put(19.749664,6.136802){\circle*{0.0001}} \put(19.846537,6.157586){\circle*{0.0001}} \put(19.943411,6.177912){\circle*{0.0001}} \put(20.040284,6.197794){\circle*{0.0001}} \put(20.137157,6.217246){\circle*{0.0001}} \put(20.234031,6.236284){\circle*{0.0001}} \put(20.330904,6.254919){\circle*{0.0001}} \put(20.427777,6.273164){\circle*{0.0001}} \put(20.524651,6.291032){\circle*{0.0001}} \put(20.621524,6.308533){\circle*{0.0001}} \put(20.718397,6.325680){\circle*{0.0001}} \put(20.815271,6.342483){\circle*{0.0001}} \put(20.912144,6.358952){\circle*{0.0001}} \put(21.009017,6.375097){\circle*{0.0001}} \put(21.105891,6.390927){\circle*{0.0001}} \put(21.202764,6.406452){\circle*{0.0001}} \put(21.299637,6.421680){\circle*{0.0001}} \put(21.396511,6.436620){\circle*{0.0001}} \put(21.493384,6.451279){\circle*{0.0001}} \put(21.590257,6.465666){\circle*{0.0001}} \put(21.687131,6.479789){\circle*{0.0001}} \put(21.784004,6.493654){\circle*{0.0001}} \put(21.880877,6.507268){\circle*{0.0001}} \put(21.977751,6.520638){\circle*{0.0001}} \put(22.074624,6.533771){\circle*{0.0001}} \put(22.171497,6.546673){\circle*{0.0001}} \put(22.268371,6.559349){\circle*{0.0001}} \put(22.365244,6.571807){\circle*{0.0001}} \put(22.462117,6.584051){\circle*{0.0001}} \put(22.558991,6.596086){\circle*{0.0001}} \put(22.655864,6.607919){\circle*{0.0001}} \put(22.752737,6.619554){\circle*{0.0001}} \put(14.403027,0){\line(0,1){3.062871}} \put(13.782652,-0.300000){\makebox(0,0){$t_0$}} \put(14.403027,0){\line(0,-1){0.500000}} \put(13.162278,-0.750000){\vector(1,0){8.072777}} \put(17.198666,-0.800000){\makebox(0,0)[t]{$\tau_2$}} \put(15.584111,1.318721){\makebox(0,0)[lt]{$g_2(t)$}} \put(17.198666,2.912166){\makebox(0,0)[rb]{$R_2$}} \put(14.403027,0.000000){\circle*{0.0001}} \put(14.445260,0.063512){\circle*{0.0001}} \put(14.487493,0.125411){\circle*{0.0001}} \put(14.529726,0.185757){\circle*{0.0001}} \put(14.571959,0.244609){\circle*{0.0001}} \put(14.614192,0.302020){\circle*{0.0001}} \put(14.656425,0.358043){\circle*{0.0001}} \put(14.698658,0.412729){\circle*{0.0001}} 
\put(14.740891,0.466123){\circle*{0.0001}} \put(14.783123,0.518272){\circle*{0.0001}} \put(14.825356,0.569219){\circle*{0.0001}} \put(14.867589,0.619004){\circle*{0.0001}} \put(14.909822,0.667667){\circle*{0.0001}} \put(14.952055,0.715246){\circle*{0.0001}} \put(14.994288,0.761775){\circle*{0.0001}} \put(15.036521,0.807291){\circle*{0.0001}} \put(15.078754,0.851824){\circle*{0.0001}} \put(15.120987,0.895408){\circle*{0.0001}} \put(15.163220,0.938071){\circle*{0.0001}} \put(15.205453,0.979843){\circle*{0.0001}} \put(15.247686,1.020752){\circle*{0.0001}} \put(15.289918,1.060823){\circle*{0.0001}} \put(15.332151,1.100083){\circle*{0.0001}} \put(15.374384,1.138555){\circle*{0.0001}} \put(15.416617,1.176264){\circle*{0.0001}} \put(15.458850,1.213231){\circle*{0.0001}} \put(15.501083,1.249478){\circle*{0.0001}} \put(15.543316,1.285027){\circle*{0.0001}} \put(15.585549,1.319897){\circle*{0.0001}} \put(15.627782,1.354107){\circle*{0.0001}} \put(15.670015,1.387677){\circle*{0.0001}} \put(15.712248,1.420623){\circle*{0.0001}} \put(15.754481,1.452963){\circle*{0.0001}} \put(15.796713,1.484713){\circle*{0.0001}} \put(15.838946,1.515891){\circle*{0.0001}} \put(15.881179,1.546510){\circle*{0.0001}} \put(15.923412,1.576586){\circle*{0.0001}} \put(15.965645,1.606133){\circle*{0.0001}} \put(16.007878,1.635165){\circle*{0.0001}} \put(16.050111,1.663695){\circle*{0.0001}} \put(16.092344,1.691737){\circle*{0.0001}} \put(16.134577,1.719302){\circle*{0.0001}} \put(16.176810,1.746403){\circle*{0.0001}} \put(16.219043,1.773051){\circle*{0.0001}} \put(16.261276,1.799258){\circle*{0.0001}} \put(16.303508,1.825034){\circle*{0.0001}} \put(16.345741,1.850390){\circle*{0.0001}} \put(16.387974,1.875337){\circle*{0.0001}} \put(16.430207,1.899883){\circle*{0.0001}} \put(16.472440,1.924039){\circle*{0.0001}} \put(16.514673,1.947814){\circle*{0.0001}} \put(16.556906,1.971216){\circle*{0.0001}} \put(16.599139,1.994255){\circle*{0.0001}} \put(16.641372,2.016939){\circle*{0.0001}} \put(16.683605,2.039275){\circle*{0.0001}} \put(16.725838,2.061273){\circle*{0.0001}} \put(16.768071,2.082939){\circle*{0.0001}} \put(16.810303,2.104281){\circle*{0.0001}} \put(16.852536,2.125306){\circle*{0.0001}} \put(16.894769,2.146022){\circle*{0.0001}} \put(16.937002,2.166435){\circle*{0.0001}} \put(16.979235,2.186551){\circle*{0.0001}} \put(17.021468,2.206377){\circle*{0.0001}} \put(17.063701,2.225919){\circle*{0.0001}} \put(17.105934,2.245184){\circle*{0.0001}} \put(17.148167,2.264176){\circle*{0.0001}} \put(17.190400,2.282903){\circle*{0.0001}} \put(17.232633,2.301369){\circle*{0.0001}} \put(17.274866,2.319580){\circle*{0.0001}} \put(17.317098,2.337541){\circle*{0.0001}} \put(17.359331,2.355257){\circle*{0.0001}} \put(17.401564,2.372734){\circle*{0.0001}} \put(17.443797,2.389975){\circle*{0.0001}} \put(17.486030,2.406986){\circle*{0.0001}} \put(17.528263,2.423771){\circle*{0.0001}} \put(17.570496,2.440336){\circle*{0.0001}} \put(17.612729,2.456683){\circle*{0.0001}} \put(17.654962,2.472817){\circle*{0.0001}} \put(17.697195,2.488743){\circle*{0.0001}} \put(17.739428,2.504465){\circle*{0.0001}} \put(17.781661,2.519986){\circle*{0.0001}} \put(17.823893,2.535310){\circle*{0.0001}} \put(17.866126,2.550441){\circle*{0.0001}} \put(17.908359,2.565382){\circle*{0.0001}} \put(17.950592,2.580138){\circle*{0.0001}} \put(17.992825,2.594711){\circle*{0.0001}} \put(18.035058,2.609105){\circle*{0.0001}} \put(18.077291,2.623323){\circle*{0.0001}} \put(18.119524,2.637369){\circle*{0.0001}} \put(18.161757,2.651245){\circle*{0.0001}} 
\put(18.203990,2.664955){\circle*{0.0001}} \put(18.246223,2.678501){\circle*{0.0001}} \put(18.288456,2.691887){\circle*{0.0001}} \put(18.330688,2.705114){\circle*{0.0001}} \put(18.372921,2.718187){\circle*{0.0001}} \put(18.415154,2.731108){\circle*{0.0001}} \put(18.457387,2.743879){\circle*{0.0001}} \put(18.499620,2.756503){\circle*{0.0001}} \put(18.541853,2.768983){\circle*{0.0001}} \put(18.584086,2.781321){\circle*{0.0001}} \put(18.626319,2.793519){\circle*{0.0001}} \put(18.668552,2.805579){\circle*{0.0001}} \put(18.710785,2.817505){\circle*{0.0001}} \put(18.753018,2.829298){\circle*{0.0001}} \put(18.795251,2.840961){\circle*{0.0001}} \put(18.837483,2.852495){\circle*{0.0001}} \put(18.879716,2.863903){\circle*{0.0001}} \put(18.921949,2.875187){\circle*{0.0001}} \put(18.964182,2.886348){\circle*{0.0001}} \put(19.006415,2.897390){\circle*{0.0001}} \put(19.048648,2.908313){\circle*{0.0001}} \put(19.090881,2.919120){\circle*{0.0001}} \put(19.133114,2.929812){\circle*{0.0001}} \put(19.175347,2.940391){\circle*{0.0001}} \put(19.217580,2.950860){\circle*{0.0001}} \put(19.259813,2.961219){\circle*{0.0001}} \put(19.302046,2.971471){\circle*{0.0001}} \put(19.344278,2.981618){\circle*{0.0001}} \put(19.386511,2.991659){\circle*{0.0001}} \put(19.428744,3.001599){\circle*{0.0001}} \put(19.470977,3.011437){\circle*{0.0001}} \put(19.513210,3.021176){\circle*{0.0001}} \put(19.555443,3.030816){\circle*{0.0001}} \put(19.597676,3.040361){\circle*{0.0001}} \put(19.639909,3.049810){\circle*{0.0001}} \put(19.682142,3.059165){\circle*{0.0001}} \put(19.724375,3.068428){\circle*{0.0001}} \put(19.766608,3.077601){\circle*{0.0001}} \put(19.808841,3.086683){\circle*{0.0001}} \put(19.851073,3.095677){\circle*{0.0001}} \put(19.893306,3.104585){\circle*{0.0001}} \put(19.935539,3.113406){\circle*{0.0001}} \put(19.977772,3.122143){\circle*{0.0001}} \put(20.020005,3.130797){\circle*{0.0001}} \put(20.062238,3.139369){\circle*{0.0001}} \put(20.104471,3.147859){\circle*{0.0001}} \put(20.146704,3.156270){\circle*{0.0001}} \put(20.188937,3.164602){\circle*{0.0001}} \put(20.231170,3.172857){\circle*{0.0001}} \put(20.273403,3.181035){\circle*{0.0001}} \put(20.315636,3.189137){\circle*{0.0001}} \put(20.357868,3.197165){\circle*{0.0001}} \put(20.400101,3.205120){\circle*{0.0001}} \put(20.442334,3.213002){\circle*{0.0001}} \put(20.484567,3.220813){\circle*{0.0001}} \put(20.526800,3.228553){\circle*{0.0001}} \put(20.569033,3.236224){\circle*{0.0001}} \put(20.611266,3.243826){\circle*{0.0001}} \put(20.653499,3.251361){\circle*{0.0001}} \put(20.695732,3.258829){\circle*{0.0001}} \put(20.737965,3.266231){\circle*{0.0001}} \put(20.780198,3.273568){\circle*{0.0001}} \put(20.822431,3.280841){\circle*{0.0001}} \put(20.864663,3.288050){\circle*{0.0001}} \put(20.906896,3.295198){\circle*{0.0001}} \put(20.949129,3.302283){\circle*{0.0001}} \put(20.991362,3.309307){\circle*{0.0001}} \put(21.033595,3.316272){\circle*{0.0001}} \put(21.075828,3.323177){\circle*{0.0001}} \put(21.118061,3.330023){\circle*{0.0001}} \put(21.160294,3.336812){\circle*{0.0001}} \put(21.202527,3.343543){\circle*{0.0001}} \put(21.244760,3.350219){\circle*{0.0001}} \put(21.286993,3.356838){\circle*{0.0001}} \put(21.329226,3.363402){\circle*{0.0001}} \put(21.371458,3.369913){\circle*{0.0001}} \put(21.413691,3.376369){\circle*{0.0001}} \put(21.455924,3.382773){\circle*{0.0001}} \put(21.498157,3.389124){\circle*{0.0001}} \put(21.540390,3.395423){\circle*{0.0001}} \put(21.582623,3.401672){\circle*{0.0001}} \put(21.624856,3.407870){\circle*{0.0001}} 
\put(21.667089,3.414018){\circle*{0.0001}} \put(21.709322,3.420117){\circle*{0.0001}} \put(21.751555,3.426167){\circle*{0.0001}} \put(21.793788,3.432169){\circle*{0.0001}} \put(21.836020,3.438124){\circle*{0.0001}} \put(21.878253,3.444032){\circle*{0.0001}} \put(21.920486,3.449893){\circle*{0.0001}} \put(21.962719,3.455709){\circle*{0.0001}} \put(22.004952,3.461479){\circle*{0.0001}} \put(22.047185,3.467204){\circle*{0.0001}} \put(22.089418,3.472885){\circle*{0.0001}} \put(22.131651,3.478523){\circle*{0.0001}} \put(22.173884,3.484117){\circle*{0.0001}} \put(22.216117,3.489669){\circle*{0.0001}} \put(22.258350,3.495178){\circle*{0.0001}} \put(22.300583,3.500645){\circle*{0.0001}} \put(22.342815,3.506071){\circle*{0.0001}} \put(22.385048,3.511457){\circle*{0.0001}} \put(22.427281,3.516801){\circle*{0.0001}} \put(22.469514,3.522106){\circle*{0.0001}} \put(22.511747,3.527372){\circle*{0.0001}} \put(22.553980,3.532598){\circle*{0.0001}} \put(22.596213,3.537786){\circle*{0.0001}} \put(22.638446,3.542936){\circle*{0.0001}} \put(22.680679,3.548048){\circle*{0.0001}} \put(22.722912,3.553122){\circle*{0.0001}} \put(22.765145,3.558160){\circle*{0.0001}} \put(22.807378,3.563161){\circle*{0.0001}} \put(0.000000,0.000000){\circle*{0.0001}} \put(0.030303,0.000000){\circle*{0.0001}} \put(0.060606,0.000000){\circle*{0.0001}} \put(0.090909,0.000000){\circle*{0.0001}} \put(0.121212,0.030303){\circle*{0.0001}} \put(0.151515,0.030303){\circle*{0.0001}} \put(0.181818,0.030303){\circle*{0.0001}} \put(0.212121,0.030303){\circle*{0.0001}} \put(0.242424,0.030303){\circle*{0.0001}} \put(0.272727,0.030303){\circle*{0.0001}} \put(0.303030,0.060606){\circle*{0.0001}} \put(0.333333,0.060606){\circle*{0.0001}} \put(0.363636,0.060606){\circle*{0.0001}} \put(0.393939,0.060606){\circle*{0.0001}} \put(0.424242,0.060606){\circle*{0.0001}} \put(0.454545,0.060606){\circle*{0.0001}} \put(0.484848,0.090909){\circle*{0.0001}} \put(0.515152,0.090909){\circle*{0.0001}} \put(0.545455,0.090909){\circle*{0.0001}} \put(0.575758,0.090909){\circle*{0.0001}} \put(0.606061,0.090909){\circle*{0.0001}} \put(0.636364,0.090909){\circle*{0.0001}} \put(0.666667,0.090909){\circle*{0.0001}} \put(0.696970,0.121212){\circle*{0.0001}} \put(0.727273,0.121212){\circle*{0.0001}} \put(0.757576,0.121212){\circle*{0.0001}} \put(0.787879,0.121212){\circle*{0.0001}} \put(0.818182,0.121212){\circle*{0.0001}} \put(0.848485,0.121212){\circle*{0.0001}} \put(0.878788,0.151515){\circle*{0.0001}} \put(0.909091,0.151515){\circle*{0.0001}} \put(0.939394,0.151515){\circle*{0.0001}} \put(0.969697,0.151515){\circle*{0.0001}} \put(1.000000,0.151515){\circle*{0.0001}} \put(1.030303,0.151515){\circle*{0.0001}} \put(1.060606,0.151515){\circle*{0.0001}} \put(1.090909,0.181818){\circle*{0.0001}} \put(1.121212,0.181818){\circle*{0.0001}} \put(1.151515,0.181818){\circle*{0.0001}} \put(1.181818,0.181818){\circle*{0.0001}} \put(1.212121,0.181818){\circle*{0.0001}} \put(1.242424,0.181818){\circle*{0.0001}} \put(1.272727,0.212121){\circle*{0.0001}} \put(1.303030,0.212121){\circle*{0.0001}} \put(1.333333,0.212121){\circle*{0.0001}} \put(1.363636,0.212121){\circle*{0.0001}} \put(1.393939,0.212121){\circle*{0.0001}} \put(1.424242,0.212121){\circle*{0.0001}} \put(1.454545,0.242424){\circle*{0.0001}} \put(1.484848,0.242424){\circle*{0.0001}} \put(1.515152,0.242424){\circle*{0.0001}} \put(1.545455,0.242424){\circle*{0.0001}} \put(1.575758,0.242424){\circle*{0.0001}} \put(1.606061,0.242424){\circle*{0.0001}} \put(1.636364,0.242424){\circle*{0.0001}} 
\put(1.666667,0.272727){\circle*{0.0001}} \put(1.696970,0.272727){\circle*{0.0001}} \put(1.727273,0.272727){\circle*{0.0001}} \put(1.757576,0.272727){\circle*{0.0001}} \put(1.787879,0.272727){\circle*{0.0001}} \put(1.818182,0.272727){\circle*{0.0001}} \put(1.848485,0.303030){\circle*{0.0001}} \put(1.878788,0.303030){\circle*{0.0001}} \put(1.909091,0.303030){\circle*{0.0001}} \put(1.939394,0.303030){\circle*{0.0001}} \put(1.969697,0.303030){\circle*{0.0001}} \put(2.000000,0.303030){\circle*{0.0001}} \put(2.030303,0.333333){\circle*{0.0001}} \put(2.060606,0.333333){\circle*{0.0001}} \put(2.090909,0.333333){\circle*{0.0001}} \put(2.121212,0.333333){\circle*{0.0001}} \put(2.151515,0.333333){\circle*{0.0001}} \put(2.181818,0.333333){\circle*{0.0001}} \put(2.212121,0.333333){\circle*{0.0001}} \put(2.242424,0.363636){\circle*{0.0001}} \put(2.272727,0.363636){\circle*{0.0001}} \put(2.303030,0.363636){\circle*{0.0001}} \put(2.333333,0.363636){\circle*{0.0001}} \put(2.363636,0.363636){\circle*{0.0001}} \put(2.393939,0.363636){\circle*{0.0001}} \put(2.424242,0.393939){\circle*{0.0001}} \put(2.454545,0.393939){\circle*{0.0001}} \put(2.484848,0.393939){\circle*{0.0001}} \put(2.515152,0.393939){\circle*{0.0001}} \put(2.545455,0.393939){\circle*{0.0001}} \put(2.575758,0.393939){\circle*{0.0001}} \put(2.606061,0.424242){\circle*{0.0001}} \put(2.636364,0.424242){\circle*{0.0001}} \put(2.666667,0.424242){\circle*{0.0001}} \put(2.696970,0.424242){\circle*{0.0001}} \put(2.727273,0.424242){\circle*{0.0001}} \put(2.757576,0.424242){\circle*{0.0001}} \put(2.787879,0.424242){\circle*{0.0001}} \put(2.818182,0.454545){\circle*{0.0001}} \put(2.848485,0.454545){\circle*{0.0001}} \put(2.878788,0.454545){\circle*{0.0001}} \put(2.909091,0.454545){\circle*{0.0001}} \put(2.939394,0.454545){\circle*{0.0001}} \put(2.969697,0.454545){\circle*{0.0001}} \put(3.000000,0.484848){\circle*{0.0001}} \put(3.030303,0.484848){\circle*{0.0001}} \put(3.060606,0.484848){\circle*{0.0001}} \put(3.090909,0.484848){\circle*{0.0001}} \put(3.121212,0.484848){\circle*{0.0001}} \put(3.151515,0.484848){\circle*{0.0001}} \put(3.181818,0.484848){\circle*{0.0001}} \put(3.212121,0.515152){\circle*{0.0001}} \put(3.242424,0.515152){\circle*{0.0001}} \put(3.272727,0.515152){\circle*{0.0001}} \put(3.303030,0.515152){\circle*{0.0001}} \put(3.333333,0.515152){\circle*{0.0001}} \put(3.363636,0.515152){\circle*{0.0001}} \put(3.393939,0.545455){\circle*{0.0001}} \put(3.424242,0.545455){\circle*{0.0001}} \put(3.454545,0.545455){\circle*{0.0001}} \put(3.484848,0.545455){\circle*{0.0001}} \put(3.515152,0.545455){\circle*{0.0001}} \put(3.545455,0.545455){\circle*{0.0001}} \put(3.575758,0.575758){\circle*{0.0001}} \put(3.606061,0.575758){\circle*{0.0001}} \put(3.636364,0.575758){\circle*{0.0001}} \put(3.666667,0.575758){\circle*{0.0001}} \put(3.696970,0.575758){\circle*{0.0001}} \put(3.727273,0.575758){\circle*{0.0001}} \put(3.757576,0.575758){\circle*{0.0001}} \put(3.787879,0.606061){\circle*{0.0001}} \put(3.818182,0.606061){\circle*{0.0001}} \put(3.848485,0.606061){\circle*{0.0001}} \put(3.878788,0.606061){\circle*{0.0001}} \put(3.909091,0.606061){\circle*{0.0001}} \put(3.939394,0.606061){\circle*{0.0001}} \put(3.969697,0.636364){\circle*{0.0001}} \put(4.000000,0.636364){\circle*{0.0001}} \put(4.030303,0.636364){\circle*{0.0001}} \put(4.060606,0.636364){\circle*{0.0001}} \put(4.090909,0.636364){\circle*{0.0001}} \put(4.121212,0.636364){\circle*{0.0001}} \put(4.151515,0.666667){\circle*{0.0001}} \put(4.181818,0.666667){\circle*{0.0001}} 
\put(4.212121,0.666667){\circle*{0.0001}} \put(4.242424,0.666667){\circle*{0.0001}} \put(4.272727,0.666667){\circle*{0.0001}} \put(4.303030,0.666667){\circle*{0.0001}} \put(4.333333,0.666667){\circle*{0.0001}} \put(4.363636,0.696970){\circle*{0.0001}} \put(4.393939,0.696970){\circle*{0.0001}} \put(4.424242,0.696970){\circle*{0.0001}} \put(4.454545,0.696970){\circle*{0.0001}} \put(4.484848,0.696970){\circle*{0.0001}} \put(4.515152,0.696970){\circle*{0.0001}} \put(4.545455,0.727273){\circle*{0.0001}} \put(4.575758,0.727273){\circle*{0.0001}} \put(4.606061,0.727273){\circle*{0.0001}} \put(4.636364,0.727273){\circle*{0.0001}} \put(4.666667,0.727273){\circle*{0.0001}} \put(4.696970,0.727273){\circle*{0.0001}} \put(4.727273,0.757576){\circle*{0.0001}} \put(4.757576,0.757576){\circle*{0.0001}} \put(4.787879,0.757576){\circle*{0.0001}} \put(4.818182,0.757576){\circle*{0.0001}} \put(4.848485,0.757576){\circle*{0.0001}} \put(4.878788,0.757576){\circle*{0.0001}} \put(4.909091,0.757576){\circle*{0.0001}} \put(4.939394,0.787879){\circle*{0.0001}} \put(4.969697,0.787879){\circle*{0.0001}} \put(5.000000,0.787879){\circle*{0.0001}} \put(5.030303,0.787879){\circle*{0.0001}} \put(5.060606,0.787879){\circle*{0.0001}} \put(5.090909,0.787879){\circle*{0.0001}} \put(5.121212,0.818182){\circle*{0.0001}} \put(5.151515,0.818182){\circle*{0.0001}} \put(5.181818,0.818182){\circle*{0.0001}} \put(5.212121,0.818182){\circle*{0.0001}} \put(5.242424,0.818182){\circle*{0.0001}} \put(5.272727,0.818182){\circle*{0.0001}} \put(5.303030,0.818182){\circle*{0.0001}} \put(5.333333,0.848485){\circle*{0.0001}} \put(5.363636,0.848485){\circle*{0.0001}} \put(5.393939,0.848485){\circle*{0.0001}} \put(5.424242,0.848485){\circle*{0.0001}} \put(5.454545,0.848485){\circle*{0.0001}} \put(5.484848,0.848485){\circle*{0.0001}} \put(5.515152,0.878788){\circle*{0.0001}} \put(5.545455,0.878788){\circle*{0.0001}} \put(5.575758,0.878788){\circle*{0.0001}} \put(5.606061,0.878788){\circle*{0.0001}} \put(5.636364,0.878788){\circle*{0.0001}} \put(5.666667,0.878788){\circle*{0.0001}} \put(5.696970,0.909091){\circle*{0.0001}} \put(5.727273,0.909091){\circle*{0.0001}} \put(5.757576,0.909091){\circle*{0.0001}} \put(5.787879,0.909091){\circle*{0.0001}} \put(5.818182,0.909091){\circle*{0.0001}} \put(5.848485,0.909091){\circle*{0.0001}} \put(5.878788,0.909091){\circle*{0.0001}} \put(5.909091,0.939394){\circle*{0.0001}} \put(5.939394,0.939394){\circle*{0.0001}} \put(5.969697,0.939394){\circle*{0.0001}} \put(6.000000,0.939394){\circle*{0.0001}} \put(6.030303,0.939394){\circle*{0.0001}} \put(6.060606,0.939394){\circle*{0.0001}} \put(6.090909,0.969697){\circle*{0.0001}} \put(6.121212,0.969697){\circle*{0.0001}} \put(6.151515,0.969697){\circle*{0.0001}} \put(6.181818,0.969697){\circle*{0.0001}} \put(6.212121,0.969697){\circle*{0.0001}} \put(6.242424,0.969697){\circle*{0.0001}} \put(6.272727,1.000000){\circle*{0.0001}} \put(6.303030,1.000000){\circle*{0.0001}} \put(6.333333,1.000000){\circle*{0.0001}} \put(6.363636,1.000000){\circle*{0.0001}} \put(6.393939,1.000000){\circle*{0.0001}} \put(6.424242,1.000000){\circle*{0.0001}} \put(6.454545,1.000000){\circle*{0.0001}} \put(6.484848,1.030303){\circle*{0.0001}} \put(6.515152,1.030303){\circle*{0.0001}} \put(6.545455,1.030303){\circle*{0.0001}} \put(6.575758,1.030303){\circle*{0.0001}} \put(6.606061,1.030303){\circle*{0.0001}} \put(6.636364,1.030303){\circle*{0.0001}} \put(6.666667,1.060606){\circle*{0.0001}} \put(6.696970,1.060606){\circle*{0.0001}} \put(6.727273,1.060606){\circle*{0.0001}} 
\put(6.757576,1.060606){\circle*{0.0001}} \put(6.787879,1.060606){\circle*{0.0001}} \put(6.818182,1.060606){\circle*{0.0001}} \put(6.848485,1.090909){\circle*{0.0001}} \put(6.878788,1.090909){\circle*{0.0001}} \put(6.909091,1.090909){\circle*{0.0001}} \put(6.939394,1.090909){\circle*{0.0001}} \put(6.969697,1.090909){\circle*{0.0001}} \put(7.000000,1.090909){\circle*{0.0001}} \put(7.030303,1.090909){\circle*{0.0001}} \put(7.060606,1.121212){\circle*{0.0001}} \put(7.090909,1.121212){\circle*{0.0001}} \put(7.121212,1.121212){\circle*{0.0001}} \put(7.151515,1.121212){\circle*{0.0001}} \put(7.181818,1.121212){\circle*{0.0001}} \put(7.212121,1.121212){\circle*{0.0001}} \put(7.242424,1.151515){\circle*{0.0001}} \put(7.272727,1.151515){\circle*{0.0001}} \put(7.303030,1.151515){\circle*{0.0001}} \put(7.333333,1.151515){\circle*{0.0001}} \put(7.363636,1.151515){\circle*{0.0001}} \put(7.393939,1.151515){\circle*{0.0001}} \put(7.424242,1.151515){\circle*{0.0001}} \put(7.454545,1.181818){\circle*{0.0001}} \put(7.484848,1.181818){\circle*{0.0001}} \put(7.515152,1.181818){\circle*{0.0001}} \put(7.545455,1.181818){\circle*{0.0001}} \put(7.575758,1.181818){\circle*{0.0001}} \put(7.606061,1.181818){\circle*{0.0001}} \put(7.636364,1.212121){\circle*{0.0001}} \put(7.666667,1.212121){\circle*{0.0001}} \put(7.696970,1.212121){\circle*{0.0001}} \put(7.727273,1.212121){\circle*{0.0001}} \put(7.757576,1.212121){\circle*{0.0001}} \put(7.787879,1.212121){\circle*{0.0001}} \put(7.818182,1.242424){\circle*{0.0001}} \put(7.848485,1.242424){\circle*{0.0001}} \put(7.878788,1.242424){\circle*{0.0001}} \put(7.909091,1.242424){\circle*{0.0001}} \put(7.939394,1.242424){\circle*{0.0001}} \put(7.969697,1.242424){\circle*{0.0001}} \put(8.000000,1.242424){\circle*{0.0001}} \put(8.030303,1.272727){\circle*{0.0001}} \put(8.060606,1.272727){\circle*{0.0001}} \put(8.090909,1.272727){\circle*{0.0001}} \put(8.121212,1.272727){\circle*{0.0001}} \put(8.151515,1.272727){\circle*{0.0001}} \put(8.181818,1.272727){\circle*{0.0001}} \put(8.212121,1.303030){\circle*{0.0001}} \put(8.242424,1.303030){\circle*{0.0001}} \put(8.272727,1.303030){\circle*{0.0001}} \put(8.303030,1.303030){\circle*{0.0001}} \put(8.333333,1.303030){\circle*{0.0001}} \put(8.363636,1.303030){\circle*{0.0001}} \put(8.393939,1.333333){\circle*{0.0001}} \put(8.424242,1.333333){\circle*{0.0001}} \put(8.454545,1.333333){\circle*{0.0001}} \put(8.484848,1.333333){\circle*{0.0001}} \put(8.515152,1.333333){\circle*{0.0001}} \put(8.545455,1.333333){\circle*{0.0001}} \put(8.575758,1.333333){\circle*{0.0001}} \put(8.606061,1.363636){\circle*{0.0001}} \put(8.636364,1.363636){\circle*{0.0001}} \put(8.666667,1.363636){\circle*{0.0001}} \put(8.696970,1.363636){\circle*{0.0001}} \put(8.727273,1.363636){\circle*{0.0001}} \put(8.757576,1.363636){\circle*{0.0001}} \put(8.787879,1.393939){\circle*{0.0001}} \put(8.818182,1.393939){\circle*{0.0001}} \put(8.848485,1.393939){\circle*{0.0001}} \put(8.878788,1.393939){\circle*{0.0001}} \put(8.909091,1.393939){\circle*{0.0001}} \put(8.939394,1.393939){\circle*{0.0001}} \put(8.969697,1.424242){\circle*{0.0001}} \put(9.000000,1.424242){\circle*{0.0001}} \put(9.030303,1.424242){\circle*{0.0001}} \put(9.060606,1.424242){\circle*{0.0001}} \put(9.090909,1.424242){\circle*{0.0001}} \put(9.121212,1.424242){\circle*{0.0001}} \put(9.151515,1.424242){\circle*{0.0001}} \put(9.181818,1.454545){\circle*{0.0001}} \put(9.212121,1.454545){\circle*{0.0001}} \put(9.242424,1.454545){\circle*{0.0001}} \put(9.272727,1.454545){\circle*{0.0001}} 
\put(9.303030,1.454545){\circle*{0.0001}} \put(9.333333,1.454545){\circle*{0.0001}} \put(9.363636,1.484848){\circle*{0.0001}} \put(9.393939,1.484848){\circle*{0.0001}} \put(9.424242,1.484848){\circle*{0.0001}} \put(9.454545,1.484848){\circle*{0.0001}} \put(9.484848,1.484848){\circle*{0.0001}} \put(9.515152,1.484848){\circle*{0.0001}} \put(9.545455,1.484848){\circle*{0.0001}} \put(9.575758,1.515152){\circle*{0.0001}} \put(9.606061,1.515152){\circle*{0.0001}} \put(9.636364,1.515152){\circle*{0.0001}} \put(9.666667,1.515152){\circle*{0.0001}} \put(9.696970,1.515152){\circle*{0.0001}} \put(9.727273,1.515152){\circle*{0.0001}} \put(9.757576,1.545455){\circle*{0.0001}} \put(9.787879,1.545455){\circle*{0.0001}} \put(9.818182,1.545455){\circle*{0.0001}} \put(9.848485,1.545455){\circle*{0.0001}} \put(9.878788,1.545455){\circle*{0.0001}} \put(9.909091,1.545455){\circle*{0.0001}} \put(9.939394,1.575758){\circle*{0.0001}} \put(9.969697,1.575758){\circle*{0.0001}} \put(10.000000,1.575758){\circle*{0.0001}} \put(10.030303,1.575758){\circle*{0.0001}} \put(10.060606,1.575758){\circle*{0.0001}} \put(10.090909,1.575758){\circle*{0.0001}} \put(10.121212,1.575758){\circle*{0.0001}} \put(10.151515,1.606061){\circle*{0.0001}} \put(10.181818,1.606061){\circle*{0.0001}} \put(10.212121,1.606061){\circle*{0.0001}} \put(10.242424,1.606061){\circle*{0.0001}} \put(10.272727,1.606061){\circle*{0.0001}} \put(10.303030,1.606061){\circle*{0.0001}} \put(10.333333,1.636364){\circle*{0.0001}} \put(10.363636,1.636364){\circle*{0.0001}} \put(10.393939,1.636364){\circle*{0.0001}} \put(10.424242,1.636364){\circle*{0.0001}} \put(10.454545,1.636364){\circle*{0.0001}} \put(10.484848,1.636364){\circle*{0.0001}} \put(10.515152,1.666667){\circle*{0.0001}} \put(10.545455,1.666667){\circle*{0.0001}} \put(10.575758,1.666667){\circle*{0.0001}} \put(10.606061,1.666667){\circle*{0.0001}} \put(10.636364,1.666667){\circle*{0.0001}} \put(10.666667,1.666667){\circle*{0.0001}} \put(10.696970,1.666667){\circle*{0.0001}} \put(10.727273,1.696970){\circle*{0.0001}} \put(10.757576,1.696970){\circle*{0.0001}} \put(10.787879,1.696970){\circle*{0.0001}} \put(10.818182,1.696970){\circle*{0.0001}} \put(10.848485,1.696970){\circle*{0.0001}} \put(10.878788,1.696970){\circle*{0.0001}} \put(10.909091,1.727273){\circle*{0.0001}} \put(10.939394,1.727273){\circle*{0.0001}} \put(10.969697,1.727273){\circle*{0.0001}} \put(11.000000,1.727273){\circle*{0.0001}} \put(11.030303,1.727273){\circle*{0.0001}} \put(11.060606,1.727273){\circle*{0.0001}} \put(11.090909,1.757576){\circle*{0.0001}} \put(11.121212,1.757576){\circle*{0.0001}} \put(11.151515,1.757576){\circle*{0.0001}} \put(11.181818,1.757576){\circle*{0.0001}} \put(11.212121,1.757576){\circle*{0.0001}} \put(11.242424,1.757576){\circle*{0.0001}} \put(11.272727,1.757576){\circle*{0.0001}} \put(11.303030,1.787879){\circle*{0.0001}} \put(11.333333,1.787879){\circle*{0.0001}} \put(11.363636,1.787879){\circle*{0.0001}} \put(11.393939,1.787879){\circle*{0.0001}} \put(11.424242,1.787879){\circle*{0.0001}} \put(11.454545,1.787879){\circle*{0.0001}} \put(11.484848,1.818182){\circle*{0.0001}} \put(11.515152,1.818182){\circle*{0.0001}} \put(11.545455,1.818182){\circle*{0.0001}} \put(11.575758,1.818182){\circle*{0.0001}} \put(11.606061,1.818182){\circle*{0.0001}} \put(11.636364,1.818182){\circle*{0.0001}} \put(11.666667,1.818182){\circle*{0.0001}} \put(11.696970,1.848485){\circle*{0.0001}} \put(11.727273,1.848485){\circle*{0.0001}} \put(11.757576,1.848485){\circle*{0.0001}} \put(11.787879,1.848485){\circle*{0.0001}} 
\put(11.818182,1.848485){\circle*{0.0001}} \put(11.848485,1.848485){\circle*{0.0001}} \put(11.878788,1.878788){\circle*{0.0001}} \put(11.909091,1.878788){\circle*{0.0001}} \put(11.939394,1.878788){\circle*{0.0001}} \put(11.969697,1.878788){\circle*{0.0001}} \put(12.000000,1.878788){\circle*{0.0001}} \put(12.030303,1.878788){\circle*{0.0001}} \put(12.060606,1.909091){\circle*{0.0001}} \put(12.090909,1.909091){\circle*{0.0001}} \put(12.121212,1.909091){\circle*{0.0001}} \put(12.151515,1.909091){\circle*{0.0001}} \put(12.181818,1.909091){\circle*{0.0001}} \put(12.212121,1.909091){\circle*{0.0001}} \put(12.242424,1.909091){\circle*{0.0001}} \put(12.272727,1.939394){\circle*{0.0001}} \put(12.303030,1.939394){\circle*{0.0001}} \put(12.333333,1.939394){\circle*{0.0001}} \put(12.363636,1.939394){\circle*{0.0001}} \put(12.393939,1.939394){\circle*{0.0001}} \put(12.424242,1.939394){\circle*{0.0001}} \put(12.454545,1.969697){\circle*{0.0001}} \put(12.484848,1.969697){\circle*{0.0001}} \put(12.515152,1.969697){\circle*{0.0001}} \put(12.545455,1.969697){\circle*{0.0001}} \put(12.575758,1.969697){\circle*{0.0001}} \put(12.606061,1.969697){\circle*{0.0001}} \put(12.636364,2.000000){\circle*{0.0001}} \put(12.666667,2.000000){\circle*{0.0001}} \put(12.696970,2.000000){\circle*{0.0001}} \put(12.727273,2.000000){\circle*{0.0001}} \put(12.757576,2.000000){\circle*{0.0001}} \put(12.787879,2.000000){\circle*{0.0001}} \put(12.818182,2.000000){\circle*{0.0001}} \put(12.848485,2.030303){\circle*{0.0001}} \put(12.878788,2.030303){\circle*{0.0001}} \put(12.909091,2.030303){\circle*{0.0001}} \put(12.939394,2.030303){\circle*{0.0001}} \put(12.969697,2.030303){\circle*{0.0001}} \put(13.000000,2.030303){\circle*{0.0001}} \put(13.030303,2.060606){\circle*{0.0001}} \put(13.060606,2.060606){\circle*{0.0001}} \put(13.090909,2.060606){\circle*{0.0001}} \put(13.121212,2.060606){\circle*{0.0001}} \put(13.151515,2.060606){\circle*{0.0001}} \put(13.181818,2.060606){\circle*{0.0001}} \put(13.212121,2.090909){\circle*{0.0001}} \put(13.242424,2.090909){\circle*{0.0001}} \put(13.272727,2.090909){\circle*{0.0001}} \put(13.303030,2.090909){\circle*{0.0001}} \put(13.333333,2.090909){\circle*{0.0001}} \put(13.363636,2.090909){\circle*{0.0001}} \put(13.393939,2.090909){\circle*{0.0001}} \put(13.424242,2.121212){\circle*{0.0001}} \put(13.454545,2.121212){\circle*{0.0001}} \put(13.484848,2.121212){\circle*{0.0001}} \put(13.515152,2.121212){\circle*{0.0001}} \put(13.545455,2.121212){\circle*{0.0001}} \put(13.575758,2.121212){\circle*{0.0001}} \put(13.606061,2.151515){\circle*{0.0001}} \put(13.636364,2.151515){\circle*{0.0001}} \put(13.666667,2.151515){\circle*{0.0001}} \put(13.696970,2.151515){\circle*{0.0001}} \put(13.727273,2.151515){\circle*{0.0001}} \put(13.757576,2.151515){\circle*{0.0001}} \put(13.787879,2.151515){\circle*{0.0001}} \put(13.818182,2.181818){\circle*{0.0001}} \put(13.848485,2.181818){\circle*{0.0001}} \put(13.878788,2.181818){\circle*{0.0001}} \put(13.909091,2.181818){\circle*{0.0001}} \put(13.939394,2.181818){\circle*{0.0001}} \put(13.969697,2.181818){\circle*{0.0001}} \put(14.000000,2.212121){\circle*{0.0001}} \put(14.030303,2.212121){\circle*{0.0001}} \put(14.060606,2.212121){\circle*{0.0001}} \put(14.090909,2.212121){\circle*{0.0001}} \put(14.121212,2.212121){\circle*{0.0001}} \put(14.151515,2.212121){\circle*{0.0001}} \put(14.181818,2.242424){\circle*{0.0001}} \put(14.212121,2.242424){\circle*{0.0001}} \put(14.242424,2.242424){\circle*{0.0001}} \put(14.272727,2.242424){\circle*{0.0001}} 
\put(14.303030,2.242424){\circle*{0.0001}} \put(14.333333,2.242424){\circle*{0.0001}} \put(14.363636,2.242424){\circle*{0.0001}} \put(14.393939,2.272727){\circle*{0.0001}} \put(14.424242,2.272727){\circle*{0.0001}} \put(14.454545,2.272727){\circle*{0.0001}} \put(14.484848,2.272727){\circle*{0.0001}} \put(14.515152,2.272727){\circle*{0.0001}} \put(14.545455,2.272727){\circle*{0.0001}} \put(14.575758,2.303030){\circle*{0.0001}} \put(14.606061,2.303030){\circle*{0.0001}} \put(14.636364,2.303030){\circle*{0.0001}} \put(14.666667,2.303030){\circle*{0.0001}} \put(14.696970,2.303030){\circle*{0.0001}} \put(14.727273,2.303030){\circle*{0.0001}} \put(14.757576,2.333333){\circle*{0.0001}} \put(14.787879,2.333333){\circle*{0.0001}} \put(14.818182,2.333333){\circle*{0.0001}} \put(14.848485,2.333333){\circle*{0.0001}} \put(14.878788,2.333333){\circle*{0.0001}} \put(14.909091,2.333333){\circle*{0.0001}} \put(14.939394,2.333333){\circle*{0.0001}} \put(14.969697,2.363636){\circle*{0.0001}} \put(15.000000,2.363636){\circle*{0.0001}} \put(15.030303,2.363636){\circle*{0.0001}} \put(15.060606,2.363636){\circle*{0.0001}} \put(15.090909,2.363636){\circle*{0.0001}} \put(15.121212,2.363636){\circle*{0.0001}} \put(15.151515,2.393939){\circle*{0.0001}} \put(15.181818,2.393939){\circle*{0.0001}} \put(15.212121,2.393939){\circle*{0.0001}} \put(15.242424,2.393939){\circle*{0.0001}} \put(15.272727,2.393939){\circle*{0.0001}} \put(15.303030,2.393939){\circle*{0.0001}} \put(15.333333,2.424242){\circle*{0.0001}} \put(15.363636,2.424242){\circle*{0.0001}} \put(15.393939,2.424242){\circle*{0.0001}} \put(15.424242,2.424242){\circle*{0.0001}} \put(15.454545,2.424242){\circle*{0.0001}} \put(15.484848,2.424242){\circle*{0.0001}} \put(15.515152,2.424242){\circle*{0.0001}} \put(15.545455,2.454545){\circle*{0.0001}} \put(15.575758,2.454545){\circle*{0.0001}} \put(15.606061,2.454545){\circle*{0.0001}} \put(15.636364,2.454545){\circle*{0.0001}} \put(15.666667,2.454545){\circle*{0.0001}} \put(15.696970,2.454545){\circle*{0.0001}} \put(15.727273,2.484848){\circle*{0.0001}} \put(15.757576,2.484848){\circle*{0.0001}} \put(15.787879,2.484848){\circle*{0.0001}} \put(15.818182,2.484848){\circle*{0.0001}} \put(15.848485,2.484848){\circle*{0.0001}} \put(15.878788,2.484848){\circle*{0.0001}} \put(15.909091,2.484848){\circle*{0.0001}} \put(15.939394,2.515152){\circle*{0.0001}} \put(15.969697,2.515152){\circle*{0.0001}} \put(16.000000,2.515152){\circle*{0.0001}} \put(16.030303,2.515152){\circle*{0.0001}} \put(16.060606,2.515152){\circle*{0.0001}} \put(16.090909,2.515152){\circle*{0.0001}} \put(16.121212,2.545455){\circle*{0.0001}} \put(16.151515,2.545455){\circle*{0.0001}} \put(16.181818,2.545455){\circle*{0.0001}} \put(16.212121,2.545455){\circle*{0.0001}} \put(16.242424,2.545455){\circle*{0.0001}} \put(16.272727,2.545455){\circle*{0.0001}} \put(16.303030,2.575758){\circle*{0.0001}} \put(16.333333,2.575758){\circle*{0.0001}} \put(16.363636,2.575758){\circle*{0.0001}} \put(16.393939,2.575758){\circle*{0.0001}} \put(16.424242,2.575758){\circle*{0.0001}} \put(16.454545,2.575758){\circle*{0.0001}} \put(16.484848,2.575758){\circle*{0.0001}} \put(16.515152,2.606061){\circle*{0.0001}} \put(16.545455,2.606061){\circle*{0.0001}} \put(16.575758,2.606061){\circle*{0.0001}} \put(16.606061,2.606061){\circle*{0.0001}} \put(16.636364,2.606061){\circle*{0.0001}} \put(16.666667,2.606061){\circle*{0.0001}} \put(16.696970,2.636364){\circle*{0.0001}} \put(16.727273,2.636364){\circle*{0.0001}} \put(16.757576,2.636364){\circle*{0.0001}} 
\put(16.787879,2.636364){\circle*{0.0001}} \put(16.818182,2.636364){\circle*{0.0001}} \put(16.848485,2.636364){\circle*{0.0001}} \put(16.878788,2.666667){\circle*{0.0001}} \put(16.909091,2.666667){\circle*{0.0001}} \put(16.939394,2.666667){\circle*{0.0001}} \put(16.969697,2.666667){\circle*{0.0001}} \put(17.000000,2.666667){\circle*{0.0001}} \put(17.030303,2.666667){\circle*{0.0001}} \put(17.060606,2.666667){\circle*{0.0001}} \put(17.090909,2.696970){\circle*{0.0001}} \put(17.121212,2.696970){\circle*{0.0001}} \put(17.151515,2.696970){\circle*{0.0001}} \put(17.181818,2.696970){\circle*{0.0001}} \put(17.212121,2.696970){\circle*{0.0001}} \put(17.242424,2.696970){\circle*{0.0001}} \put(17.272727,2.727273){\circle*{0.0001}} \put(17.303030,2.727273){\circle*{0.0001}} \put(17.333333,2.727273){\circle*{0.0001}} \put(17.363636,2.727273){\circle*{0.0001}} \put(17.393939,2.727273){\circle*{0.0001}} \put(17.424242,2.727273){\circle*{0.0001}} \put(17.454545,2.757576){\circle*{0.0001}} \put(17.484848,2.757576){\circle*{0.0001}} \put(17.515152,2.757576){\circle*{0.0001}} \put(17.545455,2.757576){\circle*{0.0001}} \put(17.575758,2.757576){\circle*{0.0001}} \put(17.606061,2.757576){\circle*{0.0001}} \put(17.636364,2.757576){\circle*{0.0001}} \put(17.666667,2.787879){\circle*{0.0001}} \put(17.696970,2.787879){\circle*{0.0001}} \put(17.727273,2.787879){\circle*{0.0001}} \put(17.757576,2.787879){\circle*{0.0001}} \put(17.787879,2.787879){\circle*{0.0001}} \put(17.818182,2.787879){\circle*{0.0001}} \put(17.848485,2.818182){\circle*{0.0001}} \put(17.878788,2.818182){\circle*{0.0001}} \put(17.909091,2.818182){\circle*{0.0001}} \put(17.939394,2.818182){\circle*{0.0001}} \put(17.969697,2.818182){\circle*{0.0001}} \put(18.000000,2.818182){\circle*{0.0001}} \put(18.030303,2.818182){\circle*{0.0001}} \put(18.060606,2.848485){\circle*{0.0001}} \put(18.090909,2.848485){\circle*{0.0001}} \put(18.121212,2.848485){\circle*{0.0001}} \put(18.151515,2.848485){\circle*{0.0001}} \put(18.181818,2.848485){\circle*{0.0001}} \put(18.212121,2.848485){\circle*{0.0001}} \put(18.242424,2.878788){\circle*{0.0001}} \put(18.272727,2.878788){\circle*{0.0001}} \put(18.303030,2.878788){\circle*{0.0001}} \put(18.333333,2.878788){\circle*{0.0001}} \put(18.363636,2.878788){\circle*{0.0001}} \put(18.393939,2.878788){\circle*{0.0001}} \put(18.424242,2.909091){\circle*{0.0001}} \put(18.454545,2.909091){\circle*{0.0001}} \put(18.484848,2.909091){\circle*{0.0001}} \put(18.515152,2.909091){\circle*{0.0001}} \put(18.545455,2.909091){\circle*{0.0001}} \put(18.575758,2.909091){\circle*{0.0001}} \put(18.606061,2.909091){\circle*{0.0001}} \put(18.636364,2.939394){\circle*{0.0001}} \put(18.666667,2.939394){\circle*{0.0001}} \put(18.696970,2.939394){\circle*{0.0001}} \put(18.727273,2.939394){\circle*{0.0001}} \put(18.757576,2.939394){\circle*{0.0001}} \put(18.787879,2.939394){\circle*{0.0001}} \put(18.818182,2.969697){\circle*{0.0001}} \put(18.848485,2.969697){\circle*{0.0001}} \put(18.878788,2.969697){\circle*{0.0001}} \put(18.909091,2.969697){\circle*{0.0001}} \put(18.939394,2.969697){\circle*{0.0001}} \put(18.969697,2.969697){\circle*{0.0001}} \put(19.000000,3.000000){\circle*{0.0001}} \put(19.030303,3.000000){\circle*{0.0001}} \put(19.060606,3.000000){\circle*{0.0001}} \put(19.090909,3.000000){\circle*{0.0001}} \put(19.121212,3.000000){\circle*{0.0001}} \put(19.151515,3.000000){\circle*{0.0001}} \put(19.181818,3.000000){\circle*{0.0001}} \put(19.212121,3.030303){\circle*{0.0001}} \put(19.242424,3.030303){\circle*{0.0001}} 
\end{picture}
\end{center}
\caption{\small A graphical illustration of Charnov's Marginal Value Theorem for two sites. When we get to the second site, we already have a minimum $m_1$ from the first site, so the gain $g_2(t)$ is greater than zero only starting from $t_0$.
The second patch has prices distributed in $[\mu-b,\mu+b]$: for a large enough value of $\gamma=b/a$, the rate of gain $R_2$ that we obtain by exploring the second site is greater than the rate $R_1$ obtained in the first site.}
\label{patchdiag2}
\end{figure}
When we arrive at the second site, a time $T\stackrel{\triangle}{=}2t_b+\tau_1$ has already elapsed, so if we stay on the site for a time $\tau$, the rate of gain is
\begin{equation}
R_2(\tau) = \frac{g_2(\tau)}{T+\tau} = \frac{g_2(\tau)}{2t_b+\tau_1+\tau}
\end{equation}
To find the maximum of $R_2(\tau)$ we proceed as in the previous example, setting
\begin{equation}
g_2^\prime(\tau) = \frac{g_2(\tau)}{T+\tau}
\end{equation}
that is,
\begin{equation}
\frac{2b}{(\tau+2)^2} = \frac{1}{T+\tau} \Bigl[\frac{\tau b}{\tau+2} - a\alpha\Bigr]
\end{equation}
or
\begin{equation}
\frac{2}{(\tau+2)^2} = \frac{1}{T+\tau} \Bigl[\frac{\tau}{\tau+2} - \frac{\alpha}{\gamma}\Bigr]
\end{equation}
that is,
\begin{equation}
2(T+\tau) = (\tau+2)\Bigl[\tau - \frac{\alpha}{\gamma}(\tau+2)\Bigr]
\end{equation}
Rearranging the terms, we obtain the equation
\begin{equation}
(\gamma-\alpha)\tau^2-4\alpha\tau-(4\alpha+2\gamma{T})=0
\end{equation}
whose only positive solution is given by
\begin{equation}
\begin{cases}
\displaystyle \Delta = 8\gamma\Bigl[2\alpha+(\gamma-\alpha)T\Bigr] \\
\displaystyle \tau_2 = \frac{4\alpha+\sqrt{\Delta}}{2(\gamma-\alpha)}
\end{cases}
\end{equation}
The corresponding maximum rate of gain is
\begin{equation}
R_2 = g_2^\prime(\tau_2) = \frac{2b}{(\tau_2+2)^2}
\end{equation}
Using the second patch is convenient if it improves our gain rate, that is, if $R_2>R_1$, where $R_1$ is as in (\ref{t1bum}). This imposes a condition on $b/a$, that is, on $\gamma$: it is convenient to use the second patch if $\gamma$ is large enough that we can find a better price that offsets the extra time spent searching. The limit condition $R_2=R_1$ yields
\begin{equation}
\gamma = \left[ \frac{\tau_2+2}{\sqrt{2t_b}+2}\right]^2
\end{equation}
The value of $\tau_2$ depends on $\gamma$ in such a way that we can't find a closed-form solution to this equation. We can, however, determine its limits. For $t_b\rightarrow{0}$, we have $\alpha\rightarrow{0}$, $T\rightarrow{0}$, $\Delta\rightarrow{0}$, and $\tau_2\rightarrow{0}$, therefore
\begin{equation}
\lim_{t_b\rightarrow{0}} \gamma = 1
\end{equation}
For $t_b\rightarrow\infty$, $\alpha\rightarrow{1}$, $T\sim{2t_b}$, and
\begin{equation}
\begin{aligned}
\Delta &\sim 16\gamma(\gamma-1)t_b \\
\tau_2 &\sim 2\frac{\sqrt{\gamma(\gamma-1)}}{\gamma-1}\sqrt{t_b}
\end{aligned}
\end{equation}
The equation
\begin{equation}
\label{gamma}
\gamma = \left[ \frac{2\sqrt{\gamma(\gamma-1)}}{\sqrt{2}(\gamma-1)} \right]^2 = \frac{2\gamma}{\gamma-1}
\end{equation}
has solution $\gamma=3$, therefore
\begin{equation}
\lim_{t_b\rightarrow\infty} \gamma = 3
\end{equation}
The behavior of $\gamma$ for intermediate values of $t_b$ can be found by solving the limit condition $R_2=R_1$ numerically, which, given the stability of the equation, can be done with a simple fixed-point iteration.
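As a concrete sketch of such an iteration (not part of the original derivation), the snippet below repeatedly applies the map $\gamma \mapsto \bigl[(\tau_2(\gamma)+2)/(\sqrt{2t_b}+2)\bigr]^2$ until it stabilizes. It assumes the first-patch quantities $\tau_1=\sqrt{2t_b}$ and $\alpha=\tau_1/(\tau_1+2)$, which are consistent with the limits $\alpha\rightarrow0$ and $\alpha\rightarrow1$ used above; the function name, tolerance, and starting point are arbitrary choices.
\begin{verbatim}
import math

def solve_gamma(t_b, tol=1e-12, max_iter=10000):
    # Quantities from the first-patch analysis (assumed definitions):
    # tau_1 = sqrt(2 t_b) is the optimal residence time on the first site,
    # T = 2 t_b + tau_1 is the time elapsed on arrival at the second site,
    # alpha = tau_1 / (tau_1 + 2) is the normalized minimum m_1 / a.
    tau_1 = math.sqrt(2.0 * t_b)
    T = 2.0 * t_b + tau_1
    alpha = tau_1 / (tau_1 + 2.0)

    gamma = 3.0  # start from the t_b -> infinity limit
    for _ in range(max_iter):
        # tau_2: positive root of
        # (gamma - alpha) tau^2 - 4 alpha tau - (4 alpha + 2 gamma T) = 0
        delta = 8.0 * gamma * (2.0 * alpha + (gamma - alpha) * T)
        tau_2 = (4.0 * alpha + math.sqrt(delta)) / (2.0 * (gamma - alpha))
        # Fixed-point map given by the limit condition R_2 = R_1
        gamma_new = ((tau_2 + 2.0) / (tau_1 + 2.0)) ** 2
        if abs(gamma_new - gamma) < tol:
            return gamma_new
        gamma = gamma_new
    return gamma

# The iterates approach 1 for t_b -> 0 and 3 for t_b -> infinity,
# matching the limits derived above.
for t_b in (1e-4, 1e-2, 1.0, 1e2):
    print(f"t_b = {t_b:g}  gamma = {solve_gamma(t_b):.4f}")
\end{verbatim}
The iteration is mildly oscillatory but damped, so it converges without any acceleration tricks; this matches the stability remark above.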
The result is shown in figure~\ref{gammafig}.
\begin{figure}
\begin{center}
\setlength{\unitlength}{0.240900pt}
\ifx\plotpoint\undefined\newsavebox{\plotpoint}\fi
\begin{picture}(839,839)(0,0)
\sbox{\plotpoint}{\rule[-0.200pt]{0.400pt}{0.400pt}}%
% [gnuplot point data removed: the plot shows $\gamma$ (vertical axis, from
%  1.2 to 3) against $t_b$ (horizontal axis, logarithmic scale from
%  $10^{-4}$ to $10^2$).]
\multiput(335.17,298.00)(2.000,1.547){2}{\rule{0.400pt}{0.350pt}} \put(338,301.17){\rule{0.482pt}{0.400pt}} \multiput(338.00,300.17)(1.000,2.000){2}{\rule{0.241pt}{0.400pt}} \put(340,303.17){\rule{0.700pt}{0.400pt}} \multiput(340.00,302.17)(1.547,2.000){2}{\rule{0.350pt}{0.400pt}} \put(343.17,305){\rule{0.400pt}{0.700pt}} \multiput(342.17,305.00)(2.000,1.547){2}{\rule{0.400pt}{0.350pt}} \put(345,308.17){\rule{0.482pt}{0.400pt}} \multiput(345.00,307.17)(1.000,2.000){2}{\rule{0.241pt}{0.400pt}} \put(347.17,310){\rule{0.400pt}{0.700pt}} \multiput(346.17,310.00)(2.000,1.547){2}{\rule{0.400pt}{0.350pt}} \put(349,313.17){\rule{0.482pt}{0.400pt}} \multiput(349.00,312.17)(1.000,2.000){2}{\rule{0.241pt}{0.400pt}} \multiput(351.00,315.61)(0.462,0.447){3}{\rule{0.500pt}{0.108pt}} \multiput(351.00,314.17)(1.962,3.000){2}{\rule{0.250pt}{0.400pt}} \put(354,318.17){\rule{0.482pt}{0.400pt}} \multiput(354.00,317.17)(1.000,2.000){2}{\rule{0.241pt}{0.400pt}} \put(356.17,320){\rule{0.400pt}{0.700pt}} \multiput(355.17,320.00)(2.000,1.547){2}{\rule{0.400pt}{0.350pt}} \put(358,323.17){\rule{0.482pt}{0.400pt}} \multiput(358.00,322.17)(1.000,2.000){2}{\rule{0.241pt}{0.400pt}} \put(360.17,325){\rule{0.400pt}{0.700pt}} \multiput(359.17,325.00)(2.000,1.547){2}{\rule{0.400pt}{0.350pt}} \multiput(362.00,328.61)(0.462,0.447){3}{\rule{0.500pt}{0.108pt}} \multiput(362.00,327.17)(1.962,3.000){2}{\rule{0.250pt}{0.400pt}} \put(365,331.17){\rule{0.482pt}{0.400pt}} \multiput(365.00,330.17)(1.000,2.000){2}{\rule{0.241pt}{0.400pt}} \put(367.17,333){\rule{0.400pt}{0.700pt}} \multiput(366.17,333.00)(2.000,1.547){2}{\rule{0.400pt}{0.350pt}} \put(369,336.17){\rule{0.482pt}{0.400pt}} \multiput(369.00,335.17)(1.000,2.000){2}{\rule{0.241pt}{0.400pt}} \put(371.17,338){\rule{0.400pt}{0.700pt}} \multiput(370.17,338.00)(2.000,1.547){2}{\rule{0.400pt}{0.350pt}} \multiput(373.00,341.61)(0.462,0.447){3}{\rule{0.500pt}{0.108pt}} \multiput(373.00,340.17)(1.962,3.000){2}{\rule{0.250pt}{0.400pt}} \put(376.17,344){\rule{0.400pt}{0.700pt}} \multiput(375.17,344.00)(2.000,1.547){2}{\rule{0.400pt}{0.350pt}} \put(378,347.17){\rule{0.482pt}{0.400pt}} \multiput(378.00,346.17)(1.000,2.000){2}{\rule{0.241pt}{0.400pt}} \put(380,349.17){\rule{0.482pt}{0.400pt}} \multiput(380.00,348.17)(1.000,2.000){2}{\rule{0.241pt}{0.400pt}} \put(382.17,351){\rule{0.400pt}{0.700pt}} \multiput(381.17,351.00)(2.000,1.547){2}{\rule{0.400pt}{0.350pt}} \multiput(384.00,354.61)(0.462,0.447){3}{\rule{0.500pt}{0.108pt}} \multiput(384.00,353.17)(1.962,3.000){2}{\rule{0.250pt}{0.400pt}} \put(387.17,357){\rule{0.400pt}{0.700pt}} \multiput(386.17,357.00)(2.000,1.547){2}{\rule{0.400pt}{0.350pt}} \put(389.17,360){\rule{0.400pt}{0.700pt}} \multiput(388.17,360.00)(2.000,1.547){2}{\rule{0.400pt}{0.350pt}} \put(391,363.17){\rule{0.482pt}{0.400pt}} \multiput(391.00,362.17)(1.000,2.000){2}{\rule{0.241pt}{0.400pt}} \put(393.17,365){\rule{0.400pt}{0.700pt}} \multiput(392.17,365.00)(2.000,1.547){2}{\rule{0.400pt}{0.350pt}} \multiput(395.00,368.61)(0.462,0.447){3}{\rule{0.500pt}{0.108pt}} \multiput(395.00,367.17)(1.962,3.000){2}{\rule{0.250pt}{0.400pt}} \put(398.17,371){\rule{0.400pt}{0.700pt}} \multiput(397.17,371.00)(2.000,1.547){2}{\rule{0.400pt}{0.350pt}} \put(400.17,374){\rule{0.400pt}{0.700pt}} \multiput(399.17,374.00)(2.000,1.547){2}{\rule{0.400pt}{0.350pt}} \put(402.17,377){\rule{0.400pt}{0.700pt}} \multiput(401.17,377.00)(2.000,1.547){2}{\rule{0.400pt}{0.350pt}} \put(404.17,380){\rule{0.400pt}{0.700pt}} \multiput(403.17,380.00)(2.000,1.547){2}{\rule{0.400pt}{0.350pt}} 
\multiput(406.00,383.61)(0.462,0.447){3}{\rule{0.500pt}{0.108pt}} \multiput(406.00,382.17)(1.962,3.000){2}{\rule{0.250pt}{0.400pt}} \put(409.17,386){\rule{0.400pt}{0.700pt}} \multiput(408.17,386.00)(2.000,1.547){2}{\rule{0.400pt}{0.350pt}} \put(411,389.17){\rule{0.482pt}{0.400pt}} \multiput(411.00,388.17)(1.000,2.000){2}{\rule{0.241pt}{0.400pt}} \put(413.17,391){\rule{0.400pt}{0.700pt}} \multiput(412.17,391.00)(2.000,1.547){2}{\rule{0.400pt}{0.350pt}} \put(415.17,394){\rule{0.400pt}{0.700pt}} \multiput(414.17,394.00)(2.000,1.547){2}{\rule{0.400pt}{0.350pt}} \multiput(417.00,397.61)(0.462,0.447){3}{\rule{0.500pt}{0.108pt}} \multiput(417.00,396.17)(1.962,3.000){2}{\rule{0.250pt}{0.400pt}} \put(420.17,400){\rule{0.400pt}{0.700pt}} \multiput(419.17,400.00)(2.000,1.547){2}{\rule{0.400pt}{0.350pt}} \put(422.17,403){\rule{0.400pt}{0.700pt}} \multiput(421.17,403.00)(2.000,1.547){2}{\rule{0.400pt}{0.350pt}} \put(424.17,406){\rule{0.400pt}{0.900pt}} \multiput(423.17,406.00)(2.000,2.132){2}{\rule{0.400pt}{0.450pt}} \put(426.17,410){\rule{0.400pt}{0.700pt}} \multiput(425.17,410.00)(2.000,1.547){2}{\rule{0.400pt}{0.350pt}} \multiput(428.00,413.61)(0.462,0.447){3}{\rule{0.500pt}{0.108pt}} \multiput(428.00,412.17)(1.962,3.000){2}{\rule{0.250pt}{0.400pt}} \put(431.17,416){\rule{0.400pt}{0.700pt}} \multiput(430.17,416.00)(2.000,1.547){2}{\rule{0.400pt}{0.350pt}} \put(433.17,419){\rule{0.400pt}{0.700pt}} \multiput(432.17,419.00)(2.000,1.547){2}{\rule{0.400pt}{0.350pt}} \put(435.17,422){\rule{0.400pt}{0.700pt}} \multiput(434.17,422.00)(2.000,1.547){2}{\rule{0.400pt}{0.350pt}} \put(437.17,425){\rule{0.400pt}{0.700pt}} \multiput(436.17,425.00)(2.000,1.547){2}{\rule{0.400pt}{0.350pt}} \multiput(439.00,428.61)(0.462,0.447){3}{\rule{0.500pt}{0.108pt}} \multiput(439.00,427.17)(1.962,3.000){2}{\rule{0.250pt}{0.400pt}} \put(442.17,431){\rule{0.400pt}{0.700pt}} \multiput(441.17,431.00)(2.000,1.547){2}{\rule{0.400pt}{0.350pt}} \put(444.17,434){\rule{0.400pt}{0.700pt}} \multiput(443.17,434.00)(2.000,1.547){2}{\rule{0.400pt}{0.350pt}} \put(446.17,437){\rule{0.400pt}{0.700pt}} \multiput(445.17,437.00)(2.000,1.547){2}{\rule{0.400pt}{0.350pt}} \put(448.17,440){\rule{0.400pt}{0.900pt}} \multiput(447.17,440.00)(2.000,2.132){2}{\rule{0.400pt}{0.450pt}} \multiput(450.00,444.61)(0.462,0.447){3}{\rule{0.500pt}{0.108pt}} \multiput(450.00,443.17)(1.962,3.000){2}{\rule{0.250pt}{0.400pt}} \put(453.17,447){\rule{0.400pt}{0.700pt}} \multiput(452.17,447.00)(2.000,1.547){2}{\rule{0.400pt}{0.350pt}} \put(455.17,450){\rule{0.400pt}{0.700pt}} \multiput(454.17,450.00)(2.000,1.547){2}{\rule{0.400pt}{0.350pt}} \put(457.17,453){\rule{0.400pt}{0.700pt}} \multiput(456.17,453.00)(2.000,1.547){2}{\rule{0.400pt}{0.350pt}} \multiput(459.00,456.61)(0.462,0.447){3}{\rule{0.500pt}{0.108pt}} \multiput(459.00,455.17)(1.962,3.000){2}{\rule{0.250pt}{0.400pt}} \put(462.17,459){\rule{0.400pt}{0.700pt}} \multiput(461.17,459.00)(2.000,1.547){2}{\rule{0.400pt}{0.350pt}} \put(464.17,462){\rule{0.400pt}{0.900pt}} \multiput(463.17,462.00)(2.000,2.132){2}{\rule{0.400pt}{0.450pt}} \put(466.17,466){\rule{0.400pt}{0.700pt}} \multiput(465.17,466.00)(2.000,1.547){2}{\rule{0.400pt}{0.350pt}} \put(468.17,469){\rule{0.400pt}{0.700pt}} \multiput(467.17,469.00)(2.000,1.547){2}{\rule{0.400pt}{0.350pt}} \multiput(470.00,472.61)(0.462,0.447){3}{\rule{0.500pt}{0.108pt}} \multiput(470.00,471.17)(1.962,3.000){2}{\rule{0.250pt}{0.400pt}} \put(473.17,475){\rule{0.400pt}{0.900pt}} \multiput(472.17,475.00)(2.000,2.132){2}{\rule{0.400pt}{0.450pt}} 
\put(475.17,479){\rule{0.400pt}{0.700pt}} \multiput(474.17,479.00)(2.000,1.547){2}{\rule{0.400pt}{0.350pt}} \put(477.17,482){\rule{0.400pt}{0.900pt}} \multiput(476.17,482.00)(2.000,2.132){2}{\rule{0.400pt}{0.450pt}} \put(479.17,486){\rule{0.400pt}{0.700pt}} \multiput(478.17,486.00)(2.000,1.547){2}{\rule{0.400pt}{0.350pt}} \multiput(481.00,489.61)(0.462,0.447){3}{\rule{0.500pt}{0.108pt}} \multiput(481.00,488.17)(1.962,3.000){2}{\rule{0.250pt}{0.400pt}} \put(484.17,492){\rule{0.400pt}{0.700pt}} \multiput(483.17,492.00)(2.000,1.547){2}{\rule{0.400pt}{0.350pt}} \put(486.17,495){\rule{0.400pt}{0.700pt}} \multiput(485.17,495.00)(2.000,1.547){2}{\rule{0.400pt}{0.350pt}} \put(488.17,498){\rule{0.400pt}{0.900pt}} \multiput(487.17,498.00)(2.000,2.132){2}{\rule{0.400pt}{0.450pt}} \put(490.17,502){\rule{0.400pt}{0.700pt}} \multiput(489.17,502.00)(2.000,1.547){2}{\rule{0.400pt}{0.350pt}} \multiput(492.00,505.61)(0.462,0.447){3}{\rule{0.500pt}{0.108pt}} \multiput(492.00,504.17)(1.962,3.000){2}{\rule{0.250pt}{0.400pt}} \put(495.17,508){\rule{0.400pt}{0.700pt}} \multiput(494.17,508.00)(2.000,1.547){2}{\rule{0.400pt}{0.350pt}} \put(497.17,511){\rule{0.400pt}{0.900pt}} \multiput(496.17,511.00)(2.000,2.132){2}{\rule{0.400pt}{0.450pt}} \put(499.17,515){\rule{0.400pt}{0.700pt}} \multiput(498.17,515.00)(2.000,1.547){2}{\rule{0.400pt}{0.350pt}} \put(501.17,518){\rule{0.400pt}{0.700pt}} \multiput(500.17,518.00)(2.000,1.547){2}{\rule{0.400pt}{0.350pt}} \multiput(503.00,521.61)(0.462,0.447){3}{\rule{0.500pt}{0.108pt}} \multiput(503.00,520.17)(1.962,3.000){2}{\rule{0.250pt}{0.400pt}} \put(506.17,524){\rule{0.400pt}{0.700pt}} \multiput(505.17,524.00)(2.000,1.547){2}{\rule{0.400pt}{0.350pt}} \put(508.17,527){\rule{0.400pt}{0.700pt}} \multiput(507.17,527.00)(2.000,1.547){2}{\rule{0.400pt}{0.350pt}} \put(510.17,530){\rule{0.400pt}{0.900pt}} \multiput(509.17,530.00)(2.000,2.132){2}{\rule{0.400pt}{0.450pt}} \put(512.17,534){\rule{0.400pt}{0.700pt}} \multiput(511.17,534.00)(2.000,1.547){2}{\rule{0.400pt}{0.350pt}} \multiput(514.00,537.61)(0.462,0.447){3}{\rule{0.500pt}{0.108pt}} \multiput(514.00,536.17)(1.962,3.000){2}{\rule{0.250pt}{0.400pt}} \put(517.17,540){\rule{0.400pt}{0.700pt}} \multiput(516.17,540.00)(2.000,1.547){2}{\rule{0.400pt}{0.350pt}} \put(519.17,543){\rule{0.400pt}{0.700pt}} \multiput(518.17,543.00)(2.000,1.547){2}{\rule{0.400pt}{0.350pt}} \put(521.17,546){\rule{0.400pt}{0.700pt}} \multiput(520.17,546.00)(2.000,1.547){2}{\rule{0.400pt}{0.350pt}} \put(523.17,549){\rule{0.400pt}{0.700pt}} \multiput(522.17,549.00)(2.000,1.547){2}{\rule{0.400pt}{0.350pt}} \multiput(525.00,552.61)(0.462,0.447){3}{\rule{0.500pt}{0.108pt}} \multiput(525.00,551.17)(1.962,3.000){2}{\rule{0.250pt}{0.400pt}} \put(528.17,555){\rule{0.400pt}{0.700pt}} \multiput(527.17,555.00)(2.000,1.547){2}{\rule{0.400pt}{0.350pt}} \put(530.17,558){\rule{0.400pt}{0.700pt}} \multiput(529.17,558.00)(2.000,1.547){2}{\rule{0.400pt}{0.350pt}} \put(532.17,561){\rule{0.400pt}{0.700pt}} \multiput(531.17,561.00)(2.000,1.547){2}{\rule{0.400pt}{0.350pt}} \put(534.17,564){\rule{0.400pt}{0.900pt}} \multiput(533.17,564.00)(2.000,2.132){2}{\rule{0.400pt}{0.450pt}} \multiput(536.00,568.61)(0.462,0.447){3}{\rule{0.500pt}{0.108pt}} \multiput(536.00,567.17)(1.962,3.000){2}{\rule{0.250pt}{0.400pt}} \put(539.17,571){\rule{0.400pt}{0.700pt}} \multiput(538.17,571.00)(2.000,1.547){2}{\rule{0.400pt}{0.350pt}} \put(541,574.17){\rule{0.482pt}{0.400pt}} \multiput(541.00,573.17)(1.000,2.000){2}{\rule{0.241pt}{0.400pt}} \put(543,576.17){\rule{0.482pt}{0.400pt}} 
\multiput(543.00,575.17)(1.000,2.000){2}{\rule{0.241pt}{0.400pt}} \put(545.17,578){\rule{0.400pt}{0.700pt}} \multiput(544.17,578.00)(2.000,1.547){2}{\rule{0.400pt}{0.350pt}} \multiput(547.00,581.61)(0.462,0.447){3}{\rule{0.500pt}{0.108pt}} \multiput(547.00,580.17)(1.962,3.000){2}{\rule{0.250pt}{0.400pt}} \put(550.17,584){\rule{0.400pt}{0.700pt}} \multiput(549.17,584.00)(2.000,1.547){2}{\rule{0.400pt}{0.350pt}} \put(552.17,587){\rule{0.400pt}{0.700pt}} \multiput(551.17,587.00)(2.000,1.547){2}{\rule{0.400pt}{0.350pt}} \put(554.17,590){\rule{0.400pt}{0.700pt}} \multiput(553.17,590.00)(2.000,1.547){2}{\rule{0.400pt}{0.350pt}} \put(556,593.17){\rule{0.482pt}{0.400pt}} \multiput(556.00,592.17)(1.000,2.000){2}{\rule{0.241pt}{0.400pt}} \multiput(558.00,595.61)(0.462,0.447){3}{\rule{0.500pt}{0.108pt}} \multiput(558.00,594.17)(1.962,3.000){2}{\rule{0.250pt}{0.400pt}} \put(561.17,598){\rule{0.400pt}{0.700pt}} \multiput(560.17,598.00)(2.000,1.547){2}{\rule{0.400pt}{0.350pt}} \put(563.17,601){\rule{0.400pt}{0.700pt}} \multiput(562.17,601.00)(2.000,1.547){2}{\rule{0.400pt}{0.350pt}} \put(565,604.17){\rule{0.482pt}{0.400pt}} \multiput(565.00,603.17)(1.000,2.000){2}{\rule{0.241pt}{0.400pt}} \put(567.17,606){\rule{0.400pt}{0.700pt}} \multiput(566.17,606.00)(2.000,1.547){2}{\rule{0.400pt}{0.350pt}} \multiput(569.00,609.61)(0.462,0.447){3}{\rule{0.500pt}{0.108pt}} \multiput(569.00,608.17)(1.962,3.000){2}{\rule{0.250pt}{0.400pt}} \put(572,612.17){\rule{0.482pt}{0.400pt}} \multiput(572.00,611.17)(1.000,2.000){2}{\rule{0.241pt}{0.400pt}} \put(574.17,614){\rule{0.400pt}{0.700pt}} \multiput(573.17,614.00)(2.000,1.547){2}{\rule{0.400pt}{0.350pt}} \put(576,617.17){\rule{0.482pt}{0.400pt}} \multiput(576.00,616.17)(1.000,2.000){2}{\rule{0.241pt}{0.400pt}} \put(578.17,619){\rule{0.400pt}{0.700pt}} \multiput(577.17,619.00)(2.000,1.547){2}{\rule{0.400pt}{0.350pt}} \multiput(580.00,622.61)(0.462,0.447){3}{\rule{0.500pt}{0.108pt}} \multiput(580.00,621.17)(1.962,3.000){2}{\rule{0.250pt}{0.400pt}} \put(583,625.17){\rule{0.482pt}{0.400pt}} \multiput(583.00,624.17)(1.000,2.000){2}{\rule{0.241pt}{0.400pt}} \put(585.17,627){\rule{0.400pt}{0.700pt}} \multiput(584.17,627.00)(2.000,1.547){2}{\rule{0.400pt}{0.350pt}} \put(587,630.17){\rule{0.482pt}{0.400pt}} \multiput(587.00,629.17)(1.000,2.000){2}{\rule{0.241pt}{0.400pt}} \put(589,632.17){\rule{0.482pt}{0.400pt}} \multiput(589.00,631.17)(1.000,2.000){2}{\rule{0.241pt}{0.400pt}} \multiput(591.00,634.61)(0.462,0.447){3}{\rule{0.500pt}{0.108pt}} \multiput(591.00,633.17)(1.962,3.000){2}{\rule{0.250pt}{0.400pt}} \put(594,637.17){\rule{0.482pt}{0.400pt}} \multiput(594.00,636.17)(1.000,2.000){2}{\rule{0.241pt}{0.400pt}} \put(596.17,639){\rule{0.400pt}{0.700pt}} \multiput(595.17,639.00)(2.000,1.547){2}{\rule{0.400pt}{0.350pt}} \put(598,642.17){\rule{0.482pt}{0.400pt}} \multiput(598.00,641.17)(1.000,2.000){2}{\rule{0.241pt}{0.400pt}} \put(600,644.17){\rule{0.482pt}{0.400pt}} \multiput(600.00,643.17)(1.000,2.000){2}{\rule{0.241pt}{0.400pt}} \put(602,646.17){\rule{0.700pt}{0.400pt}} \multiput(602.00,645.17)(1.547,2.000){2}{\rule{0.350pt}{0.400pt}} \put(605.17,648){\rule{0.400pt}{0.700pt}} \multiput(604.17,648.00)(2.000,1.547){2}{\rule{0.400pt}{0.350pt}} \put(607,651.17){\rule{0.482pt}{0.400pt}} \multiput(607.00,650.17)(1.000,2.000){2}{\rule{0.241pt}{0.400pt}} \put(609,653.17){\rule{0.482pt}{0.400pt}} \multiput(609.00,652.17)(1.000,2.000){2}{\rule{0.241pt}{0.400pt}} \put(611.17,655){\rule{0.400pt}{0.900pt}} \multiput(610.17,655.00)(2.000,2.132){2}{\rule{0.400pt}{0.450pt}} 
\put(613,659.17){\rule{0.700pt}{0.400pt}} \multiput(613.00,658.17)(1.547,2.000){2}{\rule{0.350pt}{0.400pt}} \put(616,661.17){\rule{0.482pt}{0.400pt}} \multiput(616.00,660.17)(1.000,2.000){2}{\rule{0.241pt}{0.400pt}} \put(618,663.17){\rule{0.482pt}{0.400pt}} \multiput(618.00,662.17)(1.000,2.000){2}{\rule{0.241pt}{0.400pt}} \put(620,665.17){\rule{0.482pt}{0.400pt}} \multiput(620.00,664.17)(1.000,2.000){2}{\rule{0.241pt}{0.400pt}} \put(622,667.17){\rule{0.482pt}{0.400pt}} \multiput(622.00,666.17)(1.000,2.000){2}{\rule{0.241pt}{0.400pt}} \put(624,669.17){\rule{0.700pt}{0.400pt}} \multiput(624.00,668.17)(1.547,2.000){2}{\rule{0.350pt}{0.400pt}} \put(627,671.17){\rule{0.482pt}{0.400pt}} \multiput(627.00,670.17)(1.000,2.000){2}{\rule{0.241pt}{0.400pt}} \put(629,673.17){\rule{0.482pt}{0.400pt}} \multiput(629.00,672.17)(1.000,2.000){2}{\rule{0.241pt}{0.400pt}} \put(631,675.17){\rule{0.482pt}{0.400pt}} \multiput(631.00,674.17)(1.000,2.000){2}{\rule{0.241pt}{0.400pt}} \put(633,677.17){\rule{0.482pt}{0.400pt}} \multiput(633.00,676.17)(1.000,2.000){2}{\rule{0.241pt}{0.400pt}} \put(635,679.17){\rule{0.700pt}{0.400pt}} \multiput(635.00,678.17)(1.547,2.000){2}{\rule{0.350pt}{0.400pt}} \put(638,681.17){\rule{0.482pt}{0.400pt}} \multiput(638.00,680.17)(1.000,2.000){2}{\rule{0.241pt}{0.400pt}} \put(640,683.17){\rule{0.482pt}{0.400pt}} \multiput(640.00,682.17)(1.000,2.000){2}{\rule{0.241pt}{0.400pt}} \put(642,685.17){\rule{0.482pt}{0.400pt}} \multiput(642.00,684.17)(1.000,2.000){2}{\rule{0.241pt}{0.400pt}} \put(644,686.67){\rule{0.482pt}{0.400pt}} \multiput(644.00,686.17)(1.000,1.000){2}{\rule{0.241pt}{0.400pt}} \put(646,688.17){\rule{0.700pt}{0.400pt}} \multiput(646.00,687.17)(1.547,2.000){2}{\rule{0.350pt}{0.400pt}} \put(649,690.17){\rule{0.482pt}{0.400pt}} \multiput(649.00,689.17)(1.000,2.000){2}{\rule{0.241pt}{0.400pt}} \put(651,692.17){\rule{0.482pt}{0.400pt}} \multiput(651.00,691.17)(1.000,2.000){2}{\rule{0.241pt}{0.400pt}} \put(653,693.67){\rule{0.482pt}{0.400pt}} \multiput(653.00,693.17)(1.000,1.000){2}{\rule{0.241pt}{0.400pt}} \put(655,695.17){\rule{0.700pt}{0.400pt}} \multiput(655.00,694.17)(1.547,2.000){2}{\rule{0.350pt}{0.400pt}} \put(658,697.17){\rule{0.482pt}{0.400pt}} \multiput(658.00,696.17)(1.000,2.000){2}{\rule{0.241pt}{0.400pt}} \put(660,698.67){\rule{0.482pt}{0.400pt}} \multiput(660.00,698.17)(1.000,1.000){2}{\rule{0.241pt}{0.400pt}} \put(662,700.17){\rule{0.482pt}{0.400pt}} \multiput(662.00,699.17)(1.000,2.000){2}{\rule{0.241pt}{0.400pt}} \put(664,701.67){\rule{0.482pt}{0.400pt}} \multiput(664.00,701.17)(1.000,1.000){2}{\rule{0.241pt}{0.400pt}} \put(666,703.17){\rule{0.700pt}{0.400pt}} \multiput(666.00,702.17)(1.547,2.000){2}{\rule{0.350pt}{0.400pt}} \put(669,705.17){\rule{0.482pt}{0.400pt}} \multiput(669.00,704.17)(1.000,2.000){2}{\rule{0.241pt}{0.400pt}} \put(671,706.67){\rule{0.482pt}{0.400pt}} \multiput(671.00,706.17)(1.000,1.000){2}{\rule{0.241pt}{0.400pt}} \put(673,708.17){\rule{0.482pt}{0.400pt}} \multiput(673.00,707.17)(1.000,2.000){2}{\rule{0.241pt}{0.400pt}} \put(675,709.67){\rule{0.482pt}{0.400pt}} \multiput(675.00,709.17)(1.000,1.000){2}{\rule{0.241pt}{0.400pt}} \put(677,710.67){\rule{0.723pt}{0.400pt}} \multiput(677.00,710.17)(1.500,1.000){2}{\rule{0.361pt}{0.400pt}} \put(680,712.17){\rule{0.482pt}{0.400pt}} \multiput(680.00,711.17)(1.000,2.000){2}{\rule{0.241pt}{0.400pt}} \put(682,713.67){\rule{0.482pt}{0.400pt}} \multiput(682.00,713.17)(1.000,1.000){2}{\rule{0.241pt}{0.400pt}} \put(684,714.67){\rule{0.482pt}{0.400pt}} 
\multiput(684.00,714.17)(1.000,1.000){2}{\rule{0.241pt}{0.400pt}} \put(686,716.17){\rule{0.482pt}{0.400pt}} \multiput(686.00,715.17)(1.000,2.000){2}{\rule{0.241pt}{0.400pt}} \put(691,717.67){\rule{0.482pt}{0.400pt}} \multiput(691.00,717.17)(1.000,1.000){2}{\rule{0.241pt}{0.400pt}} \put(693,718.67){\rule{0.482pt}{0.400pt}} \multiput(693.00,718.17)(1.000,1.000){2}{\rule{0.241pt}{0.400pt}} \put(695,719.67){\rule{0.482pt}{0.400pt}} \multiput(695.00,719.17)(1.000,1.000){2}{\rule{0.241pt}{0.400pt}} \put(697,720.67){\rule{0.482pt}{0.400pt}} \multiput(697.00,720.17)(1.000,1.000){2}{\rule{0.241pt}{0.400pt}} \put(699,722.17){\rule{0.700pt}{0.400pt}} \multiput(699.00,721.17)(1.547,2.000){2}{\rule{0.350pt}{0.400pt}} \put(702,723.67){\rule{0.482pt}{0.400pt}} \multiput(702.00,723.17)(1.000,1.000){2}{\rule{0.241pt}{0.400pt}} \put(704,724.67){\rule{0.482pt}{0.400pt}} \multiput(704.00,724.17)(1.000,1.000){2}{\rule{0.241pt}{0.400pt}} \put(706,725.67){\rule{0.482pt}{0.400pt}} \multiput(706.00,725.17)(1.000,1.000){2}{\rule{0.241pt}{0.400pt}} \put(708,726.67){\rule{0.482pt}{0.400pt}} \multiput(708.00,726.17)(1.000,1.000){2}{\rule{0.241pt}{0.400pt}} \put(710,727.67){\rule{0.723pt}{0.400pt}} \multiput(710.00,727.17)(1.500,1.000){2}{\rule{0.361pt}{0.400pt}} \put(713,728.67){\rule{0.482pt}{0.400pt}} \multiput(713.00,728.17)(1.000,1.000){2}{\rule{0.241pt}{0.400pt}} \put(715,729.67){\rule{0.482pt}{0.400pt}} \multiput(715.00,729.17)(1.000,1.000){2}{\rule{0.241pt}{0.400pt}} \put(717,730.67){\rule{0.482pt}{0.400pt}} \multiput(717.00,730.17)(1.000,1.000){2}{\rule{0.241pt}{0.400pt}} \put(719,731.67){\rule{0.482pt}{0.400pt}} \multiput(719.00,731.17)(1.000,1.000){2}{\rule{0.241pt}{0.400pt}} \put(721,732.67){\rule{0.723pt}{0.400pt}} \multiput(721.00,732.17)(1.500,1.000){2}{\rule{0.361pt}{0.400pt}} \put(724,733.67){\rule{0.482pt}{0.400pt}} \multiput(724.00,733.17)(1.000,1.000){2}{\rule{0.241pt}{0.400pt}} \put(726,734.67){\rule{0.482pt}{0.400pt}} \multiput(726.00,734.17)(1.000,1.000){2}{\rule{0.241pt}{0.400pt}} \put(728,735.67){\rule{0.482pt}{0.400pt}} \multiput(728.00,735.17)(1.000,1.000){2}{\rule{0.241pt}{0.400pt}} \put(730,736.67){\rule{0.482pt}{0.400pt}} \multiput(730.00,736.17)(1.000,1.000){2}{\rule{0.241pt}{0.400pt}} \put(688.0,718.0){\rule[-0.200pt]{0.723pt}{0.400pt}} \put(735,737.67){\rule{0.482pt}{0.400pt}} \multiput(735.00,737.17)(1.000,1.000){2}{\rule{0.241pt}{0.400pt}} \put(737,738.67){\rule{0.482pt}{0.400pt}} \multiput(737.00,738.17)(1.000,1.000){2}{\rule{0.241pt}{0.400pt}} \put(739,739.67){\rule{0.482pt}{0.400pt}} \multiput(739.00,739.17)(1.000,1.000){2}{\rule{0.241pt}{0.400pt}} \put(741,740.67){\rule{0.482pt}{0.400pt}} \multiput(741.00,740.17)(1.000,1.000){2}{\rule{0.241pt}{0.400pt}} \put(732.0,738.0){\rule[-0.200pt]{0.723pt}{0.400pt}} \put(746,741.67){\rule{0.482pt}{0.400pt}} \multiput(746.00,741.17)(1.000,1.000){2}{\rule{0.241pt}{0.400pt}} \put(748,742.67){\rule{0.482pt}{0.400pt}} \multiput(748.00,742.17)(1.000,1.000){2}{\rule{0.241pt}{0.400pt}} \put(750,743.67){\rule{0.482pt}{0.400pt}} \multiput(750.00,743.17)(1.000,1.000){2}{\rule{0.241pt}{0.400pt}} \put(743.0,742.0){\rule[-0.200pt]{0.723pt}{0.400pt}} \put(754,744.67){\rule{0.723pt}{0.400pt}} \multiput(754.00,744.17)(1.500,1.000){2}{\rule{0.361pt}{0.400pt}} \put(757,745.67){\rule{0.482pt}{0.400pt}} \multiput(757.00,745.17)(1.000,1.000){2}{\rule{0.241pt}{0.400pt}} \put(752.0,745.0){\rule[-0.200pt]{0.482pt}{0.400pt}} \put(761,746.67){\rule{0.482pt}{0.400pt}} \multiput(761.00,746.17)(1.000,1.000){2}{\rule{0.241pt}{0.400pt}} 
\end{center}
\caption{The behavior of $\gamma$ as a function of $t_b$ (the horizontal axis is logarithmic). For a given $t_b$, if the ratio of the spreads of the two patches, $b/a$, is greater than the $\gamma$ shown in this graph, then it is convenient to explore the second patch.}
\label{gammafig}
\end{figure}

For $\frac{b}{a}>\gamma(t_b)$ and switch time $t_b$, it is convenient to explore the second patch.

\section{\sc Walking in Continuous Space}

We have seen, in the first section, the neurological basis of ARS, and its widespread use to solve many apparently unrelated problems in the animal kingdom. All these problems have a common abstract structure: that of foraging in a patchy environment, that is, in an environment in which the resources are distributed in clumps: there are concentrated areas in which the resource is present, separated by stretches in which no (or little) resource is found. Some fiddling with genetic algorithms convinced us that ARS is indeed an optimal strategy for this kind of problem. The problem that I should like to consider now is how to characterize this behavior at a large scale. I have shown in the previous section how to decide when to leave a patch and venture in search of another; now I am interested in analyzing the global behavior that results from these decisions. One result, which we have already glimpsed at the end of section~\ref{genetic}, is to see ARS as a type of \textsl{random walk}. In this section, we shall study the paths produced by ARS as random walks in a continuum (viz.\ in ${\mathbb{R}}^n$, usually in ${\mathbb{R}}^2$). As we shall see, an important parameter in this exploration is the exponent $\nu$ of the curves in Figure~\ref{variances}, that is, in the relation $\langle{X^2(t)}\rangle\sim{t^\nu}$; the fact that $\nu>1$ makes the ARS walk a walk of a peculiar kind, known as a \textsl{Levy walk}.

\subsection{Random Walks and Diffusion Processes}

A random walk is the description of the motion of a point (sometimes called, in reference to the physics in which random walks were first studied, a \textsl{particle} or, in reference to ecology, an \textsl{individual}) subject to forces that can be modeled as a stochastic process. Random walks can be described at three levels of detail, given by the schema of Figure~\ref{walkonsunshine}.

\begin{figure}
\begin{center}
\begin{tabular}{cccp{6em}}
Input & Description Level & Output & Mathematical formalism \\
\hline
\parbox{8em}{individual fluctuations} & \framebox{\parbox{8em}{\centerline{microscopic}}} & trajectories & Langevin equations \\
\parbox{8em}{averaged fluctuations} & \framebox{\parbox{8em}{\centerline{mesoscopic}}} & prob.
density & Master equation \\
\parbox{8em}{macroscopic parameters} & \framebox{\parbox{8em}{\centerline{macroscopic}}} & prob.\ density & Fokker-Planck equations \\
\end{tabular}
\setlength{\unitlength}{1em}
\begin{picture}(0,0)(0,0)
\put(0,-4){\vector(0,1){8}}
\put(0.2,-2){\makebox(0,0)[l]{complexity}}
\end{picture}
\end{center}
\caption{\small Three levels of description of random walks: the mesoscopic entails considering the time span of each motion to be small with respect to the time of the whole phenomenon, and approximating time functions with their first derivatives; the macroscopic entails making the same approximation on space. This entails that the master equation (mesoscopic) is a differential equation in time and an integral equation in space, while the Fokker-Planck equation (macroscopic) is differential in time and space. (Adapted from \cite{mendez:14}.)}
\label{walkonsunshine}
\end{figure}

We shall consider these levels of description one by one, especially as they apply to the best-known model of random walk: \textsl{Brownian motion}.

\subsection{Microscopic description--Langevin equations}

Langevin's equations were originally studied for what became the prototype of diffusive random walks, namely \textsl{Brownian motion} \cite{hida:80,karatzas:12}. In 1827, botanist R. Brown discovered, during microscopic observations, that particles of pollen suspended in water exhibited an incessant and irregular motion \cite{powles:78}. Vitalist explanations were soon discarded since mineral (viz.\ non-living) particles exhibited the same phenomenon. Brownian motion occurs when the mass of the particle of pollen is larger than the mass of the molecules of the liquid, so that the continuous collisions drive the particles in a chaotic way (Figure~\ref{brownian}). The first theoretical explanation of this phenomenon was given in 1905 by Albert Einstein at a macroscopic level \cite{einstein:05}, and we shall consider his approach shortly. In 1906, Paul Langevin offered a microscopic model of Brownian motion based on stochastic differential equations. Langevin's equations for a one-dimensional Brownian motion (the case that is commonly studied) are:
\begin{equation}
\begin{aligned}
\frac{dx}{dt} &= v \\
m\frac{dv}{dt} &= -\gamma v + \sigma \xi(t)
\end{aligned}
\end{equation}
where $m$ is the particle mass, $\gamma$ is the friction coefficient, and $\xi(t)$ is the force resulting from the impacts with the molecules. Langevin assumed that $\xi(t)$ is a Gaussian, uncorrelated stochastic process with zero mean, that is, $\langle\xi(t)\rangle=0$, $\langle\xi(t)\xi(t')\rangle=\delta(t-t')$. He also considered the limit of strong friction $m|dv/dt|\ll|\gamma{v}|$ (more on this hypothesis later), so the equations become
\begin{equation}
\gamma \frac{dx}{dt} = \sigma\xi(t)
\end{equation}
and, defining the \textsl{diffusion coefficient} $D=\frac{\sigma^2}{2\gamma^2}$,
\begin{equation}
\frac{dx}{dt} = \sqrt{2D} \xi(t)
\end{equation}
or
\begin{equation}
\label{intme}
dx = \sqrt{2D}\xi(t) dt = \sqrt{2D}dW(t)
\end{equation}
where $W(t)$ is a Wiener process (see section \ref{gausswie}). Integrating (\ref{intme}) we get
\begin{equation}
x(t) = x(0) + \sqrt{2D}W(t)
\end{equation}
So, Brownian motion is the motion of a particle whose displacement is a Wiener process and whose velocity is an uncorrelated Gaussian process.
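This characterization makes the process trivial to simulate: each time step just adds an independent Gaussian increment of variance $2D\,\Delta t$. The following sketch (in Python; the values of $D$, the time step, and the ensemble size are illustrative choices, not taken from the text) does exactly this, and checks the linear growth of the variance that we derive analytically below.
\begin{verbatim}
import numpy as np

# Minimal sketch of the overdamped Langevin equation dx = sqrt(2D) dW(t):
# each step adds an independent Gaussian increment of variance 2*D*dt
# (Euler--Maruyama integration).  All parameter values are illustrative.
D, dt, n_steps, n_walkers = 0.5, 0.01, 1000, 20000

rng = np.random.default_rng(0)
steps = rng.normal(0.0, np.sqrt(2 * D * dt), size=(n_walkers, n_steps))
x = np.cumsum(steps, axis=1)        # trajectories x(t), with x(0) = 0

t = dt * np.arange(1, n_steps + 1)
var = x.var(axis=0)                 # ensemble variance at each time
print(var[-1], 2 * D * t[-1])       # both close to 10: variance = 2Dt
\end{verbatim}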
The hypothesis of strong friction is key to obtaining the velocity as an uncorrelated stochastic process: if inertial phenomena are present, then the velocities at two different instants in time are correlated. From the solution of this equation we can determine macroscopic quantities such as the mean position $\langle{X}\rangle$ and the mean square displacement $\langle{X^2}\rangle-\langle{X}\rangle^2$:
\begin{equation}
\label{lookieloopsie}
\begin{aligned}
\langle X \rangle &= \langle X(0) \rangle + \sqrt{2D} \langle W(t) \rangle = \langle X(0) \rangle \\
\langle{X^2}\rangle-\langle{X}\rangle^2 &= 2D\langle W^2(t) \rangle = 2Dt
\end{aligned}
\end{equation}
From the second equation we see that $\langle{X^2}\rangle\sim{t}$. This is a relation that we have already encountered: it characterizes the ARS walk for very low $\rho$ and for $\rho\sim{1}$ (Figure~\ref{variances}). It is a behavior typical of \textsl{diffusive processes}, and we shall meet it quite a few times in the following.
\begin{figure}[tbhp]
\begin{center}
\setlength{\unitlength}{1.5em}
\begin{picture}(15,14)(0,0)
\newsavebox{\bump}
\savebox{\bump}{
\thicklines
\put(0,0){\circle{1}}
\thinlines
\put(0,0){\circle{1.2}}
\put(0,0){\circle{1.4}}
}
\put(4.5,5.5){\usebox{\bump}}
\put(5.5,2.5){\usebox{\bump}}
\put(8.5,8.5){\usebox{\bump}}
\put(9.5,6.5){\usebox{\bump}}
\put(6.5,7.5){\usebox{\bump}}
\put(7.5,4.5){\usebox{\bump}}
\put(13,4){\usebox{\bump}}
\put(10.5,2.5){\usebox{\bump}}
\put(12.5,6){\usebox{\bump}}
\put(10,11.5){\usebox{\bump}}
\put(6.5,10.5){\usebox{\bump}}
\put(8.5,12.5){\usebox{\bump}}
\put(0.5,1){\line(1,1){4}}
\put(4.5,5){\line(1,-2){1}}
\put(5.5,3){\line(3,5){3}}
\put(8.5,8){\line(1,-2){0.5}}
\put(9,7){\line(-4,1){2}}
\put(7,7.5){\line(1,-5){0.5}}
\put(7.5,5){\line(5,-1){5}}
\put(12.5,4){\line(-3,-2){1.5}}
\put(11,3){\line(1,3){1}}
\put(12,6){\line(-2,5){2}}
\put(10,11){\line(-6,-1){3}}
\put(7,10.5){\line(1,2){1}}
\put(8,12.5){\line(-1,2){0.5}}
%
\put(3.5,1.5){\circle{1}}
\put(7.5,0.5){\circle{1}}
\put(8.5,2.5){\circle{1}}
\put(2,7){\circle{1}}
\put(2.5,4.5){\circle{1}}
\put(4,9){\circle{1}}
\put(2.5,11.5){\circle{1}}
\put(4.5,12.5){\circle{1}}
\put(12.5,10.5){\circle{1}}
\put(13.5,8){\circle{1}}
\end{picture}
\end{center}
\caption{\small A schematic illustration of Brownian motion: a particle of pollen \dqt{bumps} on the molecules of the liquid in which it is suspended, creating an irregular random trajectory.}
\label{brownian}
\end{figure}

\subsection{Mesoscopic description--Master equation}

The \textsl{master equation} is an ensemble equation that expresses the probability $P(x,t)$ that a particle be at position $x$ at time $t$. It is an integro-differential equation, expressing the time derivative of $P(x,t)$ as a balance between the probability of arriving at $x$ and the probability of leaving the position $x$ once we are there.

Consider a stationary Markov process. For this kind of process, the probability $P(x,t|z,t')$ depends only on $t-t'$, so we can define $P_\tau(x,z)=P(x,t+\tau|z,t)=P(x,\tau|z,0)$. Consider now $P(x,t+\tau)$, that is, the probability that the walking particle will be in position $x$ at time $t+\tau$. The particle is in $x$ at time $t+\tau$ if there is a position $z$ such that the particle was in $z$ at time $t$ and has moved from $z$ to $x$ in the time interval $\tau$ (if $z=x$, this is the probability that the particle was already in $x$ and hasn't moved). This event, for a specific $z$, has a probability $P(z,t)P_\tau(x,z)$.
Integrating over all possible $z$, we obtain
\begin{equation}
P(x,t+\tau) = \int_{-\infty}^\infty\!\!\!\!P(z,t)P_\tau(x,z)\,dz
\end{equation}
If $\tau\ll{t}$, we can approximate $P(x,t+\tau)$ as
\begin{equation}
P(x,t+\tau) = P(x,t) + \frac{\partial P}{\partial t} \tau + O(\tau^2)
\end{equation}
Let $\omega(x|z)$ be the transition probability per unit time from $z$ to $x$, that is, $\omega(x|z)\tau$ is the probability that the particle go from $z$ to $x$ in a time $\tau$. If the particle is in $x$ at time $t$, then
\begin{equation}
\int_{-\infty}^{\infty}\!\!\!\!\omega(z|x)\tau\,dz
\end{equation}
is the probability that it will move somewhere else, and
\begin{equation}
1 - \int_{-\infty}^{\infty}\!\!\!\!\omega(z|x)\tau\,dz
\end{equation}
is the probability that it will stay in $x$. Balancing the probability of arriving at $x$ and that of not moving if we are already there, we obtain
\begin{equation}
P(x,t+\tau) = P(x,t)\left( 1 - \int_{-\infty}^{\infty}\!\!\!\!\omega(z|x)\tau\,dz \right) + \int_{-\infty}^{\infty}\!\!\!\!\omega(x|z)P(z,t)\tau\,dz
\end{equation}
The first term gives us the probability that the particle was in $x$ at time $t$ and did not move in the interval $[t,t+\tau]$, while the second is the probability that the particle was in a different position at $t$ and that it moved to $x$ in the interval $[t,t+\tau]$. Rearranging and taking the limit for $\tau\rightarrow{0}$, we obtain the \textsl{master equation}
\begin{equation}
\frac{\partial}{\partial t} P(x,t) = \int_{-\infty}^{\infty}\!\!\!\!\omega(x|z)P(z,t)\,dz - \int_{-\infty}^{\infty}\!\!\!\!\omega(z|x)P(x,t)\,dz
\end{equation}
If $X$ is a discrete stochastic process then, calling $\omega_{nm}$ the probability per unit time of moving from position $x_n$ to position $x_m$, the equation becomes
\begin{equation}
\frac{\partial}{\partial t} P(n,t) = \sum_m \omega_{mn}P(m,t) - \sum_m \omega_{nm}P(n,t)
\end{equation}

\example As an example, consider a counting process that transitions from $n$ to $n+1$ with rate $\lambda$, that is, $\omega_{n,n+1}=\lambda$ and $\omega_{nm}=0$ for $m\ne{n+1}$. Then the master equation reads
\begin{equation}
\frac{\partial}{\partial t} P(n,t) = \lambda\bigl[ P(n-1,t) - P(n,t) \bigr]
\end{equation}
This type of equation can be solved through the use of the $z$-transform of the sequence $P(n,t)$, defined as
\begin{equation}
\label{this}
F(z,t) = {\mathcal{Z}}[P(n,t)] = \sum_{n=0}^\infty z^n P(n,t)
\end{equation}
Then
\begin{equation}
\sum_{n=0}^\infty z^n \frac{\partial}{\partial t} P(n,t) = \sum_{n=0}^\infty \frac{\partial}{\partial t} (z^n P(n,t)) = \frac{\partial}{\partial t} F(z,t)
\end{equation}
and
\begin{equation}
\sum_{n=0}^\infty z^n P(n-1,t) = z \sum_{n=1}^\infty z^{n-1} P(n-1,t) = z \sum_{n=0}^\infty z^n P(n,t) = z F(z,t)
\end{equation}
so that
\begin{equation}
\sum_{n=0}^\infty z^n \lambda\bigl[ P(n-1,t) - P(n,t) \bigr] = \lambda \Bigl[ \sum_{n=0}^\infty z^n P(n-1,t) - \sum_{n=0}^\infty z^n P(n,t) \Bigr] = \lambda(z-1)F(z,t)
\end{equation}
resulting in
\begin{equation}
\label{ohohoh}
\frac{\partial}{\partial t} F(z,t) = \lambda(z-1)F(z,t)
\end{equation}
If $P(n,0)=\delta_{n,0}$, it is easy to check that $F(z,0)=1$.
With this initial condition, (\ref{ohohoh}) can be easily integrated, yielding
\begin{equation}
F(z,t) = \exp(\lambda(z-1)t) = \sum_{n=0}^\infty z^n \frac{(\lambda t)^n}{n!} e^{-\lambda t}
\end{equation}
Comparing with (\ref{this}) we have that $P(n,t)$ follows a Poisson distribution
\begin{equation}
P(n,t) = \frac{(\lambda t)^n}{n!} e^{-\lambda t}
\end{equation}
~~\\\hfill(end of example)\\\bigskip

\subsection{Macroscopic level--Fokker-Planck equations}
\subsubsection{Diffusion}

I shall introduce the macroscopic level of description in a slightly more general setting than needed here, before seeing how it relates to Brownian motion: as \textsl{diffusion}. As I mentioned, the macroscopic level consists in assuming that the characteristic magnitudes of the walk (the time and distance between collisions) are much smaller than the scales we are interested in. This means that we can characterize the problem using a continuous \textsl{population density} $\rho(\vct{x},t)$: the number of particles in a unit volume around $\vct{x}$ at time $t$ (Figure~\ref{diffmodel}).
\begin{figure}
\begin{center}
\setlength{\unitlength}{1.5em}
\begin{picture}(6,10)(0,0)
\thicklines
\multiput(0,0)(0,4){2}{\line(1,0){4}}
\multiput(0,0)(4,0){2}{\line(0,1){4}}
\put(4,4){\line(1,1){2}}
\put(4,0){\line(1,1){2}}
\put(0,4){\line(1,1){2}}
\put(2,6){\line(1,0){4}}
\put(6,6){\line(0,-1){4}}
\multiput(0,0)(0.125,0.125){16}{\circle*{0.000001}}
\multiput(2,2)(0.125,0){32}{\circle*{0.000001}}
\multiput(2,2)(0,0.125){32}{\circle*{0.000001}}
\put(3,5){\vector(0,1){2}}
\put(3,5){\vector(1,2){2}}
\put(2.9,7){\makebox(0,0)[r]{$\mathbf{n}$}}
\put(5.1,9.1){\makebox(0,0)[l]{$\mathbf{J}$}}
\put(3,3){\makebox(0,0){$\mathbf{\rho}(x)$}}
\put(0,-0.1){\makebox(0,0)[t]{$x$}}
\put(4,-0.1){\makebox(0,0)[t]{$x+dx$}}
\thinlines
\put(3,5){\line(0,1){5}}
\multiput(5,9)(-0.125,-0.0675){17}{\circle*{0.000001}}
\end{picture}
\end{center}
\caption{\small Schematic view of the model for the derivation of the diffusion equation; $\rho(\vct{x},t)$ is the local density of particles, $\vct{J}(\vct{x},t)$ is the population flow: the number of particles that move in a given direction, at a given point and at a given time.}
\label{diffmodel}
\end{figure}

We can approximate the local density of particles with a continuous field, and take the limit for space and time going to zero. In addition to the density, we define the population flow $\vct{J}(\vct{x},t)$, a vector pointing in the direction of movement and indicating how many particles move per unit time through a surface patch of unit area. Consider a closed volume $V$ bounded by a closed surface $S$: if there is no generation or annihilation of particles, the variation in density is due to the particles that enter and leave through the surface. So, we have:
\begin{equation}
\label{booze}
\frac{\partial}{\partial t}\int_V\!\!\!\rho(\vct{x},t)\,dV = - \oint_S \vct{J}(\vct{x},t)\cdot\vct{n}\,dS
\end{equation}
where $\vct{n}$ is the outward normal to the surface at $\vct{x}$; the minus sign reflects the fact that a positive outgoing flux depletes the population inside $V$.
By the divergence theorem:
\begin{equation}
\oint_S \vct{J}(\vct{x},t)\cdot\vct{n}\,dS = \int_V\!\!\!\nabla\cdot\vct{J}\,dV
\end{equation}
Applying this theorem to (\ref{booze}) we have
\begin{equation}
\int_V \Bigl[ \frac{\partial \rho(\vct{x},t)}{\partial t} + \nabla\cdot\vct{J}(\vct{x},t) \Bigr]\,dV = 0
\end{equation}
Since the volume $V$ is arbitrary, we get the continuity equation
\begin{equation}
\label{fock}
\frac{\partial \rho(\vct{x},t)}{\partial t} + \nabla\cdot\vct{J}(\vct{x},t) = 0
\end{equation}
In order to get a solvable equation in $\rho$, we need to determine how $\vct{J}$ emerges as a consequence of variations of the population density, that is, how $\vct{J}$ relates to $\rho$. Such an expression is called a \textsl{constitutive equation}. One common constitutive equation, known as \textsl{Fick's law}, assumes that the flow is proportional to the local population gradient, that is
\begin{equation}
\label{fick}
\vct{J}(\vct{x},t) = - D \nabla\rho(\vct{x},t)
\end{equation}
The minus sign takes into account the fact that the flow goes from regions of high density to regions of low density. Introducing (\ref{fick}) into (\ref{fock}) one gets the diffusion equation
\begin{equation}
\label{fuck}
\frac{\partial\rho}{\partial t} = \nabla \cdot (D\nabla\rho)
\end{equation}
If $D$ is a constant independent of $\vct{x}$ then (\ref{fuck}) turns into
\begin{equation}
\label{diff}
\frac{\partial\rho}{\partial t} = D \nabla^2\rho
\end{equation}
and, in the one-dimensional case,
\begin{equation}
\label{diffone}
\frac{\partial\rho}{\partial t} = D \frac{\partial^2 \rho}{\partial x^2}
\end{equation}
This \textsl{diffusion equation} has, in principle, nothing to do with Brownian motion or random walks: it has been derived considering a completely different problem, namely the diffusion of a fluid into space under the action of the gradient of its density. Yet, surprisingly, it turns out that this equation does indeed describe Brownian motion. In particular, it describes the evolution of the probability of finding a Brownian walker in $x$ at time $t$.

\subsubsection{Fokker-Planck equation}

The \textsl{Fokker-Planck equation} is a partial differential equation in time and space that describes Brownian motion at a macroscopic level. This makes it a macroscopic equation, since the use of differential operators entails that we are considering times and distances much greater than the time and space between changes in direction of a particle. In this section we present the Einstein derivation of the Fokker-Planck equations considering, for the sake of simplicity, the one-dimensional case \cite{einstein:05}.

The motion of a particle undergoing Brownian motion can be interpreted as a series of jumps of arbitrary length $z$. Let the jump lengths be distributed according to a PDF $\phi(z)$, and let them be i.i.d. The density of individuals at position $x$ at time $t+\tau$ is given by those individuals that were at position $x-z$ at time $t$ and have jumped to $x$ after waiting a time $\tau$. Since $z$ is arbitrary, we integrate over all possible jump lengths, obtaining a form of non-Markovian Chapman-Kolmogorov equation
\begin{equation}
\label{albert}
\rho(x,t+\tau) = \int_{-\infty}^\infty\!\!\!\!\!\rho(x-z,t)\phi(z)\,dz
\end{equation}
Note that this equation is continuous in space and discrete in time.
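Before taking the macroscopic limit, it is instructive to iterate (\ref{albert}) numerically. The sketch below (in Python; the uniform kernel and all parameter values are arbitrary illustrative choices, not taken from the text) propagates an ensemble of walkers through i.i.d.\ jumps drawn from a decidedly non-Gaussian kernel and compares the resulting spread with the diffusive prediction $2Dt$ that the derivation below will produce, a preview of the fact that only the second moment of the kernel survives at the macroscopic level.
\begin{verbatim}
import numpy as np

# Sketch of the jump process behind the Chapman-Kolmogorov equation:
# i.i.d. jumps z drawn from an (arbitrarily chosen) non-Gaussian kernel
# phi(z), here uniform on [-a, a].  One jump every waiting time tau.
a, tau, n_jumps, n_particles = 0.1, 0.01, 500, 100000

rng = np.random.default_rng(1)
z = rng.uniform(-a, a, size=(n_particles, n_jumps))
x = z.sum(axis=1)                   # positions after n_jumps jumps

# Diffusion coefficient from the second moment of the kernel:
# D = <z^2> / (2 tau); for the uniform kernel, <z^2> = a^2 / 3.
D = (a**2 / 3) / (2 * tau)
t = n_jumps * tau
print(x.var(), 2 * D * t)           # both close to 1.67: <X^2> = 2Dt
\end{verbatim}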
Note also that the PDF $\phi(z)$, which in ecology is called the \textsl{dispersion kernel}, is continuous, meaning that we are making an implicit assumption of a large number of individuals/particles.

If we now take the macroscopic limit, that is, if we consider that $\tau$ and $z$ are both small with respect to the scale of interest, then we can use a Taylor expansion in $t$ and $z$:
\begin{equation}
\begin{aligned}
\rho(x,t+\tau) &= \sum_{n=0}^\infty \frac{\tau^n}{n!} \frac{\partial^n \rho}{\partial t^n} \\
\rho(x-z,t) &= \sum_{n=0}^\infty \frac{(-z)^n}{n!} \frac{\partial^n \rho}{\partial x^n}
\end{aligned}
\end{equation}
Inserting into (\ref{albert}), one gets:
\begin{equation}
\rho(x,t) + \tau \frac{\partial \rho}{\partial t} + \cdots = \rho(x,t)\int_{-\infty}^\infty\!\!\!\!\!\phi(z)\,dz - \frac{\partial \rho}{\partial x}\int_{-\infty}^\infty\!\!\!\!\!z\phi(z)\,dz + \frac{\partial^2 \rho}{\partial x^2}\int_{-\infty}^\infty\!\frac{z^2}{2!}\phi(z)\,dz + \cdots
\end{equation}
The kernel $\phi(z)$ is a PDF, therefore $\int_{-\infty}^\infty\phi(z)\,dz=1$. Moreover, if the movements are isotropic, that is, there is no preferential direction of movement, then $\phi(z)=\phi(-z)$, and $\int_{-\infty}^\infty{z^n}\phi(z)\,dz=0$ for $n$ odd. So, we have:
\begin{equation}
\rho(x,t) + \tau \frac{\partial \rho}{\partial t} + O(\tau^2) = \rho(x,t) + \frac{\partial^2 \rho}{\partial x^2}\int_{-\infty}^\infty\!\frac{z^2}{2!}\phi(z)\,dz + O(z^4)
\end{equation}
Or, simplifying the common term and dividing by $\tau$,
\begin{equation}
\frac{\partial \rho}{\partial t} = \frac{\partial^2 \rho}{\partial x^2}\int_{-\infty}^\infty\!\frac{z^2}{2\tau}\phi(z)\,dz + O(\tau) + O(z^4/\tau)
\end{equation}
We now take the macroscopic limit $z,\tau\rightarrow{0}$, but in such a way that $\lim z^2/\tau=C\ne{0}$, that is, keeping $z^2$ and $\tau$ of the same order of magnitude. Then we obtain, as a Fokker-Planck equation, a diffusion equation like (\ref{diff}):
\begin{equation}
\label{einfock}
\frac{\partial \rho}{\partial t} = D \frac{\partial^2 \rho}{\partial x^2}
\end{equation}
where
\begin{equation}
D = \frac{1}{2\tau} \int_{-\infty}^\infty\!\!\!\!\!z^2\phi(z)\,dz = \frac{\langle{z^2}\rangle}{2\tau}
\end{equation}
Note that (\ref{einfock}) depends only on the second moment of the dispersion kernel $\phi(z)$. In no place have we made the hypothesis that $\phi(z)$ is Gaussian, so very different dispersion kernels with the same second moment will generate the same Fokker-Planck equation. We have lost information with respect to the distribution $\phi(z)$: in particular, \textsl{given any distribution, a Gaussian distribution with the same second moment will generate the same macroscopic behavior}. In this sense, we also notice that the Langevin equation does indeed make the hypothesis that $\xi(t)$ is Gaussian. It would therefore seem that the Einstein derivation is more general than the Langevin equation. In reality, it is not so, and the reason is the Central Limit Theorem%
\footnote{We shall not do the derivation here, but from the Langevin equation one can derive the same Fokker-Planck equation as from Einstein's derivation. So, despite the different hypotheses, the two methods describe the same macroscopic phenomenon.}%
: the hypothesis that $z^2$ and $\tau$ be of the same magnitude entails that $D$ is finite and therefore, by (\ref{einfock}), that $\langle{Z^2}\rangle$ is finite.
We are in the hypotheses of the Central Limit Theorem, so the sum of all the jumps of Einstein's derivation, with distribution $\phi(z)$, will end up being Gaussian regardless of the exact shape of $\phi(z)$.

\subsubsection{Solution of the Diffusion Equation}

One simple way to solve the diffusion equation is through the use of the characteristic function of the distribution $\rho$, viz.\ its Fourier transform. Taking the Fourier transform of (\ref{diffone}) we get the ordinary differential equation
\begin{equation}
\frac{d\tilde{\rho}}{dt}(\omega,t) = -D \omega^2 \tilde{\rho}(\omega,t)
\end{equation}
which has solution
\begin{equation}
\tilde{\rho}(\omega,t) = \tilde{\rho}(\omega,0)\exp\bigl[-D\omega^2t\bigr]
\end{equation}
In the simplest case, at the beginning all the individuals are concentrated at $x_0$, that is, $\rho(x,0)=\delta(x-x_0)$. By the formula for the characteristic function of the Dirac distribution (\ref{deltachar}), we have $\tilde{\rho}(\omega,0)=\exp\bigl[-i\omega{x_0}\bigr]$. That is,
\begin{equation}
\tilde{\rho}(\omega,t) = \exp\bigl[-i\omega{x_0}-D\omega^2t\bigr]
\end{equation}
The inverse Fourier transform gives us
\begin{equation}
\label{pussy}
\rho(x,t) = \frac{1}{2\pi} \int_{-\infty}^\infty\!\!\!e^{i\omega{x}}\tilde{\rho}(\omega,t)\,d\omega = \frac{1}{2\pi} \int_{-\infty}^\infty\!\!\!\exp\bigl[ i\omega(x-x_0)-D\omega^2{t}\bigr]\,d\omega = \frac{1}{\sqrt{4\pi{Dt}}} \exp\Bigl[-\frac{(x-x_0)^2}{4Dt}\Bigr]
\end{equation}
From this solution we can obtain the general solution for an arbitrary initial condition $\rho(x,0)=g(x)$. Writing
\begin{equation}
g(x) = \int_{-\infty}^\infty\!\!\!g(y)\delta(x-y)\,dy
\end{equation}
and applying superposition we have
\begin{equation}
\rho(x,t) = \frac{1}{\sqrt{4\pi{Dt}}} \int_{-\infty}^\infty\!\!\!\!\!g(y)\exp\Bigl[-\frac{(x-y)^2}{4Dt}\Bigr]\,dy
\end{equation}
With reference to the simple solution (\ref{pussy}), Figure~\ref{difsol} shows $\rho(x,t)$ as a function of $x$ for several values of $t$; the sketch below reproduces the same profiles numerically.
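This check (in Python; the grid spacing, time step, and times are illustrative choices matching the figure) integrates (\ref{diffone}) directly with an explicit finite-difference scheme, evolving the analytic profile at $t=0.1$ to $t=2$ and comparing it with the closed-form Gaussian (\ref{pussy}).
\begin{verbatim}
import numpy as np

# Explicit finite-difference check of the Gaussian solution of
# rho_t = D rho_xx, with x0 = 0.  Stability requires dt < dx^2/(2D).
D, dx, dt = 1.0, 0.05, 0.001
x = np.arange(-8, 8, dx)
t0, t1 = 0.1, 2.0                   # evolve the t=0.1 profile to t=2
rho = np.exp(-x**2 / (4 * D * t0)) / np.sqrt(4 * np.pi * D * t0)

for _ in range(int((t1 - t0) / dt)):
    lap = (np.roll(rho, 1) - 2 * rho + np.roll(rho, -1)) / dx**2
    rho = rho + dt * D * lap        # explicit Euler step in time

exact = np.exp(-x**2 / (4 * D * t1)) / np.sqrt(4 * np.pi * D * t1)
print(np.abs(rho - exact).max())    # small: the two profiles agree
\end{verbatim}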
\begin{figure}
\begin{center}
[Plot: $\rho(x,t)$ as a function of $x\in[-8,8]$ for $t=0.1$, $t=0.5$ and $t=2$; three Gaussian profiles of decreasing height and increasing width.]
\end{center}
\vspace{-2em}
\caption{\small The solution of the diffusion $\rho(x,t)$ for $\rho(x,0)=\delta(x)$ as a function of $x$ for several values of $t$. The density is a Gaussian that becomes progressively more spread, indicating that the population covers larger and larger areas.}
\label{difsol}
\end{figure}
The result is typical of diffusion processes in which the population, initially concentrated at $x=0$ ($\rho(x,0)=\delta(x)$), spreads over larger and larger areas.
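The spreading can also be reproduced by direct simulation. The sketch below (diffusion coefficient, time step and ensemble size are illustrative choices) evolves independent one-dimensional Brownian walkers and checks that the empirical variance grows linearly in $t$:
\begin{verbatim}
import numpy as np

# Monte Carlo check that <X^2> = 2*D*t for Brownian motion.
rng = np.random.default_rng(0)
D, dt, n_steps, n_walkers = 0.5, 1e-3, 1000, 10_000

# Independent Gaussian increments, each with variance 2*D*dt.
steps = rng.normal(0.0, np.sqrt(2.0*D*dt), size=(n_steps, n_walkers))
X = np.cumsum(steps, axis=0)                 # trajectories, x(0) = 0
msd = (X**2).mean(axis=1)                    # empirical <X^2>(t)
t = dt*np.arange(1, n_steps + 1)
print(np.polyfit(np.log(t), np.log(msd), 1)[0])   # slope ~ 1
\end{verbatim}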
The \dqt{speed} of this diffusion is determined by the increasing variance $\sigma^2(t)=\langle{X^2}\rangle=2Dt$ which, as predicted by (\ref{lookie}) and (\ref{einfock}), grows linearly with $t$. \subsection{Anomalous Diffusion} The standard diffusion process, which we have considered so far, is characterized by the relation \begin{equation} \label{difflaw} \langle X^2 \rangle \sim t \end{equation} where the notation is shorthand for \begin{equation} \lim_{t\rightarrow\infty} \frac{\langle X^2 \rangle}{t} = C \ne 0 \end{equation} The reason for this boils down to the fact that the diffusion equation has first derivatives in time and second derivatives in space, so that one obtains homologous quantities starting with a constant and integrating once in time and twice in space. To see a different kind of behavior, consider \textsl{ballistic displacements}, that is, the motion of a particle that moves at a constant speed $v$ and never changes direction. For a movement along the $x$ axis, this can be modeled as a stochastic process with PDF $P(x,t)=\delta(x-vt)$ so that \begin{equation} \label{ball} \langle X^2 \rangle = \int_{-\infty}^\infty\!\!\!x^2\delta(x-vt)\,dx = v^2t^2\sim t^2 \end{equation} That is, in this case \begin{equation} \lim_{t\rightarrow\infty} \frac{\langle X^2 \rangle}{t^2} = C \ne 0 \end{equation} We can see ballistic movement as a type of random walk, albeit a not-quite-so-random one, and one that moves away from the origin much faster than Brownian motion. Small wonder: ballistic movement moves purposely in a fixed direction, while Brownian motion is bounced to and fro. Note, however, that this means that ballistic movement will explore around much less than Brownian motion: it will stick to a trajectory and not look around at all. Just like a traveler in a rush: you may go very far, but you miss the view. Processes that don't follow the standard diffusion law (\ref{difflaw}) are called \textsl{anomalous} \cite{havlin:87}. Ballistic movement is our first example of anomalous diffusion (albeit a rather pathological one). The asymptotic relation between $\langle{X^2}\rangle$ and $t$ is normally characterized using the \textsl{Hurst exponent} $H$ \cite{hurst:65}, defined by \begin{equation} \label{hurst} \langle X^2 \rangle \sim t^{2H} \end{equation} Diffusion corresponds to $H=1/2$; if $H<1/2$ we have \textsl{subdiffusion}, while for $1/2<H<1$ we have \textsl{superdiffusion}. The ballistic limit (\ref{ball}) is achieved for $H=1$ (see Figure~\ref{hurstfig}). Note that, because of the central limit theorem, normal diffusion ($H=1/2$) is obtained under a wide family of displacement distributions. If the displacements: \begin{description} \item[i)] are independent, \item[ii)] are identically distributed, and \item[iii)] follow a PDF with finite mean and variance, \end{description} then we can apply the CLT in its standard form, and the total distance covered (that is, the sum of all these displacements) is a Gaussian $\exp(-x^2/2\sigma^2)$, where $\sigma^2$ is proportional to the number of displacements, that is, $\sigma^2\sim{t}$. Then (\ref{difflaw}) follows directly from the equality $\langle{X^2}\rangle=\sigma^2$ valid for a Gaussian. The hypotheses i)--iii) hint at three possible ways in which they can be violated, resulting in three different mechanisms that can generate anomalous diffusion.
\begin{description} \item[i)] the displacements are not independent due to long range correlations: once a particle moves, it will tend to remain in motion (leading to superdiffusion---ballistic motion is an example of this kind of process) or, contrariwise% % \footnote{Neologism courtesy of Lewis Carroll.}% , once it stops it will tend to remain at rest (leading to subdiffusion); \item[ii)] the distribution of the displacements is not identical, either because they become shorter with time (leading to subdiffusion) or because they become longer (leading to superdiffusion); \item[iii)] the displacements are distributed according to a PDF with infinite variance, so that arbitrarily large displacements are relatively likely. \end{description} \begin{figure}[tbhp] \begin{center} \setlength{\unitlength}{2em} \begin{picture}(20,10.5)(0,-0.5) \multiput(0,0)(0,10){2}{\line(1,0){20}} \multiput(0,0)(20,0){2}{\line(0,1){10}} \multiput(0,0)(1,0){20}{\line(0,1){0.2}} \multiput(0,0)(0,1){10}{\line(1,0){0.2}} \thicklines \put(0,0){\line(2,1){20}} \put(0,0){\line(1,1){10}} \put(1,9){\makebox(0,0)[lt]{\parbox{8em}{$H>1$\\superballistic regime}}} \put(5,5.5){\makebox(0,0){\begin{rotate}{45}$H=1$; ballistic limit\end{rotate}}} \put(8,4.2){\makebox(0,0){\begin{rotate}{28}$H=1/2$; normal diffusion\end{rotate}}} \put(11,9){\makebox(0,0)[lt]{\parbox{8em}{$H>1/2$\\superdiffusion}}} \put(15,5){\makebox(0,0)[lt]{\parbox{8em}{$H<1/2$\\subdiffusion}}} \put(10,0.4){\makebox(0,0)[lb]{\parbox{10em}{$H\rightarrow{0}$: confinement}}} \put(10,-0.2){\makebox(0,0)[t]{$\log\,\,{t}$}} \put(-0.3,5){\makebox(0,0)[t]{\begin{rotate}{90}$\log\,\,\langle{X^2}\rangle$\end{rotate}}} \end{picture} \end{center} \caption{\small The Hurst exponent quantifies the asymptotic behavior of diffusive processes. The usual uncorrelated Brownian random walks satisfy the standard form of the central limit theorem, and have $H=1/2$. Subdiffusive processes have $H<1/2$. For vanishing $H$, the process is localized and confined: diffusion has a finite reach. Superdiffusive processes, viz.\ processes with a propagation speed higher than Brownian diffusion, have $H>1/2$. The ballistic limit is reached for $H=1$. Superballistic processes (not considered here) correspond to accelerated particles. L\'evy flights and walks, of special interest here, are superdiffusive processes.} \label{hurstfig} \end{figure} In the following, we shall consider mostly the third case but, before digging into it, I shall give a brief example of how the first two work. \example For the case of long-term correlations, divide the trajectory into intervals of fixed duration $\Delta{t}$. With this division, the correlations between displacements are the same as those between velocities. In this case, we can determine the derivative of $\langle{X^2}\rangle$ as \begin{equation} \label{kubo} \frac{d}{dt}\langle{X^2}\rangle = \frac{d}{dt} \int_0^t\!\!\!\int_0^t \langle{v(s)v(\tau)}\rangle\,d\tau\,ds = 2 \int_0^t\!\!\!\langle{v(t)v(\tau)}\rangle\,d\tau \end{equation} This result is known as Taylor's formula \cite{taylor:21} (also known as the Green-Kubo formula). If $\langle{v(t)v(\tau)}\rangle$ is integrable, then the limit for $t\rightarrow\infty$ of the integral exists, so the right-hand side of (\ref{kubo}) is asymptotically a constant, that is, for $t\rightarrow\infty$, \begin{equation} \frac{d}{dt}\langle{X^2}\rangle \sim C \mbox{~~~or~~~}% \langle{X^2}\rangle\sim t \end{equation} and we find again a diffusive behavior.
If, on the other hand, the correlation decays slowly enough that the integral diverges, then the CLT doesn't hold, and we observe anomalous diffusion. If, for example, $\langle{v(t)v(\tau)}\rangle\sim(t-\tau)^{-\eta}$, with $0<\eta<1$, then $\langle{X^2}\rangle\sim{t}^{2-\eta}$, that is, we have superdiffusion. ~~\\\hfill(end of example)\\\bigskip \example Non-identical displacements occur when displacements become either longer or shorter with time or, equivalently, as the particle gets farther from its initial position. If we take a macroscopic point of view---that is, if we write a diffusion-like Fokker-Planck equation---then we can model this as a time and/or space varying coefficient $D$. This is tantamount to saying that the obstacles to motion become gradually larger or smaller as we get away from the initial position. Consider a space-dependent diffusion coefficient that varies as a power law: $D=D_0x^\theta$. This leads to the diffusion equation \begin{equation} \frac{\partial\rho}{\partial t} = \nabla\cdot(D_0 x^\theta \nabla\rho) \end{equation} A rigorous derivation of the behavior of $\langle{X^2}\rangle$ under this equation can be found in \cite{shaughnessy:85}; here we shall do a simple informal derivation using dimensional analysis. The density $\rho$ is a number of particles per unit of $x$, that is, dimensionally, $[\rho]=[x]^{-1}$ and, consequently, \begin{equation} \left[ \frac{\partial \rho}{\partial t} \right] = [x]^{-1}[T]^{-1} \end{equation} where $T$ is the dimension of time. Similarly $[\nabla\rho]=[x]^{-2}$, $[x^\theta\nabla\rho]=[x]^{\theta-2}$ and $[\nabla\cdot(x^\theta\nabla\rho)]=[x]^{\theta-3}$. This gives us \begin{equation} [x]^{-1}[T]^{-1} = \left[ \frac{\partial \rho}{\partial t} \right] = [\nabla\cdot(x^\theta\nabla\rho)]=[x]^{\theta-3} \end{equation} that is, $[T]^{-1}=[x]^{\theta-2}$, $[T]=[x]^{2-\theta}$, or $[x]=[T]^{1/(2-\theta)}$, which leads to \begin{equation} [x^2] = [x]^2 = [T]^{\frac{2}{2-\theta}} = [X^2] \end{equation} This dimensional equality indicates that, asymptotically, \begin{equation} \langle X^2 \rangle \sim t^{\frac{2}{2-\theta}} \end{equation} leading, again, to anomalous diffusion. ~~\\\hfill(end of example)\\\bigskip The case of divergent moments that I shall consider closely is that of \textsl{L\'evy flights}. If the individual displacements are i.i.d., then we are in the conditions of the generalized Central Limit Theorem: no matter what the individual PDFs are, the sum of a large number of them will converge to a L\'evy stable distribution. So, just like in the finite moment case we could assume that the displacements followed a Gaussian distribution% \footnote{Remember that the Langevin equation, which assumes Gaussian displacements, leads to the same macroscopic result as the Einstein method, which doesn't.}% , we can now assume that they follow a L\'evy distribution which, as seen in (\ref{levypow}), behaves like $x^{-(1+\alpha)}$ for $x\rightarrow\infty$. For $\alpha<2$, $\langle{X^2}\rangle$ diverges due to the \dqt{long tail} of the distribution, which makes arbitrarily large displacements relatively frequent. In order to frame these ideas properly, it is first necessary to study random walks from a slightly more general point of view, that of Continuous Time Random Walks.
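Before formalizing this, the Hurst exponent of (\ref{hurst}) can be estimated directly from simulated walks. The sketch below (step counts and the tail index $\mu=1.5$ are illustrative) contrasts Gaussian steps, which give $H\approx 1/2$, with heavy-tailed steps of the L\'evy-flight type:
\begin{verbatim}
import numpy as np

# Empirical Hurst exponent via <X^2> ~ t^(2H) for two step laws.
rng = np.random.default_rng(1)
n_steps, n_walkers, mu = 4000, 2000, 1.5

def hurst(steps):
    X = np.cumsum(steps, axis=0)             # walk positions
    msd = (X**2).mean(axis=1)                # sample <X^2>(t)
    t = np.arange(1, n_steps + 1)
    return 0.5*np.polyfit(np.log(t), np.log(msd), 1)[0]

gauss = rng.normal(size=(n_steps, n_walkers))
# Symmetric steps with tail ~ z^-(1+mu): infinite variance for mu < 2.
signs = rng.choice([-1.0, 1.0], size=(n_steps, n_walkers))
heavy = signs*rng.pareto(mu, size=(n_steps, n_walkers))
print(hurst(gauss))    # ~ 0.5: normal diffusion
print(hurst(heavy))    # > 0.5: superdiffusive growth
\end{verbatim}
For the heavy-tailed walk $\langle{X^2}\rangle$ diverges in principle, so the fit measures the growth of the sample second moment rather than of a true moment; this point is taken up below for L\'evy flights.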
\subsection{Continuous Time Random Walks} The random walks that we have considered so far were limits of what we can consider a discrete time scenario: we considered that jumps take place at regular time intervals, and we take the limit of $\Delta{t}\rightarrow{0}$, corresponding to a continuum of jumps of length zero (this is enforced by the fact that we require $\langle{x^2}\rangle/\tau$ to stay finite). In a \textsl{Continuous Time Random Walk} (CTRW) we assume that the waiting time between jumps is a random process as well, that is, that the particle will intersperse jumps of random length with pauses of random duration. I shall introduce the analysis of CTRW in two steps: first I shall consider the PDF of the position of the particle after $n$ jumps, without considering \textsl{when} these jumps occurred, then the probability of making $n$ jumps in time $t$. This corresponds to a specific type of CTRW, one in which the jump length is independent of the waiting time. The result can easily be extended to the case in which waiting time and jump length are correlated. \bigskip Let $Z_n$ be the length of the $n$th jump. The position of a particle after $n$ jumps is \begin{equation} X_n = \sum_{k=1}^n Z_k = X_{n-1} + Z_n \end{equation} This equation shows that the walk is a Markov chain. Let $Z_k$ be i.i.d.\ with PDF $\phi(z)$; the function $\phi$ (the dispersal kernel) represents the transition probability of the Markov chain. Adapting the Chapman-Kolmogorov equation (\ref{albert}) to this discrete-time scenario, we obtain an equation for $\rho_n(x)$, the density of individuals after $n$ jumps: \begin{equation} \rho_n(x) = \int_{-\infty}^{\infty}\!\!\!\rho_{n-1}(x-z)\phi(z)\,dz = \rho_{n-1}*\phi \end{equation} where $*$ denotes spatial convolution. If $\rho_0$ is the initial density, then: \begin{equation} \begin{aligned} \rho_1 &= \rho_0 * \phi \\ \rho_2 &= \rho_1 * \phi = \rho_0 * \phi * \phi \\ \vdots \\ \rho_n &= \rho_{n-1} * \phi = \rho_0 * \overbrace{\phi * \cdots * \phi}^{n} \end{aligned} \end{equation} Considering, for the sake of simplicity, the one-dimensional case, we can take the Fourier transform and apply (\ref{pluck}) to obtain \begin{equation} \label{palooza} \tilde{\rho}_n(\omega) = \tilde{\rho}_0(\omega) \tilde{\phi}^n(\omega) \end{equation} Consider now the jump times. Let $\theta_n$ be the waiting time between jump $n-1$ and jump $n$, and $\psi(t)$ its PDF. The time at which the $n$th jump is taken is then \begin{equation} T_n = \sum_{k=1}^n \theta_k \end{equation} Let $\psi^0(t)$ be the probability that no jump has occurred by time $t$, viz. \begin{equation} \psi^0(t) = \int_t^\infty\!\!\!\psi(u)\,du = 1 - \int_0^t\!\!\!\psi(u)\,du \end{equation} Let $P_n(t)$ be the probability of performing $n$ jumps by time $t$. Then, clearly, $P_0(t)=\psi^0(t)$. The probability that there is a jump at a time $u<t$ and then no further jumps until time $t$ is $\psi(u)\psi^0(t-u)$. Integrating over all $u<t$ we have% \footnote{The asterisk denotes here convolution in time, which has different integration limits than convolution in space, since $\psi$ and $\psi^0$ can only take non-negative arguments. Strictly speaking, we should have used a different symbol.
However, since it is usually clear what convolution is being used, I have preferred not to complicate the notation using non-standard symbols.}% \begin{equation} P_1(t) = \int_0^t\!\!\!\psi(u)\psi^0(t-u)\,du = \psi^0*\psi \end{equation} Iterating this, we have \begin{equation} P_n(t) = \psi^0*\overbrace{\psi* \cdots *\psi}^{n} \end{equation} In this case, since we have different limits and a different convolution, one must use the Laplace transform in lieu of the Fourier: \begin{equation} \tilde{\psi}(s)=\int_0^\infty\!\!\!e^{-st}\psi(t)\,dt \end{equation} where $s\in{\mathbb{C}}$.% \footnote{The Laplace transform has properties similar to those of the Fourier transform, but is more general. If $f(t)$ is a function and ${\mathcal{L}}[f]$ is its Laplace transform (which I shall also indicate as $\tilde{f}$), then % % \begin{equation} f(t) = \frac{1}{2\pi{i}} \lim_{T\rightarrow\infty} \int_{\gamma-iT}^{\gamma+iT}\!\!\!\!{\mathcal{L}}[f](s)e^{st}\,ds \end{equation} % where $\gamma$ is a real number that exceeds the real part of all singularities of ${\mathcal{L}}[f]$. Also: % % \begin{equation} \begin{aligned} {\mathcal{L}}\Bigl[ \int_0^t\!\!\!f(u)g(t-u)\,du\Bigr] &= {\mathcal{L}}[f]{\mathcal{L}}[g] \\ {\mathcal{L}}\Bigl[ \int_0^t\!\!\!f(u)\,du\Bigr] &= \frac{1}{s}{\mathcal{L}}[f] \\ {\mathcal{L}}\bigl[ e^{at}f(t)\bigr] &= {\mathcal{L}}[f](s-a) \\ {\mathcal{L}}\bigl[1\bigr] &= \frac{1}{s} \end{aligned} \end{equation} % % }% This leads to \begin{equation} \tilde{P}_n(s) = \tilde{\psi^0}(s)\tilde{\psi}^{n}(s) = \frac{1-\tilde{\psi}(s)}{s}(\tilde{\psi}(s))^{n} \end{equation} \separate Consider now the combination of the two processes. The position of an individual at time $t$ (assume $x(0)=0$) is \begin{equation} x(t)=\sum_{k=1}^{N(t)} z_k \end{equation} where $N(t)$ is the number of jumps taken before time $t$, itself a random variable. We are interested in finding an expression for $\rho(x,t)$, the density of individuals at time $t$. If $n$ jumps have been made by time $t$, then \begin{equation} \rho(x,t | N(t)=n) = \rho_n(x) \end{equation} The value of $\rho(x,t)$ is then given by the value of $\rho_n$ for all possible $n$, weighted by their probability: \begin{equation} \rho(x,t) = \sum_{n=0}^\infty \rho_n(x)P_n(t) \end{equation} that is, taking the Fourier and Laplace transforms: \begin{equation} \label{ooohh} \begin{aligned} \tilde{\rho}(\omega,s) &= \sum_{n=0}^\infty \tilde{\rho}_n(\omega)\tilde{P}_n(s) \\ &= \tilde{\rho}(\omega,0) \frac{1-\tilde{\psi}(s)}{s} \sum_{n=0}^\infty \bigl[\tilde{\phi}(\omega)\tilde{\psi}(s)\bigr]^n \\ &= \tilde{\rho}(\omega,0) \frac{1-\tilde{\psi}(s)}{s} \frac{1}{1-\tilde{\phi}(\omega)\tilde{\psi}(s)} \end{aligned} \end{equation} This is known as the \textsl{Montroll-Weiss} equation. I made here the assumption that the waiting times and the jump lengths are independent, hence the product $\tilde{\phi}(\omega)\tilde{\psi}(s)$. If they are not, then their joint probability would be expressed by a distribution $\tilde{\phi}(\omega,s)$, and (\ref{ooohh}) becomes \begin{equation} \tilde{\rho}(\omega,s) = \tilde{\rho}(\omega,0) \frac{1-\tilde{\psi}(s)}{s} \frac{1}{1-\tilde{\phi}(\omega,s)} \end{equation} For any distributions $\tilde{\phi}(\omega)$ and $\tilde{\psi}(s)$, (\ref{ooohh}) allows us to determine the evolution of the density of individuals by taking the inverse Fourier/Laplace transform. \subsubsection{Finite moments: diffusion} The generality of the CTRW notwithstanding, if we assume that $\phi$ and $\psi$ have finite moments we still revert to the normal diffusive behavior. In order to see this, we first rearrange (\ref{ooohh}) in a more useful form.
From (\ref{ooohh}), we write \begin{equation} \label{auhh} s\tilde{\rho}(\omega,s) - \tilde{\rho}(\omega,0) = \tilde{\rho}(\omega,0) \Bigl[ \frac{1 - \tilde{\psi}(s)}{1 - \tilde{\phi}(\omega)\tilde{\psi}(s)} - 1 \Bigr] \end{equation} Express, from the same equation, \begin{equation} \tilde{\rho}(\omega,0) = \frac{s}{1-\tilde{\psi}(s)} \bigl[1 - \tilde{\phi}(\omega)\tilde{\psi}(s)\bigr]\tilde{\rho}(\omega,s) \end{equation} Replacing in the right-hand side of (\ref{auhh}) and simplifying we get \begin{equation} \label{bebop} s\tilde{\rho}(\omega,s) - \tilde{\rho}(\omega,0) = \frac{s\tilde{\psi}(s)}{1-\tilde{\psi}(s)} \bigl[\tilde{\phi}(\omega) - 1\bigr]\tilde{\rho}(\omega,s) \end{equation} The quantity \begin{equation} M(s) = \frac{s\tilde{\psi}(s)}{1 - \tilde{\psi}(s)} \end{equation} is called the \textsl{memory kernel} of the CTRW. Equation (\ref{bebop}) can in turn be rewritten in a way that separates the spatial and temporal variables: \begin{equation} \label{alula} \frac{1-\tilde{\psi}(s)}{s\tilde{\psi}(s)}\bigl[ s\tilde{\rho}(\omega,s) - \tilde{\rho}(\omega,0)\bigr] = \bigl[\tilde{\phi}(\omega) - 1\bigr]\tilde{\rho}(\omega,s) \end{equation} We now consider the macroscopic limit in space, which entails assuming that the microscopic scale of the process is very small compared to the scale of $x$. This means that we shall consider the limit for $\omega\rightarrow{0}$ in the Fourier space. Similarly, the macroscopic limit in time consists in taking the limit $s\rightarrow{0}$ in the complex plane of the Laplace transform. If $\phi$ is symmetric and has finite moments, then it has an expansion $\tilde{\phi}(\omega)=1-\langle\phi^2\rangle\omega^2/2+O(\omega^4)$, where $\langle\phi^2\rangle$ is the mean squared displacement $\langle{X^2}\rangle$ when $X$ has PDF $\phi$. Similarly, $\tilde{\psi}(s)=1-\langle\psi\rangle{s}+O(s^2)$, where $\langle\psi\rangle$ is the mean waiting time. From this we get \begin{equation} \frac{1-\tilde{\psi}(s)}{s\tilde{\psi}(s)} \sim \frac{\langle\psi\rangle}{1-\langle\psi\rangle{s}} = \langle\psi\rangle + O(s) \end{equation} while \begin{equation} \bigl[\tilde{\phi}(\omega)-1\bigr] = - \frac{\langle\phi^2\rangle}{2}\omega^2 + o(\omega^2) \end{equation} Putting these in (\ref{alula}) we have \begin{equation} \langle\psi\rangle\bigl[s\tilde{\rho}(\omega,s)-\tilde{\rho}(\omega,0)\bigr] = -\frac{\langle\phi^2\rangle}{2}\omega^2 \tilde{\rho}(\omega,s) \end{equation} that is, taking the inverse Fourier and Laplace transforms, \begin{equation} \frac{\partial}{\partial t}\rho(x,t) = \frac{\langle\phi^2\rangle}{2\langle\psi\rangle} \frac{\partial^2}{\partial x^2}\rho(x,t) \end{equation} that is, we are back to a diffusion equation that behaves like (\ref{hurst}), with $H=1/2$. We obtain anomalous diffusion in two ways: we can either make long pauses (viz.\ pauses with a distribution with diverging mean) or we can make long jumps. \subsubsection{Long pauses} Let us assume that $\phi(z)$ has a Gaussian distribution% \footnote{As we have seen, any distribution, as long as it has finite moments, will give the same results, as we are in the hypotheses of the standard Central Limit Theorem.}% , while $\psi(t)$ has a L\'evy distribution with a long tail \begin{equation} \psi(t) \sim A_\alpha \left( \frac{\tau}{t} \right)^{1+\alpha} \end{equation} with $0<\alpha<1$.
We are interested in the long term behavior of the walk, that is, in terms of characteristic functions, in the limit $\omega\rightarrow{0}$, $|s|\rightarrow{0}$, so we can write \begin{equation} \begin{aligned} \tilde{\phi}(\omega) &= \exp\Bigl(- \frac{\omega^2\sigma^2}{2} \Bigr) \sim 1 - \frac{\sigma^2\omega^2}{2} \\ \tilde{\psi}(s) &= \exp\bigl( -\tau^\alpha|s|^\alpha \bigr) \sim 1 - (\tau{s})^\alpha \end{aligned} \end{equation} Introducing these into (\ref{ooohh}), we get \begin{equation} \tilde{\rho}(\omega,s) = \frac{1}{s} \frac{\tilde{\rho}(\omega,0)}{1 + K_\alpha\omega^2s^{-\alpha}} \end{equation} with $K_\alpha = \sigma^2/(2\tau^{\alpha})$. The long term behavior of $\langle{X^2}\rangle$ can be determined using the relation \begin{equation} \langle X^2 \rangle = -\lim_{\omega\rightarrow{0}} \frac{\partial^2 \tilde{\rho}}{\partial \omega^2} \end{equation} which yields $\langle{X^2}\rangle$ in the Laplace domain. For $\rho(x,0)=\delta(x)$, i.e.\ $\tilde{\rho}(\omega,0)=1$, we have \begin{equation} \begin{aligned} \langle X^2 \rangle &= \lim_{\omega\rightarrow{0}} \Bigl[ \frac{2}{s} K_\alpha s^{-\alpha} \bigl(1+K_\alpha\omega^2s^{-\alpha}\bigr)^{-2} - \frac{8}{s}\bigl(1+K_\alpha\omega^2s^{-\alpha}\bigr)^{-3}\bigl(K_\alpha s^{-\alpha}\omega\bigr)^2 \Bigr] \\ &= 2K_\alpha s^{-(\alpha+1)} \end{aligned} \end{equation} which, inverting the Laplace transform, gives \begin{equation} \langle X^2 \rangle = \frac{2K_\alpha}{\Gamma(\alpha+1)}t^{\alpha} \end{equation} Since $\alpha<1$, we are in the presence of subdiffusion ($H=\alpha/2<1/2$), as could be expected given that we have arbitrarily long pauses with relatively high frequency. \subsubsection{Long jumps} We consider now the opposite situation: assume that $\psi(t)$ has a distribution with finite moments (exponential, in this case, since $t>0$) and that $\phi(z)$ has a L\'evy distribution with L\'evy parameter $\mu$: \begin{equation} \begin{aligned} \psi(t) &= \frac{1}{\tau}\exp\bigl(-\frac{t}{\tau}\bigr) \\ \phi(z) &= A_\mu \left( \frac{z_0}{z} \right)^{1+\mu} \end{aligned} \end{equation} Note that in this case we consider $1<\mu<2$ for the sake of simplicity: the results are similar for $0<\mu<1$.
As before, in the limit $\omega\rightarrow{0}$, $|s|\rightarrow{0}$, we can approximate them as \begin{equation} \begin{aligned} \tilde{\psi}(s) &\sim 1 - s\tau \\ \tilde{\phi}(\omega) &\sim 1 - \sigma^\mu\omega^\mu \end{aligned} \end{equation} \begin{figure}[tbhp] \begin{center} \setlength{\unitlength}{1.5em} \begin{picture}(8,16)(0,0) \multiput(0,8)(0,8){2}{\line(1,0){8}} \multiput(0,8)(8,0){2}{\line(0,1){8}} \multiput(0,0)(0.5,0){16}{\line(1,0){0.25}} \multiput(0,0)(8,0){2}{ \multiput(0,0)(0,0.5){16}{\line(0,1){0.25}} } \put(0,0){\line(1,2){8}} \put(9,12){\vector(-1,0){2.8}} \put(9.1,12){\makebox(0,0)[l]{$H=1/2$; diffusion}} \put(-0.1,0){\makebox(0,0)[r]{0}} \put(-0.1,8){\makebox(0,0)[r]{1}} \put(-0.1,16){\makebox(0,0)[r]{2}} \put(-0.3,12){\makebox(0,0)[r]{$\mu$}} \put(0,-0.2){\makebox(0,0)[t]{0}} \put(8,-0.2){\makebox(0,0)[t]{1}} \put(4,-0.2){\makebox(0,0)[t]{$\alpha$}} \put(1,15){\makebox(0,0)[l]{$H<1/2$}} \put(1,14){\makebox(0,0)[l]{subdiffusion}} \put(3,4){\makebox(0,0)[l]{$H>1/2$}} \put(3,3){\makebox(0,0)[l]{superdiffusion}} \end{picture} \end{center} \caption{\small Subdiffusion and superdiffusion for long waits and long jumps as a function of the parameters $\alpha$ and $\mu$ of the relation $\langle{X^2}\rangle\sim{t^{2\alpha/\mu}}$, that is, $H=\alpha/\mu$ (see text).} \label{diffme} \end{figure} Inserting these approximations into (\ref{ooohh}) we have \begin{equation} \tilde{\rho}(\omega,s) = \frac{\tilde{\rho}(\omega,0)}{s + K^\mu\omega^\mu} \end{equation} where $K^\mu=\sigma^\mu/\tau$ or, for $\rho(x,0)=\delta(x)$, \begin{equation} \tilde{\rho}(\omega,s) = \frac{1}{s + K^\mu\omega^\mu} \end{equation} Taking the inverse Laplace transform, we have \begin{equation} \tilde{\rho}(\omega,t) = \exp(-K^\mu \omega^\mu t) \end{equation} that is, we obtain a L\'evy stable distribution, as expected from the generalized Central Limit Theorem. Note that in this case $\langle{X^2}\rangle\rightarrow\infty$, so we can't directly compare this distribution with the standard diffusion. There are however several ways to arrive at a result. The first is to use a truncated L\'evy distribution, which is closer to real applications, as in the physical world one doesn't have arbitrarily long jumps. The second is to extrapolate from fractional moments $\langle{X^q}\rangle$ with $q<\mu$, which can be shown to converge and, in this case, satisfy \begin{equation} \langle{X^q}\rangle \sim t^{q/\mu} \end{equation} which leads to a Hurst exponent $H=1/\mu>1/2$, that is, to superdiffusion% \footnote{An informal way to reach the same conclusion is to note that $K^\mu=\sigma^\mu/\tau$ so that, in order to have $K^\mu$ finite, we must have $\sigma^\mu\sim\tau$, that is, $\sigma^2\sim\tau^{2/\mu}$, leading again to $H=1/\mu$.}% . \subsubsection{Long waits and long jumps} The case in which both jumps and waiting times have L\'evy distributions can be treated similarly, leading to \begin{equation} \tilde{\rho}(\omega,s) = \frac{1}{s} \frac{1}{1 + K_\alpha^\mu \omega^\mu s^{-\alpha}} \end{equation} By analogy with the previous cases, we see that \begin{equation} \langle X^2 \rangle \sim t^{2\alpha/\mu} \end{equation} which entails $H=\alpha/\mu$. If $\mu>2\alpha$, then $H<1/2$, and we have subdiffusion; if $\mu<2\alpha$, we have superdiffusion (see figure~\ref{diffme}).
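These regimes are easy to reproduce numerically. The following sketch (parameter values chosen only for illustration, with Pareto-type waiting times standing in for the L\'evy tail) simulates a CTRW with Gaussian jumps and long pauses and checks the subdiffusive law $\langle{X^2}\rangle\sim t^{\alpha}$:
\begin{verbatim}
import numpy as np

# CTRW: Gaussian jumps, waiting times with tail ~ t^-(1+alpha).
# Sanity check of the subdiffusive law <X^2> ~ t^alpha.
rng = np.random.default_rng(2)
alpha, n_jumps, n_walkers = 0.6, 5000, 2000

waits = rng.pareto(alpha, size=(n_jumps, n_walkers))  # diverging mean
jumps = rng.normal(size=(n_jumps, n_walkers))
T = np.cumsum(waits, axis=0)       # time of the n-th jump
X = np.cumsum(jumps, axis=0)       # position after the n-th jump

for t in np.logspace(1, 4, 7):     # logarithmic grid of times
    n_done = (T <= t).sum(axis=0)  # jumps completed by time t
    x_t = X[np.maximum(n_done - 1, 0), np.arange(n_walkers)]
    x_t = np.where(n_done > 0, x_t, 0.0)  # walkers still in first pause
    print(t, (x_t**2).mean())      # grows roughly like t**alpha
\end{verbatim}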
\section{Introduction} The central mathematical principle underlying AdS/CFT duality is the fact that the group $SO(2,4)$ of Poincar\'e and conformal transformations of physical $3+1$ space-time has an elegant mathematical representation on ${\rm AdS}_5$ space where the fifth dimension has the anti-de Sitter warped metric. The group of conformal transformations $SO(2,4)$ in 3+1 space is isomorphic to the group of isometries of AdS space, $x^\mu \to \lambda x^\mu$, $r \to r/\lambda$, where $r$ represents the coordinate in the fifth dimension. The dynamics at $x^2 \to 0$ in 3+1 space thus matches the behavior of the theory at the boundary $r \to \infty.$ This allows one to map the physics of quantum field theories with conformal symmetry to an equivalent description in which scale transformations have an explicit representation in AdS space. Even though quantum chromodynamics is a broken conformal theory, the AdS/CFT correspondence has led to important insights into the properties of QCD. For example, as shown by Polchinski and Strassler,\cite{Polchinski:2001tt} the AdS/CFT duality, modified to give a mass scale, provides a nonperturbative derivation of the empirically successful dimensional counting rules\cite{Brodsky:1973kr,Matveev:1973ra} for the leading power-law fall-off of the hard exclusive scattering amplitudes of the bound states of the gauge theory. The modified theory generates the hard behavior expected from QCD instead of the soft behavior characteristic of strings. Other important applications include the description of spacelike hadron form factors at large transverse momentum\cite{Polchinski:2001ju} and deep inelastic scattering structure functions at small $x$.\cite{Polchinski:2002jw} The power falloff of hadronic light-front wave functions (LFWF) including states with nonzero orbital angular momentum is also predicted.\cite{Brodsky:2003px} In the original formulation by Maldacena,\cite{Maldacena:1997re} a correspondence was established between a supergravity string theory on a curved background and a conformally invariant $\mathcal{N} = 4$ super Yang-Mills theory in four-dimensional space-time. The higher dimensional theory is $AdS_5 \times S^5$ where $R = ({4 \pi g_s N_C})^{1/4} \alpha'^{1/2}$ is the radius of AdS and the radius of the five-sphere and $\alpha'^{1/2}$ is the string scale. The extra dimensions of the five-dimensional sphere $S^5$ correspond to the $SU(4) \sim SO(6)$ global symmetry which rotates the particles present in the SYM supermultiplet in the adjoint representation of $SU(N_C)$. In our application to QCD, baryon number is represented as a Casimir constant on $S^5.$ The reason why AdS/CFT duality can have at least approximate applicability to physical QCD is based on the fact that the underlying classical QCD Lagrangian with massless quarks is scale-invariant.\cite{Parisi:1972zy} One can thus take conformal symmetry as an initial approximation to QCD, and then systematically correct for its nonzero $\beta$ function and quark masses.\cite{Brodsky:1985ve} This ``conformal template'' approach underlies the Banks-Zaks method\cite{Banks:1981nn} for expansions of QCD expressions near the conformal limit and the BLM method\cite{Brodsky:1982gc} for setting the renormalization scale in perturbative QCD applications. In the BLM method the corrections to a perturbative series from the $\beta$-function are systematically absorbed into the scale of the QCD running coupling.
An important example is the ``Generalized Crewther Relation''\cite{Brodsky:1995tb} which relates the Bjorken and Gross-Llewellyn sum rules at the deep inelastic scale $Q^2$ to the $e^+ e^-$ annihilation cross sections at specific commensurate scales $ s^*(Q^2) \simeq 0.52~ Q^2$. The Crewther relation\cite{Crewther:1972kn} was originally derived in conformal theory; however, after BLM scale setting, it becomes a fundamental test of physical QCD, with no uncertainties from the choice of renormalization scale or scheme. QCD is nearly conformal at large momentum transfers where asymptotic freedom is applicable. Nevertheless, it is remarkable that dimensional scaling for exclusive processes is observed even at relatively low momentum transfer where gluon exchanges involve relatively soft momenta.\cite{deTeramond:2005kp} The observed scaling of hadron scattering amplitudes at moderate momentum transfers can be understood if the QCD coupling has an infrared fixed point.\cite{Brodsky:2004qb} In this sense, QCD resembles a strongly-coupled conformal theory. \section{Hadron Spectra from AdS/CFT} The duality between a gravity theory on $AdS_{d+1}$ space and a conformal gauge theory at its $d$-dimensional boundary requires one to match the partition functions at the $AdS$ boundary, $z = R^2/r \to 0$. The physical string modes $\Phi(x,z) \sim e^{-i P \cdot x} f(r)$ are plane waves along the Poincar\'e coordinates with four-momentum $P^\mu$ and hadronic states of invariant mass $P_\mu P^\mu = \mathcal{M}^2$. For large-$r$ or small-$z$, $f(r) \sim r^{-\Delta}$, where the dimension $\Delta$ of the string mode must match the dimension of the interpolating operator {\small$\mathcal{O}$} which creates a specific hadron out of the vacuum: $\langle P \vert \mathcal{O} \vert 0 \rangle \neq 0$. The physics of color confinement in QCD can be described in the AdS/CFT approach by truncating the AdS space to the domain $r_0 < r < \infty$ where $r_0 = \Lambda_{\rm QCD} R^2$. The cutoff at $r_0$ is dual to the introduction of a mass gap $\Lambda_{\rm QCD}$; it breaks conformal invariance and is responsible for the generation of a spectrum of color-singlet hadronic states. The truncation of the AdS space ensures that the distance between the colored quarks and gluons as they stream into the fifth dimension is limited to $z < z_0 = {1/\Lambda_{\rm QCD}}$. The resulting $3+1$ theory has both color confinement at long distances and conformal behavior at short distances. The latter property allows one to derive dimensional counting rules for form factors and other hard exclusive processes at high momentum transfer. This approach, which can be described as a ``bottom-up'' approach, has been successful in obtaining general properties of the low-lying hadron spectra, chiral symmetry breaking, and hadron couplings in AdS/QCD,\cite{Boschi-Filho:2002ta} in addition to the hard scattering predictions.\cite{Polchinski:2001tt,Polchinski:2002jw,Brodsky:2003px} In this ``classical holographic model'', the quarks and gluons propagate into the truncated AdS interior according to the AdS metric without interactions. In effect, their Wilson lines, which are represented by open strings in the fifth dimension, are rigid. The resulting equations for spin 0, $\frac{1}{2}$, 1 and $\frac{3}{2}$ hadrons on $AdS_5 \times S^5$ lead to color-singlet states with dimension $3, 4$ and $\frac{9}{2}$.
Consequently, only the hadronic states (dimension-$3$) $J^P=0^-,1^-$ pseudoscalar and vector mesons, the (dimension-$\frac{9}{2}$) $J^P=\frac{1}{2}^+, \frac{3}{2}^+$ baryons, and the (dimension-$4$) $J^P= 0^+$ glueball states can be derived in the classical holographic limit.\cite{deTeramond:2005su} This description corresponds to the valence Fock state as represented by the light-front Fock expansion. Hadrons also fluctuate in particle number, in their color representations (such as the hidden-color states\cite{Brodsky:1983vf} of the deuteron), as well as in internal orbital angular momentum. The higher Fock components of the hadrons are manifestations of the quantum fluctuations of QCD; these correspond to the fluctuations of the bulk geometry about the fixed AdS metric. Similarly, the orbital excitations of hadronic states correspond to quantum fluctuations about the AdS metric.\cite{Gubser:2002tv} We thus can consistently identify higher-spin hadrons with the fluctuations around the spin 0, $\frac{1}{2}$, 1 and $\frac{3}{2}$ classical string solutions of the $AdS_5$ sector.\cite{deTeramond:2005su} As a specific example, consider the twist-two (dimension minus spin) glueball interpolating operator $\mathcal{O}_{4 + L}^{\ell_1 \cdots \ell_m} = F D_{\{\ell_1} \dots D_{\ell_m\}} F$ with total internal space-time orbital momentum $L = \sum_{i=1}^m \ell_i$ and conformal dimension $\Delta_L = 4 + L$. We match the large $r$ asymptotic behavior of each string mode to the corresponding conformal dimension of the boundary operators of each hadronic state while maintaining conformal invariance. In the conformal limit, an $L$ quantum, which is identified with a quantum fluctuation about the AdS geometry, corresponds to an effective five-dimensional mass $\mu$ on the bulk side. The allowed values of $\mu$ are uniquely determined by requiring that asymptotically the dimensions become spaced by integers, according to the spectral relation $(\mu R)^2 = \Delta_L(\Delta_L - 4)$.\cite{deTeramond:2005su} The four-dimensional mass spectrum follows from the Dirichlet boundary condition $\Phi(x,z_0) = 0$, $z_0 = 1 / \Lambda_{\rm QCD}$, on the AdS string amplitudes for each wave function with spin $< 2$. The eigen-spectrum is then determined from the zeros of Bessel functions, $\beta_{\alpha,k}$. The predicted spectra\cite{deTeramond:2005su} of mesons and baryons with zero-mass quarks are shown in Figs.~\ref{fig:MesonSpec} and \ref{fig:BaryonSpec}. The only parameter is $\Lambda_{\rm QCD}$: $0.263$ GeV for mesons and $0.22$ GeV for baryons. \begin{figure}[tbh] \centerline{\psfig{file=8694A10.eps,width=12.0cm}} \vspace*{8pt} \caption{Light meson orbital states for $\Lambda_{\rm QCD} = 0.263$ GeV: (a) vector mesons and (b) pseudoscalar mesons.
The dashed line is a linear Regge trajectory with slope 1.16 ${\rm GeV}^2$.} \label{fig:MesonSpec} \end{figure} \begin{figure}[tbh] \centerline{\psfig{file=8694A11.eps,angle=0,width=12.0cm}} \vspace*{8pt} \caption{Light baryon orbital spectrum for $\Lambda_{\rm QCD}$ = 0.22 GeV: (a) nucleons and (b) $\Delta$ states.} \label{fig:BaryonSpec} \end{figure} \section{Dynamics from AdS/CFT} Current matrix elements in AdS/QCD are computed from the overlap of the normalizable modes dual to the incoming and outgoing hadrons, $\Phi_I$ and $\Phi_F$, and the non-normalizable mode $J(x,z)$, dual to the external source \begin{equation} F(Q^2)_{I \to F} \simeq R^{3 + 2\sigma} \int_0^{z_0} \frac{dz}{z^{3 + 2\sigma}}~ \Phi_F(z)~J(Q,z)~\Phi_I(z), \label{eq:FF} \end{equation} where $\sigma_n = \sum_{i=1}^n \sigma_i$ is the spin of the interpolating operator $\mathcal{O}_n$, which creates an $n$-Fock state $\vert n \rangle$ at the AdS boundary. $J(x,z)$ has the value 1 at zero momentum transfer, and its boundary limit is the external current: $A^\mu(x,z) = \epsilon^\mu e^{i Q \cdot x} J(Q,z)$. The solution to the AdS wave equation subject to boundary conditions at $Q = 0$ and $z \to 0$ is\cite{Polchinski:2002jw} $J(Q,z) = z Q K_1(z Q)$. At large enough $Q \sim r/R^2$, the important contribution to (\ref{eq:FF}) is from the region near $z \sim 1/Q$. At small $z$, the $n$-mode $\Phi^{(n)}$ scales as $\Phi^{(n)} \sim z^{\Delta_n}$, and we recover the power law scaling,\cite{Brodsky:1973kr} $F(Q^2) \to \left[1/Q^2\right]^{\tau - 1}$, where the twist $\tau = \Delta_n - \sigma_n$ is equal to the number of partons, $\tau_n = n$. A numerical computation for the pion form factor gives the results shown in Fig.~\ref{fig:FF}, where the resonant structure in the time-like region arising from the AdS cavity modes is apparent. \begin{figure}[htb] \centerline{\psfig{file=8721A06.eps,width=8.0cm}} \vspace*{8pt} \caption{Space-like and time-like structure for the pion form factor in AdS/QCD.} \label{fig:FF} \end{figure} \section{AdS/CFT Predictions for Light-Front Wavefunctions} The AdS/QCD correspondence provides a simple description of hadrons at the amplitude level by mapping string modes to the impact space representation of LFWFs. It is useful to define the partonic variables $ x_i \vec r_{\perp i} = x_i \vec R_\perp + \vec b_{\perp i}$, where $\vec r_{\perp i}$ are the physical position coordinates, $\vec b_{\perp i}$ are frame-independent internal coordinates, $\sum_i \vec b_{\perp i} = 0$, and $\vec R_\perp$ is the hadron transverse center of momentum $\vec R_\perp = \sum_i x_i \vec r_{\perp i}$, $\sum_i x_i = 1$. We find for a two-parton LFWF the Lorentz-invariant form \begin{equation} \widetilde{\psi}_L(x, \vec b_{\perp}) = C ~x(1-x) ~\frac{J_{1+L}\left(\vert\vec b_\perp\vert \sqrt{x(1-x)}~\beta_{1+L,k} \Lambda_{\rm QCD} \right)} {\vert\vec b_\perp\vert \sqrt{x(1-x)}}. \label{eq:LFWFbM} \end{equation} The $ \beta_{1+L,k}$ are the zeroes of the Bessel functions reflecting the Dirichlet boundary condition. The variable $\zeta=\vert\vec b_\perp\vert \sqrt{x(1-x)},$ $0 \leq \zeta \leq \Lambda^{-1}_{QCD}$, represents the invariant separation between quarks. In the case of a two-parton state, this gives a direct relation between $\zeta$ and the holographic coordinate in AdS space: $\zeta = z = R^2/r$. The ground state and first orbital eigenmode are depicted in Fig.~\ref{fig:LFWFb}.
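Since (\ref{eq:LFWFbM}) is fully explicit, it is straightforward to evaluate numerically. The following Python sketch (with the normalization set to $C=1$ and the meson value $\Lambda_{\rm QCD}=0.263$ GeV, both illustrative choices) computes the wave function on a grid using the SciPy Bessel routines:
\begin{verbatim}
import numpy as np
from scipy.special import jv, jn_zeros

Lqcd = 0.263                  # GeV; meson value, for illustration
L, k = 0, 1                   # ground state: L = 0, first mode
beta = jn_zeros(1 + L, k)[-1] # k-th zero of the Bessel function J_{1+L}

def lfwf(x, b):
    """Two-parton LFWF psi_L(x, b_perp) with C = 1; b in GeV^-1."""
    zeta = np.abs(b)*np.sqrt(x*(1.0 - x))   # invariant separation
    return x*(1.0 - x)*jv(1 + L, zeta*beta*Lqcd)/zeta

x = np.linspace(0.01, 0.99, 99)             # momentum fractions
b = np.linspace(0.05, 1.0/Lqcd, 100)        # 0 < zeta <= 1/Lambda_QCD
vals = lfwf(x[:, None], b[None, :])         # wave function on the grid
print(vals.max())
\end{verbatim}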
\begin{figure}[tbh] \centerline{\psfig{file=8726A02.eps,angle=0,width=6.0cm} {\psfig{file=8726A03.eps,angle=0,width=6.0cm}}} \vspace*{8pt} \caption[*]{Prediction for the square of the two-parton bound-state light-front wave function $\widetilde{\psi}_L(x,\vec b_\perp)$ as a function of the constituents' longitudinal momentum fractions $x$ and $1-x$ and the impact space relative coordinate $\vec b_\perp$: (a) $L=0$ and (b) $L=1$.} \label{fig:LFWFb} \end{figure} The holographic model is quite successful in describing the known light hadron spectrum. The only mass scale is $\Lambda_{QCD}$. The model incorporates confinement and conformal symmetry. Only dimension-$3, \frac{9}{2}$ and 4 states $\bar q q$, $q q q$, and $g g$ appear in the duality at the classical level. As we have described, non-zero orbital angular momentum and higher Fock states require the introduction of quantum fluctuations. The model gives a simple description of the structure of hadronic form factors and LFWFs, which can be used as an initial approximation to the actual eigensolutions of the light-front Hamiltonian for QCD. It also explains the suppression of the odderon. The dominance of the quark-interchange mechanism in hard exclusive processes also emerges naturally from the classical duality of the holographic model. \section*{Acknowledgements} Presented by SJB at QCD 2005, 20 June 2005, Beijing, China. This work was supported by the Department of Energy contract DE--AC02--76SF00515.
\section{Motivations} \label{sec:motiv} For a complete, separable metric space $X$, the topology of convergence in distribution is metrizable \cite{DudleyRealanalysisprobability2002} by considering the so-called Kolmogorov-Rubinstein or Wasserstein-1 distance: \begin{equation}\label{eq_core:1} \operatorname{dist}_{{\text{\tiny{\scshape{KR}}}}}(\mu,\nu)=\sup_{F\in \operatorname{Lip}_{1}(X)} \left( \int_{X} F\text{ d} \mu -\int_{X}F\text{ d} \nu \right) \end{equation} where \begin{equation*} \operatorname{Lip}_{1}(X)=\left\{ F\, :\, X\to \mathbf{R}, |F(x)-F(y)|\le \operatorname{dist}_{X}(x,y),\ \forall x,y\in X \right\}. \end{equation*} The formulation \eqref{eq_core:1} is well suited to the evaluation of distances by Stein's method. When $X=\mathbf{R}$, there is no particular difficulty in evaluating the K-R distance when $\mu$ is the Gaussian distribution. When $X=\mathbf{R}^{d}$, it is only recently (see \cite{Fang2018,Gallouet2018,Raic2018} and references therein) that some improvement of the standard Stein's method has been proposed to get the K-R distance to the Gaussian measure on $\mathbf{R}^{d}$. The bottleneck is the estimate of the Lipschitz modulus of the second order derivative of the solution of Stein's equation when $F$ is only assumed to be Lipschitz continuous. Namely, for $f\, :\, \mathbf{R}^{d}\to \mathbf{R}$, for any $t>0$, consider the function \begin{equation*} P_{t}f\, :\, x\in\mathbf{R}^{d}\longmapsto \int_{\mathbf{R}^{d}}f(e^{-t}x+\sqrt{1-e^{-2t}}y)\text{ d} \mu_{d}(y) \end{equation*} where $\mu_{d}$ is the standard Gaussian measure on $\mathbf{R}^{d}$. In dimension $1$, Stein's equation reads \begin{equation*} -xh(x)+h'(x)=f(x)-\int_{\mathbf{R}}f\text{ d} \mu_{1}, \end{equation*} so that \begin{equation}\label{eq_core:16} h(x)=\int_{0}^{\infty }P_{t}f(x)\text{ d} t \end{equation} and the subsequent computations only require an evaluation of the Lipschitz modulus of $h'$. For $f\in L^{1}(\mu_{d})$, it is classical that $P_{t}f$ is infinitely differentiable and that \begin{equation} \label{eq_core:2} (P_{t}f)^{(k)}(x)=\left( \frac{e^{-t}}{\sqrt{1-e^{-2t}}} \right)^{k}\int_{\mathbf{R}^{d}}f(e^{-t}x+\sqrt{1-e^{-2t}}y)H_{k}(y)\text{ d} \mu_{d}(y) \end{equation} where $H_{k}$ is the $k$-th Hermite polynomial. On the other hand, if $f$ is $k$-times differentiable, we have \begin{equation}\label{eq_core:3} (P_{t}f)^{(k)}=e^{-kt}P_{t}(f^{(k)}). \end{equation} According to \eqref{eq_core:2}, we get \begin{equation*} h'(x)=\int_{0}^{\infty } \frac{e^{-t}}{\sqrt{1-e^{-2t}}} \int_{\mathbf{R}}f(e^{-t}x+\sqrt{1-e^{-2t}}y)\,y\text{ d} \mu_{1}(y)\text{ d} t. \end{equation*} It is apparent that the Lipschitz modulus of $h'$ simply depends on the Lipschitz modulus of $f$. However, in higher dimension, Stein's equation becomes \begin{equation}\label{eq_core:17} -x\cdot\nabla h(x)+\Delta h(x)=f(x)-\int_{\mathbf{R}^{d}}f\text{ d} \mu_{d}, \end{equation} whose solution is formally given by \eqref{eq_core:16}. The form of \eqref{eq_core:17} entails that we need to estimate the Lipschitz modulus of $\Delta h$, which requires using \eqref{eq_core:2} for $k=2$. Unfortunately, we have to realize that \begin{equation*} \left( \frac{e^{-t}}{\sqrt{1-e^{-2t}}} \right)^{k} \notin L^{1}([0,+\infty);\text{ d} t).
\end{equation*} Hence, until the very recent papers \cite{Fang2018,Raic2018}, the strategy was to assume that $\nabla f$ is Lipschitz, to apply \eqref{eq_core:3} once to compute the first derivative of $P_{t}f$ and then to apply \eqref{eq_core:2} to this expression: \begin{equation*} \Delta h(x)=\int_{0}^{\infty } \frac{e^{-2t}}{\sqrt{1-e^{-2t}}} \int_{\mathbf{R}^{d}}\nabla f(e^{-t}x+\sqrt{1-e^{-2t}}y).y\text{ d} \mu_{d}(y)\text{ d} t. \end{equation*} This means that instead of computing the supremum in the right-hand side of \eqref{eq_core:1} over Lipschitz functions, it is computed over functions whose first derivative is Lipschitz. This also defines a distance, which does not change the induced topology, but the accuracy of the bound is degraded. In infinite dimension, a new problem arises, which is best explained by going back to the roots of Stein's method in dimension~$1$. Consider that we want to estimate the K-R distance in the standard Central Limit Theorem. Let $(X_{n},\, n\ge 1)$ be a sequence of independent, identically distributed random variables with $\esp{X}=0$ and $\esp{X^{2}}=1$. Let $T_{n}=n^{-1/2}\sum_{j=1}^{n}X_{j}$. The Stein-Dirichlet representation formula \cite{DecreusefondSteinDirichletMalliavinmethod2015} states that \begin{equation} \label{eq:SteinDirichlet} \esp{f(T_{n})}-\int_{\mathbf{R}}f\text{ d} \mu_{1}=\esp{\int_{0}^{\infty} LP_{t}f(T_{n})\text{ d} t} \end{equation} where \begin{equation*} Lf(x)=-xf(x)+f'(x)=L_{1}f(x)+L_{2}f(x), \end{equation*} with obvious notations. Now, \begin{equation*} L_{1}P_{t}f(T_{n})=-T_{n}(P_{t}f)'(T_{n})=-\frac{1}{\sqrt{n}}\sum_{j=1}^{n} X_{j}(P_{t}f)'(T_{n}). \end{equation*} The trick, which amounts to an integration by parts for a Malliavin structure on independent random variables (see \cite{Decreusefond_2018}), is to write \begin{equation*} \esp{X_{j}(P_{t}f)'(T_{n})}= \esp{X_{j}\Bigl( (P_{t}f)'(T_{n})-(P_{t}f)'(T_{n}-X_{j}/\sqrt{n}) \Bigr)} \end{equation*} in view of the independence of the random variables. Then, we use the fundamental theorem of calculus in this expression around the point $T_{n}^{\neg j}=T_{n}-X_{j}/\sqrt{n}$: \begin{multline*} \esp{X_{j}\Bigl( (P_{t}f)'(T_{n})-(P_{t}f)'(T_{n}-X_{j}/\sqrt{n}) \Bigr)}\\=\frac{1}{\sqrt{n}}\int_{0}^{1} \esp{X_{j}^{2} (P_{t}f)''(T_{n}^{\neg j}+r X_{j}/\sqrt{n})}\text{ d} r. \end{multline*} Since $\esp{X_{j}^{2}}=1$ and $X_{j}$ is independent of $T_{n}^{\neg j}$, \begin{equation*} \int_{0}^{1} \esp{X_{j}^{2} (P_{t}f)''(T_{n}^{\neg j})}\text{ d} r=\esp{(P_{t}f)''(T_{n}^{\neg j})}, \end{equation*} so that we get \begin{multline}\label{eq_core:4} \esp{LP_{t}f(T_{n})}\\=-\frac{1}{n}\sum_{j=1}^{n } \int_{0}^{1} \esp{X_{j}^{2} \Bigl((P_{t}f)''(T_{n}^{\neg j}+r X_{j}/\sqrt{n})-(P_{t}f)''(T_{n}^{\neg j}) \Bigr) }\text{ d} r\\ +\frac{1}{n}\sum_{j=1}^{n } \esp{(P_{t}f)''(T_{n}) -(P_{t}f)''(T_{n}^{\neg j})}. \end{multline} This formula confirms that the crux of the matter is now to estimate, uniformly, the Lipschitz modulus of $(P_{t}f)''$. It also shows how we get the order of convergence. We have one occurrence of $n^{-1/2}$ in the definition of $T_{n}$, which appears in the expression of $L_{1}$. The same factor appears a second time when we proceed to the Taylor expansion, and then a third time when we plug~\eqref{eq_core:2} into~\eqref{eq_core:4}. This means that we have a factor $n^{-3/2}$ which is summed $n$ times, hence the rate of convergence, which is known to be $n^{-1/2}$.
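As a purely numerical aside (ours, not part of the argument), the $n^{-1/2}$ rate is easy to observe: on $\mathbf{R}$ the K-R distance coincides with the Wasserstein-1 distance, which can be estimated from samples. The sample sizes and the centered exponential law below are arbitrary choices satisfying $\esp{X}=0$, $\esp{X^{2}}=1$.
\begin{verbatim}
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
M = 100_000                                  # Monte-Carlo sample size
for n in (4, 16, 64):
    X = rng.exponential(size=(M, n)) - 1.0   # centered: E[X]=0, E[X^2]=1
    T_n = X.sum(axis=1) / np.sqrt(n)
    G = rng.standard_normal(M)
    # sqrt(n) * W_1(T_n, N(0,1)) should stabilize, reflecting the n^{-1/2} rate
    print(n, np.sqrt(n) * wasserstein_distance(T_n, G))
\end{verbatim}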
Now, if we are interested in the Donsker theorem, the process whose limit we would like to assess is \begin{equation*} S_{n}(t)=\sum_{j=1}^{n} X_{j} h_{j}^{n}(t) \end{equation*} where \begin{equation*} h_{j}^{n}(t)=\sqrt{n}\int_{0}^{t} \mathbf{1}_{[(j-1)/n,\,j/n)}(s)\text{ d} s. \end{equation*} For reasons that will be explained below, the analog of the second order derivatives will involve \begin{equation}\label{eq_core:5} \< (h_{j}^{n})^{\otimes 2}, \nabla^{(2)}(P_{t}f)(S_{n}^{\neg j}+r X_{j}/\sqrt{n})-\nabla^{(2)}(P_{t}f)(S_{n}^{\neg j}) \>_{I_{1,2}^{\otimes 2}} \end{equation} where $\nabla$ is the Malliavin derivative, $I_{1,2}$ is the Cameron-Martin space \begin{equation*} I_{1,2}=\left\{f, \exists ! \dot f \in L^{2}([0,1],\text{ d} t) \text{ with } f(t)=\int_{0}^{t}\dot f(s)\text{ d} s\right\} \end{equation*} and \begin{equation*} \|f\|_{I_{1,2}}=\|\dot f\|_{L^{2}}. \end{equation*} Recall that in the context of Malliavin calculus, this space is identified with its dual, which means that the dual of $L^{2}$ is then not $L^{2}$ itself. The difficulty is then that we do not have an $n^{-1/2}$ factor in the definition of $S_{n}$, and it is easily seen that $\|h_{j}^{n}\|_{I_{1,2}}=1$, hence no multiplicative factor will pop up in~\eqref{eq_core:5}. In \cite{coutin_convergence_2017}, we bypassed this difficulty by assuming enough regularity of $f$ so that $\nabla^{(2)}P_{t}f$ belongs to the dual of $L^{2}$. Then, in the estimate of terms such as those appearing in \eqref{eq_core:5}, it is the $L^{2}$-norm of $h_{j}^{n}$ which appears, and it turns out that $\|h_{j}^{n}\|_{L^{2}}\le c\,n^{-1/2}$, hence the presence of a factor $n^{-1}$, which saves the proof. The goal of this paper is to weaken the hypothesis on $f$ so as to upper-bound the true K-R distance between the distribution of $S_{n}$ and the distribution of a Brownian motion, that is \begin{equation*} \sup_{f\in \operatorname{Lip}_{1}(X)}\esp{f(S_{n})}-\esp{f(B)}. \end{equation*} The space $X$ is a Banach space we can choose arbitrarily, as long as it can be equipped with the structure of an abstract Wiener space and contains the sample paths of $S_{n}$ and $B$. The main technical result of this article is Theorem~\ref{thm:majoModulusOfContinuity}, which gives a new estimate of the Lipschitz modulus of $\nabla^{(2)}P_{t}f$ for $t>0$. The main idea is to introduce a hierarchy of approximations. There is a first scale induced by the time discretization coming from the definition of $S_{n}$. Then, we consider a coarser discretization onto which we project our approximations in order to benefit from the averaging effect of the ordinary CLT. It turns out that the optimal ratio is obtained when the mesh of the coarser subdivision is roughly the cube root of the mesh of the reference partition. Moreover, after \cite{coutin_steins_2013} and \cite{coutin_convergence_2017}, we are convinced that it is simpler and as efficient to stick to finite dimension as long as possible. To this end, we consider the affine interpolation of the Brownian motion as an intermediate process. The distance between the Brownian sample-paths and their affine interpolation is well known. This reduces the problem to estimating the distance between $S_{n}$ and the affine interpolation of~$B$, a task which can be handled by Stein's method. It turns out that the bottleneck is in fact the rate of convergence of the Brownian interpolation to the Brownian motion. This paper is organized as follows. In Section~\ref{sec:prelim}, we show how to view fractional Sobolev spaces as Wiener spaces.
In Section~\ref{sec:donsk-theor-ws_eta}, we explain the line of thought we follow. The proofs are given in Section~\ref{sec:proofs}. \section{Preliminaries} \label{sec:prelim} \subsection{Fractional Sobolev spaces} \label{sec:fract-sobol-spac} As in \cite{decreusefond_stochastic_2005,friz_multidimensional_2010}, we consider the fractional Sobolev spaces ${W}_{\eta,p}$ defined for $\eta \in (0,1)$ and $p\ge 1$ as the closure of ${\mathcal C}^1$ functions with respect to the norm \begin{equation*} \|f\|_{\eta,p}^p=\int_0^1 |f(t)|^p \text{ d} t + \iint_{[0,1]^2} \frac{|f(t)-f(s)|^p}{|t-s|^{1+p\eta}}\text{ d} t\text{ d} s. \end{equation*} For $\eta=1$, ${W}_{1,p}$ is the completion of $\mathcal C^1$ for the norm: \begin{equation*} \|f\|_{1,p}^p=\int_0^1 |f(t)|^p \text{ d} t + \int_0^1 |f^\prime(t)|^p \text{ d} t. \end{equation*} They are known to be Banach spaces and to satisfy the Sobolev embeddings \cite{adams_sobolev_2003,feyel_fractional_1999}: \begin{equation*} {W}_{\eta,p}\subset \mbox{Hol}(\eta-1/p) \text{ for } \eta-1/p>0 \end{equation*} and \begin{equation*} {W}_{\eta,p}\subset {W}_{\gamma,q}\text{ for } 1\ge \eta\ge \gamma \text{ and } \eta-1/p\ge \gamma-1/q. \end{equation*} As a consequence, since ${W}_{1,p}$ is separable (see \cite{brezis_analyse_1987}), so is ${W}_{\eta,p}$. We need to compute the ${W}_{\eta,p}$ norm of primitives of step functions. \begin{lemma} \label{lem:normeHDansDual} Let $0\le s_{1} < s_{2}\le 1$ and consider \begin{equation*} h_{s_{1},s_{2}}(t)=\int_{0}^{t} \mathbf{1}_{[s_{1},s_{2}]}(r)\text{ d} r. \end{equation*} There exists $c>0$ such that for any $s_{1},s_{2}$, we have \begin{equation} \label{eq_donsker:3} \|h_{s_{1},s_{2}}\|_{{\eta,p}} \le c\, |s_{2}-s_{1}|^{1/2-\eta}. \end{equation} \end{lemma} \begin{proof} Remark that for any $s,t\in [0,1]$, \begin{equation*} \left| h_{s_{1},s_{2}} (t)-h_{s_{1},s_{2}}(s) \right|\le |t-s|\wedge (s_{2}-s_{1}). \end{equation*} The result then follows from the definition of the ${W}_{\eta,p}$ norm. \end{proof} We denote by $\ws_{0,\infty}$ the space of continuous (hence bounded) functions on $[0,1]$ equipped with the uniform norm. \subsection{Fractional spaces $\ws_{\eta,p}$ as Wiener spaces} \label{sec:gelfand-triplet-1} Let \begin{equation*} \Lambda=\{(\eta,p)\in \mathbf{R}^{+}\times\mathbf{R}^{+}, 0< \eta-1/p<1/2\}\cup\{(0,\infty)\}. \end{equation*} In what follows, we always choose $\eta$ and $p$ in $\Lambda$. Consider $(Z_{n},\, n\ge 1)$ a sequence of independent, standard Gaussian random variables and let $(z_{n},\, n\ge 1)$ be a complete orthonormal basis of $I_{1,2}$. Then, we know from \cite{ito_convergence_1968} that \begin{equation}\label{eq_core:6} \sum_{n=1}^{N} Z_{n}\, z_{n} \xrightarrow{N\to \infty} B:=\sum_{n=1}^{\infty} Z_{n}\, z_{n} \text{ in } W_{\eta,p} \text{ with probability } 1, \end{equation} where $B$ is a Brownian motion. We clearly have the diagram \begin{equation}\label{eq_core:8} W_{\eta,p}^{*}\xrightarrow{{\mathfrak e}_{\eta,p}^{*}} (I_{1,2})^{*}\simeq I_{1,2}\xrightarrow{{\mathfrak e}_{\eta,p}} W_{\eta,p}, \end{equation} where ${\mathfrak e}_{\eta,p}$ is the embedding from $I_{1,2}$ into $W_{\eta,p}$. The space $I_{1,2}$ is dense in $W_{\eta,p}$ since polynomials do belong to $I_{1,2}$. Moreover, Eqn.~\eqref{eq_core:6} and the Parseval identity entail that for any $z\in \ws_{\eta,p}^{*}$, \begin{equation}\label{eq_core:7} \esp{e^{i\< z,B\>_{\ws^{*}_{\eta,p},\ws_{\eta,p}}}}=\exp\left(-\frac12 \|{\mathfrak e}_{\eta,p}^* (z)\|_{I_{1,2}}^{2}\right).
\end{equation} We denote by $\mu_{\eta,p}$ the law of $B$ on $\ws_{\eta,p}$. Then, the diagram \eqref{eq_core:8} and the identity~\eqref{eq_core:7} mean that $(I_{1,2},\ws_{\eta,p},\mu_{\eta,p})$ is {a Wiener space}. \begin{definition}[Wiener integral] The Wiener integral, denoted as $\delta_{\eta,p}$, is the isometric extension of the map \begin{align*} \delta_{\eta,p}\, :\, \emb_{\eta,p}^*({W}_{\eta,p}^*)\subset I_{1,2}&\longrightarrow L^2(\mu_{\eta,p})\\ \emb_{\eta,p}^*(\xi) &\longmapsto \<\xi,\, y\>_{{W}_{\eta,p}^*,{W}_{\eta,p}}. \end{align*} \end{definition} This means that if $h=\lim_{n\to \infty } \emb_{\eta,p}^{*}(\xi_{n})$ in $I_{1,2}$, then \begin{equation*} \delta_{\eta,p} h(y)=\lim_{n\to \infty} \<\xi_{n},\, y\>_{{W}_{\eta,p}^*,{W}_{\eta,p}} \text{ in }L^{2}(\mu_{\eta,p}). \end{equation*} \begin{definition}[Ornstein-Uhlenbeck semi-group] For any Lipschitz function $f$ on $\ws_{\eta,p}$, for any $\tau \ge 0$, \begin{equation*} P_\tau f(x)=\int_{\ws_{\eta,p}} f(e^{-\tau}x+\beta_\tau y)\text{ d}\mu_{\eta,p}(y) \end{equation*} where $\beta_\tau=\sqrt{1-e^{-2\tau}}$. \end{definition} The dominated convergence theorem entails that $P_\tau$ is ergodic: for any $x\in \ws_{\eta,p}$, \begin{equation*} P_\tau f(x)\xrightarrow{\tau\to\infty} \int_{\ws_{\eta,p}} f\text{ d}\mu_{\eta,p}. \end{equation*} Moreover, the invariance by rotation of Gaussian measures implies that \begin{equation*} \int_{\ws_{\eta,p}} P_\tau f(x)\text{ d}\mu_{\eta,p}(x)= \int_{\ws_{\eta,p}}f\text{ d}\mu_{\eta,p}\text{, for any $\tau \ge 0$.} \end{equation*} Otherwise stated, the Gaussian measure on ${W}_{\eta,p}$ is the invariant and stationary measure of the semi-group $P=(P_\tau, \, \tau \ge 0)$. For details on the Malliavin gradient, we refer to \cite{Nualart1995b,Ustunel2010}. \begin{definition} \label{def_donsker_final:2} Let $X$ be a Banach space. A function $f\, :\, \ws_{\eta,p}\to X$ is said to be cylindrical if it is of the form \begin{equation*} f(y)=\sum_{j=1}^k f_j(\delta_{\eta,p} h_1(y),\cdots,\delta_{\eta,p} h_k(y))\, x_j \end{equation*} where for any $j\in \{1,\cdots,k\}$, $f_j$ belongs to the Schwartz space on $\mathbf{R}^k$, $(h_1,\cdots, h_k)$ are elements of $I_{1,2}$ and $(x_1,\cdots,x_k)$ belong to $X$. The set of such functions is denoted by $\mathfrak C(X)$. For $h\in I_{1,2}$, \begin{equation*} \< \nabla f, \, h\>_{ I_{1,2}}=\sum_{j=1}^k\sum_{l=1}^{k} \partial_l f_j(\delta_{\eta,p} h_1(y),\cdots,\delta_{\eta,p} h_k(y))\, \<h_l,\, h\>_{ I_{1,2}}\ x_j, \end{equation*} which is equivalent to saying that \begin{equation*} \nabla f = \sum_{j,l=1}^k \partial_l f_j(\delta_{\eta,p} h_1(y),\cdots,\delta_{\eta,p} h_k(y))\, h_l\otimes\ x_j. \end{equation*} \end{definition} It is proved in \cite[Theorem 4.8]{shih_steins_2011} that \begin{theorem} For $f\in \operatorname{Lip}_{1}(\ws_{\eta,p})$, for any $t>0$, for any $x\in \ws_{\eta,p}$, \begin{equation} \label{eq:dSurDtPtF} \frac{d}{dt}P_{t}f(x)=-\<x,\, \nabla(P_{t}f)(x)\>_{\ws_{\eta,p},\ws_{\eta,p}^{*}}+\sum_{i=1}^{\infty} \<\nabla^{(2)}P_{t}f(x),\, h_{i}\otimes h_{i}\>_{I_{1,2}} \end{equation} where $(h_{n}, \, n\ge 1)$ is a complete orthonormal basis of $I_{1,2}$. \end{theorem} Note that a non-trivial part of this theorem is to prove that the terms are meaningful: that $\nabla P_{t}f$ has values in $\ws_{\eta,p}^{*}$ instead of $I_{1,2}$ and that $\nabla^{(2)}P_{t}f(x)$ is trace-class. Actually, we only need a finite-dimensional version of this identity, for which these difficulties do not appear.
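Before turning to the Donsker theorem, let us note that the bound of Lemma~\ref{lem:normeHDansDual} can be observed on a grid approximation of the ${W}_{\eta,p}$ norm. The following sketch is ours and purely illustrative; the grid size and the pair $(\eta,p)=(0.4,5)\in\Lambda$ are arbitrary choices.
\begin{verbatim}
import numpy as np

eta, p = 0.4, 5.0
t = np.linspace(0.0, 1.0, 801)
dt = t[1] - t[0]

def w_norm(f):
    """Riemann-sum approximation of the W_{eta,p} norm."""
    lp = np.sum(np.abs(f) ** p) * dt
    T, S = np.meshgrid(t, t, indexing="ij")
    D = np.abs(T - S)
    np.fill_diagonal(D, np.inf)              # exclude the diagonal
    K = np.abs(f[:, None] - f[None, :]) ** p / D ** (1 + p * eta)
    return (lp + np.sum(K) * dt * dt) ** (1.0 / p)

for d in (0.4, 0.2, 0.1, 0.05):
    h = np.clip(t - 0.3, 0.0, d)             # primitive of 1_{[0.3, 0.3+d]}
    print(d, w_norm(h) / d ** (0.5 - eta))   # ratio stays bounded
\end{verbatim}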
\section{Donsker's theorem in $\ws_{\eta,p}$} \label{sec:donsk-theor-ws_eta} For $m\ge 1$, let $\mathcal D^{m}=\{i/m,\, i=0,\cdots, m\}$ be the regular subdivision of the interval $[0,1]$. Let \begin{equation*} {\mathcal A}^{m}=\{1,\cdots,d\}\times \{0,\cdots,m-1\} \end{equation*} and for $a=(a_{1},\, a_{2})\in {\mathcal A}^{m}$, \begin{equation*} h_{a}^{m}(t)=\sqrt{m}\,\int_{0}^{t}\mathbf{1}_{[a_{2}/m,(a_{2}+1)/m)}(s)\text{ d} s\ e_{a_{1}}. \end{equation*} Consider \begin{equation*} S^{m}=\sum_{a\in {\mathcal A}^{m}} X_{a}\, h_{a}^{m} \end{equation*} where $(X_{a},\, a\in {\mathcal A}^{m})$ is a family of independent, identically distributed, $\mathbf{R}^{d}$-valued random variables. We denote by $X$ a random variable which has their common distribution. Moreover, we assume that $\esp{X}=0$ and $\esp{\|X\|_{\mathbf{R}^{d}}^{2}}=1$. Remark that $(h_{a}^{m},a\in {\mathcal A}^{m})$ is an orthonormal family in $\mathbf{R}^{d}\otimes I_{1,2}:=I_{1,2}^{d}$. Let \begin{equation*} {\mathcal V}^{m}=\operatorname{span}(h_{a}^{m},\,a\in {\mathcal A}^{m})\subset I_{1,2}^{d} . \end{equation*} For any $m>0$, the map $\pi^{m}$ is the orthogonal projection from $H:=I_{1,2}^{d}$ onto ${\mathcal V}^{m}$. Let $0<N<m$. For $f\in \operatorname{Lip}_{1}(\ws_{\eta,p})$, we write \begin{align} \label{dec-A} \esp{f(S^{m})} - \esp{f(B)} =\sum_{i=1}^3 A_i \end{align} where \begin{align*} A_{1}&=\esp{f(S^{m})}-\esp{f( \pi^{N}(S^{m}))},\\ A_2&= \esp{f\circ \pi^{N}(S^{m})}-\esp{f\circ \pi^N(B^{m})},\\ A_3&=\esp{f\circ \pi^N(B^{m})}-\esp{f(B)}, \end{align*} where $B^{m}$ is the affine interpolation of the Brownian motion: \begin{equation*} B^{m}(t)=\sum_{a\in{\mathcal A}^{m}}\sqrt{m}\ \Bigl(B_{a_{1}}\bigl(\tfrac{a_{2}+1}{m}\bigr)-B_{a_{1}}\bigl(\tfrac{a_{2}}{m}\bigr)\Bigr) \ h_{a}^{m}(t). \end{equation*} The two terms $A_{1}$ and $A_{3}$ are of the same nature: we have to compare two processes which live on the same probability space. Since $f$ is Lipschitz, we can proceed by comparison of their sample-paths. The term $A_{2}$ is different, as the two processes involved live on different probability spaces. It is for this term that Stein's method is used. We know from \cite{friz_multidimensional_2010} that \begin{theorem} For any $(\eta,p)\in \Lambda,$ there exists $c>0$ such that \begin{equation}\label{eq_core:10} \sup_{N} N^{1/2-\eta} \,\esp{\|B^{N}-B\|_{{\eta,p}}^{p}}^{1/p}\le c. \end{equation} \end{theorem} Moreover, we have \begin{theorem}\label{propA1} Let $(\eta,p)\in \Lambda.$ Assume that $X\in L^{p}(\ws;\mathbf{R}^{d},\mu_{\eta,p})$. There exists a constant $c>0$ such that \begin{equation*} \sup_{m,N} N^{\frac{1}{2} -\eta}\, \esp{ \| S^{m} - \pi^N(S^{m}) \|_{{\eta,p}}^p}^{1/p} \le c \,\|X\|_{L^{p}}. \end{equation*} \end{theorem} This upper bound is far from optimal and it is likely that it could be improved to obtain a factor $N^{1-\eta}$. However, in view of \eqref{eq_core:10}, this would bring no improvement to our final result. \begin{theorem}\label{propA2} Let $(\eta,p)\in \Lambda.$ Let $X_a$ belong to $L^p(\ws;\mathbf{R}^{d},\mu_{\eta,p})$ for some $p\ge 3$. Then, there exists $c>0$ such that for any $f \in \operatorname{Lip}_{1}(\ws_{\eta, p})$, \begin{equation}\label{eq_donsker_wiener:proj-bis} \esp{f(\pi^N (S^m)) }-\esp{ f(\pi^N(B^m))} \le c\,\|X\|_{L^{p}}\ \frac{N^{1+\eta}}{\sqrt{m}}\ln( \frac{N^{1+\eta}}{\sqrt{m}})\cdotp \end{equation} \end{theorem} The global upper-bound for \eqref{dec-A} is proportional to \begin{equation*} N^{-1/2+\eta}+N^{1+\eta}m^{-1/2}\ln(N^{1+\eta}m^{-1/2}).
\end{equation*} Viewing $N$ as a function of $m$, this expression is minimal for $N\sim m^{1/3}$. Plugging this choice into the previous expressions yields the main result of this paper: \begin{theorem} \label{thm:main} Assume that $X\in L^{p}(\ws;\mathbf{R}^{d},\mu_{\eta,p})$. Then, there exists a constant $c>0$ such that \begin{equation} \label{eq:main} \sup_{f \in \operatorname{Lip}_{1}(\ws_{\eta, p})} \esp{f(S^{m})} - \esp{f(B)} \le c\, \|X\|_{L^p(\Omega)} \ m^{-\frac{1}{6} +\frac{\eta}{3}}\,\ln (m) . \end{equation} \end{theorem} As an application of the previous considerations, we obtain as a corollary an approximation theorem for the local time of the Brownian motion. The reflected Brownian motion is defined as \begin{align*} R_t = B_t + \sup_{ 0\le s \le t} \max \left(0, -B_s\right) \end{align*} and the reflected linear interpolation of the random walk is \begin{align*} R^m_t = S^m_t + \sup_{ 0\le s \le t} \max \left(0, -S_s^m\right):=S^m_t+L_{0}^{m}(t). \end{align*} The process $L_{0}(t):=\sup_{ 0\le s \le t} \max \left(0, -B_s\right)$ is an expression of the local time of the Brownian motion at~$0$. Note that the map $f \mapsto \left( t \mapsto f(t) + \sup_{ 0\le s \le t} \max \left(0, -f(s)\right)\right)$ is Lipschitz continuous from any $W_{\eta,p}$ into $W_{0,\infty}$. One interest of our new result is that we can then apply the previous theorem in $W_{0,\infty}$ to $L_{0}^{m}$ and $L_{0}$. We get \begin{corollary}\label{cor-theo-princ} Assume that the hypotheses of Theorem~\ref{thm:main} hold. There exists a constant $c>0$ such that \begin{equation*} \sup_{f \in \operatorname{Lip}_{1}(\ws_{0,\infty})} \esp{f(L^{m}_{0})} - \esp{f(L_{0})} \le c \|X\|_{L^3}\, m^{-\frac{1}{6} }\ln (m) . \end{equation*} \end{corollary}
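Corollary~\ref{cor-theo-princ} lends itself to a quick Monte-Carlo illustration (ours, with arbitrary sample sizes): by the reflection principle, $L_{0}(1)$ has the law of $|B(1)|$, so the empirical Wasserstein-1 distance between samples of $L_{0}^{m}(1)$ and of $|B(1)|$ should decay as $m$ grows.
\begin{verbatim}
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(1)
M = 40_000
for m in (16, 64, 256):
    steps = rng.choice([-1.0, 1.0], size=(M, m)) / np.sqrt(m)  # Rademacher
    walk_min = np.minimum.accumulate(np.cumsum(steps, axis=1), axis=1)[:, -1]
    L0_m = np.maximum(0.0, -walk_min)        # L_0^m(1) for the interpolated walk
    L0 = np.abs(rng.standard_normal(M))      # law of L_0(1)
    print(m, wasserstein_distance(L0_m, L0))
\end{verbatim}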
\section{Proofs} \label{sec:proofs} In what follows, $c$ denotes a generic constant which may vary from line to line. We borrow from the current usage in rough path theory the notation \begin{equation}\label{eq_core:22} f_{s,t} = f(t)-f(s). \end{equation} As a preparation for the proof of Theorem~\ref{propA1}, we need the following lemma. \begin{lemma}\label{lem-mart-dis} For all $p\ge 2 $, there exists a constant $c_p$ such that for any sequence of independent, identically distributed random variables $(X_i, {i \in {\mathbf N}})$ with $X\in L^{p}$ and any sequence $(\alpha_i,\, i\in {\mathbf N})$, \begin{equation*} \esp{\left| \sum_{i=1}^n \alpha_i X_i \right|^p } \le c_p \bigl|\{i\le n, \alpha_i\neq 0\}\,\bigr|^{p/2} (\sum_{i \le n}|\alpha_i|^p) \ {\mathbb E}(|X|^p), \end{equation*} where $|A|$ is the cardinality of the set $A$. \end{lemma} \begin{proof} The Burkholder-Davis-Gundy inequality applied to the discrete martingale $(\sum_{i=1}^n \alpha_i X_i, \, n\ge 0)$ yields \begin{equation*} \esp{ \left| \sum_{i=1}^n \alpha_i X_i \right|^p } \le c_p \esp{ \left| \sum_{i=1}^n \alpha_i^2 X_i^2 \right|^{p/2}}. \end{equation*} Using Jensen's inequality, we obtain \begin{equation*} \esp{ \left| \sum_{i=1}^n \alpha_i X_i \right|^p} \le c_p \bigl|\{i\le n, \alpha_i\neq 0\}\bigr|^{p/2-1}\ \esp{ \sum_{i=1}^n |\alpha_i|^p |X_i|^p }. \end{equation*} The proof is thus complete. \end{proof} \begin{proof}[Proof of Theorem~\ref{propA1}] Actually, we already proved in \cite{coutin_convergence_2017} that \begin{equation}\label{lem3.1-la} \esp{ \|S^{m}_{s,t}\|^{p}}^{1/p}\le c\, \|X\|_{L^{p}}\ \left(\sqrt{t-s}\wedge m^{-1/2}\right). \end{equation} Assume that $s$ and $t$ belong to the same sub-interval: there exists $l\in \{1,...,N \}$ such that $$\frac{l-1}{N} \le s<t \le \frac{l}{N}\cdotp$$ Then we have (see \eqref{eq_core:22}) \begin{equation*} \pi^N(S^m)_{s,t}= \sqrt{N}\, \left(\sum_{k=1}^m X_k\, (h_k^m, h^N_l)_{\cm}\right) \,(t-s). \end{equation*} Using Lemma \ref{lem-mart-dis}, there exists a constant $c$ such that \begin{equation*} \frac{\| \pi^N(S^m)_{s,t} \|_{L^{p}} }{\sqrt{N}\,|t-s| } \le c \, \|X\|_{L^{p}}\ \bigl|\{k,\,(h_k^m,h_l^N)_{\cm}\neq 0 \}\bigr|^{1/2} \sup_k \left|(h_k^m, h^N_l)_{\cm}\right|. \end{equation*} Note that $|(h_k^m,h_l^N)_{\cm}|\le \sqrt{\frac{N}{m}}$ and that there are at most $\frac{m}{N} +2$ indices $k$ such that $ (h_k^m, h_l^N)_{\cm}$ is non zero. Thus, \begin{equation*} \label{eq:lem3.1-1} \frac{\| \pi^N(S^m)_{s,t} \|_{L^{p}} }{\sqrt{N}\,|t-s|} \le c\, \|X\|_{L^{p}}\ \left(\frac{m}{N} +2\right)^{1/2}\sqrt{\frac{N}{m}} \le c\, \|X\|_{L^{p}}, \end{equation*} since $m\ge N$. Since $|t-s|\le 1/N$, \begin{equation}\label{eq_core:11} \| \pi^N(S^m)_{s,t} \|_{L^{p}} \le c\, \|X\|_{L^{p}}\, \sqrt{|t-s|}. \end{equation} For $0 \le s \le t \le 1$, let $s_+^N:=\min\{\tfrac{l}{N}:\,s \le \tfrac{l}{N}\}$ and $t_-^N:=\max\{\tfrac{l}{N}:\,t \ge \tfrac{l}{N}\}$. We have \begin{multline*} \pi^N(S^m)_{s,t} - S^m_{s,t} = \bigl(\pi^N(S^m)_ {s,s_+^N} - S^m_{s,s_+^N}\bigr) \\ +\bigl(\pi^N(S^m)_{s_+^N,t_-^N } - S^m_{s_+^N,t_-^N } \bigr) + \bigl (\pi^N(S^m)_{t_-^N,t} - S^m_{t_-^N,t}\bigr). \end{multline*} Note that for all $f\in \ws_{\eta,p},$ $\pi^{N}(f)$ is the linear interpolation of $f$ along the subdivision $\mathcal D^{N}$; hence, for $s,t \in \mathcal D^{N}$, $\pi^N(S^m)_{s,t}=S^m_{s,t}$. Thus the middle term vanishes and we obtain \begin{multline}\label{eq_core:14} \esp{ \|\pi^N(S^m)_{s,t} - S^m_{s,t} \|^{p}}\le c\Bigl( \esp{\|\pi^N(S^m)_{s,s_+^N}\|^{p }} + \esp{\|S^m_{s,s_+^N}\|^{p }}\\ + \esp{\|\pi^{N}(S^m)_{t_-^N,t}\|^{p }}+ \esp{\|S^m_{t_-^N,t}\|^{p }} \Bigr). \end{multline} From \eqref{eq_core:11}, we deduce that \begin{equation} \label{eq_core:12} \esp{\|\pi^N(S^m)_{s,s_+^N}\|^{p}}^{1/p}\le c \, \|X\|_{L^{p}}\ \sqrt{s_{+}^{N}-s}\le c \,\|X\|_{L^{p}}\ N^{-1/2}, \end{equation} and the same holds for $ \esp{\|\pi^N(S^m)_{t_-^N,t}\|^{p}}$. We infer from \eqref{lem3.1-la}, \eqref{eq_core:11} and \eqref{eq_core:12} that \begin{equation} \label{eq:continuityModulus} \esp{\|\pi^N(S^m)_{s,t}-S^{m}_{s,t}\|^{p}}^{1/p}\le c \, \|X\|_{L^{p}}\left(\sqrt{t-s}\wedge N^{-1/2}\right). \end{equation} A straightforward computation shows that \begin{equation}\label{eq_core:13} \iint_{[0,1]^2} \frac {[| t-s| \wedge {N^{-1}}]^{p/2}}{|t-s|^{ 1+\eta p}} \text{ d} s\text{ d} t\le c\, N^{-p ( 1/2 -\eta)}. \end{equation} The result follows from \eqref{eq:continuityModulus} and \eqref{eq_core:13}. \end{proof} \subsection{Stein method} \label{stein} We wish to estimate \begin{align*} \esp{ f( \pi^N(S^m) )} - \esp{ f( \pi^N(B^m) )}, \end{align*} using Stein's method. For the sake of simplicity, we set \begin{equation*} f_{N}=f\circ \pi^{N}.
\end{equation*} The Stein-Dirichlet representation formula \cite{DecreusefondSteinDirichletMalliavinmethod2015} states that, for any ${\tau_0}>0$, \begin{multline*}\label{eq_donsker_wiener:2} \esp{f_N (B^m)}-\esp{f_N (S^m)}=\esp{ \int_0^\infty \frac{d}{dt}P_\tau f_N (S^m)\text{ d} \tau }\\ =\esp{P_{\tau_0} f_N (S^m)-f_N (S^m)}+\esp{\int_{\tau_0}^\infty LP_\tau f_N (S^m)\text{ d} \tau}, \end{multline*} where \begin{equation*} LP_\tau f_{N}(S^{m})=-\< S^{m}, \nabla P_\tau f_{N}(S^{m}) \>_{H}+\sum_{a\in {\mathcal A}^{m}}\<\nabla^{(2)} P_\tau f_{N}(S^{m}),\, h_{a}^{m}\otimes h_{a}^{m}\>_{\cm^{\otimes 2}}. \end{equation*} The following is straightforward (see \cite[Lemma 4.1]{coutin_convergence_2017}): \begin{lemma}\label{lem:voisDeZero} For any $(\eta,p)\in \Lambda$, there exists a constant $c>0$ such that for any sequence of independent, centered random vectors $(X_a, \, a\in {\mathcal A}^{m})$ such that $\esp{ \|X\|^p} <\infty$, for any $f\in \operatorname{Lip}_{1}(W_{\eta,p})$, we have \begin{align*} \esp{f(S^m)}- \esp{ P_{\tau_0} f(S^m)} \le c\, \|X\|_{L^{p}}\ \sqrt{1- e^{-\tau_0}}. \end{align*} \end{lemma} We now show that, as usual, the rate of convergence in Stein's method is related to the Lipschitz modulus of the second order derivative of the solution of the Stein's equation. Namely, we have \begin{lemma} \label{lem:reductionToContinuityModulus} For any $f\in \operatorname{Lip}_{1}(\ws_{\eta,p})$, we have \begin{multline*} \esp{ L P_\tau f_N (S^m)}\\ \shoveleft{ =-\ \esp{\sum_{a\in {\mathcal A}^{m}} \<\nabla^{(2)} P_\tau f_N(S^m_{\neg a})-\nabla^{(2)} P_\tau f_N(S^m),\, h^m_a\otimes h^m_a\>_{\cm^{\otimes 2}}}}\\ + \ \esp{\sum_{a\in {\mathcal A}^{m}} \,X_a^2\int_0^1 \ \<\nabla^{(2)} P_\tau f_{N}(S^m_{\neg a}+r X_a h^m_a)-\nabla^{(2)}P_\tau f_N(S^m_{\neg a}),\, h^m_a{}^{\otimes 2}\>_{\cm^{\otimes 2}}\kern-5pt\text{ d} r}, \end{multline*} where $S^m_{\neg a} = S^m- X_a h^m_a$. \end{lemma} \begin{proof}[Proof of Lemma~\ref{lem:reductionToContinuityModulus}] Since the $X_a$'s are independent, \begin{multline*} \esp{ \< \nabla P_\tau f_{N}(S^{m}),\, S^m\>_{\cm}}\\ \begin{aligned} &= \ \esp{\sum_{a\in {\mathcal A}^{m}} X_a \ \left\langle {\nabla P_\tau f_N (S^m),h^m_a}\right \rangle_{\cm}}\\ &=\ \esp{\sum_{a\in {\mathcal A}^{m}} X_a \ \left\langle \nabla P_\tau f_N (S^m)-\nabla P_\tau f_N (S^m_{\neg a}),\, h^m_a \right\rangle_{\cm}}\\ &=\esp{\sum_{a\in {\mathcal A}^{m}} X_a^2\ \<\nabla^{(2)} P_\tau f_N (S^m_{\neg a}),\, h^m_a\otimes h^m_a\>_{\cm^{\otimes 2}}}\\ & + \ \esp{\sum_{a\in {\mathcal A}^{m}} X_a^2\int_0^1 \ \< \nabla^{(2)}P_\tau f_N (S^m_{\neg a}+r\, X_a h^m_a) -\nabla^{(2)}P_\tau f_N (S^m_{\neg a}) , h^m_a{}^{\otimes 2}\>_{\cm^{\otimes 2}}\kern-5pt\text{ d} r}, \end{aligned} \end{multline*} according to the Taylor formula. Since $\esp{X_a^2}=1$, we have \begin{multline*} \esp{\sum_{a\in {\mathcal A}^{m}} X_a^2\ \<\nabla^{(2)} P_\tau f_N (S^m_{\neg a}),\, h^m_a\otimes h^m_a\>_{\cm^{\otimes 2}}}\\= \esp{\sum_{a\in {\mathcal A}^{m}} \<\nabla^{(2)} P_\tau f_N (S^m_{\neg a}),\, h^m_a\otimes h^m_a\>_{\cm^{\otimes 2}}}. \end{multline*} The result follows by difference.
\end{proof} The main difficulty, and hence the main contribution of this paper, is to find an estimate of \begin{equation*} \sup_{v\in {\mathcal V}^{m}} \<\nabla^{(2)} P_\tau f_N (v)-\nabla^{(2)} P_\tau f_N (v+ \varepsilon h_{a}^{m}),\, h^m_a\otimes h^m_a\>_{\cm^{\otimes 2}}, \end{equation*} for any $\varepsilon$. \begin{theorem} \label{thm:majoModulusOfContinuity} There exists a constant $c$ such that for any $\tau >0$, for any $v\in {\mathcal V}^{m}$, for any $f\in \operatorname{Lip}_{1}({W}_{\eta,p})$, \begin{multline} \label{eq:_donsker:2-l} \left| \left\langle\nabla^{(2)}P^{m}_\tau f_N (v+ \varepsilon h^m_a)-\nabla^{(2)}P^{m}_\tau f_N (v),\,h^m_a\otimes h^m_a\right\rangle_{\cm^{\otimes 2}}\right|\\ \le c \, \frac{e^{-5\tau/2}}{\beta_{\tau/2}^2}\, \varepsilon N^{\eta-\frac{1}{2}}\sqrt{\frac{N^3}{m^3}} \cdotp \end{multline} \end{theorem} \begin{proof}[Proof of Theorem~\ref{thm:majoModulusOfContinuity}] We know from \cite{shih_steins_2011,coutin_convergence_2017} that we have the following representation: for any $h\in \cm$, \begin{equation}\label{eq:1} \left\langle\nabla^{(2)}P^{m}_\tau f(v),\,h\otimes h\right\rangle_{\cm^{\otimes 2}}\\=\frac{e^{-3\tau/2}}{\beta_{\tau/2}^2}\, \esp{ f\Bigl(w_{\tau}(v,B^{m},\hat{B}^{m})\Bigr) \delta_{\eta,p} h(B)\delta_{\eta,p} h(\hat{B})} \end{equation} where \begin{equation*} w_{\tau}(v,y,z)=e^{-\tau/2}(e^{-\tau/2}v+\beta_{\tau/2}y)+\beta_{\tau/2}z \end{equation*} and $\hat{B}$ is an independent copy of $B$. Since the map $w_{\tau}$ is linear with respect to its three arguments, \begin{equation*} f_{N}\Bigl(w_{\tau}(v,\,B^{m},\,\hat{B}^{m})\Bigr)=f_{N}\Bigl(w_{\tau}(\pi^{N}v,\,\pi^{N}B^{m},\,\pi^{N}\hat{B}^{m})\Bigr). \end{equation*} Hence, \begin{multline}\label{eq_core:15} \left( \frac{e^{-3\tau/2}}{\beta_{\tau/2}^2} \right)^{-1} \left\langle\nabla^{(2)}P^{m}_\tau f_{N}(v),\,h\otimes h\right\rangle_{\cm^{\otimes 2}}\\= \esp{ f_{N}\Bigl(w_{\tau}(\pi^{N}v,\pi^{N}B^{m},\pi^{N}\hat{B}^{m})\Bigr) \esp{\delta_{\eta,p} h(B)\,|\, \pi^{N}B^{m}}\esp{\delta_{\eta,p} h(\hat{B})\,|\,\pi^{N}\hat{B}^{m}}}. \end{multline} From Lemma \ref{lem-maj-var-esp}, we know that \begin{equation} \label{eq:varianceEspCond} \text{Var}\left(\esp{\delta_{\eta,p} h(\hat{B})\,|\,\pi^{N}\hat{B}^{m}}\right)\le c\ \frac{N}{m} \end{equation} for $m>8\,N$, and the same holds for the other conditional expectation. Use the Cauchy-Schwarz inequality in \eqref{eq_core:15} and take \eqref{eq:varianceEspCond} into account to obtain \begin{multline}\label{eq_core:9} \left( \frac{e^{-3\tau/2}}{\beta_{\tau/2}^2} \right)^{-1} \left| \left\langle\nabla^{(2)}P^{m}_\tau f_N (v+ \varepsilon h^m_a)-\nabla^{(2)}P^{m}_\tau f_N (v),\,h^m_a\otimes h^m_a\right\rangle_{\cm^{\otimes 2}}\right| \\ \le c \, \frac{N}{m}\, \left\|w_{\tau}(\pi^{N}v,\pi^{N}B^{m},\pi^{N}\hat{B}^{m})-w_{\tau}(\pi^{N}v+\varepsilon \pi^{N}h_{a}^{m},\pi^{N}B^{m},\pi^{N}\hat{B}^{m})\right\|_{\ws_{\eta,p}}\\ = ce^{-\tau} \varepsilon \, \frac{N}{m}\, \left\|\pi^{N}h_{a}^{m}\right\|_{\ws_{\eta,p}} \end{multline} since $f_{N}$ belongs to $\operatorname{Lip}_{1}(\ws_{\eta,p})$. Furthermore, \begin{equation*} \pi^N( h_a^m)= \sum_{b \in {\mathcal A}^N} \<h_a^m ,h_b^N\>_{\cm} \,h_b^N. \end{equation*} We already know that \begin{equation*} 0 \le \<h_a^m,h_b^N\>_{\cm}\le \sqrt{\frac{N}{m}} \end{equation*} and that at most two terms $\<h_a^m,h_b^N\>_{\cm}$ are non zero. Moreover, according to Lemma~\ref{lem:normeHDansDual}, \begin{equation*} \|h_b^N\|_{{W}_{\eta,p}}\le c\, N^{\eta-\frac{1}{2}}.
\end{equation*} Thus, \begin{equation}\label{majnormepiNh} \| \pi^N( h_a^m) \|_{\ws_{\eta,p}}\le c \,\sqrt{\frac{N}{m}}\ N^{\eta-\frac{1}{2}}. \end{equation} Plugging estimate \eqref{majnormepiNh} into \eqref{eq_core:9} yields estimate \eqref{eq:_donsker:2-l}. \end{proof} According to \eqref{eq:_donsker:2-l} and Lemma~\ref{lem:reductionToContinuityModulus}, since the cardinality of ${\mathcal A}^{m}$ is $dm$, we obtain the following theorem. \begin{theorem} \label{thm:tauAlInfini} If $X_a$ belongs to $L^p$, for any ${\tau_0}>0$, there exists $c>0$ such that \begin{equation}\label{eq_donsker_wiener:5} \esp{ \int_{\tau_0}^\infty L P_\tau f_{N}(S^m)\text{ d} \tau } \le c\, \|X\|_{L^{p}}\ \frac{N^{1+\eta}}{\sqrt{m}} \, \int_{\tau_0}^\infty \frac{e^{-5\tau/2}}{1-e^{-\tau/2}}\text{ d} \tau. \end{equation} \end{theorem} If we combine Lemma~\ref{lem:voisDeZero} and \eqref{eq_donsker_wiener:5}, we get \begin{multline*} \left| \esp{f_{N}(S^{m})}-\esp{f_{N}(B^{m})} \right| \\ \le c \|X\|_{L^{p}}\left( \sqrt{1-e^{-\tau_{0}}}+\frac{N^{1+\eta}}{\sqrt{m}} \, \int_{\tau_0}^\infty \frac{e^{-5\tau/2}}{1-e^{-\tau/2}}\text{ d} \tau \right). \end{multline*} Optimizing with respect to $\tau_{0}$ yields Theorem~\ref{propA2}. It remains to prove \eqref{eq:varianceEspCond}. For the sake of simplicity, we give the proof for $d=1$. The general situation is similar but with more involved notations. We recall that \begin{equation*} \pi^N(B^m)= \sum_{b=0}^{N-1} G_{b}^{m,N} h_{b}^N, \end{equation*} where \begin{equation}\label{def-z-appendice} G_b^{m,N} =\sum_{ a=0}^{m-1} \<h_{a}^m,h_{b}^N\>_{\cm} \delta_{\eta,p} (h_{a}^m). \end{equation} \begin{lemma} \label{lem:cov} For $m>8N$, the covariance matrix $\Gamma$ of the Gaussian vector $(G_{b}^{m,N},\, b=0,\cdots,N-1)$ is invertible and satisfies \begin{equation}\label{eq_core:18} \|\Gamma^{-1}\|_{\infty }\le 2. \end{equation} \end{lemma} \begin{proof} Since the $h_{a}^{m}$ are orthonormal in $I_{1,2}$, for any $b,c\in\{0,\cdots,N-1\}, $ \begin{equation}\label{eq_core:19} \Gamma_{b,c}=\sum_{a=0}^{m-1} \<h_{a}^m,h_{b}^N\>_{\cm} \<h_{a}^m,h_{c}^N\>_{\cm} . \end{equation} Since a sub-interval of $\mathcal D^{m}$ intersects at most two sub-intervals of $\mathcal D^{N}$, the matrix $\Gamma$ is tridiagonal. Furthermore, we know that \begin{equation}\label{eq_core:20} 0\le \<h_{a}^m,h_{b}^N\>_{\cm} \le \sqrt{\frac{N}{m}}, \end{equation} and for each $b$, there are at least $(\frac{m}{N}-3)$ terms of this kind which are equal to $\sqrt{\frac{N}{m}}$. Hence, \begin{equation*} \Gamma_{b,b}\ge (\frac{m}{N}-3)(\sqrt{\frac{N}{m}})^{2}\ge \frac{3}{4}\cdotp \end{equation*} Let $D$ be the diagonal matrix extracted from $\Gamma$; we have just proved that its entries are at least $3/4$, so that $\|D^{-1}\|_{\infty}\le 4/3$. For $|b-c|=1$, there is at most one term of the sum \eqref{eq_core:19} which yields a non zero scalar product, hence \begin{equation*} |\Gamma_{b,c}|\le \frac{N}{m}\cdotp \end{equation*} Set $S=\Gamma-D$. The matrix $D^{-1}S$ has at most two non zero entries per row and \begin{equation*} \|D^{-1}S\|_{\infty}\le \frac{8}{3}\frac{N}{m}\le \frac{1}{3}, \end{equation*} if $m>8N$. By iteration, we get for any $k\ge 1$, \begin{equation*} \|(D^{-1}S)^{k}\|_{\infty}\le \frac{1}{3^{k}}\cdotp \end{equation*} It follows that the Neumann series converges: \begin{equation*} \sum_{k=0}^{\infty} (-D^{-1}S)^{k}=(\mbox{Id}+D^{-1}S)^{-1}=\Gamma^{-1}D, \end{equation*} so that $\Gamma$ is invertible. Thus, \begin{equation*} \|\Gamma^{-1}\|_{\infty}\le \frac{4}{3}\sum_{k=0}^{\infty} \frac{1}{3^{k}} =2. \end{equation*} The proof is thus complete.
\end{proof} \begin{lemma}\label{lem-maj-var-esp} There exists a constant $c$ which depends only on the dimension~$d$ such that for all $m,N$ with $m >8N$ and for any $a \in {\mathcal A}^m$, \begin{align*} {\text{var}}\left(\esp{ \delta_{\eta,p} (h_a^m) \,|\, \pi^N(B^m) }\right) \le \,c\,\frac{N}{m}\cdotp \end{align*} \end{lemma} \begin{proof} Using the framework of Gaussian vectors, for all $a \in \{0,\cdots,m-1\}$, \begin{equation}\label{def-coeff} \esp{ \delta_{\eta,p} (h_a^m) \,|\, \pi^N(B^m)}= \sum_{b=0}^{N-1} C^{m,N}_{a,b} G_b^{m,N}. \end{equation} For any $c\in \{0,\cdots,N-1\}$, on the one hand, \begin{multline*} \esp{\esp{ \delta_{\eta,p} (h_a^m) \,|\, \pi^N(B^m)}\ G_{c}^{m,N}}=\sum_{b=0}^{N-1} \sum_{k=0}^{m-1} C_{a,b}^{m,N} \, \<h_{k}^{m},\ h_{b}^{N}\>_{\cm}\<h_{k}^{m},\ h_{c}^{N}\>_{\cm}\\ =\sum_{b=0}^{N-1} C_{a,b}^{m,N} \Gamma_{b,c}, \end{multline*} and on the other hand, \begin{equation*} \esp{\esp{ \delta_{\eta,p} (h_a^m) \,|\, \pi^N(B^m)}\ G_{c}^{m,N}}=\esp{ \delta_{\eta,p} (h_a^m) \, G_{c}^{m,N}}=\<h_{a}^{m},\ h_{c}^{N}\>_{\cm}. \end{equation*} This means that \begin{equation*} \left( \<h_{a}^{m},\ h_{c}^{N}\>_{\cm},\, c=0,\cdots,N-1 \right) = \left( C_{a,b}^{m,N},\, b=0,\cdots,N-1 \right)\Gamma. \end{equation*} In view of Lemma~\ref{lem:cov}, this entails that \begin{equation*} \left( C_{a,b}^{m,N},\, b=0,\cdots,N-1 \right)= \left( \<h_{a}^{m},\ h_{c}^{N}\>_{\cm},\, c=0,\cdots,N-1 \right)\Gamma^{-1}. \end{equation*} Once again we invoke \eqref{eq_core:20} and the fact that at most two of the terms $\<h_{a}^{m},\ h_{c}^{N}\>_{\cm}$ are non zero for a fixed $a$, to deduce that \begin{equation}\label{eq_core:21} \sup_{a,b}|C_{a,b}^{m,N}|\le 2 \|\Gamma^{-1}\|_{\infty} \sqrt{\frac{N}{m}}=4 \sqrt{\frac{N}{m}}\cdotp \end{equation} Now, according to the very definition of the conditional expectation, \begin{align*} {\text{var}}\left(\esp{ \delta_{\eta,p} (h_a^m) | \pi^N(B^m) }\right)&=\esp{ \delta_{\eta,p} (h_a^m)\ \esp{ \delta_{\eta,p} (h_a^m) | \pi^N(B^m)} }\\ &=\sum_{b=0}^{N-1} C^{m,N}_{a,b} \<h_a^m,h_{b}^{N}\>_{\cm}. \end{align*} Hence, \begin{equation*} {\text{var}}\left(\esp{ \delta_{\eta,p} (h_a^m) | \pi^N(B^m) }\right) \le 2 \sup_{a,b}|C_{a,b}^{m,N}| \sqrt{\frac{N}{m}}\le 8\, \frac{N}{m}, \end{equation*} according to \eqref{eq_core:21}. The constant $8$ has to be modified when $d>1$. \end{proof}
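The two lemmas above are easy to test numerically. The following sketch (ours; the values of $m$ and $N$ are arbitrary, subject to $m>8N$) builds the matrix $\Gamma$ for $d=1$ from the overlaps of the two subdivisions and checks both $\|\Gamma^{-1}\|_{\infty}\le 2$ and the variance bound $8N/m$.
\begin{verbatim}
import numpy as np

def overlap(a, m, b, N):
    """Length of [a/m,(a+1)/m) intersected with [b/N,(b+1)/N)."""
    lo, hi = max(a / m, b / N), min((a + 1) / m, (b + 1) / N)
    return max(hi - lo, 0.0)

m, N = 205, 20                              # m > 8N
P = np.array([[np.sqrt(m * N) * overlap(a, m, b, N)
               for b in range(N)] for a in range(m)])   # <h_a^m, h_b^N>
Gamma = P.T @ P                             # Eq. (eq_core:19)
Ginv = np.linalg.inv(Gamma)
print(np.abs(Ginv).sum(axis=1).max())       # <= 2 (Lemma lem:cov)
C = P @ Ginv                                # coefficients C^{m,N}_{a,b}
var = np.einsum('ab,ab->a', C, P)           # var of E[delta(h_a^m)|pi^N(B^m)]
print(var.max(), 8 * N / m)                 # <= 8 N/m (Lemma lem-maj-var-esp)
\end{verbatim}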
\section{Introduction} Human-Object Interaction (HOI) detection, which aims to localize and infer relationships between humans and objects in images/videos, $\left \langle human, verb, object \right \rangle$, is an essential step towards deeper scene and action understanding~\cite{chao2018learning, gao2018ican}. In real-world scenarios, long-tailed distributions are common for the data perceived by the human vision system, \eg, actions/verbs and objects~\cite{liu2019large}. The combinatorial nature of HOI further aggravates the long-tailed distribution issue in HOI detection, while humans can efficiently learn to recognize seen and even unseen HOIs from limited samples. An intuitive example of open long-tailed HOI detection is shown in Figure~\ref{fig:open_long_tailed}, in which one can easily recognize the unseen interaction ``ride bear'', even though it may never actually happen. However, existing HOI detection approaches usually focus on either the head \cite{gao2018ican,liao2019ppdm,wang2018low}, the tail~\cite{xu2019learning} or unseen categories~\cite{shen2018scaling,preye2019detecting}, leaving the problem of open long-tailed HOI detection poorly investigated. \begin{figure}[t] \begin{center} \includegraphics[width=0.48\textwidth]{open_long_tailed1.pdf} \end{center} \caption{Open long-tailed HOI detection addresses the problem of imbalanced learning and zero-shot learning in a unified way. We propose to compose new HOIs for open long-tailed HOI detection. Specifically, the blurred HOIs, \eg, ``ride bear", are composed. See more examples in the supplementary materials.} \label{fig:open_long_tailed} \end{figure} Open long-tailed HOI detection falls into the category of the long-tailed zero-shot learning problem, which is usually split into several isolated problems, including long-tailed learning \cite{japkowicz2002class, he2009learning}, few-shot learning \cite{fei2006one, vinyals2016matching}, and zero-shot learning \cite{lampert2009learning}. To address the problem of imbalanced training data, existing methods mainly focus on three strategies: 1) re-sampling \cite{drummond2003c4, han2005borderline, kang2019decoupling}; 2) re-weighted loss functions \cite{cui2019class, cao2019learning, hayat2019gaussian}; and 3) knowledge transfer \cite{wang2017learning, liu2019large, fei2006one, lampert2009learning, schonfeld2019generalized, frome2013devise}. Specifically, re-sampling and re-weighted loss functions are usually designed for the imbalance problem, while knowledge transfer is introduced to relieve the long-tailed \cite{wang2017learning}, few-shot \cite{snell2017prototypical}, and zero-shot problems~\cite{xian2018feature, frome2013devise}. Recently, two popular knowledge transfer methods have received increasing attention from the community: data generation~\cite{wang2017learning, wang2018low, xian2018feature, liu2019large, fei2006one, lampert2009learning, schonfeld2019generalized, keshari2020generalized} (transferring head/base classes to tail/unseen classes) and visual-semantic embedding~\cite{frome2013devise} (transferring from language knowledge). Along the first line, we address the problem of open long-tailed HOI detection from the perspective of HOI generation. Unlike the samples in typical long-tailed zero-shot learning for visual recognition, each HOI sample is composed of a verb and an object, and different HOIs may share the same verb or object (\eg, ``ride bike'' and ``ride horse'').
In cognitive science, humans perceive concepts as compositions of shareable components~\cite{biederman1987recognition, hoffman1983parts} (\eg, verb and object in HOI), which indicates that a human can conceive a new concept through a composition of existing components. Inspired by this, several zero- and few-shot HOI detection approaches have been proposed to enforce the factored primitive (verb and object) representations of the same primitive class to be similar among different HOIs, such as factorized models \cite{shen2018scaling, bansal2020detecting} and factored visual-language models \cite{xu2019learning, preye2019detecting, bansal2020detecting}. However, regularizing factor representations, \ie enforcing the same verb/object representation to be similar among different HOIs, is sub-optimal for HOI detection. Recently, Hou \etal \cite{hou2020visual} propose to compose novel HOI samples by combining decomposed verbs and objects between pair-wise images and within an image. Nevertheless, it still remains a great challenge to compose massive HOI samples in each mini-batch from images due to the limited number of HOIs in each image, especially when the distribution of objects/verbs is also long-tailed. We illustrate the distribution of the number of objects in Figure~\ref{fig:obj_distribution}. \begin{figure}[t] \begin{center} \includegraphics[width=.45\textwidth]{obj_distribution4.pdf} \end{center} \caption{Illustration of the distribution of the number of object boxes in the HICO-DET dataset. The categories are sorted by the number of instances.} \label{fig:obj_distribution} \end{figure} The long-tailed distribution of objects/verbs makes it difficult to compose new HOIs from each mini-batch, significantly degrading the performance of compositional learning-based methods for rare and zero-shot HOI detection~\cite{hou2020visual}. Inspired by the recent success of visual object representation generation~\cite{xian2018feature, hariharan2017low, wang2018low}, we thus apply fabricated object representations, instead of fabricated verb representations, to compose more balanced HOIs. We refer to the proposed compositional learning framework with fabricated object representations as Fabricated Compositional Learning or FCL. Specifically, we first extract verb representations from input images, and then design a simple yet efficient object fabricator to generate object representations. Next, the generated visual object features are further combined with the verb features to compose new HOI samples. With the proposed object fabricator, we are able to generate balanced objects for each verb within the mini-batch of training data, and thus compose massive balanced HOI training samples. The main contributions of this paper can be summarized as follows: 1) we propose to compose HOI samples for open long-tailed HOI detection; 2) we design an object fabricator to generate objects for HOI composition; and 3) we significantly outperform recent state-of-the-art methods on the HICO-DET dataset among rare and unseen categories. \begin{figure*} \centering \includegraphics[width=0.90\textwidth]{structure3.pdf} \caption{An overview of the proposed multi-branch fabricated compositional learning framework for HOI detection. We first detect humans and objects with Faster-RCNN \cite{ren2015faster} from the image. Next, with ROI-Pooling and residual CNN blocks, we extract human features, verb features and object features. Meanwhile, an object identity embedding, the verb feature and noise are fed into the Fabricator to generate a fake object feature.
Then, these features are fed into the following branches: an individual spatial HOI branch, an HOI branch and a fabricated compositional HOI branch. Finally, HOI representations from the HOI branch and the fabricated branch are optimized by a shared FC-Classifier, while HOI representations from the spatial branch are classified by an individual FC-Classifier. In the fabricated compositional HOI branch, verb features are combined with fabricated objects to construct fabricated HOIs.} \label{fig:pipeline} \end{figure*} \section{Related Works} {\bf HOI Detection}. HOI detection is essential for deeper scene and action understanding~\cite{chao2018learning}. Recent HOI detection approaches usually focus on representation learning \cite{gao2018ican, zhou2019relation, ulutan2020vsgnet, wang2019deep, wan2019pose}, zero/few-shot generalization \cite{shen2018scaling, xu2019learning, preye2019detecting, bansal2020detecting, hou2020visual}, and one-stage HOI detection \cite{liao2019ppdm, wang2020learning}. Specifically, existing methods improve HOI representation learning by exploring the relationships among different features \cite{qi2018learning, zhou2019relation, ulutan2020vsgnet}, including pose information \cite{li2018transferable, wan2019pose, li2020detailed}, context \cite{gao2018ican, wang2019deep}, and human parts \cite{zhou2019relation}; generalization methods for HOI detection mainly include visual-language models \cite{preye2019detecting, xu2019learning}, factorized models \cite{shen2018scaling, gupta2019no, ulutan2020vsgnet, bansal2020detecting}, and HOI composition \cite{hou2020visual}. Recently, Liao \etal \cite{liao2019ppdm} and Wang \etal \cite{wang2020learning} propose to detect the interaction point for HOI by heatmap-based localization \cite{newell2016stacked}. Wang \etal \cite{wang2020discovering} try to detect HOIs with novel objects by leveraging human visual clues to localize interacting objects. However, existing HOI approaches usually fail to investigate the imbalance issue and zero-shot detection together. Inspired by the factorized model \cite{shen2018scaling}, we propose to compose visual verbs and fabricated objects to address the open long-tailed issue in HOI detection. Furthermore, according to whether the objects are detected with a separate detector or not, existing HOI detection approaches can be divided into two categories: 1) one-stage \cite{shen2018scaling, liao2019ppdm, wang2020learning, gkioxari2018detecting} and 2) two-stage \cite{gao2018ican, li2018transferable, zhou2019relation, ulutan2020vsgnet, wang2019deep, xu2019learning, bansal2020detecting, ulutan2020vsgnet}. Two-stage methods usually achieve better performance, and our method falls into this category. {\bf Compositional Learning}. Irving Biederman illustrates that human representations of concepts are decomposable~\cite{biederman1987recognition}. Meanwhile, Lake \etal \cite{lake2017building} argue that compositionality is one of the key building blocks of a human-like learning system. Tokmakov \etal \cite{tokmakov2019learning} apply compositional deep representations to few-shot learning. An external knowledge graph and graph convolutional networks are used in \cite{kato2018compositional} to compose verb-object pairs for HOI recognition. Recently, Hou \etal \cite{hou2020visual} propose a novel visual compositional learning framework to compose HOIs from image pairs for HOI detection, but fail to address the open and long-tailed issues. Therefore, we further compose verb and fake object representations for HOI detection.
{\bf Generalized Zero/Few-Shot Learning}. Different from typical zero/few-shot learning \cite{fei2006one, lampert2009learning, vinyals2016matching}, generalized zero/few-shot learning \cite{xian2018zero} is a more realistic variant, since the performance is evaluated on both seen and unseen classes~\cite{schonfeld2019generalized, chao2016empirical}. The distribution of HOIs is naturally long-tailed \cite{chao2018learning}, \ie, most classes have only a few training examples. Moreover, open long-tailed HOI detection aims to handle the long-tailed, low-shot and zero-shot issues in a unified way. The long-tailed data distribution \cite{japkowicz2002class, he2009learning, huang2016learning} is one of the challenging problems in visual recognition. Currently, re-sampling \cite{gupta2019lvis, kang2019decoupling}, specific loss functions \cite{lin2017focal, cui2019class, cao2019learning, hayat2019gaussian}, knowledge transfer \cite{wang2017learning, liu2019large}, and data generation~\cite{wang2018low,kumar2018generalized, xian2018feature, alfassy2019laso} are the major strategies for imbalanced learning \cite{japkowicz2002class, he2009learning, huang2016learning}. To make full use of the compositional nature of HOI, we aim to compose HOI samples by visual feature generation to relieve the open long-tailed issue in HOI detection. Recent feature generation methods \cite{kumar2018generalized, xian2018feature} mainly depend on the Variational Autoencoder \cite{kingma2013auto} and the Generative Adversarial Network \cite{goodfellow2014generative}, which usually suffer from the problem of mode collapse \cite{salimans2016improved}. Wang \etal \cite{wang2018low} present a new method for low-shot learning that directly learns to hallucinate examples that are useful for classification. Similar to \cite{wang2018low}, we compose HOI samples with an object fabricator in an end-to-end optimization without using an adversarial loss. \section{Method} In this section, we first describe the multi-branch compositional learning framework for HOI detection. We then introduce the proposed fabricated compositional learning for open long-tailed HOI detection. \subsection{Multi-branch HOI Detection} HOI detection aims to find the interactions between humans and objects in a given image/video. Existing HOI detection methods~\cite{gao2018ican, li2018transferable, bansal2020detecting} usually contain two separate stages: 1) human and object detection; and 2) interaction detection. Specifically, we first use a common object detector, \eg, Faster R-CNN~\cite{ren2015faster}, to localize the positions and extract the features of both humans and objects. According to the union of the human and object bounding boxes, we then extract the verb feature from the feature map of the backbone network via the ROI-Pooling operation. Similar to~\cite{gao2018ican, gupta2019no, li2018transferable}, an additional stream for the spatial pattern, \ie, the spatial stream, is defined as the concatenation of human and object masks, \ie, the value in the human/object bounding box region is 1 and 0 elsewhere. As a result, we obtain several input streams from the first stage, \ie, the human stream, the object stream, the verb stream, and the spatial stream.
The input streams from the first stage are then used to construct different branches in the second stage: 1) \textbf{the spatial HOI branch}, which concatenates the spatial and the human streams to construct spatial HOI features for HOI recognition; 2) \textbf{the HOI branch}, which concatenates the verb and the object streams; and 3) \textbf{the fabricated compositional branch}, which is based on a new stream, the fabricator stream, to generate fake object features for composing new HOIs. Specifically, the fabricated compositional branch generates novel HOIs by combining visual verb features and generated object features. The main multi-branch HOI detection framework is shown in Figure~\ref{fig:pipeline}, and we leave the details of the fabricated compositional branch to the next section. \subsection{Fabricated Compositional Learning} \label{sec:fab} The motivation of compositional learning is to decompose a model/concept into several sub-models/concepts, in which each sub-model/concept focuses on a specific task, and then all responses are coordinated and aggregated to make the final prediction~\cite{biederman1987recognition}. Recent compositional learning methods for HOI detection consider each HOI as the combination of a verb and an object and compose new HOIs from objects and verbs within the mini-batch of training samples~\cite{kato2018compositional, hou2020visual}. However, existing compositional learning methods fail to address the problem of the long-tailed distribution of objects. To address the open long-tailed issue, we propose to generate balanced objects for each decoupled visual verb as follows. Formally, we denote $\mathbf{l}_{v}$ as the label of a verb $x_v$, $\mathbf{l}_{o}$ as the label of an object $x_o$ and $\mathbf{y}$ as the HOI label of $\left \langle x_v, x_o \right \rangle$. Given another verb representation $\hat{x}_v$ (sharing the same label $\mathbf{l}_{v}$ with $x_v$), and another object representation $\hat{x}_o$ (sharing the same label $\mathbf{l}_{o}$ with $x_o$), regardless of the sources of the verb and object representations, an effective composition of verb and object should satisfy \begin{equation} \label{eq:compose} g_{hoi}(\hat{x}_{v}, \hat{x}_{o}) \approx g_{hoi}(x_v, x_o), \end{equation} where $g_{hoi}$ indicates the HOI classification network. By doing this, we can compose new verb-object pairs $\left \langle \hat{x}_v, \hat{x}_o \right \rangle$, which have a semantic type similar to that of the real pair $\left \langle x_v, x_o \right \rangle$, \ie $\mathbf{y}$, to relieve the scarcity of rare and unseen HOI categories. To generate effective verb-object pairs $\left \langle \hat{x}_v, \hat{x}_o \right \rangle$, we regularize the verb representation $\hat{x}_v$ and the object representation $\hat{x}_o$ such that the same verbs/objects have similar feature representations. Similar to previous approaches, such as factored visual-language joint embedding \cite{xu2019learning, preye2019detecting} and factorized models \cite{shen2018scaling, gupta2019no}, when $\hat{x}_v$ is similar to $x_v$ and $\hat{x}_o$ is similar to $x_o$, Equation~\eqref{eq:compose} can be generalized to HOI detection via the compositional branch. We refer to the proposed compositional learning framework with fabricated object representations as Fabricated Compositional Learning or FCL.
We train the proposed method with composited HOI samples $\left \langle \hat{x}_v, \hat{x}_o \right \rangle$ in an end-to-end manner, and the overall loss function is defined as follows: \begin{equation} \label{eq:cl} \mathcal{L} = \lambda_1 \mathcal{L}_{hoi} + \lambda_2 \mathcal{L}_{CL} + \lambda_3 \mathcal{L}_{reg} + \mathcal{L}_{hoi\_sp}, \end{equation} where $\mathcal{L}_{reg}$ aims to regularize verb and object features, $\mathcal{L}_{CL}$ indicates a typical compositional learning loss function for the classification network $g_{hoi}$ with composite HOI samples $\left \langle \hat{x}_v, \hat{x}_o \right \rangle$ as the input, and $\mathcal{L}_{hoi\_sp}$ is the loss for the spatial HOI branch. $\lambda_1, \lambda_2, \lambda_3$ are the hyper-parameters that balance the different loss functions. Specifically, object features extracted from a pre-trained object detector backbone network (\ie Faster-RCNN \cite{ren2015faster}) are usually discriminative. Thus, we only regularize the verb representation. \begin{figure} \centering \includegraphics[width=0.48\textwidth]{fabricating.pdf} \caption{For a given visual verb feature and each index $j$ ($0\leq j < N_o$), we first select the $j$-th object identity embedding. Then, we concatenate the verb feature, the object embedding and Gaussian noise as the input of the fabricator to generate a fake object feature. We can thus fabricate $N_o$ objects for a verb feature. We finally remove infeasible HOIs as described in Section 3.2.2.} \label{fig:fabricating} \end{figure} \subsubsection{Object Generation} An HOI is composed of a verb and an object, in which the verb is usually a very abstract notion compared to the object, making it difficult to directly generate verb features. Recent visual feature generation methods have demonstrated the effectiveness of feature generation for visual object recognition~\cite{wang2018low, xian2018feature}. Therefore, we devise an object fabricator to generate object feature representations for composing novel HOI samples. The overall framework of object generation is shown in Figure~\ref{fig:fabricating}. Specifically, we maintain a pool of object identity embeddings, \ie, $v_{id}$. We provide three kinds of embeddings in the supplementary material. In each HOI, the pose of the object is usually influenced by the human who is interacting with the object \cite{zhang2020phosa}, and the person interacting with the object is firmly related to the verb feature representation. Thus, for each extracted verb and the $j$-th object ($0\leq j < N_o$, where $N_o$ is the number of different objects), we concatenate the $j$-th object identity embedding $v_{id}^j$, the verb feature $x_v$ and a noise vector $\epsilon \sim \mathcal{N}(0,1)$ as the input of the object fabricator, \ie, \begin{equation} \hat{x}_o = f_{obj}(\{v_{id}^j, x_v, \epsilon\}), \end{equation} where $\hat{x}_o$ is the fake object feature and $f_{obj}$ indicates the object fabricator network. Here, the noise $\epsilon$ is used to increase the diversity of the generated objects. We then combine the fake object feature $\hat{x}_o$ and the verb $x_v$ to compose a new HOI sample $\left \langle x_v, \hat{x}_o \right \rangle$. During training, both real HOIs and composite HOIs share the same HOI classification network $g_{hoi}$.
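A minimal PyTorch sketch of the fabricator is given below. The two-layer MLP, the 1024-d hidden layer and the 2048-d features follow the implementation details reported later; the embedding and noise dimensions, and all variable names, are our own illustrative assumptions.
\begin{verbatim}
import torch
import torch.nn as nn

class ObjectFabricator(nn.Module):
    def __init__(self, num_objects=80, emb_dim=64, feat_dim=2048,
                 noise_dim=64, hidden=1024):
        super().__init__()
        # pool of object identity embeddings v_id
        self.obj_embedding = nn.Embedding(num_objects, emb_dim)
        self.mlp = nn.Sequential(            # two-layer MLP fabricator
            nn.Linear(emb_dim + feat_dim + noise_dim, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, feat_dim),
        )
        self.noise_dim = noise_dim

    def forward(self, verb_feat, obj_ids):
        # verb_feat: (B, feat_dim); obj_ids: (B,) indices j of objects
        eps = torch.randn(verb_feat.size(0), self.noise_dim,
                          device=verb_feat.device)   # diversity noise
        v_id = self.obj_embedding(obj_ids)
        x = torch.cat([v_id, verb_feat, eps], dim=1) # {v_id^j, x_v, eps}
        return self.mlp(x)                           # fake object feature

fab = ObjectFabricator()
fake_obj = fab(torch.randn(4, 2048), torch.tensor([3, 7, 7, 21]))
\end{verbatim}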
To avoid frequently checking the pair $(x_{v}, x_{o})$, we use an efficient HOI composition similar to~\cite{hou2020visual}. Specifically, the HOI label space is decoupled into verb and object spaces, \ie, the co-occurrence matrices $\mathbf{A}_v\in R^{N_v\times C}$ and $\mathbf{A}_o\in R^{N_o\times C}$, where $N_v$, $N_o$, and $C$ indicate the number of verbs, objects, and HOI categories, respectively. Given a one-hot HOI label vector $\mathbf{y} \in R^{C}$, we then have the verb label vector \begin{equation} \label{eq:decompose} \mathbf{l}_v = \mathbf{y} \mathbf{A}_v ^\mathsf{T} , \end{equation} where $\mathbf{l}_v \in R^{N_v}$ can be a multi-hot vector with multiple verbs (\eg, $\left \langle \{hold, read\}, book \right \rangle$). Similarly, when combining the verb $\mathbf{l}_v$ with all $N_o$ objects, we have the matrix $\mathbf{\hat{l}}_o \in R^{N_o \times N_o}$ as the labels of all $N_o$ fake objects. Let $\mathbf{\hat{l}}_v \in R^{N_o \times N_v}$ denote the verb labels corresponding to the fake object features; the new interaction labels can then be evaluated as follows, \begin{equation} \label{eq:label} \hat{\mathbf{y}} = (\mathbf{\hat{l}}_o \mathbf{A}_o)~\&~(\mathbf{\hat{l}}_v\mathbf{A}_v), \end{equation} where $\&$ indicates the logical operation ``$\mathbf{and}$''. Finally, the logical operation automatically filters out the infeasible HOIs, since the labels of those infeasible HOIs are all-zero vectors in the label space. \subsection{Optimization} \label{sec:verb_loss} \textbf{Training}. The verb feature contains the pose information of the object, making it difficult to jointly train the network with an object fabricator from scratch. Therefore, we introduce a step-wise training strategy for long-tailed HOI detection. First, we pre-train the network with $\mathcal{L}_{hoi}$, $\mathcal{L}_{hoi\_sp}$, and $\mathcal{L}_{reg}$, without the fabricator branch. Then, we fix the pre-trained model and train the randomly initialized object fabricator via the loss for the fabricator branch, $\mathcal{L}_{CL}$. Lastly, we jointly fine-tune all branches with $\mathcal{L}$ in an end-to-end manner. To avoid biasing the model toward seen data in the first step, we optimize the network in a single step for zero-shot HOI detection (see the analysis in Section~\ref{sec:ab}). \textbf{Inference}. The fabricated compositional branch is only used during training, \ie, we remove it at inference. Similar to previous multi-branch methods~\cite{gao2018ican, li2018transferable, hou2020visual}, for each human-object bounding box pair ($b_h$, $b_o$), the final HOI prediction $S^c_{h,o}$ for each category $c \in \{1, \dots, C\}$ can be evaluated as follows, \begin{equation} \label{eq:final_score} S^c_{h,o} = s_h\cdot s_o \cdot S^c_{sp} \cdot S^c_{hoi}, \end{equation} where $s_h$ and $s_o$ indicate the object detection scores for the human and the object, respectively, and $S^c_{sp}$ and $S^c_{hoi}$ are the scores from the spatial branch and the HOI branch, respectively.
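To make the label bookkeeping of Equations~\eqref{eq:decompose} and \eqref{eq:label} concrete before turning to experiments, here is a toy numpy sketch; the three-verb, two-object vocabulary and its co-occurrence matrices are made up purely for illustration.

\begin{verbatim}
import numpy as np

# Toy label spaces (hypothetical): 3 verbs, 2 objects, and 4 HOI
# categories C = {ride-horse, hold-horse, hold-book, read-book}.
A_v = np.array([[1, 0, 0, 0],   # ride
                [0, 1, 1, 0],   # hold
                [0, 0, 0, 1]])  # read
A_o = np.array([[1, 1, 0, 0],   # horse
                [0, 0, 1, 1]])  # book

def compose_label(l_v, l_o):
    # Eq. (label): intersect the HOIs reachable from the verb labels with
    # those reachable from the object label; an infeasible pair such as
    # "ride book" yields an all-zero vector and is discarded.
    return (l_v @ A_v).astype(bool) & (l_o @ A_o).astype(bool)

y = np.array([1, 0, 0, 0])  # one-hot HOI label: ride-horse
l_v = y @ A_v.T             # Eq. (decompose): verb labels [1, 0, 0]
print(compose_label(l_v, np.array([0, 1])))                  # all False
print(compose_label(np.array([0, 1, 0]), np.array([0, 1])))  # hold-book
\end{verbatim}

\section{Experiments} In this section, we first introduce the datasets and metrics, and then provide the implementation details of our method. Next, we present our experimental results compared with state-of-the-art approaches. Finally, we conduct ablation studies to validate the components of our work. \subsection{Datasets and Metrics} We adopt the largest HOI dataset, HICO-DET \cite{chao2018learning}, which contains 47,776 images: 38,118 for training and 9,658 for testing.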
All 600 HOI categories are constructed from 80 object categories and 117 verb categories. HICO-DET provides more than 150k annotated human-object pairs. In addition, V-COCO is another, smaller HOI dataset with 29 categories~\cite{gupta2015visual}. Considering that V-COCO mainly focuses on verb recognition and does not exhibit a severe long-tailed issue, we mainly evaluate the proposed method on HICO-DET. We also report results on visual relation detection \cite{lu2016visual, zhan2019exploring}, which requires detecting (subject, predicate, object) triplets, in the supplementary materials. We follow the evaluation settings in \cite{chao2018learning}, \ie, an HOI prediction is a true positive if 1) both the human and object bounding boxes have IoUs larger than 0.5 with the reference ground-truth bounding boxes; and 2) the HOI prediction is accurate. \subsection{Implementation Details} Similar to \cite{bansal2020detecting, hou2020visual}, our HOI detection model consists of two separate stages: 1) we fine-tune the Faster R-CNN detector pre-trained on COCO~\cite{lin2014microsoft} using HICO-DET to detect humans and objects~\footnote{We use the Faster R-CNN detector implemented in detectron2 \cite{wu2019detectron2}.}; 2) we use the proposed FCL model for HOI classification. Specifically, all branches are two-layer MLP sigmoid classifiers with 2048-d inputs and 1024-d hidden units. The fabricator is a two-layer MLP. $\mathcal{L}_{reg}$ is implemented as a sigmoid classification loss on the verb representation. $\mathcal{L}_{CL}$, $\mathcal{L}_{hoi}$, and $\mathcal{L}_{hoi\_sp}$ are binary cross-entropy losses. $\mathbf{A}_v$ and $\mathbf{A}_o$ are set according to the HOI dataset, and we can also set them by prior knowledge to detect more types of unseen HOIs. Besides, to prevent fabricated HOIs from dominating the optimization process, we randomly sample fabricated HOIs in each mini-batch so that the number of fabricated HOIs is at most three times the number of non-fabricated HOIs. We train our network for one million iterations with the SGD optimizer on the HICO-DET dataset, using an initial learning rate of 0.01, a weight decay of 0.0005, and a momentum of 0.9. We set $\lambda_1$ to 2.0, $\lambda_2$ to 0.5, and $\lambda_3$ to 0.3, and set the coefficient of $\mathcal{L}_{hoi\_sp}$ to 1. The hyper-parameters are ablated in the supplementary materials. We jointly fine-tune the model with the object fabricator for about 500k iterations, decaying the initial learning rate of 0.01 with a cosine annealing schedule. All our experiments on HICO-DET are conducted using TensorFlow~\cite{abadi2016tensorflow} on a single Nvidia GeForce RTX 2080Ti GPU. We evaluate V-COCO based on PMFNet \cite{wan2019pose} with two GPUs. We do not use the auxiliary verb loss on V-COCO since it contains only two kinds of objects. We set $\lambda_1$ to 1 and $\lambda_2$ to 0.25 on V-COCO. \subsection{Comparison to Recent State-of-the-Arts} Our method aims at open long-tailed HOI detection, whereas current approaches usually focus on full categories, rare categories, and unseen categories separately. In order to compare with state-of-the-art methods, we evaluate our method on long-tailed detection and generalized zero-shot detection separately. HOI detection results are reported in mean average precision (mAP, \%). \subsubsection{Effectiveness for Zero-Shot HOI Detection} There are two settings \cite{bansal2020detecting} for zero-shot HOI detection: 1) unseen composition; and 2) unseen object.
Specifically, the unseen composition setting means that the training data contains all factors (\ie, verbs and objects) but misses some verb-object pairs, while the unseen object setting requires detecting unseen HOIs whose objects do not appear in the training data. For unseen composition HOI detection, similar to \cite{hou2020visual}, we select two groups of 120 unseen HOIs, preferentially from the tail (rare first) and preferentially from the head (non-rare first), which roughly bound the lowest and highest achievable performances. We report our results in the following settings: Unseen (120 HOIs), Seen (480 HOIs), and Full (600 HOIs) in the ``Default'' mode on the HICO-DET dataset. For a better comparison, we implement the factorized model \cite{shen2018scaling} under our framework for unseen composition zero-shot HOI detection. For unseen object HOI detection, we use the same unseen HOI categories as \cite{bansal2020detecting} (\ie, randomly selecting 12 of the 80 objects and picking all HOIs containing these objects as unseen HOIs). We then report our results in the setting: Unseen (100 HOIs), Seen (500 HOIs), Full (600 HOIs). To compare with the contemporary work~\cite{hou2020visual}, we use the same object detection results released by~\cite{hou2020visual}. Here, our baseline is the model without the object fabricator, \ie, without the fabricated compositional branch. \setlength{\tabcolsep}{4pt} \begin{table}[tp] \caption{Comparison of zero-shot HOI detection results of the proposed method. UC indicates unseen composition zero-shot HOI detection; UO indicates unseen object zero-shot HOI detection. For better illustration, we report the mean UC result of \cite{bansal2020detecting}. } \label{table:zero_shot1} \centering \small \begin{tabular}{@{}lcccc@{}} \hline Method & Type & Unseen & Seen & Full \cr \hline\hline Shen \etal \cite{shen2018scaling} & UC & 5.62 & - & 6.26 \\ FG \cite{bansal2020detecting} & UC & 11.31 & 12.74 & 12.45 \\ \hline VCL \cite{hou2020visual} (rare first) & UC & 10.06 & 24.28 & 21.43 \\ Baseline (rare first) & UC & 8.94 & 24.18 & 21.13 \\ Factorized (rare first) & UC & 7.35 & 22.19 & 19.22 \\ FCL (rare first) & UC & {\bf 13.16} & 24.23 & {\bf 22.01} \\ \hline VCL \cite{hou2020visual} (non-rare first) & UC & 16.22 & 18.52 & 18.06 \\ Baseline (non-rare first) & UC & 13.47 & 19.22 & 18.07 \\ Factorized (non-rare first) &UC& 15.72 & 16.95 & 16.71 \\ FCL (non-rare first) & UC & {\bf 18.66} & {\bf 19.55} & {\bf 19.37} \\ \hline\hline FG \cite{bansal2020detecting} & UO & 11.22 & 14.36 & 13.84 \\ Baseline & UO & 12.86 & 20.77 & 19.45 \\ FCL & UO & {\bf 15.54} & 20.74 & {\bf 19.87} \\ \hline \end{tabular} \end{table} \begin{table}[tp] \caption{Comparison to the state-of-the-art approaches on the HICO-DET dataset~\cite{chao2018learning}. FCL $^{DRG}$ is FCL with the object detector provided by \cite{gao2020drg}. FCL + VCL means we fuse the result provided in \cite{hou2020visual} with FCL.
VCL$^{DRG}$ uses the released model of VCL.} \label{table:sota_hico} \centering \resizebox{0.95\linewidth}{!}{ \begin{tabular}{@{}lcccccc@{}} \hline \multirow{2}{*}{Method} & \multicolumn{3}{c}{Default}&\multicolumn{3}{c}{Known Object}\cr\cline{2-7} &Full&Rare&NonRare&Full&Rare&NonRare\cr \hline\hline FG \cite{bansal2020detecting} & 21.96 & 16.43 & 23.62 & - & - & - \\ IP-Net \cite{wang2020learning} & 19.56 & 12.79 & 21.58 & 22.05 & 15.77 & 23.92 \\ PPDM \cite{liao2019ppdm} & 21.73 & 13.78 &24.10 &24.58 &16.65 &26.84 \\ VCL \cite{hou2020visual} & 23.63 & 17.21 & 25.55 & 25.98 & 19.12 & 28.03 \\ DRG \cite{gao2020drg} & 24.53 & 19.47 & 26.04 & 27.98 & 23.11 & 29.43 \\ \hline Baseline & 23.35 & 17.08 & 25.22 & 25.44 & 18.78 & 27.43 \\ FCL & {\bf 24.68} & {\bf 20.03 } & {\bf 26.07} & {\bf 26.80} & {\bf 21.61} & {\bf 28.35 }\\ FCL + VCL & {\bf 25.27} & {\bf 20.57} & {\bf 26.67} & {\bf 27.71} & {\bf 22.34} & {\bf 28.93} \\ \hline VCL \cite{hou2020visual} $^{DRG}$ & 28.33 & 20.69 & 30.62 & 30.59 & 22.40 & 33.04\\ Baseline$^{DRG}$ & 28.12 & 21.07 & 30.23 & 30.13 &22.30 &32.47\\ FCL $^{DRG}$ & {\bf 29.12} & {\bf 23.67} & {\bf 30.75} & {\bf 31.31} & {\bf 25.62} & {\bf 33.02} \\ (FCL + VCL) $^{DRG}$ & {\bf 30.11} & {\bf 24.46} & {\bf 31.80} & {\bf 32.17} & {\bf 26.00} & {\bf 34.02} \\ \hline VCL \cite{hou2020visual} $^{GT}$ & 43.09 & 32.56 & 46.24 & - & - & - \\ FCL$^{GT}$ & {\bf 44.26} & {\bf 35.46} & {\bf 46.88} & - & - & - \\ (FCL + VCL)$^{GT}$ & {\bf 45.25} & {\bf 36.27} & {\bf 47.94} & - & - & - \\ \hline \end{tabular} } \end{table} {\bf Unseen composition}. Table~\ref{table:zero_shot1} shows that FCL improves the Unseen category by {\bf 4.22\%} and {\bf 5.19\%} over the baseline, and by {\bf 3.10\%} and {\bf 2.44\%} over previous works~\cite{bansal2020detecting, hou2020visual}, under the two selection strategies respectively. Meanwhile, both selection strategies witness a consistent improvement with FCL on nearly all categories, which indicates that composing novel HOI samples helps overcome the scarcity of HOI samples. In the rare-first selection, FCL performs similarly to the baseline and VCL \cite{hou2020visual} on the Seen category, but step-wise optimization can further improve the results on the Seen and Full categories (see Table~\ref{table:step}). In addition, the factorized model performs very poorly on the head classes compared to our baseline. Noticeably, the factorized model outperforms the baseline on the Unseen category in the non-rare-first selection but underperforms on the Unseen category in the rare-first selection, whereas FCL improves consistently across the different evaluation settings. Note that, given the remaining training data, the unseen HOIs of the rare-first split involve more rare verbs (fewer than 10 instances) than those of the non-rare-first split. {\bf Unseen object}. We further evaluate FCL on novel object zero-shot HOI detection, which requires detecting HOIs involving novel objects. Table~\ref{table:zero_shot1} shows that FCL effectively improves the baseline by 2.68\% on the Unseen category, although no real objects of unseen HOIs appear in the training set. This illustrates the ability of FCL to detect unseen HOIs with novel objects. Here, following \cite{bansal2020detecting}, we also use a generic detector to enable unseen object detection.
\subsubsection{Effectiveness for Long-Tailed HOI Detection} We compare FCL with recent state-of-the-art HOI detection approaches \cite{wang2020learning, liao2019ppdm, bansal2020detecting, hou2020visual, gao2020drg} using a fine-tuned object detector on HICO-DET to validate its effectiveness on long-tailed HOI detection. For a fair comparison, we use the same fine-tuned object detector provided by \cite{hou2020visual}. For evaluation, we follow the settings in \cite{chao2018learning}: Full (600 HOIs), Rare (138 HOIs), and Non-Rare (462 HOIs) in the ``Default'' and ``Known Object'' modes on HICO-DET. In Table~\ref{table:sota_hico}, we find that the proposed method achieves new state-of-the-art performance of {\bf 24.68\%} and {\bf 26.80\%} mAP in the ``Default'' and ``Known Object'' modes. Meanwhile, we achieve a significant improvement of {\bf 2.82\%} over the contemporary model with the best Rare performance \cite{hou2020visual} under the same object detector, which indicates the effectiveness of the proposed compositional learning for long-tailed HOI detection. Furthermore, with the same object detection results as \cite{gao2020drg}, our result surprisingly increases to {\bf 29.12\%} in the ``Default'' mode. Here, we merely change the detection results provided in \cite{hou2020visual} to those provided in \cite{gao2020drg} during inference. Particularly, we find our method is complementary to composing HOIs between images \cite{hou2020visual}: by simply fusing the results provided by \cite{hou2020visual} with FCL, we further improve the results by a large margin under different object detectors. \begin{table}[tp] \centering \small \caption{Illustration of the proposed modules under step-wise optimization. FCL means the proposed Fabricated Compositional Learning. V indicates the verb regularization loss.} \begin{tabular}{@{}cccccc@{}} \hline FCL & V &Full&Rare&NonRare&Unseen\cr \hline\hline - & - & 18.12 & 15.99 & 20.65 & 12.41 \\ \checkmark & - & 19.08 & 17.47 & 20.95 & 14.90\\ - &\checkmark & 18.32 & 16.73 & 20.82 & 12.23 \\ \checkmark & \checkmark & {\bf 19.61} & {\bf 18.69} & {\bf 21.13} &{\bf 15.86} \\ \hline \end{tabular} \label{table:ablation} \end{table} \begin{table} \small \centering \caption{Ablation study of the fabricator under step-wise optimization. FCL composes HOIs within an image. ``+ verb fabricator'' means we fabricate both verb and object features.} \begin{tabular}{@{}ccccc@{}} \hline Method &Full&Rare&NonRare&Unseen\cr \hline\hline FCL & {\bf 19.61} & {\bf 18.69} & 21.13 &{\bf 15.86} \\ FCL w/o noise & 19.45 & 17.69 & 21.22 & 15.74 \\ FCL w/o verb & 19.20 & 18.02 & 21.04 & 14.71 \\ FCL + verb fabricator & 19.47 & 16.93 & 21.43 & 15.89 \\ \hline \end{tabular} \label{table:ab_fabricator} \end{table} \subsubsection{Effectiveness on V-COCO} We also evaluate FCL on V-COCO. Although the data on V-COCO is balanced, FCL still improves the baseline (reproduced PMFNet \cite{wan2019pose}) in Table~\ref{table:vcoco}.
\begin{table} \small \centering \caption{Illustration of Fabricated Compositional Learning on V-COCO, based on PMFNet \cite{wan2019pose}.} \begin{tabular}{@{}lc@{}} \hline Method & $AP_{role}$\cr \hline\hline PMFNet \cite{wan2019pose} & 52.0 \\ Baseline & 51.85\\ FCL & {\bf 52.35} \\ \hline \end{tabular} \label{table:vcoco} \end{table} \subsection{Ablation Studies} \label{sec:ab} For a robust validation of the proposed method on rare and unseen categories simultaneously, we select 24 rare categories and 96 non-rare categories for zero-shot learning (leaving 30,662 training instances). This setting falls roughly between the non-rare-first and rare-first selections in Table~\ref{table:zero_shot1}. See the supplementary material for the details of the unseen types and an ablation study of long-tailed HOI detection based on Table~\ref{table:sota_hico}. We conduct ablation studies on FCL, the verb regularization loss, the verb fabricator, step-wise optimization, and the effect of the object detector. {\bf Fabricated Compositional Learning}. In Table~\ref{table:ablation}, we find that the proposed compositional method with the fabricator steadily improves the performance, and that it is orthogonal to verb feature regularization (the verb regularization loss). {\bf Verb Feature Regularization}. We use a simple auxiliary verb loss to regularize verb features. Although the verb regularization loss alone only slightly improves the rare category performance (see rows 1 and 3 in Table~\ref{table:ablation}), FCL further achieves better performance. This indicates that regularizing factor features alone is suboptimal compared to the proposed method. Semantic verb regularization as in \cite{xu2019learning} gives a similar result (see the supplementary materials). {\bf Verb and Noise for the Fabricator}. Table~\ref{table:ab_fabricator} demonstrates that performance drops without the verb representation or the noise. This shows that verb representations provide useful information for generating objects, and that noise effectively improves the performance by increasing feature diversity. Meanwhile, comparing Table~\ref{table:ablation} and Table~\ref{table:ab_fabricator}, we find the fabricator still effectively improves the baseline even without the verb or noise input, which further indicates the effectiveness of FCL. {\bf Verb Fabricator}. Fabricating verb features as well (from a verb identity embedding, object features, and noise) performs even worse, as shown in Table~\ref{table:ab_fabricator}. This verifies that it is difficult to directly generate useful verb or HOI samples due to their complexity and abstractness. The supplementary materials provide more visual analysis of the verb and object features. \begin{table}[tp] \caption{Comparison between step-wise optimization and one-step optimization.
ZS denotes the zero-shot setting used in our ablation study.} \label{table:step} \centering \small \begin{tabular}{@{}lcccc@{}} \hline Method &Full&Rare&NonRare&Unseen\cr \hline\hline one step (long-tailed) & 24.03 & 18.42 & 25.70 & - \\ step-wise (long-tailed) & {\bf 24.68} & {\bf 20.03} & {\bf 26.07} & -\\ \hline one step (ZS) & {\bf 19.69} & 18.22 & 20.82 & {\bf 17.64} \\ step-wise (ZS) & 19.61 & {\bf 18.69} & {\bf 21.13} & 15.86 \\ \hline one step (rare first ZS) & 22.01 & 15.55 & 24.56 & {\bf 13.16} \\ step-wise (rare first ZS) & {\bf 22.45} & {\bf 17.19} & {\bf 25.34} & 12.12 \\ \hline one step (non-rare ZS) & {\bf 19.37} & 15.39 & 20.56 & {\bf 18.66} \\ step-wise (non-rare ZS) & 19.11 & {\bf 17.12} & {\bf 21.02} & 15.97 \\ \hline \end{tabular} \end{table} \begin{table}[tp] \centering \small \caption{Illustration of the effect of fine-tuned detectors on FCL. The COCO detector is trained on the COCO dataset as provided in \cite{wu2019detectron2}; we fine-tune the ResNet-101 Faster R-CNN detector based on Detectron2 \cite{wu2019detectron2}. Here, the baseline is our model without the fabricator. The last column reports the object detection mAP on the HICO-DET test set. } \label{table:detector} \begin{tabular}{@{}cccccc@{}} \hline Method & Detector & Full & Rare & NonRare & Object mAP\cr \hline\hline Baseline & COCO & 21.24 & 17.44 & 22.37 & 20.82\\ FCL & COCO & {\bf 21.80} & {\bf 18.73} & {\bf 22.71} & 20.82 \\ \hline Baseline & HICO-DET & 23.94 & 17.48 & 25.87 & 30.79 \\ FCL & HICO-DET & {\bf 24.68} & {\bf 20.03} & {\bf 26.07} & 30.79\\ \hline Baseline & GT & 43.63 & 34.23 & 46.43 & 100.00 \\ FCL & GT & {\bf 44.26} & {\bf 35.46} & {\bf 46.88} & 100.00 \\ \hline \end{tabular} \end{table} \begin{figure*} \centering \includegraphics[width=0.84\textwidth]{improvement4.png} \caption{Illustration of the per-category improvement of FCL over the baseline (improved categories only) on the HICO-DET dataset under the Default setting. The graph is sorted by category frequency, and the horizontal axis shows the number of training samples for each category. Results are reported in mAP (\%). The category names are provided in the supplementary materials.} \label{fig:long_tailed_improve} \end{figure*} {\bf Step-wise Optimization.} Table~\ref{table:step} shows that step-wise training performs better on rare and non-rare categories while performing worse on unseen categories. We conjecture that this is because step-wise training biases the model toward seen categories in the first step, since there is no training data for unseen categories. {\bf Object Detector.} The quality of the detected objects has an important effect on two-stage HOI detection methods \cite{hou2020visual}. Table~\ref{table:detector} shows that the improvement of FCL over the baseline is larger with the detector fine-tuned on HOI data. The COCO detector without fine-tuning on HICO-DET produces a large number of false positive and false negative boxes on HICO-DET due to domain shift, and is therefore less useful for evaluating the effectiveness of interaction modeling in HOI detection. If the boxes detected at inference are false, the features extracted from them are also unreliable and shifted away from the fabricated objects seen during training; as a result, the fabricated objects are less helpful for inferring HOIs at inference. Besides, GT boxes provide a strong object label prior for verb recognition.
\begin{figure} \centering \includegraphics[width=0.34\textwidth]{cos_longtailed_1.png} \caption{The trend of the cosine similarity between fabricated object features and real object features during optimization for long-tailed HOI detection with step-wise training.} \label{fig:cosdis_zero_shot_obj} \end{figure} \section{Qualitative Analysis} \label{sec:qua} {\bf Illustration of improvement among categories}. In Figure~\ref{fig:long_tailed_improve}, we find that \textit{the rarer the category is, the larger the improvement the proposed method achieves}. This result illustrates the benefit of FCL for the long-tailed issue in HOI detection. {\bf Visual analysis of fabricated and real object features}. Figure~\ref{fig:cosdis_zero_shot_obj} shows that the cosine similarity between fabricated and real object features gradually decreases and then stabilizes during step-wise training. This suggests that end-to-end optimization with the shared HOI classifier helps fabricate effective objects during the optimization process. \textit{More analysis of the generated object representations by t-SNE is provided in the Supplementary Materials}. \section{Conclusion} In this paper, we introduce a Fabricated Compositional Learning approach to compose samples for open long-tailed HOI detection. Specifically, we design an object fabricator to fabricate object features, and then stitch the fake object features and real verb features together to compose HOI samples. Meanwhile, we utilize an auxiliary verb regularization loss to regularize the verb features and improve Human-Object Interaction generalization. Extensive experiments illustrate the effectiveness of FCL on the largest HOI detection benchmarks, particularly for low-shot and zero-shot detection. \noindent {\bf Acknowledgements} This work was supported in part by Australian Research Council Projects FL-170100117, DP-180103424, IH-180100002, and IC-190100031. {\small \bibliographystyle{ieee_fullname}
\section{Appendix} \label{sec:appendix} \Eq{loss_joint} is expanded as follows: \begin{alignat*}{2} &E(X, Y) \\ &= -\log \sum_{k=1}^K \phi_k(X) && \mathcal{N}(f(Y) \,|\, \mu_k(X), \sigma_k(X)^2) \\ &= - \log \sum_{k=1}^K \phi_k(X) && \prod_{d=1}^{D} \left( \frac{1}{\sqrt{2 \pi} \sigma_{k,d}(X)} \exp \left( -\frac{ (f(Y)_{d} - \mu_{k,d}(X))^2 }{2 \sigma_{k,d}(X)^2} \right) \right) \\ &= - \log \sum_{k=1}^K \exp \Bigg[ && -\sum_{d=1}^{D} \left( \frac{ (f(Y)_{d} - \mu_{k,d}(X))^2 }{2 \sigma_{k,d}(X)^2} + \log \sigma_{k,d}(X) \right) \\ & && - D \frac{\log 2 \pi}{2} + \log \phi_k(X) \Bigg], \end{alignat*} where the subscript $d$ denotes the $d$-th dimension of a vector. This leads to a \emph{log-sum-exp} formulation, which requires the following computational trick to avoid numerical issues: \begin{align*} \log \sum_k \exp \left[ x_k \right] = \max_k (x_k) + \log \sum_k \exp \left[ x_k - \max_k (x_k) \right] \end{align*} \section{Conclusion} \label{sec:conclusion} We propose a novel method for retrieving and placing complementary parts for assembly-based modeling tools. Our method does not require consistent segmentation and part labeling, and it can learn from a non-curated and inconsistent over-segmentation of shapes in an online repository. We jointly learn how to predict complementary parts and how to organize them in a low-dimensional manifold. This enables us to retrieve parts that have good functional, geometric, and stylistic compatibility with the query shape. We also propose the first method to predict the positions of target parts just from their normalized geometry. \begin{figure} \centering \includegraphics[width=\linewidth]{./figures/failure_cases.pdf} \caption{\rev{Failure examples. 1st row: the bottom components match a single leg, but triple legs are retrieved. 2nd row: the green and blue components partially overlap. 3rd row: a sky-blue component is floating. 4th row: components overlap.}} \label{fig:failure_cases} \end{figure} \rev{Our framework has some limitations. While we randomly sample points over the surface of the partial input shape so that larger components have bigger influences on the next component retrieval, small/thin components sometimes play important roles, determining the style of the whole object and occupying specific areas. Thus, the retrieval network can produce unreasonable outputs when these components are not well taken into account. In \Fig{failure_cases}, the automatically synthesized shapes have conflicting and unmatched components. Our placement network may also break physical constraints and cause the new component to float or overlap with the input components. These issues, however, can be easily fixed with user interaction.} In the future, we plan to augment our method with capabilities to synthesize and deform retrieved parts, providing even better compatibility with the query. For any practical interactive interface, it is also essential to provide additional user control beyond part selection: for example, specifying high-level part attributes, rough shapes, and rough placements. \section{Results} \label{sec:experiments} We demonstrate interactive and automatic modeling tools that can leverage our method. We also quantitatively evaluate our method and compare it to state-of-the-art alternatives. \vspace{0.1cm} {\noindent \bfseries Dataset.} We test our method with 9 categories from the ShapeNet repository~\cite{Chang:2015}: Airplane, Car, Chair, Guitar, Lamp, Rifle, Sofa, Table, and Watercraft.
We picked diverse categories with interesting part structures and enough instances to provide useful training data. Our pre-processing produces a few components per shape, and we disregard shapes that have \rev{only 1 or more than 8 components} (see \Fig{data_stats} for details). An interactive modeling or shape synthesis tool performs best if it leverages the entire dataset, so the qualitative results provided in Section~\ref{sec:geometricmodel} are trained on the entire dataset. For quantitative evaluations and comparisons of various retrieval and placement algorithms, we randomly split every category into 80\% training and 20\% test sets, and report quantitative results and qualitative comparisons on the test sets only (Section~\ref{sec:evaluation}). \begin{figure} \centering \includegraphics[width=\linewidth]{./figures/data_stats.pdf} \caption{Histograms of the numbers of components. \rev{The number next to the category name is the total number of models we used in the experiments (including train/test), and the next line gives the number of discarded models, which have fewer than two or more than eight components.}} \label{fig:data_stats} \end{figure} \vspace{0.5cm} \subsection{Assembly-Based Geometric Modeling} \label{sec:geometricmodel} We first evaluate our method qualitatively for interactive and automatic shape modeling. \vspace{0.1cm} {\noindent \bfseries Interactive Modeling.} We use our retrieval and placement networks in an interactive modeling interface. Given a partial assembly, our algorithm first proposes a set of possible components by sampling from the conditional probability distribution predicted by the retrieval network and shows the candidates in our UI. \setlength{\columnsep}{10pt} \begin{wrapfigure}{r}{0.20\textwidth} \begin{center} \includegraphics[width=0.20\textwidth]{figures/interface.pdf} \end{center} \end{wrapfigure} Then, the user selects a desired complementary component, and the algorithm predicts its location via the placement network. The new shape is synthesized for the user, and the next component is proposed. Refer to the supplemental video for several interactive sessions. \vspace{0.1cm} {\noindent \bfseries Automatic Shape Synthesis.} Our method can also be used to facilitate fully automatic shape synthesis that generates diverse designs, as sketched below. We simply start with a random component, and iteratively add a component by sampling from the predicted distribution. Figure~\ref{fig:automatic_assembly} shows the evolution of the model when the component with maximal probability is added at every iteration. The retrieval network successfully finds new components that are missing in the query and can be connected to the given partial assembly. At each step, one can also make different decisions by taking different components from the sampling, so in Figure~\ref{fig:assembly_trees} we show a binary tree of possibilities. Note that in various content creation scenarios one can use this to control the complexity of the resulting models (based on the depth of the tree) and their diversity (based on the breadth of the tree).
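The following toy sketch outlines this greedy synthesis loop; the retrieval network $g$, the placement network $h$, and the database embedding coordinates are replaced by random stand-ins, so only the control flow reflects our actual system.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
D, N_db, K = 50, 100, 8                  # embedding dim, DB size, GMM modes
db_coords = rng.normal(size=(N_db, D))   # f(component) for all DB parts

def g(assembly):
    """Stand-in retrieval network: mixture parameters (phi, mu, sigma)."""
    phi = np.full(K, 1.0 / K)
    return phi, rng.normal(size=(K, D)), np.full((K, D), 0.05)

def h(assembly, part_idx):
    """Stand-in placement network: a 3D translation for the new part."""
    return 0.1 * rng.normal(size=3)

def synthesize(start_idx, steps=5):
    assembly = [(start_idx, np.zeros(3))]
    for _ in range(steps):
        phi, mu, sigma = g(assembly)
        k = rng.choice(K, p=phi)                  # sample a mixture mode
        target = rng.normal(mu[k], sigma[k])      # coordinate in embedding
        idx = int(np.argmin(np.linalg.norm(db_coords - target, axis=1)))
        assembly.append((idx, h(assembly, idx)))  # retrieve and place part
    return assembly

print(synthesize(start_idx=0))  # list of (part index, placement) tuples
\end{verbatim}

\begin{figure*}[t] \centering \includegraphics[width=\linewidth]{./figures/assembly_joint_embedding_batch_best.pdf} \caption{Automatic iterative assembly results from a single component. Small components not typically labeled with semantics in the shape database (e.g.,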
slats between chair/table legs, pillows on sofas, and cords in lamps/watercrafts) are appropriately retrieved and placed.} \label{fig:automatic_assembly} \end{figure*} \begin{figure*}[t] \centering \includegraphics[width=\linewidth]{./figures/assembly_joint_embedding_batch_random.pdf} \caption{Automatic iterative assembly with two different random choices at every step. From the initial component at the bottom, various objects are synthesized by assembling different components.} \label{fig:assembly_trees} \end{figure*} \subsection{Quantitative Evaluations and Comparisons} \label{sec:evaluation} Evaluating an assembly-based geometric modeling tool is a challenging problem, since there is no well-established protocol. In particular, evaluating the whole end-to-end object design process relies on subjective user evaluations, which are prone to bias (e.g., the modeling task can be geared to favor a particular method). We thus propose a benchmark that evaluates various aspects of our core contributions: complement retrieval, part embedding, and part placement. Evaluating whether retrieved parts are compatible with the query partial shape is not a trivial task, since compatibility is a subjective criterion. We propose to evaluate functional, geometric, and stylistic compatibility with separate metrics outlined in the following paragraphs. For each criterion, we compare our results to \rev{a random suggestion baseline} and two state-of-the-art alternatives. The first is the method of Chaudhuri and Koltun~\shortcite{Chaudhuri:2010} (CK10), which also does not require a database of labeled parts and is thus directly comparable with our input. Their method operates in two steps: they find shapes that are similar to the query using global shape descriptors, and then pick components in the retrieved shapes that are dissimilar from the components in the query. Second, to test the value of our embedding, we replace the joint training of the embedding network $f$ and the retrieval network $g$ with a fixed embedding space from MVCNN~\cite{Su:2015} (i.e., only $g$ is trained in this case). More specifically, we extract the last layer of MVCNN and use PCA to project it to a 50-dimensional space. We also evaluate the part placement network and present quantitative results to enable future comparisons. \vspace{0.1cm} {\noindent \bfseries Functional compatibility.} To answer the question of whether a retrieved part is functionally compatible with the partial query, we rely on an existing segmentation benchmark with part labels that refer to functionality (e.g., an airplane can include four functional parts: a body, an engine, a tail, and wings). We then remove a single part from the query shape and evaluate how many of the retrieved components have correct labels. In this experiment we use the part labels in the ShapeNet dataset~\cite{yi:2016}. Since this dataset provides per-point labels rather than isolated components, we first label connected components and group them into bigger parts. In particular, we use majority voting to label each connected component, and then group all components with the same label into a single part. We disregard shapes whose final labeled parts cover less than 80\% of the labeled points in the dataset (we use 3396 out of 8670 models in this evaluation). This provides us with a database of shapes that are decomposed into consistent semantic parts.
Note that these are very different from our training components obtained after the database preprocessing in Section~\ref{sec:data_preprocessing}, which are inconsistent and unlabeled. We do not use these labeled components for training, but only use them to create the query shapes and the component database in this experiment. We report numbers on the 6 categories (out of the 9 we tested) for which ShapeNet has part annotations. We generate 100 queries for each category. For each query, we exclude a randomly chosen part from the shape and measure the mean average precision (mAP) of the top 5 retrieval results (where a result is considered correct if its part label matches the label of the excluded part). We present quantitative results in Table~\ref{tbl:part_label_map}, demonstrating that our method outperforms CK10 on all categories except guitars, where both methods perform well due to the very regular structure of the shape. \begin{table}[ht] {\small \setlength\tabcolsep{4pt} \begin{tabular}{c|*{6}c|c} \toprule Category & Plane & Car & Chair & Guitar & Lamp & Table & Mean \\ \midrule Random & 0.41 & 0.40 & 0.42 & 0.51 & 0.49 & 0.63 & 0.48 \\ CK10 & 0.37 & 0.39 & 0.37 & \textbf{1.00} & 0.52 & 0.64 & 0.55 \\ Ours (MVCNN) & 0.70 & \textbf{0.71} & 0.80 & 0.93 & 0.74 & 0.88 & 0.79 \\ Ours (Joint) & \textbf{0.89} & 0.55 & \textbf{0.86} & 0.95 & \textbf{0.73} & \textbf{0.91} & \textbf{0.81} \\ \bottomrule \end{tabular} } \caption{Evaluating functional labels of retrieved parts, this table reports the mean average precision of the top-5 retrievals across different methods and categories. } \label{tbl:part_label_map} \vspace{-0.5cm} \end{table} \vspace{0.1cm} {\noindent \bfseries Geometric compatibility.} Even if a retrieved part has the correct label, it might not fit well with the query shape. While it is hard to evaluate geometric compatibility directly, we resort to comparing the geometry of the retrieved part with that of the original part that was excluded from the assembly. We use the same experimental setup as in evaluating functional compatibility, but measure the average Hausdorff distance between the original and the top 5 retrieved parts (\Tbl{hausdorff_distances}). The distances are relative to the shape radius, which is scaled to 1. Note that our method returns parts that are more similar to the original complement than the parts returned by CK10. Similar to the functional compatibility metric, we found that our method performs slightly worse on guitars, where global shape descriptors may be most appropriate for capturing guitar shapes. \begin{table}[ht] {\small \setlength\tabcolsep{4pt} \begin{tabular}{c|*{6}c|c} \toprule Category & Plane & Car & Chair & Guitar & Lamp & Table & Mean \\ \midrule Random & 0.27 & 0.24 & 0.27 & 0.14 &0.30 & 0.36 & 0.26 \\ CK10 & 0.23 & 0.21 & 0.27 & \textbf{0.04} & 0.25 & 0.34 & 0.23 \\ Ours (MVCNN) & 0.15 & \textbf{0.15} & 0.19 & 0.05 & 0.22 & 0.27 & 0.17 \\ Ours (Joint) & \textbf{0.11} & 0.21 & \textbf{0.16} & 0.05 & \textbf{0.21} & \textbf{0.24} & \textbf{0.16} \\ \bottomrule \end{tabular} } \caption{Evaluating geometric compatibility, this table reports the average Hausdorff distances of the top-5 retrievals to the ground-truth missing parts. Note that all models are normalized to have unit radius.} \label{tbl:hausdorff_distances} \vspace{-0.5cm} \end{table} \vspace{0.1cm} {\noindent \bfseries Style compatibility.} Our next goal is to evaluate whether the retrieved component is compatible with the query in style.
While this is not a well-formulated problem, for the purpose of evaluation we use the ShapeNet \emph{taxonomy} to reason about finer-grained classes, since such fine-grained classes are often defined by style (e.g., the fine-grained classes of chairs are club chair, straight chair, lounge chair, recliner, etc.). In particular, we consider a retrieved part to be accurate if it comes from a shape in the same fine-grained class. While this is generally an imperfect measure (e.g., car wheels are interchangeable between convertibles and jeeps, and table legs are interchangeable between rectangular and round tables, even though the global fine-grained labels are different), we found that this measure does correlate with the style compatibility of the query and target shapes, and nicely complements the other measures. We evaluate style compatibility on 6 categories that have various fine-grained classes. We test two extremes: only one part is missing from the query, and only one part is present in the query. One issue is that the fine-grained categories of ShapeNet models overlap, i.e., one model may have multiple fine-grained class labels (e.g., club chair and armchair); thus, the subclasses are considered matched if there is any overlap between the sets of subclasses of the query and the retrieval results. \Tbl{texonomy_map} shows the mAPs of the top 5 retrieval results. Our method performs better when only one part is given, suggesting that it can capture stylistic compatibility with very little information. However, we found that the global shape descriptors used by CK10 perform better at retrieving stylistically similar shapes when the query is almost the complete shape. It is worth noting that our method is at a disadvantage under this metric: we introduce an embedding space in which components from different fine-grained categories may lie near each other (the car wheel example above), whereas the CK10 approach tends to find a shape very similar to the query in the all-except-one setting, which has a high chance of being from the same fine-grained category. Note that this metric only reasons about the style of the global shape, even if the individual retrieved parts look identical. \begin{table} {\small \setlength\tabcolsep{4pt} \begin{tabular}{c|*{6}c|c} \toprule Category & Plane & Car & Chair & Sofa & Table & Ship & Mean \\ \midrule Random & 0.65 & 0.28 & 0.50 & 0.38 & 0.37 & 0.50 & 0.50 \\ \midrule \multicolumn{8}{c}{All except one } \\ \midrule CK10 & \textbf{0.79} & \textbf{0.80} & \textbf{0.83} & \textbf{0.84} & \textbf{0.71} & \textbf{0.75} & \textbf{0.79} \\ Ours (MVCNN) & 0.76 & 0.32 & 0.67 & 0.52 & 0.47 & 0.56 & 0.60 \\ Ours (Joint) & 0.78 & 0.34 & 0.65 & 0.67 & 0.49 & 0.59 & 0.62 \\ \midrule \multicolumn{8}{c}{Single } \\ \midrule CK10 & 0.51 & \textbf{0.44} & 0.52 & 0.56 & 0.44 & 0.54 & 0.49 \\ Ours (MVCNN) & 0.71 & 0.29 & 0.64 & 0.54 & 0.46 & 0.53 & 0.58 \\ Ours (Joint) & \textbf{0.72} & 0.26 & \textbf{0.64} & \textbf{0.59} & \textbf{0.52} & \textbf{0.59} & \textbf{0.59} \\ \bottomrule \end{tabular} } \caption{Evaluating style compatibility, this table reports mAP for the fine-grained style categories of retrieved components.} \label{tbl:texonomy_map} \vspace{-0.5cm} \end{table} \textbf{Comparison to Chaudhuri and Koltun~\shortcite{Chaudhuri:2010} (CK10).} As mentioned previously, CK10 relies on hand-crafted global shape descriptors to retrieve the most similar shape, and then proposes parts that are dissimilar from the components in the query.
In contrast, our method learns to predict the descriptor of the complement directly from the query, which is a more direct approach. We also use neural networks for this task, which enables our approach to leverage large datasets as they become available. We demonstrate some qualitative results in \Fig{comparisons}. \begin{figure*} \centering \includegraphics[width=\textwidth]{./figures/comparisons.pdf} \caption{Part retrieval comparisons. On the left is the query shape with a missing component (shown in blue). The three rows on the right are the top-5 retrieval results of CK10, ours with the MVCNN embedding, and ours with the joint embedding, respectively. The retrieved components are highlighted in light green and pink; light green indicates correct parts, and pink indicates wrong parts.} \label{fig:comparisons} \end{figure*} \vspace{0.1cm} {\noindent \bfseries Effect of learning the embedding.} We also evaluate the influence of using a fixed embedding space instead of learning the embedding. In particular, we pick the MVCNN descriptor, one of the state-of-the-art deep-learned shape descriptors, and train only the retrieval network $g$, while $f$ is prescribed by the PCA transformation of the MVCNN descriptors of the training data. Evaluated by the part label and style prediction metrics, \Tbl{part_label_map} and \Tbl{texonomy_map} show that our method based on the learned embedding works on par with or slightly better than the feature space from MVCNN. Qualitatively, we find that the learned embedding often exhibits larger diversity, orthogonal to the appearance similarity captured by MVCNN. In \Fig{neighbors}, we visualize our learned embedding space and observe such diversity. For example, the returned table legs in the third row differ greatly in shape but are all reasonable components to be added to the partial assembly. \begin{figure*} \centering \includegraphics[width=\textwidth]{./figures/neighbors.pdf} \caption{Neighborhoods in the embedding space learned by our embedding network: the top-10 nearest neighbors of each query, with their nearest-neighbor ranks in the MVCNN embedding space shown below.} \label{fig:neighbors} \end{figure*} \vspace{0.1cm} {\noindent \bfseries Part placement evaluation.} A unique advantage of our method over CK10 and existing approaches is its ability to predict part placement purely from the geometry of the new component. \Tbl{placement_error} shows the placement error on both training and testing data, where the error is measured relative to shapes with unit radius. Note that all models in the database are normalized to have unit radius from the bounding box center. In all categories, our placement network predicts positions with reasonably small errors. \begin{table} {\small \setlength\tabcolsep{2pt} \begin{tabular}{c|*{9}c|c} \toprule Category & Plane & Car & Chair & Guitar & Lamp & Rifle & Sofa & Table & Ship & Mean \\ \midrule Train Err & 0.02 & 0.03 & 0.06 & 0.02 & 0.04 & 0.02 & 0.04 & 0.06 & 0.03 & 0.04 \\ Test Err & 0.06 & 0.13 & 0.13 & 0.03 & 0.20 & 0.14 & 0.12 & 0.15 & 0.19 & 0.12 \\ \bottomrule \end{tabular} } \caption{Placement network error. Note that all models are normalized to have unit radius.} \label{tbl:placement_error} \vspace{-0.5cm} \end{table} \begin{figure} \centering \includegraphics[width=\linewidth]{./figures/assembly_cross_category.pdf} \caption{\rev{Automatic assembly results using components from another category.
The assemblies across the table, sofa, and chair categories are more reasonable than those across the airplane, watercraft, and car categories, due to the commonality of the component shapes.}} \label{fig:cross_category_assembly} \end{figure} \vspace{0.1cm} {\noindent \bfseries \rev{Cross-category assembly.}} \rev{Lastly, we test our method by assembling components from a different category with the trained models. \Fig{cross_category_assembly} demonstrates some results of cross-category automatic synthesis. We achieve reasonable outputs when using categories that share many similar components (e.g., table - sofa - chair). Note that some components are even used with different functionalities, such as table legs serving as chair arms and a chair back. Obviously, it is not possible to obtain plausible outputs when there is no commonality among the component shapes (e.g., airplane - watercraft - car), but the outputs still show meaningful mappings, such as a watercraft body to an airplane body and a watercraft sail to an airplane tail wing.} \vspace{0.1cm} {\noindent \bfseries Timing.} We ran both training and testing on a single NVIDIA GeForce GTX TITAN X graphics card. It took 12 hours to train each of the retrieval/embedding and placement networks for $100k$ iterations. At test time, each of the retrieval and placement networks takes 0.1 seconds. \section{Introduction} \label{sec:introduction} Geometric modeling is essential for populating virtual environments as well as for designing real objects. Yet creating 3D models from scratch is a tedious and time-consuming process that requires substantial expertise. To address this challenge, Funkhouser et al.~\shortcite{Funkhouser:2004} put forth the idea of re-using parts of existing 3D models to generate new content. To alleviate the burden of finding and segmenting geometric regions, Chaudhuri et al.~\shortcite{Chaudhuri:2011} proposed an interface for part-wise shape assembly, which reduces the user interaction to component selection and placement. Their suggestion model was trained on a heavily supervised dataset, where every shape was segmented into a consistent set of parts with semantic part labels. Even at a very coarse part level, significant expense has to be incurred with the help of crowd-sourced workers and active learning techniques~\cite{yi:2016}. In this work we propose a novel component suggestion approach that does not require explicit part annotations. While our method still requires unlabeled components, this is a much weaker requirement, as it has been observed before that such decompositions can be obtained automatically~\cite{Chaudhuri:2011}. In this work, we also leverage the observation of Yi et al.~\shortcite{Li:2017} that models from online repositories, such as the 3D Warehouse~\cite{Warehouse}, already have some segmentations (based on connected components and scene graph nodes) that often align with natural part boundaries. Even though these components are inconsistent and unlabeled, we can still train a model for component suggestion, because given some (partial) shape assembly we know exactly which components are missing and where they need to be placed. \rev{We propose novel neural network architectures for suggesting complementary components and their locations given a partially assembled shape.
Our networks use unordered point clouds to represent geometry~\cite{Qi:2017}, which makes them widely applicable\footnote{In particular, this makes it possible to easily integrate our approach within other extant design systems.}. There are two main challenges in training the retrieval network. First, since we do not require consistent segmentations and labels, our network needs to index the parts for retrieval. Thus, we jointly train two networks: an embedding network that indexes the parts by mapping them to a low-dimensional latent space, and a retrieval network that maps a partial assembly to the appropriate subspace of complements. These networks are trained together from triplets of examples: a partial assembly, a correct complement, and an incorrect complement. We use a contrastive loss to ensure that correct and incorrect complements are separated by a margin, which favors embeddings that are compatible with the retrieval network predictions. The second challenge is that multiple design options can complement each partial assembly (e.g., one can add either legs, a back, or arm rests to the seat of a chair). We address this challenge by predicting a probability distribution over the space of all plausible predictions, which we model as a mixture of Gaussians with confidence weights. This enables us to train a network that suggests multiple plausible solutions simultaneously, even though every training pair provides only one solution. Finally, the location prediction network takes a partial assembly and a complementary component, and outputs possible placements for that component. } We demonstrate that our method leads to a modeling tool that requires minimal or no user input. We also propose a novel benchmark to evaluate the performance of component suggestion methods. In that setting, our approach outperforms state-of-the-art retrieval techniques that do not rely on heavily curated datasets. \section{Data Preprocessing} \label{sec:data_preprocessing} \begin{figure}[b] \centering \includegraphics[width=\linewidth]{./figures/components.pdf} \caption{\rev{This figure illustrates our pre-processing step: the image on the left depicts the input connected components, and the image on the right shows the final components (after small, overlapping, and repetitive elements are grouped together).}} \label{fig:component_examples} \end{figure} Given a database of shapes, the goal of this step is to decompose the shapes into components and construct contact graphs over the components. \rev{We can partition such a graph in various ways to create training pairs of a partial assembly (a connected subgraph) and its complements (nodes adjacent to the subgraph).} This step has loose requirements, since subsequent steps do not require these components to be consistent or labeled. That said, it is desirable for the components to have non-negligible size, so that adding them makes a visible difference to the assembly, and to have their boundaries roughly aligned with geometric features, to avoid visual artifacts when stitching the parts together. Larger components also aid in learning a more meaningful and discriminative embedding space. Thus, we start with an over-segmentation where each component has reasonable boundaries, and then iteratively merge small components.
While we could use an automatic segmentation algorithm such as randomized cuts~\cite{Golovinskiy:2008} to produce the initial components, we found that ShapeNet models are already represented by scene graphs whose leaf geometry nodes provide reasonable components with minimal post-processing. \rev{We first construct an initial contact graph by creating an edge between any two components such that the minimum distance between them is less than $\tau_\text{proximity}=0.05$ of their radius. We then choose sets of nodes to merge into single components based on three criteria: size, amount of overlap, and similarity. Specifically, any component with a PCA-aligned bounding box diagonal below $\tau_\text{size}=0.2$ of the mesh radius is merged with its largest neighbor. Also, overlapping components with a \emph{directional Hausdorff} distance below $\tau_\text{Hausdorff tol}=0.05$ (in either direction) are merged into the same group. Finally, identical components that share the same geometry in the scene graph, or that have identical top/front/side grayscale renderings, are treated as a single component. The last merge favors placing all symmetric parts at once, which we found to be more time effective from the user perspective (e.g., think of placing every slat separately to form a back). The output of these merges is a new contact graph, and we synthesize training pairs by partitioning this graph in different ways (\Sec{joint_training}). We only use graphs that have at most $N_\text{max CC} = 8$ components during training. We demonstrate the effect of these pre-processing steps in Figure~\ref{fig:component_examples} and statistics over our training data in Figure~\ref{fig:data_stats}.} For our retrieval and placement networks, we represent our components with $N_\text{points}=1000$ randomly sampled points re-centered at the origin.
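The merging pass can be summarized by the following toy sketch; the distance tables stand in for real mesh queries, pairwise merging replaces the merge-to-largest-neighbor rule, and the duplicate-geometry grouping is omitted for brevity.

\begin{verbatim}
TAU_PROXIMITY = 0.05  # contact edge if min inter-part distance < 5% of radius
TAU_SIZE      = 0.20  # merge parts with bbox diagonal < 20% of mesh radius
TAU_HAUSDORFF = 0.05  # merge overlapping parts below this directional distance

def preprocess(diag, min_dist, hausdorff):
    """Toy contact-graph construction and merging pass.

    diag: {part: bbox diagonal}; min_dist / hausdorff: {(i, j): distance}
    for i < j. Returns the contact edges and the part pairs to be merged.
    """
    edges = {p for p, d in min_dist.items() if d < TAU_PROXIMITY}
    merges = []
    for (i, j) in sorted(edges):
        if min(diag[i], diag[j]) < TAU_SIZE:               # small part
            merges.append((i, j))
        elif hausdorff.get((i, j), 1.0) < TAU_HAUSDORFF:   # overlapping parts
            merges.append((i, j))
    return edges, merges

diag = {0: 0.9, 1: 0.1, 2: 0.7}                       # one small, two large
min_dist = {(0, 1): 0.01, (0, 2): 0.2, (1, 2): 0.03}
print(preprocess(diag, min_dist, hausdorff={}))       # edges and merge pairs
\end{verbatim}

\begin{figure}[t] \centering \includegraphics[width=0.5\textwidth]{./figures/network.pdf} \caption{\rev{Neural network architectures. (a) PointNet \cite{Qi:2017} (provided for completeness); the numbers in MLP (Multi-Layer Perceptron) are layer sizes. In our use, we omit the spatial transformer networks and assume that orientations are pre-aligned (\Sec{placement_network}). (b) The joint training framework of the retrieval and embedding networks. (c) The placement network.}} \label{fig:network} \end{figure} \section{Method} \label{sec:method} The input to our method is a partial shape, and the output is a set of component proposals selected from the database that can be added next. We design several neural networks to facilitate the proposals: a retrieval network $g$, an embedding network $f$, and a placement network $h$. \rev{While our networks can be re-targeted to deal with any 3D shape representation, such as voxel grids or multi-view projections, we chose to represent all input shapes with point clouds~\cite{Qi:2017}, which are versatile representations that can be used on a wide range of geometries.} To index parts, we build an embedding space for all components, where interchangeable and stylistically compatible components are embedded nearby. We represent this space with a neural network $f$ that takes part geometry and maps it to a low-dimensional vector. The retrieval network and the embedding network are tightly coupled. The retrieval network $g$ takes the geometry of a partial query as input and outputs a probability distribution over the low-dimensional embedding learned by $f$ (see \Fig{overview}).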
A good embedding needs to provide a space that is easy to represent with the output of the retrieval network. Thus, we jointly train both networks with triplets: a partial shape, one of its complements, and a non-complementing part. We then separately train a placement network $h$ that takes the geometry of the query shape and a retrieved complement, and outputs placement coordinates for the component. \subsection{Retrieval Network} \label{sec:retrieval_network} Given a partially assembled shape, our goal is to retrieve a set of complementary parts. Our input partial assembly $X$ is represented as a point cloud of $N_\text{points}$ points. Note that any partial shape $X$ can have several complementary parts; thus, instead of predicting a unique coordinate $Y_c$, we predict a conditional distribution over the embedded space, $P(Y_c | X)$. We model $P(Y_c|X)$ as a mixture of Gaussians defined on a $D$-dimensional embedding space, i.e., $Y_c \in \mathbb{R}^D$ (where $D=50$ in all experiments). We predict the distribution with a mixture density network (MDN) \cite{Bishop:1994}, which essentially predicts the parameters of the Gaussian mixture. For the mixture of Gaussians, we use $N_\text{GM}=N_\text{max CC}$ modes in our model, set to the maximal number of connected components, and represent the $k$-th Gaussian with a weight $\phi_k \in \mathbb{R}$, a mean $\mu_k\in \mathbb{R}^D$, and a standard deviation $\sigma_k \in \mathbb{R}^D$. To take unordered points as input, we use the PointNet network~\cite{Qi:2017} as the backbone structure, which leverages symmetric (order-independent) functions to map points to categories \rev{(see \Fig{network}a)}. To predict a probability distribution over the embedding space, we replace the classification output layers with the parameters of the Gaussian mixture model: $g(X) = \{\phi_k(X), \mu_k(X), \sigma_k(X)\}_{k=1..N_\text{GM}}$, where $g$ is a PointNet architecture. The weights, means, and standard deviations are each mapped from the input feature with a single fully connected layer, using different activations: softmax for the weights so that they sum to one, exponential for the standard deviations to constrain them to be positive, and linear for the means. It is worth mentioning that modeling a conditional distribution over neural network outputs has recently been an active research field, and our choice of MDN is mainly due to its significantly better ability to capture multiple modes. In principle, recent techniques such as conditional GANs \cite{Mirza:2014} or conditional VAEs \cite{Kingma:2014,Sohn:2015} could also be used here; however, it is well known that these approaches are still unapt to capture multiple modes well, suffering from a phenomenon known as mode collapse. \begin{figure}[t] \centering \includegraphics[width=0.8\linewidth]{./figures/training.pdf} \caption{A triplet of the query partial assembly $X$, a positive sample $Y$, and a negative sample $Z$ becomes an instance of the training data. Function $f(\cdot)$ maps $Y$ and $Z$ to single points in the embedding space, and function $g(\cdot)$ generates a Gaussian mixture distribution over the space from $X$. See \Sec{joint_training} for details.} \label{fig:training} \end{figure}
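For reference, the mixture negative log-likelihood that these outputs parameterize can be evaluated with the log-sum-exp trick from the appendix; in the numpy sketch below, the mixture parameters are made up rather than produced by the network.

\begin{verbatim}
import numpy as np

def mdn_nll(phi, mu, sigma, y):
    """Negative log-likelihood E(X, Y) of a diagonal Gaussian mixture.

    phi: (K,) weights; mu, sigma: (K, D); y: (D,) embedding f(Y).
    Uses the log-sum-exp trick from the appendix for numerical stability.
    """
    log_comp = (np.log(phi)
                - 0.5 * np.sum(((y - mu) / sigma) ** 2, axis=1)
                - np.sum(np.log(sigma), axis=1)
                - 0.5 * mu.shape[1] * np.log(2.0 * np.pi))
    m = log_comp.max()
    return -(m + np.log(np.exp(log_comp - m).sum()))

K, D = 8, 50
rng = np.random.default_rng(0)
phi = np.full(K, 1.0 / K)
mu = rng.normal(size=(K, D))
sigma = np.full((K, D), 0.05)          # standard deviations capped at 0.05
print(mdn_nll(phi, mu, sigma, mu[0]))  # low energy near a mixture mean
\end{verbatim}

\subsection{Embedding Network} \label{sec:embedding_network} The next step is to design the embedding network $f$ that takes a shape and maps it to the $D$-dimensional embedding space.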
Since the retrieval network works by predicting a coordinate in the space and selecting candidates by proximity search, nearby components have to be interchangeable when they are added to some partial object assemblies. A naive approach would be to use some fixed embedding (e.g., PCA) based on any shape features (e.g., deep learned classification features~\cite{Su:2015}). The disadvantage of this approach is that the embedding is created independently of the retrieval network $g$, so we cannot expect complementary parts to be captured well with the Gaussian mixture model. Thus, we propose to learn the embedding space jointly with the network $g$. To do this we use a PointNet architecture to represent the function $f(Y)$ that maps the point cloud of a component $Y$ to its embedding coordinates $Y_c$. Learning the embedding function $f$ enables us to create an embedding space that tightly clusters candidate complements that share stylistic and functional attributes. \subsection{Joint Training for Retrieval and Embedding} We now describe how to jointly train the retrieval and embedding networks using our pre-processed dataset. \label{sec:joint_training} {\noindent \bfseries Loss function.} Our loss is a triplet contrastive loss. Given some positive example of a partial assembly $X$ and its complementing part $Y$, we need to define an appropriate loss function to be able to learn optimal parameters for networks $f$ and $g$. We define it as the negative log-likelihood that $Y$ is sampled from the probability distribution predicted by $g(X)$, $P(Y | X)$: \begin{align} E(X, Y) &= -\log \sum_{k=1}^{N_\text{GM}} \phi_k(X) \mathcal{N}(f(Y) \,|\, \mu_k(X), \sigma_k(X)^2 ). \label{eq:loss_joint} \end{align} \rev{See Appendix for an expanded form.} Directly optimizing for parameters of $g$ and $f$ with respect to Equation~\ref{eq:loss_joint}, however, would collapse the embedding space to a single point, i.e., the optimal value is attained when $f$ contracts to a single point~\cite{Hadsell:2006}. Thus, we introduce a negative example, a component $Z$ that does not complement $X$, to avoid the contraction of $f$. We now use the triplet $(X,Y,Z)$ to define a contrastive loss~\cite{Chechik:2010}: \begin{align} E(X, Y, Z) &= \max \{m + E(X, Y) - E(X, Z), 0 \}, \label{eq:loss_constrasive} \end{align} where $m=10$ is a constant margin used in all experiments. \Fig{network}b shows the final version of the network for the component embedding with the contrastive loss. The subnetworks processing $X$, $Y$, and $Z$ have the same PointNet structure, but only the subnetworks of $Y$ and $Z$ share parameters. \vspace{0.1cm} {\noindent \bfseries Training.} To generate the training triplet $(X, Y, Z)$ we use the components in the pre-processed contact graphs described in Section~\ref{sec:data_preprocessing}. We first pick a random shape. Suppose its contact graph has $n$ nodes; we then pick a random value $r \in [1,n]$ and create a random subgraph with $r$ nodes. To do that we pick a random node and iteratively add a random unvisited adjacent node until we obtain a connected subgraph of size $r$. We sample $N_\text{points}$ points on the included components to obtain $X$ (note that these points are defined in the global coordinate system of the object). We then pick a random unvisited component that is adjacent to the selected subgraph $X$ to define $Y$, and a random non-adjacent component (including components from other shapes) to define $Z$ (note that $Y$ and $Z$ are represented by $N_\text{points}$ points centered at the origin).
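To make the training objective concrete, the following minimal NumPy sketch (illustrative code, not our actual implementation; the PointNet feature extraction and output layers are omitted, and the function names \texttt{mdn\_nll} and \texttt{triplet\_loss} are hypothetical) evaluates Equation~\ref{eq:loss_joint} and Equation~\ref{eq:loss_constrasive} for a single triplet $(X,Y,Z)$, given the mixture parameters predicted by $g(X)$ and the embeddings $f(Y)$ and $f(Z)$:
\begin{verbatim}
import numpy as np

def mdn_nll(phi, mu, sigma, y_c):
    # Negative log-likelihood E(X, Y) of the embedding y_c under the
    # Gaussian mixture {phi_k, mu_k, sigma_k} predicted by g(X), with
    # diagonal covariances and a log-sum-exp for numerical stability.
    log_gauss = -0.5 * np.sum(((y_c - mu) / sigma) ** 2
                              + np.log(2.0 * np.pi * sigma ** 2), axis=1)
    log_comp = np.log(phi) + log_gauss        # shape (N_GM,)
    top = log_comp.max()
    return -(top + np.log(np.exp(log_comp - top).sum()))

def triplet_loss(phi, mu, sigma, f_pos, f_neg, margin=10.0):
    # Contrastive loss E(X, Y, Z): the complement Y (embedded as f_pos)
    # should be more likely under g(X) than the negative Z (f_neg).
    return max(margin + mdn_nll(phi, mu, sigma, f_pos)
                      - mdn_nll(phi, mu, sigma, f_neg), 0.0)

# toy shapes: D = 50 embedding dimensions, N_GM = 8 mixture modes
rng = np.random.default_rng(0)
D, N_GM = 50, 8
phi = np.full(N_GM, 1.0 / N_GM)              # softmax output: sums to one
mu = rng.normal(size=(N_GM, D))              # linear output
sigma = np.full((N_GM, D), 0.05)             # exponential output, capped
print(triplet_loss(phi, mu, sigma, mu[0], rng.normal(size=D)))
\end{verbatim}
In a full implementation the gradient of this loss is back-propagated through $g$ and $f$ simultaneously, which is what couples the two networks.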
We train the retrieval network for $2000$ epochs with batch size $32$. We use the ADAM optimizer~\cite{Kingma:2014b} with an initial learning rate of $10^{-3}$ and a decay rate of $0.8$ per 50k steps. Each epoch iterates over all 3D models in the training set while randomly sampling the query subgraph $X$ and the positive/negative components $Y$ and $Z$. In the MDN, the standard deviations $\sigma_k(X)$ easily diverge to infinity since this drives the loss to $-\infty$. Hence, we set the upper limit of $\sigma_k(X)$ to 0.05. \subsection{Placement Network} \label{sec:placement_network} The retrieval network predicts a probability distribution on the embedding space. We can accordingly propose new components to be selected for interactive or fully-automatic model design. Suppose that $Y$ is the selected new component given a partial object $X$. The placement network $h$ predicts 3D coordinates for the component, $Y_p=h(X,Y)$. We assume that only the translation needs to be predicted and orient $Y$ the same way as it was oriented in the source shape. We use two independent PointNet networks to analyze the point clouds $X$ and $Y$, concatenate the features from these two networks, and add multilayer perceptron layers to obtain the 3D coordinates $Y_p$ (\Fig{network}c). We use the same training data samples $(X,Y)$ as in training the retrieval network. \section{Overview} \label{sec:overview} Incremental assembly-based modeling systems require two key technical components: part retrieval and part placement. In this work we provide solutions to both of these problems. In particular, given a partial object, our method proposes a set of potential complementary components from a repository of 3D shapes and a placement of each component. The goal is to retrieve components that are \emph{compatible} with the partial assembly in style, functionality, and other factors, while simultaneously being as \emph{diverse} as possible to leave more options to the designer. We also need to predict positions for these components so that they form a valid shape with the partial assembly. These are challenging problems that require human-level understanding of objects, and thus we propose learning-based approaches for generating these proposals. \rev{Our first challenge is to obtain the training data: pairs of geometries including a partial 3D assembly and potential complementing components. We use the 3D models from ShapeNet~\cite{Chang:2015}, a large-scale online repository, to create these pairs. We first need to decompose these objects into components, which form the basic units of our system. Unlike most previous works, we do not require these decompositions to be consistent across shapes, have explicit correspondences, or have labels. Similar to Chaudhuri et al.~\shortcite{Chaudhuri:2011}, we could use existing segmentation algorithms. However, following the observation of Yi et al.~\shortcite{Yi:2017}, we found that most shapes in these repositories are composed of connected components that mostly align with natural part boundaries. Thus, we propose a simple data pre-processing procedure that merges small and repetitive components, and uses the resulting larger parts. While we could train directly on these parts by picking a subset of components and trying to predict the rest, we found that it is very unintuitive to predict a part that is not attached to the current assembly, and thus use a proximity-based graph of the processed components to avoid training on disconnected examples.
We describe this step in more detail in Section~\ref{sec:data_preprocessing}.} We use this data to train a neural network for selecting complementary components, and use point clouds to represent shape geometry~\cite{Qi:2017}. To train our network we pick a random connected subgraph of components as input and use all remaining components adjacent to the subgraph as training examples. For example, given a single chair seat, a back, a leg, or arm-rests are all correct suggestions. Furthermore, in practice any of these parts in a style compatible with the seat can be a valid retrieval. This means that the mapping from our inputs $X$ to the outputs $Y$ is not a function since it has multiple output values, and thus cannot be modeled with a simple regression. In this work we address two fundamental challenges associated with the retrieval problem: how to model ambiguity in retrievals and how to index parts. To address the first challenge we propose a \emph{retrieval network} architecture that produces a conditional probability distribution $P(Y|X)$, modeled as a Gaussian mixture, for the output. Our network is designed based on the Mixture Density Network \cite{Bishop:1994}. This method enables us to retrieve a diverse set of plausible results, as detailed in Section~\ref{sec:retrieval_network}. To address the second challenge we learn a low-dimensional embedding of all parts to encode the retrieved result $Y$. Then, proposing new components corresponds to sampling a few coordinates in this embedding space according to $P(Y|X)$. While one could use a fixed embedding space (e.g., based on shape descriptors), we learn this embedding by training an \emph{embedding network} that aims to embed compatible complementary parts that share functional and stylistic features nearby (Section~\ref{sec:embedding_network}). We use a form of contrastive loss to jointly train the retrieval and embedding networks (Section~\ref{sec:joint_training}). Finally, we address the challenge of placing the retrieved part in the context of the partial query by training a regression \emph{placement network} that uses both the partial object and a complementary component as inputs and the true position of the component as a training example (Section~\ref{sec:placement_network}). During incremental assembly design, we first run our retrieval network to obtain a set of high-probability components, and then run the placement network on each component to generate a gallery of potential assembly candidates placed with respect to the input object (see Figure~\ref{fig:overview}). \section{Related Work} \label{sec:related_work} We review related work on assembly-based modeling and recent uses of neural networks for geometric modeling. {\noindent \bfseries 3D modeling by assembly.} Funkhouser et al.~\shortcite{Funkhouser:2004} pioneered the idea of creating 3D models by assembling parts segmented from shapes in a repository. Subsequent interfaces reduce the amount of tedious manual segmentation by using a heavily curated repository of objects pre-segmented into labeled parts~\cite{Chaudhuri:2011,Kalogerakis:2012}. These works proposed probabilistic graphical models to reason about which parts can complement one another. Part assemblies can also be used to create plausible complete objects from partial point clouds. For example, Shen et al.~\shortcite{Shen:2012} detect and fill missing components by aligning the input to 3D models in the database.
Sung et al.~\shortcite{Sung:2015} fit structure templates to partial scan data to leverage both symmetry and part retrieval for completion. These part-based models rely on a database of consistently segmented shapes with part labels, which limits the applicability of these techniques as they incur significant data annotation costs~\cite{yi:2016}. There are two notable exceptions. Jaiswal et al.~\shortcite{Jaiswal:2016} used factor graphs to model pairwise compatibilities when suggesting a new part. Their suggestions are based only on pairwise relationships, rendering this method less suitable for holistic reasoning. Chaudhuri and Koltun~\shortcite{Chaudhuri:2010} proposed a method that retrieves partially similar shapes and detects components that can be added to the existing assembly. They assumed that the coarse shape is mostly complete, so that global shape descriptors can reliably retrieve a structurally similar model, and that part placement will not change significantly from the retrieved model. While these techniques also do not require part labels and consistent segmentations, unlike our approach, they do not learn how to predict parts. There are several issues associated with this. First, the hand-crafted shape descriptors, parameters, and weights that they use in their systems might have to be adapted as one switches to a new dataset. Second, it is challenging for these systems to learn what a complete target shape in a particular category looks like. In contrast, our method uses neural networks to learn an appropriate shape representation to map a partial assembly to complementary parts and their respective positions. It does not require manual parameter tuning and can easily be applied to a wide range of shape categories. \vspace{0.5cm} {\noindent \bfseries Neural networks for 3D modeling.} Several recent techniques use neural networks for modeling 3D shapes. A direct extension of image synthesis is 3D volume synthesis, where previous work explored synthesizing volumes from depth~\cite{Wu:2015}, images~\cite{Choy:2016,Grant:2016}, or both~\cite{Tulsiani:2017}. Other output 3D representations include skeletons~\cite{Wu:2016c}, graph-based structures~\cite{Kong:2017}, and point clouds~\cite{Fan:2017}. In this work we demonstrate that neural networks can also be used for incremental interactive shape assembly from parts. Since our geometry representation focuses on retrieving appropriate components from the repository instead of synthesizing geometry from scratch, we are able to create high-fidelity 3D meshes. Since our assembly process relies on training a component retrieval network, our method is also related to learning shape embeddings. Previous techniques learned embeddings for different purposes: \cite{Girdhar:2016} for reconstructing 3D from 2D, \cite{Sharma:2016} for denoising, \cite{Wu:2016} for synthesizing shapes, and \cite{Li:2017} for detecting hierarchical part structures. We introduce a different embedding designed specifically for our retrieval problem. Our approach jointly learns to embed complementing components that occur in similar contexts nearby, and learns to map partial objects to their complements.
\section{Introduction} Let $P_0$ and $P_1$ be two distinct points in $\mathds R\times\mathds R_{\geq0}$ and consider for $\alpha\geq0$ the variational problem \[ \int y^\alpha\ d\mathscr{H}^1(x,y) \ \to \ \min \] within the class \[ \mathscr{C}\coloneqq\{\, \mathfrak{K}\colon [0,1]\to\mathds R\times\mathds R_{\geq0} \ \text{ Lipschitz s.t. } \ \mathfrak{K}(0)=P_0, \mathfrak{K}(1)=P_1 \,\}. \] Hence, with $\alpha=0$ we are looking for the shortest curve joining $P_0$ and $P_1$, with $\alpha=\frac12$ we obtain a parametric version of the brachistochrone problem, and the case $\alpha=1$ leads to rotationally symmetric minimal surfaces in $\mathds R^3$. On the other hand, the variational integral with $\alpha=1$ appears when considering the potential energy of heavy chains. Of course, the shortest path between $P_0$ and $P_1$ is a line, and the minimizing curve in the case $\alpha=\frac12$ was named brachistochrone. However, the variational problem with $\alpha=1$ may possess two distinct minimizers, namely a catenary and a Goldschmidt curve, which consists of three straight lines, cf. \cite[ch. 8 sec. 4.3]{GH2}. In order to prove the minimality of the above-mentioned curves it is sufficient to embed the corresponding curve into a field of extremals\footnote{An argument which goes back to \textsc{Weierstrass}.}, i.e. into a foliation of extremal curves, cf. \cite[ch. 6 sec. 2.3]{GH1}. In fact, this can be directly justified by the divergence theorem. For this purpose let us look at the vector field \[ \xi(x,y)\coloneqq y^\alpha\cdot\nu(x,y), \] where $\nu(x,y)$ is the normal field orienting the curves of the foliation. Since all these curves are extremals, the vector field $\xi$ is divergence-free. The conclusion then follows by applying the divergence theorem to the vector field $\xi$ on the open set which is bounded by a critical curve and a comparison curve. In the setting of geometric measure theory, the critical curve is said to be calibrated by $\xi$, and the vector field $\xi$ is called a \textit{calibration}.\footnote{Such a method of conclusion is applicable even in a more general context and is well known as Federer's differential form argument, cf. \cite[5.4.19]{Federer}.} In this paper we consider the higher-dimensional variational problem and prove the minimizing property of special hypercones. To this end we will construct suitable foliations. The crux hereby is to find an auxiliary function whose level sets are extremals. First, we will weaken our considerations and look at ``inner'' and ``outer'' variations separately as in \cite{dPP}. This gives simplified proofs and yields sub-solutions and sub-calibrations. The advantage of this weakened ansatz is that we can obtain specific auxiliary functions. Moreover, we will show that a careful analysis of extremals as in \cite{Davini} provides better results for our variational problem but loses the concrete representation of an auxiliary function. \subsection{The main result} Let $m\in\{\,2,3,\dots\,\}$ and let $\EuScript{M}$ be an oriented Lipschitz-hypersurface in $\mathds R^m\times\mathds R_{\geq 0}$. Its \emph{$\alpha$-energy} is given by \begin{equation}\label{eq:Lew:varprob} \EuScript{E}_\alpha(\EuScript{M})\coloneqq \int_\EuScript{M} y^\alpha d\mathscr{H}^m(z), \end{equation} where we use the notation $z\coloneqq(x,y)\in\mathds R^m\times\mathds R_{\geq0}$ and denote by $\mathscr{H}^m$ the $m$-dimensional Hausdorff measure.
We show \begin{theorem}\label{Satz:Lew:1} There exists an algebraic number $\alpha_m >\frac2m$ such that the cone \[ \mathcal{C}^\alpha_m\coloneqq \left\{0\leq y\leq \sqrt{\frac{\alpha}{m-1}}\cdot |x|\right\}, \qquad \text{with arbitrary $\alpha\geq \alpha_m$,} \] is a local $\alpha$-perimeter minimizer in $\mathds R^m\times\mathds R_{\geq0}$. \end{theorem} \begin{remark} For $\alpha$ an integer, our result is equivalent to the area-minimizing property of the corresponding rotated cones in $\mathds R^{m+\alpha+1}$. Indeed, with our lower bounds presented in remark~\ref{rem:bounds} we recover the area-minimizing property of all Lawson's cones, i.e. of the cones \[ C_{k,h}\coloneqq\{\,(x,y)\in\mathds R^k\times\mathds R^h\mid(h-1)|x|^2=(k-1)|y|^2\,\} \] with \ $k,h\geq2$ and $k+h\geq9$ \ or \ $(k,h)\in\{(3,5),(5,3),(4,4)\}$, \ cf. \cite{BdGG, Lawson,Simoes}, where $k$ and $h$ play the roles of $m$ and $\alpha+1$. For further reading on area-minimizing cones, see also \cite{Lawlor} and the references contained therein. \end{remark} \begin{remark} Following minimal surface theory we will introduce the terminology of a \emph{local $\alpha$-perimeter minimizer} in the next section. Alternatively, we could say in theorem \ref{Satz:Lew:1} that the hypercone \[ \mathcal{M}^\alpha_m\coloneqq\partial \mathcal{C}^\alpha_m=\{\sqrt{m-1}\cdot y=\sqrt{\alpha}\cdot |x|\}, \qquad \text{with arbitrary $\alpha\geq \alpha_m$,} \] is $\alpha$-minimizing in $\mathds R^m\times\mathds R_{\geq0}$, where the boundary of $\mathcal{C}^\alpha_m$ is taken with respect to the induced topology. \end{remark} \begin{remark} In our proof, we will specify polynomials $\text{\boldm{$p_{m}$}}$ which characterize the corresponding $\alpha_m$ as the unique positive root. Moreover, we show $\alpha_m<\frac{12}{m}$, thus $\alpha_m\to 0$ as $m\to\infty$. \end{remark} \begin{remark}\label{rem:bounds} First (integer) bounds can be found in \cite{Dierkes:erstErg}, namely \[ \alpha_2 = 11, ~ \alpha_3=6, ~ \alpha_4=\alpha_5=\alpha_6=3, ~ \alpha_7=\dots=\alpha_{11}=2, ~ \alpha_m=1 \text{ for $m\geq12$}. \] Shortly thereafter, they were corrected in \cite{Dierkes:verbessErg} to \[ \alpha_2=6, \quad \alpha_3=4, \quad \alpha_4=3, \quad \alpha_5=\alpha_6=2, \quad \alpha_m=1 \quad \text{for ~ $m\geq7$}. \] Our investigations show that they can be improved to \begin{center} \begin{tabular}{r@{\ $\approx$\ }r@{.}l} $\alpha_2$ & 5 & 881525129\\ $\alpha_3$ & 3 & 958758640\\ $\alpha_4$ & 2 & 829350458\\ $\alpha_5$ & 1 & 969224627\\ $\alpha_6$ & 1 & 352500103 \end{tabular}\quad \begin{tabular}{r@{\ $\approx$\ }r@{.}l} $\alpha_7$ & 0 & 963594772\\ $\alpha_8$ & 0 & 728989161\\ $\alpha_9$ & 0 & 581153278\\ $\alpha_{10}$ & 0 & 481712568\\ $\alpha_{11}$ & 0 & 410855526 \end{tabular}\quad \begin{tabular}{r@{\ $\approx$\ }r@{.}l} $\alpha_{12}$ & 0 & 357996307\\ $\alpha_{13}$ & 0 & 317117533\\ \multicolumn{3}{c}{\ldots}\\ $\alpha_{2017}$ & 0 & 001377480\\ \multicolumn{3}{c}{\ldots} \end{tabular} \end{center} \end{remark} \begin{remark} For all $m=2,3,\dots$ we have $m+\alpha_m\geq4+\sqrt{8}$, cf. remark \ref{Lew:Bem:Koeff}, so direct calculations yield that all hypercones $\mathcal{M}^\alpha_m$, with $\alpha\geq \alpha_m$, are (of course) $\EuScript{E}_\alpha$-stable, see also \cite[p. 168]{DHT}. \end{remark} \begin{remark} Although $\mathcal{M}^5_2$ is $\EuScript{E}_5$-stable, the corresponding cone $\mathcal{C}^5_2$ is not a (local) $5$-perimeter minimizer in $\mathds R^2\times\mathds R_{\geq0}$.
Similarly, the hypercone $\mathcal{M}^1_6$ is $\EuScript{E}_1$-stable, but the cone $\mathcal{C}^1_6$ does not minimize the $1$-perimeter in $\mathds R^6\times\mathds R_{\geq0}$, cf. \cite{Dierkes:verbessErg}. Hence, the optimality question of our $\alpha_m$'s still remains open. \end{remark} \section{Notations and preliminary results} Let $\Omega\subseteq \mathds R^m\times\mathds R_{\geq0}$ be open (with respect to the induced topology) and let $\alpha>0$. We say that $f\in BV^\alpha(\Omega)$\ if $f\in L_1(\Omega)$ and the quantity \[ \int_\Omega y^\alpha|Df|\coloneqq\sup\left\{\int_\Omega f(z)\operatorname{div}(\psi(z))dz : \psi\in C^1_c(\Omega,\mathds R^{m+1}), |\psi(z)|\leq y^\alpha\right\} \] is finite. For a Lebesgue measurable set $E\subseteq\mathds R^m\times\mathds R_{\geq0}$ we call \[ \EuScript{P}_\alpha(E;\Omega)\coloneqq \int_\Omega y^\alpha|D\chi_E| \] the \emph{$\alpha$-perimeter of $E$ in $\Omega$}. Furthermore, we call $E$ an \emph{$\alpha$-Caccioppoli set in $\Omega$} if $E$ has a locally finite $\alpha$-perimeter in $\Omega$, i.e. $\chi_E\in BV^\alpha_{loc}(\Omega)$. \begin{example} By the divergence theorem, if $E\subseteq\mathds R^m\times\mathds R_{\geq0}$ is an open set with regular boundary, then \[ \EuScript{P}_\alpha(E;\Omega) = \EuScript{E}_{\alpha}(\partial E\cap\Omega) \] for all open sets $\Omega$. \end{example} \begin{remark} Of course, several properties of the $\alpha$-perimeter can be directly transferred from the known properties of the perimeter, cf. \cite{Giusti,Maggi}. \end{remark} \begin{remark} Note that there are $\alpha$-Caccioppoli sets which are not Caccioppoli, i.e. do not possess a locally finite perimeter: In an arbitrary neighborhood of the origin consider the set \[ A\coloneqq\bigcup_{n=0}^\infty A_n, \] where $A_n$ is a triangle with vertices \[\textstyle \left(\frac{1}{2^{n+1}},0\right), ~ \left(\frac{1}{2^{n}},0\right) ~ \text{and} ~ \left(\frac{3}{2^{n+2}},\sqrt{\frac{1}{4(n+1)^2}-\frac{1}{2^{2n+4}}}\right). \] Hereby, the $A_n$ are chosen in such a way that \[ \left|\partial A_n\cap\big(\mathds R\times\mathds R_{>0}\big)\right| = \frac{1}{n+1}. \] On the other hand, the $\alpha$-perimeter of $A$ is dominated by the convergent series \[ \sum_{n=0}^\infty \frac{1}{(\alpha+1)(n+1)}\left(\frac{1}{4(n+1)^2}-\frac{1}{2^{2n+4}}\right)^{ \Large \nicefrac{\alpha}{2}}. \] \end{remark} \begin{definition} Let $E$ be an $\alpha$-Caccioppoli set in $\Omega$. We say that $E$ is a \emph{local $\alpha$-perimeter minimizer in $\Omega$} if in all bounded open sets $B\subseteq\Omega$ we have \[ \EuScript{P}_\alpha(E;B)\leq\EuScript{P}_\alpha(F;B) \quad \text{for all $F$ such that $F\bigtriangleup E \subset\subset B$.} \] \end{definition} \subsection{Under weakened conditions} The following definitions and results are analogous to the observations in \cite[sec. 1]{dPP}. We only prove one proposition, which was not used in \cite{dPP}. \begin{definition} Let $E$ be an $\alpha$-Caccioppoli set in $\Omega$. We say that $E$ is a \emph{local $\alpha$-perimeter sub-minimizer in $\Omega$} if in all bounded open sets $B\subseteq\Omega$ we have \[ \EuScript{P}_\alpha(E;B)\leq\EuScript{P}_\alpha(F;B) \quad \text{for all $F\subseteq E$ such that $E\backslash F \subset\subset B$.} \] \end{definition} The connection with minimizers is given by \begin{proposition}\label{prop:Lew:BedMin} $E$ is a local $\alpha$-perimeter minimizer in $\Omega$ if and only if $E$ as well as $\Omega\backslash E$ is a local $\alpha$-perimeter sub-minimizer in $\Omega$. 
\end{proposition} The lower semicontinuity of the $\alpha$-perimeter implies \begin{proposition}\label{prop:Lew:konv} Let $\{E_k\}_{k\in\mathbb{N}}$ and $E$ be $\alpha$-Caccioppoli sets in $\Omega$ with $E_k\subseteq E$ and suppose that $E_k$ locally converge to $E$ in $\Omega$. If all $E_k$'s are local $\alpha$-perimeter sub-minimizers in $\Omega$, then $E$ is a local $\alpha$-perimeter sub-minimizer in $\Omega$ as well. \end{proposition} Furthermore, the existence of a so called sub-calibration ensures the sub-minimality. \begin{definition} Let $E\subseteq\Omega$ be an $\alpha$-Caccioppoli set in $\Omega$ with $\partial E\cap \Omega\in C^2$. We call a vector field $\xi\in C^1(\Omega,\mathds R^{m+1})$ an \emph{$\alpha$-sub-calibration of $E$ in $\Omega$} if it fulfills \begin{enumerate}[\qquad (i)] \item $|\xi(z)|\leq y^\alpha$\neghphantom{$|\xi(z)|\leq y^\alpha$}\hphantom{ $\xi(z)=y^\alpha\cdot \nu_E(z)$} ~ for all $z\in \Omega$, \item $\xi(z)=y^\alpha\cdot \nu_E(z)$ ~ for all $z\in\partial E\cap \Omega$, \item $\operatorname{div} \xi(z)\leq 0$\neghphantom{$\operatorname{div} \xi(z)\leq 0$}\hphantom{$\xi(z)=y^\alpha\cdot \nu_E(z)$}~ for all $z\in\Omega$, \end{enumerate} where $\nu_E$ denotes the exterior unit normal vector field on $\partial E$.\footnote{Note that, in contrast to \cite{Davini}, our vector field has been weighted.} \end{definition} \begin{proposition}\label{prop:Lew:subKal} If $\xi$ is an $\alpha$-sub-calibration of $E$ in an open set $\EuScript{O}\subseteq\Omega$, then $E$ is a local $\alpha$-perimeter sub-minimizer in all $\Omega$. \end{proposition} Note that it suffices to find a sub-calibration on a subset of $\Omega$ which contains $E$ since we only deal with inner deformations. Finally, we add \begin{proposition}\label{prop:Lew:ganz} If the cone $\mathcal{C}^\alpha_m$ is a local $\alpha$-perimeter sub-minimizer in $\mathds R^m\times\mathds R_{>0}\backslash\{x=0\}$, then $\mathcal{C}^\alpha_m$ is also a local $\alpha$-perimeter sub-minimizer in the whole $\mathds R^m\times\mathds R_{\geq0}$. \end{proposition} \begin{proof} Firstly, we have for a bounded open set $\widetilde{B}\subset\mathds R^m\times\mathds R_{\geq0}$: \[ \EuScript{P}_\alpha(\mathcal{C}^\alpha_m;\widetilde{B})\leq \EuScript{P}_\alpha(F;\widetilde{B}) \] for all $F\subseteq\mathcal{C}^\alpha_m$ such that $\mathcal{C}^\alpha_m\backslash F \ \subset\subset \ \widetilde{B}\backslash\{\,x=0\,\vee\,y=0\,\}$. Let now be $\widetilde{F}\subseteq \mathcal{C}^\alpha_m$ with $\mathcal{C}^\alpha_m\backslash\widetilde{F}\subset\subset\widetilde{B}$. For $\varepsilon>0$ we consider the set \[ \widetilde{F}_\varepsilon\coloneqq \widetilde{F}\cup \big( \mathcal{C}^\alpha_m \cap \{\,|x|<\varepsilon\,\ \vee\,y<\varepsilon\,\} \big). \] Hence, \[ \mathcal{C}^\alpha_m\backslash\widetilde{F}_\varepsilon \ \subset\subset \ \widetilde{B}\backslash\{\,x=0\,\vee\,y=0\,\}, \] thus with the preliminary observation we have \begin{align*} \EuScript{P}_\alpha(\mathcal{C}^\alpha_m;\widetilde{B})&\leq\EuScript{P}_\alpha(\widetilde{F}_\varepsilon;\widetilde{B}) \\ &\leq \EuScript{P}_\alpha(\widetilde{F};\widetilde{B}) + c_1(m,\alpha,\widetilde{B})\cdot\{\varepsilon^{m+\alpha}+\varepsilon^\alpha+\varepsilon^{m-1}\} \\ &\xrightarrow{\varepsilon\searrow0}\EuScript{P}_\alpha(\widetilde{F};\widetilde{B}). \end{align*} \end{proof} \section{First proof of theorem \ref{Satz:Lew:1}} Arguing in this section as in \cite{dPP} we give a first proof of theorem \ref{Satz:Lew:1}. 
Unfortunately, this does not lead to our best bounds, but gives the $\alpha_m$'s as constructible numbers. This study is based on the analysis of the cubic polynomial \[ \mathbf{Q}_{m,\alpha}(t)\coloneqq (m-1)^4t^3-3(m-1)^2\alpha t^2-3(m-1)\alpha^2t+\alpha^4. \] \begin{lemma}\label{Lemma:Lew:Q} For all $\displaystyle\alpha\geq\frac{2m^{\nicefrac{3}{2}}+3m-1}{(m-1)^2}$, we have \[ \mathbf{Q}_{m,\alpha}(t)\geq0 \quad \text{for all $t\geq0$.}\] \end{lemma} \begin{proof} For all admissible $m\in\{\,2,3,\ldots\,\}$ and $\alpha>0$, the polynomial $\mathbf{Q}_{m,\alpha}(-t)$ has one sign change in the sequence of its coefficients \[-(m-1)^4, \quad -3(m-1)^2\alpha, \quad 3(m-1)\alpha^2,\quad \alpha^4.\] Hence, due to Descartes' rule of signs, $\mathbf{Q}_{m,\alpha}$ always has one negative root. On the other hand, $\mathbf{Q}_{m,\alpha}$ has none, a double or two distinct positive roots. The number of real roots of the cubic polynomial $\mathbf{Q}_{m,\alpha}$ is determined by its discriminant \[ \vartheta=-27(m-1)^6\alpha^6\cdot\{(m-1)^2\alpha^2-(6m-2)\alpha+1-4m\}. \] Summarizing, we have: \begin{enumerate}[i)] \item If $\vartheta>0$, then $\mathbf{Q}_{m,\alpha}$ has one negative and two distinct positive roots. \item If $\vartheta=0$, then $\mathbf{Q}_{m,\alpha}$ has one negative and a double positive root. \item If $\vartheta<0$, then $\mathbf{Q}_{m,\alpha}$ has only one negative root. \end{enumerate} The statement of the lemma then follows since $-\vartheta$ has the same sign as the quadratic polynomial \[ \mathbf{q}_m(\alpha)\coloneqq(m-1)^2\alpha^2-(6m-2)\alpha+1-4m, \] whose sole positive root is \[\alpha=\frac{2m^{\nicefrac{3}{2}}+3m-1}{(m-1)^2}.\qedhere\] \end{proof} \begin{proof}[Proof of theorem \ref{Satz:Lew:1} (with concrete bounds)] We consider over $\mathds R^m\times\mathds R_{\geq0}$ the function \[ F_{m,\alpha}(z)\coloneqq \frac14\left\{\alpha^2|x|^4-(m-1)^2 y^4\right\}. \] It is \[ \nabla F_{m,\alpha}(z)= \big(\alpha^2|x|^2x \ \ , \ -(m-1)^2 y^3 \big).\] Moreover, on $\{\,|\nabla F_{m,\alpha}|\neq0\,\}$ we have: \begin{align*} \nabla_{x}\ \frac{|x|^2}{|\nabla F_{m,\alpha}|}&=\frac{2x}{|\nabla F_{m,\alpha}|}- \frac{3\alpha^4|x|^6x}{|\nabla F_{m,\alpha}|^3} \\ \shortintertext{and} \frac{\partial}{\partial y}\ \frac{1}{|\nabla F_{m,\alpha}|}&=-\frac{3(m-1)^4 y^5}{|\nabla F_{m,\alpha}|^3}. \end{align*} Hence, \begin{align*} \operatorname{div}\left(-y^\alpha\vphantom{\frac{1}{1}}\frac{\nabla F_{m,\alpha}}{|\nabla F_{m,\alpha}|}\right)& = -\skalarProd{\nabla_{x} \ }{\ \frac{y^\alpha\alpha^2|x|^2x}{|\nabla F_{m,\alpha}|}}+\frac{\partial}{\partial y}\ \frac{(m-1)^2y^{\alpha+3}}{|\nabla F_{m,\alpha}|} \\[2ex] &\hspace{-5.5em}=-m\frac{y^\alpha\alpha^2|x|^2}{|\nabla F_{m,\alpha}|}- \skalarProd{x}{ \nabla_{x}\ \frac{y^\alpha\alpha^2|x|^2}{|\nabla F_{m,\alpha}|}}\\[0.5ex] &\hspace{-5.5em} \hspace*{1em}+\frac{(m-1)^2(\alpha+3)y^{\alpha+2}}{|\nabla F_{m,\alpha}|} + (m-1)^2y^{\alpha+3}\frac{\partial}{\partial y}\ \frac{1}{|\nabla F_{m,\alpha}|}\\[2ex] &\hspace{-5.5em}= |\nabla F_{m,\alpha}|^{-3}\{-(m-1)\alpha^6y^\alpha|x|^8 - (m-1)^4(m+2)\alpha^2y^{\alpha+6} |x|^2 \\[0.5ex] &\hspace*{2em} + (m-1)^2(\alpha+3)\alpha^4y^{\alpha+2}|x|^6+(m-1)^6 \alpha y^{\alpha+8} \} \\[2ex] &\hspace{-5.5em}= - |\nabla F_{m,\alpha}|^{-3}(m-1)\alpha y^\alpha|x|^{6}\,\mathbf{Q}_{m,\alpha}\left(\frac{y^2}{|x|^2}\right)\{\alpha|x|^2-(m-1)y^2\}. 
\end{align*} For $k\in\mathbb{N}$ consider the sets \[ E_k\coloneqq\left\{\, z\in\mathds R^m\times\mathds R_{\geq0}: F_{m,\alpha}(z)\geq\frac1k \,\right\}\subset\mathcal{C}^\alpha_m. \] They are all $\alpha$-Caccioppoli sets in $\mathds R^m\times\mathds R_{>0}\backslash\{x=0\}$ since $$F_{m,\alpha}\in C^2(\big(\mathds R^m\times\mathds R_{>0}\backslash\{\,z=0\,\}\big)\backslash\mathcal{M}^\alpha_m),$$ whereby $\mathcal{M}^\alpha_m=\partial\mathcal{C}^\alpha_m=\{\,F_{m,\alpha}=0\,\}$. Furthermore, the $E_k$'s locally converge to $\mathcal{C}^\alpha_m=\{\, F_{m,\alpha}\geq0 \,\}$. With lemma \ref{Lemma:Lew:Q} we have \[ \mathbf{Q}_{m,\alpha}\left(\frac{y^2}{|x|^2}\right)\geq0 \quad \text{for all \ $x\neq0$, \ $y\geq0$}, \text{ and for all } \alpha\geq\frac{2m^{\nicefrac32}+3m-1}{(m-1)^2}; \] consequently, due to the above computation of the divergence, the vector field \[ \xi_{+}(z)\coloneqq -y^\alpha\frac{\nabla F_{m,\alpha}(z)}{|\nabla F_{m,\alpha}(z)|} \] is an $\alpha$-sub-calibration for each $E_k$ in $\{0<\sqrt{m-1}y<\sqrt{\alpha}|x|\}$. Hence, propositions \ref{prop:Lew:subKal}, \ref{prop:Lew:konv} and \ref{prop:Lew:ganz} ensure that $\mathcal{C}^\alpha_m$ is a local $\alpha$-perimeter sub-minimizer in the whole $\mathds R^m\times\mathds R_{\geq0}$. In view of the characterization of $\alpha$-perimeter minimizing sets, cf. proposition \ref{prop:Lew:BedMin}, the claim of theorem \ref{Satz:Lew:1} follows for \[ \alpha\geq\frac{2m^{\nicefrac32}+3m-1}{(m-1)^2}, \] after proving the sub-minimality of the complement of $\mathcal{C}^\alpha_m$. We therefore argue as above, considering the sets \[ D_k\coloneqq\left\{\, z\in\mathds R^m\times\mathds R_{\geq0}: F_{m,\alpha}(z)\leq-\frac1k \,\right\} \] and the vector field \[ \xi_{-}(z)\coloneqq y^\alpha\frac{\nabla F_{m,\alpha}(z)}{|\nabla F_{m,\alpha}(z)|} \quad\text{on ~ $\{F_{m,\alpha}<0\}$.}\qedhere \] \end{proof} \begin{remark} All previous computations were carried out by hand. \end{remark} \begin{remark} For $m\geq14$ we have $\frac{2m^{\nicefrac{3}{2}}+3m-1}{(m-1)^2}>\frac{12}{m}$, and $\frac{12}{m}$ is an upper bound for our best $\alpha_m$'s. \end{remark} \begin{remark}\label{Lew:Bed} Improvements of these bounds can be achieved by an alternative auxiliary function. As seen in the proof, such a function $F$ should fulfill the following conditions: \begin{enumerate}[1.] \item $F\in C^2(\big(\mathds R^m\times\mathds R_{>0}\backslash\{\,x=0\,\}\big)\backslash\mathcal{M}^\alpha_m)\cap C^0(\mathds R^m\times\mathds R_{\geq0}) $, \item $\{\,F\geq0\,\}=\mathcal{C}^\alpha_m$, \quad $\{\,F=0\,\}=\partial \mathcal{C}^\alpha_m=\mathcal{M}^\alpha_m$, \item $\displaystyle F\cdot\operatorname{div}\left(-y^\alpha\frac{\nabla F}{|\nabla F|}\right)\leq0$ ~ in $\{\,\nabla F \neq0\,\}$. \end{enumerate} \end{remark} \begin{remark}\label{rem:minimizingArea} In fact, corresponding auxiliary functions can be found in papers concerning the minimizing property of Lawson's cones, namely \begin{itemize} \item in \cite{MM}: $F(x,y)=\big(|x|^2-|y|^2\big)\big(|x|^2+|y|^2\big)$, for $k=h=4$. \item in \cite{CM}: $$F(x,y)=\big((h-1)|x|^2-(k-1)|y|^2\big)\big((5k-h-4)(h-1)|x|^2-(5h-k-4)(k-1)|y|^2\big),$$ for $k+4<5h$ and $(k,h)\neq(3,5)$, and for $h+4<5k$ and $(k,h)\neq(5,3)$. \item in \cite{BM}: $$F(x,y)= \big((h-1)|x|^2-(k-1)|y|^2\big)\cdot\begin{cases}\big((h-1)|x|^2\big)^\beta, \text{ in ``$\{F>0\}$'',}\\\big((k-1)|y|^2\big)^\beta, \text{ in ``$\{F<0\}$'',} \end{cases}$$ where $\beta$ was chosen in such a way that this argumentation was admissible for all Lawson's cones.
\item in \cite{dPP}: $F(x,y)=\frac14\big(|x|^2-|y|^2\big)\big(|x|^2+|y|^2\big)$, for $k=h\geq4$. \end{itemize} Note that \begin{itemize} \item in \cite{CM,BM} computer algebra systems were used to perform the symbolic manipulations. \item the argumentation using the sub-calibration method from \cite{dPP} is applicable to the function \[F(x,y)=\frac{1}{4}\big((h-1)|x|^2-(k-1)|y|^2\big)\big((h-1)|x|^2+(k-1)|y|^2\big) \] and yields the minimality of all Lawson's cones with \begin{align*} (k,h)\notin\{&(2,7),(2,8),(2,9),(2,10),(2,11),(3,5),\\ &(5,3),(7,2),(8,2),(9,2),(10,2),(11,2)\}. \end{align*} However, we have already performed such computations above, and the exceptional cases correspond to the given bounds in lemma \ref{Lemma:Lew:Q} for integer values, where $k$ and $h$ play the roles of $m$ and $\alpha+1$. \end{itemize} \end{remark} \begin{remark}\label{rem:Davini} With the aid of a suitable parametrization \textsc{Davini} detected the existence of an auxiliary function which was applicable to all Lawson's cones. He carried out all his computations by hand, cf. \cite{Davini}. \end{remark} \section{Second proof of theorem \ref{Satz:Lew:1} with better bounds} Since the hypercones $\mathcal{M}^\alpha_m=\partial\mathcal{C}^\alpha_m$ are invariant under the action of $SO(m)$ on the first $m$ components, we will look for a foliation consisting of extremal hypersurfaces with the same type of symmetry. In fact, recalling \eqref{eq:Lew:varprob}, a dimension reduction and the special parametrization\footnote{Note that the simplification in \cite{Davini} towards the argumentation as in \cite{BdGG} comes from such a parametrization.} \begin{equation}\label{eq:Lew:parametKurve} \begin{cases} |x| &= \mathrm{e}^{v(t)}\cdot\cos t,\\ \hphantom{|}y &=\mathrm{e}^{v(t)}\cdot\sin t, \end{cases} \end{equation} with $v\in C^2(0,\frac\pi2)$ yield the Euler-Lagrange equation \begin{equation}\label{eq:Lew:DiffglZ} \ddot{v}=\Big(1+\dot{v}^2\Big)\cdot\left\{m+\alpha+\frac{m-\alpha-1-(m+\alpha-1)\cos(2t)}{\sin(2t)}\cdot \dot{v}\right\}, \end{equation} cf. \cite{Davini}, where $m$ and $\alpha$ play the roles of $k$ and $h-1$. Hence, with $w\coloneqq \dot{v}$ the initial problem reduces to a question about the behavior of solutions of the following first-order ordinary differential equation: \begin{equation}\label{eq:Lew:DiffglY} \dot{w}=\big(1+w^2\big)\cdot\left\{m+\alpha+\frac{m-\alpha-1-(m+\alpha-1)\cos(2t)}{\sin(2t)}\cdot w\right\}. \end{equation} The existence of a solution follows, for example, from the existence of an upper and a lower solution of \eqref{eq:Lew:DiffglY}. Arguing as \textsc{Davini}, we will directly give an upper solution; the difficult part lies in finding the conditions on $m$ and $\alpha$ under which a suitable lower solution exists. Note that we push the argumentation from \cite{Davini} to the extreme, since $\alpha>0$ is real-valued and not necessarily an integer. Our study is based on the analysis of the quartic polynomial \begin{gather*} \P(\gamma)\coloneqq a_4\gamma^4+a_3\gamma^3+a_2\gamma^2+a_1\gamma+a_0, \shortintertext{with} \begin{aligned} a_4&=(m+\alpha)^3,\\[1ex] a_3&=-(m+\alpha)^2(m+\alpha+1),\\[1ex] a_2&=(m+\alpha)(2m+6\alpha-4m\alpha-1),\\[1ex] a_1&=4m^2\alpha+4\alpha^2m-4\alpha^2-5\alpha-m+1,\\[1ex] a_0&=-8(m-1)\alpha.
\end{aligned} \end{gather*} \begin{lemma}\label{Lemma:Lew:biquadr} There exists an algebraic number $\alpha_m >\frac2m$ such that for all $\alpha\geq\alpha_m$ we can find a value $\text{\boldm{$\gamma_{m,\alpha}$}}\in(0,1-\frac{1}{m+\alpha})$ with \[\P(\text{\boldm{$\gamma_{m,\alpha}$}})\geq0.\] \end{lemma} \begin{proof} Note that \begin{align*} \P(0)&=-8(m-1)\alpha<0\\ \shortintertext{and} \P(1-\textstyle\frac{1}{m+\alpha})&=-\frac{8(m-1)\alpha}{m+\alpha}<0. \end{align*} Further, for all admissible $m\in\{\,2,3,\ldots\,\}$ and $\textstyle\alpha>\frac2m$ the coefficients of $\P$ fulfill: \begin{align*} a_4 &=(m+\alpha)^3>0,\\[1ex] a_3 &=-(m+\alpha)^2(m+\alpha+1)<0,\\[1ex] a_1 &=5\alpha(\textstyle\frac{m^2}{4}-1)+4\alpha^2(m-1)+m(\frac{11}{4}m\alpha-1)+1>0,\\[1ex] a_0 &=-8(m-1)\alpha<0, \end{align*} consequently, $\P(-\gamma)$ has, regardless of the value $a_2$, always one sign change in the sequence of its coefficients $ a_4,\ -a_3,\ a_2,\ -a_1,\ a_0$. Hence, due to Descartes' rule of signs, $\P$ always has one negative root. Moreover, we have \begin{align*} &\P(\gamma+1-\textstyle\frac{1}{m+\alpha})= \widetilde{a}_4\gamma^4+\widetilde{a}_3\gamma^3+\widetilde{a}_2\gamma^2+\widetilde{a}_1\gamma+\widetilde{a}_0,\\[1.5ex] &\quad\begin{aligned} \text{with} \quad &\widetilde{a}_4=(m+\alpha)^3>0,\\[1ex] &\widetilde{a}_3=(m+\alpha)^2(3m+3\alpha-5)>0,\\[1ex] &\widetilde{a}_2=(m+\alpha)\{(m-2)(3m-4)+\textstyle\frac{2\alpha}{m}(m^2-3m+\frac32\alpha m)\}>0,\\[1ex] &\widetilde{a}_0=-\textstyle\frac{8(m-1)\alpha}{m+\alpha}<0, \end{aligned} \end{align*} thus, regardless of the value $\widetilde{a}_1$, we always have one sign change in the sequence of coefficients of the polynomial $\P(\gamma+1-\textstyle\frac{1}{m+\alpha})$. In other words, $\P$ always has one root in $(1-\frac{1}{m+\alpha},\infty)$. All in all, $\P$ has none, a double or two distinct roots in the interval $(0,1-\frac{1}{m+\alpha})$. To determine the nature of roots of the quartic equation \begin{equation}\label{eq:Lew:biquadr} \P(\gamma)=0. \end{equation} we convert it by the change of variable $\gamma=u+\frac{m+\alpha+1}{4(m+\alpha)}$ to the depressed quartic \begin{equation}\label{eq:Lew:reduziert} u^4+pu^2+qu+r=0, \tag{\ref{eq:Lew:biquadr}*} \end{equation} with coefficients \begin{align*} p &= \textstyle -\frac{1}{8(m+\alpha)^2}\{3m^2-10m+11+3\alpha^2+2(19m-21)\alpha\}<0 ,\\[1.7ex] q &= \textstyle -\frac{1}{8(m+\alpha)^3}\{ \alpha^3+\alpha^2(11-13m)-\alpha(m-1)(13m+23)+(m-3)(m-1)^2\}, \\[1.7ex] r &= \textstyle -\frac{1}{256(m+\alpha)^4} \begin{aligned}[t]\{3 \alpha ^4+172 \alpha ^3-1630 \alpha ^2+204 \alpha +3 m^4-180 \alpha m^3-20 m^3-366\alpha ^2 m^2 \\[1ex]+1796 \alpha m^2+34 m^2-180 \alpha ^3 m+1988 \alpha ^2 m-1788 \alpha m+12 m-45 \}, \end{aligned} \end{align*} and consider its resolvent cubic, namely \begin{equation}\label{eq:Lew:kubRes} \zeta^3+2p\zeta^2+(p^2-4r)\zeta-q^2=0 \tag{\ref{eq:Lew:reduziert}*}. \end{equation} We have $p<0$ and $p^2-4r>0$ as \begin{align*} 16(m+\alpha)^4(p^2-4r)=&3 \alpha ^4+4 (3m-5)\alpha ^3+\left(274 m^2-316 m+50\right)\alpha ^2 \\[1ex] & \quad+4(m-1) (3 m^2+52m+45)\alpha+(m-1)^2 (3 m^2-14m+19). \end{align*} Consequently, \eqref{eq:Lew:kubRes} has no negative roots, since there is no sign change in the sequence of the coefficients $-1,\ 2p,\ 4r-p^2,\ -q^2$. On the other hand, \eqref{eq:Lew:kubRes} has one or three positive roots depending on the sign of its discriminant \[ \theta=4p^2(p^2-4r)^2-4(p^2-4r)^3-36p(p^2-4r)q^2+32p^3q^2-27q^4. 
\] In view of the foregoing, it follows: \begin{enumerate}[i)] \item If $\theta>0$, then $\P$ has two distinct roots in $(0,1-\frac{1}{m+\alpha})$. \item If $\theta=0$, then $\P$ has one double root in $(0,1-\frac{1}{m+\alpha})$. \item If $\theta<0$, then $\P$ has no roots in $(0,1-\frac{1}{m+\alpha})$. \end{enumerate} So, the statement of the lemma follows for such values of $m$ and $\alpha$ for which $\theta=\theta_m(\alpha)\geq0$. We have\label{Lew:Polynom}: \begin{align*} \frac{(m+\alpha)^{12}}{16\alpha(m-1)}\cdot\theta_m(\alpha)=&\\ &\hspace{-5.7em} 16(m-1)^2\alpha^8 \\[1ex] &\hspace{-5.7em} - 4(m-1)(8m^2+3)\alpha^7 \\[1ex] &\hspace{-5.7em} - (16 m^4-256 m^3+584 m^2-496 m+153)\alpha^6 \\[1ex] &\hspace{-5.7em} + 2 (32 m^5-224 m^4+1238 m^3-2738 m^2+2545 m-852)\alpha^5 \\[1ex] &\hspace{-5.7em} - (m-1) (16 m^5+48 m^4-1712 m^3+6672 m^2-4321 m-641)\alpha^4\\[1ex] &\hspace{-5.7em} - 2 (16 m^7-208 m^6+250 m^5+2302 m^4-3214 m^3-588 m^2+1566 m-123)\alpha^3\\[1ex] &\hspace{-5.7em} + (16 m^8-192 m^7+984 m^6-2864 m^5+1001 m^4+4184 m^3-3870 m^2+794 m-52)\alpha^2\\[1ex] &\hspace{-5.7em} - 2 (m-1) (22 m^6-148 m^5+363 m^4-381 m^3+185 m^2-60 m+2)\alpha\\[1ex] &\hspace{-5.7em} - (m-2)^3 (m-1)^2 m \eqqcolon \text{\boldm{$p_{m}$}}(\alpha). \end{align*} Note that the polynomial $\text{\boldm{$p_{m}$}}$ has three changes of sign in its sequence of coefficients if $m=2,\ldots,6$ and five changes if $m\geq7$, so that Descartes' rule of signs is not applicable to show that $\text{\boldm{$p_{m}$}}$ has only one positive root. To prove the latter we will now apply Sturm's theorem. For that purpose we consider the canonical Sturm chain \[ \text{\boldm{$p_{m}$}}_{,0}(\alpha), \ \text{\boldm{$p_{m}$}}_{,1}(\alpha), \ldots, \ \text{\boldm{$p_{m}$}}_{,8}(\alpha) \] and count the number of sign changes in these sequences for $\alpha=0$ and $\alpha\to\infty$: $$ \begin{array}{ll"c:c|} & & \alpha= 0 & \alpha\to\infty \\[0.5ex] \noalign{\hrule height 1pt} \rule{0pt}{25pt} \multirow{6}{4ex}{\rotatebox[origin=c]{90}{\text{sign of}}} & \text{\boldm{$p_{m}$}}_{,0}(\alpha) & \text{\renewcommand{\arraystretch}{1.4}\footnotesize$\begin{array}{@{}cl@{}} 0 & m=2 \\ - & m\geq3 \end{array}$} & + \\[3.5ex] \cdashline{2-4} \rule{0pt}{20pt} & \text{\boldm{$p_{m}$}}_{,1}(\alpha) & - & + \\[2ex] \cdashline{2-4} \rule{0pt}{20pt} & \text{\boldm{$p_{m}$}}_{,2}(\alpha) & + & + \\[2ex] \cdashline{2-4} \rule{0pt}{25pt} & \text{\boldm{$p_{m}$}}_{,3}(\alpha) & + & \rule{10pt}{0pt}\text{\renewcommand{\arraystretch}{1.4}\footnotesize$\begin{array}{@{}cl@{}} - & m=2,\ldots,28 \\ + & m\geq29 \end{array}$}\rule{10pt}{0pt} \\[3.5ex] \cdashline{2-4} \rule{0pt}{25pt} & \text{\boldm{$p_{m}$}}_{,4}(\alpha) & \text{\renewcommand{\arraystretch}{1.4}\footnotesize$\begin{array}{@{}cl@{}} - & m=2 \\ + & m\geq3 \end{array}$} & - \\[3.5ex] \cdashline{2-4} \rule{0pt}{35pt} & \text{\boldm{$p_{m}$}}_{,5}(\alpha) & \text{\renewcommand{\arraystretch}{1.3}\footnotesize$\begin{array}{@{}cl@{}} - & m=2,3 \\ + & m=4,5 \\ - & m\geq 6 \end{array}$} & \text{\renewcommand{\arraystretch}{1.3}\footnotesize$\begin{array}{@{}cl@{}} - & m=2,\ldots,4 \\ + & m=5,\ldots,10 \\ - & m\geq11 \end{array}$} \\[5ex] \cdashline{2-4} \rule{0pt}{25pt} & \text{\boldm{$p_{m}$}}_{,6}(\alpha) & \text{\renewcommand{\arraystretch}{1.4}\footnotesize$\begin{array}{@{}cl@{}} + & m=2 \\ - & m\geq3 \end{array}$} & \text{\renewcommand{\arraystretch}{1.4}\footnotesize$\begin{array}{@{}cl@{}} + & m=2,\ldots,22 \\ - & m\geq23 \end{array}$} \\[3.5ex] \cdashline{2-4} \rule{0pt}{25pt} & 
\text{\boldm{$p_{m}$}}_{,7}(\alpha) & \rule{10pt}{0pt}\text{\renewcommand{\arraystretch}{1.4}\footnotesize$\begin{array}{@{}cl@{}} + & m=2,\ldots,6 \\ - & m\geq7 \end{array}$}\rule{10pt}{0pt} & + \\[3.5ex] \cdashline{2-4} \rule{0pt}{20pt} & \text{\boldm{$p_{m}$}}_{,8}(\alpha) & + & + \\[2ex] \noalign{\hrule height 1pt} \multicolumn{2}{c"}{\text{sign changes}} & \rule{0pt}{20pt} 3 & 2 \\[1.5ex]\hline \end{array} $$ Hence, due to Sturm's theorem, the polynomial $\text{\boldm{$p_{m}$}}$ has always $3-2=1$ positive root which we denote by $\alpha_m$. Moreover we have \begin{align*} m^8\cdot \text{\boldm{$p_{m}$}}\left(\frac2m\right)= {} & -25 m^{14}-80 m^{13}+1611 m^{12}-5114 m^{11}-2544 m^{10}-19620 m^9\\[-0.2ex] & +65904 m^8 -135888m^7 +228832 m^6 -215760 m^5+111152 m^4 \\[1.2ex] & -18688 m^3 -7232 m^2-6656 m+4096 <0 \shortintertext{and} m^8\cdot\text{\boldm{$p_{m}$}}\left(\frac{12}{m}\right)= {} & 1775 m^{14}-23560 m^{13}+74111 m^{12}+324326 m^{11}-1065244 m^{10}\\[-0.2ex] &-8010880m^9 +62969424 m^8-283180848 m^7+790863552 m^6\\[1.2ex] & -674075520 m^5-1637169408 m^4+2203656192 m^3+5992869888m^2 \\[1.2ex] &-13329432576 m+6879707136 > 0 \quad \text{for all $m\geq2$}, \end{align*} thus, \[ \frac2m<\alpha_m<\frac{12}{m} \ {}.\qedhere \] \end{proof} \begin{remark} The lengthy symbolic manipulations were completed here with the aid of the \emph{Wolfram Language} on a \emph{Raspberry Pi 2, Model B}. The following computations will again be carried out by hand: \end{remark} \begin{proof}[Proof of theorem \ref{Satz:Lew:1}] Denoting the right-hand side of \eqref{eq:Lew:DiffglY} by $H_{m,\alpha}(t,w)$ we see that \[ {g_{m,\alpha}}(t)\coloneqq(m+\alpha)\cdot\frac{\sin(2t)}{(m+\alpha-1)\cos(2t)-(m-\alpha-1)} \] fulfills \[H_{m,\alpha}(t,{g_{m,\alpha}}(t))=0 \quad \text{on $(0,\t)\cup(\t,\textstyle\frac\pi2)$,}\] where \[ \t\coloneqq\frac12\arccos\left(\frac{m-\alpha-1}{m+\alpha-1}\right) = \arctan\sqrt{\frac{\alpha}{m-1}}. \] Since ${g_{m,\alpha}}'(t)\geq0$, the function ${g_{m,\alpha}}$ is an upper solution of \eqref{eq:Lew:DiffglY}. As we are interested in a solution of \eqref{eq:Lew:DiffglY}, which has the same growth properties as ${g_{m,\alpha}}$, it is natural to ask for a lower solution of the form $\gamma\cdot{g_{m,\alpha}}$ with $ \gamma\in(0,1)$, i.e., we should have \begin{equation}\label{eq:Lew:Unterfkt} \gamma\cdot {g_{m,\alpha}}'(t) \leq H_{m,\alpha}(t,\gamma\cdot {g_{m,\alpha}}(t)) \qquad \text{for all $t\in (0,\t)\cup(\t,\textstyle\frac\pi2)$.} \end{equation} For $t\neq\t$ this is equivalent to \begin{align}\label{eq:Lew:Unterfkt**} &\qquad a\cdot\cos^2(2t)-2b\cdot\cos(2t)+c\geq0,\tag{\ref{eq:Lew:Unterfkt}*} \shortintertext{with} a&= (1-\gamma)\big((m+\alpha-1)^2-\gamma^2(m+\alpha)^2 \big),\notag\\[1ex] b&= (m-\alpha-1)\big(m+\alpha-1 -\gamma(m+\alpha) \big),\notag\\[1ex] c&= (1-\gamma)\gamma^2(m+\alpha)^2-2\gamma(m+\alpha-1)+(1-\gamma)(m-\alpha-1)^2.\notag \end{align} Note that \eqref{eq:Lew:Unterfkt**} is valid on $(0,\frac\pi2)$ as long as $\gamma\in(0,1-\frac{1}{m+\alpha})$. The latter is equivalent to $a>0$. 
Hence, the left-hand side of \eqref{eq:Lew:Unterfkt**} is bounded below by $$ c-\frac{b^2}{a}.$$ In other words, to find an adequate lower solution, it suffices to find conditions on $m$ and $\alpha$ under which a $\gamma\in(0,1-\frac{1}{m+\alpha})$ exists with \begin{align*} c-\frac{b^2}{a}&\geq0 \\[2ex] \hspace{-8em}\underset{m\geq2, \ \alpha>\frac2m}{\overset{\gamma\in(0,1)}{\Longleftrightarrow}}\quad \P(\gamma) &\geq0, \end{align*} and lemma \ref{Lemma:Lew:biquadr} yields the desired conclusion. Consequently, we obtain for $\text{\boldm{$\gamma_{m,\alpha}$}}$: \begin{equation*} \text{\boldm{$\gamma_{m,\alpha}$}}\cdot {g_{m,\alpha}}'(t) \leq H_{m,\alpha}(t,\text{\boldm{$\gamma_{m,\alpha}$}}\cdot {g_{m,\alpha}}(t)) \qquad \text{on $(0,\t)\cup(\t,\textstyle\frac\pi2)$,} \end{equation*} i.e., the function $\text{\boldm{$\gamma_{m,\alpha}$}}\cdot {g_{m,\alpha}}$ is a lower solution of \eqref{eq:Lew:DiffglY}, so that we can proceed as in \cite{Davini}: by classical results from the theory of ordinary differential equations, there exists a $C^1$-solution $\text{\boldm{$w_{m,\alpha}$}}$ of \eqref{eq:Lew:DiffglY} on $(0,\t)\cup(\t,\textstyle\frac\pi2)$. Moreover, $\text{\boldm{$w_{m,\alpha}$}}$ satisfies \begin{align*} &0 <\text{\boldm{$\gamma_{m,\alpha}$}}\cdot {g_{m,\alpha}}(t)\leq \text{\boldm{$w_{m,\alpha}$}}(t)\leq {g_{m,\alpha}}(t) \quad \text{on $(0,\t)$} \shortintertext{and} \quad & 0> \text{\boldm{$\gamma_{m,\alpha}$}}\cdot {g_{m,\alpha}}(t)\geq \text{\boldm{$w_{m,\alpha}$}}(t)\geq {g_{m,\alpha}}(t) \quad \text{on $(\t,\textstyle\frac\pi2)$,} \end{align*} as well as \begin{gather*} \lim_{t\nearrow\t} \text{\boldm{$w_{m,\alpha}$}}(t) = +\infty, \quad \lim_{t\searrow\t} \text{\boldm{$w_{m,\alpha}$}}(t) = -\infty,\\[2ex] \qquad \lim_{t\searrow0}\text{\boldm{$w_{m,\alpha}$}}(t)=0=\lim_{t\nearrow\frac{\pi}{2}}\text{\boldm{$w_{m,\alpha}$}}(t). \end{gather*} Let us denote by $\v$ the antiderivative of $\text{\boldm{$w_{m,\alpha}$}}$ with $$\lim\limits_{t\searrow0}\v(t)=0 \quad\text{and}\quad \lim\limits_{t\nearrow\frac\pi2}\v(t)=0.$$ Reconstructing the auxiliary function from its level curves, which are parametrized by \begin{equation*} \begin{cases} |x| &= \lambda\cdot\mathrm{e}^{\v(t)}\cdot\cos t,\\ \hphantom{|}y &=\lambda\cdot\mathrm{e}^{\v(t)}\cdot\sin t, \end{cases} \end{equation*} with $\lambda>0$ and $t\in(0,\t)\cup(\t,\frac\pi2)$, we obtain \begin{equation*} \text{\boldm{$F_{m,\alpha}$}}(x,y)\coloneqq\begin{cases} &\sqrt{|x|^2 + y^2}\cdot\mathrm{e}^{-\v(\arctan\frac{y}{|x|})}, \quad 0<\arctan\frac{y}{|x|}<\t, \\[2ex] -\hspace*{-3ex}&\sqrt{|x|^2 + y^2}\cdot\mathrm{e}^{-\v(\arctan\frac{y}{|x|})}, \quad \t<\arctan\frac{y}{|x|}<\frac{\pi}{2}. \end{cases} \end{equation*} Note that, since $\v$ satisfies \eqref{eq:Lew:DiffglZ}, we obtain \[ \operatorname{div}\left(-y^\alpha\frac{\nabla \text{\boldm{$F_{m,\alpha}$}}}{|\nabla \text{\boldm{$F_{m,\alpha}$}}|}\right)=0, \quad \text{on $\big(\mathds R^m\times\mathds R_{>0}\backslash\{\,x=0\,\}\big)\backslash\mathcal{M}^\alpha_m$.} \] We then conclude as in our first proof above, because $\text{\boldm{$F_{m,\alpha}$}}$ has the desired properties, cf. remark \ref{Lew:Bed}. \end{proof} \begin{remark}\label{Lew:Bem:Koeff} The crucial ingredient in our argumentation was to find conditions on $m\geq2$ and $\alpha>0$ under which a $\gamma\in(0,1)$ exists such that \eqref{eq:Lew:Unterfkt**} is fulfilled on $(0,\t)\cup(\t,\frac{\pi}{2})$.
For $t\to\t$ the inequality \eqref{eq:Lew:Unterfkt**} is equivalent to \[ (1-\gamma)\gamma\geq\frac{2(m+\alpha-1)}{(m+\alpha)^2}. \] The last inequality has solutions in $(0,1)$ as long as $m+\alpha\geq4+\sqrt{8}$. Hence, $$\max\{\,4-m+\sqrt{8},0\,\}$$ is a lower bound for the optimal $\alpha_m$. Our values already come quite close to these lower bounds; for $m=4$, for instance, we have \[ \alpha_4-\sqrt{8}<\frac{1}{1000}. \] \end{remark} \begin{acknowledgement} This paper is part of my PhD thesis, written under the supervision of Prof. Ulrich \textsc{Dierkes}. \end{acknowledgement}
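The tabulated values of $\alpha_m$ in remark~\ref{rem:bounds} can also be cross-checked numerically. The following SymPy sketch (illustrative code, not part of the original hand and \emph{Wolfram Language} computations; it merely transcribes $p$, $q$, $r$ and the discriminant $\theta$ from the proof of lemma~\ref{Lemma:Lew:biquadr}) locates the unique positive root of $\theta_m$:
\begin{verbatim}
import sympy as sp

a = sp.symbols('alpha', positive=True)
m = 2   # any integer m >= 2

# coefficients of the depressed quartic, transcribed from the proof
p = -(3*m**2 - 10*m + 11 + 3*a**2 + 2*(19*m - 21)*a) / (8*(m + a)**2)
q = -(a**3 + a**2*(11 - 13*m) - a*(m - 1)*(13*m + 23)
      + (m - 3)*(m - 1)**2) / (8*(m + a)**3)
r = -(3*a**4 + 172*a**3 - 1630*a**2 + 204*a + 3*m**4 - 180*a*m**3
      - 20*m**3 - 366*a**2*m**2 + 1796*a*m**2 + 34*m**2 - 180*a**3*m
      + 1988*a**2*m - 1788*a*m + 12*m - 45) / (256*(m + a)**4)

# discriminant of the resolvent cubic (the quantity theta in the proof)
s = p**2 - 4*r
theta = 4*p**2*s**2 - 4*s**3 - 36*p*s*q**2 + 32*p**3*q**2 - 27*q**4

# alpha_m is the unique positive root of theta_m(alpha)
print(sp.nsolve(theta, a, 6))   # for m = 2 this should give ~5.8815
\end{verbatim}
Provided the coefficients are transcribed faithfully, this reproduces, e.g., $\alpha_2\approx5.8815$ from remark~\ref{rem:bounds}.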
\section{Introduction} \label{sec:introduction} \input{Introduction} \section{Structural Aspects} \label{sec:structure} \input{Structure} \section{Charge Order and Ferroelectricity} \label{sec:ChargeOrder} \input{ChargeOrder} \input{Phasediagram} \input{Ferroelectricity_weaklydimerized} \subsection{States nearby charge order} \label{sec:NearChargeOrder} \input{ChargeGlass} \input{DiracElectrons} \input{Electrodynamics_weaklydimerized} \input{Ferroelectricity_dimerized} \section{Mott Metal-Insulator Phase Transition} \label{sec:MottTransition} \input{MottMIT} \input{afmMott} \input{QSLMott} \input{FermiLiquid} \input{Irradiation} \section[Quantum Spin Liquid versus Magnetic Order]{Frustration in Two Dimensions: Quantum Spin Liquid versus Magnetic Order} \label{sec:Frustration} \input{Frustration} \section{Coupling of Quantum Electric and Magnetic Dipoles} \label{sec:quantumdisorder} \input{QuantumElectricDipoles} \input{CAT} \input{DipoleLiquid} \section{Summary and Prospects} \label{sec:summary} \input{Summary} \section{Acknowledgements} Working in the field of low-dimensional organic conductors for decades, we have enjoyed fruitful collaborations and illuminating discussions with numerous colleagues and students whom we all want to thank very much. In this context we would like to mention in particular M. Basleti{\'c}, K. Biljakovi{\'c}, S. Brown, E. Canadell, R.T. Clay, M. \v{C}ulo, V. Dobrosavljevi{\'c}, N. Do\v{s}li{\'c}, N. Drichko, M. Dumm, P. Foury-Leylekian, H. Fukuyama, S. Fratini, A. Girlando, B. Gorshunov, B. Gumhalter, C. Hotta, V. Ilakovac, S. Ishihara, T. Ivek, S. Kaiser, K. Kanoda, R. Kato, B. Korin-Hamzi{\'c}, M. Lang, P. Lazi{\'c}, W. Li, P. Littlewood, S. Mazumdar, J. Merino, O. Milat, J. M{\"u}ller, M. Pinteri{\'c}, J.P. Pouget, B. Powell, A. Pustogow, R. R{\"o}sslhuber, G. Saito, Y. Saito, T. Sasaki, J.A. Schlueter, D. Schweitzer, R. Valent{\'i}, S. Winter, Y. Yoshida. The project was financially supported by the Deutsche Forschungsgemeinschaft (DFG), the Deutscher Akademischer Auslandsdienst (DAAD), the Croatian Ministry of Science and Education and by the Croatian Science Foundation. We acknowledge B. Gumhalter, T. Ivek, M. Pinteri{\'c}, M. Prester, A. Pustogow and W. Strohmaier for their support and assistance during the preparation of this Review. \bibliographystyle{tADP} \subsection{Antiferromagnetic Mott insulator} \label{sec:afmMott} Here the antiferromagnetic Mott insulator \etcl\ certainly is of superior importance because a tiny pressure of 30~MPa suffices for entering the metallic or superconducting phase with the record high $T_{c}\approx 13$~K. At elevated temperatures, the overall charge transport is incoherent; but upon cooling the behavior bifurcates. At ambient pressure, the system becomes a pronounced insulator: a maximum of the transport gap derived from the logarithmic derivative of $\rho(T)$ indicates the so-called quantum Widom line \cite{Pustogow18a,Pinteric15}. This crossover from a more insulating to a more metallic regime is best seen in electrical transport studies using pressure sweeps at fixed temperatures \cite{Furukawa15a}. Furukawa {\it et al.} investigated several organic Mott insulators and could extract extended crossover regimes as plotted in Figure~\ref{fig:MottCriticality}. 
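In practice, the crossover scale is read off from the local Arrhenius slope of the resistivity, i.e., the transport gap mentioned above. A minimal sketch of this step (illustrative Python code with synthetic data, not the procedure of the cited works):
\begin{verbatim}
import numpy as np

def transport_gap(T, rho):
    # Local Arrhenius slope Delta(T) = d(ln rho)/d(1/T); plotted versus
    # temperature, a maximum of Delta(T) marks the quantum Widom line.
    T = np.asarray(T, dtype=float)
    return np.gradient(np.log(rho), 1.0 / T)

# sanity check on purely activated transport, rho = exp(Delta0 / T):
T = np.linspace(20.0, 120.0, 200)            # temperature in K
rho = np.exp(500.0 / T)                      # synthetic resistivity
print(transport_gap(T, rho).mean())          # ~500 K = Delta0
\end{verbatim}
Applied to measured $\rho(T)$ curves at different pressures, the locus of the maxima of $\Delta(T)$ traces out the quantum Widom line in the phase diagram.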
The observed behavior collapses into a genuine phase diagram for all compounds, displayed in \begin{figure} \centering \includegraphics[width=0.9\columnwidth]{Furukawa.pdf} \caption{\label{fig:MottCriticality} Pressure–temperature phase diagram of (a) \etcn, (b) \etcl\ and (c) \dmit\ obtained from resistivity measurements in a gas-pressure cell. QSL stands for quantum spin liquid, SC for superconductor, AFM for antiferromagnet and PM for paramagnetic metal. The red line represents the first-order Mott transition line terminating at a critical endpoint. The open circles indicate the metal–insulator crossover pressure $p_c(T)$. The color represents the magnitude of $\left|\log_{10} \tilde{\rho}\right|$, where $\tilde{\rho}$ is the normalized resistivity. (d)~The scaling plot of the normalized resistivity $\tilde{\rho}(\delta p, T)$ {\it versus} $T/T_0 = T/|c\, \delta p|^{z\nu}$ with the values $z\nu = 0.62$ and $c = 20.9$ for \etcn. (e)~A symmetric quantum critical region can be obtained by renormalizing the axes. The color represents the magnitude of $|\log_{10}\tilde{\rho}|$ for \etcn, where $\tilde{\rho}$ is the normalized resistivity. The gray-colored region is experimentally inaccessible. The bold red line represents the first-order Mott transition line terminating at a critical endpoint. The insulating and metallic states form at $\delta p < 0$ and $\delta p > 0$, respectively. The open circles indicate the characteristic temperatures for the entrance to the low-temperature regimes of the gapped Mott insulator or the Fermi liquid, $T^*$, defined by the value of $T/T_0$ at which $\tilde{\rho}(\delta p,T)$ starts to deviate from the critical behavior $\tilde{\rho} = \exp\left\{\pm (T/T_0)^{-1/z\nu}\right\}$ ($z\nu = 0.62$). Below these circles, the system departs from the quantum critical regime toward the Mott insulator ($\delta p < 0$) or the Fermi liquid ($\delta p > 0$) (data from \cite{Furukawa15a}). (f)~Scheme of how the different critical exponents $\beta$, $\gamma$ and $\delta$ are determined from scans in the phase diagram as a function of temperature and pressure (suggested in \cite{Gati16}).} \end{figure} Figure~\ref{fig:MIT3}, if the temperature $T$ and the electronic correlations $U$ are normalized by the respective bandwidth $W$ \cite{Pustogow18a}. Using advanced temperature-dependent scaling, a mirror-symmetric behavior on the insulating and metallic sides can be obtained, leading to a fan-shaped quantum critical region in the pressure-temperature phase diagram. A material-independent quantum-critical scaling relation was identified, with a clear distinction between a Fermi liquid and a Mott insulator, irrespective of the ground states of the organic compounds \cite{Furukawa15a}. The observed Mott quantum criticality confirms the predictions of Dobrosavljevi{\'c} and collaborators \cite{Terletska11}, who calculated the incoherent charge transport in the high-temperature crossover region. \subsubsection{Mott quantum criticality} \label{sec:Mottquantumcriticality} The critical point $T_{\rm crit} \approx 40$~K has drawn particular interest as it ends the first-order phase boundary, and might act as the starting point for quantum critical behavior.
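The scaling analysis underlying Figure~\ref{fig:MottCriticality}(d) can be sketched in a few lines. The illustrative Python code below (synthetic data, not the published measurements) generates resistivity curves on the insulating branch of the critical form $\tilde{\rho} = \exp\{(T/T_0)^{-1/z\nu}\}$ and then recovers $T_0(\delta p)$ and the exponent $z\nu$ in the way it is done for measured data; the numbers are the \etcn\ values quoted above:
\begin{verbatim}
import numpy as np

z_nu, c = 0.62, 20.9                       # values for kappa-(ET)2Cu2(CN)3
T = np.linspace(2.0, 40.0, 400)            # temperature in K
dps = np.array([-0.5, -1.0, -2.0, -4.0])   # delta_p < 0 (insulating side)

# synthetic curves on the critical form, with T0 hidden in the data
rho = [np.exp((T / (c * abs(dp)) ** z_nu) ** (-1.0 / z_nu)) for dp in dps]

# analysis: T0 is the temperature where log(rho) = 1 (i.e. rho = e),
# since (T/T0)^(-1/z_nu) equals 1 exactly at T = T0
T0 = np.array([np.interp(1.0, np.log(r)[::-1], T[::-1]) for r in rho])

# z*nu is the slope of log T0 versus log |delta_p|
z_nu_fit = np.polyfit(np.log(np.abs(dps)), np.log(T0), 1)[0]
print(f"fitted z*nu = {z_nu_fit:.2f}")     # ~0.62
\end{verbatim}
With $T_0$ in hand, plotting $\tilde{\rho}$ against $T/T_0$ collapses all curves onto the two master branches of panel (d).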
In the seminal work on Cr-substituted V$_2$O$_3$, Limelette {\it et al.} analyzed the critical exponents and scaling functions close to the critical endpoint of the Mott metal-insulator transition [Figure~\ref{fig:MottCriticality}(f)]: \begin{equation} \sigma-\sigma_c \propto (T_{\rm crit} - T)^{\beta} ~,~~~~~ \sigma-\sigma_c \propto (p - p_{\rm crit})^{1/\delta} ~,~~~~ \left({\rm d}\sigma/{\rm d}p\right)|_{p_c(T)} \propto (T - T_{\rm crit})^{-\gamma} \quad , \label{eq:criticalexponents} \end{equation} and found the universal properties of a liquid-gas transition \cite{Limelette03b}; {\it i.e.} $\beta = 0.5$, $\delta \approx 3$, and $\gamma = 1$ as predicted by mean-field theory \cite{Castellani79,Kotliar00}. These critical exponents are believed to be universal, {\it i.e.} independent of the microscopic details of the system \cite{Kadanoff67,StanleyBook}. Along these lines, Kagawa {\it et al.} investigated \etcl\ using a variable gas pressure cell down to $T=33$~K \cite{Kagawa05,Kagawa04b}; the critical exponents extracted ($\beta = 1$, $\delta = 2$, and $\gamma = 1$) did not correspond to any known universality class [Figure~\ref{fig:CriticalPoint}(a)]. The analysis extended up to $|T-T_{\rm crit}|/T_{\rm crit}<0.2$, with significant deviations close to the critical endpoint. Although the Mott transition takes place in the charge sector, it is interesting to consider also the spin degrees of freedom in these strongly correlated electron systems. The spin-lattice relaxation rate $1/T_1$ measured by $^{13}$C-NMR is a probe of spin fluctuations. A clear jump in $1/T_1 T$ is observed at the Mott transition at low temperatures when the system is isothermally tuned across the transition by He-gas pressure \cite{Kagawa09}, as depicted in Figure~\ref{fig:CriticalPoint}(b). The jump vanishes at the critical point and only a smooth crossover remains at higher temperatures. As seen in Figure~\ref{fig:CriticalPoint}(a) and (b), the conductance is enhanced when the Mott transition to the metal is crossed with increasing pressure, while the paramagnetic spin fluctuations are strongly suppressed. The quantity $\left[1/T_1(p)T-1/T_1(p_c)T\right]$ follows a power law $|p-p_c|^{1/\delta}$, again with $\delta=2$. Any deviation from the average value of one electron per site obviously makes charge carriers contribute to the conductivity, but in addition spin fluctuations are suppressed \cite{Sentef11}. The subject remains open for future investigations. \begin{figure} \centering \includegraphics[width=\columnwidth]{CriticalPoint.pdf} \caption{\label{fig:CriticalPoint} The critical behavior is observed in the electrical, magnetic and lattice degrees of freedom. (a)~Pressure-dependent conductance $G_T(p)$ of \etcl\ measured at various temperatures around the critical endpoint (filled circle). The shaded area indicates the conductance jump. The red and green curves represent the critical behavior at $T = T_{\rm crit} \approx 39.7$~K and $T < T_{\rm crit}$, which give the critical exponents $\delta$ and $\beta$, respectively. The hysteresis of the conductance jump (for example, $\sim 0.2$~MPa at $\sim 32$~K) is not appreciable at this scale (reproduced from \cite{Kagawa05} with permission). (b)~$^{13}$C-NMR relaxation rate around the Mott critical endpoint $T_{\rm crit}$. The pressure dependence of $1/T_1 T$ at various fixed temperatures; note the reversed pressure scale compared to panel (a). The gray shaded areas represent the coexistence of insulating and bad metallic phases (taken from \cite{Kagawa09}).
(c)~Relative length changes, $\Delta L/L$, as a function of applied pressure at constant temperatures between 30 and 55~K measured on \etcl\ along the $a$-axis. The data have been offset for clarity. The broken lines close to the data at $T=43$~K, that is, distinctly above $T_{\rm crit} \approx 36.5$~K of the critical endpoint, are guides to the eye and serve to estimate the pressure-induced changes in compressibility. The strong nonlinearities observed here, reflecting nonlinear strain-stress relations, highlight a breakdown of Hooke's law of elasticity (data from \cite{Gati16}).} \end{figure} Commonly neglected by theory, the electronic system couples to the underlying compressible crystal lattice. Lang and collaborators thoroughly measured the temperature and pressure dependence of the thermal expansion in order to study the lattice response near the Mott transition. Fully deuterated $\kappa$-(d8-BE\-DT\--TTF)$_2$\-Cu\-[N\-(CN)$_{2}$]Br falls extremely close to the critical point already at ambient pressure \cite{deSouza07}; it was found that near the critical endpoint the Gr{\"u}neisen scaling breaks down \cite{Nakazawa96,Papanikolaou08,Bartosch10,Zacharias12,deSouza15}. In a next step, \etcl\ was deliberately tuned across the Mott transition using a continuously controlled helium-gas pressure. The relative length change $\Delta L/L$ with pressure exhibits a strong nonlinear variation around $T_{\rm crit}$; here Hooke's law of elasticity breaks down \cite{Gati16}. Figure~\ref{fig:CriticalPoint}(c) displays $\Delta L/L$ as a function of pressure measured within the plane at various temperatures; similar results are recorded perpendicular to the $ac$-plane. At $T=30$~K an abrupt jump is observed, reflecting the first-order character of the phase transition. As the temperature rises, the discontinuity gradually decreases until it becomes a continuous crossover for $T>T_{\rm crit}$. Slightly above the critical endpoint the relative length change is rather nonlinear, which is explained by a critical elasticity resulting from the coupling of the critical electronic background to the lattice; in other words, Hooke's law does not hold in the temperature-pressure regime close to the critical endpoint. The critical exponents [Eq.~(\ref{eq:criticalexponents})] extracted from the experimental data, $\beta= 0.52$, $\delta = 3.2$, and $\gamma = 1.0$, are in good agreement with the values of the mean-field universality class. There are valid arguments that this behavior at the Mott transition holds for all systems that are amenable to pressure tuning \cite{Gati16}. While \etbr\ is well on the metallic side of the phase boundary, the successive substitution of deuterated (d8-BE\-DT\--TTF) molecules effectively increases correlations and eventually shifts the alloy across the Mott transition. Magnetotransport measurements by Sasaki {\it et al.} \cite{Sasaki08b} found a suppression of $T_{c}$ in magnetic fields; for $H > H_{c2}\approx 12$~T the superconducting phase has completely vanished. The critical endpoint $T_{\rm crit}$, however, seems not to be affected. An alternative approach utilizes x-ray irradiation to systematically influence the critical behavior in \etcl\ \cite{Urai19}. The introduced disorder reduces the superconducting transition temperature $T_c$ only slightly but strongly affects the resistivity around the critical endpoint. Pressure sweeps at fixed temperatures down to 15~K yield a broadening of the phase transition and a corresponding shift of $T_{\rm crit}$.
Analyzing their data according to Figure~\ref{fig:MottCriticality}, Urai {\it et al.} found that the exponent $z\nu \approx 0.46$ basically does not change upon irradiation up to 70~h. The drastic suppression of the critical endpoint $T_{\rm crit}^*$ is interpreted as disorder-enhanced critical fluctuations of the metal-insulator transition. They speculate that the Mott quantum critical fluctuations are hidden behind the first-order transition and can be revealed by disorder. However, this scenario implies that disorder renders the system spatially homogeneous for $T > T_{\rm crit}^* \rightarrow 0$, in contrast to the established coexistence regime. As discussed in more detail in Sec.~\ref{sec:randomness}, extended irradiation blurs any clear signatures of a discontinuous metal-insulator transition \cite{Gati18a}; {\it i.e.} above a certain disorder level, the Mott transition becomes a smeared first-order transition with some residual hysteresis. \subsubsection{Coexistence regime} \label{sec:coexistenceregime} The Mott transition is supposed to be of first order below the critical endpoint $T_{\rm crit}$, implying a pressure region in which correlated metal and insulator coexist \cite{Georges96,Vollhardt12,Vollhardt20}. Limelette {\it et al.} measured the in-plane electrical transport of \etcl\ at fixed temperatures below $T_{\rm crit} \approx 40$~K in a gas pressure cell \cite{Limelette03a} and observed a marked hysteresis around the metal-insulator transition that was attributed to spatial inhomogeneities. As depicted in Figure~\ref{fig:PhaseDiagram_k-Cl}(a), the coexistence regime extends over a pressure range of approximately 20~MPa, as determined from the inflection in the conductivity data $\sigma(p)$ upon increasing and decreasing pressure. Below $T_N\approx 25$~K, antiferromagnetic ordering occurs, which was investigated by magnetic resonance spectroscopy and other methods \cite{Yasin11,Lefebvre00}. Extended ac susceptibility measurements, taken as a function of pressure at selected temperatures as well as along cooling and heating curves at fixed pressure values, were used by Lefebvre {\it et al.} to map the coexistence regime between the antiferromagnetic Mott insulator and superconducting states \cite{Lefebvre00}, as plotted in Figure~\ref{fig:PhaseDiagram_k-Cl}(b). Below a characteristic temperature $T^* \approx 20$~K the NMR lines split into two groups corresponding to a metallic (superconducting) and an insulating (antiferromagnetic) phase that spatially coexist. These remarkable findings evidence percolative superconductivity \cite{Kagawa04a,Muller09,Muller17}. The results indicate the absence of \begin{figure}[h] \centering \includegraphics[width=0.8\columnwidth]{PhaseDiagram_k-Cl.pdf} \caption{\label{fig:PhaseDiagram_k-Cl} Pressure-temperature phase diagram of the \etcl\ salt. (a)~Four regions can be identified from transport measurements \cite{Limelette03a}: bad metal, Fermi liquid, semiconductor and insulator, which orders antiferromagnetically below $T_N\approx 25$~K \cite{Lefebvre00,Yasin11}. The spinodal lines define the region of coexistence of insulating and metallic phases, which is indicated by the hatched area. The first-order transition line is identified by the maximum slope d$\sigma$/d$P$ and terminates at the critical endpoint.
(b)~The antiferromagnetic critical line $T_N(P)$ (dark circles) was determined from the NMR relaxation rate, while the $T_c(P)$ line for unconventional superconductivity and the metal-insulator line $T_{\rm MI}(P)$ (open circles) were obtained from the ac susceptibility. The afm/SC boundary (double-dashed line) is determined from the inflection point of $\chi^{\prime}(P)$ and, for 8.5~K, from the sublattice magnetization. This boundary line separates two regions of inhomogeneous phase coexistence (shaded area) (adapted from \cite{Limelette03a,Lefebvre00}).} \end{figure} itinerant antiferromagnetism in \etcl, confirming previous suggestions \cite{Kanoda97c}; the interacting spins are localized on the dimers. This scenario is in contrast to (TMTSF)$_2$PF$_6$, where superconductivity coexists with the itinerant antiferromagnetism of the spin-density-wave phase \cite{Vuletic02}. In the present case superconductivity can be directly stabilized from the antiferromagnetic Mott insulator. These findings were confirmed by ultrasonic velocity and attenuation measurements on \etbr\ \cite{Fournier07}, where the coexistence zone of the antiferromagnetic and superconducting phases was observed deep in the metallic part of the pressure-temperature phase diagram. The system was tuned by varying the cooling cycle, {\it i.e.} fast or slow cooling and low-temperature annealing. The two phases are found to compete, while superconducting fluctuations begin to contribute to the attenuation at 15~K, namely at the onset of magnetic order, well above the superconducting transition temperature $T_{c}=11.9$~K. \begin{figure}[h] \centering \includegraphics[width=0.6\columnwidth]{Scan.pdf} \caption{\label{fig:Scan} (a) Peak frequency contour map of the $\nu_3$ mode ($E\parallel a$-axis) of $\kappa$-[(h8-BEDT-TTF)$_{0.4}$(d8-BEDT-TTF)$_{0.6}$]$_{2}$Cu[N(CN)$_2$]Br at $T=4$~K. Bright orange colors (higher frequency) indicate a metallic nature and dark violet colors (lower frequency) an insulating nature. The metal–insulator phase separation can be observed as a spatial image. (b)~Reflectivity spectra along the arrow A–B in panel (a), taken in steps of $12~\mu$m (reproduced from \cite{Sasaki09}).} \end{figure} In order to demonstrate the spatial phase separation, Kimura and collaborators \cite{Nishi05,Nishi07} studied the properties of partially deuterated \etbr\ using spatially resolved magneto-optical spectroscopy. While they could conclude that the metallic or superconducting phases coexist with the insulating phase, the spatial resolution of approximately 10~$\mu$m constitutes only an upper limit on the domain size. Interestingly, with increasing magnetic field, the insulator-metal phase boundary shifts towards smaller $U/W$, in accord with transport experiments \cite{Sasaki08b}; above $H_{\rm c2}$, however, they suggest an enlargement of the metallic regime. Concomitantly, Sasaki {\it et al.} focused on the vibrational features of \etbr\ to obtain the crucial contrast between metallic and insulating regions when tuning through the phase transition by different amounts of deuteration \cite{Sasaki04a,Sasaki05,Sasaki09}. The fully symmetric $\nu_3(a_g)$ mode around 1300~\cm\ is strongly electron-molecular vibrational (emv) coupled \cite{Maksimuk01,Dressel04a,Girlando11a} and provides a local probe for detecting changes in the electronic state. Figure~\ref{fig:Scan} displays a contour map of metallic and insulating regions having micrometer sizes and irregular shapes.
The boundary between the insulating and metallic regions is sharp within the instrumental resolution, and hence no intermediate state appears at the frontier. This observation indicates a macroscopic phase separation between the metal/superconductor and the Mott insulator. Since the system falls right at the characteristic S-shaped phase boundary, the sample crosses the first-order transition when cooled from room temperature; the phase separation sets in after intersection with the first-order transition line. M{\"u}ller {\it et al.}~\cite{Muller11,Muller12} established fluctuation spectroscopy as a powerful method for investigating the dynamics of correlated charge carriers in the vicinity of the Mott transition in the quasi-two-dimensional charge-transfer salts, looking in particular at $\kappa$-(d8-BEDT-TTF)$_2$Cu[N(CN)$_2$]Br. The observed $1/f$-type fluctuations are quantitatively very well described by a phenomenological model based on the concept of non-exponential kinetics. The main result is a correlation-induced enhancement of the fluctuations accompanied by a substantial shift of spectral weight to low frequencies in the vicinity of the Mott critical endpoint. This sudden slowing down of the electron dynamics is considered a universal feature of metal-insulator transitions. The findings support the idea of electronic phase separation in the critical region of the phase diagram \cite{Muller11,Muller12}. \subsection{Quantum electric dipoles in a quantum spin liquid} \label{sec:cat} A quantum liquid consisting of both electric and magnetic dipoles might be realized in the hydrogen-bonded single-molecular Mott insulator $\kappa$-H$_3$(Cat-EDT-TTF)$_2$, in which Cat-EDT-TTF spin-$\frac{1}{2}$ dimers are arranged on a two-dimensional triangular lattice as depicted in Figure~\ref{fig:CAT-structure} \cite{Isono14,Shimozawa17}. The moderately one-dimensional frustration $t^{\prime} / t \approx 1.25$ is distinct from the value of 0.83 found for the quantum spin liquid candidates \etcn\ and \agcn\ (see Section \ref{sec:propertiesQSL}), but close to that of $\kappa$-(BE\-DT\--TTF)$_2$B(CN)$_4$ with $t^{\prime}/t \approx 1.44$ \cite{Yoshida15}. The latter material, similar to the \cat{} compound, maintains a magnetically disordered Mott insulating state with enhanced quantum fluctuations over a wide temperature range, but in contrast to \cat\ it undergoes a phase transition below 5~K into a spin-gapped phase. The discovery of the purely organic conductor \cat\ by Mori and collaborators \cite{Kamo12,Isono13,Isono14,Ueda14} is of particular interest insofar \begin{figure}[h] \centering \includegraphics[width=0.4\columnwidth]{CAT-structure.pdf} \caption{For \cat\ the molecules are arranged in dimers that constitute an anisotropic triangular lattice within the conduction layer. The inter-dimer transfer integrals $t$ and $t^{\prime}$ are defined along the sides of rhomboids and along one diagonal, respectively (after \cite{Isono14}). } \label{fig:CAT-structure} \end{figure} as the organic molecules are linked by hydrogen bonds as illustrated in Figure~\ref{fig:structure_CAT}.
Figure~\ref{fig:cat1} shows the temperature dependence of the dielectric constant $\varepsilon^{\prime}(T)$, which exhibits a quantum paraelectric behavior as described by the Barrett formula \cite{Barrett52}: \begin{equation} \varepsilon^{\prime}(T) = A + \frac{C}{ (T_1/2)\coth(T_1/2T) - T_C} \quad , \label{eq:Barrettformula} \end{equation} where $C = n\mu^2/k_{B}$ is the Curie constant, $n$ the density of dipoles, $\mu$ the dipole moment, and $k_{B}$ the Boltzmann constant. $T_C=-6.4$~K is the Curie-Weiss temperature of ferroelectric order in the absence of strong quantum fluctuations, while $T_1$ is the characteristic crossover temperature from the classical Curie-Weiss regime to the quantum paraelectric regime. In the limit $T \gg T_1$, Eq.~(\ref{eq:Barrettformula}) reduces to the classical Curie-Weiss law $\varepsilon^{\prime}(T) \approx A + C/(T-T_C)$, whereas for $T \ll T_1$ the permittivity saturates because quantum fluctuations arrest the incipient ferroelectric order. From Figure~\ref{fig:cat1} we see that quantum fluctuations become important below $T_1 \approx 8$~K. The development of strong quantum fluctuations may be associated with enhanced proton fluctuations arising from the zero-point motion of hydrogen atoms, which persist down to low temperatures. Cooperative action between proton and electron degrees of freedom induces intra- or inter-dimer charge fluctuations, and thereby a disordered state of quantum electric dipoles develops at low $T$. Interestingly, the quantum spin liquid state emerges simultaneously with the quantum electric dipolar liquid. Shimozawa {\it et al.} suggested that the localized spins couple with the zero-point motion of the protons. First-principles density functional theory (DFT) calculations confirm this idea \cite{Tsumuraya15}. The potential energy surface for the H atom is very shallow near the minimum points; hence, there is a certain probability that the proton is delocalized between the two oxygen atoms. The overall results thereby suggest that the quantum proton fluctuations give rise to a combined quantum liquid consisting of electric and magnetic dipoles. Notably, the Barrett-like behavior of the dielectric constant $\varepsilon^{\prime}(T)$ is absent in the other organic quantum-spin-liquid candidates, which consist of $\pi$-electron molecular layers separated by anion layers. Deuterated crystals, however, undergo an ordering transition at $T_{\rm CO} =185$~K leading to a charge disproportionation ($+0.94e : +0.06e$) associated with deuterium localization within the Cat-EDT-TTF layers. The resistivity abruptly jumps at that temperature with a hysteresis of 4~K \cite{Ueda14}. The $D_3$-compound exhibits a small and constant permittivity and a non-magnetic ground state below the phase transition at 185~K. In other words, the quantum disorder due to the fluctuating proton bond is crucial for the low-temperature properties, in the spin sector as well as for the electric dipoles. {\it Vice versa}, the arrangement of the charge order and the formation of spin singlets in the charge-rich dimers lead to the non-magnetic ground state of \dcat. As displayed in Figure~\ref{fig:cat3}(b), the heat capacity of the deuterated compound remains at smaller values over the entire temperature range due to the lack of spin contributions. Except for very low $T$, $C_p T^{-1}$ increases linearly with $T^2$, but the Sommerfeld term $\gamma_e$ is much smaller than in the hydrogenated compound. An additional Schottky term points to paramagnetic spins caused by disorder. \begin{figure} \centering \includegraphics[width=0.7\columnwidth]{cat5.pdf} \caption{\label{fig:cat5} $J$ {\it versus} $E$ curves of (a) \cat\ and (b) the deuterated analogue at several temperatures as indicated.
The solid and dashed arrows represent the voltage-increasing and voltage-decreasing processes, respectively. The hysteresis appears only in the deuterated compound but not in the hydrogenated one (after \cite{Ueda19}). } \end{figure} Measuring the current density–electric field characteristics of \dcat{} above and below the charge-ordering transition, Ueda {\it et al.} found a negative differential resistance and hysteresis, which is considered to be induced by the deuterium dynamics \cite{Ueda19}. Upon applying a pulsed voltage, the initial charge-ordered state changes to a metastable state through a high-conducting (excited) state, which results in the appearance of the hysteresis, as illustrated in Figure~\ref{fig:cat5}(a). Raman spectroscopy suggests that this metastable state is a non-charge-ordered dimer-Mott state. The results can be understood by considering the temperature-dependent dynamics of hydrogen-bonded deuterium ({\it i.e.}, localization/fluctuations) coupled to the $\pi$-electrons in the conducting layers. Figure~\ref{fig:cat5}(b) shows that, on the contrary, the hydrogen analogue \cat, which is a dimer-Mott insulator without a charge-ordering phase transition, does not exhibit such hysteretic behavior; it does, however, display a similar negative differential resistance, indicating that some degree of proton localization is present. This idea is confirmed by DFT calculations \cite{Tsumuraya15}, which find another H-localized state with reduced symmetry that lies only 2\,meV above the optimized state with minimum energy. The quasi-degenerate electronic state implies that random domains are present; domain wall motion may be responsible for the nonlinear effects. It is interesting to recall the cooperative charge dynamics observed in the charge-ordered state of \aeti, where a negative differential resistance is also observed (Figure~22), together with a significant change in time of the shape of the measured resistivity \cite{IvekPRB12,Peterseim16}. In Section~\ref{sec:dielectric_domainwalls} we discussed how the findings of negative differential resistance and switching to transient states are explained by cooperative domain-wall dynamics inherent to the ferroelectric state driven by charge ordering. \subsubsection{Charge glass} \label{sec:ChargeGlass} Glassy phases and related phenomena such as metastability and slow relaxation are found in a variety of systems when long-range order is prevented by randomness, rapid cooling or competing interactions \cite{Dagotto05,Dyre06}; these effects are commonly attributed to disorder. A more intriguing case is the disorder-free glassy behavior established in geometrically frustrated spin systems \cite{Anderson56,Bouchaud98}, while the influence of geometric frustration in the charge sector is less explored \cite{Merino05}. Glassy phenomena have been demonstrated in the compound \tetrz\ with a triangular arrangement of molecular units, where a high degree of charge frustration is present \cite{Kagawa13,Sato14,Sato17}. As discussed in the previous Section~\ref{sec:COweaklydimerized}, the material displays a metal-to-insulator transition at $T_{\rm CO} = 190$~K when cooled slowly, leading to a charge-order-driven ferroelectric ground state. The transition can be avoided either by rapid cooling or by replacing the RbZn(SCN)$_4^-$ anions with CsZn(SCN)$_4^-$ as shown in Figure~\ref{fig:thetaresistivity}.
In the absence of long-range charge order, the resistivity remains low, the dielectric peak is suppressed \cite{Nad07} and the rapidly cooled phase bears similarities to the high-temperature phase, e.g.\ it exhibits slow dynamics of the order of kHz \cite{Chiba04}. \begin{figure}[h] \centering\includegraphics[clip,width=0.4\columnwidth]{thetaresistivity.pdf} \caption{The resistivity behavior $\rho(T)$ of \tetrz\ depends drastically on the cooling rate: the charge-order phase transition is suppressed by rapid cooling, giving way to charge-glass formation. The transition is also suppressed by replacing Rb by Cs in \tetcz: the obtained resistivity behavior does not change with cooling rate in the range of $0.1 - 10$~K/min (after \cite{Sato14}). \label{fig:thetaresistivity}} \end{figure} \paragraph{Slow charge dynamics} \label{sec:chargedynamics} The temperature evolution of the slow charge dynamics in organic charge transfer salts can be revealed by resistance fluctuation spectroscopy \cite{Mueller11}, as demonstrated in Figure~\ref{fig:thetaglass}. \begin{figure}[h] \centering\includegraphics[clip,width=1\columnwidth]{thetaglass.pdf} \caption{Resistance fluctuations in the rapidly cooled \tetrz. (a) Resistance power spectral density multiplied by frequency at representative temperatures. Full lines are fits to continuously distributed Lorentzians plus 1/$f^\alpha$. (b) Slowing down of the characteristic frequency and (c) growth of the dynamic heterogeneity (after \cite{Kagawa13}). \label{fig:thetaglass}} \end{figure} One observes resistance fluctuations with a characteristic frequency superimposed on the background $1/f$ noise for both compounds: \tetrz\ when cooled rapidly \cite{Kagawa13}, and \tetcz\ \cite{Sato14}. When plotting $f^{\alpha}\times S_R$ as a function of frequency, a broad peak is uncovered that rises out of a constant background [Figure~\ref{fig:thetaglass}(a)]. The spectral shape is well described by a superposition of continuously distributed Lorentzians with high-frequency $f_1$ and low-frequency $f_2$ cutoffs. When $T$ is reduced, the peak position shifts to lower frequency and its linewidth strongly increases [panels (b) and (c)], indicating that the dynamics becomes slower and more heterogeneous. Both features are fingerprints of glassy freezing in supercooled liquids \cite{Ediger00}. It is interesting to note that the growth of slow dynamics is correlated with the evolution of two-dimensional charge clusters, recorded by x-ray diffuse scattering. These charge clusters are present already at high temperatures and \begin{figure}[b] \centering\includegraphics[clip,width=0.7\columnwidth]{thetaglassxray.pdf} \caption{Temperature evolution of the charge cluster correlation length $\xi(T)$ during (a) slow cooling and (b) after rapid cooling of \tetrz\ (after \cite{Kagawa13}). \label{fig:thetaglassxray}} \end{figure} their intensities and correlation lengths increase on cooling down to $T_{\rm CO}$, as illustrated in Figure~\ref{fig:thetaglassxray}. After rapid cooling, when the charge ordering is suppressed, charge clusters are still observed at $T=120$~K. The correlation length $\xi = 140$~\AA\ corresponds to about 25 triangular units and is temperature independent up to $T=150$~K, indicating a frozen metastable state with no long-range order. With further warming, $\xi$ increases and at $T_g \approx 165$\,K attains the value of about 200~\AA, in accord with data recorded on cooling.
The temperature behavior of $\xi$ indicates that $T_g$ can be identified as the glass transition temperature of the charge cluster dynamics. \paragraph{Evolution of the electronic crystal in time} \label{sec:timeevolution} Having established the charge glass in rapidly cooled \tetrz\ as a metastable state, one can follow the time evolution of the electronic-crystal state by monitoring, at a fixed temperature $T_q < T_{\rm CO}$, how the resistivity and NMR spectra evolve after the crystal was cooled down rapidly. The resistivity is much lower in the glass state compared to the electronic-crystal charge-ordered state and thus characterizes the electronic state macroscopically. In Figure~\ref{fig:thetacrystallization}, the crystallization time is plotted versus $T_q$: the dome-like structure is known as a time-temperature-transformation (TTT) curve, commonly observed for the crystallization of structural and ionic glasses or metallic alloys \cite{Loeffler00}. Analyzing the TTT curve within classical nucleation theory allows the determination of nucleation- and growth-dominated regimes. Microscopic evidence for the electronic crystal growth from a glass state was obtained by $^{13}$C-NMR investigations. Figure~\ref{fig:thetaNMRcrystallization} compares the NMR spectra obtained during the slow cooling process with the time evolution of spectra obtained at $T=140$~K after rapid cooling. Remarkably, the broad spectrum observed initially at 140\,K changes its structure in time and eventually adopts the shape characteristic of a charge-ordered state. \begin{figure}[h] \centering\includegraphics[clip,width=0.4\columnwidth]{thetacrystallization.pdf} \caption{Time-temperature-transformation (TTT) curves of $\theta$-(BEDT-TTF)$_2$RbZn(SCN)$_4$ deduced from the time evolution of the resistivity after rapid cooling at different temperatures $T_q$. The color scale denotes the percentage of resistivity recovery (after \cite{Sato17}). \label{fig:thetacrystallization}} \end{figure} \paragraph{Charge vitrification and aging} \label{sec:aging} \begin{figure}[h] \centering\includegraphics[clip,width=0.6\columnwidth]{thetaNMRcrystallization.pdf} \caption{Temperature and time evolution of the $^{13}$C-NMR spectra of \tetrz. Slow cooling results in the charge-ordered electronic crystal, while after rapid cooling the charge glass is established. (a) At room temperature the spectrum consists of two narrow lines due to two non-equivalent $^{13}$C sites in the BEDT-TTF molecule, indicating a homogeneous charge density. (b) Upon cooling the spectrum broadens due to the slowing down of the charge dynamics. (c) The spectrum indicates that charge-poor and charge-rich sites are formed in the charge-ordered state at $T=190$~K. (d) Time evolution of the $^{13}$C-NMR spectra after rapid cooling during the crystallization process at $T=140$~K. The blue and red components are attributed to the charge-glass and charge-order states (after \cite{Sato17}). \label{fig:thetaNMRcrystallization}} \end{figure} Eventually, charge order prevents further investigations of the glassy phenomena in \tetrz\ under slow cooling. This problem can be circumvented by studying \tetcz, where the higher degree of frustration fully prevents long-range charge order. Indeed, additional fingerprints of glassy states such as cooling-rate-dependent charge vitrification and non-equilibrium aging behavior were successfully demonstrated in \tetcz\ \cite{Sato14}.
(i) From the resistivity {\it vs.} temperature curves under different sweeping rates, we can conclude that faster cooling leads to a higher glass transition temperature $T_g$, as expected in glass phenomenology. (ii) The broad charge-fluctuation peak exhibits a strong decrease of its characteristic frequency $f_0$ around 100~K, indicating a glass transition temperature $T_g \approx 100$~K. (iii) At $T < T_g$, aging behavior can be observed in the resistivity, as shown in Figure~\ref{fig:thetaglassaging}. It is well described by the Kohlrausch-Williams-Watts (KWW) law based on a stretched exponential function \cite{Ediger00,Ediger96}, summarized in compact form below. Its relaxation time $\tau_{\rm aging}$ obeys the Arrhenius law, reaching $100-1000$~s in the vicinity of $T_g$, again consistent with the conventional definition of the glass transition. Most significantly, $\tau_{\rm noise} = 1/f_0$ at $T > T_g$ follows the same behavior, uncovering the common dynamics of the equilibrium states at high temperatures and of the non-equilibrium states below $T_g$. The observed Arrhenius behavior indicates a strong-liquid nature, meaning that the glassy dynamics on approaching $T_g$ can be described as a gradual slowing down, probably reflecting an increasing number of dynamically correlated charge clusters. (iv) Consistently, x-ray diffuse scattering data reveal that the spatial growth of the charge clusters is closely associated with the glassy charge dynamics. \begin{figure} \centering\includegraphics[clip,width=0.7\columnwidth]{thetaglassaging.pdf} \caption{Aging behavior of $\theta$-(BEDT-TTF)$_2$CsZn(SCN)$_4$. (a) Evolution of the resistivity in time at representative temperatures below the glass transition $T_g$. The lines are fits to the Kohlrausch-Williams-Watts (KWW) relaxation law. (b) Aging relaxation time $\tau_{\rm aging}$ extracted from the Kohlrausch-Williams-Watts behavior and $\tau_{\rm noise}$ = 1/$f_0$ extracted from the resistance fluctuations peak as a function of inverse temperature. The line is a fit to the Arrhenius law, yielding an activation energy of about 2600~K. The dotted lines denote the temperature range where $\tau_{\rm aging}$ reaches values of $100-1000$\,s, widely recognized to define $T_g$ (after \cite{Sato14}). \label{fig:thetaglassaging}} \end{figure} Bearing in mind that these organic materials are rather clean systems without a significant amount of impurities, and that \tetcz, in contrast to \tetrz, possesses a nearly equilateral triangular lattice, Kanoda and collaborators suggested that the charge frustration works against long-range electronic crystallization and thus plays the primary role in the formation of the charge glass in these two-dimensional strongly correlated systems \cite{Kagawa13,Sato17}. In other words, the competition between the charge-ordered phase and a charge glass may be governed by geometrical frustration. Indeed, Dobrosavljevi{\'c} and collaborators demonstrated theoretically that in disorder-free Coulomb liquids the interplay of long-range interactions and geometric frustration lifts the ground-state degeneracy produced by frustration and generates a manifold of low-lying metastable states together with characteristic features of glassy dynamics, as observed experimentally. Their results suggest that \tetrz\ and \tetcz\ constitute prominent examples of a self-generated Coulomb glass \cite{Mahmoudian15}.
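For reference, the relaxation analysis entering points (ii) and (iii) can be cast in compact form; this is a schematic summary in which the amplitudes and offsets of the actual fits of \cite{Sato14} are omitted and $\tau_{\infty}$ denotes a generic microscopic attempt time:
\begin{equation}
\rho(t) - \rho_{\infty} \propto \exp\left\{ -\left( t/\tau_{\rm aging} \right)^{\beta_{\rm KWW}} \right\} \qquad {\rm with} \qquad \tau_{\rm aging}(T) = \tau_{\infty} \, \exp\left\{ \Delta/k_{B}T \right\} \quad ,
\end{equation}
where $0 < \beta_{\rm KWW} < 1$ quantifies the stretching, $\rho_{\infty}$ stands for the fully relaxed resistivity, and $\Delta/k_{B} \approx 2600$~K is the activation energy quoted in Figure~\ref{fig:thetaglassaging}(b); the glass transition temperature is then identified by the convention that $\tau_{\rm aging}(T_g)$ reaches $100-1000$~s.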
Still, we should recall that the lattice degrees of freedom are involved in the creation of the electronic-crystal states, both the charge glass and the charge order; the extent to which they influence the crystallization mechanism remains to be clarified. \subsection{Quantum electric dipoles with glassy signatures} \label{sec:QEL} \label{sec:DipoleLiquid} \subsubsection{Quantum electric dipole lattice} \label{sec:hgbr} Recently, Drichko and collaborators suggested that \hgbr\ may serve as an example of a quantum dipole liquid, based on Raman scattering investigations \cite{Hassan18}. The system is {\it a priori} a good candidate because electric dipoles are arranged on a triangular lattice with moderate frustration (Figure~\ref{fig:structure_HgBrCl}) \cite{Hotta10}. Raman and infrared vibrational spectroscopy shows that static charge order is absent \cite{Ivek17,Hassan18} despite a pronounced metal-insulator transition at $T_{\rm MIT}=80$~K, the origin of which is still elusive. Charge fluctuations are invoked to explain the behavior observed in the $\nu_{2}(a_g)$ Raman band: Figure~\ref{fig:HgBrnu2} demonstrates that the band broadens on cooling and that the linewidth goes through a slight minimum at around $T_{\rm MIT}$ before it increases again below. The explanation starts from the two types of BEDT-TTF molecules with charge $+0.6e$ and $+0.4e$, respectively, that are present in the charge-ordered sister system \hgcl\ (Section~\ref{sec:COdimerized}); it is assumed {\it ad hoc} that these charges are also present in \hgbr, but fluctuate between the two molecules in a dimer; the assumption is in line with our estimate of charge fluctuations indicated by the broad $\nu_{27}({\rm b}_{1u}$) mode as discussed in Section \ref{sec:COHgCl}. Applying Kubo's two-states-jump model \cite{Yakushi12,Girlando14}, the shape of the band can be mimicked assuming charge fluctuations with an exchange frequency $\omega_{\rm ex} \approx 30-40$~\cm. \begin{figure} \centering\includegraphics[clip,width=0.65\columnwidth]{HgBrnu2.pdf} \caption{(a) Raman spectra of \hgbr\ in the region of the molecular vibrations $\nu_{2}$ and $\nu_{3}$ at temperatures between 200 and 8~K. (b) No splitting due to charge ordering is observed. For the $\nu_{2}$ mode, the linewidth increases below 80\,K. For comparison, the behavior of the charge-insensitive $\nu_{3}$ band is shown; its linewidth decreases continuously from 300~K down to low temperatures (after \cite{Hassan18}). \label{fig:HgBrnu2}} \end{figure} Most importantly, the Raman spectra in the $A_{1g}$ channel reveal a broad feature around 40~\cm\ of non-phonon origin that grows strongly below $T_{\rm MIT}=80$~K and attains its maximum intensity around 40~K, as shown in Figure~\ref{fig:HgCl_Raman1}(a). The energy of the mode is lower than expected for magnetic excitations; instead it is associated with the exchange frequency of charge fluctuations $\omega_{\rm ex} \approx 30-40$~\cm. The results are interpreted as fluctuating dipoles forming a quantum electric dipole liquid state. \subsubsection{Glassy behavior} Along this line one expects a quantum paraelectric behavior described by the Barrett formula \cite{Barrett52}, Eq.~(\ref{eq:Barrettformula}), as is observed in other compounds with a triangular lattice, such as BaFe$_{12}$O$_{19}$ and $\kappa$-H$_3$(Cat-EDT-TTF)$_2$ \cite{Shen16,Shimozawa17} (Figure~\ref{fig:CatBarrett}).
Instead, the dielectric response of \hgbr\ exhibits only a weak relaxor-like behavior in the audio and radio-frequency range, which is screened by the conduction electrons \cite{Ivek17}. Simultaneous fits of $\varepsilon^{\prime}(\omega)$ and $\varepsilon^{\prime\prime}(\omega)$ to the generalized Debye formula (see Section~\ref{sec:DielectricResponse}) disclose a strongly diminishing dielectric constant and a broadening of the relaxation time distribution when the temperature decreases; this behavior might indicate glassy freezing \cite{Staresinic02}. The mean relaxation time $\tau_0$ follows an Arrhenius-type gradual slowing down. When we take the temperature where $\tau_0$ extrapolates to the value of 100~s as the glass transition, we obtain $T_g \approx 5$\,K. In glassy phenomenology, the Arrhenius behavior is characteristic of strong glass formers and implies that the glassy dynamics involves only local rearrangements of charge configurations. From this point of view, the low-frequency Raman mode is interpreted as a Boson peak arising from charge fluctuations between the two molecular dimer sites. In analogy to the charge-density-wave phenomenology, we speculate that the low-frequency Raman mode (Boson peak) and the dielectric mode in the kHz-MHz range represent fingerprints of the same phenomenon \cite{Biljakovic12, Remenyi15}: the formation of the dipole liquid state occurs on a local scale and bears glassy signatures. This interpretation is in accord with previous observations: systems with strong Boson peaks above $T_g$ tend to be those with strong liquid character \cite{Ediger96,Schroeder04,Nakayama02}. \begin{figure} \centering \includegraphics[width=1\columnwidth]{HgBrRaman3.pdf} \caption{\label{fig:HgCl_Raman1} (a) Temperature dependence of the collective mode in the A$_{1g}$ scattering channel for \hgbr, determined by subtracting phonons from the full Raman spectrum. The inset shows the intensity of the mode as a function of temperature. (b) Temperature dependence of the heat capacity $C_p$ for \hgcl\ (red line) and \hgbr\ (black line) below $T=40$~K. The two curves deviate from each other below approximately 6~K. The inset shows the low-temperature data with the linear behavior of the heat capacity for \hgbr. (c) Temperature dependence of the specific heat of \hgbr. The $C/T^3$ {\it vs.} $T$ plot demonstrates the excess contribution from the Boson peak located at around $T=5$~K (after \cite{Hassan18,Hemmida18}). } \label{fig:HgBrBoson} \end{figure} Specific heat measurements also reveal the presence of a Boson peak below $T=20$~K, as displayed in Figure~\ref{fig:HgBrBoson}. The significant non-Debye behavior, indicating an excess of low-energy vibrational states \cite{Hemmida18}, is often taken as a universal signature of heterogeneity and glassy properties of the liquid state \cite{Ando18}. Importantly, the entropy associated with the excess peak is significantly larger than what is expected for the pure magnetic entropy of a spin-$\frac{1}{2}$ system, indicating that the glassy state should be mainly attributed to heterogeneity in the charge sector. Since no estimates of the Boson peak frequency exist, either on the basis of specific heat or of low-frequency Raman data, we cannot draw a final conclusion on their common origin. This issue is certainly worth clarifying in future studies.
\subsubsection{Coupling to magnetic degrees of freedom} While in the regular dimer Mott insulator the magnetic coupling between the dimers in \hgbr\ is given by $J\approx 20$~meV (Table~\ref{tab:1}), the unequal charge distribution on the dimers causes a renormalization \cite{Hotta10,Hotta12} to $J_{\rm DS}\approx 6-7$~meV, as consistently estimated from ESR and Raman experiments \cite{Hemmida18,Hassan18}; this is still too high to assign the 40~\cm-peak observed in Figure~\ref{fig:HgCl_Raman1}(a) to purely magnetic excitations. From other metallic BEDT-TTF salts close to the charge-order transition, it is known \cite{Merino03,Dressel03b,Dressel10,Kaiser10} that charge fluctuations can cause collective modes present in optical spectra. The coupling of these electric dipole fluctuations to $S=\frac{1}{2}$ spins on a triangular lattice might serve as a mechanism for spin-liquid behavior \cite{Hotta10,Naka16}. Starting with a Kugel-Khomskii-type Hamiltonian, Naka and Ishihara show that the spin-charge interaction promotes an instability of the long-range magnetically ordered state around a parameter region where two spin-spiral phases merge \cite{Naka16}. As a matter of fact, the fluctuating dipole liquid is rather similar to an orbital liquid \cite{Balents10,Savary16}. Specific heat measurements down to low temperatures are also consistent with the idea of spin-liquid behavior in \hgbr, since a linear term is present only in the Br-compound but not in the Cl-analogue, where the electric dipoles are well ordered [Figure \ref{fig:HgBrBoson}(b)]. It is interesting to recall the proposal of Mazumdar and Clay \cite{Li10,Dayal11,Clay12,Clay19} about a paired electron crystal, where the magnetic interaction acts as a driving force for the charge order in a frustrated dimer lattice, resulting in a singlet ground state. In the case of \hgcl, where an abrupt charge-ordered phase occurs at $T_{\rm CO} = 30$~K (Figure~\ref{fig:HgClnu27}), long-range magnetic order can most likely be excluded below $T_{\rm CO}$, as discussed below. At first sight, evidence of magnetic properties in the charge-ordered state of \hgcl{} is rather vague. In the region of $T_{\rm CO}$, one finds a kink in the dc resistivity together with a decrease of the susceptibility; this might be seen as a tendency toward a spin-gapped state. On the other hand, early ESR data suggested antiferromagnetic state formation slightly below $T_{\rm CO}$ \cite{Yasin12b}, while more recent ESR and specific heat measurements on samples from different sources failed to detect any signature of a magnetic transition \cite{Gati18a}. Although the two sets of results agree in general, the details seem to be affected by impurities. Recently, the absence of long-range magnetic order was also concluded from $^{1}$H nuclear magnetic resonance data on \hgcl\ \cite{Pustogow19b}. This study reveals a classical Korringa temperature dependence in the metallic state; in other words, $T_1T \approx 1000$~Ks remains constant from ambient temperatures down to $T_{\rm CO} =30$~K. There the relaxation rate is strongly enhanced due to charge order, but otherwise the NMR spectra do not change when passing the phase transition; in fact, no signature of long-range magnetic order was observed down to 25~mK. A maximum in the relaxation rate $T_1^{-1}$ occurs around 5~K, similar to previous observations in the spin liquid candidates \etcn\ and \agcn\ around 1~K, as plotted in Figure~\ref{fig:HgCl_NMR1}(a) \cite{Shimizu03,Shimizu06,Shimizu16}.
The strong field dependence of $T_1^{-1}$ observed at that temperature [Figure~\ref{fig:HgCl_NMR1}(b)] is taken as an indication of contributions from dynamic inhomogeneities. \begin{figure} \centering \includegraphics[width=0.7\columnwidth]{HgCl_NMR1.pdf} \caption{\label{fig:HgCl_NMR1} (a) Temperature dependence of the $^{1}$H-NMR relaxation rate of several spin liquid candidates. At temperatures above the maximum, the $T^{-1}_1$ data in the insulating state of \hgcl\ (indicated by $\kappa$-HgCl) \cite{Pustogow19b} coincide with those of the paradigmatic quantum spin liquid compounds \etcn\ \cite{Shimizu03} and \agcn\ \cite{Shimizu16}. Here, $T^{-1}_1$ follows a field-independent, approximately linear $T$ dependence, suggesting that this is the intrinsic response with $J \approx 200$~K. (b) Upon increasing $B_0$ the maximum is strongly suppressed and shifts to higher temperatures. This behavior is observed for the other compounds, too \cite{Shimizu03,Shimizu06,Shimizu16} (after \cite{Pustogow19b}). } \end{figure} It seems that the low-temperature NMR properties of all these frustrated materials are dominated by extrinsic magnetic contributions rather than by intrinsic spin degrees of freedom. While antiferromagnetism is quite common in charge transfer salts, there has been only one report on indications of weak ferromagnetic order: shortly after the discovery of superconductivity in slightly pressurized \etcl\ at a record temperature of $T_c=12.8$~K by the Argonne group \cite{Williams90}, they observed an antiferromagnetic transition near $T_N=45$~K followed by a weak ferromagnetic hysteresis below 22~K \cite{Welp92}. As depicted in Figure~\ref{fig:kappa_phasediagram}(b), NMR, ESR and SQUID experiments confirm the antiferromagnetic ground state below 26~K; the weak ferromagnetic moment results from a canting of the ordered spins at lower temperatures \cite{Miyagawa95,Pinteric99,Yasin11}. In this context, the recent discovery of weak ferromagnetic order in \hgbr\ below $T=20$~K was fairly surprising \cite{Hemmida18}. As discussed in Sections \ref{sec:COHgCl} and \ref{sec:hgbr}, the material is known to undergo a smooth metal-insulator transition at 80~K, which is neither a Mott nor a charge-order transition and is also not associated with structural changes \cite{Aldoshina93,Ivek17}. In the spin sector, the heat capacity shows a finite linear term $\gamma_e$ \cite{Hassan18}, while the spin susceptibility and ESR spectroscopy \cite{Hemmida18} suggest that frustrated spins in the molecular dimers suppress long-range antiferromagnetic order, forming a spin-glass-type ground state of the triangular lattice below $40$~K, which locally contains ferromagnetic polarons. Interestingly, this estimate of $T_g$ corresponds to the energy of the low-frequency Raman mode located at 40~\cm, which attains maximum intensity around $T=40$~K. Moreover, a most recent $^{13}$C-NMR investigation finds weak NMR line broadening starting right below $T_{\rm MIT}$ and strongly developing below 40\,K. The spin-lattice relaxation rate forms a broad maximum centered around 5~K, indicating a dynamical slowing down of magnetic fluctuations. The overall results are interpreted in terms of disordered charge disproportionation and antiferromagnetism developing on short-range scales \cite{Le20}. Considering both the charge and the spin sector, we suggest for \hgbr\ an exotic quantum liquid state of glassy nature that consists of entangled fluctuating electric dipoles and spins.
Below $T_{\rm MIT}=80$~K, composite charge-spin clusters develop upon cooling; their dynamics gradually slows down and freezes at low temperatures. More efforts in future research, experimental as well as theoretical, are needed in order to further elaborate this scenario. \subsubsection{Dirac electrons} \label{sec:DiracElectrons} \label{sec:tiltedcones} For more than three decades the charge-transfer salt \aeti\ has been one of the most fascinating members of the BEDT-TTF family \cite{Bender84a,Bender84b,Dressel94}. Besides serving as the prime model for charge order, the compound attracts enormous attention for the formation of a Dirac electronic state when pressure is applied (Figure~\ref{fig:alphapressure}). Dirac materials are a novel class of solid state systems in which the low-energy electronic properties are not described in terms of a Schr\"{o}dinger wave equation, but by a relativistic Dirac equation resulting in a linear energy dispersion around the Fermi energy, where valence and conduction bands touch each other \cite{CastroNeto09, Wehling14}. While three-dimensional Dirac and Weyl semimetals are subject to intense research nowadays \cite{Armitage18}, the underlying physics was first explored in the two-dimensional case of mono-layered graphite, {\it i.e.} graphene \cite{CastroNeto09}. Whereas in graphene the Dirac cones are isotropic, less symmetric Dirac materials such as the two-dimensional organic solid \aeti\ possess anisotropic cones with tilted axes and, in the presence of long-range and short-range Coulomb interactions, provide novel exotic phenomena. Experimentally this topic was pioneered by the magnetotransport studies of Tajima and collaborators \cite{Tajima00,Tajima06,Tajima07,Tajima09,Tajima18,Kajita14}, which were quickly explained by band-structure calculations \cite{Katayama06,Kino06,Kobayashi07,Goerbig08,Kobayashi09,Kajita14}. Here the challenge is that the atomic coordinates are not really known with high precision in the required parameter range of high pressure and low temperature, and that the influence of the I$^-_3$ anions cannot be neglected \cite{Kondo05,Kondo09,Alemany12}. Although the electrical conductivity is basically temperature independent, the carrier density drops quadratically upon cooling while the mobility increases approximately as $\mu(T)\propto T^{-2}$, as shown in Figure~\ref{fig:Dirac1}(a). Subsequent measurements of specific heat \cite{Konoike12}, nuclear magnetic resonance (NMR) \cite{Hirata16,Hirata17} and optical spectroscopy \cite{Beyer16,Uykur19} provide further evidence of massless Dirac fermions. \begin{figure} \centering \includegraphics[width=0.9\columnwidth]{Dirac1.pdf} \caption{\label{fig:Dirac1} (a)~The carrier density $n(T)$ and the mobility $\mu(T)$ under a pressure of $p=1.8$~GPa, plotted as a function of temperature on double-logarithmic scales. The cyan dots show the effective carrier density $n_{\rm eff}$ and the magenta dots the mobility $\mu_{\rm eff}$ estimated from the Hall coefficient ($R_H=1/ne$) and the conductivity ($\mu = \sigma_1/ne$). The magnetoresistance mobility $\mu_M$ and the density $n_M$, on the other hand, are shown as red and blue squares, respectively. The carrier density obeys $n(T) \propto T^2$ from $T=10$~K to 50~K (indicated by the dashed line) (data from \cite{Tajima06}). (b)~Band structure of \aeti\ near the Dirac point. The energy dispersion $E_{\lambda}(\mathbf{q}) = \mathbf{w}_0 \cdot \mathbf{q} + \lambda \sqrt{w_x^2q_x^2 + w_y^2 q_y^2}$ for the special choice of $w_x = w_y = 1$ and $\mathbf{w}_0 = (0,0.6)$ is given in natural units.
The Dirac cone is tilted in the $y$-direction (reproduced from \cite{Goerbig08} with permission). } \end{figure} Using an extended Hubbard model within the Hartree mean-field theory, Kobayashi {\it et al.} examined the band structure of the stripe charge-ordered state of \aeti\ under pressure and found that with increasing pressure a topological transition occurs from a conventional insulator with a single minimum in the dispersion relation at the M point in the Brillouin zone, toward a new phase, which exhibits a double minimum \cite{Montambaux09,Kobayashi11}. The transition is characterized by the appearance of a pair of Dirac electrons with a finite mass. Due to the topological nature of this transition, the Berry curvature vanishes in the conventional phase and has a double-peak structure with opposite signs in the new phase \cite{Vanderbilt18}. Osada considered the possibility that \aeti\ becomes a Chern insulator due to additional transfers, potentials and magnetic modulations \cite{Osada17,Osada19,Osada20}. An interesting approach should be mentioned at this point: instead of applying hydrostatic pressure, two or four sulfur atoms in the BEDT-TTF molecule can be replaced by selenium in order to increase the orbital overlap and bandwidth \cite{Inokuchi95}. The resulting $\alpha$-(BEDT-STF)$_2$I$_3$ and $\alpha$-(BEDT-TSF)$_2$I$_3$ [better known as $\alpha$-(BETS)$_2$I$_3$] stay metallic down to $T_{\rm CO}=80$~K and 50~K, respectively. There have been suggestions that exotic Dirac cones can be achieved even under ambient pressure and temperature \cite{Morinari14,Naito20a}; unfortunately, heavy disorder makes the experimental realization challenging at present \cite{Naito97,Naito20c,Kitou20}. \paragraph{Effect of correlations} \label{sec:DiracElectrons_correlations} In contrast to graphene, the Dirac cones in \aeti\ are strongly tilted \cite{Katayama06,Goerbig08}, as shown in Figure~\ref{fig:Dirac1}(b). The position of the bands can be tuned by pressure; also temperature is an important parameter. The real situation in \aeti, however, is not as clean as in the case of graphene, due to the existence of additional electronic bands and the effect of the anion sheets \cite{Alemany12,Pouget18}. Pressure-dependent optical studies \cite{Beyer16,Uykur19} reveal the presence of additional charge excitations, also inferred from magnetotransport \cite{Monteverde13}. The effect of interlayer magnetoresistance in the case of tilted Dirac cones was subsequently discussed \cite{Tajima18,Mori19,Tani19}; it was suggested that angular-dependent measurements could reveal the in-plane anisotropy of the electronic structure. Liu {\it et al.} \cite{Liu16} observed an increase of $\rho(T)$ at low temperatures that cannot be completely suppressed and becomes stronger when approaching the charge-ordering transition (Figure~\ref{fig:dcpressure}). They suggest that massless Dirac fermions interact via short-range Coulomb repulsion \cite{Tanaka16}. A pseudogap forms in the charge channel before opening a real charge gap in the charge-ordered phase; this is in accord with pressure-dependent optical experiments, which infer that the closing of the charge-order gap with pressure comes to a halt around 1~GPa \cite{Beyer16}. Recently it was pointed out \cite{Winter17} that in organic materials spin-orbit coupling might play an important role, causing spin-orbit gaps that render impossible the realization of a true zero-gap Dirac state in \aeti.
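The temperature dependences $n \propto T^{2}$ and $\mu \propto T^{-2}$ quoted at the beginning of this Section follow from textbook estimates for a two-dimensional Dirac cone; as a rough sketch that ignores the tilt, the anisotropy and degeneracy factors, the linear dispersion implies a linear density of states and hence a quadratic density of thermally excited carriers:
\begin{equation}
g(E) \propto \frac{|E|}{(\hbar v_F)^{2}} \qquad \Rightarrow \qquad n(T) \propto \int_{0}^{\infty} g(E)\, f(E)\, {\rm d}E \propto \left( \frac{k_{B}T}{\hbar v_F} \right)^{2} \quad ,
\end{equation}
with $f(E)$ the Fermi function and the chemical potential fixed at the Dirac point. A mobility $\mu(T) \propto T^{-2}$ then directly yields the roughly temperature-independent conductivity $\sigma = ne\mu$ observed experimentally [Figure~\ref{fig:Dirac1}(a)].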
\begin{figure}[h] \centering \includegraphics[width=0.8\columnwidth]{Dirac5.pdf} \caption{\label{fig:dcpressure} (a) In the massless Dirac fermion state of \aeti\ above 1.1~GPa the resistivity $\rho(T)$ increases at very low temperatures even when the pressure is raised up to 4~GPa. Also indicated is the bulk resistivity corresponding to the quantum sheet resistance, $C \cdot (h/e^2) = 4.5 \times 10^{- 3}~\Omega{\rm cm}$, with the lattice constant along the $c$-axis under a pressure of around 2~GPa: $C=1.7$~nm. (b) Conductivity versus pressure for fixed temperatures above 1.1~GPa in the low-temperature region, where the resistivity upturn appears (after \cite{Liu16}). } \end{figure} From extensive optical studies under high pressure up to 4.0~GPa, Uykur {\it et al.} concluded that at low pressure \aeti\ possesses a clear charge-order gap in the optical conductivity; with rising $p$ the bands above and below the gap approach each other and overlap, leading to a more-or-less narrow Drude contribution. At low temperatures the edges of these two bands develop linear dispersions. As illustrated in Figure~\ref{fig:Dirac4}, $\sigma_1(\omega)$ consists of three components in the high-pressure range: (i) a low-energy Drude response, (ii) a frequency-independent conductivity due to the Dirac electrons and (iii) a mid-infrared band arising from incoherent transitions due to on-site and inter-site Coulomb repulsion. Upon cooling, electronic correlations carve out a pseudogap with states piling up at the edges. As a result, the Drude spectral weight is transferred to finite energies. In other words, there are clear fingerprints of the electronic correlations among the Dirac electrons; the interaction can be tuned by temperature and pressure. \begin{figure} \centering \includegraphics[width=0.7\columnwidth]{Dirac4.pdf} \caption{\label{fig:Dirac4} Schematic diagrams for the electronic structure and corresponding optical conductivity of \aeti\ (reproduced from \cite{Uykur19}). The diagrams are given for the low-pressure insulating state and the high-pressure Dirac state at various temperatures. Here, $T_{PG}$ stands for the temperature where the pseudogap starts to open. At the bottom the pressure evolution of the various electronic phases of \aeti\ at low $T$ is summarized: the charge-order insulating state at low pressure; a metallic state in the intermediate pressure regime consisting of massless Dirac electrons, next to carriers in correlation-split and trivial bands; and above 4.0~GPa only the Dirac electronic state and carriers in trivial bands remain.} \end{figure} NMR investigations also lead to the conclusion that the Dirac fermions in \aeti\ do interact \cite{Hirata16,Hirata17}. Using $^{13}$C-NMR spectroscopy, an unusual temperature dependence of spin-related properties was observed in \aeti\ when a pressure of 2.3~GPa is applied, indicating strong correlations among the linearly dispersing electrons \cite{Hirata17}. In regular metals, the electronic density of states is constant in energy, resulting in a constant quantity $1/(T_1 T K^2)$ upon cooling, where $T_1$ is the spin-lattice relaxation time, $T$ the temperature and $K$ the Knight shift; this is known as the Korringa law \cite{Moriya63,Narath68}. In contrast, in systems with linearly dispersing bands and Dirac cones, both $1/T_1 T$ and $K^2$ rapidly drop upon cooling, reflecting the vanishingly small density of states (DOS) around the Fermi energy $E_F$.
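These expectations can be made explicit by a schematic estimate based on the linear density of states of the preceding sketch, neglecting tilt and hyperfine matrix-element details:
\begin{equation}
K \propto \int g(E) \left( -\frac{\partial f}{\partial E} \right) {\rm d}E \propto T \quad , \qquad \frac{1}{T_{1}T} \propto \int g^{2}(E) \left( -\frac{\partial f}{\partial E} \right) {\rm d}E \propto T^{2} \quad ,
\end{equation}
so that both quantities vanish upon cooling, while the combination $1/(T_{1}TK^{2})$, and hence the Korringa ratio defined below, remains roughly constant for noninteracting Dirac fermions; any strong temperature dependence of this ratio therefore directly signals correlation effects.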
Despite the fact that the charge-order transition is strongly suppressed at a pressure of 2.3~GPa as shown in Figure~\ref{fig:alphapressure}, a crossover from a Korringa-like metal to a gapless state occurs below $T\approx 150$~K, suggesting a density-of-states profile as depicted in the inset of Figure~\ref{fig:Hirata}(a). \begin{figure} \centering \includegraphics[width=0.8\columnwidth]{Hirata2.pdf} \caption{\label{fig:Hirata} (a) Temperature dependence of the $^{13}$C-NMR spin-lattice relaxation rate $1/T_1T$ (triangles) and the squared Knight shift $K^2$ (circles) \cite{Hirata16} measured at a pressure of 2.3~GPa and a magnetic field of 6~T. The insets depict the density of electronic states near $E_F$ with thermally generated electron-hole pairs (circles) indicated for low (I) and high (II) temperatures, respectively. The density of states is linear up to $|E_W - E_F|$ and levels off above it. (b) The Korringa ratio $\mathcal{K}$ (squares) dramatically increases as the temperature is lowered. For comparison the results for $\theta$-(BEDT-TTF)$_2$I$_3$ (diamonds) are plotted \cite{Hirata12} (after \cite{Hirata17}). } \end{figure} The strength of short-range electronic correlations can be measured by the so-called Korringa ratio \begin{equation} \mathcal{K}=\frac{1}{T_1 T} \frac{1}{S_0 \beta K^2} \hspace*{5mm} \text{with} \hspace*{5mm} S_0 = \frac{4\pi k_B}{\hbar} \left(\frac{\gamma_n}{\gamma_e}\right)^2 \quad , \end{equation} where $\gamma_n$ is the nuclear gyromagnetic ratio and $\gamma_e$ is the electron gyromagnetic ratio; $k_B$ is the Boltzmann constant and $\hbar = h/2\pi$ the reduced Planck constant; $\beta$ is a form factor representing the anisotropy of the hyperfine interaction. In weakly correlated electron systems, $\mathcal{K}$ is constant in $T$ and of the order of unity, as demonstrated in Figure~\ref{fig:Hirata}(b) for the example of $\theta$-(BEDT-TTF)$_2$I$_3$. The divergent increase of the Korringa ratio by a factor of 1000 upon cooling evidences that \aeti\ at high pressure is far from behaving like a regular metal. Combining these observations with model calculations, Hirata {\it et al.} suggest that this divergence stems from an interaction-driven velocity renormalization that almost exclusively suppresses zero-momentum spin fluctuations because the long-range part of the Coulomb interaction remains unscreened around the Dirac points. In addition, the bandwidth is reduced by short-range electron correlations. The NMR results also indicate that preexisting excitonic fluctuations in close proximity to the charge order govern the electronic nature in the low-temperature region. The excitonic instability is controlled by a small chemical-potential shift and an in-plane magnetic field; the NMR relaxation rate probes these excitonic spin fluctuations \cite{Ohki19a}. \paragraph{Domain Walls} \label{sec:DiracElectrons_domainwalls} Dielectric and optical studies of the anisotropic charge response in \aeti\ revealed two low-frequency relaxation modes (Figure~\ref{fig:alphadielectricfreq}), as discussed in full detail in Sections~\ref{sec:DielectricResponse} and~\ref{sec:dielectric_domainwalls}. Both modes can be attributed to the motion of domain-wall pairs between two types of domains which are created by the breaking of inversion symmetry. The first mode is attributed to the motion of charged domain walls along the $a$-axis, while the smaller second mode is associated with the motion of neutral $180^\circ$ domain-wall pairs along the $b$-axis (cf.\ Figure~\ref{fig:DomainWalls1}).
Ohki {\it et al.} examined the detailed temperature dependence of the electronic states of interacting two-dimensional Dirac electrons in pressurized \aeti\ by using a semi-infinite two-dimensional lattice model \cite{Ohki19b}. Above the charge-order gap, a peak structure emerges due to the two-dimensional Dirac cones, which drastically changes in the vicinity of $T_{\rm CO}$ when they merge with the massive Dirac electrons \cite{Ohki18c}; a behavior not uncommon in indirect semiconductors. They also address the problem of domain-wall conductivity in a charge-ordered insulating phase and can explain the discrepancy between the small energy gap extracted from resistivity data \cite{Liu16} and the optical gap \cite{Beyer16} by metallic conduction along one-dimensional domain walls emerging at the border of two charge-ordered ferroelectric regions with opposite polarizations. With increasing intersite interaction $V$, a transition from the massless Dirac phase to the massive Dirac phase occurs simultaneously with the charge ordering \cite{Matsuno16,Omori17,Ohki18a}. Upon further increasing $V$, the system changes from the charge-ordered massive Dirac state to the charge-ordered state with no Dirac cones. \subsection{Electrodynamics of weakly dimerized ferroelectrics} \label{sec:COFerroelectricity} Electronic ferroelectricity describes an ordering phenomenon that involves mainly electrons \cite{VandenBrinkKhomskii08,Naka10,Yamauchi14,Ishihara10,Ishihara14,TomicDressel15}, in contrast to conventional ferroelectricity, for instance in BaTiO$_3$ or other titanates, where the ferroelectric properties are determined by the ions \cite{LinesGlassBook} (see the introductory part of this Chapter~\ref{sec:ChargeOrder} and references therein for more information). In Section~\ref{sec:COweaklydimerized} above we have discussed basic features characteristic of electronic ferroelectricity that have been observed in two-dimensional organic solids: the optical second-harmonic generation, the development of charge disproportionation and the formation of ferroelectric domains. Here we continue by exploring dynamical features: hysteresis effects, the specific dielectric response, strong non-linear effects and the ultrafast response. \subsubsection{Polarization switching} \label{sec:Switching} Polarization reversal or switching induced by an electric field is commonly considered the most important property of ferroelectrics. The occurrence of a ferroelectric hysteresis loop is a direct consequence of the switching of polarization, which is microscopically explained by the motion of domain walls. In the case of \aeti, Lunkenheimer {\it et al.} could prove ferroelectricity by conducting the respective polarization experiments \cite{Lunkenheimer15}. In Figure~\ref{fig:alphahysteresis} the polarization-induced current response is plotted as a function of time while the electric field is switched as shown in the upper inset. \begin{figure}[h] \centering\includegraphics[clip,width=0.5\columnwidth]{alphahysteresis.pdf} \caption{Time-dependent current of $\alpha$-(BEDT-TTF)$_2$I$_3$ at $T=36$~K (main frame) generated by the sequence of excitation signals plotted in the upper inset. The lower inset shows the polarization as a function of the electric field at $T=5$~K (after \cite{Lunkenheimer15}).
\label{fig:alphahysteresis}} \end{figure} The polarization shows a hysteresis loop that closes around 20~kV/cm and exhibits the form typically observed in true ferroelectrics (lower inset of Figure~\ref{fig:alphahysteresis}). The latter result is obtained only at $T=5$~K, probably because the conductivity at elevated temperatures is still too high and the remaining Ohmic losses prevent switchability \cite{Scott08}. Surprisingly, the saturation polarization of 2~nC/cm$^2$ is several orders of magnitude smaller than the values typically found in one-dimensional organic ferroelectrics such as TTF-CA \cite{TomicDressel15}. This suggests that at $T=5$~K only a tiny fraction of polar domains can be switched by the applied electric fields, indicating a high degree of cooperative freezing of the domain-wall motion; for further discussion see Section~\ref{sec:DielectricResponse}. Finally, we note that in \tetrz\ polarization switching has not been reported up to now, which may be due either to the relatively high conductivity or to a low breakdown field. \subsubsection{Dielectric response} \label{sec:DielectricResponse} For \aeti\ a rather complex and anisotropic dielectric response is observed in the Hz to MHz frequency range below $T_{\rm CO}$ \cite{IvekPRB11,Dressel94,IvekPRL10,Kodama12,Lunkenheimer15,IvekCulo17}. To disentangle the different contributions, simultaneous fits of the real and imaginary parts of the dielectric function by a generalized Debye form, known as the Cole-Cole function \cite{Jonscher77,Jonscher99}, are performed as plotted in Figure~\ref{fig:alphadielectricfreq}. The results reveal that within the molecular planes the dielectric spectra exhibit substantial dispersion with two discernible contributions: \begin{equation} \varepsilon(\omega)-\varepsilon_\infty = \frac{\Delta\varepsilon_\mathrm{LD}}{ 1 + \left(i \omega \tau_{0,\mathrm{LD}} \right)^{ 1-\alpha_\mathrm{LD} } } + \frac{\Delta\varepsilon_\mathrm{SD}}{1 + \left(i \omega \tau_{0,\mathrm{SD}} \right)^{ 1-\alpha_\mathrm{SD} } } \quad , \label{eqmodel} \end{equation} where $\varepsilon_\infty$ is the high-frequency dielectric constant, $\Delta\varepsilon$ is the dielectric strength, $\tau_0$ the mean relaxation time and $(1-\alpha)$ the symmetric broadening of the relaxation-time distribution function of the large (LD) and small (SD) dielectric modes, respectively. The broadening parameter $(1-\alpha)$ of both modes is about $0.70 \pm 0.05$, and the dielectric strength does not change significantly with temperature. Importantly, $\tau_{0,\mathrm{LD}}$ changes with temperature in a thermally activated manner, whereas $\tau_{0,\mathrm{SD}}$ is temperature-independent and reminiscent of a domain-wall-like behavior. These features of the dielectric response are strikingly different from \begin{figure}[h] \centering\includegraphics[width=0.8\columnwidth]{alphadielectricfreq.pdf} \caption{Double logarithmic plot of the real part (a) and imaginary part (b) of the in-plane dielectric function of $\alpha$-(BEDT-TTF)$_2$I$_3$ for three representative temperatures. For $T=47$~K the full lines correspond to the fit by a sum of two generalized Debye functions; the dashed lines represent the contributions of the two single modes. Above $T=75$~K, only one mode is identified, and the full lines represent fits to a single generalized Debye function (after \cite{IvekPRB11}).
\label{fig:alphadielectricfreq}} \end{figure} the ones found in one-dimensional organic ferroelectrics, such as the (TMTTF)$_2X$ and TTF-CA compounds (see \cite{TomicDressel15} and references therein), and also reported in \tetrz\ \cite{NadMonceautheta06}. In those ferroelectrics a clear Curie peak is observed at $T_\mathrm{CO}$, as expected for a regular ferroelectric phase transition. In contrast, $\alpha$-(BEDT-TTF)$_2$I$_3$ shows no sign of a non-dispersive Curie-like peak in $\varepsilon^{\prime}(T)$ at $T_{\rm CO}$, where a clear-cut charge order occurs, as discussed in Section~\ref{sec:COweaklydimerized}. Instead, when recording the dielectric response as a function of temperature as displayed in Figure~\ref{fig:alphadielectrictemp}, one finds a strongly dispersive peak in $\varepsilon^\prime(T)$ well below $T_\mathrm{CO}$. It is more pronounced within the plane compared to the perpendicular direction. A finite out-of-plane component of the polarization is expected due to an asymmetric charge distribution along the molecular axis oriented almost parallel to the $c^*$-axis, the direction perpendicular to the molecular $ab$-planes (cf.\ Figure~\ref{fig:structure_alpha}). For the in-plane orientation a second, smaller peak can be resolved; this particular detailed structure is due to the complex two-mode response as revealed by the data recorded in frequency space. Distinct interpretations have been suggested to explain these observations, ranging from a two-dimensional cooperative bond-charge-density wave with ferroelectric nature \cite{IvekPRL10, IvekPRB11, TomicDressel15} via a relaxor-based picture \cite{Lunkenheimer15, LunkenLoidl15} to a disorder-induced scenario \cite{IvekCulo17, IvekPRB11}. \begin{figure} \centering\includegraphics[width=0.7\columnwidth]{alphadielectrictemp.pdf} \caption{Temperature dependence of the real part of the dielectric function of \aeti\ measured (a) in plane and (b) out of plane for various frequencies as indicated (after \cite{IvekCulo17}). For the in-plane direction a second, smaller peak is resolved in the temperature sweep of $\varepsilon^{\prime}(T)$ that corresponds to the complex two-mode response detected in frequency space, plotted in Figure~\ref{fig:alphadielectricfreq}. \label{fig:alphadielectrictemp}} \end{figure} In the following we have to answer conclusively the fundamental question: what is the origin of the complex dielectric response in \aeti? The aim is to adequately describe both the large dispersive and the small non-dispersive mode in the charge-ordered state, which develops on a long-range scale as testified by optical second-harmonic generation (Figure~\ref{fig:alphaSHG}) \cite{YamamotoJPSJ08}. \subsubsection{Domain walls} \label{sec:dielectric_domainwalls} In the domain-wall interpretation, the long-range ferroelectric order remains in place. The failure to detect the Curie peak is likely due to experimental problems related to the high conductivity at $T>T_\mathrm{CO}$ and/or the restricted frequency range of the dielectric measurements. The dielectric response in the charge-ordered state of \aeti{} is then naturally attributed to domain-wall motion. This motion commonly depends on the interaction with randomly located defects, resulting in a distribution of relaxation times, just as observed. It indicates that the dielectric relaxation takes place between different metastable states, which correspond to local changes of the charge distribution across the length scale of the domain-wall thickness.
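A minimal parametrization of the two relaxation channels, assuming a simple Arrhenius law for the dispersive mode (the activation energy $\Delta$ and the attempt time $\tau_\infty$ are illustrative parameters, not fitted values), reads
\begin{equation}
\tau_{0,\mathrm{LD}}(T) \simeq \tau_\infty \exp\left( \frac{\Delta}{k_B T} \right) \; , \qquad \tau_{0,\mathrm{SD}}(T) \simeq \mathrm{const.} \quad ,
\end{equation}
in line with the thermally activated large mode and the temperature-independent, domain-wall-like small mode extracted from the fits by Eq.~(\ref{eqmodel}).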
In \aeti{}, disorder originates in the anions \cite{Dressel94}; via the many short hydrogen bonds between the anions and the ethylene groups of the BEDT-TTF molecules on the A, A$^\prime$, and B sites \cite{Alemany12}, it directly influences the charge response in the BEDT-TTF conducting layers \cite{IvekCulo17, IvekPRB11}. Domain walls in ferroelectric crystals separate the symmetry-equivalent directions of the polarization; their creation minimizes the electrostatic and elastic energies \cite{Catalan12}. They are commonly classified into two types according to the relative angle between the domain-wall plane and the polarization vector: electrically neutral domain walls and charged domain walls. Figure~\ref{fig:DomainWalls1} illustrates that in two dimensions the neutral domain-wall orientation is restricted to $180^\circ$ and $90^\circ$ walls; the former separate domains with antiparallel polarization, while the $90^\circ$ walls separate regions with mutually perpendicular polarization. On the other hand, domain configurations with a jump of the normal component of the polarization can be separated only by domain walls that carry bound charge. The creation of this type of wall requires an almost perfect compensation by free charge carriers \cite{Bednyakov15}. \begin{figure} \centering\includegraphics[clip,width=1\columnwidth]{DomainWalls.pdf} \caption{Different configurations of domain walls, denoted by the dashed lines: (a)~Sketch of a $180^\circ$ domain wall with dipoles oriented antiparallel next to each other. (b) Also a head-to-head arrangement is possible, leading to a charged domain wall. Panels (c) and (d) depict $90^\circ$ domain walls. \label{fig:DomainWalls1}} \end{figure} As mentioned in Section~\ref{sec:MITchargeorder}, x-ray diffraction and optical second-harmonic interferometry provide experimental evidence of domain walls in \aeti. Yamamoto {\it et al.} took images of large polar domains with opposite polarizations that are reproduced in Figure~\ref{fig:alphadomainimagesA}. They clearly resolved the formation of the boundary along the $a$-axis indicating 180$^\circ$ domains \cite{Yamamoto10}. The mobility of these walls was determined by recording images of the same crystal area after rapid (panel c) and slow cooling (panel d) through the charge-ordering phase transition at $T_{\rm CO} = 135$~K. The image of the domain structure in the annealed state demonstrates a lateral shift of the neutral 180$^\circ$ wall and growth of the bright domain. Furthermore, the domain boundaries are rugged and a closer inspection reveals that they consist of both neutral and charged domain walls. Evidently, the latter are strongly pinned and thus much less mobile. The low degree of switchable polarization at $T=5$~K, shown in Figure~\ref{fig:alphahysteresis}, may be a result of the mobility difference between neutral and charged domain walls: only the domain states that contain neutral 180$^\circ$ domain walls exhibit polarization switching. On this basis, the complex dielectric response in \aeti\ can be understood. Ivek {\it et al.} \cite{IvekPRB11} \begin{figure}[h] \centering\includegraphics[clip,width=1\columnwidth]{alphadomainimagesA.pdf} \caption{(a) Transmission image of a single crystal of $\alpha$-(BEDT-TTF)$_2$I$_3$. The square denotes the area of the sample used in second-harmonic generation (SHG) interferometry measurements; (b) SHG image taken at $T$=140\,K (above $T_\mathrm{CO}$); (c) SHG image taken at $T$=50\,K (below $T_\mathrm{CO}$) after rapid cooling.
Bright and dark regions are split dominantly along the $a$-axis, indicating 180$^\circ$ domain walls; (d) SHG image taken at 50\,K after high-temperature annealing; the image reveals that the domains were displaced from their original positions before annealing, dominantly along the $b$-axis. Red and blue arrows in (c) and (d) denote neutral and charged domain walls, respectively (reproduced from Yamamoto {\it et al.} \cite{Yamamoto10} with the permission of AIP Publishing). \label{fig:alphadomainimagesA}} \end{figure} suggested two possible types of domain-wall pairs; the constraint of charge neutrality implies that a change of stripes is equivalent to strictly replacing the unit cells of one twin type [(A,B)-rich unit cells] by the other [(A$^\prime$,B)-rich unit cells], as depicted in Figure~\ref{fig:alphatwin}. In Figure~\ref{fig:DomainWalls1}(a) we illustrate that the first type is a pair of neutral 180$^\circ$ domain walls, {\it i.e.} a domain wall and the corresponding anti-domain wall between the charge-rich and charge-poor stripes along the $b$-axis. The second type, charged domain walls that develop along the $a$-axis, is shown in Figure~\ref{fig:DomainWalls1}(b): here the jump of the normal component of the polarization is equal to the polarization itself. Importantly, in contrast to neutral 180$^\circ$ domain walls, the stability of charged domain walls depends on the compensation by free charge carriers. The nearly temperature-independent mean relaxation time of the small dielectric relaxation observed in \aeti\ evidences that resistive dissipation is not dominant; thus it is described appropriately by 180$^\circ$ domain walls. Let us now return to the large dispersive dielectric mode observed in \aeti\ (Figure~\ref{fig:alphadielectrictemp}). We propose that this dispersive mode is caused by the motion of charged domain walls, whose formation should be promoted by screening since the charge-ordered state is characterized by a relatively high conductivity. They will remain stable as long as the free-charge-carrier screening effectively compensates their charges. Since the dispersion is determined by the free-carrier screening, the mode should follow a thermally activated behavior similar to the dc resistivity, exactly as observed. There still remain some open questions concerning the dielectric response in \aeti. In order to reveal its microscopic origin, as well as to clarify the low degree of switchable polarization, further investigations of the topology of the domain structure are highly desirable. A first approach was recently reported utilizing scanning near-field infrared nanoscopy in the vicinity of $T_{\rm CO}$ \cite{PustogowSciAdv18}. \subsubsection{Non-linear effects and ultra-fast response} \label{sec:nonlinear+ultafastresponse} Domain-wall motion may also be responsible for the huge negative differential resistance above high threshold fields and for the reversible switching to transient high-conducting states, plotted in Figure~\ref{fig:alphaNDRoptics}. Time- and field-dependent transport measurements on \aeti\ provide evidence \cite{Tamura10, IvekPRB12, Peterseim16} that the rate of domain-wall formation strongly increases when the dc electric field is strong enough for field-induced creation to outweigh thermal excitation. Under these conditions, the motion of domain-wall pairs becomes increasingly correlated and creates growing conduction regions until percolation promotes negative differential resistance.
The time-dependent effects are due to changes in the coupling between the molecular and anion sublattices induced by the applied electric field, a coupling otherwise involved in the charge-ordering phase transition \cite{Alemany12}. \begin{figure} \centering\includegraphics[clip,width=0.4\columnwidth]{alphaNDRoptics.pdf} \caption{Current density $J$ as a function of the applied electric field $E$ measured in \aeti\ at different temperatures below $T_\mathrm{CO}$. A negative differential resistance behavior is observed above the threshold field, which increases with decreasing temperature (after \cite{Peterseim16}). \label{fig:alphaNDRoptics}} \end{figure} Alternatively, the overall behavior can be explained within a two-state model of non-equilibrium electrons by the excitation of charge carriers with high mobility \cite{Peterseim16}. Especially optical studies show that the transient optical properties of the electric-field-induced metallic state at about 125\,K, {\it i.e.} close to $T_\mathrm{CO}$, differ from the state induced deep in the insulating state at around 80\,K. From Figure~\ref{fig:aeti_contour2}, we can see that the spectral signatures of the former are similar to the optical properties found for $T > T_\mathrm{CO}$, while the spectral response of the latter evidences a novel electronically induced metallic-like state. Peterseim {\it et al.} suggest that the novel state is characterized by excitations of charge carriers with an extremely high mobility, which resemble the massless Dirac-like carriers with linear dispersion found in \aeti\ at high pressure \cite{Peterseim16} (cf.\ Section~\ref{sec:DiracElectrons}). \begin{figure}[b] \centering \includegraphics[width=0.8\columnwidth]{aeti_contour2.pdf} \caption{Contour plot of the reflectivity change $\Delta_t R(\nu,t)$ along the $b$-direction of \aeti\ at different temperatures: (a) At $T=125$~K an electric field of $E_{\rm sample}=216$~V/cm is applied along the $a$-axis lasting for 10~ms. (b) At lower temperatures, $T=80$~K, $E_{\rm sample}=2900$~V/cm has to be applied. The dashed horizontal lines mark the onset and end of the voltage pulse. The vertical stripes in the spectrum are due to instabilities of the interferometer mirror during the step-scan run (after \cite{Peterseim16}). \label{fig:aeti_contour2}} \end{figure} Another non-linear study, performed at temperatures below $T=60$~K, revealed a power-law behavior in the current-voltage characteristics that is explained by assuming an electric-field-dependent potential barrier for topological defects, such as electron-hole pairs and/or domain walls, thermally excited from the charge-ordered state \cite{Uji13}. On reducing the temperature, the power-law exponent increases from 1 to 3 around $30~{\rm K} < T_\mathrm{KT} < 40$~K, which is interpreted as a fingerprint of a Kosterlitz-Thouless type of transition. This interpretation implies that many topological defects exist above $T_\mathrm{KT}$, while only a few remain below $T_\mathrm{KT}$. Uji {\it et al.} \cite{Uji13} suggest that these very excitations, when polarized by an ac electric field, also contribute to the in-plane dielectric function $\varepsilon^\prime$. Finally, Okamoto and collaborators reported nonlinear electric transport measurements at 30\,K conducted simultaneously with a terahertz-radiation imaging method. The results showed that in the negative differential resistance state the ferroelectric order melts in an elongated region forming a nonlinear conducting path \cite{Sotome17}.
Electrical conductivity switching and negative differential resistance can also be induced by strong light pulses. Usually, photoexcited states undergo a rapid decay to the ground electronic state and most changes induced by photoexcitation occur transiently. Time-resolved measurements of the electrical conductivity are therefore required to detect a transient photo response. By irradiating the crystal with a pulsed laser, it is switched to a high-conductivity state; then the current response to applied voltage pulses is measured. Notably, the conductivity switching can be repeatedly retrieved by applying pulsed voltages without further irradiation, thus indicating a memory effect. The memory effect can be controlled by the temporal width and height of the voltage pulses; it is understood in terms of the formation of a high-current filament in the high-conductivity state \cite{Iimori09, Iimori14}. As the last point in this Section, we briefly address the photo-induced phase-transition phenomena observed by time-resolved femtosecond pump-probe spectroscopy on these charge-ordered systems \cite{Iimori07, Iwai07, YamamotoJPSJ08, Kawakami10, Miyashita10, Tanaka10,Iwai12}. The findings strongly support a purely electronic mechanism of ferroelectricity, while the electron-phonon interaction resulting in the molecular rearrangements contributes to the decay processes. Iwai {\it et al.} found that charge order is destroyed by an initial femtosecond laser pulse; the charge-ordered state melts extremely fast on a sub-picosecond time scale, indicating that the initial process is purely electronic and no structural instability is associated with it. Microscopic metallic domains form within 15~fs; while in \aeti\ they condense further to a macroscopic metallic region on a timescale of about 200~fs, in \tetrz\ a large potential barrier against molecular displacement prevents the evolution of macroscopic metallic islands; this is sketched in Figure~\ref{fig:alphaPIPT2}(a). Eventually, both systems relax back to the charge-ordered state within several picoseconds or nanoseconds. This fast relaxation is clearly different from the one on the millisecond time scale observed when a high dc electric field is applied, as shown in Figure~\ref{fig:aeti_contour2}. The decay processes strongly depend on temperature and light intensity: a significant slowing-down is found when the temperature rises close to $T_{\rm CO}$, suggesting an inhomogeneous character of the photo-induced metallic state. While for low intensities these microscopic metallic domains relax rapidly to the charge-ordered insulating state, for high laser intensities the decay time of the metallic state gets markedly longer, enabling the formation of macroscopic domains associated with some molecular rearrangement. The decay process consists of two components; the two relaxation times of the metallic state follow a critical slowing-down behavior. Slower recovery takes place at temperatures close to $T_\mathrm{CO}$, while fast recovery occurs at temperatures deep below $T_\mathrm{CO}$. The corresponding relaxation times, slow and fast, are associated with the relaxation of macroscopic and microscopic metallic islands, respectively [Figure~\ref{fig:alphaPIPT2}(b-d)].
\begin{figure} \centering\includegraphics[width=\columnwidth]{alphaPIPT2.pdf} \caption{(a) Schematic representation of the primary processes of the photo-induced phase transition: initially the microscopic metallic domains are generated; they can quickly recover the charge-ordered state as in $\theta$-(BEDT-TTF)$_2$RbZn(SCN)$_4$, or they can evolve into macroscopic metallic domains as in $\alpha$-(BEDT-TTF)$_2$I$_3$. (b) Relaxation times of the photoinduced metallic state in $\alpha$-(BEDT-TTF)$_2$I$_3$, $\tau_{\rm fast}$ and $\tau_{\rm slow}$ versus reduced temperature for several excitation intensities. (c) Fraction of the fast (blue dots: $\tau_{\rm fast} \approx 1$~ps) and slow (red dots: $\tau_{\rm slow} \approx 1$~ns) decay components as a function of $I_{ex}$ at $T=124$\,K. (d) Schematic illustrations of the free-energy surface for $T \ll T_{\rm CO}$ (fast relaxation) and $T\approx T_{\rm CO}$ (slow relaxation). The energy barrier $E_B$ is much smaller than the thermal energy $k_BT\approx 10$~meV (after \cite{Iwai12}).} \label{fig:alphaPIPT2} \end{figure} Very recently, ultrafast THz-wave generation upon photoexcitation was discovered in $\alpha$-(BEDT-TTF)$_2$I$_3$ \cite{Itoh14, Itoh18}. THz-wave generation by means of optical rectification is a second-order nonlinear optical process -- just as the generation of a second-harmonic signal (SHG) is. Remarkably, the electric-field amplitude of the THz wave sets in at $T_\mathrm{CO}$, as the SHG does (Figure~\ref{fig:alphaSHG}), indicating that it originates in the charge-ordering-driven ferroelectric polarization. The difference in the photo-induced dynamics found in \aeti\ and \tetrz\ is only quantitative. Numerical studies using the extended Peierls-Hubbard model on an anisotropic triangular lattice show \cite{Miyashita10} that this difference arises from the distinct crystallographic symmetry and the different degree of structural modification associated with the thermally driven charge-order phase transition: in the former low-symmetry system it is very small \cite{Kakiuchi07, Tanaka08}, while in the latter high-symmetry compound it is rather substantial \cite{HMoriPRB98,Mori99b,Watanabe04,Miyashita07}. A detailed account of this topic is given by Iwai in \cite{Iwai12}. \subsection{Coherent transport} \label{sec:coherenttransport} In a seminal paper, Merino and McKenzie \cite{Merino00} demonstrated that the transport properties of $\kappa$-BEDT-TTF salts can be described even on a quantitative level within dynamical mean-field theory (DMFT) \cite{Rozenberg94,Rozenberg95,Limelette03a,Merino08}. At low temperatures, \etbr, \etscn\ and similar organic conductors constitute prime examples of Fermi liquids: the resistance exhibits a quadratic temperature dependence \begin{equation} \rho(T) = \rho_0 + A T^2 \quad , \label{eq:FL1} \end{equation} obeying the Kadowaki-Woods rule \cite{Jacko09} \begin{equation} A \propto \gamma_e^2 \quad , \label{eq:KadowakiWoods} \end{equation} with $\gamma_e$ the Sommerfeld coefficient, characterizing the electronic contribution to the specific heat, {\it i.e.} the electronic density of states at the Fermi level $E_F$. This coherent state extends up to the Fermi-liquid temperature $T_{\rm FL}$, where the parabolic increase in $\rho(T)$ is lost.
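To illustrate the scaling implied by Eqs.~(\ref{eq:FL1}) and (\ref{eq:KadowakiWoods}) with a deliberately hypothetical number: since $\gamma_e \propto m^*$, an enhancement of the effective mass by a factor of 3 upon approaching the Mott transition implies
\begin{equation}
\frac{A'}{A} = \left( \frac{m^{*\prime}}{m^{*}} \right)^{2} = 3^2 = 9 \quad ,
\end{equation}
i.e. an almost tenfold larger $T^2$ coefficient; if, in addition, the product $A\,T_{\rm FL}^2$ stays approximately constant (as suggested in Section~\ref{sec:FermiLiquid} below), the Fermi-liquid temperature correspondingly shrinks to $T_{\rm FL}/3$.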
Metallic transport, defined by ${\rm d}\rho(T)/{\rm d}T > 0$, prevails up to the so-called Brinkman-Rice temperature $T_{\rm BR}$, well above the Ioffe-Regel-Mott limit where metallic transport is supposed to break down \cite{Pustogow19}, see Figure~\ref{fig:schematicPhaseDiagrams}(b). Above this maximum in $\rho(T)$ around 80 to 100~K, the systems exhibit some incoherent behavior resulting in a semiconducting temperature dependence. The smooth crossover from coherent Fermi-liquid to more incoherent excitations leads to a non-monotonic $T$-dependence not only in the electrical resistance, but also in the thermopower and the Hall coefficient. While the overall behavior is quite generic, details are sample dependent and due to intrinsic remnant disorder caused by an incomplete ordering of the ethylene end groups of the BEDT-TTF molecules at about 75\,K~\cite{Pinteric02,Strack05}; cf.\ Figure~\ref{fig:BEDT-TTF} in Chapter~\ref{sec:structure}. Among the molecular quantum materials, the $\kappa$-phase BEDT-TTF family serves as the primary testing ground for the correlation dependence of their electronic properties. The starting point is either the antiferromagnetic Mott insulator \etcl\ or the highly frustrated sister compound \etcn. The systems can be tuned into the metallic or even superconducting Fermi-liquid state by increasing the bandwidth via external pressure or chemical modification, as depicted in Figure~\ref{fig:schematicPhaseDiagrams}(b) and (c). Limelette {\it et al.} investigated \etcl\ under pressure $p>30$~MPa in order to explore the emerging Fermi-liquid regime \cite{Limelette03a}: they could characterize the dc resistivity according to Eq.~(\ref{eq:FL1}) up to $T_{\rm FL}$, which is of the order of 35~K at 50~MPa. The prefactor $A(p)$ strongly increases as the pressure is reduced and diverges at a critical pressure of approximately 20~MPa (see Figure~\ref{fig:PhaseDiagram_k-Cl}). \begin{figure} \centering \includegraphics[width=0.9\columnwidth]{kappa_BrCl.pdf} \caption{ (a) Temperature dependence of the in-plane dc resistivity $\rho(T)$ of $\kappa$-(BEDT-TTF)$_2$\-Cu[N(CN)$_2$]Br$_{x}$Cl$_{1-x}$ with different Br substitution as indicated. For $x=40\%$, metallic fluctuations indicate the proximity to the metallic phase. \label{fig:kappa-BrCl_dc} (b)~Schematic phase diagram of $\kappa$-(BEDT-TTF)$_2X$. The on-site Coulomb repulsion with respect to the bandwidth $U/W$ can be tuned either by external pressure or by modifying the anions $X$. The bandwidth-controlled phase transition between the insulator and the Fermi liquid/superconductor can be explored by gradually replacing Cl by Br in $\kappa$-(BEDT-TTF)$_2$Cu[N(CN)$_2$]Br$_{x}$Cl$_{1-x}$. The first-order insulator-to-metal transition line is sketched after the generic diagram for $\kappa$-(BEDT-TTF)$_2X$ \cite{Ito96,Kanoda97a,Limelette03a}. The data points are obtained from transport measurements (a) and magnetic susceptibility (from \cite{Yasin11}). \label{fig:kappa_phasediagram} } \end{figure} Replacing Cl by Br in \etbrcl\ provides an alternative route to shift the compound across the metal-insulator transition. Despite the larger ions, the orbital overlap increases and the effective Coulomb repulsion $U/W$ decreases, as depicted in the phase diagram of Figure~\ref{fig:kappa_phasediagram}(b).
Comprehensive optical, transport and magnetic investigations on \etbrcl\ yield the charge and spin dynamics as the metallic state evolves, when moving across the phase boundary or reducing the temperature \cite{Yasin11,Faltermeier07,Merino08,Dumm09,Dressel09}. As shown in Figure~\ref{fig:kappa-BrCl_dc}(a), the pristine and weakly substituted compounds are strongly insulating; but for $x=0.4$ metallic fluctuations become obvious at low temperatures. Above $x=0.7$, metallic and superconducting properties are present. It is important to note, however, that in these cases the coherent charge-carrier response develops only below approximately $50$~K, in accord with theory \cite{Merino00}. In Figure~\ref{fig:kappa-scattering}(d)-(f), $\rho(T)$ is presented on a quadratic temperature scale in order to better visualize the Fermi-liquid behavior. The $[\rho(T)-\rho_0] \propto T^2$ dependence holds up to the Fermi-liquid temperature $T_{\rm FL} \approx 30$~K, basically independent of $x$. Pressure-dependent studies \cite{Limelette03a} shown in Figure~\ref{fig:PhaseDiagram_k-Cl}(a) reveal a somewhat stronger increase with $p$. \subsubsection{Fermi-liquid behavior} \label{sec:FermiLiquid} The obtained prefactor $A(x)$ in Eq.~(\ref{eq:FL1}) increases when going from the pristine Br compound to $x=0.7$. Within Landau's Fermi-liquid theory, the factor $A$ characterizes the strength of the electron-electron interaction and is related to the effective carrier mass $m^*$ or the effective Fermi temperature $T_F^*$ as \begin{equation} A\propto (m^*)^2 \propto (T_F^*)^{-2} \quad , \label{eq:KadowakiWoods2} \end{equation} in accord with the Kadowaki-Woods relation (\ref{eq:KadowakiWoods}), which is well established for heavy fermions or transition-metal compounds, and was also seen in organics \cite{Kadowaki86,Ito96,Miyake89,Maeno97,Jacko09}. The product $A\times (T_{\rm FL})^2$ was suggested to remain constant when going through the phase diagram either by chemical or physical pressure \cite{Limelette03a,Yasin11}; more detailed studies are required to confirm this point. We should note that results on $\kappa$-(BEDT-TTF)$_2$Cu[N(CN)$_2$]Br have been reported which deviate from this behavior \cite{Strack05}, likely caused by remnant disorder \cite{Pinteric02}. In addition to the temperature dependence, optical spectroscopy provides the possibility to extract the frequency dependence of the scattering rate and of the effective mass, when performing an extended Drude analysis of the conductivity spectra \cite{DresselGruner02,BasovRMP}. Eventually we can consider an $\omega$-$T$ scaling of the scattering and interaction processes with respect to these two energies. For \etbrcl\ single crystals with $x = 0.9$, 0.85, and 0.73 one obtains a gradual increase in the effective mass from $m^*/m_b\approx 2$ to 6 \cite{Dumm09,Dressel11}. This agrees with the general prediction by Brinkman-Rice theory \cite{Imada98} and DMFT calculations \cite{Georges96} that the electronic correlations become stronger on approaching the Mott transition from the metallic side. More recent calculations by resonating-valence-bond theory of the Hub\-bard-Heisenberg model also predict a gradual increase in $m^*$ for values of the effective repulsion $U/W$ not too close to the first-order Mott transition and a strong increase very close to the transition \cite{Powell05}.
Kagawa {\it et al.} \cite{Kagawa09} noted that $m^*$ is well defined only in the Fermi-liquid regime at low temperatures; in the `bad metallic' regime of the phase diagram [Figure~\ref{fig:kappa_phasediagram}(b)], however, $m^*$ is probably not an adequate physical parameter to characterize the Mott criticality at the critical end point. \begin{figure} \centering \includegraphics[width=0.6\columnwidth]{kappa_scattering.pdf} \caption{\label{fig:kappa-scattering} Temperature and frequency dependence of the scattering rate for \etbrcl\ with different substitution values $x$. (a) - (c) The frequency-dependent part of the scattering rate $\gamma(\omega) - \gamma_0$ at $T=5$~K of $\kappa$-(BEDT-TTF)$_2$Cu[N(CN)$_2$]Br$_{x}$Cl$_{1-x}$ as a function of the squared frequency. We determined for the frequency-independent dc limit of the scattering rate $\gamma_0$ the following values: 315~cm$^{-1}$ ($x=0.73$), 48~cm$^{-1}$ ($x=0.85$), and 280~cm$^{-1}$ ($x=0.9$). Note the different vertical scales for the frames. (d) - (f) The temperature-dependent scattering rate $\tau^{-1}(T)=\gamma(T)$ is obtained from the in-plane dc resistivity, $(\rho(T)-\rho_{0})\propto 1/\tau(T)$. The plots as a function of $T^2$ in the low-temperature region yield a quadratic behavior up to $T_{0}$. Below $T_c\approx 12$~K the systems become superconducting (data from \cite{Dumm09,Yasin11}). } \end{figure} Fermi-liquid theory makes predictions about the temperature dependence of the scattering rate as well as about its frequency dependence \cite{AshcroftBook,AbrikosovBook,PinesBook,Maslov17}: \begin{equation} \gamma(T,\omega) = \tau^{-1} = \gamma_0 + B\left[(p \pi k_B T/\hbar)^2 + \omega^2\right] \quad , \label{eq:FL2} \end{equation} where the constant $\gamma_0$ describes the residual scattering processes at zero energy due to impurities, surfaces etc., corresponding to the residual resistivity $\rho_0$ in Eq.~(\ref{eq:FL1}). The parameter $B$ is related to the density of electronic states at the Fermi level $E_F$. For purely inelastic scattering, $p=2$ is predicted \cite{Landau57,Gurzhi59,Rosch06,Berthod13}, but experimental verification is scarce \cite{Yasin11,Nagel12,Mirzaei13,Stricker14,Tytarenko15}. As seen from Figure~\ref{fig:kappa-scattering}(a)-(c), a quadratic frequency dependence of the low-temperature scattering rate was obtained in the case of \etbrcl\ up to approximately 600~cm$^{-1}$; above that, unconventional non-Fermi-liquid behavior prevails even at $T=5$~K \cite{Dumm09,Dressel11}. The corresponding temperature-dependent dc resistivity is plotted in panels (d) to (f). Consistently, the slope rises as the Mott transition is approached: the effective correlations $U/W$ increase due to the reduction of the bandwidth. From the optical data taken at different temperatures, $p=2.3$ is extracted, close to the expectation. As seen in Figure~\ref{fig:kappa-scattering}, the upper limit of the Fermi-liquid regime is determined by deviations from the $T^2$ and $\omega^2$ line. It is interesting to note that in all cases the rise in temperature has a much stronger effect than the increase in frequency.
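A back-of-the-envelope comparison makes this plausible (assuming the inelastic value $p=2$ in Eq.~(\ref{eq:FL2})): the thermal and frequency terms contribute equally when $\hbar\omega = p\pi k_B T$, i.e. in wavenumbers
\begin{equation}
\tilde{\nu} = \frac{\omega}{2\pi c} = \frac{p\pi k_B T}{h c} \approx 4.4~{\rm cm}^{-1}\,{\rm K}^{-1} \times T \quad ,
\end{equation}
so at $T=100$~K a frequency of roughly 440~\cm\ is required before the $\omega^2$ term matches the thermal one.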
An indirect method of probing the metallic phase boundary was suggested by Sasaki {\it et al.} \cite{Sasaki04b}, similar to the spatially resolved study presented in Figure~\ref{fig:Scan}: by carefully following the $\nu_3(a_g)$ mode of several $\kappa$-(BEDT-TTF)$_2X$ salts when the temperature is reduced, they can identify the border $T^*$ between the bad metallic behavior at elevated temperatures and the Fermi-liquid behavior of a correlated good metal at low temperatures. Variations of the electronic properties affect the emv coupling of the molecular vibrations. They also found a shift in the $\nu_3(a_g)$ mode when the boundary to the Mott insulator is crossed by cooling \etcl. \subsubsection{$\omega$-$T$ scaling} \label{sec:wTscaling} A comprehensive investigation of the Fermi-liquid regime in the two-dimensional organic compounds was conducted by Pustogow {\it et al.} \cite{Pustogow20}, who compared the transport and optical properties of the substitutional series \stfcn\ with $0 \leq x \leq 1$. Due to the extended selenium orbitals, the bandwidth $W$ is enlarged as the BEDT-STF content increases; the Mott transition takes place around $x = 0.12$ as shown in Figure~\ref{fig:dcPressureAlloy}(a). The optical conductivity displayed in the inset of Figure~\ref{fig:STF-optics}(a) resembles an insulator [$\sigma_1(\omega\rightarrow 0) = 0$] for low $x$ with a pronounced mid-infrared absorption band ($U \approx 2000$~\cm) that corresponds to excitations across the Mott-Hubbard gap. With increasing $x$, spectral weight shifts to lower energies and eventually a Drude-like term appears as the metallic phase is entered \cite{Saito20}. The contour plot illustrates the evolution of the conductivity as $x$ rises. Fueled by the vanishing Mott-Hubbard excitations, the coherent quasiparticle response appears above $x \approx 0.2$. In that range, an extended Drude analysis of the optical conductivity \cite{DresselGruner02} yields the energy dependence of the scattering rate and effective mass, presented in Figure~\ref{fig:STF-optics}(b) and (c). When plotting $\gamma(\omega)$ as a function of $\omega^2$, a straight-line behavior can be observed up to $\omega_{\rm FL}$, which corresponds to the upper limit of the Fermi-liquid regime. With increasing \begin{figure}[h] \centering \includegraphics[width=0.8\columnwidth]{STF-optics.pdf} \caption{ Optical properties of the series \stfcn. The inset in panel (a) displays the optical conductivity for several crystals with different substitution $x$ as indicated. The data are recorded at $T=5$~K for the polarization $E\parallel c$. The most prominent feature is the Mott-Hubbard band around 2000~\cm. Upon increasing $x$, this peak shifts towards higher energies. At the same time, a strong Drude-type zero-frequency conductivity emerges indicating the transition from a gapped Mott insulator to a strongly correlated metallic state with renormalized quasiparticles. The main frame contains the contour plot of the conductivity spectra at different substitutional values $x$ ($\sigma_1$ is plotted on a logarithmic blue-white scale) illustrating the spectral-weight shift from high to low energies as the BEDT-STF content increases. As correlations become less pronounced, the Mott gap closes and coherent quasiparticles are stabilized at low frequencies. The strong but substitution-independent vibrational feature at 1250~\cm\ corresponds to the emv coupled $\nu_2(a_g)$ mode.
(b) From the extended Drude analysis, the energy-dependent scattering rate $\gamma$ and effective mass $m^*$ are obtained. For the metallic compounds $x\geq 0.28$, $\gamma(\omega)$ increases quadratically with frequency, providing strong evidence for a Fermi-liquid response. With increasing $x$, the slope $B$ is reduced and the $\gamma(\omega) \propto \omega^2$ range expands to higher frequency, shown for the two examples $x=0.44$ and 1.00. (c) Effective mass as a function of frequency for different BEDT-STF substitutions. The mass enhancement becomes more pronounced as the Mott transition is approached; for $x=0.44$ one finds $m^*/m_b=2.6$ (taken from \cite{Pustogow20,Pustogow21}). \label{fig:STF-optics}} \end{figure} $x$ the boundary considerably shifts to higher energies, in accord with the observations made on \etbrcl\ (Figure~\ref{fig:kappa-scattering}). It is worth noting that the Fermi-liquid behavior is observed over an extremely large temperature and frequency range because of the narrow bands compared to transition-metal oxides, for instance. The slope determined by the pre\-fac\-tor $B$ decreases with rising $x$, {\it i.e.} as we move away from the insulator-metal transition. Correspondingly, from the frequency dependence of the renormalized mass, a strong increase of $m^*$ is observed as $x$ is reduced towards the Mott transition at 0.12. The low-frequency limit is taken as a measure of the correlation strength, which is related as $B \propto (m^*/m_{\rm b})^2$ in analogy to Eq.~(\ref{eq:KadowakiWoods}). The crucial point is the $\omega$-$T$ scaling expressed in Eq.~(\ref{eq:FL2}): the quadratic dependence of the scattering rate on frequency, plotted in Figure~\ref{fig:STF-optics}(b), is observed at different temperatures; the curves can be unified by scaling with a temperature-independent factor $p$. This is taken as compelling evidence for the universality of Landau's Fermi-liquid concept upon varying the correlation strength and does not leave much space for theories of a quasi-continuous Mott transition that involve a divergent Kadowaki-Woods ratio \cite{Senthil08}. Interestingly, the proportionality between the $T^2$ and $\omega^2$ dependence of $\gamma(T,\omega)$ exceeds the inelastic limit $p = 2$ and exhibits a pronounced enhancement towards the Mott metal-insulator transition \cite{Pustogow20,Pustogow21}. This behavior is in contrast to the observations in various strongly correlated electron systems \cite{Maslov17}, and can be explained by the fact that in \stfcn\ the narrow quasiparticle peak centered at $\omega = 0$ coexists and considerably overlaps with the broad, non-metallic Hubbard bands at $\pm U/2$ (with a width $W$) when we are close to the Mott transition. In other words, the strongly correlated charge carriers are not as freely moving as expected for a good metal; in part they are localized due to the strong Coulomb repulsion. Further theoretical studies are required in order to elucidate these issues in more depth. \subsubsection{Bad-metal regime and Ioffe-Regel-Mott limit} The Fermi-liquid transport at low temperatures ($T<50$~K or so) is followed by a vaguely defined `bad metal' regime \cite{Emery95}. Although quasiparticles are still present, their lifetime is significantly limited \cite{Deng13}. This regime --~often identified with a linear-in-$T$ resistivity~-- is not well understood, although it is continuously the subject of theoretical treatises \cite{Hartnoll15}.
At even higher temperatures, when transport becomes incoherent and dominated by the large scattering rate, metallic conduction breaks down. The so-called Ioffe-Regel-Mott limit is the maximal resistivity that can be reached in a metal according to the Boltzmann semiclassical theory \cite{Gunnarsson03,Hussey04}. As first realized by Ioffe and Regel \cite{Ioffe60}, the metallic conductivity is restricted as the mean free path $\ell$ cannot get smaller than the lattice spacing $d$ \cite{Mott72}; in other words, the linear-in-temperature increase of the resistivity should saturate at some point. There are numerous examples of strongly correlated systems, however, for which $\rho(T)$ does not show a sign of saturation up to fairly elevated temperatures. For that reason, it is rather illuminating to look at the optical response, where the low-frequency spectral weight becomes reduced and the simple Drude-like behavior modified well before $\rho(T)$ saturates. We can define `bad metals' by the loss of coherence. Dynamical mean-field calculations \cite{Rozenberg95,Merino00,Limelette03a,Deng13} predict the modification of the Drude peak when the resistivity extends well above the Ioffe-Regel-Mott limit, in full agreement with experiments on several $\kappa$-(BEDT-TTF)$_2X$ salts \cite{Merino08,Dumm09,Li19,Saito20,Pustogow20}, as shown in Figure~\ref{fig:STF-optics}(a), but also on $\theta$-(BEDT-TTF)$_2$I$_3$. \begin{figure} \centering \includegraphics[width=1\columnwidth]{theta-I3.pdf} \caption{\label{fig:theta-I3_dc} Temperature-dependent electronic properties of $\theta$-(BEDT-TTF)$_2$I$_3$. (a) Parallel and perpendicular to the $ab$-planes, the resistivity exhibits a $\rho(T)\propto T^{1.25}$ power-law behavior at elevated temperatures. (b) Below $T_0\approx 20$~K a quadratic dependence reveals a Fermi liquid (after \cite{Dressel11}). (c) In-plane conductivity spectra of $\theta$-(BEDT-TTF)$_2$I$_3$ at different temperatures. The Drude component at low temperatures diminishes and shifts to a finite-energy peak (indicated by black triangles) with increasing $T$ (after \cite{Takenaka05}). \label{fig:theta-I3_optics} } \end{figure} The two-dimensional organic conductor $\theta$-(BEDT-TTF)$_2$I$_3$ draws attention for numerous reasons. In contrast to other $\theta$-phase salts that undergo charge-ordering transitions into an insulating ground state \cite{Mori99b,Mori06,Alemany15}, the dc resistivity of $\theta$-(BEDT-TTF)$_2$I$_3$ remains metallic down to $T_c=3.6$~K. The transition temperature can be raised above 5~K when the specimens are tempered at 70\,$^{\circ}$C \cite{Salameh07}. Quantum oscillations prove the presence of a Fermi surface with two-dimensional orbits and one-dimensional trajectories \cite{Salameh07,Terashima94,Nothardt04,Nothardt06}. The presence of a Fermi-liquid state is also seen from the quadratic temperature increase in the resistivity observed at low temperatures. As displayed in Figure~\ref{fig:theta-I3_dc}(a,b), it turns into a $T^{1.25}$ dependence above $T_{\rm FL}\approx 20$~K or so. A closer inspection reveals a slight kink around 120~K where also the anisotropy $\rho_{\parallel}/\rho_{\perp}$ changes \cite{Salameh07}. As pointed out by Takenaka {\it et al.} \cite{Takenaka05}, above $T\approx 50$~K the in-plane resistivity exceeds the Ioffe-Regel-Mott limit $\rho_{\rm IRM}=(h/e^2)\,d=4.4\times 10^{-3}~\Omega\,$cm for an interlayer distance of $d=17$~\AA\ \cite{Tamura88}, implying incoherent transport above.
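The quoted value follows directly from the fundamental constants (a quick check of the arithmetic):
\begin{equation}
\rho_{\rm IRM} = \frac{h}{e^2}\, d \approx 25.8\times 10^{3}~\Omega \times 1.7\times 10^{-7}~{\rm cm} \approx 4.4\times 10^{-3}~\Omega\,{\rm cm} \quad ,
\end{equation}
consistent with the bulk resistivity $C\cdot(h/e^2)$ quoted for pressurized \aeti\ in Figure~\ref{fig:dcpressure}.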
The transition from a coherent quasiparticle state to an incoherent state was addressed by temperature-dependent optical spectroscopy on $\theta$-(BEDT-TTF)$_2$I$_3$. Despite the metallic behavior of $\rho(T)$, a well-defined Drude peak is present only at low temperatures. Upon warming to $T=100$~K, the peak moves to finite frequencies in the far-infrared spectral range. The origin of the finite-energy peak seen in Figure~\ref{fig:theta-I3_optics}(c) is not clear. For both directions ($E\parallel a$ and $b$) the maximum moves to approximately $250 - 400$~\cm\ when going up to room temperature. It is discussed whether this spectral behavior represents a `dynamical localization' theoretically explored for other organic semiconductors in more detail \cite{Fratini14}, or whether it can be treated like fluctuating density waves \cite{Delacretaz17}. The disappearance of the Drude peak is accompanied by a shift of spectral weight to energies above 1~eV. The loss of coherence and the redistribution of a substantial amount of spectral weight is explained by strong electronic correlations, as has been observed in a large variety of correlated bad metals \cite{Basov11}. The disappearance of the Drude peak indicates a strong suppression of the kinetic energy with increasing $T$: the charge-carrier transport becomes incoherent. The spectral weight shifts to energies higher than the bandwidth. Gunnarsson and collaborators pointed out \cite{Gunnarsson07} that the reason for this violation of the Ioffe-Regel-Mott limit is not simply due to interactions, like in some transition-metal oxides, or due to the small bandwidth compared to the temperature. Instead, they estimate the dc conductivity from the total spectral weight, which is governed by the kinetic energy $E_k$: \begin{equation} \sigma_1(0) = \frac{\varphi}{W} \int_0^{\infty}\sigma_1(\omega){\rm d}\omega \approx \frac{\left| E_k \right| }{W} \quad , \end{equation} where $\varphi = 1$--2 and $W$ is the bandwidth of the order of 500~meV. In the case of $\kappa$-(BEDT-TTF)$_2X$ salts \cite{Faltermeier07,Dumm09}, $\sigma_1(\omega)/\rho_{\rm IRM}$ is suppressed due to correlations reducing $|E_k|$ and expanding the energy scale \cite{Gunnarsson07}. It is interesting to note that the behavior of the optical conductivity $\sigma_1(\omega)$ in the bad metallic regime at high temperatures is rather similar to that seen in disordered materials at low $T$ \cite{Hussey04}. Radonji{\'c} {\it et al.} \cite{Radonjic10} showed that with increasing disorder the bandwidth $W$ increases, but disorder does not lead to qualitative differences. The large bandwidth makes the compounds more metallic, in accord with experiments \cite{Sasaki12c} as discussed in Sec.~\ref{sec:randomness}. With proper scaling, the optical conductivity remains even quantitatively very similar. No question, the `bad metallic' regime remains a desideratum for future research. Using numerical methods, Wessel and collaborators considered the Mott metal-insulator transition while varying the degree of frustration \cite{Dang15}. Most interestingly, the slope of the phase boundary changes sign, from a positive slope of ${\rm d}T/{\rm d}p$ at $t^{\prime} = t$ to a negative slope for the unfrustrated case $t^{\prime}=0$. This basically reflects the difference in entropy of the insulating ground state. Similar results have been obtained earlier by Tremblay and collaborators. They also provided a detailed thermodynamic description of the correlation-driven Mott transition \cite{Walsh19a,Walsh19b,Sordi19}, where they can trace the quantum Widom line above $T_{\rm crit}$ and the spinodal lines $U_{c1}$ and $U_{c2}$ below.
Particular attention was paid to the local entropy. \subsection{Ferroelectricity driven by charge order in dimerized solids} \label{sec:COdimerized} \label{sec:COHgCl} Charge-ordering phenomena are rare in dimerized two-dimensional BEDT-TTF compounds because the organic layers consist of face-to-face pairs of {BEDT-TTF}$^{0.5+}$ molecules, {\it i.e.} dimers, resulting in an effectively half-filled band. In that case the on-site Coulomb repulsion $U$ is much more important compared to the nearest-neighbor interaction $V$. For a long time, there were only a few compounds reported to exhibit appreciable charge order, such as $\kappa$-(BEDT\--TTF)$_4$\-PtCl$_6$$\cdot$C$_6$H$_5$CN, the triclinic $\kappa$-(BEDT\--TTF)$_4$\-[$M$(CN)$_6$]\-[N(C$_2$H$_5$)$_4$]$\cdot$3H$_2$O and the monoclinic $\kappa$-(BEDT\--TTF)$_4$[$M$(CN)$_6$]\-[N(C$_2$H$_5$)$_4$]\-$\cdot$2H$_2$O (with $M$ = Co$^{\rm III}$, Fe$^{\rm III}$, and Cr$^{\rm III}$) salts. However, all of them are challenging to grow and limited in crystal size; hence details of their physical properties and electronic states are lacking at the moment \cite{Swietlik06, Ota07, Lapinski13}. Due to the lack of appreciable charge disproportionation in most of the commonly studied Mott insulators and quantum spin-liquid compounds, a family of dimerized BEDT-TTF compounds has drawn particular attention: the $\kappa$-{\rm (BEDT-TTF)$_2$Hg(SCN)$_2$}$X$ series, where $X$ = Cl$^{-}$, Br$^-$, I$^-$, which was introduced almost twenty years ago by Lyubovskaya {\it et al.} \cite{Lyubovskaya91,Aldoshina93,Lyubovskii96,Zhilyaeva99,Lyubovskii02}. In contrast to the $\kappa$-salts containing a Cu ion in the polymeric anion sheet, the BEDT-TTF molecules of the Hg compounds are slightly displaced with respect to each other within a dimer, as shown in Figure~\ref{fig:structure_HgBrCl}(a). This results in a smaller intra-dimer transfer integral $t_d=129$~meV and correspondingly a weaker on-site Coulomb repulsion $U\approx 250$~meV \cite{Drichko14}. For the case of \hgcl, {\it ab initio} density functional theory calculations were performed using the full-potential local-orbital basis, the generalized gradient approximation and the room-temperature crystal structure \cite{Gati18a}. With interdimer transfer integrals $t^{\prime} = 51.0$~meV and $t= 40.4$~meV, as defined in Figure~\ref{fig:structure_HgBrCl}(b), the frustration is rather strong, $t/t^{\prime} \approx 0.8$, although not as close to unity as in the canonical spin-liquid systems listed in Table~\ref{tab:1}, Section~\ref{sec:propertiesQSL}. \hgcl\ is characterized by a moderate strength of dimerization, thus bridging the space between weakly dimerized, quarter-filled systems on one side (Chapter~\ref{sec:ChargeOrder}) and strongly dimerized, half-filled dimer-Mott systems (Chapters~\ref{sec:MottTransition} and \ref{sec:Frustration}) on the other side (cf.\ Hotta's classification in Figure~\ref{fig:Hotta1}). At this point, it stands as the sole example of a dimerized solid that undergoes a pronounced metal-insulator phase transition due to charge ordering when cooled through $T_{\rm CO}=30$\,K \cite{Drichko14, Ivek17}. In the metallic state, at temperatures above $T_{\rm CO}$, the optical and dc resistivity properties are characteristic of a metal with a half-filled band and strong electronic correlations. A broad band of the C=C vibration $\nu_{27}({\rm b}_{1u}$) peaks around 1455\,\cm{} at room temperature and narrows upon cooling.
Below about 100~K, it becomes obvious that the band consists of two peaks, which are assigned to the different environments of the BEDT-TTF molecules in the unit cell. Charge ordering is evidenced by a clear splitting of the $\nu_{27}$ mode below $T_{\rm CO}$ into two well-separated bands at 1441 and 1470~\cm, plotted in Figure \ref{fig:HgClnu27}. The splitting takes place as abruptly as in \aeti\ and \tetrz, which exhibit charge ordering and ferroelectricity (see Section~\ref{sec:COweaklydimerized}). Fitting the optical bands by two Fano resonances for temperatures above $T_{\rm CO}=30$\,K and by four modes below $T_{\rm CO}$ reveals a charge imbalance of $2\delta_{\rho} \approx 0.2e$ between two different molecular sites, which are most likely located within the BEDT-TTF dimer. \begin{figure} \centering\includegraphics[clip,width=1\columnwidth]{HgClnu27+stripes.pdf} \caption{Molecular vibration $\nu_{27}({\rm b}_{1u}$) as a function of temperature of \hgcl\ showing a clear splitting when the charge ordering is established at $T_{\rm CO}=30$~K. A shoulder in the mode above $T_{\rm CO}$ indicates two crystallographically different sites per unit cell; below $T_{\rm CO}$ this becomes even more pronounced as the band splits into two pairs of components. The results of fits with two and four components, respectively, are indicated by red and blue dots in the inset (after \cite{Ivek17}). On the right we show a schematic representation of the horizontal charge stripe arrangement within the $bc$ dimer layer proposed for \hgcl\ \cite{Drichko14}. The dark and light blue circles denote charge-rich and charge-poor molecules, respectively.} \label{fig:HgClnu27}\label{fig:HgClstripes} \end{figure} This value is certainly smaller than the charge disproportionation $2\delta_{\rho} \approx 0.6e$ found in \aeti\ and \tetrz, but significantly larger than the charge disproportionation in other dimerized salts, such as \etcl\ or \etcn, which does not exceed the limit of $2\delta_{\rho} \approx \pm 0.01e$ (see Section~\ref{sec:quantumelectricdipoles}). The charge ordering transition in \hgcl\ is of first order, as testified by a strong change in the magnitude of low-frequency resistance fluctuations \cite{Thomas19}, by jump-like anomalies of the relative length change implying a divergent thermal expansion coefficient, and by thermal hysteresis between cooling and warming cycles \cite{Gati18a}. The dominant lattice change takes place along the out-of-plane $a$-axis, indicating that the coupling between anions and molecular cations plays an important role in the formation of charge order. Lang and collaborators argue that the interplay between charge ordering and cation-anion coupling results in the charge stripes running along the $c$-axis (Figure~\ref{fig:HgClstripes}), as previously proposed by Drichko {\it et al.} based on the large anisotropy of the optical spectra \cite{Drichko14}. Unfortunately, experimental evidence for the charge pattern shown in Figure~\ref{fig:HgClstripes} is still missing: x-ray diffraction measurements performed down to $T=10$~K could not resolve any symmetry change \cite{Drichko14}. The problem might be due to melting of the charge order at low temperatures. Results from a recent Raman scattering study suggest that the charge-ordered state formed at $T_{\rm CO}=30$~K is not the ground state; rather, it persists only down to $T=15$~K and gradually melts below that temperature \cite{HassanDrichko19}.
In Figure~\ref{fig:HgClnu2} the temperature evolution of the Raman spectra is plotted in the frequency range of the charge-sensitive $\nu_{2} (a_{g})$ stretching vibration that involves the central C=C bond of the BEDT-TTF molecules. The single $\nu_{2}$ mode at 1490\,\cm{}, corresponding to {BEDT-TTF}$^{0.5+}$ and observed above $T_{\rm CO}$, splits into two bands at 1475~\cm\ and 1507~\cm{} at $T=20$~K, corresponding to {BEDT-TTF}$^{0.4+}$ and {BEDT-TTF}$^{0.6+}$ below $T_{\rm CO}$. On cooling below 15~K, these bands gradually broaden, move closer in frequency and lose spectral weight concomitantly as the band corresponding to {BEDT-TTF}$^{0.5+}$ gains it. \begin{figure} \centering\includegraphics[clip,width=0.6\columnwidth]{HgClnu2+3.pdf} \caption{The fully symmetric molecular vibrations $\nu_{2}(a_g)$ and $\nu_{3}(a_g)$ are very sensitive to the local charge per molecule; the corresponding motions of the C=C bonds are indicated to the right. The Raman shift measured for \hgcl\ at several representative temperatures above and below $T_{\rm CO}=30$~K evidences that the $\nu_{3}$ mode remains unaffected. Significant changes, however, are observed for the $\nu_{2}$ mode. {ET}$^{0.5+}$, {ET}$^{0.4+}$ and {ET}$^{0.6+}$ denote $\nu_{2}$ bands corresponding to $+0.5e$, $+0.4e$ and $+0.6e$ charge per BEDT-TTF molecule. In the metallic state at $T=45$~K the {BEDT-TTF}$^{0.5+}$ band is observed, while in the charge-ordered state at 20~K two bands, {BEDT-TTF}$^{0.4+}$ and {BEDT-TTF}$^{0.6+}$, are clearly resolved. On further cooling, at $T=10$~K and 4~K, signatures of charge-order melting are detected: the two bands {BEDT-TTF}$^{0.4+}$ and {BEDT-TTF}$^{0.6+}$ broaden and a {BEDT-TTF}$^{0.5+}$ band emerges (after \cite{HassanDrichko19}).} \label{fig:HgClnu2} \end{figure} At $T = 2$~K the relative weights of the differently charged BEDT-TTF molecules ($+0.4e$, $+0.5e$ and $+0.6e$) are each approximately 0.3, i.e. the three charge states occur with almost equal probability. The question remains whether this charge distribution is the final state or whether it changes on further cooling. The charge-order phase transition in \hgcl\ is extremely sensitive to external pressure: once the charge order is suppressed, the compound remains metallic without indications of superconductivity \cite{Lohle17,Lohle18,Gati18a}. This behavior is distinct from the linear pressure dependence of the charge imbalance observed in \aeti\ \cite{Beyer16}. It is very surprising that such small pressure variations lead to such substantial changes in the electronic properties; this sensitivity is only comparable to \etcl, which is located next to the Mott insulator-metal transition, discussed in Section~\ref{sec:afmMott}. In the case of the charge-ordered \hgcl\ one does not expect that reducing the effective Coulomb repulsion $V/W$ or $U/W$, where $V$, $U$ and $W$ are the inter-site and on-site Coulomb interactions and the bandwidth, respectively, is responsible for the pronounced effect. L{\"o}hle {\it et al.} suggested that the lattice plays an important role in the phase transition \cite{Lohle17}. However, from Raman spectroscopy no major changes of the lattice phonons are observed at $T_{\rm CO}$, implying that the coupling of the charge order to the lattice is weak. Finally, indications for ferroelectricity driven by charge order in \hgcl\ were provided by dielectric spectroscopy, albeit no polarization switching could be recorded so far, probably due to its rather high in-plane conductivity \cite{Gati18a}. For that reason, the dielectric measurements were performed with the ac electric field applied along the out-of-plane $a$-axis only.
The dielectric response observed in the MHz frequency range exhibits some features expected for conventional ferroelectrics: a Curie-like peak occurs in the real part of the dielectric function, $\varepsilon^{\prime}(T) \approx 400$, right at $T_\mathrm{CO}=30$~K with negligible dispersion. Also, the two branches of $1/\varepsilon^{\prime}(T)$ above and below $T_\mathrm{CO}$ are close to linear; the slope in the ordered state is much larger than the one above $T_\mathrm{CO}$. It was suggested that the ferroelectric order formed below the metal-insulator transition is of order-disorder type \cite{Gati18a}. However, a pronounced frequency dependence of $\varepsilon^{\prime}(T)$ is expected in this case \cite{Krohns19}, in contrast to what is observed. The order-disorder type implies that disordered electric dipoles exist already in the metallic phase above $T_{\rm CO}$, but compelling experimental evidence for this proposal is lacking. Another important issue in this regard is the anionic contribution to the measured dielectric response. As pointed out in \cite{Gati18a}, the anions are somewhat mobile and shift towards the charge-rich molecular sites in the vicinity of $T_\mathrm{CO}$ in order to minimize the overall Coulomb energy; this arrangement stabilizes the long-range charge order. This implies that the dielectric measurements along the $a$-axis probe not only the electronic but also the anionic contribution to the dielectric response, because this is exactly the direction along which the cationic BEDT-TTF layers and the anion layers alternate. Recently it was shown \cite{deSouza18} that similar effects occur in the quasi-one-dimensional organic ferroelectrics (TMTTF)$_2$$X$. We close by noting that the subject of ferroelectricity driven by charge order in dimerized solids deserves more attention in the future. Specifically, efforts should be invested to unravel conclusive experimental evidence for the charge stripe pattern in the ground state, the role of electron-phonon coupling and the intrinsic dielectric response. \subsection{Ferroelectricity driven by charge order in weakly dimerized solids} \label{sec:COweaklydimerized} Ferroelectricity driven by a charge-order phase transition is predominantly established in weakly dimerized solids with quarter-filled bands. Among them \aeti\ represents the most outstanding example, which has attracted strong interest among researchers worldwide since it was introduced to the community by Schweitzer and collaborators in 1984 \cite{Bender84a,Bender84b}. At ambient pressure, \aeti\ undergoes a metal-insulator phase transition from a high-temperature semimetallic state to a charge-ordered state at $T_\mathrm{CO}$ = 135\,K. Below this temperature, a striped pattern of charge disproportionation sets in. The inversion symmetry is broken, the charge is disproportionated between the molecular sites within the dimerized stack of the unit cell, and consequently a net polarization is created, resulting in ferroelectricity. In the present and the subsequent Section~\ref{sec:COFerroelectricity}, we give a comprehensive presentation of the relevant findings in \aeti\ and compare them with the results obtained in the studies of ferroelectricity in slowly cooled $\theta$-(BEDT-TTF)$_2$RbZn(SCN)$_4$ wherever appropriate. For completeness, we briefly address $\beta^{\prime\prime}$-(BEDT-TTF)$_2$SF$_5$CHFCF$_2$SO$_3$, in which charge order does not lead to ferroelectricity; inversion symmetry persists at all temperatures and ferroelectricity cannot be established.
\subsubsection{Semi-metallic state at high temperatures} \label{sec:COhightemperatures} We first address the semi-metallic state at high temperatures. Here, $\alpha$-(BEDT-TTF)$_2$I$_3$ \cite{Bender84a, Bender84b} crystallizes in the non-polar space group ${\rm P}\bar{1}$. As described in Section~\ref{sec:structure}, the unit cell contains four {BEDT-TTF}$^{0.5+}$ molecules and two I$_3^-$ anions. Commonly, two stacks are distinguished: within column I, which is weakly dimerized, the BEDT-TTF molecules, denoted as A and A$^\prime$, are related through an inversion center and thus are equivalent; column II consists of two different types of molecules, denoted as B and C, which are uniformly spaced and lie at the crystallographic inversion centers [Figure \ref{fig:alphamolecularstructure} (a)]. \begin{figure}[h] \centering\includegraphics[clip,width=0.5\columnwidth]{alphamolecularstructure.pdf} \caption{(a) Schematic representation of the in-plane molecular structure of \aeti\ in the high-temperature semi-metallic state. Within the $(ab)$-plane the BEDT-TTF molecules form stacks along the $a$-direction in a herringbone fashion and exhibit a triangular arrangement with adjacent stacks. Molecules belonging to the weakly dimerized stack I and the non-dimerized stack II are denoted as A, A$^\prime$, and B, C, respectively. Crosses denote the inversion centers relating A and A$^\prime$ molecules. (b)~Charge-density distribution in the in-plane molecular layer of \aeti\ in the low-temperature charge-ordered state. Molecules with higher (charge-rich) and lower (charge-poor) charge density are shown by dark and light blue circles, respectively. Electric dipoles are indicated by arrows: importantly, only dipoles in stack I contribute to a net polarization, while those in stack II cancel each other out.} \label{fig:alphamolecularstructure} \end{figure} According to density functional calculations \cite{Alemany12}, as well as extended H\"{u}ckel molecular-orbital calculations \cite{Mori84}, the system is a semi-metal with very small electron and hole pockets; this results in the experimentally observed weakly metal-like conductivity within the molecular plane \cite{Bender84a, Bender84b,Dressel94}. Direct experimental proof for the semimetallicity is provided by recent Hall effect and magnetoresistance measurements, which show that the dc transport is governed by the high mobility of electrons and holes, resulting in an almost temperature-independent conductivity. The value of the Hall coefficient confirms the idea of a quarter-filled band; the dominant interpocket scattering equalizes the mobilities of the two types of charge carriers \cite{IvekCulo17}. Notably, a striped charge pattern is observed at room temperature by scanning tunneling microscopy \cite{Katano15}. Partial charge disproportionation and charge fluctuations already present at $T=300$~K persist all the way down to the charge-ordering phase transition. The charge imbalance was first suggested based on nuclear-magnetic-resonance (NMR) measurements by Takahashi and collaborators \cite{Takano01b, Takahashi06}; it was later deduced from x-ray diffraction measurements \cite{Kakiuchi07} using the empirical relationship between intramolecular bond lengths and charge density \cite{Guionneau97}, and confirmed by optical vibrational measurements \cite{Wojciechowski03,Yue10, IvekPRB11}.
The best local probes utilized in these experiments are the highly charge-sensitive stretching modes of the BEDT-TTF molecules, such as the symmetric Raman-active $\nu_{3}$ mode, and the asymmetric infrared-active $\nu_{27}$ mode of the outer C=C bonds \cite{Maksimuk01,Dressel04a,Yamamoto05,Girlando11a,Yakushi12}. The $\nu_{27}(b_{1u})$ band at high temperatures is shown in the inset of Figure \ref{fig:alphanu27} \cite{Beyer16}. The result corresponds well to the one reported previously \cite{IvekPRB11,TomicDressel15}. With an -- in a first approximation -- linear shift of 140~cm$^{-1}$ per unit charge, the charge disproportionation $2\delta_\mathrm{\rho}$ is calculated from \begin{equation} 2\delta_{\rho} = \frac{\delta\nu_{27}(b_{1u})}{140~{\rm cm}^{-1}/e} \quad , \end{equation} where $\delta\nu_{27}$ is the difference in frequency position between the two $\nu_{27}$ vibration bands associated with the two non-equal BEDT-TTF molecules, i.e.\ $2\delta_{\rho} = \rho_{\rm rich} - \rho_{\rm poor}$ \cite{Yamamoto05}. For the BEDT-TTF cations, an increase in positive charge loosens the bonds, {\it i.e.} the vibrational features are redshifted. \begin{figure} \centering\includegraphics[clip,width=0.7\columnwidth]{alphanu27.pdf} \caption{Temperature dependence of the charge distribution (left axis) and the frequency position of the $\nu_{27}(b_{1u})$ stretching mode (right axis) of $\alpha$-(BEDT-TTF)$_{2}$I$_{3}$. Below the phase transition temperature of $T_{\rm CO}=135$~K, the charge per molecule changes drastically for all four sites. The inset shows the reflectivity in the frequency range of the $\nu_{27}(b_{1u})$ vibrations. The positions of the vibrational modes are illustrated by the colored arrows, corresponding to the dots in the main frame. At 300\,K, blue color indicates the low-frequency vibration of the B molecule, green the C molecule at high frequencies, and red the molecules A and A$^\prime$ located in between. The charge-ordering phase transition is visible by the splitting of the peak: now two features are present, each with a double-peak structure (from \cite{Beyer16}). The sketch on the right illustrates the structure of the BEDT-TTF molecule and the antisymmetric vibrations of the outer C=C double bond involved in the $\nu_{27}(b_{1u})$ mode, leading to an alternating charge flow along the molecular axis.} \label{fig:alphanu27} \end{figure} The broad absorption band observed in the infrared spectra at about 1445~\cm, corresponding to an average charge of $+0.5e$ per molecule, hides a non-uniform site charge distribution. Resolving the associated splitting indicates that the charge imbalance is present only within stack II and is rather small, $2\delta_\mathrm{\rho} < 0.2e$ (Figure \ref{fig:alphanu27}); a similar result is deduced from x-ray scattering measurements \cite{Kakiuchi07,TomicDressel15}. The values of the charge density at the different molecular sites A, A$^\prime$, B and C agree well with results obtained by density functional theory (DFT) calculations: $+0.52e$ (A, A$^{\prime}$), $+0.55e$ (B) and $+0.38e$ (C) \cite{Ishibashi06, Alemany12}. The question remains how to explain the charge disproportionation at elevated temperatures. Kakiuchi {\it et al.} discard the local electric potential distribution originating from the iodine ions in the anionic layer and suggest that the charge disproportionation may be due to the distribution of transfer integrals among the BEDT-TTF molecules \cite{Kakiuchi07}.
Their view was challenged by the findings of Alemany {\it et al.} \cite{Alemany12}, who argue that the hole concentration in the highest-occupied molecular orbitals is intrinsically related to the strength of hydrogen bonding between the terminal hydrogens of the BEDT-TTF molecules and the anions. They conclude that molecules A and B participate in a number of very short H-bonds with the anions, whereas molecules C are not involved in those; instead they make the strongest sulfur-sulfur contacts, ensuring in this way the stability and electronic delocalization within the molecular layers. Finally, we note that short-range charge disproportionation and charge fluctuations at high temperatures also develop in slowly cooled $\theta$-(BEDT-TTF)$_2$RbZn(SCN)$_4$, as evidenced by NMR line broadening and diffuse x-ray scattering \cite{Miyagawa00, Chiba04, Watanabe04}. \subsubsection{Metal-insulator transition into the charge-ordered state} \label{sec:MITchargeorder} In the following, we address the charge-ordering phase transition and the charge-ordered state which develops upon cooling. When the temperature reaches $T_\mathrm{CO}=135$\,K, several striking changes in the physical properties, electronic as well as structural, take place. \begin{figure}[h] \centering\includegraphics[clip,width=0.4\columnwidth]{alphaSHG.pdf} \caption{Temperature dependence of the optical second-harmonic-generation (SHG) signal in \aeti. The SHG signal, recognized as a most reliable fingerprint of inversion-symmetry breaking, is generated right at the charge-ordering phase transition $T_{\rm CO} = 135$~K, and steadily increases further as the temperature is reduced (after \cite{YamamotoJPSJ08}). \label{fig:alphaSHG}} \end{figure} Optical second-harmonic generation provides the definite proof that inversion symmetry is lost. As displayed in Figure~\ref{fig:alphaSHG}, the SHG signal sets in abruptly at $T_\mathrm{CO}$ and develops gradually with further decreasing temperature \cite{YamamotoJPSJ08, Denev11}. In addition, large polar domains develop, several hundred $\mu$m in size and of opposite polarization, as detected by interferometric experiments on the second-harmonic signal \cite{Yamamoto10}. These findings agree perfectly with the results of x-ray diffraction measurements: At $T_\mathrm{CO}$ a lattice deformation breaks the inversion symmetry; the symmetry is reduced from the ${\rm P}\bar{1}$ space group at high temperatures to the polar space group P1, and twin, right-handed and left-handed domains are formed due to the acentric structure \cite{Kakiuchi07}. \begin{figure}[b] \centering\includegraphics[clip,width=0.9\columnwidth]{stripes.pdf} \caption{Schematic representation of the horizontal charge stripe arrangement observed in (a) $\alpha$-(BEDT-TTF)$_2$I$_3$, (b) $\theta$-(BEDT-TTF)$_2$RbZn(SCN)$_4$ and in (c) $\beta^{\prime\prime}$-(BEDT-TTF)$_2$SF$_5$CHFCF$_2$SO$_3$. Dark and light shaded circles denote charge-rich and charge-poor molecules, respectively. Dimerized molecular chains run along the $a$-axis in \aeti\ and $\beta^{\prime\prime}$-(BEDT-TTF)$_2$SF$_5$CHFCF$_2$SO$_3$, and along the $c$-axis in $\theta$-(BEDT-TTF)$_2$RbZn(SCN)$_4$, due to the labelling convention.
Charge order is accompanied by bond order in \aeti\ and in $\theta$-(BEDT-TTF)$_2$RbZn(SCN)$_4$, while there is no bond order in $\beta^{\prime\prime}$-(BEDT-TTF)$_2$\-SF$_5$CHFCF$_2$SO$_3$.} \label{fig:stripes} \end{figure} Remarkably, upon cooling the charge distribution changes drastically right at $T_{\rm CO}$, as deduced from the temperature-dependent intramolecular deformations. Raman and infrared vibrational spectroscopy as well as $^{13}$C-NMR measurements unanimously confirm these results \cite{Takano01, Moldenhauer93, Wojciechowski03, Kakiuchi07, Yue10, Clauss10, Hirata11, IvekPRB11, Beyer16}. Figure \ref{fig:alphanu27} displays the abrupt splitting of the charge-sensitive $\nu_{27}(b_{1u})$ mode at $T_\mathrm{CO}= 135$~K, resembling a first-order phase transition; further cooling leads to only minor changes. Eventually two pairs of bands are observed: a stronger one around 1425~\cm\ and a weaker one slightly above 1500~\cm, signaling the formation of four different molecular sites in the unit cell. The lower-frequency bands correspond to a charge of approximately $+0.79e$ and $+0.84e$ per molecule, and the upper-frequency bands to $+0.25e$ and $+0.22e$, respectively. The overall results reveal the formation of horizontal stripes of charge-rich and charge-poor molecular sites, as illustrated in Figure \ref{fig:stripes}(a). \begin{figure} \centering\includegraphics[clip,width=0.35\columnwidth]{alphatwin.pdf} \caption{Twin domains in $\alpha$-(BEDT-TTF)$_2$I$_3$. The stripes are formed either (a) by the charge-rich molecules A and B or (b) by the charge-rich molecules A$^\prime$ and B.} \label{fig:alphatwin} \end{figure} In stack II the molecules are equally spaced, but B and C differ in charge. In the other column, the molecules A and A$^\prime$ are crystallographically equivalent at temperatures above $T_\mathrm{CO}$, but possess different charges for $T<T_{\rm CO}$. Now a net dipole moment is generated, as sketched in Figure \ref{fig:alphamolecularstructure}(b). Conversely, the equivalent bonds within stack II result in a cancellation of the electric dipoles between the charge-rich and charge-poor neighboring molecules. Importantly, there are two ways of rearranging stack I below $T_\mathrm{CO}$: (A, A$^\prime$)(A, A$^\prime$) and (A$^\prime$, A)(A$^\prime$, A). The two possible valence arrangements along the $a$-axis lead to twin polar domains, as illustrated in Figure~\ref{fig:alphatwin}. When $\theta$-(BEDT-TTF)$_2$RbZn(SCN)$_4$ crystals are slowly cooled, a similar charge order is identified below the metal-insulator transition at $T_\mathrm{CO} = 190$~K \cite{HMoriPRB98,Chiba01,Chiba01b,Watanabe04}. $^{13}$C-NMR, Raman and infrared spectroscopy and x-ray structural studies have all concordantly revealed the formation of horizontal charge stripes, as illustrated in Figure~\ref{fig:stripes}(b); the stripes are formed in-plane along the $a$-axis with a charge-rich and charge-poor pattern along the $c$-axis: $+0.8e:+0.2e$ on the two non-equal BEDT-TTF molecules within dimers stacked in two columns in the unit cell \cite{Takahashi06,Miyagawa00,Chiba01,Wang01,Yamamoto02,Drichko09}. Therefore, the net dipole moment is generated by the electric dipoles on the molecular dimers within both BEDT-TTF columns, in contrast to the ferroelectricity established in $\alpha$-(BEDT-TTF)$_2$I$_3$. Another distinction concerns the bonds between molecules along the two diagonal directions in the plane: for \tetrz\ the bonds are identical, yielding a uniform Heisenberg coupling between the spins.
Consequently, no gap opens in the spin sector at $T_\mathrm{CO}$ and the system behaves as a paramagnetic insulator down to the spin-Peierls phase transition at $T_{\rm SP}=30$~K, where it enters the spin-singlet phase \cite{HMoriPRB98,Seo06,Miyagawa00}. Notably, the bonds become inequivalent in the singlet phase, resembling the charge-ordered state of \aeti. Finally, we note that the structural changes, associated with a loss of inversion centers between the BEDT-TTF molecules and a doubling of the unit cell along the in-plane $c$-axis, are much larger than in \aeti \cite{HMoriBCSJ98,HMoriPRB98,Watanabe04}. \subsubsection{Character of the metal-insulator phase transition} A close inspection of the presented results suggests that the phase transition between the high-temperature paraelectric and low-temperature ferroelectric states is of first order: the charge distribution among the BEDT-TTF molecules within the dimerized chains, the SHG signal, as well as the reduced space symmetry evidence a very rapid variation at $T_\mathrm{CO}=135$~K (cf.\ Figures~\ref{fig:alphanu27} and \ref{fig:alphaSHG}). Similar abrupt changes are also noticed in other quantities. At $T_\mathrm{CO}$ the dc and ac conductivities suddenly drop by several orders of magnitude along all three crystallographic directions; optical spectroscopy and measurements of the static susceptibility evidence that charge and spin gaps suddenly open, revealing the insulating and diamagnetic nature of the ferroelectric low-temperature state in \aeti\ \cite{Bender84a, Bender84b, RothaemelPRB86, IvekPRB11, IvekCulo17, Dressel94, Lunkenheimer15}. \begin{figure}[h] \centering\includegraphics[clip,width=0.5\columnwidth]{alphapressure.pdf} \caption{Temperature dependence of the in-plane dc resistivity of $\alpha$-(BEDT-TTF)$_{2}$I$_{3}$ under different pressure values as determined at room temperature (extracted from \cite{Beyer16}). The behavior for higher pressure and lower temperatures is plotted in Figure~\ref{fig:dcpressure}.} \label{fig:alphapressure} \end{figure} By applying hydrostatic pressure, the charge-order phase transition shifts to lower temperatures at a rate of 80-90~K/GPa and is fully suppressed above 1.5~GPa. The charge disproportionation is concomitantly reduced by $0.17~e$/GPa and the optical gap decreases by 470~\cm/GPa \cite{Wojciechowski03, Beyer16}. Notably, a rather abrupt change in the charge disproportionation at $T_\mathrm{CO}$ persists at all applied pressures, whereas the steepness of the conductivity drop is gradually diminished, as seen from Figure~\ref{fig:alphapressure}. For pressure values above 1.5~GPa and at low temperatures, the BEDT-TTF molecules carry nearly the same amount of charge as they do at ambient conditions in the semi-metallic state. The observed behavior may be ascribed to the pressure-induced enlargement of the bandwidth and the resulting decrease of the effective intersite Coulomb repulsion \cite{Wojciechowski03, Beyer16}. In Section~\ref{sec:DiracElectrons} we come back to the low-temperature behavior when higher pressure is applied, and continue with a detailed discussion of correlation effects and Dirac electrons. Let us just mention here that in slowly cooled $\theta$-(BEDT-TTF)$_2$\-RbZn(SCN)$_4$ --~unlike in \aeti~-- the charge-order phase transition temperature rises with hydrostatic pressure, which may be due to the enhancement of electronic correlations caused by an increase of the dihedral angle between neighboring BEDT-TTF molecules \cite{HMoriPRB98}.
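Assuming that the linear pressure shift of the transition in \aeti\ persists down to low temperatures --~an extrapolation rather than an established fact~-- the numbers quoted above are mutually consistent: \begin{equation*} p_c \approx \frac{T_{\rm CO}(p=0)}{{\rm d}T_{\rm CO}/{\rm d}p} \approx \frac{135~{\rm K}}{80 - 90~{\rm K/GPa}} \approx 1.5 - 1.7~{\rm GPa} \quad , \end{equation*} in accord with the complete suppression of the charge order observed above 1.5~GPa.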
Refs.~\cite{Dressel04a, IvekPRB11, Yakushi12, TomicDressel15} give an extensive account of the charge-ordering phase transition and of the charge-sensitive Raman and infrared vibrational modes in the metallic phase and in the charge-ordered state of $\theta$-(BEDT-TTF)$_2$\-RbZn(SCN)$_4$ as well as of \aeti. On the one hand, the observations summarized above indicate that only the ferroelectric order parameter, which is associated with the symmetry breaking at $T_\mathrm{CO}$ and the appearance of charge disproportionation at the dimerized BEDT-TTF molecular sites, is indeed of first order \cite{Kittel}. On the other hand, the dc conductivity reveals no hysteresis and the temperature behavior of the transport gap indicates a dominantly mean-field character \cite{IvekPRB11, IvekCulo17}. Thus, the situation does not appear simple; we recall similar controversies and dilemmas in numerous strongly correlated electron systems \cite{Dagotto05}. A recent cryogenic scanning optical near-field microscopy study sheds light on this intriguing issue by revealing the spatial evolution of the phase transition, which evidences its first-order nature \cite{PustogowSciAdv18}. Images recorded around $T_{\rm CO}=135$~K demonstrate a pronounced phase segregation with a sharp boundary between metallic and insulating regions within a temperature range of a few hundred mK (Figure~\ref{fig:alphanearfield}). The narrow coexistence range explains why the dc conductivity experiments failed to detect any hysteresis. Remarkably, such a sharp transition occurs only in a homogeneous single crystal. Conversely, when the sample is subjected to appreciable strain --~a situation that may easily arise due to improper mounting~-- metallic and insulating regions spatially coexist in a wide temperature range. In line with the pressure dependence presented in Figure~\ref{fig:alphapressure}, the phase transition is then suppressed to lower temperatures. \begin{figure} \centering\includegraphics[clip,width=0.7\columnwidth]{alphanearfield.pdf} \caption{Near-field image of a $28~\mu$m $\times$ $28~\mu$m area recorded with a spatial resolution of 25~nm slightly below the charge-ordering phase transition in \aeti. The microscopic images are obtained from the (a)~amplitude and (b)~phase of the scattered CO$_2$ laser beam ($\lambda = 11~\mu$m) at an oscillating AFM tip. There is a well-defined phase boundary between the metallic (light) and insulating (dark) regions, proving the first-order character of the transition. The evaporated gold layer (upper left corner) serves as a reference (after \cite{PustogowSciAdv18}).} \label{fig:alphanearfield} \end{figure} \subsubsection[Origin of the charge-order phase transition]{Origin of the charge-order phase transition and ferroelectric ground state} In an attempt to explain the metal-insulator phase transition observed in \aeti, Fukuyama and collaborators suggested that electron-electron interactions play a crucial role in this regard \cite{Kino95a, Kino95b}. Including the intersite Coulomb repulsion $V$, they identified the importance of charge order in low-dimensional quarter-filled compounds \cite{Seo97,Fukuyama00,Seo03,Seo04}, opening a completely new chapter for these molecular quantum materials. Seo {\it et al.} also worked out a solely electronic mechanism \cite{Seo00,Seo06} including the on-site and inter-site Coulomb interactions $U$ and $V$ in an extended Hubbard model; it results in a Wigner-crystal-type phase with three possible charge-order patterns, as depicted in Figure~\ref{fig:COpatterns}.
Importantly, in this mean-field approach the horizontal stripe pattern is only found if the dimerization is explicitly included. \begin{figure}[h] \centering\includegraphics[clip,width=0.9\columnwidth]{COpatterns.pdf} \caption{Schematic presentation of the charge-order patterns: (a) vertical stripe, (b) diagonal stripe, (c) horizontal stripe. All patterns have identical energies within the classical limit, assuming that the intersite Coulomb interactions are identical in all three crystallographic directions. According to Clay {\it et al.} \cite{Clay19} the horizontal stripe is the most stable charge order thanks to the ``1010'' charge pattern in the direction of weakest intermolecular hopping and the ``1100'' pattern along the two diagonal directions with strong intermolecular hopping.} \label{fig:COpatterns} \end{figure} The explanation on purely electronic grounds, however, was challenged subsequently; we now understand that a coupling to the lattice cannot be neglected. In a charge-transfer crystal, electron-electron and electron-phonon interactions are always present simultaneously. Thus, even if the ferroelectric phase transition is primarily an electronic instability, {\it i.e.} driven by charge ordering, the lattice modifications remain an important side effect due to a finite coupling of the electrons to the lattice. The prevailing electronic origin of the ferroelectricity is supported by strong non-linear effects and an ultra-fast photoresponse; this will be discussed further in Section~\ref{sec:nonlinear+ultafastresponse}. Along this avenue, Clay, Mazumdar and collaborators proposed a model beyond mean-field theory, termed the ``paired electron crystal'', which includes electron-electron and electron-phonon interactions \cite{Clay02,Gomes16,Clay19}. Numerical many-body calculations successfully reproduce the experimentally established charge pattern shown in Figures~\ref{fig:alphanu27} and \ref{fig:stripes}(a,b). Remarkably, only a relatively weak electron-phonon interaction has to be incorporated to form horizontal stripes; in addition, the bond dimerization perpendicular to them appears as a consequence of the charge ordering. The authors suggest that the simultaneous formation of the paired electron crystal and the coexisting spin-singlet state in a single phase transition occurs only due to the lower symmetry of \aeti\ as compared to $\theta$-(BEDT-TTF)$_2$\-RbZn(SCN)$_4$, where the charge ordering and the spin-singlet transition take place at different temperatures. In another approach, Alemany {\it et al.} argue that electronic interactions between the molecular sites are not the driving force of the phase transition; rather, it is the interplay between the coupled BEDT-TTF molecular and anion subsystems that stabilizes charge order and ferroelectricity \cite{Alemany12, Alemany15}. Notably, density-functional theory (DFT) calculations yield no significant changes in the wave-vector dispersion of the band structure below $T_\mathrm{CO}$, suggesting that only modifications of the local anion-molecular interactions occur at the ordering transition. In that case even small displacements of the anions toward (away from) the charge-rich (charge-poor) molecules would increase (decrease) the hydrogen bonding between the anions and the BEDT-TTF molecules, and thus induce a charge redistribution; the resulting striped charge pattern qualitatively corresponds to the experimental findings. A close look at the phase transition of \aeti\ by Pouget {\it et al.} provides additional support to the relevance of this mechanism \cite{PougetMH18}.
They also point out that the paradigmatic metal-insulator phase transition at 135~K is actually a semimetal-semiconductor phase transition. The presence of two types of charge carriers, electrons and holes, allows the electron-hole interaction at high temperatures to generate excitonic effects, which may play a role in the semimetal-semiconductor phase transition \cite{Rossnagel11}. Finally, the existence of two types of carriers questions the basic assumption of the extended Hubbard model, which considers a quarter-filled system with one type of carrier only. Further work is needed to consistently explain the complex nature and origin of the phase transition in \aeti. \subsubsection{Charge-ordered states with no ferroelectricity} It is instructive to compare the charge order in \aeti\ and $\theta$-(BEDT-TTF)$_2$\-RbZn(SCN)$_4$, on the one hand, with the one observed in $\beta^{\prime\prime}$-(BEDT-TTF)$_2$SF$_5$CHFCF$_2$SO$_3$, on the other hand. In all three systems, the charge density is arranged in horizontal stripes; however, other properties of the charge order in $\beta^{\prime\prime}$-(BEDT-TTF)$_2$SF$_5$CHFCF$_2$SO$_3$ are strikingly different \cite{PustogowRC19, PustogowPRB19}. In the first two systems, differently charged molecules arrange themselves along dimerized stacks below the metal-insulator phase transition, thus allowing the formation of ferroelectricity. Conversely, in $\beta^{\prime\prime}$-(BEDT-TTF)$_2$SF$_5$CHFCF$_2$SO$_3$ the BEDT-TTF molecules have equal charge densities within the dimerized stacks, while a charge disproportionation of $2\delta_{\rho} \approx 0.10e$ exists between the crystallographically nonequivalent BEDT-TTF molecules belonging to two neighboring stacks [Figure \ref{fig:stripes}(c)]. Thus, charge order is present without bond order, and consequently no net polarization can be established (see Figure~\ref{fig:modelCOFE}). Another important distinction lies in the fact that the charge order persists at all temperatures without any noticeable temperature dependence of the charge imbalance; hence it is not associated with the metal-insulator phase transition at 190~K, below which the high-temperature non-polar space group ${\rm P}\bar{1}$ remains unchanged. The origin of the temperature-independent charge order, as well as of the metal-insulator phase transition, remains to be clarified. The phase transition may be related to the pronounced one-dimensional character of the dimer chains running along the interstack direction at low temperatures, or it may be due to anion ordering \cite{Schlueter01, PustogowPRB19}. It is remarkable that the particular structure of $\beta^{\prime\prime}$-(BEDT-TTF)$_2$\-SF$_5$CHFCF$_2$SO$_3$ enables several short hydrogen bonds between the BEDT-TTF molecules and the oxygen and fluorine atoms of the anions; at present, however, it is not established whether these short hydrogen bonds differ for the two non-equivalent BEDT-TTF molecules. In any case, the anions are disordered at room temperature as well as at $150$~K, {\it i.e.} below the metal-insulator phase transition. On the other hand, the ethylene end groups of the BEDT-TTF molecules, which are disordered at room temperature, are ordered at $T=150$~K, below the phase transition. This change may contribute to the mechanism of the phase transition, as it does in $\theta$-(BEDT-TTF)$_2$\-RbZn(SCN)$_4$ \cite{Alemany15}. These hypotheses should be verified in the future.
\subsection{Properties} \label{sec:propertiesQSL} \begin{figure} \centering \includegraphics[width=0.3\columnwidth]{k-structure.pdf} \caption{For $\kappa$-(BEDT-TTF)$_2X$ the molecules are arranged in dimers that constitute an anisotropic triangular lattice within the conduction layer. The nearest-neighbor and the second-nearest-neighbor inter-dimer transfer integrals are labeled by $t$ and $t^{\prime}$, respectively; the intra-dimer transfer integral is marked by $t_d$. } \label{fig:k-structure} \end{figure} For the most prominent examples \etcn\ \cite{Shimizu03,Kurosaki05}, \agcn\ \cite{Shimizu16,Pinteric16}, \dmit\ \cite{Itou08,Itou10} and \cat\ \cite{Isono13,Isono14,Shimozawa17} the starting point is rather similar: molecular dimers, each hosting a spin $S=\frac{1}{2}$, form a highly frustrated triangular lattice \cite{Kandpal09,Nakamura09,Nakamura12}. Defining the frustration as the ratio of the transfer integrals $t^{\prime}$ and $t$ defined in the sketches of Figures~\ref{fig:k-structure} and \ref{fig:CAT-structure}, the compounds are close to a perfect triangular arrangement, as summarized in Table~\ref{tab:1}. The intra-dimer transfer integral of \etcn\ and \agcn\ is $t_d\approx 200$~meV and 264~meV, respectively. When estimating the onsite Coulomb repulsion as $U\approx 2t_d$ \cite{McKenzie98}, at ambient conditions one obtains $U/t=7.3$ and 10.5, with the ratio of the two inter-dimer transfer integrals $t^{\prime}/t \approx 0.83$ and 0.90 very close to unity. For comparison, the related compounds \etbr\ and \etcl\ possess a frustration \begin{table}[h] \caption{Four different molecular-based quantum-spin-liquid compounds on a triangular lattice with antiferromagnetic coupling $J$; the degree of frustration $t^{\prime}/t$ is calculated by the tight-binding approximation based on extended H{\"u}ckel (EH) studies of molecular orbitals or on {\it ab-initio} density-functional-theory (DFT) calculations \cite{Komatsu96,Nakamura09,Kandpal09,Jeschke12,Jacko13a,Shimizu16,Hiramatsu17}. The effective correlations, defined as the ratio of the on-site (intra-dimer) Coulomb repulsion $U$ and the bandwidth $W$, are listed as extracted from the optical conductivity; the electronic contribution to the specific heat $\gamma_e$ is also given \cite{Oshima88,Komatsu96,Shimizu16,Hiramatsu15,Itou08,Kato12a,Kato12c,Pustogow18a,Isono14,Shimozawa17, Yamashita11,Yamashita17}. \label{tab:1}} \hspace*{2mm} \begin{center} \begin{tabular}{l|c c c c} Compound &$J$ &$t^{\prime}/t$ & $U/W$ & $\gamma_e$ \\ &(meV; K) & (DFT; EH) & &(mJ\,K$^{-2}$mol$^{-1}$)\\ \hline $\kappa$-(BE\-DT\--TTF)$_2$\-Cu$_2$\-(CN)$_{3}$ & 19; 220 & 0.83; 1.1~~& 1.52 & 12-15 \\ $\kappa$-(BE\-DT\--TTF)$_2$\-Ag$_2$(CN)$_{3}$ & 19; 220 & ~~--~~; 0.90 & 1.96 & 10 \\ $\beta^{\prime}$-EtMe$_3$\-Sb\-[Pd(dmit)$_2$]$_2$ & 22; 250 & 0.77; 0.90 & 2.35 & 19.9 \\ $\kappa$-H$_3$(Cat-EDT-TTF)$_2$ & ~~7-8; 80-90 &1.25; 1.48 & & 58.8 \end{tabular} \end{center} \end{table} $t^{\prime}/t \approx 0.42$ and 0.44, respectively \cite{Kandpal09,Koretsune14}. Theoretical considerations suggest a frustration $t^{\prime}/t$ slightly off the perfect triangle \cite{Tocchio09,Dang15,Kyung06,Laubach15,Yamada14}. At ambient pressure, no indication of N{\'e}el order is observed for temperatures as low as 20~mK, despite the considerable antiferromagnetic exchange of $J\approx 220-250$~K for the dimerized $\kappa$- and $\beta^{\prime}$-salts \cite{Shimizu03,Shimizu16,Itou08} and slightly less in the case of the hydrogen-bonded \cat.
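For orientation --~taking the rough estimate $U \approx 2t_d$ at face value~-- the parameters quoted above translate into inter-dimer hopping amplitudes \begin{equation*} t = \frac{U}{U/t} \approx \frac{2\times 200~{\rm meV}}{7.3} \approx 55~{\rm meV} \qquad {\rm and} \qquad t \approx \frac{2\times 264~{\rm meV}}{10.5} \approx 50~{\rm meV} \end{equation*} for \etcn\ and \agcn, respectively; these numbers inherit the uncertainty of the $U \approx 2t_d$ approximation and should be read as order-of-magnitude estimates only.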
As an example, the low-temperature physical properties of \cat\ are displayed in Figure~\ref{fig:cat1}. Below a maximum around 20~K, the spin susceptibility $\chi(T)$ drops upon cooling, but no magnetic transition can be identified down to $T=50$~mK using SQUID and torque magnetometry \cite{Isono14}. The behavior is reasonably well fitted by the $S=\frac{1}{2}$ antiferromagnetic Heisenberg model on an isotropic triangular lattice. The thermal conductivity shows a maximum around $T=0.4$~K and then decreases in a linear fashion with a finite intercept as $T\rightarrow 0$. The dielectric permittivity rises as the temperature is lowered, but $\epsilon_1(T)$ saturates below 2~K; this behavior is associated with quantum fluctuations and described by the Barrett formula (\ref{eq:Barrettformula}) developed for perovskites \cite{Barrett52} (see Section \ref{sec:cat}). \begin{figure} \centering \includegraphics[width=0.5\columnwidth]{cat2.pdf} \caption{Low-temperature physical properties of \cat\ illustrating the spin-liquid and paraelectric states. The temperature dependence of the dielectric constant $\epsilon_1(T)$ corresponds to the blue open squares and the left axis, the thermal conductivity divided by temperature, $\kappa/T$, is shown by the red dots (red left axis), and the magnetic susceptibility $\chi(T)$ refers to the green symbols and the right axis. The dashed line is a guide to the eye. The concurrent occurrence of the quantum-paraelectric and quantum-spin-liquid phases below $T\approx 2$~K is inferred from the saturation of $\epsilon_1(T)$ and $\chi(T)$ (after \cite{Isono14,Shimozawa17}).} \label{fig:cat1}\label{fig:CatBarrett} \end{figure} \subsection{Gapless quantum spin liquid?} \label{sec:gaplessQSL} Thermodynamic measurements on \etcn\ reveal a linear term in the heat capacity \begin{equation} C_p(T) = \gamma_e T + \beta T^3 \label{eq:specificheat} \end{equation} that provides evidence for gapless spinon excitations \cite{Shimizu06,Yamashita08,Isono18}; the second term describes the typical phonon contribution. This is in stark contrast to thermal transport data, which exhibit a vanishing $\kappa/T$ at $T=0$~K; the result implies the presence of a small gap of $\Delta \approx 0.5$~K, {\it i.e.} the absence of gapless fermionic excitations \cite{Yamashita09}. The presence of a spinon Fermi surface is of crucial importance for the low-energy excitations in these Mott insulators. While in inorganic compounds the dispersion of the excitation spectrum is directly mapped by inelastic neutron scattering, molecular quantum magnets evade these investigations due to the abundance of hydrogen and their limited crystal size. Since most methods applied are indirect, the dispute is not resolved easily; one should carefully consider what the individual probes are sensitive to. The results of complementary approaches have to be combined, eventually forming a consistent story, although the peculiarities of certain compounds might stain the unified picture. In Figure~\ref{fig:cat3}(a) the temperature dependence of $C_p(T)$ is plotted for \cat\ together with the results of other spin-liquid compounds. The extrapolation to $T=0$ yields a rather large offset of $\gamma_e = 58.8$~mJ\,K$^{-2}$mol$^{-1}$, three to four times bigger than observed for \etcn\ and \dmit, in accord with $J = 80$~K compared to the 220 - 250~K of the other materials listed in Table~\ref{tab:1}. The large electronic density of states has been related to spinon excitations in the gapless spin liquid.
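To illustrate how Equation~(\ref{eq:specificheat}) is commonly exploited, the following minimal sketch --~placeholder data, not the published analysis; only $\gamma_e = 58.8$~mJ\,K$^{-2}$mol$^{-1}$ is taken from the text, while the phonon coefficient $\beta$ and the temperature grid are arbitrary assumptions~-- separates $\gamma_e$ and $\beta$ by a straight-line fit in the $C_p/T$ {\it versus} $T^2$ representation used in Figure~\ref{fig:cat3}: \begin{verbatim}
# Minimal sketch, not the published analysis: extracting gamma_e and beta
# from C_p(T) = gamma_e*T + beta*T^3 by a linear fit of C_p/T versus T^2.
# The temperature grid and beta are placeholders; gamma_e = 58.8 is the
# value quoted in the text for the Cat salt (mJ K^-2 mol^-1).
import numpy as np

T = np.linspace(0.5, 3.0, 20)                # temperature grid (K), assumed
Cp = 58.8 * T + 10.0 * T**3                  # synthetic data (mJ K^-1 mol^-1)

beta, gamma_e = np.polyfit(T**2, Cp / T, 1)  # slope = beta, intercept = gamma_e
print(f"gamma_e = {gamma_e:.1f} mJ K^-2 mol^-1")  # recovers 58.8
print(f"beta    = {beta:.1f} mJ K^-4 mol^-1")     # recovers the assumed 10.0
\end{verbatim}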
The thermodynamic observations are confirmed by susceptibility measurements ($\chi_0 = 1.2\times 10^{-3}$~emu/mol). It is instructive to consider the Wilson ratio \begin{equation} R_W = \frac{4\pi^2 k_B^2 \chi_0}{3(g\mu_B)^2\gamma_e} \quad , \label{eq:Wilson} \end{equation} where $g\approx 2.002$ is the $g$-value of non-interacting electrons, $\mu_B$ is the Bohr magneton and $k_B$ the Boltzmann constant. The Wilson ratio is also seen as an empirical indicator for the importance of spin-orbit coupling, which is commonly taken as a minor issue in organics, albeit this aspect has come under scrutiny in some systems recently \cite{Winter17,Osada18,Riedl19}. Most theories on quantum spin liquids predict that $R_W\ll 1$, in contrast to the measured ratios \cite{Balents10,Prelovsek19}. For all organic compounds, we find a slightly enhanced Wilson ratio compared to a free electron gas, $R_W = 1.4 - 1.6$ \cite{Yamashita17} (see the worked estimate below), but significantly less than the anomalously large Wilson ratio ($R_W= 70$) reported for the three-dimensional quantum spin liquid Na$_4$Ir$_3$O$_8$ \cite{Okamoto07n,Chen13}. \begin{figure} \centering \includegraphics[width=0.8\columnwidth]{cat4.pdf} \caption{\label{fig:cat3} (a)~The $C_p T^{-1}$ {\it versus} $T^2$ plot of the specific heat capacity of \cat\ in the low-temperature region, which allows the separation of the contributions according to $C_p(T) = \gamma_e T + \beta T^3$. No dependence on magnetic field is observed up to 6~T. The dashed and solid lines represent the data of the other two spin-liquid compounds, \etcn\ and \dmit. (b)~Comparison of the $C_p T^{-1}$ {\it vs.} $T^2$ plots of several single crystals of \cat\ with a pressed pellet. The data of $\kappa$-D$_3$(Cat-EDT-TTF)$_2$ obtained under fields up to 6~T are also plotted (after \cite{Yamashita17}).} \end{figure} According to Ng and Lee \cite{Ng07}, optical studies may provide important information; if the spinon excitations exhibit a Fermi surface, they should show up not only in the thermal conductivity but also contribute to the optical conductivity \cite{Ioffe89,Ng07,Potter13}. A power-law behavior $\sigma_1(\omega)\propto \omega^{2}$ is expected at low temperatures and frequencies, which becomes $\sigma_1(\omega)\propto \omega^{3.3}$ above the exchange coupling $J$ if impurity scattering is negligible. In the case of \etcn\ the challenge is that, close to the Mott transition, inadvertent effects of metallic fluctuations and regions dominate at finite temperatures, hampering the extraction of the spinon contributions \cite{Kezsmarki06,Elsasser12,Pustogow18a}. Hence, only strongly correlated systems deep in the Mott insulating phase are suitable candidates. \begin{figure} \centering \includegraphics[width=0.7\columnwidth]{EtMeCond2.pdf} \caption{ Dielectric and optical conductivity of \dmit\ measured along the $a$-direction at low temperatures ($T=3$~K) in a broad range from radio frequencies up to the infrared. Infrared reflectivity measurements yield $\sigma_1(\omega)$ via the Kramers-Kronig analysis (KK), while in the THz range reflectivity and transmission data (R \&\ T) allow a direct evaluation of $\sigma_1(\omega)$. Minima and maxima of Fabry-Perot oscillations (FP) are also used in this regard.
The optical data in the THz and far-infrared ranges ($2~{\rm cm}^{-1} < (\omega/2\pi c) < 200$~\cm) lie well above the background, which was approximated by a power law $\sigma_1\propto \omega^{1.75}$ extending from the optical region down to lower frequencies, and by a stretched exponential $\exp\{\omega^{0.1}\}$ that smoothly connects the dielectric range with the optical data. The dashed green line represents the theoretically predicted frequency dependence of a coherent spinon Fermi surface coupled to the optical conductivity via an emergent gauge field \cite{Ng07}. The effective range of the spinons (shaded in red) extends from the crossing of the $\omega^2$ line with the electronic background conductivity up to the antiferromagnetic exchange coupling energy $\hbar\omega/ k_B \approx J \approx 250$~K (data taken from \cite{Pustogow18b,Dressel18}). \label{fig:spinons} } \end{figure} By preparing several extremely thin single crystals of the strongly correlated spin-liquid compound \dmit, the optical transmission could be measured down to THz frequencies. Figure~\ref{fig:spinons} shows that clear indications of spinon contributions to the optical conductivity can be identified. They become pronounced only at rather low temperatures ($T=3$~K) and low frequencies $\hbar\omega < J = 22$~meV, corresponding to 250~K. For $\hbar\omega < k_BT$ one expects a drop according to $\sigma_1(\omega) \propto \omega^2$ \cite{Ng07} until hopping transport dominates at the lowest frequencies \cite{Pinteric14,Dressel16,Dressel18}. This is a confirmation of gapless excitations and the presence of a spinon Fermi surface \cite{Pustogow18b}. Unfortunately, comparable experiments on \cat\ have not been performed yet. Measurements of the magnetocaloric effect of \etcn\ down to mK temperatures reveal a decoupling of the electron spins from the lattice in the quantum-spin-liquid state, which is seen as an indication of gapless spinon excitations \cite{Isono18}. Isono {\it et al.} then apply a magnetic field in order to move the system away from a quantum critical point, and they find the number of spin states that interact with the lattice vibrations strongly reduced; in other words, there exists only a weak spin-lattice coupling. The picture of quantum critical behavior in the quantum-spin-liquid compound \etcn\ was previously suggested on the basis of magnetic torque measurements \cite{Isono16}, where a universal critical scaling was observed. Winter and collaborators \cite{Riedl19}, however, challenged this interpretation and proposed that disorder-induced spin defects (Figure~\ref{fig:RVB}) better explain the low-temperature properties. These spins are attributed to valence-bond defects that emerge spontaneously as the quantum spin liquid enters a valence-bond glass phase. This suggestion is strongly supported by recent ESR experiments \cite{Miksch20}, presented in Section~\ref{sec:magneticfieldQSL}. Disorder effects may also resolve the dispute on the existence or non-existence of a spin gap, since they localize the spinons, which then do not conduct heat. Indeed, two independent recent studies of \dmit\ --~in contrast to \cite{Yamashita10} but similar to previous studies of \etcn{} \cite{Yamashita09}~-- do not find a finite residual term in the thermal conductivity, suggesting an absence of mobile gapless fermionic excitations.
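Coming back to the Wilson ratio: as a worked estimate --~strictly a plausibility check in CGS units, with $\chi_0 = 1.2\times 10^{-3}$~emu/mol and $\gamma_e = 58.8$~mJ\,K$^{-2}$mol$^{-1} = 5.88\times 10^{5}$~erg\,K$^{-2}$mol$^{-1}$ for \cat~-- Equation~(\ref{eq:Wilson}) yields \begin{equation*} R_W = \frac{4\pi^2 \left(1.38\times 10^{-16}~{\rm erg/K}\right)^2 \times 1.2\times 10^{-3}~{\rm emu/mol}}{3\left(2.002\times 9.27\times 10^{-21}~{\rm erg/G}\right)^2 \times 5.88\times 10^{5}~{\rm erg\,K^{-2}mol^{-1}}} \approx 1.5 \quad , \end{equation*} indeed within the range $R_W = 1.4 - 1.6$ quoted above for the organic compounds.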
These studies conclude that the low-energy excitations responsible for the sizeable $\gamma_e$ are localized; the heat transport is entirely due to phonons scattered by low-energy spinon excitations of the spin-liquid state \cite{Bourgeois-Hope19,Ni19}. We will come back to this topic in Section~\ref{sec:randomnessQSL} shortly. \subsection{6~K-anomaly} \label{sec:6Kanomaly} Although the spin-liquid state is supposed to be homogeneous down to the lowest temperatures, for \etcn\ a remarkable anomaly was observed around $T^*=6$~K in a large number of different physical quantities: it is seen in NMR \cite{Shimizu06}, ESR \cite{Komatsu96,Padmalekha15,Miksch20}, and magnetic susceptibility measurements \cite{Manna10}, in the heat capacity and thermal conductivity \cite{Shimizu03,Yamashita08,Yamashita09}, by anisotropic lattice effects \cite{Manna10}, and as an anomaly in the out-of-plane phonon velocity and ultrasound attenuation \cite{Poirier14}. There is no doubt that the 6~K-anomaly in \etcn\ is omnipresent and robust; a broadly accepted explanation, however, remains elusive to date. It was suggested that the second-order phase transition reflects some instability of the quantum-spin-liquid phase. This may be related to a change in the spin chirality with local $Z_2$-vortex formation \cite{Baskaran89,Kawamura84,Moessner01,Kimchi18}, to Amperean pairing of spinons \cite{Lee07,Galitski07,Grover10,Li10}, or to the formation of an excitonic condensate by charge-neutral pairs of charge $+e$ and charge $-e$ fermions \cite{Qi08}. Several results evidence that the lattice is involved via spin-phonon coupling. In Figure~\ref{fig:CuCNanomaly}(a) the thermal expansion coefficient $\alpha_i = l_i^{-1} \partial l_i / \partial T$ of \etcn\ is plotted for the different crystal directions $i$. The distinct $\alpha_b$ versus $\alpha_c$ anisotropy implies in-plane lattice distortions. They are temperature dependent and most pronounced for $T < 50$~K, where upon cooling the $b$-axis lattice parameter strongly contracts (large positive $\alpha_b$) while the $c$-axis lattice constant expands ($\alpha_c<0$). Since the hopping amplitudes $t^{\prime}$ and $t$ among the dimers depend sensitively on the lattice parameters (cf. Figure~\ref{fig:k-structure}), the degree of frustration $t^{\prime}/t$ increases in this temperature range. Around 10~K the behavior reverses. \begin{figure}[h] \centering \includegraphics[width=0.7\columnwidth]{Manna+Poirier.pdf} \caption{\label{fig:CuCNanomaly} (a)~Thermal expansion coefficients $\alpha_i(T)$ for \etcn\ measured along the in-plane $b$- and $c$-axes around the 6~K phase transition at $H=0$ and 8~T (green dots) (after \cite{Manna10,Manna12}). (b)~When the results of different crystals are compared, a large sample-to-sample variation becomes obvious as far as the size of the transition is concerned, while the transition temperature shifts by less than 0.5~K (after \cite{Manna18}). (c)~Softening peaks $(\Delta V / V)_S$ and (d) variation of the attenuation $\Delta \alpha$ for \etcn\ as a function of temperature below 10~K: $H=0$~T, 150~MHz (red dots) and 210~MHz (blue dots); $H=16$~T (black symbols). The vertical dashed lines indicate the transition temperatures $T_p(0)$ and $T_p(H)$ (data from \cite{Poirier14}). } \end{figure} The distinct peak around $T^*=6$~K clearly indicates a phase transition. While the overall feature is robust, there is a sizeable variation among crystals from the same and from different batches, as illustrated in Figure~\ref{fig:CuCNanomaly}(b).
Surprisingly, no dependence on magnetic field is observed up to 8~T. The absence of any hysteresis is consistent with a second-order phase transition. The entropy release found experimentally \cite{Manna10} exceeds the residual spin entropy considerably, providing strong arguments that spin degrees of freedom alone are insufficient to account for the phase transition. Charge degrees of freedom are a possible candidate to account for this mismatch. In fact, according to optical conductivity measurements \cite{Kezsmarki06,Elsasser12}, the charge gap in \etcn\ is strongly suppressed because the material is next to the insulator-metal transition, where metallic quantum fluctuations are present \cite{Pustogow18a,Dressel18} due to the vicinity of the coexistence regime. Ultrasound measurements of \etcn\ by Poirier {\it et al.} \cite{Poirier14} reveal a softening of the longitudinal velocity along the direction perpendicular to the $bc$-plane for temperatures below 20~K. As shown in Figure~\ref{fig:CuCNanomaly}(c), the maximum value is found around 5\,K and it depends slightly on frequency. The behavior below the peak is affected by an applied magnetic field. Although more susceptible to extrinsic effects, the corresponding attenuation displayed in Figure~\ref{fig:CuCNanomaly}(d) is in accord with the sound velocity. Neither a charge-lattice coupling nor a classical magnetoelastic coupling appears appropriate to account for these elastic anomalies. On the one hand, vibrational spectroscopy could prove the absence of charge order and of any variation around $T^* \approx 6$~K \cite{Sedlmeier12}. On the other hand, a classical model of the magnetoelastic coupling cannot predict a softening peak located near 6~K when the exchange interaction $J \approx 250$~K is 40 times larger. Relaxation effects were also discarded because of inconsistencies with the frequency dependence. There are attempts to attribute the frequency-dependent velocity softening observed in ultrasound at low temperatures to a spinon-phonon coupling; this effect is reduced below $T^*$ due to a pairing instability transition of the spinons \cite{Poirier14}. Below $T\approx 100$~K, numerous physical properties in both the spin and charge sectors evidence some anomalous behavior that suggests an exotic charge-spin coupling and the possibility that it may play a pertinent role in the formation of the quantum spin liquid at low temperatures. Microwave investigations reveal an anomaly of the dielectric function around $T^*=6$~K that depends on frequency and power as well as on external magnetic field \cite{Poirier12a}. Even more surprising is the Curie-Weiss behavior observed in the audio- and radio-frequency dielectric constant below 60\,K, $\epsilon_1(T) \propto C/(T-T_C)$, where $C$ is the Curie constant and $T_C$ was fixed to 6\,K in line with previous observations \cite{Abdel10}; see Figure~\ref{fig:kappaQSLdieltemp} for illustration. In order to verify this assumption, Pinteric {\it et al.} worked with both $C$ and $T_C$ as free fit parameters and found that the Curie-Weiss parameters differ for the in-plane and out-of-plane crystallographic directions and for single crystals of different syntheses \cite{Pinteric14}. This stresses the involvement of the lattice, with its alternating anionic and cationic layers, and the effect of disorder.
Abdel-Jawad {\it et al.} \cite{Abdel10} claimed that the permittivity of \etcn\ exhibits a tiny anomaly around $T_C\approx 6$~K that is almost independent of frequency; the absence of any remnant polarization was interpreted as an indication of antiferroelectric ordering of electric dipoles on the dimers. Recent dielectric measurements by R{\"o}sslhuber {\it et al.}, however, cast doubt on this interpretation: with increasing hydrostatic pressure as well as with chemical substitution, a low-temperature feature evolves and grows rapidly, which could be related to percolative effects due to spatial inhomogeneities as the first-order phase transition is approached \cite{Pustogow19,Rosslhuber19,Saito20}, as discussed in Sec.~\ref{sec:QSLMott}. Here no anomaly is found around 6~K. We should also note that the relaxor dielectric anomaly observed in all quantum-spin-liquid candidates at temperatures below about 50~K cannot simply be assigned to electric dipoles; the absence of sizeable static electric dipoles has been demonstrated by comprehensive vibrational, dielectric, and structural studies, thereby also eliminating any connection to the spin-liquid state. However, this conclusion does not rule out fluctuating dipoles and charge-spin coupling in the formation of the quantum spin liquid, since fast charge oscillations are observed. We come back to this topic in Section~\ref{sec:randomnessQSL} and discuss it at length in Sections \ref{sec:quantumelectricdipoles} and \ref{sec:QSL+afm}.

\subsection{Valence-bond solid}
\label{sec:magneticfieldQSL}
\label{sec:VBS}

Since magnetic field is expected to affect the spin arrangement in a quantum spin liquid \cite{Zhou17}, the application of an external $H$-field was considered for most experiments on \etcn. Hence it is surprising that the influence reported for common laboratory-scale magnetic fields up to 8--14~T is negligible, see e.g.\ Figure~\ref{fig:CuCNanomaly}. Low-temperature NMR measurements reveal an anomalous field-dependent spectral broadening of the $^{13}$C line for fields along the $a^\star$-axis ({\it i.e.} perpendicular to the planes) that is attributable to a spatially nonuniform staggered magnetization induced in the spin liquid under magnetic fields. Nonmagnetic impurities also cause a broadening of the NMR line \cite{Gregor09}. The thermal conductivity $\kappa$ in the low-temperature regime is slightly enhanced for fields above 4~T in the $a$-direction, which Yamashita {\it et al.} interpreted as the closing of a small spin gap in the quantum-spin-liquid state \cite{Yamashita09,Yamashita12}. A negligible field effect in the \dmit\ quantum spin liquid was recently reported by two independent groups \cite{Bourgeois-Hope19,Li19}, in contrast to previous measurements \cite{Yamashita10}. The absence of reproducible experimental data on the thermal conductivity $\kappa(T)$ represents a challenge to understanding the quantum-spin-liquid nature of organic solids.

Based on muon spin relaxation ($\mu$SR) measurements, a macroscopic phase separation was claimed in the quantum spin liquid \etcn\ below $T=0.3$~K \cite{Nakajima12}. Already at zero field, two phases are identified by their different spin dynamics; with increasing magnetic field the difference stabilizes and is observed as a static inhomogeneity. Similar investigations have been conducted by Pratt {\it et al.} down to mK~temperatures \cite{Pratt11}.
They report that applying a small magnetic field of 5~mT to \etcn\ at very low temperatures produces a quantum phase transition between the spin-liquid phase and an antiferromagnetic phase with a strongly suppressed moment. This is described as Bose-Einstein condensation of spinon excitations with an extremely small spin gap of 3.5~mK. A weak-moment antiferromagnetic phase dominates the low-temperature phase diagram, with several subphases depending on magnetic field and temperature. At higher fields, a second transition is found that suggests a threshold for deconfinement of the spinon excitations. Several of these sub-phases also show up in temperature- and field-dependent ESR experiments \cite{Miksch20}.

Electron spin resonance (ESR) has the advantage over most other spectroscopic methods that it uses the electron spins themselves to directly probe the magnetic properties of the quantum-spin-liquid state. In line with first indications observed previously \cite{Komatsu96,Padmalekha15}, comprehensive ESR experiments recently conducted over a large frequency and temperature range \cite{Miksch20}
\begin{figure}[h]
\centering
\includegraphics[width=1\columnwidth]{ESR_QSL.pdf}
\caption{\label{fig:ESR} Temperature dependence of the (a)~spin susceptibility $\chi_S$, (b)~$g$-value, and (c)~line width $\Delta H$ obtained from fits to the ESR spectra measured in the three directions ${H}\parallel a^*, b, c$ for \etcn\ single crystals. The experiments have been performed in the X-band (9.5~GHz; solid symbols) and the W-band (95~GHz; open symbols). The spin susceptibility is described by an antiferromagnetic Heisenberg model on a triangular lattice at elevated temperatures. Below $T^*=6$~K an exponential decay of the main signal evidences the opening of a spin gap $\Delta = 12$~K. Below $T^*$ we notice the simultaneous appearance of the lone-spin component: The orange diamonds correspond to the defect spins; the green $\star$, blue $+$, and red $\times$ indicate the respective line width. The $g$-value does not exhibit any change with temperature within the experimental uncertainty estimated as 5\% of the line width. The line width $\Delta H(T)$ is basically independent of the measurement frequency and field; it is largest along the $b$-direction. The minimum of $\Delta H(T)$ around $T \approx 3$~K coincides with features that can be identified in specific heat data at the same temperature \cite{Yamashita08,Manna10}. Note the logarithmic temperature scale. (d)~Anisotropy of the X-band ESR resonance field of \etcn\ measured at $T=2$~K. Besides the main signal (solid line) from the BEDT-TTF radical, additional lines (dashed lines) appear at low temperatures. The main line has its maximum along the crystallographic $b$-direction whereas the maxima of the defect signals are shifted by an angle of $\pm 22^\circ$ in the $bc$-plane. Similar shifts can be identified for the other directions: $\pm 25^\circ$ in the $a^*b$-plane and $\pm 38^\circ$ in the $a^*c$-plane. The mirror-image doubling of the signal corresponds to the two distinct orientations of the molecules that occur due to stacking faults. (data from \cite{Miksch20}). }
\end{figure}
give compelling evidence that the spin-liquid state of \etcn\ is modified or even vanishes at $T^*\approx 6$~K. In the high-temperature region, the magnetic properties can be described by an antiferromagnetic Heisenberg model on a triangular lattice, yielding a coupling of $J=250$~K in accord with previous estimates \cite{Shimizu03} listed in Table~\ref{tab:1}.
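In its simplest isotropic form, this is the nearest-neighbor Heisenberg antiferromagnet on the triangular lattice; as a sketch (neglecting the weak spatial anisotropy $t \neq t^{\prime}$ of the actual system),
\begin{equation}
\mathcal{H} = J \sum_{\langle i,j \rangle} \mathbf{S}_i \cdot \mathbf{S}_j \;,
\end{equation}
with spin-$\frac{1}{2}$ operators $\mathbf{S}_i$ on the dimer sites, the sum running over nearest-neighbor bonds, and $J \approx 250$~K.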
Figure~\ref{fig:ESR}(b) shows that the $g$-values do not depend on temperature and frequency. Below $T^*$ the signal becomes very narrow, and an additional component appears. Most importantly, the main signal completely vanishes upon further cooling, as plotted in Figure~\ref{fig:ESR}(a). In other words, the spin susceptibility drops exponentially, $\chi_S(T) \propto \exp(-\Delta/T)$, due to the opening of a spin gap $\Delta \approx 12$~K in the excitation spectrum. This is explained by the formation of valence bonds (sketched in Figure~\ref{fig:RVB}) causing a singlet state; via spin-phonon coupling the lattice is affected as in spin-Peierls systems \cite{Dumm00}, consistent with the lattice-expansion and ultrasound observations depicted in Figure~\ref{fig:CuCNanomaly}. Similar conclusions can be drawn from the NMR spin-lattice relaxation rate $(T_1T)^{-1}$, which is proportional to the susceptibility: a drop in the spin susceptibility is seen below $T^*$ before it strongly increases due to defects \cite{Shimizu03,Shimizu06}.

The minor ESR signal shifts to lower fields and can be clearly identified as a separate maximum below $T=2.5$~K. In Figure~\ref{fig:ESR}(d) the angular dependence of the ESR signal is displayed, revealing two low-temperature features with a large anisotropy of about 10~mT, which are symmetrically shifted by $\pm 22^{\circ}$ with respect to the main signal. A similar behavior has been reported for \etcl, due to crystallographically inequivalent adjacent layers \cite{Antal09,Antal12c,Antal15}, depicted in Figure~\ref{fig:structure_kappa}. For the monoclinic symmetry of \etcn, this explanation does not hold; hence stacking faults in the crystal are suggested to cause the doubling of the line. As the temperature is reduced further to the mK~range, this ESR line separates even more. From its pronounced dependence on the external magnetic field and the particular anisotropy shown in Figure~\ref{fig:ESR}(d), the minor ESR feature is assigned to defect spins, which can interact by dipolar coupling with local magnetic moments of the Cu$^{2+}$ sites in the otherwise non-magnetic Cu$^{+}$ anion layers \cite{Komatsu96,Padmalekha15}, eventually forming a weakly coupled antiferromagnetic state \cite{Miksch20}. Its strength might vary from sample to sample, but the effect in general is robust.

\begin{figure}[h]
\centering
\includegraphics[width=0.8\columnwidth]{VBS+phasediagram.pdf}
\caption{\label{fig:RVB} (a)~Quasi-static valence-bond solid on an anisotropic triangular lattice, as suggested for \etcn\ \cite{Riedl19,Kawamura19,Miksch20}. The spin singlets indicated in blue preferably form along the ($b\pm c$)-directions; domain walls between the valence-bond patterns are indicated in red. Local valence-bond defects can occur at various kinds of grain boundaries; a local spin $\frac{1}{2}$ can also be caused by the breaking of a singlet bond, probably due to an anion-layer vacancy, emphasized by the red circle. (b)~The valence-bond solid state (violet) will affect the low-temperature phase boundaries sketched in Figure~\ref{fig:schematicPhaseDiagrams}. In the case of \etcn\ the insulating phase is next to the superconducting state. \label{fig:VBS+phasediagram}}
\end{figure}

The findings are in agreement with the $\mu$SR experiments by Pratt {\it et al.} \cite{Pratt11} and with the suggestion by Riedl {\it et al.} based on their analysis of the low-temperature magnetic torque measurements \cite{Isono16,Riedl19}.
Figure~\ref{fig:RVB}(a) summarizes the present understanding of the ground state in \etcn\ \cite{Riedl19,Kawamura19,Miksch20}: below $T^*$, spin singlets form a valence-bond solid accompanied by a lattice distortion, similar to a spin-Peierls coupling. There remain numerous orphan spins, {\it i.e.} local magnetic moments that might interact by dipolar coupling with each other, but also with magnetic moments in the anion layers. Although there exists no long-range magnetic order, the valence-bond solid does influence the low-temperature phase diagram compared to the spin-less case sketched in Figure~\ref{fig:schematicPhaseDiagrams}(b). In Figure~\ref{fig:VBS+phasediagram}(b) we suggest a revised phase diagram for the quantum-spin-liquid candidate \etcn, where the transition to the superconducting state is assumed to be vertical since no experimental data are available at such low temperatures \cite{Kurosaki05,Lohle18,Furukawa18}.

It is crucial to extend this sort of low-temperature ESR investigation to other spin-liquid compounds in order to conclude whether the case of \etcn\ is particular. The effect of Cu$^{2+}$ ions, for instance, should not be an issue in the \agcn\ and \dmit\ salts. However, the formation of a disordered valence-bond solid implying the opening of a spin gap could be a general scenario \cite{Shimizu07,Tamura09,Kimchi18,Clay19}. In this regard, the recently synthesized compound $\kappa$-(BEDT-TTF)$_2$Cu[Au(CN)$_2$]Cl is of interest as it possesses no disorder in the anions \cite{Tomeno20}.

\subsection{Randomness}
\label{sec:randomnessQSL}

This brings us to the long-standing debate about whether the quantum-spin-liquid properties in the organic compounds originate solely from geometrical frustration or whether some sort of randomness or disorder might be crucial \cite{Watanabe14}. The interplay between mutual interaction and quenched disorder is a fundamental issue not only in condensed matter research but also in the physics of cold atoms. Recently this was further investigated theoretically in three and two dimensions \cite{Watanabe14,Shimokawa15,Liu18,Wu19,Uematsu19,Kawamura19}. Within randomness-induced quantum-spin-liquid models, disorder supports the cooperative action of quantum fluctuations and triangular frustration in the stabilization of a quantum-spin-liquid state \cite{Watanabe14, Kawamura19}. This is important since frustrated triangular lattices in two dimensions are unable to destroy long-range magnetic order on their own \cite{Huse88,Bernu92,Capriotti99}. Randomness helps to constitute the quantum spin liquid in the two-dimensional organics; its nature is two-fold: (i) The intrinsic randomness originates in the charge sector and acts via charge-spin coupling; Hotta \cite{Hotta10} suggests that quantum electric dipoles are formed on the dimers, which interact with each other and thus modify the exchange coupling $J$ between the spins on the dimers, crucial for the formation of the spin-liquid state. Experimentally, only inhomogeneous charge fluctuations are observed \cite{Sedlmeier12, Yakushi15, Nakamura17}, which indicate electric dipoles on the dimers fluctuating at a rate of about 0.1\,THz. We discuss this topic in Section \ref{sec:quantumelectricdipoles}.
(ii) The extrinsic quenched randomness comes into play as a result of disorder in the anion layers [Figures~\ref{fig:structure_kappa}(h) and (i)], and was revealed to be inherent to \etcn\ and \agcn\ \cite{Dressel16,Pinteric16}; this aspect is crucial for understanding the dc transport and electrodynamic properties of these materials in general and is discussed in Sections \ref{sec:randomness} and \ref{sec:QSL+afm}. Hence, the quenched randomness originating in the anions might provide a spatially random effective interaction to the spin degrees of freedom.

\etcl\ is an antiferromagnetically ordered Mott insulator at low temperatures with a ratio $t^{\prime}/t$ far from full frustration. Extended x-ray irradiation of 500~h (0.5~MGy/h) introduces sufficiently strong disorder to suppress the antiferromagnetic state in \etcl. It was suggested \cite{Furukawa15b,Urai20} that the system evolves towards a quantum spin liquid, because the antiferromagnetic ordering observed in the pristine crystal disappears; no spin freezing, spin gap, or critical slowing down is observed in $^1$H-NMR experiments; instead, gapless spinon excitations emerge. Saito {\it et al.} went the other way by slightly modifying the BEDT-TTF molecules, partially replacing sulfur with selenium in the inner rings \cite{Saito18}. As discussed in Section \ref{sec:QSLMott}, BEDT-STF substitution enlarges the bandwidth and drives the system across the Mott transition [Figure~\ref{fig:dcPressureAlloy}(b)]. However, the magnetic characteristics of \stfcn\ with $x=0.05$ are quantitatively similar to those of the pristine crystals (Figure \ref{fig:random3}). Moreover, the substituted sites behave magnetically much like those in the bulk. NMR spectra from the impurity site suggest a decrease in local spin susceptibility and that no staggered moments are induced. Thus, the results indicate that the static and dynamic susceptibilities do not change, even at very low temperatures. This led the authors to the conclusion that disorder might already play a role in the pure compound and that the observed magnetic quantum-spin-liquid properties are not solely caused by geometrical frustration.

\begin{figure}
\centering
\includegraphics[width=0.4\columnwidth]{NMR1.pdf}~~~~
\includegraphics[width=0.47\columnwidth]{NMR2.pdf}
\caption{\label{fig:random3}\label{fig:NMR1}\label{fig:NMR2} (a) Temperature dependence of $(T_1 T)^{-1}$ for the pure and $x=0.05$ crystals of \stfcn. The green diamonds indicate data of a pure sample without distinguishing inner and outer sites \cite{Saito18,Shimizu06}. (b) Mean value of $(T_1 T)^{-1}$ of the associated inner and outer sites for bulk and impurity sites. The results of the bulk site refer to the left axis, while those of the impurity site refer to the right axis \cite{Saito18}. }
\end{figure}

This idea was extended to $\lambda$-(BEDT-STF)$_2$GaCl$_4$ \cite{Saito19}, where site-selective NMR was also utilized to investigate the non-magnetic insulating phase of the stripe-lattice system consisting of triangular and square tilings. As the temperature decreases, antiferromagnetic spin fluctuations develop. The spin-lattice relaxation rate $T_1^{-1}$ is strongly enhanced, but saturates below 3.5~K with no indications of long-range magnetic order. $\lambda$-(BEDT-STF)$_2$GaCl$_4$ is a disordered electronic system where a novel quantum disordered state is realized.
The non-magnetic ground state is discussed in terms of geometrical frustration, disorder, and the quantum critical point between the antiferromagnetic phase and the spin-gap phase. In a new class of hybrid organic crystals, [EDT-TTF-CONH$_2$]$_2$[BABCO], Szirmai {\it et al.} \cite{Szirmai20} suggested that a quantum-spin-liquid state is introduced not by frustration -- which is only $t : t^{\prime} : t^{\prime\prime} = 1.00 : 0.75 : 0.60$ -- but mainly by disorder that originates in the molecular rotor BABCO$^-$ \cite{Lemouchi12,Lemouchi13}. Despite the rather strong coupling of $J \approx 314$~K, the compound does not show indications of a magnetic phase transition down to 20~mK. From a variety of advanced magnetic probes (multi-frequency ESR, $^1$H-NMR, zero-field $\mu$SR), the authors conclude that intrinsic randomness due to the configuration of the frozen BCO Brownian rotors causes a subtle disorder potential that suppresses magnetic order. The $^1$H-NMR relaxation rate $T_1^{-1}$ exhibits a weak increase below 10~K following an unusual power law, similar to \etcn\ and other quantum-spin-liquid candidates, cf. Figures~\ref{fig:NMR2} and \ref{fig:HgCl_NMR1}. The electron spin relaxation rate of the $\mu$SR spectra changes only slowly with temperature, without critical slowing down, in the temperature range from 0.5~K down to 20~mK, which is a universal characteristic of several quantum-spin-liquid compounds \cite{Kermarrec11,Pratt11,Fak12,Clark13}.

Dynamic magnetic inhomogeneities are indicated by the strong field dependence of $T_1^{-1}$ observed at around 1~K in \etcn\ and \agcn\ \cite{Shimizu03,Shimizu06,Shimizu16}. Disorder is also identified in magnetically coupled defect spins linked to the frustrated BEDT-TTF organic lattice, discussed in the previous Section~\ref{sec:magneticfieldQSL}. Defect spins coupled to kagome and triangular lattices are also observed in the inorganic quantum spin liquids herbertsmithite ZnCu$_3$(OH)$_6$Cl$_2${} \cite{Zorko17,Khuntia20} and 1T-TaS$_2${} \cite{Klanjsek17}. The observation of disorder outside frustrated lattices might present a common feature of quantum-spin-liquid candidates; further studies are needed to clarify the interplay between the defect spins and the inherent frustrated lattices, as well as whether the presence of this disorder is related to the establishment of quantum-spin-liquid states.

\subsection{Outline}
\label{sec:outline}

In prototypical strongly correlated materials, such as heavy fermions or transition metal compounds, $f$- and $d$-electrons with their rather narrow bands govern the electronic properties. Often the Fermi surface is very complex and multiple bands compete. Molecular materials, on the other hand, are characterized by delocalized $\pi$-electrons that are distributed over the extended organic molecule. If the almost planar molecules are stacked or assembled face-to-face in certain patterns constituting bulk crystals, the protruding orbitals of adjacent molecules overlap slightly, forming narrow electronic bands. Although the unit cell of molecular crystals contains many atoms, it consists of a small number of molecules. In most cases, only one electronic band crosses the Fermi energy, leading to rather simple Fermi surfaces \cite{WosnitzaBook}. The typical bands derive only from the lowest unoccupied molecular orbital (LUMO) and the highest occupied molecular orbital (HOMO), and are often isolated from other bands.
Therefore, the effective Coulomb interaction between electrons on the HOMO and LUMO orbitals is poorly screened by other bands. These two factors (small bandwidth and ineffective screening) make molecular conductors mostly strongly correlated electron systems. Since heavy fermions are intermetallic compounds with delocalized electrons, strong electronic correlations are adequately captured by a simple renormalization of the Fermi-liquid parameters \cite{ColemanBook}. Transition metal compounds are most successfully described by the Hubbard model, {\it i.e.} a lattice fermion model including onsite and intersite interactions as required. Here strong electronic repulsion causes a metal-insulator transition that takes place either by diverging mass or -- more commonly -- by vanishing carrier number \cite{Imada98}. In many transition metal oxides, hybridization with the oxygen $p$ orbitals takes place and the character of the low-energy charge excitations changes; these Mott insulators are actually charge-transfer insulators. In this regard organics represent the best examples of Mott-Hubbard insulators, with the lowest-energy excitation between the lower and upper Hubbard bands. Here the quasi-particle mass increases significantly as the Mott transition is approached.

By now it has become clear that molecular conductors are an original class of strongly correlated electron systems. They possess several features that make them a diverse playground for the study of quantum many-body physics and the properties of quantum materials. Organic solids are available as extremely pure single crystals of millimeter size and are stable under ambient conditions. The energy scales fall in an easily accessible range as far as temperature, magnetic field, and pressure are concerned: superconductivity occurs around 10~K but can be destroyed by applying less than 20~T; the typical bandwidth is less than 1~eV; and due to the large compressibility of organic compounds, a pressure of less than 1~GPa already leads to significant alterations of the physical properties. Since different organic molecules can be combined with a number of counterions, and given the numerous stacking patterns possible in two dimensions, a variety of prototypical behaviors of correlated electron systems can be achieved and easily modified by chemical means.

But there are some caveats that have to be mentioned: several of the methods that have been advanced over the last decades to investigate the electronic and magnetic properties of inorganic materials cannot easily be applied to molecular compounds. Limited crystal size and the presence of hydrogen essentially prevent neutron scattering experiments, the most powerful tool for the exploration of magnetic properties and dispersions. Furthermore, scanning tunneling methods as well as photoemission spectroscopy are challenging due to the surface properties and ionic structure. Nevertheless, the study of organic crystals has significantly profited from the enormous progress in instrumentation seen over the last years, and spectacular achievements are reported on a regular basis.

We start in Chapter~\ref{sec:ChargeOrder} with a survey of charge order and ferroelectricity in two-dimensional molecular solids, in particular covering the experimental achievements and theoretical insight gained over the last decade or so. By now, dimerized BEDT-TTF salts are widely recognized as the prime examples for studying the metal-insulator transition driven by onsite Coulomb repulsion.
Hence in Chapter~\ref{sec:MottTransition} we cover various aspects of the Mott transition, starting from the scaling behavior around the critical endpoint, followed by the spatial coexistence of metallic and insulating regions at the first-order Mott transition. In the following Chapter we consider the highly frustrated magnetic state leading to the quantum-spin-liquid ground state. In Chapter~\ref{sec:quantumdisorder} we turn our attention to the charge degrees of freedom and explore their interplay with the magnetic degrees of freedom, and how varying dimerization and geometrical frustration results in the particular ordered and liquid ground states. The Chapters are intended to be self-contained, with numerous cross-references that allow the reader to select topics of particular interest.

\subsection{Effects of disorder on the Mott transition}
\label{sec:randomness}

The electronic properties of real materials are strongly affected by electronic interaction and randomness \cite{Lee85,Belitz94,DobrosavljevicBook}. Localization of particles can be caused by Coulomb repulsion and disorder. While electronic correlations are the driving force behind the Mott transition, the Anderson transition is due to coherent backscattering of non-interacting particles from randomly distributed
\begin{figure}[h]
\centering
\includegraphics[width=0.4\columnwidth]{Anderson1.pdf}
\caption{\label{fig:Anderson1} Phase diagram of the non-magnetic Anderson-Hubbard model as calculated by dynamical mean-field theory with the typical local density of states (based on \cite{Byczuk05}). Note the reversed scale for $U/W$.}
\end{figure}
impurities. The challenge for a theoretical understanding is that disorder and interaction effects are known to compete in subtle ways \cite{Finkelshtein83, Castellani84, Tusch93, Lohneysen00,Kravchenko04, Byczuk05,AbrahamsBook}. In Figure~\ref{fig:Anderson1} the ground-state phase diagram of the Anderson-Hubbard model is displayed, based on dynamical mean-field theory (DMFT) calculations of the disordered Hubbard model at half filling with the typical local density of states \cite{Byczuk05}. The interaction strength is given by $U/W$, the disorder strength by $\Delta$. Two different phase transitions are found: a Mott insulator-metal transition for weak disorder $\Delta$ and an Anderson transition for weak interaction $U$. Two insulating phases surround the correlated, disordered metallic phase. Dobrosavljevi{\'c} and collaborators also considered the temperature evolution when investigating the effect of disorder on the Mott transition by DMFT combined with typical-medium theory \cite{Dobrosavljevic97,Dobrosavljevic03,Aguiar05,Braganca15}; in particular, they looked at the coexistence regime of insulating and metallic solutions. While for weak disorder the coexistence region is found to be similar to that in the clean case, with increasing disorder Anderson localization effects are responsible for shrinking the coexistence region, as depicted in Figure~\ref{fig:disorder}. Both the $U_{c1}$ and $U_{c2}$ lines move toward larger interaction strength and approach each other as disorder increases. As disorder becomes even stronger and exceeds twice the bandwidth, the region drastically narrows and the critical temperature $T_{\rm crit}$ abruptly goes to zero. Here the transition occurs at $U \approx W$, and the Anderson and Mott routes to localization become equally important; in other words, the effects of interaction and disorder are of comparable relevance for charge localization.
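For orientation, the Hamiltonian behind these considerations is the Anderson-Hubbard model; a minimal sketch (assuming, as is common in this context, random site energies $\epsilon_i$ drawn from a box distribution of width $\Delta$) reads
\begin{equation}
H = -t \sum_{\langle ij \rangle \sigma} \left( c^{\dagger}_{i\sigma} c^{\phantom{\dagger}}_{j\sigma} + {\rm h.c.} \right) + U \sum_{i} n_{i\uparrow} n_{i\downarrow} + \sum_{i\sigma} \epsilon_i \, n_{i\sigma} \;,
\end{equation}
where the hopping $t$ sets the bandwidth $W$; the limits $\Delta = 0$ and $U = 0$ recover the pure Mott-Hubbard and Anderson problems, respectively.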
The observation of a metal-insulator transition without a coexistence region suggests that the nature of the transition has changed from first to second order as disorder increases. This is in accord with the idea of a quantum-critical regime for $T > T_{\rm crit}$, which was suggested by theory \cite{Terletska11} and experiments \cite{Furukawa15a} on various dimerized organic Mott insulators, as shown in Figure~\ref{fig:MottCriticality} and discussed in Sec.~\ref{sec:Mottquantumcriticality}.

\begin{figure}[h]
\centering
\includegraphics[width=0.8\columnwidth]{disorder2+3.pdf}
\caption{\label{fig:disorder} Phase diagram for the disordered Hubbard model at nonzero temperature (a) for different degrees of disorder and (b) for different temperatures $T$ given in units of the bandwidth $W$ (after \cite{Aguiar05}); note that the axis $U/W$ goes from right to left in order to mimic the pressure dependence. The spinodal lines $U_{c1}$ and $U_{c2}$ indicate the boundaries of the insulating and metallic solutions. }
\end{figure}

Single crystals of organic charge-transfer salts are renowned for their superior quality, mainly because they are grown by electrochemical methods \cite{WilliamsBook,Montgomery94}. Occasionally some effect of the starting molecules, solvent, or atmospheric environment has been reported \cite{Strack05,Lang06,Pinteric14}, but the crystal structure and overall behavior are barely affected. An intrinsic and reversible way of introducing disorder in BEDT-TTF salts is the adjustment of the cooling rate \cite{Su98b,Stalcup99}, which affects the freezing of the terminal ethylene-group disorder (Figure~\ref{fig:BEDT-TTF}) at around 70 to 80~K and thereby the resistivity profiles \cite{Tanatar99,Saito99,Muller02,Wolter07} and the pairing symmetry of the superconducting ground state \cite{Kanoda90,Kanoda93,Achkir93,Le92,Harshman90,Lang92,Dressel94a,Lang96,Pinteric00,Pinteric02}. This provides the possibility to study the disorder dependence of physical properties via targeted artificial defects.

X-ray irradiation is commonly utilized to create disorder in molecular solids in a controlled way. Sasaki and collaborators could show \cite{Sasaki11,Yoneyama10,Sasaki12,Sasaki12c} that the defects are introduced mainly in the anion layers separating the organic molecules, leading to a modulation of the potential rather than the creation of electronic charges \cite{Sasaki07,Matsumoto12}. In general, the room-temperature resistivity substantially decreases with irradiation dose. Also the characteristic hump around $T=100$~K quickly disappears \cite{Analytis06,Sano10,Sasaki12c}, in good agreement with DMFT calculations \cite{Radonjic10}. The low-temperature properties are even more sensitive to irradiation, as revealed by extensive studies of the normal and the superconducting state of \etscn\ and \etbr, as well as of $\beta$-(BEDT\--TTF)$_2$AuI$_2$. Already small doses increase the residual resistance $\rho_0$ and reduce the transition temperature $T_{c}$ \cite{Dolanski92,Analytis06}; however, it takes rather extensive irradiation to completely suppress the superconducting state \cite{Sano10,Sasaki11}. The metallic compound \etbr, displayed in Figure~\ref{fig:irradiation1}(a) as an example, turns insulating when the electrons become localized due to the randomness of the potential: the charge transport takes place via hopping between sites \cite{Sano10}.
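Such hopping conduction is commonly parameterized by a variable-range-hopping law; as an illustration, Mott's form in $d$ dimensions reads
\begin{equation}
\rho(T) = \rho_0 \exp\left[ \left( \frac{T_0}{T} \right)^{1/(d+1)} \right] \;,
\end{equation}
where $T_0$ is a characteristic temperature set by the localization length and the density of localized states; for the two-dimensional layers $d=2$ and the exponent is $1/3$, while Coulomb interactions would modify it to the Efros-Shklovskii value $1/2$. Whether this or the latter exponent applies to the irradiated samples is left open here.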
\begin{figure}
\centering
\includegraphics[width=0.8\columnwidth]{irradiation1.pdf}
\caption{\label{fig:irradiation1} Temperature dependence of the dc resistivity of (a) \etbr\ and (b) \etcl\ irradiated by x-rays. The time indicated is the total x-ray exposure time at room temperature (data from \cite{Sasaki07,Sano10,Sasaki12c}).}
\end{figure}

\begin{figure}[h]
\centering
\includegraphics[width=0.8\columnwidth]{irradiation2.pdf}
\caption{\label{fig:irradiation2} (a) Infrared optical reflectivity and conductivity of \etcl\ before and after x-ray irradiation. The inset of the lower panel indicates the effective number of carriers; (b)~Scanning micro-region infrared reflectance spectroscopy map of the partly x-ray irradiated \etcl. The sample is irradiated through the molybdenum mesh mask (reproduced from \cite{Sasaki12c}).}
\end{figure}

The Mott insulators, on the other hand, such as \etcl\ or \etcn, lose their high-resistivity state upon irradiation: the resistivity decreases in the whole temperature range. For \etcl\ a metallic-like temperature dependence down to about 50\,K is reached at a rather low dose \cite{Sasaki12c,Urai19}, as shown in Figure~\ref{fig:irradiation1}(b). With increasing defect concentration, the Mott gap is filled, resulting in a shift of spectral weight from the interband transition to low energies: optical measurements \cite{Sasaki08a} show a Drude-like behavior at low temperatures, as shown in Figure~\ref{fig:irradiation2}(a). DMFT calculations by Radonji{\'c} {\it et al.} explain this behavior by an increase of the bandwidth \cite{Radonjic10}. By using an x-ray microbeam, the spatial dependence can be investigated. To that end, a \etcl\ crystal is irradiated through a molybdenum mesh ($90~\mu{\rm m} \times 90~\mu$m) \cite{Sasaki04a,Sasaki05} and the fabricated pattern studied by scanning micro-region infrared reflectance spectroscopy with a spatial resolution of 5 to $15~\mu$m, as already introduced in Sections~\ref{sec:coexistenceregime} and \ref{sec:FermiLiquid}. Mapping the emv-coupled molecular vibration $\nu_3(a_g)$ provides local information on the electronic states (Figure~\ref{fig:irradiation2}) because the frequency of the mode reflects the electronic state via electron-molecular-vibration (emv) coupling \cite{Sasaki04b,Sasaki12c}.

Spatial inhomogeneities in the two-dimensional electronic and magnetic system may give rise to a Griffiths phase at the metal-insulator transition \cite{Dagotto05,Vojta06,Krivoruchko14}. Following theoretical studies \cite{Tanaskovic04,Andrade09}, recent $^{13}$C-NMR investigations on strongly irradiated (500~h x-ray exposure) \etcl\ found slow dynamics that were interpreted in this regard \cite{Yamamoto20}. The electronic system exhibits long-length-scale self-organization without long-range order. A temperature-pressure-randomness phase diagram with an electronic Griffiths phase is proposed, as displayed in Figure~\ref{fig:Griffiths}.

\begin{figure}[h]
\centering
\includegraphics[width=0.55\columnwidth]{Griffiths.pdf}
\caption{\label{fig:Griffiths} Schematic phase diagram of how the Mott transition develops with increasing randomness, as suggested from NMR experiments on \etcl\ subject to x-ray irradiation \cite{Yamamoto20}. The coexistence regime at the first-order phase transition gradually diminishes, as shown in Figure~\ref{fig:disorder}. Above that, spatial inhomogeneities give rise to an electronic Griffiths phase.}
\end{figure}

The quantum-spin-liquid Mott insulator \etcn\ seems to be less susceptible to x-ray irradiation.
The resistivity varies only little \cite{Sasaki07,Sasaki12c,Sasaki15}, even for large doses, and the magnetic properties also remain unchanged by chemical substitution \cite{Saito18}. The Mott gap does not collapse due to irradiation, but hopping conduction, present even in the pristine crystals \cite{Sedlmeier12,Elsasser12,Sasaki15}, becomes dominant. \v{C}ulo {\it et al.} measured the dc resistivity and Hall effect of several quantum-spin-disordered Mott insulators with rather similar chemical compositions and crystal structures: \etcn, \agcn, and $\kappa$-(BEDT-TTF)$_2$B(CN)$_4$ \cite{Pinteric14,Culo15,Culo19}. While around room temperature the transport properties are mainly determined by the strength of the electron correlations, upon reducing $T$ hopping transport takes over due to inherent disorder. The most disordered compound, \etcn, exhibits the lowest dc resistivity and the highest charge carrier density, {\it i.e.}, in the phase diagram it is located closest to the metal-insulator transition. The least disordered compound, $\kappa$-(BEDT-TTF)$_2$B(CN)$_4$, shows the highest resistivity and the lowest carrier density, {\it i.e.}, lies farthest from the phase transition. The observations are explained within the theory of Mott-Anderson localization as a consequence of disorder-induced localized states within the correlation gap \cite{Pinteric14,Culo19}.

\subsection{Phase diagrams of quarter-filled systems}
\label{sec:COPhaseDiagram}

Two-dimensional organic materials based on the BEDT-TTF molecule hosting electronic ferroelectricity driven by charge order possess extremely rich phase diagrams; their origin lies in the competition between the tendency of electrons to delocalize and strong interactions between charge, spin, and lattice. In the conducting layers, BEDT-TTF molecules form a geometrically frustrated triangular lattice, and each molecule accommodates half a hole so that the conduction bands are quarter-filled. In contrast to dimer Mott insulators at half filling, which are mainly governed by the on-site Coulomb repulsion $U$, here the combined action of $U$ and a sizeable inter-site Coulomb repulsion $V$, analyzed within an extended Hubbard model, is expected to lead to the formation of a charge-ordered ground state \cite{Kino95,Seo00, McKenzie01,Seo04,Kino96}. Indeed, it turns out that $V$ is strong enough so that charge-ordered states are observed in numerous materials based on the BEDT-TTF molecule possessing different types of crystal morphologies, labelled $\alpha$, $\theta$ and $\beta^{\prime\prime}$ \cite{WilliamsBook,Mori98a,Mori99b,Mori99c}, as depicted in Figure~\ref{fig:structure_general}. On the other hand, in materials with the $\kappa$-pattern, in which molecules are paired in dimers (so-called dimerized materials), Mott physics plays an important role (see Chapters~\ref{sec:MottTransition} and \ref{sec:Frustration}), so that no tendency towards charge order has been expected. However, this common view has recently been challenged by the discovery of charge-ordered ferroelectricity in \hgcl\ \cite{Drichko14,Hassan18,Gati18a}. The charge-ordered state is quickly suppressed by a rather low pressure, albeit without any trace of superconductivity \cite{Lohle17,Lohle18}.

\begin{figure}
\centering\includegraphics[clip,width=0.6\columnwidth]{alphaphasediagram.pdf}
\caption{The phase diagram of \aeti\ shows how with varying pressure and temperature different phases and boundaries develop.
The charge-ordered transition $T_{\rm CO}$ is suppressed by pressure as probed by various methods (squares, left axis); concomitantly the charge gap $\Delta$ closes, as extracted from thermally activated conductivity and optical properties (circles, right axis). The charge-ordered insulator and an anomalous metal coexist at high temperatures and low pressures, whereas they share a common boundary with the massless Dirac fermionic state at lower temperature and higher pressure (after \cite{Uykur19}).
\label{fig:alphaphasediagram}}
\end{figure}

The charge-ordered ferroelectric \aeti\ is certainly of utmost importance among the class of two-dimensional molecular materials with quarter filling, thanks to its rich phase diagram with a number of exotic quantum phenomena ranging from electronic ferroelectricity to superconductivity, from non-linear transport to zero-gap semiconductivity and ferrimagnetism characterized by massless Dirac fermions \cite{Dressel94, Tajima06, Hirata16, Uykur19}. Superconductivity with $T_c \approx 7$\,K is reported to occur under 0.2~GPa of uniaxial strain applied along the in-plane stacking $a$-axis \cite{Tajima02}. A stable superconducting phase with $T_c=8$~K can be achieved by tempering the crystals at $70^{\circ}$C for a few days \cite{Schweitzer87}. The most outstanding feature of the \aeti\ phase diagram is that charge-ordered ferroelectricity lies next to a massless Dirac fermion state, as displayed in Figure~\ref{fig:alphaphasediagram}. At ambient pressure, charge order develops below the metal-insulator phase transition at $T_\mathrm{CO} = 135$~K, with inversion symmetry broken and charge disproportionation between neighboring sites. Increasing pressure suppresses charge order and a massless Dirac fermion phase emerges, making $\alpha$-(BEDT-TTF)$_2$I$_3$ the first bulk material in which the impact of electron correlations on the Dirac-point conductance can be studied \cite{Alemany12, Liu16, Uykur19} (see Section~\ref{sec:DiracElectrons}).

\begin{figure}[h]
\centering\includegraphics[clip,width=0.4\columnwidth]{thetaphasediagram.pdf}
\caption{Temperature-dependent resistivity of $\theta$-(BEDT-TTF)$_2$RbZn(SCN)$_4$ measured with different cooling rates. Slow cooling leads to a sharp metal-insulator transition at $T_\mathrm{CO} = 190$~K into the charge-ordered state. Rapid cooling (faster than 1~K per minute) prevents this order, leaving space for charge-glass formation (after \cite{Kagawa13}).
\label{fig:thetaphasediagram}}
\end{figure}

Another prominent example of electronic ferroelectricity due to charge order is $\theta$-(BEDT-TTF)$_2$RbZn(SCN)$_4$. A remarkable feature of this material is that charge order sets in right at the boundary of the charge-glass region in the phase diagram (Figure~\ref{fig:thetaphasediagram}). The competition between the charge-ordered phase and the charge glass is governed by geometrical frustration. Increasing the degree of frustration by replacing the RbZn(SCN)$_4^-$ anions by CsZn(SCN)$_4^-$, which form a nearly isotropic triangular lattice, completely suppresses the formation of charge order \cite{Sato14}. In $\theta$-(BEDT-TTF)$_2$RbZn(SCN)$_4$ charge order develops on a long-range scale only under sufficiently slow cooling conditions (cooling rate slower than 1~K/min). Similarly to \aeti, a charge-ordered state builds up below the metal-insulator transition at $T_\mathrm{CO} = 190$~K, with inversion symmetry broken and a charge imbalance between neighboring sites.
On the other hand, when cooled rapidly, the charge ordering is avoided and a charge glass with slow dynamics is formed below $T_\mathrm{g} = 170$~K \cite{Kagawa13} (see Section~\ref{sec:ChargeGlass}).

Among other weakly dimerized compounds with charge order, the BEDT-TTF family of $\beta^{\prime\prime}$-(BEDT-TTF)$_2$SF$_5$$R$SO$_3$ has attracted a lot of attention because its members constitute all-organic superconductors with large polyfluorinated anions [Figure~\ref{fig:structure_beta}(c)] built with H$\cdots$F hydrogen bonding \cite{Geiser96}. Some years ago, Merino and McKenzie suggested these materials for investigating the interplay between charge order and superconductivity \cite{Merino01, Kaiser10}. However, only recently, a generalized phase diagram has been established showing that both the charge disproportionation and the temperature of the charge-ordering phase transition scale with the intersite repulsion $V$,
\begin{figure}[h]
\centering\includegraphics[clip,width=0.7\columnwidth]{betaphasediagram.pdf}
\caption{Phase diagram of $\beta^{\prime\prime}$-(BEDT-TTF)$_2$SF$_5$$R$SO$_3$: as electronic correlations get weaker, the charge order is gradually suppressed and superconductivity sets in. $\beta^{\prime\prime}$-I, $\beta^{\prime\prime}$-MI, $\beta^{\prime\prime}$-SC and $\beta^{\prime\prime}$-M stand for materials with $R$ equal to CH$_2$, CHFCF$_2$, CH$_2$CF$_2$ and CHF, respectively; cf. Figure~\ref{fig:structure_beta}(c) (adapted from \cite{PustogowPRB19}).
\label{fig:betaphasediagram}}
\end{figure}
as theoretically expected \cite{PustogowRC19, PustogowPRB19, Drichko09, Girlando11u, Girlando12, Girlando14}. Starting at the strongly correlated side, $\beta^{\prime\prime}$-I ($R$ = CH$_2$) is an insulator up to room temperature, followed by $\beta^{\prime\prime}$-MI ($R$ = CHFCF$_2$), which undergoes a metal-to-insulator phase transition at 170\,K; $\beta^{\prime\prime}$-SC ($R$ = CH$_2$CF$_2$) becomes a superconductor at $T_c=5.5$~K, while $\beta^{\prime\prime}$-M ($R$ = CHF) is a metal at all temperatures. Most remarkably, a superconducting state occurs close to the charge-ordered insulating state, as plotted in the phase diagram of Figure \ref{fig:betaphasediagram}; an enhanced charge disproportionation leads to an increase of the superconducting transition temperature $T_c$, indicating a critical role of charge fluctuations in its formation.

\subsection{Frustrated Mott insulator}
\label{sec:QSLMott}

In order to target investigations of the genuine Mott transition without being affected by magnetic order, quantum spin liquids are the systems of choice. As illustrated in Figure~\ref{fig:schematicPhaseDiagrams}(b), the phase boundary is solely determined by the electronic degrees of freedom; hence contributions from magnetic ordering are absent. In this case, the Clausius-Clapeyron relation implies that the phase boundary acquires a positive slope ${\rm d}T_{\rm IM}/{\rm d}p >0$: the thermodynamic ground state is metallic. In other words, the mobile electrons in the Fermi-liquid state have less entropy than those localized in the Mott state. Pustogow {\it et al.} succeeded in composing a genuine phase diagram of Mott insulators by combining optical and transport experiments on three representative spin-liquid compounds \dmit, \agcn\ and \etcn\ \cite{Pustogow18a}.
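The slope argument can be made explicit; along the first-order insulator-metal boundary,
\begin{equation}
\frac{{\rm d}T_{\rm IM}}{{\rm d}p} = \frac{\Delta v}{\Delta s} \;,
\end{equation}
where $\Delta v$ and $\Delta s$ denote the differences in volume and entropy between the insulating and metallic phases. Since the spin-liquid Mott insulator retains its spin entropy down to low temperatures and occupies the larger volume, both differences are positive, yielding the positive slope; in antiferromagnetically ordered Mott insulators, by contrast, the ordered spins carry little entropy and the slope can reverse.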
\begin{figure}[h]
\centering
\includegraphics[width=0.9\columnwidth]{Mott-optics.pdf}
\caption{\label{fig:Mott-optics} Temperature evolution of the optical conductivity of the three quantum-spin-liquid compounds \dmit, \agcn\ and \etcn. The dominant feature that contains all the information on the intrinsic Mott physics is the Mott-Hubbard band centered around 2000~\cm. At low frequencies narrow phonon modes appear on top. The contour plots illustrate the tem\-per\-ature-dependent changes of the Mott-Hubbard band, where the open white symbols denote the maximum and half-maximum positions (cf. Figure~\ref{fig:MIT3} upper right). Note that the band shape and the low-frequency conductivity show distinct behavior for each compound, which is related to the respective position in the phase diagram (following \cite{Pustogow18a}). }
\end{figure}

Figure~\ref{fig:Mott-optics} presents the temperature changes of the frequency-dependent conductivity measured along one representative orientation. While in \dmit\ the low-frequency Mott gap opens upon cooling and spectral weight shifts toward the Mott-Hubbard band, for \agcn\ only little variation is observed below 1000~\cm. Most surprising, however, is the increase of the in-gap absorption in \etcn\ \cite{Kezsmarki06,Elsasser12} that could be assigned to metallic fluctuations. One can imagine that upon approaching the metal-insulator phase boundary, the number and effect of metallic puddles increases, reflecting the phase coexistence close to the first-order phase boundary, as depicted in Figure~\ref{fig:schematicPhaseDiagrams}(b). When normalizing the temperature $T$ and the correlation strength $U$ by the respective bandwidth $W$ as determined from optical experiments, the materials' behavior collapses on a quantitative level, as displayed in Figure~\ref{fig:MIT3}. The quantum Widom line is clearly seen, as well as the back-bending towards the critical endpoint. Below $T_{\rm crit}$, it becomes a first-order phase boundary with a range of coexisting phases, which have been the subject of subsequent studies. Tuning the bandwidth enables us to follow the electron system from the strongly correlated Mott insulator through the range of phase coexistence into the metallic regime even at lowest temperatures.

\begin{figure}
\centering
\includegraphics[width=0.7\columnwidth]{MIT3.pdf}
\caption{\label{fig:MIT3} Quantitative phase diagram of pristine Mott insulators. The temperature $k_BT$ and Coulomb repulsion $U$ are normalized to the bandwidth $W$ extracted from optical spectroscopy; note that the direction of the bottom axis is reversed in order to mimic pressure. Since in these quantum spin liquids magnetic order is suppressed, the large residual entropy causes a pronounced back-bending of the quantum Widom line at low temperatures, leading to metallic fluctuations (semi-transparent blue area) in the Mott state close to the phase boundary. As the effective correlations decrease further, a metallic phase forms (blue area) with Fermi-liquid properties \cite{Yasin11}. The universal phase diagram guided by the quantum Widom line is constructed on the basis of optical and transport data \cite{Pustogow18a} as well as the pressure-dependent transport studies \cite{Furukawa15a,Shimizu16} on \dmit\ (black diamonds), \agcn\ (red circles), and \etcn\ (blue squares). Upon rescaling temperature (right bars) and pressure (top bars), the quantum Widom line is found to be universal for all compounds.
The curvature as well as the $T=W$ and $U=W$ values match well with theoretical calculations \cite{Vucicevic13}. On the right we illustrate how $U$ and $W$ are determined from the optical spectra and the quantum Widom line from the electrical resistivity $\rho(T;p)$ measured as a function of temperature (open symbols) and pressure (solid symbols), indicated by arrows in the main graph.}
\end{figure}

Unfortunately, for the paradigmatic quantum spin liquids, like \etcn, $T_{\rm crit}$ is around 20~K, {\it i.e.} below the temperature range accessible for continuously tunable helium gas pressure cells due to solidification of the pressure medium \cite{Jeftic97}; this prevents a continuous pressure sweep at fixed temperature, as could be applied in the case of V$_2$O$_3$ and \etcl. Nevertheless, temperature-dependent experiments at a large number of pressure values or chemical substitutions make it possible to densely map the region around the phase transition. Figure~\ref{fig:dcPressureAlloy} demonstrates the evolution of the temperature-dependent dc transport of the highly frustrated Mott insulator \etcn\ as the bandwidth is enlarged in two alternative ways: hydrostatic pressure and chemical substitution. The strongly insulating behavior turns metallic, first at low temperatures and eventually in the entire temperature range. The data provide a wealth of information on the insulating as well as on the conducting regime that is discussed in Section \ref{sec:randomnessQSL}. Concerning the insulator-metal transition, however, the temperature sweeps do not really cross the first-order phase boundary because the critical pressure changes only weakly with temperature \cite{Geiser91,Kurosaki05,Furukawa18}; hence no hysteresis is observed.

\begin{figure}
\centering
\includegraphics[width=0.7\columnwidth]{dc_pressure+alloys.pdf}
\caption{\label{fig:dcPressureAlloy} (a)~Temperature dependence of the dc resistivity of \etcn\ measured within the highly conducting $bc$-plane when applying different values of hydrostatic pressure as indicated \cite{Lohle18}. (b)~Temperature-dependent resistivity of \stfcn, where in the inner BEDT-TTF rings sulfur was partially substituted by selenium. The data are normalized to room temperature \cite{Saito20}. }
\end{figure}

The hallmark of a first-order phase transition is the involvement of latent heat, often seen as hysteretic behavior in various physical quantities. Maybe more compelling is the real-space phase coexistence in a sizeable tuning range. In Sec.~\ref{sec:afmMott} we discussed how optical methods allow a mapping of the phase segregation at low temperatures, but common infrared spectroscopy is limited to a resolution of approximately $10~\mu$m, as illustrated in Figure~\ref{fig:Scan}(a). Utilizing scanning near-field microscopy in the infrared spectral range \cite{Knoll99,Hillenbrand00,Keilmann04}, spatial inhomogeneities on the nanometer scale can be visualized, as first demonstrated by Basov and collaborators on vanadium oxides \cite{Qazilbash07,Qazilbash09,Qazilbash11,Huffman18,Stinson18}. In Figure~\ref{fig:alphanearfield} the example of \aeti\ is presented, where cryogenic near-field nanoscopy was first applied to the charge-order phase transition at $T_{\rm CO}=135$~K \cite{PustogowSciAdv18}. At present, however, temperatures below 20~K are not routinely accessible for this method, and only first attempts have been made so far to apply it to the problem under discussion here \cite{Iakutkina20}.
Nevertheless, indirect methods have been employed in order to prove that at this first-order phase transition metallic regions coexist in the insulating matrix. As a matter of fact, the transformation from an insulator to a metal resembles classical percolation phenomena, where a conduction threshold is reached as the metallic filling fraction rises. The statistical problem of percolation has been subject to numerous theoretical and numerical treatments \cite{Kirkpatrick73,StaufferBook,BollobasBook}; the behavior depends on the dimension and the particular type of percolation (bond, site). The electrodynamic properties of mixed media can be well described by effective-medium approximations suggested by Garnett, Bruggeman and others \cite{TorquatoBook,ChoyBook}. A characteristic of the percolation regime is the pronounced divergence of the dielectric constant \cite{Efros76,Bergman78,Hovel10,Hovel11} shortly before the percolation threshold is reached. In order to look at the coexistence regime of the first-order Mott transition from this perspective, comprehensive dielectric measurements have been conducted on \etcn\ as a function of temperature and frequency, while the bandwidth is increased via hydrostatic pressure as well as by chemical substitution of the organic molecules \cite{Pustogow19,Rosslhuber19,Saito20} (cf.\ Figure~\ref{fig:dcPressureAlloy}).

\begin{figure}
\centering
\includegraphics[width=0.8\columnwidth]{percolation1.pdf}
\caption{\label{fig:percolation1} (a) The Mott insulator-to-metal transition of \etcn\ appears as a rapid increase of $\sigma_1(p)$ that smoothens at higher temperatures; above $T_{\rm crit}$ a gradual crossover remains. (b) $\epsilon_1(p)$ exhibits a sharp peak below $T_{\rm crit}$. The results are taken at $f=380$~kHz and plotted on a logarithmic scale. (c,d) Similar behavior is observed for chemical BEDT-STF substitution. (e,f) Fixed-temperature line cuts of hybrid DMFT simulations (see text) as a function of correlation strength $U/W$ and $T/W$~\cite{Vucicevic13,Pustogow18a} resemble the experimental situation in minute detail, including the shift of the transition with temperature. The lack of saturation of $\sigma_1$ as temperature is lowered, which is seen in DMFT modeling, simply reflects the neglect of elastic (impurity) scattering within the metallic phase (outside the phase coexistence region) (from \cite{Pustogow19}).}
\end{figure}

Figure~\ref{fig:percolation1}(a) demonstrates the temperature evolution of the conductivity step as the pressure increases across the insulator-metal transition. The rapid enhancement by six orders of magnitude within a narrow range of less than 100~MPa at $T=10$~K becomes smoother and more gradual as the temperature rises towards and across $T_{\rm crit}$. A rather similar behavior is seen in Figure~\ref{fig:percolation1}(c), where the substitutional dependence of $\sigma_1(T,x)$ is plotted for various $T$. Although the density of $p$- and $x$-dependent data points does not allow one to extract critical exponents, the overall behavior closely resembles the case of the antiferromagnetic Mott insulator \etcl\ discussed in Sec.~\ref{sec:afmMott}, in particular Figure~\ref{fig:CriticalPoint}(a). In the corresponding lower panels of Figure~\ref{fig:percolation1}, the permittivity $\epsilon_1$ is plotted for fixed temperatures as a function of pressure and BEDT-STF substitution, respectively.
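As a minimal sketch of the percolation scenario (the actual electrical-network modeling in \cite{Pustogow19,Rosslhuber19} is more elaborate), the symmetric Bruggeman condition for a metallic volume fraction $f$ with permittivity $\epsilon_{\rm m}$ embedded in an insulating host with $\epsilon_{\rm i}$ determines the effective permittivity $\epsilon_{\rm eff}$ in three dimensions via
\begin{equation}
f \, \frac{\epsilon_{\rm m} - \epsilon_{\rm eff}}{\epsilon_{\rm m} + 2 \epsilon_{\rm eff}} + (1-f) \, \frac{\epsilon_{\rm i} - \epsilon_{\rm eff}}{\epsilon_{\rm i} + 2 \epsilon_{\rm eff}} = 0 \;,
\end{equation}
which yields the characteristic enhancement of $\epsilon_{\rm eff}$ as $f$ approaches the percolation threshold $f_c = 1/3$ from below.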
Around the critical pressure $p\approx 150$~MPa and the critical concentration $x\approx 0.15$, $\epsilon_1$ reaches a maximum that shifts to higher $p$- and $x$-values as $T$ increases, corresponding to the positive slope predicted in Figure~\ref{fig:schematicPhaseDiagrams}(b). Well above the critical temperature $T_{\rm crit}$, the increase in $\epsilon_1$ is minuscule. The experimental observations are well reproduced by DMFT calculations for a single-band Hubbard model [Figure~\ref{fig:percolation1}(e,f)], which are supplemented by an appropriate electrical-network model representing such spatial inhomogeneity utilizing the standard effective-medium approximation \cite{Pustogow19,Rosslhuber19}.

\begin{figure}
\centering
\includegraphics[width=0.9\columnwidth]{3D_eps1+2.pdf}
\caption{\label{fig:3D_eps1} (a)~Temperature-pressure contour plot of the dielectric permittivity of \etcn. The permittivity $\epsilon_{1}(T,p)$ probed at $f=380$~kHz increases up to 600, centered around $p=180$~MPa and below $T=20$~K, close to the first-order Mott transition. (b)~The temperature-substitution plot of the permittivity for \stfcn\ exhibits its maximum of $\epsilon_{1}\approx 2500$ around $x=0.15$. This is ascribed to a range of spatially separated metallic and insulating regions. The projected phase diagram also includes $\rho(T,p)$ data: at $T^{\ast}$, the resistivity deviates from the Fermi-liquid behavior (after \cite{Rosslhuber19,Saito20}). }
\end{figure}

R{\"o}sslhuber {\it et al.} mapped the dielectric catastrophe as a function of $T$ and effective correlations \cite{Rosslhuber19,Saito20,Pustogow19}, as displayed in Figure~\ref{fig:3D_eps1}. The two alternative methods of tuning the bandwidth in the highly frustrated spin-liquid compound \etcn\ yield rather similar features, as far as the overall behavior as well as most of the details are concerned. This provides compelling evidence that in fact the intrinsic physics, consisting of correlation effects and spatial percolation, is probed. Note that the dielectric catastrophe at the Mott transition was first observed and discussed in the context of critically doped semiconductors, such as Si:P \cite{MottBook,Castner75,Rosenbaum83,Hering07,Lohneysen90,Lohneysen00,Lohneysen11}, which constitute a particular blend of percolation, correlations and disorder. It was suggested \cite{Terletska11} that the transition from a quantum spin liquid as the fully frustrated Mott insulator to a Fermi-liquid metal is a much cleaner and well-defined situation. Although disorder might be an issue for the substitutional series, the agreement between both approaches implies that the overall behavior is not severely affected. It would be of interest to apply this dielectric method to intentionally disordered systems using progressive irradiation.

\subsection{Quantum electric dipoles}
\label{sec:quantumelectricdipoles}

Due to the $A_2B$ stoichiometry of most BEDT-TTF salts with monovalent anions $B^-$, the systems are supposed to possess a quarter-filled conduction band; dimerized structures, such as the $\kappa$- or $\lambda$-phases (Figure~\ref{fig:structure_general}), however, lead to half-filled bands. In the latter case, on-site Coulomb repulsion $U$ dominates the electronic interactions, and the compounds serve as prime models for Mott physics, the subject of Chapter~\ref{sec:MottTransition}.
In the former case, however, the inter-site Coulomb interaction $V$ cannot be neglected; the $\alpha$-, $\beta$- and $\theta$-compounds (Figure~\ref{fig:structure_general}) are the most important and heavily studied ones. A general classification of these quasi-two-dimensional organic conductors is given by Hotta's minimal model based on an anisotropic triangular lattice \cite{Hotta03,Seo04,Hotta12}. The main factors that determine the band structure are dimerization and anisotropy, {\it i.e.} geometrical frustration, as illustrated in Figure~\ref{fig:Hotta1}. The inter-molecular Coulomb interaction $V$ governs the degree of geometrical frustration in the limit of strong dimerization; the dimerization is set by the strength of the transfer integral $t_d$ within the molecular dimer. \begin{figure} \centering \includegraphics[width=0.9\columnwidth]{Hotta2.pdf} \caption{\label{fig:Hotta1} Classification of quarter-filled electronic systems in the weak-coupling regime. The $\theta$- ($\alpha$-) and $\beta$-types with weak or no dimerization form a triangular lattice in units of molecules, whereas those with strong dimerization, the $\kappa$- and $\lambda$-types, do so in units of dimers. The distortion of the ideal triangular lattice in terms of unequal bonds is described by the anisotropy (after \cite{Hotta03,Hotta12}). } \end{figure} When dimerization is negligibly small but finite, the charge degrees of freedom are governed by $V$ and the transfer integral $t$ along the nearest-neighbor bonds. In this limit, the system tends toward a charge-ordered insulator, as discussed in Chapter~\ref{sec:ChargeOrder}. Frustration weakens the inter-site repulsion and finally destroys charge order; an exotic metallic state exists with limited charge mobility. The compounds are paramagnetic with no tendency towards magnetic order. As the dimerization $t_d$ develops, the electrons become confined within the dimers (dimer Mott insulator, cf.\ Chapter~\ref{sec:MottTransition}) but remain delocalized within them, providing the possibility that the dielectric degrees of freedom form so-called `quantum electric dipoles', which are subject to fluctuations and correlations depending on $t_d$ and $V$. The electric moments start to compete as frustration weakens $V$, leading to some unusual dielectric properties. Exchange coupling of the spins associated with those dielectric moments may give rise to some nontrivial magnetic behavior, as depicted in Figure~\ref{fig:DSliquid}. \begin{figure} \centering\includegraphics[clip,width=0.35\columnwidth]{DSliquid.pdf} \caption{Phase diagram calculated within an effective dipolar-spin model by Hotta \cite{Hotta10}. $t_B$ and $t_d$ are the interdimer and intradimer hopping integrals, and $V_q$ is the intersite Coulomb interaction. S+L, L+S and L+L denote a charge-ordered spin liquid, a spin-ordered charge liquid, and a charge-spin liquid phase, respectively (after \cite{Hotta10}).} \label{fig:DSliquid} \end{figure} Increasing $t_d$ suppresses the dielectric moments, while the spin degrees of freedom become more important. By weakening the interaction strength and approaching the insulator-metal transition (for instance, by reducing the effect of the on-site repulsion $U/t$), spatial fluctuations of the electrons develop, and nontrivial long-range spin exchange interactions emerge when the geometrical frustration becomes large. The long-range antiferromagnetic order may be destroyed and a quantum spin liquid state appears, as discussed in Chapter~\ref{sec:Frustration}.
Based on the extended Hubbard model with on-site repulsion and nearest-neighbor interaction $V$ that includes intra- as well as inter-dimer terms on a model lattice of the $\kappa$ compounds, Hotta \cite{Hotta10} considered quantum electric dipoles as depicted in Figure~\ref{fig:Hotta3}. In order to describe the observations on \etcn{}, she suggested that at low temperatures \etcn\ falls into the dipolar-spin liquid phase, where both spins and charges remain short-range ordered. The dielectric anomaly that develops below 50~K in most of these charge transfer salts \cite{Pinteric99,Abdel10,Lunkenheimer12,Tomic13,Iguchi13,Pinteric14,Pinteric16} could be explained under the condition that electric dipoles exist when the charge is unbalanced within the dimers. A similar conclusion of ferroelectric charge order (dipolar order) is obtained by mean-field and classical Monte Carlo methods \cite{Naka10}. \begin{figure}[h] \centering \includegraphics[width=0.7\columnwidth]{Hotta3.pdf} \caption{ (a)~In this $\kappa$-type lattice structure the molecules are represented by circles, the shaded area and bold red line represent the dimers ($t_d$, $V_d$). The interactions between the dimers are indicated by the overlap $t_q$ and $t_p$ and by the Coulomb repulsion $V_q$ and $V_p$. (b)~Unpolarized and (c,d) polarized configurations of quantum dipoles, depending on the hierarchy of interactions. Charges avoid neighboring alignment along the bond with strong interaction (according to \cite{Hotta10}). } \label{fig:Hotta3} \end{figure} Very recently, Powell and collaborators performed a comprehensive study on several $\kappa$-(BEDT-TTF)$_2$$X$ salts, combining first-principles density functional calculations with empirical relationships for the Coulomb interactions \cite{Jacko20}, in order to evaluate the model that Hotta \cite{Hotta10,Hotta12} and Naka and Ishihara \cite{Naka10} developed for the coupled dipolar and spin degrees of freedom. The transverse field of the Ising model is governed by the intradimer coupling $t_d$, which is significantly smaller in \hgcl\ and $\kappa$-(d8-BE\-DT\--TTF)$_2$\-Cu\-[N\-(CN)$_{2}$]Br, compared to \etcl\ or \agcn. More important, however, is the more one-dimensional arrangement in the mercury-containing compounds compared to the others, which are in the quasi-two-dimensional limit. We should note, however, that none of these models includes the underlying ionic lattice and the coupling to phonons. \begin{figure} \centering \includegraphics[width=0.7\columnwidth]{Clay2.pdf} \caption{\label{fig:Clay2} (a)~Ground state phase diagrams from self-consistent calculations by Dayal {\it et al.} on a periodic $4 \times 4$ lattice as a function of $t^{\prime}$ and $V=V_x=V_y$, $V^{\prime}=0$, for $U=6$, $\alpha =1.1$, $\beta=0.1$, and $K_{\alpha} = K_{\beta}=2.0$. (b) Same as panel (a), but with $V=V_x=V_y=V^{\prime}$. Here antiferromagnetic (AFM) and paired-electron crystal (PEC) phases are shown together with the Wigner crystal (WC) and spin-gap phases. Lines are guides to the eye. (from \cite{Dayal11,Clay19}). } \end{figure} An interplay of spins with charges and bonds is pursued in the exact diagonalization study by Clay, Mazumdar and collaborators \cite{Li10,Dayal11,Clay12,Clay19} on the extended Hubbard model with both Holstein- and Peierls-type electron-lattice couplings on the anisotropic triangular lattice.
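For orientation, the purely electronic part common to these extended Hubbard models can be written in the generic form (the specific hopping pattern and the electron-lattice terms differ between the cited works):
\[
H = -\sum_{\langle ij\rangle \sigma} t_{ij}\left( c^{\dagger}_{i\sigma} c^{\phantom{\dagger}}_{j\sigma} + {\rm h.c.}\right) + U \sum_{i} n_{i\uparrow} n_{i\downarrow} + \sum_{\langle ij\rangle} V_{ij}\, n_{i} n_{j}\ ,
\]
where the transfer integrals $t_{ij}$ comprise the intradimer ($t_d$) and interdimer terms, $U$ is the on-site repulsion and $V_{ij}$ are the inter-site Coulomb interactions; the Holstein and Peierls couplings of \cite{Li10,Dayal11,Clay12,Clay19} additionally let the molecular site energies and the $t_{ij}$ depend on the lattice degrees of freedom.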
The freezing of bonds and charges simultaneously with a spin-Peierls singlet formation results in a so-called `paired-electron crystal', which exhibits moderate degrees of charge disproportionation and lattice displacement compared to those of the charge-ordered state at larger $V$. The geometrical frustration completely replaces the charge-ordered phases with the novel paired-electron crystal. As shown above, geometrical frustration in $V$ destabilizes charge order against the dimer Mott insulator; however, compared to the pure dimer Mott case, the paired-electron crystal is more significantly stabilized, due to the spin degrees of freedom or to the spontaneous lattice modulation instead of the intrinsic dimerization. Figure~\ref{fig:Clay2} illustrates how electron-electron interactions affect the `paired-electron crystal'. As $U$ increases moderately, the coupling $t^{\prime}$ also increases. The phase diagram depends critically on the form of the nearest-neighbor interaction $V$. With $V_x = V_y= V$ but the diagonal term $V^{\prime}=0$, the Wigner crystal is found for sufficiently strong $V$, along with a narrow region where it coexists with a spin-gapped phase. Since most two-dimensional charge transfer salts have a tendency towards triangular lattices, the assumption $V^{\prime} = 0$ is not realistic. For $V_x =V_y = V^{\prime}$ the Wigner crystal is completely replaced by the paired-electron crystal. Fukuyama {\it et al.} considered the crossover from a quarter-filled system with charge-ordered ground state to the dimer Mott insulator due to strong dimerization \cite{Fukuyama17}. At high energy (in the range of eV, {\it i.e.} optical frequencies) the dimer Mott insulator is stable, whereas at very low energy ($10^{-10}~{\rm eV} \approx 10$~kHz) charge order becomes dominant, leading to extended domains of different charge polarities. As a consequence, domain walls form in the system, giving rise to the dielectric anomaly observed. This aspect is presented in full detail in Section~\ref{sec:COFerroelectricity}. Despite considerable experimental efforts, no spectroscopic evidence has been obtained for sizeable charge disproportionation in any of these $\kappa$-phase quantum spin liquid candidates and antiferromagnets \cite{Sedlmeier12,Pinteric16}. There is a broad consensus that static charge order is less than 1\%. This is supported by the most recent x-ray diffraction measurements of \etcn\ and \agcn{} by Foury-Leylekian {\it et al.} \cite{Foury18,Foury20}. At the same time, the new data provide evidence for the symmetry breaking of the non-polar mean P2$_1/c$ structure, thereby proving the presence of non-equivalent crystallographic sites and allowing the formation of electric dipoles. Interestingly, despite the fact that the structural signatures of symmetry breaking are not weak, the resulting static charge disproportionation is negligibly small and within the error bars. However, fluctuating electric dipoles might be concluded from dynamic charge fluctuations inferred from the charge-sensitive molecular-vibrational modes observed in Raman and infrared spectroscopy \cite{Yakushi15,Nakamura17}, which are broader than those of typical BEDT-TTF compounds; similar conclusions are drawn from NMR experiments. These fluctuating dipoles may show up in the microwave \cite{Poirier12a} and terahertz response \cite{Itoh13}, but the latter has been demonstrated to involve coupled anion-dimer vibrations instead \cite{Dressel16}.
Lastly, these fast charge fluctuations cannot account for the dielectric response in the audio- and radio-frequency range. We discuss this topic in the subsequent Section~\ref{sec:QSL+afm}. \subsection[Ferroelectric signatures in quantum spin liquid and afm states]{Ferroelectric-like signatures in quantum spin liquid and antiferromagnetic states} \label{sec:QSL+afm} An anomalous dielectric peak shows up as a ubiquitous property independent of the nature of the ground state in the spin sector \cite{Abdel10, Pinteric14, Pinteric16, Lazic18, Pinteric18, Lunkenheimer12}: in the spin liquid compounds \etcn\ and \agcn\ (Figure~\ref{fig:kappaQSLdieltemp}), as well as in \etcl\ with antiferromagnetic ground state [Figure~\ref{fig:kappaCldieltemp}(c)]. \begin{figure}[h] \centering\includegraphics[width=0.6\columnwidth]{kappaQSLdieltemp.pdf} \caption{Temperature dependence of the real part of the dielectric function $\varepsilon^{\prime}$ measured with the ac electric field applied ${E}\parallel a^\ast$, {\it i.e.} along the direction perpendicular to the molecular layers, (a) of \etcn\ and (b) of \agcn\ (after \cite{Pinteric18}). \label{fig:kappaQSLdieltemp}} \end{figure} The relaxor-type dielectric response suggests a short-range ferroelectric-like order \cite{Abdel10, Abdel13} and the existence of low-symmetry domains consistent with the average high-symmetry structure, in accord with density functional theory (DFT) calculations and structural refinements \cite{Dressel16,Pinteric16,Lazic18,Foury18,Foury20}. However, both spectroscopic \cite{Sedlmeier12, Pinteric16, Pinteric18,Lazic18} and structural measurements \cite{Foury18, Foury20} consistently demonstrate an equal charge distribution in the quantum spin liquid compounds. Moreover, the persistence of strong quantum fluctuations due to charge or dipolar fluctuations in the kHz-MHz frequency range is unlikely because no Barrett-like behavior of the dielectric constant [cf. Equation~(\ref{eq:Barrettformula})] has been detected. \begin{figure} \centering\includegraphics[width=0.9\columnwidth]{kappaCldieltemp.pdf} \caption{Temperature dependence of the real part of the dielectric function $\varepsilon^{\prime}$ measured with the ac electric field applied ${E}\parallel b$, {\it i.e.} along the direction perpendicular to the molecular layers, of three different single crystals of \etcl. Different behaviors are observed, ranging from (a) Curie-like at the magnetic transition $T_{N} \approx 25$~K suggesting multiferroicity, over (b) Curie-like at 8\,K, to (c) relaxor-like (after \cite{Pinteric18,Lunkenheimer12}). \label{fig:kappaCldieltemp}} \end{figure} Since no appreciable static electric dipoles have been found, the dielectric anomaly in quantum spin liquid compounds has been attributed to the cooperative motion of charged domain walls within a random domain structure \cite{Pinteric14,TomicDressel15,Pinteric18}. In the BEDT-TTF-based quantum spin liquids, this structure originates from the cyanide groups located at the inversion centers bridging the polymeric CuCN/AgCN chains in the anionic layer. In \dmit\ with its quantum spin liquid ground state, disorder occurs due to two equally probable orientations of the Et groups (static disorder) and to internal rotational degrees of freedom of the Me groups (dynamic disorder); both Et and Me groups reside in the non-conducting cations \cite{Lazic18}.
Interestingly, a quantum-spin-liquid state is not established in $\beta^{\prime}$-Me$_4$P[Pd(dmit)$_2$]$_2$ and $\beta^{\prime}$-Me$_4$Sb[Pd(dmit)$_2$]$_2$: here static disorder is absent, but dynamic disorder is still present and causes a relaxor peak in $\varepsilon^{\prime}(T)$ \cite{Abdel13}. In all cases, the disorder-induced domain structure is mapped onto the organic dimers and results in a relaxor dielectric response with glassy signatures. The charged domain walls increasingly contribute to the dielectric constant, and the response time gets longer as the temperature is lowered because screening becomes weak when the number of charge carriers is reduced \cite{Pinteric14,Pinteric16}. The domain structure changes significantly under x-ray irradiation: the relaxor peak shifts to lower temperatures with extended irradiation before a second feature emerges and becomes dominant, which is attributed to the direct response of anion defects \cite{Sasaki15}. The broadened charge-sensitive molecular vibrational modes observed in Raman and infrared spectroscopy \cite{Sedlmeier12, Yakushi15, Nakamura17} also indicate inhomogeneous charge fluctuations; however, these fast fluctuations cannot be invoked to explain the dielectric response in the kHz-MHz frequency range. Similar conclusions on the presence of an inhomogeneous charge distribution are drawn from the dc conductivity behavior showing variable-range hopping, as well as from the broadening of the NMR linewidth \cite{Kawamoto02, Pinteric14, Culo19}. In order to shed light on the microscopic understanding of the observed dielectric anomalies in the kHz-MHz range, Fukuyama {\it et al.} proposed a one-dimensional tight-binding model with on-site and inter-site Coulomb repulsion \cite{Fukuyama17}. The approach reveals that charge fluctuations are possible in the boundary region between the Mott and charge-order insulators. At low frequencies, oscillations occur in the double-well potential corresponding to a small charge disproportionation of opposite charge polarity; importantly, spatially extended domain walls connecting two respective domains are also present; they give rise to the observed dielectric response. The structural results of Foury-Leylekian {\it et al.} support the relevance of this theoretical consideration, as they evidence a symmetry breaking; however, the tiny inter-dimer charge imbalance is hardly above the resolution limit \cite{Foury18,Foury20}. The extension to a larger number of randomly distributed domains, which lift the inversion symmetry in the anionic layers, is straightforward, as identified by the density-functional theory calculations of Lazi{\'c} {\it et al.}; the calculations find the ground state quasi-degenerate in energy and with reduced symmetry \cite{Dressel16,Pinteric16,Lazic18}. On the other hand, a long-range dipolar order has been proposed to occur in $\kappa$-(BE\-DT\--TTF)$_2$Cu[N(CN)$_2$]Cl acting as a driving force for the antiferromagnetic ground state \cite{Lunkenheimer12}. The inferred multiferroicity is based on the Curie-like dielectric peak [Figure~\ref{fig:kappaCldieltemp}(a)], as well as on the observed hysteresis and time-dependent phenomena in the vicinity of the antiferromagnetic transition at $T_{N} \approx 25$~K. Intriguing results obtained on another single crystal of \etcl\ suggest that ferroelectricity is proximate to the magnetic and superconducting phases [Figure~\ref{fig:kappaCldieltemp}(b)] \cite{Pinteric18}.
However, the relaxor-like dielectric peak as depicted in Figure~\ref{fig:kappaCldieltemp}(c) has been observed in the majority of single crystals studied by different groups; most importantly, no symmetry breaking has been detected so far \cite{Matsuura19}. Since there is no disorder in the anionic layers, the tendency towards phase segregation has been invoked in order to support the relevance of the charged-domain-wall motion scenario \cite{Pinteric18}. Thus, further experimental efforts are vital in the search for structural inversion-symmetry breaking in order to clarify the ferroelectric-like signatures in this antiferromagnetic system. \subsection{BEDT-TTF salts} \label{sec:BEDT-TTF} Following the Fabre salts based on the TMTTF (tetramethyl-tetrathiafulvalene) molecule and the Bechgaard salts based on the TMTSF (tetramethyl-tetraselenafulvalene) molecule, BEDT-TTF became the most widely used building block for two-dimensional organic superconductors and molecular quantum materials. Here BEDT-TTF, or sometimes simply ET, stands for bis(ethylenedithio)tetrathiafulvalene (C$_{10}$H$_8$S$_8$) and was first synthesized by Saito {\it et al.} in 1982 \cite{Saito82}; as depicted in Figure~\ref{fig:BEDT-TTF}, the core is still the TTF unit, but now extended by two sulfur atoms and an ethylene group on each side. Owing to the extended size, the molecule is not completely flat anymore; the two terminal groups are slightly twisted, either in the same direction (eclipsed) or in opposite directions (staggered), opening the possibility of intrinsic disorder. In the case of the organic superconductor \etbr, for instance, the crystals exhibit ordering of the ethylene endgroups upon cooling: while at room temperature 70\%\ of the molecules are in the eclipsed configuration, this fraction becomes 29\%\ at $T=100$~K \cite{Strack05}. \begin{figure} \centering\includegraphics[clip,width=0.5\columnwidth]{structure_BEDT-TTF.pdf} \caption{Molecular structure of BEDT-TTF or ET, which stands for bis(ethylenedithio)tetrathiafulvalene. The eight sulfur atoms are indicated by yellow, the carbon atoms by dark grey balls, the hydrogens are depicted by light gray. The molecule is not absolutely flat, but the endgroups are tilted either in eclipsed or staggered fashion, as indicated.} \label{fig:BEDT-TTF} \end{figure} As pointed out by Girlando \cite{Girlando11a}, the symmetry of the BEDT-TTF molecule is often assumed to be D$_{2h}$, implying a completely planar molecule \cite{Kozlov87,Kozlov89,Eldridge95}, in order to reduce the 72 independent vibrational degrees of freedom when calculating the molecular vibrations. The actual symmetry, however, is D$_2$ in the case of a staggered molecule, and C$_{2h}$ in the eclipsed case. As a matter of fact, the neutral molecule acquires a boat conformation with C$_2$ symmetry \cite{Kobayashi86,Demiralp95}, but this aspect is commonly neglected in the ionic crystal. In order to vary the electronic orbitals, some of the sulfur atoms can be substituted. Replacing the four central S by Se, for instance, leads to BEDT-TSF or BETS, {\it i.e.} bis(ethylenedithio)tetraselenafulvalene. Alternatively, the substitution can be performed only on one side, resulting in the asymmetric BEDT-STF, {\it i.e.} bis(ethylenedithio)selenathiafulvalene. This will be utilized in several studies presented in Section~\ref{sec:QSLMott}. Typically the compounds grow as (BEDT-TTF)$_2$$X$, where $X$ stands for a monovalent anion.
The electronic charge of half a hole per BEDT-TTF molecule is distributed over the entire molecule, with the highest density around the central C=C bond, followed by the other two carbon double bonds. In most of the (BEDT-TTF)$_2$$X$ salts, the organic donor molecules are packed more-or-less upright in layers; they are held together by strong in-plane covalent bonds but weak out-of-plane van der Waals forces. The molecular layers alternate along the third direction with sheets of monovalent anions or polymeric networks. The interface between the donor and acceptor layers is determined by a hydrogen-bond network constituted by the terminal ethylene groups. Commonly, the anions are considered to serve as spacers and charge reservoirs, but the role of the donor-anion interaction in stabilizing diverse electronic phases has been emphasized \cite{Pouget18}. \begin{figure} \centering\includegraphics[clip,width=0.9\columnwidth]{structure_general.pdf} \caption{Schematic representation of some of the (BEDT-TTF)$_2$$X$ packing motifs, viewed within the quasi-two-dimensional layer. The patterns labeled $\alpha$-, $\beta$-, $\theta$- and $\kappa$-phase possess different degrees of dimerization. Only the strongest inter-donor interactions are indicated. While in the $\alpha$-, $\beta$- and $\theta$-polymorphs chains of BEDT-TTF molecules can be identified, the lattice of the $\kappa$-phases consists of pairs of molecules, so-called dimers, arranged almost perpendicular to each other. Here an effective triangular lattice of dimeric units can be identified, as indicated by the dashed purple lines (suggested by \cite{Pouget18}).} \label{fig:structure_general} \end{figure} Due to the larger number of orbitals and the extended size compared to the TMTSF molecule, multiple polymorphs can be found for most BEDT-TTF compounds, labeled by Greek letters \cite{WilliamsBook,Mori84,Mori98a,Mori99b,Mori99c}. Figure~\ref{fig:structure_general} displays some of the packing motifs of relevance here. In the case of the $\alpha$-, $\beta$- and $\theta$-phases, the organic molecules are arranged in stacks, while in the $\kappa$-salts pairs of molecules, so-called dimers, are formed that are almost orthogonal to each other. Within the stacks of the $\beta$-phase, neighboring molecules are slightly shifted with respect to each other; for the $\alpha$- and $\theta$-phases they are alternately tilted. As indicated, neighboring stacks are coupled, resulting in two-dimensional, almost isotropic properties. The dimers of the $\kappa$-lattice form an effective triangular lattice, giving rise to frustration, as we will see in Chapter~\ref{sec:Frustration}. \subsubsection{\rm $\alpha$-(BEDT-TTF)$_2$I$_3$} In the case of \aeti, the triclinic crystal structure (space group P$\bar{1}$) consists of alternating insulating I$_3^-$ anion layers and conducting layers of BEDT-TTF$^{0.5+}$ donor molecules, displayed in Figure~\ref{fig:structure_alpha}(a) \cite{Bender84a}. The BEDT-TTF molecules form a herringbone pattern and are organized in a triangular lattice with two types of stacks. At room temperature, stack I is weakly dimerized and composed of crystallographically equivalent molecules A and A$^\prime$ related by an inversion center, while stack II is a uniform chain composed of the distinct molecules B and C, shown in Figure~\ref{fig:structure_alpha}(b). These two types of stacks are interconnected by many S$\cdots$S short contacts that provide the electronic delocalization within the $ab$-layer.
The I$_3^-$ anions form two distinct chains, labeled chain 1 and chain 2, as illustrated in panel (c). Thus the unit cell contains four BEDT-TTF molecules. \begin{figure}[h] \centering\includegraphics[clip,width=1.0\columnwidth]{structure_alpha.pdf} \caption{Crystal structure of \aeti. (a)~The BEDT-TTF molecules are arranged in the $ab$-layers with the molecular axis slightly tilted away from the layer normal. Sheets of I$_3^-$ anions separate these layers in the $c$-direction. (b)~The view along the molecular axis reveals two distinct stacks. Stack I contains the BEDT-TTF molecules A and A$^{\prime}$, which are identical by symmetry. The molecules B and C in stack II are distinct from these. The dihedral angle between BEDT-TTF molecules in neighboring stacks is labeled by $\theta$. (c)~Axonometric view of \aeti\ with the unit cell indicated in green. There are two arrangements of the I$_3^-$ anions, labeled chain 1 and chain 2. \label{fig:structure_alpha}} \end{figure} At high temperatures, the system is a semimetal with small electron and hole pockets at the Fermi surface \cite{Bender84b,Mori84}. There is a slight charge disproportionation (among the B and C molecules, while there is none among A and A$^\prime$) already at ambient conditions, which becomes considerable upon cooling below the charge-ordering phase transition at $T_{\rm CO} = 135$~K, as discussed in Section~\ref{sec:COweaklydimerized}. \subsubsection{\rm $\beta^{\prime\prime}$-(BEDT-TTF)$_2$SF$_5$$R$SO$_3$} The $\beta^{\prime\prime}$-(BEDT-TTF)$_2$SF$_5$$R$SO$_3$ compounds with different groups $R$ are isostructural, crystallizing in the P$\bar{1}$ triclinic system, with two formula units per unit cell \cite{Ward00,Geiser96}. The structure is characterized by layers of BEDT-TTF in the $ab$-crystal plane, separated by the all-organic anions, shown in Figure~\ref{fig:structure_beta}. Within the cation layer, the BEDT-TTF molecules are arranged in tilted stacks, typical of the $\beta^{\prime\prime}$ packing motif \cite{Mori98a}, with the strongest interaction along the crystallographic $b$-axis, {\it i.e.} perpendicular to the stacks. Along the stacking direction $a$, two BEDT-TTF molecules are related by inversion symmetry (A and A$^{\prime}$; B and B$^{\prime}$), and the two pairs in neighboring stacks, AA$^{\prime}$ and BB$^{\prime}$, are crystallographically independent. Figure~\ref{fig:structure_beta}(c) presents four different anions constituting systems that have been the subject of intense investigations, as discussed in Section~\ref{sec:COweaklydimerized}: \begin{figure}[h] \centering\includegraphics[clip,width=0.9\columnwidth]{structure_beta.pdf} \caption{Crystal structure of $\beta^{\prime\prime}$-(BEDT-TTF)$_2$SF$_5$$R$SO$_3$. (a)~The view onto the $bc$-plane illustrates how the BEDT-TTF molecules form layers, separated in the $c$-direction by sheets of the all-organic anions. (b)~Within the $ab$-plane, the BEDT-TTF molecules are arranged in two slightly distinct stacks along the $a$-direction with uniform distance. (c)~The four anions SF$_5$$R$SO$_3^-$ differ by the central entity: $R$ equals CH$_2$, CHFCF$_2$, CH$_2$CF$_2$ and CHF, respectively.
\label{fig:structure_beta}} \end{figure} $\beta^{\prime\prime}$-(BEDT-TTF)$_2$SF$_5$CH$_2$SO$_3$ (denoted $\beta^{\prime\prime}$-I) is a charge-ordered insulator, $\beta^{\prime\prime}$-(BEDT-TTF)$_2$SF$_5$CHFCF$_2$SO$_3$ ($\beta^{\prime\prime}$-MI) undergoes a metal-insulator transition at 180~K, $\beta^{\prime\prime}$-(BEDT-TTF)$_2$SF$_5$CH$_2$CF$_2$SO$_3$ ($\beta^{\prime\prime}$-SC) is a superconductor at $T_c=5$~K driven by charge fluctuations, and $\beta^{\prime\prime}$-(BEDT-TTF)$_2$SF$_5$CHFSO$_3$ ($\beta^{\prime\prime}$-M) remains metallic down to low temperatures. The phase diagram of Figure~\ref{fig:betaphasediagram} summarizes the ground states of this family. At $T=300$~K the crystal structure of $\beta^{\prime\prime}$-I exhibits some degree of disorder in the terminal ethylene groups \cite{Ward00}. No disorder is observed in the structure of $\beta^{\prime\prime}$-SC, which, however, was collected at $T=123$~K. \subsubsection{\rm $\theta$-(BEDT-TTF)$_2$RbZn(SCN)$_4$} $\theta$-(BEDT-TTF)$_2$RbZn(SCN)$_4$ crystallizes in the higher-symmetry orthorhombic structure (space group I222) with four {BEDT-TTF}$^{0.5+}$ molecules and two [RbZn(SCN)$_4]^-$ anions per unit cell, shown in Figure~\ref{fig:structure_theta}. All BEDT-TTF molecules are crystallographically equivalent and stack along the $a$-axis. Since the molecules are strongly tilted with respect to each other, there is a large orbital overlap between neighboring stacks, leading to two-dimensional conductivity in the $ac$-plane \cite{Mori99b}; in the $b$-direction the layers are separated by the anions, as shown in panel~(b). This two-dimensional RbZn(SCN)$_4^-$ network is built from two (SCN)$_2$ chains, connected by a Rb$^+$, in which Zn$^{2+}$ is tetrahedrally coordinated to the SCN$^-$ groups. At high temperatures the system is characterized by degenerate bands and a two-dimensional closed Fermi surface with three-quarter filling, resulting in metallic conductivity within the molecular planes \cite{HMoriPRB98}. Similar to \aeti, charge disproportionation develops in the metallic state, and upon slow cooling a phase transition into the long-range charge-ordered state sets in at $T_{\rm CO} = 190$~K, as discussed in Section~\ref{sec:COweaklydimerized}. Conversely, rapid cooling inhibits order and a charge glass state is formed, as discussed in Section~\ref{sec:ChargeGlass}. \begin{figure}[h] \centering\includegraphics[clip,width=0.7\columnwidth]{structure_theta.pdf} \caption{Crystallographic structure of $\theta$-(BEDT-TTF)$_2$RbZn(SCN)$_4$. (a)~View of the BEDT-TTF molecules from one out of two cation layers in the $ac$-plane, showing their triangular arrangement. They are tilted by a dihedral angle $\theta$ with respect to each other. (b)~Unit cell of \tetrz\ containing four BEDT-TTF molecules and two RbZn(SCN)$_4$ entities.} \label{fig:structure_theta} \end{figure} \subsubsection{\rm $\kappa$-(BEDT-TTF)$_2$$X$ salts} The $\kappa$-(BEDT-TTF)$_2$$X$ salts are characterized by the BEDT-TTF dimers, which are almost orthogonal to each other. Due to the strong dimerization, the $\kappa$-salts are prime examples of half-filled Mott systems. The intradimer coupling $t_d$ provides a fair estimate of the effective on-site Coulomb repulsion $U$ \cite{Saito95,McKenzie98}.
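This estimate follows from the isolated-dimer picture: for two molecular sites coupled by $t_d$ with on-site repulsion $U$, the effective repulsion acting within the half-filled dimer band is
\[
U_{\rm eff} = 2t_d + \frac{U}{2} - \frac{1}{2}\sqrt{U^{2} + 16\,t_d^{2}} \;\longrightarrow\; 2t_d \qquad {\rm for}\ U \gg t_d\ ,
\]
so that in the strongly correlated limit the intradimer transfer integral directly sets the scale of the effective on-site interaction; this is a simple two-site estimate that neglects all interdimer terms.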
Advancing previous calculations \cite{Emge86,Kobayashi87,Campos96,Fortunelli97a,Fortunelli97b,Rahal97,Mori98a, Schlueter02,Nakamura09,Kandpal09}, Scriven and Powell computed the effective Coulomb interaction within the BEDT-TTF dimers by density functional theory (DFT) \cite{Scriven09b}. As usual, the organic molecules are arranged in layers separated by the anion sheets. In the case of \etcl\ and \etbr\ the BEDT-TTF dimers are tilted in alternating directions [Figure~\ref{fig:structure_kappa}(a)]. In each of the two organic layers, related by mirror symmetry, all four BEDT-TTF molecules are equivalent. Hence, \etcl\ crystallizes in the space group Pnma. \begin{figure}[h] \centering\includegraphics[clip,width=0.8\columnwidth]{structure_kappa.pdf} \caption{Crystal structure of three $\kappa$-phase salts: (a,d,g) \etcl, (b,e,h) \etcn\ and (c,f,i) \agcn. The lines mark the unit cell. In panels (d,g), for clarity reasons only one out of two cation and anion layers constituting the unit cell is shown. (d,e,f) View of the BEDT-TTF dimers arranged in anisotropic triangles in the two-dimensional planes. Carbon, sulfur and hydrogen atoms of the BEDT-TTF molecule are colored in dark gray, yellow and light gray, respectively. The interdimer transfer integrals are denoted by $t$ and $t^{\prime}$, and the intradimer transfer integral by $t_\mathrm{d}$. The ratio $t^{\prime}/t$ measures the degree of frustration. (g,h,i) View of the anion network in the two-dimensional planes. Chlorine, copper, silver, carbon and nitrogen are colored in green, red, pink, blue and orange, respectively. Ordered cyanide (CN)$^-$ groups exist in all three systems, while CN$^-$ groups (labeled in black) located at inversion centers are present only in \etcn\ and \agcn\ and are a source of intrinsic disorder. Note that for \etcl\ the anion network reveals a kind of linear bonding scheme, while for \etcn\ and \agcn\ the anion network displays a two-dimensional bonding arrangement.} \label{fig:structure_kappa} \end{figure} The BEDT-TTF layers are separated by polymeric Cu[N(CN)$_2$]Cl$^{-}$ anions along the $b$-direction. Figure~\ref{fig:structure_kappa}(b) and (c) display the extended unit cell of \etcn\ and \agcn, respectively. The structure is commonly solved in the monoclinic space group P2$_1$/c, in which all four BEDT-TTF molecules are equivalent \cite{Geiser91, Hiramatsu17}. However, P2$_1$/c is only the average structure; the exact structure exhibits triclinic symmetry with two non-equivalent crystallographic sites \cite{Foury18,Foury20}. In panels (d) to (f) the projection of one of the layers is shown. Neighboring dimers are rotated by about $90^{\circ}$ with respect to each other. The ratio $t^{\prime}/t$ of the next-nearest-neighbor ($t^{\prime}$) and nearest-neighbor ($t$) coupling between the dimers measures the degree of frustration. For \etcl, $t^{\prime}/t \approx 0.5$ is rather small, while for the spin liquid candidates \etcn\ and \agcn\ the ratio $t^{\prime}/t \approx 0.85$ indicates high frustration on the anisotropic triangular lattice. Using tight-binding analysis, it was demonstrated that the molecular conformation of the ethylene endgroups also has some influence on the electronic structure \cite{Guterding15}. Panels (g) to (i) show the anion networks. In \etcl\ all cyanide (CN) groups positioned between the copper atoms are ordered in a zigzag line along the $a$-axis, so that the anion layer consists of one-dimensional chains [panel (g)].
In contrast, in \etcn\ and \agcn, in addition to the ordered CN groups (so-called chain CN) between the Cu/Ag atoms in the chains along the $b$-axis, there are CN groups (so-called bridging CN) located at the inversion centers. These bridging CN groups connect Cu/Ag atoms along the $c$-axis, so that the anion network extends in two dimensions [panels (h) and (i)]. The triangular coordination of Cu and Ag implies frustration, since each Cu/Ag atom can be linked either to two N and one C atom, or to one N atom and two C atoms, introducing intrinsic disorder. This ambiguity was eliminated in the recently synthesized salt $\kappa$-(BEDT-TTF)$_2$Cu[Au(CN)$_2$]Cl, which is highly frustrated ($t^{\prime}/t = 1.19$) but possesses no disorder in the polymeric anions \cite{Tomeno20}. The properties of \etcl, \etcn\ and \agcn\ are discussed in Chapters~\ref{sec:MottTransition} and \ref{sec:Frustration}, and Section~\ref{sec:QSL+afm}. The crystal structure of \hgcl\ and \hgbr\ corresponds to space group C2/c \cite{Drichko14}. It consists of alternating layers of BEDT-TTF radical cations and anions along the crystallographic $a$-axis, as shown in Figure~\ref{fig:structure_HgBrCl}(a). The anionic layer contains [Hg(SCN)$_2$Cl]$_{\infty}^-$ and [Hg(SCN)$_2$Br]$_{\infty}^-$ chains in the case of \hgcl\ and \hgbr, respectively. It is worth noting that, in contrast to many other BEDT-TTF salts, in the present family, {\it i.e.} in \hgcl\ and the bromine-containing sister compound, the anions are not completely flat but have a sizable width. This implies that the BEDT-TTF layers are separated further than usual. \begin{figure} \centering\includegraphics[width=0.8\columnwidth]{structure_kappaHgBrCl.pdf} \caption{Crystal structure of \hgcl\ and \hgbr. (a)~The molecular layers along the conducting $bc$-plane are separated by the [Hg(SCN)$_2$Cl]$_{\infty}^{-}$ anions and [Hg(SCN)$_2$Br]$_{\infty}^{-}$ in the case of \hgcl\ and \hgbr, respectively. Note that within a dimer the two molecules are displaced along the molecular axis, leading to a reduced transfer integral $t_d$. (b)~Two face-to-face BEDT-TTF molecules form a dimer with intradimer transfer integral $t_d$. The dimers are arranged on an anisotropic triangular lattice with effective transfer integrals $t$ and $t^{\prime}$.} \label{fig:structure_HgBrCl} \end{figure} As illustrated in panel (b), in \hgcl\ as well as in \hgbr\ the dimers form a triangular lattice with rather large frustration $t^{\prime}/t \approx 0.80$ \cite{Drichko14,Gati18a}. The $\kappa$-type packing motif is slightly distorted compared to the previous ones in \etcl\ or \etcn. The large number of S$\cdots$S intermolecular interactions between dimer units leads to a rather strong $V$. For that reason, the interdimer interaction in \hgcl\ and \hgbr\ becomes so important that the system cannot be treated by the simple Hubbard model with a half-filled conduction band. Another distinction is that within the dimer, the BEDT-TTF molecules are slightly shifted with respect to each other, leading to a reduced intradimer transfer integral $t_d$ and thus a smaller $U$. For \hgcl\ and \hgbr\ there is one crystallographically unique BEDT-TTF molecule in the unit cell. In the case of \hgcl\ at room temperature, both ethylene end-groups are disordered, but below $T=100$~K a staggered conformation prevails. Interestingly, the unit cell is slightly larger than in the sister compound \hgbr\ \cite{Aldoshina93}; this trend is just opposite to what is expected from the larger Br ion compared to Cl.
However, it appears impossible to transform the physical properties of \hgbr\ into those of \hgcl, or {\it vice versa}. The two structurally similar compounds display different low-temperature behavior. \hgcl\ undergoes a pronounced metal-insulator transition due to charge ordering at $T_{\rm CO} = 30$\,K, as discussed in Section~\ref{sec:COdimerized}, whereas \hgbr, when cooled below the metal-insulator transition at $T_{\rm CO} = 80$\,K, develops a quantum dipole liquid state with glassy signatures, as discussed in Section~\ref{sec:DipoleLiquid}. \subsection{Other molecular compounds} During the last decades there have been numerous alternative attempts towards organic superconductors, most of them with rather limited success. The molecular conductors based on the anion radicals [$M$(dmit)$_2$] ($M$ = Ni and Pd) synthesized by R. Kato \cite{Kato14} probably constitute the most prominent family, as they exhibit several interesting properties. More recently, H. Mori suggested yet another approach via an H-bonded molecular-unit-based organic conductor with a fused structure of short hydrogen bonds and stacked $\pi$-electron systems. \subsubsection{\rm $\beta^{\prime}$-EtMe$_3$\-Sb[Pd(dmit)$_2$]$_2$} The \dmit\ radical anion salt is based on the metal dithiolene complex Pd(dmit)$_2$ \cite{Kato12a}. The crystal has a layered structure along the $c$-axis with space group C2/c (Figure~\ref{fig:structure_DMIT}). Two Pd(dmit)$_2$ molecules form a dimer with one negative charge, and the dimers are arranged to form an almost isotropic triangular lattice with the ratio $t^{\prime}/t \approx 0.8$. There are two equivalent anion layers consisting of four molecular dimers (two in the central and two in the side layers), where the dimers stack face-to-face along the two diagonal directions $[110]$ and $[1\bar{1}0]$. \begin{figure} \centering\includegraphics[clip,width=0.6\columnwidth]{structure_DMIT.pdf} \caption{Crystal structure of \dmit. (a)~Sketch of the molecule (1,3-dithiole-2-thione-4,5-dithiolate), Pd(dmit)$_2$: yellow, grey and green circles denote sulfur, carbon, and palladium atoms, respectively. (b)~and (c) View of the Pd(dmit)$_2$ dimers in the $ab$-plane, projected along the direction tilted $17^{\circ}$ away from the $c$-axis. The panels show dimers in two neighboring layers, which have ($a+b$) and ($a - b$) stacking directions, respectively. An almost isotropic triangular lattice is denoted by gray lines, the interdimer transfer integrals are labeled by $t_b$ (thick gray) and $t_{+}$ (thin gray), and $t_{-}$ (dashed gray), while the intradimer transfer integral is labeled by $t_d$. (d)~Side view of the extended unit cell. Antimony and hydrogen atoms are denoted by violet and light grey circles. Possible hydrogen bonds between the end S ions of the Pd(dmit)$_2$ molecules and H of the Et (CH$_2$-CH$_3$) or Me (CH$_3$) groups of the cations are indicated by full green lines.} \label{fig:structure_DMIT} \end{figure} There are four monovalent cations EtMe$_3$Sb; each cation consists of three methyl CH$_3$ groups, labeled Me$_3$, and one ethyl CH$_2$-CH$_3$ group, labeled Et, which can occupy one of two equally probable orientations. The latter indicates that the formation of several crystallographic configurations is possible. A quantum spin liquid is suggested as the ground state of \dmit; its properties are discussed in Chapters~\ref{sec:MottTransition} and \ref{sec:Frustration}.
\subsubsection{\rm $\kappa$-H$_3$(Cat\--EDT-TTF)$_2$} In comparison to the previous radical cation (BEDT-TTF) and anion [Pd(dmit)$_2$] salts, \cat\ and its deuterated isotopologue \dcat\ do not contain counter-ion species. \cat\ consists of two crystallographically equivalent catechol-fused ethylenedithio-tetrathiafulvalene (Cat-EDT-TTF) molecules linked by a symmetric anionic [O$\cdots$H$\cdots$O]$^{-}$-type strong hydrogen bond, as shown in Figure~\ref{fig:structure_CAT}(a) \cite{Isono13,Ueda14,Ueda15,Yamamoto16}. The H-bonded hydrogen is located at the central position between the two oxygen atoms, and these H-bonds result in the generation of holes (+0.5) on both Cat-EDT-TTF molecules. The crystal has C2/c symmetry. These conductors enabled the construction of an unprecedented packing structure where the two-dimensional $\pi$-electron conducting layers shown in Figure~\ref{fig:structure_CAT}(c) are connected by the H-bonds [panel (b)]. In these layers there are four kinds of inter-molecular interactions ($b1$, $b2$, $p$, and $q$); among them the strongest interaction is $b1$ from the face-to-face $\pi$-$\pi$ stacking. Two face-to-face Cat-EDT-TTF molecules are paired in a dimer, and the dimers form a two-dimensional triangular lattice in the $bc$-plane. The degree of frustration is measured by the ratio of the couplings \begin{figure} \centering\includegraphics[clip,width=0.9\columnwidth]{structure_CAT.pdf} \caption{Crystal structure of \cat\ and \dcat. (a)~Hydrogen-bonded molecular unit of the $\kappa$-$X$$_3$(Cat\--EDT-TTF)$_2$, where $X$ stands for either a hydrogen or deuterium atom, in the case of \cat\ and \dcat, respectively. Two Cat-EDT-TTF molecules are related by twofold rotational symmetry with respect to the central hydrogen atom, and are thus crystallographically equivalent to each other. (b)~Packing arrangement of the molecular \cat\ and \dcat\ units. Two face-to-face Cat-EDT-TTF molecules, colored by pink and light blue, form a dimer. (c) View of the two-dimensional crystal structure in the layer shown in panel (b). There are four different kinds of inter-molecular interactions in the two-dimensional layer; the corresponding transfer integrals are labeled as $b1$ (face-to-face $\pi$-$\pi$ stacking), $b2$, $p$, and $q$. The dimers are arranged on an anisotropic triangular lattice in the $(b,c)$ plane. The interdimer transfer integrals are denoted by $t$ and $t^{\prime}$, and the intradimer transfer integral by $t_\mathrm{d}$. The ratio $t^{\prime}/t$ measures the degree of frustration. All hydrogen atoms are omitted for simplicity (after \cite{Yamamoto16}).} \label{fig:structure_CAT} \end{figure} between dimers: $t^{\prime}/t \approx 1.25$ indicates a slightly one-dimensional character, unlike the $t^{\prime}/t$ values found in $\kappa$-(BEDT-TTF)$_2$$X$. \cat\ is suggested as a quantum spin liquid candidate with a simultaneously developed quantum disordered state of electric dipoles, while \dcat\ undergoes a metal-insulator transition to a charge-ordered state at $T_{\rm CO} = 185$~K. We discuss their properties in Sections~\ref{sec:propertiesQSL} and \ref{sec:cat}. \subsection{Electronic ferroelectricity} \label{sec:Summary_ferroelectricity} Organic charge transfer salts attract attention because some of them exhibit ferroelectricity of electronic origin that arises from charge order instead of the cooperative displacements of ions observed in conventional ferroelectrics.
The charge ordering is controlled by strong on-site and inter-site Coulomb interactions and is found in a number of weakly dimerized materials with effectively quarter-filled bands. However, ferroelectricity is established only if the van den Brink and Khomskii requirement for the coexistence of non-equivalent sites with different charge density and non-equivalent bonds is fulfilled in the charge-ordered state. The most prominent example is certainly \aeti, for which charge-order-driven ferroelectricity below the metal-insulator phase transition at $T_{\rm CO} = 135$~K is demonstrated unambiguously in numerous experiments including optical and THz-wave second-harmonic generation, x-ray scattering, Raman and infrared vibrational spectroscopy, electron-spin and nuclear magnetic resonance, and polarization switching. The phase transition is of first order, as evidenced by an abrupt onset of the second-harmonic-generation signal, the charge imbalance, and the charge and spin gaps. Observation of near-field images of its spatial evolution corroborates these findings. The dynamic response of ferroelectric domains subjected to ultrafast external stimuli has been extensively studied by time-resolved measurements of electrical conductivity and femtosecond pump-probe spectroscopy, and by now a rather fair understanding has been achieved. New emergent phases, memory effects, and a photo-induced metallic state have been observed; overall, these observations strongly support a purely electronic mechanism of ferroelectricity with only minor involvement of the electron-phonon interaction. At longer time scales, the non-linear response observed by time- and field-dependent transport exhibits two distinct regimes, both of which are in line with domain-wall motion. At low temperatures the behavior involves thermally excited topological defects crossing an electric-field-dependent potential barrier, whereas at temperatures closer to the phase transition the negative differential resistance behavior can be simulated consistently by a two-state model of excited charge carriers with high mobility. The dynamic response in the kHz-MHz frequency range studied by dielectric spectroscopy displays a two-mode response deeper in the charge-ordered state than expected from a Curie-like peak. The anomalous response indicates the presence of disorder arising due to intrinsic heterogeneity, but no consensus has been reached yet on how to explain these observations. A plausible mechanism may involve the motion of two types of domain walls. No doubt, a thorough study of the topology of the domain structure is highly desired in order to reach a consistent understanding of the dielectric response as well as of the low level of switchable polarization. For the related compound, {\it i.e.} the slowly-cooled \tetrz, charge order and ultrafast dynamics have been clearly identified, while the experimental evidence for polar order is not complete and its recognition is mostly based on a Curie-like dielectric peak. From a unifying view of experimental data and current theoretical approaches, it becomes rather clear that the stabilization of charge-order-driven ferroelectricity takes place by a cooperative action between Coulomb repulsion --~both on-site and inter-site~-- and coupled molecular-anion subsystems. There are also examples, such as $\beta^{\prime\prime}$-(BEDT-TTF)$_2$\-SF$_5$CHFCF$_2$SO$_3$, where some degree of charge order exists at all temperatures, but it is not associated with a metal-insulator phase transition, and the van den Brink and Khomskii requirement is not fulfilled.
Glassy phases in the vicinity of charge order have been identified in two charge transfer salts with a frustrated triangular lattice of molecular units. Charge order can be avoided by rapid cooling in \tetrz\ or by replacing Rb by Cs in \tetcz. Slow charge dynamics and glassy freezing are demonstrated by resistance fluctuation spectroscopy: upon reducing the temperature, a broad peak shifts to lower frequency and its linewidth strongly increases. The development of slow dynamics is correlated with the growth of two-dimensional charge clusters observed by x-ray diffuse scattering. Metastability of the charge glass phase in \tetrz\ is demonstrated by the time evolution of the resistivity and of nuclear magnetic resonance spectra at fixed temperature; the obtained data show time-temperature-transformation curves commonly observed for the crystallization of structural and ionic glasses and of metallic alloys. Additional fingerprints of the glassy state --~charge vitrification and non-equilibrium aging phenomena~-- are demonstrated in \tetcz\ by means of resistivity and resistance fluctuation spectroscopy: faster cooling results in a higher glass transition temperature; the time evolution of the resistivity can be described by a stretched exponential function with a relaxation time obeying a gradual slowing-down law; the equilibrium states at high temperatures and the non-equilibrium states below the glass transition follow the same dynamics. Overall, the data show that the interplay of long-range interactions and geometric frustration plays a primary role in the formation of the charge glass; theoretical considerations confirm these observations and suggest \tetrz\ and \tetcz\ as prominent examples of a self-generated Coulomb glass. However, in real materials the lattice degrees of freedom are involved in the creation of the electronic crystal states, both the charge glass and the charge order; the extent to which they influence the crystallization mechanism remains to be clarified. Finally, among those two-dimensional organic materials in which the organic molecules are paired in dimers and organized on an anisotropic triangular lattice, charge ordering presents a prominent topic because a link to the magnetic degrees of freedom was proposed. Until now, charge ordering below the metal-insulator transition at $T_{\rm CO} = 30$~K and a Curie-like non-dispersive dielectric peak have been detected only in \hgcl. There remain some open issues concerning the potentially polar nature of the charge-ordered state. The charge pattern and the space-group symmetry changes have not been identified yet; this fact, together with indications that charge order melts below 15~K, calls for more efforts in x-ray diffraction measurements. The dielectric investigations were conducted only for an electric field applied perpendicular to the molecular layers, where the anionic contribution cannot be avoided; certainly, in-plane measurements are needed. \subsection{Dirac electrons} \label{sec:Summary_Dirac} The Dirac electron state in the two-dimensional organic solid \aeti\ emerges under high pressure, as evidenced by electrical conductivity, Hall effect, specific heat, nuclear magnetic resonance, magnetotransport and optical conductivity measurements. These measurements also indicate that low-mobility massive holes coexist with high-mobility massless Dirac carriers.
Theoretical considerations reveal that with increasing pressure, a topological transition occurs from a charge-ordered insulator to a zero-gap state with a pair of Dirac electrons of finite mass; their existence is characterized by a special structure of the Berry curvature inside the Brillouin zone. Angle-resolved photoemission spectroscopy is certainly the most desirable tool to verify modifications of the energy spectrum associated with the topological transition; unfortunately, the material and parameter range constitutes a real challenge. Here molecular engineering might provide the possibility to enlarge the orbitals towards Dirac electrons at ambient conditions. The outstanding feature of the Dirac state in \aeti\ is that the Dirac cones are fixed at the Fermi energy and are shifted away from the high crystallographic symmetry points in the first Brillouin zone; they are strongly anisotropic and tilted in wavevector-energy space, very much in contrast to graphene with its isotropic Dirac cones at the corners of the first Brillouin zone. Consequently, the low-energy electronic and magnetic response shows significant deviations from theoretical expectations for simple Dirac systems. The electronic correlations, long-range and short-range Coulomb interactions, induce non-uniform cone re-shaping and bandwidth reduction together with an emergent ferrimagnetic spin polarization. The long-range part is responsible for anomalous spin dynamics and excitonic fluctuations in the vicinity of the charge-ordered state. Effects of correlations are also observed in dc resistivity and in optical studies; the interaction among Dirac electrons can be modified by temperature and pressure. Numerical studies around the critical region of the charge ordering and Dirac state show how the system passes through a phase change when increasing the intersite Coulomb interaction, which varies upon changing the external pressure: from the massless Dirac state via a massive Dirac state coexisting with charge order into the charge-ordered state without Dirac cones. The much smaller gap extracted from dc resistivity compared to the one found in optics is a result of the conduction of domain walls between ferroelectric domains with opposite polarization. It remains unclear whether a true zero-gap Dirac state is realized in \aeti; spin-orbit coupling has been suggested as a possible reason why this is not the case. \subsection{Mott metal-insulator phase transition} \label{sec:Summary_Mott} Organic charge-transfer salts turn out to constitute ideal model systems for studying the quantum-critical nature of the Mott transition and for verifying different scenarios occurring in strongly correlated systems. For $\kappa$-(BEDT-TTF)$_2$$X$ compounds a first-order Mott transition is observed up to the critical endpoints between $T_{\rm crit}=20$ and 40~K. Introducing disorder blurs the discontinuity and a smeared first-order transition remains. The bandwidth-controlled Mott criticality involves fluctuations in the electronic and lattice degrees of freedom. A material-independent quantum critical scaling of the dc resistivity, bifurcating into a Fermi liquid metal or Mott insulator, is identified regardless of the ground state; the findings reveal incoherent charge transport in the crossover region at high temperatures, as predicted by dynamical mean-field theory calculations.
Hooke's law of elasticity breaks down close to the critical endpoint due to the coupling of the critical electronic system to the lattice; critical elasticity shows the universal properties of an isostructural solid-solid endpoint with mean-field critical exponents. The coexistence region of insulating and metallic phases below the critical endpoint, which is a result of the first-order Mott transition, is experimentally determined by dc conductivity measurements under varying pressure. Nuclear magnetic resonance, ac susceptibility and spatially resolved magneto-optical spectroscopy studies uncover the regime where antiferromagnetic and superconducting phases spatially coexist, evidencing percolative superconductivity, whereas their competition deep in the Fermi liquid part of the phase diagram is revealed by ultrasonic velocity and attenuation measurements. The phase separation in the critical region of the phase diagram is also supported by the slowing down of the electron dynamics demonstrated by fluctuation spectroscopy. Calculations by dynamical mean-field theory find a first-order phase transition with a coexistence region on both sides limited by spinodal lines that end at the critical point at finite temperatures. In the high-temperature region above the critical point quantum critical transport spreads out, following the quantum Widom line; it separates the more insulating from the more metallic features. The predicted behavior is successfully verified experimentally by combining optical and transport measurements on three highly frustrated materials with no magnetic order: \dmit, \etcn\ and \agcn. The genuine phase diagram of Mott insulators is established: the quantum Widom line, its back-bending towards the critical endpoint and metallic fluctuations close to the first-order phase boundary are found. Visualization of the real-space phase coexistence in the critical region is still limited to about 10~$\mu$m, precluding a direct proof that metallic regions coexist in the insulating background; scanning near-field microscopy at cryogenic temperatures is a desirable tool in future studies. Hence, at present indirect approaches must be employed. Tuning the bandwidth by hydrostatic pressure and chemical substitution enables us to follow the temperature-dependent dc transport and dielectric constant from the strongly correlated Mott insulator via the range of phase coexistence into the metallic regime; no hysteresis is observed. The spatial coexistence of correlated insulating and metallic regions is evidenced by a strongly enhanced dielectric constant when approaching the first-order transition; this behavior uncovers the percolative nature of the first-order Mott transition. The experimental findings are supported by dynamical mean-field theory calculations including spatial inhomogeneities in a hybrid approach. When the system is tuned into the metallic state, coherent electronic charge transport emerges; this is highlighted by the Fermi-liquid behavior, including the quadratic temperature dependence of the dc resistivity and $\omega$-$T$ scaling in the optical conductivity. As a matter of fact, in these organic conductors Fermi-liquid behavior is observed over a wider temperature and frequency range than in any inorganic compound, owing to the fact that the intrinsic energy scales (bandwidth, Fermi energy) are fairly low. The upper limit of the Fermi liquid regime is given by deviations of the scattering rate from the $T^2$- and $\omega^2$-lines.
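The $\omega$-$T$ scaling referred to above follows from the Gurzhi form of the Fermi-liquid scattering rate, $1/\tau(\omega,T) = 1/\tau_{0} + a\left[(\hbar\omega)^{2} + (p\,\pi k_{B}T)^{2}\right]$ with $p=2$, so that spectra taken at different temperatures collapse when plotted against the single variable $\xi^{2} = (\hbar\omega)^{2} + (2\pi k_{B}T)^{2}$. A minimal sketch of this construction follows; all numerical parameters are assumptions chosen for illustration, not fits to the organic data:
\begin{verbatim}
import numpy as np

hbar, kB = 6.582e-16, 8.617e-5        # eV*s, eV/K

def rate(omega, T, tau0_inv=1e12, a=1e18, p=2.0):
    # Gurzhi form of the Fermi-liquid scattering rate (in 1/s);
    # the prefactor a (assumed) converts eV^2 into 1/s.
    return tau0_inv + a*((hbar*omega)**2 + (p*np.pi*kB*T)**2)

omega = 2*np.pi*np.linspace(0.1e12, 3.0e12, 7)   # THz range
for T in (5.0, 10.0, 20.0):                      # temperatures in K
    xi2 = (hbar*omega)**2 + (2*np.pi*kB*T)**2    # scaling variable
    collapsed = np.allclose(rate(omega, T), 1e12 + 1e18*xi2)
    print(f"T = {T:4.1f} K: data fall on the universal line: {collapsed}")
# Deviations of a measured 1/tau from this single line mark the
# upper boundary of the Fermi-liquid regime.
\end{verbatim}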
At low temperatures, in the Fermi-liquid regime, the data exhibit a gradual increase of the effective mass indicating that the electronic correlations become stronger when approaching the Mott transition from the metallic side; the findings are in agreement with the Brinkman-Rice theory and dynamical mean-field calculations. The comprehensive set of optical data evidences the universality of Landau's Fermi liquid concept upon varying the correlation strength. At higher temperatures above the Fermi liquid range, a bad metal regime is found, as commonly observed for strongly correlated materials; the resistivity even exceeds the Ioffe-Regel-Mott limit. The reduction of the low-frequency spectral weight is in agreement with theoretical calculations. The displacement and finally disappearance of the Drude peak is accompanied by a transfer of spectral weight to energies above 1\,eV. The phenomenon was ascribed to strong correlations, but arguments were raised that this might not be the main cause for all correlated organic materials. Effects of disorder on the Mott transition were studied for different interaction and disorder strengths by dynamical mean-field theory; the resulting Anderson-Hubbard phase diagram displays a correlated disordered metallic phase, surrounded by Anderson and Mott insulating phases. With increasing disorder the coexistence region gets smaller; when the disorder becomes large enough (larger than twice the bandwidth), the critical temperature abruptly goes to zero and the coexistence region disappears. In this regime, the effects of interaction and disorder are found to be comparably important for charge localization. Experimentally, x-ray irradiation of the Mott insulator \etcl\ results in a shift of spectral weight from the interband transition to low energies where a Drude-like behavior is present. The result that disorder can make the system more metallic agrees with theoretical considerations; recent calculations find that the random potential broadens the bandwidth and moves the system away from the Mott transition. In addition, it was recently found that x-ray irradiation can induce slow electronic fluctuations caused by a synergistic effect between the Mott boundary and randomness, indicating the formation of an electronic Griffiths phase. On the other hand, \etcn\ is much less susceptible to x-ray irradiation; the resistivity decreases, but even for large doses the effect is not significant. However, effects of inherent disorder on the transport properties are clearly observed for several $\kappa$-(BEDT-TTF)$_2$$X$ quantum-spin-disordered Mott insulators and can be explained within the Mott-Anderson localization theory. While at high temperatures the transport properties are dominated by the correlation strength, upon lowering the temperature variable-range hopping transport takes over; the most disordered material, \etcn, exhibits the lowest dc resistivity and the highest charge-carrier density, indicating that it is located closest to the insulator-metal transition. \subsection{Quantum spin liquid versus magnetic order} \label{sec:Summary_QSL} Quantum spin liquids are quantum disordered ground states of strongly correlated spins, in which the ground state is a superposition of multiple configurations and quantum fluctuations are important enough to prevent conventional magnetic long-range order.
At this point, there are four prominent triangular organic solids which are considered candidates for the realization of a quantum spin liquid state: the charge-transfer salts \etcn, \agcn, \dmit\ and the hydrogen-bonded single-molecular compound \cat. These materials are Mott insulators, in which --~despite the relatively large exchange coupling~-- there is no experimental indication of magnetic ordering: the susceptibility, specific heat and nuclear magnetic resonance show no singular features related to phase transitions. Initially, it was proposed that the key variable is frustration -- spatial anisotropy -- measured by the ratio of the next-nearest-neighbor and nearest-neighbor transfer integrals, which is close to unity for \etcn, \agcn\ and \dmit\ when estimated by the extended H{\"u}ckel method. Recent evidence challenges this idea because calculations by density functional theory yield values closer to 0.8, as well as the more one-dimensional anisotropy value of about 1.3 for \cat. Low-energy excitations are probed by thermal and optical properties: good evidence supporting a spin liquid and gapless spin excitations --~spinons~-- comes from the specific heat, which displays a large linear term for all four candidates. The Wilson ratio is slightly above unity, the value expected for free fermions; this result thus indicates that the same degrees of freedom determine the behavior of both the susceptibility and the specific heat. However, very recent electron-spin resonance measurements on \etcn\ observe a drop of the spin susceptibility, which can only be explained by the opening of a gap due to the formation of spin singlets. Interestingly, a finite residual linear term of the thermal conductivity is only observed in \cat, while it is fully suppressed in \etcn, leading to the suggestion of a gap opening. In \dmit, in contrast to initial findings, most recent reports exclude the magnetic thermal conductivity as well. The absence of a reproducible finite residual thermal conductivity questions the presence of delocalized gapless low-energy excitations in these spin-liquid candidates; spinons may become localized due to the spin-lattice decoupling, or to disorder. On the other hand, the spinon contribution to the optical conductivity is successfully verified in \dmit, in agreement with theory predicting a power-law absorption at low frequencies; similar experiments failed in the case of \etcn\ due to effects of metallic fluctuations in the vicinity of the Mott transition. The legendary 6~K-anomaly in \etcn\ is omnipresent; effects observed in nuclear magnetic resonance, electron and muon spin resonance, in specific heat and thermal conductivity, in thermal expansion and ultrasonic velocity, and in microwave and dielectric response indicate that coupled spin, lattice and charge degrees of freedom are involved. Various scenarios are suggested for the explanation of this anomaly, such as spin-chirality ordering, a spinon-pairing transition, or the formation of an exciton condensate. Large sample-to-sample variations in the size of the 6~K anomaly, as well as in the microwave and dielectric anomalies, are consistent with inhomogeneity and disorder. Theoretical considerations suggest that disorder-induced spin defects may provide a comprehensive explanation of the low-temperature properties of \etcn\ and maybe of other compounds, too. In this disorder-based approach, the 6~K anomaly can be interpreted as the formation of a valence bond solid.
The formation of singlets yields the drop in susceptibility but leaves a sufficient number of defect spins, randomly placed due to disorder also induced by the anion layer. At this point, a comprehensive explanation of the low-temperature anomaly and its possible relation to a spin-liquid state in all quantum spin liquid candidates is still lacking. It is intriguing that randomness appears to support the quantum spin liquid state: a nuclear magnetic resonance experiment on \etcl\ found that the antiferromagnetic ordering disappears when crystals are irradiated by x-rays, while spin-liquid properties emerge. Alloying \etcn\ with BEDT-STF molecules results in no change of the magnetic properties, suggesting that inherent disorder is already present in \etcn. Theoretically developed randomness-induced quantum spin-liquid models show that disorder enhances the cooperative action of quantum fluctuations and triangular frustration. Two inherent sources of randomness are anticipated: the intrinsic randomness is suggested to originate in the charge sector and to act via charge-spin coupling, while the extrinsic quenched randomness is due to disorder in the anion layers. It remains a challenging task to verify the quantum spin liquid state experimentally. Certainly, a crucial open issue is the extent and influence of disorder and inhomogeneity. Whether any of these phenomena are also present in \cat, and whether they are important for the formation of the quantum spin liquid state in organics, remains to be clarified in the future. \subsection{Quantum states of electric and magnetic dipoles} \label{sec:Summary_Dipoles} Although the spin liquid phase is insulating, anomalous charge dynamics in \etcn\ and \agcn\ are suggested for explaining the observed dielectric and low-energy optical responses. Several theoretical approaches have been developed to study the interplay of spins with charges on a frustrated triangular lattice with dimerized sites. The approach based on the extended Hubbard model considers quantum electric dipoles on organic dimers, treating the inter-dimer interactions perturbatively; it results in a phase diagram in which a charge-spin liquid shares a common boundary with a charge-ordered spin liquid and with a spin-ordered charge liquid. An exact diagonalization study including electron-lattice coupling finds a paired-electron crystal phase adjacent to the antiferromagnetic, spin gap and Wigner crystal phases. Despite considerable efforts, no experimental evidence exists for sizeable electric dipoles; a negligibly small charge imbalance is found in vibrational spectroscopy and x-ray diffraction. But it has been shown that the symmetry of the non-polar mean P2$_1$/c structure is broken, {\it i.e.} that non-equivalent crystallographic sites exist. Whereas static electric dipoles are excluded, fluctuating dipoles may be inferred from the unusually broad charge-sensitive molecular Raman and infrared modes. While these fast fluctuations cannot be invoked to explain the anomalous dielectric response in the kHz-MHz range, suggestions were raised that they manifest in the microwave response; the terahertz response, however, was proven to be due to coupled dimer-anion vibrations instead. Since no sizeable static electric dipoles are present, the dielectric anomaly in quantum spin liquids is attributed to the cooperative motion of charged domain walls. The walls are created within a random domain structure induced by quenched disorder originating in the anion layer, as identified by density functional theory calculations.
The domain wall scenario is also supported by an approach based on the one-dimensional tight binding model with on-site and inter-site Coulomb repulsion indicating that charge fluctuations occur in the boundary region between Mott and charge-order insulators. At low frequencies, oscillations occur in the double-well potential corresponding to a small charge disproportionation of opposite polarity; spatially extended domain walls connecting two respective domains may give rise to the observed dielectric response. In \etcl\ an anomalous dielectric response shows up, which coincides with an antiferromagnetic state, with signatures ranging from multiferroicity to ferroelectricity next to superconductivity; a relaxor-like response, however, prevails in the majority of single crystals studied. Further experimental efforts are indispensable in the search for structural inversion-symmetry breaking in order to clarify the ferroelectric-like signatures in this antiferromagnetic system. The low-temperature behavior of the hydrogen-bonded single-molecular compound \cat\ is unique among organics in that it demonstrates a quantum liquid of electric dipoles established simultaneously with a quantum spin liquid. The dielectric constant follows the Barrett behavior revealing strong quantum fluctuations originating in the zero-point motion of hydrogen atoms; density functional theory suggests that proton fluctuations coupled to charges and spins give rise to a quantum liquid of electric and magnetic dipoles. The crucial importance of the fluctuating proton-bond is evidenced in the behavior of the deuterated sister system \dcat, which shows no signatures of liquid phases; instead a charge-ordered state with a spin gap is established. Under strong dc electric fields, a negative differential resistance is observed and attributed to the deuterium dynamics coupled to the electron system. A negative differential resistance behavior is also seen in \cat. This surprising result may be explained by density functional theory, which finds a quasi-degenerate electronic state implying random domains in real space; the domain wall motion may be responsible for the nonlinear effects. Another candidate for a quantum electric dipole liquid is given by \hgbr\ because no evidence for charge ordering is detected. The $\nu_{2}(a_g)$ Raman band broadens on cooling and its shape is well described by Kubo's two-states-jump model assuming charge fluctuations with an exchange frequency $\omega_{\rm ex} \approx 30-40$\cm. A broad feature of non-phonon origin around 40~\cm\ in the $A_{1g}$ Raman channel is taken as a fingerprint of these very fluctuations; arguments are given that the energy of the mode is lower than expected for magnetic excitations. However, a quantum paraelectric behavior is not observed; the dielectric response instead displays features characteristic of glassy dynamics, indicating that the low-frequency Raman mode may be more appropriately interpreted as a Boson peak. The suggestion is supported by the significant non-Debye behavior found in the specific heat measurements; an excess of low-energy vibrational states, whose entropy is significantly larger than what is expected for the pure magnetic entropy, indicates heterogeneity and glassy properties mostly in the charge sector of the liquid state.
But the spins are coupled to the fluctuating charges, as also indicated by a finite linear term observed in the specific heat measurements, consistent with spin-liquid behavior; a spin-glass-like phase develops at lower temperatures, as implied by the spin susceptibility and electron spin resonance measurements. The idea of an exotic quantum liquid state with glassy nature, consisting of entangled fluctuating electric dipoles and spins, is worth verifying in the future. Theoretical considerations of spin-charge coupling suggest possible mechanisms of charge-driven instabilities towards a long-range magnetic state. Initially the charge-ordered \hgcl\ was thought to tend towards a magnetic transition at low temperatures. But recent specific heat, electron spin resonance and nuclear magnetic resonance measurements give strong indications that magnetic order is absent and that defect states dominate the properties at low temperatures. Nevertheless, more efforts in this direction are needed to clarify the ground state in the spin sector and to enable verification of current theoretical models based on dipolar-spin coupling. \subsection{Outlook} \label{sec:outlook} We live in a three-dimensional world, but physics in a smaller number of dimensions reveals a qualitative change in a system's properties. Despite not being atomically thin like graphene, quasi-one- and two-dimensional organic materials have many characteristics linked to their reduced dimensionality. The overlap of molecular orbitals provides the cornerstone of organic metallicity and modifies the way quantum fluctuations compete with long-range order. These compounds have developed into a workbench of real materials for discovering novel aspects of matter such as the Peierls distortion and charge and spin density waves in one-dimensional materials, and electronic ferroelectricity and quantum spin liquids in two dimensions. Unconventional non-phonon-mediated superconductivity was first discovered and studied in low-dimensional organics; the knowledge gained helped a lot in the quest for high-$T_c$ superconductors since their discovery at the end of the eighties. While conducting polymers and organic semiconductors are implemented in a broad range of electronic devices, actual applications of crystalline two-dimensional organic conductors have not surfaced to date, and one should not make any strong claims or serious predictions in this regard. Whether or not these materials eventually find useful applications, they keep expanding our understanding of fundamental science and offer paths to designing other materials with practical use. The family of molecular quantum materials draws its strength from the versatility of the compounds and the selectivity of their physical properties. It allows us to tune the interplay of various degrees of freedom in an unprecedented way, to reach unexplored areas of the phase diagram and to shift the frontier towards new states of matter. In roughly ten years, research in two-dimensional organic solids has made spectacular advances: it is amazing to see how the collaboration between theory and experiment enables us to realize what has been suggested in the past, but also to understand what has been observed years ago. Organic conductors have always been a niche in condensed matter science, yet the general interest is soaring since molecular solids have been recognized as superior model compounds and often prime examples for investigating quantum phenomena.
There is a historical record that supports this and justifies the belief that it will continue to be the case. But the paradigm of successful science is shifting nowadays. Securing the bright future of the field will require a stronger effort in materials synthesis and design. To this end, important advances can be achieved by integrating machine learning applications. Experimentally, the organic materials are attractive because samples can be prepared from low-cost, commercially available components found in nature. Clever synthesis of targeted materials will vitalize the field, together with up-to-date experimental methods centered on imaging, scattering and spectroscopy tools with increasing spatial and temporal resolution, including instrumentation for the individual investigator as well as scientific user facilities. Reaching this goal equally requires the use of state-of-the-art computational facilities and a synergy of numerical calculations and microscopic theory. Only time will tell, but the potential for the continued growth of the field is here. We would be gratified if this paper turns out to be useful to this end and succeeds in driving the enthusiasm of coming generations in the physical and chemical sciences.
\section{Introduction}\label{intro} Splines are known as piecewise polynomial functions defined on the faces of polyhedral complexes with a smoothness condition. They control the curvature of a surface in engineering and industry. They also appear in numerical analysis, optimization theory, computer-aided design and modeling, and solutions of differential equations. The notion of a generalized spline was introduced by Gilbert, Viel, and Tymoczko \cite{GPT}. As a generalization of classical splines, generalized splines are defined on edge labeled graphs over arbitrary rings instead of polyhedral complexes over polynomial rings. The set of all generalized splines on an edge labeled graph over a base ring $R$ has a ring and an $R$-module structure~\cite{GPT}. The module structure of $R_{(G,\alpha)}$ has been studied chiefly in terms of freeness, minimum generating sets, and bases~\cite{HMR, BT, BHKR, PSTW, Alt3, AD, AMT, DM, RS}. Furthermore, generalized splines have also been viewed in terms of homological algebraic methods and with combinatorial, graph-theoretic approaches~\cite{Dip, And, Alt5}. In this paper, we focus on the following question about generalized spline modules over greatest common divisor (GCD) domains. A GCD domain $R$ is an integral domain such that any two elements of $R$ have a greatest common divisor. \begin{question} When the base ring $R$ is a GCD domain, under what conditions does a given set of generalized splines form an $R$-module basis for generalized spline modules? \label{1que1} \end{question} Question~\ref{1que1} has been investigated in many papers using determinant-based methods for particular types of graphs and base rings. Gjoni studied generalized splines over the integers and gave a basis criterion on cycles~\cite{Gjo}. A similar technique is used for generalized splines over the integers on diamond graphs~\cite{Mah,Bla}. Alt{\i}nok and Sar{\i}o\u{g}lan extended these results to GCD domains for cycles, diamond graphs, and trees~\cite{Alt2}. Calta and Rose presented a basis criterion for $R_{(G,\alpha)}$ on arbitrary graphs over GCD domains under certain conditions on the edge labels~\cite{RC}. This paper aims to present a determinantal basis condition for generalized spline modules on arbitrary graphs over GCD domains. The final result of the paper, Theorem~\ref{5thm1}, was stated as a conjecture in two papers~\cite{Alt2, RC}. We fill this gap in the literature by proving it, thereby settling the problem. In Section~\ref{pre}, we give fundamental definitions and fix notations. We support these new notations with an example. In Section~\ref{dm}, we explain the determinantal methods. We prove Theorem~\ref{3thm1} for arbitrary graphs, which was stated for special types of graphs in previous works. In Section~\ref{bcocg}, we focus on generalized splines on complete graphs over GCD domains and introduce an algorithm that produces special generalized splines. We illustrate the algorithm with an example. We then present a basis condition that answers Question~\ref{1que1} for complete graphs. In Section~\ref{bcoag}, we extend our outcomes to arbitrary graphs. We first define the completion of an edge labeled graph, then clarify Question~\ref{1que1} for arbitrary graphs. \section{Preliminaries}\label{pre} We begin with the definition of a generalized spline. \begin{defn} Given a finite graph $G=(V, E)$ with $n$ vertices and a commutative ring $R$ with identity, an edge labeling on $G$ is defined as a function $\alpha: E \to I(R)$ where $I(R)$ is the set of ideals of $R$.
A generalized spline on $(G,\alpha)$ is a vertex labeling $F = \big( F(v_1) , F(v_2) , \ldots , F(v_n) \big) \in R^n$ such that for each edge $uv$, the difference $F(u) - F(v) \in \alpha(uv)$. \end{defn} In order to specify the graph itself, we use $V(G)$ and $E(G)$ for the vertex set and edge set of $G$. Let $R_{(G,\alpha)}$ denote the set of all generalized splines on $(G,\alpha)$ with base ring $R$. Then, $R_{(G,\alpha)}$ has a ring and an $R$-module structure by componentwise addition and multiplication by the elements of $R$. From now on, we refer to generalized splines as splines. We use principal ideals for edge labels and denote each edge label by the generator of the ideal. We use the notations $e_j$ or $uv$ for an edge of $G$, and symbolize the corresponding edge label by $\alpha(e_j) = l_j$ and $\alpha(uv) = l_{uv}$. We also refer to generators of edge labels as edge labels. Figure~\ref{fig1} illustrates an edge labeled diamond graph with base ring $\mathbb{Z}$. \begin{figure}[H] \begin{center} \begin{tikzpicture} \begin{scope}[every node/.style={circle,thick,draw}] \node (A) at (1.5,3) {$v_1$}; \node (B) at (1.5,0) {$v_2$}; \node (C) at (3,1.5) {$v_3$}; \node (D) at (0,1.5) {$v_4$}; \end{scope} \begin{scope}[>={Stealth[black]}, every node/.style={fill=white,circle}, every edge/.style={draw=black,very thick}] \path (A) edge node {$5$} (B); \path (A) edge node {$4$} (C); \path (A) edge node {$6$} (D); \path (B) edge node {$2$} (C); \path (B) edge node {$9$} (D); \end{scope} \end{tikzpicture} \caption{Edge labeled diamond graph} \label{fig1} \end{center} \end{figure} A spline on Figure~\ref{fig1} can be given by $F = (2,32,34,50)$. This paper focuses on splines over GCD domains. We shorten the notations $\gcd(a,b)$ and $\lcm(a,b)$ to $(a,b)$ and $[a,b]$. We denote the greatest common divisor and least common multiple of the elements of a set $A$ by $\big( A \big)$ and $\big[ A \big]$. Throughout the paper, we assume that $R$ is a GCD domain unless otherwise stated. Given non-empty subsets $A_1 , A_2 , \ldots , A_k$ of a GCD domain, we define the product by \begin{displaymath} A_1 \cdot A_2 \cdots A_k = \{ a_1 \cdot a_2 \cdots a_k \text{ $\vert$ } a_i \in A_i, 1 \leq i \leq k\}. \end{displaymath} The following lemma can be proved elementwise, using standard properties of greatest common divisors and least common multiples. \begin{lemma} Let $A_1 , A_2 , \ldots , A_k$ be non-empty subsets of a GCD domain. Then, \begin{enumerate}[label=\emph{(\alph*)}] \item $\big( A_1 \cdot A_2 \cdots A_k \big) = \big( A_1 \big) \cdot \big( A_2 \big) \cdots \big( A_k \big)$. \item $\big[ A_1 \cdot A_2 \cdots A_k \big] = \big[ A_1 \big] \cdot \big[ A_2 \big] \cdots \big[ A_k \big]$. \end{enumerate} \label{2lem1} \end{lemma} A special type of splines, called flow-up class, is defined as follows. \begin{defn} Given an edge labeled graph $(G,\alpha)$, an $i$-th flow-up class is defined as a spline $F^{(i)} \in R_{(G,\alpha)}$ such that $F^{(i)} (v_j) = 0$ for $j < i$ and $F^{(i)} (v_i) \neq 0$. \end{defn} For instance, flow-up classes in Figure~\ref{fig1} can be given by $F^{(1)} = (1,1,1,1)$, $F^{(2)} = (0,30,0,48)$, $F^{(3)} = (0,0,8,0)$ and $F^{(4)} = (0,0,0,36)$. Let $(G,\alpha)$ be an edge labeled graph and $v_i , v_j \in V(G)$. A $v_j$-trail of $v_i$, denoted by $\textbf{p} ^{(v_i,v_j)}$, is a sequence of vertices and edges starting at $v_i$ and ending at $v_j$ such that no edge is repeated. We represent a trail by the sequence of the edge labels on it.
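For instance, in Figure~\ref{fig1} the two $v_4$-trails of $v_3$ that pass through a single intermediate vertex are $\textbf{p}_1 ^{(v_3 , v_4)} = <4 , 6>$ (through $v_1$) and $\textbf{p}_2 ^{(v_3 , v_4)} = <2 , 9>$ (through $v_2$).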
The length of a trail, denoted by $\big\vert \textbf{p} ^{(v_i,v_j)} \big\vert$, is the number of edges it contains. We symbolize the greatest common divisor of edge labels on $\textbf{p} ^{(v_i , v_j)}$ by $\big( \textbf{p} ^{(v_i , v_j)} \big)$. Given an edge labeled graph $(G,\alpha)$, fix a vertex $v_i$ with $i > 1$ and let the vertices $v_j$ with $j < i$ be labeled by zero. In this case, $\textbf{p} ^{(v_i,v_j)}$ is called a zero trail of $v_i$. Under this setting, note that each zero trail of $v_i$ corresponds to a $v_j$-trail of $v_i$ with $j < i$. An arbitrary zero trail of $v_i$ is denoted by $\textbf{p} ^{(v_i , 0)}$. A zero trail with length $1$ is called a zero edge. Let the zero trails of $v_i$ be given by the set $\Big\{ \textbf{p}_t ^{(v_i , 0)} \text{ $\vert$ } t=1,2,\ldots,k \Big\}$. We define the element $\mathscr{L}_i \in R$ as \begin{displaymath} \mathscr{L}_i = \Big[ \big( \textbf{p}_1 ^{(v_i , 0)} \big) , \big( \textbf{p}_2 ^{(v_i , 0)} \big) , \ldots , \big( \textbf{p}_k ^{(v_i , 0)} \big) \Big] \end{displaymath} and we set $\mathscr{L}_1 = 1$. Over principal ideal domains, there exist flow-up classes $F^{(i)} \in R_{(G,\alpha)}$ for each $i$ with $F^{(i)} (v_i) = \mathscr{L}_i$, and such flow-up classes form an $R$-module basis for $R_{(G,\alpha)}$~\cite{Alt1}. However, such flow-up classes may not exist if $R$ is not a principal ideal domain. Our first observation on trails is given below. \begin{theorem} Let $v_i , v_j \in V(G)$ and $\textbf{\emph{p}} ^{(v_i,v_j)}$ be a $v_j$-trail of $v_i$. If $F \in R_{(G,\alpha)}$, then $\big( \textbf{\emph{p}} ^{(v_i,v_j)} \big)$ divides $F(v_i) - F(v_j)$. \label{2thm1} \end{theorem} \begin{proof} Assume that $\textbf{p} ^{(v_i,v_j)} = < l_{v_i u_1} , l_{u_1 u_2} , \ldots , l_{u_{k-1} u_k} , l_{u_k v_j} >$. Since $F \in R_{(G,\alpha)}$, we have \begin{align*} F(v_i) - F(u_1) &= r_1 l_{v_i u_1} \\ F(u_1) - F(u_2) &= r_2 l_{u_1 u_2} \\ &\vdots \\ F(u_{k-1}) - F(u_k) &= r_k l_{u_{k-1} u_k} \\ F(u_k) - F(v_j) &= r_{k+1} l_{u_k v_j} \end{align*} for some $r_1 , r_2 , \ldots , r_{k+1} \in R$. Hence, we write \begin{displaymath} F(v_i) - F(v_j) = r_1 l_{v_i u_1} + r_2 l_{u_1 u_2} + \ldots + r_k l_{u_{k-1} u_k} + r_{k+1} l_{u_k v_j}. \end{displaymath} The greatest common divisor $\big( \textbf{p} ^{(v_i,v_j)} \big)$ divides each summand, and thus, it divides $F(v_i) - F(v_j)$. \end{proof} We can simplify the computation of $\mathscr{L}_i$ as below. \begin{remark} Let $\textbf{p}_j ^{(v_i , 0)}$ and $\textbf{p}_k ^{(v_i , 0)}$ be two zero trails of $v_i$ with $\textbf{p}_j ^{(v_i , 0)} \subset \textbf{p}_k ^{(v_i , 0)}$. In this case, it is sufficient to consider just $\textbf{p}_j ^{(v_i , 0)}$ to compute $\mathscr{L}_i$ since $\big( \textbf{p}_k ^{(v_i , 0)} \big)$ divides $\big( \textbf{p}_j ^{(v_i , 0)} \big)$. Therefore, we consider only zero trails of $v_i$ that do not contain any other. \label{2rem1} \end{remark}
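As a worked illustration, consider again the diamond graph of Figure~\ref{fig1}. The zero trails of $v_2$ that do not contain any other are $<5>$, $<2,4>$ and $<9,6>$, so that
\begin{displaymath}
\mathscr{L}_2 = \Big[ 5 , (2,4) , (9,6) \Big] = \big[ 5 , 2 , 3 \big] = 30 ,
\end{displaymath}
which agrees with the second component of the flow-up class $F^{(2)} = (0,30,0,48)$ above. Similarly, $\mathscr{L}_3 = [4,2] = 4$ and $\mathscr{L}_4 = [6,9] = 18$; the values $F^{(3)}(v_3) = 8$ and $F^{(4)}(v_4) = 36$ listed above are multiples of these.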
Let $\textbf{p}_t ^{(v_i , 0)} = < l_{t_1}, l_{t_2}, \ldots, l_{t_s} >$ be a zero trail of $v_i$ with $s \neq 1$. We write $l_{t_j} = \big( \textbf{p}_t ^{(v_i , 0)} \big) \cdot {l_{t_j} ^{(t)}} '$ for $1 \leq j \leq s$ where ${l_{t_j} ^{(t)}}' \in R$ such that $\Big( {l_{t_1} ^{(t)}}' , {l_{t_2} ^{(t)}}' , \ldots, {l_{t_s} ^{(t)}}' \Big) = 1$ and use the notations \begin{align*} D^i _t &= \Big\{ {l_{t_1} ^{(t)}}' , {l_{t_2} ^{(t)}}' , \ldots, {l_{t_s} ^{(t)}}' \Big\} \text{ and} \\ \mathfrak{D}_i &= \{ D^i _1 , D^i _2, \ldots , D^i _{k_i} \} \end{align*} where $k_i$ is the number of zero trails of $v_i$ with length greater than $1$. Here $\big( D^i _t \big) = 1$ for all $t$, and hence, $\big( \mathfrak{D}_i \big) = \Big( \big( D^i _1 \big) , \big( D^i _2 \big) , \ldots , \big( D^i _{k_i} \big) \Big) = 1$ for all $2 \leq i \leq n-1$. We define the following set, similar to the Cartesian product, whose elements are subsets instead of tuples: \begin{displaymath} \bigtimes D^i = \Big\{ \big\{ {l_{j_1} ^{(1)}}' , {l_{j_2} ^{(2)}}' , \ldots, {l_{j_{k_i}} ^{(k_i)}}' \big\} \text{ $\vert$ } {l_{j_t} ^{(t)}}' \in D^i _t , \squad 1 \leq t \leq k_i \Big\} \end{displaymath} and symbolize an arbitrary element of $\bigtimes D^i$ by $a ^{(i)}$. \begin{remark} When $i = n$, all vertices except $v_i$ are labeled by zero and each zero trail of $v_i$ contains a zero edge. In this case, we only consider zero edges by Remark~\ref{2rem1} and hence, $\mathfrak{D}_n = \emptyset$. \label{2rem2} \end{remark} For each element $a ^{(i)} = \Big\{ {l_{j_1} ^{(1)}}' , {l_{j_2} ^{(2)}}' , \ldots, {l_{j_{k_i}} ^{(k_i)}}' \Big\} \in \bigtimes D^i$, we say that the edge label $l_{j_s}$ lies in $a ^{(i)}$ for $1 \leq s \leq k_i$. The set of all edge labels that lie in $a ^{(i)}$ is denoted by $l(a ^{(i)})$. By construction, $l(a ^{(i)})$ contains an edge label from each zero trail $\textbf{p} ^{(v_i , 0)}$ with $\big\vert \textbf{p} ^{(v_i , 0)} \big\vert > 1$. We use the notation $\prod a ^{(i)}$ for the product of the elements of $a ^{(i)}$. A subgraph $H$ of $G$ is a graph such that $V(H) \subset V(G)$ and $E(H) \subset E(G)$. For each element $a ^{(i)} \in \bigtimes D^i$, we define the subgraph $H_{a ^{(i)}} \subset G$ with edge set $E(H_{a ^{(i)}}) = \{ e_j \text{ $\vert$ } l_j \in l(a ^{(i)}) \}$. Note that $v_i v_j \not\in E(H_{a ^{(i)}})$ for $j<i$ by the definition of $a ^{(i)}$, since $< l_{v_i v_j} >$ is a zero edge of $v_i$. \begin{example} Consider the edge labeled $K_4$ in Figure~\ref{fig2}.
\begin{figure}[!htb] \centering \begin{minipage}{.5\textwidth} \centering \begin{tikzpicture} \begin{scope}[every node/.style={circle,thick,draw}] \node (A) at (2,3.5) {$v_1$}; \node (B) at (0,0) {$v_2$}; \node (C) at (4,0) {$v_3$}; \node (D) at (2,1.5) {$v_4$}; \end{scope} \begin{scope}[>={Stealth[black]}, every node/.style={fill=white,circle}, every edge/.style={draw=black,very thick}] \path (A) edge[bend right=15] node {$l_1$} (B); \path (A) edge[bend left=15] node {$l_3$} (C); \path (A) edge node {$l_4$} (D); \path (B) edge node {$l_2$} (C); \path (B) edge node {$l_5$} (D); \path (C) edge node {$l_6$} (D); \end{scope} \end{tikzpicture} \caption{\small{Edge labeled $K_4$}} \label{fig2} \end{minipage}% \begin{minipage}{.5\textwidth} \centering \begin{tikzpicture} \begin{scope}[every node/.style={circle,thick,draw}] \node (A) at (2,3.5) {$v_1$}; \node (B) at (0,0) {$v_2$}; \node (C) at (4,0) {$v_3$}; \node (D) at (2,1.5) {$v_4$}; \end{scope} \begin{scope}[>={Stealth[black]}, every node/.style={fill=white,circle}, every edge/.style={draw=black,very thick}] \path (A) edge node {$l_4$} (D); \path (B) edge node {$l_2$} (C); \path (C) edge node {$l_6$} (D); \end{scope} \end{tikzpicture} \caption{\small{The subgraph $H_a$}} \label{fig3} \end{minipage} \end{figure} \noindent The zero trails of $v_2$ are determined as \squad $\textbf{p} ^{(v_2 , 0)} _1 = <l_1>$, \squad $\textbf{p} ^{(v_2 , 0)} _2 = <l_2 , l_3>$, \squad $\textbf{p} ^{(v_2 , 0)} _3 = <l_5 , l_4>$, \squad $\textbf{p} ^{(v_2 , 0)} _4 = <l_2 , l_6 , l_4>$ \squad and \squad $\textbf{p} ^{(v_2 , 0)} _5 = <l_5 , l_6 , l_3>$. Thus, \begin{displaymath} \mathfrak{D}_2 = \Big\{ \{ {l_2 ^{(2)}} ' , {l_3 ^{(2)}} ' \}, \quad \{ {l_5 ^{(3)}} ' , {l_4 ^{(3)}} ' \}, \quad \{ {l_2 ^{(4)}} ' , {l_6 ^{(4)}} ' , {l_4 ^{(4)}} ' \}, \quad \{ {l_5 ^{(5)}} ' , {l_6 ^{(5)}} ' , {l_3 ^{(5)}} ' \} \Big\}. \end{displaymath} Here $a ^{(2)} = \Big\{ {l_2 ^{(2)}} ' , {l_4 ^{(3)}} ' , {l_6 ^{(4)}} ' , {l_6 ^{(5)}} ' \Big\} \in \bigtimes D^2$ and the subgraph $H_{a ^{(2)}} \subset K_4$ is given in Figure~\ref{fig3}. Moreover, $l (a ^{(2)}) = \{ l_2 , l_4 , l_6 \}$ and $\prod a ^{(2)} = {l_2 ^{(2)}} ' \cdot {l_4 ^{(3)}} ' \cdot {l_6 ^{(4)}} ' \cdot {l_6 ^{(5)}} '$. \label{2ex1} \end{example} \begin{remark} The subgraph $H_{a ^{(i)}}$ contains at least one edge from each zero trail $\textbf{p} ^{(v_i , 0)}$ with $\big\vert \textbf{p} ^{(v_i , 0)} \big\vert > 1$ by definition of the element $a ^{(i)}$. \label{2rem3} \end{remark} As a result of Lemma~\ref{2lem1}, we present the following proposition. \begin{proposition} If $\mathbb{A}$ is the set of products $\prod a ^{(2)} \cdot \prod a ^{(3)} \cdots \prod a ^{(n-1)}$ where $a ^{(i)} \in \bigtimes D^i$ are arbitrary elements for $2 \leq i \leq n-1$, then we have \begin{center} $\big( \mathbb{A} \big) = \Big( \big( \mathfrak{D}_2 \big) , \big( \mathfrak{D}_3 \big) , \ldots , \big( \mathfrak{D}_{n-1} \big) \Big) = 1$. \end{center} \label{2prop1} \end{proposition} Let $a_1 ^{(i)}, a_2 ^{(i)} \in \bigtimes D^i$. We say that $a_1 ^{(i)} \subset a_2 ^{(i)}$ if $l( a_1 ^{(i)}) \subset l(a_2 ^{(i)})$. If there is no element $a_2 ^{(i)}$ with $a_2 ^{(i)} \subsetneqq a_1 ^{(i)}$, we call $a_1 ^{(i)}$ a minimal element. When $a ^{(i)}$ is minimal, deleting an arbitrary edge from $H_{a ^{(i)}}$ yields a zero trail on $(G,\alpha)$ none of whose edges lies in the remaining part of $a ^{(i)}$.
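For instance, the element $a ^{(2)}$ of Example~\ref{2ex1} with $l (a ^{(2)}) = \{ l_2 , l_4 , l_6 \}$ is minimal: after deleting the edge labeled $l_6$ from $H_{a ^{(2)}}$, the zero trail $\textbf{p} ^{(v_2 , 0)} _5 = <l_5 , l_6 , l_3>$ contains no edge that remains in $a ^{(2)}$, and the zero trails $<l_2 , l_3>$ and $<l_5 , l_4>$ play the same role when $l_2$ or $l_4$ is deleted.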
The minimal elements of $\bigtimes D^2$ in Figure~\ref{fig2} are listed in terms of the edge labels they contain: \squad $l (a_1 ^{(2)}) = \{ l_2 , l_5 \}$, \squad $l (a_2 ^{(2)}) = \{ l_3 , l_4 \}$, \squad $l (a_3 ^{(2)}) = \{ l_2 , l_4 , l_6 \}$ \squad and \squad $l (a_4 ^{(2)}) = \{ l_3 , l_5 , l_6 \}$. The following proposition plays a crucial role in the proof of the main results of the paper. \begin{proposition} Let $a ^{(i)} \in \bigtimes D^i$ and ${l_{t_j} ^{(t)}}' \in a ^{(i)}$ be the component that comes from the zero trail $\textbf{\emph{p}}_t ^{(v_i , 0)} = < l_{t_1}, l_{t_2}, \ldots, l_{t_s} >$ of $v_i$. Then, the edge label $l_{t_j}$ divides $\prod a ^{(i)} \cdot \mathscr{L}_i$. \label{2prop2} \end{proposition} \begin{proof} Recall that $l_{t_j} = \big( \textbf{p}_t ^{(v_i , 0)} \big) \cdot {l_{t_j} ^{(t)}} '$. Here ${l_{t_j} ^{(t)}}'$ divides the product $\prod a ^{(i)}$ and $\big( \textbf{p}_t ^{(v_i , 0)} \big)$ divides $\mathscr{L}_i$. Thus, $\big( \textbf{p}_t ^{(v_i , 0)} \big) \cdot {l_{t_j} ^{(t)}} ' = l_{t_j}$ divides $\prod a ^{(i)} \cdot \mathscr{L}_i$. \end{proof} In addition to Proposition~\ref{2prop2}, the label of any zero edge of $v_i$ also divides $\prod a ^{(i)} \cdot \mathscr{L}_i$. \begin{proposition} Given an edge labeled $(G , \alpha)$, a zero edge $e_j$ of $v_i$, and $a ^{(i)} \in \bigtimes D^i$, the edge label $l_j$ divides $\prod a ^{(i)} \cdot \mathscr{L}_i$. \label{2prop3} \end{proposition} \begin{proof} Since $e_j$ is a zero edge of $v_i$, the edge label $l_j$ divides $\mathscr{L}_i$, and so $\prod a ^{(i)} \cdot \mathscr{L}_i$. \end{proof} \section{Determinantal Methods}\label{dm} Let $(G, \alpha)$ be an edge labeled graph with $n$ vertices and $A = \{ F_1 , \ldots , F_n \} \subset R_{(G,\alpha)}$. We can rewrite $A$ in a matrix form, whose columns are the elements of $A$, such as \begin{displaymath} A = \begin{pmatrix} F_1 (v_n) & F_2 (v_n) & \ldots & F_n (v_n) \\ \vdots & \vdots & & \vdots \\ F_1 (v_2) & F_2 (v_2) & \ldots & F_n (v_2) \\ F_1 (v_1) & F_2 (v_1) & \ldots & F_n (v_1) \end{pmatrix}. \end{displaymath} $A$ is called a spline matrix and the determinant $\big\vert A \big\vert$ is denoted by $\big\vert F_1 \squad F_2 \squad \ldots \squad F_n \big\vert$. Given an edge labeled graph $(G,\alpha)$ with $n$ vertices, we present the element $Q_G$ as \begin{displaymath} Q_G = \prod\limits_{i=1} ^n \mathscr{L}_i. \end{displaymath} We give the basis condition for $R_{(G,\alpha)}$ by using $Q_G$ and the determinant $\big\vert A \big\vert$. An explicit formula for $Q_G$ is given in terms of edge labels for cycles, diamond graphs, and trees~\cite{Gjo, Mah, Bla, Alt2}. It is difficult to determine such a formula when the number of edges of $G$ is greater than the number of vertices. We reach our goal without providing a general formula for $Q_G$ for arbitrary graphs.
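For small graphs, however, $Q_G$ is easily computed directly from its definition. For the diamond graph of Figure~\ref{fig1} we found $\mathscr{L}_1 = 1$, $\mathscr{L}_2 = 30$, $\mathscr{L}_3 = 4$ and $\mathscr{L}_4 = 18$, hence
\begin{displaymath}
Q_G = 1 \cdot 30 \cdot 4 \cdot 18 = 2160.
\end{displaymath}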
The properties of the determinant $\big\vert A \big\vert$ are listed below. \begin{proposition} Let $(G, \alpha)$ be an edge labeled graph with $n$ vertices. Assume that $\{ F_1 , \ldots , F_n \}$ forms a basis for $R_{(G,\alpha)}$ and let $\{ G_1 , \ldots , G_n \} \subset R_{(G,\alpha)}$. Then $\big\vert F_1 \squad F_2 \squad \ldots \squad F_n \big\vert$ divides $\big\vert G_1 \squad G_2 \squad \ldots \squad G_n \big\vert$. \label{3prop1} \end{proposition} \begin{proof} See Lemma 5.1.4. in~\cite{Gjo}. \end{proof} \begin{proposition} Let $(G, \alpha)$ be an edge labeled graph with $n$ vertices and let $\{ F_1 , F_2, \ldots , F_n \} \subset R_{(G , \alpha)}$. If $\big\vert F_1 \squad F_2 \squad \ldots \squad F_n \big\vert = r \cdot Q_G$ where $r \in R$ is a unit, then $\{ F_1 , F_2 , \ldots , F_n \}$ forms a basis for $R_{(G , \alpha)}$. \label{3prop2} \end{proposition} \begin{proof} The proof of Lemma 3.19 in~\cite{Alt2} holds for an arbitrary edge labeled graph $(G,\alpha)$. \end{proof} \begin{theorem} Given an edge labeled graph $(G, \alpha)$ with $n$ vertices and a set of splines $\{ F_1 , F_2, \ldots , F_n \} \subset R_{(G , \alpha)}$, the element $Q_G$ divides $\big\vert F_1 \squad F_2 \squad \ldots \squad F_n \big\vert$. \label{3thm1} \end{theorem} \begin{proof} Let $F_i = \big( F_i (v_1) , \ldots , F_i (v_n) \big)$ and $\textbf{p} ^{(v_2 , 0)} , \textbf{p} ^{(v_3 , 0)} , \ldots , \textbf{p} ^{(v_n , 0)}$ be arbitrary zero trails of $v_i$ for $2 \leq i \leq n$. Then, $\textbf{p} ^{(v_i , 0)}$ corresponds to a $v_{i_j}$-trail of $v_i$ with $i_j < i$. By using elementary row operations on the spline matrix $A$, we replace the $(n-i+1)$-th row of $A$ as \begin{displaymath} \big( F_1 (v_i) \squad , \ldots , \squad F_n (v_i) \big) \to \big( F_1 (v_i) - F_1 (v_{i_j}) \squad, \ldots , \squad F_n (v_i) - F_n (v_{i_j}) \big). \end{displaymath} Here $\big( \textbf{p} ^{(v_i , 0)} \big)$ divides this new row by Theorem~\ref{2thm1}. Note that such row operations fix the last row of $A$. After these row operations, one concludes that the product $\big( \textbf{p} ^{(v_2 , 0)} \big) \cdot \big( \textbf{p} ^{(v_3 , 0)} \big) \cdots \big( \textbf{p} ^{(v_n , 0)} \big)$ divides the determinant $\big\vert A \big\vert$. Since the zero trails are chosen arbitrarily, we conclude that each element of the set \begin{displaymath} \mathbb{P} = \Big\{ \big( \textbf{p} ^{(v_2 , 0)} \big) \cdot \big( \textbf{p} ^{(v_3 , 0)} \big) \cdots \big( \textbf{p} ^{(v_n , 0)} \big) \text{ $\vert$ } \textbf{p} ^{(v_i , 0)} \text{ is a zero trail of $v_i$ for $2 \leq i \leq n$} \Big\} \end{displaymath} divides $\big\vert A \big\vert$. Therefore, the least common multiple \begin{displaymath} \big[ \mathbb{P} \big] = \mathscr{L}_1 \cdot \mathscr{L}_2 \cdot \mathscr{L}_3 \cdots \mathscr{L}_n = Q_G \end{displaymath} divides $\big\vert A \big\vert = \big\vert F_1 \squad F_2 \squad \ldots \squad F_n \big\vert$ by Lemma~\ref{2lem1}. \end{proof} \section{Basis Condition on Complete Graphs}\label{bcocg} A complete graph with $n$ vertices, denoted by $K_n$, is a graph such that $uv \in E(K_n)$ for each distinct pair of vertices $u,v \in V(K_n)$. The most challenging type of graph for studying splines is the complete graph, because every pair of vertices is connected and there are many spline conditions to check. For this reason, determinantal techniques have previously not been used efficiently for module bases of $R_{(K_n , \alpha)}$. In this section, we focus on splines on complete graphs over GCD domains and give a determinantal basis criterion for $R_{(K_n , \alpha)}$. The first aim of this section is to answer the following problem. \begin{problem} Let $(K_n , \alpha)$ be an edge labeled complete graph and $R$ be a GCD domain. Fix an integer $i$ with $2 \leq i \leq n-1$. For each element $a ^{(i)} \in \bigtimes D^i$, can we find a spline $F \in R_{(K_n , \alpha)}$ such that $F (v) \in \Big\{ 0, \squad \prod a ^{(i)} \cdot \mathscr{L}_i \Big\}$ for all $v \in V(K_n)$? We symbolize such a spline by $F_{a ^{(i)}}$. \label{4prob1} \end{problem} Problem~\ref{4prob1} is easy to answer for any type of graph in the following case.
\begin{proposition} Let $(G,\alpha)$ be an arbitrary edge labeled graph and assume that $l_{u v_i}$ lies in $a ^{(i)} \in \bigtimes D^i$ for every edge $u v_i \in E(G)$. A vertex labeling $g_{a ^{(i)}}$ such that $g_{a ^{(i)}} (v_i) = \prod a ^{(i)} \cdot \mathscr{L}_i$ and $g_{a ^{(i)}} (v) = 0$ for $v \neq v_i$ is a spline in $R_{(G , \alpha)}$. \label{4prop2} \end{proposition} \begin{proof} Consider an edge $uv \in E(G)$. If $u$ or $v$ is equal to $v_i$, then $g_{a ^{(i)}} (u) - g_{a ^{(i)}} (v) = \pm \prod a ^{(i)} \cdot \mathscr{L}_i$ and $l_{uv}$ divides the difference by Proposition~\ref{2prop2}. Hence, the spline condition holds on $uv$. If neither $u$ nor $v$ is equal to $v_i$, then $g_{a ^{(i)}} (u) = g_{a ^{(i)}} (v) = 0$ and the spline condition holds on $uv$. Thus, $g_{a ^{(i)}} \in R_{(G, \alpha)}$. \end{proof} The case where $l_{u v_i}$ does not lie in $a ^{(i)}$ for some $u v_i \in E(K_n)$ is handled by Algorithm~\ref{4alg1}. Nevertheless, we first reduce Problem~\ref{4prob1} to minimal elements of $\bigtimes D^i$. \begin{theorem} Let $a ^{(i)} \in \bigtimes D^i$ be minimal. For any element ${a^*} ^{(i)} \in \bigtimes D^i$ with $a ^{(i)} \subset {a^*} ^{(i)}$, a spline $F_{a ^{(i)}} \in R_{(K_n , \alpha)}$ induces a spline $F_{{a^*} ^{(i)}} \in R_{(K_n , \alpha)}$. \label{4thm1} \end{theorem} \begin{proof} Since $a ^{(i)} \subset {a^*} ^{(i)}$, we have $H_{a ^{(i)}} \subset H_{{a^*} ^{(i)}}$ and $\prod a ^{(i)}$ divides $\prod {a^*} ^{(i)}$. We define the induced spline $F_{{a^*} ^{(i)}}$ as follows. \begin{displaymath} F_{{a^*} ^{(i)}} (v) = \begin{cases} \prod {a^*} ^{(i)} \cdot \mathscr{L}_i & \text{ if } F_{a ^{(i)}} (v) = \prod a ^{(i)} \cdot \mathscr{L}_i, \\ 0 & \text{ if } F_{a ^{(i)}} (v) = 0. \end{cases} \end{displaymath} To conclude $F_{{a^*} ^{(i)}} \in R_{(K_n , \alpha)}$, consider an arbitrary edge $uv \in E(K_n)$. If $uv \not\in E(H_{{a^*} ^{(i)}})$, then $uv \not\in E(H_{a ^{(i)}})$ and $F_{{a^*} ^{(i)}} (u) = F_{{a^*} ^{(i)}} (v)$ since $F_{a ^{(i)}} \in R_{(K_n , \alpha)}$. So the spline condition holds. If $uv \in E(H_{{a^*} ^{(i)}})$, we have $\alpha(uv)$ divides $\prod {a^*} ^{(i)} \cdot \mathscr{L}_i$ by Proposition~\ref{2prop2} and the spline condition holds again. Therefore, $F_{{a^*} ^{(i)}} \in R_{(K_n , \alpha)}$. \end{proof} We present an application of Theorem~\ref{4thm1} at the end of Example~\ref{4ex1}. We introduce the following algorithm to construct a spline $F_{a ^{(i)}} \in R_{(K_n , \alpha)}$ when $R$ is a GCD domain. \begin{algorithm} Let $(K_n , \alpha)$ be an edge labeled complete graph and $a ^{(i)} \in \bigtimes D^i$ be a minimal element. Assume that $v_i v_{k_1} , \ldots , v_i v_{k_t} \not\in E(H_{a ^{(i)}})$ with $k_j > i$ for $1 \leq j \leq t$. Define a vertex labeling $g_{a ^{(i)}}$ on $(K_n , \alpha)$ as follows. \begin{enumerate} \item Set $g_{a ^{(i)}}(v_i) = \prod a ^{(i)} \cdot \mathscr{L}_i$. \item Set $g_{a ^{(i)}}(v_{k_j}) = \prod a ^{(i)} \cdot \mathscr{L}_i$ for $1 \leq j \leq t$. \item For a vertex $v_s$ with $s > i$ and $v_i v_s \in E(H_{a ^{(i)}})$, set $g_{a ^{(i)}}(v_s) = 0$ if $v_s v_{k_j} \in E(H_{a ^{(i)}})$ for $1 \leq j \leq t$. Otherwise, set $g_{a ^{(i)}}(v_s) = \prod a ^{(i)} \cdot \mathscr{L}_i$. \end{enumerate} \label{4alg1} \end{algorithm} Recall that any vertex $v_s$ with $s < i$ is pre-labeled by zero by the setting of $a ^{(i)} \in \bigtimes D^i$.
Algorithm~\ref{4alg1} agrees with these pre-labeled vertices: assume that $v_s \in V(K_n)$ with $s < i$; then $< l_{v_s v_{k_j}} , l_{v_{k_j} v_i} >$ is a zero trail of $v_i$ and $v_{k_j} v_i \not\in E(H_{a ^{(i)}})$ for $1 \leq j \leq t$. Thus, $v_s v_{k_j} \in E(H_{a ^{(i)}})$ for all $j$ by Remark~\ref{2rem3} and $g_{a ^{(i)}}(v_s) = 0$. The results of Algorithm~\ref{4alg1} are presented below. \begin{proposition} The vertex labeling $g_{a ^{(i)}}$ is a spline in $R_{(H_{a ^{(i)}} , \alpha)}$. \label{4prop3} \end{proposition} \begin{proof} Let $uv \in E(H_{a ^{(i)}})$. If $g_{a ^{(i)}} (u) - g_{a ^{(i)}} (v) = 0$, then the spline condition holds. Assume that $g_{a ^{(i)}} (u) - g_{a ^{(i)}} (v) = \prod a ^{(i)} \cdot \mathscr{L}_i$. By Proposition~\ref{2prop2}, $l_{uv}$ divides $\prod a ^{(i)} \cdot \mathscr{L}_i$ and the spline condition holds again. Hence, $g_{a ^{(i)}} \in R_{(H_{a ^{(i)}} , \alpha)}$. \end{proof} We state the main result of Algorithm~\ref{4alg1} as follows. \begin{theorem} The vertex labeling $g_{a ^{(i)}}$ is a spline in $R_{(K_n , \alpha)}$. \label{4thm2} \end{theorem} \begin{proof} Algorithm~\ref{4alg1} covers all vertices $v_j$ with $j > i$ since the graph is complete. Let $uv \in E(K_n)$. If $uv \in E(H_{a ^{(i)}})$, the spline condition holds by Proposition~\ref{4prop3}. Let $uv \not\in E(H_{a ^{(i)}})$ and assume that the spline condition does not hold on $uv$. Without loss of generality, say $g_{a ^{(i)}} (u) = \prod a ^{(i)} \cdot \mathscr{L}_i$ and $g_{a ^{(i)}} (v) = 0$ where $u = v_s$ and $v = v_{s'}$. Note that $s > i$ because of the label $g_{a ^{(i)}} (u)$. We deal with the rest of the proof in two cases by comparing the indices $s'$ and $i$. Assume that $s' < i$; then $v v_i \not\in E(H_{a ^{(i)}})$ by the construction of $a ^{(i)}$. If $u v_i \not\in E(H_{a ^{(i)}})$, we have the zero trail of $v_i$ given by $< l_{v_i u} , l_{u v} >$ on which no edge lies in $a ^{(i)}$, and this contradicts Remark~\ref{2rem3}. If $u v_i \in E(H_{a ^{(i)}})$, we have another vertex $v_k$ with $k > i$ such that $v_i v_k , v_k u \not\in E(H_{a ^{(i)}})$ since $g_{a ^{(i)}} (u) = \prod a ^{(i)} \cdot \mathscr{L}_i$. In this case, no edge on the zero trail $< l_{v_i v_k}, l_{v_k u} , l_{u v} >$ of $v_i$ lies in $a ^{(i)}$, a contradiction to Remark~\ref{2rem3} again. Suppose that $s' > i$; then $v v_i \in E(H_{a ^{(i)}})$ since $g_{a ^{(i)}} (v) = 0$. Moreover, $u v_i \in E(H_{a ^{(i)}})$, since the other case would contradict the label of $v$. Because $u v_i \in E(H_{a ^{(i)}})$ and $g_{a ^{(i)}} (u) = \prod a ^{(i)} \cdot \mathscr{L}_i$, there exists a vertex $v_k$ with $k > i$ such that $v_k v_i \not\in E(H_{a ^{(i)}})$ and $u v_k \not\in E(H_{a ^{(i)}})$. Since $g_{a ^{(i)}} (v) = 0$, we have $v v_k \in E(H_{a ^{(i)}})$. As $a ^{(i)}$ is minimal, there exists a zero trail $\textbf{p} ^{(v_i , 0)}$ such that the only edge on $\textbf{p} ^{(v_i , 0)}$ that lies in $a ^{(i)}$ is $v v_k$. However, replacing $l_{v v_k}$ with $l_{v_k u} , l_{uv}$ on $\textbf{p} ^{(v_i , 0)}$ yields another zero trail ${\textbf{p}^*} ^{(v_i , 0)}$ such that no edge on ${\textbf{p}^*} ^{(v_i , 0)}$ lies in $a ^{(i)}$, which contradicts Remark~\ref{2rem3}. Since all possible cases lead to a contradiction, the assumption is wrong and the spline condition holds on $uv$. Thus, $g_{a ^{(i)}} \in R_{(K_n , \alpha)}$. \end{proof} We denote the vertex labeling $g_{a ^{(i)}}$ by $F_{a ^{(i)}}$ in the rest of the paper.
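Although the setting is purely algebraic, Algorithm~\ref{4alg1} is simple enough to be prototyped directly. The following Python sketch is our own illustrative implementation (the function name, the encoding of vertices as $1, \ldots, n$, of $E(H_{a ^{(i)}})$ as a set of two-element frozensets, and of $\prod a ^{(i)} \cdot \mathscr{L}_i$ as the input \texttt{value} are not part of the paper's notation):
\begin{verbatim}
def algorithm_labeling(n, i, H_edges, value):
    # Vertices v_1, ..., v_{i-1} are pre-labeled by zero.
    g = {v: 0 for v in range(1, n + 1)}
    g[i] = value                                   # Step 1
    # Vertices v_k with k > i such that v_i v_k is not in H_{a^(i)}.
    K = [k for k in range(i + 1, n + 1)
         if frozenset({i, k}) not in H_edges]
    for k in K:                                    # Step 2
        g[k] = value
    for s in range(i + 1, n + 1):                  # Step 3
        if frozenset({i, s}) in H_edges:
            if all(frozenset({s, k}) in H_edges for k in K):
                g[s] = 0
            else:
                g[s] = value
    return g
\end{verbatim}
For the data of the example below, it returns the labeling $(0, \text{value}, 0, 0, \text{value})$ obtained by hand. We run Algorithm~\ref{4alg1} in the following example.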
\begin{example} Consider the edge labeled $(K_5 , \alpha)$ in Figure~\ref{fig6}. A minimal element $a ^{(2)} \in \bigtimes D^2$ is given by $l (a ^{(2)}) = \{ l_2 , l_4 , l_7 , l_5 , l_9 , l_{10} \}$ in terms of edge labels. The subgraph $H_{a ^{(2)}}$ is presented as solid lines, and the other edges are denoted by dashed lines below. \begin{figure}[H] \centering \begin{tikzpicture} \begin{scope}[every node/.style={circle,thick,draw}] \node (A) at (2,4) {$v_1$}; \node (B) at (0,0) {$v_2$}; \node (C) at (4,0) {$v_3$}; \node (D) at (2,1.5) {$v_4$}; \node (E) at (-1,5) {$v_5$}; \end{scope} \begin{scope}[>={Stealth[black]}, every node/.style={fill=white,circle}, every edge/.style={draw=black,very thick}] \path[dashed] (A) edge node {$l_1$} (B); \path[dashed] (A) edge node {$l_3$} (C); \path (A) edge node {$l_4$} (D); \path (A) edge node {$l_7$} (E); \path (B) edge node {$l_2$} (C); \path (B) edge node {$l_5$} (D); \path[dashed] (B) edge node {$l_8$} (E); \path[dashed] (C) edge node {$l_6$} (D); \path (C) edge[bend right=75] node {$l_9$} (E); \path (D) edge node {$l_{10}$} (E); \end{scope} \end{tikzpicture} \caption{Edge labeled $(K_5 , \alpha)$} \label{fig6} \end{figure} We execute Algorithm~\ref{4alg1} as follows. \begin{enumerate} \item $g_{a ^{(2)}} (v_1) = 0$ as a pre-labeled vertex. \item Set $g_{a ^{(2)}} (v_2) = \prod a ^{(2)} \cdot \mathscr{L}_2$. \item Set $g_{a ^{(2)}} (v_5) = \prod a ^{(2)} \cdot \mathscr{L}_2$ since $v_5 v_2 \not\in E(H_{a ^{(2)}})$. \item Set $g_{a ^{(2)}} (v_3) = g_{a ^{(2)}} (v_4) = 0$ since $v_5 v_3 , v_5 v_4 \in E(H_{a ^{(2)}})$. \end{enumerate} Therefore, $g_{a ^{(2)}} = F_{a ^{(2)}} = \Big( 0, \prod a ^{(2)} \cdot \mathscr{L}_2 , 0 , 0 , \prod a ^{(2)} \cdot \mathscr{L}_2 \Big)$. As an application of Theorem~\ref{4thm1}, $F_{a ^{(2)}}$ induces the spline $F_{{a^*} ^{(2)}} = \Big( 0, \prod {a^*} ^{(2)} \cdot \mathscr{L}_2 , 0 , 0 , \prod {a^*} ^{(2)} \cdot \mathscr{L}_2 \Big)$ for $a ^{(2)} \subset {a^*} ^{(2)}$ with $l ({a^*} ^{(2)}) = \{ l_2 , l_4 , l_7 , l_5 , l_9 , l_{10} , l_3 \}$. \label{4ex1} \end{example} \begin{remark} Since $\mathfrak{D}_n = \emptyset$ by Remark~\ref{2rem2}, we fix $F_{a ^{(n)}} = (0,0, \ldots , \mathscr{L}_n)$. We conclude that $F_{a ^{(n)}} \in R_{(K_n , \alpha)}$ by Proposition~\ref{2prop3}. \end{remark} \begin{remark} The spline $F_{a ^{(i)}} \in R_{(K_n , \alpha)}$ has at least $i-1$ zero components, with $F_{a ^{(i)}} (v_j) = 0$ for all $j < i$. To see this, notice that the vertices $v_j$ with $j < i$ are pre-labeled by zero where $2 \leq i \leq n-1$. In the case of $i = n$, we have $F_{a ^{(n)}} = (0,0, \ldots , \mathscr{L}_n)$. \label{4rem1} \end{remark} We summarize Proposition~\ref{4prop2}, Theorem~\ref{4thm1}, Theorem~\ref{4thm2} and Remark~\ref{4rem1} in the following corollary. \begin{corollary} Let $(K_n , \alpha)$ be an edge labeled complete graph and $R$ be a GCD domain. For any element $a ^{(i)} \in \bigtimes D^i$ with $2 \leq i \leq n-1$, there corresponds a spline $F_{a ^{(i)}} \in R_{(K_n , \alpha)}$ whose first $i-1$ components are zero. \label{4cor1} \end{corollary} Corollary~\ref{4cor1} answers Problem~\ref{4prob1} positively for complete graphs. The main result of the section is stated below. \begin{theorem} Let $\{ F_1 , F_2, \ldots , F_n \} \subset R_{(K_n , \alpha)}$. If $\{ F_1 , F_2 , \ldots , F_n \}$ forms a basis for $R_{(K_n , \alpha)}$, then $\big\vert F_1 \squad F_2 \squad \ldots \squad F_n \big\vert = r \cdot Q_{K_n}$ where $r \in R$ is a unit.
\label{4thm3} \end{theorem} \begin{proof} Assume that $\{ F_1 , F_2 , \ldots , F_n \}$ forms a basis for $R_{(K_n , \alpha)}$. By Theorem~\ref{3thm1}, $Q_{K_n}$ divides $\big\vert F_1 \squad F_2 \squad \ldots \squad F_n \big\vert$, say $\big\vert F_1 \squad F_2 \squad \ldots \squad F_n \big\vert = r \cdot Q_{K_n}$. We show that $r$ is a unit. By Corollary~\ref{4cor1}, there exists a spline $F_{a ^{(i)}} \in R_{(K_n , \alpha)}$ whose first $i-1$ components are zero for any $a ^{(i)} \in \bigtimes D^i$ and $2 \leq i \leq n-1$. Construct the spline matrix \begin{displaymath} A = \big( 1 \squad F_{a ^{(2)}} \squad F_{a ^{(3)}} \squad \ldots \squad F_{a ^{(n-1)}} \squad F_{a ^{(n)}} \big) \end{displaymath} where $1 = (1,\ldots,1)$ is the trivial spline. The matrix $A$ is an upper triangular matrix since the first $i-1$ components of $F_{a ^{(i)}}$ are zero. Therefore, \begin{align*} \big\vert A \big\vert &= \mathscr{L}_2 \cdot \prod a ^{(2)} \cdot \mathscr{L}_3 \cdot \prod a ^{(3)} \cdots \mathscr{L}_{n-1} \cdot \prod a ^{(n-1)} \cdot \mathscr{L}_n \\ &= \prod a ^{(2)} \cdot \prod a ^{(3)} \cdots \prod a ^{(n-1)} \cdot Q_{K_n}. \end{align*} By Proposition~\ref{3prop1}, $\big\vert F_1 \squad F_2 \squad \ldots \squad F_n \big\vert = r \cdot Q_{K_n}$ divides $\big\vert A \big\vert$ and hence, $r$ divides $\prod a ^{(2)} \cdot \prod a ^{(3)} \cdots \prod a ^{(n-1)}$. Since $a ^{(i)} \in \bigtimes D^i$ are arbitrary, we conclude that $r$ divides each element of the set \begin{displaymath} \mathbb{D} = \Big\{ \prod a ^{(2)} \cdot \prod a ^{(3)} \cdots \prod a ^{(n-1)} \text{ $\vert$ } a ^{(i)} \in \bigtimes D^i , \squad 2 \leq i \leq n-1 \Big\}. \end{displaymath} Thus, $r$ divides the greatest common divisor $\big( \mathbb{D} \big)$, which is equal to $1$ by Proposition~\ref{2prop1}, and so $r$ is a unit. \end{proof} We give the basis condition for $R_{(K_n , \alpha)}$ over GCD domains by combining Proposition~\ref{3prop2} and Theorem~\ref{4thm3}. \begin{theorem} Let $\{ F_1 , F_2, \ldots , F_n \} \subset R_{(K_n , \alpha)}$ and $R$ be a GCD domain. Then $\{ F_1 , F_2 , \ldots , F_n \}$ forms a basis for $R_{(K_n , \alpha)}$ if and only if $\big\vert F_1 \squad F_2 \squad \ldots \squad F_n \big\vert = r \cdot Q_{K_n}$ where $r \in R$ is a unit. \label{4thm4} \end{theorem} \section{Basis Condition on Arbitrary Graphs}\label{bcoag} In this section, we extend Theorem~\ref{4thm4} to arbitrary graphs and obtain a general basis condition over GCD domains. In order to do this, we define the completion of an edge labeled graph. Given an edge labeled graph $(G,\alpha)$, we define its completion, denoted by $(K_G , \alpha^*)$, by adding the missing edges $uv$ and labeling them by $\alpha^* (uv) = 1$. If $uv \in E(G)$, then we set $\alpha^* (uv) = \alpha(uv)$. The following proposition states that the completion does not affect the set of splines. \begin{proposition} Let $(G,\alpha)$ be an edge labeled graph and $(K_G , \alpha^*)$ be its completion. Then $R_{(G,\alpha)} = R_{(K_G , \alpha^*)}$. \label{5prop1} \end{proposition} \begin{proof} Let $F \in R_{(G,\alpha)}$ and $uv \in E(K_G)$. If $uv \in E(G)$, then $F(u) - F(v) \in \alpha(uv) = \alpha^* (uv)$. Otherwise, $F(u) - F(v) \in \alpha^* (uv) = \langle 1 \rangle = R$ trivially, and so $F \in R_{(K_G , \alpha^*)}$. Conversely, if $F \in R_{(K_G , \alpha^*)}$, then $F \in R_{(G,\alpha)}$ since $(G,\alpha) \subset (K_G , \alpha^*)$. Thus, $R_{(G,\alpha)} = R_{(K_G , \alpha^*)}$. \end{proof}
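For instance, the completion of the diamond graph in Figure~\ref{fig1} is obtained by adding the single missing edge $v_3 v_4$ with $\alpha^* (v_3 v_4) = 1$; the spline condition on the new edge is vacuous, so the spline $F = (2,32,34,50)$ given earlier remains a spline on $(K_G , \alpha^*)$.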
The relation between $Q_G$ and $Q_{K_G}$ is stated below. \begin{proposition} Let $(G,\alpha)$ be an edge labeled graph and $(K_G , \alpha^*)$ be its completion. Then $Q_G = Q_{K_G}$. \label{5prop2} \end{proposition} \begin{proof} Fix a vertex $v_i \in V(G)$ with $i > 1$. Since $G \subset K_G$, any zero trail $\textbf{p} ^{(v_i , 0)}$ on $(G, \alpha)$ is a zero trail on $(K_G , \alpha^*)$. If a zero trail $\textbf{p}_j ^{(v_i , 0)}$ on $(K_G , \alpha^*)$ contains an edge $uv$ with $\alpha^* (uv) = 1$, then $\big( \textbf{p}_j ^{(v_i , 0)} \big) = 1$ and such a zero trail does not affect $\mathscr{L}_i$. Hence, $\mathscr{L}_i$ is the same on $(G,\alpha)$ and $(K_G , \alpha^*)$, and so $Q_G = Q_{K_G}$. \end{proof} Consider the completion of $(G, \alpha)$. By Theorem~\ref{4thm4}, $\{ F_1 , F_2 , \ldots , F_n \}$ forms a basis for $R_{(K_G , \alpha^*)}$ if and only if $\big\vert F_1 \squad F_2 \squad \ldots \squad F_n \big\vert = r \cdot Q_{K_G}$ where $r \in R$ is a unit. Since $R_{(G,\alpha)} = R_{(K_G , \alpha^*)}$ and $Q_G = Q_{K_G}$ by Proposition~\ref{5prop1} and Proposition~\ref{5prop2}, Theorem~\ref{4thm4} extends to arbitrary graphs as follows. \begin{theorem} Let $(G, \alpha)$ be an arbitrary graph with $n$ vertices, $\{ F_1 , F_2, \ldots , F_n \} \subset R_{(G , \alpha)}$ and $R$ be a GCD domain. Then $\{ F_1 , F_2 , \ldots , F_n \}$ forms a basis for $R_{(G , \alpha)}$ if and only if $\big\vert F_1 \squad F_2 \squad \ldots \squad F_n \big\vert = r \cdot Q_G$ where $r \in R$ is a unit. \label{5thm1} \end{theorem} \section*{Acknowledgements} The second author is supported by the Scientific and Technological Research Council of Turkey, International Postdoctoral Research Fellowship Program for Turkish Citizens T\"{U}B\.{I}TAK-2219 (Project Number: 1059B192201169). In addition, he thanks the Smith College Department of Mathematical Sciences for their hospitality. \bibliographystyle{alpha}
\section{Introduction} \pagenumbering{arabic} In the present era, when financial institutions face serious challenges in the wake of improving life expectancy, the pricing of key products such as `Guaranteed Annuity Options', which involve a survival benefit, has gained a lot of momentum. There is a pressing need to equip longevity product designers with insight into the efficient pricing of these instruments. This involves designing an apparatus that provides state-of-the-art solutions to measure the random impulse of mortality, which in turn calls for looking at mortality in a stochastic sense. Until very recently, the conventional approach of actuaries consisted of treating mortality in a deterministic way, in contrast to interest rates, which were assumed to possess a stochastic nature. This was followed by the era of the assumption that mortality evolves in a stochastic manner but is independent of interest rates (see for example \cite{Biffis1}). However, the latter assumption is also far from being realistic. This is because both extreme mortality events, such as catastrophes and pandemics, and improving life expectancy go a long way in influencing the value of interest rates. While the former shows a stronger effect in the short term, the latter affects the financial market in a gradual manner. Interested readers can refer to \cite{Deelestra}, \cite{Liu1}, \cite{Liu2} and \cite{Jalen} and the references therein. To the best of our knowledge, \cite{Miltersen} were the first to introduce dependence between mortality and interest rates in the actuarial world. In the context of the real world, a study by \cite{Nicolini} to understand the relation between these two underlying risks demonstrates that the decline of interest rates in pre-industrial England was perhaps triggered by the decline of adult mortality at the end of the 17th century. More recently, \cite{Dacorogna} examine the correlation between mortality and market risks in periods of extremes, such as a severe pandemic outbreak, while \cite{Dacorogna2} explore the existence of this dependence within the Feller process framework. As remarked at the beginning of this section, `The Life Expectancy Revolution' has put pressure on the social security programmes of various nations, thereby triggering fiscal crises for governments who find it hard to fulfill the needs of an ever growing aging population. The price of this imbalance affects the financial markets adversely, leading to a downtrend in returns on investments. To take care of these issues, the EU's Solvency II Directive has laid out new insurance risk management practices for capital adequacy requirements based on the assumption of dependence between financial markets and life/health insurance markets, including the correlation between the two underpinning risks, viz. interest rate and mortality (c.f. Quantitative Impact Study 5: Technical Specifications \cite{QIS5}). In this paper, we consider the most generalised modelling framework, where both interest and mortality risks are stochastic and correlated. In a set up similar to \cite{Biffis1}, we advocate the use of doubly stochastic stopping times to incorporate the randomness about the time of death. We then utilize this set up and the theory of comonotonicity to devise model-independent price bounds for Guaranteed Annuity Options (GAOs). These are options embedded in certain pension policies that provide the policyholder the right to choose, at the time of retirement/maturity, between a lump sum and the conversion of the proceeds into an annuity at a guaranteed rate.
The reports of the Insurance Institute of London (1972) (c.f. \cite{IIL}) show that the origin of GAOs dates back to 1839. However, these instruments came into the limelight in the UK in the era of 1970-1990. In the wake of increased life expectancy, research on the pricing of GAOs has gained a lot of momentum, as the underpricing of such guarantees has already caused serious solvency problems to insurers. For example, in the UK, as an after-effect of the encashment of too many GAOs, the world's oldest life insurer, Equitable Life, had to close to new business in 2000. The existing literature on the pricing of GAOs under the correlation assumption is very thin and only Monte Carlo estimation of the GAO price is available for sophisticated models (c.f. \cite{Deelestra}). But the Monte Carlo method is generally extremely time consuming for complex models (c.f. \cite{Runhuan}). This article is a concrete step in the direction of pricing GAOs under the correlation assumption. It investigates the design of price bounds for GAOs under the assumption of dependence between mortality and interest rate risks and provides a much needed confidence interval for the pricing of these options. Moreover, the proposed bounds are model-free or general in the sense that they are applicable for all kinds of models and in particular suitable for the affine set up. Keeping pace with the relevant literature (c.f. \cite{Liu2}, \cite{Deelestra}), we apply a change of probability measure with the `Survival Zero Coupon Bond' as num\'{e}raire for the valuation of the GAO. This change of measure facilitates computation and enhances efficiency (c.f. \cite{Liu1}). The organization of the paper is as follows. In section 2 we introduce the market framework with the necessary notations. In section 3 we define GAOs and show that their payoff is similar to that of a basket option. This is followed by Section 4 which highlights the technicalities of affine processes. Sections 5 and 6 are the core sections which present details on finding lower and upper bounds for GAOs. In section 7 we present examples while numerical investigations in support of the developed theory appear in Section 8. Section 9 then concludes the paper. \section{The Market Framework} In this section, we introduce the necessary set up required to construct the mathematical interplay between the financial market and the mortality model. We denote by $\mathbb{P}$ the physical world measure and we utilize the fact that, in the absence of arbitrage, at least one equivalent martingale measure (EMM) $\mathbb{Q}$ exists. We consider a filtered probability space $\left(\Omega, \mathscr{F}, \mathbb{F}, \mathbb{P}\right)$ where $\mathbb{F}=\{\mathscr{F}_{t}\}_{t\geq 0}$ is such that the filtration is large enough to support a process $X$ in $\mathbb{R}^{k}$, representing the evolution of financial variables, and a process $Y$ in $\mathbb{R}^{d}$, representing the evolution of mortality. We take as given an adapted short rate process $r=\{r_{t}\}_{t\geq 0}$ satisfying the technical condition $\int_{0}^{t} r_{s}ds<\infty$ a.s. for all $t\geq 0$. The short rate process $r$ represents the continuously compounded rate of interest of a risk-less security. Moreover, we concentrate on an insured life aged $x$ at time 0, with random residual lifetime denoted by $\tau_{x}$, which is an $\mathscr{F}_{t}$-stopping time. The filtration $\mathbb{F}$ includes knowledge of the evolution of all state variables up to each time $t$ and of whether the policyholder has died by that time.
More explicitly, we have: \[ \mathscr{F}_{t}=\mathscr{G}_{t} \vee \mathscr{H}_{t} \] where \[ \mathscr{G}_{t} \vee \mathscr{H}_{t}=\sigma\left(\mathscr{G}_{t} \cup \mathscr{H}_{t}\right) \] with \[\mathscr{G}_{t}=\sigma\left(Z_{s}:\;0\leq s\leq t\right),\;\;\;\mathscr{H}_{t}=\sigma\left(\mathbbm{1}_{\{\tau\leq s\}}:\;0\leq s\leq t\right) \] and where $Z=\left(X,Y\right)$ is the joint state variables process in $\mathbb{R}^{k+d}$. Thus we have \[ \mathscr{G}_{t}=\mathscr{G}_{t}^{X} \vee \mathscr{G}_{t}^{Y}. \] In fact $\mathbb{H}=\{\mathscr{H}_{t}\}_{t\geq 0}$ is the smallest filtration with respect to which $\tau$ is a stopping time. In other words, $\mathbb{F}$ is the smallest enlargement of $\mathbb{G}=\{\mathscr{G}_{t}\}_{t\geq 0}$ with respect to which $\tau$ is a stopping time, i.e., \[ \mathscr{F}_{t}=\cap_{s>t}\mathscr{G}_{s} \vee \sigma\left(\tau\wedge s\right), \;\forall t. \] We may think of $\mathscr{G}_{t}$ as carrying information captured from medical/demographical data collected at population/industry level and of $\mathscr{H}_{t}$ as recording the actual occurrence of death in an insurance portfolio. To make the set up more robust, we assume that $\tau_{x}$ is the first jump-time of a nonexplosive $\mathscr{F}_{t}$-counting process $N$ recording at each time $t \geq 0$ whether the individual has died $\left(N_{t} \neq 0\right)$ or not $(N_{t} = 0)$. The stopping time $\tau_{x}$ is said to admit an intensity $\mu_{x}$ if $N$ does, i.e. if $\mu_{x}$ is a non-negative predictable process such that $\int_{0}^{t}\mu_{x}\left(s\right)ds<\infty$ a.s. for all $t\geq 0$ and such that the compensated process $M = \{N_{t}-\int_{0}^{t}\mu_{x}\left(s\right)ds:t\geq 0\}$ is a local $\mathscr{F}_{t}$-martingale. Our next assumption is that $N$ is a doubly stochastic process or Cox process driven by a subfiltration $\mathscr{G}_{t}$ of $\mathscr{F}_{t}$, with $\mathscr{G}_{t}$-predictable intensity $\mu$. This implies that on any particular trajectory $t \mapsto \mu_{t}\left(\omega\right)$ of $\mu$, the counting process $N$ is an inhomogeneous Poisson process with parameter $\int_{0}^{.}\mu_{s}\left(\omega\right)ds$, i.e., we have that for all $t \in \left[0,T\right]$ and non-negative integer $k$, \begin{equation}\label{2.1abcde} \mathbb{P}\left(N_{T}-N_{t} = k|\mathscr{F}_{t} \vee \mathscr{G}_{T}\right)=\frac{\left(\int_{t}^{T}\mu_{s}ds\right)^{k}}{k!}e^{-\int_{t}^{T}\mu_{s}ds}. \end{equation} The main reason for the consideration of a strict subfiltration $\mathscr{G}_{t}$ of $\mathscr{F}_{t}$ is that it provides enough information about the evolution of the intensity of mortality, i.e., about the likelihood of death happening, but not enough information about the actual occurrence of death. Such information is carried by the larger filtration $\mathscr{F}_{t}$, with respect to which $\tau$ is a stopping time. From \eqref{2.1abcde}, by putting $k = 0$, we now proceed to compute the `probability of survival' up to time $T \geq t$, on the set $\{\tau > t\}$.
Let $A$ be the event of no death in the interval $\left[t,T\right]$, i.e., $A\equiv\{N_{T}-N_{t} = 0\}$; then the tower property of conditional expectation tells us that \begin{eqnarray}\label{2.1abcdef} \mathbb{P}\left(\tau > T|\mathscr{F}_{t}\right) & = & E[\mathbbm{1}_{A}|\mathscr{F}_{t}] \nonumber\\ & = & E\left[E\left(\mathbbm{1}_{A}|\mathscr{F}_{t} \vee \mathscr{G}_{T}\right)|\mathscr{F}_{t}\right] \nonumber\\ & = & E\left[\mathbb{P}\left(N_{T}-N_{t} = 0|\mathscr{F}_{t} \vee \mathscr{G}_{T}\right)|\mathscr{F}_{t}\right] \nonumber\\ & = & E\left[e^{-\int_{t}^{T}\mu_{s}ds}|\mathscr{F}_{t}\right]. \end{eqnarray} In fact, we characterize the conditional law of $\tau$ in several steps. Given that the non-negative $\mathscr{G}_{t}$-predictable process $\mu$ satisfies $\int_{0}^{t}\mu_{x}\left(s\right)ds<\infty$ a.s. for all $t>0$, we consider an exponential random variable $\Phi$ with parameter 1, independent of $\mathscr{G}_{\infty}$, and define the random time of death $\tau$ as the first time when the process $\int_{0}^{t}\mu_{s}ds$ rises above the random threshold $\Phi$, i.e., \begin{equation}\label{2.1abcdefg} \tau \doteq \inf\Big\{t\in \mathbb{R}^{+}: \int_{0}^{t}\mu_{s}\,ds \geq \Phi\Big\}. \end{equation} It is evident from \eqref{2.1abcdefg} that $\{\tau > T\}=\{\int_{0}^{T}\mu_{s}ds < \Phi\}$, for $T\geq 0$. Next, we work out $\mathbb{P}\left(\tau > T|\mathscr{G}_{t}\right)$ for $T\geq t\geq 0$ by using the tower property of conditional expectation, the independence of $\Phi$ and $\mathscr{G}_{\infty}$, and the facts that $\mu$ is a $\mathscr{G}_{t}$-predictable process and $\Phi \sim Exponential\left(1\right)$, i.e., \begin{equation}\label{2.2ab} \mathbb{P}\left(\tau > T|\mathscr{G}_{t}\right) = E\left[e^{-\int_{0}^{T}\mu_{s}ds}|\mathscr{G}_{t}\right]. \end{equation} In fact, the same result holds for $0 \leq T < t$. Further, we observe that $\{\tau > t\}$ is an atom of $\mathscr{H}_{t}$. As a result, in a manner similar to \cite{Biffis1}, we have constructed a doubly stochastic $\mathscr{F}_{t}$-stopping time driven by $\mathscr{G}_{t} \subset \mathscr{F}_{t}$ in the following way (c.f. \cite{billie}, ex 34.4, p.455): \begin{eqnarray}\label{2.2abc} \mathbb{P}\left(\tau > T|\mathscr{G}_{T} \vee \mathscr{F}_{t}\right) & = & \mathbbm{1}_{\{\tau > t\}}E\left[\mathbbm{1}_{\{\tau > T\}}|\mathscr{G}_{T} \vee \mathscr{H}_{t}\right] \nonumber\\ & = & \mathbbm{1}_{\{\tau > t\}}e^{-\int_{t}^{T}\mu_{s}ds}. \end{eqnarray} Next, the conditioning on $\mathscr{F}_{t}$ can be replaced by conditioning on $\mathscr{G}_{t}$, as shown in Appendix C of \cite{Biffis1}. We remark that we do not take $\mathscr{G}_{t} \vee \sigma(\Phi)$ as our filtration $\mathscr{F}_{t}$ because, in that case, the stopping time $\tau$ would be predictable and would not admit an intensity. The construction portrayed here guarantees that $\tau$ is a totally inaccessible stopping time, a concept intuitively meaning that the insured's death arrives as a total surprise to the insurer (see \cite{Protter}, Chapter III.2, for details). With this, we move to the focal point of this paper, viz. GAOs. \section{Guaranteed Annuity Options} \subsection{Introduction} A Guaranteed Annuity Option (GAO) is a contract that gives the policyholder the flexibility to convert his/her survival benefit into an annuity at a pre-specified conversion rate. The guaranteed conversion rate, denoted by $g$, can be quoted as an annuity/cash value ratio.
According to \cite{Bolton}, the most popular choice for the guaranteed conversion rate $g$ for males aged 65 in the UK in the 1980s was $g=\frac{1}{9}$, which means that each \pounds 1000 of cash value can be converted into an annuity of \pounds 111 per annum. The GAO has a positive value if the guaranteed conversion rate is higher than the conversion rate available in the market; otherwise the GAO is worthless, since the policyholder could use the cash to obtain a higher annuity value from the primary market. As a result, the moneyness of the GAO at maturity depends on the price of the annuity available in the market at that time, which in turn is calculated using the prevailing interest and mortality rates. \subsection{Mathematical Formulation} Consider an $x$ year old policyholder at time 0 who has access to a unit amount at his retirement age $R_{x}$. Then, a GAO gives the policyholder the choice at time $T=R_{x}-x$ between an annual payment of $g$ and a cash payment of 1. Let $\ddot{a}_{x}\left(T\right)$ denote a whole life annuity due for a person aged $x$ at time 0, which gives an annual payment of one unit amount at the start of each year, this payment beginning from time $T$ and conditional on survival. If $w$ is the largest possible survival age then we have \begin{eqnarray}\label{5.1} \ddot{a}_{x}\left(T\right) & = & \sum_{j=0}^{w-\left(T+x\right)-1}\mathbb{E}\left[e^{-\int_{T}^{T+j}\left(r_{s}+\mu_{s}\right)ds}|\mathscr{G}_{T}\right] \nonumber\\ & = & \sum_{j=0}^{w-\left(T+x\right)-1} \tilde{P}\left(T,T+j\right), \end{eqnarray} where \begin{equation}\label{4.3} \tilde{P}\left(t,T\right)=\mathbb{E}\left[e^{-\int_{t}^{T}\left(r_{s}+\mu_{s}\right)ds}|\mathscr{G}_{t}\right] \end{equation} denotes the price at time $t$ of a pure endowment insurance with maturity $T$ for an insured of age $x$ at time 0 who is still alive at time $t$. This insurance instrument is termed a \emph{survival zero-coupon bond}, abbreviated as SZCB, by \cite{Deelestra} and the authors remark that it can be used as a num\'{e}raire because it can be replicated by a strategy that involves longevity bonds (c.f. \cite{Lin2}), in analogy with the usual bootstrapping methodology used to find the zero rate curve starting from coupon bonds. This insurance instrument pays one unit of money at time $T$ upon the survival of the insured at that time. In fact, $r+\mu$ can be viewed as a fictitious short rate or yield to compare these instruments with their financial counterparts. At time $T$, the value of the contract having the above embedded GAO can be described by the following decomposition \begin{eqnarray}\label{5.2} V\left(T\right)& = & \max(g \ddot{a}_{x}\left(T\right), 1) \nonumber\\ & = & 1 + g \left(\ddot{a}_{x}\left(T\right)-\frac{1}{g}\right)^{+}. \end{eqnarray} In order to apply risk-neutral valuation, we state a result from \cite{Biffis1} to compute the fair values of a basic payoff involved in standard insurance contracts. These are benefits, of amount possibly linked to other security prices, contingent on survival over a given time period. We require the short rate process $r$ and the intensity of mortality $\mu$ to satisfy the technical conditions stated in Section 2. \begin{prop}(Survival benefit). Let $C$ be a bounded $\mathscr{G}_{t}$-adapted process.
Then, the time-$t$ fair value $SB_{t}\left(C_{T} ; T\right)$ of the time-$T$ survival benefit of amount $C_{T}$, with $0 \leq t \leq T$, is given by: \begin{equation}\label{3.1ab} SB_{t}\left(C_{T} ; T\right) = \mathbb{E}\left[e^{-\int_{t}^{T}r_{s}ds}\mathbbm{1}_{\{\tau > T\}}C_{T}|\mathscr{F}_{t}\right]=\mathbbm{1}_{\{\tau > t\}}\mathbb{E}\left[e^{-\int_{t}^{T}\left(r_{s}+\mu_{s}\right)ds}C_{T}|\mathscr{G}_{t}\right]. \end{equation} In particular, if $C$ is $\mathscr{G}_{t}^{X}$-adapted and $X$ and $Y$ are independent, then the following holds \begin{equation}\label{3.1abc} SB_{t}\left(C_{T} ; T\right) =\mathbbm{1}_{\{\tau > t\}}\mathbb{E}\left[e^{-\int_{t}^{T}r_{s}ds}C_{T}|\mathscr{G}_{t}^{X}\right]\mathbb{E}\left[e^{-\int_{t}^{T}\mu_{s}ds}|\mathscr{G}_{t}^{Y}\right]. \end{equation} \end{prop} \begin{proof} A comprehensive proof can be found in \cite{Biffis1}. \end{proof} Thus, the value at time $t = 0$ of the second term in \eqref{5.2}, which is called the GAO option price entered into by an $x$-year-old policyholder at time $t = 0$, is given by \begin{equation}\label{5.3} C(0, x, T )=\mathbb{E}\left[e^{-\int_{0}^{T}\left(r_{s}+\mu_{s}\right)ds}g\left(\ddot{a}_{x}\left(T\right)-\frac{1}{g}\right)^{+}\right]. \end{equation} In order to facilitate calculation, we adopt the following change of measure. \subsection{Change of Measure} We advocate a change of measure similar to the one adopted in \cite{Deelestra}. We define a new probability measure $\tilde{Q}$ with the Radon-Nikodym derivative of $\tilde{Q}$ w.r.t. $\mathbb{Q}$ given as: \begin{equation}\label{4.1} \frac{d\tilde{Q}}{d\mathbb{Q}} := \eta_{T}=\frac{e^{-\int_{0}^{T}\left(r_{s}+\mu_{s}\right)ds}}{\mathbb{E}\left[e^{-\int_{0}^{T}\left(r_{s}+\mu_{s}\right)ds}\right]}, \end{equation} where $\mathbb{E}$ denotes the usual expectation w.r.t. the EMM $\mathbb{Q}$ and we will use $\tilde{E}$ to denote the expectation w.r.t. the new probability measure $\tilde{Q}$. Further, on using Bayes' rule for conditional expectations, the survival benefit in \eqref{3.1ab} can be rewritten as \begin{equation}\label{4.2} SB_{t}\left(C_{T} ; T\right) = \mathbbm{1}_{\{\tau > t\}}\tilde{P}\left(t,T\right)\tilde{E}\left[C_{T}|\mathscr{G}_{t}\right]. \end{equation} The advantage of the change of measure approach is that the complex expectation appearing in the survival benefit given in \eqref{3.1ab} has been decomposed into two simpler expectations: the first one corresponds to the price of the SZCB given in \eqref{4.3} and the second one is connected to the expected value of the survival benefit $C_{T}$ under the new probability measure $\tilde{Q}$, which needs to be determined. In passing, one notes that in \eqref{4.2}, if $C_{T}=1$, we get a very interesting relationship \begin{equation}\label{4.4} SB_{t}\left(1 ; T\right) = \mathbbm{1}_{\{\tau > t\}}\tilde{P}\left(t,T\right). \end{equation} In particular, \begin{equation}\label{4.5} SB_{0}\left(1 ; T\right) = \mathbbm{1}_{\{\tau > 0\}}\tilde{P}\left(0,T\right). \end{equation} A similar change of measure has been employed by \cite{Liu1} and \cite{Liu2}, with the only difference that they use the unitary survival benefit given in \eqref{4.4} as the num\'{e}raire. On the contrary, \cite{Jalen} have used a twin change of measure to compute the value of a GAO.
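To make the measure change concrete, the following Python sketch (ours, purely illustrative and not taken from the cited references) estimates both sides of the decomposition \eqref{4.2} at $t=0$ by crude Monte Carlo. The correlated Vasicek-type dynamics for $r$ and $\mu$, all parameter values and the $\mathscr{G}_{T}$-measurable payoff $C_{T}$ are arbitrary toy choices; the only point is that re-weighting paths by the normalized Radon-Nikodym density \eqref{4.1} reproduces the product $\tilde{P}(0,T)\,\tilde{E}[C_{T}]$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps, T = 100_000, 200, 15.0
dt = T / n_steps

# Toy correlated Vasicek-type dynamics for r and mu (illustrative only).
kr, th_r, sg_r = 0.25, 0.04, 0.010
km, th_m, sg_m = 0.10, 0.02, 0.005
rho = 0.3

r  = np.full(n_paths, 0.03)
mu = np.full(n_paths, 0.01)
Y  = np.zeros(n_paths)                 # accumulates int_0^T (r_s + mu_s) ds
for _ in range(n_steps):
    z1 = rng.standard_normal(n_paths)
    z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n_paths)
    Y += (r + mu) * dt
    r  += kr * (th_r - r) * dt + sg_r * np.sqrt(dt) * z1
    mu += km * (th_m - mu) * dt + sg_m * np.sqrt(dt) * z2

D   = np.exp(-Y)          # pathwise discount factor e^{-int_0^T (r+mu) ds}
P0T = D.mean()            # SZCB price P~(0,T), cf. (4.3)
w   = D / D.sum()         # normalized Radon-Nikodym weights, cf. (4.1)

C   = np.maximum(0.05 - r, 0.0)   # placeholder G_T-measurable payoff C_T
lhs = (D * C).mean()              # E[ e^{-int (r+mu)} C_T ]
rhs = P0T * (w * C).sum()         # P~(0,T) * E~[ C_T ], cf. (4.2) at t = 0
print(P0T, lhs, rhs)
\end{verbatim}
Here the two estimators agree identically (up to floating point), since the normalized weights are built from the very discount factors that define $\tilde{P}(0,T)$; the practical gain of the measure change materialises when $\tilde{E}[C_{T}]$ can be computed analytically, as in the affine cases below.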
\subsection{Payoff} Under the new probability measure $\tilde{Q}$ defined in \eqref{4.1}, the GAO option price decomposes into the following product \begin{equation}\label{5.4} C(0, x, T ) = g \tilde{P}\left(0,T\right)\tilde{E}\left[\left(\ddot{a}_{x}\left(T\right)-\frac{1}{g}\right)^{+}\right] \end{equation} where $\tilde{P}\left(0,T\right)$ is defined in \eqref{4.3}. To develop ideas further, we express the payoff in a more appealing form as follows: \begin{equation}\label{5.5} C(0, x, T ) = g \tilde{P}\left(0,T\right){\displaystyle\tilde{E}}\left[\left(\sum_{i=1}^{n-1}S_{T}^{\left(i\right)}-\left(K-1\right)\right)^{+}\right] \end{equation} where we utilize the fact that $\tilde{P}\left(T,T\right)=1$ and define $n=w-\left(T+x\right)$ and \begin{equation}\label{5.6} S_{T}^{\left(i\right)}=\tilde{P}\left(T,T+i\right);\;i=1,2,...,n-1. \end{equation} The last term on the R.H.S. in the payoff of the GAO resembles the payoff of a basket option with unit weights and with the SZCBs maturing at times $T+1, T+2,...,w-x-1$ acting as the underlying assets. We seek to evaluate tight model-independent bounds for GAOs in the ensuing sections. To the best of our knowledge, the equations \eqref{5.3} and \eqref{5.4} have only been valued by Monte Carlo simulations for specific choices of models. In \cite{Liu1}, numerical experiments in the Gaussian setting have shown that \eqref{5.4} is slightly more precise and, in particular, less time consuming than the implementation of \eqref{5.3}. \cite{Deelestra} have investigated these calculations for different affine models such as the multi-CIR and the Wishart cases. \cite{Liu2} have computed very specific comonotonic bounds for GAOs in the Gaussian framework. \section{Affine Processes} Affine processes are essentially Markov processes with conditional characteristic function of the affine form. A thorough discussion of these processes on canonical state spaces appears in \cite{Duffie2} and \cite{Filipovic}. More recently, the development of multivariate stochastic volatility models has led to applications of affine processes on non-canonical state spaces, in particular on the cone of positive semi-definite matrices. A plethora of research papers is available and interested readers can refer to \cite{Cuchiero1} for details. A unified approach to affine processes is presented in \cite{Keller2} and, following this approach, we recall the details of affine processes in Appendix A. With regard to the evolution of interest rates and the force of mortality, we consider a set up similar to \cite{Deelestra}. Suppose we have a time-homogeneous affine Markov process $X$ taking values in a non-empty convex subset $E$ of $\mathbb{R}^{d}$, $\left(d\geq 1\right)$, equipped with the inner product $\langle\cdot,\cdot\rangle$. We then assume that the dynamics of the interest rate and force of mortality are given respectively as follows. \begin{equation}\label{51.11} r_{t}=\bar{r}+\langle R,X_{t}\rangle \end{equation} and \begin{equation}\label{51.12} \mu_{t}=\bar{\mu}+\langle M,X_{t}\rangle \end{equation} where $\bar{r}, \bar{\mu} \in \mathbb{R}$, $M, R \in \mathbb{R}^{d} \mbox{ or } M_{d}$, where $M_{d}$ is the set of real square matrices of order $d$. This means that the interest rate and mortality are linear projections of the common stochastic factor $X$ along constant directions given by the parameters $R$ and $M$ respectively.
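As a minimal numerical illustration (with made-up values, not calibrated to anything), the specification \eqref{51.11}-\eqref{51.12} reduces to a dot product when the common factor is vector-valued and, anticipating the Wishart case below, to a trace when it is matrix-valued.
\begin{verbatim}
import numpy as np

rbar, mubar = 0.01, 0.005

# Vector-valued factor: <.,.> is the ordinary dot product.
R, M = np.array([0.8, 0.2]), np.array([0.1, 0.5])
x = np.array([0.03, 0.02])
r_t, mu_t = rbar + R @ x, mubar + M @ x

# Matrix-valued factor: <.,.> becomes a trace, r_t = rbar + Tr[R X_t].
Rm = np.array([[0.8, 0.0], [0.0, 0.2]])
Xm = np.array([[0.03, 0.01], [0.01, 0.02]])
r_t_wishart = rbar + np.trace(Rm @ Xm)
print(r_t, mu_t, r_t_wishart)
\end{verbatim}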
We will be interested in the cases where $X$ is a classical affine process on the state space $\mathbb{R}^{m}_{+}\times \mathbb{R}^{n}$ or an affine Wishart process on the state space $S_{d}^{+}$, which is the set of $d\times d$ symmetric positive semi-definite matrices. The inner product possesses the flexibility to condense into a scalar product or a trace depending on the nature of $R$ and $M$, these being respectively vectors or matrices. In the former set up we consider the multi-dimensional CIR case (c.f. \cite{CIR}). In the case of the Vasicek model (c.f. \cite{Vasicek}), the affine set up is uni-dimensional. A very good reference showing that the stochastic processes underlying the Vasicek and CIR models fall under the affine set up is \cite{martink7}. In passing, it is important to note that the affine property of the underlying model is preserved as we move from the physical world to the risk neutral environment, although new affine dynamics emerge (c.f. \cite{Biffis2} and \cite{Duffie1}). In fact, more recently, \cite{Dhaene} examine the conditions under which it is or is not possible to translate the independence assumption from the physical world to the pricing world. We now state without proof the following proposition which presents the methodology to value SZCBs and in turn GAOs. A detailed proof appears in \cite{Gnoatto1} and the necessary notations are defined in Appendix A. \begin{prop}\label{411.7} Let $X$ be a conservative affine process on $S_{d}^{+}$ under the risk neutral measure $\mathbb{Q}$. Let the short rate and the intensity of mortality be given in accordance with \eqref{51.11} and \eqref{51.12}. Let $\tau=T-t$; then the price of a zero-coupon bond is given by { \allowdisplaybreaks \begin{eqnarray}\label{51.122} \tilde{P}\left(t, T\right) & = & \mathbb{E}\left[e^{-\int^{T}_{t}\left(\bar{r}+\bar{\mu}+\langle R+M,X_{u}\rangle\right) du}|\mathscr{F}_{t}\right]\nonumber\\ & = & e^{-\left(\bar{r}+\bar{\mu}\right)\tau}e^{-\tilde{\phi}\left(\tau,R+M\right)-\langle \tilde{\psi}\left(\tau,R+M\right),X_{t}\rangle}, \end{eqnarray} } where $\tilde{\phi}$ and $\tilde{\psi}$ satisfy the following Ordinary Differential Equations (ODEs), also known as Riccati ODEs.
\begin{equation}\label{51.13} \frac{\partial\tilde{\phi}}{\partial \tau}=\tilde{\Im}\left(\tilde{\psi}\left(\tau,R+M\right)\right),\;\;\tilde{\phi}\left(0,R+M\right)=0, \end{equation} \begin{equation}\label{51.14} \frac{\partial\tilde{\psi}}{\partial\tau}=\tilde{\Re}\left(\tilde{\psi}\left(\tau,R+M\right)\right),\;\;\tilde{\psi}\left(0,R+M\right)=0, \end{equation} with \begin{equation}\label{51.15} \tilde{\Im}\left(\tilde{\psi}\left(\tau,R+M\right)\right)=\langle b,\tilde{\psi}\left(\tau,R+M\right)\rangle-\int_{S_{d}^{+}\setminus \{0\}}\left(e^{-\langle \tilde{\psi}\left(\tau,R+M\right),\xi\rangle} -1\right) m\left(d\xi\right), \end{equation} and { \allowdisplaybreaks \begin{eqnarray}\label{51.16} \tilde{\Re}\left(\tilde{\psi}\left(\tau,R+M\right)\right) & = & -2\tilde{\psi}\left(\tau,R+M\right)\alpha \tilde{\psi}\left(\tau,R+M\right)+B^{T}\left(\tilde{\psi}\left(\tau,R+M\right)\right)\nonumber\\ & {} & {} -\int_{S_{d}^{+}\setminus \{0\}}\left(\frac{e^{-\langle \tilde{\psi}\left(\tau,R+M\right),\xi\rangle} -1+\langle \chi\left(\xi\right),\tilde{\psi}\left(\tau,R+M\right)\rangle}{\parallel \xi \parallel^{2} \wedge 1}\right)\mu\left(d\xi\right)+R+M.\nonumber\\ \end{eqnarray} } \end{prop} In fact, it is interesting to note that assuming this kind of affine structure means that our fictitious yield model is ``affine'' in the sense that there is, for each maturity $T$, an affine mapping $Z_{T}:\mathbb{R}^{n}\rightarrow \mathbb{R}$ such that, at any time $t$, the yield of any SZCB of maturity $T$ is $Z_{T}\left(X_{t}\right)$, echoing the results obtained in the seminal paper of \cite{Duffie0}. As a result we have, for $i=1,2,...,n-1$, \begin{equation}\label{51.123} S_{T}^{\left(i\right)} = e^{-\left(\bar{r}+\bar{\mu}\right)i}e^{-\tilde{\phi}\left(i,R+M\right)-\langle \tilde{\psi}\left(i,R+M\right),X_{T}\rangle}, \end{equation} where $\tilde{\phi}\left(i,R+M\right)$ and $\tilde{\psi}\left(i,R+M\right)$ satisfy the equations \eqref{51.13} and \eqref{51.14} with $\tau=i$. Alternatively, one may write \begin{equation}\label{51.123a} S_{T}^{\left(i\right)} = S_{0}^{\left(i\right)}e^{X_{T}^{\left(i\right)}};\;i=1,2,...,n-1, \end{equation} with \begin{equation}\label{51.123b} S_{0}^{\left(i\right)} = e^{-\left(\left(\bar{r}+\bar{\mu}\right)i+\tilde{\phi}\left(i,R+M\right)\right)} \end{equation} and \begin{equation}\label{51.123c} X_{T}^{\left(i\right)} = -\langle \tilde{\psi}\left(i,R+M\right),X_{T}\rangle. \end{equation} As a result, in the affine case, by using equation \eqref{51.123} in \eqref{5.5}, the formula for the GAO payoff can be written in a very compact form as shown below. \begin{equation}\label{51.124} C(0, x, T ) = g \tilde{P}\left(0,T\right){\displaystyle\tilde{E}}\left[\left(\sum_{i=1}^{n-1}e^{-\left(\bar{r}+\bar{\mu}\right)i}e^{-\tilde{\phi}\left(i,R+M\right)-\langle \tilde{\psi}\left(i,R+M\right),X_{T}\rangle}-\left(K-1\right)\right)^{+}\right], \end{equation} where $\tilde{P}\left(0,T\right)$ is given by equation \eqref{51.122} with $\tau=T$. Thus, in the affine case, our quest for bounds for the GAO becomes simplified as we are dealing only with $X_{T}$. The analytical tractability of affine processes is essentially linked to generalized Riccati equations as given above, which can in general be solved by numerical methods, although explicit solutions are available in the Vasicek (c.f. \cite{Vasicek}) and CIR (c.f. \cite{CIR}) models without jumps.
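Since explicit solutions are rare beyond these two models, we sketch the numerical route on the simplest instance: the following Python fragment (ours, with arbitrary toy parameters) integrates the one-dimensional CIR analogue of the Riccati system \eqref{51.13}-\eqref{51.14}, namely $\partial_{\tau}\tilde{\psi}=u-k\tilde{\psi}-\tfrac{\sigma^{2}}{2}\tilde{\psi}^{2}$ and $\partial_{\tau}\tilde{\phi}=k\theta\tilde{\psi}$ with $u=R+M$, and then evaluates the SZCB price \eqref{51.122}.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative one-dimensional CIR factor
#   dX = k (theta - X) dt + sigma sqrt(X) dW,
# with fictitious short rate r + mu = (rbar + mubar) + u X, u = R + M.
k, theta, sigma = 0.3, 0.04, 0.1
rbar_mubar, u, x0, tau = 0.015, 1.0, 0.03, 10.0

def riccati(t, y):
    phi, psi = y
    dphi = k * theta * psi                          # cf. (51.13)
    dpsi = u - k * psi - 0.5 * sigma**2 * psi**2    # cf. (51.14)
    return [dphi, dpsi]

sol = solve_ivp(riccati, (0.0, tau), [0.0, 0.0], rtol=1e-10, atol=1e-12)
phi, psi = sol.y[:, -1]
szcb = np.exp(-rbar_mubar * tau - phi - psi * x0)   # cf. (51.122)
print(phi, psi, szcb)
\end{verbatim}
Repeating the integration once per term period $i=1,2,\ldots,n-1$ delivers every pure endowment $S_{T}^{(i)}$ in \eqref{51.123} as an exponential-affine function of $X_{T}$.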
\section{Lower Bound for Guaranteed Annuity Options} We now proceed to work out appropriate lower bounds for the payoff of the GAO as given in \eqref{5.5}. Invoking Jensen's inequality, we have \begin{eqnarray}\label{6.1} \tilde{E}\ensuremath{\left[\left({\displaystyle \sum_{i=1}^{n-1}}S_{T}^{\left(i\right)}-\left(K-1\right)\right)^{+}\right]} & \geq & \tilde{E}\ensuremath{\left[\left({\displaystyle \sum_{i=1}^{n-1}}\tilde{E}\left(S_{T}^{\left(i\right)}|\Lambda\right)-\left(K-1\right)\right)^{+}\right]}. \end{eqnarray} The general derivation concerning lower bounds for the stop-loss premium of a sum of random variables based on Jensen's inequality can be found in \cite{Simon} and, for its application to Asian basket options, one can refer to \cite{Deelstra2}. Define \begin{equation}\label{6.1a} S=\displaystyle \sum_{i=1}^{n-1}S_{T}^{\left(i\right)} \end{equation} and \begin{equation}\label{6.1b} S^{l}=\displaystyle \sum_{i=1}^{n-1}\tilde{E}\left(S_{T}^{\left(i\right)}|\Lambda\right). \end{equation} Thus, we have obtained \begin{equation}\label{6.1c} S \geq_{cx} S^{l}. \end{equation} Now, suitably tailoring the inequality \eqref{6.1}, we obtain \begin{equation}\label{6.2} C(0, x, T ) \geq g \tilde{P}\left(0,T\right)\tilde{E}\ensuremath{\left[\left({\displaystyle \sum_{i=1}^{n-1}}\tilde{E}\left(S_{T}^{\left(i\right)}|\Lambda\right)-\left(K-1\right)\right)^{+}\right]}. \end{equation} \subsection{A Lower Bound} In case the random variable $\Lambda$ is independent of the prices of pure endowments having term periods $1,2,...,n-1$ at the time $T$, i.e., of $S_{T}^{\left(i\right)};\;i=1,2,...,n-1$, respectively, the bound in \eqref{6.2} simply reduces to: \begin{equation}\label{6.9} C(0, x, T )\geq g \tilde{P}\left(0,T\right)\tilde{E}\ensuremath{\left[\left({\displaystyle \sum_{i=1}^{n-1}}\tilde{E}\left(S_{T}^{\left(i\right)}\right)-\left(K-1\right)\right)^{+}\right]}, \end{equation} or, since the outer expectation of a constant is redundant, we obtain a very simple bound for the GAO expressed in terms of the expectations of the $S_{T}^{\left(i\right)}$, i.e., \begin{equation}\label{6.10} C(0, x, T )\geq g \tilde{P}\left(0,T\right)\ensuremath{\left({\displaystyle \sum_{i=1}^{n-1}}\tilde{E}\left(S_{T}^{\left(i\right)}\right)-\left(K-1\right)\right)^{+}}=:\mbox{ GAOLB}. \end{equation} \subsubsection{The Lower Bound under the Affine Set Up} Under the affine set up of Section 4 (c.f. equation \eqref{51.123}), the lower bound given in equation \eqref{6.10} reduces to \begin{equation}\label{6.10a} \mbox{ GAOLB}^{\emph{aff}}=g \tilde{P}\left(0,T\right)\ensuremath{\left({\displaystyle \sum_{i=1}^{n-1}}\left(e^{-\left(\left(\bar{r}+\bar{\mu}\right)i+\tilde{\phi}\left(i,R+M\right)\right)} \mathscr{L}\left(\tilde{\psi}\left(i,R+M\right)\right)\right)-\left(K-1\right)\right)^{+}} \end{equation} where $\mathscr{L}$ denotes the Laplace transform of $X_{T}$ with parameter $\tilde{\psi}\left(i,R+M\right)$ under the transformed measure $\tilde{Q}$. This means that if one can lay hands on the distribution of $X_{T}$, this bound has a very compact form. \section{Upper Bound for Guaranteed Annuity Options} In order to obtain an upper bound for GAOs which is directly applicable to the affine set up, we make use of the arithmetic-geometric mean inequality, in a manner similar to \cite{Grasselli5}, who used this methodology to arrive at an upper bound for basket options. Let us first define the arithmetic and geometric means of the $\left(n-1\right)$ pure endowments appearing in the payoff of the GAO (c.f.
\eqref{5.5}) respectively as \begin{equation}\label{20.18} A_{T}^{\left(n-1\right)}=\frac{1}{n-1} {\displaystyle \sum_{i=1}^{n-1}} S_{T}^{\left(i\right)} \end{equation} and \begin{equation}\label{20.19} G_{T}^{\left(n-1\right)}= \left({\displaystyle \prod_{i=1}^{n-1}} S_{T}^{\left(i\right)}\right)^{\frac{1}{n-1}}, \end{equation} where $S_{T}^{\left(i\right)};\;i=1,2,...,n-1$ are defined in equation \eqref{5.6}. It is well known that \begin{equation}\label{20.19a} A_{T}^{\left(n-1\right)} \geq G_{T}^{\left(n-1\right)}\;\;a.s. \end{equation} Further, let us define the log-geometric average as \begin{equation}\label{20.20} Y_{T}^{\left(n-1\right)}=\frac{1}{n-1} {\displaystyle \sum_{i=1}^{n-1}} \ln S_{T}^{\left(i\right)}. \end{equation} Next, we define, as in equation \eqref{51.123a}, \begin{equation}\label{20.20a} X_{T}^{\left(i\right)}=\ln \left(\frac{S_{T}^{\left(i\right)}}{S_{0}^{\left(i\right)}}\right);\;i=1,2,...,n-1. \end{equation} Further, we assume that the joint characteristic function of $\left(X_{T}^{\left(1\right)},...,X_{T}^{\left(n-1\right)}\right)$ can be obtained under the transformed measure $\tilde{Q}$, where we define \begin{equation}\label{20.20b} \phi_{T}\left(\boldsymbol{\gamma}\right)=\tilde{E}\left[e^{i\sum_{k=1}^{n-1}\gamma_{k}X_{T}^{\left(k\right)}}\right] \end{equation} with $\boldsymbol{\gamma}=\left[\gamma_{1}, \gamma_{2},...,\gamma_{n-1}\right]$. As the next step, we obtain the relationship between the log-geometric average and the $X_{T}^{\left(i\right)}$'s as follows \begin{eqnarray}\label{20.20c} Y_{T}^{\left(n-1\right)} & = & \frac{1}{n-1} {\displaystyle \sum_{i=1}^{n-1}} \ln \left(\frac{S_{T}^{\left(i\right)}}{S_{0}^{\left(i\right)}}S_{0}^{\left(i\right)}\right)\nonumber\\ & = & \frac{1}{n-1} {\displaystyle \sum_{i=1}^{n-1}} X_{T}^{\left(i\right)}+ Y_{0}^{\left(n-1\right)}. \end{eqnarray} Next, we express the characteristic function of the log-geometric average under the transformed measure $\tilde{Q}$ in terms of the joint characteristic function of the $X_{T}^{\left(i\right)}$'s, viz. $\phi_{T}\left(\boldsymbol{\gamma}\right)$ defined in equation \eqref{20.20b}. Let $\phi_{Y_{T}}\left(\gamma_{0}\right)$ denote the characteristic function of the log-geometric average $Y_{T}^{\left(n-1\right)}$ with parameter $\gamma_{0}$. Then we have \begin{eqnarray}\label{20.20d} \phi_{Y_{T}}\left(\gamma_{0}\right) & = & \tilde{E}\left[e^{i\gamma_{0}Y_{T}^{\left(n-1\right)}}\right]\nonumber\\ & = & \tilde{E}\left[e^{i\gamma_{0}Y_{0}^{\left(n-1\right)}+i\sum_{k=1}^{n-1}\left(\frac{\gamma_{0}}{n-1}\right)X_{T}^{\left(k\right)}}\right]\nonumber\\ & = & e^{i\gamma_{0}Y_{0}^{\left(n-1\right)}}\phi_{T}\left(\frac{\gamma_{0}}{n-1}\mathbf{1}\right) \end{eqnarray} where $\mathbf{1}=\left(1,1,...,1\right)$ is a $1\times \left(n-1\right)$ vector of 1's, so that $\frac{\gamma_{0}}{n-1}\mathbf{1}$ is a $1\times \left(n-1\right)$ vector with components $\frac{\gamma_{0}}{n-1}$, and $\phi_{T}\left(\boldsymbol{\gamma}\right)$ is defined in \eqref{20.20b}. In light of equation \eqref{20.18}, we can express the GAO payoff formula given in equation \eqref{5.5} as \begin{equation}\label{20.21} C(0, x, T ) = g \left(n-1\right) \tilde{P}\left(0,T\right)\tilde{E}\left[\left(A_{T}^{\left(n-1\right)}-K'\right)^{+}\right], \end{equation} where \begin{equation}\label{20.22} K'=\frac{K-1}{n-1}. \end{equation} Adding and subtracting $G_{T}^{\left(n-1\right)}$ within the $\max$ function on the R.H.S.
of equation \eqref{20.21}, and exploiting equation \eqref{20.19a}, we obtain an upper bound for the GAO as \begin{eqnarray}\label{20.27} C(0, x, T ) & \leq & g \left(n-1\right) \tilde{P}\left(0,T\right)\left(\tilde{E}\left[\left(G_{T}^{\left(n-1\right)}-K^{'}\right)^{+}\right]+\tilde{E}\left[A_{T}^{\left(n-1\right)}\right]-\tilde{E}\left[G_{T}^{\left(n-1\right)}\right]\right) \nonumber\\ & =: & \mbox{GAOUB}. \end{eqnarray} We make use of Fourier inversion to compute the call-type expectation involved in the upper bound and we state the result in the following proposition. \begin{prop}\label{203} Given the geometric mean of the $n-1$ pure endowments defined in equation \eqref{20.19} and $K^{'}>0$, \begin{equation}\label{20.28} \tilde{E}\left[\left(G_{T}^{\left(n-1\right)}-K^{'}\right)^{+}\right]=\frac{e^{-\delta \ln K'}}{\pi}\int_{0}^{\infty}e^{-i\eta \ln K^{'}}\Psi_{T}^{G}\left(\eta;\delta\right)d\eta \end{equation} where $\Psi_{T}^{G}\left(\eta;\delta\right)$ denotes the Fourier transform of $\tilde{E}\left[\left(G_{T}^{\left(n-1\right)}-K^{'}\right)^{+}\right]$ with respect to $\ln K^{'}$ along with the damping factor $e^{\delta \ln K^{'}}$, such that \begin{equation}\label{20.28a} \Psi_{T}^{G}\left(\eta;\delta\right)=e^{i\left(\eta-i\left(\delta+1\right)\right)Y_{0}^{\left(n-1\right)}}\frac{\phi_{T}\left(\frac{\eta-i\left(\delta+1\right)}{n-1}\mathbf{1}\right)}{\delta^2+\delta-\eta^{2}+i\eta\left(2\delta+1\right)}, \end{equation} where the parameter $\delta$ tunes the damping factor (c.f. \cite{Carr} and \cite{Grasselli5}) and $\phi_{T}\left(.\right)$ is defined in equation \eqref{20.20b}. \end{prop} \begin{proof} Let $f_{Y_{T}}\left(y\right)$ denote the probability density function (p.d.f.) of the log-geometric average $Y_{T}^{\left(n-1\right)}$. We introduce the damping factor in accordance with \cite{Carr}. Then, by definition, the Fourier transform of $\tilde{E}\left[\left(G_{T}^{\left(n-1\right)}-K^{'}\right)^{+}\right]$ with respect to $\ln K^{'}$ along with the damping factor $e^{\delta \ln K^{'}}$ is given as \begin{eqnarray}\label{20.29} \Psi_{T}^{G}\left(\eta;\delta\right) & = & \int_{\mathbb{R}}e^{i\eta\ln K^{'}+\delta \ln K^{'}}\tilde{E}\left[\left(e^{Y_{T}^{\left(n-1\right)}}-K^{'}\right)^{+}\right]d\ln K^{'} \nonumber\\ & = & \int_{\mathbb{R}}e^{i\eta\ln K^{'}+\delta \ln K^{'}}\int_{\ln K^{'}}^{\infty}\left(e^{y}-K^{'}\right)f_{Y_{T}}\left(y\right)\;dy\;d\ln K^{'}\nonumber\\ & = & \int_{\mathbb{R}}e^{i\eta\ln K^{'}+\delta \ln K^{'}}\int_{\ln K^{'}}^{\infty}e^{y}f_{Y_{T}}\left(y\right)\;dy\;d\ln K^{'}\nonumber\\ & {} {} & {} -\int_{\mathbb{R}}e^{i\eta\ln K^{'}+\delta \ln K^{'}}\int_{\ln K^{'}}^{\infty}K^{'}f_{Y_{T}}\left(y\right)\;dy\;d\ln K^{'}\nonumber\\ & = & \Psi_{T}^{G_{1}}\left(\eta;\delta\right)-\Psi_{T}^{G_{2}}\left(\eta;\delta\right). \end{eqnarray} We evaluate both integrals by adopting a change of order of integration, as detailed below \begin{eqnarray}\label{20.30} \Psi_{T}^{G_{1}}\left(\eta;\delta\right) & = & \int_{\mathbb{R}}e^{y}\left(\int_{-\infty}^{y}e^{i\eta\ln K^{'}+\delta \ln K^{'}}d\ln K^{'}\right)f_{Y_{T}}\left(y\right)dy \nonumber\\ & = & \frac{1}{i\eta+\delta}\int_{\mathbb{R}}e^{i\left(\eta-i\left(\delta+1\right)\right)y}f_{Y_{T}}\left(y\right)dy \nonumber\\ & = & \frac{\phi_{Y_{T}}\left(\eta-i\left(\delta+1\right)\right)}{i\eta+\delta}\nonumber\\ & = & e^{i\left(\eta-i\left(\delta+1\right)\right)Y_{0}^{\left(n-1\right)}}\frac{\phi_{T}\left(\frac{\eta-i\left(\delta+1\right)}{n-1}\mathbf{1}\right)}{i\eta+\delta}.
\end{eqnarray} where the last couple of statements follow from the definition of the characteristic function of $Y_{T}^{\left(n-1\right)}$ given in \eqref{20.20d} and its link to the joint characteristic function of $\left(X_{T}^{\left(1\right)},...,X_{T}^{\left(n-1\right)}\right)$ defined in \eqref{20.20b}. Along the same lines, we have \begin{equation}\label{20.31} \Psi_{T}^{G_{2}}\left(\eta;\delta\right) = e^{i\left(\eta-i\left(\delta+1\right)\right)Y_{0}^{\left(n-1\right)}}\frac{\phi_{T}\left(\frac{\eta-i\left(\delta+1\right)}{n-1}\mathbf{1}\right)}{i\eta+\left(\delta+1\right)}. \end{equation} Substituting $\Psi_{T}^{G_{1}}\left(\eta;\delta\right)$ and $\Psi_{T}^{G_{2}}\left(\eta;\delta\right)$ in equation \eqref{20.29}, remembering the damping factor, we get the requisite result given in equation \eqref{20.28}. \end{proof} In a similar manner we obtain \begin{equation}\label{20.32a} \tilde{E}\left[G_{T}^{\left(n-1\right)}\right]=e^{Y_{0}^{\left(n-1\right)}}\phi_{T}\left(\frac{-i}{n-1}\mathbf{1}\right). \end{equation} We then plug the formulae \eqref{20.28} and \eqref{20.32a} into equation \eqref{20.27} to obtain the upper bound $\mbox{GAOUB}$. \subsection{The Upper Bound under the Affine Set Up} Consider the affine set up of Section 4 (c.f. equations \eqref{51.123}-\eqref{51.123c}). Let $\phi_{X_{T}}$ denote the characteristic function of $X_{T}$ with parameter $\Lambda$ under the transformed measure $\tilde{Q}$ so that \begin{equation}\label{20.33a} \phi_{X_{T}}\left(\Lambda\right)=\tilde{E}\left[e^{i\langle \Lambda,\;X_{T} \rangle}\right]. \end{equation} Now, using equation \eqref{51.123c}, we see that the joint characteristic function of $\left(X_{T}^{\left(1\right)},...,X_{T}^{\left(n-1\right)}\right)$ under the transformed measure $\tilde{Q}$, given in equation \eqref{20.20b}, becomes \begin{equation}\label{20.33} \phi_{T}^{\emph{aff}}\left(\boldsymbol{\gamma}\right)= \phi_{X_{T}}\left(-\sum_{k=1}^{n-1}\gamma_{k}\tilde{\psi}\left(k,R+M\right)\right), \end{equation} where $\left(-\sum_{k=1}^{n-1}\gamma_{k}\tilde{\psi}\left(k,R+M\right)\right)$ is the parameter of the characteristic function, with $\tilde{\psi}\left(k,R+M\right)$ satisfying equation \eqref{51.14} with $\tau=k$. As a result, $\Psi_{T}^{G}\left(\eta;\delta\right)$ given in equation \eqref{20.28a} can be written in a more compact way as \begin{equation}\label{20.34} \Psi_{T}^{G^{\emph{aff}}}\left(\eta;\delta\right)=e^{i\left(\eta-i\left(\delta+1\right)\right)Y_{0}^{\left(n-1\right)}}\frac{\phi_{X_{T}}\left(-\frac{\left(\eta-i\left(\delta+1\right)\right)}{n-1}\sum_{k=1}^{n-1}\tilde{\psi}\left(k,R+M\right)\right)}{\delta^2+\delta-\eta^{2}+i\eta\left(2\delta+1\right)}. \end{equation} Similarly, we have from equation \eqref{20.32a}, \begin{equation}\label{20.35} \tilde{E}^{\emph{aff}}\left[G_{T}^{\left(n-1\right)}\right]=e^{Y_{0}^{\left(n-1\right)}}\phi_{X_{T}}\left(\frac{i}{n-1}\sum_{k=1}^{n-1}\tilde{\psi}\left(k,R+M\right)\right).
\end{equation} Moreover, using the definition of the arithmetic average given in equation \eqref{20.18} and utilizing \eqref{51.123}, we see that \begin{equation}\label{20.36} \tilde{E}^{\emph{aff}}\left[A_{T}^{\left(n-1\right)}\right]=\frac{1}{n-1}\sum_{k=1}^{n-1}\left(e^{-\left(\left(\bar{r}+\bar{\mu}\right)k+\tilde{\phi}\left(k,R+M\right)\right)} \mathscr{L}\left(\tilde{\psi}\left(k,R+M\right)\right)\right), \end{equation} where, as defined in Section 5.1.1, $\mathscr{L}$ denotes the Laplace transform of $X_{T}$ with parameter $\tilde{\psi}\left(k,R+M\right)$ under the transformed measure $\tilde{Q}$. Finally, we substitute equation \eqref{20.34} in the expression \eqref{20.28} and then the result and the equations \eqref{20.35}-\eqref{20.36} into \eqref{20.27} to obtain \begin{eqnarray}\label{20.37} \mbox{ GAOUB}^{\emph{aff}}& = & g \left(n-1\right) \tilde{P}\left(0,T\right)\Bigg(\frac{1}{n-1}\sum_{k=1}^{n-1}\left(e^{-\left(\left(\bar{r}+\bar{\mu}\right)k+\tilde{\phi}\left(k,R+M\right)\right)} \mathscr{L}\left(\tilde{\psi}\left(k,R+M\right)\right)\right)\nonumber\\ & {} & -e^{Y_{0}^{\left(n-1\right)}}\phi_{X_{T}}\left(\frac{i}{n-1}\sum_{k=1}^{n-1}\tilde{\psi}\left(k,R+M\right)\right)\nonumber\\ & & +\frac{e^{-\delta \ln K'}}{\pi}\int_{0}^{\infty}\frac{e^{-i\left(\eta \ln K^{'}-\left(\eta-i\left(\delta+1\right)\right)Y_{0}^{\left(n-1\right)}\right)}}{\delta^2+\delta-\eta^{2}+i\eta\left(2\delta+1\right)}\phi_{X_{T}}\left(-\frac{\left(\eta-i\left(\delta+1\right)\right)}{n-1}\sum_{k=1}^{n-1}\tilde{\psi}\left(k,R+M\right)\right)d\eta\Bigg),\nonumber\\ \end{eqnarray} where $\phi_{X_{T}}\left(.\right)$ is defined in equation \eqref{20.33a} and $\mathscr{L}$ denotes the Laplace transform of $X_{T}$ under the transformed measure $\tilde{Q}$. \section{Examples} We now derive lower and upper bounds by choosing specific models for the interest rate and the force of mortality. \subsection{The Multi-CIR Model} We now consider a $p$-dimensional affine process $X:=\left(X_{t}\right)_{t\geq 0}$ having independent components $\left(X_{it}\right)_{t\geq 0}$ that evolve according to the following CIR risk-neutral dynamics: \begin{equation}\label{50.1} dX_{it}=k_{i}\left(\theta_{i}-X_{it}\right)dt+\sigma_{i}\sqrt{X_{it}}dW^{\mathbb{Q}}_{it}, \;i=1,...,p. \end{equation} One can refer to \cite{Deelestra} to see that this model fits into the general affine framework. \subsubsection{Survival Zero Coupon Bond Pricing} Adhering to the notations of the affine set-up defined in Section 4, in the context of mortality and interest rate, let $M, R \in \mathbb{R}^{p}$ with respective components $M_{i}, R_{i};\;i=1,2,...,p$. The price of a zero-coupon bond under the multi-CIR model \eqref{50.1} is given by { \allowdisplaybreaks \begin{eqnarray}\label{50.2} \tilde{P}\left(t, T\right) & = & \mathbb{E}\left[e^{-\int^{T}_{t}\left(\bar{r}+ \bar{\mu}+\langle R+M,X_{s}\rangle\right) ds}|\mathscr{F}_{t}\right]\nonumber\\ & = & e^{-\left(\bar{r}+\bar{\mu}\right)\left(T-t\right)}{\displaystyle \prod_{i=1}^{p}} \mathbb{E}\left[e^{-\int^{T}_{t}\left(R_{i}+M_{i}\right)X_{is} ds}|\mathscr{F}_{t}\right]\nonumber\\ & = & e^{-\left(\bar{r}+\bar{\mu}\right)\left(T-t\right)}{\displaystyle \prod_{i=1}^{p}} e^{-\tilde{\phi}_{i}\left(T-t,R_{i}+M_{i}\right)-\tilde{\psi}_{i}\left(T-t,R_{i}+M_{i}\right)X_{it}} \end{eqnarray} } where $\tilde{\phi}_{i}$ and $\tilde{\psi}_{i}$ satisfy the following Riccati equations for every $i=1,2,...,p$ (c.f.
\cite{Duffie1}): \begin{equation}\label{50.3} \begin{cases} \frac{\partial\tilde{\psi}_{i}\left(\tau,u_{i}\right)}{\partial \tau} = u_{i}-k_{i}\tilde{\psi}_{i}\left(\tau,u_{i}\right)-\frac{\sigma^{2}_{i}}{2}\tilde{\psi}_{i}\left(\tau,u_{i}\right)^{2},\\ \frac{\partial\tilde{\phi}_{i}\left(\tau,u_{i}\right)}{\partial \tau} = k_{i}\theta_{i}\tilde{\psi}_{i}\left(\tau,u_{i}\right), \end{cases} \end{equation} with $\tau=T-t$, $u_{i}=R_{i}+M_{i}$ and initial conditions $\tilde{\psi}_{i}\left(0,u_{i}\right)=0$ and $\tilde{\phi}_{i}\left(0,u_{i}\right)=0$. The solutions of this system with $i=1,2,...,p$ are \begin{eqnarray}\label{50.4} \tilde{\psi}_{i}\left(\tau,u_{i}\right) & = & \frac{2u_{i}}{\eta\left(u_{i}\right)+ k_{i}}-\frac{4u_{i}\,\eta\left(u_{i}\right)}{\eta\left(u_{i}\right)+ k_{i}}\nonumber\\ & {} & \times \frac{1}{\left(\eta\left(u_{i}\right)+ k_{i}\right)\exp\left[\eta\left(u_{i}\right)\tau\right]+\eta\left(u_{i}\right)-k_{i}} \end{eqnarray} \begin{eqnarray}\label{50.5} \tilde{\phi}_{i}\left(\tau,u_{i}\right) & = & -\frac{k_{i}\theta_{i}}{\sigma^{2}_{i}}\left[\eta\left(u_{i}\right)+k_{i}\right]\tau\nonumber\\ & {} & +\frac{2k_{i}\theta_{i}}{\sigma^{2}_{i}}\log\left[\left(\eta\left(u_{i}\right)+ k_{i}\right)\exp\left[\eta\left(u_{i}\right)\tau\right]+\eta\left(u_{i}\right)-k_{i}\right]\nonumber\\ & {} & -\frac{2k_{i}\theta_{i}}{\sigma^{2}_{i}}\log\left(2\eta\left(u_{i}\right)\right) \end{eqnarray} where $\eta\left(u_{i}\right)=\sqrt{k^{2}_{i}+2u_{i}\sigma^{2}_{i}}$. \subsubsection{Price of the GAO} We use equations \eqref{51.124} and \eqref{50.2} to obtain the price of the GAO as \begin{equation}\label{50.6} C(0, x, T ) = g \tilde{P}\left(0,T\right){\displaystyle\tilde{E}}\left[\left(\sum_{i=1}^{n-1}e^{-\left(\bar{r}+\bar{\mu}\right)i}{\displaystyle \prod_{j=1}^{p}} e^{-\tilde{\phi}_{j}\left(i,R_{j}+M_{j}\right)-\tilde{\psi}_{j}\left(i,R_{j}+M_{j}\right)X_{jT}}-\left(K-1\right)\right)^{+}\right] \end{equation} where $\tilde{P}\left(0,T\right)$ is given by equation \eqref{50.2} with $\tau=T$, while $\tilde{\psi}_{j}\left(i,R_{j}+M_{j}\right)$ and $\tilde{\phi}_{j}\left(i,R_{j}+M_{j}\right)$ are given by equations \eqref{50.4} and \eqref{50.5}. \subsubsection{Distribution of $X_{T}$} In order to obtain explicit bounds for the GAO in the multidimensional CIR case, we need to obtain the distribution of $X_{jT}$ under the transformed measure $\tilde{Q}$. We state this in the following proposition (c.f. \cite{CIR} and \cite{Deelestra} for details). \begin{prop}\label{4111.2} The dynamics of the CIR process $X_{jt}$ defined in equation \eqref{50.1} under the transformed measure $\tilde{Q}$ are given by \begin{equation}\label{50.7} dX_{jt}=k_{j}^{'}\left(\theta_{j}^{'}-X_{jt}\right)dt+\sigma_{j}\sqrt{X_{jt}}dW^{'}_{jt}, \;j=1,...,p. \end{equation} where \begin{equation}\label{50.8} k_{j}^{'}=k_{j}+\sigma_{j}^{2}\tilde{\psi}_{j}\left(0,R_{j}+M_{j}\right), \end{equation} \begin{equation}\label{50.9} \theta_{j}^{'}=\frac{k_{j}\theta_{j}}{k_{j}+\sigma_{j}^{2}\tilde{\psi}_{j}\left(0,R_{j}+M_{j}\right)} \end{equation} and $X_{j0};\;j=1,2,...,p$ are the initial values of the process. Then the density function of $X_{jT}$ is given by \begin{equation}\label{50.10} f_{X_{jT}}\left(x\right)=f_{\frac{\chi^{2}\left(\nu_{jT},\lambda_{jT}\right)}{c_{jT}}}\left(x\right)=c_{jT}f_{\chi^{2}\left(\nu_{jT},\;\lambda_{jT}\right)}\left(c_{jT}x\right) \end{equation} where $f_{\chi^{2}\left(\nu_{jT},\;\lambda_{jT}\right)};\;j=1,2,...,p$ is the p.d.f.
of the non-central $\chi^{2}$ distribution with degrees of freedom $\nu_{jT}$ and non-centrality parameter $\lambda_{jT}$ such that \begin{equation}\label{50.11} c_{jT}=\frac{4k_{j}^{'}}{\sigma_{j}^{2}\left(1-e^{-k_{j}^{'}T}\right)}, \end{equation} \begin{equation}\label{50.12} \nu_{jT}=\frac{4k_{j}^{'}\theta_{j}^{'}}{\sigma_{j}^{2}} \end{equation} and \begin{equation}\label{50.13} \lambda_{jT}=c_{jT}X_{j0}e^{-k_{j}^{'}T}. \end{equation} \end{prop} The moment generating function (m.g.f.) of $X_{jT}$ has a very interesting exposition as detailed below (c.f. \cite{Daniel} for details). \begin{equation}\label{50.13a} \mathscr{M}_{X_{jT}}\left(s_{j}\right)=\left(\beta\left(s_{j}\right)\right)^{\bar{\nu_{j}}}e^{\lambda^{'}_{jT}\left(\beta\left(s_{j}\right)-1\right)} \end{equation} where \begin{equation}\label{50.13b} \beta\left(s_{j}\right)=\left(1-s_{j}\mu_{jT}\right)^{-1}, \end{equation} with \begin{equation}\label{50.13c} \mu_{jT}=\frac{2}{c_{jT}}, \end{equation} \begin{equation}\label{50.13d} \bar{\nu_{j}}=\frac{\nu_{jT}}{2} \end{equation} and \begin{equation}\label{50.13e} \lambda^{'}_{jT}=\frac{\lambda_{jT}}{2}. \end{equation} \subsubsection{The Lower Bound $\mbox{GAOLB}^{\left(MCIR\right)}$} The lower bound $\mbox{GAOLB}$ obtained in equation \eqref{6.10} condenses into a very compact formula for the multi-CIR case, in a manner similar to the formula \eqref{6.10a} under the affine set up. Before unravelling the same, we see that, in light of the notations defined in Section 4, one can write \begin{equation}\label{50.5a} S_{T}^{\left(i\right)}=S_{0}^{\left(i\right)}e^{X_{T}^{\left(i\right)}};\;i=1,2,...,n-1, \end{equation} where \begin{equation}\label{50.5b} S_{0}^{\left(i\right)}=e^{-\left(\left(\bar{r}+\bar{\mu}\right)i+\sum_{j=1}^{p}\tilde{\phi}_{j}\left(i,R_{j}+M_{j}\right)\right)} \end{equation} and \begin{equation}\label{50.5c} X_{T}^{\left(i\right)}=-{\displaystyle \sum_{j=1}^{p}}\tilde{\psi}_{j}\left(i,R_{j}+M_{j}\right)X_{jT}, \end{equation} where $\tilde{\phi}_{j}\left(i,R_{j}+M_{j}\right)$ and $\tilde{\psi}_{j}\left(i,R_{j}+M_{j}\right)$ for $i=1,2,...,n-1$ and $j=1,2,...,p$ are given in equations \eqref{50.4}-\eqref{50.5} with $\tau$ replaced by $i$. Further, $X_{jT};\;j=1,2,...,p$ are independent random variables and their m.g.f.s are given in equation \eqref{50.13a}. This leads us to the formulation of the lower bound for the multi-CIR case presented in the form of the following proposition: \begin{prop} The lower bound under the multi-CIR case is \begin{eqnarray}\label{50.14} \mbox{ GAOLB}^{MCIR} & = & g \tilde{P}\left(0,T\right)\Bigg({\displaystyle \sum_{i=1}^{n-1}}\Bigg(e^{-\left(\left(\bar{r}+\bar{\mu}\right)i+\sum_{j=1}^{p}\tilde{\phi}_{j}\left(i,R_{j}+M_{j}\right)\right)+\sum_{j=1}^{p}\lambda^{'}_{jT}\left(\beta\left(-\tilde{\psi}_{j}\left(i,R_{j}+M_{j}\right)\right)-1\right)}\nonumber\\ & {} {} {} {} & {} {} \;\;\;\; \;\;\;\;\;\;\;\;\;\;\left(\prod_{j=1}^{p} \left(\beta\left(-\tilde{\psi}_{j}\left(i,R_{j}+M_{j}\right)\right)\right)^{\bar{\nu_{j}}}\right)\Bigg)-\left(K-1\right)\Bigg)^{+} \end{eqnarray} where $\beta\left(.\right)$ is defined in \eqref{50.13b}, $\bar{\nu_{j}}$ and $\lambda^{'}_{jT}$ are given in equations \eqref{50.13d}-\eqref{50.13e} and $\tilde{\psi}_{j}\left(i,R_{j}+M_{j}\right)$ for $i=1,2,...,n-1;j=1,2,...,p$ are given in \eqref{50.4}.
\end{prop} \begin{proof} Using the formula for the lower bound given in equation \eqref{6.10}, \begin{equation}\label{50.15} \mbox{ GAOLB} = g \tilde{P}\left(0,T\right)\ensuremath{\left({\displaystyle \sum_{i=1}^{n-1}}\tilde{E}\left(S_{T}^{\left(i\right)}\right)-\left(K-1\right)\right)^{+}}. \end{equation} Using the formula for $S_{T}^{\left(i\right)}$ given in equations \eqref{50.5a}-\eqref{50.5c}, we have \begin{equation}\label{50.16} \mbox{ GAOLB}^{MCIR}=g \tilde{P}\left(0,T\right)\ensuremath{\left({\displaystyle \sum_{i=1}^{n-1}}\left(e^{-\left(\left(\bar{r}+\bar{\mu}\right)i+\sum_{j=1}^{p}\tilde{\phi}_{j}\left(i,R_{j}+M_{j}\right)\right)}\prod_{j=1}^{p} \mathscr{M}_{X_{jT}}\left(-\tilde{\psi}_{j}\left(i,R_{j}+M_{j}\right)\right)\right)-\left(K-1\right)\right)^{+}}. \end{equation} Using the definition of the m.g.f. of $X_{jT};\;j=1,2,...,p$ given in equation \eqref{50.13a}, we obtain the requisite result. \end{proof} \subsubsection{The Upper Bound $\mbox{GAOUB}^{\left(MCIR\right)}$} Under the formulation of the pure endowments constituting the GAO basket in the MCIR case (\eqref{50.5a}-\eqref{50.5c}), we write \begin{equation}\label{50.17} Y_{0}^{\left(n-1\right)}=-\frac{\left(\bar{r}+\bar{\mu}\right)n}{2}-\frac{1}{n-1} {\displaystyle \sum_{k=1}^{n-1}} \sum_{j=1}^{p}\tilde{\phi}_{j}\left(k,R_{j}+M_{j}\right) \end{equation} using the definition of the log-geometric average $Y_{T}^{\left(n-1\right)}$ given in equation \eqref{20.20} in Section 6. We now exploit the set up of the upper bound under the affine case given in Section 6.1 and note that here, instead of $\tilde{\phi}\left(k,R+M\right)$ and $\tilde{\psi}\left(k,R+M\right)$, we have respectively $\sum_{j=1}^{p}\tilde{\phi}_{j}\left(k,R_{j}+M_{j}\right)$ and $\sum_{j=1}^{p}\tilde{\psi}_{j}\left(k,R_{j}+M_{j}\right)$, since we are dealing with a $p$-dimensional CIR process. Thus the joint characteristic function of $\left(X_{T}^{\left(1\right)},...,X_{T}^{\left(n-1\right)}\right)$ under the transformed measure $\tilde{Q}$, given in equation \eqref{20.20b}, becomes \begin{equation}\label{50.18} \phi_{T}^{\emph{MCIR}}\left(\boldsymbol{\gamma}\right)= \prod_{j=1}^{p}\phi_{X_{jT}}\left(-\sum_{k=1}^{n-1}\gamma_{k}\tilde{\psi}_{j}\left(k,R_{j}+M_{j}\right)\right), \end{equation} where $\boldsymbol{\gamma}=\left[\gamma_{1}, \gamma_{2},...,\gamma_{n-1}\right]$ and $\phi_{X_{jT}};\;j=1,2,...,p$ denotes the characteristic function of $X_{jT}$ with parameter $\left(-\sum_{k=1}^{n-1}\gamma_{k}\tilde{\psi}_{j}\left(k,R_{j}+M_{j}\right)\right)$ for $j=1,2,...,p$, with $\tilde{\psi}_{j}\left(k,R_{j}+M_{j}\right)$ for $k=1,2,...,n-1$; $j=1,2,...,p$ given in equation \eqref{50.4} with $\tau$ replaced by $k$. $\phi_{X_{jT}}\left(s\right)$ can be obtained from the formula for its m.g.f. given in equation \eqref{50.13a} by replacing $s_{j}$ with $is$. Further, we see that $\Psi_{T}^{G}\left(\eta;\delta\right)$ given in equation \eqref{20.28a} reduces to \begin{equation}\label{50.19} \Psi_{T}^{G^{\emph{MCIR}}}\left(\eta;\delta\right)=e^{i\left(\eta-i\left(\delta+1\right)\right)Y_{0}^{\left(n-1\right)}}\frac{\prod_{j=1}^{p}\phi_{X_{jT}}\left(-\frac{\left(\eta-i\left(\delta+1\right)\right)}{n-1}\sum_{k=1}^{n-1}\tilde{\psi}_{j}\left(k,R_{j}+M_{j}\right)\right)}{\delta^2+\delta-\eta^{2}+i\eta\left(2\delta+1\right)}. \end{equation} Next, we obtain $\tilde{E}^{\emph{MCIR}}\left[G_{T}^{\left(n-1\right)}\right]$ from equation \eqref{20.32a} by utilizing \eqref{50.18}.
Further, we compute \begin{equation}\label{50.20} \tilde{E}^{\emph{MCIR}}\left[A_{T}^{\left(n-1\right)}\right]=\frac{1}{n-1}\sum_{k=1}^{n-1}\left(e^{-\left(\left(\bar{r}+\bar{\mu}\right)k+\sum_{j=1}^{p}\tilde{\phi}_{j}\left(k,R_{j}+M_{j}\right)\right)} \prod_{j=1}^{p}\mathscr{M}_{X_{jT}}\left(-\tilde{\psi}_{j}\left(k,R_{j}+M_{j}\right)\right)\right). \end{equation} Finally, we plug the components one by one into equation \eqref{20.27} to obtain the upper bound $\mbox{GAOUB}^{\left(MCIR\right)}$. \subsection{The Wishart Short Rate Model} \subsubsection{The Set Up} In this section, we assume that the affine process $X:=\left(X_{t}\right)_{t\geq 0}$ is a $d$-dimensional Wishart process. Given a $d \times d$ matrix Brownian motion $W$ (i.e. a matrix whose entries are independent Brownian motions), the Wishart process $X$ (without jumps) is defined as the solution of the $d \times d$-dimensional stochastic differential equation \begin{equation}\label{51.17} dX_{t}=\left(\beta Q^{T}Q+HX_{t}+X_{t}H^{T}\right)dt+\sqrt{X_{t}}dW_{t}Q+Q^{T}dW^{T}_{t}\sqrt{X_{t}}, \;t\geq 0, \end{equation} where $X_{0}=x \in S_{d}^{+}$, $\beta \geq d-1$, $H \in M_{d}$, $Q \in GL_{d}$ and $Q^{T}$ denotes its transpose. $M_{d}$ has been defined in Section 4, while $GL_{d}$ denotes the set of invertible real $d \times d$ matrices. In short, we assume that the law of $X$ is $WIS_{d}\left(x_{0},\beta,H,Q\right)$. \subsubsection{Existence and Uniqueness of Solution} This process was pioneered by \cite{Bru2}, who showed the existence and uniqueness of a weak solution of Eq. \eqref{51.17}. She also established the existence of a unique strong solution taking values in $S_{d}^{++}$, i.e. the interior of the cone of positive semi-definite symmetric $d\times d$ matrices that we have denoted by $S_{d}^{+}$. \subsubsection{Generator} \cite{Bru2} has calculated the infinitesimal generator of the Wishart process as: \begin{equation}\label{51.18} \mathscr{A}=Tr\left(\left(\beta Q^{T}Q+Hx+xH^{T}\right)D^{S}+2xD^{S}Q^{T}QD^{S}\right), \end{equation} where $Tr$ stands for the trace and $D^{S}=\left(\partial/\partial x_{ij}\right)_{1\leq i, j\leq d}$. A good reference for understanding the detailed derivation of this generator is \cite{Alfonsi} and, following this reference, we have defined the generator in Appendix A. \subsubsection{Survival Zero Coupon Bond Pricing} We now give an explicit formula for calculating the survival zero coupon bond price under the Wishart short rate model. \begin{thm}\label{411.8} Let the dynamics of the short rate and the mortality rate be given in accordance with equations \eqref{51.11} and \eqref{51.12} respectively as \begin{equation}\label{51.18a} r_{t}=\bar{r}+Tr\left[RX_{t}\right] \end{equation} and \begin{equation}\label{51.18b} \mu_{t}=\bar{\mu}+Tr\left[MX_{t}\right] \end{equation} for a process $X$ with law $WIS_{d}\left(x_{0},\beta,H,Q\right)$.
Let $R, M \in S_{d}^{++}$ and $\tau=T-t$. Then the price of a zero-coupon bond under the Wishart short rate model \eqref{51.18a} is given by { \allowdisplaybreaks \begin{eqnarray}\label{51.19} \tilde{P}\left(t, T\right) & = & \mathbb{E}\left[e^{-\int^{T}_{t}\left(\bar{r}+\bar{\mu}+Tr\left[\left(R+M\right)X_{u}\right]\right) du}|\mathscr{F}_{t}\right]\nonumber\\ & = & e^{-\tilde{\phi}\left(\tau,R+M\right)-Tr\left[\tilde{\psi}\left(\tau,R+M\right)X_{t}\right]}, \end{eqnarray} } where $\tilde{\phi}$ and $\tilde{\psi}$ satisfy the following system of ODEs: \begin{equation}\label{51.20} \begin{cases} \frac{\partial\tilde{\phi}}{\partial \tau} = Tr\left[\beta Q^{T}Q\tilde{\psi}\left(\tau,R+M\right)\right]+\bar{r}+\bar{\mu},\\ \tilde{\phi}\left(0,R+M\right) = 0,\\ \frac{\partial\tilde{\psi}}{\partial \tau} = \tilde{\psi}\left(\tau,R+M\right)H+H^{T}\tilde{\psi}\left(\tau,R+M\right)-2\tilde{\psi}\left(\tau,R+M\right)Q^{T}Q\tilde{\psi}\left(\tau,R+M\right)+R+M,\\ \tilde{\psi}\left(0,R+M\right) = 0.\end{cases} \end{equation} \end{thm} \begin{proof} Consider the expectation in equation \eqref{51.19}. As remarked in Section 2, the conditioning on $\mathscr{F}_{t}$ can be reduced to that on $\mathscr{G}_{t}$, and so, for $t\leq T$, we define \begin{equation}\label{51.20a} F\left(t, X_{t}\right) = f\left(\tau, X_{t}\right) = \mathbb{E}\left[e^{-\int^{T}_{t}\left(\bar{r}+\bar{\mu}+Tr\left[\left(R+M\right)X_{u}\right]\right) du}|X_{t}\right]. \end{equation} This conditional expectation is the Feynman-Kac representation which satisfies the following Partial Differential Equation (PDE): \begin{equation}\label{51.20b} \begin{cases} \frac{\partial f\left(\tau, x\right)}{\partial\tau} = \mathscr{A}f\left(\tau, x\right)-\left(\bar{r}+\bar{\mu}+Tr\left[\left(R+M\right)x\right]\right)f\left(\tau, x\right),\\ f\left(0, x\right)=1,\end{cases} \end{equation} for all $x \in S_{d}^{+}$, where $\mathscr{A}$ is the infinitesimal generator of the Wishart process given in equation \eqref{51.18}.
We introduce a candidate solution given below \begin{equation}\label{51.23} f\left(\tau, x\right)=e^{-\tilde{\phi}\left(\tau,R+M\right)-Tr\left[\tilde{\psi}\left(\tau,R+M\right)x\right]} \end{equation} so that \begin{equation}\label{51.24} \frac{\partial f\left(\tau, x\right)}{\partial\tau}=\left(-\frac{\partial\tilde{\phi}}{\partial \tau}-Tr\left[\frac{\partial\tilde{\psi}}{\partial \tau}x\right]\right)f\left(\tau, x\right). \end{equation} Also, it is clear that \begin{equation}\label{51.25} \mathscr{A}e^{-\tilde{\phi}\left(\tau,R+M\right)-Tr\left[\tilde{\psi}\left(\tau,R+M\right)x\right]}=e^{-\tilde{\phi}\left(\tau,R+M\right)}\mathscr{A}e^{-Tr\left[\tilde{\psi}\left(\tau,R+M\right)x\right]}, \end{equation} where, on using the generator of the Wishart process given in equation \eqref{51.18}, we have \begin{eqnarray}\label{51.26} \mathscr{A}e^{-Tr\left(\tilde{\psi}\left(\tau,R+M\right)x\right)} & = & \Bigg(-Tr\left[\beta Q^{T}Q\tilde{\psi}\left(\tau,R+M\right)\right]+Tr\Bigg[\Bigg(2\tilde{\psi}\left(\tau,R+M\right)Q^{T}Q\tilde{\psi}\left(\tau,R+M\right)\nonumber\\ & {} & -\tilde{\psi}\left(\tau,R+M\right)H-H^{T}\tilde{\psi}\left(\tau,R+M\right)\Bigg)x\Bigg]\Bigg) e^{-Tr\left(\tilde{\psi}\left(\tau,R+M\right)x\right)}. \end{eqnarray} Using equations \eqref{51.24}-\eqref{51.26} in equation \eqref{51.20b} and canceling $f\left(\tau, x\right)$ throughout, we get \begin{eqnarray}\label{51.27} -\frac{\partial\tilde{\phi}}{\partial \tau}-Tr\left[\frac{\partial\tilde{\psi}}{\partial \tau}x\right] & = & -Tr\left[\beta Q^{T}Q\tilde{\psi}\left(\tau,R+M\right)\right]-\left(\bar{r}+\bar{\mu}+Tr\left[\left(R+M\right)x\right]\right)+Tr\Bigg[\Bigg(2\tilde{\psi}\left(\tau,R+M\right)\nonumber\\ & {} & \times Q^{T}Q\tilde{\psi}\left(\tau,R+M\right)-\tilde{\psi}\left(\tau,R+M\right)H-H^{T}\tilde{\psi}\left(\tau,R+M\right)\Bigg)x\Bigg]. \end{eqnarray} Comparing the terms independent of $x$ and the coefficients of $x$ on both sides of equation \eqref{51.27}, we get the required system of ODEs given in equation \eqref{51.20}. This completes the proof. \end{proof} The methodology for solving the system of Riccati equations given in \eqref{51.20} appears in \cite{Fonseca}, where the authors propose that matrix Riccati equations can be linearized by doubling the dimension of the problem. Interested readers can also refer to \cite{Grasselli} and \cite{Deelestra}. We state without proof the solution in the following proposition. \begin{prop}\label{411.9} The functions $\tilde{\phi}$ and $\tilde{\psi}$ in Theorem \ref{411.8} are given by \begin{equation}\label{51.200} \begin{cases} \tilde{\psi}\left(\tau,R+M\right) = A_{22}^{-1}\left(\tau\right)A_{21}\left(\tau\right) ,\\ \tilde{\phi}\left(\tau, R+M\right) = \frac{\beta}{2}\left(\log\left(\det\left(A_{22}\left(\tau\right)\right)\right)+\tau Tr\left[H^{T}\right]\right),\end{cases} \end{equation} where \begin{equation}\label{51.201} \begin{pmatrix} A_{11}\left(\tau\right) & A_{12}\left(\tau\right) \\ A_{21}\left(\tau\right) & A_{22}\left(\tau\right) \end{pmatrix} = \exp\left(\tau \begin{pmatrix} H & 2Q^{T}Q \\ R+M & -H^{T} \end{pmatrix}\right). \end{equation} \end{prop} \bigskip Alternative approaches to the pricing of the zero coupon bond under the Wishart short rate model can be found in \cite{Grasselli} and \cite{Gnoatto2}.
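For completeness, a minimal numerical sketch of this proposition is given below (in \texttt{Python}, using \texttt{scipy}; the function name is ours and purely illustrative).
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def wishart_phi_psi(tau, H, Q, RM, beta):
    """phi and psi of Eqs. (51.200)-(51.201), obtained by
    exponentiating the doubled-dimension linearisation."""
    d = H.shape[0]
    block = np.block([[H, 2.0 * Q.T @ Q],
                      [RM, -H.T]])
    A = expm(tau * block)
    A21, A22 = A[d:, :d], A[d:, d:]
    psi = np.linalg.solve(A22, A21)        # A22^{-1} A21
    # slogdet assumes det(A22) > 0; note Tr[H^T] = Tr[H]
    _, logdet = np.linalg.slogdet(A22)
    phi = 0.5 * beta * (logdet + tau * np.trace(H))
    return phi, psi
\end{verbatim}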
\subsubsection{Price of the GAO} We use Theorem \ref{411.8} and equation \eqref{51.124} to obtain the price of the GAO under the transformed measure $\tilde{Q}$ as \begin{equation}\label{51.202} C(0, x, T ) = g \tilde{P}\left(0,T\right){\displaystyle\tilde{E}}\left[\left(\sum_{i=1}^{n-1}e^{-\left(\bar{r}+\bar{\mu}\right)i}e^{-\tilde{\phi}\left(i,R+M\right)-Tr\left[\tilde{\psi}\left(i,R+M\right)X_{T}\right]}-\left(K-1\right)\right)^{+}\right], \end{equation} where $\tilde{P}\left(0,T\right)$ is given by equation \eqref{51.19} with $\tau=T$, while $\tilde{\psi}\left(i,R+M\right)$ and $\tilde{\phi}\left(i,R+M\right)$ for $i=1,2,...,n-1$ are given by the system of equations \eqref{51.200} with $\tau=i$. \subsubsection{Distribution of $X_{T}$} In order to obtain explicit bounds for the GAO in the Wishart case, we need the distribution of $X_{T}$ under the transformed measure $\tilde{Q}$. We state this in the following proposition (c.f. \cite{Deelestra} and \cite{Kang} for details). \begin{prop}\label{411.10} The dynamics of the Wishart process $X$ defined in equation \eqref{51.17} under the transformed measure $\tilde{Q}$ are given by \begin{equation}\label{51.203} dX_{t}=\left(\beta Q^{T}Q+H\left(t\right)X_{t}+X_{t}H\left(t\right)^{T}\right)dt+\sqrt{X_{t}}dW_{t}Q+Q^{T}dW^{T}_{t}\sqrt{X_{t}}, \;t\geq 0, \end{equation} where \begin{equation}\label{51.204} H\left(t\right)=H-Q^{T}Q\tilde{\psi}\left(\tau,R+M\right), \end{equation} $X_{0}=x \in S_{d}^{+}$, $\beta \geq d-1$, $H \in M_{d}$ and $Q \in GL_{d}$. Then \begin{equation}\label{51.205} X_{T} \sim \mathscr{W}_{d}\left(\beta,V\left(0\right),V\left(0\right)^{-1}\psi\left(0\right)^{T}x\psi\left(0\right)\right), \end{equation} where $\mathscr{W}_{d}$ stands for the non-central Wishart distribution with parameters $d$, $\beta$, $V\left(0\right)$ and $\psi\left(0\right)^{T}x\psi\left(0\right)$, the last parameter being known as the non-centrality parameter and denoted by $\Theta$. Moreover, $V\left(t\right)$ and $\psi\left(t\right)$ solve the following system of ODEs \begin{equation}\label{51.206} \begin{cases} \frac{d}{dt}\psi\left(t\right) = -H\left(t\right)^{T}\psi\left(t\right) ,\\ \frac{d}{dt}V\left(t\right) = -\psi\left(t\right)^{T} Q^{T}Q \psi\left(t\right), \end{cases} \end{equation} with terminal conditions $\psi\left(T\right)=I_{d}$ and $V\left(T\right)=0$. \end{prop} We now state two propositions in the context of the non-central Wishart distribution which are very important for the derivation of bounds for the GAO in the Wishart case (c.f. \cite{Pfaffel} and \cite{Gupta}). \begin{prop}\label{411.11}(Laplace Transform of the Non-Central Wishart Distribution) Let $X_{T} \sim \mathscr{W}_{d}\left(\beta,V\left(0\right), \Theta\right)$ with $\Theta = V\left(0\right)^{-1}\psi\left(0\right)^{T}x\psi\left(0\right)$. Then the Laplace transform of $X_{T}$ is given by \begin{equation}\label{51.207} \mathscr{L}\left(U\right)=\tilde{E}\left[e^{Tr\left[-UX_{T} \right]}\right]=\det \left(I_{d}+2V\left(0\right)U\right)^{-\frac{\beta}{2}}e^{Tr\left[ -\Theta\left(I_{d}+2V\left(0\right)U\right)^{-1}V\left(0\right)U \right]}, \end{equation} where $U \in S_{d}^{+}$. \end{prop} \begin{prop}\label{411.12}(Characteristic Function of the Non-Central Wishart Distribution) Consider $X_{T} \sim \mathscr{W}_{d}\left(\beta,V\left(0\right), \Theta\right)$ with $\Theta = V\left(0\right)^{-1}\psi\left(0\right)^{T}x\psi\left(0\right)$.
Then the characteristic function of $X_{T}$ is given by \begin{equation}\label{51.208} \phi_{X_{T}}\left(\Lambda\right)=\tilde{E}\left[e^{Tr\left[ i\Lambda X_{T} \right]}\right]=\det \left(I_{d}-2iV\left(0\right)\Lambda\right)^{-\frac{\beta}{2}}e^{Tr\left[ i\Theta\left(I_{d}-2iV\left(0\right)\Lambda\right)^{-1}V\left(0\right)\Lambda \right]}, \end{equation} where $\Lambda \in M_{d}$. \end{prop} \subsubsection{The Lower Bound $\mbox{GAOLB}^{\left(WIS\right)}$} Under the Wishart set up, the lower bound $\mbox{GAOLB}$ obtained in equation \eqref{6.10} reduces to a very neat form. Before arriving at the formula, we define the following notation in the spirit of Section 4: \begin{equation}\label{51.209} S_{T}^{\left(i\right)}=S_{0}^{\left(i\right)}e^{X_{T}^{\left(i\right)}};\;i=1,2,...,n-1, \end{equation} where \begin{equation}\label{51.210} S_{0}^{\left(i\right)}=e^{-\left(\left(\bar{r}+\bar{\mu}\right)i+\tilde{\phi}\left(i,R+M\right)\right)} \end{equation} and \begin{equation}\label{51.211} X_{T}^{\left(i\right)}=-Tr\left[\tilde{\psi}\left(i,R+M\right)X_{T}\right], \end{equation} where $\tilde{\psi}\left(i,R+M\right)$ and $\tilde{\phi}\left(i,R+M\right)$ for $i=1,2,...,n-1$ are given by the system of equations \eqref{51.200} with $\tau=i$. Further, $X_{T}$ has a non-central Wishart distribution with Laplace transform given in equation \eqref{51.207}. Combining this result with the formula \eqref{6.10a}, the lower bound for the Wishart case takes the following form: \begin{eqnarray}\label{51.212} \mbox{ GAOLB}^{\left(WIS\right)} & = & g \tilde{P}\left(0,T\right)\Bigg({\displaystyle \sum_{i=1}^{n-1}}\Bigg(e^{-\left(\left(\bar{r}+\bar{\mu}\right)i+\tilde{\phi}\left(i,R+M\right)\right)}\det \left(I_{d}+2V\left(0\right)\tilde{\psi}\left(i,R+M\right)\right)^{-\frac{\beta}{2}}\nonumber\\ & {} & {}\times e^{Tr\left[ -\Theta\left(I_{d}+2V\left(0\right)\tilde{\psi}\left(i,R+M\right)\right)^{-1}V\left(0\right)\tilde{\psi}\left(i,R+M\right) \right]}\Bigg)-\left(K-1\right)\Bigg)^{+} \end{eqnarray} \subsubsection{The Upper Bound $\mbox{GAOUB}^{\left(WIS\right)}$} Under the formulation of the assets in the basket in the Wishart case (\eqref{51.209}-\eqref{51.211}), we have \begin{equation}\label{51.212a} Y_{0}^{\left(n-1\right)}=-\frac{\left(\bar{r}+\bar{\mu}\right)n}{2}-\frac{1}{n-1} {\displaystyle \sum_{k=1}^{n-1}} \tilde{\phi}\left(k,R+M\right) \end{equation} using the definition of the log-geometric average $Y_{T}^{\left(n-1\right)}$ given in equation \eqref{20.20} in Section 6.
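Both bounds thus only require the Laplace transform \eqref{51.207} and the characteristic function \eqref{51.208} of the non-central Wishart distribution. A minimal \texttt{Python} sketch of these two maps (function names ours) is:
\begin{verbatim}
import numpy as np

def wishart_laplace(U, beta, V0, Theta):
    """Laplace transform of the non-central Wishart law, Eq. (51.207)."""
    d = V0.shape[0]
    M = np.eye(d) + 2.0 * V0 @ U
    return (np.linalg.det(M) ** (-0.5 * beta)
            * np.exp(np.trace(-Theta @ np.linalg.solve(M, V0 @ U))))

def wishart_charfn(Lam, beta, V0, Theta):
    """Characteristic function, Eq. (51.208); the complex power
    uses the principal branch, adequate for this sketch."""
    d = V0.shape[0]
    M = np.eye(d) - 2j * V0 @ Lam
    return (np.linalg.det(M) ** (-0.5 * beta)
            * np.exp(np.trace(1j * Theta @ np.linalg.solve(M, V0 @ Lam))))
\end{verbatim}
In particular, the sum inside \eqref{51.212} is simply $\sum_{i} S_{0}^{(i)}\,\mathscr{L}\left(\tilde{\psi}(i,R+M)\right)$, with $\mathscr{L}$ evaluated as above.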
Further, obtaining the upper bound for the GAO in the Wishart set up is a straightforward exercise as one can exploit the upper bound GAO formula under the affine case given in equation \eqref{20.37} by calculating the Laplace transform given in equation \eqref{51.207} such that for $k=1,2,...,n-1$, \begin{equation}\label{51.213} \mathscr{L}\left(\tilde{\psi}\left(k,R+M\right)\right)=\det \left(I_{d}+2V\left(0\right)\tilde{\psi}\left(k,R+M\right)\right)^{-\frac{\beta}{2}}e^{Tr\left[ -\Theta\left(I_{d}+2V\left(0\right)\tilde{\psi}\left(k,R+M\right)\right)^{-1}V\left(0\right)\tilde{\psi}\left(k,R+M\right) \right]} \end{equation} and calculating $\phi_{X_{T}}\left(-\frac{\left(\eta-i\left(\delta+1\right)\right)}{n-1}\sum_{k=1}^{n-1}\tilde{\psi}\left(k,R+M\right)\right)$ and $\phi_{X_{T}}\left(\frac{i}{n-1}\sum_{k=1}^{n-1}\tilde{\psi}\left(k,R+M\right)\right)$ from the formula \eqref{51.208} by replacing $\Lambda$ by $-\frac{\left(\eta-i\left(\delta+1\right)\right)}{n-1}\sum_{k=1}^{n-1}\tilde{\psi}\left(k,R+M\right)$ and $\frac{i}{n-1}\sum_{k=1}^{n-1}\tilde{\psi}\left(k,R+M\right)$ respectively. \section{Numerical Results} Now we investigate the applications of the theory derived in the previous sections. We have successfully obtained a number of lower bounds and an upper bound for Guaranteed Annuity Options in sections 5 and 6. We now test these vis-a-vis the well-known Monte Carlo estimate for the GAO. We carry out this working for a couple of more general affine models. The nomenclature for the bounds has already been specified in sections 5, 6 and 7. In all the examples, we have the following `Contract Specification': \[ g=11.1\%,\; T=15,\; n=35; \] \subsection{Multi CIR Model} First we consider a 3-dimensional CIR process $X:=\left(X_{t}\right)_{t\geq 0}$ having independent components $\left(X_{it}\right)_{t\geq 0}$, $i=1,2,3$ (c.f. \cite{Deelestra} for details). We assume the following dynamics for the interest rate process and the mortality process. \begin{equation}\label{10.1} r_{t}=\bar{r}+X_{1t}+X_{2t} \end{equation} and \begin{equation}\label{10.2} \mu_{t}=\bar{\mu}+m_{2}X_{2t}+m_{3}X_{3t}, \end{equation} where $\bar{r}$, $\bar{\mu}$, $m_{2}$ and $m_{3}$ are constants. We use model specifications similar to \cite{Deelestra} and make a minute alteration in the parameter set. We fix the value of $m_{2}$ and obtain the value of $m_{3}$ such that the expectation of the mortality is fixed to a specified level denoted by $C_{x}\left(T\right)$ which is predicted by e.g. a Gompertz-Makeham model (c.f. \cite{Dickson}) at age $x+T$ for an individual aged x at time 0, i.e., \begin{equation}\label{10.3} \mathbb{E}\left[\mu_{t}\right]=C_{x}\left(T\right), \end{equation} Applying expectation on both sides of equation \eqref{10.2} and substituting in \eqref{10.3} we get \begin{equation}\label{10.4} \bar{\mu}+m_{2}\mathbb{E}\left[X_{2t}\right]+m_{3}\mathbb{E}\left[X_{3t}\right]=C_{x}\left(T\right), \end{equation} where as $X_{it};\;i=1,2,3$ is obtained using the Stochastic Differential Equation (SDE) given by \eqref{50.1}, we have \begin{equation}\label{10.5} \mathbb{E}\left[X_{it}\right]=X_{i,0}e^{-k_{i}T}+\theta_{i}\left(1-e^{-k_{i}T}\right). \end{equation} Using our contract specifications outlined in the beginning of this section we fix the expected value in \eqref{10.3} to the level $C_{50}\left(15\right)=0.0125$. A very good discussion in regards to the validity of the model to be used for mortality appears in \cite{Deelestra}. In fact this model was completely calibrated in \cite{Chiarella}. 
Using the set up defined by equations \eqref{10.1}-\eqref{10.2}, the linear pairwise correlation between $\left(r_{t}\right)_{t\geq 0}$ and $\left(\mu_{t}\right)_{t\geq 0}$, denoted by $\rho_{t}$ forms a stochastic process given by \begin{equation}\label{10.6} \rho_{t}=\frac{m_{2}\sigma_{2}^{2}X_{2t}}{\sqrt{\sigma_{1}^{2}X_{1t}+\sigma_{2}^{2}X_{2t}}\sqrt{m_{2}^2\sigma_{2}^{2}X_{2t}+m_{2}^2\sigma_{2}^{2}X_{2t}}}. \end{equation} We vary the value of $m_{2}$ and therefore obtain the value of $m_{3}$ using equation \eqref{10.3} and this finally yields the value of $\rho$. Further in line with \cite{Deelestra}, we make the following parameter specifications \[ \bar{r}=-0.12332,\;\;\bar{\mu}=0 \] \begin{table}[ht] \centering \begin{tabular}{c c c c c c c c} \hline\hline\\[0.01ex] CIR process & & Parameters & & & \\ [1ex] \hline\hline\\[0.5ex] $X_{1}$ & $k_{1}=0.3731$ & $\theta_{1}=0.074484$ & $\sigma_{1}=0.0452$ & $X_{1,0}=0.0510234$ \\[1ex] $X_{2}$ & $k_{2}=0.011\;$ & $\theta_{1}=0.245455$ & $\sigma_{2}=0.0368$ & $X_{2,0}=0.0890707$ \\[1ex] $X_{3}$ & $k_{3}=0.01\;\;$ & $\theta_{1}=0.0013\;\;$ & $\sigma_{3}=0.0015$ & $X_{3,0}=0.0004\;\;\;\;$ \\[1ex] \hline\hline \end{tabular} \label{table1} \caption{Parameter Values for the 3-dimensional CIR process} \end{table} Table 2 depicts the lower bound, the upper bound and the Monte Carlo estimate of the GAO price for different values of $m_{2}$ and therefore for different values of the initial pairwise linear correlation coefficient $\rho_{0}$. We find that an increase in the value of $\rho_{0}$ enhances the value of the GAO. The lower bound is extremely sharp. On the other hand, upper bound is slightly wider. The results of Table 2 are portrayed in Figures 1-3. Figure 2 reflects that the relative difference ($=\frac{|bound-MC|}{MC}$) between the upper bound and the benchmark Monte Carlo estimate decreases with an increase in the correlation between mortality and interest rate while the relative difference for the lower bound almost remains constant with varying $\rho_{0}$. On the other hand, figure 3 depicts the absolute difference between the Monte Carlo estimate of the GAO price and the derived bounds which remain more or less constant. The lower bound fares much better than $GAOUB$. Finally figure 4 shows the price bounds and in fact the lower bound stick completely camouflages with that of the MC estimate which is a testimony to the tightness of the lower bound. 
\begin{table}[ht] \centering \begin{tabular}{|c||c||c|c|c|c|c|c|} \hline $\;\;m_{2}\;\;$ & $\;\rho\;$ & $\;\;\;\;\;\mbox{GAOLB}^{\left(MCIR\right)}\;\;\;\;$ & $\;\;MC\;\;\;\;\;\;\;\;\;\;\;$ & $\;\;\;\;\;\mbox{GAOUB}^{\left(MCIR\right)}\;\;$ \\ \hline\hline -0.300 & -0.570960646515027 & 0.153351236437789 & 0.153431631010533 & 0.216286630652776 \\ -0.100 & -0.460513730466363 & 0.181641413947461 & 0.181871723226662 & 0.243710313225013 \\ -0.070 & -0.403426257094426 & 0.187186872445969 & 0.187285214852833 & 0.249173899703122 \\ -0.060 & -0.376271648827787 & 0.189122188373390 & 0.189373949402726 & 0.251083730739797 \\ -0.050 & -0.343007585286942 & 0.191102351580502 & 0.191474297920361 & 0.253039217047040 \\ -0.040 & -0.301756813619030 & 0.193128263182051 & 0.195421232722993 & 0.255041205164633 \\ -0.030 & -0.250041147986350 & 0.195200853300304 & 0.195132243321684 & 0.257090572307459 \\ -0.020 & -0.184739400604580 & 0.197321081986930 & 0.197531098187496 & 0.259188227324353 \\ -0.010 & -0.102346730178820 & 0.199489940182500 & 0.199619257104038 & 0.261335111674878 \\ -0.001 & -0.011167160239806 & 0.201484335480591 & 0.201710195921424 & 0.263310203859562 \\ 0.000 & 0.000000000000000 & 0.201708450715130 & 0.201879045816498 & 0.263532200435103 \\ 0.001 & 0.011370596893292 & 0.201933073002533 & 0.202090425152612 & 0.263754709122691 \\ 0.010 & 0.122142590872118 & 0.203977669339908 & 0.204292134604299 & 0.265780503352825 \\ 0.020 & 0.257493768936871 & 0.206298685820891 & 0.206369996912367 & 0.268081065972310 \\ 0.030 & 0.391761086281179 & 0.208672625057373 & 0.208709896009824 & 0.270434970824946 \\ 0.040 & 0.508145173072700 & 0.211100648256358 & 0.211180180724315 & 0.272843338639536 \\ 0.050 & 0.596334605305204 & 0.213583954153270 & 0.213584231985838 & 0.275307329501738 \\ 0.060 & 0.656025897318996 & 0.216123780282872 & 0.216228415988778 & 0.277828143936307 \\ 0.070 & 0.693071640464574 & 0.218721404302618 & 0.218840241843838 & 0.280407024005732 \\ 0.100 & 0.730953349866014 & 0.226874471461256 & 0.226934772658478 & 0.288505131181583 \\ \hline \end{tabular} \label{table30} \caption{Lower Bounds and Upper Bound $\mbox{GAOUB}$ for Guaranteed Annuity Option under the MCIR Model with partial parameter choice in accordance with \cite{Deelestra}. MC Simulations: 50000} \end{table} \subsection{Wishart Model} As a final step we test our bounds in the backdrop of the celebrated Wishart model for mortality and interest rate. The functional form of the model for the two aforesaid risks has been detailed in equations \eqref{51.18a}-\eqref{51.18b}. To present the application of our methodology we stick to a 2-dimensional Wishart process, i.e., $d=2$ due to the fact that higher dimensional Wishart processes are difficult to implement. The law for the underlying process $X$ governing the mortality and interest rate processes has been outlined in equation \eqref{51.17}. We consider partial choice of the parameter set in accordance with \cite{Deelestra}. For all examples considered below, let \[ \beta=3,\;\;\bar{r}=0.04,\;\;\bar{\mu}=0, \] \begin{equation}\label{10.7} H = \begin{pmatrix} -0.5 & 0.4 \\ 0.007 & -0.008 \end{pmatrix},\;\; M = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix},\;\; R = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \end{equation} in equations, \eqref{51.17} and \eqref{51.18a}-\eqref{51.18b}. 
In light of this data, the stochastic correlation between $\left(r_{t}\right)_{t\geq 0}$ and $\left(\mu_{t}\right)_{t\geq 0}$, denoted by $\rho_{t}$ forms a stochastic process given by \begin{equation}\label{10.9} \rho_{t}=\frac{\left(Q_{11}Q_{12}+Q_{22}Q_{21}\right)X_{t}^{12}}{\sqrt{\left(Q_{11}^2+Q_{21}^{2}\right)X_{t}^{11}\left(Q_{22}^2+Q_{12}^{2}\right)X_{t}^{22}}}. \end{equation} As is evident from \eqref{10.9}, using a Wishart formulation for underlying process $X$ produces a more richer dependence structure for the underlying risks than was available under the multidimensional CIR case. This calls for carrying out a more sophisticated sensitivity analysis in regards to the involved parameters. In the same spirit as \cite{Deelestra}, we carry out a two-fold testing \begin{itemize} \item the first one by varying the off-diagonal elements of the initial Wishart process $X_{0}$ and investigating the impact on the prices of the GAO, \item the second one by experimenting with the off-diagonal elements of the matrix $Q$. \end{itemize} In each case, we compute the bounds and compare them with the benchmark Monte Carlo value which is computed using 20000 simulations. For stability checks in relation to the expected values of the interest rate and mortality intensity w.r.t. varying correlation, interested readers can refer to \cite{Deelestra}. \subsubsection{Effect of a Change in Initial Value $X_{0}$} In order to see the behaviour of the price bounds for the GAO price vis-a-vis change in the initial value of the Wishart process, we experiment with two cases: \begin{itemize} \item Negative off-diagonal elements in the volatility matrix Q \item Positive off-diagonal elements in the volatility matrix Q. \end{itemize} \bigskip \textbf{Example 1}. In this case we consider the following Wishart process: \begin{equation}\label{10.10} Q = \begin{pmatrix} 0.06 & -0.0006 \\ -0.06 & 0.006 \end{pmatrix},\;\; X_{0} = \begin{pmatrix} 0.01 & X_{0}^{12} \\ X_{0}^{12} & 0.001 \end{pmatrix}.\;\; \end{equation} Table 3 portrays the lower bound, the upper bound and the Monte Carlo estimate of the GAO price for different values of $X_{0}^{12}$ and therefore for different values of the initial pairwise linear correlation coefficient $\rho_{0}$. We find that an increase in the value of $\rho_{0}$ enhances the value of the GAO in a fashion similar to the one shown for the Multi-CIR set up in Table 3. In this case both the lower and the upper bounds show close proximity to the GAO value. 
\bigskip \begin{table}[ht] \centering \begin{tabular}{|c||c||c|c|c|c|c|c|} \hline $\;\;X_{0}^{12}\;\;$ & $\;\rho\;$ & $\;\;\;\;\;\mbox{GAOLB}^{\left(WIS\right)}\;\;\;\;$ & $\;\;MC\;\;\;\;\;\;\;\;\;\;\;$ & $\;\;\;\;\;\mbox{GAOUB}^{\left(WIS\right)}\;\;$ \\ \hline\hline -0.003 & 0.734240363158475 & 0.241898614923743 & 0.241247798732840 & 0.241898616247735 \\ -0.002 & 0.489493575438983 & 0.241133565561902 & 0.240529742517039 & 0.241133567256078 \\ -0.0015 & 0.367120181579237 & 0.240751892681841 & 0.239890712272120 & 0.240751894570155 \\ -0.0005 & 0.122373393859746 & 0.239990246464251 & 0.239141473598451 & 0.239990248759807 \\ 0 & 0.000000000000000 & 0.239610271506445 & 0.238621509824004 & 0.239610274015476 \\ 0.0005 & -0.122373393859746 & 0.239230860904335 & 0.238198077279364 & 0.239230863633664 \\ 0.0015 & -0.367120181579237 & 0.238473729539084 & 0.237679950746197 & 0.238473732730283 \\ 0.002 & -0.489493575438983 & 0.238096007164703 & 0.237331879397777 & 0.238096010597887 \\ 0.003 & -0.734240363158475 & 0.237342245012764 & 0.236699850447918 & 0.237342248953105 \\ \hline \end{tabular} \label{table3} \caption{Lower Bounds and Upper Bound $\mbox{GAOUB}$ for Guaranteed Annuity Option under the Wishart Model Example 1 with parameter choice in accordance with \cite{Deelestra}. MC Simulations: 20000} \end{table} \textbf{Example 2}. In the second investigation, we consider the following Wishart process: \begin{equation}\label{10.11} Q = \begin{pmatrix} 0.06 & 0.00001 \\ 0.0002 & 0.006 \end{pmatrix},\;\; X_{0} = \begin{pmatrix} 0.01 & X_{0}^{12} \\ X_{0}^{12} & 0.001 \end{pmatrix}.\;\; \end{equation} As can be seen, in this example, we consider positive off-diagonal elements for the matrix $Q$. Table 4 portrays the lower bound, the upper bound and the Monte Carlo estimate of the GAO price for different values of $X_{0}^{12}$ and therefore for different values of the initial pairwise linear correlation coefficient $\rho_{0}$. The results obtained present a sharp contrast to those obtained in Table 3 and the value of the GAO price and the corresponding bounds begin to drop as the value of $\rho_{0}$ is increases. Both the bounds continue to perform well even on this occasion. A good justification of the behaviour of the GAO price in the first two examples (also see \cite{Deelestra}) vis-a-vis the values of $X_{0}^{12}$ can be provided by noting that under dynamics of the Wishart process (\eqref{51.17}) the positive factors swell on an average when the initial value $X_{0}^{12}$ increases. Moreover, for the aforementioned parameter choice, the models for mortality and interest rate for $t \geq 0$ are given as \begin{equation}\label{10.12} r_{t}=0.04+X_{t}^{11} \end{equation} and \begin{equation}\label{10.13} \mu_{t}=X_{t}^{11}. \end{equation} Now, it is clear from the formula for GAO price given in equation \eqref{51.202}, that the exponential term containing $r_{t}$ and $\mu_{t}$ decays when $X_{0}^{12}$ increases and this causes the GAO price and corresponding bounds to diminish when $X_{0}^{12}$ soars. 
\begin{table}[ht] \centering \begin{tabular}{|c||c||c|c|c|c|c|c|} \hline $\;\;X_{0}^{12}\;\;$ & $\;\rho\;$ & $\;\;\;\;\;\mbox{GAOLB}^{\left(WIS\right)}\;\;\;\;$ & $\;\;MC\;\;\;\;\;\;\;\;\;\;\;$ & $\;\;\;\;\;\mbox{GAOUB}^{\left(WIS\right)}\;\;$ \\ \hline\hline -0.003 & -0.004743383550130 & 0.332948404889575 & 0.341196353690094 & 0.332948737923275 \\ -0.002 & -0.003162255700087 & 0.331667762094902 & 0.340868095614857 & 0.331668129236460 \\ -0.0015 & -0.002371691775065 & 0.331029148831226 & 0.339651861654315 & 0.331029534152954 \\ -0.0005 & -0.000790563925022 & 0.329755328754714 & 0.339133498851769 & 0.329755752815905 \\ 0 & 0.000000000000000 & 0.329120118025352 & 0.338246665341653 & 0.329120562698337 \\ 0.0005 & 0.000790563925022 & 0.328486037563152 & 0.337667153845205 & 0.328486503712013 \\ 0.0015 & 0.002371691775065 & 0.327221259643971 & 0.336913730200477 & 0.327221771448801 \\ 0.002 & 0.003162255700087 & 0.326590558297188 & 0.336554330647556 & 0.326591094339721 \\ 0.003 & 0.004743383550130 & 0.325332521129667 & 0.335045167150194 & 0.325333108616048 \\ \hline \end{tabular} \label{table4} \caption{Lower Bounds and Upper Bound $\mbox{GAOUB}$ for Guaranteed Annuity Option under the Wishart Model Example 2. MC Simulations: 20000} \end{table} \subsubsection{Effect of a Change in Volatility Matrix $Q$} We now carry out an experiment to vary the off-diagonal elements of the volatility matrix $Q$ which we assume to be symmetric while specifying the initial value $X_{0}$ of the Wishart process. \bigskip \textbf{Example 3}. Here the Wishart process is as follows: \begin{equation}\label{10.14} Q = \begin{pmatrix} 0.06 & Q_{12} \\ Q_{12} & 0.006 \end{pmatrix},\;\; X_{0} = \begin{pmatrix} 0.01 & 0.001 \\ 0.001 & 0.001 \end{pmatrix}.\;\; \end{equation} Table 5 depicts the lower bound, the upper bound and the Monte Carlo estimate of the GAO price for different values of $Q_{12}$ and therefore for different values of the initial pairwise linear correlation coefficient $\rho_{0}$. The results obtained show that the value of the GAO price and the corresponding bounds do not show a monotone behaviour in respect of the linear correlation between mortality and interest rate risks. The tightness of the bounds around the Monte Carlo estimate still remains intact. These observations are echoed in Figure 7. In addition Figure 5 reflects that the relative difference $\left(=\frac{|bound-MC|}{MC}\right)$ between the lower bound and the benchmark Monte Carlo estimate increases with an increase in the correlation $\rho_{0}$ between mortality and interest rate. For example looking at table 5, we see that the relative difference for $\mbox{GAOLB}$ increases from a meagre $0.2\%$ for $\rho_{0}=-0.3$ to about $7.7\%$ for a $\rho_{0}=0.3$. However, under the same set, the relative difference between the estimated GAO price and the upper bound increases and then there is a switch at $\rho_{0}=0.3$ and this gap begins to diminish. The last observation is also seen in Figure 6 for the absolute difference between the bounds and the MC estimate of GAO price. The reason for this behaviour of the GAO price lies in the structure of the matrix $Q^{T}Q$ (also see \cite{Deelestra}). It is clear that the diagonal elements of $Q^{T}Q$ increase with a rise in the absolute value of $Q_{12}$. 
A glance at the law of the Wishart process given in equation \eqref{51.17} and equations \eqref{10.12}-\eqref{10.13} brings out the fact that the drift and in particular the long term value of the positive factors of the Wishart process and in turn the drift of mortality and interest rate process is an increasing function of the absolute value of $Q_{12}$. Thus an upward rise in the value of $Q_{12}$ will enhance the positive factors. As a result, it is evident from equation \eqref{51.202} describing the GAO price in the Wishart case, that the exponential term containing $r_{t}$ and $\mu_{t}$ decreases when $Q_{12}$ moves away from zero and this causes the GAO price and corresponding bounds to diminish. Overall our numerical experiments provide strong evidence in support of the extremely adequate performance of our proposed bounds. \begin{table}[ht] \centering \begin{tabular}{|c||c||c|c|c|c|c|c|} \hline $\;\;Q_{12}\;\;$ & $\;\rho\;$ & $\;\;\;\;\;\mbox{GAOLB}^{\left(WIS\right)}\;\;\;\;$ & $\;\;MC\;\;\;\;\;\;\;\;\;\;\;$ & $\;\;\;\;\;\mbox{GAOUB}^{\left(WIS\right)}\;\;$ \\ \hline\hline -0.01 & -0.294220967543866 & 0.290016256883993 & 0.290601398401997 & 0.290593286187411 \\ -0.006 & -0.244746787719492 & 0.331837945818948 & 0.332421093218907 & 0.331843140669134 \\ -0.002 & -0.109938939767707 & 0.339526376457815 & 0.344143066326585 & 0.339526466816062 \\ 0.002 & 0.109938939767707 & 0.308919593324378 & 0.322579113504993 & 0.308928343737340 \\ 0.006 & 0.244746787719492 & 0.257040019380241 & 0.274376988651895 & 0.257891897298705 \\ 0.01 & 0.294220967543866 & 0.196440417823759 & 0.212744888444368 & 0.204994244625801 \\ \hline \end{tabular} \label{table5} \caption{Lower Bounds and Upper Bound $\mbox{GAOUB}$ for Guaranteed Annuity Option under the Wishart Model Example 3 with parameter choice in accordance with \cite{Deelestra}. MC Simulations: 20000} \end{table} \begin{figure}[h] \centering \includegraphics[scale=0.8]{mcirgao7.pdf} \end{figure} \begin{figure}[h] \centering \includegraphics[scale=0.8]{wishpb7.pdf} \end{figure} \subsection{Computational Speed of the Bounds} We summarize the time consumed in computation of the bounds and the Monte Carlo estimate in the following table. Further, these observations are portrayed in the figure that follows. \begin{table}[ht] \centering \begin{tabular}{c c c c c c c} \hline\hline\\ & & & Number of & Simulations & for & Monte Carlo \\ Example & GAOLB & GAOUB & 1000 & 10000 & 20000 & 50000 \\ \hline\hline MCIR & 0 & 1 & 44 & 352 & 696 & 1800 \\ Wishart 1 & 0 & 1 & 47 & 369 & 749 & 2100 \\ Wishart 2 & 0 & 1 & 49 & 379 & 757 & 2200 \\ Wishart 3 & 0 & 1 & 43 & 359 & 724 & 2000\\ \hline\hline \end{tabular} \label{table100} \caption{Time taken in seconds for Bounds and Simulations} \end{table} \begin{figure}[H] \centering \includegraphics[clip, trim=0.7cm 15.2cm 0.9cm 4cm, width=1.00\textwidth]{sim7.pdf} \caption{The CPU time (seconds) for MCIR and Wishart (average for 3 cases)} \label{fig2} \end{figure} All computations in Section 8 are carried out on a personal laptop with Intel(R) Core(TM) i5 CPU-M450 at 2.40 GHz and a RAM of 4.00 GB. \section{Conclusions} We have derived some very general bounds for the valuation of GAO's under the assumption of a prevailing correlation between mortality and interest rate risk. These bounds serve as a useful tool for financial institutions which are striving hard to find methodologies that offer efficient pricing of longevity linked securities. 
The techniques used in this paper are successful in circumventing the issue of dealing with sums of a large number of correlated variables. Moreover they are extremely useful in reducing the burden of dealing with cumbersome stochastic processes. The most successful finding of this research is that in the affine case, both the lower and the upper bound depend on the properties of the distribution of the random variables connected to the transformed stochastic processes underlying mortality and interest rate. Moreover the lower bound manifests itself in form of Laplace transform of the underlying random variable while the upper bound reveals itself in the form of the associated characteristic function. Both of these tools are the most conveniently obtained vital statistics for any distribution. The most satisfying aspect is that we need to work in one dimension, in contrast to what would have been atleast a 34-dimensional set up, assuming that a person lives atleast 100 years making $n=35$. Another feather in the cap of the bounds is their computational speed. As indicated in the previous section, the Monte Carlo method is extremely slow for large number of simulations in case of sophisticated models. As a result given the same time budget, Monte Carlo estimates are deemed to be extremely inaccurate. Moreover for highly sophisticated multivariate distributions like non-central Wishart, generating random samples generally involves complex algorithms, which are not inbuilt in libraries of packages such as MATLAB. It is indeed very satisfactory that our lower bound takes just 0.133 or 0.192 seconds on an average to execute in the MCIR and Wishart case while an average of about 0.286 or 0.659 seconds are required by the upper bound respectively in the two cases. In other words pricing of complex GAOs can be done in no time. Last but not the least the fact that the upper bound performs much better in the case of Wishart is a noteworthy observation since the Wishart model is much more intricate in terms of gauging the mesh of correlation between mortality and interest rate. The sensitivity analysis done in this article reiterates the fact that it is not possible to explain the value of a GAO completely in terms on the initial pairwise linear correlation between mortality and interest rate risks as highlighted by the Wishart model (c.f. \cite{Deelestra} for earlier work in this direction). This finding sends alarm signals for the risk management in the presence of an unknown dependence as various scenarios are possible. If the prices of a GAO increase with the (initial) linear correlation coefficient as in the multi-CIR model or the Wishart specifications in Example 1, then the most risk-averse methodology when pricing a GAO would be to take the linear correlation coefficient equal to unity. This will protect the seller from an awkward scenario of underestimation of the GAO price in the event of a high correlation. However, Example 2 of the Wishart case, presents an opposite scenario where prices decrease with increasing initial linear correlation and therefore, risk-adverse seller would adopt the opposite rule in that situation. Example 3 in the Wishart case portrays prices which are not monotone with respect to the correlation, but which seem to lead to the highest prices for zero correlation. Therefore, in this situation, choosing zero correlation might be the appropriate risk-averse choice. 
In fact the Wishart model comes across the most versatile model presenting all possible dependence scenarios. The methodology proposed in this paper is extremely flexible and can be easily extended to value other insurance products such as indexed annuities or to instruments with option embedded features such as equity-linked annuities, equity-indexed annuities and variable annuities.
1,108,101,564,742
arxiv
\section{Introduction} In a recent publication, the important ratio $F_K/F_\pi$ was evaluated in a scheme that allows for the derivation of compact analytic approximations in two loop chiral perturbation theory (ChPT) \cite{Ananthanarayan:2017qmx}, based on the Mellin-Barnes (MB) approach detailed in \cite{Ananthanarayan:2016pos}. In a prior work, a different scheme was employed to obtain analytic approximations of $m_\pi$ and $F_\pi$ in $SU(3)$ ChPT at two-loops \cite{Ananthanarayan:2017yhz}. Recall that ChPT is an effective field theory for the pseudo-scalar octet degrees of freedom, namely the pions, kaons and eta. At one-loop order, this theory was elucidated in \cite{Gasser:1983yg, Gasser:1984gg}. At two-loop order, the $SU(2)$ theory with just the pion degrees of freedom was worked out in \cite{Bijnens:1997vq}, while the significantly more complicated $SU(3)$ theory has been described in \cite{Bijnens:2006zp}. For many observables and processes of interest in the $SU(2)$ theory, there is a single mass scale in the problem when isospin violation and electromagnetic corrections are neglected, namely the pion mass. At the two-loop order, integrals that arise in this context have been discussed in \cite{Gasser:1998qt}. In the $SU(3)$ theory, all three masses of the pseudoscalar mesons may appear in quantities of interest. Of the relevant integrals in the latter case, the self-energy diagram, which is known as the sunset, may be represented as: \begin{align} H_{\{\alpha,\beta,\gamma\}}^d \{m_1,m_2,m_3; p^2\} = \frac{1}{i^2} \int \frac{d^dq}{(2\pi)^d} \frac{d^dr}{(2\pi)^d} \frac{1}{[q^2-m_1^2]^{\alpha} [r^2-m_2^2]^{\beta} [(q+r-p)^2-m_3^2]^{\gamma}} \label{Eq:SunsetDef} \end{align} In our conventions for dimensional regularisation, $d=4-2\epsilon$. \begin{figure}[hbtp] \centering \includegraphics[scale=0.6]{figsunset.eps} \caption{Sunset diagram} \label{sunset} \end{figure} Tarasov established that integration by parts allows one to express all sunset integrals using a minimal set of four master integrals (MI) \cite{Tarasov:1997kx}. For some configurations, such as the one mass scale case, the representation of the sunset in terms of MI may require fewer than the full set of four. For quantities of interest in $SU(3)$ ChPT, such as the masses and decay constants of the pion, kaon and eta (denoted by $m_\pi$, $F_\pi$, $m_K$, $F_K$, $m_\eta$ and $F_\eta$, respectively), several variations of the basic sunset diagram of Eq.(\ref{Eq:SunsetDef}) appear in their expressions. These include $H_1$, $H_{21}$ and $H_{22}$, which are the scalar integrals appearing from the Passarino-Veltman decomposition of the tensor sunset integrals: \begin{align} & H_{\mu}^d = p_{\mu} H_{1} \nonumber \\ & H_{\mu \nu}^d = p_{\mu} p_{\nu} H_{21} + g_{\mu \nu} H_{22} \end{align} where \begin{align} H_{\mu}^d \{m_1,m_2,m_3; p^2\} = \frac{1}{i^2} \int \frac{d^dq}{(2\pi)^d} \frac{d^dr}{(2\pi)^d} \frac{q_{\mu}}{[q^2-m_1^2] [r^2-m_2^2] [(q+r-p)^2-m_3^2] } \nonumber \\ \nonumber \\ H_{\mu \nu}^d \{m_1,m_2,m_3; p^2\} = \frac{1}{i^2} \int \frac{d^dq}{(2\pi)^d} \frac{d^dr}{(2\pi)^d} \frac{q_{\mu} q_{\nu}}{[q^2-m_1^2] [r^2-m_2^2] [(q+r-p)^2-m_3^2] } \end{align} These may be expressed in terms of the MI. Similarly, while the meson masses require the evaluation of only basic sunset integrals, the decay constants also need calculation of the derivatives of the sunsets with respect to the square of the external momentum. 
However, it is possible to discuss both the mass and decay constant on an equal footing by reducing the task to evaluating just the MI. It is of interest to obtain representations of the $m_P$ and $F_P$ that do not require numerical evaluation of the higher order loop integrals. Such analytic approaches allow for more widespread use of these expressions, and facilitate cross-disciplinary studies. An example of the latter would be comparisons with lattice simulations as the quark masses are varied in order to obtain insights into the behaviour of these quantities. In \cite{Ecker:2010nc, Ecker:2013pba}, for example, an approximation for $F_K/F_\pi$ was obtained by means of a large-$N$ approach at the Lagrangian level, and the resulting expression was used to fit lattice data to extract values of several ChPT parameters. In \cite{Kaiser:2007kf}, $m_\pi$ was treated by taking an approximation at the level of the loop integral \cite{Kaiser:2007kf}. In \cite{Ananthanarayan:2017yhz}, some of the present authors extended the former work to the case of $F_\pi$, and were able to integrate out the s-quark from the expressions of the pion mass and decay constant, in the same way as the $SU(3)$ ChPT reduces to the $SU(2)$ version, reproducing known results in the chiral limit, as well as evaluating the departure from the chiral limit to leading order in the light quark mass. The goal of this paper is in the same spirit of furthering studies in the area of analytic approaches to observables and other quantities of interest. In particular, the aims of the current work are: \begin{itemize} \item To extend the work of \cite{Kaiser:2007kf, Ananthanarayan:2017yhz} to the case of the kaon and eta, and provide approximate analytic expressions for $m_K$, $m_\eta$, $F_K$ and $F_\eta$ that are easily amenable to fitting with lattice data. \item To provide exact (non-approximate) two loop analytic expressions for all the pseusoscalar meson masses and decay constants. \item To perform a first order study of the numerics of $m_K$, $m_\eta$, $F_K$ and $F_\eta$ to determine the relative contributions to them of their different components, as well as their dependence on the values of parameters such as the low energy constants of ChPT. \end{itemize} Although this work is a sequel to \cite{Ananthanarayan:2017yhz}, the approach taken here is completely different and novel. In the aforementioned work, as well as in \cite{Kaiser:2007kf}, the three mass scale sunset integrals were approximated by taking an expansion in the external momentum $p^2=m_\pi^2$ around zero. In the case of the kaon and eta, however, such an expansion around $p^2=m_K^2$ or $p^2=m_\eta^2$ results in a poorly converging series due to the presence of the much smaller $m_\pi^2$ in the propagator. An expansion about the propagator mass $m_\pi^2 = 0$ is also not feasible as this gives rise to an infrared divergence. In this work, therefore, we turn to the MB approach to evaluate the three-mass scale sunset integrals. The analytic evaluation of a sunset integral depends on its mass configuration. For special cases, with upto two distinct mass scales, closed form results are available \cite{Berends:1997vk,Czyz:2002re,Davydychev:1992mt} \footnote{By complete results, we refer only to the finite $\mathcal{O}(\epsilon^0)$ term obtained using dimensional regularisation. The $\mathcal{O}(\epsilon^{-1})$ and $\mathcal{O}(\epsilon^{-2})$ terms are known exactly for all mass configurations \cite{Davydychev:1992mt}.}. 
With two mass scales not falling into the threshold or pseudo-threshold configurations \cite{Ananthanarayan:2016pos}, or with three distinct mass scales, the sunsets cannot be written in terms of elementary functions. In fact, the sunset diagram with three different masses and arbitrary $p^2$ cannot even be expressed entirely in terms of multiple polylogarithms. In \cite{Berends:1993ee}, for non-zero $\epsilon$, expressions in terms of Lauricella functions are given but, as emphasized in \cite{Adams:2016sob}, none of the present methods allows for an expansion of the Lauricella functions in terms of multiple polylogarithms. Indeed, it seems that, as shown in \cite{Adams:2015gva}, the sunset diagram requires the introduction of yet another generalisation of the polylogarithms, the so-called elliptic polylogarithms (see also \cite{Ablinger:2017bjx} for more details on elliptic integrals in the context of sunset integrals). In this work, we adopt a more utilitarian approach to get the analytical expressions needed for our analysis. We keep to the spirit of \cite{Berends:1993ee} where, once the $\epsilon\rightarrow 0$ limit of the Lauricella functions is taken, one stays with triple series in powers and logarithms of the mass ratios that one may truncate to achieve a desired level of accuracy. Note, however, that the expressions given in \cite{Berends:1993ee} cannot be used to obtain an analytic expression of the sunset valid in the context of ChPT, since the triple series given there do not converge for the physical values of the pseudo-scalar meson masses. In this work, we therefore present the full analytic expressions valid for the kaon and eta masses and decay constants in terms of double infinite series involving two mass ratios, thus completing the programme first initiated in \cite{Post:1996gg} of evaluating the three mass scale sunset diagrams in $SU(3)$ ChPT \footnote{Some years ago, there had been an attempt to pursue such a programme \cite{David1}. However, the investigations were not completed and publications did not result \cite{David2}.}. Here, we present only the results, and the complete derivation will be given in an upcoming paper \cite{Ananthanarayan:2018}. An overview of the method used is given in \cite{Ananthanarayan:2016pos}, and detailed descriptions can be found in \cite{Friot:2011ic,Aguilar:2008qj}. One of the possible applications of fully analytic representations of the quantities considered here is their use to obtain different analytic approximations that may easily be fitted with data coming from various lattice simulations in conjunction with various lattice data. In addition to the full results, we therefore also present a set of analytic approximations that may easily be fitted with data from lattice simulations. These approximations are obtained by suitably truncating the infinite series so that their omitted tails are numerically smaller than a chosen percentage of the central value obtained from the exact expression\footnote{By exact expression we mean the partial sum where a big number of terms is retained such that it is assured that by adding more terms the corresponding numerical result stays stable within the standard numerical precision of \texttt{Mathematica}.} for the inputs being considered. And as these inputs will depend on the precise lattice set being used, the approximations will need to be changed accordingly. 
Therefore, we present along with this paper, a set of supplementary \texttt{Mathematica} files that automates the task of finding a suitable truncation for the sunsets, given a set of (lattice) input masses and a permissible error threshold value. The lattice expressions given in Section~\ref{Sec:LatticeFits} are suitable for fits with the data given in \cite{Durr:2016ulb}, and an illustrative fit with these expressions is presented in \cite{Ananthanarayan:2017qmx}. The structure of this paper is as follows. In Section~\ref{Sec:MI}, we present our notation and a short discussion on convergence properties of our series representations of the sunset integrals. In Section~\ref{Sec:KaonMass}, the expression for the kaon mass is given, in Section~\ref{Sec:KaonDecay} the same is given for the kaon decay constant, in Section~\ref{Sec:EtaMass} we give the expression for the eta mass, and in Section~\ref{Sec:EtaDecay} we give the expression for the eta decay constant. In these sections, the expressions presented are simplified using the Gell-Mann-Okubo (GMO) formula. The same expressions, in which the eta mass has been retained, are presented in Appendix~\ref{Sec:NonGMOExpr}. Simplified analytic results for each of the two sets of four master integrals appearing in the expressions for the kaon and eta masses and decay constants, obtained from the ancillary \texttt{Mathematica} code, are discussed in Section~\ref{Sec:NumAnalysis}, while full results for these master integrals are given in Appendix~\ref{Sec:SunsetResults}. Numerical implications of the expressions presented in the paper are also shown in Section~\ref{Sec:NumAnalysis}. This work closes by presenting a set of compact expressions for each of $m_K^2$, $m_\eta^2$, $F_K$ and $F_\eta$, which may be used for easy fitting with lattice data, in Section~\ref{Sec:LatticeFits}, which is followed by a detailed summary and conclusion section. In Appendix~\ref{Sec:PionSunsets}, we explain how exact expressions may be obtained for $m_\pi$ and $F_\pi$, and also provide a set of truncated three mass scale sunset results that may be used to produce approximate analytic expressions for $m_\pi$, $F_\pi$ and $F_K/F_\pi$. \section{Sunset Master Integrals: notation and convergence properties of the series representations \label{Sec:MI}} The four three-mass-scale sunset master integrals that arise in the kaon mass and decay constant expressions are: \begin{align} \nonumber H_{\{1,1,1\}}^d\{ m_K, m_{\pi},m_{\eta}; p^2=m_K^2\},\\ \nonumber H_{\{2,1,1\}}^d\{ m_K, m_{\pi},m_{\eta}; p^2=m_K^2\},\\ \nonumber H_{\{1,2,1\}}^d\{ m_K, m_{\pi},m_{\eta}; p^2=m_K^2\}, \\ H_{\{1,1,2\}}^d\{ m_K, m_{\pi},m_{\eta}; p^2=m_K^2\}. \end{align} and the three independent three-mass-scale sunset master integrals that arise in the eta mass and decay constant expressions are: \begin{align} \nonumber H_{\{1,1,1\}}^d\{ m_\pi, m_K, m_K; p^2=m_\eta^2 \},\\ \nonumber H_{\{2,1,1\}}^d\{ m_\pi, m_K, m_K; p^2=m_\eta^2\},\\ H_{\{1,2,1\}}^d\{ m_\pi, m_K, m_K; p^2=m_\eta^2\}. \end{align} In ChPT, the renormalisation is normally done using a modified form of the $\overline{MS}$ scheme, and involves multiplying the sunset integral with the factor $(\mu^{2}_{\chi})^{4-d}$, where: \begin{align} \mu^2_{\chi} \equiv \mu^2 \frac{e^{\gamma_E - 1}}{4\pi} \end{align} We therefore define: \begin{align} H^{\chi} \equiv (\mu^2_{\chi})^{4-d} H^d \end{align} which is the sunset integral suitably renormalised. 
In this paper, we denote the sunset integrals normalised using the $\overline{MS}_{\chi}$ scheme by $H^{\chi}$, to differentiate it from the unrenormalised sunset integral $H^d$ defined in Eq.(\ref{Eq:SunsetDef}). This renormalisation introduces in each of these integrals terms containing chiral logarithms: \begin{align} l_{P}^{r} = \frac{1}{2(4\pi)^2}\log \left[ \frac{m_P^2}{\mu^2} \right], \; \; \; \; \; \; \; \; \; \; P = \pi, K, \eta . \end{align} We denote the chiral log terms of a sunset integral using a $\log$ superscript, i.e. $H^{\log}$, and the rest of the integral by a bar, i.e. $\overline{H}$. Therefore we have, for instance: \begin{align} H^{\chi} \equiv \overline{H}^{\chi} + H^{\chi,\log} \end{align} We also adopt the notation of \cite{Kaiser:2007kf} and denote the sunset integrals in this paper by means of the abbreviation: \begin{align} H_{aP \, bQ \, cR} \equiv H_{\{a,b,c\}} \{ m_P, m_Q, m_R; p^2 = m_{K}^2 \} \end{align} where we normally omit the numerical index $a,b,c$ on the LHS of the above when their values are unity, and with either the $\log$ superscript or bar over the $H$, as well as either a $\chi$ or a $d$ superscript on it, as appropriate. The $H^{\chi,\log}$ for the master integrals considered in this paper are given by: \begin{align} & H^{\chi,log}_{P \, Q \, R} = 4 m_P^2 (l^r_P)^2 + 4 m_Q^2 (l^r_Q)^2 + 4 m_R^2 (l^r_R)^2 - \frac{m_P^2}{8\pi^2} l^r_P - \frac{m_Q^2}{8\pi^2} l^r_Q -\frac{m_R^2}{8\pi^2} l^r_R + \frac{s}{16\pi^2} l^r_{s} \nonumber \\ & H^{\chi,log}_{2P \, Q \, R} = 4 (l^r_P)^2 + \frac{1}{8\pi^2} l^r_P \nonumber \\ & H^{\chi,log}_{P \, 2Q \, R} = 4 (l^r_Q)^2 + \frac{1}{8\pi^2} l^r_Q \nonumber \\ & H^{\chi,log}_{P \, Q \, 2R} = 4 (l^r_R)^2 + \frac{1}{8\pi^2} l^r_R \end{align} where $s=p^2$, and $l^r_s = l^r_K$ or $l^r_\eta$ as the case may be. The full expressions for these master integrals are given in Appendix \ref{Sec:SunsetResults} in the form of linear combinations of independent terms, single infinite series, and double infinite series. These series do not converge for all values of the masses. Rather, they converge for values of the masses that satisfy the following set of inequalities: $(m_{\pi} < m_{\eta}) \bigwedge (m_{\pi} + m_{\eta} < 2m_{K})$, which is graphically described by the blue areas in Figure~\ref{Fig:RegOfConv}, plotted with two different sets of mass ratio axes. The black line in the left panel denotes mass values that obey the GMO formula, while the red dot in both panels marks the physical values of the meson masses. \begin{figure} \centering \begin{minipage}{0.45\textwidth} \includegraphics[scale=0.5]{convergence.eps} \end{minipage} ~~ \begin{minipage}{0.45\textwidth} \includegraphics[scale=0.5]{convergence2.eps} \end{minipage} \caption{Region of convergence for results presented in Appendix~\ref{Sec:SunsetResults} (blue area).} \label{Fig:RegOfConv} \end{figure} \section{The pseudoscalar meson masses and decay constants} \label{Sec:MainResults} The expressions for the pseudo-scalar meson masses is given up to two loops in \cite{Amoros:1999dp} as: \begin{align} m^2_{P} = m^{2}_{P0} + \left( m^{2}_{P} \right)^{(4)} + \left( m^{2}_{P} \right)^{(6)}_{CT} + \left( m^{2}_{P} \right)^{(6)}_{loop} + \mathcal{O}(p^8) \label{NNLOMass} \end{align} where $P$ is the particle in question. 
The model independent $\mathcal{O}(p^6)$ contribution can be subdivided as: \begin{align} F_{\pi}^4 \left( m^{2}_{P} \right)^{(6)}_{loop} = c_{sunset}^{P} + c_{log \times log}^{P} + c_{log}^{P} + c_{log \times L_i}^{P} + c_{L_i}^{P} + c_{L_i \times L_j}^{P} \end{align} where the $c_{log}^{P}$ represents the terms containing the chiral logarithms, $c_{log \times log}^{P}$ are the bilinear chiral log terms, $c_{L_i}^{P}$ are those terms proportional to the low energy constants $L_i$, $c_{L_i \times L_j}^{P}$ are terms bilinear in the LECs, and $c_{log \times L_i}^{P}$ are those terms that contain a product of the low energy constants and a chiral logarithm. The expressions and breakup for their decay constants is similar: \begin{align} \frac{F_P}{F_0} = 1 + F_P^{(4)} + \left( F_P \right)^{(6)}_{CT} + \left( F_P \right)^{(6)}_{loop} + \mathcal{O}(p^8) \label{NNLODecay} \end{align} where: \begin{align} F_{\pi}^4 \left( F_P \right)^{(6)}_{loop} = d_{sunset}^{P} + d_{log \times log}^{P} + d_{log}^{P} + d_{log \times L_i}^{P} + d_{L_i}^{P} + d_{L_i \times L_j}^{P} \end{align} In the following sections, we give explicit expressions for each of the terms above for the kaon mass and decay constant. The expressions have been simplified by the use of the GMO relation to rewrite the eta mass in terms of the pion and kaon masses, except in the chiral logarithms, in which the eta mass is understood to mean $ m_{\eta0}^2 = (4 m_{K0}^2-m_{\pi0}^2)/3$. The full expressions, not simplified by use of the GMO relation, are given in Appendix~\ref{Sec:NonGMOExpr} . \subsection{The kaon mass \label{Sec:KaonMass}} In this section, we use the expression for the kaon mass to two-loops given in Eqs.(A.5)-(A.7) of \cite{Amoros:1999dp}, and rewrite the linear log terms, the bilinear log terms, and the terms involving the sunset integrals (e.g. $H^F$, $H_1^F$) in Eq.(A.7) of \cite{Amoros:1999dp} by applying Tarasov's relations \cite{Tarasov:1997kx}. The kaon mass is given in \cite{Amoros:1999dp} as: \begin{align} m^2_{K} = m^{2}_{K0} + \left( m^{2}_{K} \right)^{(4)} + \left( m^{2}_{K} \right)^{(6)}_{CT} + \left( m^{2}_{K} \right)^{(6)}_{loop} + \mathcal{O}(p^8) \end{align} The expressions given here have been simplified using the GMO relation. See Appendix~\ref{Sec:NonGMOExpr} for the full results. 
The full form of the components above are given by: \begin{align} m^{2}_{K} = B_0 \left( m_s + \hat{m} \right) \label{cTreeMass} \end{align} \begin{align} \frac{F_{\pi}^2}{m_K^2} \left( m^{2}_{K} \right)^{(4)} = 8(m_{\pi}^2 + 2 m_K^2)(2 L_6^r - L_4^r) + 8 m_K^2 (2L_8^r - L_5^r) + \frac{2}{9} \left(4 m_{K}^2-m_{\pi}^2\right) l^r_{\eta} \label{KaonMassNLOContrib} \end{align} \begin{align} \frac{F_{\pi}^2}{m_K^2} \left( m^{2}_{K} \right)^{(6)}_{CT} =& -32 m_{K}^4 C^r_{12} - 32 m_{K}^2 \left(2 m_{K}^2+m_{\pi}^2\right) C^r_{13} - 16 \left(2 m_{K}^4-2 m_{K}^2 m_{\pi}^2+m_{\pi}^4\right) C^r_{14} \nonumber \\ & -16 m_{K}^2 \left(2 m_{K}^2+m_{\pi}^2\right) C^r_{15} -16 \left(4 m_{K}^4-4 m_{K}^2 m_{\pi}^2+3 m_{\pi}^4\right) C^r_{16} \nonumber \\ & +16 m_{\pi}^2 \left(m_{\pi}^2-2 m_{K}^2\right) C^r_{17} + 48 \left(2 m_{K}^4-2 m_{K}^2 m_{\pi}^2+m_{\pi}^4\right) C^r_{19} \nonumber \\ & +16 \left(8 m_{K}^4-2 m_{K}^2 m_{\pi}^2+3 m_{\pi}^4\right) C^r_{20} + 48 \left(2 m_{K}^2+m_{\pi}^2\right)^2 C^r_{21} \nonumber \\ & +32 m_{K}^4 C^r_{31} + 32 m_{K}^2 \left(2 m_{K}^2+m_{\pi}^2\right) C^r_{32} \label{cCT} \end{align} and \begin{align} F_{\pi}^4 \left( m^{2}_{K} \right)^{(6)}_{loop} = c^{K}_{L_i} + c^{K}_{L_i \times L_j} + c^{K}_{log \times L_i} + c^{K}_{log} + c^{K}_{log \times log} + c^{K}_{sunset} \end{align} where \begin{align} 27 (16 \pi^2) c^{K}_{L_i} =& 108 m_K^6 L_1^r + 6 m^2_K \left(61 m_K^4-8 m_K^2 m_{\pi}^2+28 m_{\pi}^4\right) L^r_2 + m^2_K \left(89 m_K^4 - 4 m_K^2 m_{\pi}^2 + 41 m_{\pi}^4 \right) L_3^r \nonumber \\ & -32 m^2_K \left( m_K^2 - m_{\pi}^2\right)^2 \left( L^r_5 -12 L^r_7 -6 L^r_8 \right) \end{align} \begin{align} c^{K}_{L_i \times L_j} =& -128 \left(4 m_K^6 + 4 m_K^4 m_{\pi}^2 + m_K^2 m_{\pi}^4\right) (L_4^r)^2 - 128 \left(3 m_K^6 + 2 m_K^4 m_{\pi}^2 + m_K^2 m_{\pi}^4\right) L_4^r L_5^r \nonumber \\ & + 512 \left(4 m_K^6 + 4 m_K^4 m_{\pi}^2 + m_K^2 m_{\pi}^4\right) L_4^r L_6^r + 128 \left(8 m_K^6 + 3 m_K^4 m_{\pi}^2 + m_K^2 m_{\pi}^4\right) L_4^r L_8^r \nonumber \\ & - 64 \left(m_K^6 + m_K^4 m_{\pi}^2\right)(L_5^r)^2 + 256 \left(3 m_K^6 + 2 m_K^4 m_{\pi}^2 + m_K^2 m_{\pi}^4\right) L_5^r L_6^r \nonumber \\ & + 128 \left(3 m_K^6 + m_K^4 m_{\pi}^2\right) L_5^r L_8^r -512 \left(4 m_K^6 + 4 m_K^4 m_{\pi}^2 + m_K^2 m_{\pi}^4\right) (L_6^r)^2 \nonumber \\ & - 256 \left(8 m_K^6 + 3 m_K^4 m_{\pi}^2 + m_K^2 m_{\pi}^4\right) L_6^r L_8^r - 512 m_K^6 (L_8^r)^2 \label{cLiLj} \end{align} \begin{align} c^{K}_{log \times L_i} =& 2 m_K^2 m_{\pi}^2 \bigg\{ -3 m_{\pi}^2 (16 L^r_{1}+4 L^r_{2}+5 L^r_{3})+4 \left(8 m_K^2+17 m_{\pi}^2\right) L^r_{4} + 4 \left(4 m_K^2+3 m_{\pi}^2\right) (L^r_{5}-2 L^r_{8}) \nonumber \\ & \quad -8 \left(8 m_K^2+11 m_{\pi}^2\right) L^r_{6} \bigg\} l_{\pi}^r \nonumber \\ & - 4 m_K^4 \bigg\{ m_{K}^2 (36 L^r_{1}+18 L^r_{2}+15 L^r_{3}-16 L^r_{5}+32 L^r_{8})-4 \left(10 m_{K}^2+m_{\pi}^2\right) L^r_{4} + 8 \left(8 m_{K}^2+m_{\pi}^2\right) L^r_{6} \bigg\} l_K^r \nonumber \\ & - \frac{2}{9} m_K^2 \bigg\{ \left(4 m_K^2-m_{\pi}^2\right)^2 (16 L^r_{1}+4 L^r_{2}+7 L^r_{3})-12 L^r_{4} \left(32 m_K^4-12 m_K^2 m_{\pi}^2+m_{\pi}^4\right) \nonumber \\ & \quad -4 \left(32 m_K^4-2 m_K^2 m_{\pi}^2-3 m_{\pi}^4\right) L^r_{5} + 8 \left(64 m_K^4-20 m_K^2 m_{\pi}^2+m_{\pi}^4\right) L^r_{6} + 96 m_{\pi}^2 \left(m_K^2-m_{\pi}^2\right) L^r_{7} \nonumber \\ & \quad +8 \left(32 m_K^4-6 m_K^2 m_{\pi}^2-5 m_{\pi}^4\right) L^r_{8} \bigg\} l_{\eta}^r \end{align} \begin{align} \left(-16\pi^2\right) c^{K}_{log} =& \left( \frac{11}{4} m_{K}^4 m_{\pi}^2 + \frac{455}{144} m_{K}^2 m_{\pi}^4 \right) l^r_{\pi} + 
\left( \frac{148}{27} m_{K}^6 - \frac{5}{4} m_{K}^4 m_{\pi}^2 - \frac{13}{432} m_{K}^2 m_{\pi}^4 \right) l^r_{\eta} \nonumber \\ & + \left( \frac{41}{18} m_{K}^4 m_{\pi}^2 + \frac{487}{72} m_{K}^6 \right) l^r_{K} \label{cBarlog} \end{align} \begin{align} c^{K}_{log \times log} =& \left(-\frac{11}{81} m_{K}^6 - \frac{47}{81} m_{K}^4 m_{\pi}^2 + \frac{1279}{1296} m_{K}^2 m_{\pi}^4 - \frac{5}{24} m_{\pi}^6 \right) (l^r_{\eta})^2 \nonumber \\ & + \left( \frac{14}{9} m_{K}^6 + \frac{19}{18} m_{K}^4 m_{\pi}^2 -\frac{1}{4} m_{K}^2 m_{\pi}^4 \right) l^r_{\eta} l^r_{K} + \left(\frac{4}{9} m_{K}^4 m_{\pi}^2 + \frac{43}{6} m_{K}^6 \right) (l^r_{K})^2 \nonumber \\ & - \left(\frac{3}{2} m_{K}^4 m_{\pi}^2 - \frac{1}{4} m_{K}^2 m_{\pi}^4 \right) l^r_{\pi} l^r_{K} + \left(\frac{1}{2} m_{K}^4 m_{\pi}^2 + \frac{169}{48} m_{K}^2 m_{\pi}^4 - \frac{5}{24} m_{\pi}^6 \right) (l^r_{\pi})^2 \nonumber \\ & - \left(\frac{55}{18} m_{K}^4 m_{\pi}^2 + \frac{97}{72} m_{K}^2 m_{\pi}^4 - \frac{5}{12} m_{\pi}^6 \right) l^r_{\pi}l^r_{\eta} \label{cBarloglog} \end{align} The expressions of Eqs.(\ref{cBarlog})-(\ref{cBarloglog}) above are a combination of the linear and bilinear chiral logarithms arising from the evaluation of the sunset integrals, those stemming directly from the $\mathcal{O}(p^6)$ kaon mass expression, as well as contributions arising from the $\mathcal{O}(p^4)$ term due to application of the GMO relation. Similarly, the $c^{K}_{L_i}$ and $c^{K}_{log \times L_i}$ components are also made up of terms taken directly from the $\mathcal{O}(p^6)$ kaon mass expression, and contributions arising from the $\mathcal{O}(p^4)$ term due to application of the GMO relation. The $c^{K}_{sunset}$ term itself has the following contributions to it, where the terms of the first line are a combination of terms from the kaon mass expression as well as from the single mass sunset integrals: \begin{align} c^{K}_{sunset} = \frac{1}{\left(16 \pi ^2\right)^2} & \bigg\{ \left(\frac{767}{108}+\frac{427 \pi^2}{1296}\right) m_{K}^4 m_{\pi}^2 - \left(\frac{12307}{3456}+\frac{275 \pi ^2}{648}\right) m_{K}^6 - \left(\frac{571}{288}+\frac{59 \pi ^2}{216}\right) m_{K}^2 m_{\pi}^4 \nonumber \\ & - \left(\frac{49}{72}+\frac{\pi ^2}{48}\right) m_{\pi}^6 \bigg\} + c^{K}_{K \pi \pi} + c^{K}_{K \eta \eta} + c^{K}_{K \pi \eta} \end{align} where \begin{align} c^{K}_{K \pi \pi} &= \left(-\frac{3}{32} m_{K}^4 + \frac{9}{16} m_{K}^2 m_{\pi}^2 + \frac{9}{32} m_{\pi}^4 \right) \overline{H}^{\chi}_{K \pi \pi} + \left(\frac{3}{8} m_{K}^6 - \frac{3}{8} m_{K}^2 m_{\pi}^4 \right) \overline{H}^{\chi}_{2K \pi \pi} \label{cBarkpp} \end{align} \begin{align} c^{K}_{K \eta \eta} &= \left(\frac{289}{288} m_{K}^4 -\frac{41}{48} m_{K}^2 m_{\pi}^2 + \frac{5}{32} m_{\pi}^4 \right) \overline{H}^{\chi}_{K \eta \eta} + \left(-\frac{73}{72} m_{K}^6 + \frac{11}{9} m_{K}^4 m_{\pi}^2 - \frac{5}{24} m_{K}^2 m_{\pi}^4 \right) \overline{H}^{\chi}_{2K \eta \eta} \end{align} \begin{align} c^{K}_{K \pi \eta} &= \left( \frac{17}{16} m_{K}^4 - \frac{17}{24} m_{K}^2 m_{\pi}^2 + \frac{7}{48} m_{\pi}^4 \right) \overline{H}^{\chi}_{K \pi \eta} - \left( m_{K}^4 m_{\pi}^2 - \frac{5}{4} m_{K}^2 m_{\pi}^4 + \frac{1}{4} m_{\pi}^6 \right) \overline{H}^{\chi}_{K 2\pi \eta} \nonumber \\ & - \left( \frac{1}{3} m_{K}^6 - \frac{7}{36} m_{K}^4 m_{\pi}^2 - \frac{7}{36} m_{K}^2 m_{\pi}^4 + \frac{1}{18} m_{\pi}^6 \right) \overline{H}^{\chi}_{K \pi 2\eta} \end{align} The terms $c^{K}_{K \pi \pi}$, $c^{K}_{K \eta \eta}$ and $c^{K}_{K \pi \eta}$ are the result of applying Tarasov's relations to the variety of
sunset integrals appearing in Eq.(A.7) of \cite{Amoros:1999dp} and rewriting them in terms of the master integrals given in Appendix~\ref{Sec:SunsetResults}. \subsection{The kaon decay constant \label{Sec:KaonDecay}} The treatment of the kaon decay constant is similar to that of the kaon mass in the previous section, except that the expression for the kaon decay constant also involves derivatives of the sunsets with respect to the external momentum. The kaon decay constant to two loops is given in Eqs.(A.15)-(A.17) of \cite{Amoros:1999dp} as: \begin{align} \frac{F_K}{F_0} = 1 + F_K^{(4)} + \left( F_K \right)^{(6)}_{CT} + \left( F_K \right)^{(6)}_{loop} + \mathcal{O}(p^8) \end{align} where: \begin{align} F_{\pi}^2 F_K^{(4)} = 4\left(2 m_{K}^2+m_{\pi}^2\right) L_4^r + 4 m_{K}^2 L_5^r - \frac{3}{4} m_{\pi}^2 l_{\pi}^r - \frac{3}{2} m_{K}^2 l^r_{K} -\frac{1}{4} \left(4 m_{K}^2-m_{\pi}^2\right) l^r_{\eta} \label{KaonDecayNLOContrib} \end{align} \begin{align} F_{\pi}^4 \left( F_K \right)^{(6)}_{CT} &= 8 \left(2 m_{K}^4-2 m_{K}^2 m_{\pi}^2+m_{\pi}^4\right) C^r_{14} + 8 m_{K}^2 \left(2 m_{K}^2+m_{\pi}^2\right) C^r_{15} \nonumber \\ & + 8 \left(4 m_{K}^4-4 m_{K}^2 m_{\pi}^2+3 m_{\pi}^4\right) C^r_{16} + 8 m_{\pi}^2 \left(2 m_{K}^2-m_{\pi}^2\right) C^r_{17} \label{dCT} \end{align} and \begin{align} F_{\pi}^4 \left( F_K \right)^{(6)}_{loop} = d^K_{L_i} + d^K_{L_i \times L_j} + d^K_{log \times L_i} + d^K_{log} + d^K_{log \times log} + d^K_{sunset} \end{align} where: \begin{align} - 54(16\pi^2) d^K_{L_i} =& 108 m_{K}^4 L_1^r + 6 \left(61 m_{K}^4 - 8 m_{K}^2 m_{\pi}^2 + 28 m_{\pi}^4 \right) L_2^r + \left(89 m_{K}^4 - 4 m_{K}^2 m_{\pi}^2 + 41 m_{\pi}^4 \right) L_3^r \nonumber \\ & - 72 \left(m_{K}^2-m_{\pi}^2\right)^2 \left( L^r_{5} - 12 L^r_{7} - 6 L^r_{8} \right) \end{align} \begin{align} d^K_{L_i \times L_j} =& 56 \left(4 m_{K}^4 + 4 m_{K}^2 m_{\pi}^2 + m_{\pi}^4 \right) (L_{4}^r)^2 + 16 \left(10 m_{K}^4 + 7 m_{K}^2 m_{\pi}^2 + 4 m_{\pi}^4\right) L_4^r L_5^r \nonumber \\ & -64 \left(4 m_{K}^4 + 4 m_{K}^2 m_{\pi}^2 + m_{\pi}^4\right) L_4^r L_6^r - 64 \left(2 m_{K}^4 + m_{\pi}^4\right) L_4^r L_8^r \nonumber \\ & + 8 \left(3 m_{K}^4 + 4 m_{K}^2 m_{\pi}^2\right) (L_{5}^r)^2 -64 \left(2 m_{K}^4 + m_{K}^2 m_{\pi}^2\right) L_5^r L_6^r -64 m_{K}^4 L_5^r L_8^r \label{dBarLiLj} \end{align} \begin{align} d^K_{log \times L_i} =& \left\{ 48 m_{\pi}^4 L^r_{1} + 12 m_{\pi}^4 L^r_{2} + 15 m_{\pi}^4 L^r_{3} - \left(38 m_{K}^2 m_{\pi}^2+47 m_{\pi}^4\right) L^r_{4} - \left(19 m_{K}^2 m_{\pi}^2+6 m_{\pi}^4 \right) L^r_{5} \right\} l^r_{\pi} \nonumber \\ & + \left\{ 72 m_{K}^4 L^r_{1} + 36 m_{K}^4 L^r_{2} + 30 m_{K}^4 L^r_{3} - 2 m_{K}^2 \left(30 m_{K}^2+7 m_{\pi}^2\right) L^r_{4} - 2 m_{K}^2 \left(7 m_{K}^2+6 m_{\pi}^2\right) L^r_{5} \right\} l^r_{K} \nonumber \\ & + \bigg\{ \frac{1}{9} \left(4 m_K^2-m_{\pi}^2\right)^2 \left( 16 L^r_{1} + 4 L^r_{2} +7 L^r_{3} \right) - \frac{1}{3} \left(4 m_K^2-m_{\pi}^2\right) \left(22 m_K^2 - m_{\pi}^2\right) L^r_{4} \nonumber \\ & \quad - \frac{1}{3} \left(4 m_K^4 + 37 m_K^2 m_{\pi}^2 - 14 m_{\pi}^4\right) L^r_{5} - 16 \left(m_K^2-m_{\pi}^2\right)^2 \left( 2 L^r_{7} + L^r_{8} \right) \bigg\} l^r_{\eta} \end{align} \begin{align} \left(16\pi^2\right) d^K_{log} =& \left(\frac{3}{8} m_{K}^2 m_{\pi}^2 + \frac{53}{32} m_{\pi}^4 \right) l^r_{\pi} + \left( \frac{19}{9} m_{K}^4 - \frac{65}{72} m_{K}^2 m_{\pi}^2 + \frac{3}{32} m_{\pi}^4 \right) l^r_{\eta} \nonumber \\ & + \left(\frac{245}{48} m_{K}^4 + \frac{173}{72} m_{K}^2 m_{\pi}^2 \right) l^r_{K} \label{dBarlog} \end{align} \begin{align}
d^K_{log \times log} =& \left(\frac{5}{16} \frac{m_{\pi}^6}{m_{K}^2} +\frac{2}{3} m_{K}^2 m_{\pi}^2 - \frac{5}{48} m_{\pi}^4 \right) (l^r_{\pi})^2 - \left(\frac{5}{8} \frac{m_{\pi}^6}{m_{K}^2} - \frac{25}{6} m_{K}^2 m_{\pi}^2 - \frac{47}{24} m_{\pi}^4 \right) l^r_{\eta} l^r_{\pi} \nonumber \\ & + \left(\frac{31}{9} m_{K}^4 + \frac{5}{16} \frac{m_{\pi}^6}{m_{K}^2} - \frac{11}{18} m_{K}^2 m_{\pi}^2 - \frac{21}{16} m_{\pi}^4 \right) (l^r_{\eta})^2 + \left(\frac{155}{72} m_{K}^4 + \frac{11}{36} m_{K}^2 m_{\pi}^2 \right) (l^r_{K})^2 \nonumber \\ & - \left(\frac{91}{18} m_{K}^4 + \frac{53}{72} m_{K}^2 m_{\pi}^2 - \frac{3}{8} m_{\pi}^4 \right) l^r_{\eta} l^r_{K} + \left(\frac{51}{8} m_{K}^2 m_{\pi}^2 - \frac{3}{8} m_{\pi}^4 \right) l^r_{K} l^r_{\pi} \label{dBarloglog} \end{align} The linear and bilinear chiral log terms given in Eq.(\ref{dBarlog}) and Eq.(\ref{dBarloglog}) are a combination of the terms coming directly from the $\mathcal{O}(p^6)$ kaon decay constant expression, the chiral logs arising from the sunset integrals, and contributions stemming from the $\mathcal{O}(p^4)$ term due to application of the GMO relation. $d^K_{L_i}$ and $d^K_{log \times L_i}$ are similarly made up of terms taken directly from the $\mathcal{O}(p^6)$ expression, and contributions arising from the $\mathcal{O}(p^4)$ term due to application of the GMO relation. As in the case of the kaon mass, we break up the sunset contribution as follows, in which the first line contains contributions from the single mass sunsets, as well as the free terms (i.e. those not containing a chiral logarithm or a low energy constant) of the expression for the $\mathcal{O}(p^6)$ contribution to the kaon decay constant: \begin{align} d^K_{sunset} = \frac{1}{\left( 16 \pi ^2\right)^2} & \bigg\{ \left(\frac{17671}{2304}+\frac{1195 \pi ^2}{2592}\right) m_{K}^4 + \left(\frac{49}{48}+\frac{\pi ^2}{32}\right) \frac{m_{\pi}^6}{m_{K}^2}-\left(\frac{1625}{144}+\frac{689 \pi ^2}{1296}\right) m_{K}^2 m_{\pi}^2 \nonumber \\ & + \left(\frac{2153}{576}+\frac{151 \pi ^2}{432}\right) m_{\pi}^4 \bigg\} + d^K_{K \pi \pi} + d^K_{K \eta \eta} + d^K_{K \pi \eta} \end{align} where \begin{align} d^K_{K \pi \pi} &= -\left(\frac{27 m_{\pi}^4}{64 m_{K}^2}+\frac{m_{K}^2}{64}+\frac{9 m_{\pi}^2}{16}\right) \overline{H}^{\chi}_{K \pi \pi} + \left(\frac{m_{K}^4}{16}+\frac{m_{K}^2 m_{\pi}^2}{8}+\frac{9 m_{\pi}^4}{16}\right) \overline{H}^{\chi}_{2K \pi \pi} \label{dBarkpp} \end{align} \begin{align} d^K_{K \eta \eta} &= - \left(\frac{15 m_{\pi}^4}{64 m_{K}^2}+\frac{1189 m_{K}^2}{576}-\frac{65 m_{\pi}^2}{48}\right) \overline{H}^{\chi}_{K \eta \eta} + \left(\frac{143 m_{K}^4}{48}-\frac{139 m_{K}^2 m_{\pi}^2}{72}+\frac{5 m_{\pi}^4}{16}\right) \overline{H}^{\chi}_{2K \eta \eta} \end{align} \begin{align} d^K_{K \pi \eta} &= \left( - \frac{7}{32} \frac{m_{\pi}^4}{m_{K}^2} + \frac{5}{96} m_{K}^2 + \frac{7}{6} m_{\pi}^2 \right) \overline{H}^{\chi}_{K \pi \eta} + \left( \frac{3}{8} \frac{m_{\pi}^6}{m_{K}^2} + \frac{1}{4} m_{K}^2 m_{\pi}^2 - \frac{15}{8} m_{\pi}^4 \right) \overline{H}^{\chi}_{K 2\pi \eta} \nonumber \\ & - \left( \frac{11}{18} m_{K}^4 - \frac{1}{12} \frac{m_{\pi}^6}{m_{K}^2} + \frac{41}{72} m_{K}^2 m_{\pi}^2 + \frac{11}{72}m_{\pi}^4 \right) \overline{H}^{\chi}_{K \pi 2\eta} - \left( \frac{1}{2} m_{K}^4 \right) \overline{H}^{\chi}_{2K \pi \eta} \end{align} The terms $d^{K}_{K \pi \pi}$, $d^{K}_{K \eta \eta}$ and $d^{K}_{K \pi \eta}$ are the result of applying Tarasov's relations to the sunset integrals appearing in Eq.(A.17) of \cite{Amoros:1999dp} and
rewriting them in terms of the sunset diagram master integrals given in Appendix~\ref{Sec:SunsetResults}. \subsection{The eta mass \label{Sec:EtaMass}} The eta mass is given in \cite{Amoros:1999dp} as: \begin{align} m^2_{\eta} = m^{2}_{\eta 0} + \left( m^{2}_{\eta} \right)^{(4)} + \left( m^{2}_{\eta} \right)^{(6)}_{CT} + \left( m^{2}_{\eta} \right)^{(6)}_{loop} + \mathcal{O}(p^8) \end{align} where \begin{align} m^{2}_{\eta 0} = \frac{2}{3} B_0 \left(2 m_s + \hat{m} \right) \end{align} and \begin{align} F_{\pi}^2 \left( m_{\eta}^{2} \right)^{(4)} =& \frac{8}{9}(3 L^r_{4}-L^r_{5}-6 L^r_{6}+48 L^r_{7}+18 L^r_{8}) m_{\pi}^4 -\frac{16}{9} (3 L^r_{4}-4 L^r_{5}-6 L^r_{6}+48 L^r_{7}+24 L^r_{8}) m_K^2 m_{\pi}^2 \nonumber \\ & -\frac{64}{9} (3 L^r_{4}+2 L^r_{5}-6 L^r_{6}-6 L^r_{7}-6 L^r_{8}) m_K^4 + \left(\frac{8}{3} l^r_{K} - \frac{64}{27} l^r_{\eta} \right) m_K^4 - \left(\frac{7}{27} l^r_{\eta} + l^r_{\pi}\right) m_{\pi}^4 \nonumber \\ & + \frac{44}{27} l^r_{\eta} m_K^2 m_{\pi}^2 \label{EqMeP4} \end{align} The $\mathcal{O}(p^6)$ counter-term contribution is given by: \begin{align} F_{\pi}^4 \left( m_{\eta}^{2} \right)^{(6)}_{CT} =& -\frac{256}{27} m_{K}^6 ( 8 C_{12}^r + 12 C_{13}^r + 6 C_{14}^r + 6 C_{15}^r + 9 C_{16}^r + 6 C_{17}^r + 6 C_{18}^r - 27 C_{19}^r - 27 C_{20}^r \nonumber \\ & - 27 C_{21}^r - 18 C_{31}^r - 18 C_{32}^r - 18 C_{33}^r ) + \frac{16}{27} m_{\pi}^6 ( 2 C_{12}^r - 6 C_{13}^r + 9 C_{14}^r - 3 C_{15}^r + 27 C_{16}^r \nonumber \\ & + 9 C_{17}^r + 24 C_{18}^r - 27 C_{19}^r + 27 C_{20}^r - 27 C_{21}^r - 18 C_{31}^r + 54 C_{32}^r) -\frac{32}{9} m_{K}^2 m_{\pi}^4 ( 4 C_{12}^r \nonumber \\ & - 6 C_{13}^r + 10 C_{14}^r - 3 C_{15}^r + 24 C_{16}^r +10 C_{17}^r + 24 C_{18}^r - 54 C_{19}^r - 18 C_{20}^r -36 C_{31}^r + 6 C_{32}^r \nonumber \\ & - 48 C_{33}^r)+\frac{64}{9} m_{K}^4 m_{\pi}^2 (8 C_{12}^r + 10 C_{14}^r + 15 C_{16}^r + 10 C_{17}^r + 18 C_{18}^r - 54 C_{19}^r - 27 C_{20}^r \nonumber \\ & + 27 C_{21}^r - 36 C_{31}^r - 12 C_{32}^r - 48 C_{33}^r ) \end{align} and the model independent $\mathcal{O}(p^6)$ contribution can be subdivided as: \begin{align} F_\pi^4 \left( m^{2}_{\eta} \right)^{(6)}_{loop} = c_{sunset}^{\eta} + c_{log \times log}^{\eta} + c_{log}^{\eta} + c_{log \times L_i}^{\eta} + c_{L_i}^{\eta} + c_{L_i \times L_j}^{\eta} \end{align} where $c_{log}^{\eta}$ represents the terms containing the chiral logarithms: \begin{align} (16 \pi^2) c_{log}^{\eta} =& \left(\frac{41}{324} l^r_{\eta} + \frac{961}{108} l^r_K - 3 l^r_{\pi} \right) m_K^4 m_{\pi}^2 + \left(\frac{371}{486} l^r_{\eta} - 3 l^r_K +\frac{61}{27} l^r_{\pi} \right) m_K^2 m_{\pi}^4 -\left(\frac{1093}{729} l^r_{\eta} + \frac{577}{27} l^r_K \right) m_K^6 \nonumber \\ & - \left(\frac{2045}{11664} l^r_{\eta} + \frac{931}{432} l^r_{\pi} \right) m_{\pi}^6 \end{align} The $c_{log \times log}^{\eta}$ term refers to the collection of bilinear chiral log terms: \begin{align} c_{log \times log}^{\eta} &= \left(-\frac{2713}{108} (l^r_{\eta})^2 +\frac{473}{54} l^r_{\eta} l^r_{K} + \frac{256}{27} l^r_{\eta} l^r_{\pi} - \frac{133}{18} (l^r_{K})^2 - \frac{55}{6} l^r_{K} l^r_{\pi} - \frac{3}{4}(l^r_{\pi})^2 \right) m_K^4 m_{\pi}^2 \nonumber \\ & + \left(\frac{1367}{162} (l^r_{\eta})^2 - \frac{31}{27} l^r_{\eta} l^r_{K} - \frac{172}{27} l^r_{\eta} l^r_{\pi} + \frac{10}{3} (l^r_{K})^2 - 3 l^r_{K} l^r_{\pi} + \frac{5}{2} (l^r_{\pi})^2 \right) m_K^2 m_{\pi}^4 \nonumber \\ & + \left(\frac{6185}{243}(l^r_{\eta})^2 - \frac{118}{9} l^r_{\eta} l^r_{K} + \frac{103}{9} (l^r_{K})^2 \right) m_K^6 + \left(-\frac{911}{972} (l^r_{\eta})^2 +
\frac{7}{9} l^r_{\eta} l^r_{\pi} + \frac{65}{12} (l^r_{\pi})^2 \right) m_{\pi}^6 \end{align} and $c_{L_i}^{\eta}$ are those terms proportional to the low energy constants $L_i$: \begin{align} \left(16 \pi ^2\right) c_{L_i}^{\eta} &= \frac{1}{27} \left(256 L^r_{1}+544 L^r_{2}+152 L^r_{3}+\frac{256}{3} L^r_{5} - 1024 L^r_{7}-512 L^r_{8}\right) m_K^6 \nonumber \\ & + \frac{1}{9} \left(-64 L^r_{1}-88 L^r_{2}-34 L^r_{3}-\frac{208}{3}L^r_{5} + 832 L^r_{7}+416 L^r_{8}\right) m_K^4 m_{\pi}^2 \nonumber \\ & +\frac{1}{9} \left(16 L^r_{1}+88 L^r_{2}+32 L^r_{3}+\frac{160}{3}L^r_{5} - 640 L^r_{7}-320 L^r_{8}\right) m_K^2 m_{\pi}^4 \nonumber \\ & + \frac{1}{27} \left(-4 L^r_{1}-58 L^r_{2}-20 L^r_{3}-\frac{112}{3} L^r_{5} + 448 L^r_{7}+224 L^r_{8}\right) m_{\pi}^6 \label{cLi} \end{align} while bilinears in the LECs are given by $c_{L_i \times L_j}^{\eta}$: \begin{align} c_{L_i \times L_j}^{\eta} =& -\frac{128}{9} \bigg(36 (L^r_{4})^2+15 L^r_{4} L^r_{5}-144 L^r_{4} L^r_{6}+144 L^r_{4} L^r_{7}+42 L^r_{4} L^r_{8}+12 (L^r_{5})^2-30 L^r_{5} L^r_{6} \nonumber \\ & \qquad -48 L^r_{5} L^r_{7} -32 L^r_{5} L^r_{8}+144 (L^r_{6})^2-288 L^r_{6} L^r_{7}-84 L^r_{6} L^r_{8}-96 L^r_{7} L^r_{8}-48 (L^r_{8})^2 \bigg) m_K^4 m_{\pi}^2 \nonumber \\ & -\frac{128}{9} \bigg( 3 L^r_{4} L^r_{5}+6 L^r_{4} L^r_{8}-10 (L^r_{5})^2-6 L^r_{5} L^r_{6}+144 L^r_{5} L^r_{7}+76 L^r_{5} L^r_{8}-12 L^r_{6} L^r_{8} \nonumber \\ & \qquad -96 L^r_{7} L^r_{8}-48 (L^r_{8})^2 \bigg) m_K^2 m_{\pi}^4 \nonumber \\ & -\frac{1024}{27} \bigg( 6 L^r_{4}+L^r_{5}-12 L^r_{6}-6 L^r_{8} \bigg) \bigg( 3 L^r_{4}+2 L^r_{5}-6 L^r_{6}-6 L^r_{7}-6 L^r_{8} \bigg) m_K^6 \nonumber \\ & +\frac{128}{27} \bigg( 3 L^r_{4}+5 L^r_{5}-6 L^r_{6}-6 L^r_{8} \bigg) \bigg( 3 L^r_{4}-L^r_{5}-6 L^r_{6}+48 L^r_{7}+18 L^r_{8} \bigg) m_{\pi}^6 \end{align} $c_{log \times L_i}^{\eta}$ are those terms that contain a product of the low energy constants and a chiral log: \begin{align} c_{log \times L_i}^{\eta} &= \bigg\{ \frac{32}{27}(72 L^r_{1}+72 L^r_{2}+36 L^r_{3}-54 L^r_{4}-113 L^r_{5}+156 L^r_{6}+684 L^r_{7}+422 L^r_{8}) l^r_{\eta} \nonumber \\ & \quad +\frac{8}{3} (16 L^r_{1}+4 L^r_{2}+7 L^r_{3}-12 L^r_{4}-4 L^r_{5}+8 L^r_{6}+96 L^r_{7}+56 L^r_{8}) l^r_K \nonumber \\ & \quad +\frac{256}{9} (3 L^r_{4}+2 L^r_{5}-6 L^r_{6} -6 L^r_{7} - 6 L^r_{8}) l^r_{\pi} \bigg\} m_K^4 m_{\pi}^2 \nonumber \\ & + \bigg\{ -\frac{16}{27} (36 L^r_{1}+36 L^r_{2}+18 L^r_{3}-27 L^r_{4}-104 L^r_{5}+78 L^r_{6}+720 L^r_{7}+404 L^r_{8}) l^r_{\eta} \nonumber \\ & \quad -\frac{16}{9} (72 L^r_{1}+18 L^r_{2}+18 L^r_{3}-87 L^r_{4}+8 L^r_{5}+102 L^r_{6}-312 L^r_{7}-120 L^r_{8}) l^r_{\pi} \nonumber \\ & \quad -\frac{16}{9} (3 L^r_{4}-L^r_{5}-6 L^r_{6}+48 L^r_{7}+18 L^r_{8}) l^r_K \bigg\} m_K^2 m_{\pi}^4 \nonumber \\ & + \bigg\{ -\frac{512}{27} (6 L^r_{1}+6 L^r_{2}+3 L^r_{3}-6 L^r_{4}-6 L^r_{5}+16 L^r_{6}+24 L^r_{7}+20 L^r_{8}) l^r_{\eta} \nonumber \\ & \quad -\frac{32}{9} (48 L^r_{1}+12 L^r_{2}+21 L^r_{3}-60 L^r_{4}-22 L^r_{5}+72 L^r_{6}+48 L^r_{7}+60 L^r_{8}) l^r_K \bigg\} m_K^6 \nonumber \\ & + \bigg\{ \frac{8}{27} (6 L^r_{1}+6 L^r_{2}+3 L^r_{3}-6 L^r_{4}-32 L^r_{5}+16 L^r_{6}+240 L^r_{7}+130 L^r_{8}) l^r_{\eta} \nonumber \\ & \quad + 8 (4 L^r_{1}+L^r_{2}+L^r_{3}-6 L^r_{4}+8 L^r_{6}-48 L^r_{7}-18 L^r_{8}) l^r_{\pi} \bigg\} m_{\pi}^6 \end{align} The contributions from the sunset integrals $c_{sunset}^{\eta}$ can in turn be expressed as: \begin{align} c^{\eta}_{sunset} = \frac{1}{\left(16 \pi ^2\right)^2} &\Bigg\{ \left(\frac{8783}{1944}-\frac{115 \pi ^2}{162}\right) m_K^6+\left(\frac{629 \pi ^2}{1296}-\frac{3515}{864}\right) m_K^4
m_{\pi}^2-\left(\frac{1259}{2592}+\frac{77 \pi ^2}{216}\right) m_K^2 m_{\pi}^4 \nonumber \\ & \quad -\left(\frac{20183}{31104}+\frac{7 \pi ^2}{432}\right) m_{\pi}^6 \Bigg\} + c_{\eta \pi \pi}^{\eta} + c_{\eta K K}^{\eta} + c_{\pi K K}^{\eta} \end{align} where the contributions in the braces come from a combination of the single mass scale sunsets and the free terms (i.e. those not involving a chiral log, a low energy constant, or arising from a sunset diagram) of $\mathcal{O}(p^6)$, and where: \begin{align} c_{\eta \pi \pi}^{\eta} &= \frac{1}{6} m_{\pi}^4 \overline{H}^\chi_{\eta \pi \pi} \end{align} \begin{align} c_{\eta K K}^{\eta} = \left(\frac{53}{36} m_K^2 m_{\pi}^2 -\frac{1}{24} m_K^4 - \frac{5}{24} m_{\pi}^4 \right) \overline{H}^\chi_{\eta K K} + \left(\frac{146}{27} m_K^6 - \frac{425}{54} m_K^4 m_{\pi}^2 + \frac{74}{27} m_K^2 m_{\pi}^4 - \frac{5}{18} m_{\pi}^6 \right) \overline{H}^\chi_{2\eta K K} \end{align} \begin{align} c_{\pi K K}^{\eta} =& \left(\frac{9}{8} m_{K}^4 - \frac{13}{12} m_{K}^2 m_{\pi}^2 + \frac{23}{24} m_{\pi}^4 \right) \overline{H}^\chi_{\pi K K} + \left(m_{K}^6 - \frac{1}{3} m_{K}^4 m_{\pi}^2 - \frac{2}{3} m_{K}^2 m_{\pi}^4 \right) \overline{H}^\chi_{\pi 2K K} \nonumber \\ & + \left(-\frac{3}{2} m_{K}^4 m_{\pi}^2 + \frac{7}{3} m_{K}^2 m_{\pi}^4 - \frac{5}{6} m_{\pi}^6 \right) \overline{H}^\chi_{2\pi K K} \end{align} \subsection{The eta decay constant \label{Sec:EtaDecay}} The eta decay constant is given in \cite{Amoros:1999dp} as: \begin{align} \frac{F_{\eta}}{F_0} = 1 + F_{\eta}^{(4)} + \left( F_{\eta} \right)^{(6)}_{CT} + \left( F_{\eta} \right)^{(6)}_{loop} + \mathcal{O}(p^8) \end{align} where the $\mathcal{O}(p^4)$ term is: \begin{align} F_{\pi}^2 F_{\eta}^{(4)} =& 8\left(L^r_{4}+\frac{2}{3} L^r_{5}\right) m_K^2 + 4 \left(L^r_{4}-\frac{1}{3}L^r_{5}\right) m_{\pi}^2 - 3 l^r_K m_K^2 \label{EqFeP4} \end{align} and the $\mathcal{O}(p^6)$ counter-term contribution is given by: \begin{align} F_{\pi}^4 \left( F_{\eta} \right)^{(6)}_{CT} =& \left(\frac{64}{3} C^r_{14}+\frac{64}{3} C^r_{15}+32 C^r_{16}+\frac{64}{3} C^r_{17}+\frac{64}{3} C^r_{18}\right) m_K^4 \nonumber \\ & + \left(-\frac{64}{3} C^r_{14}+\frac{16}{3} C^r_{15}-32 C^r_{16}-\frac{64}{3} C^r_{17}-\frac{128}{3} C^r_{18}\right) m_K^2 m_{\pi}^2 \nonumber \\ & + \left(8 C^r_{14}-\frac{8}{3} C^r_{15}+24 C^r_{16}+8 C^r_{17}+\frac{64}{3} C^r_{18}\right) m_{\pi}^4 \end{align} and the model independent $\mathcal{O}(p^6)$ contribution can be subdivided as: \begin{align} F_{\pi}^4 \left( F_{\eta} \right)^{(6)}_{loop} = d_{sunset}^{\eta} + d_{log \times log}^{\eta} + d_{log}^{\eta} + d_{log \times L_i}^{\eta} + d_{L_i}^{\eta} + d_{L_i \times L_j}^{\eta} \end{align} where $d_{log}^{\eta}$ represents the terms containing the chiral logarithms: \begin{align} (16 \pi^2) d_{log}^{\eta} =& \left( \frac{3}{8} l^r_{\pi} + \frac{3}{2} l^r_K - \frac{4363}{1944} l^r_{\eta} \right) m_K^2 m_{\pi}^2 + \left(\frac{16631}{1944} l^r_{\eta} + \frac{17}{24} l^r_K \right) m_K^4 + \left(\frac{3713}{7776} l^r_{\eta} + \frac{47}{32} l^r_{\pi} \right) m_{\pi}^4 \end{align} The $d_{log \times log}^{\eta}$ term refers to the collection of bilinear chiral log terms: \begin{align} (4 m_K^2-m_{\pi}^2) d_{log \times log}^{\eta} =& -\frac{1}{4} \left(\frac{23}{6} (l^r_{\eta})^2 - \frac{167}{3} l^r_{\eta} l^r_K + \frac{43}{3} (l^r_K)^2 - 93 l^r_K l^r_{\pi} - \frac{99}{2} (l^r_{\pi})^2 \right) m_K^4 m_{\pi}^2 \nonumber \\ & + \frac{1}{3} \left(\frac{71}{2} (l^r_{\eta})^2 - 119 l^r_{\eta} l^r_K + \frac{191}{2}
(l^r_K)^2 \right) m_K^6 + \frac{1}{8} \bigg( (l^r_{\eta})^2+9 (l^r_{\pi})^2 \bigg) m_{\pi}^6 \nonumber \\ & - \left( (l^r_{\eta})^2 + l^r_{\eta} l^r_K + (l^r_K)^2+6 l^r_K l^r_{\pi}+\frac{15}{2} (l^r_{\pi})^2 \right) m_K^2 m_{\pi}^4 \end{align} and $d_{L_i}^{\eta}$ are those terms proportional to the low energy constants $L_i$: \begin{align} 9 \left(16 \pi ^2\right) d_{L_i}^{\eta} &= 8(2 L^r_{1}+2 L^r_{2}+L^r_{3}) m_K^2 m_{\pi}^2 - (2 L^r_{1}+29 L^r_{2}+10 L^r_{3}) m_{\pi}^4 - (32 L^r_{1}+68 L^r_{2}+19 L^r_{3}) m_K^4 \label{dLi} \end{align} while bilinears in the LECs are given by $d_{L_i \times L_j}^{\eta}$: \begin{align} d_{L_i \times L_j}^{\eta} =& \left(224 (L^r_{4})^2+192 L^r_{4} L^r_{5}-256 L^r_{4} L^r_{6}-128 L^r_{4} L^r_{8}+\frac{256}{9} (L^r_{5})^2-\frac{512}{3} L^r_{5} L^r_{6}-\frac{256}{3} L^r_{5} L^r_{8}\right) m_K^4 \nonumber \\ & + \left(56 (L^r_{4})^2 + 48 L^r_{4} L^r_{5}-64 L^r_{4} L^r_{6}-64 L^r_{4} L^r_{8} - \frac{200}{9} (L^r_{5})^2 + \frac{64}{3} L^r_{5} L^r_{6} + \frac{64}{3} L^r_{5} L^r_{8} \right) m_{\pi}^4 \nonumber \\ & + \left(224 (L^r_{4})^2+96 L^r_{4} L^r_{5}-256 L^r_{4} L^r_{6}+\frac{448}{9} (L^r_{5})^2 - \frac{128}{3} L^r_{5} L^r_{6} \right) m_K^2 m_{\pi}^2 \end{align} $d_{log \times L_i}^{\eta}$ are those terms that contain a product of the low energy constants and a chiral log: \begin{align} d_{log \times L_i}^{\eta} &= \left\{ \frac{4}{3} \left(2 L^r_{1}+2 L^r_{2}+L^r_{3}-L^r_{4}-\frac{2}{3} L^r_{5} \right) l^r_{\eta} + 4 \left(12 L^r_{1}+3 L^r_{2}+3 L^r_{3}-11 L^r_{4}+\frac{2}{3} L^r_{5} \right) l^r_{\pi} \right\} m_{\pi}^4 \nonumber \\ & - \left\{ \frac{32}{3} \left(2 L^r_{1}+2 L^r_{2}+L^r_{3}-L^r_{4}-\frac{2}{3} L^r_{5}\right) l^r_{\eta} + 4 \left(5 L^r_{4}+\frac{13}{3} L^r_{5}\right) l^r_K + \frac{32}{3} (3 L^r_{4}+2 L^r_{5}) l^r_{\pi} \right\} m_K^2 m_{\pi}^2 \nonumber \\ & + \left\{ \frac{64}{3} \left(2 L^r_{1}+2 L^r_{2}+L^r_{3}-L^r_{4}-\frac{2}{3} L^r_{5} \right) l^r_{\eta} + 4 (16 L^r_{1}+4 L^r_{2}+7 L^r_{3}-18 L^r_{4}-4 L^r_{5}) l^r_K \right\} m_K^4 \end{align} The contributions from the sunset integrals $d_{sunset}^{\eta}$ can in turn be expressed as: \begin{align} \left(4 m_K^2-m_{\pi}^2\right) d^{\eta}_{sunset} &= \frac{1}{\left(16 \pi ^2\right)^2} \bigg\{ \left(\frac{65765}{3888}+\frac{59 \pi ^2}{36}\right) m_K^6-\left(\frac{13465}{1728}+\frac{47 \pi ^2}{96}\right) m_K^4 m_{\pi}^2 \nonumber \\ & -\left(\frac{3377}{5184}-\frac{3 \pi ^2}{8}\right) m_K^2 m_{\pi}^4+\left(\frac{46099}{62208}-\frac{\pi ^2}{96}\right) m_{\pi}^6 \bigg\} + d_{\eta \pi \pi}^{\eta} + d_{\eta K K}^{\eta} + d_{\pi K K}^{\eta} \end{align} where the contributions in the braces come from a combination of the single mass scale sunsets and the free terms (i.e.
those not involving a chiral log, a low energy constant, or arising from a sunset diagram) of $\mathcal{O}(p^6)$, and where: \begin{align} d_{\eta \pi \pi}^{\eta} &= \left(\frac{1}{3}m_K^2 m_{\pi}^4 - \frac{1}{12} m_{\pi}^6 \right) \overline{H}^\chi_{2\eta \pi \pi} - \frac{1}{4} m_{\pi}^4 \overline{H}^\chi_{\eta \pi \pi} \end{align} \begin{align} d_{\eta K K}^{\eta} = \left(-\frac{479}{48} m_K^4 - \frac{17}{12} m_K^2 m_{\pi}^2 + \frac{1}{16} m_{\pi}^4 \right) \overline{H}^\chi_{\eta K K} + \left(\frac{173}{9} m_K^6 - \frac{23}{12} m_K^4 m_{\pi}^2 - \frac{19}{18} m_K^2 m_{\pi}^4 + \frac{1}{12}m_{\pi}^6 \right) \overline{H}^\chi_{2\eta K K} \end{align} \begin{align} d_{\pi K K}^{\eta} &= \left(\frac{87}{16} m_K^4 + \frac{1}{4} m_K^2 m_{\pi}^2 + \frac{5}{16} m_{\pi}^4 \right) \overline{H}^\chi_{\pi K K} + \left(\frac{3}{4} m_K^4 m_{\pi}^2 - 4 m_K^2 m_{\pi}^4 + \frac{1}{4} m_{\pi}^6 \right) \overline{H}^\chi_{2\pi K K} \nonumber \\ & + \left(-\frac{33}{2} m_K^6 + \frac{5}{2} m_K^4 m_{\pi}^2 - m_K^2 m_{\pi}^4\right) \overline{H}^\chi_{\pi 2K K} \end{align} \section{Approximate Results for the Three Mass Sunsets \label{Sec:ApproxSunsets}} We now present truncated results which numerically agree to within 1\% of the full results of the master integrals given in Appendix \ref{Sec:SunsetResults} for much of the range of masses we are interested in. These partial sums have been obtained with the help of an ancillary \texttt{Mathematica} file, called \texttt{truncation.nb}, provided with this paper. In this file, one can choose as inputs the numerical values of the meson masses and the maximum acceptable truncation error. The file then outputs the corresponding partial sums for each of the master integrals. The truncation procedure that we use does not follow from a rigorous asymptotic analysis. Our aim here is rather to give simplified formulas that may be used in numerical simulations to save CPU time, mainly for interested lattice practitioners. To get the simplified expressions, we use a simple criterion: for a given set of numerical values of the pseudo-scalar masses, in each of the different contributions of Eqs.(\ref{Eq:Hkpe})-(\ref{Eq:Hp2kk}) we keep the terms that are larger than $10^{-p}$ in magnitude, with $p\geq1$ incremented until we achieve the precision goal, defined by the numerical difference between the corresponding partial sum and the sum of the first several hundred terms\footnote{1000 terms for the single series and 10000 terms for the double series.}, the latter being taken as the `exact' value (the very small uncertainties on the pseudo-scalar masses are neglected in this procedure). This way of obtaining truncations of course implies that for sufficiently different sets of pseudo-scalar masses one obtains non-identical simplified expressions for the master integrals. This, however, does not detract from their numerical utility. The truncated results presented below have been tested for all the sets of meson masses presented in the lattice study of \cite{Durr:2016ulb}, and for the majority of these mass values, the truncated expressions give results that are accurate to within $1\%$ of the exact value. The numerical implications and accuracy of these approximate results are studied in more detail in Section~\ref{Sec:NumAnalysis}.
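To make the criterion concrete, the following is a minimal sketch of the truncation logic just described, written in Python rather than in the ancillary \texttt{Mathematica} notebook; the function and variable names are illustrative only, and the geometric series at the end merely stands in for a sunset expansion.
\begin{verbatim}
# Sketch of the truncation criterion: keep only terms of the series
# larger in magnitude than 10^(-p), incrementing p until the partial
# sum reproduces the `exact' value (the sum of the first n_exact
# terms) to within the requested relative precision.
def truncate(terms, precision_goal=0.01, n_exact=1000):
    exact = sum(terms[:n_exact])
    p = 1
    while True:
        partial = sum(t for t in terms[:n_exact] if abs(t) > 10.0**(-p))
        if abs(partial - exact) <= precision_goal * abs(exact):
            return partial, p
        p += 1

# Toy example: a geometric series standing in for a sunset expansion.
terms = [0.5**n for n in range(1000)]
print(truncate(terms))   # -> (1.984375, 2)
\end{verbatim}
In \texttt{truncation.nb}, the same criterion is applied to each contribution of Eqs.(\ref{Eq:Hkpe})-(\ref{Eq:Hp2kk}) for the given set of meson masses.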
\subsection{Truncated kaon sunsets} \begin{align} & \overline{H}^{\chi}_{K \pi \eta} \approx \frac{m_{K}^2}{512\pi ^4} \Bigg\{ \frac{5 \pi^2}{6} -\frac{1}{4} - \frac{7}{4}\left(\frac{m_{\eta}^4}{m_{K}^4}+\frac{m_{\pi}^4}{m_{K}^4}\right) + \left(1-\frac{\pi^2}{2}\right)\left(\frac{m_{\eta}^2}{m_{K}^2}+\frac{m_{\pi}^2}{m_{K}^2}\right) + \frac{1}{2} \frac{m_{\eta}^4}{m_{K}^4} \log\left[\frac{m_{\eta}^2}{m_{K}^2}\right] \nonumber \\ & \quad +\frac{m_{\pi}^2 m_{\eta}^2}{m_{K}^4} \left(7+\frac{2 \pi^2}{3}-2 \log\left[\frac{m_{\eta}^2}{m_{K}^2}\right]-2 \log\left[\frac{m_{\pi}^2}{m_{K}^2}\right]+\log\left[\frac{m_{\eta}^2}{m_{K}^2}\right] \log\left[\frac{m_{\pi}^2}{m_{K}^2}\right]\right) + \frac{1}{2} \frac{m_{\pi}^4}{m_{K}^4} \log\left[\frac{m_{\pi}^2}{m_{K}^2}\right] \nonumber \\ & \quad -\frac{m_{\pi}^2}{m_{K}^2} \log\left[\frac{m_{\pi}^2}{m_{K}^2}\right]^2-\frac{m_{\eta}^2}{m_{K}^2} \log\left[\frac{m_{\eta}^2}{m_{K}^2}\right]^2 + \frac{8 \pi }{3} \frac{m_{\eta}^3}{m_{K}^3} {}_2F_1 \bigg[ \begin{array}{c} -\frac{1}{2},\frac{1}{2} \\ \frac{5}{2} \\ \end{array} \bigg| \frac{m_\eta^2}{4 m_{K}^2} \bigg] +\frac{1}{36}\frac{m_{\eta}^6}{m_{K}^6} {}_3F_2 \bigg[ \begin{array}{c} 1,1,2 \\ \frac{5}{2},4 \\ \end{array} \bigg| \frac{m_\eta^2}{4 m_K^2} \bigg] \nonumber \\ & \quad + \frac{1}{6} \frac{m_{\pi}^2 m_{\eta}^2}{m_K^4} \left( \log \left[\frac{m_{\eta}^2}{4 m_K^2}\right] + \log \left[\frac{m_{\pi}^2}{4 m_K^2}\right] \right) \left( \frac{m_{\eta}^2}{m_K^2} {}_2F_1 \bigg[ \begin{array}{c} 1,1 \\ \frac{5}{2} \\ \end{array} \bigg| \frac{m_\eta^2}{4m_K^2} \bigg] + \frac{m_{\pi}^2}{m_K^2} {}_2F_1 \bigg[ \begin{array}{c} 1,1 \\ \frac{5}{2} \\ \end{array} \bigg| \frac{m_\pi^2}{4m_K^2} \bigg] \right) \nonumber \\ & \quad - \frac{15\pi}{512} \frac{m_{\pi}^4 m_{\eta}^3}{m_{K}^7} \left(\log \left[\frac{m_{\pi}^2}{16 m_{\eta}^2}\right]+\frac{13}{6}\right) - \frac{1}{20} \frac{m_{\pi}^4 m_{\eta}^4}{m_{K}^8} \left(\frac{37}{15}-\log \left[\frac{m_{\eta}^2}{m_{K}^2}\right]-\log \left[\frac{m_{\pi}^2}{m_{K}^2}\right] \right) \nonumber \\ & \quad - \frac{\pi}{4} \frac{m_{\pi}^2 m_{\eta}^3}{m_{K}^5} \left(\log \left[\frac{m_{\pi}^2}{16 m_{\eta}^2}\right] + \frac{11}{3}\right) + \frac{1}{12} \frac{m_{\pi}^2 m_{\eta}^2}{m_{K}^4} \left(\frac{m_{\eta}^2}{m_{K}^2}+\frac{m_{\pi}^2}{m_{K}^2}\right) \left(5-8 \gamma-4\psi \left[\frac{5}{2}\right] \right) \nonumber \\ & \quad + 2 \pi \frac{m_{\pi}^2 m_{\eta}}{m_{K}^3} \left(\log \left[\frac{m_{\pi}^2}{16 m_{\eta}^2}\right]+1\right) + \frac{\pi}{32} \frac{ m_{\pi}^4}{m_{K}^4} \left(8 \frac{m_{K}}{m_{\eta}} + 3 \frac{m_{\eta}}{m_{K}}\right) \left(\frac{1}{2}-\log \left[\frac{m_{\pi}^2}{16 m_{\eta}^2}\right] \right) \Bigg\} \label{HkpeApprox} \end{align} \begin{align} & \overline{H}^{\chi}_{2K\pi\eta} \approx \frac{1}{512\pi^4} \Bigg\{ \frac{5\pi^2}{6} -1 - \frac{m_{\eta}^2}{m_{K}^2} \left( 1+\frac{\pi^2}{3}+ \frac{1}{2} \log^2 \left[\frac{m_{K}^2}{m_{\eta}^2}\right] + \log \left[\frac{m_{K}^2}{m_{\eta}^2}\right] + \text{Li}_2 \left[1-\frac{m_{\pi}^2}{m_{\eta}^2}\right] \right) \nonumber \\ & \quad - \frac{m_{\pi}^2}{m_{K}^2} \left( 1+\frac{\pi^2}{3} - \log \left[\frac{m_{\pi}^2}{m_{K}^2}\right] - \log \left[\frac{m_{K}^2}{m_{\eta}^2}\right]\log \left[\frac{m_{\pi}^2}{m_{K}^2}\right] - \text{Li}_2 \left[1-\frac{m_{\pi}^2}{m_{\eta}^2}\right] \right) + \frac{2\pi}{3} \frac{m_{\eta}^3}{m_K^3} {}_2F_1 \bigg[ \begin{array}{c} \frac{1}{2},\frac{1}{2} \\ \frac{5}{2} \\ \end{array} \bigg| \frac{m_\eta^2}{4m_K^2} \bigg] \nonumber \\ & \quad - \frac{1}{4} \frac{m_{\eta }^4}{m_K^4} {}_3F_2 
\bigg[ \begin{array}{c} 1,1,1 \\ \frac{3}{2},3 \\ \end{array} \bigg| \frac{m_\eta^2}{4m_K^2} \bigg] + \frac{1}{2} \frac{m_{\pi}^2 m_{\eta}^2}{m_{K}^4} \left( 4 - \log\left[ \frac{m_{\pi}^2}{m_{K}^2} \right] + \log\left[ \frac{m_{K}^2}{m_{\eta}^2} \right] \right) + \frac{\pi}{2} \frac{m_{\pi}^2 m_{\eta}}{m_{K}^3} \left( 1 + \log \left[ \frac{m_{\pi}^2}{16 m_{\eta}^2} \right] \right) \nonumber \\ & \quad + \frac{1}{60} \frac{m_{\pi}^2 m_{\eta}^6}{m_{K}^8} \left( \frac{4}{5} - \log\left[ \frac{m_{\pi}^2}{m_{K}^2} \right] + \log\left[ \frac{m_{K}^2}{m_{\eta}^2} \right] \right) + \frac{1}{12} \frac{m_{\pi}^2 m_{\eta}^4}{m_{K}^6} \left( \frac{11}{6} - \log\left[ \frac{m_{\pi}^2}{m_{K}^2} \right] + \log \left[ \frac{m_{K}^2}{m_{\eta}^2} \right] \right) \nonumber \\ & \quad + \frac{\pi}{16} \frac{m_{\pi}^2 m_{\eta}^3}{m_{K}^5} \left( \frac{11}{3} + \log \left[ \frac{m_{\pi}^2}{16 m_{\eta}^2} \right] \right) + \frac{\pi}{16} \frac{m_{\pi}^4}{m_{K}^3 m_{\eta}} \left( \frac{1}{2} - \log \left[ \frac{m_{\pi}^2}{16 m_{\eta}^2} \right] \right) \Bigg\} \label{H2kpeApprox} \end{align} \begin{align} & \overline{H}^{\chi}_{K 2\pi \eta} \approx \frac{1}{512\pi ^4} \Bigg\{ - 1 - \frac{\pi^2}{2} - 2\log \left[ \frac{m_{\pi}^2}{m_{K}^2} \right] -\log^2 \left[\frac{m_{\pi}^2}{m_{K}^2} \right] -\frac{m_{\pi}^2}{m_{K}^2} \left(3-\log \left[ \frac{m_{\pi}^2}{m_{K}^2}\right] \right) \nonumber \\ & \quad + \frac{m_{\eta}^2}{m_{K}^2} \left( 5 + \frac{2 \pi^2}{3} - 2 \log \left[ \frac{m_{\pi}^2}{m_{K}^2} \right] + \log \left[ \frac{m_{K}^2}{m_{\eta}^2}\right] - \log \left[ \frac{m_{K}^2}{m_{\eta}^2}\right] \log \left[ \frac{m_{\pi}^2}{m_{K}^2}\right] \right) + \frac{1}{12} \frac{m_{\pi}^4}{m_{K}^4} {}_3F_2 \bigg[ \begin{array}{c} 1,1,2 \\ \frac{5}{2},3 \\ \end{array} \bigg| \frac{m_\pi^2}{4m_K^2} \bigg] \nonumber \\ & \quad + \frac{1}{6} \frac{m_{\eta}^4}{m_{K}^4} \left( 2\gamma_E + \log \left[ \frac{m_{\eta}^2}{4 m_{K}^2} \right] + \log \left[ \frac{m_{\pi}^2}{4 m_{K}^2} \right] \right) {}_2F_1 \bigg[ \begin{array}{c} 1,1 \\ \frac{5}{2} \\ \end{array} \bigg| \frac{m_\eta^2}{4m_K^2} \bigg] + \frac{1}{3} \frac{m_{\eta}^4}{m_{K}^4} \left(\frac{5}{4} - 2 \gamma -\psi\left[\frac{5}{2}\right]\right) \nonumber \\ & \quad + \frac{1}{3} \frac{m_{\pi}^2 m_{\eta}^2}{m_{K}^4} \left( \log \left[ \frac{m_{\eta}^2}{4 m_{K}^2} \right] + \log \left[ \frac{m_{\pi}^2}{4 m_{K}^2} \right] \right) {}_3F_2 \bigg[ \begin{array}{c} 1,1,3 \\ 2,\frac{5}{2} \\ \end{array} \bigg| \frac{m_\pi^2}{4m_K^2} \bigg] - \frac{2}{3} \frac{m_{\pi}^2 m_{\eta}^2}{m_{K}^4} \left(\frac{7}{6}+\gamma -\log [4] \right) \nonumber \\ & \quad - \frac{15 \pi}{256} \frac{m_{\pi}^2 m_{\eta}^3}{m_{K}^5}\left(\log \left[ \frac{m_{\pi}^2}{16 m_{\eta}^2}\right] + \frac{8}{3}\right) - \frac{\pi}{4} \frac{m_{\eta}^3}{m_{K}^3} \left(\log \left[ \frac{m_{\pi}^2}{16 m_{\eta}^2}\right] + \frac{14}{3}\right) - \frac{3 \pi}{32} \frac{m_{\pi}^4 }{m_{K} m_{\eta}^3} \left(\log \left[ \frac{m_{\pi}^2}{16 m_{\eta}^2} \right] + \frac{5}{3}\right) \nonumber \\ & \quad - \frac{\pi}{2} \frac{m_{\eta}}{m_{K}} \left(\frac{m_{\pi}^2}{m_{\eta}^2} + \frac{3}{8} \frac{m_{\pi}^2}{m_{K}^2}\right) \log \left[\frac{m_{\pi}^2}{16 m_{\eta}^2}\right] - \frac{1}{10} \frac{m_{\pi}^2 m_{\eta}^4}{m_{K}^6} \left( \frac{59}{30}-\log \left[ \frac{m_{\eta}^2}{m_{K}^2} \right] - \log \left[ \frac{m_{\pi}^2}{m_{K}^2} \right] \right) \nonumber \\ & \quad - \frac{\pi}{64} \frac{m_{\eta}^5}{m_{K}^5} \left(\log \left[ \frac{m_{\pi}^2}{16 m_{\eta}^2} \right] + \frac{86}{15}\right) + 2 \pi \frac{m_{\eta}}{m_{K}} 
\left(\log \left[ \frac{m_{\pi}^2}{16 m_{\eta}^2}\right]+2 \right) \Bigg\} \label{Hk2peApprox} \end{align} \begin{align} & \overline{H}^{\chi}_{K \pi 2\eta} \approx \frac{1}{512\pi^4} \Bigg\{ - 1 -\frac{\pi^2}{2} + 2 \log \left[ \frac{m_K^2}{m_{\eta}^2} \right] - \log^2 \left[\frac{m_{K}^2}{m_{\eta}^2}\right] + \pi \left(\frac{m_{\eta}^2}{m_{K}^2}\right)^{1/2} \left(4-\frac{m_{\eta}^2}{m_{K}^2}\right)^{1/2} - \frac{\pi m_{\pi}^2}{m_{\eta} m_{K}} \nonumber \\ & \quad + \frac{m_{\pi}^2}{m_{K}^2} \left( 5 + \frac{2 \pi ^2}{3} + 2 \log \left[\frac{m_{K}^2}{m_{\eta}^2}\right] - \log \left[\frac{m_{\pi}^2}{m_{K}^2}\right] - \log \left[\frac{m_{\pi}^2}{m_{K}^2}\right] \log \left[ \frac{m_{K}^2}{m_{\eta}^2} \right] \right) - \frac{m_{\eta}^2}{m_{K}^2} \left(3 + \log \left[\frac{m_{K}^2}{m_{\eta}^2}\right] \right) \nonumber \\ & \quad + 2 \pi \frac{m_{\eta}}{m_K} {}_2F_1 \bigg[ \begin{array}{c} \frac{1}{2},\frac{1}{2} \\ \frac{3}{2} \\ \end{array} \bigg| \frac{m_\eta^2}{4m_K^2} \bigg] + \frac{m_{\eta}^4}{12 m_{K}^4} {}_3F_2 \bigg[ \begin{array}{c} 1,1,2 \\ \frac{5}{2},3 \\ \end{array} \bigg| \frac{m_\eta^2}{4m_K^2} \bigg] - \frac{1}{10} \frac{m_{\eta}^4 m_{\pi}^2}{m_{K}^6} \left(\frac{7}{30}+\gamma -\log (4)\right) \nonumber \\ & \quad -\frac{2}{3} \frac{m_{\pi}^2 m_{\eta}^2}{m_{K}^4} \left(\frac{7}{6}+\gamma -\log (4)\right) - \frac{3 \pi}{8} \frac{m_{\pi}^2 m_{\eta}}{m_{K}^3} \left(\log \left[\frac{m_{\pi}^2}{16 m_{\eta}^2}\right]+3\right) + \frac{\pi m_{\pi}^2}{m_{\eta} m_{K}} \log \left[\frac{m_{\pi}^2}{16 m_{\eta}^2}\right] \nonumber \\ & \quad - \frac{5\pi}{128} \frac{m_{\pi}^2 m_{\eta}^3}{m_{K}^5} \left(\log \left[\frac{m_{\pi}^2}{16 m_{\eta}^2}\right] + \frac{13}{3}\right) + \frac{\pi}{16} \frac{m_{\pi}^4}{m_{\eta}^3 m_{K}} \left(2 \log \left[\frac{m_{\pi}^2}{16 m_{\eta}^2}\right]+3\right) \nonumber \\ & \quad + \frac{1}{3} \frac{m_\pi^2}{m_K^2} \frac{m_\eta^2}{m_K^2} {}_3F_2 \bigg[ \begin{array}{c} 1,1,3 \\ \frac{5}{2},2 \\ \end{array} \bigg| \frac{m_\eta^2}{4m_K^2} \bigg] \left( 2\gamma - 1 + \log \left[ \frac{m_\eta^2}{4 m_K^2} \right] + \log \left[ \frac{m_\pi^2}{4 m_K^2} \right] \right) \Bigg\} \label{Hkp2eApprox} \end{align} \subsection{Truncated eta sunsets} \begin{align} & \overline{H}^\chi_{\pi K K} \approx \frac{m_\pi^2}{512\pi^4} \Bigg\{ \frac{\pi ^2}{6}-5-\log^2 \left[ \frac{m_\pi^2}{m_K^2} \right] + 4 \log \left[\frac{m_\pi^2}{m_K^2}\right] + \frac{m_\eta^2}{m_\pi^2} \left(\log \left[ \frac{m_K^2}{m_\eta^2}\right] + \frac{5}{4}\right) + \frac{m_K^2}{m_\pi^2} \left(6+\frac{\pi ^2}{3}\right) \nonumber \\ & + \frac{1}{3} \frac{m_\eta^2}{m_K^2} \left(\frac{7}{6}-\log \left[\frac{m_\pi^2}{m_K^2}\right] \right) + \frac{1}{10} \frac{m_\pi^2}{m_K^2} \frac{m_\eta^2}{m_K^2} \left(\frac{37}{30}-\log \left[ \frac{m_\pi^2}{m_K^2}\right] \right) - \frac{1}{18} \frac{m_\eta^2}{m_\pi^2} \frac{m_\eta^2}{m_K^2} {}_3F_2 \bigg[ \begin{array}{c} 1,1,2 \\ \frac{5}{2},4 \\ \end{array} \bigg| \frac{m_\eta^2}{4 m_K^2} \bigg] \nonumber \\ & + \frac{1}{3} \frac{m_\pi^2}{m_K^2} \left( \frac{8}{3}-\log [4] -{}_2F_1 \bigg[ \begin{array}{c} 1,1 \\ \frac{5}{2} \\ \end{array} \bigg| \frac{m_\pi^2}{4 m_K^2} \bigg] \log \left[ \frac{m_\pi^2}{4 m_K^2} \right] \right) \Bigg\} \label{HpkkApprox} \end{align} \begin{align} & \overline{H}^\chi_{2\pi K K} \approx \frac{1}{512\pi^4} \Bigg\{2 \log \left[\frac{m_\pi^2}{m_K^2}\right]-\log ^2\left[\frac{m_\pi^2}{m_K^2}\right] + \frac{1}{3} \frac{m_\eta^2}{m_K^2} \left(\frac{1}{6}-\log \left[\frac{m_\pi^2}{m_K^2} \right] \right) - \frac{1}{30} \frac{m_\eta^4}{m_K^4}
\left(\frac{19}{15} + \log \left[\frac{m_\pi^2}{m_K^2}\right] \right) \nonumber \\ & \quad + \frac{2}{3} \frac{m_\pi^2}{m_K^2} \Bigg( \frac{8}{3}-\log [4] -\, {}_2F_1 \bigg[ \begin{array}{c} 1,1 \\ \frac{5}{2} \\ \end{array} \bigg| \frac{m_\pi^2}{4 m_K^2} \bigg] \left(\frac{1}{2} + \log \left[\frac{m_\pi^2}{4 m_K^2}\right] \right) \Bigg) + \frac{1}{5} \frac{m_\pi^2}{m_K^2} \frac{m_\eta^2}{m_K^2} \left(\frac{11}{15}-\log \left[\frac{m_\pi^2}{m_K^2}\right] \right) \nonumber \\ & \quad + \frac{1}{10} \frac{m_\pi^4}{m_K^4} \left( \frac{31}{15} - \log [4] -\frac{1}{3} {}_2F_1 \bigg[ \begin{array}{c} 2,2 \\ \frac{7}{2} \\ \end{array} \bigg| \frac{m_\pi^2}{4 m_K^2} \bigg] \log \left[\frac{m_\pi^2}{4 m_K^2}\right] \right) + \frac{5}{231} \frac{m_\pi^4}{m_K^4} \frac{m_\eta^6}{m_K^{6}} \left(\frac{757}{2772} - \log \left[ \frac{m_\pi^2}{m_K^2}\right] \right) \nonumber \\ & \quad - \frac{1}{210} \frac{m_\eta^6}{m_K^6} - \frac{1}{63} \frac{m_\pi^2}{m_K^2} \frac{m_\eta^6}{m_K^6} \left(\frac{79}{630} + \log \left[\frac{m_\pi^2}{m_K^2}\right] \right) + \frac{1}{21} \frac{m_\pi^4}{m_K^4} \frac{m_\eta^4}{m_K^4} \left(\frac{223}{315}-\log \left[\frac{m_\pi^2}{m_K^2}\right] \right) \nonumber \\ & \quad - \frac{2}{35} \frac{m_\pi^2}{m_K^2} \frac{m_\eta^4}{m_K^4} \left(\frac{9}{140} + \log \left[\frac{m_\pi^2}{m_K^2}\right] \right) + \frac{3}{35} \frac{m_\eta^2}{m_K^2} \frac{m_\pi^4}{m_K^4} \left(\frac{533}{420}-\log \left[\frac{m_\pi^2}{m_K^2}\right] \right) + \frac{\pi ^2}{6} - 3 \Bigg\} \label{H2pkkApprox} \end{align} \begin{align} & \overline{H}^\chi_{\pi 2K K} \approx \frac{1}{512\pi^4} \Bigg\{ 1 + \frac{\pi ^2}{6} - \frac{1}{10} \frac{m_\eta^2}{m_K^2} \frac{m_\pi^4}{m_K^4} \left(\frac{11}{15}-\log \left[ \frac{m_\pi^2}{m_K^2} \right]\right) + \frac{1}{60} \frac{m_\pi^6}{m_K^6} {}_2F_1 \bigg[ \begin{array}{c} 2,2 \\ \frac{7}{2} \\ \end{array} \bigg| \frac{m_\pi^2}{4 m_K^2} \bigg] \log \left[\frac{m_\pi^2}{4 m_K^2}\right] \nonumber \\ & - \frac{3}{70} \frac{m_\pi^4}{m_K^4} \frac{m_\eta^4}{m_K^4} \left(\frac{43}{420}-\log \left[ \frac{m_\pi^2}{m_K^2}\right] \right) + \frac{1}{30} \frac{m_\pi^2}{m_K^2} \frac{m_\eta^4}{m_K^4} \left(\frac{23}{30} + \log \left[ \frac{m_\pi^2}{m_K^2}\right] \right) - \frac{2}{63} \frac{m_\eta^4}{m_K^4} \frac{m_\pi^6}{m_K^6} \left(\frac{223}{315}-\log \left[ \frac{m_\pi^2}{m_K^2} \right] \right) \nonumber \\ & - \frac{3}{70} \frac{m_\eta^2}{m_K^2} \frac{m_\pi^6}{m_K^6} \left(\frac{533}{420}-\log \left[ \frac{m_\pi^2}{m_K^2} \right] \right) - \frac{1}{6} \frac{m_\pi^2}{m_K^2}\frac{m_\eta^2}{m_K^2} \left(\frac{1}{6} - \log \left[ \frac{m_\pi^2}{m_K^2} \right] \right) + \frac{1}{2} \frac{m_\eta^2}{m_K^2} {}_3F_2 \bigg[ \begin{array}{c} 1,1,1 \\ \frac{3}{2},3 \\ \end{array} \bigg| \frac{m_\eta^2}{4 m_K^2} \bigg] \nonumber \\ & - \frac{1}{6} \frac{m_\pi^4}{m_K^4} \Bigg( \frac{8}{3} - \log [4] - \left( 1 + \log \left[\frac{m_\pi^2}{4 m_K^2}\right] \right) {}_2F_1 \bigg[ \begin{array}{c} 1,1 \\ \frac{5}{2} \\ \end{array} \bigg| \frac{m_\pi^2}{4 m_K^2} \bigg] \Bigg) - \frac{m_\pi^2}{m_K^2} \left(2-\log \left[ \frac{m_\pi^2}{m_K^2} \right] \right) \Bigg\} \label{Hp2kkApprox} \end{align} \section{Numerical analysis \label{Sec:NumAnalysis}} Several numerical analyses are performed in this section. We first perform a study to determine the relative contributions of the various classes of terms making up the NNLO piece of $m_K$, $F_K$, $m_\eta$ and $F_\eta$, while also examining the difference that arises from the use of the GMO-simplified, as contrasted with the physical, expressions.
Next, by means of numerical tests, we justify the use of the truncated sunset expressions of Section~\ref{Sec:ApproxSunsets} instead of the exact expressions of Appendix~\ref{Sec:SunsetResults} in potential studies involving fits with lattice data. In both these studies, we do not provide uncertainties for the values quoted, as the numerics are comparative rather than absolute in nature. In the last part of this section, we compute values for $m_K$, $F_K$, $m_\eta$ and $F_\eta$ using our expressions, with physical meson mass values as inputs. Since our aim in this last part is to provide numbers that can be used to check our expressions, rather than to present new and carefully recalculated values of the $m_P$ and $F_P$, and in keeping with the convention used in \cite{Bijnens:2014lea}, with whose values our own are compared, we give only central values for the calculated quantities. \subsection{Breakup of the contributions} We begin by giving a numerical breakup of the various terms that make up the masses and decay constants to show their relative contributions. As the expressions given earlier in this paper are `renormalized' ones, we can directly substitute physical values for the meson masses and the pion decay constant in them, and the error incurred is of $\mathcal{O}(p^8)$. Table \ref{TableContrib} gives numerical values for the various components of the two loop contributions to the kaon and eta masses and decay constants for two different sets of values of the LECs, the free fit and the BE14 fit, both obtained from continuum fits at NNLO and summarized in Ref.~\cite{Bijnens:2014lea}. These numbers have been obtained by using the full $m_{\eta}^2$-dependent expressions (i.e. those that have not been simplified by use of the GMO relation), and by summing the first 1000 terms of the single series, and the first 10000 terms of the double series, of the expressions given in Appendix~\ref{Sec:SunsetResults} for the three mass scale sunsets. \begin{table} \centering \begin{tabular}{| c | | c | c c c c c c | c | } \hline & \multirow{2}{*}{Fit} & \multirow{2}{*}{$sunset$} & \multirow{2}{*}{$log \times log$} & \multirow{2}{*}{$log$} & \multirow{2}{*}{$log \times L_i$} & \multirow{2}{*}{${L}_i$} & \multirow{2}{*}{${L}_i \times {L}_j$} & \multirow{2}{*}{Sum} \\[2ex] \hline \hline \multirow{2}{*}{$m^2_K$} & Free Fit & \multirow{2}{*}{$2.4100$} & \multirow{2}{*}{$0.9420$} & \multirow{2}{*}{$3.0586$} & $-2.8763$ & $0.1178$ & $-0.3124$ & $3.3396$ \\ & BE14 & & & & $-4.3794$ & $0.2768$ & $0.0665$ & $2.3745$ \\ \hline \multirow{2}{*}{$F_K$} & Free Fit & \multirow{2}{*}{$-1.2220$} & \multirow{2}{*}{$1.7648$} & \multirow{2}{*}{$-7.3042$} & $18.3342$ & $-0.2398$ & $3.1301$ & $14.4631$ \\ & BE14 & & & & $15.0591$ & $-0.5637$ & $1.2018$ & $8.9358$ \\ \hline \hline \multirow{2}{*}{$m^2_\eta$} & Free Fit & \multirow{2}{*}{$4.1105$} & \multirow{2}{*}{$1.5896$} & \multirow{2}{*}{$5.9059$} & $-7.1642$ & $0.2018$ & $-1.1207$ & $3.5228$ \\ & BE14 & & & & $-10.2093$ & $0.3845$ & $-0.6144$ & $1.1668$ \\ \hline \multirow{2}{*}{$F_\eta$} & Free Fit & \multirow{2}{*}{$-1.2220$} & \multirow{2}{*}{$1.7648$} & \multirow{2}{*}{$-7.3042$} & $18.3342$ & $-0.2398$ & $3.1301$ & $14.4631$ \\ & BE14 & & & & $15.0591$ & $-0.5637$ & $1.2018$ & $8.9358$ \\ \hline \end{tabular} \caption{Contribution (in units of $10^{-6}$) of NNLO component terms to $m^2_P$ and $F_P$.
The inputs are $m_{\pi} = m_{\pi^0} = 0.1350$, $m_K = m_K^{\text{avg}} = 0.4955$, $m_{\eta} = 0.5479$ and $F_{\pi} = F_{\pi\text{ phys}} = 0.0922$, all in GeV. The renormalization scale is $\mu = 0.77$ GeV.} \label{TableContrib} \end{table} For the kaon mass, we see that the largest contributions arise from the pure log term and the pure sunset term. The contribution from the terms involving both the chiral logs and the low energy constants is also large, but its negative sign serves to reduce the total rather than augment it. The contribution of the bilinear log terms is also substantial. The large uncertainties on the $L_i$, however, mean that the contributions of both the $L_i \times L_j$ and $\log \times L_i$ terms to the full loop contribution may differ significantly from what their central values suggest. \begin{table} \centering \begin{tabular}{| c | | c | c c c c c c | c | } \hline & \multirow{2}{*}{Input Masses} & \multirow{2}{*}{$sunset$} & \multirow{2}{*}{$log \times log$} & \multirow{2}{*}{$log$} & \multirow{2}{*}{$log \times L_i$} & \multirow{2}{*}{${L}_i$} & \multirow{2}{*}{${L}_i \times {L}_j$} & \multirow{2}{*}{Sum} \\[2ex] \hline \hline \multirow{2}{*}{$m^2_K$} & Physical & $2.4100$ & $0.9420$ & $3.0586$ & $-4.3794$ & $0.2768$ & \multirow{2}{*}{$0.0665$} & $2.3745$ \\ & GMO & $2.5102$ & $0.9289$ & $3.0225$ & $-4.0554$ & $0.0587$ & & $2.5313$ \\ \hline \multirow{2}{*}{$F_K$} & Physical & $-1.2220$ & $1.7648$ & $-7.3042$ & $15.0591$ & $-0.5637$ & \multirow{2}{*}{$1.2018$} & $8.9358$ \\ & GMO & $-1.2939$ & $1.7698$ & $-7.2988$ & $14.3140$ & $0.4358$ & & $9.1287$ \\ \hline \hline \multirow{2}{*}{$m^2_\eta$} & Physical & $4.1105$ & $1.5896$ & $5.9059$ & $-10.2093$ & $0.3845$ & $-0.6144$ & $1.1668$ \\ & GMO & $4.8962$ & $1.3989$ & $5.9110$ & $-9.7738$ & $0.9473$ & $0.9980$ & $4.3775$ \\ \hline \multirow{2}{*}{$F_\eta$} & Physical & $-1.2220$ & $1.7648$ & $-7.3042$ & $15.0591$ & $-0.5637$ & \multirow{2}{*}{$1.2018$} & $8.9358$ \\ & GMO & $-1.2939$ & $1.7698$ & $-7.2988$ & $14.3140$ & $0.4358$ & & $9.1287$ \\ \hline \end{tabular} \caption{Contribution (in units of $10^{-6}$) of NNLO component terms to $m^2_P$ and $F_P$ for physical and GMO-calculated eta masses. The inputs are the same as those used for Table~\ref{TableContrib}.
The $L_i$ used are the BE14 fit values.} \label{TablePhysVsGMOContrib} \vspace*{12mm} \centering \begin{tabular}{|c||c|c|c|} \hline & \multirow{2}{*}{Physical} & \multirow{2}{*}{GMO} & \multirow{2}{*}{Lattice} \\[2ex] \hline \hline $\overline{H}^{\chi}_{K \pi \eta}$ & $50.1058$ & $52.3059$ & $52.6996$ \\ $\overline{H}^{\chi}_{2K \pi \eta}$ & $47.1145$ & $43.9569$ & $25.3240$ \\ $\overline{H}^{\chi}_{K 2\pi \eta}$ & $-258.6990$ & $-264.8280$ & $-37.7974$ \\ $\overline{H}^{\chi}_{K \pi 2\eta}$ & $63.0648$ & $65.3259$ & $38.1248$ \\ \hline $c^K_{K \pi \eta}$ & $3.0345$ & $3.1439$ & $3.7614$ \\ $d^K_{K \pi \eta}$ & $-2.3367$ & $-2.2692$ & $6.3472$ \\ \hline $c^K_{sunsets}$ & $2.4100$ & $2.5102$ & $4.1692$ \\ $d^K_{sunsets}$ & $-1.2220$ & $-1.2939$ & $-1.1516$ \\ \hline \hline $\overline{H}^{\chi}_{\pi K K}$ & $44.7862$ & $44.7750$ & $49.4563$ \\ $\overline{H}^{\chi}_{2\pi K K}$ & $-236.5110$ & $-234.5361$ & $-29.5042$ \\ $\overline{H}^{\chi}_{\pi 2K K}$ & $58.2355$ & $59.1524$ & $32.2094$ \\ \hline $c^\eta_{\pi K K}$ & $4.0771$ & $4.0273$ & $4.7771$ \\ $d^\eta_{\pi K K}$ & $0.1336$ & $0.3386$ & $11.6803$ \\ \hline $c^\eta_{sunsets}$ & $4.1105$ & $4.8962$ & $6.0683$ \\ $d^\eta_{sunsets}$ & $-1.1654$ & $-1.6868$ & $-1.8894$ \\ \hline \end{tabular} \caption{Contribution (in units of $10^{-6}$) of various components to $m^2_P$ and $F_P$.} \label{TableA} \vspace*{12mm} \begin{tabular}{|c||c|c|c|} \hline & \multirow{2}{*}{Physical} & \multirow{2}{*}{GMO} & \multirow{2}{*}{Lattice} \\[2ex] \hline \hline $(m_K^2)^{(6)}_{loop}$ & $0.0329$ & $0.0350$ & $0.0656$ \\ $(F_K)^{(6)}_{loop}$ & $0.1237$ & $0.1263$ & $0.3305$ \\ \hline $(m_K^2)^{(6)}_{CT}$ & $-0.0437$ & $-0.0437$ & $-0.0276$ \\ $(F_K)^{(6)}_{CT}$ & $0.0238$ & $0.0238$ & $-0.0097$ \\ \hline \hline $(m_\eta^2)^{(6)}_{loop}$ & $0.0161$ & $0.0606$ & $0.0779$ \\ $(F_\eta)^{(6)}_{loop}$ & $0.1888$ & $0.1856$ & $0.3678$ \\ \hline $(m_\eta^2)^{(6)}_{CT}$ & $-0.0115$ & $-0.0115$ & $0.0035$ \\ $(F_\eta)^{(6)}_{CT}$ & $0.0009$ & $0.0009$ & $-0.0302$ \\ \hline \end{tabular} \caption{Contribution of various components to $m^2_P$ and $F_P$.} \label{TableB} \caption*{Tables \ref{TableA} and \ref{TableB}: The inputs for the physical and GMO case are the same as for Table~\ref{TableContrib}. The inputs for the lattice column are $m_{\pi} = 0.4023$ and $m_{K} = 0.5574$, both in GeV.} \end{table} In the case of the kaon decay constant, the largest contribution is from the $log \times L_i$ terms, and is an order of magnitude larger than the next largest positive contributions, which come from the bilinear LEC and bilinear log terms. The linear chiral log terms and the pure sunset terms reduce the two-loop contribution due to their negative sign. As in the case of the kaon mass, the contribution of the $L_i \times L_j$ term may differ significantly from its central value due to the large uncertainties on the $L_i$. Similarly, for both the eta mass and decay constant, the largest contribution in absolute terms comes from the $log \times L_i$ terms. For the eta mass, though, the negative sign of this term offsets the $log$ and $sunset$ contributions, which have the next largest values. In the eta decay constant, however, the $log \times L_i$ term dominates the overall value of the $\mathcal{O}(p^6)$ contribution. In Tables \ref{TablePhysVsGMOContrib}, \ref{TableA} and \ref{TableB}, we justify the use of the GMO relation to obtain simplified expressions for the masses and decay constants.
In all three tables, we see that quantities calculated using the GMO-simplified expressions differ from those calculated using the physical masses by a maximum of around 4\% (e.g. $c^K_{sunsets}$ moves from $2.4100$ to $2.5102$, a shift of about $4.2\%$), the exceptions being $\overline{H}_{2K \pi \eta}$, $c^\eta_{L_i}$, $c^\eta_{L_i \times L_j}$ (and consequently $(m_\eta^2)_{loop}^{(6)}$), and $d^\eta_{\pi K K}$ (thus also $(d^\eta)_{sunsets}$). However, at the level of the total NNLO contribution, the difference is negligible for the kaon mass and small for the kaon and eta decay constants. The column labelled `lattice' in Tables \ref{TableA} and \ref{TableB} gives values for the sunset integrals and the various components making up the NNLO contributions to $m^2_K$ and $F_K$, using as input a particular set of meson mass values used in the lattice simulations of \cite{Durr:2016ulb}. The large divergence between the numbers obtained using the physical and GMO mass inputs on one hand, and the lattice mass inputs on the other, demonstrates the need to use lattice results carefully when comparing them with the expressions presented in this paper. \subsection{Simplified expressions for three mass scale sunset results \label{Sec:NumApproxSunsets}} We show here that the approximate expressions for the sunset integrals presented in Section~\ref{Sec:ApproxSunsets}, obtained by truncating the infinite series at suitable points, are sufficiently precise for purposes of data fitting against the results of the lattice simulations presented in \cite{Durr:2016ulb, Durr:2010hr}. Tables \ref{TableApprox1} and \ref{TableApprox2} show the results for three sets of mass inputs, all taken from \cite{Durr:2016ulb}. The `Lattice Low' columns have as inputs $m_{\pi} = 0.1830$ GeV and $m_{K} = 0.4964$ GeV, values representative of the lower end of the range of masses used in \cite{Durr:2016ulb}. The `Lattice Mid' columns have as inputs $m_{\pi} = 0.3010$ GeV and $m_{K} = 0.5625$ GeV, and the `Lattice High' columns have as inputs $m_{\pi} = 0.4023$ GeV and $m_{K} = 0.5574$ GeV, values representative of the middle and upper end, respectively, of the range of masses used in the aforementioned lattice study. For each of these three sets of masses, the values of the various quantities are calculated in two ways: using the exact values of the sunsets (as given by the results of Appendix~\ref{Sec:SunsetResults}), and using the approximate expressions for the sunsets (as given by Eqs.(\ref{HkpeApprox})-(\ref{Hkp2eApprox})). It can be seen from the results of these tables that the deviation from the exact results is less than $1\%$ in all cases apart from $(m_K^2)^{(6)}_{loop}$ calculated using the `Lattice Low' values. Indeed, the truncations were performed on the full expressions of the sunsets in such a manner that the numerical deviation of the approximations from the exact values is less than $1\%$ for the majority of the meson masses used in \cite{Durr:2016ulb}. More specifically, for $\overline{H}^{\chi}_{K \pi \eta}$, Eq.(\ref{HkpeApprox}) differs from Eq.(\ref{Eq:Hkpe}) by less than $0.5\%$ for all 47 sets of masses used in \cite{Durr:2016ulb}. For $\overline{H}^{\chi}_{2K \pi \eta}$, Eq.(\ref{H2kpeApprox}) differs from Eq.(\ref{Eq:H2kpe}) by more than $1\%$ for seven of these sets of masses, and by less than $0.4\%$ for 38 sets. For both $\overline{H}^{\chi}_{K 2\pi \eta}$ and $\overline{H}^{\chi}_{K \pi 2\eta}$, the truncated results differ from the exact ones by more than $1\%$ for the same three sets of masses.
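For instance, for the `Lattice High' inputs of Table~\ref{TableApprox1}, the truncated $\overline{H}^{\chi}_{K \pi \eta}$ evaluates to $52.6594$ against the exact value $52.6996$, a relative deviation of $|52.6594 - 52.6996|/52.6996 \approx 0.08\%$.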
Similarly, for the eta sunsets, $\overline{H}^{\chi}_{\pi K K}$ differs from Eq.(\ref{HpkkApprox}) by less than 1\% for all sets of masses, and $\overline{H}^{\chi}_{2\pi K K}$ and $\overline{H}^{\chi}_{\pi 2K K}$ differ from Eq.(\ref{H2pkkApprox}) and Eq.(\ref{Hp2kkApprox}) respectively by less than 1\% for all but (the same) six sets of masses. \begin{table} \centering \begin{tabular}{|c||c|c||c|c||c|c||} \hline & \multicolumn{2}{|c||}{\multirow{2}{*}{Lattice Low}} & \multicolumn{2}{|c||}{\multirow{2}{*}{Lattice Mid}} & \multicolumn{2}{|c||}{\multirow{2}{*}{Lattice High}} \\[2ex] \hline & \multirow{2}{*}{Approx} & \multirow{2}{*}{Exact} & \multirow{2}{*}{Approx} & \multirow{2}{*}{Exact} & \multirow{2}{*}{Approx} & \multirow{2}{*}{Exact} \\[2ex] \hline \hline $\overline{H}^{\chi}_{K \pi \eta}$ & $49.1972$ & $49.2763$ & $57.3564$ & $57.4264$ & $52.6594$ & $52.6996$ \\ $\overline{H}^{\chi}_{2K \pi \eta}$ & $40.3584$ & $40.3898$ & $33.4005$ & $33.5287$ & $25.3936$ & $25.3240$ \\ $\overline{H}^{\chi}_{K 2\pi \eta}$ & $-181.192$ & $-180.8920$ & $-94.4140$ & $-94.5730$ & $-37.4788$ & $-37.7974$ \\ $\overline{H}^{\chi}_{K \pi 2\eta}$ & $60.6167$ & $60.8187$ & $51.0392$ & $51.2868$ & $37.9694$ & $38.1248$ \\ \hline $c^K_{K \pi \eta}$ & $2.9267$ & $2.9300$ & $5.1472$ & $5.1522$ & $3.7574$ & $3.7614$ \\ $d^K_{K \pi \eta}$ & $-1.2676$ & $-1.2730$ & $1.6939$ & $1.6774$ & $6.3404$ & $6.3472$ \\ \hline $c^K_{sunsets}$ & $2.4126$ & $2.4158$ & $4.6864$ & $4.6914$ & $4.1651$ & $4.1692$ \\ $d^K_{sunsets}$ & $-1.2508$ & $-1.2562$ & $-1.6999$ & $-1.7164$ & $-1.1584$ & $-1.1516$ \\ \hline \hline $\overline{H}^{\chi}_{\pi K K}$ & $42.5595$ & $42.6486$ & $51.1414$ & $51.3158$ & $49.1902$ & $49.4563$ \\ $\overline{H}^{\chi}_{2\pi K K}$ & $-157.3080$ & $-157.1500$ & $-79.1677$ & $-79.2012$ & $-29.5237$ & $-29.5042$ \\ $\overline{H}^{\chi}_{\pi 2K K}$ & $54.2419$ & $54.1775$ & $44.4467$ & $44.3589$ & $32.3957$ & $32.2094$ \\ \hline $c^\eta_{\pi K K}$ & $3.7206$ & $3.7247$ & $6.4170$ & $6.4305$ & $4.7598$ & $4.7771$ \\ $d^\eta_{\pi K K}$ & $1.0047$ & $1.0522$ & $5.3347$ & $5.4545$ & $11.4664$ & $11.6803$ \\ \hline $c^\eta_{sunsets}$ & $4.5926$ & $4.5967$ & $8.1678$ & $8.1813$ & $6.0509$ & $6.0683$ \\ $d^\eta_{sunsets}$ & $-1.8788$ & $-1.8313$ & $-2.9205$ & $-2.8007$ & $-2.1033$ & $-1.8894$ \\ \hline \end{tabular} \caption{Contribution (in units of $10^{-6}$) of various components to $m^2_P$ and $F_P$ for three sets of meson mass inputs from lattice simulations. For `Lattice Low', $m_\pi=0.1830$ and $m_K=0.4964$; for `Lattice Mid', $m_\pi=0.3010$ and $m_K=0.5625$; for `Lattice High', $m_\pi=0.4023$ and $m_K=0.5574$; all in GeV.} \label{TableApprox1} \end{table} Figure~\ref{FigHApproxError} gives a graphical representation of the relative errors of the truncated sunset expressions over a range of values of $\rho$. The points on the curves are the specific mass points used in \cite{Durr:2016ulb}. It is seen that for values of $\rho \lesssim 0.5$, which constitute the majority of the mass values in the simulation of \cite{Durr:2016ulb}, the relative error is less than $1\%$. Indeed, with $\rho = m_\pi^2/m_K^2$, the three representative mass sets used above correspond to $\rho \approx 0.14$, $0.29$ and $0.52$, so that only the `Lattice High' set sits at the edge of this region, consistent with its slightly larger deviations in Table~\ref{TableApprox1}.
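As a practical aside for implementing the truncated expressions of Section~\ref{Sec:ApproxSunsets}: the ${}_2F_1$ and ${}_3F_2$ functions appearing there are available in standard arbitrary-precision libraries. The sketch below uses Python's \texttt{mpmath}, one possible choice of tool (not the one used in this work), to evaluate, for the physical masses of Table~\ref{TableContrib}, two of the hypergeometric factors appearing in the truncated $\overline{H}^{\chi}_{K \pi \eta}$.
\begin{verbatim}
# Evaluate hypergeometric building blocks of the truncated kaon
# sunsets, e.g. the 2F1[-1/2,1/2;5/2] and 3F2[1,1,2;5/2,4] factors
# of H^chi_{K pi eta}, at the physical meson masses (in GeV).
from mpmath import mp, hyp2f1, hyper

mp.dps = 25                               # working precision in digits
mpi, mk, meta = 0.1350, 0.4955, 0.5479    # m_pi, m_K, m_eta

z = meta**2 / (4 * mk**2)                 # argument m_eta^2/(4 m_K^2)
f1 = hyp2f1(-0.5, 0.5, 2.5, z)            # 2F1[-1/2,1/2;5/2](z)
f2 = hyper([1, 1, 2], [2.5, 4], z)        # 3F2[1,1,2;5/2,4](z)
print(f1, f2)
\end{verbatim}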
\begin{figure} \centering \begin{minipage}{0.45\textwidth} \includegraphics[width=0.98\textwidth]{HkaonApproxError.eps} \end{minipage} ~~ \begin{minipage}{0.45\textwidth} \includegraphics[width=0.98\textwidth]{HetaApproxError.eps} \end{minipage} \caption{Relative errors of the truncated sunset kaon (left) and eta (right) integrals.} \label{FigHApproxError} \end{figure} \vspace*{15mm} \begin{table} \centering \begin{tabular}{|c||c|c||c|c||c|c||} \hline & \multicolumn{2}{|c||}{\multirow{2}{*}{Lattice Low}} & \multicolumn{2}{|c||}{\multirow{2}{*}{Lattice Mid}} & \multicolumn{2}{|c||}{\multirow{2}{*}{Lattice High}} \\[2ex] \hline & \multirow{2}{*}{Approx} & \multirow{2}{*}{Exact} & \multirow{2}{*}{Approx} & \multirow{2}{*}{Exact} & \multirow{2}{*}{Approx} & \multirow{2}{*}{Exact} \\[2ex] \hline \hline $(m_K^2)^{(6)}_{loop}$ & $0.0353$ & $0.0353$ & $0.0710$ & $0.0711$ & $0.0655$ & $0.0656$ \\ $(\overline{F}_K)^{(6)}_{loop}$ & $0.1536$ & $0.1536$ & $0.2559$ & $0.2557$ & $0.3304$ & $0.3305$ \\ \hline $(m_K^2)^{(6)}_{CT}$ & $-0.0384$ & $-0.0384$ & $-0.0560$ & $-0.0560$ & $-0.0276$ & $-0.0276$ \\ $(\overline{F}_K)^{(6)}_{CT}$ & $0.0183$ & $0.0183$ & $0.0108$ & $0.0108$ & $-0.0097$ & $-0.0097$ \\ \hline \hline $(m_\eta^2)^{(6)}_{loop}$ & $0.0217$ & $0.0218$ & $0.0544$ & $0.0544$ & $0.0776$ & $0.0779$ \\ $(\overline{F}_\eta)^{(6)}_{loop}$ & $0.2102$ & $0.2109$ & $0.3152$ & $0.3169$ & $0.3649$ & $0.3678$ \\ \hline $(m_\eta^2)^{(6)}_{CT}$ & $-0.0076$ & $-0.0076$ & $-0.0023$ & $-0.0023$ & $0.0035$ & $0.0035$ \\ $(\overline{F}_\eta)^{(6)}_{CT}$ & $-0.0034$ & $-0.0034$ & $-0.0197$ & $-0.0197$ & $-0.0302$ & $-0.0302$ \\ \hline \end{tabular} \caption{Contribution (in units of $10^0$) of various components to $m^2_P$ and $F_P$ for three sets of meson mass inputs from lattice simulations. For `Lattice Low', $m_\pi=0.1830$ and $m_K=0.4964$; for `Lattice Mid', $m_\pi=0.3010$ and $m_K=0.5625$; for `Lattice High', $m_\pi=0.4023$ and $m_K=0.5574$; all in GeV.} \label{TableApprox2} \end{table} \subsection{Comparison with prior determinations} In this section, we give numerical values for the quantities discussed in this paper in the form LO + NLO + NNLO for both the BE14 and free fits. These have been calculated with the input parameters given under the tables of the previous section. We give both the values calculated using our GMO-simplified expressions, as well as the full ones. \subsubsection{$m^2_K$} Using the full expressions of Section~\ref{Sec:KaonMass} and the BE14 (free fit) LECs, we get the following values: \begin{align} \frac{m_{K}^2}{m_{K,\text{phys}}^2} & = \frac{m_{K0}^2}{m_{K,\text{phys}}^2} + \frac{\left(m_{K}^2 \right)^{(4)}}{m_{K,\text{phys}}^2} + \frac{\left(m_{K}^2\right)^{(6)}_{\text{loop}}}{m_{K,\text{phys}}^2} + \frac{\left(m_{K}^2\right)^{(6)}_{\text{CT}}}{m_{K, \text{phys}}^2} \nonumber \\ & = 1 -0.0690 (+0.0229) + 0.1338 (0.1882) -0.1779 (-0.2049) \end{align} and using the GMO-simplified expressions: \begin{align} \frac{m_{K}^2}{m_{K,\text{phys}}^2} = 1 -0.0704 (+0.0215) + 0.1427(0.1959) -0.1779 (-0.2049) \end{align} These numbers are close to the literature values \cite{Ecker:2013pba}: \begin{align} \left( \frac{m_{K}^2}{m_{K,\text{phys}}^2} \right)_{lit} & = 1.112(0.994) -0.069(+0.022) -0.043(-0.016) \end{align} for the BE14 case; the agreement is less close for the free fit case, but the numbers are still compatible.
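Indeed, combining the loop and counterterm pieces above, the total NNLO contribution is $0.1338-0.1779=-0.0441$ for BE14 and $0.1882-0.2049=-0.0167$ for the free fit, to be compared directly with the literature NNLO terms $-0.043$ and $-0.016$ respectively.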
\subsubsection{$F_K$} For the kaon decay constant, using the BE14 (free fit) low energy constants and the expressions of Section \ref{Sec:KaonDecay}, we obtain: \begin{align} \frac{F_{K}}{F_{0}} &= 1 + \left(F_{K}\right)^{(4)} + \left(F_{K} \right)^{(6)}_{\text{loop}} + \left(F_{K} \right)^{(6)}_{\text{CT}} \nonumber\\ & = 1 + 0.3849(0.4355) + 0.1237(0.2001) + 0.0238(0.0422) \end{align} Using the BE14 (free fit) low energy constants and the GMO-simplified expressions, we get: \begin{align} \frac{F_{K}}{F_{0}} = 1 + 0.3828 (0.4334) + 0.1263 (0.2012) + 0.0238 (0.0423) \end{align} To obtain $F_{K}/F_{\pi}$, we use the expansion presented in \cite{Bijnens:2011tb}: \begin{align} \frac{F_K}{F_{\pi}} = 1 + \left( \frac{F_K}{F_0} \bigg|_{p^4} - \frac{F_{\pi}}{F_0} \bigg|_{p^4} \right)_{\text{NLO}} + \left( \frac{F_K}{F_0} \bigg|_{p^6} - \frac{F_{\pi}}{F_0} \bigg|_{p^6} - \frac{F_K}{F_0} \bigg|_{p^4} \frac{F_{\pi}}{F_0} \bigg|_{p^4} + \frac{F_{\pi}}{F_0} \bigg|^2_{p^4} \right)_{\text{NNLO}} \end{align} and values for $F_{\pi}/F_0$ calculated in \cite{Ananthanarayan:2017yhz}. We get: \begin{align} \frac{F_{K}}{F_{\pi}} = 1 + 0.1764 (0.1208) + 0.0226 (0.0769) \end{align} using the full expressions and the BE14 (free fit) LEC values. These values agree well with the numbers presented in \cite{Ecker:2013pba}: \begin{align} \left( \frac{F_{K}}{F_{\pi}} \right)_{lit} = 1 + 0.176(0.121) + 0.023(0.077) \end{align} \subsubsection{$m^2_\eta$} Using the full expressions of Section \ref{Sec:EtaMass} and the BE14 (free fit) LECs, we get the following values: \begin{align} \frac{m_\eta^2}{m_{\eta,\text{phys}}^2} & = \frac{m_{\eta 0}^2}{m_{\eta,\text{phys}}^2} + \frac{\left(m_\eta^2 \right)^{(4)}}{m_{\eta,\text{phys}}^2} + \frac{\left(m_\eta^2 \right)^{(6)}_{\text{loop}}}{m_{\eta,\text{phys}}^2} + \frac{\left(m_\eta^2 \right)^{(6)}_{\text{CT}}}{m_{\eta, \text{phys}}^2} \nonumber \\ & = 1 -0.2126(-0.0736) +0.0538(0.1624) -0.0383(-0.1498) \end{align} and using the GMO-simplified expressions: \begin{align} \frac{m_\eta^2}{m_{\eta,\text{phys}}^2} = 1 -0.2595(-0.1250) + 0.2018(0.2919) -0.0383(-0.1498) \end{align} As with the kaon, the BE14 numbers are close to the literature values \cite{Ecker:2013pba}, while the free fit numbers agree only mildly: \begin{align} \left( \frac{m_{\eta}^2}{m_{\eta,\text{phys}}^2} \right)_{lit} & = 1.197(0.938) -0.214(-0.076) +0.017(0.014) \end{align} \subsubsection{$F_\eta$} Using the full expressions of Section \ref{Sec:EtaDecay} and the BE14 (free fit) LECs, we get the following values: \begin{align} \frac{F_\eta}{F_0} & = 1 + \left(F_\eta \right)^{(4)} + \left(F_\eta\right)^{(6)}_{\text{loop}} + \left(F_\eta\right)^{(6)}_{\text{CT}} \nonumber \\ & = 1 + 0.4672(0.4996) + 0.1888(0.2597) + 0.0009(0.0254) \end{align} and using the GMO-simplified expressions: \begin{align} \frac{F_\eta}{F_0} = 1 +0.4672(0.4996) + 0.1797(0.2508) + 0.0009(0.0254) \end{align} \section{Lattice Fittings \label{Sec:LatticeFits}} We present in this section a simplified form of the expressions for $m_K$, $F_K$, $m_\eta$ and $F_\eta$ that may conveniently be used in fits with lattice data. For this purpose, we used the simplified expressions of the sunset master integrals of Section~\ref{Sec:ApproxSunsets}, and expanded the $c^P_{sunset}$ and $d^P_{sunset}$ terms around the mass ratio $m_\pi^2/m_K^2 = 0$. Though the integrals $\overline{H}^\chi_{K 2\pi \eta}$ and $\overline{H}^\chi_{2\pi K K}$ diverge in the $m_\pi^2 \rightarrow 0$ limit, the fact that they are multiplied by factors of $m_\pi^2$ ensures analyticity of the expressions in this limit.
\subsection{$m^2_K$} The GMO expressions for the kaon mass can be written as: \begin{align} m_K^2 =& m_{K0}^2 + m_K^2 \left\{ \left(\frac{4}{9}\xi_K-\frac{1}{9}\xi_\pi\right) \lambda_\eta +\xi_K \hat L_{1M}^r + \xi_\pi \hat L_{2M}^r \right\} \nonumber\\ & \qquad + m_K^2\Bigg\{ \hat K_{1M}^r \lambda_\pi^2 + \hat K_{2M}^r \lambda_\pi\lambda_K + \hat K_{3M}^r \lambda_\pi\lambda_\eta + \hat K_{4M}^r \lambda_K^2 + \hat K_{5M}^r \lambda_K\lambda_\eta + \hat K_{6M}^r \lambda_\eta^2 \nonumber\\ & \qquad \qquad \quad + \xi_K^2 F_M \left[\frac{m_\pi^2}{m_K^2}\right] + \hat C_{1M} \lambda_\pi+\hat C_{2M}\lambda_K+\hat C_{3M}\lambda_\eta + \hat C_{4M} \Bigg\} \end{align} where $\xi_\pi=m_\pi^2/(16\pi^2 F_\pi^2)$, $\xi_K= m_K^2/(16\pi^2 F_\pi^2)$ and $\lambda_i = \log(m_i^2/\mu^2)$. The coefficients $\hat L^r_{iM}$ are functions of the NLO LECs $L_i^r$. Each of the $\hat K_{iM}^r$, $\hat C_{iM}^r$ has three terms, proportional to $\xi_\pi^2$, $\xi_\pi\xi_K$ and $\xi_K^2$ respectively. The $\hat K_{iM}$ and $F_M$ are fully determined; the $\hat C_{iM}^r$, $i=1,2,3$, depend linearly on the NLO LECs, and $\hat C_{4M}$ depends up to quadratically on the NLO LECs and linearly on the NNLO LECs. There is some ambiguity in distributing the terms not depending on LECs among the various structures, since $\log(m_i^2/m_K^2)=\lambda_i-\lambda_K$ for $i=\pi,\eta$. The $F_I$ can be subdivided as: \begin{align} F_I [ \rho ] =& a_{1I} + \bigg( a_{2I} + a_{3I} \log[\rho] + a_{4I} \log^2[\rho] \bigg) \rho + \bigg( a_{5I} + a_{6I} \log[\rho] + a_{7I} \log^2[\rho] \bigg) \rho^2 \nonumber \\ & + \bigg( a_{8I} + a_{9I} \log[\rho] + a_{10I} \log^2[\rho] \bigg) \rho^3 + \bigg( a_{11I} + a_{12I} \log[\rho] + a_{13I} \log^2[\rho] \bigg) \rho^4 + \mathcal{O} \left( \rho^5 \right) \label{Eq:FI} \end{align} where $\rho = m_\pi^2/m_K^2$, and for the kaon mass, $I=M$. For a more detailed discussion of the various possible forms in which the above expressions may be cast for fitting with lattice data, see \cite{Ananthanarayan:2017yhz}. Note that, unlike in \cite{Ananthanarayan:2017yhz}, where $F_I [ \rho ]$ was truncated after $\mathcal{O} \left( \rho^3 \right)$, here we retain terms up to $\mathcal{O} \left( \rho^4 \right)$. Our justification for doing so is that only at $\mathcal{O} \left( \rho^4 \right)$ does the expansion converge to the desired level of accuracy. This is shown graphically in Figure~\ref{FigLatticeFit}, where the expression $F_I$, which contains the terms $c_{\text{sunsets}}$ or $d_{\text{sunsets}}$ and the terms from the bilinear chiral logs that are proportional to powers of $\rho$, is plotted with four different inputs. The blue plot was calculated using the exact values of the sunset integrals, the red using the approximate expressions of Section~\ref{Sec:ApproxSunsets}, and the dotted and dashed plots using the truncated sunset expressions expanded in $\rho$ up to $\mathcal{O}(\rho^3)$ and $\mathcal{O}(\rho^4)$ respectively. It is seen that only at $\mathcal{O}(\rho^4)$ do the expansions converge reasonably well to the exact ones over the entire range of interest of $\rho$, i.e. for $\rho \lesssim 0.5$.
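For numerical work it is convenient to evaluate the truncated expansion Eq.(\ref{Eq:FI}) directly. The following minimal Python sketch (the helper name \texttt{F\_I} and the list-based interface are ours, not part of the ancillary \texttt{Mathematica} code) takes the thirteen coefficients $a_{1I},\dots,a_{13I}$ and drops all terms of $\mathcal{O}(\rho^5)$:
\begin{verbatim}
import numpy as np

def F_I(rho, a):
    # a = [a_1, ..., a_13]: the thirteen coefficients of Eq. (Eq:FI);
    # terms of O(rho^5) and higher are neglected.
    L = np.log(rho)
    result = a[0]
    for k in range(1, 5):  # powers rho^1 ... rho^4
        c0, c1, c2 = a[3*k - 2], a[3*k - 1], a[3*k]
        result += (c0 + c1*L + c2*L**2) * rho**k
    return result
\end{verbatim}
For the kaon mass one inserts the numerical values of the $a_{iM}$ listed below; the analogous call with the $a_{iF}$ coefficients of the next subsection gives $F_F$.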
\begin{figure} \centering \begin{minipage}{0.45\textwidth} \includegraphics[width=0.98\textwidth]{LatticeFitMK.eps} \end{minipage} ~~ \begin{minipage}{0.45\textwidth} \includegraphics[width=0.98\textwidth]{LatticeFitFK.eps} \end{minipage} \caption{$F_M$ (left) and $F_F$ (right) plotted against $\rho$ using exact and truncated sunset integral values, as well as expansions of the latter up to $\mathcal{O}(\rho^3)$ and $\mathcal{O}(\rho^4)$.} \label{FigLatticeFit} \end{figure} Explicitly, for $m_K$, we have: \begin{align} & \hat{L}^r_{1M} = -8 (4 \pi )^2 (2 L^r_{4}+L^r_{5}-4 L^r_{6}-2 L^r_{8}), \quad \hat{L}^r_{2M} = -8 (4 \pi )^2 (L^r_{4}-2 L^r_{6}) \end{align} \begin{align} & \hat{K}^r_{1M} = \frac{1}{8} \xi _{\pi } \xi _K + \frac{169}{192} \xi _{\pi }^2, \quad \hat{K}^r_{2M} = \frac{1}{16} \xi _{\pi }^2 -\frac{3}{8} \xi _{\pi } \xi _K , \quad \hat{K}^r_{6M} = -\frac{11}{324} \xi_K^2 - \frac{47}{324} \xi _{\pi } \xi _K + \frac{1279}{5184} \xi _{\pi }^2 \nonumber \\ & \hat{K}^r_{4M} = \frac{43}{24} \xi _K^2 + \frac{1}{9} \xi _{\pi } \xi _K, \quad \hat{K}^r_{5M} = \frac{7}{18} \xi _K^2 + \frac{19}{72} \xi _{\pi } \xi _K - \frac{1}{16} \xi _{\pi }^2, \quad \hat{K}^r_{3M} = -\frac{55}{72} \xi _{\pi } \xi _K - \frac{97}{288} \xi _{\pi }^2 \end{align} \begin{align} \hat{C}^r_{1M} =& \left(16 (4 \pi )^2 (2 L^r_{4}+L^r_{5}-4 L^r_{6}-2 L^r_{8})-\frac{11}{8}\right) \xi _{\pi } \xi _K \nonumber \\ & - \left((4 \pi )^2 (48 L^r_{1}+12 L^r_{2}+15 L^r_{3}-68 L^r_{4}-12 L^r_{5}+88 L^r_{6}+24 L^r_{8})+\frac{455}{288}\right) \xi _{\pi }^2 \end{align} \begin{align} \hat{C}^r_{2M} =& \left(8 (4 \pi )^2 (L^r_{4}-2 L^r_{6})-\frac{41}{36}\right) \xi _{\pi } \xi _K \nonumber \\ & - \left(2 (4 \pi )^2 (36 L^r_{1}+18 L^r_{2}+15 L^r_{3}-40 L^r_{4}-16 L^r_{5}+64 L^r_{6}+32 L^r_{8})+\frac{487}{144}\right) \xi _K^2 \end{align} \begin{align} \hat{C}^r_{3M} =& \left(\frac{8}{9} (4 \pi )^2 (16 L^r_{1}+4 L^r_{2}+7 L^r_{3}-18 L^r_{4}-L^r_{5}+20 L^r_{6}-12 L^r_{7}+6 L^r_{8})+\frac{5}{8}\right) \xi _{\pi } \xi _K \nonumber \\ & -\left(\frac{16}{9} (4 \pi )^2 (16 L^r_{1}+4 L^r_{2}+7 L^r_{3}-24 L^r_{4}-8 L^r_{5}+32 L^r_{6}+16 L^r_{8})+\frac{74}{27}\right) \xi _K^2 \nonumber \\ & + \left(\frac{13}{864}-\frac{1}{9} (4 \pi )^2 (16 L^r_{1}+4 L^r_{2}+7 L^r_{3}-12 L^r_{4}+12 L^r_{5}+8 L^r_{6}-96 L^r_{7}-40 L^r_{8})\right) \xi _{\pi }^2 \end{align} \begin{align} \hat{C}^r_{4M} &= \frac{1}{27} (4 \pi )^2 \bigg\{ \bigg( 108 L^r_{1}+366 L^r_{2}+89 L^r_{3}-32 L^r_{5}+384 L^r_{7}+192 L^r_{8} \bigg) \xi _K^2 \nonumber \\ & \quad -\bigg( 48 L^r_{2} + 4 L^r_{3} - 64 L^r_{5} + 768 L^r_{7} + 384 L^r_{8} \bigg) \xi _{\pi } \xi _K + \bigg( 168 L^r_{2}+41 L^r_{3}-32 L^r_{5}+384 L^r_{7}+192 L^r_{8} \bigg) \xi _{\pi }^2 \bigg\} \nonumber \\ & -16 \left( 16 \pi ^2\right)^2 \bigg\{ 2 \bigg( C^r_{12}+2 C^r_{13}+C^r_{14}+C^r_{15}+2 C^r_{16}-3 C^r_{19}-4 C^r_{20}-6 C^r_{21}-C^r_{31}-2 C^r_{32} + 16 (L^r_{4})^2 \nonumber \\ & \qquad + 12 L^r_{4} L^r_{5} - 64 L^r_{4} L^r_{6} - 32 L^r_{4} L^r_{8} + 2 (L^r_{5})^2 - 24 L^r_{5} L^r_{6} - 12 L^r_{5} L^r_{8} + 64 (L^r_{6})^2 + 64 L^r_{6} L^r_{8} + 16 (L^r_{8})^2 \bigg) \xi _K^2 \nonumber \\ & \quad + \bigg( 2 C^r_{13}-2 C^r_{14}+C^r_{15}-4 C^r_{16}+2 C^r_{17}+6 C^r_{19}+2 C^r_{20}-12 C^r_{21}-2 C^r_{32} + 32 (L^r_{4})^2 + 16 L^r_{4} L^r_{5} \nonumber \\ & \qquad - 128 L^r_{4} L^r_{6} - 24 L^r_{4} L^r_{8} + 4 (L^r_{5})^2 - 32 L^r_{5} L^r_{6} - 8 L^r_{5} L^r_{8} + 128 (L^r_{6})^2 + 48 L^r_{6} L^r_{8} \bigg) \xi _{\pi } \xi _K \nonumber \\ & \quad + \bigg( C^r_{14}+3 C^r_{16}-C^r_{17}-3 C^r_{19}-3 C^r_{20}-3
C^r_{21}+8 (L^r_{4}-2 L^r_{6}) (L^r_{4}+L^r_{5}-2 L^r_{6}-L^r_{8}) \bigg) \xi _{\pi }^2 \bigg\} \end{align} \begin{align} a_{1M} =& \frac{1165}{2592} \left(\text{Li}_2\left[ \frac{3}{4} \right] +\log [4] \log \left[ \frac{4}{3}\right] \right) +\frac{25 \pi ^2}{288}+\frac{2665}{3456}+\frac{23 \pi }{12 \sqrt{2}}-\frac{103}{192} \log ^2\left[\frac{4}{3}\right] - \frac{163}{216} \log \left[ \frac{4}{3} \right] \nonumber \\ & - \frac{1}{24} \text{arccosec}^2\left[\sqrt{3}\right] + \left(\frac{\pi}{24}-\frac{23}{6 \sqrt{2}}\right) \text{arccosec} \left[ \sqrt{3} \right] \end{align} \begin{align} a_{2M} =& -\frac{689}{648} \left(\text{Li}_2\left[\frac{3}{4}\right]+\log [4] \log \left[\frac{4}{3}\right] \right)+\frac{11 \pi ^2}{72}-\frac{386 \gamma }{135}+\frac{71687}{16200}-\frac{221 \pi }{108 \sqrt{2}}-\frac{3277 \pi }{4320 \sqrt{3}}+\frac{5 \sqrt{2} \pi }{27} \nonumber \\ & +\frac{53}{144} \log ^2\left[\frac{4}{3}\right] + \frac{55}{54} \log \left[ \frac{4}{3} \right] -\frac{1}{90}\log [4] - \frac{7 \pi}{288 \sqrt{3}} \log \left[\frac{64}{3}\right] + \frac{19}{24} \text{arccosec}^2 \left[ \sqrt{3} \right] \nonumber \\ & +\left(\frac{43 \sqrt{2}}{27}+\frac{17 \gamma }{3 \sqrt{2}}-\frac{19 \pi }{24}\right) \text{arccosec}\left[ \sqrt{3} \right] - \frac{1}{54} \psi \left[\frac{5}{2}\right] \end{align} \begin{align} a_{3M} = \frac{11}{8}+\frac{7 \pi}{288 \sqrt{3}}-\frac{1}{8} \log \left[ \frac{4}{3} \right], \quad a_{4M} = -\frac{1}{8}, \quad a_{7M} = -\frac{169}{192}, \quad a_{10M} = \frac{3}{16}, \quad a_{13M} = \frac{9}{64} \end{align} \begin{align} a_{5M} =& \frac{1031}{1296} \left(\text{Li}_2 \left[ \frac{3}{4}\right] +\log [4] \log \left[ \frac{4}{3} \right] \right) -\frac{23 \pi ^2}{48}+\frac{479393}{388800}+\frac{65 \pi }{72 \sqrt{2}}+\frac{706841 \pi }{331776 \sqrt{3}}+\frac{21737 \gamma }{6480} \nonumber \\ & -\frac{55}{192} \log ^2\left[\frac{4}{3}\right]-\frac{151}{90} \log [4] - \frac{551}{1728} \log \left[\frac{4}{3}\right] - \frac{62437 \pi}{55296 \sqrt{3}} \log \left[\frac{64}{3}\right] - \frac{23}{24} \text{arccosec}^2 \left[ \sqrt{3} \right] \nonumber \\ & -\frac{251}{648} \psi \left[ \frac{5}{2} \right] + \left(-\frac{173}{96 \sqrt{2}}-\frac{1009 \gamma }{144 \sqrt{2}}+\frac{23 \pi }{24}+\frac{23}{16 \sqrt{2}} \log [12] \right) \text{arccosec} \left[ \sqrt{3}\right] \end{align} \begin{align} & a_{6M} = -\frac{79}{48}+\frac{62437 \pi }{55296 \sqrt{3}}+\frac{43}{96} \log \left[ \frac{4}{3}\right] -\frac{23}{16 \sqrt{2}} \text{arccosec} \left[ \sqrt{3} \right] \end{align} \begin{align} a_{8M} &= -\frac{43}{216} \left(\text{Li}_2\left[\frac{3}{4}\right]+\log [4] \log \left[\frac{4}{3}\right] \right)+\frac{11 \pi ^2}{72}-\frac{199933 \gamma }{207360}-\frac{9347509}{6220800}-\frac{563 \pi }{2304 \sqrt{2}}-\frac{8967451 \pi }{13271040 \sqrt{3}} \nonumber \\ & +\frac{30889}{51840} \log [4] + \frac{9653}{34560} \log \left[ \frac{4}{3}\right] + \frac{284179 \pi}{442368 \sqrt{3}} \log \left[ \frac{64}{3}\right] + \frac{5}{24} \text{arccosec}^2 \left[ \sqrt{3} \right] + \frac{47}{144} \psi\left[\frac{5}{2}\right] \nonumber \\ & + \left(-\frac{5 \pi }{24}+\frac{1015}{1024 \sqrt{2}}+\frac{6313 \gamma }{4608 \sqrt{2}}-\frac{175}{768 \sqrt{2}}\log [12] \right) \text{arccosec} \left[ \sqrt{3} \right] \end{align} \begin{align} & a_{9M} = -\frac{5681}{17280}-\frac{284179 \pi }{442368 \sqrt{3}}+\frac{1}{24} \log \left[ \frac{4}{3}\right] + \frac{175}{768 \sqrt{2}} \text{arccosec} \left[ \sqrt{3}\right] \end{align} \begin{align} a_{11M} &= \frac{5}{288} \left(\text{Li}_2 \left[ \frac{3}{4} \right] + 
\log [4] \log \left[\frac{4}{3}\right] \right)+\frac{25 \pi ^2}{288}+\frac{21213943}{33177600}+\frac{1981 \pi }{110592 \sqrt{2}}+\frac{331627 \pi }{42467328 \sqrt{3}}+\frac{166979 \gamma }{1105920} \nonumber \\ & -\frac{61451}{1548288} \log \left[\frac{4}{3}\right] - \frac{737789}{3870720} \log [4] - \frac{708911 \pi}{7077888 \sqrt{3}} \log \left[\frac{64}{3}\right] -\frac{7}{72} \psi\left[\frac{5}{2}\right] \nonumber \\ & + \left(-\frac{2309 \gamma }{73728 \sqrt{2}}-\frac{8057}{442368 \sqrt{2}}+\frac{527}{24576 \sqrt{2}} \log [12] \right) \text{arccosec} \left[ \sqrt{3}\right] \end{align} \begin{align} & a_{12M} = -\frac{499231}{1548288}+\frac{708911 \pi }{7077888 \sqrt{3}}-\frac{1}{96} \log \left[\frac{4}{3}\right]-\frac{527}{24576 \sqrt{2}} \text{arccosec} \left[ \sqrt{3}\right] \end{align} \subsection{$F_K$} We can fit $F_K$ in a similar manner as follows: \begin{align} \frac{F_K}{F} &= 1 + \left\{ -\frac{3}{8} \xi_{\pi} \lambda_{\pi} + \left(\frac{1}{8}\xi_{\pi}-\frac{1}{2}\xi_K \right) \lambda _{\eta } -\frac{3}{4} \xi_K \lambda_K +\xi_K \hat L_{1F}^r + \xi_\pi \hat L_{2F}^r \right\} \nonumber \\ & \qquad +\Bigg\{ \hat K_{1F}^r \lambda_\pi^2 + \hat K_{2F}^r \lambda_\pi\lambda_K + \hat K_{3F}^r \lambda_\pi\lambda_\eta + \hat K_{4F}^r \lambda_K^2 + \hat K_{5F}^r \lambda_K\lambda_\eta + \hat K_{6F}^r \lambda_\eta^2 \nonumber\\ & \qquad \quad + \xi_K^2 F_F\left[ \frac{m_\pi^2}{m_K^2} \right] + \hat C_{1F}\lambda_\pi+\hat C_{2F}\lambda_K+\hat C_{3F}\lambda_\eta + \hat C_{4F} \Bigg\} \end{align} where \begin{align} \hat{L}^r_{1F} = 4 (4 \pi )^2 (2 L^r_{4}+L^r_{5}), \quad \hat{L}^r_{2F} = 4 (4 \pi )^2 L^r_{4} \end{align} \begin{align} & \hat{K}^r_{1F} = \frac{1}{6} \xi _{\pi } \xi _K - \frac{5}{192} \xi _{\pi }^2, \quad \hat{K}^r_{2F} = \frac{51}{32} \xi _{\pi } \xi _K - \frac{3}{32} \xi _{\pi }^2, \hat{K}^r_{6F} = \frac{31}{36} \xi _K^2 - \frac{11}{72} \xi _{\pi } \xi _K - \frac{21}{64} \xi _{\pi }^2 \nonumber \\ & \hat{K}^r_{4F} = \frac{155}{288} \xi _K^2 + \frac{11}{144} \xi _{\pi } \xi _K, \quad \hat{K}^r_{5F} = -\frac{91}{72} \xi _K^2 - \frac{53}{288} \xi _{\pi } \xi _K + \frac{3}{32} \xi _{\pi }^2, \quad \quad \hat{K}^r_{3F} = \frac{25}{24} \xi _{\pi } \xi _K + \frac{47}{96} \xi _{\pi }^2 \end{align} \begin{align} \hat{C}^r_{1F} = \left(\frac{3}{16}-\frac{19}{2} (4 \pi )^2 (2 L^r_{4}+L^r_{5})\right) \xi _{\pi } \xi _K + \left(\frac{1}{2} (4 \pi )^2 (48 L^r_{1}+12 L^r_{2}+15 L^r_{3}-47 L^r_{4}-6 L^r_{5})+\frac{53}{64}\right) \xi _{\pi }^2 \end{align} \begin{align} \hat{C}^r_{2F} = \left(\frac{245}{96} + (4 \pi )^2 (36 L^r_{1}+18 L^r_{2}+15 L^r_{3}-30 L^r_{4}-7 L^r_{5}) \right) \xi _K^2 + \left(\frac{173}{144}-(4 \pi )^2 (7 L^r_{4}+6 L^r_{5})\right) \xi _{\pi } \xi _K \end{align} \begin{align} \hat{C}^r_{3F} =& \left(\frac{19}{18} + \frac{2}{9} (4 \pi )^2 (64 L^r_{1}+16 L^r_{2}+28 L^r_{3}-66 L^r_{4}-3 L^r_{5}-72 L^r_{7}-36 L^r_{8})\right) \xi _K^2 \nonumber \\ & - \left(\frac{65}{144} + \frac{1}{18} (4 \pi )^2 (128 L^r_{1}+32 L^r_{2}+56 L^r_{3}-78 L^r_{4}+111 L^r_{5}-576 L^r_{7}-288 L^r_{8}) \right) \xi _{\pi } \xi _K \nonumber \\ & + \left(\frac{3}{64} + \frac{1}{18} (4 \pi )^2 (16 L^r_{1}+4 L^r_{2}+7 L^r_{3}-3 L^r_{4}+42 L^r_{5}-288 L^r_{7}-144 L^r_{8}) \right) \xi _{\pi }^2 \end{align} \begin{align} \hat{C}^r_{4F} &= 8 \left(16 \pi ^2\right)^2 \bigg\{ \bigg( -2 C^r_{14}+C^r_{15}-4 C^r_{16}+2 C^r_{17}+28 (L^r_{4})^2+14 L^r_{4} L^r_{5}-32 L^r_{4} L^r_{6}+4 (L^r_{5})^2-8 L^r_{5} L^r_{6} \bigg) \xi _{\pi } \xi _K \nonumber \\ & \quad + \bigg( 2 C^r_{14}+2 C^r_{15}+4 C^r_{16}+(2 
L^r_{4}+L^r_{5}) (14 L^r_{4}+3 L^r_{5}-16 L^r_{6}-8 L^r_{8}) \bigg) \xi _K^2 \nonumber \\ & \quad + \bigg( C^r_{14}+3 C^r_{16}-C^r_{17}+7 (L^r_{4})^2+8 L^r_{4} L^r_{5}-8 L^r_{4} L^r_{6}-8 L^r_{4} L^r_{8} \bigg) \xi _{\pi }^2 \bigg\} \nonumber \\ & +\frac{2}{27} (4 \pi )^2 \bigg\{ \bigg( 12 L^r_{2}+L^r_{3}-36 L^r_{5}+432 L^r_{7}+216 L^r_{8} \bigg) \xi _{\pi } \xi _K - \left(42 L^r_{2}+\frac{41}{4} L^r_{3}-18 L^r_{5}+216 L^r_{7}+108 L^r_{8}\right) \xi _{\pi }^2 \nonumber \\ & \quad -\left(27 L^r_{1}+\frac{183 L^r_{2}}{2}+\frac{89 L^r_{3}}{4}-18 L^r_{5}+216 L^r_{7}+108 L^r_{8}\right) \xi _K^2 \bigg\} \end{align} We subdivide $F_F$ as in Eq.(\ref{Eq:FI}) with $I=F$, and with the $a_{iF}$ given by: \begin{align} a_{1F} =& -\frac{6337}{5184} \left(\text{Li}_2\left[\frac{3}{4}\right] + \log [4] \log \left[ \frac{4}{3} \right] \right) + \frac{41 \pi^2}{192} - \frac{11 \sqrt{2} \pi}{27} + \frac{10525}{6912} - \frac{119 \pi }{216 \sqrt{2}}-\frac{23}{1152} \log ^2\left[\frac{4}{3}\right] \nonumber \\ & + \frac{127}{48} \log \left[\frac{4}{3}\right] + \frac{41}{48} \text{arccosec}^2 \left[ \sqrt{3}\right] + \left(\frac{295}{108 \sqrt{2}}-\frac{41 \pi }{48}\right) \text{arccosec} \left[\sqrt{3}\right] \end{align} \begin{align} a_{2F} =& \frac{5821}{2592} \left(\text{Li}_2\left[\frac{3}{4}\right] + \log [4] \log \left[ \frac{4}{3} \right] \right) -\frac{25 \pi ^2}{96}-\frac{2050019}{388800}+\frac{145 \pi }{72 \sqrt{2}}+\frac{38693 \pi }{25920 \sqrt{3}}+\frac{82 \gamma }{405}-\frac{137}{576} \log ^2 \left[ \frac{4}{3} \right] \nonumber \\ & - \frac{1687}{810} \log \left[ \frac{4}{3} \right] - \frac{281}{540} \log [4] - \frac{13 \pi}{1728 \sqrt{3}} \log \left[\frac{64}{3}\right] +\frac{11}{48} \text{arccosec}^2 \left[ \sqrt{3} \right] - \frac{29}{324} \psi \left[\frac{5}{2}\right] \nonumber \\ & + \left(-\frac{11 \pi }{48}-\frac{13}{3 \sqrt{2}}-\frac{13 \gamma }{18 \sqrt{2}}+\frac{1}{6 \sqrt{2}} \log [1728] \right) \text{arccosec} \left[ \sqrt{3} \right] \end{align} \begin{align} a_{3F} = \frac{169}{6480} + \frac{13 \pi }{1728 \sqrt{3}}+\frac{7}{48} \log \left[ \frac{4}{3} \right] -\frac{1}{2 \sqrt{2}} \text{arccosec} \left[ \sqrt{3} \right] \end{align} \begin{align} a_{4F} = -\frac{1}{6}, \quad a_{7F} = \frac{325}{384}, \quad a_{10F} = -\frac{9}{64}, \quad a_{13F} = - \frac{27}{128} \end{align} \begin{align} a_{5F} =& -\frac{845}{648} \left(\text{Li}_2\left[\frac{3}{4}\right]+\log [4] \log \left[ \frac{4}{3}\right] \right) +\frac{5 \pi ^2}{18} - \frac{1301 \sqrt{3} \pi }{512}-\frac{66191 \gamma }{12960}+\frac{25789}{155520}-\frac{145 \pi }{144 \sqrt{2}}+\frac{3572063 \pi }{663552 \sqrt{3}} \nonumber \\ & + \frac{145}{384} \log^2 \left[ \frac{4}{3} \right] + \frac{15403}{6480} \log [4] + \frac{15941}{17280} \log \left[\frac{4}{3}\right] + \frac{176189 \pi}{110592 \sqrt{3}} \log \left[\frac{64}{3}\right] + \frac{59}{48} \text{arccosec}^2 \left[ \sqrt{3} \right] \nonumber \\ & + \frac{35}{144} \psi \left[ \frac{5}{2} \right] + \left(-\frac{59 \pi }{48}+\frac{323}{192 \sqrt{2}}+\frac{3167 \gamma }{288 \sqrt{2}}-\frac{115}{48 \sqrt{2}} \log [12] \right) \text{arccosec} \left[ \sqrt{3} \right] \end{align} \begin{align} a_{6F} = \frac{4427}{2160}-\frac{176189 \pi }{110592 \sqrt{3}}-\frac{155}{192} \log \left[ \frac{4}{3} \right] + \frac{115}{48 \sqrt{2}} \text{arccosec} \left[ \sqrt{3} \right] \end{align} \begin{align} a_{8F} =& \frac{265}{864} \left( \text{Li}_2 \left[\frac{3}{4}\right] + \log [4] \log \left[ \frac{4}{3} \right] \right) - \frac{29 \pi ^2}{288} + \frac{11061169}{4147200} + \frac{4753 \pi }{13824 \sqrt{2}}+\frac{20910563 \pi }{26542080 \sqrt{3}}+\frac{199393 \gamma }{138240} \nonumber \\ & - \frac{16337}{23040} \log [4] - \frac{10477}{27648} \log \left[\frac{4}{3}\right] -\frac{804611 \pi}{884736 \sqrt{3}} \log \left[ \frac{64}{3} \right] - \frac{5}{16} \text{arccosec}^2 \left[ \sqrt{3}\right] -\frac{119}{288} \psi \left[\frac{5}{2}\right] \nonumber \\ & + \left(-\frac{19319 \gamma }{9216 \sqrt{2}}-\frac{84251}{55296 \sqrt{2}}+\frac{5 \pi }{16}+\frac{823}{3072 \sqrt{2}} \log [12] \right) \text{arccosec} \left[ \sqrt{3}\right] \end{align} \begin{align} a_{9F} = -\frac{2971}{27648}+\frac{804611 \pi }{884736 \sqrt{3}}-\frac{1}{96} \log \left[\frac{4}{3}\right] - \frac{823}{3072 \sqrt{2}} \text{arccosec} \left[ \sqrt{3} \right] \end{align} \begin{align} a_{11F} =& -\frac{5}{192} \left(\text{Li}_2 \left[ \frac{3}{4} \right] + \log [4] \log \left[ \frac{4}{3} \right] \right) -\frac{25 \pi ^2}{192}-\frac{4582831}{4423680}-\frac{1310311 \gamma }{6635520}-\frac{2135 \pi }{73728 \sqrt{2}}-\frac{13905571 \pi }{84934656 \sqrt{3}} \nonumber \\ & +\frac{4453 \sqrt{3} \pi }{65536}+\frac{532067}{1935360} \log [4] + \frac{312911}{2903040} \log \left[ \frac{4}{3} \right] + \frac{1674775 \pi}{14155776 \sqrt{3}} \log \left[ \frac{64}{3}\right] + \frac{97}{648} \psi \left[ \frac{5}{2} \right] \nonumber \\ & + \left(-\frac{391 \gamma }{49152 \sqrt{2}}+\frac{9421}{294912 \sqrt{2}}-\frac{59}{4096 \sqrt{2}} \log [12] \right) \text{arccosec} \left[\sqrt{3}\right] \end{align} \begin{align} a_{12F} = \frac{5174549}{11612160}-\frac{1674775 \pi }{14155776 \sqrt{3}}+\frac{1}{64} \log \left[ \frac{4}{3} \right] + \frac{59}{4096 \sqrt{2}} \text{arccosec} \left[ \sqrt{3}\right] \end{align} The deviation of $F_F$ as given above from its exact value is shown in Figure~\ref{FigLatticeFit}. \subsection{$m^2_\eta$} The GMO expressions for the eta mass can similarly be expressed as: \begin{align} m_\eta^2 =& m_{\eta 0}^2 + \bigg\{ \frac{64 \pi^2}{3} \xi _K^2 \lambda _K - 8 \pi ^2 \xi _{\pi }^2 \lambda_\pi + \left( \frac{352 \pi^2}{27} \xi_\pi \xi_K - \frac{512\pi^2}{27} \xi _K^2 - \frac{56\pi^2}{27} \xi_\pi^2 \right) \lambda_\eta \nonumber \\ & \qquad \qquad - \frac{64}{9} \xi _K^2 \hat L_{1m}^r - \frac{16}{9} \xi_\pi \xi_K \hat L_{2m}^r + \frac{8}{9} \xi_\pi^2 \hat L_{3m}^r \bigg\} \nonumber\\ & \qquad +\bigg\{ \hat K_{1m}^r \lambda_\pi^2 + \hat K_{2m}^r \lambda_\pi\lambda_K + \hat K_{3m}^r \lambda_\pi\lambda_\eta + \hat K_{4m}^r \lambda_K^2 + \hat K_{5m}^r \lambda_K\lambda_\eta + \hat K_{6m}^r \lambda_\eta^2 \nonumber\\ & \hspace*{7ex} + m_K^2 \xi_K^2 F_m \left[\frac{m_\pi^2}{m_K^2}\right] + \hat C_{1m} \lambda_\pi+\hat C_{2m}\lambda_K+\hat C_{3m}\lambda_\eta + \hat C_{4m} \bigg\} \end{align} Note that, in contrast with the kaon, there is an extra $m_K^2$ prefactor to $F_m$ aside from the $\xi_K^2$. Furthermore, each of the $\hat K_{im}^r$, $\hat C_{im}^r$ has six terms, proportional to $\xi_\pi^2$, $\xi_\pi\xi_K$ and $\xi_K^2$, each multiplied by either $m_\pi^2$ or $m_K^2$.
\begin{figure} \centering \begin{minipage}{0.45\textwidth} \includegraphics[width=0.98\textwidth]{LatticeFitMeta.eps} \end{minipage} ~~ \begin{minipage}{0.45\textwidth} \includegraphics[width=0.98\textwidth]{LatticeFitFeta.eps} \end{minipage} \caption{$F_m$ (left) and $F_f$ (right) plotted against $\rho$ using exact and truncated sunset integral values, as well as expansions of the latter up to $\mathcal{O}(\rho^3)$ and $\mathcal{O}(\rho^4)$.} \label{FigLatticeFitEta} \end{figure} Explicitly, for $m_\eta$, we have: \begin{align} & \hat{L}^r_{1m} = (4 \pi )^4 ( 3 L^r_{4}+2 L^r_{5}-6 L^r_{6}-6 L^r_{7}-6 L^r_{8} ) \nonumber \\ & \hat{L}^r_{2m} = (4 \pi )^4 ( 3 L^r_{4}-4 L^r_{5}-6 L^r_{6}+48 L^r_{7}+24 L^r_{8} ) \nonumber \\ & \hat{L}^r_{3m} = (4 \pi )^4 ( 3 L^r_{4}-L^r_{5}-6 L^r_{6}+48 L^r_{7}+18 L^r_{8} ) \end{align} \begin{align} & \hat{K}^r_{1m} = \left(\frac{5}{8} \xi _{\pi } \xi _K + \frac{65}{48} \xi _{\pi }^2 \right) m_\pi^2 - \left( \frac{3}{16} \xi_\pi \xi_K \right) m_K^2 \nonumber \\ & \hat{K}^r_{2m} = \left( \frac{3}{4} \xi_\pi \xi_K \right) m_\pi^2 + \left( \frac{55}{24} \xi_\pi \xi_K \right) m_K^2 \nonumber \\ & \hat{K}^r_{3m} = \left(\frac{7}{36} \xi_\pi^2 - \frac{43}{27} \xi_\pi \xi_K \right) m_\pi^2 + \left( \frac{64}{27} \xi_\pi \xi_K \right) m_K^2 \nonumber \\ & \hat{K}^r_{4m} = \left( \frac{5}{6} \xi_\pi \xi_K \right) m_\pi^2 + \left( \frac{103}{36} \xi_K^2 -\frac{133}{72} \xi_\pi \xi_K \right) m_K^2 \nonumber \\ & \hat{K}^r_{5m} = - \left( \frac{31}{108} \xi_\pi \xi_K \right) m_\pi^2 + \left(\frac{473}{216} \xi_\pi \xi_K - \frac{59}{18} \xi_K^2 \right) m_K^2 \nonumber \\ & \hat{K}^r_{6m} = \left(\frac{1367}{648} \xi_\pi \xi_K - \frac{911}{3888} \xi_\pi^2 \right) m_\pi^2 + \left(\frac{6185}{972} \xi_K^2 - \frac{2713}{432} \xi_\pi \xi_K \right) m_K^2 \end{align} \begin{align} \hat{C}^r_{1m} =& \left(\frac{61}{54} \xi_\pi \xi_K - \frac{931}{864} \xi_\pi^2 \right) m_\pi^2 - \frac{3}{2} \xi_\pi \xi_K m_K^2 \nonumber \\ & + (16 \pi^2 ) \bigg\{ \left(\frac{128}{3} L^r_{4} + \frac{256}{9} L^r_{5} - \frac{256}{3} L^r_{6} - \frac{256}{3} L^r_{7} - \frac{256}{3} L^r_{8} \right) \xi_\pi \xi_K m_K^2 \nonumber \\ & \qquad \qquad - \left( 64 L^r_{1}+16 L^r_{2}+16 L^r_{3}-\frac{232}{3} L^r_{4} + \frac{64}{9} L^r_{5} + \frac{272}{3} L^r_{6} - \frac{832}{3} L^r_{7} - \frac{320}{3} L^r_{8} \right) \xi_\pi \xi_K m_\pi^2 \nonumber \\ & \qquad \qquad + \bigg( 16 L^r_{1}+4 L^r_{2}+4 L^r_{3}-24 L^r_{4}+32 L^r_{6}-192 L^r_{7}-72 L^r_{8} \bigg) \xi_\pi^2 m_\pi^2 \bigg\} \end{align} \begin{align} \hat{C}^r_{2m} =& -\frac{3}{2} \xi_\pi \xi_K m_\pi^2 + \left(\frac{961}{216} \xi_\pi \xi_K -\frac{577}{54} \xi_K^2 \right) m_K^2 \nonumber \\ & + (16\pi^2) \bigg\{ \left(\frac{64}{3} L^r_{1} + \frac{16}{3} L^r_{2} + \frac{28}{3} L^r_{3} - 16 L^r_{4} - \frac{16}{3} L^r_{5} + \frac{32}{3} L^r_{6} + 128 L^r_{7} + \frac{224}{3} L^r_{8} \right) \xi_\pi \xi_K m_K^2 \nonumber \\ & \qquad \qquad -\left(\frac{256}{3} L^r_{1} + \frac{64}{3} L^r_{2} + \frac{112}{3} L^r_{3} - \frac{320}{3} L^r_{4} - \frac{352}{9} L^r_{5} + 128 L^r_{6} + \frac{256}{3} L^r_{7} + \frac{320}{3} L^r_{8} \right) \xi_K^2 m_K^2 \nonumber \\ & \qquad \qquad + \left(-\frac{8}{3} L^r_{4} + \frac{8}{9} L^r_{5} + \frac{16}{3} L^r_{6} - \frac{128}{3} L^r_{7} - 16 L^r_{8} \right) \xi_\pi \xi_K m_\pi^2 \bigg\} \end{align} \begin{align} \hat{C}^r_{3m} =& \left(\frac{371}{972} \xi_\pi \xi_K -\frac{2045}{23328} \xi_\pi^2 \right) m_\pi^2 + \left(\frac{41}{648} \xi_\pi \xi_K - \frac{1093}{1458} \xi_K^2 \right) m_K^2 \nonumber \\ & + \left(16 \pi ^2\right) \bigg\{ \left(\frac{128}{3} L^r_{1} + \frac{128}{3} L^r_{2} + \frac{64}{3} L^r_{3} - 32 L^r_{4} - \frac{1808}{27} L^r_{5} + \frac{832}{9} L^r_{6} +\frac{1216}{3} L^r_{7} + \frac{6752}{27} L^r_{8} \right) \xi_\pi \xi_K m_K^2 \nonumber \\ & \qquad \qquad -\left(\frac{512}{9} L^r_{1} + \frac{512}{9} L^r_{2} + \frac{256}{9} L^r_{3} - \frac{512}{9} L^r_{4} - \frac{512}{9} L^r_{5} + \frac{4096}{27} L^r_{6} + \frac{2048}{9} L^r_{7} + \frac{5120}{27} L^r_{8} \right) m_K^2 \xi _K^2 \nonumber \\ & \qquad \qquad - \left(\frac{32}{3} L^r_{1} + \frac{32}{3} L^r_{2} + \frac{16}{3} L^r_{3} - 8 L^r_{4} - \frac{832}{27} L^r_{5} + \frac{208}{9} L^r_{6} + \frac{640}{3} L^r_{7} + \frac{3232}{27} L^r_{8} \right) \xi_\pi \xi_K m_\pi^2 \nonumber \\ & \qquad \qquad + \left(\frac{8}{9} L^r_{1} + \frac{8}{9} L^r_{2} + \frac{4}{9} L^r_{3} - \frac{8}{9} L^r_{4} - \frac{128}{27} L^r_{5} + \frac{64}{27} L^r_{6} + \frac{320}{9} L^r_{7} + \frac{520}{27} L^r_{8} \right) \xi_\pi^2 m_\pi^2 \bigg\} \end{align} \begin{align} \hat{C}^r_{4m} &= \frac{2}{81} (16 \pi^2) \bigg\{ \bigg(384 L^r_{1}+816 L^r_{2}+228 L^r_{3}+128 L^r_{5}-1536 L^r_{7}-768 L^r_{8} \bigg) \xi _K^2 m_K^2 \nonumber \\ & \qquad \qquad \qquad - \bigg( 288 L^r_{1}+396 L^r_{2}+153 L^r_{3}+312 L^r_{5}-3744 L^r_{7}-1872 L^r_{8} \bigg) \xi_\pi \xi_K m_K^2 \nonumber \\ & \qquad \qquad \qquad + \bigg( 72 L^r_{1}+396 L^r_{2}+144 L^r_{3}+240 L^r_{5}-2880 L^r_{7}-1440 L^r_{8} \bigg) \xi_\pi \xi_K m_\pi^2 \nonumber \\ & \qquad \qquad \qquad + \bigg( -6 L^r_{1}-87 L^r_{2}-30 L^r_{3}-56 L^r_{5}+672 L^r_{7}+336 L^r_{8} \bigg) \xi_\pi^2 m_\pi^2 \bigg\} \nonumber \\ & + (16 \pi^2)^2 \bigg\{ \frac{128}{27} (3 L^r_{4}+5 L^r_{5}-6 L^r_{6}-6 L^r_{8}) \left( 3 L^r_{4}-L^r_{5}-6 L^r_{6}+48 L^r_{7}+18 L^r_{8} \right) \xi_\pi^2 m_\pi^2 \nonumber \\ & \qquad \qquad - \frac{256}{27} \bigg( 8 C^r_{12}+12 C^r_{13}+6 C^r_{14}+6 C^r_{15}+9 C^r_{16}+6 C^r_{17}+6 C^r_{18}-27 C^r_{19}-27 C^r_{20}-27 C^r_{21} \nonumber \\ & \qquad \qquad \qquad -18 C^r_{31}-18 C^r_{32}-18 C^r_{33} \bigg) \xi_K^2 m_K^2 \nonumber \\ & \qquad \qquad - \frac{1024}{27} \bigg( 6 L^r_{4}+L^r_{5}-12 L^r_{6}-6 L^r_{8} \bigg) \bigg( 3 L^r_{4}+2 L^r_{5}-6 L^r_{6}-6 L^r_{7}-6 L^r_{8} \bigg) \xi_K^2 m_K^2 \nonumber \\ & \qquad \qquad + \frac{16}{27} \bigg( 2 C^r_{12}-6 C^r_{13}+9 C^r_{14}-3 C^r_{15}+27 C^r_{16}+9 C^r_{17}+24 C^r_{18}-27 C^r_{19}+27 C^r_{20}-27 C^r_{21} \nonumber \\ & \qquad \qquad \qquad -18 C^r_{31}+54 C^r_{32} \bigg) \xi_\pi^2 m_\pi^2 \nonumber \\ & \qquad \qquad -\frac{32}{9} \bigg( 4 C^r_{12}-6 C^r_{13}+10 C^r_{14}-3 C^r_{15}+24 C^r_{16}+10 C^r_{17}+24 C^r_{18}-54 C^r_{19}-18 C^r_{20}-36 C^r_{31} \nonumber \\ & \qquad \qquad \qquad + 6 C^r_{32}-48 C^r_{33} \bigg) \xi _\pi \xi_K m_\pi^2 \nonumber \\ & \qquad \qquad + \frac{64}{9} \bigg( 8 C^r_{12}+10 C^r_{14}+15 C^r_{16}+10 C^r_{17}+18 C^r_{18}-54 C^r_{19}-27 C^r_{20}+27 C^r_{21}-36 C^r_{31} \nonumber \\ & \qquad \qquad \qquad -12 C^r_{32}-48 C^r_{33} \bigg) \xi_\pi \xi_K m_K^2 \nonumber \\ & \qquad \qquad - \frac{128}{9} \bigg( 36 (L^r_{4})^2+15 L^r_{4} L^r_{5}-144 L^r_{4} L^r_{6}+144 L^r_{4} L^r_{7}+42 L^r_{4} L^r_{8}+12 (L^r_{5})^2-30 L^r_{5} L^r_{6} -48 L^r_{5} L^r_{7} \nonumber \\ & \qquad \qquad \qquad -32 L^r_{5} L^r_{8}+144 (L^r_{6})^2-288 L^r_{6} L^r_{7}-84 L^r_{6} L^r_{8}-96 L^r_{7} L^r_{8}-48 (L^r_{8})^2 \bigg) \xi_\pi \xi_K m_K^2 \nonumber \\ & \qquad \qquad -\frac{128}{9} \bigg(3 L^r_{4} L^r_{5}+6 L^r_{4} L^r_{8}-10 (L^r_{5})^2-6 L^r_{5} L^r_{6}+144 L^r_{5} L^r_{7}+76 L^r_{5} L^r_{8}-12 L^r_{6} L^r_{8}-96 L^r_{7} L^r_{8} \nonumber \\ & \qquad \qquad
\qquad -48 (L^r_{8})^2 \bigg) \xi_\pi \xi_K m_\pi^2 \bigg\} \end{align} The $F_m$ can be subdivided as: \begin{align} F_m [ \rho ] =& a_{1m} + \bigg( a_{2m} + a_{3m} \log[\rho] + a_{4m} \log^2[\rho] \bigg) \rho + \bigg( a_{5m} + a_{6m} \log[\rho] + a_{7m} \log^2[\rho] \bigg) \rho^2 \nonumber \\ & \quad + \bigg( a_{8m} + a_{9m} \log[\rho] + a_{10m} \log^2[\rho] \bigg) \rho^3 + \bigg( a_{11m} + a_{12m} \log[\rho] + a_{13m} \log^2[\rho] \bigg) \rho^4 + \mathcal{O} \left( \rho^5 \right) \end{align} Note that we omit the factor of $1/(16\pi^2)^2$ in this definition, in contrast to Eq.(\ref{Eq:FI}). In Figure~\ref{FigLatticeFitEta}, we see that the $\mathcal{O}(\rho^4)$ expansion of $F_m$ agrees well with the exact $F_m$ within our desired range of $\rho$. \begin{align} a_{1m} &= \frac{1165}{864} \left( \frac{\pi^2}{3} + \log ^2 \left[ 2 \sqrt{3}-3\right] - \log \left[ \frac{4}{3} \right] \log \left[ 3+2\sqrt{3} \right] - 2 \text{Li}_2 \left[ \frac{\sqrt{3}}{2}\right] + 2 \text{Li}_2 \left[ 2\sqrt{3}-3 \right] \right) \nonumber \\ & + \frac{875}{486}-\frac{1157}{384} \log^2\left[ \frac{4}{3} \right] - \frac{19}{24} \log \left[ \frac{4}{3} \right] + \frac{1}{8} \csc ^{-1}\left[ \sqrt{3} \right]^2 + \frac{23}{2\sqrt{2}} \csc ^{-1}\left[ \sqrt{3} \right] \end{align} \begin{align} a_{2m} &= -\frac{9859}{3456} \left( \frac{\pi^2}{3} + \log ^2 \left[ 2 \sqrt{3}-3 \right] - \log \left[ \frac{4}{3}\right] \log \left[ 3+2\sqrt{3} \right] -2 \text{Li}_2 \left[ \frac{\sqrt{3}}{2} \right] +2 \text{Li}_2 \left[ -3+2 \sqrt{3} \right] \right) \nonumber \\ & + \frac{18889}{22680} + \frac{16865}{4608} \log^2 \left[\frac{4}{3}\right] + \frac{683}{288} \log \left[ \frac{4}{3} \right] - \frac{75}{32} \csc^{-1} \left[ \sqrt{3} \right]^2 - \frac{517}{72 \sqrt{2}} \csc ^{-1} \left[ \sqrt{3} \right] \end{align} \begin{align} a_{3m} = \frac{41}{27}, \quad a_{4m} = \frac{3}{16}, \quad a_{6m} = \frac{947}{3780}, \quad a_{7m} = -\frac{5}{8} \end{align} \begin{align} a_{5m} &= \frac{7711}{4608} \left( \log ^2\left[ 2 \sqrt{3}-3\right] - \log \left[ \frac{4}{3} \right] \log \left[ 3 + 2\sqrt{3} \right] -2 \text{Li}_2\left[ \frac{\sqrt{3}}{2}\right] + 2 \text{Li}_2 \left[2 \sqrt{3}-3 \right] \right) +\frac{8735 \pi ^2}{13824} \nonumber \\ & -\frac{206795171}{57153600}-\frac{8629}{6144} \log^2 \left[\frac{4}{3}\right] - \frac{179}{1152} \log \left[\frac{4}{3}\right] + \frac{293}{128} \csc ^{-1} \left[ \sqrt{3} \right]^2 + \frac{1043}{288 \sqrt{2}} \csc ^{-1}\left[\sqrt{3}\right] \end{align} \begin{align} a_{8m} &= -\frac{1099}{6144} \left( \log^2\left[2 \sqrt{3}-3\right] -\log \left[ \frac{4}{3} \right] \log \left[ 3+2 \sqrt{3}\right] -2 \text{Li}_2\left[\frac{\sqrt{3}}{2}\right] +2 \text{Li}_2 \left[ 2 \sqrt{3}-3 \right] \right) -\frac{10465 \pi^2}{55296} \nonumber \\ & + \frac{27092374721}{31120135200} - \frac{437}{24576} \log^2 \left[ \frac{4}{3} \right] + \frac{\log [3]}{480} - \frac{13681}{69120} \log \left[\frac{4}{3}\right]-\frac{27}{512} \csc^{-1} \left[\sqrt{3}\right]^2 - \frac{323}{576 \sqrt{2}} \csc^{-1} \left[\sqrt{3}\right] \end{align} \begin{align} a_{9m} = \frac{1}{3} \log \left[\frac{4}{3}\right]-\frac{52837}{561330}, \quad a_{10m} = -\frac{11}{48}, \quad a_{12m} = -\frac{1}{8} \log \left[\frac{4}{3}\right] - \frac{4327283}{8981280}, \quad a_{13m} = \frac{1}{16} \end{align} \begin{align} a_{11m} &= \frac{181}{24576} \left( \log ^2\left[2 \sqrt{3}-3\right]-\log \left[\frac{4}{3}\right] \log \left[3+2 \sqrt{3}\right] -2 \text{Li}_2\left[ \frac{\sqrt{3}}{2} \right] + 2 \text{Li}_2\left[ 2
\sqrt{3}-3 \right] \right) +\frac{356603876663}{569053900800} \nonumber \\ & + \frac{3253\pi^2}{73728} + \frac{5963}{98304} \log^2\left[\frac{4}{3}\right] + \frac{177301}{967680} \log \left[\frac{4}{3}\right] -\frac{31 \log [3]}{1120} - \frac{27}{2048} \csc ^{-1} \left[ \sqrt{3} \right]^2 - \frac{67}{2048 \sqrt{2}} \csc^{-1} \left[\sqrt{3}\right] \end{align} \subsection{$F_\eta$} The expression for $F_\eta$ can be written as: \begin{align} \frac{F_\eta}{F} &= 1 + \left\{ \frac{8}{3} \xi_K \hat{L}^r_{1f} + \frac{4}{3} \xi_\pi \hat{L}^r_{2f} - \frac{3}{2} \xi_K \lambda_K \right\} \nonumber\\ & \qquad + \Bigg\{ \hat K_{1f}^r \lambda_\pi^2 + \hat K_{2f}^r \lambda_\pi\lambda_K + \hat K_{3f}^r \lambda_\pi\lambda_\eta + \hat K_{4f}^r \lambda_K^2 + \hat K_{5f}^r \lambda_K\lambda_\eta + \hat K_{6f}^r \lambda_\eta^2 \nonumber\\ & \hspace*{7ex} + m^2_K \xi_K^2 F_f\left[ \frac{m_\pi^2}{m_K^2} \right] + \hat C_{1f}\lambda_\pi+\hat C_{2f}\lambda_K+\hat C_{3f}\lambda_\eta + \hat C_{4f} \Bigg\} \end{align} where \begin{align} \hat{L}^r_{1f} = (4 \pi)^2 (3 L^r_{4} + 2 L^r_{5}), \quad \hat{L}^r_{2f} = (4 \pi)^2 (3 L^r_{4} - L^r_{5}) \end{align} \begin{align} & \hat{K}^r_{1f} = \frac{99}{128} \rho - \frac{141}{512} \rho^2 + \frac{3}{2048} \rho^3 + \frac{3}{8192} \rho^4 + \mathcal{O}(\rho^5), \quad \hat{K}^r_{3f} = 0 \nonumber \\ & \hat{K}^r_{2f} = \frac{93}{64} \rho - \frac{3}{256} \rho^2 - \frac{3}{1024} \rho^3 - \frac{3}{4096} \rho^4 + \mathcal{O}(\rho^5), \quad \hat{K}^r_{5f} = -\frac{119}{48} + \frac{1}{4} \rho \nonumber \\ & \hat{K}^r_{4f} = \frac{191}{96} + \frac{35}{128} \rho + \frac{3}{512} \rho^2 + \frac{3}{2048} \rho^3 + \frac{3}{8192} \rho^4 + \mathcal{O}(\rho^5), \quad \hat{K}^r_{6f} = \frac{71}{96} + \frac{1}{8} \rho - \frac{1}{32} \rho^2 \end{align} \begin{align} \hat{C}^r_{1f} =& \left(\frac{3}{16}-\frac{16}{3} (4 \pi)^2 (3 L^r_{4}+2 L^r_{5})\right) \xi_\pi \xi_K + \left(\frac{2}{3} (4 \pi )^2 (36 L^r_{1}+9 L^r_{2}+9 L^r_{3}-33 L^r_{4}+2 L^r_{5})+\frac{47}{64}\right) \xi_\pi^2 \end{align} \begin{align} \hat{C}^r_{2f} =& \left(2 (4 \pi)^2 (16 L^r_{1}+4 L^r_{2}+7 L^r_{3}-18 L^r_{4}-4 L^r_{5}) + \frac{17}{48}\right) \xi_K^2 + \left(\frac{3}{4}-\frac{2}{3} (4 \pi )^2 (15 L^r_{4}+13 L^r_{5})\right) \xi_\pi \xi_K \end{align} \begin{align} \hat{C}^r_{3f} =& \left(\frac{32}{9} (4 \pi )^2 (6 L^r_{1}+6 L^r_{2}+3 L^r_{3}-3 L^r_{4}-2 L^r_{5})+\frac{16631}{3888}\right) \xi_K^2 \nonumber \\ & - \left(\frac{16}{9} (4 \pi )^2 (6 L^r_{1}+6 L^r_{2}+3 L^r_{3}-3 L^r_{4}-2 L^r_{5})+\frac{4363}{3888}\right)\xi_\pi \xi_K \nonumber \\ & + \left(\frac{2}{9} (4 \pi )^2 (6 L^r_{1}+6 L^r_{2}+3 L^r_{3}-3 L^r_{4}-2 L^r_{5})+\frac{3713}{15552}\right) \xi_\pi^2 \end{align} \begin{align} \hat{C}^r_{4f} &= \frac{1}{9} (4 \pi )^2 \bigg\{ 8 ( 2 L^r_{1}+2 L^r_{2}+L^r_{3}) \xi_\pi \xi_K - (32 L^r_{1}+68 L^r_{2}+19 L^r_{3}) \xi_K^2 - (2 L^r_{1}+29 L^r_{2}+10 L^r_{3}) \xi_\pi^2 \bigg\} \nonumber \\ & + \frac{8}{9} (4\pi)^4 \bigg\{ 4 \bigg( 6 C^r_{14}+6 C^r_{15}+9 C^r_{16}+6 C^r_{17}+6 C^r_{18}+(3 L^r_{4}+2 L^r_{5}) (21 L^r_{4}+4 L^r_{5}-24 L^r_{6}-12 L^r_{8}) \bigg) \xi_K^2 \nonumber \\ & - \bigg( 24 C^r_{14} - 6 C^r_{15} + 36 C^r_{16} + 24 C^r_{17} + 48 C^r_{18} - 252 (L^r_{4})^2 - 108 L^r_{4} L^r_{5} + 288 L^r_{4} L^r_{6} - 56 (L^r_{5})^2 + 48 L^r_{5} L^r_{6} \bigg) \xi_\pi \xi_K \nonumber \\ & + \bigg( 9 C^r_{14}-3 C^r_{15}+27 C^r_{16}+9 C^r_{17}+24 C^r_{18}+(3 L^r_{4}-L^r_{5}) (21 L^r_{4}+25 L^r_{5}-24 L^r_{6}-24 L^r_{8} ) \bigg) \xi_\pi^2 \bigg\} \end{align} Due to the numerically large prefactors of the masses in the expression for $d^\eta_{\pi K K}$, the errors that arise from the use of the truncated sunset expressions get magnified significantly, resulting in a poorly converging expression if these approximate results for the sunsets are used. This can be seen in Figure~\ref{FigLatticeFitEta}, where the divergence between the truncated and exact values is significant even for small values of $\rho$. Therefore, in the case of $F_f$, we present an expansion in $\rho$ obtained from the sunset integral series evaluated to high order (which therefore converges rapidly to the exact result), but whose coefficients are numerical. \begin{align} F_f [ \rho ] = 9.03816 & + \bigg( -7.82805 + 1.51852 \log (\rho) + 0.1875 \log ^2(\rho) \bigg) \rho \nonumber \\ & + \bigg( 2.69955 + 0.250529 \log (\rho) - 0.625 \log ^2(\rho) \bigg) \rho^2 \nonumber \\ & + \bigg( -1.08218 + 0.00176579 \log (\rho) - 0.229167 \log^2(\rho) \bigg) \rho^3 \nonumber \\ & + \bigg( 0.722228 - 0.306794 \log (\rho) + 0.0625 \log ^2(\rho) \bigg) \rho^4 + \mathcal{O}(\rho^5) \end{align} \section{Summary and Conclusion} $SU(3)$ ChPT is the effective theory of the strong interactions at low energies, and describes the pseudo-scalar octet degrees of freedom and their interactions. Of the many properties associated with this sector, the masses and decay constants are amongst the most fundamental. The predictions for these from the effective theory and from the lattice constitute some of the most important tests of this part of the standard model, and of the standard picture of spontaneous symmetry breaking of the axial-vector symmetries associated with the massless limit of the theory. In the limit of isospin invariance, there are three masses in the theory, namely $m_\pi$, $m_K$ and $m_\eta$. At two-loop order, the meson mass expressions involve the computation of the sunset diagrams, while the decay constants also require the calculation of the energy derivative of the sunsets, all evaluated on-shell. Sunset integrals have been investigated in great detail independently of ChPT, and much is known about them. In the most general mass configuration, it has been shown that any sunset can be expressed in terms of at most four master integrals (MI). If some of the masses are equal, then the number of MI is reduced. On the other hand, if any of the masses is set to zero, the sunset is known to suffer from infra-red problems. All the above features contribute to the complexity of analyzing the masses and decay constants in ChPT. Analytic treatments of the pion mass and decay constant have been performed in \cite{Ananthanarayan:2017yhz, Kaiser:2007kf}, where, due to strangeness conservation, the only configuration not corresponding to a pseudo-threshold is a sunset with a kaon pair and an $\eta$ in the propagators. Since the pion mass is the smallest parameter in the theory, it is possible in this case to provide an expansion in this parameter to get the corresponding analytic expression. On the other hand, for the eta a similar configuration appears, except with the pion mass in the propagator and the eta mass in the external momentum; here it is not possible to expand in the small parameter without encountering IR divergences. In the case of the kaon, the sole configuration not of the pseudo-threshold type is one in which all three particles are present in the propagators. For the quantities of interest, there are then two mass ratios present, and one may wish to provide a double series representation in these mass ratios.
In this work we have carried out precisely this exercise, by introducing Mellin-Barnes (MB) representations for the sunset diagrams at hand (see \cite{Ananthanarayan:2018} for details). Whereas for problems with a single MB parameter a simple approach exists, which allows one to carry out the evaluation and summation of residues of poles, closing the contour in the complex plane to the left or to the right, and then using simple ratio tests to determine the regions of convergence in the single parameter, a more sophisticated analysis is required when two or (especially) more MB parameters appear. The case at hand is a concrete realization of this scenario with two parameters. Our work follows several steps, which we summarize now. \begin{enumerate} \item We decompose the vector, tensor and derivative sunset integrals appearing in the expressions for the $m_P$ and $F_P$ by applying integration by parts to express them in terms of the MI. \item The resulting MI are of various mass configurations, and appear with up to three distinct mass scales. Each of these MI is then evaluated by using MB representations. The solutions of the one and two mass scale MI appearing in this analysis can all be written in closed form. The solutions of the three mass scale master integrals, however, are expressed as linear combinations of single and double infinite series. The full results are given in Appendix~\ref{Sec:SunsetResults}; see also \cite{Ananthanarayan:2017qmx} for an equivalent rewriting of these results in terms of Kamp\'e de F\'eriet series. We show in Appendix \ref{Sec:PionSunsets} how to get analytic results valid for the pion case. \item We substitute the sunset integral results into the expressions for the $m_P$ and $F_P$. \item The GMO relation is then applied to these expressions. As the GMO is a tree-level relation, and we wish to express the $m_P$ and $F_P$ in terms of physical meson masses, this involves calculating and including contributions from the lower-order $\mathcal{O}(p^4)$ terms to the higher-order $\mathcal{O}(p^6)$ ones. The motivation for applying the GMO relation stems partly from the desire to provide simple expressions that can be compared against lattice simulations, in which the eta mass is generally calculated using the GMO relation and is not an independent parameter. \item We isolate the contributions to the $m_P$ and $F_P$ from different classes of terms, e.g. linear chiral log terms, bilinear chiral logs, terms involving the $\mathcal{O}(p^4)$ LECs, etc., to determine their relative weight in the final expressions of the masses and decay constants. \item The $m_P$ and $F_P$ expressions for $P=K,\eta$ without application of the GMO relation, but separated into terms of different classes, are given in Appendix~\ref{Sec:NonGMOExpr}. \item A set of results is given for the three mass scale sunsets that are truncations of the exact results, but which are numerically close to the latter for the lattice input sets of \cite{Durr:2010hr}. The approximate results for the sunsets appearing in the kaon and eta expressions are given in Section~\ref{Sec:NumApproxSunsets}, and those for the pion are given in Appendix~\ref{Sec:PionSunsets}. The numerical justification for some of these approximations is presented in Section~\ref{Sec:NumAnalysis}. \item Also presented as an ancillary tool to this paper is a \texttt{Mathematica}-based code that allows one to obtain a truncated expression for the three mass scale sunset master integrals when the level of precision and the values of the input meson masses are provided.
This allows lattice practitioners, amongst others, to obtain analytic approximations for the sunsets for any given set of lattice inputs. These can then be used to construct relatively compact analytic expressions for easy comparison with lattice or experimental data. \item A numerical study is done in Section~\ref{Sec:NumAnalysis} for $m_K$, $F_K$, $m_\eta$ and $F_\eta$ to provide a breakdown of the relative numerical contributions of their different constituents to the NNLO part. This shows that the sunset integral contribution is significant. \item In Section~\ref{Sec:NumAnalysis}, we also numerically justify the use of our GMO-simplified expressions by showing that the error on the various components constituting the NNLO contribution due to the use of the GMO relation does not exceed 5\% in most cases, and that the final error on the NNLO contribution is effectively zero for the kaon mass, and very small for the kaon and eta decay constants. \item We provide in Section~\ref{Sec:LatticeFits} a set of expressions for $m_K$, $m_\eta$, $F_K$ and $F_\eta$ that can easily be fit with lattice data, and in which the term that depends on the approximation of the loop integrals may easily be substituted by other approximations (calculated, for example, using tools such as the aforementioned supplementary \texttt{Mathematica} files). \item We also calculate values of $m_K$, $F_K$, $m_\eta$ and $F_\eta$, and see that the comparison of our results with prior determinations shows good agreement when the BE14 LEC values are used. When the free fit LEC values are used, our results show some deviation from some of the literature values. \end{enumerate} In this paper, we adopt a phenomenology practitioner's perspective, and provide principally the final results that are of relevance in this respect. The results given in Appendix~\ref{Sec:SunsetResults}, for example, are for the $\mathcal{O}(\epsilon^0)$ term, and are only convergent for the values of mass ratios shown in Figure~\ref{Fig:RegOfConv}. In a forthcoming publication \cite{Ananthanarayan:2018}, we describe the calculation of the three mass scale sunset integrals in detail, and give the complete $\epsilon$-expansion for all possible values of the meson masses. An important field where analytic expressions may be of use, and one which we have emphasised strongly in this work, is lattice QCD. In \cite{Ananthanarayan:2017qmx}, the use of analytic expressions to determine values of ChPT parameters was demonstrated. Data from recent lattice simulations for $m_K$, $m_\eta$, $F_K$ and $F_\eta$ are not publicly available, but we hope that the expressions and tools provided in this work will encourage and assist lattice practitioners to perform such a cross-disciplinary study. \section*{Acknowledgements} SF thanks David Greynat for helpful discussions and correspondence. JB is supported in part by the Swedish Research Council grants contract numbers 2015-04089 and 2016-05996 and by the European Research Council under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 668679). BA is partly supported by the MSIL Chair of the Division of Physical and Mathematical Sciences, Indian Institute of Science.
\section{Introduction} It is well known that at finite Reynolds numbers particles migrate across the streamlines of the flow to certain equilibrium positions in a microchannel. This migration is attributed to inertial lift forces, which are currently successfully exploited in microfluidic systems to focus and separate particles of different sizes continuously, which is important for a wide range of applications~\cite{zhang2016,dicarlo19}. The rapid development of inertial microfluidics has raised considerable interest in the lift forces on particles in confined flows. The majority of previous work on lift forces has assumed that particles are spherical. In their pioneering experiments, Segr\`e and Silberberg found that small spheres focus to a narrow annulus at a radial position of about 0.6 of a pipe radius\cite{Segre:Silb62a}. Later, several theoretical\cite{Vas:Cox77,Asmolov99,hood2015,asmolov2018} and numerical\cite{dicarlo2009prl} studies proposed useful scalings and approximate expressions for the lift force in a channel flow, which are frequently invoked. The assumption that particles are spherical, however, often becomes unrealistic. Non-sphericity can strongly modify the lift forces, so the shape of particles becomes a very important consideration~\cite{behdani2018}. The body of theoretical and experimental work investigating lift forces on non-spherical particles is much smaller than that for spheres, although there is a growing literature in this area. \citet{hur2011} and \citet{masaeli2012} appear to have been the first to study experimentally the inertial focusing of non-spherical particles. These authors addressed the case of particles (spheres and rods of different aspect ratios) of equal volume, and demonstrated the possibility of their separation in a planar channel at moderate Reynolds numbers, $\Re\leq 100$. \citet{roth2018} recently reported the separation of spheres, ellipsoids and peanut-shaped particles in a spiral microfluidic device, where the inertial lift force is balanced by the Dean force that can be generated in curved channels~\cite{dicarlo2007}. These papers concluded that a key parameter defining the equilibrium positions of particles is their rotational diameter. The authors, however, could relate their results neither to the variation of the lift force across the channel nor to its dependence on particle shape, since these are inaccessible in experiment. The theoretical analysis of the lift on non-spherical particles is beset with difficulties, since such particles vary their orientation as they rotate in a shear flow, which, in turn, can induce unsteady flow disturbances leading to a time-dependent lift~\cite{su2018}. There have been some attempts to provide a theory of such a motion by employing spheroids as the simplest model of non-spherical particles. It is known that at vanishing particle Reynolds numbers, $\mathrm{Re_p}$, non-inertial spheroids in a shear flow exhibit a periodic kayaking motion along one of the Jeffery orbits~\cite{jeffrey1922}. However, the orientation of oblate spheroids at finite $\mathrm{Re_p}$ eventually tends to a stable state due to the inertia of the fluid and the particle~\cite{saffman1956}. Computer simulations might shed some light on these phenomena, and, indeed, computational inertial microfluidics is a growing field that currently attracts much research effort~\cite{bazaz2020computational}.
There are a number of directly relevant simulations using the lattice Boltzmann method (LBM), which is well suited for parallel processing and allows one to track the particle-fluid interface efficiently~\cite{LaddVerberg2001}. A large fraction of these deal with the rotational properties of spheroids in shear flows~\cite{qi2003,janoschek11,rosen2014,huang2017}. At moderate $\mathrm{Re_p}$, oblate spheroids exhibit a log-rolling motion about their minor axis oriented along the vorticity direction, while prolate particles tumble, but when $\mathrm{Re_p}> 200$, in some situations a transition to other rotational regimes may occur~\cite{qi2003}. Its threshold depends on the particle aspect ratio, which can be used for their separation~\cite{li2017shape}. However, neither of these papers addresses the issue of inertial migration. This was taken up only recently by \citet{lashgari2017}, who carried out simulations of the stable equilibrium positions and orientations of oblate spheroids in rectangular channels. The lift force on cylindrical particles in rectangular ducts has been calculated by \citet{su2018}. These authors found that particles execute a periodic tumbling motion, so that the lift force is unsteady; its average dependence on the particle position, however, is similar to that for a sphere. Finally, we should mention that \citet{huang2019} used dissipative particle dynamics simulations to find the equilibrium positions of prolate and oblate spheroids in a plane Poiseuille flow. Nevertheless, in spite of its importance for the separation of particles, the connection between the shape of non-spherical particles and the emerging lift forces remains poorly understood. In this paper we present results of an LBM study of the inertial migration of oblate spheroids in a plane channel, where a steady laminar flow of moderate $\Re$ is generated by a pressure gradient. We measure the lift force acting on spheroids in the stable log-rolling regime and find that the lift coefficient depends only on their equatorial radius $a$. To interpret this result we develop a scaling theory and derive an expression for the lift force. Our scaling expression allows one to easily predict the equilibrium positions of oblate spheroids in microfluidic channels. Our paper is arranged as follows. In Sec.~\ref{sec:setup} we define our system and briefly recall some expressions for the lift force acting on a spherical particle. Sec.~\ref{sec:simulation} describes the details and parameters of the simulations. Simulation results are discussed in Sec.~\ref{sec:results}. We then present scaling arguments leading to an expression for the lift force. We conclude in Sec.~\ref{sec:conclusion}. \section{Model} \label{sec:setup} \begin{figure} \centering \includegraphics[width=0.9\columnwidth]{Nizkaya_Figure1.eps} \caption{An oblate spheroid in a pressure-driven channel flow, oriented in the stable log-rolling state.} \label{fig:sketch} \end{figure} We consider an oblate spheroid with an equatorial radius $a$ and a polar radius $b<a$ in a pressure-driven flow between two parallel walls separated by a distance $H$ (see Fig.~\ref{fig:sketch}). Its location in the channel is defined by the coordinates of its center $\mathbf{x}=(x,y,z)$ and by a unit vector directed along the symmetry axis, $\mathbf{n}=(n_x,n_y,n_z)$ (referred to below as the orientation vector). At the channel walls and the particle surface we apply no-slip boundary conditions.
The velocity profile in the channel in the absence of the particle is parabolic, \begin{equation} U(z)=4U_mz(1-z/H)/H, \end{equation} where $U_m=|\nabla p|H^2/(8\mu)$ is the fluid velocity at the channel center, $\nabla p$ is the pressure gradient and $\mu$ is the dynamic viscosity. The (finite) channel Reynolds number is defined as $\Re=\rho U_mH/\mu$, where $\rho$ is the fluid density. The inertial lift force drives particles across the flow streamlines. For spherical particles it can be written as~\cite{Asmolov99,asmolov2018} \begin{equation} F_l(z)=\rho a^4 G_m^2 c_l, \label{lift} \end{equation} where $G_m=4U_m/H$ is the shear rate at the wall and $c_l$ is the lift coefficient, given by \begin{equation} c_l=c_{l0}+V_s c_{l1}+V^2_s c_{l2}, \label{cli} \end{equation} where $c_{li}$, $i=0,1,2$, are the lift coefficients that depend on the dimensionless particle position $z/H$, its size $a/H$, and $\Re$. The dimensionless slip velocity is defined by \begin{equation} V_s = \dfrac{V^x_p-U(z)}{U_m}, \label{Vslip} \end{equation} where $V^x_p$ is the $x$-component of the particle velocity and $U$ is the undisturbed fluid velocity at the particle center $z$. Note that it is normally considered that the slip velocity is induced by external forces only and, consequently, does not impact the hydrodynamic lift of neutrally buoyant particles. However, it has been recently shown that for neutrally buoyant particles $V_s$ is negligibly small only in the central portion of the channel, but not near the wall, where it becomes finite~\cite{asmolov2018}. Eq.~(\ref{lift}) is widely invoked to estimate the migration velocity of neutrally buoyant (of $\rho_p = \rho$) spherical particles~\cite{dicarlo2007}. When the lift force is balanced by the Dean~\cite{dicarlo2007} or external~\cite{zhang2014real,dutz2017fractionation} force $F_{ex}$ (in the case of non-neutrally buoyant particles, $\rho_p \neq \rho$), Eq.~(\ref{lift}) can be applied to find the equilibrium positions, $z_{eq}$, using the force balance \begin{equation}\label{eq:balance} F_l(z_{eq})+F_{ex}=0. \end{equation} One normally assumes that $F_{ex} = V f_{ex}$, where $V=\frac{4}{3}\pi a^{3}$ is the volume of a sphere and $f_{ex}$ is a force per unit volume. For instance, under the influence of gravity $f_{ex}=-\left( \rho _{p}-\rho \right) g$. In order to employ a similar approach to the shape-based separation of spheroids (of $V=\frac{4}{3}\pi a^{2}b$) it is necessary to know how the lift force scales with the particle radii $a$ and $b$, and with the aspect ratio $b/a$. It is of considerable interest to obtain a similar scaling equation for spheroids. However, as described in the Introduction, their instantaneous orientation and rotation are often functions of time, which should lead to a time-dependent lift. Nevertheless, for neutrally buoyant oblate spheroids of finite $\mathrm{Re_p}=\rho G_ma^2/\mu$, the symmetry axis eventually becomes parallel to the vorticity direction, $\mathbf n_{eq}=(0,1,0)$ (the log-rolling motion). Consequently, to predict their long-term migration we have to find the lift force for this steady configuration. Once it is known, the equilibrium positions of oblate spheroids (including non-neutrally buoyant ones) can be found by balancing the lift and external forces.
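As a simple illustration of how the force balance, Eq.~(\ref{eq:balance}), is used, the following Python sketch locates $z_{eq}$ for a sphere with a standard root-finding routine. The profile \texttt{c\_l} below is a made-up placeholder that only mimics the qualitative behavior of the real coefficient (positive near the wall and changing sign at the equilibrium point); in actual calculations it should be replaced by fitting expressions such as those proposed in Appendix~\ref{ap:fit}.
\begin{verbatim}
import math
from scipy.optimize import brentq

rho, H = 1.0, 80.0        # fluid density and channel height
Um = 0.05                 # fluid velocity at the channel center
Gm = 4.0 * Um / H         # shear rate at the wall
a = 8.0                   # sphere radius

def c_l(z):
    # placeholder profile: positive near the wall, zero at z/H = 0.2
    return 0.05 * (0.2 - z / H)

def F_l(z):
    # lift force on a sphere: F_l = rho a^4 Gm^2 c_l
    return rho * a**4 * Gm**2 * c_l(z)

f_ex = 0.0                          # neutrally buoyant particle
V = 4.0 / 3.0 * math.pi * a**3      # particle volume
z_eq = brentq(lambda z: F_l(z) + V * f_ex, 1.01 * a, 0.5 * H)
print(z_eq / H)                     # 0.2 for this placeholder profile
\end{verbatim}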
\section{Simulation setup}\label{sec:simulation} To simulate fluid flow in the channel we use a 3D, 19 velocity, single relaxation time implementation of the lattice Boltzmann method (LBM) with a Bhatnagar-Gross-Krook (BGK) collision operator~\citep{benzi_lattice_1992,kunert2010random}. Spheroids are discretized on the fluid lattice and implemented as moving no-slip boundaries following the pioneering work of Ladd~\cite{LaddVerberg2001}. Details of our implementation can be found in our previous publications~\cite{janoschek2010b,kunert2010random,janoschek2014,Dubov14,asmolov2018,nizkaya2020}. The size of the simulation domain is $(N_{x},N_{y},N_z)=(128,128,81)$, with corresponding channel height $H=80$ (all units are simulation units). No-slip boundaries are implemented at the top and bottom channel walls using mid-grid bounce-back boundary conditions and all remaining boundaries are periodic. The kinematic viscosity is $\nu=1/6$ and the fluid is initialized with a density $\rho=1$. A body force directed along $x$ with volumetric density $g=(0.5\dots 2) \times10^{-5}$ is applied both to the fluid and the particle, resulting in a Poiseuille flow with $\Re \simeq 11\dots44$. To simulate particle trajectories we use spheroids of equatorial radii $a=6$, $8$ and $12$ and several aspect ratios, $0.33\leq b/a\leq 1$. This range of particle aspect ratios is chosen to ensure the correct representation of the ellipsoidal shape on the grid. The particles start close to the expected equilibrium with zero initial velocity and in the log-rolling orientation. We assume that the equilibrium is reached when the difference between an average of the particle $z$-coordinate over 10 time steps and its average over the next 10 steps does not exceed $1.25\times10^{-6}H$. To measure the lift force as a function of $z$, we fix the particle $z$-coordinate but allow the particle to rotate and to move in all other directions. Particle motion starts with zero initial velocity and $\mathbf{n}=(0,1,0)$, which corresponds to the stable log-rolling state. Once a stationary velocity is reached, the vertical component of the force $F_l(z)$ is measured and averaged over $10^4$ simulation steps. Therefore, these measurements also correspond (if we neglect force fluctuations) to non-neutrally buoyant particles at equilibrium, Eq.~\eqref{eq:balance}. To check whether the results depend on the box size due to periodic boundary conditions in the $x$- and $y$-directions, we also simulate the migration of a large sphere of $a=b=12$ in a larger simulation box with $(N_{x},N_{y},N_z)=(256,256,81)$. The difference in equilibrium positions for the two box sizes is $100$ times smaller than the typical separation of equilibrium positions of different particles. \begin{figure} \vspace{-0.4cm} \centering \includegraphics[width=1.0\columnwidth]{Nizkaya_Figure2.eps} \vspace{-1.2cm} \caption{Velocity of a sphere of $a/H=0.1$ located at a distance $z$ from the wall and free to rotate and translate in the $x$-direction (circles). The dashed curve shows calculations from Eq.~\eqref{gold}, representing the semi-analytical solution for a wall-bounded shear flow. The dotted line indicates contact with the wall. } \label{fig:near_wall} \end{figure} To test the resolution of the method in the near-wall zone, we measure the velocity of a sphere of radius $a=8$ that rotates freely and translates in the $x$-direction, while its $z$-coordinate is fixed.
The $x$-component of the velocity $V_p^x$ is plotted in Fig.~\ref{fig:near_wall}, along with a semi-analytical solution for a wall-bounded shear flow~\cite{reschiglian2000} (see Eq.~\eqref{gold}). One can see that sufficient accuracy is attained for separations as small as one lattice node ($z/a>1.05$), similarly to previous results for a sphere approaching a rough wall~\cite{kunert2010random}. \section{Numerical results and discussion}\label{sec:results} \begin{figure} \vspace{-0.4cm} \centering \includegraphics[width=1.0\columnwidth]{Nizkaya_Figure3.eps} \vspace{-1.4cm} \caption{(a) $x$-components (colored curves) and $y$-components (black curves) of the orientation vector and (b) trajectories for spheroids with $a/H=0.15$ and $b/a=0.5$ (solid), $0.8$ (dashed) and $1$ (dotted). } \label{fig:traj} \end{figure} We first simulate trajectories and orientations of freely moving neutrally buoyant spheroids of different sizes and aspect ratios in a flow with $\Re=22$. Our results show that regardless of the initial position and orientation, oblate spheroids eventually reorient to the stable log-rolling motion around the axis of symmetry, with the angular velocity directed along the $y$ axis, $\mathbf{n}=(0,1,0)$, $\boldsymbol{\omega}=(0,\omega_y,0)$. We also observe that they focus at some distance $z_{eq}$ from the wall due to the inertial migration. The rates of reorientation and migration depend on the particle size and the aspect ratio. In Fig.~\ref{fig:traj} we compare the rotational behavior and trajectories of spheroids of several aspect ratios, $b/a=1$ (sphere), $0.8$ and $0.5$, but of the same equatorial radius $a/H=0.15$. For all simulations the initial position and orientation are fixed to $z_0/H=0.2$ and $\mathbf{n}_0=(0.66,0.75,0)$. It is well seen in Fig.~\ref{fig:traj}(a) that the $x$-component of the orientation vector $n_x$ experiences decaying oscillations around 0, while $n_y$ converges to 1. This indicates that at the beginning particles exhibit a kayaking motion, which then slowly evolves into log-rolling (see Fig.~\ref{fig:sketch}). Note that oscillations in the orientation of a spheroid with $b/a=0.8$ decrease much more slowly than those for a spheroid of $b/a=0.5$. The kayaking motion is responsible for the oscillations in trajectories shown in Fig.~\ref{fig:traj}(b). We see that for a spheroid of $b/a=0.8$ the migration to the equilibrium position is faster than for that of $b/a=0.5$, although the particle trajectory is much less affected by the kayaking motion. Another important observation is that the equilibrium positions for all spheroids are very close, strongly suggesting that they are determined by the equatorial radius $a$. This result is consistent with reported experimental data~\cite{hur2011, masaeli2012}. To validate this finding, below we compute $z_{eq}$ for spheroids of different sizes and aspect ratios. \begin{figure} \vspace{-0.4cm} \centering \includegraphics[width=1.0\columnwidth]{Nizkaya_Figure4.eps} \vspace{-1.2cm} \caption{Equilibrium positions of spheroids with fixed $a/H=0.075$, $0.1$, $0.15$ (open, colored, and black symbols, respectively) vs. their aspect ratio.} \label{fig:zeq_ba} \end{figure} Let us now fix several $a$ and simulate $z_{eq}$ as a function of $b/a$. The results for the lower equilibrium position are plotted in Fig.~\ref{fig:zeq_ba}. As expected, $z_{eq}$ strongly depends on $a$, but is practically independent of the aspect ratio of spheroids.
We stress that equilibrium positions are nearly independent of $\Re$ in the range from 11 to 44 used here. The same conclusion has been made earlier for spherical particles~\cite{asmolov2018}. Based on these observations, one can speculate that the lift coefficient at any $z$, not only at equilibrium, is controlled by the equatorial radius. If so, we can suppose that the lift force on oblate spheroids of equatorial radius $a$ is the product of Eq.~\eqref{lift} for a sphere of the same radius and a correction $f$ that depends on the aspect ratio, \begin{equation} F_l=\rho a^4 G_m^2c_l(z/H,a/H,\Re)f(b/a) . \label{factorization} \end{equation} This ansatz constitutes nothing more than an assumption, made so that the lift force depends on $z$ only through the lift coefficient $c_l$. \begin{figure}[tbp] \centering \vspace{-0.5cm} \includegraphics[width=1.0\columnwidth]{Nizkaya_Figure5.eps} \vspace{-1.1cm} \caption{Ratio of the lift forces for spheroids and spheres of the same $a$ computed at $\Re=22$ using $a/H=0.075$ (open symbols) and $0.15$ (black symbols). The aspect ratio $b/a$ of spheroids in simulations is set to be equal to $0.33$ (triangles), $0.5$ (squares), and $0.8$ (diamonds). Equilibrium positions of spheroids are marked by open ($a/H=0.075$) and black ($a/H=0.15$) crosses. Dotted lines show $f = b/a$.} \label{fig:ratios} \end{figure} To verify Eq.~\eqref{factorization}, we fix the $z$ position of a spheroid that exhibits a stable log-rolling motion but is free to translate in the two other directions, and measure the lift force. If the ansatz~\eqref{factorization} is correct, the ratio $F_l/\left(\rho a^4 G_m^2c_l\right)$ should be equal to $f(b/a)$. In Fig.~\ref{fig:ratios} we plot this ratio as a function of the particle position and conclude that for a given $b/a$ it is nearly constant. Moreover, we see that $f \simeq b/a$. Note that the results displayed in Fig.~\ref{fig:ratios} correspond to $\Re = 22$, but these conclusions have been verified for $\Re=11$ and 44 (not shown). Therefore, one can rewrite Eq.~\eqref{factorization} as \begin{equation} F_l=\rho a^3 b G_m^2 c_l(z/H,a/H,\Re). \label{scaling} \end{equation} \begin{figure} \centering \vspace{-0.4cm} \includegraphics[width=1.\columnwidth]{Nizkaya_Figure6.eps} \vspace{-1.2cm} \caption{Lift coefficients, Eq.~(\ref{scaling}), in the central portion of the channel at $\Re=22$ computed using $a/H=0.075$, $0.1$, and $0.15$ (open, colored, and black symbols, respectively). The aspect ratio $b/a$ is equal to $0.5$ (squares), $0.8$ (diamonds) and $1$ (circles). Dotted curves show calculations from Eq.~(\ref{cli}) using Eqs.~(\ref{cl_0})-(\ref{cl_2}) for $c_{li}$.} \label{fig:scaling} \end{figure} Eq.~\eqref{scaling} allows one to obtain $c_l$ from the simulation data simply by computing the ratio $F_l/(\rho a^3 b G_m^2)$, which is expected to depend on $a/H$, but not on the spheroid aspect ratio. We now calculate $c_l$ for a sphere and spheroids of two aspect ratios (0.5 and 0.8, as before) using several values of $a/H$. The simulation results for the central region of the channel, $0.2 \leq z/H \leq 0.5$, are given in Fig.~\ref{fig:scaling}, which fully confirms that at fixed $a/H$ the lift coefficient indeed does not depend on $b/a$. Using these simulation results, in Appendix~\ref{ap:fit} we propose fitting expressions for the lift coefficient.
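The rescaling step itself is straightforward to reproduce. As a minimal illustration (the force values below are made up for the example; real ones come from the LBM measurements), two data sets with the same $a$ but different $b$ collapse onto a single $c_l(z/H)$ curve:
\begin{verbatim}
import numpy as np

rho, H, Um = 1.0, 80.0, 0.05      # simulation units, illustrative
Gm = 4.0 * Um / H                 # wall shear rate
a = 8.0

# hypothetical lift forces measured at several fixed z; the sphere
# data are twice the b/a = 0.5 data, consistent with F_l ~ b at fixed a
z = np.array([20.0, 24.0, 28.0, 32.0])
F_half = np.array([2.1e-4, 1.4e-4, 0.6e-4, -0.2e-4])  # b = 4
F_sphere = 2.0 * F_half                               # b = 8

for b, F in [(4.0, F_half), (8.0, F_sphere)]:
    c_l = F / (rho * a**3 * b * Gm**2)
    print(b / a, np.round(c_l, 4))   # identical c_l(z/H) rows
\end{verbatim}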
Calculations from Eq.~(\ref{cli}) using $c_{li}$ given by Eqs.~(\ref{cl_0})-(\ref{cl_2}) are also included in Fig.~\ref{fig:scaling}, and we see that they fit the simulation data well. The overall conclusion from this plot is that our scaling Eq.~\eqref{scaling} adequately describes the lift force in the central region of the channel. \begin{figure} \vspace{-0.5cm} \centering \includegraphics[width=1.0\columnwidth]{Nizkaya_Figure7.eps} \vspace{-1.2cm} \caption{(a) Lift coefficient and (b) particle slip velocity near the wall for spheroids of $b/a=0.5$ (squares), $0.8$ (diamonds) and spheres (circles). Colored and black symbols correspond to $a/H=0.1$ and $0.15$. Dotted lines indicate contact with the wall.} \label{fig:scaling2} \end{figure} However, as seen in Fig.~\ref{fig:scaling2}(a), Eq.~\eqref{scaling} becomes inaccurate very close to the wall, namely, when $z-a\leq 0.2a$. At such small distances between spheroids and the wall the lift coefficient is no longer independent of the aspect ratio, and we see that $c_l$ increases with $b/a$. An explanation for the smaller $c_l$ for the spheroids compared to the sphere can be obtained if we invoke their hydrodynamic interactions with the wall, which depend on both $a$ and $b$~\cite{vinogradova1996}. This is illustrated and confirmed in Fig.~\ref{fig:scaling2}(b), where the data for the particle slip velocity near the wall are presented. It is well seen that close to the wall $V_s$ is finite and varies with both $a$ and $b/a$. More oblate particles have a smaller slip velocity and, therefore, experience a smaller lift force. Additional insight into the problem can be gleaned by computing the equilibrium positions for particles of an equal volume $V$, but of various aspect ratios. This situation is relevant to separation experiments~\cite{hur2011,masaeli2012}. We now fix $a^{2}b$, so that it is equivalent to that of a sphere of $a/H = 0.1$, and measure $z_{eq}$ at $\Re=22$ and different $b/a$. Simulation results for neutrally buoyant oblate spheroids are included in Fig.~\ref{fig:zeq_vol} (black symbols). It is seen that a decrease in $b/a$ leads to a larger $z_{eq}/H$, although the effect is rather small. Since the lift coefficient and $z_{eq}$ (where $c_l$ vanishes) are independent of $b$, as follows from Eq.~\eqref{scaling}, the weak variations of $z_{eq}$ with the aspect ratio are caused by the changes in $a$. Fig.~\ref{fig:zeq_vol} also includes the data obtained by Lashgari {\it et al.}~\cite{lashgari2017} by means of LBM simulations at $\Re=50$ (shown by stars). We see that their results agree well with our simulations, thus confirming that at moderate $\Re$ the equilibrium positions do not depend on its value. Finally, we note that calculations from Eq.~(\ref{scaling}) using Eq.~(\ref{cli}) for $c_l$ and Eqs.~(\ref{cl_0})-(\ref{cl_2}) for $c_{li}$ fit the simulation data very well (solid curve). \begin{figure} \centering \vspace{-0.4cm} \includegraphics[width=1.0\columnwidth]{Nizkaya_Figure8.eps} \vspace{-1.2cm} \caption{Equilibrium positions for spheroids of the same volume (equivalent to that of a sphere of $a/H=0.1$) vs. the aspect ratio obtained in simulations at $\Re=22$ (circles). Black circles indicate neutrally buoyant spheroids, open circles show results for non-neutrally buoyant spheroids subject to an external force ($c_{ex}=-0.045$). Stars show the simulation data by \citet{lashgari2017} obtained at $\Re=50$. Solid and dashed curves are calculations from Eqs.~(\ref{scaling}) and (\ref{zf_eq}).
In both cases Eq.~(\ref{cli}) and Eqs.~(\ref{cl_0})-(\ref{cl_2}) are used to calculate $c_l$ and $c_{li}$. } \label{fig:zeq_vol} \end{figure} These simulations are compared with analogous ones, made with the same parameters, but in which $\rho_p \neq \rho$ and an external force is incorporated. The equilibrium positions of such non-neutrally buoyant spheroids have been found from \begin{equation} c_{l}(z_{eq}/H,a/H,\Re )=-\dfrac{c_{ex}H}{a}, \label{zf_eq} \end{equation} obtained by using Eq.~\eqref{eq:balance} together with Eq.~(\ref{scaling}), where the dimensionless parameter $c_{ex}$ that characterizes the relative value of the external force is given by \begin{equation}\label{eq:cex} c_{ex}=\frac{4\pi f_{ex}}{3\rho G_{m}^{2}H}. \end{equation} Since Eq.~(\ref{zf_eq}) does not include $b$, at constant $f_{ex}$ the equilibrium positions for particles of the same $a$ coincide. Simulations made with $c_{ex}=-0.045$ are included in Fig.~\ref{fig:zeq_vol} (open symbols) and show that $z_{eq}/H$ is shifted towards the bottom wall compared to neutrally buoyant spheroids. Note that with our parameters Eq.~(\ref{zf_eq}) has only one root, so that the upper equilibrium position cannot be attained. Also included in Fig.~\ref{fig:zeq_vol} are the calculations from Eq.~(\ref{zf_eq}), where $c_l$ is obtained using Eq.~(\ref{cli}) with $c_{li}$ given by (\ref{cl_0})-(\ref{cl_2}) (dashed curve). We see that they are in good agreement with the simulation results. \section{Conclusion}\label{sec:conclusion} We have presented lattice Boltzmann simulation data on the inertial migration of oblate spheroids in a channel flow at moderate Reynolds numbers. Our results show that spheroids focus to equilibrium positions, which depend only on their equatorial radius $a$, but not on the polar radius $b$. We invoke this simulation result to derive a scaling expression for the lift force, Eq.~(\ref{scaling}). In this expression, the lift force is proportional to $a^3 b$, but the lift coefficient, $c_l$, is the same as for a sphere of radius $a$. We have also proposed fitting expressions allowing one to easily calculate $c_l$. Our scaling theory is shown to be valid throughout the channel, except in very narrow regions near the walls. Thus, it can be employed to predict, with high accuracy, the equilibrium positions of spheroids in the channel. These, in turn, could be used to develop inertial microfluidic methods for a shape-based separation. We recall that in our work we have limited ourselves to oblate spheroids of $b/a \geq 0.3$ and used $\Re \leq 44$ only, but one cannot exclude that at lower aspect ratios and/or larger Reynolds numbers the equilibrium positions would depend on both radii of particles. It would be of considerable interest to explore the validity of Eq.~(\ref{scaling}) using other flow and oblate spheroid parameters. Another fruitful direction could be an investigation of prolate particles to develop an analogue of Eq.~(\ref{scaling}). \begin{acknowledgments} This research was partly supported by the Russian Foundation for Basic Research (grant 18-01-00729), by the Ministry of Science and Higher Education of the Russian Federation and by the German Research Foundation (research unit FOR2688, project HA4382/8-1). \end{acknowledgments} \section*{Data availability statement} The data that support the findings of this study are available within the article.
\section{Introduction} In this paper, we present LifeJacket{}, a system for automatically verifying floating-point optimizations. Floating-point arithmetic is ubiquitous---modern hardware architectures natively support it and programming languages treat it as a canonical representation of real numbers---but writing correct floating-point programs is difficult. Optimizing these programs is even more difficult. Unfortunately, despite hardware support, floating-point computations are still expensive, so avoiding optimization is undesirable. Reasoning about floating-point optimizations and programs is difficult because of floating-point arithmetic's unintuitive semantics. Floating-point arithmetic is inherently imprecise and lossy, and programmers must account for rounding, signed zeroes, special values, and non-associativity~\cite{goldberg1991every}. Before the standardization, a wide range of incompatible floating-point hardware with varying support for range, precision, and rounding existed. These implementations were not only incompatible but also had undesirable properties such as numbers that were not equal to zero for comparisons but were treated as zeros for multiplication and division~\cite{severance1998interview}. The IEEE 754-1985 standard and its stricter IEEE 754-2008 successor were carefully designed to avoid many of these pitfalls and designed for (contrary to popular opinion, perhaps) non-expert users. Despite these advances, program correctness and reproducibility still rest on a fragile interplay between developers, programming languages, compilers, and hardware implementations. Compiler optimizations that alter the semantics of programs, even in subtle ways, can confuse users, make problems hard to debug, and cause cascading issues. IEEE 754-2008 acknowledges this by recommending that language standards and implementations provide means to generate \textit{reproducible} results for programs, independent from optimizations. In practice, many transformations that are valid for real numbers change the precision of floating-point expressions. As a result, compilers optimizing floating-point programs face the dilemma of choosing between speed and reproducibility. They often address this dilemma by dividing floating-point optimizations into two groups, precise and imprecise optimizations, where imprecise optimizations are optional (e.g. the \texttt{-ffast-math} flag in \texttt{clang}). While precise optimizations always produce the same result, imprecise ones produce reasonable results on common inputs (e.g. not for special values) but are arbitrarily bad in the general case. To implement precise optimizations, developers have to reason about all edge cases of floating-point arithmetic, making it challenging to avoid bugs. To illustrate the challenge of developing floating-point optimizations, \Cref{fig:bug} shows an example of an invalid transformation implemented in LLVM 3.7.1. We discuss the specification language in more detail in \Cref{sec:alive} but, at a high level, the transformation simplifies ${+0.0} - ({-0.0} - x)$ to $x$, an optimization that is correct in the realm of real numbers. Because floating-point numbers distinguish between negative and positive zero, however, the optimization is not valid if $x = {-0.0}$, because the original code returns $+0.0$ and the optimized code returns ${-0.0}$. While the zero's sign may be insignificant for many applications, the unexpected sign change may cause a ripple effect.
For example, the reciprocal of zero is defined as $1 / {+0.0} = {+\infty}$ and $1 / {-0.0} = {-\infty}$. Since reasoning manually about floating-point operations and optimizations is difficult, we argue that automated reasoning can help ensure correct optimizations. The goal of LifeJacket{} is to allow LLVM developers to automatically verify precise floating-point optimizations. Our work focuses on precise optimizations because they are both more amenable to verification and arguably harder to get right. LifeJacket{} builds on Alive~\cite{lopes2015provably}, a tool for verifying LLVM optimizations, extending it with floating-point support. \begin{figure} \small \begin{Verbatim}
Name: PR26746
%a = fsub -0.0, %x
%r = fsub 0.0, %a
  =>
%r = %x
\end{Verbatim} \caption{Incorrect transformation involving floating-point instructions in LLVM 3.7.1.} \label{fig:bug} \end{figure} Our contributions are as follows: \begin{itemize} \item We describe the background for verifying precise floating-point optimizations in LLVM and propose an approach using SMT solvers. \item We implemented the approach in LifeJacket{}, an open source fork of Alive that adds support for floating-point types, floating-point instructions, floating-point predicates, and certain fast-math flags. \item We validated the approach by verifying 43{} optimizations. LifeJacket{} finds 8{} incorrect optimizations, including \numberstringnum{3} previously unreported problems in LLVM 3.7.1. \end{itemize} In addition to the core contributions, our work also led to the discovery of two issues in Z3~\cite{de2008z3}, the SMT solver used by LifeJacket{}, related to floating-point support. \section{Related Work} \label{sec:related} Alive is a system that verifies LLVM peephole optimizations. LifeJacket{} is a fork of this project that extends it with support for floating-point arithmetic. We are not the only ones interested in verifying floating-point optimizations; close to the submission deadline, we found that one of the Alive authors had independently begun a reimplementation of Alive that seems to include support for floating-point arithmetic.\footnote{\url{https://github.com/rutgers-apl/alive-nj}} Our work intersects with the areas of compiler correctness, optimization correctness, and analyzing floating-point expressions. Research on compiler correctness has addressed floating-point arithmetic and floating-point optimizations. CompCert, a formally-verified compiler, supports IEEE 754-2008 floating-point types and implements two floating-point optimizations~\cite{boldo2015verified}. In CompCert, developers use Coq to prove optimizations correct, while LifeJacket{} proves optimization correctness automatically. Regarding optimization correctness, researchers have explored both the consequences of existing optimizations and techniques for generating new optimizations. Recent work has discussed the consequences of unexpected optimizations~\cite{wang2013towards}. In terms of new optimizations, STOKE~\cite{schkufza2014stochastic} is a stochastic optimizer that supports floating-point arithmetic and verifies instances of floating-point optimizations with random testing. Souper~\cite{souper} discovers new LLVM peephole optimizations using an SMT solver. Similarly, Optgen generates peephole optimizations and verifies them using an SMT solver~\cite{buchwald2015optgen}. All of these approaches are concerned with the correctness of new optimizations, while our work focuses on existing ones.
Vellvm, a framework for verifying LLVM optimizations and transformations using Coq, also operates on existing transformations but does not perform automatic reasoning. Researchers have explored debugging floating-point accuracy~\cite{chiang2014efficient} and improving the accuracy of floating-point expressions~\cite{panchekha2015automatically}. These efforts are more closely related to imprecise optimizations and provide techniques that could be used to analyze them. Z3's support for reasoning about floating-point arithmetic relies on a model construction procedure instead of naive bit-blasting~\cite{zeljic2014approximations}. \section{Background} \begin{table*} \small \centering \begin{tabular}{lp{9cm}V{6.5cm}} \toprule Flag & Description & Formula \\ \midrule \texttt{nnan} & Assume arguments and result are not \texttt{NaN}. Result undefined over \texttt{NaN}s. & \begin{Verbatim}
ite (or (isNaN a) (isNaN b) (isNaN r))
    (x (_ FP <ebits> <sbits>)) r
\end{Verbatim} \\ \texttt{ninf} & Assume arguments and result are not $\pm\infty$. Result undefined over $\pm\infty$. & \begin{Verbatim}
ite (or (isInf a) (isInf b) (isInf r))
    (x (_ FP <ebits> <sbits>)) r
\end{Verbatim} \\ \texttt{nsz} & Allow optimizations to treat the sign of a zero argument or result as insignificant. & \begin{Verbatim}
or (a = b)
   (and (isZero a) (isZero b))
\end{Verbatim} \\ \bottomrule \end{tabular} \caption{Fast-math flags that LifeJacket{} supports. The \texttt{isNaN} and \texttt{isInf} predicates are not part of the SMT-LIB standard but are supported in Z3's Python interface and used for illustration purposes here. The variable \texttt{x} is a fresh, unconstrained variable, \texttt{a} and \texttt{b} are the SMT formulas of the operands, and \texttt{r} is that of the result. The formula for \texttt{nsz} replaces the standard equality check \texttt{a = b}.} \label{tab:fast-math} \end{table*} \noindent Our work verifies LLVM floating-point optimizations. These optimizations take place on the LLVM assembly language, a human-readable, low-level language. The language serves as a common representation for optimizations, transformations, and analyses. Front ends (like \texttt{clang}) output the language, and, later, back ends use it to generate machine code for different architectures. Our focus is verifying peephole optimizations implemented in LLVM's InstCombine pass. This pass replaces small subtrees in the program tree without changing the control-flow graph. Alive already verifies some InstCombine optimizations, but it does not support optimizations involving floating-point arithmetic. Instead of building LifeJacket{} from scratch, we extend Alive with the machinery to verify floating-point optimizations. To give the necessary context for discussing our implementation in \Cref{sec:implementation}, we describe LLVM's floating-point types and instructions and give a brief overview of Alive. \subsection{Floating-point arithmetic in LLVM} \label{sec:fpllvm} In the following, we discuss LLVM's semantics of floating-point types and instructions. The information is largely based on the LLVM Language Reference Manual for LLVM 3.7.1~\cite{llvm-lang-ref} and the IEEE 754-2008 standard. For completeness, we note that the language reference does not explicitly state that LLVM floating-point arithmetic is based on IEEE 754. However, the language reference refers to the IEEE standard multiple times, and LLVM's floating-point software implementation \texttt{APFloat} is explicitly based on the standard.
\paragraph{Floating-point types} LLVM defines six different floating-point types with bit-widths ranging from 16 bit to 128 bit. Floating-point values are stored in the IEEE binary interchange format, which encodes them in three parts: the sign $s$, the exponent $e$ and the significand $t$. The value of a normal floating-point number is given by: $(-1)^s \times (1 + 2^{1-p} \times t) \times 2^{e - bias}$, where $p$ is the precision (the number of significand bits, including the implicit leading bit), $bias = 2^{w - 1} - 1$, and $w$ is the number of bits in the exponent. The range of the exponents for normal floating-point numbers is $[1, 2^w-2]$. Exponents outside of this range are used to encode special values: subnormal numbers, Not-a-Number values (\texttt{NaN}s), and infinities. Floating-point zeros are signed, meaning that $-0.0$ and $+0.0$ are distinct. While most operations ignore the sign of a zero, the sign has an observable effect in some situations: a division by zero (generally) returns $+\infty$ or $-\infty$ depending on the zero's sign, for example. As a consequence, $x = y$ does not imply $\frac{1}{x} = \frac{1}{y}$. If $x = 0$ and $y = -0$, $x = y$ is true, since floating point $0 = -0$. On the other hand, $\frac{1}{x} = \frac{1}{y}$ is false, since $\frac{1}{0} = \infty \neq -\infty = \frac{1}{-0}$. Infinities ($\pm\infty$) are used to represent an overflow or a division by zero. They are encoded by setting $t = 0$ and $e = 2^w - 1$. Subnormal numbers, on the other hand, are numbers with exponents below the minimum exponent; normal floating-point numbers have an implicit leading $1$ in the significand that prevents them from representing these numbers. The IEEE standard defines the value of subnormal numbers as: $(-1)^s \times (0 + 2^{1-p} \times t) \times 2^{e_{min}}$, where $e_{min} = 1 - bias$. \texttt{NaN}s are used to represent the result of an invalid operation (such as $\infty - \infty$) and are described by $e = 2^w - 1$ and a non-zero $t$. There are two types of \texttt{NaN}s: quiet \texttt{NaN}s (\texttt{qNaN}s) and signalling \texttt{NaN}s (\texttt{sNaN}s). The first bit in the significand determines the type of \texttt{NaN} ($1$ in the case of a \texttt{qNaN}) and the remaining bits can be used to encode debug information. Operations generally propagate \texttt{qNaN}s and quiet \texttt{sNaN}s: if one of the operands is a \texttt{qNaN}, the result is a \texttt{qNaN}; if an operand is an \texttt{sNaN}, it is quieted by setting the first bit to $1$. Floating-point exceptions occur in situations like division by zero or computation involving an \texttt{sNaN}. By default, floating-point exceptions do not alter control-flow but raise a status flag and return a default result (e.g. a \texttt{qNaN}). \paragraph{Floating-point instructions} In its assembly language, LLVM defines several instructions for binary floating-point operations (\texttt{fadd}, \texttt{fsub}, \texttt{fmul}, \texttt{fdiv},~\ldots), conversion instructions (\texttt{fptrunc}, \texttt{fpext}, \texttt{fptoui}, \texttt{uitofp},~\ldots), and allows floating-point arguments in other operations (e.g. \texttt{select}). We assert that floating-point instructions cannot generate poison values (values that cause undefined behavior for instructions that depend on them) or result in undefined behavior. The documentation is not entirely clear but our interpretation is that undefined behavior does not occur in the absence of \texttt{sNaN}s and that \texttt{sNaN}s are not fully supported. While IEEE 754-2008 defines different rounding modes, LLVM does not yet allow users to specify them.
As a consequence, the rounding performed by \texttt{fptrunc} (casting a floating-point value to a smaller floating-point type) is undefined for inexact results. \paragraph{Fast-math flags} Some programs either do not depend on the exact semantics of special floating-point values or do not expect special values (such as \texttt{NaN}) to occur. To specify these cases, LLVM binary operators can provide \emph{fast-math flags}, which allow LLVM to do additional optimizations with the knowledge that special values will not occur. \Cref{tab:fast-math} summarizes the fast-math flags that LifeJacket{} supports. There are two additional flags, \texttt{arcp} (allows replacing arguments of a division with the reciprocal) and \texttt{fast} (allows imprecise optimizations), that we do not support. \paragraph{Discussion} The properties of floating-point arithmetic discussed in this section hint at how difficult it is to manually reason about floating-point optimizations. The floating-point standard is complex, so compilers do not always follow it completely---as we mentioned earlier, LLVM does not currently support different rounding modes.\footnote{More details: \url{http://lists.llvm.org/pipermail/llvm-dev/2016-February/094869.html}.} Similarly, it does not yet support access to the floating-point environment, which makes reliable checks for floating-point exceptions in \texttt{clang} impossible, for example. This runs counter to the IEEE standard, which defines reproducibility as including ``invalid operation,'' ``division by zero,'' and ``overflow'' exceptions. \subsection{Verifying transformations with Alive} \label{sec:alive} Alive is a tool that verifies peephole optimizations on LLVM's intermediate representation; these optimizations are expressed (as input) in a domain-specific language. At a high level, verifying an optimization with Alive takes the following steps: \begin{enumerate} \item The user specifies a new or an existing LLVM optimization using the Alive language. \item Alive translates the optimization into a series of SMT queries that express the equivalence of the source and the target. \item Alive uses Z3, an SMT solver, to check whether any combination of values makes the source and target disagree. If the optimization is incorrect, Alive returns a counter-example that breaks the optimization. \end{enumerate} Alive specializes in peephole optimizations that are highly local and do not alter the control-flow graph of a program. This type of optimization is performed by the LLVM InstCombine pass in \texttt{lib/Transforms/InstCombine} and InstructionSimplify in \texttt{lib/Analysis}. Alive can also generate code for an optimizer pass that performs all of the verified optimizations. We do not discuss this feature further since LifeJacket{} does not support it for floating-point optimizations. In the following, we discuss the Alive language and the role of SMT solvers in proving optimization correctness. \paragraph{Specifying transformations with the Alive language} In the domain-specific Alive language, each transformation consists of a list of preconditions, a source template, and a target template. Alive verifies whether it is safe to replace the instructions in the source template with the instructions in the target given that the preconditions hold. \Cref{fig:bug} is an example of a transformation in the Alive language. This transformation has no preconditions, so it always applies.
The instructions above the ``\texttt{=>}'' delimiter form the source template, while those below form the target template. Preconditions are logical expressions enforced by the compiler at compile-time and Alive takes them for granted. The precondition \texttt{isNormal(\%x)}, for example, expresses the fact that an optimization only holds when \texttt{\%x} is a normal floating-point value. Alive interprets the instructions in the sources and targets as expression trees, so the order of instructions does not matter, only the dependencies. Verifying the equivalence of the source and the target is done on the roots of the trees. The arguments for instructions are either inputs (e.g. \texttt{\%x}), constant expressions (e.g. \texttt{C}), or immediate values (e.g. \texttt{0.0}). Inputs model registers, constant expressions correspond to computations that LLVM performs at compile-time, and immediate values are values known at verification time. Constant expressions consist of constant functions and compile-time constants. Inputs and constant expressions can be subjects for predicates in the precondition. In contrast to actual LLVM code, the Alive language does not require type information for instructions and inputs. Instead, it uses the types expected by instructions to restrict the possible types and bit-widths. Then, it issues an SMT query that encodes these constraints to infer all possible types and sizes of registers, constants, and values. This mirrors the fact that LLVM optimizations often apply to multiple bit-widths and makes specifying optimizations less repetitive. Alive instantiates the source and target templates with the possible type and size combinations and verifies each instance. \begin{figure} \small \centering \begin{minipage}{3.5cm} Incorrect: \begin{Verbatim}
%r = fadd %x, undef
  =>
%r = undef
\end{Verbatim} \end{minipage} % \begin{minipage}{3.5cm} Correct: \begin{Verbatim}
%r = fadd %x, undef
  =>
%r = NaN
\end{Verbatim} \end{minipage} % \caption{Example of a problematic optimization using \texttt{undef} on the left and a better version on the right. If \texttt{\%x} is \texttt{NaN} then \texttt{\%r} can only be \texttt{NaN}, so \texttt{\%r} cannot be \texttt{undef}.} \label{fig:undef} \end{figure} Undefined values (\texttt{undef}) in LLVM represent input values of arbitrary bit-patterns when used and may be of any type. For each \texttt{undef} value in the target template, Alive has to verify that any value can be produced and for each \texttt{undef} value in the source, Alive may assume any convenient value. \Cref{fig:undef} is a known incorrect optimization in LLVM that LifeJacket{} confirms and that illustrates this concept: the source template cannot produce all possible bit-patterns, so it cannot be replaced with \texttt{undef}.\footnote{Discussion on this optimization: \url{https://groups.google.com/d/topic/llvm-dev/iRb0gxroT9o/discussion}} \paragraph{Verifying transformations with SMT solvers} Alive translates the source and target templates into SMT formulas. For each possible combination of variable types in the templates, it creates SMT formulas for definedness constraints, poison-free constraints, and the execution values for the source and target. Alive checks the definedness and poison-free constraints of the source and target for consistency. These checks are not directly relevant to floating-point arithmetic, so we do not discuss them further. Instead, we deal more directly with the execution values of the source and target. An optimization is only correct if the source and the target always produce the same value.
To check this property, Alive asks an SMT solver to verify that $\texttt{preconditions} \wedge (\texttt{src\_formula} \ne \texttt{tgt\_formula})$ is unsatisfiable---that there is no assignment that can make the formula true. If there is, the optimization is \textit{incorrect}: there is an assignment for which the source value is different from the target value. When Alive encounters an incorrect optimization, it uses the output of the SMT solver to return a counterexample in the form of input and constant assignments that lead to different source and target values. Ultimately, Alive relies on Z3 to determine whether an optimization is correct (by answering the SMT queries). LifeJacket{} would have been impossible without Z3's floating-point support, which was added in version 4.4.0 by implementing the SMT-LIB standard for floating-point arithmetic~\cite{smtFPA2010} less than a year ago. \section{Implementation} \label{sec:implementation} Our implementation extends Alive in four major ways: it adds support for floating-point types, floating-point instructions, floating-point predicates, and fast-math flags. In the following, we describe our work in those areas, briefly comment on our experience with floating-point support in Z3, and conclude with a discussion of the limitations of the current version of LifeJacket{}. \paragraph{Floating-point types} LifeJacket{} implements support for \texttt{half}, \texttt{single}, and \texttt{double} floating-point types. Alive itself provides support for integer and pointer types of arbitrary bit-widths up to 64 bit. Following the philosophy of the original implementation, we do not require users to explicitly annotate floating-point types. Instead, we use a logical disjunction (in the SMT formula for type constraints) to limit floating-point types to bit-widths of 16, 32, or 64 bits. Then, we use Alive's existing mechanisms to determine all possible type combinations for each optimization (as discussed in \Cref{sec:alive}). Adding a new type required us to relax some assumptions, e.g. that the arguments of \texttt{select} are integers. Additionally, we modified the parser to support floating-point immediate values. \paragraph{Floating-point predicates and constant functions} LifeJacket{} adds precondition predicates and constant functions related to floating-point arithmetic. Recall that preconditions are logical formulas that describe facts that must be true in order to perform an optimization; they are fulfilled by LLVM and assumed by Alive. In the context of floating-point optimizations, preconditions may include predicates about the type of a floating-point number (e.g. \texttt{isNormal(\%x)} to make sure that \texttt{\%x} is a normal floating-point number) or checks to ensure that conversions are lossless. We discuss more predicates in the following paragraphs. Constant functions mirror computation performed by LLVM at compile-time and are evaluated by Alive symbolically at verification-time. For example, the constant function \texttt{fptosi(C)} (not to be confused with the instruction) converts a floating-point number to a signed integer, corresponding to a conversion LLVM does at compile time. Constant expressions (expressions that contain constant functions) can be assigned to registers in the target template, mirroring the common strategy of optimizing operations by partially evaluating them at compile-time. In contrast to Alive, LifeJacket{} supports precondition predicates that refer to constant expressions in target templates.
For example, some optimizations have restrictions about precise conversions, and we express those restrictions in the precondition. If the target converts a floating-point constant to an integer with \texttt{\%c = fptosi(C)}, then the precondition can ensure that the conversion is lossless by including \texttt{sitofp(\%c) == C} (which guarantees that converting the number back and forth results in the original number). If the precondition did not refer to \texttt{\%c} in the target and instead imposed \texttt{sitofp(fptosi(C)) == C}, then it would not restrict the bit-width of \texttt{\%c}, so \texttt{\%c} could be too narrow to represent the number. \paragraph{Floating-point instructions} Our implementation supports binary floating-point instructions (\texttt{fadd}, \texttt{fsub}, \texttt{fmul}, \texttt{fdiv}, and \texttt{frem}), conversions involving floating-point numbers (\texttt{fptrunc}, \texttt{fpext}, \texttt{fptoui}, \texttt{fptosi}, \texttt{uitofp}, \texttt{sitofp}), the \texttt{fabs} intrinsic, and floating-point comparisons (\texttt{fcmp}). Most of these instructions directly correspond to operations that the SMT-LIB standard for floating-point arithmetic supports, so translating them to SMT formulas is straightforward. Next, we discuss our support for \texttt{frem}, \texttt{fcmp}, conversions, and the equivalence check for floating-point optimizations. \begin{figure} \small \begin{Verbatim}
double fmod(double x, double y) {
  double result;
  result = remainder(fabs(x), (y = fabs(y)));
  if (signbit(result)) result += y;
  return copysign(result, x);
}
\end{Verbatim} \vspace{-0.2cm} \noindent\rule{\columnwidth}{0.4pt} \begin{Verbatim}
(= abs_y (abs y))
(= r (remainder (abs x) abs_y))
(= r' (ite (isNeg r) (+ RNE r abs_y) r))
(= fmod (ite (xor (isNeg x) (isNeg r'))
             (- r') r'))
\end{Verbatim} \caption{The \texttt{fmod} function implemented using IEEE \texttt{remainder} as suggested by the C standard and an informal representation of the implementation used by LifeJacket{}.} \label{fig:fmod} \end{figure} The \texttt{frem} instruction does not correspond to \texttt{remainder} as defined by IEEE 754 but rather to \texttt{fmod} in the C POSIX library, so translating it to an SMT formula involves multiple operations. Both \texttt{fmod} and \texttt{remainder} calculate $x - n \times y$, where $n$ is $\frac{x}{y}$ rounded to an integer, but \texttt{fmod} rounds toward zero whereas \texttt{remainder} rounds to the nearest value and ties to even. \Cref{fig:fmod} shows how the C standard defines \texttt{fmod} in terms of \texttt{remainder} for \texttt{double}s~\cite[\S F.10.7.1]{c11} and the corresponding SMT formula that LifeJacket{} implements. The formula uses a fixed rounding-mode because the rounding-mode of the environment does not affect \texttt{fmod}. The \texttt{fcmp} instruction compares two floating-point values. In addition to the two floating-point values, it expects a third operand, the \emph{condition code}. The condition code determines the type of comparison. There are two broad classes of comparison: \emph{ordered} comparisons can only be true if none of the inputs are \texttt{NaN} and \emph{unordered} comparisons are true if any of the inputs is \texttt{NaN}. LLVM supports an ordered version and an unordered version of the usual comparisons such as equality, inequality, greater-than, etc. Additionally, there are condition codes that just check whether both inputs are not \texttt{NaN} (\texttt{ord}) or any of the inputs are \texttt{NaN} (\texttt{uno}).
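To make the two classes concrete, the following sketch shows how the ordered and unordered variants of less-than can be expressed with Z3's Python API; it illustrates the semantics rather than LifeJacket{}'s literal code:
\begin{Verbatim}
from z3 import FP, Float32, fpIsNaN, fpLT, And, Or, Not

a, b = FP('a', Float32()), FP('b', Float32())
unordered = Or(fpIsNaN(a), fpIsNaN(b))  # true iff an input is NaN

olt = And(Not(unordered), fpLT(a, b))   # fcmp olt: ordered and a < b
ult = Or(unordered, fpLT(a, b))         # fcmp ult: unordered or a < b
ord_cc = Not(unordered)                 # fcmp ord
uno_cc = unordered                      # fcmp uno
\end{Verbatim}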
Optimizations involving comparisons often apply to multiple condition codes. To allow users to efficiently describe such optimizations, LifeJacket{} supports predicates in the precondition that describe the applicable set of condition codes. For example, there are predicates for constraining the set of condition codes to either ordered or unordered conditions. We also support predicates that express a relationship between multiple condition codes. This is useful, for example, to describe an optimization that performs a multiplication by negative one on both sides: to replace the comparison (\texttt{C1}) between \texttt{-x} and \texttt{C} with the comparison (\texttt{C2}) between \texttt{x} and \texttt{-C}, we use the \texttt{swap(C1, C2)} predicate. When no sensible conversion between floating-point values and integers is possible, LLVM defaults to returning \texttt{undef}. For conversions from floating-point to integer values (signed or unsigned), LifeJacket{} checks whether the (symbolic) floating-point value is \texttt{NaN}, $\pm\infty$, too small, or too large and returns \texttt{undef} if necessary. Conversions from integer to floating-point values similarly return \texttt{undef} for values that are too small or too large. Recall that LifeJacket{} must determine the unsatisfiability of $\texttt{preconditions} \wedge (\texttt{src\_formula} \ne \texttt{tgt\_formula})$ to verify optimizations. The SMT-LIB standard defines two equality operators for floating-point, one implementing bit-wise equality, and one implementing the IEEE equality operator. The latter operator treats signed zeros as equal and \texttt{NaN}s as different, so using it to verify optimizations would not work, since it would accept optimizations that produce different zeros and reject source-target pairs that both produce \texttt{NaN}. The bit-wise equality works because SMT-LIB uses a single \texttt{NaN} value (recall that there are multiple bit-patterns that correspond to \texttt{NaN}). While this is convenient, it also means that we cannot model different \texttt{NaN}s. We discuss the implications later. \paragraph{Fast-math flags} LifeJacket{} currently supports three of the five fast-math flags that LLVM implements: \texttt{nnan}, \texttt{ninf}, and \texttt{nsz}. LifeJacket{} handles the \texttt{nnan} and \texttt{ninf} flags in a similar way by modifying the SMT formula for the instruction on which the flag appears. As \Cref{tab:fast-math} shows, if one of the instruction's arguments or its result is a \texttt{NaN} or $\pm \infty$, respectively, the formula returns a fresh unconstrained variable that it treats as an \texttt{undef} value. This is a direct translation from the description in the language reference and works for root and non-root instructions. The \texttt{nsz} flag is different: instead of relaxing the requirements on the behavior for certain inputs and results, it states that the sign of a zero value can be ignored. This primarily affects how LifeJacket{} compares the source and target values: it relaxes the equivalence check in the SMT query so that the source and target values are considered equal if they are bit-wise equal or if both are zeros of either sign (shown in \Cref{tab:fast-math}). The flag itself has no effect on zero values at runtime, meaning that it does not affect the computation performed by instructions with the flag. Thus, we do not change the SMT formula for the instruction.
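Concretely, the relaxed \texttt{nsz} comparison from \Cref{tab:fast-math} could be built with Z3's Python API along the following lines (again a sketch rather than LifeJacket{}'s literal code):
\begin{Verbatim}
from z3 import FP, Float32, fpIsZero, And, Or

src = FP('src', Float32())
tgt = FP('tgt', Float32())

# equal if bit-wise equal or if both are zeros of either sign
nsz_equal = Or(src == tgt, And(fpIsZero(src), fpIsZero(tgt)))
\end{Verbatim}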
Since the \texttt{nsz} flag has no direct effect on how LLVM does matching, this flag \emph{also} does not change the significance of the sign of immediate zeros (e.g. \texttt{+0.0}) in the optimization templates. Instead, we mirror how LLVM determines whether an optimization applies. In LLVM, optimizations that match a certain sign of zero do not automatically apply to other zeros when the \texttt{nsz} flag is set. For example, an optimization that applies to \texttt{fadd x, -0.0} does \emph{not} automatically apply to \texttt{fadd nsz x, +0.0}. If applicable, developers explicitly match any zero if the \texttt{nsz} flag is set. We mirror this design by implementing an \texttt{AnyZero(C)} predicate, which makes \texttt{C} negative or positive zero. \begin{figure*} \centering \small \begin{minipage}[t]{5cm} \begin{Verbatim}
Name: PR26958
Precondition: AnyZero(C0)
%a = fsub nnan ninf C0, %x
%r = fadd %a, %x
  =>
%r = 0.0
\end{Verbatim} \end{minipage} % \begin{minipage}[t]{5cm} \begin{Verbatim}
Name: PR26943
%a = select %c, 0.0, C
%r = frem %x, %a
  =>
%r = frem %x, C
\end{Verbatim} \end{minipage} % \begin{minipage}[t]{5cm} \begin{Verbatim}
Name: PR27036
Precondition: hasOneUse(%a) &&
  hasOneUse(%b) &&
  WillNotOverflowSignedAdd(%x, %y)
%a = sitofp %x
%b = sitofp %y
%r = fadd %a, %b
  =>
%s = add nsw %x, %y
%r = sitofp %s
\end{Verbatim} \end{minipage} \caption{New bugs in LLVM 3.7.1 found by LifeJacket{}.} \label{fig:bugs} \end{figure*} \paragraph{Limitations} While \Cref{sec:evaluation} shows that LifeJacket{} is a useful tool, it does not support all floating-point types and imprecise optimizations, uses a fixed rounding-mode, and does not model floating-point exceptions and debug information in \texttt{NaN}s. Currently, LifeJacket{} does not support LLVM's vectors and the two 128-bit and the 80-bit floating-point types. Supporting those would likely not require fundamental changes. There are many imprecise optimizations in LLVM. These optimizations need a different style of verification because they do not make any guarantees about how much they affect the program output. A possible way to deal with these optimizations would be to verify that they are correct for real numbers and estimate accuracy changes by randomly sampling inputs, similar to Herbie~\cite{panchekha2015automatically}. LifeJacket{}'s verification ultimately relies on the SMT-LIB standard for floating-point arithmetic. The standard corresponds to IEEE 754-2008 but it only defines a single \texttt{NaN} value and does not distinguish between signalling and quiet \texttt{NaN}s. Thus, our implementation cannot verify whether an operation with \texttt{NaN} operands returns one of the input \texttt{NaN}s, propagating debug information encoded in the \texttt{NaN}, as recommended by the IEEE standard. In practice, LLVM does not attempt to preserve information in \texttt{NaN}s, so this limitation does not affect our ability to verify LLVM optimizations. We do not model floating-point exceptions, either, since LLVM does not currently make guarantees about handling floating-point exceptions. Floating-point exceptions could be verified with separate SMT queries, similar to how Alive verifies definedness. LifeJacket{} currently rounds to nearest and ties to the nearest even digit, mirroring the most common rounding-mode. Even though LLVM does not yet support different rounding-modes, we are planning to add support soon. The limited type and rounding-mode support and missing floating-point exceptions make our implementation unsound at worst: LifeJacket{} may label some incorrect optimizations as correct, but optimizations labelled as incorrect are certainly wrong.
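To give a flavor of the queries that result, the following self-contained Z3 (Python) sketch refutes the transformation from \Cref{fig:bug}; it is a hand-written stand-in for the formulas that LifeJacket{} generates automatically:
\begin{Verbatim}
from z3 import (FP, Float32, fpMinusZero, fpPlusZero, fpSub,
                RNE, Solver, Not, sat)

x = FP('x', Float32())
# source: %a = fsub -0.0, %x ; %r = fsub 0.0, %a
a = fpSub(RNE(), fpMinusZero(Float32()), x)
src = fpSub(RNE(), fpPlusZero(Float32()), a)
tgt = x  # target: %r = %x

s = Solver()
s.add(Not(src == tgt))   # == is the bit-precise SMT equality
assert s.check() == sat  # satisfiable: a counterexample exists
print(s.model())         # reports x = -0.0
\end{Verbatim}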
\paragraph{Working with Z3} Even though Z3's implementation of floating-point support is recent, we found it to be an effective tool for the job. Due to the youth of the floating-point support, we found that LifeJacket{} does not work with the newest release of Z3 because of issues in the implementation and the Python API. During the development of LifeJacket{}, we reported issues that were fixed quickly and fixed some issues, mostly in the Python API, ourselves. This suggests that LifeJacket{} is an interesting test case for floating-point support in SMT solvers. \section{Evaluation} \label{sec:evaluation} To evaluate LifeJacket{}, we translated 54{} optimizations from LLVM 3.7.1 into the Alive language and tried to verify them. We discovered 8{} incorrect optimizations and verified 43{} optimizations to be correct. In the following, we outline the optimizations that we checked and describe the bugs that we found. We performed our evaluation on a machine with an Intel i3-4160 CPU and 8 GB RAM, running Ubuntu 15.10. We compiled Z3 commit \texttt{b66fc4e}\footnote{Full disclaimer: We ran into regression issues with this version and verified some optimizations with an older version.} with GCC 5.2.1, the default compiler, used the \texttt{qffpbv} tactic, and chose a 5 minute timeout for SMT queries. \Cref{fig:opts} summarizes the results for the different source files: AddSub contains optimizations with \texttt{fadd}/\texttt{fsub} at the root, MulDivRem with \texttt{fmul}/\texttt{fdiv}/\texttt{frem}, Compares deals with \texttt{fcmp}s and Simplify contains simple optimizations for all instructions. Using this process, LifeJacket{} found 43{} out of 54{} optimizations to be correct. LifeJacket{} timed out on \numberstringnum{4} optimizations. The AddSub optimization that times out contains a \texttt{sitofp} instruction and verification is slow for integers with a large bit-width. The two MulDivRem optimizations that time out both contain \texttt{nsz} flags and \texttt{AnyZero} predicates. Similar optimizations without those features do not time out. In general, \texttt{fdiv} seems to slow down verification, as seems to be the case for the timeout in Simplify. Out of the 8{} optimizations that we found to be incorrect, \numberstringnum{4} had already been reported. The bug in \Cref{fig:bug} had already been fixed in a newer version of LLVM when we discovered it. The rest of the reported bugs resembled the example in \Cref{fig:undef} and are all caused by an unjustified \texttt{undef} in the target. \Cref{fig:bugs} depicts the \numberstringnum{3} previously unreported incorrect optimizations that we reported to the LLVM developers. We discuss these bugs in the next paragraphs. \texttt{PR26958} optimizes $(0 - x) + x$ to $0$. The implementation of this optimization requires that the \texttt{nnan} and the \texttt{ninf} flag each appear at least once on the source instructions. We translate four variants of this optimization: one where both flags are on \texttt{fsub}, one where both are on \texttt{fadd} and two where each instruction has one of the flags. As it turns out, it is not enough to have both flags on either of the instructions. For the case where both flags are on \texttt{fsub}, the transformation is invalid if \texttt{\%x} is \texttt{NaN} or $\pm \infty$.
The \texttt{nnan} and \texttt{ninf} flags require the optimized program to retain defined behavior over \texttt{NaN} and $\pm \infty$, so \texttt{\%r} must be \texttt{0.0} even for those inputs (if they resulted in undefined behavior, any result would be correct). If \texttt{\%x} is \texttt{NaN}, however, then there is no value for \texttt{\%a} that would result in \texttt{\%r} being \texttt{0.0} because \texttt{NaN} added to any other number is \texttt{NaN}.

\texttt{PR26943} optimizes \texttt{fmod(x, c ? 0 : C)} to \texttt{fmod(x, C)} (\texttt{select} acts like a ternary and \texttt{frem} corresponds to \texttt{fmod}). The implementation of this optimization shares its code with the same optimization for the \texttt{rem} instruction that deals with integers. For integers, \texttt{rem \%x, 0} results in undefined behavior, so the optimization is valid. The POSIX standard specifies that \texttt{fmod(x, 0.0)} returns \texttt{NaN}, though, so the optimization is incorrect for \texttt{frem}: \texttt{\%r} must be \texttt{NaN} and not \texttt{frem \%x, C} if \texttt{\%a} is \texttt{0.0}.

\texttt{PR27036} illustrates the last incorrect optimization that LifeJacket{} identified. It transforms \texttt{(float) x + (float) y} into \texttt{(float) (x + y)}, replacing an \texttt{fadd} instruction with a more efficient \texttt{add}. This transformation is invalid, though, since adding two rounded numbers is not equivalent to adding two numbers and rounding the result. For example, assuming 16-bit floating-point numbers, let \texttt{\%x = -4095} and \texttt{\%y = 17}. In the source formula, \texttt{\%a = sitofp \%x} cannot represent \texttt{-4095} exactly and stores \texttt{-4094} instead. The target formula, though, can accurately represent the result \texttt{-4078} of the addition.

Our results confirm that it is difficult to write correct floating-point optimizations; we found bugs in almost all the LLVM files from which we collected our optimizations. Unsurprisingly, all of these bugs relate to floating-point specific properties such as rounding, \texttt{NaN} and $\pm \infty$ inputs, and signed zeros. These edge cases are clearly difficult for programmers to reason about.

\begin{table} \centering \small \begin{tabular}{lrrr} \toprule File & Verified & Timeouts & Bugs \\ \midrule AddSub & 7 & 1 & \final{1} \\ MulDivRem & \final{3} & \final{2} & \final{1} \\ Compares & \final{11} & \final{0} & \final{0} \\ Simplify & \final{22} & \final{1} & \final{6} \\ \midrule Total & 43{} & 4 & 8{} \\ \bottomrule \end{tabular} \caption{Number of optimizations verified, timeouts, and bugs.} \label{fig:opts} \end{table}

\section{Conclusion}

In an ideal world, programming languages and compilers are boring. They do what the user expects. They exhibit the same behavior with and without optimization, at all optimization levels, and on all hardware. ``Boring,'' however, is surprisingly difficult to achieve, especially in the context of the complicated semantics of floating-point arithmetic. With LifeJacket{}, we hope to make LLVM's precise floating-point optimizations more predictable (and boring) by automatically checking them for correctness. \bibliographystyle{plain}
\section{Introduction} \label{intro}

Speciation requires the evolution of reproductive isolation between initially compatible individuals \cite{mayr_1942}. One of the most accepted mechanisms for the development of reproductive barriers is the geographic isolation of the populations, which can be complete (allopatry) or partial (parapatry) \cite{coyne_speciation_2004,nostil_2012}. In both cases the reduction of gene flow between individuals inhabiting different regions facilitates the fixation of local adaptations. These, in turn, may lead to pre-zygotic or post-zygotic mating incompatibilities and can eventually develop into full reproductive isolation.

Sympatric speciation, where new species arise from a population inhabiting a single geographic region, has been highly debated \cite{kirk-2004,Bolnick_2007} and its occurrence in nature is documented in only a few cases \cite{kondra-1986,coyne_speciation_2004}. Mathematical models have shown that sympatric speciation is theoretically possible \cite{Udovic_1980,Felsenstein_1981,kondra-1998,kondra-1999,Doebeli_2005,Gavrilets_2006} and the model by Dieckmann and Doebeli \cite{Dieckmann_1999}, in particular, attracted a lot of attention. In their model, speciation results from the interplay between strong competition for resources and the way resources are distributed. It was shown that, under appropriate conditions, the population will evolve in such a way as to consume the most abundant type of resource, but it will arrive at the corresponding `optimal' phenotype in a minimum of the fitness landscape, causing it to split in two. The resulting system of two subpopulations with different phenotypes would then be at a fitness maximum. For sexually reproducing individuals the evolution of reproductive barriers between the two nearly split populations would need a further ingredient, assortative mating. This is the tendency of individuals to mate with others that are similar to themselves, and it prevents the mixing of the two subpopulations. In the model, assortative mating was shown to evolve naturally because only then could the fitness maximum be achieved. In spite of its theoretical plausibility, this and other models have been criticized for their supposedly unrealistic assumptions \cite{gavrilets_fitness_2004,Barton_2005,Bolnick_2004,Gavrilets_2005}.

More surprising than sympatric speciation under strong selection is the possibility of sympatric speciation in a neutral scenario \cite{kondra-1998}. It was shown by Derrida and Higgs (DH) \cite{higgs_stochastic_1991,higgs_genetic_1992} that a population of initially identical individuals subjected only to mutations and assortative mating can indeed split into species in sympatry if the genomes are infinitely large. The hypothesis of genomes containing infinitely many loci, though not new \cite{Kimura_1964,Ewens_1979}, raises important questions about the meaning of a locus (a gene, the position of a base-pair on a chromosome or other replicating molecule) and the actual number of such loci that would be necessary for speciation to occur, since, after all, genes or base-pairs are always finite. Moreover, the model assumes that all these infinitely many loci are involved in assortative mating, which also needs to be taken into account in the interpretation of the loci.

In this paper we consider the DH model for finite genomes. We find that, for the parameters considered in the original paper, speciation occurs only for a very large number of loci, of the order of $10^5$.
We compare the dynamics of the model for a finite number of loci with the infinite-loci model and show how the broadening of the genetic distribution of genotypes in the finite-loci model prevents the onset of speciation. We also compare the finite-loci DH model with a spatial version of the dynamics introduced in \cite{de_aguiar_global_2009}. When space is added and reproduction is restricted by assortative mating and spatial proximity, the minimum number of loci needed for speciation drops by several orders of magnitude, and speciation can occur even for as few as 100 loci. Our simulations show that when the number of loci is small species form in well-defined spatial regions, with little overlap at the boundaries. When the number of loci is large, on the other hand, space becomes less important and the species overlap considerably more in space.

\section{The Derrida-Higgs model of sympatric speciation} \label{DH}

The model introduced by Derrida and Higgs \cite{higgs_stochastic_1991} considers a sympatric population of $M$ haploid individuals whose genomes are represented by binary strings of size $B$, $\{S_1^\alpha, S_2^\alpha, \dots, S_B^\alpha\}$, where $S_i^\alpha$ can assume the values $\pm 1$. For simplicity, each locus of the genome will be called a gene and the values $+1$ and $-1$ the corresponding alleles. The number of individuals at each generation is kept constant and the population is characterized by an $M \times M$ matrix $q$ measuring the degree of genetic similarity between pairs of individuals: \begin{equation} q^{\alpha \beta} = \frac{1}{B} \sum_{i=1}^B S_i^\alpha S_i^\beta. \label{qdef} \end{equation} If the genomes of $\alpha$ and $\beta$ are identical $q^{\alpha \beta}=1$, whereas two genomes with random entries will have $q^{\alpha \beta}$ close to zero. Alternatively, the genetic distance between the individuals, measuring the number of genes bearing different alleles, is $d^{\alpha \beta} = B(1 - q^{\alpha \beta})/2$.

Each generation is constructed from the previous one as follows: a first parent $P_1$ is chosen at random. The second parent $P_2$ has to be genetically compatible with the first, i.e., their degree of similarity has to satisfy $q^{P_1 P_2} \geq q_{min}$. In other words, parents must differ in at most $G = B (1-q_{min})/2$ genes to be compatible (assortative mating). Individuals $P_2$ are then randomly selected until this condition is met. If no such individual is found, $P_1$ is discarded and a new first parent is selected. The offspring inherits, gene by gene, the allele of either parent with equal probability (sexual reproduction). The process is repeated until $M$ offspring have been generated. Individuals are also subjected to a mutation rate $\mu$ per gene, which is typically small.

The model is neutral in the sense that the probability that an individual is chosen as first parent ($1/M$) does not depend on its genome. However, once the first parent has been selected, the chances of being picked as second parent and of producing an offspring do depend on the genome, and a fitness measure can actually be defined in association with assortativeness \cite{de_aguiar_error_2015}.

To understand how the similarity matrix changes through generations, consider first an asexual population where each individual $\alpha$ has a single parent $P(\alpha)$ in the previous generation.
The allele $S_i^\alpha$ will be equal to $S_i^{P(\alpha)}$ with probability $\frac{1}{2}(1+e^{-2\mu}) \approx 1 -\mu$ and $-S_i^{P(\alpha)}$ with probability $\frac{1}{2}(1-e^{-2\mu}) \approx \mu$, so that the expected value is \begin{equation} E(S_i^\alpha) = e^{-2\mu} S_i^{P(\alpha)}. \label{aves} \end{equation} For independent genes, the expected value of the similarity between $\alpha$ and $\beta$ is, therefore, % \begin{equation} E(q^{\alpha \beta}) = e^{-4\mu} q^{P(\alpha) P(\beta)}. \label{aveq} \end{equation} % In sexual populations $\alpha$ and $\beta$ have two parents each, $P_1(\alpha)$, $P_2(\alpha)$ and $P_1(\beta)$, $P_2(\beta)$, respectively, and since each inherits (on the average) half the alleles from each parent, it follows that, on the average, \begin{equation} q^{\alpha \beta} = \frac{e^{-4\mu}}{4} \left( q^{P_1(\alpha) P_1(\beta)} + q^{P_2(\alpha) P_1(\beta)} + q^{P_1(\alpha) P_2(\beta)} + q^{P_2(\alpha) P_2(\beta)} \right) \label{hdupdate} \end{equation} with $q^{\alpha \alpha} \equiv 1$.

In the limit of infinitely many genes, $B \rightarrow \infty$, this expression becomes exact and the entire dynamics can be obtained by simply updating the similarity matrix. If there is no restriction on mating, $q_{min}=0$, the overlaps $q^{\alpha \beta}$ converge to a stationary distribution centered at $q_0 \approx 1/(1+4\mu M)$. The approximation holds for $\mu$ and $1/M$ much smaller than one, which is always the case for real populations. If $q_{min} > q_0$ the population splits into species formed by groups of individuals whose average similarity is larger than $q_{min}$ and such that interspecies similarity is smaller than $q_{min}$, tending to zero with time \cite{higgs_stochastic_1991}.

Figure \ref{fighd}(d) shows the histogram of $q^{\alpha \beta}$ between all pairs of individuals for $M=1000$, $\mu=1/4000$ ($q_0=0.5$) and $q_{min}=0.8$ for 100, 200, 300 and 400 generations. The initial condition is $q^{\alpha \beta}=1$ for all $\alpha$ and $\beta$, representing genetically identical individuals at the start of the simulation. At $T=100$ and $T=200$ the histogram displays peaks to the right of $q_{min}$, showing that all pairs are still compatible. At $T=300$, on the other hand, the peak at $0.77 < q_{min}$ is a signature of speciation, showing that several pairs of individuals have become reproductively isolated. The corresponding species are represented by the smaller peaks to the right of $q_{min}$.

\begin{figure} \centering \includegraphics[scale=0.25]{fig1a.pdf} \includegraphics[scale=0.25]{fig1b.pdf} \includegraphics[scale=0.25]{fig1c.pdf} \includegraphics[scale=0.24]{fig1d.pdf} \caption{Distribution of similarity coefficients for the DH model with (a) 50,000, (b) 100,000 and (c) 200,000 genes. Panel (d) shows the result for the original DH model with infinitely many genes. The distribution is shown after $T=100$, $200$, $300$ and $400$ generations. In all cases $M=1000$, $\mu=1/4000$ and $q_{min}=0.8$.} \label{fighd} \end{figure}

The only condition for speciation in the DH model is $q_{min} > q_0 \approx (1+4\mu M)^{-1}$ \cite{higgs_stochastic_1991}. Changing the mutation rate (or the population size) but keeping $\mu M$ fixed only affects the time to speciation, which increases approximately linearly with $M$, or $1/\mu$, as shown in Fig. \ref{time}(a). This is consistent with Eqs.(\ref{aveq}) and (\ref{hdupdate}), which indicate that the change in the similarity matrix per time step is proportional to $\mu$ for $\mu \ll 1$.
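For concreteness, a single generation of the infinite-genome dynamics, Eq. (\ref{hdupdate}), can be sketched in a few lines of Python (our notation, not the original implementation):

\begin{verbatim}
import numpy as np

def update_similarity(q, parents, mu):
    # One generation of the infinite-genome update rule.
    # q: (M, M) similarity matrix of the parent generation;
    # parents: (M, 2) array with the two parents of each offspring.
    p1, p2 = parents[:, 0], parents[:, 1]
    decay = np.exp(-4.0 * mu)
    q_new = 0.25 * decay * (q[np.ix_(p1, p1)] + q[np.ix_(p2, p1)]
                            + q[np.ix_(p1, p2)] + q[np.ix_(p2, p2)])
    np.fill_diagonal(q_new, 1.0)  # q^{alpha alpha} = 1 by definition
    return q_new
\end{verbatim}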
Increasing $\mu M$, on the other hand, decreases $q_0$ and increases the number of species formed \cite{higgs_stochastic_1991,higgs_genetic_1992}.

\begin{figure} \centering \includegraphics[scale=0.25]{fig2a.pdf} \includegraphics[scale=0.25]{fig2b.pdf} \caption{(color online) (a) Time to speciation as a function of population size in the DH model. Dashed line (red) shows a linear fit. (b) Minimum genome size for speciation in the finite genome model as a function of $M$ (stars, upper horizontal axis) and $\mu$ (squares, lower horizontal axis). The values $M=1000$ and $\mu=0.00025$ used in the other figures are marked with a larger (red) symbol. In both panels $4\mu M=1$.} \label{time} \end{figure}

\section{The DH model with finite genomes} \label{DHF}

An important feature of the DH model is that there is no need to describe the genomes explicitly: all it takes is the initial similarity matrix $q$, which is set to 1, and the update rule Eq. (\ref{hdupdate}). To implement the model with a finite number of genes it is necessary to keep track of all the $M$ genomes at each generation and calculate the similarities from the definition Eq. (\ref{qdef}). Offspring for the next generation are created by choosing (gene by gene) the allele of either parent with equal probability and letting the allele mutate (from +1 to -1 or vice-versa) with probability $\mu$. Parents are chosen as in the original DH model, picking the first parent at random and a second parent compatible with it.

If the number $B$ of genes is small the distribution of similarities starts from $q=1$ and moves to the left until $q=q_{min}$, where it remains stationary and no speciation occurs. We found that speciation only occurs for very large values of $B$, as shown in Fig. \ref{fighd} (a)-(c). For the parameters used ($M=1000$, $\mu=0.00025$) we observed speciation only for $B$ larger than about $110000$. When compared to the DH model we see that the finite number of genes blurs the peaks of the $q$ distribution, preventing them from breaking up.

The minimum number of genes required for speciation depends directly on the mutation rate and population size. Figure \ref{time}(b) shows the minimum value of $B$ for speciation as a function of $M$ and $\mu$ for $4 \mu M = 1$. For populations with only $M=600$ individuals $B$ drops to about 35000, whereas for $M=2000$ at least 430000 genes are necessary, showing that $B$ grows rapidly with population size. In all cases the simulations were run for at least twice the time to speciation of the DH model, Fig. \ref{time}(a), to make sure the similarity distribution would either move to the left of $q_{min}$ and speciate (for large enough $B$) or stay frozen close to $q=q_{min}$. Error bars for $B$ are of the order of the symbol size: $\pm 10^4$ for $B \geq 10^5$ and $\pm 5 \times 10^3$ for $B < 10^5$.

\section{Spatial model with finite genomes} \label{SM}

The importance of space in evolution has long been recognized \cite{Wright-1943,Rosen-1995,coyne_speciation_2004,nostil_2012} and explicit empirical evidence of its role has been recently provided by ring species \cite{irwin_speciation_2005,martins_evolution_2013,Martins-2016}. The spatial model we discuss here is a simplified version of the model proposed in \cite{de_aguiar_global_2009}. The main additional ingredient with respect to the DH model with finite genomes is that the individuals are now distributed on a two-dimensional $L\times L$ square area with periodic boundary conditions.
Mating is not only restricted by genetic similarity but also by spatial proximity, so that an individual can only choose as mating partners those inside a circular neighborhood of radius $S$ centered on its spatial location, called the {\it mating neighborhood}. We note that a number of other effects, such as demographic stochasticity \cite{mckane2016}, population expansions \cite{martins_evolution_2013,Goodsman-2014}, costs of reproduction \cite{lecunff-2014}, and migration rates between subpopulations \cite{Yamaguchi-2013} might also influence the outcome of speciation. Here we consider only the effect of finite mating neighborhoods and keep all the other ingredients as similar as possible to the original DH model.

\begin{figure} \centering \includegraphics[scale=0.35]{fig4.pdf} \caption{Minimum number of genes required for speciation as a function of the mating neighborhood radius $S$. The top axis shows the corresponding average number of individuals in the mating neighborhood ($M=1000$, $\mu=1/4000$.)} \label{fig3} \end{figure}

Space represents an environment where resources are distributed homogeneously, and the individuals should occupy it more or less uniformly so that enough resources are available to all of them. The total carrying capacity is the population size $M$ and the average area available per individual is $L^2/M$. The dynamics is constructed in such a way that offspring are placed close to the location of the original parents and the approximately uniform distribution of the population is preserved at all times. However, the mechanism used in the DH model of picking a random individual from the population to be the first parent and then a second individual to be the second parent (and repeating the process $M$ times) promotes strong spatial clustering.

\begin{figure} \centering \includegraphics[scale=0.27]{fig3a.pdf} \includegraphics[scale=0.27]{fig3b.pdf} \includegraphics[scale=0.27]{fig3c.pdf} \includegraphics[scale=0.27]{fig3d.pdf} \includegraphics[scale=0.27]{fig3e.pdf} \includegraphics[scale=0.27]{fig3f.pdf} \caption{(color online) (a) Distribution of similarity coefficients for the spatial model with $B=1000$, $q_{min}=0.8$ and $S=7$ for $T=200$, $400$ and $600$ generations. (b) Spatial distribution of the population at $T=600$; different colors (shades of gray) show different species. Panels (c) and (d) show similar plots for $B=10000$ and $S=11$; (e) and (f) show the results for $B=100000$ and $S=14$.} \label{fighds} \end{figure}

In order to avoid clustering we implement the dynamics in a slightly different way \cite{de_aguiar_global_2009} (see also section \ref{VLG}): the initial population is randomly placed in the $L \times L$ area. Each one of the $M$ individuals has a chance of reproducing, but there is a probability $Q$ that it will not do so, accounting for the fact that not all individuals in the present generation will be first parents of the next. In case the {\it focal} individual does not reproduce, another one from its mating neighborhood is randomly chosen to reproduce in its place. In either case the offspring generated will be positioned exactly at the location of the focal individual or will disperse with probability $D=0.01$ to one of 20 neighboring sites. Therefore, close to the location of every individual of the previous generation there will be an individual in the present generation, keeping the spatial distribution uniform and avoiding the formation of clusters.
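As an illustration, finding the candidate partners inside a mating neighborhood of radius $S$ on the $L \times L$ area with periodic boundary conditions can be sketched as follows (our notation, not the original code):

\begin{verbatim}
import numpy as np

def mating_neighborhood(pos, focal, S, L):
    # pos: (M, 2) array of spatial coordinates in [0, L).
    d = np.abs(pos - pos[focal])  # coordinate differences
    d = np.minimum(d, L - d)      # wrap around the torus
    inside = (d**2).sum(axis=1) <= S**2
    inside[focal] = False         # exclude the focal individual
    return np.flatnonzero(inside)
\end{verbatim}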
The first parent (or a neighbor reproducing in its place) chooses a compatible second parent within its mating neighborhood of radius $S$. If $S$ is small, the number of individuals in the mating neighborhood can be close to zero due to fluctuations in the spatial distribution. To avoid this situation, and also to follow the procedure introduced in \cite{de_aguiar_global_2009}, if the number of compatible mates in the neighborhood is smaller than $P$ ($P=3$ in the simulations), the individual expands the search radius to $S+1$. If the number of compatible mates is still smaller than $P$, the process is repeated twice more, up to $S + 3$, and if there are still fewer than $P$ potential mates another neighbor is randomly selected to reproduce in its place \cite{de_aguiar_global_2009}. The probability $Q$ was fixed at $Q=e^{-1} \approx 0.37$, which corresponds approximately to the probability that an individual is not selected in $M$ trials with replacement, $(1-1/M)^M \approx e^{-1}$, in accordance with the DH model.

The size of the mating neighborhood is a key extra parameter. If $S \simeq L$ we recover the DH model with finite genomes. If $S$ is small speciation is strongly facilitated and can occur for much smaller values of $B$ for a given $q_{min}$ or $G/B$. Fig. \ref{fig3} shows the minimum value of $B$ for which speciation happens as a function of the size of the mating neighborhood $S$ for $q_{min}=0.8$. The figure also shows the average number of individuals inside the mating neighborhood, $M_S = M \pi S^2/L^2$. The values were obtained by varying $S$ in steps of $0.5$ when $B \leq 1000$ and in steps of $2$ for larger $B$. Populations evolved for $T=600$ generations. In some cases speciation did occur for slightly smaller values of $B$, but it took much longer. For $B=1000$, for instance, we observed speciation for $S=8$ at $T \approx 3000$.

Comparing with the finite DH model we note that speciation can take place even for $B=100$ if $S \leq 5$. As $B$ increases the restriction in the size of the mating neighborhood required for speciation decreases, until it becomes unnecessary when $B$ is of the order of 110000. Figure \ref{fighds} shows the histogram of similarity between pairs of individuals at three times and a snapshot of the population for $B=1000$, $S=7$; $B=10000$, $S=11$; and $B=100000$, $S=14$. The peaks in the distribution are larger both because $B$ is small and because of the restriction in $S$. For $B=1000$ the species are well localized in space, with little overlap at their boundaries (see \cite{de_aguiar_global_2009}). For $B=10000$ and $B=100000$ the spatial overlap between species is considerably larger.

\section{Spatial model with very large genomes} \label{VLG}

As discussed in section \ref{SM}, the construction of spatial models that converge to the finite DH model as the parameter $S$ becomes large is not straightforward. The problem resides in the way generations are constructed in the DH model, where a first parent is chosen at random from the population and then a second, genetically compatible, parent is also chosen at random to generate an offspring. The direct application of this procedure in the spatial model would consist in choosing a random first parent and picking the second parent from its mating neighborhood, an area $\pi S^2$ centered on the individual. The offspring should be put close to the location of the first or second parent, or somewhere in between.
This, however, leads to spatial clustering of the population, since a large fraction ($\approx e^{-1}$) of the population is never chosen as first parent, leaving holes in some areas and overcrowding others, where individuals are picked twice or more. To avoid this situation we have replaced the random choice of the first parent by going through the population one by one and giving each individual a chance ($1-e^{-1}$) to reproduce. When it does not, another individual from its mating neighborhood is picked to reproduce in its place, instead of a random individual taken from the entire population as in the original or finite DH models. The offspring is always placed close to the location of the first parent, keeping the distribution spatially uniform.

\begin{figure} \centering \includegraphics[scale=0.27]{fig5a.pdf} \includegraphics[scale=0.27]{fig5b.pdf} \caption{(color online) Spatial distribution of the population for $B=200000$ at $T=300$ for (a) $S=10$ and (b) $S=40$. Small mating neighborhoods favor spatial isolation of species.} \label{fig5} \end{figure}

In order to compare the spatial model with an equivalent sympatric system we implemented a variation of the finite DH model in which, similarly to the spatial model, we give each individual a chance ($1-e^{-1}$) to reproduce and, when it does not, we pick another random individual from the entire population to reproduce in its place. The results obtained with this variation are qualitatively identical to those described in sections \ref{DH} and \ref{DHF}. This validates the comparison of the DH finite-genome model with the spatial model introduced in section \ref{SM}.

Finally we consider the process of speciation in the spatial model for genome sizes $B$ above the threshold where speciation occurs in the sympatric model. In this case speciation occurs for any value of $S$, small or large. For small $S$, local mating creates a spatial distribution of genotypes where nearby individuals tend to be similar but different from others located far away \cite{de_aguiar_global_2009}. For finite populations this genetic gradient is not smooth, but step-like, and prone to break up into genetically isolated groups that are spatially correlated. Speciation happens not because $B$ is large, but because $S$ is small. For large $S$, on the other hand, the mechanism is very different and takes place on the global scale of the population. The balance between mutations, which generally increase the average genetic distance between two individuals, and sexual reproduction, which mixes the genomes and has the opposite effect, leads to a distribution of genetic distances in the population. Only when this distribution becomes wide enough, as compared with the criterion of assortative mating, does the population split into species. Figure \ref{fig5} shows the resulting populations in each case, where a signature of $S$ is clearly seen in the spatial organization of the species.

\section{Discussion}

The mechanisms responsible for the origin of species remain controversial \cite{Chung_2004}. Among the important questions yet to be fully understood are the role of geography (allopatric, parapatric and sympatric modes) and the number of genes involved in the evolution of reproductive isolation. It has been recently argued that these two points are closely related and that detailed molecular analysis might reveal the geographic mode behind specific speciation events \cite{Machado_2002}.
Sympatric speciation has been thought to be the most unlikely of the modes, since the possibility of constant gene flow would keep the populations mixed and prevent the evolution of reproductive barriers \cite{Bolnick_2007}. Dieckmann and collaborators have argued that this is not so if competition for resources is strong enough \cite{Dieckmann_1999,Dieckmann_2003,Doebeli_2005}. In that case reproductive isolation would not only evolve naturally but would {\it require} sympatry, otherwise competition for local resources would not be intense \cite{Dieckmann_1999}. Surprisingly, Derrida and Higgs have shown that sympatric speciation may occur even without competition, in a totally neutral scenario, if mating is assortative and if the number of loci in the genome can be considered infinite \cite{higgs_stochastic_1991}.

When loci are interpreted as genes the hypothesis of an infinitely large genome becomes rather unrealistic. However, at the molecular level, where nucleotide sequences are the units to be considered, infinite-loci models become attractive \cite{Kimura_1964,Ewens_1979}. Nevertheless, nucleotides can hardly be considered independent and certainly do not segregate in the way assumed by the DH model. Understanding how many independently segregating loci are necessary for neutral sympatric speciation is, therefore, an important question.

The main parameters controlling the dynamics in the DH model are $q_0 = (1+ 4\mu M)^{-1}$ and the assortativity measure $q_{min}$. The combination $\theta = 2 \mu M$ also appears in Hubbell's neutral theory of biodiversity \cite{Hubbell-2001} and is the {\it fundamental biodiversity number}, since it controls the number of species in a community and the abundance distribution.

In this paper we have revisited the DH model and simulated it for finite numbers of loci. We found that, for typical parameters used in the original paper, speciation happens only for a very large number of loci, of the order of $10^5$. When the number of loci is small, the genetic variability within the population is large, hindering speciation. The histogram of genetic similarity between individuals evolves into a broad peak instead of multiple sharp peaks representing groups of very similar individuals that are dissimilar from other groups. As the number of genes increases the peaks become thinner, secondary structures appear and eventually turn into species. Increasing the mutation rate while keeping $\mu M$ fixed decreases the minimum number of loci needed and also the time to speciation.

In contrast with the sympatric DH model, the spatial model introduced in \cite{de_aguiar_global_2009} displays speciation with a much smaller number of loci. The model considers a population that is uniformly distributed in space, without any explicit separation into demes or subpopulations. Although it falls into the class of parapatric models, it has been termed topopatric to distinguish it from models involving demes or metapopulations \cite{Manzo_Peliti_1994}. Mating is restricted not only by genetic similarity but also by spatial proximity, allowing gene flow across the entire population but substantially reducing the speed of the flow. Mutations are transmitted diffusively across the population, and not instantaneously as in the sympatric model, largely facilitating speciation \cite{Schneider2016}.
Previous studies with metapopulation models (and infinitely large genomes) observed similar effects, with speciation occurring for smaller mutation rates due to the isolation of subpopulations \cite{Manzo_Peliti_1994}. For a small number of genes, of the order of 100, speciation occurs only with severe spatial mating restrictions. Accordingly, the species that form display strong spatial segregation, with little overlap between adjacent species (Figs. \ref{fighds}(b) and \ref{fig5}(a)). This type of geographic distribution leads back to the question of the modes of speciation and suggests that species may appear not because their populations were geographically separated (as in allopatry), but rather that they are geographically separated because they emerged in a homogeneous environment with slow gene flow. If the number of loci participating in the assortative process is large the spatial restriction on mating can be relaxed. As a consequence, the spatial segregation decreases and the overlap among species increases.\\ \\ \noindent Acknowledgments:

\noindent It is a pleasure to thank David M. Schneider, Ayana Martins and Blake Stacey for their critical reading of this paper and many suggestions. This work was partly supported by the Brazilian agencies FAPESP (grant 2016/06054-3) and CNPq (grant 302049/2015-0). \clearpage
\section{Introduction}

The reconstruction of anatomical joint surfaces and angular relationships is a paramount aspect in the surgical management of fractures or ligament injuries. Intra-operative fluoroscopic guidance, 3D imaging, or navigation is typically used to ensure anatomically and mechanically correct reduction, so that irregular joint loading and complications caused by aberrant biomechanics can be alleviated or avoided. Moreover, for technically demanding procedures, a pre-operative planning sketch is obligatory and helps the surgeon to achieve operational safety \cite{Ewerbeck.2014}. In many of these planning and verification steps, the bone axis serves as an important reference line (Fig.~\ref{fig1}). While planning such axes can be easily done on pre-operative static data, doing so consistently on live images during surgery is inherently more complex due to motion and a limited field of view. In addition, non-sterile interaction with a planning software is unwanted. For this reason, axial alignment is typically verified by visual inspection and the use of hardware-based solutions such as the cable method, alignment rods, goniometers, or optical navigation, amongst others \cite{Krettek.1998,Lee.2018,Waelkens.2016}. However, these methods either increase task complexity, are inherently imprecise, or require an open reduction or additional incisions regardless of the surgical technique used.

\begin{figure}[tb] \centering \subfloat[Palmar tilt and radial inclination angle on the wrist joint \cite{Dee.2000,Kreder.1996,Schmitt.2015,Watson.2016}. \label{fig1a}]{ \makebox[0.45\textwidth][c]{\includegraphics[height=0.38\textwidth]{fig_radius_ulnar.eps}} }\quad \subfloat[Baumann angle on frontal radiograph of the elbow \cite{Silva.2010,Williamson.1992}. \label{fig1c}]{ \makebox[0.45\textwidth][c]{\includegraphics[height=0.38\textwidth]{fig_baumann.eps}} } \\ \subfloat[Angles for tibial intramedullary nail insertion (solid) \cite{Franke.2018} and transtibial tunnel drilling (dashed) \cite{Hiesterman.2011,Johannsen.2013}. \label{fig1b}]{ \makebox[0.45\textwidth][c]{\includegraphics[height=0.38\textwidth]{fig_tibia.eps}} }\quad \subfloat[Approximation of knee flexion angle based on femoral and tibial bone axes. \label{fig1d}]{ \makebox[0.45\textwidth][c]{\includegraphics[height=0.38\textwidth]{fig_axisangle.eps}} } \caption{Examples for using the shaft axis of long bones as reference line.} \label{fig1} \end{figure}

To address this, several methods have been proposed to automate detection of the bone axis on image data. Tian et al. \cite{Tian.2003} compute the femoral shaft axis by using a combination of contour extraction and analysis of intersecting line normals to the shaft contour. They recover the contour with Canny edge detection and identify the relevant straight line sections with the Hough transform and an active contour model via Gradient Vector Flow. While this approach can deal with truncated bones, it requires the bone to be oriented upward in the X-ray image to isolate the relevant intersection points. Donnelley et al. \cite{Donnelley.2008} use a scale-space approach and approximate straight-line parameters via the Hough transform. To deal with the ambiguous peak spread in the dual space encountered in real-world radiographs, this method relies on prior spread quantification, which falls short in the case of truncated bones. Subburaj et al. \cite{Subburaj.2010} use a 3D-reconstructed bone model from pre-operative CT scans.
They combine geometrically detected landmarks and maximal inscribed sphere fitting to detect the medial axis, which is then used for identification of anatomical and mechanical axes. Although very accurate results can be achieved, such 3D information is oftentimes not available and requires registration with the intra-operative 2D image.

To circumvent these limitations, we propose a simple and clinically motivated image-guided approach for detection of the anatomical axis of long bones on 2D X-ray images. We translate the established two-line/two-circle manual method \cite{Hiesterman.2011,James.2014,Johannsen.2013,Kostogiannis.2011} to a learning-based extraction of anatomical features and subsequent geometric construction based on segmentation of the bone cortex outline. With reference to \cite{Kordon.2019}, region of interest (ROI) encoding of the relevant contour sections is used to cope with variability in image truncation and arbitrary image rotation. Moreover, the segmentation results can directly be used for registration of the detected axis on fluoroscopic live images. The method is evaluated for the femur and tibia in the knee joint, which are amongst the most prominent anatomies treated in trauma surgery. The reliability of the proposed method is evaluated and confirmed in an inter-rater study with three expert trauma surgeons.

\section{Methods}

The anatomical axis of long bones in a 2D image plane can be described by two auxiliary lines that follow the orientation of the anterior/posterior or medial/lateral contour of the bone shaft. In contrast to conventional radiographs with rather standardized imaging, this shaft area is usually truncated on intra-operative images due to a limited field of view and a joint-centered acquisition protocol. Furthermore, the largely linear shaft contour can suffer from structural changes due to, e.g., bony proliferation. To this end, first the relevant contour sections are estimated and extracted from the image. Subsequently, these sections are masked based on positional probability and smoothed to reduce the influence of outliers. Lastly, the clinically motivated two-line method is used to calculate the bone axis.

\subsection{Likelihood Encoding of Relevant Contour Regions} Given a binary bone segmentation mask $S$, we extract the complete cortex contour $\mathcal{K}$ by using a morphological erosion operation. With a cross-shaped $3 \times 3$ structuring element $X = \{(-1,0),(0,-1),(0,0),(0,1),(1,0)\}$ this equates to \begin{equation} \mathcal{K} = \mathrm{XOR}\left(S,\,\mathrm{erode}(S,X) \right). \end{equation} To constrain the relevant contour section, an ROI similar to \cite{Kordon.2019} is constructed (Fig.~\ref{fig1}). Its bounds are defined by the start and end points of an additional line segment. Positional variance both parallel and orthogonal to this line segment is encoded by a 2D Gaussian distribution with a standard deviation of $\sigma=6\,\mathrm{px}$ and truncation bounds at $3\sigma$. This gives us a symmetrical fall-off in probability orthogonal to the line within a margin of $37\,\mathrm{px}$. This spatial likelihood distribution is used to decide whether a contour point should be considered part of the relevant contour region. Since we can assume a mainly linear contour, we argue that using a threshold at $1\sigma$ retains the most probable points while eliminating most outliers (Fig.~\ref{fig2a}).
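For illustration, the cortex contour extraction and the subsequent $1\sigma$ masking described above can be sketched as follows; the snippet assumes a peak-normalized likelihood map and uses our own naming, not the actual implementation:

\begin{verbatim}
import numpy as np
from scipy.ndimage import binary_erosion

CROSS = np.array([[0, 1, 0],
                  [1, 1, 1],
                  [0, 1, 0]], dtype=bool)

def cortex_contour(mask):
    # One-pixel contour: XOR of the mask with its eroded version.
    return np.logical_xor(mask, binary_erosion(mask, structure=CROSS))

def masked_contour(contour, roi_likelihood):
    # Keep contour pixels above the 1-sigma level of a
    # peak-normalized Gaussian likelihood map.
    return contour & (roi_likelihood >= np.exp(-0.5))
\end{verbatim}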
\begin{figure}[tb] \centering \subfloat[Masking of the segmentation contour $\mathcal{K}$ by evaluating the predicted likelihood in the ROI, and parametrization of the line segment. \label{fig2a}]{ \makebox[0.45\textwidth][c]{\includegraphics[height=0.318\textwidth]{fig_masking.eps}} }\quad \subfloat[Vector projection of intermediary line points onto the opposing segment and geometric construction of the axis points $m_1$ and $m_2$. \label{fig2b}]{ \makebox[0.45\textwidth][c]{\includegraphics[height=0.343\textwidth]{fig_construction.eps}} } \caption{Implementation of the two-line method for bone axis estimation based on the extracted segmentation contour.} \label{fig2} \end{figure}

\subsection{Axis Construction with Two-Line Method} The auxiliary contour extension lines are obtained by fitting two linear functions to the pair of relevant contour regions. Since we cannot assume a designated dependent variable due to unknown image rotation, major axis regression\footnote[1]{Ordinary least squares with the dependent variable of highest variance is also possible.} is used \cite{Warton.2006}. Given these two lines, we can now perform a geometric construction of the in-between axis based on the midpoints of two parallel line segments that connect one auxiliary line to the other. This method is known as the two-line method and is a clinically established and trusted procedure, especially in pre-operative manual planning \cite{Hiesterman.2011,James.2014,Johannsen.2013,Kostogiannis.2011}. First, a line segment is parametrized for each contour line, bounded by the relevant contour region. One of these segments is subdivided by two points at distances $\mathrm{d}_1$ and $\mathrm{d_2}$ from the respective start and end points (Fig.~\ref{fig2a}). The actual distances can be selected depending on the target anatomical structure to facilitate easier correction by the user. In a second step, these intermediary points are then projected onto the opposing line segment (Fig.~\ref{fig2b}). This procedure allows for different orientations and lengths of the segments and is close to clinical practice (see the sketch below).

\subsection{Neural Network Architecture} The proposed construction relies on a segmentation mask of the target long bone and ROI encodings for both relevant contour regions. For combined prediction we use a multi-task variant of the hourglass network architecture by Newell et al. \cite{Kordon.2019,Newell.2016}. This architecture allows optimizing a joint representation of both tasks, which benefits execution time and computational footprint at inference. We separate segmentation and prediction of ROIs into two tasks. The segmentation task is trained with binary cross entropy to delineate the target bone (foreground) from all other image content (background). The ROIs are optimized by direct matching of the pixel intensity values with a mean squared error loss. In addition, we employ gradient normalization \cite{Chen.2018,Kordon.2019} to cope with different loss function characteristics and task difficulties. To limit the hardware requirements in consideration of the intra-operative application, we refrain from using a stacked network variant.

\subsection{Data and Evaluation} Network training and geometric construction were evaluated for the femur and tibia on a dataset of 221 clinical X-ray images of the knee joint. Each image was acquired as a lateral standard projection where the outlines of both femoral condyles are aligned.
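As referenced above, the projection and midpoint steps of the two-line construction reduce to a few lines of vector algebra. The following simplified sketch uses our own naming and is not the authors' implementation:

\begin{verbatim}
import numpy as np

def project(p, a, b):
    # Orthogonal projection of point p onto the line through a, b.
    ab = b - a
    return a + np.dot(p - a, ab) / np.dot(ab, ab) * ab

def two_line_axis(seg1, seg2, d1, d2):
    # Subdivide seg1 at distances d1/d2 from its endpoints,
    # project onto seg2, and return the axis points m1, m2.
    a, b = seg1
    u = (b - a) / np.linalg.norm(b - a)
    p1, p2 = a + d1 * u, b - d2 * u
    m1 = (p1 + project(p1, *seg2)) / 2.0
    m2 = (p2 + project(p2, *seg2)) / 2.0
    return m1, m2
\end{verbatim}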
The ground-truth segmentation masks and line segments representing the ROIs were annotated by a medically trained engineer with the \textit{labelme} annotation tool \cite{Kentaro.2016}. Our experiment and evaluation setup on this dataset was two-fold.

\begin{enumerate} \item Training of the network in three configurations: (a) femur only, (b) tibia only, (c) joint training of femur and tibia, followed by a quantitative evaluation of the performance. For variant (c) the number of output channels for each task head of the network was increased accordingly. \item Assessment of the clinical reliability of the automatic axis detection in an inter-rater study. To this end, three expert trauma surgeons (one site) were asked to annotate the femoral and tibial axes on all 38 evaluation images via two axis control points. \end{enumerate}

For both experiment series a hold-out test set of 38 images with a $3\,\mathrm{mm}$ calibration sphere was defined. Representative variability in bone truncation and absolute joint rotation was confirmed. The remaining data was split into training and validation subsets of 167/16 images, respectively. The data was split in such a way that disjoint patient groups are ensured across the training/validation and test sets.

Optimization for the first experiment step was performed using Stochastic Gradient Descent (SGD) with a batch size of 2 over 300 epochs on an NVIDIA TITAN RTX graphics card in the PyTorch (v.1.2) Deep Learning framework. We used a learning rate of $2.5 \times 10^{-4}$, which we halved every 50 epochs. To aid generalization and to prevent early overfitting, we applied L2 weight decay with a factor of $5 \times 10^{-5}$ and a basic online augmentation sequence during training. This sequence comprised affine transformations (scaling, rotation, shearing, horizontal flipping) and margin crops of random strength. Upon propagation in the network, min-max normalization to the interval $[0, 1]$ was applied and the image resolution was standardized to $256 \times 256\,\mathrm{px}$ by resizing and a subsequent center-crop. All reported results are based on the respective model parameters for which the minimum combined task error on the validation split was observed.

\section{Results and Discussion}

\subsubsection{Bone Segmentation} The results for bone segmentation by the multi-task neural network variants are given in Table~\ref{tab1}. In general, we observe segmentation results that closely resemble the annotated ground truth. Despite missing annotations of other bony structures in the knee joint, the single-anatomy model is capable of delineating the target bone from other structures, even in ambiguous overlap areas. On the other hand, prediction quality of the combined variant does not suffer from the doubling of inference tasks, which benefits execution time and computational footprint. A very low contour error indicates that the networks not only learn the global shape but also successfully capture small details, which are often caused by bony erosion and proliferation. This keeps error propagation into the geometric axis construction marginal. Segmentation outliers, indicated by higher Hausdorff distances, are exclusively caused by the inserted measuring spheres, which are not represented in the training data.
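For reference, the segmentation metrics reported in Table~\ref{tab1} can be computed as sketched below; this is a simplified version with our own naming (the Hausdorff distance operates on contour point sets):

\begin{verbatim}
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(pred, gt):
    # Soerensen-Dice coefficient of two binary masks.
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def hausdorff(points_a, points_b):
    # Symmetric Hausdorff distance of two (N, 2) point sets.
    return max(directed_hausdorff(points_a, points_b)[0],
               directed_hausdorff(points_b, points_a)[0])
\end{verbatim}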
\begin{table}[b] \caption{Evaluation of segmentation performance for the femur and tibia (DICE=Sørensen–Dice coefficient; ASD=Average Surface Distance; HD=Hausdorff Distance).} \label{tab1} \centering \begin{tabular}{@{\extracolsep{8pt}}llccccc@{}} \toprule & & DICE & & ASD (mm) & & HD (mm) \\ Bone & Network & Mean $\pm$ Std & & Mean $\pm$ Std & & Mean $\pm$ Std \\ \midrule Femur & Single & 0.99 $\pm$ 0.003 & & 0.57 $\pm$ 0.45 & & 7.91 $\pm$ 13.86 \\ & Comb. & 0.99 $\pm$ 0.004 & & 0.57 $\pm$ 0.58 & & 11.38 $\pm$ 22.17 \\ \midrule Tibia & Single & 0.99 $\pm$ 0.005 & & 0.62 $\pm$ 0.53 & & 7.07 $\pm$ 7.08 \\ & Comb. & 0.99 $\pm$ 0.003 & & 0.51 $\pm$ 0.16 & & 4.23 $\pm$ 1.95 \\ \bottomrule \end{tabular} \end{table} \begin{table}[t] \caption{Angulation and displacement error for the femur and tibia in single/combined anatomy training. Results are reported for the anterior/posterior auxiliary lines and bone shaft axis. The displacement error is constructed as the mean orthogonal point-to-line distance of predicted points $s_1/t_1$, $s_2/t_2$, $m_1/m_2$ onto the respective ground truth axis and combines translation and angulation error components. The best results for each axis are marked in bold ($\mathrm{CI}_{95}$ = 95\% confidence interval).} \label{tab2} \centering \subfloat[Femoral axes detection. \label{tab2a}]{ \begin{tabular}{@{\extracolsep{0.5pt}}llccccccc@{}} \toprule & & \multicolumn{3}{c}{Angulation (deg)} & & \multicolumn{3}{c}{Displacement (mm)} \\ \cmidrule(lr){3-5} \cmidrule(l){7-9} Axis & Netw. & Mean $\pm$ Std & Median \& $\mathrm{CI}_{95}$ & Max & & Mean $\pm$ Std & Median \& $\mathrm{CI}_{95}$ & Max \\ \midrule Ant. & Single & $0.55 \pm 0.98$ & $\mathbf{0.31}$ {[}0.18, 0.57{]} & 6.17 & & $0.57 \pm 1.25$ & 0.34 {[}0.25, 0.49{]} & 8.09 \\ & Comb. & $\mathbf{0.48} \pm 0.64$ & 0.34 {[}0.24, 0.52{]} & $\mathbf{3.89}$ & & $\mathbf{0.47} \pm 0.79$ & $\mathbf{0.31}$ {[}0.19, 0.44{]} & $\mathbf{5.10}$ \\ \midrule Post. & Single & $\mathbf{0.56} \pm 0.38$ & $\mathbf{0.49}$ {[}0.36, 0.68{]} & $\mathbf{1.50}$ & & $\mathbf{0.54} \pm 0.33$ & $\mathbf{0.43}$ {[}0.36, 0.55{]} & $\mathbf{1.53}$ \\ & Comb. & $0.64 \pm 0.44$ & 0.54 {[}0.38, 0.83{]} & 1.94 & & $0.59 \pm 0.47$ & 0.45 {[}0.36, 0.56{]} & 2.71 \\ \midrule Shaft & Single & $0.35 \pm 0.42$ & 0.21 {[}0.14, 0.36{]} & 2.54 & & $0.18 \pm 0.22$ & $\mathbf{0.13}$ {[}0.09, 0.16{]} & 4.11 \\ & Comb. & $\mathbf{0.28} \pm 0.24$ & $\mathbf{0.19}$ {[}0.11, 0.23{]} & $\mathbf{0.98}$ & & $\mathbf{0.15} \pm 0.13$ & $\mathbf{0.13}$ {[}0.07, 0.15{]} & $\mathbf{2.65}$ \\ \bottomrule \end{tabular} }\\ \subfloat[Tibial axes detection. \label{tab2b}]{ \begin{tabular}{@{\extracolsep{0.5pt}}llccccccc@{}} \toprule & & \multicolumn{3}{c}{Angulation (deg)} & & \multicolumn{3}{c}{Displacement (mm)} \\ \cmidrule(lr){3-5} \cmidrule(l){7-9} Axis & Netw. & Mean $\pm$ Std & Median \& $\mathrm{CI}_{95}$ & Max & & Mean $\pm$ Std & Median \& $\mathrm{CI}_{95}$ & Max \\ \midrule Ant. & Single & $1.59 \pm 1.99$ & $\mathbf{0.64}$ {[}0.43, 1.59{]} & 7.37 & & $0.86 \pm 0.95$ & $\mathbf{0.48}$ {[}0.35, 0.92{]} & 5.26 \\ & Comb. & $\mathbf{1.43} \pm 1.62$ & 0.81 {[}0.67, 1.18{]} & $\mathbf{6.98}$ & & $\mathbf{0.75} \pm 0.50$ & 0.62 {[}0.52, 0.74{]} & $\mathbf{1.93}$ \\ \midrule Post. & Single & $\mathbf{0.66} \pm 0.55$ & $\mathbf{0.51}$ {[}0.32, 0.68{]} & $\mathbf{1.98}$ & & $\mathbf{0.38} \pm 0.21$ & 0.36 {[}0.25, 0.45{]} & $\mathbf{0.95}$ \\ & Comb. 
& $0.83 \pm 0.84$ & 0.52 {[}0.31, 0.81{]} & 3.37 & & $0.43 \pm 0.28$ & $\mathbf{0.33}$ {[}0.26, 0.46{]} & 1.30 \\ \midrule Shaft & Single & $0.78 \pm 0.95$ & 0.48 {[}0.32, 0.78{]} & 4.11 & & $0.21 \pm 0.19$ & $\mathbf{0.16}$ {[}0.11, 0.25{]} & 0.96 \\ & Comb. & $\mathbf{0.62} \pm 0.66$ & $\mathbf{0.33}$ {[}0.23, 0.69{]} & $\mathbf{2.65}$ & & $\mathbf{0.17} \pm 0.11$ & 0.17 {[}0.11, 0.21{]} & $\mathbf{0.38}$ \\ \bottomrule \end{tabular} } \end{table}

\subsubsection{Axes Detection} The performance of the proposed geometric axis construction is presented in Table~\ref{tab2}. We observe an average angulation error of less than $0.65^{\circ}$ for the anterior and posterior auxiliary lines on both bones and only minor differences between single and combined training. This indicates that the predicted ROIs can provide masking of relevant contour sections on a sufficiently fine scale. We can also qualitatively confirm that the likelihood distribution follows the actual anatomical contour, even though this area is only approximated by a straight line in the ground-truth annotations. These observations strengthen our assumption that we can retain all relevant contour points by masking at a likelihood threshold of $1\sigma$. In addition, low values for the displacement error (Tab.~\ref{tab2}) indicate only a minor shift of the auxiliary lines away from the ground-truth bone contour.

The constructed bone axes generally benefit from the combined training variant and exhibit a comparatively lower maximum error bound (Tab.~\ref{tab2}). Furthermore, it can be observed that by training both anatomies together, the respective confidence intervals narrow and follow the downward shift of the position measure. Based on these results, we chose the combined network for evaluation in the inter-rater study.

\begin{table}[t] \caption{Comparison of automatically detected shaft axis (Auto) to the annotation of three expert readers (E-1, E-2, E-3) and assessment of inter-rater variability. Due to missing midpoints $m_1$ and $m_2$ in the expert reader annotations, the respective displacement error is based on the two annotated control points. Here, $\mapsto$ denotes a mapping of the $1^{\mathrm{st}}$ rater's control points onto the predicted axis of the $2^{\mathrm{nd}}$ rater.
$\mapsfrom$ marks a mapping in reverse order.} \label{tab3} \begin{tabular}{@{\extracolsep{2pt}}llccccc@{}} \toprule & & \multicolumn{2}{c}{Femur} & & \multicolumn{2}{c}{Tibia} \\ \cmidrule(lr){3-4} \cmidrule(l){6-7} \begin{tabular}[c]{@{}l@{}}$1^{\mathrm{st}}$ \\ rater\end{tabular} & \begin{tabular}[c]{@{}l@{}}$2^{\mathrm{nd}}$ \\ rater\end{tabular} & \begin{tabular}[c]{@{}c@{}}Angulation\\ (deg)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Displacement\\ (mm)\end{tabular} & & \begin{tabular}[c]{@{}c@{}}Angulation\\ (deg)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Displacement\\ (mm)\end{tabular} \\ \midrule Auto & E-1 & 0.61 {[}0.38, 1.01{]} & 0.91 {[}0.57, 1.16{]} & & 1.48 {[}1.12, 2.23{]} & 1.90 {[}1.36, 2.62{]} \\ Auto & E-2 & 1.12 {[}0.63, 1.82{]} & 0.58 {[}0.39, 0.72{]} & & 2.36 {[}1.68, 3.07{]} & 1.68 {[}1.22, 2.13{]} \\ Auto & E-3 & 0.49 {[}0.29, 0.68{]} & 0.74 {[}0.43, 1.17{]} & & 5.76 {[}4.87, 6.44{]} & 3.89 {[}3.38, 4.56{]} \\ & & & & & & \\ E-1 & E-2 & 0.93 {[}0.39, 1.32{]} & $\mapsto$ 1.85 {[}1.51, 2.21{]} & & 1.75 {[}1.01, 2.21{]} & $\mapsto$ 1.93 {[}1.60, 2.58{]} \\ & & & $\mapsfrom$ 1.68 {[}1.52, 1.93{]} & & & $\mapsfrom$ 2.02 {[}1.72, 2.61{]} \\ E-1 & E-3 & 0.70 {[}0.49, 1.06{]} & $\mapsto$ 1.38 {[}1.20, 1.77{]} & & 4.53 {[}3.47, 5.25{]} & $\mapsto$ 4.64 {[}3.72, 5.22{]} \\ & & & $\mapsfrom$ 1.82 {[}1.35, 2.15{]} & & & $\mapsfrom$ 4.09 {[}3.27, 4.81{]} \\ E-2 & E-3 & 1.02 {[}0.69, 1.29{]} & $\mapsto$ 1.41 {[}1.15, 1.74{]} & & 3.22 {[}2.40, 4.24{]} & $\mapsto$ 4.71 {[}3.33, 5.47{]} \\ & & & $\mapsfrom$ 1.38 {[}1.20, 1.77{]} & & & $\mapsfrom$ 3.71 {[}2.85, 4.25{]} \\ \bottomrule \end{tabular} \end{table}

\subsubsection{Inter-rater Comparison} The reliability of our method in comparison to expert rater annotations is analyzed in Table \ref{tab3}. For the femur, low angulation and displacement errors indicate reliable axis estimates that are independent of the amount of truncation and rotation present in the image data. A significantly higher angular deviation of the tibial axis can be explained by comparatively more divergent contour lines. Together with the structural variation of the anterior tibia (tibial tuberosity), this leads to higher complexity and differences in the individual approach to manual annotation. This reasoning is strengthened by comparison with rater E-3, for whom a systematically more posterior position and orientation can be observed. Compared with the differences between expert raters (Tab.~\ref{tab3}), the automatic approach yields comparable performance and achieves axis predictions that lie within the inter-rater error bounds. It should be noted that agreement between raters could be further increased if a dedicated tool for semi-automatic two-line planning were used.

\section{Conclusion}

This study investigated a method for automatic detection of the shaft axis on long bone X-rays. The experiments reveal encouraging results that match expert rater performance. A major strength of the proposed method is the flexibility of ROI masking, which we use to select relevant sections of the bone contour without strong prerequisites on image truncation and rotation. We see limitations in that no evaluation was performed for bones that suffer from increased antecurvation/recurvation (e.g. due to natural deformity or increased weight bearing) or major occlusion of the contour by surgical implants. In addition, future work should analyze potential extensions to our method to promote axis estimation in cases of multi-fragment fractures.
\subsubsection{Disclaimer} The methods and information presented here are based on research and are not commercially available. \bibliographystyle{splncs04}
\section*{Introduction} \label{sec:intro}

During spaceflight, astronauts are exposed to a variety of environmental stressors ranging from chemical and bacterial insults (resulting from the materials and occupants of the space vehicle) to microgravity and mixed fields of ionizing radiation. The space radiation environment is a complex combination of fast-moving ions derived from all atomic species found in the periodic table in any meaningful abundance, up to approximately nickel (atomic number $Z=28$). These ionized nuclei have sufficient energy to penetrate the spacecraft structure and cause deleterious biological damage to astronaut crews and other biological material, such as cell and tissue cultures \cite{Chancellor,Walker}. Recent studies have demonstrated that the biological response and disease pathogenesis to space radiation are unique to a nonhomogeneous, multi-energetic dose distribution similar to the interplanetary space environment \cite{Kennedy,RomeroWeaver2014}. Previous radiobiological models and experiments utilizing mono-energetic beams may not have fully characterized the biological responses or described the impact of space radiation on the health of vital tissues and organ systems \cite{Chancellor_2018}.

Currently, radiobiology studies on the effects of {\em galactic cosmic ray} (GCR) radiation utilize single-ion, mono-energetic beams (e.g., Li, C, O, Si, Fe, etc.) at heavy-ion accelerators, where the projected dose for an entire exploration-class mission is given to biological models using highly acute, single-ion exposures. Recently, a GCR simulator was developed that can provide three to five consecutive mono-energetic heavy ions for space radiation studies \cite{hidding2017laser,norbury2016galactic}. While an improvement upon previous capabilities, this approach only provides a few data points and lacks the generation of pions and neutrons that account for as much as $15$-$20$\% of a dose exposure \cite{Slaba2015}. Additionally, for radiobiology studies, questions still remain regarding the order in which the ion species should be delivered, as this may affect experimental outcomes \cite{Elmore2011,Fry2002}. Unfortunately, these approaches do not reflect the low dose rate found in interplanetary space, nor do they accurately replicate the multi-ion species and energies found in the GCR radiation environment. It is believed that the complex GCR environment could cause multi-organ dose toxicity, inhibiting cell regrowth and tissue repair mechanisms. Thus, high-fidelity simulation is critical for the determination of accurate radiobiological experimental outcomes \cite{Chancellor_2018,wilson1995issues}. Furthermore, interaction with the spacecraft hull attenuates the energy of heavy charged particles and frequently causes their fragmentation into lighter, less energetic elements, changing the complexity and makeup of the {\em intravehicular} (IVA) radiation spectrum. Therefore, it is important for the fidelity of space radiation studies that the space radiation environment both outside and inside spacecraft can be accurately simulated.

In this work we demonstrate an approach to simulate the space radiation environment in a laboratory setting. For simplicity, we focus on the IVA radiation spectrum measured on different spacecraft. Our goal is to numerically develop a target moderator block that can be easily constructed from materials with multiple layers of varying geometry to generate specific nuclear reactions and spallation products.
The moderator block is designed so that the final field closely simulates the IVA \emph{linear energy transfer} (LET) spectrum measured on previous spaceflights. The LET quantifies how much energy is lost in a material and is typically given in units of kilo electron volts per micron (keV/$\mu$m) for quantification of radiobiological damage. This proposed target moderator block can, for example, be placed in front of a $1$ giga electron volt per nucleon (GeV/n) iron ($^{56}$Fe) single-particle beam with no modifications to the beamline infrastructure. As the iron beam passes through the moderator block, nuclear spallation processes can create modest amounts of the desired fragments, resulting in a complex mixed field of particle nuclei with different atomic numbers $Z$ in the range $0< Z \leq 26$ and LETs up to approximately 200 keV/$\mu$m. Modifications to the internal geometry and chemical composition of the materials in the target moderator block allow for a shaping of the simulated IVA LET to specific spectra. The concept is shown in Figure \ref{fig:blockconcept}. Our approach thus leverages available beamline technologies to provide an enhancement to current ground-based analogs of the space radiation environment by reproducing the measured IVA LET spectrum. \begin{figure}[ht] \centering \includegraphics[scale=0.40]{blockconcept.pdf} \caption{Moderator block geometry concept for the emulation of space radiation spectra. A primary beam of $^{56}$Fe (iron, left) is selectively degraded with a carefully designed moderator block to produce a desired distribution of energies and ions (represented by the colorful lines on the right) simulating the intravehicular space radiation environment. Figure reprinted with permission from Chancellor et al.\cite{Chancellor_2018}, under the \href{https://creativecommons.org/licenses/by/4.0/}{Creative Commons license}.} \label{fig:blockconcept} \end{figure} To demonstrate applicability of this approach, results from our numerical models are compared below to real-world measurements of the IVA LET spectrum from the U.S. Space Shuttle, International Space Station (ISS), and NASA’s new Orion Multi-Purpose Crew Vehicle (MPCV). While these intravehicular environments were chosen for concept demonstration, we emphasize that this approach could be generalized to other radiation spectra and a wide range of environmental conditions for radiobiological studies as well as other applications such as the testing of shielding, electronics, and materials for a space environment, or for nuclear research facilities and laboratories. \section*{Background} \label{sec:background} Highly-charged heavy ions penetrate matter with an approximately straight path. The interaction of the ion with the material's atomic structure results in one of two outcomes: transfer of energy from the primary ion into the medium (gradually dissipating the primary ion's energy) or the creation of progeny nuclei and spallation fragments \cite{rutherford2012scattering}. The loss of an ion's energy can be accurately approximated using the stopping power equation, \cite{Fano1963} \begin{eqnarray}\label{eq:BB} \frac{dE}{dx} = \frac{4\pi e^{4}Z_{1}^{2}Z_{2}}{m_{e}\beta^{2}} && \!\!\!\!\!\!\! \Big[ \ln\left(\frac{2m_{e}v^{2}}{I}\right) \Big. \\ \nonumber && - \Big. \ln(1-\beta^{2})-\beta^{2} - \frac{C}{Z_{2}} - \frac{\delta}{2} \Big]\, .
\end{eqnarray} \noindent Here, $Z_{1}$ and $Z_{2}$ are the charges of the primary ion and the medium being traversed, respectively; $m_{e}$ is the electron mass, and $\beta = v/c$ with $v$ representing the velocity of the primary ion and $c$ representing the speed of light. Material-specific effects are described by the average ionizing potential of the medium $I$, the shell correction $C/Z_{2}$, and the density effect $\delta$. These relationships have been validated with both theoretical and experimental results.\cite{bischel1964,national1964studies,Sternheimer1982,Ziegler2010} Analysis of Eq. \eqref{eq:BB} shows that a charged particle traversing a given material will lose kinetic energy at a rate inversely proportional to the square of its speed, with a prompt loss of energy as it comes to rest. This sudden rise in energy loss is referred to as the \textit{Bragg peak}. In this context, the term ``Bragg peak'' differs from the definition used in materials and condensed matter studies. In radiation dosimetry, reference to the Bragg peak implies the point at which a charged particle promptly loses kinetic energy before coming to rest in a medium. As mentioned above, there is a chance that the interaction between the primary and the nuclear structure results in the dislocation of nuclear matter from the primary ion, creating fragments of ion species each with charges equal to or less than the charge of the primary ion. Bradt and Peters demonstrated that the cross-section for a nuclear interaction to induce a charge-changing spallation can be determined with,\cite{bradt1950heavy,wilson1986} \begin{equation}\label{eq:bradtandpeters} \sigma = \pi r_{0}^{2} \Big[\sqrt[3]{A_{\rm P}} + \sqrt[3]{A_{\rm T}} - \delta (A_{\rm T},A_{\rm P},E)\Big]^{2}, \end{equation} \noindent where $A_{\rm P}$ and $A_{\rm T}$ are the mass numbers of the primary ion and target medium, respectively, $\delta$ is a fitted parameter dependent on the energy of the primary ion, and $r_{0} = 1.26$~fm. These phenomena described in Eqs. \ref{eq:BB} and \ref{eq:bradtandpeters} provide valuable information about the character and properties of materials, and provide a novel method of generating a mixed field of ions using accelerator technologies. Careful observation of Eq. \eqref{eq:BB} shows that a primary ion can penetrate a material of thickness $\Delta x$, assuming a high enough incident energy $E$. We can choose $\Delta x$ so that roughly half of the primary ions have a charge-changing reaction defined by Eq. \eqref{eq:bradtandpeters}. With these constraints the location of the first charge-changing interaction will occur, on average, at approximately the midpoint (i.e., $\Delta x /2$). Most nuclear reactions are peripheral and remove only a few nucleons from the incident primary ion. The resulting secondary particles will then travel a distance equal to approximately half of the material's thickness, $\Delta x/2$, and will lose energy at a rate described by Eq. \eqref{eq:BB}. On average, observing Eq. \eqref{eq:bradtandpeters}, the probability of the lighter fragments having tertiary interactions scales with their atomic mass as $\sim A^{2/3}$, or approximately 25\%. Thus, for a primary ion of constant energy $E$, atomic mass $A_{\rm P}$, and charge $Z$, incident on a target with atomic mass $A_{\rm T}$ and thickness $\Delta x$, the emerging field will consist of mixed ion species with varied charges and masses up to those of the primary ion, $A_{\rm P}$.
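As a quick numerical illustration, Eq.~\eqref{eq:bradtandpeters} can be evaluated directly. The sketch below (Python) computes the geometric charge-changing cross section for a $^{56}$Fe primary on a carbon target; since the overlap parameter $\delta$ is a fitted, energy-dependent quantity, the constant used here is an assumed placeholder for illustration only, not a value taken from this work.

\begin{verbatim}
import math

R0_FM = 1.26  # nuclear radius parameter r0 in fm, from the text

def bradt_peters_sigma(A_P, A_T, delta=0.0):
    """Geometric charge-changing cross section of Eq. (2), in fm^2."""
    radius_term = A_P ** (1 / 3) + A_T ** (1 / 3) - delta
    return math.pi * R0_FM ** 2 * radius_term ** 2

# 56Fe primary on a carbon target; delta = 0.4 is an assumed,
# purely illustrative value for the energy-dependent fit parameter.
sigma = bradt_peters_sigma(A_P=56, A_T=12, delta=0.4)
print(f"sigma ~ {sigma:.0f} fm^2 = {sigma / 100:.2f} barn")
# -> sigma ~ 163 fm^2 = 1.63 barn
\end{verbatim}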
The stopping power described in Eq.~\ref{eq:BB} is equivalent to the energy loss per unit path length of the primary ion, or LET, thus ${\rm LET} = dE/dx$, and quantifies how much energy is lost in a material. Note that here we refer to the \emph{unrestricted stopping power}, i.e. all of the energy lost by the primary is deposited into the medium. LET is typically given in units of mega electron volts per centimeter (MeV/cm) for materials studies; however, for radiobiological quantification where the outcome varies at distances of 10$^{-6}$~m, LET is given in keV/$\mu$m. Although not uniquely related to biological response, LET is an important metric that is utilized to determine radiation tissue damage where the differences in the {\em relative biological effectiveness} (RBE) of different ions are, in part, attributed to differences in the LET of the radiation \cite{icrp60}. The RBE of a particular radiation type is the numerical expression of the relative amount of damage that a fixed dose of that type of radiation will have on biological tissues. LET remains the focus of many biological investigations and serves as the basis of radiation protection and risk assessment \cite{wilson1995issues,icrp60}. Conceptually, it is reasonable to predict that a single particle, mono-energetic ion beam can be accelerated at target blocks constructed of one or more materials. The spectrum of the emerging field can then be moderated by the amount of mass or length of material the primary and secondary nuclei travel through. The robustness of the resulting field of mixed ions and energies would be dependent on the careful selection of target material(s) and the relative contribution of each layer to the desired spectrum. Figure \ref{fig:schematic} demonstrates this concept. \begin{figure}[ht] \centering \includegraphics[width=0.95\columnwidth]{schematic} \vspace*{-2.5em} \caption{ Schematic of the moderator block designed to simulate specific space radiation spectra. A primary beam of $^{56}$Fe (left) is selectively degraded with a carefully-designed moderator block to produce a desired distribution of energies and ions (represented by the colorful lines on the right-hand side). To preferentially enhance fragmentation and energy loss, cuts (white sections on the left-hand side) are performed in the moderator block made up of different materials (depicted by different shades of gray). Before the spallation products exit the moderator block, a high-$Z$ material layer is added for scattering. The inset shows the circular beam spot, as well as the symmetric cuts made into the moderator block.} \label{fig:schematic} \end{figure} \section*{Methods}\label{sec:methods} Highly-charged heavy ions penetrate a material with an approximately straight path and gradually dissipate energy through multiple collisions with the material's electronic structure. Eq.~\ref{eq:bradtandpeters} shows that the effectiveness of a material to instigate energy loss attenuation and spallation typically increases with decreasing atomic number, with hydrogen being the most efficient. The contribution to Eq.~\eqref{eq:BB} made by the density effect correction, $\delta$, is only significant for primary particles with kinetic energies that exceed their rest mass (\emph{i.e.} $\geq 1$~GeV/n)~\cite{Fermi1945, Sternheimer1960, Sternheimer1966,Sternheimer1982,Crispin1970}. This exceeds the energy of the primary particles considered in this study, and thus the density effect does not play a significant role in material selection.
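In this regime, Eq.~\eqref{eq:BB} can therefore be evaluated in a simplified form that retains only its leading logarithmic terms. The minimal sketch below (shell and density corrections omitted; the water parameters are standard reference values used as a tissue surrogate, not values from this work) reproduces the $\sim$150~keV/$\mu$m LET of a 1~GeV/n $^{56}$Fe ion quoted later in the text.

\begin{verbatim}
import math

K = 0.307075      # MeV mol^-1 cm^2, standard stopping-power constant
ME_C2 = 0.511e6   # electron rest energy in eV
AMU = 931.494     # nucleon rest energy in MeV

def let_kev_per_um(z, e_mev_per_n, z_over_a, i_ev, rho_g_cm3):
    """Simplified Bethe stopping power; no shell/density corrections."""
    gamma = 1.0 + e_mev_per_n / AMU
    beta2 = 1.0 - 1.0 / gamma ** 2
    bracket = math.log(2.0 * ME_C2 * beta2 * gamma ** 2 / i_ev) - beta2
    dedx = K * z ** 2 * z_over_a / beta2 * bracket   # MeV cm^2 / g
    return dedx * rho_g_cm3 * 0.1                    # 1 MeV/cm = 0.1 keV/um

# 1 GeV/n 56Fe (Z = 26) in water as a tissue surrogate
print(let_kev_per_um(z=26, e_mev_per_n=1000.0,
                     z_over_a=0.555, i_ev=75.0, rho_g_cm3=1.0))
# -> ~149 keV/um, consistent with the ~150 keV/um cited for 56Fe
\end{verbatim}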
The shell correction, $C/Z_{2}$, provides a correction to the stopping power for ions $\leq$ 200 MeV/n, where their velocity is equal to or less than the orbital velocity of the lattice electrons. This correction, however, is of most consequence to ions with energies less than approximately 5~MeV/n, and thus does not play an important role in material selection. The ionization potential, $I$, provides the largest opportunity to perturb the medium's properties in order to instigate specific changes in the emerging particle spectra that more closely model the desired field. It describes how easily a target material can absorb the kinetic energy imparted from the projectile through electronic and vibrational excitation. Unlike the density and shell corrections, whose relative contribution to stopping is strongly dependent on the projectile's energy or atomic charge, the ionization potential, $I$, is characteristic of the target material only and is independent of the properties of the projectile ion. Since the contribution of $I$ to stopping is logarithmic, small changes in its value do not produce major changes in the stopping cross section. \cite{Ziegler2010,cabrera2003} This provides an opportunity to make fine adjustments to the energies of the emerging particles by making perturbations around the measured values of the mean excitation potential for materials under consideration. As shown in Eq.~\ref{eq:BB} and Eq.~\ref{eq:bradtandpeters}, spallation, and especially the energy loss spectrum for a heavy ion beam in a particular material, is strongly dependent on the beam species, energy, and the properties of the target material being traversed. Polymers and hydrogenated materials are favorable because, per unit mass, these hydrogenous materials cause higher fragmentation of high-energy heavy ions and stop more of the incident low-energy particles than other materials \cite{Zeitlin1997,Ziegler2010}. Polymers are suitable candidate materials because they have a high hydrogen content and sufficient tensile strength for machining. For example, polyethylene (CH$_{2}$), with two hydrogen atoms and one carbon atom per monomer, is ideal for the design and construction of moderator blocks. In order to generate a desired LET spectrum, the moderator geometry and thickness need to balance the effects of energy loss and fragmentation. A moderator geometry is chosen to correspond with the desired transmission of primary and progeny nuclei needed in the final spectrum. The desired fluence of particles required can be determined using data from, e.g., satellite measurements, IVA measurements during space missions, or from peer-reviewed models of the targeted spectrum \cite{ONeill2010}. As demonstrated in Figure \ref{fig:schematic}, each channel or cut represents a separate path the primary ions can travel through the block, where collisions between the primary ions and the moderator nuclei will result in projectile and target fragments and recoil products. Surviving primary ions continue with their initial velocity, losing energy by electromagnetic interactions. Because energy is lost and the LET depends on the inverse square of velocity, ions with sufficient range to fully traverse the moderator will emerge with higher LET \cite{zeitlin2005shielding}. The primary particles and the heavy projectile fragments represent the high-range LET components and the mid-range LET components of the GCR.
The lighter fragment products will provide the contribution to the mid and low-range LET components of the GCR. The diameter, length, and material of each cut are chosen to induce specific spallation and energy loss events of the primary ion. This provides a method to selectively induce specific fragmentation and energy losses that result in the emerging field having the desired distribution of emerging ions and energies. These result in an emerging particle field mostly consisting of nuclei with a LET less than approximately 200 keV/$\mu$m. For this work, 1 GeV/n iron ($^{56}$Fe) was chosen as the ion species and energy of the primary ion given that iron is the heaviest nucleus with a significant contribution to absorbed dose in the GCR environment. The LET is approximately 150 keV/$\mu$m, which has been shown to be about the peak effectiveness for producing chromosomal damage indicative of cancer outcomes in murine models \cite{cucinotta2006cancer,durante2008heavy}. This energy and ion choice also gives ranges much greater than the presumed depth of the moderator block (approximately 25 cm). This ensures that the effects of fragmentation dominate while instigating a positive dose attenuation and minimal change in the LET of primaries that survive transport through the block. The correct fluence of particles required for each layer is determined using numerical particle transport methods. Analytical prediction of the resulting particle species, their multiplicity, and corresponding energies is not possible to any high degree of accuracy. The energy loss of the primary will increase with depth, and this begins to counter the expected decrease in average LET caused by fragmentation. As the primary ranges out and velocity decreases, the LET rises sharply at depths that are small compared to the mean free path for a nuclear interaction and the effects of energy loss outweigh those of fragmentation. Moderator geometry and thickness will need to balance the effects of energy loss and fragmentation. To overcome the highly-stochastic results of primary and progeny fragmentation, we incrementally vary the geometry in each layer to quantify what material(s) and properties (e.g. length, width normal to the primary beam's path, etc.) of each layer can best produce a desired range of ions and resulting energies. The key factor in this approach is to match each layer thickness and width normal to the beam spot so that it contributes to a specific portion of the desired LET spectrum. The final moderator block is designed so that the addition of each layer will result in a final field, $F(P,E')$, such that: \begin{equation}\label{eq:blockfunctiontotal} F(P,E') = \sum_{n}g_{n}(m,v)f(p,E_{i}) = G(M,V)f(p,E_{i}), \end{equation} \noindent where $f(p,E_{i})$ is the impinging field of the primary ion $p$ with initial energy $E_{i}$. The function $g(m,v)$ describes the individual layers of material $m$ and volume $v$ (e.g. length, width, height). The individual layers are summed and $G(M,V)$ describes the final moderator block material(s), $M$, and geometry, $V$. The function $F(P,E')$ represents the resulting field of ion species, $P$, with energies, $E'$, that closely simulates the desired LET spectrum, e.g. the intravehicular LET spectrum measured on previous spaceflights. \begin{figure}[ht] \centering \includegraphics[width=1.0\columnwidth]{mcgeometry} \vspace*{-0.5em} \caption{Geometry of the Monte Carlo simulation.
Iron ions enter the target from the left and surviving primaries and progeny fragments exit from the right of the block.} \label{fig:mcmodel} \end{figure} A three-dimensional version of the moderator block is then recreated using combinatorial geometry for the Monte Carlo simulation. This includes accurate determinations of the width, length, and curvature of the various channels and cuts. The chemical composition and density specific to each of the moderator's layers also have to be specified for determining the material properties, such as atomic structure, ionization potential, electron shell configuration, etc. The numerical simulations are then performed using multi-core, high performance computers (HPCs) and the particle transport simulation software PHITS \cite{Niita2006}, in order to model particles traversing through thick absorbers and to approximate the desired LET spectrum. The use of HPC (i.e. \emph{supercomputers}) allows the fast modeling of complex nuclear phenomena that would typically require significant time and computer resources. The parallelization of our numerical models allows calculations to be distributed across multiple CPU cores, significantly increasing the number of sample sets. This greater computational power enables our models to be computed faster, since more operations can be performed per time unit, and, more importantly, decreases the statistical errors inherent in Monte Carlo calculations. For example, typical results were run on 5000-35,000 CPU cores over a 6-12 hour period. These runs produced data sets greater than approximately 2 terabytes (TB) and would equivalently take 2-3 years on a typical computer. PHITS features an event generator mode that produces a fully-correlated transport for all particles with energies up to $200$~GeV/n \cite{Niita2006}. The software calculates the average energy loss and stopping power by using the charge density of the material and determines the momentum of the primary particle by tracking the fluctuations of energy loss and angular deviation. PHITS utilizes the SPAR code for simulating ionization processes of the charged particles and the average stopping power $dE/dx$ \cite{armstrong1973stopping,armstrong1973spar}. PHITS has been previously compared to experimental cross-section data using similar energies and materials. Zeitlin {\em et al.}~\cite{zeitlin2007fragmentation,zeitlin2008fragmentation} showed that, for large detector acceptance angles, there is good agreement between experimental beamline measurements of fragmentation cross sections and simulated outcomes that utilized PHITS to generate the expected progeny fragments and energy loss. A full description of the capabilities of PHITS and the various nuclear models utilized in the code can be found in Niita \textit{et al.}~\cite{Niita2006}. An example two-dimensional schematic of the moderator block model used in the numerical simulation is shown in Fig.~\ref{fig:mcmodel}. The $1$~GeV/n $^{56}$Fe primary beam is accelerated from the left, propagated through the moderator block, and emerges along with progeny fragments generated during spallation reactions with the block materials. The field continues to the right where a scoring plane is located $1$~meter from the moderator block face. Particle species, energy, and directional cosines are recorded for analysis and LET calculations. The LET values (in tissue) are then calculated using the stopping power formula described in Eq.~\eqref{eq:BB}.
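A minimal sketch of this post-processing step is given below; the record layout (charge, energy per nucleon) is an assumed simplification of the actual PHITS tally output, and the supplied LET function would be an implementation of Eq.~\eqref{eq:BB} such as the one sketched earlier.

\begin{verbatim}
import numpy as np

def let_spectrum(records, let_of, n_bins=100, let_max=200.0):
    """Bin scored charged particles into a flux-vs-LET histogram.

    records : iterable of (Z, E_MeV_per_n) tuples from the scoring plane
    let_of  : callable mapping (Z, E_MeV_per_n) -> LET in keV/um
    """
    # Neutral particles (Z = 0) do not enter the final LET spectrum.
    lets = np.array([let_of(z, e) for z, e in records if z > 0])
    edges = np.logspace(np.log10(0.1), np.log10(let_max), n_bins + 1)
    counts, _ = np.histogram(lets, bins=edges)
    flux = counts / np.diff(edges)   # counts per unit LET bin width
    return edges, flux
\end{verbatim}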
All particles are scored, including electrons, pions, neutrons, etc. However, only charged particles are considered for the final LET spectrum. The medium traversed by the particle field emerging from the moderator is assumed to be open air. The $1$~meter distance between the back plane of the moderator and the scoring plane allows air attenuation of low-energy particles. Additionally, this space simulates experimental moderator placement with hardware, tissue, or biological samples located downstream on the beamline. Systematic errors are attributed to the many approximations required for a three-dimensional particle-transport Monte Carlo simulation and are, unfortunately, out of our control. The bootstrap method was utilized to determine the statistical stability of the results and minimize systematic biases in the outcomes \cite{athreya1987}. The moderator block design was used to model IVA LET for various space exploration missions. The IVA LET spectrum measured on the U.S. Space Shuttle during the Mir Space Station Expedition 18-19 (1995) was chosen for initial validation of our approach \cite{Badhwar1998}. This particular mission was selected after identifying a rich selection of publicly accessible LET spectra from the mission, with available measurements spanning days, weeks, and, in some cases, months. The numerical model was utilized to simulate replication of mission LET using the moderator block design. Subsequent prototype testing was performed using numerically determined model geometry. Prototype testing was performed at the NASA Space Radiation Laboratory (NSRL) at Brookhaven National Laboratory in Brookhaven, New York. A $1$~GeV/n iron ($^{56}$Fe) beam was accelerated at a moderator block with beamline measurements taken using plastic scintillator detectors placed $1$~meter down the beamline to replicate model conditions. \begin{figure}[!ht] \centering\includegraphics[width=1.0\columnwidth]{shuttle1} \caption{Intravehicular particle flux versus the LET (keV/micron) field from the Shuttle-Mir 18-19 Expedition measured by Badhwar {\em et al.}~\cite{Badhwar1998} (dashed line), as well as the results of our moderator block model simulation (blue solid line). Note the close approximation of simulated results to the real-world curve of recorded particle flux. In addition, four single-ion exposures from current radiobiological experiments are shown (large colored symbols) to highlight the lack of breadth of energies in current radiobiological damage studies.\cite{Slaba2015,norbury2016galactic}} \label{fig:mir} \end{figure} For this initial test case, every effort was made to utilize a single polymer material that is commercially available. Pragmatic decisions motivated material choice and subsequent geometry: it should be practical for a moderator block to be crafted with high precision in the machine shop of a typical accelerator laboratory, allowing for replicated use at any heavy-ion accelerator. Thus, polymers or soft materials were given priority because of sufficient tensile strength and relative ease of machining. The results from our numerical model, along with beamline measurements from the prototype block, were compared to measurements of the U.S. Space Shuttle-Mir Expedition 18-19 IVA LET.
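The bootstrap assessment mentioned above can be sketched as follows: the scored LET values are resampled with replacement, the spectrum is rebuilt for each resample, and the per-bin spread serves as the statistical error estimate (a minimal sketch; the actual resampling of the PHITS output may differ in detail).

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_spectrum_error(lets, edges, n_resamples=1000):
    """Per-bin mean and standard error of a binned LET spectrum."""
    lets = np.asarray(lets)
    resampled = np.empty((n_resamples, len(edges) - 1))
    for i in range(n_resamples):
        sample = rng.choice(lets, size=lets.size, replace=True)
        resampled[i], _ = np.histogram(sample, bins=edges)
    return resampled.mean(axis=0), resampled.std(axis=0)
\end{verbatim}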
Additional test cases of numerical models alone, without prototype correlates, were performed to include model replication of the IVA spectrum measured onboard the International Space Station (ISS) and NASA’s Orion Multi-Purpose Crew Vehicle (MPCV) during Exploration Flight Test 1 (EFT-1) in 2014. \section*{Results} The Mir Space Station had an orbital inclination of $51.6^{\circ}$ and a flight altitude of approximately $200$ nautical miles ($370$~km). Beginning in March of 1995, NASA astronauts flew several long-duration missions on the Mir Space Station, returning to earth via the U.S. Space Shuttle. Badhwar {\em et al.}~\cite{Badhwar1998} measured the integrated LET spectrum that was directly attributed to GCR ions and their spallation progeny using tissue equivalent proportional counters (TEPC) and plastic nuclear track detectors located at six different areas of the vehicle. Contributions from neutrons and non-GCR particles (e.g., Van Allen Belt ions) were not considered in model calculations in order to closely replicate real-world measured results. \begin{figure}[ht] \centering \includegraphics[width=1.0\columnwidth]{zdist} \caption{ Comparison of predicted charge distributions. The relative abundance of intravehicular ions in the exiting field created by the moderator block (red squares) are plotted against the predicted IVA environment as published by Durante {\em et al.}~\cite{durante2008heavy} (blue circles) as a function of the atomic number $Z$. Both distributions have been normalized to the most prolific ion, hydrogen ($Z=1$).} \label{fig:zdist} \end{figure} Figure \ref{fig:mir} shows the LET (per day) measured during the U.S. Space Shuttle-Mir Expedition 18-19 \cite{Badhwar1998} (black dashed line). The blue solid line represents the results of particle-transport simulations using the moderator block design developed in this work. A monoenergetic $1$~GeV/n iron ($^{56}$Fe) beam passes through the moderator block to produce the simulated particle flux. The distribution of LET obtained from the beamline simulation demonstrates close approximation for particles having LET between $10$keV/$\mu$m and $90$keV/$\mu$m and a reasonable fit for LET up to $185$keV/$\mu$m. The output is appropriately scaled to replicate the average daily LET rate as measured during the Expedition. Note that the simulated target moderator block reproduces the spectrum over approximately five orders of magnitude. For comparison, Figure \ref{fig:mir} also identifies the individual mono-energetic ion beams (large symbols; see caption) currently used for radiobiological experiments \cite{norbury2016galactic,Slaba2015}. While the mono-energetic beams fall within the spectra of measured IVA LET, these individual ion beams do not capture the richness and diversity of the measured real-world particle flux. We further conducted a more detailed analysis of the relative accuracy of the charge distribution resulting from the moderator block calculations compared to the predicted IVA environment as described by Durante {\em et al.}~\cite{durante2008heavy}, depicted in Figure \ref{fig:zdist}. There is a good approximation of lower-Z ions, particularly hydrogen ($Z=1$) and helium ($Z=2$), while a reasonable estimate is demonstrated for ion species $4 \leq Z \leq 26$. Figure \ref{fig:validation} shows the results from beamline measurements performed at the NSRL utilizing a prototype moderator block, compared to real-world LET measurements from U.S. Space Shuttle-Mir Expedition 18-19.
Spectra measured after passing through the moderator block demonstrate replication of modeled outputs as well as close approximation of real-world LET measurements from approximately $18$keV/$\mu$m to $185$keV/$\mu$m. As demonstrated in both Figure \ref{fig:mir} and Figure \ref{fig:validation}, there are some discrepancies between simulated block outputs, beamline measurements, and real-world data in the lower LET distributions. It is unclear how well the target design reproduces the LET distribution for particles with LETs of approximately $5$keV/$\mu$m - $18$keV/$\mu$m. It is likely that these discrepancies may be resolved by adjusting the geometry or composition of the proposed target moderators. \begin{figure}[ht] \centering \includegraphics[width=1.0\columnwidth]{shuttle} \caption{Comparison of model with beamline measurements. The results of our moderator block model simulation (blue solid line) shown in Figure~\ref{fig:mir} are compared to beamline measurements of a prototype moderator block that replicates the numerically determined geometry (red solid line). The measured field closely matches both the measured and numerically predicted spectrum for LETs between $18$keV/$\mu$m and $185$keV/$\mu$m.} \label{fig:validation} \end{figure} The ISS maintains an orbital inclination of $51.6^{\circ}$ and an altitude of approximately $400$km. Figure \ref{fig:ISS-LET} shows the measured IVA LET spectrum from the ISS compared to numerical results from the moderator block design. Real-world LET measurements were taken using a TimePix hybrid pixel detector \cite{pinsky2008,stoffle2015,hoang2012let}. Note that the original LET measurements were normalized per second; here we have re-normalized to LET-per-day for consistency and for display of the estimated LET rate in units that are more relevant to radiation risk estimation for long-duration missions. The red shaded area indicates an uncertainty in the low-energy measurements (LET $\leq$ 40keV/$\mu$m) of the TimePix detector. This is most likely due to secondary electrons stopping within the instrument's silicon detector, resulting in an overestimation of their LET values by approximately 10\%. The measured LET spectrum in the ISS IVA environment includes all charged particles (electrons, pions, heavy charged particles, etc.). However, as with the Badhwar {\em et al.}~measurements for the U.S. Space Shuttle-Mir data presented above \cite{Badhwar1998}, onboard measurements exclude neutrons. The model spectrum approximates the real-world measured energies with reasonable accuracy for continuous LET values of up to 180keV/$\mu$m over approximately seven orders of magnitude. The contribution of particles with low LET ($\leq$ 40 keV/$\mu$m) falls off much more slowly than what was seen for the U.S. Space Shuttle-Mir Expedition 18-19 measurements. To replicate this spectrum in the moderator block design would require complex geometry, including layers with thicknesses much greater than anticipated (e.g., larger than $50$cm) that could generate the low-$Z$, high-energy particles needed to experimentally shape this portion of the LET distribution. The sharp peak in the modeled LET spectrum seen at 90keV/$\mu$m results from an overabundance of ions with charges of $12 \leq Z \leq 14$ generated in the thicker portion of the moderator block. Modifications to the internal block geometry and material composition could be made to better fit dose spectra observed on the ISS without the demonstrated overabundance peaks seen in these results.
\begin{figure}[ht] \centering \includegraphics[width=1.0\columnwidth]{iss} \caption{Measured intravehicular LET field (per day) as measured onboard the ISS with the TimePix dosimeter (dashed line) compared to our moderator block model (blue solid line). The light red shading indicates uncertainty in the low-LET measurements of the TimePix dosimeter. The resulting spectrum closely replicates the real-world measured energies for continuous LET values of up to 180keV/$\mu$m over approximately seven orders of magnitude.} \label{fig:ISS-LET} \end{figure} Measured real-world IVA spectra from NASA's EFT-1 were recently made publicly available and provided an opportunity to illustrate the ability to fit the model for replication of an IVA LET spectrum from a third space vehicle \cite{kroupa2015semiconductor}. The MPCV had a flight duration of only four hours; even so, EFT-1 data are unique as the vehicle obtained a high apogee on the second orbit that included traversal through the radiation-dense Van Allen Belts and briefly into the interplanetary radiation environment. TimePix-based radiation detectors were operational shortly after liftoff and collected data for the duration of the mission \cite{Bahadori2015}. \begin{figure}[ht] \centering \includegraphics[width=1.0\columnwidth]{eft1} \caption{LET field measured during the EFT-1 flight of NASA's Orion Multi-Purpose Crew Vehicle (dashed line) compared to moderator block numerical outputs. The EFT-1 mission lasted approximately four hours and included two orbits with a peak altitude of approximately $5800$km. The LET was measured for the duration of the entire flight and averaged to LET per day for this experiment. The exposure includes both interplanetary and Van Allen belt radiation fields. The light red shading indicates uncertainty in the low-LET measurements of the TimePix dosimeter. Modeled results are demonstrated by the blue solid line.} \label{fig:eft1} \end{figure} EFT-1 flight data are shown in Fig.~\ref{fig:eft1} along with the results of the moderator block model modified to replicate the unique EFT-1 spectrum. While modeled results fit reasonably well compared to the flight measurements, there are visible fluctuations in the $30$-$80$keV/$\mu$m range that weakly correlate with a smaller fluctuation found from $30$-$50$keV/$\mu$m in the measured data. It is not yet clear whether these are indicators of the true nature of the measured LET spectrum, or are simple statistical fluctuations resulting from the smaller measurement period of the EFT-1 flight. Moderator layers made of polymers as thick as $100$cm are required to produce this LET spectrum. The sharp peaks at approximately $65$keV/$\mu$m and $73$keV/$\mu$m in Figure \ref{fig:eft1} are due to an overabundance of ions with charge $Z \leq 6$ in block layers of $90$cm and thicker. We note that effort was made to use few hydrogen-rich materials; the presence of these peaks, suggesting ion overabundance, may be an indication that other low-$Z$ materials and metamaterials should be considered in future studies. \section*{Discussion} \label{sec:discussion} Our initial results indicate that the moderator block approach is capable of generating a complex mix of nuclei and energies that appear to more accurately simulate the space radiation environment than previous terrestrial radiation analogs. Model results show qualitative agreement with beamline measurements found in peer-reviewed literature when adapted to a geometry and environment representative of the experiment setup.
This approach could enhance ground-based radiation studies by providing a more accurate recreation of the space radiation environment and allowing for a continuous generation of ionizing radiation that matches the LET spectrum and dose-rate of GCR for experimental purposes. Further, the model can be adapted to multiple scenarios, as demonstrated by simulation of varied intravehicular environments of the U.S. Space Shuttle, ISS, and MPCV vehicles. This approach could be additionally utilized to simulate the external GCR field, a planetary surface spectrum (e.g., Mars), or the local radiation environment of orbiting satellites, allowing for the characterization of multiple radiation environments that may be encountered during future space exploration. We emphasize that our model can generate both thermal and fast spallation neutron products, though these data were not included in the results presented here for more transparent comparison with measured real-world flight data. In future work, more extensive measurement of the real-world IVA neutron spectrum could provide much-needed insight regarding the neutron contribution to the IVA particle spectrum and allow for higher fidelity comparisons between real-world data and model capability. An important outcome of the results discussed here is the demonstration of validity for use of Monte Carlo numerical techniques in determining complex physical outcomes using high-performance, multi-core computers. The results presented above demonstrate computational alternatives to experiments on complex dynamics that are difficult to mimic in a laboratory setting. The recent advances in multi-core computation techniques allow for decreasing statistical errors by drastically increasing the number of sample sets. The simulation results reported for each test case required massive computation resources. Each model (U.S. Space Shuttle-Mir, ISS, EFT-1) utilized the equivalent of 135,000 CPU hours (equivalent to approximately 2.5 years of computation on a typical computer) and generated 2.5 TB of data using 5000 or more CPU cores. With the use of supercomputers, these computations were performed in roughly 10 hours. Remarkably, recent updates to the high-performance computing cluster utilized reduce the same computational time to approximately 60 minutes. The application of high-performance computational techniques allows for the adaptation of the moderator block design for high-fidelity radiation studies of materials and human health outcomes. The ability to better simulate the space radiation environment in terrestrial research efforts has the potential to substantially enhance understanding of the space radiation environment, allowing for rapid advances in the understanding of human radiobiological health during long-duration spaceflight. Simultaneously, such capability could drastically reduce costs and risks associated with space radiobiological research efforts by providing a high-fidelity terrestrial analog. There is a pressing need for better understanding of the true health risk imposed by the space radiation environment on future human exploration missions. The approach presented here has the potential to allow rapid improvements to radiobiological studies aimed at addressing these concerns, and can additionally be generalized to other radiation spectra with wide applicability for general radiation studies of unique and extraterrestrial environments.
Such a capability would fill a much-needed niche not just for radiobiological research, but also for the development of shielding, electronics, and other materials for the space environment. \section*{Acknowledgments} \noindent J.C.C.~would like to thank Nicholas Stoffle for the discussions about the ISS and EFT-1 LET measurement data, James Ziegler for reviewing the concept and theory, and Serena Aunon-Chancellor for the many proof-reads and valuable input. \newline \newline \noindent H.G.K.~acknowledges support from the NSF (Grant No.~DMR-1151387). Part of the work of H.G.K.~and J.C.C.~has been based upon work supported by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via Interagency Umbrella Agreement IA1-1198. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon.\newline \newline \noindent The authors acknowledge the Texas Advanced Computing Center (TACC) at The University of Texas at Austin for providing HPC resources that have contributed to the research results reported within this paper. \section*{Author contributions statement} \noindent J.C.C.~conceived, conducted, and analyzed the experiment. J.C.C., S.B.G., and H.G.K.~provided statistical analysis of the results. R.S.B, J.R.F.~and K.A.C.~contributed to the radiobiology discussion and interpretations. All authors reviewed the manuscript. \newpage
\section{Introduction} Machine Translation (MT) systems have been shown to exhibit severely degraded performance when required to translate out-of-domain or noisy data \citep{luong15, sakaguchi16, belinkov17}. This is particularly pronounced when systems trained on clean, formalized parallel data such as Europarl \citep{europarl} are tasked with translation of unedited, human-generated text, as is common in domains such as social media, where accurate translation is becoming of widespread relevance \citep{mtnt}. Improving the robustness of MT systems to naturally occurring noise presents an important and interesting task. Recent work on MT robustness \citep{belinkov17} has demonstrated the need to build or adapt systems that are resilient to such noise. We approach the problem of adapting to noisy data aiming to answer two primary research questions: \begin{enumerate} \item Can we artificially synthesize the types of noise common to social media text in otherwise clean data? \item Are we able to improve the performance of vanilla MT systems on noisy data by leveraging artificially generated noise? \end{enumerate} In this work we present two primary methods of synthesizing natural noise, in accordance with the types of noise identified in prior work as naturally occurring in internet and social media based text \citep{eisenstein13, mtnt}. Specifically, we introduce a \textbf{synthetic noise induction} model which heuristically introduces types of noise unique to social media text and \textbf{labeled back translation} \cite{bt}, a data-driven method to emulate target noise. We present a series of experiments based on the Machine Translation of Noisy Text (MTNT) data set \citep{mtnt} through which we demonstrate improved resilience of a vanilla MT system by adaptation using artificially noised data. \section{Related Work} \citet{szegedy13} demonstrate the fragility of neural networks to noisy input. This fragility has been shown to extend to MT systems \citep{belinkov17,khayrallah2018impact} where both artificial and natural noise are shown to negatively affect performance. Human-generated text on the internet and social media is a particularly rich source of natural noise \citep{eisenstein13, baldwin15} which causes pronounced problems for MT \citep{mtnt}. Robustness to noise in MT can be treated as a domain adaptation problem \citep{koehn17} and several attempts have been made to handle noise from this perspective. Notable approaches \citep{li10, axelrod11} include training on varying amounts of data from the target domain. \citet{luong15} suggest the use of fine-tuning on varying amounts of target domain data, and \citet{micelibarone17} note a logarithmic relationship between the amount of data used in fine-tuning and the relative success of MT models. Other approaches to domain adaptation include weighting of domains in the system objective function \citep{wang17} and specifically curated datasets for adaptation \citep{blodgett17}. \citet{kobus16} introduce a method of domain tagging to assist neural models in differentiating domains. Whilst the above approaches have shown success in specifically adapting across domains, we contend that adaptation to noise is a nuanced task and treating the problem as a simple domain adaptation task may fail to fully account for the varied types of noise that can occur in internet and social media text.
Experiments that specifically handle noise include text normalization approaches \citep{baldwin15} and (most relevant to our work) the artificial induction of noise in otherwise clean data \citep{sperber17, belinkov17}. \section{Data} To date, work in the adaptation of MT to natural noise has been restricted by a lack of available parallel data. \citet{mtnt} recently introduced a new data set of noisy social media content and demonstrate the success of fine-tuning, which we leverage in the current work. The dataset consists of naturally noisy data from social media sources in both English-French and English-Japanese pairs. In our experimentation we utilize the subset of the data for English to French which contains data scraped from Reddit\footnote{\url{www.reddit.com}}. The data set contains training, validation and test data. The training data is used in fine-tuning of our model as outlined below. All results are reported on the MTNT test set for French-English. We additionally use other datasets including Europarl (EP)~\cite{europarl} and TED talks (TED) \cite{ted} for training our models as described in \S \ref{exps}. \begin{table}[h] \centering{} \small \begin{tabular}{c c c} \toprule Training Data & \# Sentences & Pruned Size\\ \midrule Europarl (EP) & 2,007,723 & 1,859,898\\ Ted talk (TED) & 192,304 & 181,582\\ Noisy Text (MTNT) & 19,161 & 18,112\\ \bottomrule \end{tabular} \caption{Statistics about different datasets used in our experiments. We prune each dataset to retain sentences with length $\leq$ 50.} \end{table} \section{Baseline Model} Our baseline MT model consists of a bidirectional Long Short-Term Memory (LSTM) encoder-decoder with two layers. The hidden and embedding sizes are set to 256 and 512, respectively. We also employ weight-tying~\cite{presswolf} between the embedding layer and projection layer of the decoder. For expediency and convenience of experimentation we have chosen to deploy a smaller, faster variant of the model used in \citet{mtnt}, which allows us to provide comparative results across a variety of settings. Other model parameters reflect the implementation outlined in \citet{mtnt}. In all experimental settings we employ Byte-Pair Encoding (BPE) \cite{bpe} using SentencePiece\footnote{\url{https://github.com/google/sentencepiece}}. \begin{figure*}[t] \begin{center} \includegraphics[width=1.0\textwidth]{tbt.jpg} \end{center} \caption{Pipeline for injecting noise through back translation. For demonstration purposes we show the process on an English sentence, but in experiments we use French sentences as input.} \label{fig:bt} \end{figure*} \section{Experimental Approaches \label{exps}} We propose two primary approaches to increasing the resilience of our baseline model to the MTNT data, outlined as follows: \subsection{Synthetic Noise Induction (SNI)} For this method, we inject artificial noise into the clean data according to the distribution of types of noise in MTNT specified in \citet{mtnt}. For every token we choose to introduce the different types of noise with some probability on both French and English sides in 100k sentences of EP. Specifically, we fix the probabilities of error types as follows: spelling (0.04), profanity (0.007), grammar (0.015) and emoticons (0.002). To simulate spelling errors, we randomly add or drop a character in a given word. For grammar errors and profanity, we randomly select and insert a stop word or an expletive and its translation on either side.
Similarly for emoticons, we randomly select an emoticon and insert it on both sides. Algorithm \ref{alg:arti_noise} elaborates on this procedure. \begin{small} \begin{algorithm} \caption{Synthetic Noise Induction} \label{alg:arti_noise} \begin{algorithmic} \State \textbf{Inputs:}{$\left[(p_1,\eta_1), (p_2,\eta_2) \cdots (p_k, \eta_k) \right]$}\Comment{{\small pairs of noise probabilities and noise functions}} \Procedure{Add\_Noise}{$fr, en$} \State $ o = 1 - \sum_i p_i $ \Comment{{\small probability of keeping original }} \State $ D = [o, p_1, p_2, \cdots, p_k] $ \Comment{{ \small Discrete densities}} \State $ j \gets \textsc{Select\_Index}(\textsc{Draw}(D))$ \Comment{{\small noise type}} \If{$j \ne 0$} \Comment{{\small not original}} \State $ (fr,en) \gets \eta_j(fr,en) $ \Comment{{\small add noise to words}} \EndIf \State \textbf{return} $fr,en$ \EndProcedure \end{algorithmic} \end{algorithm} \end{small} \subsection{Noise Generation Through Back-Translation \label{bt-techs}} We further propose two experimental methods to inject noise into clean data using the back-translation technique~\cite{bt}. \subsubsection{Un-tagged Back-Translation (UBT)} We first train both our baseline model for fr-en and an en-fr model using TED and MTNT. We subsequently take 100k French sentences from EP and generate a noisy version thereof by passing them sequentially through the trained models as shown in Figure \ref{fig:bt}. The resulting translation will be inherently noisy as a result of imperfect translation by the intervening MT system. \subsubsection{Tagged Back-Translation (TBT)} The intuition behind this method is to generate noise in clean data whilst leveraging the particular style of the intermediate corpus. Both models are trained using TED and MTNT as in the preceding setting, save that we additionally append a tag in front of every sentence while training to indicate the origin data set of each sentence \cite{kobus16}. For generating the noisy version of 100k French sentences from EP, we append the MTNT tag in front of the sentences before passing them through the pipeline shown in Figure \ref{fig:bt}. \section{Results} We present quantitative results of our experiments in Table \ref{tbl:results}. Of specific note is the apparent correlation between the amount of in-domain training data and the resulting BLEU score. The tagged back-translation technique produces the most pronounced increase in BLEU score of +6.07 points $(14.42 \longrightarrow 20.49)$. This represents a particularly significant result given that we do not fine-tune the baseline model on in-domain data, attributing this gain to the quality of the noise generated.
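For concreteness, the round-trip pipeline of Figure \ref{fig:bt} underlying both UBT and TBT can be sketched as below. The two model objects and their \texttt{translate} interface are hypothetical placeholders for the trained fr-en and en-fr systems, and applying the tag before each translation step is an illustrative choice.

\begin{verbatim}
def make_noisy(fr_sentences, fr_en_model, en_fr_model, tag=None):
    """Round-trip clean French through two NMT models to inject noise.

    With tag="<MTNT>" (tagged back-translation), a domain tag is
    prepended so the models emulate the style of the MTNT corpus;
    with tag=None the procedure reduces to un-tagged back-translation.
    """
    noisy = []
    for src in fr_sentences:
        if tag is not None:
            src = f"{tag} {src}"
        en = fr_en_model.translate(src)           # French -> English
        if tag is not None:
            en = f"{tag} {en}"
        noisy.append(en_fr_model.translate(en))   # English -> noisy French
    return noisy
\end{verbatim}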
\begin{table}[t] \centering{} \small \begin{tabular}{c c c } \toprule & Training data & BLEU \\ \midrule \multicolumn{3}{c}{{\it Baselines}}\\ \midrule Baseline & Europarl (EP) & 14.42 \\ + FT w/ & MTNT-train-10k & 22.49 \\ + FT w/ & MTNT-train-20k & 23.74 \\ \midrule Baseline FT w/ & TED-100k & 10.92 \\ + FT w/ & MTNT-train-20k & 24.10 \\ \midrule \multicolumn{3}{c}{{\it Synthetic Noise Induction}}\\ \midrule Baseline FT w/ & EP-100k-SNI & 13.53\\ + FT w/ & MTNT-train-10k & 22.67 \\ + FT w/ & MTNT-train-20k & 25.05 \\ \midrule \multicolumn{3}{c}{{\it Un-tagged Back Translation}}\\ \midrule Baseline FT w/ & EP-100k-UBT & 18.71\\ + FT w/ & MTNT-train-10k & 22.75 \\ + FT w/ & MTNT-train-20k & 24.84 \\ \midrule \multicolumn{3}{c}{{\it Tagged Back Translation}}\\ \midrule Baseline FT w/ & EP-100k-TBT & 20.49\\ + FT w/ & MTNT-train-10k & \textbf{23.89} \\ + FT w/ & MTNT-train-20k & \textbf{25.75} \\ \bottomrule \end{tabular} \caption{BLEU scores are reported on the MTNT test set. The MTNT valid set is used for fine-tuning in all the experiments. + FT denotes fine-tuning of the Baseline model of that particular sub-table, i.e. continued training for 30 epochs or until convergence. } \label{tbl:results} \end{table} \begin{table*}[t] \centering{} \scriptsize \begin{tabular}{l l} \toprule Systems & Output \\ \midrule \textbf{REFERENCE} & $>$ And yes, I am an idiot with a telephone in usb-c... F*** that's annoying, I had to invest in new cables when I changed phones. \\\hline \textbf{Baseline (trained on EP)} & And yes, I am an eelot with a phone in the factory ... P***** to do so, I have invested in new words when I have changed telephone. \\\hline \textbf{FT w/ MTNT-train-20k} & $>$ And yes, I am an idiot with a phone in Ub-c. Sh**, it's annoying that, I have to invest in new cable when I changed a phone. \\\hline \textbf{FT w/ EP-100k-TBT} & - And yes, I'm an idiot with a phone in the factory... Puard is annoying that, I have to invest in new cables when I changed phone. \\\hline \textbf{FT w/ EP-100k-TBT} & $>$ And yes, I am an idiot with a phone in USb-c... Sh** is annoying that, I have to invest in new cables when I changed a phone. \\\hspace{0.2cm}+ \textbf{MTNT-train-20k}&\\ \bottomrule \end{tabular} \caption{Output comparison of decoded sentences across different models. Profane words are censored.} \label{ex1} \end{table*} \begin{figure} \centering{} \includegraphics[width=0.5\textwidth]{noise_impact_simple.png} \caption{The impact of varying the amount of Synthetic Noise Induction on BLEU. } \label{fig:arti_noise} \end{figure} The results for all our proposed experimental methods further imply that out-of-domain clean data can be leveraged to make existing MT models robust to noisy data. However, simply using clean data is not that beneficial, as can be seen from the experiment involving \textit{FT Baseline w/ TED-100k}. We further present analysis of both methods introduced above. Figure \ref{fig:arti_noise} illustrates the relative effect of varying the level of SNI on the BLEU score as evaluated on the newsdiscuss2015\footnote{\url{http://www.statmt.org/wmt15/test.tgz}} dev set, which is a clean dataset. From this we note that the relationship between the amount of noise and the effect on BLEU score appears to be linear. We also note that the most negative effect is obtained by including profanity. Our current approach involves inserting expletives, spelling and grammatical errors at random positions in a given sentence.
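A minimal sketch of this per-token procedure (cf. Algorithm \ref{alg:arti_noise}) is given below; it is shown for one side of the parallel pair only, and the lexicons are illustrative placeholders rather than the lists used in our experiments.

\begin{verbatim}
import random

# Illustrative placeholder lexicons; the actual lists are not reproduced.
STOP_WORDS = ["the", "a", "of"]
EXPLETIVES = ["darn"]
EMOTICONS = [":)", ":("]

def spelling_noise(word):
    """Randomly add or drop a single character of a word."""
    i = random.randrange(len(word))
    if random.random() < 0.5 and len(word) > 1:
        return word[:i] + word[i + 1:]   # drop a character
    return word[:i] + random.choice("abcdefghijklmnopqrstuvwxyz") + word[i:]

def add_noise(tokens, p_spell=0.04, p_prof=0.007, p_gram=0.015, p_emo=0.002):
    """Per-token noise induction with the probabilities fixed in the text."""
    out = []
    for tok in tokens:
        r = random.random()
        if r < p_spell:
            tok = spelling_noise(tok)
        elif r < p_spell + p_prof:
            out.append(random.choice(EXPLETIVES))
        elif r < p_spell + p_prof + p_gram:
            out.append(random.choice(STOP_WORDS))
        elif r < p_spell + p_prof + p_gram + p_emo:
            out.append(random.choice(EMOTICONS))
        out.append(tok)
    return out
\end{verbatim}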
However, we note that our approach might under-represent the nuanced linguistic usage of expletives in natural text, which may result in its above-mentioned effect on accuracy. Table \ref{ex1} shows the decoded output produced by different models. We find that the output produced by our best model is reasonably successful at imitating the language and style of the reference. The output of \textit{Baseline + FT w/ EP-100k-TBT} is far superior to that of \textit{Baseline}, which highlights the quality of the back-translated noisy EP data obtained through our tagging method. \begin{table}[t] \centering{} \scriptsize \begin{tabular}{l l} \toprule Systems & Output \\ \midrule \textbf{REFERENCE} & Voluntary or not because politicians are *very*\\&friendly with large businesses. \\\hline \textbf{FT w/ EP-100k-TBT} & Whether it's voluntarily, or invoiseally because\\&the fonts are *èsn* friends with the big companies. \\\hline \textbf{FT w/ EP-100k-TBT} & Whether it's voluntarily, or invokes because the \\\hspace{0.2cm}+ \textbf{MTNT-train-10k}&politics are *rès* friends with big companies. \\\hline \textbf{FT w/ EP-100k-TBT} & Whether it's voluntarily, or invisible because the\\\hspace{0.2cm}+ \textbf{MTNT-train-20k}&politics are *very* friends with big companies. \\ \bottomrule \end{tabular} \caption{Output comparison of decoded sentences for different amounts of supervision.} \label{ex2} \end{table} We also consider the effect of varying the amount of supervision which is added for fine-tuning the model. From Table~\ref{ex2} we note that the \textit{Baseline + FT w/ EP-100k-TBT} model already produces a reasonable translation for the input sentence. However, if we further fine-tune the model using only 10k MTNT data, we note that the model still struggles with generation of *very*. This error dissipates if we use 20k MTNT data for fine-tuning. These represent small nuances which the model learns to capture with increasing supervision. To better understand the performance difference between UBT and TBT, we evaluate the noised EP data. Figure \ref{fig:bt} shows an example where we can clearly see that the style of translation obtained from TBT is very informal as opposed to the output generated by UBT. Both outputs are noisy and different from the input, but since the TBT method enforces the style of MTNT, the resulting output is perceptibly closer in style to the MTNT equivalent. This difference results in a gain of 0.9 BLEU for TBT over UBT. \section{Conclusion} This paper introduced two methods of improving the resilience of vanilla MT systems to noise occurring in internet and social media text: a method of emulating specific types of noise and the use of back-translation to create artificial noise. Both of these methods are shown to increase system accuracy when used in fine-tuning, without the need to train a new system or to collect large amounts of naturally noisy parallel data. \section{Acknowledgements} The authors would like to thank the AWS Educate program for donating computational GPU resources used in this work. \bibliographystyle{acl_natbib}
\section{Introduction} The cosmological constant ($\Lambda$) has been an outstanding problem for the past seventy-five years[1,2], ever since Einstein introduced it in the field equations to avoid an expanding universe. One of the great developments of the 1980's was the creation of a standard model of cosmology based on the ideas arising from particle physics. This model involved the following trilogy of ideas: (i) $\Omega$=1, (ii) $\Lambda=0$ and (iii) $\Omega_{matter}\approx\Omega_{{CDM}^{WIMP}_{axion}} \geq0.9$. But such a model of the 1980's is no more[1]. In fact, a matter density insufficient to result in a flat universe ($\Omega=1$) suggests a positive $\Lambda$. One would now prefer either (1) $\Omega\neq1$, $\Lambda=0$, $\Omega_{matter}\approx\Omega_{{CDM}^{WIMP}_{axion}}\approx0.1-0.3$ or (2) $\Omega\equiv1$, $\Lambda\neq0$, $\Omega_{matter}\approx\Omega_{{CDM}^{WIMP}_{axion}}\approx0.1-0.3$. A small non-vanishing $\Lambda$ is required to make two independent observations, the Hubble constant ($H_o$, which sets the expansion rate of the present universe) and the present age of the universe $(t_o)$, consistent with each other[3]. This has forced us to critically re-examine the simplest and most appealing cosmological model: a flat universe with $\Lambda=0$[3,4]. A flat universe with $\Omega_{m}\equiv0.3$ and $\Omega_{m} + \Omega_{\Lambda}=1$ is most preferable, and a matter dominated flat universe with $\Lambda=0$ is ruled out[5].\\ Various inflationary models suggest that the scale factor $a(t)$ of the universe after inflation has grown by a factor of order $\sim10^{28}$ relative to its value before the inflation, which created a smooth and effectively flat universe of the right size and entropy. This further implies that the inhomogeneity is created by quantum fluctuations of the scalar field in the inflationary phase, gives us a preference for $\Omega=1$, and claims $\Lambda\neq0$[6]. In fact $\Lambda$ follows from the dynamical evolution of our universe as one interprets it as the vacuum energy of the quantized fields[7]. Today $\Lambda$ has an incredibly small value $\sim10^{-47} (GeV)^4$[3,6], while the quantum field theories in curved spacetime would imply quite different values of vacuum energy density ($\rho_v$) in the early universe (in units $8\pi G=1$, we often denote $\rho_v$ by $\Lambda$). The generic inflation models also require it to have a large value during the inflationary epoch[6]. In particular, $\Lambda_{GUT}\sim10^{64}(GeV)^4$, $\Lambda_{EW} \sim 10^8 (GeV)^4$ and $\Lambda_{QCD} \sim10^{-4} (GeV)^4$[8]. This gives, to some extent, a natural explanation for the small value of $\Lambda$ at present and its large value in the early universe. Also, a high intensity of vacuum energy in the inflation era (i.e., de Sitter phase) corresponds to the large vacuum energy needed to drive inflation[9]. However, it is unlikely that such quantum instabilities can lower the value of $\Lambda$ by a large factor and yield a universe even remotely like our own[10]. The problem of the cosmological constant is therefore to explain this difference of many orders of magnitude between $\rho^{early}_{vac}$ and $\rho^{p}_{vac}$ in a natural manner, in particular without fine tuning of the values of cosmological parameters[8].
A number of time-dependent cosmological constant models, such as $\Lambda\propto 1/t^2$ with $t$ the age of the present universe, suggest that the vacuum energy is a function of a scalar (dilatonic) field, an idea also supported by non-critical string theories[2].\\ In this note we consider the above facts on the basis of simple cosmological relations and recent determinations of cosmological parameters. We explain a case of non-zero $\Lambda$ by considering its effect on the geodetic motions in the Schwarzschild de-Sitter spacetime.\\ \section{The Standard Cosmology and $\Lambda$} To a good approximation the present universe is spatially homogeneous and isotropic on large scales[11]. So the space-time geometry of the universe is appropriately described by the Robertson-Walker (RW) metric in the form \begin{eqnarray} ds^2=dt^2-a^2(t) \Big[ \frac{dr^2}{1-kr^2}+r^2 d\theta^2 + r^2\sin^2\theta d\phi^2 \Big] \end{eqnarray} where $a(t)$ is the cosmic scale factor and $k=+1, 0, -1$ depending upon whether the universe is closed, flat or open. The vacuum expectation value of the energy-momentum tensor of quantum fields in de Sitter space takes the form[12] $<T_{\mu\nu}^{vac}>=\rho_{vac} g_{\mu\nu}$. So a model universe with an additional term $\rho_{vac}\, g_{\mu\nu}$ in the Einstein field equation is highly motivated, and the $\Lambda$ corresponding to the vacuum energy density enters in the form \begin{eqnarray} R_{\mu\nu} - \frac{1}{2}g_{\mu\nu}R =G_{\mu\nu}= 8\pi GT_{\mu\nu} - \Lambda g_{\mu\nu} \end{eqnarray} which with eqn(1) gives the so-called Friedmann equation \begin{eqnarray} \Big( \frac{\dot{a}}{a} \Big)^{2} = \frac{8\pi G}{3}\rho - \frac{k}{a^{2}} + \frac{\Lambda}{3} \end{eqnarray} where $\rho$ is the energy density of baryonic plus dark matter. Eqn(3) gives \begin{eqnarray} \frac{k}{{H_o}^{2}{a_o}^{2}}=\Omega_{m}+\Omega_{v} - 1 \end{eqnarray} where $\Omega_m=\rho_m/\rho_c$ and $\Omega_v=\rho_v/\rho_c$ are respectively the matter and vacuum density parameters, with $\rho_{c}=3{H_o}^{2}/8\pi G$ and $\rho_{v}=\Lambda/8\pi G$.\\ A number of recent observations converge on a present value of the Hubble constant in the range: (1) $H_{o} = 67\pm 6{} kms^{-1} Mpc^{-1}$ (Nevalainen and Roos '97), (2) $H_{o} = 70\pm 5{}kms^{-1} Mpc^{-1}$ (Freedman '98). Since $H_{o}^{2} {a_o}^{2} \sim 1$, eqn(4) gives the following relations: (i) for $k=0,{}\Omega _{m} + \Omega _{v}=1$, (ii) for $k=1,{}\Omega _{m} + \Omega _{v} = 2$ and (iii) for $k=-1,{}\Omega _{m} + \Omega _{v} = 0$. However, the present observational limit allows $0.2<\Omega <2$. The present energy density contributed by matter is estimated in a broad range as $\Omega= 0.1\sim 0.4$[13], which corresponds to $\rho_v \leq 10^{-47} (GeV)^4$.\\ \section{The $\Lambda_{eff}$ and Age of the Universe} The limit on the present age of the universe taken from the age of the oldest globular clusters corresponds to $t_{globulars} = 11.5\pm 1.3 Gyr$ (from Hipparcos data, '97), to which the age of the universe at the time of their formation must be added, while the dynamical age of the universe would be $t_{o} = 14.2\pm 1.5 Gyr$, which includes the systematic uncertainties in the current Cepheid distance scale[14]. According to the Friedmann-Lemaitre model, the age $t(z)$ of the universe at redshift $z$ is expressed by \begin{eqnarray} t(z) = \frac{1}{H_o}\int_{0}^{1/(1+z)} dx \Big[(1-\Omega_m-\Omega_{\Lambda})+\Omega_m x^{2-3(1+w)}+\Omega_{\Lambda}x^2 \Big]^{-1/2} \end{eqnarray} with the equation of state $p=w\rho$.
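As a quick numerical cross-check of eqn(5) (a Python sketch for the flat, $w=0$ case; the parameter values are illustrative), one can integrate it directly and compare with the $H_o t_o$ values discussed below:
\begin{verbatim}
from scipy.integrate import quad

def age(h0, om, ol, w=0.0):
    """H0*t0 and t0 (Gyr) from eqn(5) at z = 0."""
    ok = 1.0 - om - ol  # curvature density parameter
    f = lambda x: (ok + om * x**(2.0 - 3.0 * (1.0 + w))
                   + ol * x**2) ** -0.5
    h0t0, _ = quad(f, 0.0, 1.0)      # dimensionless H0*t0
    return h0t0, h0t0 * 977.8 / h0   # 977.8/H0 = Hubble time in Gyr

h0t0, t0 = age(h0=70.0, om=0.3, ol=0.7)
print(f"H0*t0 = {h0t0:.3f}, t0 = {t0:.1f} Gyr")  # ~0.964, ~13.5 Gyr
\end{verbatim}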
With given values of $H_o$ and $t(z)$, eqn(5) puts constraints on $\Omega_m$ and $\Omega_{\Lambda}$. For $w=0$ (i.e., $p\ll\rho$), $H_o = 60-75{} kms^{-1}Mpc^{-1}$ and $t_o = 12.8{}Gyr$, one gets $\Omega_m=0.2-0.4$ for $\Omega_{\Lambda}=0.6-0.8$. However, the statistics of gravitational lensing puts the upper limit $\Omega_{\Lambda}\leq 0.66$ for a flat universe (Kochanek '96), while observations of the clusters of galaxies put the lower limit $\Omega_{\Lambda}\geq0.6$[15]. Based on eqn(5), the limits $\Omega _m < 0.22$ and $\Omega _{\Lambda} > 0.6$ are indicated in ref.[16]. Therefore, $\Omega _{\Lambda} = 0.6 - 0.66$ puts a very strong limit on $\Lambda_o$ of $(2\sim3)\times10^{-47}(GeV)^4$ for $\Omega_{vac} + \Omega_{matter} = 1$.\\ The age of a flat universe that contains both matter and positive vacuum energy is given by[11] \begin{eqnarray} t_o = \frac{2}{3}H_{o}^{-1}\Omega_{vac}^{-1/2} \ln \Big[ \frac{1+\Omega_{vac}^{1/2}}{(1-\Omega_{vac})^{1/2}} \Big] \end{eqnarray} A value of $H_o$ near its lower edge (e.g. $64 kms^{-1} Mpc^{-1}$) together with $t_o = 14.2 Gyr$ implies $H_o t_o = 0.93$, which corresponds to $\Omega _{vac} = 0.66$. For $\Omega _{vac} = 0.7$, one gets $H_o t_o = 0.964$, so $t_o=14.2{}Gyr$ implies $H_o=66{}kms^{-1}Mpc^{-1}$. For $\Omega_{vac} = 0.8$, one gets $H_o t_o = 1.076$. A model universe with $\Omega_{vac}\geq 0.74$ is older than $H_{o}^{-1}$ and thereby implies an accelerating universe; in the limit $t_o \to\infty$ one gets $\Omega_{vac}\to1$ (i.e., a de-Sitter solution). Also, higher values of $\Omega_{vac}$ start to conflict with the lower bound on the matter energy density from galaxies and clusters. So we usually discard values $\Omega_{vac}>0.74$ on the basis of the experimental evidence.\\ \section {Schwarzschild de-Sitter Spacetime and $\Lambda$} In this section we express $\Lambda$ in units of $cm^{-2}$. We define $\Lambda_{pl}=\Lambda (a/l_{pl})^{\alpha}$, where $(a/l_{pl})$ is the scale factor in units of the Planck length, $\Lambda_{pl}\sim M_{pl}^2$ is the natural size of the cosmological constant, and $\alpha$ is a constant to be determined by the present upper bound on $\Lambda$. Since $a_o/l_{pl}\sim ct_o/l_{pl}\sim 10^{61}$, the present upper bound $\Lambda_{o}\leq 10^{-123} M_{pl}^2$ implies $\alpha\equiv2$, and therefore the relation $\Lambda_{pl}\equiv\Lambda (a/l_{pl})^2$ is claimed to explain the spontaneous decay of $\Lambda$ from its large value at the Planck era to its extremely small value in the present universe. In this formalism the value $\Lambda_{o}\leq 10^{-47}(GeV)^4$ corresponds to $|\Lambda_o|\leq 10^{-123} M_{pl}^{2}\approx 10^{-56} cm^{-2}$.\\ A general spherically symmetric metric is described by the form \begin{eqnarray} d\tau^2 = e^{2\lambda(r,t)} dt^2 - e^{2v(r,t)} dr^2 - r^2 (d\theta^2 + \sin^2\theta d\phi^2) \end{eqnarray} Corresponding to the vacuum field equations $G_{\mu\nu}=-\Lambda g_{\mu\nu}$, the generalization of the Schwarzschild solution for the above metric allowing a non-zero cosmological constant [17,18] (in units $c=G=1$) is given by \begin{eqnarray} d\tau^2 = \Big( 1-\frac{2M}{r}-\frac{\Lambda r^2}{3} \Big)dt^2 - \frac{dr^2}{\displaystyle{1-\frac{2M}{r}-\frac{\Lambda r^2}{3}}} - r^2 (d\theta^2 + \sin^2 \theta d\phi^2) \end{eqnarray} The above metric is known as the Schwarzschild de-Sitter metric, and the space determined by it is not asymptotically flat as in the case of the Schwarzschild metric, for $\Lambda$ related to the vacuum energy density implies a pre-existing curvature[18].
It is easy to see that the Lagrangian and Hamiltonian for this metric are equal, so there is no potential energy in the problem. By rescaling $\tau$ and setting $\theta=\pi/2$ (i.e., an equatorial plane), we get \begin{eqnarray} \frac{E^2}{\displaystyle{1-\frac{2M}{r}-\frac{\Lambda r^2}{3}}} - \frac{\dot{r}^2}{\displaystyle{1-\frac{2M}{r}-\frac{\Lambda r^2}{3}}} - \frac{L^2}{r^2} = 2{\cal L}= +1 ~\mbox{or} ~0 \end{eqnarray} for the timelike or null geodesics respectively, where $E=(1-2M/r-\Lambda r^2/3)\dot{t}$ and $L=r^2 \dot{\phi}$ are the constants associated with the energy and angular momentum of the particle respectively. In the case of physical interest (i.e., for the timelike geodesics), considering $r$ as a function of $\phi$ and letting $u=1/r$, we get \begin{eqnarray} \frac{d^2 u}{d\phi^2} + u = \frac{M}{L^2} + 3Mu^2 - \frac{\Lambda}{3L^2 u^3} \end{eqnarray} The numerical solutions to this quintic polynomial with some constraints, e.g. $E^2+\Lambda L^2/3<1$ and $\Lambda M^2<1/9$ for bound orbits, show only three real roots, two positive and one negative. Of these, the two smaller roots lie near the cosmological horizon, the larger root lies near the black hole horizon, and no real roots are present in the region $r_{BH}<r<r_{CO}$[19]. There could be two more roots, but they are essentially imaginary. As the function is negative in the region $r_{BH}<r<r_{CO}$, finding the exact analytic solutions in terms of elliptic integrals with all roots seems much more complicated than in the Schwarzschild case. However, to study the effect of $\Lambda$ on the geodetic motion of the particles one can treat the third term on the rhs of eqn(10) as a perturbation, for it is $\sim 10^{-8}$ of the first term and $\sim 10^{-6}$ of the second term in the case of Mercury's orbit with $\Lambda \approx 10^{-43} cm^{-2}$. A simple approximation to the problem shows that the main effect of the term involving $\Lambda $ in eqn(10) is to cause an additional advance of the perihelion by an amount (retaining all the parameters in their original units) \begin{eqnarray} \Delta\phi_{\Lambda} = \frac{\pi\Lambda c^2 a^3 (1-e^2)^3}{GM}=\frac{2\pi\Lambda l^3}{r_s} \end{eqnarray} where $a$ is the semimajor axis, $e$ the eccentricity, $l$ the semilatus rectum and $r_{s}=2GM/c^2$ the Schwarzschild radius. For circular orbits ($e=0$, $l=a$), this reduces to $\Delta\phi_\Lambda=\pi\Lambda c^2 a^3/GM$. If we define $\bar{\rho}$ as the average density within a sphere of radius $a$ and $\rho_{vac}=\Lambda c^2/8\pi G$ as the vacuum density equivalent of the cosmological constant, we get $\Delta\phi_{\Lambda} = 6\pi\rho_{vac}/\bar{\rho} $ rad/orbit. For the case of bound orbits, a relation between the cosmological constant and the minimum orbit radius can be expressed by $r_{min}=(3MG/\Lambda c^2)^{1/3}$.\\ \section{Discussion} Though the microscopic theories of particle physics and gravity suggest a large contribution of vacuum energy to the energy-momentum tensor, all observations to date show that $\Lambda$ is very small and positive. In the case of Mercury, the extra precession $\Delta\phi_{\Lambda}$ would stay within $0.1''$ per century (i.e., the maximum uncertainty in the precession of the perihelion) if $\Lambda\leq 3.2\times 10^{-43} cm^{-2}$.
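This bound can be checked numerically (a Python sketch; Mercury's orbital elements are taken from standard tables):
\begin{verbatim}
import math

C      = 2.998e8          # speed of light, m/s
GM_SUN = 1.327e20         # G*M_sun, m^3/s^2
A      = 5.791e10         # Mercury semimajor axis, m
E      = 0.2056           # Mercury eccentricity
PERIOD = 87.97 / 365.25   # Mercury orbital period, yr

def dphi_lambda(lam):
    """Extra perihelion advance per orbit, eqn(11), in radians."""
    return math.pi * lam * C**2 * A**3 * (1 - E**2)**3 / GM_SUN

lam = 3.2e-43 * 1e4                    # 3.2e-43 cm^-2 -> m^-2
per_orbit = dphi_lambda(lam) * 206265  # rad -> arc seconds
print(f"{per_orbit * 100 / PERIOD:.2f} arcsec per century")  # ~0.10
\end{verbatim}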
With the current value of $|\Lambda|\leq 10^{-56} cm^{-2}$, for Mercury one gets $\Delta\phi_{\Lambda}\leq 3.6\times 10^{-23}$ arc second per orbit, which is unmeasurably small and very far from the present detectable limit ($3\times 10^{-4}$ arc second) of VLBI (Very Long Baseline Interferometry). It sounds more logical to argue that only tests based on the large-scale geometry of the universe can put a strong limit on the present value of $\Lambda$.\\ However, the precession of the perihelion of the planets provides a sensitive solar-system test for a cosmological constant. Also, for very massive systems such as the Great Attractor (GA) and the Virgo Cluster, with highly eccentric orbits, the value of the cosmological constant may show up. In this case, however, an accurate profile of the infall velocities of galaxies into the GA is needed to provide a good estimate of the present bound on the cosmological constant. For example, in the case of Pluto with $\Lambda\leq 10^{-56} cm^{-2}$, one gets $\Delta\phi_{\Lambda}=3.5\times 10^{-17}$ arc second per orbit, which is also unmeasurably small. An extremely small value of $\Lambda$ makes us unable to measure the extra precession with the required precision. It is worth noting here that $\Lambda$ must be considerably larger than $10^{-50}cm^{-2}$ for its effects to be observable, possibly as an additional precession of the perihelion orbits of the inner planets. In the case of Pluto with $\Delta\phi_{\Lambda}\leq 0.1$ arc second per orbit, we get $\Lambda\leq 3.3\times10^{-49}cm^{-2}$, which may be very near the bound on the present value of the cosmological constant, i.e., $0\leq|\Lambda_o|\leq 2.2\times 10^{-56}cm^{-2}$. However, the planetary perturbations cannot be used to limit the cosmological constant.\\ {\large\bf Acknowledgements}\\ I would like to thank the organizers of the IMFP'98 for their kind hospitality and for providing partial travel support to attend the meeting.\\ {\large\bf References} \begin{description} \item{[1]} L.M. Krauss, {\it Preprint, hep-ph/9810393} (1998). \item{[2]} J.L. Lopez and D.V. Nanopoulos, {\it Mod. Phys. Lett.} A11 (1996) 1. \item{[3]} A. Singh, {\it Phys. Rev.} D52 (1995) 6700. \item{[4]} W.L. Freedman et al., {\it Nature} 371 (1994) 757. \item{[5]} M. Chiba and Y. Yoshii, {\it Preprint, astro-ph/9808321} (1998). \item{[6]} A. Linde, {\it Particle Physics and Inflationary Cosmology}, Harwood Academic Publishers (1990). \item{[7]} N.D. Birrell and P.C.W. Davies, {\it Quantum Fields in Curved Space}, Cambridge Univ. Press (1982). \item{[8]} S. Weinberg, {\it Rev. Mod. Phys.} 61 (1989) 1. \item{[9]} J.W. Moffat, {\it Preprint, astro-ph/9606071} (1996). \item{[10]} W.A. Hiscock, {\it Phys. Lett.} 166B (1986) 285. \item{[11]} E.W. Kolb and M.S. Turner, {\it The Early Universe} (1990). \item{[12]} L. Ford, {\it Phys. Rev.} D28 (1983) 710. \item{[13]} M. Ozer and M.O. Taha, {\it Mod. Phys. Lett.} A13 (1998) 571-580. \item{[14]} Riess et al., {\it Preprint, astro-ph/9805201} (1998). \item{[15]} Mellier et al., {\it Preprint, astro-ph/9609197} (1996). \item{[16]} M. Roos and S.M. Rashid, {\it Astron. Astrophys.} 329 (1998) L17. \item{[17]} C. Kahn and F. Kahn, {\it Nature} 257 (1975) 451. \item{[18]} G. Gibbons and S. Hawking, {\it Phys. Rev.} D15 (1977) 2738. \item{[19]} J. Pokharel and U. Khanal, {\it Geodesics in Schwarzschild de-Sitter Spacetime}, unpublished work, Dept. of Phys., Tribhuvan Univ. (1997).\\ \end{description} \end{document}
\section{Introduction} Face recognition research has been significantly promoted by deep learning techniques recently. But a persistent challenge remains to develop methods capable of matching heterogeneous faces that have large appearance discrepancies due to various sensing conditions. Typical heterogeneous face recognition (HFR) tasks include visual versus near infrared (VIS-NIR) face recognition~\cite{yi2007face_NIS-VIR,yi2009partial_NIS-VIR}, visual versus thermal infrared (VIS-TIR) face recognition~\cite{socolinsky2002TIR_analysis}, face photo versus face sketch~\cite{tang2004face_sketch,wang_tang2009photo-sketch}, face recognition across pose~\cite{huang2017beyond} and so on. VIS-NIR HFR is the most popular and representative task in HFR, because NIR imaging provides a low-cost and effective solution for acquiring high-quality images under low-light scenarios. It is widely applied in surveillance systems nowadays. However, NIR images are far less widespread than VIS images, and most face databases are enrolled in the VIS domain. Consequently, the demand for face matching between NIR and VIS images grows steadily. A major challenge of HFR comes from the gap between the sensing patterns of different face modalities. In practice, human face appearance is often influenced by many factors, including identity, illumination, viewing angle, expression and so on. Among all these factors, identity difference accounts for inter-personal differences while the rest lead to intra-personal differences. A key effort for face recognition is to alleviate intra-personal differences while enlarging inter-personal differences. Specifically, in the heterogeneous case, the noise factors that cause intra-personal differences show diverse distributions in different modalities, e.g. the different spectral sensing distributions of the VIS and NIR domains, leading to a more complex problem of preserving identity relevance between different modalities. \begin{figure}[htbp] \centering \includegraphics[width=0.95\linewidth]{figures_banner.pdf} \caption{...} \label{fig:banner} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[width=1.05\linewidth]{AD-HFR_pipeline.pdf} \end{center} \caption{The proposed adversarial discriminative HFR framework. Adversarial learning is employed on both raw-pixel space and compact feature space.} \label{fig:pipeline} \end{figure} A lot of research effort has been devoted to eliminating the sensing gap~\cite{socolinsky2002TIR_analysis,yi2007face_NIS-VIR,li2013casia}. One straightforward approach to cope with the sensing gap is to transform heterogeneous data onto a common comparable space~\cite{lei2012coupled}. Another commonly used strategy is to map data from one modality to another~\cite{lei2008CCA_mapping,wang2009analysis,huang2013coupled}. Most of these methods focus only on minimizing the sensing gap but do not emphasize discrimination among different subjects, causing performance degradation when the number of subjects increases. Another challenge for HFR is the lack of paired training data. General face recognition and hallucination have benefited a lot from the development of deep neural networks. However, the success of deep learning relies to some extent on large amounts of labeled or paired training data. Although we can easily collect large-scale VIS images through the internet, it is hard to collect massive paired heterogeneous image data such as NIR images and TIR images. How to take advantage of powerful general face recognition to boost HFR and cross-spectral face hallucination is worth studying.
To address the above two issues, this paper proposes an adversarial discriminative feature learning framework for HFR by introducing adversarial learning in both the raw-pixel space and the compact feature space. Figure~\ref{fig:pipeline} shows the pipeline of our approach. Cross-spectral face hallucination and discriminative feature learning are simultaneously considered in this network. In the pixel space, we make use of generative adversarial networks (GAN) as a sub-network to perform cross-spectral face hallucination. An elaborate two-path model is introduced in this sub-network to alleviate the lack of paired images; it considers both global structures and local textures and yields better visual quality. In the feature space, an adversarial loss and a high-order variance discrepancy loss are employed to measure the global and local discrepancy between two heterogeneous feature distributions respectively. These two losses enhance domain-invariant feature learning and modality-independent noise removal. Moreover, we implement all these global and local constraints in an end-to-end adversarial network, resulting in relatively compact 256-dimensional features. Experimental results show that our proposed adversarial approach not only outperforms state-of-the-art HFR methods but also can generate photo-realistic VIS images from NIR images, without requiring a complex network or a large-scale training dataset. The results also suggest that joint hallucination and feature learning is helpful in reducing the sensing gap. The main contributions are summarized as follows, \begin{enumerate} \item[(1)] A cross-spectral face hallucination framework is embedded as a sub-network in adversarial learning based on GAN. A two-path architecture is presented to cope with the absence of well-aligned image pairs and improve face image quality. \item[(2)] An adversarial discriminative feature learning strategy is presented to seek domain-invariant features. It aims at eliminating the heterogeneities in the compact feature space and reducing the discrepancy between different modalities in terms of both local and global distributions. \item[(3)] Extensive experimental evaluations on three challenging HFR databases demonstrate the superiority of the proposed adversarial method, especially taking feature dimension and visual quality into consideration. \end{enumerate} \section{Related Work} What makes heterogeneous face recognition different from general face recognition is that we need to place data from different domains into the same space; only then can the measurement between heterogeneous data make sense. One kind of approach uses data synthesis to map data from one modality into another, so that the similarity relationship of heterogeneous data from different domains can be measured. In \cite{liu2005nonlinear}, a local geometry preserving based nonlinear method is proposed to generate pseudo-sketches from face photos. In~\cite{lei2008CCA_mapping}, a canonical correlation analysis (CCA) based multi-variate mapping algorithm is proposed to reconstruct a 3D model from a single 2D NIR image. In~\cite{wang_tang2009photo-sketch}, multi-scale Markov Random Fields (MRF) models are extended to synthesize sketch drawings from given face photos and vice versa. In \cite{wang2009analysis}, a cross-spectrum face mapping method is proposed to transform NIR and VIS data into each other.
Many works~\cite{wang2012sketch_synthesis,juefei2015cvprw_nir} resort to coupled or joint dictionary learning to reconstruct face images and then perform face recognition. However, large amounts of pairwise multi-view data are essential for these synthesis-based methods, making it very difficult to collect training images. In~\cite{lezama2016not}, a patch mining strategy is designed to collect aligned image patches, and VIS faces are then produced from NIR images through a deep learning approach. Another kind of method deals with heterogeneous data by projecting them onto a common latent space, or by learning modality-invariant features that are robust to domain transfer. In~\cite{lin_tang2006}, Common Discriminant Feature Extraction (CDFE) is proposed to transform data to a common feature space, which takes both inter-modality discriminant information and intra-modality local consistency into consideration. \cite{liao2009heterogeneous} use DoG filtering as preprocessing for illumination normalization, and then employ Multi-block LBP (MB-LBP) to encode NIR as well as VIS images. \cite{klare2010heterogeneous} further combine HoG features with LBP descriptors, and utilize sparse representation to improve recognition accuracy. \cite{goswami2011evaluation} incorporate a series of preprocessing methods for normalization, then combine the Local Binary Pattern Histogram (LBPH) representation with LDA to extract robust features. In~\cite{zhang2011coupled}, a coupled information-theoretic projection method is proposed to reduce the modality gap by maximizing the mutual information between photos and sketches in the quantized feature spaces. In~\cite{lei2012coupled}, a coupled discriminant analysis method is suggested that involves the locality information in kernel space. In~\cite{huang2013regularized}, a regularized discriminative spectral regression (DSR) method is developed to map heterogeneous data into the same latent space. In~\cite{hou2014domain}, a domain adaptive self-taught learning approach is developed to derive a common subspace. In~\cite{zhu2014matching}, Log-DoG filtering is combined with local encoding and uniform feature normalization to reduce heterogeneities between VIS and NIR images. \cite{shao2017cross} propose hierarchical hyperlingual-words (Hwords) to capture high-level semantics across different modalities, and a distance metric through the hierarchical structure of Hwords is presented accordingly. Recently, many works have attempted to address the cross-modal matching problem with deep learning methods, benefiting from the development of deep learning. In~\cite{yidong2015shared}, Restricted Boltzmann Machines (RBMs) are used to learn a shared representation between different modalities. In~\cite{liu2016transferring}, the triplet loss is applied to reduce intra-class variations among different modalities as well as augment the number of training sample pairs. \cite{kan2016cross-view} develop a multi-view deep network that is made up of view-specific sub-networks and a common sub-network, in which the view-specific sub-networks attempt to remove view-specific variations while the common sub-network seeks a common representation shared by all views. In~\cite{he2017idr}, subspace learning and invariant feature extraction are combined into CNNs. This method obtains the state-of-the-art HFR result on the CASIA NIR-VIS 2.0 database. As mentioned before, our work is also closely related to adversarial learning.
GAN~\cite{goodfellow2014generative} has achieved great success in many computer vision applications, including image style transfer~\cite{zhu2017unpaired,pix2pix2016}, image generation~\cite{shrivastava2016learning,huang2017beyond}, image super-resolution~\cite{ledig2016photo} and object detection~\cite{li2017perceptual,wang2017fast}. Adversarial learning provides a simple yet efficient way to fit a target distribution via the min-max two-player game between a generator and a discriminator. Motivated by this, we introduce adversarial learning into NIR-VIS face hallucination and domain-invariant feature learning, aiming at closing the sensing gap of heterogeneous data in pixel space and feature space simultaneously. \section{The Proposed Approach} In this section, we present a novel framework for the cross-modal face matching problem based on adversarial discriminative learning. We first introduce the overall architecture, and then describe the cross-spectral face hallucination and the adversarial discriminative feature learning separately. \subsection{Overall Architecture} The goal of this paper is to design a framework that enables learning of domain-invariant feature representations for images from different modalities, i.e. VIS face images $I^V$ and NIR face images $I^N$. We can easily obtain numerous VIS face images for training thanks to the prosperity of social networks. In most circumstances, face recognition approaches are trained with VIS face images, and thus cannot achieve full performance when handling NIR images. Besides, it is necessary to archive all processed images for most face recognition systems in real-world applications. However, NIR face images are much harder for humans to distinguish compared with VIS faces. A feasible way is to convert NIR face images into the VIS spectrum. Thus, we employ a GAN to perform cross-spectral face hallucination, aiming at better fitting the VIS-based face models as well as producing VIS-like images that are friendly to human eyes. However, we find that merely transferring NIR images into the VIS spectrum is insufficient for NIR-VIS HFR. A reasonable explanation is that NIR images are distinct from VIS images in more than just the imaging spectrum. For example, NIR face images often have darker or blurrier outlines due to the distance limit of the near-infrared illumination. The special imaging process of NIR images makes the noise factors that cause intra-personal differences follow distributions different from those of VIS images. Hence, an adversarial discriminative feature learning strategy is proposed in our approach to reduce heterogeneities between VIS and NIR images. To summarize, the proposed approach consists of two key components (shown in Fig.~\ref{fig:pipeline}): cross-spectral face hallucination and adversarial discriminative feature learning. These two components try to eliminate the gap between different modalities in the raw-pixel space and the compact feature space respectively. \subsection{Cross-spectral Face Hallucination} The outstanding performance of GAN in fitting data distributions has significantly promoted many computer vision applications such as image style transfer~\cite{zhu2017unpaired,pix2pix2016}. Motivated by its remarkable success, we employ GAN to perform cross-spectral face hallucination that converts NIR face images into the VIS spectrum. A major challenge in NIR-VIS image conversion is that image pairs are not aligned accurately in most databases.
Even though we can align images based on facial landmarks, the pose and facial expression of the same subject still vary quite a lot. Therefore, we build our cross-spectral face hallucination models on the CycleGAN framework~\cite{zhu2017unpaired}, which can handle unpaired image translation tasks. As illustrated in Fig.~\ref{fig:pipeline}, a pair of generators ${G_V}:{I^N} \to {I^V}$ and ${G_N}:{I^V} \to {I^N}$ are introduced to achieve the opposite transformations, with which we can construct mapping cycles between the VIS and NIR domains. Associated with these two generators, $D_V$ and $D_N$ aim to distinguish between real images $I$ and generated images $G(I)$ correspondingly. Generators and discriminators are trained alternately toward adversarial goals, following the pioneering work of~\cite{goodfellow2014generative}. The adversarial losses for generator and discriminator are shown in Eq.~\ref{L_adv_G} and Eq.~\ref{L_adv_D} respectively. \begin{equation}\label{L_adv_G} \begin{split} {L_{G-adv}} = - {{\mathbb{E}}_{{I} \sim P\left( {{I}} \right)}}\log {D}\left( {G\left( {{I}} \right)} \right), \end{split} \end{equation} \begin{equation}\label{L_adv_D} \begin{split} {L_{D-adv}} &= - {{\mathbb{E}}_{{I^{'}} \sim P\left( {{I^{'}}} \right)}}\log {D}\left( {I^{'}} \right) \\ &- {{\mathbb{E}}_{{I} \sim P\left( {{I}} \right)}}\log \left( 1 - {D}\left( {G\left( {I} \right)} \right) \right), \end{split} \end{equation} where $I$ and $I^{'}$ are images from different modalities. In the CycleGAN framework, an extra cycle consistency loss $L_{cyc}$ is introduced to guarantee consistency between the input images and the reconstructed images, e.g. $I^N$ vs. ${G_{N}}({G_{V}}(I^N))$ and $I^V$ vs. ${G_{V}}({G_{N}}(I^V))$. $L_{cyc}$ is calculated as \begin{equation}\label{L_cyc} \begin{split} {L_{cyc}} = {\mathbb{E}_{I \sim P\left( I \right)}}{\left\| {I - F\left( {G\left( I \right)} \right)} \right\|_1}, \end{split} \end{equation} where $F$ is the generator opposite to $G$. In our cross-spectral face hallucination case, if $G$ is used to transfer VIS faces into the NIR spectrum, then $F$ is used to transfer NIR faces back into the VIS spectrum. We find that it is hard for a single generator to synthesize high-quality cross-spectral images in which both global structures and local details are well reconstructed. A possible explanation is that convolutional filters are shared across all spatial locations, which makes them seldom suitable for recovering global and local information at the same time. Therefore, we employ a two-path architecture as shown in Fig.~\ref{fig:Two_path_model}. Since the periocular regions show special correspondences between NIR images and VIS images, different from other facial areas, we add a local path around the eyes so as to precisely recover details of the periocular regions. \begin{figure}[t] \begin{center} \includegraphics[width=1.0\linewidth]{Two-path_model.pdf} \end{center} \caption{The proposed two-path architecture used in cross-spectral face hallucination.} \label{fig:Two_path_model} \end{figure} Because VIS images and NIR images mainly differ in light spectrum, the structure information should be preserved after cross-spectral translation. Similar to~\cite{lezama2016not}, we choose to represent the input and output images in YCbCr space, in which the luminance component Y encodes most of the structure information as well as identity information.
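For concreteness, the luminance extraction underlying the term defined next can be sketched as follows (a minimal sketch assuming standard BT.601 conversion coefficients; the helper names are illustrative):
\begin{verbatim}
import numpy as np

def y_channel(img_rgb):
    """Luminance (Y of YCbCr, BT.601) of an RGB image in [0,1]."""
    r, g, b = img_rgb[..., 0], img_rgb[..., 1], img_rgb[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b

def intensity_loss(real, generated):
    """L1 distance between Y channels, cf. the term defined next."""
    return np.mean(np.abs(y_channel(real) - y_channel(generated)))
\end{verbatim}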
A luminance-preserving term is adopted in the global path to enforce structure consistency: \begin{equation}\label{L_intensity} \begin{split} L_{intensity} = {{\mathbb{E}}_{I \sim P\left( I \right)}}{\left\| {Y\left( I \right) - Y\left( {G\left( I \right)} \right)} \right\|_1} \end{split} \end{equation} in which $Y(.)$ stands for the Y channel of images in YCbCr space. To sum up, the full objective for the generators $G_V, G_N$ is: \begin{equation}\label{L_G_full} \begin{split} {L_G} = {L_{G - adv}} + {\alpha _1}{L_{cyc}} + {\alpha _2}{L_{intensity}} \end{split} \end{equation} where $\alpha_1$ and $\alpha_2$ are loss weight coefficients. \subsection{Adversarial Discriminative Feature Learning} In this section, we propose a simple way to learn domain-invariant face representations using an adversarial discriminative feature learning strategy. An ideal face feature extractor should be capable of alleviating the discrepancy caused by different modalities, while remaining discriminative among different subjects. \subsubsection{Adversarial Loss} As mentioned above, GAN has a strong ability to fit a target distribution via the simple min-max two-player game. In this section, we use GAN in cross-view feature learning so as to eliminate domain discrepancy at the feature level. As demonstrated in Fig.~\ref{fig:pipeline}, an extra discriminator $D_F$ is employed to act as the adversary to our feature extractor. $D_F$ outputs a scalar value that indicates the probability of a feature belonging to the VIS feature space. The adversarial loss of our feature extractor takes the form: \begin{equation}\label{L_adv_DF} \begin{split} {L_{F-{adv}}} = - {{\mathbb{E}}_{{I^N} \sim P\left( {{I^N}} \right)}}\log {D_F}\left( {{F}\left( {{G_V }\left( {{I^N}} \right)} \right)} \right) \end{split} \end{equation} By enforcing the fitting of the NIR feature distribution to the VIS feature distribution, we can remove the noise factors accounting for domain discrepancy. Since the adversarial loss eliminates the discrepancy between distributions of heterogeneous data in a global view without taking local discrepancy into consideration, and the distribution in each modality consists of many sub-distributions of different subjects, local consistency may not be well preserved. \subsubsection{Variance Discrepancy} Similar to conventional domain adaptation tasks~\cite{long2016RTN,zellinger2017CMD}, we want to bridge two different domains by learning domain-invariant feature representations in HFR. But HFR faces more challenges. First, HFR needs to match the same subject or instance rather than the same class, and must distinguish two different subjects that would belong to the same class in most domain adaptation tasks. Second, there is no upper limit on the number of subject classes, the majority of which do not appear in the training phase. Fortunately, unlike unsupervised domain adaptation tasks, label information in the target domain is available in HFR, which can supervise the discriminative feature learning. The adversarial loss alone can only handle part of the intra-personal difference, namely that caused by modality transfer, but not the modality-independent noise factors.
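A minimal PyTorch-style sketch of this feature-level adversarial game (module definitions and shapes are illustrative assumptions, not the exact implementation):
\begin{verbatim}
import torch
import torch.nn as nn

feat_dim = 256  # dimension of the learned representation
# Hypothetical feature-space discriminator D_F: maps a feature
# to the probability of it being a VIS feature.
D_F = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(),
                    nn.Linear(64, 1), nn.Sigmoid())

def feature_adv_loss(nir_feats, eps=1e-8):
    """L_{F-adv} above: pull NIR features toward VIS."""
    return -torch.log(D_F(nir_feats) + eps).mean()

def discriminator_loss(vis_feats, nir_feats, eps=1e-8):
    """Train D_F to separate VIS (real) from NIR (fake) features."""
    return -(torch.log(D_F(vis_feats) + eps).mean()
             + torch.log(1 - D_F(nir_feats.detach()) + eps).mean())
\end{verbatim}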
Considering that the feature distributions of the same subject should ideally be as close as possible across modalities, we employ the class-wise variance discrepancy (CVD) to enforce the consistency of subject-related variation with the guidance of identity label information: \begin{equation}\label{moment} \begin{split} \sigma \left( F \right) = {\mathbb{E}}\left( {{{\left( {F - {\mathbb{E}}\left( F \right)} \right)}^2}} \right), \end{split} \end{equation} \begin{equation}\label{L_moment} \begin{split} {L_{\text{CVD}}} = \sum\limits_{c = 1}^C {{\mathbb{E}}\left( {{{\left\| {\sigma \left( {{F_c}^V} \right) - \sigma \left( {{F_c}^N} \right)} \right\|}_2}} \right)} \end{split} \end{equation} where $\sigma(.)$ is the variance function, and ${F_c}^V$, ${F_c}^N$ denote the feature observations belonging to the $c$-th class in the VIS and NIR domains respectively. \subsubsection{Cross-Entropy Loss} As the adversarial loss and the variance discrepancy penalties cannot ensure the inter-class diversity which exists in both the source domain and the target domain, we further employ the commonly-used classification architecture to enforce the discrimination and compactness of the learned features. The empirical error over all samples is minimized as \begin{equation}\label{L_softmax} \begin{split} {L_{cls}} = \frac{1}{{\left| N \right| + \left| V \right|}}\sum\limits_{i \in \{ N,V\} } {{\mathcal{L}}\left( {W{F_i},{y_i}} \right)} \end{split} \end{equation} where $W$ is the parameter for softmax normalization, and $\mathcal{L}\left( { \cdot , \cdot } \right)$ is the cross-entropy loss function. The final loss function is a weighted sum of all the losses defined above: $L_{F-adv}$ to remove the modality gap, $L_{\text{CVD}}$ to guarantee intra-class consistency, and $L_{cls}$ to preserve identity discrimination. \begin{equation}\label{L_final} \begin{split} L = {L_{F-{adv}}} + {\lambda _1}{L_{\text{CVD}}} + {\lambda _2}{L_{cls}} \end{split} \end{equation} \section{Experiments} In this section, we evaluate the proposed approach on three NIR-VIS databases. The databases and testing protocols are introduced first. Then, the implementation details are presented. Finally, a comprehensive experimental analysis is conducted in comparison with related works. \subsection{Datasets and Protocols} \textbf{The CASIA NIR-VIS 2.0 face database~\cite{li2013casia}}. It is so far the largest as well as the most challenging public face database across the NIR and VIS spectra. Its challenge lies in large variations of expression, pose and distance for the same identity. The database collects 725 subjects, each with 1-22 VIS and 5-50 NIR images. All images in this database are randomly gathered, with no one-to-one correspondence between NIR and VIS images. In our experiments, we follow View 2 of the standard protocol defined in~\cite{li2013casia}, which is used for performance evaluation. There are 10-fold experiments in View 2, where each fold contains non-overlapping training and testing lists. There are about 6,100 NIR images and 2,500 VIS images from about 360 identities for training in each fold. In the testing phase, cross-view face verification is carried out between the gallery set of 358 VIS images, each belonging to a different subject, and the probe set of over 6,000 NIR images from the same 358 identities. The Rank-1 identification rate and the ROC curve are used as evaluation criteria. \textbf{The BUAA-VisNir face database~\cite{Huang2012Buaa}}.
This dataset is made up of 150 subjects with 40 images per subject, among which there are 13 VIS-NIR pairs and 14 VIS images under different illumination. Each VIS-NIR image pair is captured synchronously using a single multi-spectral camera. The paired images in the BUAA-VisNir dataset vary in pose and expression. Following the testing protocol proposed in~\cite{shao2017cross}, 900 images of 50 subjects are randomly selected for training, and the other 100 subjects make up the testing set. It is worth noting that the gallery set contains only one VIS image of each subject. Therefore, a testing set of 100 VIS images and 900 NIR images is organized. We report the Rank-1 accuracy and the ROC curve according to the protocol. \textbf{The Oulu-CASIA NIR-VIS facial expression database~\cite{chen2009learning}}. Videos of 80 subjects with six typical expressions and three different illumination conditions are captured with both NIR and VIS imaging systems in this database. We conduct cross-spectral face recognition experiments following the protocols in~\cite{shao2017cross}, where only images from the normal indoor illumination are used. For each expression, eight face images are randomly selected, such that 48 VIS images and 48 NIR images of each subject are used. Based on the protocol in~\cite{shao2017cross}, the training set and testing set contain 20 subjects each, resulting in a total of 960 gallery VIS images and 960 NIR probe images in the testing phase. Similar to the above two datasets, the Rank-1 accuracy and the ROC curve are reported. \begin{table*}[htbp] \centering \caption{Experimental results for the 10-fold face verification tasks on the CASIA NIR-VIS 2.0 database for the proposed method.} \begin{tabular}{l|c|c|c|c} \toprule & Rank-1 acc.(\%) & VR@FAR=1\%(\%) & VR@FAR=0.1\%(\%) & VR@FAR=0.01\%(\%) \\ \midrule \midrule Basic model & $87.16\pm0.45$ & $89.65\pm0.89$ & $72.06\pm1.38$ & $48.25\pm2.68$ \\ Softmax & $95.89\pm0.75$ & $98.26\pm0.48$ & $93.25\pm1.14$ & $75.13\pm3.02$ \\ \midrule ADFL w/o $L_{adv}$ & $96.56\pm0.63$ & $98.56\pm0.27$ & $95.24\pm0.36$ & $81.69\pm1.77$ \\ ADFL w/o $L_{\text{CVD}}$ & $ 97.34\pm0.53$ & $98.95\pm0.14$ & $96.88\pm0.40$ & $85.83\pm3.02$ \\ \midrule Hallucination & $90.56\pm0.86$ & $92.95\pm0.20$ & $81.17\pm0.42$ & $62.24\pm2.77$ \\ ADFL & $97.81\pm0.29$ & $99.04\pm0.21$ & $\textbf{97.21}\pm0.34$ & $\textbf{88.11}\pm3.09$ \\ Hallucination + ADFL & $\textbf{98.15}\pm0.34$ & $\textbf{99.12}\pm0.15$ & $97.18\pm0.48$ & $87.79\pm2.33$ \\ \bottomrule \end{tabular}% \label{tab:res_ours}% \end{table*}% \subsection{Implementation Details} \begin{figure}[t] \begin{center} \includegraphics[width=1.0\linewidth]{gan_res.pdf} \end{center} \caption{Results of the cross-spectral face hallucination. From left to right, the input NIR images, generated VIS images by CycleGAN, generated VIS images by the proposed cross-spectral face hallucination framework, and corresponding VIS images of the same subjects.} \label{fig:face_generate_res} \end{figure} \textbf{Training data}. Our cross-spectral hallucination network is trained on the CASIA NIR-VIS 2.0 face dataset. Note that label annotations are not involved in the training of the face hallucination module; therefore they do not affect the reliability of the following HFR tests. The feature extraction network is pre-trained on the MS-Celeb-1M dataset~\cite{guo2016ms}, and fine-tuned on each testing dataset respectively.
All the face images are normalized by similarity transformation using the locations of the two eyes, and then cropped to $144 \times 144$ size, from which $128 \times 128$ sub-images are selected by random cropping in training and center cropping in testing. For the local path, $32 \times 32$ patches are cropped around the two eyes, and then flipped to the same side. As mentioned above, in the cross-spectral hallucination module, images are encoded in YCbCr space. In the feature extraction step, grayscale images are used as input. \textbf{Network architecture}. Our cross-spectral hallucination networks take the architecture of ResNet~\cite{he2016resnet}, where the global path comprises 6 residual blocks and the local path contains 3 residual blocks. The output of the local path is fed to the global path before the last block. In the adversarial discriminative feature learning module, we employ model-B of the Light CNN~\cite{wu2015lightened} as our basic model, which includes 9 convolution layers, 4 max-pooling layers and one fully-connected layer. Parameters of the convolution layers are shared across the VIS and NIR channels as shown in Fig.~\ref{fig:pipeline}. The output feature dimension of our approach is 256, which is relatively compact compared with other state-of-the-art face recognition networks. \subsection{Experimental Results} \subsubsection{Face Hallucination Results} Fig.~\ref{fig:face_generate_res} shows some examples generated by our cross-spectral hallucination framework. We report the results of CycleGAN~\cite{zhu2017unpaired} for comparison. As shown in Fig.~\ref{fig:face_generate_res}, the results of CycleGAN are not satisfying, which may be caused by the lack of strong constraints such as the proposed $L_{intensity}$. Note that our method can accurately recover details of the VIS faces, e.g. eyes, mouths and hair. Specifically, the periocular regions are well transformed to VIS-like faces in which the eyeballs are distinguishable. The results in Fig.~\ref{fig:face_generate_res} demonstrate the ability of our cross-spectral hallucination framework to generate photo-realistic VIS images from NIR inputs, with both global structure and local details well preserved. \subsubsection{Results on the CASIA NIR-VIS 2.0 database} Table~\ref{tab:res_ours} shows the results of the proposed approach under different settings. We report the mean value and standard deviation of the Rank-1 identification rate and the verification rates at $1\%$, $0.1\%$ and $0.01\%$ false accept rate (VR@FAR=$1\%$, VR@FAR=$0.1\%$, VR@FAR=$0.01\%$) for a detailed analysis. We evaluate the performance obtained by our method in different settings, including cross-spectral hallucination, ADFL and hallucination $+$ ADFL. In order to validate the effectiveness of $L_{adv}$ and $L_{\text{CVD}}$, we report the results of removing each of them respectively. The cross-spectral hallucination brings a performance gain of about $3\%$ in Rank-1 accuracy as well as VR@FAR=$1\%$, indicating that cross-spectral image transfer helps to close the sensing gap between different modalities. Obviously, significant improvements can be observed when the proposed ADFL is used. Since supervision signals are introduced in the ADFL, it has a stronger capacity than cross-spectral hallucination to boost HFR accuracy. Both the adversarial loss and the variance discrepancy help to improve the recognition performance, according to the results of w/o $L_{adv}$ and w/o $L_{\text{CVD}}$.
When the cross-spectral hallucination and the adversarial discriminative learning strategies are applied together, the best performance is obtained. \begin{table} \centering \caption{Experimental results for the 10-fold face verification tasks on the CASIA NIR-VIS 2.0 database.} \begin{tabular}{l|c|c|c} \toprule & Rank-1 & FAR=0.1\% & Dim. \\ \midrule \midrule PCA+Sym+HCA(2013) & $23.70$ & 19.27 & -\\ LCFS(2015) & $35.40$ & 16.74 & -\\ CDFD(2015) & $65.8$ & 46.3 & -\\ CDFL(2015) & $71.5$ & 55.1 & 1000 \\ Gabor+RBM(2015) & $86.16$ & $81.29$ & 14080 \\ Recon.+UDP(2015) & $78.46$ & 85.80 & - \\ $\text{H}^2(\text{LBP}_3)$(2016) & 43.8 &10.1 & - \\ COTS+Low-rank(2017) & 89.59 & - & 1024 \\ IDR(2017) & $97.33$ & $95.73$ & \textbf{128} \\ \midrule Ours & \textbf{98.15} & $\textbf{97.18}$ & 256\\ \bottomrule \end{tabular}% \label{tab:res_nir2.0}% \end{table}% We also compare the proposed approach with both conventional and state-of-the-art deep learning based NIR-VIS face recognition methods: PCA+Sym+HCA~\cite{li2013casia}, learning coupled feature space (LCFS)~\cite{jin2015coupled}, coupled discriminant face descriptor (CDFD)~\cite{jin2015coupled,wangkaiye2013iccv}, coupled discriminant feature learning (CDFL)~\cite{jin2015coupled}, Gabor+RBM~\cite{yidong2015shared}, NIR-VIS reconstruction+UDP~\cite{juefei2015cvprw_nir}, COTS+Low-rank~\cite{lezama2016not} and Invariant Deep Representation (IDR)~\cite{he2017idr}. The experimental results are consolidated in Table~\ref{tab:res_nir2.0}. We can see that deep learning based HFR methods perform much better than conventional approaches. The proposed method improves the previous best Rank-1 accuracy and VR@FAR=$0.1\%$, obtained by IDR in~\cite{he2017idr}, from $97.33\%$ to $98.15\%$ and from $95.73\%$ to $97.18\%$ respectively. All of these results suggest that our method is effective for the NIR-VIS recognition problem. \subsubsection{Results on the BUAA-VisNir face database} \begin{table} \centering \caption{Experimental results on the BUAA-VisNir Database.} \begin{tabular}{l|c|c|c} \toprule & Rank-1 & FAR=1\% & FAR=0.1\% \\ \midrule \midrule MPL3(2009) & 53.2 & 58.1 & 33.3\\ KCSR(2009) & 81.4 & 83.8 & 66.7\\ KPS(2013) & 66.6 & 60.2 & 41.7\\ KDSR(2013) & 83.0 & 86.8 & 69.5\\ $\text{H}^2(\text{LBP}_3)$(2017) & 88.8 & 88.8 &73.4 \\ IDR(2017) & 94.3 & 93.4 & 84.7 \\ \midrule Basic model & 92.0& 91.5& 78.9 \\ Softmax & 94.2 & 93.1 & 80.6 \\ ADFL w/o $L_{\text{CVD}}$ & 94.8& 92.2& 83.9 \\ ADFL w/o $L_{adv}$ & 94.9& 94.5& 87.7 \\ ADFL & \textbf{95.2}& \textbf{95.3} & \textbf{88.0} \\ \bottomrule \end{tabular}% \label{tab:res_buaa}% \end{table}% We compare the proposed approach with MPL3~\cite{chen2009learning}, KCSR~\cite{lei2009coupled}, KPS~\cite{lei2009coupled}, KDSR~\cite{huang2013regularized} and $\text{H}^2(\text{LBP}_3)$~\cite{shao2017cross}. The results of these compared methods are from~\cite{shao2017cross}. Table~\ref{tab:res_buaa} shows the Rank-1 accuracy and verification rate of each method. Profiting from powerful large-scale training data, the basic model achieves very good performance, better than most of the compared methods. We can see that the performance is further improved when the adversarial loss and the variance discrepancy are introduced. Particularly, without the constraint of variance consistency, the verification rate drops dramatically at low FAR. This phenomenon demonstrates the effectiveness of the variance discrepancy in removing intra-subject variations. Finally, the proposed ADFL achieves the best performance.
\subsubsection{Results on the Oulu-CASIA NIR-VIS facial expression database} Results on the Oulu-CASIA NIR-VIS database are presented in Table~\ref{tab:res_oulu}, where the results of the compared methods are from~\cite{shao2017cross}. Similar to the results on the BUAA-VisNir database, our proposed ADFL further boosts the performance beyond the powerful basic model. We observe that the adversarial loss contributes little on this database, since the training set of the Oulu-CASIA NIR-VIS database contains only 20 subjects and is relatively small-scale; it is thus easy for the powerful Light CNN to learn a good feature extractor for such a small dataset under the guidance of the softmax loss. Besides, the variance discrepancy still shows great capability in improving the verification rate at low FAR. These results demonstrate the superiority of our method. \begin{table} \centering \caption{Experimental results on Oulu-CASIA NIR-VIS Database.} \begin{tabular}{l|c|c|c} \toprule & Rank-1 & FAR=1\% & FAR=0.1\% \\ \midrule \midrule MPL3(2009) & 48.9 & 41.9 & 11.4\\ KCSR(2009) & 66.0 & 49.7 & 26.1\\ KPS(2013) & 62.2 & 48.3 & 22.2\\ KDSR(2013) & 66.9 & 56.1 &31.9\\ $\text{H}^2(\text{LBP}_3)$(2017) & 70.8 &62.0 &33.6 \\ IDR(2017) & 94.3 & 73.4 & 46.2 \\ \midrule Basic model & 92.2& 80.3& 53.1 \\ Softmax & 93.0 & 80.9 & 56.1 \\ ADFL w/o $L_{\text{CVD}}$ & 93.1 &81.2 &55.0 \\ ADFL w/o $L_{adv}$ &92.7 & \textbf{83.5} & 60.6 \\ ADFL & \textbf{95.5}& 83.0 & \textbf{60.7} \\ \bottomrule \end{tabular}% \label{tab:res_oulu}% \end{table}% \section{Conclusions} In this paper, we focus on the VIS-NIR face verification problem. An adversarial discriminative feature learning framework is developed by introducing adversarial learning in both the raw-pixel space and the compact feature space. In the raw-pixel space, a powerful generative adversarial network is employed to perform cross-spectral face hallucination, using a two-path architecture that is carefully designed to alleviate the absence of paired images in NIR-VIS transfer. As for the feature space, we utilize an adversarial loss and a high-order variance discrepancy loss to measure the global and local discrepancy between the feature distributions of heterogeneous data respectively. The proposed cross-spectral face hallucination and adversarial discriminative learning are embedded in an end-to-end adversarial network, resulting in a compact 256-dimensional feature representation. Experimental results on three challenging NIR-VIS face databases demonstrate the effectiveness of the proposed method in NIR-VIS face verification.
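As a closing aside, the class-wise variance discrepancy $L_{\text{CVD}}$ of Eqs. (7)-(8) can be sketched in PyTorch-style code (tensor shapes and names are illustrative assumptions):
\begin{verbatim}
import torch

def cvd_loss(vis_feats, nir_feats, labels_v, labels_n):
    """Match per-class feature variances across VIS and NIR."""
    loss = 0.0
    for c in torch.unique(labels_v):
        fv = vis_feats[labels_v == c]   # VIS features of class c
        fn = nir_feats[labels_n == c]   # NIR features of class c
        if len(fv) < 2 or len(fn) < 2:
            continue                    # variance needs >= 2 samples
        var_v = fv.var(dim=0, unbiased=False)
        var_n = fn.var(dim=0, unbiased=False)
        loss = loss + torch.norm(var_v - var_n, p=2)
    return loss
\end{verbatim}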
\section{Introduction} The top quark was discovered at the Tevatron proton-antiproton collider at Fermilab in 1995~\cite{top_discovery}. The study of its properties continues even today. The top is the heaviest particle in the Standard Model (SM), with a mass $\sim$175 GeV that differs widely from those of the other fundamental fermions. This seems to suggest that the top quark may have a role to play in electroweak symmetry breaking, and prompts us to ask whether the top quark has couplings different from, and in addition to, those of the other quarks. Various anomalous couplings of the top have been discussed in Ref.~\cite{eff_terms}. Of these, the ones that pertain to the QCD sector form the subject of this study. Large anomalous couplings may arise in a plethora of models~\cite{relevant_models}, contributing to higher order corrections to the $ttg$ vertex. In a model-independent framework, the lowest-dimensional anomalous coupling of the top with the gluon can be parametrized by extra terms in the interaction Lagrangian of the form \begin{equation} {\cal L}_{int} \ni \frac{g_s}{\Lambda} \, F^{\mu\nu}_a \; \bar t \sigma_{\mu\nu} (\rho + i \, \rho' \, \gamma_5) \, T_a \, t \label{lagrangian} \end{equation} where $\Lambda$ denotes the scale of the effective theory. While $\rho$ represents the anomalous chromomagnetic dipole moment of the top, $\rho'$ indicates the presence of a ($CP$-violating) chromoelectric dipole moment. Within the SM, $\rho'$ is non-zero only at the three-loop level and is, thus, tiny. $\rho$, on the other hand, receives a contribution at the one-loop level and is ${\cal O}(\alpha_s/\pi)$ for $\Lambda \sim m_t$. Evidence for a larger $\rho$ or $\rho'$ would be a strong indicator of new physics lurking nearby. Whereas both $\rho$ and $\rho'$ can, in general, be complex, note that any imaginary part thereof denotes absorptive contributions and would render the Lagrangian non-Hermitian. We desist from considering such a possibility. The phenomenological consequences of such anomalous couplings have been considered earlier in Ref.~\cite{previous}. We reopen the issue in light of the improved measurements of the top quark mass and the $t\bar t$ cross-section, and the first reported measurement of the $t\bar t$ invariant mass distribution. \section{Hadron Collider Prospects} The inclusion of a chromomagnetic moment term leads to a modification of the vertex factor for the usual $ttg$ interaction to $ig_s[\gamma^{\alpha} + (2 \, i \, \rho / \Lambda) \, \sigma^{\alpha\mu}k_{\mu}]T^a$, where $k$ is the momentum of the gluon coming into the vertex. An additional quartic interaction involving two top quarks and two gluons is also generated, with the corresponding vertex factor being $(2 \, i \,g_s^2 \, \rho / \Lambda) \, f_{abc}\sigma^{\alpha\beta}T^c$. The changes in the presence of the chromoelectric dipole moment term are analogous, with $\rho$ above being replaced by $ (i \, \rho' \, \gamma_5)$. At a hadron collider, the leading order contributions to $t\bar t$ production come from the $q\bar q \rightarrow t\bar t$ and $gg \rightarrow t\bar t$ sub-processes. Detailed expressions for the differential cross-sections can be found in Ref.~\cite{chromo_top} as well as Ref.~\cite{previous}. Using these results, we compute the expected $t\bar t$ cross-section at the Tevatron and the LHC. We use the CTEQ6L1 parton distribution sets~\cite{CTEQ} with $m_{t}$ as the scale for both factorization and renormalization.
For a consistent comparison with the cross-section measurement reported by the CDF collaboration~\cite{CDF_top_csec}, we use $m_{t} = 172.5$ GeV for the Tevatron analysis. For the LHC analysis, though, we use the updated value of $m_{t} = 173.1$ GeV, obtained from the combined CDF+D\O{} analysis~\cite{CDF-D0}. To incorporate the higher order corrections absent in our leading order results, we use the $K$-factors at the NLO+NLL level\footnote{In the absence of a similar calculation incorporating anomalous dipole moments, we use the same \mbox{$K$-factor} as obtained for the SM case. While this is not entirely accurate, given that the color structure is similar, the error associated with this approximation is not expected to be large.} calculated by Cacciari et al.~\cite{Cacciari}. Once this is done, the theoretical errors in the calculation owing to the choice of PDFs and scale are approximately 7-8\% for the Tevatron and 9-10\% for the LHC~\cite{Cacciari}. However, the estimates reported for the LHC operating at 7 TeV are only leading-order ones, since NLO calculations for this energy are, so far, unavailable. \subsection{Tevatron Results} At the Tevatron, the dominant contribution accrues from the $q \bar q$ initial states, even on the inclusion of the dipole moments. Fig.\ref{fig:contour}($a$) displays the parameter space that is still allowed by the Tevatron data, namely~\cite{CDF_top_csec} \begin{equation} \sigma_{t\bar t}(m_t = 172.5~{\rm GeV}) = (7.50 \pm 0.48) \, {\rm pb}\,. \end{equation} The central region of the plot shows that the data allow for large values of the dipole moments. This is essentially due to cancellations between the various terms contributing to the cross-section. \begin{figure}[!htbp] \centering \subfigure[] { \includegraphics[scale=0.75]{135_contour.eps} } \subfigure[] { \includegraphics[scale=0.75]{totcsec_TeV.eps} } \caption{\em {\em (a)} The region in the ($\rho/\Lambda$)-($\rho'/\Lambda$) plane allowed by the Tevatron data~{\em\cite{CDF_top_csec}} at the 1-$\sigma$, 3-$\sigma$ and 5-$\sigma$ level. {\em (b)} $t\bar t$ production rates for the Tevatron ($\sqrt{s}$ = {\em 1.96 TeV)}. The horizontal lines denote the CDF central value and the 3-$\sigma$ interval~{\em\cite{CDF_top_csec}}.} \label{fig:contour} \end{figure} Having seen the extent to which cancellations may, in principle, be responsible for hiding the presence of substantial dipole moments, we now restrict ourselves to the case where only one of $\rho$ and $\rho'$ may be non-zero. If only one of the two couplings is to be non-zero, we may rescale $\rho, \rho' = 0, \pm 1$ and, thus, reduce the parameter space to one dimension ($\Lambda$). $\rho' = \pm 1$ are equivalent, as the cross-section only depends on even powers of $\rho'$. Fig.\ref{fig:contour}($b$) exhibits the corresponding dependence of the total cross-section at the Tevatron on $\Lambda$ for various combinations of $(\rho, \rho')$. It can be seen that $\Lambda \lesssim 7400~{\rm GeV}$ can be ruled out at the 99\% confidence level for the $\rho = +1$ case. For $\rho = -1$, on the other hand, $\Lambda \lesssim 9000~{\rm GeV}$ can be ruled out at the same confidence level. Naively, one expects similar sensitivity for $\rho=+1$ and $\rho=-1$; the difference essentially owes its origin to the slight discrepancy between the SM expectations (as computed with our choices) and the experimental central value. The sensitivity to the chromoelectric moment is low. This is understandable, as the corresponding contribution is suppressed by at least $\Lambda^2$.
The cross-sections considered above depend on inverse powers of $\Lambda$ up to $\Lambda^{-4}$. However, the Lagrangian considered in Eqn.\ref{lagrangian} contains only the lowest-dimensional anomalous operators of an effective theory. Higher-dimensional operators~\cite{dim_6}, if included in the Lagrangian, could change the behaviour of the cross-sections and hence the conclusions drawn from Fig.\ref{fig:contour}($b$). A closer examination of this issue (see Fig.\ref{fig:chisq}($a$)) reveals that, were we to neglect ${\cal O}(\Lambda^{-2})$ terms, the shape of the curves would indeed change, but the limits on $\Lambda$ for either of $\rho = \pm 1$ would hardly alter. \begin{figure}[!htbp] \centering \subfigure[] { \includegraphics[scale=0.75]{totcsec_TeV_comp.eps} } \subfigure[] { \includegraphics[scale=0.75]{chisq.eps} } \caption{\em {\em (a)} Comparison of production rates obtained at the Tevatron with truncated cross-sections (up to ${\cal O}(\Lambda^{-1})$; denoted by subscript $\Lambda$ in the key) and full cross-sections (all orders in $1/\Lambda$). {\em (b)} $\chi^2$ per degree of freedom obtained by fitting the $m_{t\bar t}$ spectrum. } \label{fig:chisq} \end{figure} Yet another measurement reported by the Tevatron is the invariant mass distribution~\cite{CDF_mtt}. This data can be used to put further constraints on the values of $\rho$ and $\rho'$. In the reported measurement, the first bin, which extends over the range 0--350 GeV, also has a non-zero number of events, an artefact of the experimental errors associated with the reconstruction of the $t\bar t$ events. For our analysis, we exclude this bin. Further, we normalize our calculated $m_{t\bar t}$ distribution so that for the SM case it matches the CDF simulation. As a statistic, we consider a $\chi^2$ defined through \[ \chi^2 = \sum_{i = 2}^9 \left(\frac{\sigma^{\rm th}_i - \sigma^{\rm obs}_i}{\delta \sigma_i} \right)^2 \] where the sum runs over the bins and $\sigma^{\rm th}_i$ is the number of events expected in a given theory (defined by the values of $\rho, \rho', \Lambda$) in a particular bin. $\sigma^{\rm obs}_i$ and $\delta \sigma_i$ are the observed event numbers and the errors therein. The $\chi^2$ values thus obtained are plotted as a function of $\Lambda$ in Fig.\ref{fig:chisq}($b$). It is interesting to note that the $\rho = -1$ case gives a better fit than the SM over a large range of $\Lambda$ values, while $\rho = +1$ is now strongly disfavoured up to much higher values of $\Lambda$. Even for the chromoelectric moment case ($\rho' \neq 0$), the increase in sensitivity is evident. However, in all of this, we wish to tread with caution. This distribution has been constructed on the basis of only 2.7 $fb^{-1}$ of data. Robust limits may be obtained once more statistics has been accumulated and a more realistic simulation, with the inclusion of the effects of the dipole moment terms, has been carried out. \subsection{LHC Sensitivity} At the LHC, the $gg$ flux dominates, especially at smaller $\hat s$ values. In Fig.\ref{fig:tot_LHC}, we present the cross-sections at the LHC for various values of the proton-proton center-of-mass energy $\sqrt{s}$. In the absence of any data, we can only compare these with the SM expectations and the estimated errors~\cite{CMS-PAS}.
\begin{figure}[!htbp] \centering \subfigure[] { \includegraphics[width=1.7in,height=2in]{totcsec_LHCS.eps} } \subfigure[] { \includegraphics[width=1.7in,height=2in]{totcsec_LHCE.eps} } \subfigure[] { \includegraphics[width=1.7in,height=2in]{totcsec_LHC.eps} } \caption{\em $t\bar t$ production rates for the LHC as a function of the new physics scale $\Lambda$. Panels from left to right correspond to $\sqrt{s}$ = {\em 7, 10, 14 TeV}. The horizontal lines show the SM expectation and the 10\% and 20\% intervals as estimates of errors in the measurement~{\em \cite{CMS-PAS}}.} \label{fig:tot_LHC} \end{figure} For non-zero $\rho'$, an early run of the LHC with $\sqrt{s}$ = 7 TeV (Fig.\ref{fig:tot_LHC}$a$) would be sensitive to $\Lambda \lesssim 2700~{\rm GeV}$. The improvement of the sensitivity with the machine energy is marginal at best. For $\rho = +1$, naively a sensitivity up to about $\Lambda \sim 10~{\rm TeV}$ could be expected. For $\rho = -1$, on the other hand, it appears that the best that the LHC can do is to rule out $\Lambda \lesssim 8~{\rm TeV}$. This, however, should be compared with the Tevatron results, which have already ruled out $\Lambda \lesssim 9~{\rm TeV}$. \subsection{Summary of Limits from Hadron Colliders} Rephrasing the above results in terms of the notation commonly used in the literature: \begin{center} $\dfrac{1}{\Lambda}(\rho + i\rho') \longleftrightarrow \dfrac{1}{2m_t}(\kappa + i\tilde\kappa)$ \hspace{15pt} : \hspace{15pt} $-0.038 \leq \kappa \leq 0.034$ \hspace{5pt} and \hspace{5pt} $|\tilde\kappa| \leq 0.12$ \end{center} \section{Linear Collider Prospects} An electron-positron collider would be the ideal ground for probing anomalous electroweak couplings of the top quark. However, anomalous top-gluon couplings would play a role in the process $e^+e^- \rightarrow t\bar tg$. This has been studied in Refs.~\cite{Rizzo_94} and \cite{Rizzo_96}, where it was shown that the energy distribution of the gluon is sensitive to such anomalous couplings. Limits on the couplings were obtained by fitting the energy spectrum of the gluon, assuming that there is no excess in the total production cross-section. Some of the results from Ref.~\cite{Rizzo_96} are shown in Fig.\ref{fig:Rizzo}. \begin{figure}[!htbp] \centering \subfigure[] { \includegraphics[width=2.6in,height=2.6in,angle=-90]{rizzo_fig1.ps} } \subfigure[] { \includegraphics[width=2.6in,height=2.6in,angle=-90]{rizzo_fig4.ps} } \caption{\em Reproduced from Ref.~{\em \cite{Rizzo_96}}. Shows the 95\% CL allowed region for {\em (a)} $\sqrt{s}$ = {\em 500 GeV} ; ${\cal L}$ = {\em 50 $fb^{-1}$}(solid), {\em 100 $fb^{-1}$}(dotted) ; $E_g^{min}$ = {\em 25 GeV}. {\em (b)} $\sqrt{s}$ = {\em 1 TeV} ; ${\cal L}$ = {\em 100 $fb^{-1}$}(solid), {\em 200 $fb^{-1}$}(dotted) ; $E_g^{min}$ = {\em 25 GeV}. } \label{fig:Rizzo} \end{figure} Considering only one of $\kappa$ and $\tilde\kappa$ to be non-zero at a time, the dotted curve in Fig.\ref{fig:Rizzo}($a$) implies \mbox{$-0.015 \leq \kappa \leq 0.033$} and \mbox{$|\tilde\kappa| \leq 0.47$}. With an increase in center-of-mass energy and luminosity, these limits may be improved, as indicated by Fig.\ref{fig:Rizzo}($b$). Here, the dotted curve leads to \mbox{$-0.024 \leq \kappa \leq 0.026$} and \mbox{$|\tilde\kappa| \leq 0.14$}. Comparing this to the limits expected from hadron colliders listed in the previous section, it can be seen that, at a linear collider, better sensitivity may be expected for $\kappa$ but not for $\tilde\kappa$.
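As a rough numerical illustration of this dictionary (a back-of-the-envelope translation rather than an independent limit), a bound on $\Lambda$ derived for $|\rho| = 1$ corresponds to $|\kappa| \lesssim 2m_t/\Lambda$. For instance, the Tevatron exclusion of $\Lambda \lesssim 9000~{\rm GeV}$ for $\rho = -1$ translates into \begin{equation} |\kappa| \; \lesssim \; \frac{2\,m_t}{\Lambda} \; = \; \frac{2 \times 172.5~{\rm GeV}}{9000~{\rm GeV}} \; \approx \; 0.038 \, , \end{equation} which reproduces the lower end of the $\kappa$ range quoted in the summary above.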
At a linear collider, there also exists the possibility of collisions using a polarized beam. This too was studied in Ref.~\cite{Rizzo_96}. However, it was found that using a polarized beam does not yield better limits on either the chromomagnetic or the chromoelectric dipole moment than can be obtained with unpolarized beams. \section*{\begin{center}Acknowledgement\end{center}} The author would like to thank the organisers of LCWS10 for support and hospitality, and the \mbox{ILC-India Forum} for funding the trip to Beijing to attend this conference.
\section{Introduction} \label{sec:intro} This paper is the first in a series devoted to the development of a rigorous renormalisation group method. We develop the method with the specific goal of providing the necessary ingredients for our analysis of the critical behaviour of the continuous-time weakly self-avoiding walk in dimension~4 \cite{BBS-saw4-log,BBS-saw4}, via its representation as a supersymmetric field theory involving both boson and fermion fields \cite{BIS09}. However, our approach is more general, and also applies in other settings, including purely bosonic or purely fermionic field theories. In particular, it is applied to the 4-dimensional $n$-component $|\varphi|^4$ model in \cite{BBS-phi4-log}. Other approaches to the rigorous renormalisation group are discussed in \cite{Bryd09}. In the renormalisation group approach, we are interested in performing a Gaussian integral with respect to a positive-definite covariance operator $C$. The integration is performed progressively: the covariance is decomposed as a sum of positive-definite terms $C=C_1+C'$ and the original integral is equal to a convolution of Gaussian integrals with respect to $C_1$ and $C'$. A proof that decomposition of the covariance corresponds to convolution of Gaussian integrals can be found for our context in \cite{BI03d}, but we will give a self-contained proof here within our current formalism and notation. In order to perform analysis with Gaussian integrals, it is necessary to define suitable norms. In this paper, we define an algebra $\Ncal$ and the $T_\phi$ \emph{semi-norm} on $\Ncal$, and prove that the $T_\phi$ semi-norm obeys an essential product property. We prove several estimates for the $T_\phi$ semi-norm, which are essential for our renormalisation group method, including estimates for Gaussian integrals. In addition, as an example of use of the $T_\phi$ semi-norm, and as preparation for more detailed estimates obtained in \cite{BS-rg-IE}, we prove a preliminary estimate for the self-avoiding walk interaction. The concepts and results from this paper that are needed in subsequent papers in the series are summarised in Section~\ref{sec:gint}, which pertains to Gaussian integration, and in Section~\ref{sec:Tphi-props}, which pertains to norms and norm estimates. Most of the proofs are deferred to later sections. \section{Gaussian integration} \label{sec:gint} \subsection{Fields and the algebra \texorpdfstring{$\Ncal$}{Ncal}} \label{sec:Ncal} Given a finite set $\pmb{\Lambda}$, and $p\in\Nbold$, let $\pmb{\Lambda}^p$ denote the $p$-fold cartesian product of $\pmb{\Lambda}$ with itself, so that elements of $\pmb{\Lambda}^p$ are sequences of elements of $\pmb{\Lambda}$ of \emph{length} $p$. We define $\pmb{\Lambda}^0=\{\varnothing\}$ to be the set whose element is the empty sequence. Then $\pmb{\Lambda}^{*} = \sqcup_{p=0}^\infty \pmb{\Lambda}^p$ is the set of arbitrary finite sequences of elements of $\pmb{\Lambda}$, of any length, including zero. We typically denote the length of $z \in \pmb{\Lambda}^{*}$ as $p=p (z)$ or $q=q(z)$, and, for $z \in \pmb{\Lambda}^{*}$, we write $z! = p(z)!$. For $z', z'' \in \pmb{\Lambda}^{*}$ we define the \emph{concatenation} $z'\circ z''$ to be the sequence in $\pmb{\Lambda}^{*}$ whose elements are the elements of $z'$ followed by the elements of $z''$. Let $\pmb{\Lambda}_b$ be any finite set. An element of $\Rbold^{\pmb{\Lambda}_b}$ is called a \emph{boson field}, and can be written as $\phi = (\phi_{x},\;x \in \pmb{\Lambda}_{b})$. 
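The following toy illustration of the sequence notation is included for orientation only and plays no role in the sequel. \begin{example} Suppose that $\pmb{\Lambda} = \{a,b\}$. Then $z = (a,b,b) \in \pmb{\Lambda}^{3} \subset \pmb{\Lambda}^{*}$ has length $p(z) = 3$ and $z! = 3! = 6$, and the concatenation of $z' = (a)$ with $z'' = (b,b)$ is $z' \circ z'' = (a,b,b)$. Similarly, if $\pmb{\Lambda}_b = \{a,b\}$, then a boson field is simply a pair of real numbers $(\phi_a,\phi_b)$. \end{example}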
Let $\Rcal=\Rcal(\pmb{\Lambda}_b)$ denote the ring of smooth functions from $\Rbold^{\pmb{\Lambda}_b}$ to $\mathbb{C}$. Here \emph{smooth} means having at least $p_\Ncal$ continuous derivatives, where $p_\Ncal$ is a parameter at our disposal. Let $\pmb{\Lambda}_f$ be a finite set and let $\pmb{\Lambda} = \pmb{\Lambda}_b \sqcup \pmb{\Lambda}_f$. The \emph{fermion field} $\psi = (\psi_{y}, y \in \pmb{\Lambda}_{f})$ is a set of anticommuting generators for an algebra $\Ncal=\Ncal (\pmb{\Lambda})$ over the ring $\Rcal$. In particular, $\psi_y^2=0$ for all $y \in \pmb{\Lambda}_f$. By definition, $\Ncal$ consists of elements $F$ of the form \begin{equation} \label{e:K} F = \sum_{y \in \pmb{\Lambda}_f^*} \frac{1}{y!} F_y \psi^y , \end{equation} where each coefficient $F_{y}$ is an element of $\Rcal$, and \begin{equation} \lbeq{psiy} \psi^y = \begin{cases} 1 & \text{if $q(y)=0$} \\ \psi_{y_1}\cdots \psi_{y_q} & \text{if $q \geq 1$ and $y=(y_1,\ldots,y_q)$}. \end{cases} \end{equation} We always require $F_{y}$ to be antisymmetric under permutation of the components of $y$; this ensures that the representation \refeq{K} is unique. We denote $F_{y}$ evaluated at $\phi$ by $F_{y} (\phi)$, and write $F(\phi)=\sum_{y \in \pmb{\Lambda}_f^*} \frac{1}{y!} F_y(\phi) \psi^y$. Given $x\in \pmb{\Lambda}_b^*$, we define $\phi^x$ in the same way as \refeq{psiy}. \begin{defn} \label{def:Npoly} For $A$ a nonnegative integer, we say that $F \in \Ncal$ is a \emph{polynomial of degree} $A$ if there are coefficients $F_{x,y}\in\mathbb{C}$ such that $F(\phi) = \sum_{x,y: p(x)+q(y) \le A} \frac{1}{x!y!} F_{x,y} \phi^x \psi^y$, with $F_{x,y}\neq 0$ for some $x,y$ with $p(x)+q(y)=A$. \end{defn} Polynomial elements of $\Ncal$ play an important role in our analysis. An example of a polynomial of degree 2 is $\phi_w\phi_x + \psi_y\psi_z$, for some $w,x\in \pmb{\Lambda}_b$ and $y,z\in\pmb{\Lambda}_f$. \subsection{Fermionic Gaussian integration} Let $\pmb{\Lambda}'_{b}$ and $\pmb{\Lambda}'_{f}$ be sets, with an order specified on the elements of $\pmb{\Lambda}'_f$. We integrate over fields labelled by elements of these sets, starting in this section with the fermion fields labelled by $\pmb{\Lambda}'_{f}$, and then in Section~\ref{sec:bgi} with the boson fields with labels in $\pmb{\Lambda}'_{b}$. We define the monomial $\psi^{\pmb{\Lambda}'_{f}}$ to be the product of the generators in the specified order. Let $\pmb{\Lambda}' = \pmb{\Lambda}'_{b}\sqcup \pmb{\Lambda}'_{f}$. We write $F \in \Ncal (\pmb{\Lambda}\sqcup \pmb{\Lambda}'_b)$ for the algebra $\Ncal$ with fermion fields indexed by $\pmb{\Lambda}_f$ and boson fields indexed by $\pmb{\Lambda}_b \sqcup \pmb{\Lambda}_b'$, and $F \in \Ncal (\pmb{\Lambda}\sqcup \pmb{\Lambda}')$ for the algebra $\Ncal$ with fermion fields indexed by $\pmb{\Lambda}_f\sqcup \pmb{\Lambda}_f'$ and boson fields indexed by $\pmb{\Lambda}_b \sqcup \pmb{\Lambda}_b'$. \begin{defn} \label{def:grassman-integration} The \emph{Grassmann integral} is the linear map $\int_{\pmb{\Lambda}'_{f}}:\Ncal (\pmb{\Lambda}\sqcup\pmb{\Lambda}')\rightarrow \Ncal (\pmb{\Lambda}\sqcup\pmb{\Lambda}'_{b})$ uniquely defined by the conditions: \\ (a) for all $F \in \Ncal (\pmb{\Lambda}\sqcup \pmb{\Lambda}'_b)$, $\int_{\pmb{\Lambda}'_{f}} F\psi^{y'} = 0$ whenever the elements of $y'\in (\pmb{\Lambda}_f')^*$ do not form an enumeration of $\pmb{\Lambda}_f'$, and \\ (b) $\int_{\pmb{\Lambda}'_{f}} F\psi^{\pmb{\Lambda}'_{f}} = F$ for all $F \in \Ncal (\pmb{\Lambda}\sqcup \pmb{\Lambda}'_b)$. 
\end{defn} The classic reference for Grassmann integration is \cite{Bere66}; accessible and more modern treatments can be found in \cite{CSS13,FKT02,Salm99}. Given an antisymmetric invertible $\pmb{\Lambda}'_{f} \times \pmb{\Lambda}'_{f}$ matrix $\pmb{A}_{f}$, let \begin{equation} \label{e:Sfdef} S_{f} = \frac{1}{2}\sum_{u,v \in \pmb{\Lambda}'_{f}} \pmb{A}_{f;u,v} \psi_{u} \psi_{v} . \end{equation} Since the generators anti-commute and since $\pmb{\Lambda}'_{f}$ is finite, the series $\sum_{n=0}^\infty \frac{1}{n!} (-S_{f})^{n}$ terminates after finitely many terms, and therefore defines an element of $\Ncal (\pmb{\Lambda}')$, and hence also of $\Ncal (\pmb{\Lambda}\sqcup\pmb{\Lambda}')$. We denote this element by $e^{-S_{f}}$. Let $\pmb{C}_{f}$ be the inverse of $\pmb{A}_{f}$. The Grassmann analogue of Gaussian integration is the linear map $\mathbb{E}_{\pmb{C}_f}: \Ncal (\pmb{\Lambda}\sqcup\pmb{\Lambda}') \to \Ncal (\pmb{\Lambda}\sqcup\pmb{\Lambda}'_{b})$ defined by \begin{equation} \label{e:ECf} \mathbb{E}_{\pmb{C}_f} F = N_{f}\int_{\pmb{\Lambda}'_{f}}e^{-S_{f}}F , \quad\quad F \in \Ncal (\pmb{\Lambda}\sqcup\pmb{\Lambda}' ) , \end{equation} where $N_{f}$ is a normalisation constant such that $ \mathbb{E}_{\pmb{C}_f} 1=1$. It is a consequence of \cite[(3.16)]{Bere66} that \begin{equation} \label{e:Pfaff} N_{f} = (\det \pmb{C}_{f})^{1/2} . \end{equation} The choice of square root depends on the order we have chosen for $\pmb{\Lambda}'_{f}$. We will be specific below in a less general setting. \subsection{Bosonic Gaussian integration} \label{sec:bgi} Given a real symmetric positive-definite $\pmb{\Lambda}'_{b} \times \pmb{\Lambda}'_{b}$ matrix $\pmb{A}_b$, and given $\phi \in \Rbold^{\pmb{\Lambda}_b'}$, let \begin{equation} \label{e:Sbdef} S_{b} = \frac{1}{2}\sum_{u,v \in \pmb{\Lambda}'_{b}} \pmb{A}_{b;u,v} \phi_{u} \phi_{v} . \end{equation} The matrix $\pmb{A}_b$ has positive eigenvalues and therefore an inverse matrix $\pmb{C}_b$ exists. The Gaussian expectation $\mathbb{E}_{\pmb{C}_b} : \Ncal(\pmb{\Lambda}\sqcup\pmb{\Lambda}'_{b}) \to \Ncal (\pmb{\Lambda})$ is the linear map defined as follows. Let $D\phi$ be Lebesgue measure on $\Rbold^{\pmb{\Lambda}_b'}$. For $F \in \Rcal(\pmb{\Lambda}_b\sqcup\pmb{\Lambda}'_b)$, we define \begin{equation} \label{e:ECb} \mathbb{E}_{\pmb{C}_b} F = N_{b}\int_{\Rbold^{\pmb{\Lambda}'_{b}}} e^{-S_{b}}F \,D\phi , \end{equation} where $N_{b}$ is chosen so that $\mathbb{E}_{\pmb{C}_b} 1=1$. It is a standard fact about Gaussian integrals that $N_{b}$ is given by the positive square root \begin{equation} \label{e:Nb} N_b = \left( \det (2\pi \pmb{C}_b) \right)^{-1/2}. \end{equation} Of course $\mathbb{E}_{\pmb{C}_b}$ is only defined on elements of $\Ncal (\pmb{\Lambda}\sqcup\pmb{\Lambda}'_{b})$ whose coefficients do not grow too rapidly at infinity. For $F=\sum_{y \in \pmb{\Lambda}_f^*} \frac{1}{y!}F_y \psi^y \in \Ncal(\pmb{\Lambda}\sqcup\pmb{\Lambda}'_{b})$, we define \begin{equation} \mathbb{E}_{\pmb{C}_b} F = \sum_{y \in \pmb{\Lambda}_f^*}\frac{1}{y!} (\mathbb{E}_{\pmb{C}_b} F_y) \psi^y. \end{equation} \subsection{Combined bosonic-fermionic Gaussian integration on \texorpdfstring{$\Ncal$}{Ncal}} \label{sec:Grass} Let $\pmb{C}$ denote the pair $\pmb{C}_b,\pmb{C}_f$.
Given matrices $\pmb{A}_{f}$ and $\pmb{A}_{b}$ as above, we define the \emph{combined bosonic-fermionic expectation} to be the linear map $\mathbb{E}_{\pmb{C}} : \Ncal (\pmb{\Lambda}\sqcup \pmb{\Lambda}') \to \Ncal (\pmb{\Lambda})$ given by \begin{equation} \label{e:Ecomb} \mathbb{E}_{\pmb{C}} = \mathbb{E}_{\pmb{C}_{b}} \mathbb{E}_{\pmb{C}_{f}}, \end{equation} where $\mathbb{E}_{\pmb{C}_{b}}$ acts only on bosons, and $\mathbb{E}_{\pmb{C}_f}$ acts only on fermions. By linearity, the action of $\mathbb{E}_{\pmb{C}}$ is determined by its action on $KF$ where $K \in \Ncal(\pmb{\Lambda} \sqcup \pmb{\Lambda}_b')$ and $F$ is a monomial in the generators indexed by $\pmb{\Lambda}_f'$. The map $\mathbb{E}_{\pmb{C}}$ is defined, for such $K,F$, by \begin{align} \label{e:ECbf} \mathbb{E}_{\pmb{C}} KF = ( \mathbb{E}_{\pmb{C}_{b}}K)( \mathbb{E}_{\pmb{C}_{f}}F) &= \Big(N_b \int_{\Rbold^{\pmb{\Lambda}'_{b}}} e^{-S_b }K \,D\phi\Big) \Big(N_{f}\int_{\pmb{\Lambda}'_{f}} e^{-S_{f}}F\Big) . \end{align} On the right-hand side, the boson and fermion fields corresponding to $\pmb{\Lambda}'$ have been integrated out, leaving dependence only on the fields corresponding to $\pmb{\Lambda}$. \subsection{The Laplacian} It is ordinary calculus to differentiate a function $f\in \Rcal(\pmb{\Lambda}_b)$ with respect to the components $\phi_u$ of the boson field, for $u \in \pmb{\Lambda}_b$. The following definition extends this calculus by providing the standard Grassmann analogue of differentiation with respect to the fermion field (see, e.g., \cite{Bere66,FKT02,Salm99}). \begin{defn} \label{def:iu} For $u \in \pmb{\Lambda}_f$, the linear map $i_{u} : \Ncal (\pmb{\Lambda}) \rightarrow \Ncal (\pmb{\Lambda} )$ is defined uniquely by the conditions: \\ (a) $i_{u} (f \psi^y)= f i_u \psi^y$ for $ f \in \Rcal(\pmb{\Lambda}_b)$, $y \in \pmb{\Lambda}_f^*$, \\ (b) $i_u$ acts as an anti-derivation on products of factors of $\psi$, namely $i_u(\psi^{y_1}\psi^{y_2}) = (i_u\psi^{y_1})\psi^{y_2} + (-1)^{p_1}\psi^{y_1}(i_u\psi^{y_2})$, for $y_1,y_2\in \pmb{\Lambda}_f^*$ and $p_1$ the length of $y_1$, and \\ (c) $i_u \psi_v = \delta_{u,v}$ for $u,v\in\pmb{\Lambda}_f$, where the right-hand side is the Kronecker delta. \\ It is natural, and also standard, to write \eq i_u = \frac{\partial}{\partial \psi_{u}}. \en By (b) and (c), these operators anti-commute with each other: $i_ui_v = -i_vi_u$. \end{defn} Suppose that there is a bijection $x \mapsto x' =x'(x)$ between a subset of $\pmb{\Lambda}$ and $\pmb{\Lambda} '$. The elements of $\pmb{\Lambda}$ where the bijection is not defined are called \emph{external}; they do not participate in any integrations. We extend the matrices $\pmb{C}_b,\pmb{C}_f$ to $\pmb{\Lambda}_b \times \pmb{\Lambda}_b$ and $\pmb{\Lambda}_f \times \pmb{\Lambda}_f$, respectively, by setting $\pmb{C}_{b;u',v'}= \pmb{C}_{f;u',v'}=0$ when $u'$ or $v'$ is undefined. We write $\pmb{C}$ for the pair $\pmb{C}_b,\pmb{C}_f$. The Laplacian operator $\Delta_{\pmb{C}} : \Ncal (\pmb{\Lambda} ) \rightarrow \Ncal (\pmb{\Lambda} )$ is then defined by \begin{equation} \label{e:LapC} \Delta_{\pmb{C}} = \sum_{u,v \in \pmb{\Lambda}_{b}} \pmb{C}_{b;u,v} \frac{\partial}{\partial \phi_{u}} \frac{\partial}{\partial \phi_{v}} + \sum_{u,v \in \pmb{\Lambda}_{f}} \pmb{C}_{f;u,v} \frac{\partial}{\partial \psi_{u}} \frac{\partial}{\partial \psi_{v}} , \end{equation} where the first term on the right-hand side acts only on the coefficients $F_y(\phi)$ of $F \in \Ncal$, while the second acts only on the fermionic part $\psi^y$.
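The following elementary computation is included only as an illustration of Definition~\ref{def:iu} and \refeq{LapC}; it is not needed later. \begin{example} Let $u,v \in \pmb{\Lambda}_{b}$, let $u',v' \in \pmb{\Lambda}_{f}$, and let $F = \phi_u \phi_v + \psi_{u'}\psi_{v'}$. Then \eq \Delta_{\pmb{C}} F = 2 \pmb{C}_{b;u,v} - 2 \pmb{C}_{f;u',v'} , \en since $\frac{\partial}{\partial \psi_{a}}\frac{\partial}{\partial \psi_{b}} (\psi_{u'}\psi_{v'}) = \delta_{b,u'}\delta_{a,v'} - \delta_{b,v'}\delta_{a,u'}$, and since $\pmb{C}_b$ is symmetric while $\pmb{C}_f$ is antisymmetric. In particular, $\Delta_{\pmb{C}} F$ is a constant, so $\Delta_{\pmb{C}}^2 F = 0$. \end{example}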
\subsection{Gaussian integration and the convolution property} \label{sec:gi-def} \begin{example} \label{ex:conv} For a bounded function $f$ defined on $\Rbold$ and a probability measure $\mu$ on $\Rbold$, we can define the convolution $\mu \star f (x) = \int f (x+y)d\mu (y)$. The map $f \mapsto \mu \star f $ is the composition of the map $(\theta f) (x,y) = f (x + y)$ followed by integrating $y$ with respect to $\mu$. The map $\theta$ maps a function of one variable to a function of two variables. \end{example} The following definition implements the construction of Example~\ref{ex:conv} in the context of the algebra $\Ncal$. To avoid simultaneously using $\phi$ to denote a function on $\pmb{\Lambda}_{b}$ and a function on the larger space $\pmb{\Lambda}_{b} \sqcup \pmb{\Lambda}'_{b}$, we replace $\phi:\pmb{\Lambda}_{b} \sqcup \pmb{\Lambda}'_{b}\rightarrow \Rbold$ by the notation \begin{equation}\label{e:phi-extended} \phi\sqcup\xi:\pmb{\Lambda}_{b} \sqcup \pmb{\Lambda}'_{b} \rightarrow \Rbold , \end{equation} where $(\phi\sqcup\xi)_{x} = \phi_{x}$ and $(\phi\sqcup\xi)_{x'} = \xi_{x'}$. The algebra $\Ncal(\pmb{\Lambda})$ is a subset of $\Ncal(\pmb{\Lambda} \sqcup \pmb{\Lambda}')$. \begin{defn} \label{def:theta-new} Given $t\in \Rbold$, we define the algebra homomorphism $\theta_{t} : \Ncal (\pmb{\Lambda}) \rightarrow \Ncal (\pmb{\Lambda} \sqcup \pmb{\Lambda}')$ to be the unique algebra homomorphism which obeys: \\ (a) the action on generators $\theta_{t} \psi_{y} = \psi_{y} + t\psi_{y'}$, for $y \in \pmb{\Lambda}_f$, and \\ (b) the action on coefficients $(\theta_{t} f) (\phi,\xi) = f (\phi + t\xi)$, for $f \in \Rcal(\pmb{\Lambda}_b)$. \\ If $x'$ or $y'$ is not defined, as discussed above \refeq{LapC}, then the associated $\xi_{x'}$, $\psi_{y'}$ is set equal to zero. Also, on the right-hand side in (b), we interpret $(\phi+t\xi)_x$ as $\phi_x + t\xi_{x'(x)}$. We write $\theta =\theta_{1}$. \end{defn} The following proposition states a convolution property of Gaussian integrals that is at the heart of the renormalisation group method. \begin{prop} \label{prop:conv} For covariances $\pmb{C}_1,\pmb{C}_2$ and for $F \in \Ncal(\pmb{\Lambda})$ such that both sides of \refeq{conv} are well-defined, \begin{equation} \label{e:conv} (\mathbb{E}_{\pmb{C}_2} \theta \circ \mathbb{E}_{\pmb{C}_1} \theta ) F = \mathbb{E}_{\pmb{C}_2+\pmb{C}_1}\theta F. \end{equation} Moreover, if $P \in \Ncal(\pmb{\Lambda})$ is a polynomial of finite degree, as in Definition~\ref{def:Npoly}, then \begin{equation} \label{e:ELap} \mathbb{E}_{\pmb{C}} \theta P = e^{\frac{1}{2} \Delta_{\pmb{C}}} P. \end{equation} \end{prop} The identity \refeq{conv} follows immediately from \refeq{ELap} for polynomial $F$, but \refeq{conv} holds more generally. A proof of Proposition~\ref{prop:conv} is given in Section~\ref{sec:Gihe}. The convolution property \refeq{conv} is standard (see, e.g., \cite{FKT02} for the purely fermionic version), but our proof follows the approach in \cite{BI03d} which extends the familiar connection \refeq{ELap} between Gaussian integration and the Laplacian to the mixed bosonic-fermionic integral. The formulas \eqref{e:ELap} and \eqref{e:LapC} compute moments. For example, if we take $P=\phi_{u} \phi_{v}$ and after evaluation of \eqref{e:ELap} set $\phi=0$, the result is $\mathbb{E}_{\pmb{C}} \xi_{u} \xi_{v} = \pmb{C}_{b;u,v}$. Similarly, by taking $P=\psi_{u} \psi_{v}$, we obtain $\mathbb{E}_{\pmb{C}} \psi_{u} \psi_{v} = - \pmb{C}_{f; u,v}$. 
Thus \eqref{e:ELap} is a generalisation of Wick's theorem (see, e.g., \cite[Lemma~2.3]{BIS09}), which is the standard formula for moments of a Gaussian measure. \subsection{Conjugate fermion field} \label{sec:cff} Suppose that $\pmb{\Lambda}_f'$ has even cardinality $2M_f$, so the Grassmann generators can be written in a list as $\bar\psi_1,\psi_1,\ldots, \bar\psi_{M_f},\psi_{M_f}$, or, more compactly, as $(\bar\psi_k,\psi_k)_{k=1,\ldots, {M_f}}$. For the Grassmann generators, there is no notion of complex conjugation, so here the bars are used only as a notational device to list the generators in pairs. However, we will still refer to the pairs of generators as conjugate generators (and see Section~\ref{sec:df} below). We use the order $\bar\psi_1, \psi_1, \bar\psi_2,\psi_2, \ldots, \bar\psi_{M_f},\psi_{M_f}$ for the generators in the definition of Grassmann integration in Definition~\ref{def:grassman-integration}. Let $A_f$ be an invertible symmetric ${M_f}\times {M_f}$ matrix, with $A_f^{-1}=C_f$. We define the matrix $\pmb{A}_f$ and its inverse matrix $\pmb{C}_f$ by \begin{equation} \label{e:Afmat} \pmb{A}_{f} = \left( \begin{array}{cc} 0&A_f\\ -A_f^T&0 \end{array} \right) , \quad\quad \pmb{C}_{f} = \left( \begin{array}{cc} 0&-C_f^T\\ C_f&0 \end{array} \right) , \end{equation} with the rows and columns labelled by $\psi_1,\ldots,\psi_{M_f},\bar\psi_1,\ldots, \bar\psi_{M_f}$. Then $S_f$ of \refeq{Sfdef} becomes \eq \label{e:Sfsym} S_{f} = \sum_{k,l=1}^{M_f} A_{f;k,l} \psi_k\bar\psi_l \en and the normalisation constant $N_f$ of \refeq{Pfaff} is \eq \label{e:Pfaffsusy} N_f = (\det \pmb{C}_f)^{1/2} = \det C_f. \en For $F$ a monomial in the Grassmann generators, let $J_F = \mathbb{E}_{\pmb{C}_{f}} F$. The evaluation of the Grassmann integral $J_F$ is standard (see, e.g., \cite[Lemma~B.7]{Salm99} or \cite[Proposition~4.1]{BIS09}). In particular, $J_F=1$ when $F=1$, $J_F=0$ when $F=\prod_{r=1}^p\bar\psi_{i_r}\prod_{s=1}^q \psi_{j_s}$ with $p\neq q$, and \begin{equation} \label{e:JF} J_F = \det C_{f;k_1,\ldots,k_p;l_1,\ldots,l_p}, \end{equation} when $F=\bar\psi_{k_1}\psi_{l_1}\cdots\bar\psi_{k_p}\psi_{l_p}$, where $C_{f;k_1,\ldots,k_p;l_1,\ldots,l_p}$ is the $p\times p$ matrix whose $r,s$ element is $C_{f;k_r,l_s}$. In particular, \eq \label{e:Epsipsi} \mathbb{E}_{\pmb{C}_{f}} \bar\psi_k \psi_l = C_{f;k,l}, \en and $C_f$ is the covariance of the conjugate fermion field. Conjugate fermion fields will be needed in Proposition~\ref{prop:EK} below. \subsection{Complex boson field} \label{sec:cbf} We now discuss a way to accommodate complex boson fields within the formalism. The boson field $\phi$ may include several species of fields, including real external fields which behave as constants during integration, and a complex field which does get integrated. To describe the latter, we suppose that $\pmb{\Lambda}_b'$ has even cardinality $2M_b$ and write the field as $u_1,v_1,\ldots, u_{M_b},v_{M_b}$. Then, for $k=1,\ldots,M_b$, we define \begin{equation} \label{e:phi-def} \phi_k = u_k + i v_k, \quad \bar\phi_k = u_k - i v_k. \end{equation} The boson field then corresponds to a complex field $(\bar\phi_k,\phi_k)_{k=1,\ldots,M_b}$. Define \begin{align} \label{e:complex-derivs} \frac{\partial}{\partial\phi_k} &= \frac 12 \left( \frac{\partial}{\partial u_k} - i\frac{\partial}{\partial v_k } \right), \quad\quad \frac{\partial}{\partial\bar\phi_k} = \frac 12 \left( \frac{\partial}{\partial u_k} + i\frac{\partial}{\partial v_k } \right).
\end{align} By definition, these obey, for $k,l=1,\ldots, M_b$, \begin{align} \frac{\partial\phi_k}{\partial\phi_l} &= \frac{\partial\bar\phi_k}{\partial\bar\phi_l} = \delta_{k,l}, \quad\quad \frac{\partial\phi_k}{\partial\bar\phi_l} = \frac{\partial\bar\phi_k}{\partial\phi_{l}} =0. \end{align} Let $A_b$ be a real invertible symmetric ${M_b}\times {M_b}$ matrix, with $A_b^{-1}=C_b$. We define the matrix $\pmb{A}_b$ and its inverse matrix $\pmb{C}_b$ by \begin{equation} \label{e:Abmat} \pmb{A}_{b} = 2 \left( \begin{array}{cc} A_b&0\\ 0&A_b \end{array} \right) , \quad\quad \pmb{C}_{b} = \frac{1}{2} \left( \begin{array}{cc} C_b&0\\ 0&C_b \end{array} \right) , \end{equation} with the rows and columns labelled by the real and imaginary parts $u_1,\ldots,u_{M_b}$, $v_1,\ldots, v_{M_b}$ of the complex boson field. Then $S_b$ of \refeq{Sbdef} becomes \begin{equation} \label{e:Sbsym} S_{b} = \sum_{k,l=1}^{M_b} A_{b;k,l} \phi_k\bar\phi_l \end{equation} and the normalisation constant $N_b$ of \refeq{Nb} is \begin{equation} N_b = \left(\det (2\pi \pmb{C}_b)\right)^{-1/2} = (\det (\pi C_b))^{-1}. \end{equation} For $K \in \Ncal(\pmb{\Lambda} \sqcup \pmb{\Lambda}_b')$, the Gaussian integral $I_K =\mathbb{E}_{\pmb{C}_b}K$ can equivalently be written as the complex Gaussian integral \begin{equation} \label{e:If} I_K = \int_{\mathbb{C}^{M_b}}K d\mu_{C_b} \quad\text{with}\quad d\mu_{C_b} = N_b' \, e^{-S_b} \prod_{k=1}^{M_b}\frac{d\bar\phi_k d{\phi}_k}{2\pi i} , \end{equation} where $d\bar\phi_k d{\phi}_k$ is by definition equal to $2idu_{k}dv_{k}$, where $K$ is considered as a function of $\bar\phi,\phi$ instead of as a function of the real and imaginary parts, and where the normalisation constant is \begin{equation} (N_b')^{-1} = \int_{\mathbb{C}^{M_b}} e^{-S_b } \prod_{k=1}^{M_b} \frac{d\bar\phi_k d{\phi}_k}{2\pi i} = \det C_b. \end{equation} The factors of $2$ in \eqref{e:Abmat} are included so that \begin{equation} \label{e:Ephiphi} \Ebold_{\pmb{C}_b} \bar\phi_k \phi_l = C_{b;k,l}, \end{equation} and thus we call $C_b$ the covariance of the complex boson field. Expectations of $\phi\phi$ and of $\bar\phi\bar\phi$ vanish. More generally, expectations of products of factors of $\phi$ and $\bar\phi$ can be evaluated using \eqref{e:ELap} together with \begin{equation} \label{e:exp-complex-Laplacian} \frac 12 \Delta_{\pmb{C}} = \sum_{k,l=1}^{M_b} C_{b;k,l} \frac{\partial}{\partial \phi_{k}} \frac{\partial}{\partial \bar\phi_{l}} + \sum_{k,l=1}^{M_f} C_{f;k,l} \frac{\partial}{\partial \psi_{k}} \frac{\partial}{\partial \bar\psi_{l}} , \end{equation} where we computed the Laplacian \eqref{e:LapC} using \eqref{e:complex-derivs} and \eqref{e:Abmat}. \subsection{Differential forms} \label{sec:df} Suppose we are in the setting of the conjugate fermion field and complex boson field of Sections~\ref{sec:cff}--\ref{sec:cbf}, and that $M_f=M_b=M$. Let \begin{equation} \label{e:SA-bis} S_{A} = S_{b} + S_{f} . \end{equation} Now \refeq{ECbf} can be written as \begin{align} \label{e:ECbfss} \mathbb{E}_{\pmb{C}} K F & = I_KJ_F = N_b'N_f \int e^{-S_A} KF , \end{align} where the Lebesgue measure $D\phi$ has been omitted intentionally from the right-hand side. The reason for this omission makes use of a specific choice of Grassmann generators, as follows.
We choose as Grassmann generators the 1-forms \begin{align} \psi_k &= \frac{1}{(2\pi i)^{1/2}}d\phi_k= \frac{1}{(2\pi i)^{1/2}}(du_k + i dv_k), \nnb \bar\psi_k &= \frac{1}{(2\pi i)^{1/2}}d\bar\phi_k = \frac{1}{(2\pi i)^{1/2}}(du_k - i dv_k), \end{align} where we fix a choice of square root of $2\pi i$ once and for all. Multiplication of generators is via the standard anti-commuting wedge product for differential forms (see, e.g., \cite{Rudi76}); the wedges are left implicit in what follows. The 1-forms generate the Grassmann algebra of differential forms. In this case the complex conjugation that acts on the boson field at the same time interchanges $\psi_k$ and $\bar\psi_k$, but there are no relations other than anti-commutativity linking the generators of the Grassmann algebra. Now \refeq{SA-bis} becomes the differential form \begin{equation} \label{e:SA-bis-prime} S_{A} = \sum_{k,l=1}^M \left( A_{b;k,l} \phi_{k}\bar\phi_{l} + \frac{1}{2\pi i}A_{f;k,l} d\phi_{k} d\bar\phi_{l} \right) . \end{equation} The theory of Gaussian integration in this setting is developed in \cite{BIS09}. In particular, it follows from \cite[Proposition~4.1]{BIS09} that when we interpret the fermionic part of $e^{-S_f}$ as the differential form $\sum_{n=0}^\infty \frac{1}{n!}(-S_f)^n$ (the series truncates due to anti-commutativity), then standard integration of differential forms gives again \begin{equation} \label{e:Efacform} \mathbb{E}_{\pmb{C}} K F =I_KJ_F. \end{equation} Thus the Grassmann integral and the standard integration of differential forms coincide. In the formalism of differential forms, the omitted Lebesgue measure is supplied by the volume form $\prod_{k=1}^M d\bar\phi_kd\phi_k$ arising from the expansion of $e^{-S_f}$. Earlier, we defined $d\bar\phi_k d{\phi}_k$ to be $2idu_{k}dv_{k}$ because by \eqref{e:phi-def} this is the wedge product $d\bar\phi_k d{\phi}_k$. The above shows that the algebra of differential forms and the form integration used in \cite{BIS09} is a special case of the construction of Sections~\ref{sec:cff}--\ref{sec:cbf}. We do not need this special case in this paper, but it plays an important role in \cite{BBS-saw4-log,BBS-saw4}. \subsection{Supersymmetry} \label{sec:supersymmetry} The field theories discussed in \cite{BIS09} and \cite{BBS-saw4-log} have an additional property of \emph{supersymmetry}: a symmetry between bosons and fermions. A discussion of supersymmetry can be found in \cite[Section~6]{BIS09}. The field theory becomes supersymmetric by taking $M_b=M_f=M$ and choosing the boson and fermion covariances to be equal: $C_b=C_f= C$. Then \eq \label{e:NbNf1} N_b'N_f = \frac{\det C_f}{\det C_b} =1, \en and, with $A=C^{-1}$, \refeq{SA-bis} becomes \begin{equation} \label{e:SA} S_{A} = \sum_{u,v\in \Lambda } A_{u,v} \left( \phi_{u}\bar\phi_{v} + \psi_{u} \bar\psi_{v} \right) . \end{equation} Also, in view of \refeq{NbNf1}, the normalisation constants cancel in \refeq{ECbf}, which becomes \begin{align} \label{e:ECbfssz} \mathbb{E}_{\pmb{C}} K F & = \Big(\int_{\mathbb{C}^{M}} K \,e^{-S_b} \prod_{k=1}^{M}\frac{d\bar\phi_k d{\phi}_k}{2\pi i} \Big) \Big(\int_{\pmb{\Lambda}'_{f}} e^{-S_f}F\Big) =I_KJ_F.
\end{align} The Laplacian \eqref{e:exp-complex-Laplacian} now simplifies to \begin{equation} \label{e:Lapss} \frac 12 \Delta_{\pmb{C}} = \sum_{k,l=1}^{M_b} C_{k,l} \left( \frac{\partial}{\partial \phi_{k}} \frac{\partial}{\partial \bar\phi_{l}} + \frac{\partial}{\partial \psi_{k}} \frac{\partial}{\partial \bar\psi_{l}} \right) , \end{equation} and from \eqref{e:ECbfssz} we obtain \begin{equation} \mathbb{E}_{\pmb{C}} \bar\phi_k \phi_l = \mathbb{E}_{\pmb{C}} \bar\psi_k \psi_l = C_{k,l}. \end{equation} \subsection{Factorisation property of the expectation} \label{sec:facexp} We now present a factorisation property of the expectation that is needed in \cite{BS-rg-step}. We formulate the factorisation property in the supersymmetric setting of Section~\ref{sec:supersymmetry} for simplicity, although it does hold more generally. Let $\Lambda = \pmb{\Lambda}_b = \pmb{\Lambda}_f$, and let $X \subset \Lambda$. We define $\Ncal (X)$ to be the set of all $F = \sum_{y \in \Lambda^*} \frac{1}{y!} F_y\psi^y\in \Ncal$ such that $F_{y}=0$ if any component of $y$ is not in $X$, and such that, for all $y$, $F_y$ does not depend on $\phi_x$ for any $x \not\in X$. Similarly, given $X' \subset \Lambda'$, we define $\Ncal(\pmb{\Lambda} \sqcup X')$ as those $F$ that only depend on the fermion and boson fields indexed by $\Lambda \sqcup X'$. \begin{prop}\label{prop:factorisationE} Let $X,Y \subset \Lambda$, let $F_1(X) \in \Ncal(\pmb{\Lambda} \sqcup X')$, $F_2(Y) \in \Ncal(\pmb{\Lambda} \sqcup Y')$, and suppose that $C_{x',y'}= 0$ whenever $x' \in X'$, $y' \in Y'$. Then the expectation $\Ebold_{\pmb{C}}$ has the \emph{factorisation property}: \begin{equation} \label{e:Efaczz} \Ebold_{\pmb{C}} \big( F_1(X)F_2(Y) \big) = \big(\Ebold_{\pmb{C}} F_1(X)\big) \big( \Ebold_{\pmb{C}}F_2(Y) \big). \end{equation} \end{prop} \begin{proof} By linearity of the expectation, it suffices to consider the case where $F_1(X)$ is of the form $f_1\psi^{x}$ where $f_1$ depends only on the boson field in $\pmb{\Lambda} \sqcup X'$ and $x \in (X')^*$, and where $F_2(Y)$ is of the form $f_2\psi^{y}$ where $f_2$ depends only on the boson field in $\pmb{\Lambda} \sqcup Y'$ and $y \in (Y')^*$. According to \eqref{e:ECbfssz}, the expectation factors as \begin{equation} \Ebold_{\pmb{C}} f_1\psi^{x} f_2 \psi^{y} = (\Ebold_{\pmb{C}} f_1 f_2 )(\Ebold_{\pmb{C}} \psi^{x} \psi^{y}), \end{equation} where the first expectation on the right-hand side is a bosonic expectation with covariance matrix $C$, while the second is a fermionic expectation which is equal to a determinant of a submatrix of $C$ taken from rows and columns labelled by the points in $x$ and $y$. By assumption, the covariance matrix elements vanish whenever one label lies in $X'$ and the other in $Y'$. It is a standard fact that uncorrelated Gaussian random vectors are independent \cite{Eato07}, and hence $\Ebold_{\pmb{C}} f_1 f_2 =(\Ebold_{\pmb{C}} f_1 ) (\Ebold_{\pmb{C}} f_2 )$. Also by assumption, the determinant yielding the fermion expectation is the determinant of a block diagonal matrix, so it also factors to give $(\Ebold_{\pmb{C}} \psi^{x} \psi^{y}) =(\Ebold_{\pmb{C}} \psi^{x})(\Ebold_{\pmb{C}} \psi^{y})$. This completes the proof. \end{proof} \section{The \texorpdfstring{$T_\phi$}{Tphi} semi-norm} \label{sec:Tphi-props} \subsection{Motivation} In the progressive integrations carried out in the renormalisation group approach, it is necessary to estimate how the size of the result of an integration compares with the size of the integrand.
When integrating real-valued functions of real variables, the inequality \begin{equation} \label{e:intav} \left| \int f(x) dx \right| \le \int |f(x)| dx \end{equation} is fundamental. We need an analogue of \refeq{intav} for the Gaussian integral $\mathbb{E}_{\pmb{C}} : \Ncal(\pmb{\Lambda} \sqcup \pmb{\Lambda}') \to \Ncal(\pmb{\Lambda})$. In particular, we need to define norms (or semi-norms) so that $\Ncal(\pmb{\Lambda} \sqcup \pmb{\Lambda}')$ and $\Ncal(\pmb{\Lambda})$ become normed algebras. The norms we define here emerge from a long history going back to \cite{BY90}; other norms in the purely fermionic context are developed in \cite{FKT04}. We choose $\pmb{\Lambda}_b$ and $\pmb{\Lambda}_f$ each to consist of disjoint unions of copies of the discrete $d$-dimensional torus of side length $mR$, namely \begin{equation} \label{e:RLambda} \Lambda = \Zd / (mR\Zd), \end{equation} where $R\ge 2$ and $m \ge 1$ are integers. As a basic example, suppose there are two species of field: the first species is a complex boson field as in Section~\ref{sec:cbf}, and the second species is a conjugate fermion field as in Section~\ref{sec:cff}. We choose $\pmb{\Lambda}_b= \Lambda_{1} \sqcup \bar\Lambda_{1}$ and $\pmb{\Lambda}_f=\Lambda_{2} \sqcup \bar\Lambda_{2}$, where each $\Lambda_{i}$ and $\bar{\Lambda}_{i}$ is a copy of $\Lambda$. The fermion field $(\psi_v)_{v \in \Lambda}$ is the restriction of $(\psi_y)_{y \in \pmb{\Lambda}_{f}}$ to $y \in \Lambda_{2}$, the fermion field $(\bar\psi_v)_{v \in \Lambda}$ is its restriction to $y \in \bar\Lambda_{2}$, and the complex boson field $(\bar\phi_v,\phi_v)_{v \in \Lambda}$ is the restriction of $\phi$ to $\Lambda_{1} \sqcup \bar\Lambda_{1}$. For $x \in \Lambda_{1}$ let $\bar{x}$ be the corresponding point in the copy $\bar\Lambda_{1}$. The restriction of $\phi$ to $\Lambda_{1} \sqcup \bar\Lambda_{1}$ is a complex field $\phi=u+iv$ as defined in Section~\ref{sec:cbf} if and only if \begin{equation} \label{e:reality-condition} \phi_{\bar{x}}=\bar\phi_{x}, \quad\quad x \in \Lambda_{1} . \end{equation} Given $a \in \Rbold$ and $u \in \Lambda$, an example of an element $K\in \Ncal(\pmb{\Lambda})$ is given by \begin{equation} \label{e:eatau} K(\phi,\bar\phi) =e^{-a (\phi_u\bar\phi_u + \psi_u\bar\psi_u)}. \end{equation} Functions of the fermion field are defined as elements of $\Ncal$ via Taylor expansion in powers of the fermion field. Due to anti-commutativity and the finite index set for the fermion field, such Taylor series always truncate to polynomials in the fermion field. For \refeq{eatau}, the Taylor polynomial is \begin{equation} K(\phi)=e^{-a (\phi_u\bar\phi_u + \psi_u\bar\psi_u)} = e^{-a \phi_u\bar\phi_u}\left( 1 - a\psi_u\bar\psi_u\right). \end{equation} For functions of products of even numbers of $\psi$ factors, which are the only kind we will encounter, there is no sign ambiguity in the Taylor expansion. We also consider Taylor expansion in the boson field. For this, we replace $\phi$ by $\phi+\xi$ and expand in powers of $\xi$. We use the set $\Lambda \sqcup \bar\Lambda$ to keep track of factors $\xi$ versus $\bar\xi$, by writing, e.g., $\xi^x = \xi_{x_1} \xi_{x_2} \bar\xi_{\bar x_3} \bar\xi_{\bar x_4}$ for $x = (x_1,x_2,\bar{x}_3,\bar{x}_4)$, and similarly for the fermion field.
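For orientation, we record the first-order part of this expansion for the example \refeq{eatau}; this is a routine computation which is not needed in the sequel. Replacing $\phi$ by $\phi+\xi$ and expanding, \eq K(\phi+\xi) = e^{-a \phi_u\bar\phi_u} \left(1 - a \psi_u\bar\psi_u \right) \left(1 - a ( \bar\phi_u \xi_u + \phi_u \bar\xi_{\bar u} ) \right) + O(\xi^2), \en so the coefficients of $\xi_u$ and $\bar\xi_{\bar u}$ are $-a\bar\phi_u K(\phi)$ and $-a\phi_u K(\phi)$ respectively, and all other first-order coefficients vanish.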
A general $K \in \Ncal(\pmb{\Lambda})$ then has (formal) Taylor expansion \begin{equation} \label{e:norm-mot1} K(\phi+\xi) \sim \sum_{ x,y} \frac{1}{x!y!} K_{x,y}(\phi) \xi^x \psi^y, \end{equation} where the sum is over sequences $x\in(\Lambda \sqcup \bar\Lambda)^*$ and $y\in(\Lambda \sqcup \bar\Lambda)^*$, and where the coefficients $K_{x,y}$ are symmetric in the elements of $x$ and anti-symmetric in the elements of $y$. Given $\phi$, we define the semi-norm of $K$ in terms of the coefficients $K_{x,y}(\phi)$. These coefficients eventually vanish once the sequence $y$ has length exceeding twice the cardinality of $\Lambda$. In general, the coefficients will be non-zero for infinitely many values of $x$, but the semi-norm will examine only those with length of $x$ at most $p_\Ncal$ for a fixed choice of the parameter $p_\Ncal$ (this replaces the ``formal'' Taylor expansion above by a Taylor polynomial). The semi-norm is designed to be used in conjunction with integration, where fields have a typical size. This motivates us to define the semi-norm of $K$ to be the result of replacing $\xi^x \psi^y$, in each term in the truncation of the sum over $x$ at length $p_\Ncal$ in \refeq{norm-mot1}, by a test function $g_{x,y}$ whose size and smoothness mimic the behaviour expected for products of typical fields. The precise definition of the semi-norm, given below, is more general than the above in several respects. It allows the possibility of more ``species'' of field than the boson and fermion fields above, and allows scalar, complex, and multi-component fields. It allows distinction between the sizes of the test functions in the components corresponding to different field species, and leaves flexible the choice of weights governing the test functions. In the remainder of Section~\ref{sec:Tphi-props}, we define the $T_\phi$ semi-norm on $\Ncal$ and state and develop its properties. Most proofs are deferred to Sections~\ref{sec:tphisemi}--\ref{sec:integration}. \subsection{Sequence spaces} \label{sec:seq} The sets $\pmb{\Lambda}_{b}$ and $\pmb{\Lambda}_{f}$ are required to have the following particular structure. First, $\pmb{\Lambda}_{b}$ decomposes into a disjoint union of sets $\pmb{\Lambda}_{b}^{(i)}$, for $i=1,\ldots,s_b$, corresponding to $s_b$ distinct boson field \emph{species}. Each set $\pmb{\Lambda}_{b}^{(i)}$ is either $\Lambda \sqcup \bar\Lambda$ (for a species of complex field) or is the disjoint union of $c_b^{(i)}$ copies of $\Lambda$ (for a field species with $c_b^{(i)}$ real components). The set $\pmb{\Lambda}_f$ has the same structure, but with a possibly different number $s_f$ of species, which may likewise have several components. Then, as before, we set $\pmb{\Lambda} = \pmb{\Lambda}_b \sqcup \pmb{\Lambda}_f$, and $\pmb{\Lambda}^*$ is the corresponding set of sequences. Each $u \in \pmb{\Lambda}$ thus carries a species label $i =i(u) \in \mathbf{s} = \{1,\ldots, s\}$, where $s=s_b+s_f$. Of specific interest is the subset $\vec\pmb{\Lambda}^*$ of $\pmb{\Lambda}^*$, which consists of sequences whose species labels are ordered in such a way that the first elements of $z\in \vec\pmb{\Lambda}^*$ are of species $\pmb{\Lambda}_{b}^{(1)}$, the next are of species $\pmb{\Lambda}_{b}^{(2)}$, and so on until the boson species have been exhausted, and then subsequent elements are first of species $\pmb{\Lambda}_f^{(1)}$, then $\pmb{\Lambda}_f^{(2)}$, and so on.
For example, a complex species of boson field has components $\phi$ and $\bar{\phi}$, which belong to the same species, so entries $z_{j}$ of $(z_{1},\dots ,z_{p})\in \vec\pmb{\Lambda}^*$ are not ordered according to where they are in $\Lambda \sqcup \bar\Lambda$; the same applies to a fermion species $\psi,\bar{\psi}$. We also define $\vec\pmb{\Lambda}_b^*$ and $\vec\pmb{\Lambda}_f^*$ to be the subsets of $\vec\pmb{\Lambda}^*$ consisting of only boson or only fermion species. There is a canonical bijection between $\vec\pmb{\Lambda}^*$ and the Cartesian product $\pmb{\Lambda}_{b}^{(1)*}\times \cdots\times \pmb{\Lambda}_{b}^{(s_b)*} \times \pmb{\Lambda}_{f}^{(1)*}\times \cdots\times \pmb{\Lambda}_{f}^{(s_f)*}$, given by the correspondence in which a single sequence in $\vec\pmb{\Lambda}^*$ is regarded as a collection of subsequences of each species. We will sometimes blur the distinction between $\vec\pmb{\Lambda}^*$ and the Cartesian product in what follows. In $\vec\pmb{\Lambda}^*$, concatenation $z'\circ z''$ of two sequences is defined by concatenation of each of the individual species subsequences. Then $\vec\pmb{\Lambda}^*$ is closed under concatenation. For $r \ge 0$, we write $\vec\pmb{\Lambda}^{(r)}$ for the subset of $\vec\pmb{\Lambda}^*$ consisting of sequences of length $r$, with the degenerate case $\vec\pmb{\Lambda}^{(0)}=\{\varnothing\}$. \subsection{Test functions} \label{sec:tf} Recall from \eqref{e:RLambda} that $\pmb{\Lambda}$ is a disjoint union of copies of a lattice torus. A \emph{test function} is a function $g : \vec\pmb{\Lambda}^* \to \mathbb{C}$. In particular, even when there are complex fields, no relation such as \eqref{e:reality-condition} is imposed on test functions. We will define a norm on the set of test functions as a weighted finite-difference version of a $\Ccal^{k}$ norm, where $k$ is proportional to the number of arguments of $g$, i.e., to the length of the sequence in $\vec\pmb{\Lambda}^*$. First we need notation for multiple finite-difference derivatives. We write $\Ucal$ for the set $\{\pm e_1, \ldots, \pm e_d\}$ of $2d$ positive and negative unit vectors in $\Zd$. For a unit lattice vector $e \in \Ucal$ and a function $f$ on $\Lambda$, the difference operator is given by $\nabla^{e} f_{x}=f_{x+e} - f_{x}$. When $e$ is the negative of a standard unit vector, $\nabla^{e}$ is the negative of a conventional backward derivative. Derivatives of test functions are defined as follows. Let $A = \Nbold_{0}^{\Ucal}$, and for an integer $r>0$, let $\Acal^{(r)}= A^{r} \times \vec\pmb{\Lambda}^{(r)}$. In the degenerate case, we set $\Acal^{(0)} = \{\varnothing\}$, and we write $\Acal = \sqcup_{r \ge 0} \Acal^{(r)}$. The operator $\nabla^\varnothing$ is the identity operator, and for $r>0$, $\alpha=(\alpha_1,\ldots,\alpha_r)\in A^{r}$ and $z=(z_1,\ldots,z_r)\in \vec\pmb{\Lambda}^{(r)}$, we define \begin{equation} (\nabla^\alpha g)_z = \nabla_{z_1}^{\alpha_1} \cdots \nabla_{z_r}^{\alpha_r}g_{z_1,\ldots,z_r}. \end{equation} Thus, each $\alpha_k$ is a multi-index which specifies finite-difference derivatives with respect to the variable $z_k$. \begin{defn} \label{def:gnorm-general} Fix $p_\Ncal \in \Nbold_0 \cup \{+\infty\}$, and consider the set of test functions such that $g_z=0$ whenever $z$ has more than $p_\Ncal$ boson components. Let $w:A\times \pmb{\Lambda} \rightarrow [0,\infty]$ be a given function. For $r>0$ and $(\alpha,z)\in \Acal^{(r)}$, we write $w_{\alpha,z} = \prod_{k=1}^{r} w_{\alpha_k,z_k}$, and we set $w_\varnothing =1$ in the degenerate case $r=0$.
We define the $\Phi$ norm on test functions by \begin{align} \|g\|_{\Phi } &= \sup_{(\alpha,z) \in \Acal} w_{\alpha,z}^{-1} |\nabla^\alpha g_{z}| . \end{align} Let $g^{(r)}:\vec\pmb{\Lambda}^{(r)}\to\mathbb{C}$ denote the restriction of $g : \vec\pmb{\Lambda}^* \to \mathbb{C}$ to $\vec\pmb{\Lambda}^{(r)}$. The $\Phi$ norm induces the $\Phi^{(r)}$ norm on these restricted test functions by \begin{align} \|g^{(r)}\|_{\Phi^{(r)} } & = \sup_{(\alpha,z)\in \Acal^{(r)}} w_{\alpha,z}^{-1} |\nabla^\alpha g^{(r)}_{z}| , \end{align} with $0^{-1}=\infty$ and $\infty^{-1}=0$, and with \begin{align} \|g\|_{\Phi } &= \sup_{r \ge 0}\|g^{(r)}\|_{\Phi^{(r)} }. \end{align} When it is important to make the dependence on $w$ explicit we write $\Phi(w)$ and $\Phi^{(r)}(w)$. \end{defn} As an instance of restriction, suppose that there is just one species of field, namely a single complex boson field $(\bar\phi_x,\phi_x)_{x\in\Lambda}$. We may regard this field as a test function by extending it to be the zero function on sequences in $\vec\pmb{\Lambda}^*$ of length different from $1$. This special case will frequently be relevant for us. \begin{example} \label{ex:h} Fix any integer $p_\Phi \ge 0$ and for each species $i$ fix $\mathfrak{h}_i >0$. Let $R$ be the constant of \refeq{RLambda}. In applications, the period of the torus is $L^N$ for integers $L,N>1$; the torus can thus be paved by disjoint blocks of side length $L^j$ for $j=1,\ldots, N$, and we take $R=L^j$ (so $m=L^{N-j}$ to give $mR=L^N$). The choice of weight $w:A\times \pmb{\Lambda} \rightarrow [0,\infty]$ given by \begin{equation} w_{\alpha_k,z_k}^{-1} = \begin{cases} \mathfrak{h}_{i(z_k)}^{-1} R^{\alpha_k} & \text{if $|\alpha_k|_1 \le p_\Phi$} \\ 0 & \text{if $|\alpha_k|_1 > p_\Phi$}, \end{cases} \end{equation} defines the normed space $\Phi(\mathfrak{h})$. We have written $R^{\alpha_{k}}=R^{|\alpha_k|_{1}}$ where $|\alpha_k|_1$ is the order of the derivative $\nabla^{\alpha_k}$. Then test functions in the unit ball $B(\Phi)$ of $\Phi$ are those which obey the estimate \begin{equation} \lbeq{tfe} |\nabla^\alpha g_z| \le \mathfrak{h}^{z}R^{-\alpha}, \end{equation} for all $z$ with at most $p_\Ncal$ boson components, and for all $\alpha$ with $|\alpha_{k}|_{1}\le p_{\Phi}$ for each component $\alpha_{k}$ of $\alpha \in \Acal $. Here $\mathfrak{h}^{z}$ is an abbreviation for $\prod_k \mathfrak{h}_{i(z_k)}$. The estimate \refeq{tfe} means that $g$ is approximately constant on regions whose diameter is small compared to $R$. Note that the parameter $p_{\Phi}$ specifies that up to $p_{\Phi}$ derivatives per argument of $g$ are bounded by the norm, whereas the parameter $p_{\Ncal}$ is an upper bound on how many bosonic spatial variables a test function can depend on. \end{example} Let $\vec r \in \Nbold_0^s$ and let $\Phi^{(\vec r)}$ denote the restriction of $\Phi$ to test functions defined on the subset of $\vec\pmb{\Lambda}^*$ consisting of sequences with exactly $r_i$ components of species $i$ for each $i=1,\ldots,s$. Given $\vec r$, $g' \in \Phi^{(\vec r)}$, and $g'' \in \Phi$, we define $g \in \Phi$ by setting $g_{z} = g'_{z'}g''_{z''}$ for $z=z'\circ z''$, with $g_z=0$ whenever $z$ has fewer than $r_i$ elements of species $i$ for any $i$. It follows from the definition of the norm that \begin{equation}\label{e:plusnormass} \|g\|_{\Phi} \le \|g'\|_{\Phi^{(\vec r)}} \|g''\|_{\Phi}, \end{equation} and we will use this fact later.
Here it is the fact that $g' \in \Phi^{(\vec r)}$ which provides a unique decomposition $z=z'\circ z''$ to make $g$ well defined. A similar inequality is obtained whenever a unique decomposition is specified. For example, suppose that we designate some field species as prime species and some as double prime. Then $z$ can be decomposed in a unique way as $(z',z'')$ and if we define a test function $g$ by $g_z = g'_{z'}g''_{z''}$, then it follows from the definition of the norm that \begin{equation}\label{e:plusnormass-2} \|g\|_{\Phi} \le \|g'\|_{\Phi} \|g''\|_{\Phi}. \end{equation} \subsection{Definition of the \texorpdfstring{$T_\phi$}{Tphi} semi-norm} \label{sec:Tphidef} Given $F\in \Ncal(\pmb{\Lambda})$, $x=(x_1,\ldots, x_p)\in \vec\pmb{\Lambda}_{b}^{*}$, $y \in \vec\pmb{\Lambda}_{f}^{*}$, and a boson field $\phi \in \Rbold^{\pmb{\Lambda}_b}$, we write \begin{equation} \lbeq{Fxyphi} F_{x,y}(\phi) = \frac{\partial^p F_y(\phi)}{\partial \phi_{x_p} \cdots \partial \phi_{x_1}} . \end{equation} (This notation is consistent with Definition~\ref{def:Npoly}.) We are writing the boson field as an element of $\Rbold^{\pmb{\Lambda}_b}$ for simplicity, but our intention is to include the possibility of complex species; for such species, derivatives are with respect to $\phi_{x_i}$ or $\bar\phi_{x_i}$ depending on whether $x_i$ is an element of $\Lambda$ or $\bar\Lambda$. This point will be made more explicit in Section~\ref{sec:Tphi-ex} below. Also, for $x \in \vec\pmb{\Lambda}_{b}^*$, we write $x!=\prod_{i=1}^{s_{b}} x_i!$ where the product is over species and $x_i!$ denotes the factorial of the length of the species-$i$ subsequence of $x$. Similarly $y!$ is defined for $y \in \vec\pmb{\Lambda}_{f}^*$. For $z = (x,y)$ in $\vec\pmb{\Lambda}^*$ we write $z!= x!y!$. \begin{defn} \label{def:Tphi-norm} For a test function $g:\vec\pmb{\Lambda}^{*} \rightarrow \Cbold$, for $F \in \Ncal (\pmb{\Lambda})$, and for $\phi \in \Rbold^{\pmb{\Lambda}_{b}}$, we define the \emph{pairing} \begin{equation} \label{e:Kgpairdef} \langle F , g \rangle_\phi = \sum_{z\in \vec\pmb{\Lambda}^{*}} \frac{1}{z!} F_{z}(\phi) g_{z} = \sum_{x\in \vec\pmb{\Lambda}_b^{*}} \sum_{y\in \vec\pmb{\Lambda}_f^{*}} \frac{1}{x!y!} F_{x,y}(\phi) g_{x,y} , \end{equation} and the $T_\phi$ \emph{semi-norm} \begin{equation} \label{e:Tdef} \|F\|_{T_\phi} = \sup_{g \in B (\Phi)} | \langle F , g \rangle_\phi | , \end{equation} where $B(\Phi)$ denotes the unit ball in the space $\Phi$ of test functions. \end{defn} By definition, $F_{x,y}$ is symmetric under permutations within each subsequence of $x$ having the same species, and is similarly antisymmetric in $y$. This symmetry is reflected by a corresponding property of the pairing. To develop this idea, we begin with the following definition. \begin{defn} \label{def:S} For $z \in \vec\pmb{\Lambda}^{(r)}$, let $\vec\Sigma_z$ denote the set of permutations of $1,\ldots, r$ that preserve the order of the species of $z$. For $\sigma \in \vec\Sigma_z$ we define $\sigma z \in \vec\pmb{\Lambda}^{(r)}$ by $(\sigma z)_i=z_{\sigma(i)}$, and we use this to define a map $S : \Phi \to \Phi$ by \eq (Sg)_z = \frac{1}{z!} \sum_{\sigma \in \vec\Sigma_z} \mathrm{sgn}(\sigma_f) g_{\sigma z}, \en where $\sigma_f$ denotes the restriction of $\sigma$ to the fermion components of $z$ and $\mathrm{sgn}(\sigma_f)$ denotes the sign of this permutation. \end{defn} \begin{prop} \label{prop:pairingS} For $F \in \Ncal(\pmb{\Lambda})$, $g \in \Phi$, and $\phi \in \Rbold^{\pmb{\Lambda}_{b}}$, \eq \pair{F,g}_\phi = \pair{F,Sg}_\phi.
\en \end{prop} \begin{proof} By the above-mentioned symmetry, $F_{z}(\phi) = \mathrm{sgn}(\sigma_f)F_{\sigma z}(\phi)$ for all $\sigma \in \vec\Sigma_z$. This implies that $F_z(\phi) = \frac{1}{z!} \sum_{\sigma \in \vec\Sigma_z} \mathrm{sgn}(\sigma_f)F_{\sigma z}(\phi)$, and hence \begin{align} \pair{F,g}_\phi & = \sum_{z\in \vec\pmb{\Lambda}^{*}} \frac{1}{z!} F_{z}(\phi) g_{z} = \sum_{z\in \vec\pmb{\Lambda}^{*}} \frac{1}{z!} \frac{1}{z!} \sum_{\sigma \in \vec\Sigma_z} \mathrm{sgn}(\sigma_f)F_{\sigma z}(\phi) g_{z}. \end{align} The sum over $z$ is graded by sums over sequences of fixed length and species choices, and for $z$ fixed within this gradation the set $\vec\Sigma_z$ is independent of $z$. It therefore makes sense to replace the summand within the sum over $\sigma$ by an equivalent expression with $z$ replaced by $\sigma^{-1}z$, and this does not change the sum. This gives \begin{align} \pair{F,g}_\phi & = \sum_{z\in \vec\pmb{\Lambda}^{*}} \frac{1}{z!} \frac{1}{z!} \sum_{\sigma \in \vec\Sigma_z} \mathrm{sgn}(\sigma_f)F_{z}(\phi) g_{\sigma^{-1}z} = \sum_{z\in \vec\pmb{\Lambda}^{*}} \frac{1}{z!} F_{z}(\phi) \frac{1}{z!} \sum_{\sigma \in \vec\Sigma_z} \mathrm{sgn}(\sigma_f) g_{\sigma^{-1}z}. \end{align} Since $\mathrm{sgn}(\sigma_f)=\mathrm{sgn}(\sigma^{-1}_f)$, and since summing over $\sigma$ is the same as summing over $\sigma^{-1}$, this gives the desired result. \end{proof} \begin{example} \label{ex:pairing} As a simple example of the zero-field pairing, for fixed points $x_i \in\Lambda$ and for $p \le p_\Ncal$, let $F(\phi) = \prod_{i=1}^p \nabla^{\alpha_i}\phi_{x_i}$. Direct computation shows that \refeq{Kgpairdef} leads to \eq \langle F, g \rangle_0 = \nabla^{\alpha_1}_{x_1} \cdots \nabla^{\alpha_p}_{x_p} \, (Sg)_{x_1,\ldots,x_p}. \en The right-hand side is in general not the same as the corresponding expression with $S$ omitted. This shows that the pairing has a symmetrising effect. \end{example} By definition, \begin{equation} \label{e:Tphi-r} \|F\|_{T_\phi} = \sup_{r \ge 0} \sup_{g^{(r)} \in B (\Phi^{(r)})} | \langle F , g^{(r)} \rangle_\phi | . \end{equation} Note that $\|F\|_{T_\phi}$ is always at least as large as $|F_{\varnothing}(\phi)|$, because this is the contribution from the empty sequence part $g_{\varnothing}$ of the test function, corresponding to $r=0$. The $T_\phi$ semi-norm has several attractive and useful properties. The most fundamental of these is the product property stated in the following proposition. Its proof is given in Section~\ref{sec:pfprod} below. \begin{prop} \label{prop:prod} For $F,G \in \Ncal$, $\|FG\|_{T_\phi} \leq \|F\|_{T_\phi}\|G\|_{T_\phi}$. \end{prop} Another property is the following proposition, which is proved in Section~\ref{sec:ebdTphi} below. In its statement, $e^{-F}$ is defined by Taylor expansion in the fermion field. In general, this can introduce sign ambiguities, but the semi-norm is insensitive to these by \refeq{Tphi-r}. However, in our application in \refeq{bdFfoptbis} below, no sign ambiguity arises. \begin{prop} \label{prop:eK} Let $F\in \Ncal$ and let $F_\varnothing$ be the purely bosonic part of $F$. Then \eq \|e^{-F }\|_{T_\phi} \le e^{-2{\rm Re} F_\varnothing(\phi)+\|F \|_{T_\phi}} .
\en \end{prop} \subsection{Example for the \texorpdfstring{$T_\phi$}{Tphi} semi-norm} \label{sec:Tphi-ex} For the next proposition, we consider the case $\pmb{\Lambda}_b = (\Lambda \sqcup \bar\Lambda)$ and $\pmb{\Lambda}_f = (\Lambda \sqcup \bar\Lambda)$, corresponding to a complex boson field $(\bar\phi,\phi)$ and a conjugate fermion field $(\bar\psi,\psi)$. We use the test function space $\Phi(\mathfrak{h})$ of Example~\ref{ex:h}, with its associated space $T_\phi(\mathfrak{h})$, where $\mathfrak{h}$ takes the same value for all fields. For a complex boson field $\phi$ and $x \in \Lambda$, we define $\tau_x \in \Ncal$ by \eq \tau_x = \phi_x\bar\phi_x + \psi_x\bar\psi_x. \en We may regard $\phi_x$ as an element of $\Ncal$. By definition its $T_\phi$ semi-norm is $\|\phi_x\|_{T_\phi(\mathfrak{h})} = |\phi_x|+\mathfrak{h}$. We may also regard the boson field $\phi$ as the test function obtained by extending to the zero function on sequences in $\vec\pmb{\Lambda}^*$ which do not consist of a single component in $\pmb{\Lambda}_b$; then its norm is $\|\phi\|_\Phi$. Since $\mathfrak{h}^{-1}|\phi_x| \le \|\phi\|_{\Phi(\mathfrak{h})}$ by definition, we have \eq \|\phi_x\|_{T_\phi(\mathfrak{h})} \le \mathfrak{h} \left(1+ \|\phi\|_{\Phi(\mathfrak{h})} \right). \en \begin{prop} \label{prop:taunorm} The $T_\phi(\mathfrak{h})$ semi-norm of $\tau_x$ obeys the identity \eq \label{e:taunorm} \|\tau_x\|_{T_\phi(\mathfrak{h})} = (|\phi_x|+\mathfrak{h} )^2+\mathfrak{h}^2 \en and the inequality \eqalign \label{e:VxT0} \|\tau_x\|_{T_\phi(\mathfrak{h})} & \le 3\mathfrak{h}^2 (1+\|\phi\|_{\Phi(\mathfrak{h})}^2) . \enalign Suppose that $a \in \mathbb{C}$ obeys $|{\rm Im}\, a| \le \frac 12 {\rm Re}\, a$. Given any real number $q_2$, there is a constant $q_{1}$ (with $q_{1}=O(q_2^2)$ as $q_2\to\infty$) such that \eqalign \label{e:bdFfoptbis} \|e^{-a\tau_x^2}\|_{T_{\phi}(\mathfrak{h})} & \le e^{({\rm Re}\,a)\mathfrak{h}^4 (q_1-q_2 |\phi_x/\mathfrak{h}|^2 )}. \enalign \end{prop} \begin{proof} By definition, $\tau_x = \phi_x\bar\phi_x + \psi_x\bar\psi_x$. Also by definition, the semi-norm of a sum of terms of different fermionic degree is the sum of the semi-norms, and hence \eq \|\tau_{x}\|_{T_\phi(\mathfrak{h})} = \|\phi_x\bar\phi_x\|_{T_\phi(\mathfrak{h})} + \|\psi_x\bar\psi_x\|_{T_\phi(\mathfrak{h})}. \en By definition of the semi-norm, \eq \|\psi_x\bar\psi_x\|_{T_\phi(\mathfrak{h})} = \mathfrak{h}^2 \en and \eq \|\phi_x\bar\phi_x\|_{T_\phi(\mathfrak{h})} = |\phi_x|^2 + |\phi_x|\, \mathfrak{h} + \mathfrak{h}|\bar\phi_x| + \mathfrak{h}^2 = (|\phi_x| + \mathfrak{h})^2. \en This proves \refeq{taunorm}. We write $t= |\phi_x|/\mathfrak{h}$ and $P(t)=(t+1)^2+1$. Then \eq \|\tau_x\|_{T_\phi(\mathfrak{h})} = \mathfrak{h}^2 P(t). \en Since $P$ is increasing, $t \le \|\phi\|_{\Phi(\mathfrak{h})}$, and $P(t) \le 3(1+t^2)$, this gives \eq \label{e:tauxbd} \|\tau_x\|_{T_\phi(\mathfrak{h})} = \mathfrak{h}^2 P(t) \le \mathfrak{h}^2 P(\|\phi\|_{\Phi(\mathfrak{h})}) \le 3\mathfrak{h}^2 (1+\|\phi\|_{\Phi(\mathfrak{h})}^2), \en which proves \refeq{VxT0}. Let $\alpha = {\rm Re}\, a$. By \refeq{tauxbd}, the product property, and the fact that $|a| \le \frac 32 \alpha$ by assumption, \eq \label{e:tau2bd} \|a\tau_x^2\|_{T_{\phi}(\mathfrak{h})} \le |a|\, \mathfrak{h}^4 P(t)^2 \le \frac 32 \alpha \mathfrak{h}^4 P(t)^2 . \en By Proposition~\ref{prop:eK} and \refeq{tau2bd}, \begin{equation} \|e^{-a \tau_{x}^2}\|_{T_{\phi}(\mathfrak{h})} \le e^{-2\alpha |\phi_x|^4} e^{\frac 32 \alpha \mathfrak{h}^4 P(t)^2} \leq e^{\alpha\mathfrak{h}^4 [-2 t^4 + \frac 32 P(t)^2]}.
\end{equation} Since $P$ has leading term $t^2$, given any real number $q_2$ there is a constant $q_{1} = O(q_2^2)$ such that $-2 t^4 + \frac 32 P(t)^2 \leq q_{1} - q_2 t^2$. (In fact, a quartic bound also holds, but this quadratic bound will suffice for our needs.) This gives \refeq{bdFfoptbis}, and completes the proof. \end{proof} On the right-hand side of \eqref{e:VxT0}, the appearance of the norm $\|\phi\|_{\Phi (\mathfrak{h})}$ could be considered alarming, as this involves a supremum over the entire lattice and typical fields will be uncontrollably large in some regions of space. In our applications this difficulty will be overcome as follows. First, we need some definitions. For $X \subset \Lambda$ and any test function space $\Phi$, we define a new norm on $\Phi$ by \begin{align} \label{e:PhiXdef} \|g\|_{\Phi(X)} &= \inf \{ \|g -f\|_{\Phi} : \text{$f_{z} = 0$ if all components of $z\in\vec\pmb{\Lambda}^*$ are in $X$}\}. \end{align} As in Section~\ref{sec:facexp}, we define \begin{align} \label{e:NXdef} \Ncal (X) &= \{ F \in \Ncal : F_{z}=0 \; \text{if any component of $z\in\vec\pmb{\Lambda}^*$ is not in $X$}\} . \end{align} Then $\Ncal(X)$ is a subspace of $\Ncal$, and $\Ncal = \Ncal (\pmb{\Lambda})$. Suppose now that $F \in \Ncal(X)$. Changing the value of $\phi_x$ for $x \not\in X$ has no effect on the pairing of $F$ with any test function $g$ and hence has no effect on any $T_\phi$ semi-norm of $F$. Thus, returning to \eqref{e:VxT0}, by taking the infimum over all possible redefinitions of $\phi$ off $X = \{x \}$, we can replace \eqref{e:VxT0} by \begin{equation} \|\tau_x\|_{T_\phi(\mathfrak{h})} \le 3\mathfrak{h}^2 (1+\|\phi\|_{\Phi(X,\mathfrak{h})}^2) . \end{equation} \subsection{Further properties of the \texorpdfstring{$T_\phi$}{Tphi} semi-norm} \label{sec:Tphifurther} Recall the definition of polynomial elements of $\Ncal$ in Definition~\ref{def:Npoly}. The following proposition bounds the $T_\phi$ semi-norm of a polynomial in terms of the $T_0$ semi-norm. \begin{prop}\label{prop:T0K} If $F$ is a polynomial of degree $A \le p_\Ncal$ then \begin{align} \|F\|_{T_{\phi}} &\le \|F\|_{T_{0}}\big(1+\|\phi\|_{\Phi}\big)^{A} . \end{align} \end{prop} It is an immediate consequence of Proposition~\ref{prop:T0K} that for $A \ge 0$ and any $\kappa \in (0,2^{-1/2}]$, \begin{align} \label{e:Fpolyexp} \|F\|_{T_{\phi}} &\le \|F\|_{T_{0}} A^{A/2}\kappa^{-A} e^{\kappa^2 \|\phi\|_{\Phi}^{2}}. \end{align} For $A=0$ this is trivial (with $0^{0}=1$), since then $F$ is simply a complex number $w$ and $\|F\|_{T_{\phi}} =\|F\|_{T_{0}}=|w|$. Also, for $A \ge 1$ and $\kappa \in (0,2^{-1/2}]$, \refeq{Fpolyexp} follows from Proposition~\ref{prop:T0K} together with the inequality \begin{align} \label{e:expkappa} 1+x \le \sqrt{2} \big(1+x^{2}\big)^{1/2} \le A^{1/2}\kappa^{-1} \big(1+2A^{-1}\kappa^{2}x^{2}\big)^{1/2} \le A^{1/2}\kappa^{-1} e^{A^{-1}\kappa^{2}x^{2}}. \end{align} Suppose we have two test function spaces $\Phi$ and $\Phi'$, with corresponding semi-norms $T_\phi$ and $T_\phi'$. For $n \ge 0$, let \eq \label{e:rhodef1} \rho^{(n)} = 2\sup_{r \ge n} \sup_{g \in B (\Phi'^{(r)})} \|g\|_{\Phi^{(r)}}. \en In our applications, $\rho^{(n)}$ will be small for $n \ge 1$. The following proposition relates the $T_\phi$ and $T_\phi'$ semi-norms. \begin{prop} \label{prop:Tphi-bound} Let $A< p_\Ncal$ be a non-negative integer and let $F \in\Ncal$. 
Then \begin{align} \|F\|_{T_{\phi}'} & \leq \left(1 + \|\phi\|_{\Phi'}\right)^{A+1} \left( \|F\|_{T_{0}'} + \rho^{(A+1)} \sup_{0\le t\le 1} \|F\|_{T_{t\phi}} \right). \label{e:Tphicor1} \end{align} \end{prop} Recalling the discussion around \eqref{e:PhiXdef}, we can improve \refeq{Tphicor1} by taking the infimum over all possible redefinitions of $\phi$ off $X$, with the result that \begin{align} \|F\|_{T_{\phi}'} & \leq \left(1 + \|\phi\|_{\Phi'(X)}\right)^{A+1} \left( \|F\|_{T_{0}'} + \rho^{(A+1)} \sup_{0\le t\le 1} \|F\|_{T_{t\phi}} \right) \quad \text{for $F \in \Ncal(X)$} . \label{e:TphicorX} \end{align} Finally, the following proposition shows that the map $\theta$ of Definition~\ref{def:theta-new} has a contractive property. For its statement, let $\pmb{\Lambda}$, $\pmb{\Lambda}'$ and the map $z \mapsto z'$ be as described above Definition~\ref{def:theta-new} and let $w:A\times \pmb{\Lambda}\rightarrow [0,\infty]$ and $w':A\times \pmb{\Lambda}' \rightarrow [0,\infty]$ be weights as specified in Definition~\ref{def:gnorm-general}. These weights together define a new weight $w\sqcup w':A\times (\pmb{\Lambda}\sqcup\pmb{\Lambda}')\rightarrow [0,\infty]$. Species in $\pmb{\Lambda}$ and species in $\pmb{\Lambda}'$ are distinct, and we order the species in such a way that a species from $\pmb{\Lambda}'$ occurs immediately following its counterpart in $\pmb{\Lambda}$. We denote the corresponding norm on test functions $g:(\overrightarrow{\pmb{\Lambda}\sqcup\pmb{\Lambda}'})^{*} \to\mathbb{C}$ by $\Phi (w\sqcup w')$. Also, we define the function $w+w'$ from $A \times \pmb{\Lambda}$ to $[0,\infty]$ by \begin{equation} (w + w') (a,z) = w(a,z) + w'(a,z'). \end{equation} \begin{prop} \label{prop:derivs-of-tau-bis} For $F \in \Ncal(\pmb{\Lambda})$, \begin{align} \label{e:theta-bd1} \|\theta F\|_{T_{\phi \sqcup \xi} (w\sqcup w')} & \le \|F\|_{T_{\phi +\xi} (w+w')}. \end{align} \end{prop} Proofs of Propositions~\ref{prop:T0K}--\ref{prop:derivs-of-tau-bis} are given in Sections~\ref{sec-Tphiestimates}--\ref{sec:compsp0} below. \subsection{Field regulators and associated norms} \label{sec:fran} \begin{defn} \label{def:blocks} (a) The set $\Lambda = \Zd/(mR)$ is paved in a natural way by disjoint cubes of side $R$. We call these cubes \emph{blocks} and denote the set of blocks by $\Bcal$. \\ (b) A union of blocks is called a \emph{polymer}, and the set of polymers is denoted ${\cal P}$. The size $|X|_R$ of $X\in {\cal P}$ is the number of blocks in $X$. \\ (c) A polymer $X$ is \emph{connected} if for any two points $x_{a}, x_{b}\in X$ there exists a path $( x_0,\dotsc ,x_n)$ in $X$ with $\|x_{{i+1}}-x_{i}\|_\infty =1$, $x_{0} = x_{a}$ and $x_{n}=x_{b}$. \\ (d) A polymer $X\in\Pcal$ is a \emph{small set} if $X$ is connected and $|X|_R \le 2^{d}$. Let $\Scal\subset\Pcal$ be the set of all small sets. \\ (e) The \emph{small set neighbourhood} of $X \subset \Lambda $ is the subset $X^\Box$ of $\Lambda$ given by \begin{equation} \label{e:9ssn} X^{\Box} = \bigcup_{Y\in \Scal :X\cap Y \not =\varnothing } Y. \end{equation} (Other papers have used the notation $X^*$ in place of $X^\Box$, but we use $X^\Box$ to avoid confusion with our notation for sequence spaces.) \end{defn} Note that, by definition, $X \subset X^\Box$ and $(X\cup Y)^\Box=X^\Box \cup Y^\Box$.
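For orientation, we record a crude consequence of Definition~\ref{def:blocks}, not needed in what follows: a small set is connected and consists of at most $2^{d}$ blocks of side $R$, so any $Y \in \Scal$ with $Y \cap B \neq \varnothing$ lies within $\ell^{\infty}$-distance $2^{d}R$ of the block $B$, and hence
\begin{equation}
B^{\Box} \subset \{ x \in \Lambda : \text{$\|x-y\|_{\infty} \le 2^{d}R$ for some $y \in B$} \} .
\end{equation}
In particular, $B^{\Box}$ is contained in a cube of side $(2^{d+1}+1)R$ centred on $B$, and $|B^{\Box}|_{R}$ is bounded by a constant that depends only on $d$.

The following definitions involve a positive parameter $\ell$ whose value will be chosen to satisfy the (related) hypotheses of Propositions~\ref{prop:EK}--\ref{prop:EG2} below.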
For concreteness, in these definitions we consider only the case where $\pmb{\Lambda}_b = \Lambda \sqcup \bar\Lambda$ and the boson field is the complex field of Section~\ref{sec:cbf}. For the $n$-component $|\varphi|^4$ model studied in \cite{BBS-phi4-log}, the same definitions apply with $\phi$ replaced by $\varphi\in (\Rbold^n)^\Lambda$. \begin{defn} \label{def:ffregulator} Given $X \subset \Lambda$ and $\phi \in \mathbb{C}^{\Lambda}$, the \emph{fluctuation-field regulator} is given by \begin{align} \label{e:GPhidef} G (X,\phi) = \prod_{x \in X} \exp \left(|B_{x}|^{-1}\|\phi\|_{\Phi (B_{x}^\Box,\ell )}^2 \right) , \end{align} where $B_{x}$ is the unique block that contains $x$, and where the norm on the right-hand side is the $\Phi(\mathfrak{h})$ norm of Example~\ref{ex:h} with $\mathfrak{h}=\ell>0$ and localised to the small set neighbourhood $B_{x}^\Box$ as in \refeq{PhiXdef}. We define a norm on $\Ncal(X^\Box)$ by \begin{equation}\label{e:Gnormdef} \| F(X)\|_{G ,\ell } = \sup_{\phi \in \mathbb{C}^\Lambda} \frac{\|F(X)\|_{T_{\phi }(\ell )}}{G (X,\phi)} \quad \text{for $F(X) \in \Ncal (X^{\Box})$}. \end{equation} Although the norm depends on $X$, we do not indicate this dependence in the notation. \end{defn} For $X \in \Pcal$ the formula \eqref{e:GPhidef} simplifies to \begin{align} \label{e:GPhidef2} G (X,\phi) = \prod_{B \in \Bcal (X)} \exp \|\phi\|_{\Phi (B^\Box,\ell )}^2 , \end{align} and the more complicated formula in the definition is a way to extend this simpler formula to all subsets $X \subset \Lambda$. A similar remark applies to the next definition. Suppose that $R$ and $m$ are chosen in such a way that the diameter of $B^\Box$ is less than $mR$ (e.g., if $m$ is sufficiently large). We can then identify $B^\Box$ with a subset of $\Zd$ and use this identification to define polynomial functions from $B^\Box$ to $\mathbb{C}$. The \emph{dimension} of such a polynomial $f$, of a single variable, is defined to be $\frac{d-2}{2}$ plus the degree of $f$. Let $d_{\widetilde{\Pi}}$ be a fixed non-negative integer. We define \begin{equation} \widetilde{\Pi}(B^\Box) = \left\{ f \in \mathbb{C}^{\Lambda} \mid \text{$f$ restricted to $B^\Box$ is a polynomial of dimension at most $d_{\widetilde{\Pi}}$}\right\}. \end{equation} Then, for $\phi \in \mathbb{C}^{\Lambda}$, we define the semi-norm \begin{equation} \label{e:Phitilnorm} \| \phi \|_{\tilde{\Phi} (B^\Box)} = \inf \{ \| \phi -f\|_{\Phi} : f \in \widetilde{\Pi} (B^\Box)\}. \end{equation} \begin{defn} \label{def:regulator} Given $X \subset \Lambda$ and $\phi \in \mathbb{C}^{\Lambda}$, the \emph{large-field regulator} is given by \begin{align} \label{e:9Gdef} \tilde G (X,\phi) = \prod_{x \in X} \exp \left(\frac 12 |B_{x}|^{-1}\|\phi\|_{\tilde\Phi (B_{x}^\Box,\ell)}^2 \right) . \end{align} The factor $\frac 12$, which does not occur in \refeq{GPhidef}, has been inserted in \refeq{9Gdef} for later convenience. We define a norm on $\Ncal(X^{\Box})$ by \begin{equation} \label{e:9Nnormdef} \|F(X)\|_{\tilde{G},\mathfrak{h}} = \sup_{\phi \in \mathbb{C}^\Lambda} \frac{\|F(X)\|_{T_{\phi} (\mathfrak{h})}} {\tilde{G}(X,\phi)} \quad \text{for $F(X) \in \Ncal (X^{\Box})$}, \end{equation} where we have made explicit in the notation the fact that the norm on the left-hand side depends on a parameter $\mathfrak{h}$ which may be chosen to be different from the parameter $\ell$ used for the regulators. The dependence of the norm on $\ell$ is left implicit. \end{defn}
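As a simple illustration of \refeq{Phitilnorm}, recorded for orientation only: if $d_{\widetilde{\Pi}} \ge \frac{d-2}{2}$, then constant functions belong to $\widetilde{\Pi}(B^\Box)$, and since $\widetilde{\Pi}(B^\Box)$ is a vector space, it follows that for every constant $c \in \mathbb{C}$,
\begin{equation}
\| \phi + c \|_{\tilde{\Phi} (B^\Box)} = \| \phi \|_{\tilde{\Phi} (B^\Box)} ,
\end{equation}
and hence $\tilde{G}(X,\phi+c) = \tilde{G}(X,\phi)$. Thus the large-field regulator is insensitive to constant shifts of the field, whereas the fluctuation-field regulator in general is not.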
It is immediate from the definitions that $G (X, \phi)$ and $\tilde{G} (X, \phi)$ are increasing in $X$, and that for all disjoint $X,Y$ and for all $\phi \in \mathbb{C}^\Lambda$, \eqalign \label{e:GXYfluct} G (X \cup Y, \phi) &= G (X, \phi ) G (Y, \phi ), \\ \label{e:GXY} \tilde{G} (X \cup Y, \phi) &= \tilde{G} (X, \phi ) \tilde{G} (Y, \phi ). \enalign In addition, for $A \ge 0$ there is a $c_A \ge 1$ such that for all $t \in [0,1]$, \begin{align} &1=G(X,0)\le G(X,\phi), \quad \tilde G (X,t\phi) \leq G^{1/2} (X,\phi), \nnb & \label{e:KKK6} \left(1+ \|\phi\|_{\Phi(X^{\Box},\ell) }\right)^{A+1} \le c_A G ^{1/2}(X,\phi). \end{align} The first two inequalities are valid for $X \subset \Lambda$. The third holds for $X \in \Pcal$, and follows from \eqref{e:GPhidef2}. The following proposition extends the product property to the $G$ and $\tilde G$ norms. \begin{prop} If $X,Y$ are \emph{disjoint} and if $F(X) \in \Ncal (X^{\Box})$ and $K(Y) \in \Ncal(Y^\Box)$, then $F(X)K(Y) \in \Ncal ((X\cup Y)^{\Box})$, and for either of the $G$ or $\tilde G$ norms \refeq{Gnormdef} and \refeq{9Nnormdef}, \begin{equation} \|F(X)K(Y)\| \leq \|F(X)\| \| K(Y)\| . \end{equation} \end{prop} \begin{proof} This follows immediately from the product property Proposition~\ref{prop:prod} for the $T_\phi$ semi-norm, together with \refeq{GXYfluct}--\refeq{GXY}. \end{proof} By definition, \begin{equation} \label{e:T0G} \|F\|_{T_0(\ell)} \leq \|F\|_{G,\ell}. \end{equation} The following proposition shows that this inequality can be partially reversed, at the expense of a term involving a multiple of $\|F\|_{\tilde{G}}$. In our application, the ratio $\ell/\mathfrak{h}$ appearing in this term will be small. \begin{prop} \label{prop:KKK} Let $X \in \Pcal$ and $F \in \Ncal(X)$. For any positive integer $A<p_\Ncal$, there is a constant $c_A$ such that \begin{equation} \label{e:KKK1} \|F \|_{G ,\ell} \le c_A \left( \|F \|_{T_{0}(\ell)} + \left( \frac{\ell}{\mathfrak{h}} \right)^{A+1} \|F \|_{\tilde{G} ,\mathfrak{h} } \right). \end{equation} \end{prop} \begin{proof} We apply Proposition~\ref{prop:Tphi-bound}, with $T_\phi'=T_\phi(\ell)$ and $T_\phi=T_\phi(\mathfrak{h})$. Then $\rho^{(n)} = 2(\ell/\mathfrak{h})^n$ by \refeq{rhodef1} (assuming, as in our applications, that $\ell \le \mathfrak{h}$); the factor $2$ is harmless and is absorbed into the constant $c_A$ below. It follows from \refeq{9Nnormdef} and \refeq{KKK6} that \begin{equation} \label{e:KGGG} \|F\|_{T_{t\phi}} \leq \|F\|_{\tilde{G} ,\mathfrak{h} } \, \tilde G (X,t\phi) \leq \|F\|_{\tilde{G} ,\mathfrak{h} } \, G^{1/2} (X,\phi). \end{equation} We use this in the last term on the right-hand side of \refeq{Tphicor1}, to obtain \eqalign \|F \|_{T_{\phi}(\ell)} & \leq \left(1+ \|\phi\|_{\Phi (\ell)}\right)^{A+1} \left( \|F \|_{T_{0}(\ell)} + 2\left(\frac{\ell}{\mathfrak{h}} \right)^{A+1} \|F\|_{\tilde G,\mathfrak{h}} G^{1/2} (X,\phi) \right) . \label{e:KKK4.5} \enalign We then apply \refeq{KKK6}, divide by $G(X,\phi)$, and take the supremum over $\phi$ to obtain \refeq{KKK1}. \end{proof} \subsection{Norm estimates for Gaussian integration} The following proposition shows that the Laplacian, and in view of \refeq{ELap} also the Gaussian integral, are bounded operators on a space of polynomials in $\Ncal$. In its statement, we regard $\pmb{C}$ as a test function in $\Phi$, by extending the definition above \refeq{LapC} to $\pmb{C}_z=0$ for $z \in \vec\pmb{\Lambda}^*$ unless the length of $z$ is $2$ and both components are either in $\pmb{\Lambda}_b$ or in $\pmb{\Lambda}_f$, in which case it is given respectively by $\pmb{C}_{b;z}$ or $\pmb{C}_{f;z}$.
Then it makes sense to take the norm $\|\pmb{C}\|_\Phi$. \begin{prop} \label{prop:Etau-bound} If $F \in \Ncal$ is a polynomial of degree at most $A$, with $A \le p_\Ncal$, then \begin{equation} \label{e:DCK} \|\Delta_{\pmb{C}} F\|_{T_{\phi}} \le A^2 \|\pmb{C}\|_{\Phi}\,\|F\|_{T_{\phi}} \end{equation} and \begin{equation} \label{e:eDC} \|e^{t\Delta_{\pmb{C}}} F\|_{T_{\phi}} \le e^{|t| A^{2}\|\pmb{C}\|_{\Phi}}\,\|F\|_{T_{\phi}}. \end{equation} \end{prop} Note that \refeq{eDC} follows from $\|e^{t\Delta_{\pmb{C}}}\| \leq \sum_{n=0}^\infty \frac{1}{n!} \|t\Delta_{\pmb{C}}\|^n$ together with \refeq{DCK}, so it suffices to prove \refeq{DCK}. In the next proposition, we restrict to the conjugate fermion field setting of Section~\ref{sec:cff}, with fields $(\bar\psi_x, \psi_x)_{x \in \Lambda}$. We extend $C_f$ to a test function in $\Phi(\Lambda)$ by setting it equal to zero when evaluated on any sequence $z$ except those where $z$ has length $2$ and both components are in $\Lambda$. Then the norm $\|C_f\|_{\Phi(w')}$ makes sense. \begin{prop} \label{prop:EK} In the conjugate fermion field setting of Section~\ref{sec:cff}, suppose that the covariance $C_f$ obeys $\|C_f\|_{\Phi(w')}\le 1$. If $F \in \Ncal (\pmb{\Lambda} \sqcup \pmb{\Lambda}')$ then \eq \label{e:EKz} \| \mathbb{E}_{\pmb{C}} F \|_{T_\phi(w)} \le \mathbb{E}_{\pmb{C}_b} \|F \|_{T_{\phi \sqcup \xi}(w\sqcup w')} . \en Also, if $F \in \Ncal (\pmb{\Lambda} )$ then \eq \label{e:EK} \| \mathbb{E}_{\pmb{C}} \theta F \|_{T_\phi(w)} \le \mathbb{E}_{\pmb{C}_b} \|\theta F \|_{T_{\phi \sqcup \xi}(w\sqcup w')} \le \mathbb{E}_{\pmb{C}_b} \|F \|_{T_{\phi+\xi}(w+w')} . \en \end{prop} The variable $\xi$, which occurs in \refeq{EK} (and also in \refeq{EG2}), is a dummy variable of integration for $\mathbb{E}_{\pmb{C}_b}$. Note that the first inequality of \refeq{EK} is an immediate consequence of \refeq{EKz}, and that the second follows from \refeq{theta-bd1}, so it suffices to prove \refeq{EKz}. In fact, as we show in Lemma~\ref{lem:EKzz} below, a stronger statement than \refeq{EKz} holds. Namely, if $h: \Rbold^{\pmb{\Lambda}'_{b}} \to \mathbb{C}$ then \begin{equation} \lbeq{EKzh} \| \mathbb{E}_{\pmb{C}} h F \|_{T_\phi(w)} \le \mathbb{E}_{\pmb{C}_b} \left[ | h(\xi)|\, \|F \|_{T_{\phi \sqcup \xi}(w\sqcup w')} \right] . \end{equation} Finally, we have an estimate for the Gaussian expectation of the fluctuation-field regulator. \begin{prop} \label{prop:EG2} Let $t \ge 0$, $\alpha_{G} >1$, and $X \subset \Lambda$. There exists a (small) positive constant $c (\alpha_{G})$ such that if $ \|\pmb{C}_b\|_{\Phi^+ (\ell)}\le c (\alpha_{G}) t^{-1}$, where the $\Phi^+$ norm is the $\Phi$ norm with $p_\Phi$ replaced by $p_\Phi+d$, then \eq \label{e:EG2} 0 \leq \mathbb{E}_{\pmb{C}_b} G^t(X,\xi) \le \alpha_{G}^{R^{-d}|X|}. \en \end{prop} Proofs of Propositions~\ref{prop:Etau-bound}--\ref{prop:EG2} are given in Sections~\ref{sec:heat}--\ref{sec:ffr} below. \section{Gaussian integration and the heat equation} \label{sec:Gihe} In this section, we prove Proposition~\ref{prop:conv}. The proof uses integration by parts. For the purely bosonic case, it is straightforward to apply integration by parts to obtain \begin{equation} \label{e:Gibp} \mathbb{E}_{\pmb{C}_b} \phi_x f = \sum_{y\in \pmb{\Lambda}_b} \pmb{C}_{b; x,y}\mathbb{E}_{\pmb{C}_b} \frac{\partial f}{\partial \phi_y} , \quad \quad x \in \pmb{\Lambda}_b \end{equation} where $f$ is any smooth function such that both sides are integrable.
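For instance, taking $f = \phi_y$ in \refeq{Gibp} recovers the standard fact that the covariance is the Gaussian two-point function:
\begin{equation}
\mathbb{E}_{\pmb{C}_b} \phi_x \phi_y
= \sum_{u\in \pmb{\Lambda}_b} \pmb{C}_{b; x,u}\, \mathbb{E}_{\pmb{C}_b} \frac{\partial \phi_y}{\partial \phi_u}
= \pmb{C}_{b; x,y} .
\end{equation}
The following lemma is a fermionic version of \refeq{Gibp}.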
Although it is standard (see, e.g., \cite[Proposition~1.17]{FKT02}), we give the simple proof. \begin{lemma} \label{lem:fibp} For $F \in \Ncal (\pmb{\Lambda})$ and $x \in \pmb{\Lambda}_f$, \begin{equation} \mathbb{E}_{\pmb{C}_f} \psi_{x} F = \sum_{y\in\pmb{\Lambda}_f} \pmb{C}_{f;x,y}\, \mathbb{E}_{\pmb{C}_f} i_{y}F . \end{equation} \end{lemma} \begin{proof} By definition, \begin{equation} i_{y} S = \frac{1}{2}\sum_{v\in\pmb{\Lambda}_f} \pmb{A}_{f;y,v} \psi_{v} - \frac{1}{2}\sum_{u\in\pmb{\Lambda}_f} \pmb{A}_{f;u,y} \psi_{u} = \sum_{v\in\pmb{\Lambda}_f} \pmb{A}_{f;y,v} \psi_{v} . \end{equation} It suffices by linearity to consider $F$ a product of generators, and since $i_{y}F$ cannot contain all generators as factors, \begin{equation} \int_{\pmb{\Lambda}_f} i_{y}F =0 . \end{equation} By replacing $F$ by $e^{-S}F$, we have \begin{equation} \int_{\pmb{\Lambda}_f} \left(i_{y}e^{-S} \right) F + \int_{\pmb{\Lambda}_f} e^{-S} \left(i_{y}F\right) = 0 . \end{equation} This is the same as \begin{equation} \int_{\pmb{\Lambda}_f} e^{-S} \left(-\sum_{v} \pmb{A}_{f;y,v} \psi_{v}\right) F + \int_{\pmb{\Lambda}_f} e^{-S} \left(i_{y}F\right) = 0 . \end{equation} By applying the inverse of $\pmb{A}_{f}$ to both sides, we obtain the desired result. \end{proof} The following lemma provides the expression in our context of the intimate link between Gaussian integration and the heat equation. In the purely bosonic context, this is a standard fact about Gaussian random variables. \begin{lemma} \label{lem:*heat-eq} For $T>0$ and $F\in \Ncal(\pmb{\Lambda})$ such that $F_{t} = \mathbb{E}_{t\pmb{C}}\theta F$ is defined for $t<T$, the differential equation \begin{equation}\label{e:*heat-eq1} \frac{d}{dt} F_{t} = \frac{1}{2} \Delta_{\pmb{C}} F_{t} \end{equation} holds for $t \in (0,T)$. Moreover, if $P \in \Ncal(\pmb{\Lambda})$ is a polynomial of finite degree, then \begin{equation} \label{e:*EWick} \mathbb{E}_{\pmb{C}} \theta P = e^{\frac{1}{2} \Delta_{\pmb{C}}} P. \end{equation} \end{lemma} \begin{proof} Since the Gaussian expectation factors as in \refeq{ECbf}, to prove \eqref{e:*heat-eq1} it suffices to consider separately the cases where $F$ is purely bosonic or purely fermionic. We first prove \eqref{e:*heat-eq1} in the bosonic case, where $F=f$ is a smooth function of $\phi$. The expectation is then a standard Gaussian integral, and by a change of variables we have \begin{align}\label{e:*heateq2} \frac{d}{dt} F_{t} (\phi) &= \frac{d}{dt}\mathbb{E}_{\pmb{C}_b} F (\phi + \sqrt{t}\xi) = \mathbb{E}_{\pmb{C}_b} \sum_{x \in \pmb{\Lambda}_{b}} F_{x}(\phi + \sqrt{t}\xi) \frac{1}{2\sqrt{t}}\xi_{x}. \end{align} To differentiate under the expectation we need to know that the resulting integrand is integrable. To see this, we observe that since $t<T$ there exists $\epsilon >0$ such that $F (\phi + \sqrt{t}\xi)\exp[\epsilon \sum_{x} \xi_{x}^{2}]$ is integrable. Now we apply the integration by parts identity \refeq{Gibp}, and the definition \refeq{LapC} of the Laplacian, to conclude that \begin{align}\label{e:*heateq2a} \frac{d}{dt} F_{t} (\phi) &= \frac{1}{2} \mathbb{E}_{\pmb{C}_b} \sum_{x,y \in \pmb{\Lambda}_{b}} \pmb{C}_{b;x,y} F_{x,y}(\phi + \sqrt{t}\xi) \nnb &= \frac{1}{2} \mathbb{E}_{\pmb{C}_b} \Delta_{\pmb{C}_b} F(\phi + \sqrt{t}\xi) = \frac{1}{2} \Delta_{\pmb{C}_b} \mathbb{E}_{\pmb{C}_b} F (\phi + \sqrt{t}\xi) = \frac{1}{2}\Delta_{\pmb{C}_b} F_{t} (\phi). \end{align} This proves the bosonic case of \refeq{*heat-eq1}. For the fermionic case, we can suppose that $F=\psi^y=\psi_{y_1}\cdots \psi_{y_k}$. 
We first note that \begin{align} \frac{d}{dt} \theta_{t} \psi^{y} &= \frac{d}{dt} \prod_{j=1}^k \left(\psi_{y_{j}} + t \psi_{y'_{j}} \right) = \sum_{i} (-1)^{i-1}\psi_{y'_{i}} \prod_{j\not =i} \left(\psi_{y_{j}} + t \psi_{y'_{j}} \right), \end{align} with the factors under the product maintaining their original order. By definition of $i_x$, this gives \begin{align} \frac{d}{dt} \theta_{t} \psi^{y} &= \sum_{x \in \pmb{\Lambda}_f} \psi_{x}\frac{1}{t}i_{x}\left( \theta_{t}\psi^{y} \right) = \sum_{x \in \pmb{\Lambda}_f} \psi_{x} \theta_{t}\left(i_{x} \psi^{y} \right) , \end{align} where the sum extends to all $x\in\pmb{\Lambda}_f$ because the terms with $x\neq y_j'$ for every $j$ vanish. With Lemma~\ref{lem:fibp}, we then obtain \begin{align} \frac{d}{dt} \mathbb{E}_{\pmb{C}_f} \theta_{t}F &= \sum_{x \in \pmb{\Lambda}_f} \mathbb{E}_{\pmb{C}_f} \psi_{x} \theta_{t} \left(i_{x} F \right) \nnb &= \sum_{x,y \in \pmb{\Lambda}_f} \pmb{C}_{f;x,y} \mathbb{E}_{\pmb{C}_f} i_{y} \theta_{t}\left(i_{x} F \right) = \sum_{x,y \in \pmb{\Lambda}_f} \pmb{C}_{f;x,y} \mathbb{E}_{\pmb{C}_f} \theta_{t}\left(ti_{y}i_{x} F \right) , \end{align} which is the same as \begin{align} \label{e:dEC} \frac{1}{t} \frac{d}{dt} \mathbb{E}_{\pmb{C}_f} \theta_{t}F = \Delta_{\pmb{C}_f}\mathbb{E}_{\pmb{C}_f} \theta_{t} F . \end{align} Writing $\frac{1}{t}\frac{d}{dt} = 2\frac{d}{d (t^{2})}$, and then replacing $t^{2}$ by $t$, we obtain \begin{equation} \frac{d}{dt}\mathbb{E}_{\pmb{C}_f} \theta_{\sqrt{t}}F = \frac 12 \Delta_{\pmb{C}_f}\mathbb{E}_{\pmb{C}_f} \theta_{\sqrt{t}} F . \end{equation} It can be verified from the definitions that $\mathbb{E}_{\pmb{C}_f}\theta_{\sqrt{t}} = \mathbb{E}_{t\pmb{C}_f}\theta$, and the fermionic case of \refeq{*heat-eq1} follows. Finally, suppose that $F$ is a polynomial $P$ of finite degree. By \eqref{e:*heat-eq1}, each of $P_{t}$ and $e^{\frac{t}{2} \Delta_{\pmb{C}}} P$ solves the heat equation with the same initial data. The heat equation is a finite-dimensional linear system of ordinary differential equations, because $\pmb{\Lambda}$ is a finite set and thus $\Delta_{\pmb{C}}$ is a linear operator acting on the finite-dimensional vector space of polynomials in $\phi$ and $\psi$ of degree at most that of $P$. Therefore solutions for the heat equation are unique by the standard theory of linear systems, and \eqref{e:*EWick} follows. \end{proof} \begin{proof}[Proof of Proposition~\ref{prop:conv}.] Since \refeq{ELap} has been proven in \refeq{*EWick}, it suffices to prove \refeq{conv}. By the first equality of \refeq{ECbf}, it suffices to verify \refeq{conv} individually for $F=f$ and $F=\psi^y$. For $F=\psi^y$, \refeq{conv} is an immediate consequence of \refeq{*EWick}. For $F=f$, the expectation is a standard Gaussian expectation. Since finite Borel measures are uniquely characterised by their Fourier transforms, it suffices to consider the case $f(\phi)=e^{i\phi \cdot \eta}$ for $\eta \in \Rbold^{\pmb{\Lambda}_b}$. The Fourier transform of a Gaussian measure with covariance $\pmb{C}_b$ is $e^{-(\eta, \pmb{C}_b \eta)}$.
Thus, setting $\pmb{C}_b = \pmb{C}_{b,1}+ \pmb{C}_{b,2}$, we have \eq \mathbb{E}_{\pmb{C}_b}\theta f = e^{i\phi \cdot \eta}e^{-(\eta, \pmb{C}_b \eta)}, \en and also \eq \mathbb{E}_{\pmb{C}_{b,2}}\theta\left( \mathbb{E}_{\pmb{C}_{b,1}}\theta f \right) = \mathbb{E}_{\pmb{C}_{b,2}}\theta\left(e^{i\phi \cdot \eta}e^{-(\eta, \pmb{C}_{b,1} \eta)} \right) = e^{i\phi \cdot \eta}e^{-(\eta, \pmb{C}_{b,2} \eta)}e^{-(\eta, \pmb{C}_{b,1} \eta)} . \en The above two right-hand sides are equal, and \refeq{conv} follows in the bosonic case. This completes the proof. \end{proof} \section{The \texorpdfstring{$T_\phi$}{Tphi} semi-norm} \label{sec:tphisemi} We now prove the five propositions stated in Sections~\ref{sec:Tphidef}--\ref{sec:Tphifurther}: the product property of Proposition~\ref{prop:prod}, the exponential norm estimate of Proposition~\ref{prop:eK}, the polynomial norm estimate of Proposition~\ref{prop:T0K}, the change of norm estimate of Proposition~\ref{prop:Tphi-bound}, and the contractive bound for the map $\theta$ of Proposition~\ref{prop:derivs-of-tau-bis}. Many of the proofs follow the strategy of writing the $T_\phi$ semi-norm in terms of the pairing \eqref{e:Kgpairdef} that defines it, and then introducing an adjoint operation that transfers the desired statement into an estimate on test functions. \subsection{Proof of the product property} \label{sec:pfprod} In this section, we prove the product property stated in Proposition~\ref{prop:prod}. The proof proceeds by first establishing the product property for a more general algebra with semi-norm, and then noting that the product property of the $T_\phi$ norm follows as an instance. Let $\Hcal$ be the algebra generated by the fermion field over the ring of formal power series in the indeterminates $(\xi_{x})_{x\in \pmb{\Lambda}_{b}}$. An element $A\in\Hcal$ has a unique representation \begin{equation} \label{e:Arep} A=\sum_{z \in \vec\pmb{\Lambda}^{*}} \frac{1}{z!} F_{z} \xi^{z_{b}}\psi^{z_{f}} , \end{equation} where $z=(z_b,z_f)$, the coefficients $F_{z}$ are complex valued, symmetric in the components of $z_{b} \in \vec\pmb{\Lambda}_b^*$, and antisymmetric in the components of $z_{f} \in \vec\pmb{\Lambda}_f^*$. Coefficients that obey these symmetry conditions are said to be \emph{admissible}. Let $\Fcal$ be the set of admissible coefficients. As vector spaces, $\Hcal$ and $\Fcal$ are isomorphic by the map $A \mapsto (F_z)_{z\in\vec\pmb{\Lambda}^*}$ implicitly defined by \eqref{e:Arep}. We use this isomorphism to transport the product from $\Hcal$ to a product on $\Fcal$. Let \begin{equation} \label{e:eta} \eta^{z} = \xi^{z_{b}}\psi^{z_{f}} . \end{equation} For $F',F'' \in \Fcal$, we define $(F' \star F'')$ to be the unique element of $\Fcal$ such that \begin{equation} \label{e:star-product} \sum_{z \in \vec\pmb{\Lambda}^{*}} \frac{1}{z!} (F'\star F'')_{z}\eta^{z} = \left( \sum_{z' \in \vec\pmb{\Lambda}^{*}} \frac{1}{z'!} F'_{z'}\eta^{z'} \right) \left( \sum_{z'' \in \vec\pmb{\Lambda}^{*}} \frac{1}{z''!} F''_{z''}\eta^{z''} \right) . \end{equation} The vector space isomorphism between $\Hcal$ and $\Fcal$ implies the existence of $F'\star F''$, and with the $\star$ product, $\Fcal$ becomes an algebra isomorphic to $\Hcal$. For a sequence $x= (x_{1},x_{2},\dots ,x_{p})$, we say that $(x',x'')$ are \emph{complementary with respect to} $x$ if $x'$ is a subsequence of $x$ and $x''$ is the sequence obtained by removing $x'$ from $x$. The pairs such that $x'$ or $x''$ is the empty sequence are included.
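For example, if $x = (x_1,x_2)$ has distinct components, then there are exactly four complementary pairs, namely
\begin{equation}
(x',x'') \in \big\{ (\varnothing,(x_1,x_2)),\; ((x_1),(x_2)),\; ((x_2),(x_1)),\; ((x_1,x_2),\varnothing) \big\} ;
\end{equation}
in general, a sequence of length $p$ with distinct components has $2^{p}$ complementary pairs.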
We denote by $S_{x}$ the set of all pairs $(x',x'')$ that are complementary with respect to $x$. There is an inverse relation: given sequences $x'$ and $x''$ we define $x'\diamond x''$ to be the set of all $x$ such that $(x',x'') \in S_{x}$. We extend this notation to $z \in \vec\pmb{\Lambda}^{*}$ by applying it to $z$ species by species. For example, with just one boson and one fermion species, $(z',z'')$ are complementary with respect to $z$ if $(z'_{b},z''_{b}) \in S_{z_{b}}$ and $(z'_{f},z''_{f}) \in S_{z_{f}}$. We define $S_{z}$ to be the set of all $(z',z'')$ that are complementary with respect to $z$ and we define $z'\diamond z''$ to be the set of all $z\in\vec\pmb{\Lambda}^{*}$ such that $(z',z'') \in S_{z}$. Recall from Section~\ref{sec:seq} that factorials and concatenation are defined species-wise in $\vec\pmb{\Lambda}^*$. Finally, given a sequence $z$ with complementary subsequences $z'$ and $z''$, we define $\mathrm{sgn} (z',z'';z)\in \{-1,1\}$ by the requirement that $\eta^{z} = \mathrm{sgn} (z',z'';z)\eta^{z'}\eta^{z''}$. \begin{lemma} \label{lem:product-formula} For $F',F'' \in \Fcal$, the product defined on $\Fcal$ by \eqref{e:star-product} is given by \begin{equation} \label{e:starproddef} (F' \star F'')_{z} = \sum_{(z',z'') \in S_{z}} F'_{z'} F''_{z''} \, \mathrm{sgn} (z',z'';z) . \end{equation} \end{lemma} \begin{proof} Let $(F' * F'')_{z}$ denote the right-hand side of \eqref{e:starproddef}. It suffices to show that \eq \label{e:closed} F'*F'' \in \Fcal, \en and \eq \label{e:rightprod} \sum_z \frac{1}{z!} (F'*F'')_z \eta^z = \text{right-hand side of \eqref{e:star-product}}. \en First, by definition, \begin{align} \label{e:bullet-prod} (F' * F'')_{z}\eta^{z} &= \sum_{(z',z'') \in S_{z}} F'_{z'} F''_{z''} \, \mathrm{sgn} (z',z'';z)\eta^{z} = \sum_{(z',z'') \in S_{z}} F'_{z'} \eta^{z'} F''_{z''} \eta^{z''} . \end{align} Therefore, \begin{align} \sum_z \frac{1}{z!} (F'*F'')_z \eta^z &= \sum_{z} \frac{1}{z!} \sum_{(z',z'')\in S_{z}} F'_{z'}\eta^{z'} F''_{z''}\eta^{z''} = \sum_{z',z''} F'_{z'}\eta^{z'} F''_{z''} \eta^{z''} \sum_{z \in z'\diamond z''} \frac{1}{z!}. \end{align} The number of $z$ in the set $z'\diamond z''$ is $z!/(z'!z''!)$, because each $z$ is specified by choosing a subsequence $(j_{1},\dots,j_{p'})$ of $(1,\dots ,p (z))$ and setting $z_{j_{k}}=z'_{k}$, with the other components of $z$ then determined by $z''$. This gives \begin{align} \sum_z \frac{1}{z!} (F'*F'')_z \eta^z &= \sum_{z',z''} \frac{1}{z'!}\frac{1}{z''!} F'_{z'} \eta^{z'} F''_{z''} \eta^{z''} , \end{align} which proves \refeq{rightprod}. For $F:\vec\pmb{\Lambda}^{*}\to\mathbb{C}$, let $\tilde{F}_{z} = F_{z}\eta^{z}$. The admissibility requirement in the definition of $\Fcal$ is equivalent to the statement that $F\in \Fcal$ if and only if $\tilde F_{\pi z} = \tilde F_z$ for any permutation $\pi$ of $z$. Also, given $(z',z'') \in S_{\pi z}$, we can define $(\hat z', \hat z'') \in S_{z}$ in a unique way by reordering the components of $z'$ to produce $\hat z'$ and similarly for $z''$. Then, by \refeq{bullet-prod}, \begin{align} \widetilde {(F' * F'')}_{\pi z} &= \sum_{(z',z'') \in S_{\pi z}} \tilde{F}'_{z'} \tilde{F}''_{z''} = \sum_{(z',z'') \in S_{\pi z}} \tilde{F}'_{\hat z'} \tilde{F}''_{\hat z''} = \sum_{(z',z'') \in S_{z}} \tilde{F}'_{z'} \tilde{F}''_{ z''} = \widetilde {(F' * F'')}_{ z} , \end{align} where the second equality holds since $F',F''\in\Fcal$. This proves \refeq{closed}, and completes the proof. \end{proof}
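To illustrate \eqref{e:starproddef} in the simplest fermionic case, suppose that there is a single fermion species and no boson species, and let $z = (y_1,y_2)$ with $y_1,y_2 \in \pmb{\Lambda}_f$ distinct. Then $S_z$ consists of four pairs, and \eqref{e:starproddef} reads
\begin{equation}
(F' \star F'')_{(y_1,y_2)}
= F'_{\varnothing} F''_{(y_1,y_2)} + F'_{(y_1)} F''_{(y_2)} - F'_{(y_2)} F''_{(y_1)} + F'_{(y_1,y_2)} F''_{\varnothing} ,
\end{equation}
where the minus sign arises because $\eta^{(y_2)}\eta^{(y_1)} = \psi_{y_2}\psi_{y_1} = -\eta^{(y_1,y_2)}$, so that $\mathrm{sgn}((y_2),(y_1);z) = -1$.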
Given $F \in \Fcal$ and a test function $g\in\Phi$, we define a pairing and a semi-norm by \begin{equation} \label{e:Tpairdef} \pair{ F, g } = \sum_{z \in \vec\pmb{\Lambda}^{*}} \frac{1}{z!} F_{z} g_{z} , \quad\quad \|F\|_{T} = \sup_{g \in B (\Phi)} \, |\langle F , g \rangle | . \end{equation} The following proposition shows that the $T$ semi-norm on $\Fcal$ obeys the product property. \begin{prop} \label{prop:prodT} For all $F,G \in \Fcal$, $\|F\star G\|_{T} \leq \|F\|_{T}\|G\|_{T}$. \end{prop} \begin{proof} Let $F,G \in \Fcal$ and let $g\in\Phi$. By Lemma~\ref{lem:product-formula}, \begin{align} \langle F\star G, g \rangle &= \sum_z \frac{1}{z!} \sum_{(z',z'') \in S_z} F_{z'}G_{z''} \,\mathrm{sgn} (z',z'';z) \,g_z \nnb &= \sum_{z',z''} \sum_{z\in z' \diamond z''} \frac{1}{z!} F_{z'}G_{z''} \,\mathrm{sgn} (z',z'';z) \,g_z . \end{align} We define $G^{*}g \in \Phi$ by \begin{equation} (G^{*}g)_{z'} = \sum_{z''} \frac{1}{z''!} G_{z''} \sum_{z \in z' \diamond z''} \frac{z'!z''!}{z!} \,\mathrm{sgn} (z',z'';z) \, g_z, \end{equation} so that \begin{align} \lbeq{Gadj} \langle F\star G, g \rangle &= \sum_{z'} \frac{1}{z'!} F_{z'}(G^{*} g)_{z'} = \pair{F,G^{*}g} \end{align} and hence \begin{equation} \|F\star G\|_{T} \le \|F\|_{T} \sup_{g \in B (\Phi)} \|G^{*}g\|_{\Phi} . \end{equation} Thus it remains to show that \eq \label{e:Gstarbd} \|G^{*}g\|_{\Phi} \le \|G\|_{T} \quad \quad \text{for $g \in B (\Phi)$}. \en Given $g \in B(\Phi)$ and $z' \in \vec\pmb{\Lambda}^{*}$, we define a test function $f_{z'} \in \Phi$ by setting its value $(f_{z'})_{z''}$ at $z''$ to be equal to \begin{equation} \label{e:fzzdef} f_{z',z''} = \sum_{z \in z' \diamond z''} \frac{z'!z''!}{z!} \,\mathrm{sgn} (z',z'';z) \, g_z, \end{equation} where $f_{z',z''}$ is a short notation for $(f_{z'})_{z''}$. We regard this as a function of $z''$ with $z'$ fixed. By definition, $(G^{*}g)_{z'} = \pair{G,f_{z'}}$, and hence, by Definition~\ref{def:gnorm-general}, \begin{equation} \|G^*g\|_\Phi = \sup_{(\alpha',z')\in \Acal'} |\lambda_{\alpha',z'} \pair{G,f_{z'}}| = \sup_{(\alpha',z')\in \Acal'} |\pair{G,\lambda_{\alpha',z'} f_{z'}}|, \end{equation} where $\Acal'$ denotes a copy of $\Acal$, and where we have made the abbreviation $\lambda_{\alpha',z'}= w_{\alpha',z'}^{-1}\nabla^{\alpha'}$. Thus we obtain \begin{equation} \|G^*g\|_\Phi \le \|G\|_T \sup_{(\alpha',z')\in \Acal'} \|\lambda_{\alpha',z'} f_{z'}\|_\Phi. \end{equation} Thus, it is sufficient to show that for all $g \in B(\Phi)$ and $(\alpha',z')\in \Acal'$, $(\alpha'',z'') \in \Acal''$, where $\Acal''$ denotes a second copy of $\Acal$, \begin{equation} |\lambda_{\alpha'',z''}\lambda_{\alpha',z'} f_{z',z''}| \le 1. \end{equation} In \refeq{fzzdef}, the operations $\lambda_{\alpha'',z''}\lambda_{\alpha',z'}$ can be interchanged with the summation because they are linear, and with the factorials and sgn function since these depend only on the length and order of the relevant sequences. Since the number of terms in the sum over $z\in z' \diamond z''$ is equal to $z!/(z'!z''!)$, we find after taking the absolute values inside the summation that it suffices to show that, for each $z \in z'\diamond z''$, \begin{equation} \label{e:lam1} |\lambda_{\alpha'',z''}\lambda_{\alpha',z'} g_z| \le 1, \end{equation} where the derivatives within the $\lambda$ factors act on the arguments of $g_z$ according to their permuted locations within $z \in z'\diamond z''$. Since \refeq{lam1} is a consequence of $g \in B(\Phi)$ and the definition of the $\Phi$ norm, this completes the proof.
\end{proof} \begin{proof}[Proof of Proposition~\ref{prop:prod}.] Let $F =\sum_{y \in \vec\pmb{\Lambda}_f^*}\frac{1}{y!}F_y \psi^y \in \Ncal$. For boson fields $\phi,\xi$, Taylor expansion of the coefficients $F_{y}$ about a fixed $\phi$ in powers of $\xi$ defines an algebra isomorphism \begin{equation} F \mapsto \sum_{(x,y) \in \vec\pmb{\Lambda}^{*}} \frac{1}{x!y!} F_{x,y} (\phi) \xi^{x}\psi^y \end{equation} of $\Ncal$ into a subalgebra (if $p_\Ncal < \infty$) of the algebra $\Hcal$ and, in turn, $\Hcal$ is isomorphic as an algebra to $\Fcal$. The composition of these isomorphisms is an isometry of the semi-normed algebras $(\Ncal,T_{\phi})$ and $(\Fcal,T)$, so Proposition~\ref{prop:prod} follows from Proposition~\ref{prop:prodT}. \end{proof} Finally, we extract and develop a detail from the proof of Proposition~\ref{prop:prodT}, needed only in \cite[Section~\ref{loc-sec:LTnormestimates}]{BS-rg-loc}. Examination of the derivation of \refeq{Gadj} shows that it is also true that $\pair{F\star G,g} = \pair{G,F^\dagger g}$ for all $F,G\in \Fcal$ and $g \in \Phi$, where \begin{align} \label{e:Fdag} (F^{\dagger}g)_{z''} &= \sum_{z'} \frac{1}{z'!}F_{z'} \sum_{z \in z' \diamond z''} \frac{z'!z''!}{z!} \,\mathrm{sgn}(z',z'';z) \, g_z . \end{align} By the isomorphism mentioned in the proof of Proposition~\ref{prop:prod}, \refeq{Fdag} also defines an adjoint in $\Ncal$, in the sense that $\pair{FG,g}_\phi = \pair{G,F^\dagger g}_\phi$ also for $F,G\in \Ncal$. We apply this to the case of a test function $f_z$ which is nonzero only on sequences $z$ of fixed length $p(z)=n$ and of fixed choice of species for each of the $n$ components of $z$. In this case, $z!$ takes the same value for all $z$ in the support of $f$, and given $z'$ in the sum in \refeq{Fdag}, $z''!$ is determined by $z'$ (and by the fixed species content of the support of $f$). In addition, given $z'$, it is also the case that $\mathrm{sgn}(z',z'';z)$ is determined since the species in $z$ are known when $f_z \neq 0$. Thus there are coefficients $c_{z'}= \frac{z''!}{z!}\mathrm{sgn}(z',z'';z)$ such that, for the special $f$ under consideration, \begin{align} \label{e:Fdagf} (F^{\dagger}f)_{z''} &= \sum_{z'} c_{z'}F_{z'} \tilde f^{(z')}_{z''} \quad \text{with} \quad \tilde f^{(z')}_{z''} = \sum_{z \in z' \diamond z''} f_z . \end{align} \subsection{Exponential norm estimate} \label{sec:ebdTphi} In this section, we prove Proposition~\ref{prop:eK}. Let $f(u)=\sum_{n=0}^\infty a_n u^n$ and $h(u)=\sum_{n=0}^\infty |a_n| u^n$, and let $\|\cdot\|$ denote any semi-norm that obeys the product property, e.g., the $T_\phi$ semi-norm. As an immediate consequence of the product property, for any $F$, we have \eq \label{e:apos} \|f(F)\| \leq \sum_{n=0}^\infty |a_n| \|F^n\| \leq \sum_{n=0}^\infty |a_n| \|F\|^n = h(\|F\|). \en It follows from \refeq{apos} that \eq \label{e:apos-e} \|e^{-F}\|_{T_\phi} \le e^{\|F \|_{T_\phi}}. \en Proposition~\ref{prop:eK} provides an improvement to \refeq{apos-e} when the purely bosonic part of $F$ has positive real part. Its proof is based on the following lemma. \begin{lemma} \label{lem:fK} Let $\|\cdot\|$ denote any semi-norm that obeys the product property. If $\|F\| \le 1$, then \begin{align} \label{e:RK3} \|e^{-\frac{1}{2}F^{2}}-(1+F)e^{-F}\| & \le \|e^{-\frac{1}{2}F^{2}}\|\,\|F^{3}\| . \end{align} \end{lemma} \begin{proof} Let \eq \label{e:fK7} R = e^{-\frac{1}{2}F^{2}}-(1+F)e^{-F}. \en Let \eq f (z) = 1+(z-1)e^{z+\frac{1}{2}z^{2}}. \en Then $R=e^{-\frac 12 F^2}f(-F)$. By definition, $f (0)=0$ and $ f' (z) = z^{2} e^{z+\frac{1}{2}z^{2}}.
$ Thus $f'(z)$ has a power series with non-negative coefficients, and hence so does $f(z)$. Also, $f (z)= z^{3}g (z)$, for some $g (z)=\sum b_{n}z^{n}$ with $b_{n}\ge 0$. In addition, $g(1)=f(1)=1$. Therefore, by \refeq{apos}, \eq \|f (-F) \| = \|F^{3}g(-F)\| \le \|F^{3}\|\|g(-F) \| \leq \|F^{3}\|\,g(\|F\|) . \en If $\|F\|\le 1$, this simplifies to \eq \label{e:f-K} \|f (-F) \| \le \|F^{3}\|. \en This gives \eq \label{e:fK1} \|R\| = \big\| e^{-\frac{1}{2}F^{2}} f (-F) \big\| \le \|e^{-\frac{1}{2}F^{2}}\| \|F^{3}\|, \en which is \refeq{RK3}. \end{proof} \begin{proof}[Proof of Proposition~\ref{prop:eK}.] Let $F\in \Ncal$ and let $F_\varnothing(\phi)$ be the purely bosonic part of $F$. We will prove that \eq \|e^{-F }\|_{T_\phi} \le e^{-2{\rm Re} F_\varnothing(\phi)+\|F \|_{T_\phi}} . \en We first assume that $|F_\varnothing(\phi)|$ is sufficiently small that $|1-F_\varnothing(\phi)| - |F_\varnothing(\phi)| \ge 0$, and we write $F_\varnothing(\phi)=z=x+iy$. We show that this implies that \eq \label{e:eKF0} |1-z| - |z| \le 1 - 2x, \en as follows. By hypothesis, both sides of \refeq{eKF0} are non-negative (for the right-hand side, note that the hypothesis gives $1-2x = |1-z|^{2}-|z|^{2} \ge 0$), so \refeq{eKF0} is equivalent to the inequality obtained by squaring both sides, and algebra reduces the latter to \eq x(1-x)+y^2 \le |z| \, |1-z|. \en This certainly holds if the left-hand side is negative, and otherwise it suffices to show that the inequality is valid if both sides are squared, and the latter reduces to \eq 2x(1-x) \le (1-x)^2+x^2, \en which does hold. This completes the proof of \refeq{eKF0} when $|1-F_\varnothing(\phi)| - |F_\varnothing(\phi)| \ge 0$. The $T_\phi$ semi-norm is defined via the pairing given in \eqref{e:Kgpairdef}. Let $g$ be any test function of norm at most $1$. By separating out the contribution of the empty sequence to the sum over $z$ we have \begin{align} |\langle 1-F , g \rangle_\phi| &\le |(1-F_\varnothing(\phi))g_\varnothing| + \sum_{r\not = 0} \big| \sum_{z\in \vec\pmb{\Lambda}^{(r)}} \frac{1}{z!} F_{z}(\phi) g_{z} \big| \nnb &= (|1-F_\varnothing(\phi)| -|F_{\varnothing}(\phi)|)|g_\varnothing | + \sum_{r \ge 0} \big| \sum_{z\in \vec\pmb{\Lambda}^{(r)}} \frac{1}{z!} F_{z}(\phi) g_{z} \big|. \end{align} We take the supremum over test functions $g$ of unit norm. The final term becomes $\|F\|_{T_\phi}$, so \begin{equation} \label{e:explinbis} \|1-F \|_{T_\phi} \le |1-F_\varnothing(\phi)| -|F_{\varnothing}(\phi)| + \|F\|_{T_\phi}. \end{equation} For the rest of the proof, we drop the $T_{\phi}$ subscript. Given $\phi$, we choose $N$ sufficiently large that $|1-\frac{1}{N} F_\varnothing(\phi)| - \frac{1}{N}|F_\varnothing(\phi)| \ge 0$. By \refeq{explinbis} and \refeq{eKF0}, \eq \left\|1-\frac{1}{N}F \right\| \le 1 - \frac{2}{N}{\rm Re} F_\varnothing(\phi) + \frac{1}{N}\|F\|. \en By the product property, \eq \left\|\big(1-\frac{1}{N}F \big)^{N}\right\| \le \big( 1 - \frac{2}{N}{\rm Re} F_{\varnothing}(\phi) + \frac{1}{N}\|F\| \big)^{N} \le e^{-2{\rm Re} F_\varnothing(\phi)+\|F\|}. \en It suffices now to show that the limit $N \to \infty$ can be taken inside the semi-norm on the left-hand side. For this we define $A = e^{F/N}(1-\frac{1}{N}F)$. By \eqref{e:apos} with $f(z)=e^z$ and with $f (z) = e^{z}-1$, we have $\|e^{-\frac 12 F^2/N^2}\| = O(1)$ and $\|e^{-\frac 12 F^2/N^2}-1\|=O(N^{-2})$ as $N \to \infty$. Therefore, by \eqref{e:RK3} with $F$ replaced by $-F/N$, \begin{align} \|A-1\| &\le \|e^{F/N}(1-\frac{1}{N}F)- e^{-\frac{1}{2}F^{2}/N^{2}}\| + \|e^{-\frac 12 F^2/N^2}-1\| \nnb & = O (N^{-3}) + O(N^{-2}) = O (N^{-2}). \end{align} Now let $f (z) = (1-z^{N}) (1-z)^{-1} = \sum_{n=0}^{N-1} z^{n}$.
Then, by \eqref{e:apos}, \begin{align} \big\| \big(1-\frac{1}{N}F \big)^{N} - e^{-F} \big\| &= \|e^{-F} ( A^{N} - 1 ) \| \nnb & \le \|e^{-F}\|\, \|A-1\|\, f (\|A\|) =O(N^{-2})f(\|A\|), \end{align} and the right-hand side is $O(N^{-1})$ since $f(1+O(N^{-2}))=O(N)$. This completes the proof. \end{proof} \subsection{Polynomial norm estimate} \label{sec-Tphiestimates} In this section, we prove Proposition~\ref{prop:T0K}. We begin with some definitions and a preliminary lemma which will be useful also in Sections~\ref{sec:normchange}--\ref{sec:compsp0}. For $z \in \vec\pmb{\Lambda}^{*}$, let $B_z$ denote the set of pairs $(z',z_b'')\in \vec\pmb{\Lambda}^{*}\times \vec\pmb{\Lambda}_b^{*}$ such that $z'\circ z_b''=z$. For $s \in [0,1]$, $g\in\Phi$, $\xi \in \Rbold^{\pmb{\Lambda}_b}$ and $z \in \vec\pmb{\Lambda}^{*}$, we define a new test function $\sigma^{*}_\xi (s) g \in \Phi$ by setting $(\sigma^{*}_\xi (s) g)_{z} = 0$ if the length of $z$ exceeds $p_\Ncal$, and otherwise \begin{equation} \label{e:sigmastardef} (\sigma^{*}_\xi (s) g)_{z} = \sum_{(z',z_b'')\in B_z} \frac{z!}{z'!z_b''!} s^{z_b''} \xi^{z_b''} g_{z'} . \end{equation} We write $\sigma^{* (m)}_\xi g$ to denote the $m^{\rm th}$ derivative of $\sigma_\xi^* (s) g$ at $s=0$. \begin{lemma} \label{lem:Stm} For $s,t \in [0,1]$, $g\in\Phi$, $P \in \Ncal$ a polynomial of degree at most $p_\Ncal$, and $\phi,\xi \in \Rbold^{\pmb{\Lambda}_b}$, \begin{equation} \label{e:sigstar} \pair{P,g}_{t\phi + s\xi} = \pair{P,\sigma_\xi^{*} (s) g}_{t\phi} . \end{equation} If $g \in \Phi^{(p)}$ and $m+p \le p_\Ncal$, then $\sigma_\xi^{*(m)}g\in \Phi^{(m+p)}$, and, for any $F\in \Ncal$, \begin{equation} \label{e:sigstarmid} \frac{d^m}{dt^m} \pair{F,g}_{t\phi} = \pair{ F, \sigma_\phi^{*(m)}g }_{t\phi}. \end{equation} For all $p$ and for $g \in \Phi $, \begin{equation} \label{e:sigstar0} \|\sigma^{*}_\xi (1) g\|_{\Phi^{(p)}} \le \left(1 + \|\xi\|_{\Phi}\right)^{p} \|g\|_{\Phi} , \end{equation} and \begin{equation} \label{e:sigstarm} \| \sigma_\xi^{*(m)} g\|_{\Phi^{(m+p)}} \le \frac{(m+p)!}{p!}\|\xi\|_{\Phi}^{m} \|g\|_{\Phi} . \end{equation} \end{lemma} \begin{proof} By definition, for $g\in\Phi$ and for a polynomial $P$ of degree at most $p_\Ncal$, \begin{align} \pair{P,g}_{t\phi + s\xi} &= \sum_{z'} \frac{1}{z'!} P_{z'} \left(t\phi + s\xi \right) g_{z'} = \sum_{z',z_b''} \frac{1}{z'!z_b''!} P_{z'\circ z_b''} (t\phi) s^{z_b''} \xi^{z_b''} g_{z'} \nnb &= \sum_{z}\frac{1}{z!} P_{z} (t\phi) \sum_{( z',z_b'') \in B_z} \frac{z!}{z'!z_b''!} s^{z_b''} \xi^{z_b''} g_{z'} = \pair{P,\sigma^{*}_\xi (s) g}_{t\phi} , \end{align} which proves \eqref{e:sigstar}. If $g \in \Phi^{(p)}$ then differentiation of \eqref{e:sigmastardef} gives \begin{equation} \label{e:sigmastardiff} (\sigma^{*(m)}_\xi g)_{z} = \mathbbm{1}_{z'\circ z_b''=z} \frac{(m+p)!}{p!} \xi^{z_b''} g_{z'} , \end{equation} so $\sigma^{*(m)}_\xi g\in \Phi^{(m+p)}$. Also, when $g \in \Phi^{(p)}$, we may regard $F$ in \refeq{sigstarmid} as a polynomial and thus by \refeq{sigstar} we obtain \eqref{e:sigstarmid} via differentiation with respect to $s$ (with $\xi=\phi$). By the triangle inequality and \eqref{e:plusnormass}, for $p \le p_{\Ncal}$, \begin{equation} \|\sigma^{*}_\xi (1) g\|_{\Phi^{(p)}} \le \sumtwo{p',p_b'':}{p'+p_b''=p} \frac{p!}{p'! p_b''!} \|\xi\|_{\Phi}^{p_b''} \|g\|_{\Phi} = \left(1 + \|\xi\|_{\Phi}\right)^{p} \|g\|_{\Phi} . \end{equation} Since the left-hand side is zero for $p>p_{\Ncal}$, this proves \eqref{e:sigstar0}.
For \eqref{e:sigstarm}, we only consider the case $m+p \le p_\Ncal$ because otherwise the left-hand side is zero. Also we can assume that $g \in\Phi^{(p)}$ because no other part of $g$ can contribute to the left-hand side. If $g \in\Phi^{(p)}$ and $m+p \le p_\Ncal$, then from \eqref{e:sigmastardiff} and \eqref{e:plusnormass} we have \begin{equation} \| \sigma^{*(m)}_\xi g\|_{\Phi^{(m+p)}} \le \frac{(m+p)!}{p!} \|\xi\|_{\Phi}^{m} \| g \|_{\Phi} . \end{equation} This proves \eqref{e:sigstarm}, and completes the proof. \end{proof} \begin{rk} \label{rk:pair-deriv} It follows from Lemma~\ref{lem:Stm} that for $F \in \Ncal$, $g \in \Phi^{(p)}$, and for $m+p \le p_\Ncal$, \begin{equation} \label{e:shift-phi} \left| \frac{d^{m}}{dt^{m}} \pair{F,g}_{\phi+t\xi} \right| \le \frac{(m+p)!}{p!} \|F\|_{T_{\phi+t\xi}} \|\xi\|_{\Phi}^{m} \|g\|_{\Phi} . \end{equation} To see this, note that as in the proof of Lemma~\ref{lem:Stm}, \eq \frac{d^{m}}{ds^{m}} \Big|_{0} \pair{F,g}_{\phi+s\xi} = \pair{F,\sigma_\xi^{*(m)}g}_{\phi}. \en With \eqref{e:sigstarm}, this gives \begin{align} \left| \frac{d^{m}}{ds^{m}} \Big|_{0} \pair{F,g}_{\phi+s\xi} \right| &\le \frac{(m+p)!}{p!} \|F\|_{T_{\phi} } \|\xi\|_{\Phi}^{m} \|g\|_{\Phi} , \end{align} and then \refeq{shift-phi} follows by replacing $\phi$ with $\phi + t \xi$. \end{rk} For $F \in \Ncal$ and $t \ge 0$, we define $\tau_t F \in \Ncal$ by replacing the fields $(\phi ,\psi)$ in $F$ by $(t\phi ,t\psi)$. For a positive integer $A$, $t \in [0,1]$ and $F \in \Ncal$, we define the truncated Taylor expansion for $\tau_{t}F$ by \begin{equation} \label{e:sigmaA} \tau_{t}^{(\le A)}F = \sum_{n=0}^{A}\frac{t^{n}}{n!}\tau^{(n)}_{0}F , \end{equation} where $\tau_{t}^{(n)}F$ is the $n^{\rm th}$ derivative of $\tau_{t}F$ with respect to $t$. The following lemma gives the result of Proposition~\ref{prop:T0K}. \begin{lemma} \label{lem:Tphi-pol-bound} For $F \in \Ncal$ and $A \le p_\Ncal$, let $P=\tau_{1}^{(\le A)}F$. Then \begin{align} & \label{e:SsleA} \|P\|_{T_{\phi}} \le \|F\|_{T_{0}} \big(1+ \|\phi\|_{\Phi}\big)^{A} , \end{align} and if $F$ is a polynomial of degree $A$ then \begin{equation} \label{e:Fpoly} \|F\|_{T_{\phi}} \le \|F\|_{T_{0}}\big(1+ \|\phi\|_{\Phi}\big)^{A} . \end{equation} \end{lemma} \begin{proof} The second claim is a consequence of the first because, in this case, $F=\tau_{1}^{(\le A)}F$ by the uniqueness of Taylor expansions. To prove \eqref{e:SsleA}, we apply Lemma~\ref{lem:Stm} with $\xi =\phi$, $t=0$ and $s=1$ to obtain \begin{equation} \left|\pair{P ,g}_{\phi}\right| = \left|\pair{P,\sigma^{*}_\phi (1) g}_{0}\right| \le \|P\|_{T_{0}}\,\|\sigma^{*}_\phi (1) g\|_{\Phi} . \end{equation} Since $\|P\|_{T_{0}}$ is a truncation of the sum of positive terms that constitute $\|F\|_{T_{0}}$, it is the case that $\|P\|_{T_{0}} \le \|F\|_{T_{0}}$. Also, we need only consider the case where $\sigma^{*}_\phi (1) g$ depends on at most $A$ variables, since otherwise its pairing with $P$ vanishes. It then follows from Lemma~\ref{lem:Stm} with $\xi =\phi$ that \begin{equation} \left|\pair{P ,g}_{\phi}\right| \le \|F\|_{T_{0}}\,\left(1 + \|\phi\|_{\Phi}\right)^{A} \|g\|_{\Phi} . \end{equation} Taking the supremum now over $g\in B(\Phi)$, we obtain \eqref{e:SsleA} and the proof is complete. \end{proof} \subsection{Estimate with change of norm} \label{sec:normchange} The following lemma gives the result of Proposition~\ref{prop:Tphi-bound}. \begin{lemma} \label{lem:Tphi-bound-bis} Let $A<p_\Ncal$ be a non-negative integer.
For $F \in\Ncal$, let $P=\tau_{1}^{(\le A)}F$. Then \begin{align} \|F\|_{T_{\phi}'} & \leq \left(1 + \|\phi\|_{\Phi'}\right)^{A+1} \left( \|P\|_{T_{0}'} + \rho^{(A+1)} \sup_{t \in [0,1]} \|F\|_{T_{t\phi}} \right) \nnb & \leq \left(1 + \|\phi\|_{\Phi'}\right)^{A+1} \left( \|F\|_{T_{0}'} + \rho^{(A+1)} \sup_{t \in [0,1]} \|F\|_{T_{t\phi}} \right). \label{e:Tphicor} \end{align} \end{lemma} \begin{proof} The second estimate follows from the first and $\|P\|_{T_{0}'} \le \|F\|_{T_{0}'}$. To prove the first inequality let $R= F - P$. By the triangle inequality and Lemma~\ref{lem:Tphi-pol-bound} it is sufficient to prove that \begin{align} \label{e:tautmGm} \|R\|_{T'_{\phi}} \le \rho^{(A+1)}\, \left(1 + \|\phi\|_{\Phi'}\right)^{A+1} \sup_{t \in [0,1]} \|F\|_{T_{t \phi}} . \end{align} For this, it suffices to show that for a test function $g \in\Phi$ we have \begin{align} \label{e:tautmGmpair} |\pair{R,g}_\phi| \le \rho^{(A+1)}\, \left(1 + \|\phi\|_{\Phi'}\right)^{A+1} \sup_{t \in [0,1]} \|F\|_{T_{t \phi}} \|g\|_{\Phi'} . \end{align} We consider separately the cases (i) $g_{z}=0$ for $z$ with $p=p (z) \le A$, and (ii) $g_{z}=0$ except when $p=p(z) = 0,1,\dots ,A$. Any $g$ can be decomposed into these two cases using $g = g\mathbbm{1}_{p > A}+ \sum_{r \le A}g\mathbbm{1}_{p =r}$, and \begin{equation} | \pair{R,g}_{\phi} | \le |\pair{R,g\mathbbm{1}_{p > A}}_{\phi}| + \sum_{r \le A}|\pair{R,g\mathbbm{1}_{p =r}}_{\phi}|. \end{equation} For case (i), we simply note from \eqref{e:rhodef1} that \begin{align} \left| \pair{R,g}_{\phi} \right| &= \left| \pair{F,g}_{\phi} \right| \le \|F\|_{T_{\phi}}\|g\|_{\Phi} \le \|F\|_{T_{\phi}}\frac 12 \rho^{(A+1)} \|g\|_{\Phi'} . \end{align} Note that the above right-hand side is at most half the right-hand side of \eqref{e:tautmGmpair}. For the more substantial case (ii), fix $g\in \Phi$ supported on sequences of length exactly $p$, for some $p\in \{0,1,\dots ,A \}$. Let $f (t) = \pair{R,g}_{t\phi}$. By the Taylor remainder formula, for any $m \le A+1-p$ (for such $m$, $f^{(k)}(0)=0$ for $k<m$, since the Taylor expansion of $R$ at $0$ vanishes to degree $A$), \begin{equation} \left| \pair{R,g}_{\phi}\right| \le \frac{1}{m!} \sup_{t \in [0,1]} \left| f^{(m)} (t) \right| . \end{equation} By Lemma~\ref{lem:Stm} with $\xi =\phi$, \begin{equation} \label{e:fRg} f^{(m)} (t) = \pair{R,\sigma_\phi^{*(m)}g}_{t\phi} . \end{equation} Let $m=A+1-p$. Then we can replace $R=F-P$ by $F$ in \eqref{e:fRg} because $\sigma_\phi^{*(m)}g$ is supported on sequences of length $m+p=A+1$ by Lemma~\ref{lem:Stm}, whereas $P$ is a polynomial of degree $A$. Therefore, \begin{equation} \label{e:ii-1} \left| \pair{R,g}_{\phi} \right| \le \frac{1}{m!}\sup_{t \in [0,1]} \|F\|_{T_{t\phi}} \; \| \sigma_\phi^{*(m)}g\|_{\Phi^{(A+1)}} . \end{equation} Since $\sigma_\phi^{*(m)}g$ is supported on sequences of length $A+1$, by \eqref{e:rhodef1} we have \begin{equation} \label{e:ii-2} \| \sigma_\phi^{*(m)}g\|_{\Phi^{(A+1)}} \le \frac 12 \rho^{(A+1)}\| \sigma_\phi^{*(m)}g\|_{\Phi'^{(A+1)}} . \end{equation} It follows from \eqref{e:ii-1}--\eqref{e:ii-2} and Lemma~\ref{lem:Stm} that \begin{align} \left| \pair{R,g}_{\phi} \right| &\le \frac 12 \rho^{(A+1)} \sum_{p=0}^A \binom{A+1}{p} \|\phi\|_{\Phi'}^{A+1-p} \|g\|_{\Phi'} \sup_{t \in [0,1]} \|F\|_{T_{t\phi}} \nnb &\le \frac 12 \rho^{(A+1)} \left( 1+ \|\phi\|_{\Phi'} \right)^{A+1} \|g\|_{\Phi'} \sup_{t \in [0,1]} \|F\|_{T_{t\phi}} . \end{align} Combined with the estimate for case (i), this gives \eqref{e:tautmGmpair} and completes the proof.
\end{proof} \subsection{Contractive bound on \texorpdfstring{$\theta$}{theta}} \label{sec:compsp0} In this section, we prove Proposition~\ref{prop:derivs-of-tau-bis}. Recall from the discussion above \eqref{e:phi-extended} that there is a bijection between a subset of $\pmb{\Lambda}$ and $\pmb{\Lambda}'$, written $x \mapsto x'$. Recall from the discussion above Proposition~\ref{prop:derivs-of-tau-bis} that species in $\pmb{\Lambda}$ and species in $\pmb{\Lambda}'$ are distinct, and are ordered in such a way that a species from $\pmb{\Lambda}'$ occurs immediately following its counterpart in $\pmb{\Lambda}$. The \emph{forget} function $f:\pmb{\Lambda} \sqcup \pmb{\Lambda}' \to \pmb{\Lambda}$ is defined by setting $f (x')=x$ when $x' \in \pmb{\Lambda}'$ and $f (x)=x$ when $x \in \pmb{\Lambda}$. We extend $f$ to a map from $(\overrightarrow{\pmb{\Lambda} \sqcup \pmb{\Lambda}'})^{*}$ to $\vec\pmb{\Lambda}^{*}$ by letting $f$ act componentwise on sequences. We define a map $\theta^{*}: \Phi(\pmb{\Lambda} \sqcup \pmb{\Lambda}') \to \Phi (\pmb{\Lambda})$ by setting \begin{equation} \label{e:thetastar} (\theta^{*}g)_{z} = \sumtwo{v \in (\overrightarrow{\pmb{\Lambda} \sqcup \pmb{\Lambda}'})^{*}:} {f (v) = z} \frac{z!}{v!}g_{v}. \end{equation} By definition, the $v!$ appearing in the above equation is equal to $u!u'!$, where $u$ and $u'$ are respectively the subsequences of $v$ drawn from $\pmb{\Lambda}$ and $\pmb{\Lambda}'$. \begin{lemma} \label{lem:thetaadj} For $F \in \Ncal(\pmb{\Lambda})$, $g\in \Phi(\pmb{\Lambda} \sqcup \pmb{\Lambda}')$, $\phi \in \Rbold^{\pmb{\Lambda}_b}$ and $\xi \in \Rbold^{\pmb{\Lambda}_b'}$, \begin{align} \label{e:theta-dual} \pair{\theta F,g}_{\phi \sqcup \xi} &= \pair{F,\theta^{*}g}_{\phi + \xi} . \end{align} \end{lemma} \begin{proof} First, we compute the coefficients $(\theta F)_v$ for $v \in (\overrightarrow{\pmb{\Lambda} \sqcup \pmb{\Lambda}'})^{*}$, which is what is relevant for the pairing of $\theta F$ with $g$. By Definition~\ref{def:theta-new}, \begin{equation} \theta F = \sum_{z_{f} \in \vec\pmb{\Lambda}_{f}^{*}} \frac{1}{z_{f}!}F_{z_{f}} (\phi+\xi) ( \psi + \psi')^{z_{f}} . \end{equation} We expand $F_{z_{f}} ((\phi+\xi)+ (\hat\phi+\hat\xi))$ in a power series in $\hat\phi+\hat\xi$ to obtain \begin{equation} \theta F = \sum_{z \in \vec\pmb{\Lambda}^{*}} \frac{1}{z!}F_{z} (\phi+\xi) (\hat\phi+\hat\xi)^{z_b} ( \psi + \psi')^{z_{f}} . \end{equation} Now we expand the binomials on the right-hand side and reorder the species within both the bosonic and fermionic products. We reorder the subscript on $F_z$ in exactly the same way; then no sign change occurs. From this, we can read off the coefficients \eq (\theta F)_v = F_{f(v)}(\phi+\xi). \en We abbreviate the right-hand side as $F_{f(v)}=F_{f(v)}(\phi+\xi)$. Then \begin{equation} \pair{\theta F,g}_{\phi \sqcup \xi} = \sum_{v \in (\overrightarrow{\pmb{\Lambda} \sqcup \pmb{\Lambda}'})^{*}} \frac{1}{v!} F_{f(v)} g_v = \sum_{z\in \vec\pmb{\Lambda}^{*}} \frac{1}{z!} F_{z} \sum_{v:f (v) = z} \frac{z!}{v!} g_{v} = \pair{F,\theta^{*}g}_{\phi + \xi}, \end{equation} and the proof is complete. \end{proof} \begin{lemma} \label{lem:thetastar} The map $\theta^{*} : \Phi(w\sqcup w') \to \Phi (w+w')$ is a contraction, namely, for $g \in \Phi(w\sqcup w')$, \begin{align} \label{e:theta-bound11} \| \theta^{*}g \|_{\Phi(w+w')} &\le \|g\|_{\Phi(w\sqcup w')} . \end{align} \end{lemma} \begin{proof} In the following, $v\in(\overrightarrow{\pmb{\Lambda} \sqcup \pmb{\Lambda}'})^{*}$ and $z\in\vec\pmb{\Lambda}^{*}$. 
By \refeq{thetastar}, \begin{align} \label{e:theta-bound6} \left| (w + w')_{\alpha ,z}^{-1} (\nabla^{\alpha} \theta^{*}g)_{z} \right| &\le (w + w')_{\alpha ,z}^{-1} \sum_{v: f ( v) = z} \frac{z!}{v!} \left| \nabla^{\alpha} g_{v} \right| \nnb &\le \|g\|_{\Phi(w\sqcup w')}(w + w')_{\alpha ,z}^{-1} \sum_{v: f ( v) = z} \frac{z!}{v!} (w\sqcup w')_{\alpha ,v} . \end{align} The final sum equals $(w + w')_{\alpha ,z}$ by the binomial theorem; to see this we recall that $v$ has species segregated so that in particular primed and unprimed variables are not interleaved, and the binomial coefficient $z!/v!$ accounts for the number of ways to desegregate these variables. Then \refeq{theta-bound11} follows by taking the supremum over $(\alpha,z) \in \Acal$. \end{proof} \begin{proof}[Proof of Proposition~\ref{prop:derivs-of-tau-bis}] Let $g \in B (\Phi (w\sqcup w'))$. By Lemma~\ref{lem:thetaadj}, \begin{equation} \left| \pair{\theta F,g}_{\phi \sqcup \xi} \right| = \left| \pair{ F,\theta^* g}_{\phi + \xi} \right| \le \|F \|_{T_{\phi + \xi}(w+w')} \|\theta^{*}g\|_{\Phi (w+ w')} . \end{equation} Taking the supremum over $g \in B (\Phi (w\sqcup w'))$ and applying Lemma~\ref{lem:thetastar}, we have \begin{equation} \|\theta F\|_{T_{\phi \sqcup \xi} (w\sqcup w')} \le \|F\|_{T_{\phi +\xi}(w+w')} . \end{equation} This proves \eqref{e:theta-bd1}. \end{proof} \section{Integration norm estimates} \label{sec:integration} In this section, we prove Propositions~\ref{prop:Etau-bound}, \ref{prop:EK} and \ref{prop:EG2}. \subsection{Laplacian norm estimates} \label{sec:heat} In this section, we prove Proposition~\ref{prop:Etau-bound}. For this, it suffices to prove the following lemma, which slightly improves \refeq{DCK} by reducing the factor $A^2$ to $A(A-1)$ on its right-hand side. \begin{lemma} \label{lem:Etau-boundzz} If $F \in \Ncal$ is a polynomial of degree at most $A$, with $A \le p_\Ncal$, then \begin{equation} \label{e:DCKzz} \frac 12 \|\Delta_{\pmb{C}} F\|_{T_{\phi}} \le \binom{A}{2}\,\|\pmb{C}\|_{\Phi}\,\|F\|_{T_{\phi}}. \end{equation} \end{lemma} \begin{proof} For $g \in \Phi$ and $v \in \vec\pmb{\Lambda}^*$, let \begin{equation} \label{e:Cstarg} (\pmb{C}^{*}g)_{v} = \mathbbm{1}_{u\circ z=v} \frac{v!}{u!z!} \pmb{C}_{u} g_{z} \end{equation} if the length of $v$ is at most $A$, and otherwise $(\pmb{C}^{*}g)_{v}=0$. Here $u$ denotes the first two coordinates of $v$ and $z$ denotes the others; in particular $(\pmb{C}^{*}g)_{v} =0$ if the length of $v$ is less than $2$. Then, by the definition of the Laplacian in \refeq{LapC}, \begin{align} \frac{1}{2} \pair{\Delta_{\pmb{C}} F,g}_{\phi} &= \frac{1}{2} \sum_{z} \frac{1}{z!} (\Delta_{\pmb{C}} F(\phi))_{z} g_{z} = \sum_{u,z} \frac{1}{u!z!} \pmb{C}_{u} F_{u \circ z}(\phi) g_{z} \nnb & = \sum_{v}\frac{1}{v!} F_{v}(\phi) (\pmb{C}^{*}g)_{v} = \pair{F,\pmb{C}^{*}g}_{\phi} . \label{e:LapCpair} \end{align} Since $F$ is a polynomial of degree at most $A$, $F_v=0$ as soon as the length of $v$ exceeds $A$; the fact that $A \le p_\Ncal$ has been used in the last equality. The binomial coefficient in \refeq{Cstarg} is at most $\binom{A}{2}$. With \refeq{plusnormass}, this gives \begin{align} \|\pmb{C}^{*}g\|_{\Phi} &\le \binom{A}{2} \|\pmb{C}\|_{\Phi} \|g \|_{\Phi} \end{align} and hence \begin{align} \frac{1}{2} \left| \pair{\Delta_{\pmb{C}} F,g}_{\phi} \right| &\le \|F\|_{T_{\phi}} \|\pmb{C}^{*}g\|_{\Phi} \le \|F\|_{T_{\phi}} \binom{A}{2} \|\pmb{C}\|_{\Phi} \|g \|_{\Phi} , \end{align} and \refeq{DCKzz} follows by taking the supremum over $g \in B (\Phi)$. 
\end{proof} \subsection{The main integration estimate} In this section, we prove Proposition~\ref{prop:EK}. For this, we adopt the conjugate fermion fields setting described in Section~\ref{sec:cff}, with fields $\psi, \bar\psi$. As noted below the statement of Proposition~\ref{prop:EK}, it suffices to prove the bound \refeq{EKz}. The proof is based on the following lemma, which is known as \emph{Gram's inequality}. A proof of Lemma~\ref{lem:Gramineq} can be found in \cite[Lemma~1.33]{FKT02}. \begin{lemma} \label{lem:Gramineq} Let $H$ be a Hilbert space with inner product $\langle \cdot, \cdot \rangle$. If $u_i,v_i \in H$ for $i=1,\ldots, n$, then \eq \left| \,\det \left( \langle u_i,v_j\rangle \right)_{1 \le i,j \le n} \right| \le \prod_{i=1}^n \langle u_i, u_i \rangle^{1/2}\langle v_i, v_i \rangle^{1/2}. \en \end{lemma} Recall that $C_f$ can be interpreted as a test function as described above the statement of Proposition~\ref{prop:EK}. Let $E$ be the test function defined by $E_z = \Ebold_{\pmb{C}_f} \psi^z$ for $z \in \vec\pmb{\Lambda}_f^*$, with the convention $E_\varnothing =1$. For $z \in \vec\pmb{\Lambda}^* \setminus \vec\pmb{\Lambda}_f^*$ we set $E_z=0$. \begin{lemma} \label{lem:ell2} If $\|C_f\|_{\Phi} \le 1$ then $\|E\|_\Phi \le 1$. \end{lemma} \begin{proof} To simplify the notation, we drop the subscript $f$ from $C_f$. By \refeq{JF}, we may assume that $\psi^z$ has the form $\bar\psi_{x_1}\psi_{y_1}\cdots\bar\psi_{x_p}\psi_{y_p}$, in which case \eq \label{e:Edet} E_z = \det C_{x,y}. \en Let $\lambda_{\alpha,z} = (\prod_{i=1}^p\lambda_{\alpha_i',x_i}) ( \prod_{i=1}^p\lambda_{\alpha_i'',y_i})$ with $\lambda_{\alpha_i',x_i}= w_{\alpha_i',x_i}^{-1}\nabla^{\alpha_i'}$ and $\lambda_{\alpha_i'',y_i}= w_{\alpha_i'',y_i}^{-1}\nabla^{\alpha_i''}$, with $\nabla^{\alpha_i'}$ acting on the $x_i$ variable and $\nabla^{\alpha_i''}$ acting on the $y_i$ variable. It suffices to prove that \eq \label{e:ECbd} |\lambda_{\alpha,z} E_{z} | \leq \prod_{i=1}^p \left(\lambda_{\alpha_i',u}\lambda_{\alpha_i',v} C_{u,v}|_{u=v=x_i} \right)^{1/2} \left(\lambda_{\alpha_i'',u}\lambda_{\alpha_i'',v} C_{u,v}|_{u=v=y_i} \right)^{1/2}, \en since \refeq{ECbd} implies the inequality \eq \|E \|_{\Phi } \le \sup_{p \ge 1} \|C\|_{\Phi}^p . \en By \refeq{Edet} and the fact that the determinant is linear in rows and columns, \eq \label{e:ECbd1} |\lambda_{\alpha,z} E_{z}| = |\lambda_{\alpha,z} \det C_{x,y}| = |\det(\lambda_{\alpha,z} C_{x,y})|. \en We rewrite the determinant as follows. Let $V$ be the vector space of all functions $f:\Lambda \rightarrow \Cbold$. Given functions $h,k\in V$, we define \eq (h,k) = \sum_{x\in\Lambda} h_x k_x. \en Then we define $f_i,g_i\in V$ by \eq (\lambda_{\alpha_i',x_i } k)_{x_i} = ( \delta_{x_i}, \lambda_{\alpha_i',x_i } k) = (\lambda_{\alpha_i',x_i }^\dagger \delta_{x_i}, k) = (f_i,k) \en and \eq (\lambda_{\alpha_i'',y_i }h )_{y_i} = ( \lambda_{\alpha_i'',y_i } h, \delta_{y_i}) = (h,\lambda_{\alpha_i'',y_i }^\dagger \delta_{y_i}) = (h,g_i). \en We define an inner product on $V$ by \eq \pair{f,g} = \sum_{x,y \in \Lambda} f_x C_{x,y}\bar{g}_y. \en By definition, for $i,j \in \{1,\ldots, p\}$, \eq \lambda_{\alpha_i',x_i} \lambda_{\alpha_j'',y_j} C_{x_i,y_{j}} = \pair{f_{i},g_{j}} , \en and thus $\det (\lambda_{\alpha,z} C_{x,y} ) = \det(\pair{f_{i},g_{j}})$. By Lemma~\ref{lem:Gramineq}, \eq |\det (\lambda_{\alpha,z} C_{x,y} ) | = |\det\left( \pair{f_{i},g_{j}}\right) | \le \prod_{i=1}^{p} \pair{f_{i}, f_i}^{1/2} \pair{g_i,g_i}^{1/2}.
\en For the right-hand side, we use \eq \pair{f_{i}, f_i} = \pair{ \lambda_{\alpha_i',x_i }^\dagger \delta_{x_i}, \lambda_{\alpha_i',x_i }^\dagger \delta_{x_i} } = \lambda_{\alpha_i',u } \lambda_{\alpha_i',v } C_{u,v}\vert_{u=v=x_i} , \en and similarly for $\pair{g_i,g_i}$. With \refeq{ECbd1}, this proves \refeq{ECbd} and completes the proof. \end{proof} Proposition~\ref{prop:EK} is a consequence of the following lemma (with $h=1$), which establishes \refeq{EKz}. \begin{lemma} \label{lem:EKzz} In the conjugate fermion field setting of Section~\ref{sec:cff}, suppose that the covariance satisfies $\|C_f\|_{\Phi(w')}\le 1$. If $F \in \Ncal (\pmb{\Lambda} \sqcup \pmb{\Lambda}')$ and $h: \Rbold^{\pmb{\Lambda}'_{b}} \to \mathbb{C}$, then \eq \label{e:EKzzz} \| \mathbb{E}_{\pmb{C}} h F \|_{T_\phi (w)} \le \mathbb{E}_{\pmb{C}_b} \left[|h(\xi)|\, \|F \|_{T_{\phi\sqcup\xi} (w\sqcup w')} \right] . \en \end{lemma} \begin{proof} By definition, we can write $F = \sum_{z_f \in (\overrightarrow{\pmb{\Lambda}_{f}\sqcup \pmb{\Lambda}_{f}'})^{*}}\frac{1}{z_f!} F_{z_f}\psi^{z_f}$, with $F_{z_f}=F_{z_f}(\phi\sqcup\xi)$. Given $z_{f}$, let $y$ be the subsequence of $z_{f}$ such that $y \in \pmb{\Lambda}_{f}^{*}$, and let $y'$ be the complementary subsequence of components of $z_{f}$ in $\pmb{\Lambda}_{f}'$. The operator $\Ebold_{\pmb{C}}$ acts only on the $\xi$ and $\psi'$ variables. In particular, \begin{align} \mathbb{E}_{\pmb{C}_f} \psi^{z_{f}} &= \mathrm{sgn} (z_{f},y\circ y') \mathbb{E}_{\pmb{C}_f} \psi^{y\circ y'} \nnb &= \mathrm{sgn} (z_{f},y\circ y') \psi^{y}\mathbb{E}_{\pmb{C}_f} \psi^{y'} = \mathrm{sgn} (z_{f},y\circ y')E_{y'} \psi^{y} , \end{align} where $\mathrm{sgn} (z_{f},y\circ y')$ denotes the sign of the permutation that maps $z_f$ to $y\circ y'$, and $E$ denotes the test function of Lemma~\ref{lem:ell2}. Therefore, by \refeq{ECbf}, \begin{align} \mathbb{E}_{\pmb{C}} h F &= \sum_{z_{f}\in (\overrightarrow{\pmb{\Lambda}_{f}\sqcup \pmb{\Lambda}_{f}'})^{*}} \frac{1}{z_{f}!} (\mathbb{E}_{\pmb{C}_b} h F_{z_{f}} ) \mathrm{sgn} (z_{f},y\circ y')E_{y'}\psi^{y} . \end{align} For $g \in \Phi (w)$, we define $E^{*}g \in \Phi (w\sqcup w')$ by \begin{equation} (E^{*}g)_{z} = \begin{cases} \mathrm{sgn} (z_{f},y\circ y')E_{y'}g_{z_{b}\circ y } &z_{b} \in \pmb{\Lambda}_{b}^{*}\\ 0 &z_{b} \not \in \pmb{\Lambda}_{b}^{*} \end{cases} \end{equation} for $z \in (\overrightarrow{\pmb{\Lambda} \sqcup \pmb{\Lambda}'})^{*}$. Then \begin{align} \pair{\mathbb{E}_{\pmb{C}} h F,g}_{\phi} &= \sum_{z\in (\overrightarrow{\pmb{\Lambda}\sqcup \pmb{\Lambda}_{f}'})^{*}} \frac{1}{z!} (\mathbb{E}_{\pmb{C}_b} h F_{z} ) (E^{*}g)_{z} = \mathbb{E}_{\pmb{C}_b} [h(\xi) \pair{F, E^{*}g}_{\phi\sqcup\xi}] , \end{align} and hence \begin{align} \label{e:EFgbd} \left| \pair{\mathbb{E}_{\pmb{C}} h F,g}_{\phi} \right| &\le \mathbb{E}_{\pmb{C}_b} [|h(\xi)|\, |\pair{ F, E^{*}g}_{\phi\sqcup\xi}|] \nnb & \le \left(\mathbb{E}_{\pmb{C}_b} [|h(\xi)|\,\|F\|_{T_{\phi\sqcup\xi} (w\sqcup w')} ] \right) \|E^{*}g\|_{\Phi (w\sqcup w')} . \end{align} Derivative operators $\nabla^\alpha$ do not act on the sgn function, so we may apply \refeq{plusnormass-2} and then Lemma~\ref{lem:ell2} to conclude that $\|E^{*}g\|_{\Phi (w\sqcup w')} \le \|g\|_{\Phi (w)}$. Then \refeq{EKzzz} follows by taking the supremum over $g \in B (\Phi(w))$ in \refeq{EFgbd}, and the proof is complete. \end{proof}
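Gram's inequality (Lemma~\ref{lem:Gramineq}), on which the determinant bounds above rest, is also easy to test numerically. The following minimal Python sketch (our illustration only, for the real inner product and random vectors; all names are ours) checks the inequality with \texttt{numpy}:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, dim = 5, 8
U = rng.normal(size=(n, dim))   # rows u_1, ..., u_n
V = rng.normal(size=(n, dim))   # rows v_1, ..., v_n

M = U @ V.T                     # Gram-type matrix M_ij = <u_i, v_j>
lhs = abs(np.linalg.det(M))
rhs = np.prod(np.linalg.norm(U, axis=1) * np.linalg.norm(V, axis=1))
assert lhs <= rhs               # |det <u_i,v_j>| <= prod ||u_i|| ||v_i||
\end{verbatim}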
Since $(E^{*}g)_z$ vanishes by definition whenever $z$ contains an entry in $\pmb{\Lambda}_b'$, the above proof shows that \refeq{EKzzz} could be strengthened by replacing the semi-norm on the right-hand side by the smaller semi-norm which does not involve derivatives with respect to the boson fluctuation field $\xi$. \subsection{Expectation of the fluctuation-field regulator} \label{sec:ffr} The main result of this section is Lemma~\ref{lem:EG2zz}, which immediately gives Proposition~\ref{prop:EG2}. In preparation for Lemma~\ref{lem:EG2zz}, we prove three preliminary lemmas. The first of these is proved in \cite[Lemma~6.28]{Bryd09}, and a precursor of the second is \cite[Lemma~B.2]{BGM04}. \begin{lemma} \label{lem:integrability2} Let $(\xi_{a})_{a \in \Acal}$ be a finite set of Gaussian random variables with covariance $C$. Suppose that the largest eigenvalue of $C$ is less than $\frac 12$. Let $(\xi,\xi) = \sum_{a \in \Acal} \xi_{a}^{2}$. Then \begin{equation} \Ebold \, e^{\frac{1}{2}(\xi,\xi)} \le e^{\sum_{a \in \Acal} C (a ,a )}. \end{equation} \end{lemma} \begin{proof} Let $t \in (0,1)$. It suffices to show that \begin{equation} \frac{d}{dt } \ln \Ebold \, e^{\frac{t }{2} (\xi,\xi)} \le \sum_{a \in \Acal} C (a ,a), \end{equation} since the desired inequality then follows by integration over $t \in (0,1)$. Let $A$ be the inverse of the matrix $C$. The eigenvalues of $A$ are at least $2$ by the hypothesis on $C$, so the inverse matrix $C_{t}= (A-t)^{-1}$ exists. Let $\Ebold_t$ denote the Gaussian expectation with covariance $C_t$. Then \begin{align} \begin{split} \frac{d}{dt } \ln \Ebold \, e^{\frac{t }{2} (\xi,\xi)} &= \frac{1}{2} \Ebold_t (\xi,\xi) = \frac{1}{2} \sum_{a \in \Acal} C_{t} (a ,a) = \frac 12 {\rm Trace}\, C_t = \frac 12 \sum_\lambda (\lambda^{-1} - t)^{-1}, \end{split} \end{align} where the sum over $\lambda$ runs over the eigenvalues of $C$ (with multiplicity). Since each $\lambda$ is at most $\frac 12$ by hypothesis, $(\lambda^{-1}-t)^{-1} = \lambda (1-t \lambda)^{-1} \leq 2 \lambda$, and hence \begin{equation} \frac{d}{dt } \ln \Ebold \, e^{\frac{t }{2} (\xi,\xi)} \le \sum_{\lambda} \lambda = \mathrm{Trace}\, C = \sum_{a \in \Acal} C (a ,a), \end{equation} which completes the proof. \end{proof} \begin{lemma}[Lattice Sobolev inequality] \label{lem:sobolev2} Let $f:B \rightarrow \mathbb{C}$, where $B\in \Bcal$ is a block of side length $R$. Let $\nabla_R=R\nabla$. Then for any $x \in B$, \begin{equation} \label{e:sobolev2} |f (x)|^{2} \le 2^{3d+2} R^{-d} \sum_{y \in B} \sum_{|\alpha|_\infty \le 1} | \nabla_R^{\alpha }f (y)|^{2}. \end{equation} \end{lemma} \begin{proof} We can choose coordinates on $B$ such that $B=\{0,1,\dotsc ,R-1 \}^{d}$. Let $g:B\rightarrow \Rbold$ be any function that vanishes on $\cup_{i=1}^{d}\{(x_{1},\dotsc ,x_{d}) \in B: x_{i} = 0 \}$. Then we have the telescoping sum \begin{equation} g (x) = \sum_{y:y_{i}< x_{i}\, \forall i} \nabla^{e_1}\dotsb \nabla^{e_d} \,g (y) . \end{equation} Therefore, by the Cauchy--Schwarz inequality, \begin{equation} |g (x)| \le \sum_{y \in B} |\nabla^{e_1}\dotsb \nabla^{e_d} \,g (y)| \le \big( |B|\sum_{y \in B} |\nabla^{e_1}\dotsb \nabla^{e_d} \,g (y)|^{2} \big)^{1/2}. \end{equation} We apply this to $g (x) = x_{1}\dotsb x_{d}f (x)$, for points $x \in B$ with each coordinate $x_{i}\ge R/2$. This gives \begin{equation} |f(x)| \le \left(\frac 2R \right)^{d} |x_{1}\dotsb x_{d}f(x)| \le 2^{d} \big( |B|^{-1} \sum_{y \in B} |\nabla^{e_1}\dotsb \nabla^{e_d}y_{1}\dotsb y_{d}f (y)|^{2} \big)^{1/2}.
\end{equation} We evaluate the derivatives using $ \nabla^{e_i} y_{i} h (y) = y_{i} \nabla^{e_i} h (y) + \nabla^{e_i} h (y)+ h (y)$. Since $y_{i} \le R$, \begin{equation} |f(x)|^{2} \le 2^{2d} |B|^{-1}\sum_{y \in B} \big( \sum_{\alpha \in \{0,1\}^{d}} 2 | \nabla_R^{\alpha} f (y)| \big)^{2} \le 2^{2d+2} |B|^{-1}\sum_{y \in B} 2^{d}\sum_{\alpha \in \{0,1\}^{d}} | \nabla_R^{\alpha} f (y)|^{2}. \end{equation} Since this holds for all functions $f$ we can change variables by reflections through hyperplanes bisecting $B$ so as to remove the assumption that every coordinate $x_i$ obeys $x_{i}\ge R/2$. These reflections turn forward derivatives into backward derivatives, and we obtain \eqref{e:sobolev2} by noticing that the absolute value of a backward derivative equals the absolute value of a forward derivative at a neighbouring point. \end{proof} Recall the definition of $G(X,\phi)$ in Definition~\ref{def:ffregulator}, for $X\in \Pcal$ a polymer as in Definition~\ref{def:blocks}. \begin{lemma} \label{lem:Gtxi} For $X \subset \Lambda$, $t \ge 0$, and $\phi \in \mathbb{C}^{\Lambda}$, \eq G^t(X,\phi) \le \exp\left[ \frac{1}{2} \sum_{y \in X^{\Box}} \sum_{|\alpha|_{1} \le d+p_{\Phi}} | \xi (y,\alpha )|^{2} \right], \en where $\xi (y,\alpha) = c t^{1/2} R^{-d/2} \ell^{-1} \nabla_R^{\alpha }\phi (y)$ for some constant $c$ depending only on $d$. \end{lemma} \begin{proof} By definition, \eq \label{e:GXbfluct} G^t (X,\phi) = \exp \left[ t\sum_{x \in X} |B_{x}|^{-1}\|\phi\|_{\Phi (B_{x}^{\Box},\ell)}^2\right], \en so it suffices to show that \begin{align} t \sum_{x \in X} |B_{x}|^{-1}\|\phi\|^2_{\Phi(B^\Box_{x},\ell)} &\le \frac{1}{2} \sum_{y \in X^{\Box}} \sum_{|\alpha|_{1} \le d+p_{\Phi}} | \xi (y,\alpha )|^{2}. \end{align} Throughout the proof, $c$ denotes a $d$-dependent constant whose value may change from line to line. Note that for $B\in \Bcal$, $B^\Box$ is a cube (since connectivity of blocks can be via corners) whose side length is a $d$-dependent multiple of $R$. We first apply Lemma~\ref{lem:sobolev2} with $f(x)=\nabla_R^\alpha \phi(x)$ and $B$ replaced by $B^\Box$ to obtain, for $x \in B^\Box$, \begin{align} |\nabla_R^\alpha \phi(x)|^2 &\le cR^{-d} \sum_{y \in B^\Box} \sum_{|\alpha'|_{\infty} \le 1} | \nabla_R^{\alpha + \alpha' }\phi (y)|^{2}. \end{align} From this, we obtain \begin{align} \|\phi\|_{\Phi(B^\Box,\ell)}^2 &\le \max_{|\alpha|_{1} \le p_{\Phi} ,x \in B^\Box} |\ell^{-1} \nabla_R^{\alpha}\phi (x)|^{2} \le cR^{-d} \sum_{y \in B^\Box} \sum_{|\alpha|_{1} \le d+p_{\Phi}} | \ell^{-1}\nabla_R^{\alpha }\phi (y)|^{2}. \end{align} If $y \in B_{x}^{\Box}$ then $x \in B_{y}^{\Box}$ and $|B_{y}^{\Box}|/|B|$ is bounded by a geometric constant. With a larger value of $c$, this gives \begin{align} t \sum_{x \in X} |B_{x}|^{-1}\|\phi\|^2_{\Phi(B_{x}^\Box,\ell)} &\le ctR^{-d} \sum_{y \in X^\Box} \sum_{|\alpha|_{1} \le d+p_{\Phi}} | \ell^{-1}\nabla_R^{\alpha }\phi (y)|^{2} \nnb & = \frac{1}{2} \sum_{y \in X^{\Box}} \sum_{|\alpha|_{1} \le d+p_{\Phi}} | \xi (y,\alpha )|^{2}, \end{align} and the proof is complete. \end{proof} Now we restate, and prove, Proposition~\ref{prop:EG2} as the following lemma. Recall that the $\Phi^+(\ell)$ norm is the $\Phi(\ell)$ norm with $p_\Phi$ increased to $p_\Phi +d$. \begin{lemma} \label{lem:EG2zz} Let $t \ge 0$, $\alpha_{G} >1$, and let $X \subset \Lambda$. 
There exists a (small) positive constant $c (\alpha_{G})$, which is independent of $R$, such that if $\|\pmb{C}_b\|_{\Phi^+(\ell)}\le c (\alpha_{G} )t^{-1}$, then \begin{equation} \label{e:EG2zz} 0 \leq \mathbb{E}_{\pmb{C}_b} G^t(X,\phi) \le \alpha_{G}^{R^{-d}|X|}. \end{equation} \end{lemma} \begin{proof} By Lemma~\ref{lem:Gtxi}, \begin{equation} \mathbb{E}_{\pmb{C}_b} G^t(X,\phi) \le \mathbb{E}_{\pmb{C}_b} \exp\left[ \frac{1}{2} \sum_{y \in X^{\Box}} \sum_{|\alpha|_{1} \le d+p_{\Phi}} | \xi (y,\alpha )|^{2} \right]. \end{equation} The variables $\xi (x,\alpha)$ are Gaussian and we denote their covariance by $Q$. The largest eigenvalue $\lambda_{\rm max}$ of $Q$ is at most the norm of $Q$ considered as a convolution operator on $l^2(X^\Box)$. Therefore, using Young's inequality we obtain \eq \label{e:YI} \lambda_{\rm max} \leq \sup_{f : \|f\|_2 \leq 1} \|Q*f\|_2 \leq \|Q\|_1 \leq cR^{d} \|Q\|_\infty. \en Since $Q$ is a positive-definite function, its maximum value occurs on the diagonal, and obeys \eq \|Q\|_\infty \leq ct R^{-d} \max_{|\alpha|_{1} \le d+p_{\Phi},\, x\in X^\Box} |\ell^{-2}\nabla_{R}^{2\alpha}\pmb{C}_{b;x,x}| \le c t R^{-d} \|\pmb{C}_b\|_{\Phi^+ }, \en so \begin{equation} \lambda_{\rm max} \leq ct \|\pmb{C}_b\|_{\Phi^+} . \end{equation} This will be less than $\frac 12$ if $\|\pmb{C}_b\|_{\Phi^+} \le c (d)t^{-1}$ with $c (d)$ sufficiently small. We may therefore apply Lemma~\ref{lem:integrability2} with $\xi_{a}$ replaced by $\xi (x,\alpha)$. This gives \begin{equation} \mathbb{E}_{\pmb{C}_b} G^t(X,\phi) \le e^{\sum_{y\in X^{\Box}}\sum_{|\alpha|_{1} \le d+p_{\Phi}} \text{Var} (\xi (y,\alpha))}. \end{equation} Since $\text{Var} (\xi (y,\alpha)) \le c t R^{-d} \|\pmb{C}_b\|_{\Phi^+ }$, this gives \begin{equation} \mathbb{E}_{\pmb{C}_b} G^t(X,\phi) \le e^{ct \|\pmb{C}_b\|_{\Phi^+} R^{-d}|X^{\Box}|}, \end{equation} and the desired result follows since $|X^{\Box}| \le a |X|$ for some $a=a(d)$. \end{proof} \section*{Acknowledgements} The work of both authors was supported in part by NSERC of Canada. DB gratefully acknowledges the support and hospitality of the Institute for Advanced Study at Princeton and of Eurandom during part of this work. GS gratefully acknowledges the support and hospitality of the Institut Henri Poincar\'e, and of the Kyoto University Global COE Program in Mathematics, during stays in Paris and Kyoto where part of this work was done. We thank Beno\^it Laslier for many helpful comments, and an anonymous referee for numerous pertinent suggestions.
\section{Introduction} \label{intro_sect} The spectrum of hot, massive stars reveals signatures of dense, highly-supersonic mass outflows that are driven by the strong radiation field. These stellar winds have a significant effect on the evolution of massive stars, as O stars will lose a sizable fraction of their total mass during their lifetime. Mass loss rates are, therefore, a crucial parameter of stellar evolution models. These mass loss rates, like other basic stellar parameters, can be derived from spectrophotometric analyses based on model stellar atmospheres. For a recent review of the properties of hot star winds, see \cite{kudritzki00}. During the last decade, major advances have been accomplished in constructing more realistic model atmospheres, which consistently account for the stellar wind, for departures from the Local Thermodynamic Equilibrium (LTE), and for metal line blanketing \citep{hubeny95, hillier98, hauschildt97, pauldrach01, koesterke02, Tueb02}. Current state-of-the-art non-LTE (NLTE) model atmospheres assume smooth, homogeneous and stationary winds. There are, however, theoretical arguments as well as observational evidences that the winds of hot stars are extensively structured. Radiation hydrodynamics simulations of the line-driven flow's nonlinear evolution show the line force to be highly unstable and lead to strong reverse shocks \citep[see, e.g.,][]{owocki88,owocki94, feldmeier95, owocki99}. We thus expect the presence of density contrasts in the wind of O stars, which is supported by several lines of observations. First, the soft X-ray emission of O stars is widely believed to originate from shocks propagating through the stellar wind \citep{lucy82, cassinelli83}. With the advent of high resolution X-ray spectroscopic capabilities on {\sl Chandra\/} and XMM-{\sl Newton}, detailed wind-shock models can be tested by fitting X-ray line profiles. The fitted lines indicate that the hot emitting plasma is located throughout the wind starting close to the photosphere \citep{kramer03}. However, the high opacity of their wind model has to be lowered in order to reproduce the X-ray flux level. \cite{oskinova04} recently offered a solution to this issue considering highly fragmented winds. Second, temporal variability of O-star wind line profiles is well documented and might indicate the propagation of disturbances throughout the wind, most often observed as Discrete Absorption Components \citep[see, e.g.,][]{howarth95, massa95}. In the spectrum of the prototypical O supergiant $\zeta$~Pup, \cite{eversberg98} observed stochastic variable substructures in \mbox{He~{\sc ii}$\lambda$ 4686} that move away from the line center with time. They explained them as evidence of blobs or clumps moving outward in the stellar wind. Third, \cite{hillier91} showed that the weak electron scattering wings of \mbox{He~{\sc ii}$\lambda$ 4686} in the spectrum of Wolf-Rayet (WR) stars can be explained by clumped wind models and can thus be used to diagnose density inhomogeneities in hot star winds. The inclusion of clumping in wind studies of WR stars resulted in lowering their mass loss rates by a factor of 2 to 4 \citep{moffat94, hamann98, hillier99}. Lastly, spectral evidence of clumping has been reported in recent studies of Magellanic Cloud (MC) O stars. \cite{crowther02} and \cite{hillier03} found that phosphorus needs to be significantly underabundant relative to other heavy elements in order to match \mbox{P~{\sc v}$\lambda$\lb1118, 1128} in O supergiants. 
Alternatively, theoretical line profiles from clumped wind models match the observed \mbox{P~{\sc v}} lines, assuming a ``standard'' phosphorus abundance. From studying the wind ionization of a large sample of LMC O stars, \cite{massa03} came to a similar conclusion about the \mbox{P~{\sc v}~} resonance lines providing an indication of wind clumping. Moreover, \cite{hillier03} showed that the clumped wind models match strong Fe lines, while homogeneous wind models predict marked blue asymmetries that are not observed. Independently, \cite{bouret03} successfully matched for the first time the \mbox{O~{\sc v}\lb1371} line profile using clumped models. A good match to this line was never achieved with homogeneous wind models. Bouret et al. found that clumping starts just above the photosphere. They derived mass loss rates for several O dwarfs in the SMC cluster NGC~346, which are a factor of 3 to 10 smaller than those obtained from smooth winds. This reduced mass loss is expected to significantly alter the predicted evolution of these stars. \cite{puls96} showed the existence of a tight correlation between the modified wind momentum and the stellar luminosity. This Wind momentum -- Luminosity Relation (WLR) varies for Galactic and MC O stars because of the different metallicities. Stars with lower abundances of heavy elements have weaker winds \citep{walbornSMC}. Mass loss rates predicted using radiative line-driven wind theory lend support to the WLR \citep{vink00}, hence allowing a derivation of O star luminosities and, in turn, of extragalactic distances. However, the dependence of the WLR on luminosity class remains an open problem. Recent theoretical studies do not predict such a dependence while, empirically, a difference in the WLR for supergiants and for dwarfs has been found. \cite{repolust04} argued that this discrepancy between the empirical and theoretical WLR's might be solved if the mass loss rates derived from H$\alpha$ are affected by clumping in the lower wind region. Clumping would result in decreasing the mass loss rates of Galactic O stars with H$\alpha$ emission by a factor of about $\sqrt 5$. Despite mounting evidence that hot star winds are not smooth and homogeneous, most studies still assume that they are, in the hope of determining at least average wind properties. In MC O stars, however, spectral analyses accounting for clumping yield significantly lower mass loss rates, with potentially crucial consequences for our understanding of massive star evolution, and for the radiative line-driven wind theory and its application to extragalactic studies using the WLR. In this context, we believe that it is of special importance to extend the recent studies of MC O stars to Galactic O stars, which have stronger winds. Furthermore, it is essential to show that a consistent picture may be drawn from the different diagnoses of clumping, in particular \mbox{P~{\sc v}$\lambda$\lb1118, 1128}, \mbox{O~{\sc v}\lb1371}, and H$\alpha$, ensuring that the spectral diagnoses of wind clumping are not spuriously affected by abundance or ionization effects.
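The magnitude of this effect can be made explicit with a standard argument (stated here under the simplifying assumption of a void interclump medium). If the gas is compressed into clumps filling a fraction $f$ of the volume, the mean density $\langle\rho\rangle$, which fixes the mass loss rate, corresponds to a clump density $\langle\rho\rangle/f$, so that density-squared diagnostics such as \mbox{H$\alpha$}\ emission scale as
\begin{equation}
\langle\rho^{2}\rangle = f\left(\frac{\langle\rho\rangle}{f}\right)^{2} = \frac{\langle\rho\rangle^{2}}{f},
\qquad {\rm hence} \qquad
\dot{M}_{\rm smooth} = \frac{\dot{M}_{\rm clumped}}{\sqrt{f}} .
\end{equation}
A filling factor $f = 1/5$ thus yields the factor of about $\sqrt 5$ quoted above.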
\begin{table*}[] \centering \caption{Basic stellar data, \emph{FUSE}\ datasets, and interstellar column densities.} \begin{tabular}{lccccccccc} \hline \hline Star & RA & Dec &Spectral & \emph{FUSE}\ & $V$ & $B-V$ & $E(B-V)$ & $\log N$(H~{\sc i}) & $\log N({\rm H}_{2})$ \\ & (J2000) & (J2000) &Type & ID & (mag) &(mag) & (mag) & (cm$^{-2}$) & (cm$^{-2}$) \\ \hline HD 96715 & 11 07 32.9 & -59 57 49 &O4V((f)) & P1024301 & 8.27 & 0.10 & 0.42 & 21.3 & 19.7 \\ HD 190429A & 20 03 29.4 & +36 01 30 &O4If+ & P1028401 & 7.12 & 0.14 & 0.51 & 21.5 & 20.3 \\ \hline \end{tabular} \label{tabone} \end{table*} In this paper, we investigate the properties of two Galactic O4 stars, HD~190429A\ and HD~96715, in order to derive their properties and address these issues. These two stars were chosen to cover a wide range of luminosity classes, thus investigating wind clumping in O dwarfs and in O supergiants, and to compare Galactic stars to SMC O4V stars for which Bouret et al. found evidence of clumping. The general stellar properties, along with the observational data and reduction are presented in \S\ref{obs_sect}.~~Sect.~\ref{modass_sect} considers our model atmospheres. We discuss the methodology used to derive the stellar parameters, chemical abundances, and wind parameters, and the related uncertainties, for each star, in \S\ref{specan_sect}. Our results are then put into a broader context in \S\ref{disc_sect}, comparing in particular the derived mass loss rates to other predictions based on homogeneous wind models. General conclusions are summarized in \S\ref{concl_sect}. \section{Stellar sample and observations} \label{obs_sect} We have selected two Galactic O4 stars, HD~190429A\ and HD~96715, which have been previously observed in the far ultraviolet (FUV) by \emph{FUSE}\ and \emph{IUE}, to investigate the FUV spectral signatures of clumping (\mbox{O~{\sc v}}, \mbox{P~{\sc v}}). The two stars are hot enough (O4 spectral type) to reveal \mbox{O~{\sc v}\lb1371} in their spectrum. We have selected one supergiant and one dwarf providing an initial comparison between the two classes. The two stars are part of the \emph{FUSE}\ atlas of Galactic O stars \citep{pellerin02}, from which we extracted basic stellar data (Table~\ref{tabone}). HD~190429A\ is a bright O supergiant for which extensive observational and modeling work has been published \citep[e.g.,][]{walborn00, markova04, garcia04}. \cite{garcia04} recently reported the first detailed analysis of the O4V((f)) star HD~96715. We have extracted \emph{IUE}\ short wavelength, high resolution spectra from the Multimission Archive at the Space Telescope Science Institute (MAST). The SWP spectra cover the spectral range, $\lambda$\lb1150-2000\,\AA, at a resolving power $R = 10,000$. Each spectrum was processed with the NEWSIPS package. We have selected all spectra obtained through the large aperture, that is, 16 spectra for HD~190429A\ and 4 spectra for HD~96715\ (Table~\ref{SWPTab}). These spectra do not show conspicuous signs of variability, in flux as well as in line profiles. Although we cannot rule out definitively variability (in particular for the supergiant HD~190429A), this absence of variations justifies our co-adding all merged extracted spectra so as to form an average spectrum for each star. Data points flagged by the NEWSIPS software have been excluded; in particular, this concerns the saturated portion of the long-exposed images SWP\,43980 and SWP\,43981 at $\lambda\ga 1700$\,\AA. 
Finally, we smoothed the co-added spectra to a resolution of 40~km\,s$^{-1}$\ in order to increase the signal-to-noise ratio. The spectra of both stars show a large number of narrow lines of interstellar (IS) origin. This IS contamination becomes an issue especially for HD~96715\ because its apparent rotational velocity is relatively low \citep[$v\sin\,i$ = 80 km\,s$^{-1}$;][]{howarth89}. The processed \emph{FUSE}\ spectra have been retrieved from MAST too. The nominal spectral resolution is 20,000, or about 20~km\,s$^{-1}$. Details about the observations and data reduction can be found in the \emph{FUSE}\ atlas of Galactic OB spectra presented by \cite{pellerin02}. Individual sub-exposures have been co-added for each segment and then merged to form a single spectrum, using Lindler's FUSE-REGISTER program. We avoided contamination by the so-called worm artefact \citep{sahnow00} by using only the LiF2A spectra on the long-wavelength side ($\lambda$\lb1086-1183\,\AA) of the spectrum. Finally, the co-added merged spectra have been smoothed to a 30~km\,s$^{-1}$\ resolution to enhance the signal-to-noise ratio. From the \emph{IUE}\ and \emph{FUSE}\ spectra, we have measured the atomic and molecular hydrogen column densities towards the two stars, fitting \mbox{Ly$\alpha$}\ and H$_{2}$ lines, respectively. The values are listed in the rightmost columns of Table~\ref{tabone}, indicating that the \emph{FUSE}\ spectra suffer from severe blending from the broad absorption bands of \mbox{H$_{2}$}. This absorption becomes especially serious shortwards of 1006\,\AA\ (SiC2A channel), and thus little information about the stars can be inferred from the analysis of this part of the spectra. Furthermore, numerous IS metal lines are also present in the \emph{FUSE}\ spectra, superimposed on the stellar lines. These lines are narrower than the stellar lines but, because of their large number \citep[see Table~3,][]{pellerin02}, they may complicate the analysis of the stellar components (especially for weak lines in HD~96715). \begin{table}[b] \centering \caption{\emph{IUE}\ SWP spectra.} \begin{tabular}{llcl} \hline \hline Star & Image & $t_{\rm exp}$ & Obs. Date \\ & & (s) & \\ \hline HD 96715 & SWP 21999 & 3000 & 1984-01-13 \\ & SWP 22000 & 1800 & 1984-01-13 \\ & SWP 43980 & 9000 & 1992-02-13 \\ & SWP 43981 & 9000 & 1992-02-13 \\ [2mm] HD 190429A & SWP 04903 & 1515 & 1979-04-09 \\ & SWP 38958 & 1500 & 1990-06-02 \\ & SWP 38965 & 1500 & 1990-06-02 \\ & SWP 38970 & 1500 & 1990-06-03 \\ & SWP 38973 & 1500 & 1990-06-03 \\ & SWP 38978 & 1500 & 1990-06-03 \\ & SWP 38981 & 1500 & 1990-06-03 \\ & SWP 38986 & 1500 & 1990-06-03 \\ & SWP 38989 & 1500 & 1990-06-03 \\ & SWP 38994 & 1500 & 1990-06-04 \\ & SWP 38998 & 1500 & 1990-06-04 \\ & SWP 39003 & 1500 & 1990-06-04 \\ & SWP 39006 & 1500 & 1990-06-04 \\ & SWP 39011 & 1500 & 1990-06-04 \\ & SWP 54573 & 1500 & 1995-05-02 \\ & SWP 55986 & 1500 & 1995-09-22 \\ \hline \end{tabular} \label{SWPTab} \end{table} \section{Modeling assumptions} \label{modass_sect} Unified models, with a consistent treatment of the photosphere and the wind, are mandatory to analyze the P~Cygni profile of strong lines in the UV spectra of O stars and, thus, to derive the basic wind parameters (mass loss rate, terminal velocity). However, the bulk of spectral lines in O stars are formed in the photosphere where velocities are small and geometrical extension is negligible.
Hydrostatic, fully-blanketed, NLTE photospheric models may then remain a preferable alternative to unified models with a simplified treatment of metal line blanketing. We demonstrated this point in the case of low metallicity stars in the SMC, where stellar winds are weaker than those of Galactic stars \citep{bouret03}. Here, we deal with Galactic objects and, in particular, with an extreme O4If+ supergiant (HD~190429A) which exhibits a strong stellar wind. This is thus an exemplary case for studying the wind contribution to weak photospheric lines, comparing the predictions of photospheric and unified models to observations. We have performed such an analysis using model atmospheres calculated with the photospheric program, {\sc Tlusty}\ \citep{hubeny95}, and with the unified model code, {\sc CMFGEN}\ \citep{hillier98}. The atomic data used by the two codes mostly come from the same sources, thus ensuring a meaningful comparison of the stellar parameters derived by the two codes. {\sc CMFGEN}\ does not solve the full hydrodynamics, but rather assumes the density structure. We use a hydrostatic density structure computed with {\sc Tlusty}\ in the deep layers, and the wind part is described with a standard $\beta$-velocity law. The two parts are connected below the sonic point at $v(r)\approx 15$\,km\,s$^{-1}$. For more details on these two codes, we refer to \cite{hubeny95}, \cite{lanz03}, \cite{hillier98}, \cite{hillier03}, and \cite{bouret03}. Radiatively driven winds are intrinsically subject to instabilities, resulting in the formation of discrete structures or ``clumps''. As discussed in \S1, there are several lines of observational evidence as well as theoretical arguments that support the concept of highly-structured winds. To investigate spectral signatures of clumping in the winds of HD~96715\ and HD~190429A, and the ensuing consequences for the derived wind parameters, we have thus constructed clumped wind models with CMFGEN. A simple, parametric treatment of wind clumping is implemented in CMFGEN, which is expressed by a volume filling factor, $f$, and which assumes a void interclump medium and the clumps to be small compared to the photon mean free path. The filling factor is such that $\bar{\rho} = f\rho$, where $\bar{\rho}$ is the homogeneous (unclumped) wind density. The filling factor decreases exponentially with increasing radius (or, equivalently, with increasing velocity) \begin{equation} f = f_\infty + (1-f_\infty) \exp(-v/v_{\rm cl}), \end{equation} where $v_{\rm cl}$ is the velocity at which clumping starts. \cite{hillier03} and \cite{bouret03} found that clumping starts close to the photosphere. We have thus adopted $v_{\rm cl}=30$\,km\,s$^{-1}$, that is, clumps start forming just above the sonic point. The observation of strong resonance lines of highly-ionized species, like \mbox{O~{\sc vi}}$\lambda$\lb1032, 1038 and \mbox{N~{\sc v}}$\lambda$\lb1238, 1242, provides indirect evidence of X-ray emission in the stellar winds. An important consequence of the X-ray and EUV shock radiation is to enhance photoionization, which results in ``wind superionization''. Auger processes are accounted for in calculating the wind ionization \citep{cassinelli79}. X-rays probably originate from the cooling zone of post-shock regions, while shocks arise from radiative instabilities inherent to line-driven winds \citep{lucy70, lucy82, owocki88}. X-ray emission has been observed in O-type stars of all luminosity classes \citep{chlebo91}.
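For illustration, the clumping law above is trivial to evaluate; the short Python sketch below (ours; the function name is arbitrary, and the quoted $f_\infty$ anticipates the value derived for HD~190429A\ in \S\ref{specan_sect}) shows how quickly the wind becomes strongly clumped above the sonic point:
\begin{verbatim}
import numpy as np

def filling_factor(v_kms, f_inf, v_cl=30.0):
    # f(v) = f_inf + (1 - f_inf) * exp(-v / v_cl), with v_cl just
    # above the sonic point, as adopted in the text.
    return f_inf + (1.0 - f_inf) * np.exp(-v_kms / v_cl)

v = np.array([15.0, 30.0, 100.0, 500.0, 2000.0])   # km/s
f = filling_factor(v, f_inf=0.04)
print(np.round(f, 3))          # -> [0.622 0.393 0.074 0.04 0.04]
print(np.round(1.0 / f, 1))    # clump density relative to the mean
\end{verbatim}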
To reproduce the \mbox{O~{\sc vi}} and the \mbox{N~{\sc v}} lines, we have accounted for shock-generated X-ray emission in the final modeling stage. Because the two selected stars have not been observed with X-ray satellites, we have assumed the luminosity ratios, $\log L_{\rm X}/L_{\rm bol}$, measured for stars of the same spectral subtype and luminosity class. For HD~190429A, we have adopted $\log L_{\rm X}/L_{\rm bol}$\ = -7.1, quoted by \cite{pauldrach01} for the O4~I(n)f star $\zeta$~Puppis, whose spectral and physical properties are expected to be quite similar to those of HD~190429A; for HD~96715, we have adopted $\log L_{\rm X}/L_{\rm bol}$\ = -6.6, measured for the O4V((f)) star HD~46223 \citep{chlebo91}. \section{Spectral analysis} \label{specan_sect} Following the methodology outlined in \cite{bouret03}, we determined initial estimates of the stellar parameters with {\sc Tlusty}\ model atmospheres \citep{lanz03}. Subsequent analysis is performed with {\sc CMFGEN}, using {\sc Tlusty}\ parameters as initial inputs. At this stage, we still allowed for changes in the photospheric parameters, taking advantage of the realistic description of photospheric layers by {\sc CMFGEN}. We always checked for consistency between {\sc Tlusty}\ and {\sc CMFGEN}\ determinations. Hydrogen and helium lines in the optical spectrum are the classical diagnoses used in spectral analyses of O stars. In this paper, we present an analysis based on the far-ultraviolet spectrum exclusively. \cite{heap05} discussed in detail the UV lines and ionization equilibria that are good indicators of stellar parameters (effective temperature, surface gravity). Table~\ref{LineTab} lists the most important lines used in our analysis and the quantities they are most sensitive to. As a reference, we adopted the solar abundances from \cite{grevesse98}. However, we note that significant downward revisions of the abundance of light elements in the solar photosphere have been recently proposed on the basis of a 3-D hydrodynamical model of the solar atmosphere \citep{asplund04}, bringing the latter values into better agreement with surface abundances of B-type stars in the solar neighborhood. In this paper, all chemical abundances are quoted by number density relative to hydrogen, or relative to the \cite{grevesse98} solar values.
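As an aside, the conversion between these two conventions is a one-liner; in the sketch below (ours), the solar reference values are quoted from memory from \cite{grevesse98} and should be treated as indicative:
\begin{verbatim}
import numpy as np

# 12 + log10(X/H) solar references (Grevesse & Sauval 1998; indicative)
SOLAR = {"C": 8.52, "N": 7.92, "O": 8.83}

def absolute_abundance(element, ratio_to_solar):
    # Convert an abundance quoted relative to solar, X/X_sun,
    # into the absolute scale 12 + log10(X/H).
    return SOLAR[element] + np.log10(ratio_to_solar)

print(round(absolute_abundance("N", 4.0), 2))   # N/N_sun = 4 -> 8.52
\end{verbatim}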
\begin{table} \centering \caption{Main spectral lines used in the analysis of the photospheric and wind properties of HD~190429A\ and HD~96715, and their sensitivity to stellar parameters.} \label{LineTab} \begin{tabular}{lccc} \hline \hline & & \multicolumn{2}{c}{Dependencies} \\ \cline {3-4} Line & $\lambda$\ (\AA) & HD~190429A & HD~96715 \\ \hline \mbox{S~{\sc iv}}\ & 1073-99 & $T_{\rm eff}$, $\dot{M}$, S/H & $\ldots$ \\ \mbox{P~{\sc v}}\ & 1118-28 & $\dot{M}$, $f$, P/H & $\xi_{t}$, P/H \\ \mbox{O~{\sc iii}}\ & 1150-54 & $\xi_{t}$, O/H & $\xi_{t}$, O/H \\ \mbox{C~{\sc iv}}\ & 1169 & $T_{\rm eff}$, $\dot{M}$, C/H & $T_{\rm eff}$, $\xi_{t}$, C/H \\ \mbox{C~{\sc iii}}\ & 1176 & $T_{\rm eff}$, $\dot{M}$, C/H & $T_{\rm eff}$, $\xi_{t}$, C/H \\ \mbox{N~{\sc iii}}\ & 1182-84 & $\xi_{t}$, N/H & $\xi_{t}$, N/H \\ \mbox{Fe~{\sc vi}}\ & 1250-1350 & $T_{\rm eff}$, $\xi_{t}$, Fe/H & $T_{\rm eff}$, $\xi_{t}$, Fe/H \\ \mbox{O~{\sc iv}} & 1338-43 & $T_{\rm eff}$, $\dot{M}$, O/H & $T_{\rm eff}$, $\dot{M}$, O/H \\ \mbox{O~{\sc v}}\ & 1371 & $T_{\rm eff}$, $\dot{M}$, $f$, O/H & $T_{\rm eff}$, $\dot{M}$, $f$, O/H \\ \mbox{Fe~{\sc v}}\ & 1350-1500 & $T_{\rm eff}$, $\xi_{t}$, Fe/H & $T_{\rm eff}$, $\xi_{t}$, Fe/H \\ \mbox{Si~{\sc iv}}\ & 1393,1402 & $T_{\rm eff}$, $\dot{M}$, $f$ & $\ldots$ \\ \mbox{C~{\sc iii}}\ & 1426-28 & $\ldots$ & C/H, $T_{\rm eff}$ \\ \mbox{S~{\sc v}}\ & 1502 & $\dot{M}$, $f$ , S/H & $\xi_{t}$, $T_{\rm eff}$ \\ \mbox{C~{\sc iv}}\ & 1548-50 & $v_{\infty}$, $\beta$, $\xi_{t}$ & $v_{\infty}$, $\beta$, $\xi_{t}$ \\ \mbox{Fe~{\sc iv}}\ & 1500-1750 & $T_{\rm eff}$, $\xi_{t}$, Fe/H & $T_{\rm eff}$, $\xi_{t}$, Fe/H \\ \mbox{He~{\sc ii}}\ & 1640 & $\dot{M}$, $f$, He/H & $\dot{M}$, He/H \\ \mbox{N~{\sc iv}} & 1718 & $\dot{M}$, $f$, N/H & $\dot{M}$, $f$, N/H \\ \mbox{N~{\sc iii}}\ & 1748-52 & $\xi_{t}$, N/H & $\xi_{t}$, N/H \\ \mbox{C~{\sc iv}}\ & 1860 & C/H, $T_{\rm eff}$ & $\ldots$ \\ \mbox{C~{\sc iii}}\ & 1875-78 & C/H, $T_{\rm eff}$ & $\ldots$ \\ \mbox{N~{\sc iii}}\ & 1885 & $\xi_{t}$, N/H & $\ldots$ \\ \hline \end{tabular} \end{table} \subsection{HD~190429A\ - {\rm O4~If+}} \label{hdo_sect} The \emph{IUE}\ and \emph{FUSE}\ spectra offer a broad range of ionization stages and diversity of species that may be used to constrain the effective temperature: \mbox{He~{\sc ii}}, \mbox{C~{\sc iii}}, \mbox{C~{\sc iv}}, \mbox{N~{\sc iii}}, \mbox{N~{\sc iv}}, \mbox{S~{\sc iv}}, \mbox{S~{\sc v}}, \mbox{S~{\sc vi}}, \mbox{Fe~{\sc iv}}, \mbox{Fe~{\sc v}}, \mbox{Fe~{\sc vi}}. Following \cite{heap05}, we used in priority ratios between successive ions of C, N, O and Fe. Both the \mbox{C~{\sc iii}}\lb1176/\mbox{C~{\sc iv}}\lb1169 and the \mbox{Fe~{\sc iv}}/\mbox{Fe~{\sc v}} \footnote{\mbox{Fe~{\sc iv}}\ lines between 1500 and 1700\,\AA, and \mbox{Fe~{\sc v}}\ lines between 1300 and 1500\,\AA.} line ratios indicate $T_{\rm eff}$ $\la 40,000$\,K. The \mbox{Fe~{\sc vi}}\ lines in the 1250-1350\,\AA\ range are quite weak at these temperatures. Moreover, the \mbox{N~{\sc iii}}$\lambda$\lb981-991 lines are found to be very sensitive to $T_{\rm eff}$, as noticed by \cite{crowther02}, and they set a lower limit $T_{\rm eff}$\ $\ga 37,500$\,K. The ionization ratio, \mbox{O~{\sc iv}}$\lambda$\lb1338-1343/\mbox{O~{\sc v}}\lb1371, further supports this lower limit on $T_{\rm eff}$. On the other hand, \mbox{Si~{\sc iv}}$\lambda$\lb1393-1402, which are also very sensitive to temperature, suggest $T_{\rm eff}$\ $\approx 35,000$\,K. 
It is unlikely, however, that a contamination of the \emph{IUE}\ spectrum by the companion is responsible for this lower value. Indeed, the large \emph{IUE}\ aperture contains both the primary and the companion HD~190429B (at~$2\arcsec$), but the O9.5~II spectral type of the companion \citep{walborn00} implies that its contribution remains small (the luminosity ratio should be about 5). Overall, we found that the best fit is achieved for $T_{\rm eff}$ = 39,000\,K, with an uncertainty better than 1000\,K. This value is in excellent agreement with the results from \cite{markova04}, who used a calibration of $T_{\rm eff}$\ with the spectral type based on results obtained from the analysis of hydrogen and helium lines in the optical spectrum \citep{repolust04}. On the other hand, \cite{garcia04} derived a slightly lower value, $T_{\rm eff}$ = 37,500\,K, based on UV lines formed in the wind. These lines are sensitive to the detail of the wind ionization, hence to wind clumping that Garcia~\& Bianchi neglected. Moreover, they used carbon and oxygen lines assuming solar abundances while admitting that these two species might have highly non-solar abundances (see below). Our initial estimate of the surface gravity comes from the relation between the spectral type and $\log g$\ for luminosity class~I stars \citep[Fig.~2,][]{markova04}. This relation yields $\log g$\ = 3.65 for an O4 supergiant. We then explored models with $\log g$\ ranging from 3.5 to 3.75 (by steps of 0.05\,dex, for $T_{\rm eff}$\ fixed at 40,000\,K). We found that the shape of the observed spectral energy distribution is indeed best reproduced for $\log g\approx 3.6 - 3.65$ \cite[see][ for the dependence of the SED as a function of $\log g$, at a given $T_{\rm eff}$]{lanz03}. We have adopted, $\log g$\ = 3.6, for the rest of the analysis. The visual photometry listed in Table~\ref{tabone} pertains to both components A+B. Based on the Hipparcos magnitude difference of the two components, \cite{walborn00} obtained $V = 7.12$ for component~A alone. The absolute visual magnitude is then derived, assuming a distance $d = 2.29$ kpc for the CygOB3 association \citep{humphreys78}. The stellar luminosity is calculated by applying a bolometric correction from \cite{lanz03}. The derivation of the stellar radius then follows straightforwardly. The microturbulent velocity, $\xi_{t}$, is determined from the iron line strengths (\mbox{Fe~{\sc iv}}\ and \mbox{Fe~{\sc v}}). The iron abundance is kept to the solar value throughout this analysis. As noted in \cite{hillier03} and in \cite{bouret03}, \mbox{O~{\sc iv}}$\lambda$\lb1338-1343 is also sensitive to $\xi_{t}$. However, the oxygen abundance may have a non-solar value at the surface of an O4~If+ star (see below). A consistent fit to the aforementioned lines is obtained for $\xi_{t}$\ = 15\,km\,s$^{-1}$. We note that these lines have been similarly used in other studies \citep{bouret03, heap05}, yielding a range of microturbulence values for O stars ($\xi_{t}$\ = 2--25\,km\,s$^{-1}$). It is therefore unlikely that the high microturbulence derived here results from inadequate model atoms. \begin{figure*}[] \centering \rotatebox{0}{\includegraphics[width=18cm]{2531f1.eps}} \caption[12cm]{Best fit to the photospheric lines used to derive the stellar parameters of HD~190429A\ ($T_{\rm eff}$ = 39,000\,K, $\log g$\ = 3.6, $\xi_{t}$\ = 15\,km\,s$^{-1}$). The adopted abundances are listed in Table~\ref{TabRes}}. 
\label{figone} \end{figure*} The helium abundance is poorly constrained by the FUV spectrum alone. We first used $y$ = He/H = 0.1 (by number), but subsequently we increased the abundance to $y$ = 0.2. The higher helium abundance provides a better fit to \mbox{He~{\sc ii}}\lb1640 (once the mass loss rate is fixed at the value derived from other lines). The photospheric \mbox{He~{\sc ii}}\lb1085 line, which is not affected by the wind in our models, could not be used because of strong blends from IS \mbox{N~{\sc ii}}\ lines. The enhanced helium abundance is consistent with the evolutionary status of an O4 supergiant \citep{meynet00}, but we cannot rule out that it may be further revised when optical data is analyzed in addition to the FUV spectrum. A simultaneous fit of all \mbox{He~{\sc i}}\ and \mbox{He~{\sc ii}}\ lines throughout the spectrum is, however, rarely (if ever) achieved \cite[see, e.g.,][]{hillier03}. Additionally, we note that changes in the helium abundance impact the predicted \mbox{P~{\sc v}}$\lambda$\lb1118-1128 lines, because the ionization threshold of \mbox{P~{\sc v}}\ is at 191\,\AA, close to the \mbox{He~{\sc ii}}\ Lyman limit. Models with $y$ = 0.2 yield a phosphorus abundance that is closer to the solar value compared to values derived with $y = 0.1$ models. On the basis of these various arguments, we finally adopted $y$ = 0.2. The carbon abundance is constrained by the \mbox{C~{\sc iii}}\lb1176, \mbox{C~{\sc iii}}\lb1923, and \mbox{C~{\sc iv}}\lb1169 photospheric lines. Only models strongly depleted in carbon match these lines. A low carbon abundance is also consistent with the \mbox{C~{\sc iii}}\lb977 and the \mbox{C~{\sc iv}}$\lambda$\lb1548-1550 resonance lines, although these lines are not sensitive indicators because they show saturated P~Cygni profiles that are only weakly sensitive to the wind velocity law. We also checked that the models do not predict a strong \mbox{C~{\sc iii}}\lb4647-51 emission that is not observed in the optical spectrum \citep{walborn00}. The best fit to the lines discussed above is obtained for C/C$_\odot$ = 0.05. This value is consistent with the evolved nature of HD~190429A. All nitrogen FUV lines, but \mbox{N~{\sc iv}}\lb1718, show that the surface of HD~190429A\ has been enriched in nitrogen. This conclusion is additionally supported by the observed \mbox{N~{\sc iii}}$\lambda$\lb4634-4640 emission \citep[see, e. g.,][]{walborn00}. We have derived a nitrogen surface abundance, N/N$_\odot$ = 4.0. The nitrogen enrichment of HD 190429A is consistent with its being highly evolved, as significant nitrogen enrichment together with carbon depletion is predicted by stellar evolution models for stars in the helium burning phase. Note in this respect that \cite{conti95} and \cite{walborn00} already suggested an advanced evolutionary status for HD~190429A\, based on K-band and optical data (respectively), which they interpreted as indications that the star is well advanced toward the WN stage. The lack of wind-insensitive oxygen lines hampers an accurate determination of the oxygen abundance. The strong \mbox{O~{\sc vi}}$\lambda$\lb1032-1038 are fully controlled by the X-ray flux and other wind parameters. The \mbox{O~{\sc v}}\lb1371 line is rather weak and is most sensitive to the mass loss rate and clumping parameters. Finally, although mostly formed in the photosphere, the \mbox{O~{\sc iv}}$\lambda$\lb1338-1343 lines show a blue asymmetry that indicates a contribution from the wind. 
Assuming $\xi_{t}$\ = 15\,km\,s$^{-1}$\ (see above), we had to adopt a low oxygen abundance, O/O$_\odot$ = 0.1, in order to match the \mbox{O~{\sc iv}}$\lambda$\lb1338-1343 lines. However, we always failed to reproduce the \mbox{O~{\sc v}}\lb1371 line with homogeneous wind models. This low abundance is also supported by the weak \mbox{O~{\sc iii}}\ triplet ($\lambda$\lb1150-1154\,\AA) in the \emph{FUSE}\ spectrum. Phosphorus and sulfur abundances are not expected to be affected by nucleosynthetic processes. We thus adopted initially (and kept for most of the analysis) solar abundances. There are, however, indications that both elements might be slightly depleted. For homogeneous wind models, a low phosphorus abundance, P/P$_\odot$ = 0.1, is required to match the \mbox{P~{\sc v}}$\lambda$\lb1118-1128 resonance doublet. On the other hand, we derive P/P$_\odot$ = 0.5 from a clumped model with a filling factor $f_\infty = 0.04$ (see below). All sulfur ions indicate that S/S$_\odot$ = 0.5 for homogeneous models and S/S$_\odot$ = 0.9 for clumped models ($f_\infty = 0.04$). Similarly, \cite{hillier03} derived P and S abundances below one-fifth of the solar-scaled values for the SMC O7~Iaf+ supergiant AV~83. However, given the recent downward revision of the solar P and S abundances \citep{asplund04}, we argue that the abundances derived from the clumped wind model remain roughly in agreement with solar values within the uncertainties (0.1 to 0.2\,dex). \begin{figure*}[] \centering \rotatebox{0}{\includegraphics[width=18cm]{2531f2.eps}} \caption[12cm]{Best fit to HD~190429A\ wind-sensitive lines, obtained with clumped (full grey line) and homogeneous (dashed grey line) models respectively. The clumped model has $f_{\infty}$ = 0.04, $\beta = 0.8$, and $\dot{M}$\ = 1.8\,10$^{-6}$\,M$_{\odot}$\,yr$^{-1}$, while the homogeneous model has $\dot{M}$\ = 6.0\,10$^{-6}$\,M$_{\odot}$\,yr$^{-1}$. The mass loss rates have been adjusted as described in \S\ref{hdo_sect}. Adopted abundances for the homogeneous and the clumped wind models are discussed in the text (see also Table~\ref{TabRes}). The short wavelength side of \mbox{N~{\sc v}}\lb1240 is affected by interstellar \mbox{Ly$\alpha$}\ absorption (see Table~1 for n(\mbox{H~{\sc i}})). Notice in particular the excellent fit to both lines of the \mbox{P~{\sc v}}\ doublet achieved with the clumped wind model.} \label{figtwo} \end{figure*} A detailed examination of the complete FUV spectrum reveals that only a few lines are influenced by the stellar wind, even for a supergiant with such an early spectral type and strong wind ($\dot{M}$\ $\approx$ few $10^{-6}$\,M$_{\odot}$\,yr$^{-1}$). In particular, these lines include a few strong Fe lines whose cores are partially filled in by wind emission. For most other lines, we have a very good consistency between {\sc Tlusty}\ and {\sc CMFGEN}\ model spectra. This agreement demonstrates that we may use NLTE photospheric models to derive reliable stellar parameters and abundances, even for Galactic O supergiants with relatively dense winds. Furthermore, we stress that we had to use large ($\xi_{t}$\ = 15\,km\,s$^{-1}$) microturbulent velocities in CMFGEN wind models too. Despite an early comment by \cite{kudritzki92}, this demonstrates that the need for microturbulence is not restricted to hydrostatic models that neglect the atmospheric velocity field. We display in Fig.~\ref{figone} the best {\sc CMFGEN}\ model fit to some of the lines discussed above, which are only weakly sensitive to the wind properties.
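The role of such a large $\xi_{t}$\ is easy to picture with the textbook Doppler-width expression (not specific to either code); the Python sketch below (ours) shows that, for iron at these temperatures, a 15\,km\,s$^{-1}$\ microturbulence dominates the thermal broadening:
\begin{verbatim}
import numpy as np

K_B, M_U = 1.380649e-16, 1.6605e-24   # cgs units

def doppler_width_kms(T, atomic_mass, xi_t_kms):
    # Combine thermal and microturbulent broadening in quadrature.
    v_th = np.sqrt(2.0 * K_B * T / (atomic_mass * M_U)) / 1.0e5
    return np.sqrt(v_th**2 + xi_t_kms**2)

# Fe (A = 56) at T = 40,000 K: v_th ~ 3.4 km/s, so xi_t = 15 km/s
# dominates the width of the Fe IV and Fe V lines used above.
print(round(doppler_width_kms(4.0e4, 56.0, 15.0), 1))   # ~15.4
\end{verbatim}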
We now turn to the analysis of the wind properties of HD~190429A. The parameters of the $\beta$-type velocity law are derived from the saturated \mbox{C~{\sc iv}}$\lambda$\lb1548-1550 doublet and from the \mbox{N~{\sc iv}}\lb1718 P~Cygni profile. A good match is achieved for $\beta$ = 0.8 and $v_{\infty}$\ = 2300\,km\,s$^{-1}$. We have used an outward-increasing microturbulent velocity from the photospheric value (15\,km\,s$^{-1}$) to a maximum of 200\,km\,s$^{-1}$. To derive the mass loss rate, we started with an estimate based on \mbox{H$\alpha$}\ modeling, $\dot{M}$\ = 1.4\,10$^{-5}$\,M$_{\odot}$\,yr$^{-1}$\ \citep{markova04}. Such a high value can be excluded from the FUV spectrum, because a wind model assuming this value predicts much stronger P~Cygni profiles than those observed. Furthermore, this model also produces asymmetric \mbox{Fe~{\sc iv}}\ and \mbox{Fe~{\sc v}}\ lines with a blue extended absorption that is not observed. \cite{garcia04} derived a similarly high mass loss rate from the UV wind lines. However, from their Fig.~7, we see that their model predicts \mbox{C~{\sc iii}}\lb1176 and \mbox{O~{\sc iv}}\lb1340 emission that is not observed. More generally, their model also predicts all the other wind lines too strong. We then built a series of wind models, decreasing $\dot{M}$\ in several steps. Using homogeneous wind models, we have not been able to achieve a good match to most observed P~Cygni profiles and to other wind sensitive lines like \mbox{O~{\sc iv}}$\lambda$\lb1338-1343 and \mbox{He~{\sc ii}}\lb1640. Therefore, we switched to clumped wind models with the presumption that the high mass loss rate derived by \cite{markova04} was the result of the effect of clumps on \mbox{H$\alpha$}\ line formation \citep[as already suggested by][]{repolust04}. We thus searched for a combination of $\dot{M}$\ and $f_\infty$, keeping $\dot{M}$/$\sqrt {f_\infty} \approx$ 1.4\,10$^{-5}$\,M$_{\odot}$\,yr$^{-1}$. The best match to the FUV lines was obtained for $\dot{M}$\ = 1.8\,10$^{-6}$\,M$_{\odot}$\,yr$^{-1}$ and $f_\infty = 0.04$ (see Fig.~\ref{figtwo}). The fits to the \mbox{P~{\sc v}}$\lambda$\lb1118-1128 and \mbox{O~{\sc v}}\lb1371 line profiles are the most striking improvements achieved with the clumped wind model. The agreement with the observed profile also improves for \mbox{N~{\sc iv}}\lb1718, while there is little change in the profiles of the \mbox{C~{\sc iv}}\ resonance doublet or \mbox{He~{\sc ii}}\lb1640 compared to those predicted by a homogeneous wind model. The sensitivity of \mbox{P~{\sc v}}\ lines to clumping was first established by \cite{crowther02} and \cite{hillier03}. We defer to \S\ref{disc_sect} a discussion of the physical implications of clumping on line formation. \begin{figure*}[] \centering \rotatebox{0}{\includegraphics[width=18cm]{2531f3.eps}} \caption[12cm]{Best fit to photospheric lines used to derive HD~96715\ stellar parameters ($T_{\rm eff}$ = 43,500\,K, $\log g$\ = 4.0, $\xi_{t}$\ = 15\,km\,s$^{-1}$). The adopted abundances are listed in Table~\ref{TabRes}.} \label{figthree} \end{figure*} Other wind line profiles, such as \mbox{O~{\sc vi}}\lb1036 and \mbox{N~{\sc v}}\lb1240, are strongly influenced by X-ray wind emission (see \S\ref{discus_2} and Fig.~\ref{figseven}), and provide few additional constraints on the mass loss rate or on the clumping properties of the wind of HD~190429A.
P~Cygni profiles of lower ions, like \mbox{C~{\sc iii}}\lb977 or \mbox{N~{\sc iii}}\lb991, are severely blended with \mbox{H$_{2}$}\ lines, which hampers an accurate determination of $\dot{M}$\ from these lines. \subsection{HD~96715\ - {\rm O4~V((f))}} \label{hdn_sect} The \emph{IUE}\ spectrum of HD~96715\ exhibits conspicuous lines of \mbox{Fe~{\sc vi}}\ (1250-1350\,\AA), \mbox{Fe~{\sc v}}\ (1300-1550\,\AA) and \mbox{Fe~{\sc iv}}\ (1550-1700\,\AA), as well as \mbox{O~{\sc iv}}$\lambda$\lb1338-1343, \mbox{C~{\sc iii}}$\lambda$\lb1426-1428 and \mbox{S~{\sc v}}\lb1502. The \emph{FUSE}\ spectrum shows \mbox{C~{\sc iv}}\lb1169 and \mbox{C~{\sc iii}}\lb1176 lines, as well as \mbox{O~{\sc iii}}$\lambda$\lb1150-1154, \mbox{S~{\sc iv}}$\lambda$\lb1073-1099. The lines are relatively narrow, implying an apparent rotational velocity, $v\sin\,i$\ = 80\,km\,s$^{-1}$, and resulting in less severe blending problems from iron lines. Using the ionization balance between these ions, we derived a best overall match with a {\sc Tlusty}\ model having $T_{\rm eff}$\ = 43,500\,K and $\log g$\ = 4.0. A realistic estimate of the uncertainty on the $T_{\rm eff}$\ determination is about $\pm$ 1500\,K. On the other hand, the sensitivity of the FUV spectrum to $\log g$\ is small. We can, however, firmly exclude models with gravities differing by $\pm$~0.25\,dex in $\log g$, based on the comparison of theoretical spectral energy distributions with the \emph{IUE}\ spectrum corrected for reddening (assuming $E(B-V) = 0.42$, Table~\ref{tabone}). The best overall agreement is obtained for models with $\log g$\ = 3.9 $\pm$ 0.1. The relation between spectral type and surface gravity for luminosity class~V stars \citep{markova04} yields $\log g$\ = 3.9, in agreement with our estimate within the error bars. We finally adopted $\log g$\ = 4.0 (see Table~\ref{TabRes}). \cite{garcia04} obtained a markedly lower value, $T_{\rm eff}$ = 39,000$\pm$2,000\,K. They established upper and lower temperature limits from the absence of \mbox{P~{\sc v}}$\lambda$\lb1118-1128 and \mbox{O~{\sc v}}\lb1371. In particular, the weakness of the \mbox{O~{\sc v}}\ line constrains $T_{\rm eff}$$<$40,000\,K. Homogeneous wind models predict a very strong \mbox{O~{\sc v}}\ feature at temperatures higher than 40\,kK, thus forcing an artificially low $T_{\rm eff}$\ to be adopted. The Garcia~\& Bianchi methodology yields temperatures that are systematically lower than those from all other recent analyses of optical and UV spectra of O stars (see \cite{heap05} and \cite{martins05}, who provide a detailed comparison of these studies and are led to exclude the Garcia~\& Bianchi results). We believe that their methodology underestimates temperatures because of the reliance on wind lines that are sensitive to the wind structure and abundances as well as to $T_{\rm eff}$. \begin{figure*}[] \rotatebox{0}{\includegraphics[width=12cm]{2531f4.eps}} \caption{Best fit to HD~96715\ wind-sensitive lines, obtained with clumped (full grey line) and homogeneous (dashed grey line) models, respectively. The parameters of the clumped model are $f_{\infty}$ = 0.02 and $\dot{M}$\ = 2.5\,10$^{-7}$\,M$_{\odot}$\,yr$^{-1}$, while the homogeneous model has $\dot{M}$\ = 1.8\,10$^{-6}$\,M$_{\odot}$\,yr$^{-1}$. The blue side of \mbox{N~{\sc v}}\lb1240 is affected by IS \mbox{Ly$\alpha$}\ absorption (see Table~\ref{tabone} for n(\mbox{H~{\sc i}})).
The adopted abundances are listed in Table~\ref{TabRes}.} \label{figfour} \end{figure*} We adopted the absolute magnitude, $M_{\rm v} = -5.5$, from \cite{howarth89}, and derived the stellar luminosity and radius using a bolometric correction from \cite{lanz03}. The stellar mass then follows from $\log g$\ and $R_{*}$. The microturbulent velocity has been derived from \mbox{O~{\sc iv}}, \mbox{Fe~{\sc iv}}, \mbox{Fe~{\sc v}}, \mbox{Fe~{\sc vi}}, and \mbox{S~{\sc v}}\ lines. The best match is obtained for $\xi_{t}$\ = 15\,km\,s$^{-1}$, assuming solar abundances for these species, consistent with the expected evolutionary status of HD~96715. We started the analysis using a normal helium abundance, $y = 0.1$. Subsequently, we slightly decreased it to $y$ = 0.09 to improve the match to \mbox{He~{\sc ii}}\lb1085 and \mbox{He~{\sc ii}}\lb1640. Admittedly, this value should be considered a provisional estimate until an analysis of the optical spectrum becomes available. Yet, there is no evidence of helium enrichment at the surface of HD~96715. The carbon abundance was derived from the photospheric lines, \mbox{C~{\sc iv}}\lb1169, \mbox{C~{\sc iii}}\lb1176, and \mbox{C~{\sc iii}}\lb1426-1428. Using a {\sc Tlusty}\ model with the stellar parameters previously derived, we found that C/C$_\odot$ = 0.5 matches these carbon lines well. As for HD~190429A, we constrained the nitrogen abundance from \mbox{N~{\sc iii}}$\lambda$\lb1182-1184 and \mbox{N~{\sc iii}}$\lambda$\lb1748-1752. We found that an enhancement of a factor of 4 with respect to the solar value is required to match the observed lines. Note that wind lines such as \mbox{N~{\sc iv}}\lb1718 further support this determination, since no match can be achieved without a significant nitrogen enrichment. Our result is also consistent with a line blend, \mbox{N~{\sc ii}}$\lambda$\lb1084-1086, although the presence of superimposed IS lines makes the determination of the photospheric contribution difficult. To measure the oxygen abundance, we relied on the \mbox{O~{\sc iii}}$\lambda$\lb1150-1154 photospheric lines, which are clearly seen in the \emph{FUSE}\ spectrum. A very good fit to this triplet is obtained for O/O$_\odot$ = 0.9, which also results in a good fit of \mbox{O~{\sc iv}}$\lambda$\lb1338-1343. Fig.~\ref{figthree} displays the best model fit achieved with {\sc CMFGEN}\ to the photospheric lines discussed in this section. The wind velocity law has been determined from the saturated P~Cygni profile of \mbox{C~{\sc iv}}$\lambda$\lb1548-1550. The terminal velocity and maximum turbulent velocity have been measured from the blue side of the absorption component; we have adopted a terminal wind velocity, $v_{\infty}$\ = 3000\,km\,s$^{-1}$, and a maximum turbulent velocity in the wind of 250\,km\,s$^{-1}$. The P~Cygni profile of \mbox{C~{\sc iv}}$\lambda$\lb1548-1550 was preferred to \mbox{N~{\sc v}}\lb1240 because the latter is affected by significant contamination from IS \mbox{Ly$\alpha$}\ and because of its well-known sensitivity to X-rays. The wind acceleration parameter, $\beta$, and the mass loss rate have been inferred from the P~Cygni profiles of \mbox{C~{\sc iv}}$\lambda$\lb1548-1550 and \mbox{N~{\sc iv}}\lb1718 together with other wind-sensitive lines like \mbox{O~{\sc iv}}$\lambda$\lb1338-1343 and \mbox{He~{\sc ii}}\lb1640.
For the adopted abundances, we found that a homogeneous wind model with $\beta$ = 0.8 and $\dot{M}$\ = 1.8\,10$^{-6}$\,M$_{\odot}$\,yr$^{-1}$\ reproduces these lines well, although the match to \mbox{N~{\sc iv}}\lb1718 remains poor. On the other hand, the \mbox{O~{\sc v}}\lb1371 line profile predicted by homogeneous wind models is stronger than observed. This behavior has been found in all previous studies of O-type stars that show \mbox{O~{\sc v}}\lb1371. More specifically, the wind models predict too much absorption at moderate velocities, too little absorption at low velocities, and too strong a redward emission. Homogeneous wind models thus fail to reproduce this line, as in our earlier study of main-sequence O stars in the SMC \citep{bouret03}. Therefore, we have calculated several models with different clump volume filling factors $f_\infty$, adjusting the mass loss rate so as to maintain a good fit to the \mbox{C~{\sc iv}}\ and \mbox{N~{\sc iv}}\ P~Cygni profiles. Fig.~\ref{figfour} shows that we can achieve a good match of \mbox{O~{\sc v}}\lb1371 when assuming a very small filling factor, $f_\infty=0.02$, with a mass loss rate, $\dot{M}$ = 2.5\,10$^{-7}$\,M$_{\odot}$\,yr$^{-1}$. A larger filling factor, $f_\infty=0.1$, combined with an oxygen abundance lower by a factor of 4, also improves on the homogeneous wind model, but does not fit the observed profile as well as the model with the smaller filling factor. Larger volume filling factors may seem more reasonable or, at least, they are more in line with earlier studies of Wolf-Rayet clumped winds \citep{hamann98} and of SMC O stars \citep{bouret03}. However, other oxygen lines, like \mbox{O~{\sc iii}}$\lambda$\lb1150-1154 and \mbox{O~{\sc iv}}$\lambda$\lb1338-1343, are no longer fitted with the lower oxygen abundance. These lines are indeed sensitive to changes in the adopted oxygen abundance since they are unsaturated. There is no compelling reason for adopting an oxygen abundance lower by a factor of 4 relative to the solar value, especially since carbon is not depleted either. Carbon would indeed be expected to be the first to show the depletion typical of CNO-cycle processed material. We have therefore adopted the clumped model with the very small volume filling factor. The steep transition between the absorption and emission components indicates that clumps start forming at low velocities, $v_{\rm cl}\approx 30$\,km\,s$^{-1}$, just above the sonic point. At this point, we discovered that \mbox{N~{\sc iv}}\lb1718 behaves similarly to \mbox{O~{\sc v}}\lb1371, and is thus sensitive to wind clumping too. The fit to \mbox{N~{\sc iv}}\lb1718 is improved by using clumped wind models, especially at the transition between the absorption and emission components. The extended blue absorption is affected similarly to \mbox{O~{\sc v}}\lb1371. The nitrogen abundance has been kept at the value derived from photospheric lines. Finally, we stress that we have achieved a good match of both the \mbox{N~{\sc iv}}\ and the \mbox{O~{\sc v}}\ lines using the same low filling factor for the clumps, which thus provides convincing support for the highly clumped model. The profiles of other lines (\mbox{C~{\sc iv}}\ and \mbox{N~{\sc v}}) are not directly affected by clumping. \section{Discussion} \label{disc_sect} Table~\ref{TabRes} summarizes the results of our analysis.
Conservative estimates of the uncertainties are $\pm$5\% on $T_{\rm eff}$, $\pm$0.1\,dex on $\log g$, $\pm$2 to $\pm$5\,km\,s$^{-1}$\ on $\xi_{t}$\ (for HD~190429A\ and HD~96715, respectively), and $\pm$0.1 to $\pm$0.2\,dex on chemical abundances. Abundances are listed as number densities and refer to values adopted in clumped wind models (lower abundances might be preferred with homogeneous wind models in a few instances; see \S4). The mass loss rates are derived with an accuracy of 10-20\,\% from the fitting process, though this does not include systematic errors. We now discuss the implications of our results, and compare them to other observational and theoretical studies. \subsection{Evidence of clumping from the FUV spectrum} \label{discus_1} From the foregoing analysis, we conclude that \mbox{P~{\sc v}}$\lambda$\lb1118-1128 and \mbox{O~{\sc v}}\lb1371 provide the best diagnostics of clumping in the wind of O stars, supporting the seminal studies of \cite{crowther02}, \cite{hillier03} and \cite{bouret03}, and extending them to the case of Galactic stars. In attempting to reproduce the \mbox{O~{\sc v}}\lb1371 line profile, Bouret et~al. explored a number of alternative explanations besides clumping. They emphasized that the key point is to predict the correct \mbox{O~{\sc v}}/\mbox{O~{\sc iv}}\ ionization structure in the vicinity of the sonic point. To alter ionization, they considered X-ray emission from shocks in the wind ($L_X/L_{\rm bol}=2\,10^{-7}$), adiabatic cooling, larger model atoms, and alternate recombination rates \citep{nahar99}. All these changes resulted in essentially no difference in the model spectrum, and wind clumping remained as the only viable explanation. \begin{table} \centering \caption{Stellar parameters, wind properties, and chemical abundances for HD~190429A\ and HD~96715. The listed abundances correspond to values derived from clumped models (see \S4.1).} \label{TabRes} \begin{tabular}{lcc} \hline \hline Star & HD~190429A & HD~96715 \\ \hline Spectral Type & O4 If+ & O4 V((f)) \\ $T_{\rm eff}$\ [K] & 39000 & 43500 \\ $\log g$ (cgs) & 3.6 & 4.0 \\ $L$ [$L_\odot$] & 7.9\,$\times\,10^5$ & 4.6\,$\times\,10^5$ \\ $\xi_{\rm t}$ [km\,s$^{-1}$]& 15 & 15 \\ $v \sin i$ [km\,s$^{-1}$] & 160 & 80 \\ \hline Homogeneous winds & & \\ \hline $\dot{M}$\ [M$_{\odot}$\,yr$^{-1}$] & 6.0\,$\times\,10^{-6}$ & 1.8\,$\times\,10^{-6}$ \\ $v_{\infty}$\ [km\,s$^{-1}$] & 2300 & 3000 \\ $\beta$ & 0.8 & 1.0 \\ \hline Clumped winds & & \\ \hline $\dot{M}$\ [M$_{\odot}$\,yr$^{-1}$] & 1.8\,$\times\,10^{-6}$ & 2.5\,$\times\,10^{-7}$ \\ $f_\infty$ & 0.04 & 0.02 \\ \hline Surface abundances$^{\mathrm{a}}$ & & \\ \hline $y$ (He/H) & 0.2 & 0.09 \\ C/C$_{\odot}$ & 0.05 & 0.5 \\ N/N$_{\odot}$ & 4.0 & 4.0 \\ O/O$_{\odot}$ & 0.1 & 0.9 \\ Si/Si$_{\odot}$ & 1.0 & 1.0 \\ P/P$_{\odot}$ & 0.5 & 1.0 \\ S/S$_{\odot}$ & 0.9 & 1.0 \\ Fe/Fe$_{\odot}$ & 1.0 & 1.0 \\ \hline N/C$^{\mathrm{b}}$ & 80 & 8 \\ \hline \end{tabular} \begin{list}{}{} \item[$^{\mathrm{a}}$] Solar abundances from \citet{grevesse98}. \item[$^{\mathrm{b}}$] Abundance ratio relative to the solar ratio, N/C $\approx$ 0.25. \end{list} \end{table} The case of \mbox{P~{\sc v}}\ is different. \cite{crowther02} and \cite{hillier03} showed that an effect on the line profile similar to clumping could be obtained by drastically reducing the phosphorus abundance, because \mbox{P~{\sc v}}\ reaches its maximum and is the dominant ion throughout the wind of late O supergiants.
\cite{hillier03} achieved a good match to the observed \mbox{P~{\sc v}}\ doublet in the SMC O7~Iaf+ supergiant AV~83 with homogeneous wind models using a low abundance, P/P$_{\odot} = 0.04$ (compared to the adopted overall SMC metallicity, Z/Z$_{\odot} = 0.2$). While remaining uncertainties about the phosphorus abundance baseline and the chemical evolution of the Magellanic Clouds leave some leeway before a definitive conclusion can be drawn, we note that \cite{pauldrach94, pauldrach01} similarly claimed, based on their analysis of \emph{Copernicus}\ data, that two Galactic supergiants, namely $\zeta$~Puppis\ (O4~I(n)f) and $\alpha$~Cam (O9.5~Ia), must have a very low phosphorus abundance. In the case of Galactic stars, a low phosphorus abundance seems a much less likely explanation of the weak \mbox{P~{\sc v}}\ resonance lines, and we thus speculate that Pauldrach et al.'s results actually indicate that $\zeta$~Puppis\ and $\alpha$~Cam also possess a highly clumped wind. Our study shows that clumping {\em combined} with a slightly reduced phosphorus abundance is required to match the \mbox{P~{\sc v}}\ resonance doublet in HD~190429A, while still fitting the rest of the spectrum well. \cite{hillier03} also achieved an excellent match of the full spectrum (from FUV to the optical) of AV~83 using $f_\infty$ = 0.1 and P/P$_{\odot}$ = 0.08, but the evidence for a lower phosphorus abundance remains relatively weak because of the possibility of adopting a lower clump filling factor coupled with a higher abundance. These results, therefore, only hint at a depletion of P in O supergiants with respect to the other metal abundances. \begin{figure}[] \hspace{-0.6cm} \rotatebox{0}{\includegraphics[width=9.8cm]{2531f5.eps}} \caption[12cm]{Phosphorus ionization fractions in clumped (black lines) and homogeneous (grey lines) wind models of HD~190429A. The \mbox{P~{\sc iv}}\ fraction remains smaller than 1\% throughout the wind (max. at $v(r) \approx 1$\,km\,s$^{-1}$). Notice that the velocity scale is shifted toward higher velocities for the homogeneous wind model because of a higher mass loss rate (the maximum of \mbox{P~{\sc v}}\ occurs at similar densities in the two models), and that the adopted phosphorus abundances are different in the two models (P/P$_{\odot}$ = 0.1 and 0.5 for the homogeneous and clumped models, respectively; see \S4.1).} \label{figfive} \end{figure} In addition to \mbox{O~{\sc v}}\ and \mbox{P~{\sc v}}, we finally found that \mbox{N~{\sc iv}}\lb1718 is also an indicator of wind clumping. The \mbox{N~{\sc iv}}\ line profile exhibits the same behavior as \mbox{O~{\sc v}}\lb1371. The two lines correspond to the same transition, 2s($^2$S)2p~$^1$P$^0$ -- 2p$^2$~$^1$D, in an isoelectronic sequence. The \mbox{O~{\sc v}}\ and the \mbox{N~{\sc iv}}\ lines can be fitted with a clumped wind model having the same clumping parameters. For both stars, we found small volume filling factors, {$f_\infty = 0.04$} and 0.02, for HD~190429A\ and HD~96715, respectively. Although these small values may seem problematic, they are supported by other recent results about wind clumping. In the analysis of 25 O stars in the LMC, \cite{massa03} found that the \mbox{P~{\sc v}}\ ionization fraction never exceeds 0.20, although it is expected to be the dominant stage in some of the stars. They argued that this implies either that the calculated mass loss rates or the phosphorus abundances are too large, or that the winds are strongly clumped ($f_\infty < 0.05$).
The first evidence of wind clumping was found in WR stars, for which small filling factors ($f_\infty \approx 0.1$) were also derived \citep{hamann98, hillier99, crowther02d}. \cite{kurosawa02} and \cite{schild04} derived even lower values, $f_\infty \approx 0.05 - 0.075$, for the WR stars V444~Cyg and $\gamma^2$~Vel. Hydrodynamical 1-D calculations of line-driven wind instabilities support low volume filling factors \citep[$f_\infty < 0.1$;][]{runacres02}, but initial results of a restricted 2-D approach indicate larger values \citep[$f_\infty \approx 0.2$;][]{dessart03}. \subsection{Effect of clumping on wind ionization} \label{discus_2} We now examine the reasons why these line profiles are modified by clumping. Essentially, the presence of optically thick clumps separated by transparent voids has two major consequences: {\it (i)} the wind over-densities strengthen the emission of density-sensitive lines; {\it (ii)} the number of recombinations increases because of the higher density in the clumps, thus reducing ionization. The competition between these two effects may either increase or decrease emission. Because of the radial dependence of the volume filling factor, lines of different strength behave in different ways. This is shown in Fig.~\ref{figtwo} and Fig.~\ref{figfour}, where profiles for clumped and homogeneous models are compared for HD~190429A\ and HD~96715. \begin{figure}[] \hspace{-0.6cm} \rotatebox{0}{\includegraphics[width=9.8cm]{2531f6.eps}} \caption[12cm]{Oxygen ionization fractions in clumped (black lines) and homogeneous (grey lines) wind models of HD~96715. Notice that the velocity scale is shifted toward higher velocities for the homogeneous wind model because of a higher mass loss rate.} \label{figsix} \end{figure} In O7 supergiants, \mbox{P~{\sc v}}\ is the dominant ionization stage of phosphorus throughout the wind \citep{hillier03}. However, since HD~190429A\ is a hotter supergiant, \mbox{P~{\sc v}}\ (in the photosphere) and \mbox{P~{\sc vi}}\ (in the wind) are the main ionization stages. The \mbox{P~{\sc iv}}\ fraction always remains small (less than 1\%). Figure~\ref{figfive} illustrates the effect of clumps in the stellar wind: we find a lower ionization, \mbox{P~{\sc vi}}/\mbox{P~{\sc v}}, compared to the homogeneous wind model, demonstrating the increased recombination from \mbox{P~{\sc vi}}\ to \mbox{P~{\sc v}}\ in the clumps. In HD~190429A, the \mbox{P~{\sc v}}$\lambda$\lb1118-1128 resonance lines reflect the factor-of-2 difference between the $f$-values of the two lines, indicating that these lines are unsaturated. The interpretation of the strength of these lines is, however, not straightforward: it depends not only on the phosphorus density (the product of the mass loss rate and the phosphorus abundance), but also on the details of the wind ionization. The dependence on the clumping parameter, $f_\infty$, is indirect, through the requirement that the ratio $\dot{M}$/$\sqrt {f_\infty}$ remain constant to keep a good match to other lines, such as \mbox{H$\alpha$}\ and \mbox{C~{\sc iv}}\lb1550. A low phosphorus abundance (P/P$_\odot=0.1$) must be adopted with a homogeneous wind model, thus implying that significant clumping (low $f_\infty$, hence low $\dot{M}$) is necessary to raise the derived phosphorus abundance to a near-solar value. \cite{massa03} followed the same argument to explain the systematically low \mbox{P~{\sc v}}\ ionization fractions ($<$20\%) that they derived for 25 O stars in the LMC.
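The degeneracy underlying this argument can be made explicit with a simple scaling relation. In the Sobolev approximation (quoted here only as an illustration of the scaling, not as the method used by {\sc CMFGEN}), the radial optical depth of an unsaturated resonance line of an ion $i$ of element $E$ scales as
\begin{equation}
\tau(r) \;\propto\; \frac{\dot{M}\, A_{E}\, q_{i}(r)}{r^{2}\, v(r)\, ({\rm d}v/{\rm d}r)} ,
\end{equation}
where $A_{E}$ is the elemental abundance and $q_{i}$ the ionization fraction. The line strength thus constrains only the product $\dot{M} A_{E} q_{i}$, which is why a low phosphorus abundance, a low mass loss rate, or a reduced \mbox{P~{\sc v}}\ fraction (via clumping-enhanced recombination) can mimic one another.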
We stress that our clumped wind model matches the {\em two} \mbox{P~{\sc v}}\ resonance lines very well, providing strong evidence that the wind ionization is correct and that there is no particular issue, such as covering factors, with our treatment of clumps. The oxygen lines, however, respond in a different way to clumping. They are affected by clumping via a lower ionization in the wind -- see Fig.~\ref{figsix} for HD~96715. Notice that the ionization is in fact not lower in the photosphere, contrary to what Fig.~\ref{figsix} might suggest: there is a shift in velocity space because of the higher mass loss rate of the homogeneous wind model. At similar photospheric densities, the ionization is the same. On the other hand, \mbox{O~{\sc iv}}\ is the dominant ion throughout the wind of HD~190429A. The homogeneous model predicts an increase of \mbox{O~{\sc v}}\ at moderate velocities (over 30\% at a few hundred km\,s$^{-1}$), but increased recombination due to the presence of clumps keeps the \mbox{O~{\sc v}}\ fraction below a few percent in the clumped wind. The lower \mbox{O~{\sc v}}\ fraction in the clumped wind models weakens the blue-shifted line absorption substantially. The wind ionization is sensitive to the clumping parameter, ${f_\infty}$, and \mbox{O~{\sc v}}\lb1371 is the best indicator of clumping in the wind of HD~96715, similarly to the SMC O4 stars \citep{bouret03}. Besides the \mbox{O~{\sc v}}\ and \mbox{P~{\sc v}}\ lines, we found that \mbox{N~{\sc iv}}\lb1718 is also sensitive to clumping. It is of special interest because clumping decreases \mbox{N~{\sc v}}/\mbox{N~{\sc iv}}\ ionization in a way quite similar to \mbox{O~{\sc v}}/\mbox{O~{\sc iv}}. \mbox{N~{\sc iv}}\ is the dominant ion throughout the wind of HD~190429A, while it is the dominant ion only in the photosphere of HD~96715\ (\mbox{N~{\sc v}}\ is the dominant stage in the wind). \mbox{N~{\sc iv}}\lb1718 responds to clumping like \mbox{O~{\sc v}}\lb1371. The absorption component of \mbox{N~{\sc iv}}\lb1718 is globally weaker in the clumped model, with the exception of velocities between $\approx$ 15\,km\,s$^{-1}$\ and 650\,km\,s$^{-1}$\ (Fig.~\ref{figfour}). The direct mapping of the velocity law on the absorption component implies that these velocities correspond to the region where \mbox{N~{\sc v}}/\mbox{N~{\sc iv}}\ ionization is lower in the clumped model. In the outer wind, at higher velocities, \mbox{N~{\sc v}}\ recombines into \mbox{N~{\sc iv}}\ in both the homogeneous and clumped models of HD~96715\ because of lower temperatures. The predicted profile at these high velocities ($v > 650$\,km\,s$^{-1}$) then reflects the higher mass loss rate adopted for the homogeneous wind model. \begin{figure}[] \hspace{-0.6cm} \rotatebox{0}{\includegraphics[width=9.8cm]{2531f7.eps}} \caption[12cm]{Effect of X-ray wind emission on the predicted \mbox{N~{\sc v}}\lb1240 line profile of clumped wind models of HD~190429A\ (full line: $\log L_{\rm X}/L_{\rm bol}$\ $= -7.1$; dotted line: no X-rays). All the other lines sensitive to wind clumping and shown in Fig.~\ref{figtwo} are not affected by X-ray emission.} \label{figseven} \end{figure} Several other lines are displayed in Fig.~\ref{figtwo}, showing smaller changes because of clumping. For instance, \mbox{S~{\sc v}}\lb1502 shows a slightly stronger blueshifted wing because of increased recombination from \mbox{S~{\sc vi}}\ to \mbox{S~{\sc v}}. On the other hand, the \mbox{C~{\sc iv}}\ resonance doublet is only marginally sensitive to clumping because these lines are saturated.
X-ray emission in the wind also changes the wind ionization structure, increasing the populations of ``superions'' like \mbox{N~{\sc v}}, \mbox{O~{\sc vi}}, and \mbox{S~{\sc vi}}. We found that the profiles of \mbox{N~{\sc v}}\lb1240 and \mbox{O~{\sc vi}}\lb1036 are substantially changed by X-ray emission (see Fig.~\ref{figseven}; the \mbox{O~{\sc vi}}\ lines are not shown because of strong interstellar H$_2$ absorption). All other FUV lines shown in Fig.~\ref{figtwo}, especially those sensitive to wind clumping, remain unaffected. In the outer wind of HD~190429A\ ($v(r) > 500$\,km\,s$^{-1}$), the \mbox{N~{\sc v}}\ fraction is increased by the X-ray wind-shock emission by factors ranging from a few up to several orders of magnitude at the terminal velocity, compared to predictions neglecting X-rays. In the inner wind ($v(r) < 500$\,km\,s$^{-1}$), the \mbox{N~{\sc v}}\ increase is more modest ($+$20\% compared to predictions without X-rays). The \mbox{N~{\sc v}}\ fraction reaches about 10\% in the inner wind and always remains larger than 1\% in the outer wind, while it precipitously drops in the model without X-rays. This change in the wind ionization structure results in the marked line profile change displayed in Fig.~\ref{figseven}. Finally, we need to stress here that clumping is not a simple numerical trick introduced to correct the ionization balance of a single species (e.g., \mbox{O~{\sc v}}/\mbox{O~{\sc iv}}). We showed here that our clumped models change the ionization balance of different species, with different ionization potentials, in a consistent way, and affect the corresponding lines differently, as they form in different parts of the wind and respond differently to the nature of the clumping. We therefore conclude that our description of wind clumping, albeit simple, must be basically correct because of our ability to fit lines of different ions and species using the {\em same} clumping parameters. The clumped wind models therefore provide a sound basis for determining improved mass loss rates. \subsection{Clumping, \mbox{H$\alpha$}, and the mass loss rate} \label{discus_3} We now discuss the implications of wind clumping for the derived mass loss rates, comparing our results based on the analysis of the FUV spectrum with determinations based on \mbox{H$\alpha$}\ and thermal radio emission. Such a comparison is essential because the quantity that is actually measured from the FUV lines is the product of the mass loss rate and the appropriate ionization fraction, and we just argued that clumping results in changing the mean ionization in the wind. We believe that {\sc CMFGEN}\ models provide a reliable description of the ionization structure of O star winds thanks to the detailed, unified treatment of NLTE metal line blanketing in the photosphere and in the wind. Yet, a comparison with results derived from \mbox{H$\alpha$}\ is crucial, since the latter are much less sensitive to many factors influencing the wind ionization. Moreover, the FUV lines behave linearly with the wind density, while \mbox{H$\alpha$}\ and the radio emission have a quadratic dependence. \mbox{H$\alpha$}\ is a recombination line, and is sensitive to the densest regions of the wind. Therefore, this line is expected to constrain clumping deep in the wind. On the other hand, thermal emission in the radio probes the outer regions of the wind.
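The effect of clumping on these $\rho^{2}$-diagnostics can be quantified within the volume filling factor description adopted in our models (optically thick clumps separated by transparent voids); the short derivation below is a standard consequence of that picture. If all the wind material is concentrated in clumps occupying a fraction $f$ of the volume, the density inside the clumps is $\rho_{\rm cl} = \langle\rho\rangle / f$, and the mean of the squared density becomes
\begin{equation}
\langle \rho^{2} \rangle \;=\; f\, \rho_{\rm cl}^{2} \;=\; \frac{\langle\rho\rangle^{2}}{f} \;>\; \langle\rho\rangle^{2} .
\end{equation}
A smooth-wind analysis of a $\rho^{2}$-diagnostic like \mbox{H$\alpha$}\ therefore overestimates the mass loss rate by a factor $1/\sqrt{f}$; formally, $f_{\infty} = 0.04$ corresponds to a factor of 5.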
\cite{kudritzki00} argued that, unlike the submillimeter/radio emission, \mbox{H$\alpha$}\ should not be sensitive to wind clumping, because they expected clumps to form far out in the wind. This is at odds with our finding that clumps already form close to the sonic point. \begin{figure}[] \centering \rotatebox{90}{\includegraphics[width=6cm]{2531f8.eps}} \caption[12cm]{Predicted \mbox{H$\alpha$}\ profiles for HD~190429A, calculated with clumped (grey line) and homogeneous (black line) wind models. The corresponding parameters are listed in Table~4.} \label{figeight} \end{figure} Although we have no \mbox{H$\alpha$}\ observations in hand, we can compare our predictions for homogeneous and clumped wind models with published results for HD~190429A. We display in Fig.~\ref{figeight} the predicted \mbox{H$\alpha$}\ profiles calculated with the parameters derived from our analysis (see Table~\ref{TabRes}). We found only small differences between the profiles predicted by the homogeneous and clumped wind models, indicating that \mbox{H$\alpha$}\ observations alone would not exclude one or the other model (but different mass loss rates would be inferred). Our predicted line profiles are consistent with Walborn~\& Howarth's (2000; their Fig.~2) published profile. Table~\ref{TabWind} lists the mass loss rates derived from \mbox{H$\alpha$}\ and radio observations and compares them to our results. The mass loss rates (\mbox{H$\alpha$}\ and radio) depend on the adopted distance, proportionally to $d^{3/2}$ \citep[e.g.,][]{scuderi98}. The values listed in Table~\ref{TabWind} have been corrected for this dependence. The mass loss rate for a homogeneous wind derived from the FUV lines is consistent with earlier analyses of \mbox{H$\alpha$}\ \citep{leitherer88, lamers93}, but \cite{scuderi98} and \cite{markova04} derived a value twice as large. Unified comoving-frame models were used by Markova et~al. and in our study, while the other analyses used the Sobolev approximation. \cite{scuderi98} also report an increase by a factor of 2 of the \mbox{H$\alpha$}\ equivalent width between 1988 and 1991. They also obtained a single observation of the free-free emission at 8.45\,GHz. It is therefore difficult to reach a firm conclusion from this comparison, since intrinsic stellar variability is most likely at the origin of some of the differences. The main conclusions thus remain that {\it (i)} the profile from the clumped wind model would match the observed \mbox{H$\alpha$}\ profiles as well as the predicted profile from the homogeneous wind, and {\it (ii)} the mass loss rate derived from the clumped wind model is a factor of 3 lower. \begin{figure}[] \centering \rotatebox{90}{\includegraphics[width=6cm]{2531f9.eps}} \caption[12cm]{Predicted \mbox{H$\alpha$}\ profiles for HD~96715, calculated with clumped (grey line) and homogeneous (black line) wind models. The corresponding parameters are listed in Table~4.} \label{fignine} \end{figure} Finally, hydrogen infrared recombination lines such as \mbox{Br$\alpha$}\ and \mbox{Br$\gamma$}\ may be strongly affected by clumping in the wind, because of their sensitivity to the square of the local density. As pointed out by \cite{lenorzer04}, clumping results in increased recombination rates inside the clumps ($\langle \rho^2 \rangle > \langle \rho \rangle^2$, assuming an infinite contrast of the clumps with the voids).
The presence of clumps deep in the wind would thus explain the present discrepancies between observed and theoretical profiles (and EWs) of lines such as \mbox{Br$\gamma$}\ \citep{lenorzer04}. Indeed, clumping close to the photosphere (where \mbox{Br$\gamma$}\ forms) would increase the collisional excitation rates, driving the excitation ratios closer to LTE. These recombination lines would thus exhibit (pure) emission profiles rather than P~Cygni profiles \citep[cf.][]{puls96}. An interesting point noted by \cite{lenorzer04} is that their predicted \mbox{Br$\alpha$}\ profiles agree fairly well with observations. This suggests that the degree of clumping might not be constant throughout the wind, but rather peak and then decrease further out in the wind. The location of the strongest clumping (expected to depend on the maximum line strength according to \cite{runacres02}) might then be in the region where \mbox{Br$\gamma$}\ forms. \mbox{Br$\alpha$}, which forms over a larger volume (because of its larger oscillator strength), would then be less affected by clumping. \begin{table} \centering \caption{Mass loss rates and modified wind momenta measured for HD~190429A\ and HD~96715.} \label{TabWind} \begin{tabular}{lcc} \hline \hline Star & HD~190429A & HD~96715 \\ \hline \multicolumn{3}{c}{Mass Loss Rates [$10^{-6}$\,M$_{\odot}$\,yr$^{-1}$]} \\ \multicolumn{3}{c}{This study (far-UV)} \\ Clumped wind & 1.8$\pm$0.3 & 0.25$\pm$0.05 \\ Homogeneous wind & 6.0 & 1.8 \\ \multicolumn{3}{c}{H$\alpha$} \\ \cite{leitherer88} & 9.8 & \\ \cite{lamers93} & 5.4$^{+3.1}_{-2.3}$ & \\ \cite{scuderi98} & 11.8$\pm$1.8 & \\ \cite{markova04} & 14.2 & \\ \multicolumn{3}{c}{Radio} \\ \cite{scuderi98} & 7.2$\pm$0.9 & \\ \hline \multicolumn{3}{c}{Modified Wind Momentum} \\ \multicolumn{3}{c}{$\log$\,($\dot{M}$\,$v_\infty\,\sqrt{R_\star})$} \\ Clumped wind & 29.07 & 28.21 \\ Homogeneous wind & 29.66 & 29.07 \\ \cite{repolust04} & 29.54 & 29.11 \\ \cite{vink00} & 29.55 & 29.13 \\ \hline \end{tabular} \end{table} Our estimate of $\dot{M}$\ for HD~96715\ derived from the homogeneous wind model compares well to mass loss rates for stars of the same spectral type \citep{lamers93, repolust04}. \mbox{H$\alpha$}\ is always observed in absorption for these stars, which is predicted by both the clumped and the homogeneous wind models. Fig.~\ref{fignine} shows that \mbox{H$\alpha$}\ is somewhat filled in by wind emission for the homogeneous wind model, but it remains unlikely that either model could be excluded based on \mbox{H$\alpha$}\ observations. The clumped model yields a mass loss rate that is 7 times lower than the value derived from the homogeneous wind model. \subsection{The Wind-momentum Luminosity Relation} \label{discus_4} The presence of clumps deep in the wind of O stars has recently been invoked by \cite{puls03} and \cite{repolust04} to explain their findings of two distinct Wind-momentum Luminosity Relations (WLR) for stars with \mbox{H$\alpha$}\ in emission and in absorption, respectively. While the WLR for the latter objects agrees with the theoretical predictions of \cite{vink00}, it is clearly separated from the empirical WLR for emission-line objects by a significant offset. As shown in \cite{markova04} and \cite{repolust04}, this offset practically vanishes when the mass loss rates for the emission-line stars are reduced by a factor of 0.44.
Because clumped winds mimic winds with higher mass loss rates, these authors suggested that the higher empirical WLR for emission-line objects reflects the influence of clumps in the \mbox{H$\alpha$}\ line formation region. From our analysis, we conclude that mass loss rates from the homogeneous wind models are very similar to those predicted by the \cite{vink00} formula using the parameters derived for HD~190429A\ and HD~96715. Table~\ref{TabWind} lists Vink et al.'s predicted modified wind momenta together with the values derived from our analysis. As might be expected from the lower mass loss rates derived from the clumped wind models, the wind momenta calculated from these models are significantly lower than the theoretical predictions. Our results cast doubt on the tentative suggestion by \cite{repolust04} and \cite{markova04} to lower supergiant mass loss rates in order to reunify the empirical WLRs with the theoretical one. We have indeed shown that a correction for clumping is required even for dwarfs that are expected to show \mbox{H$\alpha$}\ in absorption (or partially filled in by wind emission). This, in turn, implies that the mass loss rates should be lowered, thus leaving a difference between the supergiant and dwarf WLRs. Moreover, \cite{repolust04} mention that the blue Balmer lines in stars with \mbox{H$\alpha$}\ emission show too much wind emission in their cores, pointing to too high \mbox{H$\alpha$}\ mass loss rates. Clumps at the base of the wind might account for this behaviour. We finally note that the existence of a relatively deep-seated clumped region is compatible with recent theoretical results \citep[e.g.,][and references therein]{feldmeier97, owocki99}. Although the stellar sample is limited, we emphasize that we have found evidence of wind clumping in all early O dwarfs that we have analyzed so far (see \cite{bouret03}). A unique WLR for supergiants and dwarfs might eventually be empirically derived with clumped wind models once larger stellar samples have been analyzed. We found, however, that wind clumping requires revising the mass loss rates down by a factor larger than the value of about 2.3 advocated by \cite{repolust04}. \section{Conclusions} \label{concl_sect} We have performed a quantitative analysis of the \emph{FUSE}\ and \emph{IUE}\ spectra of two Galactic O4-type stars, one supergiant (HD~190429A) and one dwarf (HD~96715), investigating the role of wind clumping and its resulting effect on stellar parameters, thus extending our recent studies of SMC O stars \citep{bouret03, hillier03}. Our analysis is based on the NLTE model atmosphere programs {\sc Tlusty}\ and {\sc CMFGEN}, which account for NLTE metal line blanketing and thus provide a detailed description of the photosphere and wind of O stars. We first determined the stellar and wind parameters using homogeneous wind models, but failed to reproduce several key lines, e.g. \mbox{O~{\sc v}}\lb1371, as did all previous analyses of O stars. Similarly to the case of MC stars \citep{crowther02, hillier03}, we had to adopt a very low phosphorus abundance in order to match the strength of the \mbox{P~{\sc v}}$\lambda$\lb1118-1128 resonance lines with homogeneous wind models. We then considered clumped wind models in order to test their ability to improve the fit to the observed spectra. The clumped wind models {\em consistently} improve the match to lines of different species, especially \mbox{P~{\sc v}}$\lambda$\lb1118-1128, \mbox{O~{\sc v}}\lb1371 and \mbox{N~{\sc iv}}\lb1718.
The fit to strong Fe photospheric lines (similarly to AV~83, \citealt{hillier03}) and to \mbox{Si~{\sc iv}}$\lambda$\lb1393-1402 (though still not excellent) is also improved in HD~190429A. In both stars, we need to adopt a highly clumped wind model, $f_\infty = 0.04$ (HD~190429A) and $f_\infty = 0.02$ (HD~96715), in order to match these lines. Based on measured phosphorus ionization fractions, \cite{massa03} argued that mass loss rates of O stars might be lower by a factor of 5, which is thus consistent with $f_\infty < 0.1$. In agreement with \cite{hillier03} and \cite{bouret03}, clumping needs to start deep in the wind, just above the sonic point, at velocities as low as $v_{\rm cl} \approx 30$\,km\,s$^{-1}$, to reproduce the steep transition between the emission and absorption components of clump-sensitive lines. This finding also supports the conjecture of \cite{repolust04} that their mass loss rates based on \mbox{H$\alpha$}\ were too high for supergiants due to clumping. We showed that the basic physical reason behind the better fits is the increased recombination in clumps that lowers the wind ionization. We argue that our clumped wind models accurately predict the ionization structure of O-type star winds because of their ability to consistently match lines of different species and ionization stages. Our results, therefore, provide a considerably more robust conclusion about the clumped nature of the wind of O stars than previous studies, and are in agreement with our earlier results on SMC O stars \citep{hillier03, bouret03}. The main consequence of wind clumping is that mass loss rates need to be revised down substantially, here by a factor of 3 (HD~190429A) to 7 (HD~96715). This factor is of the same order as the correction derived for the SMC O stars \citep{bouret03}. We show that such a drastic correction remains in good agreement with \mbox{H$\alpha$}\ observations. Most earlier studies of O stars ignored deviations from standard wind models, i.e., they assumed a globally stationary wind with a smooth density/velocity stratification, to determine the properties of stellar winds. It was argued that ``this standard analysis yields reliable average models of the stellar wind'' \citep{kudritzki00}, although it was recognized that such models are inherently incapable of reproducing the spectral signatures that suggest the existence of extensive wind structures. Although this argument may hold for deriving the wind velocity structure, we have now demonstrated that this is not the case for determining mass loss rates. The present work, together with our recent work on SMC O stars, establishes that clumping is likely a general property of O star winds. Accounting for clumping will lead to a systematic and significant downward revision of mass loss rates. Because mass loss is a crucial aspect of massive star evolution, a revision of mass loss rates accounting for the effect of clumping is urgently needed. We note in parallel that two analyses of low-luminosity O stars also revealed mass loss rates much lower than those predicted by the theoretical WLR \citep{bouret03, martins04}. Our study therefore calls for a fundamental revision in our understanding of mass loss and of O-type star winds. Finally, the surface abundances of HD~190429A\ are in agreement with its advanced evolutionary stage, in particular showing CNO-cycle processed material (nitrogen overabundance, carbon and oxygen depletion) at the stellar surface.
Similarly to SMC O dwarfs, we also find enhanced nitrogen at the surface of HD~96715, though the carbon depletion is much milder and oxygen remains close to the solar abundance. \begin{acknowledgements} We are grateful to Joachim Puls for a detailed and helpful referee's report, which contributed to improving our paper. All data used in this paper were extracted from the Multimission Archive at the Space Telescope Science Institute (MAST). STScI is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. Support for MAST for non-HST data is provided by the NASA Office of Space Science via grant NAG5-7584 and by other grants and contracts. J.-C. Bouret acknowledges CNES for financial support. T. Lanz is supported by a NASA ADP grant (NNG04GC81G); he is grateful for the hospitality and support of the Laboratoire d'Astrophysique in Marseille when this work was initiated. D.~J. Hillier gratefully acknowledges support from a NASA-LTSA grant (NAGW-3828). \end{acknowledgements} \bibliographystyle{aa}
\section{BACKGROUND} Self-driving technology is advancing rapidly --- albeit with significant challenges and limitations. This progress is due in large part to the success of GPU-driven deep learning algorithms. However, many competing architectures and techniques for deep learning are currently available and/or in development, and little scientific research has been published that explicitly assesses best practices and outcomes across different learning approaches and models. This is due, in part, to the lack of self-driving data and results available to the academic research and larger public communities. The cost of such real-world systems (e.g. road-ready full-size vehicles outfitted with broad arrays of sensors) can be prohibitive, making them off-limits to everyone but a few select large research institutions and, more commonly, private commercial ventures, which may be reluctant to share their results with the broader research community because of commercialization concerns. Smaller systems (i.e. systems that are not the size of an actual automobile), on the other hand, may suffer from hardware limitations, with onboard computers that are not capable of running state-of-the-art deep learning models. One of the most promising approaches to full autonomous performance is the use of so-called `end-to-end' learning models that are trained on sensor inputs, paired with human behavioral outputs, without the need for explicitly encoding intermediate representations. To date, only a few publications of which we are aware have used a deep neural network in an end-to-end fashion to control a real autonomous vehicle \cite{DBLP:journals/corr/BojarskiTDFFGJM16} \cite{bojarski2017explaining} \cite{yang2018end} \cite{xu2017end} \cite{sotoyolo}. However, these studies provide limited information regarding the models' training and performance on the road tests. Bojarski et al. \cite{DBLP:journals/corr/BojarskiTDFFGJM16} only report the results of a single trip on a real road without any description of basic features such as the miles traveled, nature of the roadway, conditions, training time, etc. Similarly, Yang et al. \cite{yang2018end} provide no quantitative results or sufficient description of the road test and tasks performed, and Xu et al. \cite{xu2017end} and Soto et al. \cite{sotoyolo} do not test their models in a vehicle at all. Furthermore, none of these provide any comparison across different model types and/or training protocols. Thus, these studies are of limited utility in establishing best practices for development. Other published results on autonomous driving are not focused on end-to-end supervised learning and/or not tested in actual vehicles, but are instead concerned with specific sub-problems of self-driving, such as navigating in sub-optimal weather conditions \cite{Lee2018}, pedestrian detection \cite{mujahed2018admissible} \cite{zhang2017citypersons}, traffic light/obstacle detection \cite{hane2015obstacle} \cite{ramos2017detecting}, mind wandering detection \cite{baldwin2017detecting}, and the classification and/or segmentation of traffic scenes \cite{agarwal2017real} \cite{kundu2016feature}.
These studies are generally performed on public datasets taken from dashcam videos such as the Udacity self-driving datasets \cite{ud} \cite{ud2}, the KITTI dataset \cite{Fritsch2013ITSC}, the more recently released SAIC \cite{yang2018end}, CityScapes \cite{cordts2016cityscapes}, BDD100k \cite{berkeley_selfdriving_dataset}, and Apollo \cite{apollo} datasets, or video games \cite{martinez2017beyond}. These approaches may lead to a potential ``deployment gap'' between a model's performance during training and validation --- what is essentially a traditional image classification task --- and its behavior in an embedded control system operating in the real world. In particular, once deployed, a self-driving model's behavior will also determine its \textit{inputs}, which may end up being poorly represented by the human-generated dataset used to train and validate the models. As a result, there has been a recent effort by some industry leaders and the United States Department of Transportation to create a rigorous protocol for testing a self-driving technology's competency in an actual automobile, as there has already been one incident in which a self-driving vehicle struck and killed a pedestrian in Arizona \cite{uberaccident}. One such testing protocol consists of a ``91-acre, closed course testing facility . . . set up like a mock city'' that includes everything from highways to suburban driveways and railroad crossings \cite{cerf2018comprehensive}. Of course, access to such resources is highly limited, and to date no systematic studies have been reported comparing the performance of different models in deployed driving tasks. Here we introduce the first (to our knowledge) systematic, real-world comparison of autonomous driving performance across multiple, contemporary deep neural networks and training data types. We use a simple, easily replicated platform, assembled from commercially available components (all hardware and software specs are described below; software is publicly available as a Docker repository). The setup consists of an off-the-shelf, remotely operated vehicle (Brookstone Rover 2.0), a GPU-equipped computer and an indoor foam-rubber racetrack. The vehicle communicates with the computer over wifi in order to send its camera images to the computer. The images are then run through a trained neural network in real time in order to output an action decision that the computer sends back to the vehicle over the wifi network. This setup allows us to test computing-intensive deep learning models without the need to ``onboard'' the GPU hardware. We used this platform to train seven different neural networks, across three image input classes, on data from multiple humans driving around the track. We then compared autonomous performance on the track under identical experimental conditions for each of the 21 (7 architectures $\times$ 3 image input classes) conditions. We report performance along multiple metrics including the percentage of successful loops (i.e. without crashing) and the average time in seconds needed to complete a loop. \section{METHODS} We compared the driving performance of multiple network architectures, which were chosen to reflect the diverse types of architectures employed in recent years as well as some older ones, in driving a remote-controlled vehicle around a track after being trained on human driving data.
The tested architectures included: 1) a three hidden layer fully-connected network, 2) a simple convolutional neural network (CNN) with two convolutional/pooling layers followed by two fully-connected layers, 3) AlexNet \cite{NIPS2012_4824}, 4) a slightly modified version of VGG-16 \cite{DBLP:journals/corr/SimonyanZ14a}, 5) Inception-V3 \cite{DBLP:journals/corr/SzegedyLJSRAEVR14}, 6) a version of the ResNet architecture which we refer to as ResNet-26 \cite{DBLP:journals/corr/HeZRS15}, and 7) a Long Short-Term Memory (LSTM) network \cite{hochreiter1997lstm}. The details of each network are described below in detail. Each network's architecture, as well as the training procedure, can be viewed at {\tt\small https://github.com/mpcrlab/AIRover}. Another goal of the current study was to determine what kinds of information are most useful in end-to-end training of an autonomous driving system. For example, how helpful is it to include color information, which involves three times as much input information as grayscale? To assess this, we included three input types used as training and test data for the different models: 1) single grayscale video frame, 2) single color video frame, and 3) the current grayscale video frame plus past grayscale video frames concatenated along the channel dimension (which we term `framestack') as input to each different network. The framestack approach provides a simple way of incorporating temporal information without the need for an architecture that is specifically designed to process sequential inputs (e.g. recurrent neural networks). These three input classes were chosen to determine whether spatial, color, or temporal information is more useful for such tasks, an important consideration when designing smaller, low-power systems that may not be able to utilize all three feature modalities. Note that the framestack approach that we use here provides a method for including temporal sequence information in a simpler manner than typical approaches, such as recurrent neural networks. This allowed us to test the role of temporal information using the same architectures as we used for the individual images. In addition, it introduces a novel, potentially simpler approach to incorporating temporal information in self-driving applications. \subsection{Experimental Setup} To test each network architecture in a self-driving task, we used a 3.56m $\times$ 2.34m L-shaped foam racetrack \cite{rcptracks} and a Brookstone Rover 2.0 \cite{brookstone} (Fig. \ref{fig:track}). Each $240 \times 320$ color video frame (Fig. \ref{fig:frame}) collected by the vehicle's single, built-in camera (which was set to collect 30 frames per second) was sent over wifi to a computer containing two GeForce GTX 1080 TI GPUs. \begin{figure}[ht] \begin{centering} \includegraphics[width=\linewidth]{rovertrack_1_.png} \caption{The track used to train and test each network. The vehicle was trained and tested on its ability to navigate the track successfully in the direction indicated by the white arrows. The four test positions are indicated by the red circles, and the vehicle is contained within the green box.} \label{fig:track} \end{centering} \end{figure} \begin{figure}[ht] \begin{center} \includegraphics[width=5cm]{frames2.png} \caption{A sample frame from the vehicle's camera taken approximately from its position in Fig. 1. The top frame was taken under the high-light condition, and the bottom was taken under the low-light condition.
As can be seen in these images, the vehicle's camera field of view was very narrow and could only capture the scene directly in front of it, which added difficulty to the task.} \label{fig:frame} \end{center} \end{figure} \subsection{Training Protocol} To create a supervised dataset on which to train each network, multiple humans drove the vehicle a single direction around the track (Fig. \ref{fig:track}) under variable lighting conditions (during one half all of the overhead room lights were on, and during the other half only one-third of the room's lights were on). We recorded each video frame along with the action --- left pivot, right pivot, forward, or backward --- the human performed at that frame. The dataset, which totaled approximately 250,000 frames and their respective labels, was composed of a validation dataset of $\sim7,000$ frames, which was taken from a completely separate run from those used in training, and a training dataset of $\sim$243,000 frames. The validation set was used to test the network every 100 training iterations. Each network was trained and tested on these same training and validation sets for 6,000 training iterations. The number of training iterations was chosen so that slower-learning networks would have sufficient time to learn while the iteration count was held constant across networks, as many networks converged well before this point but did not overfit. Each training iteration began with a random video frame and the subsequent 79 video frames (80 in total) from the training set. The training batch size was determined by finding the maximum number of examples that the most resource-intensive network could handle on our GPU hardware (essentially double the chosen training examples due to one of our augmentation methods) and using that number of examples to train each network. Once the batch was randomly chosen from the training set, each frame was cropped by removing the top 110 rows (making the images $130 \times 320 \times 3$), and further operations were performed depending on the image processing method being employed. These operations are described below: \subsubsection{Grayscale} For the video frames in the grayscale method, each frame was converted to grayscale, and instance normalization \cite{DBLP:journals/corr/UlyanovVL17} was subsequently applied to each individual frame. \subsubsection{Color} For the color class, instance normalization \cite{DBLP:journals/corr/UlyanovVL17} was used on each color frame. \subsubsection{Grayscale Framestack} Each video frame in this method began with the same operations as in the grayscale class. Each frame$_{t}$ in the batch was then paired with frame$_{t-5}$ and frame$_{t-15}$ along the channel dimension. The human action at frame$_{t}$ was used as the label for the framestack training example. The intervals were chosen empirically by trying many different values and observing how well the trained vehicle navigated the track. These intervals are likely dependent, at least to some extent, on the frame rate of the camera(s), as well as the top speed of the vehicle. There is some existing research in which temporal correlations in video data were exploited in a similar manner \cite{pan2016learning}. After performing the appropriate operations, each image's height and width were zero-padded with 30 pixels and randomly cropped as in \cite{NIPS2012_4824}.
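To make the framestack construction concrete, the following is a minimal NumPy sketch of the per-example preprocessing described above. The function and variable names are ours (illustrative, not taken from the released repository), the grayscale conversion is a simple channel average, and the padding/random-crop augmentation step is deliberately left out here, so the exact implementation may differ in detail.

\begin{verbatim}
import numpy as np

def to_gray(frame):
    # (H, W, 3) uint8 frame -> (H, W) float32, simple channel average.
    return frame.astype(np.float32).mean(axis=2)

def instance_norm(img, eps=1e-6):
    # Normalize a single image to zero mean and unit variance.
    return (img - img.mean()) / (img.std() + eps)

def framestack_example(frames, actions, t):
    """Build one grayscale-framestack training example.

    frames:  (N, 240, 320, 3) array of raw video frames.
    actions: (N,) array of recorded human action labels.
    t:       index of the current frame (t >= 15).

    Returns a (130, 320, 3) input stacking frames t, t-5, and t-15
    along the channel dimension, and the action label at time t.
    """
    stack = []
    for dt in (0, 5, 15):
        frame = frames[t - dt]
        frame = frame[110:, :, :]      # drop the top 110 rows -> 130 x 320
        stack.append(instance_norm(to_gray(frame)))
    x = np.stack(stack, axis=-1)       # (130, 320, 3)
    return x, actions[t]
\end{verbatim}

The zero-padding and random re-crop described above would then be applied to each stacked input before batching.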
After cropping, a copy of each image was created (essentially doubling the batch size), and white noise was added to each of the copies\footnote{We also tried to augment each data batch by flipping each image with respect to the vertical axis and changing the label accordingly, but this caused the vehicle's movement to be less continuous and its accuracy worse.}. The peak signal-to-noise ratio (PSNR) was computed over 500 random frames and their noise-augmented counterparts, and the average was 10.0dB. The batch was then sent to the neural network to continue the training iteration. Each network's weights were optimized using TensorFlow's Adam optimizer and a learning rate of 3e-5. \section{Architectures Tested} We tested seven different models, described below. (A description of each model's layer architecture is included in Table 2). For the CNNs besides ResNet-26, the convolutional filter weights were initialized with random values from a uniform distribution without scaling variance, as described in \cite{DBLP:journals/corr/Sussillo14}, and the weights in all fully-connected layers were initialized with random values taken from a truncated normal distribution as in \cite{gers1999learning}. The weights in the convolutional filters in ResNet-26 were initialized with random values such that the variance of the inputs would be constant, as in all other networks' convolution layers, but, instead of taking these values from a uniform distribution, they were taken from a truncated normal distribution as in \cite{DBLP:journals/corr/HeZR015}. A weight decay \cite{krogh1992simple} of 0.001 was employed in all convolution and fully-connected layers. All convolution layers utilized a rectified linear activation function \cite{dahl2013improving}, and all fully-connected layers employed a hyperbolic tangent activation function \cite{kalman1992tanh}, except for those at the end of VGG-16, which used rectified linear activation functions. Those networks with max pooling \cite{wersing2003learning}, with the exception of the 2-layer CNN, use overlapping pooling \cite{NIPS2012_4824} with a kernel size of $3\times3$ and a stride of 2. Dropout \cite{srivastava2014dropout} of fully-connected nodes, which is used to reduce overfitting, was utilized in all networks except ResNet-26, with dropout probabilities of 0.5 and 0.0 for training and testing, respectively (Inception-V3 used a dropout probability of 0.6 for training). Instead of using dropout to reduce overfitting of fully-connected layers, ResNet-26 employs global average pooling on the last layer of feature maps \cite{DBLP:journals/corr/LinCY13}, which reportedly helps the network's generalization ability. The output layer of every network consisted of four fully-connected nodes and a softmax activation function \cite{dunne1997pairing}. \subsection{Fully-Connected Network} Perhaps the most basic deep artificial neural network, the fully-connected network consists of the input layer, three hidden layers, and the output layer. The input layer contains 124,800 input nodes for color images ($130 \times 320 \times 3$). Each hidden layer contains 64 nodes, which are each connected to every node in the previous and subsequent layers, with a hyperbolic tangent activation function \cite{kalman1992tanh}, $\ell_2$ regularization \cite{DBLP:journals/corr/Laarhoven17b} to reduce overfitting and complexity, and weight decay of 0.001 \cite{krogh1992simple}.
Dropout \cite{srivastava2014dropout} is also applied after each hidden layer to decrease the chance of overfitting the training data. The first fully-connected network to be employed successfully in an autonomous vehicle was developed by Dean Pomerleau in 1989 as part of the ALVINN (Autonomous Land Vehicle in a Neural Network) project \cite{pomerleau1989alvinn}. \subsection{2-layer CNN} This architecture was chosen because it is perhaps one of the simplest convolutional networks and would, as a result, allow for a close comparison between a fully-connected architecture and a CNN architecture. $\ell_2$ regularization \cite{DBLP:journals/corr/Laarhoven17b} is used in both convolution layers. After each convolution layer, $2 \times 2$ max pooling \cite{riesenhuber1999hierarchical} \cite{wersing2003learning} with stride 2 and local response normalization \cite{NIPS2012_4824}, which encourages sparsity, were applied. \subsection{AlexNet} First published in 2012, AlexNet \cite{NIPS2012_4824} remains one of the most well-known and widely used deep neural networks to date, which is greatly due to its remarkable performance on the ImageNet Large Scale Visual Recognition Challenge in 2010 \cite{ILSVRC15}. Since its publication, AlexNet has been used on many tasks, including object detection \cite{DBLP:journals/corr/GirshickDDM13}, image segmentation \cite{DBLP:journals/corr/LongSD14}, and video classification \cite{KarpathyCVPR14}, to name a few. Following these applications and achievements of AlexNet, the computer vision and neural network communities were spurred to move from the engineering of features to the engineering of networks \cite{DBLP:journals/corr/XieGDTH16} and create deeper, more elaborate networks that could perform even better at such tasks. \subsection{VGG-16} Larger, more elaborate networks, however, often pose additional challenges due in part to the increased number of hyper-parameters, which must still be chosen relatively carefully at this time. For example, the stride size, filter size, and number of filters in a convolution layer all affect the performance of the network. The VGG architecture \cite{DBLP:journals/corr/SimonyanZ14a} attempts to address the issue of choosing different stride and filter sizes by using a stride of one and a filter size of $3\times3$ for all convolution layers. Thus, despite its greater depth, this style of architecture requires fewer hyper-parameter choices than its predecessor AlexNet by stacking ``building blocks of the same shape . . . which increases simplicity and reduces the chance of overadapting the architecture to a specific dataset'' \cite{DBLP:journals/corr/XieGDTH16}. The ability of VGG-nets to generalize to different tasks has been shown in many applications \cite{DBLP:journals/corr/DonahueJVHZTD13} \cite{DBLP:journals/corr/Girshick15} \cite{Pinheiro:2015:LSO:2969442.2969462} \cite{DBLP:journals/corr/XiongDHSSSYZ16}. \subsection{Inception-V3} In contrast to the VGG-style architectures, the Inception architecture \cite{DBLP:journals/corr/SzegedyLJSRAEVR14} \cite{DBLP:journals/corr/SzegedyIV16} \cite{DBLP:journals/corr/SzegedyVISW15} contains hand-crafted topologies with many varying hyper-parameters while still exhibiting low model complexity \cite{DBLP:journals/corr/XieGDTH16} and high performance on an array of tasks \cite{esteva2017dermatologist} \cite{7952132} \cite{gkioxari2016chained}.
These architectures, including Inception-V3, which we use here, all operate on the principle of splitting the feature map outputs of certain layers into multiple different streams of operations (represented by the dashed lines in Table 2) and subsequently merging their outputs together via concatenation. \subsection{ResNet-26} The ResNet architecture \cite{DBLP:journals/corr/HeZRS15} builds off of the splitting/merging strategy of the Inception architectures and the simple, block-template style of VGG nets. These networks are composed of ``residual blocks'', where a template of convolutions is repeated a set number of times, and after each repetition, the features that served as input to that specific repetition are added to the output of the repetition. This is possible because of the block-template structure employed in VGG nets, as the output of a residual block often has the same dimensions as the input of the block. When this is not the case (i.e. when the stride is greater than one or when increasing the number of filters), the input is downsampled via average pooling with a stride length of two and/or a linear transformation is used to increase the channel dimension of the input \cite{DBLP:journals/corr/HeZRS15}. After every convolutional layer, batch normalization \cite{DBLP:journals/corr/IoffeS15} and a rectified linear activation function \cite{dahl2013improving} are applied to the output (except when adding the identity of the previous block, in which case the activation function comes after the addition). The model complexity of ResNets, measured in FLOPs and number of parameters, is extremely low relative to other CNNs (Table 2), yet these networks are still able to perform well on and generalize to different tasks \cite{DBLP:journals/corr/XiongDHSSSYZ16} \cite{akbar2018determining} \cite{lu2018deep} \cite{rezende2017malicious} \cite{jung2017resnet} \cite{zhang2017combination}. \subsection{LSTM} Since their development in the mid-1990s in the context of language and writing processing, LSTMs have proven to be well suited to an array of problems that contain sequential data, as they are able to capture both long- and short-term dependencies in such data. They are also less susceptible to the problems encountered by simple recurrent networks \cite{hochreiter2001gradient}. As a result, they have been used in applications from handwriting classification \cite{doetsch2014fast} \cite{graves2009novel} to handwriting generation \cite{graves2013generating} and speech translation \cite{luong2014addressing}. Each node in these networks has four gates: the input, forget, and output gates, which use the sigmoid activation function as in \cite{karlik2011performance}, and the input modulation gate, which uses the hyperbolic tangent activation function \cite{kalman1992tanh}. These gates work in conjunction with each other to help regulate which information enters the cell state, which is able to hold long-term dependencies, and a hidden state, which captures the short-term dependencies. The typical input for such networks is an $m \times n$ matrix, where each row is a timestep in the data and each column is a dimension of those timesteps. Here, we treat each image as this matrix, as though the rows of the image are the timesteps with $320 \times 3$ dimensions for color and grayscale framestack images and 320 dimensions for grayscale images.
We do this by concatenating the input channels along the column dimension\footnote{It is worth noting that we initially attempted concatenation along the row dimension with poor results.}. The network we use here has two hidden LSTM layers each containing 500 nodes, where each layer is essentially composed of four fully-connected layers, one for each gate. Each node in the first of these hidden layers outputs a sequence of hidden states corresponding to the number of time-steps (image rows), whereas each node in the second hidden layer returns one output for the whole sequence it was given. \section{Testing Protocol} After each network completed training, it was used to control the vehicle autonomously at a constant speed around the track. To measure the performance of each network, the vehicle was placed at four different positions around the track (Fig. \ref{fig:track}) and driven autonomously for 10 trials at each position, totaling 40 test trials per network. Five of these ten trials at each position were performed under a high-light condition (all lights on; Fig. \ref{fig:frame}, top) and five under a low-light condition (one-third of the lights on; Fig. \ref{fig:frame}, bottom). Every time the testing position was changed (every 10 trials), the vehicle's batteries, which were new and unused at the start of this research, were replaced with fully-charged ones. A camera, which was fixed to the ceiling and faced down toward the track, was used to film each test run. Along with this video, the time of each trial was taken, and it was recorded whether the vehicle successfully completed a single lap or not. A trial was ended when one of four circumstances occurred: 1) the vehicle completed a lap and made it back to its starting position, 2) the vehicle turned around and went the wrong direction three times in the same trial (most models eventually turned back around and righted themselves when this occurred), 3) the vehicle hit the wall and/or became stuck, or 4) the vehicle became stuck in an oscillatory back-and-forth motion without making net progress on the track for 10 consecutive seconds. Using the protocol above, each network was tested in two separate testing phases. During the first testing phase, the same track shape and environment (i.e. room layout) that were used in the training data were also used in testing. In the second testing phase, each network was tested under the same protocol but only using the input image method that enabled it to obtain the best performance in the first testing phase. Furthermore, during this second testing phase, the objects in the room (which were in the vehicle's field-of-view while it was driving) were rearranged, the track was configured in an oval shape instead of the L-shape, four diverse objects (pictured in Fig. \ref{fig:obj}) which were not present in the training data were placed randomly on the track, and a different vehicle of the same make and model as the training vehicle was used. For each of these test trials, a random number generator was used to determine 1) the number of objects placed on the track, 2) how far along the lap each object should be placed, 3) where each object was positioned relative to the middle of the path, and 4) how much the object was rotated. All of these parameters regarding object placement were consistent across all networks tested.
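Although the exact sampling ranges were not critical to the comparison, the placement procedure can be sketched as follows; all ranges and field names here are illustrative assumptions, since the text above only specifies which quantities were drawn randomly.
\begin{verbatim}
import numpy as np

def sample_object_placements(seed, max_objects=4):
    # Draw a reproducible object layout for testing phase two.
    rng = np.random.default_rng(seed)
    n = int(rng.integers(1, max_objects + 1))   # 1) number of objects
    return [{
        "lap_fraction": rng.uniform(0, 1),      # 2) distance along lap
        "lateral_offset": rng.uniform(-1, 1),   # 3) offset from middle
        "rotation_deg": rng.uniform(0, 360),    # 4) rotation
    } for _ in range(n)]
\end{verbatim}
Conceptually, reusing the same seed for every network is what keeps the object placement consistent across models.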
\begin{figure}[ht] \begin{centering} \includegraphics[width=\linewidth]{test_objects.jpg} \caption{The four objects used in the second testing phase of this research.} \label{fig:obj} \end{centering} \end{figure} \subsection{Performance Analysis} The equation, \begin{equation} \textnormal{success rate} = \frac{\textnormal{\# of trials with lap completed}}{40} \end{equation} was used as the primary metric to determine how each network performed on this task. The number of inferences per second each network could perform on images from this dataset was calculated by having the network perform 1000 inferences, obtaining the total elapsed time, and dividing 1000 by that time. This procedure was performed five times for each network, and the average of these five trials is reported. This metric is potentially relevant to mobile/vehicular applications, where the ability of a network to perform a certain number of inferences per second is critical when moving at a high speed and/or in difficult conditions. \subsection{Path Analysis} To further explore each network's effect on the vehicle's path, we used the videos taken by the ceiling camera and employed an object-tracking algorithm to determine the location of the center of the vehicle during every trial in testing phase one\footnote{We did not film the second testing phase.}. These coordinates were used to overlay colored dots on top of an image of the track to indicate where the vehicle had traveled over its 40 trials. Using these coordinates, it was possible to quantify path similarities and differences across runs between and within networks. To compare the paths taken in two different trials, the coordinates of the two trials were paired by time-step and by starting location. The mean-squared distance between corresponding points was then averaged over all of the pairs of points in the two compared trials. If one trial ended before the other (i.e. the vehicle failed to complete the lap due to a crash, etc.), the longer trial was shortened to match the length of the shorter trial. Using this method, we compared each trial to every other trial and obtained a single number representing the average path differences across the compared models. \subsection{Hyper-Parameter Analysis} In order to determine which hyper-parameter(s) were most important in determining a network's success rate in each testing phase, a random forest regression model \cite{breiman2001random} was trained on a number of meta-variables surrounding each network with the goal of predicting the success rate in the respective testing phase. For each testing phase, we ran scikit-learn's \cite{scikit-learn} RandomForestRegressor 1000 times. Each time, the number of estimators and the maximum number of features to consider when looking for the best split were chosen randomly from the ranges [500, 5000] and [1, \textit{n}-1], respectively (where \textit{n} is the number of hyper-parameters input to the random forest model). All other parameters of the random forest were kept at the default values. After each forest was trained, the importance of each feature in predicting the output could be observed, which allowed us to determine which hyper-parameters played a large role in determining success in each testing phase. We derived each feature's average rated importance across the 1000 runs.
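A minimal sketch of this procedure, assuming scikit-learn's \texttt{RandomForestRegressor} API (the wrapper function and variable names are ours):
\begin{verbatim}
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def average_importance(X, y, n_runs=1000, seed=0):
    # X: (networks x meta-variables) matrix; y: success rates
    # in one testing phase.
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    total = np.zeros(n)
    for _ in range(n_runs):
        forest = RandomForestRegressor(
            n_estimators=int(rng.integers(500, 5001)),  # [500, 5000]
            max_features=int(rng.integers(1, n)),       # [1, n-1]
        )
        forest.fit(X, y)
        total += forest.feature_importances_
    return total / n_runs
\end{verbatim}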
Some of the input variables considered by the random forest model were the number of FLOPs and parameters, the number of hidden layers in a network, the mean validation loss over the last 1200 training iterations, and the validation loss at the start of training. \subsection{Network ``Bias''} One factor that influences generalization ability --- and a possible deployment gap --- is the ability of a network to avoid overfitting the training set. This is made more difficult when the frequencies of the labels (or actions) in the training set are very uneven (e.g. the forward action appeared much more often than any other in the training set). In order to determine whether certain networks overfit to the distribution of actions in the training set, and how they were affected by this, the bias weights of the output layer were examined and compared against the actual distribution of labels in the dataset. This method helped determine whether a given network was `biased' toward a specific action based on the training set and how this `bias' affected its performance in both testing phases. \subsection{Spatial Distribution of Attention} In order to gain further insight into the observed differences between tested models, we assessed which portions of the image each model deemed more important in order to make its behavioral decisions. To do so, we utilized a novel method loosely based on \cite{one-pixel} in which we systematically `flipped' the value of each pixel in the input images, one by one, and observed the corresponding difference in the model outputs compared to the unaltered image. This served to determine which pixels/regions of the image were more important in the network's classification. This method bears some similarity to other recently developed methods designed to make neural networks more interpretable --- often using methods such as deconvolutional layers \cite{zeiler2010deconvolutional} or layer-wise relevance propagation (LRP) \cite{bach2015pixel}. However, these other methods do not generalize well to all neural network architectures and their results are not always readily interpretable \cite{interpret_learning}. The method introduced here provides a simple and easily interpretable way of localizing the information in images across all models. The procedure consisted of five steps: 1) preprocess and perform inference on a given test image and record the output layer values and the action chosen, 2) loop through every pixel in the image and maximally flip its value (i.e. pixel values $\geq$ 128 were given the value 0 and those $<$ 128 were given the value 255), 3) perform preprocessing and inference on the image with the altered pixel, 4) calculate the MSE between the output layer values associated with the altered image and the unaltered image and record it along with the altered pixel's location, and 5) determine the action from the altered image's output and record whether changing that pixel caused the network to choose a different action. This procedure was performed over 50 images chosen randomly from a test dataset taken on the L-shaped track. To assess how easy it was to change the output action for a given network, we also calculated the average confidence value for the action chosen over 5000 images. Finally, we used the MSE for each pixel location in step 4 to create a heatmap in the image space that illustrates how important each pixel of the image was in changing the output.
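A minimal sketch of steps 1) through 5), assuming a Keras-style \texttt{model.predict} interface and a \texttt{preprocess} helper that applies the input-specific operations and adds a batch axis (these names are our own assumptions, not the original implementation):
\begin{verbatim}
import numpy as np

def pixel_flip_map(model, image, preprocess):
    # Step 1: record the unaltered output and action.
    base_out = model.predict(preprocess(image))[0]
    base_action = np.argmax(base_out)
    h, w = image.shape[:2]
    mse_map = np.zeros((h, w))
    changed = 0
    for y in range(h):          # Step 2: loop over pixels.
        for x in range(w):
            altered = image.copy()
            # Maximally flip: values >= 128 -> 0, values < 128 -> 255.
            altered[y, x] = np.where(altered[y, x] >= 128, 0, 255)
            out = model.predict(preprocess(altered))[0]  # Step 3.
            # Step 4: output-layer MSE for this pixel location.
            mse_map[y, x] = np.mean((out - base_out) ** 2)
            # Step 5: did the chosen action change?
            changed += int(np.argmax(out) != base_action)
    return mse_map, changed
\end{verbatim}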
\section{RESULTS} \subsection{Validation Performance} Fig. \ref{fig:val_loss} shows the validation loss of each model over the entire course of training for each of the three input types. For the single grayscale frame and single color frame conditions (top and middle), all of the models converged to moderate losses except for the fully-connected network, which yielded a loss that was approximately $2\times$ higher upon model convergence than all others. For the grayscale framestack input (bottom), the four contemporary CNNs show significantly reduced loss upon convergence compared with the three other models. \begin{figure} \begin{centering} \includegraphics[width=\linewidth]{val_loss_graphs.png} \caption{Each network's validation loss over training with single grayscale frame (top), single color frame (middle), and grayscale framestack (bottom) as input.} \label{fig:val_loss} \end{centering} \end{figure} \subsection{Success Rate} \subsubsection{Testing Phase One} Fig. \ref{fig:success_rates} (top) presents the success rate (i.e. the percentage of trials in which the lap was completed) across all of the tested architectures during testing phase one. Overall, the convolutional neural networks and LSTM vastly outperformed the fully-connected network. Within the contemporary CNNs, all of them achieved reasonably good success rates ($\sim$95\%) with at least one data type. However, only AlexNet, trained on single color video frames, achieved a perfect success rate over 40 trials. VGG-16 was found to be the most robust to the input class, as its success rate was equally high for all three input image types. Across the different data types, the color single frame yielded the best overall performance across models while the grayscale framestack lagged dramatically across most models. \subsubsection{Testing Phase Two} The success rates achieved by each network in testing phase two were much lower than those in phase one (Fig. \ref{fig:success_rates}, bottom). AlexNet exhibited the best success rate during this phase, completing 55\% of the 40 laps, followed by VGG-16 which completed 45\% of the 40 test laps (Fig. \ref{fig:success_rates}, bottom). ResNet-26 exhibited a disappointing performance during this phase, as it only completed 25\%, or 10, of its test laps. \begin{figure} \begin{centering} \includegraphics[width=\linewidth]{successrates_2.png} \caption{The success rate of each network during test phase one (top) and test phase two (bottom).} \label{fig:success_rates} \end{centering} \end{figure} In order to test for a possible `deployment gap' --- i.e., the extent to which offline training/validation does not predict real driving performance --- we calculated the mean validation loss for each model and input type for the last 1200 training iterations. Figures \ref{fig:val_succ} and \ref{fig:val_succ2} show the relationship between the validation loss and the respective success rate during testing phases one and two, respectively. As can be seen, many model/input types with similar validation losses (those between 0.4 and 0.45) demonstrated widely variable success rates in both testing phases, suggesting the presence of a deployment gap. For example, Inception-V3 trained on grayscale framestack input had the same validation loss as VGG-16 trained on the same input, but the former's success rate in testing phase one was 50\% and the latter's was 95\%. Furthermore, AlexNet trained on single color frames was the only network to achieve perfect success in testing phase one despite having one of the worst validation losses (Fig.
\ref{fig:val_succ}), and an array of network/input combinations obtained 95\% success or better while their validation losses showed significant variability (Inception-V3/color frame, VGG-16/color frame, VGG-16/grayscale frame, Inception-V3/grayscale frame, VGG-16/grayscale framestack, ResNet-26/grayscale frame, 2-layer CNN/color frame, and AlexNet/grayscale frame; Fig. \ref{fig:val_succ}). The most dramatic examples are AlexNet using single grayscale frames and Inception-V3 using single color frames. These two models showed validation losses that differed from each other by over 20\% but the same success rate in testing. Similar conclusions for testing phase two can be drawn from Fig. \ref{fig:val_succ2}, as many of the same models (i.e. 2-layer CNN, LSTM, ResNet-26, and AlexNet) performed very differently despite having a similar mean validation loss of $\sim$0.43. \begin{figure} \begin{centering} \includegraphics[width=\linewidth]{val_succ_chart.png} \caption{The success rate as a function of the mean validation loss over training iterations 4800 through 6000 (fully-connected network not shown). R\textsuperscript{2} = 0.46.} \label{fig:val_succ} \end{centering} \end{figure} \begin{figure} \begin{centering} \includegraphics[width=\linewidth]{phase2_val_success.png} \caption{Success rate of each network in testing phase two as a function of the respective mean validation loss over training iterations 4800 through 6000. R\textsuperscript{2} = 0.34.} \label{fig:val_succ2} \end{centering} \end{figure} \subsection{Path Analysis} In order to further compare driving performance across models, we used the video taken from the ceiling camera to track the specific paths taken by some of the networks highlighted in the section above over all test trials in testing phase one. These results also demonstrated the presence of a deployment gap in that similar training/validation did not always predict similar driving paths. For example, the path taken by Inception-V3 using grayscale framestack differed dramatically from that of VGG-16 using the same input, although they reached the same validation loss (Fig. \ref{fig:path_vis}, top). On the other hand, AlexNet using single grayscale frames and Inception-V3 using single color frames obtained very different mean validation losses but the same success rate, and their paths were very similar (Fig. \ref{fig:path_vis}, bottom). A plot of pairwise similarity between all the networks (collapsed across all trials and image types) is shown in Figure \ref{fig:path_diff_bar}. The upper portion of the figure (above the horizontal black line) shows each model's similarity to itself across different loops around the track while the lower portion shows comparisons between different networks. As may be seen, each model's path exhibited much more self-similarity than similarity to the other models' paths. VGG-16 had the most self-similar (i.e. consistent) path, while ResNet-26 had the least self-similar path between trials. Fig. \ref{fig:path_diff_scatter} shows the relationship between measures of path difference and success difference across all model pairs. As may be seen, there is a moderate positive trend such that models with similar paths had similar success rates. However, this trend does not characterize the data well. Instead, there appear to be four main `clusters', showing widely varying relationships, which we have outlined in the figure.
The solid ellipse in the bottom left (models with similar paths and driving performance) contains all pairwise comparisons between AlexNet, VGG-16, Inception-V3, and LSTM. These were the highest-performing networks in both testing phases. The dashed ellipse to its right (moderately similar paths and success rates) contains comparisons between the 2-layer CNN on one hand and AlexNet, VGG-16, Inception-V3, and LSTM on the other hand. The ellipse at the top (large differences in paths and success) contains comparisons between the fully-connected network on one hand and AlexNet, VGG-16, Inception-V3, and LSTM on the other. Finally, the circle in the lower right (very different paths but similar success rates) contains comparisons between ResNet-26 and every other network except the fully-connected. \begin{figure} \begin{centering} \includegraphics[width=\linewidth]{rover_paths.png} \caption{Each dot represents the center of the vehicle at that point on the track at some point during the 40 test trials in testing phase one. Top) VGG-16 trained on framestack (cyan) and Inception-V3 trained on framestack (red). Bottom) AlexNet trained on single grayscale frames (purple) and Inception-V3 trained on single color frames (green).} \label{fig:path_vis} \end{centering} \end{figure} \begin{figure} \begin{centering} \includegraphics[width=\linewidth]{path_comparison2.png} \caption{The difference in path around the L-shaped track in testing phase one between each network over all image conditions.} \label{fig:path_diff_bar} \end{centering} \end{figure} \begin{figure} \begin{centering} \includegraphics[width=\linewidth]{path_diff_vs_success_diff.png} \caption{Path difference across image types for all combinations of two networks. The solid ellipse in the bottom left contains all pairwise comparisons between AlexNet, VGG-16, Inception-V3, and LSTM. The ellipse to its right contains comparisons between the 2-layer CNN and AlexNet, VGG-16, Inception-V3, and LSTM. The ellipse at the top contains comparisons between the fully-connected network and AlexNet, VGG-16, Inception-V3, and LSTM. The circle in the lower right contains comparisons between ResNet-26 and every other network except the fully-connected.} \label{fig:path_diff_scatter} \end{centering} \end{figure} \subsection{Inference Rate} The fully-connected network was able to perform the most inferences s\textsuperscript{-1} by far (729.83 inferences s\textsuperscript{-1} for color images; Fig. \ref{fig:inf_rate}), while the LSTM performed just 18 inferences s\textsuperscript{-1} on images with three channels. Generally, the larger, more advanced networks exhibited lower inference rates than the smaller, more primitive ones (with the exception of ResNet-26 due to its relatively efficient architecture). \begin{figure} \begin{centering} \includegraphics[width=\linewidth]{inf_s.png} \caption{The number of inferences per second each network was able to perform on color images.} \label{fig:inf_rate} \end{centering} \end{figure} \subsection{Network ``Bias''} The bias weights of each network indicated that the networks that performed better were less `biased' toward a particular action(s) (Fig. \ref{fig:bias_weights}), as the bias weights of their output layer were relatively similar. Most high-performing networks had dissimilar weight distributions relative to the labels, but some high-performing networks also had relatively similar weight distributions to the labels (e.g. Inception-V3 using single grayscale frame inputs).
\begin{figure} \begin{centering} \includegraphics[width=\linewidth]{bias_weights.png} \caption{The bias weights of each network's output layer, and the actual distribution of the dataset's labels (top). The backward action was not included due to its very low frequency relative to the other actions.} \label{fig:bias_weights} \end{centering} \end{figure} \subsection{Hyper-Parameter Analysis} The levels of feature importance produced by the random forest analysis indicated that the validation loss was the most important feature in determining success in testing phase one, while the input image type was the second most important, followed by path self-similarity and the number of FLOPs (Fig. \ref{fig:feat_imp}). For testing phase two, the number of FLOPs emerged as the most important feature, while the maximum number of convolutional filters in the network and validation loss were second and third most important, respectively. \begin{figure} \begin{centering} \includegraphics[width=\linewidth]{feat_imp3.png} \caption{The importance of each hyper-parameter in predicting the success rate in testing phases one (blue) and two (red) as determined by the random forest.} \label{fig:feat_imp} \end{centering} \end{figure} \subsection{Spatial Distribution of Attention} Table 1 illustrates the results of the pixel-flipping analysis described above. Each metric contained within the table represents an average over 50 random test images. The action decisions of the two non-convolutional networks --- fully-connected and LSTM --- were completely unaffected by the flipped pixels (i.e. no pixel flip caused the network to change its action decision). Regarding the convolutional networks tested, the models that exhibited higher performance in both testing phases generally had more action decisions changed due to flipped pixels. Furthermore, the MSE between the output layers associated with the image containing the altered pixel and the unaltered image showed a similar trend, as the fully-connected and LSTM networks showed little difference. The convolutional networks, on average, exhibited much larger differences than the non-convolutional networks. Despite having more decisions altered when given altered images, the convolutional networks were also more confident in their decisions in general when given unaltered images. Fig. \ref{heatmap_examples} depicts representative examples of some of the heatmaps constructed for single images using the MSE values for each pixel when flipped. Although VGG-16 (Fig. \ref{heatmap_examples}, top left) and AlexNet (Fig. \ref{heatmap_examples}, top right) had the most action decisions affected by flipped pixels per image on average, these pixels tended to lie in a very confined region of the image. This was not true for the LSTM (middle left) or the 2-layer CNN (middle right), as the information they attended to was fairly distributed, although the 2-layer CNN was affected more by pixels that did not correspond to `useful' features of the scene (e.g. objects outside of the track or parts of the room's wall such as the rubber baseboard). \begin{table}[ht] \caption{Summary of Spatial Information Analysis} \centering \begin{tabular}{c c c c} \hline \hline \\ \textbf{Network} & \textbf{Num.
Altered Actions} & \textbf{Output MSE} & \textbf{Confidence} \\ \hline \\ FC & 0.00 & $4.08\times10^{-6}$ & 0.80 \\ 2-layer CNN & 32.24 & $4.00\times10^{-4}$ & 0.84 \\ AlexNet & 881.92 & $1.30\times10^{-3}$ & 0.91 \\ VGG-16 & 880.88 & $2.00\times10^{-3}$ & 0.92 \\ Inception-V3 & 445.96 & $6.30\times10^{-3}$ & 0.91 \\ ResNet-26 & 270.46 & $1.00\times10^{-3}$ & 0.85 \\ LSTM & 0.00 & $9.87\times10^{-6}$ & 0.86 \\ \hline \hline \end{tabular} \end{table} \begin{figure}[ht] \begin{centering} \includegraphics[width=\linewidth]{heat_maps.png} \caption{Representative heatmaps depicting how much each pixel, when maximally flipped, changed the output of the network. The three heatmaps in the left column were created using the same image where the desired action was forward and the networks displayed are VGG-16 (top), LSTM (middle), and Inception-V3 (bottom). The three in the right column were all created from an image which had a desired action of right, and the networks displayed are AlexNet (top), 2-layer CNN (middle), and ResNet-26 (bottom). Each individual heatmap's values were scaled between 0 and 255, so intensities cannot be compared between heatmaps here.} \label{heatmap_examples} \end{centering} \end{figure} \section{Discussion} The current study presents the first systematic assessment and comparison of multiple neural network models in an experimentally controlled driving task. Overall, we found that, with the exception of the fully-connected network, the most `primitive' among models, all networks achieved a reasonably good range of performance (80\% to 100\%) during testing phase one for both the color and grayscale input types and most performed very well ($\sim$95\%) on at least one input type. The sole exception among contemporary networks was ResNet-26, which barely broke 80\% on any input type. With the exception of VGG-16, the inclusion of framestack data rather than single frames did not improve performance but actually reduced it, sometimes dramatically depending on the network. The presence of good performance across most contemporary models, at least for some data types, indicates that these models possess the computational complexity needed to perform the driving task. However, there was also a high degree of variability across models for specific data types. Critically, this variability was not well predicted by the models' validation performance (particularly in testing phase two), which was uniformly good across all models (except fully-connected) for all input types. These data demonstrate the presence of a large deployment gap. Overall, the top performer among the tested group of models was AlexNet, using color images as an input, which outperformed every other network in both testing phase one and testing phase two. VGG-16 was the most robust to the input image type, as it achieved the same, high success rate across all three image types in testing phase one and had the second-best success rate in testing phase two. It also had a relatively small deployment gap, as its validation loss correlated well with its success rate in both testing phases. Inception-V3 using color images as input also had a relatively small deployment gap, as it had the lowest validation loss out of all networks and its success rates in both testing phases were comparably good. The single, color image yielded the best success rates when used as input to all networks except the LSTM.
The single, grayscale images consistently performed slightly worse, but still well, on average compared to the single color frames. The gray framestack, however, yielded the poorest and most inconsistent results out of the three, although many of the contemporary networks obtained decent success rates using it. The path analysis demonstrated that the route taken over the testing trials was much more similar within models than between models, and the better performing networks generally took a more consistent path across trials than the worse performing ones. Furthermore, these more successful models tended to converge on a relatively similar path around the track. In contrast, the fully-connected network and ResNet-26 both took very different paths from the other networks. In the case of the fully-connected network, this different path led to very poor performance, while in the case of ResNet-26, it led to moderately worse performance. This suggests that both models very quickly entered new `terrain' never encountered before by the human driver, but the latter was able to generalize more (mainly in test phase one) while the former was not. The performance of ResNet-26 is explored in greater detail below. The results of the pixel-flipping analysis illustrate that the networks that do not use convolutional layers, such as the fully-connected network and LSTM, are affected far less by flipped pixels than networks that do use convolutions. However, they also do not perform as well on the task. One explanation for this is that the fully-connected networks by nature look for global patterns in the input, whereas CNNs look for local patterns via convolutions, and, thus, a local perturbation in the input should not have as strong an effect on the performance of the former, as we report (Table 1). Knowing what specifically to look at in the image allows CNNs to be more efficient/better at image recognition tasks than networks without convolutions, but as a result, this also makes these networks less robust to certain types of noise (e.g. adversarial attacks). One of the most interesting and important results was evidence for the presence of a significant deployment gap --- that is, models that performed similarly well during validation showed highly variable performance during testing. At first glance, this may seem puzzling. If a model can generalize from training to validation, then why not from training to the images during deployment? We believe this deployment gap may be explained by the fact that each inference in a self-driving task is not causally isolated from future inferences, as it is in a traditional image recognition task. Instead, each inference made by the network leads to a behavioral choice, which, in turn, affects \textit{all subsequent inputs}. This means that a small initial difference in the path the vehicle takes can lead it on a novel `orbit' unlike any paths taken by a human driver. In turn, this means the resulting \textit{inputs} will be different than those present in the training or validation set (all of which were human-driver generated) and errors in generalization may accumulate, leading to a vicious circle of unfamiliar inputs and resulting actions. One factor that could affect a network's performance in response to novel inputs is the extent to which it has a bias to choose any particular action, which can be due to that action's frequency in the training set. If a network does have a strong bias, it is likely to choose this action most frequently.
This is evident in the `bias' analysis of each network as shown in Fig. \ref{fig:bias_weights}, where the bias weight of each output node was examined for every network. Although a few exceptions were present, the networks that performed better in both testing phases (e.g. AlexNet, VGG-16, LSTM) were less `biased' toward a particular action, which indicates that they were more easily able to adapt to a situation in which the action distribution was very different from the training set due to a previously untraveled orbit. In this sense, the preference for one action much more than the others, as indicated by the metrics used in the bias analysis, illustrates a form of overfitting, as the bias weights of the output layer are relied upon too heavily (possibly to the detriment of the rest of the weights in the network). Furthermore, these same networks, with the addition of Inception-V3, were the best-performing networks in both testing phases, and they all converged on a comparably similar path (Fig. \ref{fig:path_diff_scatter}). Therefore, they had to generalize less because they took a more consistent path, which was also taken by other good networks (Figures \ref{fig:path_diff_bar} and \ref{fig:path_diff_scatter}), but they were also able to generalize more because they were not biased too much toward a particular action. One surprising, and somewhat disappointing, result was the poor performance of most models on the gray framestack in testing phase one compared with the other two input types (Fig. \ref{fig:success_rates}, top), and in every case --- except for VGG-16, where it accrued a success rate equivalent to that of the other input types --- it performed worse than a single gray frame. This seems counter-intuitive because more information, in the form of past images of the same type, would be expected to allow a network to achieve equivalent or even better performance than single frames. This poor performance cannot be attributed to the increased size of the input images from one channel to three, because in most cases, a single color frame --- which also has three channels --- had the best success. We believe that this poor performance may be due to the fact that a stack of three frames may carry more complexity than a single frame or three color channels, leading to greater computational strain on the model. To test this, we calculated the Structural Similarity Index (SSIM) \cite{wang2004image} --- which is commonly used as a measure of image similarity --- between each channel in the color images and each channel in the framestack images respectively, averaged the three pairwise values, then took the average over 1000 random images; this procedure was repeated five times (a sketch of the per-image computation is given below). The average SSIM between color image channels over these five runs was 0.94, and the average SSIM between framestack channels was 0.68. Therefore, it is possible to conclude that the gray framestack images performed poorly relative to the other image types, and extremely poorly in some of the simpler networks, because the channels in this type of image are not as correlated as those in color images, adding complexity. We believe this is also made worse by the fact that it is impossible to drive exactly the same as in the training dataset (as detailed in the paragraph above), meaning the past frames will never be exactly the same as those the network was trained on, and the network will need to generalize even further to overcome this.
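A sketch of the per-image channel-similarity computation, assuming scikit-image's SSIM implementation (the helper function is ours):
\begin{verbatim}
import numpy as np
from skimage.metrics import structural_similarity as ssim

def mean_channel_ssim(img):
    # Average pairwise SSIM between the three channels of an
    # HxWx3 image (color frame or grayscale framestack).
    pairs = [(0, 1), (0, 2), (1, 2)]
    return np.mean([ssim(img[..., a], img[..., b], data_range=255)
                    for a, b in pairs])
\end{verbatim}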
This added complexity is likely why most of the more recent, contemporary networks perform adequately when using framestack images, but the fully-connected network and 2-layer CNN perform very badly. The performance of the ResNet-26 architecture may exemplify the challenge of the deployment gap. This architecture has been used widely in many image-related networks, from classifiers to generative networks, mainly due to its ability to attain very good classification accuracies on many computer vision benchmarks while using fewer weights and less complexity than many other contemporary networks, such as VGG and Inception. In fact, prior research comparing some of these same networks on object detection found that ResNet-50 outperformed both Inception-V3 and AlexNet \cite{comparison_nn_2018}, both of which outperformed ResNet in this experiment. One possible explanation is that, while relatively deep compared to the other contemporary networks tested, our ResNet-26 architecture here is not as wide as most, as the maximum number of filters it has in a single layer is 128. For example, the 2-layer CNN --- which outperformed ResNet-26 during testing phase one when using the gray and color single frames and was not greatly outperformed by ResNet in testing phase two --- has twice the number of filters as ResNet in its second hidden layer (256) but is much shallower. It seems reasonable to conclude that these additional hidden layers relative to most other networks tested here should have been able to make up for this deficit in width, as it has been reported that depth is more important than width in determining representational power and the ability of a network to approximate a given function \cite{eldan2016power}\cite{liang2017deep}\cite{safran2016depth}. However, considering that the number of FLOPs in the network was found to play a relatively large role in success during both testing phases, and the highest number of convolutional filters in a CNN's hidden layer was especially significant in testing phase two, it is likely that these two factors had some role in ResNet-26's performance. Still, we believe this network's relatively poor performance is mainly due to its unusual orbit (its paths show low self-similarity as well as low similarity to the high-performing models, as shown in Fig. \ref{fig:path_diff_bar}) coupled with its strong bias as shown in Figure \ref{fig:bias_weights}. This means that this network was not very consistent, which would cause its inputs and outputs to be different from those in the training set as it traversed the track. This would require it to generalize more in order to perform well; however, its bias toward certain actions, based on training, precluded successful adaptation to this new, previously unseen path. Another interesting result was the LSTM network's superior performance on the grayscale inputs compared to the color frames, something not seen in other networks. This may be explained quite simply with the aid of Fig. \ref{fig:inf_rate}. On images the same size and type as those coming through the vehicle's camera, the LSTM used here was only able to perform 18 inferences s\textsuperscript{-1} on images with three channels, such as the single color frame and the gray framestack, but for the single gray image, it was able to perform 60 inferences s\textsuperscript{-1}.
Since the vehicle operated at 30 FPS during the course of this research, it was impossible for the LSTM using color and framestack inputs to produce inferences fast enough, which probably caused dropped frames and/or bottlenecking. \section{CONCLUSION} In summary, the present study presents the first systematic comparison of multiple contemporary deep learning architectures and training protocols in a self-driving vehicle. While most contemporary architectures showed reasonably good driving performance for some input types, we found that there was often a gap between performance on the validation dataset and actual driving, which we term the `deployment gap'. We believe this gap is likely based on the fact that small initial differences in driving behavior can lead to inputs that are poorly represented by the human-generated training/validation data, which requires further generalization. This is likely why most of the networks that performed the best, such as AlexNet, VGG-16, and Inception-V3, seemed to take similar paths around the track and had relatively consistent paths over multiple runs. Conversely, the ResNet-26 architecture did not take a very consistent path around the track, its path was very different from that of all other networks, and it was also heavily biased toward certain actions, which decreased its ability to generalize to these new inputs. Based on its validation performance alone --- which was above average and similar to other networks that performed well in the testing phases --- there was no indication that its performance would be so different when deployed in an embedded control system. Thus, this network demonstrates that the deployment gap exists and needs to be taken into consideration when designing real-world end-to-end DL-based systems. In addition to demonstrating a deployment gap, this research also suggests some potential ways to identify and address such gaps, including an even bias across output actions, increased FLOPs and convolutional filters, and an increased weighting of useful features in the input images. These may be more useful than validation performance in determining driving ability when a deep neural network is used to control a vehicle in an end-to-end fashion. \section{Acknowledgments} The authors gratefully acknowledge the support of NVIDIA Corporation with the donation of the GPU hardware used for this research. The authors would also like to thank Levi Stein for his hard work and creativity throughout this research. \bibliographystyle{unsrt}
\section{Introduction} Shuttling one or several atoms or ions is a key operation for fundamental research and for implementing quantum-based technologies. In many of these applications it is important to deliver the particles fast and motionally unexcited at destination. Shortcut-to-adiabaticity (STA) techniques \cite{Torrontegui2013,Guery2019} provide transport protocols for that end, but in practice the nominal trajectory of the control parameters is not implemented exactly because of technical imperfections and constraints. These control errors pose the need for a) studying the effect of perturbations on STA-based protocols and b) devising protocols that are robust with respect to imperfections and satisfy the technical constraints. Noisy perturbations in STA-based shuttling operations have been studied quite thoroughly recently \cite{Lu2020}. In this work we shall address a complementary aspect, namely the effect of ``systematic'' oscillatory perturbations of the ideal trap frequency or of the trap trajectory, and design robust protocols. \end{fmtext} \maketitle The intermediate regime of excitations between an elementary monochromatic perturbation and a noisy one could be handled by linearly combining monochromatic perturbations, but a full understanding of the effect of a monochromatic perturbation is needed first. The theoretical analysis is done for one single particle and it is quite general within the harmonic oscillator assumption for the trap, but numerical examples and physical motivation for approximations and parameter values are borrowed from realistic trapped ion settings. There are several STA approaches to shuttle a single particle or condensate from the trap position $Q(0)=0$ at $t=0$ to $Q(T)=d$ in a transport time $T$ \cite{Muga2021}. Invariant-based inverse engineering and the ``Fourier method'' have been widely used to design the trap motion and will be the core approaches here. In particular, the transport in a harmonic trap may be inverse engineered using quadratic invariants of motion \cite{Torrontegui2011} (alternatively ``scaling'' for condensates \cite{Muga2009,Schaff2011,Torrontegui2012}). The invariant eigenvectors are also very useful to describe the dynamics, and combined with perturbation theory, they provide compact expressions for the energy excitation. If shuttling is performed in a rigid harmonic oscillator, with constant trap frequency, and vanishing trap speed at initial and final times, the final excitation energy can also be expressed in terms of the Fourier transform (FT) of the trap acceleration (or velocity) at the trap frequency \cite{Landau1976,Bowler2012,Couvert2008,Guery2014,Reichle2006,MartinezCercos2020}. A consequence is that if the ideal, excitation-free trap trajectory is affected by some perturbation or deviation, the final excitation only depends on the Fourier transform of the {\it deviation} of the trap acceleration with respect to the ideally designed one. These deviations may be independent of the ideal trajectory, for example if they are due to homogeneous background noise, or may depend on it, e.g. if some locations are more prone to errors. This possible dependence makes, in general, smooth, band-limited, and spatially-limited ideal trajectories preferable. A formal framework to design trap trajectories to nullify the FT of the acceleration at the trap frequency can be worked out systematically \cite{Guery2014,MartinezCercos2020}, without making explicit use of invariants.
In fact, combining invariants and the Fourier forms, as we do in this work, is worthwhile. In particular, whereas the Fourier method, as used so far, needs constant trap frequencies, here we shall extend it to time-dependent perturbations and apply optimization strategies. In Section \ref{invsec}, we start by applying invariant-based inverse engineering to shuttling, modifying the perturbative treatment developed for noisy perturbations, to determine the effect of arbitrary perturbations. We also find compact FT expressions of the excitation; In Section \ref{polsec}, we apply the previous general results to harmonic transport with an elementary sinusoidal perturbation in the trap frequency. We focus on a specific polynomial STA protocol and study the different contributions to the final energy; In Section \ref{optsec}, we employ several techniques to find trap trajectories that satisfy different optimization criteria when the trap frequency is affected by one or more sinusoidal perturbations; The conclusions are presented in Section \ref{consec}. Throughout the work we shall assume an effective one-dimensional transport, which is realistic with current experimental settings. \section{Transport of an ion using invariant-based inverse engineering\label{invsec}} The Hamiltonian of a particle of mass $m$ in a harmonic oscillator of (angular) frequency $\Omega(t)$, shuttled along $Q(t)$, \begin{equation}\label{eq:hamiltonian} H(t)=\frac{p^2}{2m}+\frac{m\Omega^2(t)}{2}\big[x-Q(t)\big]^2, \end{equation} has a quadratic Lewis-Riesenfeld invariant \cite{Lu2020} \begin{eqnarray} I(t)&=&\frac{1}{2m}\bigg\{\rho(t)\big[p-m\dot{q}_c(t)\big]-m\dot{\rho}(t)\big[x-q_c(t)\big]\bigg\}^2 +\frac{1}{2}m\Omega_0^2\bigg[\frac{x-q_c(t)}{\rho(t)}\bigg]^2, \end{eqnarray} where $\rho(t)$ is a scaling factor for the width of the eigenstates of $I$, and $q_c(t)$ is a classical trajectory for the forced oscillator. The dots denote time derivatives. The invariant satisfies \begin{equation} \label{invar} \frac{dI(t)}{dt}=\frac{\partial{I(t)}}{\partial{t}}+\frac{1}{i\hbar}\big[I(t),H(t)\big]=0, \end{equation} so that its expectation value for states driven by $H(t)$ is constant. From (\ref{invar}), we find the ``Ermakov'' and ``Newton'' equations, \begin{eqnarray}\label{eq:Ermakov} \ddot{\rho}(t)+\Omega^2(t)\rho(t)&=&\frac{\Omega_0^2}{\rho^3(t)}, \\\label{eq:Newton} \ddot{q}_c(t)+\Omega^2(t)q_c(t)&=&\Omega^2(t)Q(t). \end{eqnarray} Hereafter we choose for convenience $\Omega_0=\Omega(0)$. The main idea of inverse engineering a quiet driving is to design $q_c(t)$ and introduce it in (\ref{eq:Newton}) to deduce special trap trajectories without final excitation. We impose the initial conditions \begin{eqnarray} q_c(0)=0, &&\hspace{1.32cm}\rho(0)=1, \nonumber\\\label{eq:6} \dot{q}_c(0)=0,&&\hspace{1.32cm}\dot{\rho}(0)=0,\\ \ddot{q}_c(0)=0,&&\hspace{1.32cm}\ddot{\rho}(0)=0, \nonumber \end{eqnarray} so that the invariant commutes with the Hamiltonian at $t=0$. The last two guarantee the continuity of $Q(t)$ and $\Omega(t)$ at the initial time. Similar conditions (except for $q_c(T)=d$) are needed at final time $T$ to achieve excitationless shuttling. We shall use a specific notation, $Q_0(t)$, for the ideal trap trajectory deduced from Newton's equation for the $q_c(t)$ that satisfies the imposed boundary conditions when $\Omega(t)=\Omega_0$. The actual, experimentally implemented trap trajectory, $Q(t)$, and the actual trap frequency may differ from $Q_0(t)$ and $\Omega_0$, producing motional excitation at time $T$.
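As an illustration of this inverse-engineering step, the sketch below assumes the lowest-order polynomial ansatz for $q_c(t)$ compatible with the boundary conditions above (a polynomial protocol of this kind is examined in Section \ref{polsec}); the code and function names are ours and are only meant to make the construction of $Q_0(t)$ explicit.
\begin{verbatim}
import numpy as np

def qc0(t, T, d):
    # Lowest-order polynomial with q_c(0)=q_c'(0)=q_c''(0)=0,
    # q_c(T)=d, q_c'(T)=q_c''(T)=0.
    s = t / T
    return d * (10*s**3 - 15*s**4 + 6*s**5)

def qc0_dd(t, T, d):
    # Second time derivative of qc0(t).
    s = t / T
    return d * (60*s - 180*s**2 + 120*s**3) / T**2

def trap_trajectory(t, T, d, omega0):
    # Newton's equation inverted for constant trap frequency:
    # Q_0(t) = q_c(t) + qddot_c(t) / omega0^2.
    return qc0(t, T, d) + qc0_dd(t, T, d) / omega0**2
\end{verbatim}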
\subsection{Final and transient energies} Let us calculate the final energy due to deviations in the ideal trap trajectory and trap frequency. Now we assume that $Q(t)$ and $\Omega(t)$ are given, and $q_c(t)$ and $\rho(t)$ are found from them, using (\ref{eq:Ermakov}) and (\ref{eq:Newton}) and the initial conditions (\ref{eq:6}). The wave-function is found through the Lewis-Riesenfeld invariant. The corresponding eigenstates can be found analytically \cite{Lu2020}, \begin{equation} \psi_n(x,t)=\frac{1}{\sqrt{\rho}}e^{\frac{im}{\hbar}\left[\frac{\dot{\rho}x^2}{2\rho}+\frac{\left(\dot{q}_c\rho-\dot{\rho}q_c\right)x}{\rho}\right]}\phi_n\left(\frac{x-q_c}{\rho}\right), \end{equation} where $\phi_n$ is the $n$-th eigenstate of the rigid harmonic oscillator of frequency $\Omega_0$. Elementary solutions of the time-dependent Schrödinger equation may be written as $$ \Psi_n(x,t)=e^{i\theta_n(t)}\psi_n(x,t), $$ where $\theta_n(t)$ are the Lewis-Riesenfeld phases, which are found so that $\Psi_n$ is indeed a solution. At final time $T$, the harmonic trap is in $Q(T)$ and its frequency is $\Omega(T)$, not necessarily equal to $d$ and $\Omega_0$, respectively. The final energy can be found exactly as \begin{eqnarray} E_n(T)&=&\big\langle H(T)\big\rangle=\bigg\langle\frac{p^2}{2m}+\frac{m\Omega^2(T)}{2}(x-Q(T))^2\bigg\rangle \nonumber\\ \label{eq:energy} &=&\frac{m\Omega^2(T)}{2}\left[q_c(T)-Q(T)\right]^2+\frac{m}{2}\dot{q}_c^2(T)\nonumber\\ &+&\frac{\hbar}{4\Omega_0}(2n+1)\left[\dot{\rho}^2(T)+\frac{\Omega_0^2}{\rho^2(T)}+\Omega^2(T)\rho^2(T)\right]\!. \end{eqnarray} Some terms depend on the trap trajectory $Q(t)$ (also through $q_c(t)$), while others do not. Following Lu {\it et al.} \cite{Lu2020}, {\em we call the trap-motion independent terms ``static'', and the dependent ones, ``dynamical''}. Equation (\ref{eq:energy}) is also valid with the change $T\rightarrow t$ for any time $t$ during the transport. \subsection{Perturbation in the trap frequency\label{section:2.2}} Assume that the trap frequency is perturbed as \begin{equation}\label{eq:12} \Omega(t)=\Omega_0\left[1+\lambda f(t)\right], \end{equation} where $\lambda$ is a dimensionless perturbative parameter, assumed much smaller than 1, and $f(t)$ can be any (dimensionless) function. We assume now $Q(t)=Q_0(t)$, with $Q_0(0)=0$ and $Q_0(T)=d$. To analyze the effect of the perturbation, $\rho(t)$ and $q_c(t)$ are expanded in powers of $\lambda$, \begin{eqnarray}\label{eq:13} \rho(t)&=&\rho^{(0)}(t)+\lambda\rho^{(1)}(t)+O(\lambda^2), \nonumber\\ q_c(t)&=&q_c^{(0)}(t)+\lambda q_c^{(1)}(t)+O(\lambda^2). \end{eqnarray} The zeroth order, or unperturbed limit, corresponding to $\Omega(t)=\Omega_0$, fulfills \begin{eqnarray}\label{eq:14} \rho^{(0)}(t)=1,\;\;\;\; \ddot{q}_c^{(0)}(t)+\Omega_0^2q_c^{(0)}(t)=\Omega_0^2Q_0(t), \end{eqnarray} with $q_c^{(0)}$ satisfying the initial conditions (\ref{eq:6}). To minimize the final excitation and make $Q_0(t)$ continuous at $t=T$, the following boundary conditions also have to be imposed, \begin{eqnarray}\label{eq:15} q^{(0)}_c(T)=d,\;\;\;\; \dot{q}^{(0)}_c(T)= \ddot{q}^{(0)}_c(T)=0. \end{eqnarray} With these conditions, the final energy given by (\ref{eq:energy}) is simply the $n$-th energy level of the static harmonic oscillator with the unperturbed frequency $\Omega_0$, \begin{equation} E_n^{(0)}=\hbar\Omega_0(n+1/2).
\end{equation} If $q_c^{(0)}(t)$ is designed with the aforementioned boundary conditions, $Q_0(t)$ is found from (\ref{eq:14}), and excitationless transport is guaranteed in the unperturbed limit. Introducing the expansions for $\Omega(t)$, $\rho(t)$ and $q_c(t)$ given by (\ref{eq:12}) and (\ref{eq:13}) into (\ref{eq:energy}), we find an expansion of the final energy with dynamical and static contributions. Combining the zeroth and first order in $\lambda$ of both the dynamical and the static terms, we get \begin{equation} E_n^{(0)}+\lambda E_n^{(1)}=\hbar\Omega(T)\left(n+1/2\right). \end{equation} For the second order in $E_n=E_n^{(0)}+\lambda E_n^{(1)}+\lambda^2 E_n^{(2)}+...$, the dynamical and static terms are \begin{eqnarray}\label{eq:energy2} E_n^{(2)}(T)&=&\frac{m\Omega_0^2}{2}\left\{\left[q_c^{(1)}(T)\right]^2+\frac{1}{\Omega_0^2}\left[\dot{q}_c^{(1)}(T)\right]^2\right\} \nonumber\\ &+& \frac{\hbar\Omega_0}{4}(2n+1)\!\left\{\!\left[2\rho^{(1)}(T)+f(T)\right]^2\!+\!\frac{\left[\dot{\rho}^{(1)}(T)\right]^2}{\Omega_0^2}\!\right\}\!. \end{eqnarray} To achieve robust shuttling protocols, the main goal is to minimize this last expression. Substituting the expansions of $\rho(t)$ and $q_c(t)$ into (\ref{eq:Ermakov}) and (\ref{eq:Newton}), we find the differential equations satisfied by $\rho^{(1)}(t)$ and $q_c^{(1)}(t)$ by keeping only the first order in $\lambda$, \begin{equation}\label{eq:17} \ddot{\rho}^{(1)}(t)+4\Omega_0^2\rho^{(1)}(t)=-2\Omega_0^2f(t), \end{equation} with initial conditions $\rho^{(1)}(0)=\dot{\rho}^{(1)}(0)=0$, and \begin{equation}\label{eq:18} \ddot{q}_c^{(1)}(t)+\Omega_0^2q_c^{(1)}(t)=2f(t)\ddot{q}_c^{(0)}(t), \end{equation} with initial conditions $q_c^{(1)}(0)=\dot{q}_c^{(1)}(0)=0$. Equations (\ref{eq:17}) and (\ref{eq:18}) admit a formal solution, \begin{eqnarray}\label{eq:19} \rho^{(1)}(t)&=&-\Omega_0\int_0^tdt'f(t')\sin\left[2\Omega_0(t-t')\right], \\ \label{eq:20} q_c^{(1)}(t)&=&\frac{2}{\Omega_0}\int_0^tdt'f(t')\ddot{q}_c^{(0)}(t')\sin\left[\Omega_0(t-t')\right], \end{eqnarray} with similar expressions for their first time derivatives. \subsection{Perturbation in the trap trajectory}\label{sect:2.3} Complementing the previous section, consider now a constant trap frequency $\Omega_0$, but errors in the trap position, \begin{equation}\label{eq:q0} Q(t)=Q_0(t)+\varepsilon dh(t), \end{equation} where $\varepsilon$ is the dimensionless perturbative parameter, and $h(t)$ is a (dimensionless) time-dependent function. The distance $d$ is included to set the scale and make the expression dimensionally consistent. For neutral atom transport in optical lattices, iterative approaches to minimize deviations have been put forward \cite{Lam2021}. In the numerical voltage optimization performed in trapped ion laboratories, there is some choice as to whether to minimize deviations of the trap frequency or of the trap position, see e.g. \cite{Muga2021}. We shall discuss later where the emphasis has to be put. Indeed, in a trapped ion experiment, perturbations cannot be suppressed to any desired level because of technical imperfections and limitations of the control; for example, the voltages have upper limits, the time resolution is limited, the number of electrodes is limited, and their geometry is fixed \cite{Muga2021}. In this context, advice on where to put the emphasis in parameter optimizations is quite useful. As in (\ref{eq:13}), we expand $\rho(t)$ and $q_c(t)$ with the change $\lambda\to\varepsilon$.
We introduce these expansions, together with (\ref{eq:q0}) for $Q(t)$, into the Ermakov and Newton equations. The zeroth order fulfills once again (\ref{eq:14}), with $q_c^{(0)}$ satisfying the boundary conditions (\ref{eq:6}) and (\ref{eq:15}). The first order is zero, and for the second order of the final excitation we get \begin{eqnarray}\label{eq:energy3} E_n^{(2)}(T)&=&\frac{m\Omega_0^2}{2}\left\{\left[q_c^{(1)}(T)-dh(T)\right]^2+\frac{1}{\Omega_0^2}\left[\dot{q}_c^{(1)}(T)\right]^2\right\} \nonumber\\ &+& \frac{\hbar\Omega_0}{4}(2n+1)\left\{4\left[\rho^{(1)}(T)\right]^2+\frac{1}{\Omega_0^2}\left[\dot{\rho}^{(1)}(T)\right]^2\right\}. \end{eqnarray} $\rho^{(1)}(t)$ and $q_c^{(1)}(t)$ satisfy \begin{eqnarray} \ddot{\rho}^{(1)}(t)+4\Omega_0^2\rho^{(1)}(t)&=&0,\nonumber \\ \ddot{q}_c^{(1)}(t)+\Omega_0^2q_c^{(1)}(t)&=&d\Omega_0^2h(t), \end{eqnarray} with initial conditions $\rho^{(1)}(0)=\dot{\rho}^{(1)}(0)=0$ and $q_c^{(1)}(0)=\dot{q}_c^{(1)}(0)=0$. The solutions are \begin{eqnarray} \rho^{(1)}(t)&=&0,\label{eq:2.30} \\ q_c^{(1)}(t)&=&d\Omega_0\int_0^tdt'h(t')\sin\left[\Omega_0(t-t')\right].\label{eq:2.31} \end{eqnarray} Neither of the auxiliary variables depends on the trap trajectory, $Q_0(t)$, so there is only a static contribution to the second-order excitation\footnote{We assume that $h(t)$ does not depend on $q_c^{(0)}(t)$ or $Q_0(t)$.}. For fixed $T$ it is not possible to design an optimal trap trajectory that minimizes the excitation. To diminish the effect of a perturbation in the trap position, we may choose $T$ to make (\ref{eq:2.31}) and its derivative zero at $T$. This may be done systematically if the form of the perturbation function $h(t)$ is known, as we shall see. \subsection{The Fourier forms}\label{section:2.4} Here we find compact expressions for the excitation energy in the form of Fourier transforms. \subsubsection{Perturbation in the trap frequency}\label{section:2.4a} Let us start by rewriting the term that depends on the classical trajectory $q_c(t)$ and its time derivative, i.e. the dynamical term in (\ref{eq:energy2}), as \begin{eqnarray} E_{n,dynamical}^{(2)}= \frac{m\Omega_0^2}{2}\Big\arrowvert q_c^{(1)}(T)-\frac{i}{\Omega_0}\dot{q}_c^{(1)}(T)\Big\arrowvert^2. \end{eqnarray} Now, we introduce the integral expressions for $q_c^{(1)}(t)$ and $\dot{q}_c^{(1)}(t)$, see (\ref{eq:20}), to write \begin{equation}\label{eq:2.39} E_{n,dynamical}^{(2)}=2m\left\arrowvert\int_0^Tdt\,f(t)\ddot{q}_c^{(0)}(t)e^{-i\Omega_0t}\right\arrowvert^2. \end{equation} The same procedure can be applied to the static term (\ref{eq:energy2}). Assuming that there is no perturbation at final time, i.e., $\Omega(T)=\Omega_0$, or equivalently $f(T)=0$, we can write it as \begin{eqnarray} E^{(2)}_{n,static}= \frac{\hbar\Omega_0}{4}(2n+1)\Big\arrowvert 2\rho^{(1)}(T)-\frac{i}{\Omega_0}\dot{\rho}^{(1)}(T)\Big\arrowvert^2. \end{eqnarray} Introducing now the integral expressions for $\rho^{(1)}(t)$ and $\dot{\rho}^{(1)}(t)$, see (\ref{eq:19}), we get \begin{equation} E^{(2)}_{n,static}=\hbar\Omega_0^3(2n+1)\left\arrowvert\int_0^Tdt\,f(t)e^{-2i\Omega_0t}\right\arrowvert^2.\label{eq:2.40} \end{equation} \subsubsection{Perturbation in the trap position} As discussed in subsection \ref{invsec}\ref{sect:2.3}, the second order excitation due to a perturbation in the trap trajectory is purely static.
When we apply the same kind of manipulations as before to this component, inserting now (\ref{eq:2.30}) into the final excitation (\ref{eq:energy3}), we get \begin{equation}\label{eq:2.43} E_{n,static}^{(2)}=\frac{m\Omega_0^4d^2}{2}\left\arrowvert\int_0^Tdt\,h(t)e^{-i\Omega_0t}\right\arrowvert^2, \end{equation} where we have assumed $Q(T)=d$ and $h(T)=0$. The static excitation (\ref{eq:2.43}) is very similar to the one produced by a time-dependent deviation in the trap frequency, (\ref{eq:2.40}). Assuming that the perturbation functions $f(t)$ and $h(t)$ are similar, and that the parameters $\lambda$ and $\varepsilon$ are of the same order, there are mainly two differences between these two expressions. Firstly, the Fourier transform is evaluated at $2\Omega_0$ in (\ref{eq:2.40}) and at $\Omega_0$ in (\ref{eq:2.43}). Secondly, the prefactors are different. Their ratio is \begin{equation} \eta_n=\frac{\hbar\Omega_0^3(2n+1)}{{m\Omega_0^4d^2}/{2}}=\frac{2\hbar(2n+1)}{m\Omega_0 d^2}. \end{equation} For the typical experimental values to shuttle an ion, this parameter is much smaller than 1. This means that in principle (for similar contributions of the moduli) {\em it is preferable to have an absolute control of the trap position even if that compromises the control over the trap frequency}. \section{Polynomial STA protocol for a transport with an oscillating trap frequency\label{polsec}} We discussed in the previous section that special attention should be paid to perfectly adjusting the harmonic potential trajectory to the theoretically designed one, even if this implies assuming some errors in the trap frequency. In this section, we shall focus on trap frequency errors. While the static component is the same for every STA trap trajectory with a given duration $T$, the dynamical one can be optimized with an appropriate trajectory. From now on, we consider a sinusoidal perturbation for the trap frequency, \begin{equation}\label{eq:3.1} \Omega(t)=\Omega_0\left[1+\lambda\sin(\omega t)\right], \end{equation} \begin{figure}[h] \begin{center} \includegraphics[width=0.80\textwidth]{oscilatorypert_fig1} \caption{(a) Classical trajectory $q_c^{(0)}(t)/d$ versus $t/T$ and (b) acceleration of the classical trajectory $\ddot{q}_c(t)/(d/T^2)$ versus $t/T$ for the polynomial protocol.} \label{fig:3.1} \centering \end{center} \end{figure} to later consider the combination of several sines. This perturbation can be understood as an elementary Fourier component of an arbitrary perturbation. We can apply the results from the previous section, particularly from subsection \ref{invsec}\ref{section:2.2}, to this elementary perturbation. We start by designing a classical trajectory $q_c^{(0)}(t)$ that satisfies the boundary conditions (\ref{eq:6}) and (\ref{eq:15}) as a 5th order polynomial, \begin{equation}\label{eq:24} q_c^{(0)}(t)=10d\left(\frac{t}{T}\right)^3-15d\left(\frac{t}{T}\right)^4+6d\left(\frac{t}{T}\right)^5, \end{equation} namely, the simplest polynomial that satisfies all six boundary conditions, and for that reason it has been used often \cite{Lu2020,Torrontegui2011,Zhang2016}. Once $q_c^{(0)}(t)$ is set (and so is the acceleration, see figure \ref{fig:3.1}), we get the trap trajectory $Q_0(t)$ from (\ref{eq:14}). For short transport times $T$, $Q_0(t)$ could exceed the domain $[0,d]$. This occurs symmetrically at both edges for $\Omega_0T\le2.505$ \cite{Torrontegui2011} (this value is independent of the total distance $d$).
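As a quick numerical illustration of this excursion (a minimal sketch, not part of the original derivation; the parameter values mimic those used later in figure \ref{fig:1}, except for a deliberately short $T$ such that $\Omega_0T\approx 2.0<2.505$), one may build $q_c^{(0)}(t)$ from (\ref{eq:24}) and deduce $Q_0(t)=q_c^{(0)}+\ddot{q}_c^{(0)}/\Omega_0^2$ from (\ref{eq:14}):
\begin{verbatim}
import numpy as np

Omega0 = 2*np.pi*4e6            # trap angular frequency (rad/s)
d, T = 50e-6, 0.08e-6           # distance (m); Omega0*T ~ 2.0 < 2.505
t = np.linspace(0.0, T, 2001)
s = t/T
qc   = d*(10*s**3 - 15*s**4 + 6*s**5)        # polynomial q_c^{(0)}
qcdd = d*(60*s - 180*s**2 + 120*s**3)/T**2   # its acceleration
Q0 = qc + qcdd/Omega0**2        # trap trajectory from Newton's equation
print(Q0.min()/d, Q0.max()/d)   # < 0 and > 1: Q_0 leaves [0, d]
\end{verbatim}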
The excess beyond $[0,d]$ may be a problem in practice and a remedy will be discussed later on. \begin{figure}[ht] \begin{center} \includegraphics[width=0.80\textwidth]{oscilatorypert_fig2} \caption{(a) Log plot of the second order ground-state excitation due to a sinusoidal perturbation in the trap frequency (in units of final quanta) versus $\omega$. The transport is driven by a 5th order polynomial STA protocol. The black solid line is the total excitation, the blue dashed line is the static component and the red dashed line the dynamical component. The parameters are $\lambda=0.01$, $\Omega_0=2\pi\times4$ MHz, $m=1.455\cdot10^{-25}$ kg ($^{{88}}$Sr$^+$ ion), $d=50\;\mu$m and $T=2\;\mu$s. (b) Log plot of the second order ground state excitation due to a sinusoidal perturbation in the trap frequency (in units of final quanta) versus $T$. Same parameters $\lambda$, $\Omega_0$, $m$, $d$, and color code as in (a), $\omega=2\pi\times 6$ MHz. } \label{fig:1} \centering \end{center} \end{figure} \subsection{Final energy using the perturbation method} In figure \ref{fig:1}(a), the second order final excitation of a particle which is initially in its ground state, $E_0^{(2)}(T)$, and its two components, \emph{static} and \emph{dynamical}, are shown versus $\omega$, for a $^{88}$Sr$^+$ ion shuttled a distance $d=50$ $\mu$m in $T=2$ $\mu$s using a trap with frequency $\Omega_0=2\pi\times 4$ MHz and the polynomial (\ref{eq:24}), see details in caption (these values are realistic for current shuttling experiments). The dynamical component experiences a resonance at $\omega=\Omega_0$, and the static one at $\omega=2\Omega_0.$\footnote{We define these resonances phenomenologically here, as the frequencies {\it around which} maximum excitation is found. They are better identified by the maximal envelope of the excitation rather than by the excitation itself. Note that the resonance at $2\Omega_0$ is a ``parametric resonance''.} In an experimental setting in which the trap frequency $\Omega_0$ is tunable and the perturbation frequency $\omega$ --or, at least, a dominant Fourier component of the perturbation-- is known, these resonances should be avoided. We also represent the two contributions to the excitation energy versus the transport time, from $T=0.1$ $\mu$s to $T=20$ $\mu$s, for a fixed perturbation frequency, $\omega=2\pi\times 6$ MHz, in figure \ref{fig:1}(b). Both components periodically reach minimum values for special transport times. Moreover, the maxima of the static term remain constant at longer times, while the dynamical term maxima decay and become negligible compared to the static term for very slow shuttling, in agreement with Eqs. (\ref{eq:2.39}) and (\ref{eq:2.40}). We shall later determine the shortest transport times that make the static contribution dominate. \subsection{Envelope functions} To test the validity of the perturbative treatment, we calculate $\Delta E_n(T)=E_n(T)-\hbar \Omega(T)(n+1/2)$ by numerically solving the Ermakov and Newton equations (\ref{eq:Ermakov}) and (\ref{eq:Newton}) for the auxiliary variables $\rho(t)$ and $q_c(t)$ and inserting them into the equation for the final energy (\ref{eq:energy}). This ``exact'' result may be compared with the perturbative result, and the differences for the parameters chosen, e.g. in figure \ref{fig:1}(b), are hardly noticeable.
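For reference, a minimal sketch of this numerical check (illustrative Python code; parameter values as in figure \ref{fig:1}(b), ground state $n=0$):
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

hbar, m = 1.0545718e-34, 1.455e-25          # J s; kg (88Sr+ ion)
Omega0, lam, omega = 2*np.pi*4e6, 0.01, 2*np.pi*6e6
d, T, n = 50e-6, 2e-6, 0

def Omega(t):                                # perturbed trap frequency
    return Omega0*(1.0 + lam*np.sin(omega*t))

def Q0(t):                                   # polynomial STA trap trajectory
    s = t/T
    qc   = d*(10*s**3 - 15*s**4 + 6*s**5)
    qcdd = d*(60*s - 180*s**2 + 120*s**3)/T**2
    return qc + qcdd/Omega0**2

def rhs(t, y):                               # y = (rho, rhodot, qc, qcdot)
    rho, rhod, qc, qcd = y
    return [rhod, Omega0**2/rho**3 - Omega(t)**2*rho,   # Ermakov equation
            qcd, Omega(t)**2*(Q0(t) - qc)]              # Newton equation

sol = solve_ivp(rhs, (0.0, T), [1, 0, 0, 0], rtol=1e-10, atol=1e-12)
rho, rhod, qc, qcd = sol.y[:, -1]
E = (0.5*m*Omega(T)**2*(qc - Q0(T))**2 + 0.5*m*qcd**2
     + hbar/(4*Omega0)*(2*n + 1)*(rhod**2 + Omega0**2/rho**2
                                  + Omega(T)**2*rho**2))
print((E - hbar*Omega(T)*(n + 0.5))/(hbar*Omega0))  # excitation in quanta
\end{verbatim}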
The main advantage of using the perturbative analysis, instead of numerically solving the differential equations obeyed by $\rho(t)$ and $q_c(t)$, is that we find analytical expressions, which, if lengthy, can be simplified or approximated to get envelope functions. These functions will allow us to find interesting features such as asymptotic behavior at large and small perturbation frequencies or transport times, or to estimate the transport time that makes the static contribution dominate over the dynamical one. If the static part dominates, increasing the process time will not improve performance, on average, whereas if the dynamical part dominates, it may be worthwhile to increase the process time. We begin with the static contribution to the excitation. Solving the integrals (\ref{eq:19}) and (\ref{eq:20}) for $f(t)=\sin(\omega t)$, the static term (second line in (\ref{eq:energy2})) takes the form \begin{eqnarray}\label{eq:26} E^{(2)}_{n,stat}(T)&=&\frac{\hbar\Omega_0(2n+1)}{4\left(\omega^2-4\Omega_0^2\right)^2}\Big\{\big[\omega^2\sin{(\omega T)} \nonumber\\ &-&2\omega\Omega_0\sin{(2\Omega_0 T)}\big]^{\!2} \!+\!4\omega^2\Omega_0^2\big[\!\cos{(\omega T)}\!-\!\cos{(2\Omega_0 T)}\big]^{2}\Big\}. \end{eqnarray} For perturbation frequencies for which $\omega T=k\pi$, i.e. $\Omega(T)=\Omega_0$ (we will later extend the analysis to arbitrary frequencies), \begin{equation}\label{eq:3.5} E^{(2)}_{n,stat}(T)=\frac{2\hbar\Omega_0(2n+1)}{\left(\omega^2-4\Omega_0^2\right)^2}\omega^2\Omega_0^2\big[1-(-1)^k\cos{(2\Omega_0T)}\big]. \end{equation} This term vanishes when the condition $(-1)^k\cos{(2\Omega_0T)}=1$ is fulfilled, i.e., when (i) $k$ is even $\left(\Leftrightarrow\omega T=2i\pi\right)$ and $2\Omega_0T=2j\pi $ with $i,j\in \mathbb{N}$, (ii) $k$ is odd $\left(\Leftrightarrow\omega T=(2i'+1)\pi \right)$ and $2\Omega_0T=(2j'+1)\pi $ with $i',j'\in \mathbb{N}$, \noindent whereas it is maximum when (iii) $k$ is even $\left(\Leftrightarrow\omega T=2i\pi\right)$ and $2\Omega_0T=(2j+1)\pi $ with $i,j\in \mathbb{N}$, (iv) $k$ is odd $\left(\Leftrightarrow\omega T=(2i'+1)\pi \right)$ and $2\Omega_0T=2j'\pi $ with $i',j'\in \mathbb{N}$. Therefore, by tuning the trap frequency and the transport time appropriately, the final excitation may be minimized. In fact, {\em if $\omega$ is known, one can first choose $T$ and then $\Omega_0$ to fulfill one of the two conditions (i) or (ii) that make the static contribution vanish}. Although in this section we are analyzing a 5th order polynomial protocol, the results for the static contribution are completely general for a sinusoidal perturbation in the trap frequency, as every possible STA trajectory has the same static term. Thus, the choice of $T$ and $\Omega_0$ described to make (\ref{eq:3.5}) vanish holds for any STA protocol. We take as the envelope function of $E^{(2)}_{n,stat}(T)$ the one that goes through all the maxima, \begin{equation}\label{eq:28} F_{stat}=\frac{2\hbar\Omega_0(2n+1)}{\left(\omega^2-4\Omega_0^2\right)^2}\omega^2\Omega_0^2\big[1+|\cos{(2\Omega_0T)}|\big]. \end{equation} The envelope is not valid for very large frequencies ($\omega\gg\Omega_0$), where it decays as $1/\omega^2$, whereas the true static term (\ref{eq:26}) has an oscillating term, proportional to $\sin^2(\omega T)$, that does not decay for large $\omega$. In figure \ref{fig:5}(a) we plot this envelope, together with the static contribution, versus the perturbation frequency for $T=2$ $\mu$s.
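A minimal sketch of how these two curves can be evaluated for comparison with figure \ref{fig:5}(a) (illustrative code; the parameters of that figure are assumed, and the comparison is only meaningful away from $\omega\gg\Omega_0$):
\begin{verbatim}
import numpy as np

hbar = 1.0545718e-34
Omega0, T, n = 2*np.pi*4e6, 2e-6, 0

def E_stat(w):            # static term, Eq. (26)
    pref = hbar*Omega0*(2*n + 1)/(4*(w**2 - 4*Omega0**2)**2)
    return pref*((w**2*np.sin(w*T) - 2*w*Omega0*np.sin(2*Omega0*T))**2
                 + 4*w**2*Omega0**2*(np.cos(w*T) - np.cos(2*Omega0*T))**2)

def F_stat(w):            # estimated envelope through the maxima, Eq. (28)
    return (2*hbar*Omega0**3*(2*n + 1)*w**2*(1 + np.abs(np.cos(2*Omega0*T)))
            / (w**2 - 4*Omega0**2)**2)

w = 2*np.pi*np.linspace(1e6, 7e6, 7)   # frequencies not >> Omega0
print(np.c_[E_stat(w), F_stat(w)])     # compare term and envelope
\end{verbatim}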
Even though we only considered a discrete set of frequencies, the envelope is valid for a continuum of perturbation frequencies. In figure \ref{fig:5}(b), we set the perturbation frequency to $\omega=2\pi\times 6$ MHz and let the transport time vary from $0.1$ to $10$ $\mu$s. In this case, the oscillating term in (\ref{eq:28}) does not add much information, and it could be simply substituted by its maximum value. Again, we should not expect this analysis to work for $\omega\gg\Omega_0$. \begin{figure}[t] \begin{center} \includegraphics[width=0.80\textwidth]{oscilatorypert_fig3} \caption{Log plot of the static component of the excitation (second order term) in units of quanta (light blue line) and its estimated envelope (dark blue line) versus: (a) the sinusoidal perturbation frequency for a transport time of $2\mu$s, and (b) the transport time for a perturbation frequency of $2\pi\times 6$ MHz. } \label{fig:5} \centering \end{center} \end{figure} While the static contribution has a simple analytical expression for $f(t)=\sin(\omega t)$, the dynamical contribution is more complicated. Nevertheless, we managed to find an approximate envelope function, \begin{equation}\label{eq:29} F_{dyn}=\frac{57600 m\;d^2}{T^6(\Omega_0^2-\omega^2)^4}\omega^2\Omega_0^2\big[1+|\cos{(\Omega_0T)}|\big]. \end{equation} We compare this function and the dynamical excitation in figure \ref{fig:6} in the perturbation-frequency and the transport-time domains. Despite having neglected many terms, (\ref{eq:29}) is a good approximation of the true envelope function. The oscillating term, which now has frequency $\Omega_0$ instead of $2\Omega_0$ as in the envelope for the static contribution, could also be substituted by its maximum value in the transport time plot. \begin{figure}[t] \begin{center} \includegraphics[width=0.80\textwidth]{oscilatorypert_fig4} \caption{Log plot of the dynamical component of the excitation in units of quanta (light red line) and its estimated envelope (dark red line) versus: (a) the sinusoidal perturbation frequency for a transport time of $2\mu$s, and (b) the transport time for a perturbation frequency of $2\pi\times 6$ MHz. } \centering \label{fig:6} \end{center} \end{figure} In figure \ref{fig:1} the minima of the dynamical contribution approximately coincide with the minima of the static contribution, which we were able to identify analytically. There is an explanation of this feature using the Fourier expressions (\ref{eq:2.39}) and (\ref{eq:2.40}). The integrals of interest are \begin{eqnarray}\label{eq:3.8} &&\int_0^Tdt\sin(\omega t)e^{-2i\Omega_0 t} =\frac{1}{2i}\!\int_0^T\!\!dt\left(e^{-i(2\Omega_0-\omega) t}-e^{-i(2\Omega_0+\omega) t}\right), \\ \label{eq:3.9} &&\int_0^Tdt\,\ddot{q}_c^{(0)}(t)\sin(\omega t)e^{-i\Omega_0 t} =\frac{1}{2i}\int_0^Tdt\,\ddot{q}_c^{(0)}(t)\left(e^{-i(\Omega_0-\omega) t}-e^{-i(\Omega_0+\omega) t}\right). \end{eqnarray} We took $\Omega_0 T=16\pi$, i.e., an even multiple of $\pi$, and so is $2\Omega_0T$. When condition (i) is satisfied, every exponential in Eqs. (\ref{eq:3.8}) and (\ref{eq:3.9}) takes the form $e^{-i2\pi Kt/T}$, where $K$ is an integer. Since the set $\{e^{-i2\pi Kt/T},\,K\in{\mathbb Z}\}$ forms an orthogonal basis for functions with period $T$, (\ref{eq:3.8}) (the static contribution) vanishes when condition (i) is verified except when $\omega=2\Omega_0$, which corresponds to $K=0$. Something similar happens with (\ref{eq:3.9}).
The acceleration of the classical trajectory is an antisymmetric function around $T/2$ (see figure \ref{fig:3.1}(b)) that resembles the function $\sin(2\pi t/T)$. When projected to each of the functions $e^{-i2\pi Kt/T}$, the values $K=\pm1$ will be the most relevant ones. In fact, \begin{eqnarray}\label{eq:3.10} \int_0^Tdt\,\ddot{q}_c^{(0)}(t)e^{-i2\pi Kt/T}&=& \frac{90d}{\pi^2T}\frac{1}{K^3},\hspace{.1cm}\text{if $K\neq0$}, \end{eqnarray} where $\ddot{q}_c^{(0)}(t)$ is deduced from (\ref{eq:24}). $K=0$ gives 0 due to antisymmetry. According to (\ref{eq:3.10}), the most significant projection is achieved for $\lvert\Omega_0-\omega\rvert=2\pi/T$ $(\lvert K\rvert=1)$. Equation (\ref{eq:3.10}) also sets a $K^{-3}$ scaling for the rest of the projections. \subsubsection{Crossing between static and dynamical terms} One of the motivations to find the envelopes is to estimate at what point the static contribution starts to dominate the excitation energy and the dynamical contribution becomes irrelevant. In figure \ref{fig:7}(a), we plot the envelopes for the static and dynamical terms as functions of the perturbation frequency and the transport time. In figure \ref{fig:7}(b) we present a top view of these two surfaces, showing at each point only the one that dominates. Even if the envelopes are already much simpler than the corresponding contributions to the excitation, the curve defining the crossing points is complicated because of the oscillating terms of (\ref{eq:28}) and (\ref{eq:29}). In figure \ref{fig:7}(c) we show the envelopes when those oscillating terms are ignored. Thus, we find the transport time at which both envelopes cross as a function of $\omega$, \begin{equation}\label{eq:3.11} T^*(\omega)=\Bigg\{\frac{28800m\;d^2}{\hbar\Omega_0}\left[\frac{\omega^2-(2\Omega_0)^2}{\left(\omega^2-\Omega_0^2\right)^2}\right]^2\Bigg\}^{1/6}. \end{equation} The behavior described by the curve in (\ref{eq:3.11}) is quite intuitive. {\em Each contribution dominates around its own resonance, $\omega=2\Omega_0$ for the static and $\omega=\Omega_0$ for the dynamical}. When perturbing frequencies, assumed to be given, are at or near these values, a change of $\Omega_0$ is advisable to avoid excitations. \begin{figure}[] \begin{center} \includegraphics[width=0.98\textwidth]{oscilatorypert_fig5} \caption{(a) Log plot of the two envelope functions, static (blue surface) and dynamical (red surface) in Eqs. (\ref{eq:28}) and (\ref{eq:29}), respectively, versus perturbation frequency and the transport time. (b) Top view, showing only the dominant contribution. (c) Top view ignoring the oscillating terms. The parameters $m$, $d$, and $\Omega_0$ are the same as in figure \ref{fig:1}.} \centering \label{fig:7} \end{center} \end{figure} \section{Optimal trajectories for an oscillating trap frequency\label{optsec}} In the previous section we used a polynomial protocol for some given $T$ without trying to optimize performance. We will now look for trajectories that minimize the final excitation at second perturbative order when the trap frequency is perturbed sinusoidally. We present several methods that can be applied to find such optimal trap trajectories. \subsection{Design of the classical trajectory through an auxiliary function (Fourier method)}\label{section:4.1} For a particle shuttled by a constant-frequency trap, the final excitation energy is, assuming zero boundary conditions for the trap velocity, proportional to the modulus squared of the Fourier transform of the trap acceleration.
This was exploited in a systematic approach by Gu\'ery-Odelin and Muga \cite{Guery2014}. The approach makes use of an auxiliary function $g(t)$ to impose the condition \begin{equation}\label{eq:2.36} {\mathcal V}(\Omega_0)=\bigg\lvert\int_0^Tdt\,\ddot{q}_0(t)e^{-i\Omega_0t}\bigg\rvert=0 \end{equation} at chosen, discrete values of the trap frequency, see below. This method does not use invariants explicitly (even if they are of course implicit) and was devised to transport different species and/or achieve robustness with respect to uncertainty or slow changes in the trap frequency, i.e., the trap frequency must be effectively constant throughout each single shuttling process. When condition (\ref{eq:2.36}) is satisfied, and as long as there are no time-dependent perturbations affecting the trap parameters, the system ends unexcited. Qi {\it et al.} \cite{Muga2021} posed as open questions the applicability or possible generalizations of the method to deal with fast time dependences of the trap frequency (i.e., noticeable in the scale of $T$), as well as its combination with optimization algorithms. In this section we shall first generalize the method in Ref. \cite{Guery2014} to produce transport without residual excitation for a trap frequency affected by a sinusoidal perturbation. To optimize the trap trajectories we shall later find it more efficient to directly impose conditions of the form (\ref{eq:2.36}) without the need to use an intermediate function $g$. To find trajectories that minimize the final excitation we use the Fourier form (\ref{eq:2.39}) for $f(t)=\sin(\omega t)$. The integral to be minimized is \begin{eqnarray}\label{eq:36} \hspace*{-.1cm}{\cal I}(\omega,\Omega_0)\equiv\int_0^T\!dt\,\sin(\omega t)\ddot{q}_c^{(0)}\!(t)e^{-i\Omega_0t} =\frac{1}{2i}\!\Bigg[\!\!\int_0^T\!\!\!dt\,\ddot{q}_c^{(0)}\!(t)e^{-i(\Omega_0-\omega)t} \!-\!\!\int_0^T\!\!\!dt\,\ddot{q}_c^{(0)}\!(t)e^{-i(\Omega_0+\omega)t}\Bigg]. \end{eqnarray} Thus, transport without final excitation at second perturbative order can be achieved by designing a $q_c^{(0)}(t)$ for which the Fourier transform of its acceleration at $\Omega_0+\omega$ and $\Omega_0-\omega$ takes the same value. One possibility is to cancel it at both frequencies. Following \cite{Guery2014} we introduce an auxiliary function $g(t)$ such that \begin{eqnarray}\label{eq:4.3} \ddot{q}_c^{(0)}(t)=\frac{d^4g}{dt^4}(t)+\big[\left(\Omega_0-\omega\right)^2+\left(\Omega_0+\omega\right)^2\big]\frac{d^2g}{dt^2}(t) +\left(\Omega_0^2-\omega^2\right)^2g(t), \end{eqnarray} and which obeys the boundary conditions $g(0)=g(T)= \dot{g}(0)=\dot{g}(T)=\ddot{g}(0)=\ddot{g}(T)=g^{(3)}(0)=g^{(3)}(T)=g^{(4)}(0)=g^{(4)}(T)=0$, where dots denote derivatives with respect to time and $g^{(n)}$ is the $n$-th derivative. Equation (\ref{eq:36}) vanishes with such an auxiliary function. We also have to take into account the boundary conditions $q_c^{(0)}(T)=d$ and $\dot{q}_c^{(0)}(T)=0$, which imply that \begin{equation}\label{eq:4.4} \int_0^Tdt\int_0^{t}dt'g(t')=\frac{d}{\left(\Omega_0^2-\omega^2\right)^2}\;\; \text{and} \; \int_0^Tdt\,g(t)=0. \end{equation} \begin{figure}[th] \begin{center} \includegraphics[width=0.85\textwidth]{oscilatorypert_fig6} \caption{(a) Trap trajectory from (\ref{eq:4.3}) for $\omega=3.5$ MHz. (b) Final dynamical excitation at second perturbative order and in units of final quanta versus the perturbation frequency for the designed trajectory.
The parameters are $\lambda=0.01$, $\Omega_0=2\pi\times4$ MHz, $m=1.455\cdot10^{-25}$ kg ($^{{88}}$Sr$^+$ ion), $d=50\;\mu$m and $T=2\;\mu$s.} \centering \label{fig:4.1} \end{center} \end{figure} The auxiliary function $g(t)$ is designed to satisfy its boundary conditions and (\ref{eq:4.4}). From $g(t)$, $\ddot{q}_c^{(0)}(t)$ is deduced via (\ref{eq:4.3}). Then, we integrate this expression twice to get the classical trajectory. Let us consider, similarly to \cite{Guery2014}, the simple form \begin{equation}\label{eq:4.5} g(t)={\cal N}\left(\frac{t}{T}\right)^5\left(\frac{t}{T}-1\right)^5\left(\frac{t}{T}-\frac{1}{2}\right), \end{equation} where ${\cal N}$ is a normalization factor that has to be deduced from the first condition of (\ref{eq:4.4}). The second and third factors in (\ref{eq:4.5}) guarantee the boundary conditions at initial and final times, while the fourth one provides the odd symmetry to satisfy the second condition in (\ref{eq:4.4}). In figure \ref{fig:4.1} we show the results found using this method for $\omega=3.5$ MHz. We have plotted the trap trajectory $Q_0(t)$ and the final excitation (in quanta units) versus the perturbation frequency $\omega$ around $3.5$ MHz. The rest of the parameters are the same as the ones used in the previous section. We observe a vanishing excitation at the perturbation frequency used to design the trajectory. The procedure to make the protocol robust for a range of trap frequencies in \cite{Guery2014} can be adapted to our problem. Suppose that there are multiple perturbation frequencies $\omega_1$, $\omega_2$,..., $\omega_p$ affecting the shuttling operation. In order for the protocol to provide an excitation-free final state (at second perturbative order of the excitation), the classical acceleration may be written as $$ \ddot{q}_c^{(0)}(t)\!=\!P_0g^{(4p)}+P_1g^{(4p-2)}+\cdots+ P_jg^{(4p-2j)}+\cdots+P_{2p}g(t), $$ where \begin{eqnarray} P_0&=&1,\hspace{1cm} P_1=\sum_{i=1}^p\sum_{\sigma_i=\{+,-\}} (\Delta_i^{\sigma_i})^2, \nonumber\\ P_2&=&\sum_{i<j}\sum_{\sigma_{i,j}=\{+,-\}} (\Delta_i^{\sigma_i})^2(\Delta_j^{\sigma_j})^2, \dots\nonumber\\ P_{2p}&=& (\Delta_1^+)^2(\Delta_1^-)^2\cdots (\Delta_p^+)^2(\Delta_p^-)^2,\nonumber \end{eqnarray} and $\Delta_i^\pm=\Omega_0\pm\omega_i. $ Now, the function $g(t)$ should have $8p+2$ vanishing boundary conditions, $$ g(0)\!=\!g(T)\!=\!\dot{g}(0)\!=\!\dot{g}(T)\!=\!\cdots\!=\!g^{(4p)}(0)\!=\!g^{(4p)} (T)=0. $$ If the perturbation frequencies are distributed in a continuous region, the robustness is achieved by choosing the $p$ frequencies close enough in that region, flattening the excitation in a window of frequencies. \subsection{Fourier ansatz for the classical acceleration}\label{section:4.2} With the method described in the previous subsection, the number of boundary conditions imposed on $g(t)$ escalates considerably as robustness is increased. To solve this problem and avoid the use of an intermediate function $g(t)$, we now choose a different, direct ansatz, \begin{equation}\label{eq:4.8} \ddot{q}_c^{(0)}(t)=\sum_{j=1}^N a_j \sin(j\pi t/T). \end{equation} The boundary conditions $\ddot{q}_c^{(0)}(0)=\ddot{q}_c^{(0)}(T)=0$ are automatically satisfied, and the number of terms $N$ will depend on the number of constraints imposed on $q_c^{(0)}(t)$ and its derivatives, whereas the $\{a_j\}$ will be determined from them.
We integrate (\ref{eq:4.8}) to specify the classical trajectory and velocity, \begin{eqnarray} \hspace*{-.4cm}q_c^{(0)}(t)&=&\int_0^tdt'\int_0^{t'}dt''\ddot{q}_c^{(0)}(t'') =\sum_{j=1}^N a_j\frac{T}{(j\pi)^2} \left[j\pi t-T\sin(j\pi t/T)\right], \\ \hspace*{-.4cm}\dot{q}_c^{(0)}(t)&=&\int_0^tdt'\ddot{q}_c^{(0)}(t')=\sum_{j=1}^N \!a_j\frac{T}{j\pi}\!\left[1\!-\! \cos(j\pi t/T)\right]\!. \end{eqnarray} It can be checked that the initial conditions $q_c^{(0)}(0)=0$ and $\dot{q}_c^{(0)}(0)=0$ are fulfilled. The final time boundary conditions $q_c^{(0)}(T)=d$ and $\dot{q}_c^{(0)}(T)=0$ lead to two conditions on the coefficients $\{a_j\}$, \begin{equation}\label{eq:4.11} \sum_{j=1}^N a_j \frac{T^2}{j\pi}=d,\;\;\;\;\;\; \sum_{j=1}^N \frac{a_j}{j} \left[1-(-1)^j\right]=0. \end{equation} To cancel the final excitation, the $\{a_j\}$ must also verify \begin{eqnarray}\label{eq:4.13} {\cal I}(\omega,\Omega_0)&=&\sum_{j=1}^N a_j {\cal I}_j(\omega,\Omega_0)=0, \\ {\cal I}_j(\omega,\Omega_0)&\equiv&\int_0^Tdt\,\sin(\omega t)\sin(j\pi t/T)e^{-i\Omega_0t}\nonumber\\ &=&\frac{T}{2}\Bigg\{\frac{i\Omega_0 T}{(j\pi-\omega T)^2-(\Omega_0 T)^2}-\frac{i\Omega_0 T}{(j\pi+\omega T)^2-(\Omega_0 T)^2}\nonumber\\ &+&e^{-i\Omega_0 T}\bigg[\frac{\left(j\pi-\omega T\right)\sin(j\pi-\omega T)-i\Omega_0 T \cos(j\pi-\omega T)}{(j\pi-\omega T)^2-(\Omega_0 T)^2}\nonumber\\ &-&\frac{\left(j\pi+\omega T\right)\sin(j\pi+\omega T)-i\Omega_0 T \cos(j\pi+\omega T)}{(j\pi+\omega T)^2-(\Omega_0 T)^2}\bigg]\Bigg\}.\nonumber \end{eqnarray} These are in fact two conditions, since the real and the imaginary parts have to be canceled separately. Together with conditions (\ref{eq:4.11}), there are 4 equations for the coefficients $\{a_j\}$, and thus, at least $N=4$ terms are needed to define the classical trajectory. \begin{figure} \begin{center} \includegraphics[width=0.98\textwidth]{oscilatorypert_fig7} \caption{(a) Transport function for a protocol that cancels the integral from (\ref{eq:36}) (red solid line), and up to its first (yellow dashed line), second (light green dotted line) and third (dark green dash-dotted line) derivatives with respect to the perturbation frequency when $\Omega_0=2\pi\times 4$ MHz and $\omega=2\pi\times5$ MHz. The parameters are: $^{88}$Sr$^+$ ion, $d=50$ $\mu$m, $T=2$ $\mu$s, and $\lambda=0.01$. (b) Dynamical component of the final excitation energy in units of final quanta in each of the protocols versus $\omega$. (c) Transient energy (in units of quanta) during the transport for the protocols shown in figures (a) and (b) (same line and color code). \label{fig:8}} \end{center} \end{figure} More terms can be added to increase robustness. For instance, to have excitation-free final states for a range of perturbation frequencies, we may impose the cancellation of the derivatives of ${\cal I}(\omega,\Omega_0)$ from (\ref{eq:36}) with respect to $\omega$. For every derivative nullified, we have to add at least two terms in (\ref{eq:4.8}) so that the system of equations for the $\{a_j\}$ is not overdetermined. In figure \ref{fig:8} we compare different transport protocols: $N=4$ with no restriction on the derivatives; $N=6$ with cancellation of the first derivative with respect to $\omega$; $N=8$ with cancellation of the first two derivatives; and $N=10$ with cancellation of the first three derivatives. The coefficients $\{a_j\}$ are uniquely determined. The parameters are the same as the ones used in figure \ref{fig:1} or figure \ref{fig:4.1}.
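For concreteness, a minimal sketch of how the four conditions (\ref{eq:4.11}) and (\ref{eq:4.13}) fix the $N=4$ coefficients (illustrative code; the integrals ${\cal I}_j$ are evaluated by simple quadrature rather than through the closed form above):
\begin{verbatim}
import numpy as np

Omega0, omega = 2*np.pi*4e6, 2*np.pi*5e6
d, T, N = 50e-6, 2e-6, 4
t = np.linspace(0.0, T, 20001)

def I_j(j):   # I_j by trapezoidal quadrature on a uniform grid
    f = np.sin(omega*t)*np.sin(j*np.pi*t/T)*np.exp(-1j*Omega0*t)
    return 0.5*np.sum(f[1:] + f[:-1])*(t[1] - t[0])

A = np.zeros((4, N)); b = np.zeros(4)
for j in range(1, N + 1):
    A[0, j-1] = T**2/(j*np.pi)           # q_c(T) = d
    A[1, j-1] = (1 - (-1)**j)/j          # qdot_c(T) = 0
    A[2, j-1] = I_j(j).real              # Re I(omega, Omega0) = 0
    A[3, j-1] = I_j(j).imag              # Im I(omega, Omega0) = 0
b[0] = d
a = np.linalg.solve(A, b)                # unique solution for N = 4
print(a)
\end{verbatim}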
We clearly observe in figure \ref{fig:8}(b) an increase of the robustness against the perturbation frequencies when the number of canceled derivatives increases. A price to pay is a more oscillatory behavior in the trap trajectory $Q_0(t)$, see figure \ref{fig:8}(a) (let us recall that the trap trajectory is related to the classical trajectory $q_c(t)$ through (\ref{eq:14})), which may involve larger transient energies, see figure \ref{fig:8}(c). \begin{figure}[] \begin{center} \includegraphics[width=0.80\textwidth]{oscilatorypert_fig8} \caption{(a) Transport function for a protocol that cancels the integral in (\ref{eq:36}) (red solid line), and up to its first (yellow dashed line), second (light green dotted line) and third (dark green dash-dotted line) derivatives with respect to $\Omega_0$ when $\Omega_0=2\pi\times 4$ MHz and $\omega=2\pi\times5$ MHz. The parameters are: $^{88}$Sr$^+$ ion, $d=50$ $\mu$m, $T= 2$ $\mu$s, and $\lambda=0.01$. (b) Dynamical component of the final excitation energy in units of final quanta versus the trap frequency.} \centering \label{fig:9} \end{center} \end{figure} Similarly, the concept of robustness can be extended to other errors. For instance, suppose that, aside from the sinusoidal perturbation, the central trap frequency $\Omega_0$ takes different values over multiple runs of a transport experiment. Robustness with respect to these deviations can be achieved by imposing the cancellation of the derivatives of ${\cal I}(\omega,\Omega_0)$ with respect to $\Omega_0$. In figure \ref{fig:9} we show again 4 different protocols: $N=4$ with no restriction on the derivatives; $N=6$ with cancellation of the first derivative; $N=8$ with cancellation of the first two derivatives; and $N=10$ with cancellation of the first three derivatives. As in the previous case, the coefficients $\{a_j\}$ are uniquely determined. Now, the protocols increase the robustness against variations of $\Omega_0$ when the number of nullified derivatives increases. These ideas can be combined, simultaneously canceling derivatives with respect to $\omega$ and $\Omega_0$ and making the protocol robust against variations of both of them. \subsection{Comparison between the auxiliary function and Fourier ansatz methods} The methods in subsections \ref{section:4.1} and \ref{section:4.2} lead to trajectories that leave the ion in its final position without final dynamical excitation up to second perturbation order. Both rely on nullifying the integral (\ref{eq:36}). However, the Fourier ansatz is more straightforward, since it does not involve additional steps to design an auxiliary function. The auxiliary function method forces two integrals to vanish for each perturbation frequency (see (\ref{eq:36})) instead of their sum. In figure \ref{fig:9.1} we have compared the two methods by finding the trajectories for which the second-order dynamical excitation is zero for a fixed perturbation frequency $\omega_{target}=4.5$ MHz, using the same parameters as in figure \ref{fig:4.1}. Both methods are used without applying additional conditions to flatten the excitation, that is, in their most basic forms (cancellation of up to the 4th derivative of $g(t)$ at its boundaries and $N=4$ terms in the Fourier ansatz). The final dynamical excitation versus the actually applied $\omega$ is lower for the trajectory found with the Fourier ansatz at $\omega_{target}$, see figure \ref{fig:9.1}(b). The trajectory given by the Fourier ansatz is also smoother, avoiding significant accelerations during the shuttling.
Therefore, {\em the Fourier ansatz method presents advantages in simplicity and effectiveness over the method that uses an auxiliary function}. In the following subsection, we apply the Fourier sum ansatz in combination with a genetic algorithm. \begin{figure}[] \begin{center} \includegraphics[width=0.89\textwidth]{oscilatorypert_fig9} \caption{Comparison between the method based on an auxiliary function, see \ref{section:4.1}, and the method based on a Fourier ansatz for $\ddot{q}_c^{(0)}(t)$. (a) Trap trajectories with zero final excitation (up to second perturbation order) for $\omega=4.5$ MHz. (b) Final excitation versus the perturbation frequency for the trajectories in the left figure. The rest of parameters are the same as in figure \ref{fig:4.1}.} \centering \label{fig:9.1} \end{center} \end{figure} \subsection{Genetic algorithms \label{section:4.3}} The method in subsection \ref{section:4.2} can be generalized for further flexibility by including more terms in (\ref{eq:4.8}) and applying more conditions. For example, trap trajectories that do not exceed the range from the initial to the final position, i.e., $0<Q_0(t)<d$, are highly preferable. Although this condition is fulfilled by the protocols in figures \ref{fig:8} and \ref{fig:9}, it is not generally satisfied. Short transport times, perturbation frequencies close to the trap frequency, and cancellation of too many derivatives may lead to trajectories that go beyond these limits. A solution is to leave the system of equations for the $\{a_j\}$ underdetermined by letting $N$ be greater than the number of conditions. Then the coefficients may be chosen by minimizing a given cost function. For instance, to limit the trajectory inside its boundaries $[0,d]$, the cost function can be \begin{equation}\label{eq:cf} \hspace*{-.2cm}f\!=\!\!\int_0^T\!\!dt\,F[Q_0(t)],\,\,\;\;\;\;\;\;\; F[Q_0(t)]\! =\! \begin{cases} Q_0(t)\!-\!d, &Q_0>d\\ 0, &0\!\le\! Q_0\!\le\! d\\ -Q_0(t),\; &Q_0<0, \\ \end{cases} \end{equation} with the least possible number of terms $N$ defining $Q_0$. Genetic algorithms are versatile optimization methods where a population of individuals evolves through selection, crossover and mutation towards better solutions, inspired by natural selection \cite{Goldberg1989}. \begin{figure}[] \begin{center} \includegraphics[width=0.48\textwidth]{oscilatorypert_fig10} \caption{Trap trajectories that cancel the integral from (\ref{eq:4.13}) for a perturbation frequency $\omega=2\pi\times5$ MHz, a trap frequency $\Omega_0=2\pi\times4$ MHz, and a transport time $T=0.5$ $\mu$s. The red curve is for $N=4$ terms in the ansatz (\ref{eq:4.8}), while the green curve is for $N=10$ and letting the genetic algorithm minimize condition (\ref{eq:cf}).} \centering \label{fig:10} \end{center} \end{figure} In our problem, each individual is a set of $N$ coefficients $\{a_j\}$ such that conditions (\ref{eq:4.11}) and (\ref{eq:4.13}) are verified. The algorithm stops whenever the result of integral (\ref{eq:cf}) is zero, or when too many generations give the same value for the integral, meaning that the algorithm has fallen into a local minimum and mutations are not enough to jump to a better minimum. In figure \ref{fig:10} we compare the trap trajectory that satisfies the aforementioned conditions found for $N=4$, which is the unique solution, since we have 3 real + 1 imaginary conditions (red line), with a trap trajectory found by the genetic algorithm for $N=10$ (green line).
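A minimal sketch of how the penalty (\ref{eq:cf}) can be evaluated for a candidate coefficient set follows (illustrative; the helper \texttt{Q0\_of}, which would rebuild $Q_0(t)$ from the $\{a_j\}$ via (\ref{eq:4.8}) and (\ref{eq:14}), is an assumption and is not shown, and any standard genetic-algorithm library can supply the selection, crossover and mutation loop):
\begin{verbatim}
import numpy as np

def cost(a, Q0_of, d, T, samples=2000):
    """Penalty: integral of the excursion of Q_0(t) outside [0, d]."""
    t = np.linspace(0.0, T, samples)
    Q = Q0_of(a, t)                                   # candidate trajectory
    F = np.maximum(Q - d, 0.0) + np.maximum(-Q, 0.0)  # piecewise integrand
    dt = t[1] - t[0]
    return float(np.sum(0.5*(F[1:] + F[:-1]))*dt)     # trapezoidal integral
\end{verbatim}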
The short transport time ($T=0.5$ $\mu$s) makes the first protocol exceed the interval $[0,d]$, while the solution by the genetic algorithm stays inside $[0,d]$. \subsection{Optimal Control Theory\label{section:4.4}} Invariant-based inverse engineering may be combined with optimal control theory (OCT) via Pontryagin's principle \cite{Pontryagin}, see Chen {\it et al.} \cite{Chen2011} and more examples and references in Gu\'ery-Odelin {\it et al.} \cite{Guery2019}. In this section, we will apply the OCT formalism to minimize the transient potential energy. Let us define first the state variables \begin{eqnarray} x_1(t)&=&q_c^{(0)}(t),\hspace{6mm} x_2(t)=\dot{q}_c^{(0)}(t), \nonumber\\ x_3(t)&=&q_c^{(1)}(t),\hspace{6mm} x_4(t)=\dot{q}_c^{(1)}(t) \end{eqnarray} and (scalar) control function \begin{equation} u(t)=q_c^{(0)}(t)-Q_0(t). \end{equation} Equations (\ref{eq:14}) and (\ref{eq:18}) give a system of equations with the form $\mathbf{\dot{x}}=\mathbf{f}\big[\mathbf{x}(t),u(t)\big]$, that is \begin{eqnarray}\label{eq:4.15} \dot{x}_1(t)&=&x_2(t),\;\; \dot{x}_2(t)=-\Omega_0^2u(t),\;\; \dot{x}_3(t)=x_4(t),\nonumber\\ \dot{x}_4(t)&=&-\Omega_0^2x_3(t)-2\Omega_0^2\sin(\omega t)u(t). \end{eqnarray} Our optimal control problem is to minimize some cost function. We choose to minimize the average dynamical term of the potential energy, \begin{equation}\label{eq:4.20} \overline{E}_{P,dyn}=\frac{1}{T}\int_0^Tdt\frac{m\Omega^2(t)}{2}\left[q_c(t)-Q_0(t)\right]^2, \end{equation} which, assuming small $\lambda$, can be approximated by \begin{equation}\label{eq:4.16} \overline{E}_{P,dyn}\approx\frac{1}{T}\frac{m\Omega_0^2}{2}\int_0^Tdt\left[u(t)\right]^2. \end{equation} Equation (\ref{eq:4.16}) uses only the zeroth order approximation for the energy. We shall later demonstrate that this order is enough to account for the transient energy. The reason is that, unlike the final energy, the zeroth order of the energy takes a nonzero value during the transport, and therefore higher perturbative orders are negligible in comparison. Thus, from (\ref{eq:4.16}) the cost function is \begin{equation}\label{eq:4.21} J(u)=\int_0^Tdt\,\left[u(t)\right]^2. \end{equation} For an excitationless transport, the boundary conditions that have to be satisfied are (i) (\ref{eq:6}) and (\ref{eq:15}) for $q_c^{(0)}$ and $\dot{q}_c^{(0)}$, and (ii) the cancellation of $q_c^{(1)}$ and $\dot{q}_c^{(1)}$ at the endpoints, to make the first line in (\ref{eq:energy2}) (the dynamical excitation) vanish at final time. This implies that the dynamical system starts and ends at \begin{equation}\label{eq:4.24} \mathbf{x}(0)=\begin{pmatrix} x_1(0)\\ x_2(0)\\ x_3(0)\\x_4(0) \end{pmatrix}=\begin{pmatrix} 0\\ 0\\ 0 \\ 0 \end{pmatrix},\; \mathbf{x}(T)=\begin{pmatrix} x_1(T)\\ x_2(T)\\ x_3(T)\\x_4(T) \end{pmatrix}=\begin{pmatrix} d\\ 0\\ 0 \\ 0 \end{pmatrix}. \end{equation} The additional conditions $Q_0(0)=0$ and $Q_0(T)=d$ are translated to the control parameter as $u(0)=u(T)=0$. At these points, jumps of the optimal control will be required to match these boundary conditions. To minimize the cost function (\ref{eq:4.21}), we apply Pontryagin's maximal principle. The control Hamiltonian is \begin{equation} H_c=p_1x_2-p_2\Omega_0^2u+p_3x_4-p_4x_3\Omega_0^2-2p_4\Omega_0^2\sin(\omega t)u-p_0u^2, \end{equation} where $p_0$ is a normalization constant greater than 0, and $\{p_1,\,p_2,\,p_3,\,p_4\}$ are the costates (time dependences of the state, costate and control variables have been dropped to simplify the notation).
Pontryagin's maximal principle states that for the dynamical system $\mathbf{\dot{x}}= \mathbf{f}(\mathbf{x}(t), u(t))$, the coordinates of the extremal vector $\mathbf{x}(t)$ and of the corresponding adjoint state $\mathbf{p}(t)$ fulfill $\mathbf{\dot{x}}=\partial{H_c}/\partial{\mathbf{p}}$ and $\mathbf{\dot{p}}=-\partial{H_c}/\partial{\mathbf{x}}$, which gives the four costate equations \begin{equation} \dot{p}_1(t)=0,\;\;\;\;\;\; \dot{p}_2(t)=-p_1(t),\;\;\;\;\;\; \dot{p}_3(t)=\Omega_0^2p_4(t),\;\;\;\;\;\; \dot{p}_4(t)=-p_3(t).\label{eq:4.26} \end{equation} According to the maximum principle, the control $u(t)$ maximizes the control Hamiltonian at each time. For simplicity, we choose $p_0=\Omega_0^2/2$, so that the stationarity condition $\partial{H_c}/\partial{u}=0$ for the control Hamiltonian gives \begin{equation} u(t)=-\left[p_2(t)+2\sin(\omega t)p_4(t)\right], \end{equation} whereas, from (\ref{eq:4.26}), we get \begin{eqnarray} p_1(t)&=&-c_1,\;\;\, p_2(t)=c_1t+c_2,\\ p_3(t)&=&c_3\Omega_0\sin\left(\Omega_0 t\right)-c_4\Omega_0\cos\left(\Omega_0 t\right),\\ p_4(t)&=&c_3\cos\left(\Omega_0 t\right)+c_4\sin\left(\Omega_0 t\right), \end{eqnarray} where $c_1,...,c_4$ are constants that will eventually be determined from the boundary conditions on $\mathbf{x}(t)$. Substituting $p_2$ and $p_4$ into $u(t)$, and then inserting $u(t)$ in the system (\ref{eq:4.15}), we find explicit but somewhat lengthy expressions for $x_1$, $x_2$, $x_3$, and $x_4$, not shown here. Finally, the trap trajectory is determined as $$ Q_0(t)= \begin{cases} 0, & t\le0\\ x_1(t)-u(t), & 0<t<T\\ d,& t\ge T \end{cases}. $$ \begin{figure}[] \begin{center} \includegraphics[width=0.98\textwidth]{oscilatorypert_fig11} \caption{Time average of the dynamical contribution to the potential energy using (\ref{eq:4.20}), including the first perturbation order of $q_c(t)$, during the transport of a $^{88}$Sr$^+$ ion versus (a) the transport time $T$, (b) the trap frequency $\Omega_0$, (c) the perturbation frequency $\omega$, and (d) the shuttling distance $d$. In each figure, the parameters that do not vary are kept at $T=2$ $\mu$s, $\Omega_0=2\pi\times4$ MHz, $\omega=2\pi\times5$ MHz and $d=50$ $\mu$m. Here $\delta$ is the asymptotic exponent with respect to each parameter.} \centering \label{fig:11} \end{center} \end{figure} The discontinuities of $Q_0(t)$ at $t=0$ and $t=T$ may prevent these trajectories from being experimentally feasible, but they provide, in any case, a lower bound for the time average of $E_P$. The values of the coefficients $c_1-c_8$ depend mainly on $T$, $\Omega_0$, $\omega$ and $d$. In figure \ref{fig:11}, we show the average dynamical potential energy as a function of each of these 4 parameters, while keeping the rest fixed (see caption for further details). Although for the optimal control problem we have only considered the zeroth order of the transient energy, in the calculations for this figure we have included the first order perturbative term of $q_c(t)$. The results are indistinguishable from those in which it is not included. The asymptotic behavior, away from oscillations around $\omega=\Omega_0$, is given by a power law of $T$, $\Omega_0$ and $d$, \begin{equation} \overline{E}_{P,dyn}\varpropto\frac{d^2}{T^4\Omega_0^2}, \end{equation} in agreement with the lower bound found for the average potential energy in the unperturbed case \cite{Torrontegui2011}. The perturbation does not change the asymptotic behavior of the mean potential energy.
This is also shown in figure \ref{fig:11}(c), where the curve flattens far from $\omega=\Omega_0$. \section{Conclusion\label{consec}} In this work, we studied the effect of small perturbations in some of the trap parameters in shortcuts-to-adiabaticity (STA) shuttling protocols of an ion driven by a harmonic trap, with emphasis on sinusoidal perturbations or their combinations. We have also found robust protocols with respect to these perturbations. We have applied the invariant-based inverse engineering formalism, combined with a perturbative treatment, to find expressions of the final excitation when the perturbation affects the trap frequency or the trap trajectory, identifying static and dynamical terms (independent and dependent, respectively, on the ideal STA trajectory). We have also found for these terms simple Fourier integral forms. Quite generally, the static contribution is worse for perturbed trajectories than for perturbed frequencies, which suggests putting the emphasis on implementing the trajectory faithfully in the inversion subroutines that map ideal trajectories to implemented electrode voltages in multisegmented Paul traps. We have thoroughly analyzed the basic 5th order polynomial STA protocol to shuttle a particle over a distance $d$ in a time $T$ for a sinusoidally perturbed trap frequency. (The analysis for the static contribution is generic and valid for any STA protocol.) We could determine points with no final (static and dynamical) excitation when the perturbation frequency is known. We also found conditions, in particular minimal times, for the static contribution to dominate. Finally, we have presented several techniques to optimize the driving for sinusoidally perturbed trap frequencies with respect to the final energy over a span of perturbation frequencies, the trajectory domain, or the average transient energy. These techniques are flexible and complementary; they could be applied to other objectives as well. In particular, the same approaches could be applied to perturbations in the trajectory. Both methods described in \ref{optsec}\ref{section:4.1} (auxiliary function) and \ref{optsec}\ref{section:4.2} (Fourier ansatz) to design the classical acceleration increase robustness by widening the window of perturbation frequencies for quiet transport. We found better results with the Fourier ansatz method when comparing the most basic approaches, but note that the auxiliary function method admits unexplored generalizations with different auxiliary functions. The Fourier ansatz method can be easily combined with optimization algorithms, such as genetic algorithms, as in \ref{optsec}\ref{section:4.3}. Although we have focused on limiting the trap trajectory inside the range $[0,d]$, genetic algorithms can be used for a broad span of optimizations (bounded velocity, minimal peak transient energy, ...). A problem with genetic algorithms is that there is no guarantee that the solution found is the global minimum, and many runs of the algorithm could be needed to find an optimal trajectory. On the other hand, optimal control theory offers the tools to find analytical bounds and asymptotic behavior, even if the solutions may contain discontinuities that make them hard to implement experimentally. \vskip6pt \enlargethispage{20pt} \aucontribute{HE carried out the calculations and drafted the manuscript. XJL worked on the perturbative analysis. JE provided support on the numerical work. JGM designed the study.
All authors read and approved the manuscript.} \competing{The authors declare that they have no competing interests.} \funding{This work was supported by the Basque Country Government (Grant No. IT986-16), by the Spanish Ministry of Science and Innovation through projects PGC2018-101355-B-I00 and PGC2018-095113-B-I00 (MCIU/AEI/FEDER,UE), and by the Natural Science Foundation of Henan Province (Grant No. 212300410238). } \ack{We thank A. Ruschhaupt, D. Gu\'ery-Odelin, E. Torrontegui, J. Chiaverini, and L. Chi for many discussions.}
\section{Introduction} Over the last decades, great attention has been paid to the development of both exact and approximate techniques to solve and examine different dynamical problems for quantum strongly coupled systems whose interaction Hamiltonians are expressed by nonlinear functions of operators describing subsystems (see, e.g., [1-9] and references therein). However, as a rule, such techniques are either adapted for treating special forms of model Hamiltonians and initial quantum states [1-5,7-9] or require lengthy and tedious calculations (as is the case, e.g., for the algebraic Bethe ansatz [6]). Recently, a new universal Lie-algebraic approach has been developed [10-13] to get exact solutions of both spectral and evolution problems for some nonlinear quantum models of strongly coupled subsystems having symmetry groups $G_{inv}$. It is based on exploiting the formalism of polynomial Lie algebras $g_{pd}$ as dynamic symmetry algebras $g^{DS}$ of the models under study; besides, the generators of these algebras $g_{pd}$ can be interpreted as $G_{inv}$-invariant ``essential'' collective dynamic variables in whose terms the model dynamics are described completely. Specifically, this approach enabled us to develop some efficient techniques for solving physical tasks in the case of $g^{DS}=sl_{pd}(2)$, when model Hamiltonians $H$ are expressed as follows $$ H = aV_0 +g V_+ + g^* V_- +C,\quad [V_{\alpha}, C]=0, \quad V_- =(V_+)^+, \eqno (1.1) $$ where $C$ is a function of model integrals of motion $R_i$ and $V_0, V_{\pm}$ are the $sl_{pd}(2)$ generators satisfying the commutation relations $$ [V_0, V_{\pm}]= \pm V_{\pm}, \quad [V_-, V_+] = \psi_n(V_0+1) - \psi_n(V_0),$$ $$\psi_n(V_0)=A\prod_{i=1}^{n} (V_0+\lambda_i(\{R_j\})) \eqno (1.2)$$ The structure polynomials $\psi_n(V_0)$ depend additionally on $\{R_i, i=1, \dots\}$, and their exact expressions for some widespread classes of concrete models were given in [10-12]. All techniques [10-13] are based on using expansions of the most important physical quantities (evolution operators, generalized coherent states (GCS), eigenfunctions etc.) by power series in the $sl_{pd}(2)$ shift generators $V_{\pm}$ and on decompositions $$ L(H) =\sum_{\oplus}L([l_i]), \quad (V_+V_- -\psi_n(V_0)\equiv -\psi_n(R_0) |_{L([l_i])}=0 \eqno (1.3)$$ of Hilbert spaces $L(H)$ of quantum model states into direct sums of the subspaces $L([l_i])$ which are irreducible with respect to joint actions of the algebras $sl_{pd}(2)$ and symmetry groups $G_{inv}$ and describe specific ``$sl_{pd}(2)$-domains'' evolving independently in time under the action of the Hamiltonians (1.1); $l_0$ are lowest weights of $L([l_i])$: $\psi_n(l_0)=0$, and other quantum numbers $l_i, i=1, \dots$ are eigennumbers of the operators $R_i$. Then, using restrictions of Eqs. (1.1)-(1.2) on $L([l_i])$, one can develop simple algebraic calculation schemes for finding evolution operators $$U_{H}(t)=\sum_{f=-\infty}^{\infty}V_+^f u(v_0;t), \quad V_+^{-f}\equiv V_-^{f}([\psi_n(V_0)]^{(f)})^{-1}, \; [\psi_n(x)]^{(f)}\equiv \prod_{r=0}^{f-1}\psi_n(x-r), \eqno (1.4a)$$ amplitudes $Q_v(E_f)$ of expansions $$ |E_f\rangle = A_f\prod_{j}(V_+ -\kappa^f_j) |[l_i]\rangle= \sum_v Q_v(E_f) |[l_i]; v\rangle \eqno (1.4b)$$ of energy eigenstates $|E_f\rangle$ in orthonormalized bases $\{|[l_i]; v\rangle : V_0 |[l_i]; v\rangle= (l_0+v) |[l_i]; v\rangle\}$ and appropriate energy spectra $\{E_f\}$ of bound states [11,13].
(In fact, the factorized form of $|E_f\rangle$ given by the first equality in (1.4b) realizes an efficient modification of the algebraic Bethe ansatz [6] in terms of collective dynamic variables related to the $sl_{pd}(2)$ algebras [11,13].) In the paper [12] some explicit integral expressions were found for amplitudes $Q_v(E)$, eigenenergies $\{E_a\}$ and ``coefficients'' $u(v_0;t)$ of evolution operators $U_{H}(t)$ with the help of a specific ``dressing'' (mapping) of solutions of some auxiliary exactly solvable tasks with the dynamic algebra $sl(2)$. However, the exact results obtained do not yield simple working formulas for analyzing models (1.1) and revealing different physical effects (e.g., the structure of collapses and revivals of the Rabi oscillations [2,8], bifurcations of solutions [5] etc.) at arbitrary initial quantum states of the models. Therefore, it is necessary to develop some simple techniques, in particular, to get closed, perhaps approximate, expressions for evolution operators, energy eigenvalues and wave eigenfunctions, which would describe the main physical features of model dynamics with good accuracy (cf. [5,8,9]). Below we examine some possibilities along these lines for models (1.1)-(1.2) by means of reformulating them in terms of the formalism of the usual $sl(2)$ algebra and developing variational schemes corresponding to quasiclassical approximations of the original models, by analogy with the developments [5,14-16]. \section{A reduction of linear $sl_{pd}(2)$ problems to non-linear $sl(2)$ ones} We can reformulate models (1.1)-(1.2) in terms of $sl(2)$ generators using an isomorphism of the $sl_{pd}(2)$ algebras to extended enveloping algebras ${\cal U}_{\psi}(sl(2))$ of the familiar algebra $sl(2)$. This isomorphism is established via a generalized Holstein-Primakoff mapping given on each subspace $L([l_i])$ as follows [10,11] $$Y_0 = V_0-l_0\mp j,\; Y_+= V_+ [\phi_{n-2}(Y_0)]^{-1/2},\; \phi_{n-2}(Y_0)=\frac{\psi_n(Y_0+l_0\pm j+1)}{(j\mp Y_0)(\pm j+1+Y_0)},\; Y_-=(Y_+)^+,$$ $$ [Y_0, Y_{\pm}]= \pm Y_{\pm}, \quad [Y_-, Y_+] = \mp2 Y_0 \eqno (2.1)$$ where $ Y_{\alpha}$ are the $sl(2)$ generators, $\mp j$ are lowest weights of $sl(2)$ irreducible representations realized on subspaces $L([l_i])$ and $\psi_2(x)=(j\pm x)(\pm j+1-x)$ are quadratic structure functions $\psi_n(x)\equiv\psi_2(x)$ of $sl(2)$ (hereafter upper/lower signs corresponding to the $su(2)$/$su(1,1)$ algebras are chosen for finite/infinite dimensions $d([l_i])$ of the spaces $L([l_i])$). Note that, by definition, the functions $\phi_{n-2}(Y_0)$ on the spaces $L([l_i])$ can be chosen as polynomials of $(n-2)$-th degree in $Y_0$. For example, substituting $$\psi_3 (V_0)=\frac{1}{4}(2V_0 +R_2-R_1)(2V_0 +R_1+R_2)(-V_0 +R_2+1),$$ $$l_0=\frac{|k|-s}{3}, \quad l_1=k,l_2=\frac{|k|+2s}{3}, \quad k=0,\pm1, \pm2,...;\;s=2j= d([l_i])-1=0,1,2,... \eqno (2.2)$$ for the three-boson models [11-13] $$ H_{tb} = \omega_1 a^+_1 a_1 +\omega_2 a^+_2 a_2 +\omega_3 a^+_3 a_3 + g(a^+_1a^+_2) a_3 + g^*(a_1a_2) a_3 ^+ , \eqno (2.3a)$$ $$V_0 =(N_1+N_2-N_3)/3,\; V_+ =(a^+_1a^+_2) a_3, \quad a= \omega_1 +\omega_2- \omega_3, \quad N_i=a^+_i a_i, $$ $$2C =R_1(\omega_1 -\omega_2) +R_2(\omega_1+\omega_2 +2\omega_3), \; R_1= N_1-N_2,\;3R_2= N_1+N_2+2N_3 \eqno (2.3b)$$ we get $\phi_{1}(Y_0)= Y_0+j+|k|+1$. Similar expressions can be found for $\phi_{1}(Y_0)$ in the cases of the point-like Dicke and the second harmonic generation models by taking appropriate expressions from [12].
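As an illustrative consistency check (not taken from the original papers [10-13]), the quoted result for $\phi_1(Y_0)$ can be verified symbolically; the sketch below assumes $k\ge 0$, so that $|k|=k$, and the upper ($su(2)$) sign in (2.1):
\begin{verbatim}
import sympy as sp

Y0, j, k = sp.symbols('Y0 j k', positive=True)
s  = 2*j                               # s = 2j
l0 = (k - s)/3                         # lowest weight l_0 (k >= 0 assumed)
R1, R2 = k, (k + 2*s)/3                # eigenvalues l_1, l_2 of R_1, R_2
x  = Y0 + l0 + j + 1                   # shifted argument in the mapping (2.1)
psi3 = sp.Rational(1, 4)*(2*x + R2 - R1)*(2*x + R1 + R2)*(-x + R2 + 1)
phi1 = sp.cancel(psi3/((j - Y0)*(j + 1 + Y0)))
print(sp.expand(phi1))                 # -> Y0 + j + k + 1
\end{verbatim}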
Then, the restrictions $H_{[l_i]}$ of the Hamiltonians (1.1) on $L([l_i])$ may be re-written in terms of $Y_{\alpha}$ as follows $$ H_{[l_i]} = aY_0 + Y_+\tilde g(Y_0) + \tilde g^+(Y_0) Y_- +\tilde C, \quad \tilde g(Y_0)= g\sqrt{\phi_{n-2}(Y_0)},\;\tilde C=C+a(\pm j+l_0), \eqno (2.4) $$ Evidently, this form corresponds to generalizations of semi-classical (linear in $sl(2)$ generators) versions of matter-radiation interaction models [8,9,12] by introducing operator (intensity-dependent) coupling coefficients $\tilde g(Y_0)$ (cf. [3,7]). We emphasize, however, the collective (not associated with a single subsystem) nature of the operators $Y_{\alpha}$ in Eq. (2.4) (cf. [9]); therefore, the dynamic variables $Y_{\alpha}$ correspond to a non-standard quasiclassical approximation (when $\tilde g(Y_0)={\rm const}$ in Eq. (2.4)) of the original models, as follows, e.g., from a direct comparison of such an approximation with the standard (when creation/destruction operators of one mode are replaced by $c$-numbers) semiclassical limits for the model (2.3). If $n=2$, then $\phi_{n-2}(Y_0)=1, sl_{pd}(2)=sl(2), l_0=\pm j $, and the formalism of GCS related to the $SL(2)$ group displacement operators $$S_Y(\xi=re^{i \theta})=\exp(\xi Y_+-\xi^* Y_-)=\exp[t(r)e^{i \theta} Y_+] \exp[-2\ln c(r) Y_0]\exp[-t(r)e^{-i\theta} Y_-], \eqno (2.5)$$ ($t(r)= \tan r/\tanh r, \; c(r)=\cos r/\cosh r$ for $su(2)/su(1,1)$) yields a powerful tool for solving both spectral and evolution tasks [16]. Specifically, in this case, using the well-known $sl(2)$ transformation properties of the operators $Y_{\alpha}$ under the action of $S_Y(\xi)$ [16]: $$S_Y(\xi)Y_{+}S_Y(\xi)^{\dagger}\equiv Y_{+}(\xi)=[c(r)]^2 Y_{+} \pm e^{-i\theta} [ s(2r) Y_0 - e^{-i\theta} [s(r)]^2 Y_-], \; Y_{-}(\xi)= (Y_{+}(\xi))^{\dagger},$$ $$ S_Y(\xi)Y_{0}S_Y(\xi)^{\dagger}\equiv Y_{0}(\xi)=c(2r) Y_{0} - \frac{s(2r)}{2} [ e^{i\theta} Y_{+} + e^{-i\theta} Y_{-}], \quad s(r) = \sin r/\sinh r, \eqno (2.6)$$ the Hamiltonians $H_{[l_i]}$ can be transformed into the form $$ \tilde{H}_{[l_i]}(\xi)=S_Y(\xi)H_{[l_i]} S_Y(\xi)^{\dagger}= \tilde C +Y_0 A_0(a, g; \xi)+Y_+ A_+(a, g; \xi) + Y_- A^*_+(a, g; \xi) \eqno (2.7a)$$ At the values $\xi_0=\frac{g}{2|g|}\arctan\frac{2|g|}{a}$ for $su(2)$ and $\xi_0= \frac{g}{2|g|}{\rm arctanh}\,\frac{2|g|}{a}$ for $su(1,1)$ of the parameter $\xi$ one gets $A_+(a, g; \xi)=0$, and the Hamiltonian $\tilde{H}_{[l_i]}(\xi)$ takes the form $$\tilde{H}_{[l_i]}(\xi_0)= \tilde C +Y_0\sqrt{a^2\pm 4 |g|^2} \eqno (2.7b)$$ which is diagonal on the eigenfunctions $|[l_i];v\rangle= N(j,v)(Y_+)^v|[l_i];v=0\rangle, \; N^{-2}(j,v)=v!(2j)!/(2j-v)!\; \mbox{for}\; su(2)\;\mbox{and}\; N^{-2}(j,v)=v!\Gamma (2j+v)/\Gamma (2j)\; \mbox{for}\; su(1,1)$. Therefore, the original Hamiltonians $H_{[l_i]}$ have the eigenenergies $$E_{v}([l_i];\xi_0)= \tilde C +(\mp j+v)\sqrt{a^2\pm 4 |g|^2} \eqno (2.8a)$$ and the eigenfunctions $$|[l_i];v;\xi_0\rangle= S_Y(\xi_0)^{\dagger}|[l_i];v\rangle \eqno (2.8b)$$ Similarly, when $sl_{pd}(2)=sl(2)$, the operators $S_Y(\xi (t))$ are the "principal" parts of the evolution operators $U_{H}(t)= \exp(i\phi (t) Y_0) S_Y(\xi (t))$ with $c$-number functions $\phi (t), \xi (t)$ determined from a set of non-linear differential equations corresponding to classical motions [16,17]. However, for arbitrary degrees $n$ of $\psi_n(V_0)$ the Hamiltonians (2.4) are essentially non-linear in the $sl(2)$ generators $Y_{\alpha}$, and, therefore, the situation changes drastically. In particular, in general cases it is hardly possible to diagonalize $H_{[l_i]}$ with the help of the operators $S_Y(\xi)$ since analogs of Eq.
(2.7a) on multi-dimensional spaces $L([l_i])$ $$\tilde{H}_{[l_i]}(\xi)=S_Y(\xi)H_{[l_i]}S_Y(\xi)^{\dagger}= aY_0 (\xi) + Y_+(\xi)\tilde g(Y_0(\xi)) + \tilde g^+(Y_0(\xi)) Y_-(\xi) +\tilde C \eqno (2.9) $$ contain (after expanding them in power series) many terms with higher powers of $Y_{\pm}$ [13]. Nevertheless, the formalism of the $SL(2)$ group GCS $|[l_i];v;\xi\rangle=S_Y(\xi)^{\dagger}|[l_i];v\rangle$ [16] can be an efficient tool for analyzing non-linear models [5,11,14-16], in particular, for getting approximate analytical solutions. Specifically, the simplest example of such approximations was obtained in [11] by mapping (with the help of the change $V_{\alpha}\rightarrow Y_{\alpha}$) the Hamiltonians (1.1) to Hamiltonians $H_{sl(2)}$ which are linear in the $sl(2)$ generators $Y_{\alpha}$ (but with modified constants $\tilde a, \tilde g$) and have on each fixed subspace $L([l_i])$ equidistant energy spectra obtained from Eq. (2.8a). However, this (quasi)equidistant approximation, in fact corresponding to a substitution of certain effective coupling constants $\tilde g$ for the true operator entities $\tilde g(Y_0)$ in Eq. (2.4), does not enable one to display many peculiarities of models (1.1) related to essentially non-equidistant parts of their spectra. Therefore, it needs corrections, e.g., with the help of iterative schemes [8,14,15]; specifically, one may develop perturbative schemes by using expansions of the operator entities $\tilde g(Y_0)$ in Taylor series in $Y_0$, as was done implicitly for the Dicke model in [8,9]. But there exists a more effective way, incorporating many peculiarities of models (1.1), to amend the quasi-equidistant approximation. \section{ $SL(2)$ energy functionals and variational schemes for solving spectral and evolution tasks} This way consists in applying the $SL(2)$ GCS $|[l_i];v;\xi\rangle=S_Y(\xi)^{\dagger} |[l_i];v\rangle$ as trial functions in variational schemes for determining energy spectra and quasiclassical dynamics [5,15]. Indeed, the results (2.8) are obtained by using a variational scheme determined by the stationarity conditions $$a)\;\frac{\partial {\cal H}([l_i];v;\xi)}{\partial \theta}=0,\quad b)\;\frac{\partial {\cal H}([l_i];v;\xi)}{\partial r}=0 \eqno (3.1)$$ for the energy functional ${\cal H}([l_i];v;\xi)=\langle[l_i];v;\xi| H|[l_i];v;\xi\rangle= \langle[l_i];v|aY_0 (\xi) + g Y_+(\xi) + g^* Y_-(\xi) +\tilde C |[l_i];v\rangle$. At the same time the appropriate quasiclassical dynamics, which is isomorphic to the exact quantum one when $sl_{pd}(2)=sl(2)$ [14-16], is described by the classical Hamiltonian equations [5,14,16] $$ \dot q =\frac{\partial {\cal H}}{\partial p}, \qquad \dot p = -\frac{\partial {\cal H}}{\partial q}, \quad {\cal H}= \langle z(t)|H|z(t)\rangle \eqno (3.2a)$$ for the "motion" of the canonical parameters $p, q$ of the $SL(2)$ GCS $|z(t)\rangle=\exp(-z(t) Y_++ z(t)^*Y_-)|\psi_0\rangle$ as trial functions in the time-dependent Hartree-Fock variational scheme with the Lagrangian ${\cal L}=\langle z(t)|(i\partial/\partial t - H)|z(t)\rangle$; $p= j\cos \theta, q= \phi, z = \theta/2 \exp (-i\phi)$ for $su(2)$ and $p=j\cosh \theta, q= \phi, z = \theta/2 \exp (-i\phi)$ for $su(1,1)$.
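Before passing to an equivalent form of these equations, we note that for the linear case $n=2$ the scheme above is exact, and its output is easily checked by direct numerical diagonalization. The following minimal sketch (it is not part of the original treatment; the value of $j$ and the constants $a, g$ are arbitrary test parameters, and $\tilde C$ is set to zero) verifies the $su(2)$ spectrum (2.8a):
\begin{verbatim}
# Minimal numerical sketch: for n = 2 the spectrum of
# H = a*Y0 + g*Y+ + conj(g)*Y- on a spin-j su(2) module
# must be (-j + v)*sqrt(a^2 + 4|g|^2), v = 0,...,2j  [Eq. (2.8a)].
import numpy as np

j = 3                        # spin of the irreducible module (test value)
dim = int(2*j + 1)
v = np.arange(dim)
Y0 = np.diag(-j + v)         # Y0 |v> = (-j + v) |v>
Yp = np.zeros((dim, dim))
for m in range(dim - 1):     # Y+ |v> = sqrt((v+1)(2j-v)) |v+1>
    Yp[m + 1, m] = np.sqrt((m + 1)*(2*j - m))
a, g = 0.7, 0.3 + 0.2j       # arbitrary test constants
H = a*Y0 + g*Yp + np.conj(g)*Yp.conj().T
assert np.allclose(np.sort(np.linalg.eigvalsh(H)),
                   (-j + v)*np.sqrt(a**2 + 4*abs(g)**2))
\end{verbatim}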
An equivalent formulation in the ${\bf Y}= (Y_1,Y_2,Y_0)$ space can be given in terms of the $sl(2)$ Euler-Lagrange equations, $$ \dot {\bf y}=\frac{1}{2} {\bf \bigtriangledown} {\cal H}\times {\bf \bigtriangledown} C, $$ $$ C=\pm y_0^2+y_1^2+y_2^2, \; y_i = \langle z(t)|Y_i|z(t)\rangle, \; y_{\pm}=y_1\pm y_2, \; {\bf \bigtriangledown} = (\partial/\partial y_1, \partial/\partial y_2, \partial/\partial y_0) \eqno (3.2b)$$ reducing to the well-known (linear) Bloch equations [5,14,17]. Similarly, the general ideas of the analysis above and the calculation schemes (3.1), (3.2) may be extended to the case of arbitrary polynomial algebras $sl_{pd}(2)$ by using the energy functional ${\cal H}([l_i];v;\xi)= \langle[l_i];v|\tilde{H}_{[l_i]}(\xi)|[l_i];v\rangle$ with $\tilde{H}_{[l_i]}(\xi)$ given by Eq. (2.9). Naturally, results obtained in such a manner are not expected to coincide with exact solutions on all subspaces $L([l_i])$ due to the essential nonlinearity of the Hamiltonians (2.4) and their non-equivalence (unlike Eq. (2.7b)) to the diagonal parts of Eq. (2.9); however, they evidently yield the "smooth" (analytical) solutions closest to the exact ones (cf. [5,14]). Without dwelling on a discussion of all aspects of such an extension, we consider in detail an application of the procedure (3.1) to the most widespread class [11] of Hamiltonians (2.4) with the $su(2)$ dynamic symmetry, which includes the model (2.3). Note that the condition (3.1a) gives $e^{i\theta}=g/|g|$ as in the linear case, and, due to the form of the trial functions, it is sufficient to solve Eq. (3.1b) only for finding the ground states $|[l_i]; v=0;\xi\rangle$. Then, expanding the r.h.s. of Eq. (2.5) in $Y_{\alpha}$ power series and taking into account the defining relations of the $su(2)$ algebra, one gets after some algebra the following expressions $$ E^{su(2)}_v ([l_i];\xi_0) ={\cal H}([l_i];v;\xi_0) =C+a(l_0+j)+a(-j+v)\cos 2r-2|g|\sum_{f\geq 0} E_f^{\phi}(r;j;v), $$ $$E^{\phi}_f(r;j;v)=E^{\phi}_f(r;j;0)(\frac{1}{2}\sin 2r)^{-2v}\frac{f!\, (2j-v)!\,(f+1)!}{(2j)!\,v!\,(f-v)!\,(f+1-v)!}\times$$ $$F(-v,-v+2j+1;f-v+1; \sin^2 r)F(-v,-v+2j+1;f-v+2; \sin^2 r),$$ $$E^{\phi}_f(r;j;0)=(\cos^{4j} r)\frac{(\tan r)^{2f+1}(2j)!}{f!\,(2j-f-1)!} \sqrt{\phi_{n-2}(-j+f)}, \quad \phi_{n-2}(-j+f)=\frac{\psi_n(l_0+1+f)} {(2j-f)(f+1)}, \eqno (3.3)$$ with $F(...)$ being the Gauss hypergeometric function [18], for the energy eigenvalues $E^{su(2)}_v ([l_i];\xi_0=rg/|g|)$, where the diagonalizing values of the parameter $r$ are determined from solving the algebraic equation $$0 = \sum_{f\geq 0}\frac{\alpha^{2f}}{(2j-1-f)!f!}\{\frac{a \alpha} {|g|}-[4\alpha^2 j -(1+\alpha^2)(2f+1)]\sqrt{\phi_{n-2}(-j+f)}\}, \quad \alpha =-\tan r \eqno (3.4)$$ For the case of the $su(1,1)$ dynamic symmetry Eqs. (3.3), (3.4), retaining their general structure, are slightly modified due to differences in the definition (2.5) of $S_Y(\xi)$ for $su(2)$ and $su(1,1)$. Let us make some remarks concerning this result. 1) As is seen from Eq. (3.3), its general structure coincides with the energy formula given by the algebraic Bethe ansatz [6], and the spectral functions $E^{\phi}_f(r;j;v)$ are non-linear in the discrete variable $v$ labeling the energy levels, which provides a non-equidistant character of the energy spectra within fixed subspaces $L([l_i])$ at $d([l_i])>3$.
Besides, due to the square roots in the expressions for these functions, different eigenfrequencies $\omega^{su(2)}_v\equiv E^{su(2)}_v/\hbar$ are incommensurable, $\; m\omega^{su(2)}_{v_1}\neq n\omega^{su(2)}_{v_2}$, which is an indicator of the origin of collapses and revivals of the Rabi oscillations [2,8] as well as of pre-chaotic dynamics [19]. Note that this dependence is impossible to get by using GCS related to uncoupled subsystems. 2) The r.h.s. of Eq. (3.4) is a polynomial of degree $2j+1=d([l_i])$, and, in general, Eq. (3.4) may have $2j+1$ different roots $r_i$ corresponding to $2j+1$ different stationary values of the energy functional ${\cal H}([l_i];v;\xi)$. Therefore, one may assume that it is possible to get simpler expressions for $E^{\phi}_f(r;j;v)$ with any $v$ by using $E^{\phi}_f(r;j;0)$ with different roots $r_i$. Note that this conjecture is valid for small dimensions $d([l_i])$, when Eqs. (3.3)-(3.4) give exact results. Another way to modify and simplify the results above is to use various properties, including integral representations, of the hypergeometric functions $F(a,b;c;x)$; specifically, using relations between hypergeometric functions [18], one can express the spectral functions $E^{\phi}_f(r;j;v)$ in terms of the hypergeometric functions ${_4F_3(...;1)}$ (which are proportional to the $sl(2)$ Racah coefficients). 3) Evidently, Eq. (3.3) generalizes Eq. (2.8a) for the (quasi)equidistant approximation mentioned above. Indeed, when the functions $\phi_{n-2}(-j+f)$ are replaced by certain "average" values, the series in (3.3), (3.4) are summed up, and Eq. (3.3) is reduced to Eq. (2.8a); Taylor series expansions of the functions $\sqrt{\phi_{n-2}(-j+f)}$ provide perturbative corrections related to higher degrees of the anharmonicity of Hamiltonians (2.4). Furthermore, we can get an intermediate approximation for the energy spectra by replacing in Eqs. (3.1) the exact energy functionals ${\cal H}([l_i];v;\xi)=\langle[l_i];v|\tilde{H}_{[l_i]}(\xi)|[l_i];v\rangle$ by their mean-field (corresponding to the Ehrenfest theorem) approximations $${\cal H} ^{mfa}([l_i];v;\xi) = a\langle Y_0(\xi)\rangle + \langle Y_+(\xi)\rangle\tilde g(\langle Y_0(\xi)\rangle) + \tilde g^+(\langle Y_0(\xi)\rangle) \langle Y_-(\xi)\rangle +\tilde C,$$ $$ \langle Y_{\alpha}(\xi)\rangle =\langle[l_i];v|Y_{\alpha}(\xi)|[l_i];v\rangle \eqno (3.5) $$ Then Eqs. (3.3)-(3.4) are greatly simplified while retaining their main characteristic features. For example, for the model (2.3) we find $$ E_v^{mfa} ([l_i];\xi_0) =$$ $$C+a(l_0+j)+a(-j+v)\cos 2r- 2|g|(j-v)\sin 2r \sqrt{(-j+v)\cos 2r +j+|k|+1} \eqno (3.6a)$$ where $r$ is determined from the equation $$\frac{a}{2|g|}\sin 2r= \cos 2r \sqrt{2j\sin^2 r +|k|+1} + \frac{j\sin^2 2r} {2\sqrt{2j\sin^2 r +|k|+1}} \eqno (3.6b)$$ (Similar expressions can be found for the point-like Dicke and the second harmonic generation models.) Besides, substituting Eq. (3.5) in Eqs. (3.2), one may get a mean-field approximation for the dynamics equations, reducing in the ${\bf Y}$ space representation to non-linear Bloch equations (cf. [5,11]) obtained from Eqs. (3.2b) by the substitution $${\bf \bigtriangledown} {\cal H}= ([g+g^*] [\phi_{1}(y_0)]^{1/2}, [g-g^*] [\phi_{1}(y_0)]^{1/2}, a+ \frac{1}{2}[g(y_1+y_2)+g^*(y_1-y_2)] [\phi_{1}(y_0)]^{-1/2}), $$ $${\bf \bigtriangledown} C = 2(y_1, y_2, y_0), \quad \phi_{1}(y_0)= y_0+j+|k|+1 \eqno (3.7)$$
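The mean-field formulas (3.6) are explicit enough to be implemented directly. A minimal numerical sketch (an illustration only: the parameters $j, k, a, g$ are arbitrary test values chosen so that Eq. (3.6b) has a root on the indicated interval, and the constant $C$ is set to zero) solving Eq. (3.6b) by bisection and evaluating the spectrum (3.6a) reads:
\begin{verbatim}
# Mean-field sketch for the model (2.3): solve Eq. (3.6b) for r by
# bisection and evaluate the spectrum (3.6a) (test parameters only).
import numpy as np

j, k, a, g = 2.0, 0, 1.0, 0.5          # arbitrary test values
C, l0 = 0.0, (abs(k) - 2*j)/3.0        # C = 0 assumed; l0 from Eq. (2.2)

def f(r):                              # Eq. (3.6b) rewritten as f(r) = 0
    w = np.sqrt(2*j*np.sin(r)**2 + abs(k) + 1)
    return (a/(2*abs(g)))*np.sin(2*r) - np.cos(2*r)*w \
           - j*np.sin(2*r)**2/(2*w)

lo, hi = 1e-6, np.pi/4                 # f changes sign here for these values
for _ in range(60):                    # bisection
    mid = 0.5*(lo + hi)
    lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
r = 0.5*(lo + hi)
v = np.arange(2*j + 1)
E = C + a*(l0 + j) + a*(-j + v)*np.cos(2*r) \
    - 2*abs(g)*(j - v)*np.sin(2*r)*np.sqrt((-j + v)*np.cos(2*r)
                                           + j + abs(k) + 1)
print(r, E)                            # root of (3.6b) and spectrum (3.6a)
\end{verbatim}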
4) Finally, Eqs. (3.3) and (3.6a) can be used for obtaining appropriate approximations $$U^{su(2)/mfa}_{H}(t)= \sum _{[l_i], v} S_Y(\xi_0)^{\dagger} \; \exp (\frac{-it \omega^{su(2)/mfa}_v}{\hbar}) \;|[l_i];v\rangle \langle[l_i];v| \;S_Y(\xi_0) \eqno (3.8)$$ for the evolution operators, which are transformed to the form (1.4a) with the help of the standard group-theoretical technique [20]. \section{Conclusion} Thus, we have obtained new approximations for energy spectra and evolution equations of models (1.1) by means of using the mapping (2.1) and the variational schemes (3.1), (3.2) with the $SL(2)$ GCS as trial functions. They may be called "smooth" $sl(2)$ quasiclassical approximations since they, in fact, correspond to picking out the "smooth" (analytical) $sl(2)$ factors $\exp(\xi_0 Y_+- \xi_0^* Y_-)$ in the exact diagonalizing operators $S(\tilde{\xi})$ and in the evolution operator $U_H(t)$. These approximations may be used for calculations of the evolution of different quantum statistical quantities (cf. [8,14]) and for determining bifurcation sets of non-linear Hamiltonian flows in parameter space (cf. [5]). Further investigations may be related to a search for suitable multi-parametric specifications of the exact diagonalizing operators $S(\tilde{\xi})=S([\xi_0,\xi_1,\xi_2,...])$ using $\exp(\xi_0 Y_+- \xi_0^* Y_-)$ as initial ones in iterative schemes similar to those developed to examine non-linear problems of classical mechanics and optics [21], or as "principal" factors in diagonalization schemes like (2.7) for Hamiltonians (1.1). From the practical point of view an important question is to get estimates of the accuracy of the approximations obtained and to compare their efficiency with other approximations (e.g., those given in [8,9,11]). For the model (2.3) (and other ones with the structure polynomial $\psi_3(x)$ of the third degree) it is of interest to compare the results of the approximations found above with exact calculations obtained by considering solvable cases of the models under study. One of the latter is given by the integral solutions [12] and another may be yielded by the Riccati equations arising from a differential realization of the $sl_{pd}(2)$ generators $V_{\alpha}$ [13]: $$ V_-=d/dz,\; V_0=zd/dz+l_0,\; V_+=\psi_n(zd/dz+l_0)(d/dz)^{-1} \eqno (4.1)$$ which is, in turn, related to a realization of the $sl_{pd}(2)$ generators $V_{\alpha}$ by quadratic forms in the $sl(2)$ generators $Y_{\alpha}$ (cf. [15,22]). (In fact, this realization was used implicitly for obtaining the exact integral solutions [12].) Besides, it is also of interest to investigate possible connections of these results with quasi-exactly solvable $sl(2)$ models [23,24]. The work along these lines is now in progress. \section{Acknowledgements} Preliminary results of this work were reported at the VII International Conference on Symmetry in Physics (JINR, Dubna, July 10-16, 1995) and at the XV Workshop on Geometric Methods in Physics (Bialowieza, Poland, July 1-7, 1996). The author thanks C. Daskaloyannis, S.M. Chumakov and A. Odzijewicz for useful discussions. The paper was prepared under partial support of the Russian Foundation for Basic Research, grant No 96-02 18746-a.
\section{Introduction} \label{intro} The existence of pre-defined values for quantum observables that are independent of any measurement settings has been a matter of debate ever since quantum theory came into existence. While Einstein made a case for looking for hidden variable theories that would give such values~\cite{EPR}, the work of John Bell proved that such local hidden variable theories cannot be compatible with quantum mechanics~\cite{Bell_theorem}. This points towards a fundamental departure of the behaviour of quantum correlations from the ones that can be accommodated within classical descriptions. While the departure from classical behaviour indicated by Bell's inequalities requires composite quantum systems and the assumption of locality, the contradiction between the assignment of predefined measurement-independent values to observables and quantum mechanics goes deeper and was brought out more vividly by the discovery of quantum contextuality~\cite{Kochen1967}. In a non-contextual classical description, a joint probability distribution exists for the results of any joint measurements on the system, and the results of a measurement of a variable do not depend on other compatible variables being measured. Quantum mechanics precludes such a description of physical reality; on the contrary, in the quantum description there exists a context among the measurement outcomes, which forbids us from arriving at joint probability distributions of more than two observables. Consider a situation where an observable $A$ commutes with two other observables $B$ and $C$ which do not commute with each other: a measurement of $A$ along with $B$ and a measurement of $A$ along with $C$ may lead to different measurement outcomes for $A$. Thus, to be able to make quantum mechanical predictions about the outcome of a measurement, the context of the measurement needs to be specified. The first proof that the quantum world is contextual was given by Kochen and Specker and involved 117 different vectors in a 3-dimensional Hilbert space~\cite{Kochen1967}. Subsequently, the number of observables required for such a `no-coloring' proof was brought down to 31 by Conway and Kochen~\cite{Aperes}, while Peres provided a compact proof based on cubic symmetry using 33 observables~\cite{Asher1991-JPhA}. In higher dimensions the number of observables can be further reduced and more compact proofs are possible~\cite{Mermin-hvt,Exp_SIC_2008}. Klyachko {\em et~al.} found a minimal set of 5 observables for a qutrit for which the predicted value of the quantum correlation exceeds the bound (the KCBS inequality) imposed by non-contextual deterministic models~\cite{kcbs_2008}. The violation observed is state dependent and one can find states that do not allow for stronger than classical correlations for the same set of observables. A state independent violation of a non-contextuality inequality implies that correlations stronger than classical are possible for all states for the same set of observables~\cite{State_ind_cond_2015}. In a 3-dimensional Hilbert space the minimum number of observables required to achieve such a violation is 13~\cite{Yu_oh_2012,State_ind_cond_2015} and can be brought down to 9 if one excludes the maximally mixed state~\cite{9_obs_2012}. Recently, graph theory has also been used to describe contextuality scenarios, where vertices describe unit vectors and edges describe the orthogonality relationships between them~\cite{Graph_cabello_2014,Comb_acin}.
While at the level of individual measurements quantum mechanics is contextual, the probability distribution for an observable $A$ does not depend upon the context and is not disturbed by other compatible observables being measured. This is called the `no-disturbance' principle and leads to interesting monogamy relations for contextuality inequalities~\cite{Context_mono_2012}, similar to those obeyed by Bell-type inequalities~\cite{Bell_mono_2009}. These monogamy relations are a powerful expression of quantum constraints on correlations without involving a tensor product structure, and we shall exploit them in our work. Non-trivial quantum features of the world play an important role in quantum information processing~\cite{Nielsen_Chuang} and in particular in making QKD protocols~\cite{Crypt_review} fundamentally secure as opposed to their classical counterparts~\cite{Sec_qkd, Acin_sec, Sec_monogamy}. QKD protocols can be categorized into two distinct classes, namely the `prepare and measure schemes' and the `entanglement assisted schemes'. In the prepare and measure schemes, whose prime example is the BB84~\cite{BB84} protocol, one party prepares a quantum state and transmits it to the other party, who performs suitable measurements to generate a key. On the other hand, the entanglement assisted protocols utilize entanglement between two parties; a prime example of such a protocol is the Ekert protocol~\cite{E91}. One distinct advantage of the entanglement assisted QKD protocols is the ability to check security based on classical constraints on correlations between the interested parties via Bell's inequalities. It has also been shown that any two non-orthogonal states suffice for constructing a QKD protocol~\cite{B92}. The idea has been extended to qutrits~\cite{3_state_crypt_Peres} to allow four mutually unbiased bases for QKD. Quantum cryptography protocols have been proven to be robust against eavesdropping and noise~\cite{Sec_qkd, Acin_sec, Sec_monogamy,BB84,E91,B92,3_state_crypt_Peres, Shor_Pres_2000, Sec_BB84, Perform_BB84}. Our focus in this work is to explore the utility of quantum contextuality for QKD. While contextuality has already been exploited for QKD~\cite{Cabello_ququart_2011}, we propose a new QKD protocol which is based on the KCBS scenario and the related monogamy relationships~\cite{kcbs_2008,Context_mono_2012}. Our protocol falls in the class of `prepare and measure schemes' but still allows a security check based on conditions on correlations shared between the two parties Alice and Bob. In fact, in our protocol it is the monogamy relation of the KCBS inequality which is responsible for unconditional security. We first devise a QKD protocol between Alice and Bob utilizing the KCBS scenario of contextuality as a resource, with post-processing of outcomes allowed on Alice's side. Considering Eve as an eavesdropper and using the novel graph theoretic approach~\cite{Graph_cabello_2014, Context_mono_2012}, we then derive an appropriate monogamy relation between Alice-Bob and Alice-Eve correlations for the optimal settings of Eve. From this monogamy relationship, we then explicitly calculate the bounds on the correlation to be shared between Alice and Bob, demonstrating the security of the protocol. Our protocol enjoys the distinct advantage of not employing entanglement as a resource, which is quite costly to produce, and still allows for a security test based on the KCBS inequality, analogous to the Bell-like tests for security available for the entanglement based protocols.
Further, it can be transformed into an entanglement assisted QKD protocol by making suitable adjustments. Although our protocol is not device independent, it adds a new angle to QKD protocol research. The material in the paper is arranged as follows: In Section~\ref{back} we provide a brief review of the KCBS inequality. In Section~\ref{the_protocol} we describe our protocol, in Section~\ref{monogamy} we derive the monogamy relations for the required measurement settings and in Section~\ref{sec} we discuss the security of the protocol. Section~\ref{conc} offers some concluding remarks. \section{KCBS inequality} \label{back} \begin{figure}[H] \centering \includegraphics[scale=1]{jaskaran_fig1.pdf} \caption{The KCBS orthogonality graph. Each vertex corresponds to a projector and the edge linking two projectors indicates their orthogonality relationship.} \label{fig:kcbsineq} \end{figure} The KCBS inequality is used as a test of contextuality in systems with Hilbert space dimension three or more. In this section we review two equivalent formulations of the inequality, one of which will be directly used in our QKD protocol to be described later. Consider a set of five observables which are projectors in a 3-dimensional Hilbert space. The projectors are related via an orthogonality graph as given in Figure~\ref{fig:kcbsineq}. The vertices in the graph correspond to the projectors, and two projectors are orthogonal to each other if they are connected by an edge. A set of projectors which are mutually orthogonal also commute pairwise and can therefore be measured jointly. Such a set of co-measurable observables is called a {\em context}. Therefore, in the KCBS scenario, every edge between two projectors denotes a measurement context and each projector appears in two different contexts. However, a non-contextual model will not differentiate between different contexts of a measurement and will deterministically assign values to the vertices irrespective of the context. A deterministic non-contextual model must assign a value $0$ or $1$ to the $i^{th}$ vertex and therefore the probability that the vertex is assigned a value $1$, denoted by $P_i$, takes values $0$ or $1$ (and the corresponding probability for a vertex to have value $0$ is $1-P_i$). In such a non-contextual assignment the maximum number of vertices that can be assigned the probability $P_{i} = 1$ (constrained by the orthogonality relations) is 2, irrespective of the state. Therefore, \begin{equation} \tilde{K}(A,B)=\frac{1}{5}\sum_{i = 0}^{4}P_i \leq \frac{2}{5}. \label{eq1} \end{equation} This is the KCBS inequality~\cite{kcbs_2008, Graph_cabello_2014}, which is a state-dependent test of contextuality utilizing these projectors, and is satisfied by all non-contextual deterministic models. In a quantum mechanical description, given a quantum state and the projectors $\Pi_i$, we can readily calculate the probabilities $P_i$, and it turns out that the sum in (\ref{eq1}) can take values up to $\frac{\sqrt{5}}{5} > \frac{2}{5}$, with the maximum value attained for a particular pure state. Therefore, quantum mechanics does not respect non-contextual assignments and is a contextual theory. In a more general scenario, where one only uses the exclusivity principle~\cite{Comb_acin} - that the sum of probabilities for two mutually exclusive events cannot be greater than unity - one can reach the algebraic maximum of the inequality namely, \begin{equation} \text{Max}\frac{1}{5}\sum_{i=0}^{4} P_i = \frac{1}{2}.
\end{equation} Unlike in inequality~(\ref{eq1}), here the $P_i$s can take continuous values in the interval $\left[ 0, 1 \right]$. The bounds so imposed by non-contextuality, quantum theory and the exclusivity principle can be identified with graph theoretic invariants of the exclusivity graph of the five projectors, which in this case is also a pentagon~\cite{Graph_cabello_2014}. The correlation can be further analyzed if one considers observables which take values $X_{i}\in\lbrace -1, +1\rbrace$ and are related to the projectors considered above as \begin{equation} X_{i} = 2\Pi_{i} - I. \label{eq:eq2} \end{equation} One can then reformulate Eqn.~(\ref{eq1}) in terms of anti-correlation between two measurements as~\cite{LSW_2011} \begin{equation} K(A,B)=\frac{1}{5}\sum_{i = 0}^{4}P(X_{i} \neq X_{i+1}) \leq \frac{3}{5}, \label{eq:kcbs} \end{equation} where $i+1$ is taken modulo 5 and $P(X_{i}\neq X_{i+1})$ denotes the probability that a joint measurement of $X_{i}$ and $X_{i+1}$ yields anti-correlated outcomes. Eqn.~(\ref{eq:kcbs}) is obeyed by all non-contextual and deterministic models. However, quantum theory can exhibit a violation of the above inequality. The maximum value that can be achieved in quantum theory is attained for a pure state and turns out to be \begin{equation} \frac{1}{5}\sum_{i = 0}^{4}P_{\rm QM}(X_{i} \neq X_{i+1}) = \frac{4\sqrt{5}-5}{5}>\frac{3}{5}. \label{eq:kcbs2} \end{equation} It should be noted that the maximum algebraic value of the expression on the left hand side of the KCBS inequality as formulated in Eqn.~(\ref{eq:kcbs}) is one. We shall use this formulation of the KCBS inequality directly in our protocol in the next section, as it allows the evaluation of (anti-)correlation between two joint measurements. \section{The QKD protocol, contextuality monogamy and security} \label{res} \subsection{The protocol} \label{the_protocol} In a typical key-distribution situation, two separated parties Alice and Bob want to share a secret key securely. They both have access to the KCBS scenario of five projectors. Alice randomly selects a vertex $i$, prepares the corresponding pure state $\Pi_i$ and transmits it to Bob. Bob, on his part, also randomly selects a vertex $j$ and performs a measurement $\left\lbrace \Pi_j, I- \Pi_j\right\rbrace$ on the state. We denote $i$ and $j$ as the settings of Alice and Bob respectively. The outcome of Bob's measurement depends on whether he ended up measuring in the context of Alice's state or not. The outcome $\Pi_j$ is assigned the value $1$ and the outcome $I - \Pi_j$ is assigned the value $0$. After the measurement, Bob publicly announces his measurement setting, namely the vertex $j$. Three distinct cases arise: \begin{itemize} \item[\bf C1:] $i,j$ are equal ($i=j$): By definition Bob is assured to get the outcome $1$. Alice notes down a $0$ with herself and publicly announces that the transmission was successful. Both of them thus share an anti-correlated bit. \item[\bf C2:] $i,j$ are in context but not equal: Bob's projector is in the context of Alice's state. Since the state Alice is sending is orthogonal to Bob's chosen projector, he is assured to get the outcome $0$. Alice then notes down $1$ with herself and publicly announces that the transmission was successful, and Bob uses his outcome as part of the key. This way they both share an anti-correlated bit. It should be noted that Alice does not note down her part of the key until Bob has announced his choice of setting.
\item[\bf C3:] $i,j$ are not in context: Bob's projector does not lie in the context of Alice's state. Alice publicly announces that the transmission was unsuccessful and they try again. However, they keep this data, as it may turn out to be useful to detect Eve. \end{itemize} Using the protocol, Alice and Bob can securely share a random binary key. Their success depends on the chance that Bob's measurement is made in the context of Alice's state. Whenever Bob measures in the correct context, which happens three-fifths of the time, Alice is able to ensure that they have an anti-correlated key bit. When Bob measures in the same context but not the same projector as Alice, she notes down a 1 with herself and thus they share a 1-0 anti-correlation. On the other hand, when Bob measures the same projector as Alice's state, she notes down a 0 with herself and again they share a 0-1 anti-correlation. At no stage does Alice need to reveal her state in public or to Bob. The QKD scenario is depicted in Figure~\ref{protocol_channel}. In the ideal scenario without any eavesdropper, Alice and Bob will always get an anti-correlated pair of outcomes and therefore will violate the KCBS inequality up to its algebraic maximum value, which is one. It should be noted that they are able to achieve the algebraic bound because, when Bob ends up measuring the same projector as Alice, she notes down $0$ on her side, which is not the quantum outcome of her state. Thus this in no way is a demonstration that quantum theory reaches the algebraic bound of the KCBS inequality, which in fact it does not. However, in the presence of an eavesdropper the violation of the KCBS inequality can be used as a test for security, as will be shown later. The presence of Eve is bound to decrease the Alice-Bob anti-correlation, and that can be checked by sacrificing part of the key. The key generated by the above protocol, although completely anti-correlated, is not completely random: there are more ones in the key than zeros. Therefore, the actual length of the effective key is smaller than the number of successful transmissions. In order to calculate the actual key rate we compute the Shannon information of the transmitted string. Given the fact that $P_0 = \frac{1}{3}$ and $P_1 = \frac{2}{3}$ for the string generated out of successful transmissions, the Shannon information turns out to be \begin{equation} S = -P_0\log_2P_0 -P_1\log_2P_1=0.9183 \end{equation} The probability of success ({\em i.e.} when Bob chooses his measurement in the context of Alice's state) is $\frac{3}{5}$ as stated earlier. Thus the average key generation rate per transmission can be obtained as $\frac{3}{5} S = 0.55 $. We tabulate the average key rates of a few QKD protocols in the absence of an eavesdropper in Table~\ref{table: comparison}. \begin{table}[h] \centering \begin{tabular}{|p{2.6cm}|p{2.7cm}|p{2.7cm}|} \hline ~QKD protocol & Success probability & Av. key rate in bits\\ & (per transmission) &(per transmission) \\ \hline \hline ~BB84 (2 basis) & ~~~~~~~~~$1/2$ & $~~~~~~~~0.50$ \\ \hline ~BB84 (3 basis) & ~~~~~~~~~$1/3$ & $~~~~~~~~0.50$ \\ \hline ~Ekert(EPR pairs) & ~~~~~~~~~$1/2$ & $~~~~~~~~0.50$ \\ \hline ~3-State~\cite{3_state_crypt_Peres} & ~~~~~~~~~$1/4$ & $~~~~~~~~0.50$ \\ \hline ~\textbf{KCBS} & ~~~~~~~~~$3/5$ & $~~~~~~~~0.55$ \\ \hline \end{tabular} \caption{The key rate for various QKD protocols in the absence of an eavesdropper. As can be seen, the KCBS protocol offers a slightly higher key rate compared to the other protocols.} \label{table: comparison} \end{table}
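These numbers are easy to reproduce with a toy Monte Carlo sketch of the rounds described above (an illustration only, not part of the protocol; uniformly random settings are assumed for both parties):
\begin{verbatim}
# Toy simulation of the prepare-and-measure rounds (illustration only):
# checks the success probability 3/5, the bias P_0 = 1/3, P_1 = 2/3 of
# the key string, and the average key rate (3/5)*S ~ 0.55 bits.
import numpy as np

rng = np.random.default_rng(1)
rounds = 10**6
i = rng.integers(5, size=rounds)           # Alice's random settings
j = rng.integers(5, size=rounds)           # Bob's random settings
success = np.isin((j - i) % 5, (0, 1, 4))  # cases C1 (j = i) and C2
key = ((j - i) % 5 != 0)[success]          # Alice records 0 iff i = j
p0 = 1 - key.mean()                        # ~1/3
S = -p0*np.log2(p0) - (1 - p0)*np.log2(1 - p0)  # ~0.9183
print(success.mean(), S, success.mean()*S)      # ~0.6, ~0.918, ~0.55
\end{verbatim}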
\begin{figure}[H] \centering \includegraphics[scale=1]{jaskaran_fig3.pdf} \caption{Alice and Bob are trying to violate the KCBS inequality [$K(A, B)$], while Eve, in her attempts to gain information, is trying to violate the same inequality with Alice [$K(A, E)$].} \label{protocol_channel} \end{figure} It is instructive to note that the above QKD protocol can be transformed into an `entanglement assisted' protocol, where Alice and Bob share an isotropic two-qutrit maximally entangled state: \begin{equation} |\psi\rangle = \frac{1}{\sqrt{3}}\sum_{k=0}^{2}|kk\rangle. \end{equation} Alice randomly chooses a measurement setting $i$ and implements the measurement $\lbrace\Pi_i,I-\Pi_i\rbrace$ on her part of the entangled state. In the situation when she gets a positive answer and her state collapses to $\Pi_i$, Bob's state collapses to $\Pi_i$ too. This then becomes equivalent to the situation where Alice prepares the state $\Pi_i$ and sends it to Bob. The probability of this occurrence is $1/3$. Bob too randomly chooses a measurement setting $j$ and implements the corresponding measurement. The rest of the protocol proceeds exactly as in the prepare and measure scenario. Although there are a number of possible choices for the projectors $\Pi_i$, we detail below a particular choice of vectors $|v_i\rangle$ (un-normalized) corresponding to the projectors $\Pi_i$, on which the above assertions can be easily verified. \begin{eqnarray} |v_0\rangle &=& \left( 1, 0, \sqrt{\cos\frac{\pi}{5}}\right) \nonumber \\ |v_1\rangle &=& \left( \cos\frac{4\pi}{5}, -\sin\frac{4\pi}{5}, \sqrt{\cos\frac{\pi}{5}}\right) \nonumber \\ |v_2\rangle &=& \left( \cos\frac{2\pi}{5}, \sin\frac{2\pi}{5}, \sqrt{\cos\frac{\pi}{5}}\right) \nonumber \\ |v_3\rangle &=& \left( \cos\frac{2\pi}{5}, -\sin\frac{2\pi}{5}, \sqrt{\cos\frac{\pi}{5}}\right) \nonumber \\ |v_4\rangle &=& \left( \cos\frac{4\pi}{5}, \sin\frac{4\pi}{5}, \sqrt{\cos\frac{\pi}{5}}\right) \end{eqnarray} With \begin{equation} \Pi_i=\frac{\vert v_i\rangle \langle v_i \vert}{\langle v_i \vert v_i \rangle},\quad i=0,1,2,3,4. \end{equation} Thus our `prepare and measure' protocol can be translated into an `entanglement assisted' protocol. We have provided this mapping for the sake of completeness and in our further discussions we will continue to consider the prepare and measure scheme.
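With these explicit vectors, the orthogonality structure of Figure~\ref{fig:kcbsineq} and the quantum violation of inequality~(\ref{eq1}) quoted in Section~\ref{back} can be verified numerically. The following small sanity check (an illustration only, independent of the protocol itself) confirms that adjacent projectors are orthogonal and that the state $(0,0,1)$ yields $\sum_i P_i = \sqrt{5} > 2$:
\begin{verbatim}
# Sanity check of the KCBS pentagon (illustration only): adjacent
# vectors are orthogonal, and for the state (0,0,1) the sum of the
# probabilities P_i reaches sqrt(5), above the non-contextual bound 2.
import numpy as np

c = np.sqrt(np.cos(np.pi/5))
v = np.array([
    [1.0, 0.0, c],
    [np.cos(4*np.pi/5), -np.sin(4*np.pi/5), c],
    [np.cos(2*np.pi/5),  np.sin(2*np.pi/5), c],
    [np.cos(2*np.pi/5), -np.sin(2*np.pi/5), c],
    [np.cos(4*np.pi/5),  np.sin(4*np.pi/5), c],
])
v /= np.linalg.norm(v, axis=1, keepdims=True)  # normalize |v_i>
for i in range(5):                  # pentagon edges: <v_i|v_{i+1}> = 0
    assert abs(v[i] @ v[(i+1) % 5]) < 1e-12
psi = np.array([0.0, 0.0, 1.0])     # maximally violating state
print(((v @ psi)**2).sum())         # sqrt(5) ~ 2.2360 > 2
\end{verbatim}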
\subsection{Contextuality monogamy} \label{monogamy} In quantum mechanics, given observables $A, B, C$, such that $A$ can be jointly measured both with $B$ and with $C$ ({\em i.e.} it is compatible with both), the marginal probability distribution $P(A)$ for $A$ as calculated from the joint probability distributions $P(A,B)$ and $P(A,C)$ is the same: \begin{equation} \sum_{b} P(A=a, B=b) = \sum_c P(A=a, C=c) = P(A = a). \end{equation} This is called the `no-disturbance' principle and it reduces to the `no-signaling' principle when the measurements $B$ and $C$ are performed on spatially separated systems. The `no-disturbance' principle can be used to construct contextuality monogamy relationships for a set of observables if they can be partitioned into disjoint subsets each of which can reveal contextuality by itself but which cannot be simultaneously used as tests of contextuality. \begin{figure}[H] \centering \includegraphics[scale=1]{jaskaran_fig2.pdf} \caption{Joint commutation graph (top) of the Alice-Bob KCBS test (Thin-red) and the Alice-Eve KCBS test (Thick-blue) and its decomposition into two chordal subgraphs (below). Dotted edges indicate a commutation relation between two projectors belonging to the two different KCBS tests. (color online)} \label{fig:mono} \end{figure} Consider the situation where Alice and Bob are different parties who make preparations and measurements as detailed in Section~\ref{the_protocol}. We consider the possibility of a third party Eve who tries to eavesdrop on the conversation between them. As will be detailed in Section~\ref{sec}, Eve will have to violate the KCBS inequality with Alice to gain substantial information about the key. We denote the Alice-Bob KCBS test by $\tilde{K}(A,B)$ with projectors $\left\lbrace \Pi_i \right\rbrace$ and the Alice-Eve KCBS test by $\tilde{K}(A,E)$ with projectors $\left\lbrace \Pi^E_i \right\rbrace$. We have assumed different projectors in the two KCBS tests for clarity in the derivation of a monogamy relationship, but essentially the measurements to be performed by Eve would have to be the same as those of Bob to mimic Alice and Bob's KCBS scenario, as will be detailed in Section~\ref{sec} where we take up the security analysis of our protocol. In this joint scenario the projector $\Pi_i$ is connected by an edge to $\Pi_{i+1}$, $\Pi^E_{i+1}$, $\Pi_{i-1}$, $\Pi^E_{i-1}$ and $\Pi^E_i$, where $i+1$ and $i-1$ are taken modulo 5 and the presence of an edge denotes commutativity between the two connected vertices. These relationships follow from the fact that the projectors used by Eve will follow the same commutativity relationships as the original KCBS scenario. By introducing herself in the channel, Eve has created an extended scenario which will have to obey contextuality monogamy due to the no-disturbance principle. The no-disturbance principle guarantees that the marginal probabilities as calculated from a joint probability distribution do not depend on the choice of the joint probability distribution used. We follow the graph theoretical approach developed in reference~\cite{Context_mono_2012} to derive generalized monogamy relationships based only on the no-disturbance principle. A joint commutation graph representing a set of $n$ KCBS-type inequalities, each of which has a non-contextual bound $\alpha$, gives rise to a monogamy relationship if and only if its vertex clique cover number is $n\alpha$. The vertex clique cover number is the minimum number of cliques required to cover all the vertices of the graph, where a clique is a subgraph in which every pair of distinct vertices is connected by an edge. The joint commutation graph considered in the protocol resulting in the presence of Eve satisfies the condition for the existence of a monogamy relationship between the Alice-Bob and Alice-Eve KCBS inequalities, as can be seen from Fig.~\ref{fig:mono}. In order to derive the monogamy relationship one needs to identify $m$ chordal sub-graphs of the joint commutation graph such that the sum of their non-contextual bounds is $n\alpha$. A chordal graph is a graph which does not contain induced cycles of length greater than 3. As shown in reference~\cite{Context_mono_2012}, a chordal graph admits a joint probability distribution and therefore cannot violate a contextuality inequality.
To this end we identify a decomposition of the joint commutation graph into two chordal subgraphs such that each vertex appears at most once across the two subgraphs, as shown in Fig.~\ref{fig:mono}. The maximum non-contextual bound of each subgraph is then given by its independence number. Therefore, \begin{eqnarray} p(\Pi^E_0) + p(\Pi_2) + p(\Pi^E_1) + p(\Pi_1) + p(\Pi^E_2) \leq 2, \\ p(\Pi_0) + p(\Pi_3) + p(\Pi^E_3) + p(\Pi_4) + p(\Pi^E_4) \leq 2. \end{eqnarray} Adding these, grouping the terms according to their respective inequalities (Eqn.(\ref{eq1})) and normalizing, we get \begin{equation} \tilde{K}(A, B) + \tilde{K}(A, E)\leq \frac{4}{5}. \label{eq:mono1} \end{equation} If the projectors involved in the KCBS tests are transformed according to Eqn.(\ref{eq:eq2}), then using the KCBS inequality given in Eqn.(\ref{eq:kcbs}) the monogamy relationship reads \begin{equation} K(A, B) + K(A, E) \leq \frac{6}{5}. \label{eq:mono3} \end{equation} The relationship derived above follows directly from the no-disturbance principle and cannot be violated. In other words, the correlation between Alice and Eve is complementary to the correlation between Alice and Bob, and thus if one is strong the other has to be weak. One can thus use this fundamental monogamous relationship to derive conditions for unconditional security, as will be shown in the next section. \subsection{Security analysis} \label{sec} In this section we prove that the above QKD protocol is secure against individual attacks by an eavesdropper Eve. We first motivate the best strategy available to an eavesdropper limited only by the no-disturbance principle. The best strategy then dictates the optimal settings to be used to maximize the information of Eve about the key. We then prove the unconditional security of the protocol based on the monogamy of the KCBS inequality. The analysis is inspired by the security proof for QKD protocols based on the monogamy of violations of Bell's inequality~\cite{Sec_monogamy}. Alice and Bob perform the protocol a large number of times and share the probability distribution $P(a, b|i, j)$, which denotes the probability of Alice and Bob obtaining outcomes $a, b \in \lbrace 0, 1\rbrace$ when their settings are $i, j \in \lbrace 0, 1, 2, 3, 4 \rbrace$ respectively. In the ideal case they obtain $a \neq b$ when $j = i + 1$, where addition is taken modulo 5. However, in the presence of Eve, the secrecy of the correlation between Alice and Bob has to be ensured even if Eve is distributing the correlation between them. On the other hand, Eve would like to obtain information about the correlation between Alice and Bob and the associated key. Eve can attempt to accomplish this in several ways, which might include intercepting the information from Alice and re-sending it to Bob after gaining suitable knowledge about the key. It could also be that she is correlated to Alice's preparation system or to Bob's measurement devices. In other words, Eve has access to a tripartite probability distribution $P(a, b, e|i, j, k)$, where Alice, Bob and Eve obtain outcomes $a$, $b$ and $e$ when their settings are $i$, $j$ and $k$ respectively. It is required that the marginals of this probability distribution correspond to the observed correlation between Alice and Bob, as will be shown below. In general it is not easy to characterize the strategy of an eavesdropper without placing some constraints on her. For the following security analysis we place fairly minimal restrictions on the eavesdropper.
It is required of her to obey the no-disturbance principle, and as a consequence her correlation with Alice will be limited by the monogamy relation~(\ref{eq:mono3}). Such a constraint is well motivated because it is a fundamental law of nature and will have to be obeyed at all times. We assume that the correlation observed by Alice and Bob, $P(a, b|i, j)$ as defined above, is a consequence of marginalizing over an extended tripartite probability distribution $P(a, b, e|i, j, k)$, distributed by an eavesdropper Eve: \begin{equation} \begin{aligned} P(a, b|i, j) &= \sum_e P(a, b, e|i, j, k) \\ &=\sum_e P(e|k)P(a, b|i, j, k, e), \end{aligned} \label{eq:abcorrel} \end{equation} where the second equality is a consequence of the no-disturbance principle: Eve's output is independent of the settings used by Alice and Bob. We can also analyze the correlation between Alice and Eve in a similar manner: \begin{equation} \begin{aligned} P(a, e|i, k) &= \sum_b P(a, b, e|i, j, k) \\ &=\sum_b P(b|j)P(a, e|i, j, k), \end{aligned} \label{eq:aecorrel} \end{equation} where the second equality also follows from the no-disturbance principle and implies that Eve can decide on her output based on the settings disclosed by Bob. Bob's outcome, however, cannot be used, as it is never disclosed in the protocol. The natural question that arises now is how strong the correlation between Alice and Bob needs to be for the protocol to be deemed secure. As will be seen, the question can be answered by the monogamy of contextuality. The QKD scenario now is as follows: Alice and Bob utilize the preparations and measurements as detailed in Section~\ref{the_protocol}, while an eavesdropper Eve limited only by the no-disturbance principle is assumed to distribute the correlation between them. Whenever Eve distributes the correlation between herself and Alice she uses the same measurement settings as Bob to guess the bit of Alice. This way Eve can hope to gain some information about the key. However, contextuality monogamy limits the extent to which Eve can be correlated to Alice without disturbing the correlation between Alice and Bob significantly, as shown in Section~\ref{monogamy}. The condition for a secure key distribution between Alice and Bob in terms of the Alice-Bob mutual information $I(A : B)$ and the Alice-Eve mutual information $I(A : E)$ is~\cite{secure_cond}: \begin{equation} I(A : B) > I(A : E). \end{equation} For individual attacks and binary outputs of Alice it essentially means that the probability $P_\text{B}$ that Bob guesses the bit of Alice should be greater than the probability $P_\text{E}$ for Eve to correctly guess the bit of Alice. Thus the above condition simplifies to~\cite{sec_bell_reply} \begin{equation} P_\text{B} > P_\text{E}. \label{sec_cond} \end{equation} Bob can correctly guess the bit of Alice with probability $P_\text{B} = K(A, B)$. For $K(A, B) = 1$ Bob has perfect knowledge about the bit of Alice, while for $K(A, B) = 0$ he has no knowledge. For any other values of $K(A, B)$ they may have to perform a security check. We assume that Eve has a procedure that enables her to distribute correlation according to Eqns.~(\ref{eq:abcorrel}-\ref{eq:aecorrel}). The procedure takes an input $k$ among the five possible inputs according to the KCBS scenario and outputs $e$. She uses this outcome to determine the bit of Alice when Alice's setting was $i$. The probability that Eve correctly guesses the bit of Alice is denoted by $P_{ik}$.
Since there are 5 possible settings for Alice and Eve each, the average probability $P_\text{E}$ for Eve to be successful is \begin{eqnarray} P_\text{E} &=& \frac{1}{15}\sum_{i=0}^{4} \left( P_{ii} + P_{ii+1}+P_{ii-1}\right)\nonumber \\ &\leq& \max\lbrace P_{ii}, P_{ii+1}, P_{ii-1}\mid \forall\, i \rbrace. \label{eve_success} \end{eqnarray} The terms in the above equation denote the success probability of Eve when she uses the same setting as Alice and when she measures in the context of Alice, respectively. For all other cases she is unsuccessful. Without loss of generality we can assume that $P_{01}$ is the greatest term appearing in Eqn.~(\ref{eve_success}). This corresponds to the success probability of Eve when her setting is $1$ and Alice's setting is $0$. However, Alice's setting is not known to Eve, as it is never disclosed in the protocol. Therefore the best strategy that Eve can employ is to always choose her setting to be $1$ irrespective of Alice's setting and try to violate the KCBS inequality with her. The probabilities that appear in the KCBS inequality would then be \begin{eqnarray} P(a \neq e|i=0, k=1) &=& P_{01}, \nonumber \\ P(a \neq e|i=1, k=1) &=& P_{11}=1 - P_{01}, \nonumber \\ P(a \neq e|i=2, k=1) &=& P_{21}\leq P_{01}, \nonumber \\ P(a \neq e|i=3, k=1) &=& P_{31}\leq P_{01}, \nonumber \\ P(a \neq e|i=4, k=1) &=& P_{41}\leq P_{01}. \end{eqnarray} The probability for Eve to get a particular outcome is independent of Alice's choice of settings. Her best eavesdropping strategy can at most make all the preceding probabilities equal (except the second term, which will show a correlation instead of the required anti-correlation), which maximizes $K(A, E)$. Evaluating the KCBS violation for Alice and Eve, we get \begin{equation} K(A, E) = \frac{3}{5}P_{01} + \frac{1}{5}>\frac{3}{5}P_\text{E} + \frac{1}{5}. \end{equation} Using the monogamy relationship given by Eqn.~(\ref{eq:mono3}), we get \begin{equation} \frac{3}{5}P_\text{E} + \frac{1}{5}\leq \frac{6}{5} - P_\text{B}, \end{equation} that is, $P_\text{E}\leq \frac{5}{3}\left(1-P_\text{B}\right)$. For the protocol to work, Eqn.~(\ref{sec_cond}) must hold; this is guaranteed whenever $P_\text{B} > \frac{5}{3}\left(1-P_\text{B}\right)$, i.e. only if \begin{equation} K(A, B) > \frac{5}{8}. \label{condition} \end{equation} Therefore the protocol is unconditionally secure if Alice and Bob share a KCBS correlation greater than $\frac{5}{8}$. It is worth mentioning that $\frac{5}{8}$ is less than the maximum violation of the KCBS inequality in quantum theory. As shown in reference~\cite{Context_mono_2012}, the monogamy relation~(\ref{eq:mono3}) is a minimal condition and no stronger conditions exist. This implies that any QKD protocol whose security is based on the violation of the KCBS inequality cannot offer security if the condition given in Eqn.~(\ref{condition}) is not satisfied. This quantifies the minimum correlation required for unconditional security. We conjecture that no key distribution scheme based on the violation of the KCBS inequality can perform better than our protocol, since we utilize post-processing on Alice's side to extend the maximum violation of the KCBS inequality up to its algebraic maximum. \section{Conclusions} \label{conc} The cryptography protocol we presented is a direct application of the simplest known test of contextuality, namely the KCBS inequality, and the related monogamy relation. For the protocol to work, Alice and Bob try to achieve the maximum possible anti-correlation amongst themselves.
They achieve the algebraic maximum of the KCBS inequality by allowing post-processing on Alice's side. We then showed that any eavesdropper will have to share a monogamous relationship with Alice and Bob, severely limiting her eavesdropping. For this purpose we derived a monogamy relationship for the settings of Eve which allow her to gain optimal information. We found that the optimal information gained by Eve cannot even allow her to violate the KCBS inequality maximally, as allowed by quantum theory. Such unconditional security provides a significant advantage to our protocol, since it does not utilize the costly resource of entanglement. Furthermore, being a prepare and measure scheme of QKD, it also allows for a check of security via the violation of the KCBS inequality, much like the protocols based on the violation of Bell's inequalities. Finally, we note that our protocol is a consequence of contextuality monogamy relationships, which are expected to play an interesting role in quantum information processing. \begin{acknowledgments} We acknowledge Sandeep K. Goyal and Adan Cabello for useful discussions. Arvind acknowledges funding from DST India under Grant No. EMR/2014/000297 and Kishor Bharti acknowledges the KVPY fellowship of DST India. \end{acknowledgments}
\section{Introduction} It is well known that a compact Hausdorff space $T$ is second countable if and only if the Banach space $C(T)$ consisting of all the continuous scalar valued functions on $T$ is separable. Similarly, the compact Hausdorff space $T$ is an Eberlein compact if and only if $C(T)$ is generated by a weakly compact subset. We generalize these results to some extent to Banach bundles with compact and locally compact Hausdorff base spaces. To establish the separability of the space of sections we restrict the discussion to locally uniform Banach bundles, a class of Banach bundles that was introduced in \cite{L}; its definition will be given below. Namely, we prove that the space of all the sections that vanish at infinity is separable provided that the Banach bundle is locally uniform, the base space is a second countable locally compact Hausdorff space, and all the fiber spaces of the bundle are separable. In the other direction we prove that if the Banach space of all the sections of the Banach bundle $\xi := (\E,p,T)$ that vanish at infinity on the locally compact Hausdorff space $T$ is separable then $T$ is second countable; of course, each fiber is a separable Banach space. We also show that if the base space is compact Hausdorff and the space of all the sections of $\xi$ is weakly compactly generated (WCG) then $T$ is Eberlein compact. For other separability results one should consult \cite[215 ff]{G} and the references there. We discuss here Banach bundles for which the norm is upper semi-continuous, that is, $(H)$ Banach bundles in the terminology of \cite{DG}. We shall always suppose that the base space of a Banach bundle is Hausdorff. For a Banach bundle $\xi := (\E,p,T)$ we denote by $\Gamma(\xi)$ the space of all the sections of $\xi$; if the base space is compact, $\Gamma(\xi)$ is a Banach space when one endows it with the sup norm. Similarly, if $T$ is locally compact then $\Gamma_0(\xi)$ stands for the space of all the sections of $\xi$ that vanish at infinity; we endow it with the sup norm to make it a Banach space. We shall denote the closed unit ball of the Banach space $Y$ by $Y_1$. Let $X$ be a Banach space with its family of bounded closed subsets endowed with the Hausdorff metric, $T$ a Hausdorff topological space and $t\to X(t), t\in T$ a map assigning to each $t$ a closed subspace $X(t)$ of $X$ such that $t\to X(t)_1$ is continuous. Set $\E := \cup_{t\in T} (\{t\}\times X(t))$ and define $p :\E\to T$ by $p((t,x)) := t$. Given an open subset $V$ of $T$ and an open subset $O$ of $X$ we denote $\mathcal{U}(V,O) := \{(t,x)\mid t\in V,\ x\in O\cap X(t)\}$. It is shown in \cite[section 5]{L} that the family of all the sets $\{\mathcal{U}(V,O)\}$ is the base of a topology on $\E$ such that $(\E,p,T)$ is a Banach bundle for which the norm is continuous on the bundle space $\E$. A Banach bundle isomorphic to a Banach bundle as detailed above is called a uniform Banach bundle. A Banach bundle $\xi := (\E,p,T)$ is called a locally uniform Banach bundle if there is a family $\{T_{\iota}\}$ of closed subsets of $T$ such that $\{\mathrm{Int}(T_{\iota})\}$ is an open cover of $T$ and every restriction $\xi\mid T_{\iota}$ is a uniform Banach bundle. An Eberlein compact is a compact Hausdorff space that is homeomorphic to a weakly compact subset of a Banach space. A Banach space $X$ is called weakly compactly generated (WCG) if it contains a weakly compact subset whose linear span is dense in $X$. A thorough discussion of Eberlein compact spaces and of WCG Banach spaces can be found in \cite{Z}.
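A simple example may help fix the notion of a uniform Banach bundle (the example is ours and serves only as an illustration): take $X = \mathbb{R}^2$, $T = [0,1]$ and $$X(t) := \{\lambda(\cos t, \sin t)\mid \lambda\in\mathbb{R}\}, \quad t\in T.$$ Each unit ball $X(t)_1$ is the segment joining $-(\cos t, \sin t)$ to $(\cos t, \sin t)$, and these segments vary continuously in the Hausdorff metric; the construction above thus yields a uniform Banach bundle over $[0,1]$ with one-dimensional fibers.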
\section{Results} We begin by discussing conditions on the Banach bundle that ensure the separability of the space of sections. \begin{prop} \label{P:compact} Let $\xi := (\E,p,T)$ be a locally uniform Banach bundle with $T$ compact and metrizable and $p^{-1}(t)$ separable for each $t\in T$. Then $\Gamma(\xi)$ is separable. \end{prop} The proof uses a lemma that is undoubtedly known, but a proof is included for lack of a reference. The proof presented here relies on a rather advanced result, but the lemma can be given a direct elementary proof. \begin{lem} \label{L:separable} Let $X$ be a Banach space and $Y$ a closed subspace of $X$. If $Y$ and $X/Y$ are separable then $X$ is separable. \end{lem} \begin{proof} Let $\theta : X\to X/Y$ be the quotient map. By \cite{BG} there is a continuous map $\psi : X/Y\to X$ such that $\theta(\psi(z)) = z$ for every $z\in X/Y$. It is immediately seen that the map $x\to (x - \psi(\theta(x)),\theta(x))$ is a homeomorphism of $X$ onto the separable space $Y\times X/Y$; the inverse map is $(y,z)\to y + \psi(z)$. \end{proof} \begin{proof}[Proof of Proposition \ref{P:compact}] We begin with a particular case: let $\xi$ be a uniform Banach bundle. Thus we may consider the following setting: each fiber $p^{-1}(t)$ is a subspace of some Banach space $X$ such that the map $t\to p^{-1}(t)$ from $T$ to the family of all the closed subsets of $X$ has the property that $t\to p^{-1}(t)_1$ is continuous when the collection of all the closed bounded subsets of $X$ is endowed with the Hausdorff metric. We claim that the closed subspace $\tilde{X}$ of $X$ generated by $\cup \{p^{-1}(t) \mid t\in T\}$ is separable. Indeed, with $\{t_n\}$ a countable dense subset of $T$ and $\{x_m^n\}_{m=1}^{\infty}$ a dense subset of $p^{-1}(t_n)_1$, it is easily seen that the set of all linear combinations with rational coefficients of $\{x_m^n \mid n, m\in \mathbb{N}\}$ is dense in $\tilde{X}$. From now on, to simplify the notation, we shall assume that $\tilde{X} = X$. In this situation the Banach space $\C(T,X) = \C(T)\otimes X$ is separable, hence its subspace $\Gamma(\xi)$ is separable too. We consider now the general case. Thus, there is a family $\{T_i\}_{i=1}^n$ of closed subsets of $T$ such that $\{\mathrm{Int}(T_i)\}_{i=1}^n$ is a cover of $T$, $\mathrm{Int}(T_i)\neq \mset$ for each $i$, and the restriction of $\xi$ to each $T_i$ is a uniform Banach bundle. We shall prove the conclusion by induction. The step $n=1$ was treated above, and we now suppose that the conclusion of the theorem is valid whenever the base space can be covered by $n-1$ closed sets as above. Set $S := \cup_{i=1}^{n-1} T_i$. The restriction map $\varphi\to \varphi\mid S$ maps $\Gamma(\xi)$ onto the space $\Gamma(\xi\mid S)$ by \cite[p. 15]{DG}, and $\Gamma(\xi\mid S)$ is separable by the induction hypothesis. The kernel of this map, $\{\varphi\in \Gamma(\xi)\mid \varphi\mid S \equiv 0\}$, can be considered as a closed subspace of the separable space $\Gamma(\xi\mid T_n)$. Thus $\Gamma(\xi)$ is separable by Lemma \ref{L:separable}. \end{proof} In the above proposition we required the bundle to be locally uniform. If one discards this hypothesis it may happen that $\Gamma(\xi)$ is not separable even if the base space is metrizable and all the fibers are separable Banach spaces. Such a situation occurs in \cite[Example 5.1]{M}. We consider now a setting more general than in Proposition \ref{P:compact}. \begin{thm} \label{T:l.c.} Let $\xi := (\E,p,T)$ be a locally uniform Banach bundle.
If $T$ is a second countable locally compact space and each fiber $p^{-1}(t),\ t\in T,$ is separable then $\Gamma_0(\xi)$ is separable. \end{thm} \begin{proof} There is a sequence of compact subsets $\{T_n\}_{n=1}^{\infty}$ of $T$ such that $\mathrm{Int}(T_1)\neq \mset$, $T_n\subseteq \mathrm{Int}(T_{n+1})$, $n = 1,2,\ldots $, and $T = \cup_{n=1}^{\infty} T_n$, see \cite[Theorem XI.7.2]{D}. Let $f_n$ be a real valued continuous function on $T$ such that $f_n\mid T_n \equiv 1$, $0\leq f_n\leq 1$, and $f_n(t) = 0$ if $t\in T\setminus \mathrm{Int}(T_{n+1})$, $n = 1,2,\ldots$. For each $n$ the Banach space $\Gamma(\xi\mid T_n)$ is separable by Proposition \ref{P:compact} and we choose a dense countable set $\{\varphi_m^n\}_{m=1}^{\infty}$ in this space. By using again \cite[p. 15]{DG} we extend each $\varphi_m^n$ to a section of $\xi$ which, for simplicity, we shall denote also by $\varphi_m^n$, with the requirement that $$ \|\varphi_m^n(t)\|\leq \|\varphi_m^n\mid T_n\| $$ for each $t\in T$. Denote now $\psi_m^n := f_n\varphi_m^{n+1}$. We claim that $\{\psi_m^n\}_{m,n=1}^{\infty}$ is dense in $\Gamma_0(\xi)$. To prove the claim let $\varphi\in \Gamma_0(\xi)$, $\epsilon > 0$ and $n\in \mathbb{N}$ such that $$ \{t\in T\mid \|\varphi(t)\| \geq \epsilon\}\subseteq T_n. $$ Choose $m$ such that $\|\varphi(t) - \varphi_m^{n+1}(t)\| < \epsilon$ for every $t\in T_{n+1}$. For this choice of $m$ we have $$ \|\varphi(t) - \psi_m^n(t)\| = \|\varphi(t) - \varphi_m^{n+1}(t)\| < \epsilon, \ t\in T_n, $$ and $$ \|\varphi_m^{n+1}(t)\|\leq \epsilon + \|\varphi(t)\| < 2\epsilon, \ t\in \mathrm{Int}(T_{n+1})\setminus T_n, $$ hence $$ \|\varphi(t) - \psi_m^n(t)\| = \|\varphi(t) - f_n(t)\varphi_m^{n+1}(t)\|\leq \|\varphi(t)\| + |f_n(t)|\|\varphi_m^{n+1}(t)\| < 3\epsilon, \ t\in \mathrm{Int}(T_{n+1})\setminus T_n, $$ and $\|\varphi(t) - \psi_m^n(t)\| = \|\varphi(t)\| < \epsilon$ if $t\in T\setminus \mathrm{Int}(T_{n+1})$. We obtained $\|\varphi - \psi_m^n\|\leq 3\epsilon$ and this ends the proof. \end{proof} \begin{qu} Let $\xi := (\E,p,T)$ be a Banach bundle with $T$ an Eberlein compact. Suppose moreover that each fiber $p^{-1}(t)$ is WCG. Is $\Gamma(\xi)$ WCG? What if $\xi$ is locally uniform? \end{qu} We now discuss how certain properties of the space of sections are reflected in properties of the base space. \begin{thm} \label{T:c} Let $\xi := (\E,p,T)$ be a Banach bundle with $T$ compact Hausdorff. If $\Gamma(\xi)$ is separable (WCG) then $T$ is second countable (Eberlein compact, respectively) and each fiber space of $\xi$ is separable (WCG, respectively). \end{thm} \begin{proof} We begin by showing that each point $t$ of the base space has a compact neighbourhood $U_t$ such that the Banach space $C(U_t)$ is isomorphic with a subspace of $\Gamma(\xi\mid U_t)$. By \cite[p. 15]{DG} there is a section $\varphi_t$ of $\Gamma(\xi)$ such that $\|\varphi_t(t)\| = 1$. From the upper semi-continuity of the norm on $\E$ it follows that $\{s\in T\mid \|\varphi_t(t) - \varphi_t(s)\| < 1/2\}$ is a neighbourhood $V_t$ of $t$. For each $s\in V_t$ we have $\|\varphi_t(s)\| > 1 - 1/2 = 1/2$. Let $U_t$ be a compact neighbourhood of $t$ contained in $V_t$. The map $f\to f\cdot \varphi_t\mid U_t$ is an isomorphism of $C(U_t)$ into $\Gamma(\xi\mid U_t)$. Indeed, $$ \frac{1}{2}\|f\|\leq \|f\cdot \varphi_t\mid U_t\|\leq \|f\|\|\varphi_t\|. $$ We shall denote this map of $C(U_t)$ into $\Gamma(\xi\mid U_t)$ by $\Phi_t$. Now, if $\Gamma(\xi)$ is separable, each $\Gamma(\xi\mid U_t)$ is separable, hence each $C(U_t)$ is separable.
Thus each point $t\in T$ has a second countable neighbourhood and we infer that $T$ is second countable. If $\Gamma(\xi)$ is WCG then each $\Gamma(\xi\mid U_t)$ is WCG by being a quotient of $\Gamma(\xi)$. Therefore all the norm closed balls of $\Phi_t(C(U_t))^*$, $t\in T$, are Eberlein compacts in their weak star topology by \cite[Theorem 4.9]{Z}. Hence the image $B_t$ by $(\Phi_t^*)^{-1}$ of the norm closed unit ball of $C(U_t)^*$ is an Eberlein compact in its weak star topology since it is a weak star closed subset of a norm closed ball centered at the origin of $\Phi_t(C(U_t))^*$. Since $U_t$ is homeomorphic with a weak star closed subset of $B_t$ we infer that $U_t$ is an Eberlein compact. It turns out that $T$ is a finite union of Eberlein compacts. The disjoint union $S$ of these finitely many Eberlein compacts is obviously an Eberlein compact. There is a natural continuous map of $S$ onto $T$ hence $T$ is an Eberlein compact by \cite[Theorem 2.15]{Z}. \end{proof} For locally compact base spaces we have the following result. \begin{thm} If $\xi := (\E,p,T)$ is a Banach bundle with a locally compact base space $T$ and $\Gamma_0(\xi)$ is separable then $T$ is second countable and each fiber space of $\xi$ is separable. \end{thm} \begin{proof} Let $\{\varphi_n\}_{n=1}^{\infty}$ be a dense countable subset of $\Gamma_0(\xi)$. For each $n$ the set $\{t\in T\mid \|\varphi_n(t)\| \neq 0\}$ is $\sigma$-compact and $T$ is the union of all these sets. As in the proof of Theorem \ref{T:l.c.} there is an increasing sequence of compact subsets $\{T_m\}$ such that $T_m\subset \mathrm{Int}(T_{m+1})$ and $T = \cup_{m=1}^{\infty} T_m$. Each $T_m$ is second countable by Theorem \ref{T:c} hence $T$ is second countable. The second assertion is obvious. \end{proof} \bibliographystyle{amsplain}
\section{Introduction} Mobile devices are important gadgets in our lives. Nowadays, mobile systems, especially Android, and their applications (apps) dominate most of our daily economic and social interactions. Android has the biggest share in the mobile computing industry \cite{idc2016smartphone} due to its open-source distribution and sophistication. Besides, it has become not only the dominant platform for mobile phones and tablets, but is also gaining increasing attention and penetration in the realm of the Internet of Things (IoT) \cite{android_wear, brillokey}. The ubiquitous nature and popularity of Android OS made it the first target of malicious threats in mobile computing platforms \cite{GData_2015_stats}. Indeed, malware apps are not only targeting conventional devices such as phones and tablets, but also more critical systems such as home IoT devices. The latter could allow the adversary to achieve more severe attacks that inflict physical damage \cite{android_auto}, as the attacker could gain access to the physical system controllers of cars, air conditioning systems, refrigerators, etc. Mobile and IoT devices are more critical than personal computers in many ways: (i) In contrast with personal computers, they are equipped with sophisticated sensors, from cameras and microphones to gyroscopes and GPS. These various sensors open a whole new world of applications for end-users. However, they also unleash unprecedented potential cyber-threats that could be committed by adversaries who gain access to these resources through Android malware apps. (ii) Thin devices (smart handsets and IoT devices) have limited resources in terms of computation, energy, and network bandwidth compared to PCs. This makes extensive security analyses, which track dynamic or static maliciousness indicators, very expensive, if not impossible in some cases. Therefore, the adversary needs less sophisticated malicious apps, compared to the PC setting, to achieve an attack. (iii) A thin device could inflict more damage than a PC due to its high portability, and hence could infect/damage a large number of networks (e.g., work, home, restaurant, airport). Indeed, infected thin devices could play the role of a payload transporter to harm other systems and networks. (iv) In terms of deployment, the number of thin devices (including Android ones) is by far larger than the number of PCs. Therefore, an adversary that leverages malicious apps could infect more IoT devices than PCs. The attacker could infect and control (tens of) thousands of PCs and use them as her/his malicious cyber-infrastructure; nowadays, malicious cyber-infrastructures could reach (tens of) millions of devices if we include TVs, smart watches, connected cars, etc. (v) Finally, the centralized mechanism through which Android apps are distributed using app repositories \cite{google_play} allows for the distribution of malicious apps that bypass vetting systems and hence become available on a huge number of end-user devices. The aforementioned factors highlight the urgent need to design and implement new methodologies, techniques, and tools to mitigate cyber-threats against Android mobile and IoT devices, especially as we are witnessing a convergence between Android and IoT devices. IoT devices could run Android OS or a lightweight version of it. In this context, Google proposes Android Things \cite{brillokey}, an Android-based IoT operating system.
On the other hand, Android may power other IoT devices that control systems such as smart homes or smart buildings. To mitigate these cyber-threats \cite{GData_2015_stats}, we need to have an accurate situational awareness of the threat landscape. The state-of-the-art Android security solutions mainly concentrate on: (i) Static analysis \cite{arp2014drebin, karbab2016dna, feng2014apposcopy, yang2014apklancet}, where the emphasis is on the actual Android malware file (Android Package, APK): Here, the community tends to fingerprint Android malware using approximate fingerprints or learning models that leverage statistical features engineered from the static content. Static analysis is generally not effective in the presence of obfuscation techniques. (ii) Dynamic analysis \cite{canfora2016acquiring, spreitzenbarth2013mobile, ali2016aspectdroid, zhang2013vetting}, based on the reports generated after executing the actual malware in a sandboxing system: The security analysis leverages these reports to discover and fingerprint the malicious behaviors of Android malware samples. Dynamic analysis tends to be more resilient against obfuscation. However, it is more time-consuming compared to static analysis. (iii) Hybrid approaches \cite{yuan2014droid, grace2012riskranker, bhandari2015draco, vidas2014a5} leverage both static and dynamic analysis techniques to achieve a higher detection performance. However, current Android malware solutions do not address the network dimension of mobile and IoT security. Furthermore, a common important characteristic between IoT devices and smart handsets is Internet access. Therefore, having malicious apps could allow the adversary to connect to infected devices at any time. Besides, Internet access is far from being a suspicious permission in the Android vetting system. More precisely, the gap resides in the lack of situational awareness about the malicious cyber-infrastructures that relate Android malware apps and their families. By cyber-infrastructure, we mean all the domains and IP addresses, i.e., the network information, that is used by the adversary to control, download, upload, or, at the very least, collect sensitive information through malicious apps that are already installed on infected Android devices (e.g., smart handsets and IoT devices). Solutions such as \cite{boukhtouta2015graph, nadji2013connected} address malicious infrastructures in general, focusing on malware samples and their families, but without a special emphasis on Android-based platforms. In other words, there is a need for online solutions that leverage the large number of detected Android malware samples from different families. The latter should be the starting point of security solutions to achieve situational awareness about the malicious cyber-infrastructures underlying daily Android malware at different granularity levels, i.e., to understand and focus on the malicious cyber-infrastructures underlying one Android malware sample, one malware family, or several families at the same time. In this respect, we propose \textsf{ToGather}, an automatic investigation framework for Android malware cyber-infrastructures. The \textsf{ToGather} framework is a set of techniques and tools together with security feeds to automatically achieve situational awareness about Android malware.
\textsf{ToGather} characterizes the cyber-infrastructure of a given malware sample, a set of samples, a family, or several families as a multipartite graph that relates malware samples and the corresponding network footprint in terms of IPs and domains. \textsf{ToGather} goes a step further by dividing this cyber-infrastructure into sub-infrastructure components based on the connectivity between the nodes. The result is multiple network communities representing sub-cyber-infrastructures that are related to the Android malware sample or family. To this end, \textsf{ToGather} leverages the enormous amount of cyber-threat intelligence that is derived from various sources, such as spam, Windows malware, darknet, and passive DNS, to ascribe cyber-threats to the corresponding cyber-infrastructure. Accordingly, the input of the \textsf{ToGather} framework is a set of malware samples, and the output is networks of cyber-infrastructures together with their network footprint, which gives the security practitioner an overview of Android malware cyber-activities on the Internet. The \textsf{ToGather} process starts by taking Android malware samples as input. First, we extract network information from these malicious apps. For this purpose, we use a hybrid approach, where both static and dynamic analyses are applied to the malware. The resulting network information (IPs and domain names) of the malware sample represents the malicious nodes of its malicious cyber-infrastructure. However, the network information could be very noisy, as it might include several benign domain names of well-known sites, and the same applies to IP addresses. Hence, \textsf{ToGather} filters these entries through whitelisting in order to remove such IPs and domain names. Afterwards, \textsf{ToGather} correlates the network footprint with a passive DNS database to enrich the network information in two ways: (i) Get the IP addresses resolved from the current domain name list. (ii) Get the domain names that point to the collected IP addresses. The result is enriched network information with broader coverage of the malicious cyber-activity of the input malware samples. However, this information could be richer if we structure it; hence, \textsf{ToGather} builds from the network information a multipartite graph connecting the hashes of malware samples to the corresponding IP addresses and domain names. The heterogeneous graph is used to derive abstract homogeneous graphs where the emphasis is put on the network information while abstracting away from the malware hashes (since they belong to the same family in a typical use case). The homogeneous graphs, namely threat networks, represent the cyber-infrastructures of Android malware. \textsf{ToGather} applies a highly scalable community detection algorithm \cite{fast08blondel} on this threat network to extract sub-threat networks with high connectivity, aiming to give a more granular view to the security practitioner. Besides, we apply the PageRank algorithm to these sub-cyber-infrastructures in order to rank the nodes (network information) according to their importance in terms of connectivity within the sub-graphs. This results in actionable intelligence that could be leveraged, for instance, in take-down operations. Finally, for each sub-threat network, we correlate the resulting cyber-infrastructure with well-known malicious information networks to label the underlying malicious activities.
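To illustrate the first two operations (network information extraction and whitelist filtering), the following minimal Python sketch (Python being our implementation language) shows one possible realization; the regular expressions and whitelists are simplified stand-ins for the ones actually deployed:
\begin{verbatim}
import ipaddress
import re

# Simplified patterns; the deployed ones are stricter about
# token boundaries and valid top-level domains.
IP_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")
DOMAIN_RE = re.compile(r"\b(?:[a-z0-9-]+\.)+[a-z]{2,}\b")

def extract_network_info(report_text, white_ips, white_domains):
    """Extract IPs/domains from a report, then whitelist-filter."""
    text = report_text.lower()
    ips = set()
    for token in IP_RE.findall(text):
        try:
            addr = ipaddress.ip_address(token)
        except ValueError:
            continue                    # not a valid IPv4 address
        if addr.is_global:              # drop non-routed addresses
            ips.add(token)
    domains = set(DOMAIN_RE.findall(text))
    return ips - white_ips, domains - white_domains
\end{verbatim}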
The \textsf{ToGather} framework automates the previous steps to help security analysts achieve a great deal of situational awareness on Android malware and its activities on the Internet. As such, our contribution is essentially the framework as a whole and not only its components. The main contributions of this paper are: \paragraph{(1)} We design and implement \textsf{ToGather}, a simple yet practical framework to generate a granular situational awareness report on the malicious cyber-infrastructures underlying Android malware. \paragraph{(2)} We propose a correlation mechanism with multiple cyber-threat intelligence feeds, which enriches not only the resulting malicious cyber-infrastructure intelligence but also the labeling of the tracked malicious activities. \paragraph{(3)} We evaluate the \textsf{ToGather} framework on real Android malware samples from the Drebin malware dataset. The evaluation shows promising and interesting findings. \section{Overview} \subsection{Threat Model} We position \textsf{ToGather} as a detector of malicious cyber-infrastructures of Android malware. It is designed to uncover threat networks and their sub-networks from a seed of Android malware samples. However, malware detection is addressed in existing proposals \cite{arp2014drebin, karbab2016dna, canfora2016acquiring, spreitzenbarth2013mobile,yuan2014droid, grace2012riskranker} and is out of the scope of this paper. \textsf{ToGather} does not guarantee zero false positives due to the large number of benign domain names and IP addresses that might not be filtered out by the \textsf{ToGather} whitelists. \textsf{ToGather} is very resilient to obfuscation during the extraction of the network information from Android malware because it applies both static and dynamic analyses. Hence, if the static content is heavily obfuscated, \textsf{ToGather} is still able to collect IP addresses and domain names from the dynamic analysis reports. \subsection{Usage Scenarios} \textsf{ToGather} is designed to be practical and efficient in the hands of security practitioners. \begin{itemize} \item A security analyst uses the \textsf{ToGather} framework as an investigation tool to minimize the effort of generating threat networks for a given Android malware family. The analyst leverages the IP addresses and domain names, ordered by their importance in the generated threat network, to prioritize take-down and mitigation operations. \item \textsf{ToGather} acts as a monitoring system. It analyzes a feed of Android malware family samples (e.g., new samples on a daily basis) to generate a threat network snapshot and uncover the related malicious activities (spam, phishing, scanning, and others). Periodic reporting gives insights into the cyber evolution and the malicious behaviors of a given malware family over time. \item \textsf{ToGather} measures the Android malware activity on top of cloud vendors by reporting that a given Android malware family is using a specific cloud vendor's infrastructure for its malicious activity during a period of time.
\end{itemize} \section{Methodology} In this section, we present the overall workflow of the \textsf{ToGather} framework, as shown in Figure \ref{fig:overview}, starting from the Android malware samples and ending with the produced security intelligence: \begin{figure*}[!htb] \centering \includegraphics[width=0.99\textwidth]{approach_overview_all} \caption{Approach Overview} \label{fig:overview} \end{figure*} 1) The first step in the \textsf{ToGather} framework consists of deriving network information from Android samples in a given analysis window (e.g., day, week, month), whether they are from the same malware family or not. However, we consider one malware family as the typical use case of \textsf{ToGather}, as presented in the evaluation section. \textsf{ToGather} conducts dynamic and static analyses, where each analysis produces a report for each Android malware sample. Therefore, a given malware sample has two reports, one from the dynamic and one from the static analysis. Leveraging both analysis types enhances the resiliency of \textsf{ToGather} against common obfuscation techniques, which aim to hide relevant information about malicious activities such as domain names and IP addresses (network information). Afterwards, \textsf{ToGather} extracts network information strings from the analysis reports. At this point, we apply a simple text pattern search to find IP addresses and domain names. In the static analysis, we mainly concentrate on the Dalvik compiled code (classes.dex) for the extraction. We collect network information more efficiently from dynamic analysis reports since they are more structured and have labeled fields. 2) Next, we filter noise, such as non-routed IP addresses, out of the extracted network identifiers. Also, we filter out domain names and URLs that use Unicode characters; in the current \textsf{ToGather} implementation, we do not consider domain names and URLs written in other languages such as Japanese or Arabic. In the case of URLs, we keep only the domain part. To this end, we have a set of valid IP addresses and domain names found in the Android malware. It is important to notice here that each piece of network information is tagged with the underlying malware hash, and this tag is kept during all the workflow steps of \textsf{ToGather}. To minimize false positives, \textsf{ToGather} applies whitelisting mechanisms. For domain names, \textsf{ToGather} leverages the complete Alexa \cite{alexatop_web} and Quantcast \cite{quantcast_web} lists (more than one million domain names). The number of whitelisted domain names is a hyper-parameter of \textsf{ToGather} that we can use to control the amount of false positives. In the case of IP addresses, we leverage a set of public white IPs such as Google DNS servers \cite{tracemyip_web}. It is important to stress that \textsf{ToGather} considers public cloud vendor IPs and domain names as a whitelist. The aim is to observe, and then gain insight into, the use of cloud infrastructure by Android malware. This idea originates from the observation that Android malicious apps (and malware in general) increasingly use the cloud as a low-cost infrastructure for their malicious activities. 3) In this step, we propose a mechanism to enhance and enrich the malicious network information to cover related domains and IPs. In essence, \textsf{ToGather} aims at answering the following questions: (i) What are the IP addresses of the current malicious domains? Here, we investigate the IP addresses of the server machines that host malicious activities.
The latter are most likely related to the analyzed Android malware. (ii) What are the domain names pointing to the current malicious IP addresses? The intuition is that a malicious server machine with a given IP address could host various malicious contents, and the adversary could use multiple domains pointing to such contents. To answer these questions, \textsf{ToGather} has a module to enrich the network information by using passive DNS replication. The latter is a technology that builds replicas of DNS zones without the cooperation of zone administrators, based on captured name server responses, as presented in Section \ref{sec:pdns}. We use the network information, whether IP addresses or domains, as parameters to two functions applied on a passive DNS database. The goal of these functions is to enrich the list of domains and IP addresses that could be part of the adversary's threat network. The enrichment services are: (i) GetIP(Domain): This function takes a domain as a parameter to query the passive DNS database. The result is all the IP addresses the domain points to. (ii) GetDomain(IP): This function gets all the domains that resolve to the IP address given as a parameter. We consider passive DNS correlation for two reasons: (i) A small number of Android malware samples generally yields limited network information. (ii) Security practitioners aim at having a more comprehensive situational awareness about malware Internet activity. As such, they would like to consider all related IPs and domain names. The result of the correlation is a set of IP addresses and domains, enriched using passive DNS, related to the Android malware apps. The correlation results could, however, overwhelm the investigation process. Passive DNS correlation is therefore optional if we have a large number of samples from a given Android family. 4) The correlation with passive DNS could produce some known benign entries. For this reason, we filter out the likely harmless network information by matching the newly found entries against the top Alexa \cite{alexatop_web} and Quantcast \cite{quantcast_web} domain names and known public IP addresses \cite{amazonip_web}. 5) At this stage, we have a set of network information tagged by malware hashes. To extract relevant and actionable intelligence, \textsf{ToGather} aggregates all the previous records into a heterogeneous network with different types of nodes: \textit{malware hashes}, \textit{IP addresses} and \textit{domain names}. We consider the heterogeneous network that is extracted from a given Android malware family as the malicious activity map of that family on the Internet. We call such a heterogeneous network a \textit{threat network}. Furthermore, \textsf{ToGather} produces homogeneous networks by executing multiple projections according to the node type (IP address or domain name). For example, in the IP projection, the projection of a malware hash connecting two IP addresses would be only the two IPs connected to each other. Therefore, \textsf{ToGather} produces three homogeneous graphs: one that considers only IP address connections, one that considers only domain name connections, and one that treats IPs and domains uniformly as network information. The goal of homogenizing the threat network is to enable graph analyses that require homogeneous graphs, namely the IP threat network, the domain threat network, and the network information threat network, as sketched below.
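A minimal sketch of the heterogeneous threat network construction and of its projection, using the \texttt{networkx} Python library (the records below are hypothetical toy data, not actual malware indicators):
\begin{verbatim}
import networkx as nx
from networkx.algorithms import bipartite

# Hypothetical records: malware hash -> extracted network information.
records = {
    "hash_a": {"evil.example", "198.51.100.7"},
    "hash_b": {"evil.example", "203.0.113.9"},
}

# Heterogeneous (bipartite) graph: hashes on one side, network
# information on the other.
G = nx.Graph()
for h, infos in records.items():
    G.add_node(h, kind="malware")
    for info in infos:
        G.add_node(info, kind="netinfo")
        G.add_edge(h, info)

# Homogeneous projection: two pieces of network information are
# linked iff at least one sample connects to both of them.
netinfo = {n for n, d in G.nodes(data=True) if d["kind"] == "netinfo"}
threat_network = bipartite.projected_graph(G, netinfo)
print(sorted(threat_network.edges()))
\end{verbatim}
The IP and domain threat networks are obtained in the same way by restricting the projected node set to IP or domain nodes, respectively.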
\begin{figure*}[!htb] \centering \includegraphics[width=0.95\textwidth]{approach_overview_network} \caption{Graph Analysis Overview} \label{fig:graph_overview} \end{figure*} 6) Further, \textsf{ToGather} aims at producing more granular graphs (see Figure \ref{fig:graph_overview}) from the threat networks derived in the previous step. In this respect, \textsf{ToGather} checks the possibility of community identification in these threat networks based on the connectivity between nodes. The higher the connectivity between the nodes in a particular area of the network, the more likely that area forms a malicious community. For community detection (Section \ref{sec:community_detection}), we adopt a highly scalable algorithm \cite{fast08blondel} in the \textsf{ToGather} community detection module. The intuition behind using the community concept is the following: (i) In the typical \textsf{ToGather} usage scenario, where the input consists of Android malicious apps from the same family, the communities could delimit different threat networks related to the malicious activities. In other words, either one adversary is using these threat networks as backups, or there are multiple adversaries. In the case of Android malware, the second hypothesis is more plausible because of the cheap repackaging of existing malware samples to suit the needs of the perpetrator. (ii) In case \textsf{ToGather} receives Android malware from different families, the communities could be interpreted as the threat networks of the different Android malware families, which puts the focus on the relations between them. The output of this step is a set of threat networks related to IPs, domains, and network information, together with their communities (sub-threat networks). 7) To produce actionable cyber-threat intelligence, we leverage Google's PageRank algorithm (Section \ref{sec:pageranking}) to compute ranking scores for the critical nodes of a given (sub-)threat network. Consequently, the investigator has a priority list when it comes to the mitigation or take-down of nodes that are associated with a malicious cyber-infrastructure. As a result, \textsf{ToGather} produces each (sub-)threat network of the Android malware family together with the ranking of each node. Because \textsf{ToGather} generates multiple homogeneous graphs based on the node type (IP, domain, network information), it produces different ranking lists based on the node type. Therefore, the security practitioner has the opportunity of selecting the node type when executing the mitigation or the take-down to protect his system. In such a case, an IP node could be more suitable since it could, for instance, be blacklisted. Also, it is important to mention that it is expensive for the adversary to get new IP addresses, whereas domain names can be changed frequently due to their affordability. 8) We do not focus only on Android malware. Instead, we also aim at gaining insights into the network IPs and domains shared with malware families from other platforms. Indeed, the adversary tends to have many malicious weapons for several operating systems to achieve maximum coverage. Therefore, similarly to the first step, we conduct dynamic and static analyses on Windows and Linux malware samples to extract the corresponding network information, and the same filtering step is applied to it. Afterwards, we correlate the Android network information with the non-Android malware information to discover another dimension of the adversary's network, as sketched below.
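A minimal sketch of this cross-platform correlation is the following; the PC malware records are hypothetical placeholders for our network information database:
\begin{verbatim}
# Android-side network information (toy data).
android_netinfo = {"evil.example", "198.51.100.7", "203.0.113.9"}

# Hypothetical PC malware database: (hash, family) -> network info.
pc_malware_db = {
    ("pc_hash_1", "FamilyX"): {"evil.example", "192.0.2.44"},
    ("pc_hash_2", "FamilyY"): {"198.51.100.7"},
}

# Keep the PC samples/families that share IPs or domains with the
# Android threat network; the shared items explain the match.
shared = {}
for (pc_hash, family), infos in pc_malware_db.items():
    hits = android_netinfo & infos
    if hits:
        shared[(pc_hash, family)] = hits
print(shared)
\end{verbatim}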
The result of this correlation is all the IP addresses and domains of the Android malware, in addition to all the network records of a given non-Android malware family, whenever they share some network information. It is important to notice that the information networks of non-Android malware are also labeled by malware family. Therefore, the result of this step is the previous (sub-)threat networks tagged by Android malware family, in addition to the tags of other-platform malware. Thus, the security analyst has a clearer view of the cross-platform malicious activity. 9) In this final step of the \textsf{ToGather} workflow, we leverage additional cyber-threat sources to label the malicious activities committed by the produced communities (sub-threat networks). Specifically, we leverage network information collected from different security feeds. The current \textsf{ToGather} implementation includes the correlation with spam emails, reconnaissance traces, and phishing URLs. Therefore, the investigator obtains not only the cyber-infrastructure of the Android malware family but also an indication of whether it takes part in other malicious cyber-activities. We consider \textsf{ToGather} as an active service that receives, at every epoch (day, week, month), Android malware with its corresponding family (the typical use case) and produces valuable intelligence about this malware family. \section{Threat Communities Detection} \label{sec:community_detection} A scalable community detection algorithm is essential to extract communities from the threat network. For this reason, we empower \textsf{ToGather} with the Fast Unfolding Community Detection algorithm \cite{fast08blondel}, which can scale to billions of network links. The algorithm achieves excellent results by optimizing the \textit{modularity} of the communities. The latter is a scalar value $M \in [-1, 1]$ that measures the density of edges inside a given community compared to the edges between communities. The algorithm uses an approximation of the optimal modularity, since finding the exact value is computationally hard \cite{fast08blondel}. Our main reason to choose the algorithm proposed in \cite{fast08blondel} is its scalability. As depicted in Figure \ref{fig:scale_million}, we apply the community detection on a million-node graph with a medium density ($P=0.001$, the probability that a node A is connected to a node B), which we believe is similar in density to the threat networks generated from Android malware samples. For the sake of completeness, we perform the same experiment on graphs with different probabilities $P$. As presented in Figure \ref{fig:scale_vhigh}, we are able to detect communities in $30,000$-node graphs with ultra-high (unrealistic) density in an amount of time ($3$ hours) that is relatively small compared to the time dedicated to an investigation. \begin{figure}[!htb] \centering \includegraphics[width=0.48\textwidth]{scalability_all_meduim_density} \caption{Scalability of the Community Detection} \label{fig:scale_million} \end{figure} \begin{scriptsize} \begin{figure}[ht!]
\begin{center} \subfigure[$P=0.001$ Medium]{% \label{fig:scale_meduim} \includegraphics[width=0.22\textwidth]{scalability_meduim_density} } \subfigure[$P=0.01$ High]{% \label{fig:scale_high} \includegraphics[width=0.22\textwidth]{scalability_high_density} } \subfigure[$P=0.05$ Very High]{% \label{fig:scale_vhigh} \includegraphics[width=0.22\textwidth]{scalability_vhigh_density} } \subfigure[$P=0.10$ Ultra High]{% \label{fig:scale_ultra} \includegraphics[width=0.22\textwidth]{scalability_ultra_density} } \end{center} \caption{ Graph Density Versus Scalability } \label{fig:scale_vs_desity} \end{figure} \end{scriptsize} The previous algorithm requires a homogeneous network as input to work correctly. In our case, the threat network generated from the network information is a heterogeneous network because it contains two main node types: (i) the malware sample identifier, which is the cryptographic hash of the malware sample, and (ii) the network information, i.e., the domain names and the IPv4 addresses. In the current implementation, we do not consider IPv6 addresses or domain names in other languages. Hence, we apply a projection on this heterogeneous network to generate homogeneous graphs. To do so, \textsf{ToGather} projects the graph by abstracting away the malware identifiers and keeping only the network information, i.e., if a malware identifier connects to two IPs, the projection produces only the two IPs connected to each other. To this end, we get different projection results based on the node abstraction: (i) the general threat network contains both IP addresses and domain names; (ii) the IP threat network contains only IP addresses; (iii) the domain threat network contains only domain names. Furthermore, \textsf{ToGather} can mine sub-threat networks whose nodes are highly connected compared to the rest of the cyber-threat network. The intuition here is that each sub-threat network could be a different malicious infrastructure used by an adversary. The security practitioner could thus automatically segregate possible cyber-infrastructures that could lead to different attacks, even if we use only one Android malware family. To achieve this, we apply the previous community detection algorithm on the different threat networks to check for possible sub-graphs. Also, \textsf{ToGather} filters out nodes (IPs, domains) with weak links to other nodes, as we interpret them as false positives (leaves or parts of tiny sub-graphs). \section{Actions Prioritization}\label{sec:pageranking} From the community detection, \textsf{ToGather} checks whether there are possible sub-graphs in the threat networks based on node connectivity. Even though the sub-threat networks zoom into the malicious cyber-infrastructures of a given Android malware family, the security practitioner cannot mitigate the whole threat network at once. For this reason, \textsf{ToGather} proposes an action priority system. The latter takes a threat network (IP, domain, or both) and produces an action priority list based on the maliciousness of each node. By leveraging the graph structure of the threat network, we measure the maliciousness of a given node by its degree, meaning the number of edges that relate it to other nodes. From a security point of view, the more connections an IP or domain has, the more important it is for the malicious cyber-infrastructure. Therefore, our goal is to build a priority list sorted by the damage an IP or a domain can inflict in terms of malicious activity.
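The following minimal sketch combines the community extraction of the previous section with the node ranking formalized below. It uses the \texttt{networkx} library and the \texttt{python-louvain} package for the fast unfolding algorithm; the input graph is a toy stand-in for a threat network, and the hyperparameter values match our experimental setup:
\begin{verbatim}
import networkx as nx
import community as community_louvain   # python-louvain package

# Toy stand-in for a homogeneous threat network (nodes = IPs/domains).
G = nx.karate_club_graph()

# Community detection (fast unfolding / Louvain, resolution r = 3).
partition = community_louvain.best_partition(G, resolution=3.0)

# PageRank-based prioritization (d = 0.85, epsilon = 0.001).
scores = nx.pagerank(G, alpha=0.85, tol=0.001)
priority = sorted(scores, key=scores.get, reverse=True)
print(priority[:5])   # the most critical nodes to take down first
\end{verbatim}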
The importance of nodes in a network graph is known as \emph{node centrality}. The latter is a real-valued function that provides a ranking identifying the most relevant nodes \cite{borgatti2005centrality}. For this purpose, several algorithms have been defined, such as the Hypertext Induced Topic Search (HITS) algorithm \cite{kleinberg1999authoritative} and Google's PageRank algorithm \cite{brin1998anatomy}. In our approach, we adopt Google's PageRank algorithm due to its efficiency, feasibility, lower query-time cost, and lower susceptibility to localized links \cite{nidhi2012comparative}. In the following, we briefly introduce the PageRank algorithm and the random surfer model. \subsection{PageRank Algorithm} \begin{definition}(PageRank). Let $I(v_i)$ be the set of vertices that link to a vertex $v_i$ and let $deg_{out}(v_j)$ be the out-degree centrality of a vertex $v_j$. The PageRank of a vertex $v_i$, denoted by $PR(v_i)$, is provided in Eq. $1$: \begin{equation} PR(v_i) = d \left[\sum_{v_j\in I(v_i)}{\frac{PR(v_j)}{deg_{out}(v_j)}}\right] + (1-d)\frac{1}{|V|} \end{equation} \end{definition} Here, $V$ denotes the vertex set of the graph and $|V|$ its cardinality. The constant $d$ is called the \emph{damping factor}, assumed to be set to $0.85$ \cite{brin1998anatomy}. Eq. $1$ produces one equation per node $v_i$ with an equal number of unknown $PR(v_i)$ values. The PageRank algorithm iteratively computes the PageRank values, which sum up to $1$ ($\sum_{i=1}^{n} PR(v_i)=1$). The authors of the PageRank algorithm consider the use case of web surfing, where the user starts from a web page and randomly moves to another one through a web link. With probability $d$ (the damping factor), a surfer on page $v_j$ follows one of its outgoing links, reaching a given page $v_i$ with probability $\frac{1}{deg_{out}(v_j)}$; with probability $1-d$, the surfer teleports to a random web page in $V$. The described surfing model is a stochastic process, and $W$ is a stochastic transition matrix, where the node ranking values are computed as presented in Eq. $2$: \begin{equation} \vec{PR} = d \left[W \vec{PR}\right] + (1-d)\frac{1}{|V|}\vec{1} \end{equation} \noindent The stochastic matrix $W$ is defined as follows:\\ \indent $w_{ij}=\frac{1}{deg_{out}(v_j)}$ if a vertex $v_j$ is linked to $v_i$ \\ \indent $w_{ij}=0$ otherwise \\ The notation $\vec{PR}$ stands for the vector whose $i$-th element is $PR(v_i)$ (the PageRank of $v_i$). The notation $\vec{1}$ stands for a vector having all elements equal to $1$. The computation of the PageRank values is done iteratively by defining a convergence stopping criterion $\epsilon$. At each computation step $t$, a new vector $\vec{PR}^{(t)}$ is generated based on the previous vector $\vec{PR}^{(t-1)}$. The algorithm stops when the condition $\|\vec{PR}^{(t)}-\vec{PR}^{(t-1)}\|<\epsilon$ is satisfied. \section{Security Correlation} \subsection{Network Enrichment Using Passive DNS} \label{sec:pdns} Passive DNS replication \cite{weimer2005passive} is the process of capturing live DNS queries and/or their responses, and using this data to build partial replicas of as many DNS zones as possible. Passive DNS aims to replicate the domain zones without the collaboration of zone administrators. A DNS sensor is used to capture the inter-server DNS communications. Afterwards, the passive DNS records are stored in a database where they can be queried. We can benefit from the passive DNS database in many ways.
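A minimal sketch of the two enrichment functions, GetIP and GetDomain, over an in-memory record store is given below; the record schema is hypothetical, and a production deployment would instead query the passive DNS database:
\begin{verbatim}
# Hypothetical passive DNS records: (domain, ip, last_seen).
PDNS = [
    ("evil.example", "198.51.100.7", 1500000000),
    ("evil.example", "203.0.113.9", 1500100000),
    ("other.example", "198.51.100.7", 1500200000),
]

def get_ip(domain):
    """GetIP(Domain): all the IPs the domain points or pointed to."""
    return {ip for d, ip, _ in PDNS if d == domain}

def get_domain(ip):
    """GetDomain(IP): all the domains resolving to the given IP."""
    return {d for d, i, _ in PDNS if i == ip}

print(get_ip("evil.example"))      # {'198.51.100.7', '203.0.113.9'}
print(get_domain("198.51.100.7"))  # {'evil.example', 'other.example'}
\end{verbatim}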
In particular, we can know the history of a domain name as well as the IP addresses it is or was pointing to. We can also find what domain names are hosted on a given name server, or what domains are or have been pointing to a given IP address. There are many use cases of passive DNS for security purposes (e.g., mapping criminal cyber-infrastructure \cite{antonakakis2010building}, tracking spam campaigns, tracking malware command and control systems, detection of fast-flux networks, security monitoring of a given cyber-infrastructure, and botnet detection). In our context, we correlated \textsf{ToGather} with a passive DNS database ($30$ billion records) to enrich the investigation of Android malware by: (i) Finding suspicious domains that are pointing to a malicious IP address extracted from the analysis of a malware sample. (ii) Finding suspicious IP addresses that are resolved from a malicious domain extracted from the analysis of a malware sample. (iii) Measuring the maliciousness magnitude of an IP address: a server identified by a malicious IP address may host many malicious activities; we can measure the maliciousness by counting the number of domains that resolve to this malicious IP address. Typically, these domains could be related to different malicious activities or to a single one. (iv) Filtering outdated domain names: the passive DNS query generally returns timestamp information, and \textsf{ToGather} can leverage the timestamps to filter out old domain names that are no longer active. \begin{figure}[!htb] \centering \includegraphics[width=0.45\textwidth]{example_pdnscorrelation} \caption{Threat Network With\&(out) Correlation} \label{fig:example_correlation} \end{figure} We consider passive DNS correlation as an optional component in the \textsf{ToGather} workflow for two reasons. First, a passive DNS database might not be available when reproducing the \textsf{ToGather} framework, since the security practitioner may not have access to one. Second, the corpus of Android malware samples is enormous, and there are new feeds of malware samples every day. Hence, such a large number of samples could fill the gap left by the absence of passive DNS correlation, given the amount of extracted network information. For example, as presented in Figure \ref{fig:example_correlation}, the threat network generated from three malware samples could be enhanced by the correlation with passive DNS. \subsection{Threat Network Tagging} \label{sec:nettagging} \textsf{ToGather} produces, from Android malware samples, a threat network that summarizes their malicious activities. Afterwards, \textsf{ToGather} detects and produces sub-threat networks, if any. Besides, it helps prioritize the actions to be taken to mitigate the threat using the PageRank algorithm. In this section, we go a step further towards the automatic investigation by leveraging other security feeds. Specifically, we aim at correlating the threat networks with spam intelligence, reconnaissance intelligence, etc. The objective is to give a multi-dimensional view of the malicious activities that are related to the investigated Android malware family. Moreover, \textsf{ToGather} considers the correlation with network information from other-platform malware; in the current setup, we correlate with PC malware from different operating systems. \paragraph{PC Malware:} The adversary tends to have different malware samples in their arsenal to achieve their goal. Besides, different types of malware could be used to cover distinct platforms.
These malware samples run on different platforms, but they might share elements of the same cyber-infrastructure run by the attacker. Therefore, finding other-platform malware that shares a threat network with a given Android malware sample could help discover other malware in the attacker's cyber-arsenal. Considering the previous case, \textsf{ToGather} tags every produced threat network by leveraging a database of network information extracted from PC malware provided by VirusShare \cite{virusshare}. The malware database is continuously updated. The obtained information is identified by the malware hash and its malware family. The latter helps identify PC malware samples (and their families) that share network information with the Android threat network. \paragraph{Spam:} \textsf{ToGather} takes advantage of a spam database ($30$ million records) to report the relationship between spamming campaigns and a given threat network. This information is precious for security analysts who are tracking spam campaigns. \paragraph{Phishing:} Similarly to the spamming activity, we consider the phishing activity in the \textsf{ToGather} tagging. Phishing activities aim at stealing sensitive information using fake web pages that are similar to known trusted ones. Typically, the attacker spreads phishing sites using malicious URLs. We extract only the domain names and store them in a phishing database ($5$ million records). \paragraph{Probing:} Probing \cite{panjwani2015experimental} is the activity of scanning networks over the Internet. The aim is to find vulnerable services. Probing is a significant concern in cyber-security because 50\% of cyber-attacks are preceded by network scanning activity \cite{panjwani2015experimental}. For this reason, \textsf{ToGather} tags threat network nodes that are part of a probing activity. This presupposes the availability of a probing database ($300$ million records) that contains IP addresses that have been part of scanning activities within the same epoch. Probing activity could be derived from darknet traffic, and the probing IP addresses could be persisted in the probing database. \section{Experimental Results} In this section, we present the evaluation results of our proposed system. The evaluation's goal is to assess the effectiveness of the \textsf{ToGather} framework in providing situational awareness from a set of Android malware samples. In our experimentation, we consider two cases for the input malware samples: (a) The samples belong to the same Android malware family; here, we look at the threat network of the given family and its sub-threat networks. (b) The samples belong to different Android malware families; here, we investigate the relations between the various families of the Drebin dataset and how the threat networks of the families can be distinguished from one another. Notice that the network information (i.e., domains and IP addresses) is hidden in the results due to its sensitivity and confidentiality. Instead, we focus on the cyber-infrastructure of the malware samples, i.e., how the sub-threat networks appear in the global threat network of an Android malware family. Finally, we show the tagging results of the resulting threat networks. \subsection{Android Malware Dataset} In the evaluation, we use a real Android malware dataset from Drebin \cite{arp2014drebin}, a known dataset that contains samples labeled with their families.
The Drebin dataset \cite{Drebin_Dataset} contains $5560$ labeled malware samples from $179$ families, as shown in Table \ref{tab:dataset_family_number}. It is important to stress that Drebin contains all the samples of the Genome dataset \cite{Android_Malware}. As ground truth for the malware labeling, we take the labels provided by Drebin, since there are some differences between the Genome and Drebin labelings. For example, Genome recognizes different versions of the DroidKungFu malware (1, 2 and 4), whereas Drebin has only DroidKungFu. \begin{table}[!htb] \centering \begin{scriptsize} \begin{tabular}{|l||l|c|} \toprule {} & Malware Family & Number of Samples \\ \hline \midrule 0 & FakeInstaller & 925 \\ \hline 1 & DroidKungFu & 667 \\ \hline 2 & Plankton & 625 \\ \hline 3 & Opfake & 613 \\ \hline 4 & GinMaster & 339 \\ \hline 5 & BaseBridge & 330 \\ \hline 6 & Iconosys & 152 \\ \hline 7 & Kmin & 147 \\ \hline \bottomrule \end{tabular} \end{scriptsize} \caption{Dataset Description by Malware Family} \label{tab:dataset_family_number} \end{table} \subsection{Implementation} We have implemented \textsf{ToGather} using the \textit{Python} programming language. In the static analysis, to perform reverse engineering of the \textit{Dex} byte-code, we use \textit{dexdump}, a tool provided with the Android SDK. We extract the network information from the \textit{Dex} disassembly using regular expressions. Besides, \textsf{ToGather} extracts network information from the static text content in the APK file of the Android malware. \begin{figure}[!htb] \centering \includegraphics[width=0.45\textwidth]{sendbox} \caption{Sandboxing with Multi-Instance System} \label{fig:sandboxing} \end{figure} In the dynamic analysis, a cornerstone of the \textsf{ToGather} framework is the sandboxing system, which heavily influences the produced analysis reports. We use \emph{DroidBox} \cite{droidbox_github}, a well-established sandboxing environment based on the Android software emulator \cite{android_emulator} provided by the Google Android SDK \cite{android_sdk}. Merely running the app may not lead to sufficient execution coverage. As such, to simulate user interaction with the apps, we leverage \textit{MonkeyRunner} \cite{monkeyrunner}, which produces random UI actions, aiming for a broader execution coverage. However, this makes the app execution non-deterministic, since \textit{MonkeyRunner} generates random actions; this yields different analysis reports for every execution, where the accuracy of the results may vary. To tackle this issue, we run the app in the sandboxing environment for a long time $T$ to ensure a maximum of information in the resulting report. On the other hand, a long time $T$ could lead to an execution bottleneck, since \emph{DroidBox} can only handle one app at a time. In this context, executing the dataset apps in a sandboxing environment is a computational bottleneck in \textsf{ToGather}. To overcome this challenge, we developed a multi-worker sandboxing environment to exploit the maximum available resources and speed up the sandboxing task, as depicted in Figure \ref{fig:sandboxing}. Finally, since the generated reports are semi-structured (JSON files), we straightforwardly extract the network information from the relevant fields. \subsection{Drebin Threat Network} \label{sec:drebin_results} In this section, we present the results of applying the \textsf{ToGather} framework on the samples of the Drebin dataset with all $179$ families.
Figure \ref{fig:allnet} depicts the threat network information (domain names and IP addresses) of the Drebin dataset, where each family is represented by a different color. Although the threat network is noisy, we can visually distinguish some connected communities having the same node color, i.e., the same malware family. This initial observation reinforces the need for the community detection module in the \textsf{ToGather} framework. A community here is a set of graph nodes that are highly connected, even though they share some links with external nodes. In Figure \ref{fig:alldomainnet}, we consider only the domain names; here, we can distinguish more sub-threat networks having nodes from the same malware family. We choose to filter out all the IP addresses for the Drebin dataset due to our observations during the experimentation process: (i) Some malware samples contain a significant number of IP addresses, exceeding in some cases $100$ IPs, such as the Plankton sample with MD5 hash \begin{scriptsize}\textbf{3f69836f64f956a5c00aa97bf1e04bf2}\end{scriptsize}. The adversary could aim to deceive the investigator by overwhelming the app with fake IP addresses along with the used ones; this issue will be discussed in Section \ref{sec:futurework}. (ii) A big portion of the IP addresses are part of cloud companies' infrastructures; we filter most of the public ones, but there are plenty of lesser-known infrastructures in other countries. (iii) In most cases, the adversary utilizes domains for the malicious activity due to their low cost and flexibility compared to IP addresses. In this experimentation, we consider only the domain names, but the security analyst could include the IP addresses when needed. \begin{figure}[!htb] \centering \includegraphics[width=0.33\textwidth]{all_netinfo} \caption{Network Information Drebin Dataset} \label{fig:allnet} \end{figure} Using the whole Drebin dataset ($179$ malware families) to produce the threat network is an extreme use case for the \textsf{ToGather} framework; a few malware families is the typical use case when we aim to investigate threat network relations. However, even with the whole Drebin dataset, \textsf{ToGather}, as presented in Figure \ref{fig:alldomainnet}, shows promising results, where we can see many sub-threat networks with or without links to other nodes. By considering only domain names in Figure \ref{fig:alldomainnet}, it is noticeable that the size of the threat network decreases significantly when removing the IP addresses, even though there are normally significantly more domains than IP addresses in Android apps. This is due to the extensive whitelisting of domains compared to IPs (more than one million domains) and the size of the Drebin dataset. At this stage, we do not present the community detection and page ranking on the threat network; this will be conducted on a one-family use case in the next section. \begin{figure}[!htb] \centering \includegraphics[width=0.33\textwidth]{all_domain} \caption{Domain Names Drebin Dataset} \label{fig:alldomainnet} \end{figure} \textsf{ToGather} leverages different malicious activity datasets, as previously described in Section \ref{sec:nettagging}, to tag the nodes of the produced threat network. Figure \ref{fig:drebin_tagging} depicts the diverse malicious activities of the nodes from the Drebin threat network. First, the table shows the top PC malware families that share network information with the Drebin threat network. For family names, we adopt the \textit{Kaspersky} malware naming as our ground truth.
Besides, Figure \ref{fig:drebin_tagging} shows the percentage of each malicious activity type in the Drebin threat network. The result shows that $56\%$ of the shared nodes have a spamming activity, $40\%$ are related to PC malware, $3\%$ to scanning, and $1\%$ to phishing activities. Notice that the previous percentages relate only to the shared nodes and not to the whole threat network. Also, as we will discuss in Section \ref{sec:futurework}, these results are not exhaustive because the correlation datasets obviously do not contain every malicious activity. We could extend the current correlation datasets to cover more suspicious activities in future work. \begin{table}[!h] \begin{minipage}{0.23\textwidth} \includegraphics[width=1\textwidth]{all_malpie} \end{minipage}% \hfill \begin{minipage}{0.23\textwidth} \centering \begin{scriptsize} \begin{tabular}{|c||c|c|} \hline \hline \#& \textit{Family} & \textit{Hits} \\ \hline\hline 1 & Agent \footnote{Kaspersky Naming} & 1268 \\ \hline 2 & VBNA & 283 \\ \hline 3 & Adload & 152 \\ \hline 4 & EgroupDial & 121 \\ \hline 5 & TrustAsia & 120 \\ \hline 6 & Vobfus & 88 \\ \hline 7 & KuPlays & 74 \\ \hline 8 & Pipibo & 72 \\ \hline 9 & Sality & 62 \\ \hline \end{tabular} \end{scriptsize} \end{minipage}% \caption{ Drebin Dataset Tagging Results} \label{fig:drebin_tagging} \end{table} \subsection{Family Threat Networks} \label{sec:family_results} In this section, we present the results of \textsf{ToGather} in its typical usage scenario, where malware samples from the same family are analyzed. Figure \ref{fig:droidkungfunet} shows the steps of generating the threat networks from the DroidKungFu family samples. First, \textsf{ToGather} produces the threat network including the network information collected from the DroidKungFu analysis and the passive DNS correlation, as shown in Figure \ref{fig:droidkungfu_00}. Afterwards, \textsf{ToGather} filters out the whitelisted network information. The results, as in Figure \ref{fig:droidkungfu_01}, depict clearly separated sub-threat networks even without applying the community detection algorithm. This could be an insightful result for the security practitioner, especially since each sub-threat network contains network information exclusive to some samples. \textsf{ToGather} goes a step further by applying both the community detection algorithm (resolution hyperparameter $r = 3$) and the PageRank algorithm (damping factor $d = 0.85$ and stopping criterion $\epsilon = 0.001$) to divide the network and rank the importance of the nodes, respectively. The result is multiple sub-threat networks, with high intra-connectivity and low inter-connectivity, representing the cyber-infrastructures of the DroidKungFu malware family. \begin{scriptsize} \begin{figure}[ht!] \begin{center} \subfigure[Unfiltered]{% \label{fig:droidkungfu_00} \includegraphics[width=0.22\textwidth]{droidkungfu_00} } \subfigure[Filtered]{% \label{fig:droidkungfu_01} \includegraphics[width=0.22\textwidth]{droidkungfu_01} } \subfigure[Divide]{% \label{fig:droidkungfu_02} \includegraphics[width=0.22\textwidth]{droidkungfu_02} } \subfigure[Ranking]{% \label{fig:droidkungfu_03} \includegraphics[width=0.22\textwidth]{droidkungfu_03} } \end{center} \caption{ DroidKungFu Malware Threat Network } \label{fig:droidkungfunet} \end{figure} \end{scriptsize} Figure \ref{fig:basebrigdenet} shows the \textsf{ToGather} results using Android malware samples from the BaseBridge family. Similarly, after the filtering operation, we can easily distinguish small sub-threat networks.
Figure \ref{fig:basebrigdenet} shows the \textsf{ToGather} results using Android malware samples from the BaseBridge family. Similarly, after the filtering operation, we could easily distinguish small sub-threat networks. In some cases, the community detection task could even be optional due to the clear separation between the sub-threat networks. For instance, Figure \ref{fig:drebinfamilynet} depicts the threat networks of the GinMaster, Adrd, and Plankton Android malware families before and after the community detection task. Here, the Adrd family clearly exhibits multiple sub-threat networks without needing the community detection function, since it barely affects the result. In the case of Plankton, however, community detection is necessary to detect and extract the sub-threat networks.

\begin{scriptsize} \begin{figure}[ht!] \begin{center}
\subfigure[Unfiltered]{% \label{fig:basebrigde_00} \includegraphics[width=0.14\textwidth]{basebrigde_00} }
\subfigure[Filter\&Divide]{% \label{fig:basebrigde_01} \includegraphics[width=0.14\textwidth]{basebrigde_01} }
\subfigure[Ranking]{% \label{fig:basebrigde_02} \includegraphics[width=0.14\textwidth]{basebrigde_02} }
\end{center} \caption{ BaseBridge Malware Threat Network } \label{fig:basebrigdenet} \end{figure} \end{scriptsize}

\begin{scriptsize} \begin{figure}[ht!] \begin{center}
\subfigure[Ginmaster (1)]{ \label{fig:ginmaster_01} \includegraphics[width=0.14\textwidth]{ginmaster_01} }
\subfigure[Adrd (1)]{ \label{fig:adrd_01} \includegraphics[width=0.14\textwidth]{adrd_01} }
\subfigure[Plankton (1)]{ \label{fig:plankton_01} \includegraphics[width=0.14\textwidth]{plankton_01} }
\subfigure[Ginmaster (2)]{ \label{fig:ginmaster_02} \includegraphics[width=0.14\textwidth]{ginmaster_02} }
\subfigure[Adrd (2)]{ \label{fig:adrd_02} \includegraphics[width=0.14\textwidth]{adrd_02} }
\subfigure[Plankton (2)]{ \label{fig:plankton_02} \includegraphics[width=0.14\textwidth]{plankton_02} }
\end{center} \caption{ Android Families From Drebin Dataset } \label{fig:drebinfamilynet} \end{figure} \end{scriptsize}

Tables \ref{tab:basemalware} and \ref{tab:kungfumalware} show the top PC malware families and samples that share network information with the BaseBridge and DroidKungFu threat networks. An important factor in the correlation is explainability: we can determine exactly which network information is shared between the Android malware and the PC malware, as sketched below. This could help the security investigator track the other dimension of the adversary's cyber-infrastructure.
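As an illustration of this explainability, the correlation can be reduced to set intersections that retain the shared indicators as evidence. The Python sketch below uses placeholder hashes and domains rather than our actual correlation data.

\begin{verbatim}
# Placeholder data: Android threat-network nodes, per-sample PC indicators.
android_nodes = {"cnc.example", "drop.example", "ads.example"}
pc_samples = {"ed7621ec4d": {"cnc.example", "unrelated.example"},
              "e3bc76d14c": {"drop.example", "ads.example"}}

for md5, indicators in pc_samples.items():
    shared = indicators & android_nodes    # the explainable evidence
    if shared:
        print(md5, "shares", sorted(shared))
\end{verbatim}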
\begin{table}[!h] \begin{minipage}{0.24\textwidth} \centering \begin{scriptsize} \begin{tabular}{|c||c|c|}
\hline \hline
\#& \textit{Sample} & \textit{Hits} \\ \hline\hline
1 & ed7621ec4d \footnote{ MD5 Hash First 10 Chars} & 2 \\ \hline
2 & e3bc76d14c & 2 \\ \hline
3 & 503902c503 & 1 \\ \hline
4 & bd9b87869b & 1 \\ \hline
5 & 8e0cf0a1ba6 & 1 \\ \hline
6 & f8a5cac12dc & 1\\ \hline
7 & 14db95e5f6 & 1 \\ \hline
8 & 9b5b576ef3 & 1 \\ \hline
9 & 2ec2abc28d & 1 \\ \hline
\end{tabular} \end{scriptsize} \end{minipage}%
\hfill
\begin{minipage}{0.23\textwidth} \centering \begin{scriptsize} \begin{tabular}{|c||c|c|}
\hline \hline
\#& \textit{Family} & \textit{Hits} \\ \hline\hline
1 & Agent \footnote{Kaspersky Naming} & 23 \\ \hline
2 & Vobfus & 21 \\ \hline
3 & EgroupDial & 13 \\ \hline
4 & Badur & 9 \\ \hline
5 & LMN & 7 \\ \hline
6 & WBNA & 4 \\ \hline
7 & Pipibo & 2 \\ \hline
8 & Blocker & 2 \\ \hline
9 & Virut & 2 \\ \hline
\end{tabular} \end{scriptsize} \end{minipage}%
\caption{Top PC Malware Related To BaseBridge Family } \label{tab:basemalware} \end{table}

In addition to the PC malware tagging, we correlate with other cyber malicious activity datasets covering the Internet. Figure \ref{fig:malpies} presents the malicious activities of the DroidKungFu and BaseBridge families that are related to their threat networks. Here, we found that both families could be part of a spam campaign and exhibit some scanning activity. Notice that these results represent only a fraction of the actual activity because of the limited datasets. Even so, this fraction could be a good indicator for the security practitioner during the investigation process.

\begin{table}[!h] \begin{minipage}{0.24\textwidth} \centering \begin{scriptsize} \begin{tabular}{|c||c|c|}
\hline \hline
\#& \textit{Sample} & \textit{Hits} \\ \hline\hline
1 & 74529155cc \footnote{ MD5 Hash First 10 Chars} & 3 \\ \hline
2 & bd5a9f768cf & 2 \\ \hline
3 & 259a244ab1 & 2 \\ \hline
4 & 52da75225 & 1 \\ \hline
5 & 11786afada & 1 \\ \hline
6 & ad5e6d577b & 1 \\ \hline
7 & 9f4215bfc3 & 1 \\ \hline
8 & 3c76ff67d0 & 1 \\ \hline
9 & 117f21550 & 1 \\ \hline
\end{tabular} \end{scriptsize} \end{minipage}%
\hfill
\begin{minipage}{0.23\textwidth} \begin{scriptsize} \begin{tabular}{|c||c|c|}
\hline \hline
\#& \textit{Family} & \textit{Hits} \\ \hline\hline
1 & Agent \footnote{Kaspersky Naming} & 33 \\ \hline
2 & Adload & 24 \\ \hline
3 & TrustAsia & 13 \\ \hline
4 & KuPlays & 11 \\ \hline
5 & Pipibo & 8 \\ \hline
6 & FangPlay & 5 \\ \hline
7 & StartPage & 4 \\ \hline
8 & Injector & 4 \\ \hline
9 & Turbobit & 4 \\ \hline
\end{tabular} \end{scriptsize} \end{minipage}%
\caption{Top PC Malware Related To DroidKungFu Family } \label{tab:kungfumalware} \end{table}

\begin{scriptsize} \begin{figure}[ht!] \begin{center}
\subfigure[BaseBridge]{% \label{fig:basebridge_malpie} \includegraphics[width=0.22\textwidth]{basebridge_malpie} }
\subfigure[DroidKungFu]{% \label{fig:droidkungfu_malpie} \includegraphics[width=0.22\textwidth]{droidkungfu_malpie} }
\end{center} \caption{ Maliciousness Tagging Per Family } \label{fig:malpies} \end{figure} \end{scriptsize}

\section{Discussion} \label{sec:discussion}

The results in the previous section show promising insights into the underlying cyber-infrastructures of the Android malware families. The produced threat networks expose one side of the adversary infrastructure, the Android malware side, which could lead the investigator to the complete threat network. Furthermore, all the previous results can be extracted automatically and periodically from a feed of Android malware samples belonging to one or several families. This only requires fixing the hyperparameters of the algorithms used, namely the community detection and page ranking algorithms, as we did in our experimentation. Moreover, the number of malware samples and their families is a major parameter that impacts the produced result; the \textsf{ToGather} framework tolerates mixing different Android malware families, as presented previously where we considered all the families of the Drebin dataset ($179$ families). Another important parameter is the whitelisting threshold, i.e. the number of domains taken from the top Alexa \& Quantcast lists. This threshold could affect the result by introducing many false-positive domains. In our case, we consider the complete lists, which leads to very few false positives; however, it could also introduce false negatives by removing a site that is actually malicious. The concept of a sub-threat network gives an insight into the possibility of having several distinct threat networks, meaning that either multiple adversaries reuse samples of the same family, or one adversary operates distinct threat networks.
Moreover, the sub-threat networks help the security analyst tackle the cyber-threat sequentially, by focusing on one sub-threat network at a time. Furthermore, the passive DNS database plays a paramount role in discovering related domains and IPs without having all the samples of the malware family; therefore, with a relatively small set of samples, we could discover the threat network of the family. Finally, in the current implementation, we consider only PC malware, spamming, phishing, and scanning, but the tagging could be extended to other security feeds.

\section{Limitations And Future Work} \label{sec:futurework}

\textsf{ToGather}'s results depend mainly on the input malware samples, which could affect the result in two ways. First, the adversary could include noisy network information in the actual Android package, thus overwhelming the process of detecting the threat network. In the current \textsf{ToGather} setup, we consider both static and dynamic analyses to extract the network information. Afterward, we merge the results of the two analyses before correlating with passive DNS and generating the threat network. This setup is vulnerable to such a noise attack, but the attack could be mitigated by simply taking the intersection instead of the union of the network information from the two analyses: since the dynamic analysis result is more credible than the static one, the intersection of both analyses is much more credible. The filtering operation also helps mitigate this problem by removing possible noise that is part of the whitelist. To this end, \textsf{ToGather} currently adopts only static whitelisting, and we plan to build a dynamic, reputation-based filtering system similar to Notos \cite{antonakakis2010building}. Second, the Android malware family may rely on means other than direct IP or domain connections to reach the threat network. Instead, the adversary could leverage legitimate services such as Twitter accounts to operate the malicious cyber-infrastructure. In this case, \textsf{ToGather} cannot detect such threats, since it relies mainly on network information. We consider this problem as future work, where we will investigate other kinds of network information, including Twitter and IRC communication. Finally, \textsf{ToGather}'s current design produces a static threat network and sub-networks; the time dimension is not captured. A workaround is to analyze the same malware family at different timestamps and keep the result of each timestamp, which allows studying the threat network over time. We plan to tackle this issue more rigorously in future work.

\section{Related Work}

Previous works on Android malware mainly concentrate on the actual malware sample, using two basic approaches: static and dynamic analysis. Static analysis-based approaches \cite{arp2014drebin, karbab2016dna, feng2014apposcopy, karbab2016cypider, karbab2017android, karbab2018maldozer, yang2014apklancet, zhongyang2013droidalarm, sanz2014anomaly} rely on static features extracted from the app, such as requested permissions and APIs, to detect malicious apps. These methods are not resistant to obfuscation. On the other hand, dynamic analysis-based approaches \cite{canfora2016acquiring, karbab2017dysign, spreitzenbarth2013mobile, ali2016aspectdroid, zhang2013vetting, amos2013applying, wei2012android, huang2014asdroid} aim to identify a behavioral signature or anomaly of the running app.
These methods are more resistant to obfuscation than the static ones: there are many ways to hide malicious code, while it is difficult to hide malicious behavior. Hybrid analysis methods \cite{yuan2014droid, grace2012riskranker, bhandari2015draco, vidas2014a5} combine static and dynamic analyses. A significant number of papers have recently been proposed to detect repackaged apps by performing similarity analysis. The latter either identifies the apps that use the same malicious code (i.e., detection of malware families) \cite{kim2015structural, ali2015opseq, deshotels2014droidlegacy, zhou2012hey, gascon2013structural, suarez2014dendroid, kang2015detecting, lin2013identifying, faruki2015androsimilar}, or those that repackage the same original app (i.e., code reuse detection) \cite{chen2015finding, sun2015droideagle, zhou2012detecting, hanna2012juxtapp, crussell2012attack, zhou2013fast, crussell2013andarwin}. However, most of these works consider only malware sample detection and not the network dimension of the Android malware samples and their families. Differently, our work is novel in the sense that it represents an Android malware family by its underlying malicious cyber-infrastructure (i.e., threat network). The works most similar to our proposal are \cite{boukhtouta2015graph, nadji2013connected}, which study malicious threat networks in general. Our work differs from \cite{boukhtouta2015graph, nadji2013connected} by considering the Android malware sample as the seed used to build the threat network, whereas \cite{boukhtouta2015graph, nadji2013connected} deal with various sources as seeds. Besides, we propose \textsf{ToGather} as an online system that continuously generates the threat network of Android malware families in each epoch.

\section{Conclusion}

In this paper, we presented the \textsf{ToGather} framework, a bundle of techniques, tools, mechanisms and security feeds designed to automatically build situational awareness about a given Android malware. \textsf{ToGather} leverages the state of the art in graph theory together with multiple security feeds to produce insightful, granular, and actionable intelligence about the malicious cyber-infrastructures related to Android malware samples. We evaluated \textsf{ToGather} on real malware from the Drebin dataset \cite{arp2014drebin}. \bibliographystyle{plain}
\section{Introduction}

Forecasting future events has always been both an extremely worthwhile and a difficult endeavour. While many processes have become predictable through an improved fundamental understanding of the underlying mechanism, not all processes are simple and stable enough to be predicted mathematically. The purpose of this project is to carry out a comparative study of the accuracy of extrapolations produced by the algorithm known as ``SALSA'' and a previously explored method, ``Causal Smoothing Extrapolation'', applying both to real-world data, evaluating how accurately each algorithm predicts data relative to the other, and assessing how useful these forecasts may be. Causal Smoothing Extrapolation, proposed in ``On causal extrapolation of sequences with application to forecasting'' [1], was the primary focus of the previous work [2], in which it was determined that in most cases Causal Smoothing Extrapolation creates more accurate forecasts than linear extrapolation. This was explored using financial and regional maximum-temperature time series data. The natural next step in the investigation of Causal extrapolation is to compare it with a more complex extrapolation method. The ``Split Augmented Lagrangian Shrinkage Algorithm'', or ``SALSA'', first proposed in ``Fast Image Recovery using Variable Splitting and Constrained Optimization'' [3], is a method of data extrapolation currently under study in today's mathematics community and is therefore the most relevant method to compare with Causal Smoothing Extrapolation. \break

\section{Theory}
\subsection{Causal Extrapolation Summary}
The method is based on the approach from [1]. The following is an extract from ``Applications of band-limited extrapolation to forecasting of weather and financial time series'' [2], which summarises how Causal Smoothing Extrapolation works.
$$(Q^*z)_k=\frac{\Omega}{\pi} \sum_{t=q}^s \operatorname{sinc}(k\pi + \Omega t)\,z(t)$$
$$R_{km}=\Big(\frac{\Omega}{\pi}\Big)^2 \sum_{t=q}^s \operatorname{sinc}(m\pi + \Omega t)\operatorname{sinc}(k\pi + \Omega t)$$
$$R_v=R+vI$$
$$y = R_v^{-1}Q^*z$$
$$\widehat{x}(t)=(Qy)(t)=\frac{\Omega}{\pi} \sum_{k} y_k \operatorname{sinc}(k\pi + \Omega t)$$
This shows the progression of the time series data $z(t)$, $q\le t\le s$, from raw data to the causally smoothed points $\widehat{x}(t)$ via the operators $Q^*$, $R_v$ and $Q$, where $\Omega$ and $v$ are constants selected by simulating 111 data points 10,000 times and choosing the constants that give the most accurate projections. This topic is fleshed out further in ``Applications of band-limited extrapolation to forecasting of weather and financial time series'' [2], while the full proofs and workings of this algorithm can be found in [1].

\subsection{Split Augmented Lagrangian Shrinkage Algorithm (SALSA)}
SALSA is a method of image restoration and reconstruction in which the goal is to minimise the following objective by splitting its two major components and minimising them alternately:
$$\min_{x,v\in R^n}\ \frac{1}{2} \|Ax-y\|_2^2+{\omega}{\sigma}(v).$$
Here component one is $f(x)=\frac{1}{2}\|Ax-y\|_2^2$ and component two is $g(v)={\omega}{\sigma}(v)$.
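The splitting is made precise by forcing the two components to agree, a standard variable-splitting reformulation stated here for clarity (it is implicit in [3]):
$$\min_{x,v\in R^n}\ \frac{1}{2}\|Ax-y\|_2^2+\omega\sigma(v) \quad \text{subject to}\quad x=v.$$
Attaching the constraint through the augmented-Lagrangian penalty $\frac{\mu}{2}\|x-v-d\|_2^2$, where $d$ is the scaled dual variable, and minimising alternately over $x$ and $v$ while updating $d$, yields exactly steps 3--5 of the algorithm below.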
The algorithm forecasts a signal by comparing a signal $y$ of length $M$ with a basis vector $x$ of length $N$ transformed by $A$, an $M\times N$ matrix, and minimising the difference between the two:
$$y=Ax, \qquad y=\begin{pmatrix}y(0)\\ y(1)\\ \vdots\\ y(M-1)\end{pmatrix}, \quad x=\begin{pmatrix}x(0)\\ x(1)\\ \vdots\\ x(N-1)\end{pmatrix}.$$
With enough repetitions of the SALSA algorithm, the basis vector $x$ and transform $A$ can completely replicate the original signal $y$; when values of $y$ are unknown, it can interpolate and extrapolate the missing values of the original signal. \newline
The complete algorithm is the following: \newline
1. Set $k=0$; choose ${\mu} > 0$, $v_0$ and $d_0$.\\*
2. Repeat:\\*
3. $x_{k+1}=\arg\min_x \|Ax-y\|_2^2+{\mu}\|x-v_k-d_k\|_2^2$\\*
4. $v_{k+1}=\arg\min_v {\omega} {\sigma}(v)+\frac{\mu}{2} \|x_{k+1}-v-d_k\|_2^2$\\*
5. $d_{k+1} =d_k-(x_{k+1}-v_{k+1})$\\*
6. $k{\leftarrow} k+1$\\*
7. Continue until the stopping criterion is satisfied.\\*
\newline Here $\|x\|_2^2$ is defined via the $L_2$ norm $$\|x\|_2^2 = \sum_{n=0}^{N-1} |x(n)|^2. $$ \newline
The algorithm is initialised with zero vectors $d$ and $v$ of the same size as $x$. The parameter ${\mu}$ is the penalty parameter, which will be determined by simulation; ${\sigma}$ is the regularisation term, whose parameter ($p$ in the appendix code) can be taken as $N$, the length of the basis vector; and ${\omega}$ (\texttt{lambda} in the appendix code) is the regulariser, i.e. the amount of white noise found in the signal. Elements of the basis vector $x$ are updated with each iteration, for as many repetitions as required. This can be summarised as follows: after choosing initial conditions ${\mu}>0$, $v_0$ and $d_0$, apply the iteration to the basis vector $x_k$ to get $x_{k+1}$, and repeat until the basis vector can be transformed into a vector which closely resembles the original data set. For my purposes I have chosen to use the Fast Fourier Transform in Matlab to convert the original signal $y$ to the basis set $x$, and the Inverse Fast Fourier Transform to convert it back, circumventing the need to define the transform $A$ directly. I have also used the function \texttt{soft} from Ivan Selesnick's SALSA toolbox to calculate the argmin values needed in steps 3 and 4 of the algorithm [12]. Finally, in order to forecast using SALSA, part of the original signal is masked using a matrix $K$. $K$ is an $M\times M$ identity matrix with entries removed to represent a patchy or incomplete signal; since I use SALSA only for forecasting, the matrix $K$ obscures the last 2--10 values of $y$. The vector $Ky$ is used in place of $y$ in the algorithm:
$$Ky=Ax.$$
For further information on this, please consult the lecture notes provided by Ivan Selesnick [11].
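For concreteness, the following Python sketch transliterates the masked SALSA loop used in the Matlab appendix. The FFT pair standing in for $A$ and its adjoint, the toy signal and the normalisation are illustrative assumptions, not the exact appendix operators.

\begin{verbatim}
import numpy as np

def soft(x, T):
    # soft threshold (same role as Selesnick's soft.m)
    return np.maximum(1 - T / np.maximum(np.abs(x), 1e-12), 0) * x

L, K, N = 101, 91, 200        # signal length, observed points, basis size
mu, lam, p, n_iter = 0.6, 1.0, N, 1000

A  = lambda c: np.fft.ifft(c)[:L]                               # coeffs -> signal
AT = lambda s: np.fft.fft(np.concatenate([s, np.zeros(N - L)])) # signal -> coeffs

t = np.arange(L)
y = np.sin(0.3 * t)           # toy signal to forecast
mask = t < K                  # last L - K samples treated as unknown
Y = np.where(mask, y, 0.0)

c = AT(Y)
d = np.zeros_like(c)
for _ in range(n_iter):       # steps 3-5 of the algorithm
    u = soft(c + d, lam / mu) - d
    d = AT(Y - mask * A(u).real) / (mu + p)
    c = d + u

forecast = A(c).real[K:]      # extrapolated points
print(forecast)
\end{verbatim}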
\subsection{Linear Extrapolation}
Linear extrapolation has also been included in portions of this paper in order to provide a baseline for how the previous two methods compare with a standard trend line. The following equation is used in all linear forecasts in this paper.\\*
Linear extrapolation over the last $A$ historical data points:
$$\widehat{x}(t)=\frac{z(t_0)-z(t_0-A)}{A}\,t + z(t_0)-\frac{z(t_0)-z(t_0-A)}{A}\,t_0.$$

\section{Monte Carlo Simulation for the SALSA forecast}
Since $d$, $v$ and ${\sigma}$ have been defined previously, this leaves the values ${\mu}$, ${\omega}$ and the length of the basis vector $N$ to be selected. To do this, a Monte Carlo simulation was created to test the most effective values for these constants, following the same process used in the previous experiments with Causal extrapolation [1,2]. The Monte Carlo simulation is structured as follows:
$$ z(t)=A(t)z(t-1)+{\gamma}(t),\quad t=1,2,\dots, $$
where $A(t)$ takes random values within a uniform distribution, $\gamma(t)$ (written $\omega(t)$ in [1]) is standard Gaussian white noise, and the damping coefficient $\rho$ of [1] is simply set to $1$. This ensures that the process appears to be entirely random and cannot be forecast by some other method.
\begin{figure}[h!] \centering \includegraphics[width=\textwidth]{MonteCarloForecast.jpg} \caption{Monte Carlo simulation, forecast of 10 points} \end{figure}
We tested $0.1\le \mu\le 200 $ in steps of $0.1$, $1\le \omega\le 10$ in integer steps and $100\le N\le 1000$ in steps of $100$, with 10,000 trials each forecasting 7 points, i.e. 70,000 forecast points per parameter setting; this full sweep was intended to provide a complete overview and find the most effective values. However, the test was abandoned after 72 hours of simulation when no significant reduction in the L2 norm was noticed past ${\mu}=0.6$, ${\omega}=1$ and $N=200$. These values have henceforth been used for all subsequent forecasting.
\begin{table}[h!] \begin{tabular} { |c|c|c|c|c|c|c|c|c|c|c| }
\hline
${\mu}$& 0.1&0.2&0.3&0.4&0.5&0.6&0.7&0.8&0.9&1.0 \\ \hline
Residual per point &2.0767&2.1702&2.1743&2.1936&2.1788&2.1135&2.2526&2.1391&2.1597&2.1237 \\ \hline
${\mu}$&1.1&1.2&1.3&1.4&1.5 &1.6&1.7&1.8&1.9&2.0 \\ \hline
Residual per point &2.2009&2.1778&2.2461&2.2261&2.1243&2.1720&2.1888&2.1460&2.1852&2.2436\\ \hline
\end{tabular} \caption{Mean residual per point for values of ${\mu}$} \end{table}
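A sketch of this test process, standing in for the \texttt{carlosim} routine used in the appendix (whose exact uniform support is not stated here, so $[0,1)$ is an assumption):

\begin{verbatim}
import numpy as np

def simulate(T, seed=None):
    """z(t) = A(t) z(t-1) + gamma(t), A(t) ~ U[0,1), gamma(t) ~ N(0,1)."""
    rng = np.random.default_rng(seed)
    z = np.zeros(T)
    for t in range(1, T):
        z[t] = rng.uniform() * z[t - 1] + rng.standard_normal()
    return z

z = simulate(98)   # one trial; the selection procedure repeats this 10,000 times
\end{verbatim}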
\section{Electrical signal experiments}
The electrical signal data is a simulated data set provided to the author by Dr. Nikolai Dokuchaev. It consists of 1000 signals lasting 3 minutes each. Two experiments were done using this data set: a 0.2-second forecast repeated 450 times, and a 1-second forecast repeated 90 times. Both forecasting methods were given 91 points of data in order to predict the next 2--10 points. The results are below; linear extrapolation has been intentionally omitted from the graphs to improve the clarity of the figures.
\begin{table}[h!] \begin{tabular}{|c|c|c|c||c|c|c|c|}
\hline
\multirow{2}{2em}{Data Type}& \multirow{2}{2em}{Raw Data}&\multirow{2}{6em}{Causal Extrapolation}&\multirow{2}{6em}{Salsa Extrapolation}&\multirow{2}{5em}{Comparison method}&\multirow{2}{5em}{Causal Forecast}&\multirow{2}{5em}{Salsa Forecast}&\multirow{2}{5em}{Linear Forecast} \\ &&&&&&&\\ \hline
Min &-2.82&-2.814&-2.736&\multirow{2}{4em}{Total L2 residual}&\multirow{2}{4em}{41.044}&\multirow{2}{4em}{11.5482}&\multirow{2}{4em}{19.8732} \\ \cline{1-4}
Max&2.684&2.682&2.558& & & &\\ \hline
Mean &-0.00531&3.08e-05&0.00112&\multirow{3}{4em}{Total L2 residual per point}&\multirow{3}{4em}{0.0456}&\multirow{3}{4em}{0.0128}&\multirow{3}{4em}{0.0221} \\ \cline{1-4}
STD &1.468&1.467&1.404&&&&\\ \cline{1-4}
Range &5.504&5.496&5.294&&&&\\ \hline
\end{tabular} \caption{Statistical results of the electrical signal 0.2-second forecast} \end{table}
\begin{table}[h!] \begin{tabular}{|c|c|c|c||c|c|c|c|}
\hline
\multirow{2}{2em}{Data Type}& \multirow{2}{2em}{Raw Data}&\multirow{2}{6em}{Causal Extrapolation}&\multirow{2}{6em}{Salsa Extrapolation}&\multirow{2}{5em}{Comparison method}&\multirow{2}{5em}{Causal Forecast}&\multirow{2}{5em}{Salsa Forecast}&\multirow{2}{5em}{Linear Forecast} \\ &&&&&&&\\ \hline
Min &-2.82&-2.777&-2.658&\multirow{2}{4em}{Total L2 residual}&\multirow{2}{4em}{306.2743}&\multirow{2}{4em}{61.9608}&\multirow{2}{4em}{258.2352} \\ \cline{1-4}
Max&2.684&2.672&2.542& & & &\\ \hline
Mean &-0.00531&-0.01101&-0.00812&\multirow{3}{4em}{Total L2 residual per point}&\multirow{3}{4em}{0.3437}&\multirow{3}{4em}{0.0695}&\multirow{3}{4em}{0.2898} \\ \cline{1-4}
STD &1.475&1.466&1.336&&&&\\ \cline{1-4}
Range &5.504&5.449&5.2&&&&\\ \hline
\end{tabular} \caption{Statistical results of the electrical signal 1.0-second forecast} \end{table}
In both cases SALSA extrapolation has a far lower L2 residual per point than linear extrapolation and Causal smoothing extrapolation, showing it to be the far better extrapolation method for this process. This makes sense, as the SALSA algorithm was designed for processing and recovering signal data; it maintains an L2 residual of 0.0128 and 0.0695 per point for the 0.2-second and 1.0-second forecasts respectively. This is 58\% of its nearest competitor's residual in the 0.2-second forecast and 24\% in the 1.0-second forecast. It also does not heavily compress the range of the data, maintaining 94--96\% of the range of the raw data set. The inaccuracy of Causal smoothing in this case is highly visible in the 1-second forecast data, in which the forecast maintains a flat trajectory while the data sharply ascends or descends. This highlights the need to forecast using methods that relate to the physical process whenever one is available, and to save Causal smoothing extrapolation for when the process is completely unknown.
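For clarity, the comparison metric used throughout the tables matches the accumulation performed in the appendix code: with $\mathcal{F}$ the set of forecast time indices,
$$\text{Total L2 residual} = \sum_{t\in\mathcal{F}} \big(z(t)-\widehat{x}(t)\big)^2, \qquad \text{residual per point} = \frac{1}{|\mathcal{F}|}\sum_{t\in\mathcal{F}} \big(z(t)-\widehat{x}(t)\big)^2.$$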
\begin{figure}[h!] \centering \includegraphics[width=\textwidth]{02secondelectricforecast.jpg} \caption{SALSA and Causal extrapolation forecasts for 0.2 seconds of an electrical signal, repeated for 100 seconds} \end{figure} \break
\begin{figure}[h!] \centering \includegraphics[width=\textwidth]{1secondelectricforecast.jpg} \caption{SALSA and Causal extrapolation forecasts for 1 second of an electrical signal, repeated for 100 seconds} \end{figure} \clearpage

\section{Bureau of Meteorology experiments}
Four experiments were conducted using information from the Australian Bureau of Meteorology: maximum-temperature forecasts of the next week, made with 91 points of data and repeated 38 times during the year; forecasts for the next 2 days with the same data, repeated 133 times during the year; and both tests repeated for minimum-temperature forecasts. Data from the Australian Bureau of Meteorology were retrieved from the ``Perth Metro'' station, located at latitude $-31.9192$, longitude $115.8728$, at an altitude of 25.9 metres [4, 5]. The data range from the station's opening on 1 January 1994 to 25 October 2018. This data set was selected because it has an estimated 100\% completeness for maximum air temperature, spans over two decades, and comes from the local weather station. For the purposes of testing, the complete year 2017 was used.
\begin{figure}[h!] \centering \includegraphics[width=\textwidth]{MaximumandminimumtempssalsaandBLE2day.jpg} \caption{SALSA, Causal and linear extrapolation forecasts for 2 days of temperature data over the period of a year} \end{figure}
\begin{table}[h!] \begin{tabular}{|c|c|c|c||c|c|c|c|}
\hline
\multirow{2}{2em}{Data Type}& \multirow{2}{2em}{Raw Data}&\multirow{2}{6em}{Causal Extrapolation}&\multirow{2}{6em}{Salsa Extrapolation}&\multirow{2}{5em}{Comparison method}&\multirow{2}{5em}{Causal Forecast}&\multirow{2}{5em}{Salsa Forecast}&\multirow{2}{5em}{Linear Forecast} \\ &&&&&&&\\ \hline
Min &14.2 &15.6&13.66&\multirow{2}{4em}{Total L2 residual}&\multirow{2}{4em}{2.1800e03}&\multirow{2}{4em}{4.8381e03}&\multirow{2}{4em}{7.12105e03} \\ \cline{1-4}
Max&37.7&33.6&33.32& & & &\\ \hline
Mean &23.05&22.95&21.66&\multirow{3}{4em}{Total L2 residual per point}&\multirow{3}{4em}{8.5492}&\multirow{3}{4em}{18.9730}&\multirow{3}{4em}{27.9256} \\ \cline{1-4}
STD &5.054&4.320&3.942&&&&\\ \cline{1-4}
Range &23.5&18&19.65&&&&\\ \hline
\end{tabular} \caption{Statistical results of BOM 2-day maximum temperature forecasts} \end{table}
\begin{table}[h!] \begin{tabular}{|c|c|c|c||c|c|c|c|}
\hline
\multirow{2}{2em}{Data Type}& \multirow{2}{2em}{Raw Data}&\multirow{2}{6em}{Causal Extrapolation}&\multirow{2}{6em}{Salsa Extrapolation}&\multirow{2}{5em}{Comparison method}&\multirow{2}{5em}{Causal Forecast}&\multirow{2}{5em}{Salsa Forecast}&\multirow{2}{5em}{Linear Forecast} \\ &&&&&&&\\ \hline
Min &1.7&5.38&3.915&\multirow{2}{4em}{Total L2 residual}&\multirow{2}{4em}{1.8570e03}&\multirow{2}{4em}{3.7586e03}&\multirow{2}{4em}{6.1947e03} \\ \cline{1-4}
Max&21.8&18.31&17.21& & & &\\ \hline
Mean &11.43&11.37&10.18&\multirow{3}{4em}{Total L2 residual per point}&\multirow{3}{4em}{7.2822}&\multirow{3}{4em}{14.7395}&\multirow{3}{4em}{24.2931} \\ \cline{1-4}
STD &3.902&3.12&2.766&&&&\\ \cline{1-4}
Range &20.1&12.93&13.3&&&&\\ \hline
\end{tabular} \caption{Statistical results of BOM 2-day minimum temperature forecasts} \end{table}
\begin{table}[h!] \begin{tabular}{|c|c|c|c||c|c|c|c|}
\hline
\multirow{2}{2em}{Data Type}& \multirow{2}{2em}{Raw Data}&\multirow{2}{6em}{Causal Extrapolation}&\multirow{2}{6em}{Salsa Extrapolation}&\multirow{2}{5em}{Comparison method}&\multirow{2}{5em}{Causal Forecast}&\multirow{2}{5em}{Salsa Forecast}&\multirow{2}{5em}{Linear Forecast} \\ &&&&&&&\\ \hline
Min &14.2&16.11&13.53&\multirow{2}{4em}{Total L2 residual}&\multirow{2}{4em}{3.3970e03}&\multirow{2}{4em}{6.5986e03}&\multirow{2}{4em}{8.13195e03} \\ \cline{1-4}
Max&37.7&32.83&33.87& & & &\\ \hline
Mean &23.25&23.31&21.67&\multirow{3}{4em}{Total L2 residual per point}&\multirow{3}{4em}{12.7229}&\multirow{3}{4em}{24.7137}&\multirow{3}{4em}{30.4564} \\ \cline{1-4}
STD &5.104&4.482&4.352&&&&\\ \cline{1-4}
Range &23.5&16.71&20.35&&&&\\ \hline
\end{tabular} \caption{Statistical results of BOM 7-day maximum temperature forecasts} \end{table}
\begin{table}[h!]
\begin{tabular}{|c|c|c|c||c|c|c|c|}
\hline
\multirow{2}{2em}{Data Type}& \multirow{2}{2em}{Raw Data}&\multirow{2}{6em}{Causal Extrapolation}&\multirow{2}{6em}{Salsa Extrapolation}&\multirow{2}{5em}{Comparison method}&\multirow{2}{5em}{Causal Forecast}&\multirow{2}{5em}{Salsa Forecast}&\multirow{2}{5em}{Linear Forecast} \\ &&&&&&&\\ \hline
Min &1.7&5.685&3.725&\multirow{2}{4em}{Total L2 residual}&\multirow{2}{4em}{2.7701e03}&\multirow{2}{4em}{5.0055e03}&\multirow{2}{4em}{6.3337e03} \\ \cline{1-4}
Max&21.8&17.6&17.28& & & &\\ \hline
Mean &11.63&11.56&9.845&\multirow{3}{4em}{Total L2 residual per point}&\multirow{3}{4em}{10.3750}&\multirow{3}{4em}{18.7474}&\multirow{3}{4em}{23.7216} \\ \cline{1-4}
STD &3.937&3.14&2.504&&&&\\ \cline{1-4}
Range &20.1&11.91&13.56&&&&\\ \hline
\end{tabular} \caption{Statistical results of BOM 7-day minimum temperature forecasts} \end{table}
\begin{figure}[h!] \centering \includegraphics[width=\textwidth]{MaximumandminimumtempssalsaandBLE7dayforecast.jpg} \caption{SALSA, Causal and linear extrapolation forecasts for 7 days of temperature data over the period of a year} \end{figure}
In these experiments the superiority of Causal smoothing extrapolation is clear, with its L2 residual on the temperature data being 45--55\% smaller than that of its nearest competitor. However, while its predictions are far closer to the actual points than those of the other two methods, it compresses the data range to 65--77\% of the raw data range. Comparatively, the SALSA extrapolation compresses the data to 67--84\% of the raw range. Both perform significantly better than linear extrapolation, and in neither experiment does either produce a forecast so poor that the maximum-temperature forecast falls below the minimum temperature for the day, or vice versa. Causal smoothing extrapolation appears to be a significantly more useful forecasting tool for this particular data set. \break

\section{Australian Stock Exchange experiments}
Similarly to the previous section, four tests were conducted using data from the Australian Stock Exchange: 2-day and 5-day forecasts of minimum and maximum prices, repeated 79 and 31 times respectively, with moving 91-point data sets. The data set for the stock price testing was obtained from Market Index [6], consisting of the opening, high, low and close values of a tradeable stock and index fund between 11 October 2017 and 10 October 2018. These values were also confirmed against Commonwealth Securities Limited quotes [7] for these data points.
\begin{figure}[h!] \centering \includegraphics[width=\textwidth]{Untitled.png} \caption{SALSA, Causal and linear extrapolation forecasts for 2 days of ASX stock data over the period of a year} \end{figure}
\begin{table}[h!]
\begin{tabular}{|c|c|c|c||c|c|c|c|}
\hline
\multirow{2}{2em}{Data Type}& \multirow{2}{2em}{Raw Data}&\multirow{2}{6em}{Causal Extrapolation}&\multirow{2}{6em}{Salsa Extrapolation}&\multirow{2}{5em}{Comparison method}&\multirow{2}{5em}{Causal Forecast}&\multirow{2}{5em}{Salsa Forecast}&\multirow{2}{5em}{Linear Forecast} \\ &&&&&&&\\ \hline
Min &67.92&68.34&68.8&\multirow{2}{4em}{Total L2 residual}&\multirow{2}{4em}{150.9761}&\multirow{2}{4em}{194.0201}&\multirow{2}{4em}{5.8375e03} \\ \cline{1-4}
Max&77.66&76.86&76.82& & & &\\ \hline
Mean &73.12&72.98&71.86&\multirow{3}{4em}{Total L2 residual per point}&\multirow{3}{4em}{0.9320}&\multirow{3}{4em}{1.2437}&\multirow{3}{4em}{36.034} \\ \cline{1-4}
STD &2.242&2.13&72.88&&&&\\ \cline{1-4}
Range &9.74&8.52&8.02&&&&\\ \hline
\end{tabular} \caption{Statistical results of ASX 2-day maximum price forecasts} \end{table}
\begin{table}[h!] \begin{tabular}{|c|c|c|c||c|c|c|c|}
\hline
\multirow{2}{2em}{Data Type}& \multirow{2}{2em}{Raw Data}&\multirow{2}{6em}{Causal Extrapolation}&\multirow{2}{6em}{Salsa Extrapolation}&\multirow{2}{5em}{Comparison method}&\multirow{2}{5em}{Causal Forecast}&\multirow{2}{5em}{Salsa Forecast}&\multirow{2}{5em}{Linear Forecast} \\ &&&&&&&\\ \hline
Min &67&67.56&67.24&\multirow{2}{4em}{Total L2 residual}&\multirow{2}{4em}{150.3173}&\multirow{2}{4em}{195.6079}&\multirow{2}{4em}{5.7525e03} \\ \cline{1-4}
Max&77.17&75.99&76.32& & & &\\ \hline
Mean &72.18&72.05&70.97&\multirow{3}{4em}{Total L2 residual per point}&\multirow{3}{4em}{0.9279}&\multirow{3}{4em}{1.2539}&\multirow{3}{4em}{35.5091} \\ \cline{1-4}
STD &2.237&2.119&8.179&&&&\\ \cline{1-4}
Range &10.17&8.437&9.08&&&&\\ \hline
\end{tabular} \caption{Statistical results of ASX 2-day minimum price forecasts} \end{table}
\begin{table}[h!] \begin{tabular}{|c|c|c|c||c|c|c|c|}
\hline
\multirow{2}{2em}{Data Type}& \multirow{2}{2em}{Raw Data}&\multirow{2}{6em}{Causal Extrapolation}&\multirow{2}{6em}{Salsa Extrapolation}&\multirow{2}{5em}{Comparison method}&\multirow{2}{5em}{Causal Forecast}&\multirow{2}{5em}{Salsa Forecast}&\multirow{2}{5em}{Linear Forecast} \\ &&&&&&&\\ \hline
Min &67.92&68.83&68.8&\multirow{2}{4em}{Total L2 residual}&\multirow{2}{4em}{296.9839}&\multirow{2}{4em}{354.9674}&\multirow{2}{4em}{5.8375e03} \\ \cline{1-4}
Max&77.66&77.64&76.74& & & &\\ \hline
Mean &73.12&73.2&71.66&\multirow{3}{4em}{Total L2 residual per point}&\multirow{3}{4em}{1.8332}&\multirow{3}{4em}{2.1912}&\multirow{3}{4em}{36.034} \\ \cline{1-4}
STD &2.242&2.154&8.25&&&&\\ \cline{1-4}
Range &9.74&8.807&7.94&&&&\\ \hline
\end{tabular} \caption{Statistical results of ASX 5-day maximum price forecasts} \end{table}
\begin{table}[h!]
\begin{tabular}{|c|c|c|c||c|c|c|c|}
\hline
\multirow{2}{2em}{Data Type}& \multirow{2}{2em}{Raw Data}&\multirow{2}{6em}{Causal Extrapolation}&\multirow{2}{6em}{Salsa Extrapolation}&\multirow{2}{5em}{Comparison method}&\multirow{2}{5em}{Causal Forecast}&\multirow{2}{5em}{Salsa Forecast}&\multirow{2}{5em}{Linear Forecast} \\ &&&&&&&\\ \hline
Min &67&68.02&67.9&\multirow{2}{4em}{Total L2 residual}&\multirow{2}{4em}{283.3359}&\multirow{2}{4em}{315.1025}&\multirow{2}{4em}{5.7525e03} \\ \cline{1-4}
Max&77.17&76.89&76.03& & & &\\ \hline
Mean &72.18&72.26&70.89&\multirow{3}{4em}{Total L2 residual per point}&\multirow{3}{4em}{1.749}&\multirow{3}{4em}{1.9451}&\multirow{3}{4em}{35.5091} \\ \cline{1-4}
STD &2.237&2.167&8.163&&&&\\ \cline{1-4}
Range &10.17&8.871&8.13&&&&\\ \hline
\end{tabular} \caption{Statistical results of ASX 5-day minimum price forecasts} \end{table}
\begin{figure}[h!] \centering \includegraphics[width=\textwidth]{SalsavscasualextrapolationASXmaximumandminimum7dayforecast.jpg} \caption{SALSA, Causal and linear extrapolation forecasts for 5 days of ASX stock data over the period of a year} \end{figure}
This series of experiments had results similar to the temperature forecasts, with Causal extrapolation being the superior forecaster while compressing the data set the most. The main difference is that in three out of the four cases the Causal extrapolation had ranges closer to the raw data range than the SALSA extrapolation, forecasting data which visually resembles the shape of the underlying process more closely. The SALSA forecasts also contained maximum prices which fell below the forecast minimum price, and minimum prices which rose above the forecast maximum price, while Causal smoothing extrapolation produced neither. Therefore, if SALSA extrapolation were used to forecast prices in an attempt to buy and sell stocks over a 2- or 5-day period, a forecast maximum price could fall below the corresponding minimum price, resulting in a catastrophic loss if trades were made by an autonomous system. Causal smoothing extrapolation does not run as large a risk of this catastrophic failure, because at no point does its forecast for the maximum price fall below its forecast for the minimum price, or vice versa. For this data type, the more conservative and reliable forecasts made by Causal smoothing extrapolation are clearly the better choice.

\section{Summary and Conclusion}
In all cases the primary methods of forecasting discussed in this paper are superior to a basic linear forecast when the underlying process is unknown, and SALSA extrapolation is clearly superior for the known process. For the complex underlying processes, Causal Smoothing extrapolation had a lower L2 residual than SALSA extrapolation, resulting in forecasts more likely to be close to the values that actually occur. This is ideal for the Australian Stock Exchange data, as it results in the minimum amount of loss when conducting trades and allows trades closer to the actual minimum and maximum prices on any given day. It also never produced a forecast in which a loss would occur if a stock were sold at the maximum forecast price and bought at the minimum forecast price, while this occurred multiple times with SALSA extrapolation. This is not to say that either process is guaranteed to make a profit on any given day or over time, but SALSA extrapolation has the greater capacity to incur a long-term loss, or a catastrophic short-term loss, in an autonomous trading process.
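Such max/min crossings can be flagged mechanically before any order is placed; a small sketch, with illustrative field names and values:

\begin{verbatim}
def crossed_days(forecast_max, forecast_min):
    """Indices where the forecast maximum falls below the forecast minimum."""
    return [t for t, (hi, lo) in enumerate(zip(forecast_max, forecast_min))
            if hi < lo]

print(crossed_days([75.0, 71.2, 73.4], [70.1, 71.9, 70.0]))   # -> [1]
\end{verbatim}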
In the case of the weather data, which method of prediction is superior is not as straightforward, as no crossing between minimum and maximum forecasts occurs. The Causal smoothing extrapolation had an L2 residual half that of the SALSA forecasts, but a range of at most 82\% of the SALSA range. As a result, the Causal smoothing forecasts give far more accurate estimates of the expected maximum and minimum temperatures for any given day, while the SALSA forecast produces a wider band of temperatures, which could prove more useful for preparing for what kind of weather tomorrow may bring. The forecasting of electrical signals is the simplest series to discuss, as the SALSA forecasts are more accurate, barely compressed, and completed faster than Causal smoothing extrapolation. This makes Causal smoothing extrapolation a redundant method for forecasting electrical signals, as the forecast period was shorter than the time it took to complete a prediction; SALSA extrapolation, on the other hand, is shown to be a forecasting process that is highly practical for making short-range predictions of electrical signal data. This leads to the conclusion that Causal smoothing extrapolation is useful when the underlying process has no dedicated forecasting method or is too complex to forecast reasonably, but it does not outperform a specially tailored forecasting solution. The experiments conducted here cannot be considered exhaustive, and it would be advisable to run more extensive tests on each data set, adjusting the parameters of both SALSA and Causal smoothing extrapolation for each data set and for fluctuations within it. This would achieve more accurate results, which are unattainable within the confines of this paper. The most conclusive statement that can be made about these forecasting methods is that Causal smoothing extrapolation provides a more conservative estimate of future series data while losing more information about the data set, whereas SALSA extrapolations are in general less likely to match the actual future values but are much faster and span a wider range of values, with the exception of the process it was designed to forecast, for which it is the superior method in all respects. Each of these characteristics could be more useful depending on the circumstances and the need for forecasting. Finally, both are far better than following an average trend line when the underlying process is either too complex or unknown.

\section*{References}\ \hphantom{xx}
[1] N. Dokuchaev, ``On causal extrapolation of sequences with applications to forecasting,'' Applied Mathematics and Computation, vol. 328, pp. 276--286, 2018. \par
[2] N. Rowe, ``Applications of band-limited extrapolation to forecasting of weather and financial time series,'' arXiv, 2019. \par
[3] M. V. Afonso, J. M. Bioucas-Dias and M. A. T. Figueiredo, ``Fast Image Recovery Using Variable Splitting and Constrained Optimization,'' IEEE Transactions on Image Processing, vol. 19, no. 9, pp. 2345--2356, 2010. \par
[4] Australian Government Bureau of Meteorology, http://www.bom.gov.au/jsp/ncc/cdio/weatherData /av?p\_nccObsCode=122\&p\_display\_type=dailyDataFile\&p\_startYear=\&p\_c=\&p\_stn\_num=009225. [Accessed 11 10 2018]. \par
[5] Australian Government Bureau of Meteorology, ``Basic Climatological Station Metadata,'' 26 7 2018.
http://www.bom.gov.au/clim\_data/cdio/metadata/pdf/siteinfo/IDCJMD0040.009225.SiteInfo.pdf. [Accessed 31 10 2018]. \par
[6] Market Index, ``S\&P/ASX 200,'' 10 10 2018. https://www.marketindex.com.au/asx200. [Accessed 10 10 2018]. \par
[7] Commonwealth Securities Limited, ``Trade History CBA,'' 26 10 2018. https://www2.commsec.com.au/Private/MarketPrices/TradeHistory/TradeHistory.aspx?stockCode=CBA. [Accessed 26 10 2018]. \par
[8] Australian Government Department of the Environment and Energy, ``Portfolio Budget Statements 2017--18,'' http://www.environment.gov.au/about-us/publications/budget/portfolio-budget-statements-2017-18. [Accessed 2 11 2018]. \par
[9] N. Bingham, ``Szegö's theorem and its probabilistic descendants,'' Probability Surveys, vol. 9, pp. 287--324, 2012. \par
[10] N. Dokuchaev, ``Filtration and extrapolation in trajectory analysis based on spectrum methods,'' working paper, 2019. \par
[11] I. Selesnick, ``Introduction to Sparsity in Signal Processing,'' 2012. \par
[12] I. Selesnick, ``L1-Norm Penalized Least Squares with SALSA,'' Connexions, 2014.
\newpage \setcounter{equation}{0} \renewcommand{\thesubsection}{A.\arabic{subsection}} \renewcommand{\theequation}{A.\arabic{equation}} \renewcommand{\thelemma}{A.\arabic{lemma}} \renewcommand{\theproposition}{A.\arabic{proposition}}
\section*{Appendix}
\subsection{Monte Carlo Extrapolation}
\begin{verbatim}
clear all
% Monte Carlo selection of the SALSA constants: simulate a random process,
% hide the last points and measure the forecast residual.
muset=zeros(2,200);
mu=0.6
NIT=1000; lambda= 1; tots=0; cost=zeros(1,NIT); time=0
for j=0:1:1000
    [B,u,z]=carlosim(55,1,1,1);          % simulated test process z(t)
    S=0; M=98; N=200;
    y=z(1+S:M+S)'+80;
    K=91;                                % observed points; the rest is masked
    k=ones(M,1); k=k(1:K);
    s=true(1,K)'; s(1:K)=true; s(K+1:M)=false;
    Y=y; Y(~s)=0;
    p=200;
    c =AT(Y,M,N); x1=AT(Y,M,N);
    size(c); size(A(c,M,N));
    d = zeros(size(c));
    cost = zeros(1,NIT);
    for i = 1:NIT                        % SALSA iteration (steps 3-5)
        u = soft(c + d, lambda/mu) - d;
        d = 1/(mu+p) * AT((Y - s.*A(u,M,N)),M,N);
        c = d + u;
        residual = Y - A(c,M,N);
        cost(i) = sum(abs(residual(:)).^2) + sum(abs(lambda * c(:)));
    end
    x=A(c,M,N);
    tots=tots+sum(abs(((z(91:98)'+80)-x(91:98))).^2);  % forecast residual
    x=0;
    time=time+1
end
tots
cost(NIT)
t=1:98; x=A(c,M,N)
hold on
plot(t(1:91),real(x(1:91)),'gx')
plot(t(91:98),real(x(91:98)),'rx')
plot(t(1:98),z(1:98)+80,'b')
title('Simulation of Salsa Extrapolation')
xlabel('N')
ylabel('Magnitude')
line([0,0],[-3,3],'Color',[0,0,0],'HandleVisibility','off')
hold all
legend('Salsa data points','Extrapolated Points','Original simulation points')
\end{verbatim}
\subsection{Electronic signal forecast with Salsa and Causal Extrapolation}
\begin{verbatim}
% Load the simulated electrical signal.
AA=load('C:\Users\valough\Desktop\MATH5001P.mat');
z=AA.SIG(1,:);
N=45; n=-N:1:N; M=2*N+1; X=pi; s=91; q=1; ts=q:1:s; time=0
% Five-point moving average of the signal (edges initialised separately).
MV=zeros(length(z),1);
MV(1)=(z(1)+z(2)+z(3)+z(4)+z(5))/5; MV(1)=MV(2); MV(1)=MV(3);
MV(length(z)-1)=(z(length(z)-1)+z(length(z)-2)+z(length(z)-3)+z(length(z)-4)+z(length(z)))/5;
MV(length(z)-1)=MV(length(z)-2); MV(length(z)-1)=MV(length(z)-3);
for l=3:(length(z)-3)
    MV(l)=(z(l)+z(l+1)+z(l+2)+z(l-2)+z(l-1))/5;
end
MA=mean(MV(5:length(MV)-5));
% Linear extrapolation baseline.
LE=zeros(10,1);
for tt=91:10:991
    mm=(z(tt)-z(tt-1))/10;
    cc=(z(tt)-((z(tt)-z(tt-1))/10)*tt);
    for it=1:10
        LE(tt+it-90)=mm*(tt+it)+cc;
    end
end
% SALSA forecast over a moving 91-point window.
smu=0.6; stots=0; sM=101; sN=200; sNIT=1000; slambda=1; sp=200; sS=0;
sD=zeros(163,1); timesalsa=0
for sS=0:10:890
    sy=z(1+sS:sM+sS)';
    sK=91;
    sk=ones(sM,1); sk=sk(1:sK);
    ss=true(1,sK)'; ss(1:sK)=true; ss(sK+1:sM)=false;
    sY=sy; sY(~ss)=0;
    sc =AT(sY,sM,sN); sx1=AT(sY,sM,sN);
    sd = zeros(size(sc));
    cost = zeros(1,sNIT);
    for i = 1:sNIT
        su = soft(sc + sd, 0.5*slambda/smu) - sd;
        sd = 1/(smu+sp) * AT((sY - ss.*A(su,sM,sN)),sM,sN);
        sc = sd + su;
        residual = sY - A(sc,sM,sN);
        cost(i) = sum(abs(residual(:)).^2) + sum(abs(slambda * sc(:)));
    end
    sx=A(sc,sM,sN);
    stots=stots+sum(abs(((z(92+sS:101+sS)')-sx(92:101))).^2);
    sD(sS+1)=sx(92); sD(sS+2)=sx(93); sD(sS+3)=sx(94); sD(sS+4)=sx(95);
    sD(sS+5)=sx(96); sD(sS+6)=sx(97); sD(sS+7)=sx(98); sD(sS+8)=sx(99);
    sD(sS+9)=sx(100); sD(sS+10)=sx(101);
    timesalsa=timesalsa+1
end
sD
size(sD)
length(LE(1:163))
length(91:1:253)
% Causal (band-limited) smoothing extrapolation over the same windows.
g=pi/4; V=zeros(25,1);
for ii=0:89;
    QX=zeros(M,1);
    MEAN=mean(MV((89+10*ii):(91+10*ii)));
    MEAN2=mean(MV((1+10*ii):(91+10*ii)));
    MV((1+10*ii):(91+10*ii))=MV((1+10*ii):(91+10*ii))-MEAN2;
    for k=-N:1:N
        QX(k+N+1)=0;
        for t=ts;
            QX(k+N+1)=QX(k+N+1)+(g/pi)*sinc((k*pi+g*t)/X)*MV(t);
        end
    end
    R=zeros(M,M);
    for k=-N:1:N
        for m=-N:1:N
            for t=ts
                R(k+N+1,m+N+1)=R(k+N+1,m+N+1)+(g/pi)^2*sinc((k*pi+g*t)/X)*sinc((m*pi+g*t)/X);
            end
        end
    end
    v=0.1;
    RI= R+eye(M)*v;
    y=inv(RI)*QX;
    Q=zeros(length(ts)+14,1);
    for t=q:1:(s+12)
        for k=-N:1:N
            Q(t-ii*10)=Q(t-ii*10)+y(k+N+1)*(g/pi)*sinc((k*pi+g*t)/X);
        end
    end
    t=q:1:s+12;
    BLR=0;
    D=zeros(length(Q),1); D=Q(:);
    for i=1:1:91
        BLR=BLR+sqrt((z(i+10*ii)-(D(i)+MEAN2))^2);
    end
    MV((1+10*ii):(91+10*ii))=MV((1+10*ii):(91+10*ii))+MEAN2;
    BLR
    BLR/90
    D(1:91)=D(1:91)+MEAN2;
    MAA=mean(D(1:91));
    D(92:104)=D(92:104)+MEAN;
    V(q:q+12)=D(92:104);
    s=s+10; q=q+10; time=time+1
    BLR=0;
end
% Residual totals per method.
BLRE=0;
for i=1:1:891
    BLRE=BLRE+abs((z(i+91)-(V(i)))^2);
end
LERR=0;
for i=1:1:891
    LERR=LERR+abs((z(i+91)-(LE(i)))^2);
end
length(V);
BLRE
BLRE/length(V(1:891))
LERR
LERR/length(V(1:891))
stots
stots/length(sD(1:891))
length(LE); length(V); length(z); length(MV); length(sD);
tt=9.1:0.1:98.1; length(tt);
hold on
plot(0.1:0.1:9.1,z(1:91),'k.:',0.1:0.1:9.1,MV(1:91),'r','HandleVisibility','off')
plot(tt,z(91:981),'k.:',tt,V(1:891),'b--',tt,MV(91:981),'r',tt,sD(1:891),'c*')
title('Salsa and Left band limited extrapolation of an electrical signal 9 seconds moving data')
xlabel('Seconds')
ylabel('Magnitude')
line([9,9],[-4,4],'Color',[0,0,0],'HandleVisibility','off')
hold all
plot(tt,z(91:981),'k.:',tt,V(1:891),'b--',tt,MV(91:981),'r',tt,sD(1:891),'c-*')
legend('Raw data','Band limited extrapolation','Five point moving average','Salsa Extrapolation')
\end{verbatim}
\subsection{BOM forecast with Salsa, Causal and Linear Extrapolation}
\begin{verbatim}
AA=xlsread('C:\Users\valough\Documents\MATLAB\IDCJAC0010_009225_2017_Data.csv');
z=transpose(AA(:,5)); zm=transpose(AA(:,3));
N=45; n=-N:1:N; M=2*N+1; X=pi; s=91; q=1; ts=q:1:s; time=0
MV=zeros(length(z),1);
MV(1)=(z(1)+z(2)+z(3)+z(4)+z(5))/5; MV(1)=MV(2); MV(1)=MV(3);
MV(length(z)-1)=(z(length(z)-1)+z(length(z)-2)+z(length(z)-3)+z(length(z)-4)+z(length(z)))/5;
MV(length(z)-1)=MV(length(z)-2); MV(length(z)-1)=MV(length(z)-3);
for l=3:(length(z)-3)
    MV(l)=(z(l)+z(l+1)+z(l+2)+z(l-2)+z(l-1))/5;
end
MA=mean(MV(5:length(MV)-5));
LE=zeros(10,1);
for tt=91:2:343
    mm=(z(tt)-z(tt-1))/2;
    cc=(z(tt)-((z(tt)-z(tt-1))/2)*tt);
    for it=1:2
        LE(tt+it-90)=mm*(tt+it)+cc;
    end
end
smu=0.6; stots=0; sM=94; sN=200; sNIT=1000; slambda=1; sp=200; sS=0;
sD=zeros(163,1); timesalsa=0
for sS=0:2:266
    sy=z(1+sS:sM+sS)';
    sK=91;
    sk=ones(sM,1); sk=sk(1:sK);
    ss=true(1,sK)'; ss(1:sK)=true; ss(sK+1:sM)=false;
    sY=sy; sY(~ss)=0;
    sc =AT(sY,sM,sN); sx1=AT(sY,sM,sN);
    sd = zeros(size(sc));
    cost = zeros(1,sNIT);
    for i = 1:sNIT
        su = soft(sc + sd, 0.5*slambda/smu) - sd;
        sd = 1/(smu+sp) * AT((sY - ss.*A(su,sM,sN)),sM,sN);
        sc = sd + su;
        residual = sY - A(sc,sM,sN);
        cost(i) = sum(abs(residual(:)).^2) + sum(abs(slambda * sc(:)));
    end
    sx=A(sc,sM,sN);
    stots=stots+sum(abs(((z(92+sS:93+sS)')-sx(92:93))).^2);
    sD(sS+1)=sx(92);
    sD(sS+2)=sx(93);
    timesalsa=timesalsa+1
end
sD(sS+3)=sx(94);
sD
size(sD)
length(LE(1:255))
length(91:1:345)
g=pi/4; V=zeros(25,1);
for ii=0:133;
    QX=zeros(M,1);
    MEAN=mean(MV((89+2*ii):(91+2*ii)));
    MEAN2=mean(MV((1+2*ii):(91+2*ii)));
    MV((1+2*ii):(91+2*ii))=MV((1+2*ii):(91+2*ii))-MEAN2;
    for k=-N:1:N
        QX(k+N+1)=0;
        for t=ts;
            QX(k+N+1)=QX(k+N+1)+(g/pi)*sinc((k*pi+g*t)/X)*MV(t);
        end
    end
    R=zeros(M,M);
    for k=-N:1:N
        for m=-N:1:N
            for t=ts
                R(k+N+1,m+N+1)=R(k+N+1,m+N+1)+(g/pi)^2*sinc((k*pi+g*t)/X)*sinc((m*pi+g*t)/X);
            end
        end
    end
    v=0.1;
    RI= R+eye(M)*v;
    y=inv(RI)*QX;
    Q=zeros(length(ts)+14,1);
    for t=q:1:(s+6)
        for k=-N:1:N
            Q(t-ii*2)=Q(t-ii*2)+y(k+N+1)*(g/pi)*sinc((k*pi+g*t)/X);
        end
    end
    t=q:1:s+6;
    BLR=0;
    D=zeros(length(Q),1); D=Q(:);
    for i=1:1:91
        BLR=BLR+sqrt((z(i+2*ii)-(D(i)+MEAN2))^2);
    end
    MV((1+2*ii):(91+2*ii))=MV((1+2*ii):(91+2*ii))+MEAN2;
    BLR
    BLR/90
    D(1:91)=D(1:91)+MEAN2;
    MAA=mean(D(1:91));
    D(92:98)=D(92:98)+MEAN;
    V(q:q+6)=D(92:98);
    s=s+2; q=q+2; time=time+1
    BLR=0;
end
BLRE=0;
for i=1:1:255
    BLRE=BLRE+abs((z(i+91)-(V(i)))^2);
end
LERR=0;
for i=1:1:255
    LERR=LERR+abs((z(i+91)-(LE(i)))^2);
end
length(V);
BLRE
BLRE/length(V(1:255))
LERR
LERR/length(V(1:255))
stots
stots/length(sD(1:255))
length(LE(1:255)); length(V(1:255)); length(z(91:345)); length(MV(91:345));
tt=91:1:345; length(tt);
hold on
plot(1:91,z(1:91),'k.:',1:91,MV(1:91),'r','HandleVisibility','off')
plot(tt,z(91:345),'k.:',tt,V(1:255),'b--',tt,MV(91:345),'r',tt,sD(1:255),'c*')
title('Salsa and Left band limited extrapolation of the temperature at Perth Metro, 38 weeks of tests, 90 days of data moving')
xlabel('Days')
ylabel('Max Temperature C')
line([91,91],[0,40],'Color',[0,0,0],'HandleVisibility','off')
hold all
plot(91:1:345,LE(1:255),'g--')
legend('Raw data','Band limited extrapolation','Five point moving average','Salsa Extrapolation','Linear extrapolation')
\end{verbatim}
\subsection{ASX forecast with Salsa, Causal and Linear Extrapolation}
\begin{verbatim}
AA=xlsread('C:\Users\valough\Documents\MATLAB\CBA.AX.csv');
z=transpose(AA(:,3)); zm=transpose(AA(:,3));
N=45; n=-N:1:N; M=2*N+1; X=pi; s=91; q=1; ts=q:1:s; time=0
MV=zeros(length(z),1);
MV(1)=(z(1)+z(2)+z(3)+z(4)+z(5))/5; MV(1)=MV(2); MV(1)=MV(3);
MV(length(z)-1)=(z(length(z)-1)+z(length(z)-2)+z(length(z)-3)+z(length(z)-4)+z(length(z)))/5;
MV(length(z)-1)=MV(length(z)-2); MV(length(z)-1)=MV(length(z)-3);
for l=3:(length(z)-3)
    MV(l)=(z(l)+z(l+1)+z(l+2)+z(l-2)+z(l-1))/5;
end
MA=mean(MV(5:length(MV)-5));
LE=zeros(10,1);
for tt=91:2:253
    mm=(z(tt)-z(tt-1))/2;
    cc=(z(tt)-((z(tt)-z(tt-1))/2)*tt);
    for it=1:2
        LE(tt+it-90)=mm*(tt+it)+cc;
    end
end
smu=15; stots=0; sM=98; sN=200; sNIT=1000; slambda=1; sp=200; sS=0;
sD=zeros(163,1); timesalsa=0
for sS=0:5:155
    sy=z(1+sS:sM+sS)';
    sK=91;
    sk=ones(sM,1); sk=sk(1:sK);
    ss=true(1,sK)'; ss(1:sK)=true; ss(sK+1:sM)=false;
    sY=sy; sY(~ss)=0;
    sc =AT(sY,sM,sN); sx1=AT(sY,sM,sN);
    sd = zeros(size(sc));
    cost = zeros(1,sNIT);
    for i = 1:sNIT
        su = soft(sc + sd, 0.5*slambda/smu) - sd;
        sd = 1/(smu+sp) * AT((sY - ss.*A(su,sM,sN)),sM,sN);
        sc = sd + su;
        residual = sY - A(sc,sM,sN);
        cost(i) = sum(abs(residual(:)).^2) + sum(abs(slambda * sc(:)));
    end
    sx=A(sc,sM,sN);
    stots=stots+sum(abs(((z(92+sS:96+sS)')-sx(92:96))).^2);
    sD(sS+1)=sx(92); sD(sS+2)=sx(93); sD(sS+3)=sx(94); sD(sS+4)=sx(95); sD(sS+5)=sx(96);
    timesalsa=timesalsa+1
end
sD(sS+6)=sx(97);
sD
size(sD)
length(LE(1:163))
length(91:1:253)
g=pi/4; V=zeros(25,1);
for ii=0:31;
    QX=zeros(M,1);
    MEAN=mean(MV((89+5*ii):(91+5*ii)));
    MEAN2=mean(MV((1+5*ii):(91+5*ii)));
    MV((1+5*ii):(91+5*ii))=MV((1+5*ii):(91+5*ii))-MEAN2;
    for k=-N:1:N
        QX(k+N+1)=0;
        for t=ts;
            QX(k+N+1)=QX(k+N+1)+(g/pi)*sinc((k*pi+g*t)/X)*MV(t);
        end
    end
    R=zeros(M,M);
    for k=-N:1:N
        for m=-N:1:N
            for t=ts
                R(k+N+1,m+N+1)=R(k+N+1,m+N+1)+(g/pi)^2*sinc((k*pi+g*t)/X)*sinc((m*pi+g*t)/X);
            end
        end
    end
    v=0.1;
    RI= R+eye(M)*v;
    y=inv(RI)*QX;
    Q=zeros(length(ts)+14,1);
    for t=q:1:(s+9)
        for k=-N:1:N
            Q(t-ii*5)=Q(t-ii*5)+y(k+N+1)*(g/pi)*sinc((k*pi+g*t)/X);
        end
    end
    t=q:1:s+9;
    BLR=0;
    D=zeros(length(Q),1); D=Q(:);
    for i=1:1:91
        BLR=BLR+sqrt((z(i+5*ii)-(D(i)+MEAN2))^2);
    end
    MV((1+5*ii):(91+5*ii))=MV((1+5*ii):(91+5*ii))+MEAN2;
    BLR
    BLR/90
    D(1:91)=D(1:91)+MEAN2;
    MAA=mean(D(1:91));
    D(92:101)=D(92:101)+MEAN;
    V(q:q+9)=D(92:101);
    s=s+5; q=q+5; time=time+1
    BLR=0;
end
BLRE=0;
for i=1:1:162
    BLRE=BLRE+abs((z(i+91)-(V(i)))^2);
end
LERR=0;
for i=1:1:162
    LERR=LERR+abs((z(i+91)-(LE(i)))^2);
end
length(V);
BLRE
BLRE/length(V(1:162))
LERR
LERR/length(V(1:162))
stots
stots/length(sD(1:162))
length(LE); length(V); length(z); length(MV); length(sD);
tt=91:1:253; length(tt)
hold on
plot(1:1:90,z(1:90),'k.:',1:1:90,MV(1:90),'r','HandleVisibility','off')
plot(tt,z(91:253),'k.:',tt,V(1:163),'b--',tt,MV(91:253),'r',tt,sD(1:163),'c*')
title('Left band limited causal extrapolation and salsa extrapolation of ASX stock price, 5 day forecast for 160 days')
xlabel('Days')
ylabel('Max price $')
line([90,90],[40,100],'Color',[0,0,0],'HandleVisibility','off')
hold all
plot(91:1:253,LE(1:163),'g--')
legend('Raw data','Band limited extrapolation','Five point moving average', 'Salsa Extrapolation','Linear extrapolation')
\end{verbatim}
\subsection{``soft'' code created by Ivan Selesnick}
\begin{verbatim}
% Elementwise soft threshold used in steps 3 and 4 of the SALSA algorithm.
function y = soft(x, T)
y = max(1 - T./abs(x), 0) .* x;
\end{verbatim}
\end{document}
\section{ Model production function and CES } \label{apx:ces_prodfun} Here we show that the production functions used in the main text are highly related to nested CES production functions. Specifically, we consider a CES production function with three nests of the form (we suppress time indices for convenience) \begin{equation} x_{i}^\text{inp} = \Big( a_i^\text{C} (z_i^\text{C})^\beta + a_i^\text{IMP} (z_i^\text{IMP})^\beta + (a_i^\text{NC})^{1-\beta} (z_i^\text{NC})^\beta \Big)^{\frac{1}{\beta}}, \end{equation} where $\beta$ is the substitution parameter. Variables $a_i^\text{C} = \sum_{j \in \text{C}} A_{ji}$, $a_i^\text{IMP} = \sum_{j \in \text{IMP}} A_{ji} $ and $a_i^\text{NC} = \sum_{j \in \text{NC}} A_{ji} $ are the input shares (technical coefficients) for critical, important and non-critical inputs, respectively. To be consistent with the specifications of the main text, we do not consider labor inputs here and only focus on $x_i^\text{inp}$. Alternatively, we could include labor inputs in the set of critical inputs and derive the full production function in an analogous manner. $z_i^\text{C}$, $z_i^\text{IMP}$ and $z_i^\text{NC}$ are CES aggregates of critical, important and non-critical inputs for which we have \begin{align} z_i^\text{C} &= \left[ \sum_{j \in \text{C}} A_{ji}^{1-\nu} S_{ji}^\nu \right]^{\frac{1}{\nu}}, \\ z_i^\text{IMP} &= \frac{1}{2} \left[ \sum_{j \in \text{IMP}} A_{ji}^{1-\psi} S_{ji}^\psi \right]^{\frac{1}{\psi}} + \; \frac{1}{2} x_i^\text{cap}, \\ z_i^\text{NC} &= \left[ \sum_{j \in \text{NC}} A_{ji}^{1-\zeta} S_{ji}^\zeta \right]^{\frac{1}{\zeta}}. \end{align} If we assume that every input is critical (i.e. the set of important and non-critical inputs is empty: $a_i^\text{IMP} = a_i^\text{NC} = 0$), then by taking the limits $\beta, \nu \to -\infty$ we recover the Leontief production function \begin{equation} x_i^\text{inp} = \underset{j}{\min} \left\{ \frac{S_{ji}}{A_{ji}} \right\}. \label{eq:leon_limit} \end{equation} If we assume that there are no critical or important inputs at all ($a_i^\text{C} = a_i^\text{IMP} = 0$), taking the limits $\beta \to -\infty$ and $\zeta \to 1$ yields the linear production function \begin{equation} x_i^\text{inp} = \frac{ \sum_{j} S_{ji} }{ a_i^\text{NC} }. \label{eq:linear_limit} \end{equation} The IHS1 and IHS3 production functions treat important inputs either as critical or non-critical, i.e. the set of important inputs is again empty ($a_i^\text{IMP} = 0$). We can approximate these functions again by taking the limits $\beta, \nu \to -\infty$ and $\zeta \to 1$, which yields \begin{equation} x_i^\text{inp} = \min \left\{ \underset{j \in \text{C} }{\min} \left\{ \frac{S_{ji}}{A_{ji}} \right\}, \frac{ \sum_{j \in \text{NC} } S_{ji} }{ a_i^\text{NC} } \right\}. \label{eq:ihs13_limit} \end{equation} Note that the difference between the two production functions lies in the different definition of critical and non-critical inputs; the functional form in Eq.~\eqref{eq:ihs13_limit} applies to both cases equivalently. The IHS2 production function, where the set of important inputs is non-empty, can be similarly approximated. In addition to the limits above, we also take the limit $\psi \rightarrow -\infty$ to obtain \begin{equation} x_i^\text{inp} = \min \left\{ \underset{j \in \text{C} }{\min} \left\{ \frac{S_{ji}}{A_{ji}} \right\}, \frac{1}{2} \underset{j \in \text{IMP} }{\min} \left\{ \frac{S_{ji}}{A_{ji}} + x_i^\text{cap} \right\}, \frac{ \sum_{j \in \text{NC} } S_{ji} }{ a_i^\text{NC} } \right\}. \label{eq:ihs2_limit} \end{equation} Eqs.~\eqref{eq:ihs13_limit} and \eqref{eq:ihs2_limit} are not exactly identical to the IHS production functions used in the main text (Eqs.~\eqref{eq:xinp_ihs1} -- \eqref{eq:xinp_ihs3}), but very similar. The only difference is that in the IHS functions non-critical inputs play no role for production at all, whereas here they enter the equations as a linear term. However, simulations show that this difference is practically irrelevant: simulating the model with the CES-derived functions instead of the IHS production functions yields exactly the same results, indicating that the linear term representing non-critical inputs is never a binding constraint.
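To illustrate, the following is a small numerical sketch of the limiting form in Eq.~\eqref{eq:ihs13_limit} for a single industry $i$; the inventories, technical coefficients and the partition into critical and non-critical inputs are made-up values, not model data.

\begin{verbatim}
import numpy as np

S = np.array([4.0, 9.0, 2.0, 6.0])    # inventories S_ji of inputs j
A = np.array([0.5, 1.0, 0.4, 0.8])    # technical coefficients A_ji
critical = np.array([True, True, False, False])

leontief_part = np.min(S[critical] / A[critical])       # binding critical input
linear_part = S[~critical].sum() / A[~critical].sum()   # non-critical aggregate
x_inp = min(leontief_part, linear_part)
print(x_inp)   # production permitted by intermediate inputs (~6.67 here)
\end{verbatim}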
\label{eq:ihs2_limit} \end{equation} Eqs.~\eqref{eq:ihs13_limit} and \eqref{eq:ihs2_limit} are not exactly identical to the IHS production functions used in the main text (Eqs.~\eqref{eq:xinp_ihs1} -- \eqref{eq:xinp_ihs3}), but they are very similar. The only difference is that in the IHS functions non-critical inputs do not play a role for production at all, whereas here they enter the equations as a linear term. However, simulations show that this difference is practically irrelevant. Simulating the model with the CES-derived functions instead of the IHS production functions yields exactly the same results, indicating that the linear term representing non-critical inputs is never a binding constraint. \section{First-order supply and demand shocks} \label{apx:shocks} \subsection{Supply shocks} Due to the Covid-19 pandemic, industries experience supply-side reductions owing to the closure of non-essential industries, workers not being able to perform their activities at home, and difficulties adapting to social distancing measures. Many industries also face substantial reductions in demand. \cite{del2020supply} provide quantitative predictions of these first-order supply and demand shocks for the U.S. economy. To calculate supply-side predictions, \cite{del2020supply} constructed a Remote Labor Index, which measures the ability of different occupations to work from home, and scored industries according to their essentialness based on the Italian government regulations. We follow a similar approach. We score industries' essentialness based on the UK government regulations and use occupational data to estimate the fraction of workers that could work remotely and the difficulties sectors faced in adapting to social distancing measures for on-site work. Several of our estimates are based on indexes and scores available for industries in the NAICS 4-digit classification system. An essential step in our methodology is to map these estimates into the WIOD industry classification system. We explain our mapping methodology below. \paragraph{NAICS to WIOD mapping} We build a crosswalk from the NAICS 4-digit industry classification to the classification system used in WIOD, which is a mix of ISIC 2-digit and 1-digit codes. We construct this crosswalk using the NAICS to ISIC 2-digit crosswalk from the European Commission and then aggregate the 2-digit codes presented as 1-digit in the WIOD classification system. We then do an employment-weighted aggregation of the index or score under consideration from the 277 industries at the NAICS 4-digit classification level to the 55 industries in the WIOD classification. Some of the 4-digit NAICS industries map into more than one WIOD industry classification. When this happens, we assume employment is split uniformly among the WIOD industries the NAICS industry maps into. \paragraph{Remote Labor Index and Physical Proximity Index} To estimate the fraction of workers that could work remotely, we use the Remote Labor Index from \cite{del2020supply}. To understand the difficulties different sectors face in adapting to the social distancing measures, we compute an industry-specific Physical Proximity Index. Other works have also used the Physical Proximity of occupations to understand the economic consequences of the lockdown \citep{mongey2020workers,koren2020business}.
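To illustrate the employment-weighted aggregation used in this mapping (and below for the RLI, the PPI and the essential scores), the following minimal Python sketch maps hypothetical NAICS 4-digit scores into WIOD sectors, splitting employment uniformly when a NAICS industry matches several WIOD sectors. All names and data structures are illustrative assumptions, not our actual pipeline.
\begin{verbatim}
def aggregate_to_wiod(scores, employment, crosswalk):
    # scores:     dict naics_code -> score (e.g. RLI or PPI)
    # employment: dict naics_code -> employment level
    # crosswalk:  dict naics_code -> list of WIOD sectors
    num, den = {}, {}
    for naics, wiod_sectors in crosswalk.items():
        share = employment[naics] / len(wiod_sectors)  # uniform split
        for w in wiod_sectors:
            num[w] = num.get(w, 0.0) + share * scores[naics]
            den[w] = den.get(w, 0.0) + share
    return {w: num[w] / den[w] for w in num}

# Toy example: NAICS 4511 maps into two WIOD sectors.
print(aggregate_to_wiod({"4511": 0.5, "4512": 1.0},
                        {"4511": 100, "4512": 300},
                        {"4511": ["G45", "G46"], "4512": ["G45"]}))
# -> {'G45': 0.928..., 'G46': 0.5}
\end{verbatim}
The same aggregation logic, with employment weights, underlies the industry-level RLI and PPI described next.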
We map the Physical Proximity Work Context\footnote{\url{https://www.onetonline.org/find/descriptor/result/4.C.2.a.3}} of occupations provided by O*NET into industries, using the same methodology that \cite{del2020supply} used to map the Remote Labor Index into industries. That is, we use BLS data that indicate the occupational composition of each industry and take the employment-weighted average of the work context of the occupations employed in each industry at the NAICS 4-digit classification system. Under the assumption that the distribution of occupations across industries and the percentage of essential workers within an industry are the same for the US and the UK, we can map the Remote Labor Index of \cite{del2020supply} and the Physical Proximity Index into the UK economy following the mapping methodology explained in the previous paragraph. The WIOD industry sector 'T' ("Activities of households as employers; undifferentiated goods- and services-producing activities of households for own use") maps into only one NAICS code, for which we do not have RLI or PPI. Since we consider sector 'T' to be similar to the 'R\_S' sector ("Other service activities"), we use the RLI and PPI of the 'R\_S' sector for the 'T' sector. In Figures~\ref{fig:RLI} and \ref{fig:PPI} we show the Remote Labor Index and the Physical Proximity Index of the WIOD sectors. \begin{figure} \centering \includegraphics[width = 0.5\textwidth]{fig/RLI_wiod.png} \caption{\textbf{Remote Labor Index of industries.} Remote Labor Index of the WIOD industry classification. See Table \ref{tab:FO_shocks_supply} for code-industry names.} \label{fig:RLI} \end{figure} \begin{figure} \centering \includegraphics[width = 0.5\textwidth]{fig/PPI_wiod.png} \caption{\textbf{Physical Proximity Index of industries.} Physical Proximity Index of the WIOD industry classification. See Table \ref{tab:FO_shocks_supply} for code-industry names.} \label{fig:PPI} \end{figure} \paragraph{Essential score} To determine the essential score of industries in scenarios $S_1$ to $S_4$ we follow the UK government guidelines. We break down WIOD sectors, which are aggregates of 2-digit NACE codes, into finer 3- and 4-digit industries. The advantage of having smaller subsectors is that we can associate shutdown orders with the subsectors and then compute an average essential score for the aggregate WIOD sector. In the following, we provide some details on how we calculate essential scores while referring the reader to the online repository to check all our assumptions. \begin{itemize} \item Essential score of G45 (Wholesale and retail trade and repair of motor vehicles and motorcycles). The only shops in this sector that were mandated to close were car showrooms (see footnote \ref{footn:ukleg} for a link to official legislation). Lacking a disambiguation between retail and wholesale trade of motor vehicles, we assign a 0.5 essential score to subsector 4511, which comprises 72\% of the turnover of G45, and an essential score of 1 to all other subsectors. The average essential score for G45 turns out to be 64\%. \item Essential score of G47 (Retail trade, except of motor vehicles and motorcycles). All ``non-essential'' shops were mandated to close, except food and alcohol retailers, pharmacies and chemists, newsagents, homeware stores, petrol stations, bicycle shops and a few others. We assigned an essential score that could be either 0 or 1 to all 37 4-digit NACE subsectors that compose G47.
Weighting essential scores by turnover results in an average essential score of 71\%. \item Essential score of I (Accommodation and food service activities). Almost all economic activities in this sector were mandated to close, except hotels for essential workers (e.g. those working in transportation) and workplace canteens where there is no practical alternative for staff at that workplace to obtain food. Assigning a 10\% essential score to the main hotels subsector (551) and a 50\% essential score to subsector 5629, which includes workplace canteens, results in an overall 5\% essential score for sector I. \item Essential score of R\_S (Recreational and other services). Almost all activities in this sector were mandated to close, except those related to repair, washing and funerals. Considering all 34 subsectors yields an essential score of 7\%. \end{itemize} \paragraph{Real Estate.} This sector includes imputed rents, which account for $69\%$ of the monetary value of the sector.\footnote{\url{https://www.ons.gov.uk/economy/grossvalueaddedgva/datasets/nominalandrealregionalgrossvalueaddedbalancedbyindustry}, Table 1B.} Because we think applying a supply shock to imputed rent does not make sense, we apply the supply shock derived from the RLI and Essential Score (which is around $50\%$) only to the remaining $31\%$ of the sector, leading to a $15\%$ final supply shock to Real Estate (due to an error, our original work used a value of $4.7\%$). If we had used the correct real estate shock, we would have predicted a 22.1\% reduction, exactly as in the data. In the main text, we still describe our prediction as having been a 21.5\% contraction, as explicitly stated in our original paper. Note that these differences are within the range of expected data revisions.\footnote{See for instance \url{https://www.ons.gov.uk/economy/grossdomesticproductgdp/datasets/revisionstrianglesforukgdpabmi} for ONS GDP quarterly national accounts revisions, which are of the order of 1~pp.} In scenario $S_5$, we mapped the supply shocks estimated by \cite{del2020supply} at the NAICS level into the WIOD classification system as explained previously. The WIOD industry sector 'T' ("Activities of households as employers; undifferentiated goods- and services-producing activities of households for own use") maps into only one NAICS code, for which we do not have a supply shock. Since this sector is likely to be essential, we assume a zero supply shock. The supply shocks at the NAICS level depend on the list of industries' essential scores at the NAICS 4-digit level provided by \cite{del2020supply}. It is important to note that, although the list of industries' essential scores provided by \cite{del2020supply} is based on the Italian list of essential industries, these lists are based on different industry classification systems (NAICS and NACE, respectively) and do not have a one-to-one correspondence. To derive the essential score of industries at the NAICS level, \cite{del2020supply} followed three steps. (i) The authors considered a 6-digit NAICS industry essential if the industry had correspondence with at least one essential NACE industry. (ii) They aggregated the 6-digit NAICS essential lists into the 4-digit level, taking into account the fraction of NAICS 6-digit subcategories that were considered essential. (iii) They revised each 4-digit NAICS industry's essential score to check for implausible classifications and reclassified ten industries whose original essential score seemed implausible.
Step (i) likely resulted in a larger fraction of industries being classified as essential at the NAICS level than at the NACE level. This, in turn, results in supply shocks $S_5$ that are milder than they would have been if the essential scores had been mapped directly from NACE to WIOD and the supply shocks calculated at the WIOD level. For scenario $S_6$ we used the list of essential industries compiled by \cite{fana2020covid} for Italy, Germany, and Spain. We construct a single list by taking the mean of each industry's essential score over the three countries. This list is at the ISIC 2-digit level, which we aggregate to the WIOD classification, weighting each sector by its gross output in the UK. Table \ref{tab:FO_shocks_supply} gives an overview of first-order supply shocks and Figure \ref{fig:supplyscenarios} shows the supply scenarios over time. \begin{figure} \centering \includegraphics[width = \textwidth]{fig/supplyshock_scenarios.pdf} \caption{ \textbf{Comparison of supply shock scenarios over time.} Each panel shows industry-specific supply shocks $\epsilon_{i,t}^S$ for a given scenario. The coloring of the lines is based on the same code as in Figure \ref{fig:aggregatevssectoral}. } \label{fig:supplyscenarios} \end{figure} \subsection{Demand shocks} For calibrating consumption demand shocks, we use the same data as \cite{del2020supply}, which are based on the \cite{CBO2006} estimates. These estimates are available only at the more aggregate 2-digit NAICS level and map into WIOD ISIC categories without complications. To give a more detailed estimate of consumption demand shocks, we also link manufacturing sectors to the closure of certain non-essential shops as follows. \begin{itemize} \item Consumption demand shock to C13-C15 (Manufacture of textiles, wearing apparel, leather and other related products). Four subsectors (4751, 4771, 4772, 4782) selling goods produced by this manufacturing sector were mandated to close, while one subsector was permitted to remain open as it sells homeware goods (4753). Lacking more detailed information about the share of C13-C15 products that these subsectors sell, we simply give equal shares to all subsectors, leading to an 80\% consumption demand shock to C13-C15. \item Consumption demand shock to C20 (Manufacture of chemicals and chemical products). Three subsectors (4752, 4773, 4774) selling homeware and medical goods were considered essential, while subsector 4775, selling cosmetic and toilet articles, was mandated to close. Using the same assumptions as above, we get a 25\% consumption demand shock for this sector. \end{itemize} The same procedure leads to consumption demand shocks for all other manufacturing subsectors. Table \ref{tab:FO_shocks_demand} shows the demand shock for each sector and Figure~\ref{fig:demand_time} illustrates the demand shock scenarios over time. \begin{figure} \centering \includegraphics[width = \textwidth]{fig/demandshock_time.pdf} \caption{ \textbf{Industry-specific demand shocks over time.} The upper left panel shows the change in preferences $\theta_{i,t}$. The bottom left panel shows shock magnitudes to investment and export. The upper right panel shows demand shocks due to fear of unemployment $\xi_{t}$. The bottom right panel is the aggregate demand shock $\tilde \epsilon_t$ taking the savings rate of 50\% into account. The coloring of the lines for industry-specific results follows the same code as in Figure \ref{fig:aggregatevssectoral}. } \label{fig:demand_time} \end{figure} \input{appendices/tab_FOshocks} \section{Critical vs.
non-critical inputs} \label{apx:ihs} A survey was designed to address the question of when production can continue during a lockdown. For each industry, IHS Markit analysts were asked to rate every one of its inputs. The exact formulation of the question was as follows: ``For each industry in WIOD, please rate whether each of its inputs are essential. We will present you with an industry X and ask you to rate each input Y. The key question is: Can production continue in industry X if input Y is not available for two months?'' Analysts could rate each input according to the following allowed answers: \begin{itemize} \item \textbf{0} -- This input is \textit{not} essential \item \textbf{1} -- This input is essential \item \textbf{0.5} -- This input is important but not essential \item \textbf{NA} -- I have no idea \end{itemize} To avoid confusion with the unrelated definition of essential industries which we used to calibrate first-order supply shocks, we refer to inputs as \textit{critical} and \textit{non-critical} instead of \textit{essential} and \textit{non-essential}. Analysts were provided with the share of each input in the expenses of the industry. It was also made explicit that the ratings assume no inventories, such that a rating captures the effect on production if the input is not available. Every industry was rated by one analyst, except for industries Mining and Quarrying (B) and Manufacture of Basic Metals (C24), which were rated by three analysts. In case there were several ratings, we took the average of the ratings and rounded it to 1 if the average was at least $2/3$ and to 0 if the average was at most $1/3$. Average input ratings lying between these boundaries are assigned the value 0.5. The ratings for each industry and input are depicted in Figure~\ref{fig:ihs_matrix}. A column denotes an industry and the corresponding rows its inputs. Blue colors indicate \textit{critical}, red \textit{important, but not critical} and white \textit{non-critical} inputs. Note that under the assumption of a Leontief production function every element would be considered to be critical, yielding a completely blue-colored matrix. The results shown here indicate that the majority of elements are non-critical inputs (2,338 ratings with score = 0), whereas only 477 industry-input pairs are rated as critical. 365 inputs are rated as important, although not critical (score = 0.5), and \textit{NA} was assigned eleven times. \begin{figure}[htbp] \centering \includegraphics[trim = {0cm 0cm 0cm 0cm}, clip,width=\textwidth]{fig/ihs/ihs_inputs_matrix.pdf} \caption{Criticality scores from IHS Markit analysts. Rows are inputs (supply) and columns industries using these inputs (demand). The blue color indicates critical (score=1), red important (score=0.5) and white non-critical (score=0) inputs. Black denotes inputs which have been rated with NA. The diagonal elements are considered to be critical by definition. For industries with multiple input ratings we took the average of all ratings and assigned a score=1 if the averaged score was at least $2/3$ and a score=0 if the average was smaller than or equal to $1/3$. } \label{fig:ihs_matrix} \end{figure} \begin{figure}[!htbp] \centering \includegraphics[trim = {0cm 0cm 0cm 0cm}, clip,width=1\textwidth]{fig/ihs/ihs_scatter_all.pdf} \caption{(Left panel) The figure shows how often an industry is rated as a critical input to other industries (x-axis) against the share of critical inputs this industry is using.
The center and right panel are the same as the left panel, except for using half-critical and non-critical scores, respectively. In each plot the identity line is shown. Point sizes are proportional to gross output. } \label{fig:ihs_scatter} \end{figure} The left panel of Figure \ref{fig:ihs_scatter} shows for each industry how often it was rated as a critical input to other industries (x-axis) and how many critical inputs this industry relies on in its own production (y-axis). Electricity and Gas (D35) is rated most frequently as a critical input in the production of other industries (score=1 for almost 60\% of industries). Also frequently rated as critical are Land Transport (H49) and Telecommunications (J61). On the other hand, many manufacturing industries (ISIC codes starting with C) stand out as relying on a large number of critical inputs. For example, around 27\% of inputs to Manufacture of Coke and Refined Petroleum Products (C19) as well as to Manufacture of Chemicals (C20) are rated as critical. The center panel of Figure \ref{fig:ihs_scatter} shows the equivalent plot for 0.5 ratings (important, but not critical inputs). Financial Services (K64) is most frequently rated as an important input which does not necessarily stop the production of an industry if not available. Conversely, the industry relying on many important but non-binding inputs is Wholesale and Retail Trade (G46), for which almost half of the inputs were rated with a score = 0.5. This makes sense given that this industry heavily relies on all these inputs, but lacking one of these does not halt economic production. This case also illustrates that a Leontief production function could vastly overestimate input bottlenecks, as Wholesale and Retail Trade would most likely still be able to realize output even if several inputs were not available. In the right panel of Figure \ref{fig:ihs_scatter} we show the same scatter plot but for non-critical inputs. 25 industries are rated to be non-critical inputs to other industries in 80\% of all cases, with Household Activities (T) and Manufacture of Furniture (C31-32) being rated as non-critical in at least 96\% of cases. Industries like Other Services (R-S), Other Professional, Scientific and Technical Activities (M74-75) and Administrative Activities (N) rely on mostly non-critical inputs ($>$90\%). A detailed breakdown of the input- and industry-specific ratings is given in Table \ref{tab:ihs_results}. \input{appendices/tab_ihs} \FloatBarrier \section{Inventory data and calibration} \label{apx:inventory} In the previous version of our work \citep{pichler2020production}, we used U.S. data from the BEA to calibrate the inventory target parameters $n_j$ in Eq. \eqref{eq:order_interm}. Here, we use more detailed UK data from the Annual Business Survey (ABS). The ABS is the main structural business survey conducted by the ONS\footnote{\url{https://www.ons.gov.uk/businessindustryandtrade/business/businessservices/methodologies/annualbusinesssurveyabs}}. It is sampled from all non-farm, non-financial private businesses in the UK (about two-thirds of the UK economy); data are available up to the 4-digit NACE level, but for our purposes 2-digit NACE industries are sufficient. The survey asks for information on a number of variables, including turnover and inventory stocks (at the beginning and end of each year). Data are available from 2008 to 2018.
They show a general increase in turnover and inventory stocks (consistent with growth of the UK economy in the same period), with moderate year-on-year fluctuations. \begin{figure} \centering \includegraphics[width = \textwidth]{fig/inventory_levels.pdf} \caption{ \textbf{Inventory/sales ratios over time.} } \label{fig:inventory_levels} \end{figure} To proxy inventory levels available in February 2020, we proceed as follows, for each industry: (i) we take the simple average between beginning- and end-of-year inventory stock levels; (ii) we calculate the ratio between this average and yearly turnover, and multiply this number by 365, because we consider a daily timescale: this inventory-to-turnover ratio is our proxy of $n_j$ (as can be seen in Figure~\ref{fig:inventory_levels}, the inventory/sales ratios are remarkably constant over time, suggesting that inventory stocks at the beginning of the Covid-19 pandemic could reliably be estimated from past data); (iii) we consider a weighted average of inventory-to-turnover ratios across all years between 2008 and 2018, giving higher weight to more recent years;\footnote{More specifically, we consider exponential weights, such that weights from a given year $X$ are proportional to $0.95^{(2018-X)}$. Some years are missing due to confidentiality problems, and data for some sectors have clear problems in some years. We deal with missing values by giving zero weights to years with missing values and renormalizing weights over the available years.} (iv) we aggregate 2-digit NACE industries to WIOD sectors (for example, we aggregate C10, C11 and C12 to C10\_C12); (v) we fill in the missing sectors (K64, K65, K66, O84, T) by imputing the average inventory-to-turnover ratio across all service sectors. The final inventory-to-sales ratios $n_j$ are shown in Figure~\ref{fig:uk_inv_sales_ratios}. As can be seen, inventories are much larger relative to sales in production, construction and trade, while they are generally lower in services, although there is considerable heterogeneity across sectors.
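To make steps (i)--(iii) concrete, the following minimal Python sketch computes the proxy of $n_j$ for a single industry from yearly ABS-style stock and turnover figures; the function and variable names are illustrative assumptions, not the actual estimation code.
\begin{verbatim}
def inventory_target(stocks_begin, stocks_end, turnover, years):
    # Exponentially weighted inventory-to-turnover ratio (in days)
    # for one industry; missing years receive zero weight.
    num, den = 0.0, 0.0
    for y in years:
        if None in (stocks_begin.get(y), stocks_end.get(y),
                    turnover.get(y)):
            continue                                         # zero weight
        avg_stock = 0.5 * (stocks_begin[y] + stocks_end[y])  # step (i)
        ratio_days = 365.0 * avg_stock / turnover[y]         # step (ii)
        weight = 0.95 ** (2018 - y)                          # step (iii)
        num += weight * ratio_days
        den += weight
    return num / den

# Toy example with two years of data:
n_j = inventory_target({2017: 90, 2018: 110}, {2017: 110, 2018: 130},
                       {2017: 3650, 2018: 3650}, range(2008, 2019))
# -> about 11 days of targeted inventory
\end{verbatim}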
\begin{figure} \centering \includegraphics[width = \textwidth]{fig/uk_inv_sales_ratios.pdf} \caption{ \textbf{Inventory/sales ratios for all WIOD industries.} } \label{fig:uk_inv_sales_ratios} \end{figure} \newpage \section{Notation} \label{apx:notation} \begin{table}[H] \scriptsize \begin{center} \begin{tabular}{ | p{2cm} | p{12cm} |} \hline Symbol & Name \\ \hline $N$ & Number of industries \\ $t$ & Time index \\ $t_\text{start\_lockdown}$ & Start date of lockdown (March 23)\\ $t_\text{end\_lockdown}$ & End date of lockdown (May 13) \\ $x_{i,t}$ & Total output of industry $i$ \\ $x_{i,t}^{\text{cap}}$ & Industry production capacity based on available labor \\ $x_{i,t}^{\text{inp}}$ & Industry production capacity based on available inputs \\ $d_{i,t}$ & Total demand for industry $i$ \\ $Z_{ij,t}$ & Intermediate consumption of good $i$ by industry $j$ \\ $O_{ij,t}$ & Intermediate orders (demand from industry $j$ to industry $i$) \\ $c_{i,t}$ & Household consumption of good $i$ \\ $c_{i,t}^d$ & Demanded household consumption of good $i$ \\ $f_{i,t}$ & Non-household final demand of good $i$ \\ $f_{i,t}^d$ & Demanded non-household final demand of good $i$ \\ $l_{i,t}$ & Labor compensation to workers of industry $i$ \\ $\tilde l_t$ & Total labor compensation \\ $\tilde{l}_{t}^p$ & Expectations for permanent labor income \\ $\tilde l^*_t$ & Total labor compensation plus social benefits \\ $\tilde c_t$ & Total household consumption \\ $\tilde{c}_{t}^d$ & Aggregate consumption demand \\ $n_{j}$ & Number of days of targeted inventory for industry $j$ \\ $A_{i,j}$ & Payments to $i$ per unit produced of $j$ (technical coefficients) \\ $S_{ij,t}$ & Stock of material $i$ held in $j$'s inventory \\ $\tau$ & Speed of inventory adjustment \\ $\theta_{i,t}$ & Share of goods from industry $i$ in consumption demand \\ $\bar \theta_{i,t}$ & Share of goods from industry $i$ in consumption demand (unnormalized) \\ $\rho$ & Speed of adjustment of aggregate consumption \\ $m$ & Share of labor income used to consume final domestic goods \\ $\xi_t$ & Fraction of pre-pandemic labor income that households expect to retain in the long run \\ $\epsilon_t$ & Consumption exogenous shock \\ $\tilde{\epsilon}_{i}^D$ & Relative changes in demand for goods of industry $i$ during lockdown \\ $\tilde{\epsilon}_{i,t}$ & Relative changes in demand for goods of industry $i$ \\ $\tilde \epsilon_t$ & Aggregate consumption shock \\ $\Delta l_{i,t}$ & Desired change of labor supply of industry $i$\\ $ l_{i,t}^\text{max}$ & Maximum labor supply for industry $i$\\ $\gamma_\text{H}$, $\gamma_\text{F}$ & Speed of upward/downward labor adjustment (hiring/firing) \\ $\Delta s$ & Change in saving rate \\ $\nu$ & Consumption shock term representing beliefs in L-shaped recovery \\ $b$ & Share of labor income compensated as social benefit \\ $\text{RLI}_i$ & Remote Labor Index of industry $i$\\ $\text{ESS}_i$ & Essential score of industry $i$\\ $\text{PPI}_i$ & Physical Proximity Index of industry $i$\\ $\iota$ & Scaling factor for $\text{PPI}_i$ \\ $\mathcal{V}_i$ & Set of critical inputs for industry $i$\\ $\mathcal{U}_i$ & Set of important inputs for industry $i$\\ \hline \end{tabular} \end{center} \caption{Notation summary.} \label{tab:notation_econ} \end{table} \section{Sensitivity analysis} \label{apx:sensitivity} To gain a better understanding of the model behavior, we conduct a series of sensitivity analyses by showing aggregate output time series under alternative parametrizations.
In particular, we vary exogenous shock inputs, the production function specification and the parameters $\tau$, $\gamma_H$, $\gamma_F$, $\Delta s$ and $\rho$. \paragraph{Supply shocks.} Figure~\ref{fig:suppdemsens}(a) shows model dynamics of total output for the different supply shock scenarios considered. It becomes clear that alternative specifications of supply shocks can influence model results enormously. Supply shocks derived directly from the UK lockdown policy ($S_1-S_4$) yield a similar recovery pattern but nevertheless entail markedly different levels of overall impacts. When initializing the model with the shock estimates from \cite{del2020supply} ($S_5$) and \cite{fana2020covid} ($S_6$), we obtain very different dynamics. \paragraph{Demand shocks.} We find that our model is less sensitive with respect to changing final demand shocks within plausible ranges. Here, we compare the baseline model results with alternative specifications of consumption and ``other final demand'' (investment, export) shocks. Instead of modifying the \cite{CBO2006} consumption shocks as discussed in Section \ref{sec:consumptiondemandshockscenarios}, we now also use their raw estimates. We further consider shocks of 10\% to investment and exports, which are milder than the 15\% shocks considered in the main text. Figure~\ref{fig:suppdemsens}(b) indicates that differences are less pronounced between the alternative demand shocks compared to the supply shock scenarios, particularly during the early phase of the lockdown. Differences of several percentage points only emerge after an extended recovery period. \begin{figure}[H] \includegraphics[trim = {0cm 0cm 0cm 0cm}, clip, width = .95\textwidth]{fig/DEMSUP_shock_sens.pdf} \caption{ \textbf{Sensitivity analysis with respect to supply and demand shocks.} Shocks are applied at day one. Except for supply shocks (a) and demand shocks (b), the model is initialized as in the baseline run of the main text. Legend (b): Raw and Main indicate the estimates from \cite{CBO2006} and its adapted version used in the main text, respectively. 15\% and 10\% refer to the two investment/export shock scenarios. } \label{fig:suppdemsens} \end{figure} \paragraph{Production function.} In Figure~\ref{fig:prodfsens}(a) and (b) we show simulation results for alternative production function specifications, when using the baseline $S_4$ and the more severe $S_5$ supply shock scenarios, respectively. Regardless of the supply shock scenarios considered, Leontief production yields substantially more pessimistic predictions than the other production functions. For the milder baseline supply shock scenarios, aggregate predictions are fairly similar across the other production functions, although we observe some differences after an extended period of simulation. Differences between the linear and IHS production functions are larger when considering the more severe shock scenario in Figure~\ref{fig:prodfsens}(b). \begin{figure}[H] \centering \includegraphics[trim = {0cm 0cm 0cm 0cm}, clip, width = .95\textwidth]{fig/PRODF_shock_sens.pdf} \caption{ \textbf{Sensitivity analysis with respect to production functions.} Shocks are applied at day one. (a) is based on the $S_4$ and (b) on the $S_5$ supply shock scenario. Otherwise, the model is initialized as in the baseline run of the main text.
} \label{fig:prodfsens} \end{figure} In Figure~\ref{fig:sensitivity_params} we explore model simulations with respect to different parametrizations of the inventory adjustment parameter $\tau$ (left column), hiring/firing parameters $\gamma_H$ and $\gamma_F$ (center left column), change in savings rate parameter $\Delta s$ (center right column) and consumption adjustment speed $\rho$ (right column) under alternative production function--supply shock combinations (panel rows). \paragraph{Inventory adjustment time $\tau$.} We find that the inventory adjustment time becomes a key parameter under Leontief production and much less so under the alternative IHS3 specification. The shorter the inventory adjustment time (smaller $\tau$), the more shocks are mitigated. \paragraph{Hiring/firing speed $\gamma_H$.} Contrary to inventory adjustment $\tau$, model results do not change much with respect to alternative specifications of $\gamma_H$ when going from Leontief production to the IHS3 function. Here, differences in model outcomes are rather due to shock severity. Model predictions are almost identical for all choices of $\gamma_H$ under mild $S_1$ shocks but differ somewhat for more substantial $S_4$ shocks. In the $S_4$ scenarios we observe that recovery is quickest for larger values of $\gamma_H$ (stiff labor markets), since firms would lose less productive capacity in the immediate aftermath of the shocks. However, this would come at the expense of reduced profits for firms, which are able to reduce labor costs more effectively for smaller values of $\gamma_H$. \paragraph{Change in saving rate $\Delta s$.} We find only very little variation with respect to the whole range of $\Delta s$, regardless of the underlying production function and supply shock scenario. As expected, we find the largest adverse economic impacts in case of $\Delta s = 1$, i.e. when consumers save all the extra money which they would have spent if there were no lockdown, and the mildest impacts if $\Delta s = 0$. \paragraph{Consumption adjustment speed $\rho$.} Similarly, the model is not very sensitive with respect to the consumption adjustment speed $\rho$. The smaller $\rho$, the quicker households adjust consumption with respect to (permanent) income shocks. Note that parameter $\rho$ is based on daily time scales. Economic impacts tend to be less adverse when consumers aim to maintain their original consumption levels (large $\rho$) and more adverse if income shocks are more relevant for consumption (small $\rho$). Overall, the effects are small, in particular for the $S_1$ shock scenarios. \begin{figure}[H] \includegraphics[trim = {0cm 0cm 0cm 0cm}, clip, width = 1\textwidth]{fig/sensitivity_compare.pdf} \caption{ \textbf{Results of sensitivity analysis for parameters $\tau$, $\gamma_H$, $\Delta s$ and $\rho$.} All plots show normalized gross output on the y-axis and days after the shocks are applied on the x-axis. Panel rows differ with respect to the combination of production function and supply shock scenario. Panel columns differ with respect to $\tau$ (left), $\gamma_H$ (center left), $\Delta s$ (center right) and $\rho$ (right) as indicated in the legend below each column. Except for the four parameters, production function and supply shock scenario, the model is initialized as in the baseline run of the main text. In all simulations we used $\gamma_F = 2 \gamma_H$.
} \label{fig:sensitivity_params} \end{figure} \section{Details on validation} \label{apx:validation} In this appendix we provide further details about validation (Section~\ref{sec:econimpact} in the main paper). In Appendix~\ref{apx:validation_data} we describe the data sources that we used for validation, and we explain how we made empirical data comparable to simulated data. In Appendix \ref{apx:selectedscenario} we give more details about the selected scenario (Section~\ref{sec:selectedscenarioanalysis} in the main paper). \subsection{Validation data} \label{apx:validation_data} \begin{itemize} \item Index of agriculture (release: 12/08/2020): \url{https://www.ons.gov.uk/generator?format=xls&uri=/economy/grossdomesticproductgdp/timeseries/ecy3/mgdp/previous/v26} \item Index of production (release: 12/08/2020): \url{https://www.ons.gov.uk/file?uri=/economy/economicoutputandproductivity/output/datasets/indexofproduction/current/previous/v59/diop.xlsx} \item Index of construction (release: 12/08/2020): \url{https://www.ons.gov.uk/file?uri=/businessindustryandtrade/constructionindustry/datasets/outputintheconstructionindustry/current/previous/v67/bulletindataset2.xlsx} \item Index of services (release: 12/08/2020): \url{https://www.ons.gov.uk/file?uri=/economy/economicoutputandproductivity/output/datasets/indexofservices/current/previous/v61/ios1.xlsx} \end{itemize} All ONS indexes are monthly seasonally adjusted chained volume measures, rebased such that the index averaged over all months in 2016 is 100. Although these indexes are used to proxy value added in UK national accounts, they are actually gross output measures, as determining input use is too burdensome for monthly indexes. There is not a perfect correspondence between industry aggregates as considered by the ONS and in WIOD. For example, the ONS only releases data for the agricultural sector as a whole, without distinguishing between crop and animal production (A01), forestry and logging (A02) and fishing and aquaculture (A03). In this case, when comparing simulated and empirical data we aggregate data from the simulations, using initial output shares as weights. More commonly, there is a finer disaggregation in ONS data than in WIOD. For example, the ONS provides separate information on food manufacturing (C10) and on beverage and tobacco manufacturing (C11, C12), while these three sectors are aggregated into just one sector (C10\_C12) in WIOD. In this case, we aggregate empirical data using the weights provided in the indexes of production and services. These weights correspond to output shares in 2016, the base year for all time series. Finally, after performing aggregation we rebase all time series so that output in February 2020 takes value 100. \subsection{Description of the selected scenario} \label{apx:selectedscenario} Figure \ref{fig:new_model_april_may_june_facets} shows the recovery path of all industries, both in the model and in the data. Interpretation is the same as in Figure \ref{fig:new_model_april_may_june_all} in the main text. For readability, industries are grouped into six broad categories. \begin{figure}[H] \centering \includegraphics[width = 1.0\textwidth]{fig/new_model_april_june_facets.pdf} \caption{\textbf{Comparison between model predictions and empirical data.} We plot production (gross output) for each of 53 industries, both as predicted by our model and as obtained from the ONS' indexes. Each panel refers to a different group of industries. Different colors refer to production in April and June 2020.
Black lines connect the same industry across these two months. All sectoral productions are normalized to their pre-lockdown levels, and each point size is proportional to the steady-state gross output of the corresponding sector.} \label{fig:new_model_april_may_june_facets} \end{figure} \section{Introduction} \label{sec:intro} The social distancing measures imposed to combat the Covid-19 pandemic created severe disruptions to economic output, causing shocks that were highly industry-specific. Some industries were shut down almost entirely by lack of demand, labor shortages restricted others, and many were largely unaffected. Meanwhile, feedback effects amplified the initial shocks. The lack of demand for final goods such as restaurants or transportation propagated upstream, reducing demand for the intermediate goods that supply these industries. Supply constraints due to a lack of labor under social distancing propagated downstream, creating input scarcity that sometimes limited production even in cases where the availability of labor and demand would not have been an issue. The resulting supply and demand constraints interacted to create bottlenecks in production, which in turn led to unemployment, eventually decreasing consumption and causing additional amplification of shocks that further decreased final demand. In this paper we develop a model that respects the three main features that made the Covid-19 episode exceptional: (1) The shocks were highly heterogeneous across industries, making it necessary to model the economy at the sectoral level, taking sectoral interdependencies into account; (2) the shocks affected both supply and demand simultaneously, making it necessary to consider both upstream and downstream propagation; (3) the shocks were so strong and were imposed and relaxed so quickly that the economy never had time to converge to a new steady state, making dynamic models better suited than static models. In the first version of this work, results were released online on 21 May, not long after social distancing measures first began to take effect in March \citep{pichler2020production}. Based on data from 2019 and predictions of the shocks by \citet{del2020supply}, we predicted a 21.5\% contraction of GDP in the UK economy in the second quarter of 2020 with respect to the last quarter of 2019. This forecast was remarkably close to the actual contraction of 22.1\% estimated by the UK Office for National Statistics. (The median forecast by several institutions and financial firms was 16.6\%, and the forecast by the Bank of England was 30\%.)\footnote{\label{footn:pred} Sources: ONS estimate: \url{https://www.ons.gov.uk/economy/grossdomesticproductgdp/bulletins/gdpfirstquarterlyestimateuk/apriltojune2020}; median forecast of institutions and financial firms (in May): \url{https://www.gov.uk/government/statistics/forecasts-for-the-uk-economy-may-2020}; forecast by the Bank of England (in May): \url{https://www.bankofengland.co.uk/-/media/boe/files/monetary-policy-report/2020/may/monetary-policy-report-may-2020}. } In this substantially revised paper we present our model and describe its results, but we also take advantage of the benefits of hindsight to perform a ``post-mortem''. This allows us to better understand what worked well, what did not work well, and why. To what extent did we succeed by getting things right vs. just getting lucky?
We systematically investigate the sensitivity of the results under different specifications, including variations in the shock scenario, production function, and key parameters of the model. We examine the performance at both the aggregate and sectoral levels. We also look at the ability of the model to reproduce the time series patterns of the fall and recovery of sectoral output. Surprisingly, we find that the original specification used for out-of-sample forecasting performs about as well at the aggregate level as any of those developed with hindsight; with a minor adjustment its results are also among the best at the sectoral level. We show how getting good aggregate results depends on making the right trade-off between the severity of the shocks and the rigidity of the production function, as well as other key factors such as the right level of inventories. This provides valuable lessons about modeling the economic effects of disasters such as the Covid-19 pandemic. Our model is inspired by previous work on the economic response to natural disasters \citep{hallegatte2008adaptive, henriet2012firm, inoue2019firm}. As in these models, industry demand and production decisions are based on simple rules of thumb, rather than resulting from optimization in a dynamic general equilibrium setup. We think that the Covid-19 shock was so sudden and unexpected that agent expectations had little time to converge to an equilibrium over the short time period that we consider \citep{evans2012learning}. Our work here is also one of several studies using sectoral models with input-output linkages that appeared in the first few months of the pandemic \citep{barrot2020sectoral,mandel2020economic,fadinger2020effects,bonadio2020global,baqaee2020nonlinear,guan2020global}. Our paper belongs to this effort, but differs in a number of important ways. The most important conceptual difference is our treatment of the production function. The most common production functions can be ordered by the degree to which they allow substitutions between inputs. At one extreme, the Leontief production function assumes a fixed recipe for production, allowing no substitutions and restricting production based on the limiting input \citep{inoue2020propagation}. Under the Leontief production function, if a single input is reduced, overall production will be reduced proportionately, even if that input is ordinarily relatively small. This can lead to unrealistic behaviours. For example, the steel industry has restaurants as an input, presumably because steel companies have a workplace canteen and sometimes entertain their clients and employees. A literal application of the Leontief production function predicts that a sharp drop in the output of the restaurant industry will dramatically reduce steel output. This is unrealistic, particularly in the short run. The alternatives used in the literature are the Cobb-Douglas production function \citep{fadinger2020effects}, which has an elasticity of substitution of 1, and the CES production function, where calibration for short-term analysis typically uses an elasticity of substitution less than 1 \citep{barrot2020sectoral,mandel2020economic,bonadio2020global}. Some papers \citep{baqaee2020nonlinear} consider a nested CES production function, which can accommodate a wide range of technologies. In principle, this can be used to allow substitution between some inputs and forbid it between others in an industry-specific manner.
However, it is hard to calibrate all the elasticities, so that in practice many models end up using only a limited nesting structure or assuming uniform substitutability. Consider again our example of the steel industry: Under the calibrations of the CES production function that are typically used, firms can substitute energy or even restaurants for iron, while still producing the same output. For situations like this, where the production process requires a fixed technological recipe, this is obviously unrealistic. To solve this problem we introduce a new production function that distinguishes between critical and non-critical inputs at the level of the 55 industries in the World Input-Output tables. This production function allows firms to keep producing as long as they have the inputs that are absolutely necessary, which we call {\it critical inputs}. The steel industry cannot produce steel without the critical inputs iron and energy, but it can operate for a considerable period of time without non-critical inputs such as restaurants or management consultants. We apply the Leontief function only to the critical inputs, ignoring the others. Thus we make the assumption that during the pandemic the steel industry requires iron and energy in the usual fixed proportions, but the output of the restaurant or management consultancy industries is irrelevant. Of course restaurants and logistics consultants are useful to the steel industry in normal times -- otherwise they wouldn't use them. But during the short time-scale of the pandemic, we believe that neglecting them provides a better approximation of economic behavior than either a Leontief or a CES production function with uniform elasticity of substitution. In the appendix, we show that our production function is very close to a limiting case of an appropriately constructed nested CES function, which we could have used in principle but which is less well suited to our calibration procedure. To determine which inputs are critical and which are not, we use a survey performed by IHS Markit at our request. This survey asked ``Can production continue in industry X if input Y is not available for two months?''. The list of possible industries X and Y was drawn from the 55 industries in the World Input-Output Database. This question was presented to 30 different industry analysts who were experts in industry X. Each of them was asked to rate the importance of each of its inputs Y. They assigned a score of 1 if they believed input Y is critical, 0 if it is not critical, and 0.5 if it is in between, with the possibility of a rating of NA if they could not make a judgement. We then apply the Leontief function to the list of critical inputs, ignoring non-critical inputs. We experimented with several possible treatments for inputs with ratings of 0.5 and found that we get somewhat better empirical results by treating them as half-critical (though at present we do not have sufficient evidence to resolve this question unambiguously). Besides the bespoke production function discussed above, we also introduce a Covid-19-specific treatment of consumption. Most models do not incorporate the demand shocks caused by changes in consumer preferences aimed at minimizing the risk of infection. The vast majority of the literature has focused on the ability to work from home, and some studies incorporate lists of essential vs. inessential industries, but almost no papers have also explicitly added shocks to consumer preferences.
(\cite{baqaee2020nonlinear} is an exception, but the treatment is only theoretical). Here we use the estimates from \citet{del2020supply}, which are taken from a prospective study by the \citet{CBO2006}. These estimates are crude, but we are not aware of estimates that are any better. The currently available data on actual consumption are qualitatively consistent with the shocks predicted by the CBO, with massive shocks to the hospitality industry, travel and recreation, and milder (but still large) shocks elsewhere \citep{andersen2020consumer,carvalho2020tracking,chen2020impact,surico2020consumption}. The largest mismatch between the CBO estimates and consumption data is in the healthcare sector, whose consumption decreased during the pandemic, in contrast to the increase estimated by the CBO. There was also an increase in some specific retail categories (groceries) which the CBO estimates did not consider. However, overall, as \citet{del2020supply} argue, the estimates remain qualitatively accurate. Besides the initial shock, we also attempt to introduce realistic dynamics for recovery and for savings. The shocks to on-site consumption industries are longer lasting, and savings from the lack of consumption of specific goods and services during lockdown are only partially reallocated to other expenses. Our key finding is that there is a trade-off between the severity of the shocks and the rigidity of the production function. At one extreme, severe shocks with a Leontief production function lead to an almost complete collapse of the economy, and at the other, small shocks with a linear production function fail to reproduce the massive recession observed in the data. There is a sweet spot in the middle, with empirically good results based on various mixes of production function rigidity and shock severity. We also find that our model's performance improved after we used better data for inventories. Our selected scenario predicts the aggregate UK recession in Q2 2020 about as well as our initial forecast. It also correctly predicts a stronger reduction in private consumption, investment and profits than in government consumption, inventories and wages. At the sectoral level, the mean absolute error is around 12 percentage points, slightly lower than in our initial forecast (14 p.p.). The correlation between sectoral reductions in the model and in the data is generally high (the Pearson correlation coefficient weighted by industry size is 0.75). However, this masks substantial differences across industries: while our model does a good job of predicting sectoral outcomes in most industries, it fails at others such as vehicle manufacturing and air transport. We conjecture that this is due to idiosyncratic features of these industries that our model could not capture. Conversely, we provide examples where our model predicts sectoral outcomes correctly when these depend on inter-industry relationships, both statically and dynamically. We conclude by investigating some theoretical properties of the model using a simpler setting. We ask whether popular metrics of centrality, such as upstreamness and output multipliers, are useful to understand the diffusion of supply and demand shocks in our model. We find that static measures are only partially able to explain modeling results. Nevertheless, an industry's upstreamness is a strong indicator of its potential to amplify shocks; this is true for supply shocks, as expected, and for demand shocks, which is somewhat more surprising.
The paper is organized as follows. The details of the model are presented in Section~\ref{sec:model}. We discuss the shocks induced by the pandemic which we use to initialize the model in Section~\ref{sec:pandemic_shock}. We show our model predictions for the UK economy in Section~\ref{sec:econimpact} and discuss production network effects and re-opening single industries in Section~\ref{sec:nw_effects}. We conclude in Section~\ref{sec:discuss}. \section{A dynamic input-output model} \label{sec:model} Our model combines elements of the input-output models developed by \cite{battiston2007credit, hallegatte2008adaptive, henriet2012firm} and \cite{inoue2019firm}, together with new features that make the model more realistic in the context of a pandemic-induced lockdown. The main data input to our analysis is the UK input-output network obtained from the latest year (2014) of the World Input-Output Database (WIOD) \citep{timmer2015illustrated}, allowing us to distinguish 55 sectors. \subsection{Timeline} A time step $t$ in our economy corresponds to one day. There are $N$ industries\footnote{ See Appendix~\ref{apx:notation} for a comprehensive summary of notations used. }, one representative firm for each industry, and one representative household. The economy initially rests in a steady state until it experiences exogenous lockdown shocks. These shocks can affect the supply side (labor compensation, productive capacity) and the demand side (preferences, aggregate spending/saving) of the economy. Every day: \begin{enumerate} \item Firms hire or fire workers depending on whether their workforce was insufficient or redundant to carry out production on the previous day. \item The representative household decides its consumption demand and industries place orders for intermediate goods. \item Industries produce as much as they can to satisfy demand, given that they could be limited by lack of critical inputs or lack of workers. \item If industries do not produce enough, they distribute their production to final consumers and to other industries on a pro rata basis, that is, proportionally to demand. \item Industries update their inventory levels, and labor compensation is distributed to workers. \end{enumerate} \subsection{Model description} \label{sec:modeldescription} It will become important to distinguish between demand, that is, orders placed by customers to suppliers, and actual realized transactions, which might be lower. \subsubsection{Demand} \label{sec:demand} \paragraph{Total demand.} The total demand faced by industry $i$ at time $t$, $d_{i,t}$, is the sum of the demand from all its customers, \begin{equation} d_{i,t} = \sum_{j=1}^N O_{ij,t} + c^d_{i,t} + f^d_{i,t}, \end{equation} where $O_{ij,t}$ (for \emph{orders}) denotes the intermediate demand from industry $j$ to industry $i$, $c_{i,t}^d$ represents (final) demand from households and $f_{i,t}^d$ denotes all other final demand (e.g. government or non-domestic customers). \paragraph{Intermediate demand.} Intermediate demand follows dynamics similar to those studied in \cite{henriet2012firm}, \cite{hallegatte2014modeling}, and \cite{inoue2019firm}. Specifically, the demand from industry $i$ to industry $j$ is \begin{equation} \label{eq:order_interm} O_{ji,t} = A_{ji} d_{i,t-1} + \frac{1}{\tau} [ n_i Z_{ji,0} - S_{ji,t} ]. \end{equation} Intermediate demand thus is the sum of two components. First, to satisfy incoming demand (from $t-1$), industry $i$ demands an amount $A_{ji} d_{i,t-1}$ from $j$.
Therefore, industries order intermediate inputs in fixed proportions of total demand, with the proportions encoded in the technical coefficient matrix $A$, i.e. $A_{ji} = Z_{ji,0}/x_{i,0}$, where $Z_{ji,0}$ is realized intermediate consumption and $x_{i,0}$ is total output of industry $i$. Before the shocks, both of these variables are considered to be in the pre-pandemic steady state. While we will consider several scenarios where industries do not strictly rely on fixed recipes in production, demand always depends on the technical coefficient matrix. The second term in Eq.~\eqref{eq:order_interm} describes intermediate demand induced by the desired reduction of inventory gaps. Due to the dynamic nature of the model, demanded inputs cannot be used immediately for production. Instead, industries use an inventory of inputs in production. $S_{ji,t}$ denotes the stock of input $j$ held in $i$'s inventory. Each industry $i$ aims to keep a target inventory $n_i Z_{ji,0}$ of every required input $j$ to ensure production for $n_i$ further days\footnote{ Considering an input-specific target inventory would require generalizing $n_i$ to a matrix with elements $n_{ji}$, which is easy in our computational framework but difficult to calibrate empirically. }. The parameter $\tau$ indicates how quickly an industry adjusts its demand due to an inventory gap. Small $\tau$ corresponds to responsive industries that aim to close inventory gaps quickly. In contrast, if $\tau$ is large, intermediate demand adjusts slowly in response to inventory gaps. \paragraph{Consumption demand.} We let consumption demand for good $i$ be \begin{equation}\label{eq:cd} c^d_{i,t}= \theta_{i,t} \Tilde{c}^d_t, \end{equation} where $\theta_{i,t}$ is a preference coefficient, giving the share of goods from industry $i$ out of total consumption demand $\Tilde{c}^d_t$. The coefficients $\theta_{i,t}$ evolve exogenously, following assumptions on how consumer preferences change due to exogenous shocks (in our case, differential risk of infection across industries, see Section \ref{sec:pandemic_shock}). Total consumption demand evolves following an adapted and simplified version of the consumption function in \cite{muellbauer2020}. In particular, $\Tilde{c}^d_t$ evolves according to \begin{equation}\label{eq:consdemand} \Tilde{c}^d_t= \left(1-\Tilde{\epsilon}^D_t\right) \exp\left\{ \rho \log \Tilde{c}^d_{t-1} + \frac{1-\rho}{2} \log\left( m \Tilde{l}_t \right) + \frac{1-\rho}{2} \log \left( m \Tilde{l}_t^p \right) \right\}. \end{equation} In the equation above, the factor $\left(1-\Tilde{\epsilon}^D_t\right)$ accounts for direct aggregate shocks and will be explained in Section \ref{sec:pandemic_shock}. The second factor accounts for the endogenous consumption response to the state of the labor market and future income prospects. In particular, $\Tilde{l}_t$ is current labor income, $\Tilde{l}_t^p$ is an estimate of permanent income (see Section \ref{sec:pandemic_shock}), and $m$ is the share of labor income that is used to consume final domestic goods, i.e. that is neither saved nor used for consumption of imported goods. In the pre-pandemic steady state with no aggregate exogenous shocks, $\Tilde{\epsilon}^D_t=0$ and by definition permanent income corresponds to current income, i.e. $\Tilde{l}_t^p=\Tilde{l}_t$. In this case, total consumption demand corresponds to $m \Tilde{l}_t$.\footnote{ To see this, note that in the steady state $\Tilde{c}^d_t=\Tilde{c}^d_{t-1}$.
Taking logs on both sides, moving the consumption terms to the left-hand side and dividing by $1-\rho$ throughout yields $\log \Tilde{c}^d_t=\log\left( m \Tilde{l}_t \right)$. } \paragraph{Other components of final demand.} In addition, an industry $i$ also faces demand $f_{i,t}^d$ from sources that we do not model as endogenous variables in our framework, such as government or industries in foreign countries. We discuss the composition and calibration of $f_{i,t}^d$ in detail in Section \ref{sec:pandemic_shock}. \subsubsection{Supply} \label{sec:supply} Every industry aims to satisfy incoming demand by producing the required amount of output. Production is subject to the following two economic constraints: \paragraph{Productive capacity.} First, an industry has finite production capacity $x_{i,t}^\text{cap}$, which depends on the amount of available labor input. Initially every industry employs $l_{i,0}$ of labor and produces at full capacity $x_{i,0}^{\text{cap}} = x_{i,0}$. We assume that productive capacity depends linearly on labor inputs, \begin{equation} \label{eq:xcap} x_{i,t}^{\text{cap}} = \frac{l_{i,t}}{l_{i,0}}x_{i,0}^{\text{cap}}. \end{equation} \paragraph{Input bottlenecks.} Second, the production of an industry might be constrained due to an insufficient supply of critical inputs. This can be caused by production network disruptions. Intermediate input-based production capacities depend on the availability of inputs in an industry's inventory and its production technology, i.e. \begin{equation} \label{eq:xinp_general} x_{i,t}^{\text{inp}} = \text{function}_i( S_{ji,t}, A_{ji} ). \end{equation} We consider five different specifications for how input shortages impact production, ranging from a Leontief form, where inputs need to be used in fixed proportions, to a Linear form, where inputs can be substituted arbitrarily. As intermediate cases we consider specifications with industry-specific dependencies of inputs. For this purpose, IHS Markit analysts rated at our request whether a given input is \emph{critical}, \emph{important} or \emph{non-critical} for the production of a given industry (see Appendix \ref{apx:ihs} for details). We then make different assumptions on how the criticality ratings of inputs affect the production of an industry. We now introduce the five specifications in order of stringency with respect to inputs. \noindent \emph{(1) Leontief:} As a first case, we consider the Leontief production function, in which every positive entry in the technical coefficient matrix $A$ is a binding input to an industry. This is the most rigid case we are considering, leading to the functional form \begin{equation} \label{eq:xinp_leo} x_{i,t}^{\text{inp}} = \min_{ \{ j: \; A_{ji} > 0 \} } \left \{ \; \frac{ S_{ji,t} }{ A_{ji} } \right \}. \end{equation} In this case, an industry would halt production immediately if inventories of any input are run down, even for small and potentially negligible inputs. \noindent \emph{(2) IHS1:} As the most stringent case based on the IHS Markit ratings, we assume that production is constrained by critical and important inputs, which need to be used in fixed proportions. In contrast to the Leontief case, however, production is not constrained by the lack of non-critical inputs.
Thus, an industry's production capacity with respect to inputs is \begin{equation} \label{eq:xinp_ihs1} x_{i,t}^{\text{inp}} = \min_{j \in \{ \mathcal{V}_i \; \cup \; \mathcal{U}_i \} } \left \{ \; \frac{ S_{ji,t} }{ A_{ji} } \right \}, \end{equation} where $\mathcal{V}_i$ is the set of \textit{critical} inputs and $\mathcal{U}_i$ is the set of \textit{important} inputs to industry $i$. \noindent \emph{(3) IHS2:} As a second case using the input ratings, we leave the assumptions regarding \emph{critical} and \emph{non-critical} inputs unchanged but assume that the lack of an \emph{important} input reduces an industry's production by a half. We implement this production scenario as \begin{equation} \label{eq:xinp_ihs2} x_{i,t}^{\text{inp}} = \min_{ \{ j \in \mathcal{V}_i, \; k \in \mathcal{U}_i \} } \left \{ \; \frac{ S_{ji,t} }{ A_{ji} }, \frac{1}{2} \left(\frac{ S_{ki,t} }{ A_{ki} } + x_{i,0}^\text{cap}\right) \right \}. \end{equation} This means that if an \textit{important} input goes down by 50\% compared to initial levels, production of the industry would decrease by 25\%. When the stock of this input is fully depleted, production drops to 50\% of initial levels. \noindent \emph{(4) IHS3:} Next we treat all \emph{important} inputs as \emph{non-critical}, such that only \emph{critical} suppliers can create input bottlenecks. This reduces the input bottleneck equation, Eq.~\eqref{eq:xinp_general}, to \begin{equation} \label{eq:xinp_ihs3} x_{i,t}^{\text{inp}} = \min_{j \in \mathcal{V}_i } \left \{ \; \frac{ S_{ji,t} }{ A_{ji} } \right \}. \end{equation} \noindent \emph{(5) Linear:} Finally, we also implement a linear production function for which all inputs are perfectly substitutable. Here, production in an industry can continue even when inputs cannot be provided, as long as there is sufficient supply of alternative inputs. In this case we have \begin{equation} \label{eq:xinp_linear} x_{i,t}^{\text{inp}} = \frac{ \sum_j S_{ji,t} }{ \sum_j A_{ji} } . \end{equation} Note that while production is linear with respect to intermediate inputs, the lack of labor supply cannot be compensated by other inputs. We assume for any production function that imports never cause bottlenecks. Thus, imports are treated as non-critical inputs or, equivalently, there are no shortages of foreign intermediate goods. Input bottlenecks are most likely to arise under the Leontief assumption, and least likely under the linear production function. The IHS production functions assume intermediate levels of input specificity. In Appendix \ref{apx:ces_prodfun} we show that the IHS production functions are almost equivalent to (suitably parameterised) CES production functions and that results are identical when using these CES specifications instead of the IHS production functions. \paragraph{Output level choice and input usage.} Since an industry aims to satisfy incoming demand within its production constraints, realized production at time step $t$ is \begin{equation} \label{eq:x_act} x_{i,t} = \min \{ x_{i,t}^{\text{cap}}, x_{i,t}^{\text{inp}}, d_{i,t} \}. \end{equation} Thus, the output level of an industry is constrained by the smallest of three values: labor-constrained production capacity $x_{i,t}^{\text{cap}}$, intermediate input-constrained production capacity $x_{i,t}^{\text{inp}}$, or total demand $d_{i,t}$. The output level $x_{i,t}$ determines the quantity used of each input according to the production recipe.
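To make these specifications concrete, the following sketch implements the five input constraints together with the realized-output rule of Eq.~\eqref{eq:x_act}. It is a minimal illustration in Python under our own naming conventions (arrays \texttt{S} and \texttt{A}, boolean masks \texttt{crit} and \texttt{imp} standing in for the IHS Markit ratings), not the code used to produce the results of this paper.
\begin{verbatim}
import numpy as np

def x_inp(S, A, x_cap0, crit, imp, spec="IHS2"):
    """Input-constrained capacity for each industry i.
    S[j, i]: inventory of input j held by industry i; A[j, i]: technical
    coefficients; crit/imp: boolean masks of critical/important inputs."""
    N = A.shape[1]
    out = np.full(N, np.inf)
    with np.errstate(divide="ignore", invalid="ignore"):
        # output each input's inventory can sustain; non-inputs never bind
        cover = np.where(A > 0, S / A, np.inf)
    for i in range(N):
        c, u = crit[:, i], imp[:, i]
        if spec == "Leontief":       # every positive input binds
            out[i] = cover[:, i].min()
        elif spec == "IHS1":         # critical and important inputs bind
            m = c | u
            out[i] = cover[m, i].min() if m.any() else np.inf
        elif spec == "IHS2":         # important inputs bind at half strength
            oc = cover[c, i].min() if c.any() else np.inf
            ou = (0.5 * (cover[u, i] + x_cap0[i])).min() if u.any() \
                 else np.inf
            out[i] = min(oc, ou)
        elif spec == "IHS3":         # only critical inputs bind
            out[i] = cover[c, i].min() if c.any() else np.inf
        elif spec == "linear":       # inputs fully substitutable
            tot = A[:, i].sum()
            out[i] = S[:, i].sum() / tot if tot > 0 else np.inf
    return out

def realized_output(x_cap, x_inp_, d):
    # realized production: capped by labor, inputs and demand
    return np.minimum(np.minimum(x_cap, x_inp_), d)
\end{verbatim}
The auxiliary matrix \texttt{cover} measures how much production the current inventory of each input can sustain; the five specifications differ only in which of its entries are allowed to bind.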
Industry $i$ uses an amount $A_{ji}x_{i,t}$ of input $j$, unless $j$ is not critical and the amount of $j$ in $i$'s inventory is less than $A_{ji}x_{i,t}$. In this case, the quantity consumed of input $j$ by industry $i$ is equal to the remaining inventory stock of $j$-inputs $S_{ji,t} < A_{ji}x_{i,t}$. \paragraph{Rationing.} Without any adverse shocks, industries can always meet total demand, i.e. $x_{i,t} = d_{i,t}$. However, in the presence of production capacity and/or input bottlenecks, industries' output may be smaller than total demand (i.e., $x_{i,t} < d_{i,t}$), in which case industries ration their output across customers. We assume simple proportional rationing, although alternative rationing mechanisms could be considered \citep{pichler2021modeling}. The final delivery from industry $j$ to industry $i$ is a share of orders received \begin{equation} Z_{ji,t} = O_{ji,t} \frac{x_{j,t}}{d_{j,t}}. \end{equation} Households receive a share of their demand \begin{equation} c_{i,t} = c_{i,t}^d \frac{x_{i,t}}{d_{i,t}}, \end{equation} and the realized final consumption of agents with exogenous final demand is \begin{equation} f_{i,t} = f_{i,t}^d \frac{x_{i,t}}{d_{i,t}}. \end{equation} \paragraph{Inventory updating.} The inventory of $i$ for every input $j$ is updated according to \begin{equation} S_{ji,t+1} = \max \left\{ S_{ji,t} + Z_{ji,t} - A_{ji} x_{i,t}, 0 \right\}. \end{equation} In a Leontief production function, where every input is critical, the $\max$ operator would not be needed, since production could never continue once inventories are run down. It is necessary here because industries can produce even after inventories of a non-critical input $j$ are depleted, and inventories cannot turn negative. \paragraph{Hiring and separations.} Firms adjust their labor force depending on which production constraints in Eq.~\eqref{eq:x_act} are binding. If the capacity constraint $x_{i,t}^{\text{cap}}$ is binding, industry $i$ decides to hire as many workers as necessary to make the capacity constraint no longer binding. Conversely, if either the input constraint $x_{i,t}^{\text{inp}}$ or the demand constraint $d_{i,t}$ is binding, industry $i$ lays off workers until capacity constraints become binding. More formally, at time $t$ labor demand by industry $i$ is given by $l^d_{i,t}=l_{i,t-1}+\Delta l_{i,t}$, with \begin{equation} \Delta l_{i,t} = \frac{l_{i,0}}{x_{i,0}}\left[ \min\{x_{i,t}^{\text{inp}},d_{i,t}\} - x_{i,t}^{\text{cap}}\right]. \end{equation} The term $l_{i,0}/x_{i,0}$ reflects the assumption that the labor share in production is constant over the considered period. We assume that it takes time for firms to adjust their labor inputs. Specifically, we assume that industries can increase their labor force only by a fraction $\gamma_{\text{H}}$ in the direction of their target. Similarly, industries can decrease their labor force only by a fraction $\gamma_{\text{F}}$ in the direction of their target. In the absence of strong labor market regulations, we usually have $\gamma_{\text{F}}>\gamma_{\text{H}}$, indicating that it is easier for firms to lay off employed workers than to hire new workers. Industry-specific employment then evolves according to \begin{equation} \label{eq:labor_evolution} l_{i,t} = \begin{cases} l_{i,t-1} + \gamma_{\text{H}} \Delta l_{i,t} &\mbox{if } \Delta l_{i,t} \ge 0, \\ l_{i,t-1} + \gamma_{\text{F}} \Delta l_{i,t} &\mbox{if } \Delta l_{i,t} < 0.
\end{cases} \end{equation} The parameters $\gamma_{\text{H}}$ and $\gamma_{\text{F}}$ can be interpreted as policy variables. For example, the implementation of a furloughing scheme makes re-hiring of employees easier, corresponding to an increase in $\gamma_{\text{H}}$. \section{Pandemic shock} \label{sec:pandemic_shock} Simulations of the model described in Section \ref{sec:modeldescription} start in the pre-pandemic steady state. While there is evidence that consumption started to decline prior to the lockdown \citep{surico2020consumption}, for simplicity we apply the pandemic shock all at once, at the date of the start of the lockdown (March $23^\text{rd}$ in the UK). The pandemic shock is a combination of supply and demand shocks that propagate downstream and upstream and get amplified through the supply chain. During the lockdown, workers who cannot work on-site and are unable to work from home become unproductive, resulting in lowered productive capacities of industries. At the same time, demand-side shocks hit as consumers adjust their consumption preferences to avoid getting infected, and reduce overall consumption out of precautionary motives due to the depressed state of the economy. For diagnostic purposes, in addition to the shocks that we used for our original predictions, we consider a few additional supply shock scenarios that are grounded in what happened in the UK specifically\footnote{ As described below, here we use estimates of the shocks that we develop a priori, based on empirically motivated assumptions on labor supply constraints and changes in preferences. We could have, in principle, used a traditional macro approach (see \citet{brinca2020measuring} in the case of Covid-19) in which one infers the shocks from data. We did not pursue this here for several reasons. Our model does not feature price changes (which are crucial to separately identify supply and demand shocks), and we would have had to develop a method to infer the shocks based on the model's fit to the data, a problem that is unlikely to have a unique solution. Most importantly, inferring shocks this way is only useful in hindsight, whereas our goal was to make predictions. }. We then compare the outcomes under each scenario based on its aggregate and sectoral forecasts (see Section \ref{sec:scenarioselection}). In the following, we describe all the scenarios for supply and demand shocks that we consider. Further details and industry-level shock statistics are shown in Appendix \ref{apx:shocks}. \subsection{Supply shock scenarios} \label{sec:supplyshockscenarios} At every time step during the lockdown, an industry $i$ experiences an (exogenous) first-order labor supply shock $\epsilon^S_{i,t} \in [0,1]$ that quantifies reductions in labor availability. Letting $l_{i,0}$ be the initial labor supply before the lockdown, the maximum amount of labor available to industry $i$ at time $t$ is given by \begin{equation} l_{i,t}^\text{max} = (1- \epsilon^S_{i,t}) l_{i,0}. \end{equation} If $\epsilon^S_{i,t} > 0$, the productive capacity of industry $i$ is smaller than in the initial state of the economy. We assume that the reduction of total output is proportional to the loss of labor. In that case the productive capacity of industry $i$ at time $t$ is \begin{equation} x_{i,t}^{\text{cap}} = \frac{l_{i,t}}{l_{i,0}} x_{i,0}^{\text{cap}} \le (1-\epsilon^S_{i,t}) x_{i,0}. \end{equation} Recall from Section \ref{sec:supply} that firms can hire and fire to adjust their productive capacity to demand and supply constraints.
Thus, productive capacity can be lower than what the initial supply shock alone would imply, in case industry $i$ has some idle workers who are not prevented from going to work by lockdown measures. In any case, during lockdown firms can never hire more than $l_{i,t}^\text{max}$ workers. When the lockdown is lifted for a specific industry $i$, first-order supply shocks are removed, i.e., we set $\epsilon^S_{i,t} = 0$, for $t\geq t_\text{end\_lockdown}$. For diagnostic purposes we consider six different scenarios for the supply shocks $\epsilon^S_{i,t}$, $\text{S}_\text{1}$ to $\text{S}_\text{6}$, ordered from lowest to highest severity of lockdown restrictions. Scenario $\text{S}_\text{5}$ is the one that was used to produce out-of-sample forecasts in our original paper. We follow \cite{del2020supply} and estimate the supply shocks in each scenario by calculating for each industry the number of workers who can work remotely, and considering the government regulations that dictate whether an industry is essential, i.e., whether an industry can operate during a lockdown even if working from home is not possible. For instance, if an industry is non-essential, and none of its employees can work from home, it faces a labor supply reduction of 100\% during lockdown, i.e., $\epsilon^S_{i,t}=1, \ \forall t \in [t_\text{start\_lockdown}, t_\text{end\_lockdown})$. Instead, if an industry is classified as fully essential, it faces no labor supply shock and $\epsilon^S_{i,t}=0 \ \forall t$. In some scenarios, we further refine the supply shock estimates by taking into account the difficulty of adjusting to social distancing measures. To do this refinement we use the Physical Proximity work context provided by O*NET, as others have done \citep{mongey2020workers,koren2020business}. To estimate the shocks we define a \emph{Remote Labor Index}, an \emph{Essential Score} and a \emph{Physical Proximity Index} at the WIOD industry level. We interpret these indices as follows. The Remote Labor Index of industry $i$ is the probability that a worker from industry $i$ can work from home. The Essential Score is the probability that a worker from industry $i$ has an essential job. The Physical Proximity Index is the probability that an essential worker who cannot work from home cannot go to work due to social distancing measures. In Appendix \ref{apx:shocks} we explain how, similarly to \cite{del2020supply,dingel2020,gottlieb2020working,koren2020business}, we use O*NET data to estimate our Remote Labor Index and Physical Proximity Index. Below we explain each scenario in more detail and how we determine the essential score in each scenario. Figure \ref{fig:supplyscenarios} in Appendix \ref{apx:shocks} shows time series of supply shocks $\epsilon^S_{i,t}$ for all scenarios throughout our simulations, while Table \ref{tab:FO_shocks_supply} shows the cross section of shocks. \paragraph{$\text{S}_\text{1}$: UK policy.} In contrast to some European countries such as Italy and Spain, in the UK shutdown orders were only issued for a few industries\footnote{\label{footn:ukleg} https://www.legislation.gov.uk/uksi/2020/350/pdfs/uksi\_20200350\_en.pdf }. Although social distancing guidelines were imposed for all industries (see $\text{S}_\text{2}-\text{S}_\text{4}$ below), strictly speaking only non-essential retail, personal and recreational services, and the restaurant and hospitality industries were mandated to shut down.
While some European countries shut down some manufacturing sectors and construction, the UK did not explicitly forbid these sectors from operating. Based on the UK regulations, we assume that all WIOD industries have an essential score of one with the exception of industries G45 and G47 (vehicle and general retail), I (hotels and restaurants) and R\_S (recreational and personal services). We break down these industries into smaller subcategories that we can directly match to shutdown orders in the UK, and compute an essential score from a weighted average of these subcategories, where weights correspond to output shares (see Appendix \ref{apx:shocks} for more details). The resulting essential scores are 0.64 for G45, 0.71 for G47, 0.05 for I and 0.07 for R\_S. Similarly to \cite{del2020supply}, we assume an industry's supply shock is given by the fraction of workers that cannot work. If we interpret the Remote Labor Index and the essential score as independent probabilities, the expected value of the fraction of workers of industry $i$ that cannot work is $$\epsilon^S_{i,t} = (1 - \text{RLI}_i)(1 - \text{ESS}_i) \quad \forall t \in [t_\text{start\_lockdown}, t_\text{end\_lockdown}),$$ where $\text{RLI}_i$ and $\text{ESS}_i$ are the Remote Labor Index and Essential Score of industry $i$, respectively. We lift labor supply shocks to trade industries on June $15^\text{th}$ and shocks to other sectors in July, as per official guidance. That is, $ \epsilon^S_{i,t} = 0, \quad \forall t \in [t_\text{end\_lockdown}, \infty).$ \paragraph{$\text{S}_\text{2}$, $\text{S}_\text{3}$, $\text{S}_\text{4}$ : UK policy + difficulty to adapt to social distancing.} In monthly communications on the impact of Covid-19 on the UK economy\footnote{ \label{ft1}See, e.g., \url{https://www.ons.gov.uk/economy/grossdomesticproductgdp/articles/coronavirusandtheimpactonoutputintheukeconomy/may2020} }, the ONS reported that several manufacturing industries and the construction sector were struggling to comply with the requirements on social distancing imposed by the government. Thus, further to the legal constraints considered in scenario $\text{S}_\text{1}$, we also consider the practical constraints of operating under the new guidance. For all industries that were not explicitly forbidden to operate, we consider a supply shock due to the difficulties of adapting to social distancing measures. We assume that industries with a higher index of physical proximity have more difficulty adhering to social distancing, so that only a portion of the workers who cannot work from home can actually work at the workplace. In particular, in-person work can only be performed by a fraction of workers that decreases linearly with the Physical Proximity Index of industry $i$ (see Appendix \ref{apx:shocks} for details). We rescale this index so that it varies in an interval between zero and $\iota$, and consider three values $\iota=0.1,0.4,0.7$. These three values distinguish between scenarios $\text{S}_\text{2}$, $\text{S}_\text{3}$ and $\text{S}_\text{4}$, which are reported in order of severity. At the time when lockdown starts, the supply shocks are given by $$\epsilon^S_{i,t_\text{start\_lockdown}} = \left(1 - \text{RLI}_i\right) \left(1 - \text{ESS}_i\left(1 - \iota \frac{\text{PPI}_{i}}{\max_j(\text{PPI}_j)}\right) \right),$$ where $\text{PPI}_{i}$ is the Physical Proximity Index. Differently from the scenarios above, we do not assume that shocks due to the difficulty to adapt to social distancing are constant during lockdown.
In line with ONS reports, firms are able to bring a larger fraction of their workforce back to work as they adapt to the new guidelines. For simplicity, we assume that these shocks vanish when lockdown is lifted (May $13^\text{th}$), and interpolate linearly between their maximum value (attained when lockdown is imposed, March $23^\text{rd}$) and the time when lockdown is lifted. This leads to the following supply shocks $$\epsilon^S_{i,t} = \left(1 - \text{RLI}_i\right) \left(1 - \text{ESS}_i\left(1 - \iota \frac{\text{PPI}_{i,t}}{\max_j(\text{PPI}_j)}\right) \right),$$ where $$ \text{PPI}_{i,t} = \text{PPI}_{i} \left(1 - \frac{t - t_\text{start\_lockdown}}{t_\text{end\_lockdown}- t_\text{start\_lockdown}} \right). $$ Note that lockdown did not have a clear end date in the UK. However, we conventionally take May $13^\text{th}$ as $t_\text{end\_lockdown}$. This is the day when the UK government asked all workers to go back to work, and we assume that at this time most firms had had sufficient time to comply with social distancing guidelines. \paragraph{$\text{S}_\text{5}$: Original shocks.} This baseline scenario corresponds to the original shocks $\epsilon^S_{i,t}$ we used in our previous work \citep{pichler2020production} and was based on the estimates by \cite{del2020supply}. The supply shocks were estimated by quantifying which work activities of different occupations can be performed from home based on the Remote Labor Index and using the occupational compositions of industries. The predictions also considered whether an industry was essential in the sense that on-site work was allowed during the lockdown. We then compiled a list of essential industries within the NAICS classification system, based on the list of essential industries provided by the Italian government using the NACE classification system. Then, we computed an essential score $\text{ESS}_i$ for each industry $i$ in their sample of NAICS industries and calculated the supply shock using the following equation \begin{equation} \epsilon^S_{i} = (1 - \text{RLI}_i)(1 - \text{ESS}_i). \label{eq:supply_shock_original} \end{equation} Following our original work \citep{pichler2020production}, we map these supply shocks from the NAICS 4-digit industry classification system to the WIOD classification using a NAICS-WIOD crosswalk. To deal with one-to-many and many-to-one maps in these crosswalks, we split each NAICS industry's contributions using employment data (see Appendix \ref{apx:shocks} for details). We follow this approach to weight each worker in the NAICS industries' sample from \cite{del2020supply} equally. In Appendix \ref{apx:shocks} we discuss the implications of this crosswalk methodology in more detail. Finally, for the Real Estate sector, we assume that the supply shock does not apply to imputed rents (which represent about 2/3 of gross output). \paragraph{$\text{S}_\text{6}$: European list of essential industries.} For comparison, we also consider the list of essential industries produced by \cite{fana2020covid}. This list was compiled independently of \cite{del2020supply}, and records which industries were considered essential by the governments of Spain, Germany and Italy. Using this list, we compute the essential score for each industry by taking the mean across essential scores for the three countries considered. We keep labor supply shocks constant during lockdown, and lift them all when lockdown ends. This scenario produces the largest supply shocks of all.
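For concreteness, the sketch below traces how the scenario-specific shocks $\epsilon^S_{i,t}$ of $\text{S}_\text{1}$--$\text{S}_\text{4}$ could be computed. It is an illustration only: the industry-specific lifting dates of $\text{S}_\text{1}$ are omitted for brevity, and all variable names are our own.
\begin{verbatim}
import numpy as np

def supply_shock(RLI, ESS, PPI, t, t_start, t_end, iota=0.0):
    """First-order labor supply shock vector at day t.
    iota = 0 reproduces S1; iota = 0.1, 0.4, 0.7 give S2, S3, S4."""
    if not (t_start <= t < t_end):
        return np.zeros_like(RLI)   # shocks apply only during lockdown
    # social-distancing term shrinks linearly to zero over the lockdown
    ppi_t = PPI * (1.0 - (t - t_start) / (t_end - t_start))
    ess_eff = ESS * (1.0 - iota * ppi_t / PPI.max())
    return (1.0 - RLI) * (1.0 - ess_eff)
\end{verbatim}
Setting \texttt{iota} to zero recovers the constant lockdown shocks of $\text{S}_\text{1}$, while positive values add the linearly decaying social-distancing term of $\text{S}_\text{2}$--$\text{S}_\text{4}$.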
For comparison, the average supply shock for $\text{S}_\text{1}$ is 3\%, while the average shock for $\text{S}_\text{6}$ is 28\%. \subsection{Demand shocks} \label{sec:consumptiondemandshockscenarios} The Covid-19 pandemic caused strong shocks to all components of demand. We consider shocks to private consumption demand, which we further distinguish into shocks due to fear of infection and due to fear of unemployment, and shocks to other components of final demand, such as investment, government consumption and exports. We outline the basic assumptions on demand shocks below and show in Appendix \ref{apx:shocks} the detailed cross-sectional and temporal shock profiles. We further demonstrate in Appendix \ref{apx:sensitivity} that alternative plausible demand shock assumptions only mildly influence model results. \paragraph{Demand shocks due to fear of infection.} During a pandemic, consumption/saving decisions and consumer preferences over the consumption basket change, leading to first-order demand shocks \citep{CBO2006, del2020supply}. For example, consumers are likely to demand fewer services from the hospitality industry, even when the hospitality industry is open. Transport is very likely to face substantial demand reductions, despite being classified as an essential industry in many countries. A key question is whether reductions in demand for ``risky'' goods and services are compensated by an increase in demand for other goods and services, or whether lower demand for risky goods translates into higher savings. We consider a demand shock vector ${\epsilon}^D_t$, whose components $\epsilon_{i,t}^D$ are the relative changes in demand for goods of industry $i$ at time $t$. Recall from Eq.~\eqref{eq:cd}, $c^d_{i,t}= \theta_{i,t} \Tilde{c}^d_t$, that consumption demand is the product of the total consumption scalar $\Tilde{c}^d_t$ and the preference vector $\theta_t$, whose components $\theta_{i,t}$ represent the share of total demand for good $i$. We initialize the preference vector by considering the initial consumption shares, that is $\theta_{i,0}=c_{i,0}/\sum_j c_{j,0}$. By definition, the initial preference vector $\theta_{0}$ sums to one, and we keep this normalization at all following time steps. To do so, we consider an auxiliary preference vector $\bar{\theta}_t$, whose components $\bar{\theta}_{i,t}$ are obtained by applying the shock vector $\epsilon^D_{i,t}$. That is, we define $\bar{\theta}_{i,t}=\theta_{i,0} (1-\epsilon^D_{i,t})$ and define $\theta_{i,t}$ as \begin{equation} \label{eq:theta} \theta_{i,t}= \frac{\bar{\theta}_{i,t}}{\sum_j \bar{\theta}_{j,t}} = \frac{ (1-\epsilon^D_{i,t}) \theta_{i,0} }{\sum_j (1-\epsilon^D_{j,t}) \theta_{j,0} } . \end{equation} The difference $1-\sum_i \bar{\theta}_{i,t}$ is the aggregate reduction in consumption demand due to the demand shock, which would lead to an equivalent increase in the saving rate. However, households may not want to save all the money that they are not spending. For example, they most likely want to spend on food the money that they are saving on restaurants. Therefore, we define the aggregate demand shock $\Tilde{\epsilon}^D_t$ in Eq.~\eqref{eq:consdemand} as \begin{equation} \label{eq:epsilon} \Tilde{\epsilon}_t^D=\Delta s \left(1- \sum_{i=1}^N \bar{\theta}_{i,t} \right) , \end{equation} where $\Delta s$ is the change in the savings rate.
When $\Delta s=1$, households save all the money that they are not planning to spend on industries affected by demand shocks; when $\Delta s=0$, they spend all that money on goods and services from industries that are affected less. To parameterize $\epsilon^D_{i,t}$, we adapt consumption shock estimates from \cite{CBO2006} and \cite{del2020supply}. Roughly speaking, these shocks are massive for restaurants and transport, mild for manufacturing and null for utilities. We make two modifications to these estimates. First, we remove the positive shock to the health care sector, as in the UK the cancellation of non-urgent treatment for diseases other than Covid-19 far exceeded the additional demand for health care due to Covid-19.\footnote{ \url{https://www.ons.gov.uk/economy/grossdomesticproductgdp/bulletins/gdpfirstquarterlyestimateuk/apriltojune2020} } Second, we apportion to manufacturing sectors the reduced demand due to the closure of non-essential retail. For example, retail shops selling garments and shoes were mandated to shut down, and so we apply a consumption demand shock to the manufacturing sector producing these goods.\footnote{ To be fully consistent with the definition of demand shock, we should model non-essential retail closures as supply shocks, and propagate the shocks to manufacturing through reduced intermediate good demand. However, there are two practical problems that prevent us from doing so: (i) the sectoral aggregation in the WIOD is too coarse, comprising only one aggregate retail sector; (ii) input-output tables only report margins of trade, i.e. they do not explicitly model the flow of goods from manufacturing to retail trade and then from retail trade to final consumption. Given these limitations, we conventionally interpret non-essential retail closures as demand shocks. } We keep the intensity of demand shocks constant during lockdown. We then reduce demand shocks when lockdown is lifted according to the situation of the Covid-19 pandemic in the UK. In particular, we assume that consumers look at the daily number of Covid-19 deaths to assess whether the pandemic is coming to an end, and that they identify the end of the pandemic as the day on which the death rate drops below 1\% of the death rate at the peak. Given official data\footnote{\url{https://coronavirus.data.gov.uk/}}, this happens on August $11^\text{th}$. Thus, we reduce $\epsilon^D_{i,t}$ from the time lockdown is lifted (May $13^\text{th}$) by linearly interpolating between the value of $\epsilon^D_{i,t}$ during lockdown and $\epsilon^D_{i,t}=0$ on August $11^\text{th}$. The choice of modeling behavioral change in response to a pandemic by the death rate has a long history in epidemiology \citep{funk2010modelling}. \paragraph{Demand shocks due to fear of unemployment.} A second shock to consumption demand occurs through reductions in current income and in expectations for permanent income. Reductions in current income are due to firing/furloughing, caused both by direct shocks and by subsequent upstream or downstream propagation, and result in lower labor compensation, i.e. $\tilde{l}_t < \tilde{l}_0$, for $t\geq t_\text{start\_lockdown}$. To support the economy, the government pays out social benefits to workers to compensate income losses.
In this case, the total income $\tilde{l}_t$ that enters Eq.~\eqref{eq:consdemand} is replaced by an effective income $\tilde{l}^\star_t=b \tilde{l}_0 + (1-b) \tilde{l}_t $, where $b$ is the fraction of pre-pandemic labor income that workers who are fired or furloughed are able to retain. A second channel for shocks to consumption demand due to labor market effects occurs through expectations for permanent income. These expectations depend on whether households expect a V-shaped vs. L-shaped recovery, that is, whether they expect that the economy will quickly bounce back to normal or that there will be a prolonged recession. Let expectations for permanent income $\Tilde{l}_t^p$ be specified by \begin{equation} \label{eq:perm_income} \Tilde{l}_t^p=\xi_t \Tilde{l}_0 . \end{equation} In this equation, the parameter $\xi_t$ captures the fraction of pre-pandemic labor income $\Tilde{l}_0$ that households expect to retain in the long run. We first give a formula for $\xi_t$ and then explain the various cases. \begin{equation}{\label{eq:xit}} \xi_t = \begin{cases} 1, & t<t_\text{start\_lockdown}, \\ \xi^L = 1-\frac{1}{2}\frac{\tilde{l}_0-\tilde{l}_{t_\text{start\_lockdown}}}{\tilde{l}_0}, & t \in [t_\text{start\_lockdown}, t_\text{end\_lockdown} ],\\ 1-\rho + \rho\xi_{t-1} + \nu_{t-1}, & t > t_\text{end\_lockdown}. \end{cases} \end{equation} Before lockdown, we let $\xi_t \equiv 1$, i.e. permanent income expectations are equal to current income. During lockdown, following \cite{muellbauer2020} we assume that $\xi_t$ is equal to one minus half the relative reduction in labor income that households experience due to the direct labor supply shock, and denote that value by $\xi^L$. (For example, given a relative reduction in labor income of 16\%, $\xi^L=0.92$.)\footnote{During lockdown, labor income may be further reduced due to firing. For simplicity, we choose not to model the effect of these further firings on permanent income.} After lockdown, we assume that 50\% of households believe in a V-shaped recovery, while 50\% believe in an L-shaped recovery. We model these expectations by letting $\xi_t$ evolve according to an autoregressive process of order one, where the shock term $\nu_t$ is a permanent shock that reflects beliefs in an L-shaped recovery. With 50\% of households believing in such a recovery pattern, we have $\nu_t\equiv-(1-\rho)(1-\xi^L)/2$.\footnote{The specification in Eq.~\eqref{eq:xit} reflects the following assumptions: (i) time to adjustment is the same as for consumption demand, Eq.~\eqref{eq:consdemand}; (ii) absent permanent shocks (i.e. $\nu_t=0$ after some $t$), $\xi_t$ returns to one, so that permanent income matches current income; (iii) with 50\% of households believing in an L-shaped recovery, $\xi_t$ reaches a steady state given by $1-(1-\xi^L)/2$: with $\xi^L=0.92$ as in the example above, $\xi_t$ reaches a steady state at 0.96, so that permanent income remains stuck four percentage points below pre-lockdown income.} \paragraph{Other final demand shock scenarios} Note that WIOD distinguishes five types of final demand: (I) \textit{Final consumption expenditure by households}, (II) \textit{Final consumption expenditure by non-profit organisations serving households}, (III) \textit{Final consumption expenditure by government}, (IV) \textit{Gross fixed capital formation} and (V) \textit{Changes in inventories and valuables}.
Additionally, all final demand variables are available for every country, meaning that it is possible to calculate imports and exports for all categories of final demand. The endogenous consumption variable $c_{i,t}$ corresponds to (I), but only for domestic consumption. All other final demand categories, including all types of exports, are absorbed into the variable $f_{i,t}$. We apply different shocks to $f_{i,t}$. We do not apply any exogenous shocks to categories (III) \textit{Final consumption expenditure by government} and (V) \textit{Changes in inventories and valuables}, while we apply the same demand shocks to category (II) as we do for category (I). To determine shocks to investment (IV) and exports we start by noticing that, before the Covid-19 pandemic, the volatility of these variables has generally been around three times the volatility of consumption.\footnote{ This is computed by calculating the standard deviation of consumption, investment and export growth over all quarters from 1970Q1 to 2019Q4. These are 1.03\%, 2.87\% and 3.24\% respectively. Source: \url{https://www.ons.gov.uk/file?uri=\%2feconomy\%2fgrossdomesticproductgdp\%2fdatasets\%2frealtimedatabaseforukgdpcomponentsfortheexpenditureapproachtothemeasureofgdp\%2fquarter2aprtojune2020firstestimate/gdpexpenditurecomponentsrealtimedatabase.xls} } The overall consumption demand shock is around 5\% so, as a baseline, we assume shocks to investment and exports to be 15\%. In Appendix \ref{apx:sensitivity} we show that the model results are fairly robust with respect to alternative choices. \section{Economic impact of Covid-19 on the UK economy} \label{sec:econimpact} As already mentioned, in the first version of this work we released results in May predicting a 21.5\% contraction of GDP in the UK economy in the second quarter of 2020 with respect to the last quarter of 2019\footnote{% Due to an error in how we dealt with Real Estate shocks, our prediction was slightly worse than it would have been without the error; see Appendix \ref{apx:shocks}. }. This is in comparison to the contraction of 22.1\% that was actually observed. In this section we do a post-mortem to understand the factors that influenced the quality of the forecasts. To do this, we compare results under different scenarios defined by different shocks and production functions. Our analysis includes a sectoral breakdown of the forecasts and a comparison of the time series of observed vs. predicted behavior. \subsection{Definition of scenarios and calibration} The model and shock scenarios that we described in the previous sections have several degrees of freedom that can be tuned when exploring the model. These are either modeling assumptions, such as the production function or the shock scenario, or parameters (see Table \ref{tab:econmodelpars}). To understand the factors that influenced the quality of the forecasts, we focus on the assumptions and parameters that cannot easily be calibrated from data and that have a strong effect on the results. As the sensitivity analysis in Appendix \ref{apx:sensitivity} shows, the model is most sensitive to two assumptions: the supply shock scenario and the production function. So, we consider all combinations of the six scenarios for supply shocks, $\text{S}_\text{1}$ to $\text{S}_\text{6}$ (Section \ref{sec:supplyshockscenarios}), and of the five production functions mentioned in Section \ref{sec:supply}.
These are: standard Leontief, three versions of the IHS-Markit-modified Leontief that treat important inputs as critical (IHS1), half-critical (IHS2), and non-critical (IHS3), and the linear production function. Combining these two assumptions, we get $6\times5=30$ scenarios that we compare to the data. \begin{table}[htbp] \centering \caption{Assumptions and parameters of the model that: (top) are varied across scenarios; (middle) are fixed due to little effect on results; (bottom) are fixed due to direct data calibration. } \begin{tabular}{|l|c|c|} \hline Name & Symbol & Value \\ \hline Supply shocks & S & $\text{S}_\text{1}$, $\text{S}_\text{2}$, $\text{S}_\text{3}$, $\text{S}_\text{4}$, $\text{S}_\text{5}$, $\text{S}_\text{6}$ \\ Production function & & Leontief, IHS1, IHS2, IHS3, linear \\ \hline Final demand shocks & & Appendix \ref{apx:shocks} \\ Inventory adjustment & $\tau$ & 10 \\ Upward labor adjustment & $\gamma_{\text{H}}$ & 1/30 \\ Downward labor adjustment & $\gamma_{\text{F}}$ & 1/15 \\ Change in savings rate & $\Delta s$ & 0.50 \\ Consumption adjustment & $\rho$ & 0.99 \\ \hline Inventory targets & $n_i$ & Appendix \ref{apx:inventory} \\ Propensity to consume & $m$ & 0.82 \\ Government benefits & $b$ & 0.80 \\ \hline \end{tabular} \label{tab:econmodelpars} \end{table} Other parameters have less effect on the model (Appendix \ref{apx:sensitivity}) and so we fix them to reasonable values. These are: \begin{itemize} \item The parameter $\tau$, capturing responsiveness to inventory gaps. We fix $\tau=10$ days, which indicates that firms aim at filling most of their inventory gaps within two weeks. This lies in the range of values used by related studies (e.g. $\tau=6$ in \cite{inoue2019firm}, $\tau=30$ in \cite{hallegatte2014modeling}). \item The hiring and firing parameters $\gamma_{\text{H}}$ and $\gamma_{\text{F}}$. We choose $\gamma_{\text{H}}=1/30$ and $\gamma_{\text{F}} = 2 \gamma_{\text{H}}$. Given our daily time scale, this is a rather rapid adjustment of the labor force, with firing happening faster than hiring. \item The parameter $\rho$, indicating sluggish adjustment to new consumption levels. We select the value assumed by \cite{muellbauer2020}, adjusted for our daily timescale\footnote{ Assuming that a time step corresponds to a quarter, \cite{muellbauer2020} takes $\rho=0.6$, implying that more than 70\% of adjustment to new consumption levels occurs within two and a half quarters. We modify $\rho$ to account for our daily timescale: By letting $\bar{\rho}=0.6$, we take $\rho=1-(1-\bar{\rho})/90$ to obtain the same time adjustment as in \cite{muellbauer2020}. Indeed, in an autoregressive process like the one in Eq.~\eqref{eq:consdemand}, about 70\% of adjustment to new levels occurs in a time $T$ related inversely to the persistence parameter $\rho$. Letting $Q$ denote the quarterly timescale considered by \cite{muellbauer2020}, time to adjustment $T^Q$ is given by $T^Q=1/(1-\bar{\rho})$. Since we want to keep approximately the same time to adjustment on a daily time scale, we fix $T^D=90T^Q$. We then obtain the parameter $\rho$ on the daily timescale such that it yields $T^D$ as time to adjustment, namely $1/(1-\rho)=T^D=90T^Q=90/(1-\bar{\rho})$. Rearranging gives the formula that relates $\rho$ and $\bar{\rho}$. }. \item The savings parameter $\Delta s$.
We take $\Delta s = 0.5$, meaning that households save half the money they are not spending on goods and services due to fear of infection, and direct the other half to spending on other, ``safer'' goods and services. \end{itemize} Finally, we are able to directly calibrate some parameters against the data. For example, we calibrate the inventory target parameters $n_i$ using ONS data for the usual stock of inventories that different industries typically hold (see details in Appendix \ref{apx:inventory}). These parameters are highly heterogeneous across industries; typically manufacturing and trade have much higher inventory targets than services. Another parameter which we can directly calibrate from data is the propensity to consume $m$ (see Eq.~\eqref{eq:consdemand}). Directly reading the share of labor income that is used to buy final domestic goods from the input-output tables, we find $m=0.82$. Finally, we calibrate benefits $b$ based on official UK policy, $b=0.8$. \subsection{Comparing our initial forecast with alternative scenarios} \label{sec:scenarioselection} We have chosen not to search for a calibration of the model that best fits the data. We do this because we only have one real-world example, and such a search would lead to overfitting. Instead we start from the out-of-sample forecast we made in May and try to understand how performance would have changed if we had made different choices. This helps us understand what is important in determining the accuracy, gives some insight into how the model works, and shows where care needs to be taken in making forecasts of this type. Since our initial forecast, we have made small changes to the model (in particular the consumption function), and obtained better data to calibrate inventories (using ONS rather than US BEA data). Our initial forecast featured a supply shock scenario identical to $\text{S}_\text{5}$ except for Real Estate (Section \ref{sec:supplyshockscenarios} and Appendix \ref{apx:shocks}), and an IHS3 production function. In this section, we compare our original forecast to the forecasts made using the 30 possible scenarios discussed above. To do so, we evaluate the forecast errors for each scenario at both the sectoral and aggregate levels, allowing us to better understand the trade-off involved in selecting amongst various assumptions and calibrations. For all 30 scenarios, we simulate the model for six months, from January $1^{st}$ to June $30^{th}$, 2020. We start lockdown on March $23^{rd}$, at which point we apply the supply and demand shocks described in Section \ref{sec:pandemic_shock}.\footnote{ We do not run the model further into the future, both because we focus on the first UK lockdown and the immediate aftermath, and because our assumptions on non-critical inputs are only valid for a limited time span. } We then compare the monthly sectoral output of each scenario against empirical data from the indexes of agriculture, production, construction and services, all provided by the ONS (see Appendix \ref{apx:validation_data}). Specifically, we compute the sector-level Absolute Forecast Errors (AFE) for April, May, and June,\footnote{ We exclude January, February and March from our comparison as we do not model the reaction of the economy before the lockdown, when e.g. international supply chains started to be disrupted. } \[ \mathcal{E}_{i,t}^z=|y_{i,t}-\hat{y}_{i,t}^z|, \] where $y_{i,t} = x_{i,t}/x_{i,0}$ is the output of sector $i$ during month $t$ expressed as a percentage of the output of sector $i$ during February, $x_{i,0}$.
Here, $\hat{y}_{i,t}^z$ is the equivalent quantity in simulations of scenario $z$. We then obtain a scenario-specific average sectoral AFE by taking a weighted mean of $\mathcal{E}_{i,t}^z$ across all sectors $i$ and months $t$, where weights correspond to output shares in the steady state (forecast errors for important sectors are more relevant than forecast errors for small sectors). This quantity is defined as \begin{equation} \text{AFE}_\text{sec}^z= \frac{1}{3} \sum_t \sum_i \frac{x_{i,0}}{\sum_j x_{j,0}} \mathcal{E}_{i,t}^z. \end{equation} We also compare aggregate output reductions in all different scenarios in April, May and June against empirical data. We compute a scenario-specific average aggregate AFE by averaging aggregate forecast errors over the three months we are considering. This is \begin{equation} \text{AFE}_\text{agg}^z= \frac{1}{3} \sum_t \left(\sum_i x_{i,t}-\sum_i \hat{x}_{i,t}^z\right), \end{equation} where $\hat{x}_{i,t}^z$ is output of sector $i$ at time $t$ for scenario $z$. Note that, unlike the measure of sectoral error $\text{AFE}_\text{sec}^z$, the aggregate measure $\text{AFE}_\text{agg}^z$ does not include an absolute value -- it is positive when true production is greater than predicted, and negative when it is smaller than predicted. \begin{figure}[hbt] \centering \includegraphics[width = 1.0\textwidth]{fig/scenarios_comparison_main.pdf} \caption{\textbf{Sectoral and aggregate errors across scenarios.} We plot the aggregate Average Forecast Error $\text{AFE}_\text{agg}^z$ vs. the sectoral Average Forecast Error $\text{AFE}_\text{sec}^z$ for the 30 different scenarios $z$, corresponding to all of the six possible shock scenarios (indicated by color) and the five possible production functions (indicated by symbol), as well as the original model forecast, indicated by a black asterisk. The other parameter values are indicated in Table \ref{tab:econmodelpars}. Both sectoral and aggregate AFE are multiplied by 100 to be interpreted as percentages. The right panel zooms in on the region with the lowest sectoral and aggregate errors. The dashed lines refer to an average forecast by institutions and financial firms (above zero) and to a forecast by the Bank of England (below zero); see footnote \ref{footn:pred}. } \label{fig:scenarios_comparison_main} \end{figure} Figure \ref{fig:scenarios_comparison_main} plots sectoral and aggregate AFE for the 30 scenarios, plus the original out-of-sample forecast. The sectoral and aggregate errors vary considerably across scenarios. The left panel, which includes all 31 scenarios, contains two outliers where the sectoral error reaches almost 30\% and the aggregate error is roughly -30\%, meaning that the model predicts a downturn 30 percentage points larger than that actually observed (i.e., about a 50\% downturn). This occurs when the Leontief production function is combined with the two most severe supply shock scenarios $S_5$ and $S_6$. The right panel zooms in on the region containing the remaining scenarios. The sectoral errors range between roughly 10\% and 20\%, while the aggregate forecast errors range from roughly -10\% to 10\%. A close examination of the figure makes it clear that, as expected, the predicted downturn generally gets stronger as the severity of the shock and the rigidity of the production function increase. With one exception, the choice of scenario has a bigger effect on the error than the production function.
This is evident from the fact that there are clusters of points with the same color associated with each scenario. The clusters are particularly tight for the less severe scenarios. The exception is the Leontief production function: the two most severe shock scenarios produce outliers with downturns that are much too strong, and the remainder all produce results clustered together, with aggregate errors in the range of $-4\%$ to $-6\%$. This is true even for the weakest shock scenario $S_1$; by comparison, every other production function predicts a downturn that is too weak under scenario $S_1$, in the range of $8\% - 12\%$. The original model forecast, shown as a black asterisk, has an aggregate error of 0.6\% and a sectoral error of 13.5\%. It is useful to compare this to its counterpart in the retrospective analysis, which uses the $\text{S}_\text{5}$ scenario in combination with the IHS3 production function. This forecast performs better at the sectoral level, but worse at the aggregate level (the error is 2.5\%). Using an IHS2 production function with the $S_5$ shock scenario seems to provide the best combination of sectoral and aggregate errors. To put these results in perspective, it is worth comparing them to the other out-of-sample forecasts that were made around the same time. We do not know their sectoral errors but we can compare the aggregate errors. The Bank of England forecast, for example, predicted a downturn of about 30\%, which corresponds to an aggregate error of about -8\%, comparable to the scenarios with $S_6$ supply shocks, and to those combining the Leontief production function with supply shocks $S_1$ to $S_4$. Conversely, the average forecast by institutions and financial firms for the UK economy in Q2 2020 was -16.6\%, a downturn 5.5 percentage points milder than what was observed in reality. This forecast is in line with supply shock scenarios $S_3$ to $S_4$, as well as with $S_5$ combined with a linear production function. Therefore, it seems that the more accurate predictions of our model are obtained by combining shock scenario $S_5$ -- as in our original prediction -- with one of the IHS production functions. Why does this scenario, which was not designed to capture specific features of the UK economy, work so well? The evidence suggests that this is because there were many voluntary firm shutdowns, such as for the car manufacturing industry in the UK. The $S_5$ shock scenario does a better job of identifying industries that were not mandated to shut down, but did so in practice, and seems to better capture the behavioral response to the pandemic than a literal reading of UK regulations. \subsection{Analysis of the selected scenario} \label{sec:selectedscenarioanalysis} Given that it minimizes a combination of aggregate and sectoral error, and given that it is close to the original scenario used for our out-of-sample predictions, we focus on the scenario that combines $S_5$ and the IHS2 production function to illustrate the outcomes of our model in more detail. We start by showing the dynamics produced by the model, and then we evaluate how well the selected scenario explains some aspects of the economic effects of Covid-19 on the UK economy that we have not considered so far. \begin{figure}[hbt] \centering \includegraphics[width = 1.0\textwidth]{fig/aggregate_vs_sectoral.pdf} \caption{\textbf{Economic production for the chosen scenario as a function of time.} We plot production (gross output) as a function of time for each of the 55 industries.
Aggregate production is a thick black line and each sector is colored. Agricultural and industrial sectors are colored red; trade, transport, and restaurants are colored green; service sectors are colored blue. All sectoral productions are normalized to their pre-lockdown levels, and the width of each line is proportional to the steady-state gross output of the corresponding sector. For comparison, we also plot empirical gross output, normalized with respect to March 2020. } \label{fig:aggregatevssectoral} \end{figure} Figure \ref{fig:aggregatevssectoral} shows model results for the selected scenario for production (gross output); results for other important variables, such as profits, consumption and labor compensation (net of government benefits) are similar. When the lockdown starts, there is a sudden drop in economic activity, shown by a sharp decrease in production. Some industries further decrease production over time as they run out of critical inputs.\footnote{The reduction in production due to input bottlenecks is somewhat limited in this scenario as compared to the outliers in Figure \ref{fig:scenarios_comparison_main}. With supply shocks $S_5$ or $S_6$ and a Leontief production function, the economy collapses by 50\% due to the strong input bottlenecks created in a substantially labor-constrained economy in which all inputs are critical for production.} Throughout the simulation, service sectors tend to perform better than manufacturing, trade, transport and accommodation sectors. The main reason is that most service sectors face lower supply and demand shocks, as a high share of their workers can effectively work from home, and business and professional services depend less on consumption demand. In the UK, there was not a clear-cut lifting of the lockdown but, under scenario $S_5$, we take May $13^\text{th}$ as a conventional date on which lockdown measures are lifted (see Figure \ref{fig:supplyscenarios} in Appendix \ref{apx:shocks} for shock dynamics). By the end of June, the economy is still far from recovering. In part, this is due to the fact that the aggregate level of consumption does not return to pre-lockdown levels, owing to a reduction in expectations of permanent income associated with beliefs in an L-shaped recovery (Section \ref{sec:demand}), and due to the fact that we do not remove shocks to investment and exports (see Section \ref{sec:pandemic_shock}). \begin{table}[htbp] \centering \begin{tabular}{|l|c|c|} \hline Variable (compared to Q4-2019) & Data & Model \\ \hline Gross output April & -27.4\% & -25.3\% \\ Gross output May & -25.2\% & -26.9\% \\ Gross output June & -17.8\% & -16.8\% \\ Value added Q2 & -21.5\% & -22.1\% \\ \hline Private consumption Q2 & -25.3\% & -21.3\% \\ Investment Q2 & -26.3\% & -29.7\% \\ Government consumption Q2 & -17.5\% & -14.2\%\\ Inventories Q2 & -2.2\% & -0.5\% \\ Exports Q2 & -23.3\% & -27.8\% \\ Imports Q2 & -30.6\% & -23.9\% \\ \hline Wages and Salaries Q2 & -1.1\% & -4.3\% \\ Profits Q2 & -26.7\% & -22.3\%\\ \hline \end{tabular} \caption{Comparison between data and predictions of the selected scenario for the main aggregate variables. All percentage changes refer to the last quarter of 2019, which we take to represent the pre-pandemic economic situation.} \label{tab:aggregate-data-model} \end{table} We now turn to evaluating how well the selected scenario describes the economic effects of Covid-19 on the UK economy.
In terms of gross output, one can see in Table \ref{tab:aggregate-data-model} that the model slightly underestimates the recession in April and slightly overestimates it in May, while it correctly estimates a strong recovery in June.\footnote{ We aggregate empirical output from sectoral indexes using our steady-state output shares as weights. } Additionally, aggregate value added in the second quarter of 2020 is very close to the data. Our model, however, considers macroeconomic variables other than gross output and value added, so we also compare these other variables to data (Table \ref{tab:aggregate-data-model}). From national accounts, we collect data on private and government consumption, investment, change in inventories, exports and imports (expenditure approach to GDP); and on wages and salaries and profits (income approach to GDP). Looking at the expenditure approach to GDP, some variables fall more strongly in the model, such as investment and exports, while others fall more strongly in the data, such as private and government consumption, inventories and imports.\footnote{If one considers Exports-Imports as a component, the model predicts a current account deficit, while in reality there was a current account surplus. We should however note that we do not model international trade: we treat exports as exogenous and imports like locally produced goods and services.} However, the model predicts the relative reductions fairly well, as we find a stronger collapse in private consumption and investment than in government consumption or inventories, as in the data. Finally, considering the income approach to GDP, we overestimate the reduction in wages and salaries and underestimate the reduction in profits.\footnote{ Note that these categories are not jointly exhaustive, as the ONS also considers mixed income and taxes less subsidies, which are difficult to compare to variables in our model. } Nonetheless, we correctly predict that the absolute reduction in wages and salaries is much smaller than in profits (due to government subsidies). \begin{figure}[tb] \centering \includegraphics[width = 1.0\textwidth]{fig/model_data_comparison.pdf} \caption{\textbf{Comparison between model predictions and empirical data.} We plot predicted production (gross output) for each of 53 industries against the actual values from the ONS data, averaged across April, May and June. Values are relative to pre-lockdown levels and dot size is proportional to the steady-state gross output of the corresponding sector. Agricultural and industrial sectors are colored red; trade, transport, and restaurants are colored green; service sectors are colored blue. Dots above the identity line mean that the predicted recession is less severe than in the data, while the reverse is true for dots below the identity line. } \label{fig:model_data_comparison} \end{figure} Turning to the performance of our model at the disaggregate level of industries, Figure \ref{fig:model_data_comparison} shows gross output as predicted by the model and in the data. Here, gross output is averaged over the values it takes in April, May and June, both in the model and in the data, and compared to the value it had in Q4 2019. To interpret this figure, note that for all points below the identity line, model predictions are lower than in the data, i.e. the model is pessimistic. Conversely, above the identity line model predictions are higher than in the data, i.e. the model is optimistic.
Note that model predictions and empirical data are correlated, although not perfectly: the Pearson correlation coefficient weighted by industry size is 0.75. The majority of sectors decreased production to no less than 60\% of initial levels, both in the model and in the data, but a few sectors were forced to decrease production much more. While the predictions are generally very good, there are also some dramatic failures. We conjecture that this is due to idiosyncratic features of these industries that we could not take into account without overly complicating our model. For example, one sector for which the predictions of our model are completely off is C29 - Manufacture of vehicles. Almost all car manufacturing plants were completely closed in the UK in April and May, so production was essentially zero (7\% of the pre-pandemic level in April and 14\% in May). While plants reopened in June, production in Q2 was slightly above 20\% of the pre-pandemic level. Our model, however, predicts a level of production around 63\% of the pre-pandemic level. It is difficult to account for the complete shutdown of car manufacturing plants in our model, as our selected scenario for supply shocks does not require manufacturing plants to completely close during lockdown. We think that this discrepancy between model predictions and data can be explained by two factors. First, car manufacturing is highly integrated internationally and, in a period when most developed countries were implementing lockdown measures, international supply chains were highly disrupted. For simplicity, however, we did not model input bottlenecks arising from a lack of imported goods. Second, it is possible that firms producing non-essential goods voluntarily decided to stop production to protect the health of their workers, even if they were not forced to do so. Another example for which the predictions of our model are off is H51 - Air transport. Production in the data is 3\% of pre-lockdown levels, while our model predicts around 50\%. In the model, most activity of the air transport industry during lockdown is due to business travel, which is a non-critical input to many industries that we do not exogenously shut down (recall that industries aim at using non-critical inputs if they are available). In some other cases, however, our model gave accurate predictions even when the answer was far from obvious. Compare, for instance, industries M74\_M75 (Other Science) and O84 (Public Administration). Both received a very weak supply shock (3\% for Other Science and 1.1\% for Public Administration, see Table \ref{tab:FO_shocks_supply} in Appendix \ref{apx:shocks}) and no private consumption demand shocks. Yet, Public Administration had almost no reduction in production, while Other Science reduced its production to about 75\% of its pre-pandemic level. The ONS report in May (see footnote \ref{ft1}) quotes reduced intermediate demand as the reason why Other Science reduced its activity. Conversely, Public Administration's output is almost exclusively sold to the government, which did not reduce its consumption. Because of its ability to take into account supply chain effects and the resulting reductions in intermediate demand, our model is able to endogenously capture the difference between these two sectors, even though their shocks are small in both cases.
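As an aside, the size-weighted Pearson correlation quoted above can be computed as in the following sketch; the arrays are toy stand-ins for the model and ONS series (numpy assumed, not part of our codebase):
\begin{verbatim}
import numpy as np

def weighted_corr(x, y, w):
    """Pearson correlation between x and y with observation weights w."""
    mx, my = np.average(x, weights=w), np.average(y, weights=w)
    cov = np.average((x - mx) * (y - my), weights=w)
    sx = np.sqrt(np.average((x - mx) ** 2, weights=w))
    sy = np.sqrt(np.average((y - my) ** 2, weights=w))
    return cov / (sx * sy)

rng = np.random.default_rng(0)
model_pred = rng.uniform(0.4, 1.0, 53)            # predicted relative output
data_obs = model_pred + rng.normal(0.0, 0.1, 53)  # toy stand-in for ONS data
size = rng.lognormal(0.0, 1.0, 53)                # steady-state gross output
print(weighted_corr(model_pred, data_obs, size))
\end{verbatim}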
\begin{figure}[tb] \centering \includegraphics[width = 1.0\textwidth]{fig/new_model_april_june_all.pdf} \caption{\textbf{Comparison between model predictions and empirical data.} We plot predicted production (gross output) vs. observed values from ONS data. Production is relative to pre-lockdown levels and dot size is proportional to the steady-state gross output of the corresponding sector. Yellow is April and red is June. Black lines connect the same industry from April to June. Only a few industries are highlighted (see main text). } \label{fig:new_model_april_may_june_all} \end{figure} Figure \ref{fig:new_model_april_may_june_all} shows the ability of the model to predict sectoral dynamics. It is similar to Figure \ref{fig:model_data_comparison} but shows output in both April and June. The dots that represent the same industry in April and June are connected by a black line. We focus on a few industries that we discuss in this section, making all other points light grey (Figure \ref{fig:new_model_april_may_june_facets} in Appendix \ref{apx:selectedscenario} shows labels for all industries). To interpret changes from April to June, note that a line close to vertical implies that a given industry had a much stronger recovery in the data than in the model, while a horizontal line implies the opposite. A line parallel to the identity line indicates that the recovery was as strong in the data as in the model. Almost all sectors experience a substantial recovery from April to June, both in the model and in the data. An example in which our model correctly predicts dynamic supply chain effects is the recovery experienced by C23 (Manufacture of other non-metallic mineral products) as a consequence of the recovery by F (Construction). According to ONS reports, increased activity in construction in June is explained by the lifting of the lockdown and by adaptation to social distancing guidelines by construction firms. At the same time, industry C23 recovers due to the production of cement, lime, plaster, etc.\ to satisfy intermediate demand by construction (construction is by far the main customer of C23, buying almost 50\% of its output). This pattern is faithfully reproduced by the endogenous dynamics in our model. \section{Propagation of supply and demand shocks} \label{sec:nw_effects} In the standard Cobb-Douglas equilibrium IO model, productivity shocks propagate downstream and demand shocks propagate upstream \citep{carvalho2019production}. Thus, the elasticity of aggregate output to a shock to one sector depends on the type of shock and on the position of the industry in the input-output network. What can we say about these relationships in our model? Are there properties of an industry that could be computed ex ante to know how systemic it is? Is this different for supply and demand shocks? To answer these questions, we run the model with a single shock -- either supply or demand -- to a single industry. All other industries do not experience any shocks but have initial productive capacities and face initial levels of final demand. We then let the economy evolve under this specific setting for one month\footnote{ We also did the analysis with model simulations up to two months after the initial shock is applied. Since results are similar for the two cases, we only report results for the one-month simulations. }. We repeat this procedure for every industry, every production function specification and different shock magnitudes.
We then investigate whether the decline in total output can be explained by simple measures such as shock magnitude, output multipliers or upstreamness levels, which we formally define below. We first explain the supply and demand shock scenarios in somewhat more detail. \paragraph{Supply shock scenarios.} When considering supply shocks only, we completely switch off any adverse demand effects, i.e. $\epsilon_{i,t}^D = 0$ (which implies $\theta_{i,t} = \theta_{i,0}$ and $\tilde \epsilon_t^D = 0$), $\xi_t = 1$ and $f^d_{i,t} = f^d_{i,0}$ for all $i$ and $t$. We also set all supply shocks equal to zero, $\epsilon_{i,t}^S = 0$, except for a single industry $j$ which experiences a supply shock from the set $\epsilon_{j,t}^S \in \{0.1,0.2,...,1\}$. We then loop over every possible $j$. We do this for each of the different production function assumptions. \paragraph{Demand shock scenarios.} In our demand shock scenarios there are no supply shocks ($\epsilon_{i,t}^S = 0$ $\forall \; i,t$) and similarly there are no demand shocks for all but one industry $j$ ($\epsilon_{i,t}^D = 0$, $f^d_{i,t} = f^d_{i,0}$ $ \forall i \ne j, \forall t$). For the single industry $j$ we again let shocks vary between 10 and 100\%; $\epsilon_{j,t}^D \in \{0.1,0.2,...,1\}$. For simplicity we assume uniform shocks across all final demand categories of the given industry, resulting in $f^d_{j,t} = (1-\epsilon_{j,t}^D) f^d_{j,0}$. To keep things as simple as possible, we further assume that there is no fear of unemployment ($\xi_t = 1$) and that final consumers do not switch to alternative products at all ($\Delta s = 1$). Under these assumptions, the values for $\theta_{i,t}$ and $\tilde \epsilon_t^D$ are then computed as outlined in Section \ref{sec:consumptiondemandshockscenarios}.
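The experimental design can be summarized in a short sketch. Here \texttt{run\_model} is a hypothetical wrapper around the dynamic model of the main text that simulates one month and returns aggregate gross output relative to pre-shock levels; the supply shock case is shown, and the demand case is analogous:
\begin{verbatim}
import numpy as np

# Hypothetical wrapper around the dynamic IO model (placeholder body).
def run_model(supply_shocks, demand_shocks, production_function):
    ...  # one-month simulation; returns aggregate output / initial output

N = 55  # number of WIOD industries
results = {}
for prodf in ["linear", "leontief", "IHS1", "IHS2", "IHS3"]:
    for eps in [round(0.1 * k, 1) for k in range(1, 11)]:  # 10%, ..., 100%
        for j in range(N):
            eps_S = np.zeros(N)
            eps_S[j] = eps  # supply shock to industry j only
            results[(prodf, eps, j)] = run_model(
                supply_shocks=eps_S,
                demand_shocks=np.zeros(N),  # demand channel switched off
                production_function=prodf)
\end{verbatim}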
Figure \ref{fig:output_table} shows the simulation results broken down into the various shock magnitudes (vertical axis) and production function categories (horizontal axis). The coloring and the value of a tile represent the average aggregate output (as a fraction of initial output), where the average is taken over all $N$ runs. Results obtained from the demand shock scenarios, Figure \ref{fig:output_table}(b), do not differ across alternative production function specifications and thus we only show one representative case. The reason for this is that demand shocks to an industry are passed on proportionally to all of its suppliers, as shown in Eq.~\eqref{eq:order_interm}, regardless of the underlying production mechanisms. This is in stark contrast to the supply shock scenarios, Figure \ref{fig:output_table}(a), where economic impacts depend strongly on the choice of production function. While the economic impact of supply constraints is limited in the linear production model and depends fairly smoothly on the shock magnitude, this is not the case in the Leontief model. Here the economy experiences a major collapse if single industries are shut down. The results lie in between these two extremes when production functions differentiate between critical and non-critical inputs (IHS 1-3). Except for the linear production function simulations, we observe fairly wide distributions of economic impacts for several severe supply shock scenarios. Thus, not only the shock size matters but also \emph{which} industries are affected by these shocks. For example, applying an 80\% supply shock to industry Repair-Installation (C33) under the IHS2 assumption collapses the economy by about 50\%, although the industry accounts for less than half a percent of the overall economy. On the other hand, applying the same shock to the comparatively large industry Other Services (R\_S, 3\% of the economy) leads to a mere 6\% reduction of aggregate output. \begin{figure}[H] \centering \includegraphics[width = \textwidth]{fig/shock_tiles_both.pdf} \caption{ \textbf{Aggregate gross output as percentage of pre-shock levels after shocking single industries.} Columns depict different production functions and rows distinguish the magnitudes of the supply (a) and demand (b) shocks to which an industry is exposed. Results in (b) are only shown for one production function, since they are identical across alternative specifications. The values in the tiles and their coloring denote aggregate output levels as a percentage of pre-shock levels one month after the shock hits a given industry. These values are computed as averages over $N$ runs, always shocking only a single industry. } \label{fig:output_table} \end{figure} For a policymaker it is important to know which properties of an industry drive the amplification of shocks, as this could inform both the design of lockdown measures and reopening policies. To explore this more systematically, we regress output levels against potential explanatory factors such as upstreamness, output multipliers and industry sizes. An industry's upstreamness in a production network is its average distance to the final consumer \citep{antras2012measuring} and is also known as Total Forward Linkages \citep{miller2017output}. High upstreamness implies that the output of this industry requires several subsequent production steps before it is purchased by final consumers. Thus, relaxing shocks on industries with high upstreamness could potentially stimulate further economic activity. Since upstreamness boils down to the row sums of the Ghosh inverse \citep{miller2017output}, we obtain the $N$-dimensional upstreamness vector as $ u = (\mathbb{I} - B)^{-1} \mathbf{1}$, where the matrix elements $B_{ij} = Z_{ij,0}/x_{i,0}$ are ``allocation coefficients''. Upstreamness ranges from 1.004 (Household activities) to 2.742 (Warehousing) in our sample of UK industries. Output multipliers, or alternatively Total Backward Linkages, are a core metric in many economic studies. In input-output analysis, multipliers quantify the impact of a change in final demand in a given sector on the entire economy. Multipliers are related to various network centrality concepts and have been shown to be strongly predictive of long-term growth \citep{mcnerney2018production}. Since shocking an industry with a high multiplier should lead to larger decreases in intermediate demand, it is plausible that high-multiplier industries tend to amplify shocks more. The output multiplier is computed as the column sum of the Leontief inverse, i.e. $m = (\mathbb{I} - A^\top)^{-1} \mathbf{1}$. Multipliers range from 1 (Household activities) to 2.379 (Forestry) in our sample. Upstreamness and multipliers are different, but are fairly highly correlated, with a Pearson correlation of 0.45.
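Both metrics reduce to solving a linear system involving the input-output table; a minimal sketch with a hypothetical three-industry table (numpy assumed):
\begin{verbatim}
import numpy as np

# Toy three-industry IO table (hypothetical numbers, illustration only).
Z = np.array([[10., 20.,  5.],   # Z[i, j]: sales of industry i to industry j
              [ 5., 10., 15.],
              [ 2.,  8., 10.]])
x = np.array([100., 80., 60.])   # gross output

A = Z / x[np.newaxis, :]         # technical coefficients A_ij = Z_ij / x_j
B = Z / x[:, np.newaxis]         # allocation coefficients B_ij = Z_ij / x_i
I = np.eye(len(x))

u = np.linalg.solve(I - B, np.ones(len(x)))    # upstreamness (I - B)^{-1} 1
m = np.linalg.solve(I - A.T, np.ones(len(x)))  # multipliers (I - A')^{-1} 1
print(u, m)
\end{verbatim}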
We then regress log aggregate output one month after the initial shock hits the economy, $\log (\sum_k x_{k,30})$, against the shocked industry's log upstreamness and multiplier levels. Naturally, we would expect a larger decline if supply shocks hit a large industry, and similarly for demand shocks and the size of final consumption. Thus we also control for total industry size, measured in log gross output, $\log (x_{i,0})$, and industry-specific total demand, $\log ( c^d_{i,0} + f^d_{i,0} )$, in our regressions\footnote{ We do not include gross output and final demand values together as regressors to avoid multicollinearity. Industry gross output and final consumption are highly correlated; $\text{cor} \{ \log (x_{i,0}), \log (c_{i,0}+ f_{i,0}) \} =0.89$ (p-value $< 10^{-16}$). }. We then run the regression for every given shock magnitude and production function separately. Table \ref{tab:regsuppshock} summarizes the regression results for the supply shock scenarios. For supply shocks we find that upstreamness is a very good predictor of adverse economic impacts if the economy builds upon Leontief production mechanisms. If industries use linear production technologies, on the other hand, it is rather the size of the industry than its upstreamness level that explains reductions in aggregate output. When using the intermediate assumption of an IHS2 production function, both upstreamness levels and industry size significantly affect aggregate impacts, although the overall model fit ($R^2$) drops substantially. Note that supply shocks are to a large extent a policy variable, as they are directly coupled to non-pharmaceutical interventions such as industry-specific shutdowns. Our results indicate that upstreamness levels of industries are an important aspect for designing lockdown scenarios. In case of limited production flexibilities, upstreamness might even be a better indicator of the aggregate impact of industry closures than the actual size of the industry. Regression results for the demand shock experiments are shown in Table \ref{tab:regdemashock}. The first four columns show the results from univariate regressions where we include only one of the potential covariates: upstreamness, multipliers, final consumption and gross output. Somewhat surprisingly, output multipliers, a key metric for quantifying aggregate impacts resulting from demand-side perturbations in simpler input-output models, do not exhibit any predictive power in our case. Upstreamness, on the other hand, is positively associated with aggregate output values, indicating that demand shocks to upstream industries have less adverse impacts on the economy than demand shocks to downstream industries. Better model fits, however, are obtained when regressing aggregate output against indicators of industry size -- gross output or final consumption. Interestingly, combining final consumption values with the network-based metrics in a multivariate regression (column five) does not improve explanatory power at all. On the other hand, upstreamness and output size explain complementary parts of aggregate impacts (column six), resulting in a better model fit than any regression using final consumption values. Thus, industries' upstreamness and output sizes do not only play a role in the propagation of supply shocks but are also key determinants of demand shock amplification. Overall, we find that static measures are indicative of our model results but only explain them partially, even in this simplified case where we studied either demand or supply shocks to only one industry at a time.
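A minimal sketch of one such regression, with random toy data standing in for the simulation output (pandas and statsmodels assumed; all numbers hypothetical):
\begin{verbatim}
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
# Toy stand-in for one experiment: one row per shocked industry, for a
# given shock magnitude and production function (hypothetical numbers).
df = pd.DataFrame({
    "agg_output":   rng.uniform(0.5, 1.0, 55),    # sum_k x_{k,30}, normalized
    "upstreamness": rng.uniform(1.0, 2.8, 55),
    "multiplier":   rng.uniform(1.0, 2.4, 55),
    "x0":           rng.lognormal(0.0, 1.0, 55),  # gross output of industry
})
ols = smf.ols("np.log(agg_output) ~ np.log(upstreamness)"
              " + np.log(multiplier) + np.log(x0)", data=df).fit()
print(ols.summary())
\end{verbatim}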
In reality, supply and demand shocks are mixed and multiple industries are affected at the same time to varying degrees. In that case the propagation of pandemic shocks will be more complex. Nevertheless, our results indicate that upstream industries play an important role in the amplification of exogenous shocks. \FloatBarrier \input{shockprop_regtab.tex} \FloatBarrier \section{Discussion} \label{sec:discuss} In this paper we have investigated how locking down and re-opening the UK economy as a policy response to the Covid-19 pandemic affects economic performance. We introduced a novel economic model specifically designed to address the unique features of the current pandemic. The model is industry-specific, incorporating the production network and inventory dynamics. We use survey results from industry experts to model how critical different inputs are in the production of a specific industry. In simulation experiments with simpler shock scenarios and a simplified model setup, we found that an industry's upstreamness is predictive of shock amplification. However, the relationship is noisy and, in the case of downstream propagation of supply shocks, strongly depends on the underlying production mechanism. These results underline the necessity of more sophisticated macroeconomic models for quantifying the economic impacts resulting from national lockdowns and subsequent re-openings. Real-time GDP predictions for the UK economy made in an early version of this paper turned out to be very accurate \citep{pichler2020production}. But was this because we did things right, or because we just got lucky? Our analysis here shows that it was a mixture of the two. By investigating alternative shock scenarios and production functions, and by studying the sensitivity to parameters and initial conditions, we are able to see how the quality of the predictions depends on these factors. We find that the shock scenarios are the most important determinant, but the production function can also be very important, and some of the other parameters can affect the results as well. To make a real-time forecast we had to act quickly. There were no data available about which industry classifications were considered essential in the UK, and the few data available on UK jobs that could be performed from home were based on US O*NET data\footnote{ \url{https://www.ons.gov.uk/employmentandlabourmarket/peopleinwork/employmentandemployeetypes/articles/whichjobscanbedonefromhome/2020-07-21} }. In the interest of time, we estimated the UK supply shocks using predicted US supply shocks \cite{del2020supply}. These supply shocks were based on a list of essential industries that was considerably less permissive (i.e., fewer industries were considered essential) than the UK guidelines. With hindsight, this turned out to be fortuitous: respecting social distancing guidelines caused many industries in the UK to close even though they were not formally and explicitly required to do so. If we had had a list of essential British industries, our supply shocks would have been too weak, or we would have had to model social distancing constraints by industry, which is difficult. Even if they missed some of the details, the supply shocks estimated by \cite{del2020supply} provided a reasonable approximation to the truth. The choice of production function also matters a great deal. Our results suggest that the Leontief production function, which is widely used for understanding the response to disasters, is a poor choice.
This is for an intuitive reason: some inputs are not critical, and an industry can operate reasonably well without them, at least for a few months. Our results here show that production functions that incorporate this fact can do well. This could be further developed by calibrating CES production functions to correctly incorporate when substitutions are appropriate. The IHS Markit survey should eventually be repeated with larger samples and tested in detail (but that is beyond the scope of this paper). At the other extreme, our results also suggest that the linear production function is a poor choice. It comes close to the observed aggregate impact only with the strongest shock scenario and never performs well at the sectoral level. Our results indicate that dynamic models of the type that we developed here can do a good job of forecasting disruptions in the economy. We want to emphasize that, while there is wide variation in the results under different scenarios, this variation could be dramatically reduced by collecting better data. A clear example is the choice of inventory levels. In our original model we had no data for inventory levels of UK industries, so we used data for the US. Replacing this by UK data made a substantial improvement in our results. Similarly, the economic data we had available was from 2019, and the IO data was from 2014 -- better real-time measurements of the response of industries to the pandemic would likely have improved our predictions. A more extensive study of the importance of different inputs to production could have reduced ambiguity about the choice of production function. At the highest level, our model illustrates the value of its key features. These are: modeling at the sectoral level, allowing both supply and demand shocks to operate simultaneously, using a realistic production function that properly captures nonlinearities without exaggerating them, and using a dynamic model that incorporates inventory effects. With better data and better measurement of parameters, our results demonstrate that a model of this type can give useful insight into the economics of a disaster such as the Covid-19 pandemic. \FloatBarrier \small \bibliographystyle{agsm} \section{ Model production function and CES } \label{apx:ces_prodfun} Here we show that the production functions used in the main text are closely related to nested CES production functions. Specifically, we consider a CES production function with three nests of the form (we suppress time indices for convenience) \begin{equation} x_{i}^\text{inp} = \Big( a_i^\text{C} (z_i^\text{C})^\beta + a_i^\text{IMP} (z_i^\text{IMP})^\beta + (a_i^\text{NC})^{1-\beta} (z_i^\text{NC})^\beta \Big)^{\frac{1}{\beta}}, \end{equation} where $\beta$ is the substitution parameter. The variables $a_i^\text{C} = \sum_{j \in \text{C}} A_{ji}$, $a_i^\text{IMP} = \sum_{j \in \text{IMP}} A_{ji} $ and $a_i^\text{NC} = \sum_{j \in \text{NC}} A_{ji} $ are the input shares (technical coefficients) for critical, important and non-critical inputs, respectively. To be consistent with the specifications of the main text, we do not consider labor inputs here and only focus on $x_i^\text{inp}$. Alternatively, we could include labor inputs in the set of critical inputs and derive the full production function in an analogous manner.
$z_i^\text{C}$, $z_i^\text{IMP}$ and $z_i^\text{NC}$ are CES aggregates of critical, important and non-critical inputs, for which we have \begin{align} z_i^\text{C} &= \left[ \sum_{j \in \text{C}} A_{ji}^{1-\nu} S_{ji}^\nu \right]^{\frac{1}{\nu}}, \\ z_i^\text{IMP} &= \frac{1}{2} \left[ \sum_{j \in \text{IMP}} A_{ji}^{1-\psi} S_{ji}^\psi \right]^{\frac{1}{\psi}} + \; \frac{1}{2} x_i^\text{cap}, \\ z_i^\text{NC} &= \left[ \sum_{j \in \text{NC}} A_{ji}^{1-\zeta} S_{ji}^\zeta \right]^{\frac{1}{\zeta}}. \end{align} If we assume that every input is critical (i.e. the sets of important and non-critical inputs are empty: $a_i^\text{IMP} = a_i^\text{NC} = 0$) and take the limits $\beta, \nu \to -\infty$, we recover the Leontief production function \begin{equation} x_i^\text{inp} = \underset{j}{\min} \left\{ \frac{S_{ji}}{A_{ji}} \right\}. \label{eq:leon_limit} \end{equation} If we assume that there are no critical or important inputs at all ($a_i^\text{C} = a_i^\text{IMP} = 0$), taking the limits $\beta \to -\infty$ and $\zeta \to 1$ yields the linear production function \begin{equation} x_i^\text{inp} = \frac{ \sum_{j} S_{ji} }{ a_i^\text{NC} }. \label{eq:linear_limit} \end{equation} The IHS1 and IHS3 production functions treat important inputs either as critical or as non-critical, i.e. the set of important inputs is again empty ($a_i^\text{IMP} = 0$). We can approximate these functions again by taking the limits $\beta, \nu \to -\infty$ and $\zeta \to 1$, which yields \begin{equation} x_i^\text{inp} = \min \left\{ \underset{j \in \text{C} }{\min} \left\{ \frac{S_{ji}}{A_{ji}} \right\}, \frac{ \sum_{j \in \text{NC} } S_{ji} }{ a_i^\text{NC} } \right\}. \label{eq:ihs13_limit} \end{equation} Note that the difference between the two production functions lies in the different definition of critical and non-critical inputs. The functional form in Eq.~\eqref{eq:ihs13_limit} applies to both cases equivalently. The IHS2 production function, where the set of important inputs is non-empty, can be similarly approximated. In addition to the limits above, we also take the limit $\psi \rightarrow -\infty$ to obtain \begin{equation} x_i^\text{inp} = \min \left\{ \underset{j \in \text{C} }{\min} \left\{ \frac{S_{ji}}{A_{ji}} \right\}, \frac{1}{2} \left( \underset{j \in \text{IMP} }{\min} \left\{ \frac{S_{ji}}{A_{ji}} \right\} + x_i^\text{cap} \right), \frac{ \sum_{j \in \text{NC} } S_{ji} }{ a_i^\text{NC} } \right\}. \label{eq:ihs2_limit} \end{equation} Eqs.~\eqref{eq:ihs13_limit} and \eqref{eq:ihs2_limit} are not exactly identical to the IHS production functions used in the main text (Eqs.~\eqref{eq:xinp_ihs1} -- \eqref{eq:xinp_ihs3}), but very similar. The only difference is that in the IHS functions non-critical inputs do not play a role for production at all, whereas here they enter the equations as a linear term. However, simulations show that this difference is practically irrelevant. Simulating the model with the CES-derived functions instead of the IHS production functions yields exactly the same results, indicating that the linear term representing non-critical inputs is never a binding constraint.
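As a quick numerical sanity check of the Leontief limit in Eq.~\eqref{eq:leon_limit}, one can evaluate the critical-input CES aggregate for increasingly negative substitution parameters (made-up coefficients, illustration only):
\begin{verbatim}
import numpy as np

A = np.array([0.2, 0.3, 0.1])  # technical coefficients of critical inputs
S = np.array([5.0, 3.0, 2.0])  # inventory stocks

def ces_critical(S, A, nu):
    # z_i^C = [ sum_j A_ji^(1-nu) S_ji^nu ]^(1/nu)
    return np.sum(A ** (1 - nu) * S ** nu) ** (1 / nu)

for nu in [-1., -10., -100.]:
    print(nu, ces_critical(S, A, nu))  # approaches min(S / A) = 10.0
print(np.min(S / A))                   # Leontief benchmark
\end{verbatim}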
\section{First-order supply and demand shocks} \label{apx:shocks} \subsection{Supply shocks} Due to the Covid-19 pandemic, industries experience supply-side reductions due to the closure of non-essential industries, workers not being able to perform their activities at home, and difficulties adapting to social distancing measures. Many industries also face substantial reductions in demand. \cite{del2020supply} provide quantitative predictions of these first-order supply and demand shocks for the U.S. economy. To calculate supply-side predictions, \cite{del2020supply} constructed a Remote Labor Index, which measures the ability of different occupations to work from home, and scored industries according to their essentialness based on the Italian government regulations. We follow a similar approach. We score industries' essentialness based on the UK government regulations and use occupational data to estimate the fraction of workers that could work remotely and the difficulties sectors faced in adapting to social distancing measures for on-site work. Several of our estimates are based on indexes and scores available for industries in the NAICS 4-digit classification system. An essential step in our methodology is to map these estimates into the WIOD industry classification system. We explain our mapping methodology below. \paragraph{NAICS to WIOD mapping} We build a crosswalk from the NAICS 4-digit industry classification to the classification system used in WIOD, which is a mix of ISIC 2-digit and 1-digit codes. We make this crosswalk using the NAICS to ISIC 2-digit crosswalk from the European Commission and then aggregating the 2-digit codes presented as 1-digit in the WIOD classification system. We then do an employment-weighted aggregation of the index or score under consideration from the 277 industries at the NAICS 4-digit classification level to the 55 industries in the WIOD classification (see the sketch below). Some of the 4-digit NAICS industries map into more than one WIOD industry. When this happens, we assume employment is split uniformly among the WIOD industries the NAICS industry maps into. \paragraph{Remote Labor Index and Physical Proximity Index} To estimate the fraction of workers that could work remotely, we use the Remote Labor Index from \cite{del2020supply}. To understand the difficulties different sectors face in adapting to the social distancing measures, we compute an industry-specific Physical Proximity Index. Other works have also used the physical proximity of occupations to understand the economic consequences of the lockdown \citep{mongey2020workers,koren2020business}. We map the Physical Proximity Work Context\footnote{ \url{https://www.onetonline.org/find/descriptor/result/4.C.2.a.3} } of occupations provided by O*NET into industries, using the same methodology that \cite{del2020supply} used to map the Remote Labor Index into industries. That is, we use BLS data that indicate the occupational composition of each industry and take the employment-weighted average of the work context scores of the occupations employed in each industry at the NAICS 4-digit classification system. Under the assumption that the distribution of occupations across industries and the percentage of essential workers within an industry are the same for the US and the UK, we can map the Remote Labor Index by \cite{del2020supply} and the Physical Proximity Index into the UK economy following the mapping methodology explained in the previous paragraph. The WIOD industry sector 'T' ("Activities of households as employers; undifferentiated goods- and services-producing activities of households for own use") maps into only one NAICS code, for which we have neither RLI nor PPI. Since we consider sector 'T' to be similar to the 'R\_S' sector ("Other service activities"), we use the RLI and PPI of 'R\_S' for sector 'T'.
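A minimal sketch of the employment-weighted aggregation described above, with a hypothetical crosswalk and index values (pandas assumed; the real inputs are the European Commission crosswalk and BLS employment data):
\begin{verbatim}
import pandas as pd

# Hypothetical crosswalk: NAICS-4 industries mapped to WIOD sectors, with
# NAICS-level employment already split uniformly across multiple matches.
crosswalk = pd.DataFrame({
    "naics4":     ["3361", "3361", "4811", "7225"],
    "wiod":       ["C29",  "C30",  "H51",  "I"],
    "employment": [80_000, 80_000, 60_000, 150_000],
})
rli_naics = {"3361": 0.15, "4811": 0.30, "7225": 0.05}  # NAICS-level index

df = crosswalk.assign(rli=crosswalk["naics4"].map(rli_naics))
wiod_rli = (df["rli"] * df["employment"]).groupby(df["wiod"]).sum() \
           / df["employment"].groupby(df["wiod"]).sum()
print(wiod_rli)  # employment-weighted index per WIOD sector
\end{verbatim}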
In Figures~\ref{fig:RLI} and \ref{fig:PPI} we show the Remote Labor Index and the Physical Proximity Index of the WIOD sectors. \begin{figure} \centering \includegraphics[width = 0.5\textwidth]{fig/RLI_wiod.png} \caption{\textbf{Remote Labor Index of industries.} Remote Labor Index of the WIOD industry classification. See Table \ref{tab:FO_shocks_supply} for code-industry names.} \label{fig:RLI} \end{figure} \begin{figure} \centering \includegraphics[width = 0.5\textwidth]{fig/PPI_wiod.png} \caption{\textbf{Physical Proximity Index of industries.} Physical Proximity Index of the WIOD industry classification. See Table \ref{tab:FO_shocks_supply} for code-industry names.} \label{fig:PPI} \end{figure} \paragraph{Essential score} To determine the essential score of industries in scenarios $S_1$ to $S_4$, we follow the UK government guidelines. We break down WIOD sectors, which are aggregates of 2-digit NACE codes, into finer 3- and 4-digit industries. The advantage of having smaller subsectors is that we can associate shutdown orders with the subsectors and then compute an average essential score for the aggregate WIOD sector. In the following, we provide some details on how we calculate essential scores, while referring the reader to the online repository to check all our assumptions. \begin{itemize} \item Essential score of G45 (Wholesale and retail trade and repair of motor vehicles and motorcycles). The only shops in this sector that were mandated to close were car showrooms (see footnote \ref{footn:ukleg} for a link to official legislation). Lacking a disambiguation between retail and wholesale trade of motor vehicles, we assign a 0.5 essential score to subsector 4511, which comprises 72\% of the turnover of G45, and an essential score of 1 to all other subsectors. The average essential score for G45 turns out to be 64\%. \item Essential score of G47 (Retail trade, except of motor vehicles and motorcycles). All ``non-essential'' shops were mandated to close, except food and alcohol retailers, pharmacies and chemists, newsagents, homeware stores, petrol stations, bicycle shops and a few others. We assigned an essential score of either 0 or 1 to all 37 4-digit NACE subsectors that compose G47. Weighting essential scores by turnover results in an average essential score of 71\%. \item Essential score of I (Accommodation and food service activities). Almost all economic activities in this sector were mandated to close, except hotels for essential workers (e.g. those working in transportation) and workplace canteens where there is no practical alternative for staff at that workplace to obtain food. Assigning a 10\% essential score to the main hotels subsector (551) and a 50\% essential score to subsector 5629, which includes workplace canteens, results in an overall 5\% essential score for sector I. \item Essential score of R\_S (Recreational and other services). Almost all activities in this sector were mandated to close, except those related to repair, washing and funerals. Considering all 34 subsectors yields an essential score of 7\%. \end{itemize} \paragraph{Real Estate.} This sector includes imputed rents, which account for $69\%$ of the monetary value of the sector.\footnote{ \url{https://www.ons.gov.uk/economy/grossvalueaddedgva/datasets/nominalandrealregionalgrossvalueaddedbalancedbyindustry}, Table 1B. }
Because we think applying a supply shock to imputed rents does not make sense, we apply the supply shock derived from the RLI and essential score (which is around $50\%$) only to the $31\%$ of the sector not accounted for by imputed rents, leading to a $15\%$ final supply shock to Real Estate (due to an error, our original work used a value of $4.7\%$). If we had used the correct real estate shock, we would have predicted a 22.1\% reduction, exactly as in the data. In the main text, we still describe our prediction as having been a 21.5\% contraction, as explicitly stated in our original paper. Note that these differences are within the range of expected data revisions.\footnote{See for instance \url{https://www.ons.gov.uk/economy/grossdomesticproductgdp/datasets/revisionstrianglesforukgdpabmi} for ONS GDP quarterly national accounts revisions, which are of the order of 1 pp.} In scenario $S_5$, we mapped the supply shocks estimated by \cite{del2020supply} at the NAICS level into the WIOD classification system as explained previously. The WIOD industry sector 'T' ("Activities of households as employers; undifferentiated goods- and services-producing activities of households for own use") maps into only one NAICS code, for which we do not have a supply shock. Since this sector is likely to be essential, we assume a zero supply shock. The supply shocks at the NAICS level depend on the list of industries' essential scores at the NAICS 4-digit level provided by \cite{del2020supply}. It is important to note that, although the list of essential scores provided by \cite{del2020supply} is based on the Italian list of essential industries, these lists are based on different industry classification systems (NAICS and NACE, respectively) which do not have a one-to-one correspondence. To derive the essential scores of industries at the NAICS level, \cite{del2020supply} followed three steps: (i) they considered a 6-digit NAICS industry essential if the industry had correspondence with at least one essential NACE industry; (ii) they aggregated the 6-digit NAICS essential lists to the 4-digit level, taking into account the fraction of NAICS 6-digit subcategories that were considered essential; (iii) they revised each 4-digit NAICS industry's essential score to check for implausible classifications and reclassified ten industries whose original essential score seemed implausible. Step (i) likely resulted in a larger fraction of industries being classified as essential at the NAICS level than at the NACE level. This, in turn, results in supply shocks $S_5$ that are milder than they would have been if the essential scores had been mapped directly from NACE to WIOD and the supply shocks calculated at the WIOD level. For scenario $S_6$ we used the list of essential industries compiled by \cite{fana2020covid} for Italy, Germany, and Spain. We construct a single list by taking, for each industry, the mean essential score over the three countries. This list is at the ISIC 2-digit level, which we aggregate to the WIOD classification, weighting each sector by its gross output in the UK. Table \ref{tab:FO_shocks_supply} gives an overview of first-order supply shocks and Figure \ref{fig:supplyscenarios} shows the supply scenarios over time. \begin{figure} \centering \includegraphics[width = \textwidth]{fig/supplyshock_scenarios.pdf} \caption{ \textbf{Comparison of supply shock scenarios over time.} Each panel shows industry-specific supply shocks $\epsilon_{i,t}^S$ for a given scenario.
The coloring of the lines is based on the same code as in Figure \ref{fig:aggregatevssectoral}. } \label{fig:supplyscenarios} \end{figure} \subsection{Demand shocks} For calibrating consumption demand shocks, we use the same data as \cite{del2020supply}, which are based on the \cite{CBO2006} estimates. These estimates are available only at the more aggregate 2-digit NAICS level and map into WIOD ISIC categories without complications. To give a more detailed estimate of consumption demand shocks, we also link manufacturing sectors to the closure of certain non-essential shops as follows. \begin{itemize} \item Consumption demand shock to C13-C15 (Manufacture of textiles, wearing apparel, leather and other related products). Four subsectors (4751, 4771, 4772, 4782) selling goods produced by this manufacturing sector were mandated to close, while one subsector was permitted to remain open as it sells homeware goods (4753). Lacking more detailed information about the share of C13-C15 products that these subsectors sell, we simply give equal shares to all five subsectors, leading to an 80\% consumption demand shock to C13-C15. \item Consumption demand shock to C20 (Manufacture of chemicals and chemical products). Three subsectors (4752, 4773, 4774) selling homeware and medical goods were considered essential, while subsector 4775, selling cosmetic and toilet articles, was mandated to close. Using the same assumptions as above, we get a 25\% consumption demand shock for this sector. \end{itemize} The same procedure leads to consumption demand shocks for all other manufacturing subsectors. Table \ref{tab:FO_shocks_demand} shows the demand shock for each sector and Figure \ref{fig:demand_time} illustrates the demand shock scenarios over time. \begin{figure} \centering \includegraphics[width = \textwidth]{fig/demandshock_time.pdf} \caption{ \textbf{Industry-specific demand shocks over time.} The upper left panel shows the change in preferences $\theta_{i,t}$. The bottom left panel shows shock magnitudes to investment and export. The upper right panel shows demand shocks due to fear of unemployment $\xi_{t}$. The bottom right panel is the aggregate demand shock $\tilde \epsilon_t$, taking the savings rate of 50\% into account. The coloring of the lines for industry-specific results follows the same code as in Figure \ref{fig:aggregatevssectoral}. } \label{fig:demand_time} \end{figure} \input{appendices/tab_FOshocks} \section{Critical vs. non-critical inputs} \label{apx:ihs} A survey was designed to address the question of when production can continue during a lockdown. IHS Markit analysts were asked to rate every input of a given industry. The exact formulation of the question was as follows: ``For each industry in WIOD, please rate whether each of its inputs are essential. We will present you with an industry X and ask you to rate each input Y.
The key question is: Can production continue in industry X if input Y is not available for two months?'' Analysts could rate each input according to the following allowed answers: \begin{itemize} \item \textbf{0} -- This input is \textit{not} essential \item \textbf{1} -- This input is essential \item \textbf{0.5} -- This input is important but not essential \item \textbf{NA} -- I have no idea \end{itemize} To avoid confusion with the unrelated definition of essential industries which we used to calibrate first-order supply shocks, we refer to inputs as \textit{critical} and \textit{non-critical} instead of \textit{essential} and \textit{non-essential}. Analysts were provided with the share of each input in the expenses of the industry. It was also made explicit that the ratings assume no inventories, such that a rating captures the effect on production if the input is not available. Every industry was rated by one analyst, except for the industries Mining and Quarrying (B) and Manufacture of Basic Metals (C24), which were rated by three analysts. In cases with several ratings, we took the average of the ratings and rounded it to 1 if the average was at least $2/3$ and to 0 if the average was at most $1/3$; average ratings lying between these boundaries were assigned the value 0.5. The ratings for each industry and input are depicted in Figure~\ref{fig:ihs_matrix}. A column denotes an industry and the corresponding rows its inputs. Blue colors indicate \textit{critical}, red \textit{important, but not critical} and white \textit{non-critical} inputs. Note that under the assumption of a Leontief production function every element would be considered critical, yielding a completely blue-colored matrix. The results shown here indicate that the majority of elements are non-critical inputs (2,338 ratings with score = 0), whereas only 477 industry inputs are rated as critical. A further 365 inputs are rated as important, although not critical (score = 0.5), and \textit{NA} was assigned eleven times. \begin{figure}[htbp] \centering \includegraphics[trim = {0cm 0cm 0cm 0cm}, clip,width=\textwidth]{fig/ihs/ihs_inputs_matrix.pdf} \caption{Criticality scores from IHS Markit analysts. Rows are inputs (supply) and columns industries using these inputs (demand). The blue color indicates critical (score=1), red important (score=0.5) and white non-critical (score=0) inputs. Black denotes inputs which have been rated with NA. The diagonal elements are considered to be critical by definition. For industries with multiple input ratings we took the average of all ratings and assigned a score of 1 if the averaged score was at least $2/3$ and a score of 0 if the average was at most $1/3$. } \label{fig:ihs_matrix} \end{figure} \begin{figure}[!htbp] \centering \includegraphics[trim = {0cm 0cm 0cm 0cm}, clip,width=1\textwidth]{fig/ihs/ihs_scatter_all.pdf} \caption{(Left panel) The figure shows how often an industry is rated as a critical input to other industries (x-axis) against the share of critical inputs this industry is using. The center and right panels are the same as the left panel, except for using half-critical and non-critical scores, respectively. In each plot the identity line is shown. Point sizes are proportional to gross output. } \label{fig:ihs_scatter} \end{figure}
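For reference, the rule used to consolidate multiple analyst ratings amounts to the following sketch:
\begin{verbatim}
def consolidate(ratings):
    """Average several analyst ratings and round to {0, 0.5, 1} using
    the 1/3 and 2/3 cutoffs described in the text."""
    avg = sum(ratings) / len(ratings)
    if avg >= 2 / 3:
        return 1.0
    if avg <= 1 / 3:
        return 0.0
    return 0.5

print(consolidate([1, 0.5, 1]))  # -> 1.0
print(consolidate([0, 0, 0.5]))  # -> 0.0
print(consolidate([0, 0.5, 1]))  # -> 0.5
\end{verbatim}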
The left panel of Figure \ref{fig:ihs_scatter} shows for each industry how often it was rated as a critical input to other industries (x-axis) and how many critical inputs this industry relies on in its own production (y-axis). Electricity and Gas (D35) is rated most frequently as a critical input in the production of other industries (score = 1 for almost 60\% of industries). Also frequently rated as critical are Land Transport (H49) and Telecommunications (J61). On the other hand, many manufacturing industries (ISIC codes starting with C) stand out as relying on a large number of critical inputs. For example, around 27\% of inputs to Manufacture of Coke and Refined Petroleum Products (C19) as well as to Manufacture of Chemicals (C20) are rated as critical. The center panel of Figure \ref{fig:ihs_scatter} shows the equivalent plot for 0.5 ratings (important, but not critical inputs). Financial Services (K64) is most frequently rated as an important input which does not necessarily stop the production of an industry if not available. Conversely, the industry relying on many important, but not binding, inputs is Wholesale and Retail Trade (G46), almost half of whose inputs were rated with a score of 0.5. This makes sense given that this industry heavily relies on all these inputs, but lacking one of them does not halt economic production. This case also illustrates that a Leontief production function could vastly overestimate input bottlenecks, as Wholesale and Retail Trade would most likely still be able to realize output even if several inputs were not available. In the right panel of Figure \ref{fig:ihs_scatter} we show the same scatter plot but for non-critical inputs. 25 industries are rated to be non-critical inputs to other industries in 80\% of all cases, with Household Activities (T) and Manufacture of Furniture (C31\_C32) being rated as non-critical in at least 96\% of cases. Industries like Other Services (R\_S), Other Professional, Scientific and Technical Activities (M74\_M75) and Administrative Activities (N) rely mostly on non-critical inputs ($>$90\%). A detailed breakdown of the input- and industry-specific ratings is given in Table \ref{tab:ihs_results}. \input{appendices/tab_ihs} \FloatBarrier \section{Inventory data and calibration} \label{apx:inventory} In the previous version of our work \citep{pichler2020production}, we used U.S. data from the BEA to calibrate the inventory target parameters $n_j$ in Eq. \eqref{eq:order_interm}. Here, we use more detailed UK data from the Annual Business Survey (ABS). The ABS is the main structural business survey conducted by the ONS\footnote{ \url{https://www.ons.gov.uk/businessindustryandtrade/business/businessservices/methodologies/annualbusinesssurveyabs} }. It is sampled from all non-farm, non-financial private businesses in the UK (about two-thirds of the UK economy); data are available up to the 4-digit NACE level, but for our purposes 2-digit NACE industries are sufficient. The survey asks for information on a number of variables, including turnover and inventory stocks (at the beginning and end of each year). Data are available from 2008 to 2018. They show a general increase in turnover and inventory stocks (consistent with growth of the UK economy in the same period), with moderate year-on-year fluctuations.
\begin{figure} \centering \includegraphics[width = \textwidth]{fig/inventory_levels.pdf} \caption{ \textbf{Inventory/sales ratios over time.} } \label{fig:inventory_levels} \end{figure} To proxy inventory levels available in February 2020, we proceed as follows for each industry: (i) we take the simple average between beginning- and end-of-year inventory stock levels; (ii) we calculate the ratio between this average and yearly turnover, and multiply this number by 365, because we consider a daily timescale: this inventory-to-turnover ratio is our proxy of $n_j$ (as can be seen in Figure~\ref{fig:inventory_levels}, the inventory/sales ratios are remarkably constant over time, suggesting that inventory stocks at the beginning of the Covid-19 pandemic could reliably be estimated from past data); (iii) we take a weighted average of the inventory-to-turnover ratios across all years between 2008 and 2018, giving higher weight to more recent years;\footnote{More specifically, we consider exponential weights, such that weights from a given year $X$ are proportional to $0.95^{(2018-X)}$. Some years are missing due to confidentiality problems, and data for some sectors have clear problems in some years. We deal with missing values by giving zero weights to years with missing values and renormalizing weights over the available years.} (iv) we aggregate 2-digit NACE industries to WIOD sectors (for example, we aggregate C10, C11 and C12 to C10\_C12); (v) we fill in the missing sectors (K64, K65, K66, O84, T) by imputing the average inventory-to-turnover ratio across all service sectors. The final inventory-to-sales ratios $n_j$ are shown in Figure~\ref{fig:uk_inv_sales_ratios}. As can be seen, inventories are much larger relative to sales in production, construction and trade, while they are generally lower in services, although there is considerable heterogeneity across sectors.
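Steps (i)--(iii) can be summarized in a short sketch for a single industry, assuming a hypothetical ABS-style table (numpy and pandas assumed; all numbers made up):
\begin{verbatim}
import numpy as np
import pandas as pd

# Hypothetical ABS-style data for one 2-digit NACE industry.
abs_data = pd.DataFrame({
    "year":      [2016, 2017, 2018],
    "turnover":  [1000., 1050., 1100.],
    "inv_begin": [ 110.,  115.,  120.],
    "inv_end":   [ 115.,  120.,  126.],
})

avg_inv = (abs_data["inv_begin"] + abs_data["inv_end"]) / 2     # step (i)
ratio_days = 365 * avg_inv / abs_data["turnover"]               # step (ii)
weights = 0.95 ** (2018 - abs_data["year"])                     # step (iii)
n_j = np.average(ratio_days, weights=weights)  # inventory target in days
print(n_j)
\end{verbatim}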
\begin{figure} \centering \includegraphics[width = \textwidth]{fig/uk_inv_sales_ratios.pdf} \caption{ \textbf{Inventory/sales ratios for all WIOD industries.} } \label{fig:uk_inv_sales_ratios} \end{figure} \newpage \section{Notation} \label{apx:notation} \begin{table}[H] \scriptsize \begin{center} \begin{tabular}{ | p{2cm} | p{12cm} |} \hline Symbol & Name \\ \hline $N$ & Number of industries \\ $t$ & Time index \\ $t_\text{start\_lockdown}$ & Start date of lockdown (March 23)\\ $t_\text{end\_lockdown}$ & End date of lockdown (May 13) \\ $x_{i,t}$ & Total output of industry $i$ \\ $x_{i,t}^{\text{cap}}$ & Industry production capacity based on available labor \\ $x_{i,t}^{\text{inp}}$ & Industry production capacity based on available inputs \\ $d_{i,t}$ & Total demand for industry $i$ \\ $Z_{ij,t}$ & Intermediate consumption of good $i$ by industry $j$ \\ $O_{ij,t}$ & Intermediate orders (demand from industry $j$ to industry $i$) \\ $c_{i,t}$ & Household consumption of good $i$ \\ $c_{i,t}^d$ & Demanded household consumption of good $i$ \\ $f_{i,t}$ & Non-household final demand of good $i$ \\ $f_{i,t}^d$ & Demanded non-household final demand of good $i$ \\ $l_{i,t}$ & Labor compensation to workers of industry $i$ \\ $\tilde l_t$ & Total labor compensation \\ $\tilde{l}_{t}^p$ & Expectations for permanent labor income \\ $\tilde l^*_t$ & Total labor compensation plus social benefits \\ $\tilde c_t$ & Total household consumption \\ $\tilde{c}_{t}^d$ & Aggregate consumption demand \\ $n_{j}$ & Number of days of targeted inventory for industry $j$ \\ $A_{i,j}$ & Payments to $i$ per unit produced of $j$ (technical coefficients) \\ $S_{ij,t}$ & Stock of material $i$ held in $j$'s inventory \\ $\tau$ & Speed of inventory adjustment \\ $\theta_{i,t}$ & Share of goods from industry $i$ in consumption demand \\ $\bar \theta_{i,t}$ & Share of goods from industry $i$ in consumption demand (unnormalized) \\ $\rho$ & Speed of adjustment of aggregate consumption \\ $m$ & Share of labor income used to consume final domestic goods \\ $\xi_t$ & Fraction of pre-pandemic labor income that households expect to retain in the long run \\ $\epsilon_t$ & Exogenous consumption shock \\ $\tilde{\epsilon}_{i}^D$ & Relative changes in demand for goods of industry $i$ during lockdown \\ $\tilde{\epsilon}_{i,t}$ & Relative changes in demand for goods of industry $i$ \\ $\tilde \epsilon_t$ & Aggregate consumption shock \\ $\Delta l_{i,t}$ & Desired change of labor supply of industry $i$\\ $ l_{i,t}^\text{max}$ & Maximum labor supply for industry $i$\\ $\gamma_\text{H}$, $\gamma_\text{F}$ & Speed of upward/downward labor adjustment (hiring/firing) \\ $\Delta s$ & Change in saving rate \\ $\nu$ & Consumption shock term representing beliefs in L-shaped recovery \\ $b$ & Share of labor income compensated as social benefit \\ $\text{RLI}_i$ & Remote Labor Index of industry $i$\\ $\text{ESS}_i$ & Essential score of industry $i$\\ $\text{PPI}_i$ & Physical Proximity Index of industry $i$\\ $\iota$ & Scaling factor for $\text{PPI}_i$ \\ $\mathcal{V}_i$ & Set of critical inputs for industry $i$\\ $\mathcal{U}_i$ & Set of important inputs for industry $i$\\ \hline \end{tabular} \end{center} \caption{Notation summary.} \label{tab:notation_econ} \end{table} \section{Sensitivity analysis} \label{apx:sensitivity} To gain a better understanding of the model behavior, we conduct a series of sensitivity analyses by showing aggregate output time series under alternative parametrizations.
In particular, we vary the exogenous shock inputs, the production function specification and the parameters $\tau$, $\gamma_H$, $\gamma_F$ and $\Delta s$. \paragraph{Supply shocks.} Figure~\ref{fig:suppdemsens}(a) shows model dynamics of total output for the different supply shock scenarios considered. It becomes clear that alternative specifications of supply shocks can influence model results enormously. Supply shocks derived directly from the UK lockdown policy ($S_1-S_4$) yield a similar recovery pattern but nevertheless entail markedly different levels of overall impacts. When initializing the model with the shock estimates from \cite{del2020supply} ($S_5$) and \cite{fana2020covid} ($S_6$), we obtain very different dynamics. \paragraph{Demand shocks.} We find that our model is less sensitive with respect to changing final demand shocks within plausible ranges. Here, we compare the baseline model results with alternative specifications of consumption and ``other final demand'' (investment, export) shocks. Instead of modifying the \cite{CBO2006} consumption shocks as discussed in Section \ref{sec:consumptiondemandshockscenarios}, we now also use their raw estimates. We further consider shocks of 10\% to investment and exports, which is milder than the 15\% shocks considered in the main text. Figure~\ref{fig:suppdemsens}(b) indicates that differences are less pronounced between the alternative demand shocks than between the supply shock scenarios, particularly during the early phase of the lockdown. Differences of several percentage points only emerge after an extended recovery period. \begin{figure}[H] \includegraphics[trim = {0cm 0cm 0cm 0cm}, clip, width = .95\textwidth]{fig/DEMSUP_shock_sens.pdf} \caption{ \textbf{Sensitivity analysis with respect to supply and demand shocks.} Shocks are applied at day one. Except for supply shocks (a) and demand shocks (b), the model is initialized as in the baseline run of the main text. Legend (b): Raw and Main indicate the estimates from \cite{CBO2006} and their adapted version used in the main text, respectively. 15\% and 10\% refer to the two investment/export shock scenarios. } \label{fig:suppdemsens} \end{figure} \paragraph{Production function.} In Figures~\ref{fig:prodfsens}(a) and (b) we show simulation results for alternative production function specifications, using the baseline $S_4$ and the more severe $S_5$ supply shock scenarios, respectively. Regardless of the supply shock scenario considered, Leontief production yields substantially more pessimistic predictions than the other production functions. For the milder baseline supply shock scenario, aggregate predictions are fairly similar across the other production functions, although we observe some differences after an extended period of simulation. Differences between the linear and IHS production functions are larger when considering the more severe shock scenario in Figure~\ref{fig:prodfsens}(b). \begin{figure}[H] \centering \includegraphics[trim = {0cm 0cm 0cm 0cm}, clip, width = .95\textwidth]{fig/PRODF_shock_sens.pdf} \caption{ \textbf{Sensitivity analysis with respect to production functions.} Shocks are applied at day one. (a) is based on the $S_4$ and (b) on the $S_5$ supply shock scenario. Otherwise, the model is initialized as in the baseline run of the main text.
} \label{fig:prodfsens} \end{figure} In Figure~\ref{fig:sensitivity_params} we explore model simulations with respect to different parametrizations of the inventory adjustment parameter $\tau$ (left column), the hiring/firing parameters $\gamma_H$ and $\gamma_F$ (center left column), the change in savings rate parameter $\Delta s$ (center right column) and the consumption adjustment speed $\rho$ (right column) under alternative production function-supply shock combinations (panel rows). \paragraph{Inventory adjustment time $\tau$.} We find that the inventory adjustment time becomes a key parameter under Leontief production and much less so under the alternative IHS3 specification. The shorter the inventory adjustment time (smaller $\tau$), the more shocks are mitigated. \paragraph{Hiring/firing speeds $\gamma_H$ and $\gamma_F$.} Contrary to the inventory adjustment time $\tau$, model results do not change much with respect to alternative specifications of $\gamma_H$ when going from Leontief production to the IHS3 function. Here, differences in model outcomes are rather due to shock severity. Model predictions are almost identical for all choices of $\gamma_H$ under mild $S_1$ shocks but differ somewhat for more substantial $S_4$ shocks. In the $S_4$ scenarios we observe that recovery is quickest for larger values of $\gamma_H$ (stiff labor markets), since firms would lose less productive capacity in the immediate aftermath of the shocks. However, this would come at the expense of reduced profits, since firms are able to reduce labor costs more effectively for smaller values of $\gamma_H$. \paragraph{Change in saving rate $\Delta s$.} We find only very little variation with respect to the whole range of $\Delta s$, regardless of the underlying production function and supply shock scenario. As expected, we find the largest adverse economic impacts in case of $\Delta s = 1$, i.e. when consumers save all the extra money which they would have spent had there been no lockdown, and the mildest impacts if $\Delta s = 0$. \paragraph{Consumption adjustment speed $\rho$.} Similarly, the model is not very sensitive with respect to the consumption adjustment speed $\rho$. The smaller $\rho$, the quicker households adjust consumption with respect to (permanent) income shocks. Note that parameter $\rho$ is based on daily time scales. Economic impacts tend to be less adverse when consumers aim to maintain their original consumption levels (large $\rho$) and more adverse if income shocks are more relevant for consumption (small $\rho$). Overall, the effects are small, in particular for the $S_1$ shock scenarios. \begin{figure}[H] \includegraphics[trim = {0cm 0cm 0cm 0cm}, clip, width = 1\textwidth]{fig/sensitivity_compare.pdf} \caption{ \textbf{Results of sensitivity analysis for parameters $\tau$, $\gamma_H$, $\Delta s$ and $\rho$.} All plots show normalized gross output on the y-axis and days after the shocks are applied on the x-axis. Panel rows differ with respect to the combination of production function and supply shock scenario. Panel columns differ with respect to $\tau$ (left), $\gamma_H$ (center left), $\Delta s$ (center right) and $\rho$ (right) as indicated in the legend below each column. Except for the four parameters, production function and supply shock scenario, the model is initialized as in the baseline run of the main text. In all simulations we used $\gamma_F = 2 \gamma_H$.
} \label{fig:sensitivity_params} \end{figure} \section{Details on validation} \label{apx:validation} In this appendix we provide further details about validation (Section~\ref{sec:econimpact} in the main paper). In Appendix~\ref{apx:validation_data} we describe the data sources that we used for validation, and we explain how we made empirical data comparable to simulated data. In Appendix \ref{apx:selectedscenario} we give more details about the selected scenario (Section~\ref{sec:selectedscenarioanalysis} in the main paper). \subsection{Validation data} \label{apx:validation_data} \begin{itemize} \item Index of agriculture (release: 12/08/2020): \url{https://www.ons.gov.uk/generator?format=xls&uri=/economy/grossdomesticproductgdp/timeseries/ecy3/mgdp/previous/v26} \item Index of production (release: 12/08/2020): \url{https://www.ons.gov.uk/file?uri=/economy/economicoutputandproductivity/output/datasets/indexofproduction/current/previous/v59/diop.xlsx} \item Index of construction (release: 12/08/2020): \url{https://www.ons.gov.uk/file?uri=/businessindustryandtrade/constructionindustry/datasets/outputintheconstructionindustry/current/previous/v67/bulletindataset2.xlsx} \item Index of services (release: 12/08/2020): \url{https://www.ons.gov.uk/file?uri=/economy/economicoutputandproductivity/output/datasets/indexofservices/current/previous/v61/ios1.xlsx} \end{itemize} All ONS indexes are monthly seasonally-adjusted chained volume measures, normalized so that the index averaged over all months in 2016 equals 100. Although these indexes are used to proxy value added in UK national accounts, they are actually gross output measures, as determining input use is too burdensome for monthly indexes. There is not a perfect correspondence between industry aggregates as considered by the ONS and in WIOD. For example, the ONS only releases data for the agricultural sector as a whole, without distinguishing between crop and animal production (A01), forestry (A02) and fishing and aquaculture (A03). In this case, when comparing simulated and empirical data we aggregate data from the simulations, using initial output shares as weights. More commonly, there is a finer disaggregation in ONS data than in WIOD. For example, the ONS provides separate information on food manufacturing (C10) and on beverage and tobacco manufacturing (C11, C12), while these three sectors are aggregated into just one sector (C10\_C12) in WIOD. In this case, we aggregate empirical data using the weights provided in the indexes of production and services. These weights correspond to output shares in 2016, the base year for all time series. Finally, after performing aggregation we rebase all time series so that output in February 2020 takes value 100. \subsection{Description of the selected scenario} \label{apx:selectedscenario} Figure \ref{fig:new_model_april_may_june_facets} shows the recovery path of all industries, both in the model and in the data. Interpretation is the same as in Figure \ref{fig:new_model_april_may_june_all} in the main text. For readability, industries are grouped into six broad categories. \begin{figure}[H] \centering \includegraphics[width = 1.0\textwidth]{fig/new_model_april_june_facets.pdf} \caption{\textbf{Comparison between model predictions and empirical data.} We plot production (gross output) for each of 53 industries, both as predicted by our model and as obtained from the ONS' indexes. Each panel refers to a different group of industries. Different colors refer to production in April and June 2020.
Black lines connect the same industry across these two months. All sectoral productions are normalized to their pre-lockdown levels, and each point size is proportional to the steady-state gross output of the corresponding sector.} \label{fig:new_model_april_may_june_facets} \end{figure} \section{Introduction} \label{sec:intro} The social distancing measures imposed to combat the Covid-19 pandemic created severe disruptions to economic output, causing shocks that were highly industry-specific. Some industries were shut down almost entirely by lack of demand, labor shortages restricted others, and many were largely unaffected. Meanwhile, feedback effects amplified the initial shocks. The lack of demand for final goods such as restaurants or transportation propagated upstream, reducing demand for the intermediate goods that supply these industries. Supply constraints due to a lack of labor under social distancing propagated downstream, creating input scarcity that sometimes limited production even in cases where the availability of labor and demand would not have been an issue. The resulting supply and demand constraints interacted to create bottlenecks in production, which in turn led to unemployment, eventually decreasing consumption and causing additional amplification of shocks that further decreased final demand. In this paper we develop a model that respects the three main features that made the Covid-19 episode exceptional: (1) The shocks were highly heterogeneous across industries, making it necessary to model the economy at the sectoral level, taking sectoral inter-dependencies into account; (2) the shocks affected both supply and demand simultaneously, making it necessary to consider both upstream and downstream propagation; (3) the shocks were so strong and were imposed and relaxed so quickly that the economy never had time to converge to a new steady state, making dynamic models better suited than static models. In the first version of this work, results were released online on 21 May, not long after social distancing measures first began to take effect in March \citep{pichler2020production}. Based on data from 2019 and predictions of the shocks by \citet{del2020supply}, we predicted a 21.5\% contraction of GDP in the UK economy in the second quarter of 2020 with respect to the last quarter of 2019. This forecast was remarkably close to the actual contraction of 22.1\% estimated by the UK Office for National Statistics. (The median forecast by several institutions and financial firms was 16.6\% and the forecast by the Bank of England was 30\%).\footnote{\label{footn:pred} Sources: ONS estimate: \url{https://www.ons.gov.uk/economy/grossdomesticproductgdp/bulletins/gdpfirstquarterlyestimateuk/apriltojune2020}; median forecast of institutions and financial firms (in May): \url{https://www.gov.uk/government/statistics/forecasts-for-the-uk-economy-may-2020}; forecast by the Bank of England (in May): \url{https://www.bankofengland.co.uk/-/media/boe/files/monetary-policy-report/2020/may/monetary-policy-report-may-2020}. } In this substantially revised paper we present our model and describe its results, but we also take advantage of the benefits of hindsight to perform a ``post-mortem''. This allows us to better understand what worked well, what did not work well, and why. To what extent did we succeed by getting things right vs. just getting lucky?
We systematically investigate the sensitivity of the results under different specifications, including variations in the shock scenario, production function, and key parameters of the model. We examine the performance at both the aggregate and sectoral levels. We also look at the ability of the model to reproduce the time series patterns of the fall and recovery of sectoral output. Surprisingly, we find that the original specification used for out-of-sample forecasting performs about as well at the aggregate level as any of those developed with hindsight; with a minor adjustment its results are also among the best at the sectoral level. We show how getting good aggregate results depends on making the right trade-off between the severity of the shocks and the rigidity of the production function, as well as other key factors such as the right level of inventories. This provides valuable lessons about modeling the economic effects of disasters such as the Covid-19 pandemic. Our model is inspired by previous work on the economic response to natural disasters \citep{hallegatte2008adaptive, henriet2012firm, inoue2019firm}. As in these models, industry demand and production decisions are based on simple rules of thumb, rather than resulting from optimization in a dynamic general equilibrium setup. We think that the Covid-19 shock was so sudden and unexpected that agent expectations had little time to converge to an equilibrium over the short time period that we consider \citep{evans2012learning}. Our work here is also one of several studies using sectoral models with input-output linkages that appeared in the first few months of the pandemic \citep{barrot2020sectoral,mandel2020economic,fadinger2020effects,bonadio2020global,baqaee2020nonlinear,guan2020global}. Our paper belongs to this effort, but differs in a number of important ways. The most important conceptual difference is our treatment of the production function. The most common production functions can be ordered by the degree to which they allow substitutions between inputs. At one extreme, the Leontief production function assumes a fixed recipe for production, allowing no substitutions and restricting production based on the limiting input \citep{inoue2020propagation}. Under the Leontief production function, if a single input is reduced, overall production will be reduced proportionately, even if that input is ordinarily relatively small. This can lead to unrealistic behaviours. For example, the steel industry has restaurants as an input, presumably because steel companies have a workplace canteen and sometimes entertain their clients and employees. A literal application of the Leontief production function predicts that a sharp drop in the output of the restaurant industry will dramatically reduce steel output. This is unrealistic, particularly in the short run. The alternatives used in the literature are the Cobb-Douglas production function \citep{fadinger2020effects}, which has an elasticity of substitution of 1, and the CES production function, where typically calibration for short term analysis uses an elasticity of substitution less than 1 \citep{barrot2020sectoral,mandel2020economic,bonadio2020global}. Some papers \citep{baqaee2020nonlinear} consider a nested CES production function, which can accommodate a wide range of technologies. In principle, this can be used to allow substitution between some inputs and forbid it between others in an industry-specific manner. 
However, it is hard to calibrate all the elasticities, so that in practice many models end up using only a limited nesting structure or assuming uniform substitutability. Consider again our example of the steel industry: Under the calibrations of the CES production function that are typically used, firms can substitute energy or even restaurants for iron, while still producing the same output. For situations like this, where the production process requires a fixed technological recipe, this is obviously unrealistic. To solve this problem we introduce a new production function that distinguishes between critical and non-critical inputs at the level of the 55 industries in the World Input-Output tables. This production function allows firms to keep producing as long as they have the inputs that are absolutely necessary, which we call {\it critical inputs}. The steel industry cannot produce steel without the critical inputs iron and energy, but it can operate for a considerable period of time without non-critical inputs such as restaurants or management consultants. We apply the Leontief function only to the critical inputs, ignoring the others. Thus we make the assumption that during the pandemic the steel industry requires iron and energy in the usual fixed proportions, but the output of the restaurant or management consultancy industries is irrelevant. Of course restaurants and management consultants are useful to the steel industry in normal times -- otherwise they wouldn't use them. But during the short time-scale of the pandemic, we believe that neglecting them provides a better approximation of economic behavior than either a Leontief or a CES production function with uniform elasticity of substitution. In the appendix, we show that our production function is very close to a limiting case of an appropriately constructed nested CES, which we could have used in principle, but is less well-suited to our calibration procedure. To determine which inputs are critical and which are not, we use a survey performed by IHS Markit at our request. This survey asked ``Can production continue in industry X if input Y is not available for two months?''. The list of possible industries X and Y was drawn from the 55 industries in the World Input-Output Database. This question was presented to 30 different industry analysts who were experts on industry X. Each of them was asked to rate the importance of each of its inputs Y. They assigned a score of 1 if they believed input Y is critical, 0 if it is not critical, and 0.5 if it is in-between, with the possibility of a rating of NA if they could not make a judgement. We then apply the Leontief function to the list of critical inputs, ignoring non-critical inputs. We experimented with several possible treatments for industries with ratings of 0.5 and found that we get somewhat better empirical results by treating them as half-critical (though at present we do not have sufficient evidence to resolve this question unambiguously). Besides the bespoke production function discussed above, we also introduce a Covid-19-specific treatment of consumption. Most models do not incorporate the demand shocks that are caused by changes in consumer preferences aimed at minimizing the risk of infection. The vast majority of the literature has focused on the ability to work from home, and some studies incorporate lists of essential vs. inessential industries, but almost no papers have also explicitly added shocks to consumer preferences.
(\cite{baqaee2020nonlinear} is an exception, but the treatment is only theoretical). Here we use the estimates from \citet{del2020supply}, which are taken from a prospective study by the \citet{CBO2006}. These estimates are crude, but we are not aware of estimates that are any better. The currently available data on actual consumption is qualitatively consistent with the shocks predicted by the CBO, with massive shocks to the hospitality industry, travel and recreation, and milder (but large) shocks elsewhere \citep{andersen2020consumer,carvalho2020tracking,chen2020impact,surico2020consumption}. The largest mismatch between the CBO estimates and consumption data is in the healthcare sector, whose consumption decreased during the pandemic, in contrast to the increase estimated by the CBO. There was also an increase in some specific retail categories (groceries) which the CBO estimates did not consider. However, overall, as \citet{del2020supply} argue, the estimates remain qualitatively accurate. Besides the initial shock, we also attempt to introduce realistic dynamics for recovery and for savings. The shocks to on-site consumption industries are longer lasting, and savings from the lack of consumption of specific goods and services during lockdown are only partially reallocated to other expenses. Our key finding is that there is a trade-off between the severity of the shocks and the rigidity of the production function. At one extreme, severe shocks with a Leontief production function lead to an almost complete collapse of the economy, and at the other, small shocks with a linear production function fail to reproduce the massive recession observed in the data. There is a sweet spot in the middle, with empirically good results based on various mixes of production function rigidity and shock severity. We also find that our model's performance improved after we used better data for inventories. Our selected scenario predicts the aggregate UK recession in Q2 2020 about as well as our initial forecast. It also correctly predicts a stronger reduction in private consumption, investment and profits than in government consumption, inventories and wages. At the sectoral level, the mean absolute error is around 12 percentage points, slightly lower than in our initial forecast (14 p.p.). The correlation between sectoral reductions in the model and in the data is generally high (the Pearson correlation coefficient weighted by industry size is 0.75). However, this masks substantial differences across industries: while our model does a good job of predicting sectoral outcomes in most industries, it fails for others such as vehicle manufacturing and air transport. We conjecture that this is due to idiosyncratic features of these industries that our model could not capture. Conversely, we provide examples where our model predicts sectoral outcomes correctly when these depend on inter-industry relationships, both statically and dynamically. We conclude by investigating some theoretical properties of the model using a simpler setting. We ask whether popular metrics of centrality, such as upstreamness and output multipliers, are useful for understanding the diffusion of supply and demand shocks in our model. We find that static measures are only partially able to explain modeling results. Nevertheless, an industry's upstreamness is a strong indicator of its potential to amplify shocks: this is true for supply shocks, as expected, and for demand shocks, which is somewhat more surprising.
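For concreteness, both metrics can be computed in a few lines from an input-output table. The minimal sketch below (purely illustrative, using a toy three-industry table and hypothetical variable names; it is not part of our model code) applies the standard definitions: the output multiplier of an industry is the corresponding column sum of the Leontief inverse $(I-A)^{-1}$, and upstreamness is the corresponding row sum of $(I-\Delta)^{-1}$, where the allocation matrix $\Delta$ divides each intermediate flow by the supplier's gross output.
\begin{verbatim}
import numpy as np

# Toy three-industry table (illustrative numbers):
# Z[j, i] = intermediate sales from supplier j to buyer i,
# x[i]    = gross output of industry i.
Z = np.array([[10.0,  4.0,  2.0],
              [ 5.0, 20.0,  6.0],
              [ 1.0,  3.0, 15.0]])
x = np.array([40.0, 60.0, 50.0])

# Technical coefficients A[j, i] = Z[j, i] / x[i] (input j per unit of i's output).
A = Z / x
leontief_inverse = np.linalg.inv(np.eye(3) - A)
# Column sums: total gross output triggered per unit of final demand.
output_multiplier = leontief_inverse.sum(axis=0)

# Allocation coefficients Delta[j, i] = Z[j, i] / x[j].
Delta = Z / x[:, None]
# Row sums of the Ghosh-type inverse: a measure of average distance to final use.
upstreamness = np.linalg.inv(np.eye(3) - Delta).sum(axis=1)
\end{verbatim}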
The paper is organized as follows. The details of the model are presented in Section~\ref{sec:model}. We discuss the shocks induced by the pandemic which we use to initialize the model in Section~\ref{sec:pandemic_shock}. We show our model predictions for the UK economy in Section~\ref{sec:econimpact} and discuss production network effects and re-opening single industries in Section~\ref{sec:nw_effects}. We conclude in Section~\ref{sec:discuss}. \section{A dynamic input-output model} \label{sec:model} Our model combines elements of the input-output models developed by \cite{battiston2007credit, hallegatte2008adaptive, henriet2012firm} and \cite{inoue2019firm}, together with new features that make the model more realistic in the context of a pandemic-induced lockdown. The main data input to our analysis is the UK input-output network obtained from the latest year (2014) of the World Input-Output Database (WIOD) \citep{timmer2015illustrated}, allowing us to distinguish 55 sectors. \subsection{Timeline} A time step $t$ in our economy corresponds to one day. There are $N$ industries\footnote{ See Appendix~\ref{apx:notation} for a comprehensive summary of notations used. }, one representative firm for each industry, and one representative household. The economy initially rests in a steady-state until it experiences exogenous lockdown shocks. These shocks can affect the supply side (labor compensation, productive capacity) and the demand side (preferences, aggregate spending/saving) of the economy. Every day: \begin{enumerate} \item Firms hire or fire workers depending on whether their workforce was insufficient or redundant to carry out production in the previous day. \item The representative household decides its consumption demand and industries place orders for intermediate goods. \item Industries produce as much as they can to satisfy demand, given that they could be limited by lack of critical inputs or lack of workers. \item If industries do not produce enough, they distribute their production to final consumers and to other industries on a pro rata basis, that is, proportionally to demand. \item Industries update their inventory levels, and labor compensation is distributed to workers. \end{enumerate} \subsection{Model description} \label{sec:modeldescription} It will become important to distinguish between demand, that is, orders placed by customers to suppliers, and actual realized transactions, which might be lower. \subsubsection{Demand} \label{sec:demand} \paragraph{Total demand.} The total demand faced by industry $i$ at time $t$, $d_{i,t}$, is the sum of the demand from all its customers, \begin{equation} d_{i,t} = \sum_{j=1}^N O_{ij,t} + c^d_{i,t} + f^d_{i,t}, \end{equation} where $O_{ij,t}$ (for \emph{orders}) denotes the intermediate demand from industry $j$ to industry $i$, $c_{i,t}^d$ represents (final) demand from households and $f_{i,t}^d$ denotes all other final demand (e.g. government or non-domestic customers). \paragraph{Intermediate demand.} Intermediate demand follows a dynamics similar to the one studied in \cite{henriet2012firm}, \cite{hallegatte2014modeling}, and \cite{inoue2019firm}. Specifically, the demand from industry $i$ to industry $j$ is \begin{equation} \label{eq:order_interm} O_{ji,t} = A_{ji} d_{i,t-1} + \frac{1}{\tau} [ n_i Z_{ji,0} - S_{ji,t} ]. \end{equation} Intermediate demand thus is the sum of two components. First, to satisfy incoming demand (from $t-1$), industry $i$ demands an amount $A_{ji} d_{i,t-1}$ from $j$. 
Therefore, industries order intermediate inputs in fixed proportions of total demand, with the proportions encoded in the technical coefficient matrix $A$, i.e. $A_{ji} = Z_{ji,0}/x_{i,0}$ where $Z_{ji,0}$ is realized intermediate consumption and $x_{i,0}$ is total output of industry $i$. Before the shocks, both of these variables are considered to be in the pre-pandemic steady state. While we will consider several scenarios where industries do not strictly rely on fixed recipes in production, demand always depends on the technical coefficient matrix. The second term in Eq.~\eqref{eq:order_interm} describes intermediate demand induced by desired reduction of inventory gaps. Due to the dynamic nature of the model, demanded inputs cannot be used immediately for production. Instead industries use an inventory of inputs in production. $S_{ji,t}$ denotes the stock of input $j$ held in $i$'s inventory. Each industry $i$ aims to keep a target inventory $n_i Z_{ji,0}$ of every required input $j$ to ensure production for $n_i$ further days\footnote{ Considering an input-specific target inventory would require generalizing $n_i$ to a matrix with elements $n_{ji}$, which is easy in our computational framework but difficult to calibrate empirically. }. The parameter $\tau$ indicates how quickly an industry adjusts its demand due to an inventory gap. Small $\tau$ corresponds to responsive industries that aim to close inventory gaps quickly. In contrast, if $\tau$ is large, intermediate demand adjusts slowly in response to inventory gaps. \paragraph{Consumption demand.} We let consumption demand for good $i$ be \begin{equation}\label{eq:cd} c^d_{i,t}= \theta_{i,t} \Tilde{c}^d_t, \end{equation} where $\theta_{i,t}$ is a preference coefficient, giving the share of goods from industry $i$ out of total consumption demand $\Tilde{c}^d_t$. The coefficients $\theta_{i,t}$ evolve exogenously, following assumptions on how consumer preferences change due to exogenous shocks (in our case, differential risk of infection across industries, see Section \ref{sec:pandemic_shock}). Total consumption demand evolves following an adapted and simplified version of the consumption function in \cite{muellbauer2020}. In particular, $\Tilde{c}^d_t$ evolves according to \begin{equation}\label{eq:consdemand} \Tilde{c}^d_t= \left(1-\Tilde{\epsilon}^D_t\right) \exp\left\{ \rho \log \Tilde{c}^d_{t-1} + \frac{1-\rho}{2} \log\left( m \Tilde{l}_t \right) + \frac{1-\rho}{2} \log \left( m \Tilde{l}_t^p \right) \right\}. \end{equation} In the equation above, the factor $\left(1-\Tilde{\epsilon}^D_t\right)$ accounts for direct aggregate shocks and will be explained in Section \ref{sec:pandemic_shock}. The second factor accounts for the endogenous consumption response to the state of the labor market and future income prospects. In particular, $\Tilde{l}_t$ is current labor income, $\Tilde{l}_t^p$ is an estimation of permanent income (see Section \ref{sec:pandemic_shock}), and $m$ is the share of labor income that is used to consume final domestic goods, i.e. that is neither saved nor used for consumption of imported goods. In the pre-pandemic steady state with no aggregate exogenous shocks, $\Tilde{\epsilon}^D_t=0$ and by definition permanent income corresponds to current income, i.e. $\Tilde{l}_t^p=\Tilde{l}_t$. In this case, total consumption demand corresponds to $m \Tilde{l}_t $.\footnote{ To see this, note that in the steady state $\Tilde{c}^d_t=\Tilde{c}^d_{t-1}$. 
Taking logs on both sides, moving the consumption terms to the left-hand side and dividing by $1-\rho$ throughout yields $\log \Tilde{c}^d_t=\log\left( m \Tilde{l}_t \right)$. } \paragraph{Other components of final demand.} In addition, an industry $i$ also faces demand $f_{i,t}^d$ from sources that we do not model as endogenous variables in our framework, such as government or industries in foreign countries. We discuss the composition and calibration of $f_{i,t}^d$ in detail in Section \ref{sec:pandemic_shock}. \subsubsection{Supply} \label{sec:supply} Every industry aims to satisfy incoming demand by producing the required amount of output. Production is subject to the following two economic constraints: \paragraph{Productive capacity.} First, an industry has finite production capacity $x_{i,t}^\text{cap}$, which depends on the amount of available labor input. Initially every industry employs $l_{i,0}$ of labor and produces at full capacity $x_{i,0}^{\text{cap}} = x_{i,0}$. We assume that productive capacity depends linearly on labor inputs, \begin{equation} \label{eq:xcap} x_{i,t}^{\text{cap}} = \frac{l_{i,t}}{l_{i,0}}x_{i,0}^{\text{cap}}. \end{equation} \paragraph{Input bottlenecks.} Second, the production of an industry might be constrained due to an insufficient supply of critical inputs. This can be caused by production network disruptions. Intermediate input-based production capacities depend on the availability of inputs in an industry's inventory and its production technology, i.e. \begin{equation} \label{eq:xinp_general} x_{i,t}^{\text{inp}} = \text{function}_i( S_{ji,t}, A_{ji} ). \end{equation} We consider five different specifications for how input shortages impact production, ranging from a Leontief form, where inputs need to be used in fixed proportions, to a linear form, where inputs can be substituted arbitrarily. As intermediate cases we consider specifications with industry-specific dependencies of inputs. For this purpose, IHS Markit analysts rated at our request whether a given input is \emph{critical}, \emph{important} or \emph{non-critical} for the production of a given industry (see Appendix \ref{apx:ihs} for details). We then make different assumptions on how the criticality ratings of inputs affect the production of an industry. We now introduce the five specifications in order of stringency with respect to inputs. \noindent \emph{(1) Leontief:} As a first case, we consider the Leontief production function, in which every positive entry in the technical coefficient matrix $A$ is a binding input to an industry. This is the most rigid case we are considering, leading to the functional form \begin{equation} \label{eq:xinp_leo} x_{i,t}^{\text{inp}} = \min_{ \{ j: \; A_{ji} > 0 \} } \left \{ \; \frac{ S_{ji,t} }{ A_{ji} } \right \}. \end{equation} In this case, an industry would halt production immediately if inventories of any input are run down, even for small and potentially negligible inputs. \noindent \emph{(2) IHS1:} As the most stringent case based on the IHS Markit ratings, we assume that production is constrained by critical and important inputs, which need to be used in fixed proportions. In contrast to the Leontief case, however, production is not constrained by the lack of non-critical inputs.
Thus, an industry's production capacity with respect to inputs is \begin{equation} \label{eq:xinp_ihs1} x_{i,t}^{\text{inp}} = \min_{j \in \{ \mathcal{V}_i \; \cup \; \mathcal{U}_i \} } \left \{ \; \frac{ S_{ji,t} }{ A_{ji} } \right \}, \end{equation} where $\mathcal{V}_i$ is the set of \textit{critical} inputs and $\mathcal{U}_i$ is the set of \textit{important} inputs to industry $i$. \noindent \emph{(3) IHS2:} As a second case using the input ratings, we leave the assumptions regarding \emph{critical} and \emph{non-critical} inputs unchanged but assume that the lack of an \emph{important} input reduces an industry's production by half. We implement this production scenario as \begin{equation} \label{eq:xinp_ihs2} x_{i,t}^{\text{inp}} = \min_{ \{ j \in \mathcal{V}_i, \; k \in \mathcal{U}_i \} } \left \{ \; \frac{ S_{ji,t} }{ A_{ji} }, \frac{1}{2} \left(\frac{ S_{ki,t} }{ A_{ki} } + x_{i,0}^\text{cap}\right) \right \}. \end{equation} This means that if an \textit{important} input goes down by 50\% compared to initial levels, production of the industry would decrease by 25\%. When the stock of this input is fully depleted, production drops to 50\% of initial levels. \noindent \emph{(4) IHS3:} Next we treat all \emph{important} inputs as \emph{non-critical}, such that only \emph{critical} suppliers can create input bottlenecks. This reduces the input bottleneck equation, Eq.~\eqref{eq:xinp_general}, to \begin{equation} \label{eq:xinp_ihs3} x_{i,t}^{\text{inp}} = \min_{j \in \mathcal{V}_i } \left \{ \; \frac{ S_{ji,t} }{ A_{ji} } \right \}. \end{equation} \noindent \emph{(5) Linear:} Finally, we also implement a linear production function for which all inputs are perfectly substitutable. Here, production in an industry can continue even when inputs cannot be provided, as long as there is sufficient supply of alternative inputs. In this case we have \begin{equation} \label{eq:xinp_linear} x_{i,t}^{\text{inp}} = \frac{ \sum_j S_{ji,t} }{ \sum_j A_{ji} } . \end{equation} Note that while production is linear with respect to intermediate inputs, the lack of labor supply cannot be compensated by other inputs. We assume for any production function that imports never cause bottlenecks. Thus, imports are treated as non-critical inputs or, equivalently, there are no shortages of foreign intermediate goods. Input bottlenecks are most likely to arise under the Leontief assumption, and least likely under the linear production function. The IHS production functions assume intermediate levels of input specificity. In Appendix \ref{apx:ces_prodfun} we show that the IHS production functions are almost equivalent to (suitably parameterised) CES production functions and that results are even identical when using these CES specifications instead of the IHS production functions. \paragraph{Output level choice and input usage.} Since an industry aims to satisfy incoming demand within its production constraints, realized production at time step $t$ is \begin{equation} \label{eq:x_act} x_{i,t} = \min \{ x_{i,t}^{\text{cap}}, x_{i,t}^{\text{inp}}, d_{i,t} \}. \end{equation} Thus, the output level of an industry is constrained by the smallest of three values: labor-constrained production capacity $x_{i,t}^{\text{cap}}$, intermediate input-constrained production capacity $x_{i,t}^{\text{inp}}$, or total demand $d_{i,t}$. The output level $x_{i,t}$ determines the quantity used of each input according to the production recipe.
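To make the interplay of these constraints concrete, the following minimal sketch (ours, purely illustrative, with hypothetical variable names; it is not the code used for the simulations) computes the input-constrained capacity of a single industry under each specification, Eqs.~\eqref{eq:xinp_leo}--\eqref{eq:xinp_linear}, and the realized output of Eq.~\eqref{eq:x_act}.
\begin{verbatim}
import numpy as np

def input_constrained_capacity(S, A, spec, crit, imp, x_cap0):
    # S, A   : inventories S[j] and technical coefficients A[j] of one industry
    # spec   : 'leontief', 'ihs1', 'ihs2', 'ihs3' or 'linear'
    # crit   : boolean mask of critical inputs (assumed non-empty)
    # imp    : boolean mask of important inputs (assumed non-empty)
    # x_cap0 : initial production capacity of the industry
    pos = A > 0
    days = np.full_like(A, np.inf)
    days[pos] = S[pos] / A[pos]            # output each input stock allows
    if spec == 'leontief':                 # every positive input binds
        return days[pos].min()
    if spec == 'ihs1':                     # critical and important inputs bind
        return days[crit | imp].min()
    if spec == 'ihs2':                     # important inputs bind at half strength
        return min(days[crit].min(),
                   0.5 * (days[imp].min() + x_cap0))
    if spec == 'ihs3':                     # only critical inputs bind
        return days[crit].min()
    if spec == 'linear':                   # inputs fully substitutable
        return S.sum() / A.sum()

def realized_output(x_cap, x_inp, demand):
    # Realized production: the tightest of labor capacity, input capacity
    # and total demand.
    return min(x_cap, x_inp, demand)

# Example: three inputs; the first is critical, the second important,
# the third (with an empty stock) non-critical.
S    = np.array([5.0, 2.0, 0.0])
A    = np.array([0.5, 0.2, 0.1])
crit = np.array([True, False, False])
imp  = np.array([False, True, False])
x_inp = input_constrained_capacity(S, A, 'ihs2', crit, imp, x_cap0=12.0)
x_t   = realized_output(x_cap=12.0, x_inp=x_inp, demand=11.0)  # -> 10.0
\end{verbatim}
Under the Leontief specification the same inputs would halt production entirely (the depleted non-critical stock binds), illustrating the difference in stringency discussed above.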
Industry $i$ uses an amount $A_{ji}x_{i,t}$ of input $j$, unless $j$ is not critical and the amount of $j$ in $i$'s inventory is less than $A_{ji}x_{i,t}$. In this case, the quantity consumed of input $j$ by industry $i$ is equal to the remaining inventory stock of $j$-inputs $S_{ji,t} < A_{ji}x_{i,t}$. \paragraph{Rationing.} Without any adverse shocks, industries can always meet total demand, i.e. $x_{i,t} = d_{i,t}$. However, in the presence of production capacity and/or input bottlenecks, industries' output may be smaller than total demand (i.e., $x_{i,t} < d_{i,t}$), in which case industries ration their output across customers. We assume simple proportional rationing, although alternative rationing mechanisms could be considered \citep{pichler2021modeling}. The final delivery from industry $j$ to industry $i$ is a share of orders received \begin{equation} Z_{ji,t} = O_{ji,t} \frac{x_{j,t}}{d_{j,t}}. \end{equation} Households receive a share of their demand \begin{equation} c_{i,t} = c_{i,t}^d \frac{x_{i,t}}{d_{i,t}}, \end{equation} and the realized final consumption of agents with exogenous final demand is \begin{equation} f_{i,t} = f_{i,t}^d \frac{x_{i,t}}{d_{i,t}}. \end{equation} \paragraph{Inventory updating.} The inventory of $i$ for every input $j$ is updated according to \begin{equation} S_{ji,t+1} = \max \left\{ S_{ji,t} + Z_{ji,t} - A_{ji} x_{i,t},\; 0 \right\}. \end{equation} In a Leontief production function, where every input is critical, the $\max$ operator would not be needed since production could never continue once inventories are run down. It is necessary here, since industries can produce even after inventories of non-critical input $j$ are depleted and inventories cannot turn negative. \paragraph{Hiring and separations.} Firms adjust their labor force depending on which production constraints in Eq.~\eqref{eq:x_act} are binding. If the capacity constraint $x_{i,t}^{\text{cap}}$ is binding, industry $i$ decides to hire as many workers as necessary to make the capacity constraint no longer binding. Conversely, if either the input constraint $x_{i,t}^{\text{inp}}$ or the demand constraint $d_{i,t}$ is binding, industry $i$ lays off workers until capacity constraints become binding. More formally, at time $t$ labor demand by industry $i$ is given by $l^d_{i,t}=l_{i,t-1}+\Delta l_{i,t}$, with \begin{equation} \Delta l_{i,t} = \frac{l_{i,0}}{x_{i,0}}\left[ \min\{x_{i,t}^{\text{inp}},d_{i,t}\} - x_{i,t}^{\text{cap}}\right]. \end{equation} The term $l_{i,0}/x_{i,0}$ reflects the assumption that the labor share in production is constant over the considered period. We assume that it takes time for firms to adjust their labor inputs. Specifically, we assume that industries can increase their labor force only by a fraction $\gamma_{\text{H}}$ in the direction of their target. Similarly, industries can decrease their labor force only by a fraction $\gamma_{\text{F}}$ in the direction of their target. In the absence of strong labor market regulations, we usually have $\gamma_{\text{F}}>\gamma_{\text{H}}$, indicating that it is easier for firms to lay off employed workers than to hire new workers. Industry-specific employment then evolves according to \begin{equation} \label{eq:labor_evolution} l_{i,t} = \begin{cases} l_{i,t-1} + \gamma_{\text{H}} \Delta l_{i,t} &\mbox{if } \; \Delta l_{i,t} \ge 0, \\ l_{i,t-1} + \; \gamma_{\text{F}} \Delta l_{i,t} &\mbox{if } \Delta l_{i,t} < 0.
\end{cases} \end{equation} The parameters $\gamma_{\text{H}}$ and $\gamma_{\text{F}}$ can be interpreted as policy variables. For example, the implementation of a furloughing scheme makes re-hiring of employees easier, corresponding to an increase in $\gamma_{\text{H}}$. \section{Pandemic shock} \label{sec:pandemic_shock} Simulations of the model described in Section \ref{sec:modeldescription} start in the pre-pandemic steady state. While there is evidence that consumption started to decline prior to the lockdown \citep{surico2020consumption}, for simplicity we apply the pandemic shock all at once, at the date of the start of the lockdown (March $23^{rd}$ in the UK). The pandemic shock is a combination of supply and demand shocks that propagate downstream and upstream and get amplified through the supply chain. During the lockdown, workers who cannot work on-site and are unable to work from home become unproductive, resulting in lowered productive capacities of industries. At the same time, demand-side shocks hit as consumers adjust their consumption preferences to avoid getting infected, and reduce overall consumption out of precautionary motives due to the depressed state of the economy. For diagnostic purposes, in addition to the shocks that we used for our original predictions, we consider a few additional supply shock scenarios that are grounded in what happened in the UK specifically\footnote{ As described below, here we use estimates of the shocks that we develop a priori, based on empirically motivated assumptions on labor supply constraints and changes in preferences. We could have, in principle, used a traditional macro approach (see \citet{brinca2020measuring} in the case of Covid-19) where one can infer the shocks based on data. We did not pursue this here for several reasons. Our model does not feature price changes (which are crucial to separately identify supply and demand shocks), and we would have had to develop a method to infer the shocks based on the model's fit to the data, a problem that is unlikely to have a unique solution. Most importantly, inferring shocks this way is only useful in hindsight, whereas our goal was to make predictions. }. We then compare the outcomes under each scenario based on its aggregate and sectoral forecasts (see Section \ref{sec:scenarioselection}). In the following, we describe all the scenarios for supply and demand shocks that we consider. Further details and industry-level shock statistics are shown in Appendix \ref{apx:shocks}. \subsection{Supply shock scenarios} \label{sec:supplyshockscenarios} At every time step during the lockdown an industry $i$ experiences an (exogenous) first-order labor supply shock $\epsilon^S_{i,t} \in [0,1]$ that quantifies reductions in labor availability. Letting $l_{i,0}$ be the initial labor supply before the lockdown, the maximum amount of labor available to industry $i$ at time $t$ is given as \begin{equation} l_{i,t}^\text{max} = (1- \epsilon^S_{i,t}) l_{i,0}. \end{equation} If $\epsilon^S_{i,t} > 0$, the productive capacity of industry $i$ is smaller than in the initial state of the economy. We assume that the reduction of total output is proportional to the loss of labor. In that case the productive capacity of industry $i$ at time $t$ is \begin{equation} x_{i,t}^{\text{cap}} = \frac{l_{i,t}}{l_{i,0}} x_{i,0}^{\text{cap}} \le (1-\epsilon^S_{i,t}) x_{i,0}. \end{equation} Recall from Section \ref{sec:supply} that firms can hire and fire to adjust their productive capacity to demand and supply constraints.
Thus, productive capacity can be lower than the level implied by the initial supply shock, in case industry $i$ has idle workers who are not prevented from going to work by lockdown measures. In any case, during lockdown firms can never hire more than $l_{i,t}^\text{max}$ workers. When the lockdown is lifted for a specific industry $i$, first-order supply shocks are removed, i.e., we set $\epsilon^S_{i,t} = 0$, for $t\geq t_\text{end\_lockdown}$. For diagnostic purposes we consider six different scenarios for the supply shocks $\epsilon^S_{i,t}$, $\text{S}_\text{1}$ to $\text{S}_\text{6}$, ordered from lowest to highest severity of lockdown restrictions. Scenario $\text{S}_\text{5}$ is the one that was used to produce out-of-sample forecasts in our original paper. We follow \cite{del2020supply} and estimate the supply shocks in each scenario by calculating for each industry the number of workers who can work remotely, and considering the government regulations that dictate whether an industry is essential, i.e., whether an industry can operate during a lockdown even if working from home is not possible. For instance, if an industry is non-essential, and none of its employees can work from home, it faces a labor supply reduction of 100\% during lockdown, i.e., $\epsilon^S_{i,t}=1, \ \forall t \in [t_\text{start\_lockdown}, t_\text{end\_lockdown})$. Instead, if an industry is classified as fully essential, it faces no labor supply shock and $\epsilon^S_{i,t}=0 \ \forall t$. In some scenarios, we further refine the supply shock estimates by taking into account the difficulty of adjusting to social distancing measures. To do this refinement, we use the Physical Proximity work context provided by O*NET, as others have done \citep{mongey2020workers,koren2020business}. To estimate the shocks we define a \emph{Remote Labor Index}, an \emph{Essential Score} and a \emph{Physical Proximity Index} at the WIOD industry level. We interpret these indices as follows. The Remote Labor Index of industry $i$ is the probability that a worker from industry $i$ can work from home. The Essential Score is the probability that a worker from industry $i$ has an essential job. The Physical Proximity Index is the probability that an essential worker who cannot work from home cannot go to work due to social distancing measures. In Appendix \ref{apx:shocks} we explain how, similarly to \cite{del2020supply,dingel2020,gottlieb2020working,koren2020business}, we use O*NET data to estimate our Remote Labor Index and Physical Proximity Index. Below we explain each scenario in more detail, including how we determine the essential score in each case. Figure \ref{fig:supplyscenarios} in Appendix \ref{apx:shocks} shows time series of supply shocks $\epsilon^S_{i,t}$ for all scenarios throughout our simulations, while Table \ref{tab:FO_shocks_supply} shows the cross-section of shocks. \paragraph{$\text{S}_\text{1}$: UK policy.} In contrast to some European countries such as Italy and Spain, in the UK shutdown orders were only issued for a few industries\footnote{\label{footn:ukleg} https://www.legislation.gov.uk/uksi/2020/350/pdfs/uksi\_20200350\_en.pdf }. Although social distancing guidelines were imposed for all industries (see $\text{S}_\text{2}-\text{S}_\text{4}$ below), strictly speaking only non-essential retail, personal and recreational services, and the restaurant and hospitality industries were mandated to shut down.
While some European countries shut down some manufacturing sectors and construction, the UK did not explicitly forbid these sectors from operating. Based on the UK regulations, we assume that all WIOD industries have an essential score of one with the exception of industries G45 and G47 (vehicle and general retail), I (hotels and restaurants) and R\_S (recreational and personal services). We break down these industries into smaller subcategories that we can directly match to shutdown orders in the UK, and compute an essential score from a weighted average of these subcategories, where weights correspond to output shares (see Appendix \ref{apx:shocks} for more details). The resulting essential scores are 0.64 for G45, 0.71 for G47, 0.05 for I and 0.07 for R\_S. Similarly to \cite{del2020supply}, we assume an industry's supply shock is given by the fraction of workers that cannot work. If we interpret the Remote Labor Index and the essential score as independent probabilities, the expected value of the fraction of workers of industry $i$ that cannot work is $$\epsilon^S_{i,t} = (1 - \text{RLI}_i)(1 - \text{ESS}_i) \quad \forall t \in [t_\text{start\_lockdown}, t_\text{end\_lockdown}),$$ where $\text{RLI}_i$ and $\text{ESS}_i$ are the Remote Labor Index and Essential Score of industry $i$ respectively. We lift labor supply shocks to trade industries on June $15^\text{th}$ and shocks to other sectors in July, as per official guidance. That is, $ \epsilon^S_{i,t} = 0, \quad \forall t \in [t_\text{end\_lockdown}, \infty).$ \paragraph{$\text{S}_\text{2}$, $\text{S}_\text{3}$, $\text{S}_\text{4}$: UK policy + difficulty to adapt to social distancing.} In monthly communications on the impact of Covid-19 on the UK economy\footnote{ \label{ft1}See, e.g., \url{https://www.ons.gov.uk/economy/grossdomesticproductgdp/articles/coronavirusandtheimpactonoutputintheukeconomy/may2020} }, the ONS reported that several manufacturing industries and the construction sector were struggling to comply with the requirements on social distancing imposed by the government. Thus, further to the legal constraints considered in scenario $\text{S}_\text{1}$, we also consider the practical constraints of operating under the new guidance. For all industries that were not explicitly forbidden to operate, we consider a supply shock due to the difficulties of adapting to social distancing measures. We assume that industries with a higher index of physical proximity have more difficulty adhering to social distancing, so that only a portion of workers that cannot work from home can actually work at the workplace. In particular, the share of workers prevented from performing in-person work is proportional to the Physical Proximity Index of industry $i$ (see Appendix \ref{apx:shocks} for details). We rescale this index so that it varies in an interval between zero and $\iota$, and consider three values $\iota=0.1,0.4,0.7$. These three values distinguish between scenarios $\text{S}_\text{2}$, $\text{S}_\text{3}$ and $\text{S}_\text{4}$, which are reported in order of severity. At the time when lockdown starts, the supply shocks are given by $$\epsilon^S_{i,t_\text{start\_lockdown}} = \left(1 - \text{RLI}_i\right) \left(1 - \text{ESS}_i\left(1 - \iota \frac{\text{PPI}_{i}}{\max_j(\text{PPI}_j)}\right) \right),$$ where $\text{PPI}_{i}$ is the Physical Proximity Index. In contrast to the scenarios above, we do not assume that shocks due to the difficulty of adapting to social distancing are constant during lockdown.
In line with ONS reports, firms are able to bring a larger fraction of their workforce back to work as they adapt to the new guidelines. For simplicity, we assume that these shocks vanish when lockdown is lifted (May $13^\text{th}$), and interpolate linearly between their maximum value (when lockdown is imposed, March $23^\text{rd}$) and the time when lockdown is lifted. This leads to the following supply shocks $$\epsilon^S_{i,t} = \left(1 - \text{RLI}_i\right) \left(1 - \text{ESS}_i\left(1 - \iota \frac{\text{PPI}_{i,t}}{\max_j(\text{PPI}_j)}\right) \right),$$ where $$ \text{PPI}_{i,t} = \text{PPI}_{i} \left(1 - \frac{t - t_\text{start\_lockdown}}{t_\text{end\_lockdown}- t_\text{start\_lockdown}} \right). $$ Note that lockdown did not have a clear end date in the UK. However, we conventionally take May $13^\text{th}$ as $t_\text{end\_lockdown}$. This is the day when the UK government asked all workers to go back to work, and we assume that at this time most firms had had sufficient time to comply with social distancing guidelines. \paragraph{$\text{S}_\text{5}$: Original shocks.} This scenario corresponds to the original shocks $\epsilon^S_{i,t}$ we used in our previous work \citep{pichler2020production} and was based on the estimates by \cite{del2020supply}. The supply shocks were estimated by quantifying which work activities of different occupations can be performed from home based on the Remote Labor Index and using the occupational compositions of industries. The predictions also considered whether an industry was essential in the sense that on-site work was allowed during the lockdown. We then compiled a list of essential industries within the NAICS classification system, based on the list of essential industries provided by the Italian government using the NACE classification system. Then, we computed an essential score $\text{ESS}_i$ for each industry $i$ in their sample of NAICS industries and calculated the supply shock using the following equation \begin{equation} \epsilon^S_{i} = (1 - \text{RLI}_i)(1 - \text{ESS}_i). \label{eq:supply_shock_original} \end{equation} Following our original work \citep{pichler2020production}, we map these supply shocks from the NAICS 4-digit industry classification system to the WIOD classification using a NAICS-WIOD crosswalk. To deal with one-to-many and many-to-one maps in these crosswalks, we split each NAICS industry's contributions using employment data (see Appendix \ref{apx:shocks} for details). We follow this approach to weight each worker in the NAICS industries' sample from \cite{del2020supply} equally. In Appendix \ref{apx:shocks} we discuss the implications of this crosswalk methodology in more detail. Finally, for the Real Estate sector, we assume that the supply shock does not apply to imputed rents (which represent about 2/3 of gross output). \paragraph{$\text{S}_\text{6}$: European list of essential industries.} For comparison, we also consider the list of essential industries produced by \cite{fana2020covid}. This list was compiled independently of \cite{del2020supply}, and listed which industries were considered essential by the governments of Spain, Germany and Italy. Using this list, we compute the essential score for each industry by taking the mean across essential scores for the three countries considered. We keep labor supply shocks constant during lockdown, and lift them all when lockdown ends. This scenario produces the largest supply shocks of all.
For comparison, the average supply shock for $\text{S}_\text{1}$ is 3\%, while the average shock for $\text{S}_\text{6}$ is 28\%. \subsection{Demand shocks} \label{sec:consumptiondemandshockscenarios} The Covid-19 pandemic caused strong shocks to all components of demand. We consider shocks to private consumption demand, which we further distinguish into shocks due to fear of infection and due to fear of unemployment, and shocks to other components of final demand, such as investment, government consumption and exports. We outline the basic assumptions on demand shocks below and show in Appendix \ref{apx:shocks} the detailed cross-sectional and temporal shock profiles. We further demonstrate in Appendix \ref{apx:sensitivity} that alternative plausible demand shock assumptions only mildly influence model results. \paragraph{Demand shocks due to fear of infection.} During a pandemic, consumption/saving decisions and consumer preferences over the consumption basket change, leading to first-order demand shocks \citep{CBO2006, del2020supply}. For example, consumers are likely to demand fewer services from the hospitality industry, even when the hospitality industry is open. Transport is very likely to face substantial demand reductions, despite being classified as an essential industry in many countries. A key question is whether reductions in demand for ``risky'' goods and services are compensated by an increase in demand for other goods and services, or if lower demand for risky goods translates into higher savings. We consider a demand shock vector ${\epsilon}^D_t$, whose components $\epsilon_{i,t}^D$ are the relative changes in demand for goods of industry $i$ at time $t$. Recall from Eq.~\eqref{eq:cd}, $c^d_{i,t}= \theta_{i,t} \Tilde{c}^d_t$, that consumption demand is the product of the total consumption scalar $\Tilde{c}^d_t$ and the preference vector $\theta_t$, whose components $\theta_{i,t}$ represent the share of total demand for good $i$. We initialize the preference vector by considering the initial consumption shares, that is $\theta_{i,0}=c_{i,0}/\sum_j c_{j,0}$. By definition, the initial preference vector $\theta_{0}$ sums to one, and we keep this normalization at all following time steps. To do so, we consider an auxiliary preference vector $\bar{\theta}_t$, whose components $\bar{\theta}_{i,t}$ are obtained by applying the shock vector $\epsilon^D_{i,t}$. That is, we define $\bar{\theta}_{i,t}=\theta_{i,0} (1-\epsilon^D_{i,t})$ and define $\theta_{i,t}$ as \begin{equation} \label{eq:theta} \theta_{i,t}= \frac{\bar{\theta}_{i,t}}{\sum_j \bar{\theta}_{j,t}} = \frac{ (1-\epsilon^D_{i,t}) \theta_{i,0} }{\sum_j (1-\epsilon^D_{j,t}) \theta_{j,0} } . \end{equation} The difference $1-\sum_i \bar{\theta}_{i,t}$ is the aggregate reduction in consumption demand due to the demand shock, which would lead to an equivalent increase in the saving rate. However, households may not want to save all the money that they are not spending. For example, they most likely want to spend on food the money that they are saving on restaurants. Therefore, we define the aggregate demand shock $\Tilde{\epsilon}^D_t$ in Eq.~\eqref{eq:consdemand} as \begin{equation} \label{eq:epsilon} \Tilde{\epsilon}_t^D=\Delta s \left(1- \sum_{i=1}^N \bar{\theta}_{i,t} \right) , \end{equation} where $\Delta s$ is the change in the savings rate.
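As a concrete illustration of Eqs.~\eqref{eq:theta} and \eqref{eq:epsilon}, the following minimal sketch (illustrative only, with hypothetical variable names; not the simulation code) applies a shock vector to the baseline consumption shares and returns the renormalized preferences together with the aggregate demand shock.
\begin{verbatim}
import numpy as np

def demand_shock_update(theta0, eps_D, delta_s):
    # theta0  : baseline consumption shares (sum to one)
    # eps_D   : industry-level demand shocks in [0, 1]
    # delta_s : change in the savings rate, in [0, 1]
    theta_bar = theta0 * (1.0 - eps_D)           # shocked, unnormalized shares
    theta = theta_bar / theta_bar.sum()          # Eq. (theta): renormalize
    eps_agg = delta_s * (1.0 - theta_bar.sum())  # Eq. (epsilon)
    return theta, eps_agg

# Example: an 80% shock to one "risky" industry, with half of the
# forgone spending saved (delta_s = 0.5).
theta, eps_agg = demand_shock_update(np.array([0.2, 0.3, 0.5]),
                                     np.array([0.8, 0.0, 0.0]),
                                     delta_s=0.5)
\end{verbatim}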
When $\Delta s=1$, households save all the money that they are not planning to spend on industries affected by demand shocks; when $\Delta s=0$, they spend all that money on goods and services from industries that are affected less. To parameterize $\epsilon^D_{i,t}$, we adapt consumption shock estimates by the \cite{CBO2006} and \cite{del2020supply}. Roughly speaking, these shocks are massive for restaurants and transport, mild for manufacturing and null for utilities. We make two modifications to these estimates. First, we remove the positive shock to the health care sector, as in the UK the cancellation of non-urgent treatment for diseases other than Covid-19 far exceeded the additional demand for health due to Covid-19.\footnote{ \url{https://www.ons.gov.uk/economy/grossdomesticproductgdp/bulletins/gdpfirstquarterlyestimateuk/apriltojune2020} } Second, we apportion to manufacturing sectors the reduced demand due to the closure of non-essential retail. For example, retail shops selling garments and shoes were mandated to shut down, and so we apply a consumption demand shock to the manufacturing sector producing these goods.\footnote{ To be fully consistent with the definition of demand shock, we should model non-essential retail closures as supply shocks, and propagate the shocks to manufacturing through reduced intermediate good demand. However, there are two practical problems that prevent us from doing so: (i) the sectoral aggregation in the WIOD is too coarse, comprising only one aggregate retail sector; (ii) input-output tables only report margins of trade, i.e. they do not model explicitly the flow of goods from manufacturing to retail trade and then from retail trade to final consumption. Given these limitations, we conventionally interpret non-essential retail closures as demand shocks. } We keep the intensity of demand shocks constant during lockdown. We then reduce demand shocks when lockdown is lifted according to the situation of the Covid-19 pandemic in the UK. In particular, we assume that consumers look at the daily number of Covid-19 deaths to assess whether the pandemic is coming to an end, and that they identify the end of the pandemic as the day on which the death rate drops below 1\% of the death rate at the peak. Given official data\footnote{\url{https://coronavirus.data.gov.uk/}}, this happens on August $11^\text{th}$. Thus, we reduce $\epsilon^D_{i,t}$ from the time lockdown is lifted (May $13^\text{th}$) by linearly interpolating between the value of $\epsilon^D_{i,t}$ during lockdown and $\epsilon^D_{i,t}=0$ on August $11^\text{th}$. The choice of modeling behavioral change in response to a pandemic by the death rate has a long history in epidemiology \citep{funk2010modelling}. \paragraph{Demand shocks due to fear of unemployment.} A second shock to consumption demand occurs through reductions in current income and expectations for permanent income. Reductions in current income are due to firing/furloughing, caused both by direct shocks and by subsequent upstream or downstream propagation, and result in lower labor compensation, i.e. $\tilde{l}_t < \tilde{l}_0$, for $t\geq t_\text{start\_lockdown}$. To support the economy, the government pays out social benefits to workers to compensate for income losses.
In this case, the total income $\tilde{l}_t$ that enters Eq.~\eqref{eq:consdemand} is replaced by an effective income $\tilde{l}^\star_t=b \tilde{l}_0 + (1-b) \tilde{l}_t $, where $b$ is the fraction of pre-pandemic labor income that workers who are fired or furloughed are able to retain. A second channel for shocks to consumption demand due to labor market effects occurs through expectations for permanent income. These expectations depend on whether households expect a V-shaped vs. L-shaped recovery, that is, whether they expect that the economy will quickly bounce back to normal or there will be a prolonged recession. Let expectations for permanent income $\Tilde{l}_t^p$ be specified by \begin{equation} \label{eq:perm_income} \Tilde{l}_t^p=\xi_t \Tilde{l}_0. \end{equation} In this equation, the parameter $\xi_t$ captures the fraction of pre-pandemic labor income $\Tilde{l}_0$ that households expect to retain in the long run. We first give a formula for $\xi_t$ and then explain the various cases. \begin{equation}\label{eq:xit} \xi_t = \begin{cases} 1, & t<t_\text{start\_lockdown}, \\ \xi^L = 1-\frac{1}{2}\frac{\tilde{l}_0-\tilde{l}_{t_\text{start\_lockdown}}}{\tilde{l}_0}, & t \in [t_\text{start\_lockdown}, t_\text{end\_lockdown} ],\\ 1-\rho + \rho\xi_{t-1} + \nu_{t-1}, & t > t_\text{end\_lockdown}. \end{cases} \end{equation} Before lockdown, we let $\xi_t \equiv 1$, i.e. permanent income expectations are equal to current income. During lockdown, following \cite{muellbauer2020} we assume that $\xi_t$ is equal to one minus half the relative reduction in labor income that households experience due to the direct labor supply shock, and denote that value by $\xi^L$. (For example, given a relative reduction in labor income of 16\%, $\xi^L=0.92$.)\footnote{During lockdown, labor income may be further reduced due to firing. For simplicity, we choose not to model the effect of these further firings on permanent income.} After lockdown, we assume that 50\% of households believe in a V-shaped recovery, while 50\% believe in an L-shaped recovery. We model these expectations by letting $\xi_t$ evolve according to an autoregressive process of order one, where the shock term $\nu_t$ is a permanent shock that reflects beliefs in an L-shaped recovery. With 50\% of households believing in such a recovery pattern, we have $\nu_t\equiv-(1-\rho)(1-\xi^L)/2$.\footnote{The specification in Eq.~\eqref{eq:xit} reflects the following assumptions: (i) time to adjustment is the same as for consumption demand, Eq.~\eqref{eq:consdemand}; (ii) absent permanent shocks (i.e., if $\nu_t=0$ after some $t$), $\xi_t$ returns to one, i.e. permanent income matches current income; (iii) with 50\% of households believing in an L-shaped recovery, $\xi_t$ reaches a steady state given by $1-(1-\xi^L)/2$: with $\xi^L=0.92$ as in the example above, $\xi_t$ reaches a steady state at 0.96, so that permanent income remains stuck four percentage points below pre-lockdown income.} \paragraph{Other final demand shock scenarios.} Note that WIOD distinguishes five types of final demand: (I) \textit{Final consumption expenditure by households}, (II) \textit{Final consumption expenditure by non-profit organisations serving households}, (III) \textit{Final consumption expenditure by government}, (IV) \textit{Gross fixed capital formation} and (V) \textit{Changes in inventories and valuables}.
Additionally, all final demand variables are available for every country, meaning that it is possible to calculate imports and exports for all categories of final demand. The endogenous consumption variable $c_{i,t}$ corresponds to (I), but only for domestic consumption. All other final demand categories, including all types of exports, are absorbed into the variable $f_{i,t}$. We apply different shocks to $f_{i,t}$. We do not apply any exogenous shocks to categories (III) \textit{Final consumption expenditure by government} and (V) \textit{Changes in inventories and valuables}, while we apply the same demand shocks to category (II) as we do for category (I). To determine shocks to investment (IV) and exports we start by noticing that, before the Covid-19 pandemic, the volatility of these variables has generally been three times the volatility of consumption.\footnote{ This is computed by calculating the standard deviation of consumption, investment and export growth over all quarters from 1970Q1 to 2019Q4. These are 1.03\%, 2.87\% and 3.24\% respectively. Source: \url{https://www.ons.gov.uk/file?uri=\%2feconomy\%2fgrossdomesticproductgdp\%2fdatasets\%2frealtimedatabaseforukgdpcomponentsfortheexpenditureapproachtothemeasureofgdp\%2fquarter2aprtojune2020firstestimate/gdpexpenditurecomponentsrealtimedatabase.xls} } The overall consumption demand shock is around 5\% so, as a baseline, we take shocks to investment and exports to be 15\%. In Appendix \ref{apx:sensitivity} we show that the model results are fairly robust with respect to alternative choices. \section{Economic impact of Covid-19 on the UK economy} \label{sec:econimpact} As already mentioned, in the first version of this work we released results in May predicting a 21.5\% contraction of GDP in the UK economy in the second quarter of 2020 with respect to the last quarter of 2019\footnote{% Due to an error in how we dealt with Real Estate shocks, our prediction was slightly worse than it would have been without the error; see Appendix \ref{apx:shocks}. }. This is in comparison to the contraction of 22.1\% that was actually observed. In this section we do a post-mortem to understand the factors that influenced the quality of the forecasts. To do this, we compare results under different scenarios defined by different shocks and production functions. Our analysis includes a sectoral breakdown of the forecasts and a comparison of the time series of observed vs. predicted behavior. \subsection{Definition of scenarios and calibration} The model and shock scenarios that we described in the previous sections have several degrees of freedom that can be tuned when exploring the model. These are either model assumptions, such as the production function or the shock scenario, or parameters (see Table \ref{tab:econmodelpars}). To understand the factors that influenced the quality of the forecasts, we focus on the assumptions and parameters that cannot easily be calibrated from data and that have a strong effect on the results. As the sensitivity analysis in Appendix \ref{apx:sensitivity} shows, the model is most sensitive to two assumptions: the supply shock scenario and the production function. So, we consider all combinations of the six scenarios for supply shocks, $\text{S}_\text{1}$ to $\text{S}_\text{6}$ (Section \ref{sec:supplyshockscenarios}), and of the five production functions mentioned in Section \ref{sec:supply}.
These are: standard Leontief, three versions of the IHS-Markit-modified Leontief that treat important inputs as critical (IHS1), half-critical (IHS2), and non-critical (IHS3), and the linear production function. Combining these two assumptions, we get $6\times5=30$ scenarios that we compare to the data. \begin{table}[htbp] \centering \caption{Assumptions and parameters of the model that: (top) are varied across scenarios; (middle) are fixed due to little effect on results; (bottom) are fixed due to direct data calibration. } \begin{tabular}{|l|c|c|} \hline Name & Symbol & Value \\ \hline Supply shocks & S & $\text{S}_\text{1}$, $\text{S}_\text{2}$, $\text{S}_\text{3}$, $\text{S}_\text{4}$, $\text{S}_\text{5}$, $\text{S}_\text{6}$ \\ Production function & & Leontief, IHS1, IHS2, IHS3, linear \\ \hline Final demand shocks & & Appendix \ref{apx:shocks} \\ Inventory adjustment & $\tau$ &10 \\ Upward labor adjustment & $\gamma_H$ & 1/30 \\ Downward labor adjustment & $\gamma_F$ & 1/15 \\ Change in savings rate & $\Delta s$ & 0.50 \\ Consumption adjustment & $\rho$ & 0.99 \\ \hline Inventory targets & $n_i$ & Appendix \ref{apx:inventory} \\ Propensity to consume & $m$ & 0.82 \\ Government benefits & $b$ & 0.80 \\ \hline \end{tabular} \label{tab:econmodelpars} \end{table} Other parameters have less effect on the model (Appendix \ref{apx:sensitivity}) and so we fix them to reasonable values. These are: \begin{itemize} \item The parameter $\tau$, capturing responsiveness to inventory gaps. We fix $\tau=10$ days, which indicates that firms aim at filling most of their inventory gaps within two weeks. This lies in the range of values used by related studies (e.g. $\tau=6$ in \cite{inoue2019firm}, $\tau=30$ in \cite{hallegatte2014modeling}). \item The hiring and firing parameters $\gamma_{\text{H}}$ and $\gamma_{\text{F}}$. We choose $\gamma_{\text{H}}=1/30$ and $\gamma_{\text{F}} = 2 \gamma_{\text{H}}$. Given our daily time scale, this is a rather rapid adjustment of the labor force, with firing happening faster than hiring. \item The parameter $\rho$, indicating sluggish adjustment to new consumption levels. We select the value assumed by \cite{muellbauer2020}, adjusted for our daily timescale\footnote{ Assuming that a time step corresponds to a quarter, \cite{muellbauer2020} takes $\rho=0.6$, implying that more than 70\% of adjustment to new consumption levels occurs within two and a half quarters. We modify $\rho$ to account for our daily timescale: letting $\bar{\rho}=0.6$, we take $\rho=1-(1-\bar{\rho})/90$ to obtain the same time adjustment as in \cite{muellbauer2020}. Indeed, in an autoregressive process like the one in Eq.~\eqref{eq:consdemand}, about 70\% of adjustment to new levels occurs in a time $\iota$ related inversely to the persistency parameter $\rho$. Letting $Q$ denote the quarterly timescale considered by \cite{muellbauer2020}, time to adjustment $\iota^Q$ is given by $\iota^Q=1/(1-\bar{\rho})$. Since we want to keep approximately the same time to adjustment considering a daily time scale, we fix $\iota^D=90\iota^Q$. We then obtain the parameter $\rho$ in the daily timescale such that it yields $\iota^D$ as time to adjustment, namely $1/(1-\rho)=\iota^D=90\iota^Q=90/(1-\bar{\rho})$. Rearranging gives the formula that relates $\rho$ and $\bar{\rho}$. }. \item The savings parameter $\Delta s$.
We take $\Delta s = 0.5$, meaning that households save half the money they are not spending on goods and services due to fear of infection, and direct half of that money to spending on other ``safer'' goods and services. \end{itemize} Finally, we are able to directly calibrate some parameters against the data. For example, we calibrate the inventory target parameters $n_i$ using ONS data for the usual stock of inventories that different industries typically have (see details in Appendix \ref{apx:inventory}). These parameters are highly heterogeneous across industries; typically manufacturing and trade have much higher inventory targets than services. Another parameter which we can directly calibrate from data is the propensity to consume $m$ (see Eq.~\ref{eq:consdemand}). Directly reading the share of labor income that is used to buy final domestic goods from the input-output tables, we find $m=0.82$. Finally, we calibrate benefits $b$ based on official UK policy, $b=0.8$. \subsection{Comparing our initial forecast with alternative scenarios} \label{sec:scenarioselection} We have chosen not to search for a calibration of the model that best fits the data. We do this because we only have one real-world example and this would lead to overfitting. Instead we start from the out-of-sample forecast we made in May and try to understand how performance would have changed if we had made different choices. This helps us understand what is important in determining the accuracy and gives some insight into how the model works, and where care needs to be taken in making forecasts of this type. Since our initial forecast, we have made small changes to the model (in particular the consumption function), and obtained better data to calibrate inventories (using ONS rather than US BEA data). Our initial forecast featured a supply shock scenario identical to S5 except for Real Estate (Section \ref{sec:supplyshockscenarios} and Appendix \ref{apx:shocks}), and an IHS3 production function. In this section, we compare our original forecast to the forecasts made using the 30 possible scenarios discussed above. To do so, we evaluate the forecast errors for each scenario at both the sectoral and aggregate levels, allowing us to better understand the trade-off involved in selecting amongst various assumptions and calibrations. For all 30 scenarios, we simulate the model for six months, from January $1^{st}$ to June $30^{th}$, 2020. We start lockdown on March $23^{rd}$, at which point we apply the supply and demand shocks described in Section \ref{sec:pandemic_shock}.\footnote{ We do not run the model further into the future, both because we focus on the first UK lockdown and the immediate aftermath, and because our assumptions on non-critical inputs are only valid for a limited time span. } We then compare the monthly sectoral output of each scenario against empirical data from the indexes of agriculture, production, construction and services, all provided by the ONS (see Appendix \ref{apx:validation_data}). Specifically, we compute the sector-level Absolute Forecast Errors (AFE) for April, May, and June,\footnote{ We exclude January, February and March from our comparison as we do not model the reaction of the economy before the lockdown, when e.g. international supply chains started to be disrupted. } \[ \mathcal{E}_{i,t}^z=|y_{i,t}-\hat{y}_{i,t}^z|, \] where $y_{i,t} = x_{i,t}/x_{i,0}$ is the output of sector $i$ during month $t$ expressed as a percentage of the output of sector $i$ during February, $x_{i,0}$.
Here, $\hat{y}_{i,t}^z$ is the equivalent quantity in simulations of scenario $z$. We then obtain a scenario-specific average sectoral AFE by taking a weighted mean of $\mathcal{E}_{i,t}^z$ across all sectors $i$ and months $t$, where weights correspond to output shares in the steady state (forecast errors for important sectors are more relevant than forecast errors for small sectors). This quantity is defined as \begin{equation} \text{AFE}_\text{sec}^z= \frac{1}{3} \sum_t \sum_i \frac{x_{i,0}}{\sum_j x_{j,0}} \mathcal{E}_{i,t}^z. \end{equation} We also compare aggregate output reductions in all different scenarios in April, May and June against empirical data. We compute a scenario-specific average aggregate AFE by averaging aggregate forecast errors over the three months we are considering, normalized by pre-lockdown aggregate output so that it can be read as a percentage. This is \begin{equation} \text{AFE}_\text{agg}^z= \frac{1}{3} \sum_t \frac{\sum_i x_{i,t}-\sum_i \hat{x}_{i,t}^z}{\sum_j x_{j,0}}, \end{equation} where $\hat{x}_{i,t}^z$ is the output of sector $i$ at time $t$ for scenario $z$. Note that, unlike the measure of sectoral error $\text{AFE}_\text{sec}^z$, the aggregate measure $\text{AFE}_\text{agg}^z$ does not include an absolute value -- it is positive when true production is greater than predicted, and negative when it is smaller than predicted. \begin{figure}[hbt] \centering \includegraphics[width = 1.0\textwidth]{fig/scenarios_comparison_main.pdf} \caption{\textbf{Sectoral and aggregate errors across scenarios.} We plot the aggregate Average Forecast Error $\text{AFE}_\text{agg}^z$ vs. the sectoral Average Forecast Error $\text{AFE}_\text{sec}^z$ for the 30 different scenarios $z$, corresponding to all of the six possible shock scenarios (indicated by color) and the five possible production functions (indicated by symbol), as well as the original model forecast, indicated by a black asterisk. The other parameter values are indicated in Table \ref{tab:econmodelpars}. Both sectoral and aggregate AFE are multiplied by 100 to be interpreted as percentages. The right panel zooms in on the region with the lowest sectoral and aggregate errors. The dashed lines refer to an average forecast by institutions and financial firms (above zero) and to a forecast by the Bank of England (below zero); see footnote \ref{footn:pred}. } \label{fig:scenarios_comparison_main} \end{figure} Figure \ref{fig:scenarios_comparison_main} plots sectoral and aggregate AFE for the 30 scenarios, plus the original out-of-sample forecast. The sectoral and aggregate errors vary considerably across scenarios. The left panel, which includes all 31 scenarios, contains two outliers where the sectoral error reaches almost 30\% and the aggregate error is roughly -30\%, meaning that the model predicts a downturn 30 percentage points larger than that actually observed (i.e., about a 50\% downturn). This occurs when the Leontief production function is combined with the two most severe supply shock scenarios $S_5$ and $S_6$. The right panel blows up the region containing the other 28 scenarios. The sectoral errors range between roughly 10\% and 20\%, while the aggregate forecast errors range from roughly -10\% to 10\%. A close examination of the figure makes it clear that, as expected, the predicted downturn generally gets stronger as the severity of the shock and the rigidity of the production function increase. With one exception, the choice of scenario has a bigger effect on the error than the production function.
This is evident from the fact that there are clusters of points with the same color associated with each scenario. The clusters are particularly tight for the less severe scenarios. The exception is the Leontief production function: The two most severe shock scenarios produce outliers with downturns that are much too strong and the remainder all produce results clustered together, with aggregate errors in the range of $-4\%$ to $-6\%$. This is true even for the weakest shock scenario $S_1$; by comparison, every other production function predicts a downturn that is too weak under scenario $S_1$, in the range of $8\% - 12\%$. The original model forecast, shown as a black asterisk, has an aggregate error of 0.6\% and a sectoral error of 13.5\%. It is useful to compare this to its counterpart in the retrospective analysis, which uses the $\text{S}_\text{5}$ scenario in combination with the IHS3 production function. This forecast performs better at the sectoral level, but worse at the aggregate level (the error is 2.5\%). Using an IHS2 production function with the $S_5$ shock scenario seems to provide the best combination of sectoral and aggregate errors. To put these results in perspective, it is worth comparing to the other out-of-sample forecasts that were made around the same time. We do not know their sectoral errors but we can compare the aggregate errors. The Bank of England forecast, for example, predicted a downturn of about 30\%, which corresponds to an aggregate error of about -8\%, comparable to the scenarios with $S_6$ supply shocks, and to those combining the Leontief production function with supply shocks $S_1$ to $S_4$. Conversely, the average forecast by institutions and financial firms for the UK economy in Q2 2020 was -16.6\%, which is 5.5 percentage points milder than the contraction observed in reality. This forecast is in line with supply shock scenarios $S_3$ to $S_4$, as well as with $S_5$ combined with a linear production function. Therefore, it seems that the more accurate predictions of our model are obtained by combining shock scenario $S_5$ -- as in our original prediction -- with one of the IHS production functions. Why does this scenario, which was not designed to capture specific features of the UK economy, work so well? The evidence suggests that this is because there were many voluntary firm shutdowns, such as for the car manufacturing industry in the UK. The $S_5$ shock scenario does a better job of identifying industries that were not mandated to shut down but did so in practice, and seems to better capture the behavioral response to the pandemic than a literal reading of UK regulations. \subsection{Analysis of the selected scenario} \label{sec:selectedscenarioanalysis} Given that it minimizes a combination of aggregate and sectoral error, and given that it is close to the original scenario used for our out-of-sample predictions, we focus on the scenario that combines $S_5$ and the IHS2 production function to illustrate the outcomes of our model in more detail. We start by showing the dynamics produced by the model, and then we evaluate how well the selected scenario explains some aspects of the economic effects of Covid-19 on the UK economy that we have not considered so far. \begin{figure}[hbt] \centering \includegraphics[width = 1.0\textwidth]{fig/aggregate_vs_sectoral.pdf} \caption{\textbf{Economic production for the chosen scenario as a function of time.} We plot production (gross output) as a function of time for each of the 55 industries.
Aggregate production is a thick black line and each sector is colored. Agricultural and industrial sectors are colored red; trade, transport, and restaurants are colored green; service sectors are colored blue. All sectoral productions are normalized to their pre-lockdown levels, and each line's width is proportional to the steady-state gross output of the corresponding sector. For comparison, we also plot empirical gross output, normalized with respect to March 2020. } \label{fig:aggregatevssectoral} \end{figure} Figure \ref{fig:aggregatevssectoral} shows model results for the selected scenario for production (gross output); results for other important variables, such as profits, consumption and labor compensation (net of government benefits) are similar. When the lockdown starts, there is a sudden drop in economic activity, shown by a sharp decrease in production. Some industries further decrease production over time as they run out of critical inputs.\footnote{The reduction in production due to input bottlenecks is somewhat limited in this scenario as compared to the outliers in Figure \ref{fig:scenarios_comparison_main}. With supply shocks $S_5$ or $S_6$ and a Leontief production function, the economy collapses by 50\% due to the strong input bottlenecks created in a substantially labor-constrained economy in which all inputs are critical for production.} Throughout the simulation, service sectors tend to perform better than manufacturing, trade, transport and accommodation sectors. The main reason is that most service sectors face both lower supply and lower demand shocks, as a high share of workers can effectively work from home, and business and professional services depend less on consumption demand. In the UK, there was not a clear-cut lifting of the lockdown but, under scenario $S_5$, we take May $13^\text{th}$ as a conventional date on which lockdown measures are lifted (see Figure \ref{fig:supplyscenarios} in Appendix \ref{apx:shocks} for shock dynamics). By the end of June, the economy is still far from recovering. In part, this is due to the fact that the aggregate level of consumption does not return to pre-lockdown levels, due to a reduction in expectations of permanent income associated with beliefs in an L-shaped recovery (Section \ref{sec:demand}), and due to the fact that we do not remove shocks to investment and exports (see Section \ref{sec:pandemic_shock}). \begin{table}[htbp] \centering \begin{tabular}{|l|c|c|} \hline Variable (compared to Q4-2019) & Data & Model \\ \hline Gross output April & -27.4\% & -25.3\% \\ Gross output May & -25.2\% & -26.9\% \\ Gross output June & -17.8\% & -16.8\% \\ Value added Q2 & -21.5\% & -22.1\% \\ \hline Private consumption Q2 & -25.3\% & -21.3\% \\ Investment Q2 & -26.3\% & -29.7\% \\ Government consumption Q2 & -17.5\% & -14.2\%\\ Inventories Q2 & -2.2\% & -0.5\% \\ Exports Q2 & -23.3\% & -27.8\% \\ Imports Q2 & -30.6\% & -23.9\% \\ \hline Wages and Salaries Q2 & -1.1\% & -4.3\% \\ Profits Q2 & -26.7\% & -22.3\%\\ \hline \end{tabular} \caption{Comparison between data and predictions of the selected scenario for the main aggregate variables. All percentage changes refer to the last quarter of 2019, which we take to represent the pre-pandemic economic situation.} \label{tab:aggregate-data-model} \end{table} We now turn to evaluating how well the selected scenario describes the economic effects of Covid-19 on the UK economy.
In terms of gross output, one can see in Table \ref{tab:aggregate-data-model} that the model slightly underestimates the recession in April and slightly overestimates it in May, while it correctly estimates a strong recovery in June.\footnote{ We aggregate empirical output from sectoral indexes using our steady-state output shares as weights. } Additionally, aggregate value added in the second quarter of 2020 is very close to the data. Our model, however, also tracks macroeconomic variables other than gross output and value added, so we compare these other variables to data as well (Table \ref{tab:aggregate-data-model}). From national accounts, we collect data on private and government consumption, investment, change in inventories, exports and imports (expenditure approach to GDP); wages and salaries and profits (income approach to GDP). Looking at the expenditure approach to GDP, some variables fall more strongly in the model, such as investment and exports, while others fall more strongly in the data, such as private and government consumption, inventories and imports.\footnote{If one considers Exports-Imports as a component, the model predicts a current account deficit, while in reality there was a current account surplus. We should note, however, that we do not model international trade: we treat exports as exogenous and imports like locally produced goods and services.} However, the model predicts the relative reductions fairly well, as we find a stronger collapse in private consumption and investment than in government consumption or inventories, as in the data. Finally, considering the income approach to GDP, we overestimate the reduction in wages and salaries and underestimate the reduction in profits.\footnote{ Note that these categories are not jointly exhaustive, as the ONS also considers mixed income and taxes less subsidies, which are difficult to compare to variables in our model. } Nonetheless, we correctly predict that the absolute reduction in wages and salaries is much smaller than in profits (due to government subsidies). \begin{figure}[tb] \centering \includegraphics[width = 1.0\textwidth]{fig/model_data_comparison.pdf} \caption{\textbf{Comparison between model predictions and empirical data.} We compare predicted production (gross output) for each of 53 industries with the actual values from the ONS data, averaged across April, May and June. Values are relative to pre-lockdown levels and dot size is proportional to the steady-state gross output of the corresponding sector. Agricultural and industrial sectors are colored red; trade, transport, and restaurants are colored green; service sectors are colored blue. Dots above the identity line mean that the predicted recession is more severe than in the data, while the reverse is true for dots below the identity line. } \label{fig:model_data_comparison} \end{figure} Turning to the performance of our model at the disaggregate level of industries, Figure \ref{fig:model_data_comparison} shows gross output as predicted by the model and in the data. Here, gross output is averaged over the values it takes in April, May and June, both in the model and in the data, and compared to the value it had in Q4 2019. To interpret this figure, note that for all points on the left of the identity line, model predictions are lower than in the data, i.e. the model is pessimistic. Conversely, on the right of the identity line model predictions are higher than in the data, i.e. the model is optimistic.
Note that model predictions and empirical data are correlated, although not perfectly: the Pearson correlation coefficient weighted by industry size is 0.75. The majority of sectors decreased production to no less than 60\% of initial levels, both in the model and in the data, but a few sectors were forced to decrease production much more. While the predictions are generally very good, there are also some dramatic failures. We conjecture that this is due to idiosyncratic features of these industries that we could not take into account without overly complicating our model. For example, one sector for which the predictions of our model are completely off is C29 - Manufacturing of vehicles. Almost all car manufacturing plants were completely closed in the UK in April and May, and so production was essentially zero (7\% of the pre-pandemic level in April and 14\% in May). While plants reopened in June, production in Q2 was slightly above 20\% of the pre-pandemic level. However, our model predicts a level of production around 63\% of the pre-pandemic level. It is difficult to account for the complete shutdown of car manufacturing plants in our model, as our selected scenario for supply shocks does not require manufacturing plants to completely close during lockdown. We think that this discrepancy between model predictions and data can be explained by two factors. First, car manufacturing is highly integrated internationally, and, in a period when most developed countries were implementing lockdown measures, international supply chains were highly disrupted. For simplicity, however, we did not model input bottlenecks due to lack of imported goods. Second, it is possible that firms producing non-essential goods voluntarily decided to stop production to protect the health of their workers, even if they were not forced to do so. Another example for which the predictions of our model are off is H51 - Air transport. Production in the data is 3\% of pre-lockdown levels, while our model predicts around 50\%. In the model, most activity of the air transport industry during lockdown is due to business travel, which is a non-critical input to many industries that we do not exogenously shut down (recall that industries aim at using non-critical inputs if they are available). In some other cases, however, our model gave accurate predictions, even when the answer was far from obvious. Compare, for instance, industries M74\_M75 (Other Science) and O84 (Public Administration). Both received a very weak supply shock (3\% for Other Science and 1.1\% for Public Administration, see Table \ref{tab:FO_shocks_supply} in Appendix \ref{apx:shocks}), and no private consumption demand shocks. Yet, Public Administration had almost no reduction in production, while Other Science reduced its production to about 75\% of its pre-pandemic level. The ONS report in May (see footnote \ref{ft1}) quotes reduced intermediate demand as the reason why Other Science reduced its activity. Conversely, Public Administration's output is almost exclusively sold to the government, which did not reduce its consumption. Because of its ability to take into account supply chain effects and the resulting reductions in intermediate demand, our model is able to endogenously capture the difference between these two sectors, even though their shocks are small in both cases.
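The size-weighted correlation reported above is a standard weighted Pearson coefficient. The following minimal sketch (Python) illustrates the computation; the arrays below are hypothetical placeholders, not the actual sectoral values.
\begin{verbatim}
import numpy as np

def weighted_pearson(x, y, w):
    # Pearson correlation of x and y with observation weights w.
    w = w / w.sum()
    mx, my = np.sum(w * x), np.sum(w * y)
    cov = np.sum(w * (x - mx) * (y - my))
    var_x = np.sum(w * (x - mx) ** 2)
    var_y = np.sum(w * (y - my) ** 2)
    return cov / np.sqrt(var_x * var_y)

# Placeholder inputs: model and data production relative to Q4-2019,
# weighted by steady-state output shares (illustrative values only).
model = np.array([0.63, 0.50, 0.75, 0.95, 0.88])
data = np.array([0.20, 0.03, 0.74, 0.99, 0.85])
share = np.array([0.04, 0.01, 0.03, 0.10, 0.07])
print(weighted_pearson(model, data, share))
\end{verbatim}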
\begin{figure}[tb] \centering \includegraphics[width = 1.0\textwidth]{fig/new_model_april_june_all.pdf} \caption{\textbf{Comparison between model predictions and empirical data.} We plot predicted production (gross output) vs. observed values from ONS data. Production is relative to pre-lockdown levels and dot size is proportional to the steady-state gross output of the corresponding sector. Yellow is April and red is June. Black lines connect the same industry from April to June. Only a few industries are highlighted (see main text). } \label{fig:new_model_april_may_june_all} \end{figure} Figure \ref{fig:new_model_april_may_june_all} shows the ability of the model to predict sectoral dynamics. It is similar to Figure \ref{fig:model_data_comparison} but shows output in both April and June. The dots that represent the same industry in April and June are connected by a black line. We focus on a few industries that we discuss in this section, making all other points light grey (Figure \ref{fig:new_model_april_may_june_facets} in Appendix \ref{apx:selectedscenario} shows labels for all industries). To interpret changes from April to June, note that a line close to vertical implies that a given industry had a much stronger recovery in the data than in the model, while a horizontal line implies the opposite. A line parallel to the identity line indicates that the recovery was as strong in the data as in the model. Almost all sectors experience a substantial recovery from April to June, both in the model and in the data. An example in which our model correctly predicts dynamic supply chain effects is the recovery experienced by C23 (Manufacture of other non-metallic mineral products) as a consequence of the recovery by F (Construction). According to ONS reports, increased activity in construction in June is explained by the lifting of the lockdown and by adaptation to social distancing guidelines by construction firms. At the same time, industry C23 recovers due to the production of cement, lime, plaster, etc.\ to satisfy intermediate demand by construction (construction is by far the main customer of C23, buying almost 50\% of its output). This pattern is faithfully reproduced by the endogenous dynamics in our model. \section{Propagation of supply and demand shocks} \label{sec:nw_effects} In the standard Cobb-Douglas equilibrium IO model, productivity shocks propagate downstream and demand shocks propagate upstream \citep{carvalho2019production}. Thus, the elasticity of aggregate output to a shock to one sector depends on the type of shock, and on the position of an industry in the input-output network. What can we say about these questions in our model? Are there properties of an industry that could be computed ex-ante to know how systemic it is? Is this different for supply and demand shocks? To answer these questions, we run the model with a single shock -- either supply or demand -- to a single industry. All other industries experience no shocks: they retain their initial productive capacities and face initial levels of final demand. We then let the economy evolve under this specific setting for one month\footnote{ We also did the analysis with model simulations up to two months after the initial shock is applied. Since results are similar for the two cases, we only report results for the one-month simulations. }. We repeat this procedure for every industry, every production function specification and different shock magnitudes.
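Schematically, this procedure is a triple loop over industries, shock magnitudes, and production functions. The sketch below (Python) is illustrative only: \texttt{run\_model} is a hypothetical stand-in for one simulation of the dynamic model, not part of any released code.
\begin{verbatim}
import itertools

def run_model(shock_type, industry, magnitude, prod_func, horizon=30):
    # Hypothetical stand-in for one simulation run with a single
    # shocked industry and all other shocks switched off; returns
    # aggregate output after `horizon` days as a fraction of the
    # pre-shock level.
    return 1.0 - 0.5 * magnitude  # placeholder response

industries = range(55)                       # WIOD sectors
magnitudes = [m / 10 for m in range(1, 11)]  # 10%, 20%, ..., 100%
prod_funcs = ["Leontief", "IHS1", "IHS2", "IHS3", "linear"]

results = {(j, eps, pf): run_model("supply", j, eps, pf)
           for j, eps, pf in itertools.product(industries,
                                               magnitudes, prod_funcs)}

# One tile of the results figure is the average over shocked industries:
tiles = {(eps, pf): sum(results[(j, eps, pf)] for j in industries) / 55
         for eps in magnitudes for pf in prod_funcs}
\end{verbatim}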
We then investigate whether the decline in total output can be explained by simple measures such as shock magnitude, output multipliers or upstreamness levels, which we formally define below. We first explain the supply and demand shock scenarios in somewhat more detail. \paragraph{Supply shock scenarios.} When considering supply shocks only, we completely switch off any adverse demand effects, i.e. $\epsilon_{i,t}^D = 0$ (which implies $\theta_{i,t} = \theta_{i,0}$ and $\tilde \epsilon_t^D = 0$), $\xi_t = 1$ and $f^d_{i,t} = f^d_{i,0}$ for all $i$ and $t$. We also set all supply shocks equal to zero ($\epsilon_{i,t}^S = 0$), except for a single industry $j$ which experiences a supply shock from the set $\epsilon_{j,t}^S \in \{0.1,0.2,...,1\}$. We then loop over every possible $j$. We do this for each of the different production function assumptions. \paragraph{Demand shock scenarios.} In our demand shock scenarios there are no supply shocks ($\epsilon_{i,t}^S = 0$ $\forall \; i,t$) and similarly there are no demand shocks for all but one industry $j$ ($\epsilon_{i,t}^D = 0$, $f^d_{i,t} = f^d_{i,0}$ $ \forall i \ne j, \forall t$). For the single industry $j$ we again let shocks vary between 10 and 100\%; $\epsilon_{j,t}^D \in \{0.1,0.2,...,1\}$. For simplicity we assume uniform shocks across all final demand categories of the given industry, resulting in $f^d_{j,t} = (1-\epsilon_{j,t}^D) f^d_{j,0}$. To keep things as simple as possible, we further assume that there is no fear of unemployment ($\xi_t = 1$) and that final consumers do not switch to alternative products at all ($\Delta s = 1$). Under these assumptions the values of $\theta_{i,t}$ and $\tilde \epsilon_t^D$ are then computed as outlined in Section \ref{sec:consumptiondemandshockscenarios}. \newline Figure \ref{fig:output_table} shows the simulation results broken down into the various shock magnitudes (vertical axis) and production function categories (horizontal axis). The coloring and the value of a tile represent the average aggregate output (as a fraction of initial output), where the average is taken over all $N$ runs. Results obtained from the demand shock scenarios, Figure \ref{fig:output_table}(b), do not differ across alternative production function specifications and thus we only show one representative case. The reason for this is that demand shocks to an industry are passed on proportionally to all of its suppliers as shown in Eq.~\eqref{eq:order_interm}, regardless of the underlying production mechanisms. This is in stark contrast to the supply shock scenarios, Figure \ref{fig:output_table}(a), where economic impacts depend strongly on the choice of production function. While the economic impact of supply constraints is limited in the linear production model and depends fairly smoothly on the shock magnitude, this is not the case in the Leontief model. Here the economy experiences a major contraction if single industries are shut down. The results lie in between these two extremes when production functions differentiate between critical and non-critical inputs (IHS 1-3). Except for the linear production function simulations, we observe fairly wide distributions of economic impacts for several severe supply shock scenarios. Thus, not only does the shock size matter, but also \emph{which} industries are affected by these shocks.
For example, applying an 80\% supply shock to industry Repair-Installation (C33) under the IHS2 assumption collapses the economy by about 50\%, although the industry accounts for less than half a percent of the overall economy. On the other hand, applying the same shock to the comparatively large industry Other Services (R\_S, 3\% of the economy) leads to a mere 6\% reduction of aggregate output. \begin{figure}[H] \centering \includegraphics[width = \textwidth]{fig/shock_tiles_both.pdf} \caption{ \textbf{Aggregate gross output as percentage of pre-shock levels after shocking single industries.} Columns depict different production functions and rows distinguish the supply (a) and demand (b) shock magnitudes to which an industry is exposed. Results in (b) are only shown for one production function, since they are identical across alternative specifications. The values in the tiles and their coloring denote aggregate output levels as percentage of pre-shock levels one month after the shock hits a given industry. These values are computed as averages over $N$ runs, each shocking only a single industry. } \label{fig:output_table} \end{figure} For a policymaker it is important to know what properties of an industry drive the amplification of shocks, as this could inform both the design of lockdown measures and reopening policies. To explore this more systematically, we regress output levels against potential explanatory factors such as upstreamness, output multipliers and industry sizes. An industry's upstreamness in a production network is its average distance to the final consumer \citep{antras2012measuring} and is also known as Total Forward Linkages \citep{miller2017output}. High upstreamness implies that the output of this industry requires several subsequent production steps before it is purchased by final consumers. Thus, relaxing shocks on industries with high upstreamness could potentially stimulate further economic activity. Since upstreamness boils down to the row sums of the Ghosh inverse \citep{miller2017output}, we obtain the $N$-dimensional upstreamness vector as $ u = (\mathbb{I} - B)^{-1} \mathbf{1}$, where the matrix elements $B_{ij} = Z_{ij,0}/x_{i,0}$ are the ``allocation coefficients''. Upstreamness ranges from 1.004 (Household activities) to 2.742 (Warehousing) in our sample of UK industries. Output multipliers, or alternatively Total Backward Linkages, are a core metric in many economic studies. In input-output analysis multipliers quantify the impact of a change in final demand in a given sector on the entire economy. Multipliers are related to various network centrality concepts and have been shown to be strongly predictive of long-term growth \citep{mcnerney2018production}. Since shocking an industry with a high multiplier should lead to larger decreases in intermediate demand, it is plausible that high-multiplier industries tend to amplify shocks more. The output multiplier is computed as the column sum of the Leontief inverse, i.e. $m = (\mathbb{I} - A^\top)^{-1} \mathbf{1}$. Multipliers range from 1 (Household activities) to 2.379 (Forestry) in our sample. Upstreamness and multipliers are different, but are fairly highly correlated, with a Pearson correlation of 0.45. We then regress log aggregate output one month after the initial shock hits the economy, $\log (\sum_k x_{k,30})$, against the shocked industry's log upstreamness and multiplier levels.
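Both metrics require only one matrix inversion each. The sketch below (Python) illustrates the computation on a hypothetical 3-sector table; \texttt{Z} and \texttt{x} are placeholder values, not the WIOD data.
\begin{verbatim}
import numpy as np

# Hypothetical 3-sector intermediate flow table and gross output.
Z = np.array([[10., 20.,  5.],
              [15.,  5., 10.],
              [ 5., 10.,  5.]])   # Z[i, j]: sales from sector i to j
x = np.array([100., 80., 60.])    # gross output by sector
I = np.eye(len(x))
one = np.ones(len(x))

A = Z / x               # technical coefficients  A[i, j] = Z[i, j] / x[j]
B = Z / x[:, None]      # allocation coefficients B[i, j] = Z[i, j] / x[i]

upstreamness = np.linalg.solve(I - B, one)    # u = (I - B)^{-1} 1
multipliers = np.linalg.solve(I - A.T, one)   # m = (I - A')^{-1} 1
\end{verbatim}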
Naturally, we would expect a larger decline if supply shocks hit an overall large industry, and similarly for demand shocks and the size of final consumption. Thus we also control for total industry size measured in log gross output, $\log (x_{i,0})$, and industry-specific total demand, $\log ( c^d_{i,0} + f^d_{i,0} )$, in our regressions\footnote{ We do not include gross output and final demand values together as regressors to avoid multicollinearity. Industry gross output and final consumption are highly correlated; $\text{cor} \{ \log (x_{i,0}), \log (c_{i,0}+ f_{i,0}) \} =0.89$ (p-value $< 10^{-16}$). }. We then run the regression for every given shock magnitude and production function separately. Table \ref{tab:regsuppshock} summarizes the regression results for the supply shock scenarios. For supply shocks we find that upstreamness is a very good predictor of adverse economic impacts if the economy builds upon Leontief production mechanisms. If industries use linear production technologies, on the other hand, it is the size of the industry rather than its upstreamness level that explains reductions in aggregate output. When using the intermediate assumption of an IHS2 production function, both upstreamness levels and industry size significantly affect aggregate impacts, although the overall model fit ($R^2$) drops substantially. Note that supply shocks are to a large extent a policy variable as they are directly coupled to non-pharmaceutical interventions such as industry-specific shutdowns. Our results indicate that upstreamness levels of industries are an important aspect for designing lockdown scenarios. In the case of limited production flexibility, upstreamness might even be a better indicator of closure-related aggregate impacts than the actual size of the industry. Regression results for the demand shock experiments are shown in Table \ref{tab:regdemashock}. The first four columns show the results from univariate regressions where we include only one of the potential covariates: upstreamness, multipliers, final consumption and gross output. Somewhat surprisingly, output multipliers, a key metric for quantifying aggregate impacts resulting from demand-side perturbations in simpler input-output models, do not exhibit any predictive power in our case. Upstreamness, on the other hand, is positively associated with aggregate output values, indicating that demand shocks to upstream industries have less adverse impacts on the economy than shocks to downstream industries. Better model fits, however, are obtained when regressing aggregate output against indicators of industry size -- gross output or final consumption. Interestingly, combining final consumption values with the network-based metrics in a multivariate regression (column five) does not improve explanatory power at all. On the other hand, upstreamness and output size explain complementary parts of aggregate impacts (column six), resulting in a better model fit than any regression using final consumption values. Thus, industries' upstreamness and output sizes not only play a role in the propagation of supply shocks but are also key determinants of demand shock amplification. Overall, we find that static measures are indicative of our model results but only explain them partially, even in this simplified case where we studied either demand or supply shocks to only one industry at a time. In reality, supply and demand shocks are mixed and multiple industries are affected at the same time to varying degrees.
In that case the propagation of pandemic shocks will be more complex. Nevertheless, our results indicate that upstream industries play an important role in the amplification of exogenous shocks. \FloatBarrier \input{shockprop_regtab.tex} \FloatBarrier \section{Discussion} \label{sec:discuss} In this paper we have investigated how locking down and re-opening the UK economy as a policy response to the Covid-19 pandemic affects economic performance. We introduced a novel economic model specifically designed to address the unique features of the current pandemic. The model is industry-specific, incorporating the production network and inventory dynamics. We use survey results from industry experts to model how critical different inputs are in the production of a specific industry. In simulation experiments with simpler shock scenarios and a simplified model setup, we found that an industry's upstreamness is predictive of shock amplification. However, the relationship is noisy and strongly depends on the underlying production mechanism in the case of downstream propagation of supply shocks. These results underline the necessity of more sophisticated macroeconomic models for quantifying the economic impacts resulting from national lockdowns and subsequent re-opening. Real-time GDP predictions for the UK economy made in an early version of this paper turned out to be very accurate \citep{pichler2020production}. But was this because we did things right, or because we just got lucky? Our analysis here shows that it was a mixture of the two. By investigating alternative shock scenarios and production functions, and by studying the sensitivity to parameters and initial conditions, we are able to see how the quality of the predictions depends on these factors. We find that the shock scenarios are the most important determinant, but the production function can also be very important, and some of the other parameters can affect the results as well. To make a real-time forecast we had to act quickly. There were no data available about which industry classifications were considered essential in the UK, and the few data available on UK jobs that could be performed from home were based on US O*NET data\footnote{ \url{https://www.ons.gov.uk/employmentandlabourmarket/peopleinwork/employmentandemployeetypes/articles/whichjobscanbedonefromhome/2020-07-21} }. In the interest of time, we estimated the UK supply shocks using predicted US supply shocks \cite{del2020supply}. These supply shocks were based on a list of essential industries that was considerably less permissive (i.e., fewer industries were considered essential) than the UK guidelines. This turned out to be lucky: respecting social distancing guidelines caused many industries in the UK to close even though they were not formally and explicitly required to do so. If we had had a list of essential British industries, our supply shocks would have been too weak, or we would have had to model social distancing constraints by industry, which is difficult. Even if it missed some of the details, the supply shocks estimated by \cite{del2020supply} provided a reasonable approximation to the truth. The choice of production function also matters a great deal. Our results suggest that the Leontief production function, which is widely used for understanding the response to disasters, is a poor choice.
This is for an intuitive reason: Some inputs are not critical, and an industry can operate reasonably well without them, at least for a few months. Our results here show that production functions that incorporate this fact can do well. This could be further developed by calibrating CES production functions to correctly incorporate when substitutions are appropriate. The IHS Markit survey should eventually be repeated with larger samples and tested in detail (but that is beyond the scope of this paper). At the other extreme, our results also suggest that the linear production function is a poor choice. It comes close to the correct aggregate error only with the strongest shock scenario and never performs well at the sectoral level. Our results indicate that dynamic models of the type that we developed here can do a good job of forecasting disruptions in the economy. We want to emphasize that, while there is wide variation in the results under different scenarios, this variation could be dramatically reduced by collecting better data. A clear example is the choice of inventory levels. In our original model we had no data for inventory levels of UK industries, so we used data for the US. Replacing this with UK data substantially improved our results. Similarly, the economic data we had available was from 2019, and the IO data was from 2014 -- better real-time measurements of the response of industries to the pandemic would likely have improved our predictions. A more extensive study of the importance of different inputs to production could have reduced ambiguity about the choice of production function. At the highest level, our model illustrates the value of its key features. These are: modeling at the sectoral level, allowing both supply and demand shocks to operate simultaneously, using a realistic production function that properly captures nonlinearities without exaggerating them, and using a dynamic model that incorporates inventory effects. With better data and better measurement of parameters, our results demonstrate that a model of this type can give useful insight into the economics of a disaster such as the Covid-19 pandemic. \FloatBarrier \small \bibliographystyle{agsm}
\section{Introduction}\label{sec:intro} The interstellar medium (ISM) is at the core of the ecosystem in star-forming galaxies. The ISM gives birth to stars and also processes the energy and metals these stars produce, using the majority in maintaining the ISM state while expelling a fraction to larger scales. Modeling the ISM is fundamental to astrophysics, but challenging for many reasons. Proper treatment of ISM dynamics and energetics must involve simultaneous modeling of the formation and evolution of massive young stars, encompassing the physics that controls star formation, i.e., gravity, turbulence, and magnetic fields \citep[][]{2007ARA&A..45..565M}. The ISM is highly dissipative, quickly losing energy through radiative processes \citep[e.g.,][]{1978ppim.book.....S,2011piim.book.....D}. Without continuous and efficient energy inputs, rapid gravitational collapse (approaching the free-fall rate) would occur, which would produce far higher star formation rates (SFRs) than those observed \citep[e.g.,][]{2022AJ....164...43S}. Stars can provide the necessary energy, with UV radiation and supernovae (SNe) from massive young stars being the two most energetically dominant channels \citep[e.g.,][]{1999ApJS..123....3L}. UV radiation is the primary driver of key thermal and chemical processes in warm and cold ISM phases, setting cooling and heating rates \citep[][]{2022ARA&A..60..247W}. The gas thermal pressure in warm and cold ISM phases depends on the balance of heating and cooling \citep{1969ApJ...155L.149F,1995ApJ...443..152W}, and at a given temperature, the chemical state of the most abundant atom, hydrogen, can vary significantly: the warm medium ($T\sim10^4\Kel$) can be neutral or ionized, and the cold medium ($T\sim 10^2\Kel$) can be neutral or molecular. Thermal and chemical states depend on the strength of the UV radiation field as well as the cosmic ray (CR) rate \citep{2003ApJ...587..278W}, and these vary spatially due to the proximity of sources and shielding by the highly inhomogeneous structure \citep[e.g.,][]{2017MNRAS.466.3293P}. Additionally, shocks driven by SNe both accelerate and heat the gas \citep{1972ApJ...178..143C,1988ApJ...334..252C}. Given the galactic SN rate, the hot medium ($T\sim 10^6\Kel$) created by SN shocks can occupy a large volume \citep{1974ApJ...189L.105C,1977ApJ...218..148M} and break out of the disk \citep{1976ApJ...205..762S}. Turbulence in the warm and cold ISM is driven by the interaction with expanding supernova-heated bubbles \citep[e.g.,][]{1999A&A...350..230K,2009ApJ...704..137J,2013ApJ...776....1K,2014A&A...570A..81H}. From many decades of research, a consensus has been reached that multiple distinct phases coexist in the ISM, spanning a wide range of temperature and density \citep[e.g.,][]{2001RvMP...73.1031F,2005ARA&A..43..337C}. Furthermore, because the ISM is dynamic, the thermal and chemical states of any given gas parcel are transient, and states that would traditionally be considered unstable are continuously repopulated. Extensive observational surveys of the multiphase ISM have detailed the properties of the cold molecular gas \citep{2001ApJ...547..792D, 2015ARA&A..53..583H}, cold and warm atomic gas \citep{2016A&A...594A.116H,2018ApJS..234....2P} with evidence of a significant fraction being in the unstable temperature regime \citep{2003ApJ...586.1067H,2018ApJS..238...14M}, the warm ionized medium \citep{2008ApJ...686..363H,2017ApJ...838...43K}, and the hot ionized medium \citep{1987ARA&A..25..303C,2008ApJS..176...59B}.
The existence of the multiphase ISM and the ubiquitous high-level turbulence in it are clear evidence that stellar feedback energy is effectively coupled to the ISM. Feedback leads to inefficient star formation in terms of gas consumption, resulting in observed SFRs that are two orders of magnitude lower than the free-fall rate \citep{2018ApJ...861L..18U}. Because energy dissipation leads to localized collapse and star formation, while the bulk ISM's energy loss can be efficiently recovered from stellar feedback, the ISM's physical state and the SFR are intimately connected and in fact must be \emph{co-regulated} \citep[][]{2010ApJ...721..975O}. To quantitatively understand the co-regulation process in the star-forming ISM, a holistic approach is required, using direct numerical simulations to solve the magnetohydrodynamics (MHD) equations with gravity and explicit cooling and heating. These numerical simulations explicitly model the ISM in all phases, while tracking star formation in gravitationally collapsing regions and directly following energy inputs from recently formed stars. In order to be self-consistent, all gas heating and turbulence driving must either originate in energy inputs from stars, or develop as a consequence of naturally-occurring ISM dynamics (driven by gravity, shear, etc). Realistic numerical simulations of the star-forming multiphase ISM require both high mass and spatial resolution to resolve both the mass-dominating (cold and warm) and volume-filling (warm and hot) components. Given the strict resolution requirements for multiphase ISM simulations, to date, the outer dimensions of simulation domains are still limited to a few kpc, corresponding either to low-mass dwarf galaxies \citep{2017MNRAS.471.2151H,2022arXiv220509774S} or to portions of larger galactic disks, using vertically-stratified boxes \citep{2017ApJ...846..133K,2017MNRAS.466.1903G,2017MNRAS.466.3293P,2021MNRAS.504.1039R,2020MNRAS.491.2088K}. The TIGRESS framework was developed by \citet{2017ApJ...846..133K} to study the star-forming multiphase ISM, including a full complement of physics modules, sufficient resolution to follow key processes, and computational performance that enables both long-term evolution and comparative study of multiple galactic environments. In the TIGRESS framework, the MHD equations in a local patch of a differentially rotating galactic disk are solved with the grid-based code {\tt Athena} \citep{2008ApJS..178..137S,2009NewA...14..139S}. Self-gravity of gas is included, together with a fixed vertical potential from stars and dark matter. Cooling is modeled by a temperature-dependent tabulated cooling function appropriate for solar metallicity \citep{1993ApJS...88..253S,2002ApJ...564L..97K}. Sink particles representing star clusters are introduced within cells undergoing runaway gravitational collapse \citep{2013ApJS..204....8G}. Photoelectric heating by FUV radiation is set to scale with the globally attenuated FUV luminosity from star clusters formed in the simulation. Explosions of individual SNe are directly modeled, resolving the Sedov-Taylor stage during which the radial momentum of expanding bubbles is built up and the hot ISM is created in strong shocks \citep{2015ApJ...802...99K}. Systems modeled by the TIGRESS framework successfully evolve to a quasi-steady state over many star formation and feedback cycles. 
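As a schematic illustration (not the actual implementation or cooling curve), the net cooling term in such a framework takes the form $\mathcal{L}-\mathcal{G} = n_{\rm H}^2\Lambda(T) - n_{\rm H}\Gamma$, with $\Lambda(T)$ interpolated from a table and $\Gamma$ a heating rate; a minimal sketch with placeholder values follows.
\begin{verbatim}
import numpy as np

# Placeholder cooling table: log10(T [K]) vs log10(Lambda [erg cm^3/s]).
logT_tab = np.array([2.0, 3.0, 4.0, 5.0, 6.0, 7.0])
logLam_tab = np.array([-27.0, -26.0, -22.5, -21.5, -22.0, -22.5])

def net_cooling(n_H, T, Gamma=2.0e-26):
    # Net volumetric cooling L - G = n_H^2 Lambda(T) - n_H Gamma
    # [erg cm^-3 s^-1]; Gamma is a constant placeholder heating rate.
    logLam = np.interp(np.log10(T), logT_tab, logLam_tab)
    return n_H**2 * 10.0**logLam - n_H * Gamma

print(net_cooling(1.0, 8000.0))
\end{verbatim}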
A large suite of individual TIGRESS simulations covering varying galactic conditions shows that in all cases a quasi-steady state with self-consistently regulated SFR and ISM properties is reached \citep{2022ApJ...936..137O}, with multiphase outflows launched from the disk \citep{2018ApJ...853..173K,2020ApJ...894...12V,2020ApJ...900...61K}. These TIGRESS simulations have also been used to study cloud and star formation correlations \citep{2020ApJ...898...52M}, molecular chemistry \citep{2018ApJ...858...16G,2020ApJ...903..142G}, diffuse ionized gas \citep{2020ApJ...897..143K}, polarized dust emission \citep{2019ApJ...880..106K}, and CR transport \citep{2021ApJ...922...11A,2022ApJ...929..170A}. In addition, the TIGRESS computational framework of \citet{2017ApJ...846..133K} has been extended to simulate regions with strong spiral structure \citep{2020ApJ...898...35K}, nuclear rings where bar-driven gas inflows accumulate \citep{2021ApJ...914....9M,2022ApJ...925...99M}, and ram pressure stripping by the intracluster medium \citep{2022ApJ...936..133C}. This paper presents the first application of an extension of the TIGRESS framework, called TIGRESS-NCR, where ``NCR'' stands for Non-equilibrium Cooling and Radiation. TIGRESS-NCR includes explicit UV radiation transfer using the adaptive ray tracing method implemented in {\tt Athena} \citep{2017ApJ...851...93K}, and the photochemistry model introduced by \citet{KGKO}. We solve time-dependent chemical rate equations for hydrogen species and obtain other abundances assuming formation-destruction balance given the hydrogen species abundances. Cooling in the cold and warm medium is now determined by abundances of hydrogen species (H, $\Hp$, $\HH$) and major coolants ($\mathrm{C^+}$, C, CO, O, $\mathrm{O^+}$). Cooling in warm ionized gas is treated with a nebular cooling function that assumes a fixed abundance pattern characteristic of photoionized regions. High-temperature He and metal cooling assume collisional ionization equilibrium (CIE). To follow UV radiation, photon packets emanating from star clusters are transferred along rays through the ISM, with absorption by dust and gas. The major radiative heating channels (photoelectric and photoionization heating) and expansion of overpressurized \ion{H}{2} regions (by photoionization and radiation pressure) are directly modeled. We also include cosmic-ray-induced ionization and heating with an attenuation factor inversely proportional to an effective mean column density. Our new framework with adaptive ray tracing improves upon the accuracy of radiation transfer solutions compared to other ISM simulations that adopt more approximate methods, including the local attenuation and local ionization approach in \citet{2021ApJ...920...44H}, the tree-based backward radiation transfer method in \citet{2021MNRAS.504.1039R}, and the two-moment approach with M1 closure in \citet{2020MNRAS.491.2088K,2022arXiv221104626K}. In this paper, we focus on technical aspects of the TIGRESS-NCR implementation, and demonstrate how the ISM state and SFR are co-regulated by the full physics treatments in the TIGRESS-NCR framework. We consider two different galactic conditions for the models in this paper, one similar to the solar neighborhood, and one representing inner-galaxy environments.
In a companion paper, we use the TIGRESS-NCR implementation to conduct a set of controlled numerical experiments in which we turn on and off individual feedback channels and the magnetic field, in order to investigate the role of each physical process in regulating SFRs and ISM properties. In subsequent papers, we will present detailed analyses of radiation properties, ISM energetics, and galactic outflows. We describe numerical details in \autoref{sec:methods}, drawing from \citet{2017ApJ...846..133K} and from \citet{KGKO}. TIGRESS-NCR specific treatments regarding truncation of rays for efficient calculations are detailed in \autoref{sec:rt} and \autoref{sec:A-rt}. \autoref{sec:simulations} describes the ISM properties, energetics, and phase distributions for the two simulated galactic conditions. \autoref{sec:prfm} examines SFRs and the ISM state in the context of the pressure-regulated, feedback-modulated star formation model \citep{2022ApJ...936..137O} by analyzing the midplane pressure components and their ratio to SFR surface density (feedback yields). \autoref{sec:summary_and_discussion} summarizes our simulation results and discusses observational constraints, also situating our work within the evolution of ISM theory. \section{Methods}\label{sec:methods} In this section, we introduce the TIGRESS-NCR numerical framework. This is an extension of the original TIGRESS framework \citep[][which we refer to as TIGRESS-classic hereafter]{2017ApJ...846..133K} coupled with photochemistry and UV radiation transfer modules, as detailed in \citet{KGKO}. We begin by describing the governing equations (\autoref{sec:eqn}), and then briefly summarize the methods for treating star formation and SNe (\autoref{sec:sfsn}), radiation transfer (\autoref{sec:rt}), and photochemistry and cooling/heating (\autoref{sec:pchem}). \subsection{Governing Equations}\label{sec:eqn} We solve the MHD equations in a local Cartesian rotating frame, with background galactic differential rotation treated via the so-called shearing-box approximation \citep{1965MNRAS.130..125G,HGB1995}. We use the grid-based code {\tt Athena} to solve the ideal MHD equations \citep{2008ApJS..178..137S,2009NewA...14..139S}, employing a high-order Godunov method combined with the constrained transport algorithm \citep{2008JCoPh.227.4123G}. The conservation equations for gas mass, momentum, and total energy are, respectively, \begin{equation}\label{eq:mass_con} \pderiv{\rho}{t} + \divergence{\rho\vel} = 0, \end{equation} \begin{align}\label{eq:mom_con} \pderiv{(\rho\vel)}{t} +\divergence[\sbrackets]{\rho \vel\vel + \rbrackets{P+P_B}\mathbb{I} - \frac{\Bvec\Bvec}{4\pi}} = \nonumber\\ - \rho \nabla \Phi - 2\Om\times (\rho\vel) +\frad, \end{align} and \begin{align}\label{eq:energy_con} \pderiv{\etot}{t} + \divergence[\sbrackets]{\vel(\etot+P+P_B)- \frac{\Bvec(\Bvec\cdot\vel)}{4\pi}} = \nonumber\\ \mathcal{G} - \mathcal{L} - (\rho\vel) \cdot\nabla{\Phi} +\vel\cdot\frad. \end{align} The magnetic field evolution is governed by the induction equation without explicit resistivity (ideal MHD): \begin{equation}\label{eq:induction} \pderiv{\Bvec}{t}=\curl{\vel\times\Bvec}, \end{equation} while $\Bvec$ must obey the divergence-free constraint \begin{equation}\label{eq:divzero} \nabla\cdot \Bvec =0. 
\end{equation} In the above, $\rho = \muH m_{\rm H}\nH$ is the gas density, $\nH$ the number density of hydrogen nuclei, $\muH$ the mean molecular weight per H nucleus, and $m_{\rm H}$ the mass of a hydrogen atom; $\vel$ and $\Bvec$ are velocity and magnetic field vectors, respectively; $P$ and $P_B=\Bvec\cdot\Bvec/(8\pi)$ are thermal and magnetic pressure, respectively; $\etot=\rho v^2/2 + P/(\gamma -1) + P_B$ is the total energy density, where $\gamma=5/3$ is the adiabatic index. We explicitly follow non-equilibrium abundances of molecular ($\xHH$) and ionized ($\xHII$) hydrogen by solving the transport of abundances with source terms, \begin{equation}\label{eq:scalar} \pderiv{\rho_s}{t} + \nabla \cdot (\rho_s \vel) = \rho\mathcal{C}_s, \end{equation} where $\rho_s = \rho x_s$ is the mass density of species $s$ ($\HH$ or $\Hp$), and $\mathcal{C}_s$ is the net creation rate coefficient. On the right hand side of the momentum equation (\autoref{eq:mom_con}), we have source terms due to the total gravitational force ($-\rho \nabla \Phi$), Coriolis force ($-2\Om\times\rho\vel$), and radiation force ($\frad$) per unit volume. The total gravitational potential $\Phi = \Phi_{\rm sg} + \Phi_{\rm ext}(z) + \Phi_{\rm tidal}(x)$ includes the self-gravitational potential obtained as the solution of Poisson's equation (including contributions from both gas and young star clusters, represented numerically as sink/star particles), \begin{equation}\label{eq:poisson} \nabla^2 \Phi_{\rm sg} = 4\pi G (\rho + \rho_{\rm sp}), \end{equation} the external gravitational potential in the vertical direction (fixed in time; see \citet{2017ApJ...846..133K} for the exact form), and the tidal potential, which gives rise to the differential rotation of the background flow (non-rigid-body rotation) and is specified below. In the energy equation (\autoref{eq:energy_con}), we then have mechanical energy source terms associated with the gravity and radiation pressure forces (the Coriolis force does no work) in addition to the radiative heating and cooling terms ($\mathcal{G} - \mathcal{L}$). We solve \autoref{eq:poisson} using a Fast Fourier Transform method with shearing-periodic boundary conditions in the horizontal directions \citep{2001ApJ...553..174G} and open boundary conditions in the vertical direction \citep{2009ApJ...693.1316K}. We include the gravity of newly formed stars via a particle-mesh method, depositing the particle mass with a triangle-shaped cloud (TSC) kernel to obtain $\rho_{\rm sp}$ \citep{2013ApJS..204....8G}. The center of our domain corotates with the background galactic rotation at a galactocentric radius $R_0$, i.e., $\Om=\Omega_0\zhat$, and we assume the galactic rotation curve is flat, i.e., the shear parameter $q\equiv - d\ln \Omega/d\ln R =1$. As a result, $\Phi_{\rm tidal}(x) = - q \Omega_0^2 x^2$, where $x$ is the local radial coordinate ($x=0$ at the domain center). The source terms due to galactic differential rotation are included in the hyperbolic integrator by a semi-implicit method (Crank--Nicolson time differencing) as described by \citet{2010ApJS..189..142S}. The gravity source term is also included in the integrator, while the radiation force and cooling/heating source terms are included using an operator-split approach (see below). The main new features in the TIGRESS-NCR framework are the explicit treatments of chemical processes and associated cooling and heating terms.
This is in contrast to the TIGRESS-classic framework, in which the heating rate per hydrogen $\Gamma$ is spatially constant (but time-variable), set via a simple scaling relative to the solar neighborhood value of the globally attenuated instantaneous FUV radiation field as produced by star cluster particles \citep{2020ApJ...900...61K}. In TIGRESS-classic, the cooling function $\Lambda(T)$ is a function of temperature only, with a temperature-dependent mean molecular weight $\mu(T)$, combining \citet{2002ApJ...564L..97K} and \citet{1993ApJS...88..253S}. In this work, we directly calculate chemical reaction rates and cooling/heating rates from key microphysical processes that depend on gas properties ($\nH$, $T$, $\vel$), species abundances ($x_s$), radiation fields in three UV bands ($\erad$; see \autoref{sec:rt}), and the CR ionization rate ($\xicr$). Explicitly, we have \begin{align} \mathcal{C}_s & \equiv \mathcal{C}_s(\nH, T, x_s, \Zd, \Zg, \erad, \xicr) \,, \label{eq:Chem_def} \\ \mathcal{G} & \equiv \nH\Gamma(\nH, T, x_s, \Zd, \Zg, \erad, \xicr) \,, \label{eq:Gamma_def} \\ \mathcal{L} & \equiv \nH^2\Lambda(\nH,T, x_s, \Zd, \Zg, \erad, |dv/dr|) \,, \label{eq:Lambda_def} \end{align} where $\Zg$ and $\Zd$ are the gas metallicity and dust abundance relative to solar neighborhood values. Details of these functions are provided in \citet{KGKO}. This paper assumes $\Zg = \Zd = 1$, corresponding to solar metallicity with abundances of \citet{Asplund09} and fractional mass of metals $Z_{\rm g,\odot} = 0.014$, and a ratio of grain material mass to gas mass of $0.0081$ \citep{Weingartner01a}. Although here we adopt globally constant values for $\Zg$ and $\Zd$, in principle they can be determined locally (with appropriate treatments for metal enrichment and dust formation and destruction processes), and the TIGRESS-NCR framework is applicable for a wide range of $\Zg$ and $\Zd$ except for gas with very low metal and dust content in the early universe.\footnote{For example, our model for the CO abundance has been tested over limited ranges of parameter values \citep{2017ApJ...843...38G}. Also, our model does not include gas-phase H$_2$ formation and HD cooling, which can be important for very low-metallicity, dense gas \citep[e.g.,][]{2004ApJ...611...40C, 2005ApJ...626..627O}.} As detailed in \citet{KGKO}, we note that the heating and cooling functions that we adopt follow \citet{1995ApJ...443..152W,2003ApJ...587..278W} in all essential aspects and produce results consistent with theirs for solar neighborhood conditions, while also allowing for varying metal and dust abundances as well as UV and CR fluxes \citep[see also][]{2019ApJ...881..160B}. Because our simulations are time-dependent, they also make it possible to test the extent to which the predicted thermal equilibrium state is actually reached. \subsection{Star Formation and Supernovae}\label{sec:sfsn} In addition to the source terms given on the right hand sides of Equations~\eqref{eq:mass_con}--\eqref{eq:energy_con}, we also include sink and source terms associated with star formation and SN feedback. The treatments of star formation and SN feedback using sink/star particles are identical to the methods adopted in \citet{2020ApJ...900...61K}. We form and grow star cluster particles based on the sink particle treatment in {\tt Athena} first introduced by \citet{2013ApJS..204....8G} and modified further for the TIGRESS-classic framework \citep{2020ApJ...900...61K}.
Within the control volume ($3^3$ cells) surrounding a cell undergoing unresolved gravitational collapse, we create a sink particle by turning gas mass and momentum into a particle's mass and velocity. The collapse criteria are: (1) a cell's density is higher than a threshold Larson-Penston density that depends on the sound speed and numerical resolution, (2) the cell is at a local gravitational potential minimum, and (3) flows along all three Cartesian directions converge toward the cell. Each particle represents a star cluster (with typical mass $>10^3\Msun$ for our adopted resolution) consisting of coeval stars from a fully sampled initial mass function (IMF). We use the STARBURST99 stellar population synthesis (SPS) model \citep{1999ApJS..123....3L, 2014ApJS..212...14L} to determine SN rates for each star cluster, assuming a \citet{2001MNRAS.322..231K} IMF and Geneva evolutionary tracks for non-rotating stars. Each star cluster hosts multiple SNe over its lifetime ($t_{\rm age}<40\Myr$). We assume 50\% of SN events are in binary OB systems, and if an event is from a binary, we inject a massless particle with a velocity kick \citep{2011MNRAS.414.3501E}. These runaway stars can travel far from the cluster particle before they explode as SNe. However, we do not consider runaways as sources of UV radiation because ray tracing would become prohibitively expensive if runaway sources were included, and they do not contribute significantly to the total luminosity or ionization budget \citep{2020ApJ...897..143K}. For each SN event, we first calculate the enclosed mass $M_{\rm SNR}$ (sum of ejecta, $M_{\rm ej}=10\Msun$, and pre-existing ambient ISM) and mean density $n_{\rm amb}$ of the surrounding ISM within a spherical region with a radius of $3\Delta x$. If the calculated gas mass is less than the shell formation mass $M_{\rm sf} = 1540\Msun(n_{\rm amb}/\pcc)^{-0.33}$ when a remnant becomes radiative \citep{2015ApJ...802...99K}, a total energy of $E_{\rm SN}=10^{51}\erg$ is injected, with a thermal-to-kinetic energy ratio consistent with that of the Sedov-Taylor stage of evolution when averaged over the feedback injection region. If the shell formation is unresolved (i.e., $M_{\rm SNR}>M_{\rm sf}$), a total of $p_{\rm SNR}=2.8\times10^5\Msun\kms(n_{\rm amb}/\pcc)^{-0.17}$ radial momentum is injected. Given the high resolution and natural clustering of SNe realized in our simulations, only a small fraction of SN events ($<10\%$) take the form of pure momentum injection. \subsection{UV Radiation Transfer and Cosmic Rays}\label{sec:rt} All star cluster particles with (mass-weighted) age $t_{\rm age} \le 20 \Myr$ act as sources of UV radiation. Appendix C in \citet{KGKO} provides details regarding the radiation characteristics of the adopted SPS model. We divide UV radiation into three frequency bins: photoelectric (PE; $110.8\nm < \lambda < 206.6\nm$), Lyman-Werner (LW; $91.2\nm < \lambda < 110.8\nm$), and Lyman continuum (LyC; $\lambda < 91.2\nm$). Both LW and PE photons (collectively referred to as FUV) provide an important source of gas heating via the photoelectric effect when absorbed by small dust grains and large molecules. All FUV photons are attenuated by dust along rays, and the LW band photons also dissociate H$_2$ and CO, and ionize C. To compute the dissociating radiation field for H$_2$, we apply the \citet{Draine96} self-shielding function to the LW band, using the H$_2$ column density integrated from each source to each cell.
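For reference, the self-shielding factor is commonly evaluated with the fitting formula of \citet{Draine96} (their Equation 37); a minimal sketch is given below, where the function name and the default Doppler parameter are illustrative (the exact form adopted in our framework is specified in \citealt{KGKO}).
\begin{verbatim}
import numpy as np

def f_shield_H2(N_H2, b5=1.0):
    # Draine & Bertoldi (1996) H2 self-shielding fit (their Eq. 37).
    # N_H2 : H2 column density from source to cell [cm^-2]
    # b5   : Doppler parameter in units of 10^5 cm/s
    x = N_H2 / 5.0e14
    return (0.965 / (1.0 + x / b5)**2
            + 0.035 / np.sqrt(1.0 + x)
            * np.exp(-8.5e-4 * np.sqrt(1.0 + x)))

# The local H2 photodissociation rate then scales as
# f_shield_H2(N_H2) * exp(-tau_dust) * (unattenuated LW rate).
\end{verbatim}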
The LyC photons (also referred to as EUV) ionize neutral hydrogen (H and H$_2$) and are attenuated by dust grains. To follow the propagation of UV photons from young star clusters, we utilize the adaptive ray tracing module in the {\tt Athena} code \citep{2017ApJ...851...93K}. After the hyperbolic part of the governing equations (including shearing-box and gravitational source terms) is integrated, we solve the time-independent UV radiation transfer equation, \begin{equation} \label{eq:RT} \khat \cdot \nabla I_j = -\alpha_j I_j \end{equation} for each frequency bin $j$ along a set of rays. Here, $I_j$ is the intensity, $\khat$ is the unit vector specifying the direction of radiation propagation, and $\alpha_j$ is the absorption cross section per unit volume. In brief, $12 \times 4^4$ photon packets are injected for each band at the location of each source, corresponding to HEALPix level 4 \citep{Gorski05}. Photon packets propagate radially outward, and rays are split into four children when needed to ensure that each cell is intersected by at least 4 rays per source. The optical depth of rays through each intersecting cell is computed and used to apply the corresponding rate of energy and momentum deposition. The radiation energy density ${\erad}_{,j}$ and flux ${\Frad}_{,j}$ in each cell are obtained by summing over contributions from all rays passing through it. We then have $\frad =\sum_j \rho\kappa_j{\Frad}_{,j}/c$, which we use to update the gas momentum and corresponding kinetic energy density. As with other fluid properties, shearing-periodic boundary conditions are applied to the ray tracing. Photon packets crossing the local-radial ($\xhat$) edges of the computational domain are offset by the distance the boundaries have sheared in the local-azimuthal ($\yhat$) direction, and the positions of sources are offset accordingly. The boundary condition in the $y$ direction is periodic. We terminate a ray if a photon packet exits the $z$-boundary of the computational domain or if the optical depth measured from the source is larger than $\tau_{\rm max}=30$ in all frequency bins. With just these basic ray termination conditions, however, we find that the computational cost of performing adaptive ray tracing every timestep is prohibitive. To reduce the cost of ray tracing without an undue loss of accuracy, we adopt three modifications. (1) We perform ray tracing at intervals $\Delta t_{\rm 2p}$ based on the Courant–Friedrichs–Lewy (CFL) time step for the cold and warm gas at $T < 2.0 \times 10^4 \Kel$, or whenever a new radiation source is created, or whenever an existing radiation source is removed. (2) We put a hard limit on the maximum horizontal distance a ray may propagate from each source, which we denote as $\dxymax$. (3) We terminate a ray in the FUV band if the ratio between the luminosity of the photon packet and the total luminosity of all sources in the computational domain falls below a small number $\epp$. The first condition, on the interval for radiation updates, is justified because the hot gas has very low opacity and its interaction with radiation is minimal. Without a limit on the maximum horizontal propagation distance of rays, the cost of ray tracing can become prohibitively high; we have found that imposing condition (2) reduces the cost to an acceptable level without significantly affecting the radiation field solution in the midplane region, provided $\dxymax$ is large enough (see \autoref{sec:A-rt-conv}).
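Schematically, the per-ray termination logic combines the basic criteria with conditions (2) and (3), which are discussed further below; the following minimal sketch is illustrative (attribute and variable names are ours, not those of the {\tt Athena} implementation).
\begin{verbatim}
def terminate_ray(ray, dxy_max, tau_max=30.0, eps_pp=0.0):
    # Basic criteria: the packet exits the z-boundary, or is
    # optically thick (tau > tau_max) in all frequency bins.
    if ray.exits_z_boundary or all(t > tau_max for t in ray.tau):
        return True
    # Condition (2): hard cap on horizontal distance from source.
    if ray.dxy_from_source > dxy_max:
        return True
    # Condition (3), FUV bands only: the packet luminosity is a
    # negligible fraction of the total source luminosity.
    if ray.is_fuv and ray.lum / ray.lum_tot_sources < eps_pp:
        return True
    return False
\end{verbatim}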
Condition (3) limits the maximum distance traveled by photon packets originating from faint sources, without seriously degrading the accuracy of the radiation field. Terminating rays based on $\dxymax$ and $\epp$ causes underestimation of the FUV radiation field at high altitudes. To address this issue without incurring additional computational cost, we apply an analytic solution based on plane-parallel radiation transfer (see \autoref{sec:A-rt-planeparallel}). We stop ray tracing for the PE and LW bands at $|z| > \zpp$ and measure their area-averaged intensity as a function of $\mucos = \khat \cdot \zhat$. We then calculate the radiation energy density by integrating the intensity, using the horizontally averaged mean density at each $z$. We adopt $\zpp=300\pc$. This approach gives the mean radiation field as a function of $z$, which is uniform horizontally at a given $z$. It is generally adequate for high-$z$ regions where the majority of gas is diffuse. For the LyC band, we do not apply condition (3), nor do we adopt the plane-parallel approximation at large $|z|$. The background CR ionization rate is scaled relative to the solar neighborhood level based on the SFR. Specifically, we adopt $\xicrunatt = 2 \times 10^{-16} \,{\rm s}^{-1} \Sigma_{\rm SFR,40}'/\Sgas'$, where $\Sigma_{\rm SFR,40}'$ and $\Sgas'$ are, respectively, the SFR surface density measured from stars formed in the last 40~Myr and the instantaneous gas surface density, relative to the solar neighborhood values $\Ssfr=2.5 \times 10^{-3} \sfrunit$ and $\Sgas=10\Surf$. The local CR ionization rate is then set to \begin{equation}\label{e:cratt} \xicr = \begin{cases} \xicrunatt & \mbox{if } N_{\rm eff} \le N_{0} \,; \\ \xicrunatt \left(\frac{N_{\rm eff}}{N_{0}}\right)^{-1} & \mbox{if } N_{\rm eff} > N_{0} \,, \end{cases} \end{equation} where $N_{\rm eff}$ is the effective shielding column density and $N_{0} = 9.35\times 10^{20}\,\cm^{-2}$. To compute $N_{\rm eff}$ at each zone, we additionally follow the radiation energy density in the PE band without dust attenuation. We then use the ratio of unattenuated to attenuated PE radiation energy density to obtain $N_{\rm eff} = 10^{21} \cm^{-2} Z_d'^{-1}\ln (\mathcal{E}_{\rm PE,unatt}/\mathcal{E}_{\rm PE})$. We note that CR transport in the ISM is uncertain and our prescription for CR attenuation in setting $\xicr$ should be considered provisional; the term $1/\Sgas'$ in setting $\xicrunatt$ represents attenuation under average conditions and is motivated by \citet{2003ApJ...587..278W}. The additional attenuation at high column in Equation~\eqref{e:cratt} is motivated by observations of a column-density-dependent CR ionization rate \citep{2017ApJ...845..163N}. \subsection{Photochemistry, Cooling, and Heating}\label{sec:pchem} In the MHD integrator, we transport molecular and ionized hydrogen using passive scalars ($\xHH$ and $\xHII$, respectively) without source terms, as implemented in {\tt Athena}. We obtain the atomic hydrogen abundance from the closure relation $\xHI = 1 - 2\xHH - \xHII$.
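For concreteness, recovering the abundances from the conserved variables amounts to the following (a minimal sketch; variable names and the numerical value of $\muH$ are illustrative):
\begin{verbatim}
def hydrogen_abundances(rho, rho_H2, rho_HII,
                        mu_H=1.4, m_H=1.6726e-24):
    # rho_s = rho * x_s for s in (H2, H+), as in Section 2.1.
    x_H2 = rho_H2 / rho
    x_HII = rho_HII / rho
    x_HI = 1.0 - 2.0 * x_H2 - x_HII   # closure relation
    n_H = rho / (mu_H * m_H)          # H nuclei per cm^3 (cgs)
    return n_H, x_HI, x_H2, x_HII
\end{verbatim}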
We then update the temperature and abundances in an operator split manner by solving two ordinary differential equations (ODEs): \begin{equation}\label{eq:ODE1} \frac{de}{dt} = K \dfrac{d T_{\mu}}{dt} = \mathcal{G} - \mathcal{L}\,, \end{equation} \begin{equation}\label{eq:ODE2} \frac{d x_s}{dt} = \mathcal{C}_s \,, \;\;\;\; \text{(s: ${\rm H}_2$ and ${\rm H}^+$)} \,, \end{equation} where $e=P/(\gamma - 1)$ is the internal energy density, $T_{\mu} \equiv T/\mu$ with $\mu = \muH/(1.1 + \xe - \xHH)$ the mean molecular weight per particle, and $K = \nH \muH k_B/(\gamma-1)$ is treated as a constant during the update. Given the generally short cooling/heating and chemical time scales, we take substeps with the time step size determined by the minimum of the MHD time step and 10\% of the cooling, heating, and chemical time scales. At each substep, we solve the two ODEs sequentially. \autoref{eq:ODE1} is solved using the first-order backward difference formula with a Taylor expansion of the source terms, $\mathcal{G}-\mathcal{L}$, with respect to temperature, using the previous step's abundances and other information (see the parameter dependences in \autoref{eq:Gamma_def} and \autoref{eq:Lambda_def}, as well as Section 4 of \citealt{KGKO}). We then evaluate $\mathcal{C}_s$ (see Section 3.1 of \citealt{KGKO}) and solve \autoref{eq:ODE2} as a system of linear ODEs using the backward Euler method. After the time-dependent update of hydrogen species, we compute the abundances of O$^+$, C$^+$, CO, C, and O under the steady-state assumption (see Sections 3.2 and 3.3 of \citealt{KGKO}; see also \citealt{2017ApJ...843...38G}). We finally calculate the electron abundance $x_e$ from H$^+$ with contributions from C$^+$, O$^+$, and molecules at $T<2\times10^4\Kel$ or from He and metals assuming CIE at $T>2\times10^4\Kel$. We refer the reader to \citet{KGKO} for complete information on the processes we include and the rates we adopt. Here, we list the formation and destruction processes that are explicitly considered, as well as the cooling and heating processes. \begin{itemize} \item {\bf Molecular hydrogen:} formation by grain catalysis; destruction by CR ionization, photodissociation, photoionization, and collisional dissociation. \item {\bf Ionized hydrogen:} formation by photoionization, CR ionization, and collisional ionization of atomic hydrogen; destruction by radiative recombination and grain-assisted recombination. \item {\bf Ionized carbon:} formation by photoionization, CR-induced photoionization, and CR ionization of atomic carbon; destruction by grain-assisted recombination, radiative and dielectronic recombination, and the CH${}_2^{+}$ formation reaction. \item {\bf Heating:} photoelectric effect on small grains by FUV photons; CR ionization of $\Hn$ and $\HH$; photoionization of $\Hn$ and $\HH$; formation, photodissociation, and UV pumping of $\HH$. \item {\bf Cooling}: \begin{itemize} \item $\Lambda_{\rm hyd}$: collisionally excited Ly$\alpha$ resonance line from $\Hn$; collisional ionization of $\Hn$; collisional dissociation of $\HH$; ro-vibrational lines from $\HH$; bremsstrahlung and radiative/grain-assisted recombination of free electrons with $\Hp$. \item $\Lambda_{\rm others}$ ($T<2\times10^4\Kel$): fine structure lines from C$^+$, C, and O; rotational lines of CO; combined nebular lines in ionized gas ($\Lambda_{\rm neb}$); grain-assisted recombination. \item $\Lambda_{\rm CIE}$ ($T>3.5\times10^4\Kel$): ion-by-ion CIE cooling table for He and metals from \citet{2012ApJS..199...20G}.
Metal cooling is scaled linearly with $Z_g'$. \end{itemize} We smoothly transition from $\Lambda_{\rm others}$ to $\Lambda_{\rm CIE}$ at $2\times10^4\Kel < T<3.5\times10^4\Kel$ using a sigmoid function, while $\Lambda_{\rm hyd}$ is applied at all temperatures using time-dependent hydrogen abundances. \end{itemize} \subsection{Models}\label{sec:model} \begin{deluxetable}{lCCCCCC} \tablecaption{Input Physical Parameters\label{tbl:model}} \tablehead{ \colhead{Model} & \dcolhead{\Sigma_{\rm gas,0}} & \dcolhead{\Sstar} & \dcolhead{\rho_{\rm dm}} & \dcolhead{\Omega} & \dcolhead{z_*} & \dcolhead{R_0} } \colnumbers \startdata {\tt R8} & 12 & 42 & 6.4\cdot 10^{-3}& 28 & 245 & 8 \\ {\tt LGR4} & 50 & 50 & 5.0\cdot 10^{-3}& 30 & 500 & 4 \enddata \tablecomments{ Column (2): initial gas surface density in $M_\odot\pc^{-2}$. Column (3): stellar surface density in $M_\odot\pc^{-2}$. Column (4): dark matter volume density at the midplane in $M_\odot\pc^{-3}$. Column (5): angular velocity of galactic rotation at the domain center in ${\rm km\, s^{-1}\, kpc^{-1}}$. Column (6): scale height of the stellar disk in pc. Column (7): nominal galactocentric radius in kpc. } \end{deluxetable} We consider two galactic conditions {\tt R8} and {\tt LGR4}, as described in \autoref{tbl:model}, which are analogous to the models of the same names in the TIGRESS-classic suite \citep{2020ApJ...900...61K}; the ``LG'' prefix denotes lower external gravity at a given gas surface density. The {\tt R8} model represents conditions similar to the solar neighborhood \citep[e.g.,][]{2015ApJ...814...13M}. In terms of gas and stellar surface densities, the conditions in models {\tt R8} and {\tt LGR4} roughly correspond to the area-weighted and molecular-mass-weighted averages of conditions in nearby star-forming galaxies surveyed as part of PHANGS \citep{2022AJ....164...43S}. For both simulations, the domain is a tall box, with dimensions $(1024\pc)^2\times 6144 \pc$ for {\tt R8}, and $(512\pc)^2 \times 3072 \pc$ for {\tt LGR4}. We use $\dxymax = 2048\pc$ and $\epp=0$ for {\tt R8} and $\dxymax=1024\pc$ and $\epp=10^{-8}$ for {\tt LGR4}. With these choices, we find good convergence for the EUV radiation field everywhere, the FUV field near the midplane, and overall simulation outcomes (see \autoref{sec:A-rt-conv}). We note that the selection of the optimal values of $\dxymax$ and $\epp$ depends on the system's conditions, especially on $\Zd$ (which determines dust absorption). We run each simulation with two different resolutions: $8\pc$ and $4\pc$ for {\tt R8}, and $4\pc$ and $2\pc$ for {\tt LGR4}. The initial gas distribution follows double Gaussian profiles \citep[see][]{2017ApJ...846..133K} representing warm and hot components, with Gaussian scale heights corresponding to initial velocity dispersions of 10 and 100~km/s for {\tt R8}, and 15 and 150~km/s for {\tt LGR4}. To reduce the initial transients, we add velocity perturbations with amplitudes of 10 and 15~km/s for {\tt R8} and {\tt LGR4}, respectively, along with randomly distributed initial star clusters that provide nonzero initial heating and SNe. The initial magnetic field is set to be azimuthal $\Bvec=B_0\yhat$ with uniform plasma beta $\beta_0\equiv \Pth/(B_0^2/8 \pi)=1$, which is close to the expected saturation value of the magnetic field \citep[e.g.,][]{2015ApJ...815...67K,2022ApJ...936..137O}.
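Inverting the definition of the plasma beta gives the initial field strength,
\begin{equation}
B_0 = \left(\frac{8\pi \Pth}{\beta_0}\right)^{1/2}.
\end{equation}
As an illustrative number (not the exact initial condition), taking $\beta_0=1$ with a midplane thermal pressure $\Pth/k_B \approx 3\times10^3\pcc\Kel$ (comparable to the saturated neutral-gas value found for {\tt R8} in \autoref{sec:jointpdf}) gives $B_0 \approx 3\,\mu{\rm G}$, in line with typical solar-neighborhood field strengths.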
After one or two cycles of star formation and feedback, the system reaches quasi-steady evolution, with minimal remaining impact from the initial transients or from the choices of initial density profile and turbulence level. \section{TIGRESS-NCR Simulations}\label{sec:simulations} We first provide an overview of the {\tt R8} and {\tt LGR4} simulations. Here, we focus on global ISM properties and energetics as well as the distribution of the multiphase ISM (defined both thermally and chemically) near the galactic midplane. \begin{deluxetable*}{lCCCCCCC} \tablecaption{Global Properties at Saturation\label{tbl:satprop}} \tablehead{ \colhead{Model} & \dcolhead{\Sgas} & \dcolhead{{\Ssfr}_{,10}} & \dcolhead{H} & \dcolhead{\sigma_{{\rm eff}}} & \dcolhead{\sigma_{z, {\rm turb}}} & \dcolhead{t_{\rm ver}} & \dcolhead{t_{\rm dep}} \\ \colhead{} & \colhead{$(M_\odot\pc^{-2})$} & \colhead{$(10^{-3} \sfrunit)$} & \colhead{(pc)} & \colhead{(km/s)} & \colhead{(km/s)} & \colhead{(Myr)} & \colhead{(Gyr)} } \colnumbers \startdata {\tt R8-4pc} & 10.6^{+0.2}_{-0.2} & 2.8^{+1.5}_{-1.0} & 199.3^{+36.2}_{-44.3} & 12.0^{+1.0}_{-0.5} & 7.7^{+1.0}_{-0.7} & 23.3^{+8.0}_{-3.8} & 3.6^{+1.0}_{-1.0} \\ {\tt R8-8pc} & 10.3^{+0.4}_{-0.2} & 2.8^{+3.5}_{-2.0} & 233.5^{+147.1}_{-57.9} & 13.5^{+2.9}_{-1.3} & 9.4^{+3.3}_{-2.1} & 24.3^{+16.7}_{-5.2} & 3.2^{+2.5}_{-1.4} \\ {\tt LGR4-2pc} & 37.9^{+1.3}_{-0.9} & 34.8^{+10.4}_{-10.7} & 164.5^{+31.4}_{-47.1} & 13.4^{+0.7}_{-0.7} & 8.3^{+0.9}_{-0.8} & 17.8^{+6.7}_{-3.9} & 1.2^{+0.2}_{-0.2} \\ {\tt LGR4-4pc} & 36.2^{+1.4}_{-0.6} & 29.0^{+20.3}_{-8.5} & 176.2^{+64.0}_{-66.2} & 14.6^{+1.2}_{-1.1} & 9.9^{+1.5}_{-1.2} & 15.4^{+9.6}_{-4.0} & 1.0^{+0.4}_{-0.1} \\ \enddata \tablecomments{ Each column provides median values as well as the 16$^{\rm th}$ to 84$^{\rm th}$ percentile range, over $t=250-450\Myr$ for {\tt R8} and $t=250-350\Myr$ for {\tt LGR4}. Column (2): gas surface density. Column (3): SFR surface density. Column (4): mass-weighted gas scale height. Column (5): effective vertical velocity dispersion. Column (6): turbulent component of vertical velocity dispersion. Column (7): vertical dynamical time scale. Column (8): gas depletion time. Note that Columns (4)-(7) are calculated for the warm and cold gas with temperature $T<3.5\times10^4\Kel$. See text for definitions.} \end{deluxetable*} \begin{figure*} \centering \includegraphics[width=\linewidth]{full_tevol_global2.png} \caption{Time evolution of global properties in model {\tt R8} (left) and {\tt LGR4} (right). From top to bottom, we plot (a) \& (b) surface densities of gas, newly-formed stars, and outflows, (c) \& (d) SFR surface density (over the last 10~Myr), (e) \& (f) effective vertical velocity dispersion, and (g) \& (h) gas scale height. Results from models with different resolutions are presented, as noted in the keys. We apply 5~Myr rolling averages to reduce high-frequency fluctuations and to ease comparison between models at different resolutions. The shaded area represents the time interval over which the saturated properties are calculated.} \label{fig:tevol_global} \end{figure*} \begin{figure*} \centering \includegraphics[width=\textwidth]{full_tevol_source_sink.png} \caption{Time evolution of energy source and sink rates per unit area in the simulation. From left to right, we show (a) the UV radiation energy injection rate, (b) the radiative heating rate, (c) the net cooling rate, and (d) the SN energy injection rate. About $5\%$ of UV radiation energy goes into heating the warm and cold gas.
The total radiative cooling always exceeds the radiative heating because cooling must also offset the heating provided by SNe. Only 2--3\% of the injected SN energy leaves our simulation domain.} \label{fig:tevol_ss} \end{figure*} \begin{figure*} \centering \fig{{R8_4pc_NCR.full.xy2048.eps0.np768.has.0285.snapshot}.png}{\textwidth}{(a) $t=280\Myr$} \fig{{R8_4pc_NCR.full.xy2048.eps0.np768.has.0300.snapshot}.png}{\textwidth}{(b) $t=295\Myr$} \caption{Visualization of the simulated ISM from model {\tt R8-4pc} at (a) $t=280\Myr$ -- top set of panels; and (b) $295\Myr$ -- bottom set of panels. {\bf Left:} Gas surface density and emission measure in the $x$-$y$ plane (corresponding to integrals of $\nH$ and $n_e^2$ along the $z$-axis). The letters in the emission measure map indicate ionized bubbles (warm ionized bubbles -- A1, A2, A3; young superbubbles -- B1, B2; old superbubble -- C). {\bf Right:} Slices through the midplane, $z=0$. From left to right, the top row shows hydrogen number density $\nH$, temperature $T$, and electron fraction $x_e$; the bottom row shows normalized FUV radiation energy density $\chiFUV$, cooling rate $\mathcal{L}$, and net heating rate $\mathcal{G}-\mathcal{L}$. $\chi_{\rm FUV}\equiv J_{\rm FUV}/J_{\rm FUV}^{\rm D78}$, where $J_{\rm FUV}$ is the FUV mean intensity and $J_{\rm FUV}^{\rm D78} = 2.1\times10^{-4}\junits$ \citep{1978ApJS...36..595D}. Note that for $\mathcal{G}-\mathcal{L}$, the pink colormap is used only for positive values (net heating), while the blue colormap from $\mathcal{L}$ is used for negative values (net cooling). Contours of EUV radiation energy density are overlaid in the $\chiFUV$ panels for $\log [\mathcal{E}_{\rm LyC}/(\erg\pcc)] = -15$ (red), $-14$ (orange), $-13$ (green), and $-12$ (blue). In the $\Sgas$ panel, young star clusters with age $<40$~Myr and $|z|<50\pc$ are shown as circles. The size of the circles is proportional to the cluster mass, although in practice the mass range is narrow ($10^3-5\times10^3\Msun$). Clusters with age $<20$~Myr (magenta-ish colors; see colormap in the bottom-right of the $\Sgas$ panel) are FUV sources, while very young clusters (age $<5$~Myr) are the only significant EUV sources (enclosed by green/blue contours in the $\chi_{\rm FUV}$ panel). Locations where SNe exploded within the past 10~Myr, and within $|z|<15\pc$ of the slice shown, are marked as star symbols in the $\nH$ panel.} \label{fig:R8_snapshot} \end{figure*} \subsection{Global ISM Properties}\label{sec:ism_prop} Figure~\ref{fig:tevol_global} shows the time evolution of key global quantities: the gas surface density $\Sgas\equiv M_{\rm gas}/(L_xL_y)$, along with the surface densities of newly formed stars $\Sstarnew\equiv M_{\rm *, new}/(L_xL_y)$ and of mass lost via outflows from the computational domain $\Sout\equiv M_{\rm out}/(L_xL_y)$; the SFR surface density over the last $\Delta t = 10 \Myr$, \begin{equation}\label{eq:sfr10} {\Ssfr}_{, 10}\equiv \frac{\sum_i M_{*,i}(t_{\rm age}<\Delta t)}{L_xL_y\Delta t}, \end{equation} where $M_{*,i}(t_{\rm age}<\Delta t)$ is the total mass of star particles with age younger than $\Delta t$; the effective vertical velocity dispersion \begin{equation}\label{eq:szeff} \sigma_{{\rm eff}} \equiv \sbrackets{ \frac{\int\rbrackets{\rho v_z^2 + P + B^2/(8\pi) - B_z^2/(4\pi)}dV}{\int\rho dV}}^{1/2}; \end{equation} and the mass-weighted scale height of the gas \begin{equation}\label{eq:H} H\equiv\rbrackets{\frac{\int \rho z^2 dV}{\int \rho dV}}^{1/2}.
\end{equation} Here, the values of $\sigma_{\rm eff}$ and $H$ are calculated only for the warm and cold gas with temperature $T<3.5\times10^4\Kel$ because these quantities for the hot gas are not well defined in local simulations. We discuss phase-separated velocity dispersions in \autoref{sec:phase}. We note that the magnetic term in $\sigma_{{\rm eff}}$ is not simply the magnetic pressure, but instead represents the vertical component of the Maxwell stress, including both magnetic pressure and tension. We measure the mass-weighted turbulent vertical velocity dispersion for the warm and cold gas, i.e., $\sigma_{z, {\rm turb}}\equiv (\int \rho v_z^2dV/\int\rho dV)^{1/2}$, and report this separately in \autoref{tbl:satprop}. The simulations begin in rough hydrostatic equilibrium with $H\sim 150-200\pc$. The idealized initial setups soon lead to a burst of star formation and associated feedback as the disk loses its initial vertical support from both thermal and turbulent pressure. During the first $\sim 100\Myr$ of evolution, both models experience at least two complete star formation and feedback cycles (rise and fall of SFRs), whose timescale is proportional to the vertical crossing timescale of the disk \citep{2020ApJ...900...61K}. To reduce the computational time needed for high-resolution models, we refine and restart the low-resolution simulations ({\tt R8-8pc} and {\tt LGR4-4pc}) after the initial transient has passed ($t=200\Myr$). The mesh-refined, restarted models are run for longer than one orbit time (or 3-4 star formation and feedback cycles) to obtain a fair statistical sample of states at higher resolution. In \autoref{tbl:satprop}, we summarize the medians and 16$^{\rm th}$ to 84$^{\rm th}$ percentile ranges of quantities of interest over $t=250-450\Myr$ and $t=250-350\Myr$ for {\tt R8} and {\tt LGR4}, respectively, at different resolutions. Our results verify the overall convergence of the global properties. As shown in the top row of \autoref{fig:tevol_global}, the gas surface density (solid) decreases gradually due to star formation (dashed) and outflows (dotted). The global properties shown in the bottom three rows of \autoref{fig:tevol_global} reach a quasi-steady state, with substantial ($\sim 0.2-0.3$~dex), quasi-periodic temporal fluctuations. The characteristic period is the vertical oscillation time determined by the \emph{total} (gas+star+dark matter) midplane density $\sim (4\pi G \rho_{\rm tot})^{-1/2}$, which is similar to the vertical crossing time \citep[see][]{2020ApJ...900...61K}. In \autoref{tbl:satprop}, we list the vertical crossing time $t_{\rm ver}\equiv H/\sigma_{\rm z, turb}$ and gas depletion time $t_{\rm dep}\equiv \Sgas/\Ssfr$. The quasi-periodic fluctuations in $\Ssfr$ and $\sigma_{{\rm eff}}$ occur at higher frequencies than those in $H$. Occasionally, when there is a large burst, systematic offsets among the three quantities stand out: a peak in SFR is followed by a peak in velocity dispersion and then in scale height (e.g., see the peaks near 100~Myr and 300~Myr in {\tt R8-8pc}). \subsection{Global Energetics}\label{sec:energetics} \autoref{fig:tevol_ss}(a) shows the time evolution of the total energy injection rate per unit area by UV radiation\footnote{Here, we use the $\dot{S}$ notation for any energy gain and loss rates per unit area. In previous publications \citep[e.g.,][]{2020ApJ...900...61K,2022ApJ...936..137O}, we used $\Sigma_{\rm FUV}$ for the surface density of FUV luminosity.
With the current notation, $\dot{S}_{\rm FUV}\equiv\Sigma_{\rm FUV}$.} $\dot{S}_{\rm UV} \equiv L_{\rm UV,tot}/(L_x L_y)$, where $L_{\rm UV,tot}$ is the total UV (PE+LW+LyC) luminosity of star particles with $t_{\rm age} < 20\Myr$. This energy injection rate is solely determined by the adopted SPS model (STARBURST99 in our case) and the recent star formation history. UV radiation propagates through the ISM and is absorbed by gas and dust, photoionizing some regions where EUV penetrates, and heating the gas via the photoelectric effect in other regions where FUV penetrates. In addition, CR ionization is an important heating source in regions that are shielded from both EUV and FUV. The total \emph{radiative} (including CR) heating rate per unit area $\dot{S}_{\rm heat} \equiv\int \mathcal{G} dV/(L_xL_y)$ shown in \autoref{fig:tevol_ss}(b) is the sum of hydrogen photoionization heating by LyC radiation ($\dot{S}_{\rm heat,PI}/ \dot{S}_{\rm heat}\sim 75\%$), photoelectric heating from FUV (PE+LW) radiation on small dust grains ($\dot{S}_{\rm heat,PE}/ \dot{S}_{\rm heat} \sim 20\%$), and CR ionization heating ($\dot{S}_{\rm heat,CR}/\dot{S}_{\rm heat} \sim 1-2\%$), plus a tiny contribution from $\HH$ heating ($<0.1\%$). The global heating efficiency, defined as the ratio of the total heating rate to the UV radiation injection rate, is $\dot{S}_{\rm heat,PI+PE}/\dot{S}_{\rm UV}\sim 5-6\%$, with individual efficiencies $\dot{S}_{\rm heat,PE} / \dot{S}_{\rm FUV}\sim 2\%$ and $\dot{S}_{\rm heat,PI} / \dot{S}_{\rm EUV}\sim 15\%$. The radiative heating within the simulation domain is more than balanced by radiative cooling. \autoref{fig:tevol_ss}(c) shows the difference between cooling and heating per unit area within the simulation volume. The difference is positive, indicating net cooling. The excess in radiative cooling is offset by energy input from SN feedback ($\dot{S}_{\rm SN}$; \autoref{fig:tevol_ss}(d)). $\dot{S}_{\rm SN}$ is about two orders of magnitude smaller than the UV radiation injected ($\dot{S}_{\rm UV}$; \autoref{fig:tevol_ss}(a)), and a factor $\sim3$ lower than the radiative heating rate $\dot{S}_{\rm heat}$. A small fraction of energy also leaves the computational domain through outflows; the majority of outflowing energy is in the hot gas and was therefore originally deposited by SNe, and the kinetic energy of outflowing gas is also powered by SNe. This outflowing energy escaping the domain ($\sim2-3\%$ of $\dot{S}_{\rm SN}$) accounts for the small excess of SN injection energy over the net cooling within the domain. \subsection{ISM Cartography}\label{sec:cartography} The instantaneous spatial distribution of the ISM mass and energy densities is highly structured and complex. To provide a visual impression of the ISM structure in our simulations, we display maps of various quantities from {\tt R8-4pc} at two epochs, $t=280$ and 295~Myr, in \autoref{fig:R8_snapshot}(a) and (b). These times respectively correspond to a local peak and trough of $\Ssfr$ (see \autoref{fig:tevol_global}(c)). We note that qualitative features presented here for {\tt R8} are also seen in {\tt LGR4}. We first focus on the epoch shown in \autoref{fig:R8_snapshot}(a), shortly after a burst of star formation occurred, during which many new star clusters were formed. Very young clusters ($t_{\rm age} < 5\Myr$) act as strong UV radiation sources. These clusters are spatially correlated with each other and with the dense clouds where they were born.
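The surface density and emission measure maps in the left panels of \autoref{fig:R8_snapshot} are simple column integrals of $\nH$ and $n_e^2=(x_e \nH)^2$; a minimal sketch for a uniform grid is shown below (array names are illustrative).
\begin{verbatim}
import numpy as np

def project_maps(n_H, x_e, dz, mu_H=1.4, m_H=1.6726e-24):
    # n_H, x_e : 3D arrays with z along axis 0; dz : cell size [cm]
    Sigma_gas = mu_H * m_H * np.sum(n_H, axis=0) * dz  # g cm^-2
    EM = np.sum((x_e * n_H)**2, axis=0) * dz           # cm^-5
    return Sigma_gas, EM
\end{verbatim}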
There are two distinct types of bubble structures: hot SN-driven bubbles (characterized by low density, diffuse EM, and high temperature) and warm ionized bubbles (characterized by high EM). The electron fractions of the two types of bubbles differ. The hot ionized gas has a higher value, $\xe\approx 1.2$ (bright green), owing to free electrons from collisionally ionized H, He, and metals. In contrast, $\xe\approx 1$ (dark green) in the warm ionized gas, as electrons come mostly from photoionized H (with a tiny contribution from $\mathrm{C^+}$, $\mathrm{O^+}$, and molecular ions). In the upper region of the domain ($y\sim 0.2\kpc$), examples of warm ionized bubbles, corresponding to high-EM sites, are marked as A1, A2, and A3. In the middle region ($y\sim 0\kpc$), two superbubbles that formed relatively recently and show moderate EM are marked as B1 and B2. An example of an old, low-EM superbubble is marked as C. It is evident that recently born star cluster complexes are responsible for photoionizing bubbles A1, A2, and A3. Ionizing radiation from clusters near A1 and A2 is fairly well blocked toward bubbles B1 and B2, while extended radiation from these sources could ionize a substantial area toward the top of the domain. Large areas within the domain remain neutral ($x_e\lesssim 10\%$; white-blue in the $\xe$ panel) as EUV is effectively blocked by the large absorption cross section of neutral hydrogen. FUV is significantly absorbed only by dense clouds, making the radiation energy density low in their interiors and also casting shadows behind them. Still, it is evident that most of the neutral gas is irradiated with FUV ($\chi_{\rm FUV}\gtrsim 0.5$), which is a major heating source. Bubble C is an old, hot superbubble whose star clusters are old and no longer contribute significant EUV. As a result, the hot gas is directly bounded by neutral gas. In contrast, the intermediate-age clusters inside hot bubbles B1 and B2, together with other nearby young clusters, create a photoionized region between the hot and neutral gas. The strongest cooling in the slice, as shown in the $\cal{L}$ panel, occurs in the photoionized regions near Bubbles A and B; the main coolants are nebular lines of metal ions. However, the heating produced by ionizing radiation offsets or even exceeds the cooling in this region, leading to net heating (see the $\cal{G}-\cal{L}$ panel). The net cooling rate is highest at hot bubble boundaries (CIE cooling at $T\sim10^5\Kel$). These interfaces, where hot gas mixes with denser gas and becomes strongly radiative, are important in bubble energetics. As the young star clusters shown in \autoref{fig:R8_snapshot}(a) age, they begin to produce SNe, resulting in superbubbles, which merge into a very large hot bubble in the center of \autoref{fig:R8_snapshot}(b). This is carved by several clustered SNe (positions shown as star symbols in the $\nH$ panel). At this epoch, there is only one significant ionizing source at the center of the midplane slice, and the area filled with the warm ionized gas (dark green in the $\xe$ panel) is greatly reduced. There are a few out-of-midplane sources (not shown in the $\Sgas$ panel) responsible for large EM bubbles at $(x,y)\sim(0, 0.5)\kpc$ and $(x,y)\sim (-0.2,-0.3)\kpc$. It is also noticeable that old clusters are now spread across the simulation domain; the clustering of clusters is reduced. The ISM phase structure evolves considerably over the interval between the two snapshots.
The midplane volume filling factor of the warm ionized medium reaches a local maximum of $\sim 30\%$ at $t=280\Myr$ (\autoref{fig:R8_snapshot}(a)), decreasing to $\sim 10\%$ within the next 5~Myr. In contrast, the midplane hot gas filling factor increases gradually from $20\%$ at $t=280\Myr$ to $50\%$ at $t=295\Myr$. The filling factor of the warm neutral medium changes from $40\%$ to $20\%$ over the same $15\Myr$ interval. \begin{deluxetable*}{lCCc} \tablecaption{Phase Definition \label{tbl:phase}} \tablehead{ \colhead{Name} & \dcolhead{{\rm Temperature}} & \dcolhead{{\rm Abundance}} & \colhead{{\rm Shorthand}} } \startdata Cold Molecular Medium\tablenotemark{a} & T<6\cdot10^3\Kel & \xHH>0.25 & \CMM \\ Cold Neutral Medium\tablenotemark{b} & T<500\Kel & \xHI>0.5 & \CNM \\ Unstable Neutral Medium & 500\Kel<T<6\cdot10^3\Kel & \xHI>0.5 & \UNM \\ Unstable Ionized Medium\tablenotemark{c} & T<6\cdot10^3\Kel & \xHII>0.5 & \UIM \\ Warm Neutral Medium & 6\cdot10^3\Kel<T<3.5\cdot10^4\Kel & \xHI>0.5 & \WNM \\ Warm Photoionized Medium & 6\cdot10^3\Kel<T<1.5\cdot10^4\Kel & \xHII>0.5 & \WPIM \\ Warm Collisionally Ionized Medium & 1.5\cdot10^4\Kel<T<3.5\cdot10^4\Kel & \xHII>0.5 & \WCIM \\ Warm-Hot Ionized Medium & 3.5\cdot10^4\Kel<T<5\cdot10^5\Kel & \cdots & \WHIM \\ Hot Ionized Medium & 5\cdot10^5\Kel<T & \cdots & \HIM \\ \enddata \tablenotetext{a}{This includes the unstable temperature range but is dominated by cold gas.} \tablenotetext{b}{In principle, `neutral' includes both `atomic' and `molecular'. Historically, however, the cold neutral medium has been used to denote the cold atomic medium, and we follow that convention here.} \tablenotetext{c}{This includes the cold temperature range but is dominated by unstable gas.} \end{deluxetable*} \subsection{Phase Definition, Filling Factor, and Velocity Dispersion}\label{sec:phase} \begin{figure*} \centering \includegraphics[width=\textwidth]{{LGR4_2pc_NCR.full.0570.phase}.png} \caption{Example showing the gas phase distribution from {\tt LGR4-2pc} at $t=230\Myr$. (a) Midplane slice of temperature. (b) Regions assigned to the mutually exclusive phases defined in \autoref{tbl:phase}, as shown in the key. (c) Mass-weighted joint PDF of $\log\,T$ and $\xHI$ for gas within $|z|<300\pc$, with dividing lines for different gas phase definitions (see \autoref{tbl:phase}). The red dashed curve in panel (c) denotes the relation between $\xHI$ and $T$ based on CIE of hydrogen.
\label{fig:phase}} \end{figure*} \begin{deluxetable*}{lCCCCCCC} \tablecaption{Mass Fractions and Volume Filling Factors with $|z|<300\pc$ \label{tbl:filling}} \tablehead{ \colhead{Model} & \colhead{\Cold} & \colhead{\UNM} & \colhead{\WNM} & \colhead{\WPIM} & \colhead{\WCIM} & \colhead{\WHIM} & \colhead{\HIM} } \startdata \multicolumn{8}{c}{Mass fractions} \\ {\tt R8-4pc} & 27.4^{+4.8}_{-10.3} & 28.8^{+3.4}_{-3.1} & 34.9^{+10.6}_{-5.9} & 7.5^{+5.6}_{-3.1} & 0.2^{+0.2}_{-0.1} & 0.1^{+0.1}_{-0.0} & 0.05^{+0.05}_{-0.02} \\ {\tt R8-8pc} & 22.0^{+11.6}_{-10.8} & 29.1^{+5.6}_{-8.7} & 36.0^{+15.0}_{-9.6} & 6.7^{+10.6}_{-5.4} & 0.3^{+0.3}_{-0.1} & 0.2^{+0.2}_{-0.1} & 0.07^{+0.09}_{-0.03} \\ {\tt LGR4-2pc} & 37.3^{+3.2}_{-5.3} & 27.4^{+1.3}_{-0.9} & 27.2^{+3.1}_{-3.3} & 7.3^{+2.3}_{-1.7} & 0.2^{+0.1}_{-0.1} & 0.1^{+0.1}_{-0.0} & 0.07^{+0.03}_{-0.02} \\ {\tt LGR4-4pc} & 31.6^{+3.8}_{-4.9} & 30.0^{+1.5}_{-1.3} & 29.9^{+4.5}_{-3.5} & 7.3^{+3.6}_{-2.1} & 0.3^{+0.1}_{-0.1} & 0.2^{+0.1}_{-0.1} & 0.10^{+0.03}_{-0.03} \\ \hline \multicolumn{8}{c}{Volume filling factors} \\ {\tt R8-4pc} & 1.6^{+0.5}_{-1.0} & 17.0^{+4.2}_{-6.0} & 48.2^{+6.9}_{-5.9} & 10.9^{+7.6}_{-4.4} & 1.4^{+0.3}_{-0.3} & 3.6^{+1.1}_{-0.9} & 12.5^{+15.5}_{-3.8} \\ {\tt R8-8pc} & 1.5^{+1.9}_{-1.1} & 14.4^{+8.5}_{-9.3} & 44.5^{+11.8}_{-11.9} & 10.0^{+12.5}_{-7.5} & 1.7^{+0.7}_{-0.5} & 4.0^{+1.4}_{-1.4} & 17.5^{+20.7}_{-9.3} \\ {\tt LGR4-2pc} & 2.5^{+0.7}_{-0.6} & 14.5^{+4.2}_{-2.0} & 51.3^{+10.4}_{-8.4} & 9.3^{+3.2}_{-2.1} & 1.1^{+0.2}_{-0.3} & 3.1^{+0.7}_{-1.0} & 15.4^{+6.6}_{-8.9} \\ {\tt LGR4-4pc} & 2.1^{+1.0}_{-0.6} & 12.6^{+3.9}_{-1.4} & 50.9^{+6.1}_{-8.2} & 9.8^{+4.1}_{-3.3} & 1.8^{+0.3}_{-0.3} & 3.7^{+0.7}_{-0.7} & 18.3^{+4.9}_{-7.2} \\ \enddata \tablecomments{ Each column shows the median and 16$^{\rm th}$ and 84$^{\rm th}$ percentile range over $t=250-450\Myr$ and $t=250-350\Myr$ for {\tt R8} and {\tt LGR4}, respectively. } \end{deluxetable*} \begin{deluxetable*}{lCCCCcCCCC} \tablecaption{Mass-weighted vertical velocity dispersion \label{tbl:veld}} \tablehead{ \multirow{2}{*}{Model} & \multicolumn{4}{c}{$\sigma_{\rm eff}$} & & \multicolumn{4}{c}{$\sigma_{\rm z,turb}$} \\ \cline{2-5} \cline{7-10} & \colhead{\Cold} & \colhead{\UNM{}+\WNM} & \colhead{\twop} & \colhead{\WIM} & & \colhead{\Cold} & \colhead{\UNM{}+\WNM} & \colhead{\twop} & \colhead{\WIM} } \startdata {\tt R8-4pc} & 5.4^{+1.2}_{-0.9} & 12.6^{+0.6}_{-0.7} & 11.2^{+0.6}_{-0.6} & 18.4^{+2.1}_{-1.9} & & 4.2^{+1.5}_{-0.6} & 7.6^{+1.1}_{-0.7} & 6.9^{+1.0}_{-0.7} & 13.0^{+2.5}_{-2.0} \\ {\tt R8-8pc} & 5.8^{+1.8}_{-0.7} & 13.1^{+2.2}_{-1.1} & 12.2^{+2.2}_{-1.5} & 21.1^{+10.6}_{-3.4} & & 4.8^{+1.5}_{-1.1} & 8.6^{+2.1}_{-1.8} & 8.1^{+2.5}_{-1.8} & 16.2^{+12.3}_{-4.1} \\ {\tt LGR4-2pc} & 6.1^{+1.1}_{-0.7} & 14.8^{+0.6}_{-1.2} & 12.9^{+0.7}_{-0.7} & 18.7^{+1.1}_{-0.8} & & 5.3^{+1.4}_{-0.7} & 8.4^{+1.1}_{-0.7} & 7.7^{+1.1}_{-0.7} & 12.9^{+1.6}_{-1.3} \\ {\tt LGR4-4pc} & 7.7^{+3.3}_{-1.2} & 15.9^{+1.4}_{-1.4} & 13.8^{+1.6}_{-1.2} & 21.0^{+2.0}_{-1.4} & & 6.7^{+3.2}_{-1.8} & 10.1^{+1.8}_{-1.3} & 9.2^{+2.1}_{-1.1} & 14.8^{+2.5}_{-1.6} \\ \enddata \tablecomments{ Each column shows the median and 16$^{\rm th}$ and 84$^{\rm th}$ percentile range over $t=250-450\Myr$ for {\tt R8} and $t=250-350\Myr$ for {\tt LGR4}. } \end{deluxetable*} Traditionally, in the ISM literature, the gas is often divided into different phases based on temperature and hydrogen chemical state. We choose a set of specific temperature and abundance cuts to define 9 phases as summarized in \autoref{tbl:phase}. 
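In practice, the assignment can be implemented as a cascade of cuts on $T$ and the species abundances; the sketch below is one way to enforce mutual exclusivity (the precedence ordering is illustrative).
\begin{verbatim}
def classify_phase(T, x_HI, x_H2, x_HII):
    # Assign a cell to one of the 9 phases defined in the
    # phase definition table (tbl:phase).
    if T > 5.0e5:
        return "HIM"
    if T > 3.5e4:
        return "WHIM"
    if T < 6.0e3 and x_H2 > 0.25:   # checked first; see note (a)
        return "CMM"
    if x_HII > 0.5:                 # ionized branches
        if T < 6.0e3:
            return "UIM"
        return "WPIM" if T < 1.5e4 else "WCIM"
    if x_HI > 0.5:                  # neutral branches
        if T < 500.0:
            return "CNM"
        return "UNM" if T < 6.0e3 else "WNM"
    return "unassigned"  # rare cells not captured by the cuts
\end{verbatim}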
The distributions of these phases are depicted for a sample snapshot from {\tt LGR4-2pc} in \autoref{fig:phase}. In our previous work (\citealt{2017ApJ...846..133K} and subsequent works based on TIGRESS-classic), we used only temperature cuts to define 5 ISM phases. The key additional information available in the current study is the time-dependent hydrogen abundances, allowing for subdivisions, e.g., of ``warm'' into the warm neutral and warm ionized medium. The warm ionized medium is further divided into ``warm photoionized'' and ``warm collisionally ionized'' media with a temperature cut at $T=1.5\cdot10^4\Kel$, above which hydrogen can be collisionally ionized (see the red dashed curve in \autoref{fig:phase}(c)). We assign every cell to one of the 9 phases exclusively. A summary of the mass and volume fractions for each phase is shown in \autoref{tbl:filling}, for both {\tt R8} and {\tt LGR4}. Here, we do not explicitly distinguish \CNM{} from \CMM{}, and we ignore \UIM{} given its negligible mass and volume fractions. Instead, we use \Cold{} for the combined cold medium (\CMM+\CNM). We note that, as shown in \autoref{fig:phase}(c), hydrogen species abundances vary continuously, and a significant amount of \emph{partially} molecular gas is present in \CNM{}. The total molecular gas mass is thus larger than the mass of \CMM{}. The \Cold{} mass fraction increases substantially with higher numerical resolution at the expense of \UNM{} and \WNM{}, but the fractions of all the other phases are reasonably converged. \WNM{} fills the majority of the volume, with substantial contributions from \WPIM{} and \HIM{} as well as \UNM{}. The neutral gas (\Cold{}, \UNM{}, and \WNM{}) dominates the total mass budget. \WPIM{} contributes to both volume and mass at the $\sim 10\%$ level, with an increasing contribution at high altitudes (e.g., \citealt{2020ApJ...897..143K}; see also N. Linzer et al. in prep.). Separating the warm and cold gas into \Cold{}, \UNM+\WNM, \twop, and \WIM{} components, \autoref{tbl:veld} lists the effective vertical velocity dispersion as defined by \autoref{eq:szeff}, as well as the mass-weighted turbulent velocity dispersion considering only the $P_{\rm turb} = \rho v_z^2$ term, for each component. Given that the neutral medium (especially the warmer \UNM+\WNM{} component) dominates the mass fraction (\autoref{tbl:filling}), $\sigma_{\rm eff,\twop}$ agrees well with the effective velocity dispersion of all warm and cold gas at $T<3.5\times10^4\Kel$ presented in \autoref{tbl:satprop}. On the one hand, this makes the observed \ion{H}{1} velocity dispersion a good tracer for the (thermal plus turbulent) velocity dispersion of the mass-dominating component. On the other hand, it shows that \WIM{} tracers will typically overestimate the mass-weighted velocity dispersion. This could lead to a bias if, for example, H$\alpha$ velocities are used in estimators for the ISM weight (see \autoref{sec:prfm}). It is also noteworthy that the turbulent velocity dispersion is much lower than the effective velocity dispersion; thermal and magnetic components contribute significantly to the total pressure. \autoref{fig:1D-pdf-R8} and \autoref{fig:1D-pdf-LGR4} show probability distribution functions (PDFs) of temperature, density, and thermal pressure from {\tt R8-4pc} and {\tt LGR4-2pc}, respectively, for the region within $|z|<300\pc$. \WNM{} and \WPIM{} have similar characteristic densities and temperatures, but the thermal pressure of \WPIM{} is higher because of the contribution from free electrons.
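The electron contribution can be quantified using the mean particle mass from \autoref{sec:pchem}: the total particle number density is $n=(1.1+x_e-\xHH)\nH$, so at fixed $\nH$ and $T$, taking $x_e\approx 1$ and $\xHH\approx 0$ in \WPIM{} versus $x_e\approx\xHH\approx 0$ in \WNM{},
\begin{equation}
\frac{P_{\rm WPIM}}{P_{\rm WNM}} \approx \frac{2.1}{1.1} \approx 1.9,
\end{equation}
i.e., full photoionization nearly doubles the thermal pressure at a given hydrogen density and temperature.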
\CMM{} corresponds to the dense part of \CNM{}, with similar thermal pressure. All other phase definitions are mostly equivalent to simple temperature cuts. \UNM{} and \WNM{} are in rough thermal pressure equilibrium, but \CNM{} tends to have lower thermal pressure, which is compensated by higher magnetic pressure. \WCIM{} and \WHIM{} are not significant components in terms of mass and volume, as they occur mostly near the interfaces between warm and hot gas (see \autoref{fig:phase}). However, most (net) cooling occurs in these phases (see bottom right panel of \autoref{fig:R8_snapshot}, and \autoref{sec:jointpdf} below). The thermal pressure of \HIM{} is generally larger than that of \WNM{}. Since the thermal pressure of \HIM{} is in balance with the total pressure of \WNM{}, and the turbulent and magnetic contributions in \WNM{} are larger in {\tt LGR4} than in {\tt R8} (see \autoref{sec:prfm}), the difference in thermal pressure between \HIM{} and \WNM{} is larger in {\tt LGR4}. \begin{figure*} \centering \includegraphics[width=\textwidth]{{R8_4pc_NCR.full.xy2048.eps0.np768.has.1Dpdf-TnP}.png} \caption{PDFs, separated by phase, of temperature (left), density (middle), and pressure (right) at $|z|<300\pc$ for {\tt R8-4pc}. The top row shows volume-weighted and the bottom row mass-weighted distributions. The lines show the median over $250\Myr<t<450\Myr$. The shaded regions represent the 16$^{\rm th}$ to 84$^{\rm th}$ percentile range. \label{fig:1D-pdf-R8}} \end{figure*} \begin{figure*} \centering \includegraphics[width=\textwidth]{{LGR4_2pc_NCR.full.1Dpdf-TnP}.png} \caption{Similar to \autoref{fig:1D-pdf-R8}, but for {\tt LGR4-2pc} over $250\Myr<t<350\Myr$. \label{fig:1D-pdf-LGR4}} \end{figure*} \subsection{Joint PDFs of Density and Pressure}\label{sec:jointpdf} \autoref{fig:R8_nP} shows, for {\tt R8-4pc} at $t=280\Myr$ within $|z|<300\pc$, the instantaneous distribution of gas in the density-pressure phase plane weighted by volume, mass, and net cooling rate. The total gas is shown in column (a), while column (b) shows the neutral (atomic + molecular) gas and (c) shows the ionized gas, using a cut $\xHII=0.5$. Note that the $x$-axis is the hydrogen number density $\nH$ rather than the total number density $n=(1.1+x_e-\xHH)\nH$. Therefore, at a given temperature the neutral and ionized medium lie on different pressure tracks as a function of $\nH$---a higher (lower) pressure track for the warm ionized (neutral) gas. \begin{figure*} \centering \includegraphics[width=\textwidth]{R8_4pc_NCR.full.xy2048.eps0.np768.has.0285.nP.sp.png} \caption{Joint PDFs in $\nH$ and $P_{\rm th}/k_B$ weighted by volume (top), mass (middle), and net cooling rate (bottom) for the gas in {\tt R8-4pc} at $t=280\Myr$ (the time corresponds to \autoref{fig:R8_snapshot}(a)). All gas within $|z|<300\pc$ shown in (a) is subdivided into (b) neutral ($\xHII<0.5$) and (c) ionized ($\xHII>0.5$). Note that the PDF weighted by net cooling rate is normalized by the total cooling rate within the volume, adopting logarithmic blue-ish and pink-ish color scales for net cooling and heating, respectively. In the middle column, the diagonal dotted lines show $T=500$ and $6000\Kel$ for neutral gas ($P/k_B=1.1 \nH T$). In the right column, the diagonal dotted lines show $T=3.5\cdot10^4$ and $T=5\cdot10^5\Kel$ for ionized gas ($P/k_B=(1.1+x_e) \nH T$ with $x_e=1$ and 1.2, respectively). The majority of the \WPIM{} is found near $T\sim7000\Kel$.
The neutral medium is distributed somewhat broadly, but with a concentration near $T=7500\Kel$ for \WNM{}, and near the locus corresponding to thermal equilibrium (zero net cooling) for \CNM{}. The \HIM{} region shows tracks roughly following $P\propto \rho^{5/3}$, corresponding to adiabatic expansion of hot bubble interiors. Horizontal dashed lines in (b) and (c) show volume-weighted mean pressure of the neutral gas and ionized gas, respectively. In the middle column (b), we show unshielded equilibrium curves for $\xicr=2.9\times10^{-16}{\rm\,s^{-1}}$ and three values of FUV radiation field $\chiFUV=0.51$, 0.87, and 2.6, corresponding to 2$^{\rm nd}$, 50$^{\rm th}$, and 98$^{\rm th}$ percentiles of the volume-weighted $\chiFUV$ distribution within $|z|<300\pc$. The median curve describes the \WNM{} branch well. \label{fig:R8_nP}} \end{figure*} In TIGRESS-NCR, the heating and cooling rates are not solely a function of density and temperature and a spatially-uniform FUV field (which was the case in TIGRESS-classic), but also depend on other quantities such as the electron abundance and spatially-nonuniform radiation (see \autoref{eq:Gamma_def} and \autoref{eq:Lambda_def}). Thus, a single thermal equilibrium curve applicable for the whole simulation domain cannot be drawn in \autoref{fig:R8_nP}. Yet, we still see the characteristic locus of thermal equilibrium for neutral gas \citep[see e.g.,][]{1969ApJ...155L.149F,1995ApJ...443..152W,KGKO} in the bottom panel of \autoref{fig:R8_nP}(b) as the boundary between cooling-dominated and heating-dominated regions. Given $\xicr=2.9\times10^{-16}{\rm\,s^{-1}}$ for the background gas and the median value of $\chiFUV=0.87$, in \autoref{fig:R8_nP}(b), we plot an equilibrium curve as a thick solid line (as well as two thin lines for $\chiFUV=0.51$ and 2.6). Since we ignore shielding of FUV in these one-zone models, the unshielded equilibrium curves give overall higher pressure at high densities, although the \WNM{} equilibrium branch and its maximum pressure are well described by the median equilibrium curve. The volume-weighted mean pressure (within $|z|<300\pc$) of the neutral gas, $P/k_B=3.1\times10^3\pcc\Kel$, is shown as a horizontal dashed line. This pressure sits nicely between the maximum thermal equilibrium pressure of \WNM{} (the bulk net heating region shown as pink) at $\nH\sim 0.5\pcc$ and $P/k_B\sim 5\times 10^3\pcc\Kel$ and the minimum thermal equilibrium pressure of the \CNM{} (the bulk net cooling region shown as blue) at $\nH\sim 5\pcc$ and $P/k_B\sim 1\times 10^3\pcc\Kel$. Although the ionized gas has a very wide range of thermal pressure, the mean is $P/k_B=7.9\times10^3\pcc\Kel$, which is shown as the horizontal dashed line in \autoref{fig:R8_nP}(c). This is higher than that of the neutral gas, in which turbulence and magnetic field significantly contribute to the total pressure (see \autoref{sec:pe_vde}). As shown in \autoref{fig:1D-pdf-R8}, both mass and volume are dominated by the neutral medium near the disk midplane. The ionized gas occupies $\sim30-40\%$ by volume (approximately equally in \WIM{} and \HIM) and $\sim10\%$ by mass (mostly in \WIM). The bottom row of \autoref{fig:R8_nP} shows that both neutral and ionized gas populate a wide parameter space with net cooling or heating (i.e., gas is out of thermal equilibrium).
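In practice, each panel of \autoref{fig:R8_nP} is a weighted two-dimensional histogram of the cell data. The following Python sketch (purely illustrative, not the actual analysis pipeline; the field arrays are random stand-ins for snapshot data) outlines the construction, including the neutral/ionized split at $\xHII=0.5$ and the normalization of the net-cooling-weighted PDF by the total cooling rate within the volume:
\begin{verbatim}
import numpy as np

# Random stand-ins for snapshot fields at |z| < 300 pc (one entry per cell);
# in the actual analysis these would be read from the simulation output.
rng = np.random.default_rng(0)
N = 200_000
nH   = 10.0**rng.normal(0.0, 1.0, N)    # hydrogen number density [cm^-3]
T    = 10.0**rng.normal(3.8, 0.7, N)    # temperature [K]
xHII = rng.uniform(0.0, 1.0, N)         # H+ abundance
P_kB = 1.1 * nH * T                     # P/k_B [K cm^-3], neutral-gas track
dV   = np.full(N, (4.0 * 3.086e18)**3)  # cell volume [cm^3] for 4 pc cells
rho  = 1.4 * 1.6726e-24 * nH            # gas mass density [g cm^-3]
cool = 1e-27 * nH**2 * np.sqrt(T)       # mock cooling rate [erg s^-1 cm^-3]
heat = 2e-26 * nH                       # mock heating rate [erg s^-1 cm^-3]

bins = (np.linspace(-5, 3, 161), np.linspace(1, 7, 121))  # log nH, log P/kB

def joint_pdf(mask, w):
    """Weighted 2D histogram in (log10 nH, log10 P/k_B) over selected cells."""
    H, _, _ = np.histogram2d(np.log10(nH[mask]), np.log10(P_kB[mask]),
                             bins=bins, weights=w[mask])
    return H

neutral = xHII < 0.5                    # column (b) vs. column (c) split
pdf_V = joint_pdf(neutral, dV) / dV.sum()                # volume-weighted
pdf_M = joint_pdf(neutral, rho * dV) / (rho * dV).sum()  # mass-weighted
# net-cooling-weighted PDF, normalized by the total cooling in the volume
pdf_L = joint_pdf(neutral, (cool - heat) * dV) / (cool * dV).sum()
\end{verbatim}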
Photoionization can cause net heating in expanding \ion{H}{2} regions as well as in the diffuse \WIM{}, which is evident in the narrow dark pink strip at $T\sim7000\Kel$ in the bottom-right panel of \autoref{fig:R8_nP}. Out-of-equilibrium \CNM{} is mostly in the net (radiative) cooling regime, implying that dissipation of turbulence may contribute to the thermal balance in \CNM{} to allow an overall excess of radiative cooling over radiative heating. Locally, \WNM{} is perturbed into both net cooling and heating regimes, while \WNM{} as a whole is experiencing net cooling. Due to its low density, cooling in the hot gas located inside SN-driven bubbles is negligible. The corresponding high-temperature regions in \autoref{fig:R8_nP} show adiabatic expansion tracks, $P\propto \rho^{5/3}$. The thermalized SN energy is mostly cooled away in lower-temperature phases, e.g., \WHIM{} and \WNM/\WIM. To clarify the contribution of each gas phase to cooling, \autoref{fig:pdf-cooling} plots the cooling and net cooling contribution within $|z|<300\pc$ from each phase as a function of density, for {\tt R8-4pc} (left) and {\tt LGR4-2pc} (right). The total radiative cooling rate per volume, shown in the top panel, is dominated by \WPIM{} ($\sim70\%$). \WNM{} and \WHIM{} contribute about 10\% each. However, \WPIM{} and \WNM{} are also the gas phases within which most radiative (photoionization and photoelectric) heating occurs. The bottom panels show the net cooling rate per volume, with heating subtracted from cooling. The net cooling, which when integrated over volume produces the history shown in \autoref{fig:tevol_ss}(c), is now dominated by \WHIM{} ($\sim 40\%$). \WNM, \WPIM, and \WCIM{} contribute about 10-20\% each. \begin{figure*} \centering \includegraphics[width=\linewidth]{nH-pdf-coolnorm.png} \caption{Distribution in $\nH$ of gas within $|z|<300\pc$, separated by phase and weighted by the cooling rate (top) and net cooling rate (bottom) for {\tt R8-4pc} (left) and {\tt LGR4-2pc} (right). Overall normalization is by the total cooling rate within the volume. The lines show the median over $250\Myr<t<450\Myr$ for {\tt R8-4pc} and $250\Myr<t<350\Myr$ for {\tt LGR4-2pc}. The shaded regions represent the 16$^{\rm th}$ to 84$^{\rm th}$ percentile range. Total cooling is by far dominated by \WPIM{} ($\sim 70\%$), while net cooling is greatest in \WHIM{}.} \label{fig:pdf-cooling} \end{figure*} \section{Pressure-Regulated, Feedback-Modulated Theory of the Equilibrium Star-Forming ISM}\label{sec:prfm} Having presented the overall characteristics of the simulated ISM, in this section we focus on the midplane pressure and stresses, gas weight, and SFR surface density, and their mutual correlations. These analyses aim to test the validity and predictions of the pressure-regulated, feedback-modulated (PRFM) star formation theory, first formulated in \citet{2010ApJ...721..975O} and \citet{2011ApJ...731...41O} and tested in subsequent work \citep{2011ApJ...743...25K,2013ApJ...776....1K,2012ApJ...754....2S,2015ApJ...815...67K}. We closely follow the analysis of \citet{2022ApJ...936..137O}, which summarizes the theory and analyzes the TIGRESS-classic simulation suite. Here, we simplify our full phase definition into three phases: \twop{} for the neutral medium at cold to warm temperatures (i.e., \twop=\CMM+\CNM+\UNM+\WNM), \WIM{} for the warm ionized medium (i.e., \WIM=\WPIM+\WCIM), and \hot{} for the hot medium (i.e., \hot=\WHIM+\HIM).
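For concreteness, a minimal sketch of this three-phase grouping is given below. The ionization split at $\xHII=0.5$ follows \autoref{sec:jointpdf}, and the warm/hot boundary at $T=3.5\times10^4\Kel$ is assumed here to match the cuts in \autoref{tbl:phase} (the full nine-phase scheme additionally uses molecular-abundance cuts, which we omit):
\begin{verbatim}
import numpy as np

def classify_three_phase(T, xHII, T_warm_max=3.5e4):
    """Map cells to the simplified phases: 0 = 2p (cold + warm neutral),
    1 = WIM (= WPIM + WCIM), 2 = hot (= WHIM + HIM).
    Assumed cuts: ionization split at xHII = 0.5; warm/hot boundary
    at T = 3.5e4 K."""
    phase = np.full(np.shape(T), 2, dtype=int)   # default: hot
    warm_or_cold = T < T_warm_max
    phase[warm_or_cold & (xHII < 0.5)] = 0       # 2p
    phase[warm_or_cold & (xHII >= 0.5)] = 1      # WIM
    return phase

# example: CNM-like, WNM-like, WPIM-like, and HIM-like cells
T = np.array([50.0, 8.0e3, 1.2e4, 2.0e6])
xHII = np.array([1e-4, 1e-2, 0.9, 1.0])
print(classify_three_phase(T, xHII))             # -> [0 0 1 2]
\end{verbatim}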
Note that in \citet{2022ApJ...936..137O}, we only had \twop{} and \hot{}, as the TIGRESS-classic framework does not follow the ionization state explicitly. The PRFM theory views the ISM in galactic disks as a long-lived thermal-dynamical system with stellar feedback as the main energy source. Despite the vigorous dynamical evolution seen in our simulations (and in the real ISM), the system is in a quasi-steady state on average (in terms of either long-term temporal averages or large-scale ensemble averages). Under this assumption, the governing gas dynamics equations dictate vertical dynamical equilibrium, a balance between total pressure and gas weight. The ISM energy would drop quickly through cooling and dissipation in the absence of inputs, but stellar feedback can offset losses to maintain the required pressure/stress. As a consequence, the PRFM theory posits that galactic SFRs are naturally linked to the dynamical equilibrium pressure, which can in turn be computed from large-scale mean galactic parameters, yielding a prediction for galactic SFRs. Our analysis steps in this section are as follows. We first check pressure equilibrium among the three phases and the vertical dynamical equilibrium between total pressure support and gas weight (\autoref{sec:pe_vde}). Then, we measure each pressure component and examine the pressure-$\Ssfr$ relation (\autoref{sec:yields}). This gives a numerical calibration of the key parameter in the theory, the ratio of pressure to $\Ssfr$, which we call the \emph{feedback yield}. We compare our new results for the feedback yield with the theoretical and numerical results in \citet{2022ApJ...936..137O}. Since we only consider two galactic conditions in this paper, we refrain from deriving new fitting results. In \autoref{sec:P_SFR}, we present the correlations between SFR surface density, total pressure, and weight (or its simplified estimator, dynamical equilibrium pressure). Crucially, in TIGRESS-NCR, we do not impose radiation fields based on (scaled) observational estimates, but compute $J_{\rm FUV}$ using radiative transfer from young star cluster particles formed in our simulations, where the number and location of these star particles are self-consistently set by the rate of gravitational collapse. Similarly, the present simulations have sufficient resolution to follow the transition from adiabatic to cooling stages of SN remnant expansion. Thus, unlike in lower-resolution simulations, we resolve the simultaneous heating and acceleration of ambient gas by SN shocks as well as subsequent cooling. Our simulations therefore directly capture both the energy injection and energy dissipation processes that enter into determining the feedback yield. \subsection{Pressure Equilibrium and Vertical Dynamical Equilibrium}\label{sec:pe_vde} \begin{figure*} \centering \includegraphics[width=\textwidth]{high_Pmid_tevol.png} \caption{Time evolution of midplane pressures and weight. The total midplane pressures of the \twop{} (black), \WIM{} (yellow), and \hot{} (red) phases are comparable with each other. This total vertical support matches the ISM weight (black dashed, dominated by \twop). The simple weight estimator $\PDE$ (grey dashed) provides reasonable agreement with the weight and total pressure. We show each pressure component of \twop{} as blue ($\Pthtwo$), orange ($\Pturbtwo$), and green ($\Pimagtwo$) lines.
\label{fig:Pmid_tevol}} \end{figure*} The PRFM formulation assumes that temporally and horizontally averaged vertical momentum and energy equations (\autoref{eq:mom_con} and \autoref{eq:energy_con}) satisfy a steady state. By integrating the vertical momentum equation from the midplane to the top/bottom of the gas disk, the vertical dynamical equilibrium condition is then given by the balance between the midplane total pressure and the ISM weight: \begin{equation}\label{eq:zmom_avg} \Delta \abrackets{\Ptot} \equiv \Delta\abrackets{\Pth + \Pturb + \Pimag} + \Delta P_{\rm rad} = \W \end{equation} where $\Delta$ denotes the difference between the midplane ($z=0$) and the top/bottom of the gaseous disk ($z=\pm L_z/2$), and the angle brackets denote a horizontal average. In the above, we adopt the following nomenclature of pressure components: thermal pressure $\Pth=P$, turbulent pressure (Reynolds stress) $\Pturb\equiv\rho v_z^2$, and magnetic stress (magnetic pressure + tension) $\Pimag \equiv (B_x^2+B_y^2-B_z^2)/(8\pi)$. Note that the mean or turbulent magnetic stress ($\oPimag$ or $\dPimag$) is defined by substituting the mean $\overline{\Bvec} \equiv \abrackets{\Bvec}$ or turbulent $\delta{\Bvec}\equiv \Bvec-\overline{\Bvec}$ component, respectively, for $\Bvec$. The radiation pressure and weight terms can be defined toward either the upper or the lower disk, $\Delta P_{\rm rad} = \int_0^{\pm L_z/2}\abrackets{f_{\rm rad,z}}dz$ and $\W = \int_0^{\pm L_z/2}\abrackets{\rho \partial{\Phi}/\partial{z}}dz$. We take the average of the two values (integrated toward the top or bottom) in the following analysis. The vertical gravity $\partial \Phi/\partial z$ is a sum of terms from stars, dark matter, and gas, so the total weight can be decomposed into two terms: gas weight from external gravity $\W_{\rm ext}$ (due to the stellar disk and dark matter halo), and gas weight from self-gravity $\W_{\rm sg}$ (due to gas). Generally, the pressure components at the midplane $z=0$ are much larger than those at the top of the gaseous disk, leading to $\Delta P\rightarrow P(z=0)$. To separate the contribution from each phase, we define the horizontal average of a quantity $q$ for a selected phase by \begin{equation}\label{eq:midavg} \abrackets{q}_{\rm ph}\equiv \frac{\int\int q\Theta({\rm ph}) dxdy}{L_x L_y} \end{equation} where $\Theta({\rm ph}) = 1$ if the temperature and abundance conditions in \autoref{tbl:phase} are satisfied and 0 otherwise for ph=\twop, \WIM, and \hot. We can also separately define the area filling factor $f_{\rm A,ph} \equiv \int\int\Theta({\rm ph})dxdy/L_xL_y$. Each phase's contribution adds up such that $\abrackets{\Ptot} = \sum_{\rm ph} \abrackets{\Ptot}_{\rm ph}$. Individual pressure components ($\abrackets{P_{\rm th}}$, etc.) can be written as a sum over contributions from each phase in the same way. The typical midplane value of the total pressure in each phase is defined by $\tilde{P}_{\rm tot,ph}\equiv\abrackets{\Ptot}_{\rm ph}/f_{\rm A, ph}$ (and similarly for $\tilde{P}_{\rm th, ph}$ etc.). We can then write $\abrackets{\Ptot} =\sum_{\rm ph} f_{\rm A, ph} \tilde{P}_{\rm tot,ph}$. If the typical values of the total pressure for \twop, \WIM, and \hot{} are comparable with each other, we have $\abrackets{\Ptot} \approx \tilde{P}_{\rm tot, X}\sum_{\rm ph} f_{\rm A,ph} = \tilde{P}_{\rm tot, X}$, where X denotes any given phase.
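A minimal numerical sketch of this phase decomposition on a midplane slice (with random stand-in data; in the actual analysis the pressure and phase labels come from a horizontal slice of a snapshot) reads:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
# stand-ins for a midplane slice: total pressure [k_B K cm^-3], phase labels
P_tot = rng.lognormal(np.log(2.0e4), 0.5, size=(128, 128))
phase = rng.integers(0, 3, size=(128, 128))   # 0 = 2p, 1 = WIM, 2 = hot

def phase_decomposition(P, ph_map, labels=(0, 1, 2)):
    """Per-phase horizontal average <P>_ph, area filling factor f_A,ph,
    and typical in-phase value P~_ph = <P>_ph / f_A,ph."""
    n = P.size
    out = {}
    for ph in labels:
        mask = ph_map == ph
        f_A = mask.sum() / n              # area filling factor
        P_ph = P[mask].sum() / n          # <P_tot>_ph
        out[ph] = (f_A, P_ph / f_A)       # (f_A,ph, P~_tot,ph)
    return out

decomp = phase_decomposition(P_tot, phase)
# consistency check: <P_tot> = sum_ph f_A,ph * P~_tot,ph
assert np.isclose(sum(fA * Pt for fA, Pt in decomp.values()), P_tot.mean())
\end{verbatim}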
If we neglect the direct UV radiation pressure $\Delta P_{\rm rad}$ for succinctness, as it contributes less than 1\% to the total pressure in both models, \autoref{eq:zmom_avg} simply becomes \begin{equation}\label{eq:zmom_avg_2p} \Delta\abrackets{\Ptot} \approx \tilde{P}_{\rm tot,2p} = \tilde{P}_{\rm th,2p} + \tilde{P}_{\rm turb,2p} + \tilde{\Pi}_{\rm mag,2p} = \W. \end{equation} We note that weight (and radiation pressure) is vertically integrated over all phases, and that the (time-averaged) pressure of gas in any given phase at the midplane must support the weight of gas in all phases above it (rather than selectively supporting its own phase). In our simulations, the gas weight is dominated by \twop{} with 14\% (4\%) from \WIM{} for {\tt R8} ({\tt LGR4}) and less than 1\% from \hot. The contribution from the external gravity is 75\% for {\tt R8} and 30\% for {\tt LGR4}. \autoref{fig:Pmid_tevol} shows the time evolution of all pressure terms in \autoref{eq:zmom_avg_2p} along with the total midplane pressures of \WIM{} and \hot{}. In this and other figures and tables, the values of pressures shown represent midplane averages either within a given phase or over all phases, dropping the tilde for cleaner notation. Comparing the total pressures of each phase (dark blue for \twop, yellow for \WIM, and red for \hot), we confirm that they are roughly in pressure equilibrium. Also shown are the direct measure of the ISM weight ($\mathcal{W}$) and the commonly used weight estimator \begin{equation}\label{eq:PDE} \PDEtwo \equiv \frac{\pi G \Sgas^2}{2} + \Sgas (2G\rho_{\rm sd})^{1/2} \sigma_{\rm eff, 2p}, \end{equation} constructed from observables \citep[e.g.,][]{2020ApJ...892..148S}. We note that $\sigma_{\rm eff,\twop}$ presented in \autoref{tbl:veld} is the mass-weighted mean for the \twop{} phase over the entire domain (not the midplane measure). The kinetic (thermal+turbulent) part of this velocity dispersion is directly observable from line emission of the neutral (atomic and molecular) gas, and $\sigma_{\rm eff,\twop}$ can then be obtained by correcting for the magnetic terms. On average, the total pressure and weight are in good agreement with each other (although their fluctuations are usually out of phase). \begin{deluxetable*}{lCCCCCCCCC} \tablecaption{Midplane Pressure and Weight \label{tbl:pressure}} \tablehead{ \colhead{Model} & \dcolhead{\Pthtwo} & \dcolhead{\Pturbtwo} & \dcolhead{\Pimagtwo} & \dcolhead{\Ptottwo} & \dcolhead{\Ptotwim} & \dcolhead{\Ptothot} & \dcolhead{\Ptot} & \dcolhead{\W} & \dcolhead{\PDEtwo} } \startdata {\tt R8-4pc } & 4.6^{+1.5}_{-0.8} & 7.4^{+2.5}_{-1.9} & 7.7^{+2.4}_{-2.7} & 20.2^{+2.9}_{-2.5} & 19.4^{+3.7}_{-3.6} & 15.1^{+5.2}_{-3.8} & 18.5^{+3.1}_{-3.4} & 17.6^{+2.8}_{-1.9} & 20.1^{+1.2}_{-1.5} \\ {\tt R8-8pc } & 4.3^{+3.5}_{-1.2} & 7.0^{+5.1}_{-2.0} & 8.5^{+3.0}_{-2.5} & 20.6^{+6.7}_{-4.8} & 20.5^{+6.1}_{-5.4} & 17.8^{+6.2}_{-6.3} & 20.5^{+5.7}_{-5.7} & 19.1^{+5.6}_{-2.0} & 21.5^{+2.7}_{-2.4} \\ {\tt LGR4-2pc } & 2.1^{+0.4}_{-0.1} & 5.6^{+3.6}_{-1.3} & 4.4^{+1.4}_{-2.2} & 13.0^{+1.5}_{-2.6} & 9.1^{+2.5}_{-1.9} & 9.5^{+4.5}_{-2.1} & 10.5^{+2.8}_{-2.2} & 10.7^{+1.1}_{-1.4} & 10.0^{+0.3}_{-0.4} \\ {\tt LGR4-4pc } & 2.1^{+0.9}_{-0.7} & 6.2^{+1.5}_{-2.9} & 5.2^{+2.9}_{-3.1} & 12.4^{+5.6}_{-2.4} & 9.0^{+2.7}_{-1.2} & 8.5^{+2.9}_{-1.5} & 10.4^{+3.8}_{-2.5} & 10.9^{+1.5}_{-2.0} & 9.9^{+0.5}_{-0.7} \\ \enddata \tablecomments{ Each column shows the median and 16$^{\rm th}$ and 84$^{\rm th}$ percentile range over $t=250-450\Myr$ and $t=250-350\Myr$ for {\tt R8} and {\tt LGR4}, respectively.
Pressures and weights are in units of $k_B\Kel\pcc$, with a multiplicative factor of $10^3$ for {\tt R8} and $10^4$ for {\tt LGR4}. } \end{deluxetable*} \begin{figure*} \centering \includegraphics[width=\textwidth]{Ptot_hot_W_PDE.png} \caption{Measured midplane total pressure of warm-cold (\twop) gas $\Ptottwo$ as a function of (a) measured midplane total pressure of the \hot{} phase $\Ptothot$, (b) measured weight $\W$, and (c) dynamical equilibrium pressure $\PDEtwo$ (simple weight estimator). Individual points at intervals of 1 (0.5) Myr are plotted for {\tt R8} ({\tt LGR4}) over a 200 (100) Myr interval. Medians with 16$^{th}$ and 84$^{th}$ percentiles are shown as larger points with error bars. For reference, the dotted line shows the identity relation. \label{fig:vequil}} \end{figure*} \autoref{fig:vequil} plots $\Ptottwo$ as a function of (a) $\Ptothot$, (b) $\W$, and (c) $\PDEtwo$, while \autoref{tbl:pressure} summarizes the midplane pressure components in \twop{} as well as total pressure in each phase, weight, and weight estimator. Again, approximate pressure equilibrium among the different phases holds, but \hot{} gives slightly lower total pressure. Thermal pressure dominates in \WIM{} and \hot, while it is the smallest component in \twop{}, with magnetic and turbulent components comparable. $\Ptottwo\approx \W$ directly demonstrates that the ISM pressure is \textit{regulated} in disk systems as it obeys the conservation ``law'' of momentum (on average). \autoref{fig:vequil}(c) demonstrates the validity of $\PDEtwo$ as a reasonable estimator of the true weight (see \autoref{tbl:pressure}) and hence total midplane pressure. \subsection{Feedback Modulation and Yields}\label{sec:yields} The PRFM theory postulates that thermal and turbulent pressure ($\propto$ energy density) components are sourced by feedback from massive young stars through heating by UV radiation and turbulence driven by SNe. The balance between radiative cooling and heating sets the thermal pressure, while the balance between turbulence driving and dissipation sets the turbulent pressure. Magnetic fields are set by the saturation of the galactic dynamo, providing the contribution from the magnetic term at a level similar to (or slightly below) the turbulent term \citep{2015ApJ...815...67K}. The pressure components are thus expected to scale with the rate of feedback energy injection, and therefore with $\Ssfr$. We define the feedback yields $\Upsilon_c\equiv P_c/\Ssfr$ as the ratios of a pressure component $c$ to $\Ssfr$, to quantify the feedback modulation of individual pressure components. Note that the natural unit for the feedback yield is velocity. \subsubsection{Thermal Pressure}\label{sec:th_yield} \begin{figure*} \centering \includegraphics[width=\linewidth]{P_Y.png} \caption{{\bf Top:} Midplane (a) thermal pressure $\Pthtwo$, (b) turbulent pressure $\Pturbtwo$, and (c) magnetic stress $\Pimagtwo$ of the \twop{} medium as a function of SFR surface density $\Ssfr$. {\bf Bottom:} Feedback yield for the (d) thermal $\Upsilon_{\rm th}\equiv \Pthtwo/\Ssfr$, (e) turbulent $\Upsilon_{\rm turb}\equiv \Pturbtwo/\Ssfr$, and (f) magnetic $\Upsilon_{\rm mag}\equiv \Pimagtwo/\Ssfr$ component as a function of $\PDEtwo$. Individual points at intervals of 1 (0.5) Myr are plotted for {\tt R8} ({\tt LGR4}) over a 200 (100) Myr interval. Medians with 16$^{th}$ and 84$^{th}$ percentiles are shown as larger points with error bars.
For reference, the solid lines show the best-fit results from \citet{2022ApJ...936..137O}: (a) \autoref{eq:Pth_tigress}, (b) \autoref{eq:Ptrb_tigress}, (d) \autoref{eq:Yth_tigress}, and (e) \autoref{eq:Ytrb_tigress}.} \label{fig:P_Y} \end{figure*} Because of the short cooling time in the cold and warm ISM (\twop{} and \WIM), the energy gains from radiative heating are quickly lost through optically thin radiative cooling mostly within the same phase. Thermal pressure in both \twop{} and \WIM{} is then expected to scale with the radiative heating rate, which is proportional to the mean UV intensity and hence SFR. In the \twop{} medium, the photoelectric effect by FUV is the main heating source. Therefore, $\Pthtwo\propto \Gamma_{\rm PE}\propto \epsilon_{\rm PE} J_{\rm FUV}$, where $\epsilon_{\rm PE}$ is the photoelectric heating efficiency, which depends sensitively on the grain size distribution and the grain charging parameter $\psi \equiv G_0\sqrt{T}/n_e$ \citep[e.g.,][]{1994ApJ...427..822B,2001ApJS..134..263W}. The source of FUV radiation is massive young stars (the luminosity-weighted mean age of FUV emitters is $\sim10\Myr$) so that $J_{\rm FUV}= \ftau\dot{S}_{\rm FUV}/(4\pi)$ for $\ftau$ a factor accounting for UV radiative transfer in the dusty ISM, where $\dot{S}_{\rm FUV}=L_{\rm FUV}/L_xL_y \propto \Ssfr$.\footnote{We note that we have changed notation for the FUV luminosity per area from $\Sigma_{\rm FUV}$ (e.g., \citealt{2010ApJ...721..975O,2022ApJ...936..137O}) to $\dot{S}_{\rm FUV}$ to consistently refer to areal energy gain and loss rates using $\dot{S}$ (see \autoref{sec:energetics}).} A simple radiation transfer solution for uniformly-distributed sources in a uniform slab gives \begin{equation}\label{eq:ftau} \ftau = \frac{1-E_2(\tau_\perp/2)}{\tau_\perp} \end{equation} \citep{2010ApJ...721..975O}. Here, $E_2$ is the second exponential integral and $\tau_\perp=\kappa_{\rm FUV}\Sigma_{\rm gas}$ is the mean optical depth to FUV. For $\kappa_{\rm FUV}=10^3\cm^2\gram^{-1}$, $\ftau\approx 1/\tau_\perp$ at $\Sgas>20\Surf$. In the TIGRESS-classic suite, we adopted the approximate form of $\ftau$ presented in \autoref{eq:ftau} to convert $\dot{S}_{\rm FUV}$ to $J_{\rm FUV}$, and we also adopted a single value for $\epsilon_{\rm PE}$. The attenuation of FUV increases at higher surface densities (which generally correspond to higher pressures). The relationship between the thermal pressure and $\Ssfr$ is thus sublinear, resulting in a decrease of the thermal pressure yield at higher $\PDE$. The fit to the TIGRESS-classic suite gives \citep{2022ApJ...936..137O} \begin{subequations} \begin{eqnarray} \label{eq:Pth_tigress} \log(\Pthtwo/k_B) = 0.603\,\log(\Ssfr) &+& 4.99\\ \label{eq:Yth_tigress} \log \Upsilon_{\rm th} =-0.506\,\log\left({\PDE}/{k_B}\right) &+& 4.45. \end{eqnarray} \end{subequations} In the current simulations, the connection from $\Sigma_{\rm SFR}$ to $J_{\rm FUV}$ to $\Gamma_{\rm PE}$ is self-consistently determined by explicit UV radiation transfer and an adopted theoretical dust model for the heating efficiency (our standard choice is a model with grain size distribution A, $R_V=3.1$, and $b_C=4.0\times10^{-5}$ in \citealt{2001ApJS..134..263W}). \autoref{fig:P_Y}(a) plots $\Pthtwo$ vs. $\Ssfr$, showing a sublinear relationship similar to that calibrated from TIGRESS-classic (\autoref{eq:Pth_tigress}; solid black line). \autoref{fig:P_Y}(d) shows the thermal feedback yield, which decreases as $\PDE$ increases (the TIGRESS-classic fit \autoref{eq:Yth_tigress} is also shown).
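For reference, the sketch below evaluates \autoref{eq:ftau} with {\tt scipy} together with the TIGRESS-classic fits above; the input values are illustrative only:
\begin{verbatim}
import numpy as np
from scipy.special import expn

def f_tau(Sigma_gas_cgs, kappa_FUV=1.0e3):
    """Slab attenuation factor f_tau = (1 - E2(tau/2)) / tau;
    Sigma_gas in g cm^-2, kappa_FUV in cm^2 g^-1."""
    tau = kappa_FUV * Sigma_gas_cgs
    return (1.0 - expn(2, tau / 2.0)) / tau

# TIGRESS-classic fits (for comparison; the new runs have a higher
# normalization in Upsilon_th, as discussed in the text):
P_th_fit = lambda Ssfr: 10.0**(0.603 * np.log10(Ssfr) + 4.99)       # [K cm^-3]
Y_th_fit = lambda PDE_kB: 10.0**(-0.506 * np.log10(PDE_kB) + 4.45)  # [km/s]

Msun_per_pc2 = 1.989e33 / 3.086e18**2   # 1 Msun pc^-2 in g cm^-2
print(f_tau(10.0 * Msun_per_pc2))   # ~0.4 for Sigma_gas = 10 Msun pc^-2
print(P_th_fit(3.0e-3))             # ~3e3 K cm^-3 at an R8-like Ssfr
print(Y_th_fit(2.0e4))              # ~190 km/s at an R8-like P_DE/k_B
\end{verbatim}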
We find that the scaling is quite similar to the fit from the TIGRESS-classic suite, but the normalization in $\Upsilon_{\rm th}$ is higher here -- that is, the TIGRESS-NCR simulations give rise to higher thermal pressure at a given $\Ssfr$ than the TIGRESS-classic simulations. The offset is because the explicit treatment of the heating efficiency here yields on average a factor of 2--3 higher heating rate for a given $J_{\rm FUV}$, compared to the heating rate coefficient adopted in TIGRESS-classic (which is from \citealt{2002ApJ...564L..97K}). The consistent scaling stems from the fact that \autoref{eq:ftau} is indeed a good approximation for the mean attenuation factor in comparison to the actual radiation transfer solution obtained here by adaptive ray tracing (N. Linzer et al. in prep.). \subsubsection{Turbulent Pressure}\label{sec:turb_yield} The turbulent pressure in the warm and cold components of the ISM arises from large-scale forcing, with expanding hot bubbles produced by SN feedback as the most important source. Because the energy injection from SNe is highly localized in space and time, it creates a shock when it is transferred to the warm and cold ISM gas. This accelerates the surrounding ISM, increasing the total momentum until the shock becomes radiative, when the post-shock temperature drops to $T\lesssim 10^6\Kel$ (or $v_{\rm SNR}\sim 200\kms$). The radial momentum injected per SN ($p_*$) is much larger ($\sim 10^5\Msun\kms$) than the momentum of the initial SN ejecta ($\sim 10^4\Msun\kms$) because the shock accelerates two orders of magnitude more mass than the initial ejecta before becoming radiative. The SN momentum injection is also much greater than that from other sources, such as expanding \ion{H}{2} regions \citep{2018ApJ...859...68K} and stellar wind driven bubbles \citep{2021ApJ...914...90L,2021ApJ...914...89L}. For the \citet{2001MNRAS.322..231K} IMF, the total stellar mass formed for every SN progenitor star is $m_*\sim100\Msun$, and the areal rate of SN explosions in quasi-steady state is $\Ssfr/m_*$. For spherical momentum injection per SN of $p_*$ centered on the disk midplane, \citet{2011ApJ...731...41O} argued that the turbulent pressure $\rho v_z^2$ is expected to be comparable to the rate of vertical momentum flux injected on either side of the disk, $P_{\rm turb} = (p_*/4)(\Ssfr/m_*)$. Since $p_*$ is insensitive to the environment (both density and metallicity; e.g., \citealt{2015ApJ...802...99K,2017ApJ...834...25K,KGKO}), the turbulent feedback yield is expected to be nearly constant (this is in stark contrast to the thermal feedback yield). The fit to the TIGRESS-classic suite gives \citep{2022ApJ...936..137O} \begin{subequations} \begin{eqnarray} \label{eq:Ptrb_tigress} \log(\Pturbtwo/k_B) = 0.96\,\log(\Ssfr) &+& 6.17\\ \label{eq:Ytrb_tigress} \log \Upsilon_{\rm turb} =-0.06\,\log({\PDE}/{k_B})&+& 2.81. \end{eqnarray} \end{subequations} \autoref{fig:P_Y}(b) plots $\Pturb$ vs. $\Ssfr$, showing the expected near-linear relationship (\autoref{eq:Ptrb_tigress}). The turbulent feedback yield shown in \autoref{fig:P_Y}(e) is consistent with the shallow dependence on $\PDE$ seen in \autoref{eq:Ytrb_tigress}. We note that the current simulations have additional momentum injection by expanding \ion{H}{2} regions as well as direct UV radiation pressure. Apparently, the contribution of UV in modulating the global turbulent pressure is not significant.
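As an order-of-magnitude check of the SN-driven scaling above, the short sketch below evaluates $P_{\rm turb} = (p_*/4)(\Ssfr/m_*)$ in cgs units, adopting the fiducial $p_*=10^5\Msun\kms$ and $m_*=100\Msun$ quoted above and an {\tt R8}-like $\Ssfr$:
\begin{verbatim}
# Order-of-magnitude check of P_turb = (p_*/4) * (Sigma_SFR / m_*), in cgs.
Msun, kms, yr, kpc, kB = 1.989e33, 1.0e5, 3.156e7, 3.086e21, 1.381e-16

p_star = 1.0e5 * Msun * kms           # momentum injected per SN [g cm s^-1]
m_star = 100.0 * Msun                 # stellar mass formed per SN [g]
Ssfr = 3.0e-3 * Msun / (yr * kpc**2)  # R8-like Sigma_SFR [g cm^-2 s^-1]

P_turb = 0.25 * (p_star / m_star) * Ssfr
print(P_turb / kB)  # ~3.6e3 K cm^-3: same order as the measured P_turb,2p
\end{verbatim}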
The minor role of direct UV in driving large-scale turbulence strongly contrasts with its dominant role in the destruction of molecular clouds \citep[e.g.,][]{2018ApJ...859...68K,2021ApJ...911..128K}. \subsubsection{Magnetic Stress}\label{sec:mag_yield} We find that the midplane magnetic stress and hence the magnetic feedback yield (\autoref{fig:P_Y}(c) and (f)) are comparable to the turbulent kinetic component, for both models. Magnetic terms are determined by galactic dynamo action as a result of the interaction between turbulence (driven by feedback), galactic differential rotation, and buoyancy. The turbulent component of magnetic fields is directly related to the kinetic energy in turbulence, and the turbulent magnetic energy density quickly saturates at a level similar to the kinetic energy density as long as the initial field is strong enough \citep{2015ApJ...815...67K}. Our initial field is purely azimuthal (along the $y$ direction) and comparable to the final saturation level. Overall, the current simulations cover long-term evolution and result in a saturated state without a sign of further secular evolution in magnetic field strengths.\footnote{ The regular (mean) component of magnetic fields is maintained in our simulations as we include galactic differential rotation using the shearing box. In separate experiments without rotation or with weak shear, we find a much lower saturation level of magnetic fields and hence magnetic stress. We defer the detailed exploration and discussion of the magnetic field evolution to a separate work.} \subsection{Total pressure and SFR prediction}\label{sec:P_SFR} \begin{figure*} \centering \includegraphics[width=\textwidth]{P_sfr.png} \caption{$\Ssfr$ as a function of measured total midplane pressure $\Ptottwo$, measured ISM weight $\W$, and estimated weight $\PDEtwo$. Individual points at intervals of 1 (0.5) Myr are plotted for {\tt R8} ({\tt LGR4}) over a 200 (100) Myr interval. Medians with 16$^{\rm th}$ and 84$^{\rm th}$ percentiles are shown as larger points with error bars. For reference, the solid lines show the best-fit results from \citet{2022ApJ...936..137O} (\autoref{eq:SFR_P} to \ref{eq:SFR_PDE}).} \label{fig:P_sfr} \end{figure*} Given the validity of vertical dynamical equilibrium and the agreement of $\W$ with the simple weight estimator $\PDE$ (\autoref{eq:PDE}), the PRFM theory postulates that the yield $\Upsilon_{\rm tot}=\Ptot/\Ssfr$ (calibrated from simulations) can be used to predict $\Ssfr$ from $\PDE$, which is calculated from large-scale galactic properties in observations. Summing up all pressure components, we obtain the total pressure support and the corresponding feedback yield. We find a median $\Upsilon_{\rm tot}=1500\kms$ for model {\tt R8-4pc} and $720\kms$ for model {\tt LGR4-2pc}. As shown in \autoref{fig:P_sfr}, the new simulation results are overall in good agreement with the TIGRESS-classic suite for the relation between $\Ssfr$ and pressure or weight. In each panel, we directly compare our results with the fitting results from \citet{2022ApJ...936..137O} for measured midplane pressure, measured weight, and estimated weight: \begin{subequations} \begin{eqnarray} \label{eq:SFR_P} \log(\Ssfr) &=& 1.18~\log(\Ptottwo/k_B) -7.43 \\ \label{eq:SFR_W} \log(\Ssfr) &=& 1.17~\log(\W/k_B) -7.32\\ \label{eq:SFR_PDE} \log(\Ssfr) &=& 1.21~\log(\PDEtwo/k_B) - 7.66. \end{eqnarray} \end{subequations} We refrain from delivering a new fitting formula or making additional quantitative adjustments to the feedback yields given the limited parameter space covered in the present work.
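Nevertheless, the existing calibration can already be applied in practice. The sketch below evaluates the weight estimator $\PDE$ (\autoref{eq:PDE}) and then \autoref{eq:SFR_PDE} for illustrative solar-neighborhood-like inputs (the adopted $\rho_{\rm sd}$ is a rough stand-in value, not a measurement from this work):
\begin{verbatim}
import numpy as np

# Predict Sigma_SFR from the weight estimator P_DE, in cgs units.
G, kB = 6.674e-8, 1.381e-16
Msun, pc, kms = 1.989e33, 3.086e18, 1.0e5

def P_DE_kB(Sigma_gas, rho_sd, sigma_eff):
    """P_DE = pi G Sigma^2 / 2 + Sigma (2 G rho_sd)^{1/2} sigma_eff.
    Inputs: Sigma_gas [Msun pc^-2], rho_sd [Msun pc^-3], sigma_eff [km/s];
    returns P_DE/k_B in K cm^-3."""
    Sigma = Sigma_gas * Msun / pc**2
    rho = rho_sd * Msun / pc**3
    P = np.pi * G * Sigma**2 / 2 \
        + Sigma * np.sqrt(2 * G * rho) * sigma_eff * kms
    return P / kB

def Ssfr_pred(PDE_kB):
    """TIGRESS-classic fit: log Ssfr = 1.21 log(P_DE/k_B) - 7.66."""
    return 10.0**(1.21 * np.log10(PDE_kB) - 7.66)

PDE = P_DE_kB(Sigma_gas=10.0, rho_sd=0.1, sigma_eff=12.0)
print(PDE)             # ~2.1e4 K cm^-3, cf. the R8 entries in the table above
print(Ssfr_pred(PDE))  # ~3.6e-3 Msun yr^-1 kpc^-2, close to the R8 value
\end{verbatim}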
In the future, we will extend our parameter space survey, especially toward low-metallicity regimes, to generalize the numerical calibration of the feedback yield in the PRFM theory. \section{Summary \& Discussion}\label{sec:summary_and_discussion} \begin{figure} \centering \includegraphics[width=\linewidth]{mass_fraction_box.png} \includegraphics[width=\linewidth]{volume_fraction_box.png} \includegraphics[width=\linewidth]{Pressure_box.png} \includegraphics[width=\linewidth]{Upsilon_box.png} \caption{Summary of the main measured quantities. (a) Mass fraction and (b) volume fraction of each phase within $|z|<300\pc$. (c) Pressure components. (d) Feedback yields. The box and whisker enclose the 25$^{\rm th}$ to 75$^{\rm th}$ and 16$^{\rm th}$ to 84$^{\rm th}$ percentile ranges, respectively, of the time evolution over $t\in(250,450)\Myr$ for {\tt R8} and $(250,350)\Myr$ for {\tt LGR4}. The median is shown as a horizontal line within the box.} \label{fig:summary} \end{figure} \subsection{Summary of simulation results}\label{sec:summary} We present first results from a new numerical framework that synthesizes the TIGRESS-classic computational model of the star-forming ISM \citep{2017ApJ...846..133K} with our recently developed non-equilibrium cooling and radiation (NCR) module \citep{KGKO}. The detailed photochemical treatment and the effects of UV radiation from massive young stars are combined with the gravitational collapse/star formation and SN injection schemes implemented and tested in the TIGRESS-classic framework, in order to study the multiphase, turbulent, magnetized ISM self-consistently. This paper considers two galactic conditions, one representing the solar neighborhood ({\tt R8}) and the other a higher density/pressure environment ({\tt LGR4}; close to the molecular-gas-weighted mean conditions in the PHANGS survey). We delineate the ISM properties, with a focus on the multiphase ISM distribution near the midplane (within a scale height). We then repeat the basic analysis done in \citet{2022ApJ...936..137O} to test, validate, and calibrate the PRFM star formation theory. The key measured quantities from our analysis are summarized in \autoref{tbl:satprop}, \autoref{tbl:pressure}, and \autoref{fig:summary}. We summarize the ISM phase distributions by mass and volume within $|z|<300\pc$ in \autoref{fig:summary}(a) and (b). Near the galactic midplane (within one scale height of the disk), the cold, unstable, and warm neutral medium (\CNM{}, \UNM{}, and \WNM{}) occupies about 25\%, 30\%, and 35\% by mass and 2\%, 20\%, and 50\% by volume, respectively, in the solar neighborhood model {\tt R8}. The warm ionized medium (\WIM=\WPIM{}+\WCIM{}) contributes 8\% and 10\% by mass and volume, respectively, while the hot medium (\hot=\WHIM{}+\HIM{}) occupies about 15\% of the volume with a negligible mass contribution. It is important to keep in mind that there are large-amplitude temporal fluctuations (up to a factor of 2) in these values, as indicated by the box (25-75 percentiles) and whisker (16-84 percentiles) in \autoref{fig:summary}. Moving to conditions of higher gas surface density, total pressure, and SFR with model {\tt LGR4}, the mass contribution from colder components (i.e., \CMM{} and \CNM{}) increases, while the volume filling factors remain more or less constant. For both models, the mass fractions of \Cold{} increase with higher resolution at the expense of \UNM{} and \WNM{}, although the change in {\tt R8} is well within the time fluctuation level.
The volume fractions are converged up to the temporal variation level. \autoref{fig:summary}(c) and (d) show the midplane pressure components and feedback yields for both models. The two different resolutions give converged results for both {\tt R8} and {\tt LGR4}. The turbulent feedback yields are similar for {\tt R8} and {\tt LGR4}, with a slightly decreasing trend toward the higher-pressure environment. The thermal feedback yield decreases as expected from {\tt R8} to {\tt LGR4}, due to stronger shielding of the FUV radiation field in higher-density environments. In an upcoming paper (N. Linzer et al. in prep.), we will analyze the radiation field in depth to validate and calibrate the global attenuation model used in TIGRESS-classic (see \autoref{eq:ftau}). The magnetic feedback yields in {\tt R8} are generally larger than those in {\tt LGR4}; understanding the magnetic feedback yields requires further investigation of the galactic dynamo process, which in itself is a large and challenging area of research. As shown in \autoref{fig:P_sfr}, the total feedback yields are quite similar to those reported in \citet{2022ApJ...936..137O}. For {\tt R8}, $\Upsilon_{\rm tot}=1500\kms$, and for {\tt LGR4}, $\Upsilon_{\rm tot}=720\kms$. Both models have similar $\sigma_{\rm eff,\twop} \approx 12-13 \kms$ and $\sigma_{\rm z, turb, \twop}\sim 7-8\kms$. Finally, it is worth noting that the decrease in the \WNM{} mass fraction from model {\tt R8} to model {\tt LGR4} is at least qualitatively consistent with theoretical expectations \citep{2022ApJ...936..137O}. The \WNM{} mass fraction may be written as \begin{eqnarray} f_{M,\WNM{}} &=&\frac{\rho_w}{\rho_{\rm tot}}f_{V,\WNM{}} \nonumber \\ &=&\frac{P_{\rm th}}{P_{\rm tot}}\frac{\sigma_{\rm eff}^2}{c_w^2}f_{V,\WNM{}} \end{eqnarray} for $f_{V,\WNM{}}$ the volume filling factor, where we have used $P_{\rm th} = \rho_w c_w^2$ for $c_w$ the warm-gas sound speed (which is insensitive to galactic environment), $\rho_w$ for the typical density in the warm medium near the midplane, and $P_{\rm tot} \equiv \rho_{\rm tot} \sigma_{\rm eff}^2$ for $\rho_{\rm tot}$ the total midplane density. From \autoref{tbl:satprop} and \autoref{tbl:filling}, $\sigma_{\rm eff}$ and $f_{V,\WNM}$ are very similar between models {\tt R8} and {\tt LGR4}, whereas \autoref{tbl:pressure} gives a $\sim 30\%$ lower ratio of thermal to total pressure for {\tt LGR4} than for {\tt R8}. The $\sim$30\% decrease in the \WNM{} mass fraction (\autoref{tbl:filling} and \autoref{fig:summary}) is thus consistent with this expectation. \subsection{ISM Phase Balance and Distribution}\label{sec:disscuss_phase} \subsubsection{Comparison to Milky Way Empirical Constraints on ISM Phases}\label{sec:galactic_ism} Our phase distribution is overall in good agreement with multiwavelength galactic observations. \ion{H}{1} 21~cm lines are the fundamental probe of the atomic ISM. An accurate determination of both gas column density and spin temperature requires \ion{H}{1} absorption line measurements paired with emission lines. Generally speaking, \WNM{} dominates 21~cm emission spectra, but \WNM{} is extremely faint in absorption due to its low density and high spin temperature (which can be as high as the gas temperature due to radiative excitation by Ly$\alpha$ resonant scattering; \citealt{1952AJ.....57R..31W,1959ApJ...129..536F,2020ApJS..250....9S}). The detection of \WNM{} (and \UNM) in absorption requires highly sensitive absorption observations.
There have been a number of sensitive absorption line surveys that determine mass fractions of different \ion{H}{1} components \citep{2003ApJ...586.1067H,2013MNRAS.436.2366R,2018ApJS..238...14M}. Using a simple radiative transfer model with multiple Gaussian components, a component detected in absorption with low or intermediate spin temperature ($T_s<250\Kel$ or $250\Kel<T_s<1000\Kel$) is considered to be \CNM{} or \UNM{}, while a component detected only in emission is \WNM{} (with a small fraction detected in absorption with large spin temperature). The mass contribution to total \ion{H}{1} column density of each component is roughly 30\%, 20-30\%, and 40-50\% for \CNM, \UNM, and \WNM, respectively, generally consistent among surveys. From \autoref{tbl:filling}, the mass fractions of the cold, unstable, and warm neutral medium in model {\tt R8} are $\sim$ 0.3, 0.3, 0.4, generally consistent with current empirical constraints for the Milky Way to the extent that they are available. The observational measurement of thermal pressure using \ion{C}{1} fine structure lines shows a lognormal distribution with a mean at $P_{\rm th, \CNM}\sim 4000\Kel\pcc$ and an rms dispersion of 0.175 dex \citep{2011ApJ...734...65J}. We obtain the mass-weighted pressure PDF in {\tt R8} with mean and standard deviation of $\sim 1500\Kel\pcc$ and 0.27 dex for \CNM{}, $\sim 3700\Kel\pcc$ and 0.35 dex for \UNM{}, and $\sim 4000\Kel\pcc$ and 0.36 dex for \WNM{}. Observations of H$\alpha$ and pulsar dispersion measures suggest that \WIM{} forms a thick layer with scale height of $\sim 1-2\kpc$ \citep{1989ApJ...339L..29R,1991IAUS..144...67R,1993ApJ...411..674T,2008ApJ...686..363H,2008PASA...25..184G}. One can deduce the volume-averaged midplane electron density $\langle n_e \rangle \sim 0.02-0.05\pcc$ (by using dispersion measures to pulsars with known distances) and filling factor $f_{V, \WIM} \sim 0.05-0.15$ (by combining emission measures and dispersion measures) of the diffuse \WIM{} \citep{1987ASSL..134...87K,2001RvMP...73.1031F,2008PASA...25..184G}. For the midplane number density of total gas $\abrackets{\nH} \sim 0.5-1\pcc$ \citep[e.g.,][]{1990ApJ...365..544B,2015ApJ...814...13M}, the mass fraction of \WIM{} at the midplane is $\abrackets{n_e}/\abrackets{\nH}\simlt 10\%$. Our \WIM{} mass fraction of $f_{M, \WIM} \sim 7\%$ within $|z|<300\pc$ (see \autoref{tbl:filling}) is fully consistent with this empirical result. Direct measurement of the hot gas in X-rays is difficult due to its low density. In addition, a significant fraction of the diffuse soft X-ray emission originates from the Local Bubble \citep{1987ARA&A..25..303C}. Soft X-ray radiation from larger scales is presumably absorbed; for example, the band-averaged absorption cross section at $\sim 0.25$~keV is $\sim10^{-20} \cm^2{\rm H}^{-1}$ \citep{1990ApJ...354..211S}, yielding a mean free path of $\sim 30 \pc (\nH/1\pcc)^{-1}$. Direct observational constraints on the larger scale distribution of likely pervasive hot gas in our Galaxy are still lacking. \subsubsection{Comparisons to Self-consistent Numerical Models of the Star-Forming ISM}\label{sec:co_regulation_theory} Because the SFR and the ISM thermal and dynamical state co-regulate each other, one cannot be considered independently of the other.
A theoretical model that explicitly addresses co-regulation, computing the SFR needed to maintain the thermal properties of the warm and cold ISM, was introduced by \citet{2010ApJ...721..975O}; this and subsequent theoretical developments are summarized in \citet{2022ApJ...936..137O}. Several groups have recently developed numerical frameworks that solve (magneto)hydrodynamics with cooling and heating, including stellar feedback (of various forms) from star clusters that are self-consistently formed via gravitational collapse. Our own numerical studies began with a focus on just the warm and cold ISM, with feedback in the form of momentum injection and heating, both proportional to $\Ssfr$ \citep{2011ApJ...743...25K,2013ApJ...776....1K,2015ApJ...815...67K}. These simulations, with a wide range of $\Sgas$, showed that a quasi-steady state is reached, validating vertical dynamical equilibrium. For a solar neighborhood model (QA10 in \citealt{2013ApJ...776....1K}), the values of $\Ssfr\sim1.5\times10^{-3}\sfrunit$ and the midplane pressure (=weight) $\sim 10^4\,k_B\Kel\pcc$ were about a factor of two lower than those reported here (and from TIGRESS-classic) due to missing magnetic support and slightly weaker turbulence ($H\sim 80\pc$ vs. $220\pc$ and $\sigma_{z,{\rm turb}}\sim5\kms$ vs. $7-8\kms$). Coincidentally, the total feedback yield (without magnetic contribution) in \citet{2013ApJ...776....1K} is similar to that of the current simulations, as the fixed $(p_*/m_*)=3\times10^3\kms$ adopted in the earlier work was higher than the effective $(p_*/m_*)_{\rm eff}\sim10^3\kms$ realized via self-consistent expansion of SN-driven bubbles \citep{2017ApJ...834...25K}. \citet{2017ApJ...846..133K} introduced the TIGRESS-classic framework, with full treatment of the hot ISM. Direct comparison with the TIGRESS-classic suite results from \citet{2022ApJ...936..137O} regarding SFR, pressures, and feedback yields shows results that are overall consistent with the current work, modulo the slightly larger values of $\Upsilon_{\rm th}$ and hence $\Upsilon_{\rm tot}$ here (see \autoref{sec:prfm}). The lack of local shielding of FUV in TIGRESS-classic tends to result in a lower \Cold{} mass fraction ($f_{M, \CNM+\UNM}\sim 30\%$ in TIGRESS-classic vs. $f_{M,\Cold}\sim f_{M,\UNM}\sim30\%$ in TIGRESS-NCR). Inclusion of the ionizing radiation in TIGRESS-NCR converts significant \WNM{} into \WPIM{} ($f_{M,\WPIM}\sim7-8\%$), which is similar to the value obtained from the post-processing of TIGRESS-classic \citep{2020ApJ...897..143K}. \citet{2014A&A...570A..81H} and \citet{2018A&A...620A..21C} are similar to our earlier work \citep{2013ApJ...776....1K,2015ApJ...815...67K} in terms of their SN feedback being mostly in the form of momentum injection without creating the hot gas. The velocity dispersion in their models (which have $\Sgas=20\Surf$) is about $5-7\kms$, which is slightly lower than in both of our models. The magnetic fields tend to reduce the SFR by up to a factor of 2, with more reduction in the strong rotation case. Given that the magnetic feedback yield is about $30-40\%$ of the total, \citet{2015ApJ...815...67K} found similarly higher SFR in non-magnetized cases. Comparisons between magnetized and unmagnetized cases using the TIGRESS-NCR framework will be investigated in a separate paper.
The SILCC framework, first introduced by \citet{2015MNRAS.454..238W}, focuses on solar neighborhood ISM modeling, with particular emphasis on hydrodynamical evolution with a hydrogen and carbon chemistry network \citep[][collectively called {\sc SGChem}]{2007ApJS..169..239G,2007ApJ...659.1317G,2012MNRAS.421..116G}. \citet{2017MNRAS.466.1903G} added sink particle treatments and star formation via gravitational collapse in the SILCC framework. They emphasized the role of stellar winds in shutting off further accretion after sink formation. They found that the resulting SFR surface density and ISM properties (with a focus mostly on the hot gas filling factor) are sensitive functions of the density threshold for sink particle formation. The highest density threshold model ($n_{\rm thresh}=10^4\pcc$) yields $\Ssfr\sim10^{-3}\sfrunit$, while the low density threshold model ($n_{\rm thresh}=10^2\pcc$) experiences a strong initial burst of star formation with $\Ssfr>10^{-2}\sfrunit$. \citet{2017MNRAS.466.3293P} included radiation transfer for ionizing UV (without radiation pressure and with a constant FUV background), with the same treatment of SNe and stellar winds, and a high density threshold. Their models with and without ionizing radiation (with both SNe and stellar winds) show similar $\Ssfr\sim10^{-3}\sfrunit$, but the inclusion of UV radiation gives a smaller $f_{V,h}$ of $\sim 20-30\%$, a larger warm gas filling factor, and a reduced ${\rm H}_2$ gas mass (by about a factor of two) at the end of their simulation ($\sim70\Myr$). However, these quantities were still evolving, and the short runtime of their simulations makes it unclear whether the reported values are representative of the statistical steady state of these models. The SFR obtained by \citet{2017MNRAS.466.3293P} in their simulations with SNe, stellar winds, and ionizing radiation is similar to what we obtain here for the {\tt R8} model, $\Ssfr=3\times 10^{-3} \sfrunit$. Recently, \citet{2021MNRAS.504.1039R} conducted simulations using the SILCC framework with a more comprehensive feedback model including SNe, stellar winds, UV radiation, and CRs, as well as magnetic fields. By systematically turning on and off each feedback process, they found a progressive decrease in $\Ssfr$, $f_{V,h}$, and cold gas mass fraction with more feedback. The impact of CRs is not significant (given the short evolution time of $\sim 100\Myr$), and the model with SNe, stellar winds, and radiation (called SWR) shows $\Ssfr\sim 1.5-2\times10^{-3}\sfrunit$, similar to what we find and to observations. Within $|z|<250\pc$, their SWR model shows $f_{V,h}\sim 50\%$ ($35\%$ with CRs) and $f_{M, {\rm cold}}\sim 50\%$; both are larger than what we find here. One potential reason is that their FUV radiation was assumed to be constant over time, so that the thermal balance in the volume-filling warm and cold ISM may not be fully self-consistent. EUV radiation was transferred using a tree-based backward ray tracing method \citep{2021MNRAS.505.3730W}, which is inherently less accurate than the direct ray tracing method we adopt here, especially behind regions of strong shielding (pervasive for EUV due to the huge cross section of neutral hydrogen to ionizing radiation). Finally, due to the short evolution time ($t<100\Myr$), their measurements include an initial burst period ($25\Myr<t<100\Myr$), which may bias the hot gas filling factor toward higher values.
\citet{2021ApJ...920...44H} developed a local simulation that handles time-dependent hydrogen chemistry on-the-fly using a chemistry network based on {\sc SGChem}, and explored the effect of metallicity. Their radiation treatment is approximate: the (spatially-constant) unattenuated UV radiation field and CR ionization rate are scaled by recent star formation, with a local attenuation factor for FUV radiation applied using a tree-based method \citep{2012MNRAS.420..745C}. Photoionization is treated using an iterative Str\"omgren sphere approach \citep{2017MNRAS.471.2151H}. Although properties of the ISM phase structure from this simulation were not explicitly discussed, $\Ssfr\sim2-3\times10^{-3}\sfrunit$ and the mass fraction of the warm ionized medium ($\sim5-10\%$) for the solar metallicity model are consistent with observations and our results. \subsection{Future Perspectives}\label{sec:future} The new simulation framework, TIGRESS-NCR, presented in this paper provides a promising tool for modeling the star-forming ISM. The main advance from the TIGRESS-classic framework is the inclusion of direct UV radiation transfer and explicit chemical abundance calculations. These extensions allow us to examine more detailed aspects of ISM physics, and enable us to explore new parameter space beyond the conditions that apply in normal, low-redshift spiral galaxies like the Milky Way. One immediate application is to explore low metallicity environments that are common in local dwarfs and prevalent in all galaxies at high redshifts. Effects of metallicity on species abundances and the CO-to-H$_2$ conversion factor have been studied in previous work, with \citet{2018ApJ...858...16G,2020ApJ...903..142G} post-processing the TIGRESS-classic suite with six-ray radiation transfer and steady-state chemistry, and \citet{2021ApJ...920...44H,2022ApJ...931...28H} using a tree-based shielding column calculation with time-dependent hydrogen chemistry combined with steady state carbon/oxygen chemistry. Given the more accurate methods for UV radiation transfer implemented in the TIGRESS-NCR framework, it will be very interesting to make comparisons with these works, which employ approximate radiation transfer. With a suite of simulations at varying metallicity, we can extend the theoretical understanding of SFR/ISM co-regulation to the low-metallicity regime, where the thermal feedback yield (and therefore thermal pressure) is expected to become larger relative to other components because radiation easily propagates over large distances. This extension of the PRFM theory will be critical in developing a subgrid star formation recipe for large-scale cosmological simulations. Applying the TIGRESS-NCR framework to study regions with strong spiral structure will be straightforward, since the TIGRESS-classic framework has already been successfully used for models of this kind \citep{2020ApJ...898...35K}. Although TIGRESS-NCR represents a significant advance in resolving and modeling key physical processes, there is still more to be done. First, we do not explicitly model CR transport. Currently, we only include ionization and heating by low-energy CRs, with the background value scaled with $\Ssfr$ and $\Sgas$ and attenuated in high-density environments (see \autoref{sec:rt}). This is a physically and empirically motivated prescription but lacks quantitative calibration from direct numerical modeling and ignores the dynamical effect of CRs.
Full CR transport should include advection by the gas, streaming along magnetic field lines at the (ion) Alfv\'en speed, and diffusion by scattering off MHD waves that are likely self-generated for GeV and lower energies \citep{2021ApJ...922...11A,2022ApJ...929..170A}. TIGRESS-NCR provides a unique laboratory for CR transport modeling as our framework produces a turbulent, multiphase ISM with realistic magnetic field and ionization structure as well as realistic, high-velocity hot galactic winds. Although $\sim$GeV CRs dominate the total energy budget and are expected to be dynamically important \citep{2018MNRAS.479.3042G}, low-energy CRs are responsible for ionization in most of the ISM's mass. Therefore, spectrally resolved CR transport is necessary \citep[e.g.,][]{2020MNRAS.491..993G,2022MNRAS.510.3917G}. Thermal conduction, which is not included in our current framework, can alter the hot gas properties. The conductive heat transport from hot gas (created by SNe) to the warm/cold ISM leads to evaporation, although conductivity may be suppressed perpendicular to the magnetic field \citep{1965RvPP....1..205B}. To the extent that it can act, conductive evaporation maintains the hot gas pressure while increasing its mass and decreasing its temperature \citep[e.g.,][]{2019MNRAS.490.1961E}. Conduction could certainly alter the observable properties of diffuse X-ray emission from the hot gas, and potentially change the hot gas mass fraction and volume filling factor. \acknowledgements C.-G.K. and E.C.O. were supported in part by NASA ATP grant No. NNX17AG26G. The work of C.-G.K. was supported in part by NASA ATP grant No. 80NSSC22K0717. J.-G.K. acknowledges support from the Lyman Spitzer, Jr. Postdoctoral Fellowship at Princeton University and from the EACOA Fellowship awarded by the East Asia Core Observatories Association. M.G. acknowledges support from Paola Caselli and the Max Planck Institute for Extraterrestrial Physics. Partial support was also provided by grant No. 510940 from the Simons Foundation to E. C. Ostriker. Resources supporting this work were provided in part by the NASA High-End Computing (HEC) Program through the NASA Advanced Supercomputing (NAS) Division at Ames Research Center and in part by the Princeton Institute for Computational Science and Engineering (PICSciE) and the Office of Information Technology's High Performance Computing Center. \clearpage
\section{Introduction}\label{s:introduction} In this paper, we consider Covariance Steering (CS) problems for discrete-time linear stochastic systems. In this particular class of stochastic optimal control problems, one tries to find a control policy that will steer the probability distribution of the state, or more precisely the first two moments of the latter distribution, to a goal distribution in finite or infinite time. Specifically, we address a soft-constrained version of the CS problem in which the objective is defined as finding a control policy that will minimize the sum of the expected value of a running cost (e.g., control effort) and a terminal cost which corresponds to a distance metric in the space of probability distributions. To this end, we choose the (squared) Wasserstein distance \cite{p:givens1984class}. \noindent\textit{Literature Review:} Early versions of the CS problems were focused on the infinite horizon case \cite{p:skelton1987covassignment, p:skelton1987covcontroltheory, p:skelton1992improvedcovariance} for both discrete-time and continuous-time systems. Finite-horizon CS problems in continuous time are addressed in \cite{p:chen2015covariance1, p:chen2015covariance2, p:chen2018covariance3}. A soft-constrained version of CS in continuous time, which is based on the utilization of a Wasserstein distance terminal cost, is addressed in \cite{p:halder2016covariancewasserstein}. However, the latter problem formulation does not offer computational advantages over standard CS problem formulations given that for its solution one has to utilize numerical optimal control techniques (e.g., shooting methods) that rely, in general, on nonlinear programming (NLP) methods. Finite horizon discrete-time CS problems are typically addressed by means of optimization based approaches \cite{p:bakolas2018covarianceautomatica, p:bakolas2019liouville, p:bakolas2018partialcovariance, p:kotsalis2021robustcovariance, p:goldshtein2017covariance,p:okamoto2018covariancechance, p:balci2021covariancedisturbance, p:okamoto2019pathplanning}. Furthermore, multi-agent CS problems are addressed in \cite{p:Saravanos2021DistributedCS}, and these methods have been extended to nonlinear CS problems in \cite{p:bakolas2020greedynonlinearcovariance, p:ridderhof2019nonlinearcovariance, p:yi2020ddpcovariance}. However, all of these approaches rely on a semi-definite relaxation of the terminal covariance constraint to reduce the CS problem to a convex program. Recently, a maximum entropy finite-horizon CS problem for discrete-time deterministic linear systems was addressed in \cite{p:ito2022maxentropyCS} based on a Riccati equation approach. In our previous work \cite{p:balci2020covariancewasserstein}, we addressed the discrete-time version of the CS problem with Wasserstein terminal cost proposed in \cite{p:halder2016covariancewasserstein} and showed that it can be recast as a difference of convex functions program (DCP) \cite{p:an2005dcprogramming} which can be solved for local optimality with convergence guarantees using the convex-concave procedure (CCP) \cite{p:yuille2003ccp}. \noindent\textit{Main Contributions:} As we have already mentioned, finite horizon discrete-time CS problems are typically treated as optimization problems in the relevant literature.
The latter optimization problems are either convexified by means of semi-definite relaxations, which prohibit finding the exact solution to the CS problem, or they are cast as generic nonlinear programs, for which there are no guarantees of convergence or global optimality in general. To the best of our knowledge, this is the first paper that formulates the CS problem with a Wasserstein distance terminal cost as a convex semi-definite program without relying on any convex relaxations of non-convex constraints. We first show that the optimal value of the CS problem with state history feedback control policies is lower bounded by the optimal value of the CS problem with the randomized state feedback policy. We then formulate an SDP for solving the CS problem with a Wasserstein distance terminal cost and show that this SDP formulation is exact. In particular, we express the Wasserstein distance in a convenient form that allows us to express the objective function of the SDP as a linear function of the decision variables. Next, we show that the optimal policy, whose parameters are obtained by solving the associated SDP, corresponds to a deterministic feedback policy. Finally, we show the efficacy of our formulation in terms of the controller's performance and computation time via numerical simulations. \noindent\textit{Structure of the Paper:} The rest of the paper is organized as follows. In Section \ref{s:problem-formulation}, we formulate the CS problem with Wasserstein distance terminal cost. We then summarize previous results based on the state history feedback policy parametrization in Section \ref{s:state-history}. In Section \ref{s:randomized-policy}, we introduce the randomized state feedback policy parametrization and demonstrate its advantages over other parametrizations. In Section \ref{s:SDP-statefeedback}, we formulate the problem as an instance of an SDP using a suitable variable transformation. Numerical simulations are presented in Section \ref{s:numerical-experiments}. Finally, we conclude the paper in Section \ref{s:conclusion}. \section{Problem Formulation}\label{s:problem-formulation} \subsection{Notation} The space of $n$-dimensional real vectors is denoted as $\R{n}$ and the space of $n\times m$ matrices as $\R{n \times m}$. The spaces of $n\times n$ symmetric, positive semi-definite and positive definite matrices are denoted by $\S{n}$, $\S{n}^{+}$ and $\S{n}^{++}$, respectively. The $n\times n$ identity matrix is denoted as $\mathbf{I}_{n}$ whereas $\mathbf{0}$ denotes the zero matrix (or vector) with appropriate dimension. For $A, B \in \S{n}$, $A \succ B$ ($A \succeq B$) means $A - B \in \S{n}^{++}$ ($A - B \in\S{n}^{+}$). We use $\tr{\cdot}$ to denote the trace operator. $\bdiag{A_1, A_2, \dots, A_{N}}$ denotes the block diagonal matrix whose diagonal blocks are the matrices $A_1, A_2, \dots, A_{N}$. Vertical concatenation of the vectors $\{ x_i \in \mathbb{R}^n \}_{i=0}^N$ is denoted as $\vertcat{x_0, \dots, x_{N}}$. $x \sim \mathcal{N}(\mu, \Sigma)$ means that $x$ is a normal (Gaussian) random variable with mean $\mu\in\mathbb{R}^n$ and covariance matrix $\Sigma \in \S{n}^{+}$. The expectation and the covariance of a random variable $x$ are denoted as $\E{x}$ and $\Cov{x}$, respectively. \subsection{Wasserstein Distance} The Wasserstein distance defines a distance metric in the space of probability distributions over $\R{n}$. 
In particular, for two random variables over $\R{n}$ with probability density functions $\rho_1$ and $\rho_2$, their squared Wasserstein distance is denoted as $W_2^2(\rho_1, \rho_2)$ and defined as: \begin{align}\label{eq:Wasserstein-definition} W^2_2(\rho_1, \rho_2) := \inf_{\sigma \in \mathcal{P}(\rho_1, \rho_2)} \mathbb{E}_{y \sim \sigma} [ \lVert x_1 - x_2 \rVert^2_2] \end{align} where $\mathcal{P}(\rho_1, \rho_2)$ denotes the space of all probability distributions of the random variable $y = [x_1\t, x_2\t]\t$ over $\R{2n}$ with finite second moments and marginals $\rho_1$ and $\rho_2$ on $x_1$ and $x_2$, respectively. Typically, the Wasserstein distance between two arbitrary probability density functions does not admit an analytic expression and its computation can be a hard task. However, if both $\rho_1$ and $\rho_2$ correspond to the densities of two Gaussian distributions with mean vectors $\mu_1, \mu_2 \in \R{n}$ and covariance matrices $\Sigma_1, \Sigma_2 \in \mathbb{S}^{+}_n$, then the Wasserstein distance admits the following closed form expression \cite{p:givens1984class}: \begin{align}\label{eq:Wasserstein-Gaussian} W_2^2 (\rho_1, \rho_2) := & \lVert \mu_1 - \mu_2 \rVert_2^2 + \tr{\Sigma_1 + \Sigma_2} \nonumber \\ & - 2\tr{ \parantheses{\sqrt{\Sigma_2} \Sigma_1 \sqrt{\Sigma_2}}^{1/2} }. \end{align} \subsection{Problem Formulation} Let us consider the following discrete-time stochastic linear system: \begin{align}\label{eq:system-dynamics} x_{k+1} = A_k x_k + B_k u_k + w_k, \end{align} where $\{x_k\}_{k=0}^N$ is the state process over $\R{n}$, $\curly{u_k}_{k=0}^{N-1}$ is the control process over $\R{m}$ and $\curly{w_k}_{k=0}^{N-1}$ is the disturbance process over $\R{n}$. In addition, $A_k \in \R{n \times n}$, $B_k \in \R{n \times m}$ are known matrices for all $k$. We consider the case where $\curly{w_k}_{k=0}^{N-1}$ is a Gaussian white noise process, i.e., $\E{w_k} = \bm{0}$ for all $k \in \curly{0, \dots, N-1}$, $\E{w_k w_\ell\t} = \bm{0}$ for all $k \neq \ell \in \curly{0, \dots, N-1}$ and $\E{w_k w_k\t} = W_k \in \mathbb{S}_n^{+}$. We also assume that the initial state $x_0 \sim \mathcal{N}(\mu_0, \Sigma_0)$ and $\E{x_0 w_k\t} = \bm{0}$ for all $k \in \curly{0, \dots, N-1}$. The main objective in this paper is to find a control policy such that the distribution of the terminal state $x_N$ under the policy is close to a desired terminal Gaussian distribution in terms of the Wasserstein metric while minimizing the expected value of the quadratic control cost. Next, we state the precise problem formulation. \begin{problem}\label{problem:first formulation} Let $\mu_0, \mu_\mathrm{d} \in \R{n}$, $\Sigma_0, \Sigma_\mathrm{d} \in \mathbb{S}_n^{++}$, $\curly{R_k}_{k=0}^{N-1}$, $N \in \mathbb{Z}^{+}$, $\lambda > 0$ and $\curly{A_k, B_k, W_k}_{k=0}^{N-1}$ be given, where $W_k \in \mathbb{S}_n^{+}$ and $R_k \in \mathbb{S}_m^{++}$ for all $k \in \curly{0, \dots, N-1}$. Furthermore, let $\Pi_0$ denote the space of admissible causal control policies $\curly{m_0(X^0), m_1(X^1), \dots, m_{N-1}(X^{N-1})}$ for system \eqref{eq:system-dynamics} with $u_k = m_k(X^k)$, where $X^k = \curly{x_0, \dots, x_k}$ is the state sequence up to time step $k$. 
Then, find a policy $\pi \in \Pi_0$ that solves the following problem: \begin{subequations}\label{eq:infinite-dim-problem} \begin{align} \min_{\pi \in \Pi_0} & ~~ J(\pi) \label{eq:infinite-dim-problem-objective}\\ \text{s.t.} & ~~ \eqref{eq:system-dynamics}\quad \forall k \in \{0, \dots, N-1 \} \nonumber \\ & ~~ u_k = m_k(X^k) \\ & ~~ x_0 \sim \mathcal{N}(\mu_0, \Sigma_0) \end{align} \end{subequations} where $J(\pi) := \E{\sum_{k=0}^{N-1} u_k\t R_k u_k} + \lambda W_2^2(\rho_N, \rho_\mathrm{d})$. In addition, $\rho_N$ and $\rho_\mathrm{d}$ denote the probability density functions of the terminal state distribution and the desired distribution, respectively. \end{problem} \begin{remark} Note that Problem \ref{problem:first formulation} is an infinite-dimensional problem because the optimization is over $\Pi_0$, which is the set of all causal policies. Thus, the variables of Problem \ref{problem:first formulation} are the control laws $m_k(X^k)$, which are functions of state sequences. \end{remark} \section{Solution via State History Feedback Policy}\label{s:state-history} Since Problem \ref{problem:first formulation} is an infinite-dimensional optimization problem, it is generally computationally intractable. To improve computational tractability, we restrict the set of admissible policies to policies that are affine functions of the states visited up to time step $k$ as follows: \begin{align}\label{eq:affine-policy} m^{(1)}_k(X^k) = v^{(1)}_k + \sum_{\tau = 0}^{k} K^{(1)}_{k, \tau} (x_\tau - \Bar{x}_\tau), \end{align} where $\Bar{x}_\tau = \E{x_\tau}$. We denote the space of policies that are defined as in \eqref{eq:affine-policy} as $\Pi_1$. We observe that all policies $\pi \in \Pi_1$ can be parametrized by $v^{(1)}_k \in \R{m}$ and $K^{(1)}_{k,j} \in \R{m \times n}$ for all $k,j \in \curly{0, \dots, N-1}$ with $k \geq j$. \subsection{Difference of Convex Functions Program Formulation} Next, we will show that Problem \ref{problem:first formulation} can be recast as a difference of convex functions program (DCP). To this end, we introduce the following variables, which will facilitate the subsequent discussion and analysis. Let us consider the concatenated vectors $\bm{x} := \vertcat{x_0, \dots, x_N}$, $\bm{u} := \vertcat{u_0, \dots, u_{N-1}}$, $\bm{w} := \vertcat{w_0, \dots, w_{N-1}}$ and $\bm{v} := \vertcat{v_0^{(1)}, \dots, v_{N-1}^{(1)}}$. Then, the relationship between the concatenated vectors is given as follows: \begin{subequations}\label{eq:compact-dynamics} \begin{align} \bm{x} & = \mathbf{\Gamma} x_0 + \mathbf{G_u} \bm{u} + \mathbf{G_w} \bm{w}, \\ \bm{u} & = \bm{v} + \bm{\mathcal{K}} (\bm{x} - \bm{\Bar{x}}), \end{align} \end{subequations} where the matrices $\bm{\Gamma} \in \R{n(N+1) \times n}$, $\mathbf{G_u} \in \R{n(N+1)\times mN}$, $\mathbf{G_w} \in \R{n(N+1) \times nN}$ are derived from \eqref{eq:system-dynamics} and $\bm{\mathcal{K}} \in \R{mN \times n(N+1)}$ is derived from \eqref{eq:affine-policy}. Interested readers can refer to \cite{p:bakolas2018covarianceautomatica} for detailed derivations. 
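For concreteness, one possible construction of the matrices $\bm{\Gamma}$, $\mathbf{G_u}$ and $\mathbf{G_w}$ in \eqref{eq:compact-dynamics} is sketched below in Python (using only NumPy). The helper name is illustrative and, for brevity, the sketch assumes time-invariant system matrices $(A, B)$; the time-varying case is analogous, with the matrix powers replaced by products of the $A_k$. \begin{verbatim}
# Sketch: block matrices of the concatenated dynamics
#   x = Gamma x0 + Gu u + Gw w,
# assuming time-invariant (A, B) for brevity.
import numpy as np

def concatenated_dynamics(A, B, N):
    n, m = B.shape
    # Block row k of Gamma is A^k, since x_k = A^k x0 + ...
    Gamma = np.vstack([np.linalg.matrix_power(A, k)
                       for k in range(N + 1)])
    Gu = np.zeros((n * (N + 1), m * N))
    Gw = np.zeros((n * (N + 1), n * N))
    for k in range(1, N + 1):        # block row k corresponds to x_k
        for j in range(k):           # influence of u_j and w_j on x_k
            Akj = np.linalg.matrix_power(A, k - 1 - j)
            Gu[k*n:(k+1)*n, j*m:(j+1)*m] = Akj @ B
            Gw[k*n:(k+1)*n, j*n:(j+1)*n] = Akj
    return Gamma, Gu, Gw
\end{verbatim}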
Under the state history feedback policy \eqref{eq:affine-policy}, the mean of the concatenated state vector $\bm{x}$ is an affine function of $\bm{v}$ as $ \bm{\Bar{x}} := \bm{\Gamma} \mu_0 + \mathbf{G_u} \bm{v}.$ To be able to express the covariance of $\bm{x}$ as a convex quadratic function of the decision variables, we utilize the following bijective variable transformation: \begin{subequations}\label{eq:variable-transformation1} \begin{align} \bm{\Theta} & = \bm{\mathcal{K}} (\Imat{n(N+1)} - \mathbf{G_u} \bm{\mathcal{K}})^{-1}, \\ \bm{\mathcal{K}} & = \bm{\Theta} (\Imat{n(N+1)} - \mathbf{G_u} \bm{\Theta})^{-1}.\label{eq:variable-transformation1-eq2} \end{align} \end{subequations} Using \eqref{eq:compact-dynamics}-\eqref{eq:variable-transformation1} and the matrix inversion lemma \cite{b:matrixinversionlemma}, we have \begin{subequations}\label{eq:covXcovu} \begin{align} \Cov{\bm{x}} & = (\Imat{n(N+1)} + \mathbf{G_u} \bm{\Theta}) \mathbf{\Bar{S}} \times \nonumber \\ & \qquad \qquad (\Imat{n(N+1)} + \mathbf{G_u} \bm{\Theta})\t, \\ \Cov{\bm{u}} & = \bm{\Theta} \mathbf{\Bar{S}} \bm{\Theta}\t , \end{align} \end{subequations} where $\mathbf{\Bar{S}} := \Cov{\bm{\Gamma} x_0 + \mathbf{G_w} \bm{w}} = \bm{\Gamma} \Sigma_0 \bm{\Gamma}\t + \mathbf{G_w} \bdiag{W_0, \dots, W_{N-1}} \mathbf{G_w}\t$. By utilizing \eqref{eq:covXcovu}, one can express the objective function $J(\pi)$ of Problem \ref{problem:first formulation} with admissible policy space $\Pi_1 \subset \Pi_0$ in terms of $(\bm{v}, \bm{\Theta})$ as follows: \begin{align} \Tilde{J}(\bm{v}, \bm{\Theta}) := J_1(\bm{v})+J_2(\bm{\Theta}) + J_3(\bm{\Theta}) - J_4(\bm{\Theta}) \end{align} where $J_i$, for $i\in \{1, \dots, 4\}$, are defined as \cite{p:balci2020covariancewasserstein}: \begin{subequations}\label{eq:Jdecomp} \begin{align} J_1(\bm{v}) & := \bm{v}\t \bm{\mathcal{R}} \bm{v} + \lambda \lVert \mathbf{F} (\mathbf{\Gamma} \mu_0 + \mathbf{G_u} \bm{v}) - \mu_\mathrm{d} \rVert_2^2 \\ J_2(\bm{\Theta}) & := \tr{\bm{\mathcal{R}} \bm{\Theta} \mathbf{\Bar{S}} \bm{\Theta}\t} \\ J_3(\bm{\Theta}) & := \lambda \operatorname{tr} \big( \mathbf{F} (\Imat{n(N+1)} + \mathbf{G_u} \bm{\Theta})~ \mathbf{\Bar{S}} \times \nonumber \\ & \qquad \qquad (\Imat{n(N+1)} + \mathbf{G_u} \bm{\Theta})\t \mathbf{F}\t + \Sigma_\mathrm{d} \big) \\ J_4(\bm{\Theta}) & := 2 \lambda \operatorname{tr} \bigg( \big( \sqrt{\Sigma_\mathrm{d}} \mathbf{F} (\Imat{n(N+1)} + \mathbf{G_u} \bm{\Theta}) ~ \mathbf{\Bar{S}} \times \nonumber \\ & \qquad (\Imat{n(N+1)} + \mathbf{G_u} \bm{\Theta})\t \mathbf{F}\t \sqrt{\Sigma_\mathrm{d}} \big)^{1/2} \bigg) \end{align} \end{subequations} where $\bm{\mathcal{R}} := \bdiag{R_0, \dots, R_{N-1} } \in \mathbb{S}_{mN}^{++}$ and $\mathbf{F} := [\bm{0}, \dots, \bm{0}, \Imat{n}] \in \R{n \times n (N+1)}$. \begin{remark} Note that in \cite{p:balci2020covariancewasserstein}, we showed that the objective function of Problem \ref{problem:first formulation}, when the admissible policy space is $\Pi_1$, can be written as a difference of two convex functions, which does not necessarily mean that it is non-convex. However, in our more recent work \cite{p:balci2021convexity-wasserstein}, we showed that the objective function $\Tilde{J}(\bm{v}, \bm{\Theta})$ is in general non-convex and admits multiple local minima. We also provided a sufficient condition under which the objective function becomes convex, but it is not possible to check whether this condition holds without solving Problem \ref{problem:first formulation} first. 
\end{remark} \section{Randomized State Feedback Policy}\label{s:randomized-policy} In this section, we define a new set of policies, denoted as $\Pi_2$, which is comprised of randomized affine state feedback policies. Each policy $\pi \in \Pi_2$ is a sequence of feedback laws $\{m^{(2)}_0(x_0), m^{(2)}_1(x_1), \dots, m^{(2)}_{N-1}(x_{N-1}) \}$, where each $m^{(2)}_k$ is a function from $\R{n}$ to the set of $m$-dimensional random variables with Gaussian distribution. The function $m^{(2)}_k(x_k)$ is realized as follows: \begin{align}\label{eq:random-state-policy} m^{(2)}_k(x_k) = v^{(2)}_k + K^{(2)}_k (x_k - \Bar{x}_k) + n_k \end{align} where $n_k \sim \mathcal{N}(\bm{0}, Q^{(2)}_k)$, $\E{n_k x_k\t} = \bm{0}$ and $Q^{(2)}_k \in \mathbb{S}^{+}_{m}$. Since every policy $\pi^{(2)} \in \Pi_2$ is parametrized by the terms $\{ v^{(2)}_k, K^{(2)}_k, Q^{(2)}_k \}_{k=0}^{N-1} $, solving Problem \ref{problem:first formulation} over $\Pi_2$ is a finite-dimensional optimization problem. Given the discrete-time stochastic linear system \eqref{eq:system-dynamics}, the dynamics of the mean and the covariance of the state process $\curly{x_k}_{k=0}^{N} $ under the randomized state feedback policy defined in \eqref{eq:random-state-policy} are given as follows: \begin{subequations}\label{eq:state-mean-cov-under-random-policy} \begin{align} \mu_{k+1} & = A_k \mu_k + B_k v_k^{(2)}, \label{eq:state-mean-random-policy}\\ \Sigma_{k+1} & = (A_k + B_k K^{(2)}_k) \Sigma_k (A_k + B_k K^{(2)}_k)\t \nonumber \\ & \qquad + B_k Q^{(2)}_k B_k\t + W_k, \label{eq:state-cov-under-random-policy} \end{align} \end{subequations} where $\mu_k$ and $\Sigma_k$ denote the mean vector and the covariance matrix of $x_k$ for all $k \in \{0, \dots, N\}$, respectively. In addition, let $\Bar{u}_k$ and $U_k$ denote the mean and covariance of $u_k$, respectively, which can be written in terms of the policy parameters $\curly{v_k^{(2)}, K_k^{(2)}, Q_k^{(2)}}_{k=0}^{N-1}$ as follows: \begin{align}\label{eq:control-mean-cov-under-random-policy} \Bar{u}_k = v_k^{(2)}, \quad U_k = K^{(2)}_k \Sigma_k K^{(2)\mathrm{T}}_k + Q^{(2)}_k. \end{align} Moreover, the dynamics of the state mean and the state covariance under the state history feedback policy defined in \eqref{eq:affine-policy} are given by: \begin{subequations} \begin{align} \mu_{k+1} & = A_k \mu_k + B_k v_k^{(1)} \\ \Sigma_{k+1} & = A_k \Sigma_{k} A_k\t + A_k L_k\t B_k\t + B_k L_k A_k\t \nonumber \\ & \quad + B_k U_k B_k\t + W_k \label{eq:state-cov-under-history} \end{align} \end{subequations} where \begin{subequations}\label{eq:LandMdef} \begin{align} L_k & = \E{\Tilde{u}_k \Tilde{x}_k\t} = \E{\sum_{\tau=0}^k K^{(1)}_{k,\tau} \Tilde{x}_\tau \Tilde{x}_k\t}, \\ U_k & = \E{\Tilde{u}_k \Tilde{u}_k\t} = \E{ \sum_{\tau, \ell=0}^{k} K^{(1)}_{k,\tau} \Tilde{x}_\tau \Tilde{x}_\ell\t K^{(1)\mathrm{T}}_{k,\ell}}, \end{align} \end{subequations} for all $k \in \{0, \dots, N-1\}$, where $\Tilde{x}_k := x_k - \Bar{x}_k$ and $\Tilde{u}_k := u_k - \Bar{u}_k$. It is worth noting that the term $L_k $ corresponds to the cross-covariance between the random variables $x_k$ and $u_k$. \begin{assumption}\label{assumption:non-singular-sigma} For any policy $\pi \in \Pi_1$, the state covariance $\Sigma_k \in \mathbb{S}_n^{++}$ for all $k \in \curly{0, \dots, N}$. 
\end{assumption} We now show that for any policy $\pi \in \Pi_1$, there exists a policy $\Bar{\pi} \in \Pi_2$ such that the first and the second moments of the control and the state processes that are induced by $\Bar{\pi}$ are equal to the ones that are induced by $\pi$ under Assumption \ref{assumption:non-singular-sigma}. This will allow us to conclude that the optimal value of Problem \ref{problem:first formulation} when the admissible policy space is $\Pi_1\subset \Pi_0$ is lower bounded by the optimal value of the same problem when the admissible policy space is $\Pi_2\subset \Pi_0$. The next lemma and theorem formally state the previous claims about the policy spaces $\Pi_1$ and $\Pi_2$. \begin{lemma}\label{lemma:equivalence-policies} Let Assumption \ref{assumption:non-singular-sigma} hold and let $\curly{\mu^{(1)}_k, \Sigma^{(1)}_k}_{k=0}^N$, $\curly{ \Bar{u}_k^{(1)}, U_k^{(1)} }_{k=0}^{N-1}$ be the mean and covariance of the state and the control sequences under a policy $\pi \in \Pi_1$. Then, for all $\pi \in \Pi_1$, there exists a policy $\Bar{\pi} \in \Pi_2$ such that \begin{subequations}\label{eq:equivalence-relations} \begin{align} \mu_k^{(1)} &= \mu_k^{(2)}, \quad &\Bar{u}_k^{(1)} &= \Bar{u}_k^{(2)}, \label{eq:mean-equivalence}\\ \Sigma_k^{(1)} &= \Sigma_k^{(2)}, \quad & U_k^{(1)}&= U_k^{(2)}, \label{eq:covariance-equivalence} \end{align} \end{subequations} for all $k \in \curly{0, \dots, N}$, where $\curly{\mu^{(2)}_k, \Sigma^{(2)}_k}_{k=0}^N$ and $\curly{\Bar{u}_k^{(2)}, U_k^{(2)}}_{k=0}^{N-1}$ denote the mean and covariance of the state and the control sequences under a policy $\Bar{\pi} \in \Pi_2$. \end{lemma} \begin{proof} Let us consider a policy $\pi \in \Pi_1$ and let $\{v^{(1)}_k, K^{(1)}_{k, \ell} \}_{k=0, \ell=0}^{N-1, k}$ be the parameters corresponding to this policy. Our goal is to find a policy $\Bar{\pi} \in \Pi_2$ with corresponding policy parameters $\{ v_k^{(2)}, K_k^{(2)}, Q_k^{(2)} \}_{k=0}^{N-1}$ such that the mean and covariance of the state and control processes satisfy \eqref{eq:equivalence-relations}. First, let $v_k^{(2)} = v_k^{(1)}$. Because the feedback terms do not affect the dynamics of the mean of the state process $\curly{x_k}$ for both sets of policies, \eqref{eq:mean-equivalence} follows immediately. Now, let $\Sigma^{(1)}_k, U^{(1)}_k$ be the covariance matrices of $x_k$ and $u_k$ induced by the state history feedback policy defined in \eqref{eq:affine-policy}. Then, let \begin{subequations}\label{eq:KandQdef-lemma-proof} \begin{align} K_k^{(2)} &= L_k (\Sigma^{(1)}_k)^{-1}, \\ Q_k^{(2)} &= U^{(1)}_k - L_k (\Sigma^{(1)}_k)^{-1} L_k\t, \end{align} \end{subequations} where $L_k$ and $U^{(1)}_k$ are defined as in \eqref{eq:LandMdef}. Using \eqref{eq:state-cov-under-random-policy} with the parameters $\curly{K^{(2)}_k, Q^{(2)}_k}$, we can write the state covariance dynamics under policy $\Bar{\pi} \in \Pi_2$, which is denoted by $\Sigma^{(2)}_k$, as follows: \begin{align}\label{eq:TildeSigmaDynamics} \Sigma^{(2)}_{k+1} & = A_k \Sigma^{(2)}_k A_k\t + A_k \Sigma^{(2)}_k (\Sigma_k^{(1)})^{-1} L_k\t B_k\t \nonumber \\ & + B_k L_k (\Sigma_k^{(1)})^{-1} \Sigma^{(2)}_k A_k\t \nonumber\\ & + B_k (L_k (\Sigma_k^{(1)})^{-1} \Sigma^{(2)}_k (\Sigma_k^{(1)})^{-1} L_k\t ) B_k\t \nonumber \\ & + B_k (U^{(1)}_k - L_k (\Sigma_k^{(1)})^{-1} L_k\t ) B_k\t + W_k. \end{align} The initial state covariance is equal for both policies, that is, $\Sigma_0 = \Sigma^{(1)}_0 = \Sigma^{(2)}_0$. 
Furthermore, if we assume that $\Sigma_k^{(1)} = \Sigma_k^{(2)}$ as the inductive hypothesis, then it follows that $\Sigma_{k+1}^{(1)} = \Sigma_{k+1}^{(2)}$ by virtue of \eqref{eq:state-cov-under-history} and \eqref{eq:TildeSigmaDynamics}. Thus, we conclude that $\Sigma_k^{(1)} = \Sigma_k^{(2)}$ holds for all $k$ by induction. Using \eqref{eq:control-mean-cov-under-random-policy}, the covariance matrix $U_k^{(2)}$ of the control input $u_k$ under $\curly{K_k^{(2)}, Q_k^{(2)}}$ is given by \begin{align}\label{eq:TildeSigmaUequation} U_k^{(2)} &= U^{(1)}_k - L_k (\Sigma^{(1)}_k)^{-1} L_k\t + L_k (\Sigma^{(1)}_k)^{-1} L_k\t \nonumber \\ & = U_k^{(1)} \end{align} which completes the proof. \end{proof} \begin{remark} Note that Lemma \ref{lemma:equivalence-policies} applies not only to state history feedback policies \eqref{eq:affine-policy}. For any linear or nonlinear control policy $\pi \in \Pi_0$ which induces a state process with non-singular covariance matrix for all time steps, there exists a policy $\Tilde{\pi} \in \Pi_2$ which induces state and control processes that have the same first and second moments as the state and control processes induced under the policy $\pi \in \Pi_0$. \end{remark} \begin{theorem} Let $J^{\star}_1 = J(\pi_1^\star)$, where $\pi_1^\star \in \Pi_1$ is the minimizer of Problem \ref{problem:first formulation} over $\Pi_1$, and $J_2^\star = J(\pi_2^\star)$, where $\pi_2^\star$ is the minimizer of Problem \ref{problem:first formulation} over $\Pi_2$. Then, $J^{\star}_2 \leq J_1^\star$. \end{theorem} \begin{proof} Let $\pi_1^\star \in \Pi_1$ be the optimal policy that solves Problem \ref{problem:first formulation} over $\Pi_1$ and let $\{v^\star_k, K^\star_{k,\ell} \}_{k=0, \ell=0}^{N-1, k}$ be the corresponding set of parameters. Now, let $\curly{v'_k, K'_{k}, Q'_k}_{k=0}^{N-1}$ be the policy parameters of $\pi_2' \in \Pi_2$ which are defined as in \eqref{eq:KandQdef-lemma-proof}. Then, from Lemma \ref{lemma:equivalence-policies}, the first two moments of the control and state sequences under policy $\pi_1^\star$ are equal to the ones under policy $\pi_2'$. Since both policies are affine functions of the states (and the randomization term in $\pi_2'$ is Gaussian), the distribution of $x_k$ is Gaussian at each time step $k$, and the distribution of the terminal state $x_N$ under both policies is also Gaussian with the same mean and covariance. Thus, the Wasserstein distance terms in the objective function of Problem \ref{problem:first formulation} are equal for both policies. Furthermore, the running cost term at time step $k$ is given by: \begin{align} \E{u_k\t R_k u_k} & = \tr{R_k \E{u_k u_k\t} } \nonumber \\ & = \Bar{u}_k\t R_k \Bar{u}_k + \tr{R_k U_k}. \label{eq:running-costk} \end{align} Thus, the running cost is a function of the first two moments of $u_k$. Since the values of $\Bar{u}_k, U_k$ are equal under both policies $\pi_2'$ and $\pi_1^\star$, the running cost terms in \eqref{eq:running-costk} are equal at all time steps for both policies. Thus, we conclude that $J(\pi_2^\star) \leq J(\pi_2') = J(\pi_1^\star)$. \end{proof} \section{SDP Formulation for Randomized State Feedback Policy}\label{s:SDP-statefeedback} In Section \ref{s:randomized-policy}, we showed that the optimal value of Problem \ref{problem:first formulation} over randomized state feedback policies defined in \eqref{eq:random-state-policy} is upper bounded by the optimal value of the same problem over state history feedback policies defined in \eqref{eq:affine-policy}. 
Furthermore, the number of parameters required to parametrize the control policies in $\Pi_2$ is smaller than the number required for the policies in $\Pi_1$. On the other hand, with the policy class $\Pi_1$, Problem \ref{problem:first formulation} can only be recast as a difference of convex functions program, as shown in Section \ref{s:state-history}. In this section, we will formulate Problem \ref{problem:first formulation} with admissible policy space $\Pi_2\subset \Pi_0$ as a convex semi-definite program using a suitable variable transformation. In particular, in view of \eqref{eq:state-mean-cov-under-random-policy} and \eqref{eq:control-mean-cov-under-random-policy}, the finite-dimensional problem can be formulated as follows: \begin{align}\label{eq:problem-NLP-beforeSDP} \min_{\substack{v_k, \mu_k, \\ K_k, Q_k, \Sigma_k}} & J_{\mathrm{NLP}}(\curly{v_k, K_k, Q_k}_{k=0}^{N-1}) \\ \text{s.t.} & ~~ \eqref{eq:state-mean-random-policy}, \eqref{eq:state-cov-under-random-policy} \quad \forall k \in \{0, \dots, N-1\} \nonumber \end{align} where \begin{align}\label{eq:objective-NLP-beforeSDP} J_{\mathrm{NLP}} & := \sum_{k=0}^{N-1} \Big( v_k\t R_k v_k + \tr{R_k (K_k \Sigma_k K_k\t + Q_k) } \Big) \nonumber \\ & ~~~~ + \lambda \Big(\lVert \mu_N - \mu_\mathrm{d} \rVert_2^2 + \tr{\Sigma_N + \Sigma_\mathrm{d}} \nonumber \\ & ~~~~~ \qquad -2 \mathrm{tr} \big( (\sqrt{\Sigma_\mathrm{d}} \Sigma_N \sqrt{\Sigma_\mathrm{d}})^{1/2} \big) \Big) \end{align} since $\E{u_k\t R_k u_k} = \E{u_k}\t R_k \E{u_k} + \tr{R_k U_k} $. The sum of the first two terms of the objective function in \eqref{eq:objective-NLP-beforeSDP} corresponds to the running cost, whereas the third term corresponds to the terminal cost, which is taken to be the squared Wasserstein distance between the terminal state density and the desired density. The constraints \eqref{eq:state-mean-random-policy} and \eqref{eq:state-cov-under-random-policy} impose the mean and the covariance dynamics given in the respective equations. The optimization problem given in \eqref{eq:problem-NLP-beforeSDP} is a nonlinear program (NLP) which is generally non-convex due to the presence of the terms $\tr{R_k K_k \Sigma_k K_k\t }$ in the running cost and the nonlinear equality constraints \eqref{eq:state-cov-under-random-policy}. In order to convexify the latter problem, we first introduce the following variable transformation, which is often used in convex optimization based approaches for optimal control and robust control problems for discrete-time linear systems \cite{p:kothare1996robustMPC-LMI, p:chen2015covariance2, p:benedikter2022covariance}: \begin{align}\label{eq:Pkdef} P_k = K_k \Sigma_k. \end{align} By introducing the new variable $P_k$ in \eqref{eq:Pkdef}, we can eliminate the bilinear terms in \eqref{eq:state-cov-under-random-policy}. Now, observe that the term $K_k \Sigma_k K_k\t$ that appears in \eqref{eq:objective-NLP-beforeSDP} and \eqref{eq:state-cov-under-random-policy} is equal to $P_k \Sigma_k^{-1} P_k\t$ under \eqref{eq:Pkdef}. With this observation, we define a new decision variable $M_k$ as follows: \begin{align}\label{eq:MKdef} M_k = P_k \Sigma_k^{-1} P_k\t + Q_k. \end{align} By using $M_k$ and $P_k$, the nonlinear equality constraint \eqref{eq:state-cov-under-random-policy} can be written equivalently as follows: \begin{align} \Sigma_{k+1} & = A_k \Sigma_k A_k\t + A_k P_k\t B_k\t + B_k P_k A_k\t \nonumber \\ & \qquad + B_k M_k B_k\t + W_k. \label{eq:nonlinear-equality-partA} \end{align} The following lemmas will be used to formulate Problem \eqref{eq:problem-NLP-beforeSDP} as a standard SDP. 
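Before stating these lemmas, the following minimal numerical check (with randomly generated data; all names in the snippet are illustrative) verifies that, under the transformations \eqref{eq:Pkdef} and \eqref{eq:MKdef}, the bilinear covariance update \eqref{eq:state-cov-under-random-policy} coincides with the linear constraint \eqref{eq:nonlinear-equality-partA}: \begin{verbatim}
# Sanity check: with P = K Sigma and M = P Sigma^{-1} P^T + Q,
# the bilinear covariance update equals its linearized form.
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 2
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
K = rng.standard_normal((m, n))
S = rng.standard_normal((n, n))
Sigma = S @ S.T + np.eye(n)       # positive definite
Qh = rng.standard_normal((m, m))
Q = Qh @ Qh.T                     # positive semi-definite
W = np.eye(n)

P = K @ Sigma
M = P @ np.linalg.solve(Sigma, P.T) + Q

# (A + B K) Sigma (A + B K)^T + B Q B^T + W  ...bilinear form
lhs = (A + B @ K) @ Sigma @ (A + B @ K).T + B @ Q @ B.T + W
# A Sigma A^T + A P^T B^T + B P A^T + B M B^T + W  ...linear form
rhs = (A @ Sigma @ A.T + A @ P.T @ B.T + B @ P @ A.T
       + B @ M @ B.T + W)
assert np.allclose(lhs, rhs)
\end{verbatim}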
\begin{lemma}\label{lemma:equivalent-lmi} Let $X \in \mathbb{S}_n^{+}$, $Y \in \mathbb{S}_m^{++}$ and $N \in \R{n \times m} $. Then, there exists a matrix $Q \in \mathbb{S}_n^{+}$ such that \begin{align}\label{eq:lemma-lmi-eq1} X = N Y^{-1} N\t + Q \end{align} if and only if \begin{align}\label{eq:lemma-lmi-eq2} \left[ \begin{array}{cc} X & N \\ N\t & Y \end{array} \right] \succeq \bm{0}. \end{align} \end{lemma} \noindent The proof of Lemma \ref{lemma:equivalent-lmi} is straightforward and is therefore omitted. The equivalence of \eqref{eq:lemma-lmi-eq1} and \eqref{eq:lemma-lmi-eq2} can be seen by applying Schur's complement formula to \eqref{eq:lemma-lmi-eq2}. \begin{lemma}\label{lemma:equivalent-varphi} Let $\varphi : \mathbb{S}_n^{+} \times \mathbb{S}_n^+ \rightarrow \R{}$ be defined as: \begin{align}\label{eq:varphi-definition} \varphi(M, N) := - \tr{(\sqrt{M} N \sqrt{M})^{1/2}}. \end{align} Then, $\varphi(M, N)$ can be equivalently written as: \begin{align} \varphi(M, N) & = \min_{L \in \mathcal{M}(N, M)} ~~ - \tr{L}, \label{eq:equivalent-varphi-constr} \end{align} where the set $\mathcal{M}(N, M)$ is defined as follows: \begin{align} \mathcal{M}(N, M) = \bigg\{ L \in \mathbb{S}^{+}_n ~|~ \left[ \begin{array}{cc} N & M^{-1/2} L \\ L M^{-1/2} & \Imat{n} \end{array} \right] \succeq \bm{0} \bigg\}. \end{align} \end{lemma} \begin{proof} Let us consider the following optimization problem: \begin{subequations}\label{problem:proof-equivalent-varphi} \begin{align} \min_{L \in \mathbb{S}_n^+} & - \tr{L} \\ \text{s.t.}~~ & (\sqrt{M} N \sqrt{M})^{1/2} \succeq L \label{eq:proof-equivalent-varphi-constr} \end{align} \end{subequations} It follows readily that the minimizer $L^\star$ of the latter problem is equal to $(\sqrt{M} N \sqrt{M})^{1/2}$. Thus, the optimal value of the objective function of the optimization problem in \eqref{problem:proof-equivalent-varphi} is equal to the value of $\varphi(M,N)$ defined in \eqref{eq:varphi-definition}. Therefore, \begin{subequations} \begin{align} \eqref{eq:proof-equivalent-varphi-constr} & \Leftrightarrow \sqrt{M} N \sqrt{M} \succeq L^2 \label{eq:relation-proof1}\\ & \Leftrightarrow N \succeq M^{-1/2} L \Imat{n} L M^{-1/2} \label{eq:relation-proof2}\\ & \Leftrightarrow \left[ \begin{array}{cc} N & M^{-1/2} L \\ L M^{-1/2} & \Imat{n} \end{array} \right] \succeq \bm{0} \label{eq:relation-proof3} \end{align} \end{subequations} The relation in \eqref{eq:relation-proof1} holds since the function $f(X) = X^{2}$ is monotonically non-decreasing for all $X \in \mathbb{S}_n^+$ in the L\"{o}wner partial order sense. The relation in \eqref{eq:relation-proof2} holds due to the congruence transform with $M^{-1/2}$. The final step of the proof in \eqref{eq:relation-proof3} follows from direct application of Schur's complement formula to \eqref{eq:relation-proof2}. 
\end{proof} Using Lemma \ref{lemma:equivalent-lmi} and Lemma \ref{lemma:equivalent-varphi}, we can write the problem in \eqref{eq:problem-NLP-beforeSDP} as a standard SDP as follows: \begin{subequations}\label{problem:sdp-formulation} \begin{align} \min_{\substack{v_k, \mu_k, L \\ P_k, \Sigma_k, M_k}} & J_{\mathrm{SDP}}(\curly{v_k, P_k, M_k}_{k=0}^{N-1}, L)\\ \text{s.t.}~~ & \eqref{eq:state-mean-random-policy}, \eqref{eq:nonlinear-equality-partA} \nonumber\\ & \left[ \begin{array}{cc} M_k & P_k \\ P_k\t & \Sigma_k \end{array} \right] \succeq \bm{0} \label{eq:SDP-Mk-constr} \\ & \left[ \begin{array}{cc} \Sigma_N & \Sigma_\mathrm{d}^{-1/2} L \\ L \Sigma_\mathrm{d}^{-1/2}& \Imat{n} \end{array} \right] \succeq \bm{0} \\ & ~ L, \Sigma_k, M_k \succeq \bm{0} \label{eq:SDP-PSD-constr} \end{align} \end{subequations} where the constraints \eqref{eq:state-mean-random-policy}, \eqref{eq:nonlinear-equality-partA}, \eqref{eq:SDP-Mk-constr} and \eqref{eq:SDP-PSD-constr} are imposed for all $k \in \{0, \dots, N-1 \}$. In addition, the objective function $J_{\mathrm{SDP}}$ is defined as follows: \begin{align} J_{\mathrm{SDP}} := & \sum_{k=0}^{N-1} \big( v_k\t R_k v_k + \tr{R_k M_k} \big) \nonumber \\ & \hspace{-0.50cm} + \lambda \big(\lVert \mu_N - \mu_\mathrm{d} \rVert_2^2 + \tr{ \Sigma_N + \Sigma_\mathrm{d} - 2 L } \big). \end{align} \begin{theorem}\label{theorem:equivalenceofSDPandNLP} Let $\curly{\mu'_k, v'_k, \Sigma'_k, K'_k, Q'_k}$ be the minimizer of the NLP in \eqref{eq:problem-NLP-beforeSDP}, and let $\{\mu''_k, v''_k, \Sigma''_k, P''_k, M''_k, L \}$ be the minimizer of the SDP in \eqref{problem:sdp-formulation}. Then, $v'_k = v''_k$, $\mu'_k = \mu''_k$, $\Sigma'_k = \Sigma''_k, K'_k = P''_k (\Sigma''_k)^{-1}, Q'_k = M''_k - P''_k (\Sigma''_k)^{-1} P^{\prime \prime \mathrm{T}}_k$, for all $k \in \{0, \dots, N-1\}$. \end{theorem} \begin{proof} The equalities $\mu'_k = \mu''_k$ and $v'_k = v''_k$ follow immediately given that the problems in \eqref{problem:sdp-formulation} and \eqref{eq:problem-NLP-beforeSDP} share the constraint in \eqref{eq:state-mean-random-policy}, which enforces the state mean dynamics, and, in addition, the terms related to the mean dynamics in the objective functions $J_{\mathrm{NLP}}$ and $J_{\mathrm{SDP}}$ are equal. By the definition of $M_k$ in \eqref{eq:MKdef}, it follows that $\tr{R_k M_k } = \tr{R_k (K_k \Sigma_k K_k\t + Q_k)}$. By utilizing the definition of $M_k$, the state covariance dynamics in \eqref{eq:state-cov-under-random-policy} can be equivalently expressed as in \eqref{eq:MKdef} and \eqref{eq:nonlinear-equality-partA}. Furthermore, we observe that \eqref{eq:SDP-Mk-constr} is equivalent to \eqref{eq:MKdef} from Lemma \ref{lemma:equivalent-lmi}. Finally, the last term of the objective function of the NLP in \eqref{eq:objective-NLP-beforeSDP} can be expressed as the optimal value of a relevant minimization problem in view of Lemma \ref{lemma:equivalent-varphi}. Hence, the SDP problem in \eqref{problem:sdp-formulation} is equivalent to the NLP problem in \eqref{eq:problem-NLP-beforeSDP}. \end{proof} \begin{remark}\label{remark:separable-problems} The problem given in \eqref{problem:sdp-formulation} can be decomposed into two optimization problems: one for steering the mean and one for steering the covariance. 
This is because the objective function $J_{\mathrm{SDP}}$ is equal to the sum of $J_{\mathrm{SDP},\mathrm{mean}}$ and $J_{\mathrm{SDP}, \mathrm{cov}}$, where $J_{\mathrm{SDP},\mathrm{mean}} = \sum_{k=0}^{N-1} v_k^{\mathrm{T}} R_k v_k + \lambda \lVert \mu_N - \mu_{\mathrm{d}} \rVert_2^2$ and $J_{\mathrm{SDP}, \mathrm{cov}} = \sum_{k=0}^{N-1} \tr{R_k M_k} + \lambda \tr{\Sigma_N + \Sigma_\mathrm{d} - 2 L}$. The mean steering problem can be formulated as an equality constrained convex quadratic program as follows: \begin{align*} \min_{v_k, \mu_k} ~& \sum_{k=0}^{N-1} v_k^{\mathrm{T}} R_k v_k + \lambda \lVert \mu_N - \mu_{\mathrm{d}} \rVert_2^2 \\ \text{s.t.} ~ & \eqref{eq:state-mean-random-policy} \end{align*} \end{remark} Although the proposed control policy~\eqref{eq:random-state-policy} for the solution to the CS problem corresponds to a randomized state feedback policy, it turns out that the optimal policy for the problem in \eqref{problem:sdp-formulation} corresponds to a deterministic policy, i.e., $Q_k = \bm{0}$ for all $k$. This observation is formally stated in the following theorem. \begin{theorem} Let us assume that $A_k$ is non-singular for all $k \in \{0,\dots, N-1\}$, let $\pi' \in \Pi_2$ be the optimal policy that solves the optimization problem in \eqref{eq:problem-NLP-beforeSDP}, and let $\curly{v_k, K_k, Q_k}_{k=0}^{N-1}$ be its corresponding parameters. Then, $Q_k = \bm{0}$ for all $k \in \curly{0, \dots, N-1}$. \end{theorem} \begin{proof} Let us assume, for the sake of contradiction, that $Q_\ell \neq \bm{0}$ for some $\ell \in \curly{0, \dots, N-1}$. Let $\curly{\Sigma_k}_{k=0}^{N}$ be the induced state covariance sequence under the optimal policy $\pi'$, and let $P_\ell$ and $M_\ell$ be defined as in \eqref{eq:Pkdef} and \eqref{eq:MKdef}, respectively. Let us now consider the following optimization problem: \begin{subequations}\label{eq:theorem3-problem} \begin{align} \min_{M_\ell, P_\ell} & ~ \tr{R_\ell M_\ell } \\ \text{s.t.} & ~ \Sigma_{\ell+1} = A_\ell \Sigma_\ell A_\ell\t + A_\ell P_\ell\t B_\ell\t \nonumber \\ & \qquad ~~~~ + B_\ell P_\ell A_\ell\t + B_\ell M_\ell B_\ell\t + W_\ell \\ & ~ \begin{bmatrix} M_\ell & P_\ell \\ P_\ell\t & \Sigma_\ell \end{bmatrix} \succeq \bm{0} \end{align} \end{subequations} which can be written in a more compact way after dropping the subscripts as follows: \begin{subequations}\label{eq:theorem3-problem-compact} \begin{align} \min_{M, P} & ~ \tr{R M} \\ \text{s.t.} & ~ Z = \Sigma + P\t Y\t + Y P + Y M Y\t \\ & \begin{bmatrix} M & P \\ P\t & \Sigma \end{bmatrix} \succeq \bm{0} \end{align} \end{subequations} where $Z = A_\ell^{-1}(\Sigma_{\ell+1} - W_\ell)A_\ell^{-\mathrm{T}}$ and $Y = A_\ell^{-1} B_\ell$. Furthermore, the dual problem associated with the problem in \eqref{eq:theorem3-problem-compact} is given by \begin{subequations} \begin{align} \max_{H \in \mathbb{S}_n, Q \in \mathbb{S}_{m+n}^{+}} & ~ \tr{H(Z-\Sigma) - Q_{22}\Sigma} \label{eq:theorem3-dual-objective}\\ \text{s.t.} & ~ Q_{11} - (R-Y\t H Y) = \bm{0} \label{eq:theorem3-dual-constr1} \\ & ~ Q_{12}\t = -HY \label{eq:theorem3-dual-constr2} \end{align} \end{subequations} where $Q = \left [ \begin{smallmatrix} Q_{11} & Q_{12} \\ Q_{12}\t & Q_{22} \end{smallmatrix} \right]$. 
From complementary slackness, we conclude that the optimal values of the primal and dual variables satisfy the following equality: \begin{align}\label{eq:comp-slack-first} Q \begin{bmatrix} \Imat{m} & P \Sigma^{-1} \\ \bm{0} & \Imat{n} \end{bmatrix} \begin{bmatrix} M - P \Sigma^{-1} P\t & \bm{0} \\ \bm{0} & \Sigma \end{bmatrix}= \bm{0} \end{align} since $\Sigma$ is non-singular in view of Assumption~\ref{assumption:non-singular-sigma}. From \eqref{eq:comp-slack-first}, we obtain: \begin{subequations}\label{eq:comp-slack-eqns} \begin{align} Q_{11} (M - P \Sigma^{-1} P\t) = \bm{0} \label{eq:comp-slack1}\\ Q_{12}\t (M - P \Sigma^{-1} P\t) = \bm{0} \label{eq:comp-slack3} \end{align} \end{subequations} Now, let $\mathcal{M} = M - P \Sigma^{-1} P\t$. By multiplying both sides of the dual feasibility constraint \eqref{eq:theorem3-dual-constr1} by $\mathcal{M}$ on the left, we obtain: \begin{subequations} \begin{align} \mathcal{M}(Q_{11} - (R - Y\t H Y)) & = \bm{0} \\ \mathcal{M}Q_{11} + \mathcal{M}Y\t H Y & = \mathcal{M}R, \end{align} \end{subequations} where in our derivations we have used that $\mathcal{M} Q_{11} = \bm{0}$ from \eqref{eq:comp-slack1} and $\mathcal{M} Y\t H Y = \bm{0}$ from \eqref{eq:theorem3-dual-constr2} and \eqref{eq:comp-slack3}. It follows that $\mathcal{M}R=\bm{0}$. Since $R \succ \bm{0}$, we have $\mathcal{M} = \bm{0}$. Thus, we have $M_\ell - P_\ell \Sigma_\ell^{-1} P_\ell\t = Q_\ell = \bm{0}$. This contradicts our initial assumption that there exists $\ell \in \curly{0, \dots, N-1}$ such that $Q_\ell \neq \bm{0}$ and thus, we conclude that $Q_k = \bm{0}$ for all $k$. This completes the proof. \end{proof} \section{Numerical Experiments}\label{s:numerical-experiments} We solve the SDP in \eqref{problem:sdp-formulation} for the linear system \eqref{eq:system-dynamics} with parameters: $A_k = \left[\begin{smallmatrix} 1.0 & 0.175 \\ -0.343 & 1.0 \end{smallmatrix} \right]$, $B_k = \left[\begin{smallmatrix} 0.789 \\ 0.415 \end{smallmatrix}\right]$, $W_k = 0.5 \Imat{2}$, $\mu_0 = [4.0, 3.0]\t$, $\Sigma_0 = \Imat{2}$, $\mu_\mathrm{d} = [0, 0]\t$, $\Sigma_\mathrm{d} = \left[ \begin{smallmatrix} 3 & -2 \\ -2 & 3 \end{smallmatrix} \right]$, $R_k = 1.0$, $N = 60$ to generate the experimental results in Fig. \ref{fig:3d-state-evolution} and Fig. \ref{fig:comparison-LMIwithPrevious}. We also consider a two-dimensional and a three-dimensional double integrator system with $A_k = \left[\begin{smallmatrix}\Imat{n} & \Delta t \Imat{n}\\ \bm{0} & \Imat{n}\end{smallmatrix}\right]$, $B_k = \left[ \begin{smallmatrix} \bm{0} \\ \Imat{n} \end{smallmatrix} \right]$ with $\Delta t = 0.05$ and $n = 2,3$ to show that our SDP formulation in \eqref{problem:sdp-formulation} can handle CS problems with higher dimensional systems. The results of these experiments are given in Table \ref{tab:comp_LMI}. In all of our experiments, we use MOSEK \cite{mosek} to solve the SDPs along with the modeling tool CVXPY \cite{p:diamond2016cvxpy}. 
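The sketch below is a minimal CVXPY encoding of \eqref{problem:sdp-formulation} for a time-invariant system. It is meant as an illustration of the formulation rather than a reproduction of the exact code used in our experiments; the helper name, the argument conventions, and the use of the open-source SCS solver as a fallback are illustrative choices. \begin{verbatim}
# Minimal CVXPY sketch of the SDP formulation above for a
# time-invariant (A, B, W, R). Illustrative only.
import cvxpy as cp
import numpy as np
from scipy.linalg import sqrtm

def cs_wasserstein_sdp(A, B, W, R, mu0, Sigma0, mud, Sigmad, N, lam):
    n, m = B.shape
    Sd_inv_sqrt = np.linalg.inv(sqrtm(Sigmad).real)  # Sigma_d^{-1/2}

    v = [cp.Variable(m) for _ in range(N)]
    mu = [cp.Variable(n) for _ in range(N + 1)]
    Sig = [cp.Variable((n, n), symmetric=True) for _ in range(N + 1)]
    P = [cp.Variable((m, n)) for _ in range(N)]
    M = [cp.Variable((m, m), symmetric=True) for _ in range(N)]
    L = cp.Variable((n, n), symmetric=True)

    cons = [mu[0] == mu0, Sig[0] == Sigma0, L >> 0]
    cost = lam * (cp.sum_squares(mu[N] - mud) + cp.trace(Sig[N])
                  + np.trace(Sigmad) - 2 * cp.trace(L))
    for k in range(N):
        cost += cp.quad_form(v[k], R) + cp.trace(R @ M[k])
        cons += [
            mu[k + 1] == A @ mu[k] + B @ v[k],
            Sig[k + 1] == (A @ Sig[k] @ A.T + A @ P[k].T @ B.T
                           + B @ P[k] @ A.T + B @ M[k] @ B.T + W),
            # Block LMI; it also implies M_k, Sigma_k >= 0.
            cp.bmat([[M[k], P[k]], [P[k].T, Sig[k]]]) >> 0,
        ]
    cons += [cp.bmat([[Sig[N], Sd_inv_sqrt @ L],
                      [L @ Sd_inv_sqrt, np.eye(n)]]) >> 0]

    prob = cp.Problem(cp.Minimize(cost), cons)
    prob.solve(solver=cp.SCS)  # or cp.MOSEK with a license
    # Recover the deterministic gains K_k = P_k Sigma_k^{-1}.
    K = [P[k].value @ np.linalg.inv(Sig[k].value) for k in range(N)]
    return [vk.value for vk in v], K, prob.value
\end{verbatim} With the parameters listed above (and $R_k$ passed as a $1 \times 1$ matrix), this routine returns the feed-forward terms $v_k$, the feedback gains $K_k$ of the deterministic optimal policy, and the optimal value.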
\input{data.tex} \begin{figure*} \centering \begin{subfigure}{0.32\linewidth} \includegraphics[height=\linewidth, width=\linewidth]{figures/3d-plot-lambda05-cropped.pdf} \caption{$\lambda = 0.5$}\label{subfig:state-evolution-lambda05} \end{subfigure} \begin{subfigure}{0.32\linewidth} \includegraphics[height=\linewidth,width=\linewidth]{figures/3d-plot-lambda2-cropped.pdf} \caption{$\lambda=2.0$}\label{subfig:state-evolution-lambda20} \end{subfigure} \begin{subfigure}{0.32\linewidth} \includegraphics[height=\linewidth,width=\linewidth]{figures/3d-plot-exact-cropped.pdf} \caption{$\lambda \rightarrow \infty$}\label{subfig:state-evolution-lambdainfty} \end{subfigure} \caption{ Evolution of state statistics illustrated in terms of the 2-$\sigma$ confidence ellipsoids. Cyan, green and red ellipsoids show the initial, terminal and desired distributions, respectively. Figures \ref{subfig:state-evolution-lambda05}, \ref{subfig:state-evolution-lambda20} and \ref{subfig:state-evolution-lambdainfty} show the evolution of the state statistics when $\lambda=0.5$, $\lambda=2.0$, and $\lambda \rightarrow \infty$, respectively. }\label{fig:3d-state-evolution} \end{figure*} Fig. \ref{fig:3d-state-evolution} illustrates the evolution of the state statistics under the optimal policy found by solving the SDP in \eqref{problem:sdp-formulation}. As can be seen from Figures \ref{subfig:state-evolution-lambda05}-\ref{subfig:state-evolution-lambdainfty}, as $\lambda$ increases, the terminal distribution approaches the desired distribution. \begin{figure*}[h] \centering \begin{subfigure}{0.48\linewidth} \begin{tikzpicture} \begin{axis}[ height=0.55\linewidth, width=\linewidth, xlabel={$N$}, ylabel= Computation Time (s), ylabel style={at={(0.05, 0.5)}}, grid=both, grid style={line width=0.01pt}, legend style={nodes={scale=0.9},at={(0.2,0.9)},anchor=north} ] \addplot table [x=N, y=disturbance-simple, col sep=comma] {computation_converter_all.csv}; \addlegendentry{Dist-Simple} \addplot table [x=N, y=auxilary-simple, col sep=comma] {computation_converter_all.csv}; \addlegendentry{SH-Simple} \addplot table [x=N, y=state-fb, col sep=comma] {computation_converter_all.csv}; \addlegendentry{SDP} \end{axis} \end{tikzpicture} \caption{}\label{subfig:computation-simple} \end{subfigure} \begin{subfigure}{0.48\linewidth} \begin{tikzpicture} \begin{axis}[ height=0.55\linewidth, width=\linewidth, xlabel={$N$}, ylabel= Computation Time (s), ylabel style={at={(0.05, 0.5)}}, ylabel style={at={(axis description cs:0.03,.5)}}, grid=both, grid style={line width=0.01pt}, legend style={nodes={scale=0.9},at={(0.2,0.9)},anchor=north} ] \addplot table [x=N, y=disturbance-full, col sep=comma] {computation_converter_all.csv}; \addlegendentry{Dist-Full} \addplot table [x=N, y=auxilary-full, col sep=comma] {computation_converter_all.csv}; \addlegendentry{SH-Full} \addplot table [x=N, y=state-fb, col sep=comma] {computation_converter_all.csv}; \addlegendentry{SDP} \end{axis} \end{tikzpicture} \caption{}\label{subfig:computation-full} \end{subfigure} \begin{subfigure}{0.48\linewidth} \begin{tikzpicture} \begin{axis}[ height=0.55\linewidth, width=\linewidth, xlabel={$N$}, ylabel= Optimal Values, ylabel style={at={(0.05, 0.5)}}, grid=both, grid style={line width=0.01pt}, legend style={nodes={scale=0.9},at={(0.2,0.9)},anchor=north} ] \addplot table [x=N, y=disturbance-simple, col sep=comma] {optimal_converter_all.csv}; \addlegendentry{Dist-Simple} \addplot table [x=N, y=auxilary-simple, col sep=comma] {optimal_converter_all.csv}; 
\addlegendentry{SH-Simple} \addplot table [x=N, y=state-fb, col sep=comma] {optimal_converter_all.csv}; \addlegendentry{SDP} \end{axis} \end{tikzpicture} \caption{}\label{subfig:optimal-simple} \end{subfigure} \begin{subfigure}{0.48\linewidth} \begin{tikzpicture} \begin{axis}[ height=0.55\linewidth, width=\linewidth, xlabel={$N$}, ylabel= Optimal Values, ylabel style={at={(0.05, 0.5)}}, ylabel style={at={(axis description cs:0.03,.5)}}, grid=both, grid style={line width=0.01pt}, legend style={nodes={scale=0.9},at={(0.2,0.9)},anchor=north} ] \addplot table [x=N, y=disturbance-full, col sep=comma] {optimal_converter_all.csv}; \addlegendentry{Dist-Full} \addplot table [x=N, y=auxilary-full, col sep=comma] {optimal_converter_all.csv}; \addlegendentry{SH-Full} \addplot table [x=N, y=state-fb, col sep=comma]{optimal_converter_all.csv}; \addlegendentry{SDP} \end{axis} \end{tikzpicture} \caption{}\label{subfig:optimal-full} \end{subfigure} \caption{Computation times (\ref{subfig:computation-simple}, \ref{subfig:computation-full}) and optimal values (\ref{subfig:optimal-simple}, \ref{subfig:optimal-full}) for increasing $N$. Dist and SH represent the affine disturbance feedback policy and the state history feedback policy, respectively. In Figures \ref{subfig:computation-simple} and \ref{subfig:optimal-simple}, only the block diagonal terms or the last disturbance term are used to parametrize the feedback control policy, whereas in Figures \ref{subfig:computation-full} and \ref{subfig:optimal-full}, the full state or disturbance history is used. }\label{fig:comparison-LMIwithPrevious} \end{figure*} Figures \ref{subfig:computation-simple} and \ref{subfig:optimal-simple} show comparison results between the randomized state feedback policy in \eqref{eq:random-state-policy}, the truncated affine disturbance feedback policy \cite{p:balci2021covariancedisturbance} and the truncated state history feedback policy \cite{p:balci2020covariancewasserstein}. In the utilized truncated affine disturbance feedback policy, only the last disturbance term $w_{k-1}$ from the history of disturbances is used for the computation of the control input $u_k$ at time step $k$ (in contrast with the standard, or non-truncated, affine disturbance feedback policy, in which the whole history of disturbances $\{w_{0}, \dots, w_{k-1}\}$ is used instead). The state history feedback policy is parametrized by the matrix variable $\bm{\Theta}$ defined in \eqref{eq:variable-transformation1}, which is constrained to be a lower block triangular matrix to preserve the causality of the latter control policy. In the utilized truncated state history feedback policy, the matrix $\bm{\Theta}$ is further restricted to be a block diagonal matrix. Note that this truncation scheme does not necessarily mean that the control will be determined by a truncated history of states, since the matrix variable $\bm{\mathcal{K}}$ in \eqref{eq:variable-transformation1-eq2} is still lower block triangular (and not necessarily block diagonal) even if $\bm{\Theta}$ is constrained to be a block diagonal matrix. However, the number of decision variables in the truncated state history feedback policy for the CS problem is smaller than in the standard (non-truncated) version of this policy, which typically leads to a smaller computational cost. 
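To make the two parametrizations concrete, the sketch below (an illustrative encoding; the mask-based representation is not part of the solver) constructs the block sparsity patterns imposed on $\bm{\Theta} \in \R{mN \times n(N+1)}$ in the standard and truncated state history feedback policies: \begin{verbatim}
# Block sparsity masks for Theta:
# causal = block lower triangular, truncated = block diagonal.
import numpy as np

def theta_mask(N, n, m, truncated=False):
    mask = np.zeros((m * N, n * (N + 1)))
    for k in range(N):                 # block row k <-> u_k
        start = k if truncated else 0
        for j in range(start, k + 1):  # allowed state blocks
            mask[k*m:(k+1)*m, j*n:(j+1)*n] = 1.0
    return mask
\end{verbatim}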
For both of the truncated policies used in our comparisons, the computation time increases at the same rate with increasing problem horizon $N$ as for the (memoryless) state feedback policy induced by the SDP formulation in \eqref{problem:sdp-formulation}. However, the optimal parameters for the truncated policies yield sub-optimal results compared with our proposed SDP-based (memoryless) state feedback policy, as shown in Fig. \ref{subfig:optimal-simple}. If one uses the standard (or non-truncated) versions of the affine disturbance feedback or state history feedback control policies, the sub-optimality gap is eliminated, as shown in Fig. \ref{subfig:optimal-full}; however, the CS problem becomes computationally intractable as the problem horizon $N$ increases, as shown in Fig. \ref{subfig:computation-full}. In contrast to the SDP formulation, the other policy parametrizations lead to non-convex problem formulations. However, since the values of the different local minima are close to each other in this problem instance, the optimal values returned by these approaches turn out to be close to each other in Fig. \ref{subfig:optimal-full}. \begin{table*}[h] \centering \begin{tabular}{|c||c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline N & 30 & 40 & 50 & 60 & 70 & 80 & 90 & 100 & 110 & 120 & 130 & 140 & 150 \\ \hline System in Fig. \ref{fig:3d-state-evolution} & 1.45 & 2.40 & 3.60 & 5.22 & 6.77 & 8.77 & 10.88 & 13.38 & 16.11 & 19.05 & 22.24 & 25.75 & 29.34 \\ \hline Double-Int.(2d) & 3.62 & 6.20 & 9.55 & 13.48 & 18.48 & 23.99 & 29.85 & 37.44 & 44.04 & 52.39 & 61.27 & 70.34 & 80.37 \\ \hline Double-Int.(3d) & 6.81 & 11.85 & 18.44 & 26.23 & 35.30 & 45.45 & 57.28 & 71.04 & 85.38 & 100.31 & 117.52 & 136.73 & 156.31 \\ \hline \end{tabular} \caption{Computation time, in seconds, required for the solution of the associated SDP in \eqref{problem:sdp-formulation} for different problem instances. The first row indicates the problem horizon $N$, while the second, third and fourth rows correspond to the randomly generated system in Fig. \ref{fig:3d-state-evolution}, the two-dimensional double integrator, and the three-dimensional double integrator, respectively.} \label{tab:comp_LMI} \end{table*} The time complexity of SDP problems is in $\mathcal{O}(\ell^3)$, where $\ell$ is the number of decision variables \cite{p:helmberg1996complexitySDP}. Let $n, m$ be the dimensions of the state $x_k$ and the control input $u_k$, respectively. Then, the number of decision variables corresponding to the covariance problem is given by $N(n^2 + nm + m^2) + n^2$, which is linearly proportional to $N$ (for instance, for the system in Fig. \ref{fig:3d-state-evolution} with $n=2$, $m=1$ and $N=60$, this amounts to $60 \cdot 7 + 4 = 424$ variables). Therefore, the time complexity of solving Problem \ref{problem:first formulation} over randomized state feedback policies is in $\mathcal{O}(N^3)$. This fact is also reflected by the computation times reported in Table \ref{tab:comp_LMI}. \section{Conclusion}\label{s:conclusion} In this paper, we addressed a class of covariance steering problems for discrete-time stochastic linear systems with Wasserstein terminal cost. We showed that the proposed state feedback policies have significant advantages over previous policy parametrizations and formulated an SDP to find the optimal policy parameters by using suitable variable transformations. We then showed that the optimal policy is deterministic for the problems that we addressed. Finally, we demonstrated the efficacy of our approach via extensive numerical simulations. 
In our future work, we plan to extend our approach to robust and nonlinear covariance control problems as well as dynamic games with covariance assignment constraints. \bibliographystyle{ieeetr} \section{Main Proof} Consider the covariance dynamics: \begin{subequations}\label{eq:covariance-dynamics} \begin{align} &\Sigma_{k+1} = A_k \Sigma_k A_k\t + A_k P_k\t B_k\t + B_k P_k A_k\t + B_k M_k B_k\t + W_k \\ &\left[ \begin{array}{cc} M_k & P_k \\ P_k\t & \Sigma_k \end{array} \right] \succeq 0 \end{align} \end{subequations} We assume that $\Sigma_k \succ 0$, $R_k \succ 0$, and that $A_k^{-1}$ exists. Consider the optimization problem: \begin{subequations}\label{eq:first-problem} \begin{align} \min_{M_k, P_k} &~~ \tr{R_k M_k} \label{eq:first-problem-objective}\\ \text{s.t.} & ~~\Sigma_{k+1} = A_k \Sigma_k A_k\t + A_k P_k\t B_k\t + B_k P_k A_k\t + B_k M_k B_k\t + W_k \label{eq:first-problem-constr1}\\ &\left[ \begin{array}{cc} M_k & P_k \\ P_k\t & \Sigma_k \end{array} \right] \succeq 0\label{eq:first-problem-constr2} \end{align} \end{subequations} Constraint \eqref{eq:first-problem-constr1} can be equivalently written as: \begin{align} A_k^{-1} (\Sigma_{k+1} - W_k) A_k^{-\mathrm{T}} = \Sigma_k + P_k\t B_k\t A_k^{-\mathrm{T}} + A_k^{-1} B_k P_k + A_k^{-1} B_k M_k B_k\t A_k^{-\mathrm{T}} \end{align} We simplify the notation by dropping the subscripts $k$: \begin{align} Z - (\Sigma + P\t Y\t + Y P + Y M Y\t) = 0 \end{align} where $ Z = A_k^{-1} (\Sigma_{k+1} - W_k) A_k^{-\mathrm{T}}$ and $ Y = A_k^{-1} B_k$. Using the new notation, the optimization problem in \eqref{eq:first-problem} can be equivalently written as: \begin{subequations}\label{eq:primal-problem} \begin{align} \min_{M, P} & ~ \tr{R M} \label{eq:primal-objective}\\ \text{s.t.} & ~~ Z - (\Sigma + P\t Y\t + Y P + Y M Y\t) = 0 \label{eq:primal-constr1} \\ & ~ \left[ \begin{array}{cc} M & P \\ P\t & \Sigma \end{array} \right] \succeq 0\label{eq:primal-constr2} \end{align} \end{subequations} The problem in \eqref{eq:primal-problem} is referred to as the primal problem in the rest of this note. Now, define the Lagrangian of the primal problem \eqref{eq:primal-problem} as: \begin{align} \mathcal{L}(M, P, H, Q) &:= \tr{RM} + \tr{H (Z - (\Sigma + P\t Y\t + Y P + Y M Y\t))} \nonumber \\ & \qquad + \tr{- Q_{11}M - Q_{12}P\t - Q_{21}P - Q_{22}\Sigma} \\ & = \tr{(R - Y\t H Y - Q_{11})M} \nonumber \\ & \quad + \tr{- (HY + Q_{21})P - (Y\t H + Q_{12})P\t} \nonumber \\ & \quad + \tr{H(Z-\Sigma) - Q_{22}\Sigma} \end{align} The dual problem is defined as: \begin{align} \max_{Q\succeq 0, H} \min_{M,
P} \mathcal{L}(M, P, H, Q) \end{align} which can be written as: \begin{subequations}\label{eq:dual-problem} \begin{align} \max_{Q, H} & ~~ \tr{H(Z-\Sigma) - Q_{22}\Sigma} \label{eq:dual-objective}\\ \text{s.t} &~~ Q_{11} - (R - Y\t H Y) = 0 \label{eq:dual-constr1}\\ & ~~ Q_{21} = -HY \label{eq:dual-constr2}\\ & ~~ Q_{21} = Q_{12}\t \label{eq:dual-constr3}\\ & ~~ \left[ \begin{array}{cc} Q_{11}&Q_{12} \\ Q_{21}&Q_{22} \end{array} \right] \succeq 0\label{eq:dual-constr4} \end{align} \end{subequations} Complementary slackness implies that, for the optimal values, the following equality holds: \begin{align}\label{eq:complementary-slackness} \left[ \begin{array}{cc} Q_{11}&Q_{12} \\ Q_{21}&Q_{22} \end{array} \right] \left[ \begin{array}{cc} M & P \\ P\t & \Sigma \end{array} \right] = 0 \end{align} Since $\Sigma$ is non-singular, \eqref{eq:complementary-slackness} reads: \begin{align} \left[ \begin{array}{cc} Q_{11}&Q_{12} \\ Q_{21}&Q_{22} \end{array} \right] \left[ \begin{array}{cc} \mathbf{I} & P \Sigma^{-1} \\ \bm{0} & \mathbf{I} \end{array} \right] \left[ \begin{array}{cc} M - P \Sigma^{-1} P\t & 0 \\ 0 & \Sigma \end{array} \right] \left[ \begin{array}{cc} \mathbf{I} & \bm{0} \\ \Sigma^{-1} P\t & \mathbf{I} \end{array} \right] = 0 \end{align} Since $\left[\begin{smallmatrix}\mathbf{I} & \bm{0} \\\Sigma^{-1} P\t & \mathbf{I} \end{smallmatrix}\right]$ is invertible, we obtain: \begin{align} \left[ \begin{array}{cc} Q_{11}(M-P\Sigma^{-1}P\t) & (Q_{11}P\Sigma^{-1}+Q_{12})\Sigma \\ Q_{21}(M-P\Sigma^{-1}P\t) & (Q_{21}P\Sigma^{-1}+ Q_{22})\Sigma \end{array}\right] = \bm{0} \end{align} which yields \begin{subequations}\label{eq:slack-equations} \begin{align} Q_{11}(M-P\Sigma^{-1}P\t) = \bm{0} \label{eq:slack-eq1}\\ -Q_{11}P\Sigma^{-1} = Q_{12} \label{eq:slack-eq2}\\ Q_{21}(M-P\Sigma^{-1}P\t) = \bm{0} \label{eq:slack-eq3}\\ -Q_{21}P\Sigma^{-1} = Q_{22} \label{eq:slack-eq4} \end{align} \end{subequations} Now, let $\mathcal{M} = M - P \Sigma^{-1} P\t$. Multiplying the dual feasibility constraint in \eqref{eq:dual-constr1} by $\mathcal{M}$ on the left gives: \begin{align} \mathcal{M}(Q_{11} - (R - Y\t H Y)) = \bm{0} \\ \mathcal{M}Q_{11} + \mathcal{M}Y\t H Y = \mathcal{M}R \end{align} where $\mathcal{M}Q_{11} = \bm{0}$ due to equation \eqref{eq:slack-eq1} and $\mathcal{M}Y\t H Y = \bm{0}$ due to equation \eqref{eq:slack-eq3} and the dual feasibility condition \eqref{eq:dual-constr2}. Thus, the above equation reduces to \begin{align} \mathcal{M} R = 0 \end{align} Since $R\succ 0$, this yields $\mathcal{M}=0 \Rightarrow M = P \Sigma^{-1} P\t$. $\hfill \square$ \end{document}
\section{Introduction} \label{sec:intro} Many applications within scientific computing and big-data analytics rely heavily on efficient implementations of sparse linear algebra algorithms, such as sparse matrix-vector multiplication (SpMV). Examples include conjugate gradient solvers~\cite{CG}, tensor decomposition~\cite{smith2015splatt}, and graph analytics~\cite{GraphAnalytics}. Unlike dense linear algebra algorithms, sparse kernels present difficult challenges to achieving high performance on today's common architectures. These challenges include irregular access patterns and weak locality, which impede static optimizations and efficient cache utilization. As a result, there has been an abundance of research regarding the design of data structures and algorithms to take advantage of the capabilities of today's systems, which include deep-memory hierarchy architectures and graphics processing units (GPUs)~\cite{GPUSpMVFormat}. Beyond designing data structures and algorithms for sparse kernels that conform to existing systems, there have been efforts to develop novel architectures that would be better suited for sparse algorithms. One such effort is the Emu architecture~\cite{IA3EMU}, which is a cache-less system centered around light-weight migratory threads and near-memory processing capabilities. The premise of this architecture is that the challenges posed by irregular applications can be overcome through the use of fine-grained memory accesses that reduce the memory system load by only transferring light-weight thread contexts. The Emu architecture is described in Section \ref{sec:emuArch}. To determine the efficacy of such a novel architecture, it is insightful to understand how the impact of existing optimizations for sparse algorithms differs between Emu and cache-memory based systems. To this end, we explore SpMV on the Emu architecture and investigate the effects of traditional sparse optimizations such as vector data layouts, work distributions, and matrix reorderings. We focus on SpMV as it is one of the most prevalent sparse kernels, is found across a wide range of applications, and exhibits the algorithmic traits that the Emu architecture targets. Our implementation leverages the standard Compressed Sparse Row (CSR) data format for storing matrices. This paper's contributions are as follows: \begin{itemize} \item We implement a standard CSR-based SpMV algorithm for the Emu architecture with two different data layout schemes for the vectors and two different work distribution strategies. \item We conduct a performance evaluation of our implementation and the different sparse optimizations across a set of real-world matrices on the Emu Chick system. \item We find that initially distributing work evenly across the system is inadequate to maintain load balancing over time due to the migratory nature of Emu threads. \item We demonstrate that traditional matrix reordering techniques can improve SpMV performance on the Emu architecture by as much as 70\% by encouraging sustained load balancing. On the other hand, we find that the performance gains of the same reordering techniques on a cache-memory based system are no more than 16\%. \end{itemize} The rest of this paper is organized as follows: Section \ref{sec:emuArch} provides a brief overview of the Emu architecture. The details of our CSR-based SpMV implementation and sparse optimizations are presented in Section \ref{sec:impl}. Section \ref{sec:perfEval} describes our experimental setup and the results of our performance study. 
Related work is presented in Section \ref{sec:relatedWork} and we provide concluding remarks in Section \ref{sec:concl}.

\section{Emu Architecture} \label{sec:emuArch}

The unique aspects of the Emu architecture are migratory threads and memory-side processing. These features are designed to improve the performance of code containing irregular memory access patterns~\cite{IA3EMU}. Rather than fetching data from memory and bringing it to the location of the computation, the Emu architecture sends or \textit{migrates} the computation to that data. This is accomplished by forcing threads to execute on processors that are co-located with accessed data or by sending computation to co-located memory-side processors.

\subsection{Emu Nodelet Architecture}

The basic building block of an Emu system is a \textit{nodelet}, which consists of one or more \textit{Gossamer Cores}, a bank of narrow channel DRAM, and a memory-side processor. Eight nodelets are combined together to make up a single node. Figure \ref{fig:01_EmuArch} depicts one such node within the Emu architecture. A Gossamer Core (GC) is a general purpose, cache-less processing unit developed specifically for the Emu architecture. It supports the execution of up to 64 concurrent light-weight threads. A GC can issue a single instruction every cycle and each thread on a GC is limited to one active instruction at any given time. Such restrictions, coupled with the lack of caches, simplify the logic required by a GC.

\begin{figure}
\centering
\includegraphics[scale=0.35]{01_EmuArch.pdf}
\caption{A single node in the Emu architecture. ``GC'' refers to a Gossamer Core. There are 8 nodelets within a single node.}
\label{fig:01_EmuArch}
\end{figure}

The target applications for the Emu architecture have irregular access patterns and little spatial locality. Because such applications often require 8 bytes of memory per access, it is inefficient to load 64-byte blocks from main memory. These larger accesses are often required by other architectures due to standard DDR interfaces and cache line sizes. The narrow channel DRAM within an Emu system is designed to support narrower accesses by using eight 8-bit channels rather than a single, wider 64-bit interface. When a thread on a GC makes a memory request to a remote address, a \textit{migration} is generated. A migration involves a GC issuing a request to the Nodelet Queue Manager (NQM) to migrate the thread context to the nodelet that contains the desired data. The NQM interfaces with the \textit{Migration Engine} (ME), which is the communication fabric that connects multiple nodelets together. The thread context sits in the migration queue until it is accepted by the ME, at which point it is sent over the ME and is processed by the destination nodelet's NQM. An executable's compiled code is replicated on each nodelet, so the thread can resume execution without being aware that it was migrated. By limiting the size of a thread context to roughly 200 bytes~\cite{InitialEmu}, the Emu architecture is able to keep the cost of a migration low. We have observed that a memory access that requires a migration is roughly 2x slower than a memory access without a migration. The current Emu architecture supports a range of atomic operations on 64-bit data. These include add, AND, OR, XOR, min and max. Atomic operations are handled by the memory-side processor on each nodelet.
For those atomic and store operations that do not return a value to a thread, the memory-side processor can perform the operation on behalf of a thread executing on a different nodelet without generating a migration. Such operations are referred to as \textit{remote updates}. A remote update generates a packet that is sent to the nodelet owning the remote memory location. The packet contains the data as well as the operation to be performed. While remote updates do not return results, they generate acknowledgements that are sent back to the source thread. A thread cannot migrate until all outstanding acknowledgements have been received. Each nodelet has three queues that hold threads in various states. The run queue consists of threads that are waiting for an available register set on a GC. The migration queue holds threads, or packets, that are waiting to depart the nodelet. Packets containing remote updates are held in the memory queue until they can be executed by the memory-side processor. In the current architecture, the number of active threads on a nodelet can be throttled depending on the availability of resources on that nodelet, including space in these queues. In this work, we perform our experiments on the Emu Chick system, which consists of 8 nodes connected together by a RapidIO network. In the current Emu Chick system, each nodelet consists of only one GC clocked at 150MHz and 8GB of narrow channel DDR4 memory clocked at 1600MHz. The GC and ME on each nodelet are implemented on an Arria 10 FPGA.

\subsection{Data Distribution Support}

There are two basic functions for specifying where allocated memory is placed within the system. The \texttt{mw\_malloc1dlong} function will allocate an array of 64-bit elements that is distributed cyclically element by element across the available nodelets in the system. The \texttt{mw\_malloc2d} function allocates an array of pointers that is distributed cyclically element by element across the available nodelets, where each pointer points to a block of co-located memory of a specified size. While \texttt{mw\_malloc2d} does not directly support variable size blocks, one can achieve such a layout by invoking \texttt{mw\_malloc2d} with a block size of 1 and then using standard \texttt{malloc} to allocate the desired block size on each of the nodelets.

\section{Implementation} \label{sec:impl}

In this section, we provide details regarding our implementation of the Compressed Sparse Row (CSR) storage format and its accompanying SpMV algorithm on the Emu architecture. We also describe how we achieve different data layouts and work distribution strategies. For the remainder of this paper, we will refer to a generic sparse matrix \textbf{A} as having $M$ rows, $N$ columns and \textit{NNZ} non-zeros. We consider $\textbf{Ax} = \textbf{b}$ as the formulation of SpMV, where \textbf{x} and \textbf{b} are dense vectors of length $N$ and $M$, respectively.

\subsection{Compressed Sparse Row Format} \label{sec:csr}

We adopt the standard CSR storage format, which stores \textbf{A} as three separate arrays: \textit{values} stores all the non-zero values of \textbf{A}, \textit{colIndex} stores the column indices of all the non-zeros and \textit{rowPtr} stores pointers into \textit{colIndex} that correspond to the start of each row. We distribute the rows of \textbf{A} across the desired number of nodelets such that each nodelet will have local access to all portions of the CSR arrays it needs to traverse its assigned rows.
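To make this layout concrete, the following is a minimal C sketch of how such a per-nodelet ``mini'' CSR matrix might be declared and allocated. The \texttt{MiniCSR} struct and the function name are our own illustrative assumptions; the allocation relies only on the \texttt{mw\_malloc2d}-with-block-size-1 trick described above.

\begin{verbatim}
#include <stdlib.h>

/* Hypothetical per-nodelet "mini" CSR matrix.  rowPtr stores
 * relative offsets into this nodelet's own colIndex/values. */
typedef struct {
    long    firstRow;  /* absolute index of the first local row  */
    long    numRows;   /* number of rows assigned to the nodelet */
    long   *rowPtr;    /* numRows + 1 relative row offsets       */
    long   *colIndex;  /* column indices of the local non-zeros  */
    double *values;    /* values of the local non-zeros          */
} MiniCSR;

/* Allocate one variable-size MiniCSR per nodelet: mw_malloc2d with
 * a block size of 1 yields one cyclically distributed pointer per
 * nodelet, and standard malloc then provides co-located blocks of
 * the desired (variable) sizes.  NOTE: shown single-threaded for
 * brevity; on real hardware each iteration should execute on
 * nodelet n so that malloc draws from that nodelet's local memory. */
MiniCSR **allocDistributedCSR(long nodelets, const long *firstRow,
                              const long *numRows, const long *numNNZ)
{
    MiniCSR **mats = (MiniCSR **)mw_malloc2d(nodelets, 1);
    for (long n = 0; n < nodelets; n++) {
        MiniCSR *m  = malloc(sizeof(MiniCSR));
        m->firstRow = firstRow[n];
        m->numRows  = numRows[n];
        m->rowPtr   = malloc((numRows[n] + 1) * sizeof(long));
        m->colIndex = malloc(numNNZ[n] * sizeof(long));
        m->values   = malloc(numNNZ[n] * sizeof(double));
        mats[n]     = m;
    }
    return mats;
}
\end{verbatim}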
An illustration of this distribution for 4 nodelets is shown in Figure \ref{fig:02_CSRExample}, where each nodelet is assigned two rows. The rows assigned to a given nodelet can then be distributed among the desired number of threads spawned on the nodelet. Note that each nodelet stores a ``mini'' CSR matrix for its rows with relative row offsets. \begin{figure} \centering \includegraphics[trim=0cm 7cm 0cm 0cm, clip, scale=0.35]{02_CSRExample.pdf} \caption{Illustration of how a sparse matrix is represented and stored in CSR format across 4 nodelets.} \label{fig:02_CSRExample} \end{figure} To perform SpMV, we start by spawning a ``parent'' thread on each of the desired nodelets. Each of these parent threads then spawns the desired number of worker threads to process the rows assigned to the nodelet. In order to make updates to \textbf{b} during SpMV, each worker thread needs to be aware of the absolute index of its assigned rows, as only relative offsets are stored in the nodelet's \textit{rowPtr} array. We accomplish this by passing in the absolute row index of the first row assigned to each nodelet when we spawn the parent threads. Worker threads migrate to and from the nodelets as needed to access elements of \textbf{x} and \textbf{b}. The rest of the SpMV algorithm is unchanged from the standard CSR implementation. \subsection{Vector Data Layout} \label{sec:dataLayout} While it is possible to enforce only local accesses to the CSR arrays for SpMV, accesses to the vectors are much more challenging to control. As these memory accesses largely dictate the overall performance of SpMV, it is crucial to address the data layout for \textbf{x} and \textbf{b} on the Emu architecture. Since the non-zeros in row $i$ of \textbf{A} will only be assigned to a single worker thread, that thread can accumulate the updates to $\textbf{b}[i]$ in a local register and then issue a single store to $\textbf{b}[i]$. Therefore, fully computing $\textbf{Ax} = \textbf{b}$ only requires $M$ stores to \textbf{b}, which do not require migrations as they are either local writes or remote updates. On the other hand, there are a total of \textit{NNZ} loads from \textbf{x} required for SpMV, each of which may require a migration. As it is often the case that \textit{NNZ}~$\gg M$ for most sparse matrices, the layout of \textbf{x} has more bearing on performance than that of \textbf{b}. We implement two different data layouts for both \textbf{x} and \textbf{b}: cyclic and block. In a cyclic layout, adjacent elements of a vector are stored on different nodelets in a round-robin fashion such that each consecutive access requires a migration. For a block layout, contiguous elements of a vector are stored in a fixed-size block on each nodelet. Assuming a block size of $B$ elements, one migration will be required for every $B$ consecutive accesses. For our approach, we use the same block size on each nodelet, which is $M/nodelets$ for \textbf{b} and $N/nodelets$ for \textbf{x}, where $nodelets$ is the number of nodelets utilized. \subsection{Work Distribution Strategies} \label{sec:workDistr} We explore two different strategies for distributing work across the nodelets: one that only considers the number of rows in \textbf{A} and one that also factors in the number of non-zeros on each row. For the row-based approach, we evenly distribute the rows of \textbf{A} to each nodelet and then further divide those blocks of rows among the worker threads utilized by each nodelet. 
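However the rows are apportioned, every worker thread executes the same kernel over its assigned range. A minimal sketch of that per-thread kernel, again assuming the hypothetical \texttt{MiniCSR} layout from above, is:

\begin{verbatim}
/* SpMV work for one worker thread: local rows [r0, r1) of this
 * nodelet's MiniCSR.  Each row's partial sums are accumulated in
 * a register so that only a single store to b is issued per row.
 * Reads of x[] and the store to b[] may trigger migrations or
 * remote updates; all CSR accesses are local to the nodelet. */
void spmvWorker(const MiniCSR *m, long r0, long r1,
                const double *x, double *b)
{
    for (long i = r0; i < r1; i++) {
        double acc = 0.0;
        for (long j = m->rowPtr[i]; j < m->rowPtr[i + 1]; j++)
            acc += m->values[j] * x[m->colIndex[j]];
        b[m->firstRow + i] = acc;  /* absolute row index */
    }
}
\end{verbatim}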
When using the block layout for \textbf{b}, the block size is equal to the number of rows assigned to each nodelet via the row distribution strategy. Therefore, the worker threads on a nodelet will have local access to the elements of \textbf{b} that need to be updated. While each nodelet may receive the same number of rows via the row approach, the sparsity pattern of the matrix may result in some worker threads being assigned a significantly different number of non-zeros than others. Since the number of non-zeros given to a nodelet largely dictates its work-load for SpMV, this can lead to load imbalance. To mitigate this, the non-zero approach distributes the rows of \textbf{A} to each worker thread such that the total number of non-zeros assigned to each thread is roughly the same. We achieve this by iterating over \textit{rowPtr} and accumulating rows until the threshold of $NNZ / threads$ is met, where $threads$ is the total number of worker threads used across all of the nodelets. For matrices with very irregular sparsity patterns, this can result in a given nodelet being assigned a significantly different number of rows than another. In such cases, a block layout for \textbf{b} no longer guarantees that the required elements of \textbf{b} will be local to each nodelet. \section{Performance Evaluation} \label{sec:perfEval} In order to understand the impact of traditional sparse optimizations on a migratory thread architecture, we evaluated our SpMV implementation on the Emu Chick system across a range of real-world matrices. In this section, we describe our experimental setup and then present the results of several different experiments that evaluate the sparse optimizations described in Section \ref{sec:impl}. \subsection{Experimental Setup} \label{sec:expSetup} For our experiments, we ran on a single node of the Emu Chick system, as described in Section \ref{sec:emuArch}, and used version 18.04.1 of the Emu toolchain. Multi-node execution on the Emu Chick hardware was not reliable enough to conduct our tests, so we limit our experiments to a single node and leave multi-node tests for future work. We utilize all 8 nodelets on a node and leverage 64 worker threads per nodelet. We executed our SpMV implementation across a suite of 40 different matrices. In the following sections, we focus our evaluation on the matrices shown in Table \ref{tab:matrices}, which are representative of the suite and highlight the most interesting performance characteristics. All matrices were obtained from the University of Florida Sparse Matrix Collection~\cite{MatrixMarket} with the exception of rmat, which is an RMAT graph that was generated with RMAT a, b and c parameters of 0.45, 0.22 and 0.22, respectively~\cite{parmat}. For the symmetric matrices, we store and operate on the entire matrix rather than just the upper or lower triangular matrix. All results reported are the average of 10 trials. 
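Since the following experiments compare the two strategies of Section \ref{sec:workDistr}, we also sketch here the non-zero work distribution. The function name and bookkeeping are illustrative assumptions, but the logic follows the description above: rows are accumulated until each thread's share reaches roughly $NNZ/threads$ non-zeros.

\begin{verbatim}
/* Split rows into contiguous ranges holding roughly NNZ/threads
 * non-zeros each.  rowPtr is the global CSR row pointer (length
 * M + 1); on return, thread t owns rows
 * [rowStart[t], rowStart[t + 1]). */
void partitionByNonzeros(const long *rowPtr, long M, long threads,
                         long *rowStart)
{
    long target = (rowPtr[M] + threads - 1) / threads; /* ceiling */
    long t = 0, acc = 0;
    rowStart[0] = 0;
    for (long i = 0; i < M; i++) {
        acc += rowPtr[i + 1] - rowPtr[i];  /* non-zeros on row i */
        if (acc >= target && t + 1 < threads) {
            rowStart[++t] = i + 1;  /* next thread starts here */
            acc = 0;
        }
    }
    while (++t <= threads)
        rowStart[t] = M;  /* close any remaining (empty) ranges */
}
\end{verbatim}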
\begin{table} \renewcommand{\arraystretch}{1.0} \caption{Matrices} \label{tab:matrices} \centering \begin{tabular}{|c|c|c|c|c|} \hline \textbf{Name} & \textbf{Dimensions} & \textbf{Non-Zeros} & \textbf{Density} & \textbf{Symmetric}\\ \hline ford1 & 18k x 18k & 100k & 2.9E-04 & Yes \\ cop20k\_A & 120k x 120k & 2.6M & 1.79E-04 & Yes \\ webbase-1M & 1M x 1M & 3.1M & 3.11E-06 & No \\ rmat & 445k x 445k & 7.4M & 3.74E-05 & No\\ nd24k & 72k x 72k & 28.7M & 5.54E-03 & Yes \\ audikw\_1 & 943k x 943k & 77.6M & 8.72E-05 & Yes \\ \hline \end{tabular} \end{table} \subsection{Cyclic Versus Block Vector Data Layout} \label{sec:cyclicVsBlock} Figure \ref{fig:04_blockVsCyclic_BW} shows the achieved SpMV bandwidth for the cyclic and block vector layouts across the different matrices. A row-based work distribution strategy was employed for these results. The block layout outperforms the cyclic layout on each matrix, achieving up to 25\% more bandwidth. \begin{figure} \centering \includegraphics[trim={0cm 2.5cm 0 0},clip,scale=0.36]{04_BlockVSCyclic_BW.pdf} \caption{Bandwidth (MB/s) for SpMV across the matrices when using a cyclic and block data layout for \textbf{x} and \textbf{b}. Higher bars represent better performance.} \label{fig:04_blockVsCyclic_BW} \end{figure} \begin{figure} \centering \includegraphics[scale=0.30]{03_Spyplots.pdf} \caption{Spy plots for the matrices in Table \ref{tab:matrices}. Blue (dark) dots represent non-zeros.} \label{fig:03_Spyplots} \end{figure} We can study the sparsity patterns of the matrices, as shown in Figure \ref{fig:03_Spyplots}, to understand why the block layout performs better than the cyclic layout. A particular sparsity pattern that offers significant benefits for the block layout is one in which a majority of the non-zeros are clustered around the main diagonal of the matrix. Since the matrices in Table \ref{tab:matrices} are square, the length of both \textbf{x} and \textbf{b} is equal to the number of rows in \textbf{A}. With the row-based work distribution, the number of rows assigned to each nodelet is equal to the block size used for the vectors. For a matrix where the non-zeros are clustered on the main diagonal, this means that very few migrations are incurred when accessing \textbf{x}. Figure \ref{fig:05_ford1Partitions} illustrates this for the ford1 matrix across 8 nodelets. As can be seen, a majority of the non-zeros are contained in the shaded boxes, which represent local accesses to \textbf{x}. If \textbf{x} were distributed in a cyclic layout, then we could not exploit such a sparsity pattern, as consecutive accesses to \textbf{x} would cause migrations. Indeed, we observe that the block layout generates 1.42x -- 6.3x fewer migrations than the cyclic layout across the matrices in Table \ref{tab:matrices}. For the remainder of the results, we will assume a block data layout for \textbf{x} and \textbf{b}. \begin{figure} \centering \includegraphics[trim={4cm 0 0 0},clip,scale=0.4]{05_ford1Partitions.pdf} \caption{An illustration of how the ford1 matrix from Figure \ref{fig:03_Spyplots} and the \textbf{x} and \textbf{b} vectors are distributed across 8 nodelets via the row-based work distribution approach with a block layout for \textbf{x} and \textbf{b}. Red (dark) lines indicate the blocks of rows assigned to each nodelet. 
Blue (dark) dots within shaded boxes represent non-zeros that require local access to \textbf{x} while those outside of the shaded boxes represent non-zeros that require migrations to access \textbf{x}.} \label{fig:05_ford1Partitions} \end{figure} \subsection{Row Versus Non-zero Work Distribution} \label{sec:rowVSNonzero} Figure \ref{fig:06_RowVSNonzero_BW} presents the SpMV bandwidth results of the row and non-zero work distribution strategies across the matrices. We observe that the non-zero approach consistently outperforms the row approach, achieving as much as 3.34x more bandwidth. As described in Section \ref{sec:workDistr}, the row-based approach can lead to severe work imbalances by assigning a block of rows to a given nodelet with a significantly different number of non-zeros than other nodelets. \begin{figure} \centering \includegraphics[trim={0.2cm 2.5cm 0 0},clip,scale=0.35]{06_RowVSNonzero_BW.pdf} \caption{Bandwidth (MB/s) for SpMV across the matrices when using a row and non-zero work distribution. Higher bars represent better performance.} \label{fig:06_RowVSNonzero_BW} \end{figure} \begin{figure} \centering \includegraphics[trim={0cm 2.5cm 0 0},clip,scale=0.36]{07_RowVSNonzero_MemIPC.pdf} \caption{Coefficient of variation for the memory instructions executed per nodelet when using a row and non-zero work distribution. Lower bars represent better performance.} \label{fig:07_RowVSNonzero_MemIPC} \end{figure} To quantify the performance advantage of the non-zero work distribution, Figure \ref{fig:07_RowVSNonzero_MemIPC} shows the coefficient of variation (CV) for the number of memory instructions executed by each nodelet. The CV is the standard deviation divided by the mean and is a measure of relative variability. A low memory instruction CV indicates that each nodelet completed roughly the same amount of work, as SpMV is considered to be memory-bound. Indeed, we observe a significantly lower CV for the non-zero approach across all of the matrices. While the work is distributed more evenly with the non-zero approach and we observe higher bandwidth, the non-zero strategy incurs an average of 1.69x more migrations than the row-based approach. This is because the row strategy coupled with a block layout for the vectors works to minimize migrations, especially for matrices with a dense main diagonal. However, the non-zero distribution does not necessarily assign equally sized blocks of rows to each nodelet. The result is that the block data layout for the vectors is less successful at minimizing migrations as a nodelet's assigned rows and non-zeros are not necessarily aligned with the block partitions of \textbf{x} and \textbf{b}. Despite incurring more migrations, the non-zero approach offers better performance than the row-based approach. This suggests that the penalty of migrations can be offset by more uniform work distribution and load balancing. We discuss this topic in more detail in the next section. For the remainder of the results, we assume the use of the non-zero work distribution strategy. \subsection{Hardware Load Balancing} \label{sec:hwLoad} On a traditional cache-memory based system, both memory access locality and hardware load balancing for SpMV can be controlled by distributing the non-zeros among the threads and binding the threads to hardware resources such as cores. However, the Emu architecture differs because threads cannot be isolated to specific hardware resources, such as a Gossamer Core, due to their migratory behavior. 
To bind Emu threads to cores, one would need to only read from local memory, avoiding migrations entirely. At the other extreme, despite best efforts to initially lay out and distribute work evenly across the nodelets, it is possible for all of the threads to migrate to a single nodelet and oversubscribe that nodelet's resources. In general, the layout of an application's data structures across the nodelets as well as its memory access pattern are what determine the load balancing of the hardware. Consider the cop20k\_A matrix as shown in Figure \ref{fig:03_Spyplots}. Regardless of how the rows are distributed among the nodelets, a large portion of the total non-zeros require access to elements of \textbf{x} that all reside on the same nodelet. Specifically, 25\% of the total non-zeros in the matrix require access to elements of \textbf{x} that are stored on nodelet 0. Within the Emu architecture, this results in a majority of the threads all migrating to the same nodelet at roughly the same time. The particular load imbalance scenario for the cop20k\_A matrix is shown in Figure \ref{fig:09_ActiveThreads_Cop_64}, where we monitor the total number of threads residing on each nodelet during the SpMV execution, including those executing and those waiting in the run queues. By the time 150ms have elapsed, all of the nodelets except for nodelet 0 have ceased significant activity, indicating that there is a clear load imbalance of the system resources. However, as described above, we would expect to see higher than average activity on nodelet 0 due to all of the threads requiring access to elements of \textbf{x} stored on nodelet 0. Instead, the number of threads on nodelet 0 was, on average, only 32, while the other nodelets maintained between 53 and 75 threads on average.

\begin{figure}
\centering
\includegraphics[scale=0.36]{09_ActiveThreads_Cop_64.pdf}
\caption{The total number of threads on each nodelet during the execution of SpMV on the cop20k\_A matrix. These results include threads executing on the nodelets as well as those waiting in the run queues.}
\label{fig:09_ActiveThreads_Cop_64}
\end{figure}

To understand this behavior, we observed the sizes of the migration queue on each nodelet over the SpMV execution. We found that nodelet 0 experiences an immediate surge of packets into its migration queue. This is due to a majority of the worker threads migrating to nodelet 0 to access \textbf{x} and then requiring a migration back to their parent nodelet to access their CSR arrays. Nodelet 1 also exhibits noticeable activity, as it too holds elements of \textbf{x} that are required by a large number of non-zeros residing on other nodelets. In the current Emu architecture, thread activity on a nodelet is throttled based on the nodelet's available resources, which includes space in the migration queue. Because the migration queue on nodelet 0 is immediately filled nearly to capacity, the nodelet reduces the number of threads that can be executed. It is not until the other nodelets approach completion of their work that the migration queue on nodelet 0 starts to empty out and thread activity increases. We note that running SpMV on the cop20k\_A matrix with fewer threads per nodelet provides better load balancing by reducing the pressure on the migration queue, and thus, allowing for more threads to be active on nodelet 0. This suggests that as the numbers of nodelets and threads grow when a system is scaled up, load balancing issues due to thread migration hot-spots will require attention.
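For reference, the load-balance statistic used throughout this evaluation (e.g., Figure \ref{fig:07_RowVSNonzero_MemIPC}) is straightforward to compute from per-nodelet hardware counters; a minimal sketch:

\begin{verbatim}
#include <math.h>

/* Coefficient of variation (standard deviation divided by mean)
 * of per-nodelet counts, e.g. memory instructions executed by
 * each nodelet.  Lower values indicate more even hardware load
 * balancing. */
double coeffOfVariation(const double *counts, long nodelets)
{
    double mean = 0.0, var = 0.0;
    for (long n = 0; n < nodelets; n++)
        mean += counts[n];
    mean /= (double)nodelets;
    for (long n = 0; n < nodelets; n++) {
        double d = counts[n] - mean;
        var += d * d;
    }
    var /= (double)nodelets;  /* population variance */
    return sqrt(var) / mean;
}
\end{verbatim}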
\subsection{Matrix Reordering} \label{sec:matReorder}

As shown in the previous section, the sparsity pattern of a given matrix can have profound impacts on SpMV performance, despite efforts to properly lay out data structures and distribute work evenly to all of the threads. There are many known techniques to reorder the non-zeros of a matrix in order to improve locality and data reuse on traditional cache-memory based systems. We investigated whether these existing reordering algorithms could offer similar performance benefits on the Emu system, as well as mitigate potential hardware load balancing issues. We focus on the following reordering techniques: Breadth First Search (BFS)~\cite{BFS}, METIS~\cite{METIS} and Random. A random reordering performs a random permutation of the matrix rows and columns using a Fisher-Yates shuffle. As an example, Figure \ref{fig:12_Cop20K_A_Reorderings} shows the original cop20k\_A matrix and the results of the three reordering algorithms.

\begin{figure}
\centering
\includegraphics[trim=0cm 4.5cm 12cm 0cm, clip, scale=0.4]{12_Cop20K_A_Reorderings.pdf}
\caption{The original cop20k\_A matrix, denoted as NONE, and the resulting spy plots when applying the random, BFS and METIS reordering algorithms.}
\label{fig:12_Cop20K_A_Reorderings}
\end{figure}

\begin{figure}
\centering
\includegraphics[trim={0.2cm 2.25cm 0 0},clip,scale=0.35]{13_Reordering_EMUBW.pdf}
\caption{Bandwidth (MB/s) of SpMV for the different reordering techniques across the matrices. Higher bars represent better performance. NONE refers to the original matrix.}
\label{fig:13_Reordering_EMUBW}
\end{figure}

Figure \ref{fig:13_Reordering_EMUBW} presents the achieved SpMV bandwidth for the different reordering techniques across the matrices, where NONE refers to the original matrix. We find that BFS and METIS generally offer the best performance, achieving up to 70\% more bandwidth than the original matrix. BFS and METIS both attempt to move the non-zeros towards the main diagonal and they tend to put an equal number of non-zeros on each row. Having the non-zeros clustered on the main diagonal allows us to exploit the block data layout of the vectors and reduce the total number of migrations, as described in Section \ref{sec:cyclicVsBlock}. Since BFS and METIS tend to produce balanced rows, both of the work distribution strategies achieve roughly the same outcome: each nodelet is assigned an equal number of rows and the total number of non-zeros assigned to each nodelet is roughly the same. Therefore, these reordering techniques allow us to maintain an equal amount of activity on each nodelet and mitigate hot-spots by encouraging threads to be ``pinned'' to their parent nodelet. However, an interesting result from Figure \ref{fig:13_Reordering_EMUBW} is that a random reordering can offer up to a 50\% increase in bandwidth over the original matrix, and in some cases, outperform BFS or METIS. The random reordering has the effect of also producing balanced rows by uniformly spreading out the non-zeros rather than clustering them together. As one would expect, this results in more migrations than the other techniques, but it provides a natural hot-spot mitigation for SpMV. This is because it is very unlikely that a majority of the threads would all converge onto the same nodelet at the same time.
We observe that such an effect is similar to the distributed randomized algorithm for packet routing proposed by Valiant~\cite{Valiant}, which prevents multiple packets from being sent across the same wire at the same time. As alluded to at the end of Section \ref{sec:rowVSNonzero}, the cost of extra migrations can be overcome by better load balancing. Indeed, we can see the improvement in load balancing achieved by the random reordering in Figure \ref{fig:15_ActiveThreads_Cop_RANDOM}, which tracks the total number of threads on each nodelet during SpMV for the cop20k\_A matrix. \begin{figure} \centering \includegraphics[scale=0.35]{15_ActiveThreads_Cop_RANDOM.pdf} \caption{The total number of threads on each nodelet during the execution of SpMV on the cop20k\_A matrix when randomly reordered. These results include threads executing on the nodelets as well as those waiting in the run queues.} \label{fig:15_ActiveThreads_Cop_RANDOM} \end{figure} The results from Figure \ref{fig:13_Reordering_EMUBW} highlight two approaches for achieving hardware load balancing on the Emu architecture: (1) assign an equal amount of work to each nodelet and lay out the data so that threads rarely need to migrate off of their parent nodelet, and (2) assign an equal amount of work to each nodelet and lay out the data so that threads will rarely converge onto the same nodelet at the same time. The first approach, as achieved by BFS and METIS, attempts to enforce the original intent of the data layout and work distribution, and generally offers the best performance. The second approach, as achieved by the random reordering, incurs more migrations but can be useful due to the minimal amount of work required to perform the reordering. \begin{figure} \centering \includegraphics[trim={0cm 2cm 0 0},clip,scale=0.35]{16_Reordering_CPUBW.pdf} \caption{SpMV bandwidth (MB/s) on a dual socket Broadwell Xeon system for the different reordering techniques across the matrices when using 32 threads. Higher bars represent better performance. NONE refers to the original, unordered matrix.} \label{fig:16_Reordering_CPUBW} \end{figure} It is worth noting the difference in impact of the matrix reordering techniques on the Emu architecture when compared to a traditional cache-memory based architecture. Figure \ref{fig:16_Reordering_CPUBW} shows the achieved bandwidth for an identical SpMV implementation on a dual-socket Broadwell Xeon system with 45MB of last-level cache. We observe that BFS and METIS only achieve up to 12\% and 16\% higher bandwidth over the original matrix, respectively. Furthermore, a random reordering never outperforms the original matrix, and in general, is considerably worse than the other reordering techniques. This behavior is what we would expect, as the random reordering has poor spatial locality. The difference in time to access the L1 cache and main memory can be on the order of 100 -- 200x, which is much more severe than the relative cost of a migration on the Emu architecture. Such a penalty cannot easily be amortized by the load balancing benefits provided by a random reordering. \section{Related Work} \label{sec:relatedWork} The Emu architecture is described in detail by Dysart et al.\xspace~\cite{IA3EMU}, which also gives initial performance results obtained from a simulator of the architecture. Also using a simulator of an Emu system, Minutoli et al.\xspace~\cite{RadixEmu} present an implementation of radix sort and benchmark its performance on up to 128 nodelets. 
Hein et al.\xspace~\cite{InitialEmu} performed an evaluation of several micro-benchmarks, including CSR-based SpMV, on actual Emu Chick system hardware. While our work is similar to Hein et al.\xspace, there are significant differences. In that work, \textbf{x} was replicated on each nodelet and the entirety of \textbf{b} was placed on a single nodelet. Furthermore, their evaluation only considered synthetically generated Laplacian matrices. In our work, \textbf{A}, \textbf{x} and \textbf{b} are all distributed in some fashion across the nodelets, with no portion of the vectors being replicated. Additionally, we run experiments on real-world sparse matrices that are drawn from a variety of different domains.

\section{Conclusions} \label{sec:concl}

As migratory thread architectures, such as Emu, mature and evolve, optimizing sparse codes to capitalize on these new architectures' unique strengths will be increasingly important. We evaluated several traditional sparse optimizations on the Emu architecture, including vector data layout, work distribution, and matrix reordering. Our findings can be summarized as follows:
\begin{itemize}
\item While designing data structures and algorithms to minimize migrations is generally a good strategy, we found that work distribution and load balancing are of similar importance for achieving high performance.
\item Unlike traditional systems, it is very difficult to explicitly enforce hardware load balancing for the Emu architecture due to thread migration. Specifically, the placement of the data to be accessed and the patterns of these accesses entirely dictate the work performed by a given hardware resource, irrespective of how much work is initially delegated to each processing element.
\item The impact of employing known matrix reordering techniques is more significant on the Emu architecture than on a traditional cache-memory based system. We found that the METIS and BFS matrix reordering techniques can increase performance by as much as 70\% on Emu while we observed a maximum gain of 16\% on a traditional architecture. Furthermore, a completely random reordering of the rows and columns can exhibit better performance on Emu than not reordering at all, which contradicts what we observe on a traditional system.
\end{itemize}
For future work, we would like to re-evaluate our performance study on the newly upgraded Emu hardware and toolchain (version 18.08.1), which includes a faster Gossamer Core clock rate and hot-spot mitigation improvements. We are also interested in more thoroughly investigating multi-node performance, specifically on hardware, as thus far we have only been able to do so via simulation. We would also like to evaluate other sparse matrix formats, including new formats targeted specifically at the Emu architecture. Furthermore, we are interested in investigating prior work by Valiant~\cite{Valiant} on randomized data distributions and how it can apply to data layout schemes on Emu.

\bibliographystyle{IEEEtran}
\section{Introduction}\label{SecIntro} Determining the various parameter values of the resonant modes of oscillation of the Sun is an important process in the field of helioseismology. Parameters such as the mode frequencies, lifetimes and amplitudes can all be used to determine the conditions of the solar interior. Over the years the quality of helioseismic data has improved significantly due to the length of data sets increasing, signal-to-noise ratios being improved and more continuous observations being made, both from ground-based and space-borne missions. This has led to the estimated parameter values being constrained to increasingly greater precision. As the precision of the parameter estimates increases, so more subtle physical characteristics of the solar interior are being uncovered. Examples of this include the discovery of asymmetric peaks in the p-mode power spectrum (e.g., \citealt{Duvall1993}; \citealt{Chaplin1999}), which supported theoretical predictions (e.g., \citealt{Gabriel1992,Gabriel1995}) and lent weight to the belief that acoustic waves are generated within a well-localized region of the solar interior. Also, long sets of observations have allowed investigations to be carried out on the dependence of the mode parameters on solar activity (e.g., for asymmetry dependence see \citealt{Chaplin2007}). Determining subtle effects such as these requires analysis techniques that return robust estimates of the mode parameters. If this is not the case and the returned parameters are biased, the inaccuracies can be confused with mode characteristics that have true physical significance. Methods of determining solar mode parameters are often referred to as peak-bagging techniques. For low-degree (low-$\ell$) Sun-as-a-star observations, a common method of peak-bagging involves dividing the p-mode power spectrum into a series of `fitting windows' centered on the $\ell$ = 0/2 and 1/3 pairs. The modes are then fitted, pair by pair, to determine how the mode parameters depend on both frequency (overtone number) and angular degree without the need to fit the entire spectrum simultaneously. In this paper we use a Monte-Carlo type approach to estimate the extent of biases seen in the fitted parameters when using this `standard' method of fitting, and introduce a new modified technique designed to reduce these problems.

\section{Fitting Techniques}\label{SecTechniques}

In this section the peak-bagging technique described in the introduction is explained in more detail. Reasons why the standard fitting method may return biased parameter estimates are discussed and the new modified fitting method, designed to limit these problems, is introduced. We begin by summarizing briefly the main elements of the standard peak-bagging method. Within each fitting window the mode peaks are modeled using a modified Lorentzian equation \citep{Nigam1998}. This is fitted to the data using an appropriate maximum-likelihood estimator \citep{Anderson1990}. As we elaborate in Section~\ref{SecData}, only simulated low-$\ell$ data has been used in this initial analysis. All modes within the frequency range 1500 $ \leq \nu \leq $ 4600 $\mu$Hz and with angular degree 0 $ \leq \ell \leq $ 3 were fitted. In \cite{Fletcher2007} it was shown that if the weak $\ell$ = 4 and 5 modes are not accounted for, they can often impact on the fitted parameters of the stronger modes. This was also commented upon in \cite{JimenezReyes2007} and investigated in \cite{JimenezReyes2008}.
Therefore, in the regions of the spectrum where the modes are strongest, our fitting model was modified to also include parameters for the $\ell$ = 4 and 5 modes. This adaptation was only made to the model when fitting the $\ell$ = 0/2 pairs, as $\ell$ = 4 and 5 modes never lie within the fitting windows of $\ell$ = 1/3 pairs. Within each fitting window the following parameters were varied iteratively until they converged on their best-fitting values:
\begin{enumerate}
\item A central frequency for each mode.
\item A parameter describing the symmetric rotational splitting pattern for each mode (not applicable at $\ell$ = 0).
\item A linewidth for each mode, for which the logarithms were varied (the $m$ components in a mode were assumed to have the same widths).
\item A single maximum height -- that of the outer, sectoral $m$ components -- for each mode, for which the logarithm was varied (the relative $m$ component height ratios were assumed to take fixed values).
\item A single peak asymmetry parameter (the $m$ components of the modes in the pair were assumed to have the same asymmetry).
\item A flat background offset for the fit, whose logarithm was varied.
\end{enumerate}
It should be noted that although the simulated mode peaks were symmetric in frequency, a parameter characterising asymmetry was still used so as to be consistent with fits to real data. For the most part the parameter values returned via this method are very accurate, especially the mode frequencies. However, there is a problem associated with restricting the fitting to only a small slice of the spectrum. For the model to be completely accurate there would have to be no power within the fitting window from modes lying outside it, which, of course, is not true. The main problem associated with this is that the background parameter will be overestimated in order to account for the extra power from the wings of the outlying modes. The effect has previously been documented in the literature \citep{Chaplin2003,Fletcher2007} and is illustrated again in this paper in Section~\ref{SecSimulationFits}. For fitting purposes the background is usually assumed to be flat in frequency across the extent of the fitting window. In the case of the true background this is a fairly safe assumption as the sizes of the fitting windows tend to be quite small, and over the region of the spectrum where p modes are observed, the background is thought to be a relatively weak function of frequency. However, the presence of the wings from the surrounding modes means the effective background (i.e., background plus wings) will have a stronger dependence on frequency. Specifically, the effective background is observed to rise towards the extreme ends of the fitting window as these areas are nearer, in frequency, to the surrounding peaks. The combined effect of incorrectly fitting the true background and using a model that does not correctly account for the shape of the effective background means other parameters may be impacted upon during the fitting process to minimise these inaccuracies. This effect is shown in Fig.~\ref{TestFit}, where a simple single Lorentzian mode peak has been fitted. Either side of this mode, outside of the fitting window, there are two large peaks, the effect of whose wings can be seen within the fitting window. (This is not a realistic scenario, as the outlying peaks have been substantially increased in height in order to exaggerate bias in the fits.)
When a model is used which does not account directly for the wings of outlying modes, the fitted background is increased to compensate for the extra power, in order to minimise the negative log-likelihood. However, because the additional power from the outlying modes is not flat across the fitting window, the negative log-likelihood can be further reduced by changes in the linewidth and height of the mode. In the case shown, the fitted background is reduced and the height increased. This can be seen more clearly in the right-hand plot of Fig.~\ref{TestFit}, where the backgrounds have been removed, allowing the linewidth and height of the fitted peak to be more easily compared with those of the true peak. We shall see in Section~\ref{SecSimulationFits} that similar biases are seen when fitting more complicated, more realistic simulated data.

\begin{figure*}
\centering{\label{TestFitTop}\includegraphics[width=2.8in]{figure_1l}} {\label{TestFitBot}\includegraphics[width=2.8in]{figure_1r}}
\caption{Plot showing fits to a single mode. Large peaks situated outside the fitting window are not accounted for by the model. In the left hand plot the dashed line shows the spectrum to be fitted whereas the solid line gives the fitted result. The dotted line shows the true peak characteristics without the effects of the outlying modes. In the right hand plot the solid line shows the fitted peak minus the fitted background whereas the dashed line shows the true peak minus the true background. This plot highlights more clearly the bias in the linewidth and height.}
\label{TestFit}
\end{figure*}

An obvious way of fixing these problems is to fit the entire power spectrum simultaneously, in which case the wings of all the modes will be included in the model and the true background will be fitted. Unfortunately, doing this requires a model with many hundreds of parameters. As such, attempts to fit entire power spectra involve large computing times and are susceptible to premature convergence. Even so, this technique has been employed relatively successfully when the time series used were of fairly short duration (i.e., a few hundred days) (e.g., \citealt{RocaCortes1998}; \citealt{Jimenez2002}). There are at least three possible methods that retain the benefits of full-spectrum fitting without the need to use a complex model. One method is to fit the full spectrum, but to describe the parameters as smooth functions of frequency, thus reducing the number of parameters to be fitted (e.g., \citealt{Jefferies2004}). A second method is to fit the entire spectrum using a multi-step iterative process whereby groups of parameters (such as frequencies, linewidths etc.) are fitted separately. This method has been pioneered by \cite{Gelly2002}. Finally, a third approach is to employ an amalgamation of full-spectrum fitting and the standard `pair-by-pair' fitting. In this scenario one would initially use parameter fits from the standard technique to create a first-guess model for the entire spectrum. This model can then be used to fit just a small slice of the spectrum in the same manner as the pair-by-pair fitting, allowing only the parameters associated with the target pair to vary, leaving the remaining model parameters associated with all the other modes fixed. The advantage of this approach is that, even though peaks of surrounding modes are not present in the fitting range, the effects of their wings will still be modeled.
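To fix ideas, the following minimal C sketch (the function names and calling conventions are our own illustrative assumptions) shows the simplified \cite{Nigam1998} peak profile and the negative log-likelihood appropriate to a chi-squared, two-degrees-of-freedom power spectrum \citep{Anderson1990}. The \texttt{fixedModel} term is how the third approach would enter in practice: the wings of all modes outside the window contribute a frozen, precomputed amount to the model in every bin.

\begin{verbatim}
#include <math.h>
#include <stddef.h>

/* One simplified Nigam & Kosovichev (1998) peak of height H,
 * central frequency nu0, linewidth G (FWHM) and asymmetry B,
 * evaluated at frequency nu.  B = 0 gives a plain Lorentzian. */
double nkPeak(double nu, double H, double nu0, double G, double B)
{
    double x = 2.0 * (nu - nu0) / G;
    return H * ((1.0 + B * x) * (1.0 + B * x) + B * B)
             / (1.0 + x * x);
}

/* Negative log-likelihood S = sum_i [ ln M_i + P_i / M_i ] for a
 * chi-squared, 2-d.o.f. spectrum.  model[] holds the peaks whose
 * parameters are being varied, bg is the fitted background, and
 * fixedModel[] is the frozen full-spectrum contribution (wings of
 * all modes outside the window); pass fixedModel = NULL to recover
 * the standard window-only fit. */
double negLogLike(const double *power, const double *model,
                  const double *fixedModel, double bg, long nbins)
{
    double S = 0.0;
    for (long i = 0; i < nbins; i++) {
        double M = model[i] + bg
                 + (fixedModel ? fixedModel[i] : 0.0);
        S += log(M) + power[i] / M;
    }
    return S;
}
\end{verbatim}

Minimising this quantity over the window parameters, with the frozen term held fixed, is the essence of the modified technique described below.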
All of these techniques are worthy of further study; however, in this paper it is the last of these three approaches that we investigate. A simple algorithm for this modified fitting technique is:
\begin{enumerate}
\item Create a model for the full spectrum using the parameters determined from the standard fitting code.
\item Perform a fit across a single fitting window using the full spectrum model, but only allow parameters associated with the peaks centered in the window to vary.
\item Repeat the process for all fitting windows.
\end{enumerate}
It should be noted that when forming the full spectrum model from the parameter estimates determined using the traditional fitting method, the asymmetry term is reset to zero. This is because the simplified formalisation of the \cite{Nigam1998} expression used for our fitting model is only valid within the vicinity of the peaks. The asymmetry is allowed to vary again during the second round of fitting. As in the first set of fits, all modes within the frequency range 1500 $ \leq \nu \leq $ 4600 $\mu$Hz and angular degree 0 $ \leq \ell \leq $ 3 (along with detectable $\ell$ = 4 and 5 modes) were fitted. However, only modes with frequencies up to 4000 $\mu$Hz were analysed, as modes with higher frequencies will begin to experience contamination from outlying modes that were not fitted using the pair-by-pair technique. The same parameters that were varied in the standard fitting techniques were also varied in the modified version. However, in order to reduce the complexity of the code it was decided to increase the size of the fitting window to include both $\ell$ = 0/2 and $\ell$ = 1/3 pairs (and consequently the weaker $\ell$ = 4 and 5 modes as well). Because of the extended size of the fitting window, it was also decided to allow the background to vary as a function of one over frequency (a reasonable approximation to how the background varies with frequency in real solar data). This is more important at low frequencies, where the background varies more rapidly with frequency. It should be noted that this frequency dependence was only applied across the extent of each fitting window and there were no constraints forcing a fitting window at higher frequencies to have a smaller background than was fitted for a window at lower frequencies. However, for the most part the fitted backgrounds were found to `match up' from one window to the next.

\section{Data used}\label{SecData}

A large set of simulated data was created enabling Monte Carlo simulations to be performed, in order to show whether, and if so where, the modified fitting code improved upon the standard code. The data were generated in the time domain using the first-generation solarFLAG simulator \citep{Chaplin2006}. A database of mode frequency, power, and linewidth estimates, based on analysis of spectra made from observations by the Birmingham Solar Oscillations Network (BiSON), was used in order to fix the various characteristics of each oscillator component. Visibility levels for the barely-detectable $\ell$ = 4 and 5 modes were fixed at levels calculated by \cite{C-Dalsgaard1989}. At the extreme ends of the spectrum -- where there are no reliable, fitted estimates for the parameters -- appropriate extrapolations of the known parameter values were made. In all cases the rotationally induced splitting between adjacent $m$ components was set at 0.4 $\mu$Hz, again in order to match fitted estimates from observational data.
In all, a sequence of 60 `long' 3456-day time series was generated, along with 240 `short' 796-day time series.

\section{Monte-Carlo simulations}\label{SecSimulationFits}

In Fig.~\ref{SimFigs}, the average fitted parameters determined from the Monte Carlo testing are shown. We concentrate initially on the background parameter as this is the parameter for which fits will most obviously be improved by accounting for the wings of modes that lie outside the fitting region. In Fig.~\ref{BG}, the average fitted backgrounds are plotted as a function of frequency for both the standard and modified fitting methods. For the standard technique the backgrounds within each $\ell$ = 0/2 and 1/3 fitting window are plotted, whereas for the modified technique the background levels at the frequency of each $\ell$ = 0 mode are shown. (Recall that modes are fitted two pairs at a time by the modified technique and that the background is allowed to vary as one over frequency across the fitting window.) The standard fitting code shows considerable overestimates of the background, as these estimates also incorporate the wings of the surrounding modes lying outside the fitting windows. There are also inconsistencies between the fits for different sets of mode pairs, with the estimated backgrounds in the window centered on $\ell$ = 0/2 pairs being greater on average than those for $\ell$ = 1/3. This problem can also be attributed to not allowing for the wings of the surrounding modes, and is a result of two distinct effects. The first is that the overall power in an $\ell$ = 1/3 pair is slightly greater than that in an $\ell$ = 0/2 pair (assuming similar mode frequencies). The second is dependent on the fact that, in the direction of lower to higher frequencies, $\ell$ = 1/3 mode pairs are closer to the next 0/2 pair than $\ell$ = 0/2 pairs are to the next 1/3 pair. This is important because modes at higher frequencies tend to have larger linewidths and (up until around 3100 $\mu$Hz) greater heights.

\begin{figure*}
\centering
\subfigure[Background] {\label{BG}\includegraphics[width=2.2in]{figure_2a}}
\subfigure[Frequency] {\label{F}\includegraphics[width=2.2in]{figure_2b}}
\subfigure[Linewidth] {\label{W}\includegraphics[width=2.2in]{figure_2c}}
\subfigure[Height] {\label{H}\includegraphics[width=2.2in]{figure_2d}}
\subfigure[Splitting] {\label{S}\includegraphics[width=2.2in]{figure_2e}}
\subfigure[Asymmetry] {\label{A}\includegraphics[width=2.2in]{figure_2f}}
\caption{Parameters averaged from fits to 60 sets of simulated 3456-day time series and 240 sets of simulated 796-day time series, plotted as a function of input frequency. Fitted backgrounds and asymmetries are plotted directly, whereas frequencies, linewidths, heights and splittings are plotted in relation to input values (in the sense fitted - input). Note that for the linewidths and heights it is the difference in the natural logarithms that is plotted. Open symbols signify fits returned from the standard fitting code, whereas solid symbols give results of the modified code. For the standard code in plots (a) and (f) the fitted parameters for the $\ell$ = 0/2 window are given by diamonds and by triangles for the $\ell$ = 1/3 window. In (b) to (e) diamonds signify $\ell$ = 0, triangles $\ell$ = 1, squares $\ell$ = 2 and circles $\ell$ = 3. The dashed lines give 0.5-sigma values about the true inputs for a single fit while the error bars in (a) give 1-sigma errors on the mean. (0.5-sigma is used as opposed to 1-sigma to give a better scaling for the plots.)
Dotted lines show true input values. In (c) the dash-dot lines show the expected overestimation of the fits taking into account the finite resolution of the spectra (see text).}
\label{SimFigs}
\end{figure*}

In contrast to the standard fitting, Fig.~\ref{BG} shows that the modified code returns estimates of the background that closely match the input values, at least up to frequencies of around 3000 $\mu$Hz. The small overestimate of the background returned above this frequency is not entirely understood, although it does appear from the plots that the effect is dependent on the length of the time series. It has previously been documented that many of the fitted mode parameters are correlated with one another (e.g., \citealt{Fletcher2007}). This suggests that the effect of not accurately fitting the true background, when using the standard fitting technique, may also impact on the estimates of the other parameters. This was highlighted in Section~\ref{SecTechniques} in Fig.~\ref{TestFit}, where the height and widths were shown to be biased due to the presence of unmodeled outlying peaks. In Fig.~\ref{F} average differences between the fitted and input frequencies are shown. The plot shows that for the most part the frequencies returned by the standard fitting technique are both robust and accurate and so there is little to improve upon when using the modified fitting code. The frequencies show the smallest correlation with the other parameters and therefore one might expect them to be particularly robust when fitting the data using the standard method. In Fig.~\ref{W} the average differences in the natural logarithms of the fitted and input linewidths are shown. This time there is a small but clear systematic bias in the estimates returned by the standard fitting method. As the magnitude of the bias is quite small, the differences in the natural logarithms closely approximate the fractional bias in the linear width values (i.e., a difference in the natural logarithm of -0.05 indicates the fitted linear values of the widths underestimate the input value by about 5 percent). For the 3456-day power spectra, this bias is clearly reduced when using the modified fitting code, with the largest improvements coming at higher frequencies and for the higher-degree ($\ell$ = 2 and 3) modes. However, the results of the fits to the 796-day power spectra are not quite as clear cut. Over a large part of the frequency range the standard fitting code still returns fits with a negative bias, and for the most part (between around 2500 and 3500 $\mu$Hz) the modified fitting code improves upon this. However, there is evidence of a small positive bias in results from the modified code which increases at lower frequencies and for lower-degree modes. This is believed to be a resolution issue. It has previously been shown that when fitting the linewidth of the modes in the power spectrum, the estimates returned are actually better matches to the true inputs plus the binwidth (see \citealt{Chaplin1997}). Since the binwidth is larger for a power spectrum made from shorter time series, this effect is more easily observed in the 796-day data. The dash-dot lines in Fig.~\ref{W} give, for the $\ell$ = 0 modes, the difference between the natural logarithm of the linewidth plus the binwidth and the natural logarithm of the linewidth only.
As such, this gives the line we would expect the $\ell$ = 0 points to fall upon (assuming all effects other than this resolution bias have been removed) and the plot shows that over much of the frequency range this is indeed the case for the modified fitting code. The overestimation is reduced by the square root of the number of peaks being fitted, which explains why the effect is reduced for the higher-degree modes. When fitting the power spectrum, one will nearly always find a very strong correlation between the estimated linewidths and heights of the modes. Therefore, much the same pattern as was seen for the linewidths will be seen for the heights but with the opposite bias. This is indeed seen to be the case as shown in Fig.~\ref{H}. Again the modified code is seen to reduce the overall extent of the bias, especially with the longer 3456-day time series. The results of fitting the rotational splittings are shown in Fig.~\ref{S}. In this plot the differences between averages of the fitted splittings and the true input values are divided by the error on the mean. This has been done to more easily display the splittings at all frequencies and angular degrees on the same scale. It has been well documented that at high frequencies the fitted splittings will tend to overestimate the true values (e.g., \citealt{Chaplin2001}). This is a result of the strong mode blending that occurs as the linewidths of the modes increase at higher frequencies. This problem tends to be larger for lower-degree modes as the separation in frequency between the outermost components is smaller. However, this effect is less obvious when plotting the differences and dividing by the error, since the uncertainties on the lower-degree splittings are also very large. Fig.~\ref{S} clearly shows a distinct overestimate of the fitted splittings at high frequencies with a number of points showing values larger than 3 sigma. As with the previous parameters, over the medium to high-frequency ranges, there appears to be a significant reduction in the bias when using the modified fitting code. However, the extremely large overestimates that are seen when fitting the highest frequencies are not significantly reduced. The final parameter investigated was the peak asymmetry. Even though no asymmetry was included in the simulations, it is still important to investigate in order to check that the model returns estimates consistent with zero. Fig.~\ref{A} shows the average fitted asymmetries as a function of frequency. Unlike the previous parameters the asymmetry was kept constant throughout the fitting window, meaning there is no $\ell$-dependence and thus fewer points in the plot. While the intrinsic values of the average fitted asymmetries are actually very small, they are significant and hence not consistent with zero. For the standard fitting code there are two sets of values for the asymmetries, one for the $\ell$ = 0/2 pairs and one for the 1/3 pairs. This is the same scenario as for the backgrounds, and as for that parameter, there is a significant disparity between the average fitted asymmetries for the two different sets of mode pairs, with the $\ell$ = 0/2 generally showing a much larger bias than the $\ell$ = 1/3. In fact, the disparities seen in the background and asymmetry are most likely related. The modified code seems to reduce the overall bias in the fits, although in this case, it is not eliminated entirely.
Also, the plots show that the extent of the bias has a fairly smooth response as a function of frequency and this trend is similar for all cases (although somewhat exaggerated for the fits to the $\ell$ = 0/2 pairs). These two facts would suggest that there is some underlying cause for the bias in the fitted asymmetries that is not being correctly addressed, even with the modified fitting technique. \section{Conclusions}\label{SecConclusion} We have introduced a new peak-bagging fitting technique that was designed to gain some of the advantages that fitting the entire `Sun-as-a-star' p-mode spectrum simultaneously might bring. Results on Monte-Carlo simulations demonstrated that this modified fitting method enabled accurate estimates of the true background level in the vicinity of the p-modes to be determined, something that was not possible using previous peak-bagging techniques. The Monte-Carlo simulations also showed that small biases present in other parameters (e.g., heights and widths) can be reduced using the modified code. If the modified fitting code is indeed giving the true background level, then it will enable investigations of the noise characteristics to be carried out within the frequency range of the p modes. Previously, study of noise characteristics (which include both solar and instrumental sources, as well as atmospheric sources in the case of ground-based instruments) was limited to measurements of the background at frequencies well above and below the main part of the p-mode spectrum (e.g., \citealt{Chaplin2005}). The modified code has also been tested using data collected by the Global Oscillations at Low Frequencies (GOLF) instrument on board the Solar and Heliospheric Observatory (SOHO) spacecraft. The results of this analysis will be presented in a subsequent paper. Additionally, the modified code has been adapted for use with time series that contain breaks, such as those collected by the ground-based BiSON group. Analysis of these types of data (both simulated and real) is the subject of current work. \section*{Acknowledgments} STF acknowledges the support of the School of Physics and Astronomy at the University of Birmingham. We thank all those associated with BiSON, which is funded by the Science and Technology Facilities Council (STFC). The authors are grateful for the support of the STFC. We acknowledge the Solar Fitting at Low Angular degree Group (solarFLAG) for use of their artificial helioseismic data.
\subsection{Proof of Theorem \ref{theo:np_hard}} \label{sec:proof_A} Suppose we have $\mathcal{G_s}\!=\!(\mathcal{V_s} ,\mathcal{L_s})$, where $\mathcal{V_s}\!=\!\{e_1,e_2,a_1\}$ and $\mathcal{L_s}\!=\!\{(e_1,a_1),(e_2,a_1)\}$. For ease of exposition, we define $l_1\!=\!(e_1,a_1)$ and $l_2\!=\!(e_2,a_1)$. We also define $\mathcal{E_s}\!=\!\{e_1,e_2\}$, $\mathcal{A_s}\!=\!\{a_1\}$, and $\mathcal{L_s}\!=\!\{l_1,l_2\}$. Assume the available storage capacity of each EC is infinite, i.e., $w_e\to+\infty$. Therefore, constraint \eqref{LP:con2} can be relaxed. Noting that $$\lim_{w_e\to+\infty}\frac{s_k}{w_e}=0,$$ the constraint \eqref{con:t_e} can be removed safely as well because $t_e\equiv1$ in that case. Moreover, constraints \eqref{LP:con9}$\sim$\eqref{LP:con11} can be relaxed considering $\chi_{ke}=t_e\!\cdot\!x_{ke}=x_{ke}$. In the proposed network topology $\mathcal{G_s}$, there is only one AR $a_1$ in the network, so the second dimension in the decision variable $z_{kae}$ can be squeezed, yielding $z_{ke}$. As a result, the decision variables $x_{ke}$, $y_{kl}$, and $z_{kae}$ have the exact same meaning in this scenario, since each path consists of only one link (i.e. the path from $a_1$ to $e_1$ is link $l_1$, and the path from $a_1$ to $e_2$ is $l_2$) and each link represents a unique path. In other words, we can use the decision variable $y_{kl}$ to replace the other two. Furthermore, let the moving probability $p_{ka}=1,\forall k\in\mathcal{K}$. Then the problem \eqref{LP:main_MILP} can be expressed as: given a positive number $b$, is it possible to find a feasible solution $y_{kl}$ such that \begin{equation} \label{fml:obj2} J\leq b, \end{equation} with constraints \begin{subequations} \begin{align} \label{fml:newcon1} &\sum_{l\in\mathcal{L_s}}y_{kl}\!\leq\!1,\forall k\!\in\!\mathcal{K}, \\ \label{fml:newcon2} &\sum_{k\in\mathcal{K}}b_k\!\cdot\!y_{kl}\!<\!c_l,\forall l\!\in\!\mathcal{L_s}, \\ \label{fml:newcon3} &y_{kl}\!\in\!\{0,1\},\forall k\!\in\!\mathcal{K},l\!\in\!\mathcal{L_s}, \end{align} \end{subequations} satisfied? As in \cite{fan2018application}, assume $b\to+\infty$; then \eqref{fml:obj2} can be relaxed. Moreover, we set each link to have the same capacity, i.e. $c_{l_1}=c_{l_2}=\frac{1}{2}\sum_{k\in\mathcal{K}}b_k+\nu$, where $\nu\ll\frac{1}{2}\min_k(b_k)$. In order to satisfy \eqref{fml:newcon2}, we must guarantee that $$\sum_{k\in\mathcal{K}}b_k y_{kl_1}=\sum_{k\in\mathcal{K}}b_k y_{kl_2}=\frac{1}{2}\sum_{k\in\mathcal{K}}b_k,$$ which is exactly the set-partition problem. In other words, the set-partition problem is reducible to the MILP model \eqref{LP:main_MILP}, and set-partition is a well-known $NP$-hard problem \cite{cormen2013algorithms}. \subsection{Proof of Theorem \ref{theo:micro}} \label{sec:proof_B} In the micro-averaged version, all the quantities, i.e. $T^+$, $T^-$, $F^+$ and $F^-$, are summed over the ECs. Regarding $F^+$, there is \begin{equation*} \begin{aligned} F^+&=\sum_{e=1}^{|\mathcal{E}|}F^+_e=\sum_{e=1}^{|\mathcal{E}|}\sum_{i=1}^{|\mathcal{T}|}|\pi_{ie}\notin X_i\cap\pi_{ie}\in\hat{X}_i|=\sum_{i=1}^{|\mathcal{T}|}\left(\sum_{e=1}^{|\mathcal{E}|}|\pi_{ie}\notin X_i\cap\pi_{ie}\in\hat{X}_i|\right). \end{aligned} \end{equation*} The cardinality $|\pi_{ie}\notin X_i\cap\pi_{ie}\in\hat{X}_i|$ counts, for each testing instance, the cells whose value is $1$ in the prediction but $0$ in fact. Constraint \eqref{LP:con1} forces each request to be served by exactly one EC.
In other words, whenever a cell is $1$ in the prediction but $0$ in fact, there must be a corresponding cell in the same column that is $0$ in the prediction but $1$ in fact. Therefore, $$ \sum_{e=1}^{|\mathcal{E}|}|\pi_{ie}\notin X_i\cap\pi_{ie}\in\hat{X}_i|=\sum_{e=1}^{|\mathcal{E}|}|\pi_{ie}\in X_i\cap\pi_{ie}\notin\hat{X}_i|, \forall i\in\mathcal{T}. $$ $F^+$ can be rewritten as \begin{align*} F^+=& \sum_{i=1}^{|\mathcal{T}|}\left(\sum_{e=1}^{|\mathcal{E}|}|\pi_{ie}\notin X_i\cap\pi_{ie}\in\hat{X}_i|\right)= \sum_{i=1}^{|\mathcal{T}|}\left(\sum_{e=1}^{|\mathcal{E}|}|\pi_{ie}\in X_i\cap\pi_{ie}\notin\hat{X}_i|\right)\\ =&\sum_{e=1}^{|\mathcal{E}|}\sum_{i=1}^{|\mathcal{T}|}|\pi_{ie}\in X_i\cap\pi_{ie}\notin\hat{X}_i|=\sum_{e=1}^{|\mathcal{E}|}F^-_e=F^-. \end{align*} Consequently, \begin{equation*} \begin{aligned} P_{\textrm{micro}} =\frac{\sum_{e=1}^{|\mathcal{E}|}T^+_e}{\sum_{e=1}^{|\mathcal{E}|}T^+_e+\sum_{e=1}^{|\mathcal{E}|}F^+_e} =\frac{T^+}{T^++F^+}=\frac{T^+}{T^++F^-}=R_{\textrm{micro}}, \end{aligned} \end{equation*} and \begin{equation*} \begin{aligned} F_{1\textrm{micro}}\!=\!\frac{2\!\times\!T^+}{2\!\times\!T^+\!+\!F^-\!+\!F^+} =\frac{2\times T^+}{2\times T^++F^++F^+} \!=\!\frac{2\!\times\! T^+}{2\!\times\!(T^+\!+\!F^+)}=P_{\textrm{micro}}. \end{aligned} \end{equation*} \section{Conclusions} \label{sec:conclusions} In this paper, we have proposed a framework in which a data-driven technique in the form of deep convolutional neural networks (CNNs) is amalgamated with a model-based approach in the form of a mixed integer programming problem. We have transformed the optimization model into a grayscale image and, then, trained CNNs in parallel with optimal decisions. To further improve the performance, we have provided two algorithms: the proposed CNN-HCLS is suitable for the case in which the processing time is more significant, while CNN-rMILP fits the scenario where the overall cost is more important. Numerical investigations reveal that the proposed schemes provide competitive cache allocations. In the case of 15 flow requests, CNN-rMILP manages to accelerate the calculation by 5 times compared to the benchmark, with less than $7\%$ additional overall cost. Furthermore, the average computational time for CNN-HCLS is only $49.3$ msec, which makes the proposed framework suitable for real-time decision making. We envision several future directions for this work. First, we have assumed that the collected network information is reliable. When there are small, controlled variations in the input, further sensitivity analysis of the deep learning output is needed. Furthermore, considering a scenario with time-sequential content requests, an interesting extension of the proposed CNN would be to include temporal characteristics, which will require new ways to transform the spatio-temporal characterization of the caching placement problem into an image. Finally, one can study the design of novel distributed machine learning algorithms \cite{9210812,9457160} so as to enable edge devices to proactively optimize caching content and delivery methods without transmitting a large amount of data. \section{Deep Learning for Caching} \label{sec:DNN} In this section, we use a CNN to capture the spatial features of \eqref{LP:main_MILP}. Solving \eqref{LP:main_MILP} allows us to determine the allocation of each requested content, i.e. $x_{ke}$, which is expected to be the output of the CNN.
However, each EC $e$ can serve more than one flow if there is enough memory space for caching, and in this case, the size of the CNN output space grows exponentially according to $|\mathcal{E}|\cdot 2^{|\mathcal{K}|}$. To deal with this challenge, the dependency among ECs and flows is taken into consideration. Constraint \eqref{LP:con1} forces each flow to be served by exactly one EC. If we ignore the caching resource competition among flows, i.e. the storage space in constraint \eqref{LP:con2} and the bandwidth in \eqref{LP:con3}, then we can decompose the original caching allocation problem into a number of independent sub-problems where every sub-problem focuses on the allocation of one flow, i.e., we use a first-order strategy~\cite{zhang2013review}. Moreover, in each sub-problem, the decision variable $x_{ke}$ is reduced to $x_{e}$, and constraint \eqref{LP:con1} becomes $\sum_{e\in\mathcal{E}}x_e=1$. In other words, we only need to classify the specific content into exactly one EC among all caching candidates. Therefore, the sub-problem becomes a multi-classification problem. Next, we can define a CNN that corresponds to each sub-problem and that can be trained in parallel. In Section \ref{sec:subtraining}, we introduce the training process for each individual CNN and, then, in Section \ref{sec:subtesting}, we present the testing process by combining the predictions of all of the CNNs, and we propose two algorithms to improve performance. \subsection{Training Process} \label{sec:subtraining} As shown in Figure \ref{fig:training}, the training process includes four steps in general. Step ($i$) converts the original network caching allocation problem into an MILP model as discussed in Section \ref{sec:model}. \begin{figure}[t] \centering \includegraphics[trim=0mm 0mm 0mm 5mm, clip, width=\textwidth]{figure/training_process.eps} \caption{Training process for the proposed CNN.} \label{fig:training} \end{figure} \subsubsection{Grayscale Image Generation} In step ($ii$), the mathematical model is transformed into a grayscale image which is used as an input to the CNN. An intuitive idea is constructing a dense parameter matrix which contains the entire information of the optimization model \eqref{LP:main_MILP}. This would require us to include all the parameters listed in Table \ref{tab:Notations} as part of the matrix. Consequently, our CNN would have to be wide and deep enough to recognize the large input image, which increases the complexity of CNN design and training. We notice that some of the parameters in Table \ref{tab:Notations} are constant, such as $N_{ae}$, the number of hops, which is typically fixed for a particular network topology. These parameters contribute very little to distinguishing different assignments and can be viewed as redundant information during the training procedure. By analysing the optimal solutions derived from \eqref{LP:main_MILP}, we observe that there is a pattern linking user behaviour, network resources, and the allocation of content caching. For instance, an EC is more likely than others to be selected as a caching host if a) this EC is closer to mobile users, i.e. less transmission cost $C^T$; b) this EC has more available caching memory space, i.e. less caching cost $C^C$ and a valid storage capacity constraint \eqref{LP:con2}; and c) this EC is attached to links with enough bandwidth, i.e. valid bandwidth capacity \eqref{LP:con3}.
Furthermore, the transmission cost $C^T$ mentioned in a) depends on the moving probability $p_{ka}$ and the number of hops $N_{ae}$. For the storage space in b), we introduce $q_{ke}=s_k/w_e, \forall k\in\mathcal{K},e\in\mathcal{E}$ as a parameter that reflects the influence of caching the content for flow $k$ on EC $e$. Therefore, constraint \eqref{LP:con2} can be transformed in the form of $q_{ke}$, which becomes $\sum_kq_{ke}\cdot x_{ke}<1, \forall e\in\mathcal{E}$. Clearly, larger available caching space $w_e$ results in smaller $q_{ke}$. Similarly, for the link bandwidth in c), we introduce a new parameter $r_{kl}=b_k/c_l,\forall k\in\mathcal{K},l\in\mathcal{L}$ to capture the link congestion level. We can now arrange $p_{ka}$, $q_{ke}$, and $r_{kl}$ as a matrix $\boldsymbol{T}$ that represents the features to be learned by the CNN, as follows: $$ \boldsymbol{T}= \setlength{\arraycolsep}{1pt} \left[ \begin{array}{ccc|ccc|ccc} p_{11}&\cdots&p_{1|\mathcal{A}|}&q_{11}&\cdots&q_{1|\mathcal{E}|}&r_{11}&\cdots&r_{1|\mathcal{L}|}\\ p_{21}&\cdots&p_{2|\mathcal{A}|}&q_{21}&\cdots&q_{2|\mathcal{E}|}&r_{21}&\cdots&r_{2|\mathcal{L}|}\\ \vdots&\ddots&\vdots&\vdots&\ddots&\vdots&\vdots&\ddots&\vdots\\ p_{|\mathcal{K}|1}&\cdots&p_{|\mathcal{K}||\mathcal{A}|}&q_{|\mathcal{K}|1}&\cdots&q_{|\mathcal{K}||\mathcal{E}|}&r_{|\mathcal{K}|1}&\cdots&r_{|\mathcal{K}||\mathcal{L}|}\\ \end{array} \right] $$ Each element of the above matrix $\boldsymbol{T}$ can be viewed as a shade of gray, ranging from $0\%$ (white) to $100\%$ (black). Then, $\boldsymbol{T}$ is converted to a grayscale image. In order to generate the matrix $\boldsymbol{T}$, we need to know the user moving probability $p_{ka}$. This probability can be predicted by exploring historical data, as in~\cite{chen2017caching} and \cite{al2018move}; such techniques are beyond the scope of this paper and complementary to it. Moreover, the memory utilization $q_{ke}$ and the link utilization $r_{kl}$ can be obtained from the network management element. For instance, in a multi-access edge computing (MEC) system, the virtualisation infrastructure manager in the MEC host is responsible for managing storage and networking resources \cite{etsi2020mec}, i.e. $w_e$ and $c_l$. Once the user's required caching size $s_k$ and bandwidth $b_k$ are received, we can calculate $q_{ke}$ and $r_{kl}$ accordingly. Figure \ref{fig:encoding} shows a typical generated image where $|\mathcal{K}|=10$, $|\mathcal{A}|=8$, $|\mathcal{E}|=7$ and $|\mathcal{L}|=9$. In Figure \ref{fig:encoding}, the darker the colour of a pixel, the larger the value of the corresponding element in the matrix. \begin{figure}[t] \centering \includegraphics[trim=3mm 12mm 8mm 0mm, clip, width=0.4\textwidth]{figure/grayscale_image.eps} \caption{Feature encoding to grayscale image.} \label{fig:encoding} \end{figure} \subsubsection{Proposed CNN Structure} \label{subsubsec:CNN} The constructed image is applied as a feature to be learned by our CNN in step ($iii$). A common image recognition task has some key properties: a) some patterns are much smaller than the whole image; b) the same pattern appears in different positions; c) downsampling some pixels will not change the classification results. In particular, the convolutional layer exploits properties a) and b) while the pooling layer matches c). Our generated grayscale images also have properties a) and b).
For example, the pixel in the middle of Figure \ref{fig:encoding} represents the effect on memory space utilization when caching content in the corresponding EC, which fits property a); and dark pixels appear in different regions of the image, which matches property b). However, each pixel in our grayscale images provides essential information for content placement, and subsampling these pixels would change the CNN classification outcome. As a result, property c) does not hold for our images. Therefore, we keep the convolutional layer in our CNN structure but we remove the pooling layer. Figure \ref{fig:CNN} shows the structure of the proposed CNN which contains the following layers: \begin{figure}[t] \centering \includegraphics[trim=0mm 0mm 0mm 0mm, clip, width=\textwidth]{figure/CNN.eps} \caption{CNN architecture.} \label{fig:CNN} \end{figure} \begin{itemize} \item \emph{Input Layer:} this layer specifies the image size and applies data normalization. In our model, the input is a grayscale image, so the channel size is one. The height is $|\mathcal{K}|$ and the width is $|\mathcal{A}|+|\mathcal{E}|+|\mathcal{L}|$. \item \emph{CONV Layer:} this layer generates a successively higher-level abstraction of the input image, commonly called a feature map. In our work, the CONV layer is composed of a) a convolutional layer, applying sliding convolutional filters to the input; b) a batch normalization layer, normalizing the input to speed up network training and reduce the sensitivity to network initialization; and c) a ReLU layer, applying the rectified linear unit (ReLU) activation function as a threshold to each element of the input. Generally, a CNN structure contains a deep stack of CONV layers, but we use only one CONV layer in our CNN. The effect of different numbers of CONV layers is discussed in Section \ref{sec:investigations}. \item \emph{FC Layer:} this layer produces the output of the CNN and consists of a) a fully connected layer that combines the features of a grayscale image to select the EC for caching; b) a softmax layer that normalizes the output of the preceding layer. It is worth noting that the output of the softmax layer is a vector of non-negative numbers that sum to one. Therefore, the output vector can be used as a probability for classification; c) a classification layer that uses the output of the softmax layer for each grayscale image to assign it to one of the potential ECs and then compute the overall loss. \end{itemize} \subsubsection{Loss Function} In step ($iv$), we employ the following cross-entropy loss function to estimate the gap between the CNN prediction and the optimal solution, \begin{equation} \label{eq:loss} \Upsilon(k)\!=\!-\!\sum_{i=1}^{|\mathcal{I}|}\sum_{e=1}^{|\mathcal{E}|}\!x^i_{ke}\!\ln{\hat{x}^i_{ke}}\!+\!\frac{\lambda}{2}\!\boldsymbol{W}^T\!\boldsymbol{W}, k\!=\!1,2,\cdots,|\mathcal{K}|, \end{equation} where $\mathcal{I}$ represents the training dataset, $\lambda$ is the L2 regularization (a.k.a. Tikhonov regularization) factor to avoid overfitting, $\boldsymbol{W}$ is the weight vector, $x^i_{ke}$ is the optimal allocation of the $i^{th}$ network scenario, and $\hat{x}^i_{ke}$ is the corresponding CNN prediction. During training, the weight vector $\boldsymbol{W}$ is updated recursively towards the reduction of the loss function \eqref{eq:loss}.
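To make \eqref{eq:loss} concrete, the following minimal Python sketch (ours, not part of the paper's codebase; array shapes and names are illustrative assumptions) evaluates the regularized cross-entropy loss for the sub-problem of a single flow $k$, given one-hot optimal allocations and softmax outputs:
\begin{verbatim}
import numpy as np

def loss_for_flow(x_opt, x_hat, W, lam=1e-4):
    """Cross-entropy loss with L2 penalty for one flow k, cf. Eq. (eq:loss).

    x_opt : (I, E) one-hot optimal allocations x^i_{ke}
    x_hat : (I, E) softmax outputs of the CNN
    W     : flattened weight vector of the network
    lam   : L2 (Tikhonov) regularization factor lambda
    """
    eps = 1e-12                                  # guard against log(0)
    ce = -np.sum(x_opt * np.log(x_hat + eps))    # cross-entropy term
    return ce + 0.5 * lam * np.dot(W, W)         # plus the L2 penalty

# Toy usage: 3 training samples, 4 candidate ECs.
x_opt = np.eye(4)[[0, 2, 1]]                     # optimal EC per sample
logits = np.random.randn(3, 4)
x_hat = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
print(loss_for_flow(x_opt, x_hat, W=np.random.randn(10)))
\end{verbatim}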
Note that the original optimization problem \eqref{LP:main_MILP} contains five decision variables, but we pick $x_{ke}$ as the label to train the CNN, because the rest can be approximated from $x_{ke}$. For example, the two auxiliary variables $\chi_{ke}$ and $t_e$ are determined via constraints \eqref{LP:con9}$\sim$\eqref{LP:con11} and \eqref{con:t_e} respectively. The upper bound of $z_{kae}$ is limited by constraint \eqref{LP:con5}. Generally, the penalty of a cache miss is larger than the cache-hit transmission cost, given that the data center is farther away from the AR than the host EC. Thus, most potentially connected ARs would be taken into account, except ARs with low attachment probability (i.e. $p_{ka}$ less than a threshold, which can be defined as the ratio between caching cost and caching gain~\cite{vasilakos2012proactive}). After that, the value of $z_{kae}$ is fixed. Moreover, constraints \eqref{LP:con6} and \eqref{LP:con7} provide the upper and lower bounds of $y_{kl}$ in the form of $z_{kae}$. \subsection{Testing Process} \label{sec:subtesting} Directly combining the outputs of the individual CNNs may cause collisions, since the aforementioned MILP model decomposition does not consider the interdependence of the decision variables. In this subsection, two different approaches, reduced MILP (rMILP) and hill-climbing local search (HCLS), are proposed to avoid such conflicts. \subsubsection{CNN-rMILP} \label{sec:sub_ReducedMILP} As seen from Figure \ref{fig:CNN-MILP}, the testing process consists of five steps where the first three steps are explained in Section \ref{sec:subtraining}. \begin{figure}[t] \centering \includegraphics[trim=0mm 0mm 0mm 0mm, clip, width=.9\textwidth]{figure/CNN_MILP.eps} \caption{Testing process for CNN-rMILP.} \label{fig:CNN-MILP} \end{figure} In step ($iv$), the output of the softmax layer is a vector in which each element represents the probability of hosting the content. This probability can be viewed as the confidence of the prediction. For example, when the CNN output is $[0.8,0,0.15,0,0.05,0]$, the CNN is most confident in allocating the content to the first EC (first element in the vector), which has the highest value, $0.8$. By combining these vectors, we can derive a matrix $\boldsymbol{O}=(o_{ke})_{|\mathcal{K}|\times|\mathcal{E}|}$. Here, each element $o_{ke}$ captures the probability of caching the content for flow $k$ in EC $e$. We ignore the zero and near-zero elements because the corresponding ECs are unlikely to be selected as caching hosts according to the CNN prediction. In order to filter out these elements, a unit step function $H(\cdot)$ is introduced with threshold $\delta$: $$ H(o_{ke})= \begin{cases} 1, &o_{ke}\geq\delta, \\ 0, &o_{ke}<\delta. \end{cases} $$ After the filter $H(o_{ke})$, all $1$ elements are viewed as candidates to host the content. As done in~\cite{lei2019learning}, we perform a global search by adding a new constraint to the original optimization problem \eqref{LP:main_MILP}, which becomes: \begin{subequations} \label{LP:new} \begin{align} &\mathop{\min} J(\chi_{ke},z_{kae}). \\ \textrm{s.t.}\quad &\eqref{LP:con1}\sim\eqref{LP:con8},\eqref{LP:con9}\sim\eqref{LP:con11},\eqref{con:t_e},\eqref{LP:con12}, \nonumber\\ \label{LP:added} &x_{ke}\leq H(o_{ke}),\forall k\!\in\!\mathcal{K},e\!\in\!\mathcal{E}.
\end{align} \end{subequations} In step ($v$), the new optimization problem \eqref{LP:new} can be efficiently solved when $\big(H(o_{ke})\big)_{|\mathcal{K}|\times|\mathcal{E}|}$ is a sparse matrix, since the feasible region of the decision variable $x_{ke}$ is greatly reduced. Note that, although the introduction of constraint \eqref{LP:added} enables a reduction in search space, feasible solutions may also be eliminated by constraint \eqref{LP:added}, especially when the CNN is not well trained or the threshold $\delta$ in the function $H(\cdot)$ is high. As a result, problem \eqref{LP:new} may become unsolvable. In case of infeasibility, constraint \eqref{LP:added} is relaxed and we solve the original optimization model \eqref{LP:main_MILP} again. \subsubsection{CNN-HCLS} \begin{figure}[t] \centering \includegraphics[trim=0mm 25mm 0mm 0mm, clip, width=.9\textwidth]{figure/CNN-HCLS.eps} \caption{Testing process for CNN-HCLS.} \label{fig:CNN-HCLS} \end{figure} The CNN-HCLS process is shown in Figure \ref{fig:CNN-HCLS}. We define the \textit{state space} as the collection of all possible solutions and non-solutions, i.e. all assignment possibilities of the variable $x_{ke}$; a \textit{successor state} is the ``locally'' accessible state from a current state with the largest probability, where ``locally'' means the successor state differs in the cache allocation of only one user; finally, the \textit{score} $S$ is defined as a piecewise linear function, added to the total cost $J$ as a penalty cost, which measures the impact of violated constraints \eqref{LP:con2} and \eqref{LP:con3} in the form of $q_{ke}$ and $r_{kl}$: \begin{equation} \label{fml:new_cost} S\!=\!\gamma\!\max\!\left\{\!0\!,\!\left(\!\sum_{e\in\mathcal{E}}\!\sum_{k\in\mathcal{K}}\!q_{ke}\!x_{ke}\!-\!1\!\right)\!,\!\left(\!\sum_{l\in\mathcal{L}}\!\sum_{k\in\mathcal{K}}\!r_{kl}\!y_{kl}\!-\!1\!\right)\!,\!\left(\!\sum_{e\in\mathcal{E}}\!\sum_{k\in\mathcal{K}}\!q_{ke}\!x_{ke}\!-\!1\!\right)\!+\!\left(\!\sum_{l\in\mathcal{L}}\!\sum_{k\in\mathcal{K}}\!r_{kl}\!y_{kl}\!-\!1\!\right)\!\right\}\!+\!J\!, \end{equation} where $\gamma$ is the penalty factor for violated constraints. Similar to the processing in CNN-rMILP, to reduce the number of attempted assignments, the matrix is filtered via the unit step function $H(\cdot)$ with threshold $\delta$. The details are described in Algorithm \ref{alg:HCLS}. {\renewcommand{\arraystretch}{1.35} \begin{algorithm}[t] \caption{Hill-Climbing Local Search (HCLS)} \label{alg:HCLS} \KwData{predicted conditional probability $\boldsymbol{O}=(o_{ke})$, threshold of probability $\delta$, variables in Table \ref{tab:Notations}} \KwResult{flow assignment $\boldsymbol{X}=(x_{ke})$} Construct assignment matrix $\boldsymbol{X}\leftarrow \boldsymbol{X}\left(o_{ke}\geq\delta\right)$\;\label{alg:state1} Select the largest number on each column as the initial state\;\label{alg:state2} Evaluate the cost $S$ via \eqref{fml:new_cost} of the successor states\;\label{alg:state3} Move to a successor state with less cost\;\label{alg:state4} Repeat from Step \ref{alg:state3} until no further improvement of the cost is possible\;\label{alg:state5} Set all the elements in the terminated state to 1, and the rest to 0\; \label{alg:state6} \end{algorithm} \par} Considering Figure \ref{fig:HCLS-instance} as an instance, we combine the outputs of the CNNs in Figure \ref{fig:CNN-HCLS} into a $4\times3$ matrix, where rows indicate candidate ECs and columns represent different requests.
In step \ref{alg:state1} of Algorithm \ref{alg:HCLS}, each predicted probability is compared with the threshold $\delta=0.1$, and all elements less than this value are set to zero. Then, we select the largest value (shown in red and bold font) in each column as the initial state, i.e. $0.99$, $0.7$ and $0.5$ in the three columns respectively. Since $0.99$ is the first element in the first column, the initial assignment will allocate the first request to the first EC. Similarly, the second and third requests are cached in the third EC. In step \ref{alg:state2}, because $0.99$ is the only non-zero item in the first column, there is no successor in this direction. For the second column, the second-largest number is $0.2$ with position index (2,2) in the initial state, so one successor state keeps the allocation of the first and third requests in the original locations but caches the second request in the second EC, i.e. branch $i$ in Figure \ref{fig:HCLS-instance}. Similarly, we get another successor state following the third column, as branch $ii$. In the following two steps, we evaluate the cost $S$ of these three assignments via formula \eqref{fml:new_cost}, where successor state $i$ results in a smaller cost. Then, we check further successor states such as $iii$ and $iv$ recursively based on the current state $i$. In this example, the algorithm terminates at the locally optimal solution (shown in a blue dotted rectangle) because no further improving states can be found. In the last step, the bold, red-colored positions are selected as caching ECs. \begin{figure}[t] \centering \includegraphics[trim=0mm 160mm 0mm 130mm, clip, width=\textwidth]{figure/HCLS-instance.eps} \caption{An instance of CNN-HCLS.} \label{fig:HCLS-instance} \end{figure} As can be seen from the above instance, there are at most $|\mathcal{K}|$ branches from the previous state to the successor state, and the maximum number of states is $|\mathcal{E}|$, which corresponds to enumerating all potential ECs. Moreover, the computation of the cost $S$ in each branch takes constant time. Therefore, the time complexity of Algorithm \ref{alg:HCLS} is $O(|\mathcal{K}|\cdot|\mathcal{E}|)$. Once the network topology is determined, $|\mathcal{E}|$ becomes constant and, then, the running time is reduced to $O(|\mathcal{K}|)$, i.e. linear time complexity. In summary, we have proposed two different approaches, CNN-rMILP and CNN-HCLS, to decide cache content placement. While both of them utilize the output of the trained CNNs, the CNN plays different roles: in CNN-rMILP, the CNN is used to eliminate improbable ECs via constraint \eqref{LP:added}; in CNN-HCLS, the CNN provides the search direction in Algorithm \ref{alg:HCLS}. Note that the proposed CNN-rMILP and CNN-HCLS provide sub-optimal solutions of \eqref{LP:main_MILP}, and should not be considered as solving an NP-hard problem optimally. \section{Numerical Results} \label{sec:investigations} \subsection{Simulation Setting} We will now compare the performance of the optimal decision making derived from the MILP model \eqref{LP:main_MILP} with the proposed CNN-rMILP and CNN-HCLS. To further benchmark the results, we also compare with a greedy caching algorithm (GCA) which attempts to allocate each request to its nearest EC, as detailed in Algorithm \ref{alg:greedy}; a minimal sketch of this greedy strategy is given below. Additionally, to highlight the improvements of the CNN-rMILP and CNN-HCLS schemes, we report the resulting solution quality when using only the CNN (named pure-CNN in the following).
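As a companion to Algorithm \ref{alg:greedy}, the following Python sketch (ours; the dictionary-based inputs and names are illustrative assumptions, and the EC ordering uses precomputed hop counts in place of running Dijkstra's algorithm) mirrors the greedy nearest-EC strategy:
\begin{verbatim}
def gca(prob, size, cap, hops):
    """Greedy Caching Algorithm: cache each flow at the nearest EC
    (fewest hops from its most likely AR) that has enough free space.

    prob : dict  flow -> {ar: attachment probability p_ka}
    size : dict  flow -> required caching size s_k
    cap  : dict  ec   -> available cache space w_e
    hops : dict  (ar, ec) -> hop count N_ae on the shortest path
    """
    x = {}                                    # flow -> chosen EC
    for k in prob:
        a = max(prob[k], key=prob[k].get)     # most likely AR for flow k
        # ECs ordered by distance from AR a (the paper builds this
        # queue with Dijkstra's algorithm; here we sort by hop count)
        for e in sorted(cap, key=lambda ec: hops[(a, ec)]):
            if size[k] <= cap[e]:             # enough residual space?
                x[k] = e
                cap[e] -= size[k]
                break
    return x

# Toy usage: two flows, one AR, two ECs.
x = gca(prob={1: {'a1': 0.9}, 2: {'a1': 1.0}},
        size={1: 30, 2: 40}, cap={'e1': 50, 'e2': 500},
        hops={('a1', 'e1'): 1, ('a1', 'e2'): 2})
print(x)  # {1: 'e1', 2: 'e2'}: flow 2 overflows e1 and falls back to e2
\end{verbatim}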
We assume a nominal mesh tree-like mobile network topology where user mobility takes place at the edge of the network. Based on the mobility patterns of the users, we apply the different techniques, i.e., MILP (benchmark), pure-CNN, CNN-rMILP, CNN-HCLS, and GCA, to perform proactive edge cloud caching of popular content. The simulation parameters are summarized in Table \ref{tab:Network_Parameters}. {\renewcommand{\arraystretch}{1.35} \begin{algorithm}[t] \caption{Greedy Caching Algorithm (GCA)} \label{alg:greedy} \KwData{variables in Table \ref{tab:Notations}} \KwResult{flow assignment $\boldsymbol{X}=(x_{ke})$} \For{each $k\in\mathcal{K}$}{ Find the maximum $p_{ka}$ and related $a$, $p_{ka}\!\leftarrow\!0$\; Build an EC queue $L_e$ from $a$ by Dijkstra's algorithm\; \For{each $e\in L_e$}{ \If{$s_k \leq w_e$}{ $x_{ke}\!\leftarrow\!1$, $w_e\!\leftarrow\!w_e\!-\!s_k$\; \textbf{break}\; } } } \end{algorithm} \par} \begin{table}[t] \centering \caption{\label{tab:Network_Parameters}Network parameters \cite{wang2019proactive}.} \begin{tabular}{l|c} \hline \textbf{Parameter}&\textbf{Value} \\ \hline Degree per node & 1$\sim$5 \\ Number of mobile users ($|\mathcal{K}|$)& \{5,10,15,20\} \\ Number of links ($|\mathcal{L}|$)& 20\\ Number of access routers ($|\mathcal{A}|$)& 7\\ Number of edge clouds ($|\mathcal{E}|$)& 6\\ Threshold of prediction probability ($\delta$) & 0.001 \\ Size of user request content ($s_k$) & [10,50] MB \\ Available cache size in EC ($w_e$) & [100,500] MB\\ User request transmission bandwidth ($b_k$) & [1,10] Mbps\\ Link available capacity ($c_l$) & [50,100] Mbps\\ \hline \end{tabular} \end{table} The dataset is generated by solving the optimization problem \eqref{LP:main_MILP} via nominal branch-and-bound approaches with different user behaviour and network utilization levels, i.e., $p_{ka}$, $s_k$, $b_k$, $w_e$ and $c_l$, which follow uniform distributions. In the subsequent results, we first generate 1280 samples for a scenario with $5$ users, out of which $1024$ samples are used for training with the structure of each CNN illustrated in Figure \ref{fig:CNN}; $128$ samples are used as the validation set for hyperparameter tuning, as illustrated in Section \ref{subsec:hyper}; and the remaining $128$ samples are used for performance testing, which is discussed in Section \ref{subsec:compa}. For the scenarios with $10$, $15$, and $20$ users, we construct $128$ samples for each case accordingly. We note that the input image size of the trained CNN is $5\times33$ (i.e. $|\mathcal{K}|\times(|\mathcal{A}|+|\mathcal{E}|+|\mathcal{L}|)$) and the images for more than $5$ requests exceed this size. For the purpose of matching the input layer of the trained CNN, the large grayscale image can be divided into partitions, with the height of each sub-image set to 5 to fit the input size. Next, the trained CNNs (i.e. the so-called pure-CNN) are applied to predict the allocation of one sub-image and, then, to update the unassigned sub-images based on that assignment in a cascade manner. The behaviour of a traffic flow, such as the moving probability $p_{ka}$, is independent of the caching locations and, hence, remains constant during the update process. For the update of $q_{ke}$, we have \begin{equation} \label{eq:update_ec} q_{ke}=\frac{s_k}{w_e-\sum_{k'\in\mathcal{K'}}s_{k'}x_{k'e}}, \forall k\in\mathcal{K}\setminus\mathcal{K'}, \end{equation} where $\mathcal{K'}$ is the set of traffic flows that have already been assigned.
Regarding the link utilization $r_{kl}$, the update is calculated in a similar fashion, \begin{equation} \label{eq:update_link} r_{kl}=\frac{b_k}{c_l-\sum_{k'\in\mathcal{K'}}b_{k'}y_{k'l}}, \forall k\in\mathcal{K}\setminus\mathcal{K'}. \end{equation} From \eqref{eq:update_ec}, we can see that the value of $q_{ke}$ could become negative, which indicates that the previous allocation is invalid. This, in turn, results in congestion at EC $e$, because it violates the constraint $w_e\!-\!\sum_{k'\in\mathcal{K'}}s_{k'}x_{k'e}\!>\!0$. The same holds for the variable $r_{kl}$. In this special case, the EC or link is not involved in the next sub-image assignment. We keep caching contents and updating the network parameters $q_{ke}$ and $r_{kl}$ until all users' requests are satisfied, or we terminate the cache placement when no additional network resources (i.e. caching memory space, link bandwidth) are available. The pure-CNN, instead of CNN-HCLS or CNN-rMILP, is used to update the image in order to avoid error accumulation. This can be explained from Figure \ref{fig:HCLS-instance}: the output of the pure-CNN configuration is a probability matrix, so many different assignments (as long as the cell is not zero) can be taken into consideration, whereas for CNN-HCLS and CNN-rMILP the end result is a binary matrix. If the allocation for a previous sub-image were fixed inappropriately, the error would accumulate and affect the remaining sub-images. The operations when the number of end-users exceeds 5 are summarized in Algorithm \ref{alg:update}. {\renewcommand{\arraystretch}{1.35} \begin{algorithm}[t] \caption{Augmenting Allocations for $K$ Requests} \label{alg:update} \KwData{Grayscale image $\boldsymbol{I}$} \KwResult{Flow assignment $\boldsymbol{X}=(x_{ke})$} Separate the image into $\ceil*{|\mathcal{K}|/5}$ sub-images\; \While{any sub-image not assigned}{ Call pure-CNN to do prediction for one of the unassigned sub-images\; Update unassigned sub-images based on \eqref{eq:update_ec} and \eqref{eq:update_link}\; } Combine the predictions into a $|\mathcal{K}|\times|\mathcal{E}|$ matrix\; Call HCLS (Algorithm \ref{alg:HCLS}) or rMILP (Section \ref{sec:sub_ReducedMILP})\; \label{alg:STATE_call} \end{algorithm} \par} In contrast to the nominal multi-label classification problem, in our case some misclassification is still acceptable if the total cost $S$ remains competitive. We first compare the computational complexity of these algorithms, measured by the average testing time. For pure-CNN, the time is tracked from the loading of the grayscale image in the input layer to the prediction produced in the output layer, excluding the training time. For CNN-rMILP and CNN-HCLS, the processing time also includes the global search (solving the reduced MILP) and the local search (hill climbing), respectively. Moreover, the average total cost with invalid constraint penalties \eqref{fml:new_cost} is calculated. Then, we derive the feasible ratio, which is defined as the percentage of assignments satisfying the constraints. In turn, this sheds light on the network congestion after allocation. For example, $100\%$ indicates adequate network availability, while $60\%$ indicates significant congestion. Additionally, the maximum total cost difference from the optimal solution is analyzed, which represents the performance gap in the worst case.
We also compare the average number of decision variables for the benchmark and CNN-rMILP over the $128$ testing samples, which reflects the search space reduction. Finally, some typical classification evaluation metrics are included, such as accuracy, precision, recall, and $F_1$ score\footnote{To put computational times in perspective, we note that simulations run on MATLAB 2019b in a 64-bit Windows 10 environment on a machine equipped with an Intel Core i7-7700 CPU 3.60 GHz Processor and 16 GB RAM.}. According to \cite{zhang2013review}, four basic quantities, namely the true positive ($T^+$), false positive ($F^+$), true negative ($T^-$) and false negative ($F^-$), are defined as follows, for each EC $e$: \begin{align*} &T^+_e\!=\!\sum_{i=1}^{|\mathcal{T}|}|\pi_{ie}\!\in\!X_i\!\cap\!\pi_{ie}\!\in\!\hat{X}_i|,\quad F^+_e\!=\!\sum_{i=1}^{|\mathcal{T}|}|\pi_{ie}\!\notin\!X_i\!\cap\!\pi_{ie}\!\in\!\hat{X}_i|, \\ &T^-_e\!=\!\sum_{i=1}^{|\mathcal{T}|}|\pi_{ie}\!\notin\!X_i\!\cap\!\pi_{ie}\!\notin\!\hat{X}_i|,\quad F^-_e\!=\!\sum_{i=1}^{|\mathcal{T}|}|\pi_{ie}\!\in\!X_i\!\cap\!\pi_{ie}\!\notin\!\hat{X}_i|, \end{align*} where $\mathcal{T}$ represents the testing data set, $|\mathcal{T}|$ is the number of testing samples, $\pi_{ie}$ is the caching allocation, $X_i$ is the optimal solution of the $i^{th}$ sample and $\hat{X}_i$ expresses the output of the evaluated algorithms. Based on these quantities, related metrics, such as accuracy ($A$), precision ($P$), recall ($R$) and $F_1$ score, can be calculated with \begin{align*} &A(T^+_e,F^+_e,T^-_e,F^-_e)=\frac{T^+_e+T^-_e}{T^+_e+F^+_e+T^-_e+F^-_e}, \quad P(T^+_e,F^+_e,T^-_e,F^-_e)=\frac{T^+_e}{T^+_e+F^+_e},\\ &R(T^+_e,F^+_e,T^-_e,F^-_e)=\frac{T^+_e}{T^+_e+F^-_e}, \quad F_1(T^+_e,F^+_e,T^-_e,F^-_e)=\frac{2\times T^+_e}{2\times T^+_e+F^-_e+F^+_e}. \end{align*} Let $f$ be one of the four metrics, i.e., $A$, $P$, $R$ or $F_1$. The macro-averaged and micro-averaged versions of $f$ are calculated as follows: \begin{equation*} \begin{aligned} f_{\textrm{macro}}=\frac{1}{|\mathcal{E}|}\sum_{e=1}^{|\mathcal{E}|}f(T^+_e,F^+_e,T^-_e,F^-_e), \quad f_{\textrm{micro}}=f(\sum_{e=1}^{|\mathcal{E}|}T^+_e,\sum_{e=1}^{|\mathcal{E}|}F^+_e,\sum_{e=1}^{|\mathcal{E}|}T^-_e,\sum_{e=1}^{|\mathcal{E}|}F^-_e). \end{aligned} \end{equation*} The macro and micro accuracy $A$ are equivalent by definition, given that $T^+_e+F^+_e+T^-_e+F^-_e=|\mathcal{T}|$. For the remaining micro metrics, we have the following theorem: \begin{theorem} \label{theo:micro} The precision, recall, and $F_1$ score in the micro-averaged version are equivalent. \end{theorem} \begin{proof} See Appendix \ref{sec:proof_B}. \end{proof} Hence, in the following evaluations, the micro accuracy is omitted, and the micro recall together with the micro $F_1$ score are represented by the micro precision. \subsection{Hyperparameters Tuning} \label{subsec:hyper} The parameters in the CNN can be divided into two categories: normal parameters, such as the neuron weights, which can be learned from the training dataset; and hyperparameters, like the depth of the CONV layers, whose values should be determined during the CNN architecture design to control the training process. These parameters affect the CNN's output and, as a result, the overall network performance. Therefore, the hyperparameters have to be tuned; a compact sketch of such a tuning loop is given below.
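For illustration only, the following sketch shows the kind of grid search used for this tuning. The candidate grids and function names are our own assumptions, and train_and_validate() stands in for training the CNN of Figure \ref{fig:CNN} and returning its validation loss \eqref{eq:loss}:
\begin{verbatim}
import random
from itertools import product

# Candidate grids (illustrative values, not the paper's full search space).
depths, batch_sizes = [1, 2, 4, 8, 16], [16, 32, 64, 128, 256]
epochs, learn_rates = [10, 30, 50], [1e-4, 1e-3, 1e-2, 1e-1, 1.0]

def train_and_validate(depth, batch, n_epochs, lr):
    """Stand-in for training the per-flow CNN and returning its
    validation loss; replaced by a dummy value so the sketch runs."""
    return random.random()

best = None
for d, b, n, lr in product(depths, batch_sizes, epochs, learn_rates):
    val_loss = train_and_validate(d, b, n, lr)
    if best is None or val_loss < best[0]:
        best = (val_loss, dict(depth=d, batch=b, epochs=n, lr=lr))
print(best)  # the paper records depth 1, batch 64, 30 epochs, lr 1e-3
\end{verbatim}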
After several rounds of grid search and by considering the tradeoff between simplicity and the loss function, the best configuration recorded is the following: CONV layer = $1$, batch size = $64$, epoch = $30$, learning rate = $10^{-3}$. In the sequel, the impact and sensitivity of each hyperparameter on the trained CNN is investigated, whilst all other hyperparameters are fixed and the metric is set to be the loss function as previously defined in \eqref{eq:loss}. \begin{figure}[htbp] \centering \subfigure[Training loss function with different depth.]{ \label{fig:depth_train} \includegraphics[width=\figSize\textwidth]{figure/depth_train.eps} } \subfigure[Validation loss function with different depth.]{ \label{fig:depth_valid} \includegraphics[width=\figSize\textwidth]{figure/depth_validate.eps} } \quad \subfigure[Training time with different depth (Iterations=$480$).]{ \label{fig:depth_time} \includegraphics[width=\figSize\textwidth]{figure/depth_time.eps} } \subfigure[Training time with different batch size (Epoch=$30$).]{ \label{fig:batch_time} \includegraphics[width=\figSize\textwidth]{figure/batch_time.eps} } \quad \subfigure[Training loss function with different batch size.]{ \label{fig:batch_train} \includegraphics[width=\figSize\textwidth]{figure/batch_train.eps} } \subfigure[Validation loss function with different batch size.]{ \label{fig:batch_valid} \includegraphics[width=\figSize\textwidth]{figure/batch_validate.eps} } \quad \subfigure[Loss function with different epochs.]{ \label{fig:epoch} \includegraphics[width=\figSize\textwidth]{figure/epoch_tv.eps} } \subfigure[Loss function with different learning rate.]{ \label{fig:learning_rate} \includegraphics[width=\figSize\textwidth]{figure/learn_rate.eps} } \caption{The effect of hyperparameters.} \label{fig:hyper} \end{figure} \subsubsection{Effect of CONV Depths} As already alluded to above, each CONV layer encompasses a convolutional layer, a normalization layer, and a ReLU layer. The effect of the number of CONV layers on training and performance is shown in Figures \ref{fig:depth_train}, \ref{fig:depth_valid}, and \ref{fig:depth_time} respectively. More complex structures do not bring obvious advantages over the single-CONV-layer structure but add computational complexity (the more CONV layers, the more training time required, as in Figure \ref{fig:depth_time}). Additionally, the loss function deteriorates when adding more layers. For instance, the curve for 16 hidden layers in the validation Figure \ref{fig:depth_valid} is above the rest. In light of the above findings, the hidden layer depth in the sequel is set to 1. \subsubsection{Effect of the Batch Size} The batch size is by definition the number of training samples used in each iteration. For a fixed number of epochs and a fixed training dataset, larger batch sizes lead to fewer iterations, which reduces the training time, as Figure \ref{fig:batch_time} shows. According to \cite{keskar2016large}, larger batch sizes tend to converge to sharp minimizers while smaller batch sizes tend to terminate at flat minimizers, and the latter have better generalization abilities. In Figures \ref{fig:batch_train} and \ref{fig:batch_valid}, we compare the performance for different batch sizes, namely sizes of 16, 32, 64, 128 and 256. As expected, the smaller batch sizes perform better in general. Specifically, around the 30-epoch instance in Figure \ref{fig:batch_valid}, the 32-batch curve outperforms the 64-batch (by less than $8.2$\textperthousand) while the others achieve a worse performance.
However, note that the training time for the 32-batch is almost double that of the 64-batch, as shown in Figure \ref{fig:batch_time}. To this end, balancing performance against running time, the batch size is chosen to be 64. \subsubsection{Effect of Epochs} Figure \ref{fig:epoch} illustrates the loss function in training and validation. Initially, the loss drops heavily as the epoch number increases. Then, the improvement slows from 10 to 30 epochs and reaches a plateau after 30 epochs. In this case, more epochs can improve the performance slightly (by less than $1.5$\textperthousand), but the price to pay is temporal cost, i.e., more than $86.1\%$ additional training time is required. Therefore, we select 30 epochs for the training. Note that overfitting could be eased by introducing the $L2$-type regularization in \eqref{eq:loss}. \subsubsection{Effect of the Learning Rate} Figure \ref{fig:learning_rate} shows the effect of the learning rate. As expected, a lower learning rate can explore the downward slope with small steps, but it takes a longer time to converge. When the learning rate is $10^{-4}$, the training process ends before convergence because the epoch number is set to 30; this can be seen as the reason why a lower learning rate under-performs. On the other hand, a larger learning rate may oscillate around the minimum point, potentially failing to converge or even diverging, as in the case of the learning rate of $1$ in Figure \ref{fig:learning_rate}. Consequently, the learning rate is set to $10^{-3}$ because it yields the lowest validation loss. \subsection{Caching Performance Comparison} \label{subsec:compa} We first compare the performance of the different schemes for the case of 5 requests, as illustrated in Table \ref{tab:5req}. The optimal performance for each testing metric is shown in a bold font. In general, all evaluated methods can produce solutions efficiently; even the branch-and-bound technique (benchmark) provides optimal solutions in around 100 ms. Therefore, we use the optimal solutions in the case of 5 requests to establish the training dataset, instead of the cases of 10, 15 or 20 requests. Benefiting from the prior knowledge learned during the training process and the simple structure (only 1 CONV layer), the pure-CNN configuration can provide the decision making in just 2 ms. However, the pure-CNN approach deals with each flow request independently and does not consider the cross correlation among these requests. Compared with the benchmark, while the pure-CNN saves approximately $98.4\%$ of the computation time, the network suffers more than $300\%$ additional total cost and a $1.4\%$ loss in constraint satisfaction. The proposed CNN-rMILP utilizes the prediction from the pure-CNN to reduce the search space. As can be seen from Table \ref{tab:5req}, the number of decision variables in CNN-rMILP is reduced to $356.98$ on average, while the original number in the benchmark is $376$. Compared with the benchmark, this reduction saves almost $71.6\%$ of the computation time, with just $0.8\%$ additional total cost. Even in the worst case, the performance gap between CNN-rMILP and the optimal solution is $0.32$, which is much smaller than that of any other algorithm. Therefore, CNN-rMILP provides highly competitive performance for the case of 5 requests. Similarly, CNN-HCLS uses the output of the pure-CNN to guide the feasible-solution search, which leads to around a $94.6\%$ running-time reduction but a $149.6\%$ cost increase.
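To make the classification metrics reported next concrete, the following toy computation (our own sketch with made-up matrices, not the paper's test harness) shows how the macro- and micro-averaged scores are derived from the optimal and predicted allocation matrices, and numerically confirms Theorem \ref{theo:micro}:
\begin{verbatim}
import numpy as np

# Toy allocations for |T| = 4 samples and |E| = 3 ECs (one-hot rows).
X_opt = np.array([[1,0,0],[0,1,0],[0,0,1],[1,0,0]])   # optimal x_ke
X_hat = np.array([[1,0,0],[0,0,1],[0,0,1],[0,1,0]])   # predicted

tp = ((X_opt == 1) & (X_hat == 1)).sum(axis=0)        # T+_e per EC
fp = ((X_opt == 0) & (X_hat == 1)).sum(axis=0)        # F+_e per EC
fn = ((X_opt == 1) & (X_hat == 0)).sum(axis=0)        # F-_e per EC

# Macro: average the per-EC scores; micro: pool the counts first.
eps = 1e-12                                           # avoid 0/0
P_macro = np.mean(tp / (tp + fp + eps))
R_macro = np.mean(tp / (tp + fn + eps))
P_micro = tp.sum() / (tp.sum() + fp.sum())
R_micro = tp.sum() / (tp.sum() + fn.sum())
F1_micro = 2 * tp.sum() / (2 * tp.sum() + fp.sum() + fn.sum())

# Because each row is one-hot, the pooled F+ equals the pooled F-,
# so the three micro scores coincide (Theorem theo:micro).
print(P_micro, R_micro, F1_micro)                     # all equal: 0.5
\end{verbatim}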
To provide a more holistic view, we tested the above schemes in terms of different nominal metrics such as accuracy, precision, recall and $F_1$ score in Table \ref{tab:5req}. Interestingly, although the pure-CNN approach performs better in the macro- and micro-averaged metrics, the greedy algorithm GCA and the proposed CNN-HCLS can provide better solutions with lower total costs. This trend also holds for the cases of 10, 15, and 20 requests. Thus, there is no strong correlation between the total cost and typical deep learning metrics, such as accuracy, precision, recall, and $F_1$. \begin{table*}[!t] \centering \caption{Performance comparison for the case of 5 requests.} \label{tab:5req} \begin{tabular}{c|c|c|c|c|c|c} \hline \multicolumn{1}{r}{} & &\textbf{Benchmark} &\textbf{pure-CNN} &\textbf{CNN-rMILP} &\textbf{CNN-HCLS} &\textbf{GCA} \\ \hline \multicolumn{2}{c|}{Mean time} & 101.7 ms & \textbf{1.6 ms} & 28.9 ms & 5.5 ms & 9.6 ms \\ \hline \multicolumn{2}{c|}{Mean total cost} & \textbf{7.41} & 31.11 & 7.47 & 18.50 & 23.76 \\ \hline \multicolumn{2}{c|}{Mean feasible ratio} & \textbf{100.00\%} & 98.56\% & \textbf{100.00\%} & 98.83\% & 98.80\% \\ \hline \multicolumn{2}{c|}{Max total cost difference} & \textbf{0} & 243.51 & 0.32 & 120.90 & 303.20 \\ \hline \multicolumn{2}{c|}{Number of decision variables} & 376.00 & - & \textbf{356.98} & - & - \\ \hline \multicolumn{2}{c|}{Macro/Micro accuracy} & \textbf{100.00\%} & 99.52\% & 99.76\% & 93.90\% & 91.35\% \\ \hline \multicolumn{1}{c|}{\multirow{3}{*}{Macro}} & precision & \textbf{100.00\%} & 65.87\% & 97.76\% & 58.09\% & 55.28\% \\ \cline{2-7} & recall & \textbf{100.00\%} & 65.87\% & 90.78\% & 60.41\% & 60.73\% \\ \cline{2-7} & $F_1$ & \textbf{100.00\%} & 65.86\% & 93.09\% & 56.74\% & 57.24\% \\ \hline \multicolumn{1}{c|}{Micro} & precision/recall/$F_1$ & \textbf{100.00\%} & 98.56\% & 99.27\% & 81.71\% & 74.06\% \\ \hline \end{tabular} \end{table*} \begin{table*}[!t] \centering \caption{Performance comparison for the case of 10 requests.} \label{tab:10req} \begin{tabular}{c|c|c|c|c|c|c} \hline \multicolumn{1}{r}{} & &\textbf{Benchmark} &\textbf{pure-CNN} &\textbf{CNN-rMILP} &\textbf{CNN-HCLS} &\textbf{GCA} \\ \hline \multicolumn{2}{c|}{Mean time} & 2842.8 ms & \textbf{3.6 ms} & 987.7 ms & 19.1 ms & 16.5 ms \\ \hline \multicolumn{2}{c|}{Mean total cost} & \textbf{22.71} & 269.65 & 23.91 & 120.55 & 166.33 \\ \hline \multicolumn{2}{c|}{Mean feasible ratio} & \textbf{100.00\%} & 83.86\% & \textbf{100.00\%} & 86.52\% & 86.12\% \\ \hline \multicolumn{2}{c|}{Max total cost difference} & \textbf{0} & 703.36 & 48.51 & 239.23 & 936.56 \\ \hline \multicolumn{2}{c|}{Number of decision variables} & 746.00 & - & \textbf{647.74} & - & - \\ \hline \multicolumn{2}{c|}{Macro/Micro accuracy} & \textbf{100.00\%} & 92.90\% & 95.22\% & 88.64\% & 89.97\% \\ \hline \multicolumn{1}{c|}{\multirow{3}{*}{Macro}} & precision & \textbf{100.00\%} & 57.01\% & 73.62\% & 55.01\% & 55.59\% \\ \cline{2-7} & recall & \textbf{100.00\%} & 56.70\% & 73.57\% & 55.71\% & 56.28\% \\ \cline{2-7} & $F_1$ & \textbf{100.00\%} & 55.59\% & 73.48\% & 54.04\% & 55.43\% \\ \hline \multicolumn{1}{c|}{Micro} & precision/recall/$F_1$ & \textbf{100.00\%} & 78.69\% & 85.67\% & 65.91\% & 69.92\% \\ \hline \end{tabular} \end{table*} \begin{table*}[!t] \centering \caption{Performance comparison for the case of 15 requests.} \label{tab:15req} \begin{tabular}{c|c|c|c|c|c|c} \hline \multicolumn{1}{r}{} & &\textbf{Benchmark} &\textbf{pure-CNN} &\textbf{CNN-rMILP} &\textbf{CNN-HCLS} &\textbf{GCA} \\ \hline
\multicolumn{2}{c|}{Mean time} & 35095.0 ms & \textbf{6.0 ms} & 7636.4 ms & 49.3 ms & 23.1 ms \\ \hline \multicolumn{2}{c|}{Mean total cost} & \textbf{62.83} & 667.05 & 67.24 & 271.45 & 366.11 \\ \hline \multicolumn{2}{c|}{Mean feasible ratio} & \textbf{100.00\%} & 67.80\% & \textbf{100.00\%} & 74.01\% & 72.96\% \\ \hline \multicolumn{2}{c|}{Max total cost difference} & \textbf{0} & 1404.95 & 128.50 & 471.16 & 1049.46 \\ \hline \multicolumn{2}{c|}{Number of decision variables} & 1116.00 & - & \textbf{883.36} & - & - \\ \hline \multicolumn{2}{c|}{Macro/Micro accuracy} & \textbf{100.00\%} & 86.16\% & 88.60\% & 83.49\% & 87.10\% \\ \hline \multicolumn{1}{c|}{\multirow{3}{*}{Macro}} & precision & \textbf{100.00\%} & 43.92\% & 59.37\% & 46.26\% & 53.21\% \\ \cline{2-7} & recall & \textbf{100.00\%} & 46.26\% & 59.56\% & 44.33\% & 53.19\% \\ \cline{2-7} & $F_1$ & \textbf{100.00\%} & 43.68\% & 59.32\% & 44.71\% & 53.02\% \\ \hline \multicolumn{1}{c|}{Micro} & precision/recall/$F_1$ & \textbf{100.00\%} & 58.47\% & 65.79\% & 50.48\% & 61.30\% \\ \hline \end{tabular} \end{table*} \begin{table*}[!t] \centering \caption{Performance comparison for the case of 20 requests.} \label{tab:20req} \begin{tabular}{c|c|c|c|c|c|c} \hline \multicolumn{1}{r}{} & &\textbf{Benchmark} &\textbf{pure-CNN} &\textbf{CNN-rMILP} &\textbf{CNN-HCLS} &\textbf{GCA} \\ \hline \multicolumn{2}{c|}{Mean time} & 41.04 min & \textbf{10.1 ms} & 4.91 min & 103.7 ms & 29.7 ms \\ \hline \multicolumn{2}{c|}{Mean total cost} & \textbf{111.17} & 1160.65 & 127.48 & 461.10 & 567.68 \\ \hline \multicolumn{2}{c|}{Mean feasible ratio} & \textbf{100.00\%} & 55.68\% & \textbf{100.00\%} & 64.51\% & 64.00\% \\ \hline \multicolumn{2}{c|}{Max total cost difference} & \textbf{0} & 2023.86 & 527.45 & 946.13 & 1268.07 \\ \hline \multicolumn{2}{c|}{Number of decision variables} & 1486.00 & - & \textbf{1110.88} & - & - \\ \hline \multicolumn{2}{c|}{Macro/Micro accuracy} & \textbf{100.00\%} & 82.63\% & 83.76\% & 80.74\% & 85.48\% \\ \hline \multicolumn{1}{c|}{\multirow{3}{*}{Macro}} & precision & \textbf{100.00\%} & 37.68\% & 49.33\% & 40.71\% & 53.48\% \\ \cline{2-7} & recall & \textbf{100.00\%} & 42.18\% & 48.88\% & 39.60\% & 52.92\% \\ \cline{2-7} & $F_1$ & \textbf{100.00\%} & 37.44\% & 48.93\% & 39.92\% & 53.13\% \\ \hline \multicolumn{1}{c|}{Micro} & precision/recall/$F_1$ & \textbf{100.00\%} & 47.89\% & 51.29\% & 42.23\% & 56.45\% \\ \hline \end{tabular} \end{table*} Tables \ref{tab:10req}, \ref{tab:15req}, and \ref{tab:20req} show the different performance metrics for the cases of 10, 15, and 20 requests respectively. Generally, the performance of all evaluated algorithms deteriorates as the number of requests increases. Although the benchmark provides the minimum cost in all cases, its computation time increases rapidly. As shown in Table \ref{tab:10req}, the computational time for the optimal decision making is less than $3$ seconds in the case of 10 requests, and it then reaches around $41.04$ minutes on average for the case of 20 requests in Table \ref{tab:20req}. Regarding the pure-CNN scheme, on the one hand, it is the fastest method among the five evaluated algorithms, with an average decision time of just $10.1$ ms even in the case of 20 requests; on the other hand, it performs worse than GCA, with a total cost roughly double that of GCA. Moreover, nearly half of the flow requests cannot be served by the allocation from pure-CNN, as the feasible ratio is $55.68\%$ in Table \ref{tab:20req}.
The performance of the proposed CNN-rMILP and CNN-HCLS schemes also becomes progressively worse as the number of user requests increases. With regard to CNN-rMILP, the mean total cost gap with the benchmark increases from $5.0\%$ (the case of 10 requests) to $14.7\%$ (the case of 20 requests), and the computation time increases from around $1$ second to $5$ minutes. For CNN-HCLS, under the case of 20 requests, it can provide decision making in $103.7$ ms at the expense of more than $300\%$ additional total cost. In summary, compared with the benchmark, both CNN-rMILP and CNN-HCLS make a tradeoff between processing time and overall cost. In particular, CNN-rMILP is suitable for cases in which pseudo-real-time processing is allowed so as to provide high-quality decision making, whereas CNN-HCLS is amenable to real-time decision making when a performance gap can be tolerated. \section{System Model and Problem Formulation} \label{sec:model} We model a mobile network as an undirected graph $\mathcal{G}=\{\mathcal{V},\mathcal{L}\}$ having a set of vertices $\mathcal{V}$ and a set of links $\mathcal{L}$, as shown in Figure \ref{fig:system}. We assume that the vertices consist of a set $\mathcal{A}$ of access points or access routers\footnote{Here the term \textit{router} is not the core router in the Internet backbone but the Information-Centric Networking (ICN) router~\cite{xylomenos2013survey}.} (ARs) and a set $\mathcal{R}$ of general routers, where the former provide network connectivity and the latter supply traffic flow forwarding. Meanwhile, some of the access and general routers have caching capacity to host the users' contents of interest. We call these routers content routers or edge clouds (ECs)\footnote{The terms \textit{edge cloud} and \textit{content router} will be used interchangeably.}. We define $\mathcal{E}\subset\mathcal{A}\cup\mathcal{R}$ as the set of ECs. In this model, the end-users are mobile and, thus, their serving ARs will change over time. Here we assume that the mobility pattern of each user is obtainable, since the user moving probability is predictable by exploring historical data~\cite{chen2017caching}. \begin{figure}[t] \centering \includegraphics[trim=0mm 0mm 0mm 0mm, clip, width=.8\textwidth]{figure/system.eps} \caption{Illustration of our caching system model.} \label{fig:system} \end{figure} We then define $w_e$ as the caching storage capacity of each content router $e\in\mathcal{E}$ and $c_l$ as the bandwidth of each communication link $l\in\mathcal{L}$. Here, without loss of generality, the salient assumption is that each user requests only one flow\footnote{The symbol $k$ represents both the end-user and the associated request flow.} $k$ and the set of traffic flows transmitted through the network is given by $\mathcal{K}$. Let $s_k$ be the required data size of user $k$; $b_k$ be the required transmission rate of user $k$; and $p_{ka}$ be the probability of user $k$ connecting to AR $a\in\mathcal{A}$. In order to measure the transmission cost, we define $N_{ae}$ as the number of hops from AR $a$ to a content hosting router $e$, where $N_{ae}$ is obtained via the shortest path (unambiguously, $N_{ae}=0$ holds when $a=e$); a minimal sketch of this computation is given below. $N^T$ is the number of hops from an AR to the central data server. According to \cite{van2014performance}, from a given source to a given destination, there are 10 to 15 hops on average for each user's request. For simplicity, $N^T$ is considered to be a constant equal to the mean number of hops from the AR to the data server.
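For concreteness, the hop counts $N_{ae}$ can be precomputed from the graph $\mathcal{G}$ as sketched next (our own illustration using breadth-first search; on an unweighted topology BFS yields the same hop counts as Dijkstra's algorithm, and all names are illustrative):
\begin{verbatim}
from collections import deque

def hop_counts(adj, a):
    """BFS from AR a over the undirected graph adj (node -> neighbors);
    returns N_ae, the shortest-path hop count from a to every node e."""
    dist = {a: 0}
    queue = deque([a])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:          # first visit = fewest hops
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

# Toy topology: AR 'a1' attached to router 'r1', ECs 'e1' and 'e2'.
adj = {'a1': ['r1'], 'r1': ['a1', 'e1', 'e2'],
       'e1': ['r1'], 'e2': ['r1']}
print(hop_counts(adj, 'a1'))  # {'a1': 0, 'r1': 1, 'e1': 2, 'e2': 2}
\end{verbatim}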
Our notation is summarized in Table \ref{tab:Notations}. \begin{table}[t] \centering \caption{Summary of main notations.} \begin{tabular}{c|l} \hline \hline $\mathcal{K}$ & set of flows/requests from users\\ $\mathcal{L}$ & set of transmission links \\ $\mathcal{A}$ & set of ARs \\ $\mathcal{E}$ & set of ECs \\ $|\mathcal{X}|$ & cardinality of set $\mathcal{X}$ \\ \hline $\alpha$ & impact factor of the caching cost\\ $\beta$ & impact factor of the transmission cost\\ $N_{ae}$ & number of hops from AR $a$ to EC $e$\\ $B_{lae}$ & relation between link $l$ and the shortest path from $a$ to $e$\\ $N^T$ & number of hops from an AR to the data center\\ \hline $p_{ka}$ & probability of flow $k$ connecting with AR $a$\\ $s_k$ & required caching size of flow $k$\\ $b_k$ & required bandwidth of flow $k$\\ $w_e$ & available space on EC $e$\\ $c_l$ & available link capacity on link $l$\\ \hline \hline \end{tabular} \label{tab:Notations} \end{table} \subsection{Problem Formulation} We consider two types of cost: a content caching cost $C^C$ and a content transmission cost $C^T$. From~\cite{vasilakos2012proactive}, we have: \begin{equation} \label{fml:cc} C^C(x_{ke})=\sum_{k\in\mathcal{K}}\sum_{e\in\mathcal{E}}\frac{x_{ke}}{1-\sum_{k\in\mathcal{K}}s_k\!\cdot\! x_{ke}/w_e}, \end{equation} where $x_{ke}$ is a binary decision variable with $x_{ke}=1$ indicating that the content for flow $k$ is cached at EC $e$; otherwise, $x_{ke}=0$. Therefore, $\sum_{k\in\mathcal{K}}s_k\!\cdot\! x_{ke}/w_e$ represents the storage utilization of EC $e$. From formula \eqref{fml:cc}, higher storage utilization leads to a higher content caching cost and, thus, $C^C$ can be used for load balancing. Next, we define the transmission cost $C^T$, which depends on the numbers of traversal hops $N_{ae}$ and $N^T$: \begin{equation} \begin{aligned} \label{eq:trans} C^T(z_{kae})=\sum_{k\in\mathcal{K}}\sum_{a\in\mathcal{A}}\sum_{e\in\mathcal{E}} p_{ka}z_{kae}N_{ae}+\sum_{k\in\mathcal{K}}(1\!-\!\sum_{a\in\mathcal{A}}\sum_{e\in\mathcal{E}}p_{ka}z_{kae})N^T, \end{aligned} \end{equation} where $z_{kae}$ is a binary decision variable with $z_{kae}=1$ indicating that flow $k$ connects with AR $a$ and receives the requested content from EC $e$ through the shortest path; otherwise, $z_{kae}=0$. Therefore, $p_{ka}z_{kae}$ represents the probability that flow $k$ uses the shortest path between AR $a$ and EC $e$, and $\sum_{a\in\mathcal{A}}\sum_{e\in\mathcal{E}}p_{ka}z_{kae}$ evaluates the probability that a cache hit happens for flow $k$. Hence, $\sum_{k\in\mathcal{K}}\sum_{a\in\mathcal{A}}\sum_{e\in\mathcal{E}} p_{ka}z_{kae}N_{ae}$ is the expected number of transmission hops in the cache-hit case. Similarly, $(1\!-\!\sum_{a\in\mathcal{A}}\sum_{e\in\mathcal{E}}p_{ka}z_{kae})$ captures the probability of a cache miss, and $\sum_{k\in\mathcal{K}}(1\!-\!\sum_{a\in\mathcal{A}}\sum_{e\in\mathcal{E}}p_{ka}z_{kae})N^T$ is the corresponding expected number of transmission hops. Our goal is to minimize the total cost function $J$ given by: \begin{equation} J(x_{ke},z_{kae})=\alpha\cdot C^C+\beta\cdot C^T, \end{equation} where $\alpha$ and $\beta$ are the weights for the caching cost $C^C$ and the transmission cost $C^T$, respectively.
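To make the cost structure concrete, the following sketch (our own illustration in Python; the nested-dictionary data layout is an assumption, not part of the proposed framework) evaluates $C^C$, $C^T$, and the total cost $J$ for a given assignment $(x_{ke}, z_{kae})$:
\begin{verbatim}
# Sketch: evaluating the caching cost C^C (Eq. (1)), the transmission
# cost C^T (Eq. (2)), and the total cost J for a given assignment.
# The storage constraint sum_k s_k x_ke < w_e is assumed to hold, so
# the denominator below stays positive.

def caching_cost(x, s, w):
    # x[k][e] in {0,1}; s[k] = data size of flow k; w[e] = space on EC e
    total = 0.0
    for e in w:
        util = sum(s[k] * x[k][e] for k in s) / w[e]  # storage utilization
        total += sum(x[k][e] for k in s) / (1.0 - util)
    return total

def transmission_cost(z, p, N, N_T):
    # z[k][a][e] in {0,1}; p[k][a] = Pr[flow k attaches to AR a];
    # N[a][e] = hops from AR a to EC e; N_T = hops to the data center
    total = 0.0
    for k in p:
        hit_hops = sum(p[k][a] * z[k][a][e] * N[a][e]
                       for a in p[k] for e in N[a])
        p_hit = sum(p[k][a] * z[k][a][e] for a in p[k] for e in N[a])
        total += hit_hops + (1.0 - p_hit) * N_T  # hit + miss terms
    return total

def total_cost(x, z, s, w, p, N, N_T, alpha, beta):
    return (alpha * caching_cost(x, s, w)
            + beta * transmission_cost(z, p, N, N_T))
\end{verbatim}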
We can now formally pose the proactive edge caching problem as follows: \begin{subequations} \label{LP:main} \begin{align} \label{LP:obj} &\mathop{\min}\; J(x_{ke},z_{kae})\\ \textrm{s.t.}\quad \label{LP:con1} & \sum_{e\in\mathcal{E}} x_{ke}\!=\!1, \forall k\!\in\!\mathcal{K}, \\ \label{LP:con2} & \sum_{k\in\mathcal{K}} s_k\!\cdot\!x_{ke}\!<\!w_e, \forall e\!\in\!\mathcal{E},\\ \label{LP:con3} & \sum_{k\in\mathcal{K}} b_k\!\cdot\!y_{kl}\!<\!c_l,\forall l\in\mathcal{L}, \\ \label{LP:con4} & \sum_{e\in\mathcal{E}} z_{kae}\!\leq\!1, \forall k\!\in\!\mathcal{K},a\!\in\!\mathcal{A}, \\ \label{LP:con5} & z_{kae}\!\leq\!x_{ke}, \forall k\!\in\!\mathcal{K},a\!\in\!\mathcal{A},e\!\in\!\mathcal{E},\\ \label{LP:con6} & y_{kl}\!\leq\!\sum_{a\in\mathcal{A}}\sum_{e\in\mathcal{E}} B_{lae}\!\cdot\!z_{kae}, \forall k\!\in\!\mathcal{K},l\!\in\!\mathcal{L}, \\ \label{LP:con7} & M\!\cdot\!y_{kl}\!\geq\!\sum_{a\in\mathcal{A}}\sum_{e\in\mathcal{E}} B_{lae}\!\cdot\!z_{kae}, \forall k\!\in\!\mathcal{K},l\!\in\!\mathcal{L},\\ \label{LP:con8} & x_{ke},y_{kl},z_{kae}\!\in\!\{0,1\},\forall k\!\in\!\mathcal{K},l\!\in\!\mathcal{L},a\!\in\!\mathcal{A},e\!\in\!\mathcal{E}. \end{align} \end{subequations} In problem \eqref{LP:main}, $M$ is a sufficiently large number. Constraint \eqref{LP:con1} means that each flow will be served by exactly one EC. \eqref{LP:con2} and \eqref{LP:con3} capture the storage capacity of each EC and the bandwidth of each individual link, respectively. Here, $y_{kl}$ is a binary decision variable with $y_{kl}=1$ implying that flow $k$ traverses link $l$; otherwise $y_{kl}=0$. \eqref{LP:con4} indicates that the redirected path is unique, and \eqref{LP:con5} enforces that an EC must be selected as a caching host if any flow retrieves cached content from it. Furthermore, the next two constraints \eqref{LP:con6} and \eqref{LP:con7} mean that a link is selected for transmission if and only if it lies on the path between AR $a$ and EC $e$ chosen by the decision variable $z_{kae}$. $B_{lae}$ describes the relationship between a link and a retrieval path, and is defined based on the network topology: $$B_{lae}= \begin{cases} 1, &\text{link $l$ belongs to the shortest path between AR $a$ and EC $e$,} \\ 0, &\text{otherwise.} \end{cases}$$ \subsection{Model Linearization} The denominator in \eqref{fml:cc} contains the decision variable $x_{ke}$ and, hence, introduces a non-linear term into the objective function of \eqref{LP:main}. To linearize it, we define a new variable: \begin{equation} \label{def:t_e} t_e=\frac{1}{1-\sum_{k\in\mathcal{K}}s_k\!\cdot\!x_{ke}/w_e}, \forall e\in\mathcal{E}. \end{equation} This definition of $t_e$ can be converted to the following constraints: \begin{subequations} \begin{align} \label{con:t_e1} &t_e-\sum_{k\in\mathcal{K}}\frac{s_k}{w_e}\!\cdot\!x_{ke}t_e=1, \forall e\in\mathcal{E},\\ \label{con:t_e2} &t_e>0, \forall e\in\mathcal{E}. \end{align} \end{subequations} Then, the caching cost $C^C$ becomes \begin{equation} \label{fml:cc2} C^C(x_{ke},t_e)=\sum_{k\in\mathcal{K}}\sum_{e\in\mathcal{E}}x_{ke}t_e. \end{equation} Both \eqref{con:t_e1} and \eqref{fml:cc2} contain a product of two decision variables (i.e.
$x_{ke}t_e$), so we introduce an auxiliary decision variable $\chi_{ke}$, as follows: \begin{equation} \chi_{ke}=x_{ke}t_e= \begin{cases} t_e,&\text{if $x_{ke}=1$,}\\ 0,&\text{otherwise.} \end{cases} \end{equation} Clearly, $\chi_{ke}$ is the product of a continuous variable ($t_e$) and a binary variable ($x_{ke}$), and, thus, it must satisfy the following constraints: \begin{subequations} \begin{align} \label{LP:con9} & \chi_{ke}\!\leq\!t_e,\forall k\!\in\!\mathcal{K},e\!\in\!\mathcal{E},\\ \label{LP:con10} & \chi_{ke}\!\leq\!M\!\cdot\!x_{ke},\forall k\!\in\!\mathcal{K},e\!\in\!\mathcal{E},\\ \label{LP:con11} & \chi_{ke}\!\geq\!M\!\cdot\!(x_{ke}-1)\!+\!t_e,\forall k\!\in\!\mathcal{K},e\!\in\!\mathcal{E}. \end{align} \end{subequations} As defined above, $M$ is a sufficiently large number. Therefore, \eqref{con:t_e1} and \eqref{fml:cc2} can be rewritten in terms of $\chi_{ke}$ as follows: \begin{equation} \label{con:t_e} t_e-\sum_{k\in\mathcal{K}}\frac{s_k}{w_e}\chi_{ke}=1,\forall e\in\mathcal{E}, \end{equation} \begin{equation} \label{fml:cc3} C^C(\chi_{ke})=\sum_{k\in\mathcal{K}}\sum_{e\in\mathcal{E}}\chi_{ke}. \end{equation} Finally, we can write the following MILP model: \begin{subequations} \label{LP:main_MILP} \begin{align} \label{LP:obj_MILP} &\mathop{\min}\; J(\chi_{ke},z_{kae})\\ \textrm{s.t.}\quad &\eqref{LP:con1}\sim\eqref{LP:con8},\eqref{LP:con9}\sim\eqref{LP:con11},\eqref{con:t_e},\nonumber \\ \label{LP:con12} & t_e\!>\!0,\ \chi_{ke}\!\geq\!0,\forall k\!\in\!\mathcal{K}, e\!\in\!\mathcal{E}. \end{align} \end{subequations} Regarding the time complexity of the aforementioned model, we have the following theorem: \begin{theorem} \label{theo:np_hard} The MILP model \eqref{LP:main_MILP} falls into the family of $NP$-hard problems. \end{theorem} \begin{proof} See Appendix \ref{sec:proof_A}. \end{proof} Therefore, as the number of requests increases, solving \eqref{LP:main_MILP} becomes quite time-consuming. In order to accelerate the solving process and overcome its $NP$-hard nature, we propose a framework which merges the optimization model with a deep learning method in the next section. \section{Introduction} \label{sec:introduction} Caching popular contents at the edge of a wireless network has emerged as an effective technique to alleviate congestion and data traffic over the backhaul and fronthaul of existing wireless systems~\cite{SurveyCaching}. Caching methods can be classified into three categories~\cite{vasilakos2012proactive}: reactive caching~\cite{sourlas2010mobility}, durable subscription~\cite{farooq2004performance}, and proactive caching~\cite{gaddah2010extending}. In reactive caching, the connected access point keeps caching items that match the user's subscription after the user disconnects from the network. When the user reconnects from a different point later in time, the caching items are retrieved from the previous access point. Reactive caching thus incurs a re-transmission delay from the old access point to the new one. In durable subscription, both the currently connected access point and all proxies within a domain that a user can possibly connect to should keep caching the content. Therefore, durable subscription reduces the transmission delay but significantly increases memory usage. Compared with those two policies, proactive caching selects a subset of proxies as potential caching hosts, thus striking a balance between storage cost and transmission latency.
However, enabling proactive caching in real networks faces four key challenges~\cite{wang2020survey}: where to cache, what to cache, cache dimensioning, and content delivery. In this paper, the primary focus is on the ``where to cache'' problem, i.e., determining caching locations. Proactive caching techniques have attracted significant attention in prior art~\cite{yang2018cache,yang2020joint,wang2019proactive,zheng2016optimal,fang2015energy,zou2019joint}. Conventionally, the ``where to cache'' problem is modeled as an optimization problem that is then solved using convex optimization~\cite{yang2018cache,yang2020joint}, mixed integer linear programming (MILP)~\cite{wang2019proactive,zheng2016optimal}, and game theory~\cite{fang2015energy,zou2019joint}. However, the underlying problems, particularly the MILP formulations, can be $NP$-hard. Due to the curse of dimensionality, such optimal model-based methods are not suitable for supporting real-time decision making on caching locations. Recently, deep learning (DL) has emerged as an important tool for solving caching problems, as discussed in~\cite{lei2019learning,lei2017deep,chen2017caching,tsai2018caching,ale2019online,fan2021pa,qian2020reinforcement,he2017integrated,li2019deep,zhong2020deep}. In~\cite{lei2019learning}, the authors propose the use of a convolutional neural network (CNN) to perform time slot allocation for content delivery at wireless base stations. The study in~\cite{lei2017deep} employs a fully-connected neural network (FNN) to simplify the search space of a caching optimization model and, then, determine caching base stations. In~\cite{chen2017caching}, the authors introduce a conceptor-based echo state network (ESN) to learn the users' split patterns independently, which results in a more accurate prediction. The authors in~\cite{tsai2018caching} propose a long short-term memory (LSTM) model to analyse and extract information from 2016 U.S. election tweets, thus providing support for content caching. The authors in~\cite{ale2019online} propose a stacked DL structure, which consists of a CNN, a bidirectional LSTM and an FNN, for online content popularity prediction and cache placement in the mobile network. In~\cite{fan2021pa}, a gated recurrent unit (GRU) model is proposed to predict time-variant video popularity for cache eviction. Apart from the aforementioned supervised learning methods, a number of works have used deep reinforcement learning (DRL) for content caching. For instance, the authors in~\cite{qian2020reinforcement} propose a joint caching and push policy for mobile edge computing networks using a deep Q network (DQN). In~\cite{he2017integrated}, the authors employ a double dueling DQN for dynamic orchestration of networking, caching and computing in vehicle networks. Moreover, actor-critic models are proposed in~\cite{li2019deep} and~\cite{zhong2020deep} to decide content caching in wireless networks. However, most existing DL works on caching, such as~\cite{lei2019learning,lei2017deep,chen2017caching,tsai2018caching,ale2019online} and~\cite{qian2020reinforcement,he2017integrated,li2019deep,zhong2020deep}, have considered a two-tier heterogeneous network structure, in which the caching content is available within one or two transmission hops. Therefore, these works may not scale well to flow routing decisions in multi-hop environments. The study in~\cite{fan2021pa} focuses on the caching policy of a single node, which may result in insufficient utilization of the network caching resources.
Additionally, although well-trained DL models can provide competitive solutions compared with conventional heuristics, the training process is very time-consuming and struggles to converge, particularly for the LSTMs in~\cite{tsai2018caching,ale2019online} and the DRL schemes in~\cite{qian2020reinforcement,he2017integrated,li2019deep,zhong2020deep}. In this paper, the main contribution is to propose a framework merging a MILP model with a data-driven approach to determine caching locations. The proposed MILP model jointly considers the caching cost and the multi-hop transmission cost in a mobile network, with constraints on node storage capacity and link bandwidth. Since the MILP model shares some properties with the image recognition task (more details are presented in Section~\ref{subsubsec:CNN}), we provide a novel approach to transform the MILP model into a grayscale image. To the best of our knowledge, beyond our previous work in~\cite{wang2020caching}, no work has used this method for image transformation. Recently, CNNs have been widely used in computer vision tasks, and the study in~\cite{sze2017efficient} shows that a CNN can exceed human-level accuracy in image recognition. Therefore, we consider the CNN as the data-driven approach to extract spatial features from the input images. In order to accelerate CNN training, we decompose the MILP model into a number of independent sub-problems, and then train CNNs in parallel on optimal solutions. As aforementioned, the caching assignment problem is $NP$-hard, and, thus, calculating optimal solutions can be time-consuming, especially when dealing with large search-space instances. Note that for $NP$-hard problems, many small to medium search-space instances can be solved efficiently~\cite{nowak2018revised}. Therefore, we solve small instances of the caching assignment problem to train CNNs, and then use the trained CNNs to predict caching locations for medium and large instances. To further improve the performance, two algorithms are provided: a reduced MILP model using the CNNs' outputs as an extra constraint, and a hill-climbing local search algorithm using the CNNs' outputs as search directions. Unlike the work in~\cite{lei2019learning} and~\cite{lei2017deep}, which requires the number of requests to exactly match the input layer of the trained CNN, we propose an augmenting-allocations algorithm to deal with the case of excessive requests. We show via numerical results that the proposed framework can reduce computation time by $71.6\%$ with only $0.8\%$ additional cost compared with the optimal solution. Furthermore, the impact of hyperparameters on the CNN's performance is also presented. To this end, the key contributions of this paper can be summarized as follows. \begin{itemize} \item We propose a framework which merges the MILP model with CNNs to solve the ``where to cache'' problem. We also provide a novel method that transforms the MILP model into a grayscale image to train the CNNs. \item For simple design and parallel training purposes, we decompose the MILP model and train CNNs correspondingly to predict content caching locations. \item To further improve the performance, we propose two algorithms which perform global and local search based on the CNNs' predictions. \item To deal with the case of excessive requests, we give an augmenting-allocations algorithm that divides the input image into partitions matching the CNN's input size. \end{itemize} The rest of the paper is organized as follows. Section \ref{sec:model} presents the system model for proactive caching.
In Section \ref{sec:DNN}, we discuss the transformation of the MILP model into a grayscale image, as well as the training and testing process of the proposed CNN framework. Section \ref{sec:investigations} then provides a performance evaluation, and Section \ref{sec:conclusions} concludes the paper and outlines future work.
\section{Introduction} \label{sec:I} Neutron stars are among the most suitable environments for examining physics under extreme conditions. The density inside the star can be much higher than the nuclear saturation density, $\varepsilon_0=2.7\times 10^{14}$ g/cm$^3$, and its gravitational binding energy becomes very large, with compactness $M/R\simeq 0.2$, where $M$ and $R$ are the stellar mass and radius \citep{NS}. Via observations of phenomena associated with compact objects, one might constrain not only the equation of state (EOS) and nuclear properties (e.g., \citet{AK1996,STM2001,SH2003,SKH2004,PA2011,DGKK2013,SNIO2012,SNIO2013a,SNIO2013b}), but also the theory of gravity itself (e.g., \citet{SK2004,SK2005,S2009a,S2009b,YYT2012,S2014a,S2014b,S2014c}). In addition, the observations of pulsars tell us that neutron stars generally have strong magnetic fields, of order $\sim 10^{12}-10^{13}$ Gauss \citep{Pulsar}. Furthermore, the existence of another class of neutron stars, the so-called magnetars, is also suggested observationally \citep{DT1992,TD1993,TD1996}. The surface magnetic field of magnetars can reach as large as $10^{14}-10^{15}$ Gauss, which is determined through measurements of the rotational period and spin-down rate of soft gamma repeaters and anomalous X-ray pulsars \citep{K1998,H1999,M1999}. According to the population statistics of soft gamma repeaters, as much as $\sim 10\%$ of the neutron stars produced via supernovae are expected to be magnetars \citep{K1998}. The origin of such a strong magnetic field in neutron stars is still uncertain. Perhaps the simplest model invokes the fossil magnetic field of the progenitor star: a small magnetic field in the progenitor would be amplified during gravitational collapse under conservation of magnetic flux, producing the strong magnetic field of the neutron star \citep{C1992}. Unfortunately, this hypothesis is difficult to accept for magnetars, because the stellar radius at the canonical mass of $M\approx 1.4M_\odot$ would have to be less than the Schwarzschild radius, defined by $R_{\rm Sch} = 2GM/c^2$, to produce a surface magnetic field as strong as $\sim 10^{15}$ Gauss \citep{tatsumi00}. Another possible generation mechanism is the magnetohydrodynamic dynamo, i.e., the rapid rotation of a protoneutron star with a rotational period shorter than 3 ms may amplify a seed magnetic field up to $\sim 10^{15}$ Gauss \citep{DT1992,TD1993}. This scenario is also disfavored by observations of supernova remnants associated with the magnetar candidates \citep{VK2006}. Additionally, the possibility of ferromagnetism of the quark liquid inside the neutron star has been suggested as an origin of the strong magnetic field \citep{tatsumi00}. Conceivably, the hint for solving the open question of the origin of the strong magnetic field of neutron stars might lie in the magnetized properties of the core region. Furthermore, the exact structure of neutron stars is still unclear, because the EOS of neutron star matter, especially in the high-density region, is not yet fixed. Even so, it is believed that, under the surface ocean composed of molten metal, the neutron-rich nuclei form a lattice structure through the Coulomb interaction, in what is usually called the crust region \citep{LP2004}. As the energy density increases up to $\sim \varepsilon_0$, the nuclei in the crust region melt into uniform matter. This region corresponds to the core of neutron stars.
Moreover, non-hadronic matter, such as quarks, might appear in the deeper part of the core region, depending on the theoretical model of neutron star matter \citep{NS}. In particular, a neutron star containing quark matter is referred to as a hybrid star. Inside the hybrid star, one only has to consider three-flavor quark matter composed of $u$, $d$, and $s$ quarks, because more massive quarks cannot be produced at the typical energy densities of neutron stars. Since quark matter makes the EOS soft, the mass of a hybrid star is usually expected to be small. In practice, most of the hybrid star models proposed so far have difficulty reaching the observed maximum mass, which is about two solar masses \citep{D2010,A2013}, or quark matter occupies only a tiny region even when massive hybrid stars can be constructed. Along with the origin of the magnetic field of neutron stars, the strength of the magnetic field inside the star is also uncertain. The dipole magnetic field must dominate outside the star, while the magnetic configuration inside the star may be more complex due to magnetic instabilities. According to the virial theorem, the maximum magnetic field for a neutron star with $R\simeq 10$ km and $M\simeq 1.4M_\odot$ could be of the order of $\sim 10^{18}$ Gauss \citep{LS1991}. On the other hand, the maximum magnetic field in the quark phase might reach $\sim 10^{20}$ Gauss, adopting the condition that the magnetic energy density should not exceed the energy density of the self-bound quark matter \citep{Ferrer2010}. If the magnetic field inside the star is indeed so strong, one has to take into account its effect on neutron star matter, where quantum effects such as the Landau levels may play an important role in determining the stellar configuration. So far, there is only a little literature on the macroscopic structure of hybrid stars under magnetic fields \citep{Rabhi2009,CCM2014}. Considering the effects of the magnetic field in the hadronic and quark phases, we find two important differences. One is the population of charged particles: neutrons dominate over charged particles in the hadronic phase, while all the quarks carry net electric charge. The other is the difference in particle masses: baryons are much heavier than quarks. These two features suggest that the effect of the magnetic field should be more important in quark matter. In this paper, we consider the Landau levels in the quark phase of hybrid stars. We derive a critical magnetic field $B_c$ for a given density, above which only the lowest Landau level is occupied, and find that the EOS reaches the causal limit of stiffness for $B>B_c$, independent of the magnetic field. In addition, we especially examine how massive a hybrid star can become with the effect of the strong magnetic field. We demonstrate that the EOS becomes {\it stiff} as a result of the hadron-quark phase transition in the strong magnetic field. Then, we find that a hybrid star whose mass is more than two solar masses can have a very large quark core, i.e., a ``{\it hybrid quark star}". Unless otherwise noted, we adopt geometric units with $c=G=1$, where $c$ and $G$ denote the speed of light and the gravitational constant, respectively.
\section{Effects of Magnetic fields} \label{sec:II} Assuming a uniform magnetic field along the $z$ axis, the $n$-th energy level of a quark with flavor $f$ in a strong magnetic field is given by \begin{equation} E_n^f = \sqrt{c^4m_f^2 + c^2p_z^2 + \hbar c |e_fB|[2n+1+{\rm sgn}(e_f B)s]}, \label{eq:En} \end{equation} where $m_f$, $\hbar$, $B$, and $s$ denote the particle mass, the Planck constant, the magnetic field strength, and the spin degree of freedom, respectively, while $e_f=(2e/3,-e/3,-e/3)$ with the electron charge $e$. As shown later, the lowest Landau level (LLL) plays the primary role in our discussion. When quark matter settles only in the LLL, the quark number density $n_{f}$ is given by \begin{equation} n_{f} = \frac{3|e_fB|}{2\pi^2 \hbar^2 c}p_{f\rm F}, \label{eq:nb} \end{equation} where $p_{f\rm F}$ denotes the Fermi momentum of the quark. Since the typical values of the number density and magnetic field are $n_f\simeq 3n_0$ with the saturation density $n_0=0.16$ fm$^{-3}$ and $B\simeq 10^{18}$ Gauss in this paper, the Fermi momentum of the quark can be estimated as $p_{f\rm F}\simeq 4\times |e/e_f|$ GeV, which is much larger than $m_fc$. Then, the mass term in Eq. (\ref{eq:En}) does not significantly contribute to the energy level, $E_n$. In this case, the energy density of the quark, $\varepsilon_f$, is \begin{equation} \varepsilon_f = \frac{3|e_fB|}{4\pi^2\hbar^2}p_{f\rm F}^2, \label{eq:epsilon} \end{equation} and quark matter becomes flavor symmetric, i.e., $n_u\simeq n_d\simeq n_s$, which leads to $2p_{u\rm F}\simeq p_{d\rm F}\simeq p_{s\rm F}$. We remark that the baryon number density $n_{\rm b}$ is given by $n_{\rm b}=(n_u+n_d+n_s)/3$. Then, the total energy density $\varepsilon$ is defined by $\varepsilon = \varepsilon_u+\varepsilon_d+\varepsilon_s + {\cal B}$ within the MIT bag model, where ${\cal B}$ denotes the bag constant. From Eqs. (\ref{eq:nb}) and (\ref{eq:epsilon}), $\varepsilon$ is \begin{equation} \varepsilon = \frac{5\pi^2 \hbar^2 c^2}{2eB}n_{\rm b}^2 + {\cal B}, \end{equation} while the pressure, $P$, is calculated by \begin{equation} P=n_{\rm b}^2\frac{\partial (\varepsilon/n_{\rm b})}{\partial n_{\rm b}}. \end{equation} As a result, one can express $P$ as a function of $n_{\rm b}$, such as \begin{eqnarray} P &=& \frac{5\pi^2\hbar^2c^2}{2eB}n_{\rm b}^2 - {\cal B} \nonumber \\ &=& 1.31 \times 10^{35}\ {\rm erg/cm}^3 \times \left(\frac{n_{\rm b}}{n_0}\right)^2 {B_{19}}^{-1} - {\cal B} \nonumber \\ &=& 82.0\ {\rm MeV/fm}^3 \times \left(\frac{n_{\rm b}}{n_0}\right)^2 {B_{19}}^{-1} - {\cal B}, \label{eq:P} \end{eqnarray} where $B_{19}$ is defined by $B_{19}=B/(10^{19}$ Gauss). Although the pressure expressed by Eq. (\ref{eq:P}) depends on $B$ as well as on the baryon number density, we find that the relation between $P$ and $\varepsilon$ becomes $P=\varepsilon - 2{\cal B}$, independently of the magnetic field strength. From this relationship, the adiabatic speed of sound, $c_{\rm s}=(dP/d\varepsilon)^{1/2}$, becomes equal to the speed of light $c$. Thus, this corresponds to the limiting case of a stiff EOS, because $c_{\rm s}$ cannot exceed $c$ without violating causality. Note that the EOS of free quark matter gives $c_{\rm s}=c/\sqrt{3}$. We also remark that our EOS reduces to the EOS of free quark matter in the limit of a low magnetic field, where quarks occupy all the Landau levels. The condition that quark matter settles only in the LLL is that $E_{f\rm F} < \sqrt{2\hbar c |e_fB|}$, where $E_{f\rm F}$ denotes the Fermi energy of the quark.
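As a quick numerical cross-check (our own illustration; the code and its unit conversions are assumptions, not part of the original analysis), the following sketch evaluates the LLL pressure of Eq. (\ref{eq:P}) and the LLL-occupancy condition $E_{f\rm F} < \sqrt{2\hbar c |e_fB|}$ for the $d$ and $s$ quarks:
\begin{verbatim}
# Sketch (our own cross-check): evaluate the LLL pressure and the
# LLL-occupancy condition. We assume hbar*c = 197.327 MeV fm and
# convert B[Gauss] to e*B[MeV^2] via the electron critical field
# B_e = 4.414e13 G, for which e*B = m_e^2 = 0.511^2 MeV^2.
import math

HBARC = 197.327          # MeV fm
N0 = 0.16                # fm^-3, nuclear saturation density

def eB_MeV2(B_gauss):
    return 0.511**2 * B_gauss / 4.414e13

def pressure_LLL(nb, B_gauss, bag):
    # P = (5 pi^2/2) (hbar c)^3 nb^2 / (e B) - bag, in MeV/fm^3
    return 2.5 * math.pi**2 * HBARC**3 * nb**2 / eB_MeV2(B_gauss) - bag

print(pressure_LLL(N0, 1e19, 0.0))  # ~82 MeV/fm^3, the prefactor above

def lll_only(nb, B_gauss):
    # E_F = c p_F < sqrt(2 hbar c |e_f| B) for d and s quarks
    # (|e_f| = e/3, the binding case), with n_f ~ nb (flavor symmetry)
    eB = eB_MeV2(B_gauss)
    pFc = 2.0 * math.pi**2 * HBARC**3 * nb / eB   # MeV
    return pFc < math.sqrt(2.0 * eB / 3.0)

print(lll_only(3*N0, 1e19), lll_only(3*N0, 4e19))  # False True
\end{verbatim}
For $n_{\rm b}=3n_0$, the condition flips between $B=10^{19}$ and $4\times 10^{19}$ Gauss, consistent with the critical field strength derived below.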
With the relation $E_{f\rm F}=cp_{f\rm F}$ in the LLL, one can derive, for each flavor, the critical strength of the magnetic field above which the quarks occupy only the LLL. We then find that the critical strength for $u$ quarks is weaker than that for $d$ and $s$ quarks, due to the difference of the electric charge $e_f$. Thus, the condition for all quarks to occupy only the LLL is that the magnetic field be stronger than the critical strength for $d$ and $s$ quarks, i.e., one can show that the magnetic field strength should be larger than the critical strength $B_c$ given by \begin{eqnarray} B_c &=& \frac{\sqrt[3]{6\pi^4}\hbar c}{e}n_{\rm b}^{2/3} \nonumber \\ &=& 1.62\times 10^{19} \times\left(\frac{n_{\rm b}}{n_0}\right)^{2/3} \ \rm{Gauss}. \label{eq:Bc1} \end{eqnarray} It may be interesting to note that this critical strength is unchanged even in the non-relativistic limit. Thus, the critical strength can be drawn as a universal function of density (Fig.~\ref{fig:Bc}). If the magnetic field exceeds $B_c$ in some density region of quark matter, the EOS is given by the causal limit there. Note that the EOS becomes invariant under a further increase of the magnetic field once $B>B_c$ holds. \begin{figure*} \begin{center} \begin{tabular}{cc} \includegraphics[scale=0.4]{LLL} & \includegraphics[scale=0.4]{2LL} \end{tabular} \end{center} \caption{Image of the energy levels for magnetized quark matter. In the case that the Fermi energy of quark matter with flavor $f$ is larger than $E_1^f$ and less than $E_2^f$ ($E_1^f<E_{\rm F}^f< E_2^f$), quark matter exists in the state either with the momentum $p_{f\rm F}^{(1)}$ above the lowest energy level $E_0^f$ (left panel) or with $p_{f\rm F}^{(2)}$ above the first excited energy $E_1^f$ (right panel).} \label{fig:LLL} \end{figure*} The situation in which quark matter can settle up to the 2nd Landau level is more complicated. That is, as shown in Fig. \ref{fig:LLL}, the magnetized quark matter with Fermi energy $E_{f\rm F}$ can exist in the state either with the momentum $p_{f\rm F}^{(1)}$ above the lowest energy level $E_0^f$ or with $p_{f\rm F}^{(2)}$ above the first excited energy $E_1^f$, where $E_{f\rm F}$ is given by \begin{equation} E_{f\rm F} = cp_{f\rm F}^{(1)} = \left(c^2 {p_{f\rm F}^{(2)}}^2 + 2\hbar c |e_fB|\right)^{1/2}. \label{eq:EF2} \end{equation} In this situation, the quark number density and the energy density of the quark can be written as \begin{eqnarray} n_{f} &=& \frac{3|e_fB|}{2\pi^2\hbar^2c}\left[p_{f\rm F}^{(1)} + 2p_{f\rm F}^{(2)}\right], \label{eq:nb2} \\ \varepsilon_f &=& \frac{3|e_fB|}{4\pi^2\hbar^2}\left[{p_{f\rm F}^{(1)}}^2 + 2p_{f\rm F}^{(1)}p_{f\rm F}^{(2)} + \frac{4\hbar |e_f B|}{c}\ln\left|p_{f\rm F}^{(1)} + p_{f\rm F}^{(2)}\right| - \frac{2\hbar |e_fB|}{c}\ln\left(\frac{2\hbar |e_fB|}{c}\right)\right]. \label{eq:e2} \end{eqnarray} One expects that, due to the increased number of degrees of freedom, the EOS in this situation could become somewhat softer than the result obtained when quark matter exists only in the LLL, i.e., $P=\varepsilon - 2{\cal B}$. In practice, one may be able to derive the explicit relation between $P$ and $\varepsilon$ from the above Eqs. (\ref{eq:EF2}) -- (\ref{eq:e2}), but here we avoid deriving the complicated expression, because in this paper we focus on the stiffest case of the EOS with the effects of the Landau levels. Instead of the explicit expression of the EOS, we only point out how the critical value of the magnetic field strength decreases in this case.
Since the condition for quark matter to settle up to the 2nd Landau level is $E_{f\rm F} < \sqrt{4\hbar c |e_fB|}$, one can show that the corresponding critical magnetic field is given by \begin{eqnarray} B_c &=& \left[\frac{3\pi^4}{(1+\sqrt{2})^2}\right]^{1/3}\frac{\hbar c}{e}n_{\rm b}^{2/3} \nonumber \\ &=& 7.15\times 10^{18} \times\left(\frac{n_{\rm b}}{n_0}\right)^{2/3} \ \rm{Gauss}. \label{eq:Bc2} \end{eqnarray} The critical field strengths $B_c$ for quark matter settling only in the LLL and up to the 2nd Landau level, given by Eqs. (\ref{eq:Bc1}) and (\ref{eq:Bc2}), are shown as functions of the baryon number density in Fig. \ref{fig:Bc}. Considering that the baryon number density at the center of a neutron star would be of the order of $\sim (5$--$7)n_0$, the effects of the Landau levels become considerably important only when the magnetic field strength is larger than $\sim 10^{19}$ Gauss. On the other hand, although the distribution and strength of the magnetic field inside the star are still unclear, the magnetic field may reach such a large strength in a neutron star core composed of quark matter. We also remark that, in any case, a simple dipole magnetic distribution inside the star cannot reach a strength large enough for the effects of the Landau levels to play an important role in the neutron star core (see the detailed discussion in Appendix \ref{sec:appendix_1}). \begin{figure} \begin{center} \includegraphics[scale=0.5]{Bc-nb} \end{center} \caption{The critical magnetic field strengths for quark matter settling only in the LLL and up to the 2nd Landau level (2LL) are shown as a function of the baryon number density $n_{\rm b}$, where the solid and broken lines correspond to the results for the LLL and the 2LL, respectively.} \label{fig:Bc} \end{figure} \section{Hybrid Star Models} \label{sec:III} Now, we consider how the properties of hybrid stars could change due to the effects of the Landau levels when the magnetic field is sufficiently strong in the stellar core. In particular, we focus on the case in which quark matter settles only in the LLL, because such a case realizes the stiffest EOS, as mentioned in the previous section. The equilibrium configuration of a magnetized neutron star is generally deformed by the nonradial magnetic pressure. However, as a first step, we simply neglect the effect of the magnetic pressure on the stellar configuration in this paper, i.e., the stellar configuration remains spherically symmetric, because the structure of the magnetic field inside the star is still unknown and the stellar deformation strongly depends on the magnetic geometry \citep{BBGN1995}. So, the stellar models considered in this paper are determined by solving the so-called Tolman-Oppenheimer-Volkoff equations together with the relationship between the total energy density and pressure, i.e., the EOS. Here, we should remark on the magnetohydrodynamic issues associated with the strong magnetic fields advocated in this paper. \cite{CC1968} considered the quantum theory of a relativistic electron gas in a magnetic field, and pointed out that the kinetic pressures parallel and perpendicular to the direction of the magnetic field could be anisotropic. However, \cite{BH1982} have shown that the pressure is always isotropic due to the magnetization currents in a compressed plasma, through calculations of the magnetic susceptibility for degenerate free electrons in the crust of a neutron star.
Recently, \cite{PY2012} have also shown that the hydrostatic equilibrium of a volume element in a magnetized star does not depend on the direction of the magnetic field, i.e., the pressure is isotropic. On the other hand, \cite{HHRS2010} discussed the mechanical stability of a system formed by quarks confined to their lowest Landau level, and the possibility that the transverse pressure tends to vanish under such a strong magnetic field, which may induce gravitational collapse. In any case, these issues might become more important under the strong magnetic fields considered in this paper, and will be discussed elsewhere. Before considering the stellar models with the effects of the Landau levels, for reference, we construct stellar models with the same EOSs as in \citet{TSHT2007,Yasutake2009,SYMT2011}. That is, for hadronic matter, we adopt the EOS based on the non-relativistic Brueckner-Hartree-Fock approach with $\Sigma^-$ and $\Lambda$ hyperons \citep{Baldo1998}, which is referred to as the ``hyperon EOS" in this paper. For the quark phase, we adopt the sophisticated MIT bag model suggested in \citet{yasutake09a,chen09}, which is composed of massless $u$ and $d$ quarks and an $s$ quark with a current mass of $m_{\rm s}=150$ MeV, where the bag constant is set to be 100 MeV fm$^{-3}$. Then, the quark phase is connected to the hadronic matter with a Maxwell construction. The resultant EOS is referred to as the ``Maxwell EOS," for which the energy density is discontinuous between $5.93 \times 10^{14}$ and $8.82 \times 10^{14}$ g/cm$^3$. We show the pressure as a function of the total energy density for the hyperon and Maxwell EOSs in Fig. \ref{fig:EOS}, while the corresponding mass-radius relations of the constructed neutron stars are shown in Fig. \ref{fig:MR}, where the thick-solid and thick-dotted lines denote the results with the hyperon and Maxwell EOSs, respectively. From Fig. \ref{fig:EOS}, one can see that the quark phase becomes stiffer than the hadronic matter in the high density region. As a result, the stellar models with a quark core can be more massive than those without one, as in Fig. \ref{fig:MR}. However, the maximum mass of the stellar model with a quark core is still too small to explain the observed maximum mass, i.e., two solar masses, which is a serious problem for hybrid star models. \begin{figure} \begin{center} \includegraphics[scale=0.5]{EOSa} \end{center} \caption{Relationship between the total energy density ($\varepsilon$) and pressure ($P$) for the EOSs adopted in this paper (see text for details).} \label{fig:EOS} \end{figure} Next, we consider the hybrid star models with the effects of the LLL. For this purpose, we consider that the quark phase in the Maxwell EOS is modified by the presence of the strong magnetic field. In the high density region, such a modified EOS should be expressed as $P=\varepsilon - 2{\cal B}$, as derived in the previous section. Meanwhile, it is not clear how the EOS for quark matter should be connected with the hadronic matter at moderate densities. So, in this paper, we adopt three possible cases to construct the modified EOS. That is, the EOS for quark matter is connected with the hadronic matter (A) at the upper limit of the density discontinuity in the Maxwell EOS, i.e., $\varepsilon_u = 8.82 \times 10^{14}$ g/cm$^3$, (B) at the lower limit of the density discontinuity in the Maxwell EOS, i.e., $\varepsilon_l = 5.93 \times 10^{14}$ g/cm$^3$, and (C) at the density defined as $\varepsilon_m=(\varepsilon_u + \varepsilon_l)/2$, as shown in Fig. \ref{fig:EOS}.
Almost all possibilities for connecting the quark-matter EOS modified by the strong magnetic field with the hadronic EOS should be covered by the above cases (A) to (C), although the value of ${\cal B}$ might differ from the bag constant in the Maxwell EOS because of the presence of the magnetic field. In fact, such a connection between quark and hadronic matter effectively shifts the value of the bag constant to ${\cal B}= 237.3$ MeV fm$^{-3}$ for case (A), ${\cal B}= 160.9$ MeV fm$^{-3}$ for case (B), and ${\cal B}= 192.8$ MeV fm$^{-3}$ for case (C). In any case, we believe that the three cases are sufficient to see the qualitative behavior of the hybrid star models with the effects of the Landau levels. Hereafter, we refer to these modified EOSs as the ``Landau-A," ``Landau-B," and ``Landau-C" EOSs. \begin{figure} \begin{center} \includegraphics[scale=0.5]{MRa} \end{center} \caption{Mass-radius relations of neutron stars constructed with the several EOSs shown in Fig. \ref{fig:EOS}.} \label{fig:MR} \end{figure} Fig. \ref{fig:MR} shows the mass-radius relations of the hybrid stars constructed with the Landau-A (solid line), Landau-B (broken line), and Landau-C EOSs (dotted line). From this figure, we find that the maximum mass of the hybrid star becomes larger for the stellar model with the EOS connected to the hadronic matter at a lower energy density, i.e., the maximum masses are $2.80M_\odot$ for the Landau-B EOS, $2.56M_\odot$ for the Landau-C EOS, and $2.31M_\odot$ for the Landau-A EOS. Since the Landau-A EOS is the softest among the three EOSs, due to the existence of a large discontinuity in the energy density, the maximum mass for the Landau-A EOS becomes smaller than in the other cases. Nevertheless, one can clearly see that in all cases with the effect of the LLL, the maximum masses become much larger than that obtained with the Maxwell EOS. As a result, these models can avoid the observational problem, i.e., the expected maximum masses are larger than two solar masses. Up to now, it has been difficult to explain the two-solar-mass observations with hybrid stars, but we have succeeded in showing the possibility of constructing massive stellar models by considering the effect of the strong magnetic field. Here, we remark that the strong magnetic field is necessary at least in the core region to construct such a massive hybrid star, but the magnetic field in the crust region and at the stellar surface need not be so large (perhaps at most $B\sim 10^{16}$ Gauss). It should be worthwhile to discuss the stability of hybrid stars near the maximum mass. Since quark matter is well approximated by a free quark gas in the core region, due to the asymptotic freedom of QCD, we can discuss the stability in a rather model-independent way. It is well known that the adiabatic index $\gamma$ of a free quark gas is $4/3$, which cannot satisfy the stability criterion for compact stars, $\gamma>4/3+\kappa M/R$ with $\kappa\sim O(1)$ \citep{shapiro-teukolsky}. However, we find from Eq.~(\ref{eq:P}) that the adiabatic index of the EOS of quark matter becomes almost $2$ in the presence of the magnetic field, which satisfies the criterion. We also show the stellar masses as a function of the central energy density in Fig. \ref{fig:rho-M}, where the solid, broken, and dotted lines correspond to the stellar models constructed with the Landau-A, -B, and -C EOSs, respectively. For reference, we also add the stellar models constructed with the hyperon and Maxwell EOSs in Fig.
\ref{fig:rho-M} with the thick-solid and thick-dotted lines. From this figure, we find that the central energy density at the maximum mass becomes smaller for the stellar model with the EOS connected to the hadronic matter at a lower energy density, i.e., $\varepsilon_c=1.74\times 10^{15}$ g/cm$^3$ for the Landau-B EOS, $\varepsilon_c=2.06\times 10^{15}$ g/cm$^3$ for the Landau-C EOS, and $\varepsilon_c=2.58\times 10^{15}$ g/cm$^3$ for the Landau-A EOS. Similarly, the central energy density of the stellar models with $M=2M_\odot$ becomes smaller for the EOS connected to the hadronic matter at a lower energy density, i.e., $\varepsilon_c=7.75\times 10^{14}$ g/cm$^3$ for the Landau-B EOS, $\varepsilon_c=9.82\times 10^{14}$ g/cm$^3$ for the Landau-C EOS, and $\varepsilon_c=1.35\times 10^{15}$ g/cm$^3$ for the Landau-A EOS. On the other hand, as shown in Fig. \ref{fig:Bc}, the critical magnetic field above which only the LLL is occupied becomes smaller with decreasing energy density. Thus, the stellar models constructed with the Landau-B EOS can be realized more easily, with a weaker magnetic field, than those constructed with the Landau-A EOS. \begin{figure} \begin{center} \includegraphics[scale=0.5]{rho-M} \end{center} \caption{Stellar mass as a function of the central energy density, $\varepsilon_c$, for the stellar models constructed with the different EOSs.} \label{fig:rho-M} \end{figure} Finally, we show the boundary between the quark and hadron phases in the hybrid stars constructed with the Landau-A, -B, and -C EOSs in Fig. \ref{fig:RQ-M}, where $R_Q$ and $M_Q$ denote the quark core radius and mass. From this figure, it is obvious that, adopting the Landau EOSs, the quark phase can occupy most of a massive hybrid star, i.e., the quark phase for the $2M_\odot$ stellar models amounts to $\, \raisebox{-0.8ex}{$\stackrel{\textstyle >}{\sim}$ } 78\%$ of the stellar radius and $\, \raisebox{-0.8ex}{$\stackrel{\textstyle >}{\sim}$ } 70\%$ of the stellar mass. This is a noteworthy feature of the massive hybrid stars constructed with the Landau EOSs, because it is quite different from the massive hybrid stars suggested so far, where the quark phase is generally tiny \citep{Rabhi2009,CCM2014}. \begin{figure*} \begin{center} \begin{tabular}{cc} \includegraphics[scale=0.5]{RQ-M} & \includegraphics[scale=0.5]{MQ-M} \end{tabular} \end{center} \caption{The boundary between the quark and hadronic phases in the hybrid stars constructed with the Landau-A, -B, and -C EOSs, where $R_Q$ and $M_Q$ denote the quark core radius and mass for each stellar model.} \label{fig:RQ-M} \end{figure*} \section{Conclusion} \label{sec:IV} Some compact objects have a strong magnetic field, although the details of the distribution of the magnetic field inside the stars are still uncertain. Presumably, one should consider the effects of such a strong magnetic field on the structure of compact objects. In this paper, we particularly focus on hybrid stars, i.e., neutron stars whose core region is composed of quark matter. We estimate the critical magnetic field strength above which quarks occupy only the LLL (and up to the second Landau level), which is shown to be $B\sim 10^{19}$ Gauss. We also find that the equation of state (EOS) in the LLL phase can be expressed as $P=\varepsilon-2{\cal B}$, independently of the magnetic field strength, where ${\cal B}$ denotes the bag constant. We remark that this is the limit of a stiff EOS, i.e., the sound speed becomes equal to the speed of light.
We derive these results by using the MIT bag model, where quarks move almost freely, so that our findings should be relevant especially in the core region due to the asymptotic freedom of QCD, while non-perturbative effects may enter in the moderate density region. Further consideration of the EOS of quark matter based on QCD is a subject for future work. On the other hand, the maximum mass of a hybrid star generally turns out to be smaller than the observed maximum mass, i.e., $2M_\odot$, because the introduction of quark matter makes the EOS soft. Even if one can construct a massive hybrid star, the quark region is generally tiny. However, owing to the effect of the Landau levels in the quark phase, we have succeeded in constructing massive hybrid quark stars, occupied in large part by quark matter, whose masses can exceed $2M_\odot$. Furthermore, in order to examine the qualitative behavior of the hybrid stellar models, we simply consider three different connections of quark matter with the hadronic matter. As a result, we find that the stellar model constructed with the EOS connected to the hadronic matter at a lower energy density realizes a more massive star with a smaller central energy density. In this paper, we consider simple stellar models as a first step, where we neglect the magnetic pressure and the deformation of the stellar shape. Such additional effects will be taken into account elsewhere. In addition, one might also have to consider the hadron-quark mixed phase in more realistic stellar models \citep{TSHT2007,Yasutake2009}. At any rate, one could probe the properties of such a phase, modified by the strong magnetic field, via observations of stellar oscillations \citep{Sotani2007,Sotani2008,Sotani2009}, which would provide additional information about strongly magnetized compact objects. We are grateful to N. Yasutake and T. Takatsuka for their fruitful discussions, and also to our referee for carefully reading the manuscript and giving valuable comments. This work was supported in part by Grants-in-Aid for Scientific Research on Innovative Areas through No.\ 24105008 provided by MEXT, and by Grant-in-Aid for Young Scientists (B) through No.\ 26800133 provided by JSPS.
\section{Introduction} An \emph{overlay network} is a network created on top of an existing network. In more technical terms, an overlay network is a network the links of which are realised as flows (or connections) of the underlying network. Overlay networks are an old idea, and have many applications. They can be used as a transition technology, when the desired physical network does not exist yet --- the transition to IPv6 was bootstrapped by running IPv6 within the \emph{6bone}, an overlay over the existing IPv4 Internet. \emph{Virtual Private Networks} (VPN) are a technology that allows a network node to appear connected at a place different from what is implied by the physical network topology, typically in order to work around topology-based security policies; \emph{onion routing} \cite{tor} generalises this idea to large public virtual networks that are used to provide a modicum of anonymity to their users. Finally, by rerouting around failures faster than the underlying network does, overlay networks are used to improve the reliability of large-scale distributed systems in the presence of partial network failures. It is this last application that concerns us here. \subsection{Overlay networks for reliability} BGP, the routing protocol used in the Internet core, is designed to scale to very large networks. This implies a number of trade-offs, most notably relatively slow reconvergence after a network failure, on the order of minutes. Measurements indicate that at any one time a few percent of the expected routes are not available \cite{detour}. This implies that in a sufficiently large distributed system implemented on the Internet, at any time at least some of the participants will not be able to communicate. There are multiple ways of dealing with this issue. An interesting approach is to design application algorithms that are able to deal with temporary failures; for example, the SMTP protocol used for electronic mail has a complex system of timeouts, retries and fallback servers that allows it to deal with such failures. A more recent example is that of the Kademlia distributed hash table algorithm (used notably for locating peers in large-scale peer-to-peer file transfer applications), which is highly redundant in order to deal with arbitrary communication failures. A more modular approach consists in delegating the reliability requirements to a lower sub-layer. In this approach, the application blindly sends its data to the desired destination, and a lower sub-layer uses an overlay network to route the data to the destination, using a routing algorithm with fast rerouting properties and with its own routing policies, possibly different from the policies used by the underlying network. This overlay network and routing algorithm can be implemented within the application layer (as an ad-hoc library), as in \emph{Resilient Overlay Networks} \cite{ron}, which makes it possible to fine-tune the routing heuristics in an application-specific manner (e.g.\ prefer lower latency or higher reliability) without the need for cross-layer interactions. Alternatively, the overlay network can be implemented at the network layer, using familiar packet-switching technology, which reduces flexibility somewhat but allows using unmodified applications over the overlay. \subsection{Routing in a distributed cloud} SlapOS is a framework for building distributed cloud applications. SlapOS was initially implemented over native IPv6, which was found to be too unreliable.
SlapOS was then modified to use a dense network (but not a full mesh) of virtual links \cite{re6st}, and to route over it using the off-the-shelf protocol Babel \cite{babel} with the hop-count metric. This solution worked fairly well as long as the cloud was mostly local. Unfortunately, as soon as distant nodes were added, Babel started making routing choices that, while consistent with the shortest-hop metric, were clearly sub-optimal. Consider for example the topology in Figure~\ref{fig:diamond-real}, which consists of four nodes configured in an almost complete mesh. As long as all the links are operational, the shortest-hop metric yields optimal results --- traffic local to Europe remains in Europe. However, if the link between Lille and Marseilles breaks, the shortest-hop metric does not allow the routing protocol to distinguish between the local route through Paris and the remote route through Tokyo, which is therefore chosen in roughly one half of the cases. \begin{figure}[htb] \centering \includegraphics[width=0.35\textwidth]{diamond-real} \caption{A real-world topology}\label{fig:diamond-real} \end{figure} The shortest-hop metric is not precise enough for the distributed cloud. In this paper, we describe our work on extending the Babel routing protocol with a metric based on packet delay. \subsection{A delay-based metric} Our goal in this work is to extend the Babel routing protocol with the simplest possible metric that does reliably distinguish between local and non-local routes in the overlay network generated by SlapOS. Our metric is not meant to be the be-all and end-all of metrics for overlay networks; still, the requirements of the application dictate a number of properties that it must have. First, as one of the goals of the distributed cloud is to reduce operational cost, the metric must not require any manual configuration, which rules out manually configuring links as ``local'' or ``remote''. We have chosen to base our metric on the \emph{round-trip time} (RTT), or two-way delay, which is easily measured with off-the-shelf hardware with an accuracy sufficient to distinguish between Paris and Tokyo. (One-way delay might lead to a more generally useful metric in the presence of asymmetric network congestion, but it is more difficult to measure and is not required for this particular application.) Second, the algorithm must be easy to implement on cheap off-the-shelf hardware, and, in particular, it must not rely on globally synchronised clocks. Since the links used in a distributed cloud are of varying quality, it must consume a negligible amount of additional network resources. Additionally, since the hardware used in the distributed cloud can be fairly loaded, it should be asynchronous, i.e.\ not require real-time response to query packets. Finally, since delay can be caused by network congestion, using delay in a routing metric causes a feedback loop, which can cause persistent oscillation. We require that our algorithm provide reasonable stability, with a bound on the period of oscillations of at least a few minutes. \subsection{Stability issues} \label{sec:stability} Using delay as an input to the routing metric in congested networks gives rise to a negative feedback loop: low RTT encourages traffic, which in turn causes the RTT to increase. In a discrete domain, such a feedback loop can cause persistent oscillations. Consider for example the topology in Figure~\ref{fig:diamond-topology}, where the links $A\cdot B$ and $A\cdot C$ are subject to congestion.
Suppose that there is a significant amount of traffic from $A$ to $D$. The routing protocol initially chooses some route, say the route through $B$; as the link $A\cdot B$ becomes congested, its RTT rises, so the routing protocol reroutes through $C$. The situation then reverses: the link $A\cdot C$ becomes congested, the protocol reroutes through $B$, etc. \begin{figure}[htb] \centering \includegraphics[width=0.25\textwidth]{diamond-topology} \caption{Worst-case topology} \label{fig:diamond-topology} \end{figure} In the general case, such oscillations are unavoidable in the presence of congestion, but their frequency can be limited. Our protocol contains two mechanisms, saturation and hysteresis, that cooperate to limit the frequency of oscillations; in Section~\ref{sec:worst-case}, we provide empirical data that shows that in the worst case the period of the oscillations is on the order of minutes. \section{Related work} \subsection{Use of RTT in routing protocols} In 1983, Mills described the use of RTT for routing in the DCNet~\cite{mills83}, but didn't provide an evaluation of his protocol; the asynchronous algorithm that we use to measure RTT (Section~\ref{sec:async-rtt}) is inspired by Mills' algorithm, which later became the basis for NTP~\cite{NTPv4}. A few years later, the ``new'' routing protocol for the Arpanet~\cite{arpanet89} used a metric based on RTT in order to mitigate the congestion of the network; stability issues were considered, and solved by saturating the metric, similarly to what we do. Using a delay-based metric for routing has apparently been abandoned since then: to the best of our knowledge, no modern network has been using this method in recent years. Our interpretation is that congestion seldom occurs within the core of the network nowadays, and has moved to the edge, where there is little opportunity for routing optimisations: congestion occurs in the ``Customer Premises Equipment'' (the ADSL modem) which cannot be routed around. The proprietary routing protocols IGRP and EIGRP use a parameter called ``delay'' for computing their metric. However, this value is statically configured by the operator rather than determined empirically, and this feature is therefore out of scope for this paper. \subsection{Overlay networks} Overlay networks are an old idea, and there is a wide range of literature describing their various applications. In this paper, we are concerned with the use of overlay networks to increase reliability, as described in Detour \cite{detour}. The techniques most similar to ours are the ones used by \emph{Resilient Overlay Networks} (RON) \cite{ron}, where the authors build an overlay network to increase reliability and use a variety of metrics, controlled by the application, to perform routing. Unlike our work, however, RON is layered above UDP and performs routing within the application layer: this makes implementation simpler and makes it easier to provide multiple routing metrics, but requires changing all applications to link with the RON library and use its primitives for communication. In contrast to RON, our network-layer approach allows the use of unmodified applications and is completely oblivious to the transport-layer protocol being used. \section{RTT-based routing} In this section, we describe the issues related to integrating an RTT-based metric in the Babel routing protocol. 
\subsection{Measuring RTT asynchronously} \label{sec:async-rtt} \begin{figure}[htb] \begin{minipage}[b]{0.2\textwidth} \centering \includegraphics[scale=0.33]{rtt1}\\ (a) \end{minipage} \hfill \begin{minipage}[b]{0.2\textwidth} \centering \includegraphics[scale=0.33]{rtt2}\\ (b) \end{minipage} \caption{RTT measurement}\label{fig:rtt} \end{figure} The simplest way to measure the RTT between nodes $A$ and $B$ (Figure~\ref{fig:rtt}(a)), as performed e.g.\ by the \emph{ping} program, is to send a single ``echo request'' packet from $A$ to $B$, and have $B$ immediately respond with an ``echo reply''. This is a simple and intuitive algorithm that does not require synchronised clocks; unfortunately, it requires a synchronous reply from $B$, which is not necessarily easy to integrate within an existing routing protocol. Like most modern routing protocols, Babel has a fairly sophisticated scheme for scheduling outgoing messages. Roughly speaking, messages are delayed by a random time (at most one \emph{Hello} interval) in order to avoid global synchronisation \cite{jitter} and to make it possible to aggregate multiple messages into a single packet. Adding synchronous messages to Babel would require a moderate amount of changes to the protocol, increase the amount of network traffic it generates, and might cause unexpected issues with node synchronisation. Fortunately, the problem of measuring RTT asynchronously has been solved before; the solution was used by Mills in his \emph{HELLO} routing protocol \cite{mills83} and in the NTP clock synchronisation protocol \cite{NTPv4}. In Mills' algorithm (Figure~\ref{fig:rtt}(b)), node $A$ sends a packet $p_1$ carrying its local timestamp; $B$ saves $p_1$'s reception timestamp $u_1$ according to its own local clock. At some later time, node $B$ sends a packet $p_2$ carrying a copy $t_1$ of $p_1$'s timestamp, the reception timestamp $u_1$, and the timestamp $u_2$ of $p_2$ itself. When node $A$ receives the packet $p_2$ at local time $t_2$, it computes the difference \[ (t_2 - t_1) - (u_2 - u_1) \] which yields the RTT. Note that each of the terms in this difference uses a single clock --- hence, no clock synchronisation is necessary. Except for the first packet, all packets exchanged in Mills' algorithm carry three timestamps: therefore, each node computes a new RTT sample for each received packet, which is twice as efficient as the naive \emph{ping} algorithm. A further refinement is possible. On a multi-access network, a packet's timestamp is valid for all neighbours; it is only the echoed timestamps that must be sent to a particular peer. In Babel, we attach a timestamp to each \emph{Hello} message, which is sent over multicast to all neighbours. The echoed timestamp is piggybacked onto \emph{IHU} (``I Heard You'') messages, used for reverse reachability detection, which are conceptually unicast (but usually sent over multicast). In order to make it possible to perform Mills' computation, we ensure that every \emph{IHU} is accompanied by a \emph{Hello} in the same packet. Therefore, the cost of implementing Mills' algorithm is just a few octets per \emph{Hello} and \emph{IHU} message, with no additional packets sent. \subsection{Smoothing} The RTT samples obtained by the algorithm described above contain a varying amount of \emph{jitter}, or short-term noise. Figure~\ref{fig:rtt-paris-tokyo} shows the samples obtained over a period of almost one hour over a GRE tunnel between Paris and Tokyo, at a time when the RTT was particularly stable.
Before time 1300, the samples are roughly constant, with a single outlier. At time 1350, something happens (rerouting, perhaps); after a few outliers, the samples become roughly constant again, with a small number of outliers. \begin{figure}[htb] \centering \includegraphics[width=0.45\textwidth]{32bits-rtt-tunnel-babel-stable} \caption{RTT through a tunnel from Paris to Tokyo}\label{fig:rtt-paris-tokyo} \end{figure} Obviously, we are interested in the medium-term latency averages (285\,ms before time 1500, 270\,ms after that), rather than in the random jitter. For that reason, we smooth the RTT data using an exponential average analogous to the one used by TCP \cite{rfc793}. More precisely, for every new RTT sample $\mathrm{RTT}_n$, our RTT estimate $\mathrm{RTT}$ is updated as follows: \[ \mathrm{RTT} := \alpha\cdot\mathrm{RTT} + (1 - \alpha)\cdot\mathrm{RTT}_n \] The value $\alpha$ is currently set to 0.836 by default (which is consistent with TCP's recommendation of 0.8 to 0.9). The results of this smoothing are shown in Figure~\ref{fig:rtt-paris-tokyo-smoothed}. \begin{figure}[htb] \centering \includegraphics[width=0.45\textwidth]{32bits-rtt-tunnel-babel+smoothed-stable} \caption{Effect of smoothing on RTT}\label{fig:rtt-paris-tokyo-smoothed} \end{figure} Figure~\ref{fig:rtt-paris-tokyo-unstable} shows the behaviour of the same tunnel at a different time, when the RTT exhibited much larger variation. While the raw samples are much more chaotic, the smoothing algorithm is still able to extract a useful signal. \begin{figure}[htb] \centering \includegraphics[width=0.45\textwidth]{32bits-rtt-tunnel-babel+smoothed-unstable} \caption{Effect of smoothing on an unstable RTT}\label{fig:rtt-paris-tokyo-unstable} \end{figure} \subsection{Accuracy and clock skew} As noted above, Mills' algorithm does not require synchronised clocks. However, its accuracy is limited by two factors. First, packets must be timestamped just before they are sent and just after they are received: if sent packets are timestamped too early, or received packets too late, the RTT will be overestimated. Second, the two clocks must progress at roughly the same rate: if one clock is significantly faster than the other, RTTs will be overestimated on the fast side and underestimated on the slow one. Concerning the first issue, we have put some care into ensuring that timestamps are generated in a timely manner. Babel's packet formatter formats \emph{Hello} messages with zero timestamps; the timestamps are filled in just prior to transmission. On the receiving side, however, timestamps are only parsed after packet validation. Our tests on a local gigabit Ethernet indicate that we overestimate RTT by 0.4\,ms as compared to the \emph{ping6} command, and introduce a moderate amount of jitter, on the order of 0.1\,ms. This is acceptable for the intended application. As to the second issue, a clock skew of $\delta$ introduces a maximum error of $\delta\cdot\tau$, where $\tau$ is the maximum interval between two IHU messages (12\,s by default). Typical computer clocks have a skew on the order of 10\,ppm, which should yield an error of at most 0.1\,ms. Interestingly, our tests indicate that clock skew increases dramatically when one peer enters a power-saving mode: in that case, we have witnessed asymmetric errors of more than 1\,ms, an order of magnitude more than the expected value. Even these extreme values, however, are within the accuracy required for the intended application of our protocol.
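To make the measurement and smoothing procedures described above concrete, the following is a minimal sketch of Mills' asynchronous RTT computation and of the exponential smoothing, written in Python with illustrative names; the actual implementation is part of babeld, is written in C, and piggybacks the timestamps onto \emph{Hello} and \emph{IHU} messages as described above.

\begin{verbatim}
ALPHA = 0.836  # smoothing constant, consistent with TCP's 0.8--0.9

class Neighbour:
    def __init__(self):
        self.last_hello_ts = None   # t1: origin timestamp of their last Hello
        self.last_rx_ts = None      # u1: our local clock at its reception
        self.smoothed_rtt = None    # exponentially smoothed RTT estimate

    def on_hello(self, origin_ts, local_rx_ts):
        # Remember (t1, u1) so that they can be echoed in our next IHU.
        self.last_hello_ts = origin_ts
        self.last_rx_ts = local_rx_ts

    def on_ihu(self, echoed_t1, echoed_u1, origin_u2, local_t2):
        # Packet p2 carries (t1, u1, u2) and is timestamped t2 on reception.
        # Each parenthesised difference uses a single clock, so no clock
        # synchronisation between the two peers is needed.
        sample = (local_t2 - echoed_t1) - (origin_u2 - echoed_u1)
        if sample < 0:
            return  # discard nonsensical samples (e.g. a stepped clock)
        if self.smoothed_rtt is None:
            self.smoothed_rtt = sample
        else:
            self.smoothed_rtt = (ALPHA * self.smoothed_rtt
                                 + (1 - ALPHA) * sample)
\end{verbatim}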
\subsection{An RTT-based metric} In the previous section, we described how to measure RTT precisely and cheaply. The RTT alone, however, does not directly constitute a metric: we need to somehow map RTT values to an additive metric. As far as the Babel routing protocol is concerned, a metric is just a 16-bit integer. While it would be possible to map RTT to a metric proportionally (just multiplying it by some suitable constant), this would favour low-RTT links too much, and prefer multiple low-RTT hops to a single moderate-RTT hop. What is more, it would yield arbitrarily large metrics for high-RTT links, which, as we shall see in Section~\ref{sec:worst-case}, has a negative effect on stability. \begin{figure}[htb] \centering \includegraphics[width=0.35\textwidth]{rtt-cost} \caption{Deriving cost from RTT}\label{fig:rtt-cost} \end{figure} Instead, we map RTT to metrics using the piecewise affine function described in Figure~\ref{fig:rtt-cost}. For RTTs below a value \texttt{min-rtt} (10\,ms by default), a link is considered ``good'', and its cost is the fixed value \texttt{min-cost}. For RTTs above \texttt{max-rtt} (120\,ms by default), the link is ``bad'', and its cost is the fixed value \texttt{max-cost}. For intermediate RTTs between \texttt{min-rtt} and \texttt{max-rtt}, the resulting cost is an affine function of the RTT. This mapping has two essential properties. First, all link metrics are no smaller than \texttt{min-cost}, which guarantees that even very low RTT links are not seen as ``free'' --- in a very low latency network, our metric degenerates to the shortest-hop metric. Second, all high-RTT links are treated equally, which, as we shall see in Section~\ref{sec:worst-case}, limits the frequency of route oscillations in congested networks. \subsection{Hysteresis} In traditional routing protocols, metrics tend to vary discontinuously, by discrete amounts. Hence, a traditional routing protocol can afford to switch routes as soon as a route's metric becomes lower than that of the currently selected route. When continuous metrics are used to measure real-world parameters, this is no longer the case: the metrics of two routes could oscillate around similar values, leading to frequent route oscillations. For that reason, the Babel routing protocol applies a modest amount of hysteresis to the metrics that it considers for route selection. As we shall see in Section~\ref{sec:worst-case}, this hysteresis is essential to the stability of delay-based routing. The algorithm is as follows. For every route, Babel maintains two metrics: the \emph{advertised metric} $M_a$, which is obtained from neighbours and readvertised as is to other nodes, and the \emph{smoothed metric} $M_s$. The smoothed metric is initialised to the advertised metric, and is periodically updated according to the formula: \[ M_s := \beta(\delta)\cdot M_s + (1 - \beta(\delta))\cdot M_a \] where $\delta$ is the delay since the last update of $M_s$, and $\beta(\delta)$ is chosen so that $M_s$ converges towards $M_a$ exponentially, with a half-life of 4\,s. Babel's route selection algorithm avoids routes with an infinite advertised metric (retracted routes); when multiple routes to a given destination have finite metrics, Babel will only switch routes if both the advertised and smoothed metrics of the new route are better than those of the currently selected one.
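The following sketch summarises the saturating cost function and the hysteresis rule just described; it is illustrative Python rather than the actual C implementation, the RTT thresholds are the defaults quoted above, and the cost bounds are merely plausible values.

\begin{verbatim}
MIN_RTT, MAX_RTT = 10.0, 120.0   # ms, the defaults quoted above
MIN_COST, MAX_COST = 96, 246     # illustrative cost bounds

def rtt_to_cost(rtt):
    # Piecewise affine, saturating mapping from RTT (in ms) to link cost.
    if rtt <= MIN_RTT:
        return MIN_COST
    if rtt >= MAX_RTT:
        return MAX_COST
    slope = (MAX_COST - MIN_COST) / (MAX_RTT - MIN_RTT)
    return round(MIN_COST + slope * (rtt - MIN_RTT))

def smoothed_metric(m_s, m_a, delta):
    # Exponential convergence of M_s towards M_a, with a half-life of 4 s.
    beta = 2.0 ** (-delta / 4.0)
    return beta * m_s + (1.0 - beta) * m_a

def switch_route(new_adv, new_smooth, cur_adv, cur_smooth):
    # Hysteresis: switch only if both metrics of the new route are better.
    return new_adv < cur_adv and new_smooth < cur_smooth
\end{verbatim}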
The effect of this selection rule is to permit fast reconvergence when a route is lost, but to delay switching routes when an unselected route's metric decreases below that of the currently selected one. The hysteresis algorithm may appear similar to the smoothing algorithm described above, but there are good reasons why the two are kept separate. Babel is a modular protocol, and metric computation is separate from route selection. The smoothing algorithm is part of the metric computation, and is designed to extract a smooth signal from the noisy RTT samples; it is specific to the RTT metric. The hysteresis algorithm, on the other hand, is part of the (metric-independent) route selection procedure, and its only purpose is to improve stability by delaying switching to a better route. \section{Experimental evaluation} In this section, we show some empirical data describing the behaviour of our implementation of the algorithm described above. \subsection{Real-world behaviour} We have tested our implementation on a small overlay network deployed over the Global Internet, consisting of four nodes, three of which are in France and one in Japan. The topology of the overlay network is the one in Figure~\ref{fig:diamond-real}. Each node is running Linux, and the links are implemented using OpenVPN over UDP (without cryptography). All Babel instances are run with \verb|rtt-min| equal to $10\,\textrm{ms}$, \verb|rtt-max| equal to $200\,\textrm{ms}$, \verb|min-cost| equal to 96 and \verb|max-cost| equal to 246. Throughout the experiment, Lille is sending data to Marseilles. Figure~\ref{fig:nexedi-throughput} shows the incoming throughput in Marseilles over each of the local interfaces. Initially, all links are up, so the data arrives directly from Lille. Around minute 13, the direct link between Lille and Marseilles is shut down; after a few dozen seconds, the failure is detected, and the data is rerouted through Paris. Around minute 14, the Paris link is shut down, and the data is rerouted through Tokyo. Finally, the links are reestablished; when this is detected, the data is rerouted through the direct low-latency link. \begin{figure}[htb] \centering \includegraphics[width=0.45\textwidth]{nexedi-throughput} \caption{Throughput in Marseilles} \label{fig:nexedi-throughput} \end{figure} Figure~\ref{fig:nexedi-metrics} shows the metrics of the different routes during the experiment. It shows that the links remain uncongested: all of the metrics remain roughly constant throughout the experiment. \begin{figure}[htb] \centering \includegraphics[width=0.45\textwidth]{nexedi-metrics} \caption{Metrics in Lille} \label{fig:nexedi-metrics} \end{figure} \subsection{Worst-case simulation} \label{sec:worst-case} The previous experiment uses links of different natural latencies that remain uncongested throughout the experiment. We believe that this is representative of real-world conditions in overlay networks; however, since the traffic that we generate does not significantly impact the latencies of the links, the feedback loop described in Section~\ref{sec:stability} does not occur, and there are no stability issues. In order to test our algorithm's stability properties in a worst-case situation, we have simulated a network consisting of two exactly identical parallel routes that are subject to congestion.
The topology is that of Figure~\ref{fig:diamond-topology}; the links $A\cdot B$ and $A\cdot C$ have their throughput artificially limited, and are therefore subject to congestion, while the links $B\cdot D$ and $C\cdot D$ are uncongested. As expected, routing in this somewhat pathological topology is subject to oscillations. Figure~\ref{fig:diamond-stability-rtt} shows the RTTs of the two congested links. The routing protocol chooses one of the two routes, the RTT of which subsequently increases; after a few minutes, the protocol reacts to the increase of the RTT and switches to the other route; the situation then repeats, \emph{ad nauseam}. However, the frequency of the oscillations remains bounded, with a period of roughly 5 minutes. \begin{figure}[htb] \centering \includegraphics[width=0.45\textwidth]{32bits-diamond-bounded-local} \caption{Worst-case RTT oscillation} \label{fig:diamond-stability-rtt} \end{figure} Two mechanisms cooperate to limit the frequency of the oscillations. The saturation of the cost function ensures that both congested links spend part of their time in the saturated state. Hysteresis ensures that Babel doesn't switch routes as long as both metrics are saturated. Figure~\ref{fig:diamond-stability-rtt-unbounded} shows an experiment performed in the same topology, but with an unbounded cost function (both \verb|rtt-max| and \verb|max-cost| set to very high values, chosen so that the slope of the curve remains the same as in the previous experiment). The oscillations are now much faster (less than a minute), which shows the importance of a bounded cost function. \begin{figure}[htb] \centering \includegraphics[width=0.45\textwidth]{32bits-diamond-unbounded-local} \caption{Worst-case RTT oscillation without saturation} \label{fig:diamond-stability-rtt-unbounded} \end{figure} In this experiment, the congested links are the ones close to the sender, which ensures fast reaction to changing conditions. We have repeated the experiment with the links $B\cdot D$ and $C\cdot D$ being the ones subject to congestion; as expected, the behaviour is similar, but with slightly slower oscillations. \section{Conclusions and further work} In this paper, we have described a working implementation of a delay-based routing metric that is currently being deployed in production. We have described an algorithm that measures RTT while having a negligible impact on the amount of routing protocol traffic, and have shown how to mitigate the stability issues intrinsically connected to using delay, with mechanisms that are sufficient to limit instability in the most hostile examples that we could construct. While the functionality of our protocol is sufficient for the overlay networks that we consider, a number of related issues still remain open. \paragraph{One-way delay} The metric described in this paper is based on the round-trip time, or two-way delay. The congestion control community have repeatedly shown that one-way delay behaves better than two-way delay, at least as far as congestion control algorithms are concerned \cite{tcp-lp,ledbat}, at the cost of much more complex algorithms. It would certainly be interesting to find out whether there are any real-world cases where one-way delay performs significantly better than RTT as a basis for a routing metric.
\paragraph{Arbitrary choices and theoretical study of stability} There are a number of arbitrary choices in our algorithm: the constants used for smoothing and filtering, the amount of hysteresis applied, and, above all, the function used for mapping an RTT value to a metric. While we have empirically checked that these particular choices work well, at least for the particular application under consideration, there are almost certainly other choices that would work just as well, and perhaps better. More generally, we lack an in-depth theoretical understanding of the performance of our algorithm, in particular of its stability. As there exist a number of techniques for the theoretical study of the stability of distributed systems, this would seem to be feasible. \paragraph{Other applications} After we initially published the code of our implementation, one researcher expressed interest in studying its suitability for networks other than overlays. There is some support for the feeling that the metrics currently used in wireless mesh networks (such as ETX \cite{etx} or physical-layer based metrics) are not satisfactory, because they are a poor predictor of network performance, because they are too slow to react to changing conditions, or because they are too difficult to implement. We hold some hope that, at least for some MAC layers, an accurate measurement of delay might be a good indicator of lower-layer congestion, and therefore could serve as one component of a metric for wireless mesh networks. \section*{Acknowledgment} We are grateful to Julien Muchembled of Nexedi for providing access to the cloud nodes used in our experiments.
\section{Introduction} Contemporary human society lives under the effects of an almost fully accomplished globalisation. International barriers have become more permeable thanks to the spread of English as a universal language. Connections between individuals develop across huge distances throughout the world thanks to the daily use of social media and the immediate worldwide coverage nowadays attained by news media. These observations naturally support the picture of a global society described as a large collective system involving strongly interacting degrees of freedom, represented by the individuals' actions or opinions. With this picture in mind, scientists started to study human society by means of a statistical mechanics approach. Early outcomes of this effort were the first opinion dynamics model proposed by a physicist \cite{weidlich1971statistical} and the introduction of the Ising model to study consensus in societies \cite{galam1982sociophysics, galam1991towards}. An important amount of work followed these first seminal articles, building a literature in which the pressure of the society is modelled by assimilative interactions that mimic the tendency of people towards imitation. The most widely studied models of a society of this kind come from physics, {\it i.e.} the Voter model \cite{clifford1973model}, the already mentioned Ising model with its variants \cite{borghesi2007songs}, and the Majority rule model \cite{galam2002minority, martins2013building}. Yet, despite many other interesting features, it soon appeared evident that models based only on assimilative interactions describe a society in which full consensus is typically inevitable. To overcome this problem the concept of {\it homophily} \cite{lazarsfeld1954friendship, mcpherson2001birds}, that is, the tendency of people to interact more often or with greater intensity with similar others, has been introduced \cite{axelrod1997dissemination, deffuant2000mixing, hegselmann2002opinion}. However, the fragmentation of the society into groups with different opinions obtained in models that include both assimilation and homophily was found to be unstable under the introduction of noise \cite{flache2011local, mas2010individualization}. In fact, an infinitesimal amount of noise is seen to irremediably redirect the society to a state of full agreement. The idea that antagonistic interactions \cite{mas2010individualization, mas2014cultural, macy2003polarization, flache2011small, sirbu2013opinion, kurmyshev2011dynamics, sznajd2011phase, radillo2009axelrod, martins2010mass, baldassarri2007dynamics} and xenophobia (the phenomenon whereby the larger the dissimilarity between two interacting individuals, the more negatively they evaluate each other) \cite{macy2003polarization, flache2011small, baldassarri2007dynamics, mark2003culture} should be taken into account to resolve this issue has been developed only fairly recently. In the present work we build on these observations and put them in conjunction with the traditional approach that assumes imitation tendencies between individuals. We propose a model for opinion formation based on the rule that individuals in agreement with each other tend to reinforce their mutual positive influence, while individuals in disagreement will develop an antagonistic relation based on mistrust towards one another's views. These tendencies will be encoded in an interaction term that, for each pair of individuals, reflects the history of their agreement or disagreement at previous times.
Past relevant interactions are only those that lie within the finite range of the agents' memory.\\ We note that the dynamics of the model we propose is strongly reminiscent of the dynamics of graded response neural networks \cite{hopfield1984neurons, kuhn1991statistical, amit1987statistical}, which have been used to describe associative memory. Indeed, the model discussed here will, under suitable conditions, develop interactions of the Hopfield type \cite{hopfield1982neural}. Models of society including Hopfield-like interactions have already been used in the social sciences on a few occasions \cite{macy2003polarization,flache2011small} to study consensus formation and opinion polarization. In these works Hopfield interactions are introduced and studied in conjunction with a number of other elements, such as individual flexibility, broad-mindedness, and open-mindedness \cite{macy2003polarization}, or in more complicated network structures \cite{flache2011small}. Moreover, in all these cases the focus of the study was on the stationary state reached by a society with fixed interpersonal interactions, and on how it is approached from a random initial condition. Our contribution will instead focus on a model society whose internal interactions develop starting from historical interpersonal relationships. Yet, it will, under suitable conditions, spontaneously turn out to closely resemble a model society exhibiting Hopfield-like couplings. We will give a detailed analysis of the way the Hopfield-like interactions develop, and analyze the resulting collective behaviour of the system. To this aim we will focus on a society that is constantly under the influence of new external events, and we will pay particular attention to the reaction of the society to world-wide news, modelled as external fields applied to it \cite{carletti2006make, gargiulo2008saturation}. We will study whether the society develops self-maintained collective memories of certain news items, and is therefore irremediably shaped by them. In the literature, the concept of collective memory was first introduced in \cite{halbwachs1992collective}, and only recently have scholars focused on how collective memory is influenced by the media \cite{kligler2014setting, garcia2017memory}. In our model the possibility for external events to leave a lasting imprint on the society will depend on a number of parameters, including the extent of the news' influence on single individuals and how frequently the news impact the society. In particular, our model shows that high-impact news, or even just very frequent news, can change the internal structure of the society in a drastic way and will determine its non-linear collective response to future external influences. \noindent The structure of the paper is as follows. In Section \ref{model} we introduce the model and its main features, before entering into the description of different scenarios corresponding to different kinds of external information. The results are presented in Section \ref{results} and organized in the subsequent sections, progressing from the simplest scenario to more complicated ones. Given the growing complexity of the problems studied, not all the cases considered can be fully solved analytically. For each choice of the external stimuli we first derive all the analytic predictions we have access to; we then complete the picture by showing simulation results. \section{The model} \label{model} The society that we consider is composed of a set of $N$ agents, each potentially connected to all the other agents.
With agent $i$ we associate a continuous preference field $u_i$. Expressed opinions are given by nonlinear functions $g(u_i)$ of the preference field $u_i$ of agent $i$. We will take $g(u_i)$ to be of sigmoid form, implying that expressed opinions remain bounded. We will take the stochastic dynamics of the system to be of the form: \begin{eqnarray}\label{maineq} \dot{u_i}=-u_i+I_i+\sum_{j\neq i}^N J_{ij}g_j+\eta_i\ , \end{eqnarray} where we use the abbreviation $g_j= g(u_j)$ and we have dropped time dependencies, which in general pertain to all variables in the equation. Here $J_{ij}>0$ represents a mutually supportive interaction between the individuals $i$ and $j$, while $J_{ij}<0$ indicates an antagonistic interaction between the same agents. The quantity $I_i$ represents the mass media information as perceived by individuals, while $-u_i$ is a mean reversion term which entails that in the absence of external influences the preference field of each agent will fluctuate around zero. The last term $\eta_i$ is a Gaussian white noise with zero mean and correlations $\langle \eta_i(t)\eta_j(t')\rangle = \sigma^2 \delta_{ij} \delta(t-t')$. The combined effect of individually perceived external sources of information, $I_i$, and the interaction with other agents' expressed opinions, $g_j$, may act to drive the preference field of an agent away from zero, and thus may favour the development of specific orientations. The heterogeneous external information $I_i$ is defined as $I_i=I_0 \xi_i$, separating the strength $I_0$ of the signal from the variables $\xi_i$ encoding the local variability. Note that such variability might be genuine, or arise as a result of individually variable perception of an underlying uniform message (which may be caused by idiosyncratic interpretations). We simply assume that $\xi_i$ can only be either $+1$ or $-1$. \\ Finally, and most importantly, the tendency of each person to agree or disagree with others is based in our model on the memory of the past history of agreement and disagreement with them. People who have a history of agreement in the past will be more likely to agree in the future as well, and an analogous statement holds for disagreement. This feature represents the key ingredient of our model. More specifically, we consider the recent history of agreement or disagreement to have a larger weight than the distant past, to take into account how vivid the experience of past interactions is. We will assume an exponentially weighted memory and take interactions between agents at time $t$ to be given by \begin{eqnarray}\label{eqJs} J_{ij}(t)= \frac{J_0\cdot \gamma}{N} \int_{0}^{t}\mbox{d}s \ g_i(s)g_j(s) e^{-\gamma(t-s)} \end{eqnarray} for some $J_0>0$, assuming for simplicity the time scale $\tau_\gamma=1/\gamma$ of the memory to be uniform across agents. The normalization factor $1/N$ is introduced so that the interaction with other agents is not overwhelmingly dominant, but remains comparable to the influence of external sources of information in the large $N$ limit.\\ In this way we have a society that uses its past history to interpret any instantaneous inputs that it receives. In particular, agreement (disagreement) will be perceived if the value of $g_i(s)g_j(s)$ is continuously positive (negative) on the time scale $\tau_\gamma$, and will bias agents $i$ and $j$ toward future agreement (disagreement).
When the history of past interactions is instead characterized by an alternation of agreement and disagreement periods, the agents will tend to be neutral towards each other, $J_{ij}\sim0$. This memory effect is particularly important when studying the influence of the external information on the agents' opinions. Note that the memory that appears in our model, being a memory of past relations, must not be confused with the memory of past actions or opinions of single agents that has more often been considered in the literature on social behaviour \cite{dall2007effective, noah1998beyond, jkedrzejewski2018impact}. \\ \subsection{Agents' interactions {\it vs} Hopfield couplings} \label{Hopfield_couplings} The main ingredient of our model is the memory of past interactions, which is associated with the time scale $\tau_\gamma$. The model also contemplates a second time scale, which is the relaxation time of the individual preference $u_i$. The latter has been set equal to 1 without loss of generality, as all the other parameters and time itself can be expressed in its units. A third time scale, $\Delta_0$, should also be considered: it is associated with the duration of the exposure of the society to external stimuli. We will typically focus on the regime $\tau_{\gamma} \gg\Delta_0\gg 1$, corresponding to a fast adaptation of $\mathbf{u}$ to external stimuli and a slow memory decay, which will be responsible for the storing of previous opinion configurations in the memory of interpersonal relations. This process will describe how the whole society can be {\it shaped} by its past by {\it learning} from patterns of opinions produced by sustained signals or series of repeated external stimuli. Among the different scenarios studied, we will describe the case of the arrival of different external stimuli represented by a local field changing in time, $\bm{I}(t)=I_0\bm{\xi}^{\mu(t)}$. In this expression $p$ different random choices of ${\bm \xi}^{\mu(t)}=(\xi_i^{\mu(t)}) \in \{\pm 1\}^N$ are considered, one for each integer value in $\{1...p\}$ that $\mu(t)$ assumes, each of them applied for a time $\Delta_0$. Each of these random vectors represents the perceived piece of news that influences the society during the time $\Delta_0$, and is later substituted by a different piece of news. Under the effect of such external influences we expect that the society will likely develop interactions comparable to the classic Hopfield couplings \cite{hopfield1982neural} defined from a collection of $p$ random patterns ${\bm \xi}^{\mu}$, albeit rescaled by a factor $1/p$; in the long time limit we thus expect \begin{equation}\label{10} J_{ij} \simeq J_{ij}^{\mbox{H}}/p=\frac{1}{Np}\sum_{\mu=1}^p\xi_{i}^\mu\xi_{j}^\mu\ . \end{equation} In fact, if each strong signal clamps the expressed opinions towards the positions that it suggests ($\mathbf{g}(t)=\bm{\xi}^{\mu(t)}$) for a time $\Delta_0\ll\tau_\gamma$, and the sequence $\mu(t)$ of the signals' appearances is repeated many times within $\tau_{\gamma}$, the average over the past history in Eq. (\ref{eqJs}) can be approximated by an average over the products $\xi_{i}^\mu\xi_{j}^\mu$, as in the definition of the Hopfield couplings. In Section \ref{sec:per_ext_sig} we will discuss the similarity between the interpersonal couplings generated in our modelled society and Hopfield couplings in more detail.
For the moment it is interesting to note that, despite the similarity that emerges at first sight, the couplings in the Hopfield model \cite{hopfield1982neural, hopfield1984neurons} were taken to be fixed from the start, whereas in the present case they evolve dynamically under the influence of external signals and internal dynamics. It is only in the special scenario described above that we will see the emergence of Hopfield-type couplings. \subsection{Numerical details} While a few of the simplest scenarios investigated in the present paper can be studied analytically, in many cases we are forced to use numerical simulations. To perform these, we note that the couplings of Eq. \eqref{eqJs} satisfy the dynamical evolution equation: \begin{equation}\label{dynrule} \dot J_{ij}(t)= \gamma \left[\frac{J_0}{N}g_i(t)g_j(t) - J_{ij}(t)\right]\ . \end{equation} We use Euler integration to integrate Eq. \eqref{maineq} and Eq. \eqref{dynrule}; we found a step size $\mbox{d} t=0.1$ (with noise increments of standard deviation $\sigma \sqrt{\mbox{d} t}$) sufficiently small for our purposes. There are many parameters involved in the simulations; some are fixed in all cases, while others vary. The fixed parameters are listed here: we choose $N=100$ for the number of agents and $p=3$ for the number of different external signals. Although these numbers are small, they produce results that are representative of the $N\to \infty$ limit with $p\ll N$ (as we have checked by using other $(p,N)$ combinations). Throughout this paper we used a low noise level, $\sigma^2=0.01$, to ensure that non-trivial collective states can emerge. All simulations start with random initial conditions $u_i \sim \mathcal{N}(0, \sigma^2/2)$, which would be the equilibrium distribution in a non-interacting system without an external signal. The other important parameters, which change from case to case, are the time length $\Delta_0$ of each external signal, the amplitude $I_0$ of the polarizing signal (taken to be $I_0=50$ apart from a few exceptions), the strength $J_0$ of the interactions, and the time scale $\tau_{\gamma}$ of the memory of past interactions.\\ \section{Results Overview} \label{results} Our model is constructed on the simple assumption that the mutual interactions between agents depend on their past history of agreement or disagreement. Our main result is that this creates a mechanism that allows a society to develop a collective memory of its past experiences. To study this mechanism in more detail, we analyze different protocols of external influences that trigger the individuals' opinions. The different scenarios presented range from simple situations to more complex and realistic ones, and are listed below together with the results obtained in each case. \\ \begin{enumerate} \item The external information $\bm{I}$ is heterogeneous but constant in time. We are able to treat the system analytically and predict its long-time behaviour. Already in this simple setting the presence of the signal changes the way people interact and determines their future behaviour. \item The signal consists of a sequence of different (random) patterns, repeatedly presented in a cyclic fashion. In the long run this creates a stable matrix of interactions between the agents, which can be predicted analytically. By means of this interaction matrix the society develops a memory of the opinion patterns presented previously, and it is found to be able to recall each of them.
\item A sequence of external stimuli is repeatedly presented in a cyclic fashion as before. However, there are gaps between the presentations of successive signals, where the society is not exposed to an external stimulus and follows its own internal dynamics. In this scenario a critical ratio between the duration of the patterns and the length of the gaps without pattern presentation must be exceeded for the system to develop a persistent memory of the signal. We provide an analytic treatment to predict this critical ratio. Interestingly, we also found that in the case of high-impact news that influence the society very frequently, the presentation time needed for the news to be memorized by the society is unexpectedly small. \end{enumerate} The study of even more realistic situations, such as sequences of external influences with different impacts on the society and random appearances in time, will be considered in a follow-up paper. \section{Learning from a persistent external signal} \label{one_pattern} In this section we start illustrating the behaviour of the society described by our model in a simple setting. Here we study its reaction to an external information source that is persistent in time and described by a signal $I_i=I_0 \xi_i$, where $I_0$ is its strength and $\xi_i$ is a random variable taking values in $\{\pm 1\}$, which represents the way in which agent $i$ perceives it. Note that the uniform perception $I_i=I_0 \ \ \forall i$ is a particular case of what is discussed here. Our aim is to understand how the society reacts to this signal and how the memory of the opinions induced by it develops in this simple case, before moving to more complicated settings. In order to do this, we will study the evolution in time of the agents' preference fields $u_i$ and of the interaction couplings $J_{ij}$ between agents. Even in the simple case of a constant signal, solving the equation for $u_i$ is not a trivial task, mainly because of the dependence of the couplings on the expressed opinions. We will see that the presence of the signal will induce the agents to change their opinions, consequently modifying the relationships of agreement and disagreement between them. As a result of this change, the couplings $J_{ij}$, which are null at the beginning of the dynamics, will start to evolve and establish the new interactions between the agents. This process allows the society to learn the opinion pattern $\bm{\xi}$ and to collectively retrieve it in the future. \\ We can write down a formal solution of Eq. \eqref{maineq} with the couplings defined in Eq. \eqref{eqJs}: \begin{eqnarray}\label{u1} u_i(t)&=& u_i(0)e^{-t}+ \int_{0}^{t}\mbox{d} s \left[ J_0 U_i(s)+\eta_i(s) \right] e^{-(t-s)}+I_0\xi_i (1- e^{-t}) \ , \end{eqnarray} in which \begin{equation}\label{U1} U_i(s)=\gamma \int_0^s\mbox{d} s' \mbox{e}^{-\gamma(s-s')}g_i(s')q(s,s')\ , \end{equation} with \begin{equation} q(s,s')=\frac{1}{N}\sum_jg_j(s)g_j(s')\ . \end{equation} We note that, by appeal to the law of large numbers, the correlator $q(s,s')$ will be non-random in the large $N$ limit. Noting further that, for $\gamma \ll 1$, the functions $U_i(s)$ are very slowly varying functions of $s$, we see that: \begin{equation}\label{U2} \int_0^t \mbox{d} s U_i(s)\mbox{e}^{-(t-s)} \simeq U_i(t) \int_0^t \mbox{d} s \mbox{e}^{-(t-s)} = U_i(t)(1-\mbox{e}^{-t})\ . \end{equation} To proceed we adopt one further approximation and replace $U_i(t)$ in Eq. \eqref{U2} by its noise average (indicated by $\langle \cdot \rangle$), which is tantamount to replacing the $g_i(s')$ in Eq.
\eqref{U1} by their noise average. While this replacement leads to an underestimation of the noise contribution in the evolution of the $u_i(t)$, we found that the effect remains small in the parameter ranges considered. With these approximations the solution for $u_i$ becomes \begin{equation}\label{u3} u_i= u_i(0)\mbox{e}^{-t}+ J_0 \langle U_i(t)\rangle (1-\mbox{e}^{-t}) + \int_{0}^{t}\mbox{d} s\, \eta_i(s)e^{-(t-s)}+ I_0\xi_i(1-\mbox{e}^{-t}) \ . \end{equation} This means that $u_i$ will be a Gaussian process with mean \begin{equation} \langle u_i \rangle = u_i(0)\mbox{e}^{-t}+ J_0 \langle U_i(t)\rangle (1-\mbox{e}^{-t}) + I_0\xi_i(1-\mbox{e}^{-t})\ , \end{equation} covariance \begin{eqnarray} C(t,t')&=&\langle(u_i(t)-\langle u_i\rangle (t))(u_i(t')-\langle u_i\rangle (t'))\rangle \nonumber \\ &=&\int_{0}^{t}\int_{0}^{t'}\mbox{d} s \mbox{d} s' \langle \eta_i(s)\eta_i(s')\rangle e^{-(t-s)}e^{-(t'-s')} \nonumber\\ &=&\frac{\sigma^2}{2} \left(\mbox{e}^{-|t-t'|}-\mbox{e}^{-(t+t')}\right)\ , \end{eqnarray} and variance \begin{equation} \sigma_u^2(t)=C(t,t)=\frac{\sigma^2}{2}(1-\mbox{e}^{-2t})\ . \end{equation} We are interested in studying the system in the long time limit $t, t' \to \infty$, where the $u_i$-process becomes stationary, with the covariance of the $u_i$ only depending on the time difference $\tau=|t-t'|=O(1)$, i.e. $C(t,t')=C(\tau)$; we will then have \begin{equation} \lim_{t \to \infty} \langle U_i(t)\rangle = \gamma \langle g_i \rangle \tilde{q}(\gamma) \end{equation} where $\tilde{q}(\gamma)$ is the Laplace transform of $q(\tau)$. The mean of $u_i$, its covariance and its variance thus become \begin{eqnarray} \langle u_i \rangle &=& \gamma \langle g_i \rangle \tilde{q}(\gamma) J_0 + I_0\xi_i \label{mean_u}\\ C(\tau) &=&\frac{\sigma^2}{2} \mbox{e}^{-\tau} \label{24} \\ \sigma^2_u &=&C(0)=\frac{\sigma^2}{2} \label{sigma_u}\ . \end{eqnarray} Now we observe that $\langle u_i \rangle=\xi_i\langle u \rangle$ and $\langle g_i \rangle = \langle g(\langle u_i \rangle + \sigma_u \zeta_i)\rangle_\zeta = \xi_i \langle g \rangle$, with $\langle \cdot \rangle_\zeta$ being the average taken over $\zeta \sim \mathcal{N}(0,1)$, constitute a consistent solution of Eq. \eqref{mean_u}. This allows us to remove the dependence on $i$ from the equations.\\ If we now choose the expressed opinions to be error functions of the preference fields, $g_i=\mbox{erf}(u_i)$, we can exploit the properties of the error function to obtain a self-consistency equation for $\langle u \rangle$. We will have \begin{equation} \label{g} \langle g\rangle=\mbox{erf}\left(\frac{\langle u\rangle}{\sqrt{1+2\sigma_u^2}}\right) \end{equation} and \begin{equation} \label{q_tau} q(\tau)=\left\langle\mbox{erf}\left(\langle u\rangle+\sigma_{u}x\right)\mbox{erf}\left(\frac{\langle u\rangle+\rho(\tau)\sigma_{u}x}{\sqrt{1+2(1-\rho^2(\tau))\sigma_{u}^2}}\right)\right \rangle_x\ , \end{equation} where $\langle\cdot\rangle_{x}$ stands for an average over a standard Normal random variable $x$ and the correlation coefficient $\rho(\tau)$ is \begin{equation} \rho(\tau)=\frac{C(\tau)}{\sigma_u^2}=e^{-|\tau|}\ . \end{equation} (See \cite{anand2018structural} for further details on these last steps, which follow a similar computation in a different context.)
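For completeness, we recall the standard Gaussian identity underlying Eq. \eqref{g}, which follows from writing the error function in terms of the normal cumulative distribution function: for $\zeta\sim\mathcal{N}(0,1)$, \begin{eqnarray} \left\langle \mbox{erf}\left(a+b\,\zeta\right)\right\rangle_{\zeta} &=& \mbox{erf}\left(\frac{a}{\sqrt{1+2b^{2}}}\right)\ , \nonumber \end{eqnarray} which is applied here with $a=\langle u\rangle$ and $b=\sigma_u$.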
\\ Gathering these results together, we obtain a closed system of equations that describes the preference field of the agents in our society under the effect of a constant external signal in the stationary regime: \begin{eqnarray} \nonumber \langle u\rangle &=& I_0 + J_0 \gamma \mbox{erf}\left(\frac{\langle u\rangle}{\sqrt{1+2\sigma_u^2}}\right)\tilde{q}(\gamma)\ ,\\ \nonumber \sigma_u^2&=&\frac{\sigma^2}{2}\ ,\\ \nonumber \tilde{q}(\gamma)&=&\int_{0}^{\infty}\mbox{d} \tau q(\tau)e^{-\gamma \tau} \ ,\\ \nonumber q(\tau)&=&\left\langle\mbox{erf}\left(\langle u\rangle+\sigma_{u}x\right)\mbox{erf}\left(\frac{\langle u\rangle+\rho(\tau)\sigma_{u}x}{\sqrt{1+2(1-\rho^2(\tau))\sigma_{u}^2}}\right)\right\rangle_x\ . \end{eqnarray} In order to understand to what extent the external field has influenced the society, we will focus on the overlap of the asymptotic system state with the pattern $\bm{\xi}$: \begin{equation} \label{m} m(t) = \frac{1}{N}\sum_i \xi_i g_i(t) \ . \end{equation} This quantity measures whether the expressed opinions in the society are similar to those induced by the signal $\bm{\xi}$ ({\it i.e.} $m\sim O(1)$) or not. The overlap is self-averaging in the $N \to \infty$ limit and its value at late times, $t \to \infty$, is given by \begin{equation}\label{self_m1} m = \frac{1}{N}\sum_i \xi_i \langle g_i\rangle = \langle g \rangle \ , \end{equation} which therefore satisfies the following equation: \begin{equation}\label{self_m2} m = \mbox{erf}\left(\frac{ \langle u \rangle }{\sqrt{1+2\sigma_{u}^2}}\right)=\mbox{erf}\left(\frac{ I_0 + J_0 \gamma \tilde{q}(\gamma)m}{\sqrt{1+2\sigma_{u}^2}}\right)\ . \end{equation} We chose to write the last equation in a self-consistent form to show that under certain conditions we can expect a non-zero value of $m$ even when the signal is finally removed, $I_0=0$. Indeed, for sufficiently large values of the product $J_0\tilde{q}(\gamma)$ in comparison with the noise contribution quantified by $\sigma$, the equation admits a non-zero solution. For example, if the society is exposed to a signal $I_0=1$ for long times, given $J_0=6$ and $\gamma=10^{-3}$, its overlap will be close to 1 and will remain close to 1 when the signal is removed (we will see that under the same conditions this value matches the results obtained with the simulated dynamics). In other words, the society is potentially able to remember the opinions induced by the signal even when it is removed, after having been exposed to such a signal for a sufficiently long time. To shed more light on this mechanism we now focus on the evolution of the couplings and on the value they reach in the stationary regime, for a society described by a choice of the parameters that admits a non-zero solution of Eq. \eqref{self_m2}. \begin{figure} \centering \includegraphics[scale=0.6]{one_pattern_modJ_modJa_I0_1.eps} \includegraphics[scale=0.6]{one_pattern_QR_I0_1.eps} \caption{Simulated dynamics with a persistent external signal. Upper panel: results from simulations for $|J|/J_0$ are compared with the analytical prediction (Eq. \eqref{J_analytic}) and are seen to approach the asymptotic Mattis couplings (Eq. \eqref{mattis}). Lower panel: the overlap $Q$ between the simulated and the analytical $J$ is compared with the overlap between the simulated $J$ and the Mattis couplings. In these simulations $J_0=6$, $I_0=1$ and $\gamma=0.1$.} \label{one_pattern_m} \end{figure}
The upper panel of Fig.~\ref{one_pattern_m} shows how the couplings evolve in a numerical simulation starting from a society without interactions between agents. The continuous curve shows the norm of the $J_{ij}$ matrix, defined as \begin{equation} \label{absJ} |J|=\sqrt{\sum_{ij}J_{ij}^2}\ . \end{equation} For sufficiently large $I_0$, a simple analytic argument allows us to give an accurate prediction of this growth. In the presence of a signal $I_i=I_0\xi_i$, the preference fields rapidly orient towards the signal, with large absolute values of $u_i$; the $g_i$ will therefore soon be approximately equal to $\xi_i$. Using this information we can integrate Eq. (\ref{eqJs}) to obtain: \begin{eqnarray}\label{J_analytic} J_{ij}(t)\approx \frac{J_0\cdot \gamma}{N} \int_{0}^{t}\mbox{d}s \ \xi_i\xi_j e^{-\gamma(t-s)}= \frac{J_0}{N} \xi_i\xi_j (1-e^{-\gamma t}) \ , \end{eqnarray} where the memory time scale appears explicitly. The average absolute value of this analytic prediction is also shown, in units of $J_0$, by the dashed line in the upper panel of Fig.~\ref{one_pattern_m}, and superimposes nicely on the numerical results. As the system quickly approaches a stationary state aligned with the external signal, interactions further stabilizing that very same stationary state will be established after a time set by the memory time scale, chosen to be $\tau_\gamma= 10$ in this simulation. \\ The sign structure of the predicted couplings is also noteworthy. For long times the $J_{ij}$ approach the Mattis couplings \cite{mattis1976solvable}: \begin{eqnarray}\label{mattis} \lim_{t\rightarrow\infty}J_{ij}(t)= \frac{J_0}{N} \xi_i\xi_j \equiv J^{M}_{ij}\ , \end{eqnarray} equivalent to the Hopfield couplings in Eq. (\ref{10}) for $p=1$. It is well known that in a system with pairwise interactions given by Mattis couplings (Eq. \eqref{mattis}) of sufficiently large amplitude, the variables spontaneously align in the directions defined by $\bm{\xi}$. In our modelled society this corresponds to the fact that the opinion pattern can be spontaneously retrieved, because the corresponding Mattis couplings have been formed as a consequence of the memory of the sustained past opinion patterns induced by the exposure to the signal $I_i=I_0\xi_i$. To verify that this is the case we define the correlation \begin{eqnarray} Q=\frac{\sum_{ij}J_{ij}J_{ij}'}{\sqrt{\sum_{ij} J_{ij}^{2}\sum_{ij} {J'}_{ij}^{2}}} \label{2} \end{eqnarray} which reveals the degree of alignment between two sets of couplings $J=(J_{ij})$ and $J'=(J'_{ij})$, with $Q=1$ implying perfect alignment and $Q=0$ the absence of any correlation. In the lower panel of Fig.~\ref{one_pattern_m} we show the correlation of the couplings observed in a simulation with both the analytic prediction (Eq. \eqref{J_analytic}) and the asymptotic Mattis couplings (Eq. \eqref{mattis}). It shows that the interactions very quickly become perfectly correlated with both the analytically predicted couplings and the Mattis couplings, although their norm is initially smaller than $|J^M|$ (see the comparison with $|J^M|/J_0$ in the upper panel of Fig.~\ref{one_pattern_m}). In conclusion, the memory of interpersonal relations developed in response to an external stimulus $\bm{\xi}$ produces in the society interactions of increasing strength, which are very quickly aligned with the Mattis couplings corresponding to $\bm{\xi}$ itself.
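For reference, a minimal numpy sketch of the simulated dynamics and of the diagnostics used in this section (the norm $|J|$, the alignment $Q$ with the Mattis couplings, and the overlap $m$) is given below; it is a simplified illustration using the parameters of Fig. \ref{one_pattern_m}, not the full simulation code.

\begin{verbatim}
import numpy as np
from scipy.special import erf

# Euler integration of the preference-field and coupling dynamics
# under a persistent signal I_i = I0 * xi_i.
N, J0, gamma, I0, sigma2, dt = 100, 6.0, 0.1, 1.0, 0.01, 0.1
rng = np.random.default_rng(0)

xi = rng.choice([-1.0, 1.0], size=N)          # perceived signal pattern
u = rng.normal(0.0, np.sqrt(sigma2 / 2), N)   # initial preference fields
J = np.zeros((N, N))                          # no interactions at t = 0

for _ in range(5000):                         # signal switched on
    g = erf(u)
    noise = rng.normal(0.0, np.sqrt(sigma2 * dt), N)
    u = u + dt * (-u + I0 * xi + J @ g) + noise
    J = J + dt * gamma * ((J0 / N) * np.outer(g, g) - J)
    np.fill_diagonal(J, 0.0)                  # no self-interaction

J_mattis = (J0 / N) * np.outer(xi, xi)        # asymptotic Mattis couplings
np.fill_diagonal(J_mattis, 0.0)
normJ = np.linalg.norm(J)                     # |J| (Frobenius norm)
Q = np.sum(J * J_mattis) / (normJ * np.linalg.norm(J_mattis))
m = np.mean(xi * erf(u))                      # overlap with the pattern
print(normJ / J0, Q, m)
\end{verbatim}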
\\ For a large enough strength of the couplings we can expect that the society will spontaneously polarize along $\bm{\xi}$, autonomously sustaining its memory even after the external signal is gone. To give evidence of this interesting phenomenon we studied different dynamics in which the external signal $\bm{\xi}$ is switched on at $t=0$ and removed at a time $t_r$. We then focus on the \textit{asymptotic} average overlap for \begin{enumerate} \item dynamics in which the couplings keep evolving after $t_r$ (evolving $J_{ij}$); \item dynamics in which the couplings are fixed to the value they had at $t_r$ (frozen $J_{ij}$). \end{enumerate} \begin{figure} \centering \includegraphics[scale=0.6]{one_pattern_m_nofreeze_I0_1.eps} \caption{Simulated dynamics with a persistent external signal. The figure shows the value of $m$ at stationarity as a function of the time $t_r$ at which the signal is removed, for two kinds of dynamics: 1) dynamics with evolving couplings after $t_r$; 2) dynamics with frozen couplings after $t_r$. The dashed line represents the amplitude of the couplings norm $|J(t_r)|$ at $t_r$. The parameters used are the same as in Fig. \ref{one_pattern_m}.} \label{one_pattern_m2} \end{figure} The average overlap for both versions of the dynamics has been calculated as the average of the simulated overlap at stationarity over the last 1000 units of time, and is plotted in Fig. \ref{one_pattern_m2} as a function of $t_r$. \\ The dynamics with frozen $J_{ij}$ shows at which $t_r$ the couplings have grown large enough for a spontaneous non-zero $m$ to arise. We note here that this overlap $m$ corresponds to the solution of the self-consistent Eq. \eqref{self_m2} derived for the stationary state, once the amplitude of the asymptotic couplings $J_0 \tilde{q}(\gamma)$ is replaced by the amplitude of the frozen couplings $|J(t_r)|$ (also reported in Fig. \ref{one_pattern_m2}) and $I_0=0$. According to this equation, the critical value of the strength of Mattis-type couplings needed for the model to exhibit spontaneous order with $m>0$ is $|J_c|=0.148 J_0$, and it corresponds to the $|J|=(0.148 \pm 0.003)J_0$ reached at the minimum $t_r$ for which $m>0$ in simulations with frozen $J_{ij}$. We will come back to this minimum time $t_{r}$ in a different context in Section \ref{sec:Learning with signal removal}. \\ Interestingly, the dynamics with evolving $J_{ij}$ is instead characterized by an \textit{asymptotic} spontaneous overlap arising in a discontinuous way even before that minimum $t_r$. Indeed, at variance with the frozen case, the evolving couplings will continue to grow after $t_r$, even in the absence of the external signal, as a result of the interactions embedded in the system. If this residual reinforcement of the couplings is large enough, a positive feedback loop will occur between the evolving couplings and the degree of order supported by the interactions currently established in the society. The degree of order will be strengthened by strong interactions, which will in turn grow further due to a higher degree of persistence of the expressed opinion pattern. As soon as this self-sustained mechanism takes place, it leads to a strongly polarized society, reflected by the large value of $m$ in Fig. \ref{one_pattern_m2}.\\ In the following sections we will study more complicated cases in which the external signal is composed of a sequence of different stimuli, both with and without interposed periods of complete absence of stimuli.
The understanding of the system's reaction to a single external stimulus gained in the present section, as well as the quantities introduced here, will be used in the next sections to understand if and how the society is able to learn and subsequently retrieve the different opinion patterns to which it has been exposed in the past. \section{Periodic external signal} \label{sec:per_ext_sig} As a second step in the exploration of our society's behaviour we change the structure of the external information, passing from a constant signal $I_0\bm{\xi}$ to a signal that changes in time. The aim is to mimic a sequence of different news items or events, labeled by the index $\mu \in \{1, \dots, p\}$, to which the society is exposed and reacts. As before, the variables $\bm{\xi}^\mu$ represent the way in which the population reacts to a single piece of information, and are modelled as random variables with entries $\xi_i^\mu \in \{\pm 1\}$. The different contributions $I^\mu=I_0\bm{\xi}^\mu$ appear in an ordered sequence, each switched on for a time span $\Delta_0$ before being substituted by the following piece of news in the sequence. This process is repeated in a cyclic fashion. The resulting expression for the signal is thus \begin{equation}\label{18} I_i(t)= I_0 \xi_i^{\mu(t)}\ , \end{equation} with \begin{equation}\label{32} \mu(t)= 1+\left \lfloor{\frac{t}{\Delta_0}}\right \rfloor \hspace{-0.3cm}\mod{p}\ , \end{equation} where $\left \lfloor{\cdot}\right \rfloor$ denotes the integer part. This series of repetitive signals represents a simple but effective way to model series of events that appear repeatedly on television or in newspapers.\\ As introduced in Section \ref{Hopfield_couplings}, we expect that in the long run the exposure of the society to a sequence of signals defined by Eqs. \eqref{18} and \eqref{32} will result in the development of couplings that are similar to the Hopfield couplings (Eq. (\ref{10})). Indeed, as before, given a signal with large amplitude $I_0$, we can assume that the opinions $g_i$ rapidly become equal to the opinion pattern proposed every time a signal spike occurs, so that $\bm{g}(t)=\bm{\xi}^{\mu(t)}$. For this assumption to provide an accurate approximation of the full dynamics, we also assume that $\Delta_0\gg 1$, which allows us to neglect transient behaviour after the switches of the external signal. Using this approximation we can calculate (see Appendix A for details) the couplings $J_{ij}$ developed in the society at times $t$ that are multiples of the presentation time $p\Delta_0$: \begin{eqnarray} \label{J_signal_nodrop_2} J_{ij} = \frac{J_0}{N} (e^{\gamma \Delta_0}-1) \frac{(e^{-\gamma t}-1)}{(1-e^{\gamma \Delta_0 p})}\sum_{\mu=1}^{p}\xi_i^\mu\xi_j^\mu e^{(\mu-1)\Delta_0\gamma}\ . \end{eqnarray} In the long time limit $t \to \infty$ (i.e. for a number of presentation cycles $N_p\to \infty$, with $t=N_p\Delta_0 p$) this tends to \begin{eqnarray} \label{J_inf} \lim_{N_p\to \infty} J_{ij}(t=N_p\Delta_0 p) = J_{ij,\infty}(p)= \frac{J_0}{N} \frac{(e^{\gamma \Delta_0}-1)}{(e^{\gamma \Delta_0 p}-1)}\sum_{\mu=1}^{p}\xi_i^\mu\xi_j^\mu e^{(\mu-1)\Delta_0\gamma}\ . \label{12} \end{eqnarray} Eq. (\ref{J_inf}) shows explicitly that the learning protocol allows the couplings to approach in the long run a weighted version of the Hopfield couplings, where each pattern's weight is a function of its presentation order $\mu$.
This means that we expect an uneven storing of the patterns: the pattern seen last is remembered best, while the memory of the previous ones decays exponentially on a time scale $\tau_{\gamma}$, in a similar way as in some generalized Hopfield models of forgetful memories \cite{nadal1986networks, parisi1986memory, van1988forgetful}. Finally, expanding Eq. (\ref{J_inf}) for small $\gamma\Delta_0$ (many repetitions of the news presented within a memory time) we obtain \begin{equation} \label{27} J_{ij,\infty}= \frac{J_0}{pN} \sum_{\mu=1}^{p}\xi_i^\mu\xi_j^\mu + \frac{J_0}{pN} \sum_{\mu=1}^{p}\xi_i^\mu\xi_j^\mu (\mu-1)\Delta_0\gamma + o((\Delta_0\gamma)^2)\ . \end{equation} Note that the first term is proportional to the Hopfield couplings (see Eq. (\ref{10})), which can thus be thought of as a zeroth-order approximation to our couplings.\\ Similarly to what happens in Hopfield neural networks, our society will be able in the long run to store and easily retrieve the opinion patterns $\bm{\xi}^\mu$. The level of retrieval of the society for each of the patterns $\mu$ can be evaluated using a set of overlaps $m^\mu$: \begin{equation} m^\mu(t)=\frac{1}{N}\sum_{i}\xi_i^\mu g_i(t)\ . \label{3}\\ \end{equation} These overlaps tell us whether the system is aligned with one of the opinion configurations previously presented ($m^\mu = O(1)$) or not ($m^\mu = O(1/\sqrt{N})$). The value of $m^\mu$ in the long time limit can be obtained by assuming the Gaussianity of $u_i$ (as done in the scenario of the previous section) and evaluating the average of $u_i$ in the following way: we insert the couplings evaluated in Eq. (\ref{12}) into the long-time limit of Eq. \eqref{maineq}. In the absence of a signal ($I_i=0\ \forall i$) we thus obtain $u_i \sim \mathcal{N}(\langle u_i\rangle,\sigma_u^2)$, with \begin{eqnarray} \label{g_3} \langle u_i\rangle &=& \sum_{j}J_{ij,\infty}\langle g_j\rangle \\ \label{g_4} \sigma_u^2 &=&\sigma^2/2\ , \end{eqnarray} and \begin{eqnarray} \label{g_2} \nonumber \langle g_i\rangle &=&\mbox{erf}\left(\frac{\langle u_i\rangle}{\sqrt{1+2\sigma_u^2}}\right)\\ &=& \mbox{erf}\left(\frac{(e^{\gamma \Delta_0}-1)}{(e^{\gamma \Delta_0 p}-1)}\frac{J_0 \sum_{\mu=1}^{p}\xi_i^\mu m^\mu e^{(\mu-1)\Delta_0\gamma}} {\sqrt{1+\sigma^2}}\right). \end{eqnarray} If different patterns have negligible mutual overlaps, as for uncorrelated patterns with $\frac{1}{N}\sum_{i}\xi_i^\mu \xi_i^\nu \sim \frac{1}{\sqrt{N}}$ for $\mu\neq \nu$, the equation above can have a non-trivial solution in which the society aligns with exactly one of the patterns, $\nu$. In this case $\boldsymbol{m}\simeq\{0,..0, m^\nu ,0...,0\}$ and \begin{equation}\label{8} m^\nu\simeq \mbox{erf}\left( m^\nu J_0 \frac{(e^{\gamma \Delta_0}-1)}{(e^{\gamma \Delta_0 p}-1)} \frac{e^{(\nu-1)\Delta_0\gamma}}{\sqrt{1+\sigma^2}}\right)\ . \end{equation} Note that under certain conditions the solution of Eq. (\ref{8}) is non-trivial, and it will be larger for more recently presented patterns and smaller for older ones, meaning that, if remembered at all, recent pieces of news will be better recalled by the society. \\ As a confirmation of this behaviour we simulated the dynamics of our society until a time $t_r$, at which we froze the couplings. To check whether the society has developed a memory of the $p$ external signals to which it has been exposed, after $t_r$ we apply each signal contribution again for a short time, after which we remove it and observe the response of the society in terms of the overlaps $m^\mu$ in its absence.
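As an illustration, both the asymptotic couplings of Eq. (\ref{J_inf}) and the fixed-point overlaps of Eq. (\ref{8}) can be evaluated directly; the following sketch (with the parameter values of Table \ref{table} below) iterates the self-consistency equation for each pattern.

\begin{verbatim}
import numpy as np
from scipy.special import erf

N, p, J0, gamma, Delta0, sigma2 = 100, 3, 6.0, 1e-3, 70.0, 0.01
rng = np.random.default_rng(1)
xi = rng.choice([-1.0, 1.0], size=(p, N))     # the p opinion patterns

# Asymptotic weighted Hopfield couplings J_{ij,infinity}
prefac = (np.exp(gamma * Delta0) - 1) / (np.exp(gamma * Delta0 * p) - 1)
weights = np.exp(np.arange(p) * gamma * Delta0)  # e^{(mu-1) gamma Delta0}
J_inf = (J0 / N) * prefac * np.einsum('m,mi,mj->ij', weights, xi, xi)

def overlap(nu, iters=200):
    # Iterate m = erf(amp * m) for pattern nu (0-indexed, nu = mu - 1).
    amp = J0 * prefac * weights[nu] / np.sqrt(1 + sigma2)
    m = 1.0
    for _ in range(iters):
        m = erf(amp * m)
    return m

print([overlap(nu) for nu in range(p)])  # later patterns are recalled better
\end{verbatim}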
\begin{figure} \centering \includegraphics[scale=0.5]{No_drop_sig_sim_vs_predic5_1.eps} \includegraphics[scale=0.5]{No_drop_sig_sim_vs_predic5_2.eps} \caption{Simulated dynamics with periodic external signal. Upper panel: Early dynamics of the society. The overlaps with different patterns are represented by different colours and quickly reach $m^\mu\simeq 1$ when their corresponding signal contribution $\bm\xi^\mu$ is on. Lower panel: after presenting the patterns many times in a cyclic fashion, we freeze the couplings at time $t_r=6090$ and present each of the patterns for a time very short compared to $\Delta_0$, after which the signal is removed. The analytic predictions of the overlaps in absence of signal are compared with the simulations. In this simulation we set $I_0=50$, $J_0=6$, $\gamma=10^{-3}$ and $\Delta_0=70$. }\label{11} \end{figure} \begin{table}[] \begin{tabular}{|l|lll|l|} \hline $\Delta_0=70$ & \multicolumn{1}{l|}{$I_0=1$} & \multicolumn{1}{l|}{$I_0=5$} & $I_0=50$ & Analytical \\ \hline $\overline{m^1}$ & 0.72 $\pm$ 4 $\cdot 10^{-2}$ & 0.966 $\pm$ $ 9\cdot 10^{-3}$ & 0.97 $\pm$ 1$\cdot 10^{-2}$ & 0.9905 \\ $\overline{m^2}$ & 0.74 $\pm$ 4 $\cdot 10^{-2}$ & 0.9894 $\pm$ 4$\cdot 10^{-4}$ & 0.9887 $\pm$ $6\cdot 10^{-4}$ & 0.9948 \\ $\overline{ m^3}$ & 0.75 $\pm$ 4 $\cdot 10^{-2}$ & 0.9943 $\pm$ 3$\cdot 10^{-4}$ & 0.9947 $\pm$ 2$\cdot 10^{-4}$ & 0.9973 \\ \hline \end{tabular} \caption{The table shows the values of $\overline{m^\mu}$ obtained by averaging $m^\mu$ over 100 simulations with fixed $\Delta_0$ for different values of $I_0$, against their analytical predictions. The control parameters of the simulations are $J_0=6$ and $\gamma=10^{-3}$.} \label{table} \end{table} As shown in Fig. \ref{11}, the society quickly reacts to each of the spikes after $t_r=6090$: the corresponding $m^\mu$ (highlighted in Fig. \ref{11} with different colours for different signal patterns) jumps to $1$ during the spike and relaxes to a non trivial value in absence of the external signal, where it remains stationary until the subsequent spike of a different pattern is applied. The expected stationary non trivial overlap $m^\mu$ obtained from Eq. (\ref{8}) matches the simulation results quite well. To further confirm our findings we defined the quantities $\overline{m^\mu}$ as the average of $m^\mu$ over 100 simulations, and in Table \ref{table} we compare them with their analytical predictions for different strengths $I_0$ of the signal applied during the dynamics. In this case, to obtain $m^\mu$ in each simulation we froze the couplings at $t_r$ and averaged the values of $m^\mu$ over the last 2000 steps after the corresponding subsequent signal spike. We note that the predictions always slightly overestimate the simulation results. The two main reasons for this discrepancy are that in Eq. (\ref{u3}) we neglected the contributions of the fluctuations of $g_i(t)$, and that the transients of the $u_i$ dynamics after every change of external signal were neglected in the analytical evaluation of the couplings. Moreover, the theory works better for higher $I_0$, as the assumption that the opinions align immediately to the signal becomes more accurate for large signals. Finally, we can observe that the predictions get worse for earlier external signals. Indeed, these are associated with smaller effective couplings in Eq. (\ref{maineq}), and consequently with overlap solutions that are more susceptible to the neglected fluctuations.
\begin{figure} \centering \includegraphics[scale=0.6]{Q_R_cycle5.eps} \includegraphics[scale=0.6]{modJ_modJa_no_drop5.eps} \caption{Simulations with periodic external signal. The upper panel shows the evolution of the correlation $Q$ between the simulated and the analytical $J$, and between the simulated $J$ and the Hopfield couplings $J^H/p$. The lower panel shows the evolution of $|J|$, in units of $J_0$, against its estimated analytical evolution and the norm of the Hopfield couplings over $p$, $|J^H|/p$. The control parameters are the same as in Fig. \ref{11}.} \label{No_drop_sig_QR} \end{figure}\\ The possibility to store and retrieve all the presented external signals as shown in Fig. \ref{11} is expected, given the similarity between the spontaneously formed interaction couplings and classical Hopfield couplings (see Eq. \eqref{10}). For the simulation reported in Fig. \ref{11} we indeed find that the interactions very soon align almost perfectly with the corresponding Hopfield couplings, as shown in the upper panel of Fig. \ref{No_drop_sig_QR}. The figure also shows that the analytic predictions of $J$ from Eq. \eqref{J_signal_nodrop_2} are very accurate, as the corresponding correlation satisfies $Q\simeq1$ at all times. As discussed in the previous section, the possibility for the society to retrieve the pattern induced by an external signal requires sufficiently large couplings to allow a non-zero solution of Eq. \eqref{8}. In the bottom panel of Fig. \ref{No_drop_sig_QR} we compare the norm of $J$ (defined in Eq. \eqref{absJ}) obtained from simulations to its analytical prediction and to the norm of the Hopfield couplings over $p$, $|J^H|/p$. The analytical curve closely predicts the evolution of the simulated $|J|/J_0$, while $|J^H|/p$ overestimates the true value of $|J|$ at the beginning of the dynamics. However, at $t_r=6090$ the simulated $|J|$ has reached a stationary value very close to $|J^H|/p$. The society has been irremediably shaped by the opinion patterns $\bm{\xi}^\mu$. \section{Intermittent external signal} \label{sec:Learning with signal removal} In this section we study a still stylized but slightly more complicated scenario. We analyse the response of the society to intermittent external information. We keep the cyclic presentation mode described in the previous section. However, the different opinion patterns are no longer influencing the society for the entire presentation period $\Delta_0$, but only for a fraction $\theta \Delta_0$ of each period, with $\theta<1$. The signal is absent for the remainder $\Delta_1=(1-\theta)\Delta_0$ of the presentation period: \begin{eqnarray}\label{sig_rem} I_i(t)= \begin{cases} I_0 \xi_i^{\mu(t)} \hspace{2cm} &n\Delta_0 < t \le n\Delta_0 + \theta \Delta_0\\ 0 \hspace{2cm} &\mbox{otherwise} \end{cases} \end{eqnarray} with $n \in \mathbb{N}$ and \begin{equation}\label{sig_rem2} \mu(t)= 1+\left \lfloor{\frac{t}{\Delta_0}}\right \rfloor \hspace{-0.3cm}\mod{p}\ . \end{equation} In this way we represent a society hit by a periodic sequence of different strong stimuli, such as repetitive political propaganda or a series of shocking events (e.g. terrorist attacks), alternating with periods of absence of external information. Questions that arise in such a scenario are: Will the society be shaped by these shocks? What is the smallest fraction $\theta$ of time for which the system must be exposed to external stimuli so that the society can still spontaneously retain the information presented?
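The intermittent presentation protocol of Eqs. \eqref{sig_rem} and \eqref{sig_rem2} is straightforward to implement; the following minimal Python sketch (with assumed, purely illustrative sizes and parameters) generates the corresponding external field.

\begin{verbatim}
# Minimal sketch of the intermittent external signal defined above:
# pattern mu(t) is applied at strength I0 during the first
# theta*Delta0 of each presentation period and switched off otherwise.
import numpy as np

rng = np.random.default_rng(0)
N, p = 100, 3                          # illustrative sizes
I0, Delta0, theta = 5.0, 100.0, 0.7    # illustrative parameters
xi = rng.choice([-1, 1], size=(p, N))  # opinion patterns xi^mu

def signal(t):
    """External field I(t) of the intermittent protocol."""
    mu = int(t // Delta0) % p          # cyclic presentation order
    if (t % Delta0) < theta * Delta0:  # signal on for theta*Delta0
        return I0 * xi[mu]
    return np.zeros(N)                 # off for the rest of the period
\end{verbatim}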
\\ To evaluate the couplings in this case, we assume that during the time $\theta\Delta_0$ in which the signal $I_0{\bm \xi}^\mu$ is on, the preference fields immediately align and $\bm{g} = \bm{\xi}^\mu$, which requires the signal strength $I_0$ to be sufficiently large. As long as the society remains unable to retain the presented patterns, we find that, as soon as the signal is removed, the preference fields very quickly decay to $0$ and remain small during the time $\Delta_1$ in which the signal is off. Once the society has been exposed to sufficiently many presentation cycles, couplings of sufficient strength may have developed, allowing the society to retain information about the latest pattern presented even when the signal is removed. When evaluating the couplings for this situation, we assume for simplicity that the system remains nearly fully aligned with the previously presented pattern, $\bm{g}\simeq\bm{\xi}^\mu$, even after the signal is turned off. Figure \ref{13} shows a simulation exhibiting a transition from an early time regime, where information is not retained after signal removal, to a late time regime, where the system remains aligned with a signal even at times when the signal is switched off. In Fig. \ref{33}, we present a zoom into both the early time and the late time regimes. Shaded rectangles in the figure represent the intervals $\theta\Delta_0$ in which an external signal is present. At early times, when the signal is switched off, the opinions take some time to disalign from it. This extra time, which we will denote by $\theta' \Delta_0$, is not easy to calculate; however, we can give a rough estimate of it by assuming that the preference field $u_i(t)$ freely decays to zero when the signal is removed (see \ref{appendix_thetaprime}). \begin{figure} \centering \includegraphics[scale=0.39]{sig_rem5} \caption{Simulated dynamics with intermittent signal. Three patterns are presented in a cyclic fashion. Each pattern presentation for a time $\theta \Delta_0=70$ is followed by a period $\Delta_1=30$ during which there is no external signal. Each time a pattern is presented the corresponding overlap $m^\mu$ very quickly approaches 1, and it decays to smaller values when the signal is removed. At early times, couplings are still too small to retain previously presented information and overlaps decay to small ${\cal O}(1/\sqrt{N})$ values when external signals are removed. However, after a time $t^* \simeq 600$ the couplings are able to sustain the opinion patterns even when the signal is removed. In this simulation $I_0=5$, $J_0=6$ and $\gamma=10^{-3}$.}\label{13} \end{figure} \begin{figure} \includegraphics[scale=0.27]{sig_rem5_v1} \includegraphics[scale=0.27]{sig_rem5_v2} \caption{The figures represent a zoom into the early time regime (left panel) and into the late time regime (right panel) of Fig. \ref{13}. The coloured rectangles represent the time periods in which a signal is switched on. After the signal removal at early times the value of $m^\mu$ drops to significantly lower values, whereas it remains much closer to $m^\mu=1$ at late times.} \label{33} \end{figure} We define the time $t^*$, in multiples of $p\Delta_0$, as the time of the last cycle of external stimuli that is still insufficient to create couplings of the strength needed for the society to retain the previously presented information. Using the signal structure defined in Eq.
\eqref{sig_rem}, we can derive an expression for the couplings for $t<t^*$ and for $t>t^*$ under the simplifying assumptions made above (details of the calculations in \ref{appendix_drop}): \begin{eqnarray}\label{J_a_1} J_{ij}(t<t^*)=\frac{J_0}{N}(\mbox{e}^{\gamma\Delta_0(\theta+\theta')}-1)\mbox{e}^{-\gamma t}\frac{(1-e^{\gamma t})}{(1-e^{\gamma \Delta_0 p})} \sum_{\mu=1}^p \xi_i^\mu \xi_j^\mu \mbox{e}^{\Delta_0\gamma(\mu-1)} \end{eqnarray} \begin{eqnarray}\label{J_a_2} J_{ij} ( t>t^*) &=&\nonumber \frac{J_0}{N} e^{-\gamma t}\left((e^{\gamma \Delta_0(\theta+\theta')}-e^{\gamma \Delta_0})\frac{(1-e^{\gamma t^*})}{(1-e^{\gamma \Delta_0 p})} \right. \nonumber \\ &\ & + \left. (e^{\gamma \Delta_0}-1)\frac{(1-e^{\gamma t})}{(1-e^{\gamma \Delta_0 p})}\right) \sum_{\mu=1}^{p}\xi_i^\mu\xi_j^\mu e^{(\mu-1)\Delta_0\gamma} \nonumber \\ \end{eqnarray} The couplings in Eqs. \eqref{J_a_1} and \eqref{J_a_2} are for simplicity evaluated only at integer multiples of $\Delta_0$. As shown in Fig. \ref{sig_rem_nodrop}, the couplings thus predicted compare remarkably well with those evaluated in a numerical simulation of the dynamics as presented in Fig. \ref{33}. However, given the approximations used in the estimation of $\theta'$, we cannot expect perfect agreement between the analytical prediction and the simulations (the limitations of our approach are discussed in \ref{appendix_I0}). \begin{figure} \centering \includegraphics[scale=0.6]{sig_rem_Q5_2.eps} \includegraphics[scale=0.6]{sig_rem_modJ5_2.eps} \caption{The upper panel shows the evolution in time of the correlation $Q$ between the simulated $J$, its analytical prediction and the Hopfield couplings over $p$, $J^H/p$. The lower panel compares the norm of $J/J_0$ from simulations to its analytical prediction and to the norm of $J^H/p$. The control parameters are the same as in Fig. \ref{13}.} \label{sig_rem_nodrop} \end{figure} In Fig. \ref{sig_rem_nodrop}, the choice of parameters is such that the interactions rapidly align with the Hopfield couplings, and their norm grows along the dynamics, eventually granting retrieval of the signal patterns after a finite time $t^*$. \\ To determine $t^*$, we use these equations to obtain an expression for the couplings $J$ at time $t^*+(\theta+\theta') \Delta_0$ (see \ref{appendix_drop} for details of the calculation) and use these in the self-consistency equation for $m^1$ (also derived in the same appendix) \begin{equation} \label{m_sig_drop} m^1 =\mbox{erf} \left( m^1 \frac{J_0(1-\mbox{e}^{-\gamma\Delta_0(\theta+\theta')})}{\sqrt{(1+\sigma^2)}} \frac{(1-e^{-\gamma (t^*+\Delta_0 p)})}{(1- e^{-\gamma \Delta_0 p})}\right)\ . \end{equation} The value of $t^*$ is then obtained by requiring that Eq. \eqref{m_sig_drop} has a non trivial solution, which gives \begin{eqnarray} \label{t*} t^*= -\Delta_0p \left[1 + \frac{1}{\gamma \Delta_0 p }\log \left(1 - \frac{\sqrt{(1+\sigma^2)}}{J_0(1-\mbox{e}^{-\gamma\Delta_0(\theta+\theta')})}\frac{\sqrt{\pi}}{2}(1-e^{-\gamma \Delta_0 p}) \right) \right] \ .
\end{eqnarray} Note that $t^*$ increases with decreasing $\theta$, and it will eventually diverge (and a finite $t^*$ will cease to exist) as $\theta$ is decreased below $\theta_{\mbox{min}}$, \begin{eqnarray} \theta_{\mbox{min}}&=&-\frac{1}{\gamma \Delta_0}\log\left(1-\frac{\sqrt{\pi(1+\sigma^2)}}{2J_0}(1-\mbox{e}^{-\gamma\Delta_0 p})\right)-\theta'\\ \nonumber &=& -\frac{1}{\gamma \Delta_0}\log\left(1-\frac{\sqrt{\pi(1+\sigma^2)}}{2J_0}(1-\mbox{e}^{-\gamma\Delta_0 p})\right)- \frac{1}{\Delta_0}\log \left(\frac{I_0(1-e^{-\Delta_0\theta})}{0.74}\right) \end{eqnarray} for which the argument of the logarithm in Eq. \eqref{t*} vanishes, and where $\theta'$ is evaluated in \ref{appendix_thetaprime}. The solution of this equation can be found numerically, and the resulting behaviour as a function of $1/\Delta_0$ is shown in Fig. \ref{7}. The condition $\theta > \theta_{\mbox{min}}$ thus guarantees the existence of a finite time $t^*$ at which persistent memory starts to form, and at least one of the patterns stored can be recovered\footnote{For $\theta > \theta_{\mbox{min}}$ at least one pattern will be recovered, but this does not guarantee that the first pattern of the cyclically repeated sequence is among those recovered.}. The existence of a minimum value of $\theta$ required for the society to be able to spontaneously retrieve the information contained in the signals presented earlier is of immediate practical relevance. For advertisement campaigns, for instance, it defines the minimum fraction of time needed for a repeatedly presented signal to permanently impress the audience as a collective body. In the domain of news, it would, for instance, allow one to assess whether or not repeated news items might leave a subtle persistent trace in the society and produce otherwise unpredictable collective responses.\\ To verify our predictions of $\theta_{\mbox{min}}$ we simulated dynamics with external signals of different amplitudes $I_0$ until stationary interaction couplings were reached. We then froze the couplings and counted how many times the society recovers at least one of the patterns\footnote{This also includes mixture states with $\mathbf{m}=(m^1, m^2, m^3)$, where $m^1\ne 0$, $m^2\ne 0$ and $m^3\ne 0$, which are encountered in some instances.} after a signal spike. Recovery is reached when the corresponding overlap in absence of external signal satisfies the threshold condition $m^\mu>0.4$. This recovery threshold has been chosen significantly higher than the overlap ($\sim 1/\sqrt{N} = 0.1$) expected if the system state is uncorrelated with the pattern, but not too high, in order not to exclude recovery with a low $O(1)$ overlap given the system parameters. We finally estimate $\theta_{\mbox{min}}$ from simulations as the smallest $\theta$ for which at least half of $50$ trial runs of the dynamics show such retrieval behaviour. \begin{figure} \centering \includegraphics[scale=0.6]{theta_min_vs_inversedelta0_3.eps} \caption{Simulated relation between the minimum $\theta$ necessary for the recovery of at least one pattern and the inverse of the time $\Delta_0$, compared with its analytical estimate for $\gamma=10^{-3}$ and different signal strengths.}\label{7} \end{figure} In Fig. \ref{7} we plot $\theta_{\mbox{min}}$ for different values of $I_0$ as a function of the inverse total presentation time $1/\Delta_0$, alongside the analytic prediction for $\theta_{\mbox{min}}$ obtained above.
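Since $\theta'$ itself depends on $\theta$, the equation for $\theta_{\mbox{min}}$ above is implicit and, as stated, must be solved numerically. A minimal Python sketch of one such solution, by fixed-point iteration, is given below; $J_0=6$ and $\gamma=10^{-3}$ follow the simulations, while $\sigma=0.1$ is the assumed illustrative noise level used earlier, and convergence of the iteration is assumed rather than guaranteed.

\begin{verbatim}
# Minimal sketch: numerical solution of the implicit equation for
# theta_min, with theta' evaluated at theta itself (fixed-point
# iteration).  sigma is an assumed value; J0, gamma as in the figures.
import math

J0, gamma, sigma = 6.0, 1e-3, 0.1    # sigma assumed (illustrative)

def theta_min(Delta0, p, I0, n_iter=200):
    theta = 0.5                      # initial guess
    for _ in range(n_iter):
        # theta' as estimated in the appendix (depends on theta)
        theta_p = math.log(I0 * (1.0 - math.exp(-Delta0 * theta))
                           / 0.74) / Delta0
        theta = (-math.log(1.0
                 - math.sqrt(math.pi * (1.0 + sigma ** 2)) / (2.0 * J0)
                 * (1.0 - math.exp(-gamma * Delta0 * p)))
                 / (gamma * Delta0)) - theta_p
    return theta

for Delta0 in (50.0, 100.0, 200.0):
    print(f"Delta0={Delta0:5.0f}: "
          f"theta_min ~ {theta_min(Delta0, 3, 5.0):.3f}")
\end{verbatim}

Once $\theta>\theta_{\mbox{min}}$, the same ingredients give the corresponding finite $t^*$ from Eq. \eqref{t*}.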
\\ As clearly visible in the figure, for large $\Delta_0$ the analytic curve gives a good prediction of the simulation results for all the signal strengths, confirming that there is no significant dependence on $I_0$ in this regime. It is interesting to note that in this regime the value of $(\theta_{\mbox{min}}+\theta')\Delta_0$ corresponds to the minimum time needed by the society to embed a single pattern presented with a persistent signal (the minimum $t_r$ mentioned in section \ref{sec:per_ext_sig}): \begin{equation} \theta_{\mbox{min}}+\theta'\simeq -\frac{1}{\gamma \Delta_0}\log\left(1-\frac{\sqrt{\pi(1+\sigma^2)}}{2J_0}\right)\ \end{equation} assuming $J_0>\sqrt{\pi(1+\sigma^2)}/2$. This means that at the beginning of the dynamics the society is able to maintain its orientation towards the very first pattern seen, which will be the one most easily remembered. \\ Interestingly, at small $\Delta_0$ a much stronger dependence on $I_0$ develops. The analytical curves qualitatively capture the trend of the numerical ones but overestimate their true values. This discrepancy can be related to the approximations used in the estimate of $\theta'$. When solving the dynamics in the fraction of time $\theta'\Delta_0$ we neglect the interactions between agents. In doing so we underestimate the partial memory that has started to form in the society, and consequently overestimate the $\theta_{\mbox{min}}$ needed for recovery. \\ Lastly, and very importantly, we note that both the numerical results and the analytic estimates show a decrease in $\theta_{\mbox{min}}$ at small $\Delta_0$ which is more pronounced when $I_0$ is larger. This phenomenon is a direct consequence of the fact that $\theta'$ increases with $I_0$; therefore, for larger $I_0$ the term $\Delta_0\theta'$ allows recovery with smaller $\theta_{\mbox{min}}$. In applications, the fact that $\theta_{\mbox{min}}$ reaches very small values for small $\Delta_0$ and large $I_0$ would suggest investing in short but frequent high-impact advertisements. Similarly, we can conclude that shocking events such as terror attacks, if repeated at short intervals, might leave deep traces in the society despite being very localised in time. \section{Conclusions} News of disruptive events in history such as terror attacks often appear to change the behaviour of a society and to influence how people will react to news of future events of a similar kind. In this work we introduce a simple model of opinion dynamics which includes otherwise well-studied phenomena such as homophily \cite{lazarsfeld1954friendship, mcpherson2001birds, axelrod1997dissemination, deffuant2000mixing, hegselmann2002opinion} (the tendency of people to interact more often with others who share similar opinions) and xenophobia \cite{macy2003polarization, flache2011small, baldassarri2007dynamics, mark2003culture} (the tendency to adopt opinions different from those of people with whom there has been disagreement in the past). The model includes dynamically evolving couplings, which effectively record an exponentially weighted history of co-expressed opinions between any pair of agents in the system.
We show how this mechanism allows a society to develop a collective memory of external information it had previously been exposed to, allowing it to spontaneously retrieve such information in the future when briefly triggered by exposure to that information.\\ We study the emergence of this type of collective memory both analytically and by way of simulations for three stylized scenarios representing different histories of exposure to external information: (i) information consisting of a persistent signal, (ii) information consisting of repeated presentations of a set of different signal patterns, and (iii) information consisting of repeated presentations of a set of different signal patterns separated by periods of absence of any signal. In the first scenario, the external information does not change in time; the society aligns to the signal and --- after a sufficiently long time of exposure to the signal --- will remember it in the future even when the signal is removed. In the second scenario, the society is exposed to a series of signals, corresponding to different news items. If these news items are repeated a sufficient number of times, the society is able to remember all of them, and to recall them in the future when triggered by a brief spike of the same information. This can also be true in the third scenario, in which the different news items are interspersed with phases of absence of any signal. The determining factor here is the relative length of the periods of signal presentation and signal removal. We were able to compute the critical minimal ratio of presentation time and signal removal time that allows the society to develop a persistent memory of the sequence of news and thereby to remain aligned to external information even when the signal is removed. Moreover, we demonstrated that even very short signals, if sufficiently strong and repeated sufficiently often, can guarantee the spontaneous retrieval of their information.\\ In the three scenarios analysed, polarizing signals presented to the society were able to deeply change the collective behaviour of the society described by our model, in the sense that a persistent memory of past events emerged which causes the society to react differently in the future. The condition for this to occur is always that the bare coupling constant $J_0$ in the model is sufficiently large compared to the noise level of the dynamics. Thus our model is able to capture how the collective behaviour of a society can be strongly influenced by its past events.\\ A follow-up work will concern the study of the model under more realistic assumptions about signal structures, including the presentation of news items in random order and the presentation of news with different signal intensities. An interesting further generalization that could be considered to make the model more realistic is to define interactions depending on multidimensional (rather than binary scalar) opinions, so as to represent the fact that individuals interact in ways which depend on judgments about a collection of topics. In this setting the evolving interactions, based on past interpersonal agreement or disagreement on the entire set of topics, would correlate the agents' responses to the different topics in a non trivial way. \section*{Acknowledgements} The authors acknowledge funding by the Engineering and Physical Sciences Research Council (EPSRC) through the Centre for Doctoral Training in Cross Disciplinary Approaches to Non-Equilibrium Systems (CANES, Grant Nr. EP/L015854/1).
The authors would like to thank Nishanth Sastry for his contribution at the initial stage of this project and Jean-Philippe Bouchaud, Ton Coolen, Imre Kondor and Francesca Tria for interesting discussions.
\section{Introduction} We say that a word $x$ is a {\it border} of $w$ if it is both a prefix and a suffix of $w$. Peltom\"aki \cite{Pelt1,Pelt2} recently introduced the notion of {\it privileged word}. A word $w$ is privileged if \begin{itemize} \item[(a)] it is of length $\leq 1$, or \item[(b)] it has a privileged border that appears exactly twice in $w$. \end{itemize} Here are the first few privileged words over a binary alphabet: $$ 0, 1, 00, 11, 000, 010, 101, 111, 0000,0110,1001,1111, 00000,00100,01010,01110,10001, $$ $$ 10101,11011,11111, 000000,001100,010010,011110,100001,101101,110011,111111. $$ An easy induction shows that $a^i$ is privileged for any letter $a$ and $i \geq 0$. We now recall two results of Peltom\"aki \cite{Pelt1}. \begin{theorem} Let $w$ be privileged. \begin{itemize} \item[(a)] If $t$ is a privileged prefix (resp., suffix) of $w$, then $t$ is also a suffix (resp., prefix) of $w$. \item[(b)] If $v$ is a border of $w$ then $v$ is privileged. \end{itemize} \label{pelto} \end{theorem} Define the {\it number of leading $a$'s in $w$} to be the largest integer $n$ such that $a^n$ is a prefix of $w$, and similarly for the number of trailing $a$'s. Then we have \begin{corollary} If $w$ is privileged, then the number of leading $a$'s in $w$ equals the number of trailing $a$'s. \label{cor-leading} \end{corollary} \begin{proof} Write $w = a^i z a^j$, where $z$ neither begins nor ends in $a$. Since $a^i$ is a privileged prefix of $w$, by Theorem~\ref{pelto} (a) it is also a suffix of $w$, so $j \geq i$; symmetrically, $a^j$ is a privileged suffix and hence also a prefix, so $i \geq j$. Thus $i = j$. \end{proof} We now state a useful lemma. \begin{lemma} Let $w$ be a nonempty word. Then $w$ is privileged if and only if its longest proper privileged prefix is also a suffix of $w$. \label{fors} \end{lemma} \begin{proof} $\Longrightarrow$: follows from Theorem~\ref{pelto} (a) above. $\Longleftarrow$: Let $u$ be the longest proper privileged prefix of $w$. Let $v$ be the shortest prefix of $w$ containing exactly two occurrences of $u$; this is well-defined since $u$ is a suffix of $w$. Then $v$ itself is privileged, since its privileged border $u$ appears exactly twice in it. So either $v = w$, or $|u| < |v| < |w|$ and $v$ is a longer proper privileged prefix of $w$, a contradiction. Hence $v = w$, and $w$ is privileged. \end{proof} We now prove a result on powers and privileged words. \begin{theorem} Let $w$ be any word and $k$ an integer $\geq 1$. If $w^k$ is privileged, then $w^j$ is privileged for all integers $j \ge 0$. \end{theorem} \begin{proof} Suppose $k \geq 2$. Then $w$ is a border of $w^k$, and hence by Theorem~\ref{pelto} (b) we know $w$ is privileged. It remains to show that if $w$ is privileged, then so is $w^j$ for all $j \geq 0$. We prove this by induction on $j$. The result is clearly true for $j = 0$ or $j = 1$, so assume $j \geq 2$ and $w^{j-1}$ is privileged. Let $u$ be the longest proper privileged prefix of $w^j$. If $|u| \leq |w^{j-1}|$, then $u$ is also a privileged prefix of $w^{j-1}$. Then Theorem~\ref{pelto} (a) and induction together imply that $u$ is a suffix of $w^{j-1}$. Then $u$ is also a suffix of $w^j$, and by Lemma~\ref{fors} we know $w^j$ is privileged. Otherwise $|u| > |w^{j-1}|$. Write $u=w^{j-1}y$ for some $y$, where $y$ is a proper prefix of $w$. Since $j \geq 2$, we see that $y$ is also a proper prefix of $w^{j-1}$ and hence a proper prefix of $u$. Thus $y$ is a border of $u$, and hence, by Theorem~\ref{pelto} (b), $y$ is privileged. Since $y$ is a privileged prefix of $w$, by Theorem~\ref{pelto} (a), it is also a suffix of $w$. Write $w=zy$ for some $z$. By induction we know that $w^{j-1}$ is privileged.
Since $w^{j-1}$ is a prefix of $u$, by Theorem~\ref{pelto} (a), it is also a suffix of $u$, so there exists $x$ such that $u = x w^{j-1}$. Since $u=w^{j-1}y=xw^{j-1}$, we see that $|x| = |y|$ and $x$ is a proper prefix of $w$. Thus in fact $x = y$. So $u = y w^{j-1}$. Then $$w^j=w \, w^{j-1}= (zy) w^{j-1}= z (y w^{j-1}) = zu,$$ and it follows that $u$ is a suffix of $w^j$. By Lemma~\ref{fors}, we conclude that $w^j$ is privileged. This completes the induction. \end{proof} \section{The set of privileged words} Let $\Sigma$ be a fixed alphabet and consider ${\cal P}$, the set of privileged words over $\Sigma$. We prove here that ${\cal P}$ is neither regular nor context-free. \begin{proposition} If $|\Sigma| \geq 2$, then ${\cal P}$ is not regular. \end{proposition} \begin{proof} Let $0, 1$ be distinct letters in $\Sigma$. Assume $\cal P$ is regular, and consider $L = {\cal P} \ \cap \ 0^+ 1 0^+$. By Corollary~\ref{cor-leading} we have $L = \lbrace 0^n 1 0^n \ : \ n \geq 1 \rbrace$. By the pumping lemma, $L$ is not regular, and hence neither is $\cal P$. \end{proof} \begin{proposition} If $|\Sigma| \geq 2$, then ${\cal P}$ is not context-free. \end{proposition} \begin{proof} Assume ${\cal P}$ is context-free, and consider the regular language $R=0^+10^+110^+$. By a well-known closure property of the context-free languages, $L := {\cal P} \ \cap \ R$ is context-free. We will now use Ogden's lemma \cite{Ogden} to show that $L$ is not context-free, a contradiction. We claim that $$L = \{00^a100^b1100^c \ : \ a=c \text{ and } a>b \}.$$ To see this, note that $L \subseteq R$. Thus it suffices to show that a word $w$ of the form $0^{a+1} 1 0^{b+1} 11 0^{c+1}$ is privileged if and only if $a=c$ and $a>b$. ($\Rightarrow$) Since $w$ begins and ends with $0$, by Corollary~\ref{cor-leading}, we know that $a+1 = c+1$ and so $a = c$. Suppose $b \ge a$. Then $0^{a+1}10^{a+1}$ is a privileged prefix of $w$, yet it is not a suffix of $w$. By Theorem~\ref{pelto} (a), $w$ is not privileged. Thus $a > b$. ($\Leftarrow$) Let $w=00^a100^b1100^a$ where $a>b$. Then the longest proper privileged prefix of $w$ is $0^{a+1}$, which appears again as a suffix of $w$. Thus $w$ is privileged. Now let $n$ be as in Ogden's lemma, and let $w=\underline{0^n} 10^{n-1}110^{n}$, where the first block of $n$ zeros is marked as required by Ogden's lemma. Then there exists some decomposition $w=uxvyz$ where $xvy$ contains at most $n$ `marked' characters, $xy$ contains at least 1 `marked' character, and $ux^ivy^iz \in L$ for all $i \ge 0$. We see that if either $x$ or $y$ contains a 1, then $ux^0vy^0z$ will have too few ones, and thus will not be in $L$. Otherwise, we know that $x$ lies entirely in the first block of zeros. If $y$ does not lie in the last block of zeros, then for $i=0$ we will have $a<c$, so $ux^0vy^0z \notin {\cal P} \cap R$. If $y$ does lie in the last block of zeros, then $ux^0vy^0z=00^{n-j}100^{n-1}1100^{n-k}$ for some $j,k > 0$. Since $n-j \le n-1$, the condition $a>b$ fails, so $ux^0vy^0z \notin L$. Hence no decomposition of $w$ with the required properties exists, so ${\cal P} \cap R$ is not context-free. Therefore the language of privileged words is not context-free. \end{proof} \section{A linear-time algorithm for determining if a word is privileged} In this section we present an efficient algorithm for determining if a given word is privileged.
Algorithm P: \begin{algorithmic} \Function{check-privileged}{$w$} \If {$|w| \leq 1$} \State \Return True \Else \State $T[0] \gets 0$ \State $p \gets 1$ \For{$i=1$ to $|w|-1$} \State $j \gets T[i-1]$ \While{true} \If {$w[j] = w[i]$} \State $T[i] \gets j+1$ \If {$T[i] = p$} \State $p \gets i+1$ \EndIf \State exit while loop \ElsIf{$j = 0$} \State $T[i] \gets 0$ \State exit while loop \EndIf \State $j \gets T[j-1]$ \EndWhile \EndFor \If{$p = |w|$} \State \Return True \Else \State \Return False \EndIf \EndIf \EndFunction \end{algorithmic} Our algorithm is a slightly modified version of the algorithm for building a failure table in the well-known Knuth-Morris-Pratt linear-time string-matching algorithm \cite{Knuth}. \begin{theorem} Algorithm P returns ``true'' if and only if $w$ is privileged. \end{theorem} \begin{proof} It is easy to see that if $|w|=0$ or $|w|=1$, then $w$ is privileged and the algorithm returns ``true''. Otherwise, we consider the value of $p$ at each iteration of the for-loop. We claim that at the end of each iteration of the for-loop, $p$ equals the length of the longest privileged prefix of the first $i+1$ characters of $w$. To see the claim, observe that when entering the first iteration we have $p=1$, and the first character of $w$ is itself its longest privileged prefix, of length $1$. This establishes our base case. Otherwise, we assume that $p$ is the length of the longest privileged prefix of the first $i$ characters of $w$ at the beginning of the for-loop, and prove our claim for the end of this iteration. We note that $T[i]$ represents the length of the longest proper subword $u$ which is both a prefix and a suffix of the first $i+1$ characters of $w$ (the word ``read so far''). If $T[i]=p$, we know $u$ is privileged, and $p$ is increased to $i+1$. Since $p$ is increased as soon as this equality is found, this is the first time $u$ is repeated in $w$, and thus the word read so far is privileged. This proves our claim. After $w$ has been completely read by our algorithm, $p$ represents the length of the longest privileged prefix of $w$. The algorithm returns ``true'' if and only if $p=|w|$, in which case $w$ is privileged. \end{proof} Next, we have \begin{theorem} Algorithm P runs in $O(n)$ time, where $n = |w|$. \end{theorem} \begin{proof} Starting with the KMP algorithm, we have added one extra \emph{if} statement in the main loop, allowing this algorithm to run in the same $O(|w|)$ time bound as the original algorithm. More formally, we consider the number of times the inner while loop is executed, as all else takes constant time. The first time the while loop is executed, $i=1$ and $j=0$. Upon each iteration, we see that either \begin{enumerate} \item $i$ is incremented by 1, and $j$ is incremented by at most 1; or \item $j$ decreases. \end{enumerate} We see that $i$ is incremented by exactly 1 when $w[j]=w[i]$ or $j=0$, due to moving to the next iteration of the for-loop. When $j=0$, then $j$ will remain 0 at the beginning of the next execution of the while loop. When $w[i]=w[j]$, then $j$ will be set to $j+1$ in the next execution of the while loop. If neither of the above cases is fulfilled, we see that $j$ is set to $T[j-1]$, which is known by a property of the failure array to be strictly less than $j$. With these cases, we see that either $i$ increases or $i-j$ increases. Since the algorithm terminates when $i=|w|-1$, $i$ will increase exactly $n-2$ times, where $n = |w|$. Also, since $j<i$ at each stage of the algorithm, $i-j$ can increase at most $n-3$ times.
Since these are the only possible cases, the while loop will execute no more than $2n-5$ times. Thus, Algorithm P takes $O(n)$ time to complete. \end{proof} \section{A lower bound on the number of privileged binary words} Let $B(n)$ denote the number of privileged binary words of length $n$. We observe that if $x = 0^t 1 w 1 0^t$ and $w$ contains no occurrences of $0^t$, then $x$ is privileged, since its privileged border $0^t$ occurs exactly twice in $x$. By choosing the appropriate value of $t$, we get our lower bound. First, though, we need a detour into generalized Fibonacci sequences. We need to count the number of words of length $n$ that contain no occurrence of $0^t$. As is well-known \cite[p.\ 269]{Knuth0} and easily proved, this is $G_n^{(t)}$, where $$ G_n^{(t)} = \begin{cases} 2^n, & \text{if $0 \leq n < t$}; \\ G_{n-1}^{(t)} + G_{n-2}^{(t)} + \cdots + G_{n-t}^{(t)}, & \text{if $n \geq t$}. \end{cases} $$ We point out that in the case where $t = 2$, this is $F_{n+2}$, the $(n+2)$nd Fibonacci number, where $F_0 = 0$, $F_1 = 1$, and $F_n = F_{n-1} + F_{n-2}$. It is well-known from the theory of linear recurrences that $$G_n^{(t)} = \Theta(\gamma_t^n),$$ where $1 < \gamma_t < 2$ is the unique root in $(1,2)$ of the equation $x^t - x^{t-1} - \cdots - x - 1 = 0$. Since $\gamma_t^t - \gamma_t^{t-1} - \cdots - \gamma_t - 1 = 0$, multiplying by $\gamma_t - 1$ we get $\gamma_t^{t+1} - 2 \gamma_t^t + 1 = 0$, so $\gamma_t = 2- \gamma_t^{-t}$. The next step is to find a good lower bound on $\gamma_t$. \begin{lemma} Let $s \geq 2$ be an integer and let $\beta$ be a real number with $0 \leq \beta \leq {6 \over s}$. Then $$ 2^s - \beta s 2^{s-1} \leq (2-\beta)^s .$$ \label{ineq-lem} \end{lemma} \begin{proof} For $s = 2$, the claim is $4-4\beta \leq (2-\beta)^2 = 4 - 4\beta + \beta^2$. Otherwise, assume $s \geq 3$. The result is clearly true for $\beta = 0$, so assume $\beta > 0$. By the binomial formula, we have \begin{eqnarray} (2-\beta)^s &=& \sum_{0 \leq i \leq s} 2^{s-i} (-\beta)^i {s \choose i} \notag \\ &=& 2^s - \beta s 2^{s-1} + \sum_{2 \leq i \leq s} 2^{s-i} (-\beta)^i {s \choose i} \notag \\ &=& 2^s - \beta s 2^{s-1} + \sum_{1 \leq j \leq (s-1)/2} \left( 2^{s-2j} \beta^{2j} {s \choose {2j}} - 2^{s-2j-1} \beta^{2j+1} {s \choose {2j+1}} \right) \label{s1} \\ &+& \begin{cases} \beta^s, & \text{if $s$ even}; \\ 0, & \text{otherwise.} \end{cases} \notag \end{eqnarray} It therefore suffices to show that each term of the sum \eqref{s1} is nonnegative, or, equivalently, that $$ 2^{s-2j} \beta^{2j} {s \choose {2j}} \geq 2^{s-2j-1} \beta^{2j+1} {s \choose {2j+1}} $$ for $1 \leq j \leq (s-1)/2$. Now $\beta \leq {6 \over s}$ by hypothesis, so $\beta \leq {6 \over {s-2}}$. Hence $\beta s -2\beta \leq 6$. Adding $2\beta - 2$ to both sides we get $\beta s - 2 \leq 4 + 2\beta$, and so ${{\beta s - 2} \over {2 + \beta}} \leq 2$. If $i \geq 2 \geq {{\beta s - 2} \over {2 + \beta}}$ then $(2+\beta)i \geq \beta s - 2$, so $2(i+1) \geq \beta(s-i)$, and $${2 \over \beta} \geq {{s-i} \over {i+1}} = {{s \choose {i+1}} \over {s \choose i}} .$$ Thus $2 {s \choose i} \geq \beta {s \choose {i+1}}$. Let $i = 2j$, and multiply both sides by $2^{s-2j-1} \beta^{2j}$ to get $2^{s-2j} \beta^{2j} {s \choose {2j}} \geq 2^{s-2j-1} \beta^{2j+1} {s \choose {2j+1}}$, which is what we needed. \end{proof} \begin{theorem} Let $t \geq 2$ be an integer and define $$\alpha_t = 2 - {1 \over {2^t - {t\over 2} - {t^2 \over {2^t}}}} .$$ Then $\alpha_t \leq 2 - \alpha_t^{-t}$.
\label{alph-thm} \end{theorem} \begin{proof} It is easy to verify that $$ {{3t^2} \over 4} \geq {{t^3} \over {2^t}} + {{t^4}\over {2^{2t}}} $$ for all real $t \geq 2$. Hence $$ 0 \leq {{3t^2} \over 4} - {{t^3} \over {2^t}} - {{t^4}\over {2^{2t}}},$$ and, adding $t2^{t-1}$ to both sides, we get \begin{eqnarray*} t 2^{t-1} &\leq& t 2^{t-1} +{{3t^2} \over 4} - {{t^3} \over {2^t}} - {{t^4}\over {2^{2t}}} \\ &=& \left( {t \over 2} + {{t^2} \over {2^t}} \right) \left( 2^t - {t \over 2} - {{t^2} \over {2^t}} \right) . \end{eqnarray*} Setting $\beta_t = {1 \over {2^t - {t \over 2} - {{t^2} \over {2^t }}}} $, we therefore have $$ \beta_t t 2^{t-1} \leq {t \over 2} + {{t^2} \over {2^t}} ,$$ or $$ -\beta_t t 2^{t-1} \geq -{t \over 2} - {{t^2} \over {2^t}}.$$ Add $2^t$ to both sides to get $$ 2^t - \beta_t t 2^{t-1} \geq 2^t - {t \over 2} - {{t^2} \over {2^t}}.$$ Now it is easily verified that $\beta_t \leq 6/t$ for $t\geq 2$, so we can apply Lemma~\ref{ineq-lem} with $s = t$ and $\beta = \beta_t$ to get $2^t - \beta_t t 2^{t-1} \leq (2-\beta_t)^t$. It follows that $$ (2-\beta_t)^t \geq 2^t - {t \over 2} - {{t^2} \over {2^t}},$$ and so $$ \beta_t \geq (2-\beta_t)^{-t}.$$ It follows that $$ 2 - \beta_t \leq 2 - (2-\beta_t)^{-t}.$$ Since $\alpha_t = 2- \beta_t$, we get $$\alpha_t \leq 2 - \alpha_t^{-t},$$ as desired. \end{proof} We can now apply this to get a bound on $G_n^{(t)}$. \begin{corollary} Let $t \geq 2$ be an integer and $n \geq 0$. Then $G_n^{(t)} \geq \alpha_t^n$, where $\alpha_t = 2 - {1 \over {2^t - {t\over 2} - {{t^2} \over {2^t}}}} < 2$. \end{corollary} \begin{proof} By induction on $n$. Clearly $G_n^{(t)} = 2^n \geq \alpha_t^n$ for $0 \leq n < t$ by definition. Otherwise we have \begin{eqnarray*} G_n^{(t)} &=& G_{n-1}^{(t)} + \cdots + G_{n-t}^{(t)} \\ &\geq & \alpha_t^{n-1} + \cdots + \alpha_t^{n-t} \\ & = & {{ \alpha_t^n - \alpha_t^{n-t}} \over {\alpha_t - 1 }} . \end{eqnarray*} However, $\alpha_t \leq 2 - \alpha_t^{-t}$ by Theorem~\ref{alph-thm}, so $$\alpha_t - 1 \leq 1 - \alpha_t^{-t} .$$ Hence $(\alpha_t - 1) \alpha_t^n \leq (1-\alpha_t^{-t}) \alpha_t^n = \alpha_t^n - \alpha_t^{n-t}$, so from above we have $$ G_n^{(t)} \geq {{\alpha_t^n - \alpha_t^{n-t}} \over {\alpha_t - 1}} \geq \alpha_t^n. $$ \end{proof} Now we state and prove our lower bound on the number of binary privileged words of length $n$. \begin{theorem} There are at least $$ {{2^{n-5} } \over {n^2}}$$ privileged binary words of length $n$. \label{bound} \end{theorem} \begin{proof} Each word of the form $0^t 1 w 1 0^t$ is privileged, where $|w| = n-2t-2$ and $w$ contains no factor $0^t$. The number of such $w$, as we have seen, is $G_{n-2t-2}^{(t)}$. So it suffices to pick the right $t$ to get a lower bound on $G_{n-2t-2}^{(t)}$. It is easy to check, using the data in the next section, that our bound holds for $n \leq 10$. So assume $n \geq 11$. Now \begin{eqnarray*} G_{n-2t-2}^{(t)} & \geq & \alpha_t^{n-2t-2} \\ &=& (2-\beta_t)^{n-2t-2} \\ &\geq& 2^{n-2t-2} - \beta_t (n-2t-2) 2^{n-2t-3} \\ & = & 2^{n-2t-2} (1- \beta_t (n/2 -t - 1) ) , \end{eqnarray*} by Lemma~\ref{ineq-lem} with $s = n - 2t - 2$, provided $\beta_t \leq 6/(n-2t-2)$. We now choose $t = \lfloor \log_2 n \rfloor + 1$, so that \begin{equation} 2^{t-1} \leq n < 2^t . \label{eq2t} \end{equation} It is now easy to verify that $\beta_t \leq 6/(n-2t-2)$ for $n \geq 11$.
On the other hand, it is easy to verify that $${{3t} \over 4} \geq {{t^2} \over {2^{t+1}}} $$ for all real $t \geq 0$, so $$ {{3t} \over 4} + 1 - {{t^2} \over {2^{t+1}}} > 0.$$ Adding $2^{t-1}$ to both sides, and using \eqref{eq2t}, we get $$ {n \over 2} < 2^{t-1} < 2^{t-1} + {{3t} \over 4} + 1 - {{t^2} \over 2^{t+1}} ,$$ which implies $$ {n\over 2} -t - 1 \leq {1 \over 2} \left( 2^t - {t \over 2} - {{t^2} \over {2^t}} \right)$$ and so $\beta_t (n/2 - t - 1) \leq 1/2$. It follows that $$ B(n) \geq G_{n-2t-2}^{(t)} \geq 2^{n-2t-2} (1- \beta_t (n/2 -t - 1)) \geq 2^{n-2t-3} \geq {{2^{n-5}} \over {n^2}} .$$ \end{proof} \begin{openproblem} What is the true asymptotic behavior of $B(n)$ as $n \rightarrow \infty$? \end{openproblem} Define the function $f$ as follows: $$f(n) = \begin{cases} n, & \text{ if $n \leq 2$; } \\ n f(\lfloor \log_2 n \rfloor), & \text{otherwise.} \end{cases} $$ It should be possible to improve Theorem~\ref{bound} to $B(n) = \Omega(2^n c^{\log^{*} (n)} /f(n))$, where $c$ is a constant and, as usual, $\log^{*} (n)$ is the number of times we need to apply $\log_2$ to $n$ to get a number $\leq 1$. We sketch the outline of an incomplete argument here: We generalize our argument above to count the number of privileged words of length $n$ having any privileged border of length $\lfloor \log_2 n \rfloor$. We can use our previous argument provided the count for arbitrary patterns is larger than the count for $0^t$. More precisely, if $x(p,n)$ is the number of strings of length $n$ beginning with the pattern $p$, ending with $p$, and having no other occurrence of $p$, then $x(p,n)$ satisfies a linear recurrence of order $t = |p|$. By analyzing this carefully, it should be possible to show that, provided $n$ is in a certain range with respect to $|p|$, we have $x(p,n) \geq x(0^t, n)$. Then we can imitate our analysis above, setting $t = \lfloor \log_2 n \rfloor$, to get \begin{eqnarray*} B(n) &\geq & \sum_{{p~\text{privileged}} \atop {|p| = \lfloor \log_2 n \rfloor}} x(p,n) \\ & \geq & c B(\lfloor \log_2 n \rfloor) \cdot {{2^n} \over {n^2}}, \end{eqnarray*} for a constant $c$. By iterating this relationship $\log^{*} (n)$ times, we would get the claimed bound. \section{Explicit enumeration of privileged words} We finish with a table giving the number $B(n)$ of privileged binary words of length $n$ for $0 \leq n \leq 38$. It is sequence A231208 in Sloane's {\it On-line Encyclopedia of Integer Sequences} \cite{Sloane}. \begin{table}[H] \begin{center} \begin{tabular}{|c|c||c|c||c|c|} \hline $n$ & $B(n)$ & $n$ & $B(n)$ & $n$ & $B(n)$ \\ \hline 0 & 1 & 13 & 328 & 26 & 875408 \\ 1 & 2 & 14 & 568 & 27 & 1649236 \\ 2 & 2 & 15 & 1040 & 28 & 3112220 \\ 3 & 4 & 16 & 1848 & 29 & 5888548 \\ 4 & 4 & 17 & 3388 & 30 & 11160548 \\ 5 & 8 & 18 & 6132 & 31 & 21198388 \\ 6 & 8 & 19 & 11332 & 32 & 40329428 \\ 7 & 16 & 20 & 20788 & 33 & 76865388 \\ 8 & 20 & 21 & 38576 & 34 & 146720792 \\ 9 & 40 & 22 & 71444 & 35 & 280498456 \\ 10 & 60 & 23 & 133256 & 36 & 536986772 \\ 11 & 108 & 24 & 248676 & 37 & 1029413396 \\ 12 & 176 & 25 & 466264 & 38 & 1975848400 \\ \hline \end{tabular} \end{center} \end{table}
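As a cross-check of both Algorithm P and the table above, Algorithm P can be transcribed directly into Python and combined with a brute-force enumeration of all binary words of a given length. This is only a sketch for small $n$, since the enumeration grows as $2^n$.

\begin{verbatim}
# Direct Python transcription of Algorithm P (O(n) per word),
# plus a brute-force enumeration reproducing B(n) for small n.
from itertools import product

def is_privileged(w: str) -> bool:
    n = len(w)
    if n <= 1:
        return True
    T = [0] * n      # KMP failure table: longest proper border length
    p = 1            # length of longest privileged prefix so far
    for i in range(1, n):
        j = T[i - 1]
        while True:
            if w[j] == w[i]:
                T[i] = j + 1
                if T[i] == p:   # privileged border repeated: extend
                    p = i + 1
                break
            if j == 0:
                T[i] = 0
                break
            j = T[j - 1]
    return p == n

for n in range(11):
    B_n = sum(is_privileged(''.join(b)) for b in product('01', repeat=n))
    print(n, B_n)   # 1, 2, 2, 4, 4, 8, 8, 16, 20, 40, 60 as in the table
\end{verbatim}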
\section{Introduction} Standard finite element methods use the full $P_k$ polynomials (of total degree $\leq k$) on each element (e.g. triangle or tetrahedron) in order to achieve the optimal order of approximation in solving partial differential equations. In certain situations the $P_k$ polynomial space is enriched by the so-called bubble functions, for stability or continuity, cf. \cite{Arnold, Brenner-Scott, Falk, Hu-Huang-Zhang, Hu-Zhang, Hu-Zhang-Low, Huang-Zhang, Schumaker, Zhang, Zhang-3D-C1, Zhang-4D, Zhang-P3}. But in only one case is a proper subspace of the $P_k$ polynomials used while retaining the optimal order of convergence, $O(h^k)$ in the $H^1$-norm. That is the harmonic finite element method for solving the Laplace equation, $\Delta p=p_{xx}+p_{yy}=0$, where only harmonic polynomials in $P_k$ are used \cite{Sorokina2, Sorokina}. For example, in the $P_2$ nonconforming element method for solving the following Laplace equation, \an{ \label{h-e} \ad{ -\Delta u & =0, \quad \t{ in } \ \Omega, \\ u&=f, \quad \t{ on } \ \partial \Omega, } } where $\Omega$ is a bounded polygonal domain in $\RR^2$, the five basis functions on the element boundary are harmonic polynomials and only the sixth basis function (which vanishes at the 6 Gauss-Legendre points on the three edges) is not a harmonic polynomial. So, in \cite{Sorokina}, the 6th basis function of the $P_2$ nonconforming element is thrown away, on every triangle, in the harmonic finite element method. For example, on a uniform triangular grid on a square domain, the number of unknowns is reduced from $(2n-1)^2+2n^2$ to $(2n-1)^2$, about one-third less. But the harmonic finite element method cannot be applied directly to the Poisson equation, \an{ \label{e} \ad{ -\Delta u & =f, \quad \t{ in } \ \Omega, \\ u&=0, \quad \t{ on } \ \partial \Omega, } } where $\Omega$ is a bounded polygonal domain in $\RR^2$. The sixth basis function of the $P_2$ nonconforming finite element must be added to the harmonic finite element method. This is then the standard $P_2$ nonconforming element method, where the solution is \an{\label{2s} u_h=\sum_{\b x_i\in \partial K\setminus \partial \Omega} u_i \phi_i + \sum_{\b x_j\in K^o} u_j \phi_j + \sum_{\b x_k\in \partial \Omega} c_k \phi_k, } where $c_k$ are interpolated values on the boundary, and $u_i$ and $u_j$ are obtained from the Galerkin projection (from the solution of a discrete linear system of equations). But the sixth basis function is local and is the only non-harmonic basis polynomial, so its coefficient can be obtained directly from the right hand side function $f$ in \eqref{e}. That is, the solution of the $P_2$ nonconforming interpolated Galerkin finite element method is \an{\label{1s} u_h=\sum_{\b x_i\in \partial K\setminus \partial \Omega} u_i \phi_i + \sum_{\b x_j\in K^o} c_j \phi_j + \sum_{\b x_k\in \partial \Omega} c_k \phi_k, } where $c_j$ (which could be $f(\b x_j)$, depending on which $\phi_j$ is used) are interpolated values of the right hand side function $f$, $c_k$ are interpolated boundary values, and only the $u_i$ are obtained from the Galerkin projection. The new method not only reduces the number of unknowns (from $O(k^2)$ to $O(k)$ per element), but also improves the condition number. It is entirely different from traditional finite element static condensation, which first performs Gaussian elimination on the internal degrees of freedom. In this work, in addition to constructing special $P_2$ conforming and nonconforming interpolated finite elements, we redefine the basis functions of the $P_k$ ($k\ge 3$) Lagrange finite element.
We keep the Lagrange nodal values on the boundary of each element, and replace the internal Lagrange nodal values by the Laplacian values at these internal Lagrange nodes. This way, the linear system of Galerkin projection equations involves only the unknowns on the inter-element boundary. Therefore the number of unknowns on each element is reduced from $(k+1)(k+2)/2$ to $3k$, since all internal unknowns are interpolated directly from the given function $f$. We show that the interpolated Galerkin finite element solution converges at the optimal order. Numerical tests are provided for the above-mentioned interpolated Galerkin finite elements, in comparison with the standard finite element method. \section{The $P_2$ interpolated Galerkin conforming finite element} The $P_2$ interpolated Galerkin conforming finite element is defined only on macro-element grids, while the remaining higher-order $P_k$ elements are defined on general triangular grids. We will define another $P_2$ interpolated nonconforming element on general triangular grids in the next section. A $P_2$ harmonic polynomial is a linear combination of $1, x, y, x^2-y^2$ and $xy$. A $P_2$ interpolated finite element basis function is a linear combination of $1, x, y, x^2-y^2, xy$ and $x^2+y^2$. Only the last basis function has a non-zero Laplacian. Let the union of four triangles $\hat K=\cup_{i=1}^4 K_i$ be a reference macro-element shown in Figure \ref{K-hat} (left). On the reference macro-element $\hat K$, the $P_2$ finite element space is \an{\label{P2hat} P_{\hat K}:&= \{ v_h \in L^2(\hat K) \mid v_h|_{K_i}\in P_{2}; \ v_h\in C^0(\b x_i), \ i=1,2,3,4; \\ \nonumber &\qquad v_h\in C^1(\b x_9); \ \Delta v_h\in P_0 \}, } where $v_h \in C^0(\b x_i)$ means that the two polynomial pieces of $v_h$ adjoining at $\b x_i$ have the same value at $\b x_i$, $v_h \in C^1(\b x_9)$ means that the four polynomial pieces of $v_h$ adjoining at $\b x_9$, together with their first derivatives, have matching values at $\b x_9$, and $\Delta v_h\in P_0$ means that the four polynomial pieces of $v_h$ adjoining at $\b x_9$ have the same constant Laplacian. These conditions immediately imply that $v_h$ is continuous on $\hat K$. We will show that the dimension of the space $P_{\hat K}$ is 9, and that each such $P_2$ function is uniquely determined by its 8 nodal values, $v_h(\b x_i)$, $i=1,2,\dots,8$, and the value of $\Delta v_h$, see Figure \ref{K-hat} (left). \begin{figure}[h!]
\begin{minipage}{0.47\linewidth} \begin{center} \setlength\unitlength{5pt} \begin{picture}(20,20)(0,0) \def\cr{\begin{picture}(20,20)(0,0)\put(0,0){\line(1,0){20}}\put(0,20){\line(1,0){20}} \put(0,0){\line(0,1){20}} \put(20,0){\line(0,1){20}} \put(0,0){\line(1,1){20}} \put(20,0){\line(-1,1){20}}\end{picture}} \put(0,0){\cr} \put(9.5,3){$K_1$} \put(16,9.5){$K_2$} \put(9.5,16){$K_4$} \put(3,9.5){$K_3$} \put(-2.5,0){$\b x_1$} \put(21,0){$\b x_2$} \put(21,20){$\b x_3$} \put(-2.5,20){$\b x_4$} \put(11,9.2){$\b x_9$} \put(10,-1.5){$\b x_5$} \put(21,9){$\b x_6$}\put(10,20.5){$\b x_7$} \put(-2.5,9){$\b x_8$} \put(0,0){\circle*{0.5}}\put(10,0){\circle*{0.5}}\put(20,0){\circle*{0.5}} \put(0,10){\circle*{0.5}} \put(20,10){\circle*{0.5}} \put(0,20){\circle*{0.5}}\put(10,20){\circle*{0.5}}\put(20,20){\circle*{0.5}} \end{picture}\end{center} \end{minipage} \begin{minipage}{0.47\linewidth} \begin{center} \setlength\unitlength{5pt} \begin{picture}(20,20)(0,0) \def\cr{\begin{picture}(20,20)(0,0)\put(0,0){\line(1,0){20}}\put(0,20){\line(1,0){20}} \put(0,0){\line(0,1){20}} \put(20,0){\line(0,1){20}} \put(0,0){\line(1,1){20}} \put(20,0){\line(-1,1){20}}\end{picture}} \put(0,0){\cr} \put(3.5,13.3){$c_{13}$} \put(15.5,5.5){$c_{11}$} \put(15.5,13.7){$c_{12}$} \put(3.2,6.1){$c_{10}$} \put(-2.5,0){$ c_1$} \put(21,0){$ c_2$} \put(21,20){$ c_3$} \put(-2.5,20){$ c_4$} \put(11,9.2){$ c_9$} \put(10,-1.5){$ c_5$} \put(21,9){$ c_6$}\put(10,20.5){$ c_7$} \put(-2.5,9){$ c_8$} \put(0,0){\circle*{0.5}}\put(10,0){\circle*{0.5}}\put(20,0){\circle*{0.5}} \put(0,10){\circle*{0.5}} \put(20,10){\circle*{0.5}} \put(0,20){\circle*{0.5}}\put(10,20){\circle*{0.5}}\put(20,20){\circle*{0.5}}\put(5,5){\circle*{0.5}}\put(5,15){\circle*{0.5}}\put(15,5){\circle*{0.5}}\put(15,15){\circle*{0.5}}\put(10,10){\circle*{0.5}} \end{picture}\end{center} \end{minipage} \caption{\label{K-hat} The reference macro-element, $\hat K=\cup_{i=1}^4 K_i=[-1,1]^2$ (left), and the B-coefficients associated with the domain points in $\hat K$ (right)} \end{figure} \begin{theorem} Consider $S^{0,1}_2 (\hat K)\xrightarrow[]{\Delta} S^{-1}_0(\hat K)$, where \a{ S^{0,1}_2(\hat K) &:= \{ s\in C^0(\hat K) \mid s|_{K_i}\in P_{2}, \ i=1,2,3,4; \ s\in C^1(\b x_9) \},\\ S^{-1}_0(\hat K) &:= \{ s \mid s|_{K_i}\equiv c_i\in\RR, \ i=1,2,3,4 \}.} Then the image of $S^{0,1}_2 (\hat K)$ is a three-dimensional subspace of $S^{-1}_0(\hat K)$, consisting of the piecewise constants satisfying the condition $c_1-c_2+c_3-c_4=0$. \end{theorem} \begin{lemma} The conforming $P_2$ finite element function $u_h\in P_{\hat K}$ is uniquely determined by the eight nodal values, $u_h(\b x_i)$, $i=1,2,\dots,8$, and the value $\Delta u_h(\b x_9)$. \end{lemma} \begin{proof} We use the Bernstein-B\'ezier form of $u_h$ on $\hat K$ with the B-coefficients $c_1$, $c_2$, \dots, $c_{13}$ associated with the domain points in $\hat K$ as depicted in Figure~\ref{K-hat} (right), see e.g. Chapter 2 of~\cite{LaiSch} for relevant definitions. Using the eight nodal values, $u_h(\b x_i)$, $i=1,2,\dots,8$, we compute the eight B-coefficients by interpolation conditions as follows \begin{alignat*}{2} &c_i=u_h(\b x_i), &i=1,\dots,4,\\ &c_5=2u_h(\b x_5)-\left(c_1+c_2\right)/2,\hskip 10pt &c_6=2u_h(\b x_6)-\left(c_2+c_3\right)/2, \\ &c_7=2u_h(\b x_7)-\left(c_3+c_4\right)/2,\hskip 10pt &c_8=2u_h(\b x_8)-\left(c_4+c_1\right)/2.
\\ \end{alignat*} By Lemma 4.1 in~\cite{va1}, the following four conditions are necessary and sufficient for $\Delta u_h=\Delta u_h(\b x_9)$ on each triangle $K_i$, $i=1,\dots,4$, in the square $\hat K$: \begin{align}\label{system1} \ad{ &2c_9+c_1+c_2-2c_{10}-2c_{11}=\Delta u_h(\b x_9)/2,\\ &2c_9+c_2+c_3-2c_{11}-2c_{12}=\Delta u_h(\b x_9)/2,\\ &2c_9+c_3+c_4-2c_{12}-2c_{13}=\Delta u_h(\b x_9)/2,\\ &2c_9+c_4+c_1-2c_{13}-2c_{10}=\Delta u_h(\b x_9)/2. } \end{align} By Theorem 2.28 in~\cite{LaiSch}, the following two conditions are necessary and sufficient for $u_h$ to be $C^1$ at the center $\b x_9$ of the square $\hat K$: \begin{align}\label{system2} \ad{ &2c_9-c_{10}-c_{12}=0,\\ &2c_9-c_{11}-c_{13}=0. } \end{align} Note that the alternating sum of the four equations in~(\ref{system1}) vanishes. Thus we only consider the first three equations of~(\ref{system1}). Substituting $c_{13}=c_{10}+c_{12}-c_{11}$, and $2c_9=c_{10}+c_{12}$ from~(\ref{system2}), into~(\ref{system1}), we obtain a system of three equations with three unknowns that has a unique solution given by \begin{align}\label{solution} \ad{ &c_9=(c_1+c_2+c_3+c_4-\Delta u_h(\b x_9))/4,\\ &c_{10}=(2c_1+c_2+c_4-\Delta u_h(\b x_9))/4,\\ &c_{11}=(2c_2+c_1+c_3-\Delta u_h(\b x_9))/4,\\ &c_{12}=(2c_3+c_2+c_4-\Delta u_h(\b x_9))/4,\\ &c_{13}=(2c_4+c_1+c_3-\Delta u_h(\b x_9))/4. } \end{align} Therefore, all thirteen B-coefficients $c_1,c_2,\dots, c_{13}$ have been uniquely determined by the eight nodal values $u_h(\b x_i)$, $i=1,2,\dots,8$, and by $\Delta u_h(\b x_9)$. \end{proof} Let $\mathcal{M}_h=\{K : \cup K=\Omega \}$ be a square subdivision of the domain $\Omega$. We subdivide each square $K$ into four triangles $K_i$ as in Figure \ref{K-hat}, and let $\mathcal{T}_h= \{K_i : K_i \subset K \} $ be the corresponding triangular grid of grid-size $h$. The $P_2$ finite element space on the grid is defined by \an{\label{P2N} V_h & =\{ v_h \in H^1_0(\Omega) \mid \ v_h|_K =\sum_{i=1}^8 c_i \phi_i + c_9 \phi_9 \in P_{K} \ \forall K\in\mathcal{M}_h \}, } where $P_K$ is defined in \eqref{P2hat}, and the basis functions satisfy $\phi_i(\b x_j)=\delta_{ij}$ and $\Delta \phi_i(\b x_9)=\delta_{i9}$, $i=1,\dots,9$, $j=1,\dots,8$. The interpolated Galerkin finite element problem reads: Find $u_h=\sum_{K\in\mathcal{M}_h}\Big(\sum_{i=1}^8 u_i \phi_i - f(\b x_9) \phi_9\Big)$ such that \an{ \label{E-2} (\nabla u_h, \nabla v_h)=(f,v_h) \quad\forall v_h=\sum_{K\in\mathcal{M}_h} \sum_{i=1}^8 v_i \phi_i. } \section{The $P_2$ interpolated nonconforming finite element} We define a $P_2$ interpolated Galerkin nonconforming finite element on general triangular grids in this section. This element best illustrates the difference between interpolated Galerkin finite element methods and standard Galerkin finite element methods. The $P_2$ nonconforming finite element function is continuous at the two Gauss-Legendre points of every edge. But the set of 6 nodal values at these points of a $P_2$ polynomial (a $6$-dimensional space) is linearly dependent. We can use the 5-dimensional space of harmonic $P_2$ polynomials to build the first 5 basis functions, which have non-zero values at the 6 Gauss-Legendre points and zero Laplacian. As these 6 Gauss-Legendre points on the edges always lie on an ellipse, the last $P_2$ basis function has a nonzero constant Laplacian everywhere on the triangle and vanishes at the Gauss-Legendre points. This is how the basis functions are defined in the standard $P_2$ nonconforming finite element.
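This last basis function can also be constructed explicitly. The following sympy sketch, on the reference triangle with vertices $(0,0)$, $(1,0)$, $(0,1)$ (an assumed choice), verifies that the six Gauss-Legendre edge points lie on a single conic and normalizes the resulting quadratic so that its constant Laplacian equals $-1$, matching the normalization of $\phi_0$ used below.

\begin{verbatim}
# Minimal sketch (sympy): construct the P2 basis function vanishing
# at the six Gauss-Legendre edge points of the reference triangle,
# scaled so that its (constant) Laplacian equals -1.
import sympy as sp

x, y = sp.symbols('x y')
g = sp.Rational(1, 2) - sp.sqrt(3) / 6      # (1 - 1/sqrt(3))/2
verts = [(0, 0), (1, 0), (0, 1)]            # assumed reference triangle
pts = []
for k in range(3):                          # two Gauss points per edge
    (x0, y0), (x1, y1) = verts[k], verts[(k + 1) % 3]
    for s in (g, 1 - g):
        pts.append((x0 + s * (x1 - x0), y0 + s * (y1 - y0)))

basis = [sp.Integer(1), x, y, x**2, x*y, y**2]
M = sp.Matrix([[b.subs({x: px, y: py}) for b in basis] for px, py in pts])
null = M.nullspace()
assert len(null) == 1        # the six points lie on exactly one conic
q = sum(c * b for c, b in zip(null[0], basis))
lap = sp.simplify(sp.diff(q, x, 2) + sp.diff(q, y, 2))  # a constant
phi0 = sp.expand(-q / lap)                  # Laplacian(phi0) = -1
print(sp.simplify(phi0))
print([sp.simplify(phi0.subs({x: px, y: py})) for px, py in pts])  # zeros
\end{verbatim}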
Now, instead of solving for the coefficient of this last basis function from the discrete equations, we can interpolate the right hand side function to get this coefficient directly. Let $\mathcal {T}_h$ be a shape-regular, quasi-uniform triangulation on $\Omega$. The $P_2$ non-conforming interpolated finite element space is defined by \an{\label{P2NC} \ad{ V_h =\{ v_h \in L^2(\Omega) \mid \ & v_h \t{ is continuous at the two Gauss points of each edge }, \\ & v_h \t{ is zero at the two Gauss points of each boundary edge }, \\ & v_h|_K =\sum_{i=1}^5 c_i \phi_i + c_0 \phi_0 \in P_2(K) \ \forall K\in\mathcal{T}_h \}, } } where $\{\phi_i, i=1,...,5\}$ are global basis functions restricted to $K$ which are $P_2$ harmonic functions (i.e., spanned by $\{1,x,y,xy,x^2-y^2\}$, cf. \cite{Sorokina}) and $ \Delta \phi_{0}(\b x_0)=-1$, $\phi_{0}(\b x_i)=0$, $i=1,...,5$, cf. Figure \ref{f-p2}. \begin{figure}[ht] \setlength{\unitlength}{0.8pt} \begin{center}\begin{picture}( 200., 192)( 0., -20.) \def\lb{\circle*{0.8}}\def\lc{\vrule width1.2pt height1.2pt} \def\la{\circle*{0.3}} \multiput( 0.00, 0.00)( 0.125, 0.216){799}{\la} \multiput( 0.00, 0.00)( 0.250, 0.000){800}{\la} \multiput( 200.00, 0.00)( -0.125, 0.216){799}{\la} \multiput( 180.00, 57.00)( -0.011, 0.166){ 62}{\la} \multiput( 177.28, 77.70)( -0.054, 0.158){ 62}{\la} \multiput( 169.29, 96.98)( -0.093, 0.139){ 62}{\la} \multiput( 156.59, 113.55)( -0.125, 0.110){ 62}{\la} \multiput( 140.03, 126.26)( -0.149, 0.074){ 62}{\la} \multiput( 120.75, 134.26)( -0.163, 0.033){ 62}{\la} \multiput( 100.05, 137.00)( -0.166, -0.011){ 62}{\la} \multiput( 79.35, 134.29)( -0.158, -0.053){ 62}{\la} \multiput( 60.06, 126.32)( -0.139, -0.092){ 62}{\la} \multiput( 43.49, 113.62)( -0.110, -0.125){ 62}{\la} \multiput( 30.76, 97.08)( -0.074, -0.149){ 62}{\la} \multiput( 22.75, 77.80)( -0.033, -0.163){ 62}{\la} \multiput( 20.00, 57.10)( 0.011, -0.166){ 62}{\la} \multiput( 22.70, 36.40)( 0.053, -0.158){ 62}{\la} \multiput( 30.66, 17.11)( 0.092, -0.139){ 62}{\la} \multiput( 43.34, 0.52)( 0.125, -0.110){ 62}{\la} \multiput( 59.88, -12.21)( 0.149, -0.074){ 62}{\la} \multiput( 79.15, -20.24)( 0.163, -0.033){ 62}{\la} \multiput( 99.84, -23.00)( 0.166, 0.011){ 62}{\la} \multiput( 120.55, -20.32)( 0.158, 0.053){ 62}{\la} \multiput( 139.85, -12.37)( 0.139, 0.092){ 62}{\la} \multiput( 156.44, 0.30)( 0.110, 0.125){ 62}{\la} \multiput( 169.19, 16.83)( 0.074, 0.149){ 62}{\la} \multiput( 177.22, 36.10)( 0.033, 0.163){ 62}{\la} \put(35,-10){$\b x_1$}\put(158,-10){$\b x_2$} \put(8,40){$\b x_6$}\put(185,38){$\b x_3$} \put(62,139){$\b x_5$}\put(125,139){$\b x_4$} \put(105,57.6){$\b x_0$}\put(100,57.6){\circle*{3}} \put(82,45){$\Delta \phi_0(\b x_0)=-1$} \end{picture}\end{center} \caption{Nodal points of $P_2$ non-conforming finite elements. \label{f-p2} } \end{figure} The $P_2$-nonconforming, interpolated Galerkin finite element problem reads: Find $u_h=\sum_{K\in\mathcal{T}_h}\Big(\sum_{i=1}^5 u_i \phi_i + f(\b x_0) \phi_0\Big)$ such that \an{\label{E-2n} (\nabla_h u_h, \nabla_h v_h)=(f,v_h) \quad\forall v_h=\sum_{K\in\mathcal{T}_h} \sum_{i=1}^5 v_i \phi_i. } \section{$P_k$ ($k\ge 3$) interpolated Galerkin finite elements} There is no $P_1$ interpolated finite element, as the Laplacian of a $P_1$ polynomial is zero. Special care is needed for the $P_2$ interpolated finite elements, as in the last two sections.
But for $P_3$ and above interpolated finite elements, we can simply replace all internal Lagrange degrees of freedom by the Laplacian values, which are obtained from the given right-hand side function $f$. However, we could not prove the unisolvence for general $k$. We therefore define another type of interpolated finite element for $P_k$ ($k\ge 4$), where we use locally averaged Laplacian values instead of pointwise Laplacian values. Let $\mathcal {T}_h$ be a general shape-regular, quasi-uniform triangulation on $\Omega$. The $P_3$ interpolated finite element space is defined by \an{\label{P-3} \ad{ V_h =\{ v_h \in H^1_0(\Omega) \mid \ & v_h|_K =\sum_{i=1}^{9} c_i \phi_i + c_0 \phi_0 \in P_3(K) \ \forall K\in\mathcal{T}_h \}, } } where $\{\phi_i, i=1,...,9\}$ are boundary Lagrange basis functions which have vanishing Laplacian at the barycenter $\b x_0$ of $K$, and $\phi_0$ vanishes on the three edges of $K$ and $\Delta \phi_{0}(\b x_0)=-1$, cf. Figure \ref{f-p3}. \begin{figure}[ht] \setlength{\unitlength}{1pt} \begin{center}\begin{picture}(160., 100)( 0., 0) \put(0,0){\line(1,0){160}} \put(0,0){\line(4,5){80}} \put(160,0){\line(-4,5){80}} \multiput(0, 0)(26.6,33.3){4}{\circle*{5}}\multiput(160, 0)(-26.6,33.3){3}{\circle*{5}} \multiput(0, 0)(53.3,0){4}{\circle*{5}} \put(113,70){$\phi_i(\b x_l)=\delta_{i,l}$} \put(80,33.3){\circle*{5}} \put(42,40){$\Delta\phi_0( \b x_0)=-1$} \end{picture}\end{center} \caption{Nodal points of $P_3$ interpolated finite elements. \label{f-p3} } \end{figure} \begin{lemma} The $(9+1)$ nodal degrees of freedom in \eqref{P-3} uniquely define a $P_3$ polynomial. \end{lemma} \begin{proof} We have a square system of 10 linear equations with 10 unknowns, so uniqueness guarantees existence. Let $u_h$ be a solution of the homogeneous system. Then $u_h$ vanishes on the 3 edges, cf. Figure \ref{f-p3}, so that $u_h=C b$, where $b$ is the $P_3$ bubble function on $K$, vanishing on the edges and assuming value $1$ at the barycenter $\b x_0$. Since $\Delta u_h(\b x_0)=0$ and $\Delta u_h$ is a linear function, we have, by symmetry, \a{ 0 &=-\Delta u_h(\b x_0)\int_K b \; d\b x =-\int_K (\Delta u_h)\,b \; d\b x \\ & =\int_K \nabla u_h \cdot \nabla b \, d \b x = C \int_K |\nabla b|^2 \, d \b x. } Thus $C=0$, $u_h=0$ and the lemma is proved. \end{proof} The $P_3$ interpolated Galerkin finite element problem reads: \an{\nonumber \t{ Find } \ u_h&=\sum_{K\in\mathcal{T}_h}\Big(\sum_{i=1}^{9} u_i \phi_i + f(\b x_0)_K \phi_0\Big) \ \t{ such that } \\ \label{E-3} (\nabla u_h, \nabla v_h)&=(f,v_h) \quad\forall v_h=\sum_{K\in\mathcal{T}_h} \sum_{i=1}^{9} v_i \phi_i. } For defining $P_k$ ($k\ge 4$) interpolated finite elements, we explicitly define two types of degrees of freedom. Let the boundary nodal-value linear functionals $F_i$ be \an{\label{F-i} F_i(u) = u(\b x_i), \quad i=1,...,3k, \t{ \ cf. Figure \ref{f-p5}.} } Let the Laplacian moment linear functionals $G_j$ be \an{\label{G-j} G_j(u) =\int_K p_j b \Delta u \, d\b x, \quad j=1,...,d_{k-3}, } where $b$ is again the cubic bubble function on $K$, $d_{k-3}=\dim P_{k-3}$, and $\{ p_j\}$ is an orthonormal basis obtained by the Gram-Schmidt process on $\{1,x,y,x^2,..., y^{k-3}\}$ under the inner product \a{ (u,v)_G = \int_K \nabla(b u)\cdot \nabla(b v) d\b x.
} \begin{figure}[ht] \setlength{\unitlength}{1pt} \begin{center}\begin{picture}(160., 100)( 0., 0) \put(0,0){\line(1,0){160}} \put(0,0){\line(4,5){80}} \put(160,0){\line(-4,5){80}} \multiput(0, 0)(16,20){6}{\circle*{5}}\multiput(160, 0)(-16,20){6}{\circle*{5}} \multiput(0, 0)(32,0){6}{\circle*{5}} \put(103,78){$F_i(u)=u(\b x_i)$} \put(115,64){$ \b x_i$} \put(35,27){$G_j(u)=\int_K p_jb\Delta u\,d\b x$} \end{picture}\end{center} \caption{Nodal points and moments of $P_5$ interpolated finite elements. \label{f-p5} } \end{figure} \begin{lemma}\label{l-k} The $(3k+d_{k-3})$ linear functionals in \eqref{F-i} and \eqref{G-j} uniquely define a $P_k$ polynomial. \end{lemma} \begin{proof} Because $\dim P_k=3k+d_{k-3}$, we have a square linear system of equations when applying the functionals to determine a $P_k$ polynomial. We only need to show the uniqueness. Let $u\in P_k$ such that $F_i(u)=0$ and $G_j(u)=0$, $i=1,...,3k, j=1,..., d_{k-3}$. Because $u=0$ on the three edges (cf. Figure \ref{f-p5}) we have \a{ u=bp \t{ \ for some } p\in P_{k-3}. } Expanding $p=\sum_{j=1}^{d_{k-3}} c_j p_j$ in the orthonormal basis used in \eqref{G-j}, we get \a{ 0&=\sum_{j=1}^{d_{k-3}} c_j G_j(u) = \int_K \sum_{j=1}^{d_{k-3}} c_j p_j b \Delta u\, d\b x\\ &= \int_K p b \Delta u\, d\b x= -\int_K |\nabla u|^2\, d\b x. } Thus $\nabla u=0$ and $u=C$. Because $u=bp=0$ on the boundary, $u=C=0$. The proof is completed. \end{proof} The $P_k$ ($k\ge 4$) interpolated finite element space is defined by \an{\label{P-k} \ad{ V_h =\{ v_h \in H^1_0(\Omega) \mid \ & v_h|_K =\sum_{i=1}^{3k} c_i \phi_i + \sum_{j=1}^{d_{k-3}} c_j \psi_j \in P_k(K) \ \forall K\in\mathcal{T}_h \}, } } where $\{\phi_i, \psi_j\}$ is the dual basis of $\{F_i, G_j\}$, by Lemma \ref{l-k}. The $P_k$ ($k\ge 4$) interpolated Galerkin finite element problem reads: \an{\label{u-d} \t{ Find } \ u_h&=\sum_{K\in\mathcal{T}_h}\Big(\sum_{i=1}^{3k} u_i \phi_i - \sum_{j=1}^{d_{k-3}} (f,\psi_j)_K \psi_j\Big) \ \t{ such that } \\ \label{E-k} (\nabla u_h, \nabla v_h)&=(f,v_h) \quad\forall v_h=\sum_{K\in\mathcal{T}_h} \sum_{i=1}^{3k} v_i \phi_i. } \section{Convergence theory} \begin{theorem}\label{m1} Let $u$ and $u_h$ be the exact solution of \eqref{e} and the $P_k$ ($k\ge 4$) interpolated finite element solution of \eqref{E-k}, respectively. Then \an{\label{c11} \| u-u_h\|_{0}+h | u-u_h |_{1} \le C h^{k+1} | u |_{k+1}, } where $|\cdot|_k$ is the Sobolev (semi-)norm $H^k(\Omega)$. \end{theorem} \begin{proof} Testing \eqref{e} by $v_h=\phi_i \in H^1_0(\Omega)$, we have \an{\label{e-v} (\nabla u, \nabla v_h) &=(f,v_h). } Subtracting \eqref{E-k} from \eqref{e-v}, \an{\label{o1} (\nabla(u-u_h), \nabla v_h) &= 0 . } Testing \eqref{e} by $v_h=\psi_j\in H^1_0(\Omega)$, by \eqref{G-j} and \eqref{u-d}, we get \an{ \label{o2} \ad{ (\nabla(u-u_h), \nabla \psi_j) &=-\int_K \Delta u \psi_j d \b x - (f,\psi_j)_K\int_K |\nabla \psi_j|^2 d \b x \\ &= \int_K f\psi_j d \b x -(f,\psi_j)_K = 0. } } Combining \eqref{o1} and \eqref{o2} implies \a{ |u-u_h |_{1}^2 &= (\nabla(u-u_h),\nabla(u-I_h u))\\ &\le |u-u_h |_{1} |u-I_h u |_{1} \le |u-u_h |_{1} Ch^k \|u\|_{k+1}, } where $I_h$ is the interpolation operator to $V_h$. This completes the $H^1$ estimate. Let $w\in H^2(\Omega) \cap H^1_0(\Omega)$ solve \a{ (\nabla w,\nabla v) = ( u-u_h, v) \quad \forall v\in H^1_0(\Omega). } We assume $H^2$ regularity for the solution, i.e., \a{ \|w\|_2 \le C \|u-u_h\|_0. } Let $w_h$ be the $P_k$ interpolated finite element solution of $w$.
Then \a{ \|u-u_h\|_0^2 &= (\nabla w, \nabla (u-u_h)) = (\nabla( w-w_h), \nabla (u-u_h)) \\ &\le |w-w_h|_1 |u-u_h|_1 \le C h |w|_2 C h^k \|u\|_{k+1} \\& \le \|u-u_h\|_0 C h^{k+1} \|u\|_{k+1}. } This gives the $L^2$ error estimate. \end{proof} \begin{theorem}\label{m2} Let $u$ be the exact solution of \eqref{e}. Let $u_h$ be the $P_2$ conforming, or the $P_2$ nonconforming, or the $P_3$ finite element solution of \eqref{E-2}, \eqref{E-2n}, or \eqref{E-3}, respectively. Then \an{\label{c12} \| u-u_h\|_{0}+h | u-u_h |_{1} \le C h^{k+1} | u |_{k+1}, } where $k=2$ or $3$. \end{theorem} \begin{proof} As there is one local/internal Laplacian degree of freedom, the proof becomes very simple. Testing \eqref{e} by the nodal value basis $\tilde v_h=\phi_i$, we have \an{\label{e-v2} (\nabla u, \nabla \tilde v_h) &=(f,\tilde v_h). } Subtracting the finite element equations from \eqref{e-v2}, \a{ (\nabla(u-\tilde u_h-u_0 ), \nabla v_h) &= 0, } where we separate the finite element solution $u_h$ into two parts, the nodal basis span part $\tilde u_h$ and the interpolated Laplacian part $u_0$ (spanned by the last basis function $\phi_0$). Thus \a{ |u-u_h |_{1}^2 &= (\nabla(u-\tilde u_h-u_0),\nabla(u-\tilde v_h - u_0 ))\\ &= (\nabla(u-\tilde u_h-u_0),\nabla(u- I_h u ))\\ &\le |u-u_h |_{1} |u-I_h u |_{1} \le |u-u_h |_{1} Ch^k \|u\|_{k+1}, } where $I_h$ is the interpolation operator to $V_h$. This completes the $H^1$ estimate. The $L^2$ error estimate is identical to the above proof of Theorem \ref{m1}. The treatment of the inconsistency of the $P_2$ nonconforming element is standard, cf. \cite{Li, Sorokina, Wang, ZhangMin, Zhang-Jump}. \end{proof} \section{Numerical tests} Let the domain of the boundary value problem~\eqref{e} be $\Omega=(0,1)^2$. The exact solution is $u(x,y)=\sin \pi x \sin \pi y$. We chose a family of uniform triangular grids, shown in Figure~\ref{grids}, in all numerical tests on $P_k$ interpolated Galerkin finite element methods. \begin{figure}[h!] \begin{center} \setlength\unitlength{4pt} \begin{picture}(70,20)(0,0) \def\tr{\begin{picture}(20,20)(0,0)\put(0,0){\line(1,0){20}}\put(0,20){\line(1,0){20}} \put(0,0){\line(0,1){20}} \put(20,0){\line(0,1){20}} \put(20,0){\line(-1,1){20}} \put(0,0){\line(1,1){20}} \end{picture}} \def\sq{\begin{picture}(20,20)(0,0)\put(0,0){\line(1,0){20}}\put(0,20){\line(1,0){20}} \put(0,0){\line(0,1){20}} \put(20,0){\line(0,1){20}} \end{picture}} \def\pt{\begin{picture}(40,40)(0,0)\put(0,0){\line(1,0){40}}\put(0,40){\line(1,0){40}} \put(0,0){\line(0,1){40}} \put(40,0){\line(0,1){40}} \put(0,20){\line(1,0){10}} \put(30,20){\line(1,0){10}} \put(20,0){\line(0,1){10}} \put(20,30){\line(0,1){10}} \put(10,20){\line(1,1){10}}\put(10,20){\line(1,-1){10}} \put(30,20){\line(-1,1){10}}\put(30,20){\line(-1,-1){10}} \end{picture}} \def\hx{\begin{picture}(20,20)(0,0)\put(0,0){\line(1,0){20}}\put(0,20){\line(1,0){20}} \put(0,10){\line(1,1){10}} \put(0,0){\line(0,1){20}} \put(20,0){\line(0,1){20}} \put(10,0){\line(1,1){10}}\end{picture}} \multiput(0,0)(20,0){1}{\multiput(0,0)(0,20){1}{\tr}} \put(22,0){ \setlength\unitlength{2pt}\begin{picture}(20,20)(0,0) \multiput(0,0)(20,0){2}{\multiput(0,0)(0,20){2}{\tr}} \end{picture} } \put(44,0){ \setlength\unitlength{1pt}\begin{picture}(20,20)(0,0) \multiput(0,0)(20,0){4}{\multiput(0,0)(0,20){4}{\tr}} \end{picture} } \end{picture}\end{center} \caption{The first three levels of grids in all numerical tests.
} \label{grids} \end{figure} We solve problem \eqref{e} first by the $P_2$ interpolated Galerkin conforming finite element method defined in \eqref{E-2} and by the $P_2$ Lagrange finite element method, on the same grids. The errors and the orders of convergence are listed in Table \ref{t1}. Both elements converge at the optimal order. \begin{table}[ht] \caption{\lab{t1} The error $e_h= I_h u- u_h$ and the order of convergence, by the $P_2$ conforming interpolated finite element and by the $P_2$ Lagrange finite element. } \begin{center} \begin{tabular}{c|rr|rr|rr|rr} \hline grid & $ \|e_h\|_{0}$ &$h^n$ &$ |e_h|_{1}$ & $h^n$ & $ \|e_h\|_{0}$ &$h^n$ & $ |e_h|_{1}$ & $h^n$ \\ \hline & \multicolumn{4}{c|}{$P_2$ Interpolated conforming FE} & \multicolumn{4}{c}{$P_2$ Lagrange element} \\ \hline 4& 0.614E-03& 3.2& 0.499E-01& 2.0& 0.615E-03& 3.2& 0.500E-01& 2.0\\ 5& 0.723E-04& 3.1& 0.124E-01& 2.0& 0.723E-04& 3.1& 0.124E-01& 2.0\\ 6& 0.887E-05& 3.0& 0.309E-02& 2.0& 0.887E-05& 3.0& 0.309E-02& 2.0\\ 7& 0.110E-05& 3.0& 0.773E-03& 2.0& 0.110E-05& 3.0& 0.773E-03& 2.0\\ 8& 0.138E-06& 3.0& 0.193E-03& 2.0& 0.138E-06& 3.0& 0.193E-03& 2.0\\ 9& 0.172E-07& 3.0& 0.483E-04& 2.0& 0.172E-07& 3.0& 0.483E-04& 2.0\\ \hline \end{tabular}\end{center} \end{table} Next we solve the test problem \eqref{e} again by the $P_2$ interpolated non-conforming finite element method \eqref{E-2n} and by the standard $P_2$ nonconforming finite element method. The errors and the orders of convergence are listed in Table \ref{t2}. Again, both methods converge at the optimal order. \begin{table}[ht] \caption{\lab{t2} The error $e_h= I_h u- u_h$ and the order of convergence, by the $P_2$ interpolated nonconforming finite element and by the $P_2$ nonconforming finite element. } \begin{center} \begin{tabular}{c|rr|rr|rr|rr} \hline grid & $ \|e_h\|_{0}$ &$h^n$ &$ |e_h|_{1}$ & $h^n$ & $ \|e_h\|_{0}$ &$h^n$ & $ |e_h|_{1}$ & $h^n$ \\ \hline & \multicolumn{4}{c|}{$P_2$ Interpolated nonconforming FE} & \multicolumn{4}{c}{$P_2$ nonconforming element} \\ \hline 2& 0.503E-02& 3.8& 0.839E-01& 2.3& 0.124E-01& 3.2& 0.186E+00& 2.5\\ 3& 0.118E-02& 2.1& 0.363E-01& 1.2& 0.164E-02& 2.9& 0.495E-01& 1.9\\ 4& 0.181E-03& 2.7& 0.111E-01& 1.7& 0.208E-03& 3.0& 0.126E-01& 2.0\\ 5& 0.244E-04& 2.9& 0.298E-02& 1.9& 0.260E-04& 3.0& 0.315E-02& 2.0\\ 6& 0.316E-05& 3.0& 0.767E-03& 2.0& 0.325E-05& 3.0& 0.789E-03& 2.0\\ 7& 0.406E-06& 3.0& 0.194E-03& 2.0& 0.407E-06& 3.0& 0.197E-03& 2.0\\ \hline \end{tabular}\end{center} \end{table} In Table \ref{t3} we list the results of the $P_3$ interpolated finite element \eqref{E-3} and the $P_3$ Lagrange finite element. \begin{table}[h!] \caption{\lab{t3} The error $e_h= I_h u- u_h$ and the order of convergence, by the $P_3$ interpolated finite element and by the $P_3$ Lagrange finite element.
} \begin{center} \begin{tabular}{c|rr|rr|rr|rr} \hline grid & $ \|e_h\|_{0}$ &$h^n$ & $ |e_h|_{1}$ & $h^n$ & $ \|e_h\|_{0}$ &$h^n$ & $ |e_h|_{1}$ & $h^n$ \\ \hline & \multicolumn{4}{c|}{$P_3$ interpolated element} & \multicolumn{4}{c}{$P_3$ Lagrange element} \\ \hline 4& 0.119E-04& 4.0& 0.114E-02& 2.9& 0.118E-04& 4.0& 0.114E-02& 3.0\\ 5& 0.742E-06& 4.0& 0.143E-03& 3.0& 0.742E-06& 4.0& 0.143E-03& 3.0\\ 6& 0.464E-07& 4.0& 0.180E-04& 3.0& 0.464E-07& 4.0& 0.180E-04& 3.0\\ 7& 0.290E-08& 4.0& 0.225E-05& 3.0& 0.290E-08& 4.0& 0.225E-05& 3.0\\ 8& 0.181E-09& 4.0& 0.281E-06& 3.0& 0.181E-09& 4.0& 0.281E-06& 3.0\\ \hline \end{tabular}\end{center} \end{table} \vskip 10pt We then solve problem \eqref{e} by the $P_4$/$P_5$/$P_6$ interpolated finite element methods \eqref{E-k} and by the $P_4$/$P_5$/$P_6$ Lagrange finite element methods. The errors and the orders of convergence are listed in Table \ref{t4}. The optimal order of convergence is achieved in all cases. \vskip 10pt \begin{table}[h!] \caption{\lab{t4} The error $e_h= I_h u- u_h$ and the order of convergence, by the $P_4$/$P_5$/$P_6$ interpolated finite elements and by the $P_4$/$P_5$/$P_6$ Lagrange finite elements. } \begin{center} \begin{tabular}{c|rr|rr|rr|rr} \hline grid & $ \|e_h\|_{0}$ &$h^n$ & $ |e_h|_{1}$ & $h^n$ & $ \|e_h\|_{0}$ &$h^n$ & $ |e_h|_{1}$ & $h^n$ \\ \hline & \multicolumn{4}{c|}{$P_4$ interpolated element} & \multicolumn{4}{c}{$P_4$ Lagrange element} \\ \hline 4& 0.136E-06& 4.7& 0.132E-04& 3.9& 0.159E-06& 5.0& 0.142E-04& 4.0\\ 5& 0.464E-08& 4.9& 0.859E-06& 3.9& 0.501E-08& 5.0& 0.891E-06& 4.0\\ 6& 0.151E-09& 4.9& 0.547E-07& 4.0& 0.157E-09& 5.0& 0.557E-07& 4.0\\ \hline \hline & \multicolumn{4}{c|}{$P_5$ interpolated element} & \multicolumn{4}{c}{$P_5$ Lagrange element} \\ \hline 3& 0.484E-06& 6.0& 0.501E-04& 4.9& 0.478E-06& 6.0& 0.499E-04& 4.9\\ 4& 0.755E-08& 6.0& 0.158E-05& 5.0& 0.754E-08& 6.0& 0.158E-05& 5.0\\ 5& 0.118E-09& 6.0& 0.495E-07& 5.0& 0.118E-09& 6.0& 0.495E-07& 5.0\\ \hline \hline & \multicolumn{4}{c|}{$P_6$ interpolated element} & \multicolumn{4}{c}{$P_6$ Lagrange element} \\ \hline 2& 0.144E-05& 6.6& 0.639E-04& 5.6& 0.336E-06& 7.2& 0.164E-04& 6.3\\ 3& 0.122E-07& 6.9& 0.102E-05& 6.0& 0.276E-08& 6.9& 0.259E-06& 6.0\\ 4& 0.972E-10& 7.0& 0.160E-07& 6.0& 0.218E-10& 7.0& 0.406E-08& 6.0\\ \hline \end{tabular}\end{center} \end{table} \clearpage
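A remark on the tables: the columns labeled $h^n$ report the observed order $n=\log_2(e_{2h}/e_h)$ computed from two consecutive grid levels. This computation can be reproduced directly from the tabulated errors; a minimal Python sketch, using the $L^2$ errors of the $P_2$ interpolated conforming element from Table \ref{t1}:
\begin{verbatim}
# Observed convergence order between consecutive uniform refinements:
# n = log2( e(h) / e(h/2) ).
import math

# L2 errors of the P2 interpolated conforming element, grids 4-9 (Table 1).
e = [6.14e-4, 7.23e-5, 8.87e-6, 1.10e-6, 1.38e-7, 1.72e-8]
orders = [math.log2(a / b) for a, b in zip(e, e[1:])]
print([round(n, 1) for n in orders])   # [3.1, 3.0, 3.0, 3.0, 3.0]
\end{verbatim}
The printed orders match the $h^n$ column of Table \ref{t1}.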
\section{Introduction} Structures of atypical chaoticity, although rare, can play an important role in many physical systems. For instance, resonances and separatrices play a crucial part in determining the stability of planetary systems \cite{LaffargueLaskar, LaffargueMurrayHolman}. Similarly, to study the global diffusion mechanism in almost-integrable systems, we need to focus on extremely thin chaotic layers which are responsible for Arnold diffusion \cite{LaffargueArnold, LaffargueArnoldWeb1, LaffargueArnoldWeb2}. Likewise, unstable objects like solitons and chaotic breathers \cite{LaffargueFPU} are responsible for the energy transport in Bose-Einstein condensates \cite{LaffargueBoseEinstein} and in biological molecules \cite{LaffargueMolecules}. Those structures are usually not only rare but also unstable, which makes them even harder to find. Despite the progress made in the last few years, most numerical methods to locate those structures are restricted to low-dimensional systems or are model-specific. The Lyapunov Weighted Dynamics is a Monte Carlo algorithm which samples trajectories according to their Lyapunov spectrum, an observable measuring the sensitivity to initial conditions and hence chaoticity. In this article, we review this algorithm and show how it can be used to reveal rare trajectories, impossible to find with direct simulations, in both low and high dimensions, opening the door to applications ranging from celestial mechanics to statistical physics. \section{The Lyapunov spectrum, a large deviation problem} For a dynamical system defined by trajectories of $D$-dimensional variables $\mathbf{x}(t)$, consider two infinitely close points $\mathbf{x}(0)$ and ${\mathbf{x}(0) + \mathbf{u}(0)}$. The separation $\mathbf{u}(t)$ between them typically grows as \begin{equation} \left| \mathbf{u}(t) \right| \equiv \left|\mathbf{u}(0)\right| e^{t \lambda_{1}(t)} \end{equation} where $\lambda_{1}(t)$ is called the largest finite-time Lyapunov exponent (at time $t$). It measures the sensitivity of the dynamical system to an initial perturbation in the vicinity of $\mathbf{x}(0)$. Similarly, we can consider $k+1$ nearby points defining $k$ noncollinear vectors $\mathbf{u}_i(0)$, with ${i \in \{1, \dots, k\}}$, and look at how the area ${V_{k}(t) \equiv \left| \mathbf{u}_{1}(t) \wedge \dots \wedge \mathbf{u}_{k}(t) \right|}$ evolves. In general, for $k \leqslant D$, it grows as \begin{equation} V_{k}(t) \approx e^{t [\lambda_{1}(t) + \cdots + \lambda_{k}(t)]} \end{equation} with ${\lambda_{1}(t) \geqslant \lambda_{2}(t) \geqslant \dots \geqslant \lambda_{k}(t)}$. These are the $k$ largest (finite-time) Lyapunov exponents. Under general assumptions, the $D$ Lyapunov exponents converge as $t$ goes to infinity to finite values, yielding the so-called Lyapunov spectrum. In the following, in order to facilitate comprehension, we will focus on the largest one, $\lambda_{1}$, that we simply call $\lambda$, but everything said below can be generalized to the entire Lyapunov spectrum. The Lyapunov exponent $\lambda(t)$ fluctuates from one trajectory to another: it does not take a unique value, and is distributed according to a distribution $P(\lambda, \,t)$, giving the probability density to observe a trajectory $\mathbf{x}(t)$ with an exponent $\lambda$.
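This distribution is easy to estimate by brute force; a minimal Python sketch for the logistic map $x \mapsto rx(1-x)$ (all parameter values are ours, for illustration only):
\begin{verbatim}
# Brute-force estimate of P(lambda, t) for the logistic map.
import numpy as np

rng = np.random.default_rng(1)
r, t, n_traj = 3.8, 100, 100_000
x = rng.uniform(0.1, 0.9, n_traj)
log_stretch = np.zeros(n_traj)
for _ in range(t):
    # |f'(x)| = |r(1-2x)|; the epsilon avoids log(0) at x = 1/2
    log_stretch += np.log(np.abs(r * (1.0 - 2.0 * x)) + 1e-300)
    x = r * x * (1.0 - x)
lam = log_stretch / t                       # finite-time Lyapunov exponents
pdf, edges = np.histogram(lam, bins=60, density=True)
print('typical exponent lambda* ~', lam.mean())
\end{verbatim}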
In the large time limit, this pdf typically obeys a large deviation principle \cite{LaffargueTouchette} \begin{equation} P(\lambda, \,t) \underset{t \to +\infty}{\approx}e^{- t s(\lambda)} \quad \text{ with } \quad s(\lambda) \underset{t \to + \infty}{=} {\cal O}(1). \end{equation} $P(\lambda, \,t)$ thus becomes sharper and sharper as time increases and concentrates around a typical value $\lambda^{*}$, which satisfies ${s'(\lambda^{*}) = 0}$. This is why direct simulations of long trajectories are not efficient to isolate trajectories with atypical values of $\lambda$: when $\lambda - \lambda^{*} \sim {\cal O}(1)$, then $s(\lambda) \sim {\cal O}(1)$ and one needs an exponentially large number of independent random samples ($\sim e^{t s(\lambda)}$) to observe, with a probability of order one, a trajectory with an exponent $\lambda$. \section{Thermodynamic formalism} Brute-force sampling imposes a flat measure on the trajectory space by giving the same weight to all trajectories. On the contrary, collecting trajectories with a given $\lambda$ resembles the construction of the microcanonical ensemble in equilibrium statistical physics, where one tries to collect all configurations of a given energy $E$. This is a notoriously difficult problem; it is usually simpler to fix the mean value of the energy by introducing a conjugate parameter, the temperature $\beta$: this is the construction of the canonical ensemble. We will follow a similar strategy here: rather than collecting all trajectories of exponent $\lambda$, we introduce a conjugate parameter $\alpha$ and define the canonical weights: \begin{equation} P_{\alpha}(\lambda, \,t) \equiv \frac{1}{Z(\alpha, \,t)} P(\lambda, \,t) \, e^{\alpha \lambda t} \underset{t \to +\infty}{\approx} e^{t [\alpha \lambda - s(\lambda) - \mu(\alpha)]} \label{laffargue_canonical_weight} \end{equation} where ${Z(\alpha, \,t) \equiv \left< e^{\alpha \lambda t} \right>}$ is the (dynamical) partition function (or, in more mathematical language, the moment-generating function). With those new weights, the new typical Lyapunov exponent $\lambda^{*}_{\alpha}$ satisfies ${s'(\lambda^{*}_{\alpha}) = \alpha}$. The conjugate parameter $\alpha$ acts like a temperature for chaoticity: positive $\alpha$ favors trajectories with large Lyapunov exponents, hence chaos, whereas negative $\alpha$ favors trajectories with small Lyapunov exponents, and thus promotes stability. Furthermore, in the canonical ensemble, all the macroscopic (static) properties can be extracted from the partition function or from the free energy. Here also, we can define a dynamical free energy $\mu(\alpha)$ by \begin{equation} Z(\alpha, \,t) \underset{t \to +\infty}{\approx} e^{t \mu(\alpha)}. \end{equation} It relates to the dynamical entropy by a Legendre-Fenchel transform: \begin{equation} \mu(\alpha) = \underset{\lambda}{\sup} \left[\alpha \lambda - s(\lambda)\right]. \end{equation} In more mathematical language, $\mu$ is the cumulant-generating function. The analogy with equilibrium statistical physics is summarized in table \ref{laffargue_thermodynamic_formalism}.
\begin{table}[h] \renewcommand{\arraystretch}{2.2} \centering \begin{tabular}{|>{\centering\arraybackslash}p{3cm} |>{\centering\arraybackslash}p{5cm} |>{\centering\arraybackslash}p{5cm}|} \hline \bf Variable & \bf Equilibrium statistical physics & \bf Dynamical system\\ \hline Macrostate & $\rho=\frac{E}{V}$ & $\lambda$\\ \hline Volume & $V$ & $t$\\ \hline Entropy & $s(\rho) \underset{V \to \infty}{=} \frac{k}{V} \ln \Omega(E, V)$ & $s(\lambda) \underset{t \to \infty}{=} -\frac{1}{t} \ln P(\lambda, t)$\\[0.5em] \hline Inverse temperature & $\beta$ & $-\alpha$\\ \hline Partition function & $Z(\beta, V) = \left< e^{- \beta E} \right>$ & $Z(\alpha, t) = \left< e^{\alpha \lambda t} \right>$\\ \hline Free energy & $f(\beta) \underset{V \to \infty}{=} - \frac{1}{\beta V} \ln Z(\beta, V)$ & $\mu(\alpha) \underset{t \to \infty}{=} \frac{1}{t} \ln Z(\alpha, t)$\\[0.5em] \hline \end{tabular} \caption{\label{laffargue_thermodynamic_formalism} Thermodynamic formalism for dynamical systems. Differences in prefactors and signs are due to historical reasons: equilibrium statistical physics was constructed to explain thermodynamics and has to take into account previous definitions (temperature, entropy, free energy) whereas thermodynamic formalism was born in the dynamical system community \cite{LaffargueRuelle, LaffargueGrassbergerThermoForm} and remained closer to the probability theory language.} \end{table} \newpage \section{Lyapunov Weighted Dynamics} The parameter $\alpha$ has no evident physical meaning; it is thus not obvious how the biased weights (\ref{laffargue_canonical_weight}) can be realized: we do not have a thermostat for chaoticity in a lab. Lyapunov Weighted Dynamics (LWD) is a population Monte Carlo algorithm, inspired by the Diffusion Monte Carlo algorithm and similar, in spirit, to the ``go with the winners'' algorithms \cite{LaffargueGrassberger}, which aims at fulfilling this role \cite{LaffargueLWDJulien}. The key idea is to evolve a population of copies of the system, called clones, and to copy and kill them in a controlled way. We consider $N_{c}$ clones $(\mathbf{x}, \mathbf{u})$ of the dynamical system $\dot{\mathbf{x}}(t) = \mathbf{f}(\mathbf{x}(t))$ and a time increment $\mathrm{d}t$. At every time step $t_{n} = n \,\mathrm{d}t$: \begin{itemize} \item each copy evolves with the dynamics $\dot{\mathbf{x}}=\mathbf{f}(\mathbf{x})$ and $\dot{\mathbf{u}} = \frac{\partial \mathbf{f}}{\partial \mathbf{x}} \mathbf{u}$ \item for each clone $j$, we compute $s_{j}(t) = \frac{|\mathbf{u}(t+\mathrm{d}t)|}{|\mathbf{u}(t)|} \simeq e^{\lambda \, \mathrm{d} t}$ \item each clone $j$ is then replaced, on average, by $s_{j}(t)^{\alpha}$ copies \end{itemize} Roughly speaking, at time $t$, one clone has yielded $e^{\alpha \lambda t}$ copies. If the initial population was large enough, the ratio between the total number of clones at time $t$ and the initial number of clones yields \begin{equation} \frac{N_c(t)}{N_c(0)} \simeq \left<{e^{\alpha \lambda t}} \right> \approx e^{t \mu(\alpha)}. \end{equation} It thus gives access to the partition function and to the free energy. Two important tricks are used: to maintain the population almost constant, we use $w_{j} = N_{c} \,s_{j}/\sum_{j} s_{j}$ instead of $s_{j}$ for calculating the cloning rate and, to prevent degeneracy of clones and enhance the quality of sampling, a small noise, with appropriate properties (energy conservation, momentum conservation, etc.), is added to the dynamics.
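The whole scheme fits in a few lines for a toy system; a minimal Python sketch for the noisy logistic map, continuing the example above (all parameter values are ours, for illustration only):
\begin{verbatim}
# Minimal LWD for the logistic map with small additive noise.  Clones are
# resampled in proportion to s_j^alpha, where s_j is the one-step
# stretching |f'(x)| of a tangent vector renormalized at every step.
import numpy as np

rng = np.random.default_rng(0)

def lwd(alpha, n_clones=1000, n_steps=2000, r=3.8, eps=1e-4):
    x = rng.uniform(0.1, 0.9, n_clones)
    log_z, lam = 0.0, 0.0
    for _ in range(n_steps):
        s = np.maximum(np.abs(r * (1.0 - 2.0 * x)), 1e-12)  # stretching s_j
        x = np.clip(r * x * (1.0 - x)
                    + eps * rng.normal(size=n_clones), 0.0, 1.0)
        w = s ** alpha
        log_z += np.log(w.mean())                 # free-energy increment
        idx = rng.choice(n_clones, n_clones, p=w / w.sum())  # clone / kill
        x, s = x[idx], s[idx]
        lam += np.log(s).mean()                   # biased Lyapunov exponent
    return log_z / n_steps, lam / n_steps         # ~ mu(alpha), lambda_alpha

print(lwd(alpha=-1.0))   # alpha < 0 selects the least chaotic trajectories
\end{verbatim}
Here the accumulated $\ln \left< s^{\alpha} \right>$ estimates $\mu(\alpha)$ and the resampling step implements the cloning and killing; for Hamiltonian systems the noise must in addition respect the conservation laws mentioned above.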
A simple way to understand why this algorithm works is to think about it as an evolution problem. Cloning plays the role of reproduction, noise that of mutation, and the $\lambda$-dependence of the cloning rate that of selection. The convergence of the algorithm is then assured by a sort of ``selection pressure'', and changing $\alpha$ is equivalent to modifying the fitness landscape. This algorithm can be generalized to sample the fluctuations of the first $k$ Lyapunov exponents, by considering one chaotic temperature $\alpha_{i}$ for each Lyapunov exponent $\lambda_{i}$ and using the Gram-Schmidt orthonormalization procedure. For technical details and numerical implementations, see \cite{LaffargueLWD}. \section{Normally hyperbolic invariant manifold} Normally hyperbolic invariant manifolds (NHIMs) with $p$ unstable directions are manifolds that are invariant under the dynamics and whose normal directions have the structure of saddles, with exactly $p$ unstable directions. If we consider the LWD with $\alpha=1$, we see that the cloning rate exactly compensates the volume contractions and expansions induced by time evolution for trajectories escaping from saddles with one unstable direction \cite{LaffargueJulienSUSY}. The cloning thus stabilizes the unstable manifold of NHIMs with one unstable direction and the algorithm populates it uniformly. Similarly, taking $\alpha_{i} = 1$ for $i$ in $\{1,\dots,k\}$ stabilizes the unstable manifold of NHIMs with $k$ unstable directions. We can illustrate this property with a simple example: two double-well potentials. This system has a four-dimensional phase space and the Hamiltonian is given by \begin{equation} H(\boldsymbol{q}, \boldsymbol{p}) = \sum_{i=1,2} \left[\frac{p_{i}^{2}}{2} + \frac{(q_i^{2} - 1)^{2}}{4}\right]. \label{laffargue_DW_def} \end{equation} \begin{figure}[h] \centering \includegraphics[width=\linewidth]{laffargue_DW} \caption{Trajectories of $5\,000$ clones using LWD for the system defined in (\ref{laffargue_DW_def}) for $t \geqslant 250$. The variance of the noise is decreased from $2\times 10^{-3}$ to $2\times 10^{-5}$ at $t=60$, and then to $2\times 10^{-7}$ at $t=120$. The clones are in gray and the color code corresponds to the energy $H$. \textbf{Top:} $\alpha_1 = 1$ and $\alpha_{i\geqslant 2} = 0$. \textbf{Bottom:} $\alpha_{1,2} = 1$ and $\alpha_{3,4} = 0$. \label{laffargue_DW}} \end{figure} This system has two saddle points, defined respectively by $q_1 = p_1 = 0$ and $q_2 = p_2 = 0$, and, once a Gaussian white noise is added to momenta, its steady-state measure is the flat measure. It has two NHIMs with one unstable direction, corresponding to the Cartesian products of the flat measure over one double well with the saddle point of the other double well. It also has one NHIM with two unstable directions, corresponding to the Cartesian product of the two saddle points. We can see in figure \ref{laffargue_DW} that the LWD with $\alpha_1=1$ isolates the unstable manifold of one NHIM with one unstable direction and that the LWD with $\alpha_{1,2}=1$ isolates the unstable manifold of the NHIM with two unstable directions. \section{A spatially extended system: the Fermi-Pasta-Ulam-Tsingou chain} This algorithm can be applied to spatially extended systems, like the $\beta$-FPU chain defined by the Hamiltonian \begin{equation} H(\boldsymbol{x}, \boldsymbol{p}) = \sum_{i=1}^{L} \left[ \frac{p_{i}^{2}}{2} + \frac{(x_{i+1} - x_{i})^2}{2} + \beta \frac{(x_{i+1} - x_{i})^4}{4} \right] \end{equation} with periodic boundary conditions $x_{L+1} = x_{1}$.
This describes a chain of $L$ particles coupled with anharmonic springs. At equilibrium, the typical configuration is a superposition of short-lived solitons, short-lived chaotic breathers \cite{LaffargueFPU} and thermal fluctuations (phonons). When applying the LWD with $\alpha<0$, we isolate a gas of solitons, whereas with $\alpha>0$ we stabilize chaotic breathers. These three cases are illustrated in figure \ref{laffargue_FPU}. In \cite{LaffargueLWDJulien}, fixed boundary conditions were used to isolate solitons, because otherwise the system can put all its energy into a rotation of its center of mass. Here, thanks to a noise which conserves the total momentum, we were able to use periodic boundary conditions. \begin{figure}[h] \centering \includegraphics[width=\linewidth]{laffargue_Comparaison} \caption{Configuration of one clone of the LWD for the $\beta$-FPU chain with $\beta=0.1$, $L=128$, 200 clones and $H=L$. \textbf{Top:} Gas of solitons. \textbf{Middle:} Equilibrium. \textbf{Bottom:} Chaotic breather. \label{laffargue_FPU}} \end{figure} \begin{figure}[h] \centering \includegraphics[width=\linewidth]{laffargue_2breathers_Lyapunov} \caption{Mean value of the two largest Lyapunov exponents over all the clones and over 5 runs, using LWD for the $\beta$-FPU chain with $\beta=0.1$, $L=200$, $H=L$, $\alpha_{i\neq2} = 0$ and $\alpha_{2} = 5 L$. \label{laffargue_FPU_Lyap}} \end{figure} Biasing the $k$th Lyapunov exponent with a positive $\alpha_{k}$ reveals extremely rare trajectories with $k$ non-merging breathers \cite{LaffargueLWD}. We see in figure \ref{laffargue_FPU_Lyap} that, for given $\{\alpha_{i}\}$, the finite-time Lyapunov exponents seem to converge to finite values $\lambda_{\alpha}$ as time and number of clones increase. This is important because $\lambda_{\alpha}$ can be used to compute the dynamical free energy thanks to thermodynamic integration \begin{equation} \mu(\alpha) = \int_{0}^{\alpha} \lambda_{\alpha'} \, \mathrm{d}\alpha' \end{equation} which, compared to direct measurement using $Z(\alpha)$, yields much better (smoother) averages. \section{Conclusion} We have seen two applications of this algorithm: the stabilization of the unstable manifold of NHIMs in a simple dynamical system and the detection of localized chaotic breathers in a spatially extended system. In the latter case, we have shown that the measurement of the first derivative of the dynamical free energy can be achieved, which opens the way to future studies of dynamical phase transitions in these systems. This algorithm has also been applied elsewhere to localize the Arnold web \cite{LaffargueLWDJulien} and to study the stability of Lagrange points L4 and L5 in the restricted three-body problem \cite{LaffargueLWD}.
\section{Renormalization of contact interactions} While it is convenient to work with local two-body interactions of the form \begin{equation} \hat{V} = U \sum_{\mathbf{k},\mathbf{k'},\mathbf{q}} x^\dagger_\mathbf{k}x_{\mathbf{k}-\mathbf{q}} \, c^\dagger_\mathbf{k'} c_{\mathbf{k'}+\mathbf{q}}, \end{equation} this leads to logarithmic UV divergences arising from high-momentum virtual states. This is expected from contact-like models, which are only valid for low-momentum scattering. To regularize the model we restrict loop momenta to lie below a cut-off scale $\Lambda$ and introduce a regularized interaction strength $U_\Lambda$. As with any effective theory, $U_\Lambda$ must be chosen to properly describe the physics of the underlying microscopic model. This is achieved by ensuring that the scattering length $a$, or the relevant binding energy $E_B^0$, is correctly reproduced. The binding energy of the exciton-electron complex in monolayer TMDs is known and was found to be $E_B^0 \simeq 30\,\mathrm{meV}$. To fix $U_\Lambda$ we compute the T-matrix and match the position of its pole to the physical value of $E_B^0$ \begin{equation} \frac{1}{U_\Lambda} = \int\limits_{|\mathbf{k}|<\Lambda} \frac{d^2\mathbf{k}}{(2\pi)^2} \frac{1}{E^0_B - \frac{\mathbf{k}^2}{2\mu} + i 0^+}. \end{equation} \section{Non-equilibrium formalism} In the following we find an approximate solution to Eq.~\ref{Eq:lindblad} of the main text using non-equilibrium field theory. It has been shown that generic Master equations can be mapped to a functional integral. This is achieved by rewriting the real time evolution of Eq.~\ref{Eq:lindblad} in a coherent state basis, which yields a field theory on a Keldysh contour~\cite{sieberer_16}. As the density matrix is now in general non-thermal, the system admits two possibly inequivalent correlation functions \begin{equation} \begin{aligned} \mathcal{G}^>_\mathbf{k}(t,t') &= -i \langle x_\mathbf{k}(t) x^\dagger_\mathbf{k}(t')\rangle, \\ \mathcal{G}^<_\mathbf{k}(t,t') &= -i \langle x^\dagger_\mathbf{k}(t') x_\mathbf{k}(t)\rangle, \end{aligned} \end{equation} which would be related to each other by fluctuation-dissipation relations if the system were in thermal equilibrium. In our case only the excitons couple to the radiation field, which leads to quantum jump operators $L_\mathbf{k}$, linear in the exciton operators ${x}_\mathbf{k}$ and ${x}^\dagger_\mathbf{k}$. Consequently, dissipation can be treated to all orders simply by modifying the impurity propagator. In the impurity limit $n_I= \sum_\mathbf{r}\langle x_\mathbf{r}^\dagger x_\mathbf{r}\rangle/V \approx 0$, this yields \begin{equation} \begin{aligned} \mathcal{G}^>(\omega,\mathbf{k})= \frac{2i \Gamma(\mathbf{k})}{(\omega-\mathbf{k}^2/2M)^2 + \Gamma(\mathbf{k})^2},\qquad \mathcal{G}^<(\omega,\mathbf{k})= 0. \end{aligned} \label{Eq:diss_propagators} \end{equation} Other quantities, such as the retarded response function $\mathcal{G}^R$, contain no additional information and are defined in terms of $\mathcal{G}^\lessgtr$ as usual: \begin{align*} \mathcal{G}^R(t,t') &= \Theta(t-t') \left( \mathcal{G}^> - \mathcal{G^<} \right)(t,t'),\\ \mathcal{G}^{R/A}(\omega,\mathbf{k}) &= \frac{1}{\omega - \mathbf{k}^2/2M \pm i\Gamma(\mathbf{k})}.
\end{align*} The electrons on the other hand are in thermal equilibrium and their correlation functions take the simpler form: \begin{align*} G^>_\alpha(\omega,\mathbf{k})&=-2\pi i \; \left(1-n_F(\omega)\right) \delta (\omega - \mathbf{k}^2/2m-\varepsilon_\alpha+\epsilon_F),\qquad &G^<_\alpha(\omega,\mathbf{k})&= 2\pi i \; n_F(\omega) \delta (\omega - \mathbf{k}^2/2m-\varepsilon_\alpha+\epsilon_F), \\ G_\alpha^R(t,t') &= \Theta(t-t') \left( G^>_\alpha - G^<_\alpha \right)(t,t'), \qquad &G_\alpha^{R}(\omega,\mathbf{k}) &= \frac{1}{\omega - \mathbf{k}^2/2m - \varepsilon_\alpha + i0^+}, \end{align*} where $n_F(\omega)$ is the Fermi-Dirac distribution function and $\alpha \in \lbrace O,C\rbrace$ labels the channel. \section{T-matrix approximation to the exciton self energy} Here we determine the T-matrix of Eq.~\ref{Eq:lindblad} on a Keldysh contour, taking into account the ground state of the electrons. This will allow us to construct an approximate expression for the self energy. Although our nonequilibrium setting allows for additional correlations, the local-in-time structure of the interaction significantly restricts the number of independent components of the T-matrix~\cite{Babadi2013PhD}. In the end the T-matrix has the same causality structure as the propagators and can be expressed in terms of $T^\lessgtr(t,t')$. The diagrammatic structure remains the same as for the two-particle problem in Eq.~\ref{Eq:tmatrix} except that time arguments now live on a Keldysh contour \begin{fmffile}{tmatrix_2} \begin{align*} \begin{gathered} \begin{fmfgraph*}(60,40) \fmfleft{i1,i2} \fmfright{o1,o2} \fmf{fermion,fore=(0.3,,0.4,,0.8)}{i1,v} \fmf{fermion}{i2,v} \fmf{fermion,fore=(0.3,,0.4,,0.8)}{v,o1} \fmf{fermion}{v,o2} \fmfblob{.25w}{v} \end{fmfgraph*} \end{gathered}= \begin{gathered} \begin{fmfgraph*}(55,40) \fmfleft{i1,i2} \fmfright{o1,o2} \fmf{fermion,fore=(0.3,,0.4,,0.8)}{i1,v} \fmf{fermion}{i2,v} \fmf{fermion,fore=(0.3,,0.4,,0.8)}{v,o1} \fmf{fermion}{v,o2} \fmfdot{v} \end{fmfgraph*} \end{gathered}+ \begin{gathered} \begin{fmfgraph*}(100,40) \fmfleft{i1,i2} \fmfright{o1,o2} \fmf{fermion,fore=(0.3,,0.4,,0.8)}{i1,v1} \fmf{fermion}{i2,v1} \fmf{fermion,fore=(0.3,,0.4,,0.8)}{v2,o1} \fmf{fermion}{v2,o2} \fmf{fermion,left=.5,tension=.5}{v1,v2} \fmf{fermion,right=.5,tension=.5,fore=(0.3,,0.4,,0.8)}{v1,v2} \fmfdot{v1} \fmfblob{.125w}{v2} \end{fmfgraph*} \end{gathered}. \end{align*} \end{fmffile} We use the Langreth rules to decompose the above equation, which yields the following equations for the components of the T-matrix: \begin{equation} \begin{aligned} T_\mathbf{k}^\lessgtr(E) &= U_\Lambda \left[ K_\mathbf{k}^\lessgtr(E) T_\mathbf{k}^A(E) + K_\mathbf{k}^R(E)T_\mathbf{k}^\lessgtr(E)\right] &=& U_\Lambda\frac{K_\mathbf{k}^\lessgtr(E) T_\mathbf{k}^A(E)}{1-U_\Lambda K_\mathbf{k}^R(E)},\\ T_\mathbf{k}^{R/A}(E) & = U_\Lambda \left[\mathbb{1} + K^{R/A}_\mathbf{k}(E) T_\mathbf{k}^{R/A}(E)\right] &=& U_\Lambda \frac{1}{1- U_\Lambda K^{R/A}_\mathbf{k}(E)}, \qquad \mathrm{where}\\ K^\lessgtr_{\alpha\beta}(t,\mathbf{x}) &= - \mathcal{G}^\lessgtr(t,\mathbf{x}) G^\lessgtr_{0,\alpha}(t,\mathbf{x})\delta_{\alpha\beta}, \quad \mathrm{and} \quad K^{R/A}_{\alpha\beta}(t,\mathbf{x})&=& \pm \Theta(\pm t) \left(K^> - K^< \right)_{\alpha\beta}(t,\mathbf{x}). \label{Eq:tmatrix_keldysh} \end{aligned} \end{equation} $\Theta(x)$ is the Heaviside step function and the kernels $K$ can be explicitly computed as before. Eq.~\ref{Eq:tmatrix_keldysh} implies that $T^<(t,t') =0$, as it is proportional to $G^<$.
This is a consequence of the impurity limit, as $T^<$ is proportional to the density of molecules, which vanishes for a single exciton. We now compute an approximate self energy of the exciton from the many-body T-matrix. Its form is given by the following Dyson equation \begin{fmffile}{selfeng} \begin{align*} \begin{gathered} \begin{fmfgraph*}(40,20) \fmfleft{i} \fmfright{o} \fmf{fermion,fore=(0.15,,0.2,,0.4),width=1.8}{i,o} \end{fmfgraph*} \end{gathered}= \begin{gathered} \begin{fmfgraph*}(40,20) \fmfleft{i} \fmfright{o} \fmf{fermion,fore=(0.3,,0.4,,0.8)}{i,o} \end{fmfgraph*} \end{gathered}+ \begin{gathered} \begin{fmfgraph}(80,15) \fmfleft{i} \fmfright{o} \fmf{phantom}{i,v1,v2,v3,v4,o} \fmf{fermion,fore=(0.3,,0.4,,0.8),tension=3.5}{i,v2} \fmf{fermion,fore=(0.15,,0.2,,0.4),tension=3.5,width=1.8}{v4,o} \fmf{fermion,right=4,tension=10}{v4,v2} \fmf{fermion}{v2,v3,v4} \fmfblob{.2w}{v3} \end{fmfgraph} \end{gathered}, \end{align*} \end{fmffile} which encodes the following equations \begin{equation} \begin{aligned} \mathcal{G}^R(\omega,\mathbf{k}) &= \mathcal{G}^R_0(\omega,\mathbf{k}) + \mathcal{G}^R_0(\omega,\mathbf{k}) \cdot \Sigma^R(\omega,\mathbf{k})\cdot\mathcal{G}^R(\omega, \mathbf{k}), \\ \Sigma^R(\omega,\mathbf{k})&= -i \int\frac{d^2q}{(2\pi)^2}\frac{d\delta\omega}{2\pi} \,\mathrm{Tr}\lbrace T^R_{\mathbf{k}+\mathbf{q}}(\omega+\delta\omega) \cdot G^<_0(\delta\omega,\mathbf{q})\rbrace, \end{aligned} \label{Eq:SDE} \end{equation} where the trace is performed over the two scattering channels. To obtain the second equation we have already assumed $T^<(\omega,\mathbf{k}) =0$, as the number of molecules is negligible in the impurity limit. \end{document}
\section{Introduction} \vspace{-3pt} \label{sec:intro} In our surrounding environment, there are various sounds such as speech, machine activity, and animal sounds. A system capable of detecting such sounds would provide various valuable applications, such as automatic driving that detects unseen dangers~\cite{car,SmartCar1,SmartCar2}, detection of crimes in the dark~\cite{surveillance,drone}, and support for pedestrian safety. A key technique for such applications is sound event localization and detection (SELD), which combines direction of arrival (DOA) estimation and sound event detection (SED). In recent SELD competitions, many algorithms using deep neural networks (DNNs) as a regression function for DOA and a classification function for SED have achieved high performance~\cite{dcase2019sota,dcase2020sota,dcase2021sota}. To train such DNNs, a large amount of annotated data is needed that contains various sound events occurring in various directions around the microphone. In the SELD problem settings considered to date, the system is trained using a SELD dataset recorded in up to 11 environments, and then performance is evaluated using data recorded in the same environments. On the other hand, since users will utilize the SELD system in any environment they want in real-world applications, the system should work robustly in environments not included in the training data, referred to as unknown environments in this paper. The simplest way to train a robust SELD system for any environment is to record complete impulse response (IR) datasets in many environments and use them for training, as done in conventional SELD methods. However, recording and annotating three-dimensional sound data incurs a huge cost. In particular, since countless factors affect the directional information of sound, such as the arrangement of objects in a room, the distance from the walls to the microphone, and the building materials, covering all of these combinations is unrealistic. Several previous studies on SED and DOA estimation have addressed this issue by using domain adaptation approaches~\cite{doaadap1,doaadap2,sedadap,sedadap2,sedadap3}. In SED, it has been shown that domain adversarial training (DAT) is effective in maintaining performance in the target domain~\cite{sedadap}. On the other hand, in DOA estimation, it has been reported that DAT does not work effectively, and weak labels on DOA are required for adaptation to the target domain~\cite{doaadap2}. These domain adaptation methods in SED and DOA estimation require data recorded in the target domain, so they have difficulty adapting models to unknown environments. Another strategy for adapting to unknown environments is utilizing sounds that the system ``hears'' during inference. For robot audition, noise-robust sound event detection has been proposed utilizing observed background noise~\cite{adapnoise1}. In addition, for SELD, not only noise but also reverberation should provide helpful cues. In particular, the reflections of the sound emitted by the system itself, which we call `echo' in this paper, have been reported to provide a wealth of spatial cues such as the room's shape and the arrangement of reflectors~\cite{echo1,echo2,echo3}. Combining them with the SELD system is a promising strategy for adaptation to unknown environments. To take advantage of the spatial cues provided by echoes, we propose an echo-aware feature refinement (EAR) for SELD.
The proposed system incorporates a feature refinement mechanism conditioned on embeddings extracted from echoes to suppress environmental effects that cause performance degradation in an unknown environment. This refinement mechanism is trained using the DAT framework so that the refined features are indistinguishable from anechoic ones. For our new task, we also recorded a new dataset: multi-environment impulse response recordings with a first-order ambisonic microphone (FOA-MEIR). This dataset combines comprehensive IR recordings in an anechoic room and sparse IR recordings in nearly 100 real environments, which exceeds the total number of environments in the SELD datasets published so far~\cite{dataset2019,dataset2020,dataset2021}. Experimental results on the FOA-MEIR dataset show that EAR effectively improves the performance of SELD in unknown environments and outperforms the baseline method without EAR. \vspace{-9pt} \section{Related studies} \label{sec:relatedstudies} \vspace{-5pt} \subsection{Overview of SELD} \vspace{-4pt} The SELD task has been attracting particular attention since it was taken up as a challenge task in DCASE 2019~\cite{dataset2019}. Recently, more realistic, real-world-like problem settings of SELD have been attempted~\cite{seldproblem}. The task of DCASE 2019 handled SELD for stationary polyphonic sound sources recorded in five environments; it was expanded to moving sound sources in 2020, and unknown directional interferers were added in 2021~\cite{dataset2019,dataset2020,dataset2021}. However, in all of these problem settings, training data and test data are generated using IRs recorded in the same environments, and adaptation to an unknown environment has not yet been examined. \vspace{-5pt} \subsection{Domain adaptation in related tasks} \vspace{-2pt} In SED and acoustic scene classification, several methods based on domain adaptive learning have been proposed to bridge the gap between the training data and the target domain~\cite{sedadap,sedadap2,sedadap3}. In domain adaptation utilizing DAT, a discriminator is introduced to distinguish the features obtained from the source and target domains, while the feature extractor is trained adversarially to trick the discriminator, so that domain-invariant features are extracted. In DOA estimation, a method was proposed to adapt a DNN trained with labeled data recorded in an anechoic room to unlabeled data recorded in a reverberant room~\cite{doaadap1}. This method estimates DOA as a classification problem, and performs domain adaptation by retraining the DNN model so as to minimize the entropy of the output. While entropy minimization-based methods are limited to DOA estimation of a single source, He {\it et al.} proposed a domain adaptation method that can be applied to DOA estimation of multiple sources~\cite{doaadap2}. \vspace{-5pt} \section{Proposed method} \vspace{-3pt} Our goal is to develop a SELD system that works robustly even in unknown environments, using only training data and labels from known environments. This section describes the proposed method to achieve such a system. Here, we define some notations. Define $\mathcal{E}=\{e_1,\ldots,e_N\}$ as the set of $N$ known environments. An unknown environment $\mathcal{E}^*\notin\mathcal{E}$ is defined as any environment that is not included in $\mathcal{E}$. The observed signals in the known and unknown environments are denoted as $\bm{x}_{\mathcal{E}}$ and $\bm{x}_{\mathcal{E}^*}$, respectively.
Let $\mathcal{M}$ denote the SELD prediction function, which predicts the SELD ground-truth labels $Y_{\mathcal{E}}=\{\bm{y}_1,\ldots,\bm{y}_K\}$ from the corresponding training data $X_{\mathcal{E}}=\{\bm{x}_1,\ldots,\bm{x}_K\}$. Here, $\bm{y}$ includes both the SED and DOA labels as $\bm{y}=\{y_{\mbox{\scriptsize{SED}}},y_{\mbox{\scriptsize{DOA}}}\}$. In addition, as a basic architecture~\cite{dataset2020}, $\mathcal{M}$ consists of the feature extractor $\mathcal{F}$, the SED classifier $\mathcal{C}$, and the DOA regression function $\mathcal{D}$. \vspace{-5pt} \subsection{Basic concept} \vspace{-4pt} \label{sec:basic} The performance of a SELD system trained for particular known environments will degrade in an unknown environment. This problem is known as domain shift. A feature extractor trained for known environments does not work properly in an unknown environment due to environmental effects such as noise and reverberation, resulting in a shift of feature statistics and consequent performance degradation. To address this problem, we propose echo-aware feature refinement (EAR). Fig.~\ref{fig:ear} shows the two-stage inference procedure. The first stage is the echo measurement in an unknown environment. The sound source for the echo measurement is assumed to be the system's own sound, such as the startup sound. The observed echo $\bm{h}_{\mathcal{E}^{*}}$ is embedded into $\bm{z}_{\mathcal{E}^{*}}$ via the encoder $\mathcal{G}$. Since echoes hold a wealth of spatial cues about the surrounding environment~\cite{echo1,echo2,echo3}, such information about the unknown environment will be embedded in $\bm{z}_{\mathcal{E}^{*}}$. The second stage is SELD. To begin with, the feature extractor $\mathcal{F}$ extracts the feature $\bm{f}_{\mathcal{E}^{*}}$ from the observed signal $\bm{x}_{\mathcal{E}^{*}}$ containing sound events occurring in the surroundings. To suppress environmental effects such as noise and reverberation from $\bm{f}_{\mathcal{E}^{*}}$, we utilize the spatial cues embedded in $\bm{z}_{\mathcal{E}^{*}}$. The refinement of $\bm{f}_{\mathcal{E}^{*}}$ utilizing $\bm{z}_{\mathcal{E}^{*}}$, that is, EAR, is performed by the following equation: \vspace{-1pt} \begin{equation} \label{eq:refinement}\bm{f}_{\mathcal{E}^{*}}^{'} = \mathcal{R}(\bm{f}_{\mathcal{E}^{*}}, \bm{z}_{\mathcal{E}^{*}}). \vspace{-1pt} \end{equation} Here, $\mathcal{R}$ is the refinement function. Finally, on the basis of the obtained $\bm{f}^{'}_{\mathcal{E}^{*}}$, SELD is performed by the SED classifier $\mathcal{C}$ and the DOA regression function $\mathcal{D}$. The above two-stage inference procedure requires training the feature extractor $\mathcal{F}$, encoder $\mathcal{G}$, and refinement function $\mathcal{R}$ using only known-environment data. For this training, we first prepare paired data consisting of observed echoes $\bm{h}_{\mathcal{E}}$ and sound event observation signals $\bm{x}_{\mathcal{E}}$ recorded in multiple known environments. In addition, we collect paired data ($\bm{h}_{\mathcal{E}}$, $\bm{x}_{\mathcal{E}}$) in an anechoic room. To train the system by using this training data, we adopt the DAT framework. Note that, since the data in unknown environments $\mathcal{E}^{*}$, i.e., the true target domain, is not available, we set the anechoic room as the source domain and the other environments as the target domain. In training, paired data ($\bm{h}_{\mathcal{E}}$, $\bm{x}_{\mathcal{E}}$) is used to obtain the refined feature $\bm{f}^{'}_{\mathcal{E}}$ on the basis of Eq. (\ref{eq:refinement}).
The domain classifier $\mathcal{H}$, which is used only during training, takes $\bm{f}^{'}_{\mathcal{E}}$ as input and classifies the domain, reverberant or anechoic. By training $\mathcal{F}$, $\mathcal{G}$, and $\mathcal{R}$ adversarially to degrade the performance of domain classification by $\mathcal{H}$, an environment-invariant feature $\bm{f}^{'}_{\mathcal{E}}$ can be obtained. In particular, the refinement function $\mathcal{R}$ conditioned on $\bm{z}_{\mathcal{E}}$ is expected to acquire the ability to suppress the environmental effect at the feature level, on the basis of the spatial cues about the surrounding environment embedded in $\bm{z}_{\mathcal{E}}$ (e.g., the arrangement of objects). \vspace{-5pt} \subsection{Data collection scheme for EAR} \vspace{-3pt} \label{sec:casm} For the training of EAR, it is necessary to collect paired data of the echo $\bm{h}_{\mathcal{E}}$ and the observed signal containing sound events $\bm{x}_{\mathcal{E}}$ over many environments. Conventional datasets use comprehensively collected IR recordings in 5 to 11 environments to generate training data, but simply extending this to a larger number of environments is very costly. Especially when considering the development of real SELD devices, collecting a huge amount of data each time is not practical, as training data is required for each device. To overcome this, we propose a data collection scheme, the combination of comprehensive anechoic data and sparse multi-environment data (CASM), which is suitable for collecting multi-environment data. Comprehensive anechoic data consists of recordings of acoustic events or IRs in an anechoic room, covering a comprehensive set of angles and distances from the sound source to the microphone. The sparse multi-environment data, which are sound events or IRs recorded at a few variations of angles and distances over many environments, are expected to be useful in understanding how the environment affects the acoustic signal. \begin{figure}[t!] \begin{center} \includegraphics[width=\linewidth]{Figures/basicconcept.pdf} \vspace{-18pt} \caption{Basic concept of echo-aware feature refinement (EAR).} \label{fig:ear} \end{center} \vspace{-25pt} \end{figure} \vspace{-5pt} \subsection{Implementation details} \vspace{-3pt} Fig.~\ref{fig:architecture} shows the network architecture of the proposed method. Our system can be broadly divided into three parts: the main branch for the SELD task, the echo autoencoder (AE) to extract the embedding $z_{\mathcal{E}}$, and the domain classifier that is used only during training for DAT. We adopt the baseline model of DCASE2020~\cite{dataset2020} as the main branch in order to validate the effect of EAR in combination with a standard SELD system. The input features of the main branch are 4-channel log-mel spectrograms and 3-channel intensity vectors~\cite{intensityvector}. The feature extractor, which consists of multi-layer convolutional neural network (CNN) blocks, extracts the feature $\bm{f}_{\mathcal{E}}$. $\bm{f}_{\mathcal{E}}$ is concatenated with $z_{\mathcal{E}}$ and input to the feature refinement function $\mathcal{R}$. The bi-directional gated recurrent unit (Bi-GRU)~\cite{bigru} based architecture of $\mathcal{R}$ enables $\bm{f}_{\mathcal{E}}$ to be refined while considering long-term dependencies of the observed signal, such as the effect of late reverberation. The two branches for SED and DOA estimation take $\bm{f}^{'}_{\mathcal{E}}$ as input and output a prediction of SELD.
By using this output, the loss function for the SELD task is calculated using binary cross entropy (BCE) and mean square error (MSE) as follows: \begin{figure}[t!] \begin{center} \includegraphics[width=0.95\linewidth]{Figures/architecture.pdf} \vspace{-12pt} \caption{Network architecture of the proposed method.} \label{fig:architecture} \end{center} \vspace{-25pt} \end{figure} \begin{figure*}[h!] \begin{center} \vspace{-6pt} \includegraphics[width=0.97\linewidth]{Figures/embeddings.pdf} \vspace{-11pt} \caption{Comparison of t-SNE visualization of intermediate features.} \vspace{-22pt} \label{fig:embeddings} \end{center} \end{figure*} \vspace{-12pt} \begin{equation} \label{eq:lossseld} \mathcal{L}_{\mbox{\scriptsize{seld}}} = \mathcal{L}_{\mbox{\scriptsize{BCE}}}(\hat{\bm{y}}_{\mbox{\scriptsize{sed}}},\bm{y}_{\mbox{\scriptsize{sed}}}) + \lambda_{\mbox{\scriptsize{doa}}} \mathcal{L}_{\mbox{\scriptsize{MSE}}}(\hat{\bm{y}}_{\mbox{\scriptsize{doa}}},\bm{y}_{\mbox{\scriptsize{doa}}}) , \end{equation} where $\lambda_{\mbox{\scriptsize{doa}}}$ is the fixed balance parameter. The domain classifier consists of a gradient reversal layer (GRL)~\cite{grl} and linear layers with a sigmoid activation function. It outputs a one-dimensional domain prediction $\hat{\bm{d}}$. Here, $\bm{d}=0,\,1$ denotes the anechoic and reverberant domains, respectively. The GRL is introduced for DAT; it works as an identity layer in forward propagation and reverses the gradient in backpropagation with scale $\lambda_{\mbox{\scriptsize grl}}$. We adopted BCE as the loss function $\mathcal{L}_{\mbox{\scriptsize domain}}$ to train the domain classifier. The input of the echo AE is the observed echo $\bm{h}_{\mathcal{E}}$. Unlike the input signal $\bm{x}_{\mathcal{E}}$ for the main branch, $\bm{h}_{\mathcal{E}}$ is measured only once in each environment. The input feature of the echo AE, denoted as $Z_{\mathcal{E}}$, is extracted from $\bm{h}_{\mathcal{E}}$ in the same way as the input feature of the main branch. The encoder part of the echo AE, which consists of multi-layer CNN blocks and a linear layer, extracts $\bm{z}_{\mathcal{E}}$ from the input $Z_{\mathcal{E}}$. An undesirable saddle point of DAT using the GRL is one in which $\bm{z}_{\mathcal{E}}$ is trained as an environment-invariant embedding, and the spatial cues useful for EAR are lost. To avoid this, we set a reconstruction constraint on $\bm{z}_{\mathcal{E}}$. For this constraint, we add the echo reconstruction loss $\mathcal{L}_{\mbox{\scriptsize echo}}$, which is the MSE between the input feature $Z_{\mathcal{E}}$ and the reconstructed input feature $\tilde{Z}_{\mathcal{E}}$, to the training loss function. The entire network is trained in an end-to-end manner on the basis of the following loss function: \vspace{-3pt} \begin{equation} \label{eq:loss} \mathcal{L} = \lambda_{\mbox{\scriptsize seld}}\mathcal{L}_{\mbox{\scriptsize seld}}+\lambda_{\mbox{\scriptsize domain}}\mathcal{L}_{\mbox{\scriptsize domain}}+\lambda_{\mbox{\scriptsize echo}}\mathcal{L}_{\mbox{\scriptsize echo}}, \end{equation} \vspace{-5pt} where $\lambda_{\mbox{\scriptsize seld}},\lambda_{\mbox{\scriptsize domain}}$ and $\lambda_{\mbox{\scriptsize echo}}$ are fixed balance parameters.
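Before turning to the experiments, we give a minimal sketch of the GRL and of the combined objective in Eq.~(\ref{eq:loss}), assuming PyTorch; all module and argument names are ours, and the balance parameters shown are placeholders rather than tuned values:
\begin{verbatim}
# Gradient reversal layer (GRL): identity forward, reversed and scaled
# gradient backward, as used for domain adversarial training (DAT).
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)                 # identity in the forward pass

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None # reversed, scaled gradient

def grl(x, lam):
    return GradReverse.apply(x, lam)        # feed features through this
                                            # before the domain classifier

# Combined objective of Eq. (3); predictions are probabilities (sigmoid).
def total_loss(y_sed_hat, y_sed, y_doa_hat, y_doa, d_hat, d, Z_hat, Z,
               lam_doa=1.0, lam_seld=1.0, lam_domain=1.0, lam_echo=1.0):
    bce = torch.nn.functional.binary_cross_entropy
    mse = torch.nn.functional.mse_loss
    l_seld = bce(y_sed_hat, y_sed) + lam_doa * mse(y_doa_hat, y_doa)
    l_domain = bce(d_hat, d)                # d_hat computed on grl(...)
    l_echo = mse(Z_hat, Z)                  # echo reconstruction constraint
    return lam_seld * l_seld + lam_domain * l_domain + lam_echo * l_echo
\end{verbatim}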
\section{Experiments} \vspace{-2pt} \subsection{Dataset: FOA-MEIR} \vspace{-2pt} To set up the new problem of unknown-environment adaptation of SELD, we collected the ``FOA-MEIR'' dataset, which records IRs in many environments using a first-order ambisonic (FOA) microphone on the basis of the CASM scheme described in Sec.~\ref{sec:casm}. The dataset consists of the five subsets of IR recordings shown in Table~\ref{tb:recording} and dry source recordings for SELD tasks. The first subset, ``Anechoic'', contains 216 IR recordings in an anechoic room. These 216 IR recordings consist of a comprehensive combination of relative angles and distances between the sound source and the microphone. The second subset, ``Reverb-S'', consists of sparse IR recordings in multiple reverberant environments. It includes 96 environments such as offices, meeting rooms, and halls. Each environment differs in at least one of the following: the room, the position of the microphone, or the arrangement of surrounding objects. In each environment, three IR recording positions were randomly selected from the same 216 angle and distance combinations as ``Anechoic''. The third subset, ``Test'', contains 216 IR recordings in 5 unknown environments. The fourth subset, ``Echo'', consists of IR recordings at the 101 places where ``Reverb-S'' and ``Test'' were recorded, with azimuth and elevation equal to 0 degrees and a source-to-microphone distance of 150 cm. This subset is used to simulate the observed echo $\bm{h}_{\mathcal{E}}$ described in Sec.~\ref{sec:basic}. The fifth subset, ``Reverb-C'', consists of conventional-style comprehensive reverberant IR recordings, containing 216 IR recordings in each of two reverberant environments. At each position of ``Reverb-S'', ``Test'', and ``Reverb-C'', ambient noise was also recorded using the same FOA microphone that was used for IR recording. Ambient noise includes air conditioning, walking, talking, and so on. To synthesize a dataset for SELD using the above IRs, dry sounds were recorded in an anechoic room using a monaural microphone. These dry sounds contain 12 different sound event classes, and each class has 20 variations of sound. \begin{table}[t!] \centering \vspace{-10pt} \caption{Specification of FOA-MEIR dataset} \label{tb:recording} \scalebox{0.78}[0.78]{ \begin{tabular}{@{}l|cccc|c@{}} \toprule Subset& Anechoic & Reverb-S & Test &Echo& Reverb-C\\ \midrule \# of environment & 1 & 96 & 5& 102&2\\ \# of IR / environment & 216 & 3 & 216 & 1&216 \\ \midrule Azimuth range& $[-\pi,\pi)$&$[-\pi,\pi)$&$[-\pi,\pi)$ &$0$&$[-\pi,\pi)$\\ Azimuth interval&$10^{\circ}$&random&$10^{\circ}$&-&$10^{\circ}$\\ Elevation range&$[-\frac{\pi}{2},\frac{\pi}{2})$&$[-\frac{\pi}{2},\frac{\pi}{2})$&$[-\frac{\pi}{2},\frac{\pi}{2})$ &$0$&$[-\frac{\pi}{2},\frac{\pi}{2})$ \\ Elevation interval&$20^{\circ}$&random&$20^{\circ}$&-&$20^{\circ}$\\ Distance [cm] & 75,\,150 & 75,\,150 & 75,\,150 & 150&75,\,150\\ \midrule Noise / environment &-&2.5 min&15 min&- & 15 min\\ \bottomrule \end{tabular} } \vspace{-14pt} \end{table} \vspace{-15pt} \begin{table}[t!]
\centering \caption{Split settings of synthesis data for training and evaluation.} \label{tb:split} \scalebox{0.9}[0.9]{ \begin{tabular}{@{}l|cccc@{}} \toprule Split Name& IR & SNR & Length&\# of clips\\ \midrule Train-rev&``Reverb-S'' &6 to 30dB&20 sec.&1920 \\ Train-anec&``Anechoic''&Clean&20 sec.&1920 \\ Train-target &``Test''&6 to 30dB&20 sec.&1920\\ Train-base &``Reverb-C''&6 to 30dB&20 sec.&1920\\ Test &``Test''&20dB&20 sec.&300 \\ \midrule Train-echo-rev&``Echo'' &6 to 30dB&2.5 sec.&1920 \\ Train-echo-anec&``Anechoic''&Clean&2.5 sec.&1 \\ Test-echo &``Echo''&20dB&2.5 sec.&5 \\ \bottomrule \end{tabular} } \vspace{-13pt} \end{table} \vspace{8pt} \subsection{Experimental setup} \vspace{-3pt} {\bf Dataset synthesis:} A dataset for training and evaluation was synthesized using the IR recordings and dry sounds of the FOA-MEIR dataset. The dataset consists of eight splits, as shown in Table~\ref{tb:split}. The first five splits are constructed in the same way as the existing SELD dataset dealing with stationary polyphonic sound sources~\cite{dataset2019}. Here, the maximum number of overlapping sources was set to 2, and the clip-wise average of the signal-to-noise ratio (SNR) was randomly set between 6 and 30 dB. The last three splits in Table~\ref{tb:split} contain sound clips assuming the observed echo $\bm{h}_{\mathcal{E}}$. These observed echoes were synthesized by convolving the 20 ms swept-sine signal~\cite{echo1} with the IRs of the ``Echo'' subset of FOA-MEIR. The ``Train-echo-rev'' split is composed of the same number of clips as ``Train-rev''. Each clip is synthesized with the same environment and SNR as ``Train-rev'', but different noise is added. This is because the observed signal $\bm{x}_{\mathcal{E}}$ and the observed echo $\bm{h}_{\mathcal{E}}$ are asynchronous. The ``Train-echo-anec'' split contains only one clip, synthesized using an IR recording in an anechoic room; this clip contains only the direct sound of the swept-sine signal. \vspace{2pt} \noindent {\bf Hyperparameters:} For the short-time Fourier transform of $\bm{x}_{\mathcal{E}}$ and $\bm{h}_{\mathcal{E}}$, 2048- and 1024-point Hanning windows with 960- and 512-point shifts were used, respectively. The dimension of the Mel filter bank was 64. All model parameters in the main branch are the same as in the DCASE2020 baseline model~\cite{dataset2020}. As shown in Fig.~\ref{fig:architecture}, the echo AE encoder has 4 CNN blocks; the number of CNN filters in each block was (16, 32, 64, 4), the kernel size was 3, the stride and padding were both 1, and the max-pooling size was (2, 2) for all blocks. The parameters of the upsampling CNN blocks of the echo AE decoder are symmetric to those of the encoder. The dimension of $\bm{z}_{\mathcal{E}}$ was $D_{\mbox{\scriptsize echo}}=16$. The number of linear layers of the domain classifier was 3; the input dimension was 512, the hidden dimensions were 512 and 128, and the output dimension was 1. In the GRL, the scale factor $\lambda_{\mbox{\scriptsize grl}}$ was changed during training using the following schedule, as in~\cite{grl,doaadap2}: \vspace{-9pt} \begin{equation} \lambda_{\mbox{\scriptsize grl}} =\bar{\lambda}_{\mbox{\scriptsize grl}}\left(\frac{2}{1 + \exp(-\gamma p)}-1\right), \vspace{-3pt} \end{equation} where $p=\frac{\mathrm{epoch}}{\mathrm{max\ epoch}}$, $\gamma=10$, and $\bar{\lambda}_{\mbox{\scriptsize grl}}=0.01$.
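In code, this schedule amounts to the following (a direct transcription of the equation above; the function name is ours):
\begin{verbatim}
import math

def grl_scale(epoch, max_epoch, gamma=10.0, lambda_bar=0.01):
    # Scale factor of the gradient reversal layer at a given epoch;
    # it rises smoothly from 0 toward lambda_bar as training proceeds.
    p = epoch / max_epoch
    return lambda_bar * (2.0 / (1.0 + math.exp(-gamma * p)) - 1.0)
\end{verbatim}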
The balance parameters of the loss function, $\lambda_{\mbox{\scriptsize doa}}$, $\lambda_{\mbox{\scriptsize seld}}$, $\lambda_{\mbox{\scriptsize domain}}$, and $\lambda_{\mbox{\scriptsize echo}}$, were set to 100, 3.0, 1.0, and 0.01, respectively. The batch size was 64, half of which was data from an anechoic room and half from reverberant environments. The Adam optimizer~\cite{Adam} was used for training with an initial learning rate of 0.01. Training was run for 100 epochs. \vspace{3pt} \noindent {\bf Comparison method and evaluation metrics:} To evaluate the effectiveness of the proposed method, the following six conditions were compared. In the following, the base model refers to the DCASE2020 baseline model, i.e., the proposed architecture with the domain classifier and the echo AE removed. \setlength{\leftmargini}{15pt} \begin{description} \vspace{-2pt} \setlength{\parskip}{0cm} \setlength{\itemsep}{0.03cm} \item[(A) Target (Oracle):] Training the base model using the ``Train-target'' split. This is an oracle condition in which IR recordings of the unknown environments are available as training data. \item[(B) Source:] Training the base model using the ``Train-anec'' split. This is the worst-case setting, in which no IR recordings in reverberant environments are available as training data. \item[(C) Baseline:] Training the base model using the ``Train-base'' and ``Train-anec'' splits. The ``Train-base'' split is a conventional dataset that does not employ the proposed CASM scheme, and this result serves as a baseline for our method. \item[(D) NoAdap:] Training the base model using the ``Train-rev'' and ``Train-anec'' splits. \item[(E) DAT:] Training the proposed model without the echo embedding $\bm{z}_{\mathcal{E}}$ using the ``Train-rev'' and ``Train-anec'' splits. \item[(F) Proposed:] Training the proposed model using the ``Train-rev'', ``Train-anec'', ``Train-echo-rev'', and ``Train-echo-anec'' splits. \end{description} To independently evaluate the effectiveness of the proposed method for DOA estimation and SED, SELD is evaluated with individual metrics for SED and DOA estimation~\cite{Metrics,Metrics2}. For SED, we use the one-second segment-based F-score (F) and error rate (ER). For DOA estimation, we use the frame-wise DOA error (DE), i.e., the average angular error in degrees. In addition, frame recall (FR) is calculated as the recall of the number of active source estimations. Among these four metrics, higher F and FR and lower ER and DE indicate better performance. \begin{table}[t!] \centering \vspace{-7pt} \caption{SELD performances. DE, FR, F, and ER denote DOA error, Frame recall, F-score, and Error rate of SED, respectively.} \label{tb:result} \begin{tabular}{@{}l|cccc@{}} \toprule System & DE$\downarrow$ & FR$\uparrow$ & F$\uparrow$ & ER$\downarrow$ \\ \midrule (A) Target (Oracle) &6.1 &95.4 &95.4 &8.8 \\ (B) Source &11.9 &68.2 &57.3 &94.3 \\ (C) Baseline &11.9 & 89.7& 84.9&25.3 \\ \midrule (D) NoAdap &8.9& 94.5&93.9&11.0\\ (E) DAT &8.8 & 94.5&94.1 & 10.8\\ (F) Proposed &{\bf 8.4} &{\bf 94.6}& {\bf 94.4}& {\bf 10.5}\\ \bottomrule \end{tabular} \vspace{-17pt} \end{table} \vspace{-10pt} \subsection{Result} \vspace{-4pt} Table~\ref{tb:result} shows the SELD performance of the proposed method and the comparison methods. First, the proposed method (F) outperforms the comparison methods (B) to (E) in all the SELD metrics. Second, comparing (C) and (D), even though the number of IR measurements used in (C) (648) is larger than in (D) (504), (D) achieves better performance.
This fact shows that the proposed data collection scheme, CASM, which combines comprehensive data from an anechoic room and sparse data from many reverberant environments, is more suitable for training a robust SELD model for unknown environments, even without DAT and EAR. Comparing (D), (E), and (F), the introduction of DAT and EAR resulted in a stepwise improvement in the performance of DOA estimation, and signs of improvement were also observed for the other metrics. These results indicate that EAR is effective in adapting SELD to unknown environments. Fig.~\ref{fig:embeddings} shows the t-SNE~\cite{tsne} visualization of the intermediate features $\bm{f}_{\mathcal{E}^*}$ and $\bm{f}^{'}_{\mathcal{E}^*}$ obtained by methods (D), (E), and (F) when inputting the observed signals of the unknown and anechoic environments. For each method, the left figure shows $\bm{f}_{\mathcal{E}^*}$, and the right figure shows $\bm{f}^{'}_{\mathcal{E}^{*}}$. From the comparison of the distributions of $\bm{f}^{'}_{\mathcal{E}^{*}}$, we can see that the distributions of the anechoic environment and the unknown environments are different in (D), while the distributions of all environments are mixed in (E) and (F). This suggests that (E) and (F) are successful in suppressing noise and reverberation at the feature level. In addition, when comparing the distributions of $\bm{f}_{\mathcal{E}^{*}}$ in (E) and (F), the distributions in (F) are clearly separated by environment. This means that denoising and dereverberation are not performed in $\mathcal{F}$ but mainly in $\mathcal{R}$. This implies that EAR prioritizes feature refinement based on the observation of echoes rather than on the knowledge obtained from the training data. This property of EAR is expected to benefit robust SELD because feature refinement based on knowledge from limited training data could be useless in unknown environments. \vspace{-5pt} \section{Conclusion} \vspace{-6pt} In this study, we proposed echo-aware feature refinement (EAR) for robust SELD in unknown environments. EAR associates the spatial cues of an unknown environment, obtained through echo measurement, with feature refinement in a domain adversarial training manner. Validation experiments using our FOA-MEIR dataset confirmed that EAR improves SELD performance in unknown environments. Therefore, we conclude that the proposed EAR is effective for the adaptation of SELD to unknown environments. \clearpage \bibliographystyle{IEEEbib}
\section{Introduction} The robust and accurate prediction of complex machinery's remaining useful life (RUL) is essential for condition-based maintenance (CBM) to improve reliability and reduce maintenance costs. The turbofan engine, a key component of aircraft, is highly complex and precise thermal machinery. Since the degradation of turbofan engines can lead to serious disasters, it is crucial to conduct prognostics maintenance to ensure machinery safety. Prognostics maintenance is a challenging task due to the nonlinear complexity and uncertainty in complex machinery. The remaining useful life (RUL) is a technical term used to describe the progression of faults in prognostics and health management (PHM) applications\cite{si2011remaining}. As a result, the PHM system can make maintenance decisions to improve the reliability, stability, and efficiency of complex machinery\cite{zhang2021transfer,pang2021bayesian}. In particular, accurate RUL prediction is a highly crucial technology of PHM that assesses the machinery's health states to provide precise failure warnings and avoid safety accidents. Therefore, RUL prediction is essentially valuable in engineering practice. Generally, RUL is defined as the period from the current time to the failure of a component\cite{wang2021remaining,zhang2021remaining,ellefsen2019remaining}. The existing RUL prediction algorithms can be divided into two main categories: model-based approaches and data-driven approaches. The physical model-based approaches\cite{jouin2016degradations} describe the mechanical degradation process by building mathematical models. However, the sub-components of complex machinery are coupled, making some model parameters hard to obtain. Building a physical model that accurately simulates the degradation process is challenging. In contrast, the data-driven approaches\cite{zhu2018estimation,ding2021remaining} do not rely on physical models but on the extraction of degradation-related features from historical data. They can reveal the underlying correlations and causalities between raw sensor data and RUL. Therefore, the data-driven approaches attract a lot of research interest. The machine learning approaches utilize expert knowledge and signal processing algorithms to extract features from raw sensor data to predict the RUL, $\it{e.g.}$ support vector machine (SVM)\cite{ordonez2019hybrid}, hidden Markov model (HMM)\cite{chen2019hidden}, artificial neural network (ANN)\cite{gebraeel2004residual}, and extreme learning machine (ELM)\cite{liu2018method}. The machine learning approaches can reduce machinery maintenance costs. However, condition monitoring data can have very high dimensions, and the underlying relationships become more complex, making it difficult for traditional machine learning approaches to extract temporal features from the raw sensor data. Given the higher dimensionality and more complex relationships of raw sensor data, deep learning neural networks are more capable of RUL prediction. In recent years, deep learning (DL) neural networks, with their ability of automatic feature extraction and great nonlinear fitting capacity, have attracted widespread attention from engineers and researchers\cite{krizhevsky2012imagenet,lecun2015deep,dahl2011context}. They have shown effective learning capacity and achieved excellent performance, especially in image processing, natural language processing (NLP), etc. Thus, deep learning technology provides a promising solution to improve RUL prediction accuracy.
So far, deep learning approaches such as the recurrent neural network (RNN)\cite{graves2013speech}, long short-term memory (LSTM)\cite{hochreiter1997long}, and gated recurrent unit (GRU)\cite{cho2014learning} for time series modeling, as well as the convolutional neural network (CNN)\cite{lecun2015deep}, have been widely used for feature extraction from high-dimensional data. Although deep learning neural networks have shown excellent potential for RUL prediction, some practical and data-related issues are worthy of further consideration. Since physical machinery works in complex operating conditions, sensor streaming data inevitably contain noisy fluctuations and measurement errors. Hence, the fluctuating changes significantly affect feature extraction and RUL prediction. Currently, expert knowledge, signal processing approaches, and health indicators are still used in feature extraction. However, these approaches cannot sufficiently mine intrinsic features from high-dimensional sensor streaming data. There is an urgent need for an effective method to extract features reflecting the underlying physical degradation development. Moreover, the degradation patterns in each cycle may have different weights in the degradation development, especially the representative features that appear at specific cycles as periodic pulses or shocks. If the sensor data of every cycle contribute equally to the RUL prediction, the key fault characteristics are overwhelmed by noise signals. Thus, capturing key fault characteristics enables the deep learning neural network to achieve highly accurate RUL prediction. We propose the temporal deep degradation network (TDDN), which combines a 1D CNN with an attention mechanism, to achieve higher prediction accuracy by extracting degradation-related features. Firstly, the TDDN model utilizes the 1D CNN to extract temporal features from raw sensor data. Next, the temporal features are fed to a fully connected layer to generate abstract features, and then an attention mechanism is introduced to calculate the importance weights of the abstract features. The TDDN model outperforms the existing deep learning models on the same dataset. The main contributions of this paper are as follows: \begin{enumerate} \item[(1)] The end-to-end TDDN model is proposed to make effective and accurate RUL predictions by extracting degradation-related features from multivariate time series. \item[(2)] The 1D CNN can accurately extract the temporal features in the degradation development. The temporal features have monotonic degradation trends and demonstrate strong robustness against fluctuations in the noisy streaming data. \item[(3)] The attention weights evolve with the degradation development. The attention mechanism effectively identifies the underlying degradation patterns and captures key fault characteristics. \item[(4)] The effectiveness of the TDDN model on RUL prediction is verified by achieving superior performance on the C-MAPSS dataset. Furthermore, the model remarkably outperforms other methods in the complex conditions of the FD002 and FD004 datasets, demonstrating its robustness in practical applications. \end{enumerate} The remaining paper is organized as follows: Section~\ref{rw} introduces related work on the C-MAPSS dataset. The proposed TDDN model is introduced and discussed in Section~\ref{TDDNs}. Section~\ref{results} illustrates the experimental approach, results, and discussions. Section~\ref{pa} discusses the tuning of hyperparameters and how degradation-related features make the TDDN model predict RUL effectively and accurately.
We close the paper with conclusions and future work in Section~\ref{discussion}. \section{Related Work}\label{rw} The degradation development and machinery RUL are embedded in the streaming data collected by industrial sensors. The deep learning architectures of RNN and CNN have been widely used in RUL prediction. Among RNNs, LSTM and GRU are two deep learning architectures widely applied to time series prediction. Heimes $\it{et\; al.}$\cite{heimes2008recurrent} applied RNN to predict the degradation trend. Zheng $\it{et\; al.}$\cite{zheng2017long} proposed an approach that combined LSTM cells with standard feed-forward layers to discover hidden patterns in the sensor streaming data. The approach takes time-series temporal information into account to achieve higher accuracy. Wang $\it{et\; al.}$\cite{wang2019remaining} applied LSTM to learn the nonlinear mapping from the degradation features to the RUL. These features were selected by three degradation evaluation indicators so as to retain the representative degradation features. The prediction results were superior to several traditional machine learning algorithms, such as the backpropagation neural network (BPNN) and support vector regression machine (SVRM). GRU\cite{chen2019gated} has also been applied to RUL prediction by pre-processing sensor data with kernel principal component analysis (KPCA) for nonlinear feature extraction. Compared with LSTM, this approach took less training time and achieved better prediction accuracy. Ahmed Elsheikh $\it{et\; al.}$\cite{elsheikh2019bidirectional} applied the bidirectional handshaking LSTM (BHSLSTM) to process the time series streaming data effectively in both directions. This architecture enabled the LSTM network to gain more insights when identifying the degradation information in time series streaming data. Chen $\it{et\; al.}$\cite{chen2020machine} integrated the advantages of LSTM and the attention mechanism in a deep learning framework to further improve RUL prediction accuracy. The LSTM network and attention mechanism were employed to learn temporal features and weights, respectively. In a similar approach\cite{ragab2020attention}, an LSTM network encoded features and an attention mechanism decoded the hidden features; the encoder and decoder hidden features were then fed to a fully connected neural (FCN) network for RUL prediction. In addition, Ellefsen $\it{et\; al.}$ \cite{ellefsen2019remaining} introduced unsupervised deep learning techniques to overcome the difficulty of acquiring high-quality labeled training data by automatically extracting degradation features from raw unlabeled training data. Combining the LSTM network with the restricted Boltzmann machine (RBM) network, the semi-supervised deep architecture showed better accuracy than purely supervised training approaches. 1D CNN has excellent feature learning capability to meet the challenges of highly nonlinear and multi-dimensional sensor streaming data. Babu $\it{et\; al.}$\cite{babu2016deep} adopted 1D CNN to capture the salient patterns of the sensor signals. 1D CNN was applied to learn features along the temporal dimension over the whole multivariate time series. Yao $\it{et\; al.}$\cite{yao2021remaining} simplified the 1D CNN structure to avoid a large number of parameters by replacing the fully connected layer with a global maximum pooling layer.
Then the features extracted by the 1D CNN are fed into the simple recurrent unit (SRU) network to predict RUL. Li $\it{et\; al.}$\cite{li2018remaining} increased the depth of the 1D CNN. The deep 1D CNN effectively extracted degradation-related features from raw sensor data. In addition, some researchers combined the 1D CNN with other methods to make the RUL prediction. For example, the 1D CNN was combined with a stacked bi-directional and uni-directional LSTM (SBULSTM) network\cite{an2020data} to learn more abstract features and encode the temporal information. Yan $\it{et\; al.}$\cite{song2020distributed} developed an integrated model based on Temporal Convolutional Networks (TCN) with an attention mechanism to calculate the contribution of data from different sensors and degradation stages. TCN was used for feature extraction of time series, and an attention mechanism was applied to calculate temporal weights. \section{Proposed Model: Temporal Deep Degradation Network}\label{TDDNs} In the proposed temporal deep degradation network, the 1D CNN and the attention mechanism are the two key components; we first introduce them and then explain the structure and parameters of the TDDN model. The structure of TDDN is presented in Fig.~\ref{TDDN}, which has four parts: (1) a 1D CNN to extract temporal features, (2) a fully connected layer to generate abstract features, (3) an attention mechanism to generate the attention-weighted state, and (4) fully connected layers to predict the RUL. As shown in Fig.~\ref{TDDN}, the 1D CNN is first applied to extract temporal features from the multivariate time series. The temporal features are fed to a fully connected layer to generate abstract features. To capture key fault characteristics at different cycles, the attention mechanism is employed to calculate a weighted sum over all abstract features. Finally, the output of the attention mechanism is fed into fully connected layers to predict the RUL. The layer details of the proposed network are shown in Table~\ref{layerdetails}. The 1D CNN feature extraction and the attention mechanism in TDDN are critically essential for the RUL prediction. We explain them in detail below. \begin{figure}[!h] \centering \includegraphics[width=1.0\textwidth]{figures/proposed_structure.pdf} \caption{Proposed TDDN model with 1D CNN and attention mechanism} \label{TDDN} \end{figure} \subsection{Sensor Data Representation and Moving Windows} The sensor streaming data (multivariate time series) is $\mathbf{X}=\left[\mathbf{x}_{1} ; \mathbf{x}_{2} ; \cdots ; \mathbf{x}_{i} ; \cdots ; \mathbf{x}_{n}\right] \in \mathbb{R}^{n \times m}$, where $\mathbf{x}_i =\left[x_{i,1}\; {x_{i,2} \; \cdots \; x_{i,m}}\right]$ is the sensor data at time $i$, $n$ is the length of the sensor streaming data, and $m$ is the number of sensors. The sensor streaming data selection is discussed in Section~\ref{data}. The sensor streaming data are segmented into a sequence of moving windows and used as inputs to the TDDN model. In the sequence of moving windows $\mathbf{W}_M=[\mathbf{W}_1 \; \cdots \mathbf{W}_j \; \cdots \mathbf{W}_{n-w+1}]$, the $j$-th window is defined as $\mathbf{W}_j=\left[\mathbf{x}_{j} ; \mathbf{x}_{j+1} ; \cdots ; \mathbf{x}_{j+w-1}\right]$, where $w$ is the window size. \subsection{1D CNN and Temporal Features} The 1D CNN is generally composed of convolutional layers and max-pooling layers. Through the convolutional layers and max-pooling layers, the temporal features are automatically extracted from a moving window $\mathbf{W}_j$ by the filters.
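Anticipating the formal definitions below, a minimal PyTorch sketch of such a temporal feature extractor is given here; it treats each moving window as an $m$-channel sequence of length $w$ and approximates the layer settings of Table~\ref{layerdetails} (the exact filter shapes there differ slightly, so this is a simplified reading rather than the exact implementation):
\begin{verbatim}
import torch.nn as nn

class TemporalFeatureExtractor(nn.Module):
    # Sketch of the 1D CNN in TDDN: stacked conv + max-pooling layers.
    def __init__(self, m, channels=(32, 64, 128)):
        super().__init__()
        layers, in_ch = [], m
        for out_ch in channels:
            layers += [
                nn.Conv1d(in_ch, out_ch, kernel_size=3, stride=1,
                          padding=1),                   # zero-padding
                nn.ReLU(),
                nn.MaxPool1d(kernel_size=2, stride=2),  # p = 2, s = 2
            ]
            in_ch = out_ch
        self.net = nn.Sequential(*layers)

    def forward(self, window):
        # window: (batch, m, w) -- a batch of moving windows W_j
        return self.net(window)  # temporal features fed to the FC layer
\end{verbatim}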
The 1D CNN can extract the temporal features associated with degradation patterns. The structure of the 1D CNN is shown in Fig.~\ref{1DCNN}. \begin{figure}[!h] \centering \includegraphics[width=\textwidth]{figures/1DCNN.pdf} \caption{\rmfamily 1D Convolutional Neural Network and Temporal Feature Extraction} \label{1DCNN} \end{figure} The convolutional filters in the convolutional layer are used to extract feature maps from the input moving window of the time series. The feature maps are fed into the pooling layer to obtain temporal features. In Fig.~\ref{1DCNN}, it is shown that $L$ filters $\mathbf{k}_l$ are used to generate $L$ feature maps. All the convolutional filters have the same dimension of $cm\times1$. Besides, the zero-padding operation is implemented in the 1D CNN to ensure boundary information extraction. The rectified linear unit (ReLU) is used as the activation function for all the convolutional layers. A fully connected (FC) layer is connected to the 1D CNN to convert temporal features into abstract features with the same dimension $w$ as the length of the moving window. The concatenated vector $\mathbf{x}_{k: k+c-1}^j$ of $\mathbf{x}_{j+k}$ to $\mathbf{x}_{j+k+c-1}$ in the moving window $\mathbf{W}_j$ for the 1D CNN is expressed as, \begin{equation} \mathbf{x}_{k: k+c-1}^j=\mathbf{x}_{j+k} \oplus \cdots \oplus \mathbf{x}_{j+k+c-1}, \end{equation} where $k\in [0,w-c]$ and the symbol $\oplus$ denotes the direct sum of $c$ vectors. Therefore, the filter dimension $cm$ is equal to $c \times m$. The $l$-th extracted feature $z_{l,j}^k$ from the concatenated vector $\mathbf{x}_{k:k+c-1}^{j}$ in the $j$-th moving window is defined as, \begin{equation} z_{l,j}^k=f(\mathbf{k}_l\cdot \mathbf{x}_{k:k+c-1}^j + b_l) \end{equation} where $\mathbf{k}_l$ is the $l$-th filter, $b_l \in \mathbb{R}$ is a bias term, and $f$ and the symbol $\cdot$ represent the nonlinear activation function and the dot product, respectively. The feature map $\mathbf{z}_{l,j}$ of the $j$-th moving window $\mathbf{W}_j$ is defined as, \begin{equation} \mathbf{z}_{l,j}=\left[z_{l,j}^{0} \;\cdots z_{l,j}^{k} \; \cdots \; z_{l,j}^{w-c}\right]. \end{equation} As a result, the filter $\mathbf{k}_l$ with different initializations is applied to generate the feature map $\mathbf{z}_{l,j}$ for the moving window $\mathbf{W}_j$. Then, the maximum pooling function\cite{boureau2010theoretical} $\operatorname{pool()}$ is used to obtain the $l$-th temporal feature, $\mathbf{y}_{l,j}=\operatorname{pool}(\mathbf{z}_{l,j}, p, s)$, by reducing the size of the feature map according to the maximum value of the feature map $\mathbf{z}_{l,j}$, where $p$ is the pooling size and $s$ is the step size. With the FC layer after the 1D CNN, the sequence of abstract features $\mathbf{H}_j$ is obtained from the temporal features for the moving window $\mathbf{W}_j$. The sequence of abstract features $\mathbf{H}_j$ corresponding to the $j$-th moving window $\mathbf{W}_j$ is expressed as $\mathbf{H}_j=\left[\mathbf{h}_{1} ; \mathbf{h}_{2} ; \cdots ; \mathbf{h}_{i}; \cdots; \mathbf{h}_{w}\right]$, where $\mathbf{h}_{i}$ represents the abstract feature with dimension $1 \times m$. \subsection{Attention Mechanism and Degradation Attention Weights} Inspired by the success and breakthrough of the transformer model in NLP tasks\cite{vaswani2017attention}, we introduce an attention mechanism to calculate the degradation attention weights relevant to the current health states. The degradation attention weights can effectively reflect the degradation development.
The attention mechanism generates degradation attention weights according to the similarities between the abstract features and the health states. The initial states greatly affect the degradation development in a physical system. In order to obtain the degradation attention weights from the sequence of abstract features $\mathbf{H}_j$, the reference sequence $\mathbf{H}_r$ in terms of the initial abstract feature $\mathbf{h}_1$ of the moving window $\mathbf{W}_j$ is defined as $[\mathbf{h}_1;\cdots; \mathbf{h}_1;\cdots; \mathbf{h}_1]$ with the same length as $\mathbf{H}_j$. Two other sequences $\mathbf{H}_s$ and $\mathbf{H}_m$, in terms of the difference and product of $\mathbf{H}_j$ and $\mathbf{H}_r$, are defined as $\mathbf{H}_{s} = [\mathbf{0} ; \mathbf{h}_2-\mathbf{h}_1 ; \cdots; \mathbf{h}_i -\mathbf{h}_1; \cdots; \mathbf{h}_{w}-\mathbf{h}_1]$ and $\mathbf{H}_{m}=[\mathbf{h}_1 \cdot \mathbf{h}_1; \mathbf{h}_{2} \cdot \mathbf{h}_1 ; \cdots; \mathbf{h}_i\cdot \mathbf{h}_1; \cdots ; \mathbf{h}_{w}\cdot \mathbf{h}_1 ]$. Then, the four matrices $\mathbf{H}_j$, $\mathbf{H}_r$, $\mathbf{H}_s$, and $\mathbf{H}_m$ are stacked together as $\mathcal{H}_j=[\mathbf{H}_j \; \mathbf{H}_r \; \mathbf{H}_s \; \mathbf{H}_m]$ for the attention layer. The attention layer is composed of one MLP layer and a softmax function. $\mathcal{H}_j$ is fed into the MLP layer to generate the hidden state $\mathbf{u}_{i}$ as, \begin{equation} \mathbf{u}_{i}=\operatorname{tanh}\left(\mathbf{W} \mathbf{h}_{i}+\mathbf{b}\right) \end{equation} where $\mathbf{W}$ and $\mathbf{b}$ are the weight and the bias of the MLP, respectively. Afterwards, a softmax function is employed to calculate the attention weight according to the correlation score between $\mathbf{u}_{i}$ and a randomly initialized vector $\mathbf{u}_{s}$. The higher the score, the stronger the correlation, and the higher the degradation attention weight that should be assigned to the abstract feature.
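Under our reading, this construction can be sketched as follows, stopping just before the softmax step given next; the element-wise product is one plausible interpretation of $\mathbf{h}_i \cdot \mathbf{h}_1$, and the passed-in \texttt{mlp} stands for the single MLP layer of Table~\ref{layerdetails} (e.g., a \texttt{torch.nn.Linear} over the stacked features):
\begin{verbatim}
import torch

def attention_hidden_states(H, mlp):
    # H: (w, m) abstract features h_1..h_w of one moving window.
    h1 = H[0:1]                     # initial abstract feature h_1
    H_r = h1.expand_as(H)           # reference sequence [h_1; ...; h_1]
    H_s = H - H_r                   # difference from the initial state
    H_m = H * H_r                   # (element-wise) product with h_1
    H_cat = torch.cat([H, H_r, H_s, H_m], dim=-1)  # stacked (w, 4m)
    return torch.tanh(mlp(H_cat))   # hidden states u_i = tanh(W h + b)
\end{verbatim}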
After that, the degradation attention weight $\lambda_{i}$ is computed with the softmax function as, \begin{equation} \lambda_{i}=\frac{\exp \left(\mathbf{u}_{i}^{T} \mathbf{u}_{s}\right)}{\sum\limits_{i} \exp \left(\mathbf{u}_{i}^{T} \mathbf{u}_{s}\right)} \end{equation} \begin{table}[!t] \renewcommand\arraystretch{2} \centering \caption{Layer details of the proposed TDDN model} \label{layerdetails} \begin{tabular}{m{2cm}<{\centering}m{2cm}<{\centering}m{3cm}<{\centering}m{4.5cm}<{\centering}} \hline TDDN & Layer number & Description & Details \\ \hline \makecell[c]{1D CNN} & Layer 1 & 1D-convolution 1D-max-pooling & \makecell[c]{$L$=32, filter\_size=$2\ast m$, \\ stride=1, activation=ReLU, \\ $p$=2, $s$=2} \\ & Layer 2 & 1D-convolution 1D-max-pooling & \makecell[c]{$L$=64, filter\_size=64, \\ stride=1, activation=ReLU, \\ $p$=2, $s$=2} \\ & Layer 3 & 1D-convolution 1D-max-pooling & \makecell[c]{$L$=128, filter\_size=128, \\ stride=1, activation=ReLU, \\ $p$=2, $s$=2} \\ & Layer 4 & Fully connected & \makecell[c]{layer\_size=$w\ast m$\\ activation=ReLU } \\ \hline \makecell[c]{Attention\\Mechanism} & Layer 5 & MLP & \makecell[c]{layer\_size=$w$ \\ activation=Tanh} \\ & Layer 6 & \makecell[c]{Attention-weighted\\ state } & softmax function \\ \hline \makecell[c]{Regression} & Layer 7 & Fully connected & \makecell[c]{layer\_size=8 \\ activation=ReLU } \\ & \makecell[c]{Layer 8} & Regression & layer\_size=1 \\ \hline \makecell[c]{Hyper-\\parameters} & Learning rate & Batch size & Epochs \\ & \makecell[c]{$10^{-4}$ ($10^{-5}$)} & 32 & 200 \\ & \makecell[c]{Loss} & Optimizer & Window size \\ & \makecell[c]{Mean square\\ error} & Adam & 64 \\ \hline \end{tabular} \end{table} With the attention weights, TDDN can identify critical degradation patterns so as to capture all the relevant abstract features. The attention-weighted state $\mathbf{s}_j$ is a weighted sum of all extracted abstract features for the moving window $\mathbf{W}_j$, \begin{equation} \mathbf{s}_j=\sum_{i} \lambda_{i} \mathbf{h}_{i} \end{equation} Finally, the attention-weighted state $\mathbf{s}_j$ is fed into the fully connected layer to make the RUL prediction at the $j$-th time step. In summary, the structure parameters of TDDN are given in Table~\ref{layerdetails}. \section{Experimental Study: A Small Fleet of Turbofan Engines}\label{results} This section describes the process of data preprocessing, sample construction, and label setting for the Commercial Modular Aero-Propulsion System Simulation (C-MAPSS) dataset. Then, the proposed method is evaluated on the benchmark dataset and compared with the latest prediction results. \subsection{Data Preprocessing}\label{data} The C-MAPSS dataset provided by NASA is widely used to evaluate the performance of models. It is generated by a thermo-dynamical simulation model that simulates damage propagation and performance degradation in the MATLAB and Simulink environment\cite{saxena2008damage,saxena2008turbofan}. In addition, considering noisy fluctuations in real data, random measurement noise is added to the sensor outputs to enhance the authenticity of the data. According to engine operating conditions and fault modes, the dataset is divided into four subsets. In each subset, the engine number, operational cycle number, three operational settings, and 21 sensor measurements reflect turbofan engine degradation. Since each engine works normally at the start and a fault occurs at a random time, the degradation process generally differs from one engine to another.
The detailed description of the sensor data can be found in the previous work\cite{saxena2008damage}. Each subset is further divided into a training dataset and a test dataset. Finally, the operating conditions, fault modes, and other statistical data of each subset are listed in Table~\ref{C-MAPSS}. Notably, the training dataset records the whole run-to-failure data, whereas the test dataset terminates some time prior to the machinery failure. The corresponding RUL of each engine is included in the dataset for verification purposes. Therefore, the objective is to predict the RUL of each engine based on the provided sensor streaming data in the test dataset. The C-MAPSS dataset only provides the RUL corresponding to the last cycle of each engine in the test dataset. The RUL labels are added to the training dataset to establish the relationship between the sensor data of the turbofan engine and the RUL. In practical applications, the degradation process of turbofan engines is very long, and the degradation trend in the early and healthy operation stages is not distinct, so the degradation of the system is usually negligible. When a fault occurs at a certain time point, the engine performance is reduced, and when the engine reaches complete fault, the engine life is terminated. Therefore, this paper uses a piecewise linear function, which is shown in Fig.~\ref{timewindow}, to set the RUL labels of the training dataset. Besides, it is observed that degradation usually occurs in the late 120-130 cycles\cite{heimes2008recurrent}. Therefore, the maximum value of RUL is set to 120 based on numerous experiments. \begin{table} \renewcommand\arraystretch{2} \centering \caption{Information of the C-MAPSS dataset} \begin{tabular}{m{4cm}<{\centering}m{2cm}<{\centering}m{2cm}<{\centering}m{2cm}<{\centering}m{2cm}<{\centering}} \hline Dataset & FD001 & FD002 & FD003 & FD004 \\ \hline Engines in training data & 100 & 260 & 100 & 259 \\ Engines in test data & 100 & 259 & 100 & 248 \\ Operating conditions & 1 & 6 & 1 & 6 \\ Fault modes & 1 & 1 & 2 & 2 \\ \makecell[c]{Minimum running cycle\\ in training data} & 128 & 128 & 145 & 128 \\ \makecell[c]{Minimum running cycle\\ in test data} & 31 & 21 & 38 & 19 \\ \hline \end{tabular} \label{C-MAPSS} \end{table} Sensor signals in a real system differ in magnitudes and units, so not all raw sensor data are useful for RUL prediction. Each sensor's streaming data is normalized to be within the range of $\left[-1, 1\right]$ using the min-max scaler. To formulate the scaling function, let the multivariate time series be $\mathbf{X}=\left[\mathbf{x}_{1} \; \mathbf{x}_{2} \; \cdots \; \mathbf{x}_{j} \; \cdots \; \mathbf{x}_{m}\right]$, where $\mathbf{x}_{j}=\left[x_{1,j}\; {x_{2,j} \; \cdots \; x_{n,j}}\right]^T$ represents the time series data of the $j$-th sensor. The $j$-th sensor is normalized as, \begin{equation} \mathbf{x}_{j}=\frac{2\left(\mathbf{x}_{j}-\min \left(\mathbf{x}_{j}\right)\right)}{\max \left(\mathbf{x}_{j}\right)-\min \left(\mathbf{x}_{j}\right)}-1 \end{equation} where $\min \left(\mathbf{x}_{j}\right)$ and $\max \left(\mathbf{x}_{j}\right)$ denote the minimum and maximum values in the vector $\mathbf{x}_{j}$, respectively. Moreover, it is noted that the normalization of the test dataset is based on the maximum and minimum values of the training dataset.
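A minimal NumPy sketch of this scaling, reusing the training-set extrema for the test split, is given below (the function name is ours, and constant channels with equal extrema are assumed to have been removed beforehand):
\begin{verbatim}
import numpy as np

def minmax_scale(train, test):
    # Scale each sensor channel to [-1, 1] using training-set extrema;
    # assumes hi > lo for every retained channel.
    lo = train.min(axis=0)
    hi = train.max(axis=0)
    scale = lambda X: 2.0 * (X - lo) / (hi - lo) - 1.0
    return scale(train), scale(test)  # test reuses the training min/max
\end{verbatim}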
The sensor selection is then carried out to remove irrelevant sensor streaming data, improving computational efficiency and saving training time. The FD001 and FD003 datasets have a single operating condition. In the FD001 and FD003 datasets, there are twenty-one sensors labeled with the indices $(1,\cdots, 21)$ and three operational settings s1, s2, and s3. In order to select the relevant sensor streaming data, the twenty-one sensors and the operational settings of the FD001 dataset are visualized in Fig.~\ref{sensorFD001}. The sensor measurements and operational settings demonstrate ascending, descending, and unchanged trends throughout the whole life in the FD001 dataset. According to these trends, we sort the sensors and operational settings into the three trend categories listed in Table~\ref{sensortrend}. It is noticed that most sensors have clearly ascending or descending trends in the degradation trajectories, while s3 and sensors 1, 5, 6, 10, 16, 18, and 19 have the unchanged trend that provides no useful degradation information for the RUL prediction. Hence, the streaming data of the sensors and settings with ascending and descending trends in Table~\ref{sensortrend} are used as the inputs to the TDDN model. Due to the same operating condition and degradation patterns, the same trend categories also exist in the FD003 dataset. However, selecting sensors in the FD002 and FD004 datasets is more challenging because they have six operating conditions. Since the sensors and settings in the FD004 dataset have fluctuating trajectories, as shown in Fig.~\ref{sensorFD004}, it is impossible to select the sensors by visual inspection. Therefore, all operational settings and sensor measurements are used for the RUL prediction of the FD002 and FD004 datasets. \begin{figure}[!h] \centering \includegraphics[width=8cm]{figures/FD001_sensor_curve.pdf} \caption{The visualization of sensor streaming data and settings in FD001} \label{sensorFD001} \end{figure} \begin{table}[!h] \renewcommand\arraystretch{2} \rmfamily \centering \caption{\rmfamily The trend categories of the sensors and operational settings in FD001.} \begin{tabular}{m{2cm}<{\centering}|m{4cm}<{\centering}} \hline Trend & Sensors \\ \hline Ascending & 2,3,4,8,9,11,13,15,17, s1, s2 \\ Descending & 7,12,20,21, s1, s2 \\ Unchanged & 1,5,6,10,14,16,18,19, s3 \\ \hline \end{tabular} \label{sensortrend} \end{table} \begin{figure}[!h] \centering \includegraphics[width=8cm]{figures/FD004_sensor_curve.pdf} \caption{The visualization of sensor streaming data and settings in FD004} \label{sensorFD004} \end{figure} In addition, it should be noted that the sensor streaming data of different engines have different lengths, as shown in Table~\ref{C-MAPSS}. The cycle length discrepancy limits the maximum size of the moving window and affects the RUL prediction accuracy. A smaller moving window size may result in a larger prediction error. In order to eliminate the size limitation of the moving window, we pad the sensor streaming data with the first cycle value $\mathbf{x}_1$ $w-1$ times as, \begin{equation} \mathbf{X}=[\underbrace{\mathbf{x}_{1} \; \cdots \; \mathbf{x}_{1}}_{w} \; \mathbf{x}_{2} \; \cdots \; \mathbf{x}_{n-1} \; \mathbf{x}_{n}] \end{equation} Moving windows and the padding strategy are illustrated in Fig.~\ref{timewindow}. The padding strategy allows utilizing the engines with fewer cycles rather than removing them from the samples as in the traditional approach\cite{babu2016deep,li2018remaining}. With the padding, the engine RUL can be predicted even if only a small number of cycles are given.
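A small NumPy sketch of this padding-plus-windowing pipeline is shown below (a simplified illustration; the function name is ours):
\begin{verbatim}
import numpy as np

def pad_and_window(X, w):
    # X: (n, m) multivariate time series; w: moving window size.
    # Prepend w-1 copies of the first cycle so that every cycle,
    # including the earliest ones, has a full-length window.
    X_pad = np.concatenate([np.repeat(X[:1], w - 1, axis=0), X], axis=0)
    # One window W_j per original cycle j: result shape (n, w, m).
    return np.stack([X_pad[j:j + w] for j in range(X.shape[0])])
\end{verbatim}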
\begin{figure}[!h] \centering \includegraphics[width=8cm]{figures/data_segment.pdf} \caption{Illustration of data segmentation using moving windows to generate input samples} \label{timewindow} \end{figure} \subsection{TDDN Training} Fig.~\ref{flowchart} illustrates the complete procedure of the TDDN model to predict the turbofan engine RUL. Data visualization can help select meaningful sensor streaming data, since some data remain constant and contain no useful information. All the selected sensor data are normalized. Significantly, the sensor streaming data are padded with the first cycle values at the beginning to increase the usable window size. The training data are fed into the TDDN prediction method, which effectively extracts the temporal features and captures key fault characteristics. All the TDDN model hyperparameters are shown in Table~\ref{layerdetails}. \begin{figure}[!h] \centering \includegraphics[width=8cm]{figures/flowchart.pdf} \caption{Flowchart of TDDN model for prognostics} \label{flowchart} \end{figure} The validation dataset is used to evaluate the performance at each training epoch to verify the effectiveness of the TDDN model and select appropriate hyperparameters. Hence, 20\% of the engines in each training dataset are randomly selected as the validation set, while the remaining engines are used as training samples. Through experiments, a batch size of 32 and a maximum of 200 epochs achieve the best prediction performance. Meanwhile, the learning rate is set to 0.0001 initially for fast optimization during the first 100 epochs and is then reduced to one-tenth of the initial learning rate for stable convergence. Back-propagation learning is adopted to update the weights in the network to minimize the mean square error (MSE) loss function. At the same time, the Adam optimizer is used as the network gradient descent algorithm. In addition, an early stopping strategy is applied to mitigate the overfitting problem. The training process terminates in advance when the validation performance shows no improvement for more than ten consecutive epochs. \subsection{Performance Benchmark} Two performance metrics, namely the root mean square error (RMSE) and the scoring function, are generally used to establish a common comparison with state-of-the-art methods. They are defined as, \begin{equation} {\rm RMSE}=\sqrt{\frac{1}{n} \sum_{i=1}^{n} d_{i}^{2}} \end{equation} \begin{equation} {s}=\left\{ \begin{aligned} & \sum_{i=1}^{n}\left(e^{-\frac{d_{i}}{13}}-1\right), & d_{i}<0 \\ & \sum_{i=1}^{n}\left(e^{\frac{d_{i}}{10}}-1\right), & d_{i} \geq 0 \end{aligned}\right. \end{equation} where $s$ denotes the score, $n$ represents the total number of samples in the test dataset, and $d_{i}=RUL^{'}_{i}-RUL_{i}$ is the difference between the predicted and true RUL values of the $i$-th sample. The comparison between the two performance metrics is shown in Fig.~\ref{functioncompare}. The asymmetric scoring function penalizes late prediction errors more than early prediction errors, while RMSE gives an equal penalty to both. A late prediction error usually leads to severe consequences in a mechanical degradation scenario, while an early prediction error allows maintenance in advance and avoids unnecessary downtime. However, a single outlier significantly affects the scoring function value since it is exponential and has no error normalization. Accordingly, it is noted that a single scoring function cannot thoroughly evaluate the performance of an algorithm.
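The two metrics can be computed as follows (a direct transcription of the equations above, with $d_i$ the prediction errors; the function names are ours):
\begin{verbatim}
import numpy as np

def rmse(d):
    # d: array of prediction errors d_i = RUL_pred - RUL_true.
    return np.sqrt(np.mean(d ** 2))

def score(d):
    # Asymmetric scoring: late predictions (d_i >= 0) are penalized
    # more heavily than early ones (d_i < 0).
    return np.sum(np.where(d < 0,
                           np.exp(-d / 13.0) - 1.0,
                           np.exp(d / 10.0) - 1.0))
\end{verbatim}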
Therefore, the adopted performance metrics combine RMSE and the scoring function, which ensures that an algorithm consistently predicts both accurately and timely. \begin{figure}[!h] \centering \includegraphics[width=8cm]{figures/performance.pdf} \caption{Comparison between scoring function and RMSE for different error values} \label{functioncompare} \end{figure} The performance of the TDDN model is compared with the existing methods on all subsets of the C-MAPSS dataset according to the performance metrics of RMSE and Score, as listed in Table~\ref{modelcompare}. The results indicate that the proposed method achieves noticeable improvement and superior performance in different scenarios. Specifically, although the score of the proposed method is similar to the best score in the previous literature on the FD001 and FD003 datasets, the proposed method obtains lower RUL standard deviations in the degradation process. Observing the results on the FD002 and FD004 datasets, the proposed method remarkably outperforms other methods with the lowest prediction errors. Notably, compared with the best previous results, the RMSE and score decrease by more than 28\% and 56\%, respectively, on the FD002 dataset, and by 38\% and 35\% on the FD004 dataset. In this way, the precision and reliability of the proposed TDDN model in real-life PHM applications have been successfully validated. Besides, all existing methods have higher prediction errors on the FD002 and FD004 datasets than on the other two datasets because of the more complex operating conditions. Thanks to extracting degradation-related features, the TDDN model achieves the smallest difference in prediction error among the four subsets. Furthermore, in complex conditions the TDDN model still reaches the optimal prediction accuracy that other methods achieve in normal conditions. The substantial performance improvements illustrate the superiority of the TDDN model in complex conditions. It is also worth noting that most industrial machinery works in a variety of complex conditions in the real world, so the proposed method is quite suitable for practical applications of machinery.
\begin{table*}[!h] \rmfamily \centering \caption{The benchmarking of the TDDN model on the C-MAPSS dataset} \renewcommand\arraystretch{1.5}{ \begin{tabular}{m{1.5cm}<{\centering}m{1cm}<{\centering}m{1cm}<{\centering}m{1cm}<{\centering}m{1cm}<{\centering}|m{1cm}<{\centering}m{1cm}<{\centering}m{1cm}<{\centering}m{1cm}<{\centering}} \hline & \multicolumn{4}{c|}{RMSE} & \multicolumn{4}{c}{Score} \\ \hline Method & FD001 & FD002 & FD003 & FD004 & FD001 & FD002 & FD003 & FD004 \\ \hline LSTM\cite{zheng2017long} & 16.14 & 24.49 & 16.18 & 28.17 & 338 & 4450 & 852 & 5550 \\ DCNN\cite{li2018remaining} & 12.61 & 22.36 & 12.64 & 23.31 & 273.7 & 10412 & 284.1 & 12466 \\ HDNN\cite{al2019multimodal} & 13.02 & 15.24 & 12.22 & 18.16 & 245 & 1282.42 & 287.72 & 1527.42 \\ MS-DCNN\cite{li2020remaining} & 11.44 & 19.35 & 11.67 & 22.22 & \textbf{196.22} & 3747 & 241.89 & 4844 \\ Li-DAG\cite{li2019directed} & 11.96 & 20.34 & 12.46 & 22.43 & 229 & 2730 & 535 & 3370 \\ Ellefsen $\it{et\; al.}$\cite{ellefsen2019remaining} & 12.56 & 22.73 & 12.10 & 22.66 & 231 & 3366 & 251 & 3370 \\ DA-TCN\cite{song2020distributed} & 11.78 & 16.95 & 11.56 & 18.23 & 229.48 & 1842.38 & 257.11 & 2317.32 \\ Proposed TDDN model & \textbf{9.47} & \textbf{10.93} & \textbf{9.17} & \textbf{11.16} & 214.17 & \textbf{561.82} & \textbf{216.79} & \textbf{997.69} \\ \hline \end{tabular}} \label{modelcompare} \end{table*} To visualize the prediction results, one representative engine is selected from each of the four datasets FD001, FD002, FD003, and FD004. Fig.~\ref{predict} shows that the predicted RUL evolves around the actual RUL. TDDN can effectively capture the trend of the degradation development. The fault characteristics accumulate at the final failure stage of the degradation development. The prediction error gradually decreases when an engine approaches the final failure stage. It is also observed that the predicted RUL is mostly below the actual RUL in Fig.~\ref{predict}, which reduces the chance of overestimating the RUL and thus the risk of delayed maintenance. Hence, the proposed TDDN model can achieve outstanding prediction performance under various operating conditions in the FD001, FD002, FD003, and FD004 datasets. \begin{figure}[!t] \centering \subfigure[Test engine 34 in FD001]{ \includegraphics[width=0.49\textwidth]{figures/FD001_rul_curve.pdf}} \subfigure[Test engine 138 in FD002]{ \includegraphics[width=0.49\textwidth]{figures/FD002_rul_curve.pdf}}\\ \subfigure[Test engine 92 in FD003]{ \includegraphics[width=0.49\textwidth]{figures/FD003_rul_curve.pdf}} \subfigure[Test engine 135 in FD004]{ \includegraphics[width=0.49\textwidth]{figures/FD004_rul_curve.pdf}} \caption{RUL prediction results on different test datasets} \label{predict} \end{figure} \section{TDDN Hyperparameters and Performance Analysis}\label{pa} This section studies the influence of the moving window size and the number of convolutional layers on the prediction results. In addition, how the degradation-related features extracted by the 1D CNN and the attention mechanism contribute to the TDDN model is studied. \subsection{Size Dependence of Moving Windows and Convolutional Layers} The sensor streaming data are segmented into moving windows. The size of a moving window is $w\times m$, where $w$ and $m$ denote the window length and the number of selected sensors, respectively. The label of the moving window $\mathbf{W}_j$ is determined by the RUL at the $j$-th cycle. Moving windows take the evolution of temporal features into consideration.
Therefore, the tuning of moving windows can better reflect the system degradation patterns and improve the prediction performance. In particular, the size of the moving window plays a vital role in the prediction performance. A small window does not contain much useful degradation information. \begin{figure}[!b] \centering \subfigure[FD001]{ \includegraphics[width=0.49\textwidth]{figures/FD001_window.pdf}} \subfigure[FD002]{ \includegraphics[width=0.49\textwidth]{figures/FD002_window.pdf}}\\ \subfigure[FD003]{ \includegraphics[width=0.49\textwidth]{figures/FD003_window.pdf}} \subfigure[FD004]{ \includegraphics[width=0.49\textwidth]{figures/FD004_window.pdf}} \caption{The RMSE and Score performance metrics {\it{v.s.}} moving window size} \label{window} \end{figure} On the other hand, an unnecessarily large window size demands more computational time and causes the loss of crucial degradation-related features. Thus, selecting the appropriate window size is critically important to improving the prediction performance. The minimum running cycles of the engines limit the window size selection in the previous work\cite{babu2016deep,li2018remaining}. However, with the padding strategy, the window size can be extended accordingly to make the moving window size equal for all the engines. Window sizes ranging from 16 cycles to 80 cycles are tested on the four datasets FD001, FD002, FD003, and FD004 to understand the window size dependence. Each condition is tested five times independently, and the average value of the experimental results is shown in Fig.~\ref{window}. The temporal features and fault characteristics are contained in the optimal moving window, which can significantly enhance the extraction of degradation-related features. The test experiments show that a larger window size improves the prediction performance of the TDDN model up to 64 cycles. If the window size further increases from 64 cycles to 80 cycles, the prediction performance of the TDDN model does not change much on the FD001 and FD003 datasets but becomes poorer on the FD002 and FD004 datasets, as shown in Fig.~\ref{window}. An unnecessarily large moving window size can diminish the performance of the attention mechanism. Based on the test experiments, the optimal moving window size is set to 64 cycles, which effectively balances degradation-related feature extraction and computational time consumption for the sensor streaming data. For the 1D CNN, the depth of the hidden convolutional layers strongly affects the feature extraction ability. A CNN unit with more stacked layers can usually learn more abstract and discriminative representations but demands more training time. To study the dependence of the TDDN prediction performance and time consumption on depth, one to four convolutional layers in the CNN unit are tested while fixing all other hyperparameters. The experimental results on the FD001 dataset are shown in Fig.~\ref{CNN_layer}. The performance metrics, RMSE and Score, decrease with the number of layers. Clearly, deeper networks can more effectively capture temporal features in the degradation development. In Fig.~\ref{CNN_layer}, three layers already demonstrate the best performance, while four layers provide little extra performance gain. Therefore, three convolutional layers in the 1D CNN are the optimal choice as a trade-off between prediction accuracy and training time consumption.
\begin{figure}[!h] \centering \includegraphics[width=8cm]{figures/CNN_layer.pdf} \caption{The number of convolutional network layers and performance metrics in terms of RMSE, Score, and Time} \label{CNN_layer} \end{figure} \captionsetup[figure]{justification=raggedright} \begin{figure}[!b] \centering \includegraphics[width=0.8\textwidth]{figures/cluster.pdf} \caption{The t-SNE visualization of (a) raw sensor streaming data for engine 1 in the FD001 dataset, (b) temporal features for engine 1 in the FD001 dataset, (c) raw sensor streaming data for engine 1 in the FD004 dataset, and (d) temporal features for engine 1 in the FD004 dataset. The dots of the t-SNE projection of the raw sensor streaming data in moving window $\mathbf{W}_j$ are visualized in (a) and (c). The dots of the t-SNE projection of the temporal features in moving window $\mathbf{W}_j$ are visualized in (b) and (d). The hotter the color is, the closer the engine is to the final failure.} \label{cluster} \end{figure} \subsection{Temporal Features and Degradation Development Trajectories} To understand how the TDDN model works with its feature learning capability, t-distributed stochastic neighbor embedding (t-SNE)\cite{van2008visualizing}, a nonlinear technique for unsupervised dimensionality reduction, is applied to visualize the temporal features and gain more insights while preserving much of the local structure of the high-dimensional space. Fig.~\ref{cluster} illustrates the t-SNE projections of the raw sensor streaming data and of the temporal features extracted by the 1D CNN at every time step $j$ of an engine degradation development. Figs.~\ref{cluster}(a) and (c) show that the t-SNE dots of the raw sensor streaming data are highly mixed together and do not form any degradation trajectories because of the high variance of the raw sensor streaming data, as shown in Figs.~\ref{sensorFD001} and \ref{sensorFD004}. In contrast, it is observed in Figs.~\ref{cluster}(b) and (d) that the t-SNE dots of the representative temporal features of moving window $\mathbf{W}_j$ form a clear degradation development trajectory, which is critical to improving the prediction performance of the TDDN model, particularly in the complex conditions of the FD004 dataset. The 1D CNN with the optimal moving window can extract the degradation-related temporal features. To further understand the degradation development trajectories in detail, the elements of eight temporal features extracted by the 1D CNN are plotted in Fig.~\ref{degradation_trajectory}. Compared with the raw sensor streaming data in Fig.~\ref{sensorFD001}, the temporal features have clear trends and smaller fluctuations due to the 1D CNN filtering. The turning points in Fig.~\ref{degradation_trajectory} represent the essential degradation information in the middle degradation stage, which changes drastically within a small time range. Therefore, the 1D CNN can learn degradation-related temporal features effectively and robustly.
\begin{figure}[!h] \centering \includegraphics[width=0.8\textwidth]{figures/temporal_feature.pdf} \caption{The trajectory of the $k$-th element of temporal feature $l$, $\mathbf{y}^k_{l,j}$, in the $j$-th moving window $\mathbf{W}_j$ for engine 1 in the FD001 dataset.} \label{degradation_trajectory} \end{figure} \subsection{Attention Weights and Degradation Stages} To better understand how the attention mechanism describes the degradation development in the TDDN model, the evolution of the attention weight $\lambda_i$ for the abstract feature $\mathbf{h}_i$ in the trained TDDN model is visualized in Fig.~\ref{attention}. It is found that not all attention weights of the abstract features evolve with moving window $\mathbf{W}_j$; some abstract features show no attention weight change at all. \begin{figure}[!t] \centering \subfigure[FD001]{ \includegraphics[width=0.49\textwidth]{figures/FD001_attention.pdf}} \subfigure[FD004]{ \includegraphics[width=0.49\textwidth]{figures/FD004_attention.pdf}} \caption{The evolution of the attention weight $\lambda_{i}$ in moving window $\mathbf{W}_j$ for each abstract feature $\mathbf{h}_i$ for (a) engine 1 in the FD001 dataset, (b) engine 1 in the FD004 dataset. Twenty moving windows are selected from the four degradation stages, corresponding to the healthy stage, initial degradation stage, middle degradation stage, and failure stage. These stages are shown in order in each panel from top to bottom.} \label{attention} \end{figure} The attention weights of the remaining abstract features evolve with the moving windows. Under different operating conditions, the attention weights for engine 1 in the FD001 dataset show evolution patterns different from those of engine 1 in the FD004 dataset. The abstract features whose attention weights evolve with the degradation development are the key abstract features that characterize the degradation development and contain important information for the RUL prediction. The elements of the abstract features $\mathbf{h}_i$ in Fig.~\ref{FD001_abstract_trajectory} and Fig.~\ref{FD004_abstract_trajectory} have clear increasing or decreasing trends with the moving windows. However, some elements of the abstract features do not change at all, and some only change at the failure stage. When the underlying dynamics of the degradation change dramatically, the attention mechanism accurately captures the new degradation patterns. The abstract features with higher attention weights in the moving windows are crucial to the degradation development trajectories. Therefore, the attention mechanism can effectively focus on fault characteristics and improve the accuracy of the RUL prediction. \begin{figure}[!t] \centering \includegraphics[width=0.7\textwidth]{figures/FD001_abstract_feature.pdf} \caption{The trajectory of the $k$-th element of abstract feature $i$, $\mathbf{h}^{k}_{i,j}$, in moving window $\mathbf{W}_j$ for engine 1 in the FD001 dataset.} \label{FD001_abstract_trajectory} \end{figure} \begin{figure}[!h] \centering \includegraphics[width=0.7\textwidth]{figures/FD004_abstract_feature.pdf} \caption{The trajectory of the $k$-th element of abstract feature $i$, $\mathbf{h}^{k}_{i,j}$, in moving window $\mathbf{W}_j$ for engine 1 in the FD004 dataset.} \label{FD004_abstract_trajectory} \end{figure}
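As a rough sketch of the attention weighting discussed in this subsection, assuming for illustration a single learned scoring vector with softmax normalisation (the exact parameterisation of the TDDN attention unit may differ), the weights $\lambda_i$ and the pooled representation can be computed as follows.
\begin{verbatim}
import numpy as np

def attention_pool(h, w):
    """Softmax attention over abstract features (illustrative sketch).

    h : (n_features, dim) array of abstract features h_i in one window
    w : (dim,) assumed learnable scoring vector
    Returns the attention weights lambda_i and the pooled vector.
    """
    scores = h @ w                          # one scalar score per feature
    lam = np.exp(scores - scores.max())     # numerically stable softmax
    lam /= lam.sum()
    return lam, lam @ h                     # weights and weighted sum
\end{verbatim}
Plotting the returned weights against the window index for each abstract feature reproduces, qualitatively, the evolution patterns in Fig.~\ref{attention}.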
\section{Conclusions}\label{discussion} Combining the advantages of the 1D CNN and the attention mechanism, the end-to-end TDDN deep learning framework is proposed to predict the turbofan engine RUL. Experiments are performed on the benchmark C-MAPSS dataset to evaluate the performance of the TDDN model. Compared with the performance metrics of existing deep learning methods on the C-MAPSS dataset, the TDDN model achieves the best RUL prediction. Furthermore, since the 1D CNN and attention mechanism can learn degradation-related features, the TDDN model also has robust performance. Therefore, it is well suited for machinery prognostic problems in complex conditions. Moreover, thanks to the learned monotonic degradation temporal features and the ability to capture key fault characteristics, the proposed method can also be used to find suitable maintenance decisions for industrial machinery. Furthermore, the effects of crucial hyperparameters are investigated. The moving window size is vital to capturing the degradation development reflected in the fluctuating multivariate time series streaming data. The 1D CNN can automatically extract degradation-related temporal features from sensor streaming data and significantly enhances the feature-learning ability of the TDDN model. Meanwhile, the attention mechanism effectively identifies the underlying degradation development and captures key fault characteristics to improve the prediction performance. Despite its promising prediction performance, the accuracy of the TDDN model relies heavily on a high-quality labeled dataset. With sensors distributed at different sites, the transmission of sensor streaming data to the server side increases transmission costs and raises potential security issues in actual applications. Hence, sensor data stakeholders may be unwilling to share data for maintenance purposes, and sensor streaming data will then only be available on local devices. Federated learning frameworks can help solve this issue. Federated learning aims to protect data privacy by enabling clients to collaboratively build machine learning models without sharing their data: each client trains the model locally and transmits only the model parameters. The machine learning model can thus be cooperatively learned without directly sharing sensitive streaming data. In the context of the Internet of Things (IoT), the TDDN model can be extended in the framework of federated learning to enhance data privacy and reduce data transmission costs. The TDDN model can then be effectively used on distributed industrial machinery for the RUL prediction. \section*{Acknowledgments} Xin Chen acknowledges the funding support from the National Natural Science Foundation of China under grant No. 21773182 and the support of HPC Platform, Xi’an Jiaotong University.
\section{Introduction} Adaptive designs have a growing importance in clinical drug discovery and development. In clinical studies, multiple new treatments are often of interest for evaluation but, due to limited resources (time, available patients, budget, etc.), only one or two with the best observed response(s) can be selected for further assessment in a large-scale clinical trial. Two-stage adaptive designs mainly deal with finding a safe and effective treatment among multiple candidate treatments in stage 1 and then validating its properties using an independent sample in stage 2. \par For a thorough and insightful overview of adaptive designs in clinical trials, the reader is referred to \cite{pallmann2018adaptive}. For an extensive discussion on inference procedures in two-stage adaptive designs, one may refer to \cite{bauer1999combining}, \cite{sampson2005drop}, \cite{stallard2008group}, \cite{bowden2008unbiased}, \cite{carreras2013shrinkage}, \cite{kimani2013conditionally}, \cite{chiu2018design}, \cite{kimani2020point} and \cite{robertson2022point}. \par In this paper, we consider an adaptive design comprising two stages, with selection of a candidate for the better treatment in the first stage and estimation of the mean treatment effect of the selected treatment in the second stage. It is well known that, using single-stage data alone, there does not exist any unbiased estimator of the selected mean in many cases. For example, the selected means of normal and binomial populations are not unbiasedly estimable (see \cite{putter1968estimating} and \cite{tappin1992unbiased}). However, naive estimates that incorporate data from both stages can induce selection bias. To overcome this issue, the technique of Rao-Blackwellization can be utilized: the unbiased second stage sample mean is conditioned on a complete-sufficient statistic. As a result, a uniformly minimum variance conditionally unbiased estimator (UMVCUE) is obtained. An appealing property of the UMVCUE is that it has the smallest variance (or, in other words, the smallest mean squared error (MSE)) among all conditionally unbiased estimators of the selected mean. \par Initially, \cite{cohen1989two} dealt with the two-stage estimation of the selected treatment mean under the ranking and selection framework in the Gaussian setting. They separately considered the cases of known and unknown variance. The authors obtained the UMVCUE for the selected normal mean which uses data from both stages. A limitation of the work of \cite{cohen1989two} is that it assumes a single observation in the second stage of the adaptive design. Since then, their work has been extended by many researchers including \cite{tappin1992unbiased} and \cite{sampson2005drop}. For the case of common known variance, \cite{bowden2008unbiased} extended the UMVCUE to account for unequal stage one and stage two sample sizes, where the parameter of interest is the $j^{th}$ best among the $k$ candidates. \cite{kimani2013conditionally} considered point estimation of the selected most effective treatment compared with a control, after a two-stage adaptive seamless trial in which treatment selection and the possibility of early stopping for futility are available at stage 1. Using a multistage analog of the two-stage drop-the-losers design, \cite{bowden2014conditionally} provided unbiased and nearly unbiased estimates for the selected mean.
\par In many practical situations, it may not be appropriate to assume the variances of the treatment effects to be known. For the case of common unknown variance, building upon the work of \cite{cohen1989two}, we derive the UMVCUE for the selected treatment mean when there is more than one observation in the second stage of the adaptive design. Although \cite{robertson2019conditionally} claimed that their UMVCUE works for multiple observations in the second stage, their approach is not optimal, as they conditioned the second stage data on a statistic that is not complete-sufficient. Our extended UMVCUE takes care of this deficiency. We have also obtained a minimax estimator for the selected treatment mean under the scaled mean squared error criterion. This minimax estimator is also shown to be admissible. \par The remainder of the paper is organized as follows. In Section 2, we introduce some notation and preliminaries that will be used throughout the paper, and in Subsection 2.1 we derive the UMVCUE of the selected treatment mean. In Section 3, we prove that the naive estimator, which is the weighted average of the first and the second stage sample means, is minimax and admissible for estimating the selected treatment mean in the case of common unknown variance. In Section 4, we provide some additional estimators for the selected treatment mean, together with a simulation study assessing the performances of the various competing estimators under the criteria of the scaled mean squared error and the scaled bias. For illustration, a real data set is considered in Section 5. \section{Estimation of the selected treatment mean} The notation listed below will be used throughout the article:\\ \begin{itemize} \item[ $\bullet$] $\mathbb{R}$: the real line $\left(-\infty,\infty\right)$; \item[ $\bullet$] $\mathbb{R}^k$: the $k$ dimensional Euclidean space, $k \in \{2,3,\ldots\}$; \item[ $\bullet$] $N\left(\theta, \sigma^2 \right)$: normal distribution with mean $\theta\in \mathbb{R}$ and standard deviation $\sigma \in \left(0,\infty\right) $; \item[ $\bullet$] $\phi(\cdot) $: probability density function (p.d.f.) of $N(0,1)$; \item[ $\bullet$] $\Phi(\cdot)$: cumulative distribution function (c.d.f.) of $N(0,1)$; \item[ $\bullet$] $Beta (a,b)$: beta distribution with shape parameters $a>0$ and $b>0$; \item[ $\bullet$] $B(\alpha,\beta)=\frac{\Gamma(\alpha) \Gamma(\beta)}{\Gamma(\alpha+\beta)}$, $\alpha >0$, $\beta>0$, will denote the usual beta function and $\Gamma (\cdot )$ will denote the usual gamma function; \item[ $\bullet$] For real numbers $x$ and $y$ \begin{equation*} I(x\geq y)=\begin{cases} 1, & \text{if $x \geq y$ }\\ 0, & \text{if $x < y$}. \end{cases} \end{equation*} $I(x > y)$ is defined similarly. \end{itemize} Consider two treatments, say $\tau_1$ and $\tau_2$, such that the effectiveness of treatment $\tau_i$ is described by $N$$\left(\mu_i, \sigma^2 \right),~ i=1,2$, where $\mu_1 \in\mathbb{R}$ and $\mu_2 \in \mathbb{R}$ are unknown mean treatment effects and $\sigma^2 ~(\sigma >0)$ is the common unknown treatment effect variance. We define the treatment associated with $\max\{\mu_{1},\mu_{2}\}$ as the better or the more promising treatment. We consider an adaptive design which consists of two stages, with a single data-driven selection made at the interim.
In the first stage of the design, say stage $1$, the treatment $\tau_1$ is administered to $n_1$ respondents and the treatment $\tau_2$ is independently administered to another set of $n_1$ respondents. Let $\overline{X}_i,~ i=1,2,$ be the sample averages (mean treatment effect estimates) corresponding to the two treatments. For the purpose of selecting the better treatment, we consider the natural selection rule that selects the treatment with the larger sample mean as the better treatment (for optimality properties of this natural selection rule, see \cite{bahadur1952impartial}, \cite{eaton1967some} and \cite{misra1994non}). Let $Q \in \{1,2\}$ be the index of the selected treatment $\tau_Q$ (i.e. $Q=1$, if $\overline{X}_1 \geq \overline{X}_2$, and $Q=2$, if $\overline{X}_2 > \overline{X}_1$). Treatment $\tau_Q$ is then carried forward to the second stage, referred to as stage $2$ of the two-stage design, for further analysis. In stage 2, the selected treatment $\tau_Q$ is independently administered to $n_2$ additional respondents. Let $\overline{Y}$ be the stage $2$ sample mean for the selected treatment $\tau_Q$. The goal is to estimate $\mu_{Q}$, the mean effect of the selected treatment. Note that $\mu_{Q} \equiv \mu_{Q} \left(\mu_{1},\mu_{2},\overline{X}_1,\overline{X}_2\right)$ is a random parameter which depends on $\mu_{1}$, $\mu_{2}$, $\overline{X}_1$ and $\overline{X}_2$; it equals $\mu_{1}$, if $\overline{X}_1 \geq \overline{X}_2$, and equals $\mu_{2}$, if $\overline{X}_2 > \overline{X}_1$. Clearly, $\overline{X}_i \sim N\left(\mu_i, \frac{\sigma^2}{n_1} \right),~i=1,2,$ are independently distributed and, conditioned on $Q$, $\overline{Y} \sim N\left(\mu_Q, \frac{\sigma^2}{n_2} \right)$. \par \noindent The following notation will also be utilized throughout the paper: \\ \noindent $\utilde{T} = (\overline{X}_1,\overline{X}_2, \overline{Y},S^2)$; $S^2=\sum\limits_{i=1}^{2} \sum\limits_{j=1}^{n_1} X_{ij}^2+\sum\limits_{j=1}^{n_2} Y_{j}^2$; $\underline{\theta} = (\mu_1,\mu_2,\sigma)$; $\Theta = \mathbb{R}^2 \times (0,\infty)$; $\overline{X}_{Q} = \max (\overline{X}_1 ,\overline{X}_2)$ (maximum of $\overline{X}_1$ and $\overline{X}_2$); $\overline{X}_{3-Q} = \min (\overline{X}_1 ,\overline{X}_2)$ (minimum of $\overline{X}_1$ and $\overline{X}_2$). Also, for any $\underline{\theta} \in \Theta$, $\mathbb{P}_{\underline{\theta}}(\cdot)$ will denote the probability measure induced by $\utilde{T} = (\overline{X}_1,\overline{X}_2, \overline{Y},S^2)$, when $\underline{\theta} \in \Theta$ is the true parameter value, and $\mathbb{E}_{\underline{\theta}}(\cdot)$ will denote the expectation operator under the probability measure $\mathbb{P}_{\underline{\theta}}(\cdot)$, $\underline{\theta} \in \Theta$. \par \noindent In this paper, our focus is on point estimation of the selected treatment mean effect defined by \begin{align} \label{S1.E1} \mu_Q\equiv \mu_{Q}\left(\mu_{1},\mu_{2}, \overline{X}_1,\overline{X}_2\right)&=\begin{cases} \mu_1, & \text{if $\overline{X}_1 \geq \overline{X}_2$ }\\ \mu_2, & \text{if $\overline{X}_1 < \overline{X}_2$} \end{cases},\nonumber\\ &=\mu_{1}I\left(\overline{X}_1 \geq \overline{X}_2\right) +\mu_{2} I\left(\overline{X}_2 >\overline{X}_1\right), \end{align} under the scaled squared error loss function \begin{equation} \label{S2.E2} L_{\utilde{T}}(\underline{\theta},a) = \left(\frac{a-\mu_Q}{\sigma} \right)^2 ,~~ \underline{\theta} \in \Theta, ~ a \in \mathcal{A} = \mathbb{R}.
\end{equation} It is worth mentioning here that the statistic $\left(\frac{n_1\overline{X}_Q+n_2\overline{Y}}{n_1+n_2},\overline{X}_{3-Q},S^2,Q\right)$ is minimal sufficient but not complete. However, given $Q$, the statistic $\left(\frac{n_1\overline{X}_Q+n_2\overline{Y}}{n_1+n_2},\overline{X}_{3-Q},S^2\right)$ is a complete-sufficient statistic. Consequently, $\utilde{T} = (\overline{X}_1,\overline{X}_2, \overline{Y},S^2)$ is a sufficient statistic and $\mu_{Q}$ depends on the observations only through $\utilde{T}$. Therefore, we may restrict our attention to only those estimators that depend on the observations only through $\utilde{T}$ (see \cite{misra1993umvue}). \par Under the scaled squared error loss function \eqref{S2.E2}, the risk function (also referred to as the scaled mean squared error) of an estimator $d(\utilde{T})$ is defined by $$ R(\underline{\theta}, d)=\mathbb{E}_{\underline{\theta}}\left(\frac{d(\utilde{T})-\mu_Q}{\sigma}\right)^2,~ \underline{\theta} \in \Theta.$$ Suppose we have a prior distribution (density) $\Pi$ on $\Theta$. Then, the Bayes risk of an estimator $d$ with respect to the prior $\Pi$ is defined as \begin{align*} r(\Pi, d)&=\mathbb{E}_{\Pi} \left( R(\underline{\theta}, d) \right)\\ &=\displaystyle \int_{\Theta} \displaystyle \int_{\zeta} L_{\utilde{T}}(\underline{\theta},d(\utilde{t})) f(\utilde{t}|\underline{\theta}) \Pi(\underline{\theta})d \underline{t}~ d \underline{\theta}, \end{align*} where $\zeta$ denotes the support of $\utilde{T}$. \par An estimator $d_{\Pi}$ that minimizes the Bayes risk $r(\Pi, d)$, among all estimators $d$ of $\mu_{Q}$, is called a Bayes estimator with respect to the prior $\Pi$. \par \noindent An estimator $d(\utilde{T})$ is said to be conditionally unbiased if $$ \mathbb{E}_{\underline{\theta}}\left(d(\utilde{T})\,\big|\,Q=q\right)=\mu_{q},~ \forall ~\underline{\theta} \in \Theta,~ q \in \{1,2\}.$$ \noindent A naive estimator of the mean effect of the selected treatment $\mu_Q$ is the weighted average of the sample means at the two stages, i.e. \begin{equation} d_M(\utilde{T})=\frac{n_1 \overline{X}_Q+n_2\overline{Y}}{n_1+n_2}. \end{equation} Clearly, $d_M(\utilde{T})$ is the maximum likelihood estimator (MLE) of $\mu_{Q}$. \par In the following subsection, the UMVCUE of the selected treatment mean $\mu_Q$ is derived under the assumption that the common variance is unknown. \subsection{The extended UMVCUE} \begin{theorem} The two-stage UMVCUE of $\mu_{Q}$, given $Q$, is \begin{align} d_U(\utilde{T})&=\frac{Z}{n_1+n_2}-\sqrt{\frac{n_1}{n_2(n_1+n_2)}}\widetilde{S} \frac{(1-{V^*}^2)^c}{2^{2c}c B(c,c)I_{c,c}\left(\frac{V^*+1}{2}\right)}, \end{align} where $Z=n_1\overline{X}_Q+n_2\overline{Y}$, $V=\frac{\sqrt{\frac{n_1(n_1+n_2)}{n_2}}}{\widetilde{S}} \left(\frac{Z}{n_1+n_2}-\overline{X}_{3-Q}\right)$, $V^*=\min\{V,1\}$, \vspace*{3mm}\\ $\widetilde{S}^2=\sum\limits_{i=1}^{2} \sum\limits_{j=1}^{n_1} X_{ij}^2+\sum\limits_{j=1}^{n_2} Y_{j}^2-(n_1+n_2)\left(\frac{Z}{n_1+n_2}\right)^2-n_1 \overline{X}^2_{3-Q}$, $c=\frac{2n_1+n_2-3}{2}$,\vspace*{3mm}\\ $I_{c,c}(u)$ is the cumulative distribution function of a $Beta(c,c)$ distribution and $B(c,c)=\frac{\Gamma(c) \Gamma(c)}{\Gamma(2c)},~c >0$, is the usual beta function. \end{theorem} \begin{proof} Let $\underline{S}=\left(\underline{X_1}, \underline{X_2},\underline{Y}\right)$ denote the vector of $n=2n_1+n_2$ sample observations of stage 1 and stage 2 combined. Then, the joint probability density function (p.d.f.)
of $\underline{S}$, given $Q=q$, based on the first and second stage data can be written as \begin{align*} f_q(\underline{x_1},\underline{x_2},\underline{y}|\underline{\theta}) \propto \exp \left(\frac{1}{\sigma^2}\left[\sum\limits_{j=1}^{n_1}x_{qj}+\sum\limits_{j=1}^{n_2}y_{j}\right]\mu_q+ \frac{\sum\limits_{j=1}^{n_1}x_{(3-q) j}}{\sigma^2} \mu_ {3-q}-\frac{1}{2\sigma^2}\left[\sum\limits_{j=1}^{n_1}x^2_{qj}+\sum\limits_{j=1}^{n_1}x^2_{(3-q)j}+\sum\limits_{j=1}^{n_2}y^2_{j}\right]\right). \end{align*} From the expression of the joint density $f_q(\underline{x_1},\underline{x_2},\underline{y}|\underline{\theta})$, we observe that, given $Q$, the statistic $\left(Z,\overline{X}_{3-Q}, \widetilde{S}^2\right)$ is a complete-sufficient statistic. Since, given $Q$, $\overline{Y}$ is an unbiased estimator of $\mu_{Q}$, the UMVCUE of $\mu_{Q}$ is the Rao-Blackwellization of $\overline{Y}$, conditional on the complete-sufficient statistic $\left(Z,\overline{X}_{3-Q}, \widetilde{S}^2\right)$. Therefore, the required UMVCUE of the selected mean $\mu_Q$ is $\mathbb{E}_{\underline{\theta}}\left(\overline{Y} \bigg|\left(Z,\overline{X}_{3-Q}, \widetilde{S}^2,Q\right)\right)$.\par \noindent Define $$U=\frac{\sqrt{\frac{n_2(n_1+n_2)}{n_1}}\left(\overline{Y}-\frac{Z}{n_1+n_2}\right)}{\widetilde{S}}.$$ In order to show that the UMVCUE coincides with (2.4), it suffices to show that \begin{align} \mathbb{E}_{\underline{\theta}}\left(U \bigg|\left(Z,\overline{X}_{3-Q}, \widetilde{S}^2,Q\right)\right)&=-\frac{(1-{V^*}^2)^c}{2^{2c}c B(c,c)I_{c,c}\left(\frac{V^*+1}{2}\right)}. \end{align} To establish $(2.5)$, we need to obtain the conditional p.d.f. of $U$ given $\left(Z, \overline{X}_{3-Q}, \widetilde{S}^2, Q\right)$. \par \noindent Note that, on rearranging the terms in the expression of $\widetilde{S}^2$, we can write $U^2$ as \begin{align} U^2 &= \frac{\frac{n_2(n_1+n_2)}{n_1}\left(\overline{Y}-\frac{Z}{n_1+n_2}\right)^2}{\sum\limits_{i=1}^{2} \sum\limits_{j=1}^{n_1} (X_{ij}-\overline{X}_{i})^2+\sum\limits_{j=1}^{n_2} (Y_{j}-\overline{Y})^2+\frac{n_2(n_1+n_2)}{n_1}\left(\overline{Y}-\frac{Z}{n_1+n_2}\right)^2}. \end{align} As in \cite{cohen1989two} and \cite{robertson2019conditionally}, it can be verified that the conditional density of $U$, given $\left(Z, \overline{X}_{3-Q}, \widetilde{S}^2, Q\right)=\left(z,x_{3-q},\widetilde{s},q\right)$, is given by \begin{align*} f_{U|\left(Z, \overline{X}_{3-Q}, \widetilde{S}^2, Q\right)}(u|(z,x_{3-q},\widetilde{s},q))&=\frac{(1-u^2)^{c-1}}{\displaystyle \int_{-1}^{v^*} (1-u^2)^{c-1} du},~~-1<u<v^*, \end{align*} where, for $v=\frac{\sqrt{\frac{n_1(n_1+n_2)}{n_2}}}{\widetilde{s}} \left(\frac{z}{n_1+n_2}-x_{3-q}\right)$, $v^*=\min\{v,1\}$. \par Therefore, we have \begin{align*} \mathbb{E}_{\underline{\theta}}\left(U \bigg|\left(Z,\overline{X}_{3-Q}, \widetilde{S}^2,Q\right)\right)&=\frac{\displaystyle \int_{-1}^{V^*} u (1-u^2)^{c-1}~du}{\displaystyle \int_{-1}^{V^*} (1-u^2)^{c-1}du}\\ &=-\frac{(1-{V^*}^2)^c}{2^{2c}c B(c,c)I_{c,c}\left(\frac{V^*+1}{2}\right)}. \end{align*} Hence the result follows. \end{proof}
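For concreteness, a minimal numerical sketch of the estimator $d_U$ of Theorem 2.1 is given below, in Python, using SciPy for the beta function and the $Beta(c,c)$ distribution function; the function name and the array-based interface are illustrative assumptions.
\begin{verbatim}
import numpy as np
from scipy.special import beta as beta_fn
from scipy.stats import beta as beta_dist

def umvcue(x1, x2, y):
    """Two-stage UMVCUE d_U of the selected treatment mean (Theorem 2.1).

    x1, x2 : stage-1 samples (each of size n1) for treatments tau_1, tau_2
    y      : stage-2 sample (size n2) from the treatment selected at stage 1
    """
    x1, x2, y = map(np.asarray, (x1, x2, y))
    n1, n2 = len(x1), len(y)
    xbar_q = max(x1.mean(), x2.mean())     # selected stage-1 mean
    xbar_r = min(x1.mean(), x2.mean())     # rejected stage-1 mean
    z = n1 * xbar_q + n2 * y.mean()
    s_tilde = np.sqrt((x1**2).sum() + (x2**2).sum() + (y**2).sum()
                      - (n1 + n2) * (z / (n1 + n2))**2 - n1 * xbar_r**2)
    c = (2 * n1 + n2 - 3) / 2
    v = np.sqrt(n1 * (n1 + n2) / n2) * (z / (n1 + n2) - xbar_r) / s_tilde
    v_star = min(v, 1.0)
    corr = (1 - v_star**2)**c / (2**(2 * c) * c * beta_fn(c, c)
                                 * beta_dist.cdf((v_star + 1) / 2, c, c))
    return z / (n1 + n2) - np.sqrt(n1 / (n2 * (n1 + n2))) * s_tilde * corr
\end{verbatim}
Note that when $V \geq 1$ the correction term vanishes, so $d_U$ then coincides with the naive estimator $d_M$.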
Now, we will show that the naive estimator $d_M(\utilde{T})$, defined by $(2.3)$, is minimax and admissible for estimating $\mu_{Q}$ under the scaled mean squared error criterion. \section{The minimax and admissible estimator} Let ${S^*}^2=\frac{\sum\limits_{i=1}^{2} \sum\limits_{j=1}^{n_1} (X_{ij}-\overline{X}_{i})^2+\sum\limits_{j=1}^{n_2} (Y_{j}-\overline{Y})^2}{2n_1+n_2-3}$ be the pooled sample variance of the stage 1 and the stage 2 data, and let $W=(2n_1+n_2-3){S^*}^2$, so that $\frac{W}{\sigma^2} \sim \chi^2_{2n_1+n_2-3}$. Suppose that the unknown parameter vector $\underline{\theta}=(\mu_1,\mu_2,\sigma^2) \in \Theta$ is a realization of a random vector $\underline{P}=(P_1,P_2,P_3)$ having a specified probability distribution. Consider a sequence of prior distributions (densities) $\left\{\xi_m\right\}_{m\geq 1}$ for $\underline{P}=(P_1,P_2,P_3)$ such that: \begin{itemize} \item[(i)] for any fixed $p_3 >0$, given $P_3 = p_3$, conditionally, $P_1$ and $P_2$ are independent and identically distributed as $N(0,m p_3)$; \item[(ii)] the random variable $P_3$ follows the inverse exponential distribution having the p.d.f. \begin{equation*} \Pi_{1,P_3}(p_3)=\frac{1}{p_3^2} e^{-\frac{1}{p_3}}, ~p_3 >0. \end{equation*} \end{itemize} Recall that $\overline{X}_i ~(i=1,2)$ is the sample mean of the first stage sample from the $i^{th}$ population and $\overline{Y}$ is the second stage sample mean of the sample drawn from the population selected at the first stage. Then, under the prior distribution $\xi_m$, the joint posterior distribution of $ (P_1, P_2,P_3)$ given $(\overline{X}_1, \overline{X}_2,\overline{Y}, W)=(x_1,x_2,y,w)$ is such that: \begin{itemize} \item[(i)] for any $p_3 >0$, conditionally, the random variables $ P_1$ and $P_2$ are independently distributed as $N\left(\theta_{1,m}, \tau^2_{1,m}\right)$ and $N\left(\theta_{2,m}, \tau^2_{2,m}\right)$, respectively, where \begin{equation*} \left(\theta_{1,m}, \theta_{2,m}, \tau^2_{1,m}, \tau^2_{2,m}\right)=\begin{cases} \left(\frac{n_1x_1+n_2y}{n_1+n_2+\frac{1}{m}},\frac{n_1x_2}{n_1+\frac{1}{m}},\frac{1}{\frac{n_1+n_2}{p_3}+\frac{1}{m p_3}},\frac{1}{\frac{n_1}{p_3}+\frac{1}{m p_3}}\right), & \text{if $x_1 \geq x_2$ } \vspace{2mm}\\ \left(\frac{n_1x_1}{n_1+\frac{1}{m}},\frac{n_1x_2+n_2y}{n_1+n_2+\frac{1}{m}}, \frac{1}{\frac{n_1}{p_3}+\frac{1}{m p_3}},\frac{1}{\frac{n_1+n_2}{p_3}+\frac{1}{m p_3}}\right), & \text{if $x_1 < x_2$} \end{cases}. \end{equation*} \item [(ii)] the random variable $P_3$ follows the inverse gamma distribution having the p.d.f. $$\Pi_{2,m}(p_3)=\frac{(v_m)^{\frac{n+7}{2}}}{\Gamma\left(\frac{n+7}{2}\right)} \frac{e^{-\frac{v_m}{p_3}}}{p_3^{\frac{n+9}{2}}}, $$ where $n=2 n_1+n_2-3$ and \begin{equation*} v_m=\begin{cases} 1+\frac{w}{2}+\frac{n_1 x_1^2}{2}+\frac{n_1 x_2^2}{2}+\frac{n_2 y^2}{2}-\frac{(n_1 x_1+n_2y)^2}{2(n_1+n_2+\frac{1}{m})}-\frac{(n_1 x_2)^2}{2(n_1+\frac{1}{m})}, & \text{if $x_1 \geq x_2$ } \vspace{2mm}\\ 1+\frac{w}{2}+\frac{n_1 x_1^2}{2}+\frac{n_1 x_2^2}{2}+\frac{n_2 y^2}{2}-\frac{(n_1 x_2+n_2y)^2}{2(n_1+n_2+\frac{1}{m})}-\frac{(n_1 x_1)^2}{2(n_1+\frac{1}{m})}, & \text{if $x_1 < x_2$} \end{cases}, \end{equation*} $m=1,2,\ldots.$ \end{itemize} Therefore, under the scaled squared error loss function \eqref{S2.E2}, the Bayes estimator of the selected treatment mean $\mu_Q$, w.r.t.
the prior distribution $\xi_m$, is given by \begin{align} d_{\xi_m}\left(\utilde{T}\right) &=\begin{cases*} \frac{\mathbb{E}^{\underline{P}|\utilde{T}}\left(\frac{P_1}{P_3}\right)}{\mathbb{E}^{\underline{P}|\utilde{T}}\left(\frac{1}{P_3}\right)} , & \text{if $\overline{X}_1 \geq \overline{X}_2$ }\\ \frac{\mathbb{E}^{\underline{P}|\utilde{T}}\left(\frac{P_2}{P_3}\right)}{\mathbb{E}^{\underline{P}|\utilde{T}}\left(\frac{1}{P_3}\right)}, & \text{if $\overline{X}_1 < \overline{X}_2$} \end{cases*} \nonumber\\ &=\begin{cases*} \frac{n_1 \overline{X}_1 + n_2\overline{Y}}{n_1+n_2+\frac{1}{m}}, & \text{if $\overline{X}_1 \geq \overline{X}_2$ }\\ \frac{n_1 \overline{X}_2 + n_2\overline{Y}}{n_1+n_2+\frac{1}{m}}, & \text{if $\overline{X}_1 < \overline{X}_2$} \end{cases*} \nonumber\\ &=\frac{n_1\overline{X}_Q+n_2\overline{Y}}{n_1+n_2+\frac{1}{m}},~m=1,2, \ldots. \end{align} The posterior risk of the Bayes rule $d_{\xi_m}\left(\utilde{T}\right)$ is obtained as \begin{align*} r_{d_{\xi_m}}\left(\utilde{T}\right)&=\mathbb{E}^{\underline{P}|\utilde{T}} \left[\frac{\left(\frac{n_1\overline{X}_Q+n_2\overline{Y}}{n_1+n_2+\frac{1}{m}}-\mu_Q\right)^2}{P_3}\right]\\ &=\frac{1}{n_1+n_2+\frac{1}{m}}, \end{align*} which is independent of $\utilde{T}=(\overline{X}_1,\overline{X}_2,\overline{Y},W)$. Hence, the Bayes risk of the estimator $d_{\xi_m}$ is \begin{equation} r^*_{d_{\xi_m}}\left(\xi_m\right)=\frac{1}{n_1+n_2+\frac{1}{m}},~ m=1,2,\ldots. \end{equation} Applying Lemma $2.2 ~(ii)$ of \cite{misra2022estimation}, we obtain the risk of the estimator $d_M(\utilde{T})=\frac{n_1\overline{X}_Q+n_2\overline{Y}}{n_1+n_2}$ as \begin{align} R(\underline{\theta},d_{M})&=\mathbb{E}_{\underline{\theta}}\left( \frac{\frac{n_1\overline{X}_Q+n_2\overline{Y}}{n_1+n_2}-\mu_Q}{\sigma} \right)^2 \nonumber \\ &=\frac{1}{n_1+n_2}. \end{align} Therefore, the Bayes risk of the naive estimator $d_{M}(\utilde{T})$, under the prior $\xi_m$, is \begin{align} r^*_{d_{M}}\left(\xi_m\right)&=\frac{1}{n_1+n_2},~ m=1,2,\ldots. \end{align} We now provide the following theorem, which establishes the minimaxity of the natural estimator $d_{M}$. \begin{theorem} Under the scaled squared error loss function \eqref{S2.E2}, the natural estimator $d_M(\utilde{T})=\frac{n_1\overline{X}_Q+n_2\overline{Y}}{n_1+n_2}$ is minimax for estimating the selected treatment mean $\mu_{Q}$. \end{theorem} \begin{proof} Let $d$ be any other estimator. Since $d_{\xi_m}$, given by $(3.1)$, is the Bayes estimator of $\mu_{Q}$ under the prior $\xi_m$, $m=1,2,\ldots$, we have \begin{align*} \sup_{\underline{\theta} \in \Theta } R(\underline{\theta}, d) &\geq \int_{\Theta} R(\underline{\theta},d) \xi_{m}(\underline{\theta})d\underline{\theta}\\ &\geq \int_{\Theta} R(\underline{\theta},d_{\xi_{m}}) \xi_{m}(\underline{\theta})d\underline{\theta}\\ &=r^*_{d_{\xi_{m}}}(\xi_{m})=\frac{1}{n_1+n_2+\frac{1}{m}}, ~~~m=1,2,\ldots ~~~(\text{using}~ (3.2))\\ \Rightarrow \sup_{\underline{\theta} \in \Theta } R(\underline{\theta}, d) &\geq \lim_{m\to\infty}r^*_{d_{\xi_{m}}}(\xi_{m})=\frac{1}{n_1+n_2}=\sup_{\underline{\theta} \in \Theta } R(\underline{\theta}, d_{M}), ~~~(\text{using} ~(3.3)) \end{align*} implying that $d_{M}(\utilde{T})=\frac{n_1\overline{X}_Q+n_2\overline{Y}}{n_1+n_2}$ is minimax for estimating $\mu_{Q}$. \end{proof} We now invoke the principle of invariance. The problem of estimating the selected treatment mean $\mu_{Q}$, under the scaled squared error loss function \eqref{S2.E2}, is invariant under the affine group of transformations and also under the group of permutations.
It is easy to verify that any affine and permutation equivariant estimator of $\mu_{Q}$ will be of the form \begin{equation} \label{S2.E3} d_{\psi}(\utilde{T})=\frac{n_1\overline{X}_Q+n_2\overline{Y}}{n_1+n_2} - \widetilde{S}\psi\left(\frac{D}{\widetilde{S}}\right), \end{equation} for some function $\psi:\mathbb{R} \rightarrow \mathbb{R}$, where $D=\frac{n_1\overline{X}_Q+n_2\overline{Y}}{n_1+n_2}-\overline{X}_{3-Q}$. Let the class of all affine and permutation equivariant estimators of the type (3.5) be denoted by $\mathscr{E}_1$. Clearly, the MLE $d_M(\utilde{T})$ and the UMVCUE $d_{U}$, defined by (2.3) and $(2.4)$ respectively, belong to the class $\mathscr{E}_1.$ Next, we will show that the MLE $d_{M}$ is admissible within the class $\mathscr{E}_1$ of affine and permutation equivariant estimators. Notice that the risk function of any estimator $d \in \mathscr{E}_1$ depends on $\underline{\theta} \in \Theta$ only through $\mu=\frac{\theta_2-\theta_1}{\sigma}$, where $\theta_2=\max\left\{\mu_1,\mu_2\right\}$ and $\theta_1=\min\left\{\mu_1,\mu_2\right\}$. Consequently, the Bayes risk of any estimator $d \in \mathscr{E}_1$ depends on the prior distribution of $\underline{P}=(P_1, P_2, P_3)$ only through the distribution of $\frac{P_{[2]} - P_{[1]} }{P_{3}}$, where $ P_{[1]}=\min \left\{P_1, P_2 \right\}$ and $P_{[2]}=\max\left\{P_1, P_2\right\}.$ \par The following theorem establishes the admissibility of the estimator $d_{M}$ within the class $\mathscr{E}_1$ of affine and permutation equivariant estimators. Since the proof is along the lines of the proof of Theorem 5.2 of \cite{masihuddin2021equivariant}, it is omitted. \begin{theorem} The natural estimator $d_M(\utilde{T})=\frac{n_1\overline{X}_Q+n_2\overline{Y}}{n_1+n_2}$ is admissible for estimating the selected treatment mean $\mu_{Q}$, within the class $\mathscr{E}_1$, under the criterion of scaled mean squared error. \end{theorem} \section{Some additional naive estimators and a simulation study} For estimating the selected treatment mean, some additional naive estimators can be obtained by plugging the estimate of the unknown variance $\sigma^2$ into the expressions of the estimators ($\delta_{0}^{RB}$ and $\delta_1$) derived by \cite{misra2022estimation} for the case when $\sigma^2$ is known. For the unknown variance case, two such naive estimators are obtained as: \begin{align} d_{U1}(\utilde{T})= d^{RB,\widehat{\sigma}}_{0}(\utilde{T})&=Z_1+ \frac{\sqrt{n_2}S^*}{\sqrt{n_1(n_1+n_2)}}\frac{\phi\left(\frac{\sqrt{n_1(n_1+n_2)}}{\sqrt{n_2}S^*}(Z_1-Z_2)\right)}{\Phi\left(\frac{\sqrt{n_1(n_1+n_2)}}{\sqrt{n_2}S^*}(Z_1-Z_2)\right)} \end{align} and \begin{align} d_{U2}(\utilde{T})= d^{\widehat{\sigma}}_{1}(\utilde{T})&=\begin{cases} \frac{(n_1+n_2)Z_1+n_1 Z_2}{2n_1+n_2}\left\{\frac{\Phi\left(\frac{Z_1-Z_2}{\widehat{\sigma_1}}\right)-\Phi\left(\frac{n_1(Z_1-Z_2)}{(2n_1+n_2)\widehat{\sigma_1}}\right)}{\Phi\left(\frac{Z_1-Z_2}{\widehat{\sigma_1}}\right)}\right\}\\+\frac{\widehat{\sigma_1}\phi\left(\frac{n_1(Z_1-Z_2)}{(2n_1+n_2)\widehat{\sigma_1}}\right)+Z_1\Phi\left(\frac{n_1(Z_1-Z_2)}{(2n_1+n_2)\widehat{\sigma_1}}\right)}{\Phi\left(\frac{Z_1-Z_2}{\widehat{\sigma_1}}\right)}, & \text{ if }~ Z_1 > Z_2\vspace{3mm}\\ \frac{(n_1+n_2)Z_1+n_1 Z_2}{2n_1+n_2} , & \text{ if }~ Z_1 \leq Z_2 \end{cases} , \end{align} where $\widehat{\sigma_1}=S^*\sqrt{\frac{n_2}{n_1(n_1+n_2)}}$, $Z_1=\frac{n_1\overline{X}_Q+n_2\overline{Y}}{n_1+n_2}$, $Z_2=\overline{X}_{3-Q}$ and ${S^*}^2$ is the pooled sample variance based on the stage $1$ and the stage $2$ data.
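As an illustration of the plug-in construction, the estimator $d_{U1}$ in (4.1) can be evaluated as in the hedged sketch below; the function name and interface are assumptions, and $d_{U2}$ follows analogously from (4.2).
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def d_u1(x1, x2, y):
    """Plug-in estimator d_U1 of (4.1), with the pooled sample
    variance S*^2 replacing the unknown sigma^2."""
    x1, x2, y = map(np.asarray, (x1, x2, y))
    n1, n2 = len(x1), len(y)
    z1 = (n1 * max(x1.mean(), x2.mean()) + n2 * y.mean()) / (n1 + n2)
    z2 = min(x1.mean(), x2.mean())
    # pooled sample variance of the stage-1 and stage-2 data
    ss = (((x1 - x1.mean())**2).sum() + ((x2 - x2.mean())**2).sum()
          + ((y - y.mean())**2).sum()) / (2 * n1 + n2 - 3)
    sigma1 = np.sqrt(ss) * np.sqrt(n2 / (n1 * (n1 + n2)))
    t = (z1 - z2) / sigma1          # standardized stage-1/stage-2 gap
    return z1 + sigma1 * norm.pdf(t) / norm.cdf(t)
\end{verbatim}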
\par We now report a simulation study comparing the performance of the estimators $d_{U1}$, $d_{U2}$, the MLE $d_{M}$ and the UMVCUE $d_{U}$ under the scaled mean squared error and the bias criteria. \par \vspace{4mm} We compare the risk (scaled MSE) and bias performances of the various estimators of the selected treatment mean ($\mu_{Q}$) using Monte-Carlo simulations. The following estimators are considered for our numerical study: $d_{M}$, $d_{U}$, $d_{U1}$ and $d_{U2}$ (see (2.3), (2.4), (4.1) and (4.2)). Since the scaled MSEs and biases of these estimators of $\mu_{Q}$ depend on the parameters $\underline{\theta}=(\mu_{1},\mu_{2},\sigma) \in \Theta$ only through $\mu~(\geq 0)$, we have plotted the simulated risks and scaled biases of the various estimators against $\mu \geq 0$ for different configurations of the sample sizes $n_1$ and $n_2$. The simulated values of the scaled MSE and the bias based on 100,000 simulations are plotted in Figures 4.1-4.7. In Figures 4.1-4.2, we have plotted the simulated scaled MSEs of the proposed estimators against $\mu$. In Figures 4.3-4.5, we have plotted the simulated values of the scaled MSE against the information fraction~($=\frac{n_1}{n_1+n_2}$) for fixed total sample size $n=n_1+n_2$ and fixed $\mu$. The scaled biases of the various estimators are plotted in Figures 4.6-4.7.\par The following conclusions are drawn from the simulation study: \begin{itemize} \item[(i)] In conformity with (3.3), the scaled MSE of the natural estimator $d_M$ is constant ($=\frac{1}{n_1+n_2}$) for all values of the normalized treatment effect difference $\mu$. \item[(ii)] The scaled MSEs of all the other estimators ($d_U$, $d_{U1}$ and $d_{U2}$) decrease as the value of $\mu$ increases. For larger values of $\mu$, the estimator $d_{U2}$ has better scaled MSE performance than the other estimators. \item[(iii)] For $n_1=n_2$, the UMVCUE $d_U$ has a scaled MSE performance similar to that of the estimator $d_{U1}$. For $n_1 \geq n_2$, the estimator $d_{U2}$ outperforms the estimators $d_U$ and $d_{U1}$. \par It is also observed that the estimator $d_{U2}$ uniformly dominates $d_{U1}$ in terms of the scaled MSE.\par \item [(iv)] For small $n_2$ and relatively large $n_1$, the scaled MSEs of the estimators $d_{U1}$ and $d_{U2}$ are very close to that of $d_M$.\par When $n_1$ is small and $n_2$ is large, the scaled MSE of the UMVCUE $d_{U}$ is closer to the scaled MSE of $d_M$. \item[(v)] For fixed smaller values of $\mu$ and fixed total sample size $n=n_1+n_2$, the scaled MSE of the UMVCUE $d_U$ increases as the information fraction $\frac{n_1}{n}$ increases, whereas the scaled MSEs of $d_{U1}$ and $d_{U2}$ decrease as the information fraction increases. For larger values of $\mu$, the estimators $d_U$ and $d_M$ have similar scaled MSE performance. \item [(vi)] The scaled biases of all the competing estimators ($d_M$, $d_{U1}$ and $d_{U2}$), except the UMVCUE $d_U$, decrease as the value of $\mu$ increases. \par For larger values of $\mu$, these estimators have scaled bias performance comparable to that of $d_U$. \par It is also interesting to note that all the competing estimators of $\mu_{Q}$ have their maximum bias when $\mu$ is zero, and the bias decreases as the value of $\mu$ increases. \item[(vii)] The scaled MSEs and scaled biases of all the competing estimators decrease towards zero for large values of the sample sizes $n_1$ and $n_2$. \end{itemize} \vspace{2mm} When the scaled MSE is the key criterion for choosing a suitable estimator, we recommend the estimators $d_M$ and $d_{U2}$. For some specific configurations of $n_1$ and $n_2$ (small $n_1$ and large $n_2$), the UMVCUE $d_U$ is also a good competitor. \par \noindent When both the scaled bias and the scaled MSE are to be controlled, we recommend using the estimators $d_U$ and $d_M$.
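A minimal sketch of the Monte-Carlo scheme behind these plots is given below: it simulates the two-stage design (stage-1 selection by the larger sample mean, stage-2 sampling from the selected arm) and averages the scaled squared error of a supplied estimator. The names and defaults are illustrative assumptions; any of the estimator sketches above, such as the umvcue or d_u1 functions, can be passed as the estimator argument.
\begin{verbatim}
import numpy as np

def scaled_mse(estimator, mu, n1, n2, sigma=1.0, reps=100_000, seed=0):
    """Monte-Carlo scaled MSE E[((d - mu_Q)/sigma)^2] at normalized
    effect difference mu = (mu_2 - mu_1)/sigma >= 0 (here mu_1 = 0)."""
    rng = np.random.default_rng(seed)
    means = np.array([0.0, mu * sigma])
    errs = np.empty(reps)
    for r in range(reps):
        x1 = rng.normal(means[0], sigma, n1)     # stage-1 sample, arm 1
        x2 = rng.normal(means[1], sigma, n1)     # stage-1 sample, arm 2
        q = 0 if x1.mean() >= x2.mean() else 1   # natural selection rule
        y = rng.normal(means[q], sigma, n2)      # stage-2 sample, selected arm
        errs[r] = ((estimator(x1, x2, y) - means[q]) / sigma)**2
    return errs.mean()

# e.g. the naive estimator d_M:
d_m = lambda x1, x2, y: (len(x1) * max(x1.mean(), x2.mean())
                         + len(y) * y.mean()) / (len(x1) + len(y))
\end{verbatim}
By (3.3), scaled_mse(d_m, mu, n1, n2) should return approximately $1/(n_1+n_2)$ for every value of $\mu$.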
\newpage \FloatBarrier \begin{figure} \centering \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[width=9cm,height=8cm]{figs/ur1.eps} \caption{$n_1=3, n_2=5$} \end{subfigure} \hfill \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[width=9cm,height=8cm]{figs/ur2.eps} \caption{$n_1=5, n_2=5$} \end{subfigure} \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[width=9cm,height=8cm]{figs/ur3.eps} \caption{$n_1=8, n_2=5$} \end{subfigure} \hfill \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[width=9cm,height=8cm]{figs/ur4.eps} \caption{$n_1=10, n_2=10$} \end{subfigure} \caption{\textbf{Scaled MSE plots of various competing estimators for different configurations of $n_1$ and $n_2$}} \end{figure} \FloatBarrier \begin{figure} \centering \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[width=9cm,height=8cm]{figs/ur5.eps} \caption{$n_1=15, n_2=5$} \end{subfigure} \hfill \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[width=9cm,height=8cm]{figs/ur6.eps} \caption{$n_1=15, n_2=20$} \end{subfigure} \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[width=9cm,height=8cm]{figs/ur7.eps} \caption{$n_1=5, n_2=15$} \end{subfigure} \hfill \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[width=9cm,height=8cm]{figs/ur8.eps} \caption{$n_1=10, n_2=30$} \end{subfigure} \caption{\textbf{Scaled MSE plots of various competing estimators for different configurations of $n_1$ and $n_2$}} \end{figure} \FloatBarrier \begin{figure}[!h] \centering \includegraphics[width=4.3in,height=3.4in]{figs/if1.eps} \caption{\textbf{Scaled MSE plots of different competing estimators for fixed $n=n_1+n_2=20$ and $\mu=0.1$.}} \end{figure} \begin{figure}[!h] \centering \includegraphics[width=4.3in,height=3.4in]{figs/if2.eps} \caption{\textbf{Scaled MSE plots of different competing estimators for fixed $n=n_1+n_2=20$ and $\mu=0.6$.}} \end{figure} \FloatBarrier \begin{figure}[!h] \centering \includegraphics[width=4.3in,height=3.4in]{figs/if3.eps} \caption{\textbf{Scaled MSE plots of different competing estimators for fixed $n=n_1+n_2=20$ and $\mu=1.2$.}} \end{figure} \begin{figure}[!h] \centering \includegraphics[width=4.3in,height=3.4in]{figs/sb2.eps} \caption{\textbf{Scaled Bias plots of different competing estimators for $n_1=5,n_2=3$.}} \end{figure} \FloatBarrier \begin{figure} \centering \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[width=9cm,height=8cm]{figs/sb1.eps} \caption{$n_1=5, n_2=5$} \end{subfigure} \hfill \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[width=9cm,height=8cm]{figs/ub3.eps} \caption{$n_1=5, n_2=8$} \end{subfigure} \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[width=9cm,height=8cm]{figs/sb3.eps} \caption{$n_1=10, n_2=30$} \end{subfigure} \hfill \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[width=9cm,height=8cm]{figs/sb4.eps} \caption{$n_1=30, n_2=10$} \end{subfigure} \caption{\textbf{Scaled Bias plots of various competing estimators for different configurations of $n_1$ and $n_2$}} \end{figure} \FloatBarrier \section{Real data example} In this section, we illustrate the theoretical findings of the paper using a real data set.
The details of the data set can be accessed using the link: \url{https://vincentarelbundock.github.io/Rdatasets/doc/Stat2Data/FatRats.html}. The data are presented in Table 1 below. Data from this experiment compared weight gain for 60 baby rats that were fed different diets. Half of the rats were given low-protein diets and the rest were given a high-protein diet. The source of protein was either beef, cereal, or pork. \par Using the Shapiro-Wilk normality test, with a $p$-value of 0.834 for the high protein diet and 0.771 for the low protein diet, we conclude that the underlying populations of weight gains of the baby rats receiving the high protein diet and the low protein diet are approximately Gaussian. The assumption of equality of variances of the two populations is also accepted, with a $p$-value of 0.631, using the F test. So, the data can be considered to have come from two normal populations $N(\mu_{1},\sigma^2)$ and $N(\mu_{2},\sigma^2)$, with $\left(\widehat{\mu_{1}},\widehat{\mu_{2}}\right)=\left(92.5,81.6\right)$.\par To select the effective protein diet (the one providing the largest weight gain), we extract a sample of size $n_1=20$ from each population group and calculate $\overline{X}_1$ and $\overline{X}_2$. If $\overline{x}_1 \geq \overline{x}_2 $, we select the population corresponding to the high protein diet and, if $\overline{x}_2 > \overline{x}_1 $, we select the population corresponding to the low protein diet. In stage 2, we draw an additional sample of size $n_2=10$ from the population selected in stage 1, and calculate the stage 2 sample mean $\overline{Y}$. Finally, we compute the estimates $d_{M}$, $d_{U}$, $d_{U1}$ and $d_{U2}$ based on the calculated values of $\overline{X}_1$, $\overline{X}_2$, ${S}^2$ and $\overline{Y}$. \FloatBarrier \begin{table}[h!] \centering \textbf{Table 1}\\ \textbf{Weight gain in rats after being fed with high-protein diets.}\\\textbf{Stage I data} \begin{tabular}{lllllllllll} 73 & 102 & 118 & & 104 & 81 & & 107 & 100 & & \\ 87 & 117 & 111 & & 98 & 74 & & 56 & 111 & & \\ 95 & 88 & 82 & & 77& 86 & & 92 & & & \\ \end{tabular} \end{table} \vskip -0.2in \begin{table}[h!] \centering \textbf{Weight gain in rats after being fed with low-protein diets.}\\ \textbf{Stage I data} \begin{tabular}{lllllllllll} 90& 76 & 90 & &64 & 86 & &51& 72& & \\ 90 & 95 & 78 & &107 & 107 & &97 & 80 & & \\ 98 & 74 & 74 & &67 & 89 & &58 & & & \\ \end{tabular} \end{table} \begin{table}[h!] \centering \textbf{Weight gain in rats after being fed with high-protein diets.}\\\textbf{Stage II data} \begin{tabular}{lllllllllll} 94 & 79 & 96 & & 98 & 102 & & 102 & 108 & & \\ 91 & 120 & 105 & & & & & & & & \\ \end{tabular} \end{table} \FloatBarrier The various two-stage estimates of the selected treatment mean $\mu_{Q}$ are tabulated in Table 2 below: \begin{center} \textbf{Table 2:} \textbf{Various estimates of the selected treatment mean $\mu_{Q}$}. \label{tab:table1}\\ \begin{tabular}{|l|c|c|c|} \hline \textbf{$d_{M}$} & \textbf{$d_{U}$} &\textbf{$d_{U1}$} & \textbf{$d_{U2}$} \\ \hline 95.13 & 91.23& 88.34 & 89.34\\ \hline \end{tabular} \end{center} From the various estimates of the selected treatment mean given in Table 2, we notice that the two-stage adaptive estimates $d_M$ and $d_U$ are close to the full-data value of the mean of the selected treatment, i.e., $\mu_Q = 92.5$. \section{Concluding Remarks} In this article, we investigated the problem of estimating the mean of the selected treatment under a two-stage adaptive design.
We have extended the work of \cite{cohen1989two} by allowing multiple observations in the second stage. \par In the unknown variance setting, we have derived the UMVCUE of the selected treatment mean and have shown that the MLE (a weighted average of the first and second stage sample means, with weights proportional to the corresponding sample sizes) is minimax and admissible. We have also proposed two plug-in estimators ($d_{U1}$ and $d_{U2}$) obtained by plugging the pooled sample variance in place of the common variance $\sigma^2$ into some of the estimators proposed by \cite{misra2022estimation}. Using simulations, we have investigated the strengths and weaknesses of these four estimators under different trial designs. \bibliographystyle{apalike}
\section{Introduction} \noindent The single pulse emission from pulsars consists of one or more components which are known as subpulses. Shortly after the discovery of pulsars it was observed that the subpulses in some cases carry out systematic drift motion within the pulse window, and the phenomenon was termed subpulse drifting \citep{dra68}. A systematic approach to measuring drifting features was developed by \cite{bac73,bac75}, which included fluctuation spectral studies. The drifting is usually characterised by two periodicities: $P_2$, the longitudinal separation between two adjacent drift bands, and $P_3$, the interval over which the subpulses repeat at any specific location within the pulse window. Fluctuation spectral studies estimate the Fourier transform of the subpulse intensities along specific longitude ranges within the pulse window. The peak frequency of the amplitude spectrum corresponds to $P_3$, while the phase variations across the pulse window, corresponding to the peak amplitude, give a more detailed indication of the subpulse motion across the window than $P_2$. The periodic subpulse variations can be broadly categorised into two classes: phase modulated drifting, which is associated with subpulse motion, and periodic amplitude modulation, similar to periodic nulling, which is expected to be a phenomenon distinct from subpulse drifting \citep{bas16, mit17,bas17,bas18b}. Phase modulated drifting also shows a wide variety of features which can be characterised by the drifting phase variations. For instance, positive drifting corresponds to the case where the phases show a decreasing trend from the leading to the trailing edge of the profile, and vice versa for negative drifting. Subpulse drifting has been a topic of intensive research, with the phenomenon expected to be present in 40-50\% of the pulsar population. There are around 120 pulsars known at present to exhibit some form of periodic modulation in their single pulse sequences \citep{wel06,wel07,bas16}. The drifting effects are very diverse, and associations are seen with other phenomena like mode changing and nulling in some cases \citep{wri81,dei86,han87,viv97,red05,bas18b}. However, to gain a deeper understanding of the phenomenon at large, more comprehensive studies involving classification of the drifting population and identification of underlying traits are essential. The first such attempt was undertaken by \citet{ran86}, who examined the drifting behaviour within the empirical core-cone model of the emission beam. The pulsar profile, which is formed after averaging several thousand single pulses, has a highly stable structure and is made up of one or more (typically up to five) components. The components can be classified into two distinct categories, the central core component and the adjacent conal components. A number of detailed studies suggest that the radio emission beam consists of central core emission surrounded by conal emission arranged in nested rings \citep{ran90,ran93, mit99}. The differences in the observed profiles are a result of the different line of sight (LOS) traverses across the emission beam. \citet{ran86} suggested subpulse drifting to be primarily a conal phenomenon, with the drifting arising due to the circulation of the conal emission around the magnetic axis. This would result in an association between the drifting properties (particularly the phase variation of the drifting feature across the pulse) and the profile classification.
The most prominent drifting, with clear drift bands and large phase variations, was associated with the conal single (S$_d$) and barely resolved conal double (D) profile classes. This corresponded to the LOS traversing the emission beam towards the outer edge. Progressively more interior LOS traverses of the emission beam result in well resolved conal double (D), conal Triple ($_c$T) and conal Quadruple ($_c$Q) profile shapes. In the above classification scheme such profile classes were expected to show primarily longitude stationary drift with very little phase variation. The core dominated profiles were associated with central LOS traverses of the emission beam. The principal profile classes were categorised as core single (S$_t$), Triple (T) with a central core component and a pair of conal outriders, and Multiple (M) with a central core and two pairs of conal components. Subpulse drifting was expected to be phase stationary and only seen in the conal components of the T and M class profiles. However, the core components sometimes show longer periodic structures which were not classified as subpulse drifting. A second classification scheme for subpulse drifting was introduced by \citet{wel06,wel07} for their large drifting survey involving 187 pulsars. These studies primarily employed the fluctuation spectrum method to quantify the subpulse drifting features and identified periodic phenomena in around 55\% of the pulsars studied. Subpulse drifting was categorised based on the peak frequency in the fluctuation spectrum, with three primary drifting classes. The class ``coherent'' drifters had narrow features, with widths smaller than 0.05 cycles/$P$ (where $P$ corresponds to the pulsar period). The class ``diffuse'' drifters had wider features compared to ``coherent'' drifters and were further subdivided into two categories. The subclass ``Dif'' pulsars had features which were clearly separated from the alias boundaries of 0 and 0.5 cycles/$P$, while the subclass ``Dif$^*$'' had features bordering either boundary. Additionally, a fourth class of pulsars with longitude stationary subpulse modulation were also observed, but these were not classified into the drifting population. In this classification scheme no information regarding the core-cone nature of the profile class was utilized, and hence many pulsars with core components showing low frequency features were also identified as subpulse drifters. The Meterwavelength Single-pulse Polarimetric Emission Survey (MSPES) was conducted recently to characterise the single pulse behaviour of 123 pulsars \citep{mit16}. The periodic subpulse behaviour was investigated by \citet{bas16}, with 56 pulsars showing some form of periodicity. No classification studies of drifting were carried out in that work, but the study reported a clear separation between pulsars with prominent drift bands and pulsars with periodic amplitude modulations. It was observed that the phase modulated drifting associated with the conal profiles showed a negative correlation between the drifting periodicity and the spin-down energy loss ($\dot{E}$). The periodic amplitude modulation had large periodicities, in excess of 10$P$, and did not show any dependence on $\dot{E}$. It was further shown by \citet{bas17} that the periodic amplitude modulation had similarities with the periodic nulling seen in certain pulsars, indicating a possible common origin for the two phenomena.
A number of pulsars with periodic nulling have conal profiles, which opens up the possibility that the low frequency features are not limited to core components only. These works provide a physical basis for the assertion made by \citet{ran86} that subpulse drifting is primarily a conal phenomenon and that the low frequency periodic modulation seen in the core components belongs to a different physical phenomenon. However, there are certain issues that have not been addressed in the MSPES studies. Firstly, drifting features of the conal components in core dominated profiles show relatively flat phase variations. In the above works these drifting cases have not been clearly distinguished from periodic amplitude modulation in core components. Secondly, no detailed phase variation studies associated with subpulse drifting were reported. Finally, in order to estimate the $P_3$ dependence on $\dot{E}$, the sparking model of drifting as suggested by \citet[][hereafter RS75]{rud75} was used. According to the RS75 model the radio emission emerges from an ultra-relativistic, outflowing plasma which is generated in an Inner Acceleration Region (IAR) above the polar cap. The IAR maintains a non-stationary flow of plasma via sparking discharges that undergo non-linear plasma instabilities around 500 km from the stellar surface to emit coherent curvature radio emission \citep{ass98,mel00,gil04,mit09}. The subpulses are believed to be associated with individual sparks \citep{gil00}, and the drifting occurs due to the movement of the sparks across the magnetic field in the IAR caused by the ${\bf E \times B}$ drift, where $\mathbf{E}$ is the co-rotational electric field in the IAR. RS75 argued that in a pulsar where the rotation axis is anti-parallel to a star centered dipolar magnetic axis, as the sparks develop in the IAR, they move slower than the co-rotation speed. As a consequence, with every rotation the sparks lag behind. Since the same effect is translated to the subpulse emission, an observer sees this effect as subpulse drifting. In a non-aligned rotator (where the dipole magnetic axis and rotation axis are not coincident), assuming that the spark motion is primarily due to the co-rotational electric field, the sparks move around the rotation axis. This gives a preferred direction for the subpulse motion within the pulse window, from the leading to the trailing edge of the profile \citep{bas16}. This justifies the assumption that negative drifting corresponds to the sparks lagging behind the co-rotation speed, with a periodicity greater than 2$P$, while positive drifting is aliased and has a periodicity between $P$ and 2$P$. However, it has been postulated that the magnetic field near the surface in the IAR is non-dipolar, while the magnetic field is strictly dipolar in the emission region \citep[RS75,][]{gil02,mit09,mit17b}. This implies that even though the sparks lag behind the co-rotation speed in the IAR, the corresponding subpulse motion does not necessarily exhibit a preferential direction within the pulse window, particularly for more central LOS traverses of the emission beam. As mentioned earlier, the phase variations of the drifting peaks give a more accurate diagnostic of the subpulse motion across the pulse window than $P_2$. The possibility that the low frequency amplitude modulations can be associated with both core and conal emission makes the phase variation studies all the more important for categorising the drifting phenomenon in pulsars.
However, only a handful of studies exist in the literature with detailed phase behaviour reported, for example \citet{edw03,ros11,wel16}. Phase variations across the profile show considerable departure from linearity in a number of cases. A connection between non-linearity in phase variations and orthogonal polarization moding, especially for the outer cones, has been reported in \citet{ran03,ran05}. A precursor study of the detailed phase behaviour associated with subpulse drifting in four pulsars has been conducted by \citet{bas18a}. The primary objective of this paper is to extend this work and develop a uniform classification scheme for the subpulse drifting population. We have carried out extensive observations using the Giant Meterwave Radio Telescope (GMRT), as well as gathered archival data from the telescope, to characterise the subpulse drifting in the majority of pulsars where this phenomenon has been reported. The expanded sample is also useful for investigating the different physical relationships of the drifting population, particularly the dependence of $P_3$ on $\dot{E}$ in the presence of non-dipolar magnetic fields in the IAR. In section 2 we report the details of the observations and analysis for both archival and recently acquired data. In section 3 we present the details of the drifting features, including the classification scheme introduced in this work. In section 4 we discuss the physical properties of drifting in detail. Finally, we summarize our results in section 5. \section{Observations and Analysis} \noindent Numerous detailed studies exist in the literature of individual pulsars exhibiting periodicity in their single pulse behaviour; however, there have been only a few major works collecting and enhancing this population. \citet{ran86} assembled all the available sources of periodic modulations, which at the time of their publication amounted to 40-50 pulsars. \citet{wel06, wel07} considerably increased the drifting population and reported around 100 pulsars with periodic modulation, including most of the pulsars in the \citet{ran86} list. More recently, \citet{bas16} augmented this population by around 20 sources in their MSPES studies using the GMRT. In this work we consider primarily the pulsars exhibiting subpulse drifting and not the periodic amplitude modulation, which is considered to be a separate phenomenon. As mentioned earlier, drifting is restricted to the conal components of pulsar profiles, with core components not showing any signatures of this phenomenon. This is most evident in the T or M type profile classes, where the drifting is seen primarily in the leading and trailing conal components with no signatures seen in the central core component \citep{mit08,maa14}. On the contrary, periodic amplitude modulation is usually seen as a low frequency feature in the fluctuation spectrum. This phenomenon appears across both the core and the conal components simultaneously, at similar locations in the fluctuation spectrum, and is likely to be a result of changes in the conditions across the entire IAR. This is similar to the periodic nulling seen in certain pulsars, which suggests a common physical origin for both \citep{bas17}. The low frequency feature seen in the fluctuation spectra has been interpreted in terms of a rotating-subbeam carousel pattern of the conal emission, which is sparsely populated and stable over several rotation periods \citep{her07,her09,for10}.
In this picture subpulse drifting arises due to the circulation of beamlets, while the low frequency feature is expected to be indicative of the overall circulation time. However, the presence of the low frequency feature in core emission, both in the form of amplitude modulation and periodic nulling, strongly argues against this hypothesis. In certain pulsars where both these features are seen in the fluctuation spectra, their shapes are very different, which also indicates a different physical origin for the two phenomena \citep{bas17}. In order to analyze the drifting features, highly sensitive single pulse measurements are required. We have carried out a detailed study of the subpulse drifting population primarily using observations from the GMRT. Our objective was to determine the phase variations wherever possible and introduce a consistent classification scheme for drifting. The MSPES survey observed single pulses at two radio frequencies, 333 and 618 MHz, from a large number of pulsars primarily between 0\degr~and $-$50\degr~declinations \citep{mit16}. The fluctuation spectral analyses for these sources were previously conducted by \citet{bas16}. However, detailed phase variation estimates across the profile were only carried out for five pulsars from this list \citep{bas18a,bas18b}. We have used the data from these observations to carry out more detailed phase variation studies of drifting. Additionally, we have assembled archival observations, carried out at 325 MHz for 50 pulsars on three separate occasions: 2004 August 27, 2006 February 14 and 2007 October 26. These observations recorded sensitive fully polarized single pulses using the now defunct GMRT hardware backend \citep{sir00}. Detailed studies of the polarization characteristics for many of these pulsars have been previously reported in \citet{mit11}, where the observing details as well as the instrumental setup are explained. However, no detailed fluctuation spectral studies of the single pulses were then carried out. We found several pulsars with drifting features, which have also been included in this work. There were an additional 30-40 pulsars with periodic single pulse modulation, particularly above 0\degr~declination, which were outside the archival sample. We have carried out new observations of around 35 pulsars using the GMRT, on seven separate occasions between November 2017 and January 2018, for a total time of 30 hours. These observations were carried out as a follow-up to MSPES. We recorded total intensity signals for roughly 2000 single pulses from each pulsar, at 333 MHz, with an observing setup similar to \citet{bas18a}. We have compiled a list of 61 pulsars, which is possibly the most complete database of subpulse drifting known at present. As mentioned earlier, this list does not include the periodic amplitude modulation and periodic nulling cases. We have used fluctuation spectral analysis to measure the drifting features. We employed analysis schemes similar to \citet{bas18a} in order to estimate the detailed phase behaviour corresponding to drifting. We estimated the Longitude Resolved Fluctuation Spectra (LRFS) using 256 consecutive single pulses, as well as their time variations by shifting the starting point by 50 pulses. The phase variations across the pulse window were estimated by fixing the phase to be zero at the longitude corresponding to peak amplitude in the average spectrum. This process was repeated for the entire pulse sequence, as detailed in \citet{bas18a}.
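For illustration, the block-wise LRFS and peak-phase estimation described above can be sketched as follows (a minimal numpy sketch under the stated 256-pulse/50-pulse scheme; the array and function names are illustrative, and the simple peak picking omits the significance screening described below):
\begin{verbatim}
import numpy as np

def lrfs_blocks(pulses, nfft=256, step=50):
    """Longitude Resolved Fluctuation Spectra from a single-pulse stack.

    pulses : array of shape (npulse, nbin), one row per rotation.
    Returns one (spectrum, phase) pair per 256-pulse block, where
    spectrum has shape (nfft//2 + 1, nbin), covering 0-0.5 cycles/P.
    """
    results = []
    for start in range(0, pulses.shape[0] - nfft + 1, step):
        block = pulses[start:start + nfft]
        ft = np.fft.rfft(block, axis=0)       # FFT along the pulse axis
        spec = np.abs(ft)**2                  # fluctuation power per longitude
        avg = spec.sum(axis=1)                # longitude-averaged spectrum
        kpk = np.argmax(avg[1:]) + 1          # drifting peak (skip DC term)
        phase = np.angle(ft[kpk], deg=True)   # phase vs longitude at the peak
        ref = np.argmax(np.abs(ft[kpk]))      # longitude of peak amplitude
        phase = phase - phase[ref]            # fix the phase to zero there
        results.append((spec, phase))
    return results
\end{verbatim}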
The Fourier transforms were estimated for each longitude and the phases were only calculated for significant detections (5$\sigma$ above the baseline level) of the peaks at that longitude. This implied that in certain pulsars, with weak peaks in the fluctuation spectra, no phase information was available. In these cases we used the Harmonic Resolved Fluctuation Spectra \citep[HRFS,][]{des01,bas16} to get an indication of whether the drifting is positive, negative, or roughly phase stationary. The phase studies reported here were carried out for pulsars which showed long durations of systematic subpulse drifting. In certain pulsars there is rapid mode changing and/or frequent long duration nulls, and the emission switches between multiple states, with or without subpulse drifting, at short intervals. These require specialized techniques to separate the emission modes and estimate the drifting, and will be addressed in future works. \section{The Subpulse Drifting Classes} \noindent Subpulse drifting has been classified in the past based on either the profile type or the width of the drifting feature. There are merits to both these schemes since, although drifting is primarily a conal phenomenon, there are clear differences between prominent drift bands and diffuse structures in the fluctuation spectra. We have classified pulsars into four groups based on the nature of their phase variations as well as the width of the drifting features. The criteria for these classifications are explained in detail below. In each pulsar we have also specified the profile class within the core-cone model of the emission beam as introduced by \citet{ran90,ran93}. In some pulsars the profile types were not available in the literature, and we have suggested possible profile types based on their shapes and polarization properties.
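Schematically, the criteria detailed in the following subsections amount to the decision logic below (a simplified sketch for orientation only; the thresholds are those quoted in the text, while the function name and boolean inputs are illustrative):
\begin{verbatim}
def drifting_class(fwhm, phase_range, jumps_or_reversals, has_core):
    """Schematic summary of the four drifting classes.

    fwhm               : FWHM of the drifting feature (cycles/P)
    phase_range        : extent of phase variation across the window (deg)
    jumps_or_reversals : sudden ~180 deg jumps or slope reversals present
    has_core           : profile contains a central core component
    """
    if has_core and phase_range < 100:
        return 'low-mixed phase-modulated'
    if fwhm < 0.05 and jumps_or_reversals:
        return 'switching phase-modulated'
    if fwhm < 0.05 and phase_range > 100:
        return 'coherent phase-modulated'
    if fwhm > 0.05:
        return 'diffuse phase-modulated'
    return 'unclassified'
\end{verbatim}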
\subsection{Coherent phase modulated drifting} \begin{table*} \caption{The coherent phase modulated drifting} \label{tabphscoh} \centering \begin{tabular}{rccccccccc} \hline & PSRJ & PSRB & $P$ & $\dot{E}$ & Mode & $P_3$ & Type & Profile & \\ & & & (s) & (10$^{30}$ erg~s$^{-1}$) & & ($P$) & & & \\ \hline 1 & J0034$-$0721 & B0031$-$07 & 0.943 & 19.2 & A & 13$\pm$1 & ND & S$_d$ & \ref{fig_J0034_1}, \ref{fig_J0034_2} \\ & & & & & B & 6.5$\pm$0.5 & & & \\ & & & & & C & 4.0$\pm$0.5 & & & \\ & & & & & & & & & \\ 2 & J0108+6608 & B0105+65 & 1.284 & 244 & --- & 2.04$\pm$0.08 & PD & S$_d$ & \ref{fig_J0108} \\ & & & & & & & & \\ 3 & J0151$-$0635 & B0148$-$06 & 1.465 & 5.56 & --- & 14.4$\pm$0.8 & ND & D & \ref{fig_J0151_1}, \ref{fig_J0151_2} \\ & & & & & & & & & \\ 4 & J0421$-$0345 & --- & 2.161 & 4.55 & --- & 3.1$\pm$0.1 & PD & *D/S$_d$ & \ref{fig_J0421} \\ & & & & & & & & & \\ 5 & J0459$-$0210 & --- & 1.133 & 37.9 & --- & 2.36$\pm$0.01 & ND & *D & --- \\ & & & & & & & \\ 6 & J0814+7429 & B0809+74 & 1.292 & 3.08 & --- & 11.1$\pm$0.1 & ND & S$_d$ & \ref{fig_J0814} \\ & & & & & & & & & \\ 7 & J0820$-$1350 & B0818$-$13 & 1.238 & 43.8 & --- & 4.7$\pm$0.1 & ND & S$_d$ & [1] \\ & & & & & & & & & \\ 8 & J0934$-$5249 & B0932$-$52 & 1.445 & 60.9 & --- & 3.9$\pm$0.2 & ND & S$_d$ & \ref{fig_J0934} \\ & & & & & & & & & \\ 9 & J0946+0951 & B0943+10 & 1.098 & 104 & B & 2.15$\pm$0.01 & PD & S$_d$ & [2] \\ & & & & & Q & --- & --- & S$_d$-PC & \\ & & & & & & & & & \\ 10 & J1418$-$3921 & --- & 1.097 & 26.6 & --- & 2.49$\pm$0.03 & PD & *D & \ref{fig_J1418} \\ & & & & & & & & & \\ 11 & J1543$-$0620 & B1540$-$06 & 0.709 & 97.4 & --- & 3.01$\pm$0.05 & PD & S$_d$ & \ref{fig_J1543} \\ & & & & & & & & & \\ 12 & J1555$-$3134 & B1552$-$31 & 0.518 & 17.7 & Peak1 & 17.5$\pm$3.6 & ND & *D & [1] \\ & & & & & Peak2 & 10.2$\pm$1.0 & & & \\ & & & & & & & & & \\ 13 & J1720$-$2933 & B1717$-$29 & 0.620 & 123 & --- & 2.452$\pm$0.006 & ND & *S$_d$ & [1] \\ & & & & & & & & & \\ 14 & J1727$-$2739 & --- & 1.293 & 20.1 & A & 9.7$\pm$1.6 & ND & *D & [3] \\ & & & & & B & 5.2$\pm$0.9 & & & \\ & & & & & C & --- & --- & & \\ & & & & & & & & & \\ 15 & J1816$-$2650 & B1813$-$26 & 0.593 & 12.6 & --- & 4.1$\pm$0.2 & ND & *D & --- \\ & & & & & & & & & \\ 16 & J1822$-$2256 & B1819$-$22 & 1.874 & 8.12 & A & 19.6$\pm$1.6 & ND & *D & [4] \\ & & & & & Trans A & 14.3$\pm$1.8 & & & \\ & & & & & B & --- & --- & & \\ & & & & & C & 10.7$\pm$1.1 & & & \\ & & & & & & & & & \\ 17 & J1901$-$0906 & --- & 1.782 & 11.4 & --- & 3.05$\pm$0.09 & ND & *D & \ref{fig_J1901_1}, \ref{fig_J1901_2} \\ & & & & & & & & & \\ 18 & J1919+0134 & --- & 1.604 & 5.63 & --- & 6.6$\pm$0.6 & ND & *D & --- \\ & & & & & & & & & \\ 19 & J1921+1948 & B1918+19 & 0.821 & 63.9 & A & 6.1$\pm$0.3 & PD & $_c$T & \ref{fig_J1921_P1} \\ & & & & & B & 3.8$\pm$0.1 & & & \\ & & & & & C & 2.45$\pm$0.04 & & & \\ & & & & & N & --- & --- & & \\ & & & & & & & & & \\ 20 & J1946+1805 & B1944+17 & 0.441 & 11.1 & A & 13.8$\pm$0.7 & ND & $_c$T & --- \\ & & & & & B & 6.1$\pm$1.7 & & & \\ & & & & & C & --- & --- & & \\ & & & & & D & --- & --- & & \\ & & & & & & & & & \\ 21 & J2046$-$0421 & B2043$-$04 & 1.547 & 15.7 & --- & 2.75$\pm$0.04 & PD & S$_d$ & \ref{fig_J2046_1}, \ref{fig_J2046_2} \\ & & & & & & & & & \\ 22 & J2305+3100 & B2303+30 & 1.576 & 29.2 & B & 2.05$\pm$0.05 & PD/ND & S$_d$ & --- \\ & & & & & Q & $\sim$3 & & & \\ & & & & & & & & & \\ 23 & J2313+4253 & B2310+42 & 0.349 & 104 & --- & 2.1$\pm$0.1 & PD & *$_c$T & \ref{fig_J2313} \\ \hline \end{tabular} \medskip \\$^*$-These classifications were not 
available previously and suggested here.\\ 1-\cite{bas18a}; 2-\cite{des01}; 3-\cite{wen16}; 4-\cite{bas18b}. \end{table*} The most prominent drifting is characterised by visible drift bands in the single pulse sequence. Fluctuation spectra show the presence of sharp narrow peaks indicating highly structured drifting. The phases corresponding to the peak frequency show large systematic variations across the pulse window, signifying large scale subpulse motion. All drifting features with time-averaged widths, measured at full width at half maximum (FWHM), less than 0.05 cycles/$P$, and phases varying monotonically for more than 100$\degr$ across the majority of the pulse window have been classified as coherent phase-modulated drifting. In Table \ref{tabphscoh} we report 23 pulsars which exhibit this drifting effect. In addition to the basic physical parameters, $P$ and $\dot{E}$, the Table also shows the different emission modes, the drifting periodicity, the general direction of subpulse variation signified by positive (PD) and negative drifting (ND), and the profile classification for each pulsar. In many pulsars no previous profile classifications were available and our suggested classifications are marked with `*' in the Table. All the pulsars in this list exhibit conal profiles with the majority showing either S$_d$ or D type profiles. There are 7 pulsars in this group that show the presence of mode changing with more than one stable emission state. The subpulse drifting changes in the different modes, with some modes showing no clear drift pattern. In Appendix A we also show the fluctuation spectral plots for 12 pulsars, which include the time evolution of the LRFS as well as the phase and amplitude variations of the peak frequency across the pulse profile. The detailed phase variations for three pulsars in this group, J0820$-$1350 (B0818$-$13), J1555$-$3134 (B1552$-$31) and J1720$-$2933 (B1717$-$29), have been shown in \citet{bas18a}. Pulsar J1555$-$3134 shows the presence of two distinct drifting peaks which are not harmonically related, likely suggesting fast transitions between two different drift states. Pulsar J1727$-$2739 nulls for around 60\% of the time and hence our analysis schemes were not suitable for estimating the detailed phase behaviour. However, the single pulse behaviour was studied in detail by \citet{wen16}, who found the presence of three distinct emission modes during the burst phases, two of which show systematic drift motion with prominent drift bands. Additionally, pulsar J1822$-$2256 (B1819$-$22) has also been studied in detail, including the drifting behaviour, by \citet{bas18b}. Pulsar J0946+0951 (B0943+10) has been extensively studied \citep{des01} and shows the presence of large phase variations corresponding to sharp peaks in the fluctuation spectra. Pulsar J0459$-$0210 was part of our latest observations, but was affected by radio frequency interference (RFI) and could not be analyzed. In \citet{wel07} the fluctuation spectra of this pulsar showed the presence of a narrow peak with preferred negative drifting. Two pulsars J1816$-$2650 (B1813$-$26) and J1919+0134 were observed in MSPES and their fluctuation spectra showed the presence of narrow peaks with preferred drift directions \citep{bas16}. However, their single pulses were not sensitive enough for the phase variation studies. In the case of PSR J1946+1805 (B1944+17) the emission is frequently interrupted by long nulls. Our analysis schemes were not adequate for estimating the detailed phase variations in this pulsar.
However, the drifting properties have been investigated by \citet{klo10} and are consistent with the drifting class described here. It is also possible that the pulsar belongs to the switching phase-modulated drifting class described below, and more detailed studies are required to estimate its phase variations. Finally, pulsar J2305+3100 (B2303+30) has prominent phase variations with a drifting periodicity very close to 2$P$. The temporal fluctuations of the LRFS cause frequent switching between positive and negative drifting, which smears out the average phase behaviour. \subsection{Switching phase modulated drifting} \begin{table*} \caption{The switching phase modulated drifting} \centering \begin{tabular}{rccccccccc} \hline & PSRJ & PSRB & $P$ & $\dot{E}$ & Mode & $P_3$ & Profile & \\ & & & (s) & (10$^{30}$ erg~s$^{-1}$) & & ($P$) & & \\ \hline 1 & J0323+3944 & B0320+39 & 3.032 & 0.9 & --- & 8.5$\pm$0.3 & *$_c$T & \ref{fig_J0323} \\ & & & & & & & & \\ 2 & J0815+0939 & --- & 0.645 & 20.4 & --- & 16.6$\pm$0.3 & *$_c$Q & [1] \\ & & & & & & & & \\ 3 & J0820$-$4114 & B0818$-$41 & 0.545 & 4.60 & --- & 18.5$\pm$1.5 & $_c$Q & [2] \\ & & & & & & & & \\ 4 & J1034$-$3224 & --- & 1.151 & 5.97 & --- & 7.2$\pm$0.5 & *$_c$Q & [3] \\ & & & & & & & & \\ 5 & J1842$-$0359 & B1839$-$04 & 1.840 & 3.22 & --- & 12.4$\pm$0.5 & *$_c$Q & [4], \ref{fig_J1842} \\ & & & & & & & & \\ 6 & J1921+2153 & B1919+21 & 1.337 & 22.3 & --- & 4.2$\pm$0.2 & $_c$Q? & \ref{fig_J1921_P2} \\ & & & & & & & & \\ 7 & J2321+6024 & B2319+60 & 2.256 & 24.2 & A & 8$\pm$1 & $_c$Q & \ref{fig_J2321} \\ & & & & & B & 4$\pm$1 & & & \\ & & & & & ABN & 3$\pm$0.5 & & & \\ \hline \end{tabular} \label{tabphsw} \medskip \\$^*$-These classifications were not available previously and suggested here.\\ 1-\cite{cha05,sza17}; 2-\cite{bha07,bha09}; 3-\cite{bas18a}; 4-\cite{wel16}. \end{table*} This group of pulsars is similar to the coherent phase-modulated drifters, with sharp peaks in their fluctuation spectra and large phase variations across the pulse window. The primary distinguishing feature involves a sudden switch in the phase variation across adjacent components. The pulsars where the FWHM of the drifting features, in the time-averaged spectra, is less than 0.05 cycles/$P$, and the phase variations either show reversals in slope or sudden 180\degr~jumps in adjacent components, are classified as switching phase-modulated drifting. The three pulsars J0815+0939, J1034$-$3224 and J1842$-$0359 (B1839$-$04), where detailed phase variation studies show the rare phenomenon of bi-drifting, i.e. positive and negative drift directions seen simultaneously in different regions of the pulse window, are included in this classification scheme. In Table \ref{tabphsw} we list the 7 pulsars which belong to this group, including their $P$, $\dot{E}$, different emission modes, drifting periodicity and profile classification. Once again we have marked with `*' our suggestions for the pulsars without previous profile classifications. As shown in the Table most of the pulsars in this group belong to the $_c$Q profile class, with four conal components, the only exception being J0323+3944 (B0320+39) which likely has a $_c$T profile shape. Additionally, pulsar J1034$-$3224 has a low-level precursor component which does not exhibit any detectable drifting \citep{bas15,bas18a}.
Incidentally, all three bi-drifting pulsars as well as PSR J0820$-$4114 (B0818$-$41) have large profile widths of 100\degr~longitude or more, suggesting small inclination angles between the rotation and magnetic axes. Pulsar J0815+0939 shows positive drifting only in the second component, while the remaining components show negative drifting \citep{cha05,sza17}. In PSR J1034$-$3224 alternate components have opposite directions of phase variation, with the first and third components showing negative drifting and the second and fourth showing positive drifting \citep{bas18a}. In the case of J1842$-$0359 the leading part shows phase variations corresponding to negative drifting, which reverse direction towards the middle into positive drifting \citep{wel16}. Our analysis at 618 MHz is shown in Appendix B and also indicates a reversal in phase between the leading and trailing components. The phase behaviour of pulsar J0323+3944 has been shown in detail in Appendix B. A 180\degr~phase jump has been previously reported \citep{edw03} for this pulsar near the center of the profile, which is also seen in our plots. Pulsar J0820$-$4114 was observed as part of MSPES, but the detected single pulses were not sensitive enough for estimating the drifting behaviour. However, the pulsar has been studied in detail by \citet{bha07,bha09}, who report that the central region shows a different subpulse motion compared to the outer components. It appears that no reversal in the drift direction is seen in the different components, but more detailed phase variation studies would be required. Pulsar J1921+2153 (B1919+21) was observed in MSPES at 618 MHz and we have carried out detailed phase variation studies as shown in Appendix B. A jump in the phase variation is seen towards the center, signifying different subpulse motion between the inner and outer parts of the profile. Finally, pulsar J2321+6024 (B2319+60) has been reported to exhibit three distinct drift modes by \citet{wri81}. Our drifting analysis could only estimate the phase variations corresponding to the most dominant mode with $P_3$ = 8$P$, as shown in Appendix B. The phase variations show positive drifting, and no drift reversals are seen in any of the components. However, a clear shift in the phase variations is seen between the adjacent second and third components, which justifies the classification.
\subsection{Diffuse phase modulated drifting} \begin{table*} \caption{The diffuse phase modulated drifting} \centering \begin{tabular}{rcccccccc} \hline & PSRJ & PSRB & $P$ & $\dot{E}$ & $P_3$ & Type & Profile & \\ & & & (s) & (10$^{30}$erg~s$^{-1}$) & ($P$) & & & \\ \hline 1 & J0152$-$1637 & B0149$-$16 & 0.833 & 88.8 & 5.9$\pm$1.0 & ND & D & [1] \\ & & & & & & & & \\ 2 & J0304+1932 & B0301+19 & 1.388 & 19.1 & 6.4$\pm$1.7 & ND & D & [1] \\ & & & & & & & & \\ 3 & J0525+1115 & B0523+11 & 0.354 & 65.3 & 3.2$\pm$0.5 & PD & *D & [1] \\ & & & & & & & & \\ 4 & J0630$-$2834 & B0628$-$28 & 1.244 & 146 & 6.9$\pm$1.5 & PD & S$_d$ & [2] \\ & & & & & & & & \\ 5 & J0823+0159 & B0820+02 & 0.865 & 6.38 & 4.7$\pm$0.6 & PD & *S$_d$ & \ref{fig_J0823} \\ & & & & & & & & \\ 6 & J0944$-$1354 & B0942$-$13 & 0.570 & 9.63 & 6.4$\pm$0.3 & ND & *$_c$T & [1] \\ & & & & & & & & \\ 7 & J0959$-$4809 & B0957$-$47 & 0.670 & 10.8 & 5.6$\pm$1.3 & ND & D & [1] \\ & & & & & & & & \\ 8 & J1041$-$1942 & B1039$-$19 & 1.386 & 14.0 & 4.3$\pm$0.4 & PD & *$_c$T & [1] \\ & & & & & & & & \\ 9 & J1703$-$1846 & B1700$-$18 & 0.804 & 131 & 3.6$\pm$0.2 & ND & *S$_d$ & [2] \\ & & & & & & & & \\ 10 & J1720$-$0212 & B1718$-$02 & 0.478 & 30.0 & 5.4$\pm$0.1 & PD & *D & [2] \\ & & & & & & & & \\ 11 & J1741$-$0840 & B1738$-$08 & 2.043 & 10.5 & 4.6$\pm$0.6 & PD & $_c$Q? & [1] \\ & & & & & & & & \\ 12 & J1840$-$0840 & --- & 5.309 & 6.25 & 15.0$\pm$0.8 & PD & *D & [3] \\ & & & & & & & & \\ 13 & J2018+2839 & B2016+28 & 0.558 & 33.7 & 4.0$\pm$0.2 & ND & S$_d$ & \ref{fig_J2018} \\ & & & & & & & & \\ 14 & J2046+1540 & B2044+15 & 1.138 & 4.88 & 23.0$\pm$6.1 & ND & D & [1] \\ & & & & & & & & \\ 15 & J2317+2149 & B2315+21 & 1.445 & 13.7 & 5.2$\pm$0.4 & ND & $_c$T & [1] \\ \hline \end{tabular} \label{tabphsdif} \medskip \\$^*$-These classifications were not available previously and suggested here.\\ 1-\cite{bas16}; 2-\cite{wel07}; 3-\cite{gaj17}. \end{table*} This group of pulsars exhibits wide structures in their fluctuation spectra, which indicate the presence of multiple drift bands in the emission. Pulsars with the FWHM of the drifting features greater than 0.05 cycles/$P$ in the time-averaged fluctuation spectra, no clearly measurable phase variations, but a clear preference for a drift direction, are identified as diffuse phase-modulated drifters. A notable example of this phenomenon has been recently seen in the B mode of the pulsar J1822$-$2256 by \citet{bas18b}. The single pulse behaviour shows the presence of multiple short-lived subpulse tracks, with the fluctuation spectra showing a wide structure without any distinct peaks. In Table \ref{tabphsdif} we have assembled 15 pulsars which exhibit wide structures in their fluctuation spectra. The individual drift bands usually have short durations and hence show no significant peaks in their average spectra. As a result the average phase behaviour cannot be determined using the techniques employed in this work. However, as described in \citet{bas16} the harmonic spectrum gives an indication of the general nature of the drift direction. Negative drifting is seen in the 0-0.5 cycles/$P$ region of the HRFS, while for positive drifting the peak structure is shifted into the 0.5-1 cycles/$P$ region. In addition to $P$ and $\dot{E}$ the Table also lists the mean drifting periodicity, the drifting type and the classification of the profile, including `*' identifiers for newly suggested classifications.
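The drift-sense determination from the HRFS can be sketched as follows (a schematic numpy rendering of the stated convention; the unfolding of the pulse stack and the masking of the profile harmonics are illustrative simplifications):
\begin{verbatim}
import numpy as np

def hrfs_drift_sense(pulses):
    """Infer the drift sense from the Harmonic Resolved Fluctuation
    Spectrum: a feature in the 0-0.5 cycles/P part indicates negative
    drifting (ND), one in the 0.5-1 cycles/P part positive drifting (PD).
    """
    npulse, nbin = pulses.shape
    series = pulses.ravel()               # unfolded single-pulse sequence
    power = np.abs(np.fft.rfft(series))**2
    freq = np.fft.rfftfreq(series.size, d=1.0 / nbin)   # in cycles/P
    frac = freq % 1.0                     # fold onto 0-1 cycles/P
    band = (frac > 0.02) & (frac < 0.98)  # mask the profile harmonics
    fpk = frac[band][np.argmax(power[band])]
    return 'ND' if fpk < 0.5 else 'PD'
\end{verbatim}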
As indicated in the Table the fluctuation spectral studies have been reported earlier for 12 pulsars, 9 pulsars in \citet{bas16}, PSR J0630$-$2834 (B0628$-$28), J1703$-$1846 (B1700$-$18) and J1720$-$0212 (B1718$-$02) in \citet{wel07} and J1840$-$0840~in \citet{gaj17}. For two pulsars J0823+0159 (B0820+02) and J2018+2839 (B2016+28) we have carried out new measurements for the fluctuation spectra as shown in Appendix C. Pulsar J2018+2839 is particularly interesting since the wide peak corresponding to drifting fills up the entire window between 0-0.5 cycles/$P$ in the average LRFS. Short duration drift bands are also prominently seen in its single pulse sequence. It should be noted that the wide structures seen in these pulsars are different from the temporal fluctuations of the single drifting peaks reported in \citet{bas18a}. The closest counterpart is the two peaks seen in the fluctuation spectra of PSR J1555$-$3134, where the drifting is likely to alternate between two different states at rapid intervals. \subsection{Low-mixed phase-modulated drifting} \begin{table*} \caption{The low-mixed phase-modulated drifting} \centering \begin{tabular}{rcccccccc} \hline & PSRJ & PSRB & $P$ & $\dot{E}$ & $P_3$ & Profile \\ & & & (s) & (10$^{30}$erg~s$^{-1}$) & ($P$) & \\ \hline 1 & J0624$-$0424 & B0621$-$04 & 1.039 & 29.2 & 2.05$\pm$0.01 & M & \ref{fig_J0624} \\ & & & & & & & \\ 2 & J0837+0610 & B0834+06 & 1.274 & 130 & 2.17$\pm$0.03 & D & \ref{fig_J0837_1}, \ref{fig_J0837_2} \\ & & & & & & & \\ 3 & J0846$-$3533 & B0844$-$35 & 1.116 & 45.5 & 2.03$\pm$0.02 & *M & \ref{fig_J0846_1}, \ref{fig_J0846_2} \\ & & & & & & & \\ 4 & J1239+2453 & B1237+25 & 1.382 & 14.3 & 2.8$\pm$0.1 & M & \ref{fig_J1239_1}, \ref{fig_J1239_2} \\ & & & & & & & \\ 5 & J1328$-$4921 & B1325$-$49 & 1.479 & 7.45 & 3.4$\pm$0.2 & *M/$_c$Q & \ref{fig_J1328_1}, \ref{fig_J1328_2} \\ & & & & & & & \\ 6 & J1625$-$4048 & --- & 2.355 & 1.34 & 61$\pm$30 & *T/$_c$T & \ref{fig_J1625} \\ & & & & & & & \\ 7 & J1650$-$1654 & --- & 1.750 & 23.6 & 2.6$\pm$0.1 & *D/T & \ref{fig_J1650} \\ & & & & & & & \\ 8 & J1700$-$3312 & --- & 1.358 & 74.2 & 2.2$\pm$0.1 & *M & --- \\ & & & & & & & \\ 9 & J1703$-$3241 & B1700$-$32 & 1.212 & 14.6 & 4.7$\pm$0.5 & T & --- \\ & & & & & & & \\ 10 & J1733$-$2228 & B1730$-$22 & 0.872 & 2.55 & 23$\pm$13 & *T & \ref{fig_J1733} \\ & & & & & & & \\ 11 & J1740+1311 & B1737+13 & 0.803 & 110 & 9$\pm$1 & M & --- \\ & & & & & & & \\ 12 & J1801$-$2920 & B1758$-$29 & 1.082 & 103 & 2.48$\pm$0.08 & *T & --- \\ & & & & & & & \\ 13 & J1900$-$2600 & B1857$-$26 & 0.612 & 35.2 & 7.6$\pm$0.8 & M & \ref{fig_J1900_1}, \ref{fig_J1900_2} \\ & & & & & & & \\ 14 & J1912+2104 & B1910+20 & 2.233 & 36.1 & 2.70$\pm$0.04 & M/T? & --- \\ & & & & & & & \\ 15 & J2006$-$0807 & B2003$-$08 & 0.581 & 9.27 & 15.2$\pm$2.5 & M & --- \\ & & & & & & & \\ 16 & J2048$-$1616 & B2045$-$16 & 1.962 & 57.3 & 3.23$\pm$0.03 & T & --- \\ \hline \end{tabular} \label{tabphstat} \medskip \\$^*$-These classifications were not available previously and suggested here.\\ \end{table*} \noindent This group comprises pulsars which have been previously identified as exhibiting phase-stationary drifting, where the subpulses are not expected to move across the pulse window but show periodic fluctuations in intensity. These pulsars usually show the presence of a central core component in their profiles which does not exhibit any drifting feature. 
However, our detailed studies reveal that in many of these cases the phases are not stationary across the profile but show significant variations, though these variations are lower compared to the coherent phase-modulated pulsars. We have classified pulsars which show relatively shallow phase variations of less than 100\degr across different components of the profile as low-mixed phase-modulated drifting. The mixed nature reflects the fact that in certain components the phase variations have opposite slopes. However, the extent of these variations is significantly lower than in the switching phase-modulated drifters. The FWHM of the drifting feature in the fluctuation spectra varies for different pulsars in this group as well, similar to the coherent and diffuse classes described above, and the group could possibly be subdivided accordingly. However, the presence of the core component, along with the lower phase variations, identifies this group. In Table \ref{tabphstat} we list 16 pulsars including their $P$, $\dot{E}$, drifting periodicity and profile classification. Pulsars with periodic behaviour of the core component are believed to exhibit a different phenomenon and are not included in this list. In Appendix D we show the detailed phase variations across the profile for 9 pulsars in this list. As mentioned above, the phase variations are only seen in the conal components, with the variations much flatter than in the coherent and switching phase modulation cases. However, even in these cases the phase variations are not zero but show curved trajectories. This implies that even for central LOS traverses through the emission beams the conal components show subpulse motion across the pulse window, which is smaller than for the more peripheral traverses. Pulsar J0837+0610 (B0834+06) has a D profile and is the only one in this list without the presence of any clear core component. However, the presence of relatively shallow phase variations of less than 50\degr~across its two components prompted its drifting classification. In pulsar J1700$-$3312 we were not able to estimate the phase behaviour due to the lower sensitivity of the single pulses. However, the presence of a possible core component and the low drifting periodicity around 2$P$ justified its classification. More sensitive single pulse studies are required to estimate the phase variations in this pulsar. In the case of three pulsars J1703$-$3241 (B1700$-$32), J1740+1311 (B1737+13) and J2048$-$1616 (B2045$-$16) the drifting exhibited wide features, with FWHM in excess of 0.05 cycles/$P$, without any prominent single drifting periodicity, similar to the diffuse phase modulation case. Hence, no clear phase variation studies could be carried out for these pulsars, but detailed fluctuation spectra for these sources have been reported in \citet{bas16}. In each of these pulsars it is clear that the drifting is restricted to the conal regions only. We did not have access to single pulses from the pulsar J1912+2104 (B1910+20) and could not estimate the phase variations. The pulsar has been studied by \citet{wel07}, where a low periodicity drifting feature is seen for only the leading conal component. Finally, two pulsars J1801$-$2920 (B1758$-$29) and J2006$-$0807 (B2003$-$08) also show the presence of drifting features in the fluctuation spectra associated with only the conal components \citep{bas16}. However, their single pulse behaviour is also characterized by frequent nulls, which renders our analysis techniques inadequate for phase variation studies.
These would require a separate analysis scheme where the burst regions are separated out and the drifting analysis is carried out on individual segments. \section{Discussion} \subsection{The effect of Aliasing and dependence on Spin-down energy loss} \begin{figure*} \begin{tabular}{@{}lr@{}} {\mbox{\includegraphics[scale=1.0,angle=0.]{figure1a.eps}}} & {\mbox{\includegraphics[scale=1.0,angle=0.]{figure1b.eps}}} \\ \end{tabular} \caption{The Figure shows the variation of the drifting periodicity $P_3$ as a function of the spin-down energy loss ($\dot{E}$). The left panel shows all pulsars with subpulse drifting with the four different classes of drifting separately indicated in the Figure. The right panel only shows the coherent drifting class associated with peripheral line of sight cuts of the emission beam. The alias around 2$P$ is resolved by assuming $P_3>$2$P$ for negative drift directions and $P_3<$2$P$ for positive drifting.} \label{fig_p3edot} \end{figure*} \noindent As discussed in the introduction, the RS75 model, which postulates that the sparking discharges in the IAR lag behind co-rotation speed, has been used to unravel the aliasing associated with subpulse drifting. This gives a natural solution for estimating the alias, where negative drifting has periodicities in excess of 2$P$, and positive drifting has periodicities less than 2$P$. This was used to determine a dependence between the drifting periodicity and the spin-down energy loss ($\dot{E}$), which exhibits a negative correlation \citep{bas16}. However, there are some issues with the above assumptions. Firstly, a number of pulsars like the switching phase-modulated and low-mixed phase-modulated drifters do not show any specific drift direction. Secondly, the magnetic field structure in the IAR is highly non-dipolar in nature, while the magnetic field in the emission region is dipolar \citep{mit17b}. This ensures that the spark tracks in the non-dipolar IAR, which lag behind co-rotation speed, will result in more convoluted subpulse motion in the emission region due to the transition of the magnetic fields from non-dipolar to dipolar. This still does not justify a carousel-like motion of subpulses as proposed in certain models \citep{gil00,des01}, as the subpulse motion in this model requires the sparks to lag behind co-rotation speed in certain instances and lead co-rotation speed in others. A more detailed model of subpulse drifting needs to be explored to understand the observational results. Nevertheless, we once again take a detailed look at the dependence of $P_3$ on $\dot{E}$ for the extended sample studied here. In Figure \ref{fig_p3edot}, left panel, we have plotted $P_3$ as a function of $\dot{E}$ for all 61 pulsars which are expected to exhibit subpulse drifting. The four different drifting classes are represented separately in the Figure. We have not resolved the alias using the drift direction and have used periodicities $>$2$P$ for all measurements. On the right panel we have isolated only the coherent drifting pulsars, which are mainly associated with LOS cuts of the emission beam towards the edge of the profile. These are less likely to be affected by non-dipolar effects of the inner field lines. Similar to \citet{bas16} we have resolved the alias around 2$P$ by assuming that negative drifting has $P_3>$2$P$ and positive drifting has $P_3<$2$P$. As seen in the left panel of the Figure, despite a large scatter there exists a dependence between the two quantities, with $P_3$ increasing with decreasing $\dot{E}$.
The dependence flattens out between 10$^{31}$ and 10$^{32}$ erg~s$^{-1}$ where $P_3$s are likely to be aliased. This is further highlighted by the fact that most $P_3$ values close to 2$P$ boundary appear in this range. The correlation becomes even more prominent on the right panel where we are dealing with cleaner examples of systematic subpulse drifting. \subsection{The Pulsars without subpulse drifting} \begin{table*} \caption{List of Pulsars with prominent conal components but no detectable drifting} \centering \begin{tabular}{cccc|cccc|cccc} \hline & Pulsar & $\dot{E}$ & Profile & & Pulsar & $\dot{E}$ & Profile & & Pulsar & $\dot{E}$ & Profile \\ & & (10$^{30}$erg~s$^{-1}$) & & & & (10$^{30}$erg~s$^{-1}$) & & & & (10$^{30}$erg~s$^{-1}$) & \\ \hline 1 & B0052+51 & 39.8 & *D & 26 & B1530+27 & 21.6 & S$_d$/D & 51 & B1845$-$01 & 723 & $_c$T \\ 2 & J0134$-$2937 & 1.20$\times10^3$ & *$_c$T & 27 & B1541+09 & 40.7 & T & 52 & J1850+0026 & 11.2 & *M \\ 3 & B0138+59 & 8.44 & M & 28 & B1558$-$50 & 4.26$\times10^3$ & T & 53 & J1857$-$1027 & 8.31 & *T/$_c$T \\ 4 & B0144+59 & 1.34$\times10^3$ & *T & 29 & B1601$-$52 & 35.5 & D & 54 & B1905+39 & 11.3 & M \\ 5 & B0329+54 & 222 & T/M & 30 & B1604$-$00 & 161 & T & 55 & B1907+03 & 13.9 & T/M \\ 6 & B0402+61 & 1.05$\times10^3$ & T/M & 31 & B1612+07 & 53.0 & S$_d$ & 56 & B1914+09 & 5.04$\times10^3$ & T$_{1/2}$ \\ 7 & B0447$-$12 & 48.2 & *M & 32 & B1633+24 & 39.9 & $_c$T & 57 & B1917+00 & 147 & T \\ 8 & B0450$-$18 & 1.37$\times10^3$ & T & 33 & B1648$-$17 & 130 & *D & 58 & B1923+04 & 78.3 & S$_d$ \\ 9 & B0450+55 & 2.37$\times10^3$ & T & 34 & B1649$-$23 & 25.2 & *T$_{1/2}$ & 59 & B1929+10 & 3.93$\times10^3$ & T$_{1/2}$ \\ 10 & B0458+46 & 846 & T & 35 & J1652+2651 & 33.6 & *D & 60 & B2021+51 & 816 & D/S$_d$ \\ 11 & J0520$-$2553 & 84.2 & *D & 36 & B1717$-$16 & 59.6 & *D & 61 & B2044+15 & 4.88 & D \\ 12 & B0525+21 & 30.1 & D & 37 & B1718$-$32 & 235 & *T$_{1/2}$ & 62 & B2110+27 & 59.5 & S$_d$ \\ 13 & B0559$-$05 & 828 & *T & 38 & B1727$-$47 & 1.13$\times10^4$ & T & 63 & B2148+63 & 121 & S$_d$ \\ 14 & B0727$-$18 & 5.64$\times10^3$ & *T & 39 & B1742$-$30 & 8.49$\times10^3$ & T & 64 & B2154+40 & 38.2 & $_c$T/D \\ 15 & B0736$-$40 & 1.21$\times10^3$ & T & 40 & B1745$-$12 & 782 & T/M & 65 & B2227+61 & 1.02$\times10^3$ & *$_c$Q/M \\ 16 & B0740$-$28 & 1.43$\times10^5$ & T/M & 41 & B1747$-$46 & 125 & T/M & 66 & B2306+55 & 73.5 & D \\ 17 & B0751+32 & 14.2 & D & 42 & B1753+52 & 4.52 & *D & 67 & B2323+63 & 37.6 & D/$_c$Q \\ 18 & B0905$-$51 & 4.43$\times10^3$ & *$_c$Q/M & 43 & B1754$-$24 & 4.00$\times10^4$ & *T & 68 & B2327$-$20 & 41.2 & T \\ 19 & B0919+06 & 6.79$\times10^3$ & T & 44 & B1758$-$29 & 103 & *T & 69 & J2346$-$0609 & 32.6 & *D \\ 20 & B0950+08 & 560 & S$_d$ & 45 & B1804$-$08 & 259 & T & & & & \\ 21 & B1133+16 & 87.9 & D & 46 & J1808$-$0813 & 72.8 & *S$_d$ & & & & \\ 22 & B1254$-$10 & 60.9 & *D & 47 & B1821+05 & 21.0 & T & & & & \\ 23 & B1322+83 & 74.3 & S$_d$ & 48 & B1826$-$17 & 7.57$\times10^3$ & T & & & & \\ 24 & B1508+55 & 488 & T & 49 & B1831$-$04 & 116 & M & & & & \\ 25 & B1524$-$39 & 53.3 & D & 50 & B1845$-$19 & 11.5 & *T & & & & \\ \hline \end{tabular} \label{tabnodrift} \medskip \\$^*$-These classifications were not available previously and suggested here.\\ \end{table*} As noted earlier drifting is primarily a conal phenomenon and hence core dominated pulsars belonging to the S$_t$ and T profile classes, with weak or absent conal emission, do not show any drifting. 
However, the low frequency features associated with periodic amplitude modulation or nulling are seen in a number of these pulsars. Additionally, there are also many pulsars with prominent conal emission where detailed studies have not revealed any measurable drifting features. In Table \ref{tabnodrift} we have listed 69 pulsars with distinct conal components which do not show any drifting, along with their $\dot{E}$ and profile classifications. There are 23 pulsars in this list with $\dot{E} >$ 5$\times$10$^{32}$ erg~s$^{-1}$ and the remaining 46 are below this limit. There are also around 100 pulsars with profile classifications where the single pulses are too weak to carry out a detailed analysis. One limitation of the drifting studies reported in this work is that the majority of the measurements were carried out at a single frequency of 325 MHz. This makes it difficult to estimate the frequency evolution of the drifting phenomenon, particularly at higher frequencies. It has been observed that the conal emission in core dominated pulsars becomes more prominent at higher frequencies \citep{ran83,mit17}. It is possible that some core dominated pulsars show subpulse drifting in their conal components at higher frequencies. However, a majority of such pulsars have $\dot{E}$ greater than 10$^{33}$ erg~s$^{-1}$, making it unlikely for drifting to be seen. Detailed single pulse observations at frequencies in excess of 2 GHz would be required to address this issue. There are 46 pulsars where sensitive single pulse studies show no drifting. This implies that drifting is seen in 57\% (61 out of 107) of the pulsars with conal emission and $\dot{E} <$ 5$\times$10$^{32}$ erg~s$^{-1}$. As seen in the Table, non-drifting pulsars are not restricted to any specific profile class. It is not clear at present why a large fraction of conal pulsars show no drifting. One possibility is that the absence of drifting is an extreme example of the diffuse drifting category, where the pulsar switches rapidly between multiple subpulse tracks. In the absence of any apparent physical characteristics distinguishing between the two populations, more detailed modelling is essential to better understand the conditions affecting the drifting phenomenon. \subsection{The Conal emission in Pulsars} \begin{figure*} \begin{tabular}{@{}lr@{}} {\mbox{\includegraphics[scale=0.68,angle=0.]{figure2a.eps}}} & {\mbox{\includegraphics[scale=0.68,angle=0.]{figure2b.eps}}} \\ \end{tabular} \caption{The Figure shows the distribution of the different profile classes as a function of the spin-down energy loss ($\dot{E}$). The left panel shows the distribution for the three profile classes, conal single (S$_d$), conal double (D) and core single (S$_t$). The right panel shows the distribution for the Multiple (M), Triple (T) and conal Triple ($_c$T) profile classes. The $_c$T class in this distribution is a combination of the conal Triple and conal Quadruple ($_c$Q) classes.} \label{fig_prof_edot} \end{figure*} \noindent There are no cases of drifting with $\dot{E} >$ 5$\times$10$^{32}$ erg~s$^{-1}$, which has also been noted previously in \citet{bas16}. It has also been established that drifting is primarily a conal phenomenon. This raises the question of whether the different profile classes have similar dependencies on $\dot{E}$. As mentioned earlier, profile classifications were introduced in the works of \citet{ran90,ran93}. Detailed profile classifications for the MSPES pulsars were carried out by \citet{skr17}.
There are around 300 pulsars with classifications, and their distribution as a function of $\dot{E}$ for the 6 different classes S$_d$, S$_t$, D, $_c$T, T and M is shown in Figure \ref{fig_prof_edot}. The $_c$T distribution also includes the $_c$Q class since they both represent conal profiles with multiple components. The Figure shows a clear demarcation between core dominated profiles and conal pulsars. The $\dot{E}$ of the majority of conal pulsars belonging to the S$_d$ and D classes lies below 10$^{33}$ erg~s$^{-1}$, while the core single profiles mostly have higher $\dot{E}$ values. The differences become less clear in more complicated profile types (T, $_c$T and M), though their numbers become lower in the higher $\dot{E}$ range. This gives a possible indication about the absence of drifting in the higher $\dot{E}$ range. These results highlight that the distinction between core and conal dominated profiles is not just a geometrical effect, related to the line of sight traverse through the emission beam, but also a likely outcome of other physical processes. In previous studies there have been indications of physical differences between profile classes. For example, \citet{ran93} used $B_{12}/P^2$ as an indicator for core and conal species, with the cores having larger values. Similarly, \citet{gil00} introduced the complexity parameter to distinguish core and conal profile classes. It has been recently shown by \citet{skr18} that the underlying widths of core and conal components in profiles are similar and have a $P^{-0.5}$ dependence. In order to understand the physical processes leading to the different profile classes, more detailed modelling is required. The dependence seen between $P_3$ and $\dot{E}$ reported in this work should serve as an input into these models. One example is the partially screened gap (PSG) model, where the IAR is screened by thermally emitted ions from the stellar surface \citep{gil03,sza15}. The drifting periodicity in this model is inversely proportional to the screening factor of the gap, and hence an inverse dependence of $P_3$ on $\dot{E}$ can be established \citep{bas16}. However, this model needs to be extended to explain the variations seen in the physical properties of core and conal pulsars. \subsection{The association between Drifting and Profile Classifications} \begin{figure*} {\mbox{\includegraphics[scale=0.42,angle=0.]{figure3.eps}}} \caption{The Figure shows a schematic of the association of the different drifting classes with the line-of-sight (LOS) traverse of the emission beam. Coherent drifting with regular phase variations is associated with the outermost LOS1. Switching phase-modulated drifting is usually associated with the more interior LOS2. The central core region does not show any systematic subpulse drifting. Low-mixed phase-modulated drifting is associated with conal regions of the central LOS3.} \label{fig_drift_schem} \end{figure*} \noindent A distinct association between the nature of subpulse drifting and profile class was noted by \citet{ran86}. It was seen that the most ordered drifting was associated with the S$_d$ class and barely resolved D profiles, where the LOS was supposed to traverse the emission beam tangentially. In contrast, well resolved D profiles as well as the conal components of the T and M profiles were not expected to drift but to show pulse-to-pulse phase-stationary modulation. This behaviour was interpreted as the LOS traversing the emission beam more centrally, with the conal radio emission circulating around the magnetic axis.
The presence of detailed phase variation measurements associated with drifting makes it appropriate to revisit this suggested association. As shown in Table \ref{tabphscoh} the majority of pulsars which exhibit large and systematic phase variations across the profile are associated with S$_d$ and barely resolved D profiles. The phases monotonically increase or decrease across the profile by several hundred degrees. However, as profile shapes become more complicated the drifting behaviour also shows large diversity. Well resolved conal D profiles do not necessarily exhibit phase-stationary behaviour. For example, in PSR J0837+0610 (B0834+06) the phase variations, particularly in the trailing component, are relatively flat with less than 20\degr~variation. The pulsar has been classified in the low-mixed phase-modulated drifting category. In contrast, a number of pulsars with D profiles show large phase variations across the components and have been classified in the coherent phase-modulated drifting class. The most prominent D class pulsars with clearly separated components are J0151$-$0635 (B0148$-$06), where the leading component exhibits phase variations of 100-150\degr~and the trailing component of 50-100\degr; and J1901$-$0906, where the drifting is prominent primarily in the trailing component, with more than 200\degr~of phase variation across the component. In three pulsars J1921+1948 (B1918+19), J1946+1805 (B1944+17) and J2313+4253 (B2310+42), belonging to the $_c$T profile class, the phases show variations of several hundred degrees across the profile, and these pulsars are classified as coherent phase-modulated drifters. Pulsar J0323+3944 (B0320+39) also shows similar large phase variations, but a 180\degr~phase jump is seen towards the center of the profile. Due to its complex phase behaviour we have classified the profile as $_c$T. For the six $_c$Q pulsars showing subpulse drifting, once again large phase variations exceeding several hundred degrees across the profiles are seen. However, in each of these pulsars the phase variations show sudden jumps between components; these include the three bi-drifting pulsars, where the slope of the phase variations changes sign due to drift reversals. These pulsars, along with J0323+3944 (B0320+39), have been identified as a separate class of switching phase-modulated drifting. Finally, the drifting in the conal components of T and M class profiles, in addition to PSR J0837+0610 (B0834+06), is identified as low-mixed phase-modulated drifting, which is different from phase-stationary behaviour. The phase variations are not strictly zero but show variations of 50\degr~across certain components. The most prominent variations are seen in the inner cone of PSR J1239+2453 (B1237+25), as well as in the strong leading component of J1733$-$2228 (B1730$-$22). In summary, a general evolution of the phase variations corresponding to drifting is indeed seen with the pulsar profile class. The more conal profiles, S$_d$, D and $_c$T, are associated with tangential LOS cuts of the emission beam and show a systematic change in the phase variations across the profile components. In the case of more inward LOS traverses, corresponding to the $_c$Q profile type and certain $_c$T pulsars, the phase jumps and phase reversals become more prominent.
Finally, for central LOS traverses corresponding to the M and T class profiles, the drifting is absent in the central core component, while the conal components show relatively smaller phase variations, which nevertheless retain some of the complexity of the switching class. A schematic explaining the association of the different drifting categories with the LOS traverse of the emission beam is shown in Figure \ref{fig_drift_schem}. It should be noted that the schematic is only representative of the generalized association between drifting behaviour and profile class, with clear deviations seen in individual pulsars. Estimation of the detailed emission geometry in each case would likely provide a better understanding of this association. There are a few possibilities for the deviation of the phase variations from linearity. A flattening of the phases towards the peripheral longitudes is seen in a number of S$_d$ pulsars. This can develop due to the curvature of the LOS in more tangential traverses of the emission beam. In certain pulsars it has been shown that phase jumps are correlated with variations in the orthogonal polarization modes \citep{ran03,ran05}, particularly for the outer conal components. This phenomenon needs to be explored in more detail by carrying out sensitive polarization modal studies in drifting pulsars. Finally, the presence of non-dipolar surface fields in the IAR, as proposed by RS75, is another possible source of deviations from linear phase variations. The sparks, responsible for the outflowing plasma, undergo ${\bf E \times B}$ drift in the IAR in the presence of highly curved non-dipolar magnetic fields. On the other hand, the subpulses corresponding to the observed emission originate at regions associated with dipolar field lines \citep{mit17b}. The transition from non-dipolar fields in the IAR to dipolar fields in the emission region is likely to lead to non-linearity in the drifting phases. However, detailed modelling is required to understand the spark motion in the IAR. \section{Summary} \noindent We have carried out a detailed analysis of subpulse drifting in pulsars and assembled the most complete sample exhibiting this phenomenon. We have made careful distinctions between drifting and other phenomena like periodic amplitude modulation and periodic nulling. We have used fluctuation spectral analysis to characterise the drifting features, particularly the nature of the frequency peaks and the phase variations across the profile associated with the peaks. Based on these measurements the drifting population was divided into four main categories: coherent phase-modulated drifting with narrow frequency features and regular phase variations across the profile; switching phase-modulated drifting with narrow frequency features but exhibiting sudden jumps and reversals in the phase variations; diffuse phase-modulated drifting with relatively wide frequency features and an indication of a drift direction; and low-mixed phase-modulated drifting associated with pulsars with central core components, which do not show drifting, and corresponding conal components, showing relatively lower but complex phase variations. The classification scheme introduced in this work differs from the previous schemes of \citet{ran86} and \citet{wel06,wel07}. In contrast to \citet{ran86} the classification is based not just on the profile class but also on the nature of the subpulse drifting seen in each profile class.
However, our classification uses the suggestion by \citet{ran86} that drifting is restricted to conal components, which was not utilized in the works of \citet{wel06,wel07}. Drifting periodicities are anti-correlated with the spin-down energy loss ($\dot{E}$). It is also observed that drifting is restricted to a narrow range of $\dot{E}$ $<$ 5$\times$10$^{32}$ erg~s$^{-1}$. The drifting classification shows an association with the profile types and the line-of-sight (LOS) geometry. The coherent phase-modulated drifting is associated with S$_d$ and barely resolved D profiles with tangential LOS traverses of the emission beam. The switching phase-modulated drifting is seen in more interior LOS traverses of $_c$T and $_c$Q pulsars, while the low-mixed phase-modulated drifting usually appears in T and M profiles with central LOS traverses. \section*{Acknowledgments} \noindent We thank the referee Prof. Joanna Rankin for her comments which helped to improve the paper. Dipanjan Mitra acknowledges funding from the grant ``Indo-French Centre for the Promotion of Advanced Research - CEFIPRA". We thank the staff of the GMRT who have made these observations possible. The GMRT is run by the National Centre for Radio Astrophysics of the Tata Institute of Fundamental Research.
\section{Introduction} The formation and early evolution of massive star clusters, those containing large numbers of early type stars, is currently poorly understood but is fundamental for our understanding of star formation in general and massive star formation in particular \citep[e.g.][]{zinnecker,adamo}. The lack of knowledge is largely due to young massive clusters (YMCs) being statistically rare and thus typically far away, making it difficult to spatially resolve these clusters into the individual stellar components. Although there are several very massive clusters in the Milky Way and nearby galaxies, they are typically mature in the sense that they have already expelled most of their natal molecular gas (e.g. Westerlund 1 \citep{andersen09}, Arches \citep{hosek}, and NGC 3603 \citep{stolte_ngc3603} in the Milky Way, and R136 \citep{andersen09} within the 30 Doradus star forming complex in the Large Magellanic Cloud (LMC) and NGC604 in M33 \citep{hunter_604}). Thus, star formation has mostly ceased in these systems. Furthermore, the clusters are much older than the crossing time and will already be dynamically evolved, with any sub-structure present at formation erased. As part of the HERITAGE Herschel program \citep{meixner} it was found that the N79 star-forming complex in the LMC, spanning over 500 pc, has twice the star formation rate of 30 Doradus \citep{ochsendorf}. Within the region is the very luminous far-infrared {\it Herschel} point source HSO BMHERICC J72.971176-69.391112, for short \HH, with an integrated luminosity of $2.2\cdot10^6$ L$_\odot$ \citep{seale,ochsendorf}, located within the HII region IC2111. Although the bolometric luminosity is less than that of R136 \citep{malumuth}, it is one of the most luminous star forming clusters known in the LMC, and the high far-infrared luminosity is suggestive of a super star cluster in the making \citep{ochsendorf}. Its existence within the LMC makes the target particularly interesting since it allows for studying massive star cluster formation and evolution in a sub-solar metallicity environment. Furthermore, star clusters in the LMC are less affected by field star contamination than clusters in the Galactic plane, which allows for a more straightforward analysis of the stellar population in the region. Little is currently known about the stellar content of the region around \HH . \citet{indebetouw} had earlier used Australia Telescope Compact Array (ATCA) measured flux densities at 3 and 6 cm to estimate an ionizing flux from the region now known as \HH\ consistent with an O5 star, assuming optically thin bremsstrahlung. However, the total stellar content in the region is not known due to the limited depth of the previous near-infrared observations shown in \citet{nayak19}. ALMA line observations showed a complex of filaments feeding into the central region \citep{nayak19}, which can provide additional mass to the region in the future. In terms of directly detecting the stellar content, no deep high spatial resolution imaging of the region has so far been performed. The extinction is expected to be high in the region and near-infrared observations are necessary. Here we present near-infrared Gemini Flamingos 2 and Multi-Conjugate Adaptive Optics (MCAO) GeMS/GSAOI observations of \HH\ to probe the intermediate- and high-mass stellar content of the region.
The larger field of view of Flamingos 2 provides a wider view of the region and allows calibration of the deeper GeMS/GSAOI data that, through the higher spatial resolution possible with MCAO, can resolve the individual stars in \HH . Previous observations of massive star forming regions in the LMC have shown that adaptive optics, and in particular multi-conjugate adaptive optics, allows for the resolution of individual stars \citep{brandl96,bernard}. The paper is structured as follows. In Section 2 the observations are presented together with the data reduction, source extraction, and photometry. Section 3 presents the basic observational findings. Color-magnitude diagrams (CMDs) are presented and different sub-regions within the observed field are described. In Section 4 the stellar content is analyzed using pre-main sequence evolutionary models. We derive the total mass of the young stellar population and discuss the near-infrared observations in the context of the previous ALMA and Herschel observations. Finally, we summarize our findings in Section 5. \section{Data and data reduction} N79 has been imaged using the natural seeing infrared imager Flamingos 2 and the MCAO-assisted imager GSAOI, both mounted on Gemini South. Below we describe the data reduction and calibration of the data. \subsection{Flamingos 2 imaging} As an initial survey of the region and in preparation for the GeMS/GSAOI observations we obtained Flamingos 2 (F2) \filter{JHKs} imaging of N79. The field of view is 6.1\arcmin\ in diameter with a 0.18\arcsec\ pixel scale. The observations were taken in a standard manner with dedicated sky frames due to the target region being crowded. Dithers were employed to account for bad pixels. For the \filter{J} band 25 frames of 60 seconds each, for the \filter{H} band 60 frames of 10 seconds each, and for the \filter{Ks} band 20 frames of 10 seconds each were obtained. The data were reduced in a standard manner using the Gemini package within {\tt pyraf}. Dark frames were subtracted before flat fielding. For each science frame the adjacent sky frames were median combined and subtracted from the science frame. The sky-subtracted science frames were then registered and combined, rejecting outliers to remove cosmic rays and any remaining bad pixels. The circular field of view is vignetted by the peripheral wavefront sensor, and for this paper we focus on the central 3\arcmin$\times$3\arcmin\ field of view. The trimmed color image representation of the F2 \filter{JHKs} data is shown in Fig.~\ref{col_F2}. \begin{figure*} \begin{center} \includegraphics[width=12cm]{col_F2.png} \caption{Flamingos 2 \filter{JHKs} color composite of N79 and its surroundings. Blue represents the \filter{J} band, green \filter{H}, and red \filter{$K_s$}. A logarithmic scale for all three channels has been used. The field of view shown is 3\arcmin$\times$3\arcmin , corresponding to 43pc$\times$43pc. The whole nebulous region shown has previously been identified as IC2111. The location of \HH\ is marked with the green circle, the size (5\arcsec) of the Herschel beam. } \label{col_F2} \end{center} \end{figure*} \subsection{GSAOI imaging} \filter{H} and \filter{Ks} band data of the central region of N79 were obtained on August 6 and October 23, 2018 with Gemini's GSAOI imager fed by the GeMS Multi-Conjugate Adaptive Optics system under program GS-2018B-Q-202, PI M. Andersen. The field of view of GSAOI is 85\arcsec $\times$ 85\arcsec\ with a pixel scale of 20mas.
Using five laser spots on the sky and three natural guide stars, GeMS provides improved spatial resolution across the field of view. The spatial resolution for the observations obtained here was 130mas in the \filter{Ks} band and 140mas in the \filter{H} band. For comparison, the natural seeing in the \filter{H} and \filter{K} bands during the observations was 0.7\arcsec . The corrected image quality was rather uniform within a box 10\arcsec\ larger than the 60\arcsec\ laser guide star configuration, but deteriorates outside. For both filters individual exposures on source and on sky were obtained, each with a length of 120 seconds. For the \filter{Ks} band a total of 10 frames were obtained for a total integration time of 20 minutes per pixel on-source. In the \filter{H} band 15 frames were obtained for a total exposure time of 30 minutes per pixel on-source. Only a fraction of the requested data was obtained due to variable weather and an earthquake. We thus did not obtain short exposures with GSAOI, and the brightest few stars are mildly saturated in one of the filters; within the central cluster this affects two stars. For those two stars the F2 photometry will be used as discussed below. The images were reduced in a standard manner using the Gemini data reduction package within the \texttt{pyraf} environment. Dark current was subtracted before the individual frames were flat fielded. The sky frames were median combined and the resulting sky frame was subtracted from the science frames. The frames were distortion corrected to a single reference frame using stars in common and using the dedicated software package {\tt disco-stu}. All distortion corrected frames were combined into the final \filter{H} and \filter{Ks} frames. After averaging the distortion corrected frames for each filter we block averaged the frames by a factor of two, resulting in a final pixel scale in the data of 40mas. Fig.~\ref{col_GSAOI} shows the region observed with GSAOI where several previously identified regions are marked. \begin{figure*}[ht] \begin{center} \includegraphics[width=15cm]{full_GSAOI_color.png} \caption{GeMS/GSAOI \filter{HKs} color composite of N79 and its surroundings. The field of view is 90\arcsec$\times$90\arcsec\ (22.5pc$\times$22.5pc). The green channel is created as an average of the \filter{H} and \filter{Ks} images. The stretch is a square root stretch. Shown are the locations of several known regions, including the \citet{bhatia} double clusters (shown as the two large green circles), the ATCA source (small cyan circle), and \HH\ in green. To the east, encircled in magenta, is located a YSO \citep{gruendl} likely unrelated to \HH , as discussed in the text. } \label{col_GSAOI} \end{center} \end{figure*} \subsection{Source detection} The focus of this study is on the high spatial resolution GSAOI data. However, we will further use the Flamingos 2 data to calibrate the GSAOI data and to provide photometry for a few sources that are saturated in the GSAOI images. Sources in the GSAOI images were detected in the \filter{Ks} band image using {\tt daofind} in the {\tt pyraf} environment. The \filter{Ks} band list was then used as the input list for the \filter{H} band image. Sources were identified down to a threshold of three times the standard deviation of the background. A source was considered detected in both bands if the position after photometry was the same in the two bands to within one block-averaged pixel of 40 mas.
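For illustration, the detection and band-matching logic can be sketched as follows. The actual reduction used {\tt daofind} within {\tt pyraf}; the sketch below uses the {\tt photutils} package as a modern stand-in and is illustrative only: file names and parameters are placeholders, and it detects in both bands independently rather than re-centering the \filter{Ks} list as done for the photometry.
\begin{verbatim}
import numpy as np
from astropy.io import fits
from astropy.stats import sigma_clipped_stats
from photutils.detection import DAOStarFinder

# Placeholder file names for the block-averaged (40 mas/pix) mosaics.
ks = fits.getdata("gsaoi_ks_blocked.fits")
h = fits.getdata("gsaoi_h_blocked.fits")

def detect(img, fwhm_pix=3.5, nsigma=3.0):
    # Detect sources down to nsigma times the background scatter.
    _, median, std = sigma_clipped_stats(img, sigma=3.0)
    finder = DAOStarFinder(fwhm=fwhm_pix, threshold=nsigma * std)
    t = finder(img - median)
    return np.c_[t["xcentroid"], t["ycentroid"]]

ks_xy = detect(ks)
h_xy = detect(h)

# Keep a Ks source if an H position agrees to within one
# block-averaged pixel (40 mas).
dist = np.linalg.norm(ks_xy[:, None, :] - h_xy[None, :, :], axis=2)
matched = dist.min(axis=1) <= 1.0
print(matched.sum(), "sources detected in both bands")
\end{verbatim}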
\subsection{Photometry and photometric calibration} Photometry was performed in a standard manner. An aperture with a radius of 3.5 pixels (140mas), equal to the full width at half maximum, was used. The sky was estimated independently for each source in an annulus covering 0\farcs8 to 1\farcs2 from the source center. In both the \filter{H} and \filter{Ks} band images we used the \filter{Ks} band source list and allowed for a recentering of each source. The photometric error as a function of magnitude is shown in Fig.~\ref{mag_merr} for both filters. \begin{figure}[ht] \centering \includegraphics[width=8.5cm]{mag_lim-crop.pdf} \caption{The derived magnitudes and photometric errors for the stars in the GSAOI \filter{H} band (top) and \filter{Ks} band (bottom). A magnitude error of 0.1 mag is highlighted by the horizontal line in both diagrams. } \label{mag_merr} \end{figure} For the results and analysis we limit the sample to objects with a magnitude error of less than 0.1 mag in both filters. A total of 965 sources satisfy these criteria. The GSAOI field of view is relatively small and there are few sources in common with 2MASS to calibrate with. Instead we use the F2 data, which cover a larger field and, being shallower, have better overlap in magnitude with 2MASS. The transformation was based on 26 stars in common between 2MASS and the short exposure F2 data. All stars used had a photometric error in each of the 2MASS bands of less than 0.1 mag and were visually inspected to ensure they were not unresolved binaries in the 2MASS data or affected by nebulosity. Subsequently a list of well exposed stars in common between the F2 and GSAOI datasets was used to calibrate the GSAOI data. Several stars are saturated in one of the GSAOI bands. For those stars we use the Flamingos 2 photometry. To limit the flux contamination from other nearby sources and nebulosity we have derived the photometry for these stars using an aperture of 1.5 pixel radius (corresponding to 0\farcs 27 for F2). The source list is presented in Table~\ref{phot_table}, where the photometry in the 2MASS system is provided. \begin{table*} \caption{Sources identified within the GSAOI field with photometric errors below 0.1 mag. Five sources are saturated in one or both GSAOI bands and their photometry has been replaced with Flamingos 2 photometry. The full list is available in the online version.} \label{phot_table} \begin{tabular}{cccccccc} Source ID & RA (deg) & DEC (deg) & \filter{H} (mag) & error \filter{H} & \filter{Ks} (mag) & error \filter{Ks} & Instrument \\ \hline 0 & 72.983836 & -69.389467 & 20.2 & 0.04 & 20.0 & 0.09 & GSAOI\\ 1 & 72.963730 & -69.389388 & 20.1 & 0.02 & 19.9 & 0.06 & GSAOI\\ 2 & 72.969217 & -69.389391 & 20.7 & 0.06 & 19.9 & 0.08 & GSAOI\\ 3 & 72.944780 & -69.389353 & 20.4 & 0.03 & 20.3 & 0.09 & GSAOI\\ 4 & 72.986876 & -69.389369 & 18.2 & 0.00 & 18.0 & 0.01 & GSAOI\\ 5 & 72.981919 & -69.379712 & 20.3 & 0.03 & 20.1 & 0.08 & GSAOI\\ 6 & 72.943188 & -69.403334 & 16.0 & 0.00 & 15.9 & 0.00 & GSAOI\\ 7 & 72.987428 & -69.389264 & 19.8 & 0.02 & 19.6 & 0.05 & GSAOI\\ 8 & 72.964797 & -69.389233 & 19.2 & 0.01 & 18.3 & 0.01 & GSAOI\\ 9 & 72.964706 & -69.389204 & 19.3 & 0.01 & 18.4 & 0.01 & GSAOI\\ \hline \end{tabular} \end{table*} \subsection{Archival Herschel and ALMA data} Previous work has examined the dust and gas content of \HH\ using {\it Herschel} and ALMA data \citep{ochsendorf,nayak19}. To place the GeMS/GSAOI data in context we utilize some of these data as presented in the literature; here we briefly describe the data used.
\HH\ was identified as a far-infrared point source in \citet{meixner} using PACS and SPIRE imaging from {\it Herschel}. These data were combined with {\it Spitzer} and {\it WISE} data in \citet{ochsendorf} to estimate the total luminosity of \HH . Note that the spatial resolution for the far-infrared observations is 5.5\arcsec\ at 60 $\mu$m and worse at longer wavelengths. \citet{nayak19} presented ALMA observations of multiple molecular species and the H30$\alpha$ recombination line. Here we use their $^{12}CO(2-1)$, $^{13}CO(2-1)$, $SO(6-5)$, and H30$\alpha$ data. Observations were obtained in both compact and extended mode, resulting in a spatial resolution ranging from 0.239\arcsec$\times$0.160\arcsec\ for $SO(6-5)$ to 0.331\arcsec$\times$0.224\arcsec\ for $^{13}CO(2-1)$. Due to the compact configuration the maximum recoverable scale is 17.8\arcsec . \section{Results} \subsection{The environment of \HH} Fig.~\ref{col_F2} shows the Flamingos 2 color image where the IC2111 HII region is evident. In Fig.~\ref{col_GSAOI} some of the previously identified objects and regions near \HH\ are marked. These include the clusters BRHT1a and BRHT1b, suggested to form a double cluster \citep{bhatia}, and \HH\ itself, which coincides with the ATCA-detected source per the coordinates provided in \citet{indebetouw}. The ATCA source spatially aligns with the {\it Herschel} source \citep{meixner} as well as the $\mathrm{H30\alpha}$ source identified in \citet{nayak19} within a few arcseconds, i.e. the expected accuracy of the lower-resolution {\it Herschel} studies. These sources coincide with a bright nebulous region in the near-infrared with a bright point source associated with it. Some 25\arcsec\ to the north-east of \HH\ there is a cluster of reddened objects. The lack of strong nebulosity could suggest it is a slightly older population than the one associated with the \HH\ region. Fig.~\ref{col_F2} shows that the diffuse nebulosity extends further to the south of the two clusters, and based on the color-magnitude diagrams presented below the whole region is extincted by the surrounding dust. This is not surprising since \HH\ is part of the N79 star forming complex and as such we do expect several star formation sites within the region. Here we focus on the stellar content in the vicinity of \HH\ and, based on the radial surface mass density profile in Fig.~\ref{radial}, restrict the analysis to the central 8\arcsec\ radius region. Further to the east of \HH\ is located an extended object marked in Fig.~\ref{col_GSAOI}. It was identified in \citet{gruendl} as a YSO based on the near-infrared and mid-infrared colors from 2MASS and Spitzer imaging. We note that there is both a point-like object and an extended and elongated nebulous region below it, with a length of roughly 2\arcsec , similar to the spatial resolution of 2MASS, so the source appeared as a point source in the study of \citet{gruendl}. Due to the large projected distance of the YSO from \HH\ (10 pc) this source is not likely to be directly related to \HH . Figure~\ref{GSAOI_zoom} shows in detail the GSAOI image of the region around \HH\ with contours of ALMA-detected emission in several molecular tracers overlaid. \begin{figure*}[ht] \centering \includegraphics[width=14cm]{cutout_GSOI_color.png} \caption{The central region around \HH . Several filamentary dark lanes are present, obscuring parts of the stellar content even at near-infrared wavelengths.
Overplotted as contours are the SO integrated emission (cyan), the $^{13}CO$ integrated emission in red, and the $^{12}CO$ integrated emission in blue \citep{nayak19}. The location of the H30$\alpha$ source is further marked as a yellow circle. The spatial resolution of the ALMA 1.3mm observations is 0.3\arcsec$\times$ 0.2\arcsec . Several bright sources associated with nebulosity have been marked as A1 (the ATCA source), A2 (the approximate cluster center), A3, and A4.} \label{GSAOI_zoom} \end{figure*} The higher spatial resolution of the GSAOI images compared to the previous near-infrared imaging shows that although there is a bright star associated with extended nebulosity next to the H30$\alpha$ source, they are offset by $\sim$ 0.2\arcsec , which, however, is comparable to the beam size and pointing accuracy of ALMA. The integrated SO emission extends almost perpendicularly to the nebulosity. Whereas the $SO$ emission is compact, the $^{13}CO$ emission is more extended, with a peak of the integrated emission near source A1. The $^{13}CO$ emission traces more quiescent gas and thus shows the extent of the molecular material around \HH , whereas $SO$ traces either high-density or shocked material. In the vicinity of \HH\ there are several dark filamentary lanes seen extending from the south and west. The dark lanes are loosely associated with the emission in $^{13}CO$ and in particular $^{12}CO$. Since they appear dark relative to the extended nebulosity they are in the foreground, but based on their spatial alignment they appear to be associated with \HH . Several other bright infrared sources are evident in the vicinity of \HH . These include one to the east (2\arcsec , or 0.5pc, source A2) associated with strong nebulosity and another nebulous source 2\farcs 9 (0.7pc) to the south (source A4). Some 3\farcs 8 (0.9pc) to the south-east of \HH\ is located another bright object associated with nebulosity (source A3). The fact that these bright objects are associated with nebulosity, together with their proximity to \HH , suggests that they are young and most likely connected to the recent star formation in the region. \subsection{The center of \HH\ in the near-infrared} We have determined the near-infrared center of \HH\ by measuring the stellar number density profile in both RA and DEC and defined the center as the peak in each direction. The sample has further been limited to objects included in the mass estimate described in Section 4, with stellar masses above 10 M$_\odot$. We have examined whether the center depends on excluding potential foreground objects via an extinction cut (retaining only objects with A$_{Ks}$ larger than 0.75), but the center does not change. The derived peak is located to the east of \HH\ by 0.5pc, close to the bright source marked as A2 in Fig.~\ref{GSAOI_zoom}. \subsection{Luminosity functions and color-magnitude diagrams} Fig.~\ref{CMD} shows the color-magnitude diagram across the GSAOI field of view for the sample of objects with photometric errors less than 0.1 mag in each band. Objects within a 200 pixel radius (2 pc) are marked in red. The radius of 2 pc is based on Fig.~\ref{radial}, where the stellar surface density reaches 10\%\ of the peak value. \begin{figure}[ht] \centering \includegraphics[width=7cm]{CMD-crop.pdf} \includegraphics[width=7cm]{LF-crop.pdf} \caption{Top: \filter{H}-\filter{K} versus \filter{H} band color magnitude diagram for the point sources in the GSAOI field of view.
Red points are those within a 200 pixel radius (2 pc) of the central parts of \HH , cyan points those in the control field to the South. Overplotted is a 1 Myr isochrone from the PARSEC calculations \citep{tang}. The numbers indicate the initial mass for a star with that \filter{H-K} color and \filter{H} band magnitude. Note that the main-sequence to pre-main sequence transition is not monotonic, as discussed in the text. Bottom: The luminosity function for stars in the cluster region (solid histogram) and the control region (dashed histogram). } \label{CMD} \end{figure} There is a large range of reddening values for the region, from objects close to the isochrone up to \filter{H-K} colors of 2 mag. In the near-infrared color-magnitude diagram the isochrone for higher mass stars is almost vertical and the color depends little on mass. The extinction is nevertheless derived by de-reddening each object to the isochrone, as described below. Not only the sources in the vicinity of \HH\ are affected by reddening; this is true across the region, as can be seen from some of the red sources in the control field. The large range of reddening means there is no well-defined locus of the cluster members, but there is a large over-density relative to the control field: 132 sources in the cluster region versus only 59 in the control field (which covers four times the area). Further, the field star counts tend to be at fainter magnitudes, where the nebulosity associated with \HH\ limits the detection of faint stars. This is seen in the luminosity functions for the two regions in Fig.~\ref{CMD}. The cluster field has a number of bright sources not present in the control field, whereas the control field has a relatively larger fraction of sources at faint magnitudes. This is consistent with the control field being populated predominantly by an older foreground population. Fig.~\ref{radial} shows the radial profile for the region around \HH , centered as noted in Fig.~\ref{GSAOI_zoom}. The profile peaks strongly at the location of \HH\ and declines gradually towards a background level outside 20\arcsec . Limiting the sample to sources brighter than \filter{H}$=19.5$ shows the same number of detected sources in the central region as the full sample but a steeper decline with radius, suggesting that extinction has a stronger effect in the center. \begin{figure}[ht] \centering \includegraphics[width=7cm]{radial-crop.pdf} \caption{The radial surface density profile for sources in the region around \HH . The solid-lined histogram is for all detected sources with a photometric error less than 0.1 mag in each band and the dashed-lined histogram is for sources brighter than an observed magnitude of \filter{H}$=19.5$, where the completeness is less affected by differential extinction.} \label{radial} \end{figure} \section{Analysis} \subsection{De-reddening and stellar mass estimates} In order to derive the stellar masses an extinction law and a stellar isochrone are needed. For the infrared it has been found that the extinction curve in the LMC does not differ substantially from that of the Milky Way \citep[e.g.][]{koornneef82} and thus we adopt the parameterisation of \citet{cardelli}. We use an $R_V$ of 3.4, the average $R_V$ reported by \citet{gordon} for their LMC stellar sample. For the stellar isochrone we adopt the PARSEC models, which cover young ages as well as the low metallicity appropriate for the LMC. We use a distance modulus of 18.48, corresponding to a distance of 49.6 kpc \citep{dist_LMC}.
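The de-reddening described in the following paragraph amounts to sliding each observed point along the reddening vector onto the isochrone. A minimal numerical sketch of this step is given below; the isochrone values and the adopted ratio $A_H/A_{Ks}$ are illustrative placeholders, not the PARSEC or \citet{cardelli} numbers used in the actual analysis.
\begin{verbatim}
import numpy as np

# Toy 1 Myr isochrone at the LMC distance (placeholder values):
# mass [Msun], apparent H and Ks for zero extinction.
iso_mass = np.array([3.0, 5.0, 10.0, 20.0, 40.0])
iso_h = np.array([19.0, 17.8, 16.2, 14.9, 13.8])
iso_ks = np.array([18.9, 17.7, 16.1, 14.8, 13.7])

R = 1.56  # assumed A_H / A_Ks for a Cardelli-type law (illustrative)

def deredden(h_obs, ks_obs):
    # A_Ks needed to move each isochrone point to the observed color:
    # (H - Ks) = (H0 - Ks0) + (R - 1) * A_Ks.
    a_ks = ((h_obs - ks_obs) - (iso_h - iso_ks)) / (R - 1.0)
    # Residual of the de-reddened H magnitude against the isochrone.
    resid = np.abs(h_obs - R * a_ks - iso_h)
    i = np.argsort(resid)[:2]  # interpolate between the two closest points
    w = 1.0 / np.maximum(resid[i], 1e-9)
    return np.average(iso_mass[i], weights=w), np.average(a_ks[i], weights=w)

mass, a_ks = deredden(18.0, 16.8)
print(f"mass ~ {mass:.1f} Msun, A_Ks ~ {a_ks:.2f} mag")
\end{verbatim}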
For each mass in the model a bolometric magnitude and effective temperature are provided, which in turn yield the predicted \filter{H} and \filter{Ks} band magnitudes. From these, together with the reddening law, each object can be slid along the reddening vector in the color-magnitude diagram to the isochrone, and the mass of the object can be determined together with a measure of the extinction. The isochrone has a finite resolution and linear interpolation is performed between the two closest points on the isochrone. The stellar isochrones vary with age, especially for the lower-mass content. The mass where the pre-main sequence to main sequence transition happens is age dependent and is lower for an older population. The transition results in a brightening of the star, which translates into the non-monotonicity seen in the isochrone in Fig.~\ref{CMD} for the mass range 5-10 M$_\odot$. For a 1 Myr old population the transition happens at a mass of $\sim$ 3 M$_\odot$, where the stars can be as bright as a 9 M$_\odot$ main sequence star in the \filter{H} band. For a 2 Myr old population the transition is at 2.5 M$_\odot$. This introduces a degeneracy for the mass determination over the range where the relationship between stellar mass and luminosity is not monotonic, and we explore the effects of this below when estimating the cluster mass. With only the \filter{H} and \filter{Ks} filters, identifying objects with both a reddening and a near-infrared excess due to a circumstellar disk is difficult since the reddening and the presence of a disk would be degenerate. The disk fraction in massive clusters is still poorly known, especially for high mass stars. \citet{stolte} found a modest disk fraction of less than 10\% in the Arches and Quintuplet clusters using \filter{L} band observations. Since \filter{L} band observations are more sensitive to circumstellar disks than \filter{Ks} band observations \citep{haisch} we would expect to identify even fewer disks in a near-infrared survey than was the case in \citet{stolte}. In addition, there are even fewer indications of disks in the near-infrared for massive stars, as probed here, making the possible contamination even smaller. Thus, even though \HH\ is likely younger than the Quintuplet and Arches we would still expect the disk fraction for the intermediate- and high-mass stars to be small and not to affect the derived masses. Fig.~\ref{ext} shows the distribution of derived extinction values for the GSAOI field as a whole and for the \HH\ region. \begin{figure}[ht] \centering \includegraphics[width=7cm]{ext-crop.pdf} \caption{The extinction derived using the 1 Myr isochrone for all stars within the GSAOI field of view (short-dashed histogram) and for the region around \HH\ (solid-lined histogram).} \label{ext} \end{figure} The distribution of extinction is different for the field as a whole and for \HH . Whereas the field in general has a distribution that peaks at low extinction, consistent with the foreground extinction towards the LMC, the distribution for \HH\ has a much broader peak at $\mathrm{A_{Ks}}=1.2$ and a median value of 1.25. However, the extinction distribution is likely skewed towards less extincted objects due to the limited depth of the observations. If instead the sample is limited to the brighter stars with a mass above 10 M$_\odot$ (64 objects) the median extinction is $A_{Ks}=1.7$.
Assuming there is no preference for higher-mass stars to be more deeply embedded than lower-mass stars, this suggests that relatively fewer low-mass stars are detected. In addition, if an unbiased measure of the Initial Mass Function is needed, an extinction-limited sample needs to be constructed \citep[e.g.][]{andersen09}. \subsection{The total stellar mass of \HH\ } The total stellar mass of \HH\ is a key parameter to determine if the region will evolve into an R136-type region in the future. Based on the de-reddened \filter{H} band magnitudes derived in the previous section we can obtain several estimates of the total stellar mass. The most straightforward is to calculate the mass of the detected stars. Using the masses determined above, a total mass of 2466 M$_\odot$ is found for a 1 Myr isochrone assuming a metallicity of $-0.5$ dex, based on the metallicity maps for the LMC in \citet{choudhury18}. However, this does not take field star contamination into account. We use the control field in the north-east corner to estimate the field star contamination to the mass. The radius is 300 binned pixels, compared to the 200 binned pixel radius for the science region, for better statistics. Since field stars within the cluster would have been assigned a mass assuming the 1 Myr isochrone, we use the same isochrone to estimate masses for the control field stars. The total mass of stars in the control field is 141 M$_\odot$, but this is for a larger area. Taking this into account, the field star corrected directly detected mass of \HH\ is 2403 M$_\odot$. However, the finite depth of the imaging means only a fraction of the stellar content is directly detected. Extrapolating to low masses can provide an estimate of the full stellar content. Assuming a log-normal IMF below 1 M$_\odot$ and a power-law above, using the prescription in \citet{chabrier}, the full stellar mass is extrapolated. Although there is a large ongoing debate as to whether the IMF varies as a function of environment, there is little to indicate that this is the case for present-day clusters in the Milky Way or the Large Magellanic Cloud \citep{andersen09,bastian,dario09,andersen17a}, with the possible exception of the clusters near the Galactic center \citep{hosek}, which is a very different environment from N79. For the detected stellar mass we restrict the sample to 10 M$_\odot$ and above to avoid issues with any inflections in the mass--luminosity relation and with incompleteness. We detect a total mass of 2186 M$_\odot$ in stars more massive than 10 M$_\odot$. However, this includes potential contamination from field objects. Correcting for the field star contamination reduces this moderately to 2144 M$_\odot$. Using the field star corrected mass estimate to extrapolate the IMF to low masses, the total mass is found to be 10951 M$_\odot$. We have performed the same calculation with 0.5 and 1.6 Myr isochrones. The total extrapolated masses are 12602 M$_\odot$ and 9716 M$_\odot$, respectively. Due to the cluster being deeply embedded, and the presence of molecular filaments and shocks indicative of young stars, we prefer a younger age, and thus the mass of \HH\ is expected to be above $10^4$ M$_\odot$ within a 2 pc radius. \subsection{The immediate environment of the ALMA peak} As shown in Section 3, the peak of the stellar content of the \HH\ region is centered next to A2 in Fig.~\ref{GSAOI_zoom}, to the east of the ALMA peak location.
The peak of the stellar distribution shows less emission from $^{12}CO$ and $^{13}CO$ as well as lower extinction than the vicinity of the ALMA peak, and thus likely traces an older star formation event than the ALMA region of \HH . We have investigated the immediate stellar content around the \HH\ ALMA source, within a radius of 1.6\arcsec . A total of six sources are found within this region, the least massive of which is 17 M$_\odot$ (independent of whether the age is assumed to be 0.5, 1, or 1.6 Myr). The total stellar mass is found to be 428 M$_\odot$. The average surface mass density in the region is 53 M$_\odot$ arcsec$^{-2}$. Since the lowest mass object is 17 M$_\odot$ one can directly compare with the surface mass density in the center of R136 \citep{brandl96}. Their single conjugate adaptive optics study had a spatial resolution similar to this study. They determined the surface mass density within the central 0\farcs 4 to be $\sim$ 1200 M$_\odot$ arcsec$^{-2}$ for stars more massive than 15 M$_\odot$. \subsection{Is \HH\ a forming SSC?} We find that \HH\ contains a population of massive stars, but compared to a region like R136 the total mass of the region is relatively modest. The 2 pc radius adopted is similar to the half-mass radius of R136 within the 30 Dor complex \citep[1.7 pc;][]{hunter95,andersen09}. The extrapolated total mass of R136 was estimated in \citet{andersen09} to be $10^5$ M$_\odot$, and thus the mass within the half-mass radius is $50,000$ M$_\odot$, more than a factor of five larger than that of \HH , where the mass within 1.7 pc is found to be 7200 M$_\odot$. However, whereas R136 is largely free of gas, there is still material present for potential further star formation in \HH . To gauge the amount available we use the extinction determined for the individual stars to estimate the total amount of dust, and hence gas, within \HH . We use the average extinction derived for the stars above 10 M$_\odot$, as above for the total stellar mass estimate. The extinction is converted into a gas mass estimate based on the relations between $A_V$ and dust surface mass density and a gas-to-dust ratio of 380 determined by \citet{duval} for the LMC. The conversion of measured extinction to dust mass relies on the extinction law being standard in the near-infrared for \HH . If the dust size distribution is different due to e.g. grain growth, this will affect the derived extinction values and hence the dust mass. The gas-to-dust ratio is different for the LMC than for the Milky Way and depends on metallicity. Any local change in the gas-to-dust ratio will linearly affect the derived gas mass. There have recently been claims of large reservoirs of CO-dark gas, most notably in the 30 Dor region, which could suggest that we underestimate the total gas mass \citep{chevance}. However, as discussed in \citet{chevance} this is mainly observed at low extinction values, typically $A_V$ of less than three. In the case of \HH , the average extinction is $A_V > 10$. We have adopted the average extinction to hydrogen column density conversion from \citet{duval}. Converting the derived near-infrared extinction to the extinction in the \filter{V} band using the \citet{cardelli} extinction law, we obtain a total gas mass of 6580 M$_\odot$ within a radius of 2 pc. This is based on the average extinction of the sight lines probed by the embedded stars (126 stars). The SO-traced gas will likely add to the total molecular gas estimate. The arithmetic of the conversion is sketched below.
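The conversion chain used above (median stellar $A_{Ks}$ to $A_V$, to a hydrogen column density, to a gas mass within the adopted radius) involves only elementary arithmetic. All numerical coefficients in the sketch are placeholders for illustration, not the \citet{duval} or \citet{cardelli} values, and the helium correction factor of 1.36 is an assumption of the sketch; the result is of the same order as the value quoted in the text.
\begin{verbatim}
import numpy as np

AV_PER_AKS = 8.8     # assumed A_V / A_Ks for a Cardelli-type law
NH_PER_AV = 3.5e21   # assumed H column per magnitude of A_V [cm^-2],
                     # larger than the Galactic value, as for the LMC
M_H = 1.67e-24       # hydrogen atom mass [g]
MSUN = 1.989e33      # solar mass [g]
PC = 3.086e18        # parsec [cm]

a_ks = 1.7           # median extinction of the stars above 10 Msun
radius = 2.0 * PC    # adopted cluster radius

n_h = NH_PER_AV * AV_PER_AKS * a_ks     # column density [cm^-2]
area = np.pi * radius**2                # projected area [cm^2]
m_gas = 1.36 * n_h * M_H * area / MSUN  # 1.36: helium correction

print(f"gas mass ~ {m_gas:.0f} Msun")
\end{verbatim}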
In addition, the $^{13}CO$ emission was highly filamentary \citep{nayak19} and will not be captured well by the extinction mapping. The extinction-based estimate should therefore be regarded as a lower limit to the gas mass. Adding the gas mass determined above to the stellar mass gives a total mass of 17318 M$_\odot$, assuming a 1 Myr isochrone for the stellar content and extrapolating the IMF over the full stellar range. Given the stellar content within 2 pc of the central region of \HH , we find that the cluster is a Young Massive Cluster based on the criteria in \citet{portegieszwart} of being more massive than $10^4$ M$_\odot$ and younger than 100 Myr. From the stellar density profile it is clear that the majority of the mass is within the 2 pc radius, with an increase of the total mass of 30\%\ for a radius of 3 pc. Even with the gas mass added to the stellar mass, \HH\ is still lower mass than R136 for the same spatial coverage. Further, most likely not all the gas would be turned into stars and thus only a fraction would be added to the total final stellar mass. However, as the extinction distribution shows, a substantial fraction of the sightlines probe gas with an extinction higher than $A_{Ks}=0.8$, which has been suggested as the threshold for star formation in the Galaxy \citep{lada2010}. All of this places a likely total final mass for \HH\ closer to 15000 M$_\odot$, which would make it a higher-mass YMC. Based on the large amount of gas still in the region, and the fact that the cluster is deeply embedded, \HH\ is expected to be a much younger YMC than R136, and it thus provides a unique opportunity to study massive star cluster formation in a metal-poor environment. \section{Summary} We have performed Flamingos 2 and GeMS/GSAOI imaging of the potential forming super star cluster associated with \HH\ in the Large Magellanic Cloud. We find a near-infrared embedded cluster with a characteristic radius of $\sim$2 pc, with a large range of reddening compared to the adjacent region. The cluster includes several bright, highly reddened sources associated with nebulosity. The total stellar mass of the 2 pc radius region around \HH\ is at least $10^4$ M$_\odot$ based on the intermediate- and high-mass content, extrapolated over the whole stellar mass range, adopting a 1 Myr isochrone. The central mass surface density of the cluster is found to be smaller than that of R136, although it appears that \HH\ is still sub-clustered based on the existence of distinct stellar and molecular peaks. Thus, \HH\ coincides with a forming Young Massive Cluster, offering an opportunity to study the formation of such a cluster at sub-solar metallicity. \acknowledgements{We thank the referee and editor for suggestions and comments that improved the manuscript. Based on observations through programs 2019A-DD-103 and 2018B-Q-202, obtained at the international Gemini Observatory, a program of NOIRLab, which is managed by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation on behalf of the Gemini Observatory partnership: the National Science Foundation (United States), National Research Council (Canada), Agencia Nacional de Investigaci\'{o}n y Desarrollo (Chile), Ministerio de Ciencia, Tecnolog\'{i}a e Innovaci\'{o}n (Argentina), Minist\'{e}rio da Ci\^{e}ncia, Tecnologia, Inova\c{c}\~{o}es e Comunica\c{c}\~{o}es (Brazil), and Korea Astronomy and Space Science Institute (Republic of Korea).
Nayak and Meixner were supported by NSF grant AST-1312902, grants NNX15AF17G and HST-GO-14689.005-A, and SOFIA grant SOF-04-0101. Nayak was additionally supported by the STScI Director’s Discretionary Fund. Hirschauer and Meixner acknowledge support from NASA grant NNX14AN06G.}
\section{Introduction} \subsection{Notation} We write $\mathbb{R}_{> 0} \coloneqq \setm{x \in \mathbb{R}}{x > 0}$ and for $\mu, \rho \in \mathbb{R}$ we define $\mathbb{C}_{\abs{\cdot} < \mu} \coloneqq \setm{z \in \mathbb{C}}{\abs{z} < \mu}$; the sets $\mathbb{C}_{\abs{\cdot} > \mu}$, $\mathbb{C}_{\abs{\cdot} \leq \mu}$, $\mathbb{C}_{\abs{\cdot} \geq \mu}$ and $\mathbb{C}_{\mu \geq \abs{\cdot} \geq \rho}$ are defined similarly. For $\rho > 0$ we denote the complex ball with radius $\rho$ centered at $0$ by $B(0, \rho) \coloneqq \setm{z \in \mathbb{C}}{|z| < \rho}$ and the circle with radius $\rho$ centered at $0$ by $S_\rho \coloneqq \partial B(0, \rho)$. We set $\mathbb{N} \coloneqq \mathbb{Z}_{\geq 0} \coloneqq \{0, 1, 2, \dots\}$. For sets $X, Y$ we denote the set of functions from $Y$ to $X$ by $X^Y \coloneqq \{f: Y \rightarrow X\}$ and for $f \in X^Y$ we write $\operatorname{ran} f \coloneqq \setm{f(y) \in X}{y \in Y}$ for the range of $f$. In particular, for any $M \subseteq \mathbb{Z}$, $X^M$ is the space of sequences in $X$ on $M$ and for $u \in X^M$, $n \in M$ we write $u_n \coloneqq u(n)$. The identity mapping on a vector space $V$ is denoted by $1$. For a sequence $u \in V^\mathbb{Z}$ we denote $\operatorname{spt} u \coloneqq \setm{n \in \mathbb{Z}}{u_n \neq 0}$. If $V$ is a normed vector space we denote by $\norm{\cdot}_V$ the norm on $V$. We recall the binomial coefficient and the binomial series including some of their properties. Proofs of the following propositions can be found in \cite{knuth1994} and \cite{koenigsberger2004}. \begin{proposition}[Binomial coefficient ({\cite[pp.~164-165]{knuth1994}}, {\cite[p.~34]{koenigsberger2004}})]\label{p:binom} For $\alpha \in \mathbb{C}$ and $n \in \mathbb{Z}_{\geq 1}$ the binomial coefficient is defined by \begin{equation*} \binom{\alpha}{0} \coloneqq 1, \qquad \binom{\alpha}{n} \coloneqq \frac{\alpha(\alpha - 1) \cdots (\alpha - n + 1)}{n!}. \end{equation*} For $\alpha \in \mathbb{C}$ and $n \in \mathbb{N}$ we have \begin{equation*} (-1)^n \binom{\alpha}{n} = \binom{-\alpha + n - 1}{n} \qquad \text{and} \qquad \sum_{k = 0}^n (-1)^k \binom{\alpha}{k} = (-1)^n \binom{\alpha - 1}{n}. \end{equation*} \end{proposition} \begin{proposition}[Binomial series ({\cite[p.~65 \& p.~73]{koenigsberger2004}})] Let $\alpha \in \mathbb{C}$. The binomial power series is defined by \begin{equation*} (1 + z)^\alpha \coloneqq \sum_{k = 0}^\infty \binom{\alpha}{k} z^k. \end{equation*} The series converges absolutely in $B(0, 1)$. In particular, the mapping $\mathbb{C}_{\abs{\cdot} > 1} \rightarrow \mathbb{C}, z \mapsto (1 - z^{-1})^\alpha$ is holomorphic. For each $\alpha, \beta \in \mathbb{C}$ and $z \in B(0, 1)$ we have $(1 + z)^\alpha (1 + z)^\beta = (1 + z)^{\alpha + \beta}$. \end{proposition} Binomial coefficients can be expressed with the gamma function. \begin{lemma}[Falling factorial ({\cite[p.~164]{knuth1994}})]\label{l:eq-binom-gamma} With the falling factorial \begin{equation*} (x)^{(\nu)} \coloneqq \frac{\Gamma(x + 1)}{\Gamma(x - \nu + 1)} , \qquad x \in \mathbb{C} \setminus \mathbb{Z}, \qquad \nu \in \mathbb{C}, \end{equation*} we have for each $\alpha \in \mathbb{C} \setminus \mathbb{Z}$ and $n \in \mathbb{N}$ \begin{equation}\label{e:factorial-vs-binom} (-1)^n \binom{\alpha}{n} = \binom{-\alpha + n - 1}{n} = \frac{1}{\Gamma(-\alpha)} (n - (1 + \alpha))^{(-(1 + \alpha))}. \end{equation} \end{lemma} \begin{lemma}\label{lem:binom-estimate} Let $\alpha \in (0, 1)$ and $\rho > 1$.
Then we have for each $z \in S_\rho$ \begin{equation*} (1 - \rho^{-1})^\alpha \leq \abs{(1 - z^{-1})^\alpha}. \end{equation*} \end{lemma} \begin{proof} Let $z \in S_\rho$. For every $n \in \mathbb{Z}_{\geq 1}$ we observe that $(-1)^n \binom{\alpha}{n} < 0$ and therefore $(-1)^n \binom{\alpha}{n} z^{-n} = -\abs{\binom{\alpha}{n}} z^{-n}$. We show by induction that for every $n \in \mathbb{N}$ \begin{equation*} \abs{\sum_{k = 0}^n (-1)^k \binom{\alpha}{k} z^{-k}} \geq \abs{\sum_{k = 0}^n (-1)^k \binom{\alpha}{k} \rho^{-k}}, \end{equation*} and letting $n$ tend to infinity then yields the claimed inequality. The induction basis is trivial. For the induction step for $n \in \mathbb{N}$ we use the lower triangle inequality together with the induction hypothesis to obtain \begin{align*} \abs{\sum_{k = 0}^{n + 1} (-1)^k \binom{\alpha}{k} z^{-k}} &= \abs{\sum_{k = 0}^n (-1)^k \binom{\alpha}{k} z^{-k} + (-1)^{n + 1} \binom{\alpha}{n + 1} z^{-(n + 1)}} \\ &\geq \abs{\abs{\sum_{k = 0}^n (-1)^k \binom{\alpha}{k} z^{-k}} - \abs{(-1)^{n + 1} \binom{\alpha}{n + 1} z^{-(n + 1)}}} \\ &= \abs{\abs{\sum_{k = 0}^n (-1)^k \binom{\alpha}{k} z^{-k}} + (-1)^{n + 1} \binom{\alpha}{n + 1} \rho^{-(n + 1)}} \\ &\geq \abs{\sum_{k = 0}^{n + 1} (-1)^k \binom{\alpha}{k} \rho^{-k}}. \end{align*} \end{proof} \subsection{Fractional difference operators} Let $V$ be a real or complex vector space. The fractional sum can be motivated by the iterated sum formula and is also related to iterating the backward difference operator (see e.g.\ \cite{kuttner1957}). For $\alpha \in \mathbb{R}_{> 0}$ the fractional sum $\nabla^{-\alpha} \colon V^\mathbb{N} \to V^\mathbb{N}$ is defined by (cf.\ \cite[p.\ 3]{atici2009}) \begin{equation}\label{e:FI-op} (\nabla^{-\alpha} u)_n = \sum_{k = 0}^n \binom{n - k + \alpha - 1}{n - k} u_k = \sum_{k = 0}^n (-1)^k \binom{-\alpha}{k} u_{n - k}. \end{equation} There is also a definition motivated by iterating the forward difference operator, which has been studied at least since \cite{kuttner1957} and can be found in \cite[p.\ 3]{atici2009} as well. Note that $(\nabla^{-\alpha} u)_n$ in general depends on $u_0, \dots, u_n$. The approach to defining the fractional differential operators in the Riemann-Liouville and Caputo sense (cf.\ \cite{Diethelm2004}) was applied mutatis mutandis to difference operators (see e.g.\ \cite{Qasem2013} and the references therein). Recall that for $\Delta: V^\mathbb{N} \rightarrow V^\mathbb{N}, u \mapsto (u_{n + 1} - u_n)_{n \in \mathbb{N}}$ we have $(\Delta u)_n = (\nabla u)_{n + 1}$ for $n \in \mathbb{N}$. For $\alpha \in (0, 1)$ the Riemann-Liouville forward fractional difference operator is defined by (cf.\ \cite[p.\ 3813]{Lizama2017}) \begin{equation}\label{e:RL-op} \Delta^{\alpha}: V^\mathbb{N} \rightarrow V^\mathbb{N}, \qquad u \mapsto \Delta \nabla^{-(1 - \alpha)} u. \end{equation} The Caputo forward fractional difference operator is defined by (cf.\ \cite[p.\ 3813]{Lizama2017}) \begin{equation}\label{e:C-op} \Delta_C^\alpha: V^\mathbb{N} \rightarrow V^\mathbb{N}, \qquad u \mapsto \nabla^{-(1 - \alpha)} \Delta u. \end{equation} In this paper we study sequences in a Hilbert space $V = H$ on $\mathbb{Z}$ and define a fractional difference sum operator using the binomial series and a functional calculus which is not purely algebraic as in the case of $\nabla^{-\alpha}$. The connection between operators defined on $H^\mathbb{Z}$ and those defined on $H^\mathbb{N}$ will be causality, and we analyze how the Riemann-Liouville and the Caputo operators fit into the calculus developed for sequences in $H^\mathbb{Z}$.
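\begin{remark}[Numerical illustration]
The operators \eqref{e:FI-op}, \eqref{e:RL-op} and \eqref{e:C-op} act on finitely supported scalar sequences through the coefficients $(-1)^k \binom{\alpha}{k}$, which satisfy the recursion $c_0 = 1$, $c_k = c_{k - 1} (k - 1 - \alpha)/k$. The following minimal {\tt numpy} sketch (for orientation only; it is not used in the sequel) implements the three operators and checks the semigroup property $\nabla^{-\alpha} \nabla^{-\beta} = \nabla^{-(\alpha + \beta)}$ numerically, in accordance with the identity $(1 + z)^\alpha (1 + z)^\beta = (1 + z)^{\alpha + \beta}$.
\begin{verbatim}
import numpy as np

def weights(alpha, n):
    # c_k = (-1)^k * binom(alpha, k) for k = 0..n via the recursion
    # c_k = c_{k-1} * (k - 1 - alpha) / k.
    c = np.ones(n + 1)
    for k in range(1, n + 1):
        c[k] = c[k - 1] * (k - 1 - alpha) / k
    return c

def frac_sum(u, alpha):
    # (nabla^{-alpha} u)_n = sum_{k=0}^n (-1)^k binom(-alpha, k) u_{n-k}
    # for a sequence u supported on N (given as an array u_0, ..., u_N).
    c = weights(-alpha, len(u) - 1)
    return np.array([c[: n + 1] @ u[n::-1] for n in range(len(u))])

def riemann_liouville(u, alpha):
    # Delta nabla^{-(1 - alpha)} u; the result is one entry shorter.
    v = frac_sum(u, 1.0 - alpha)
    return v[1:] - v[:-1]

def caputo(u, alpha):
    # nabla^{-(1 - alpha)} Delta u.
    return frac_sum(u[1:] - u[:-1], 1.0 - alpha)

u = np.random.default_rng(0).standard_normal(50)
assert np.allclose(frac_sum(frac_sum(u, 0.3), 0.4), frac_sum(u, 0.7))
\end{verbatim}
\end{remark}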
An important step towards the discrete functional analytic framework introduced in this paper was taken in the continuous case for fractional derivatives in \cite{Picard:Trostorff:Waurick2015}. Lastly we study the asymptotic stability of the zero solution of a linear fractional difference equation with the Riemann-Liouville and the Caputo forward difference operators. The interest in the study of linear problems in the context of stability analysis stems from Lyapunov's first method, which has been analyzed in \cite{Siegmund2016} for fractional differential equations. The results regarding asymptotic stability will be in terms of the Matignon criterion (cf.\ \cite{Matignon1998}), however formulated for bounded operators on a Hilbert space $H$, and will be compared to those in \cite{Qasem2013} and \cite{czermak2015}. A useful tool when analyzing the asymptotic stability of linear problems is the $\mathcal{Z}$ transform, which is also used in \cite{Qasem2013} and \cite{czermak2015} but is studied here for sequences in $H^\mathbb{Z}$. Asymptotic stability has also been studied using the Riemann-Liouville and the Caputo backward difference operators in \cite{czermak2012} and \cite{Lizama2017}. \section{Exponentially weighted $\ell_p$ spaces} We denote by $(H, \norm{\cdot}_H)$ a complex and separable Hilbert space. The scalar product $\left<\cdot, \cdot\right>_H$ on $H$ shall be conjugate linear in the first argument and linear in the second argument. We recall several of the concepts of weighted $\ell_{p, \rho}(\mathbb{Z}; H)$ spaces and the $\mathcal{Z}$ transform (see also \cite{Siegmund2018}). \begin{lemma}[Exponentially weighted $\ell_p$ spaces \cite{Siegmund2018}]\label{lem:ell2rho} Let $1 \leq p < \infty$, $\rho >0$. Define \begin{align*} \ell_{p,\rho}(\mathbb{Z}; H) &\coloneqq \setm{x \in H^\mathbb{Z}}{\sum_{k \in \mathbb{Z}} \norm{x_k}_H^p \rho^{-p k} < \infty}, \\ \ell_{\infty,\rho}(\mathbb{Z}; H) &\coloneqq \setm{x \in H^\mathbb{Z}}{\sup_{k \in \mathbb{Z}} \norm{x_k}_H \rho^{-k} < \infty}. \end{align*} Then $\ell_{p,\rho}(\mathbb{Z}; H)$ and $\ell_{\infty,\rho}(\mathbb{Z}; H)$ are Banach spaces with norms \begin{equation*} \norm{x}_{\ell_{p,\rho}(\mathbb{Z}; H)} \coloneqq \Big( \sum_{k \in \mathbb{Z}} \norm{x_k}_H^p \rho^{-p k} \Big)^{\frac{1}{p}} \qquad (x \in \ell_{p,\rho}(\mathbb{Z}; H)) \end{equation*} and \begin{equation*} \norm{x}_{\ell_{\infty,\rho}(\mathbb{Z}; H)} \coloneqq \sup_{k \in \mathbb{Z}} \norm{x_k}_H \rho^{-k} \qquad (x \in \ell_{\infty,\rho}(\mathbb{Z}; H)), \end{equation*} respectively. Moreover, $\ell_{2,\rho}(\mathbb{Z}; H)$ is a Hilbert space with the inner product \begin{equation*} \langle x, y \rangle_{\ell_{2,\rho}(\mathbb{Z}; H)} \coloneqq \sum_{k \in \mathbb{Z}} \big\langle x_k, y_k \big\rangle_{\!H} \rho^{-2 k} \qquad (x, y \in \ell_{2,\rho}(\mathbb{Z}; H)). \end{equation*} We write $\ell_{p}(\mathbb{Z}; H) \coloneqq \ell_{p,1}(\mathbb{Z}; H)$ for $1 \leq p \leq \infty$. \end{lemma} \begin{proposition}[One sided weighted sequence spaces \cite{Siegmund2018}]\label{lem:one-sided} For $1 \leq p \leq \infty$, $a \in \mathbb{Z}$ and $\rho > 0$ we define \begin{equation*} \ell_{p,\rho}(\mathbb{Z}_{\geq a}; H) \coloneqq \setm{x|_{\mathbb{Z}_{\geq a}}}{x \in \ell_{p,\rho}(\mathbb{Z}; H)}.
\end{equation*} For $1 \leq p \leq \infty$, $\rho > 0$, $a \in \mathbb{Z}$ and $x \in H^{\mathbb{Z}_{\geq a}}$, we define $\iota x \in H^{\mathbb{Z}}$ by \begin{equation*} (\iota x)_k \coloneqq \begin{cases} 0 & \text{if } k < a, \\ x_k & \text{if } k \geq a. \end{cases} \end{equation*} Then $\ell_{p,\rho}(\mathbb{Z}_{\geq a}; H)$ is a Banach space with norm $\norm{\cdot}_{\ell_{p,\rho}(\mathbb{Z}_{\geq a}; H)} \coloneqq \norm{\iota \cdot}_{\ell_{p,\rho}(\mathbb{Z}; H)}$, and \begin{equation*} \iota \colon \ell_{p,\rho}(\mathbb{Z}_{\geq a}; H) \hookrightarrow \ell_{p,\rho}(\mathbb{Z}; H) \end{equation*} is an isometric embedding. We write $\ell_{p,\rho}(\mathbb{Z}_{\geq a}; H) \subseteq \ell_{p,\rho}(\mathbb{Z}; H)$. \\[1ex] For $1 \leq p < q \leq \infty$, $\rho, \varepsilon > 0$, $a \in \mathbb{Z}$ we have \begin{align*} (a) \qquad &\ell_{p,\rho}(\mathbb{Z}_{\geq a}; H) \subsetneq \ell_{q,\rho}(\mathbb{Z}_{\geq a}; H), \\ (b) \qquad &\ell_{q,\rho}(\mathbb{Z}_{\geq a}; H) \subsetneq \ell_{p,\rho + \varepsilon}(\mathbb{Z}_{\geq a}; H). \end{align*} \end{proposition} \begin{definition} For $x \in H$ and $n \in \mathbb{Z}$ we define $\delta_n x \in H^\mathbb{Z}$ by \begin{equation*} (\delta_n x)_m \coloneqq \begin{cases} x, &\text{if} \; m = n, \\ 0, &\text{if} \; m \neq n, \end{cases} \end{equation*} and $\chi_{\mathbb{Z}_{\geq n}} x \in H^\mathbb{Z}$ by \begin{equation*} (\chi_{\mathbb{Z}_{\geq n}} x)_m \coloneqq \begin{cases} x, &\text{if} \; m \geq n, \\ 0, &\text{if} \; m < n. \end{cases} \end{equation*} \end{definition} Note that for $\rho > 0$, $\delta_n x \in \ell_{p, \rho}(\mathbb{Z}; H)$ and for $\rho > 1$, $\chi_{\mathbb{Z}_{\geq n}} x \in \ell_{p, \rho}(\mathbb{Z}; H)$. \begin{lemma}[Shift operator \cite{Siegmund2018}]\label{lem:Shift} Let $1 \leq p \leq \infty$, $\rho > 0$. Then \begin{align*} \tau \colon \ell_{p,\rho}(\mathbb{Z}; H) & \rightarrow \ell_{p,\rho}(\mathbb{Z}; H), \\ (x_k)_{k \in \mathbb{Z}} & \mapsto (x_{k+1})_{k \in \mathbb{Z}}, \end{align*} is linear, bounded, invertible, and \begin{equation*} \norm{\tau^n}_{L(\ell_{p,\rho}(\mathbb{Z}; H))} = \rho^n \qquad (n \in \mathbb{Z}). \end{equation*} \end{lemma} \section{$\mathcal{Z}$ transform} \begin{lemma}[$L_2$ space on circle and orthonormal basis \cite{Siegmund2018}] Let $\rho >0$. Define \begin{equation*} L_{2}(S_\rho; H) \coloneqq \setm{f \colon S_\rho \rightarrow H}{ \int_{S_\rho} \norm{ f(z) }_H^2 \frac{\mathrm{d}z}{|z|} < \infty}. \end{equation*} Then $L_{2}(S_\rho; H)$ is a Hilbert space with the inner product \begin{equation*} \langle f, g \rangle_{L_{2}(S_\rho; H)} \coloneqq \frac{1}{2\pi} \int_{S_\rho} \big\langle f(z), g(z) \big\rangle_{\!H} \frac{\mathrm{d}z}{|z|} \qquad (f, g \in L_{2}(S_\rho; H)). \end{equation*} Moreover, let $(\psi_n)_{n \in \mathbb{Z}}$ be an orthonormal basis in $H$. Then $(p_{k,n})_{k, n \in \mathbb{Z}}$ with \begin{equation*} p_{k,n}(z) \coloneqq \rho^{k} z^{-k} \psi_n \qquad (z \in S_\rho) \end{equation*} is an orthonormal basis in $L_2(S_\rho; H)$. \end{lemma} \begin{theorem}[$\mathcal{Z}$ transform \cite{Siegmund2018}]\label{t:ztransform} Let $\rho >0$. The operator \begin{align*} \mathcal{Z}_\rho \colon \ell_{2,\rho}(\mathbb{Z}; H) &\rightarrow L_2(S_\rho; H), \\ x & \mapsto \Big( z \mapsto \sum_{k, n \in \mathbb{Z}} \langle \psi_n, \rho^{-k} x_k\rangle_{\!H} p_{k, n}(z) \Big) \end{align*} is well-defined and unitary.
For $x \in \ell_{1, \rho}(\mathbb{Z}; H) \subseteq \ell_{2, \rho}(\mathbb{Z}; H)$ we have \begin{equation*} \mathcal{Z}_\rho(x) = \left(z \mapsto \sum_{k \in \mathbb{Z}} x_k z^{-k}\right). \end{equation*} \end{theorem} \begin{remark}[$\mathcal{Z}$ transform of $x \in \ell_{2,\rho}(\mathbb{Z}; H) \setminus \ell_{1,\rho}(\mathbb{Z}; H)$]\label{rem:z-transform} Let $\rho > 0$, $x \in \ell_{2,\rho}(\mathbb{Z}; H) \setminus \ell_{1,\rho}(\mathbb{Z}; H)$. Then \begin{equation*} \sum_{k \in \mathbb{Z}} x_k z^{-k} \end{equation*} does not necessarily converge for all $z \in S_\rho$. Consider for example $H = \mathbb{C}$ and $x$ given by $x_0 \coloneqq 0$ and $x_k \coloneqq \frac{\rho^k}{k}$ for $k \neq 0$; then $x \in \ell_{2, \rho}(\mathbb{Z}; H) \setminus \ell_{1, \rho}(\mathbb{Z}; H)$ and the series diverges at $z = \rho$. \end{remark} \begin{lemma}[Shift is unitarily equivalent to multiplication \cite{Siegmund2018}]\label{lem:mo} Let $\rho > 0$. Then \[ \mathcal{Z}_\rho \tau \mathcal{Z}_\rho^* = \mathrm{m}, \] where $\mathrm{m}$ is the multiplication-by-the-argument operator acting in $L_2(S_\rho;H)$, i.e., \begin{align*} \mathrm{m}\colon L_2(S_\rho;H) &\to L_2(S_\rho;H), \\ f&\mapsto (z\mapsto zf(z)). \end{align*} \end{lemma} Next, we present a Paley--Wiener type result for the $\mathcal{Z}$ transform. \begin{lemma}[Characterization of positive support \cite{Siegmund2018}]\label{lem:positive-support} Let $\rho > 0$, $x \in \ell_{2,\rho}(\mathbb{Z}; H)$. Then the following statements are equivalent: (i) $\operatorname{spt} x \subseteq \mathbb{N}$, (ii) $z \mapsto \sum_{k \in \mathbb{Z}} x_k z^{-k}$ is analytic on $\mathbb{C}_{\abs{\cdot} > \rho}$ and \begin{equation}\label{e:Hardybound} \sup_{\mu > \rho} \int_{S_\mu} \norm{\sum_{k \in \mathbb{Z}} x_k z^{-k}}_H^2 \,\frac{\mathrm{d}z}{|z|} < \infty. \end{equation} \end{lemma} \begin{definition}[Causal linear operator]\label{d:linearcausal} We call a linear operator $B \colon \ell_{2,\rho}(\mathbb{Z}; H) \rightarrow \ell_{2,\rho}(\mathbb{Z}; H)$ \emph{causal}, if for all $a\in \mathbb{Z}$, $f\in \ell_{2,\rho}(\mathbb{Z};H)$, we have \begin{equation*} \operatorname{spt} f \subseteq \mathbb{Z}_{\geq a} \quad\Rightarrow\quad \operatorname{spt} Bf \subseteq\mathbb{Z}_{\geq a}. \end{equation*} \end{definition} Recall \cite[VIII.3.6, p.\ 222]{Katznelson2004} that for $A \in L(H)$ with spectrum $\sigma(A)$, the spectral radius \begin{equation*} r(A) \coloneqq \sup \setm{|z|}{z \in \sigma(A)} \end{equation*} of $A$ satisfies \begin{equation*} r(A) = \lim_{n \to \infty} \norm{A^n}_{L(H)}^{1/n}. \end{equation*} Let $A\in L(H)$ and $\rho > 0$. We denote the operators $\ell_{2,\rho}(\mathbb{Z}, H)\rightarrow \ell_{2,\rho}(\mathbb{Z}, H)$, $x\mapsto (Ax_k)_{k \in \mathbb{Z}}$, and $L_2(S_\rho, H)\rightarrow L_2(S_\rho, H)$, $f\mapsto (z \mapsto Af(z))$, which have the same operator norm as $A$, again by $A$. \begin{proposition}[Convolution] Let $c \in \ell_{1, \rho}(\mathbb{Z}; \mathbb{C})$ and $u \in \ell_{2, \rho}(\mathbb{Z}; H)$. Then \begin{equation*} c * u \coloneqq \left(\sum_{k = -\infty}^\infty c_k u_{n - k}\right)_{n \in \mathbb{Z}} \in \ell_{2, \rho}(\mathbb{Z}; H). \end{equation*} We have Young's inequality \begin{equation*} \norm{c * u}_{\ell_{2, \rho}(\mathbb{Z}; H)} \leq \norm{c}_{\ell_{1, \rho}(\mathbb{Z}; \mathbb{C})} \norm{u}_{\ell_{2, \rho}(\mathbb{Z}; H)}. \end{equation*} Moreover, \begin{equation*} \mathcal{Z}_\rho(c * u) = \mathcal{Z}_\rho c \mathcal{Z}_\rho u. \end{equation*} \end{proposition} \begin{proof} Let $n \in \mathbb{Z}$.
With the Cauchy-Schwarz inequality we compute \begin{align*} \left(\sum_{k = -\infty}^\infty \norm{c_k u_{n - k}}_H \right)^2 \rho^{-2n} &= \left(\sum_{k = -\infty}^\infty |c_k|^{1/2} \rho^{-k/2} |c_k|^{1/2} \rho^{-k/2} \norm{u_{n - k}}_H \rho^{-(n - k)}\right)^2 \\ &\leq \norm{c}_{\ell_{1, \rho}(\mathbb{Z}; \mathbb{C})} \left(\sum_{k = -\infty}^\infty |c_k| \rho^{-k} \norm{u_{n - k}}_H^2 \rho^{-2(n - k)}\right). \end{align*} Therefore using Fubini's theorem \begin{align*} \sum_{n = -\infty}^\infty \norm{(c * u)_n}_H^2 \rho^{-2n} &\leq \sum_{n = -\infty}^\infty \left(\sum_{k = -\infty}^\infty \norm{c_k u_{n - k}}_H\right)^2 \rho^{-2n} \\ &\leq \norm{c}_{\ell_{1, \rho}(\mathbb{Z}; \mathbb{C})} \sum_{n = -\infty}^\infty \left(\sum_{k = -\infty}^\infty |c_k| \rho^{-k} \norm{u_{n - k}}_H^2 \rho^{-2(n - k)}\right) \\ &= \norm{c}_{\ell_{1, \rho}(\mathbb{Z}; \mathbb{C})}^2 \norm{u}_{\ell_{2, \rho}(\mathbb{Z}; H)}^2. \end{align*} This shows Young's inequality. If additionally $u \in \ell_{1, \rho}(\mathbb{Z}; H)$ then \begin{align*} \sum_{n = -\infty}^\infty \norm{(c * u)_n}_H \rho^{-n} &\leq \sum_{n = -\infty}^\infty \sum_{k = -\infty}^\infty \norm{c_k u_{n - k}}_H \rho^{-n} \\ &= \sum_{k = -\infty}^\infty \sum_{n = -\infty}^\infty |c_k| \rho^{-k} \norm{u_{n - k}}_H \rho^{-(n-k)} \\ &= \norm{c}_{\ell_{1, \rho}(\mathbb{Z}; \mathbb{C})} \norm{u}_{\ell_{1, \rho}(\mathbb{Z}; H)}, \end{align*} i.e., $c * u \in \ell_{1, \rho}(\mathbb{Z}; H) \cap \ell_{2, \rho}(\mathbb{Z}; H)$ which simplifies the $\mathcal{Z}$ transform of $c * u$. Using Fubini's theorem, we compute for $u \in \ell_{1, \rho}(\mathbb{Z}; H) \cap \ell_{2, \rho}(\mathbb{Z}; H)$ and $z \in S_\rho$ \begin{align*} \mathcal{Z}_\rho(c * u)(z) &= \sum_{n = -\infty}^\infty \left(\sum_{k = -\infty}^\infty c_{k} u_{n - k}\right) z^{-n} = \sum_{n = -\infty}^\infty \left(\sum_{k = -\infty}^\infty c_{k} z^{-k} u_{n - k} z^{-(n - k)}\right) \\ &= \sum_{k = -\infty}^\infty c_{k} z^{-k} \left(\sum_{n = -\infty}^\infty u_{n - k} z^{-(n - k)}\right) = \mathcal{Z}_\rho(c) \mathcal{Z}_\rho(u). \end{align*} For $u \in \ell_{2, \rho}(\mathbb{Z}; H)$ the formula follows by density of $\ell_{1, \rho}(\mathbb{Z}; H) \cap \ell_{2, \rho}(\mathbb{Z}; H) \subseteq \ell_{2, \rho}(\mathbb{Z}; H)$. \end{proof} \begin{example}[The operator $(1 - \tau^{-1})^\alpha$] Let $\rho > 1$ and $\alpha \in \mathbb{C}$. For the operator $1 - \tau^{-1}: \ell_{2, \rho}(\mathbb{Z}; H) \rightarrow \ell_{2, \rho}(\mathbb{Z}; H)$, we compute \begin{equation*} (1 - \tau^{-1}) = \mathcal{Z}_\rho^* (1 - z^{-1}) \mathcal{Z}_\rho. \end{equation*} We have $|z^{-1}| < 1$ for all $z \in S_\rho$ and therefore \begin{equation*} (1 - \tau^{-1})^{\alpha} \coloneqq \mathcal{Z}_\rho^* (1 - z^{-1})^{\alpha} \mathcal{Z}_\rho : \ell_{2, \rho}(\mathbb{Z}; H) \rightarrow \ell_{2, \rho}(\mathbb{Z}; H) \end{equation*} is well-defined. This is an application of the holomorphic functional calculus (cf.\ \cite[pp.~13--18]{Gohberg:Goldberg:Kaashoek1990}, \cite[p.~601]{Dunford1988}). We define $c \in \ell_{1, \rho}(\mathbb{Z}; \mathbb{C})$ by \begin{equation*} c_k \coloneqq \begin{cases} (-1)^k \binom{\alpha}{k} & \text{if } k \geq 0, \\ 0 & \text{if } k < 0. \end{cases} \end{equation*} Then \begin{equation*} \mathcal{Z}_\rho c = \sum_{k = 0}^\infty (-1)^k \binom{\alpha}{k} z^{-k} = (1 - z^{-1})^\alpha. \end{equation*} Thus we compute for $u \in \ell_{2, \rho}(\mathbb{Z}; H)$ \begin{equation*} \mathcal{Z}_\rho(c * u) = \mathcal{Z}_\rho c \mathcal{Z}_\rho u = (1 - z^{-1})^\alpha \mathcal{Z}_\rho u. 
\end{equation*} Thus for $\alpha \in \mathbb{C}$ and $u \in \ell_{2, \rho}(\mathbb{Z}; H)$ we obtain \begin{equation*} (1 - \tau^{-1})^\alpha u = c * u = \left(\sum_{k = 0}^\infty (-1)^k \binom{\alpha}{k} u_{n - k}\right)_{n \in \mathbb{Z}} = \left(\sum_{k = -\infty}^n (-1)^{n - k} \binom{\alpha}{n - k} u_k\right)_{n \in \mathbb{Z}}, \end{equation*} i.e., $(1 - \tau^{-1})^\alpha$ is a convolution operator, and by Young's inequality $(1 - \tau^{-1})^\alpha$ is bounded with $\norm{(1 - \tau^{-1})^\alpha}_{L(\ell_{2, \rho}(\mathbb{Z}; H))} \leq \norm{c}_{\ell_{1, \rho}(\mathbb{Z}; \mathbb{C})}$. If $u \in \ell_{2, \rho}(\mathbb{Z}; H)$ with $\operatorname{spt} u \subseteq \mathbb{N}$, we have \begin{equation*} (1 - \tau^{-1})^{\alpha} u = \left(\sum_{k = 0}^n (-1)^k \binom{\alpha}{k} u_{n - k}\right)_{n \in \mathbb{Z}}. \end{equation*} Since $\tau$ commutes with $(1 - \tau^{-1})^{\alpha}$, we deduce that $(1 - \tau^{-1})^\alpha$ is causal. On $\ell_{2, \rho}(\mathbb{Z}; H)$ we compute for $\alpha, \beta \in \mathbb{C}$ \begin{align*} (1 - \tau^{-1})^\alpha (1 - \tau^{-1})^\beta &= \mathcal{Z}_\rho^* (1 - z^{-1})^\alpha \mathcal{Z}_\rho \mathcal{Z}_\rho^* (1 - z^{-1})^\beta \mathcal{Z}_\rho \\ &= \mathcal{Z}_\rho^* (1 - z^{-1})^{\alpha + \beta}\mathcal{Z}_\rho = (1 - \tau^{-1})^{\alpha + \beta}. \end{align*} In particular, for $\alpha \in \mathbb{C}$, $(1 - \tau^{-1})^\alpha$ is invertible with inverse $(1 - \tau^{-1})^{-\alpha}$. \end{example} \section{Fractional difference equations on $\ell_{2, \rho}(\mathbb{Z}; H)$} \subsection*{Fractional operators} Let $\rho > 1$ and $\alpha \in (0, 1)$. We consider the operators \eqref{e:FI-op}, \eqref{e:RL-op} and \eqref{e:C-op} defined on $V = H$. For comparing operators defined on spaces of sequences on $\mathbb{Z}$ with those defined for sequences on $\mathbb{N}$, we recall the embedding of $\ell_{2, \rho}(\mathbb{N}; H)$ into $\ell_{2, \rho}(\mathbb{Z}; H)$ by $\iota$ in Proposition \ref{lem:one-sided}. Moreover, we extend the operator $\Delta$ on $\mathbb{N}$ to $\mathbb{Z}$ by \begin{equation*} \Delta: \ell_{2, \rho}(\mathbb{Z}; H) \rightarrow \ell_{2, \rho}(\mathbb{Z}; H), \qquad u \mapsto \chi_{\mathbb{N}} (\tau - 1)u = \chi_{\mathbb{N}} \tau (1 - \tau^{-1})u. \end{equation*} Note that the left shift on $\mathbb{N}$ cuts off the first value of a sequence, and embedded sequences have positive support. This is the reason for multiplying with $\chi_{\mathbb{N}}$ in the definition of $\Delta$ on $\ell_{2, \rho}(\mathbb{Z}; H)$. Let $v \in \ell_{2, \rho}(\mathbb{N}; H)$ and set $u \coloneqq \iota v \in \ell_{2, \rho}(\mathbb{Z}; H)$. We compare the operator $(1 - \tau^{-1})^{-\alpha}$ defined on $\ell_{2, \rho}(\mathbb{Z}; H)$ and the fractional sum \eqref{e:FI-op}. We have $\operatorname{spt} \left((1 - \tau^{-1})^{-\alpha} u\right) \subseteq \mathbb{N}$ and obtain \begin{equation*} \iota \nabla^{-\alpha} v = (1 - \tau^{-1})^{-\alpha} u. \end{equation*} Using definitions \eqref{e:RL-op} and \eqref{e:C-op} of the Riemann-Liouville and Caputo difference operators, and the fact that $\Delta u = (\tau - 1) (u - \chi_{\mathbb{N}} u_0) = \tau (1 - \tau^{-1}) (u - \chi_{\mathbb{N}} u_0)$, we compute \begin{align*} &\Delta (1 - \tau^{-1})^{-(1 - \alpha)} u = \chi_{\mathbb{N}} \tau (1 - \tau^{-1})^\alpha u = \tau (1 - \tau^{-1})^\alpha u - \delta_{-1} u_0, \\ &(1 - \tau^{-1})^{\alpha - 1} \Delta u = (1 - \tau^{-1})^{\alpha - 1} \chi_{\mathbb{N}} \tau (1 - \tau^{-1}) u = \tau (1 - \tau^{-1})^\alpha (u - \chi_{\mathbb{N}} u_0).
\end{align*} Moreover, we have \begin{align*} &\iota \Delta^\alpha v = \chi_{\mathbb{N}} \tau (1 - \tau^{-1})^{\alpha} u, \\ &\iota \Delta_C^\alpha v = \tau (1 - \tau^{-1})^\alpha (u - \chi_{\mathbb{N}} u_0). \end{align*} In view of $\tau (1 - \tau^{-1})^\alpha$, the Caputo and the Riemann-Liouville operators have the same form, except that the Caputo operator first regularizes $u$ by subtracting $\chi_{\mathbb{N}} u_0$. In particular, for $n \in \mathbb{N}$ we have by Proposition \ref{p:binom} that $((1 - \tau^{-1})^\alpha \chi_\mathbb{N} u_0)_n = \sum_{k = 0}^n (-1)^k \binom{\alpha}{k} u_0 = \binom{-\alpha + n}{n} u_0$ and so \begin{equation*} (\Delta^\alpha v)_n = (\Delta_C^\alpha v)_n + \binom{-\alpha + n + 1}{n + 1} u_0. \end{equation*} It is notable that the operator $(1 - \tau^{-1})^\alpha$, for $H = \mathbb{C}$, maps real-valued sequences to real-valued sequences. We could have started with a real Hilbert space $H$ and analyzed $(1 - \tau^{-1})^\alpha$ spectrally via the complexification $H \oplus H$. \begin{proposition}[Equivalence of difference equation and sequence equation]\label{p:equ-frac-de-seq-eq} Let $\rho > 1$ and $\alpha \in (0, 1)$. Let $x \in H$, $F: \ell_{2, \rho}(\mathbb{Z}; H) \rightarrow \ell_{2, \rho}(\mathbb{Z}; H)$ and $u \in \ell_{2, \rho}(\mathbb{Z}; H)$. Let $\operatorname{spt} u \subseteq \mathbb{N}$ and $\operatorname{spt} F(u) \subseteq \mathbb{N}$. In view of the Riemann-Liouville operator, the following are equivalent: \begin{align*} &(i) &&\tau (1 - \tau^{-1})^\alpha u = F(u) + \delta_{-1} x, \\ &(ii) &&u_0 = x, ((1 - \tau^{-1})^\alpha u)_{n + 1} = F(u)_n \text{ for } n \in \mathbb{N}, \\ &(iii) &&u_0 = x, u_{n + 1} = (-1)^{n + 1} \binom{-\alpha}{n + 1} u_0 + \sum_{k = 0}^n (-1)^{n - k} \binom{-\alpha}{n - k} F(u)_k \text{ for } n \in \mathbb{N}. \end{align*} In view of the Caputo operator, the following are equivalent: \begin{align*} &(iv) &&\tau (1 - \tau^{-1})^\alpha u = F(u) + (1 - \tau^{-1})^\alpha \chi_{\mathbb{Z}_{\geq -1}} x, \\ &(v) &&u_0 = x, ((1 - \tau^{-1})^\alpha u)_{n + 1} = F(u)_n + (-1)^{n + 1} \binom{\alpha - 1}{n + 1} u_0 \text{ for } n \in \mathbb{N}, \\ &(vi) &&u_0 = x, u_{n + 1} = u_0 + \sum_{k = 0}^n (-1)^{n - k} \binom{-\alpha}{n - k} F(u)_k \text{ for } n \in \mathbb{N}. \end{align*} \end{proposition} \begin{proof} We only prove the equivalence of $(i)$, $(ii)$ and $(iii)$. \\[1ex] $(i) \Leftrightarrow (ii)$: If we evaluate $(i)$ at $n \in \mathbb{Z}$ we obtain \begin{equation*} (\tau (1 - \tau^{-1})^\alpha u)_n = ((1 - \tau^{-1})^\alpha u)_{n + 1} = F(u)_n + (\delta_{-1} x)_n. \end{equation*} Since $((1 - \tau^{-1})^\alpha u)_n = 0$ and $F(u)_n = 0$ for $n \in \mathbb{Z}_{< 0}$, since $(\delta_{-1} x)_n = x$ if and only if $n = -1$, and since $((1 - \tau^{-1})^\alpha u)_0 = u_0$, it follows that $(i)$ and $(ii)$ are equivalent. \\[1ex] $(i) \Leftrightarrow (iii)$: If we apply $(1 - \tau^{-1})^{-\alpha}$ to $(i)$ we see that $(i)$ is equivalent to \begin{equation*} \tau u = (1 - \tau^{-1})^{-\alpha} F(u) + (1 - \tau^{-1})^{-\alpha} \delta_{-1} x. \end{equation*} This equation is equivalent to $(iii)$, since \begin{equation*} ((1 - \tau^{-1})^{-\alpha} \delta_{-1} x)_n = \begin{cases} 0, &\text{if} \; n < -1, \\ (-1)^{n + 1} \binom{-\alpha}{n + 1} x, &\text{if} \; n \geq -1, \end{cases} \end{equation*} and since $\operatorname{spt} F(u) \subseteq \mathbb{N}$, \begin{equation*} ((1 - \tau^{-1})^{-\alpha} F(u))_n = \sum_{k = 0}^n (-1)^{n - k} \binom{-\alpha}{n - k} F(u)_k.
\begin{remark} Note that the right hand side $F$ in Proposition \ref{p:equ-frac-de-seq-eq}$(i), (iv)$ acts on sequences rather than on elements of $H$. If we have a function $f: H \rightarrow H$ such that for $u \in \ell_{2, \rho}(\mathbb{Z}; H)$ we have $(f(u_n))_{n \in \mathbb{Z}} \in \ell_{2, \rho}(\mathbb{Z}; H)$ we may set $F(u) \coloneqq (f(u_n))_{n \in \mathbb{Z}}$ in Proposition \ref{p:equ-frac-de-seq-eq}. \end{remark} \begin{remark}[Gr\"unwald-Letnikov difference operator] The Gr\"unwald-Letnikov difference operator is defined for $h > 0$ and $\alpha \in (0, 1)$ by (cf.\ \cite[p.\ 708]{lubich1986}): \begin{equation}\label{e:GL-op} \tilde{\Delta}_h^\alpha: V^{h \mathbb{N}} \rightarrow V^{h \mathbb{N}}, \qquad u \mapsto \left(t \mapsto \frac{1}{h^\alpha} \sum_{k = 0}^{t/h} (-1)^k \binom{\alpha}{k} u_{t - kh}\right), \end{equation} where $h \mathbb{N} = \setm{h n}{n \in \mathbb{N}}$. It can be shown (cf.\ \cite[p.\ 708]{lubich1986}, \cite[p.\ 43]{podlubny1999}) that for $V = \mathbb{R}$ the Gr\"unwald-Letnikov operator can be used to approximate the Riemann-Liouville integral of sufficiently smooth functions. Let $\alpha \in (0, 1)$. For $v \in \ell_{2, \rho}(\mathbb{N}; H)$ and $u \coloneqq \iota v$ we calculate for the Gr\"unwald-Letnikov operator \eqref{e:GL-op}, $(1 - \tau^{-1})^\alpha u = \iota \tilde{\Delta}_1^\alpha v$. Let $h > 0$, $x \in H$ and $F: H \rightarrow H$. A Gr\"unwald-Letnikov difference equation has the form \begin{equation*} (\tilde{\Delta}_h^\alpha v)(t + h) = F(v(t)), \quad v(0) = x \qquad (t \in h \mathbb{N}). \end{equation*} For $h = 1$ the Gr\"unwald-Letnikov equation resembles the Riemann-Liouville equation of Proposition \ref{p:equ-frac-de-seq-eq} and for $h \in \mathbb{R}_{> 0}$ we may treat a Gr\"unwald-Letnikov problem by considering the problem \begin{equation*} \tau (1 - \tau^{-1})^\alpha u = h^\alpha F(u) + \delta_{-1} x. \end{equation*} \end{remark} \subsection*{Linear equations on sequence spaces} \begin{remark} Let $A \in L(H)$ and $x \in H$. In view of the Riemann-Liouville difference operator we ask whether the linear equation \begin{equation}\label{e:lin-RL} \tau (1 - \tau^{-1})^\alpha u = A u + \delta_{-1} x \end{equation} of Proposition \ref{p:equ-frac-de-seq-eq} has a unique so-called causal solution that is supported in $\mathbb{N}$. In the spaces $\ell_{2, \rho}(\mathbb{Z}; H)$ we have a unique solution of \eqref{e:lin-RL} for every initial value if $\tau (1 - \tau^{-1})^{\alpha} - A$ is invertible in $\ell_{2, \rho}(\mathbb{Z}; H)$. In view of Proposition \ref{p:equ-frac-de-seq-eq}, the solution $(\tau (1 - \tau^{-1})^\alpha - A)^{-1} \delta_{-1} x$ should moreover be supported in $\mathbb{N}$, which is guaranteed if the inverse is causal. For the corresponding Caputo equation \begin{equation}\label{e:lin-C} \tau (1 - \tau^{-1})^\alpha u = A u + (1 - \tau^{-1})^\alpha \chi_{\mathbb{Z}_{\geq -1}} x, \end{equation} the treatment is similar since $\chi_{\mathbb{Z}_{\geq -1}} x = \chi_{\mathbb{N}} x + \delta_{-1} x$. \end{remark} \begin{lemma}\label{lem:invertible} Let $\alpha \in (0, 1)$ and $A \in L(H)$. We define $f: \mathbb{C}_{\abs{\cdot} > 1} \rightarrow \mathbb{C}, z \mapsto z (1 - z^{-1})^\alpha$ and set $f_\rho \coloneqq f|_{S_\rho}$ for $\rho > 1$. For $\rho > 1$ the operator $\tau (1 - \tau^{-1})^\alpha - A$ is invertible in $\ell_{2, \rho}(\mathbb{Z}; H)$ if and only if $\operatorname{ran} f_\rho \cap \sigma(A) = \emptyset$.
Moreover there is $\rho > 1$ such that for all $\mu > \rho$, $\operatorname{ran} f_\mu \cap \sigma(A) = \emptyset$, that is, $\setm{z (1 - z^{-1})^\alpha}{\abs{z} > \rho}$ is contained in the resolvent set of $A$. \end{lemma} \begin{proof} Recall the multiplication operator $\operatorname{m}$ of Lemma \ref{lem:mo}. Using the $\mathcal{Z}$ transform, the operator $\tau (1 - \tau^{-1})^\alpha - A$ is invertible in $\ell_{2, \rho}(\mathbb{Z}; H)$ if and only if $\operatorname{m} (1 - \operatorname{m}^{-1})^\alpha - A$ is invertible in $L_2(S_\rho, H)$, since $\mathcal{Z}_\rho$ is unitary. This, in turn, is the case if and only if $\operatorname{ran} f_\rho \cap \sigma(A) = \emptyset$. Using Lemma \ref{lem:binom-estimate} there is $\rho > 1$ such that for all $\mu > \rho$ and $z \in S_\mu$, $r(A) < \mu (1 - \mu^{-1})^\alpha \leq \abs{z (1 - z^{-1})^\alpha}$. That is, for all $\mu > \rho$, $\operatorname{ran} f_\mu \cap \sigma(A) = \emptyset$. \end{proof} \begin{proposition}[Causality of $(\tau (1 - \tau^{-1})^{\alpha} - A)^{-1}$]\label{p:causality-lin-frac} Let $\rho > 1$, $\alpha \in (0, 1)$ and $A \in L(H)$. Let $f_\rho$ be defined as in Lemma \ref{lem:invertible}. The following are equivalent: \begin{align*} &(i) &&(\tau (1 - \tau^{-1})^\alpha - A)^{-1} \in L\big(\ell_{2, \rho}(\mathbb{Z}; H)\big) \; \text{is causal}, \\ &(ii) &&(\tau (1 - \tau^{-1})^\alpha - A)^{-1} \in L\big(\ell_{2, \rho}(\mathbb{Z}; H)\big) \\& &&\text{and} \qquad \forall x \in H \colon \operatorname{spt} (\tau (1 - \tau^{-1})^{\alpha} - A)^{-1} \delta_{-1} x \subseteq \mathbb{N}, \\ &(iii) &&\forall \mu \geq \rho \colon \operatorname{ran} f_\mu \cap \sigma(A) = \emptyset. \end{align*} \end{proposition} \begin{proof} $(i) \Rightarrow (ii)$: Let $x \in H$ and $u \coloneqq (\tau (1 - \tau^{-1})^\alpha - A)^{-1} \delta_{-1} x$. Using causality assumed in $(i)$, we obtain $\operatorname{spt} u \subseteq \mathbb{Z}_{\geq -1}$. Moreover, $u_{-1} = ((1 - \tau^{-1})^{-\alpha} A u)_{-2} + ((1 - \tau^{-1})^{-\alpha} \delta_{-1} x)_{-2} = 0$ so that $\operatorname{spt} u \subseteq \mathbb{N}$.
\\[1ex] $(ii) \Rightarrow (iii)$: By $(ii)$ and Lemma \ref{lem:invertible} we have $\operatorname{ran} f_\rho \cap \sigma(A) = \emptyset$. Suppose by contradiction that there is $\rho' > \rho$ with $\operatorname{ran} f_{\rho'} \cap \sigma(A) \neq \emptyset$. The set $\setm{z \in \mathbb{C}_{\abs{\cdot} \geq \rho'}}{z(1 - z^{-1})^\alpha \in \sigma(A)}$ is closed, since $\sigma(A)$ is closed and $f$ is continuous; it is bounded, since by Lemma \ref{lem:invertible} there is $\tilde \rho > \rho'$ such that $f(\mathbb{C}_{\abs{\cdot} \geq \tilde \rho})$ lies in the resolvent set of $A$. Thus there is $z' \in \setm{z \in \mathbb{C}_{\abs{\cdot} \geq \rho'}}{z(1 - z^{-1})^\alpha \in \sigma(A)}$ with maximum absolute value. Therefore there is a sequence $(z_n)_{n \in \mathbb{N}}$ in $\mathbb{C}$ with $\abs{z_n} > \abs{z'}$, so that $z_n (1 - z_n^{-1})^\alpha$ lies in the resolvent set of $A$ ($n \in \mathbb{N}$), and $\lim_{n \rightarrow \infty} z_n = z'$. Using the resolvent estimate (cf.\ \cite[p.~378]{werner2000}), we have $\lim_{n \rightarrow \infty} \norm{(z_n(1 - z_n^{-1})^\alpha - A)^{-1}}_{L(H)} = \infty$. By the Banach-Steinhaus theorem (cf.\ \cite[p.~141]{werner2000}), after passing to a subsequence, there is $x \in H$ with $\lim_{n \rightarrow \infty} \norm{(z_n(1 - z_n^{-1})^\alpha - A)^{-1} x}_H = \infty$. By assumption $(\tau (1 - \tau^{-1})^\alpha - A)^{-1} \delta_{-1} x \in \ell_{2, \rho}(\mathbb{Z}; H)$ and $\operatorname{spt} (\tau (1 - \tau^{-1})^\alpha - A)^{-1} \delta_{-1} x \subseteq \mathbb{N}$. Hence $v \coloneqq (\tau (1 - \tau^{-1})^\alpha - A)^{-1} \delta_{0} x \in \ell_{2, \rho}(\mathbb{Z}; H)$ satisfies $\operatorname{spt} v \subseteq \mathbb{N}$. Applying Lemma \ref{lem:positive-support}, it follows that $F: \mathbb{C}_{\abs{\cdot} > \rho} \rightarrow H, z \mapsto \sum_{k = -\infty}^\infty v_k z^{-k}$ is analytic. Since $v \in \ell_{2, \mu}(\mathbb{Z}; H)$ for $\mu > \abs{z'}$, it follows that for $G: \mathbb{C}_{\abs{\cdot} > \abs{z'}} \rightarrow H, z \mapsto (z(1 - z^{-1})^\alpha - A)^{-1} x$ we have $G = F|_{\mathbb{C}_{\abs{\cdot} > \abs{z'}}}$. This means that $\lim_{n \rightarrow \infty} \norm{F(z_n)}_H = \lim_{n \rightarrow \infty} \norm{G(z_n)}_H = \infty$. On the other hand, $F$ is continuous at $z'$ (note that $\abs{z'} \geq \rho' > \rho$), so $\lim_{n \rightarrow \infty} \norm{F(z_n)}_H = \norm{F(z')}_H < \infty$, a contradiction.
\\[1ex] $(iii) \Rightarrow (i)$: We have $(\tau (1 - \tau^{-1})^\alpha - A)^{-1} \in L(\ell_{2, \mu}(\mathbb{Z}; H))$ for $\mu \geq \rho$ by Lemma \ref{lem:invertible}. Since the resolvent of $A$ is analytic, the mapping $z \mapsto (z (1 - z^{-1})^\alpha - A)^{-1}$ is analytic on $\mathbb{C}_{\abs{\cdot} > \rho}$. Moreover the mapping $z \mapsto \norm{(z (1 - z^{-1})^\alpha - A)^{-1}}_{L(H)}$ is continuous and hence bounded on compact sets $\mathbb{C}_{\mu \geq \abs{\cdot} \geq \rho}$ where $\mu \geq \rho$, i.e., the mapping attains its maximum on $\mathbb{C}_{\mu \geq \abs{\cdot} \geq \rho}$. By Lemma \ref{lem:binom-estimate} and since $A$ is bounded, $\sup_{z \in S_\mu} \norm{(z (1 - z^{-1})^\alpha - A)^{-1}}_{L(H)}$ decays to zero when $\mu$ tends to infinity. It follows that $\mu \mapsto \sup_{z \in S_\mu} \norm{(z (1 - z^{-1})^\alpha - A)^{-1}}_{L(H)}$ is bounded on $[\rho, \infty)$ and therefore the conditions of Lemma \ref{lem:positive-support}$(ii)$ are satisfied for $(\tau (1 - \tau^{-1})^\alpha - A)^{-1} u$ where $u \in \ell_{2, \rho}(\mathbb{Z}; H)$, $\operatorname{spt} u \subseteq \mathbb{N}$. Hence $(\tau (1 - \tau^{-1})^\alpha - A)^{-1}$ is causal. \end{proof}
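The growth estimate behind Lemma \ref{lem:invertible}, namely $\min_{z \in S_\mu} \abs{z(1 - z^{-1})^\alpha} = \mu(1 - \mu^{-1})^\alpha \rightarrow \infty$ as $\mu \rightarrow \infty$, so that $\operatorname{ran} f_\mu$ eventually avoids the bounded set $\sigma(A)$, can be observed numerically. The following Python sketch (ours, purely illustrative; the value of $\alpha$ and the sampling grid are arbitrary choices) compares the sampled minimum over $S_\mu$ with the closed-form lower bound.
\begin{verbatim}
import numpy as np

alpha = 0.7
for mu in (1.5, 2.0, 4.0, 8.0, 16.0):
    # sample the circle S_mu and evaluate f(z) = z (1 - z^{-1})^alpha
    z = mu * np.exp(1j * np.linspace(0.0, 2.0 * np.pi, 4001))
    sampled_min = np.min(np.abs(z * (1.0 - 1.0 / z) ** alpha))
    bound = mu * (1.0 - 1.0 / mu) ** alpha
    print(f"mu = {mu:5.1f}: min |f(z)| = {sampled_min:9.4f}, "
          f"mu (1 - 1/mu)^alpha = {bound:9.4f}")
\end{verbatim}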
\begin{remark} Let $A \in L(H)$, $\rho > 1$ and $\alpha \in (0, 1)$. By Lemma \ref{lem:invertible} and Proposition \ref{p:causality-lin-frac} we can always choose $\rho$ large enough such that $\tau (1 - \tau^{-1})^\alpha - A$ is invertible with causal inverse. As a consequence the linear fractional difference equation \eqref{e:lin-RL} or \eqref{e:lin-C} has a unique solution $u \in \ell_{2, \rho}(\mathbb{Z}; H)$. Moreover, from Proposition \ref{p:causality-lin-frac} it follows that \eqref{e:lin-RL} or \eqref{e:lin-C} has a unique solution in $\ell_{2, \mu}(\mathbb{Z}; H)$ for $\mu \geq \rho$ which coincides with the solution $u$, since $\ell_{2, \rho}(\mathbb{N}; H) \subseteq \ell_{2, \mu}(\mathbb{N}; H)$. Therefore we can speak of the solution operator $(\tau (1 - \tau^{-1})^\alpha - A)^{-1}$. The difference equation for an initial value $x \in H$ and $A \in L(H)$ \begin{equation*} (\Delta^\alpha u)_n = A u_n, \qquad u_0 = x, \end{equation*} or \begin{equation*} (\Delta_C^\alpha u)_n = A u_n, \qquad u_0 = x, \end{equation*} can be solved algebraically with a unique solution $u \in H^\mathbb{N}$ (cf.\ Proposition \ref{p:equ-frac-de-seq-eq}$(iii), (vi)$). Recall the embedding $\iota$ of Lemma \ref{lem:one-sided}. Since $A$ has bounded spectrum, when applying the previous results, there is $\rho > 1$ such that $\iota u \in \ell_{2, \rho}(\mathbb{Z}; H)$ is the unique solution of \eqref{e:lin-RL} or \eqref{e:lin-C}. \end{remark} \subsection*{Asymptotic stability} We discuss asymptotic stability of linear fractional difference equations. For an analysis of rates of convergence, see also \cite{czermak2015} and \cite{Tuan2018}. \begin{definition}[Asymptotic stability] Let $A \in L(H)$. The zero equilibrium of equation \eqref{e:lin-RL} or \eqref{e:lin-C}, i.e., the solution $u = 0$ for the initial value $0$, is said to be asymptotically stable if for every $\rho > 1$, every solution $u \in \ell_{2, \rho}(\mathbb{Z}; H)$ of \eqref{e:lin-RL} or \eqref{e:lin-C} with $\operatorname{spt} u \subseteq \mathbb{N}$ satisfies $\lim_{n \rightarrow \infty} u_n = 0$ in $H$. \end{definition} \begin{remark}\label{r:good-spaces} If a sequence $u \in H^\mathbb{Z}$ satisfies $\operatorname{spt} u \subseteq \mathbb{N}$ and $\lim_{n \rightarrow \infty} u_n = 0$ then necessarily for all $\rho > 1$ we have $u \in \ell_{2, \rho}(\mathbb{Z}; H)$. One could say that the spaces $\ell_{2, \rho}(\mathbb{Z}; H)$, $\rho > 1$, are large enough to look for asymptotically stable solutions of a linear sequence equation. \end{remark} \begin{proposition}[Necessary condition for asymptotic stability]\label{p:necessary} Let $A \in L(H)$ such that the zero equilibrium of equation \eqref{e:lin-RL} or \eqref{e:lin-C} is asymptotically stable and let $f_\mu$ ($\mu > 1$) be as in Lemma \ref{lem:invertible}. Then for all $\mu > 1$, $\tau(1 - \tau^{-1})^\alpha - A$ is invertible in $\ell_{2, \mu}(\mathbb{Z}; H)$ with causal inverse, i.e., for each $\mu > 1$, $\sigma(A) \cap \operatorname{ran} f_\mu = \emptyset$. \end{proposition} \begin{proof} Assume by contradiction that there are $\rho > 1$ and $z' \in S_\rho$ with $z'(1 - z'^{-1})^\alpha \in \sigma(A)$. We may assume that $\operatorname{ran} f_\mu \cap \sigma(A) = \emptyset$ for $\mu > \abs{z'}$. Then there is a sequence $(z_n)_{n \in \mathbb{N}}$ with $\abs{z_n} > \abs{z'}$ such that $z_n (1 - z_n^{-1})^\alpha$ is in the resolvent set of $A$ ($n \in \mathbb{N}$) and such that $z_n \rightarrow z'$ ($n \rightarrow \infty$). Using the resolvent estimate we have $\lim_{n \rightarrow \infty} \norm{(z_n (1 - z_n^{-1})^\alpha - A)^{-1}}_{L(H)} = \infty$.
Using the Banach-Steinhaus theorem, after passing to a subsequence, there is $x \in H$ with $\lim_{n \rightarrow \infty} \norm{(z_n (1 - z_n^{-1})^\alpha - A)^{-1} x}_H = \infty$. By Lemma \ref{lem:invertible} and Proposition \ref{p:causality-lin-frac}, for $\mu > \abs{z'}$ we know that $\tau (1 - \tau^{-1})^\alpha - A$ is invertible in $\ell_{2, \mu}(\mathbb{Z}; H)$ and $v \coloneqq (\tau (1 - \tau^{-1})^\alpha - A)^{-1} \delta_0 x$ satisfies $\operatorname{spt} v \subseteq \mathbb{N}$. Since the zero equilibrium is asymptotically stable, we have $v \in \ell_{2, \rho'}(\mathbb{Z}; H)$ for some $\rho' \in (1, \abs{z'})$ by Remark \ref{r:good-spaces}. Then the mapping $F: \mathbb{C}_{\abs{\cdot} > \rho'} \rightarrow H, z \mapsto \sum_{k = -\infty}^\infty v_k z^{-k}$ is analytic and equals $G: \mathbb{C}_{\abs{\cdot} > \abs{z'}} \rightarrow H, z \mapsto (z(1 - z^{-1})^\alpha - A)^{-1} x$ on $\mathbb{C}_{\abs{\cdot} > \abs{z'}}$ by Lemma \ref{lem:positive-support}. On the other hand, $F$ is continuous at $z'$, so $\lim_{n \rightarrow \infty} \norm{F(z_n)}_H = \norm{F(z')}_H < \infty$, which contradicts $\lim_{n \rightarrow \infty} \norm{F(z_n)}_H = \lim_{n \rightarrow \infty} \norm{G(z_n)}_H = \infty$. \end{proof} For a sufficient condition of asymptotic stability we observe that if $u \in \ell_{2, 1}(\mathbb{Z}; H)$ with $\operatorname{spt} u \subseteq \mathbb{N}$ then $\lim_{n \rightarrow \infty} u_n = 0$. \begin{proposition}[Sufficient condition for asymptotic stability]\label{p:sufficient} Let $A \in L(H)$. For all $\rho > 1$ let $\tau(1 - \tau^{-1})^\alpha - A$ be invertible in $\ell_{2, \rho}(\mathbb{Z}; H)$ with causal inverse. If for all $x \in H$ the mapping $\mathbb{C}_{\abs{\cdot} > 1} \rightarrow H, z \mapsto \sum_{k = -\infty}^\infty [(\tau(1 - \tau^{-1})^\alpha - A)^{-1} \delta_{-1} x]_k z^{-k}$ has a continuous continuation to the unit circle $S_1$ then the zero equilibrium of equation \eqref{e:lin-RL} or \eqref{e:lin-C} is asymptotically stable. \end{proposition} \begin{proof} Let $g$ be the continuous continuation. Then $g|_{S_1} \in L_2(S_1, H)$ and $v \coloneqq \mathcal{Z}_1^{-1} g|_{S_1} \in \ell_{2, 1}(\mathbb{Z}; H)$. Moreover, for each $x \in H$ the solution $u \coloneqq (\tau(1 - \tau^{-1})^\alpha - A)^{-1} \delta_{-1} x$ satisfies $u = v$, that is, $u \in \ell_{2, 1}(\mathbb{Z}; H)$ and hence $\lim_{n \rightarrow \infty} u_n = 0$. \end{proof}
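The recursion in Proposition \ref{p:equ-frac-de-seq-eq}$(iii)$ can be iterated numerically. The following Python sketch (ours, purely illustrative; the choices $\alpha = 1/2$, $u_0 = 1$ and $400$ steps are arbitrary) does so for $H = \mathbb{C}$ and $F(u) = \lambda u$; one observes decay of $u_n$ for $\lambda \in (-2^\alpha, 0)$ and growth outside $[-2^\alpha, 0]$, in accordance with the scalar example discussed below.
\begin{verbatim}
def rl_orbit(alpha, lam, u0=1.0, steps=400):
    # w[j] = (-1)^j * binom(-alpha, j) = binom(alpha + j - 1, j),
    # computed by the recursion w[j] = w[j-1] * (alpha + j - 1) / j
    w = [1.0]
    for j in range(1, steps + 1):
        w.append(w[-1] * (alpha + j - 1) / j)
    u = [u0]
    for n in range(steps):
        u.append(w[n + 1] * u0
                 + sum(w[n - k] * lam * u[k] for k in range(n + 1)))
    return u

alpha = 0.5
for lam in (-0.5, -1.0, -2.0**alpha - 0.5, 0.5):
    u = rl_orbit(alpha, lam)
    print(f"lambda = {lam:+.3f}:  |u_400| = {abs(u[-1]):.3e}")
\end{verbatim}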
\begin{remark} We believe that the necessary conditions for stability in Proposition \ref{p:necessary} are not sufficient, nor are the sufficient conditions for stability in Proposition \ref{p:sufficient} necessary. Already for semigroups, asymptotic stability cannot in general be characterized by spectral conditions alone. The shift operator on continuous functions from $\mathbb{R}^+$ to $\mathbb{R}$ which decay at infinity, for example, is asymptotically stable although its spectrum consists of all complex numbers with non-positive real part \cite[Example 2.5(c)]{Arendt:Batty1988}. The characterization of asymptotic stability for linear fractional difference equations is an intricate problem which still needs to be addressed. \end{remark} \begin{example} Let $H = \mathbb{C}$, $A: \mathbb{C} \rightarrow \mathbb{C}, z \mapsto \lambda z$ where $\lambda \in \mathbb{R}$ and $\alpha \in (0, 1)$. We study the asymptotic behavior of the linear fractional equations \eqref{e:lin-RL} and \eqref{e:lin-C} on $\ell_{2, \rho}(\mathbb{Z}; H)$ ($\rho > 1$) in view of Proposition \ref{p:necessary} and Proposition \ref{p:sufficient} and therefore apply the $\mathcal{Z}$ transform to equations \eqref{e:lin-RL} and \eqref{e:lin-C}. In order to obtain an asymptotically stable zero equilibrium by Proposition \ref{p:necessary}, we must have $\sigma(A) \cap \operatorname{ran} f = \emptyset$ where $f: \mathbb{C}_{\abs{\cdot} > 1} \rightarrow \mathbb{C}, z \mapsto z (1 - z^{-1})^\alpha$ is defined as in Lemma \ref{lem:invertible} and $\sigma(A) = \{\lambda\}$. We remark that for $z \in \mathbb{C}_{\abs{\cdot} > 1}$, $f(z) \in \mathbb{R}$ if and only if $z \in \mathbb{R}$ since $f$ is injective and since $f(\overline z) = \overline{f(z)}$. Moreover $f(\mathbb{C}_{\abs{\cdot} > 1} \cap \mathbb{R}) = (-\infty, -2^\alpha) \cup (0, \infty)$ and so $\lambda \notin \operatorname{ran} f$ if and only if $\lambda \in [-2^\alpha, 0]$. By Proposition \ref{p:necessary} we necessarily have $\lambda \in [-2^\alpha, 0]$ if the zero equilibrium of \eqref{e:lin-RL} or \eqref{e:lin-C} is asymptotically stable. Let $\lambda \in [-2^\alpha, 0]$. For $u \in \ell_{2, \rho}(\mathbb{Z}; H)$ we write $\hat{u} \coloneqq \mathcal{Z} u$. We consider \eqref{e:lin-RL} with $x \in \mathbb{C}$ first. Note that for $z \in S_\rho$ we have $(\mathcal{Z} \delta_{-1} x)(z) = z x$. Applying the $\mathcal{Z}$ transform to equation \eqref{e:lin-RL}, we obtain for $z \in S_\rho$ \begin{equation*} z (1 - z^{-1})^\alpha \hat{u}(z) = A \hat{u}(z) + z x.
\end{equation*} If $\lambda \in (-2^\alpha, 0)$ the mapping $\mathbb{C}_{\abs{\cdot} > 1} \rightarrow H, z \mapsto \frac{z x}{z (1 - z^{-1})^\alpha - \lambda}$ has a continuous continuation to $S_1$ and by Proposition \ref{p:sufficient} we obtain that the zero equilibrium of \eqref{e:lin-RL} is asymptotically stable. We now consider equation \eqref{e:lin-C} where $x \in \mathbb{C}$. For $z \in S_\rho$ we have $(\mathcal{Z} \chi_{\mathbb{Z}_{\geq -1}} x)(z) = \frac{z x}{1 - z^{-1}}$. Applying the $\mathcal{Z}$ transform to equation \eqref{e:lin-C}, we obtain for $z \in S_\rho$ \begin{equation*} z (1 - z^{-1})^\alpha \hat{u}(z) = A \hat{u}(z) + z (1 - z^{-1})^{\alpha - 1} x. \end{equation*} If $\lambda \in (-2^\alpha, 0)$ the mapping $\mathbb{C}_{\abs{\cdot} > 1} \rightarrow H, z \mapsto \frac{z (1 - z^{-1})^{\alpha - 1} x}{z (1 - z^{-1})^\alpha - \lambda}$ has a continuous continuation to $S_1$ and using Proposition \ref{p:sufficient} we obtain that the zero equilibrium of \eqref{e:lin-C} is asymptotically stable. The cases $\lambda = 0$ and $\lambda = -2^\alpha$ are discussed in \cite{czermak2015}. \end{example} \section*{Acknowledgement} The research of A.B.\ and A.C.\ was funded by the National Science Centre in Poland granted according to decisions DEC-2015/19/D/ST7/03679 and DEC-2017/25/B/ST7/02888, respectively. The research of M.N.\ was supported by the Polish National Agency for Academic Exchange according to the decision PPN/BEK/2018/1/00312/DEC/1. The research of S.S.\ was partially supported by an Alexander von Humboldt Polish Honorary Research Fellowship. The work of H.T.\ Tuan was supported by the joint research project from RAS and VAST QTRU03.02/18-19.
\section{Introduction} In \cite{ES} Ellingsrud and Str{\o}mme compute the number of twisted cubic curves on some smooth projective hypersurfaces and complete intersections. Their arguments rely on understanding the closure $H_n$ in an appropriate Hilbert scheme of the locus of smooth twisted cubics in $\P^n$, and some vector bundles on $H_n$ arising in the resolution of the ideal sheaf of the universal curve. They introduce the standard torus action on $\P^n$ and using Bott's residue formula arrive at their computation; note that the end result is not a closed formula valid in all cases, but rather gives an algorithm which they implement to give concrete numbers in some cases of interest, including the famous quintic threefold. The main goal in this paper is to refine the integer-valued computations of Ellingsrud and Str{\o}mme to elements in the Grothendieck-Witt ring $\GW(k)$, where $k$ is the field of definition of the given hypersurface or complete intersection. For this, we need to look a bit more closely at the resolutions described in \cite{ES} and \cite{EPS} in order to construct a so-called {\em relative orientation} of the vector bundle whose sections describe the twisted cubic curves contained in a given hypersurface (the section depending on the selected hypersurface). Once we have this information, we apply the quadratically refined version of Bott's residue formula, found in \cite[Theorem 9.5]{LevineAB}. This allows us to proceed essentially as in \cite{ES}, although one does need to extend the torus action to an action of the normalizer $N_{\operatorname{SL}_2}$ of the torus in $\operatorname{SL}_2$ in order to retain the quadratic information, and in addition one needs to keep careful track of the orientation condition, which boils down to keeping careful track of signs. In addition to the numerical condition on the degrees of the complete intersection $X$ that is needed in order to expect only finitely many twisted cubics on $X$, the requirement of a relative orientation imposes an additional congruence. For instance, in the case of a hypersurface of degree $m$ in $\P^n$, the numerical condition is $3m+1=4n$, and for the relative orientation one needs $n$ to be even and $m\equiv1\mod 4$ (see Proposition~\ref{prop:relOrientationObservations}). Just as in \cite{ES}, we do not have a closed-form formula for the quadratic count, but we have a number of examples of concrete computer-assisted computations. We should add that the localization methods of \cite{LevineAB} destroy the two-torsion information in $\GW(k)$, so {\it a priori} we retain only the information given by the signature function for each real embedding of the field of definition $k$ (as well as the rank information given by the numerical count of Ellingsrud and Str{\o}mme); this does give a complete computation for $k={\mathbb R}$. However, we show that the {\em problem} being posed lives over ${\mathbb Z}[1/6]$, so one does have an answer in the form of an element of $\GW({\mathbb Z}[1/6])$, which yields the computation over an arbitrary base-field $k$ of characteristic $\neq 2,3$ by base-extension (see \S\ref{sec:Integrality}, especially Corollary~\ref{cor:Integrality}).
Noting that $\GW({\mathbb Z})\to \GW({\mathbb R})$ is an isomorphism, and that the cokernel of $\GW({\mathbb Z})\to \GW({\mathbb Z}[1/6])$ is killed by passage to $\GW(k)$ for ${\mathbb Z}[1/6]\subset k$ if $2$ and $3$ are squares in $k$, we see that the signature and rank yield a complete answer for all such fields (for example, for the fields ${\mathbb F}_p$ with $p\equiv \pm 1\mod 24$, i.e., for roughly 1/4 of these fields). See Remark~\ref{rem:Refinement} for details. We computed the count in all relatively oriented cases where $n\le 12$, feeding our computations into a Macaulay2 program. Here are the computations in tabular form. \\[10pt] \scalebox{0.92}{ \begin{tabular}{|c|c|r|r|} \hline $n$&degree(s)&signature&rank\\ \hline \hline 4&(5)&765&317206375\\ \hline 5&(3,3)&90&6424326\\ \hline 10&(13)&768328170191602020&794950563369917462703511361114326425387076\\ \hline 11&(3,11)&4407109540744680&31190844968321382445502880736987040916\\ \hline 11&(5,9)&313563865853700&163485878349332902738690353538800900\\ \hline 11&(7,7)&136498002303600&31226586782010349970656128100205356\\ \hline 12&(3,3,9)&43033957366680&3550223653760462519107147253925204\\ \hline 12&(3,5,7)&5860412510400&67944157218032107464152121768900\\ \hline 12&(5,5,5)&1833366298500&6807595425960514917741859812500\\ \hline \end{tabular}} \ \\[2pt] Note that among the above cases, the variety is Calabi-Yau only for $n=4$, $m=(5)$ and $n=5$, $m=(3,3)$. The signature is what is computed in this paper and gives a signed count of the real twisted cubics in a real hypersurface or complete intersection. Each curve is counted with its ``local quadratic degree'', which we have not attempted to discuss here; note that in any case, the signed count gives a lower bound for the number of real twisted cubics, assuming the local degrees are all $\pm1$, which we expect to be the case for the general hypersurface or complete intersection. The rank is the classical count of the twisted cubics in a hypersurface or complete intersection over an algebraically closed field: the cases $(4,(5))$ and $(5, (3,3))$ are given in \cite{ES}, and the other cases were computed by transforming the formulas of \cite{ES} into a Macaulay2 program. This program, and the one computing the signatures, were written by the second author. Putting these together gives the quadratic count $Q$, \[ Q:=s+\frac{r-s}{2}\cdot H, \] valid for a field $k$ of characteristic $\neq 2,3$ for which both $2$ and $3$ are squares in $k$, where $s$ is the signature, $r$ is the rank, and $H$ is the hyperbolic form $H(x,y)=x^2-y^2$. For a general $k$ (of characteristic $\neq 2,3$), \[ Q=s+\frac{r-s}{2}\cdot H+\epsilon_1(\<2\>-1)+\epsilon_2(\<3\>-1)+\epsilon_3(\<6\>-1) \] with the $\epsilon_j$ constants depending only on $n, m_*$, and with $\epsilon_1, \epsilon_2\in \{0,1\}$, $\epsilon_3\in \{0,1,2,3\}$ (see Lemma~\ref{lem:Coker}). The first named author thanks John Christian Ottem for suggesting the quadratic count of twisted cubics as an application of the quadratic equivariant localization method, and Raman Parimala for her help with understanding aspects of the Grothendieck-Witt ring of ${\mathbb Z}[1/d]$. Unless explicitly mentioned otherwise, we work over a field $k$ of characteristic $\neq 2,3$.
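For concreteness, the quadratic count is recovered from a row $(s, r)$ of the table by elementary arithmetic: whenever $2$ and $3$ are squares in $k$ we have $Q=\frac{r+s}{2}\<1\>+\frac{r-s}{2}\<-1\>$, and in particular $r\equiv s\mod 2$. The following small Python script (ours; it is not one of the Macaulay2 programs mentioned above) carries this out for a few rows.
\begin{verbatim}
# diagonalize Q = s + ((r - s)/2) H, H = <1> + <-1>, for rows of the table
rows = [
    (4,  (5,),      765,           317206375),
    (5,  (3, 3),    90,            6424326),
    (12, (5, 5, 5), 1833366298500, 6807595425960514917741859812500),
]
for n, degrees, s, r in rows:
    assert (r - s) % 2 == 0  # rank and signature always have equal parity
    plus, minus = (r + s) // 2, (r - s) // 2
    print(f"n={n}, degrees={degrees}: Q = {plus}<1> + {minus}<-1>")
\end{verbatim}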
\section{Eagon-Northcott complexes and flat families} For use throughout the paper, we recall the simplest case of the Eagon-Northcott complex. Let $R$ be a commutative ring and let $A=(a_{ij})$ be a $3\times2$ matrix with the $a_{ij}\in R$. For $1\le i<j\le 3$, let $e_{ij}(A)$ denote the determinant of the $2\times2$ submatrix of $A$ with rows $i,j$ and let $(A)\subset R$ denote the ideal $(e_{12}(A), e_{13}(A), e_{23}(A))$. Let $B:R^3\to R$ be the map with matrix $(e_{23}(A), -e_{13}(A), e_{12}(A))$. We identify $B$ with $\bigwedge^2A^t$, giving us the Eagon-Northcott complex \begin{equation}\label{eqn:EN1} 0\to R^2\xrightarrow{A} R^3\xrightarrow{\bigwedge^2A^t}R\to R/(A)\to 0 \end{equation} \begin{theorem}[Eagon-Northcott]\label{thm:EN} Suppose that $R$ is a Cohen-Macaulay ring and $(A)$ is not the unit ideal. \\[5pt] 1. Each minimal prime ideal of $(A)$ has height $\le 2$.\\[2pt] 2. The sequence \eqref{eqn:EN1} is exact if and only if each minimal prime ideal of $(A)$ has height $=2$. \end{theorem} This follows from \cite[Theorem 1, Theorem 2]{EN} in the special case $r=3, s=2$ and taking $R$ to be Cohen-Macaulay to identify the grade of an ideal with the minimum height of an associated prime. \begin{definition} Let $R$ be a ring and consider the polynomial ring $R[X_0, X_1, X_2, X_3]$. Let \[ A=a_1X_1+a_2X_2+a_3X_3,\ B=b_2X_2+b_3X_3,\ C=c_2X_2+c_3X_3 \] be linear forms in $R[X_0, X_1, X_2, X_3]$ and let $q=AX_1^2+BX_1X_2+CX_2^2$. Let \[ \alpha_t(q)=\begin{pmatrix}tC&tB-X_0\\X_0&tA\\-X_1&X_2\end{pmatrix} \] We say that $q$ is {\em non-degenerate} if $(a_1, a_2, a_3, b_2, b_3, c_2, c_3)$ is the unit ideal in $R$, or equivalently, $q$ is non-zero modulo each maximal ideal in $R$. Let $J(q)\subset R[t][X_0, X_1, X_2, X_3]$ be the ideal generated by the determinants of the $2\times2$ submatrices of $\alpha_t(q)$ and let $I(q)=(J(q), q)\subset R[t][X_0, X_1, X_2, X_3]$. We let ${\mathcal J}(q)\subset {\mathcal I}(q)\subset {\mathcal O}_{\P^3_{R[t]}}$ denote the corresponding ideal sheaves. Let $C(q)\subset \P^3_{R[t]}$ be the subscheme defined by the ideal sheaf ${\mathcal I}(q)$ and let $C'(q)\subset \P^3_{R[t]}$ be the subscheme defined by the ideal sheaf ${\mathcal J}(q)$. \end{definition} \begin{lemma}\label{lem:TorsionFree} Suppose that $R$ is locally a UFD and $q$ is non-degenerate. Then \\[5pt] 1. $C'(q)\cap \P^3_{R[t, t^{-1}]}=C(q)\cap \P^3_{R[t, t^{-1}]}$\\[2pt] 2. $C'(q)\subset \P^3_{R[t]}$ has pure codimension two, and has relative dimension one over ${\rm Spec\,} R[t,t^{-1}]$.\\[2pt] 3. The sheaf ${\mathcal O}_{C(q)}$ is $t$-torsion free. \end{lemma} \begin{proof} The statement is local in $R$, so we may assume that $R$ is a local UFD. We have the exact sequence \[ 0\to {\mathcal I}(q)/{\mathcal J}(q)\to {\mathcal O}_{C'(q)}\to {\mathcal O}_{C(q)}\to 0 \] We note that ${\mathcal I}(q)/{\mathcal J}(q)$ is generated by the image $[q]$ of $q$, that is, we have the surjection \[ p_q:{\mathcal O}_{\P^3_{R[t]}}(-3)\to {\mathcal I}(q)/{\mathcal J}(q) \] sending $1$ to $[q]$. We claim that the kernel of $p_q$ is the ideal generated by $(X_0, t)$. To see this, we have the identity \[ tq=X_1e_{23}(\alpha_t(q))+X_2e_{13}(\alpha_t(q))\in J(q), \] so $t[q]=0$. This shows that $t{\mathcal I}(q)\subset {\mathcal J}(q)$, so $({\mathcal I}/{\mathcal J})/t({\mathcal I}/{\mathcal J})={\mathcal I}/{\mathcal J}$ and also proves (1). Moreover, the kernel of $p_q$ is the same as the kernel of the induced map \[ \tilde{p}_q:{\mathcal O}_{\P^3_{R[t]}}(-3)\to {\mathcal O}_{C'(q)}={\mathcal O}_{\P^3_{R[t]}}/{\mathcal J}(q). \] Let $\bar{{\mathcal J}}(q)$ be the image of ${\mathcal J}(q)$ in ${\mathcal O}_{\P^3_{R[t]}}/(t)={\mathcal O}_{\P^3_R}$, that is \[ \bar{{\mathcal J}}(q)=(X_0X_2, X_0X_1, X_0^2){\mathcal O}_{\P^3_R}.
\] We have \[ {\mathcal O}_{C'(q)}/t {\mathcal O}_{C'(q)}={\mathcal O}_{\P^3_{R[t]}}/(t, {\mathcal J}(q))={\mathcal O}_{\P^3_R}/\bar{{\mathcal J}}(q). \] Letting $\bar{q}$ denote the image of $q$ in ${\mathcal O}_{\P^3_R}/\bar{{\mathcal J}}(q)$, we claim that the annihilator of $\bar{q}$ in ${\mathcal O}_{\P^3_R}$ is $(X_0)$. To see this, if we take a homogeneous $\alpha\in R[X_0,\ldots, X_3]$ with $\alpha\cdot q\in (X_0X_2, X_0X_1, X_0^2)$, then \[ \alpha\cdot q=X_0(a\cdot X_0+b\cdot X_1+c\cdot X_2) \] for some $a,b,c\in R[X_0,\ldots, X_3]$. Since $q$ is non-degenerate and $R$ is a UFD, $X_0$ divides $\alpha$, so $\alpha$ is in $(X_0)$. Conversely, a direct computation shows \[ X_0\cdot q=(BX_1+CX_2)\cdot e_{23}(\alpha_t(q))-AX_1\cdot e_{13}(\alpha_t(q))\in J(q) \] This shows that ${\operatorname{Ann}}_{{\mathcal O}_{\P^3_R}}(\bar{q})=(X_0){\mathcal O}_{\P^3_R}$ and in addition that \[ {\operatorname{Ann}}_{{\mathcal O}_{\P^3_{R[t]}}}([q])\supset (X_0, t){\mathcal O}_{\P^3_{R[t]}}= {\operatorname{Ann}}_{{\mathcal O}_{\P^3_{R[t]}}}(\bar{q})\supset {\operatorname{Ann}}_{{\mathcal O}_{\P^3_{R[t]}}}([q]) \] so $ {\operatorname{Ann}}_{{\mathcal O}_{\P^3_{R[t]}}}([q])= (X_0, t){\mathcal O}_{\P^3_{R[t]}}$, as claimed. This also shows that the map \[ {\mathcal I}(q)/{\mathcal J}(q)= {\mathcal I}(q)/{\mathcal J}(q)\otimes_{R[t]}R\to {\mathcal O}_{C'(q)}\otimes_{R[t]}R= {\mathcal O}_{\P^3_R}/\bar{{\mathcal J}}(q) \] induced by the inclusion ${\mathcal I}(q)/{\mathcal J}(q)\subset {\mathcal O}_{C'(q)}$ is injective, since $[q]$ has the same annihilator in ${\mathcal O}_{\P^3_{R[t]}}$ as does $\bar{q}$, and gives us the presentation of ${\mathcal I}(q)/{\mathcal J}(q)$ via the Koszul complex \begin{multline}\label{multline:Koszul} 0\to {\mathcal O}_{\P^3_{R[t]}}(-4)\xrightarrow{\begin{pmatrix}X_0\\-t\end{pmatrix}} {\mathcal O}_{\P^3_{R[t]}}(-3)\oplus {\mathcal O}_{\P^3_{R[t]}}(-4)\\\xrightarrow{\displaystyle(t, X_0)} {\mathcal O}_{\P^3_{R[t]}}(-3)\to {\mathcal I}(q)/{\mathcal J}(q)\to 0 \end{multline} We now prove (2). Let $\mathfrak{m}$ be a maximal ideal of $R[t, t^{-1}]$. We have seen that $tq$ is a section of ${\mathcal J}(q)(3)$. Moreover, $e_{23}(\alpha_t(q))=X_0X_2+tAX_1$ is irreducible of degree two in $R[t, t^{-1}]/\mathfrak{m}[X_0, X_1, X_2, X_3]$ if $(a_1, a_3)\not\subset\mathfrak{m}$, so assuming $(a_1, a_3)\not\subset\mathfrak{m}$, $C'(q)\cap \P^3_{R[t, t^{-1}]/\mathfrak{m}}$ has codimension $\ge2$ in $\P^3_{R[t, t^{-1}]/\mathfrak{m}}$. If $(a_1, a_3)\subset\mathfrak{m}$, then $e_{23}(\alpha_t(q))\equiv X_2(X_0+ta_2X_1)\mod\mathfrak{m}$ and since $q$ is non-degenerate the subscheme defined by $(X_0+ta_2X_1, tq)$ has codimension two in $\P^3_{R[t, t^{-1}]/\mathfrak{m}}$. For any $\mathfrak{m}$, the ideal $(X_2, e_{13}(\alpha_t(q)))\subset R[t, t^{-1}]/\mathfrak{m}[X_0,\ldots, X_3]$ is $(X_2, (tb_3X_3-X_0)X_1)$, which also defines a codimension two closed subscheme of $\P^3_{R[t, t^{-1}]/\mathfrak{m}}$, so $C'(q)\cap \P^3_{R[t, t^{-1}]}$ has codimension $\ge2$ in $\P^3_{R[t, t^{-1}]}$. Thus we have shown that $C'(q)$ has relative dimension $\le 1$ over ${\rm Spec\,} R[t,t^{-1}]$. Working modulo $t$, the ideal $\bar{{\mathcal J}}(q)$ defines a closed subscheme of $\P^3_R$ of codimension $\ge1$, so $C'(q)$ has codimension $\ge2$ in $\P^3_{R[t]}$, and by Theorem~\ref{thm:EN}, $C'(q)$ has pure codimension two in $\P^3_{R[t]}$. With what we have shown above, this implies that $C'(q)$ has pure relative dimension one over ${\rm Spec\,} R[t,t^{-1}]$.
Using Theorem~\ref{thm:EN} again, we see that the Eagon-Northcott complex \[ 0\to {\mathcal O}_{\P^3_{R[t]}}(-3)^2\xrightarrow{\alpha_t(q)} {\mathcal O}_{\P^3_{R[t]}}(-2)^3\xrightarrow{\bigwedge^2\alpha_t(q)^t} {\mathcal O}_{\P^3_{R[t]}} \to {\mathcal O}_{C'(q)}\to0 \] is exact. We map the Koszul complex \eqref{multline:Koszul} to the Eagon-Northcott complex over the inclusion ${\mathcal I}(q)/{\mathcal J}(q)\to {\mathcal O}_{C'(q)}$ by the maps \begin{align*} &\phi_0:{\mathcal O}_{\P^3_{R[t]}}(-3)\to {\mathcal O}_{\P^3_{R[t]}}\\ &\phi_1:{\mathcal O}_{\P^3_{R[t]}}(-3)\oplus {\mathcal O}_{\P^3_{R[t]}}(-4)\to {\mathcal O}_{\P^3_{R[t]}}(-2)^3\\ &\phi_2: {\mathcal O}_{\P^3_{R[t]}}(-4)\to {\mathcal O}_{\P^3_{R[t]}}(-3)^2 \end{align*} defined by $\phi_0:=\times q$, \[ \phi_1:=\begin{pmatrix}X_1&BX_1+CX_2\\-X_2&AX_1\\0&0\end{pmatrix} \] and \[ \phi_2:=\begin{pmatrix}-X_2\\-X_1\end{pmatrix}. \] This induces the map $\bar\phi$ of complexes after applying $-\otimes_{R[t]}R$. A direct computation using these resolutions shows that $\Tor^{R[t]}_1({\mathcal I}(q)/{\mathcal J}(q), R)$ is isomorphic to ${\mathcal O}_{\P^3_R}(-3)/(X_0){\mathcal O}_{\P^3_R}(-4)$, with generator \[ v:=\begin{pmatrix}1\\0\end{pmatrix}: {\mathcal O}_{\P^3_R}(-3)\to {\mathcal O}_{\P^3_{R}}(-3)\oplus {\mathcal O}_{\P^3_{R}}(-4) \] and that $\bar{\phi}_1(v)$ is a generator for $\Tor^{R[t]}_1({\mathcal O}_{C'(q)}, R)$. Thus, in the long exact $\Tor$-sequence \begin{multline*} \ldots\to \Tor^{R[t]}_1({\mathcal I}(q)/{\mathcal J}(q), R)\to \Tor^{R[t]}_1({\mathcal O}_{C'(q)}, R)\to \Tor^{R[t]}_1({\mathcal O}_{C(q)}, R)\\\to {\mathcal I}(q)/{\mathcal J}(q)\otimes_{R[t]}R\to {\mathcal O}_{C'(q)}\otimes_{R[t]}R\to {\mathcal O}_{C(q)}\otimes_{R[t]}R\to 0 \end{multline*} the map $\Tor^{R[t]}_1({\mathcal I}(q)/{\mathcal J}(q), R)\to \Tor^{R[t]}_1({\mathcal O}_{C'(q)}, R)$ is surjective and the map $ {\mathcal I}(q)/{\mathcal J}(q)\otimes_{R[t]}R={\mathcal I}(q)/{\mathcal J}(q)\to {\mathcal O}_{C'(q)}\otimes_{R[t]}R$ is injective, so \[ \Tor^{R[t]}_1({\mathcal O}_{C(q)}, R)=0. \] Thus, the sheaf ${\mathcal O}_{C(q)}$ is $t$-torsion free, proving (3). \end{proof} \begin{proposition} Suppose that $q$ is non-degenerate. Then $C(q)$ is flat of relative dimension one over ${\rm Spec\,} R[t]$. \end{proposition} \begin{proof} The statement is local in $R$, so we may assume that $R$ is local. To prove the result for $R$, it suffices to prove the result for some ring $S$ and a non-degenerate $q_S=A_SX_1^2+B_SX_1X_2+C_SX_2^2\in S[X_1,X_2, X_3]$ such that there is a ring homomorphism $\psi:S\to R$ sending $q_S$ to $q$. As $q=AX_1^2+BX_1X_2+CX_2^2$ with $A=a_1X_1+a_2X_2+a_3X_3$, $B=b_2X_2+b_3X_3$, $C=c_2X_2+c_3X_3$, $a_i, b_i, c_i\in R$, we may replace $R$ with a suitable localization $S$ of a polynomial ring over ${\mathbb Z}$, and a suitable element $q_S$. Thus, we may assume from the start that $R$ is a regular local ring, in particular, a UFD. Let $C_0(q)\subset \P^3_R$ denote the closed subscheme with ideal ${\mathcal I}(q)_0:=(\alpha_0(q))+(q)\subset R[X_0, X_1, X_2, X_3]$ and let $\bar{C}_0(q)$ be the closed subscheme with ideal sheaf $\sqrt{{\mathcal I}(q)_0}=(X_0, q){\mathcal O}_{\P^3_R}$. As $q$ is non-degenerate, $\bar{C}_0(q)$ is a complete intersection subscheme of $\P^3_R$, equi-dimensional of relative dimension one over $R$, and hence $\bar{C}_0(q)$ is flat over $R$. Note that the quotient $\sqrt{{\mathcal I}(q)_0}/ {\mathcal I}(q)_0$ is generated as ${\mathcal O}_{\P^3_R}$-module by the image $[X_0]$ of $X_0$.
For each $r\in R\setminus \{0\}$, $r[X_0]\neq0$, as ${\mathcal I}(q)_0\subset (X_0, X_1, X_2)^2{\mathcal O}_{\P^3_R}$ but $rX_0$ is not in $(X_0, X_1, X_2)^2{\mathcal O}_{\P^3_R}$. Moreover, $(X_0, X_1, X_2)\cdot\sqrt{{\mathcal I}(q)_0}/ {\mathcal I}(q)_0=0$. This implies that $\sqrt{{\mathcal I}(q)_0}/ {\mathcal I}(q)_0=i_*({\mathcal O}_{{\rm Spec\,} R})$, where $i:{\rm Spec\,} R\to \P^3_R$ is the constant section with value $(0,0,0,1)$. From the exact sequence \begin{equation}\label{eqn:ExactCubicSeq} 0\to \sqrt{{\mathcal I}(q)_0}/ {\mathcal I}(q)_0\to {\mathcal O}_{C_0(q)}\to {\mathcal O}_{\bar{C}_0(q)}\to 0 \end{equation} we see that $C_0(q)$ is flat of relative dimension one over $R$. By Lemma~\ref{lem:TorsionFree}(1, 2) $C(q)\cap \P^3_{R[t,t^{-1}]}$ has relative dimension one over $R[t,t^{-1}]$, so $C(q)\cap \P^3_{R[t,t^{-1}]}$ has codimension two in $\P^3_{R[t,t^{-1}]}$, and this remains the case after base-change by $R[t,t^{-1}]\to R'$ for any commutative noetherian ring $R'$. By Theorem~\ref{thm:EN}, this implies that the Eagon-Northcott complex \[ 0\to {\mathcal O}_{\P^3_{R[t]}}(-3)^2\xrightarrow{\alpha_t(q)} {\mathcal O}_{\P^3_{R[t]}}(-2)^3\xrightarrow{\bigwedge^2\alpha_t(q)^t} {\mathcal O}_{\P^3_{R[t]}} \to {\mathcal O}_{C(q)}\to0 \] is exact after base-change to $R'$ for any ring extension $R[t,t^{-1}]\to R'$, and thus $C(q)\cap \P^3_{R[t,t^{-1}]}$ is flat over ${\rm Spec\,} R[t,t^{-1}]$. Moreover, by Lemma~\ref{lem:TorsionFree}(3), we have \[ \Tor_i^{R[t]}({\mathcal O}_{C(q)}, R[t]/(t))=0 \] for all $i>0$, and thus for any prime ideal $\mathfrak{p}\subset R[t]$ containing $(t)$, we have \[ \Tor_i^{R[t]}({\mathcal O}_{C(q)}, R[t]/\mathfrak{p})=\Tor_i^{R[t]/(t)}({\mathcal O}_{C_0(q)}, R[t]/\mathfrak{p}) \] But as $C_0(q)$ is flat over $R=R[t]/(t)$, it follows that \[ \Tor_i^{R[t]}({\mathcal O}_{C(q)}, R[t]/\mathfrak{p})=0 \] for $i>0$. Since we have already seen that $C(q)$ is flat over $R[t,t^{-1}]$, it follows that $C(q)$ is flat over $R[t]$. \end{proof}
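As an independent plausibility check (ours, not part of the argument), the following Python/sympy script verifies the determinant identities $tq = X_1e_{23}(\alpha_t(q)) + X_2e_{13}(\alpha_t(q))$ and $X_0q = (BX_1+CX_2)e_{23}(\alpha_t(q)) - AX_1e_{13}(\alpha_t(q))$ used in the proof of Lemma \ref{lem:TorsionFree}, together with the fact that $\bigwedge^2\alpha_t(q)^t\circ\alpha_t(q)=0$, as required for the Eagon-Northcott sequence to be a complex.
\begin{verbatim}
from sympy import symbols, expand

t, X0, X1, X2, X3 = symbols('t X0 X1 X2 X3')
a1, a2, a3, b2, b3, c2, c3 = symbols('a1 a2 a3 b2 b3 c2 c3')

A = a1*X1 + a2*X2 + a3*X3   # the linear forms of the definition
B = b2*X2 + b3*X3
C = c2*X2 + c3*X3
q = A*X1**2 + B*X1*X2 + C*X2**2

M = [[t*C, t*B - X0], [X0, t*A], [-X1, X2]]   # alpha_t(q)
e12 = M[0][0]*M[1][1] - M[0][1]*M[1][0]
e13 = M[0][0]*M[2][1] - M[0][1]*M[2][0]
e23 = M[1][0]*M[2][1] - M[1][1]*M[2][0]

# the two identities used in the proof of Lemma (TorsionFree)
assert expand(t*q - (X1*e23 + X2*e13)) == 0
assert expand(X0*q - ((B*X1 + C*X2)*e23 - A*X1*e13)) == 0

# (e23, -e13, e12) composed with alpha_t(q) is zero
for j in (0, 1):
    assert expand(e23*M[0][j] - e13*M[1][j] + e12*M[2][j]) == 0
print("all identities hold")
\end{verbatim}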
\section{Cubic rational curves in $\P^n$} We fix a base-field $k$ of characteristic $\neq 2, 3$. Unless otherwise noted, all schemes are finite type separated $k$-schemes. We write $\P^n$ for $\P^n_k$. Let $K\supset k$ be an extension field and let $C\subset \P^n_K$ be a closed subscheme. The {\em linear span} of $C$ is the smallest linear subspace $L_C$ of $\P^n_K$ containing $C$. In fact, $L_C$ is always defined over $K$, as the ideal of $L_C$ is generated by the linear forms in the $K$-vector space $H^0(\P^n_K, {\mathcal I}_C(1))$. \begin{lemma}\label{lem:linearspan} Let ${\mathcal O}$ be a dvr with quotient field $K$ and residue field $\kappa$, and let $C\subset \P^n_{\mathcal O}$ be a closed subscheme, flat over ${\mathcal O}$. Let $L_{C_K}$ be the linear span of the generic fiber $C_K$ of $C$. \\[5pt] 1. There is a projective space bundle $L\subset \P^n_{\mathcal O}$, smooth over ${\mathcal O}$ with generic fiber $L_{C_K}$ and with $C\subset L$.\\[2pt] 2. Suppose $L_{C_K}$ has dimension $m$ over $K$. Then there is an element $A\in \operatorname{GL}_{n+1}({\mathcal O})$ such that $A\cdot C\subset \P^m_{\mathcal O}\subset \P^n_{\mathcal O}$, where $\P^m_{\mathcal O}$ is the linear subspace defined by $X_{m+1}=\ldots=X_n=0$. \end{lemma} \begin{proof} (1) The linear space $L_{C_K}$ being defined over $K$, $L_{C_K}$ gives a $K$-point of a Grassmannian $\Gr(m+1, n+1)$, where $m$ is the dimension of $L_{C_K}$. Since $\Gr(m+1, n+1)$ is proper over $k$, this $K$-point extends to a morphism $f:{\rm Spec\,} {\mathcal O}\to \Gr(m+1, n+1)$; pulling back the universal $\P^m$-sub-bundle of $\P^n\times\Gr(m+1, n+1)$ via $f$ gives us a $\P^m$-bundle $L\subset \P^n_{\mathcal O}$ with $C_K\subset L_K$. Since $\P^n_{\mathcal O}$ is separated over ${\mathcal O}$, this implies that $C\subset L$. For (2), the subspace $L$ is defined by an $(m+1)\times(n+1)$ matrix $M\in M_{m+1, n+1}({\mathcal O})$, with the rows of $M$ giving ${\mathcal O}$-points of $\P^n_{\mathcal O}$ that span $L$. From the structure theory of matrices over the dvr ${\mathcal O}$, there are invertible matrices $A\in \operatorname{GL}_{n+1}({\mathcal O})$ and $B\in \operatorname{GL}_{m+1}({\mathcal O})$ such that \[ B\cdot M\cdot A=(I_{m+1}, 0_{m+1\times n-m}). \] \end{proof} Let $\operatorname{Hilb}^{3m+1}(\P^n)$ be the Hilbert scheme parametrizing flat families of closed subschemes of $\P^n$ with Hilbert polynomial $3m+1$, and with universal curve ${\mathcal C}^{3m+1, n}\to \operatorname{Hilb}^{3m+1}(\P^n)$. For $x\in \operatorname{Hilb}^{3m+1}(\P^n)$, we write $C_x\subset \P^n_{k(x)}$ for the corresponding curve. In $\operatorname{Hilb}^{3m+1}(\P^n)$ we have the open subscheme $H_n^{sm}$ of smooth cubic rational curves, classically known as {\em twisted cubics}. Let $H_n$ denote the closure of $H_n^{sm}$ in $\operatorname{Hilb}^{3m+1}(\P^n)$, and let ${\mathcal C}_n\subset H_n\times \P^n$ be the corresponding universal curve, with projection $p:=p_1:{\mathcal C}_n\to H_n$. For $x\in H_n$, we call the corresponding curve $C_x\subset \P^n_{k(x)}$ a {\em rational cubic curve in $\P^n$}. It is well-known that $H_n^{sm}$ is smooth and irreducible, so $H_n$ is an integral $k$-scheme. If we wish to work over another base-scheme, we write $H_n/S$ for the closure in $\operatorname{Hilb}^{3m+1}(\P^n_S)$ of the open subscheme $H^{sm}_{n, S}\subset \operatorname{Hilb}^{3m+1}(\P^n_S)$ parametrizing flat families of twisted cubics. \begin{lemma}\label{lem:3Space} 1. For each $x\in H_n$, $C_x$ has linear span of dimension 3. \\[2pt] 2. After a $k(x)$-linear change of coordinates, $C_x$ is one of two types:\\[5pt] a. $C_x$ is Cohen-Macaulay. Passing to the algebraic closure $\overline{k(x)}$ of $k(x)$, there is a $3\times 2$ matrix $(L_{ij})$ of linear forms $L_{ij}\in \overline{k(x)}[X_0,\ldots, X_3]$ such that the ideal of $C_x\times_{k(x)}\overline{k(x)}$ is generated by the linear forms $X_4,\ldots, X_n$ and the determinants of the $2\times2$ submatrices of $(L_{ij})$. Moreover these three determinants are $\overline{k(x)}$-linearly independent elements of $\overline{k(x)}[X_0,\ldots, X_3]$.\\[2pt] b. $C_x$ is not Cohen-Macaulay. There are linear forms $B, C\in \overline{k(x)}[X_2, X_3]$ and $A\in \overline{k(x)}[X_1, X_2, X_3]$, not all zero, such that the ideal of $C_x\times_{k(x)}\overline{k(x)}$ is generated by the linear forms $X_4,\ldots, X_n$, the quadratic terms $X_0^2, X_0X_1, X_0X_2$ and the cubic polynomial $q_C:=AX_1^2+BX_1X_2+CX_2^2$.\\[2pt] 3. Let ${\mathcal I}_x\subset {\mathcal O}_{\P^n_{k(x)}}$ denote the ideal sheaf of $C_x$. Then for $i>0$, $H^i(\P^n_{k(x)}, {\mathcal I}_x(m))=0$ for $m\ge0$ if $C_x$ is Cohen-Macaulay, and $H^i(\P^n_{k(x)}, {\mathcal I}_x(m))=0$ for $m\ge1$ if $C_x$ is not Cohen-Macaulay. \end{lemma} \begin{proof} Since a smooth cubic curve in a $\P^2$ has genus 1, the linear span of a smooth rational degree three curve $C\subset \P^n$ must have dimension at least three.
The fact that $h^0(\P^1, {\mathcal O}_{\P^1}(3))=4$ says that the kernel of the restriction map \[ H^0(\P^n, {\mathcal O}_{\P^n}(1))\to H^0(C, {\mathcal O}_C(1)) \] has codimension at most 4, so there are at least $n-3$ linearly independent linear forms vanishing on $C$. Thus the linear span of $C$ has dimension at most three, that is, the linear span of a smooth rational degree three curve in $\P^n$ is a $\P^3$. As $C_x$ is the specialization over some dvr ${\mathcal O}$ of a smooth rational cubic curve $C_K\subset \P^n_K$, it follows by Lemma~\ref{lem:linearspan} that, after a $k(x)$-linear change of coordinates, $C_x$ is contained in the linear subspace $\P^3\subset \P^n$ defined by $X_4=\ldots=X_n=0$ and $C_x$ is a $k(x)$-point of the $H_3\subset H_n$ defined by this $\P^3$ in $\P^n$. This reduces the proof of (1) and (2) to the case $n=3$. We pass to the algebraic closure of $k(x)$ and write $C_x$ for $C_x\times_{k(x)}\overline{k(x)}$. If $C_x$ is Cohen-Macaulay, the resolution of ${\mathcal I}_{C_x}$ given in \cite[Lemma 1]{PS} is an Eagon-Northcott complex of the form \[ 0\to {\mathcal O}_{\P^3}(-3)^2\xrightarrow{(L_{ij})}{\mathcal O}_{\P^3}(-2)^3\xrightarrow{\bigwedge^2(L_{ij})^t}{\mathcal I}\to 0 \] with $(L_{ij})$ the matrix of linear forms mentioned in (2a) and with $\bigwedge^2(L_{ij})^t$ the $1\times 3$ matrix of determinants $(e_{23}(L_{ij}), -e_{13}(L_{ij}), e_{12}(L_{ij}))$. This takes care of the case (2a). \cite[Lemma 2]{PS} yields the assertion (2b), describing the non-Cohen-Macaulay curves. In the Cohen-Macaulay case, one shows using the Eagon-Northcott resolution that $h^i(\P^3, {\mathcal I}(m))=0$ for $i>0$ and $m\ge0$, and $h^0(\P^3, {\mathcal I}(1))=0$. Thus the linear span of $C_x$ is the entire $\P^3$. In the non-Cohen-Macaulay case, let ${\mathcal J}$ be the ideal $(X_0, q){\mathcal O}_{\P^3}$, so the closed subscheme defined by ${\mathcal J}$ has linear span the plane $X_0=0$. In the affine space ${\mathbb A}^3:=\P^3\setminus \{X_3=0\}$, $C_x$ has defining ideal $(x_0x_1, x_0x_2, x_0^2, q(x_0, x_1, x_2, 1))$, $x_i:=X_i/X_3$, so $X_0$ is not in $H^0(\P^3, {\mathcal I}(1))$; since ${\mathcal I}\subset{\mathcal J}$, every linear form vanishing on $C_x$ is a multiple of $X_0$, hence $C_x$ has linear span $\P^3$. This proves (1). For (3), we first consider the Cohen-Macaulay case. We may assume that $C_x$ is contained in the $\P^3$ defined by $X_4=\ldots=X_n=0$; let $\bar{{\mathcal I}}$ denote the ideal sheaf of $C_x\subset \P^3$ and let ${\mathcal K}=(X_4,\ldots, X_n){\mathcal O}_{\P^n}$. This gives the exact sequence \[ 0\to {\mathcal K}\to {\mathcal I}\to i_*\bar{\mathcal I}\to 0 \] with $i:\P^3\to \P^n$ the inclusion. Using the Koszul resolution of ${\mathcal K}$, one shows that $H^i(\P^n,{\mathcal K}(m))=0$ for $i>0$, $m\ge0$ and the Eagon-Northcott resolution of $\bar{\mathcal I}$ shows that $H^i(\P^n,\bar{\mathcal I}(m))=0$ for $i>0$, $m\ge0$. In the non-Cohen-Macaulay case, a similar argument reduces us to the case $n=3$; changing notation, we may assume $k=k(x)$, $k$ is algebraically closed and $C_x$ has ideal ${\mathcal I}$ as described in the proof of (2b). We have the exact sequence \[ 0\to {\mathcal I}\to {\mathcal J}\to {\mathcal J}/{\mathcal I}\to 0 \] where ${\mathcal J}=(X_0, q){\mathcal O}_{\P^3}$. Letting $i:{\rm Spec\,} k\to \P^3$ be the inclusion at the point $(0,0,0,1)$, we have ${\mathcal J}/{\mathcal I}\cong i_*k(-1)$ with generator $X_0$. Thus the map \[ H^0(\P^3, {\mathcal J}(1))\to H^0(\P^3, {\mathcal J}/{\mathcal I}(1)) \] is surjective.
Clearly $H^i(\P^3, {\mathcal J}/{\mathcal I}(m))=0$ for all $i>0$ and all $m$, and the Koszul resolution of ${\mathcal J}$ shows that $H^i(\P^3, {\mathcal J}(m))=0$ for all $i>0$ and all $m\ge0$. Thus $H^i(\P^3, {\mathcal I}(m))=0$ for all $i>0$ and all $m\ge1$, completing the proof. \end{proof} \begin{proposition}\label{prop:DefSpace} $H_n$ is a smooth $k$-scheme of dimension $4n$ and the non-Cohen-Macaulay locus $H^{\operatorname{ncm}}_n$ is a smooth divisor. \end{proposition} \begin{proof} For $n=3$, this follows from the main result \cite[\S 6, Theorem]{PS}. The proof of {\em loc. cit.} relies on the construction of the universal deformation space of the non-Cohen-Macaulay curve $C_0$ defined by the ideal ${\mathcal I}:=(X_0X_1, X_0X_2, X_0^2, X_1^3){\mathcal O}_{\P^3}$. Let $P=k[X_0,\ldots, X_3]$ and let $I\subset P$ be the ideal $(X_0X_1, X_0X_2, X_0^2, X_1^3)$. The argument is in two steps. First they explicitly compute the universal deformation space of $I$, considered as a homogeneous ideal in $P$. To do this, they construct an explicit resolution \[ 0\to F_3\to F_2\to F_1\to P\to P/I\to 0 \] Using this resolution, they write down a $k$-basis of the 16-dimensional tangent space $T_{M_3',I}$, with the first 10 basis elements $\partial/\partial u_1,\ldots, \partial/\partial u_{10}$ corresponding to deformations that fix the flag $(X_0=X_1=X_2=0)\subset (X_0=0)$ and the remaining 6 ``trivial" vectors coming from the translations by $\operatorname{GL}_4$ acting on such flags. In the subspace spanned by the $\partial/\partial u_i$, they construct the versal deformation space of $I\subset P$ explicitly, and show this is a union of affine spaces ${\mathbb A}_k^6\cup {\mathbb A}^9_k$, intersecting along a linear subspace ${\mathbb A}^5_k\subset {\mathbb A}^6_k$, where ${\mathbb A}_k^6\setminus{\mathbb A}_k^5$ is the locus of Cohen-Macaulay deformations. The universal deformation space is then the product of the versal space with the ${\mathbb A}^6$ corresponding to the 6 ``trivial'' deformations, embedded as a linear space in $\operatorname{GL}_4$ transversal to the isotropy group of the flag, and parametrizing the big affine cell in the corresponding flag variety. The second step relies on \cite[\S 3, Comparison Theorem]{PS}. Using this result, they show that the map $M'_3={\mathbb A}^{12}\cup {\mathbb A}^{15}\to \operatorname{Hilb}^{3m+1}(\P^3)$ induced by the universal family over ${\mathbb A}^{12}\cup {\mathbb A}^{15}$ is an open immersion. It is easy to show that the $\operatorname{GL}_4$ translates of the image cover $H^{\operatorname{ncm}}_3$. This yields the main theorem \cite[\S 6, Theorem]{PS}, with $H_3$ corresponding to the component ${\mathbb A}^{12}$ and $H_3^{\operatorname{ncm}}$ corresponding to the intersection ${\mathbb A}^{11}_k={\mathbb A}^{12}_k\cap {\mathbb A}^{15}_k$ (and all their respective translates by $\operatorname{GL}_4$). We consider the analogous situation for $C_0\subset \P^3\subset \P^n$, with ideal sheaf ${\mathcal I}_n=({\mathcal I}, X_4,\ldots, X_n){\mathcal O}_{\P^n}\subset {\mathcal O}_{\P^n}$ and homogeneous ideal $I_n:=(I, X_4,\ldots, X_n)\subset P_n:=k[X_0,\ldots, X_n]$. Here the $\P^3$ is defined by $X_4=\ldots=X_n=0$. We first compute the universal deformation space of $I_n\subset P_n$. Let $A_n:=P_n/I_n$. Note that $A_n\cong A_3$. We begin with the cotangent complex $\L_{A_n/P_n}$.
We have the fundamental distinguished triangle (see \cite[Tag 08QR, (90.7.0.1), Proposition 90.7.4]{Stacks})
\[
\L_{P_3/P_n}\otimes_{P_3}^LA_3\to \L_{A_n/P_n}\to \L_{A_3/P_3}\to \L_{P_3/P_n}\otimes^L_{P_3}A_3[1],
\]
where we view $P_3$ as the quotient $P_n/(X_4,\ldots, X_n)$. Since $P_3$ is defined by the complete intersection ideal $(X_4,\ldots, X_n)$, we have $\L_{P_3/P_n}=P_3(-1)^{n-3}[1]$ \cite[Tag 08SH, Lemma 90.14.2]{Stacks}, and the distinguished triangle yields the exact sequence
\begin{multline*}
0\to T_2(A_3/P_3, A_3)\to T_2(A_n/P_n,A_n)\to 0\\
\to T_1(A_3/P_3, A_3)\to T_1(A_n/P_n,A_n)\to A_n(1)^{n-3}\to 0
\end{multline*}
We write $M_d$ for the degree $d$ graded component in a ${\mathbb Z}$-graded object $M$. From the above sequence, we see that the obstruction space $T_2(A_n/P_n,A_n)_0$ for $I_n\subset P_n$ agrees with that for $I_3\subset P_3$. Since the curve $C_0$ has $\P^3$ as linear span, the surjection $P_n\to A_n$ sends the degree one component to the four-dimensional linear span of $X_0,\ldots, X_3$ in $A_n$. We can split the surjection $T_1(A_n/P_n,A_n)_0\to [A_n]_1^{n-3}$ by realizing $[A_n]_1^{n-3}$ as the global sections of the normal bundle of $\P^3$ in $\P^n$. This is realized as the universal deformation space of $\P^3$ in $\P^n$ given by the standard affine cell in $\Gr(4, n+1)$, which in turn is given by an affine space $V\subset \operatorname{GL}_{n+1}$, $V={\mathbb A}^{4(n-3)}$, with $\operatorname{GL}_{n+1}$ acting by the usual action on ${\mathbb A}^{n+1}$. The corresponding first order deformations of $I_n$ given by the subspace of $\mathfrak{gl}_{n+1}$ corresponding to $V$ give a $k$-subspace of $T_1(A_n/P_n,A_n)_0$ mapping isomorphically to $[A_n(1)^{n-3}]_0=k^{4(n-3)}$. Since the two obstruction spaces agree, this implies that the universal deformation space of $I_n\subset P_n$ is $M'_3\times V$. The map $f:M'_3\times V\to \operatorname{Hilb}^{3m+1}(\P^n)$ corresponding to the universal family thus sends ${\mathbb A}^{12}\times V$ to $H_n$ and ${\mathbb A}^{11}\times V$ to $H_n^{\operatorname{ncm}}$. The hypotheses for the Comparison Theorem in this case follow easily from the case $n=3$ by translating by $V$, so the map $f$ is an open immersion of $M'_3\times V$ onto an open neighborhood of $[C_0]\in \operatorname{Hilb}^{3m+1}(\P^n)$. The $\operatorname{GL}_{n+1}$ translates of $f({\mathbb A}^{11}\times V)$ cover $H^{\operatorname{ncm}}_n$ and the $\operatorname{GL}_{n+1}$ translates of $f({\mathbb A}^{12}\times V)$ cover $H_n$, since each curve $C$ in $H_n$ is contained in some $\P^3$ in $\P^n$. Thus $H_n$ is smooth and $H_n^{\operatorname{ncm}}$ is a smooth divisor in $H_n$. \end{proof} We return to our study of $H_n$ over a fixed field $k$ of characteristic $\neq 2,3$. \begin{lemma} 1. There is a morphism $\Phi:H_n\to \Gr(4, n+1)$ with $\Phi(x)=\pi_{C_x}$ for all $x\in H_n$.\\[2pt] 2. $\Phi$ is a smooth $\operatorname{GL}_{n+1}$-equivariant morphism, making $H_n$ into a fiber bundle (for the \'etale topology) over $\Gr(4, n+1)$ with fiber $H_3$. \end{lemma} \begin{proof} (1) Let ${\mathcal C}_n\subset H_n\times\P^n$ be the universal curve, with defining ideal sheaf ${\mathcal I}_n\subset {\mathcal O}_{H_n\times\P^n}$. By Lemma~\ref{lem:3Space}(3) and standard base-change theorems on coherent cohomology, for all $m\ge1$, $R^ip_{1*}({\mathcal I}_n(m))=0$ for $i>0$, $p_{1*}({\mathcal I}_n(m))$ is locally free, and the map
\[
p_{1*}({\mathcal I}_n(m))\otimes k(x)\to H^0(\P^n_{k(x)}, {\mathcal I}_{C_x}(m))
\]
is an isomorphism for all $x\in H_n$.
In particular $p_{1*}({\mathcal I}_n(1))$ is a locally free subsheaf of $p_{1*}({\mathcal O}_{H_n\times\P^n}(1))={\mathcal O}_{H_n}\otimes_kH^0(\P^n, {\mathcal O}_{\P^n}(1))$ of rank $n+1-4$, with $p_{1*}({\mathcal I}_n(1))\otimes k(x)\to p_{1*}({\mathcal O}_{H_n\times\P^n}(1))\otimes k(x)$ injective for all $x\in H_n$. Taking the dual surjection defines the morphism $\Phi:H_n\to \Gr(4, n+1)$ with $\Phi(x)=\pi_{C_x}$ for all $x\in H_n$. For (2), it is clear that $\Phi$ is $\operatorname{GL}_{n+1}$-equivariant, where $\operatorname{GL}_{n+1}$ acts on $H_n$ and $\Gr(4, n+1)$ through its standard linear action on ${\mathbb A}^{n+1}$. To see that $\Phi$ is a smooth morphism, we first check over $H_n^{\operatorname{cm}}$, which is an open subscheme of $\operatorname{Hilb}^{3m+1}(\P^n)$. For $x\in H^{\operatorname{cm}}_n$, the tangent space is given by
\[
T_{H_n, x}={\rm Hom}({\mathcal I}_{C_x}, {\mathcal O}_{C_x}).
\]
We may suppose that $\pi_{C_x}$ is our standard $\P^3$ defined by $X_4=\ldots=X_n=0$; let $\bar{{\mathcal I}}_{C_x}\subset {\mathcal O}_{\P^3}$ be the ideal sheaf of $C_x\subset \P^3$. We thus have the exact sequence
\[
0\to {\mathcal O}_{\P^n}(-1)^{n-3}\to {\mathcal I}_{C_x}\to i_*\bar{{\mathcal I}}_{C_x}\to 0
\]
where $i:\P^3\to \P^n$ is the inclusion. Looking at the Eagon-Northcott resolution of $\bar{{\mathcal I}}_{C_x}$, we see that $H^1(\P^n, i_*\bar{{\mathcal I}}_{C_x}(m))=0$ for all $m\ge0$, giving the short exact sequence
\[
0\to {\rm Hom}_{{\mathcal O}_{\P^3}}(\bar{{\mathcal I}}_{C_x}, {\mathcal O}_{C_x})\to {\rm Hom}_{{\mathcal O}_{\P^n}}({\mathcal I}_{C_x}, {\mathcal O}_{C_x})\to H^0(C_x, {\mathcal O}_{C_x}(1))^{n-3}\to 0
\]
Identifying $\mathcal{H}om({\mathcal O}_{\P^n}(-1)^{n-3}, {\mathcal O}_{\P^3})$ with the normal sheaf ${\mathcal N}_i$ of $i$ allows us to identify the term $H^0(C_x, {\mathcal O}_{C_x}(1))^{n-3}$ with $H^0(C_x, {\mathcal N}_i\otimes {\mathcal O}_{C_x})$. Also, since $\P^3$ is the linear span of $C_x$ and $H^1(\P^3, \bar{{\mathcal I}}_{C_x}(1))=0$, we see that the restriction map
\[
T_{\Gr(4, n+1), \P^3}=H^0(\P^3, {\mathcal N}_i)\to H^0(C_x, {\mathcal N}_i\otimes{\mathcal O}_{C_x})
\]
is an isomorphism. This identifies the term $H^0(C_x, {\mathcal O}_{C_x}(1))^{n-3}$ in the above exact sequence with $i_x^*\Phi^*T_{\Gr(4, n+1), \Phi(x)}$, where $i_x:x\to H_n$ is the inclusion. In other words, $d\Phi_x$ is surjective and hence $\Phi$ is smooth at $x$. Note that $H_n^{\operatorname{ncm}}\subset H_n$ is a proper closed subscheme, stable under the $\operatorname{GL}_{n+1}$-action. Thus for $y\in \Gr(4, n+1)$, $H_n^{\operatorname{cm}}\cap \Phi^{-1}(y)\subset H_n\cap \Phi^{-1}(y)$ is dense in $\Phi^{-1}(y)$. Moreover, $\Gr(4, n+1)$ and $H_n$ are both smooth, hence $H_n$ is Cohen-Macaulay and each maximal ideal $\mathfrak{m}_y\subset {\mathcal O}_{\Gr(4, n+1)}$ is generated by a regular sequence. Since $\Phi^{-1}(y)_{{\rm red}}$ is isomorphic to $H_3\times_kk(y)$, the fiber $\Phi^{-1}(y)$ has the expected dimension, and thus each fiber $\Phi^{-1}(y)$ has no embedded components; this also implies that the map $\Phi$ is flat. As $\Phi^{-1}(y)$ contains a smooth open dense subscheme, and $\Phi^{-1}(y)_{{\rm red}}\cong H_3\times_kk(y)$ is smooth over $k(y)$, it follows that $\Phi^{-1}(y)=\Phi^{-1}(y)_{{\rm red}}$ and is thus smooth over $k(y)$, hence $\Phi$ is a smooth morphism. \end{proof} For $x\in H_n$ and $C=C_x$, we often write $\pi_C$ for the $\P^3\subset \P^n$ corresponding to $\Phi(x)\in \Gr(4, n+1)$.
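This is consistent with the dimension count of Proposition~\ref{prop:DefSpace}: since $\Phi$ is smooth with fibers isomorphic to $H_3$, we recover
\[
\dim H_n=\dim \Gr(4, n+1)+\dim H_3=4(n-3)+12=4n.
\]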
Let $\Pi\subset \Gr(4, n+1)\times\P^n$ be the universal $\P^3$-bundle, and let $\Pi_n:=H_n\times_{ \Gr(4, n+1)}\Pi$, with projection $\pi_1$ to $H_n$ and $p_2$ to $\Pi$; we let $\pi_2:\Pi_n\to \Gr(4, n+1)$ denote the composition $p_1\circ p_2$. ${\mathcal C}_n$ is thus a closed subscheme of $\Pi_n$. Via the second projection $p_2:\Pi_n\to \P^n$, we may form the twist of a coherent sheaf ${\mathcal F}$ on $\Pi_n$ by $p_2^*{\mathcal O}_{\P^n}(m)$, which we denote by ${\mathcal F}(m)$. We similarly twist a sheaf ${\mathcal G}$ on ${\mathcal C}_n$ by $p_2^*{\mathcal O}_{\P^n}(m)$, and denote this by ${\mathcal G}(m)$. We consider $ \Gr(4, n+1)$ as embedded in a projective space by the Pl\"ucker embedding, giving us the invertible sheaf ${\mathcal O}_{\Gr(4,n+1)}(m)$ for $m\in {\mathbb Z}$; hopefully the two different notations for twisting will not cause confusion. This gives us the diagram
\begin{equation}\label{eqn:MainDiag}
\xymatrix{ &{\mathcal C}_n\ar@{^(->}[d]^i\ar[ddl]_p\ar[ddr]^{p_2}\\ &\Pi_n\ar[dl]^{\pi_1}\ar[d]^{p_2}\ar@/_10pt/[dd]_{\pi_2}\ar[dr]^{p_2}\\ H_n\ar[dr]_\Phi&\Pi\ar[d]^{p_1}\ar[r]_{p_2}&\P^n\\ &\Gr(4, n+1) }
\end{equation}
We let $\bar{{\mathcal I}}_C\subset {\mathcal O}_{\pi_C}$ denote the ideal sheaf of $C\subset \pi_C$, and ${\mathcal I}_C\subset {\mathcal O}_{\P^n}$ the ideal sheaf of $C\subset \P^n$. Similarly, let $\bar{{\mathcal I}}_{{\mathcal C}_n}\subset {\mathcal O}_{\Pi_n}$ and ${\mathcal I}_{{\mathcal C}_n}\subset {\mathcal O}_{H_n\times\P^n}$ denote the respective ideal sheaves of ${\mathcal C}_n$ in $\Pi_n$ and in $H_n\times\P^n$. \section{Some sheaves and presentations} Let ${\mathcal E}:=p_*(\bar{\mathcal I}_{{\mathcal C}_n}(2))$ and let
\[
\beta:p^*{\mathcal E}(-2)\to \bar{\mathcal I}_{{\mathcal C}_n}\subset {\mathcal O}_{\Pi_n}
\]
be the canonical map. Let ${\mathcal F}$ be the kernel of the multiplication map
\[
m:{\mathcal E}\otimes p_*{\mathcal O}_{\Pi_n}(1)\to p_*{\mathcal O}_{\Pi_n}(3)
\]
giving the map
\[
\alpha:p^*{\mathcal F}(-3)\to p^*{\mathcal E}(-2).
\]
\begin{lemma} 1. For all $x\in H_n$, we have
\[
\dim_{k(x)} H^0(\pi_{C_x}, \bar{{\mathcal I}}_{C_x}(2))=3
\]
and $H^i(\pi_{C_x}, \bar{{\mathcal I}}_{C_x}(m))=0$ for $m\ge 1$, $i\ge1$. \\[2pt] 2. ${\mathcal E}$ is a rank three locally free sheaf on $H_n$. \end{lemma} \begin{proof} (1) implies (2) by standard results on the higher direct images of flat sheaves. To prove (1), we may reduce to the case $n=3$, so for $x\in H^{\operatorname{cm}}_n$, ${\mathcal I}_{C_x}$ is generated by the $2\times 2$ determinants of a $3\times 2$ matrix of linear forms $(L_{ij})$. Let $R$ be the graded ring $k(x)[X_0, X_1, X_2, X_3]$ and let $I\subset R$ be the homogeneous ideal generated by the $2\times 2$ determinants $e_{ij}(L_{ij})$. We have the corresponding (graded) Eagon-Northcott complex \eqref{eqn:EN1}
\[
0\to R(-3)^2\xrightarrow{(L_{ij})}R(-2)^3\xrightarrow{\bigwedge^2(L_{ij})^t} R\to R/I\to 0
\]
which is exact since $C_x$ has the expected codimension 2 in $\P^3$ (Theorem~\ref{thm:EN}). Moreover, Riemann-Roch for the cubic CM-curve $C_x$ implies
\[
\dim_{k(x)} H^0(C_x, {\mathcal O}_{C_x}(m))=3m+1
\]
for all $m\ge0$. This agrees with the Hilbert polynomial of the free part of the Eagon-Northcott complex, which implies that the map $H^0(\P^3, {\mathcal O}_{\P^3}(m))\to H^0(C_x, {\mathcal O}_{C_x}(m))$ is surjective for all $m$ and that $H^i(C_x, {\mathcal O}_{C_x}(m))=0$ for all $m\ge0$, $i\ge1$.
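Explicitly, the alternating sum of the Hilbert polynomials of the free terms of the Eagon-Northcott complex is
\[
\binom{m+3}{3}-3\binom{m+1}{3}+2\binom{m}{3}=3m+1,
\]
matching $\dim_{k(x)}H^0(C_x, {\mathcal O}_{C_x}(m))$ as computed above.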
Via the exact sequence
\[
0\to \bar{\mathcal I}_{C_x}\to {\mathcal O}_{\P^3}\to {\mathcal O}_{C_x}\to 0
\]
we see that
\[
h^0(\P^3, \bar{\mathcal I}_{C_x}(m))=\binom{m+3}{3}-3m-1
\]
and
\[
H^i(\P^3, \bar{\mathcal I}_{C_x}(m))=0
\]
for all $m\ge0$ and $i\ge1$. For $m=2$, this yields $h^0(\P^3, \bar{\mathcal I}_{C_x}(2))=3$. If $C_x$ is a non-CM curve, then we may consider $C_x$ as a curve in $\P^3$ and after a linear change of coordinates, $\bar{{\mathcal I}}_{C_x}\subset {\mathcal O}_{\P^3}$ has generators $X_0^2, X_0X_1, X_0X_2, q$, with $q$ of the form
\[
q=AX_1^2+BX_1X_2+CX_2^2
\]
where $A,B, C$ are linear forms in $X_1, X_2, X_3$. Let $C^0_x$ be the cubic curve $V(X_0, q)$ and let $p:=(0,0,0,1)\in C^0_x$. Then we have the exact sequence
\[
0\to \bar{\mathcal I}_{C_x}\to \bar{\mathcal I}_{C^0_x}\to i_{p*}(k(p))\to 0
\]
and $C^0_x$ is a plane cubic curve. Also, the section $X_0$ of $\bar{\mathcal I}_{C^0_x}(1)$ generates the term $i_{p*}(k(p))$. The ideal $\bar{\mathcal I}_{C^0_x}$ of $C^0_x$ is a complete intersection of degrees $1$ and $3$, so we have the exact sequence
\[
0\to {\mathcal O}_{\P^3}(-4)\to {\mathcal O}_{\P^3}(-1)\oplus {\mathcal O}_{\P^3}(-3)\to \bar{\mathcal I}_{C^0_x}\to0
\]
Thus $H^i(\P^3, \bar{\mathcal I}_{C^0_x}(m))=0$ for $m\ge1$, $i\ge1$. Since $H^0(\bar{\mathcal I}_{C^0_x}(1))\to k(p)$ is surjective, it follows that $H^i(\P^3, \bar{\mathcal I}_{C_x}(m))=0$ for all $m\ge1$, $i\ge1$, and
\[
h^0(\P^3, \bar{\mathcal I}_{C_x}(m))=h^0(\P^3, \bar{\mathcal I}_{C^0_x}(m))-1= \frac{m^3+6m^2-7m+6}{6}-1
\]
For $m=2$, this yields $h^0(\P^3, \bar{\mathcal I}_{C_x}(2))=3$. \end{proof} \section{Local descriptions} We have the locally free sheaf ${\mathcal E}:=p_*(\bar{\mathcal I}_{{\mathcal C}_n}(2))$ on $H_n$, with
\[
{\mathcal E}\otimes k(x)=H^0(\pi_{C_x}, \bar{{\mathcal I}}_{C_x}(2))
\]
We similarly have the locally free sheaf ${\mathcal V}:=p_{1*}{\mathcal O}_\Pi(2)$ of rank $10=h^0(\P^3, {\mathcal O}_{\P^3}(2))$ on $\Gr(4, n+1)$. This gives us the Grassmannian bundle
\[
r:\Gr(3, {\mathcal V})\to \Gr(4, n+1)
\]
The inclusions $H^0(\pi_{C_x}, \bar{{\mathcal I}}_{C_x}(2))\hookrightarrow H^0(\pi_{C_x}, {\mathcal O}_{\pi_{C_x}}(2))$ define a vector-bundle inclusion ${\mathcal E}\hookrightarrow \Phi^*{\mathcal V}$, which induces the (projective) morphism
\[
q:H_n\to \Gr(3,{\mathcal V})
\]
over $\Gr(4, n+1)$. This enlarges our diagram \eqref{eqn:MainDiag} to the diagram
\[
\xymatrix{ &{\mathcal C}_n\ar[dl]_p\ar@{^(->}^i[d]\ar[dr]^{p_2}\\ H_n\ar[dr]_\Phi\ar[d]_q & \Pi_n\ar[d]_{\pi_2}\ar[l]_{\pi_1} \ar[r]^{p_2}&\P^n\\ \Gr(3, {\mathcal V})\ar[r]_r&\Gr(4, n+1) }
\]
Letting $E_{3, {\mathcal V}}$ be the universal bundle on $\Gr(3, {\mathcal V})$, we have a canonical isomorphism $q^*E_{3, {\mathcal V}}\cong {\mathcal E}$. Let $X\subset \Gr(3, {\mathcal V})$ denote the image of $q$ and let $F\subset X$ denote the image of $H_n^{\operatorname{ncm}}$. It is shown in \cite[\S1]{EPS} that the restriction of $q$ to $H_n^{\operatorname{cm}}$ defines an isomorphism $q^{\operatorname{cm}}:H_n^{\operatorname{cm}}\to X\setminus F$; we henceforth identify $H_n^{\operatorname{cm}}$ and $X\setminus F$ via $q$. We recall another construction of $X$, given in \cite{EPS} in the case $n=3$; the general case is just formed by replacing the point $\Gr(4, 4)$ with our parameter space $\Gr(4, n+1)$.
Consider the sheaf ${\mathcal M}:=\mathcal{H}om({\mathcal O}_{\Gr(4, n+1)}^2, {\mathcal O}_{\Gr(4, n+1)}^3)\otimes p_{1*}{\mathcal O}_\Pi(1)$ and the corresponding vector bundle $M$ over $\Gr(4, n+1)$, with fiber over $y\in \Gr(4, n+1)$ the $3\times 2$ matrices of linear forms on $\Pi_y\cong \P^3$. For $(L_{ij})$ in a fiber $M_y$, we have the corresponding subspace of ${\mathcal V}\otimes k(y)$ spanned by the determinants of the $2\times 2$-submatrices of $(L_{ij})$, $\<e_{12}( L_{ij}), e_{13}(L_{ij}), e_{23}(L_{ij})\>$. We let ${\mathcal U}\subset M$ be the subscheme parametrizing those $(L_{ij})$ such that $\<e_{12}( L_{ij}), e_{13}(L_{ij}), e_{23}(L_{ij})\>$ has the maximal rank three. Sending $(L_{ij})\in {\mathcal U}$ to $\<e_{12}( L_{ij}), e_{13}(L_{ij}), e_{23}(L_{ij})\>$ gives a morphism
\[
\psi: {\mathcal U}\to \Gr(3, {\mathcal V}).
\]
This gives us the diagram
\begin{equation}\label{eqn:MainDiag2}
\xymatrix{ &&{\mathcal C}_n\ar[dl]_p\ar@{^(->}[d]^i\ar[dr]^{p_2}\\ &H_n\ar[dr]^-\Phi\ar[d]_q & \Pi_n\ar[d]_{\pi_2} \ar[l]_{\pi_1}\ar[r]^{p_2}&\P^n\\ &X\ar@{^(->}[d]_{i_X}\ar[r]^-{r_X}&\Gr(4, n+1)\\ {\mathcal U}\ar[r]_-{\psi}&\Gr(3, {\mathcal V})\ar[ru]_r }
\end{equation}
with $r_X$ the restriction of $r$. \begin{theorem} 1. ${\mathcal U}$ is an open subscheme of $M$.\\[2pt] 2. We have the diagonal inclusion ${\mathbb G}_m\to \operatorname{GL}_3\times \operatorname{GL}_2$, sending $t$ to the pair of diagonal matrices with entries $t$; let $\Gamma$ be the quotient group-scheme over $\Gr(4, n+1)$,
\[
\Gamma:=({\mathbb G}_m\backslash\operatorname{GL}_3\times \operatorname{GL}_2)/\Gr(4, n+1).
\]
Let $\Gamma$ act on ${\mathcal U}$ by
\[
(g, h)\cdot (L_{ij}):=g\circ (L_{ij})\circ h^{-1}.
\]
Then ${\mathcal U}$ admits a smooth projective geometric quotient $\Gamma\backslash {\mathcal U}$, with quotient map $Q:{\mathcal U}\to \Gamma\backslash {\mathcal U}$ making ${\mathcal U}$ a $\Gamma$-torsor over $\Gamma\backslash {\mathcal U}$.\\[2pt] 3. The map $\psi:{\mathcal U}\to \Gr(3, {\mathcal V})$ has image $X$ and descends to an isomorphism
\[
\bar\psi:\Gamma\backslash {\mathcal U}\to X.
\]
\end{theorem} \begin{proof} This result is proven in \cite[Proposition 1, Lemma 2]{EPS} for $k$ an algebraically closed field of characteristic zero; the case of $k$ an arbitrary characteristic zero field follows by faithfully flat descent. It is not hard to show that for all $k$, the $\Gamma$-action is free and proper. The arguments of {\it loc.\,cit.}, with Mumford's GIT extended by the work of Seshadri \cite{S1, S2}, give the proof in positive characteristic. \end{proof} This gives us the diagram
\begin{equation}\label{eqn:MainDiag3}
\xymatrix{ &&{\mathcal C}_n\ar[dl]_p\ar@{^(->}[d]^i\ar[dr]^{p_2}\\ &H_n\ar[dr]^-\Phi\ar[d]_q & \Pi_n\ar[d]_{\pi_2} \ar[l]_{\pi_1}\ar[r]^{p_2}&\P^n\\ \Gamma\backslash {\mathcal U}\ar@{=}[r]&X\ar@{^(->}[d]_{i_X}\ar[r]^-{r_X}&\Gr(4, n+1)\\ {\mathcal U}\ar[r]_-{\psi}\ar[u]^Q&\Gr(3, {\mathcal V})\ar[ru]_r }
\end{equation}
Via this description of $X$ as $\Gamma\backslash {\mathcal U}$, we may define locally free sheaves ${\mathcal E}_X, {\mathcal F}_X$ on $X$ of rank three and two, respectively, as follows. We have the action of $\operatorname{GL}_3\times \operatorname{GL}_2$ on ${\mathcal O}_{\mathcal U}^3$ and ${\mathcal O}_{\mathcal U}^2$ over the action on ${\mathcal U}$ by letting a pair $(g, h)$ act on ${\mathcal O}_{\mathcal U}^3$ by
\[
(g,h)(v) :=\frac{\det h}{\det g}\cdot g(v),
\]
and ${\mathcal O}_{\mathcal U}^2$ by
\[
(g,h)(w) :=\frac{\det h}{\det g}\cdot h(w).
\] This descends to an action of $\Gamma$ on both ${\mathcal O}_{\mathcal U}^3$ and ${\mathcal O}_{\mathcal U}^2$ over ${\mathcal U}$, giving the locally free sheaves ${\mathcal E}_X$ and ${\mathcal F}_X$ on $X$ by descent. Let $\Pi_{\mathcal U}:={\mathcal U}\times_{\Gr(4, n+1)}\Pi$, with projections $q_1:\Pi_{\mathcal U}\to {\mathcal U}$, $q_2:\Pi_{\mathcal U}\to \Pi$. Via $q_2$, we may twist a coherent sheaf ${\mathcal G}$ on $\Pi_{\mathcal U}$, setting ${\mathcal G}(m):={\mathcal G}\otimes q_2^*{\mathcal O}_{\Pi}(m)$. Let
\[
{\mathcal L}:{\mathcal O}_{\Pi_{\mathcal U}}(-3)^2\to {\mathcal O}_{\Pi_{\mathcal U}}(-2)^3
\]
be the map with value $(L_{ij})$ at $(L_{ij})\in {\mathcal U}$ and let $\bigwedge^2{\mathcal L}^t:{\mathcal O}_{\Pi_{\mathcal U}}(-2)^3\to {\mathcal O}_{\Pi_{\mathcal U}}$ be the map with value $(e_{23}(L_{ij}), -e_{13}(L_{ij}), e_{12}(L_{ij}))$ at $(L_{ij})\in {\mathcal U}$. Here as before $e_{ij}(L_{ij})$ is the determinant of the $2\times 2$ minor of $(L_{ij})$ with rows $i,j$. Let ${\mathcal J}\subset {\mathcal O}_{\Pi_{\mathcal U}}$ be the image of $\bigwedge^2{\mathcal L}^t$, giving us the Eagon-Northcott complex
\begin{equation}\label{eqn:EagonNorthcott1}
0\to {\mathcal O}_{\Pi_{\mathcal U}}(-3)^2\xrightarrow{{\mathcal L}} {\mathcal O}_{\Pi_{\mathcal U}}(-2)^3\xrightarrow{\bigwedge^2{\mathcal L}^t}{\mathcal O}_{\Pi_{\mathcal U}}\to {\mathcal O}_{\Pi_{\mathcal U}}/{\mathcal J}\to0
\end{equation}
on $\Pi_{\mathcal U}$. We let ${\mathcal U}^{\operatorname{cm}}:=(\bar\psi\circ Q)^{-1}(X\setminus F)$ and ${\mathcal U}^{\operatorname{ncm}}:=(\bar\psi\circ Q)^{-1}(F)$. \begin{lemma} The complex \eqref{eqn:EagonNorthcott1} is exact. \end{lemma} \begin{proof} For $x:=(L_{ij})\in {\mathcal U}^{\operatorname{cm}}$, the ideal ${\mathcal J}_x$ defines a CM-curve, hence has height 2 in ${\mathcal O}_{x\times_{\Gr(4, n+1)}\Pi}\cong {\mathcal O}_{\P^3}$. For $x\in {\mathcal U}^{\operatorname{ncm}}$, the ideal ${\mathcal J}_x$ defines a (non-reduced) plane in $x\times_{\Gr(4, n+1)}\Pi\cong \P^3$, hence has height 1. But since $F$ is a proper closed subset of the irreducible $k$-scheme $X$, this implies that the subscheme of $\Pi_{\mathcal U}$ defined by ${\mathcal J}$ has codimension 2. By the theorem of Eagon-Northcott (Theorem~\ref{thm:EN}), this implies that the complex \eqref{eqn:EagonNorthcott1} is exact. \end{proof} \begin{lemma} Let $\Pi_X:=X\times_{\Gr(4, n+1)}\Pi$ with projections $q_{X1}:\Pi_X\to X$, $q_{X2}:\Pi_X\to \Pi$.\\[5pt] 1. The complex \eqref{eqn:EagonNorthcott1} descends to a complex
\[
0\to q_{X1}^*{\mathcal F}_X(-3)\xrightarrow{\alpha_X} q_{X1}^*{\mathcal E}_X(-2)\xrightarrow{\bigwedge^2\alpha_X^t}{\mathcal O}_{\Pi_X}\to {\mathcal O}_{\Pi_X}/{\mathcal J}_X\to0
\]
on $\Pi_X$. Here ${\mathcal J}_X\subset {\mathcal O}_{\Pi_X}$ is defined as the image of $\bigwedge^2\alpha_X^t$. Moreover, this complex is exact.\\[2pt] 2. The map
\[
\beta_X:{\mathcal E}_X\to q_{X1*}({\mathcal O}_{\Pi_X}(2))=r_X^*{\mathcal V}
\]
induced by $\bigwedge^2\alpha_X^t$ defines an isomorphism of ${\mathcal E}_X$ with $i_X^*(E_{3, {\mathcal V}})$. \end{lemma} \begin{proof} We use the standard basis $e_1, e_2, e_3$ for ${\mathcal O}_{\mathcal U}^3$, and the standard basis $f_1, f_2$ for ${\mathcal O}_{\mathcal U}^2$. We use the basis $e_2\wedge e_3, -e_1\wedge e_3, e_1\wedge e_2$ for $\bigwedge^2{\mathcal O}_{\mathcal U}^3$ and the basis $f_1\wedge f_2$ for $\bigwedge^2{\mathcal O}_{\mathcal U}^2$.
With respect to these bases,
\[
\bigwedge^2{\mathcal L}:\bigwedge^2({\mathcal O}_{\Pi_{\mathcal U}}(-3)^2)\to \bigwedge^2({\mathcal O}_{\Pi_{\mathcal U}}(-2)^3)
\]
has matrix
\[
\begin{pmatrix}e_{23}(L_{ij})\\-e_{13}(L_{ij})\\e_{12}(L_{ij})\end{pmatrix}
\]
at $(L_{ij})\in {\mathcal U}$. Taking the transpose and twisting gives us the map
\[
\bigwedge^2{\mathcal L}^t: (\bigwedge^2{\mathcal O}_{\Pi_{\mathcal U}}^3)^\vee(-2)\to \det^{-1}{\mathcal O}_{\Pi_{\mathcal U}}^2
\]
with matrix $(e_{23}(L_{ij}),-e_{13}(L_{ij}),e_{12}(L_{ij}))$ at a point $(L_{ij})$, in our choice of bases. We have the canonical isomorphism
\[
{\mathcal O}_{\Pi_{\mathcal U}}^3\cong (\bigwedge^2{\mathcal O}_{\Pi_{\mathcal U}}^3)^\vee\otimes\det {\mathcal O}_{\Pi_{\mathcal U}}^3
\]
so $\bigwedge^2{\mathcal L}^t$ induces the map
\[
\tilde\bigwedge^2{\mathcal L}^t: {\mathcal O}_{\Pi_{\mathcal U}}^3(-2)\to \det^{-1}{\mathcal O}_{\Pi_{\mathcal U}}^2\otimes\det {\mathcal O}_{\Pi_{\mathcal U}}^3.
\]
The group $\Gamma$ acts on $\det^{-1}{\mathcal O}_{\Pi_{\mathcal U}}^2\otimes\det {\mathcal O}_{\Pi_{\mathcal U}}^3$ by
\[
(g,h)\mapsto (\det h)^{-1}(\det g)^2(\det h)^{-2}\cdot \det g\cdot (\det g)^{-3}\cdot (\det h)^3=1
\]
and as we have constructed the map $\tilde\bigwedge^2{\mathcal L}^t$ by natural operations from the map ${\mathcal L}$, $\tilde\bigwedge^2{\mathcal L}^t$ descends to a map of locally free sheaves on $\Pi_X$
\[
\bigwedge^2\alpha_X^t:q_{X1}^*{\mathcal E}_X(-2)\to {\mathcal O}_{\Pi_X}
\]
Our ideal ${\mathcal J}_X$ is by definition the image of $\bigwedge^2\alpha_X^t$. Thus, descent by the action of $\Gamma$ on the complex \eqref{eqn:EagonNorthcott1} does give a complex of the asserted form on $\Pi_X$. Since the complex \eqref{eqn:EagonNorthcott1} is exact, so is the complex formed by descent. This proves (1). For (2), we have the canonical isomorphism
\[
q_{X1*}({\mathcal O}_{\Pi_X}(2))=q_{X1*}(q_{X2}^*{\mathcal O}_{\Pi}(2))=r_X^*p_{1*}{\mathcal O}_{\Pi}(2)=r_X^*{\mathcal V},
\]
justifying the identity $q_{X1*}({\mathcal O}_{\Pi_X}(2))=r_X^*{\mathcal V}$ in the statement of (2). By construction, if $(L_{ij})$ has image $x\in X$, then the image of ${\mathcal E}_X\otimes k(x)$ in $H^0({\mathcal O}_{\Pi_{r_X(x)}}(2))={\mathcal V}\otimes k(x)$ is the subspace generated by the $2\times 2$ minors of $(L_{ij})$. By definition of the map $q$, this is exactly the subspace $E_{3, {\mathcal V}}\otimes k(x)\subset {\mathcal V}\otimes k(x)$, which proves (2). \end{proof} \begin{lemma}\label{lem:Blowup} The map $q:H_n\to X$ induces an isomorphism of $H_n$ with the blow-up $\mu_F:\text{Bl}_FX\to X$. \end{lemma} \begin{proof} The isomorphism $q^{\operatorname{cm}}:H^{\operatorname{cm}}_n\to X\setminus F$ induces a birational map $\phi:H_n\dashrightarrow \text{Bl}_FX$ over $X$; to show that $\phi$ is an isomorphism, it suffices to show that after pullback by ${\mathcal U}\to X$, the induced map $\phi_{\mathcal U}:H_n\times_X{\mathcal U}\dashrightarrow \text{Bl}_{F_{\mathcal U}}{\mathcal U}$ is an isomorphism. We reduce to the case $n=3$ by fibering over $\Gr(4,n+1)$. Let $D\subset \text{Bl}_FX$ denote the exceptional divisor. In addition to the action of $\Gamma$, we have the action of $\operatorname{GL}_4$ on ${\mathcal U}$ via its action on the linear forms on $\P^3$, which thus commutes with the $\Gamma$-action.
The matrices in $F_{\mathcal U}$ correspond to closed subschemes with ideals of the form $(L_0^2, L_0L_1, L_0L_2)$, with $L_0, L_1, L_2$ linearly independent linear forms; from this, one sees that $F_{\mathcal U}$ is a single $\operatorname{GL}_4\times\Gamma$ orbit. We take for base-point the matrix
\[
\alpha_0:=\begin{pmatrix} 0&-X_0\\ X_0&0\\ -X_1&X_2\end{pmatrix}
\]
Acting by the Lie algebra of $\operatorname{GL}_4\times \Gamma$, we find a basis for the normal bundle of $F_{\mathcal U}$ in ${\mathcal U}$ at $\alpha_0$ given by matrices of the form
\[
v_{A,B,C}:=\begin{pmatrix} C&B\\ 0&A\\ 0&0\end{pmatrix}
\]
with $C, B$ linear forms in $X_2, X_3$, $A$ a linear form in $X_1, X_2, X_3$. A path $\gamma$ in $\text{Bl}_{F_{\mathcal U}}{\mathcal U}$ lying over $\alpha_0$ and intersecting the exceptional divisor transversely thus corresponds to a path in ${\mathcal U}$ of the form
\[
\alpha_t:=\begin{pmatrix} tC&tB-X_0\\ X_0&tA\\ -X_1&X_2\end{pmatrix} +O(t^2),
\]
giving for $t\neq0$ the flat family of ideals ${\mathcal I}_t$ with generators
\[
e_{12}(t)=t^2AC-tBX_0+X_0^2,\ e_{13}(t)=tCX_2+tBX_1-X_0X_1,\ e_{23}(t)=X_0X_2+tAX_1
\]
We have
\[
X_2e_{13}(t)+X_1e_{23}(t)=t(AX_1^2+BX_1X_2+CX_2^2)=:tq,
\]
so for $t\neq 0$, the element $q$ is in ${\mathcal I}_t$. Let $S:={\rm Spec\,} k[t]_{(t)}$ and let ${\mathcal I}\subset {\mathcal O}_{\P^3_S}$ be the ideal sheaf generated by $e_{12}(t), e_{13}(t), e_{23}(t)$ and $q$. One can check by a direct computation that, if $f$ is a local section of ${\mathcal O}_{\P^3_S}$ such that $tf$ is a local section of ${\mathcal I}$, then $f$ is a local section of ${\mathcal I}$, that is, ${\mathcal O}_{\P^3_S}/{\mathcal I}$ is flat over $k[t]_{(t)}$. Clearly ${\mathcal I}\otimes k(t)\subset {\mathcal O}_{\P^3_{k(t)}}$ defines a CM-curve, and ${\mathcal I}\otimes k\subset {\mathcal O}_{\P^3}$ gives the subscheme $C_0$ defined by the ideal ${\mathcal I}_0$ with generators $(X_0^2, X_0X_1, X_0X_2, q)$. We can compute the corresponding tangent vector at $[C_0]\in H_3$ as the element of ${\rm Hom}({\mathcal I}_{C_0}, {\mathcal O}_{C_0})$ induced by $-1$ times the map ${\mathcal I}\to {\mathcal O}_{\P^3_S}/{\mathcal I}_0\otimes_kk[t]/t^2=t{\mathcal O}_{C_0}\oplus {\mathcal O}_{C_0}$. This maps ${\mathcal I}$ to the summand $t{\mathcal O}_{C_0}$ and sends $t{\mathcal I}$ to zero. Explicitly, the map ${\mathcal I}_{C_0}\to {\mathcal O}_{C_0}$ is given by
\[
x_0^2\mapsto Bx_0,\ x_0x_1\mapsto Cx_2+Bx_1,\ x_0x_2\mapsto Ax_1,\ q\mapsto 0.
\]
From our conditions on $A, B, C$, it is easy to see that this map is zero as a map to the normal bundle if and only if $A=B=C=0$. This shows that $q_{\mathcal U}^{-1}(F_{\mathcal U})\subset H_3\times_X{\mathcal U}$ is a reduced Cartier divisor, and hence $q^{-1}(F)\subset H_3$ is also a reduced Cartier divisor. As set-theoretically $q^{-1}(F)=H_3^{\operatorname{ncm}}$, the universal property of the blow-up implies that the morphism $q:H_3\to X$ lifts to a well-defined morphism $\tilde{q}:H_3\to \text{Bl}_FX$, with $\tilde{q}^*(D)=H_3^{\operatorname{ncm}}$. The points $x$ of $H_3$ lying over $\alpha_0$ are uniquely determined by a choice of degree three generator for ${\mathcal I}_{C_x}$, and this degree three generator is of the form $AX_1^2+BX_1X_2+CX_2^2$, with $A, B, C$ linear forms in $X_1, X_2, X_3$. We may take $B, C$ to be linear forms in $X_2, X_3$ and $A$ to be a linear form in $X_1, X_2, X_3$ without changing the resulting ideal, and then the choice of $A, B$ and $C$ is unique up to a (single) scalar.
Thus, the morphism $\tilde{q}_{\mathcal U}$ defines a bijection from the projective space on the normal space of $F_{\mathcal U}$ in ${\mathcal U}$ at $\alpha_0$ with the fiber of $q_{\mathcal U}$ over $\alpha_0$, and hence $\tilde{q}$ gives a bijection of $H_3^{\operatorname{ncm}}$ with $D$. Since the map $\tilde{q}$ is an isomorphism over $X\setminus F$, $\tilde{q}$ is a birational bijection of smooth proper $k$-schemes, hence an isomorphism by Zariski's main theorem. \end{proof} Define locally free sheaves ${\mathcal F}_n$, ${\mathcal E}_n$ on $H_n$ by ${\mathcal F}_n:=q^*{\mathcal F}_X$, ${\mathcal E}_n:=q^*{\mathcal E}_X$. Note that $\Pi_n=H_n\times_X\Pi_X$, so pulling back the complex
\[
0\to q_{X1}^*{\mathcal F}_X(-3)\xrightarrow{\alpha_X} q_{X1}^*{\mathcal E}_X(-2)\xrightarrow{\bigwedge^2\alpha_X^t}{\mathcal O}_{\Pi_X}
\]
to $\Pi_n$ via $q\times{\operatorname{\rm Id}}$ gives a complex
\[
0\to \pi_1^*{\mathcal F}_n(-3)\xrightarrow{\alpha_n} \pi_1^*{\mathcal E}_n(-2)\xrightarrow{\bigwedge^2\alpha_n^t}{\mathcal O}_{\Pi_n}
\]
on $\Pi_n$. Let ${\mathcal J}_n\subset {\mathcal O}_{\Pi_n}$ be the image of $\bigwedge^2\alpha_n^t$, giving the closed subscheme ${\mathcal C}_n'$ of $\Pi_n$ and the complex
\begin{equation}\label{eqn:EagonNorthcott3}
0\to \pi_1^*{\mathcal F}_n(-3)\xrightarrow{\alpha_n} \pi_1^*{\mathcal E}_n(-2)\xrightarrow{\bigwedge^2\alpha_n^t}{\mathcal O}_{\Pi_n} \to {\mathcal O}_{{\mathcal C}_n'}\to0
\end{equation}
Let $\Pi^{\operatorname{ncm}}_n\to H_n^{\operatorname{ncm}}$ be the pullback of $\Pi_n$ to $H_n^{\operatorname{ncm}}$, let $\bar{{\mathcal J}}_n$ be the image of ${\mathcal J}_n$ in ${\mathcal O}_{\Pi^{\operatorname{ncm}}_n}$ and let ${\mathcal P}_n\subset \Pi^{\operatorname{ncm}}_n$ be the closed subscheme defined by $\sqrt{\bar{{\mathcal J}}_n}$. Then ${\mathcal P}_n\to H_n^{\operatorname{ncm}}$ is a $\P^2$-bundle, linearly embedded in the $\P^3$-bundle $\Pi^{\operatorname{ncm}}_n\to H_n^{\operatorname{ncm}}$. Let $\iota:{\mathcal P}_n\to \Pi_n$ denote the inclusion. \begin{lemma}\label{lem:ENCohomology} 1. The complex \eqref{eqn:EagonNorthcott3} is exact.\\[2pt] 2. We have ${\mathcal J}_n\subset {\mathcal I}_{{\mathcal C}_n}$ and the quotient ${\mathcal I}_{{\mathcal C}_n}/{\mathcal J}_n$ is of the form $\iota_*({\mathcal L})$ for a certain invertible sheaf ${\mathcal L}$ on ${\mathcal P}_n$. \\[2pt] 3. For each $x\in H_n^{\operatorname{ncm}}$, the invertible sheaf ${\mathcal L}\otimes k(x)$ on ${\mathcal P}_{n, x}\cong \P^2_{k(x)}$ is isomorphic to ${\mathcal O}_{{\mathcal P}_{n, x}}(-3)$. \end{lemma} \begin{proof} We have already proven that, after restriction to $H^{\operatorname{cm}}_n$, (1) and (2) hold and ${\mathcal J}_n= {\mathcal I}_{{\mathcal C}_n}$. The statement is also local on $H_n$ in the flat topology, so we need only prove the lemma after pulling back via $\text{Bl}_{F_{\mathcal U}}{\mathcal U}\to H_n$ and restricting to a neighborhood of the exceptional divisor $D\subset \text{Bl}_{F_{\mathcal U}}{\mathcal U}$. As $H_n\to \Gr(4, n+1)$ is $\operatorname{GL}_{n+1}$-equivariant, we can restrict to a fiber over the standard $\P^3\subset \P^n$ defined by $x_4=\ldots=x_n=0$, so we reduce to the case $n=3$. For $n=3$, $F_{\mathcal U}$ is a homogeneous space for the action of $\operatorname{GL}_4\times\Gamma$ on ${\mathcal U}$, and the cohomology of the pullbacks of \eqref{eqn:EagonNorthcott3} and the quotient ${\mathcal I}_{{\mathcal C}_n}/{\mathcal J}_n$ are both flat over $F_{\mathcal U}$, so we can restrict our attention to a fiber $D_x$ of $D\to F_{\mathcal U}$ over a single point $x\in F_{\mathcal U}(k)$.
Similarly, it suffices to prove that we can find a flat morphism $\pi:S\to H_3$ with $S$ a smooth $k$-scheme such that \\[5pt] 1. The closed subscheme $V:=S\times_{H_3}D$ of $S$ is a smooth Cartier divisor on $S$, the projection $V\to D$ has image $D_x$ and the induced map $V\to D_x$ is smooth.\\[2pt] 2. The cartesian square
\[
\xymatrix{ V\ar[r]\ar[d]&S\ar[d]^\pi\\ D\ar@{^(->}[r]&H_3 }
\]
is transverse in the category of smooth $k$-schemes, that is, the induced map on the normal bundles $N_V\to i_{D_x}^*N_D$ is an isomorphism.\\[2pt] 3. The statement of the lemma holds after pulling back by $\pi:S\to H_3$. \\[5pt] We take $x$ to be the matrix
\[
\begin{pmatrix}0&-x_0\\x_0&0\\-x_1&x_2\end{pmatrix}.
\]
The set of points $y$ lying over $x$ is parametrized by the non-zero cubic forms
\[
q:=Ax_1^2+Bx_1x_2+Cx_2^2
\]
with $A, B, C$ as in the proof of Lemma~\ref{lem:Blowup}. We take as parameter space $V$ the complement of 0 in the affine space of dimension 7 that parametrizes the cubic forms $q$, that is
\[
A=a_1x_1+a_2x_2+a_3x_3,\ B=b_2x_2+b_3x_3,\ C=c_2x_2+c_3x_3
\]
and $V={\rm Spec\,} k[a_1, a_2, a_3, b_2, b_3, c_2, c_3]\setminus\{0\}$. Our fiber in $D$ is thus $\P(V)$, so we have the canonical smooth surjective map $\pi_0:V\to D$. We take as family of curves over $V\times {\rm Spec\,} k[t]_{(t)}$
\[
\alpha_t(q):= \begin{pmatrix}tC&tB-x_0\\x_0&tA\\-x_1&x_2\end{pmatrix}.
\]
Let $S=V\times {\rm Spec\,} k[t]_{(t)}$. The corresponding flat family of subschemes $C\subset \P^3_{S}$ is given by the sheaf of ideals ${\mathcal I}$ generated by $(e_{12}(\alpha_t(q)), e_{13}(\alpha_t(q)), e_{23}(\alpha_t(q)), q)$; the corresponding ideal sheaf ${\mathcal J}$ is generated by $(e_{12}(\alpha_t(q)), e_{13}(\alpha_t(q)), e_{23}(\alpha_t(q)))$. Since $C':={\rm Spec\,} {\mathcal O}_{\P^3_S}/{\mathcal J}$ has codimension 2 in $\P^3_S$, the Eagon-Northcott complex
\[
0\to{\mathcal O}_{\P^3_S}(-3)^2\xrightarrow{\alpha_t(q)}{\mathcal O}_{\P^3_S}(-2)^3\xrightarrow{\bigwedge^2\alpha_t(q)^t} {\mathcal O}_{\P^3_S}\to {\mathcal O}_{C'}\to0
\]
is exact. Putting ${\mathcal O}_{\P^3_S}$ in degree zero, the cohomology of
\[
0\to {\mathcal O}_{\P^3_S}(-3)^2\xrightarrow{\alpha_t(q)}{\mathcal O}_{\P^3_S}(-2)^3\xrightarrow{\bigwedge^2\alpha_t(q)^t} {\mathcal O}_{\P^3_S}\to {\mathcal O}_C\to0
\]
is equal to ${\mathcal I}/{\mathcal J}$, supported in degree zero. As an ${\mathcal O}_{\P^3_S}$-module, ${\mathcal I}/{\mathcal J}$ is generated by the image of $q$. Since
\[
tq=x_2e_{13}(\alpha_t(q))+x_1e_{23}(\alpha_t(q))
\]
we have $t{\mathcal I}\subset {\mathcal J}$, $t\cdot {\mathcal I}/{\mathcal J}=0$ and
\[
{\mathcal I}/{\mathcal J}={\mathcal I}/({\mathcal J}+t{\mathcal I})\cong ({\mathcal I}/t{\mathcal I})/({\mathcal J}/t{\mathcal I}).
\]
We write $0$ for ${\rm Spec\,} k[t]_{(t)}/(t)\subset {\rm Spec\,} k[t]_{(t)}$ and let $C_0\subset \P^3_V$ be the fiber of $C$ over $V\times0\hookrightarrow S$. Since $C$ is flat over $S$, the sequence
\[
0\to {\mathcal I}/t{\mathcal I}\to {\mathcal O}_{\P^3_S}/(t)\to {\mathcal O}_{C_0}\to 0
\]
is exact. Thus ${\mathcal J}/t{\mathcal I}\subset {\mathcal I}/t{\mathcal I}\subset {\mathcal O}_{\P^3_S}/(t)$ is given by the ideals
\[
(x_0^2, x_0x_1, x_0x_2){\mathcal O}_{\P^3_S}/(t)\subset (x_0^2, x_0x_1, x_0x_2, q){\mathcal O}_{\P^3_S}/(t).
\] Thus as ${\mathcal O}_{\P^3_S}$-module, ${\mathcal I}/{\mathcal J}$ is the ${\mathcal O}_{\P^3_S}/(t)={\mathcal O}_{\P^3_V}$-module corresponding to the graded module $(x_0^2, x_0x_1, x_0x_2, q)/ (x_0^2, x_0x_1, x_0x_2)$ over the sheaf of polynomial rings $R:={\mathcal O}_V[x_0, x_1, x_2, x_3]$. This is a cyclic module with generator the image $[q]$ of $q$, so we need only compute the annihilator of $[q]$. Clearly $x_0q$ is in ${\mathcal J}$, so $x_0\in {\operatorname{Ann}}_R[q]$. On the other hand, if $P\cdot[q]=0$ for some $P\in R$, we have $\alpha, \beta, \gamma\in R$ with
\[
Pq=\alpha\cdot x_0^2+\beta\cdot x_0x_1+\gamma\cdot x_0x_2
\]
Since $x_0$ is a prime element of $R$ and $q$ does not involve $x_0$, we have $P=x_0P'$ for some $P'\in R$. Thus ${\operatorname{Ann}}_R[q]=x_0R$. The plane $\iota_x:{\mathcal P}_{3,x}\hookrightarrow \P^3_S$ is the subscheme of $\P^3_S$ defined by $(t,x_0)$. As $q$ has degree three, this implies
\[
{\mathcal I}/{\mathcal J}\cong \iota_{x*}{\mathcal O}_{{\mathcal P}_{3,x}}(-3)
\]
which completes the proof. \end{proof} The complex \eqref{eqn:EagonNorthcott3} and the inclusion ${\mathcal J}_n\subset {\mathcal I}_{{\mathcal C}_n}$ give us the complex
\begin{equation}\label{eqn:EagonNorthcott4}
0\to p^*{\mathcal F}_n(-3)\xrightarrow{\alpha_n} p^*{\mathcal E}_n(-2)\xrightarrow{\bigwedge^2\alpha_n^t}{\mathcal O}_{\Pi_n} \to {\mathcal O}_{{\mathcal C}_n}\to0
\end{equation}
with ${\mathcal O}_{\Pi_n}$ in degree zero. For later use, we note the following consequence of Lemma~\ref{lem:ENCohomology}. \begin{proposition}\label{prop:MainResolution} Retaining the notation of Lemma~\ref{lem:ENCohomology}, the cohomology sheaves ${\mathcal H}^i$ of the complex \eqref{eqn:EagonNorthcott4} satisfy\\[5pt] 1. ${\mathcal H}^i=0$ for $i\neq0$.\\[2pt] 2. There is a canonical isomorphism ${\mathcal H}^0\cong \iota_*({\mathcal L})$. \end{proposition} \section{Determinants and canonical bases}\label{sec:Det} Recall the morphisms $r_X:X\to \Gr(4, n+1)$ and $\Phi:H_n\to \Gr(4, n+1)$. \begin{lemma}\label{lem:TangentDet} 1. We have the following presentation of the relative tangent sheaf ${\mathcal T}_{r_X}$:
\begin{equation}\label{eqn:TangentPresentation}
0\to {\mathcal O}_X\to \mathcal{E}nd({\mathcal E}_X)\oplus \mathcal{E}nd({\mathcal F}_X)\to \mathcal{H}om({\mathcal F}_X, {\mathcal E}_X)\otimes q_{X1*}{\mathcal O}_{\Pi_X}(1)\to {\mathcal T}_{r_X}\to 0
\end{equation}
2. Let $j:H_n^{\operatorname{cm}}\to H_n$ be the inclusion. We have the presentation of $j^*{\mathcal T}_\Phi$
\[
0\to {\mathcal O}_{H_n^{\operatorname{cm}}}\to j^*\mathcal{E}nd({\mathcal E})\oplus j^*\mathcal{E}nd({\mathcal F})\to j^*\mathcal{H}om({\mathcal F}, {\mathcal E})\otimes \pi_{1*}{\mathcal O}_{\Pi_n}(1)\to j^*{\mathcal T}_\Phi\to 0
\]
3. We have canonical isomorphisms
\[
\det {\mathcal T}_{\Phi}\cong (\det{\mathcal F}^{\otimes -3}\otimes\det{\mathcal E}^{\otimes 2})^{\otimes 4}\otimes \det(\pi_{1*}{\mathcal O}_{\Pi_n}(1))^{\otimes 6}\otimes {\mathcal O}_{H_n}(-6\cdot H_n^{\operatorname{ncm}})
\]
and
\[
\det {\mathcal T}_{H_n}\cong\det {\mathcal T}_{\Phi}\otimes\Phi^*{\mathcal O}_{\Gr(4,n+1)}(n+1)
\]
\end{lemma} \begin{proof} Let $Q:{\mathcal U}\to X$ denote the quotient map, $r_{\mathcal U}:{\mathcal U}\to \Gr(4, n+1)$ the structure map. We have the vector bundle $M\to \Gr(4, n+1)$ associated to the locally free sheaf ${\mathcal M}=\mathcal{H}om({\mathcal O}_\Gr^2, {\mathcal O}_\Gr^3)\otimes p_{1*}{\mathcal O}_\Pi(1)$, and ${\mathcal U}$ is an open subscheme of $M$.
Thus ${\mathcal T}_{r_{\mathcal U}}=r_{\mathcal U}^*{\mathcal M}$. A section $m=(m_{ij})$ of $r_{\mathcal U}^*{\mathcal M}=\mathcal{H}om({\mathcal O}_{\mathcal U}^2, {\mathcal O}_{\mathcal U}^3)\otimes r_{\mathcal U}^* p_{1*}{\mathcal O}_\Pi(1)$ is a matrix of linear forms on the fibers of the $\P^3$-bundle $\Pi_{\mathcal U}\to \Gr(4, n+1)$, with a matrix $m$ giving the tangent vector at $L\in {\mathcal U}\subset \mathcal{H}om({\mathcal O}_{\mathcal U}^2, {\mathcal O}_{\mathcal U}^3)\otimes q_{1*}{\mathcal O}_{\Pi_{\mathcal U}}(1)$ defined by $L+\epsilon m$. Since $X$ is the quotient of ${\mathcal U}$ by $\Gamma$, the pullback $Q^*{\mathcal T}_{r_X}$ is the quotient of ${\mathcal T}_{r_{\mathcal U}}$ by the action of the Lie algebra of $\Gamma$, and ${\mathcal T}_{r_X}$ itself is thus given by descent via the natural action of $\Gamma$ on this quotient. Explicitly, the Lie algebra of $\operatorname{GL}_3\times\operatorname{GL}_2$ acts on ${\mathcal M}$ at a point $L$ of ${\mathcal U}$ by
\[
(a,b)\cdot m=m+a\cdot L-L\cdot b
\]
The map ${\mathbb G}_m\to \operatorname{GL}_3\times\operatorname{GL}_2$ induces the map of Lie algebras $\lambda\mapsto (\lambda\cdot {\operatorname{\rm Id}}, \lambda\cdot {\operatorname{\rm Id}})$ giving the presentation of $Q^*{\mathcal T}_{r_X}$
\[
0\to {\mathcal O}_{\mathcal U}\to \mathcal{E}nd({\mathcal O}_{\mathcal U}^3)\oplus \mathcal{E}nd({\mathcal O}_{\mathcal U}^2)\to \mathcal{H}om({\mathcal O}_{\mathcal U}^2, {\mathcal O}_{\mathcal U}^3)\otimes r_{\mathcal U}^* p_{1*}{\mathcal O}_\Pi(1)\to Q^*{\mathcal T}_{r_X}\to0.
\]
Applying $\Gamma$-descent yields the asserted presentation of ${\mathcal T}_{r_X}$. (2) follows from (1) and the fact that $q$ restricts to an isomorphism $q^{\operatorname{cm}}:H_n^{\operatorname{cm}}\to X\setminus F$. For (3), we have
\[
\rnk {\mathcal F}_X=2,\ \rnk {\mathcal E}_X=3,\ \rnk q_{X1*}{\mathcal O}_{\Pi_X}(1)=\dim_kH^0(\P^3, {\mathcal O}_{\P^3}(1))=4
\]
so (1) gives us the canonical isomorphism
\[
\det{\mathcal T}_{r_X}\cong (\det{\mathcal F}_X^{\otimes -3}\otimes\det{\mathcal E}_X^{\otimes 2})^{\otimes 4}\otimes \det(q_{X1*}{\mathcal O}_{\Pi_X}(1))^{\otimes 6}
\]
$H_n$ is isomorphic to the blow-up of $X$ along the codimension 7 closed subscheme $F$ with exceptional divisor $H_n^{\operatorname{ncm}}$. Since $F$ is smooth over $\Gr(4, n+1)$, the formula for the relative canonical class of a blow-up yields our formula for $\det {\mathcal T}_{\Phi}$. For the final formula, we have the exact sequence
\[
0\to {\mathcal T}_{\Phi}\to {\mathcal T}_{H_n}\to \Phi^*{\mathcal T}_{\Gr(4, n+1)}\to 0
\]
so the canonical isomorphism $\det{\mathcal T}_{\Gr(4, n+1)}\cong {\mathcal O}_{\Gr(4, n+1)}(n+1)$ yields our formula for $\det{\mathcal T}_{H_n}$. \end{proof} Let ${\mathcal E}_{m,n}:=p_*{\mathcal O}_{{\mathcal C}_n}(m)$. \begin{lemma}\label{lem:EnmPresentation} For $m\ge0$, ${\mathcal E}_{m,n}$ is a locally free sheaf on $H_n$ of rank $3m+1$ and $R^ip_*{\mathcal O}_{{\mathcal C}_n}(m)=0$ for $i>0$. \end{lemma} \begin{proof} We have $R^i\pi_{1*}{\mathcal O}_{\Pi_n}(j)=0$ for $i>0$ and $j>-3$; applying this to the twist of the resolution \eqref{eqn:EagonNorthcott4} by ${\mathcal O}_{\Pi_n}(m)$ shows that $j^*{\mathcal E}_{m,n}$ is a locally free sheaf on $H^{\operatorname{cm}}_n$ of rank $3m+1$ and that $R^ip_*{\mathcal O}_{{\mathcal C}_n}(m)=0$ for $i>0$. For $x\in H^{\operatorname{ncm}}_n$, the curve $\bar{C}_x$ is a plane cubic, showing that $H^0(\bar{C}_x, {\mathcal O}_{\bar{C}_x}(m))$ has dimension $3m$ and $H^i(\bar{C}_x, {\mathcal O}_{\bar{C}_x}(m))=0$ for $i>0$.
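Indeed, the plane-curve exact sequence $0\to {\mathcal O}_{\P^2}(m-3)\to{\mathcal O}_{\P^2}(m)\to{\mathcal O}_{\bar{C}_x}(m)\to0$ gives, for $m\ge1$,
\[
h^0(\bar{C}_x, {\mathcal O}_{\bar{C}_x}(m))=\binom{m+2}{2}-\binom{m-1}{2}=3m,
\]
and the vanishing of $H^1(\P^2, {\mathcal O}_{\P^2}(j))$ for all $j$, together with that of $H^2(\P^2, {\mathcal O}_{\P^2}(m-3))$ for $m\ge1$, gives the higher vanishing.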
This implies $H^0(C_x, {\mathcal O}_{C_x}(m))$ has dimension $3m+1$ and $H^i(C_x, {\mathcal O}_{C_x}(m))=0$ for $i>0$. As ${\mathcal O}_{{\mathcal C}_n}(m)$ is flat over $H_n$, this completes the proof. \end{proof} We make a brief detour to define a canonical relative orientation on ${\operatorname{Sym}}^mW$, for $W$ a given $k$-vector space. Suppose $W$ has dimension $r$ over $k$ and let $w_1,\ldots, w_r$ be a basis. We order the basis of ${\operatorname{Sym}}^mW$ consisting of the monomials of degree $m$ in the $w_j$ lexicographically, using the order $w_1>w_2>\ldots>w_r$ on the $w_j$. To fix this notion, $w_1^m$ will come first in this order. Let ${\operatorname{sym}}^m_r$ denote the dimension of ${\operatorname{Sym}}^mW$ and let $e_1(w_*),\ldots, e_{{\operatorname{sym}}^m_r}(w_*)$ denote the lexicographically ordered basis of ${\operatorname{Sym}}^mW$. \begin{lemma}\label{lem:DetSymIso} Let
\[
N_{m,r}:= \binom{m+r-1}{r}
\]
Then there is a unique isomorphism of rank one $\Aut_k(W)$-representations
\[
\phi_{W,m}:\det(W)^{\otimes N_{m,r}}\xrightarrow{\sim}\det{\operatorname{Sym}}^mW
\]
satisfying
\[
\phi_{W,m}((w_1\wedge\ldots\wedge w_r)^{\otimes N_{m,r}})=e_1(w_*)\wedge\ldots\wedge e_{{\operatorname{sym}}^m_r}(w_*)
\]
for each basis $w_1,\ldots, w_r$ of $W$. \end{lemma} \begin{proof} As $\det{\operatorname{Sym}}^mW$ is a rank one representation of $\Aut_k(W)$, there is a unique integer $N$ such that $\det{\operatorname{Sym}}^mW$ is isomorphic to the representation $\det(W)^{\otimes N}$, and one can determine $N$ by examining the action of $t\cdot {\operatorname{\rm Id}}_W$. On $\det(W)^{\otimes N}$, $t\cdot {\operatorname{\rm Id}}_W$ acts by $t^{rN}$, while on $\det{\operatorname{Sym}}^mW$, $t\cdot {\operatorname{\rm Id}}_W$ acts by $t^{m\cdot{\operatorname{sym}}^m_r}$. Thus
\[
N=\frac{m}{r}\cdot{\operatorname{sym}}^m_r=\frac{m}{r}\cdot \binom{m+r-1}{m}= \binom{m+r-1}{r}=N_{m,r}
\]
To see that $\phi:=\phi_{W,m}$ is a well-defined $\Aut_kW$-equivariant isomorphism, we temporarily write $\phi$ as $\phi_{w_*}$, to indicate the possible dependence on the choice of ordered basis $w_*:=(w_1,\ldots, w_r)$. Then we need only show that
\[
\phi_{\alpha(w_*)}((\alpha(w_1)\wedge\ldots\wedge\alpha(w_r))^{\otimes N_{m,r}})=\det{\operatorname{Sym}}^m(\alpha)(\phi_{w_*}((w_1\wedge\ldots\wedge w_r)^{\otimes N_{m,r}}))
\]
for $\alpha\in \Aut_k(W)$ of the form: \\[2pt] 1. An elementary matrix $e_{ij}(\lambda)$ acting on $w_1,\ldots, w_r$ by $w_\ell':=e_{ij}(\lambda)(w_\ell)$ with $w_\ell'=w_\ell$ for $\ell\neq i$ and $w_i'=w_i+\lambda\cdot w_j$. \\ 2. A diagonal automorphism $(t_1,\ldots, t_r)$ with $w_j':=(t_1,\ldots, t_r)(w_j):=t_jw_j$. \\ 3. A permutation $\sigma\in \Sigma_r$ acting by $w_j':=\sigma(w_j):=w_{\sigma(j)}$.\\[2pt] For (1), ${\operatorname{Sym}}^m(e_{ij}(\lambda))$ sends $e_\ell(w_*)$ to $e_\ell(w_*')$. This map is represented by a unipotent matrix with respect to the basis $e_*(w_*)$, and thus
\begin{align*}
\phi_{w_*'}((w'_1\wedge\ldots\wedge w'_r)^{\otimes N_{m,r}})&= e_1(w_*')\wedge \ldots\wedge e_{{\operatorname{sym}}^m_r}(w_*')\\
&=e_1(w_*)\wedge \ldots\wedge e_{{\operatorname{sym}}^m_r}(w_*)\\
&={\operatorname{Sym}}^m(e_{ij}(\lambda))(\phi_{w_*}((w_1\wedge\ldots\wedge w_r)^{\otimes N_{m,r}}))
\end{align*}
For (2), this follows by a similar argument, since the action of $(t_1,\ldots, t_r)$ sends the ordered basis $w_1,\ldots, w_r$ to the ordered basis $t_1w_1,\ldots, t_rw_r$.
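Explicitly, in case (2) both sides of the required identity are multiplied by the same scalar
\[
\prod_{M}M(t_1,\ldots,t_r)=(t_1\cdots t_r)^{m\cdot{\operatorname{sym}}^m_r/r}=(t_1\cdots t_r)^{N_{m,r}},
\]
the product running over the degree $m$ monomials, since by symmetry each $t_i$ occurs with the same total exponent.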
For (3), the permutation $\sigma$ sends the $j$th ordered monomial $e_j(w_1,\ldots, w_r)$, with respect to the lexicographical order for $w_1>\ldots> w_r$, to the $j$th ordered monomial $e_j(w_{\sigma(1)},\ldots, w_{\sigma(r)})$ with respect to the lexicographical order for $w_{\sigma(1)}>\ldots> w_{\sigma(r)}$, in other words,
\[
e_j(w'_*)={\operatorname{Sym}}^m(\sigma)(e_j(w_*))
\]
and thus
\[
\phi_{w_*'}((w'_1\wedge\ldots\wedge w'_r)^{\otimes N_{m,r}})= {\operatorname{Sym}}^m(\sigma)(\phi_{w_*}((w_1\wedge\ldots\wedge w_r)^{\otimes N_{m,r}}))
\]
\end{proof} \begin{proposition}\label{prop:DetSymIso} Let ${\mathcal W}$ be a locally free coherent sheaf of rank $r$ on a $k$-scheme $X$. Then there is a unique isomorphism of invertible sheaves $\phi_{{\mathcal W},m}:(\det {\mathcal W})^{\otimes N_{m,r}}\xrightarrow{\sim} \det{\operatorname{Sym}}^m{\mathcal W}$ such that, if $w_1,\ldots, w_r$ is a framing for ${\mathcal W}$ over some open subscheme $U$ of $X$, then
\[
\phi_{{\mathcal W},m}((w_1\wedge\ldots\wedge w_r)^{\otimes N_{m,r}})=e_1(w_*)\wedge\ldots\wedge e_{{\operatorname{sym}}^m_r}(w_*)
\]
where $e_j(w_*)$ is the $j$th monomial section of ${\operatorname{Sym}}^m{\mathcal W}$ over $U$, with respect to the lexicographical order on the monomials in the $w_j$ induced by the order $w_1>\ldots>w_r$.\end{proposition} \begin{proof} This is an immediate consequence of Lemma~\ref{lem:DetSymIso}. \end{proof} We will be using a modification of the lexicographical orientation, which we now describe. As we will only be using this for a rank four bundle, we restrict to $r=4$. \begin{definition}\label{def:CanonicalOrientation} 1. Given an ordered basis $x_1,\ldots, x_4$ for $W$ we call a monomial $x_1^{m_1}x_2^{m_2}x_3^{m_3}x_4^{m_4}$ {\em positive} if $m_1>m_2$ or if $m_1=m_2$ and $m_3>m_4$, {\em negative} if $m_1<m_2$ or if $m_1=m_2$ and $m_3<m_4$, and {\em neutral} if $m_1=m_2$ and $m_3=m_4$.\\[2pt] 2. If $M:=x_1^{m_1}x_2^{m_2}x_3^{m_3}x_4^{m_4}$ is a positive monomial, define $M^*:= x_1^{m_2}x_2^{m_1}x_3^{m_4}x_4^{m_3}$.\\[2pt] 3. Note that $M\mapsto M^*$ defines a bijection of the positive monomials with the negative monomials and is the identity on the neutral monomials. The {\em canonical order} on monomials in ${\operatorname{Sym}}^mW$ is defined as follows: If $M_1$ and $M_2$ are positive or neutral monomials, set $M_1>M_2$ if $M_1$ is larger than $M_2$ in the lexicographical order. If $M_1>M_2$ are positive monomials and $M_3$ is neutral, we set $M_1>M_1^*>M_2>M_2^*>M_3$. \\[2pt] 4. Let $\sigma_m(x_*)$ be the permutation of the set of the monomials of ${\operatorname{Sym}}^mW$ that transforms the lexicographical order to the canonical order, and let $\epsilon_m$ be the sign of $\sigma_m(x_*)$. \\[2pt] 5. Let ${\mathcal W}$ be a rank four locally free sheaf on some $k$-scheme $X$. The {\em canonical orientation} on ${\operatorname{Sym}}^m{\mathcal W}$ is the isomorphism
\[
\epsilon_m\phi_{{\mathcal W},m}:\det{\mathcal W}^{\otimes N_{m,4}}\xrightarrow{\sim} \det{\operatorname{Sym}}^m{\mathcal W}
\]
\end{definition} \begin{remark} 1. Definition~\ref{def:CanonicalOrientation}(5) is justified by the fact that $\epsilon_m$ is independent of the choice of ordered basis $x_1,\ldots, x_4$ for $W$. One can verify this along the lines of the proof of Lemma~\ref{lem:DetSymIso}.\\[2pt] 2. Fix $m$ and choose positive integers $a_1, a_2$ with $a_1> m\cdot a_2$; give $x_1$ weight $a_1$, $x_2$ weight $-a_1$, $x_3$ weight $a_2$ and $x_4$ weight $-a_2$.
Then a monomial $M:=x_1^{m_1}x_2^{m_2}x_3^{m_3}x_4^{m_4}$ is positive (resp. negative, resp. neutral) if and only if $\operatorname{wt}(M)>0$, resp. $\operatorname{wt}(M)<0$, resp. $\operatorname{wt}(M)=0$, that is, if and only if $a_1(m_1 -m_2)+a_2(m_3-m_4)$ is positive, resp. negative, resp. zero. Moreover, if $a_1>(2m-1)a_2$, then ordering the positive monomials by weight is the same as the lexicographical order and the canonical order on the non-neutral monomials is the order by the absolute value of the weight, with the positive monomial of weight $w$ immediately preceding the negative monomial of weight $-w$. \end{remark} \begin{ex} We illustrate with the example of degree 2. The lexicographical order is
\[
x_1^2>x_1x_2>x_1x_3>x_1x_4>x_2^2>x_2x_3>x_2x_4>x_3^2>x_3x_4>x_4^2
\]
The positive monomials are
\[
x_1^2, x_1x_3, x_1x_4, x_3^2
\]
The negative monomials are
\[
x_2^2, x_2x_3, x_2x_4, x_4^2
\]
The neutral monomials are
\[
x_1x_2, x_3x_4
\]
The canonical order is
\[
x_1^2>x_2^2>x_1x_3>x_2x_4> x_1x_4>x_2x_3> x_3^2>x_4^2>x_1x_2>x_3x_4
\]
If we give $x_1$ weight $3$, $x_2$ weight $-3$, $x_3$ weight $1$ and $x_4$ weight $-1$, the weights of the monomials in the canonical order are
\[
6, -6, 4, -4, 2,-2, 2, -2, 0,0
\]
If we give $x_1$ weight $4$, $x_2$ weight $-4$, $x_3$ weight $1$ and $x_4$ weight $-1$, the weights of the monomials in the canonical order are
\[
8, -8, 5, -5, 3,-3, 2, -2, 0,0
\]
\end{ex} Let $V$ be a fixed $k$-vector space of dimension $n+1$ and let $E_{4,n+1}\to \Gr(4, n+1)$ be the tautological rank four subsheaf of ${\mathcal O}_{\Gr(4, n+1)}^{n+1}$, giving the exact sequence
\[
0\to E_{4,n+1}\to {\mathcal O}_{\Gr(4, n+1)}\otimes V\to {\mathcal Q}_{n-3, n+1}\to 0
\]
We have the similarly defined sequence on $\P^n$,
\[
0\to {\mathcal O}_{\P^n}(-1)\to {\mathcal O}_{\P^n}\otimes V\to {\mathcal Q}_{n,n+1}\to 0
\]
\begin{lemma}\label{lem:OPiDet} Let $N_m=N_{m,4}$. Then for all $m\ge0$ we have canonical isomorphisms
\[
\phi_{m,n}:{\mathcal O}_{\Gr(4,n+1)}(N_m) \xrightarrow{\sim}\det p_{1*}({\mathcal O}_\Pi(m))
\]
\[
\psi_n:{\mathcal O}_{\Gr(4,n+1)}(1)\xrightarrow{\sim} \det E_{4,n+1}^\vee.
\]
with
\[
\phi_{m,n}\circ\psi_n^{\otimes N_m}=\epsilon_m\cdot \phi_{m, E_{4,n+1}^\vee}
\]
\end{lemma} \begin{proof} The Pl\"ucker embedding is defined by the global sections of $\det E^\vee_{4,n+1}$, which gives the isomorphism $\psi_n$. We then define $\phi_{m,n}:= \epsilon_m\cdot \phi_{m, E_{4,n+1}^\vee}\circ \psi_n^{\otimes N_m}$. \end{proof} \begin{remark} Given a framing $x_1,\ldots, x_4$ of $E_{4, n+1}^\vee$ over some open subset $U$ of $\Gr(4, n+1)$, this gives a corresponding isomorphism ${\mathcal O}_{\Gr(4, n+1)}(1)\cong {\mathcal O}_{\Gr(4, n+1)}$ over $U$ and if $s_1,\ldots, s_{{\operatorname{sym}}^m_4}$ is the canonical framing of $p_{1*}({\mathcal O}_\Pi(m))$ with respect to $x_1,\ldots, x_4$, then
\[
\phi_{m,n}(1)=s_1\wedge\ldots\wedge s_{{\operatorname{sym}}^m_4}
\]
\end{remark} We have the $\P^2$-bundle $\rho_n:{\mathcal P}_n\to H^{\operatorname{ncm}}_n$ and the invertible sheaf ${\mathcal L}$ on ${\mathcal P}_n$ given by Lemma~\ref{lem:ENCohomology}. We twist the complex \eqref{eqn:EagonNorthcott4} by ${\mathcal O}_{\Pi_n}(m)$. This gives us the complex on $\Pi_n$
\begin{equation}\label{eqn:ResOnCn}
0\to p^*{\mathcal F}_n(m-3)\xrightarrow{\alpha_n} p^*{\mathcal E}_n(m-2)\xrightarrow{\bigwedge^2\alpha_n^t}{\mathcal O}_{\Pi_n}(m) \to {\mathcal O}_{{\mathcal C}_n}(m)\to0
\end{equation}
with ${\mathcal O}_{\Pi_n}(m)$ put in degree zero.
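For later use we note that $N_m=\binom{m+3}{4}$, so $N_m=0$ for $m\le 0$ and
\[
N_1=1,\quad N_2=5,\quad N_3=15,\quad N_4=35,\quad N_5=70.
\]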
\begin{lemma}\label{lem:MainRes} Take $m\ge1$. If we apply $p_*$ to the complex \eqref{eqn:ResOnCn}, we obtain the complex
\begin{equation}\label{eqn:ResOnHn}
0\to {\mathcal F}_n\otimes p_*[{\mathcal O}_{\Pi_n}(m-3)]\xrightarrow{\alpha_n} {\mathcal E}_n\otimes p_*[{\mathcal O}_{\Pi_n}(m-2)]\xrightarrow{\bigwedge^2\alpha_n^t}p_*{\mathcal O}_{\Pi_n}(m) \to {\mathcal E}_{m,n}\to0.
\end{equation}
Moreover, this complex has cohomology sheaves ${\mathcal H}^i=0$ for $i\neq0$ and ${\mathcal H}^0\cong i_{H^{\operatorname{ncm}}_n*}(\rho_{n*}({\mathcal L}(m)))$. \end{lemma} \begin{proof} This follows from Proposition~\ref{prop:MainResolution}, together with the fact that ${\mathcal O}_{\P^3}(m)$, ${\mathcal O}_{\P^3}(m-3)$, ${\mathcal O}_{\P^3}(m-2)$, ${\mathcal O}_{{\mathcal C}_n}(m)$ and ${\mathcal O}_{\P^2}(m-3)$ have vanishing higher cohomology for $m\ge1$. \end{proof} \begin{proposition}\label{prop:DetCMSectionBundle} Let
\[
M_m:=N_m-3N_{m-2}+2N_{m-3}=\frac{(3m-1)m}{2}.
\]
Let
\[
K_m:=\binom{m-1}{2}.
\]
Then for all $m\ge1$, the isomorphisms $\phi_{j,n}$ and the complex of Lemma~\ref{lem:MainRes} give rise to a canonical isomorphism
\[
\rho_{m,n}:\det{\mathcal E}_{m,n}\xrightarrow{\sim} \Phi^*{\mathcal O}_{\Gr(4,n+1)}(M_m)\otimes \det{\mathcal E}^{\otimes-\binom{m+1}{3}}\otimes\det{\mathcal F}^{\otimes\binom{m}{3}}\otimes{\mathcal O}_{H_n}(K_mH_n^{\operatorname{ncm}}).
\]
\end{proposition} \begin{proof} Since
\[
N_m=N_{m,4}=\binom{m+3}{4}
\]
we have
\[
M_m=\binom{m+3}{4}-3\binom{m+1}{4}+2\binom{m}{4}=\frac{(3m-1)m}{2}
\]
as claimed. The complex of Lemma~\ref{lem:MainRes} gives us the canonical isomorphism
\begin{align*}
\det{\mathcal E}_{m,n}&=\det p_*{\mathcal O}_{{\mathcal C}_n}(m)\\
&\cong\det ({\mathcal F}_n \otimes p_*[{\mathcal O}_{\Pi_n}(m-3)])\otimes \det({\mathcal E}_n \otimes p_*[{\mathcal O}_{\Pi_n}(m-2)])^{-1}\\&\hskip120pt\otimes \det p_*[{\mathcal O}_{\Pi_n}(m)] \otimes \det i_{H^{\operatorname{ncm}}_n*}(\rho_{n*}({\mathcal L}(m))).
\end{align*}
Recalling that the restriction of ${\mathcal L}$ to a fiber of $\rho_n$ is canonically isomorphic to ${\mathcal O}_{\P^2}(-3)$, we find that, for $x\in H^{\operatorname{ncm}}_{n}$, $\rho_{n*}({\mathcal L}(m))\otimes k(x)$ has dimension
\[
h^0(\P^2, {\mathcal O}_{\P^2}(m-3))=\binom{m-1}{2}=K_m
\]
so
\[
\det i_{H^{\operatorname{ncm}}_n*}(\rho_{n*}({\mathcal L}(m)))={\mathcal O}_{H_n}(K_m\cdot H^{\operatorname{ncm}}_n).
\]
Our isomorphism $\phi_{j,n}:{\mathcal O}_{\Gr(4, n+1)}(N_j)\xrightarrow{\sim} \det p_{1*}{\mathcal O}_{\Pi}(j)$ of Lemma~\ref{lem:OPiDet}, together with the fact
\[
\dim_kH^0(\P^3, {\mathcal O}_{\P^3}(j))=\binom{ j+3}{3}
\]
and the formula for $\det({\mathcal A}\otimes{\mathcal B})$ gives us the canonical isomorphisms
\[
\det({\mathcal F}_n \otimes p_*[{\mathcal O}_{\Pi_n}(m-3)])=\det{\mathcal F}^{\otimes\binom{m}{3}}\otimes \Phi^*{\mathcal O}_{\Gr(4,n+1)}(2N_{m-3})
\]
\[
\det({\mathcal E}_n \otimes p_*[{\mathcal O}_{\Pi_n}(m-2)])=\det{\mathcal E}^{\otimes\binom{m+1}{3}}\otimes \Phi^*{\mathcal O}_{\Gr(4,n+1)}(3N_{m-2})
\]
and
\[
\det p_*{\mathcal O}_{\Pi_n}(m)=\Phi^*{\mathcal O}_{\Gr(4,n+1)}(N_{m})
\]
Fitting this together yields our formula for $\det{\mathcal E}_{m,n}$. \end{proof} \begin{definition} We say that a locally free sheaf ${\mathcal P}$ on a smooth $k$-scheme $Y$ is {\em relatively oriented} if $\det{\mathcal P}\otimes\det{\mathcal T}_Y$ is a square in ${\rm Pic}(Y)$.
A choice of an invertible sheaf ${\mathcal L}$ on $Y$ and an isomorphism $\rho:\det{\mathcal P}\otimes\det{\mathcal T}_Y\xrightarrow{\sim}{\mathcal L}^{\otimes 2}$ is a {\em relative orientation} for ${\mathcal P}$. \end{definition} The group $\Gamma$ has a free rank one character group, with generator the character $(g, h)\mapsto (\det g)^2\cdot (\det h)^{-3}$ of $\operatorname{GL}_3\times \operatorname{GL}_2$. Let $\gamma\in {\rm Pic}(X)$ be the class of the invertible sheaf on $X=\Gamma\backslash{\mathcal U}$ induced by this character of $\Gamma$. \begin{lemma} \label{lem:Pic} Suppose $n\ge 4$. \\[5pt] 1. ${\rm Pic}(X)$ is a free abelian group of rank 2 with generators $\gamma$ and $r_X^*{\mathcal O}_{\Gr(4, n+1)}(1)$.\\[2pt] 2. ${\rm Pic}(H_n)={\mathbb Z}^3$ with generators $\Phi^*{\mathcal O}_{\Gr(4, n+1)}(1)$, $q^*\gamma$ and ${\mathcal O}_{H_n}(H_n^{{\operatorname{ncm}}})$.\\[5pt] If $n=3$, ${\rm Pic}(X)={\mathbb Z}$ with generator $\gamma$ and ${\rm Pic}(H_n)={\mathbb Z}^2$ with generators $q^*\gamma$ and ${\mathcal O}_{H_n}(H_n^{{\operatorname{ncm}}})$. \end{lemma} \begin{proof} We first consider the case $n\ge 4$. (2) follows from (1) and the formula for the Picard group of the blow-up. For (1), the scheme ${\mathcal U}$ is an open subscheme of the vector bundle $M$ over $\Gr(4, n+1)$. Moreover, in a fiber over $x\in \Gr(4, n+1)$, it follows from \cite[Lemma 2]{EPS} that $M_x\setminus{\mathcal U}_x$ has codimension $\ge2$ in $M_x$. Thus the projection $r_{\mathcal U}:{\mathcal U}\to \Gr(4, n+1)$ induces an isomorphism ${\rm Pic}(\Gr(4, n+1))\to {\rm Pic}({\mathcal U})$. This implies (1) by standard descent theory. The case $n=3$ is similar, except that $\Gr(4, n+1)$ is replaced with the point $\Gr(4,4)$, so ${\rm Pic}({\mathcal U})=\{0\}$. \end{proof} \begin{remark}\label{rem:Cong1} It follows directly from our construction of ${\mathcal E}_X$ and ${\mathcal F}_X$ that
\[
\det{\mathcal E}_X=\gamma=\det{\mathcal F}_X.
\]
\end{remark} For positive integers $m_1,\ldots, m_r,n$, define
\begin{equation}\label{eqn:BundleNotation}
{\mathcal E}_{m_1,\ldots,m_r;n}:=\oplus_{i=1}^r{\mathcal E}_{m_i,n}
\end{equation}
\begin{proposition} \label{prop:relOrientationObservations} Assume $\rnk({\mathcal E}_{m_1,\ldots,m_r;n})=\dim H_n$ and that the $m_i$ are all odd. Then ${\mathcal E}_{m_1,\ldots,m_r;n}$ is relatively oriented if and only if:
\begin{enumerate}
\item The number of $m_i$'s such that $m_i\equiv-1\mod 4$ is even.
\item When $n$ is odd, the number of direct summands $r$ in ${\mathcal E}_{m_1,\ldots,m_r;n}$ is even.
\item When $n$ is even, the number of direct summands $r$ in ${\mathcal E}_{m_1,\ldots,m_r;n}$ is odd.
\end{enumerate}
\end{proposition} \begin{proof} Since $\rnk({\mathcal E}_{m_1,\ldots,m_r;n})=\sum_{i=1}^r(3m_i+1)$ and $\dim H_n=4n$, the condition $\rnk({\mathcal E}_{m_1,\ldots,m_r;n})=\dim H_n$ implies $n\ge4$. Then ${\rm Pic}(H_n)={\mathbb Z}^3$ with generators ${\mathcal O}_{H_n}(H^{\operatorname{ncm}}_n)$, $q^*\gamma=\det {\mathcal F}=\det{\mathcal E}$, and $\Phi^*{\mathcal O}_{\Gr(4,n+1)}(1)$. Since the $m_i$ are all odd, $q^*\gamma$ occurs in $\det{\mathcal E}_{m_i,n}$ to the power $\binom{m_i}{3}-\binom{m_i+1}{3}=-\binom{m_i}{2}$, which is even if and only if $m_i\equiv 1\mod 4$, and occurs in $\det{\mathcal T}_{H_n}$ to the even power $-4$. Likewise, ${\mathcal O}_{H_n}(H^{\operatorname{ncm}}_n)$ occurs to an even power in $\det{\mathcal T}_{H_n}$, and occurs in $\det{\mathcal E}_{m_i,n}$ to the power $K_{m_i}$, which is even if and only if $m_i\equiv 1\mod 4$ (for odd $m_i$). This proves (1).
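For odd $m$ the relevant parities can be read off from the closed formulas; the first few values are
\[
\begin{array}{c|ccccc}
m&1&3&5&7&9\\
\hline
\binom{m}{2}&0&3&10&21&36\\
K_m=\binom{m-1}{2}&0&1&6&15&28\\
M_m=m(3m-1)/2&1&12&35&70&117
\end{array}
\]
so $\binom{m}{2}$ and $K_m$ are even, and $M_m$ is odd, precisely when $m\equiv1\mod 4$.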
${\mathcal O}_{\Gr(4, n+1)}(1)$ occurs in $\det{\mathcal E}_{m_i,n}$ to the power $M_{m_i}$, which is odd if $m_i\cong 1\mod 4$ and even if $m_i\equiv -1\mod 4$ ${\mathcal O}_{\Gr(4, n+1)}(1)$ occurs in $\det{\mathcal T}_{H_n}$ to the power $n+1$, so if $n$ is odd, there must be an even number of $m_i\equiv 1\mod 4$, and if $n$ is even there must be an odd number of $m_i\equiv 1\mod 4$. This proves (2) and (3). The statement about the choice of orientation follows directly from the construction of the $\rho_{m_i,n}$. \end{proof} For $n\ge4$, $m\ge2$, ${\mathcal E}_{m,n}$ is relatively oriented if an only if $n$ is even and and $m\equiv 1\mod 4$. Assuming this to be the case, the isomorphisms $\rho_{m,n}$ of Proposition~\ref{prop:DetCMSectionBundle} and that of Lemma~\ref{lem:TangentDet} give rise to a relative orientation for ${\mathcal E}_{m,n}$. \begin{proof} In all cases, Lemma~\ref{lem:TangentDet} gives an isomorphism of $\det{\mathcal T}_\Phi$ with a square and an isomorphism, $\det{\mathcal T}_{H_n}\cong \Phi^*{\mathcal O}_{\Gr(4, n+1)}(n+1)$ modulo a square. In case $m$ is odd, $\det{\mathcal E}$ and $\det{\mathcal F}$ appear in $\det{\mathcal E}_{m,n}$ even powers. If $m\equiv1]\mod 4$ and $n$ is even, $M_m$ and $n+1$ are both odd and $K_m$ is even. Thus ${\mathcal O}_{\Gr(4, n+1)}(1)$ appears to an odd power in both formulas and the twist by ${\mathcal O}_{H_n}(H^{\operatorname{ncm}}_n)$ is even in both formulas. If $m$ is not congruent to 1 modulo 4, ${\mathcal O}_{H_n}(H^{\operatorname{ncm}}_n)$ occurs to an even power in $\det{\mathcal T}_{H_n}$ but to an odd power in $\det{\mathcal E}_{m,n}$. As ${\mathcal O}_{H_n}(H^{\operatorname{ncm}}_n)$, $\Phi^*{\mathcal O}_{\Gr(4, n+1)}(1)$ and $\det {\mathcal F}=\det{\mathcal E}$ are independent generators of ${\rm Pic}(H_n)={\mathbb Z}^3$, ${\mathcal E}_{m,n}$ is not relatively oriented in this case. If $m\equiv 1\mod 4$ and $n$ is odd, then $\det{\mathcal E}$ and $\det{\mathcal F}$ appear in $\det{\mathcal E}_{m,n}$ to even powers, but $K_m$ is odd and $n+1$ is even, so ${\mathcal E}_{m,n}$ is not relatively oriented, by the same reasoning as in the previous case. \end{proof} \begin{remark}\label{rem:Cong2} If one of the $m_j$ is even, then ${\mathcal E}_{m_j,n}$ has odd rank $3m_j+1$, so the Euler class $e({\mathcal E}_{m_j,n})\in H^{3m_j+1}(H_n, {\mathcal W}(\det^{-1}{\mathcal E}_{m_j,n}))$. By the multiplicativity of the Euler class, $e(\oplus_j{\mathcal E}_{m_j,n})$ vanishes as well, so we can ignore discussing orientation conditions in this case. \end{remark} \begin{ex} 1. We consider the case $r=1$ $m_1=5$, $n=4$. Then on $H_4^{\operatorname{cm}}$, \[ \det{\mathcal E}_{5,4}=\Phi^*{\mathcal O}_{\Gr(4,5)}(35)\otimes \det{\mathcal E}^{\otimes-20}\otimes\det{\mathcal F}^{\otimes 10} \] and \[ \det {\mathcal T}_{H_4^{\operatorname{cm}}}=(\det{\mathcal F}^{\otimes -3}\otimes\det{\mathcal E}^{\otimes 2})^{\otimes 4}\otimes \det(p^{\operatorname{cm}}_*{\mathcal O}_\Pi(1)))^{\otimes 6}\otimes \Phi^*{\mathcal O}_{\Gr(4,5)}(5) \] Thus the vector bundle ${\mathcal E}_{4,5}$ is relatively oriented on $H_4^{\operatorname{cm}}$ and has rank $3\cdot 5+1=16$, equal to the dimension $4\cdot 4$ of $H_4$. \\[5pt] 2. For $r=1$, $m=9$, $n=7$, the condition $3m+1=4n$ is satisfied. 
On $H_7^{\operatorname{cm}}$, \[ \det{\mathcal E}_{9,7}=\Phi^*{\mathcal O}_{\Gr(4,8)}(M_9)\otimes \det{\mathcal E}^{\otimes-120}\otimes\det{\mathcal F}^{\otimes 84} \] with $M_9=(3\cdot 9-1)\cdot 9/2=13\cdot 9$ odd, and \[ \det {\mathcal T}_{H_7^{\operatorname{cm}}}=(\det{\mathcal F}^{\otimes -3}\otimes\det{\mathcal E}^{\otimes 2})^{\otimes 4}\otimes \det(p^{\operatorname{cm}}_*{\mathcal O}_\Pi(1)))^{\otimes 6}\otimes \Phi^*{\mathcal O}_{\Gr(4,8)}(8) \] so ${\mathcal E}_{9,7}$ is not relatively oriented. \end{ex} \begin{remark}\label{rem:GEquivarianceProperty}The group scheme $\operatorname{GL}_{n+1}$ acts naturally on $\P^n$, and this action extends to give a canonical $\operatorname{GL}_{n+1}$-linearization on all ``natural'' sheaves on $\P^n$, such as ${\mathcal O}_{\P^n}(m)$ and the tangent sheaf ${\mathcal T}_{\P^n}$. The $\operatorname{GL}_{n+1}$-action on $\P^n$ induces a $\operatorname{GL}_{n+1}$-action on the diagrams \eqref{eqn:MainDiag}, \eqref{eqn:MainDiag}, and \eqref{eqn:MainDiag}, where $\operatorname{GL}_{+1}$ acts on ${\mathcal U}$ via its action on linear forms $H^0(\P^n, {\mathcal O}_{\P^n}(1))$. This extends to a $\operatorname{GL}_{n+1}$ linearization of all ``natural'' sheaves on the various schemes in these diagrams, such as ${\mathcal T}_{H_n}$, ${\mathcal T}_{\Gr(4,n+1)}$, ${\mathcal E}_X$, ${\mathcal F}_X$, and so on. As the divisor $H^{\operatorname{ncm}}_n$ is fixed by the $\operatorname{GL}_{n+1}$-action, we also an induced $\operatorname{GL}_{n+1}$-action on $H^{\operatorname{ncm}}_n$ and $H^{\operatorname{cm}}_n$, and a linearization of ${\mathcal O}_{H_n}(d\cdot H^{\operatorname{ncm}}_n)$. The resolution of $j^*{\mathcal T}_\Phi$ carries a natural $\operatorname{GL}_{n+1}$ linearization, so the description of $\det{\mathcal T}_{H_n}$ given in Lemma~\ref{lem:TangentDet} is $\operatorname{GL}_{n+1}$-equivariant. Similarly, the complex \eqref{eqn:ResOnHn} carries a natural $\operatorname{GL}_{n+1}$ linearization and the description of $\det{\mathcal E}_{m,n}$ in Proposition~\ref{prop:DetCMSectionBundle} is also $\operatorname{GL}_{n+1}$-equivariant. \end{remark} \section{Extending to ${\mathbb Z}[1/6]$}\label{sec:Integrality} We extend the picture to the closure of the locus of twisted cubics over ${\mathbb Z}[1/6]$, $H_n/{\mathbb Z}[1/6]$. Our first task is to show that $H_n/{\mathbb Z}[1/6]$ is smooth over ${\mathbb Z}[1/6]$ and that the non-Cohen-Macaulay locus $H_n^{\operatorname{ncm}}/{\mathbb Z}[1/6]$ is a smooth divisor. The issue here is that in general, closure does not commute with base-change, so we need to check that the fiber over some ${\mathbb F}_p$ of the closure of $H^{sm}_{n, {\mathbb Z}[1/6]}$ in the Hilbert scheme is the same as $H_n/{\mathbb F}_p$. For this, we first recall the notion and basic properties of a {\em Nagata ring}. All our rings are assumed commutative. A noetherian domain $R$ with quotient field $K$ satisfies the property $N$-2 if for every finite field extension $K\subset L$, the integral closure of $R$ in $L$ is a finite $R$-module. A noetherian ring $R$ is a Nagata ring if for each prime ideal $P$, the domain $R/P$ satisfies $N$-$2$. An arbitrary localization of a Nagata ring is a Nagata ring and if $(f_1,\ldots, f_n)$ generate the unit ideal in $R$ and $R[1/f_i]$ is a Nagata ring for each $i$, then $R$ is a Nagata ring (\cite[Tag 032E, Lemma 10.162.6, Lemma 10.162.7]{Stacks}. 
A a scheme $X$ is a {\em Nagata scheme} if each point $x\in X$ admits an affine open neighborhood $U$ such that ${\mathcal O}_X(U)$ is a Nagata ring (see \cite[Tag 033R, Definition 28.13.1]{Stacks}). $X$ is a Nagata scheme if and only if $X$ admits an open cover by Nagata schemes, if and only if for each affine open $U\subset X$, ${\mathcal O}_X(U)$ is a Nagata ring \cite[Tag 033R, Lemma 28.13.6]{Stacks}. In addition, a fundamental result of Nagata states that if $R$ is a Nagata ring and $A$ is a finitely generated $R$-algebra, then $A$ is also a Nagata ring (see \cite[Theorem 72]{Matsumura} or \cite[Tag 032E, Proposition 10.162.15]{Stacks}). Thus, if $X$ is a Nagata scheme and $f:Y\to X$ is a morphism, locally of finite type, then $Y$ is also a Nagata scheme. Finally, Nagata has shown that a Noetherian complete local ring is a Nagata ring (\cite[Tag 032E, Lemma 10.162.8]{Stacks}, \cite[Corollary 2, Chap. 12]{Matsumura}). Besides the Noetherian complete local rings, a Dedekind domain with characteristic zero quotient field (such as ${\mathbb Z}$) is a Nagata ring. We present a version of a lemma of Hironaka \cite[Lemma 4]{Hironaka}. \begin{lemma} Let ${\mathcal O}$ be a discrete valuation ring with residue field $k$ and quotient field $K$, and let $X\to {\rm Spec\,} {\mathcal O}$ be an integral, finite-type ${\mathcal O}$ scheme, flat over ${\mathcal O}$. Let ${\mathcal P}\subset {\mathcal O}_X$ be the ideal sheaf of the reduced special fiber $(X\times_{\mathcal O} k)_{\rm red}$. Suppose that ${\mathcal O}$ is a Nagata ring and that both the generic fiber $X\times_{\mathcal O} K$ and the reduced special fiber $(X\times_{\mathcal O} k)_{\rm red}$ are normal and integral. Suppose in addition that, for $x\in X$ the generic point of $X\times_{\mathcal O} k$, we have ${\mathcal P}_x=(\pi){\mathcal O}_{X,x}$. Then $X$ is normal. \end{lemma} \begin{proof} Our proof is taken directly from Hironaka {\em loc. cit.} with minor changes. We note that the assumption that $X\times_{\mathcal O} K$ is integral follows from our assumption that $X$ is integral. Choose a parameter $\pi$ for ${\mathcal O}$. The conclusion is local on $X$, so we can replace $X$ with ${\rm Spec\,} A$, with $A$ a Noetherian local ${\mathcal O}$-algebra essentially of finite type over ${\mathcal O}$, such that\\[5pt] i. $(\pi)A$ has a unique minimal prime $P$ and $(\pi)A_P=PA_P$\\[2pt] ii. $A/P$ is normal\\[2pt] iii. $A[1/\pi]$ is normal. \\[5pt] Since ${\mathcal O}$ is a Nagata ring, so is $A$. Since $A_P$ is a Noetherian local domain with principal maximal ideal $(\pi)A_P$, $A_P$ is a discrete valuation ring, hence integrally closed. Let $A'=A_P\cap A[1/\pi]$, the intersection taking place in the quotient field $F$ of $A$. By (iii), $A'$ is equal to the integral closure of $A$ in $F$. Let $P'$ be a minimal prime of $(\pi)A'$. Then $P'\cap A=P$. Since $A_P$ is integrally closed, $A'_P=A_P$, so $P'$ is the unique prime of $A'$ lying over $P$ and $A'_{P'}=A_P$. By (i), we have $(\pi)A'_{P'}=P'A'_{P'}$ and $P'$ is the unique minimal prime of $(\pi)A'$. Since $A'$ is normal, $(\pi)A'$ has no embedded components \cite[Theorem 38]{Matsumura}. Thus $P'=(\pi)A'$ and $A'/(\pi)=A'/P'$. But after localizing at $P$ we have \[ (A/P)_P=A_P/PA_P=A'_{P'}/P'A_{P'}=(A'/P')_{P'} \] so the domains $A/P$ and $A'/P'$ have the same quotient fields. By (ii), we have \[ A/P= A'/P'=A'/(\pi)A'. \] Since $A$ is a Nagata ring, $A'$ is a finite $A$-module. But $A'[1/\pi]=A[1/\pi]$ by (iii), so there is an integer $e\ge0$ with $\pi^eA'\subset A$. 
We take $e$ minimal and want to show that $e=0$, that is, $A=A'$, which will complete the proof. If $e>0$, then $A'/(\pi)A'=A/P$ implies $A'= A+\pi\cdot A'$. Then \[ \pi^{e-1}\cdot A'= \pi^{e-1}A+\pi^eA'\subset A, \] contrary to the minimality of $e$. \end{proof} \begin{corollary} $H_n/{\mathbb Z}[1/6]$ is a smooth projective scheme over ${\mathbb Z}[1/6]$, and for each prime $p>3$, the base-change $(H_n/{\mathbb Z}[1/6])\times_{{\mathbb Z}[1/6]}{\mathbb F}_p$ is isomorphic to $H_n/{\mathbb F}_p$. \end{corollary} \begin{proof} Let $p>3$ be a prime and let ${\mathcal O}:={\mathbb Z}_{(p )}$, with quotient field ${\mathbb Q}$ and residue field $k={\mathbb F}_p$; ${\mathcal O}$ is a Nagata ring. It suffices to show that $H_n/{\mathcal O}$ is smooth over ${\mathcal O}$. $H_n/{\mathcal O}$ is the closure of $H_n/{\mathbb Q}$ in $\operatorname{Hilb}^{3m+1}(\P^n_{\mathcal O})$, so $H_n/{\mathcal O}$ is integral and is flat and projective over ${\mathcal O}$. For each $x\in H_n/{\mathbb Q}$, the corresponding curve $C_x$ is geometrically connected, so the same holds for all $x\in H_n/{\mathcal O}$. By the classification given in the proofs of \cite[Lemma 1, Lemma 2, Lemma 4]{PS} and our determination of the universal deformation space in Proposition~\ref{prop:DefSpace}, there is a neighborhood $U$ of $H_n/k$ in $\operatorname{Hilb}^{3m+1}(\P^n_k)$ such that, for $x$ in $U\setminus H_n/k$, the curve $C_x$ is one of the following two types:\\[5pt] a. $C_x$ is not connected.\\[2pt] b. $C_x^{{\rm red}}$ is a plane cubic and the kernel of ${\mathcal O}_{C_x}\to {\mathcal O}_{C_x^{{\rm red}}}$ is non-zero and supported at a smooth point $p$ of $C_x^{{\rm red}}$.\\[5pt] Thus, if $H_n/{\mathcal O}\times_{\mathcal O} k\setminus H_n/k\neq\emptyset$, there is a point $x\in H_n/{\mathcal O}\times_{\mathcal O} k\setminus H_n/k$ with corresponding curve $C_x$ of type (b). Since $H_n/{\mathcal O}$ is the closure of the locus of twisted cubics, there is a discrete valuation ring ${\mathcal O}'$ and closed subscheme ${\mathcal C}\subset \P^n_{{\mathcal O}'}$ defining a flat family over ${\mathcal O}'$ with generic fiber ${\mathcal C}_\eta$ a smooth twisted cubic and special fiber ${\mathcal C}_0$ our curve $C_x$ of type (b), with its corresponding smooth point $p$. We may assume that ${\mathcal O}'$ is complete, hence a Nagata ring. But then there is a neighborhood $V$ of $p$ in ${\mathcal C}$ that is an integral scheme, flat over ${\mathcal O}'$ with smooth generic fiber and with $V_0\setminus \{p\}$ and $V_0^{{\rm red}}$ both smooth and integral; here $V_0$ is the special fiber of $V\to {\rm Spec\,} {\mathcal O}'$. By Hironaka's lemma, $V$ is a normal scheme. But as $V_0\setminus \{p\}$ is reduced, and there are no embedded components in $V_0$ (\cite[Theorem 38]{Matsumura} again) the special fiber $V_0$ is reduced, contrary to the assumption that ${\mathcal C}_0=C_x$ has an embedded component at $p$. Thus, set-theoretically, the fiber $H_n/{\mathcal O}\times_{\mathcal O} k$ is equal to $H_n/k$. The dense open subscheme $H^{sm}_{n, {\mathcal O}}$ of twisted cubics is smooth over ${\mathcal O}$ with special fiber $H^{sm}_{n, k}$ dense in $H_n/k$ and $H_n/{\mathbb Q}$ is smooth over ${\mathbb Q}$. Applying Hironaka's lemma again, it follows that $H_n/{\mathcal O}$ is normal. As above, this implies that $H_n/{\mathcal O}\times_{\mathcal O} k=H_n/k$ and so $H_n/{\mathcal O}$ is smooth over ${\mathcal O}$. 
\end{proof} \begin{remark} In \cite{HSS}, Heinrich, Skjelnes and Stevens extend the smoothness result of Piene-Schlessinger to fields of characteristic $2$ and $3$, thus $H_3/k$ is a smooth $k$-scheme for all fields $k$. However, we do not have a nice classification of the curves in $\operatorname{Hilb}^{3m+1}(\P^3_k)\setminus H_3$, needed for the argument above. We also do not know if $H_n/k$ is smooth for fields of characteristic $2,3$ and $n>3$. \end{remark} For an arbitrary base-scheme $S$, we let $H_n^{\operatorname{cm}}/S\subset H_n/S$ denote the subscheme of $H_n/S$ parametrizing flat families of Cohen-Macaulay curves. \begin{lemma}\label{lem:HCM} 1. $H_n^{\operatorname{cm}}/{\mathbb Z}[1/6]$ is an open subscheme of $H_n/{\mathbb Z}[1/6]$.\\[2pt] 2. For $f:T\to S$ a morphism of schemes we have $H_n^{\operatorname{cm}}/S\times_ST = H_n^{\operatorname{cm}}/T$, where we view both schemes as subschemes of $\operatorname{Hilb}^{3m+1}(\P^n_T)$ via the canonical isomorphism $\operatorname{Hilb}^{3m+1}(\P^n_T)\cong \operatorname{Hilb}^{3m+1}(\P^n_S)\times_ST$. \end{lemma} \begin{proof} (1) Let $x\in H_n^{\operatorname{cm}}/{\mathbb Z}[1/6]$ be a geometric point and let $C_x\subset \P^n_{k(x)}$ be the corresponding curve. It suffices to show that an open neighborhood of $x$ in $\operatorname{Hilb}^{3m+1}(\P^n_{{\mathbb Z}[1/6]})$ is contained in $H_n^{\operatorname{cm}}/{\mathbb Z}[1/6]$. Since $C_x$ is a degree three curve, we have the following possibilities for the 1-cycle $|C_x|$ associated to the scheme $C_x$; in all cases $C_x$ is connected and is not contained in a plane.\\[5pt] a. $C_x$ is irreducible and $|C_z|=1\cdot C_x^{\rm red}$\\[2pt] b. $C_x^{\rm red}$ is a union of a line $\ell$ and a smooth conic $Q$, and $|C_z|=1\cdot \ell+1\cdot Q$.\\[2pt] c1. $C_x^{\rm red}$ is a union of three distinct lines $\ell_i$ and $|C_z|=\sum_{i=1}^31\cdot \ell_i$. The three lines do not pass though a common point.\\[2pt] c2. $C_x^{\rm red}$ is a union of three distinct lines $\ell_i$ and $|C_z|=\sum_{i=1}^31\cdot \ell_i$. The three lines pass though a common point.\\[2pt] d. $C_x^{\rm red}$ is a union of two distinct lines $\ell_i$ and $|C_z|=1\cdot \ell_1+2\cdot \ell_2$\\[2pt] e. $C_x^{\rm red}$ is a line $\ell$ and $|C_z|=\ 3\cdot \ell$.\\[5pt] In cases (a), $C_x$ is a smooth cubic rational curve. In cases (b) and (c1), $C_x$ is a local complete intersection. In case (c2), $C_x$ is projectively equivalent to the union of three coordinate axes through $(0,0,0,1)$, defined by the ideal $(X_0X_1, X_0X_2, X_1X_2)$. In case (d), $C_x$ is projectively equivalent to the curve defined by the ideal $(X_0^2,X_2)\cap (X_0,X_1)=(X_0^2, X_0X_2, X_1X_2)$. In case (e), $C_x$ is projectively equivalent to the curve defined by the ideal $(X_0^2,X_0X_1, X_1^2)$. In the local complete intersection cases (a), (b) and (c1), the cotangent complex $\L_{C_x/\P^3_{k(x)}}$ is isomorphic in $D(\operatorname{Coh}_{{\mathcal O}_{C_x}})$ to ${\mathcal I}_{C_x}/{\mathcal I}_{C_x}^2[1]$. In cases (c2), (d) and (e), one can make an explicit computation of $\tau_{\ge-2}\L_{C_x/\P^3_{k(x)}}$. In fact, if a Cohen-Macaulay curve $C_x$ has an Eagon-Northcott complex with matrix $(L_{ij})$, let $(L_{ij}^*)$ be the matrix \[ \begin{pmatrix} L_{12}&L_{22}&L_{32}\\-L_{11}&-L_{21}&-L_{31}\end{pmatrix} \] This gives the map ${\mathcal O}_{\P^3_{k(x)}}(-4)^3\xrightarrow{(L_{ij}^*)}{\mathcal O}_{\P^3_{k(x)}}(-3)^2$; let $\overline{{\mathcal O}}_{\P^3_{k(x)}}(-3)^2$ denote the cokernel of $(L_{ij}^*)$. 
Then \[ \tau_{\ge-2}\L_{C_x/\P^3_{k(x)}}\cong [\overline{{\mathcal O}}_{\P^3_{k(x)}}(-3)^2\xrightarrow{(L_{ij})} {\mathcal O}_{C_x}(-2)^3] \] with ${\mathcal O}_{C_x}(-2)^3$ in degree -1. To see this, the complex described above is the Lichtenbaum-Schlessinger cotangent complex associated to the Eagon-Northcott resolution of ${\mathcal O}_{C_x}$, and the Lichtenbaum-Schlessinger cotangent complex computes the truncation $\tau_{\ge-2}$ of the true cotangent complex. Choosing a suitable matrix $(L_{ij})$ in the cases (c2), (d) and (e), one can show using the explicit complex above that ${\mathcal H}^{-2}(\L_{C_x/\P^3_{k(x)}})=0$. Using the distinguished triangle \[ \L_{\P^3_{k(x)}/\P^n_{{\mathbb Z}[1/6]}}\otimes{\mathcal O}_{C_x}\to \L_{C_x/\P^n_{{\mathbb Z}[1/6]}}\to \L_{C_x/\P^3_{k(x)}}\xrightarrow{+1} \] and the fact that $\L_{\P^3_{k(x)}/\P^n_{{\mathbb Z}[1/6]}}\otimes{\mathcal O}_{C_x}$ is concentrated in degree -1, we see that ${\mathcal H}^2( \L_{C_x/\P^n_{k(x)}})=0$. Thus, in all cases of a Cohen-Macaulay cubic rational curve $C_x$, the obstruction space for the deformations of $C_x$ in $\P^n_{{\mathbb Z}[1/6]}$ is $H^1(C_x, {\mathcal N}_{C_x})$, with ${\mathcal N}_{C_x}=\mathcal{H}om({\mathcal I}_{C_x}, {\mathcal O}_{C_x})$ the normal sheaf of $C_x$ in $\P^n_{{\mathbb Z}[1/6]}$. In \cite[Lemma 5]{PS} it is shown that $H^1(C_x, \bar{{\mathcal N}}_{C_x})=0$, where $\bar{{\mathcal N}}_{C_x}$ the normal sheaf in $\P^3_{k(x)}$. We have the exact sequence \[ 0\to {\mathcal O}_{C_x}(1)^{n-3}\oplus {\mathcal O}_{C_x}^\epsilon \to {\mathcal N}_{C_x}\to \bar{{\mathcal N}}_{C_x}\to 0 \] with $\epsilon=1$ if $k(x)$ has positive characteristic and $\epsilon=0$ if $k(x)$ has characteristic zero. Thus $H^1(C_x, {{\mathcal N}}_{C_x})=0$ and the deformations of $C_x$ are unobstructed. By computing $H^0(C_x, {\mathcal N}_{C_x})$, one can similarly show that the first order deformations are generated by first order deformations of the matrix $(L_{ij})$ in the Eagon-Northcott complex for $C_x$, which shows that all the nearby deformations of $C_x$ are in $H^{\operatorname{cm}}_n/{\mathbb Z}[1/6]$. This shows that $H^{\operatorname{cm}}_n/{\mathbb Z}[1/6]$ is open in $H_n/{\mathbb Z}[1/6]$. Let $\pi:S\to {\rm Spec\,} {\mathbb Z}[1/6]$ be a ${\mathbb Z}[1/6]$-scheme. As the Eagon-Northcott complex remains exact after any base-change that preserves the codimension of a Cohen-Macaulay curve $C_x$ in a $\P^3_{k(x)}\subset \P^n_{k(x)}$, it follows that $H^{\operatorname{cm}}_n/{\mathbb Z}[1/6]\times_{{\mathbb Z}[1/6]}S$ is contained in $H^{\operatorname{cm}}_n/S$. Conversely, for each geometric point $x$ of $H^{\operatorname{cm}}_n/S$, the corresponding geometric point of $\operatorname{Hilb}^{3m+1}(\P^n_{{\mathbb Z}[1/6]})$ is in $H^{\operatorname{cm}}_n/{\mathbb Z}[1/6]$, so $H^{\operatorname{cm}}_n/{\mathbb Z}[1/6]\times_{{\mathbb Z}[1/6]}S=H^{\operatorname{cm}}_n/S$. This proves (2). \end{proof} Working over ${\mathbb Z}[1/6]$, use a parallel notation to the case of the base-field $k$. We have the Grassmann variety $\Gr_{{\mathbb Z}[1/6]}(4, n+1)$, the morphism $\Phi:H_n/{\mathbb Z}[1/6]\to \Gr_{{\mathbb Z}[1/6]}(4, n+1)$, the universal 3-plane bundle $\Pi_{{\mathbb Z}[1/6]}\to \Gr_{{\mathbb Z}[1/6]}(4, n+1)$, the locally free sheaf ${\mathcal V}_{{\mathbb Z}[1/6]}:=p_{1*}{\mathcal O}_{\Pi_{{\mathbb Z}[1/6]}}(2)$, the Grassmann bundle $r:\Gr(3,{\mathcal V}_{{\mathbb Z}[1/6]})\to \Gr_{{\mathbb Z}[1/6]}(4, n+1)$ and the morphism $q:H_n/{\mathbb Z}[1/6]\to \Gr(3,{\mathcal V}_{{\mathbb Z}[1/6]})$. 
This gives us the (reduced) image subscheme $X_{{\mathbb Z}[1/6]}:=q(H_n/{\mathbb Z}[1/6])$ and the closed subscheme $F_{{\mathbb Z}[1/6]}:=q(H_n^{\operatorname{ncm}}/{\mathbb Z}[1/6])\subset X_{{\mathbb Z}[1/6]}$. Let ${\mathcal U}_{{\mathbb Z}[1/6]}$ be the ${\mathbb Z}[1/6]$-scheme of $3\times2$ matrices $L:=(L_{ij})$ of linear forms on $\Pi_{{\mathbb Z}[1/6]}$ such that $\<e_{12}(L), e_{13}(L), e_{23}(L)\>$ has rank 3 in ${\mathcal V}_{{\mathbb Z}[1/6]}$. We let $\Gamma_{{\mathbb Z}[1/6]}$ be the group-scheme over ${\mathbb Z}[1/6]$ $\operatorname{GL}_3\times\operatorname{GL}_2/{\mathbb G}_m$, with the action $(g,h)\cdot L=g\cdot L\cdot h^{-1}$ on ${\mathcal U}_{{\mathbb Z}[1/6]}$. \begin{lemma}\label{lem:IntegralStructure} 1. $F_{{\mathbb Z}[1/6]}\subset X_{{\mathbb Z}[1/6]}$ are smooth over ${\mathbb Z}[1/6]$\\[2pt] 2. The map $q:H_n/{\mathbb Z}[1/6]\to X_{{\mathbb Z}[1/6]}$ lifts to an isomorphism of $H_n/{\mathbb Z}[1/6]$ with the blow-up $\text{Bl}_{F_{{\mathbb Z}[1/6]}}X_{{\mathbb Z}[1/6]}$.\\[2pt] 3. For a ${\mathbb Z}[1/6]$-algebra $R$, sending $L\in {\mathcal U}_{{\mathbb Z}[1/6]}(R)$ to $\<e_{12}(L), e_{13}(L), e_{23}(L)\>\subset {\mathcal V}_{{\mathbb Z}[1/6]}(R)$ defines a smooth morphism $\psi:{\mathcal U}_{{\mathbb Z}[1/6]}\to X$, making ${\mathcal U}_{{\mathbb Z}[1/6]}\to X_{{\mathbb Z}[1/6]}$ a $\Gamma_{{\mathbb Z}[1/6]}$-torsor over $X$ (via the given action of $\Gamma$ on ${\mathcal U}$). \end{lemma} \begin{proof} For (3), we consider the action map $\rho: \Gamma_{{\mathbb Z}[1/6]}\times_{{\mathbb Z}[1/6]}{\mathcal U}_{{\mathbb Z}[1/6]}\to {\mathcal U}_{{\mathbb Z}[1/6]}\times_{{\mathbb Z}[1/6]}{\mathcal U}_{{\mathbb Z}[1/6]}$. As $\rho$ is an isomorphism after taking $-\times_{{\mathbb Z}[1/6]}{\mathbb F}_p$ for all $p>3$, it follows that $\rho$ is an isomorphism. This proves (3) and shows that $X_{{\mathbb Z}[1/6]}$ is smooth over ${\mathbb Z}[1/6]$. Similarly, since $H_n^{\operatorname{ncm}}/{\mathbb Z}[1/6]$ is smooth over ${\mathbb Z}[1/6]$ and for each $p$ $H_n^{\operatorname{ncm}}/{\mathbb F}_p\to F_{{\mathbb F}_p}$ is a projective space bundle, it follows that $F_{{\mathbb Z}[1/6]}$ is smooth over ${\mathbb Z}[1/6]$. Since $H_n/{\mathbb F}_p\cong \text{Bl}_{F_{{\mathbb F}_p}}X_{{\mathbb F}_p}$ for $p>3$, it follows that the closed subscheme $q^{-1}(F_{{\mathbb Z}[1/6]})\subset H_n/{\mathbb Z}[1/6]$ is flat over ${\mathbb Z}[1/6]$, as each local section of the ideal of the fiber over ${\mathbb F}_p$ lifts to a local section of the ideal over ${\mathbb Z}[1/6]$. Thus $q^{-1}(F_{{\mathbb Z}[1/6]})$ is a Cartier divisor on $H_n/{\mathbb Z}[1/6]$, and hence the map $q:H_n/{\mathbb Z}[1/6]\to X_{{\mathbb Z}[1/6]}$ lifts to a map $\tilde{q}:H_n/{\mathbb Z}[1/6]\to \text{Bl}_{F_{{\mathbb Z}[1/6]}}X_{{\mathbb Z}[1/6]}$. As both source and target are smooth over ${\mathbb Z}[1/6]$, and $\tilde{q}$ is an isomorphism on each fiber over ${\mathbb F}_p$, $\tilde{q}$ is an isomorphism. \end{proof} We thus may define the sheaves ${\mathcal F}_{X_{{\mathbb Z}[1/6]}}$, ${\mathcal E}_{X_{{\mathbb Z}[1/6]}}$ on $X_{{\mathbb Z}[1/6]}$ by $\Gamma_{{\mathbb Z}[1/6]}$-descent, as used to define ${\mathcal F}_{X_k}$, ${\mathcal E}_{X_k}$; we let ${\mathcal F}_{{\mathbb Z}[1/6]}$, ${\mathcal E}_{{\mathbb Z}[1/6]}$ be the respective pull-backs of ${\mathcal F}_{X_{{\mathbb Z}[1/6]}}$, ${\mathcal E}_{X_{{\mathbb Z}[1/6]}}$ to $H_n/{\mathbb Z}[1/6]$. 
We have the universal curve $p:{\mathcal C}_n/{\mathbb Z}[1/6]\to H_n/{\mathbb Z}[1/6]$ and the locally free sheaf ${\mathcal E}_{m, n, {\mathbb Z}[1/6]}:=p_*({\mathcal O}_{C_n/{\mathbb Z}[1/6]}(m))$. \begin{theorem}\label{thm:IntegralExtension} 1. We have a canonical isomorphism \begin{multline*} \det {\mathcal E}_{n,m,{\mathbb Z}[1/6]}\cong \Phi^*{\mathcal O}_{\Gr_{{\mathbb Z}[1/6]}(4,n+1)}(M_{m,n})\otimes \det{\mathcal E}_{{\mathbb Z}[1/6]}^{\otimes-\scriptscriptstyle\begin{pmatrix}m+1\\3\end{pmatrix}}\\\otimes\det{\mathcal F}_{{\mathbb Z}[1/6]}^{\otimes\scriptscriptstyle\begin{pmatrix}m\\3\end{pmatrix}}\otimes{\mathcal O}_{H_n/{\mathbb Z}[1/6]}(K_mH_n^{\operatorname{ncm}}/{\mathbb Z}[1/6]). \end{multline*} 2. We have a canonical isomorphism \begin{multline*} \det{\mathcal T}_{H_n/{\mathbb Z}[1/6]}\cong (\det{\mathcal F}_{{\mathbb Z}[1/6]}^{\otimes -3}\otimes\det{\mathcal E}_{{\mathbb Z}[1/6]}^{\otimes 2})^{\otimes 4}\otimes \det(\pi_{1*}{\mathcal O}_{\Pi_n}(1)))^{\otimes 6}\\\otimes {\mathcal O}_{H_n/{\mathbb Z}[1/6]}(-6\cdot H_n^{\operatorname{ncm}}/{\mathbb Z}[1/6]) \otimes\Phi^*{\mathcal O}_{\Gr_{{\mathbb Z}[1/6]}(4,n+1)}(n+1). \end{multline*} \end{theorem} \begin{proof} Using Lemma~\ref{lem:IntegralStructure}, we can repeat the constructions used in \S\ref{sec:Det} to yield the computation of the determinants. \end{proof} \begin{remark} Just as in the case of a field, the construction of ${\mathcal E}_{X_{{\mathbb Z}[1/6]}}$ and ${\mathcal F}_{X_{{\mathbb Z}[1/6]}}$ by $\Gamma_{{\mathbb Z}[1/6]}$-descent gives a canonical isomorphism $\det{\mathcal E}_{{\mathbb Z}[1/6]}\cong \det{\mathcal F}_{{\mathbb Z}[1/6]}$. Thus the congruences in the $m_j, n$ needed for $\oplus_j {\mathcal E}_{n,m_j,{\mathbb Z}[1/6]}$ to be relatively oriented are the same as in the case of a field (see Remark~\ref{rem:Cong2}). \end{remark} \begin{corollary}\label{cor:Integrality} Let $m_1,\ldots, m_r, n$ be integers with $\sum_j3m_j+1 = 4n$, $n\ge4$, $m_j\ge2$. Suppose that the congruences in the $m_j$ and $n$ as given by Remark~\ref{rem:Cong2} are satisfied. Then there is an element $\text{deg}_{{\mathbb Z}[1/6]}e(\oplus_j{\mathcal E}_{n,m_j, {\mathbb Z}[1/6]})\in \GW({\mathbb Z}[1/6])$ such that for each field $k\supset {\mathbb Z}[1/6]$, the push-forward of the Euler class $e(\oplus_j{\mathcal E}_{n,m_j,k})$ is a well-defined element of $\GW(k)$ and is equal to the image of $\text{deg}_{{\mathbb Z}[1/6]}e(\oplus_j{\mathcal E}_{n,m_j, {\mathbb Z}[1/6]})\in \GW({\mathbb Z}[1/6])$ under the base-change map $\GW({\mathbb Z}[1/6])\to \GW(k)$. Here $e(\oplus_j{\mathcal E}_{n,m_j,k})$ is the Euler class in hermitian $K$-theory. \end{corollary} \begin{proof} We use hermitian $K$-theory to define the Euler classes; this is a well-defined construction over ${\mathbb Z}[1/2]$. The condition $\sum_j3m_j+1 = 4n$ says that $\oplus_j{\mathcal E}_{n,m_j, k}$ has rank $\sum_j3m_j+1$ equal to the dimension $4n$ of $H_n/k$, and the congruences being satisfied implies that $\oplus_j{\mathcal E}_{n,m_j, k}$ is relatively oriented. 
Thus the Euler class $e(\oplus_j{\mathcal E}_{n,m_j,k})$ lives in the twisted hermitian $K$-theory $KQ^{8n, 4n}(H_n/k, \omega_{H_n/k})$, and as $H_n$ is smooth and proper of dimension $4n$ over $k$, we have the well-defined push-forward map \[ p_{H_n*}: KQ^{8n, 4n}(H_n/k, \omega_{H_n/k})\to KQ^{0, 0}(k)=\GW(k) \] Similarly, we have the Euler class $e(\oplus_j{\mathcal E}_{n,m_j, {\mathbb Z}[1/6]})\in KQ^{8n, 4n}(H_n/{\mathbb Z}[1/6], \omega_{H_n/{\mathbb Z}[1/6]})$ and the push-forward \[ p_{H_n/{\mathbb Z}[1/6]*}: KQ^{8n, 4n}(H_n/ {\mathbb Z}[1/6], \omega_{H_n/ {\mathbb Z}[1/6]})\to KQ^{0, 0}({\mathbb Z}[1/6])=\GW({\mathbb Z}[1/6]) \] Defining \[ \text{deg}_{{\mathbb Z}[1/6]}e(\oplus_j{\mathcal E}_{n,m_j, {\mathbb Z}[1/6]}):=p_{H_n/{\mathbb Z}[1/6]*}(e(\oplus_j{\mathcal E}_{n,m_j, {\mathbb Z}[1/6]})) \] and using the commutativity of ${}_*$ and ${}^*$ in cartesian squares gives the result. \end{proof} \begin{lemma}\label{lem:Coker} 1. Let $d\in {\mathbb Z}$ be a square-free even integer and let $k$ be a field of characteristic prime to $d$ (or of characteristic zero), giving the ring homomorphism ${\mathbb Z}[1/d]\to k$. For $x\in \GW({\mathbb Z}[1/d])$, the image of $x$ in $\GW(k)$ is determined by the rank $r$ and signature $s$ of $x$, in other words \[ x=s+\frac{r-s}{2}\cdot H\in \GW(k) \] where $H$ is the hyperbolic form.\\[2pt] 2. For a unit $u$ in a commutative ring $R$, let $\<u\>$ denote the rank one quadratic form $x\mapsto ux^2$. Then the cokernel of the map $\GW({\mathbb Z})\to \GW({\mathbb Z}[1/6])$ is additively generated by the forms $\<2\>-1$, $\<3\>-1$ and $\<6\>-1$. \end{lemma} \begin{proof} We are indebted to R. Parimala for the statement and proof of (1); as (1) in case $d=6$ follows from (2), and we will only be using (1) in case $d=6$, we give an independent proof of (2), inspired by Parimala's argument. Since natural map $\GW({\mathbb Z})\to \GW({\mathbb R})$ is an isomorphism \cite[Chap. 5, Theorem 4.1]{Scharlau}, and an element of $\GW({\mathbb R})$ is determined by its rank and signature (this is Sylvester's theorem of inertia), we need to show that an element $x\in \GW({\mathbb Z}[1/6])$ of rank zero and signature zero is in the subgroup generated by $x_2:=\<2\>-1$, $x_3:=\<3\>-1$ and $x_6:=\<6\>-1$.. Let $x_1=0$. The discriminant $d_x$ of $x$ is of the form $\pm2^a3^b$ mod squares (in ${\mathbb Q}^\times$). The assumption that $x$ has zero rank and signature easily implies that $d_x>0$, so $d_x$ is in $\{1, 2,3,6\}$. Subtracting $x_{d_x}$, we may assume that $x$ has trivial discriminant. The element $x_6-x_2-x_3$ is the Pfister form $\<\<3,2\>\>$, hence has trivial discriminant and has Hilbert symbol $\{3,2\}=-1$ at $p=3$. Adding $x_6-x_2-x_3$ to $x$ if necessary, we may assume that $x$ has trivial Hilbert symbol +1 at $p=3$. Since by assumption $x$ has trivial Hilbert symbol at all $p>3$, including $p=\infty$, the Hilbert reciprocity theorem \cite[Chap. 5, Theorem 5.1]{Scharlau} implies that $x$ has trivial Hilbert symbol at $p=2$. Thus $x$ has trivial rank, signature, and discriminant, as well as trivial Hilbert symbol at all $p$, so $x=0$ in $\GW({\mathbb Q}_p)$ for all $p\le \infty$ by \cite[Chap. 5, Theorem 4.2]{Scharlau}. Thus $x=0$ in $\GW({\mathbb Q})$ by the Hasse-Minkowski theorem. Since the map $\GW({\mathbb Z}[1/6])\to \GW({\mathbb Q})$ is injective \cite[Chap. 4, Corollary 3.3]{HusMilnor}, this says that $x=0$. 
\end{proof} \begin{remark}\label{rem:Refinement} The methods of \cite{LevineAB} use Euler classes with values in the twisted cohomology of the sheaf of Witt groups, rather than in hermitian $K$-theory. However, in the case of a locally free, relatively oriented sheaf ${\mathcal V}$ on a smooth projective $k$-scheme $p:X\to {\rm Spec\,} k$ (with ${\rm char} k\neq 2$) and with $\rnk({\mathcal V})=\dim_kX$, the push-forward of the Euler class $e^{{\operatorname{KQ}}}({\mathcal V})$ in hermitian $K$-theory and the push-forward of the Euler class $e^{\mathcal W}({\mathcal V})$ in Witt cohomology both have the same image in the Witt group $W(k)$. Indeed, Witt theory $W(-)$ is gotten from hermitian $K$-theory by inverting the algebraic Hopf map $\eta$, and by \cite[Proposition 3.4, Theorem 9.1]{LevineAspects}, the Witt-theory Euler class $e^W({\mathcal V})$ and the Witt cohomology Euler class $e^{\mathcal W}({\mathcal V})$ are unchanged if one twists ${\mathcal V}$ by a line bundle. Thus, we may assume that ${\mathcal V}$ admits a section $s$ with zero locus $Z$ of dimension zero. In this case, both $p_*(e^W({\mathcal V}))$ and $p_*(e^{\mathcal W}({\mathcal V}))$ are given by the (finite) push-forward of the respective Euler classes with support in $Z$. But as the Witt groups of $Z$ agree with the 0th cohomology of the Witt sheaf on $Z$, the Euler classes $e^W_Z({\mathcal V})$ and $e^{\mathcal W}_Z({\mathcal V})$ are equal in this common group. Thus the conclusion of Corollary~\ref{cor:Integrality} may be applied to the quadratic degree of the Euler class $e^{\mathcal W}(\oplus_j{\mathcal E}_{n,m_j,k})$ computed using Witt-sheaf cohomology, in other words, there is an element $d_{n,m_*}\in \GW({\mathbb Z}[1/6])$ such that $p_*(e^{\mathcal W}(\oplus_{j=0}^r{\mathcal E}_{n,m_j,k})))$ is the image of $d_{n,m_*}$ in $W(k)$, as long as $k$ of characteristic $\neq 2,3$. In addition, there are integers $a,b,c$ (depending on $n, m_1,\ldots, m_r$) such that \[ p_*(e^{\operatorname{KQ}}(\oplus_{j=0}^r{\mathcal E}_{n,m_j,k})))=s+\frac{r-s}{2}\cdot H+a(\<2\>-1)+b(\<3\>-1)+c(\<6\>-1) \] where $s$ is the signature of $d_{n,m_*}$, $r$ is the rank of $d_{n,m_*}$ and $H$ is the hyperbolic form. \end{remark} \section{Counting twisted cubics} Let ${\mathcal E}_{m_1,\ldots,m_r;n}:=\bigoplus_{i=1}^r{\mathcal E}_{m_i}\rightarrow H_n$. By \cite[$\S3$]{ES} a hypersurface in $\mathbb{P}^n$ defined by a degree $m$ homogeneous polynomial defines a section of the bundle ${\mathcal E}_m\rightarrow H_n$ and the zeros of the section correspond to the twisted cubics contained in the hypersurface. So if $\operatorname{rank}{\mathcal E}_{m_1,\ldots,m_r;n}=\operatorname{dim}H_n$, a general section of ${\mathcal E}_{m_1,\ldots,m_r}$ has finitely many zeros and thus there are finitely many twisted cubics on a general complete intersection of multidegree $(m_1,\ldots,m_r)$. Classically, one can compute this finite number of twisted cubics as the degree of the top chern class (or Euler class) of ${\mathcal E}_{m_1,\ldots,m_r;n}$. We aim to quadratically refine this count by computing the degree of $e^{\mathcal W}({\mathcal E}_{m_1,\ldots,m_r;n})$ valued in $\operatorname{W}(k)$. Let $N=N_{\operatorname{SL}_2}$ be the normalizer of the diagonal torus in $\operatorname{SL}_2$. We use the quadratically refined version of Bott's residue formula \cite[Theorem 9.5]{LevineAB} which applies to schemes with an $N$-action with finitely many fixed points. 
We recall a description of the irreducible $N$-representations and their Euler classes from \cite[\S6, Theorem 7.1]{LevineWitt}, and then define the action of $N$ on $H_n$ and find all the fixed points. From now on we use the notation $e$ for the Euler class $e^{{\mathcal W}}$ in Witt sheaf cohomology and $e^N$ for the equivariant Euler class. \subsection{Irreducible $N$-representations and their equivariant Euler classes} $N$ is generated by $t:=\begin{pmatrix}t & 0\\emptyset & t^{-1} \end{pmatrix}$ and $\sigma:=\begin{pmatrix}0 & 1\\ -1 & 0\end{pmatrix}$. For an integer $a>0$ let $\rho_a:N\rightarrow \operatorname{GL}_2$ the irreducible $N$-representation that sends $t$ to $\begin{pmatrix}t^a& 0\\ 0& t^{-a}\end{pmatrix}$ and $\sigma$ to $\begin{pmatrix} 0 & 1\\ (-1)^a& 0 \end{pmatrix}$ and let $\rho^-_a$ be the $2$-dimensional $N$-representation that sends $t$ to $\begin{pmatrix}t^a& 0\\ 0& t^{-a}\end{pmatrix}$ and $\sigma$ to $-\begin{pmatrix} 0 & 1\\ (-1)^a& 0 \end{pmatrix}$. Together with the trivial representation which we call $\rho_0$ and the $1$-dimensional representation $\rho_0^-$ which sends $t$ to $1$ and $\sigma$ to $-1$ in ${\mathbb G}_m=\operatorname{GL}_1(k)$, these are all the irreducible $N$-representations. The corresponding vector bundles on $BN$ are denoted by $\widetilde{\mathcal{O}}(a)$ and $\widetilde{\mathcal{O}}^-(a)$. Let $p:N\rightarrow \operatorname{SL}_2$ be the inclusion and let $e\in H^2(\operatorname{BSL}_2, {\mathcal W})$ be the Euler class of the bundle on $B\operatorname{SL}_2$ defined by the fundamental representation of $\operatorname{SL}_2$ on ${\mathbb A}^2$ in $H^2(B\operatorname{SL}_2,\mathcal{W})$. Let $\gamma$ be the invertible sheaf on $BN$ defined by the representation $\rho_0^-$ and let $\tilde{e}\in H^2(BN, {\mathcal W}(\gamma))$ be the Euler class of $\tilde{{\mathcal O}}(2)$. The equivariant Euler classes of $\widetilde{\mathcal{O}}^{\pm1}(a)$ are computed in \cite[Theorem 7.1]{LevineWitt}. \begin{theorem} \label{thm:EulerClasses} For $a$ odd, define \[\epsilon(a):=\begin{cases} 1 & \text{for $a\equiv 1\mod 4$}\\ -1 & \text{for $a\equiv 3\mod 4$.}\end{cases}\] Then \[e(\widetilde{\mathcal{O}}(a))=\begin{cases} \epsilon(a)\cdot a\cdot p^*e\in H^2(BN,\mathcal{W})& \text{for $a$ odd}\\ \frac{a}{2}\cdot \widetilde{e}\in H^2(BN,\mathcal{W}(\gamma)) & \text{for $a\equiv2\mod 4$}\\ -\frac{a}{2}\cdot \widetilde{e}\in H^2(BN,\mathcal{W}(\gamma))& \text{for $a\equiv0\mod 4$.} \end{cases}\] Moreover, $e(\tilde{{\mathcal O}}^-(a))=-e(\tilde{{\mathcal O}}(a))$ and $\tilde{e}^2=4e^2$, this latter identity taking place in $H^4(BN,\mathcal{W})$. \end{theorem} \begin{remark}\label{rem:OrientedBasis} The Euler class $e(V)$ (in Witt sheaf cohomology) of a rank $r$ locally free sheaf $V$ on some smooth scheme $X$ lives canonically in $H^r(X, {\mathcal W}(\det^{-1}V))$. Our computation for $e(\tilde{{\mathcal O}}(a))$ thus relies on a particular choice of isomorphism $\det\tilde{{\mathcal O}}(a)\cong {\mathcal O}_{BN}$ for $a>0$ odd and $\det\tilde{{\mathcal O}}(a)\cong \gamma$ for $a>0$ even. These are given by taking as basis $e_1, e_2$ for the representation $\rho_a$ as described by the matrix formula above, that is $\rho_a(t)(e_1)=t^ae_1$, $\rho_a(t)(e_2)=t^{-a}e_2$, $\rho_a(\sigma)(e_1)=(-1)^ae_2$, $\rho_a(\sigma)(e_2)=e_1$, and using $e_1\wedge e_2$ as basis for $\det\rho_a$. 
We call such as basis for $\rho_a$ {\em oriented}; concretely, two oriented bases $(e_1, e_2)$ and $(e_1', e_2')$ for $\rho_a$ are related by $e'_i=\lambda e_i$ for some $\lambda\in k^\times$, $i=1,2$. Thus the generator $e_1\wedge e_2$ for $\det\rho_a$ is well-determined up to a square in $k^\times$, so our computation of $e(\tilde{{\mathcal O}}(a))$ is thus valid for any choice of oriented basis. The identity $e(\tilde{{\mathcal O}}^-(a))=-e(\tilde{{\mathcal O}}(a))$ uses a basis $e_1, e_2$ for $\tilde{{\mathcal O}}^-(a)$, with the action of $\rho_a(t)$ as before and with $\rho_a(\sigma)(e_1)=(-1)^{a+1}e_2$, $\rho_a(\sigma)(e_2)=-e_1$. The identity $\widetilde{e}^2=4p^*e^2$ uses the basis $(e_1\wedge e_2)^2$ for $(\det\rho_2)^{\otimes 2}$ and similarly for $(\det\rho_1)^{\otimes 2}$. \end{remark} \subsection{Action and fixed points} For a smooth $k$-scheme $Y$ with $N$-action and an $N$-linearized vector bundle $V$ of rank $r$ on $Y$, we have the equivariant Euler class $e^N(V)$ in the $N$-equivariant Witt sheaf cohomology $H^{2r}_N(Y, {\mathcal W}(\det^{-1}(V))$ \cite{LevineAB}. \begin{lemma} Let $a_1,\ldots,a_r$ be positive odd integers. Then \[N\rightarrow N^{r}\text{, } t\mapsto (t^{a_1},\ldots, t^{a_r})\text{, }\sigma\mapsto (\sigma,\ldots,\sigma)\] defines a group homomorphism. \end{lemma} \begin{proof} We have the relation $\sigma\cdot \sigma=-1$, and $-1$ is sent to $(-1,\ldots,-1)$ which equals $(\sigma^2,\ldots,\sigma^2)$. Also the map described above respects the relation $\sigma\cdot t=t^{-1}\sigma$, so gives a well-defined homomorphism of group schemes. \end{proof} Let $s=\floor{\frac{n+1}{2}}$ and choose $a_1,\ldots,a_s$ odd positive, pairwise distinct integers. $N$ acts on $\P^n$ via $N\rightarrow N^{\floor{\frac{n+1}{2}}}\hookrightarrow \operatorname{GL}_{n+1}$ where the first map equals the map in the lemma above. This action extends to an action on $H_n$. A point $x\in H_n$ is fixed by the action if and only if the corresponding curve $C_x$ is mapped to itself via the action of $N$ on $\P^n$. Every twisted cubic spans a unique 3-plane, so this 3-plane needs to be fixed by the action for the corresponding point to be a fixed point. Thus, the possible fixed 3-planes are those subspaces $\Pi_{ij}$, $1\le i<j\le s$ defined by by $x_\ell=0$, $\ell\neq 2i-2, 2i-1, 2j-2, 2j-1$. We will concentrate on the case $\P^3=\Pi_{12}$ defined by $x_4=\ldots=x_n=0$; the remaining cases will follow by applying the suitable shuffle permutation to the variables. \begin{lemma}\label{lemma:FixCurve} The action of $N$ on $H_3$ has 6 fixed points $y_1,\ldots,y_6$ with defining ideal of the corresponding curve\\[5pt] $y_1:=(x_0x_2,x_0x_1,x_1x_3)$,\ $y_2:=(x_0x_2,x_2x_3,x_1x_3)$,\ $y_3:=(x_0x_3,x_0x_1,x_1x_2)$, \\[2pt] $y_4:=(x_0x_3,x_2x_3,x_1x_2)$,\ $y_5:=(x_0^2,x_0x_1,x_1^2)$,\ $y_6:=(x_2^2,x_2x_3,x_3^2)$. \end{lemma} \begin{proof} The points on $H_n$ that are fixed by the action of $t$ are exactly the points fixed by the classical torus action studied by Ellingsrud-Str{\o}mme. Thus the set of fixed points of the $N$-action is a subset of the set of fixed points in \cite[Proposition 3.8]{ES}. One checks that the six listed fixed points are exactly the ones which are fixed by the action of $\sigma$. \end{proof} \begin{corollary} The action of $N$ on $H_n$ has $\binom{\floor{\frac{n+1}{2}}}{2}\cdot 6$ fixed points, each of which are in $H_n(k)$. 
\end{corollary} \begin{proof} There are $\binom{\floor{\frac{n+1}{2}}}{2}$ fixed $3$-planes in $\P^n$ each containing $6$ twisted cubics which are fixed by the action. The fixed $3$-planes are each defined by subset of the variables $X_0,\ldots, X_n$, hence are all defined over ${\mathbb Z}$. Similarly, the fixed curves $C_x$ described in Lemma~\ref{lemma:FixCurve} are all defined over ${\mathbb Z}$, so all these $N$-fixed points are in $H_n(k)$. \end{proof} Assume ${\mathcal E}_{m_1,\ldots,m_r; n}:={\mathcal E}_{m_1,n}\oplus\ldots\oplus {\mathcal E}_{m_r,n}\rightarrow H_n$ is relatively oriented and that $\sum_{i=1}^r(3m_i+1)=4n$. We also assume that all the $m_j$ are odd, following Remark~\ref{rem:Cong2}(2). We will use the canonical relative orientation, discussed in Proposition~\ref{prop:relOrientationObservations} In this case the equivariant Euler class $e^N({\mathcal E}_{m_1,\ldots,m_r; n})$ of ${\mathcal E}_{m_1,\ldots,m_r; n}$ lives in $H_N^{4n}(H_n,\mathcal{W}(\det^{-1}{\mathcal E}_{m_1,\ldots,m_r; n}))$ which is isomorphic to $H_N^{4n}(H_n,\mathcal{W}(\omega_{H_n/k}))$ via the relative orientation. Let $\pi_{H_n}:H_n\rightarrow {\rm Spec\,} k$ be the structure map. Then we can push $e^N({\mathcal E}_{m_1,\ldots,m_r; n})$ forward and get $(\pi_{H_n})_*e^N({\mathcal E}_{m_1,\ldots,m_r; n})\in H^0(BN,\mathcal{W})\cong \operatorname{W}(k)[e]$. Applying the quadratic version of Bott's residue formula \cite[Theorem 9.5]{LevineAB} we get \begin{equation} \label{eq: Bott formula for twisted cubics0} (\pi_{H_n})_*e^N({\mathcal E}_{m_1,\ldots,m_r; n})=\sum_{\text{fixed $3$-planes}}\left(\sum_{i=1}^6\frac{e^N({\mathcal E}_{m_1,\ldots,m_r; n}(y_i))}{e^N( {\mathcal T}_{H_n}(y_i))}\right)\in \operatorname{W}(k)[e, 1/Me] \end{equation} for a suitable integer $M>0$. Of course, we are really interested in the ``quadratic degree'' \[ (\pi_{H_n})_*e({\mathcal E}_{m_1,\ldots,m_r; n})\in W(k) \] of our non-equivariant class $e({\mathcal E}_{m_1,\ldots,m_r; n})\in H^{4n}(H_n,\mathcal{W}(\omega_{H_n/k}))$. This is the image of $(\pi_{H_n})_*e^N({\mathcal E}_{m_1,\ldots,m_r; n})$ under the reduction map $W(k)[e]\to W(k)$ sending $e$ to zero. Both $e^N({\mathcal E}_{m_1,\ldots,m_r; n}(y_i))$ and $e^N( {\mathcal T}_{H_n}(y_i))$ are elements of $H^d(BN, {\mathcal W}(\gamma^{d_{y_i}}))$ for some integer $d_{y_i}$ depending on $y_i$, and where $d=\dim_kH_n$. Thus their ratio is a well-defined element of $W(k)[1/M]\subset W(k)[e. 1/Me]$, and \eqref{eq: Bott formula for twisted cubics0} gives us the identity \begin{equation} \label{eq: Bott formula for twisted cubics} (\pi_{H_n})_*e({\mathcal E}_{m_1,\ldots,m_r; n})=\sum_{\text{fixed $3$-planes}}\left(\sum_{i=1}^6\frac{e^N({\mathcal E}_{m_1,\ldots,m_r; n}(y_i))}{e^N( {\mathcal T}_{H_n}(y_i))}\right)\in \operatorname{W}(k)[1/Me] \end{equation} \section{Relatively oriented bases and Euler class computation} In order to compute \eqref{eq: Bott formula for twisted cubics}, we need to compute the equivariant Euler classes $e^N({\mathcal E}_{m,n}(y))$ and $e^N({\mathcal T}_{H_n}(y))$ for each fixed point $y$. We use the following strategy. We write the $N$-representations ${\mathcal E}_{m,n}(y)$ and ${\mathcal T}_{H_n}(y)$ as a direct sum of irreducible $N$-representations. 
Theorem \ref{thm:EulerClasses} gives us the equivariant Euler class of each irreducible summand and by the Whitney sum formula, the equivariant Euler classes of ${\mathcal E}_{m,n}(y)$ and ${\mathcal T}_{H_n}(y)$ are the respective products of the equivariant Euler classes of their irreducible summands. In order to get the correct choice of relatively oriented bases for ${\mathcal T}_{H_n}(y)$ and ${\mathcal E}_{m,n}(y)$, we first choose bases for ${\mathcal E}_n(y)$ and ${\mathcal F}_n(y)$. We have seen above that $y$ is in $H_n^{\operatorname{cm}}$. We will then use the resolutions \eqref{eqn:TangentPresentation} and \eqref{eqn:ResOnHn} for ${\mathcal T}_{\Phi}(y)$ and ${\mathcal E}_{m,n}(y)$ and choose our bases for ${\mathcal T}_{\Phi}(y)$ and ${\mathcal E}_{m,n}(y)$ so that they are compatible with respect to the resolutions and the given choice of bases for ${\mathcal E}_n(y)$ and ${\mathcal F}_n(y)$. We will then use a choice of basis for ${\mathcal T}_{\Gr(4, n+1)}(\Phi(y))$ to give a basis for ${\mathcal T}_{H_n}(y)$. In case our chosen bases for ${\mathcal T}_{\Phi}(y)$ or ${\mathcal E}_{m,n}(y)$ are not oriented in the sense of Remark~\ref{rem:OrientedBasis}, we will keep track of the factors needed to make the basis oriented in this sense (these will always be $\pm1$) and correct our computation by the product of all these factors. We have the Euler sequence on $\Gr(4, n+1)$ \[ 0\to {\rm End}(E_4)\to E_4^\vee\otimes {\mathcal O}^{n+1}_{\Gr(4, n+1)}\to {\mathcal T}_{\Gr(4, n+1)}\to 0 \] and the canonical identification ${\mathcal O}_{\Pi_n(\Phi(y))}(1)=E_4^\vee(y)$. Using this, our choice of coordinates for the projective space $\Pi_n(\Phi(y))$ gives an oriented basis for $ {\mathcal T}_{\Gr(4, n+1)}(\Phi(y))$, and the identity \[ \det {\mathcal T}_{\Gr(4, n+1)}(\Phi(y))=(\det E_4^\vee)^{\otimes n+1}={\mathcal O}_{\Gr(4, n+1)}(n+1). \] Note that $\det{\mathcal E}_n(y)$ and $\det{\mathcal F}_n(y)$ both appear to even powers in $\det{\mathcal T}_{H_n}(y)$ and since we will always take $m$ odd, $\det{\mathcal E}_n(y)$ appears to an even power in $\det{\mathcal E}_{m,n}$. For $m\equiv1\mod4$, the same holds for $\det{\mathcal F}_n(y)$, but if $m\equiv-1\mod4$, $\det{\mathcal F}_n(y)$ occurs to an odd power in $\det{\mathcal E}_{m,n}$. However, if we apply our method to each of ${\mathcal E}_{m_1,n},\ldots, {\mathcal E}_{m_r, n}$ and $\oplus_{j=1}^r{\mathcal E}_{m_j, n}$ is relatively oriented, then the number of $m_i\equiv-1\mod 4$ is even, so in all relatively oriented cases, our choice of basis for ${\mathcal E}_n(y_j)$ and ${\mathcal F}_n(y_j)$ at each of the fixed points $y_j$ will not play a role. At the individual $y_j$, we can even make an arbitrary change of basis for ${\mathcal E}_n(y_j)$ in our computations, and a change of basis with determinant one for ${\mathcal F}_n(y_j)$, without affecting the global result. Similarly, the relative orientation condition implies that the powers of ${\mathcal O}_{\Gr(4, n+1)}(1)$ occurring in $\det\oplus_{j=1}^r{\mathcal E}_{m_j, n}$ and in $\det{\mathcal T}_{H_n}$ are either both odd or both even, so our choice of basis for ${\mathcal T}_{\Gr(4, n+1)}(\Phi(y))$ as above will give us a bases for ${\mathcal T}_{H_n}(y)$ and $\oplus_{j=1}^r{\mathcal E}_{m_j, n}(y)$ that are relatively oriented with respect to our choice of global relative orientation. Let $E={\mathcal E}_n(y)$ and $F={\mathcal F}_n(y)$ be the fibers at a fixed point $y$. 
As all our fixed points are in the Cohen-Macaulay locus, the restriction of the complex \eqref{eqn:ResOnHn} gives an equivariant resolution \begin{multline} \label{eq:resolution bundle} 0\rightarrow F\otimes H^0(\mathcal{O}_{\P^3}(d-3))\xrightarrow{L(y)} E\otimes H^0(\mathcal{O}_{\P^3}(d-2))\\\xrightarrow{\bigwedge^2L(y)^t} H^0(\mathcal{O}_{\P^3}(d))\xrightarrow{\pi_{m,n}} {\mathcal E}_{m,n}(y)\rightarrow 0 \end{multline} of ${\mathcal E}_{m,n}(y)$ (see Remark~\ref{rem:GEquivarianceProperty}). Similarly, the restriction of the complex \eqref{eqn:TangentPresentation} gives an equivariant resolution \begin{multline} \label{eq:resulution tangent space} 0\rightarrow k\xrightarrow{({\operatorname{\rm Id}}_F,-{\operatorname{\rm Id}}_E)} {\rm End}(F)\oplus {\rm End}(E)\\\xrightarrow{(L(y)_*, L(y)^*)} {\rm Hom}(F,E)\otimes H^0(\mathcal{O}_{\P^3}(1))\xrightarrow{\pi} {\mathcal T}_{H_3}(y)\rightarrow 0 \end{multline} of $ {\mathcal T}_{H_3}(y)$. The matrix $L(y)$ and bases for $E$ and $F$ are not uniquely determined by $y$; we fix our choices of $L(y)$ and the corresponding bases $e_1, e_2, e_3$ of $E$ and $f_1, f_2$ of $F$ as follows \[ L(y_1)=\begin{pmatrix}-x_1&0\\x_2&x_3\\emptyset&-x_0\end{pmatrix},\ L(y_2)=\begin{pmatrix}-x_3&0\\x_0&x_1\\emptyset&-x_2\end{pmatrix},\ L(y_3)=\begin{pmatrix}0&-x_1\\x_2&x_3\\-x_0&0\end{pmatrix}, \] \[ L(y_4)=\begin{pmatrix}0&-x_3\\x_0&x_1\\-x_2&0\end{pmatrix},\ L(y_5)=\begin{pmatrix}-x_1&0\\x_0&x_1\\emptyset&-x_0\end{pmatrix},\ L(y_6)=\begin{pmatrix}-x_3&0\\x_2&x_3\\emptyset&-x_2\end{pmatrix}, \] in other words, $L(y_1)(f_1)=-x_1e_1+x_2e_2$, etc. We now proceed to implement the strategy outlined above. We have our action of $N$ on $\P^n$ and $H_n$ arising from our choice of odd integers $a_1>a_2,\ldots>a_s>0$, $s=\floor{\frac{n+1}{2}}$. For the computation involving the six $N$-fixed curves lying in $\Pi_{ij}$, we use the oriented basis $x_{2i-2}, x_{2i-1}, x_{2j-2}, x_{2j-1}$ with weights $a_i, -a_i, a_j, -a_j$ for $H^0(\Pi_{ij}, {\mathcal O}(1))$. We fix an integer $M>0$ and assume that $a_i>M\cdot a_{i+1}$ for $i=1,\ldots, s$; for application to the study of the sheaf $\oplus_{i=1}^r{\mathcal E}_{m_i,n}$, we will take $M=\max_i m_i$. In what follows, we will look at the case of $\Pi_{12}=\P^3$; the general case follows by symmetry. We let $V=H^0(\P^3, {\mathcal O}_{\P^3}(1))$ with standard basis $x_0, x_1, x_2, x_3$. \begin{definition} Let $X$ be a $k$-vector space with an $N$ action. Take $x\in X\setminus\{0\}$ such that the line $k\cdot x$ is stable under the ${\mathbb G}_m$-action. Then $t\cdot x=t^wx$ for some integer $w\in {\mathbb Z}$. We say that $x$ is a weight vector with weight $\operatorname{wt}(x)=w$. \end{definition} We use the canonical order defined in Definition \ref{def:CanonicalOrientation} on the monomials in ${\operatorname{Sym}}^rV$. We assume that $r$ is at most our chosen integer $M$. Recall from Definition \ref{def:CanonicalOrientation} that we call a monomial $g=x_0^{m_1}x_1^{m_2}x_3^{m_2}x_3^{m_4}$ \begin{itemize} \item \emph{positive} if $m_1>m_2$ or $m_1=m_2$ and $m_3>m_4$. Note that is is exactly the case when $\operatorname{wt}(g)>0$. \item \emph{negative} if $m_1<m_2$ or $m_1=m_1$ and $m_3<m_4$. Note that is is exactly the case when $\operatorname{wt}(g)<0$, \item \emph{neutral} $m_1=m_2$ and $m_3=m_4$ and note that is is exactly the case when $\operatorname{wt}(g)=0$. \end{itemize} Also recall that we set $g^*=x_0^{m_2}x_1^{m_1}x_2^{m_4}x_3^{m_3}$. 
For our chosen $N$-action, the induced action on $V$ makes each monomial $m_\alpha$ in $x_0,\ldots, x_3$ a weight vector, with $\sigma(m_\alpha)=\pm m_\alpha^*$. In particular, each $k$-vector subspace $k\cdot m_\alpha+k\cdot m_\alpha^*$ of $k[x_0,\ldots, x_3]$ is an irreducible $N$-representation, and has dimension two exactly when $m_\alpha^*\neq m_\alpha$. The $N$-representation $E:=H^0(\P^3, {\mathcal I}_{C_y}(2))$ is naturally a sub-representation of ${\operatorname{Sym}}^2V$, and thus has a basis of weight vectors consisting of degree two monomials in $x_0,\ldots, x_ 3$. By our choice of bases for $E$ and $F$, and the matrices $L(y_j)$, we see that $f_1, f_2$, $e_1, e_2, e_3$ are weight vectors with $\operatorname{wt}(f_1)>0$, $\operatorname{wt}(f_2)<0$, $\operatorname{wt}(e_1)>0$, $\operatorname{wt}(e_2)=0$ and $\operatorname{wt}(e_3)<0$. We call the bases $(e_1, e_2, e_3)$ for $E$, $(f_1, f_2)$ for $F$ and $x_0,\ldots, x_3$ for $V$ {\em standard bases}. Similarly, we call $x_4,\ldots, x_n$ the standard basis for $H^0(\P^n,. {\mathcal O}_{\P^n}(1))/V$. We make an analogous definition of the matrices $L(y_j)$ and standard bases in the case of the other fixed subspaces $\Pi_{ij}$ by acting by the evident shuffle permutation of the variables. We extend the $*$-action on monomials to $E$ and $F$ by setting $e_1^*=e_3$, $e_3^*=e_1$, $e_2^*=e_2$, $f_1^*=f_2$ and $f_2^*=f_1$. \begin{ex} For $y_1=(x_0x_2, x_0x_1, x_1x_3)$ we have $e_1\mapsto -x_0x_2$, $e_2\mapsto -x_0x_1$, $e_3\mapsto -x_1x_3$. The weights are \[ \operatorname{wt}(e_1)=a_1+a_2, \operatorname{wt}(e_2)=0, \operatorname{wt}(e_3)=-a_1-a_2 \] \[ \operatorname{wt}(f_1)=a_2, \operatorname{wt}(f_2)=-a_2 \] and we have \[ \sigma(e_1)=e_3, \sigma(e_2)=-e_2, \sigma(e_3)=e_1, \] \[ \sigma(f_1)=f_2, \sigma(f_2)=-f_1. \] Thus $E\cong \tilde{{\mathcal O}}(a_1+a_2)\oplus\tilde{{\mathcal O}}^-(0)$, $F\cong \tilde{{\mathcal O}}^-(a_2)$ (recall that $a_2$ is odd and $a_1+a_2$ is even). \end{ex} We use the following convention for ordered bases for a tensor product. If we have constructed bases $u_1,\ldots, u_r$, $w_1,\ldots, w_s$ for vector spaces $U$ and $W$, the tensor product $U\otimes W$ inherits the basis $\{u_i\otimes w_j\}_{i,j}$, where we use the order in which $u_i\otimes w_j$ precedes $u_{i'}\otimes w_{j'}$ if $j<j'$ or if $j=j'$ and $i<i'$: \[ u_1\otimes w_1, u_2\otimes w_1,\ldots, u_r\otimes w_1, u_1\otimes w_2,\ldots, u_r\otimes w_2, \ldots, u_1\otimes w_s,\ldots, u_r\otimes w_s. \] This fixes the canonical isomorphism \[ \det(U\otimes W)\cong \det(U)^{\otimes \dim W}\otimes\det(W)^{\otimes\dim U} \] To construct our relatively oriented basis for ${\mathcal E}_{m,n}(y)$, we start with our choices of standard bases for $E$, $F$ and $V$, and use our canonical bases for ${\operatorname{Sym}}^jV$, $j=m-3, m-2, m$. This gives us canonical bases the terms in the resolutions \eqref{eq:resolution bundle} and \eqref{eq:resulution tangent space} From this we can extract $N$-oriented bases for ${\mathcal E}_{m,n}(y)$ and ${\mathcal T}_{\Phi}(y)$, and then at the end add in a similarly chosen basis for ${\mathcal T}_{\Gr(4, n+1})(\Phi(y))$ to give our $N$-oriented basis for ${\mathcal T}_{H_n}(y)$. From our discussion above, these bases will automatically be compatible with our chosen relative orientation for ${\mathcal E}_{m,n}$, possibly up to a sign that we can keep track of. 
In what follows we use {\em oriented} to refer to oriented bases as $N$-representation, and {\em relatively oriented} to refer to our globally chosen relative orientation. As $\det F$, $\det E$ occur in $\det {\mathcal E}_{m,n}(y)$ and in $\det{\mathcal T}_{H_n}(y)$ to the same powers modulo two, the choice of standard bases for $F$ and $E$ will not matter in our construction of a relatively oriented bases for ${\mathcal E}_{m,n}(y)$, as long as we use the same bases for the construction for ${\mathcal E}_{m,n}(y)$ and for ${\mathcal T}_{H_n}(y)$. \subsection{Oriented bases for ${\mathcal E}_{m,n}(y_j)$} Note that there are no zero weight monomials in ${\operatorname{Sym}}^rV$ when $r$ is odd. Consequently, the canonical ordering of ${\operatorname{Sym}}^rV$ defined in \ref{def:CanonicalOrientation} comes in pairs $(g,g^*)$ where $g$ is positive and $g^*$ is negative. Also note that $\sigma$ sends $g$ to $\pm g^*$ and hence the vector spaces with basis $(g,g^*)$ are invariant under the $N$-action. Note further that changing the order of the pairs yields an even permutation of a given basis, and thus is orientation preserving. The $E\otimes {\operatorname{Sym}}^{m-2}V$-term breaks up into $6$-dimensional invariant subspaces with ordered bases \[(e_1g,e_2g,e_3g,e_1g^*,e_2g^*,e_3g^*)\] where $g$ is a monomial of degree $m-2$ and positive weight. We rearrange this basis to \[(e_1g,e_3g^*,e_2g,e_2g^*,e_3g,e_1g^*).\] Note that this base change has determinant $+1$ and splits up into $3$ pairs of weight vectors, dual with respect to the $\sigma$-action, namely $(e_1g,e_3g^*)$, $(e_2g,e_2g^*)$ and $(e_3g,e_1g^*)$ where the first and the second one are oriented, that is, the positive weight vector comes first. It might be that $\operatorname{wt}(e_3g)<\operatorname{wt}(e_1g^*)$ in which case we want to swap the order and keep track of an additional sign. The $F\otimes {\operatorname{Sym}}^{m-3}V$-term breaks up into $4$-dimensional (when $g$ is positive) respectively $2$-dimensional (when $h$ is neutral) subspaces with ordered basis \[(f_1g,f_2g,f_1g^*,f_2g^*) \] respectively \[(f_1h,f_2h).\] The latter is of the desired form. We rearrange the first basis (change of basis with determinant $=+1$) to \[(f_1g,f_2g^*,f_2g,f_1g^*)\] which splits up into 2 $2$-dimensional invariant subspaces, with bases $(f_1g, f_2g^*)$ and $(f_2g,f_1g^*)$, both dual with respect to the $\sigma$-action. The basis $(f_1g, f_2g^*)$ is an oriented basis as $N$-representation, while we might need to switch the order of $(f_2g,f_1g^*)$ to yield an oriented basis. We compute the number of total swaps to get our desired basis for $m$ odd. \begin{proposition} \label{prop:number of swaps} \begin{enumerate} \item Assume $m\equiv 1\mod 4$. Then the number of monomials $g$ of degree $m-2$ for which $\operatorname{wt}(ge_3)<\operatorname{wt}(g^*e_1)$ plus the number of monomials $g$ of degree $m-3$ for which $\operatorname{wt}(f_2g)<\operatorname{wt}(f_1g^*)$ is even. So in this case we have an even number of swaps. \item Assume $m\equiv-1\mod 4$. Then the number of swaps is even for the first, second and fifth fixed point, while it is odd for the others. \end{enumerate} \end{proposition} \begin{lemma} \label{lemma: cases} Let $m$ be an odd, positive integer. 
\\[5pt] (1) $\#\{g:\text{$g$ is a monomial of degree $m-2$ and $0<\operatorname{wt}(g)<a_1+a_2$}\}=\frac{(m-1)(m+1)}{4}$\\[2pt] (2) $\#\{g:\text{$g$ is a monomial of degree $m-2$ and $0<\operatorname{wt}(g)<a_1-a_2$}\}=\frac{(m-1)^2}{4}$\\[2pt] (3) $\#\{g: \text{$g$ is a monomial of degree $m-2$ and $0<\operatorname{wt}(g)<2a_2$}\}=\frac{m-1}{2}$\\[2pt] (4) $\#\{g: \text{$g$ is a monomial of degree $m-2$ and $0<\operatorname{wt}(g)<2a_1$}\}=\frac{(m-1)^2}{2}$\\[2pt] (5) $\#\{g: \text{$g$ is a monomial of degree $m-3$ and $0<\operatorname{wt}(g)<a_2$}\}=0$\\[2pt] (6) $\#\{g: \text{$g$ is a monomial of degree $m-3$ and $0<\operatorname{wt}(g)<a_1$}\}=\frac{(m-1)(m-3)}{4}$ \end{lemma} \begin{proof} (1) Either $\operatorname{wt}(g)=j\cdot a_2$ for some $j\in \mathbb{Z}_{>0}$ or $\operatorname{wt}(g)=a_1-j\cdot a_2$ for $j\in \mathbb{Z}_{\ge 0}$. In the first case $g=x_0^{i}x_1^{i}x_2^px_3^q$ for some $i\ge 0$ and $p,q\ge 0$ such that $p>q$ and $p+q=m-2-2i$. For a fixed $i$ there are $\frac{m-1-2i}{2}$ such monomials, namely \[ x_0^ix_1^ix_2^{\frac{m-2-2i+1}{2}}x_3^{\frac{m-2-2i-1}{2}}, x_0^ix_1^ix_2^{\frac{m-2-2i+3}{2}}x_3^{\frac{m-2-2i-3}{2}}, \ldots, x_0^ix_1^ix_2^{m-2-2i}. \] So we get \[\sum_{i=0}^{\frac{m-3}{2}}\frac{m-1-2i}{2}=1+2+\ldots +\frac{m-1}{2}=\binom{\frac{m+1}{2}}{2}=\frac{(m+1)(m-1)}{8}\] in total. In the second case we have $g=x_0^{i+1}x_1^ix_2^px_3^q$ for some $i\ge 0$, $p,q\ge 0$ with $p+q=m-2-2i-1$ and $q\ge p$. For a fixed $i$ there are $\frac{m-1-2i}{2}$ such $g$'s, namely \[x_0^{i+1}x_1x_2^{\frac{m-2-2i-1}{2}}x_3^{\frac{m-2-2i-1}{2}},x_0^{i+1}x_1x_2^{\frac{m-2-2i-3}{2}}x_3^{\frac{m-2-2i+1}{2}},\ldots,x_0^{i+1}x_1^ix_3^{m-2-2i-1}\] and in total we get \[\sum_{i=0}^{\frac{m-3}{2}}\frac{m-1-2i}{2}=1+2+\ldots +\frac{m-1}{2}=\binom{\frac{m+1}{2}}{2}=\frac{(m+1)(m-1)}{8}.\] So there are $\frac{(m+1)(m-1)}{4}$ monomials $g$ of degree $m-2$ and $0<\operatorname{wt}(g)<a_1+a_2$.\\[5pt] (2) Either $\operatorname{wt}(g)=j\cdot a_2$ for $j\in \mathbb{Z}_{>0}$ or $\operatorname{wt}(g)=a_1-j\cdot a_2$ for some $j\in \mathbb{Z}_{>1}$. We already know that there are $\frac{(m-1)(m+1)}{8}$ monomials with weight $j\cdot a_2$. The monomials of weight $a_1-j\cdot a_2$ with $j>1$ are of the form $x_0^{i+1}x_1^ix_2^px_3^q$ for some $i\ge 0$, $p,q\ge 0$ with $p+q=m-2-2i-1$ and $q> p+1$. For a fixed $i$ there are $\frac{m-3-2i}{2}$ such monomials, namely \[x_0^{i+1}x_1x_2^{\frac{m-2-2i-3}{2}}x_3^{\frac{m-2-2i+1}{2}},x_0^{i+1}x_1x_2^{\frac{m-2-2i-5}{2}}x_3^{\frac{m-2-2i+3}{2}},\ldots, x_0^{i+1}x_1^ix_3^{m-2-2i-1}.\] This makes in total $\binom{\frac{m-1}{2}}{2}=\frac{(m-1)(m-3)}{8}$. Adding the $2$ numbers we get \[\frac{(m-1)(m+1)}{8}+\frac{(m-1)(m-3)}{8}=\frac{m^2-1+m^2-4m+3}{8}=\frac{(m-1)^2}{4}\] \ \\[2pt] (3) We are looking for monomials of degree $m-2$ and weight $a_2$. These are of the form $x_0^ix_1^ix_2^{\frac{m-2-2i+1}{2}}x_3^{\frac{m-2-2i-1}{2}}$ and one checks that there are $\frac{m-1}{2}$ of these.\\[2pt] (4) $g$ can either have weight $j\cdot a_2$ for some $j\in \mathbb{Z}_{>0}$ or $a_1+j\cdot a_2$ for $j\in \mathbb{Z}$ or $2a_1-j\cdot a_2$ for $j>0$. In the first case $g$ is of the form $g=x_0^ix_1^ix_2^px_3^q$ for $i,p,q\ge 0$ with $p>q$ and $m-2-2i=p+q$. For a fixed $i$ these are the following \[x_0^ix_1^ix_2^{\frac{m-2-2i+1}{2}}x_3^{\frac{m-2-2i-1}{2}},x_0^ix_1^ix_2^{\frac{m-2-2i+3}{2}}x_3^{\frac{m-2-2i-3}{2}},\ldots, x_0^ix_1^ix_2^{m-2-2i}\] One counts that the number of such monomials is $\frac{m-1-2i}{2}$. In total we get $\binom{\frac{d+1}{2}}{2}=\frac{(m-1)(m+1)}{8}$. 
In the second case the monomials are of the form $g=x_0^{i+1}x_1^ix_2^px_3^q$ for $i,p,q\ge 0$ with $m-3-2i=p+q$. For a fixed $i$ there are $m-2-(2i+1)+1$ such monomials and in total we get \begin{multline*} \sum_{i=0}^{\frac{m-3}{2}}m-2-2i=\frac{m-1}{2}\cdot (m-2)-2\cdot (1+2+\ldots+\frac{m-3}{2})\\=\frac{m-1}{2}\cdot(m-2)-2\cdot\binom{\frac{m-1}{2}}{2}=\frac{(m-1)^2}{4} \end{multline*} For the third weight class $2a_1-j\cdot a_2$ the monomials look as follows: $x_0^{i+2}x_1^ix_2^px_3^q$ for $i,p,q\ge 0$ with $p+q=m-2-(2i+2)$ and $q>p$. For a fixed $i$ these are the following ones \[x_0^{i+2}x_1^ix_2^{\frac{m-2-(2i+2)-1}{2}}x_3^{\frac{m-2-(2i+2)+1}{2}},\ldots, x_0^{i+2}x_1^ix_3^{m-2-(2i+2)}\] which are $\frac{m-1-(2i+2)}{2}$ many. Summing up over all possible $i$, we get $\binom{\frac{m-1}{2}}{2}=\frac{(m-1)(m-3)}{8}$ many. So in total we get \[\frac{(m-1)(m+1)}{8}+\frac{(m-1)^2}{4}+\frac{(m-1)(m-3)}{8}=\frac{(m-1)^2}{2}\] \ \\ (5) No monomial has weight in this range. \\[2pt] (6) These are all the monomials of degree $m-3$ of weight either $j\cdot a_2$ for some $j>0$ or $a_1-j\cdot a_2$ for $j>0$. In the first case these are of the form $x_0^ix_1^ix_2^px_3^q$ for $i,q,p\ge 0$ with $p+q=m-3-2i$ and $p>q$. For a fixed $i$ these are the following \[x_0^ix_1^ix_2^{\frac{m-3-2i+2}{2}}x_3^{\frac{m-3-2i-2}{2}},x_0^ix_1^ix_2^{\frac{m-3-2i+4}{2}}x_3^{\frac{m-3-2i-4}{2}},\ldots,x_0^ix_1^ix_2^{m-3-2i}\] which makes $\frac{m-3-2i}{2}$ in total. So we get $\binom{\frac{m-1}{2}}{2}=\frac{(m-1)(m-3)}{8}$ monomials of positive weight a multiple of $a_2$. The monomials of weight $a_1-j\cdot a_2$ are of the form $x_0^{i+1}x_1^ix_2^px_3^q$ for $p,q,i\ge 0$ with $q>p$ and $m-3-(2i+1)=p+q$. For a fixed $i$ we get the following monomials \[x_0^{i+1}x_1^ix_2^{\frac{m-4-2i-1}{2}}x_3^{\frac{m-4-2i+1}{2}},x_0^{i+1}x_1^ix_2^{\frac{m-4-2i-3}{2}}x_3^{\frac{m-4-2i+3}{2}},\ldots ,x_0^{i+1}x_1^ix_3^{m-4-2i}\] which makes $\frac{m-3-2i}{2}$ in total. Summing up over all possible $i$ we get $\binom{\frac{m-1}{2}}{2}=\frac{(m-1)(m-3)}{8}$. In total we get $\frac{(m-1)(m-3)}{4}$. \end{proof} \begin{proof}[Proof of Proposition \ref{prop:number of swaps}] We need to swap $e_3g$ and $e_1g^*$ whenever \[\operatorname{wt}(ge_3)<0\Leftrightarrow 0<\operatorname{wt}(g)<\operatorname{wt}(e_1)\] and we need to swap $f_2g$ and $f_1g^*$ whenever \[\operatorname{wt}(f_2g)<0\Leftrightarrow0<\operatorname{wt}(g)<\operatorname{wt}(f_1)\] \ \\ (1) First fixed point: $\operatorname{wt}(e_1)=a_1+a_2$ and $\operatorname{wt}(f_1)=a_2$. By Lemma \ref{lemma: cases} we need to swap $\frac{(m-1)(m+1)}{4}+0=\frac{(m-1)(m+1)}{4}$ times which is always even since $m$ is odd.\\[2pt] (2) Second fixed point: $\operatorname{wt}(e_1)=a_1+a_2$ and $\operatorname{wt}(f_1)=a_1$. By Lemma \ref{lemma: cases} we need to swap $\frac{(m-1)(m+1)}{4}+\frac{(m-1)(m-3)}{4}$ times which is always even (both summands are even) since $m$ is odd.\\[2pt] (3) Third fixed point: $\operatorname{wt}(e_1)=a_1-a_2$ and $\operatorname{wt}(f_1)=a_2$. By Lemma \ref{lemma: cases} we need to swap $\frac{(m-1)^2}{4}+0=\frac{(m-1)^2}{4}$ times which is always even for $m\equiv 1\mod 4$ and odd for $m\equiv -1\mod 4$. \\[2pt] (4) Fourth fixed point: $\operatorname{wt}(e_1)=a_1-a_2$ and $\operatorname{wt}(f_1)=a_1$. By Lemma \ref{lemma: cases} we need to swap $\frac{(m-1)^2}{4}+\frac{(m-1)(m-3)}{4}$ times which is an even number for $m\equiv 1\mod 4$ and odd for $m\equiv -1 \mod 4$.\\[2pt] (5) Fifth fixed point: $\operatorname{wt}(e_1)=2a_1$ and $\operatorname{wt}(f_1)=a_1$.
By Lemma \ref{lemma: cases} we need to swap $\frac{(m-1)^2}{2}+\frac{(m-1)(m-3)}{4}$ times which is even for $m$ odd. \\[2pt] (6) Sixth fixed point: $\operatorname{wt}(e_1)=2a_2$ and $\operatorname{wt}(f_1)=a_2$. We get an even number of swaps for $m\equiv 1\mod 4$ namely $\frac{m-1}{2}$. We get an odd number for $m\equiv-1\mod 4$. \end{proof} We note that for each $j$, the map $E\to {\operatorname{Sym}}^2(V)$ sends each $e_i$, up to sign, to a (degree 2) monomial. Thus, ${\mathcal E}_{m,n}(y_j)$ has as basis the images of the monomials in ${\operatorname{Sym}}^mV$ that are not in the image of $E\otimes {\operatorname{Sym}}^{m-2}V\to {\operatorname{Sym}}^mV$. Using the canonical bases for ${\operatorname{Sym}}^{m-3}V$, ${\operatorname{Sym}}^{m-2}V$ and ${\operatorname{Sym}}^mV$, together with our choice of standard bases for $E$ and $F$ and the resulting bases for $F\otimes {\operatorname{Sym}}^{m-3}V$ and $E\otimes {\operatorname{Sym}}^{m-2}V$, the resolution \eqref{eq:resolution bundle} gives us the canonical isomorphism \begin{equation}\label{eqn:CanDetIso} \delta(y_j): \det{\mathcal E}_{m,n}(y_j)\xrightarrow{\sim} \det{\operatorname{Sym}}^mV\otimes(\det E\otimes {\operatorname{Sym}}^{m-2}V)^{-1}\otimes \det F\otimes {\operatorname{Sym}}^{m-3}V\cong k \end{equation} \begin{definition} Let $m\ge3$ be an odd integer. We call a basis $g_1, g_2,\ldots, g_D$ of ${\mathcal E}_{m,n}(y_j)$ {\em canonically oriented} if $g_1\wedge\ldots\wedge g_D$ maps to $1\in k$ under the isomorphism \eqref{eqn:CanDetIso}. We call such a basis {\em anti-canonically oriented} if $g_1\wedge\ldots\wedge g_D$ maps to $-1\in k$ under this isomorphism. We extend these notions to the vector spaces ${\mathcal E}_{m_1,\ldots, m_r,n}(y_j)$, with the $m_j\ge3$ odd integers, in the evident way. \end{definition} \begin{proposition}\label{prop:SignsInOrientedBases} Let $m\ge3$ be an odd integer, fix $j\in\{1,\ldots, 6\}$, let $\{M_1,\ldots, M_d\}$ be the set of positive monomials $M_i$ in ${\operatorname{Sym}}^mV$ that are not in the image of $E\otimes {\operatorname{Sym}}^{m-2}V\to {\operatorname{Sym}}^mV$ (in any order), and let $m_i, m_i^*$ be the respective images of $M_i, M_i^*$ in ${\mathcal E}_{m,n}(y_j)$. Then the basis $m_1, m_1^*, m_2, m_2^*, \ldots, m_d, m_d^*$ of ${\mathcal E}_{m,n}(y_j)$ is canonically oriented in case $j=1,2,5$, or $j=3,4,6$ and $m\equiv 1\mod 4$, and is anti-canonically oriented in case $j=3,4,6$ and $m\equiv -1\mod 4$. \end{proposition} \begin{proof} We first give the proof in case $j=1$. Let $W\subset E\otimes{\operatorname{Sym}}^{m-2}V$ be the subspace $e_2\otimes {\operatorname{Sym}}^{m-2}V$. We have the maps $L(y_1):F\otimes{\operatorname{Sym}}^{m-3}V\to E\otimes{\operatorname{Sym}}^{m-2}V$, $\bigwedge^2L(y)^t:E\otimes{\operatorname{Sym}}^{m-2}V\to{\operatorname{Sym}}^mV$ and $\pi:{\operatorname{Sym}}^mV\to {\mathcal E}_{m,n}(y_j)$ from the complex \eqref{eq:resolution bundle}. We replace $L(y_1)$ with $\alpha=-L(y_1)$ and replace $\bigwedge^2L(y)^t$ with $\beta=-\bigwedge^2L(y)^t$ in our complex; since this change is equivalent to the change of basis of $E$, $e_i\mapsto -e_i$, and since ${\operatorname{Sym}}^{m-2}V$ has even dimension, this change arises from a change of basis of $E\otimes{\operatorname{Sym}}^{m-2}V$ of determinant $+1$.
Since the restriction of the map $\beta:E\otimes{\operatorname{Sym}}^{m-2}V\to {\operatorname{Sym}}^mV$ to $W$ is just the injective map $\times x_0x_1:{\operatorname{Sym}}^{m-2}V\to {\operatorname{Sym}}^mV$, $W\cap\alpha(F\otimes {\operatorname{Sym}}^{m-3}V)=\{0\}$. Note that the map $\alpha:F\otimes {\operatorname{Sym}}^{m-3}V\to E\otimes{\operatorname{Sym}}^{m-2}V$ sends $f_1\otimes m$ to $e_1\otimes x_1m-e_2\otimes x_2m$ and sends $f_2\otimes m$ to $e_3\otimes x_0m-e_2\otimes x_3m$. Form a new basis for $E\otimes{\operatorname{Sym}}^{m-2}V$ by starting with the canonical basis, and replacing for each monomial $m\in {\operatorname{Sym}}^{m-3}V$ the basis element $e_1\otimes x_1m$ with $e_1\otimes x_1m-e_2\otimes x_2m$ and the basis element $e_3\otimes x_0m$ with $e_3\otimes x_0m-e_2\otimes x_3m$. Then since $W\cap {\rm im}(F\otimes {\operatorname{Sym}}^{m-3}V)=\{0\}$ and $W\cap (e_1\otimes{\operatorname{Sym}}^{m-2}V\oplus e_3\otimes {\operatorname{Sym}}^{m-2}V)=\{0\}$ this is indeed a basis for $E\otimes{\operatorname{Sym}}^{m-2}V$ and the change of basis matrix has determinant one. Note also that \[ (e_1\otimes x_1m)^*=e_3\otimes x_0m^*, (e_2\otimes x_2m)^*=e_2\otimes x_3m^* \] and \[ \operatorname{wt}(e_1\otimes x_1m)=\operatorname{wt}(e_2\otimes x_2m), \operatorname{wt}(e_3\otimes x_0m)=\operatorname{wt}(e_2\otimes x_3m), \] so under this change of basis, we send the $\sigma$-dual pair of weight vectors $(e_1\otimes x_1m, e_3\otimes x_0m^*)$ to the $\sigma$-dual pair of weight vectors $(\alpha(f_1\otimes m), \alpha(f_2\otimes m^*))$. We pass from the canonical basis of $E\otimes{\operatorname{Sym}}^{m-2}V$ to the basis of oriented pairs $(g, g^*)$ by applying a permutation. We apply the same permutation after substituting $\alpha(f_1\otimes m)$ for $e_1\otimes x_1m$ and $\alpha(f_2\otimes m)$ for $e_3\otimes x_0m$. We order the oriented pairs $(g,g^*)$ by placing those of the form $(\alpha(f_1\otimes m), \alpha(f_2\otimes m^*))$ for $(f_1\otimes m, f_2\otimes m^*)$ an oriented pair in $F\otimes {\operatorname{Sym}}^{m-3}V$ first, followed by the oriented pairs of the form $(e_1\otimes m, e_3\otimes m^*)$, $(e_3\otimes m, e_1\otimes m^*)$, or $(e_2\otimes m, e_2\otimes m^*)$. The first type gives a basis for $\alpha(F\otimes {\operatorname{Sym}}^{m-3}V)$ and the image in ${\operatorname{Sym}}^mV$ of the second type gives a basis of oriented pairs of monomials $(m, m^*)$ for $\beta(E\otimes{\operatorname{Sym}}^{m-2}V)\subset {\operatorname{Sym}}^mV$. We then complete the basis of ${\operatorname{Sym}}^mV$ by using the oriented pairs of monomials $(m, m^*)$ with $m$ not in $\beta(E\otimes{\operatorname{Sym}}^{m-2}V)$. Altogether, this gives us a basis $B_F=(b_1^F,\ldots, b_{d_F}^F)$ for $F\otimes{\operatorname{Sym}}^{m-3}V$, a basis $B_E=(\alpha(b_1^F),\ldots, \alpha(b_{d_F}^F), b_{1}^E,\ldots, b_{d_E}^E)$ for $E\otimes{\operatorname{Sym}}^{m-2}V$, a basis $B_V:=(\beta(b_1^E),\ldots, \beta(b_{d_E}^E), b_1^V,\ldots, b_{d_V}^V)$ for ${\operatorname{Sym}}^mV$, and the basis $B_{{\mathcal E}} :=(\pi(b_1^V),\ldots, \pi(b_{d_V}^V))$ for ${\mathcal E}_{m,n}(y_1)$. In particular, the basis elements $b_\ell^V$ are each an oriented pair of monomials $(M, M^*)$, and so $B_{{\mathcal E}} $ is a basis for ${\mathcal E}_{m,n}(y_1)$ of images of oriented pairs of monomials.
The bases $B_F$, $B_E$, $B_V$ and $B_{\mathcal E}$ define generators for $\det F\otimes{\operatorname{Sym}}^{m-3}V$, $\det E\otimes {\operatorname{Sym}}^{m-2}V$, $\det{\operatorname{Sym}}^mV$ and $\det{\mathcal E}_{m,n}(y_1)$, respectively, giving isomorphisms \[ \phi_1(y_1):\det F\otimes{\operatorname{Sym}}^{m-3}V\otimes (\det E\otimes {\operatorname{Sym}}^{m-2}V)^{-1}\otimes \det{\operatorname{Sym}}^mV\xrightarrow{\sim} k \] \[ \phi_2(y_1):\det{\mathcal E}_{m,n}(y_1)\xrightarrow{\sim} k \] and the composition $\phi_2(y_1)^{-1}\phi_1(y_1)$ is exactly the isomorphism $\delta(y_1)$ determined by the resolution \eqref{eq:resolution bundle}. By Proposition~\ref{prop:number of swaps}, the isomorphism $\phi_1(y_1)$ is the same as the one determined by the canonical bases, so the basis $B_{\mathcal E}$ is canonically oriented. This property is preserved by any reordering of oriented pairs, so we have proven the Proposition for $y_1$. The proof in case $j=2,5$, or $j=3,4,6$ and $m\equiv1\mod 4$ is essentially the same. In case $j=3,4,6$ and $m\equiv-1\mod4$, Proposition~\ref{prop:number of swaps} says that the isomorphism \[ \phi_1(y_j):\det F\otimes{\operatorname{Sym}}^{m-3}V\otimes (\det E\otimes {\operatorname{Sym}}^{m-2}V)^{-1}\otimes \det{\operatorname{Sym}}^mV\xrightarrow{\sim} k \] constructed as for $\phi_1(y_1)$ will differ from the one using the canonical bases by a factor of $-1$, so the basis we construct will be anti-canonically oriented. \end{proof} \begin{remark}\label{rem:OrientedBases} We consider the direct sum ${\mathcal E}_{m_1,\ldots, m_r;n}:=\oplus_{i=1}^r{\mathcal E}_{m_i,n}$, with all $m_i$ odd (see Remark~\ref{rem:Cong2}). The case of interest for us is when $\sum_i(3m_i+1)=4n$ and ${\mathcal E}_{m_1,\ldots, m_r;n}$ is relatively oriented, which by Proposition~\ref{prop:relOrientationObservations} implies that the number of $i$ such that $m_i\equiv-1\mod4$ is even. By Proposition~\ref{prop:SignsInOrientedBases}, this says that a basis of ${\mathcal E}_{m_1,\ldots, m_r;n}$ consisting of oriented pairs of monomials $(g,g^*)$ for each of the ${\mathcal E}_{m_i,n}$ is canonically oriented. \end{remark} \subsection{The equivariant Euler class} Define $\sigma_m(a_1,a_2)(y_j)\in \{\pm1\}$, $j=1,\ldots, 6$, by \[ \sigma_m(a_1,a_2)(y_j)=\begin{cases} \epsilon(a_1a_2)&m\equiv1\mod 4\\ -\epsilon(a_1)&m\equiv-1\mod 4,\ j=2,4,5\\ -\epsilon(a_2)&m\equiv-1\mod 4,\ j=1,3,6 \end{cases} \] We call $\sigma_m(a_1,a_2)(y_j)$ the {\em sign for $e^N({\mathcal E}_{m,n})$ at $y_j$}. As in \S\ref{sec:Det}, \eqref{eqn:BundleNotation}, we set ${\mathcal E}_{m_1,\ldots,m_r;n}:=\oplus_{i=1}^r{\mathcal E}_{m_i,n}$. Thus \[ e^N({\mathcal E}_{m_1,\ldots,m_r;n}(y_j))=\prod_{i=1}^re^N({\mathcal E}_{m_i,n}(y_j)) \] and we call \[ \sigma_{m_1,\ldots, m_r}(a_1, a_2)(y_j):=\prod_{i=1}^r\sigma_{m_i}(a_1,a_2)(y_j), \] the {\em sign for ${\mathcal E}_{m_1,\ldots,m_r;n}$ at $y_j$}. This terminology for $r=1$ is justified by the following proposition; the justification in general follows from the identity \[ e^N({\mathcal E}_{m_1,\ldots,m_r;n}(y_j))=\prod_{i=1}^re^N({\mathcal E}_{m_i,n}(y_j)).
\] \begin{proposition} \label{Prop:EulerClassBundle} The Euler classes $e^N({\mathcal E}_{m,n}(y_j))$, $j=1,\ldots,6$, are as follows: \scalebox{.95}{ \vbox{ \begin{align*} &e^N({\mathcal E}_{m,n}(y_1))=\sigma_m(a_1,a_2)(y_1) \cdot \left[\prod_{i=0}^{m-1}((m-i)a_1-ia_2)\right] \cdot m!!\cdot a_2^{\frac{m+1}{2}} \cdot e^{\frac{3m+1}{2}}\\ &e^N({\mathcal E}_{m,n}(y_2))=\sigma_m(a_1,a_2)(y_2) \cdot \left[ \prod_{i=1}^{m-1}((m-i)a_1-ia_2)\right] \cdot ma_2\cdot m!!\cdot a_1^{\frac{m+1}{2}}\cdot e^{\frac{3m+1}{2}}\\ &e^N({\mathcal E}_{m,n}(y_3))=\sigma_m(a_1,a_2)(y_3) \cdot \left[ \prod_{i=0}^{m-1}((m-i)a_1+ia_2)\right] \cdot m!!\cdot a_2^{\frac{m+1}{2}}\cdot e^{\frac{3m+1}{2}}\\ &e^N({\mathcal E}_{m,n}(y_4))=\sigma_m(a_1,a_2)(y_4) \cdot \left[\prod_{i=1}^{m-1}((m-i)a_1+ia_2)\right] \cdot ma_2\cdot m!!\cdot a_1^{\frac{m+1}{2}}\cdot e^{\frac{3m+1}{2}}\\ &e^N({\mathcal E}_{m,n}(y_5))=\sigma_m(a_1,a_2)(y_5) \cdot \left[\prod_{i=0}^{m-1}(a_1+(m-1-2i)a_2)\right] \cdot m!!\cdot a_2^{\frac{m+1}{2}}\cdot e^{\frac{3m+1}{2}}\\ &e^N({\mathcal E}_{m,n}(y_6))=\sigma_m(a_1,a_2)(y_6)\cdot\left[\prod_{i=0}^{\frac{m-3}{2}} ((m-1-2i)^2a_1^2-a_2^2)\right] \cdot a_2\cdot m!!\cdot a_1^{\frac{m+1}{2}}\cdot e^{\frac{3m+1}{2}} \end{align*} } } \end{proposition} Recall that $\dim H_n=4n$ and $\rnk({\mathcal E}_{m_1,\ldots,m_r;n})=\sum_{i=1}^r(3m_i+1)$. \begin{proposition}\label{prop:Signs} Assume $\rnk({\mathcal E}_{m_1,\ldots,m_r;n}) =\dim H_n$ and that all the $m_i$ are odd. Then the sign of $e^N({\mathcal E}_{m_1,\ldots,m_r;n})$ at $y_j$ is given by \[\sigma_{m_1,\ldots, m_r}(a_1, a_2)(y_j)=\begin{cases}\epsilon(a_1a_2)&\text{for $r$ odd,}\\ 1&\text{for $r$ even.}\end{cases}\] \end{proposition} \begin{proof} For fixed $a_1, a_2$, and $y_j$, $\sigma_m(a_1,a_2)(y_j)$ depends only on the residue of $m$ mod 4. By Proposition~\ref{prop:relOrientationObservations} the number of $m_i$ with $m_i\equiv-1\mod 4$ is even. Thus \[ \sigma_{m_1,\ldots, m_r}(a_1, a_2)(y_j)=\prod_{m_i\equiv 1\mod 4}\sigma_{m_i}(a_1,a_2)(y_j). \] If $r$ is even then the number of the $m_i$ with $m_i\equiv1\mod 4$ is even, so in this case $\sigma_{m_1,\ldots, m_r}(a_1, a_2)(y_j)=1$. If $r$ is odd, we have an odd number of $m_i$'s with $m_i\equiv 1\mod 4$ and for such $m_i$ we have $\sigma_{m_i}(a_1,a_2)(y_j)=\epsilon(a_1a_2)$. Thus if $r$ is odd, $\sigma_{m_1,\ldots, m_r}(a_1, a_2)(y_j)=\epsilon(a_1a_2)$. \end{proof} In fact, the sign $\sigma_{m_1,\ldots, m_r}(a_1, a_2)(y_j)$ is the same as the corresponding sign for the equivariant Euler class of $ {\mathcal T}_{H_n}(y_j)$ (see Proposition~\ref{prop:Signs} and Proposition~\ref{prop:EulerClassGrassmannian}). So all signs depending on the weights $a_1$ and $a_2$ cancel in the formula for $(\pi_{H_n})_*e({\mathcal E}_{m_1,\ldots,m_r;n})$ in \eqref{eq: Bott formula for twisted cubics}, as they should. \begin{proof}[Proof of Proposition \ref{Prop:EulerClassBundle}] For the fixed point $y_1:=(x_0x_2,x_0x_1,x_1x_3)$, the pairs of the basis of ${\mathcal E}_{m,n}(y_1)$ are of the form \begin{itemize} \item $(x_0^{m-i}x_3^i,x_1^{m-i}x_2^i)$, $i=0,\ldots,m-1$ \item or $(x_2^{m-i}x_3^i,x_2^ix_3^{m-i})$, $i=0,\ldots,\frac{m-1}{2}$.
\end{itemize} In the first case this is the representation $\rho^{(-1)^i}_{(m-i)a_1-ia_2}$ which has equivariant Euler class $$(-1)^i\epsilon((m-i)a_1-ia_2)\cdot ((m-i)a_1-ia_2)e.$$ In the second case this is the representation $\rho^{(-1)^i}_{(m-2i)a_2}$ which has Euler class $$(-1)^i\epsilon(m-2i)\epsilon(a_2)\cdot(m-2i)a_2e.$$ For the sign, if $m\equiv 1\mod 4$ we have $\epsilon(m-2i)=-1$ iff $i$ is odd. It follows that $(-1)^i\epsilon(m-2i)\epsilon(a_2)=\epsilon(a_2)$ and we get the following sign \begin{align*} \prod_{i=0}^{m-1}(-1)^i\epsilon&((m-i)a_1-ia_2)\cdot \prod_{i=0}^{\frac{m-1}{2}}\epsilon(a_2)\\ =&\epsilon(a_2)\cdot \epsilon(a_1)\cdot (-1)^{\frac{m-1}{2}}\cdot \prod_{i=1}^{m-1}\epsilon((m-i)a_1-ia_2)\\ =&\epsilon(a_1)\epsilon(a_2)\cdot \left(\epsilon\left(\prod_{i=1}^4((m-i)a_1-ia_2)\right)\right)^{\frac{m-1}{4}}\\ =&\epsilon(a_1)\epsilon(a_2)\cdot \left(\epsilon((4a_1-a_2)(3a_1-2a_2)(2a_1-3a_2)(a_1-4a_2))\right)^{\frac{m-1}{4}}\\ =&\epsilon(a_1)\epsilon(a_2)\cdot\epsilon(-a_1a_2(6a_1^2-13a_1a_2+6a_2^2))^{\frac{m-1}{4}}\\ =&\epsilon(a_1)\epsilon(a_2)\cdot \epsilon(-a_1a_2(6\cdot 1-a_1a_2+6\cdot 1))^{\frac{m-1}{4}}\\ =& \epsilon(a_1)\epsilon(a_2)\cdot \epsilon((-a_1a_2)(-a_1a_2))^{\frac{m-1}{4}}\\ =&\epsilon(a_1)\epsilon(a_2)\\ =& \sigma_m(a_1,a_2)(y_1). \end{align*} When $m\equiv-1\mod 4$, $\epsilon(m-2i)=-1$ iff $i$ is even. Hence, $(-1)^i\epsilon(m-2i)\epsilon(a_2)=-\epsilon(a_2)$. In total we get the sign \begin{align*} \prod_{i=0}^{m-1}(-1)^i\epsilon&((m-i)a_1-ia_2)\cdot \prod_{i=0}^{\frac{m-1}{2}}(-1)\epsilon(a_2)\\ =&\epsilon(a_1)\cdot (-1)^{\frac{m-1}{2}}\cdot\prod_{i=1}^{m-1}\epsilon((m-i)a_1-ia_2)\\ =&-\epsilon(a_1)\cdot\left(\epsilon\left(\prod_{i=1}^4((m-i)a_1-ia_2)\right)\right)^{\frac{m-3}{4}}\\&\cdot (-1)\epsilon((m-(m-2))a_1-(m-2)a_2)\epsilon((m-(m-1))a_1-(m-1)a_2)\\ =&\epsilon(a_1)\cdot \epsilon(2a_1-a_2)\epsilon(a_1-2a_2)\\ =&-\epsilon(2a_1-a_2)=-\epsilon(a_2)\\ =& \sigma_m(a_1,a_2)(y_1). \end{align*} For the second fixed point $y_2:=(x_0x_2,x_2x_3,x_1x_3)$, the pairs of the basis of ${\mathcal E}_{m,n}(y_2)$ are of the form \begin{itemize} \item $(x_0^{m-i}x_3^i,x_1^{m-i}x_2^i)$, $i=1,\ldots,m-1$ \item or $(x_2^m,x_3^m)$ \item or $(x_0^{m-i}x_1^i,x_0^ix_1^{m-i})$, $i=0,\ldots,\frac{m-1}{2}$. \end{itemize} In the first case this is the representation $\rho^{(-1)^i}_{(m-i)a_1-ia_2}$ which has Euler class $$(-1)^i\epsilon((m-i)a_1-ia_2)\cdot ((m-i)a_1-ia_2)e.$$ In the second case this is $\rho_{ma_2}$ which has Euler class $$\epsilon(a_2)\cdot ma_2e$$ and in the last case this is the representation $\rho^{(-1)^i}_{(m-2i)a_1}$ which has Euler class $$(-1)^i\epsilon(m-2i)\epsilon(a_1)\cdot(m-2i)a_1e.$$ For the sign calculation, when $m\equiv1\mod 4$ we get \begin{align*} \prod_{i=1}^{m-1}(-1)^i\epsilon&((m-i)a_1-ia_2) \cdot \epsilon(ma_2)\cdot \prod_{i=0}^{\frac{m-1}{2}}(-1)^i\epsilon((m-2i)a_1)\\ &= \epsilon(a_1)\epsilon(a_2)\cdot \prod_{i=1}^{m-1}(-1)^i\epsilon((m-i)a_1-ia_2)\\ &= \epsilon(a_1)\epsilon(a_2)\cdot \left(\prod_{i=1}^{4}(-1)^i\epsilon((m-i)a_1-ia_2)\right)^{\frac{m-1}{4}}\\ &= \sigma_m(a_1,a_2)(y_2).
\end{align*} When $m\equiv-1\mod 4$, the sign is given by \begin{align*} \prod_{i=1}^{m-1}(-1)^i\epsilon&((m-i)a_1-ia_2) \cdot \epsilon(ma_2)\cdot \prod_{i=0}^{\frac{m-1}{2}}(-1)^i\epsilon((m-2i)a_1)\\ =& -\epsilon(a_2)\cdot \prod_{i=1}^{m-1}(-1)^i\epsilon((m-i)a_1-ia_2)\\ =& \epsilon(a_2)\cdot \left(\prod_{i=1}^{4}(-1)^i\epsilon((m-i)a_1-ia_2)\right)^{\frac{m-3}{4}}\\&\cdot \epsilon((m-(m-2))a_1-(m-2)a_2)\epsilon((m-(m-1))a_1-(m-1)a_2)\\ =& \epsilon(a_2)\cdot \epsilon(2a_1-a_2)\epsilon(a_1+2a_2)\\ =& -\epsilon(a_1)\\ =& \sigma_m(a_1,a_2)(y_2) \end{align*} For the third fixed point $y_3:=(x_0x_3,x_0x_1,x_1x_2)$, the pairs of the basis of ${\mathcal E}_{m,n}(y_3)$ are of the form \begin{itemize} \item $(x_0^{m-i}x_2^i,x_1^{m-i}x_3^i)$, $i=0,\ldots,m-1$ \item or $(x_2^{m-i}x_3^i,x_2^ix_3^{m-i})$, $i=0,\ldots,\frac{m-1}{2}$. \end{itemize} In the first case this is the representation $\rho_{(m-i)a_1+ia_2}$ which has Euler class $$\epsilon((m-i)a_1+ia_2)\cdot ((m-i)a_1+ia_2)e.$$ In the second case this is the representation $\rho^{(-1)^i}_{(m-2i)a_2}$ which has Euler class $$(-1)^i\epsilon(m-2i)\epsilon(a_2)\cdot(m-2i)a_2e.$$ For the sign, if $m\equiv 1\mod 4$, then \begin{align*} \prod_{i=0}^{m-1}\epsilon&((m-i)a_1+ia_2)\cdot \prod_{i=0}^{\frac{m-1}{2}}\epsilon(a_2)\\ &=\epsilon(a_1)\epsilon(a_2)\cdot \prod_{i=1}^{m-1}\epsilon((m-i)a_1+ia_2)\\ &=\epsilon(a_1)\epsilon(a_2)\cdot\epsilon\left(\prod_{i=1}^4((m-i)a_1+ia_2)\right)^{\frac{m-1}{4}}\\ &=\epsilon(a_1)\epsilon(a_2)\cdot \epsilon(a_2(3a_1+2a_2)(2a_1+3a_2)a_1)^{\frac{m-1}{4}}\\ &= \epsilon(a_1)\epsilon(a_2)\cdot \epsilon(a_1a_2(12\cdot 1+13a_1a_2))^{\frac{m-1}{4}}\\ &= \epsilon(a_1)\epsilon(a_2)\\ &=\sigma_m(a_1,a_2)(y_3). \end{align*} If $m\equiv-1\mod4$, then \begin{align*} \prod_{i=0}^{m-1}\epsilon&((m-i)a_1+ia_2)\cdot\prod_{i=0}^{\frac{m-1}{2}}(-1)^i\epsilon((m-2i)a_2)\\ =&-\epsilon(a_1)\prod_{i=1}^{m-1}\epsilon((m-i)a_1+ia_2)\\ =&-\epsilon(a_1) \epsilon\left(\prod_{i=1}^4((m-i)a_1+ia_2)\right)^{\frac{m-3}{4}}\\&\cdot\epsilon((m-(m-2))a_1+(m-2)a_2)\epsilon((m-(m-1))a_1+(m-1)a_2)\\ =&-\epsilon(a_1)\epsilon(2a_1+a_2)\epsilon(a_1+2a_2)\\ =&-\epsilon(a_2)\\ =&\sigma_m(a_1,a_2)(y_3). \end{align*} For the fourth fixed point $y_4:=(x_0x_3,x_2x_3,x_1x_2)$, the pairs of the basis of ${\mathcal E}_{m,n}(y_4)$ are of the form \begin{itemize} \item $(x_0^{m-i}x_2^i,x_1^{m-i}x_3^i)$, $i=1,\ldots,m-1$ \item or $(x_2^m,x_3^m)$ \item or $(x_0^{m-i}x_1^i,x_0^ix_1^{m-i})$, $i=0,\ldots,\frac{m-1}{2}$. \end{itemize} In the first case this is the representation $\rho_{(m-i)a_1+ia_2}$ which has Euler class $$\epsilon((m-i)a_1+ia_2)\cdot ((m-i)a_1+ia_2)e.$$ The second case is $\rho_{ma_2}$ which has Euler class $\epsilon(ma_2)\cdot ma_2e$. In the last case this is the representation $\rho^{(-1)^i}_{(m-2i)a_1}$ which has Euler class $$(-1)^i\epsilon(m-2i)\epsilon(a_1)\cdot(m-2i)a_1e.$$ For the sign, when $m\equiv1\mod 4$ we get \begin{align*} \prod_{i=1}^{m-1}\epsilon&((m-i)a_1+ia_2)\cdot \epsilon(ma_2)\cdot \prod_{i=0}^{\frac{m-1}{2}}(-1)^i\epsilon((m-2i)a_1)\\ &= \epsilon(a_1)\epsilon(a_2)\cdot \prod_{i=1}^{m-1}\epsilon((m-i)a_1+ia_2)\\ &=\epsilon(a_1)\epsilon(a_2)\\ &=\sigma_m(a_1,a_2)(y_4).
\end{align*} When $m\equiv-1\mod 4$ we get \begin{align*} \prod_{i=1}^{m-1}\epsilon&((m-i)a_1+ia_2)\cdot \epsilon(ma_2)\cdot \prod_{i=0}^{\frac{m-1}{2}}(-1)^i\epsilon((m-2i)a_1)\\ =& -\epsilon(a_2)\cdot \prod_{i=1}^{m-1}\epsilon((m-i)a_1+ia_2)\\ =&-\epsilon(a_2)\cdot \left(\prod_{i=1}^4\epsilon((m-i)a_1+ia_2)\right)^{\frac{m-3}{4}}\\&\cdot \epsilon(2a_1+a_2)\epsilon(a_1+2a_2)\\ =& -\epsilon(a_1)\\ =&\sigma_m(a_1,a_2)(y_4). \end{align*} For the fifth fixed point $y_5:=(x_0^2,x_0x_1,x_1^2)$, the pairs are \begin{itemize} \item $(x_0x_2^{m-1-i}x_3^i,x_1x_2^ix_3^{m-1-i})$, $i=0,\ldots ,m-1$ \item $(x_2^{m-i}x_3^i,x_2^ix_3^{m-i})$, $i=0,\ldots,\frac{m-1}{2}$. \end{itemize} In the first case this is $\rho^{(-1)^i}_{a_1+(m-1-2i)a_2}$ which has Euler class $$(-1)^i\epsilon(a_1+(m-1-2i)a_2)\cdot (a_1+(m-1-2i)a_2)e.$$ In the second case this is $\rho^{(-1)^i}_{(m-2i)a_2}$ which has Euler class $$(-1)^i\epsilon((m-2i)a_2)\cdot (m-2i)a_2e.$$ For the sign, if $m\equiv1\mod4$, then \begin{align*} \epsilon(a_2)\cdot\prod_{i=0}^{m-1}(-1)^i&\epsilon(a_1+(m-1-2i)a_2)\\ =& \epsilon(a_2)\cdot (-1)^{\frac{m-1}{2}}\cdot \prod_{i=0}^{m-1}\epsilon(a_1+(m-1-2i)a_2)\\ =& \epsilon(a_2)\cdot \epsilon(a_1)\\&\cdot \epsilon((a_1+4a_2)(a_1+2a_2)(a_1-2a_2)(a_1-4a_2))^{\frac{m-1}{4}}\\ =&\epsilon(a_1)\epsilon(a_2)\cdot \epsilon(a_1^2(a_1^2-4a_2^2))^{\frac{m-1}{4}}\\ =&\epsilon(a_1)\epsilon(a_2)\\ =&\sigma_m(a_1,a_2)(y_5). \end{align*} If $m\equiv-1\mod4$, then \begin{align*} \prod_{i=0}^{m-1}(-1)^i\epsilon&(a_1+(m-1-2i)a_2)\cdot \prod_{i=0}^{\frac{m-1}{2}}(-1)^i\epsilon((m-2i)a_2)\\ =& (-1)^{\frac{m-1}{2}}\cdot \prod_{i=0}^{m-1}\epsilon(a_1+(m-1-2i)a_2)\\ =& -\epsilon(a_1+2a_2)^2\epsilon(a_1)\cdot \epsilon((a_1+4a_2)(a_1+2a_2)(a_1-2a_2)(a_1-4a_2))^{\frac{m-3}{4}}\\ =&-\epsilon(a_1)\\ =&\sigma_m(a_1,a_2)(y_5). \end{align*} For the sixth fixed point $y_6:=(x_2^2,x_2x_3,x_3^2)$, the pairs are of the form \begin{itemize} \item $(x_0^{m-1-i}x_1^ix_2,x_0^ix_1^{m-1-i}x_3)$, $i=0,\ldots,\frac{m-1}{2}$ \item $(x_0^{m-1-i}x_1^ix_3,x_0^ix_1^{m-1-i}x_2)$, $i=0,\ldots,\frac{m-3}{2}$ \item $(x_0^{m-i}x_1^i,x_0^ix_1^{m-i})$, $i=0,\ldots,\frac{m-1}{2}$. \end{itemize} The first case is $\rho^{(-1)^i}_{(m-1-2i)a_1+a_2}$ which has Euler class $$(-1)^i\epsilon((m-1-2i)a_1+a_2)\cdot ((m-1-2i)a_1+a_2)e.$$ The second case is $\rho^{(-1)^{i+1}}_{(m-1-2i)a_1-a_2}$ which has Euler class $$(-1)^{i+1}\epsilon((m-1-2i)a_1-a_2)\cdot ((m-1-2i)a_1-a_2)e$$ and the last one is $\rho_{(m-2i)a_1}^{(-1)^i}$ which has Euler class $$(-1)^i\epsilon((m-2i)a_1)\cdot (m-2i)a_1e.$$ To compute the sign, if $m\equiv1\mod4$, then \begin{align*} \epsilon(a_1)\prod_{i=0}^{\frac{m-1}{2}}(-1)^i&\epsilon((m-1-2i)a_1+a_2)\cdot \prod_{i=0}^{\frac{m-3}{2}}(-1)^{i+1}\epsilon((m-1-2i)a_1-a_2)\\ =& \epsilon(a_1)\epsilon(a_2)\cdot \prod_{i=0}^{m-1}(-1)^i\epsilon(a_2-(m-1-2i)a_1)\\ =& \epsilon(a_1)\epsilon(a_2)\cdot (-1)^{\frac{m-1}{2}}\cdot \prod_{i=0}^{m-1}\epsilon(a_2-(m-1-2i)a_1)\\ =&\epsilon(a_1)\epsilon(a_2)\\ =&\sigma_m(a_1,a_2)(y_6). \end{align*} The last step follows from the calculation for the fifth fixed point, one just needs to swap the roles of $a_1$ and $a_2$.
When $m\equiv-1\mod4$ the sign is \begin{align*} \prod_{i=0}^{\frac{m-1}{2}}(-1)^i&\epsilon((m-1-2i)a_1+a_2)\\&\hskip30pt\cdot \prod_{i=0}^{\frac{m-3}{2}}(-1)^{i+1}\epsilon((m-1-2i)a_1-a_2) \cdot \prod_{i=0}^{\frac{m-1}{2}}(-1)^i\epsilon((m-2i)a_1)\\ =&\prod_{i=0}^{\frac{m-1}{2}}(-1)^i\epsilon((m-1-2i)a_1+a_2)\cdot \prod_{i=0}^{\frac{m-3}{2}}(-1)^{i+1}\epsilon((m-1-2i)a_1-a_2)\\ =&\prod_{i=0}^{m-1}(-1)^i\epsilon(a_2-(m-1-2i)a_1)\\ =& -\epsilon(a_2)\\ =& \sigma_m(a_1,a_2)(y_6). \end{align*} Again, the last step follows from the calculation for the fifth fixed point, one just needs to swap the roles of $a_1$ and $a_2$. \end{proof} \subsection{Oriented bases for $ {\mathcal T}_{H_3}(y)$} For each of the six fixed points $y_1,\ldots,y_6$ we have chosen standard bases $(f_1,f_2)$, $(e_1,e_2, e_3)$ and $(x_0, x_1, x_2, x_3)$ for $F$, $E$ and $V$. These define bases of each term of the resolution \eqref{eq:resulution tangent space}, using our convention for the ordered basis of a tensor product. We get the basis \[(f_2^{\vee}f_1,f_1^{\vee}f_1,f_2^{\vee}f_2,f_1^{\vee}f_2)\] of ${\rm End}(F):=F^\vee\otimes F$, which we rearrange to \[ (f_2^{\vee}f_1,f_1^{\vee}f_2)\oplus (f_1^{\vee}f_1)\oplus(f_2^{\vee}f_2)\] using an even permutation. We get the basis \[(e_3^{\vee}e_1,e_2^{\vee}e_1,e_1^{\vee}e_1,e_3^{\vee}e_2,e_2^{\vee}e_2,e_1^{\vee}e_2,e_3^{\vee}e_3,e_2^{\vee}e_3,e_1^{\vee}e_3)\] of ${\rm End}(E):=E^\vee\otimes E$. The following is an even permutation of this basis \begin{align*} (e_3^{\vee}e_1,e_1^{\vee}e_3)\oplus (e_2^{\vee}e_1,e_2^{\vee}e_3)\oplus (e_3^{\vee}e_2,e_1^{\vee}e_2)\oplus(e_3^{\vee}e_3)\oplus (e_2^{\vee}e_2)\oplus (e_1^{\vee}e_1). \end{align*} Note that the direct summands in the bases for ${\rm End}(F)$ and ${\rm End}(E)$ either come in pairs which are dual with respect to the $\sigma$-action or have weight $0$. We also get a basis of ${\rm Hom}(F,E):=F^\vee\otimes E$, which we rearrange using an even permutation \[(f_2^{\vee}e_1,f_1^{\vee}e_1,f_2^{\vee}e_2,f_1^{\vee}e_2,f_2^{\vee}e_3,f_1^{\vee}e_3)\mapsto (f_2^{\vee}e_1,f_1^{\vee}e_3)\oplus (f_1^{\vee}e_1,f_2^{\vee}e_3)\oplus (f_2^{\vee}e_2,f_1^{\vee}e_2). \] The first basis element of the three pairs has weight given by the following table. \begin{equation}\label{eqn:SwapTable} \vbox{ \begin{center} \begin{tabular}{ |c|c|c|c|c| } \hline & $\operatorname{wt}(f_2^{\vee}e_1)$& $\operatorname{wt}(f_1^{\vee}e_1)$& $\operatorname{wt}(f_2^{\vee}e_2)$ & $\#$swaps\\ \hline $y_1$& $a_1+2a_2$ & $a_1$ & $a_2$&1\\ $y_2$& $2a_1+a_2$ & $a_2$ & $a_1$ &1\\ $y_3$& $a_1$ & $a_1-2a_2$ & $a_2$ &2\\ $y_4$& $2a_1-a_2$ & $-a_2$ & $a_1$&2\\ $y_5$& $3a_1$ & $a_1$ & $a_1$ &0\\ $y_6$& $3a_2$ & $a_2$ & $a_2$ &3\\ \hline \end{tabular} \end{center} } \end{equation} Let $g$ be one of $f_2^{\vee}e_1$, $f_1^{\vee}e_1$, $f_2^{\vee}e_2$. Then $(g,g^*)\otimes (x_0,x_1,x_2,x_3)$ has basis \[(gx_0,g^*x_0,gx_1,g^*x_1,gx_2,g^*x_2,gx_3,g^*x_3)\] which we rearrange to \[(gx_0,g^*x_1)\oplus (gx_1,g^*x_0)\oplus (gx_2,g^*x_3)\oplus (gx_3,g^*x_2).\] We aim to have pairs where the first basis element is the one with weight greater than or equal to zero, which is always the case for $(gx_0,g^*x_1)$ and $(gx_2,g^*x_3)$. In the pair $(x_1g,x_0g^*)$ the positive weight comes first iff \[\operatorname{wt}(x_1g)\ge 0\Leftrightarrow \operatorname{wt}(g)\ge a_1\] and in the pair $(x_3g,x_2g^*)$ the positive weight comes first iff \[\operatorname{wt}(x_3g)\ge 0\Leftrightarrow \operatorname{wt}(g)\ge a_2.\] The number of swaps we need to achieve this is listed in the table \eqref{eqn:SwapTable}.
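\begin{remark}
The swap counts in \eqref{eqn:SwapTable} can be recomputed mechanically from the three weights in each row. The sketch below assumes, as above, odd weights $a_1>a_2>0$ in sufficiently general position (the sample values are a hypothetical choice): a pair $(gx_1,g^*x_0)$ needs a swap exactly when $\operatorname{wt}(g)<a_1$, and a pair $(gx_3,g^*x_2)$ exactly when $\operatorname{wt}(g)<a_2$.
\begin{verbatim}
def swaps(weights, a1, a2):
    # weights: the weights of f2^v e1, f1^v e1, f2^v e2 at a fixed point;
    # each weight w contributes a swap for w < a1 (the x1-pair) and
    # another for w < a2 (the x3-pair)
    return sum((w < a1) + (w < a2) for w in weights)

a1, a2 = 7, 1  # sample odd weights, a1 sufficiently larger than a2
table = {
    "y1": (a1 + 2 * a2, a1, a2),
    "y2": (2 * a1 + a2, a2, a1),
    "y3": (a1, a1 - 2 * a2, a2),
    "y4": (2 * a1 - a2, -a2, a1),
    "y5": (3 * a1, a1, a1),
    "y6": (3 * a2, a2, a2),
}
for y, w in table.items():
    print(y, swaps(w, a1, a2))  # 1, 1, 2, 2, 0, 3 as in the table
\end{verbatim}
\end{remark}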
We call an element of ${\mathcal T}_{H_3}(y_j)$ that is the image of an element of ${\rm Hom}(F,E)\otimes V$ of the form $f^\vee_ie_jx_\ell$ a monomial in ${\mathcal T}_{H_3}(y_j)$. As for ${\mathcal E}_{m,n}$, we call a basis $b_1,\ldots, b_{12}$ of ${\mathcal T}_{H_3}(y_j)$ canonically oriented, resp. anti-canonically oriented if $b_1\wedge\ldots\wedge b_{12}$ maps to $1$, resp. $-1$ in $k$ under the isomorphism $\det{\mathcal T}_{H_3}(y_j)\xrightarrow{\sim} k$ induced by the resolution \eqref{eq:resulution tangent space} and our choice of canonical bases for ${\rm Hom}(E,E)$, ${\rm Hom}(F,F)$, ${\rm Hom}(F, E)\otimes V$ and $1\in k$. \begin{proposition} For each $j=1,\ldots, 6$, each basis of ${\mathcal T}_{H_3}(y_j)$ consisting of $\sigma$-dual pairs of monomial weight vectors is canonically oriented if the number of swaps in \eqref{eqn:SwapTable} is even, and is anti-canonically oriented if the number of swaps in \eqref{eqn:SwapTable} is odd. \end{proposition} \begin{proof} The issue is the same as for Proposition~\ref{prop:SignsInOrientedBases}, namely, the map ${\rm Hom}(E,E)\oplus{\rm Hom}(F,F)\to{\rm Hom}(F, E)\otimes V$ does not send standard basis vectors to standard basis vectors. Just as in the proof of Proposition~\ref{prop:SignsInOrientedBases}, one can choose monomials $f^\vee_ie_jx_\ell$ in ${\rm Hom}(F, E)\otimes V$, closed under $f^\vee_ie_jx_\ell\mapsto (f^\vee_ie_jx_\ell)^*$, that yield a basis in ${\mathcal T}_{H_3}(y_j)$. One then changes the monomial basis in ${\rm Hom}(F, E)\otimes V$ by a matrix of determinant one that is the identity on this chosen set of monomials, and that maps the complementary set of monomials to a basis of the image of ${\rm Hom}(E,E)\oplus{\rm Hom}(F,F)$ in ${\rm Hom}(F, E)\otimes V$ (the corresponding construction is easy to accomplish for the map $k\to {\rm Hom}(E,E)\oplus{\rm Hom}(F,F)$). The argument used in the proof of Proposition~\ref{prop:SignsInOrientedBases} then gives the result. \end{proof} Suppose we have odd integers $m_1,\ldots, m_r\ge 3$ and $n\ge4$ such that $\sum_i(3m_i+1)=4n$ and with ${\mathcal E}_{m_1,\ldots, m_r;n}$ relatively oriented. For the fixed points $y_3$, $y_4$, $y_5$, we have an even number of swaps, and so a basis of ${\mathcal T}_{H_3}(y_j)$ constructed in this way will be relatively oriented with respect to our similarly constructed basis of ${\mathcal E}_{m_1,\ldots, m_r;n}$ (see Remark~\ref{rem:OrientedBases}). For $y_1$, $y_2$ and $y_6$, we have an odd number of swaps, so our basis of ${\mathcal T}_{H_3}(y_j)$ will yield the ``opposite'' relative orientation with respect to our basis of ${\mathcal E}_{m_1,\ldots, m_r;n}$, and we will correct this by including an extra $(-1)$ factor in the Euler class computation for ${\mathcal T}_{H_3}(y_j)$. \subsection{Equivariant Euler class of $ {\mathcal T}_{H_3}(y_i)$ for $i=1,\ldots,6$} Recall from Theorem \ref{thm:EulerClasses} that for $a$ even \[e(\widetilde{\mathcal{O}}(a))=\begin{cases}\frac{a}{2}\widetilde{e}& \text{if }a\equiv2\mod 4\\ -\frac{a}{2}\widetilde{e}& \text{if }a \equiv 0 \mod 4\end{cases}\] We say that the sign of $e(\widetilde{\mathcal{O}}(a))$ is $+1$ for $a\equiv2\mod 4$ and $-1$ for $a\equiv0\mod4$. Since $a_1$ and $a_2$ are odd we have $e(\widetilde{\mathcal{O}}(2a_1))=a_1\widetilde{e}$ and $e(\widetilde{\mathcal{O}}(2a_2))=a_2\widetilde{e}$.
Furthermore one checks that \begin{align*}\text{sign}(e(\widetilde{\mathcal{O}}(a_1-a_2)))&=\text{sign}(e(\widetilde{\mathcal{O}}(a_1+3a_2)))=\text{sign}(e(\widetilde{\mathcal{O}}(3a_1+a_2)))\\ &=-\text{sign}(e(\widetilde{\mathcal{O}}(a_1+a_2)))=-\text{sign}(e(\widetilde{\mathcal{O}}(a_1-3a_2)))=-\text{sign}(e(\widetilde{\mathcal{O}}(3a_1-a_2))).\end{align*} In the following computations we use $\pm$ for the sign of the Euler classes of $\widetilde{\mathcal{O}}(a_1-a_2)$, $\widetilde{\mathcal{O}}(a_1+3a_2)$ and $\widetilde{\mathcal{O}}(3a_1+a_2)$ and $\mp$ for the sign of $\widetilde{\mathcal{O}}(a_1+a_2)$, $\widetilde{\mathcal{O}}(a_1-3a_2)$ and $\widetilde{\mathcal{O}}(3a_1-a_2)$ to indicate that the signs are opposite. \subsubsection{First fixed point $y_1=(x_0x_2,x_0x_1,x_1x_3)$} The six oriented monomial pairs in $ {\mathcal T}_{H_3}(y_1)$ yield the following $N$-representations \begin{itemize} \item $(f_1^{\vee}e_1x_0,f_2^{\vee}e_3x_1)=\rho^-_{2a_1}$ \item $(f_1^{\vee}e_2x_0,f_2^{\vee}e_2x_1)=\rho_{a_1-a_2}$ \item $(f_2^{\vee}e_1x_2,f_1^{\vee}e_3x_3)=\rho_{3a_2+a_1}$ \item $(f_2^{\vee}e_2x_2,f_1^{\vee}e_2x_3)=\rho^-_{2a_2}$ \item $(f_2^{\vee}e_1x_3,f_1^{\vee}e_3x_2)=\rho^-_{a_1+a_2}$ \item $(f_1^{\vee}e_1x_3,f_2^{\vee}e_3x_2)=\rho_{a_1-a_2}$ \end{itemize} and we get the following equivariant Euler class \begin{align*} \begin{split} e^N( {\mathcal T}_{H_3}(y_1))=& (-1)\cdot(-a_1\widetilde{e})\cdot(\pm \frac{a_1-a_2}{2}\widetilde{e})\cdot (\pm \frac{a_1+3a_2}{2}\widetilde{e})\\&\hskip50pt\cdot(-a_2 \widetilde{e})\cdot (\mp (-1) \frac{a_1+a_2}{2}\widetilde{e})\cdot (\pm \frac{a_1-a_2}{2}\widetilde{e})\\ =& -4a_1a_2(a_1-a_2)^2(a_1+3a_2)(a_1+a_2)e^6. \end{split} \end{align*} Here we use the identity $\widetilde{e}^2=4e^2$ (Theorem~\ref{thm:EulerClasses}) for the last equality. The first $(-1)$ in this computation is needed, because we used an odd permutation to get a basis of desired form. \subsubsection{Second fixed point $y_2=(x_0x_2,x_2x_3,x_1x_3)$} The six oriented monomial pairs in $ {\mathcal T}_{H_3}(y_2)$ yield the following $N$-representations \begin{itemize} \item $(f_2^{\vee}e_2x_3,f_1^{\vee}e_2x_2)=\rho_{a_1-a_2}$ \item $(f_1^{\vee}e_1x_2,f_2^{\vee}e_3x_3)=\rho^-_{2a_2}$ \item $(f_2^{\vee}e_2x_0,f_1^{\vee}e_2x_1)=\rho^-_{2a_1}$ \item $(f_2^{\vee}e_1x_0,f_1^{\vee}e_3x_1)=\rho_{3a_1+a_2}$ \item $(f_2^{\vee}e_1x_1,f_1^{\vee}e_3x_0)=\rho^-_{a_1+a_2}$ \item $(f_2^{\vee}e_3x_0,f_1^{\vee}e_1x_1)=\rho_{a_1-a_2}$ \end{itemize} and thus we get the following equivariant Euler class \begin{align*} \begin{split} e^N( {\mathcal T}_{H_3}(y_2))=&(-1)\cdot (\pm \frac{a_1-a_2}{2}\widetilde{e})\cdot (-a_2\widetilde{e})\\ &\hskip40pt\cdot (-a_1 \widetilde{e})\cdot (\pm \frac{3a_1+a_2}{2}\widetilde{e})\cdot (\mp(-1) \frac{a_1+a_2}{2}\widetilde{e})\cdot (\pm \frac{a_1-a_2}{2}\widetilde{e})\\ =& -4a_1a_2(a_1-a_2)^2(3a_1+a_2)(a_1+a_2)e^6 \end{split} \end{align*} Again we need an additional sign because we used an odd permutation to get the basis that splits up at the direct sum of summands listed above.
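\begin{remark}
The sign identities stated at the beginning of this subsection are arithmetic modulo $4$ and are quickly cross-checked numerically. The sketch below encodes the sign of $e(\widetilde{\mathcal{O}}(a))$ for even $a$ as $+1$ for $a\equiv2\mod4$ and $-1$ for $a\equiv0\mod4$, and verifies the two groups of signs over a range of odd $a_1$, $a_2$.
\begin{verbatim}
def sign(a):
    # sign of e(O~(a)) for a even: +1 if a = 2 mod 4, -1 if a = 0 mod 4
    assert a % 2 == 0
    return 1 if a % 4 == 2 else -1

for a1 in range(1, 40, 2):
    for a2 in range(1, 40, 2):
        s = sign(a1 - a2)
        # first group shares the sign s ...
        assert sign(a1 + 3 * a2) == s and sign(3 * a1 + a2) == s
        # ... and the second group carries the opposite sign
        assert (sign(a1 + a2) == -s and sign(a1 - 3 * a2) == -s
                and sign(3 * a1 - a2) == -s)
print("sign identities verified")
\end{verbatim}
\end{remark}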
\subsubsection{Third fixed point $y_3=(x_0x_3,x_0x_1,x_1x_2)$} The six oriented monomial pairs in $ {\mathcal T}_{H_3}(y_3)$ yield the following $N$-representations \begin{itemize} \item $(f_2^{\vee}e_2x_1,f_1^{\vee}e_2x_0)=\rho^-_{a_1+a_2}$ \item $(f_2^{\vee}e_3x_1,f_1^{\vee}e_1x_0)=\rho^-_{2a_1}$ \item $(f_2^{\vee}e_2x_2,f_1^{\vee}e_2x_3)=\rho^-_{2a_2}$ \item $(f_1^{\vee}e_3x_3,f_2^{\vee}e_1x_2)=\rho^-_{a_1-3a_2}$ \item $(f_1^{\vee}e_3x_2,f_2^{\vee}e_1x_3)=\rho_{a_1-a_2}$ \item $(f_2^{\vee}e_3x_2,f_1^{\vee}e_1x_3)=\rho^-_{a_1+a_2}$ \end{itemize} and we get the following equivariant Euler class \begin{align*} \begin{split} e^N( {\mathcal T}_{H_3}(y_3))=&(\pm \frac{a_1+a_2}{2}\widetilde{e})\cdot (-a_1\widetilde{e})\cdot (-a_2 \widetilde{e})\\&\hskip40pt\cdot (\pm \frac{a_1-3a_2}{2}\widetilde{e})\cdot (\pm \frac{a_1-a_2}{2}\widetilde{e})\cdot (\pm \frac{a_1+a_2}{2}\widetilde{e})\\ =&4a_1a_2(a_1+a_2)^2(a_1-a_2)(a_1-3a_2)e^6 \end{split} \end{align*} \subsubsection{Fourth fixed point $y_4=(x_0x_3,x_2x_3,x_1x_2)$} The six oriented monomial pairs in ${\mathcal T}_{H_3}(y_4)$ yield the following $N$-representations \begin{itemize} \item $(f_2^{\vee}e_2x_3,f_1^{\vee}e_2x_2)=\rho^-_{a_1+a_2}$ \item $(f_2^{\vee}e_3x_3,f_1^{\vee}e_1x_2)=\rho^-_{2a_2}$ \item $(f_2^{\vee}e_2x_0,f_1^{\vee}e_2x_1)=\rho^-_{2a_1}$ \item $(f_2^{\vee}e_1x_0,f_1^{\vee}e_3x_1)=\rho^-_{3a_1-a_2}$ \item $(f_2^{\vee}e_1x_1,f_1^{\vee}e_3x_0)=\rho_{a_1-a_2}$ \item $(f_2^{\vee}e_3x_0,f_1^{\vee}e_1x_1)=\rho^-_{a_1+a_2}$. \end{itemize} The equivariant Euler class of $ {\mathcal T}_{H_3}(y_4)$ equals \begin{align*} \begin{split} e^N( {\mathcal T}_{H_3}(y_4))=&(\pm\frac{a_1+a_2}{2}\widetilde{e})\cdot (-a_2\widetilde{e})\cdot (-a_1 \widetilde{e})\\&\hskip40pt\cdot (\pm \frac{3a_1-a_2}{2}\widetilde{e})\cdot (\pm \frac{a_1-a_2}{2}\widetilde{e})\cdot (\pm \frac{a_1+a_2}{2}\widetilde{e})\\ =&4a_1a_2(a_1+a_2)^2(a_1-a_2)(3a_1-a_2)e^6 \end{split} \end{align*} \subsubsection{Fifth fixed point $y_5=(x_0^2,x_0x_1,x_1^2)$} The six oriented monomial pairs in ${\mathcal T}_{H_3}(y_5)$ yield the following $N$-representations \begin{itemize} \item $(f_2^{\vee}e_2x_3,f_1^{\vee}e_2x_2)=\rho_{a_1-a_2}$ \item $(f_1^{\vee}e_1x_2,f_2^{\vee}e_3x_3)=\rho^-_{a_1+a_2}$ \item $(f_2^{\vee}e_2x_2,f_1^{\vee}e_2x_3)=\rho^-_{a_1+a_2}$ \item $(f_2^{\vee}e_1x_2,f_1^{\vee}e_3x_3)=\rho_{3a_1+a_2}$ \item $(f_2^{\vee}e_1x_3,f_1^{\vee}e_3x_2)=\rho^-_{3a_1-a_2}$ \item $(f_1^{\vee}e_1x_3,f_2^{\vee}e_3x_2)=\rho_{a_1-a_2}$. \end{itemize} So the equivariant Euler class of ${\mathcal T}_{H_3}(y_5)$ equals \begin{align*} \begin{split} e^N( {\mathcal T}_{H_3}(y_5))=&(\pm\frac{a_1-a_2}{2}\widetilde{e})\cdot (\pm\frac{a_1+a_2}{2}\widetilde{e})\cdot (\pm\frac{a_1+a_2}{2} \widetilde{e})\\&\hskip50pt\cdot (\pm \frac{3a_1+a_2}{2}\widetilde{e})\cdot (\pm \frac{3a_1-a_2}{2}\widetilde{e})\cdot (\pm \frac{a_1-a_2}{2}\widetilde{e})\\ =&(a_1+a_2)^2(a_1-a_2)^2(3a_1+a_2)(3a_1-a_2)e^6 \end{split} \end{align*} \subsubsection{Sixth fixed point $y_6=(x_2^2,x_2x_3,x_3^2)$} The six oriented monomial pairs in ${\mathcal T}_{H_3}(y_6)$ yield the following $N$-representations \begin{itemize} \item $(f_1^{\vee}e_2x_0,f_2^{\vee}e_2x_1)=\rho_{a_1-a_2}$ \item $(f_1^{\vee}e_1x_0,f_2^{\vee}e_3x_1)=\rho^-_{a_1+a_2}$ \item $(f_2^{\vee}e_2x_0,f_1^{\vee}e_2x_1)=\rho^-_{a_1+a_2}$ \item $(f_2^{\vee}e_1x_0,f_1^{\vee}e_3x_1)=\rho_{3a_2+a_1}$ \item $(f_1^{\vee}e_3x_0,f_2^{\vee}e_1x_1)=\rho^-_{a_1-3a_2}$ \item $(f_2^{\vee}e_3x_0,f_1^{\vee}e_1x_1)=\rho_{a_1-a_2}$. 
\end{itemize} So the equivariant Euler class of $ {\mathcal T}_{H_3}(y_6)$ is equal to \begin{align*} \begin{split} e^N( {\mathcal T}_{H_3}(y_6))=& (-1)\cdot(\pm\frac{a_1-a_2}{2}\widetilde{e})\cdot (\pm\frac{a_1+a_2}{2}\widetilde{e})\cdot (\pm\frac{a_1+a_2}{2} \widetilde{e})\\&\hskip50pt\cdot (\pm \frac{3a_2+a_1}{2}\widetilde{e})\cdot (\pm \frac{a_1-3a_2}{2}\widetilde{e})\cdot (\pm \frac{a_1-a_2}{2}\widetilde{e})\\ =&-(a_1+a_2)^2(a_1-a_2)^2(3a_2+a_1)(a_1-3a_2)e^6 \end{split} \end{align*} Note that again we need an additional $(-1)$. \subsection{Equivariant Euler class of ${\mathcal T}_{\Gr(4,n+1)}(y_j)$} To find $e^N({\mathcal T}_{H_n}(y_j))$ for $j=1,\ldots,6$ it remains to compute $e^N({\mathcal T}_{\Gr(4,n+1)}(y_j))$, which is the same for all 6 fixed points. \begin{proposition}\label{prop:EulerClassGrassmannian} Let $s=\floor{\frac{n+1}{2}}$. If $n$ is odd, then \[e^N({\mathcal T}_{\Gr(4,n+1)}(y_j))= \prod_{i=3}^s (a_1^2-a_i^2) \cdot \prod_{i=3}^s(a_2^2-a_i^2)\cdot e^{2s-6}. \] If $n$ is even, then \[ e^N({\mathcal T}_{\Gr(4,n+1)}(y_j))= \epsilon(a_1a_2)\cdot a_1a_2\prod_{i=3}^s (a_i^2-a_1^2) \cdot \prod_{i=3}^s(a_i^2-a_2^2)\cdot e^{2s-6}.\] \end{proposition} \begin{proof} Since ${\mathcal T}_{\Gr(4,n+1)}\cong E_4^{\vee}\otimes \mathcal{Q}$ where $E_4$ and $\mathcal{Q}$ are the tautological bundle and the quotient bundle respectively, ${\mathcal T}_{\Gr(4,n+1)}(y_j)=(\rho_{a_1}\oplus \rho_{a_2})\otimes (\rho_{a_3}\oplus\ldots \oplus\rho_{a_s})$ if $n$ is odd and ${\mathcal T}_{\Gr(4,n+1)}(y_j)=(\rho_{a_1}\oplus \rho_{a_2})\otimes (\rho_{a_3}\oplus\ldots \oplus\rho_{a_s}\oplus k)$ if $n$ is even. So we need to compute the equivariant Euler class of $\rho_{a_i}\otimes \rho_{a_j}$ for $i\neq j$. Let $u_1,u_2$ be an oriented basis of the $N$-representation $\rho_{a_i}$ and $v_1,v_2$ an oriented basis of $\rho_{a_j}$. Then we get the following basis for $\rho_{a_i}\otimes \rho_{a_j}$ \[u_1\otimes v_1,u_2\otimes v_1,u_1\otimes v_2,u_2\otimes v_2\] which we rearrange (base change of determinant $1$) to \[u_1\otimes v_1,u_2\otimes v_2,u_2\otimes v_1,u_1\otimes v_2.\] This splits up as the direct sum of two irreducible $N$-representations, namely one with basis $u_1\otimes v_1,u_2\otimes v_2$ and one with basis $u_2\otimes v_1,u_1\otimes v_2$. The first one is $\rho_{a_i+a_j}$ and the basis is oriented. If $a_i>a_j$ the second one is $\rho_{a_i-a_j}^-$ and the basis above is oriented; if $a_j>a_i$ it is $\rho_{a_j-a_i}^-$ and the basis is not oriented. In both cases we get \begin{align*} e^N(\rho_{a_i}\otimes\rho_{a_j})=&(\pm \frac{a_i+a_j}{2} \widetilde{e})\cdot(\mp(-1) \frac{a_i-a_j}{2} \widetilde{e})\\ =& (a_i^2-a_j^2)\cdot e^2. \end{align*} When $n$ is even, we get the additional summand $\rho_{a_1}\oplus \rho_{a_2}$ which gives the additional factor $\epsilon(a_1)a_1e\cdot \epsilon(a_2)a_2e=\epsilon(a_1a_2)a_1a_2e^2$. \end{proof}
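\begin{remark}
The final computation of the proof can also be carried out numerically. The sketch below encodes $e(\widetilde{\mathcal{O}}(a))=\operatorname{sign}(a)\frac{a}{2}\widetilde{e}$ with the mod-$4$ sign above, inserts the extra $(-1)$ for the non-oriented piece by hand, and uses $\widetilde{e}^2=4e^2$; packaging the orientation bookkeeping of the two cases into a single $(-1)$ is our reading of the proof and should be treated as an assumption of the sketch.
\begin{verbatim}
def sign(a):
    # sign of e(O~(a)) for a even, read off mod 4
    return 1 if a % 4 == 2 else -1

def euler_tensor(ai, aj):
    # e^N(rho_ai (x) rho_aj) in units of e^2, via the two irreducible pieces
    first = sign(ai + aj) * (ai + aj) // 2    # rho_{ai+aj}, oriented basis
    second = -sign(ai - aj) * (ai - aj) // 2  # rho^-; (-1) = orientation factor
    return first * second * 4                 # e~^2 = 4 e^2

for ai, aj in [(5, 3), (7, 1), (3, 9), (11, 5)]:
    assert euler_tensor(ai, aj) == ai**2 - aj**2
print("e^N(rho_ai (x) rho_aj) = (ai^2 - aj^2) e^2 on all samples")
\end{verbatim}
\end{remark}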
\section{Introduction} While the physics of dark energy is obscure, existence of a feature in data consistent with the cosmological constant $\Lambda$ across supernovae \cite{Riess:1998cb, Perlmutter:1998np}, cosmic microwave background (CMB) \cite{Aghanim:2018eyx} and baryon acoustic oscillations (BAO) \cite{Eisenstein:2005su} is compelling. The glaring inconsistency of $\Lambda$, corresponding to the dark energy equation of state (EOS) ($w=-1$), with quantum theory motivates alternative dark energy models. Starting with Quintessence \cite{Copeland:2006wr, Tsujikawa:2013fta},\footnote{See \cite{Vagnozzi:2018jhn, Banerjee:2020xcn} for a discussion on how Quintessence exacerbates Hubble tension \cite{Verde:2019ivm}.} there is now a zoo of alternative dark energy models (see \cite{Clifton:2011jh} for a review) within Effective Field Theory. Pertinently, these field theories allow for evolution in the dark energy EOS $w(z)$. This motivates a host of dynamical dark energy (DDE) parametrisations \cite{Cooray:1999da, Astier:2000as, Efstathiou:1999tm, Chevallier:2000qy, Linder:2002et, Jassal:2005qc,Barboza:2008rh} in a bid to diagnose deviations from $\Lambda$ in observational data. Confronted with our ignorance of $w(z)$, one can Taylor expand $w(z)$ in redshift $z$ about its value today, $w(z) = w_0 + w_a z + O(z^2)$ \cite{Cooray:1999da, Astier:2000as}. This exercise is valid as the prevailing consensus is that dark energy is a late time, or low redshift phenomenon. Expansion in $z$ at low redshift $(z < 1)$ satisfies an obvious requirement that the expansion parameter is small \cite{Cattoen:2007sk, CH}, but this ``model'',\footnote{It is more accurately a diagnostic of DDE.} like the Efstathiou model \cite{Efstathiou:1999tm}, is not valid at high redshift, since $w(z)$ is not bounded. This problem is solved through the celebrated Chevallier-Polarski-Linder (CPL) model \cite{Chevallier:2000qy, Linder:2002et}, which employs the other natural small number $(1-a)$, where $a=(1+z)^{-1}$ is the scale factor. The CPL model and various alternatives \cite{Jassal:2005qc,Barboza:2008rh} are valid at high redshift and allow one to bring CMB into the DDE conversation. See \cite{Yang:2021flj, Zheng:2021oeq} for recent studies of these models. Alternatively, at low redshift one can employ data reconstruction techniques to extract $w(z)$ \cite{Holsclaw:2010nb, Holsclaw:2010sk, Shafieloo:2012ht, Seikel:2012uu, Crittenden:2005wj, Crittenden:2011aa, Zhao:2017cud, Wang:2018fng}. These techniques make assumptions on the correlations, either between reconstructed data points \cite{Holsclaw:2010nb, Holsclaw:2010sk, Shafieloo:2012ht, Seikel:2012uu} or reconstructed functions, e.g. $w(z)$ \cite{Crittenden:2005wj, Crittenden:2011aa}. They have an advantage over traditional models, since the reconstruction is \textit{local}. This means that the reconstruction is more sensitive to nearby data and data farther away in redshift carries less weight.\footnote{To see this, one may plot the presumed correlation function \eqref{correlation} for different values of the width parameter $a_c$. See also \cite{Colgain:2021ngq} for a recent discussion on how assumptions on correlations can suppress errors in cosmological parameters.} Ultimately, these two complementary approaches have to converge if DDE is physical. Our work here highlights some issues with both approaches, begging the question: even if dark energy is dynamical, are we using the correct tools (diagnostics) to find it?
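To make the locality point of the preceding footnote concrete, a short numerical sketch of the assumed correlation function (introduced as \eqref{correlation} below) suffices; the values of $a_c$ here are purely illustrative.
\begin{verbatim}
import numpy as np

def xi(da, ac, xi0=1.0):
    # the assumed correlation xi(da) = xi_w(0) / (1 + (da/ac)^2)
    return xi0 / (1.0 + (da / ac) ** 2)

das = np.array([0.0, 0.05, 0.1, 0.3])  # separations in scale factor
for ac in (0.02, 0.06, 0.3):
    print(f"a_c = {ac}:", np.round(xi(das, ac), 3))
# small a_c: correlations die off quickly, so the reconstruction is local;
# large a_c: correlations extend over the whole range of scale factors
\end{verbatim}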
We first make a simple observation for the traditional parametric approach \cite{Cooray:1999da, Astier:2000as, Efstathiou:1999tm, Chevallier:2000qy, Linder:2002et, Jassal:2005qc,Barboza:2008rh}. Namely, if one considers $w(z) = w_0 + w_a f(z)$ with $f(0)=0, f'(0)=1$, then data actually constrains the product $w_a f(z)$. This can be easily seen by Taylor expanding the Hubble diagram $H(z)$ order by order in $z$ about $z=0$ and noting that the combination $w_a f^{(n)}(z=0)$ always appears together: $w_a$ cannot be separated from $f(z)$ and its derivatives. This means that if the data is of fairly consistent uniform quality and $f(z)$ grows slowly with redshift, the errors on $w_a$ will be large. Now, recall that CPL \cite{Chevallier:2000qy, Linder:2002et} is an expansion in $(1-a)$, which is an undisputed small parameter, and one shall see that it is less likely to diagnose DDE. Our observation here, which we quantify through mock realisations, echoes findings in cosmographic expansions \cite{Busti:2015xqa}. Moreover, our observation is also in line with recent studies, e.g. FIG. 7 of \cite{Zheng:2021oeq}, where it is clear that the scale of the $w_a$ axis changes with the DDE model. Once seen, this trend may be difficult to unsee. In contrast to CPL, the less well-known Barboza-Alcaniz (BA) model \cite{Barboza:2008rh} is more likely to diagnose DDE at low redshift, while the Jassal-Bagla-Padmanabhan (JBP) model \cite{Jassal:2005qc} may make $\Lambda$ a safe bet. In short, the well-known parametrisations are biased, making it imperative to employ a wide range of parametric DDE models in studies, e.g. \cite{Yang:2021flj, Zheng:2021oeq}. Next we turn our attention to data reconstruction and in particular claims of wiggles in $w(z)$ \cite{Zhao:2017cud}, or its integrated density $\rho_{\textrm{de}} (z)$, \begin{equation} \label{density} X(z):=\frac{\rho_{\textrm{de}} (z)}{\rho_{\textrm{de}, 0}}= \exp \left( 3 \int_0^{z} \frac{1 + w(z')}{1+z'} \textrm{d} z' \right). \end{equation} In \cite{Wang:2018fng}, however, $X(z)$ rather than $w(z)$ was adopted as the independent variable. Starting from $w(z)$, it is clear from (\ref{density}) that $X(z)$ cannot change sign, while for some potentially relevant DE sectors, e.g. non-minimally coupled scalar field models, one may like to allow $X(z)$ to also change sign. That being said, it is clear from the results of \cite{Wang:2018fng} (also \cite{Bonilla:2020wbn}) that data has a preference for $X \sim 1$, so this distinction is a little moot. From (\ref{density}), it is evident that wiggles in $w(z)$ around $w = -1$ translate into wiggles in $X(z)$ around $X=1$. One important input in the analysis of \cite{Zhao:2017cud, Wang:2018fng} is a constraint on correlations in the dark energy sector \cite{Crittenden:2011aa}. Importantly, the correlations are defined by two parameters, an overall normalisation, and a parameter defining the scale beyond which correlations are suppressed. As is evident from \cite{Wang:2018fng} (appendix B), the existence (or not) of wiggles depends on the scale. A fair summary of the analysis of \cite{Wang:2018fng} may be that within the assumed correlations, there exists a parameter space where the reconstructed wiggles in $X(z)$ are favoured by Bayesian evidence over flat $\Lambda$CDM ($X=1$). A pertinent question is then whether the correlations can be realised in a well-motivated theory, e.g. a field theory?
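The $w_a f^{(n)}(0)$ degeneracy can be made explicit with a short symbolic expansion. The sketch below assumes a flat universe with matter plus dark energy only and a generic $f(z)=z+f_2 z^2/2$ (so that $f(0)=0$, $f'(0)=1$, $f''(0)=f_2$); it shows that $w_a$ enters the low-order coefficients of $H^2(z)/H_0^2$ only through the products $w_a f^{(n)}(0)$.
\begin{verbatim}
import sympy as sp

z, zp, w0, wa, Om, f2 = sp.symbols('z zp w0 wa Omega_m f2')
f = zp + f2 * zp**2 / 2          # generic f to second order
w = w0 + wa * f

# ln X from Eq. (1), then H^2/H0^2 for a flat universe without radiation
lnX = 3 * sp.integrate((1 + w) / (1 + zp), (zp, 0, z))
E2 = Om * (1 + z)**3 + (1 - Om) * sp.exp(lnX)

series = sp.expand(sp.series(E2, z, 0, 3).removeO())
print(sp.collect(series, z))
# wa appears in the z^2 coefficient only via wa*f'(0) and wa*f''(0) = wa*f2,
# i.e. data constrains the products wa f^(n)(0), not wa and f separately
\end{verbatim}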
Before touching upon that question, we analyse wiggles for the ``default'' parameters \cite{Wang:2018fng} to ascertain if the data has an affinity for them. It should be noted that this is not quite the same range of parameters where the wiggles are favoured over flat $\Lambda$CDM by Bayesian evidence (see details in \cite{Wang:2018fng}), nevertheless, wiggles exist. By Fourier decomposing the wiggles in a given redshift range, we isolate the most relevant modes and fit them back to the original data. We find that any preference the data has for the wiggles is weak ($\lesssim 2 \sigma$), but appears to be robust. In other words, the data has a (slight) preference for wiggles. Next, by working within a field theory framework that is closely related to Quintessence, but allows excursions into the phantom regime, $w(z) < -1$, we spell out the implications of the assumptions made in \cite{Zhao:2017cud, Wang:2018fng} for a run-of-the mill field theory model. We find that the restrictions are strong at the level of field theory, which means that as data improves, these discrepancies should become transparent. The analysis, while far from conclusive, serves as an appetiser to the key question: can wiggles in $w(z)$ have a field theory backend? \section{Review of DDE} In this work we consider the traditional DDE \textit{parametrisations} from Table \ref{DDE_models} along with the \textit{reconstructed} $X(z)$ from Wang et al. \cite{Wang:2018fng}. As explained, it is easy to translate between $w(z)$ and $X(z)$ through equation (\ref{density}) provided $X(z) > 0$. Furthermore, this equation is robust within the FLRW framework and can only break down in the asymptotic future ($z = -1$).\footnote{One can find dark energy parametrisations that avoid divergences \cite{Akarsu:2015yea}.} \begin{table}[t] \centering \begin{tabular}{c|c|c} \rule{0pt}{3ex} Model & $w(z)$ & $ X(z) $ \\ \hline \rule{0pt}{3ex} ``Redshift" \cite{Cooray:1999da, Astier:2000as} & $w_0 + w_a z $ & $ (1+z)^{3(1+w_0-w_a) } e^{3 w_a z}$ \\ \rule{0pt}{3ex} CPL \cite{Chevallier:2000qy, Linder:2002et} & $w_0 + w_a \frac{z}{1+z}$ & $(1+z)^{3(1+w_0+w_a) } e^{-\frac{3 w_a z}{1+z}}$ \\ \rule{0pt}{3ex} Efstathiou \cite{Efstathiou:1999tm} & $w_0 + w_a \ln (1+z)$ & $ (1+z)^{3(1+w_0) } e^{\frac{3}{2} w_a [\ln (1+z)]^2}$ \\ \rule{0pt}{3ex} JBP \cite{Jassal:2005qc} & $w_0 + w_a \frac{z}{(1+z)^2}$ & $ (1+z)^{3(1+w_0) } e^{\frac{3 w_a}{2} \frac{z^2}{(1+z)^2}}$ \\ \rule{0pt}{3ex} BA \cite{Barboza:2008rh} & $w_0 + w_a \frac{ z (1+z)}{1+z^2}$ & $ (1+z)^{3(1+w_0) } (1+z^2)^{{3 w_a}/{2}}$ \end{tabular} \caption{DDE parametrisations/models} \label{DDE_models} \end{table} To begin, let us note that neglecting the JBP \cite{Jassal:2005qc} and BA models \cite{Barboza:2008rh}, where $w(z)$ is effectively a constant beyond $z \sim 1$ (see FIG. \ref{wa}), there is a tendency in parametric DDE models for $w(z)$ to either increase or decrease monotonically with $z$. This creates an apparent clash between the traditional DDE models and the findings of \cite{Wang:2018fng, Zhao:2017cud}. In short, if the oscillatory features in $w(z)$ or $X(z)$ reported in \cite{Wang:2018fng, Zhao:2017cud} are real, then it should be intuitively obvious that the traditional parametric DDE models will fail to detect the features, as explicitly stated elsewhere \cite{Sahni:2006pa}. We put this statement beyond doubt later.
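The $X(z)$ column of Table \ref{DDE_models} follows from equation (\ref{density}) by elementary integration. As a cross-check, a short numerical sketch (the parameter values are illustrative) confirms the closed forms.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

w0, wa = -0.9, 0.3  # illustrative values

ws = {
    "redshift":   lambda z: w0 + wa * z,
    "CPL":        lambda z: w0 + wa * z / (1 + z),
    "Efstathiou": lambda z: w0 + wa * np.log(1 + z),
    "JBP":        lambda z: w0 + wa * z / (1 + z)**2,
    "BA":         lambda z: w0 + wa * z * (1 + z) / (1 + z**2),
}
Xs = {
    "redshift":   lambda z: (1+z)**(3*(1+w0-wa)) * np.exp(3*wa*z),
    "CPL":        lambda z: (1+z)**(3*(1+w0+wa)) * np.exp(-3*wa*z/(1+z)),
    "Efstathiou": lambda z: (1+z)**(3*(1+w0)) * np.exp(1.5*wa*np.log(1+z)**2),
    "JBP":        lambda z: (1+z)**(3*(1+w0)) * np.exp(1.5*wa*z**2/(1+z)**2),
    "BA":         lambda z: (1+z)**(3*(1+w0)) * (1+z**2)**(1.5*wa),
}
z = 2.0
for name in ws:
    integral, _ = quad(lambda zp: (1 + ws[name](zp)) / (1 + zp), 0, z)
    assert np.isclose(np.exp(3 * integral), Xs[name](z))
print("Table 1 closed forms reproduce Eq. (1)")
\end{verbatim}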
The claims of \cite{Zhao:2017cud, Wang:2018fng} supporting a $\sim 3.7 \sigma$ preference for DDE over $\Lambda$ are intriguing.\footnote{Despite the lower $\chi^2$, as explained in \cite{Wang:2018fng}, the Bayesian evidence still favours flat $\Lambda$CDM for some specific values of parameters.} In contrast to traditional models, which build up sensitivity to the $w_a$ parameter with redshift, data reconstruction based on assumed correlations in $w(z)$ or $X(z)$ allows greater local sensitivity and in principle permits deviations from $w=-1$ ($X=1$) to be identified close to $z=0$. Once again, this can be seen from Taylor expansion by noting that $w_0:=w(z=0)$ and $\Omega_{m0}$ can only be distinguished at $O(z^2)$. Note also that the low redshift regime is where dark energy is expected to dominate. Remarkably, the reconstructed $X(z)$ from \cite{Wang:2018fng} has a number of wiggles in $X(z)$, some of which cannot be immediately correlated with data discrepant with Planck-$\Lambda$CDM. More precisely, there are data points that are widely recognised as being discrepant with Planck-$\Lambda$CDM \cite{Aghanim:2018eyx}, notably a high local $H_0$ \cite{Riess:2019cxk} or Lyman-alpha BAO \cite{duMasdesBourboux:2020pck}, which may buy one a wiggle or two, but additional wiggles may be an artifact of the assumptions. The key assumption in the line of research \cite{Crittenden:2011aa, Zhao:2017cud} is that one can work with the correlations, \begin{equation} \label{correlation} \xi (\delta a ) := \langle [ w(a) - w^{\textrm{fid}}(a)] [ w(a') - w^{\textrm{fid}}(a')] \rangle = \frac{\xi_{w}(0)}{1+ \left( \frac{\delta a}{a_c} \right)^2 }, \end{equation} where $\delta a = a - a'$. The term on the LHS is the formal definition, whereas the expression on the RHS is how it is implemented in \cite{Crittenden:2011aa, Zhao:2017cud}. One may impose similar correlations in $X(a)$ (instead of $w(a)$) \cite{Wang:2018fng}. In this work, we switch between correlations in $w(a)$ \cite{Zhao:2017cud} and correlations in $X(a)$ \cite{Wang:2018fng}. Here $\xi_w(0)$ denotes the normalisation factor, and as explained in \cite{Crittenden:2011aa}, $a_c$ represents a smoothing distance. Note that the denominator becomes large once $\delta a > a_c$, so correlations are suppressed beyond $a_c$. As further explained in \cite{Crittenden:2011aa}, the normalisation is related to the allowed variance, \begin{equation} \sigma_m^2 \approx \frac{\pi \xi_{w} (0) a_c}{a_{\textrm{max}}- a_{\textrm{min}}}, \end{equation} and in practice the numbers $\sigma_m$ and $a_c$ are put in by hand, while $\xi_{w} (0)$ is inferred. The canonical values chosen in \cite{Zhao:2017cud, Wang:2018fng} are $\sigma_m = 0.04 $ and $a_c = 0.06$. In addition, there is a prior on displacements of $X$ from $X=1$, $\Delta_X$, and the default value is $\Delta_{X} = 4$. This parameter is also dialed and the most pronounced departure from $\Lambda$CDM was reported to happen at $\Delta_X=0.09$ \cite{Wang:2018fng}. \begin{figure}[htb] \centering \includegraphics[width=80mm]{X.png} \\ \caption{Reconstructed $X(z)$ reproduced from \cite{Wang:2018fng} with parameters $\sigma_m = 0.04, a_c = 0.06, \Delta_{X} = 4$.} \label{X_wiggle} \end{figure} The parameters $(\sigma_m, a_c, \Delta_{X})$ constitute transparent assumptions, and clearly, as they are dialed, one gets different results (see appendix B of \cite{Wang:2018fng}).
In particular, in the limit $a_c \rightarrow 1$ or $\sigma_m \rightarrow 0$, correlations in $X(z)$ (alternatively $w(z)$) can spread further and the reconstructed function is consistent with flat $\Lambda$CDM, $X=1$ $(w=-1)$. In the later part of this work, we reanalyse the wiggles in \cite{Wang:2018fng} to ascertain if the data has a strong or weak preference for wiggles. This allows one to quantify the affinity of the data directly to wiggles without viewing them through the prism of correlations, which are objectively put in by hand. Once the correlations (\ref{correlation}) are specified, Wang et al. \cite{Wang:2018fng} consider the Hubble parameter \begin{equation} \label{XCDM} H(z) = H_0 \sqrt{ X(z) (1-\Omega_{m0} - \Omega_{r0}) + \Omega_{m0} (1+z)^3 + \Omega_{r0} (1+z)^4}, \end{equation} where $H_0$ is the Hubble constant, $\Omega_{m0}$ is the matter density and $\Omega_{r0}$ denotes the radiation density. We will largely work at low redshift where $\Omega_{r0}$ can be safely neglected. As \eqref{XCDM} shows, $X(z)$ is any contribution to the budget of the universe besides pressureless matter and radiation, which can include a DE sector plus its possible interactions with other sectors. The $X(z_i)$ parameter is reconstructed at 39 redshifts $z_i \in [ 0, 1000]$, subject to a correlation in $X(a)$ analogous to (\ref{correlation}) and the further requirement that $X(a=1) = 1$. As explained in \cite{Crittenden:2011aa}, the Hubble parameter and correlation are fitted in tandem to a combination of data comprising CMB distance information from Planck \cite{Aghanim:2018eyx}, supernovae \cite{Betoule:2014frx}, BAO \cite{Beutler:2011hx, Ross:2014qpa, Wang:2016wjr, Font-Ribera:2013wce, Delubac:2014aqe}, cosmic chronometers \cite{Moresco:2016mzx} and a local determination of $H_0$ \cite{Riess:2016jrr}. We have illustrated the resulting best-fit $X(z_i)$ in FIG. \ref{X_wiggle}, while the Hubble constant $H_0$ and matter density $\Omega_{m0}$ are \cite{Wang:2018fng}, \begin{equation} \label{H0om} H_0 = 70.3 \pm 0.99 \textrm{ km/s/Mpc}, \quad \Omega_{m0} = 0.288 \pm 0.008. \end{equation} It is interesting to compare the value of $\Omega_{m0} h^2$ corresponding to (\ref{H0om}), $\Omega_{m0} h^2 = 0.1423 \pm 0.008$, with the Planck value, $\Omega_{m0} h^2 = 0.1430 \pm 0.0011$ \cite{Aghanim:2018eyx}. We see that the higher value of $H_0$ and lower value of $\Omega_{m0}$, when combined, are consistent with Planck values. This may not be so surprising, as while observational data is sparse in the higher redshift bins, there is some input from CMB. The high $H_0$ value has been driven by a local $H_0$ prior, but recently the rationale for imposing a prior on $H_0$, versus a prior on the absolute magnitude of supernovae $M_{B}$, has been called into question \cite{Benevento:2020fev, Lemos:2018smw, Camarena:2021jlr, Efstathiou:2021ocp}. \section{Parametric DDE} Parametric models recently appeared in an assessment of DDE in light of Hubble tension by Yang et al. \cite{Yang:2021flj}. In particular, therein fits to a compilation of CMB, BAO and local $H_0$ data are performed and it is concluded that \textit{``the constraints on the cosmological parameters, both free and derived, are almost unaltered by the choice of the DE parametrization''}. This conclusion may come as no surprise. First, local determinations of $H_0$ are insensitive to the cosmological model, and its dark energy sector is no exception \cite{Dhawan:2020xmp}.
Secondly, CMB represents an early Universe (high redshift) observable and BAO is anchored in the early Universe. In contrast, it is commonly believed that dark energy only becomes relevant at late times. For these reasons it may be expected that CMB constraints are largely insensitive to the details of the dark energy model. In essence, the statements in \cite{Yang:2021flj} conform to expectations. That being said, when one recalls the origin of the ``redshift'' \cite{Cooray:1999da, Astier:2000as} and CPL \cite{Chevallier:2000qy, Linder:2002et} models as Taylor expansions, there is a clear distinction. It is an undeniable fact that $(1-a)=z/(1+z)$ is a smaller expansion parameter than $z$ and this has direct consequences.\footnote{{In \cite{Albrecht:2006um} it has been suggested that the situation can be improved in the CPL model by replacing $w_0$ with the new parameter $w_p = w_0 + (1-a_p) w_a$, where $a_p$ is the pivot point that extremises the uncertainty in $w(a)$. This is just a redefinition of the constant component of $w(a)$ and our arguments here concern the dynamical part, i.e. the part of $w(a)$ that depends on redshift, $w_a=-dw/da$. To see this, observe that one can rewrite $w(a) = w_p + w_a (a_p -a)= w_0 + w_a (1-a)$, thus making it hopefully clear that the errors in $w_a$ are not affected by the pivot.}} In short, in any given fit to a low redshift dataset, one should expect that one has to go deeper in redshift with the $(1-a)$-expansion than with the $z$-expansion in order to constrain the coefficient $w_a$. Indeed, it has already been observed by Busti et al. \cite{Busti:2015xqa} that $z$-expansions perform better than $(1-a)$-expansions at low redshift, $ z \lesssim 1.4$, i.e. within the range of supernovae, when attempting to recover the flat $\Lambda$CDM model from cosmographic expansions. In particular, it was noted that the errors in the $(1-a)$-expansion were larger. Or alternatively put, precisely because $(1-a)$ is a smaller expansion parameter, one requires a higher order Taylor expansion to approximate any model (see for example Figure 9 of \cite{Yang:2019vgk}). These statements are two faces of the same coin. Let us try to sum up the immediate concern. The CPL model \cite{Chevallier:2000qy, Linder:2002et} is a leading parametrisation for DDE. Objectively, current data is consistent with the cosmological constant and this means that $w_0 \approx -1$ and $w_a \approx 0$. For the DDE paradigm to be credible, one has to show that $w_a \neq 0$ outside of the confidence intervals. Now, bear in mind that evidence usually requires a $>\!\!3\sigma$ deviation. If dark energy is largely a low redshift phenomenon, then a DDE parametrisation that is sensitive to evolution in $w(z)$ at low redshift is a prerequisite. In practice, this means that the errors on $w_a$ should be small so that deviations from $w_a = 0$ can be distinguished. Note, by DDE we are not discussing deviations from $w = -1$, but the notion that dark energy evolves with redshift, in other words that the derivative is non-zero, $w'(z) \neq 0$, while $w_0=w(0)$ is just another parameter of the DE sector. \begin{figure}[htb] \centering \includegraphics[width=88mm]{models.png} \\ \caption{The redshift dependence of various DDE models.} \label{wa} \end{figure} It should be clear from the above arguments that the redshift model \cite{Cooray:1999da, Astier:2000as} will lead to smaller errors on $w_a$, thus making it a more appropriate model than the CPL model for parametrising evolution in $w(z)$ at low $z$.
More generally, the errors in $w_a$ will differ across DDE models; this leads to a degree of arbitrariness, and one of the take-home messages of this work is the necessity to analyse a number of models to reduce bias. Of course, if there is no DDE, then this arbitrariness will not be a problem. To put this comment in context, note that as we will soon show, the BA model \cite{Barboza:2008rh} leads to a detection of DDE more quickly than CPL \cite{Chevallier:2000qy, Linder:2002et}, assuming DDE is real. Given how ubiquitous the CPL model has become, it is clear that some simple facts regarding these models are under-appreciated in the community. In short, parametric DDE models are biased tracers of DDE, so it is imperative to make statements across a class of models, e.g. \cite{Yang:2021flj, Zheng:2021oeq}. Moving along, it is easy to compare the DDE models given in Table \ref{DDE_models}. In these models \begin{equation}\label{wz-fz} w(z)=w_0+ f(z)\ w_a,\qquad f(0)=0,\ \ f'(0)=1, \end{equation} but the models differ in the higher derivatives of $f(z)$ at $z=0$. $f(z)$ is depicted for these models in FIG. \ref{wa}. Expanding all the $w(z)$ expressions around $z=0$, one can confirm that $w(z) \approx w_0 + w_a z $ below $z \sim 0.15$, but at higher $z$, yet still below $z \sim 1$, there are noticeable departures in behaviour. Indeed, below $z \sim 1$, provided the data is of suitably uniform quality, one can anticipate that the BA model \cite{Barboza:2008rh} should be more sensitive than the redshift model \cite{Cooray:1999da, Astier:2000as}. Furthermore, we should expect that the sensitivity to $w_a$ decreases across the Efstathiou \cite{Efstathiou:1999tm}, CPL \cite{Chevallier:2000qy, Linder:2002et} and JBP \cite{Jassal:2005qc} models in that order.\footnote{Some of these models appeared in a recent paper \cite{Zheng:2021oeq} and as is clear from Figure 7 there, the errors in $w_a$ vary considerably. Our discussion illuminates such trends.} By $z \sim 2$, the order in sensitivity should change so that the redshift model performs best, followed in order by BA, Efstathiou, CPL and JBP. Lastly, above $z \sim 2$ the order of sensitivity in $w_a$ changes once again and we should expect that the Efstathiou model outperforms the BA model on the size of $w_a$ errors. While this argument is analytic, and admittedly a little naive since all expressions are exact and there is no data, we will now confirm how it is realised in fits to mock data. \begin{figure}[htb] \centering \includegraphics[width=80mm]{Hmock.png} \caption{A sample mock realisation for forecasted $H(z)$ DESI data based on the CPL model with $(H_0, \Omega_{m0}, w_0, w_a) = (67.36, 0.3153, -1, 0.5)$. } \label{mockH} \end{figure} \section{DESI Mocks} We begin by detailing our mocking procedure. Since our focus is DDE, we fix the other parameters to their Planck-$\Lambda$CDM values $(H_0, \Omega_{m0}, w_0) = (67.36, 0.3153, -1)$ \cite{Aghanim:2018eyx} and choose a value of $w_a$ that is sufficiently different from $w_a = 0$. Here, we choose $w_a = 0.5$, which is clearly an exaggerated or cartoon value, but it serves to make our point. Moreover, as the focus is evolution in $w(z)$, i.e. determining $w'(z)$, it is unimportant what assumption we make on $w_0$. The above values are nominal, but the reader is free to repeat with other values of $w_0, w_a$ and arrive at the same conclusion. Importantly, we mock data up on a particular DDE model and then fit the \textit{same} model to the mock data to recover the cosmological parameters.
Note, by construction the model fits the data. We repeat this process one hundred times and average over the central values and the errors ($1 \sigma$ confidence intervals). For the data, we use the most optimistic forecasted DESI errors on the Hubble parameter $H(z)$ and angular diameter distance $D_{A}(z)$ \cite{Aghamousa:2016zmz} in the redshift range $0 < z \leq 3.55$, and impose a cut-off on the redshift, $z_{\textrm{max}}$. We have picked this extended range so that we can flesh out as many of the features of FIG. \ref{wa} as possible. We perform Markov Chain Monte Carlo (MCMC) analysis for each realisation, and to speed up the convergence over the four parameters of interest, we impose a Planck prior $\Omega_{m0} h^2 = 0.1430 \pm 0.0011$ \cite{Aghanim:2018eyx}. We present a given mock realisation for the CPL model in FIG. \ref{mockH} and FIG. \ref{mockD}, simply to illustrate the DESI errors. We will comment on them soon. \begin{figure} \centering \includegraphics[width=80mm]{DAmock.png} \caption{Same as FIG. \ref{mockH} but for $D_{A}(z)$. } \label{mockD} \end{figure} Scanning Table \ref{table1}, one sees that with a cut-off $z_{\textrm{max}}=1$, the BA model does indeed lead to the smallest average errors on $w_a$, as anticipated from FIG. \ref{wa}. The next best performer is the redshift model, which once again confirms our expectations from FIG. \ref{wa}. Observe that, as promised, sensitivity in $w_a$ drops across the Efstathiou, CPL and JBP models. Moreover, all of these models struggle to tell $w_a = 0.5$ apart from $w_a = 0$ with $z_{\textrm{max}}=1$. Of course, the reader can complain that $w_a = 0.5$ in the redshift model and $w_a = 0.5$ in the JBP model are different, since the combination $w_a f(z)$ is smaller in the latter, so the data will drive $w_a$ to larger values. This is true, but note that FIG. 7 of \cite{Zheng:2021oeq} uses real data and the discrepancies in the size of the $w_a$ errors are still evident (see also FIG. 2 of \cite{Barboza:2008rh}). With $z_{\textrm{max}}=2$, the story changes, and the size of the errors in $w_a$ increases in order across the redshift, BA, Efstathiou, CPL and JBP models. Once again this is in line with intuition gained from FIG. \ref{wa}. All the models bar JBP can now distinguish $w_a = 0.5$ from $w_a = 0$. Finally, with the highest redshift cut-off, $z_{\textrm{max}}=3.55$, the insights gleaned from FIG. \ref{wa} are largely correct, but there is a noticeable exception. From FIG. \ref{wa}, we would expect the Efstathiou model to perform better than the BA model beyond $z \sim 2.5$. However, it is clear from the numbers that this is not true. The likely explanation is that FIG. \ref{wa} is an analytic statement that does not factor in data quality. As can be seen from FIG. \ref{mockH} and FIG. \ref{mockD}, the forecasted DESI data quality is reduced at higher redshifts, so even if the Efstathiou model becomes (analytically) more sensitive to DDE than the BA model in that range, because of the decrease in data quality, this may not be evident. Note, our insights gained from analytic expressions are largely correct, but data quality plays some role. To help visualise the errors on $w_a$ with different cut-off redshifts, we plot the errors in FIG. \ref{error_wa}.
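For transparency, the mocking procedure may be sketched as follows. This is an illustrative outline rather than our production pipeline: the variable names are placeholders, and the arrays \texttt{sigma\_H} and \texttt{sigma\_DA} holding the forecasted DESI errors must be filled from the tables of \cite{Aghamousa:2016zmz}.

\begin{verbatim}
import numpy as np
from scipy.integrate import quad

C_KMS = 299792.458  # speed of light [km/s]

def H_cpl(z, H0, Om, w0, wa):
    """Hubble parameter for the CPL model; radiation neglected."""
    Xz = (1 + z)**(3*(1 + w0 + wa)) * np.exp(-3*wa*z / (1 + z))
    return H0 * np.sqrt((1 - Om) * Xz + Om * (1 + z)**3)

def DA_cpl(z, H0, Om, w0, wa):
    """Angular diameter distance [Mpc] in a flat universe."""
    I = quad(lambda x: 1.0 / H_cpl(x, H0, Om, w0, wa), 0.0, z)[0]
    return C_KMS * I / (1 + z)

rng = np.random.default_rng(0)
truth = dict(H0=67.36, Om=0.3153, w0=-1.0, wa=0.5)

def make_mock(z_bins, sigma_H, sigma_DA):
    """One realisation: fiducial values scattered by forecast errors."""
    Hm = rng.normal([H_cpl(z, **truth) for z in z_bins], sigma_H)
    Dm = rng.normal([DA_cpl(z, **truth) for z in z_bins], sigma_DA)
    return Hm, Dm

def chi2(p, z_bins, Hm, sigma_H, Dm, sigma_DA):
    """Gaussian chi^2 supplemented by the Planck prior on Omega_m h^2."""
    H0, Om, w0, wa = p
    c2  = sum(((h - H_cpl(z, H0, Om, w0, wa)) / s)**2
              for z, h, s in zip(z_bins, Hm, sigma_H))
    c2 += sum(((d - DA_cpl(z, H0, Om, w0, wa)) / s)**2
              for z, d, s in zip(z_bins, Dm, sigma_DA))
    c2 += ((Om * (H0/100)**2 - 0.1430) / 0.0011)**2
    return c2
\end{verbatim}

Running an MCMC on this $\chi^2$ for each realisation and averaging the marginalised constraints then yields numbers of the type quoted in Table \ref{table1}.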
\begin{table}[htb] \centering \begin{tabular}{c|ccccc} \rule{0pt}{3ex} Model & $z_{\textrm{max}}$ & $H_0$ & $\Omega_{m0}$ & $w_0$ & $w_a$ \\ \hline \rule{0pt}{3ex} \multirow{3}{*}{Redshift} & $1$ & $67.55^{+1.53}_{-1.46}$ & $0.314^{+0.014}_{-0.014}$ & $-1.02^{+0.16}_{-0.16}$ & $0.52^{+0.34}_{-0.34}$ \\ \rule{0pt}{3ex} & $2$ & $67.20^{+0.96}_{-0.94}$ & $0.317^{+0.009}_{-0.009}$ & $-0.99^{+0.08}_{-0.07}$ & $0.48^{+0.12}_{-0.12}$ \\ \rule{0pt}{3ex} & $3.55$ & $67.38^{+0.70}_{-0.70}$ & $0.315^{+0.007}_{-0.007}$ & $-1.00^{+0.04}_{-0.04}$ & $0.50^{+0.04}_{-0.04}$ \\ \hline \rule{0pt}{3ex} \multirow{3}{*}{CPL} & $1$ & $67.21^{+1.74}_{-1.65}$ & $0.317^{+0.016}_{-0.016}$ & $-0.98^{+0.21}_{-0.21}$ & $0.43^{+0.72}_{-0.72}$ \\ \rule{0pt}{3ex} & $2$ & $67.39^{+1.31}_{-1.27}$ & $0.315^{+0.013}_{-0.012}$ & $-1.00^{+0.13}_{-0.13}$ & $0.49^{+0.39}_{-0.40}$ \\ \rule{0pt}{3ex} & $3.55$ & $67.18^{+1.09}_{-1.08}$ & $0.317^{+0.011}_{-0.010}$ &$-0.98^{+0.10}_{-0.10}$ & $0.46^{+0.27}_{-0.29}$ \\ \hline \rule{0pt}{3ex} \multirow{3}{*}{Efstathiou} & $1$ & $67.04^{+1.61}_{-1.54}$ & $0.318^{+0.015}_{-0.015}$ & $-0.96^{+0.18}_{-0.18}$ & $0.37^{+0.50}_{-0.51}$ \\ \rule{0pt}{3ex} & $2$ & $67.15^{+1.13}_{-1.10}$ & $0.317^{+0.011}_{-0.011}$ & $-0.98^{+0.10}_{-0.10}$ & $0.45^{+0.23}_{-0.23}$ \\ \rule{0pt}{3ex} & $3.55$ & $67.22^{+0.88}_{-0.86}$ & $0.317^{+0.009}_{-0.008}$ & $-0.99^{+0.07}_{-0.07}$ & $0.49^{+0.13}_{-0.13}$ \\ \hline \rule{0pt}{3ex} \multirow{3}{*}{JBP} & $1$ & $67.26^{+2.06}_{-1.95}$ & $0.317^{+0.019}_{-0.019}$ & $-0.98^{+0.31}_{-0.31}$ & $0.38^{+1.59}_{-1.60}$ \\ \rule{0pt}{3ex} & $2$ & $67.27^{+1.85}_{-1.74}$ & $0.317^{+0.017}_{-0.017}$ & $-0.98^{+0.25}_{-0.25}$ & $0.40^{+1.23}_{-1.23}$ \\ \rule{0pt}{3ex} &$3.55$ & $67.40^{+1.85}_{-1.76}$ & $0.315^{+0.018}_{-0.017}$ & $-0.99^{+0.25}_{-0.25}$ & $0.43^{+1.21}_{-1.22}$ \\ \hline \rule{0pt}{3ex} \multirow{3}{*}{BA} & $1$ & $67.58^{+1.58}_{-1.50}$ & $0.314^{+0.015}_{-0.014}$ & $-1.01^{+0.16}_{-0.16}$ & $0.51^{+0.31}_{-0.31}$ \\ \rule{0pt}{3ex} & $2$ & $67.39^{+1.14}_{-1.11}$ & $0.315^{+0.011}_{-0.011}$ & $-1.00^{+0.10}_{-0.10}$ & $0.51^{+0.15}_{-0.15}$ \\ \rule{0pt}{3ex} & $3.55$ & $67.40^{+0.97}_{-0.95}$ & $0.315^{+0.009}_{-0.009}$ & $-1.00^{+0.075}_{-0.074}$ & $0.50^{+0.10}_{-0.10}$ \end{tabular} \caption{{We show the average best-fit values of the cosmological parameters for mock DESI data with $(H_0, \Omega_{m0}, w_0, w_a) = (67.36, 0.3153, -1, 0.5)$ over 100 realisations with a redshift cut-off $z_{\textrm{max}}$.}} \label{table1} \end{table} Let us summarise. As explained, parametric DDE models build up sensitivity to $w_a$ with redshift. However, the rate at which the sensitivity increases depends on the function multiplying $w_a$ in the dark energy EOS. In reality, the JBP model is the poorest performer. The CPL model \cite{Chevallier:2000qy, Linder:2002et} performs better, but is still conservative, and given how ubiquitous it has become, one may worry that not discovering DDE has become a self-fulfilling prophecy. The BA model \cite{Barboza:2008rh} performs a lot better, which should make it the parametric DDE model of choice. Of course, it still cannot recover oscillatory behaviour in $w(z)$, if it is real. \begin{figure} \centering \includegraphics[width=80mm]{error_bar.png} \caption{Error in the $w_a$ parameter $\Delta w_a$ across the different models for different $z_{\textrm{max}}$ as quantified in Table \ref{table1}.
} \label{error_wa} \end{figure} Observe that in both the CPL and BA models, the dark energy EOS is bounded and there is no immediate obstacle to fitting CMB data. Indeed, while Barboza \& Alcaniz deserve credit for their model, and we encourage the community to use it, along with the other models to reduce bias, the BA model may be easily tweaked to get further improvements. To this end, note that there is a simple generalisation: \begin{equation} \label{genBA} w(z) = w_0 + w_a \frac{z (1+z)^{n-1}}{1+z^n}, \end{equation} where $n \in \mathbb{N}$ and $n=1,2$ respectively correspond to the CPL and BA models. Once again, this reduces to $w(z) \approx w_0 + w_a z$ at low redshift and saturates to $w = w_0 + w_a $ at $z = \infty$, where the (generalised) BA model (\ref{genBA}) approaches the limit from above, whereas CPL approaches it from below (see FIG. \ref{wa}). In this sense, the dark energy EOS (\ref{genBA}) is on par with the CPL model, but as can be seen from Table \ref{table1} (for $n=2$), performs much better in constraining $w_a$. This makes it more likely that DDE, once again assuming it is physical, can be discovered. The message to the community is that one cannot rely on a single DDE parametrisation, as all of them are biased by the function $f(z)$, and it is better to study DDE over a range of models. It should be hopefully clear that if DDE is real, various models, at least in the two-parameter $(w_0, w_a)$ family, will not agree on the significance of any discovery. This arbitrariness will be a persistent problem, unless DDE is simply not discovered by any model! We make one final digression to demonstrate that parametric DDE models struggle with uncovering oscillatory features in $w(z)$ or $X(z)$. Recall again that the output of the study \cite{Wang:2018fng} is the mean values of $H_0, \Omega_{m 0}, X(z_i)$ and the corresponding covariance matrix. Since the points are uniformly distributed in the scale factor $a$, but not in redshift $z$, the data points become sparse at high redshift where the only constraints come from CMB. For this reason, we restrict our attention to $z \lesssim 2.5$ (see FIG. \ref{X_wiggle}). Having restricted the redshift range, we crop the covariance matrix to remove the $H_0, \Omega_{m0}$ and higher redshift $X (z_i)$ entries. It is then a simple exercise to treat the remaining $X(z_i)$ as ``data'' and fit the different $w(z)$ parametrisations from Table \ref{DDE_models} directly to $X (z_i)$ along with the corresponding covariance matrix. The results of the best-fit $(w_0, w_a)$ parameters are displayed in Table \ref{models} along with their $1 \sigma$ confidence intervals. Evidently, all fits are largely consistent with the cosmological constant, i.e. $(w_0, w_a) = (-1, 0)$, and any wiggles have been washed out. The Efstathiou model \cite{Efstathiou:1999tm} shows a small deviation from $w_0 = -1$, but this seems to be due to the fact that $w_a$ is very small. This model aside, the $w_a$ errors in Table \ref{models} are more or less in line with our expectation that the redshift and BA models are competitive, whereas the CPL and JBP models are less so. The models appear to agree on $w_a < 0$, which may be expected from the dip in $X(z)$, which is driven by Lyman-alpha BAO. So the take-home message is that if the wiggles in dark energy are real, one will not be able to probe them using traditional approaches. This appears to say that non-parametric data reconstructions have the ascendancy.
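The direct fit just described amounts to a correlated $\chi^2$ minimisation. A minimal sketch, assuming the mean values \texttt{z\_i}, \texttt{X\_data} and the cropped covariance matrix \texttt{C} have been loaded from the output of \cite{Wang:2018fng} (and reusing the dictionary \texttt{X} of parametrisations from our earlier sketch, with \texttt{scipy} standing in for the full MCMC), reads:

\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

# z_i, X_data and the cropped covariance C are assumed loaded (z <~ 2.5)
Cinv = np.linalg.inv(C)

def chi2_X(p, model):
    """Correlated chi^2 of a parametric X(z; w0, wa) against X(z_i)."""
    r = X[model](z_i, *p) - X_data   # residual vector
    return r @ Cinv @ r

best = {m: minimize(chi2_X, x0=[-1.0, 0.0], args=(m,)).x for m in X}
\end{verbatim}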
\begin{table}[htb] \centering \begin{tabular}{c|c|c} \rule{0pt}{3ex} Model & $w_0$ & $ w_a $ \\ \hline \rule{0pt}{3ex} Redshift & $-1.03^{+0.07}_{-0.07}$ & $-0.12^{+0.14}_{-0.16}$ \\ \rule{0pt}{3ex} CPL & $-1.03^{+0.09}_{-0.09}$ & $-0.19^{+0.32}_{-0.36}$ \\ \rule{0pt}{3ex} Efstathiou & $-1.09^{+0.07}_{-0.08}$ & $-0.01^{+0.05}_{-0.05}$ \\ \rule{0pt}{3ex} JBP & $-1.05^{+0.12}_{-0.13}$ & $-0.14^{+0.77}_{-0.80}$ \\ \rule{0pt}{3ex} BA & $-1.03^{+0.08}_{-0.08}$ & $-0.10^{+0.16}_{-0.17}$ \end{tabular} \caption{Direct fits of the traditional DDE models to the $X(z)$ reconstruction from \cite{Wang:2018fng}.} \label{models} \end{table} \section{Discrete Fourier Transform} In this section we turn our attention to the wiggles in a bid to ascertain if observational data has a preference for wiggles. This allows one to confirm or refute the output from \cite{Wang:2018fng} without resorting to the assumed correlations. In order to build a model for the wiggles in $X(z)$, we will make use of the discrete Fourier transform (DFT). To get acquainted, it is instructive to consider an example. Let us begin with the function, \begin{equation} g(x) = \sin (x) + 2 \cos (2 x) - 4 \sin(3 x). \label{f} \end{equation} We illustrate the function in FIG. \ref{cos_sin} where we have considered the period $x \in [ 0, 2 \pi]$. Noting that the curve in FIG. \ref{cos_sin} is in fact an interpolation of approximately 1000 discrete points, we have a discrete sample and one can perform a DFT analysis using the \textit{numpy.fft} package in Python. This leads to the plot in FIG. \ref{cos_sin_DFT}. Observe that we have restricted the frequency range in the plot to the lowest frequencies of interest. In general, the DFT of a sample of discrete points is complex, so we have separated the real (red dots) and imaginary parts (green dots). \begin{figure}[htb] \centering \includegraphics[width=80mm]{cos_sin.png} \caption{$g(x) = \sin (x) + 2 \cos (2 x) - 4 \sin(3 x)$} \label{cos_sin} \end{figure} \begin{figure} \centering \includegraphics[width=80mm]{DFT.png} \caption{DFT of $g(x) = \sin (x) + 2 \cos (2 x) - 4 \sin(3 x)$ versus frequency.} \label{cos_sin_DFT} \end{figure} It should be stressed that FIG. \ref{cos_sin_DFT} is specific to the function (\ref{f}) in a way that we now detail. First, most of the dots are consistent with zero, which tells us that those modes are not excited. Moreover, observe that $\sin(x), \cos(2x)$ and $\sin (3x)$ have frequencies $1/(2 \pi), 1/\pi$ and $3/(2 \pi)$, respectively, which we have highlighted using lines. In line with our expectations, dots on these lines have finite values. It should be noted that the values are also in the ratio of the coefficients, i.e. $1:2:4$, so clearly the displacement from zero encodes the amplitude of the oscillation. Finally, as is well documented, we see that the Fourier transform of a sine function is odd as the sign of the frequency is flipped, whereas the Fourier transform of cosine is even. Having hopefully oriented ourselves, we can turn our attention to $X(z)$. The reconstructed $X(z)$ \cite{Wang:2018fng} is defined by the mean values and covariance matrix. As is clear from FIG. \ref{X_wiggle}, the dip at $z \sim 2.3$ is driven by Lyman-alpha BAO. Neglecting the ``bump'' at $z \sim 1$, which appears in a ``data desert'' where there are only poor quality OHD data, the remaining wiggles are most pronounced at low redshift.
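For the reader wishing to reproduce the warm-up example above, a minimal sketch using \textit{numpy.fft} is the following; note numpy's convention that \textit{fftfreq} returns frequencies in cycles per unit $x$.

\begin{verbatim}
import numpy as np

N = 1024
x = np.linspace(0, 2*np.pi, N, endpoint=False)   # one full period
g = np.sin(x) + 2*np.cos(2*x) - 4*np.sin(3*x)

G = np.fft.fft(g) / N                   # normalised DFT coefficients
f = np.fft.fftfreq(N, d=x[1] - x[0])    # frequencies in cycles per unit x

# Excited modes sit at f = 1/(2 pi), 1/pi and 3/(2 pi); cosines land in
# the (even) real part, sines in the (odd) imaginary part, and the
# magnitudes come out in the ratio of the coefficients, 1 : 2 : 4.
for k in np.argsort(-np.abs(G))[:6]:
    print(f"f = {f[k]:+.4f}, Re = {G[k].real:+.3f}, Im = {G[k].imag:+.3f}")
\end{verbatim}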
In practice, we restrict our attention to $z \lesssim 0.67$ because it was easiest to approximate the wiggles using simple trigonometric functions in this redshift range. Note, there is also a little bit of trial and error here. One can construct lots of ansatze for the wiggles using the output from the DFT, but one may not be able to find tangible evidence for the wiggle even in fits to $X(z)$, since the errors in $X(z)$ increase at higher redshift (see FIG. \ref{X_wiggle}). Now, from the mean and the covariance matrix one can generate different realisations of $X^{(n)}(z)$, a sample of which is shown in FIG. \ref{Xsample}. Performing a DFT, we arrive at the frequencies highlighted in FIG. \ref{XDFT}, where we have exploited the same colour coding as the $X^{(n)}(z)$ realisations: observe that squares denote the real components and triangles the imaginary ones. Just as in our simple warm-up example, we recognise that the higher frequency modes are largely not excited. This can be expected since the Fourier transform of the correlation (\ref{correlation}) corresponds to exponential decay. In particular, one can see that the amplitudes drop off at higher frequency and the most relevant excited frequencies $f_n$ belong to the following group, $f_n=1.58 n, n\in \mathbb{Z}$, explicitly $f_n \in \{0, \pm 1.58, \pm 3.15, \pm 4.73, \pm 6.3, \pm 7.88, \pm 9.45, \dots \}$. \begin{figure}[htb] \centering \includegraphics[width=80mm]{X_sample.png} \caption{A sample of $X(z)$ functions.} \label{Xsample} \end{figure} \begin{figure} \centering \includegraphics[width=80mm]{X_DFT.png} \caption{The corresponding DFTs from the $X(z)$ sample in Fig. \ref{Xsample}.} \label{XDFT} \end{figure} \section{Modeling the wiggles} Having isolated the most relevant modes through the DFT, let us now confirm that they are indeed the relevant ones. The first task is to check that the wiggles corresponding to the modes can be recovered from a direct MCMC exploration of the $X(z)$ reconstruction output. To do so, let us keep the lowest three modes, corresponding to the frequencies $f_1 = 1.58, f_2 = 3.15$ and $f_3 = 4.73$, which are clearly the most relevant as can be seen from FIG. \ref{XDFT}. We keep only the lowest modes, since as one increases the number of modes, the task of recovering them from an MCMC analysis becomes more daunting. Restricting the number of modes allows us to construct the ansatz, \begin{equation} \label{ansatz0} X(z) = (1 - \sum_{i=1}^3 A_i) + \sum_{i=1}^3 \left[ A_i \cos ( 2 \pi f_i z) + B_i \sin (2 \pi f_i z) \right]. \end{equation} Observe that although related oscillatory expressions for $w(z)$ have appeared previously in the literature \cite{Feng:2004ff, Jaime:2018ftn, Arciniega:2021ffa}, here the frequencies are fixed and the amplitudes are free parameters. Clearly, when $A_i = B_i = 0$, we recover the flat $\Lambda$CDM model where $X = 1$. This means that even in the MCMC analysis, the reconstructed $X(z)$ or original data can reject the wiggle ansatz simply by restricting $A_i$ and $B_i$ to small numbers that are consistent with zero. We first fit the general ansatz (\ref{ansatz0}) directly to the mean and covariance matrix for $X(z)$ and only retain the amplitudes that differ from zero outside of $1 \sigma$. The rationale here is that those modes should stand the best chance of being recovered from the original data through further MCMC analysis. In practice, this is an iterative procedure and at each step we throw away the smallest amplitude within $1 \sigma$.
This leads to the ansatz \begin{eqnarray} \label{ansatz1} X(z) = (1-A_1 -A_2) &+& A_1 \cos (2 \pi f_1 z) \\ &+& A_2 \cos (2 \pi f_2 z) + B_2 \sin (2 \pi f_2 z), \nonumber \end{eqnarray} with best-fit values \begin{eqnarray} \label{fit1} A_1 &=& 0.050^{+ 0.031}_{-0.031}, \quad A_2 = -0.030^{+ 0.026}_{-0.026}, \nonumber \\ B_2 &=& - 0.046^{+0.028}_{-0.027}. \end{eqnarray} Observe that any evidence for the amplitudes is marginal. Doing the sums, one sees that $A_1, A_2$ and $B_2$ are distinct from zero at $1.6 \sigma, 1.2 \sigma$ and $1.6 \sigma$, respectively. Thus, in this context, ``evidence'' is a strong word, given that we are talking about amplitudes that differ by $\gtrsim 1 \sigma$ from zero. This may have been anticipated from FIG. \ref{X_wiggle}, as neglecting a few isolated redshift ranges, the $X=1$ line largely intersects the $1 \sigma$ confidence interval for $z \lesssim 1.6$. Noting that evidence for $A_2$ being non-zero is more marginal than the others, one can also remove this amplitude. Doing so, we have the simplified ansatz, \begin{equation} \label{ansatz2} X(z) = (1-A_1) + A_1 \cos (2 \pi f_1 z) + B_2 \sin (2 \pi f_2 z), \end{equation} and the best-fit values become \begin{equation} \label{fit2} A_1 = 0.028^{+0.025}_{-0.024}, \quad B_2 = - 0.039^{+0.025}_{-0.026}, \end{equation} which represents a deviation in $A_1$ and $B_2$ from zero of $1.2 \sigma $ and $1.6 \sigma $ statistical significance, respectively. In FIG. \ref{X_vs_wiggle} we plot the reconstructed $X(z)$ alongside the best-fit and $1 \sigma$ confidence interval inferred from the MCMC chain for our wiggle ansatz (\ref{ansatz2}). We omit a similar plot for the ansatz (\ref{ansatz1}), which, since it has an extra parameter, leads to a slightly broader confidence interval. It should come as no surprise to the reader that since we have truncated out the higher frequency modes, the errors have contracted noticeably. Moreover, since the ansatz (\ref{ansatz2}) is minimal, it has the smallest errors. That being said, oscillations are evident. \begin{figure}[htb] \centering \includegraphics[width=80mm]{wigglesfit.png} \caption{The reconstructed $X(z)$ from \cite{Wang:2018fng} along with (\ref{ansatz2}) fitted to the same $X(z)$.} \label{X_vs_wiggle} \end{figure} We now have ansatze for a wiggle in the redshift range $z \lesssim 0.67$ and we can fit them back to a combination of low redshift data. This will establish if the wiggle is in the data or an artifact of the working assumptions employed in \cite{Wang:2018fng}. It should be noted that if the data rejects the wiggle ansatz, one should find that $A_i = B_i = 0$ within the confidence intervals. We largely make use of the same data as Wang et al. \cite{Wang:2018fng}, modulo two differences. In contrast to \cite{Wang:2018fng}, the JLA dataset \cite{Betoule:2014frx} has been replaced by Pantheon \cite{Scolnic:2017caz} and we have dropped the SH0ES prior on $H_0$ \cite{Riess:2019cxk} on the grounds that the most conservative definition of Hubble tension is that the SH0ES results require a prior on the absolute magnitude $M_B$ and not $H_0$ directly \cite{Benevento:2020fev, Lemos:2018smw, Camarena:2021jlr, Efstathiou:2021ocp}. This may look like data editing from our end, but to make the comparison fairer we will perform the analysis both with and without the priors (\ref{H0om}). By imposing the priors (\ref{H0om}), we can ensure that all constraining power in the data is being transferred to our amplitudes in the dark energy sector.
This is to negate any criticism from the reader that we are fitting more parameters and it is unsurprising if we see less ``evidence'' for the wiggles. Our best-fit values of the ansatze (\ref{ansatz1}) and (\ref{ansatz2}) to a combination of Pantheon supernovae \cite{Scolnic:2017caz}, BAO determinations from 6dF Galaxy Survey \cite{Beutler:2011hx}, SDSS DR7 Main Galaxy Survey \cite{Ross:2014qpa}, tomographic BAO \cite{Wang:2016wjr} and cosmic chronometer data \cite{Moresco:2016mzx} can be found in Table \ref{wiggles_recovery}. Throughout we make use of the Planck prior $r_d = 147 $ Mpc and $H_0$ is in units of km/s/Mpc. Moreover, when fitting the Pantheon dataset, we made use of the following expression for the apparent magnitude $m_{B}$, $m_{B} = 5 \log_{10} (H_0 d_{L}) - 5 a_B$, where $d_{L}(z)$ is the luminosity distance and $a_B = 0.71273 \pm 0.00176$ \cite{Riess:2016jrr}. We have divided the results in Table \ref{wiggles_recovery} into the first two entries, which do not make use of the prior (\ref{H0om}), and the second two entries, which do. We have suppressed the best-fit value for $a_B$ as it is always consistent with the prior and does not add any information. \begin{table}[htb] \centering \begin{tabular}{ccccc} \rule{0pt}{3ex} $H_0$ & $\Omega_{m0}$ & $A_1$ & $A_2$ & $B_2$ \\ \hline \rule{0pt}{3ex} $68.59^{+0.56}_{-0.57}$ & $0.318^{+0.020}_{-0.020}$ & $0.012^{+0.024}_{-0.023}$ & - & $-0.026^{+0.024}_{-0.023}$ \\ \rule{0pt}{3ex} $68.66^{+0.59}_{-0.57}$ & $0.309^{+0.021}_{-0.021}$ & $0.049^{+0.031}_{-0.030}$ & $-0.050^{+0.026}_{-0.026}$ & $-0.050^{+0.028}_{-0.027}$ \\ \hline \rule{0pt}{3ex} $69.24^{+0.44}_{-0.42}$ & $0.291^{+0.008}_{-0.008}$ & $-0.001^{+0.017}_{-0.017}$ & - & $-0.031^{+0.023}_{-0.022}$ \\ \rule{0pt}{3ex} $69.20^{+0.45}_{-0.43}$ & $0.289^{+0.007}_{-0.008}$ & $0.045^{+0.028}_{-0.027}$ & $-0.054^{+0.026}_{-0.026}$ & $-0.055^{+0.026}_{-0.026}$ \\ \end{tabular} \caption{Best-fit values of the models (\ref{ansatz1}) and (\ref{ansatz2}) to a compilation of supernovae, BAO and cosmic chronometer data. The upper entries do not include the prior (\ref{H0om}), whereas the lower entries do.} \label{wiggles_recovery} \end{table} Now, we are in a fitting position to make a comment. The best-fit values quoted in (\ref{fit1}) and (\ref{fit2}) show fits of our wiggle ansatze directly to the reconstructed $X(z)$. That the best-fit values are non-zero outside of $1 \sigma$ confirms that the $X(z)$ ``data'' recognises the wiggles. Admittedly, this recognition is marginal; nevertheless it confirms that the Fourier modes extracted from the DFT are the relevant modes. On the other hand, the results in Table \ref{wiggles_recovery} seem to confirm the presence of wiggles. There is a slight difference between the ansatze (\ref{ansatz1}) and (\ref{ansatz2}), but even without a prior (upper entries in the table), the deviations from zero in the latter amplitudes are $1.6 \sigma$, $1.9 \sigma$ and $1.8 \sigma$, respectively. Once the prior is imposed this increases to $1.7 \sigma$, $2.1 \sigma$ and $2.1 \sigma$, respectively. Thus, our DFT analysis seems to confirm that the wiggles are in the data and can be accessed directly without assuming correlations. This is a consistency check and the analysis \cite{Wang:2018fng} passes convincingly.
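For reference, the minimal ansatz (\ref{ansatz2}) and the Hubble parameter entering these data fits take only a few lines; the significances quoted above are simply the best-fit amplitudes divided by their $1\sigma$ errors. The sketch below is illustrative (the function names are ours).

\begin{verbatim}
import numpy as np

F1, F2 = 1.58, 3.15   # frequencies kept from the DFT analysis

def X_wiggle(z, A1, B2):
    """Ansatz (ansatz2): X(0) = 1 by construction; A1 = B2 = 0 is LCDM."""
    return (1 - A1) + A1*np.cos(2*np.pi*F1*z) + B2*np.sin(2*np.pi*F2*z)

def H_wiggle(z, H0, Om, A1, B2):
    """Hubble parameter with the wiggly dark energy sector (low z)."""
    return H0 * np.sqrt((1 - Om)*X_wiggle(z, A1, B2) + Om*(1 + z)**3)

# e.g. significance of A1 from its constraint 0.028 (+0.025/-0.024):
print(abs(0.028) / 0.024)   # ~1.2 sigma
\end{verbatim}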
\section{Comment on Correlations} While the assumptions in \cite{Wang:2018fng} are very much within the spirit of ``data-driven cosmology'', if one puts correlations in by hand, and dials the parameters and finds a range where Bayesian evidence prefers the reconstructed $X(z)$ over flat $\Lambda$CDM, one may wonder what makes these values so special. Let us dwell on this further. As one sends the scale $a_c \rightarrow 1$, one recovers flat $\Lambda$CDM. There is no problem constructing large classes of field theories that allow behaviour close to flat $\Lambda$CDM. Quintessence is an example in this class. However, when one chooses $\sigma_m = 0.04, a_c = 0.06$, the presence of wiggles in $X(z)$ suggests that one may be far from the flat $\Lambda$CDM regime $(X=1)$ and it is valid to ask: is there a corresponding field theory? Furthermore, if it does exist, how exotic is it? In this section we consider a foray into addressing this question. Let us return to (\ref{correlation}) and set $a$ or $a'$ to unity. It is then an easy task to translate from $a$ to $z$, so that the correlation may be expressed as \begin{equation} \xi (z) = \frac{\xi_{w}(0)}{1+ \left( \frac{z}{a_c (1+z)} \right)^2 }. \end{equation} Since we have fixed one point at $a=1$, or alternatively $z=0$, this expression now represents correlations between a given redshift $z$ and $z = 0$. Provided we operate at $z < 1$, one can Taylor expand this expression to get, \begin{equation} \label{xi_expand} \xi (z) = \xi_{w}(0) \left( 1 - \frac{z^2}{a_c^2} + \frac{2 z^3}{a_c^2} + \dots \right). \end{equation} We will now attempt to compare this to some expressions from a dynamical field theory. We make the assumption that $w^{\textrm{fid}} = -1$ and import the following expression for $w(z)$ from \cite{Banerjee:2020xcn}: \begin{equation} \label{w} w(z) = -1 + P + z Q + z^2 R + \dots \end{equation} where $\dots$ denote higher order terms and the parameters $P, Q, R$ may be expressed as, \begin{widetext} \begin{eqnarray} \label{PQR} P &=& \frac{\alpha^2}{3 \Omega_{\phi 0}}, \quad Q = \frac{1}{\Omega_{\phi 0}^2} \biggl[ \frac{\alpha^4}{3} ( \Omega_{\phi 0} -1) + \frac{\alpha^2}{3} \Omega_{\phi 0} (5 - 3 \Omega_{\phi 0} ) + \frac{4}{3} \alpha \beta \Omega_{\phi 0} \biggr], \nonumber \\ R &=& \frac{1}{\Omega_{\phi 0}^3} \biggl[ \frac{\alpha^6}{6} ( \Omega_{\phi 0}^2 - 3 \Omega_{\phi 0} +2 ) + \frac{\alpha^4}{6} \Omega_{\phi 0} (17 \Omega_{\phi 0} -14 -3 \Omega_{\phi 0}^2) + 2 \alpha^3 \beta \Omega_{\phi 0} ( \Omega_{\phi 0} - 1 ) + \frac{\alpha^2}{3} \Omega_{\phi 0}^2 (10- 9 \Omega_{\phi 0}) + \frac{4}{3} \alpha \beta \Omega_{\phi 0}^2 (5 - 3 \Omega_{\phi 0}) \nonumber \\ &+& \frac{4}{3} \beta^2 \Omega_{\phi 0}^2 + 2 \alpha \gamma \Omega_{\phi 0}^2 \biggr]. \end{eqnarray} \end{widetext} This expression for the EOS represents a perturbative solution to the Quintessence equations of motion about $z= 0$. Setting $\alpha = \beta = \gamma = 0$ one recovers $w = -1$; note also that $\Omega_{\phi 0} = 1 - \Omega_{m 0}$. The parameters $\alpha, \beta, \gamma$ are related to the Quintessence potential and its first and second derivatives, respectively, so even for the simplest Quintessence model one expects at least two independent parameters. It should be stressed that while Quintessence restricts $w(z) \geq -1$ by construction, this is only true for the exact solution. Since the solution outlined in \cite{Banerjee:2020xcn} is perturbative, nothing precludes values of $\alpha, \beta$ and $\gamma$ where $w < -1$.
Precisely for this reason, in \cite{Banerjee:2020xcn} $w(z) \geq -1$ had to be imposed by hand. Now, we can make the first observation. The correlations adopted in \cite{Crittenden:2011aa, Zhao:2017cud, Wang:2018fng} tacitly assume that the dark energy sector, whether it is parametrised by $w(z)$ or $X(z)$, decouples from the other cosmological parameters $H_0$ and $\Omega_{m0}$. As can be seen from (\ref{w}), this is not necessarily true. There is a very simple reason for this. Neither $w(z)$ nor $X(z)$ is fundamental in a field theory description; both are \textit{derived} quantities. In addition, it is straightforward to calculate \begin{eqnarray} \xi(z) &=& \langle [ w(0) - w^{\textrm{fid}}(0)] [ w(z) - w^{\textrm{fid}}(z)] \rangle \nonumber \\ &=& \langle P^2 \rangle + z \langle P Q \rangle + z^2 \langle P R \rangle + \dots \end{eqnarray} where we have separated terms by $z$ dependence. Note also that, as highlighted earlier, $\Omega_{\phi 0}$ is hanging around, in contradiction to the assumptions made in \cite{Zhao:2017cud, Wang:2018fng}. Admittedly, the expressions are complicated, but this is the price one pays for working with a derived quantity. The first term is a number that can be fixed through comparison to (\ref{xi_expand}), but the term linear in $z$ must vanish. If it does, this is clearly an accident and it cannot be expected to hold in general. Indeed, we can take this comparison a bit further and fit the original parameters $H_0, \Omega_{m0}, \alpha, \beta, \gamma$ to some representative low redshift observational data \cite{Scolnic:2017caz, Beutler:2011hx, Ross:2014qpa, Wang:2016wjr, Moresco:2016mzx}. As explained in \cite{Banerjee:2020xcn}, since the solution is perturbative, we restrict to a range of redshifts $z \lesssim 0.7$, where the Hubble parameter approximates that of flat $\Lambda$CDM to $1 \%$ precision. This leads to an MCMC chain in the relevant parameters and using (\ref{PQR}) one can convert these chains into chains in $P, Q, R$. From there, it is easy to extract the covariance matrix and divide through by the errors to fix $\langle P^2 \rangle = \langle Q^2 \rangle = \langle R^2 \rangle = 1$ and get an estimate for the needed correlations, e.g. $\langle P Q \rangle$ and $\langle P R \rangle$. This allows us to get an expression for the correlation up to an overall constant $\kappa$: \begin{equation} \label{xi_Quint} \xi(z) = \kappa \left( 1 - 0.60 z + 0.21 z^2 + \dots \right). \end{equation} Comparing this to (\ref{xi_expand}), while the overall constant is no problem, since one can tune the normalisation factor, we see the necessity of the linear term. In addition, $a_c$ can also be dialed to accommodate the quadratic term. It is worth noting that since we have switched from the scale factor $a$ to redshift $z$, $a_c = 0.06$ is no longer the relevant value and a larger value is required \cite{Raveri:2017qvt, Espejo:2018hxa}. Note also that we have adopted the prior $w^{\textrm{fid}} = -1$, but even if it is some constant value, it is difficult to see how one can remove the linear in $z$ term from (\ref{xi_Quint}) so that it agrees with (\ref{xi_expand}).
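The last step of this chain post-processing is simple enough to sketch. Assuming the converted sample arrays \texttt{P\_s}, \texttt{Q\_s}, \texttt{R\_s} are at hand (the names are ours), the coefficients entering (\ref{xi_Quint}) follow from the correlation matrix:

\begin{verbatim}
import numpy as np

def xi_coefficients(P_s, Q_s, R_s):
    """Normalising by the errors fixes <P^2> = <Q^2> = <R^2> = 1, so
    xi(z) ~ kappa * (1 + <PQ> z + <PR> z^2 + ...)."""
    corr = np.corrcoef(np.vstack([P_s, Q_s, R_s]))
    return corr[0, 1], corr[0, 2]   # <PQ>, <PR>

# With the chains used in the text this returns roughly (-0.60, 0.21).
\end{verbatim}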
In summary, the assumed correlations appear to be at odds with correlations in a representative field theory. In particular, the matter density does not decouple and the absence of a linear term in the correlation is notable. While this disagreement may not be an immediate problem given the current status of the data, one can expect that as data quality improves, these differences will become more pronounced. This may necessitate changing the assumed correlations (\ref{correlation}) or attempting to identify a corresponding field theory - assuming one exists - without a linear term in the expansion (\ref{xi_expand}). \section{Discussion} Over two decades have passed since the discovery of dark energy \cite{Riess:1998cb, Perlmutter:1998np} and the possibility that dark energy evolves, as evidenced by the number of traditional models \cite{Cooray:1999da, Astier:2000as, Efstathiou:1999tm, Chevallier:2000qy, Linder:2002et, Jassal:2005qc,Barboza:2008rh}, has become a staple of cosmology. Some of these parametrisations are attractive in the sense that they are the starting point of a Taylor expansion in a small parameter, either $z$ \cite{Cooray:1999da, Astier:2000as} or $(1-a)$ \cite{Chevallier:2000qy, Linder:2002et}. In contrast, others are more ad hoc \cite{Efstathiou:1999tm, Jassal:2005qc,Barboza:2008rh}. Nevertheless, a common feature of all parametric models is that they are insensitive to DDE at $z \approx 0$ and only build up sensitivity with redshift. A key take-home message from this work is that the ``sensitivity'' depends on the function multiplying the parameter $w_a$ in the usual two-parameter approach ($w_0, w_a$). Remarkably, the ubiquitous CPL model performs poorly and leads to larger errors on $w_a$, as should be evident from our mock analysis. In other words, if one is actively looking for evidence for DDE, ideally at the $\sim 3 \sigma$ level, then other models will reach that threshold first. Thus, assuming DDE is real, one may be confronted with a scenario where one has $\sim 1 \sigma$ in one parametric DDE model, but $\sim 3 \sigma$ in another. It is this arbitrariness, inherent in the parametric approach, that ultimately brings it into question. In short, traditional DDE models are imperfect probes of DDE that are biased by the assumptions on the functional form of the $w_a$ term. Thus, it is prudent to test for DDE across a wide range of parametric models \cite{Yang:2021flj, Zheng:2021oeq}, particularly at low redshift where the differences are especially acute. In the latter part of this work, we took a closer look at claims of oscillatory behaviour in $w(z)$ or $X(z)$ \cite{Zhao:2017cud, Wang:2018fng}. The rationale for doing so is simply that if the wiggles are real, then, as confirmed in the text, traditional parametric models will struggle to recover oscillatory behaviour. In the papers \cite{Zhao:2017cud, Wang:2018fng} what has been objectively shown is that, working within the imposed correlations (\ref{correlation}), there is a range of parameters ($\sigma_m, a_c, \Delta_{X}$) where wiggles appear in reconstructions of $w(z)$ or $X(z)$. Further constraining this range, there are even values of $(\sigma_m, a_c, \Delta_{X})$ favoured by Bayesian evidence over flat $\Lambda$CDM. That being said, it should be obvious that all wiggles are observed through the prism of assumptions on correlations, and different correlations interpret the \textit{same} data differently. So our motivation here was to strip the wiggles from the correlations employed in the reconstruction. To do so, we worked with the default $X(z)$ output from \cite{Wang:2018fng} and performed a DFT of $X(z)$. This analysis revealed only a finite number of relevant modes.
This outcome may have been anticipated from the correlations (\ref{correlation}), since the Fourier transform of the correlation corresponds to an exponential decay in frequency. Using the DFT output as a guide, we constructed wiggly dark energy models, which we confirmed through direct fits to the reconstructed $X(z)$. With these models, or ansatze, in hand, we further confronted them with the original data and recovered the wiggles from the data. It should be stressed that we dropped a local $H_0$ prior and replaced JLA supernovae with Pantheon supernovae. This suggests that the wiggles in the data may be reasonably robust. Of course, this vindicates the approach of \cite{Zhao:2017cud, Wang:2018fng} and raises further questions for parametric DDE models, since they cannot recover this behaviour. Finally, since the correlations employed in \cite{Zhao:2017cud, Wang:2018fng} do not appear to be physically motivated, but rather motivated by the data, we attempted to interpret the correlations within a truncated Quintessence theory. The truncation is important here as it allows one to define the theory at low redshift, while relaxing the strict requirement that $ w(z) \geq -1$. Within this framework, we noted that, in contrast to the assumptions made in \cite{Zhao:2017cud, Wang:2018fng}, it may not be possible to decouple the dark energy sector from other cosmological parameters, e.g. $\Omega_{m0}$. Moreover, when expanded around $z=0$, one sees that the correlation (\ref{correlation}) has no linear term. On the contrary, a linear term is expected from a generic field theory, as we show in fits to representative data. It is not clear if one can recover the assumed correlation (\ref{correlation}) from a field theory, but it is a very interesting question to explore, especially as the data quality improves. \section*{Acknowledgements} We thank Stephen Appleby, Shahab Joudaki, Eric Linder, Levon Pogosian \& Tao Yang for discussion/correspondence and comments on earlier drafts. We thank Yuting Wang \& Gong-Bo Zhao for sharing and explaining the $X(z)$ wiggles from \cite{Wang:2018fng}. E\'OC is funded by the National Research Foundation of Korea (NRF-2020R1A2C1102899). MMShJ would like to acknowledge SarAmadan grant No. ISEF/M/99131. Lu Yin was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education through the Center for Quantum Spacetime (CQUeST) of Sogang University (NRF-2020R1A6A1A03047877).
\section{Introduction} \label{sec:intro} In recent years there has been much excitement about the structures and properties of novel two-dimensional (2D) materials. With high mechanical strength, unique electronic properties, and tunable chemical reactivity \cite{Yazyev14,ChengNanoscale21}, these materials offer promise in a wide variety of technological applications. These properties are a direct consequence of the 2D crystalline structure of the material, and thus taking full advantage of them requires the growth and synthesis of large-area, defect-free samples, a feat that has proved challenging. These large-scale samples are often grown by techniques of heteroepitaxy, such as chemical vapor deposition (CVD). While large areas can be covered with atomically thin layers of the desired material via these techniques, the resulting monolayer film is typically polycrystalline, as a result of spontaneous nucleation at various locations on the substrate and the formation and evolution of topological defects including dislocations and grain boundaries, as observed in experiments \cite{Yazyev14,Huang11,GibbJACS13,RenNanoLett19,RenACSNano20,LinACSNano15,MendesACSNano19}. It is thus desirable to remodel the material structure through post-growth processing such as annealing, to facilitate grain growth and achieve a single-crystalline state of the system. Usually such a process begins with a large number of discrete grains separated by well-defined grain boundaries composed of arrays of dislocation defects. The dynamics of these defects, particularly dislocation glide and climb, result in the motion of grain boundaries, with some grains growing at the expense of others. A large amount of research effort, both experimental \cite{Yazyev14,Huang11,GibbJACS13,RenNanoLett19,RenACSNano20,LinACSNano15,MendesACSNano19} and theoretical/computational \cite{Taha17,HirvonenPRB16,LiJMPS18,Zhou19,MomeniNPJCM20}, has been devoted to the study of defect dynamics and grain growth, as well as the understanding and control of the corresponding mechanisms, in various 2D material systems including graphene \cite{Yazyev14,Huang11,HirvonenPRB16,LiJMPS18,Zhou19}, hexagonal boron nitride (h-BN) \cite{GibbJACS13,Taha17,RenNanoLett19,RenACSNano20}, and transition metal dichalcogenides (TMDs) \cite{Yazyev14,LinACSNano15,MendesACSNano19}. The mechanisms of grain growth, in both two- and three-dimensional (3D) materials, involve many coupled and competing factors \cite{CahnActaMater04,CahnActaMater06,CahnPhiloMag06,TrauttActaMater12a,TrauttActaMater12b,ThomasNatCommun17,WuActaMater12,McReynoldsActaMater16,AdlandPRL13}, resulting in complex phenomena of grain boundary migration and grain rotation. It has been shown that the migration of a grain boundary in its normal direction is geometrically coupled to the tangential motion of the grain along the boundary \cite{CahnActaMater04}, which tends to rotate the annealing grain. The strength and multiplicity of this coupling, as well as the degree of the accompanying shear deformation, depend on the misorientation of the crystalline lattice across the grain boundary and the material growth or processing conditions, creating a rich landscape of possible motions as grains of different initial sizes and orientations interact.
Properties of this type of shear-coupled motion have been studied via molecular dynamics (MD) \cite{CahnActaMater06,CahnPhiloMag06,TrauttActaMater12a,TrauttActaMater12b,ThomasNatCommun17} and phase field crystal (PFC) \cite{TrauttActaMater12b,WuActaMater12,McReynoldsActaMater16,AdlandPRL13} simulations, for both symmetric \cite{CahnActaMater06,CahnPhiloMag06,ThomasNatCommun17} and asymmetric \cite{TrauttActaMater12b} planar tilt grain boundaries in bicrystals, 3D cylindrical \cite{TrauttActaMater12a} or 2D circular \cite{WuActaMater12,McReynoldsActaMater16,AdlandPRL13} individual embedded grains, and multigrain systems of polycrystals \cite{McReynoldsActaMater16,ThomasNatCommun17}. The dual behavior, or the switching between two different coupling modes, has been found within a narrow transition range of misorientation angles in some 3D MD simulations of fcc metals like Cu \cite{CahnActaMater06,CahnPhiloMag06} and Ni \cite{ThomasNatCommun17} for the shearing of symmetric planar grain boundaries. It is noted that all these previous works are for single-component systems, while the modeling and understanding of the coupled grain boundary motion and grain rotation for binary or multi-component systems are still lacking. In principle, this mechanism of normal-tangential coupled motion for grain growth dynamics is expected to directly apply to graphene-type 2D materials. However, not all 2D materials are created equal. Graphene is a widely studied conductive 2D carbon allotrope, with its six-fold symmetry responsible for its protected conducting states. A related material is 2D h-BN, which adopts the same honeycomb lattice-site structure as graphene, but with alternating B and N atoms occupying the neighboring lattice sites. This breaks the $60\degree$ rotational symmetry of the honeycomb lattice. Such a property of lattice polarity and inversion symmetry breaking occurs in all other 2D binary hexagonal materials such as TMDs (in the form of $MX_2$). If the shear-coupled mechanism of grain boundary motion were purely geometric and dominated by the lattice structure and crystallographic characteristics, as assumed in previous studies \cite{CahnActaMater04,CahnActaMater06,CahnPhiloMag06,TrauttActaMater12a,TrauttActaMater12b,ThomasNatCommun17}, the property of the coupling between normal and tangential motions and the grain rotation induced would then be qualitatively similar for single-component graphene and two-component h-BN or TMDs (other than a simple extension of the full range of misorientation from $60\degree$ to $120\degree$), given that they share the same set of honeycomb lattice sites if the distinction between atomic species is neglected. However, this is not the case, as will be shown in this study. In this paper we examine the dynamics of curved, embedded grains in both single-component and binary 2D hexagonal material systems, through PFC modeling \cite{Elder04,Elder07,Taha17} which enables the study of diffusive timescales that are not accessible to atomistic simulations while still maintaining the microscopic resolution of individual defect structures. Our focus is on the effect of coupling between grain boundary normal motion and tangential translation in both systems, via the quantitative study of the time variations of misorientation angle and grain radius during grain rotation, shrinking, and boundary evolution, and the matching to the analytic results of the Cahn-Taylor formulation.
Particularly important is the dual behavior, manifesting as the coexistence of and switching between two different coupling modes, which is found in our simulations of h-BN type binary systems but absent in 2D single-component ones. The underlying atomic mechanisms are related to the diverse types of dislocation core microstructures and their transformations that become available in binary materials, indicating the pivotal role played by inversion symmetry breaking and sublattice ordering in material systems composed of more than one species of constituents. \section{Model and theory} \subsection{The phase field crystal models} \label{PFC} We use a binary PFC model with sublattice ordering to study 2D monolayers of two-component hexagonal materials like h-BN, and for comparison, a single-component PFC model to simulate the graphene-type 2D system. The PFC models can be derived from classical density functional theory (DFT) by expanding the free energy functional $\mathcal{F}$ in terms of an atomic number density variation field $n(\vec{r},t)$ \cite{Elder07,re:teeffelen09,Huang10,Wang18R,Taha19}, allowing for spatially periodic solutions of the density field to represent the crystalline solid state of the material. The specific form of this free energy functional can be chosen to permit solutions with a desired crystalline lattice symmetry, incorporating system elasticity and plasticity, and can be parameterized to match a real material of interest. The PFC method has the advantage of allowing the modeling of material systems with atomic-scale spatial resolution, while being computationally efficient enough to cover relatively long, diffusive timescales and examine complex dynamical phenomena such as defect migration and grain growth \cite{re:berry06,BerryPRB15,re:stefanovic09,BackofenActaMater14,TrauttActaMater12b,AdlandPRL13,WuActaMater12,McReynoldsActaMater16,Taha17,HirvonenPRB16,LiJMPS18,Zhou19,SkaugenPRB18,SkaugenPRL18,Salvalaglio20,Salvalaglio21}. In the original, single-component PFC model, the dimensionless free energy functional (after rescaling) is given by \cite{Elder04,Elder07} \begin{equation} \label{graphene energy} \mathcal{F}=\int \mathrm{d} \vec{r} \left [ -\frac{\epsilon}{2} n^2 + \frac{1}{2} n (\nabla^2 + q_0^2)^2 n - \frac{g}{3} n^3 + \frac{1}{4} n^4 \right ], \end{equation} where $\epsilon$ and $g$ are phenomenological parameters that can be connected to the Fourier components of direct correlation functions in classical DFT \cite{Elder07,re:teeffelen09,Huang10}, with $\epsilon > 0$ required to produce a solid crystal with an appropriate choice of the average density variation $n_0$. The length scale has been rescaled such that the characteristic wave number of lattice periodicity is $q_0 = 1$. The dynamical evolution of the density field is governed by \begin{equation} \label{single PFC} \frac{\partial n}{\partial t} = - \frac{\delta \mathcal{F}}{\delta n} + \mu = - \left [ -\epsilon n + (\nabla^2 + q_0^2)^2 n - g n^2 + n^3 \right ] + \mu. \end{equation} Here nonconserved dynamics with a constant chemical potential $\mu$ is used to model the evolution of grains during the growth and annealing process, resembling the experimental conditions. Experimentally, during the fabrication of 2D materials using epitaxial techniques (e.g., CVD), a constant flux is maintained under high-temperature growth conditions, where the sample is typically surrounded by gas-phase surplus atoms subject to a specific chemical potential.
As discussed in Sec.~\ref{sec:intro}, this growth process typically produces polycrystalline samples, grains of which then interact and evolve. The nonconserved dynamics, with the imposed constraint of a pre-determined constant chemical potential for each atomic species, is to model these conditions with constant flux. This type of nonconserved PFC dynamics (i.e., in the grand canonical ensemble) has also been used in some previous works simulating e.g., 2D single-component grain evolution and rotation \cite{AdlandPRL13}, shear-coupled motion of 2D grain boundaries \cite{TrauttActaMater12b}, and the evolution of $60\degree$ inversion domain boundaries in h-BN \cite{Taha17}, which all showed physically consistent results of dislocation motion as compared to those from conserved dynamics and also MD simulations (see e.g., Ref.~\cite{TrauttActaMater12b}). Note that this single-component PFC system is invariant with respect to $n \rightarrow -n$ and $g \rightarrow -g$. It is straightforward to show that the corresponding solid phase in 2D is of triangular symmetry if $g-3n_0>0$, and when $g-3n_0<0$ it is of honeycomb lattice structure, which is basically the inverse of the triangular one, where the density maxima and minima have been reversed while the system elastic properties are maintained. This property has been utilized for the PFC modeling and the related model parameterization for graphene, including the structure, energy, and dynamics of dislocations, grain boundaries, polycrystals, and heterostructures \cite{HirvonenPRB16,LiJMPS18,Zhou19,HirvonenPRB19,ElderPRM21}. Although some elastic constants determined by this simple version of the PFC model, particularly the value of the Poisson ratio \cite{HirvonenPRB16}, differ from those of graphene, this would affect the mechanical deformation properties of the sample but is not expected to qualitatively influence the behaviors of defect dynamics examined in this work. Some quantitative differences would always remain, but they should be small, given that any structural deformations/distortions caused by grain boundary and dislocation motion and interaction are local and of relatively small degree, and that the normal-tangential coupled motion of grain boundary, which is the main focus of this study, is related to local shear deformation of the lattice \cite{CahnActaMater04,CahnActaMater06,CahnPhiloMag06}. In our simulations here the model parameters $\epsilon=0.02$, $g = -0.5$, $\mu=0.03$, and an initial value of $n_0=0.04$ are used. We chose the chemical potential value of our model by first numerically solving the corresponding PFC equation with conserved dynamics, using the same model parameters in a perfect single crystal, allowing it to come to equilibrium, and then calculating the corresponding chemical potential $\mu$. We used this value of $\mu$ as the imposed constraint for our simulations with nonconserved dynamics, so that it would maintain the corresponding average density $n_0$ for a solid crystal. The high-temperature regime in which our simulations operated has a fairly narrow range of $n_0$ where the solid phase is stable, and thus this choice of chemical potential is important to maintain the solid state. The constraint of constant chemical potential then controls the global conservation of density in the system, given that the system simulated here is always in a solid phase with no liquid-solid interface involved. To verify this, we calculated the average density at time intervals throughout our simulations.
After fluctuating during the initial transient time, the average density of our PFC systems evolved to and was maintained at its own equilibrated value, with very small variations over time (less than $10^{-3}$) for both this single-component PFC and the binary system described below. The 2D binary hexagonal materials can be effectively modeled by a two-component PFC model developed recently \cite{Taha17,Taha19}, where the sublattice ordering of $A$ and $B$ components is described by a rescaled free energy functional for the density variation fields $n_A$ and $n_B$, i.e., \begin{equation} \label{hBN energy} \begin{split} \mathcal{F} = \int \mathrm{d} \vec{r} & \left [ -\frac{1}{2} \epsilon_A n_A^2 + \frac{1}{2} n_A(\nabla^2 + q_A^2)^2n_A - \frac{1}{3} g_A n_A^3 + \frac{1}{4} n_A^4 \right. \\ & - \frac{1}{2} \epsilon_B n_B^2 + \frac{1}{2}\beta_B n_B(\nabla^2 + q_B^2)^2 n_B - \frac{1}{3} g_B n_B^3 + \frac{1}{4} v n_B^4 \\ & \left. + \alpha_{AB} n_A n_B + \beta_{AB} n_A (\nabla^2 + q_{AB}^2)^2 n_B + \frac{1}{2} w n_A^2 n_B +\frac{1}{2} u n_A n_B^2 \right ]. \end{split} \end{equation} The corresponding PFC dynamical equations are given by \begin{eqnarray} \frac{\partial n_A}{\partial t} &=& - \frac{\delta \mathcal{F}}{\delta n_A} + \mu_A = - \left [ -\epsilon_A n_A + \left (\nabla^2 + q_A^2 \right )^2 n_A - g_A n_A^2 + n_A^3 \right. \nonumber\\ && \left. + \alpha_{AB} n_B + \beta_{AB} \left (\nabla^2 + q_{AB}^2 \right )^2 n_B + w n_A n_B + \frac{u}{2} n_B^2 \right ] + \mu_A, \label{binary PFC 1}\\ \frac{\partial n_B}{\partial t} &=& -m_B \left ( \frac{\delta \mathcal{F}}{\delta n_B} - \mu_B \right ) = -m_B \left [ -\epsilon_B n_B + \beta_B \left (\nabla^2 + q_B^2 \right )^2 n_B - g_B n_B^2 \right. \nonumber\\ && \left. + vn_B^3 + \alpha_{AB} n_A + \beta_{AB} \left (\nabla^2 + q_{AB}^2 \right )^2 n_A + u n_A n_B + \frac{w}{2}n_A^2 - \mu_B \right ] \label{binary PFC 2} \end{eqnarray} where $m_B$ is the mobility ratio between $B$ and $A$ components, $\mu_A$ and $\mu_B$ are chemical potential values of $A$ and $B$ species, and the dimensionless model parameters $\epsilon_{A(B)}$, $q_{A(B)}$, $q_{AB}$, $g_{A(B)}$, $\alpha_{AB}$, $\beta_{AB}$, $\beta_B$, $v$, $w$, and $u$ can be expressed in terms of the Fourier-expansion coefficients of two- and three-point direct correlation functions in classical DFT for a binary $AB$ system \cite{Taha19}. In Eqs.~(\ref{hBN energy}) and (\ref{binary PFC 1}), the first four terms correspond to the single-component description for component $A$ with density field $n_A$, while the next four terms in Eq.~(\ref{hBN energy}) and the first four terms in Eq.~(\ref{binary PFC 2}) are for component $B$. All the other terms represent the coupling between $A$ and $B$ species, ensuring no overlap between $A$ and $B$ density maxima and the stabilization of a binary honeycomb structure, which consists of two $A$ and $B$ triangular sublattices breaking the inversion symmetry of the overall binary lattice (i.e., with 3 triangular lattice sites occupied by component $A$ and the other 3 occupied by $B$ in a 6-membered honeycomb unit ring). Also importantly, the model free energy functional is constructed to energetically favor the $A$-$B$ heteroelemental neighboring with respect to the $A$-$A$ or $B$-$B$ homoelemental ones, as occurs in real two-component materials. In the binary PFC equations (\ref{binary PFC 1}) and (\ref{binary PFC 2}) the nonconserved dynamics with constant chemical potentials is again used to simulate the grain growth with constant flux.
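As with the single-component case, Eqs.~(\ref{binary PFC 1}) and (\ref{binary PFC 2}) can be advanced with a semi-implicit pseudospectral scheme. One simple illustrative choice, sketched below in Python, treats the diagonal linear operators implicitly and the $\beta_{AB}$ cross terms explicitly; a fully implicit treatment would instead solve a $2\times 2$ linear system per Fourier mode. The grid and time step are assumptions for the sketch, and the parameter values are those listed below in the text.
\begin{verbatim}
import numpy as np

# Sketch: coupled semi-implicit step for the binary PFC equations, with
# diagonal linear operators implicit and beta_AB cross terms explicit.
eA = eB = 0.02; qA = qB = qAB = 1.0; v = betaB = 1.0
aAB, bAB = 0.5, 0.02; gA = gB = 0.5; w = u = 0.3
mB, muA, muB = 1.0, -0.45, -0.45
Ng, dx, dt = 512, 2*np.pi/8, 0.5         # assumed grid and time step

k = 2*np.pi*np.fft.fftfreq(Ng, d=dx)
k2 = np.add.outer(k**2, k**2)
LA = eA - (qA**2 - k2)**2                # implicit (diagonal) part for n_A
LB = mB*(eB - betaB*(qB**2 - k2)**2)     # implicit (diagonal) part for n_B
cross = (qAB**2 - k2)**2                 # Fourier symbol of (lap+qAB^2)^2

def step(nA, nB):
    NA = gA*nA**2 - nA**3 - aAB*nB - w*nA*nB - 0.5*u*nB**2 + muA
    NB = mB*(gB*nB**2 - v*nB**3 - aAB*nA - u*nA*nB - 0.5*w*nA**2 + muB)
    fA, fB = np.fft.fft2(nA), np.fft.fft2(nB)
    rhsA = fA + dt*(np.fft.fft2(NA) - bAB*cross*fB)
    rhsB = fB + dt*(np.fft.fft2(NB) - mB*bAB*cross*fA)
    return (np.real(np.fft.ifft2(rhsA/(1.0 - dt*LA))),
            np.real(np.fft.ifft2(rhsB/(1.0 - dt*LB))))
\end{verbatim}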
In addition to better modeling the growth conditions, an advantage of using this nonconserved dynamics is related to the mobility of embedded grains in the binary PFC. We found that when applying the more conventional conserved dynamics (with the absence of thermal noise in our study), the embedded grains of binary PFCs have rather low mobility due to the large Peierls barrier for defect motion and rotate too little to measure, while the nonconserved dynamics appeared to ease this problem. We thus used the nonconserved dynamics for both binary and single-component PFCs so that they can be directly compared with as few confounding factors as possible. We also chose initial conditions that resulted in high enough mobilities of defects along the grain boundary and therefore large enough degrees of grain rotations for analysis. This binary PFC model has been parameterized for 2D h-BN monolayers and applied to recent studies of h-BN grain boundary structures and dynamics \cite{Taha17}, thermal transport in pristine and polycrystalline h-BN layers \cite{DongPCCP18}, and both graphene/h-BN and h-BN/h-BN vertical heterostructures \cite{HirvonenPRB19,ElderPRM21}. In the following calculations we use parameters $\epsilon_A=\epsilon_B=0.02$, $q_A=q_B=q_{AB}=1$, $v=\beta_B=1$, $\alpha_{AB}=0.5$, $\beta_{AB}=0.02$, $g_A=g_B=0.5$, $w=u=0.3$, $m_B=1$, $\mu_A=\mu_B=-0.45$, and the average density variations $n_{A0}=n_{B0}=-0.4$. By matching to the energy and length scales of 2D h-BN, it has been shown that a unit length in this model corresponds to $0.342$ {\AA} and an energy unit corresponds to 2.74 eV \cite{Taha17}. In principle, out-of-plane deformations can be incorporated in the model (as implemented very recently \cite{ElderPRM21}), the effect of which is neglected in this study as here we focus on the evolution of monolayer grains confined on a substrate (e.g., during CVD growth), where only very small or flattened vertical variations of epitaxial overlayers were observed in experiments on h-BN \cite{GibbJACS13} and thus play a negligible or secondary role. \subsection{The Cahn-Taylor formulation for grain boundary coupled motion} Qualitatively, an embedded grain within a background crystal or matrix is a section of that crystal that has its lattice planes misaligned by some angle $\theta$ relative to the orientation of the background lattice. This misalignment causes the formation of an enclosed, curved grain boundary surrounding the grain, a region where the crystalline lattice changes orientation, punctuated by lattice dislocation defects which accommodate this angle change. Despite this misalignment, lattice planes of the embedded grain tend to maintain continuity with those of the surrounding matrix across the grain boundary. This produces an effective coupling between the normal motion of the grain boundary and tangential translation along the boundary, which, repeated all along the circumference of the grain, would then induce a net rotation of the grain during grain shrinkage or growth to maintain the lattice-plane continuity \cite{CahnActaMater04}.
This process of coupled motion has been found to occur not only for small-angle grain boundaries with arrays of discrete boundary dislocations, but also for high-angle boundaries with connected defects, with the direction of boundary motion or grain rotation (i.e., towards larger or smaller misorientation angle $\theta$) depending on the initial misorientation and the coupling mode \cite{CahnActaMater06,CahnPhiloMag06,TrauttActaMater12a,TrauttActaMater12b,ThomasNatCommun17}. As in the formulation of Cahn and Taylor \cite{CahnActaMater04}, if the grain changes size due to motion of the grain boundary normal to the grain-matrix interface at velocity $v_n$, it will tend to cause a tangential motion of the grain at velocity $v_\parallel$ to accommodate the continuity of the lattice planes. In addition, tangential motion could be induced by a shear stress $\sigma$ without the coupling to normal motion, referred to as sliding. Thus, the combined tangential motion of the grain gives $v_\parallel = \beta v_n + S \sigma$, where $\beta$ is the coupling factor and $S$ is the sliding coefficient. In general, these coefficients can be functions of $\theta$. For a cylindrical or circular embedded grain with radius $r$, the general equations of motion have been derived by Cahn and Taylor \cite{CahnActaMater04} as \begin{equation} \label{vperp} v_n = -\frac{dr}{dt} = M \left( \frac{\gamma - \beta \gamma'}{r} + \beta \sigma \right), \end{equation} \begin{equation} \label{vparallel} -v_\parallel = r \frac{d \theta}{dt} = \beta M \left( \frac{\gamma - \beta \gamma'}{r} + \beta \sigma \right) - S \left(\frac{\gamma'}{r} - \sigma \right), \end{equation} and from dividing Eq.~(\ref{vparallel}) by Eq.~(\ref{vperp}), \begin{equation} \label{full_dynamics} \frac{d \theta}{d \ln{r}} = - \beta + \frac{S(\gamma'/r - \sigma)}{M [(\gamma - \beta \gamma')/r + \beta \sigma]}, \end{equation} where $M$ is the mobility of grain boundary migration, $\gamma$ is the grain boundary energy, $\gamma'=d\gamma/d\theta$, and the absence of grain-matrix bulk free energy difference has been assumed. Thus, Eq.~(\ref{full_dynamics}) indicates that the grain motion can be described by a relationship between $\theta$ and $\ln r$ controlled by the strength of coupling (i.e., $\beta$) and sliding (i.e., $S$). It has been hypothesized that the coupling occurs or dominates in most cases of grain boundary motion, as verified in recent MD and PFC simulations \cite{CahnActaMater06,CahnPhiloMag06,TrauttActaMater12a,TrauttActaMater12b,ThomasNatCommun17,WuActaMater12,McReynoldsActaMater16,AdlandPRL13}, except in some narrow ranges of $\theta$ corresponding to symmetry points of the underlying crystalline lattice where $\beta \rightarrow 0$ \cite{CahnActaMater04}. While the contribution of sliding may be significant for some ranges of $\theta$, it would be difficult to determine quantitatively through the coefficient $S$, which is generally unknown. It is therefore more practical to measure sliding as a deviation from the idealized coupling behavior \cite{CahnActaMater06,TrauttActaMater12a,TrauttActaMater12b}. For example, for pure or perfect coupling with $S = 0$, Eq.~(\ref{full_dynamics}) reduces to a geometric relation \begin{equation} \label{coupling} \frac{d \theta}{d \ln{r}} = - \beta. \end{equation} Once the $\theta$ dependence of $\beta$ is known, integrating Eq.~(\ref{coupling}) would yield a master curve \cite{CahnActaMater04} governing the relation between grain radius $r$ and misorientation $\theta$.
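Once a branch $\beta(\theta)$ is specified, Eq.~(\ref{coupling}) can also be integrated numerically to generate the master curve, which provides a useful cross-check of the closed forms derived in the next subsection. A minimal illustrative sketch in Python is given below; the branch form $\beta = 2\tan(\theta/2 - k\pi/6)$ anticipates Sec.~\ref{sec:multimodes}, and the step sizes are assumptions.
\begin{verbatim}
import numpy as np

# Sketch: Euler integration of d(theta)/d(ln r) = -beta(theta) for one
# coupling branch, giving the (theta, ln r) master curve up to a constant.
def master_curve(theta0, k=0, dlnr=-1e-4, steps=8000):
    theta, lnr = theta0, 0.0
    ts, rs = [theta], [lnr]
    for _ in range(steps):
        beta = 2.0*np.tan(theta/2.0 - k*np.pi/6.0)
        theta -= beta*dlnr            # d(theta) = -beta d(ln r)
        lnr += dlnr                   # shrinking grain: ln r decreases
        ts.append(theta); rs.append(lnr)
    return np.array(ts), np.array(rs)

# Cross-check against the closed form ln r = -ln sin(theta/2) + C (k = 0):
th, lr = master_curve(np.deg2rad(12.0), k=0)
C = lr[0] + np.log(np.sin(th[0]/2.0))
assert np.allclose(lr, -np.log(np.sin(th/2.0)) + C, atol=1e-3)
\end{verbatim}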
The deviation of the behavior of an embedded grain from this ideal coupling relation can then be used to identify the degree of sliding. \subsection{Multiplicity of the coupling factor in 2D hexagonal systems} \label{sec:multimodes} Recent studies based on both geometric dislocation or disconnection models and MD simulations \cite{CahnActaMater06,CahnPhiloMag06,ThomasNatCommun17} have shown that the coupling factor $\beta$ is a multivalued function of the misorientation angle $\theta$, with different branches/modes of $\beta(\theta)$. Because $\beta$ is a purely geometric coupling factor, it can be derived from geometric considerations of the grain boundary, giving $\beta(\theta) \propto \tan(\theta/2 - k\pi/N)$ with $k=0,1,\ldots,N-1$ for $N$-fold rotational symmetry of the lattice \cite{CahnActaMater06,TrauttActaMater12b}. In previous studies of coupled grain boundary motion, either 3D cubic or 2D square symmetry was considered, corresponding to $N=4$; only the first two modes, $k=0,1$, were observed, i.e., $\beta_1=\beta_{<100>}=2\tan(\theta/2)$ and $\beta_2=\beta_{<110>}=2\tan(\theta/2-\pi/4)$ \cite{CahnActaMater06,CahnPhiloMag06}, but not the other two, which would involve angles that are too large. For the 2D hexagonal systems examined here, we have six-fold symmetry (i.e., $N=6$). Following a similar procedure as in Ref.~\cite{CahnActaMater06} based on the dislocation model, the coupling factor for the single-component honeycomb structure is obtained as \begin{equation} \label{beta1} \beta_1 = 2 \tan \left( \frac{\theta}{2} \right). \end{equation} Substituting it into Eq.~(\ref{coupling}) gives $d\ln r = -d\theta/[2\tan(\theta/2)] = -\frac{1}{2}\cot(\theta/2)\,d\theta$, which integrates to (with an integration constant $C_1$) \begin{equation} \label{lnr1} \ln r = - \ln{\left[\sin{\left(\frac{\theta}{2}\right)}\right]} + C_1, \end{equation} which can be used to obtain a master curve relating $r$ and $\theta$ for the embedded grain in the case of perfect coupling. The coupling factor $\beta_1$ in Eq.~(\ref{beta1}) represents a positive branch of coupling. Another branch, with negative coupling factor $\beta$ related to the opposite direction of grain rotation, can be obtained by considering the $60\degree$ rotational invariance of the lattice structure, i.e., via $\theta \rightarrow \theta-\pi/3$ in Eq.~(\ref{beta1}), giving \begin{equation} \label{beta2} \beta_2 = - 2 \tan \left ( \frac{\pi}{6} - \frac{\theta}{2} \right ), \end{equation} which results in \begin{equation} \label{lnr2} \ln r = - \ln{\left[\sin{\left(\frac{\pi}{6}-\frac{\theta}{2}\right)}\right]} + C_2, \end{equation} where $C_2$ is the integration constant. As stated above, in general there are $N=6$ possible branches of the multivalued function $\beta(\theta)$ due to the six-fold symmetry of the honeycomb lattice, i.e., $\beta = 2 \tan{(\theta/2 - k \pi/6)}$ with $k = 0,1,\ldots,5$. Typically only the first two branches with the smallest magnitudes, i.e., $k = 0,1$ corresponding to Eqs.~(\ref{beta1}) and (\ref{beta2}) respectively, are expected to be found in practice, while the others are too large to couple effectively to grains that can be physically realized. This has been seen in our PFC simulations of 2D single-component systems of graphene, as will be shown in the next section. \begin{figure}[htb] \centering \includegraphics[width=0.8\textwidth]{beta_branches.png} \caption{Branches of $\beta(\theta)$ for 2D binary hexagonal systems.
Note that near the branching points of $\theta = 30 \degree$ and $\theta = 90 \degree$ the positive and negative branches of $\beta(\theta)$ approach the same magnitude.} \label{fig:beta_branches} \end{figure} For two-component 2D hexagonal materials like h-BN, the full range of grain boundary misorientation angles is from $0\degree$ to $120\degree$, instead of $60\degree$ for single-component systems like graphene, as a result of the binary $AB$ sublattice ordering and the subsequent lattice inversion symmetry breaking. For $\theta$ in the range of $[60\degree, 90\degree]$, we have $\beta = 2 \tan{(\theta/2 - \pi/6)}$, of the same form as Eq.~(\ref{beta2}) but now positive. The corresponding $r$ vs $\theta$ master curve then becomes \begin{equation} \label{lnr3} \ln r = - \ln{\left[\sin{\left(\frac{\theta}{2}-\frac{\pi}{6}\right)}\right]} + C_3. \end{equation} The negative branch of $\beta$ for the $\theta$ range of $[60\degree, 120\degree]$ can be obtained by replacing $\theta \rightarrow \theta-2\pi/3$ in Eq.~(\ref{beta1}) or equivalently, $\theta \rightarrow \theta-\pi/3$ in Eq.~(\ref{beta2}), i.e., \begin{equation} \label{beta3} \beta_3 = - 2 \tan \left ( \frac{\pi}{3} - \frac{\theta}{2} \right ), \end{equation} corresponding to the $k=2$ mode. From Eq.~(\ref{coupling}) we have \begin{equation} \label{lnr4} \ln r = - \ln{\left[\sin{\left(\frac{\pi}{3}-\frac{\theta}{2}\right)}\right]} + C_4. \end{equation} In Eqs.~(\ref{lnr3}) and (\ref{lnr4}) the integration constants $C_3$ and $C_4$ are determined by initial conditions. Thus, for a binary hexagonal system three coupling modes ($k=0,1,2$) can be identified, as shown in Eqs.~(\ref{beta1}), (\ref{beta2}), (\ref{beta3}) and Fig.~\ref{fig:beta_branches}, resulting in four $r$ vs $\theta$ master curves determined by Eqs.~(\ref{lnr1}), (\ref{lnr2}), (\ref{lnr3}), and (\ref{lnr4}). If the coupling mechanism between normal motion and tangential translation is attributed only to the geometric factors of lattice structure, as in the Cahn-Taylor formulation, the property of grain dynamics in the $\theta$ range of $[60\degree, 120\degree]$ is expected to mirror that of $[0\degree, 60\degree]$, resembling the behavior of single-component systems (e.g., graphene) which have the same honeycomb lattice. However, our simulation results indicate that this scenario based on purely geometric considerations is only partially true for a binary ordering system like h-BN, and additional factors due to the energetic contribution of binary components with inversion symmetry breaking and the resulting different defect structures need to be incorporated, leading to more complex behavior of grain motion as will be detailed below. \section{Simulations and Results} \subsection{Simulation method and setup} \label{sec:setup} We numerically solved the aforementioned PFC equations using a pseudospectral method in a square simulation box under periodic boundary conditions, with a resolution of 8 numerical grid points per lattice spacing. Although the numerical grid spacings $(\Delta x, \Delta y)$ can be chosen to minimize the system free energy of a single crystal with hexagonal structure, which can effectively remove most of the strain in the bulk and fit an integer number of atoms (density peaks) into the simulation box with equilibrium lattice spacing \cite{Taha17}, they would take different values for the single-component and binary systems simulated.
For a more direct comparison between these two types of system it is preferable to set up their simulation conditions to be as close as possible, and thus we used the same values of $\Delta x=\Delta y=2\pi/8$, which inevitably introduced weak strain into the simulation box. The stress induced in the whole system by the simulation setup should have some small quantitative influence on the grain motion but would not affect the overall results of grain boundary coupled motion examined in this work, given that grains of all the angles studied were set up with the same conditions. In addition, in polycrystalline samples of real materials each grain is usually subjected to the impact of the stress field generated by the other grains due to long-range elastic interaction, and our simulation of single embedded grains under background strain could serve as a related model system. The systems were initialized with the majority of the area in a perfect honeycomb lattice with the PFC density field $n$ or $n_A$ and $n_B$ approximated by the corresponding analytic expression in the one-mode approximation, to be further relaxed via the full PFC equations. This single crystal then had a circular area rotated by an initial angle $\theta_0$ to form an embedded grain, by projecting the $x$ and $y$ coordinates onto a new coordinate system rotated by $\theta_0$ and recalculating the approximated $n$ values for initialization. These embedded grains were allowed to evolve until they either ceased further motion, or shrank completely and disappeared with all the boundary defects annihilated. Delaunay triangulation was used to identify the connection from each simulated atomic site to its nearest neighbors, for measuring the locations of defects and local lattice orientations. Since the inverse of the honeycomb structure is triangular, which is more convenient for analysis, here we use the local density minimum (i.e., local minimum of $n$ or $n_A+n_B$, corresponding to the real vacancy site) as the ``atomic site'' in the triangulation to generate the Delaunay graph. If a site has exactly 6 neighbors, the angles of the triangulation edges were computed to calculate the local angle of the lattice in the neighborhood of that site. If the number of neighbors differs from 6, the site was labeled as a disclination defect. At a given time step the overall misorientation angle $\theta$ of the embedded grain was determined by averaging over the local angles of a group of sites within a small radius corresponding to approximately three lattice periods near the center of the grain, so that the angle calculation would not be affected by the shrinking grain until nearly the final moment of grain collapse. As the grains shrank and evolved we occasionally observed narrow regions of local angle change in the background crystal outside the grain. Some of these distortions would be the result of strain induced by the simulation box setup, but they were small in terms of both magnitude and spatial extent, typically showing as narrow bands 1-3 lattice periods wide that are faintly visible outside the grain (see e.g., Fig.~\ref{fig:binary_rotation}). In our setup the embedded grain was located far enough from the boundaries, and any defects which would nucleate near the edge of the strained simulation box were several lattice periods away from the grain and did not interact with the grain boundary in any obvious way. The grains frequently deviated from their initial circular shape during the course of evolution, becoming irregular or faceted.
Thus, to calculate an effective grain radius $r$ we constructed a polygon connecting the dislocations along the grain boundary and counted the number of sites within it, with the radius of the grain proportional to the square root of this number. This effectively averaged the radius around the circumference of the approximately circular grain. To study the $\theta$ dependent property of embedded grain rotation, a large number of simulations were initialized at different initial angles $\theta_0$ and evolved with time. Each grain was initialized with a radius of 64 grid points, corresponding to a diameter of approximately 16 lattice spacings. This setup of relatively small initial grains is to better facilitate elastic relaxation and grain rotation, particularly for the binary PFC grains which exhibit much lower mobility and are more reluctant to rotate (due to higher Peierls barrier) as compared to the single-component ones. This limited grain size results in high curvature of the grain boundary and irregular grain shape during time evolution. For very low misorientation angles there would be few sparsely distributed dislocations around the irregular circumference of the curved grain. When the spacing of the defects along the grain boundary becomes too large as compared to the radius of curvature of the boundary, the boundary itself and the grain size become ill-defined and ambiguous. Grains in this state with widely spaced defects, which tended to act more like isolated dislocations rather than those of a distinct grain boundary, also tended to evolve slowly or even ceased to evolve after initial transients. Furthermore, at these lowest misorientation angles the normal-tangential coupling is the weakest according to the Cahn-Taylor formulation (see Fig.~\ref{fig:beta_branches}). Therefore, in this study we set the minimum initial misorientation at $\theta_0 \approx 5 \degree$, and mainly focus on the behavior of embedded grains at larger misorientation angles. To study different initial conditions at the same $\theta_0$ we shifted the initial grain center by small displacements along the horizontal and vertical directions, to create grain boundaries that intersect different sets of lattice planes. These displacements were set to integer numbers of numerical grid points, fewer than the 8 points covering one lattice period, since the properties of the crystal are periodic over this range. We have examined the outcomes of simulated grain evolution to ensure that the nonconserved PFC dynamics used in this work (with the imposed constraint of constant chemical potential) generated correct behaviors of defect dynamics. The number of dislocation defects remained conserved before their annihilation or recombination, which is consistent with that found in a previous PFC study of grain rotation in a 2D square lattice using this type of nonconserved dynamics \cite{AdlandPRL13}. Also, the process of defect annihilation and recombination was observed to occur only between neighboring dislocation rings but not at larger or arbitrary separation. A known issue of PFC simulations, which exists for both conserved and nonconserved dynamics, is that in some cases the number of atomic sites (density peaks) near a dislocation core would vary and thus not be strictly conserved during dislocation motion (e.g., climb), which can be attributed to vacancy diffusion (see e.g., discussions in Refs.~\cite{McReynoldsActaMater16} and \cite{TrauttActaMater12b}).
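The neighbor counting, local-angle measurement, and effective-radius estimate described above can be summarized by the following illustrative sketch (in Python, using SciPy's Delaunay triangulation; the function and variable names are hypothetical, not part of the original analysis code):
\begin{verbatim}
import numpy as np
from scipy.spatial import Delaunay

# Sketch of defect identification and local-orientation measurement.
# `sites` is an (N, 2) array of local density minima; a site whose
# coordination differs from 6 is flagged as a defect, and 6-coordinated
# sites get a local lattice angle folded into [0, 60) degrees.
def analyze(sites):
    tri = Delaunay(sites)
    indptr, nbr = tri.vertex_neighbor_vertices
    angles = np.full(len(sites), np.nan)
    defects = []
    for i in range(len(sites)):
        nbrs = nbr[indptr[i]:indptr[i+1]]
        if len(nbrs) != 6:
            defects.append(i)                 # disclination defect site
            continue
        v = sites[nbrs] - sites[i]
        a = np.degrees(np.arctan2(v[:, 1], v[:, 0])) % 60.0
        # circular mean over the 60-degree period of the triangular lattice
        z = np.mean(np.exp(1j*np.radians(6.0*a)))
        angles[i] = (np.degrees(np.angle(z))/6.0) % 60.0
    return angles, defects

# Effective grain radius: proportional to the square root of the number
# of sites enclosed by the polygon of boundary dislocations.
def effective_radius(n_enclosed, site_area=1.0):
    return np.sqrt(n_enclosed*site_area/np.pi)
\end{verbatim}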
Additionally, we compared our simulation results to those of Ref.~\cite{WuActaMater12} using conserved dynamics for the study of single-component PFC with triangular lattice symmetry. Similar behavior of single-component embedded grain motion has been found using both types of dynamics, with no significant differences between them (see below for more details), indicating that conserved and nonconserved PFC dynamics give similar and consistent results of grain rotation. \subsection{Embedded grains in single-component PFCs for graphene} Single-component PFCs exhibit a honeycomb lattice structure with six-fold rotational symmetry, so that $\theta < \theta_{\rm max}^{\rm s} = 60 \degree$. Embedded grains in our PFC simulations of graphene systems based on Eq.~(\ref{single PFC}) tended to have relatively high mobility, often rotating across $5 \degree$ or more before grain collapse. At low angles $(\theta < 15 \degree)$ they behaved as predicted by the Cahn-Taylor formulation for the coupling of normal-tangential motions, tending to decrease in radius while $\theta$ increased simultaneously, with some sample snapshots shown in Fig.~\ref{fig:single_rotation}. These embedded grains often underwent a faceting-defaceting transition, as described in Ref.~\cite{WuActaMater12} which examined the dynamics of circular grains in 2D PFC with triangular structure. At higher misalignment angles $(15 \degree < \theta < 30 \degree)$ grains tended to rotate much less or not at all, agreeing again with Ref.~\cite{WuActaMater12} and indicating that they experienced a substantial decrease in coupling. Details of the simulations (with the help of Delaunay triangulation for defect identification) showed that pairs of dislocation cores around the boundary of the embedded grain would approach each other and finally annihilate when the spacing between them became sufficiently small. This would then limit the maximum density of dislocations, preventing any further increase in $\theta$. Grains initialized at or near $\theta = 30 \degree = \theta_{\rm max}^{\rm s}/2$ did not rotate, and decreased in radius without any obvious changes in the overall structure or orientation. \begin{figure}[htbp] \centering \includegraphics[width=0.95\textwidth]{single_composite.png} \caption{Snapshots of grain shrinking and rotation obtained from single-component PFC simulation, at early [(a) and (c)] and late [(b) and (d)] times of evolution. The grain was initialized with $\theta_0 = 12 \degree$. The spatial configurations of the density profile $n$ are shown in (a) and (b), with honeycomb lattice structure. The corresponding spatial distributions of local orientation angle $\theta$ are given in (c) and (d) as obtained from Delaunay triangulation, where each site represents the local minimum of $n$ colored by the value of local angle $\theta$. Sites with more than 6 neighbors are marked in black, and those with fewer than 6 neighbors marked in white, indicating the locations of defect cores. Note that the increase in the degree of purple coloring inside the grain from (c) to (d) represents the grain's rotation towards larger $\theta$, and the faint streamers of color outside the grain represent areas of angle change caused by strain in the crystal.} \label{fig:single_rotation} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=\textwidth]{single_angle_time_2.png} \caption{Grain misorientation angle $\theta$ as a function of time, for single-component embedded grains initialized at two different angles in each panel.
(a) Grains were initialized in the low-angle regime around $12 \degree$ and $14 \degree$, and rotated towards increasing $\theta$ during time evolution and shrinking. (b) Grains were initialized in the high-angle regime around $46 \degree$ and $50 \degree$, and rotated towards lower $\theta$ instead.} \label{fig:single_angle_time} \end{figure} Results for $\theta > 30\degree$ are similar, mirroring those below $30\degree$ as expected from the $60\degree$ rotational invariance of the single-component honeycomb lattice. The small misorientations now correspond to angles close to $60\degree$ (i.e., $\theta \rightarrow 60\degree - \theta$), with the grain rotating towards lower angle instead during its shrinkage. Some sample simulation results for the time evolution of the misorientation angle $\theta$ are given in Fig.~\ref{fig:single_angle_time}, showing both cases of grain rotation towards increasing or decreasing $\theta$ when starting from initial angles below or above $30\degree$, respectively. This is consistent with previous MD simulations, which similarly gave two types of time-dependent behavior of the misorientation angle for 3D cylindrical grain rotation in an fcc metal (Cu) \cite{TrauttActaMater12a}. Note that some small nonmonotonic fluctuations occur at the early time range in Fig.~\ref{fig:single_angle_time}, due to initial transients of grain relaxation, while at later times the monotonic behavior of $\theta$, either increasing (Fig.~\ref{fig:single_angle_time}(a)) or decreasing (Fig.~\ref{fig:single_angle_time}(b)) with time, is maintained, with only slight variations within the measurement uncertainty. These two types of grain rotation to opposite directions correspond to the two coupling modes described in Sec.~\ref{sec:multimodes}, with two branches of the coupling factor $\beta$ of opposite signs given in Eqs.~(\ref{beta1}) and (\ref{beta2}). To develop a more quantitative understanding of our simulation results, we measured the effective grain radius $r$ as a function of the evolving grain misorientation $\theta$ and plot the result of $\ln{(r(\theta))}$ in Fig.~\ref{fig:single_rotation_lnr}. Also plotted are the corresponding analytic expressions of Eqs.~(\ref{lnr1}) and (\ref{lnr2}), serving as the master curves \cite{CahnActaMater04} in the condition of perfect normal-tangential coupling. We conducted a series of PFC simulations by initializing the subsequent embedded grain with an initial angle $\theta_0$ equal to the final-stage angle of the previous grain simulation, and obtained sections of the $\ln r$ curve across all possible $\theta$. To match or fit the measured data with the master curves, we need to consider the unknown integration constants $C_1$ and $C_2$ in the analytic results of Eqs.~(\ref{lnr1}) and (\ref{lnr2}) by following a similar step as in Ref.~\cite{CahnActaMater04}, i.e., shifting the $\ln r$ data obtained from different simulation runs vertically in the plot due to different initial conditions yielding different $C_1$ or $C_2$. The amount of shifting differs between sections of data (with each section referring to the data measured in a single run), as they correspond to different initial conditions; a short sketch of this alignment step is given below.
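The following is a minimal illustrative sketch (in Python; the helper names are hypothetical), in which the per-section integration constant is approximated by the mean vertical distance of that section's data from the master curve:
\begin{verbatim}
import numpy as np

# Sketch: align one section of (theta, ln r) data onto a master curve by
# a vertical shift equal to the section's mean offset from the curve.
def master_lnr(theta, branch=1):
    # branch 1: Eq. (lnr1); branch 2: Eq. (lnr2), each with C = 0 here
    if branch == 1:
        return -np.log(np.sin(theta/2.0))
    return -np.log(np.sin(np.pi/6.0 - theta/2.0))

def align_section(theta, lnr, branch=1):
    offset = np.mean(lnr - master_lnr(theta, branch))  # approximates C_1/C_2
    return lnr - offset, offset
\end{verbatim}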
For $\theta < 30\degree$, we first fitted the lowest-angle section of $\ln r$ vs $\theta$ data to Eq.~(\ref{lnr1}) to obtain $C_1$ and identify the full form of the master curve, and then for each subsequent data section vertically shifted all its data points by the same constant value determined by the averaged distance of the $\ln r$ data in this section from the master curve (which approximates the corresponding integration constant for this data section). Similar steps were taken for $\theta > 30\degree$ with the master curve described by Eq.~(\ref{lnr2}). In those data sections showing steep descent of $\ln r(\theta)$ with large deviations from the master curve (i.e., for $\theta$ not far from $30\degree$), the vertical shifting was made instead to simply connect the subsequent sections continuously. \begin{figure}[htbp] \centering \includegraphics[width=0.9\textwidth]{single_rotation.png} \caption{Plots of $\ln r$ vs $\theta$, including both the numerical data obtained from PFC simulations of single-component embedded grains and the analytic master curves (Eqs.~(\ref{lnr1}) and (\ref{lnr2})) for the two primary coupling branches of $\beta$. Different types of symbols represent the simulation results for grains initialized with different displacements of grain center relative to the background crystal, with the horizontal and vertical components of each displacement (in units of grid points) indicated.} \label{fig:single_rotation_lnr} \end{figure} The resulting outcomes presented in Fig.~\ref{fig:single_rotation_lnr} include three different types of initial grain setup (see Sec.~\ref{sec:setup}). A quantitative comparison of the numerical simulation data with the analytic master curves supports our qualitative analysis described above. Overall, the numerical data lies along the master curves up to $\theta$ or $\pi/3-\theta$ around $15\degree$, indicating the motion of the embedded grains is dominated by coupling, with the grain rotating with increasing or decreasing angle in two coupling modes as the grain radius decreases. In the intermediate range of misorientation (i.e., $\theta$ or $\pi/3-\theta$ around $10 \degree$ to $15 \degree$), there are some deviations from the master curves, with the $\theta$ variation of $\ln r$ proceeding at a slower rate, indicating the occurrence of sliding accompanying the coupling. At higher misorientation with $\theta$ or $\pi/3-\theta$ approaching $20\degree$, the normal motion (shrinking) and tangential translation of the grain begin to decouple, and the grain dynamics is dominated by sliding. This is indicated by the sharp deviation in the slope of the $\ln r(\theta)$ data away from the master curve predicted by Eq.~(\ref{lnr1}) or (\ref{lnr2}). In this regime $\theta$ changes only slightly while $\ln r$ decreases, corresponding to an effective rotation rate much smaller than what would be expected if the grain motion were dominated by coupling. When closer to $30\degree$ misorientation, the grain is no longer undergoing either sliding or coupling and shrinks without rotating, i.e., with $\theta$ remaining constant with time. Note that in our PFC simulations for single-component graphene, no dual behavior is observed, i.e., neither switching between different coupling modes during grain evolution nor the coexistence of dual modes at the same $\theta$.
This is consistent with the previous PFC simulation of the shearing of planar grain boundary in a 2D square lattice \cite{TrauttActaMater12b}, while in single-component systems so far the dual behavior has been found only in 3D MD simulations of grain boundaries \cite{CahnActaMater06,CahnPhiloMag06,ThomasNatCommun17}. \subsection{Binary embedded grains for h-BN} \label{sec:binary} The binary honeycomb crystalline structure with $AB$ sublattice ordering has a broken inversion symmetry due to the alternating $A$ and $B$ type atoms in the lattice, resulting in its rotational symmetry being changed from 6-fold to 3-fold, with maximum misorientation angle $\theta_{\rm max}^{\rm b} = 120 \degree$. Embedded grains in our binary PFC simulations of h-BN tended to have substantially lower mobility than their single-component counterparts. These grains would typically rotate less than $2 \degree$ or even less than $1 \degree$ before ceasing to evolve further over the timescales we investigated, with $r$ no longer decreasing, i.e., with the grains fixed in size and orientation. As described in Sec.~\ref{PFC}, here we used model parameters similar to those of the single-component case, with the same temperature parameter $\epsilon_A = \epsilon_B = 0.02$. Larger values of $\epsilon$ (corresponding to lower temperature) produced even smaller degrees of rotation and overall evolution of these binary grains. This could be attributed to the much higher Peierls barrier for dislocation motion and much stronger defect pinning effect in this binary system as compared to the single-component one, leading to more faceted interfaces and more rigid boundary motion, as seen in some simulation snapshots given in Fig.~\ref{fig:binary_rotation}. \begin{figure}[htbp] \centering \includegraphics[width=0.95\textwidth]{binary_composite.png} \caption{Simulation snapshots of a binary embedded grain during its time evolution at early [(a) and (c)] and late [(b) and (d)] times. The grain was initialized with $\theta_0 = 12 \degree$ and rotated to a larger angle. (a) and (b) show the spatial distribution of the density difference $n_A - n_B$, giving binary honeycomb structure, while (c) and (d) show the corresponding distribution of local angle $\theta$ of the same grain, with the coloring and defect marking similar to those of Fig.~\ref{fig:single_rotation}.} \label{fig:binary_rotation} \end{figure} The motion of the boundaries of binary embedded grains appeared more irregular and less rapid than the single-component case. Typically, as seen in the simulations, a majority of the boundary remained stationary for an interval of time and then one or a few boundary defects migrated inward. This caused the grain to rotate slightly, which then sometimes induced the motion of other defects. This intermittent and limited motion, driven by only a fraction of the total defects, would often leave the grain concave during part of its evolution, resulting in irregular grain boundary geometries as shown in Fig.~\ref{fig:binary_rotation}. We have conducted PFC simulations of binary embedded grains over the full misorientation range of $0\degree < \theta < 120 \degree$ and measured the time dependence of grain angle $\theta(t)$ and the corresponding effective grain radius $r(t)$.
Some sample results of $\theta$ vs $t$ in four characteristic regimes of grain misorientation are shown in Fig.~\ref{fig:binary_angle_time}, and the $\ln r$ vs $\theta$ plots for all the outcomes obtained are given in Fig.~\ref{fig:binary_rotation_all}, where the numerical simulation data has been shifted vertically according to the master curves of Eqs.~(\ref{lnr1}), (\ref{lnr2}), (\ref{lnr3}), and (\ref{lnr4}) to take into account the different initial conditions of different simulation runs, in the same manner as for the single-component case described above. \begin{figure}[htbp] \centering \includegraphics[width=\textwidth]{binary_angle_time_2.png} \caption{Time evolution of misorientation angle $\theta$ for several embedded grains in binary honeycomb PFC. (a) Grains were initialized at low angles with $0 \degree < \theta_0 < 30 \degree$, which all rotate towards increasing $\theta$ with similar rates. (b) Grains were initialized at higher angles of $30 \degree < \theta_0 < 60 \degree$, exhibiting a dual behavior. In two examples shown here, grains rotate in opposite directions despite being initialized with similar $\theta_0$. (c) Grains were initialized within the range of $60 \degree < \theta_0 < 90 \degree$, also showing the dual behavior. This dual behavior can also manifest as grains changing the direction of their rotation during the evolution, as shown in the examples here. (d) Grains were initialized within the range of $90 \degree < \theta_0 < 120 \degree$ and rotated towards decreasing $\theta$, without any dual behavior.} \label{fig:binary_angle_time} \end{figure} At the two edges of the angle range with smallest and largest sets of $\theta$ values, i.e., $\theta < 30 \degree$ or $\theta > 90 \degree$, the binary embedded grains behaved similarly to their single-component counterparts, shrinking and rotating through a small angle towards higher or lower $\theta$, respectively, as seen in Figs.~\ref{fig:binary_angle_time}(a) and \ref{fig:binary_angle_time}(d) for some examples of the time evolution of $\theta$, which increases or decreases monotonically over time, apart from some small fluctuations during initial transients. They correspond to two coupling modes with positive and negative coupling factors $\beta_1$ [Eq.~(\ref{beta1})] and $\beta_3$ [Eq.~(\ref{beta3})] and the associated master curves of Eqs.~(\ref{lnr1}) and (\ref{lnr4}), as presented in the $\ln r$ vs $\theta$ results of Fig.~\ref{fig:binary_rotation_all}. \begin{figure}[htbp] \centering \includegraphics[width=0.9\textwidth]{rotation_branches.png} \caption{Plots of $\ln r$ vs $\theta$ for binary embedded grains across the full range of misorientation $0 < \theta < 120 \degree$. Both the numerical data obtained from binary PFC simulations and the analytic master curves of Eqs.~(\ref{lnr1}), (\ref{lnr2}), (\ref{lnr3}), and (\ref{lnr4}) corresponding to all three coupling branches of $\beta$ are shown. Different types of symbols represent the simulation results starting from different initial displacements of grain center with respect to the background crystal, indicated by the horizontal and vertical components of each displacement (in units of grid points). The dual behavior appears within the range of $30 \degree < \theta < 90 \degree$.} \label{fig:binary_rotation_all} \end{figure} However, the binary grains initialized at the intermediate range of angles $30 \degree < \theta < 90 \degree$ exhibited a more complicated, dual behavior which is absent in the single-component 2D grain rotation.
It manifested as two forms of coupling-mode selection depending on the detailed local microstructures and energetics of elasticity and plasticity. Grains initialized with slightly different initial conditions could rotate in opposite directions, even if the initial $\theta_0$ was changed by less than $1 \degree$ or the initial displacement of the grain center by only a single grid point. An example is shown in Fig.~\ref{fig:binary_angle_time}(b). In another form of mode switching, some embedded grains initialized within this angle range would change their direction of rotation at an intermediate evolution time during a single simulation. Some examples are given in Fig.~\ref{fig:binary_angle_time}(c), where the nonmonotonic behavior of misorientation $\theta$ over time (after initial transients) is beyond the measurement errors and results from the dual behavior of grain rotation, manifesting as a sequence of rotating, halting, and reversing of the rotation direction during evolution. It is interesting to note that these PFC simulation results are consistent with those of a very recent experiment on the shrinking and rotation of h-BN grains \cite{RenACSNano20}, which observed the phenomenon of dual-mode switching for initial $\theta_0 \sim 35.6\degree$ and the single-coupling-mode behavior for $\theta_0 < 22\degree$. A discrepancy occurs for $38 \degree < \theta_0 < 60\degree$, where only the unidirectional rotation towards smaller $\theta$ (related to the negative branch of $\beta$ coupling) was observed experimentally, while the simulation data shown in Figs.~\ref{fig:binary_angle_time}(b) and \ref{fig:binary_rotation_all} indicates a dual behavior of two coupling modes up to $\theta \sim 50 \degree$, beyond which the grain could either rotate according to the negative coupling mode as seen in the experiment, or significantly deviate from the coupled motion with much smaller rotation. This discrepancy could be related to the limited number of grain cases measured in the experiment and some quantitative differences in the system conditions simulated by PFC (e.g., different conditions of the nanowelding method and electron irradiation used in experiments). The systematic results of our PFC study presented in Fig.~\ref{fig:binary_rotation_all} are symmetric with respect to $\theta_{\rm max}^{\rm b}/2=60\degree$, as expected from the binary honeycomb lattice. As described above, the behavior of the binary embedded grains in the range of $30\degree < \theta < 90\degree$ can be understood not as random rotations, but as the selection or switching between dual coupling modes. Such dual behavior occurs when more than one branch of $\beta(\theta)$ is accessible for the grain coupled motion, so that both positive and negative coupling modes coexist at a given $\theta$ \cite{CahnActaMater06}, which results in the observation of both opposite directions of grain rotation. This can be shown more systematically by aligning and comparing the corresponding numerical data with the master curves of both modes in Fig.~\ref{fig:binary_rotation_all}. We separated the portions of the grain rotation data where $\theta$ monotonically changed with time, either increasing or decreasing, which correspond to separate coupling modes, and then followed the same general procedure of vertical shifting based on a master curve.
Data where $\theta$ increased with time during grain rotation and shrinkage was shifted with respect to the branch of the curve with $\partial \ln r / \partial \theta < 0$ (i.e., Eq.~(\ref{lnr1}) or (\ref{lnr3})). Correspondingly, data with time-decreasing $\theta$ was shifted with respect to the other branch with $\partial \ln r / \partial \theta > 0$ (i.e., Eq.~(\ref{lnr2}) or (\ref{lnr4})), indicating the opposite direction of rotation. Dual behavior of coupling results in both branches of simulation data clearly following the respective master curves of coupled motion, as seen in Fig.~\ref{fig:binary_rotation_all}, whereas random rotation would not show any correlation between $r$ and $\theta$. A steep departure of the simulation results of $\ln r(\theta)$ from the analytic master curves for perfect coupling could occur for $\theta$ near $\theta_{\rm max}^{\rm b}/2 = 60\degree$ (see the lower branches of numerical data in Fig.~\ref{fig:binary_rotation_all}), indicating the grains begin to experience sliding and the decoupling between normal and tangential motions (i.e., with $\beta$ descending towards zero). This is similar to the single-component case, although now it is near $60\degree$ instead of $30\degree$ due to the binary lattice symmetry. Grains initialized exactly at $60\degree$ create inversion domain boundaries, regions separating two parts of the crystal where the $A$- and $B$-type atoms are reversed. These grains tend to shrink rigidly through collective atomic displacements of defects, without rotating or sliding, as demonstrated in previous work \cite{Taha17}. Different from the single-component results given in Fig.~\ref{fig:single_rotation_lnr}, the simulation data we obtained for binary grains closely follows the master curve right up to $\theta = 30\degree$ without steep descent of $\ln r(\theta)$, implying that the grain motions remain in the coupling mode. However, the dual behavior in its nearby range of $\theta$ caused the direction of rotation to fluctuate between opposite directions increasingly rapidly when approaching the intersection of different coupling modes at $\theta = 30\degree$ or $90\degree$, where the positive and negative branches of the coupling factor $\beta(\theta)$ are nearly of the same magnitude as shown in Fig.~\ref{fig:beta_branches}. It would lead to a frustrated state with almost no net rotation observed, similar to the scenario discussed in Ref.~\cite{TrauttActaMater12a}. This has been confirmed in our PFC simulations. Grains initialized with $\theta_0$ within this narrow angle range were found to experience little change in $\theta$ but with many small fluctuations over time, representing small degrees of rotations in opposite directions that frequently reversed. We were thus unable to assemble meaningful segments of rotation data within approximately $5 \degree$ of these intersection points due to this frustrated behavior (which results in the corresponding data-free regions in Fig.~\ref{fig:binary_rotation_all}). It is noted that the simulation data presented in Fig.~\ref{fig:binary_rotation_all} appear much smoother and better aligned with the analytic master curves as compared to the single-component counterparts shown in Fig.~\ref{fig:single_rotation_lnr}, which would seem to indicate a much smaller degree of sliding for angles far enough from $\theta_{\rm max}^{\rm b}/2 = 60\degree$, as measured by the relative deviations from the master curves for perfect coupling. However, this is not as conclusive as it appears.
Because the binary embedded grains exhibited much lower mobility and typically rotated through less than 1-2 degrees before ceasing to evolve, Fig.~\ref{fig:binary_rotation_all} is comprised of many short segments of rotation data corresponding to independent simulation runs initialized at different values of $\theta_0$. In contrast, for single-component grains the much larger angle range of grain rotation during each simulation led to longer sections of data in Fig.~\ref{fig:single_rotation_lnr}, allowing any fluctuations over time in grain size ($\ln r$) and/or misorientation $\theta$ to become apparent. The extent of such fluctuations could exceed the short range of each individual data segment in Fig.~\ref{fig:binary_rotation_all} for binary grains, and hence these fluctuations might not be effectively sampled in our binary PFC simulations. Therefore, it would be difficult to determine accurately the degree of sliding in the binary system, other than noting that the binary grain's motion is dominated by coupling and sliding appears to play a secondary or minor role for $\theta$ far from $60\degree$. Only in the range of angles not far from $60\degree$ were the variations large enough to obviously depart from the master curves predicted by the Cahn-Taylor formulation for several sequential segments of data, resulting in a clear signal of sliding. \section{Discussion} The main difference between the above simulation results of single-component (graphene) and binary (h-BN) 2D grain dynamics, other than the doubling of misorientation range, is the appearance of dual behavior of coupling modes for binary grain rotation that is absent in the single-component 2D system, although the lattice sites of both are of the same honeycomb structure. The angle range exhibiting this dual-mode behavior is much broader than that found in the previous MD studies of 3D single-component fcc metals. Without loss of generality, in the following we consider half of the rotational symmetry period of the binary honeycomb system, $0\degree < \theta < 60\degree$, with the same analysis applied to the other half. The first part of the angle range $0 \degree < \theta < 30 \degree$ follows the same behavior as for the single-component honeycomb grains, with the coupling factor $\beta_1$ governed by Eq.~(\ref{beta1}); thus grains initialized in this range of angles rotate in a single direction towards increasing $\theta$, i.e., in the positive branch of coupling. To understand the dual behavior of binary grain coupled motion at misorientation angles $30\degree < \theta < 60\degree$, we need to consider factors beyond the purely geometric ones leading to Eqs.~(\ref{beta1})--(\ref{lnr4}) in the Cahn-Taylor formulation. The lattice-site structure of binary honeycomb 2D crystals like h-BN with sublattice ordering is identical to that of the single-component honeycomb (graphene), and thus for both of them the lattice sites themselves have a six-fold rotational symmetry with $\theta_{\rm max}^{\rm s}=60\degree$. This creates the identical geometric effect causing normal-tangential coupled motions of the grain boundary with shear deformation, leading to unidirectional rotation of the embedded grains towards $\theta_{\rm max}^{\rm s}/2=30\degree$. In the range of $30\degree < \theta < 60\degree$ this purely geometric effect yields the negative mode of coupling (i.e., $\beta_2$ in Eq.~(\ref{beta2})), with no switching to the other positive $\beta_1$ mode found in simulations of 2D single-component grains.
Nevertheless, the binary system is not identical to its single-component counterpart. While the structure of the lattice sites is unchanged, the alternating $A$ and $B$ atomic components with sublattice ordering break the inversion symmetry of the lattice, resulting in a reduced three-fold symmetry with $\theta_{\rm max}^{\rm b} = 120\degree$. This is due to the bonding-energy difference between $A$-$B$ heteroelemental neighborings and $A$-$A$ or $B$-$B$ homoelemental ones, a key factor that has been incorporated in the PFC free energy functional Eq.~(\ref{hBN energy}). A typical example showing this inversion symmetry breaking is the occurrence of $60\degree$ grain boundaries for which the lattice planes remain continuous across the boundary without any lattice site mismatch, other than the reversal of the $A$ and $B$ components. As a result of this symmetry breaking, not only do three coupling modes become accessible as described in Sec.~\ref{sec:multimodes} (instead of two modes in single-component systems), but the accessibility of the first positive branch of $\beta_1$ is also extended to the range of $30\degree < \theta < 60\degree$ (and similarly for angles beyond $60\degree$), as seen in Fig.~\ref{fig:binary_rotation_all}, leading to the competition between two coupling branches in that angle range and hence the coexistence of both opposite directions of grain rotation. This dual behavior can be attributed to the following two factors related to the specific defect structures and energy of grain boundaries. It is known that the selection between different coupling modes (if they are both accessible) is mainly dependent on the detailed atomic microstructures of local dislocation defects at the grain boundary and the induced stresses to activate a specific coupling mode through its corresponding atomic mechanisms for distorting and transforming structural units at the boundary \cite{CahnActaMater06}. Defects in single-component grain boundaries are purely geometric, with the change in the lattice orientation being accommodated by the corresponding changes in the individual structural units or rings. This leads to the prevalence of $5|7$ dislocation cores in single-component 2D hexagonal materials like graphene \cite{Yazyev14,Huang11,HirvonenPRB16,LiJMPS18,Zhou19}, with other types of defect motifs being rare. In 2D binary hexagonal materials like h-BN and TMDs the situation is more complex, with a much richer variety of dislocation core structures such as $4|8$, $5|7$, $4|6$, $4|10$, $6|8$, $4|4$, $8|8$, and others depending on the specific conditions of the grain boundary \cite{GibbJACS13,RenNanoLett19,RenACSNano20,LinACSNano15,MendesACSNano19,Taha17}. This is related to the distinction between heteroelemental and homoelemental neighborings, and thus the formation of defect core structures containing more energetically favorable $A$-$B$ heteroelemental bonds. In addition, in the high-angle range with $30\degree < \theta < 60\degree$ the number density of defects in single-component grain boundaries decreases as $\theta$ increases due to the $60\degree$ rotational invariance, with well-separated dislocation cores at large $\theta$. However, in binary grain boundaries the number density of defects remains roughly similar within this angle range, although with the variation of specific defect structures, given that at $60 \degree$ the grain boundary exists as an inversion domain.
The corresponding grain boundaries at these misorientations are composed of arrays of mostly connected dislocation cores, such that most atoms along the boundary are part of defects (see some examples in Ref.~\cite{Taha17}). Detailed mechanisms of grain boundary motion with these connected defects should then be governed by collective local atomic displacements that differ from those of low-angle boundaries (and the single-component high-angle cases) consisting of separated or dispersed dislocations. Consequently, during the time evolution of single-component grains most defects at the boundary retain the same type of dislocation structure (i.e., $5|7$, as also seen in our simulations such as Fig.~\ref{fig:single_rotation}), resulting in a single atomic mechanism for grain boundary migration, as there is no structural transformation between defect types. Thus only one coupling mode can be activated for a given misorientation $\theta$, excluding the appearance of dual behavior. By contrast, in a binary system the distortion and transformation of structural units at the grain boundary can involve different types of dynamical processes, given the degeneracy of the various types of defect ring structures. These would correspond to different atomic mechanisms of coupled grain motion, such that the boundary migration mechanism could even change during the time evolution of a binary grain, accompanied by structural changes of some boundary defects, leading to the dual behavior of mode switching found in our PFC simulations. The activation and change of coupling modes can be readily achieved as long as the related coupling branches are accessible, including those requiring mirror symmetry breaking of local structural units, due to the diversity of the available defect configurations in the binary ordered lattice. This could also explain the much wider angle range showing dual behavior as compared to 3D single-component systems, where the grain boundary migration mechanisms for mode switching are much more difficult to realize and are restricted to a narrow regime near the angle of transition between the two modes \cite{CahnActaMater06}. \begin{figure}[htbp] \centering \includegraphics[width=0.95\textwidth]{defect_figure.png} \caption{Simulation snapshots for two types of dislocation core transformations around a portion of the boundary of binary embedded grains initialized with $\theta_0$ close to $50 \degree$. (a) and (b) show the time evolution of a section of the lower-right quadrant of a grain that rotates from the initial $\theta_0=48.05\degree$ towards decreasing $\theta$, driven by the glide of $4|8$ or $8|8$ defect pairs. (c) and (d) show the evolution of a section of the upper-right quadrant of a grain that instead rotates from the initial $\theta_0=49.17\degree$ towards increasing $\theta$, where the grain boundary migrates through the transformation of $4|4$ defect cores, together with neighboring 6-membered rings, into $8|8$ dislocations, and through the glide of a $10$-membered defect ring.
The Burgers vectors $\vec{b}$ for defect pairs in the dislocation clusters involved in structural transformation of the grain boundary are marked with dark green arrows.} \label{fig:defect_figure} \end{figure} The above analysis is consistent with recent experimental observations of 2D h-BN monolayers \cite{RenACSNano20}, where high-resolution TEM imaging of time-evolving h-BN samples showed different types of atomic mechanisms of coupled grain boundary motion, involving dislocation glide and climb at small vs large misorientations $\theta$ and associated with different dominant grain boundary defect configurations and dynamics. These included the motion of $5|7$ dislocations for low-angle misorientations, similar to that of graphene, leading to a single positive mode of shear-coupled motion as also shown above, and more complicated processes at larger $\theta$ involving $5|8|4|7$ and $5|7$ dislocations, yielding the coupled motion of another, negative mode or the dual behavior of mode switching \cite{RenACSNano20}. Richer types of defect transformation have been found in our PFC simulations of binary grain rotation. Some examples in the angle range showing dual behavior are given in Fig.~\ref{fig:defect_figure}, where grain boundary motions are dominated by atomic displacements of $4|8$, $4|4$, $8|8$, and $4|10$ dislocations. Rotation towards decreasing $\theta$ was associated with the glide of $4|8$ or $8|8$ dislocations (see Fig.~\ref{fig:defect_figure}(a)-(b)), whereas rotation towards increasing $\theta$ was mainly associated with chains of $4|4$ dislocations combining with nearby 6-membered rings and opening up into $4|8$ and $8|8$ dislocations (and the reverse procedure), as well as with the glide of $10$-membered rings (see the example shown in Fig.~\ref{fig:defect_figure}(c)-(d)). The difference between our simulations and the experimental observations of specific defect behaviors could be attributed to the different setup conditions in the experiment, which used a nanowelding method for grain boundary fabrication and electron beam irradiation (in addition to thermal activation) to activate the grain motion \cite{RenACSNano20}, as compared to the annealing process simulated here. Nevertheless, both yield qualitatively consistent results showing different types of dynamic pathways of defect evolution corresponding to different types of grain rotation and coupling modes. Another factor facilitating the dual behavior of coupled motion would be related to the angle dependence of the grain boundary energy. For single-component graphene, with the full $\theta$ range of $[0\degree, 60\degree]$, the curve of the grain boundary energy $\gamma$ per unit length is known to have a shallow dip at $\theta$ slightly higher than $30\degree$ \cite{HirvonenPRB16}, which would not affect the dominance of the shear-coupled mechanism. For the binary system of h-BN, with the full angle range of $[0\degree, 120\degree]$, recent calculations showed a regime of maximum $\gamma$ around $30\degree$-$40\degree$ but a steep local minimum at $\theta=60\degree$ (with $\gamma$ reduced by half) \cite{Taha17}.
This energetic factor could then further facilitate the accessibility of an additional $\beta_1$ or $\beta_3$ coupling mode in the range of $30\degree < \theta < 90\degree$ (the lower branches in Fig.~\ref{fig:binary_rotation_all}), with grains tending to rotate towards $60\degree$, enabling the occurrence of dual behavior of coupling modes in addition to the atomic mechanisms of defect motion described above, which are probably the primary driving factor. In the limit of $\theta$ close to $60\degree$, i.e., when $55\degree < \theta < 75 \degree$, the grain boundary structure is close to that of an inversion domain, appearing as an array of connected defect rings, mostly of the $4|8$ type. Thus, the grain shrinking dynamics is similar to that of inversion domains, controlled by a chain reaction of atomic displacements of the boundary dislocation cores and the corresponding defect shape transformation, without bulk motion of the grain \cite{Taha17}. This then results in the absence of any grain rotation, sliding, or coupled motion, with $\theta$ kept unchanged during grain shrinkage, corresponding to the mid region of Fig.~\ref{fig:binary_rotation_all}. In the other limit, of $\theta$ very close to $30\degree$ or $90\degree$, the absence of net grain rotation is due to the corresponding frustrated state explained above in Sec.~\ref{sec:binary}. This is caused by the similar magnitude of the coupling factor $\beta$ for the two positive and negative modes, such that both atomic mechanisms of defect motion leading to opposite rotation directions can occur with almost equal likelihood at a given moment during the time evolution of a grain. \section{Conclusions} We have conducted a systematic study of the dynamics of single- and two-component grains and the coupled grain boundary motion in 2D crystals with honeycomb lattice symmetry, through the PFC modeling of graphene and h-BN type 2D hexagonal materials. By tracking the motion of time-evolving embedded grains misoriented with respect to the surrounding crystalline matrix, we have analyzed the angle dependence of grain boundary dynamics and its implications for grain growth across the full range of misorientations. Our results indicate that over the majority of misorientation angles $\theta$ the behaviors of both single-component and binary 2D grains are governed by the coupling between normal and tangential motions of the grain boundary, closely following the Cahn-Taylor formulation in terms of the $\theta$ dependence of the coupling factor $\beta(\theta)$ and the grain radius $r(\theta)$ as given in Eqs.~(\ref{beta1})--(\ref{lnr4}). This is seen both from the simulation outcomes of grain rotation direction and from the matching of numerical data to the analytic master curves for perfect coupling. The coupling is weakened when $\theta$ approaches the midpoint of the maximum allowed misorientation angle (i.e., $30\degree$ for the single-component and $60\degree$ for the binary honeycomb lattice), accompanied by the occurrence or dominance of grain boundary sliding. In the close vicinity of the mid angle, no angle change is observed during grain shrinking, indicating the absence of both coupling and sliding. One of our key findings is the occurrence of dual behavior of grain coupled motion within the misorientation range of $30\degree < \theta < 90\degree$ in the binary 2D hexagonal system, which is missing in the 2D single-component counterpart.
It manifests as grains being able to rotate in both opposite directions, towards higher and lower angles, corresponding to two different coupling modes. Our systematic study predicts a broad range of misorientation angles exhibiting this dual behavior of mode selection or switching, much broader than that observed previously in 3D simulations of single-component systems \cite{CahnActaMater06,CahnPhiloMag06,ThomasNatCommun17} and in experiments on 2D h-BN monolayers \cite{RenACSNano20}, as a result of the much more diverse family of defect core structures and dynamics available in binary 2D materials. This dual behavior in 2D binary systems originates from the coexistence of both positive and negative branches of coupling in this angle range; the additional mode, associated with grain rotation towards $\theta=60\degree$, is made accessible through different dominant types of boundary defect configurations and defect evolution pathways, and is further facilitated by the sharp local minimum of the grain boundary energy at $60 \degree$ misorientation. The activation, selection, and switching of different coupling modes are then determined by the specific dislocation core microstructures and arrangements at the embedded grain boundary and by their structural transformations during the grain evolution. The rich variety of dislocation structures enabling this 2D dual behavior originates from the breaking of inversion symmetry in the binary honeycomb lattice with $AB$ sublattice ordering, given the energetic difference between $A$-$B$ heteroelemental and $A$-$A$ or $B$-$B$ homoelemental bondings, a pivotal factor that is absent in single-component systems. Similar material systems, in which inversion symmetry is broken because alternating lattice sites are occupied by different atomic species, should be expected to exhibit similar dual behavior of coupling. This would be important for the understanding and control of grain growth mechanisms and hence for the production of large-scale single crystals, not only for 2D materials like the h-BN modeled here but also for other binary materials such as TMDs, with similar sublattice ordering and defect configurations, as well as for other 2D or 3D multi-component material systems. These materials would be governed by more complex dynamics of grain evolution when subjected to annealing as compared to single-component ones, as a result of the competing mechanisms involved, particularly those caused by the dual behavior of coupled grain boundary motion and the collective atomic displacements underlying various types of defect transformation. All of this emphasizes the need for further studies to explore more detailed atomic mechanisms of grain rotation and dynamics in multi-component systems under a wider range of growth and processing conditions. \section*{Acknowledgements} This work was supported by the National Science Foundation under Grant No. DMR-2006446. We thank Talbot Knighton for help with the coding implementation of angle calculations based on Delaunay triangulation. \bibliographystyle{elsarticle-num}
\section{Introduction} Industrial Control Systems (ICS) are found in modern critical infrastructure (CI) such as the electric power grid and water treatment plants. The primary role of an ICS is to control the underlying processes in a CI. Such control is facilitated through the use of computing and communication elements such as Programmable Logic Controllers (PLCs) and Supervisory Control and Data Acquisition (SCADA) systems, and communications networks. The PLCs receive data from sensors, compute control actions, and send these over to the actuators for effecting control over the process. The SCADA workstations are used to exert high-level control over the PLCs and the process, and provide a view into the current process state. Each of these computing elements is vulnerable to cyber and physical attacks, as evident from several widely reported successful attempts such as those reported in \cite{weinbergerStuxnet,ukraineBlackout,germanSteelMill}. Such attacks have demonstrated that while air-gapping might be considered a means of securing an ICS, it does not guarantee that attackers are kept from gaining access to the system. Successful attacks on ICS have led to research on preventing, detecting, and reacting to different forms of cyber attacks. \textit{Anomaly detection} aims at raising an alert when the controlled process in an ICS moves from its normal to an unexpected, i.e. {\em anomalous}, state. The challenge with the proposed techniques is that they are not able to distinguish between an anomaly raised due to an \textit{attack} and one raised due to a \textit{system fault}. \section{Problem Identification} Approaches used in the design of such anomaly detectors fall into two broad categories: \textit{design-centric}\,\cite{adepuMathurTDSC2018} and \textit{data-centric}\,\cite{NoiseMatters_ACSAC2018}. The focus of this position paper is on the \textit{data-centric} approaches that rely on well-known methods for model creation such as those found in the system identification\,\cite{van1996} and machine learning {literature}\cite{CPS_security_survey2017}. While the use of machine learning to design anomaly detectors becomes attractive with the increasing availability of data and advanced computational resources, recent {attempts} \cite{sugumar2019method,CPSweek2016_stealthy_replayATT,adepuMathurTDSC2018,fabioFlorianBulloFrancesco,urbinaGiraldoTipenhauerCardenas} to create anomaly detectors have concluded that the majority of techniques are not able to distinguish a \textit{fault} from an \textit{attack}. This opinion paper focuses on this challenge in the hope that other researchers will come forward and propose practical solutions to overcome it. \begin{figure} \centering \includegraphics[width=0.9\linewidth]{figures/attack.eps} \caption{Traditional responses to anomalies: attacks and faults } \label{fig:anomlalies} \end{figure} $\blacksquare $ \textbf{Relevance of a differentiator and correctness:} First, let us investigate why a robust differentiator is important. As part of a survey, we interviewed plant operators and cyber-security researchers with the following questions: \begin{itemize} \item Do you have measures to differentiate between an \textit{attack} and a \textit{fault}? \item If you have measures to identify an attack, what is the typical response to it, i.e., when an alarm is raised for a potential attack, what are the system and the operator configured to do? \item Would the response be different if the algorithm indicated that it was a system fault?
\end{itemize} We interviewed 19 participants, including researchers at state-of-the-art testbeds such as SWaT \cite{swat2016}, EPIC \cite{siddiqi2018practical} and WADI \cite{wadi2017}, experts in ICS security, and engineers at industrial production plants for steel and water. Eighteen participants said that they had not adopted any measure so far, nor come across even a near-robust algorithm to differentiate between the two types of anomalies. Based on the responses to the aforementioned questions, we could confirm that both the system's and the people's behavior vary considerably between the contexts of attack and fault alarms. We sketch the findings in Fig. \ref{fig:anomlalies}. A CI plant may go into shutdown for hours upon identification of an attack. A correct classification, i.e., an attack identified as an attack and a fault as a fault, would prevent loss of lives and revenue, whereas incorrect, cross classifications would be detrimental to the system. A fault raised as an attack has a high psychological impact, inviting panic and responses of disproportionate magnitude. Hence, if anomaly detectors are deployed, faults in sensors or actuators might lead to shutdowns of whole units and incur huge economic losses. Similarly, an attack identified as a fault would understate the losses, and repetitions would discourage adoption of such detectors. We note that, psychologically, an attack is dealt with much higher alertness and a harder response strategy. Our participants did acknowledge this relevance, as can be inferred from responses like: \begin{itemize} \item \textit{"Yes, no need to shut the whole system under assumption of attack when a fault might have occurred."} \item \textit{"Yes, if an algorithm can identify it to be a system fault immediately, it will reduce the amount of time required during the identification process. True attacks can be immediately treated as attacks and response time will be much shorter."} \item \textit{"Yes, the incidents should be handled differently."} \item \textit{"... But the incident response may have different outcome if it isn't an attack. Such as forensics and developing control to handle such attacks may not be performed."} \end{itemize} Five of the participants reported working on response and mitigation strategy design and suggested that a differentiating strategy would save time and help narrow down the focus and search spaces. Participants working in production industries reported following the traditional methods of \textit{predictive maintenance} in case of anomalies, i.e., deviations of variables from historical behavior and process physics (Fig.~\ref{fig:anomlalies}); the process keeps running in this case. \textit{Breakdown maintenance} is conducted in case of serious faults; here, either spare parts are used or redundant process chains are activated. It is to be noted that dedicated employees in a \textit{central maintenance team} handle such faults, a practice that has been maturing over the decades, but intelligent cyber attackers might hide their attacks inside such checks. Moreover, faults and predictive maintenance occur fairly frequently. A cyber attack, on the other hand, leads to process halts and plant shutdowns, as in the recent case of Renault \cite{renault}. To make matters worse, cyber attacks against industrial targets have been growing rapidly as well \cite{rise}. As noted before, any cross-error would invite economic and opportunity losses. Hence, a means of segregation is necessary and very timely.
\section{Why is the segregation difficult?} We, like past research works \cite{sugumar2019method,CPSweek2016_stealthy_replayATT,adepuMathurTDSC2018,Asiaccs2018_mujeeb_noiseprint,fabioFlorianBulloFrancesco,urbinaGiraldoTipenhauerCardenas,John_ACSAC2018}, note that it is fairly difficult to differentiate between these two vectors of anomalies. As an example, consider Fig.~\ref{fig:leak_exp_wadi}, which shows data from two different pressure sensors in a water distribution testbed. On the left-hand side a water leakage attack was executed, while on the right-hand side a fault similar to a leakage occurred; to a statistical detector both appear the same. A few of the challenges are summarized herein: \\ \begin{figure}[h] \centering \includegraphics[scale=0.11]{figures/l2_19nov19.eps} \caption{It is hard to distinguish between an attack (on the left) and a fault (on the right) for two different sensors in a water distribution testbed.} \label{fig:leak_exp_wadi} \end{figure} $\blacksquare$ \textbf{Challenge 1. Not modeling properties of faults and attacks:} Most related works look at the consequences of an attack or fault rather than at the properties of attacks and faults themselves. For example, a fault is usually random and results in an abrupt change for a short period of time, whereas an attack is properly planned and is executed over a longer time to do substantial damage. Missing the opportunity to exploit such differential properties adds to the hardness of the problem. \\ $\blacksquare$ \textbf{Challenge 2. Unknown attacks and faults:} The base assumption for an anomaly detection method is that there will always be unknown attacks; therefore, blacklisting or whitelisting cannot be used as an effective method~\cite{sommer2010paxson_anomaly_detection_ML}. We therefore rely on behavioral methods, which simply look at the end result of anomalous behavior and raise an alarm. The challenge is that it is not possible to create a model for each possible fault that is out there in the wild. \newline $\blacksquare$ \textbf{Challenge 3. Lack of hybrid models:} There are no precise hybrid models spanning the cyber and physical domains, and it is not clear how to combine and use information from these independent and orthogonal spaces through an analytical lens. Most anomaly detection methods are implemented either in the cyber layer or in the physical layer, and there is no one-size-fits-all technique due to the wide variety of ICS. \section{Suggested Directions of Approach} There have been many efforts to detect sensor attacks, but most cannot distinguish between an attack and a faulty sensor measurement. An attempt was made recently~\cite{park2015sensor_transientFaults} to model and detect transient faults. It models a transient fault for each sensor, and an algorithm is designed to detect and identify attacks in the presence of transient faults. This approach can detect transient faults (e.g., a GPS reporting faulty readings inside a tunnel) but not permanent faults/attacks, e.g., a DDoS attack or cutting the wire of a sensor. Moreover, this work does not consider a stealthy attacker trying to imitate a transient fault. It also assumes multiple sensors for the same physical state variable, as well as an abstract sensor model in which each sensor reports an interval of readings, i.e., a set rather than a single value. Therefore, it is highly desirable to come up with novel methods that can differentiate between a fault and an attack under a realistic threat model.
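To make the difficulty concrete, the following minimal Python sketch (entirely synthetic signal and thresholds, not the testbed data of Fig.~\ref{fig:leak_exp_wadi}) simulates a pressure trace that decays identically whether an attacker spoofs the sensor or a physical leak drains the line; a standard one-sided CUSUM detector raises the same alarm in both cases, because it observes only the consequence, never the cause.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def pressure_trace(n=600, onset=300):
    # Normal operation around 2.5 bar with sensor noise; after
    # `onset` the pressure decays slowly. The decay profile is the
    # same whether it is spoofed by an attacker or caused by a leak.
    p = 2.5 + 0.01 * rng.standard_normal(n)
    p[onset:] -= 0.002 * np.arange(n - onset)
    return p

def cusum_alarm(x, target=2.5, slack=0.005, threshold=0.1):
    # One-sided CUSUM on downward deviations; returns first alarm time.
    s = 0.0
    for t, xt in enumerate(x):
        s = max(0.0, s + (target - xt) - slack)
        if s > threshold:
            return t
    return None

attack = pressure_trace()   # sensor value spoofed by an attacker
fault = pressure_trace()    # genuine physical leak
print("attack alarm at t =", cusum_alarm(attack))
print("fault alarm at t  =", cusum_alarm(fault))
# Both alarms fire at nearly the same time step: the detector sees
# only the anomalous process state, not its cause.
\end{verbatim}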
In the following, we present a few proposals that also highlight open problems. \subsection{Combining Process and Network Layer Detectors} Studies involving both the network and the process layer are rare~\cite{CPS_security_survey2017}. We propose to use data from both the network layer and the process layer. For example, if sensor data are compromised in a man-in-the-middle (MiTM) attack~\cite{NoiseMatters_ACSAC2018}, the resulting process state might equally have resulted from a fault, but by looking at the MiTM traffic it would certainly be possible to point out an attack. \subsection{Relation Between Size, Detection Time and Time to Damage of Faults and Attacks} Fault/attack size refers to the magnitude of the sudden change in the physical state variables. For example, consider an initial state $S_i$ before an attack and a transition to state $S_a$ after the attack, where $|S_a| \gg |S_i|$; such an instantaneous change would be considered large. Detection time is a measure of how fast one can detect the attack, whereas time to damage depends on the process itself; e.g., for a fluid storage tank, depending on its capacity, an overflow or underflow attack could take a long time to cause damage, while in an electric grid a sudden surge can instantly damage the physical system. We can study the relationships between these variables to segregate the two. \subsection{Virtual Sensors/Digital Twin} We can exploit the idea of a digital twin or model virtual sensors. Since the digital system does not have any real sensors, it cannot become faulty, and any manipulation must be the result of an attack. The key assumption is that an attacker would also attack the virtual/digital twin. \subsection{Signature based Attack Detection} Attack signatures can be collected by modeling an attacker's intentions and strategy. However, a limitation of this method is that it would not be able to detect unseen attack patterns. \subsection{Mode Shift Across Attack and Fault} The idea is to figure out how a device's mode transition occurs during a fault. Faults would be random, for example, the failure of a single sensor or actuator, whereas attacks would exhibit a mode shift for several devices at the same time. For example, to attack a process plant, an attacker would compromise the modes of the sensor as well as the actuator at the same time to hide itself. \subsection{Simulate Failures and Faults for Known Devices} Attacks are unknown before they take place, whereas fault data can be generated. The idea is to collect data for the (known) faulty behavior to create fault models. If we have profiles for normal data as well as faulty data, then it should be possible to distinguish attacks from faults. \subsection{Exploiting Asymmetry between Correlation and Causation} We argue that correlated failures should be taken with a pinch of salt. If a sensor or an actuator fails, it has an impact on other associated devices. For example, if a motorized valve at the inlet of a tank fails, then the flow sensor reading is also affected, but if the flow sensor fails, the actuator (motorized valve) is not affected. Therefore, to attack the flow sensor consistently an attacker has to compromise both the flow meter and the actuator, whereas there is no such requirement for a fault.
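As a toy illustration of this asymmetry (hypothetical device names and a hand-written rule, not a validated algorithm), such a check can be layered on top of per-device anomaly flags:
\begin{verbatim}
def classify(valve_anomalous: bool, flow_anomalous: bool) -> str:
    # Causal structure: the inlet valve (cause) drives the
    # downstream flow reading (effect), but not vice versa.
    if valve_anomalous and flow_anomalous:
        # A valve failure propagates downstream, so this pattern is
        # consistent with a fault (a capable attacker compromising
        # both devices could still mimic it).
        return "consistent with fault"
    if flow_anomalous and not valve_anomalous:
        # Implausible flow with a healthy valve points to sensor-only
        # manipulation; cross-check mass balance and network traffic.
        return "suspicious: possible sensor attack"
    if valve_anomalous and not flow_anomalous:
        # Flow not reacting to a misbehaving valve violates the
        # cause-effect direction: the reading is likely being held.
        return "likely attack: effect suppressed despite cause"
    return "normal"

print(classify(valve_anomalous=True, flow_anomalous=False))
\end{verbatim}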
\subsection{Detection Latency based Method} An interesting observation from previous research~\cite{Ahmed_AsiaCCS2017_stealthyAtt,park2015sensor_transientFaults} is that sophisticated attacks (which take more time to detect) and fault-like bias injection attacks (which are detected instantaneously) take different amounts of time to detect. The hypothesis is that a fault would be a sudden change, whereas a persistent attacker would stay for a longer period of time to do substantial damage. \subsection{Fault Time Constant} A fault's time constant would be different from that of an attack. Inspired by research on fault detection in ICS~\cite{abrupt_fault_time_constant_samara2008statistical}, it is hypothesized that faults are more abrupt and random, and hence their time constant is much smaller than that of the normal process profile. We assume, however, that an attacker would try to imitate the process as closely as possible; therefore, if modeled properly, faults could be distinguished. \subsection{Redundancy} Using redundant sensors for the same physical state variable can help in fault and attack isolation, provided that not all of them are attacked at the same time~\cite{park2015sensor_transientFaults}. \section{Conclusions} Through this opinion paper we aim to kindle the interest of the ICS community in the deeper nuances of anomaly detection and analysis. We assert the relevance of differentiating between faults and attacks in cyber-physical systems and motivate it through both economic and psychological lenses, further corroborated by interviews with researchers and industry managers. We build on the core challenges in segregating the two forms of anomalies and propose multiple directions of approach. This work shall be followed by rigorous research in most of the outlined directions, in collaboration with research institutes and universities hosting state-of-the-art CPS testbeds.
\section{Introduction}\label{sec1} Many of the major ongoing government or government-funded surveys have panel components including, for example, in the U.S., the American National Election Study (ANES), the General Social Survey (GSS), the Panel Study of Income Dynamics (PSID) and the Current Population Survey (CPS). Despite the millions of dollars spent each year to collect high quality data, analyses using panel data are inevitably threatened by panel attrition (\cite{lynn2009methodology}), that is, some respondents in the sample do not participate in later waves of the study because they cannot be located or refuse participation. For instance, the multiple-decade PSID, first fielded in 1968, lost nearly 50 percent of the initial sample members by 1989 due to cumulative attrition and mortality. Even with a much shorter study period, the 2008--2009 ANES Panel Study, which conducted monthly interviews over the course of the 2008 election cycle, lost 36 percent of respondents in less than a year. At these rates, which are not atypical in large-scale panel studies, attrition can have serious impacts on analyses that use only respondents who completed all waves of the survey. At best, attrition reduces effective sample size, thereby decreasing analysts' abilities to discover longitudinal trends in behavior. At worst, attrition results in an available sample\vadjust{\goodbreak} that is not representative of the target population, thereby introducing potentially substantial biases into statistical inferences. It is not possible for analysts to determine the degree to which attrition degrades complete-case analyses by using only the collected data; external sources of information are needed. One such source is refreshment samples. A refreshment sample includes new, randomly sampled respondents who are given the questionnaire at the same time as a second or subsequent wave of the panel. Many of the large panel studies now routinely include refreshment samples. For example, most of the longer longitudinal studies of the National Center for Education Statistics, including the Early\break Childhood Longitudinal Study and the National Educational Longitudinal Study, freshened their samples at some point in the study, either adding new panelists or as a separate cross-section. The National Educational Longitudinal Study, for instance, followed 21,500 eighth graders at two-year intervals until 2000 and included refreshment samples in 1990 and 1992. It is worth noting that by the final wave of data collection, just 50\% of the original sample remained in the panel. Overlapping or rotating panel designs offer the equivalent of refreshment samples. In such designs, the sample is divided into different cohorts with staggered start times such that one cohort of panelists completes a follow-up interview at the same time another cohort completes their baseline interview. So long as each cohort is randomly selected and administered the same questionnaire, the baseline interview of the new cohort functions as a refreshment sample for the old cohort. Examples of such rotating panel designs include the GSS and the Survey of Income and Program Participation. Refreshment samples provide information that can be used to assess the effects of panel attrition and to correct for biases via statistical modeling (\cite{hirano1998combining}). However, they are infrequently used by analysts or data collectors for these tasks.
In most cases, attrition is simply ignored, with the analysis run only on those respondents who completed all waves of the study (e.g., \cite{jelicic2009use}), perhaps with the use of post-stratification weights (\cite{vandecasteele2007attrition}). This is done despite widespread recognition among subject matter experts about the potential problems of panel attrition (e.g., \cite{ahern2005methodological}). In this article, we review and bolster the case for the use of refreshment samples in panel studies. We begin in Section~\ref{sec2}\vadjust{\goodbreak} by briefly describing existing approaches for handling attrition that do not involve refreshment samples. In Section~\ref{sec3} we present a hypothetical two-wave panel to illustrate how refreshment samples can be used to remove bias from nonignorable attrition. In Section~\ref{sec4} we extend current models for refreshment samples, which are described exclusively with two-wave settings in the literature, to models for three waves and two refreshment samples. In doing so, we discuss modeling nonterminal attrition in these settings, which arises when respondents fail to respond to one wave but return to the study for a subsequent one. In Section~\ref{sec5} we illustrate the three-wave analysis using the 2007--2008 Associated Press--Yahoo! News Election Poll (APYN), which is a panel study of the 2008 U.S. Presidential election. Finally, in Section~\ref{sec6} we discuss some limitations and open research issues in the use of refreshment samples. \section{Panel Attrition in Longitudinal Studies}\label{sec2} Fundamentally, panel attrition is a problem of nonresponse, so it is useful to frame the various approaches to handling panel attrition based on the assumed missing data mechanisms (\cite{rubin1976}; \cite{littlerubin}). Often researchers ignore panel attrition entirely and use only the available cases for analysis, for example, listwise deletion to create a balanced subpanel (e.g., \cite{bartels1993messages};\break \cite{wawro2002estimating}). Such approaches assume that the panel attrition is missing completely at random\break (MCAR), that is, the missingness is independent of observed and unobserved data. We speculate that this is usually assumed for convenience, as often listwise deletion analyses are not presented with empirical justification of MCAR assumptions. To the extent that diagnostic analyses of MCAR assumptions in panel attrition are conducted, they tend to be reported and published separately from the substantive research (e.g., \cite{zabel1998analysis}; \cite{fitzgerald1998analysis}; \cite{bartels1999panel}; \cite {clinton2001panel}; \cite{kruse2009panel}), so that it is not clear if and how the diagnostics influence statistical model specification. Considerable research has documented that some individuals are more likely to drop out than others (e.g., \cite{behr2005extent}; \cite {olsen2005problem}), making listwise deletion a risky analysis strategy. Many \mbox{analysts} instead assume that the data are missing at random (MAR), that is, missingness depends on observed, but not unobserved, data. One widely used MAR approach is to\vadjust{\goodbreak} adjust survey\break weights for nonresponse, for example, by using post-stratification weights provided by the survey organization (e.g., Henderson, Hillygus and Tompson\break (\citeyear{henderson2010sour})). Re-weighting approaches assume that drop\-out occurs randomly within weighting classes defined by observed variables that are associated with dropout. 
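To fix ideas, a minimal sketch of such a weighting-class adjustment is given below (illustrative Python/pandas code; the cell variables, base weights and completion indicator are hypothetical). Within each cell defined by observed covariates, completers' base weights are inflated by the inverse of the weighted wave-2 response rate, so that completers also represent the attriters in their cell; cell weight totals are preserved.
\begin{verbatim}
import pandas as pd

def attrition_adjust_weights(df, cells, base_w="weight",
                             completed="completed"):
    # Weighted wave-2 response rate within each weighting class.
    keys = [df[c] for c in cells]
    resp_w = (df[base_w] * df[completed]).groupby(keys).transform("sum")
    total_w = df[base_w].groupby(keys).transform("sum")
    out = df.copy()
    # Completers absorb the weight of attriters in their cell;
    # attriters receive weight zero.
    out["adj_weight"] = df[base_w] * df[completed] / (resp_w / total_w)
    return out

# Hypothetical panel: age group and education define the cells.
panel = pd.DataFrame({
    "age_grp":   ["18-34", "18-34", "35+", "35+", "35+"],
    "educ":      ["hs", "hs", "col", "col", "col"],
    "weight":    [1.0, 1.0, 1.2, 0.8, 1.0],
    "completed": [1, 0, 1, 1, 0],   # 1 = responded at wave 2
})
print(attrition_adjust_weights(panel, ["age_grp", "educ"]))
\end{verbatim}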
Although re-weighting can reduce bias introduced by panel attrition, it is not a fail-safe solution. There is wide variability in the way weights are constructed and in the variables used. Nonresponse weights are often created using demographic benchmarks, for example, from the CPS, but demographic variables alone are unlikely to be adequate to correct for attrition (\cite{vandecasteele2007attrition}). As is the case in other nonresponse contexts, inflating weights can result in increased standard errors and introduce instabilities due to particularly large or small weights (\cite{lohr1999sampling}; \cite{gelman2007struggles}). A related MAR approach uses predicted proba\-bilities of nonresponse, obtained by modeling the \mbox{response} indicator as a function of observed variables, as inverse probability weights to enable inference by generalized estimating equations (e.g., \cite{robinsrot95}; \cite{robinsrotzha}; Scharfstein, Rotnitzky and Robins\break (\citeyear{scharfrobrot}); \cite{chenyicook}). This potentially offers some robustness to model misspecification, at~least asymptotically for MAR mechanisms, although inferences can be sensitive to large weights. One also can test whether or not parameters differ significantly due to attrition for cases with complete data and cases with incomplete data (e.g., \cite{diggle89}; \cite{chenlittle}; \cite{qu02}; \cite{qu11}), which can offer insight into the appropriateness of the assumed MAR mechanism.\looseness=1 An alternative approach to re-weighting is single imputation, a method often applied by statistical agencies in general item nonresponse contexts (\cite{kalton1986treatment}). Single imputation methods replace each missing value with a plausible guess, so that the full panel can be analyzed as if their data were complete. Although there are a wide range of single imputation methods (hot deck, nearest neighbor, etc.) that have been applied to missing data problems, the method most specific to longitudinal studies is the last-observation-carried-forward approach, in which an individual's missing data are imputed to equal his or her response in previous waves (e.g., \cite{packer1996double}). Research has shown that this\vadjust{\goodbreak} approach can introduce substantial biases in inferences (e.g., see \cite{hogandaniels}). Given the well-known limitations of single imputation methods (\cite{littlerubin}), multiple imputation (see Section~\ref{sec3}) also has been used to handle missing data from attrition (e.g., \cite{pasek2009determinants}; \cite{honaker2010missing}). As with the majority of available methods used to correct for panel attrition, standard approaches to multiple imputation assume an ignorable missing data mechanism. Unfortunately, it is often expected that panel attrition is not missing at random (NMAR), that is, the missingness depends on unobserved data. In such cases, the only way to obtain unbiased estimates of parameters is to model the missingness. However, it is generally impossible to know the appropriate model for the missingness mechanism from the panel sample alone (\cite{kristman2005methods}; \cite{basic2007assessing}; \cite{molenbeunk}). Another approach, absent external data, is to handle the attrition directly in the statistical models used for longitudinal data analysis (\cite{verbmol}; \cite{diggleheag}; \cite{fitzmaurice}; \cite{hedeker}; \cite{hogandaniels}). Here, unlike with other approaches, much research has focused on methods for handling nonignorable panel attrition. 
Methods include variants of both selection models (e.g., \cite{hausman1979attrition}; \cite{siddiqui1996factors}; \cite{kenward}; \cite{scharfrobrot}; \cite{vella1999two}; \cite{das2004simple}; \cite{wooldridge2005simple}; \cite{semykina2010estimating}) and pattern mixture models (e.g., \cite{littlepatmix}; \cite{kenwardmolenthijs}; \cite{roy}; \cite{linmccol}; Roy and\break Daniels (\citeyear{roydaniels})). These model-based methods have to make untestable and typically strong assumptions about the attrition process, again because there is insufficient information in the original sample alone to learn the missingness mechanism. It is therefore prudent for analysts to examine how sensitive results are to different sets of assumptions about attrition. We note that \citet{rotrobschar98} and \citet{scharfrobrot} suggest related sensitivity analyses for estimating equations with inverse probability weighting. \section{Leveraging Refreshment Samples}\label{sec3} Refreshment samples are available in many panel studies, but the way refreshment samples are currently used with respect to panel attrition varies widely. Initially, refreshment samples, as the name implies, were conceived as a way to directly replace units who had dropped out (\cite {ridder1992empirical}). The general idea of using survey or field substitutes to correct for nonresponse dates to some of the earliest survey methods work (\cite{kish1959replacement}). Research has shown, however, that respondent substitutes are more likely to resemble respondents rather than nonrespondents, potentially introducing bias without additional adjustments (\cite{lin1995using}; \cite{vehovar1999field}; \cite{rubin2001using}; \cite{dorsett10}). Also potentially problematic is when refreshment respondents are simply added to the analysis to boost the sample size, while the attrition process of the original respondents is disregarded (e.g., \cite{wissen1989dutch}; \cite{heeringa1997russia}; \cite {thompson2006methods}). In recent years, however, it is most common to see refreshment samples used to diagnose panel attrition characteristics in an attempt to justify an ignorable missingness assumption or as the basis for discussion about potential bias in the results, without using them for statistical correction of the bias (e.g., \cite{Frick2006}; \cite {kruse2009panel}). Refreshment samples are substantially more powerful than suggested by much of their previous use. Refreshment samples can be mined for information about the attrition process, which in turn facilitates adjustment of inferences for the missing data (Hirano et~al., \citeyear{hirano1998combining,hirano2001combining}; \cite {bartels1999panel}). For example, the data can be used to construct inverse probability weights for the cases in the panel (\cite{hirano1998combining}; \cite {nevo2003using}), an approach we do not focus on here. They also offer information for model-based methods and multiple imputation (\cite{hirano1998combining}), which we now describe and illustrate in detail. \subsection{Model-Based Approaches}\label{sec3.1} Existing model-based methods for using refreshment samples (\cite{hirano1998combining}; \cite{bhattacharya2008inference}) are based on selection models for the attrition process. To our knowledge, no one has developed pattern mixture models in the context of refreshment samples, thus, in what follows we only discuss selection models. To illustrate these approaches, we use the simple example also presented by Hirano et~al. 
(\citeyear{hirano1998combining,hirano2001combining}), which is illustrated graphically in Figure~\ref{fig2-period}. Consider a two-wave panel of $N_P$ subjects that includes a refreshment sample of $N_R$ new subjects \begin{figure} \includegraphics{414f01.eps} \caption{Graphical representation of the two-wave model. Here, $X$ represents variables available on everyone.}\label{fig2-period}\vspace*{-3pt} \end{figure} during the second wave. Let $Y_1$ and $Y_2$ be binary responses\vadjust{\goodbreak} potentially available in wave 1 and wave 2, respectively. For the original panel, suppose that we know $Y_1$ for all $N_P$ subjects and that we know $Y_2$ only for $N_{\mathit{CP}} < N_P$ subjects due to attrition. We also know $Y_2$ for the $N_R$ units in the refreshment sample, but by design we do not know $Y_1$ for those units. Finally, for all $i$, let $W_{1i} = 1$ if subject $i$ would provide a value for $Y_2$ if they were included in wave 1, and let $W_{1i}=0$ if subject $i$ would not provide a value for $Y_2$ if they were included in wave 1. We note that $W_{1i}$ is observed for all $i$ in the original panel but is missing for all $i$ in the refreshment sample, since they were not given the chance to respond in wave 1. The concatenated data can be conceived as a partially observed, three-way contingency table with eight cells. We can estimate the joint probabilities in four of these cells from the observed data, namely, $P(Y_1=y_1, Y_2=y_2, W_1 =1)$ for $y_1, y_2 \in\{0,1\}$. We also have the following three independent constraints involving the cells not directly observed: \begin{eqnarray*} && 1 - \sum_{y_1, y_2} P(Y_1 = y_1, Y_2 = y_2, W_1 = 1) \\[-1pt] &&\quad= \sum _{y_1, y_2} P(Y_1 = y_1, Y_2 = y_2, W_1 = 0), \\[-1pt] && P(Y_1 = y_1, W_1=0) \\[-1pt] &&\quad= \sum _{y_2} P(Y_1 = y_1, Y_2 = y_2, W_1=0), \\[-1pt] && P(Y_2= y_2) - P(Y_2=y_2, W_1 = 1) \\[-1pt] &&\quad= \sum_{y_1} P(Y_1 = y_1, Y_2 = y_2, W_1=0). \end{eqnarray*} Here, all quantities on the left-hand side of the equations are estimable from the observed data. The system of\vadjust{\goodbreak} equations offers seven constraints for eight cells, so that we must add one constraint to identify all the joint probabilities. \begin{table*} \tablewidth=400pt \caption{Summary of simulation study for the two-wave example. Results include the average of the posterior means across the 500 simulations and the percentage of the 500 simulations in which the 95\% central posterior interval covers the true parameter value. 
The implied Monte Carlo standard error of the simulated coverage rates is approximately $\sqrt{(0.95)(0.05)/500} = 1$\%}\label{tabHWandANNMAR} \begin{tabular*}{\tablewidth}{@{\extracolsep{\fill}}ld{2.1}d{2.2}cd{2.2}d{2.0}d{2.2}c@{}} \hline && \multicolumn{2}{c}{\textbf{HW}} & \multicolumn{2}{c}{\textbf{MAR}} & \multicolumn{2}{c@{}}{\textbf{AN}} \\ [-4pt] && \multicolumn{2}{l}{\hspace*{-1.5pt}\hrulefill} & \multicolumn{2}{l}{\hspace*{-1.5pt}\hrulefill} & \multicolumn{2}{l@{}}{\hspace*{-1.5pt}\hrulefill} \\ \textbf{Parameter} & \multicolumn{1}{c}{\textbf{True value}} & \multicolumn{1}{c}{\textbf{Mean}} & \multicolumn{1}{c}{\textbf{95\% Cov.}} & \multicolumn{1}{c}{\textbf{Mean}} & \multicolumn{1}{c}{\textbf{95\% Cov.}} & \multicolumn{1}{c}{\textbf{Mean}} & \multicolumn{1}{c@{}}{\textbf{95\% Cov.}} \\ \hline $\beta_{0}$ &0.3 &0.29 & 96 &0.27 & 87 &0.30 & 97\\ $\beta_{X}$ & -0.4 & -0.39 & 95 & -0.39 & 95 & -0.40 & 96\\ $\gamma_{0}$ &0.3 &0.44 & 30 &0.54 & 0&0.30 & 98\\ $\gamma_{X}$ & -0.3 & -0.35 & 94 & -0.39 & 70 & -0.30 & 99\\ $\gamma_{Y_{1}}$ &0.7 &0.69 & 91 &0.84 & 40 &0.70 & 95\\ $\alpha_{0}$ & -0.4 & -0.46 & 84&0.25 & 0 & -0.40 & 97\\ $\alpha_{X}$ & 1 &0.96 & 93 &0.84 & 13 & 1.00 & 98\\ $\alpha_{Y_{1}}$ & -0.7 & \multicolumn{1}{c}{---} & --- & -0.45 & 0 & -0.70 & 98\\ $\alpha_{Y_{2}}$ & 1.3 &0.75 & 0 & \multicolumn{1}{c}{---} & \multicolumn{1}{c}{---} & 1.31& 93\\ \hline \end{tabular*} \end{table*} Hirano et~al. (\citeyear{hirano1998combining,hirano2001combining}) suggest characterizing the joint distribution of $(Y_1, Y_2, W_1)$ via a chain of conditional models, and incorporating the additional constraint within the modeling framework. In this context, they suggested letting \begin{eqnarray} \label{equ1} Y_{1i} &\sim& \operatorname{Ber}(\pi_{1i}),\nonumber\\[-9pt]\\[-9pt] \operatorname{logit}(\pi_{1i}) &=& \beta_0,\nonumber \\[-2pt] \label{equ2} Y_{2i}|Y_{1i} &\sim& \operatorname{Ber}(\pi_{2i}),\nonumber\\[-9pt]\\[-9pt] \operatorname{logit}(\pi_{2i}) &=& \gamma _0+\gamma_1 Y_{1i},\nonumber \\[-2pt] \label{equ3} W_{1i}|Y_{2i},Y_{1i} &\sim& \operatorname{Ber} (\pi_{W_{1i}}),\nonumber\\[-8pt]\\[-8pt] \operatorname{logit}(\pi _{W_{1i}})&=& \alpha_0+ \alpha_{Y_1}Y_{1i}+\alpha_{Y_2}Y_{2i}\nonumber \end{eqnarray} for all $i$ in the original panel and refreshment sample, plus requiring that all eight probabilities sum to one. \citet {hirano1998combining} call this an additive nonignorable (AN) model. The AN model enforces the additional constraint by disallowing the interaction between $(Y_1, Y_2)$ in (\ref{equ3}). \citet{hirano1998combining} prove that the AN model is likelihood-identified for general distributions. Fitting AN models is straightforward using Bayesian MCMC; see \citet{hirano1998combining} and \citet{dengMS} for exemplary Metropolis-within-Gibbs algorithms. Parameters also can be estimated via equations of moments (\cite {bhattacharya2008inference}). Special cases of the AN model are informative. By setting $(\alpha_{Y_2}=0, \alpha_{Y_1} \ne0)$, we specify a model for a MAR missing data mechanism. Setting $\alpha_{Y_2} \ne0$ implies a NMAR missing data mechanism. In fact, setting $(\alpha_{Y_1} = 0, \alpha_{Y_2} \ne0)$ results in the nonignorable model of \citet{hausman1979attrition}.\vadjust{\goodbreak} Hence, the AN model allows the data to determine whether the missingness is MAR or NMAR, thereby allowing the analyst to avoid making an untestable choice between the two mechanisms. 
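To see this identification concretely, the following numerical sketch (our illustration, assuming \texttt{scipy}; it is not the estimation strategy used by the cited authors) recovers the full eight-cell joint distribution of $(Y_1, Y_2, W_1)$ from only the quantities estimable under a panel-plus-refreshment design, imposing the AN restriction that the selection model contains no $Y_1 Y_2$ interaction:
\begin{verbatim}
import numpy as np
from scipy.optimize import fsolve
from scipy.special import expit  # logistic function

# Ground truth, used only to manufacture the "observed" quantities.
truth = (-0.4, -0.7, 1.3)                 # logit P(W1=1 | y1, y2)
p_joint = np.array([[0.30, 0.15],         # P(Y1=y1, Y2=y2);
                    [0.20, 0.35]])        # rows y1, columns y2

def sel(y1, y2, a):
    return expit(a[0] + a[1] * y1 + a[2] * y2)

# Quantities an analyst can actually estimate:
obs = np.array([[p_joint[i, j] * sel(i, j, truth)  # P(y1, y2, W1=1)
                 for j in (0, 1)] for i in (0, 1)])
p_y1 = p_joint.sum(axis=1)                # from the panel's first wave
p_y2 = p_joint.sum(axis=0)                # from the refreshment sample

def an_equations(u):
    # Unknowns: three free joint cells plus three selection
    # coefficients; the fourth joint cell is pinned by summing to one.
    q00, q01, q10, b0, b1, b2 = u
    q = np.array([[q00, q01], [q10, 1 - q00 - q01 - q10]])
    eqs = [q[i, j] * sel(i, j, (b0, b1, b2)) - obs[i, j]
           for i in (0, 1) for j in (0, 1)]
    eqs += [q.sum(axis=1)[0] - p_y1[0], q.sum(axis=0)[0] - p_y2[0]]
    return eqs                            # six equations, six unknowns

sol = fsolve(an_equations, [0.25, 0.25, 0.25, 0.0, 0.0, 0.0])
print("joint cells:", np.round(sol[:3], 3))   # -> 0.30, 0.15, 0.20
print("alphas:     ", np.round(sol[3:], 3))   # -> -0.4, -0.7, 1.3
\end{verbatim}
With a nonzero $Y_1 Y_2$ interaction the system would have seven unknowns for six equations; this is the identification gap that the AN restriction closes.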
By not forcing $\alpha_{Y_1}=0$, the AN model permits more complex nonignorable selection mechanisms than the model of \citet{hausman1979attrition}. The AN model does require separability of $Y_1$ and $Y_2$ in the selection model; hence, if attrition depends on the interaction between $Y_1$ and $Y_2$, the AN model will not fully correct for biases due to nonignorable attrition. As empirical evidence of the potential of refreshment samples, we simulate 500 data sets based on an extension of the model in (\ref{equ1})--(\ref{equ3}) in which we add a Bernoulli-generated covariate $X$ to each model; that is, we add $\beta_{X}X_i$ to the logit predictor in (\ref{equ1}), $\gamma_X X_i$ to the logit predictor in (\ref{equ2}), and $\alpha_X X_i$ to the logit predictor in~(\ref{equ3}). In each we use $N_P=10\mbox{,}000$ original panel cases and $N_R = 5000$ refreshment sample cases. The parameter values, which are displayed in Table~\ref{tabHWandANNMAR}, simulate a nonignorable missing data mechanism. All values of $(X, Y_1, W_1)$ are observed in the original panel, and all values of $(X, Y_2)$ are observed in the refreshment sample. We estimate three models based on the data: the \citet{hausman1979attrition} model (set $\alpha_{Y_1}=0$ when fitting the models) which we denote with HW, a MAR model (set $\alpha_{Y_2}=0$ when fitting the models) and an AN model. In each data set, we estimate posterior means and 95\% central posterior intervals for each parameter using a Metropolis-within-Gibbs sampler, running 10,000 iterations (50\% burn-in). We note that interactions involving $X$ also can be included and identified in the models, but we do not use them here.\vadjust{\goodbreak} For all models, the estimates of the intercept and coefficient in the logistic regression of $Y_1$ on $X$ are reasonable, primarily because $X$ is complete and $Y_1$ is only MCAR in the refreshment sample. As expected, the MAR model results in biased point estimates and poorly calibrated intervals for the coefficients of the models for $Y_2$ and $W_1$. The HW model fares somewhat better, but it still leads to severely biased point estimates and poorly calibrated intervals for $\gamma_0$ and $\alpha_{Y_2}$. In contrast, the AN model results in approximately unbiased point estimates with reasonably well-calibrated intervals. We also ran simulation studies in which the data generation mechanisms satisfied the HW and MAR models. When $(\alpha_{Y_1}=0, \alpha_{Y_2} \neq0)$, the HW model performs well and the MAR model performs terribly, as expected. When $(\alpha_{Y_1} \neq0, \alpha_{Y_2} = 0)$, the MAR model performs well and the HW model performs terribly, also as expected. The AN model performs well in both scenarios, resulting in approximately unbiased point estimates with reasonably well-cali\-brated intervals. To illustrate the role of the separability assumption, we repeat the simulation study after including a nonzero interaction between $Y_1$ and $Y_2$ in the model for $W_1$. Specifically, we generate data according to a response model, \begin{eqnarray}\label{equ4} \operatorname{logit}(\pi_{W_{1i}})&=&\alpha_0+\alpha_{Y_1}Y_{1i}+ \alpha_{Y_2}Y_{2i}\nonumber\\[-8pt]\\[-8pt] &&{}+ \alpha_{Y_1Y_2} Y_{1i}Y_{2i},\nonumber \end{eqnarray} setting $\alpha_{Y_1Y_2} = 1$. However, we continue to use the AN model by forcing $\alpha_{Y_1Y_2} = 0$ when estimating parameters. 
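For concreteness, this data-generating process can be sketched as follows (illustrative \texttt{numpy} code; we take $X \sim \mbox{Bernoulli}(0.5)$, a choice made here for illustration only, and code missing entries as $-1$; setting \texttt{a12} to 1 instead of 0 yields the separability-violating selection model in (\ref{equ4})):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(42)
expit = lambda z: 1.0 / (1.0 + np.exp(-z))

def simulate_two_wave(n_p=10_000, n_r=5_000, a12=0.0):
    # Parameter values follow the "true value" column of Table 1.
    n = n_p + n_r
    x = rng.binomial(1, 0.5, n)
    y1 = rng.binomial(1, expit(0.3 - 0.4 * x))
    y2 = rng.binomial(1, expit(0.3 - 0.3 * x + 0.7 * y1))
    w1 = rng.binomial(1, expit(-0.4 + 1.0 * x - 0.7 * y1
                               + 1.3 * y2 + a12 * y1 * y2))
    panel = dict(x=x[:n_p], y1=y1[:n_p], w1=w1[:n_p],
                 y2=np.where(w1[:n_p] == 1, y2[:n_p], -1))
    refreshment = dict(x=x[n_p:], y2=y2[n_p:])  # y1, w1 unobserved
    return panel, refreshment

panel, refreshment = simulate_two_wave()
print("panel attrition rate:", 1 - panel["w1"].mean())
\end{verbatim}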
Table \ref {tab2waveint} summarizes the results of 100 simulation runs, showing substantial biases in all parameters except $(\beta_{0}, \beta_{X}, \gamma_X, \alpha_X)$. The estimates of $(\beta_0, \beta_X)$ are unaffected by using the wrong value for $\alpha_{Y_1 Y_2}$, since all the information about the relationship between $X$ and $Y_1$ is in the first wave of the panel. The estimates of $\gamma_X$ and $\alpha_X$ are similarly unaffected because $\alpha_{Y_1 Y_2}$ involves only $Y_1$ (and not~$X$), which is controlled for in the regressions. Table~\ref{tab2waveint} also displays the results when using (\ref{equ1}), (\ref{equ2}) and (\ref{equ4}) with $\alpha_{Y_1Y_2}=1$; that is, we set $\alpha_{Y_1Y_2}$ at its true value in the MCMC estimation and estimate all other parameters. After accounting for separability, we are able to recover all true parameter values. \begin{table} \caption{Summary of simulation study for the two-wave example without separability. The true selection model includes a nonzero interaction between $Y_1$ and $Y_2$ (coefficient $\alpha_{Y_1Y_2}=1$). We fit the AN model plus the AN model adding the interaction term set at its true value. Results include the averages of the posterior means and posterior standard errors across 100 simulations}\label{tab2waveint} \begin{tabular*}{\tablewidth}{@{\extracolsep{\fill}}ld{2.1}d{2.2}cd{2.2}d{1.2}@{}} \hline &&\multicolumn{2}{c}{\textbf{AN}} & \multicolumn{2}{c@{}}{$\bolds{\mathrm{AN} + \alpha_{Y_1Y_2}}$} \\ [-4pt] &&\multicolumn{2}{l}{\hspace*{-1.5pt}\hrulefill} & \multicolumn{2}{l@{}}{\hspace*{-1.5pt}\hrulefill} \\ \textbf{Parameter} & \multicolumn{1}{c}{\textbf{True value}} & \multicolumn{1}{c}{\textbf{Mean}} & \multicolumn{1}{c}{\textbf{S.E.}} & \multicolumn{1}{c}{\textbf{Mean}} & \multicolumn{1}{c@{}}{\textbf{S.E.}} \\ \hline $\beta_{0}$ &0.3 &0.30 &0.03 &0.30 &0.03\\ $\beta_{X}$ & -0.4 & -0.41 &0.04 & -0.41 &0.04\\ $\gamma_{0}$ &0.3 &0.14 &0.06&0.30 &0.06\\ $\gamma_{X}$ & -0.3 & -0.27 &0.06 & -0.30 &0.05\\ $\gamma_{Y_{1}}$ &0.7 &0.99 &0.07 &0.70 &0.06\\ $\alpha_{0}$ & -0.4 & -0.55 &0.08 & -0.41 &0.09\\ $\alpha_{X}$ & 1 &0.99 &0.08 & 1.01 &0.08\\ $\alpha_{Y_{1}}$ & -0.7 & -0.35 &0.05 & -0.70 &0.07\\ $\alpha_{Y_{2}}$ & 1.3 & 1.89 &0.13 & 1.31 &0.13\\ $\alpha_{Y_1Y_2}$ & 1 & \multicolumn{1}{c}{---} & \multicolumn{1}{c}{---} & 1 & 0 \\ \hline \end{tabular*} \vspace*{6pt} \end{table} Of course, in practice analysts do not know the true value of $\alpha_{Y_1Y_2}$. Analysts who wrongly set\break $\alpha _{Y_1Y_2}=0$, or any other incorrect value, can expect bias patterns like those in Table~\ref{tab2waveint}, with magnitudes determined by how dissimilar the fixed $\alpha_{Y_1Y_2}$ is from the true value. However, the successful recovery of true parameter values when setting $\alpha_{Y_1Y_2}$ at its correct value suggests an approach for analyzing the sensitivity of inferences to the separability assumption. Analysts can posit a set of plausible values for $\alpha_{Y_1Y_2}$, estimate the models after fixing $\alpha_{Y_1Y_2}$ at each value and evaluate the inferences that result. Alternatively, analysts might search for values of $\alpha_{Y_1Y_2}$ that meaningfully alter substantive conclusions of interest and judge whether or not such $\alpha_{Y_1Y_2}$ seem realistic. Key to this sensitivity analysis is interpretation of $\alpha_{Y_1Y_2}$. 
In the context of the model above, $\alpha_{Y_1Y_2}$ has a natural interpretation in terms of odds ratios; for example, in our simulation, setting $\alpha_{Y_1Y_2}=1$ implies that cases with $(Y_1 = 1, Y_2 =1)$ have $\exp(2.3) \approx10$ times higher odds of responding at wave~2 than cases with $(Y_1 = 1, Y_2 = 0)$. In a sensitivity analysis, when this is too high to seem realistic, we might consider models with values like $\alpha_{Y_1Y_2} =0.2$. Estimates from the AN model can serve as starting points to facilitate interpretations. Although we presented models only for binary data, \citet{hirano1998combining} prove that similar models can be constructed for other data types, for example, they present an analysis with a multivariate normal distribution for $(Y_1, Y_2)$. Generally speaking, one proceeds by specifying a joint model for the outcome (unconditional on $W_1$), followed by a selection model for $W_1$ that maintains separation of $Y_1$ and $Y_2$. \subsection{Multiple Imputation Approaches}\label{sec3.2} \label{secMI} One also can treat estimation with refreshment samples as a multiple imputation exercise, in which one creates a modest number of completed data sets to be analyzed with complete-data methods. In multiple imputation, the basic idea is to simulate values for the missing data repeatedly by sampling from predictive distributions of the missing values. This creates $m>1$ completed data sets that can be analyzed or, as relevant for many statistical agencies, disseminated to the public. When the imputation models meet certain conditions (\cite{rubin1987}, Chapter 4), analysts of the $m$ completed data sets can obtain valid inferences using complete-data statistical methods and software. Specifically, the analyst computes point and variance estimates of interest with each data set and combines these estimates using simple formulas developed by \citet{rubin1987}. These formulas serve to propagate the uncertainty introduced by missing values through the analyst's inferences. Multiple imputation can be used for both MAR and NMAR missing data, although standard software routines primarily support MAR imputation schemes. Typical approaches to multiple imputation presume either a joint model for all the data, such as a multivariate normal or log-linear model (\cite{schafer}), or use approaches based on chained equations (Van~Buuren and Oudshoorn\break (\citeyear{oos}); \cite{raghu2001}). See Rubin\break (\citeyear{rubin1996}), \citet{barnardmeng1999} and \citet{reiterraghu07} for reviews of multiple imputation. Analysts can utilize the refreshment samples when implementing multiple imputation, thereby realizing similar benefits as illustrated in Section~\ref{sec3.1}. First, the analyst fits the Bayesian models in (\ref{equ1})--(\ref{equ3}) by running an MCMC algorithm for, say, $H$ iterations. This algorithm cycles between (i) taking draws of the missing values, that is, $Y_2$ in the panel and $(Y_1,\break W_1)$ in the refreshment sample, given parameter values and (ii) taking draws of the parameters given completed data. After convergence of the chain, the analyst collects $m$ of these completed data sets for use in multiple imputation. These data sets should be spaced sufficiently so as to be approximately independent, for example, by thinning the $H$ draws so that the autocorrelations among parameters are close to zero. 
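These combining rules, which we state formally below, are straightforward to implement; the following sketch (illustrative Python, assuming \texttt{scipy}) computes the multiple imputation point estimate, the total variance $T_m$ and a $t$-based interval:
\begin{verbatim}
import numpy as np
from scipy import stats

def rubin_combine(qs, us, alpha=0.05):
    # qs: m completed-data point estimates; us: their variance
    # estimates. Returns (pooled estimate, total variance, interval).
    qs, us = np.asarray(qs, float), np.asarray(us, float)
    m = len(qs)
    q_bar = qs.mean()                  # pooled point estimate
    u_bar = us.mean()                  # within-imputation variance
    b = qs.var(ddof=1)                 # between-imputation variance
    t_var = u_bar + (1 + 1 / m) * b    # total variance T_m
    nu = (m - 1) * (1 + u_bar / ((1 + 1 / m) * b)) ** 2
    half = stats.t.ppf(1 - alpha / 2, nu) * np.sqrt(t_var)
    return q_bar, t_var, (q_bar - half, q_bar + half)

# Toy usage: a coefficient estimated in m = 5 completed data sets.
qs = [0.71, 0.68, 0.74, 0.70, 0.69]
us = [0.004, 0.005, 0.004, 0.005, 0.004]
print(rubin_combine(qs, us))
\end{verbatim}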
For analysts reluctant to run MCMC algorithms, we suggest multiple imputation via\break chained equations with $(Y_1, Y_2, W_1)$ each taking turns as the dependent variable. The conditional models should disallow interactions (other than those involving $X$) to respect separability. This suggestion is based on our experience with limited simulation studies, and we encourage further research into its general validity. For the remainder of this article, we utilize the fully Bayesian MCMC approach to implement multiple imputation. Of course, analysts could disregard the refreshment samples entirely when implementing multiple imputation. For example, analysts can estimate a MAR multiple imputation model by forcing $\alpha_{Y_2} = 0$ in (\ref{equ3}) and using the original panel only. However, this model is exactly equivalent to the MAR model used in Table~\ref{tabHWandANNMAR} (although those results use both the panel and the refreshment sample when estimating the model); hence, disregarding the refreshment samples can engender the types of biases and poor coverage rates observed in Table~\ref{tabHWandANNMAR}. On the other hand, using the refreshment samples allows the data to decide if MAR is appropriate or not in the manner described in Section~\ref{sec3.1}. In the context of refreshment samples and the example in Section~\ref{sec3.1}, the analyst has two options for implementing multiple imputation. The first, which we call the ``P${}+{}$R'' option, is to generate completed data sets that include all cases for the panel and refreshment samples, for example, impute the missing $Y_2$ in the original panel and the missing $(Y_1, W_1)$ in the refreshment sample, thereby creating $m$ completed data sets each with $N_P+N_R$ cases. The second, which we call the ``P-only'' option, is to generate completed data sets that include only individuals from the initial panel, so that $N_P$ individuals are disseminated or used for analysis. The estimation routines may require imputing $(Y_1, W_1)$ for the refreshment sample cases, but in the end only the imputed $Y_2$ are added to the observed data from the original panel for dissemination/analysis. For the P${}+{}$R option, the multiply-imputed data sets are byproducts when MCMC algorithms are used to estimate the models. The P${}+{}$R option offers no advantage for analysts who would use the Bayesian model for inferences, since essentially it just reduces from $H$ draws to $m$ draws for summarizing posterior distributions. However, it could be useful for survey-weighted analyses, particularly when the concatenated file has weights that have been revised to reflect (as best as possible) its representativeness. The analyst can apply the multiple imputation methods of \citet{rubin1987} to the concatenated file.\vadjust{\goodbreak} Compared to the P${}+{}$R option, the P-only option offers clearer potential benefits. Some statistical agencies or data analysts may find it easier to disseminate or base inferences on only the original panel after using the refreshment sample for imputing the missing values due to attrition, since combining the original and freshened samples complicates interpretation of sampling weights and design-based inference. For example, re-weighting the concatenated data can be tricky with complex designs in the original and refreshment sample. 
Alternatively, there may be times when a statistical agency or other data collector may not want to share the refreshment data with outsiders, for example, because doing so would raise concerns over data confidentiality. Some analysts might be reluctant to rely on the level of imputation in the P${}+{}$R approach---for the refreshment sample, all $Y_1$ must be imputed. In contrast, the P-only approach only leans on the imputation models for missing $Y_2$. Finally, some analysts simply may prefer the interpretation of longitudinal analyses based on the original panel, especially in cases of multiple-wave designs.

In the P-only approach, the multiple imputation has a peculiar aspect: the refreshment sample records used to estimate the imputation models are not used or available for analyses. When records are used for imputation but not for analysis, \citet{reitermime} showed that Rubin's (\citeyear{rubin1987}) variance estimator tends to have positive bias. The bias, which can be quite severe, results from a mismatch in the conditioning used by the analyst and the imputer. The derivation of Rubin's (\citeyear{rubin1987}) variance estimator presumes that the analyst conditions on all records used in the imputation models, not just the available data. We now illustrate that this phenomenon also arises in the two-wave refreshment sample context.

To do so, we briefly review multiple imputation (\cite{rubin1987}). For $l = 1, \ldots, m$, let $q^{(l)}$ and $u^{(l)}$ be, respectively, the estimate of some population quantity $Q$ and the estimate of the variance of $q^{(l)}$ in completed data set $D^{(l)}$. Analysts use $\bar{q}_{m} = \sum_{l=1}^{m} q^{(l)}/m$ to estimate $Q$ and use $T_{m} = (1 + 1/m) b_m + \bar{u}_{m}$ to estimate $\operatorname{var}(\bar{q}_{m})$, where $b_m = \sum_{l=1}^{m} (q^{(l)} - \bar{q}_{m})^{2} / (m-1)$ and $\bar{u}_{m} = \sum_{l=1}^{m} u^{(l)}/m$. For large samples, inferences for $Q$ are obtained from the $t$-distribution, $(\bar{q}_m - Q) \sim t_{\nu_m}(0, T_m)$, where the degrees of freedom is ${\nu_m} = (m - 1) [1+ \bar{u}_{m}/ ((1+1/m)b_m ) ]^{2}$. A refined degrees-of-freedom expression for small samples is presented by \citet{barnardrubin1999}. Tests of significance for multicomponent null hypotheses are derived by \citet{limengraghurub1991}, \citet{raghuetal1991}, \citet{mengrubin1992} and \citet{reitersmallndf}.

Table~\ref{tab1stepMI-2wave} summarizes the properties of the P-only multiple imputation inferences for the AN model under the simulation design used for Table~\ref{tabHWandANNMAR}. We set $m=100$, spacing out samples of parameters from the MCMC so as to have approximately independent draws. Results are based on 500 draws of observed data sets, each with new values of missing data.
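These combining rules are straightforward to code; the sketch below (ours, not part of the original analysis; the helper name and the use of scipy are our own choices) computes $\bar{q}_m$, $T_m$, $\nu_m$ and a 95\% interval.
\begin{verbatim}
import numpy as np
from scipy import stats

def rubin_combine(q, u):
    """Rubin's (1987) rules: combine m completed-data point
    estimates q[l] and variance estimates u[l]."""
    q, u = np.asarray(q, float), np.asarray(u, float)
    m = len(q)
    qbar = q.mean()                   # point estimate
    b = q.var(ddof=1)                 # between-imputation variance b_m
    ubar = u.mean()                   # within-imputation variance
    T = (1.0 + 1.0 / m) * b + ubar    # total variance T_m
    nu = (m - 1) * (1.0 + ubar / ((1.0 + 1.0 / m) * b)) ** 2
    return qbar, T, nu

# 95% interval from the t reference distribution
qbar, T, nu = rubin_combine([0.29, 0.31, 0.30], [0.0008] * 3)
half = stats.t.ppf(0.975, nu) * np.sqrt(T)
print(qbar - half, qbar + half)
\end{verbatim}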
\begin{table}
\caption{Bias in multiple imputation variance estimator for P-only method. Results based on 500 simulations}\label{tab1stepMI-2wave}
\begin{tabular*}{\tablewidth}{@{\extracolsep{\fill}}ld{2.1}d{2.2}ccc@{}}
\hline
\textbf{Parameter} & \multicolumn{1}{c}{$\bolds{Q}$} & \multicolumn{1}{c}{\textbf{Avg.} $\bolds{\bar{q}_*}$} & \multicolumn{1}{c}{\textbf{Var} $\bolds{\bar{q}_*}$} & \multicolumn{1}{c}{\textbf{Avg.} $\bolds{T_*}$} & \multicolumn{1}{c@{}}{\textbf{95\% Cov.}}\\
\hline
$ \beta_{0}$ &0.3 &0.30 &0.0008 &0.0008 & 95.4\\
$ \beta_{X}$ & -0.4 & -0.40 &0.0016 &0.0016 & 95.8\\
$ \gamma_{0}$ &0.3 &0.30 &0.0018 &0.0034 & 99.2\\
$ \gamma_{X}$ & -0.3 & -0.30 &0.0022 &0.0031 & 98.4\\
$ \gamma_{Y_1}$ &0.7 &0.70 &0.0031 &0.0032 & 96.4\\
\hline
\end{tabular*}
\end{table}

As before, the multiple imputation results in approximately unbiased point estimates of the coefficients in the models for $Y_1$ and for $Y_2$. For the coefficients in the regression of $Y_2$, the averages of $T_m$ across the 500 replications tend to be significantly larger than the actual variances, leading to conservative confidence interval coverage rates. Results for the coefficients of $Y_1$ are well-calibrated; of course, $Y_1$ has no missing data in the P-only approach. We also investigated the two-stage multiple imputation approach of \citet{reitermime}. However, this resulted in some anti-conservative variance estimates, so that it was not preferred to standard multiple imputation.

\subsection{Comparing Model-Based and Multiple Imputation Approaches}\label{sec3.3}

As in other missing data contexts, model-based and multiple imputation approaches have differential advantages (\cite{schafer}). For any given model, model-based inferences tend to be more efficient than multiple imputation inferences based on modest numbers of completed data sets. On the other hand, multiple imputation can be more robust than fully model-based approaches to poorly fitting models. Multiple imputation uses the posited model only for completing missing values, whereas a fully model-based approach relies on the model for the entire inference. For example, in the P-only approach, a poorly-specified imputation model affects inference only through the $(N_P - N_{\mathit{CP}})$ imputations for $Y_2$. Speaking loosely to offer intuition, if the model for $Y_2$ is only 60\% accurate (a poor model indeed) and $(N_P - N_{\mathit{CP}})$ represents 30\% of $N_P$, inferences based on the multiple imputations will be only 12\% inaccurate. In contrast, the full model-based inference will be 40\% inaccurate. Computationally, multiple imputation has some advantages over model-based approaches, in that analysts can use ad hoc imputation methods like chained equations (\cite{oos}; \cite{raghu2001}) that do not require MCMC.

Both the model-based and multiple imputation approaches, by definition, rely on models for the data. Models that fail to describe the data could result in inaccurate inferences, even when the separability assumption in the selection model is reasonable. Thus, regardless of the approach, it is prudent to check the fit of the models to the observed data. Unfortunately, the literature on refreshment samples does not offer guidance on or present examples of such diagnostics. We suggest that analysts check models with predictive distributions (\cite{meng94ppp}; \cite{gelmanmengstern}; \cite{he2010}; \cite{burgreit10}). In particular, the analyst can use the estimated model to generate new values of $Y_2$ for the complete cases in the original panel and for the cases in the refreshment sample.
The analyst compares the set of replicated $Y_2$ in each sample with the corresponding original $Y_2$ on statistics of interest, such as summaries of marginal distributions and coefficients in regressions of $Y_2$ on observed covariates. When the statistics from the replicated data and observed data are dissimilar, the diagnostics indicate that the imputation model does not generate replicated data that look like the complete data, suggesting that it may not describe adequately the relationships involving $Y_2$ or generate plausible values for the missing $Y_2$. When the statistics are similar, the diagnostics do not offer evidence of imputation model inadequacy (with respect to those statistics). We recommend that analysts generate multiple sets of replicated data, so as to ensure interpretations are not overly specific to particular replications.

These predictive checks can be graphical in nature, for example, resembling grouped residual plots for logistic regression models. Alternatively, as summaries, analysts can compute posterior predictive probabilities. Formally, let $S$ be the statistic of interest, such as a regression coefficient or marginal probability. Suppose the analyst has created $T$ replicated data sets, $\{R^{(1)}, \ldots, R^{(T)}\}$, where $T$ is somewhat large (say, $T=500$). Let $S_{D}$ and $S_{R^{(l)}}$ be the values of $S$ computed with an observed subsample~$D$, for example, the complete cases in the panel or the refreshment sample, and $R^{(l)}$, respectively, where $l=1,\ldots,T$. For each $S$ we compute the two-sided posterior predictive probability,
\begin{eqnarray}\label{equ5}
\mathrm{ppp} &=& \frac{2}{T}\,\min \Biggl(\sum_{l=1}^{T}I(S_{D}-S_{R^{(l)}} > 0),\nonumber\\
&&\hspace*{42pt}\sum_{l=1}^{T}I(S_{R^{(l)}}-S_{D} > 0) \Biggr).
\end{eqnarray}
We note that $\mathrm{ppp}$ is small when $S_{D}$ and $S_{R^{(l)}}$ consistently deviate from each other in one direction, which would indicate that the model is systematically distorting the relationship captured by $S$. For $S$ with small $\mathrm{ppp}$, it is prudent to examine the distribution of $S_{R^{(l)}} - S_{D}$ to evaluate if the difference is practically important. We consider probabilities in the 0.05 range (or lower) as suggestive of lack of model fit.

To obtain each $R^{(l)}$, analysts simply add a step to the MCMC that replaces all observed values of $Y_2$ using the parameter values at that iteration, conditional on observed values of $(X, Y_1, W_1)$. This step is used only to facilitate diagnostic checks; the estimation of parameters continues to be based on the observed $Y_2$. When autocorrelations among parameters are high, we recommend thinning the chain so that parameter draws are approximately independent before creating the set of $R^{(l)}$. Further, we advise saving the $T$ replicated data sets, so that they can be used repeatedly with different~$S$. We illustrate this process of model checking in the analysis of the APYN data in Section~\ref{sec5}.
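A direct transcription of (\ref{equ5}) is straightforward; the sketch below is illustrative only (the function name and the simulated example are ours).
\begin{verbatim}
import numpy as np

def ppp(S_D, S_R):
    """Two-sided posterior predictive probability: S_D is the
    statistic on the observed subsample, S_R holds the T
    replicated values of the same statistic."""
    S_R = np.asarray(S_R, dtype=float)
    T = len(S_R)
    return (2.0 / T) * min(np.sum(S_D - S_R > 0),
                           np.sum(S_R - S_D > 0))

# example: replicated statistics scattered around the observed value
rng = np.random.default_rng(3)
print(ppp(0.40, rng.normal(0.40, 0.02, size=500)))  # large ppp
print(ppp(0.40, rng.normal(0.52, 0.02, size=500)))  # small ppp: misfit
\end{verbatim}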
\begin{figure*}
\includegraphics{414f02.eps}
\caption{Graphical representation of the three-wave panel with monotone nonresponse and no follow-up for subjects in refreshment samples. Here, $X$ represents variables available on everyone and is displayed for generality; there is no $X$ in the example in Section \protect\ref{sec4}.}\label{fig3-period-monotone,nofollow}\vspace*{-6pt}
\end{figure*}

\section{Three-Wave Panels with Two Refreshments}\label{sec4}

To date, model-based and multiple imputation methods have been developed and applied in the context of two-wave panel studies with one refreshment sample. However, many panels exist for more than two waves, presenting the opportunity for fielding multiple refreshment samples under different designs. In this section we describe models for three-wave panels with two refreshment samples. These can be used as in Section~\ref{sec3.1} for model-based inference or as in Section~\ref{sec3.2} to implement multiple imputation. Model identification depends on (i) whether or not individuals from the original panel who did not respond in the second wave, that is, have $W_{1i}=0$, are given the opportunity to provide responses in the third wave, and (ii) whether or not individuals from the first refreshment sample are followed in the third wave.

To begin, we extend the example from Figure~\ref{fig2-period} to the case with no panel returns and no refreshment follow-up, as illustrated in Figure \ref{fig3-period-monotone,nofollow}. Let $Y_3$ be binary responses potentially available in wave 3. For the original panel, we know $Y_3$ only for $N_{\mathit{CP}2} < N_{\mathit{CP}}$ subjects due to third wave attrition. We also know $Y_3$ for the $N_{R2}$ units in the second refreshment sample. By design, we do not know $(Y_1, Y_3)$ for units in the first refreshment sample, nor do we know $(Y_1, Y_2)$ for units in the second refreshment sample. For all $i$, let $W_{2i} = 1$ if subject $i$ would provide a value for $Y_3$ if they were included in the second wave of data collection (even if they would not respond in that wave), and let $W_{2i}=0$ if subject $i$ would not provide a value for $Y_3$ if they were included in the second wave. In this design, $W_{2i}$ is missing for all $i$ in the original panel with $W_{1i}=0$ and for all $i$ in both refreshment samples.

There are 32 cells in the contingency table cross-tabulated from $(Y_1, Y_2, Y_3, W_1, W_2)$. However, the observed data offer only sixteen constraints, obtained from the eight joint probabilities when $(W_1 = 1, W_2 = 1)$ and the following dependent equations (which can be alternatively specified). For all $(y_1, y_2, y_3, w_1, w_2)$, where $y_1, y_2, y_3, w_1, w_2 \in\{0,1\}$, we have
\begin{eqnarray*}
&&1 = \sum_{y_1, y_2, y_3, w_1, w_2} P(Y_1 = y_1, Y_2 = y_2,\\
&&\hspace*{86pt}Y_3 = y_3, W_1 = w_1, W_2 = w_2), \\
&& P(Y_1 = y_1, W_1 =0) \\
&&\quad= \sum _{y_2, y_3, w_2} P(Y_1 = y_1, Y_2 = y_2,\\
&&\hspace*{67.6pt} Y_3 = y_3, W_1 =0, W_2 = w_2), \\
&& P(Y_2= y_2) - P(Y_2=y_2, W_1 = 1) \\
&&\quad= \sum_{y_1, y_3, w_2} P(Y_1 = y_1, Y_2 = y_2,\\
&&\hspace*{67.6pt} Y_3 = y_3, W_1=0, W_2=w_2), \\
&&P(Y_1 = y_1, Y_2 = y_2, W_1=1, W_2 = 0) \\
&&\quad= \sum_{y_3} P(Y_1 = y_1, Y_2 = y_2,\\
&&\hspace*{50.4pt} Y_3 = y_3, W_1 =1, W_2 = 0), \\
&&P(Y_3=y_3) \\
&&\quad= \sum _{y_1, y_2, w_1, w_2} P(Y_1 = y_1, Y_2 = y_2,\\
&&\quad\hspace*{70.2pt} Y_3=y_3, W_1 = w_1, W_2 = w_2).
\end{eqnarray*}
As before, all quantities on the left-hand side of the equations are estimable from the observed data. The first three equations are generalizations of those from the two-wave model.
One can show that the entire set of equations offers eight independent constraints, so that we must add sixteen constraints to identify all the probabilities in the table. \begin{figure*} \includegraphics{414f03.eps} \caption{Graphical representation of the three-wave panel with return of wave 2 nonrespondents and no follow-up for subjects in refreshment samples. Here, $X$ represents variables available on everyone.}\label{fig3-period-nmonotone,nofollow} \end{figure*} Following the strategy for two-wave models, we characterize the joint distribution of $(Y_1, Y_2, Y_3, W_1,\break W_2)$ via a chain of conditional models. In particular, for all $i$ in the original panel and refreshment samples, we supplement the models in (\ref{equ1})--(\ref{equ3}) with \begin{eqnarray} \label{equ6} Y_{3i} \mid Y_{1i}, Y_{2i}, W_{1i} & \sim& \operatorname{Ber}(\pi_{3i}), \nonumber\\ \operatorname{logit}(\pi_{3i}) &=& \beta_0+\beta_1 Y_{1i} \\ &&{}+ \beta_2 Y_{2i} + \beta_3 Y_{1i}Y_{2i},\nonumber \\ \label{equ7} \quad W_{2i} \mid Y_{1i}, Y_{2i}, W_{1i}, Y_{3i} &\sim& \operatorname{Ber}(\pi_{W2i}), \nonumber\\ \operatorname{logit}(\pi_{W2i}) &=& \delta_0+\delta_1Y_{1i}+ \delta_2Y_{2i} \\ &&{}+ \delta_3 Y_{3i} + \delta_4 Y_{1i}Y_{2i}, \nonumber \end{eqnarray} plus requiring that all 32 probabilities sum to one. We note that the saturated model---which includes all eligible one-way, two-way and three-way inter\-actions---contains 31 parameters plus the sum-to-one requirement, whereas the just-identified model contains 15 parameters plus the sum-to-one requirement; thus, the needed 16 constraints are obtained by fixing parameters in the saturated model to zero. The sixteen removed terms from the saturated model include the interaction $Y_1 Y_2$ from the model for $W_1$, all terms involving $W_1$ from the model for $Y_3$ and all terms involving $W_1$ or interactions with $Y_3$ from the model for $W_2$. We never observe $W_1=0$ jointly with $Y_3$ or $W_2$, so that the data cannot identify whether or not the distributions for $Y_3$ or $W_2$ depend on $W_1$. We therefore require that $Y_3$ and $W_2$ be conditionally independent of $W_1$. With this assumption, the $N_{\mathit{CP}}$ cases with $W_1 = 1$ and the second refreshment sample can identify the interactions of $Y_1 Y_2$ in (\ref{equ6}) and (\ref{equ7}). Essentially, the $N_{\mathit{CP}}$ cases with fully observed $(Y_1, Y_2)$ and the second refreshment sample considered in isolation are akin to a two-wave panel sample with $(Y_1, Y_2)$ and their interaction as the variables from the ``first wave'' and $Y_3$ as the variable from the ``second wave.'' As with the AN model, in this pseudo-two-wave panel we can identify the main effect of $Y_3$ in (\ref{equ7}) but not interactions involving $Y_3$. In some multi-wave panel studies, respondents who complete the first wave are invited to complete all subsequent waves, even if they failed to complete a previous one. That is, individuals with observed $W_{1i}=0$ can come back in future waves. For example, the 2008 ANES increased incentives to attriters to encourage them to return in later waves. This scenario is illustrated in Figure~\ref{fig3-period-nmonotone,nofollow}. In such cases, the additional information offers the potential to identify additional parameters from the saturated model. 
In particular, one gains the dependent equations,
\begin{eqnarray*}
&& P(Y_1 = y_1, Y_3 = y_3, W_1=0, W_2 = 1)\\
&&\quad= \sum_{y_2} P(Y_1 = y_1, Y_2 = y_2,\\
&&\hspace*{51.5pt} Y_3 = y_3, W_1 =0, W_2 = 1)
\end{eqnarray*}
for all $(y_1, y_3)$. When combined with other equations, we now have 20 independent constraints. Thus, we can add four terms to the models in (\ref{equ6}) and (\ref{equ7}) and maintain identification. These include two main effects for $W_1$ and two interactions between $W_1$ and $Y_1$, all of which are identified since we now observe some $W_2$ and $Y_3$ when $W_1=0$. In contrast, the interaction term $Y_2 W_1$ is not identified, because $Y_2$ is never observed with $Y_3$ except when $W_1=1$. Interaction terms involving $Y_3$ also are not identified. This is intuitively seen by supposing that no values of $Y_2$ from the original panel were missing, so that effectively the original panel plus the second refreshment sample can be viewed as a two-wave setting in which the AN assumption is required for $Y_3$.

Thus far we have assumed only cross-sectional refreshment samples; however, refreshment sample respondents could be followed in subsequent waves. Once again, the additional information facilitates estimation of additional terms in the models. For example, consider extending Figure \ref{fig3-period-nmonotone,nofollow} to include incomplete follow-up in wave three for units from the first refreshment sample. \citet{dengMS} shows that the observed data offer 22 independent constraints, so that we can add six terms to (\ref{equ6}) and (\ref{equ7}). As before, these include two main effects for $W_1$ and two interactions for $Y_1W_1$. We also can add the two interactions for $Y_2 W_1$. The refreshment sample follow-up offers observations with $Y_2$ and $(Y_3, W_2)$ jointly observed, which combined with the other data enables estimation of the one-way interactions. Alternatively, consider extending Figure~\ref{fig3-period-monotone,nofollow} to include the incomplete follow-up in wave three for units from the first refreshment sample. Here, \citet{dengMS} shows that the observed data offer 20 independent constraints and that one can add the two main effects for $W_1$ and two interactions for $Y_2 W_1$ to (\ref{equ6}) and (\ref{equ7}). As in the two-wave case (\cite{hirano1998combining}), we expect that similar models can be constructed for other data types. We have done simulation experiments (not reported here) that support this expectation.

\section{Illustrative Application}\label{sec5}

To illustrate the use of refreshment samples in practice, we use data from the 2007--2008 Associated Press--Yahoo! News Poll (APYN). The APYN is a one-year, eleven-wave survey with three refreshment samples intended to measure attitudes about the 2008 presidential election and politics. The panel was sampled from the probability-based KnowledgePanel(R) Internet panel, which recruits panel members via a probability-based sampling method using known published sampling frames that cover 99\% of the U.S. population. Sampled noninternet households are provided a laptop computer or MSN TV unit and free internet service. The baseline (wave 1) of the APYN study was collected in November 2007, and the final wave took place after the November 2008 general election. The baseline was fielded to a sample of 3548 adult citizens, of whom 2735 responded, for a 77\% cooperation rate.
All baseline respondents were invited to participate in each follow-up wave; hence, it is possible, for example, to obtain a baseline respondent's values in wave $t+1$ even if they did not participate in wave~$t$. Cooperation rates in follow-up surveys varied from 69\% to 87\%, with rates decreasing towards the end of the panel. Refreshment samples were collected during follow-up waves in January, September and October 2008. For illustration, we use only the data collected in the baseline, January and October waves, including the corresponding refreshment samples. We assume nonresponse to the initial wave and to the refreshment samples is ignorable and analyze only the available cases. The resulting data set is akin to Figure \ref{fig3-period-nmonotone,nofollow}. \begin{table}[b] \caption{Campaign interest. Percentage choosing each response option across the panel waves (P1, P2, P3) and refreshment samples (R2, R3). In P3, 83 nonrespondents from P2 returned to the survey. Five participants with missing data in P1 were not used in the analysis}\label{tabcampaigninterest} \begin{tabular*}{\tablewidth}{@{\extracolsep{4in minus 4in}}ld{2.1}d{2.1}d{2.2}d{2.1}d{2.1}@{}} \hline & \multicolumn{1}{c}{\textbf{P1}} & \multicolumn{1}{c}{\textbf{P2}} & \multicolumn{1}{c}{\textbf{P3}} & \multicolumn{1}{c}{\textbf{R2}} & \multicolumn{1}{c@{}}{\textbf{R3}}\\ \hline ``A lot'' & 29.8 & 40.3 & 65.0 & 42.0 & 72.2\\ ``Some'' & 48.6 & 44.3 & 25.9 & 43.3 & 20.3\\ ``Not much'' & 15.3 & 10.8 & 5.80 & 10.2 & 5.0\\ ``None at all'' & 6.1 & 4.4 & 2.90 & 3.6 & 1.9\\ Available sample size & \multicolumn{1}{c}{2730} & \multicolumn{1}{c}{2316} & \multicolumn{1}{c}{1715} & \multicolumn{1}{c}{691} & \multicolumn{1}{c@{}}{461}\\ \hline \end{tabular*} \end{table} \begin{table*} \tablewidth=410pt \caption{Predictors used in all conditional models, denoted as $X$. Percentage of respondents in each category in initial panel (P1) and refreshment samples (R2, R3)}\label{tabIndependent-Variables-Used} \begin{tabular*}{\tablewidth}{@{\extracolsep{\fill}}llccc@{}} \hline \textbf{Variable} & \multicolumn{1}{c}{\textbf{Definition}} & \multicolumn{1}{c}{\textbf{P1}} & \multicolumn{1}{c}{\textbf{R2}} & \multicolumn{1}{c@{}}{\textbf{R3}}\\ \hline AGE1 & $=$ 1 for age 30--44, $=$ 0 otherwise &0.28 &0.28 &0.21\\ AGE2 & $=$ 1 for age 45--59, $=$ 0 otherwise &0.32 &0.31 &0.34\\ AGE3 & $=$ 1 for age above 60, $=$ 0 otherwise &0.25 &0.28 &0.34\\ MALE & $=$ 1 for male, $=$ 0 for female &0.45 &0.47 &0.43\\ COLLEGE & $=$ 1 for having college degree, $=$ 0 otherwise &0.30 &0.33 & 0.31\\ BLACK & $=$ 1 for African American, $=$ 0 otherwise &0.08 &0.07 &0.07\\ INT & $=$ 1 for everyone (the intercept) & & & \\ \hline \end{tabular*} \end{table*} The focus of our application is on campaign interest, one of the strongest predictors of democratic attitudes and behaviors (\cite{Prior2010}) and a key measure for defining likely voters in pre-election polls (\cite{Traugott1984}). Campaign interest also has been shown to be correlated with panel attrition (\cite{bartels1999panel}; \cite{olson2011}). For our analysis, we use an outcome variable derived from answers to the survey question, ``How much thought, if any, have you given to candidates who may be running for president in 2008?'' Table~\ref{tabcampaigninterest} summarizes the distribution of the answers in the three waves. Following convention (e.g., \cite{pew2010}), we dichotomize answers into people most interested in the campaign and all others. 
We let $Y_{ti}=1$ if subject $i$ answers ``A lot'' at time $t$ and $Y_{ti}=0$ otherwise, where $t \in\{1,2,3\}$ for the baseline, January and October waves, respectively. We let $X_{i}$ denote the vector of predictors summarized in Table~\ref{tabIndependent-Variables-Used}. We assume ignorable nonresponse in the initial wave and refreshment samples for convenience, as our primary goal is to illustrate the use and potential benefits of refreshment samples. Unfortunately, we have little evidence in the data to support or refute that assumption. We do not have access to $X$ for the nonrespondents in the initial panel or refreshment samples; thus, we cannot compare them to respondents' $X$ as a (partial) test of an MCAR assumption. The respondents' characteristics are reasonably similar across the three samples---although the respondents in the second refreshment sample (R3) tend to be somewhat older than other samples---which offers some comfort that, with respect to demographics, these three samples are not subject to differential nonresponse bias.

As in Section~\ref{sec4}, we estimate a series of logistic regressions. Here, we denote the $7 \times1$ vectors of coefficients in front of the $X_i$ with $\theta$ and subscripts indicating the dependent variable, for example, $\theta_{Y_1}$ represents the coefficients of $X$ in the model for $Y_1$. Suppressing conditioning, the series of models is
\begin{eqnarray*}
Y_{1i} &\sim& \operatorname{Bern}(\pi_{1i}),\quad \operatorname{logit}(\pi_{1i}) = \theta_{Y_1}X_{i}, \\
Y_{2i} &\sim& \operatorname{Bern}(\pi_{2i}),\quad \operatorname{logit}(\pi_{2i}) = \theta_{Y_2}X_{i}+\gamma Y_{1i}, \\
W_{1i} &\sim& \operatorname{Bern}(\pi_{W1i}),\quad \operatorname{logit}(\pi_{W1i}) = \theta_{W_1}X_{i}+\alpha_{1}Y_{1i}+\alpha_{2}Y_{2i}, \\
Y_{3i} &\sim& \operatorname{Bern}(\pi_{3i}),\quad \operatorname{logit}(\pi_{3i}) = \theta_{Y_3}X_{i}+\beta_{1}Y_{1i}+\beta_{2}Y_{2i} \\
&&\hspace*{58pt}{}+\beta_{3}W_{1i}+\beta_{4}Y_{1i}Y_{2i}+\beta_{5}Y_{1i}W_{1i}, \\
W_{2i} &\sim& \operatorname{Bern}(\pi_{W2i}),\quad \operatorname{logit}(\pi_{W2i}) = \theta_{W_2}X_{i}+\delta_{1}Y_{1i}+\delta_{2}Y_{2i} \\
&&\hspace*{58pt}{}+\delta_{3}Y_{3i}+\delta_{4}W_{1i}+\delta_{5}Y_{1i}Y_{2i}+\delta_{6}Y_{1i}W_{1i}.
\end{eqnarray*}
We use noninformative prior distributions on all parameters. We estimate posterior distributions of the parameters using a Metropolis-within-Gibbs algorithm, running the chain for 200,000 iterations and treating the first 50\% as burn-in. MCMC diagnostics suggested that the chain converged. Running the MCMC for 200,000 iterations took approximately 3 hours on a standard desktop computer (Intel Core 2 Duo CPU 3.00 GHz, 4 GB RAM). We developed the code in Matlab without making significant efforts to optimize the code. Of course, running times could be significantly faster with higher-end machines and smarter coding in a language like C$++$.
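Each conditional draw of a coefficient block in the Metropolis-within-Gibbs sampler is a standard logistic-regression update; a minimal self-contained sketch of one random-walk update (illustrative Python, not the authors' Matlab code; the function names, proposal scale and toy data are ours) is given below.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)

def log_post(theta, X, y):
    """Log-posterior of a logistic regression with flat priors."""
    eta = X @ theta
    return np.sum(y * eta - np.logaddexp(0.0, eta))

def mh_step(theta, X, y, scale=0.1):
    """One random-walk Metropolis update of a coefficient block."""
    prop = theta + scale * rng.normal(size=theta.shape)
    if np.log(rng.random()) < log_post(prop, X, y) - log_post(theta, X, y):
        return prop
    return theta

# toy usage: intercept plus one covariate
X = np.column_stack([np.ones(500), rng.normal(size=500)])
y = rng.random(500) < 1.0 / (1.0 + np.exp(-X @ np.array([-0.5, 1.0])))
theta = np.zeros(2)
for h in range(5000):
    theta = mh_step(theta, X, y)
\end{verbatim}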
\begin{table*} \caption{Posterior means and 95\% central intervals for coefficients in regressions.\break Column headers are the dependent variable in the regressions}\label{tabY2est} \begin{tabular*}{\tablewidth}{@{\extracolsep{\fill}}lccccc@{}} \hline \textbf{Variable} & \multicolumn{1}{c}{$\bolds{Y_1}$} & \multicolumn{1}{c}{$\bolds{Y_2}$} & \multicolumn{1}{c}{$\bolds{Y_3}$} & \multicolumn{1}{c}{$\bolds{W_1}$} & \multicolumn{1}{c@{}}{$\bolds{W_2}$}\\ \hline INT & $-$1.60 & $-$1.77 &0.04 & 1.64 & $-$1.40\\ & $(-1.94, -1.28)$ & $(-2.21, -1.32)$ & $(-1.26, 1.69)$ & $(1.17, 2.27)$ & $(-2.17, -0.34)$\\ AGE1 &0.25 &0.27 &0.03 & $-$0.08 &0.28\\ & $(-0.12,0.63)$ & $(-0.13,0.68)$ & $(-0.40,0.47)$ & $(-0.52,0.37)$ & $(-0.07,0.65)$\\ AGE2 &0.75 &0.62 &0.15 &0.24 &0.27\\ & $(0.40, 1.10)$ & $(0.24, 1.02)$ & $(-0.28,0.57)$ & $(-0.25,0.72)$ & $(-0.07,0.64)$\\ AGE3 & 1.26 &0.96 &0.88 &0.37 &0.41\\ & $(0.91, 1.63)$ & $(0.57, 1.37)$ & $(0.41, 1.34)$ & $(-0.14,0.87)$ & $(0.04,0.80)$\\ COLLEGE &0.11 &0.53 &0.57 &0.35 &0.58\\ & $(-0.08,0.31)$ & $(0.31,0.76)$ & $(0.26,0.86)$ & $(0.04,0.69)$ & $(0.34,0.84)$\\ MALE & $-$0.05 & $-$0.02 & $-$0.02 &0.13 &0.08\\ & $(-0.23,0.13)$ & $(-0.22,0.18)$ & $(-0.29,0.24)$ & $(-0.13,0.39)$ & $(-0.14,0.29)$\\ BLACK &0.75 & $-$0.02 &0.11 & $-$0.54 & $-$0.12\\ & $(0.50, 1.00)$ & $(-0.39,0.35)$ & $(-0.40,0.64)$ & $(-0.92,-0.14)$ & $(-0.47,0.26)$\\ $Y_{1}$ & --- & 2.49 & 1.94 &0.50&0.88\\ & --- & $(2.24, 2.73)$ & $(0.05, 3.79)$ & $(-0.28, 1.16)$ & $(0.20, 1.60)$\\ $Y_2$ & --- & --- & 2.03 & $-$0.58 &0.27\\ & --- & --- & $(1.61, 2.50)$ & $(-1.92,0.89)$ & $(-0.13,0.66)$\\ $W_1$ & --- & --- & $-$0.42 & --- & 2.47\\ & --- & --- & $(-1.65,0.69)$ & --- & $(2.07, 2.85)$\\ $Y_1Y_2$ & --- & --- & $-$0.37 & --- & $-$0.07\\ & --- & --- & $(-1.18,0.47)$ & --- & $(-0.62,0.48)$\\ $Y_1W_1$ & --- & --- & $-$0.52 & --- & $-$0.62\\ & --- & --- & $(-2.34, 1.30)$ & --- & $(-1.18, -0.03)$\\ $Y_3$ & --- & --- & --- & --- & $-$1.10\\ & --- & --- & --- & --- & $(-3.04, -0.12)$\\ \hline \end{tabular*} \end{table*} The identification conditions include no interaction between campaign interest in wave 1 and wave~2 when predicting attrition in wave 2, and no interaction between campaign interest in wave 3 (as well as nonresponse in wave 2) and other variables when predicting attrition in wave 3. These conditions are impossible to check from the sampled data alone, but we cannot think of any scientific basis to reject them outright. Table~\ref{tabY2est} summarizes the posterior distributions of the regression coefficients in each of the models. Based on the model for $W_1$, attrition in the second wave is reasonably described as missing at random, since the coefficient of $Y_2$ in that model is not significantly different from zero. The model for $W_2$ suggests that attrition in wave 3 is not missing at random. The coefficient for $Y_3$ indicates that participants who were strongly interested in the election at wave 3 (holding all else constant) were more likely to drop out. Thus, a panel attrition correction is needed to avoid making biased inferences. This result contradicts conventional wisdom that politically-interested respondents are \textit{less} likely to attrite (\cite{bartels1999panel}). The discrepancy could result from differences in the survey design of the APYN study compared to previous studies with attrition. 
For example, the APYN study consisted of 10--15 minute online interviews, whereas the ANES panel analyzed by \citet{bartels1999panel} and \citet{olson2011} consisted of 90-minute, face-to-face interviews. The lengthy ANES interviews have been linked to significant panel conditioning effects, in which respondents change their attitudes and behavior as a result of participation in the panel (\cite{bartels1999panel}). In contrast, \citet{kruse2009panel} finds few panel conditioning effects in the APYN study. More notably, there was a differential incentive structure in the APYN study. In later waves of the study, reluctant responders (those who took more than 7 days to respond in earlier waves) received increased monetary incentives to encourage their participation. Other panelists and the refreshment sample respondents received\vadjust{\goodbreak} a standard incentive. Not surprisingly, the less interested respondents were more likely to have received the bonus incentives, potentially increasing their retention rate to exceed that of the most interested respondents. This possibility raises a broader question about the reasonableness of assuming the initial nonresponse is ignorable, a point we return to in Section~\ref{sec6}. In terms of the campaign interest variables, the observed relationships with $(Y_1, Y_2, Y_3)$ are consistent with previous research (\cite{Prior2010}). Not surprisingly, the strongest predictor of interest in later waves is interest in previous waves. Older and college-educated participants are more likely to be interested in the election. Like other analyses of the 2008 election (\cite{lawless2009}), and in contrast to many previous election cycles, we do not find a significant gender gap in campaign interest. We next illustrate the P-only approach with multiple imputation. We used the posterior draws of parameters to create $m=500$ completed data sets of the original panel only. We thinned the chains until autocorrelations of the parameters were near zero to obtain the parameter sets. We then estimated marginal probabilities of $(Y_2, Y_3)$ and a logistic regression for $Y_3$ using maximum likelihood on only the 2730 original panel cases, obtaining inferences via Rubin's (\citeyear{rubin1987}) combining rules. For comparison, we estimated the same quantities using only the 1632 complete cases, that is, people who completed all three waves. The estimated marginal probabilities reflect the results in Table~\ref{tabY2est}. There is little difference in $P(Y_2=1)$ in the two analyses: the 95\% confidence interval is $(0.38,0.42)$ in the complete cases and $(0.37,0.46)$ in the full panel after multiple imputation. However, there is a suggestion of attrition bias in $P(Y_3=1)$. The 95\% confidence interval is $(0.63,0.67)$ in the complete cases and $(0.65,0.76)$ in the full panel after multiple imputation. The estimated $P(Y_3 =1 \mid W_2 = 0)=0.78$, suggesting that nonrespondents in the third wave were substantially more interested in the campaign than respondents. Table~\ref{tabY3est-MI-2} displays the point estimates and 95\% confidence intervals for the regression coefficients for both analyses. The results from the two analyses are quite similar except for the intercept, which is smaller after adjustment for attrition. The relationship between a college education and political interest is somewhat attenuated after correcting for attrition, although the confidence intervals in the two analyses overlap substantially. 
Thus, despite an apparent attrition bias affecting the marginal distribution of political interest in wave 3, the coefficients for this particular complete-case analysis appear not to be degraded by panel attrition.

\begin{table}
\caption{Maximum likelihood estimates and 95\% confidence intervals for coefficients of predictors of $Y_{3}$ using $m=500$ multiple imputations and only complete cases at final wave}\label{tabY3est-MI-2}
\begin{tabular*}{\tablewidth}{@{\extracolsep{4in minus 4in}}lll@{\hspace*{-1pt}}}
\hline
\textbf{Variable} & \multicolumn{1}{c}{\textbf{Multiple imputation}} & \multicolumn{1}{c@{}}{\textbf{Complete cases}}\\
\hline
INT & $-$0.22 $(-0.80,0.37)$ & $-$0.64 $(-0.98, -0.31)$\\
AGE1 & $-$0.03 $(-0.40,0.34)$ & \hphantom{$-$}0.01 $(-0.36,0.37)$\\
AGE2 & \hphantom{$-$}0.08 $(-0.30,0.46)$ & \hphantom{$-$}0.12 $(-0.25,0.49)$ \\
AGE3 & \hphantom{$-$}0.74 $(0.31, 1.16)$ & \hphantom{$-$}0.76 $(0.36, 1.16)$\\
COLLEGE & \hphantom{$-$}0.56 $(0.27,0.86)$ & \hphantom{$-$}0.70 $(0.43,0.96)$\\
MALE & $-$0.09 $(-0.33,0.14)$ & $-$0.08 $(-0.32,0.16)$ \\
BLACK & \hphantom{$-$}0.07 $(-0.38,0.52)$ & \hphantom{$-$}0.05 $(-0.43,0.52)$\\
$Y_{1}$ & \hphantom{$-$}1.39 $(0.87, 1.91)$ & \hphantom{$-$}1.45 $(0.95, 1.94)$ \\
$Y_{2}$ & \hphantom{$-$}2.00 $(1.59, 2.40)$ & \hphantom{$-$}2.06 $(1.67, 2.45)$\\
$Y_{1}Y_{2}$ & $-$0.33 $(-1.08,0.42)$ & $-$0.36 $(-1.12,0.40)$ \\
\hline
\end{tabular*}
\end{table}

Finally, we conclude the analysis with a diagnostic check of the three-wave model following the approach outlined in Section~\ref{sec3.3}. To do so, we generate 500 independent replications of $(Y_2, Y_3)$ for each of the cells in Figure~\ref{fig3-period-nmonotone,nofollow} containing observed responses. We then compare the estimated probabilities for $(Y_2, Y_3)$ in the replicated data to the corresponding probabilities in the observed data, computing the value of $\mathrm{ppp}$ for each cell. We also estimate the regression from Table~\ref{tabY3est-MI-2} with the replicated data using only the complete cases in the panel, and compare the resulting coefficients to those estimated with the observed complete cases. As shown in Table~\ref{ppp}, the imputation models generate data that are highly compatible with the observed data in the panel and the refreshment samples on both the conditional probabilities and regression coefficients. Thus, from these diagnostic checks we do not have evidence of lack of model fit.

\begin{table}
\caption{Posterior predictive probabilities ($\mathrm{ppp}$) based on 500 replicated data sets and various observed-data quantities.
Results include probabilities for cells with observed data and coefficients in regression of $Y_3$ on several predictors estimated with complete cases in the panel}\label{ppp}
\begin{tabular*}{\tablewidth}{@{\extracolsep{\fill}}lc@{}}
\hline
\textbf{Quantity}& \multicolumn{1}{c@{}}{\textbf{Value of} $\bolds{\mathrm{ppp}}$}\\
\hline
\multicolumn{2}{@{}l}{Probabilities observable in original data}\\
\quad$\operatorname{Pr}(Y_2=0)$ in the 1st refreshment sample &0.84\\
\quad$\operatorname{Pr}(Y_3=0)$ in the 2nd refreshment sample &0.40\\
\quad$\operatorname{Pr}(Y_2=0|W_1=1)$ &0.90\\
\quad$\operatorname{Pr}(Y_3=0|W_1=1, W_2=1)$ &0.98\\
\quad$\operatorname{Pr}(Y_3=0|W_1=0, W_2=1)$ &0.93\\
\quad$\operatorname{Pr}(Y_2=0, Y_3=0|W_1=1, W_2=1)$ &0.98\\
\quad$\operatorname{Pr}(Y_2=0, Y_3=1|W_1=1, W_2=1)$ &0.87\\
\quad$\operatorname{Pr}(Y_2=1, Y_3=0|W_1=1, W_2=1)$ &0.92\\[6pt]
\multicolumn{2}{@{}l}{Coefficients in regression of $Y_3$ on}\\
\quad INT&0.61\\
\quad AGE1&0.72\\
\quad AGE2 &0.74\\
\quad AGE3 &0.52\\
\quad COLLEGE&0.89\\
\quad MALE &0.76\\
\quad BLACK &0.90\\
\quad$Y_1$ &0.89\\
\quad$Y_2$&0.84\\
\quad$Y_1Y_2$&0.89\\
\hline
\end{tabular*}
\end{table}

\section{Concluding Remarks}\label{sec6}

The APYN analyses, as well as the simulations, illustrate the benefits of refreshment samples for diagnosing and adjusting for panel attrition bias. At the same time, it is important to recognize that the approach alone does not address other sources of nonresponse bias. In particular, we treated nonresponse in wave 1 and the refreshment samples as ignorable. Although this simplifying assumption is the usual practice in the attrition correction literature (e.g., \cite{hirano1998combining}; \cite{bhattacharya2008inference}), it is worth questioning whether it is defensible in particular settings. For example, suppose in the APYN survey that people disinterested in the campaign chose not to respond to the refreshment samples, for example, because people disinterested in the campaign were more likely to agree to take part in a political survey one year out than one month out from the election. In such a scenario, the models would impute too many interested participants among the panel attriters, leading to bias. Similar issues can arise with item nonresponse not due to attrition.

We are not aware of any published work in which nonignorable nonresponse in the initial panel or refreshment samples is accounted for in inference. One potential path forward is to break the nonresponse adjustments into multiple stages. For example, in stage one the analyst imputes plausible values for the nonrespondents in the initial wave and refreshment sample(s) using selection or pattern mixture models developed for cross-sectional data (see \cite{littlerubin}). These form a completed data set except for attrition and missingness by design, so that we are back in the setting that motivated Sections~\ref{sec3} and~\ref{sec4}. In stage two, the analyst estimates the appropriate AN model with the completed data to perform multiple imputations for attrition (or to use model-based or survey-weighted inference). The analyst can investigate the sensitivity of inferences to multiple assumptions about the nonignorable missingness mechanisms in the initial wave and refreshment samples. This approach is related to two-stage multiple imputation (\cite{shen2000}; \cite{rubin2003}; \cite{junedofer}).

More generally, refreshment samples need to be representative of the population of interest to be informative.
In many contexts, this requires closed populations or, at least, populations with characteristics that do not change over time in unobservable ways. For example, the persistence effect in the APYN multiple imputation analysis (i.e., people interested in earlier waves remain interested in later waves) would be attenuated if people who are disinterested in the initial wave and would be so again in a later wave are disproportionately removed from the population after the first wave. Major population composition changes are rare in most short-term national surveys like the APYN, although this could be more consequential in panel surveys with a long time horizon or of specialized populations. We presented model-based and multiple imputation approaches to utilizing the information in refreshment samples. One also could use approaches based on inverse probability weighting. We are not aware of any published research that thoroughly evaluates the merits of the various approaches in refreshment sample contexts. The only comparison that we identified was in \citet{nevo2003using}---which weights the complete cases of the panel so that the moments of the weighted data equal the moments in the refreshment sample---who briefly mentions towards the end of his article that the results from the weighting approach and the multiple imputation in \citet{hirano1998combining} are similar. We note that \citet{nevo2003using} too has to make identification assumptions about interaction effects in the selection model. It is important to emphasize that the combined data do not provide any information about the interaction effects that we identify as necessary to exclude from the models. There is no way around making assumptions about these effects. As we demonstrated, when the assumptions are wrong, the additive nonignorable models could generate inaccurate results. This limitation plagues model-based, multiple imputation and re-weighting methods. The advantage of including refreshment samples in data collection is that they allow one to make fewer assumptions about the missing data mechanism than if only the original panel were available. It is relatively straightforward to perform sensitivity analyses to this separability assumption in two-wave settings with modest numbers of outcome variables; however, these sensitivity analyses are likely to be cumbersome when many coefficients are set to zero in the constraints, as is the case with multiple outcome variables or waves. In sum, refreshment samples offer valuable information that can be used to adjust inferences for nonignorable attrition or to create multiple imputations for secondary analysis. We believe that many longitudinal data sets could benefit from the use of such samples, although further practical development is needed, including methodology for handling nonignorable unit and item nonresponse in the initial panel and refreshment samples, flexible modeling strategies for high-dimensional panel data, efficient methodologies for inverse probability weighting and thorough comparisons of them to model-based and multiple imputation approaches, and methods for extending to more complex designs like multiple\break waves between refreshment samples. We hope that this article encourages researchers to work on these issues and data collectors to consider supplementing their longitudinal panels with refreshment samples.\looseness=1 \section*{Acknowledgments} Research supported in part by NSF Grant SES-10-61241.
\section{Introduction}
\label{Sect1}\label{s1}

The antisymmetric fields in four dimensions are interesting from various viewpoints. The most attractive part is that they emerge naturally as effective fields after compactification of the (super)string effective action \cite{1}-\cite{5}. Therefore, the detection of these fields or their low-energy quantum effects may be regarded as an indirect detection of (super)string theory. Naturally, the standard situation is that such fields emerge as parts of the corresponding supermultiplets. At the same time, at low energies the supersymmetry is supposed to be broken, and the mainstream approach is the soft symmetry breaking related to the introduction of masses. Therefore, one can expect that the antisymmetric fields can be massive. On the other hand, due to the compactification of extra dimensions such a mass can be quite small, and hence it is interesting to see what happens in the massless limit, especially at the quantum level. Let us note that the antisymmetric tensor fields also have interesting applications to the construction of non-minimal Grand Unification models \cite{Adler}, where the interface between massless and broken-symmetry massive versions is one of the main issues.

The quantum aspects of massless antisymmetric fields have been explored in Refs.~\cite{8}-\cite{13} (also, both massless and massive cases were explored within the worldline approach \cite{Bastianelli-massless,Bastianelli-massive}). In particular, a quantum equivalence with vector and scalar fields was found (the classical equivalence was established before in \cite{6}), and it was shown that the massless third-rank field has no physical degrees of freedom \cite{7}, \cite{13}--\cite{16}. Indeed, the first work where the equivalence between the Proca model and the antisymmetric second-rank field can be seen was published long before, in 1960 \cite{Kemmer}. Taking these results into account, one of the interesting questions is about the possible discontinuity of the quantum effects of antisymmetric fields in the massless limit.

Recently, the quantum theory of massive antisymmetric fields was considered in Ref.~\cite{Buch-Kiri-Plet}. In particular, it was shown that the mentioned models are equivalent to massive vector and scalar fields, respectively. The equivalence holds in curved space-time, and not only at the classical level, but also in the semiclassical theory, which means that the contributions of the corresponding fields to the vacuum effective action are identical to the ones of vector and minimal scalar theories. The proof of \cite{Buch-Kiri-Plet} is very general and is based on $\ze$-regularization. However, this type of proof is always interesting to check by direct calculation, similar to what was done for the Proca model \cite{BuGui}. In this case one can detect the discontinuity of the massless limit not only in the local divergent terms, but also in the complicated non-local contributions, which are typical for the massive field. Another interesting aspect is that the proofs of equivalence involve operations which are potentially dangerous with respect to the non-local multiplicative anomaly, which was previously detected only for fermionic determinants \cite{QED-form}. It looks reasonable to see whether a similar situation takes place in the case of antisymmetric fields and their quantum equivalence with massive vectors and minimal massive scalars.
In the present work we derive the one-loop form factors using the heat-kernel technique based on the exact solution by Barvinsky, Vilkovisky and Avramidi \cite{Bar-Vil90,Avramidi}. Such a calculation was previously performed for various models, including the scalar field \cite{apco}, fermions and massive vectors \cite{fervi}. The equivalence with the derivation by means of Feynman diagrams has been shown in \cite{apco} and more recently in \cite{CadeOmar}. Indeed, the heat-kernel based method is much more economical and, since the application to antisymmetric fields is technically complicated, we chose this approach.

The paper is organized as follows. In Sect.~\ref{o} we briefly review the heat-kernel derivation of form factors according to \cite{apco} and present the final results for massive scalars and vectors. The second-rank antisymmetric tensor theory is worked out in Sect.~\ref{sec-rank2}, and Sect.~\ref{sec-rank3} deals with the third-rank case. Most of the contents of these two sections repeats \cite{Buch-Kiri-Plet} and other references; the reason to include this material is the intention to make the work self-contained. Therefore, this part is made as brief as possible, but at the same time we give sufficient details. Finally, the Conclusions are drawn in Sect.~\ref{Conc}.

\section{Derivation of the one-loop form factors}
\label{o}

Let us review the derivation of the one-loop effective action up to the terms of the second order in curvatures. More details can be found in Refs.~\cite{apco,fervi}. The one-loop Euclidean effective action is given by the formula
\beq
\n{EA}
\bar{\Ga}^{(1)} = \frac{1}{2} \Tr \ln \hat{H} \,,
\eeq
where we assume the minimal form of the bilinear operator of the action,
\beq
\n{Bilinear}
\hat{H} &=& \hat{1}\Box - \hat{1}m^2 + \hat{P} - \frac{\hat{1}}{6}\, R \,,
\eeq
where $\hat{1}$ is the identity matrix in the space of the fields of interest and the operator $\hat{P}$ depends on the metric and possibly other background fields. The commutator of two covariant derivatives acting on the corresponding fields is denoted $\,\hat{S}_{\mu\nu} = [\na_\mu,\na_\nu]$. The effective action \eq{EA} can be presented as an integral in the proper time $s$, involving the heat kernel $K(s)$,
\beq
\n{EA2}
\bar{\Ga}^{(1)} = - \frac{1}{2} \, \int_0^\infty \frac{ds}{s} \, \Tr K(s)\,.
\eeq
The heat kernel can be expanded up to the second order in the curvatures, namely the Ricci tensor $R_{\mu\nu}$ and scalar $R$, as well as $\,\hat{S}_{\mu\nu}\,$ and $\,\hat{P}$.
The second-order solution has the form \cite{Bar-Vil90}
\beq
\n{Heat-Kernel}
\Tr K(s) &=& \frac{\mu^{2(2-\om)}}{(4 \pi s)^\om} \int d^{2\om} x \, \sqrt{g} \, e^{-s m^2} \tr \Big\{\hat{1} + s \hat{P} \nonumber
\\
&+& s^2 \,\hat{1} \Big[ R_{\mu \nu} f_1 (-s\Box) R^{\mu\nu} + R f_2(-s\Box) R \nonumber
\\
&+& \hat{P} f_3(-s\Box) R + \hat{P} f_4(-s\Box) \hat{P} \nonumber
\\
&+& \hat{S}_{\mu\nu} f_5 (-s\Box) \hat{S}^{\mu\nu} \Big] \Big\}\,,
\eeq
where $\,\om\,$ is the parameter of dimensional regularization, $\,\mu\,$ is the renormalization parameter and the functions $\,f_{1,2,...,5}\,$ are given by the following expressions:
\beq
f_1 (\tau) &=& \frac{f(\tau) - 1 + \tau/6}{\tau^2}\,,
\nonumber
\\
f_2 (\tau) &=& \frac{f(\tau)}{288} + \frac{f(\tau)-1}{24 \tau} - \frac{f(\tau)-1+\tau/6}{8 \tau^2}\,,
\nonumber
\\
f_3 (\tau) &=& \frac{f(\tau)}{12} + \frac{f(\tau)-1}{2\tau}\,,
\quad
f_4 (\tau) \,=\, \frac{f(\tau)}{2}\,,
\nonumber
\\
f_5 (\tau) &=& \frac{1 - f(\tau)}{2\tau}\,,
\eeq
where
\beq
f(\tau) = \int_0^1 \, d\al \, e^{\al(1-\al)\tau} \,,
\qquad
\tau = - s \Box \,.
\eeq
The integrals were taken in Refs.~\cite{apco,fervi} and we present only the final result,
\beq
\n{0-EAM}
\bar{\Ga}^{(1)} &=& - \frac{1}{2(4 \pi)^2} \int d^{2\om} x \, \sqrt{g}\, \Big\{ l_{CC}L_0 + l_RL_1 R \nonumber
\\
&+& \sum_{i=1}^5 l_i^* \,R_{\mu\nu} \,M_i \,R^{\mu\nu} + \sum_{i=1}^5 l_i \,R \,M_i \,R \Big\},
\eeq
where the coefficients $\,l^*_{1,2,..,5}\,$ and $\,l_{CC,R,1,2,..,5}\,$ are model-dependent and the integrals are universal. In dimensional regularization, taking $\,\om \to 2\,$, they have the form
\beq
\n{Int-L0}
L_0 &=& \frac{m^4}{2}\Big(\frac{1}{\epsilon}+\frac{3}{2}\Big)\,,
\\
L_1 &=& -m^2 \Big(\frac{1}{\epsilon} + 1 \Big)\,,
\label{Int-L1}
\\
M_1 &=& \frac{1}{\epsilon} + 2 Y \,,
\label{Int-M1}
\\
M_2 &=& \Big(\frac{1}{\epsilon} + 1 \Big) \Big(\frac{1}{12}-\frac{1}{a^2}\Big) - \frac{4Y}{3a^2} + \frac{1}{18} \,,
\label{Int-M2}
\\
M_3 &=& \Big(\frac{1}{\epsilon} + \frac{3}{2} \Big) \Big(\frac{1}{2a^4}-\frac{1}{12 a^2}+\frac{1}{160}\Big)
\nonumber
\\
&+& \frac{8Y}{15a^4} -\frac{7}{180a^2}+\frac{1}{400}\,,
\label{Int-M3}
\\
M_4 &=& \Big(\frac{1}{\epsilon} + 1 \Big) \Big(\frac{1}{4} - \frac{1}{a^2} \Big)\,,
\label{Int-M4}
\\
M_5 &=& \Big( \frac{1}{\epsilon} + \frac{3}{2} \Big) \Big( \frac{1}{2a^4}-\frac{1}{4 a^2}+\frac{1}{32}\Big)\,,
\label{Int-M5}
\eeq
where we used the condensed notation
\beq
\frac{1}{\epsilon} = \frac{1}{2-\om} - \ga + \ln \Big(\frac{4\pi \mu^2}{m^2}\Big)
\eeq
and $\ga$ is the Euler constant (it was absorbed into the $\mu$-dependence in \cite{apco,fervi}). In the expressions \eq{Int-L0}--\eq{Int-M5} we disregarded the terms which are ${\cal O}(2-\om)$. We also used the definitions
\beq
\n{A}
Y &=& 1 - \frac{1}{a} \, \ln \Big(\frac{2 + a}{2 - a} \Big)
\eeq
and
\beq
\n{a-square}
a^2 = \frac{4 \Box}{ \Box - 4m^2}.
\eeq
In order to arrive at the final form of the one-loop effective action, it is useful to introduce the basis which consists of the square of the Weyl tensor and the scalar curvature. To this end, one can assume that for the functions $\,F=F(\Box)$ of our interest there is an expansion into power series in $\Box$, and use the reduction formula (see, e.g., \cite{highderi})
\beq
R_{\mu\nu\al\be} F R^{\mu\nu\al\be} &=& 4 R_{\mu\nu} F R^{\mu\nu} - R F R + {\cal O}(R_{\dots}^3)\,.
\label{GB}
\eeq
For the scalar field with non-minimal interaction to external gravity,
\beq
\n{S0}
S_0 = \frac{1}{2} \int d^4 x \sqrt{g}\, \Big\{ g^{\mu\nu} \pa_\mu \ph \pa_\nu \ph + m^2 \ph^2 + \xi R \ph^2 \Big\}\,,
\eeq
the ${\cal O}(R_{\dots}^2)$ result is\footnote{Here we follow \cite{apco} and use an opposite sign for the mass term. This is reasonable taking into account possible applications to spontaneous symmetry breaking.} \cite{apco}
\beq
\n{EA-scalar}
\bar{\Ga}_0^{(1)} &=& - \frac{1}{2(4 \pi)^2} \int d^4x \, \sqrt{g} \, \Big\{ \frac{m^4}{2}\Big(\frac{1}{\epsilon}+\frac{3}{2}\Big)
\nonumber
\\
&+& \tilde{\xi} \Big(\frac{1}{\epsilon} + 1 \Big) m^2 R + R \Big(\frac{1}{2 \epsilon } \tilde{\xi}^2 + k^0_R \Big) R
\nonumber
\\
&+& \frac{1}{2} \, C_{\mu\nu\al\be} \Big(\frac{1}{60 \epsilon } + k^{0}_W \Big) C^{\mu\nu\al\be} \Big\}\,,
\eeq
where $\,\tilde{\xi}=\xi - 1/6\,$ and the non-local form factors have the form
\beq
\n{Form-s1}
k^0_W \,=\,k^0_W (a) = \frac{1}{150} + \frac{2}{45 a^2} + \frac{8 Y}{15 a^4}
\eeq
and
\beq
\n{Form-s2}
k^0_R &=& k^0_R (a) \,=\, \frac{1}{108}\Big( \frac{1}{a^2} - \frac{7}{20}\Big) + \frac{Y}{144}\Big(1- \frac{4}{a^2}\Big)^2
\nonumber
\\
&+& \Big(\frac{1}{18} - \frac{Y}{6} + \frac{2 Y}{3 a^2}\Big)\,\tilde{\xi} + Y \tilde{\xi}^2\,.
\eeq
In the next sections we will need the form factors for the minimal (i.e., $\xi=0$) scalar and also the ones for the Proca model in curved space,
\beq
\n{ac-1}
S_1 &=& \int d^4 x \sqrt{g} \,\Big\{ - \frac{1}{4}\, F_{\mu\nu}^2 - \frac12\, m^2 A_{\mu}^2 \Big\} \,,
\eeq
where $\,F_{\mu\nu} = \na_\mu A_\nu - \na_\nu A_\mu$. The standard Stueckelberg procedure can be easily adapted to curved space \cite{BuGui}, yielding an equivalent action with an extra scalar field $\,\ph$,
\beq
\n{New-ac1}
S'_1 = - \int d^4 x \sqrt{g} \Big\{ \frac{1}{4}\, F_{\mu\nu}^2 + \frac{m^2 }{2} \big( A_{\mu} - \frac{1}{m}\pa_\mu\ph\big)^2 \Big\} .
\eeq
The new action \eq{New-ac1} is gauge invariant under the gauge transformations
\beq
A_\mu \to A'_\mu = A_\mu + \na_\mu f,
\quad
\ph \to \ph' = \ph + m f\,.
\label{trans}
\eeq
The original theory \eq{ac-1} is recovered in the special gauge $\,\ph = 0$. Since the gauge-fixing dependence is irrelevant for the derivation of the vacuum effective action, the practical calculation can be performed in a more useful gauge. The reader can consult Ref.~\cite{BuGui} for the details; here we just present the final result
\beq
\n{EA-vetor}
\bar{\Ga}^{(1)} &=& - \frac{1}{2(4 \pi)^2} \int d^4x \, \sqrt{g} \, \Big\{ \frac{3 m^4}{2} \Big(\frac{1}{\epsilon} +\frac{3}{2}\Big)
\nonumber
\\
&+& \frac{m^2}{2} \Big(\frac{1}{\epsilon} + 1 \Big) R + R \Big(\frac{1}{72 \epsilon} + k^1_R \Big) R
\nonumber
\\
&+& \frac{1}{2} \, C_{\mu\nu\al\be} \Big(\frac{13}{60 \epsilon } + k^1_W \Big) C^{\mu\nu\al\be}\Big\}\,,
\eeq
where
\beq
k^1_W &=& k^1_W (a) = -\frac{91}{450} +\frac{2}{15 a^2} + Y +\frac{8 Y}{5 a^4} -\frac{8 Y}{3 a^2}\,,
\label{ff-Proca}
\\
k^1_R &=& k^1_R (a) = -\frac{1}{2160} +\frac{1}{36 a^2} +\frac{Y}{48} +\frac{Y}{3 a^4}-\frac{Y}{18 a^2}\,.
\nonumber
\eeq
As already discussed in \cite{BuGui}, the massless limit of the expression (\ref{EA-vetor}) does not yield the effective action for a massless field, due to the discontinuity in the quantum corrections. In the next sections we shall meet two other examples of a similar discontinuity in the massless limit.
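As a simple cross-check, the expressions above can be evaluated numerically. The minimal sketch below (ours, not part of the original derivation) verifies the expected decoupling in the heavy-mass limit $a \to 0$, where expanding $Y$ from Eq.~\eq{A} gives $k^0_W \approx -a^2/840$ for the minimal scalar, and shows that the Proca form factor $k^1_W$ also vanishes in this limit.
\begin{verbatim}
import math

def Y(a):
    # Y = 1 - (1/a) ln((2+a)/(2-a))
    return 1.0 - math.log((2.0 + a) / (2.0 - a)) / a

def kW_scalar(a):
    # Weyl-squared form factor of the minimal massive scalar
    return 1.0/150 + 2.0/(45*a**2) + 8.0*Y(a)/(15*a**4)

def kW_proca(a):
    # Weyl-squared form factor of the Proca field
    y = Y(a)
    return (-91.0/450 + 2.0/(15*a**2) + y
            + 8.0*y/(5*a**4) - 8.0*y/(3*a**2))

# a -> 0 corresponds to |Box| << m^2 (heavy mass): both form
# factors vanish, illustrating the decoupling of massive modes.
for a in (0.2, 0.1, 0.05):
    print(a, kW_scalar(a), -a**2/840, kW_proca(a))
\end{verbatim}
At $a=0.05$ the scalar form factor already agrees with $-a^2/840$ to a fraction of a percent.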
\section{Massive antisymmetric rank-2 tensor} \label{sec-rank2} In this section we first present the well-known general considerations and then proceed to the derivation of the non-local form factors. \subsection{General considerations} The model of the massive antisymmetric second-rank tensor field $\,B_{\mu\nu} = - B_{\nu\mu}\,$ is described by the action \beq \n{Ant2-Act} S_2 = \int d^4 x\sqrt{g} \Big\{ - \frac{1}{12} F_{\mu\nu\la} F^{\mu\nu\la} - \frac{m^2}4 B_{\mu\nu}B^{\mu\nu} \Big\}, \eeq where \beq F_{\mu\nu\la} &=& \na_\mu B_{\nu\la} + \na_\nu B_{\la \mu} + \na_\la B_{\mu\nu}\,. \n{F} \eeq In four dimensions the theory \eq{Ant2-Act} is classically equivalent to a massive axial vector field $A^\mu$. The equivalence can be established through a detailed analysis of the equations of motion\footnote{ Let us note that the massive axial vector describes an antisymmetric torsion field. This identification comes from the requirement of quantum consistency \cite{belyaev,torsi}. The theory was eventually shown to violate unitarity when coupled to fermions \cite{guhesh}.}. The duality between the two theories is given by \beq B_{\mu\nu} & \propto & \frac{1}{m}\, \epsilon_{\mu\nu\al\be} F^{\al\be}\,, \eeq where $\,F_{\mu\nu} = \na_\mu A_\nu - \na_\nu A_\mu$. The model \eq{Ant2-Act} is an example of a theory with softly broken gauge symmetry. The kinetic part of the action \eq{Ant2-Act} is invariant under the gauge transformation \beq \n{r2-gauge} B_{\mu\nu} \to B'_{\mu\nu} = B_{\mu\nu} + \na_\mu \xi_\nu - \na_\nu \xi_\mu\,, \eeq where the vector gauge parameters $\xi_\mu$ in \eq{r2-gauge} are not unique. They can be transformed according to \beq \n{xi-gauge} \xi_\mu \to \xi_\mu' = \xi_\mu + \na_\mu \ph\,, \eeq with $\,\ph=\ph (x)\,$ being an arbitrary scalar field. Equation \eq{xi-gauge} means that the gauge generators are linearly dependent. Using the background field method, we can observe that the massive term in \eq{Ant2-Act} violates the gauge symmetry \eq{r2-gauge}, but does not remove the degeneracy of the bilinear form in the quantum fields \beq \hat{H}_2 &=& \frac12\, \frac{\de^2 S_2}{\de B_{\mu\nu}(x)\,\de B_{\al\be}(x^\prime)}\,. \eeq Similar to the Proca model, the simplest approach to the Lagrangian quantization of the theory (\ref{Ant2-Act}) requires the Stueckelberg procedure. Following \cite{Buch-Kiri-Plet}, we introduce an extra vector field $A_\mu$ and consider, instead of Eq.~\eq{Ant2-Act}, the action \beq \n{Ant2-Act-New} S'_2 &=& \int d^4 x \, \sqrt{g} \, \Big\{ - \frac{1}{12} \, F_{\mu\nu\la} F^{\mu\nu\la} \nonumber \\ &-& \frac{1}{4}\, m^2 \Big( B_{\mu\nu} - \frac{1}{m} F_{\mu\nu}\Big)^2 \Big\}\,. \eeq The previous action \eq{Ant2-Act} can be obtained from \eq{Ant2-Act-New} in the specific gauge $\,A_\mu= 0$. The new action \eq{Ant2-Act-New} is gauge invariant under the simultaneous transformation \beq \n{G1} B_{\mu\nu} &\to& B'_{\mu\nu} = B_{\mu\nu} + \na_\mu \xi_\nu - \na_\nu \xi_\mu\,, \\ \n{G2} A_\mu &\to& A'_\mu = A_\mu + m \, \xi_\mu \, \eeq and it is also invariant under the gauge transformation of the Stueckelberg field \beq \n{G3} A_\mu \to A'_\mu = A_\mu +\na_\mu \La \,, \eeq with a scalar parameter $\La(x)$. Furthermore, we can consider a new scalar field $\ph(x)$ and note that the fields $B_{\mu\nu}$ and $A_\mu$ do not change if their gauge parameters transform as \beq \n{D1} \xi_\mu &\to& \xi_\mu' = \xi_\mu + \na_\mu \ph \,, \\ \n{D2} \La &\to& \La' = \La + m \ph \,. \eeq Once again, the equations \eq{D1} and \eq{D2} reflect the fact that the gauge generators of the theory are linearly dependent.
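Indeed, a quick check shows that the Stueckelberg combination entering \eq{Ant2-Act-New} is inert under the transformations \eq{G1} and \eq{G2}, \beq \de \Big( B_{\mu\nu} - \frac{1}{m}\, F_{\mu\nu} \Big) \,=\, \big(\na_\mu \xi_\nu - \na_\nu \xi_\mu\big) \,-\, \frac{1}{m}\, \big[\na_\mu (m \xi_\nu) - \na_\nu (m \xi_\mu)\big] \,=\, 0\,, \eeq while under \eq{G3} it is invariant because $\,\de F_{\mu\nu} = \na_\mu \na_\nu \La - \na_\nu \na_\mu \La = 0\,$ for a scalar parameter $\La$.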
The general formalism of Lagrangian quantization in theories with dependent generators is based on the Batalin-Vilkovisky method \cite{Batalin-Vilkovisky}. However, in relatively simple theories such as \eq{Ant2-Act-New}, where the action is quadratic and the algebra of dependent gauge generators is Abelian, it is sufficient to make a successive multi-step application of the Faddeev-Popov method \cite{Schwarz1,Schwarz2,13}. In the following we are going to apply this approach to the theory \eq{Ant2-Act-New}. According to the Faddeev-Popov method, we replace the gauge group integration in the functional integral over gauge fields, \beq \int DB \, DA \, e^{\,i\, S'_2[B,A]}\,, \label{base} \eeq by the quantity \beq \n{Standard-FP} \int DB DA e^{i\, S'_2[B,A]} \, \De\, \de \big(\chi^\al[B,A] - l^\al ,\chi[A] - l \big), \eeq where $\De$ provides the identity \beq \n{Standard-De} 1 = \De \int D \xi D \La \, \de \left(\chi^\al[B',A'] - l^\al ,\,\chi[A'] - l \right). \eeq Here $\chi^\al$ and $\chi$ are the gauge-fixing terms related to the transformations \eq{G1}, \eq{G2} and to \eq{G3}, respectively. For the theory \eq{Ant2-Act-New} one can choose \beq \chi_{\be} &=& \na^\al B_{\al\be} - m \, A_\be \,, \qquad \chi \,=\, \na_\al A^\al \,. \eeq It is easy to verify the following constraint between the two gauge-fixing conditions: \beq \n{Transverse} \na^\al \chi_\al + m \chi = 0 \,. \eeq Due to the constraint \eq{Transverse}, the delta-function $\,\de \left(\chi^\al[B,A] ,\, \chi[A] \right)\,$ in the definitions \eq{Standard-FP} and \eq{Standard-De} is ill-defined, which represents the main difference from the standard Faddeev-Popov procedure. The same problem also affects the integral in \eq{Standard-De}, since the fields $B'$ and $A'$ are invariant under the transformations \eq{D1} and \eq{D2} of the gauge parameters $\xi_\mu$ and $\La$. To solve this issue, one can apply the Faddeev-Popov trick a second time to remove the integration along the gauge group orbits. Consider the Fourier representation for the delta-function \beq \n{DeDirac-int-1} \de \left(\chi^\al ,\, \chi \right) &=& \int \, D \zeta \, D \psi \, e^{\,i(\, \ze_\al \chi^\al\, -\,\psi\,\chi)} \,. \eeq After integration by parts and using \eq{Transverse} one can easily show that the integrand in \eq{DeDirac-int-1} is invariant under the transformation \beq \n{DD1} \ze_\al &\to& \ze_\al' = \ze_\al + \na_\al \phi \,, \\ \n{DD2} \psi &\to& \psi' = \psi + m\, \phi \,. \eeq In order to have a well-defined expression, one can extract from \eq{DeDirac-int-1} the integral over the gauge group orbit \eq{DD1}-\eq{DD2} by using the Faddeev-Popov trick; hence we arrive at \beq \n{DeDirac-int-2} \de \left(\chi^\al \,,\, \chi \right) &=& \int \, D \zeta \, D \psi \, e^{\, i( \zeta_\al \, \chi^\al - \, \psi \, \chi) } \, \times \nonumber \\ &\times & \de ( \na_\al \ze^\al - m\psi) \,\Det \hat{H}_0^{min} \,, \eeq where $\,H_0^{min}= \Box - m^2\,$ is a minimal scalar operator. Let us remember that this operator depends on the external metric and its functional determinant is non-trivial. For the integral in Eq.~\eq{Standard-De} one has to factor out the integrations over the gauge group orbits \eq{D1}-\eq{D2}. This means we replace $\,D \xi D\La\,$ by the product \beq D \xi D\La \, \de (\na_\al \xi^\al - m\La) \, \Det \hat{H}_0^{min} \eeq in the definition \eq{Standard-De}.
Hence, the equation \eq{Standard-De} for $\De$ becomes \beq \De^{-1} &=& \int \, D \ze \, D\psi \, D \xi \, D \La \, \de (\na_\al \ze^\al - m\psi - s )\, \times \nonumber \\ &\times & \de (\na_\be \xi^\be - m\La - t )\,\times \n{De-certo} \\ &\times& e^{i \, \left\{\ze_\al \, \left(\chi^\al[B'] - l^\al \right) - \psi \, (\chi[A'] - l) \right\}} \, ( \Det \hat{H}_0^{min})^2 \,. \nonumber \eeq For solving \eq{De-certo} one can use the fact that \beq \n{vs} \chi ^\al [B',A'] - l^\al &=& \chi ^\al [B,A] - l^\al + (H_1)^\al_\be \,\xi^\be \nonumber \\ \mbox{and} \quad \chi[A'] - l &=& \chi[A] - l + \Box \La \,, \eeq where the vector operator is \beq \n{Bilinear-1} \hat{H}_1 &=& (H_1)_{\nu}^{\mu} \,=\, \de^\mu_\nu \Box - \na^\mu \na_\nu - R^\mu_\nu - m^2 \de^\mu_\nu \,. \eeq Because of the delta-function in \eq{Standard-FP}, the fields satisfy the equations $\,\chi^\al[B,A]-l^\al =0\,$ and $\,\chi[A]-l =0$. Therefore, introducing the identity factor in the form of the double integral $\,\int Ds \, Dt \, e^{- i s\, t}\equiv 1$, we can take the integral over the delta-functions, arriving at \beq \n{De-Re} \De = \Det \hat{H}'_1 \,\cdot\, (\Det \hat{H}_0^{min})^{-1} \,, \eeq where $\,\hat{H}'_1\,$ is a minimal vector operator, \beq \n{H1prime} \hat{H}'_1 = (H_1^\prime)_\nu^\mu = \de^\mu_\nu \Box - R^\mu_\nu - \de^\mu_\nu m^2 \,. \eeq For the sake of completeness, let us remember the Stueckelberg procedure for the massive vector field \cite{BuGui}, \beq \n{EA-1} \bar{\Ga}^{(1)} &=& \frac{1}{2} \Tr \ln \hat{H}_1 \nonumber \\ &=& \frac{1}{2} \Tr \ln ( \de^\mu_\nu \Box - R^\mu_\nu - \de^\mu_\nu m^2) - \frac{1}{2} \Tr \ln (\Box - m^2) \nonumber \\ &=& \frac{1}{2} \Tr \ln \hat{H}'_1 - \frac{1}{2} \Tr \ln \hat{H}_0^{min} \,. \eeq Now, using the well-defined expressions \eq{DeDirac-int-2} and \eq{De-Re}, we can consider Eq.~\eq{Standard-FP} and then the effective action. First one has to write the delta-function \eq{DeDirac-int-2} in a more useful way. Using the Fourier representation \beq \de (\na_\al \ze^\al - m \psi) &=& \int D \ph \, e^{i\,(\na_\al \ze^\al - m\psi) \, \ph} \nonumber \\ &=& \int D \ph \, e^{i\, ( - \ze^\al \na_\al \ph - \psi \, m \, \ph )} \,, \eeq we can make the integration over $\ze$ and $\psi$ in \eq{DeDirac-int-2} and find \beq \n{De-final} \de \left(\chi^\al ,\, \chi \right) &=& \int D \ph \,\, \de \left(\chi_\al - \na_\al \ph \right) \,\times \nonumber \\ &\times & \de \left(\chi + m \ph \right) \, \Det \hat{H}_0^{min}. \eeq Hence, using Eqs.~\eq{De-final} and \eq{De-Re}, we write the vacuum effective action in the form \beq \n{VP-EA-1} e^{i \Ga[g_{\mu\nu}]} &=& \int DB \, DA \, D \ph \, e^{\,i\, S'_2[B,A]} \, \de \left(\chi_\al - \na_\al \ph -l^\al \right) \,\times \nonumber \\ &\times& \de \left(\chi + m \ph - l \right) \, \Det \hat{H}'_1 \,. \eeq Since \eq{VP-EA-1} does not depend on $l^\al$ and $l$, one can insert the identities in the form of $\,\int Dl \, e^{-\frac{i}{2} l^\al l_\al}$ and $\int Dl \, e^{-\frac{i}{2} l^2}$ to take the integrals. In this way we arrive at \beq e^{i \Ga[g_{\mu\nu}]} &=& \int DB \, DA \, D \ph \, e^{i\left\{S'_2[B,A] + S_{gf}[B,A] - S_0[\ph] \right\}} \,\times \nonumber \\ &\times& \Det \hat{H}'_1 \,, \eeq where $S_{gf} = -\frac{1}{2} \int d^4 x \sqrt{g} \{\chi_\al \chi^\al + \chi^2\}$ and $S_0[\ph]$ is the action of the scalar field (\ref{S0}).
The action with the gauge-fixing term can be written in the form \beq \n{Bili2} S'_2 + S_{gf} &=& \int d^4 x \sqrt{g} \Big\{ \frac{1}{4} \, B_{\al\be} (H'_2)^{\mu\nu}_{\al\be} B^{\mu\nu} \nonumber \\ &+& \frac{1}{2} \, A_\mu (H'_1)^\mu_\nu A^\nu \Big\} \,, \eeq where \beq \hat{H}'_2 &=& (H'_2)^{\al\be}_{\quad\mu\nu} \nonumber \\ &=& \de^{\al\be}_{\quad\mu\nu} (\Box - m^2) - J^{\al\be}_{\quad\mu \nu} + R^{\al\be}_{\,.\,.\,\mu\nu} \eeq and $\,(H'_1)^\mu_\nu\,$ was defined in (\ref{H1prime}). Here \beq \de_{\al \be ,\, \mu \nu} = \frac{1}{2} \left( g_{\al\mu} g_{\be\nu} - g_{\al\nu}g_{\be\mu} \right) , \eeq is the identity matrix in the antisymmetric rank-2 tensor space and \beq J_{\al \be ,\, \mu \nu} = \frac{1}{2} \, \big( \,g_{\al\mu} R_{\be\nu} + g_{\be\nu} R_{\al\mu} - g_{\al\nu} R_{\be\mu} - g_{\be\mu} R_{\al\nu}\, \big)\,. \nonumber \eeq Eqs.~\eq{VP-EA-1} and \eq{Bili2} enable one to formulate the one-loop contribution to the vacuum Euclidean effective action in the form \beq \n{EAe} \bar{\Ga}^{(1)} &=& \frac{1}{2}( \Tr \ln \hat{H}'_2 + \Tr \ln \hat{H}'_1 + \Tr \ln \hat{H}_0^{min} ) - \Tr \ln \hat{H}'_1 \nonumber \\ &=& \frac{1}{2} \Tr \ln \hat{H}'_2 - \frac{1}{2} \Tr \ln \hat{H}'_1 + \frac{1}{2} \Tr \ln \hat{H}_0^{min}\,. \eeq Using the relation for the Proca field contribution, Eq.~(\ref{EA-1}), one can also write the one-loop effective action in the form \beq \bar{{\Ga}}^{(1)} &=& \frac{1}{2} \Tr \ln \hat{H}'_2 - \frac{1}{2} \Tr \ln \hat{H}_1 \,. \label{Gamma} \eeq Finally, the effective action requires subtracting the contribution of the Stueckelberg massive vector from that of the massive tensor operator, $\,\Tr \ln\hat{H}'_2$. \subsection{Derivation of form factors} The result (\ref{EAe}) enables one to use the heat-kernel technique for deriving the form factors. The first step is to identify the elements of the general expression (\ref{0-EAM}), \beq \hat {1} &=& \de^{\al\be}_{\quad\mu\nu}\,, \nonumber \\ \hat{P}_2 &=& (P_2)^{\al\be}_{\quad\mu\nu} = R^{\al \be}_{\,.\,.\,\mu \nu} + \frac{1}{6} \, \de^{\al \be}_{\quad\mu \nu} R - J^{\al \be}_{\quad\mu \nu}\,. \eeq The commutator of covariant derivatives on the antisymmetric tensor field $B_{\mu\nu}$ is \beq && (\hat{S}_2)_{\mu\nu} \,=\, \big[(S_2)_{\mu\nu}\big]^{\al\be}_{\rho\om} \\ \,=\, && \frac{1}{2} \,\big(R^{\al}_{.\,\rho\mu\nu} \, \de^\be_\om - R^{\be}_{.\,\rho\mu\nu} \, \de^\al_\om - R^{\al}_{.\,\om\mu\nu} \, \de^\be_\rho + R^{\be}_{.\,\om\mu\nu} \, \de^\al_\rho\big)\,. \nonumber \eeq Then, using the heat-kernel representation, we arrive at the identification in the second order in curvatures, \beq && \frac{1}{2} \Tr \ln H'_2 \,=\, - \frac{1}{2(4 \pi)^2} \int d^{2\om} x \, \sqrt{g} \, \Big\{\, 6 L_0 - L_1 \, R \nonumber \\ && + \sum_{i=1}^5 l_i^* \,R_{\mu\nu} \,M_i \,R^{\mu\nu} + \sum_{i=1}^5 l_i \,R \,M_i \,R \, \Big\}\,, \label{hk} \eeq where $l_{CC}$ and $l_R$ are already inserted into (\ref{hk}), while the other coefficients are \beq l_1 = -\frac{5}{16} \,, \quad l_2 = -\frac{5}{4}\,, \quad l_3 = -\frac{3}{4}\,, \quad l_4 = \frac{9}{8}\,, \quad l_5 = \frac{3}{4}\,, \nonumber \eeq \beq l_1^* = 1\,, \quad l_2^* = 4\,, \quad l_3^* = 6\,, \quad l_4^* = -3\,, \quad l_5^* = -6\,.
\nonumber \eeq Replacing these values and the integrals \eq{Int-L0}-\eq{Int-M5}, we arrive at the expression \beq \n{EA-2'} \frac{1}{2} \, \Tr \ln \hat{H}'_2 &=& - \frac{1}{2(4 \pi)^2} \int d^4x \, \sqrt{g} \, \Big\{ 3 m^4 \Big(\frac{1}{\ep}+\frac{3}{2}\Big) \nonumber \\ &+& m^2 \Big(\frac{1}{\ep} + 1 \Big) R + R \Big[\frac{1}{36 \ep} + k'_{2,R} (a) \Big] R \nonumber \\ &+& \frac{1}{2} \, C_{\mu\nu\al\be} \Big[\frac{13}{30 \ep} + k'_{2,W} (a) \Big] C^{\mu\nu\al\be} \Big\}\,, \eeq where \beq k'_{2,W} (a) = -\frac{91}{225} +\frac{4}{15 a^2} +2 Y +\frac{16 Y}{5 a^4} -\frac{16 Y}{3 a^2} \,, \eeq \beq k'_{2,R} (a) = -\frac{1}{1080} +\frac{1}{18 a^2} +\frac{Y}{24} +\frac{2 Y}{3 a^4} -\frac{Y}{9 a^2} \,. \eeq According to Eq.~(\ref{Gamma}), we have to subtract from \eq{EA-2'} the massive vector part, Eq.~\eq{EA-vetor}. Hence, we get \beq \n{EA-2a} \bar{\Ga}^{(1)} &=& -\,\frac{1}{2(4 \pi)^2} \int d^4x \, \sqrt{g} \, \Big\{ \frac{3 m^4}{2}\,\Big(\frac{1}{\ep} +\frac{3}{2}\Big) \nonumber \\ &+& \frac{m^2}{2}\, \Big(\frac{1}{\ep} + 1 \Big) R + R \Big[\frac{1}{72 \ep} + k^2_R \Big] R \nonumber \\ &+& \frac{1}{2} \, C_{\mu\nu\al\be} \Big[\frac{13}{60 \ep} + k^2_W \Big] C^{\mu\nu\al\be}\Big\}\,, \eeq where the non-local form factors are \beq k^2_W &=& k^2_W(a) = -\frac{91}{450} +\frac{2}{15 a^2} +Y +\frac{8 Y}{5 a^4} -\frac{8 Y}{3 a^2}\,, \\ k^2_R &=& k^2_R(a) = -\frac{1}{2160} +\frac{1}{36 a^2} +\frac{Y}{48} +\frac{Y}{3 a^4}-\frac{Y}{18 a^2}\,. \nonumber \eeq It is easy to see that the vacuum effective action for the massive rank-2 antisymmetric tensor, \eq{EA-2a}, is exactly the same as the vacuum effective action in the massive vector field case, given by Eq.~\eq{EA-vetor}. This confirms the conclusion of \cite{Buch-Kiri-Plet} that the massive rank-$2$ antisymmetric tensor is equivalent to the Proca theory at the quantum level. Let us note that this conclusion was achieved there by the $\ze$-regularization method, and we know that some relations of this kind can be violated by the non-local multiplicative anomaly \cite{QED-form}. Nothing of this sort occurs in the present case, as we have seen. It is easy to check that in the massless limit the form factor $k^2_W(a)$ reduces to the usual logarithmic expression $\,-\frac{13}{60} \ln \left( - \frac{ \Box}{4 \pi \mu^2}\right)$. On the other hand, we know that in the $m=0$ case the rank-$2$ antisymmetric tensor is equivalent to a scalar field minimally coupled to gravity, where the duality looks like $\,F_{\al\be\om} = \ep_{\al\be\om\ga} \na^\ga \ph$. The form factor for the minimal massless scalar field is $\,-\frac{1}{60} \ln \left( - \frac{\Box}{4 \pi \mu^2} \right)$. The difference between the two coefficients, $1/5 = 13/60 - 1/60$, demonstrates the discontinuity of the quantum contributions in the massless limit for the rank-$2$ antisymmetric tensor theory. This difference is nothing else but the contribution of a massless vector field. To understand which vector this is, let us consider the effective action for a massless rank-$2$ antisymmetric tensor field \cite{Buch-Kiri-Plet}, \beq \n{EA2m} \bar{\Ga}^{(1)} = \frac{1}{2} \Tr \ln \hat{H}'_2 - \Tr \ln \hat{H}'_1 + \frac{3}{2} \Tr \ln \hat{H}_0^{min}\,. \eeq The difference between \eq{EAe} and \eq{EA2m} is \beq -\frac{1}{2} \Tr \ln \hat{H}'_1 + \Tr \ln \hat{H}_0^{min} \,, \eeq which is the effective action of a free massless vector field. Another way to understand this is to recall that the massive rank-$2$ antisymmetric tensor field is equivalent to a massive vector field model.
According to \cite{BuGui}, \beq \n{EAe2} \bar{\Ga}^{(1)} &=& \frac{1}{2} \Tr \ln ( \de^\mu_\nu \Box - R^\mu_\nu - \de^\mu_\nu m^2) + \frac{1}{2} \Tr \ln (\Box - m^2) \nonumber \\ &-& \Tr \ln (\Box - m^2) \\ &=& \frac{1}{2} \Tr \ln ( \de^\mu_\nu \Box - R^\mu_\nu - \de^\mu_\nu m^2) \,-\, \frac{1}{2} \Tr \ln (\Box - m^2) \,. \nonumber \eeq Obviously, the difference between the contribution of a massive vector field and that of the minimal scalar field is just the effective action of a massless vector field. In the massless limit this extra term does not disappear, and this produces the quantum discontinuity. \section{Massive antisymmetric rank-3 tensor} \label{sec-rank3} As a second example, consider the model of the massive totally antisymmetric rank-$3$ tensor field $\,C_{\mu \nu \rho} = C_{[\mu \nu \rho]} $. The action is given by \beq \n{S3} S_3 &=& \int d^4 x\,\sqrt{g}\,\Big\{ - \frac{1}{48} \, F_{\mu\nu\rho\om} ^2 - \frac{1}{12} \, m^2\, C_{\mu\nu\rho} ^2 \Big\}, \eeq where \beq F_{\mu\nu\rho\om} = \na_\mu C_{\nu\rho\om} - \na_\nu C_{\rho\om\mu} + \na_\rho C_{\om\mu\nu} - \na_\om C_{\mu\nu\rho} . \nonumber \eeq It is possible to prove that in four-dimensional space the theory \eq{S3} is classically equivalent to the theory of a real massive scalar field $\,\ph\,$ minimally coupled to gravity. The duality between the two theories is defined by the relation $\,C_{\mu\nu\rho} \propto \frac{1}{m}\, \epsilon_{\mu\nu\rho\be} \na^\be \ph$. The kinetic term of the action \eq{S3} is invariant under the gauge transformations \beq \n{GGG1} C_{\mu\nu\rho} &\to& C'_{\mu\nu\rho} \nonumber \\ &= & C_{\mu\nu\rho} + \na_\mu \om_{\nu \rho} + \na_\nu \om_{\rho \mu} + \na_\rho \om_{\mu \nu}\,, \eeq with an antisymmetric tensor field parameter $\,\om_{\mu\nu} = - \om_{\nu\mu}$. This parameter is defined up to the gauge transformation \beq \n{LD1} \om_{\mu \nu} \to \om'_{\mu \nu} &=& \om_{\mu \nu} + \na_\mu \ze_\nu - \na_\nu \ze_\mu\,, \eeq where $\ze_\mu$ is a vector gauge field parameter. Furthermore, $\ze_\mu$ is itself defined up to the gauge transformation \beq \n{LD2} \ze_\mu &\to& \ze'_\mu = \ze_\mu + \na_\mu \phi\,, \eeq with the scalar field parameter $\,\phi (x)$. Equations \eq{LD1} and \eq{LD2} mean that the gauge generators are linearly dependent. As in the previous case of the second-rank tensor, due to the gauge invariance of $F_{\mu\nu\rho\om}^2$ we have to deal with a theory with softly broken gauge symmetry. Therefore, the quantization must be done with the Stueckelberg procedure. One can restore the gauge symmetry by introducing an extra second-rank antisymmetric field $B_{\mu\nu}$. Consider the following action: \beq \n{S3'} S'_3 &=& \int d^4 x \, \sqrt{g} \, \Big\{ - \frac{1}{48} \, F_{\mu\nu\rho\om} ^2 \nonumber \\ & - & \frac{1}{12} \, m^2\, \Big( C_{\mu\nu\rho} - \frac{1}{m} F_{\mu\nu\rho}\Big)^2\Big\}\,, \eeq where $F_{\mu\nu\rho}$ is defined in (\ref{F}). The action \eq{S3'} is gauge invariant under the simultaneous transformation \eq{GGG1} and \beq B_{\mu\nu} \to B'_{\mu\nu} = B_{\mu\nu} + m \, \om_{\mu\nu} \,. \eeq It is also invariant under the gauge transformation of the Stueckelberg field \beq B_{\mu\nu} \to B'_{\mu\nu} = B_{\mu\nu} + \na_\mu \xi_\nu - \na_\nu \xi_\mu \,, \eeq where $\xi_\mu$ is a vector gauge parameter, defined up to a gauge transformation $\xi'_\mu = \xi_\mu + \na_\mu \ph$ with a scalar parameter $\,\ph(x)$.
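As a simple consistency check of these dualities, one can count the propagating degrees of freedom. For a totally antisymmetric rank-$p$ tensor in $D$ dimensions the standard counting gives ${D-2 \choose p}$ degrees of freedom in the massless case and ${D-1 \choose p}$ in the massive case. Hence, in $D=4$, \beq && \mbox{massive}\,\,B_{\mu\nu}: \ {3 \choose 2}=3 \ \ \mbox{(Proca field)}\,, \qquad \mbox{massless}\,\,B_{\mu\nu}: \ {2 \choose 2}=1 \ \ \mbox{(minimal scalar)}\,, \nonumber \\ && \mbox{massive}\,\,C_{\mu\nu\rho}: \ {3 \choose 3}=1 \ \ \mbox{(massive scalar)}\,, \qquad \mbox{massless}\,\,C_{\mu\nu\rho}: \ {2 \choose 3}=0\,, \eeq in agreement with the dualities described above and with the triviality of the strictly massless rank-3 field discussed below.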
Since the gauge generators of the theory are linearly dependent, the quantization of the theory \eq{S3'} differs from the standard scheme and can be done in a way similar to the one described above for the second-rank field. The successive multi-step application of the Faddeev-Popov method for the antisymmetric rank-3 tensor is somewhat more tedious than for the antisymmetric rank-2 field; hence, we shall not bother the reader with the details and present only the final formula for the one-loop effective action, \beq \n{EA-3} \bar{\Ga}^{(1)} &=& \frac{1}{2} \Tr \ln \hat{H}'_3 - \frac{1}{2} \Tr \ln \hat{H}'_2 \nonumber \\ &+ & \frac{1}{2} \Tr \ln \hat{H}'_1 - \frac{1}{2} \Tr \ln \hat{H}_0^{min} \nonumber \\ &=& \frac{1}{2} \Tr \ln \hat{H}'_3 - \frac{1}{2} \Tr \ln \hat{H}_2 \,, \eeq where \beq \hat{H}'_3 &=& (H'_3)^{\al\be\om}_{\mu\nu\rho} \nonumber \\ &= & \de^{\al\be\om}_{\mu\nu\rho} (\Box - m^2) + K^{\al\be\om}_{\mu\nu\rho} - L^{\al\be\om}_{\mu\nu\rho}\,, \eeq with \beq \n{de3} \de^{\al\be\om}_{\mu\nu\rho} = \frac{1}{6} \, \ep^{\al\be\om\si} \ep_{\mu\nu\rho\si} = \begin{vmatrix} \de^\al_\mu & \de^\al_\nu & \de^\al_\rho \\ \de^\be_\mu & \de^\be_\nu & \de^\be_\rho \\ \de^\om_\mu & \de^\om_\nu & \de^\om_\rho \\ \end{vmatrix}, \eeq \beq \n{K2} K^{\al\be\om}_{\mu\nu\rho} = 3 \, \de^{\al\be\om}_{\ga \theta \ph} \, \de^{\ga\la\tau}_{\mu\nu\rho} \, R^{\theta\ph}_{.\,.\,\la\tau} \eeq and \beq \n{L} L^{\al\be\om}_{\mu\nu\rho} = \de^{\al\be\om}_{\ga \theta \ph} \left( R^\ga_\la \, \de^{\la\theta\ph}_{\mu\nu\rho} + R^\theta_\la \, \de^{\la\ph\ga}_{\mu\nu\rho} + R^\ph_\la \, \de^{\la\ga\theta}_{\mu\nu\rho} \right) . \nonumber \\ \eeq Equation \eq{de3} defines the generalized Kronecker delta, which serves as an identity matrix in the space of third-order totally antisymmetric tensors. It also has the property \beq \n{Iden} \de^{\al\be\om}_{\mu\nu\rho} \, T_{\al\be\om} &=& T_{[\mu\nu\rho]}\,. \eeq Due to the identity \eq{Iden} one can write the expressions \eq{K2} and \eq{L} in a compact way respecting their symmetries. From the technical side, by using the definition \eq{de3}, the calculation of divergences can be mainly reduced to contractions of products of Levi-Civita symbols. In accordance with the formula \eq{EA-3} we need to work with the third-rank tensor and subtract the second-rank contribution, which is already known. Consider the field strengths for the first term, \beq \hat{P}_3 = (P_3)^{\al\be\om}_{\mu\nu\rho} = K^{\al\be\om}_{\mu\nu\rho} + \frac{1}{6} \, \de^{\al\be\om}_{\mu\nu\rho} R - L^{\al\be\om}_{\mu\nu\rho} \,, \eeq \beq (\hat{S}_3)_{\mu\nu} &=& [(S_3)_{\mu\nu}]^{\al\be\om}_{\theta\ph\ga} \\ &=& \de^{\al\be\om}_{\eta\xi\ze}( R^{\eta}_{.\,\la\mu\nu} \de^{\la \xi\ze}_{\theta\ph\ga} + R^{\xi}_{.\,\la\mu\nu} \de^{\la \ze\eta}_{\theta\ph\ga} + R^{\ze}_{.\,\la\mu\nu} \de^{\la \eta\xi}_{\theta\ph\ga}) \,. \nonumber \eeq Then it is easy to obtain the expression \beq && \frac{1}{2} \, \Tr \ln \hat{H}'_3 = - \frac{1}{2(4 \pi)^2} \int d^{2\om} x \, \sqrt{g} \, \Big\{\, 4 L_0 - \frac{1}{3} L_1 \, R \nonumber \\ && +\, \sum_{i=1}^5 l_i^* \,R_{\mu\nu} \,M_i \,R^{\mu\nu} \,+\,\sum_{i=1}^5 l_i \,R \,M_i \,R \, \Big\}\,, \eeq where \beq && l_1 = - \frac{1}{8}, \quad l_2 = l_3 = -\frac{1}{2}, \quad l_4 = \frac{5}{12}, \quad l_5 = \frac{1}{2}; \nonumber \\ && l_1^* = \frac{1}{2}, \quad l_2^* = 2, \quad l_3^* = 4, \quad l_4^* = -\frac{4}{3}, \quad l_5^* = -4.
\nonumber \eeq By using the table of integrals \eq{Int-L0}-\eq{Int-M5} we find \beq \n{EA-3'} \frac{1}{2} \, \Tr \ln \hat{H}'_3 &=& -\frac{1}{2(4 \pi)^2} \int d^4x \, \sqrt{g} \, \Big\{ 2 m^4 \Big(\frac{1}{\epsilon}+\frac{3}{2}\Big) \nonumber \\ &+& \frac{m^2}{3} \Big(\frac{1}{\epsilon} + 1 \Big) R + R \Big[\frac{1}{36 \ep} + k'_{3,R} (a) \Big] R \nonumber \\ &+& \frac{1}{2} \, C_{\mu\nu\al\be} \Big[\frac{7}{30 \epsilon } + k'_{3,W} (a) \Big] C^{\mu\nu\al\be} \Big\}\,, \eeq where \beq k'_{3,W} (a) = -\frac{44}{225} +\frac{8}{45 a^2} +Y +\frac{32 Y}{15 a^4} -\frac{8 Y}{3 a^2} \,, \eeq \beq k'_{3,R} (a) = -\frac{7}{540} +\frac{1}{27 a^2} +\frac{Y}{12} +\frac{4 Y}{9 a^4} -\frac{2 Y}{9 a^2}\,. \eeq Subtracting \eq{EA-2a} from \eq{EA-3'} we arrive at the final result \beq \n{EA-3a} \bar{\Ga}^{(1)} &=& - \frac{1}{2(4 \pi)^2} \int d^4x \, \sqrt{g} \, \Big\{ \frac{ m^4}{2}\Big(\frac{1}{\epsilon}+\frac{3}{2}\Big) \nonumber \\ &-& \frac{m^2}{6} \Big(\frac{1}{\epsilon} + 1 \Big) R + R \Big(\frac{1}{72 \epsilon } + k^3_R \Big) R \nonumber \\ &+& \frac{1}{2} \, C_{\mu\nu\al\be} \Big(\frac{1}{60\ep} + k^3_W \Big) C^{\mu\nu\al\be} \Big\}\,, \eeq where \beq k^3_W &=& k^3_W (a) = \frac{1}{150} +\frac{2}{45 a^2} +\frac{8 Y}{15 a^4}\,, \\ k^3_R &=& k^3_R(a) = -\frac{1}{80} +\frac{1}{108 a^2} +\frac{Y}{16} +\frac{Y}{9 a^4} -\frac{Y}{6 a^2}\,. \nonumber \eeq In accordance with the general proof of \cite{Buch-Kiri-Plet}, the effective action for the massive rank-3 antisymmetric tensor, \eq{EA-3a}, is exactly the same as the one for the massive scalar field minimally coupled to gravity, given by \eq{EA-scalar} with $\xi=0$. There is no anomaly in the non-local part of the effective action in this case. Consider the massless limit of the form factor $k_W^3(a)$ of the Weyl-squared term. Taking the $\,m \to 0\,$ limit in Eq.~\eq{EA-3a} we meet a non-zero contribution of the form $\,-\frac{1}{60} \ln \left(- \frac{\Box}{4 \pi \mu^2} \right)$. At the same time, for $\,m = 0\,$ the rank-3 antisymmetric tensor has no degrees of freedom and the result of the calculation is different. By using the methods explained in section \ref{sec-rank2}, we arrive at the expression \beq \n{EA3m} \bar{\Ga}^{(1)} &=& \frac{1}{2} \, \Tr \ln \hat{H}'_3 - \Tr \ln \hat{H}'_2 \nonumber \\ &+& \frac{3}{2}\Tr \ln \hat{H}'_1 - 2 \Tr \ln \hat{H}_0^{min}\,. \eeq Using the previous results, it is easy to check that the equation \eq{EA3m} in the massless case gives $\,\bar{\Ga}^{(1)} = 0$. Thus, the difference between the coefficients of the logarithmic form factor in $k_W^3(a)$ in the massless limit of the massive theory and in the strictly massless case is $1/60$. A similar situation holds for the form factor $k_R^3(a)$. In the $\,m \to 0\,$ limit of the massive theory of the third-rank tensor, the logarithmic coefficient in the form factor follows the divergent term and we find the quantum contribution $\,-\frac{1}{36}\ln \left(- \frac{\Box}{4 \pi \mu^2} \right)\,$ for the $R^2$-term. In the strictly massless case the effective action \eq{EA3m} vanishes and we meet no contribution. The difference between the two coefficients is $1/36$ and represents the minimal scalar field contribution, which does not disappear in the $m \to 0$ limit. This example once again demonstrates the quantum discontinuity of the massless limit. Finally, let us note that the conformal anomaly for massless vector and scalar fields cannot be reproduced within the dual antisymmetric theories.
The reason is that the duality takes place only in the massive theories, while in the massless limit there is a discontinuity which makes the reproduction of the anomaly impossible. In the strictly massless theories we have checked that the second-rank model does not possess local conformal symmetry. This is a natural result, due to the equivalence with a minimal (and hence non-conformal) scalar model. \section{Conclusions} \label{Conc} By making direct calculations, we have confirmed that the result of the paper \cite{Buch-Kiri-Plet} concerning the quantum equivalence of the massive tensor fields of the second and third rank with vector and scalar models holds at the level of the non-local form factors. Furthermore, one meets a discontinuity in the massless limit of the quantum contributions in both cases. In fact, for the massless cases the mentioned equivalence does not hold. In particular, for the rank-3 tensor field in the massless case there is neither classical nor quantum dynamics, and the theory is trivial. It would be interesting to formulate the same two types of fields on more general backgrounds, e.g., including additional vector or axial vector fields. In these cases the proof based on the $\ze$-regularization may be more difficult, but there are apparently no obstacles to making explicit calculations. \begin{acknowledgments} One of the authors (I.Sh.) is grateful to I.L. Buchbinder for useful discussions. The work of I.Sh. has been partially supported by CNPq, FAPEMIG and ICTP. T.P.N. is grateful to CAPES for supporting his Ph.D. project. \end{acknowledgments}
\section{Introduction} Quasi-birth-and-death processes (QBDs) are a fundamental class of Markovian models in the theory of matrix-analytic methods, with a level variable $X(t)$ and a phase variable $\varphi(t)$ forming a two-dimensional state space. In many applications of the QBDs, the phase variable $\varphi(t)$ is used to model information about the underlying environment that drives the evolution of some system. A QBD is a model that lends itself to representing healthcare systems in a natural, intuitive manner (see Figure~\ref{fig:RWquadrant}), and so the application potential of the QBDs in this area is immense, as demonstrated by Heydar et al.~\cite{2021HOTFTT} and Grant~\cite{Gus2021}. As another example of application in real-world systems, QBDs have been applied in the analysis of the evolution of gene families, e.g. in Diao et al.~\cite{DSLOH2020}. An initial sensitivity analysis of a level-dependent QBD (LD-QBD) has been performed by G{\'o}mez-Corral and L{\'o}pez-Garc{\'i}a in~\cite{2018CL}. Here, we build on these ideas and extend the analysis to a wide range of metrics of interest, with a particular focus on applications in modelling healthcare systems. Suppose that the generator ${\bf Q}(\btheta)$ of a LD-QBD depends on some parameters recorded in a row vector $\btheta=[\theta_i]_{i=1,\ldots,k}$. Various stationary (long-run) and transient (time-dependent) quantities in the analysis of the LD-QBDs, recorded as matrices ${\bf A}(\btheta)=[A_{ij}(\btheta)]$, can then be expressed using expressions involving ${\bf Q}(\btheta)$, and so also depend on $\btheta$. We derive the sensitivity analysis of the relevant quantities, where, given a vector of parameters $\btheta=[\theta_i]_{i=1,\ldots,k}$ and a matrix ${\bf A}(\btheta)$, we write \begin{eqnarray*} \frac{\partial {\bf A}(\btheta)}{\partial \btheta} &=& \left[ \begin{array}{ccc} \frac{\partial {\bf A}(\btheta)}{\partial \theta_1} & \ldots & \frac{\partial {\bf A}(\btheta)}{\partial \theta_k} \end{array} \right]. \end{eqnarray*} Here, we focus on the development of the key building blocks of the methodology. Full details of this work, including numerical examples, will be presented in our future paper. \begin{figure}[h!]
\begin{center} \resizebox{0.48\textwidth}{!}{ \begin{tikzpicture}[>=stealth,redarr/.style={->}] \draw [ dashed, ->] (0,0) -- (10,0); \draw [ dashed, ->] (0,0) -- (0,10); \draw [dashed] (0,0) -- (9,9); \draw (1,10) node[anchor=north, below=-0.17cm] {{\color{black} $X(t)=n$}}; \draw (9.4,-0.2) node[anchor=north, below=-0.17cm] {{\color{black} $\varphi(t)=i$}}; \draw (0,-0.3) node[anchor=north, below=-0.17cm] {{\color{black} $0$}}; \draw (9.5,9.5) node[anchor=north, below=-0.17cm] {{\color{black} $n=i$}}; \draw (5,9.5) node[anchor=north, below=-0.17cm] {{\color{black} $n>i$}}; \node at (2,8) [black,circle,fill,inner sep=2pt]{}; \draw [black, ->] (2,8) -- (3,8); \draw [black, ->] (2,8) -- (1,8); \draw [black, ->] (2,8) -- (2,9); \draw [black, ->] (2,8) -- (2,7); \draw [black, ->] (2,8) -- (3,9); \draw [black, ->] (2,8) -- (1,7); \draw [black, ->] (2,8) -- (3,7); \draw [black, ->] (2,8) -- (1,9); \draw [dashed] (1,7) -- (3,7) -- (3,9) -- (1,9) -- (1,7); \node at (5,5) [black,circle,fill,inner sep=2pt]{}; \draw [black, ->] (5,5) -- (4,5); \draw [black, ->] (5,5) -- (5,6); \draw [black, ->] (5,5) -- (6,6); \draw [black, ->] (5,5) -- (4,4); \draw [black, ->] (5,5) -- (4,6); \draw [dashed] (4,4) -- (4,6) -- (6,6); \node at (0,5) [black,circle,fill,inner sep=2pt]{}; \draw [black, ->] (0,5) -- (0,6); \draw [black, ->] (0,5) -- (0,4); \draw [black, ->] (0,5) -- (1,6); \draw [black, ->] (0,5) -- (1,4); \draw [black, ->] (0,5) -- (1,5); \draw [dashed] (0,4) -- (1,4) -- (1,6) -- (0,6); \node at (0,0) [black,circle,fill,inner sep=2pt]{}; \draw [black, ->] (0,0) -- (0,1); \draw [black, ->] (0,0) -- (1,1); \draw [dashed] (0,1) -- (1,1); \end{tikzpicture} } \end{center} \caption[Evolution of a QBD]{Evolution of a QBD $\{(\varphi(t),X(t)):t\geq 0\}$ modelling the total number of patients $X(t)$ and some information $\varphi(t)$ about the system, e.g. (here) the number of patients of a particular class, such as complex patients, whose treatment is likely to require more resources and time~\cite{Gus2021}.} \label{fig:RWquadrant} \end{figure} \section{LD-QBD model} Suppose that $\{(X(t),\varphi(t)):t\geq 0\}$ is a continuous-time Markov chain with a two-dimensional state $(X(t),\varphi(t))$ consisting of the level variable $X(t)$ and the phase variable $\varphi(t)$, taking values in an irreducible state space given by \begin{equation*} \mathcal{S}=\{(n,i): n=0,1,2,\ldots,N ; i=0,1,\ldots,m_n\}, \end{equation*} and with transition rates recorded in the generator matrix ${\bf Q}=[q_{(n,i)(n',i')}]_{(n,i),(n',i')\in\mathcal{S}}$ made of block matrices ${\bf Q}^{[n,n']}=[q_{(n,i)(n',i')}]_{i=0,1,\ldots,m_n;\,i'=0,1,\ldots,m_{n'}}$ such that \begin{eqnarray*} \lefteqn{ {\bf Q} = [{\bf Q}^{[n,n']}]_{n,n'=0,1,\ldots,N} } \\[1ex] &=& \begin{bmatrix} {\bf Q}^{[0,0]} & {\bf Q}^{[0,1]} & {\bf 0} & \cdots & \cdots & {\bf 0}\\ {\bf Q}^{[1,0]} & {\bf Q}^{[1,1]} & {\bf Q}^{[1,2]} & \cdots & \cdots & {\bf 0}\\ \vdots & \vdots & \vdots & \cdots & \cdots & \vdots\\ {\bf 0} & {\bf 0} & {\bf 0} & \cdots & {\bf Q}^{[N,N-1]} & {\bf Q}^{[N,N]} \end{bmatrix},\nonumber\\ \end{eqnarray*} so that only transitions to the neighbouring levels are possible. We refer to such a process as a continuous-time level-dependent quasi-birth-and-death process (LD-QBD) that is bounded from above by level $N$. The level variable $X(t)$ may be used to record the number of individuals in some system at time $t$, while $\varphi(t)$ may be used to record some additional information about the system at time $t$.
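To make the block structure concrete, the following minimal Python sketch (using NumPy) assembles the generator of a small illustrative LD-QBD in which the level is a queue length and the phase is the state of a two-state random environment; the rates, the function name and the environment generator are hypothetical and serve for illustration only.
\begin{verbatim}
import numpy as np

def build_generator(N, lam=(1.0, 2.0), mu=(0.5, 0.8),
                    T=((-0.1, 0.1), (0.2, -0.2))):
    # state (n,i) is mapped to row/column n*m + i
    m, T = 2, np.asarray(T)
    Q = np.zeros(((N + 1) * m, (N + 1) * m))
    idx = lambda n, i: n * m + i
    for n in range(N + 1):
        for i in range(m):
            for j in range(m):        # environment switches
                if j != i:
                    Q[idx(n, i), idx(n, j)] = T[i, j]
            if n < N:                 # arrival: one level up
                Q[idx(n, i), idx(n + 1, i)] = lam[i]
            if n > 0:                 # service: one level down
                Q[idx(n, i), idx(n - 1, i)] = n * mu[i]
    # diagonal entries: rows of a generator sum to zero
    np.fill_diagonal(Q, -Q.sum(axis=1))
    return Q
\end{verbatim}
The blocks ${\bf Q}^{[n,n']}$ are then simply the $m\times m$ sub-arrays \texttt{Q[n*m:(n+1)*m, n'*m:(n'+1)*m]}.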
Since a LD-QBD is a continuous-time Markov chain, standard expressions from the theory of Markov chains apply. However, as its state space $\mathcal{S}$ may be very large, here we apply ideas from the theory of matrix-analytic methods, which leads to efficient computational methods. We also consider the LD-QBD which does not have an upper boundary $N$. We note that a range of transient and stationary performance measures of the LD-QBD defined above can be readily derived using the existing methods in the literature on matrix-analytic methods, see Ramaswami~\cite{ram1997}, Joyner and Fralix~\cite{joyner2016new}, and Phung-Duc et al.~\cite{phung2010simple}. Since these performance measures depend on the parameters of the model, we are interested in the sensitivity analysis of these measures. \section{Quantities of interest} We consider the following key quantities in the analysis of the LD-QBDs, \begin{itemize} \item the long-run proportion of time spent in states $(n,i)$; \item the distribution of the time spent within levels contained in some set $\mathcal{A}\subset\{0,1,\ldots,N\}$ during a sample path in which the process first reaches level $n\pm k$ and does so in state $(n\pm k,j)$, given a start from state $(n,i)$; \item the distribution of the process observed at time $t$, given a start from state $(n_0,i)$. \end{itemize} We follow the approach summarised in Grant~\cite{Gus2021}, which is built on the results in Ramaswami~\cite{ram1997}, Joyner and Fralix~\cite{joyner2016new}, and Phung-Duc et al.~\cite{phung2010simple}. We also state expressions for the relevant Laplace-Stieltjes transforms (LSTs) of various quantities. These can be inverted using the numerical inversion techniques of Abate and Whitt~\cite{abate1995numerical}, Den Iseger~\cite{DenIseger_2006}, or Horv{\'a}th et al.~\cite{horvath2020numerical}, to compute the corresponding quantities. \subsection{Stationary distribution}\label{sec:StatDistr} For all $n=0,1,\ldots,N$, $i=0,1,\ldots,m_n$, define the limiting probabilities \begin{eqnarray*} \pi_{(n,i)}=\lim_{t\to\infty}\mathbb{P}\left(X(t)=n,\varphi(t)=i\right) \end{eqnarray*} recording the long-run proportions of time spent in states $(n,i)$, and collect these in a row vector $\bm{\pi}=[\bm{\pi}_n]_{n=0,1,\ldots,N}$, where $\bm{\pi}_n=[\pi_{(n,i)}]_{i=0,1,\dots,m_n}$. To evaluate $\bm{\pi}$, we follow the approach in Grant~\cite{Gus2021}, and write the expressions for $\bm{\pi}_n$ in terms of $\bm{\pi}_N$ (rather than in terms of $\bm{\pi}_0$, since potential close-to-zero values in $\bm{\pi}_0$ may lead to computational errors). We consider matrices $$\widehat{\bf R}^{(n)}=[\widehat R^{(n)}_{ij}]_{i=0,1,\dots,m_{n+1};\,j=0,1,\dots,m_{n}}$$ recording the expected times $\widehat R^{(n)}_{ij}$ spent in states $(n,j)$, per unit time spent in $(n+1,i)$, before returning to level $n+1$, given that the process starts in state $(n+1,i)$.
For $n=0,1,\ldots,N-1$, we apply the recursion, \begin{eqnarray*} \widehat {\bf R}^{(0)}(s) &=& -{\bf Q}^{[1,0]}({\bf Q}^{[0,0]}-s{\bf I})^{-1},\\ \widehat {\bf R}^{(n)}(s) &=& -{\bf Q}^{[n+1,n]}(\widehat {\bf R}^{(n-1)}(s){\bf Q}^{[n-1,n]}+{\bf Q}^{[n,n]}-s{\bf I})^{-1}, \end{eqnarray*} and then, with $\widehat{\bf R}^{(n)}=\widehat{\bf R}^{(n)}(0)$, let \begin{eqnarray*} \bm{\pi}_{n}&=&\bm{\pi}_{n+1} \widehat{\bf R}^{(n)} = \bm{\pi}_N \prod_{k=N-1}^{n} \widehat{\bf R}^{(k)}, \label{eq:pinRn} \end{eqnarray*} where $\bm{\pi}_N$ is the solution of the set of equations, \begin{eqnarray} \bm{\pi}_N \left( \widehat {\bf R}^{(N-1)} {\bf Q}^{[N-1,N]}+{\bf Q}^{[N,N]} \right)\ &=& {\bf 0}, \nonumber\\ \bm{\pi}_N \left( {\bf 1} + \sum_{n=0}^{N-1} \prod_{k=N-1}^{n}\widehat{\bf R}^{(k)}{\bf 1} \right)&=&1. \nonumber \label{eq:piN} \end{eqnarray} \begin{Remark} Similar methods may be applied to derive $\bpi$ for a LD-QBD in which the level variable $X(t)\geq 0$ has no upper boundary $N$. First, following Phung-Duc et al.~\cite{phung2010simple}, find a sufficiently large truncation level $L$ such that the condition $||\widehat {\bf R}_{L}^{(n)}-\widehat {\bf R}_{L-1}^{(n)} ||<\epsilon$ is met for a required criterion $\epsilon>0$, where $\widehat {\bf R}_{L}^{(n)}$ and $\widehat {\bf R}_{L-1}^{(n)}$ denote the matrix $\widehat {\bf R}^{(n)}$ computed for a LD-QBD with an upper boundary $N=L$ and $N=L-1$, respectively. Next, apply the approximation $\widehat {\bf R}^{(n)}\approx \widehat {\bf R}_{L}^{(n)}$. This technique may be applied to the remaining quantities. \end{Remark} \subsection{Sojourn times in specified sets}\label{sec:Sojtimes} Let $\mathcal{A}\subset\{0,1,\ldots,N\}$ be some set of desirable or undesirable levels, such as $\mathcal{A}=\{0,\ldots,A\}$ or $\mathcal{A}=\{B,\ldots,N\}$, where $A$ and $B$ are some desirable or undesirable thresholds. Let $\theta_n=\inf \{t>0:X(t)=n\}$ and let $I(\cdot)$ be an indicator function. For any $n'\not=n$, let $${\bf W}^{n,n'}(s)=[W_{ij}^{n,n'}(s)]_{i=1,\ldots,m_n;j=1,\ldots,m_{n'}}$$ be a matrix such that the entry \begin{eqnarray*} W_{ij}^{n,n'}(s)&=& \mathbb{E}\big( e^{-s\theta_{n'}}\times I\left(\varphi(\theta_{n'})=j\right) \\ &&\quad \ | \ X(0)=n,\varphi(0)=i\big) \end{eqnarray*} is the Laplace-Stieltjes transform (LST) of the time for the process to first visit level $n'$ and do so in phase $j$, given a start from level $n$ in phase $i$. Let ${\bf W}_{\mathcal{A}}^{n,n'}(s)=[W_{\mathcal{A};ij}^{n,n'}(s)]_{i=1,\ldots,m_n;j=1,\ldots,m_{n'}}$ be the LST matrix of the total time spent in the set $\mathcal{A}$ during a sample path corresponding to $W_{ij}^{n,n'}$. Denote ${\bf G}^{n,n'}(s)={\bf W}^{n,n'}(s)$ and ${\bf G}_{\mathcal{A}}^{n,n'}(s)={\bf W}_{\mathcal{A}}^{n,n'}(s)$ whenever $n'<n$, and ${\bf H}^{n,n'}(s)={\bf W}^{n,n'}(s)$ and ${\bf H}_{\mathcal{A}}^{n,n'}(s)={\bf W}_{\mathcal{A}}^{n,n'}(s)$ whenever $n'>n$. We note that when $\mathcal{A}=\{n'+1,\ldots,N\}$, then clearly ${\bf G}_{\mathcal{A}}^{n,n'}(s)={\bf G}^{n,n'}(s)$, since, by the skip-free structure of the process, the whole sample path before time $\theta_{n'}$ is spent in $\mathcal{A}$. Similarly, when $\mathcal{A}=\{0,\ldots,n'-1\}$, then ${\bf H}_{\mathcal{A}}^{n,n'}(s)={\bf H}^{n,n'}(s)$.
By standard decomposition of a sample path~\cite{Gus2021,ram1997,joyner2016new,phung2010simple,SOB2020}, we have, \begin{eqnarray*} {\bf G}_{\mathcal{A}}^{n,n-k}(s)&=& {\bf G}_{\mathcal{A}}^{n,n-1}(s) \times \cdots \times {\bf G}_{\mathcal{A}}^{n-k+1,n-k}(s), \end{eqnarray*} where \begin{eqnarray*} {\bf G}_{\mathcal{A}}^{N,N-1}(s)&=& -({\bf Q}^{[N,N]}-s{\bf I}\times I(N\in\mathcal{A}))^{-1} {\bf Q}^{[N,N-1]}, \end{eqnarray*} and for $n=N-1,\ldots,1$, \begin{eqnarray*} {\bf G}_{\mathcal{A}}^{n,n-1}(s) &=& -\Big({\bf Q}^{[n,n]}-s{\bf I}\times I(n\in\mathcal{A}) \\ && \quad \quad +{\bf Q}^{[n,n+1]}{\bf G}_{\mathcal{A}}^{n+1,n}(s) \Big)^{-1} {\bf Q}^{[n,n-1]}. \end{eqnarray*} Similarly, \begin{eqnarray*} {\bf H}_{\mathcal{A}}^{n,n+k}(s)&=& {\bf H}_{\mathcal{A}}^{n,n+1}(s) \times {\bf H}_{\mathcal{A}}^{n+1,n+2}(s) \times \cdots \times {\bf H}_{\mathcal{A}}^{n+k-1,n+k}(s), \end{eqnarray*} where \begin{eqnarray*} {\bf H}_{\mathcal{A}}^{0,1}(s)&=& -({\bf Q}^{[0,0]}-s{\bf I}\times I(0\in\mathcal{A}))^{-1} {\bf Q}^{[0,1]}, \end{eqnarray*} and for $n=1,\ldots,N-1$, \begin{eqnarray*} {\bf H}_{\mathcal{A}}^{n,n+1}(s)&=& -\Big({\bf Q}^{[n,n]}-s{\bf I}\times I(n\in\mathcal{A}) \\ && \quad\quad +{\bf Q}^{[n,n-1]}{\bf H}_{\mathcal{A}}^{n-1,n}(s) \Big)^{-1} {\bf Q}^{[n,n+1]}. \end{eqnarray*} \subsection{Distribution at time $t$}\label{sec:Distrt} Suppose that the QBD starts from some level $n_0$ in some phase $i_0=0,1,\ldots,m_{n_0}$ according to the initial distribution of phases $\bm{\alpha}=[\alpha_j]_{j\in 0,1,\ldots,m_{n_0}}$ such that $\alpha_j=\mathbb{P}(\varphi(0)=j)$. Define the vector ${\bf f}(t)=[{\bf f}_n(t)]_{n=0,1,\ldots,N}$ such that \begin{eqnarray*} [{\bf f}_n(t)]_j&=& \mathbb{P}_{\bm{\alpha}}(X(t)=n,\varphi(t)=j) \end{eqnarray*} is the probability that at time $t$ the process is on level $n$ and in phase $j$, given $\bm{\alpha}$; and the corresponding Laplace transform vector $\widetilde{\bf f}(s)=[\widetilde{\bf f}_n(s)]_{n=0,1,\ldots,N}$ such that \begin{eqnarray*} [\widetilde{\bf f}_n(s)]_j&=& \int_{t=0}^{\infty} e^{-st} \mathbb{P}_{\bm{\alpha}}(X(t)=n,\varphi(t)=j)dt . \end{eqnarray*} The Kolmogorov differential equations of the process are, \begin{eqnarray*} \frac{\partial {\bf f}_0(t)}{ \partial t} &=& {\bf f}_0(t){\bf Q}^{[0,0]} + {\bf f}_1(t){\bf Q}^{[1,0]} , \nonumber\\ \frac{\partial {\bf f}_n(t)}{ \partial t} &=& {\bf f}_n(t){\bf Q}^{[n,n]} + {\bf f}_{n-1}(t){\bf Q}^{[n-1,n]} + {\bf f}_{n+1}(t){\bf Q}^{[n+1,n]}, \nonumber\\ && \quad \mbox{ for }n=1,\ldots,N-1, \nonumber\\ \frac{\partial {\bf f}_N(t)}{ \partial t} &=& {\bf f}_N(t){\bf Q}^{[N,N]} + {\bf f}_{N-1}(t){\bf Q}^{[N-1,N]}, \end{eqnarray*} with the initial condition ${\bf f}_{n}(0)=\balpha I(n= n_0)$.
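For moderate state spaces, this linear system can also be integrated directly, which provides a convenient numerical cross-check of the transform-based expressions derived next. A minimal sketch, reusing the hypothetical \texttt{build\_generator} from above, is
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

N, m, n0, t = 10, 2, 3, 2.0
Q = build_generator(N)
f0 = np.zeros((N + 1) * m)
f0[n0 * m:(n0 + 1) * m] = [0.5, 0.5]   # alpha
ft = f0 @ expm(Q * t)                  # f(t) = f(0) exp(tQ)
p_level = ft.reshape(N + 1, m).sum(axis=1)
\end{verbatim}
where \texttt{p\_level[n]} returns $\mathbb{P}_{\bm{\alpha}}(X(t)=n)$.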
To evaluate $\widetilde{\bf f}(s)$, we apply the following recursion summarised in Grant~\cite{Gus2021}, \begin{eqnarray*} \widetilde{\bf f}_{n_0}(s)&=& -\bm{\alpha} \Big( ({\bf Q}^{[n_0,n_0]}-s{\bf I}) + \widehat {\bf R}^{(n_0-1)}(s){\bf Q}^{[n_0-1,n_0]} \\ && \quad\quad + \widetilde{\bf R}^{(n_0+1)}(s){\bf Q}^{[n_0+1,n_0]} \Big)^{-1} , \end{eqnarray*} and then for $n>n_0$, \begin{eqnarray*} \widetilde{\bf f}_{n}(s)&=& -\bm{\alpha} {\bf H}^{n_0,n}(s) \Big( ({\bf Q}^{[n,n]}-s{\bf I}) + \widehat {\bf R}^{(n-1)}(s){\bf Q}^{[n-1,n]} \\ && \quad\quad + \widetilde{\bf R}^{(n+1)}(s){\bf Q}^{[n+1,n]} \Big)^{-1} , \end{eqnarray*} and for $n<n_0$, \begin{eqnarray*} \widetilde{\bf f}_{n}(s)&=& -\bm{\alpha} {\bf G}^{n_0,n}(s) \Big( ({\bf Q}^{[n,n]}-s{\bf I}) + \widehat {\bf R}^{(n-1)}(s){\bf Q}^{[n-1,n]} \\ && + \widetilde{\bf R}^{(n+1)}(s){\bf Q}^{[n+1,n]} \Big)^{-1} , \end{eqnarray*} where, for $n=N-1,\ldots,1$, we apply the recursion, \begin{eqnarray*} \widetilde{\bf R}^{(N)}(s) &=& -{\bf Q}^{[N-1,N]}({\bf Q}^{[N,N]}-s{\bf I})^{-1},\\ \widetilde{\bf R}^{(n)}(s) &=& -{\bf Q}^{[n-1,n]}({\bf Q}^{[n,n]}-s{\bf I}+\widetilde{\bf R}^{(n+1)}(s){\bf Q}^{[n+1,n]})^{-1}. \end{eqnarray*} \section{Sensitivity analysis} First, we present simple examples to motivate the theory. \begin{Example} Suppose that $X(t)$ records the total number of customers in the system at time $t$. Let $\varphi(t)\in\{1,\ldots,k\}$ be the phase of the environment that drives the evolution of the system, so that $\lambda_{\varphi(t)}>0$ is the arrival rate to the system (provided $X(t)<N$), and $\mu_{\varphi(t)}>0$ is the service rate per customer (provided $X(t)>0$). Assume that $\{\varphi(t):t\geq 0\}$ is a continuous-time Markov chain with generator ${\bf T}=[T_{ij}]$. The system can be modelled as a LD-QBD with generator ${\bf Q}(\btheta)=[q(\btheta)_{(n,i)(m,j)}]$ that depends on the vector of parameters $\btheta=[\lambda_1,\ldots,\lambda_k,\mu_1,\ldots,\mu_k]$, such that the nonzero off-diagonals $q(\btheta)_{(n,i)(m,j)}$ are given by \begin{eqnarray*} \left\{ \begin{array}{ll} T_{ij}& j\not= i, m=n;\\ \lambda_i& j = i, m=n+1, n<N;\\ n\mu_i& j = i, m=n-1, n>0. \end{array} \right. \end{eqnarray*} Then $\frac{\partial}{\partial\btheta}{\bf Q}(\btheta)$ is given by \begin{eqnarray*} \frac{\partial q(\btheta)_{(n,i)(m,j)}}{\partial\lambda_i}&=& \left\{ \begin{array}{ll} 1& j = i, m=n+1, n<N;\\ -1& j = i, m=n, 0\leq n<N;\\ 0& \mbox{otherwise;} \end{array} \right. \end{eqnarray*} and \begin{eqnarray*} \frac{\partial q(\btheta)_{(n,i)(m,j)}}{\partial\mu_i}&=& \left\{ \begin{array}{ll} n& j = i, m=n-1, n>0;\\ -n& j = i, m=n, 0<n\leq N;\\ 0& \mbox{otherwise.} \end{array} \right. \end{eqnarray*} \end{Example} \begin{Example} Suppose that customers of type $k\in\{1,2\}$ arrive to the system with capacity $N$ at the total rate $\lambda_k>0$, and are served at rate $\mu_k>0$ per customer. The system can be modelled as a LD-QBD $\{(X(t),\varphi(t)):t\geq 0\}$ where $X(t)$ records the total number of customers and $\varphi(t)\leq X(t)$ records the number of customers of type $2$ in the system at time $t$. The generator ${\bf Q}(\btheta)=[q(\btheta)_{(n,i)(m,j)}]$, which depends on the vector of parameters $\btheta=[\lambda_1,\lambda_2,\mu_1,\mu_2]$, is such that the nonzero off-diagonals $q(\btheta)_{(n,i)(m,j)}$ are given by \begin{eqnarray*} \left\{ \begin{array}{ll} \lambda_1& j = i, m=n+1, n<N;\\ \lambda_2& j = i+1, m=n+1, n<N;\\ (n-i)\mu_1& j = i, m=n-1, n>0;\\ i\mu_2& j = i-1, m=n-1, n>0. \end{array} \right.
\end{eqnarray*} Then $\frac{\partial}{\partial\btheta}{\bf Q}(\btheta)$ is given by \begin{eqnarray*} \frac{\partial q(\btheta)_{(n,i)(m,j)}}{\partial \lambda_1}&=& \left\{ \begin{array}{ll} 1& j = i, m=n+1, n<N;\\ -1& j = i, m=n, 0\leq n<N;\\ 0& \mbox{otherwise;} \end{array} \right. \end{eqnarray*} and \begin{eqnarray*} \frac{\partial q(\btheta)_{(n,i)(m,j)}}{\partial \lambda_2}&=& \left\{ \begin{array}{ll} 1& j = i+1, m=n+1, n<N;\\ -1& j = i, m=n, 0\leq n<N;\\ 0& \mbox{otherwise;} \end{array} \right. \end{eqnarray*} and \begin{eqnarray*} \frac{\partial q(\btheta)_{(n,i)(m,j)}}{\partial \mu_1}&=& \left\{ \begin{array}{ll} (n-i)& j = i, m=n-1, n>0;\\ -(n-i)& j = i, m=n, 0<n\leq N;\\ 0& \mbox{otherwise;} \end{array} \right. \end{eqnarray*} and \begin{eqnarray*} \frac{\partial q(\btheta)_{(n,i)(m,j)}}{\partial \mu_2}&=& \left\{ \begin{array}{ll} i& j = i-1, m=n-1, n>0;\\ -i& j = i, m=n, 0<n\leq N;\\ 0& \mbox{otherwise.} \end{array} \right. \end{eqnarray*} \end{Example} \begin{Example} Suppose that the generator of a LD-QBD is a function of $\bepsilon=[\epsilon_i]_{i=1,\ldots,k}>{\bf 0}$ such that \begin{eqnarray*} {\bf Q}(\bepsilon)&=& {\bf Q} + \sum_{i=1}^k \epsilon_i \times \widetilde{\bf Q}_i \end{eqnarray*} is a generator for sufficiently small $||\bepsilon||>0$. Then \begin{eqnarray*} \frac{\partial {\bf Q}(\bepsilon)}{\partial \bepsilon} &=& \left[ \begin{array}{ccc} \frac{\partial {\bf Q}(\bepsilon)}{\partial \epsilon_1} & \ldots & \frac{\partial {\bf Q}(\bepsilon)}{\partial \epsilon_k} \end{array} \right] = \left[ \begin{array}{ccc} \widetilde{\bf Q}_1 & \ldots & \widetilde{\bf Q}_k \end{array} \right]. \end{eqnarray*} \end{Example} The derivatives $\frac{\partial}{\partial\btheta}$ of the quantities of interest for these and other LD-QBDs can be expressed in terms of $\frac{\partial}{\partial\btheta}{\bf Q}(\btheta)$ using expressions from matrix calculus, see e.g.~\cite{2007PK}, as follows. Let ${\bf G}^{n,n-k}={\bf G}^{n,n-k}(0)$ and $\mathbb{E}^{n,n-k}=-\frac{\partial}{\partial s}{\bf G}^{n,n-k}(s)\big|_{s=0}$ be the probability and expectation matrix, respectively. Let ${\bf G}^{n,n-k}(\btheta)$ and $\mathbb{E}^{n,n-k}(\btheta)$ be the notation for ${\bf G}^{n,n-k}$ and $\mathbb{E}^{n,n-k}$ when evaluated for a given $\btheta$.
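In the derivations below, we repeatedly use the following two identities, which hold in the layout convention adopted here, in which $\frac{\partial {\bf A}(\btheta)}{\partial \btheta}$ is the row block matrix of the partial derivatives: \begin{eqnarray*} \frac{\partial ({\bf A}{\bf B})}{\partial \btheta} &=& \frac{\partial {\bf A}}{\partial \btheta}\left({\bf I}_k\otimes {\bf B}\right) + {\bf A}\,\frac{\partial {\bf B}}{\partial \btheta}\,,\\ \frac{\partial ({\bf A}^{-1})}{\partial \btheta} &=& -{\bf A}^{-1}\,\frac{\partial {\bf A}}{\partial \btheta}\left({\bf I}_k\otimes {\bf A}^{-1}\right), \end{eqnarray*} which follow, block by block, from $\frac{\partial ({\bf A}{\bf B})}{\partial \theta_i} = \frac{\partial {\bf A}}{\partial \theta_i}{\bf B} + {\bf A}\frac{\partial {\bf B}}{\partial \theta_i}$ and $\frac{\partial ({\bf A}^{-1})}{\partial \theta_i} = -{\bf A}^{-1}\frac{\partial {\bf A}}{\partial \theta_i}{\bf A}^{-1}$, see e.g.~\cite{2007PK}.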
By the recursive expressions in Section~\ref{sec:Sojtimes}, we have, \begin{eqnarray*} \lefteqn{ \frac{\partial }{\partial \btheta}{\bf G}^{N,N-1}(\btheta) } \nonumber\\ &=& ({\bf Q}^{[N,N]}(\btheta))^{-1} \times \frac{\partial {\bf Q}^{[N,N]}(\btheta)}{\partial \btheta} \times \left( {\bf I}_k \otimes ({\bf Q}^{[N,N]}(\btheta))^{-1} \right) \\ && \times\left( {\bf I}_k\otimes {\bf Q}^{[N,N-1]}(\btheta) \right) \nonumber\\ && - ({\bf Q}^{[N,N]}(\btheta))^{-1} \times \frac{\partial {\bf Q}^{[N,N-1]}(\btheta)}{\partial \btheta}; \end{eqnarray*} for $n=N-1,\ldots,1$, we have the recursion \begin{eqnarray*} \lefteqn{ \frac{\partial }{\partial \btheta}{\bf G}^{n,n-1}(\btheta) } \\ &=& - \frac{\partial }{\partial \btheta} \left({\bf Q}^{[n,n]}(\btheta) +{\bf Q}^{[n,n+1]}(\btheta){\bf G}^{n+1,n}(\btheta) \right)^{-1} \\ && \times\left( {\bf I}_k\otimes {\bf Q}^{[n,n-1]}(\btheta) \right) \\ && - ({\bf Q}^{[n,n]}(\btheta) +{\bf Q}^{[n,n+1]}(\btheta){\bf G}^{n+1,n}(\btheta) )^{-1} \\ && \times \frac{\partial {\bf Q}^{[n,n-1]}(\btheta)}{\partial \btheta}, \end{eqnarray*} with \begin{eqnarray*} \lefteqn{ \frac{\partial }{\partial \btheta} \left({\bf Q}^{[n,n]}(\btheta) +{\bf Q}^{[n,n+1]}(\btheta){\bf G}^{n+1,n}(\btheta) \right)^{-1} } \nonumber\\ &=& -\left({\bf Q}^{[n,n]}(\btheta) +{\bf Q}^{[n,n+1]}(\btheta){\bf G}^{n+1,n}(\btheta) \right)^{-1} \nonumber\\ && \times \Big( \frac{\partial {\bf Q}^{[n,n]}(\btheta) }{\partial \btheta} + \frac{\partial {\bf Q}^{[n,n+1]}(\btheta)}{ \partial \btheta} \times\left( {\bf I}_k\otimes {\bf G}^{n+1,n}(\btheta) \right) \\ && + {\bf Q}^{[n,n+1]}(\btheta) \times \frac{\partial {\bf G}^{n+1,n}(\btheta)}{\partial \btheta} \Big) \nonumber\\ && \times \left( {\bf I}_k \otimes \left({\bf Q}^{[n,n]}(\btheta) +{\bf Q}^{[n,n+1]}(\btheta){\bf G}^{n+1,n}(\btheta)\right)^{-1} \right) ; \label{eq:term1} \end{eqnarray*} and so for $k\geq 2$ we obtain the recursion, \begin{eqnarray*} \lefteqn{ \frac{\partial}{\partial\btheta}{\bf G}^{n,n-k}(\btheta) } \\ &=& \frac{\partial {\bf G}^{n,n-k+1}(\btheta)}{ \partial \btheta} \times\left( {\bf I}_k\otimes {\bf G}^{n-k+1,n-k}(\btheta) \right) \\ && + {\bf G}^{n,n-k+1}(\btheta) \times \frac{\partial {\bf G}^{n-k+1,n-k}(\btheta)}{\partial \btheta}. \end{eqnarray*} We apply similar methods to evaluate $\frac{\partial}{\partial\btheta}\mathbb{E}^{n,n-k}(\btheta)$ and the derivatives of the higher moments. The expressions for $\frac{\partial}{\partial\btheta}{\bf H}^{n,n+k}(\btheta)$ and related quantities follow by symmetry. Next, to evaluate $\frac{\partial}{\partial\btheta} {\bf f}(t;\btheta)$, we apply \begin{eqnarray*} \int_0^{\infty}e^{-st} \frac{\partial}{\partial\btheta} {\bf f}(t;\btheta)dt &=& \frac{\partial}{\partial\btheta} \int_0^{\infty}e^{-st} {\bf f}(t;\btheta)dt = \frac{\partial}{\partial\btheta} \widetilde{\bf f}(s;\btheta), \end{eqnarray*} since then the right-hand side can be computed using the results from the earlier sections, and then inverted to obtain the quantities of interest, $\frac{\partial}{\partial\btheta} {\bf f}_n(t;\btheta)$, for all $n$.
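In practice, it is worth validating the analytic sensitivities against central finite differences. A minimal sketch for the first-passage matrices at $s=0$ follows; the helper \texttt{blocks}, returning the matrices ${\bf Q}^{[n,n']}(\btheta)$, and the global truncation level \texttt{N} are hypothetical.
\begin{verbatim}
import numpy as np

def G_matrices(theta):
    # G[n] = G^{n,n-1} at s=0, by the recursion of Section 3.2
    Q = blocks(theta)   # hypothetical: Q[(n, n2)] = Q^{[n,n2]}
    G = {N: -np.linalg.solve(Q[(N, N)], Q[(N, N - 1)])}
    for n in range(N - 1, 0, -1):
        A = Q[(n, n)] + Q[(n, n + 1)] @ G[n + 1]
        G[n] = -np.linalg.solve(A, Q[(n, n - 1)])
    return G

def fd_sensitivity(theta, i, h=1e-6):
    # central difference in theta_i
    e = np.zeros_like(theta); e[i] = h
    Gp, Gm = G_matrices(theta + e), G_matrices(theta - e)
    return {n: (Gp[n] - Gm[n]) / (2 * h) for n in Gp}
\end{verbatim}
The output of \texttt{fd\_sensitivity} can then be compared entry-wise with the analytic recursions for $\frac{\partial}{\partial\btheta}{\bf G}^{n,n-1}(\btheta)$ above.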
By the recursive expressions in Section~\ref{sec:Distrt}, we have, \begin{eqnarray*} \lefteqn{ \frac{\partial \widetilde{\bf f}_{n_0}(s)}{ \partial \btheta} = -\widetilde{\bf f}_{n_0}(s) \times \frac{\partial } {\partial \btheta} \Big( ({\bf Q}^{[n_0,n_0]}-s{\bf I}) } \\ && + \widehat {\bf R}^{(n_0-1)}(s){\bf Q}^{[n_0-1,n_0]} + \widetilde{\bf R}^{(n_0+1)}(s){\bf Q}^{[n_0+1,n_0]} \Big) \\ && \times \Big( {\bf I}_k\otimes \Big( ({\bf Q}^{[n_0,n_0]}-s{\bf I}) \\ && + \widehat {\bf R}^{(n_0-1)}(s){\bf Q}^{[n_0-1,n_0]} + \widetilde{\bf R}^{(n_0+1)}(s){\bf Q}^{[n_0+1,n_0]} \Big) \Big)^{-1}; \end{eqnarray*} and for $n\not= n_0$, \begin{eqnarray*} \lefteqn{ \frac{\partial \widetilde{\bf f}_{n}(s;\btheta) }{ \partial \btheta} = \Big( -\bm{\alpha} \frac{\partial {\bf W}^{n_0,n}(s;\btheta) }{ \partial \btheta} } \\ && -\widetilde{\bf f}_{n}(s;\btheta) \times \frac{\partial }{\partial \btheta} \Big( ({\bf Q}^{[n,n]}(\btheta)-s{\bf I}) \\ && + \widehat {\bf R}^{(n-1)}(s;\btheta){\bf Q}^{[n-1,n]}(\btheta) + \widetilde{\bf R}^{(n+1)}(s;\btheta){\bf Q}^{[n+1,n]}(\btheta) \Big) \\ && \times \Big( {\bf I}_k\otimes \Big( ({\bf Q}^{[n,n]}(\btheta)-s{\bf I}) \\ && + \widehat {\bf R}^{(n-1)}(s;\btheta){\bf Q}^{[n-1,n]}(\btheta) + \widetilde{\bf R}^{(n+1)}(s;\btheta){\bf Q}^{[n+1,n]}(\btheta) \Big)^{-1} \Big). \end{eqnarray*} Further, by the recursive expressions in Section~\ref{sec:StatDistr}, \begin{eqnarray*} \frac{\partial \bm{\pi}_{n}(\btheta)}{\partial\btheta} &=& \frac{\partial \bm{\pi}_{n+1}(\btheta)}{ \partial \btheta} \times\left( {\bf I}_k\otimes \widehat{\bf R}^{(n)}(\btheta) \right) \\ && + \bm{\pi}_{n+1}(\btheta) \times \frac{\partial \widehat{\bf R}^{(n)}(\btheta)}{\partial \btheta}, \end{eqnarray*} where \begin{eqnarray*} \lefteqn{ \frac{\partial \bm{\pi}_N(\btheta)}{ \partial \btheta} = - \bm{\pi}_N(\btheta) } \\ && \times \frac{\partial } {\partial \btheta} \left( \widehat {\bf R}^{(N-1)}(\btheta) {\bf Q}^{[N-1,N]}(\btheta)+{\bf Q}^{[N,N]}(\btheta) \right) \\ && \times\left( {\bf I}_k\otimes \left( \widehat {\bf R}^{(N-1)}(\btheta) {\bf Q}^{[N-1,N]}(\btheta)+{\bf Q}^{[N,N]}(\btheta) \right)^{-1} \right) . \end{eqnarray*} Finally, the quantities $$\frac{\partial}{\partial\btheta}\widehat {\bf R}^{(n-1)}(s;\btheta)\quad\mbox{ and }\quad \frac{\partial}{\partial\btheta}\widetilde {\bf R}^{(n+1)}(s;\btheta)$$ on the right-hand side of the above, which are required to complete the analysis, can be derived from the recursive expressions for $\widehat{\bf R}^{(n-1)}(s;\btheta)$ and $\widetilde{\bf R}^{(n+1)}(s;\btheta)$ in Section~\ref{sec:StatDistr} and Section~\ref{sec:Distrt}, respectively, by analogous methods. \bibliographystyle{abbrv}
\section{Introduction} The high-energy behaviour of gravity is still not well understood, and several directions have been developed in order to build a satisfactory theoretical and phenomenological framework. One possibility is to consider gravity as an effective model \cite{donoghue} and study the implications of its quantisation at energies well below the Planck mass. Another possibility is to modify the ultraviolet (UV) behaviour of gravity, as can consistently be done, for example, with Horava-Lifshitz gravity \cite{Horava}. We consider here two modifications of Einstein gravity which are invariant under the reduced symmetry of 3-dimensional diffeomorphisms, in the situation where a specific coordinate frame is preferred, and study the consequences of this violation of local Lorentz symmetry on low-energy physics. In order to test the validity of these modified gravity models, we couple them to classical matter fields and study the generation, through quantum gravity corrections, of Lorentz-violating features in the matter dispersion relations. These two models are \begin{itemize} \item A modification of Einstein gravity, which preserves only 3-dimensional space diffeomorphisms, as in Horava-Lifshitz (HL) gravity, but which does not involve higher-order space derivatives or anisotropic scaling between space and time, so that one-loop corrections are quadratically divergent. This model is consistent with the effective approach to gravity; \item An extension of the first model, with an improved UV behaviour. This is achieved by the introduction of higher-order space derivatives, leading to less divergent loop integrals, which can consistently be implemented if we allow for an anisotropic scaling between space and time. The extension we consider is the non-projectable version of HL gravity (npHL) \cite{non-proj} for $z=3$, and it involves logarithmic divergences only. \end{itemize} In both cases, gravity is naturally described in terms of the Arnowitt-Deser-Misner (ADM) decomposition of the metric, which exhibits a space-time foliation. Also, the graviton has 3 degrees of freedom (dof), with a scalar dof appearing as a consequence of the breaking of 4-dimensional diffeomorphisms \cite{Arkani}. We couple these models to classical matter and derive the one-loop effective dispersion relation seen by particles, after integrating out gravitons. One-loop quantum corrections to the dispersion relations of quantum matter fields coupled to HL gravity have been studied in~\cite{Pospelov}, where the Authors derive the effective speed seen by a scalar field and an Abelian gauge field, and compare these to measure Lorentz-symmetry violation. In \cite{Padilla}, the non-projectable version of HL gravity is considered for the derivation of the effective matter Lagrangian. We perform a similar study here, but taking into account classical scalar and fermionic backgrounds, and we calculate the difference $\Delta v^2$ between the effective speeds of light seen by these two species. The first model involves quadratic divergences, so that we use a cut-off to calculate the one-loop graviton contributions, whereas the second model involves logarithmic divergences only, which we control with dimensional regularisation. We find that, although the two models behave differently in the UV, their phenomenological predictions are very similar, in the sense that they predict the same order of magnitude for $\Delta v^2$ if one keeps generic values for the different parameters.
This phenomenology is therefore not really improved by the introduction of higher-order space derivatives. Moreover, in both cases, we find that one can always impose $\Delta v^2=0$ if the parameters are fine-tuned. Studies of effective dispersion relations for Lifshitz-type models in flat space-time have shown the limited phenomenological viability of these models. This was first noted in \cite{IengoRussoSerone}, where it is shown that an unnatural fine-tuning of the bare parameters is required in order to match the light cones seen by two different interacting scalar particles. Similar studies were done in \cite{ABH}, where the effective dispersion relation for interacting Lifshitz fermions is derived, in the case where flavour symmetry is broken, or in \cite{AB}, where the fermion effective dispersion relation is derived in a Lifshitz extension of Quantum Electrodynamics. Our aim here is to study similar features and, more precisely, how global Lorentz symmetry is affected by a local symmetry breaking. A more complete study could involve the energy dependence of effective parameters obtained from a quantum modified gravity, in a Wilsonian renormalisation framework for example, as shown in \cite{Saueressig} for HL gravity. The latter approach is also used in \cite{Dodorico}, where instead the scalar field is integrated out on a classical metric background. Section 2 describes the models and discusses gauge freedom, as well as the actions for the gravity and matter sectors. We describe the different simplifications which can be used for the one-loop integration of the graviton, which is presented in section 3 and in the appendix. There we successively integrate the different components of the graviton fluctuations, for both models, keeping on-shell the auxiliary fields which appear in the decomposition of these fluctuations. Finally, we discuss the phenomenology of the two models in section 4, based on the current upper bounds for Lorentz-symmetry violating parameters. \section{Models} In this article, Greek letters denote space-time indices and Latin letters space indices only. The signature of the metric is $(-,+,+,+)$. \subsection{The original actions} We introduce here two modified gravity models, and their coupling to matter.\\ $\bullet$ The first modified gravity action we consider is a modified Einstein-Hilbert action where new operators are allowed due to the reduced symmetry of the theory. Using the ADM decomposition and omitting the cosmological constant, it can be expressed as \be\label{SG} S_G = M_P^2 \int dt d^3 x \sqrt{g} N (K_{ij} K^{ij} -\lambda K^2 + R^{(3)}+\alpha a_i a^i)~, \ee where $M_P^2 = (16 \pi G_N)^{-1}$ and \bea g &=& |\det(g_{ij})| \\ K_{ij} &=& \frac{1}{2N}( \partial_t g_{ij} - D_i N_j - D_j N_i)~~~\mbox{with}~~~D_i N_j = \partial_i N_j - \Gamma^{k}_{ij} N_k \nn K &=& K_{ij} g^{ij}~,~~~ R^{(3)} = R_{ijkl} g^{ik} g^{jl}~,~~~~a_i=\partial_i \ln N\nonumber~. \eea $G_N$ and $K_{ij}$ are respectively Newton's gravitational constant and the extrinsic curvature. The possibility of having $\lambda\ne1$ and $\alpha\neq 0$ implies that 4-dimensional diffeomorphisms are broken to 3-dimensional diffeomorphisms, and the Einstein-Hilbert action can be recovered with $\lambda=1$ and $\alpha=0$.\\ $\bullet$ The second model we consider is the non-projectable version of Horava-Lifshitz gravity~\cite{non-proj} which allows the lapse function $N$ to be a function of space and time.
This implies that terms containing $a_i = \partial_i\ln N$ should be included in the action, in order to have a consistent dispersion relation for the scalar graviton \cite{non-proj}, which avoids the well-known pathology present in the original version of HL gravity~\cite{Horava}. In this context, we require space and time to scale anisotropically, such that \be \vec{x}\to b\vec{x}~~~\mbox{while}~~~t\to b^z t~, \ee and the choice $z=3$ motivates us to introduce dimension 4 and dimension 6 operators, which become important in the UV. As in~\cite{non-proj} though, the time component is reparametrised so that the action is written in terms of the ``physical'' units. The npHL action can be written as~\cite{non-proj} \bea\label{SHL} S_{HL} &=& M_P^2 \int dt d^3 x \sqrt{g} N \left\{K_{ij} K^{ij} -\lambda K^2 + R^{(3)}+\alpha a_i a^i\right.\\ &&\left. + F_1 R_{ij}R^{ij} +F_2 (R^{(3)})^2 +F_3 R^{(3)} \nabla_i a^i +F_4 a_i \Delta a^i\right.\nonumber\\ &&\left.+S_1 (\nabla_i R_{jk})^2 +S_2 (\nabla_i R^{(3)})^2 + S_3 (\Delta R^{(3)} \nabla_i a^i) + S_4 (a_i \Delta^2 a^i) \right\}~,\nonumber \eea where $F_i= (f_i/M_{HL}^2)$ and $S_i=(s_i/M_{HL}^4)$ with $M_{HL}$ being the Horava-Lifshitz scale, and $f_i$ and $s_i$ are dimensionless coupling constants associated with operators of dimension 4 and 6, respectively.\\ $\bullet$ Finally, for the matter sector, we will consider complex scalar and fermion fields minimally coupled to the gravity models~(\ref{SG}) and (\ref{SHL}). The complex scalar field action is \be\label{Ss1} S_s = -\int dt d^3 x~\sqrt{g}Ng^{\mu\nu}\partial_\mu \phi \partial_\nu \phi^\star~, \ee and the fermion action is \be\label{Sf1} S_f = -\int dt d^3 x~\frac{ie}{2} \left[\bar{\psi} \gamma^\alpha e^{\mu}_{~\alpha} \nabla_\mu \psi-e^{\mu}_{~\alpha} (\nabla_\mu\bar{\psi}) \gamma^\alpha\psi \right]~, \ee where \bea e &=& \det(e^{~\alpha}_{\mu}) = \sqrt{g} N\\ \nabla_\mu \psi &=& (\partial_\mu + \Gamma_\mu) \psi~~\mbox{and}~~ \nabla_\mu \bar{\psi} = \partial_\mu\bar{\psi} - \bar{\psi}\Gamma_\mu\nn \Gamma_\mu &=& \frac{1}{2} w_{\mu\alpha\beta}\sigma^{\alpha\beta}~~\mbox{and}~~ \sigma^{\alpha\beta}=\frac{1}{4}[\gamma^\alpha,\gamma^\beta]~, \nn w_{\mu\alpha\beta} &=& e^{\lambda}_{~\alpha}( \partial_\mu e_{\lambda \beta} - \Gamma^{\sigma}_{\lambda\mu} e_{\sigma\beta}) = e^{\lambda}_{~\alpha}(D_\mu e_{\lambda\beta})~,\nonumber \eea $w_{\mu\alpha\beta}$ being the spin connection. \subsection{Gauge invariance and degrees of freedom} The 4-dimensional diffeomorphisms are explicitly broken for both gravity models for $\lambda \neq 1$, $\alpha \neq 0$, $F_i\neq0$ and $S_i\neq 0$. Instead, these models are invariant under 3-dimensional diffeomorphisms \bea \label{diffeo} \delta t&=&f(t)\\ \delta x^{i}&=&\xi^i(t,x)\nn \delta g_{ij}&=&\partial_{i}\xi_j+\partial_{j}\xi_i+\xi^{k}\partial_{k}g_{ij}+f\dot{g}_{ij} \nn \delta N_{i}&=&\partial_{i}\xi^kN_k+\xi^k\partial_kN_{i}+\dot{\xi^{j}}g_{ij}+\dot{f}N_i+f \dot{N}_{i} \nn \delta N&=&\xi^k\partial_kN+\dot{f}N+f \dot{N}~.\nonumber \eea Because of 4-dimensional diffeomorphism breaking, a third physical dof is present in both models, and a way of counting the number of dof of the theory is given in \cite{Henneaux}, where the Authors use a Hamiltonian description of gauge theories. They show that the number of primary constraints to take into account is the number of gauge functions {\it plus} the number of their time derivatives, in the situation where these gauge functions depend on space and time.
This is because gauge functions and their time derivatives must be considered independently when defining boundary conditions for the evolution of the gauge fields. In our case, we have 10 independent metric components ($N,~N_i,~g_{ij}$) and we see from the gauge transformations (\ref{diffeo}) that the functions $\xi^i$ count twice since they appear with their time derivative, while $f$ counts only once, since it depends on $t$ only. The total number of dof is therefore $10-(2\times3+1) = 3$. In order to study the one-loop quantum corrections to the matter sectors, we expand the metric $g_{\mu\nu}$ and, consequently, $e^\mu_{~\alpha}$ around a flat background: \bea g_{\mu\nu} &=& \eta_{\mu\nu} + h_{\mu\nu}\\ g^{\mu\nu} &=& \eta^{\mu\nu} -h^{\mu\nu}+h^{\mu\lambda}h_{\lambda}^\nu+\cdots~,\nonumber\\ e^{~\alpha}_{\mu} &=& \delta^{\alpha}_{\mu} + \frac{1}{2} h^{\alpha}_{\mu} - \frac{1}{8} h_{\mu \lambda } h^{\lambda \alpha} + \cdots\nonumber\\ e^{\mu}_{~\alpha} &=& \delta^{\mu}_{\alpha} - \frac{1}{2} h^{\mu}_{\alpha} + \frac{3}{8} h^{\mu \lambda } h_{\lambda \alpha} + \cdots ~,\nonumber \eea where dots represent higher orders in fluctuations. Using the following relations \bea g_{\mu\nu} &=& e_{\mu}^{~\alpha} e_{\nu}^{~\beta} \eta_{\alpha\beta}~,\\ e_{\mu}^{~\alpha}e^{\mu}_{~\beta}&=&\eta^{\alpha}_{\beta}~,~~e_{\nu}^{~\alpha}e^{\mu}_{~\alpha}=g^{\mu}_{\nu}~,\nonumber\\ ds^2 &=& -N^2 dt^2 + g_{ij}(dx^i+N^idt)(dx^j+N^jdt)~,\nonumber \eea we can then express the different ADM components in terms of their fluctuations \bea N &=& 1+n\\ N_i &=& n_i \nonumber\\ g_{ij} &=& \delta_{ij} + h_{ij}~.\nonumber \eea The fluctuations $n_i$ and $h_{ij}$ can additionally be decomposed into their different spin components as: \bea n_i &=& n_i^T + \partial_i \rho~,\label{nidec}\\ h_{ij} &=& H_{ij}+ (\partial_i W_j+ \partial_j W_i ) + \left(\partial_{i}\partial_{j}-\frac{\delta_{ij}}{3}\partial^2\right)B +\frac{\delta_{ij}}{3}h~,\label{hdecomp} \eea where $H_{ij}$ is a transverse-traceless tensor, $n_i^T$ and $W_i$ are transverse vectors and $B$, $h$ and $\rho$ are scalar fields, $h$ being the trace of $h_{ij}$. Then, making use of the gauge freedom shown in eq.(\ref{diffeo}), a natural gauge choice is to set the vector $W_i$ and the scalar $B$ to zero. Consequently, eq.(\ref{hdecomp}) becomes \be\label{hdec} h_{ij}=H_{ij} +\frac{\delta_{ij}}{3} h~, \ee where $H_{ij}$ and $h$ carry the 3 physical degrees of freedom present in the theory, while $n$, $n_i^T$ and $\rho$ are auxiliary fields. \subsection{Expanding the actions} Since we are interested in one-loop corrections, it is enough to expand the actions up to quadratic order in the ADM field fluctuations. The flat space metric is $\delta_{ij}$, such that all the spatial indices are lowered, {\it i.e.} $h^{ij} \to h_{ij}$. We then have \bea \sqrt{g} &=& 1 + \frac{1}{2} h + \frac{1}{8} (h^2 - 2h_{ij}h_{ij})+\cdots\\ \Gamma_k &=& -\frac{\sigma_{ij}}{2} \left[ \partial_i h_{kj} - \frac{1}{2}\left(h_{il}\partial_l h_{jk} + h_{lj}\partial_i h_{kl} - \frac{1}{2}h_{il}\partial_k h_{jl}\right)\right]+\cdots~,\nonumber \eea where $h=h_{ii}$, and dots represent higher orders in fluctuations which can be omitted for the one-loop calculation. \subsubsection{Matter sector} We explain here the construction of the relevant matter actions for scalars and fermions, and then describe the ansatz taken for these external fields.
$\bullet$ Expanding the scalar action (\ref{Ss1}) up to quadratic order in the graviton field fluctuations, we obtain \bea\label{Ss2} S_s^{(2)} &=& -\int dt d^3 x~\left\{\left[1+n+\frac{h}{2}+\frac{hn}{2} +\frac{1}{8}(h^2-2h^2_{ij})\right](-\dot{\phi} \dot{\phi}^\star +\partial_k \phi \partial_k \phi^\star)\right.\\ &&+\left. 2 n \dot{\phi}\dot{\phi}^\star+2n_i \dot{\phi}\partial_i \phi^\star-h_{ij}\partial_i\phi\partial_j\phi^\star+2\left(\frac{h n_i}{2}-n n_i - n_j h_{ij}\right)\dot{\phi} \partial_i\phi^\star\right.\nonumber\\ &&+\left. (nh-n^2)\dot{\phi} \dot{\phi}^\star +\left(h_{il}h_{lj}-n_i n_j -nh_{ij}-\frac{h h_{ij} }{2}\right)\partial_i\phi\partial_j\phi^\star\right\}~,\nonumber \eea where the first line can only generate Lorentz-symmetric contributions and therefore will be omitted. Moreover, as pointed out in \cite{Pospelov}, quadratic terms in the metric fluctuations will only contribute to our results when the graviton fields are contracted among themselves. Therefore, for any tensor $T_{ij}$ quadratic in the graviton field, because of rotational invariance in space, one can use the following simplification: $T_{ij}\partial_i\phi\partial_j\phi^\star\to (T_{ii}/3) \partial_k\phi\partial_k\phi^\star$. In addition, linear terms in the metric perturbations can also be omitted, as they lead to quartic matter self-interactions after completing the square (we are interested in corrections to the kinetic terms only). Finally, terms of the form $hn_i, nn_i$ or $n_jh_{ij}$ do not contribute since they have to be contracted with other vector metric fluctuations, leading to terms which are at least cubic in fluctuations. The relevant part of the action, containing only terms which generate Lorentz-violating contributions, is then \be\label{Ss3} S_s^{(2)} = -\int dt d^3 x~\left\{ (nh-n^2)\dot{\phi}\dot{\phi}^\star + \frac{1}{3}\left(h_{ij}^2-n_i^2 -nh-\frac{h^2}{2}\right)\partial_k\phi\partial_k\phi^\star\right\}~. \ee To simplify the expression above even further, we can write \be (nh-n^2)\dot{\phi}\dot{\phi}^\star =- (nh-n^2)\partial_\mu\phi\partial^\mu\phi^\star+(nh-n^2)\partial_k \phi \partial_k \phi^\star~, \ee and since the first term on the right-hand side of the expression above is Lorentz-symmetric, we only need to take into account the second one, such that~(\ref{Ss3}) becomes \be\label{Ss4} \tilde S_s^{(2)} = -\frac{1}{3}\int dt d^3 x~\left[ h_{ij}^2-n_i^2- 3n^2+2nh-\frac{h^2}{2}\right]\partial_k\phi\partial_k\phi^\star~. \ee $\bullet$ For the fermion sector, we find \bea\label{Sf2'} S_f^{(2)} &=& -\frac{i}{2}\int dt d^3 x \left\{\left[1+n+\frac{h}{2}+\frac{hn}{2} +\frac{1}{8}(h^2-2h^2_{ij})\right]\bar{\psi} \gamma^{\mu} \overleftrightarrow{\partial_\mu}\psi\right.\\ &&+\left. \left(n\delta_{ij} -\frac{1}{2}h_{ij}\right)\left( \bar{\psi}\gamma_i \overleftrightarrow{\partial_j}\psi+\bar{\psi}\{\gamma_i,\Gamma_j^{(1)}\}\psi\right) +\bar{\psi} \{\gamma^\mu,\Gamma_\mu^{(1)}\}\psi\right.\nonumber\\ &&+\left.\frac{1}{2}n_i\left[\bar{\psi}\left(\gamma_i\overleftrightarrow{\partial_0}-\gamma^0\overleftrightarrow{\partial_i} +\{\gamma_i,\Gamma_0^{(1)}\}-\{\gamma^0,\Gamma_i^{(1)}\} \right) \psi \right] +\bar{\psi}\{\gamma^\mu,\Gamma_\mu^{(2)}\}\psi \right.\nonumber\\ &&+\left.\left(\frac{h n_i}{4}-\frac{3}{8} n_j h_{ij} - \frac{n n_i}{4}\right)\bar{\psi} \left(\gamma_i \overleftrightarrow{\partial_0}-\gamma^0\overleftrightarrow{\partial_i} \right)\psi +\frac{h}{2} \bar{\psi}\{\gamma^\mu,\Gamma_\mu^{(1)}\}\psi \right.\nonumber\\ &&+\left.
\frac{1}{8}n_i^2\bar{\psi} \gamma^0\overleftrightarrow{\partial_0}\psi +\left(\frac{3}{8}h_{il}h_{lj}-\frac{h h_{ij}}{4} +\frac{n(h\delta_{ij}-h_{ij})}{2}-\frac{3}{8}n_in_j\right)\bar{\psi} \gamma_i \overleftrightarrow{\partial_j}\psi \right\}~.\nonumber \eea As in the scalar case, the terms on the first line would only contribute to Lorentz-symmetric corrections, hence such terms will be omitted. For the same reasons explained above, linear terms in the graviton fields as well as terms proportional to $\bar{\psi}\gamma_i\overleftrightarrow{\partial_0}\psi$ or $\bar{\psi}\gamma_0\overleftrightarrow{\partial_i}\psi$ do not contribute to our calculation and therefore will also be omitted from now on. Terms containing the anticommutator of a $\gamma$ matrix and the spin connection will not contribute either, since they involve the contraction of a symmetric tensor with an antisymmetric one. Finally, as discussed above for the scalar action, for a tensor $T_{ij}$ quadratic in the graviton fields, we make the replacement $T_{ij}\bar{\psi}\gamma_i\overleftrightarrow{\partial_j}\psi \to (T_{ii}/3)(\bar{\psi}\gamma_k\overleftrightarrow{\partial_k}\psi)$. Thus, the Lorentz-violating fermion action we consider becomes \bea\label{Sf3} S_f^{(2)} &=& -\frac{i}{2}\int dt d^3 x \left\{\frac{1}{8}n_i^2\bar{\psi} \gamma^0\overleftrightarrow{\partial_0}\psi +\frac{1}{3}\left(\frac{3}{8}h_{ij}^2-\frac{h^2}{4}+n h -\frac{3}{8}n_i^2\right)\bar{\psi} \gamma_k \overleftrightarrow{\partial_k}\psi\right\}~, \eea which, after removing Lorentz-symmetric terms, reduces to \be\label{Sf4} \tilde S_f^{(2)} =-\frac{i}{4}\int dt d^3 x \left[\frac{1}{4}h_{ij}^2-\frac{h^2}{6}-\frac{1}{2}n_i^2+\frac{2}{3}n h\right]\bar{\psi} \gamma_k \overleftrightarrow{\partial_k}\psi~. \ee $\bullet$ After reducing the matter actions to their simplest forms (\ref{Ss4}) and (\ref{Sf4}), we then consider plane waves as an ansatz for the external fields \bea\label{ansatz} \phi(x)&=& \phi_0\exp(-ip^\mu x_\mu)\nn \psi(x)&=& \psi_0\exp(-iq^\mu x_\mu)~, \eea so that the quantities $\partial_k \phi\partial_k \phi^\star$ and $\bar{\psi}\gamma_k \overleftrightarrow{\partial_k}\psi$ become constants and, therefore, will be respectively replaced by $(\vec p^2\phi_0^2)$ and $(-2i)[\bar{\psi}_0(\vec{\gamma}\cdot\vec q)\psi_0]$~.
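For completeness, the algebra behind these replacements is immediate; nothing beyond the ansatz (\ref{ansatz}) is used. Since $\partial_k\phi = -ip_k\phi$ and $\partial_k\psi = -iq_k\psi$, we have \be \partial_k \phi\partial_k \phi^\star = (-ip_k\phi)(ip_k\phi^\star) = \vec p^2\phi_0^2~,~~~~ \bar{\psi}\gamma_k \overleftrightarrow{\partial_k}\psi = \bar{\psi}\gamma_k(\partial_k\psi)-(\partial_k\bar{\psi})\gamma_k\psi = -2i\, q_k\, \bar{\psi}_0\gamma_k\psi_0~, \ee with the notation $\phi_0^2\equiv\phi_0\phi_0^\star$.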
With these classical background matter field configurations and using the field decompositions (\ref{nidec}) and (\ref{hdec}), the scalar~(\ref{Ss4}) and fermion~(\ref{Sf4}) actions are respectively \bea\label{expm} \tilde S_s^{(2)} &=& -\frac{1}{3}\int dt d^3 x~\left[H_{ij}^2-\frac{h^2}{6}-(n_i^T)^2+\rho\partial^2\rho-3n^2+2nh \right](\vec p^2\phi_0^2)~,\\ \tilde S_f^{(2)} &=& -\frac{1}{2}\int dt d^3 x~\left[\frac{1}{4}H_{ij}^2-\frac{h^2}{12}-\frac{(n_i^T)^2}{2}+\frac{1}{2}\rho \partial^2\rho+ \frac{2}{3}nh\right][\bar{\psi}_0(\vec{\gamma}\cdot\vec q)\psi_0]~.\nonumber \eea \subsubsection{Gravity actions} For the gravity actions, we expand~(\ref{SG}) and~(\ref{SHL}) up to quadratic order in the metric fluctuations and make use of the metric decompositions~(\ref{nidec}) and~(\ref{hdec}) to obtain \bea\label{expg} S_G^{(2)} &=& M_P^2 \int dt d^3 x \left[ \frac{1}{4}H_{ij}(\partial^2-\partial_t^2)H_{ij} -\frac{1}{2}n_i^T \partial^2 n_i^T-(\lambda-1)\rho (\partial^2)^2\rho\right.\\ &&+\left.\frac{(3\lambda-1)}{12}h \partial_t^2h -\frac{1}{18}h \partial^2h-\alpha n \partial^2 n+\frac{(3\lambda-1)}{3}\rho \partial^2 \dot{h} - \frac{2}{3}n \partial^2 h\right]~,\nonumber \eea and \bea\label{exphl} S_{HL}^{(2)} &=& M_P^2 \int dt d^3x \left\{ \frac{1}{4} H_{ij} \left[- \partial_t^2 +\partial^2 + F_1 (\partial^2)^2 - S_1 (\partial^2)^3 \right] H_{ij}-\frac{1}{2} n_i^T \partial^2 n_i^T\right. \\ && + \left. \frac{1}{18} h \left[\frac{3(3\lambda-1)}{2}\partial_t^2-\partial^2 +(3F_1+8F_2)(\partial^2)^2-(3S_1+8S_2)(\partial^2)^3 \right] h\right.\nonumber\\ && - \left. n[\alpha\partial^2+F_4 (\partial^2)^2+S_4 (\partial^2)^3] n - (\lambda-1)\rho (\partial^2)^2 \rho +\frac{(3\lambda-1)}{3} \rho \partial^2 \dot{h}\right. \nonumber\\ && - \left. \frac{2}{3} h [\partial^2+F_3 (\partial^2)^2 +S_3 (\partial^2)^3] n\right\} \nonumber~. \eea Finally, because ghosts do not couple to the matter sector at tree level, one needs to consider at least two-loop corrections in order to have a non-vanishing contribution coming from interactions with ghost fields. Therefore, in the present work, ghosts can be omitted. \section{One-loop matter effective actions} Here we integrate out the metric fluctuations in order to obtain the effective matter kinetic terms. For both gravity models, we impose the Hamiltonian and momentum constraints in the path integral, which amounts to keeping the auxiliary fields on-shell. This approach is used in \cite{Padilla} and implies that no conformal instability arises in our calculations \cite{Gibbons}. It is known in perturbative quantum gravity that, when introducing an irreducible decomposition for the metric, some of the components have a propagator with the wrong sign, leading to a potential problem in defining the partition function. This unstable mode can be traced back to a conformal factor, but with our gauge choice it would arise from the integration of the auxiliary fields in the metric decomposition, which does not happen here. This problem of conformal instability can be understood as an artifact arising from the perturbative expansion \cite{Mazur}, but can also be avoided by making an analytical continuation of the metric components to imaginary values, simultaneously with the Wick rotation \cite{'tHooft}. We note here another non-trivial connection between the Wick rotation and quantum gravity, in the context of time-dependent bosonic strings.
As shown in \cite{AEM} for a specific string configuration, which satisfies conformal invariance non-perturbatively in $\alpha'$, a Wick rotation in target space implies a phase factor for the overall string partition function. This phase becomes real for specific space-time dimensions only, and 4 appears to be the lowest dimension for which the string partition function remains real after the Wick rotation. \subsection{Model I: modified Einstein-Hilbert gravity} We study here the model (\ref{SG}), for which one-loop quantum corrections are quadratically divergent and will therefore be regularised with a cut off. After expansion of the matter and gravity sectors in terms of the metric fluctuations, the actions that we are interested in here are (\ref{expm}) and (\ref{expg}). \subsubsection{Constraints} Fluctuations in the shift vector $(n_i^T,\rho)$ and the lapse function $(n)$ are auxiliary fields and are therefore not propagating, such that one can use their equations of motion (the momentum and Hamiltonian constraints) and substitute them back into the action. Varying the actions with respect to $n$ gives us the following constraint \be\label{n1constr} [-2\alpha M_P^2\partial^2 +2(\vec{p}^2\phi_0^2) ] n = \frac{2}{3}\left[M_P^2 \partial^2+ (\vec{p}^2\phi_0^2) +\frac{1}{2} [\bar{\psi}_0(\vec{\gamma}\cdot\vec q)\psi_0]\right]h~, \ee whereas variations with respect to the scalar and transverse parts of the shift vector lead to \bea \left[2M_P^2(1-\lambda)\partial^2 -\frac{2}{3}\left((\vec{p}^2\phi_0^2) + \frac{3}{4}[\bar{\psi}_0(\vec{\gamma}\cdot\vec q)\psi_0]\right)\right]\partial^2 \rho &=& -\frac{M_P^2(3\lambda-1)}{3}\partial^2 \dot{h}~,\label{rconstr}\\ \left[-M_P^2\partial^2+\frac{2}{3}\left((\vec{p}^2\phi_0^2) + \frac{3}{4}[\bar{\psi}_0(\vec{\gamma}\cdot\vec q)\psi_0]\right)\right]n_i^T &=& 0~.\label{nTconstr} \eea When the last constraint is put back into the actions, all contributions coming from $n_i^T$ disappear and from now on such actions will only depend on the tensor and scalar components of the metric. On the other hand, since the auxiliary scalar fields $n$ and $\rho$ appear mixed with the scalar graviton $h$, before substituting the constraints~(\ref{n1constr}) and (\ref{rconstr}) back into the actions, we can expand them in terms of the matter contributions to find \bea\label{nrconstr} n&=& -\frac{1}{3\alpha}\left[1+\frac{(\vec{p}^2\phi_0^2)}{M_P^2}\left(\frac{\alpha+1}{\alpha}\right)(\partial^2)^{-1} + \frac{[\bar{\psi}_0(\vec{\gamma}\cdot\vec q)\psi_0]}{2M_P^2}(\partial^2)^{-1}\right]h+\cdots~,\label{nconst1}\\ \rho &=& \frac{(3\lambda-1)}{6(\lambda-1)}\left[1-\frac{(\vec{p}^2\phi_0^2) + \frac{3}{4}[\bar{\psi}_0(\vec{\gamma}\cdot\vec q)\psi_0]}{3(\lambda-1)M_P^2}(\partial^2)^{-1}\right](\partial^2)^{-1}\dot{h}+\cdots,\label{rconst} \eea where dots represent higher-order terms in the matter fields. The other equations of motion fed back into the actions lead to \bea\label{SmI} S^{(2)}_I &=&\int dt d^3x \left\{ \frac{1}{2} H_{ij} \left[ \frac{M_P^2}{2} (\partial^2- \partial_t^2) -\frac{2}{3}(\vec p^2 \phi_0^2) - \frac{1}{4}[\bar{\psi}_0(\vec{\gamma}\cdot\vec q)\psi_0]\right] H_{ij}\right.\\ && + \left. \frac{1}{2} h \left[ \frac{M_P^2}{9}\left(-X\partial_t^2+\left(\frac{2-\alpha}{\alpha}\right)\partial^2\right) +\frac{(\vec{p}^2\phi_0^2)}{9}\left(\frac{\alpha^2+4\alpha+2}{\alpha^2}+\frac{X^2}{6}\frac{\partial_t^2}{\partial^2}\right)\right.\right.\nonumber\\ && +\left.\left.
\frac{[\bar{\psi}_0(\vec{\gamma}\cdot\vec q)\psi_0]}{9}\left(\frac{3\alpha+8}{4\alpha}+\frac{X^2}{8}\frac{\partial^2_t}{\partial^2}\right)\right] h\right\} \nonumber~, \eea where \be\label{X} X=\frac{3\lambda-1}{\lambda-1}~. \ee For a consistent propagation of the scalar graviton $h$, one needs $X>0$, such that the allowed values for $\lambda$ are \be\label{allowedlambda} \lambda<1/3~~\mbox{or}~~\lambda>1~. \ee Finally, from the action (\ref{SmI}) we find the following dispersion relation for $h$ \be\label{hdrIR} \omega^2 = \left(\frac{\lambda -1}{3\lambda-1}\right) \left(\frac{2-\alpha}{\alpha}\right) \vec{k}^2~, \ee which shows a consistent propagation for $0<\alpha<2$ for the allowed values of $\lambda$ given in eq.(\ref{allowedlambda}). \subsubsection{Loop integration} We give in the Appendix the details of the integration over the graviton.\\ \nin{\bf Spin-2 component}\\ The integration over the spin-2 component gives \be \exp\left\{-\frac{1}{M_P^2}\left[\frac{4}{3}(\vec p^2 \phi_0^2) + \frac{1}{2}[\bar{\psi}_0(\vec{\gamma}\cdot\vec q)\psi_0]\right] \frac{\delta(0)}{2(2\pi)^2} \Lambda^2+\cdots\right\}~,\nonumber \ee where $\delta(0)$ is the space-time volume and dots represent either field-independent terms or higher orders in $(\vec p^2 \phi_0^2)$ and $[\bar{\psi}_0(\vec{\gamma}\cdot\vec q)\psi_0]$. \vspace{0.5cm} \nin{\bf Spin-0 component}\\ The integration over the spin-0 component gives \bea &&\exp\left\{\frac{1}{M_P^2}\left[\frac{\alpha^2+4\alpha+2}{2\alpha^2}(\vec p^2\phi_0^2)+\frac{3\alpha+8}{8\alpha} [\bar{\psi_0}(\vec{\gamma}\cdot \vec q)\psi_0]\right.\right.\nn &&~~~~~~~~~~~~~~~~~~\left.\left.-\frac{X^2}{4}\left(\frac{(\vec p^2\phi_0^2)}{3}+\frac{[\bar{\psi_0}(\vec{\gamma}\cdot \vec q)\psi_0]}{4}\right)\right] Y\frac{\delta(0)\Lambda^2}{2(2\pi)^2} +\cdots \right\}~, \eea where \be Y = \sqrt{ \frac{\alpha(\lambda-1)}{(2-\alpha)(3\lambda-1)} }~. \ee \subsubsection{Total Lorentz-violating contributions} Considering the results obtained in the previous sections, we add here all relevant contributions and present the total Lorentz-violating corrections for both scalar and fermion fields. We also note that \be\label{dpf} (\vec p^2 \phi_0^2) = \frac{1}{\delta(0)}\int dt d^3x~\partial_k \phi \partial_k \phi^\star~~~\mbox{and}~~~[\bar{\psi}_0(\vec{\gamma}\cdot\vec q)\psi_0] = \frac{1}{\delta(0)}\int dt d^3x~\bar{\psi}i\gamma_k \partial_k\psi~. \ee Considering the results (\ref{resHI}) and (\ref{reshI}), the total contributions to the matter fields are given by the following Lagrangians \be\label{sresI} \frac{1}{2(2\pi)^2}\frac{\Lambda^2}{M_P^2}\left[\frac{4}{3}+ \frac{Y}{2}\left(\frac{X^2}{6}-\frac{\alpha^2+4\alpha+2}{\alpha^2}\right)\right] \partial_k \phi\partial_k \phi^\star~ \ee in the scalar case, and \be\label{fresI} \frac{1}{2(2\pi)^2}\frac{\Lambda^2}{M_P^2}\left[\frac{1}{2}-\frac{Y}{8}\left(\frac{3\alpha+8}{\alpha}-\frac{X^2}{2}\right)\right] \bar{\psi}i\gamma_k\partial_k\psi~, \ee in the fermion case, where $X$ and $Y$ are defined in eq.(\ref{X}) and eq.(\ref{Y}) respectively. \subsection{Model II: non-projectable HL gravity} We turn here to the non-projectable version of Horava-Lifshitz gravity~(\ref{SHL}), for which the relevant actions are given by eqs.(\ref{expm}) and (\ref{exphl}). In the absence of matter, one-loop quantum corrections for this model are logarithmically divergent, but in the presence of dynamical matter, quadratic divergences arise from the coupling between gravity and bosonic matter \cite{Pospelov,Padilla}.
In our case, matter cannot induce further divergences compared to the single gravity case, since it is classical and thus does not involve any new loop momentum. The use of the Hamiltonian and momentum constraints generates an artificial quartic divergence though, due to the introduction of additional space derivatives in the decompositions (\ref{nidec},\ref{hdecomp}) of the graviton: as can be seen from the action (\ref{SmII}) below, the presence of matter is then accompanied by the derivative operator $\partial_t^2/(\vec\partial)^2$, arising from the coupling between $\rho$ and $h$ in the gravity sector. This divergence is thus a gauge artifact, to which we will come back in section 4, and which we disregard in our calculations. As a consequence, we use dimensional regularisation, where the naively power-law divergent integrals actually vanish \cite{Leibbrandt}. We note that the vanishing or finiteness of a regularised integral which otherwise would naively be divergent is explained pedagogically in \cite{Weinzierl}: in the regularised integral, divergences associated with different regions of the domain of integration cancel each other, such that the integral is finite when the regulator is removed. \subsubsection{Constraints} The constraints (\ref{rconstr}) and (\ref{nTconstr}) obtained from the variation of the action with respect to the shift vector components ($n_i^T,\rho$) are the same as in the previous modified gravity model, since the dimension 4 and dimension 6 operators which are added do not depend on these components. The additional contributions to the lapse function fluctuations $n$ lead to the new constraint \be [-2M_P^2 \mathcal{D}_2 + 2 (\vec{p}^2\phi_0^2)]n =\frac{2}{3}\left[M_P^2 \mathcal{D}_1 + (\vec{p}^2\phi_0^2) + \frac{1}{2}[\bar{\psi}_0(\vec{\gamma}\cdot\vec q)\psi_0] \right] h ~, \ee which can be written as \be\label{nconst} n = -\frac{1}{3}\left[\frac{\mathcal{D}_1}{\mathcal{D}_2} + \frac{(\vec{p}^2 \phi_0^2)}{M_P^2}\frac{1}{\mathcal{D}_2}\left(1+\frac{\mathcal{D}_1}{\mathcal{D}_2}\right) + \frac{1}{2M_P^2}[\bar{\psi}_0(\vec{\gamma}\cdot\vec q)\psi_0]\frac{1}{\mathcal{D}_2} \right]h+\cdots~, \ee where dots represent higher-order terms in $[\bar{\psi}_0(\vec{\gamma}\cdot\vec q)\psi_0]$ and $(\vec{p}^2\phi_0^2)$, and \bea \mathcal{D}_1 &=& [\partial^2+ F_3 (\partial^2)^2+S_3 (\partial^2)^3]~,\\ \mathcal{D}_2 &=& [\alpha\partial^2+ F_4 (\partial^2)^2+S_4 (\partial^2)^3]~.\nonumber \eea Using the constraints (\ref{rconstr}), (\ref{nTconstr}) and (\ref{nconst}) to rewrite the original actions~(\ref{expm}) and~(\ref{exphl}), we arrive at the following action, which only depends on the physical metric fluctuations $H_{ij}$ and $h$, \bea\label{SmII} S^{(2)}_{II} &=&\int dt d^3x \left\{ \frac{1}{2} H_{ij} \left[ \frac{M_P^2}{2} (- \partial_t^2 +\partial^2 + F_1 (\partial^2)^2 - S_1 (\partial^2)^3 ) -\frac{2}{3}(\vec p^2 \phi_0^2) - \frac{1}{4}[\bar{\psi}_0(\vec{\gamma}\cdot\vec q)\psi_0]\right] H_{ij}\right.\nonumber \\ && + \left. \frac{1}{2} h \left[ \frac{M_P^2}{9}\left(-X\partial_t^2-\partial^2+(3F_1+8F_2)(\partial^2)^2-(3S_1+8S_2)(\partial^2)^3 +2\left(\frac{\mathcal{D}_1^2}{\mathcal{D}_2}\right)\right)\right.\right.\nonumber\\ &&+\left.\left. \frac{(\vec{p}^2\phi_0^2)}{9}\left(1+4\left(\frac{\mathcal{D}_1}{\mathcal{D}_2}\right)+2\left(\frac{\mathcal{D}_1}{\mathcal{D}_2}\right)^2 +\frac{X^2}{6}\frac{\partial_t^2}{\partial^2}\right)\right.\right.\nonumber\\ && +\left.\left.
\frac{[\bar{\psi}_0(\vec{\gamma}\cdot\vec q)\psi_0]}{9}\left(\frac{3}{4}+2\left(\frac{\mathcal{D}_1}{\mathcal{D}_2}\right) +\frac{X^2}{8}\frac{\partial^2_t}{\partial^2}\right)\right] h\right\}~. \eea \subsubsection{Loop integration} We give in the Appendix the details of the integration over the graviton, which is done using dimensional regularisation, with $d=3-\epsilon$.\\ \nin{\bf Spin-2 component}\\ The integration over the spin-2 component gives \be \exp\left\{-\frac{1}{M_P^2}\left[\frac{4}{3}(\vec p^2 \phi_0^2) + \frac{1}{2}[\bar{\psi}_0(\vec{\gamma}\cdot\vec q)\psi_0]\right] \frac{\delta(0)}{(2\pi)^2\sqrt{|S_1|}}\frac{\mu^\epsilon}{\epsilon}+\cdots\right\}~. \ee \vspace{0.5cm} \nin{\bf Spin-0 component}\\ The integration over the spin-0 component gives \bea &&\exp\left\{\frac{1}{M_P^2}\left[\frac{1}{2\sqrt{C_6^{(0)}}}\left((\vec p^2 \phi_0^2)+\frac{3}{4}[\bar{\psi}_0(\vec{\gamma}\cdot\vec q)\psi_0]\right) +\frac{(\vec p^2 \phi_0^2)}{\sqrt{C_6^{(2)}}}\right.\right.\\ &&\left.\left.~~~~~~~~~~~~~~~~~~~~~~~~~~+\frac{1}{\sqrt{C_6^{(1)}}}\left(2(\vec p^2 \phi_0^2)+[\bar{\psi}_0(\vec{\gamma}\cdot\vec q)\psi_0]\right)\right] \frac{\delta(0)}{(2\pi)^2\sqrt{X}}\frac{\mu^\epsilon}{\epsilon}+\cdots\right\}\nonumber~, \eea where $C_6^{(n)}$, with $n=0,1,2$, are given in the Appendix. \subsubsection{Total Lorentz-violating contributions} The following integral appears in the Appendix \be\label{intd} \mathcal{I}\left(\Delta\right)=\int \frac{d^d k}{(2\pi)^d} \frac{1}{\vec{k}^2\sqrt{\vec{k}^2+ \Delta }}~, \ee which, with dimensional regularisation ($d=3-\epsilon$), gives \be\label{Isol} \mathcal{I}(\Delta) = \frac{1}{2 \pi^2} \frac{\mu^\epsilon}{\epsilon}+\mathcal{O}(\epsilon)~. \ee In the limit $\epsilon\to0$, we then obtain \be \mu \frac{\partial}{\partial \mu} \mathcal{I}(\Delta)= \frac{1}{2 \pi^2}~, \ee such that \be\label{Isol2} \mathcal{I}(\Delta) = \frac{1}{2 \pi^2} \ln\left(\frac{\mu}{\mu_0}\right)~, \ee where $\mu_0$ is a mass scale. To choose $\mu_0$ appropriately, we calculate $\mathcal{I}(\Delta)$ using a cut off $\Lambda$ in dimension $d=3$ and find: \be\label{Ico} \mathcal{I}(\Delta) = \frac{1}{2\pi^2}\ln\left(\frac{\Lambda+\sqrt{\Lambda^2 +\Delta}}{\sqrt{\Delta}}\right)~. \ee From the form of $\Delta$ in~(\ref{trH}) and (\ref{3dint}), we note that $\Delta$ can be written as $c\, M_{HL}^2$, where $c$ represents a dimensionless constant of order 1 for each of the different cases. Then, expanding~(\ref{Ico}) for $\Lambda\gg M_{HL}$, we find \be \mathcal{I}(\Delta) = \frac{1}{2\pi^2} \ln\left(\frac{\Lambda}{M_{HL}}\right) ~, \ee where finite terms were omitted in the expression above. Comparing eq.(\ref{Isol2}) with the result above, we naturally choose $\mu_0 = M_{HL}$ and $\mu=\Lambda$. Considering now the results obtained above, we write the total Lorentz-violating contributions for both scalar and fermion fields.
Using the relations (\ref{dpf}) and assuming~(\ref{Isol2}) with $\mu_0=M_{HL}$, the total corrections for scalar and fermion fields are, respectively, \bea \label{sfresII} &&\left[-\frac{1}{2(2\pi)^2}\left(\frac{4}{3}\frac{1}{\sqrt{|s_1|}} - \frac{1}{2}\frac{1}{\sqrt{X c_6^{(0)}}} -\frac{2}{\sqrt{X c_6^{(1)}}}-\frac{1}{\sqrt{X c_6^{(2)}}} \right) \frac{M_{HL}^2}{M_P^2} \ln\left(\frac{M_{HL}^2}{\Lambda^2 }\right)\right] \partial_k \phi \partial_k \phi^\star~,\nonumber\\ &&\left[-\frac{1}{2(2\pi)^2}\left(\frac{1}{2}\frac{1}{\sqrt{|s_1|}} - \frac{3}{8}\frac{1}{\sqrt{X c_6^{(0)}}} -\frac{1}{\sqrt{X c_6^{(1)}}} \right) \frac{M_{HL}^2}{M_P^2} \ln\left(\frac{M_{HL}^2}{\Lambda^2 }\right)\right]\bar{\psi} i \gamma_k \partial_k \psi~. \eea \subsection{Non-minimal coupling} The models studied here involve minimally coupled matter, and one could ask what effect a non-minimal coupling would have on the effective dispersion relations. We show here that such couplings would not affect our results. Given that we impose all the terms in the action to be at most of mass dimension 6, and that the scalar field is dimensionless for $z=3$ in $d=3$ space dimensions, its non-minimal coupling to gravity would be for example of the form \be \left[\xi_1 R^{(3)}+\xi_2(R^{(3)})^2+\xi_3(a_i a^i)\right](\phi\phi^\star)~, \ee where $\xi_i$ have the appropriate mass dimension for the terms inside the square bracket to be of dimension 6. Similarly, for the fermion of mass dimension 3/2, we could have \be \left[\zeta_1 R^{(3)}+\zeta_2(a_i a^i)\right](\ol\psi\psi)~. \ee In both cases, the space derivatives appearing in $R^{(3)}$ and $a_i$ generate space derivatives for matter fields, after integration by parts, and thus could naively contribute to the effective dispersion relation. But the latter is obtained with matter plane wave configurations, such that $\phi\phi^\star$ and $\ol\psi\psi$ are constants, and these configurations would therefore not give additional contributions to the dispersion relation. A more general field configuration could of course be chosen, but it would also contribute to all the other terms calculated in this article, in such a way that the final dispersion relation would not change: the functional for matter fields obtained after integrating gravitons is unique, and the corresponding dispersion relation is obtained by plugging a plane wave solution. This conclusion is valid at one loop though: non-minimal coupling would radiatively generate terms which would modify the matter kinetic terms, with an impact on the dispersion relation at two loops and higher orders. \subsection{Comments on regularisation} All the integrals in this article have been calculated by first integrating over frequency and then over momentum. In most of the integrals, the integration over frequency is finite, and has been performed without any regularisation. We are then left with a 3-dimensional integration over momentum, which is regularised either with a cut off or with dimensional regularisation. But there is also the situation where the integration over frequency is divergent, and we discuss here a few details for both models.\\ \nin $\bullet$ Model I\\ As discussed previously, this model involves quadratic divergences and therefore cannot be treated with dimensional regularisation, which sees only logarithmic divergences.
A typical example where the integration over frequencies is finite is \be \int \frac{d^4k}{(2\pi)^4} \frac{1}{\omega^2+z^2\vec{k}^2} =\frac{1}{4\pi^3} \int_0^\Lambda dk\int_{-\infty}^{+\infty} d\omega \frac{\vec{k}^2}{\omega^2+z^2\vec{k}^2} =\frac{\Lambda^2}{8\pi^2z}~, \ee where $z$ is a positive dimensionless constant. \\ But we also have integrals for which the integration over frequencies is divergent, and these are calculated with the same cut off as the one used for momentum integration in the other integrals. Such an integral is of the form \bea \int \frac{d^4k}{(2\pi)^4} \frac{\omega^2}{\vec{k}^2(\omega^2+z^2\vec{k}^2)} &=&\frac{1}{4\pi^3} \int_0^{\infty} dk\int_{-\Lambda}^{+\Lambda} d\omega \frac{\omega^2}{\omega^2+z^2\vec{k}^2}\\ &=&\frac{1}{4\pi^3}\int_0^{\infty} dk\int_{-\Lambda}^{+\Lambda} d\omega \left[1-\frac{z^2\vec{k}^2}{\omega^2+z^2\vec{k}^2}\right]\nn &=&\frac{1}{2\pi^3}\int_0^{\infty} dk\left[\Lambda- zk \arctan\left(\frac{\Lambda}{z k}\right)\right]\nn &=&\frac{\Lambda^2}{8\pi^2z}\nonumber~. \eea We note that the same result is obtained if one first performs the finite integration over momentum, and then uses the cut off for frequency, whereas the commutativity of the order of integration is not obvious when the integrals are divergent.\\ \nin $\bullet$ Model II\\ The second model contains only logarithmic divergences, but artificial quartic divergences appear as a result of the graviton decomposition (\ref{hdecomp}) in terms of auxiliary fields. Since $[\omega]=3$, we regularise the integral over frequency with the cut off $\Lambda^3$: \bea \int \frac{d^4k}{(2\pi)^4} \frac{\omega^2}{\vec{k}^2(\omega^2+z^6\vec{k}^6)} &=& \frac{1}{4\pi^3} \int_0^{\infty} dk\int_{-\Lambda^3}^{+\Lambda^3} d\omega \left[1-\frac{z^6\vec{k}^6}{\omega^2+z^6\vec{k}^6}\right]\\ &=&\frac{1}{2\pi^3}\int_0^{\infty} dk \left[\Lambda^3-z^3k^3 \arctan\left(\frac{\Lambda^3}{z^3k^3}\right) \right]\nn &=&\frac{\Lambda^{4}}{8\pi^2z}\nonumber~. \eea As explained at the beginning of section 3.2, this divergence is artificial and is thus omitted. The other integrals for this model involve a finite integration over frequency, and a logarithmically divergent momentum integral. The latter is regularised with dimensional regularisation, but the same coefficient of the logarithmic divergence would be found with a cut off, as explained in section 3.2.3. Therefore the Lorentz-symmetry violating terms generated in this model are unique (in the present gauge) and independent of the regularisation. \section{Analysis} The Lorentz-symmetry violating effect is measured by the product $v^2$ of group and phase velocities, which should be equal to 1 in the Lorentz-symmetric case. As noted in \cite{Pospelov}, the measurable deviation from the Lorentz-symmetric case is the difference $|v_s^2-v_f^2|$, which should be typically smaller than $10^{-20}$, according to the current upper bounds on Lorentz-symmetry violation \cite{bounds}. We note that the Authors of \cite{Pospelov} consider dynamical matter fields, instead of classical ones, which allows the cancellation of the above-mentioned quartic divergence, as expected for a gauge artifact. Indeed, the graviton loop giving rise to this divergence is canceled by the equivalent matter loop, the difference of signs coming from the fact that the gravity spin-0 field component leading to the quartic divergence is auxiliary. In our case though, this cancellation does not occur because the matter loop is not present.
This shows the non-trivial fact that a classical matter background can be consistently taken into account only if one removes this specific gauge artifact by hand. Other gauge artifacts are removed by taking the difference $|v_s^2-v_f^2|$, as we do below. The calculation of the traces obtained in the previous sections leads to the following kinetic terms \bea &&-i\bar{\psi} \gamma^0 \partial_t\psi+v_f^{(m)} i\bar{\psi}\gamma_k \partial_k\psi\\ &&-\partial_t\phi\partial_t\phi^\star+(v_s^{(m)})^2\partial_k\phi\partial_k\phi^\star~,\nonumber \eea where $v_f^{(m)} = 1+\delta v_f^{(m)}$ and $(v_s^{(m)})^2=1+(\delta v_s^{(m)})^2$, with $m =I, II$ for the first model (\ref{SG}) and the second model (\ref{SHL}) respectively. \subsection{Model I} For the first model, $(\delta v_s^{I})^2$ and $\delta v_f^{I}$ are given by eqs.(\ref{sresI}) and (\ref{fresI}), respectively, {\it i.e.} \bea (\delta v_s^I)^2 &\simeq& \frac{1}{2(2\pi)^2}\left[\frac{4}{3} +\frac{Y}{2}\left(\frac{X^2}{6}-\frac{\alpha^2+4\alpha+2}{\alpha^2}\right)\right]\frac{\Lambda^2}{M_P^2}~,\\ (\delta v_f^I)^2&\simeq& 2 \delta v_f^I \simeq \frac{1}{2(2\pi)^2}\left[1-\frac{Y}{4}\left(\frac{3\alpha+8}{\alpha}-\frac{X^2}{2}\right)\right]\frac{\Lambda^2}{M_P^2}~. \eea Subtracting the fermion contribution from the scalar one, we get the measurable departure from Lorentz symmetry \bea\label{finalresI} &&|(\delta v_s^I)^2-(\delta v_f^I)^2| = \frac{1}{2(2\pi)^{2}}\left|F(\lambda,\alpha)\right|\frac{\Lambda^2}{ M_P^2}~, ~~~\mbox{where}\\ &&F(\lambda,\alpha) = -\frac{1}{3}+\frac{1}{4}\sqrt{\frac{\alpha(\lambda-1)}{(2-\alpha)(3\lambda-1)}}\left[\frac{(3\lambda-1)^2}{6(\lambda-1)^2} -\frac{\alpha^2-4}{\alpha^2}\right].\nonumber \eea If one does not impose any specific values for $\lambda$ and $\alpha$, $F(\lambda,\alpha)$ is generically of order $1$, such that the current experimental bounds on Lorentz violation are satisfied with the difference (\ref{finalresI}) if the graviton loop momentum is cut off by $\Lambda\lesssim10^{10}$ GeV. Nevertheless, one can always find specific values for $\lambda$ and $\alpha$ such that the difference (\ref{finalresI}) vanishes. An example is \be F(\lambda_0,\alpha_0)=0 ~~~\mbox{for}~~~\lambda_0 \simeq 0.332~~~ \mbox{and}~~~ \alpha_0 \simeq 1.995~, \ee which are both in the range of allowed values for these parameters. This set of values has been chosen here because it is close to the boundary values $\lambda_b=1/3$ and $\alpha_b=2$, which suggests that quantum corrections point towards this specific point in parameter space. It is interesting to note that $\lambda_b=1/3$ is found as an IR fixed point for the Wilsonian renormalisation flows studied in \cite{Dodorico}, and it would be interesting to relate this result to ours. \subsection{Model II} For the second model, from the results in~(\ref{sfresII}) we can write $(\delta v_s^{II})^2$ and $\delta v_f^{II}$ as \bea (\delta v_s^{II})^2 &\simeq& -\frac{1}{2(2\pi)^2}\left(\frac{4}{3}\frac{1}{\sqrt{|s_1|}} - \frac{1}{2}\frac{1}{\sqrt{X c_6^{(0)}}} -\frac{2}{\sqrt{X c_6^{(1)}}}-\frac{1}{\sqrt{X c_6^{(2)}}} \right) \frac{M_{HL}^2}{M_P^2} \ln\left(\frac{M_{HL}^2}{\Lambda^2 }\right)~,\nonumber\\ (\delta v_f^{II})^2 &\simeq& 2\delta v_f^{II} \simeq -\frac{1}{2(2\pi)^2}\left(\frac{1}{\sqrt{|s_1|}} - \frac{3}{4}\frac{1}{\sqrt{X c_6^{(0)}}} -\frac{2}{\sqrt{X c_6^{(1)}}} \right) \frac{M_{HL}^2}{M_P^2} \ln\left(\frac{M_{HL}^2}{\Lambda^2 }\right)~.
\eea Consequently, the physical quantity in which we are interested, {\it i.e.} the difference between these two contributions, is \bea\label{finalresII} |(\delta v_s^{II})^2 - (\delta v_f^{II})^2| \simeq\left|\frac{1}{2(2\pi)^2}\left(\frac{1}{3}\frac{1}{\sqrt{|s_1|}} + \frac{1}{4}\frac{1}{\sqrt{X c_6^{(0)}}} -\frac{1}{\sqrt{X c_6^{(2)}}} \right) \frac{M_{HL}^2}{M_P^2} \ln\left(\frac{M_{HL}^2}{\Lambda^2 }\right)\right|~. \eea If one assumes that $\Lambda \sim M_P$ and $s_1$, $c_6^{(n)}$ are of order 1, the bounds on Lorentz violation~\cite{bounds} are then satisfied for $M_{HL}\lesssim10^{10}$ GeV. Note that, by construction of HL gravity, $M_{HL}$ should also be large enough for higher-order space derivatives to be small compared to relativistic terms in the IR. Quantum corrections imply an {\it upper} bound on $M_{HL}$ because we impose that the UV regime, above $M_{HL}$, dominates the loop integrals ``early'' enough for quantum corrections not to be too large. However, similarly to what we find with the first model, the Lorentz-violating correction in~(\ref{finalresII}) can also be canceled if the coupling constants are chosen accordingly. For instance, one of the many ways to achieve this cancellation is to set $s_1$, $c_6^{(n)}$ to 1 and take $\lambda\approx 1.969$, which is in the allowed regime for this parameter. \subsection{Conclusion} Our results lead to the following main points: \begin{itemize} \item If one wishes to conclude with generic values for the different parameters, then both models lead to the same order of magnitude $10^{10}$ GeV for the typical scale above which the predicted Lorentz symmetry violation is too large (see the estimate below). Therefore, although the second model has an improved behaviour in terms of UV divergences, the IR phenomenology is not really improved compared to the model without higher-order space derivatives; \item If one is willing to fine-tune the different parameters, then there is always a range of values for these parameters, such that the effective maximum speed seen by particles is consistent with Special Relativity. Once again, this is valid for both models, and this feature is not a consequence of introducing higher-order space derivatives. \end{itemize} Finally, the typical scale $10^{10}$ GeV is consistent with other modified gravity models, as in \cite{Pospelov}. The same characteristic scale is also found in \cite{AB2}, where non-relativistic corrections to matter kinetic terms are calculated in the framework of the Covariant HL gravity \cite{covariant}. We note that this scale also corresponds to the Higgs potential instability \cite{Higgsinst}, which could be avoided by taking into account curvature effects in the calculation of the Higgs potential \cite{rajantie}. It would therefore be interesting to look for a stabilising mechanism in the framework of non-relativistic gravity models. A next step consists in deriving higher-order space derivatives for the fermion kinetic term, from a 4-dimensional diffeomorphism breaking gravity model, in order to generate fermion mass and flavour oscillations dynamically, as was shown in \cite{ALM}.
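The estimate announced in the conclusion is the following rough sketch; it assumes $|F(\lambda,\alpha)|\sim 1$ and $M_P=(16\pi G_N)^{-1/2}\simeq 1.7\times 10^{18}$ GeV, and is only indicative, since $\mathcal{O}(1)$ factors in the couplings shift the result. Requiring $|(\delta v_s^I)^2-(\delta v_f^I)^2|\lesssim 10^{-20}$ in eq.(\ref{finalresI}) gives \be \frac{\Lambda^2}{8\pi^2 M_P^2}\lesssim 10^{-20}~~~\Longrightarrow~~~ \Lambda\lesssim \sqrt{8\pi^2\times 10^{-20}}~M_P\simeq 1.5\times 10^{9}~\mbox{GeV}~, \ee consistent with the order of magnitude quoted above, given these $\mathcal{O}(1)$ uncertainties. The same exercise applied to eq.(\ref{finalresII}) with $\Lambda\sim M_P$ yields the same kind of bound on $M_{HL}$, up to the logarithmic factor.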
\section*{Appendix: loop calculations} \subsection*{Model I} \nin{\bf Spin-2 component} Gathering all the terms depending on $H_{ij}$ in the actions~({\ref{expm}}) and~({\ref{expg}}), we obtain \bea \tilde S_I^{(2)}(H_{ij}) = \int dt d^3 x~ \frac{1}{2}H_{ij}\left[ \frac{M_P^2}{2} (\partial^2-\partial_t^2) -\frac{2}{3}(\vec p^2 \phi_0^2) - \frac{1}{4}[\bar{\psi}_0(\vec{\gamma}\cdot\vec q)\psi_0]\right]H_{ij}~, \eea which can be rewritten in terms of the Fourier transform $\tilde H_{ij}(k)$ of $H_{ij}(x)$ after a Wick rotation ($t\to it$, $\omega\to-i\omega$) as \bea &&\int \mathcal{D}H_{ij} \exp\left\{i \tilde S_{I}^{(2)}(H_{ij})\right\}\to\int \mathcal{D} \tilde{H}_{ij}\exp\left\{-\int \frac{d^4 k_1d^4 k_2}{(2\pi)^8} ~\frac{1}{2}\tilde{H}_{ij}(k_2) \left[\mathcal{A}^{(I)}_{\tilde H_{ij}} \right] \tilde{H}_{ij}(k_1)\right\}, \nonumber\\ && \mbox{where}~~ \mathcal{A}^{(I)}_{\tilde H_{ij}} = \left[ \frac{M_P^2}{2} k_1^2 +\frac{2}{3}(\vec p^2 \phi_0^2) + \frac{1}{4}[\bar{\psi}_0(\vec{\gamma}\cdot\vec q)\psi_0]\right]\delta(k_1+k_2)\nonumber \eea with $k_1^2=\omega_1^2+\vec{k}_1^2$ in Euclidean space. Then, considering the two components of $\tilde{H}_{ij}$, we can perform the functional integration to find \bea\label{resHI} &&\left\{\det \left[\frac{1}{(2\pi)^8} \left(\frac{M_P^2}{2} k_1^2 +\frac{2}{3}(\vec p^2 \phi_0^2)+ \frac{1}{4}[\bar{\psi}_0(\vec{\gamma}\cdot\vec q)\psi_0] \right)\delta(k_1+k_2) \right]\right\}^{-1} \\ &=& \exp\left\{-\mbox{Tr} \ln\left[\frac{1}{(2\pi)^8}\left( \frac{M_P^2 k_1^2}{2} +\frac{2}{3}(\vec p^2 \phi_0^2) + \frac{1}{4}[\bar{\psi}_0(\vec{\gamma}\cdot\vec q)\psi_0]\right) \right]\delta(k_1+k_2)\right\}\nonumber\\ &=& \exp\left\{-\frac{1}{M_P^2}\left[\frac{4}{3}(\vec p^2 \phi_0^2) + \frac{1}{2}[\bar{\psi}_0(\vec{\gamma}\cdot\vec q)\psi_0]\right] \mbox{Tr}\left(\frac{\delta(k_1+k_2)}{k_1^2}\right)+\cdots\right\}~,\nonumber \eea where dots represent field-independent terms or higher orders in $(\vec p^2 \phi_0^2)$ and $[\bar{\psi}_0(\vec{\gamma}\cdot\vec q)\psi_0]$. The trace is calculated using a momentum cut off $\Lambda$ and leads to \be\label{trace} \mbox{Tr}\left(\frac{\delta(k_1+k_2)}{k_1^2}\right) = \int \frac{d^4 k_1d^4 k_2}{(2\pi)^8}\left(\frac{\delta(k_1+k_2)}{k_1^2}\right)\delta(k_1+k_2)= \frac{\delta(0)}{2(2\pi)^2} \Lambda^2~, \ee where $\delta(0)$ is the space-time volume. \vspace{1cm} \nin{\bf Spin-0 component} Considering the terms which depend on the Fourier transform $\tilde h$ in the action (\ref{SmI}) and performing a Wick rotation, we find \bea \int \mathcal{D}h \exp\left\{i S_{I}^{(2)}(h)\right\}\to\int \mathcal{D} \tilde{h}\exp\left\{-\int \frac{d^4 k_1}{(2\pi)^4}\frac{d^4 k_2}{(2\pi)^4} ~\frac{1}{2}\tilde{h}(k_2) \left[\mathcal{A}^{(I)}_{\tilde h}\right]\tilde{h}(k_1)\right\}~, \eea where \bea\label{AhI} \mathcal{A}^{(I)}_{\tilde h} &=& \frac{1}{9} \left\{M_P^2\left(X\omega_1^2+\frac{2-\alpha}{\alpha}\vec{k}_1^2\right)+(\vec p^2 \phi_0^2) \left[\frac{X^2}{6}\frac{\omega_1^2}{\vec{k}_1^2}-\frac{\alpha^2+4\alpha+2}{\alpha^2}\right]\right.\nonumber\\ &&+\left.[\bar{\psi}_0(\vec{\gamma}\cdot\vec q)\psi_0]\left[\frac{X^2}{8}\frac{\omega_1^2}{\vec{k}_1^2}-\frac{3\alpha+8}{4\alpha}\right]\right\}\delta(k_1+k_2)~. 
\eea The evaluation of the functional integral leads to \bea \label{reshI} &&\exp\left\{\frac{1}{M_P^2}\left[\left(\frac{\alpha^2+4\alpha+2}{2\alpha^2}(\vec p^2\phi_0^2)+\frac{3\alpha+8}{8\alpha} [\bar{\psi_0}(\vec{\gamma}\cdot \vec q)\psi_0]\right)\mbox{Tr}\left(\frac{\delta(k_1+k_2)}{X\omega_1^2+\alpha^{-1}(2-\alpha)\vec{k}_1^2}\right)\right.\right.\nonumber\\ &&-\left.\left.\frac{X^2}{4}\left(\frac{(\vec p^2\phi_0^2)}{3}+\frac{[\bar{\psi_0}(\vec{\gamma}\cdot \vec q)\psi_0]}{4}\right) \mbox{Tr}\left(\frac{\delta(k_1+k_2)\omega_1^2}{\vec{k}_1^2 \left(X\omega_1^2+\alpha^{-1}(2-\alpha)\vec{k}_1^2 \right)}\right) \right] \right\}~. \eea Both traces above are evaluated with the common cut off $\Lambda$ for frequencies and wave vectors, and lead to the same result \bea &&\mbox{Tr}\left[\frac{\delta(k_1+k_2)\omega_1^2}{\vec{k}_1^2 \left(X\omega_1^2+\alpha^{-1}(2-\alpha)\vec{k}_1^2 \right)}\right] =\mbox{Tr}\left[\frac{\delta(k_1+k_2)}{ \left(X\omega_1^2+\alpha^{-1}(2-\alpha)\vec{k}_1^2 \right)}\right] = Y\frac{\delta(0)\Lambda^2}{2(2\pi)^2}~, \eea where \be\label{Y} Y = \sqrt{ \frac{\alpha(\lambda-1)}{(2-\alpha)(3\lambda-1)} }~. \ee \subsection*{Model II} \nin{\bf Spin-2 component} Comparing the terms which depend on $H_{ij}$ in the expressions~(\ref{SmI}) and (\ref{SmII}), it is clear that the only difference is the appearance of higher-order space derivatives in the propagator of the tensor field in the npHL case. Therefore the integration over the spin-2 component of the metric in the present model leads to the same result as in~(\ref{resHI}) when $k_1^2=\omega_1^2+\vec{k}^2_1$ (in Euclidean space) is replaced by $\omega_1^2+\vec{k}_1^2-F_1(\vec{k}_1^2)^2-S_1(\vec{k}_1^2)^3$. Assuming that $F_1$ and $S_1$ are negative constants, we then obtain \be\label{resHII} \exp\left\{-\frac{1}{M_P^2}\left[\frac{4}{3}(\vec p^2 \phi_0^2) + \frac{1}{2}[\bar{\psi}_0(\vec{\gamma}\cdot\vec q)\psi_0]\right] \mbox{Tr}\left(\frac{\delta(k_1+k_2)}{\omega_1^2+\vec{k}_1^2+|F_1|(\vec{k}_1^2)^2+|S_1|(\vec{k}_1^2)^3}\right)+\cdots\right\}~. \ee \\ In order to calculate the trace above, we first integrate over the frequencies \bea \label{trH} \mbox{Tr}\left(\frac{\delta(k_1+k_2)}{\omega_1^2+\vec{k}_1^2+|F_1|(\vec{k}_1^2)^2+|S_1|(\vec{k}_1^2)^3}\right) &=& \frac{\delta(0)}{2} \int \frac{d^3k}{(2\pi)^3} \frac{1}{\sqrt{\vec{k}^2+|F_1| \vec{k}^4 +|S_1|\vec{k}^6}}\nn &\approx&\frac{\delta(0)}{2\sqrt{|S_1|}} ~\mathcal{I}\left(\frac{|F_1|}{|S_1|}\right)~, \eea where \be \mathcal{I}\left(\Delta\right)=\int \frac{d^3 k}{(2\pi)^3} \frac{1}{\vec{k}^2\sqrt{\vec{k}^2+ \Delta }}~. \ee Using dimensional regularisation, this integral becomes \be \mathcal{I}\left(\Delta\right) = \frac{2 \pi^{d/2}}{\Gamma(d/2)} \frac{\mu^{3-d}}{(2\pi)^d}\int_0^\infty d k \frac{k^{d-3}}{(k^2+\Delta)^{1/2}}~, \ee which is calculated using \cite{Ryder} \be\label{intGamma} \int_0^\infty d k \frac{k^\beta}{(k^2+\Delta)^\alpha} = \frac{\Gamma(\frac{1+\beta}{2})\Gamma(\alpha-\frac{1+\beta}{2})}{2(\Delta)^{\alpha-\frac{1+\beta}{2}}\Gamma(\alpha)}~. \ee Writing $d=3-\epsilon$, we then find \be \mathcal{I}(\Delta) = \frac{1}{2 \pi^2} \frac{\mu^\epsilon}{\epsilon}+\mathcal{O}(\epsilon)~, \ee which finally leads to \be \mbox{Tr}\left(\frac{\delta(k_1+k_2)}{\omega_1^2+\vec{k}_1^2+|F_1|(\vec{k}_1^2)^2+|S_1|(\vec{k}_1^2)^3}\right) \approx \frac{\delta(0)}{(2\pi)^2\sqrt{|S_1|}}\frac{\mu^\epsilon}{\epsilon}+\cdots \ee where dots represent finite terms.
We note here that the above result does not depend on $|F_1|$, since this coupling constant controls $\vec{k}^4$ in~(\ref{resHII}), which is sub-dominant, and the UV behaviour is dominated by the terms $|S_1|\vec{k}^6$. \vspace{0.5cm} \nin{\bf Spin-0 component} Gathering all terms which depend on $h$ in the action (\ref{SmII}), we write the equivalent action in terms of the Fourier transform $\tilde{h}$ of $h$ and, after performing a Wick rotation, we obtain \bea\label{hwr} \int \mathcal{D}h \exp\left\{i S_{II}^{(2)}(h)\right\} = \int \mathcal{D} \tilde{h}\exp\left\{-\int \frac{d^4 k_1}{(2\pi)^4}\frac{d^4 k_2}{(2\pi)^4} ~\frac{1}{2}\tilde{h}(k_2) \left[\mathcal{A}^{(II)}_{\tilde h}\right]\tilde{h}(k_1)\right\}~, \eea where \bea\label{AhII} \mathcal{A}^{(II)}_{\tilde h} &=& \frac{1}{9} \left\{M_P^2\left[ \mathcal{P}^{-1}_h(k_1)\right]-(\vec{p}^2\phi_0^2) \left(1+4\left(\frac{\mathcal{D}_1(\vec{k}_1^2)}{\mathcal{D}_2(\vec{k}_1^2)}\right)+2\left(\frac{\mathcal{D}_1(\vec{k}_1^2)}{\mathcal{D}_2(\vec{k}_1^2)}\right)^2 -\frac{X^2}{6}\frac{\omega_1^2}{\vec{k}_1^2}\right)\right.\nonumber\\ &&-\left.[\bar{\psi}_0(\vec{\gamma}\cdot\vec q)\psi_0]\left(\frac{3}{4}+2\left(\frac{\mathcal{D}_1(\vec{k}_1)}{\mathcal{D}_2(\vec{k}_1)}\right) -\frac{X^2}{8}\frac{\omega^2_1}{\vec{k}_1^2}\right)\right\}\delta(k_1+k_2)~, \eea and \bea \mathcal{P}^{-1}_h(k_1) &=& X \omega_1^2 +\frac{ \vec{k}_1^2\left[1+(3F_1+8F_2)\vec{k}_1^2+(3S_1+8S_2)(\vec{k}_1^2)^2\right]\mathcal{D}_2(\vec{k}_1) +2\left[\mathcal{D}_1(\vec{k}_1)\right]^2}{-\mathcal{D}_2(\vec{k}_1)}\nn \mathcal{D}_1(\vec{k}_1) &=& -[\vec{k}_1^2- F_3 (\vec{k}_1^2)^2+S_3 (\vec{k}_1^2)^3]\nn \mathcal{D}_2(\vec{k}_1) &=& -[\alpha\vec{k}_1^2- F_4 (\vec{k}_1^2)^2+S_4 (\vec{k}_1^2)^3]~. \eea As expected, the IR limit ({\it i.e.} neglecting terms of the form $(\vec{k}^2)^n$ for $n>1$) of the expression (\ref{AhII}) leads to the expression (\ref{AhI}), and the dispersion relation for the scalar graviton is given by the identity (\ref{hdrIR}). The functional integration over $h$ gives \bea &&\exp\left\{\frac{1}{2M_P^2}\left[(\vec p^2 \phi_0^2)+\frac{3}{4}[\bar{\psi}_0(\vec{\gamma}\cdot\vec q)\psi_0]\right] \mbox{Tr}\left(\frac{\delta(k_1+k_2)}{\mathcal{P}^{-1}_h(k_1)}\right)\right.\\ &&\left.+\frac{2}{M_P^2}\left[(\vec p^2 \phi_0^2)+\frac{1}{2}[\bar{\psi}_0(\vec{\gamma}\cdot\vec q)\psi_0]\right] \mbox{Tr}\left(\frac{\delta(k_1+k_2)\mathcal{D}_1(\vec{k}_1)}{\mathcal{P}^{-1}_h(k_1) \mathcal{D}_2(\vec{k}_1)}\right)+\frac{(\vec p^2 \phi_0^2)}{M_P^2} \mbox{Tr}\left(\frac{\delta(k_1+k_2)(\mathcal{D}_1(\vec{k}_1))^2}{\mathcal{P}^{-1}_h(k_1) (\mathcal{D}_2(\vec{k}_1))^2} \right)\right.\nonumber\\ &&\left. 
-\frac{X^2}{12 M_P^2}\left[(\vec p^2 \phi_0^2) +\frac{ 3}{4}[\bar{\psi}_0(\vec{\gamma}\cdot\vec q)\psi_0]\right] \mbox{Tr}\left(\frac{\delta(k_1+k_2)\omega_1^2}{\mathcal{P}^{-1}_h(k_1) \vec{k_1}^2}\right)+\cdots\right\}~.\nonumber \eea The first three traces above can be written in the following generic form \be\label{logsc} \mbox{Tr}\left[\frac{\delta(k_1+k_2)}{\mathcal{P}^{-1}_h(k_1)}\left(\frac{\mathcal{D}_1(\vec{k}_1)}{ \mathcal{D}_2(\vec{k}_1)}\right)^n\right],~~~\mbox{for}~n=0,1,2~, \ee which, after integrating over the frequencies, leads to \bea\label{trn} &&\mbox{Tr}\left[\frac{\delta(k_1+k_2)}{\mathcal{P}^{-1}_h(k_1)}\left(\frac{\mathcal{D}_1(\vec{k}_1)}{ \mathcal{D}_2(\vec{k}_1)}\right)^n\right] =\frac{\delta(0)}{2\sqrt{X}}\int \frac{d^3 k}{(2\pi)^3} \left[\mathcal{G}(\vec{k})\left(\frac{\mathcal{D}_2(\vec{k})}{\mathcal{D}_1(\vec{k})}\right)^{2n}\right]^{-\frac{1}{2}}~, \eea with \be \label{G} \mathcal{G}(\vec{k})= (\mathcal{P}_h^{-1} -X \omega^2)=\frac{ \vec{k}^2\left[1+(3F_1+8F_2)\vec{k}^2+(3S_1+8S_2)(\vec{k}^2)^2\right]\mathcal{D}_2(\vec{k}) +2\left[\mathcal{D}_1(\vec{k})\right]^2}{-\mathcal{D}_2(\vec{k})}~. \ee Expanding the terms inside the square brackets in~(\ref{trn}), keeping only terms of at least quartic order in the momentum, as we did for the tensor field, the right-hand side of eq.(\ref{trn}) reduces to \be\label{3dint} \frac{\delta(0)}{2\sqrt{XC_6^{(n)}}}~ \mathcal{I}\left(\frac{C_4^{(n)}}{C^{(n)}_6}\right), \ee with the integral $\mathcal{I}$ given by eq.(\ref{intd}) and, for $n=0,1,2$, \bea C_4^{(n)}&=&(c_4^{(n)}/M_{HL}^{2})\\ C_6^{(n)}&=&(c_6^{(n)}/M_{HL}^{4})\nn c_4^{(0)} &=& -(2 f_1+8f_2)- \frac{2s_3( f_4 s_3-2 f_3 s_4)}{s_4^2}\nn c_4^{(1)} &=& 2f_4+\frac{2s_4(2s_1+8s_2)(f_4s_3-f_3 s_4)-s_3s_4^2(2f_1+8f_2)}{s_3^3} \nn c_4^{(2)} &=& \frac{s_4^2\left[-s_3 s_4^2(2f_1+8f_2) + 4s_4(f_4s_3 -f_3 s_4)(2s_1+8s_2)+2s_3^2(3f_4s_3-2f_3s_4)\right]}{s_3^5}\nn c_6^{(0)}&=& -(2 s_1+ 8s_2)-\frac{2 s_3^2}{s_4} \nn c_6^{(1)} &=& -2s_4 -\frac{ (2 s_1+8s_2) s_4^2}{s_3^2}\nn c_6^{(2)} &=& -2\frac{s_4^3}{s_3^2}-\frac{s_4^4(2s_1+8s_2)}{s_3^4}~.\nonumber \eea Solving this integral with dimensional regularisation using the result~(\ref{Isol}), we obtain \be\label{convint} \mbox{Tr}\left[\frac{\delta(k_1+k_2)}{\mathcal{P}^{-1}_h(k_1)}\left(\frac{\mathcal{D}_1(\vec{k}_1)}{ \mathcal{D}_2(\vec{k}_1)}\right)^n\right] = \frac{\delta(0)}{(2\pi)^2\sqrt{X C_6^{(n)}}}\frac{\mu^\epsilon}{\epsilon}+\cdots, \ee where dots represent finite terms. Finally, we need to calculate the following trace \bea \label{trwk} \mbox{Tr}\left(\frac{\delta(k_1+k_2)\omega_1^2}{\mathcal{P}^{-1}_h(k_1) \vec{k_1}^2}\right)&=&\delta(0)\int \frac{d^4k}{(2\pi)^4}\frac{\omega^2}{\mathcal{P}^{-1}_h(k)\vec{k}^2}\\ &=&\frac{\delta(0)}{ X}\left\{\int \frac{d^4k}{(2\pi)^4}\frac{1}{\vec{k}^2}-\int \frac{d^4k}{(2\pi)^4}\frac{\mathcal{G}(\vec{k})/X}{[\omega^2+\mathcal{G}(\vec{k})/X]\vec{k}^2}\right\}~,\nonumber \eea with $\mathcal{G}$ defined in~(\ref{G}). The first integral in the last line of~(\ref{trwk}) vanishes with dimensional regularisation. In addition, for the second integral, we integrate over the frequencies to get \be \mbox{Tr}\left(\frac{\delta(k_1+k_2)\omega_1^2}{\mathcal{P}^{-1}_h(k_1) \vec{k_1}^2}\right)=\frac{\delta(0)}{2 X^{3/2}}\int \frac{d^3k}{(2\pi)^3}\frac{\sqrt{\mathcal{G}(\vec{k})}}{\vec{k}^2}~.
\ee We then expand $\mathcal{G}(\vec{k})$, for which only the dominant term potentially leads to a divergence \be \mbox{Tr}\left(\frac{\delta(k_1+k_2)\omega_1^2}{\mathcal{P}^{-1}_h(k_1) \vec{k_1}^2}\right)=\frac{\delta(0)\sqrt{C_6^{(0)}}}{2 X^{3/2}}\int \frac{d^3k}{(2\pi)^3} |\vec{k}| ~ + \cdots \ee with dots representing finite terms. The integral vanishes with dimensional regularisation, such that the result is finite. \vspace{1cm} \nin{\bf Acknowledgements} The work of J. L. is supported by the National Council for Scientific and Technological Development (CNPq - Brazil).
\section{Introduction}\label{sec:Introduction} For a cellular network, uplink transmissions define the coverage area. This is because the transmission power in the uplink is limited to $23$ dBm at the user equipment (UE) owing to hardware limitations (such as battery size) and regulatory constraints, as opposed to $43$ dBm at the base station in the downlink \cite{101}. This limited transmission power in the uplink must therefore be used carefully to enhance cell coverage without increasing the CAPEX/OPEX costs of deploying more cell sites. Therefore, the uplink design of a cellular standard is crucial in enabling uplink transmissions at high powers without saturating the power amplifier, which otherwise results in unwanted non-linear distortions. To address the above issues and to enhance the cell coverage of the newly designed $3$GPP $5$G NR when compared to $4$G LTE, a new modulation scheme, namely, $\pi/2$-BPSK was introduced for the uplink data channel (physical uplink shared channel - PUSCH) and control channel (physical uplink control channel - PUCCH) transmission. This waveform, when combined with an appropriate spectrum shaping, enables low peak-to-average power ratio (PAPR) transmissions without compromising the error rate performance \cite{211}-\cite{kk1}. Specifically, the PAPR of this modulation scheme with DFT-spread-OFDM waveform and spectrum shaping is smaller than $2$ dB. Moreover, it is shown in \cite{kk1}, \cite{kk2} that the power amplifier can be driven to saturation (the adjacent channel leakage ratio (ACLR) and error vector magnitude (EVM) will still be within the required specification limits) and yet the error rate performance of this modulation scheme is not compromised. Hence, this modulation scheme plays a crucial role in significantly enhancing the cell coverage for $3$GPP $5$G NR-based cellular networks. The demodulation reference signals (DMRS) employed in Rel-$15$ for coherent demodulation of the PUSCH and PUCCH are generated using Zadoff-Chu (ZC) sequences or QPSK-based Computer Generated Sequences (CGS) as specified in Section $5$.$2$.$2$ in \cite{211} and Section $6$.$2$.$2$ in \cite{213}. The PAPR of these sequences is around $3.5$-$4$ dB when spectrum shaping is employed, which is higher than that of the spectrum-shaped data transmissions \cite{QC}\hspace{-0.5pt}-\cite{kk4}. Therefore, even though the data transmissions have low PAPR and potentially allow for larger coverage, the DMRS design still limits the cell size due to its high PAPR in Rel-$15$ $3$GPP $5$G NR. Note that the performance of the PUSCH and PUCCH channels directly depends on the quality of the channel estimates obtained using these DMRS sequences. Hence, when the DMRS sequences are transmitted at lower power to avoid PA saturation, the coverage of the PUSCH and PUCCH channels is automatically limited. For this reason, $3$GPP introduced a new study item in Rel-$16$ to design new reference signal sequences with lower PAPR \cite{SI}. The sequences in \cite{3g}-\cite{Qual} were agreed to be used as low-PAPR reference sequences. In this paper, we will use them as the reference signal sequences for the proposed reference signal transceiver design. The Rel-$15$ specifications for $3$GPP $5$G NR also support multiple stream transmissions using the DFT-spread-OFDM waveform. In other words, a single user can be scheduled to transmit multiple streams or multiple users can be configured simultaneously to transmit multiple streams depending on the channel conditions.
In order to support these multiple-stream (also known as layers in $3$GPP terminology) MIMO transmissions, multiple orthogonal DMRS sequences are necessary, one for each stream. This is achieved by introducing the concept of the baseband antenna port, where one single port is assigned for the demodulation of each stream/layer \cite[Sec 6.3.1.3]{211}. Since the DMRS of each stream must be independently decoded for channel estimation of each stream, these DMRS sequences must be orthogonally separated to avoid any interference. In $3$GPP specifications, the orthogonality across the ports is achieved by frequency division multiplexing (FDM) or code division multiplexing (CDM). Distinct orthogonal DMRS sequences, each corresponding to an antenna port, share the same time-frequency resources in the CDM method as shown in Fig.~\ref{fig:FDM_CDM}, where $r_0$, $r_1$ are two distinct DMRS sequences corresponding to antenna port $0$ and antenna port $1$ respectively. In the FDM method, the same sequence is employed for all the antenna ports but frequency multiplexed as shown in Fig.~1b. It can be seen that in FDM the length of the DMRS on each port will be $\frac{M}{P}$ rather than $M$, where $P$ indicates the number of antenna ports multiplexed in the frequency domain. It is agreed in $3$GPP that Rel-16 NR \cite{3g} supports only two layers via FDM and hence the length of the DMRS on each port will be $\frac{M}{2}$ for a data allocation of length $M$ sub-carriers. We show in Section \ref{sec:ReceiverDesign} that this reduction of the DMRS length to $\frac{M}{2}$ does not reduce the channel estimation quality and that the $M$-length channel estimate vector corresponding to the $M$-length data allocation can be reconstructed perfectly. \begin{figure}[h] \centering \includegraphics[width=0.7\columnwidth,height=5cm]{CDM_FDM-eps-converted-to.pdf} \caption{Port mapping for CDM, FDM method of reference signal multiplexing for MIMO stream transmissions in $3$GPP.} \label{fig:FDM_CDM} \vspace{-10pt} \end{figure} When multiple-stream transmissions are supported, the current $3$GPP Rel-$15$ specifications do not clearly mention the spectrum shaping implementation for the $\frac{\pi}{2}$-BPSK data and DMRS sequences. For instance, when multiple users each with one layer are configured to transmit simultaneously, a $\frac{M}{P}$-length DMRS sequence corresponding to each user's $M$-length data will be transmitted on one of the $P$ ports. In such a case, the spectrum shaping has to be aligned between the data and DMRS transmissions so that the channel can be estimated correctly; otherwise, the receiver implementation may be imperfect, causing a loss of the exchanged data. In addition to this, if proper design choices are not made, then it is also possible that the same DMRS sequence, when mapped to two different baseband antenna ports (for example, as shown in Fig.~1b), will behave differently with respect to (w.r.t.) PAPR and auto- and/or cross-correlation, which eventually impacts the channel estimation performance (immunity to inter-cell interference) and subsequently the data demodulation. Therefore, in this paper we propose two transceiver architectures which generate low PAPR DMRS waveforms and also result in identical channel estimation performance on all the baseband antenna ports. Specifically, we show that the sequences designed in \cite{3g}-\cite{Qual} to have low PAPR will have the same error rate performance on any stream in the case of multiple-stream transmissions. \textit{Notation:} The following notation is used in this paper.
Bold upper case letters $\mathbf{X}$ denote matrices, bold lower case letters $\mathbf{x}$ denote vectors, non-bold letters represent scalars, and $\mathbf{x}_t,\mathbf{y}_f$ indicate the time domain and frequency domain versions of the vectors $\mathbf{x}$ and $\mathbf{y}$ respectively. $\mathbf{x}^T$ and $\mathbf{X}^{\dagger}$ represent the transpose and Hermitian operations on the vector $\mathbf{x}$ and matrix $\mathbf{X}$ respectively. We use the symbol $\mathbf{x}$ to denote the data symbols and $\mathbf{r}$ to denote reference signal symbols. \section{Transmitter Architecture for $\pi/2$-BPSK DATA and DMRS generation}\label{sec:SignalModel} In this section, we present transmitter designs to generate low PAPR data and DMRS waveforms. We first describe the system model, including the design of the DFT-s-OFDM waveform as per the current 3GPP 5G NR specifications, and then discuss the proposed transmitter designs. \subsection{DFT-s-OFDM Signal Model}\label{subsec:SignalModel} In the current NR specifications \cite{211}, \cite{213}, Discrete Fourier transform-spread orthogonal frequency-division multiplexing (DFT-s-OFDM) \cite{Text1} is used for the uplink transmission, especially in coverage limited scenarios. This waveform is also referred to as the single-carrier FDM waveform (SCFDM) in the literature. {In $3$GPP $5$G NR, QAM modulation symbols with modulation order $(4, 16, 64, 256)$ can be transmitted using DFT-s-OFDM}. When compared to LTE, a new modulation scheme, namely, $\frac{\pi}{2}$-BPSK was introduced in $5$G NR. This is a special constellation-rotated BPSK modulation, such that even-numbered symbols are transmitted as in BPSK and the odd-numbered data symbols are phase rotated by $\frac{\pi}{2}$ as given below - \begin{equation}\label{eq:pi/2 BPSK} \hspace{-5pt}{x_p}(m)=\frac{(1+i)}{\sqrt{2}}e^{i\hspace{0.5mm} (m \bmod 2)\frac{\pi}{2}}x_t(m) ,\hspace{0.1cm} m\in [0,\ldots,M-1], \end{equation} where $i=\sqrt{-1}$ and $M$ is the length of the BPSK sequence $x_t(m)$. Here the sub-script $p$ in $x_p(m)$ indicates a phase rotated sequence and the sub-script $t$ in $x_t(m)$ indicates a time-domain sequence. The $\frac{\pi}{2}$-phase rotation can be equivalently expressed in vector notation as given below \begin{equation}\label{eq:pi/2 BPSK vector} \mathbf{x}_p=\frac{1+i}{\sqrt{2}}\mathbf{P}\mathbf{x}_t \end{equation} where $\mathbf{x}_t$ is an $M$-length BPSK vector and $\mathbf{P}$ is an $M\times M$ diagonal matrix with diagonal entries $p_{mm}=e^{i\hspace{0.5mm}(m \bmod 2)\frac{\pi}{2}}$. The $\frac{\pi}{2}$-BPSK modulation scheme, when transmitted using DFT-s-OFDM, has a low PAPR when compared to higher-order modulation schemes including QPSK, as the zero-crossing transitions are avoided. The PAPR for various modulation schemes is shown in Fig.~\ref{fig:PAPR_mods}, which clearly shows the low PAPR behavior of the $\frac{\pi}{2}$-BPSK modulation scheme. Note that, although the constellation is similar to QPSK, we can only transmit $1$ bit on one $\frac{\pi}{2}$-BPSK modulation symbol.
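As a concrete illustration of the mapping in \eqref{eq:pi/2 BPSK}, the following Python/NumPy sketch modulates a block of bits to $\frac{\pi}{2}$-BPSK symbols; the bit-to-BPSK mapping $b\mapsto 1-2b$ is an illustrative assumption, since the specification-level bit mapping is not restated here.

\begin{verbatim}
import numpy as np

def pi_half_bpsk(bits):
    # Eq. (1): scale by (1+1j)/sqrt(2) and rotate the
    # odd-indexed BPSK symbols by pi/2.
    bits = np.asarray(bits)
    m = np.arange(len(bits))
    x_t = 1.0 - 2.0 * bits       # assumed BPSK map: 0 -> +1, 1 -> -1
    return (1 + 1j)/np.sqrt(2) * np.exp(1j*(m % 2)*np.pi/2) * x_t

symbols = pi_half_bpsk([0, 1, 1, 0, 0, 0, 1, 1])  # 1 bit per symbol
\end{verbatim}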
\begin{figure}[h] \centering \includegraphics[width=0.8\columnwidth]{PAPR_mods-eps-converted-to.pdf} \caption{PAPR of different modulation schemes using a DFT-s-OFDM waveform.} \label{fig:PAPR_mods} \vspace{-20pt} \end{figure} % \begin{figure}[h] \centering \includegraphics[width=0.8\columnwidth]{spectral_fils-eps-converted-to.pdf} \caption{Frequency response of commonly used spectrum shaping filters with $2$-tap and $3$-tap impulse response.} \label{fig:spectrum_shaping} \vspace{-15pt} \end{figure} \subsection{Spectrum shaping}\label{sec:spectrum_shaping} Spectrum shaping is a data-independent PAPR reduction technique which can be performed either in the time domain or in the frequency domain \cite{kk1}, \cite{kk2}. In the case of frequency-domain processing, spectrum shaping can be performed by means of a spectrum-shaping function $\mathbf{w}_f=\mathbf{D}_M\mathbf{w}_t$, where $\mathbf{w}_t$ is the zero-padded time domain impulse response of the $L$-tap spectrum shaping filter, i.e., $\mathbf{w}_t=[w(0),w(1),..w(L-1),\underbrace{0,\ldots,0}_{M-L}]^T$. Commonly used spectrum shaping filters with $2$ and $3$-tap impulse response are shown in Fig.~\ref{fig:spectrum_shaping}. \textbf{Remark on the length of the spectrum shaping filter:} \textit{In a recent study \cite{Korea}, a joint optimization of the rotation angle (other than $\frac{\pi}{2}$) and the spectrum shaping function is considered for further optimization of the PAPR of the BPSK-based DFT-s-OFDM waveforms beyond what is achieved using the filters shown in Fig.~\ref{fig:spectrum_shaping}. The spectrum shaping filter obtained via optimization in \cite{Korea} is of length ranging between $8$-$24$. {To estimate the channel at the receiver, in \cite{Korea} it is assumed that the spectrum shaping filter is perfectly known at the receiver and then the impulse response of the wireless channel is estimated for data demodulation. This violates the $3$GPP design} wherein it is clearly mentioned that the spectrum shaping filter is implementation-specific \cite{101} and therefore this filter is unknown at the receiver. In such cases, the receiver will have to estimate the joint impulse response of the spectrum shaping filter and the wireless channel (this will be explained in detail in Section~\ref{sec:chan_est_length_port}). Note that a worst-case wireless channel impulse response will be of length $\leq 3$ for an allocation of size $12$ subcarriers (i.e., $1$ resource block in $3$GPP terminology) as per $3$GPP channel models \cite{901}. Now, if the spectrum shaping filter is unknown at the receiver, we will need a minimum of $11$-$27$ samples to estimate the joint impulse response as per the design in \cite{Korea}, which forces the data allocation to be a minimum of $2$-$4$ resource blocks (RB). Again, this contradicts the $3$GPP design where the minimum allocation size is $1$ RB. Hence, the length of the spectrum shaping filter has to be less than or equal to $3$ \cite{101}, assuming two CDM groups with $6$ DMRS samples per CDM group in an RB.
Therefore, in this paper we restrict our analysis and simulations to filters with length $\leq 3$.} \begin{figure*} \centering \includegraphics[width=0.8\textwidth]{Data_method1.pdf} \caption{Transmitter architecture for data waveform generation using method-$1$.} \label{fig:Data_M1} \vspace{-12pt} \end{figure*} \subsection{DMRS Signal Structure} As discussed in Section~\ref{sec:Introduction}, multiple DMRS sequences are transmitted on frequency division multiplexed antenna ports \cite{211}, \cite{213} to support MIMO transmissions. It should be noted that if spectrum shaping is performed on data symbols, identical spectrum shaping should also be performed on the DMRS sequences to facilitate proper channel estimation and thereby equalization. However, if this spectrum shaping is not done in the right manner, it will alter the properties of the DMRS waveform depending on the antenna port on which the DMRS sequence is transmitted, which subsequently may result in non-identical channel estimation (and thereby equalization and demodulation) performance across the antenna ports, which is not desirable. Hence the DMRS transmitter design, besides minimizing the PAPR of the waveform, should also ensure that the characteristics of the waveform (like auto-correlation and cross-correlation) are similar for spectrum-shaped DMRS sequences across all the antenna ports. In this paper, we propose two transmitter designs such that the PAPR of the DMRS waveform is low and the characteristics of the waveform are uniform across all the baseband antenna ports. {In the current $3$GPP specifications \cite{3g}, $2$ MIMO streams are supported when the $\frac{\pi}{2}$-BPSK modulation scheme is used. To support two MIMO streams,} two FDM DMRS ports are most commonly used as opposed to CDM (wherein the code orthogonality may be impacted in heavy delay spread channels). For the case of CDM, the DMRS sequences are mapped on the same antenna port and hence both DMRS ports are identical in terms of sequence generation and mapping, and have the same PAPR. The FDM case presents a challenging problem that needs to be addressed, as will be discussed below. For FDM, an $M$-length data sequence on a given antenna port is associated with a corresponding $\frac{M}{2}$-length DMRS sequence. \subsection{Transmission Method - $1$} In this section, we present data and DMRS transmission method-$1$, wherein the spectrum shaping is performed in the frequency domain. \subsubsection{\textbf{Data waveform design method-$1$}} Let $\mathbf{x}_p$ denote an $M \times 1$ vector of $\frac{\pi}{2}$-BPSK modulated data symbols generated as per \eqref{eq:pi/2 BPSK}. For transmission via DFT-s-OFDM, the $\frac{\pi}{2}$-BPSK data symbols are first DFT-precoded as \begin{equation}\label{eq:dft equation} x_f(k) =\sum_{m=0}^{M-1}x_p(m)e^{\frac{-i\hspace{0.5mm}2\pi km}{M}}. \end{equation} The subscript $f$ in $x_f(k)$ indicates a frequency domain sequence. The DFT precoding shown in \eqref{eq:dft equation} can be equivalently represented in vector notation form as - \begin{equation}\label{eq:dft-operation} \mathbf{x}_f=\mathbf{D}_M\mathbf{x}_p, \end{equation} where $\mathbf{D}_M$ is an $M\times M$ DFT matrix given by \begin{equation*} \mathbf{D}_M(k,m)=e^{\frac{-i\hspace{0.5mm}2\pi km}{M}}, 0 \leq k, m \leq M-1 \end{equation*} The spectrum shaping is performed element-wise on the DFT-precoded data vector as ${\mathbf{x}_f^s}(k)=\mathbf{w}_f(k)\mathbf{x}_f(k)$, where ${\mathbf{x}_f^s}$ indicates the spectrum shaped frequency domain sequence $\mathbf{x}_f$.
The spectrum-shaped data vector ${\mathbf{x}_f^s}$ is then mapped to a set of sub-carriers in the frequency domain via an ${N\times M}$ mapping matrix $\mathbf{M}_f$, where $M\leq N$. The mapping matrix $\mathbf{M}_f$ is designed such that there are $M$ $1$'s in the matrix and $(N-1)M$ $0$'s, with the following constraints \begin{itemize} \item Each column has exactly one entry equal to $1$ \item No two columns have their $1$ in the same row \item Hence, the total number of rows containing a $1$ is $M$ \end{itemize} The mapping matrix $\mathbf {M}_f$ can be constructed such that it allocates $M$ sub-carriers in a localized or interleaved manner. Finally, the output of this mapping operation is converted to the $N\times 1$ time domain signal $\mathbf{s}_t$ as \begin{equation*} \mathbf{s}_t=\mathbf{D}_N^{\dagger}\mathbf{M}_f {\mathbf{x}_f^s}, \end{equation*} where $\mathbf{D}_N^{\dagger}$ is an inverse DFT matrix and $N$ is the total number of sub-carriers corresponding to the system bandwidth. An appropriate length cyclic prefix is added to $\mathbf{s}_t$ to generate $\mathbf{s}_t(t)$ as given in equation (5.3.1) in the 3GPP spec \cite{211}. This transmitter architecture for data waveform generation is shown in Fig.~\ref{fig:Data_M1}. \begin{figure} \centering \includegraphics[width=0.8\columnwidth]{PAPR_96_ZC_pi_2_data-eps-converted-to.pdf} \caption{PAPR comparison between spectrum shaped ZC sequence and spectrum shaped $\frac{\pi}{2}$-BPSK data.} \label{fig:PAPR} \vspace{-12pt} \end{figure} \begin{figure*} \vspace{-10pt} \centering \includegraphics[width=0.8\textwidth]{DMRS_method1.pdf} \caption{Transmitter architecture for \textit{port}-$0$ DMRS waveform generation using method-$1$. } \label{fig:DMRS_M1} \vspace{-10pt} \end{figure*} \begin{figure*} \vspace{-10pt} \centering \includegraphics[width=0.8\textwidth]{DMRS_method1_port1.pdf} \caption{ Transmitter architecture for \textit{port}-$1$ DMRS waveform generation using method-$1$. } \label{fig:DMRS_M1_p1} \end{figure*} \subsubsection{\textbf{DMRS waveform design method-$1$}} The CCDF of the PAPR of a DFT-s-OFDM waveform with spectrum-shaped $\frac{\pi}{2}$-BPSK data symbols and the commonly used Zadoff-Chu based DMRS sequences \cite[Section 5.2.2]{211}, \cite[Section 5.5]{213} is shown in Fig.~\ref{fig:PAPR}. It can be seen that the PAPR of $\frac{\pi}{2}$-BPSK is lower than that of the ZC sequences by over $2$ dB. The high PAPR of ZC-based DMRS sequences will therefore limit the cell coverage, as is currently the case in Release $15$ $3$GPP $5$G NR. Hence there is a need for designing new reference signal sequences (DMRS) such that the PAPR of the DMRS is similar to or lower than that of the data waveform. For this reason, 3GPP designed new DMRS sequences with low PAPR in \cite{SI}-\cite{Qual}. We will next describe how to use these sequences and design a transceiver to maintain the low PAPR for DMRS transmissions. As mentioned earlier, we assume $2$ MIMO streams are supported and the DMRS are multiplexed in an FDM manner for these streams. Hence, we assume $\frac{M}{2}$-length DMRS sequences will be transmitted for an $M$-length data allocation. In this architecture, the transmitter design is such that a given time domain DMRS signal $\mathbf{r}_t$ will result in an identical frequency domain signal $\mathbf{{r}}_f$ on any of the antenna ports. This subsequently results in similar auto and cross-correlation properties and hence produces an identical channel estimation performance at the receiver.
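Before turning to the DMRS construction, the data path of Fig.~\ref{fig:Data_M1} described above can be made concrete with the following Python/NumPy sketch; the $2$-tap filter values and the localized mapping are illustrative assumptions, not values mandated by the specification.

\begin{verbatim}
import numpy as np

M, N = 12, 64                 # allocation size, IDFT size (illustrative)
w_t = np.zeros(M, complex)
w_t[:2] = [1.0, 0.28]         # assumed 2-tap shaping filter, zero-padded
w_f = np.fft.fft(w_t)         # w_f = D_M w_t

bits = np.random.randint(0, 2, M)
m = np.arange(M)
x_p = (1+1j)/np.sqrt(2)*np.exp(1j*(m % 2)*np.pi/2)*(1 - 2*bits)

x_f  = np.fft.fft(x_p)        # M-point DFT precoding, Eq. (3)
x_fs = w_f * x_f              # element-wise spectrum shaping
X    = np.zeros(N, complex)
X[:M] = x_fs                  # localized sub-carrier mapping (M_f)
s_t  = np.fft.ifft(X)         # N-point IDFT; a CP is prepended next
\end{verbatim}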
The system model of the architecture is shown in Figs.~\ref{fig:DMRS_M1}, \ref{fig:DMRS_M1_p1} and the summary is tabulated in Table \ref{tab:Method1} shown on the next page. \textbf{DMRS waveform generation for Port 0}: Let $\mathbf{r}_t$ be a pre-determined $\frac{M}{2}$-length DMRS sequence with BPSK modulated symbols chosen as per the designs in \cite{SI}-\cite{Qual}. This will be cyclically extended to result in an $M$-length vector $\mathbf{\tilde{r}}_t (n)$ as follows \begin{equation} \mathbf{\tilde{r}}_t (n)=\mathbf{r}_t\left(n \bmod\frac{M}{2}\right), n=0,1,\ldots,M-1. \end{equation} Using $\mathbf{P}$ defined in \eqref{eq:pi/2 BPSK vector}, a $\frac{\pi}{2}$-phase rotation is applied on $\mathbf {\tilde{r}}_t $ to give $\mathbf{\tilde{r}}_t^p=\mathbf{P}\mathbf{\tilde{r}}_t$. The resultant $\frac{\pi}{2}$-BPSK signal is DFT precoded as $\mathbf {r}_f^{p_0}=\mathbf{D}_M \mathbf{\tilde{r}}_t^p$. The resulting DFT output will be a comb-like structure with non-zero entries only at even locations, which is equivalent to the \textit{port}-$0$ mapping shown in Fig.~\ref{fig:FDM_CDM} (and hence the notation $\mathbf {r}_f^{p_0}$). The DFT-precoded DMRS symbols are now spectrum-shaped using $\mathbf{w}_f$ defined in Section~\ref{sec:spectrum_shaping} to give the spectrum-shaped \textit{port}-$0$ DMRS as \begin{equation} {\mathbf{r}_f^{s_0}}=\mathbf{w}_f\mathbf{r}_f^{p_0} \end{equation} \textbf{DMRS waveform generation for Port 1}: As per 3GPP specifications, in FDM-based multiplexing of multiple antenna ports, the DMRS sequence should be identical on both the ports, i.e., the input BPSK sequence $\mathbf{r}_t$ and the resulting $\frac{\pi}{2}$-BPSK sequence $\mathbf{\tilde{r}}_t^p$ have to be the same for both \textit{port}-0 and \textit{port}-1. However, different from \textit{port}-$0$, to generate the spectrum-shaped frequency domain DMRS sequence on \textit{port}-$1$, the following additional steps need to be performed - \begin{itemize} \item a precoder $\mathbf T$ is applied on $\mathbf{\tilde{r}}_t^p$, where $\mathbf{T}$ is an $M\times M$ diagonal matrix with diagonal entries $T_{mm}=e^{i2\pi m/M }$, followed by DFT precoding as shown below \begin{equation*} \mathbf {r}_f^{p_1}=\mathbf {D}_M \mathbf T \mathbf{\tilde{r}}_t^p. \end{equation*} This $\mathbf {r}_f^{p_1}$ is a comb-like structure with non-zero entries only at odd sub-carriers, equivalent to the \textit{port}-1 mapping given in Fig.~\ref{fig:FDM_CDM}. \begin{table*}[t] \centering { \caption{Summary of Method-$1$ based DMRS waveform generation} \label{tab:Method1} \renewcommand{\arraystretch}{1} {\fontsize{9}{9}\selectfont \begin{tabular}{|c|c|c|c|} \hline Port &Time Domain DMRS &Spectrum shaping Filter &Freq Domain DMRS\\ \hline \hline $0$ & $\mathbf{\tilde{r}}_t^p (n)=\mathbf{P}\mathbf{r}_t\left(n \bmod\frac{M}{2}\right)$ &$\mathbf{w}_f=\mathbf{D}_M\mathbf{w}_t$ & $\mathbf{D}_M \mathbf{\tilde{r}}_t^p\mathbf{w}_f$ \\ \hline $1$ & $\mathbf{\tilde{r}}_t^p (n)=\mathbf{T}\mathbf{P}\mathbf{r}_t\left(n \bmod\frac{M}{2}\right)$&$\mathbf{w}_f=\mathbf Z\mathbf{D}_M\mathbf{w}_t$& $\mathbf{D}_M \mathbf{\tilde{r}}_t^p\mathbf{w}_f$ \\ \hline \end{tabular} } } \end{table*} \item Spectrum shaping of $\mathbf {r}_f^{p_1}$ is done as follows - \begin{equation*} \mathbf {r}_f^{s_1}=\mathbf Z \mathbf w_f \mathbf {r}_f^{p_1} \end{equation*} where $\mathbf Z$ is a square circulant matrix of size $M\times M$ whose first row entries are $[\underbrace{0,0,...,0}_{M-1}, 1]$.
\end{itemize} {\textbf{Note:} Only when this precoder $\mathbf Z$ is applied on the DMRS of port-$1$ will the effect of spectrum shaping on the data (shown in Fig.~\ref{fig:Data_M1}) and on the DMRS of \textit{ports}-$0$, $1$ (shown in Figs.~\ref{fig:DMRS_M1}, \ref{fig:DMRS_M1_p1}) be identical, so that the data can be demodulated. In the absence of the precoder $\mathbf Z$, the non-zero entries of the spectrum shaped outputs $\mathbf {r}_f^{s_0}$, $\mathbf {r}_f^{s_1}$ will not be identical, as shown in Fig.~\ref{fig:Angle_P0_P1}. This results in non-identical PAPR and channel estimation performance on \textit{port}-$0$ and \textit{port}-$1$, which is not acceptable in any MIMO system.} \begin{figure} \centering \includegraphics[width=0.8\columnwidth]{Angular_difference_p0_p1-eps-converted-to.pdf} \vspace*{-5pt} \caption{Angle of $\mathbf {r}_f^{s_0}$, $\mathbf {r}_f^{s_1}$, i.e., the spectrum shaping filter outputs on \textit{port}-0 and \textit{port}-1 in the absence of the precoder $\mathbf Z$. } \vspace{-10pt} \label{fig:Angle_P0_P1} \end{figure} Using the proposed architecture, it can be shown that the output of the spectrum shaping filter is identical for both the ports, i.e., \begin{align}\label{eq:M1 Equivalence} \mathbf{r}_f^{s_0}(2k) &= \mathbf{r}_f^{s_1}(2k+1) \nonumber \\ &= \mathbf {r}_f^{p_0}(2k)\mathbf{w}_f(2k), \end{align} where $\mathbf {r}_f^{p_0}(k)$ is the $M$-point DFT of the $\frac{\pi}{2}$-BPSK signal $\mathbf{\tilde{r}}_t^p$. Therefore, the same reference signal is transmitted on each baseband antenna port as per the $3$GPP $5$G NR specifications. {We further show in Section \ref{sec:ReceiverDesign} that the channel impulse response estimated on both the ports will be identical}. The spectrum-shaped DMRS vectors $ \mathbf {r}_f^{s_0}$, $\mathbf {r}_f^{s_1}$ are mapped to a set of sub-carriers in the frequency domain as discussed in Section~\ref{sec:spectrum_shaping}. The resulting output is converted to the time domain via an inverse-DFT operation similar to the method employed for data transmission as shown below - \begin{align} \mathbf s_t^0=\mathbf{D}_N^\dagger\mathbf{M}_f\mathbf {r}_f^{s_0}.\\ \mathbf s_t^1=\mathbf{D}_N^\dagger\mathbf{M}_f\mathbf {r}_f^{s_1}. \end{align} Using the above, the overall time-domain baseband signals $\mathbf{s}_t^0(t)$, $\mathbf{s}_t^1(t)$ with an appropriate cyclic prefix are generated as given by equation (5.3.1) in the $3$GPP spec \cite{211}. \subsection{Transmission Method - $2$} In the method-$1$ based transmitter design, the $\frac{\pi}{2}$-BPSK data and DMRS sequences are spectrum shaped in the frequency domain. Further, the DFT-precoded DMRS sequences corresponding to each antenna port are generated and spectrum-shaped independently. In the method-$2$ based design, we propose a low complexity design where spectrum shaping is performed in the time domain for both data and DMRS sequences via a circular convolution operation. Specifically, a single DMRS sequence is spectrum-shaped in the time domain and mapped to both the antenna ports. The architecture of this transmitter design for the data and DMRS is shown in Figs.~\ref{fig:M2_Data} and \ref{fig:M2_DMRS} respectively.
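The time-domain implementation rests on the standard DFT duality between circular convolution in time and element-wise multiplication in frequency. A short numerical check of this identity, with arbitrary illustrative filter taps, is sketched below.

\begin{verbatim}
import numpy as np

M = 12
rng = np.random.default_rng(0)
x = rng.standard_normal(M) + 1j*rng.standard_normal(M)
w = np.zeros(M, complex)
w[:3] = [0.28, 1.0, 0.28]     # assumed 3-tap filter, zero-padded

# Circular convolution in time (method-2 style shaping) ...
x_s = np.array([sum(x[m]*w[(n - m) % M] for m in range(M))
                for n in range(M)])

# ... equals element-wise multiplication in frequency (method-1 style)
assert np.allclose(np.fft.fft(x_s), np.fft.fft(w)*np.fft.fft(x))
\end{verbatim}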
\subsubsection{\textbf{Data waveform design method-$2$}} { \begin{figure*} \centering \includegraphics[width=0.8\textwidth]{Data_method2.pdf} \vspace*{-5pt} \caption{Transmitter architecture for data waveform generation using method-$2$.} \vspace{-5pt} \label{fig:M2_Data} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.8\textwidth]{DMRS_method2.pdf} \caption{ Transmitter architecture for DMRS waveform generation for \textit{port}-0 and \textit{port}-1 using method-$2$.} \vspace{-10pt} \label{fig:M2_DMRS} \end{figure*} Let $\mathbf{x}_t$ be the $M$-length data vector to be transmitted from the UE to the base station, which undergoes a $\frac{\pi}{2}$-phase rotation through an $M\times M$ diagonal matrix $\mathbf P$. Here, $\mathbf P$ is the same matrix used in method-$1$. This results in an $M$-length data vector $\mathbf x_t^p= \mathbf P \mathbf x_t$ with $\frac{\pi}{2}$-BPSK symbols. Note that in this method, the spectrum shaping of the $\frac{\pi}{2}$-BPSK data is performed in the time domain through a circular-convolution procedure with the zero-padded $\mathbf{w}_t$ to produce the spectrum-shaped data as, \begin{align} \mathbf{x}_t^s(n)&=\sum_{m=0}^{M-1}\mathbf x_t^p(m)\mathbf{w}_t\left((n-m)\bmod M\right),\nonumber \\ &\hspace{50pt} n \in \left[0,M-1\right] \end{align} The spectrum-shaped data sequence is DFT precoded by means of an $M$-point DFT as $\mathbf{x}_f^s =\mathbf{D}_M\mathbf{x}_t^s$. The DFT precoded spectrum-shaped data vector is mapped to a set of sub-carriers in the frequency domain via a mapping matrix $\mathbf{M}_f$ (described in Sec~\ref{sec:spectrum_shaping}). Finally, this mapped sequence is converted to the time domain via an inverse-DFT operation as \begin{align} \mathbf{s}_t =\mathbf{D}_N^{\dagger}\mathbf{M}_f\mathbf{x}_f^{s} \nonumber \end{align} Using the above, the overall time-domain baseband signal $\mathbf{s}_t(t)$ with an appropriate length cyclic prefix is generated as per equation (5.3.1) in the 3GPP spec \cite{211}. \subsubsection{\textbf{DMRS waveform design method-$2$}} Let $\mathbf{r}_t$ be the pre-determined $\frac{M}{2}$-length DMRS sequence (as mentioned earlier in method-$1$), which undergoes a $\frac{\pi}{2}$-phase rotation through a diagonal matrix $\mathbf P_1$ of size $\frac{M}{2}\times\frac{M}{2}$, where $\mathbf P_1$ has diagonal entries given by $e^{i(m \bmod 2)\frac{\pi}{2}}$. This results in a $\frac{M}{2}$-length DMRS vector $\mathbf r_t^p= \mathbf P_1 \mathbf r_t$ with $\frac{\pi}{2}$-BPSK symbols. The spectrum shaping of the DMRS symbols is performed in the time domain through a circular-convolution procedure with the zero-padded $\mathbf{w}_t$ to produce the spectrum-shaped DMRS sequence as, \begin{align} \mathbf{r}_t^s(n)&=\sum_{m=0}^{\frac{M}{2}-1}\mathbf r_t^p(m)\mathbf{w}_t\left((n-m)\bmod\frac{M}{2}\right),\nonumber \\ &\hspace{50pt} n \in \left[0,\frac{M}{2}-1\right] \end{align} The spectrum-shaped DMRS sequence is DFT precoded by means of the $\frac{M}{2}$-point DFT matrix as $\mathbf{r}_f^s =\mathbf{D}_{\frac{M}{2}}\mathbf{r}_t^s$.
The DFT output of the DMRS sequence generated above is mapped to \textit{port}-0 as} \begin{equation*} \begin{split} \mathbf {r}_f^{s_0}(k) &= \mathbf {r}_f^{s} \left(\frac{k}{2}\right) \hspace*{0.5 cm} k \in \{0,2,4,\ldots\}\\ &= 0 \hspace*{1.2 cm}\textnormal{otherwise}, \end{split} \end{equation*} and to \textit{port}-1 as \begin{equation*} \begin{split} \mathbf {r}_f^{s_1}(k)&= \mathbf {r}_f^{s} \left(\frac{k-1}{2}\right) \hspace*{0.5 cm} k\in \{1,3,5,\ldots\}\\ &= 0 \hspace*{1.2 cm} \textnormal{otherwise}. \end{split} \end{equation*} In the above equations, $\mathbf {r}_f^{s_0}$ and $\mathbf {r}_f^{s_1}$ indicate the frequency domain DMRS sequences on \textit{port}-0 and \textit{port}-1 respectively. It can be seen that with the proposed architecture the non-zero entries of the DMRS sequence are exactly identical for both the ports, i.e., \begin{align}\label{eq:M2 Equivalence} \mathbf r_f^{s_0}(2k)&= \mathbf r_f^{s_1}(2k+1)\nonumber \\ &= \mathbf r_f^{p}(k)\mathbf w_f(k), \end{align} where $\mathbf r_f^{p}(k)$, $\mathbf w_f(k)$ are the $\frac{M}{2}$-point DFT outputs of the $\frac{\pi}{2}$-BPSK DMRS sequence $\mathbf{r}_t^p$ and the filter $\mathbf{w}_t$, respectively. The DFT precoded spectrum-shaped data and DMRS vector of each port is mapped to a set of sub-carriers in the frequency domain via the mapping matrix $\mathbf{M}_f$ (described in Sec~\ref{sec:spectrum_shaping}). Finally, this mapped sequence is converted to the time domain via an inverse-DFT operation as \begin{align} \mathbf{s}_t^0 &=\mathbf{D}_N^{\dagger}\mathbf{M}_f\mathbf{r}_f^{s_0} \nonumber \\ \mathbf{s}_t^1 &=\mathbf{D}_N^{\dagger}\mathbf{M}_f\mathbf{r}_f^{s_1}. \end{align} Using the above, the overall time-domain baseband signals for DMRS transmission, i.e., $\mathbf{s}_t^0(t)$, $\mathbf{s}_t^1(t)$ with an appropriate length cyclic prefix are generated as per equation (5.3.1) in the 3GPP spec \cite{211}. \begin{figure*} \centering \includegraphics[width=0.8\textwidth]{RxerArch.pdf} \caption{Base station receiver architecture for each receive antenna.} \label{fig:Rxer} \end{figure*} \subsection{\textbf{Summary of the transmission methods}} We presented two transmission methods for the data and DMRS waveform generation. Specifically, in method-$1$ the processing happens in the frequency domain, while in method-$2$ the processing happens in the time domain via the circular-convolution operation. {Also, in method-$1$, an $M$-length DMRS sequence is spectrum shaped in the frequency domain, whereas in method-$2$ a length-$\frac{M}{2}$ DMRS sequence is spectrum shaped in the time domain. Irrespective of this difference, we show that both these methods are capable of estimating the channel perfectly}. Further, using \eqref{eq:M1 Equivalence} and the DFT property that the even-indexed samples of the $M$-point DFT of a cyclically extended sequence coincide (up to a constant scale factor that we suppress) with the $\frac{M}{2}$-point DFT of the base sequence, it can be shown that \begin{equation}\label{DFT Reln} \mathbf r_{f,M}^{p_0}(2k)=\mathbf r_{f,\frac{M}{2}}^{p_0}(k), \hspace{0.5cm} k=0,1,\ldots,\frac{M}{2}-1, \end{equation} where $\mathbf r_{f,M}^{p_0}$, $\mathbf r_{f,\frac{M}{2}}^{p_0}$ are the $M$-point DFT of $\mathbf{\tilde{r}}_t^p$ and the $\frac{M}{2}$-point DFT of $\mathbf{r}_t^p$, respectively. Using \eqref{DFT Reln}, we can rewrite \eqref{eq:M1 Equivalence} as \begin{align*} \mathbf{r}_f^{s_0}(2k)&= \mathbf{r}_f^{s_1}(2k+1) \nonumber \\ &= \mathbf {r}_{f,\frac{M}{2}}^{p_0}(k)\mathbf{w}_{f,\frac{M}{2}}(k) \end{align*} which is exactly identical to \eqref{eq:M2 Equivalence}.
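This chain of identities can be verified numerically. The sketch below builds the port-0 and port-1 frequency-domain DMRS via method-$1$ and the shared sequence via method-$2$ for an illustrative filter; the base sequence and filter taps are placeholders rather than the Rel-$16$ sequences, and the factor of $2$ in the final check is the constant scale factor mentioned above.

\begin{verbatim}
import numpy as np

M = 12
r = 1 - 2*np.random.randint(0, 2, M//2)   # placeholder BPSK base sequence
n = np.arange(M)
w_t = np.zeros(M, complex)
w_t[:2] = [1.0, 0.28]                     # assumed 2-tap filter

# Method-1: cyclic extension, pi/2 rotation, M-point DFT, shaping
r_ext = (1+1j)/np.sqrt(2)*np.exp(1j*(n % 2)*np.pi/2)*r[n % (M//2)]
rf_s0 = np.fft.fft(w_t)*np.fft.fft(r_ext)              # port-0
rf_p1 = np.fft.fft(np.exp(2j*np.pi*n/M)*r_ext)         # T precoder
rf_s1 = np.roll(np.fft.fft(w_t), 1)*rf_p1              # Z w_f, port-1
assert np.allclose(rf_s0[0::2], rf_s1[1::2])           # Eq. (19)-style check

# Method-2: time-domain circular convolution of the M/2-length sequence
k = np.arange(M//2)
r_p  = (1+1j)/np.sqrt(2)*np.exp(1j*(k % 2)*np.pi/2)*r
r_ts = np.array([sum(r_p[m]*w_t[(j - m) % (M//2)] for m in range(M//2))
                 for j in range(M//2)])
assert np.allclose(rf_s0[0::2], 2*np.fft.fft(r_ts))    # x2 = scale factor
\end{verbatim}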
Since the input to the IDFT is identical for both methods, we conclude that the frequency-domain DMRS on \textit{port}-0, \textit{port}-1, i.e., $\mathbf {r}_f^{s_0}$, $\mathbf {r}_f^{s_1}$, and the subsequent baseband signals generated through method-$1$ will be identical to those generated using method-$2$. Using an example, we show in the Appendix that the channel estimation performance when these different transmitter methods are used will remain the same. {\textbf{Remark:} \textit{The current 3GPP specifications for $5$G NR do not mention how the spectrum shaping and transmission for data and DMRS must be done for single as well as multiple antenna ports, as it is left as an implementation choice. However, as we have shown extensively, this causes ambiguity at both the transmitter and receiver if not done in the right manner. Hence, to avoid this ambiguity and also to avoid data loss on any antenna port, the designs mentioned above must be used.}} \section{Receiver Design}\label{sec:ReceiverDesign} The receiver procedure explained next is common to both the transmission methods explained in the previous sections. Hence, we do not distinguish between transmission method-$1$ and method-$2$ in this section. The receiver front end operations such as sampling, synchronization, CP removal and FFT are similar to a conventional DFT-s-OFDM-based system, as shown in Fig.~\ref{fig:Rxer}. Further, the delay spread introduced by the propagation channel is assumed to be less than the CP length. Therefore, after CP removal and DFT, the data and DMRS signals on the $k$th sub-carrier can be represented as (without loss of generality we consider only the initial $M$ subcarriers of the DFT output, i.e., $k\in [0,M-1]$) \begin{align} \mathbf y_d(k)&=\mathbf x_f^{s_0}(k)\mathbf h_{f,\mathtt{data}}^{0}(k)+\mathbf x_f^{s_1}(k)\mathbf h_{f,\mathtt{data}}^{1}(k)+\mathbf v(k)\nonumber \\ \mathbf y_{\mathtt{DMRS}}^0(k)&=\mathbf r_f^{s_0}(k)\mathbf h_{f,\mathtt{DMRS}}^{0}(k)+\mathbf v_0(k) \label{eq:Port0 Rx} \\ \mathbf y_{\mathtt{DMRS}}^1(k)&=\mathbf r_f^{s_1}(k)\mathbf h_{f,\mathtt{DMRS}}^{1}(k)+\mathbf v_1(k) \label{eq:Port1 Rx}. \end{align} In the above, $\mathbf y_d$ corresponds to the received data vector with data symbols from both the ports (recall that $2$ antenna ports can support $2$ MIMO stream transmissions). $\mathbf y_{\mathtt{DMRS}}^0$, $\mathbf y_{\mathtt{DMRS}}^1$ correspond to the received DMRS vectors on \textit{port}-0 and \textit{port}-1 respectively. $\mathbf h_{f,\mathtt{DMRS}}^{0}=\mathbf{D}_M\mathbf{h}_t^0$, $\mathbf h_{f,\mathtt{DMRS}}^{1}=\mathbf{D}_M\mathbf{h}_t^1$ correspond to the frequency response of the time-domain wireless channel impulse response $\mathbf{h}_t^0$ on \textit{port}-0 and $\mathbf{h}_t^1$ on \textit{port}-1 respectively, and $\mathbf x_f^{s_0}$, $\mathbf x_f^{s_1}$, $\mathbf r_f^{s_0}$, $\mathbf r_f^{s_1}$ are the transmitted data and DMRS sequences defined in Section~\ref{sec:SignalModel}. The noise vectors $\mathbf{v}, \mathbf{v}_0$ and $\mathbf{v}_1$ have \textit{i.i.d.} zero-mean complex Gaussian entries with covariance $\sigma^2\mathbf{I}$, where $\mathbf{I}$ is an identity matrix and $\sigma^2$ is a constant indicating the variance of each noise sample. In practice, for low to medium user speeds, the time variations of the multipath channel across consecutive OFDM symbols, as shown in Fig.~\ref{fig:Data_symb}, will be minimal and hence without loss of generality we consider that \begin{equation*} \mathbf h_{f,\mathtt{DMRS}}^0(k)=\mathbf h_{f,\mathtt{data}}^0(k) .
\end{equation*} \begin{equation*} \mathbf h_{f,\mathtt{DMRS}}^1(k)=\mathbf h_{f,\mathtt{data}}^1(k) . \end{equation*} \begin{figure} \centering \includegraphics[width=0.5\columnwidth,height=5cm]{dataDMRS-eps-converted-to.pdf} \caption{DMRS and data symbols in an OFDM resource grid.} \label{fig:Data_symb} \vspace{-12pt} \end{figure} This is a common assumption made in the design of $4$G and $5$G cellular systems. \subsubsection{\textbf{Channel estimation}} As per $3$GPP specifications, the spectrum shaping filter $\mathbf{w}_t$ is implementation-specific, i.e., different UEs can use different filters based on their hardware implementation, and hence the exact filter being used is unknown at the base station receiver \cite{101}, \cite{213}. Hence, the channel estimation module at the receiver should now estimate the impulse response of the filter and the wireless channel jointly. In our work, we use a DFT-based channel estimation technique to estimate the joint channel impulse response for the $M$ allocated sub-carriers. A simple least-squares based technique with tone averaging or linear interpolation, based on the assumption that the channel is constant across consecutive sub-carriers, does not work well in this case due to the presence of the spectrum shaping filter, because spectrum shaping considerably changes the channel across consecutive sub-carriers based on the shape of the filter shown in Fig.~\ref{fig:spectrum_shaping}. As already mentioned in Section~\ref{sec:SignalModel}, an $M$-length data vector will be associated with a $\frac{M}{2}$-length DMRS vector. Firstly, we show that the $M$-length frequency domain channel vector corresponding to the $M$-length data symbol (as the data allocation is $M$, the channel on all of these $M$ tones must be estimated for demodulation) can be perfectly constructed from the $\frac{M}{2}$-length DMRS sequence for both ports. \subsubsection{\textbf{Channel estimation on \textit{port}-0}}\label{sec:chan_est_length_port} As mentioned earlier, \textit{port}-0 carries DMRS only on even numbered sub-carriers, which are extracted and expressed in terms of the $\frac{\pi}{2}$-BPSK DMRS as follows \begin{align} \tilde{ \mathbf y}_{\mathtt{DMRS}}^0(k) & = \mathbf y^0_{\mathtt{DMRS}}(2k), k=0,1,\ldots,\frac{M}{2}-1 \nonumber \\ &=\mathbf r_f^{s_0}(2k)\mathbf h_{f,\mathtt{DMRS}}^0(2k)+\mathbf v_0(2k), \label{eq:tempe1}\\ &=\mathbf {r}_f^{p_0}(2k)\mathbf {w}_f(2k)\mathbf h_{f,\mathtt{DMRS}}^0(2k)+\mathbf{v}_0(2k), \label{eq:tempe2} \end{align} where \eqref{eq:tempe1} results from \eqref{eq:Port0 Rx}, and \eqref{eq:tempe2} results from \eqref{eq:M1 Equivalence}. Invoking the equivalence between the $M$-point and $\frac{M}{2}$-point DFTs \eqref{DFT Reln}, the above equation can be represented as \begin{equation} \tilde{ \mathbf y}_{\mathtt{DMRS}}^0(k)=\left[\mathbf{D}_{\frac{M}{2}} \mathbf{\tilde{r}}_t^p\right] \left[\mathbf{D}_{\frac{M}{2}}\left(\mathbf w_t \odot \mathbf h_{t,\mathtt{DMRS}}^0\right)\right]+\mathbf{v}_0 \end{equation} where $\odot$ indicates the circular-convolution operation, $\mathbf {h}_{t,\mathtt{DMRS}}^0$ is the impulse response of the wireless channel on \textit{port}-0, and $\mathbf{\tilde{r}}_t^p(n)$ is defined in Section~\ref{sec:SignalModel}. We perform channel estimation on ${\tilde{\mathbf {y}}_{\mathtt{DMRS}}^0}$ as follows - we first perform a least squares based channel estimation and then on the resulting output we take an $\frac{M}{2}$-point IDFT.
This gives the joint impulse response of the filter and the wireless channel as - \begin{align} \mathbf{D}_{\frac{M}{2}}^\dagger \left(\frac{\mathbf{\tilde{y}}_{\mathtt{DMRS}}^0}{\mathbf{D}_{\frac{M}{2}} \mathbf{\tilde{r}}_t^p }\right)&=\underbrace{\mathbf w_t(n)\odot \mathbf h_{t,\mathtt{DMRS}}^0(n)}_{\mathbf h_{\mathtt{eff}}}. \label{step1:channelest} \end{align} The length of $\mathbf h_{\mathtt{eff}}$ will be $\mathtt{length}(\mathbf{w}_t)+\mathtt{length}(\mathbf h_{t,\mathtt{DMRS}}^0)-1$. Irrespective of the pulse shaping filter, the reference signal design should ensure that the DMRS sequence length will be at least twice that of the impulse response of the wireless channel, i.e., the length of $\mathbf h_{t,\mathtt{DMRS}}^0$ is assumed to be less than $\frac{M}{2}$ \cite{CIR1}, which is typically the case for practical wireless channel models considered by $3$GPP \cite{901}. From the above we conclude that $\mathbf h_{\mathtt{eff}}$ completely captures the joint impulse response of the spectrum shaping filter and also the wireless channel. A de-noising time domain filter \cite{Denoise} is then applied to reduce the noise in~\eqref{step1:channelest}. This filter $\mathbf {f}(n)$ is defined as \begin{equation*} \begin{split} \mathbf {f}(n)&=1, \hspace{0.6cm} 0\leq n \leq f_c-1 ,\; M-f_c\leq n\leq M-1 \\ &=0, \hspace*{0.6cm} \textnormal{otherwise}\\ \end{split} \end{equation*} where $f_c$ is the ``cut-off'' point of the time domain filter, which is commonly chosen as the wireless channel length $\mathtt{length}(\mathbf h_{t,\mathtt{DMRS}}^0)$ if it is known \emph{a priori}, or is set to the cyclic prefix length in case no knowledge about the wireless channel is available. This filter helps to extract only the useful samples of the CIR while reducing the noise in the rest of the samples. For more details, please see \cite{Denoise}. The effective impulse response after de-noising is given as \begin{equation*} \hat{\mathbf{h}}_{\mathtt{eff}}(n)=\mathbf {h}_{\mathtt{eff}}(n)\mathbf {f}(n), \hspace*{0.5cm} 0\leq n\leq M-1 \end{equation*} Lastly, the time domain filtered samples are transformed via an $M$-point DFT to recover the frequency-domain channel estimates on each sub-carrier $k\in[0,M-1]$ as $\hat{\mathbf{h}}^f_{\mathtt{eff}}=\mathbf{D}_M\hat{\mathbf h}_{\mathtt{eff}}$. This can be further used for \textit{port}-0 data demodulation using well-known techniques. \subsubsection{\textbf{Channel estimation on \textit{port}-1}} As mentioned earlier, \textit{port}-1 carries DMRS only on odd sub-carriers, which are extracted and expressed as follows \begin{equation*} \tilde{ \mathbf y}_{\mathtt{DMRS}}^1(k) = \mathbf y^1_{\mathtt{DMRS}}(2k+1), k=0,1,\ldots, \frac{M}{2}-1 \end{equation*} Using \eqref{eq:Port1 Rx}, the above equation can be written as \begin{equation} \tilde{ \mathbf y}_{\mathtt{DMRS}}^1(k) =\mathbf r_f^{s_1}(2k+1)\mathbf h_{f,\mathtt{DMRS}}^1(2k+1)+\mathbf v_1(2k+1) \label{eq:y1DMRS} \end{equation} Assuming that the wireless channel remains constant across consecutive sub-carriers (again a common assumption in $3$GPP designs), we have \begin{equation}\label{Approx} \mathbf {h}_{f,\mathtt{DMRS}}^1(2k+1)\approx \mathbf {h}_{f,\mathtt{DMRS}}^1(2k) .
\end{equation} Using \eqref{eq:M1 Equivalence} and \eqref{Approx}, \eqref{eq:y1DMRS} can be expressed as \begin{align} \tilde{ \mathbf y}_{\mathtt{DMRS}}^1(k)&=\mathbf {r}_f^{p_0}(2k)\mathbf {w}_f(2k)\mathbf h_{f,\mathtt{DMRS}}^1(2k)+\mathbf v_1(2k+1)\nonumber \\ &=\left[\mathbf{D}_{\frac{M}{2}} \mathbf{\tilde{r}}_t^p\right] \left[\mathbf{D}_{\frac{M}{2}}\left(\mathbf w_t \odot \mathbf h_{t,\mathtt{DMRS}}^1\right)\right]+\mathbf{v}_1 \end{align} Further processing steps such as the least-squares based channel estimation, de-noising and transforming the effective impulse response to the frequency domain are identical to the procedure followed for channel estimation on \textit{port}-0. For the case of an AWGN channel, i.e., \begin{equation*} \mathbf {h}_{f,\mathtt{DMRS}}^0(k)=\mathbf {h}_{f,\mathtt{DMRS}}^1(k)=1\hspace{0.2cm} \forall k, \end{equation*} the estimated joint impulse response ${\mathbf h_{\mathtt{eff}}}$ on \textit{port}-0 and \textit{port}-1 is shown in Fig.~\ref{fig:CIR}. It can be noticed that the estimated impulse response is identical for both the ports. \begin{figure} \centering \includegraphics[width=0.8\columnwidth]{CIR-eps-converted-to.pdf} \caption{Magnitude of the estimated channel impulse response on \textit{port}-0 and \textit{port}-1.} \label{fig:CIR} \vspace{-5pt} \end{figure} \subsubsection{\textbf{Equalization and data demodulation}} The estimated channel on \textit{port}-0 and \textit{port}-1 will be employed for channel equalization of the data streams. Specifically, we construct an MMSE equalization filter using the channel estimates obtained previously and then equalize the received signal samples on all the receive antennas of the base station. The equalized data streams are demodulated to generate soft log-likelihood ratio values, which are given as input to the channel decoder module for further bit-level processing. \section{Numerical Results}\label{sec:Numerical Results} In this section, we present various numerical results that show \begin{itemize} \item The PAPR comparison between the $\frac{\pi}{2}$-BPSK based DMRS sequences and the existing $3$GPP ZC-based DMRS sequences. \item Link level block error rate (BLER) comparison for the data transmissions employing $\frac{\pi}{2}$-BPSK based DMRS sequences and existing $3$GPP ZC-based DMRS sequences for various sequence lengths and various bandwidth allocations. \item BLER performance for the data transmissions on \textit{port}-0 and \textit{port}-1 in the case of MIMO two stream transmissions. \end{itemize} Unless otherwise mentioned, the simulation assumptions shown in Table~\ref{tab:Sim Params} are used throughout this paper. \begin{table}[t] \centering { \caption{Simulation Assumptions for BLER comparisons} \label{tab:Sim Params} \renewcommand{\arraystretch}{1.5} {\fontsize{7}{7}\selectfont \begin{tabular}{|c|c|} \hline \textbf{Parameter} & \textbf{Value} \\ \hline \hline Channel Type & PUSCH\\ \hline System Bandwidth & $20$ MHz\\ \hline Sub-carrier spacing & $15$ kHz \\ \hline Allocated PRBs & $1$-$16$ PRBs\\ \hline Channel Model & TDL-C $300$ns \\ \hline Number of UE transmitter antennas & $1$\\ \hline Number of UEs & $1$, $2$\\ \hline Number of BS receiver antennas & $2$, $4$\\ \hline Number of MIMO streams & $1$, $2$ \\ \hline Channel Coding & $3$GPP NR LDPC\\ \hline Equalizer & MMSE\\ \hline \end{tabular} } } \end{table} The CCDF of the PAPR for ZC and $\frac{\pi}{2}$-BPSK sequences is shown in Fig.~\ref{fig:PAPR22}.
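Such PAPR CCDF curves can be reproduced along the following lines; the sketch below estimates, over random $\frac{\pi}{2}$-BPSK DFT-s-OFDM symbols, the probability that the PAPR exceeds a threshold. The sizes, oversampling and threshold are illustrative choices, not the simulation settings of Table~\ref{tab:Sim Params}.

\begin{verbatim}
import numpy as np

def papr_db(s):
    p = np.abs(s)**2
    return 10*np.log10(p.max()/p.mean())

M, N, trials = 96, 1024, 10000   # N/M sets the oversampling factor
m = np.arange(M)
samples = []
for _ in range(trials):
    bits = np.random.randint(0, 2, M)
    x = (1+1j)/np.sqrt(2)*np.exp(1j*(m % 2)*np.pi/2)*(1 - 2*bits)
    X = np.zeros(N, complex)
    X[:M] = np.fft.fft(x)        # DFT precoding + localized mapping
    samples.append(papr_db(np.fft.ifft(X)))

thr = 6.0                        # example threshold in dB
ccdf_at_thr = np.mean(np.array(samples) > thr)   # Pr(PAPR > thr)
\end{verbatim}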
The ZC sequences considered in this case are as defined in \cite[section 5.2.2]{211}, with length $96$. The PAPR of the ZC sequences both with and without spectrum shaping is shown in the figure. As can be seen from the figure, the $3$GPP ZC sequences without spectrum shaping have a PAPR (at the $10^{-3}$ CCDF point) that is $2.8$ dB more than that of the $\frac{\pi}{2}$-BPSK sequences. When spectrum shaping is applied to the ZC-DMRS, the PAPR is slightly reduced from that of the un-filtered ZC sequences. However, the PAPR of the filtered ZC sequence is still $2.0$ dB larger than the PAPR of the $\frac{\pi}{2}$-BPSK sequences with the same spectrum shaping. Moreover, as we increase the number of allocated sub-carriers for data transmission, the PAPR gap between the $3$GPP ZC sequence and $\frac{\pi}{2}$-BPSK increases even further. \begin{figure} \centering \includegraphics[width=0.8\columnwidth]{PAPR_96_ZC_pi_2_DMRS-eps-converted-to.pdf} \caption{PAPR of length $96$ ZC and $\frac{\pi}{2}$-BPSK DMRS sequences.} \label{fig:PAPR22} \vspace{-10pt} \end{figure} The CCDF of the PAPR for ZC and $\frac{\pi}{2}$-BPSK for smaller lengths $(N=12)$ is shown in Fig.~\ref{fig:PAPR_CGS}. As discussed in Section \ref{sec:SignalModel}, for smaller lengths $(N<30)$, $3$GPP employs computer generated sequences (CGS) as DMRS. It can be seen from the figure that the PAPR of the spectrum shaped CGS sequences is almost $1.2$ dB larger than the PAPR of the $\frac{\pi}{2}$-BPSK sequences. Moreover, it can also be noticed that for CGS the PAPR is further increased with filtering. Hence, these results show that the $\frac{\pi}{2}$-BPSK sequences designed in \cite{Qual} are far superior to the existing sequences in improving the cell coverage. \begin{figure} \centering \includegraphics[width=0.8\columnwidth]{PAPR_24-eps-converted-to.pdf} \caption{PAPR of length-12 $3$GPP CGS and $\frac{\pi}{2}$-BPSK DMRS sequences } \label{fig:PAPR_CGS} \vspace{-12pt} \end{figure} The block error rate performance for a single stream PUSCH transmission is shown in Fig.~\ref{fig:BLER_24}. Here, the DMRS is transmitted on \textit{port}-0. Note that ZC sequences are used for comparing the BLER performance because these sequences have a power density which is frequency-flat and hence treat every sub-carrier equally and can estimate the channel equally well across the entire bandwidth. Hence, the goal for the newly designed sequences is to ensure that they match the performance of these ZC sequences. In this figure, the results are shown for the cases when the base station receiver employs $2$ and $4$ receive antennas. {From Fig.~\ref{fig:BLER_24}, it can be clearly seen that irrespective of the number of receive antennas, the link level performance of the $\frac{\pi}{2}$-BPSK DMRS is equivalent to that of the $3$GPP ZC sequences, although the newly designed sequences are not frequency-flat like ZC}. \begin{figure} \centering \includegraphics[width=0.8\columnwidth,height=5.5cm]{BLER_24-eps-converted-to.pdf} \caption{BLER comparison of length-$24$ ZC and $\frac{\pi}{2}$-BPSK DMRS sequences.} \label{fig:BLER_24} \vspace{-12pt} \end{figure} We next consider the performance of the proposed transmitter designs for the two MIMO stream transmission setting where the DMRS is transmitted on both \textit{port}-0 and \textit{port}-1. Firstly, we show the drawbacks of the existing design in $3$GPP in Figs.~\ref{fig:PAPR_our_3gpp} and \ref{fig:TDL_2_4_3gpp}.
It is seen that when the $3$GPP transceiver is used, there is a clear difference in the performance, both in terms of PAPR and BLER, across \textit{port}-0 and \textit{port}-1. This is highly undesirable, as the data on the two different ports will behave differently, rendering \textit{port}-1 practically useless. This problem is addressed using our proposed transceiver design, as claimed earlier. We next show that this is indeed the case. \begin{figure} \centering \includegraphics[width=0.8\columnwidth]{PAPR_our_Others-eps-converted-to.pdf} \caption{PAPR comparison of length-$6$ DMRS sequences on \textit{port}-0 and \textit{port}-1 with proposed and $3$GPP transmitter designs.} \vspace{-10pt} \label{fig:PAPR_our_3gpp} \end{figure} In Figs.~\ref{fig:PAPR_our_3gpp} and \ref{fig:TDL_2_4_our}, we show the PAPR and BLER performance for the two MIMO stream transmission setting where the DMRS is transmitted on both \textit{port}-0 and \textit{port}-1 using our proposed method-$1$ transceiver design. It can be seen that both the PAPR and the BLER are identical for both the ports, confirming that the proposed transmitter design produces identical DMRS sequences on both the ports. \begin{figure} \centering \includegraphics[width=0.8\columnwidth]{TDL_4rx_2rx_3gpp-eps-converted-to.pdf} \caption{BLER comparison of length-$6$ DMRS sequences on \textit{port}-0 and \textit{port}-1 with $3$GPP transmitter design.} \vspace{-10pt} \label{fig:TDL_2_4_3gpp} \end{figure} \begin{figure} \centering \includegraphics[width=0.8\columnwidth]{TDL_2_4_rx_our-eps-converted-to.pdf} \caption{BLER comparison of length-$6$ DMRS sequences on \textit{port}-0 and \textit{port}-1 with proposed transmitter design.} \vspace{-10pt} \label{fig:TDL_2_4_our} \end{figure} \begin{figure} \centering \includegraphics[width=0.8\columnwidth]{PAPR_meth1_meth2-eps-converted-to.pdf} \caption{ PAPR comparison of length-$12$ DMRS sequences on \textit{port}-0 and \textit{port}-1 with method-$1$ and method-$2$ transmitter designs respectively.} \label{fig:M1-M2} \end{figure} In Fig.~\ref{fig:M1-M2}, the PAPR of the DMRS sequences on \textit{port}-0 and \textit{port}-1 generated by method-$1$ and method-$2$ is shown. It can be seen that the PAPR is exactly the same for both \textit{port}-0 and \textit{port}-1 in both the methods, confirming that the proposed transmitter designs are equivalent. The same is the case with the BLER performance as well. Therefore, the proposed methods~$1$ and $2$ have been shown to be equivalent both analytically and numerically. \section{Conclusion}\label{sec:Conclusion} In this paper, we proposed a low PAPR reference signal transceiver design for $3$GPP $5$G NR $\frac{\pi}{2}$-BPSK based uplink transmissions. Using the proposed design, the PAPR of the reference signal is significantly reduced compared to the current design of Rel-$15$ $5$G NR systems. Such a design considerably helps to improve the coverage of $5$G systems. Specifically, we have shown a frequency domain and a time domain transceiver design, both of which are equivalent and result in the same system performance in terms of both PAPR and BLER. We have shown how the proposed design can be extended to the case of a MIMO transmission without causing any discrepancy across different MIMO streams, which is not the case for the current Rel-$15$ $3$GPP $5$G NR uplink design.
\section*{Introduction} Let $X$ be an algebraic scheme, that is, a separated scheme of finite type over a ground field $k$, which is not necessarily quasiprojective. A fundamental question is whether or not every coherent sheaf $\shF$ on $X$ is the quotient of some locally free sheaf $\shE$ of finite rank. If this property holds, one says that $X$ has the \emph{resolution property}. Totaro \cite{Totaro 2004} gave a characterization of schemes having the resolution property: They admit some principal $\GL_n$-bundle whose total space is quasiaffine. This should be seen as a far-reaching generalization of the pointed cone attached to an ample invertible sheaf. On schemes $X$ having the resolution property, any coherent sheaf $\shF$ can be replaced by a complex of locally free sheaves of finite rank, which has important consequences for K-theory. If the resolution property is unavailable, one relies on ad hoc approaches, which may become intricate. Here we mention the definition of Chern classes for coherent sheaves on arbitrary compact complex manifolds taking values in Deligne cohomology constructed by Grivaux \cite{Grivaux 2010}. More generally, the resolution property makes sense for \emph{algebraic stack}s. It is then related to Grothendieck's question on the equality of the Brauer group and the cohomological Brauer group \cite{Edidin; Hassett; Kresch; Vistoli 1999}. Note that the resolution property does not hold in each and every situation: One easily constructs non-separated schemes without the resolution property. A natural example is the algebraic stack $\shM_0$ of prestable curves of genus zero, as observed by Kresch \cite{Kresch 2013}. An even more basic question is whether or not any proper scheme $X$ admits locally free sheaves $\shE$ of finite rank that are not free, that is $\shE\not\simeq\O_X^{\oplus r}$. Winkelmann \cite{Winkelmann 1993} showed that this indeed holds for compact complex manifolds. For proper schemes, one has the following facts: Any curve is projective, so there are invertible sheaves $\shL$ with $c_1(\shL)$ arbitrarily large. In contrast, there are normal surfaces $S$ with trivial Picard group, see for example \cite{Schroeer 1999}. However, any surface admits locally free sheaves $\shE$ of rank $n=2$, in fact with $c_2(\shE)$ arbitrarily large (\cite{Schroeer; Vezzosi 2004}, actually the resolution property holds by results of Gross \cite{Gross 2012}). Based on these facts, one may arrive at the perhaps over-optimistic conjecture that \emph{any proper scheme $Y$ should admit locally free sheaves $\shE$ of rank $n=\dim(Y)$ with Chern number $c_n(\shE)$ arbitrarily large}. The main goal of this paper is to provide further bits of evidence for this. Throughout the article, we assume that the ground field $k$ is infinite, if not stated otherwise. One of our results deals with toric varieties of dimension three: \begin{maintheorem} Let $Y$ be a proper toric threefold. Then there are locally free sheaves $\shE$ on $Y$ of rank $n=3$ with arbitrarily large Chern number $c_3(\shE)$. \end{maintheorem} In contrast to toric surfaces, smooth proper toric threefolds $Y$ are not necessarily projective. A characterization of the non-projective ones in terms of triangulations of the 2-sphere was given by Oda (\cite{Oda 1978}, Proposition 9.3). Eikelberg \cite{Eikelberg 1992} gives examples of proper toric threefolds with trivial Picard group, see also the discussions by Fulton \cite{Fulton 1993}, pp.\ 25--26 and Ford and Stimets \cite{Ford; Stimets 2002}.
Examples of proper toric threefolds $Y$ whose \emph{toric} vector bundles of rank $\leq 3$ are trivial were constructed by Payne \cite{Payne 2009}. In other words, the quotient stack $\shY=[Y/\GG_m^3]$ has no non-trivial vector bundles of rank $\leq 3$. This result relies on the theory of branched coverings of cone complexes, together with a computer calculation. Payne also posed the question of whether or not there are nontrivial vector bundles on $Y$ at all. This question was taken up by Gharib and Karu \cite{Gharib; Karu 2012}, and our Theorem provides a positive answer to Payne's question. Note that there has been strong interest in the $K$-theory of toric varieties in the recent past. For example, Anderson and Payne \cite{Anderson; Payne 2013} showed that for proper toric threefolds over algebraically closed ground fields, the canonical map $KH^\circ(X)\ra\text{op}K^\circ(X)$ from the $K$-group of perfect complexes to the operational $K$-groups of Fulton--MacPherson \cite{Fulton; MacPherson 1981} is surjective. Gubeladze \cite{Gubeladze 2004} constructed simplicial toric varieties with surprisingly large $K^\circ(X)$. We would also like to mention results of Corti\~nas, Haesemeyer, Walker and Weibel, which express various $K$-groups of toric varieties in terms of the cdh-topology (\cite{Cortinas; Haesemeyer; Walker; Weibel 2009}, \cite{Cortinas; Haesemeyer; Walker; Weibel 2014}). Our theorem above is actually a simple consequence of the following more general statement, which is the main result of this paper: \begin{maintheorem} Let $Y$ be a proper scheme. Suppose there are a proper birational morphism $X\ra Y$ and a Cartier divisor $D\subset X$ that intersects the exceptional locus in a finite set, and suppose that the proper scheme $D$ is projective. Then there are locally free sheaves $\shE$ of rank $n=\dim(Y)$ on $Y$ with Chern number $c_n(\shE)$ arbitrarily large. \end{maintheorem} Indeed, this is a generalization to higher dimensions of a result of the second author and Vezzosi \cite{Schroeer; Vezzosi 2004} on proper surfaces, where the assumptions are vacuous. It would be interesting to find examples in dimension $n\geq 3$ where all vector bundles of rank $\leq n-1$ are trivial. This result also has applications to toric varieties in arbitrary dimension $n\geq 3$: Indeed, we give characterizations for proper toric $n$-folds $Y$ so that there is a \emph{toric} modification $f:X\ra Y$ and a \emph{toric} divisor $D\subset X$ satisfying the assumptions of our main result, in terms of convexity properties around the ray $\rho$ corresponding to the Weil divisor $f(D)\subset Y$ in the fan $\Delta$ that describes the toric variety $Y=Y_\Delta$. Roughly speaking, any cone $\sigma\in \Star(\rho)$ has to be a \emph{pyramidal extension} of the cone $\sigma'$ generated by the other rays $\rho'\neq\rho$ in $\sigma$. The notion of pyramidal extensions is closely related to the so-called \emph{beneath-and-beyond method} of convex geometry, and leads to a condition on the ray $\rho\in\Delta$ which we choose to call \emph{in Egyptian position}. In dimension $n\leq 3$, any ray is in Egyptian position, but the condition becomes nontrivial in higher dimensions. \medskip The paper is organized as follows: We start in Section \ref{Infinitesimal Neighborhoods} by showing that under certain circumstances, locally free sheaves are determined by their restrictions to infinitesimal neighborhoods of exceptional sets.
This is used in Section \ref{Equivalence} to deduce an equivalence of categories between locally free sheaves on $Y$ and certain locally free sheaves on suitable proper $Y$-schemes $X$. Our main theorem appears in Section \ref{Elementary Transformations}, in which we construct infinitely many locally free sheaves $\shE_t$ on certain proper schemes. To see that these sheaves have unbounded top Chern number, we investigate in Section \ref{Chern Classes} Chern classes for coherent sheaves admitting short global resolutions, without the usual assumption that any coherent sheaf is the quotient of a locally free sheaf. In Section \ref{Toric Varieties}, we apply our result to toric varieties, and in particular to toric threefolds. Section \ref{Examples trivial} contains concrete examples of toric varieties with trivial Picard group for which our results apply. The final Section \ref{Divisors on threefolds} contains a sufficient condition for certain proper threefolds to contain projective divisors. \begin{acknowledgement} The authors wish to thank the referee for a thorough and careful report, which helped to remove some mistakes, to clarify certain arguments, and to improve the overall exposition. \end{acknowledgement} \section{Vector bundles on infinitesimal neighborhoods} \mylabel{Infinitesimal Neighborhoods} In this section, we study the behavior of locally free sheaves near certain closed fibers. The set-up is as follows: Suppose $R$ is a local noetherian ring, denote by $\idealm=\idealm_R$ its maximal ideal, and let $y\in \Spec(R)$ be the corresponding closed point. Let $f:X\ra\Spec(R)$ be a proper morphism, and $X_y=f^{-1}(y)$ be the closed fiber. For each coherent sheaf $\shF$ on $X$, we regard the cohomology groups $H^p(X,\shF)$ as $R$-modules, which are finitely generated, though not of finite length in general. In what follows, we shall consider certain \emph{infinitesimal neighborhoods} of the reduced closed fiber, that is, closed subschemes $E\subset X$ having the same topological space as $f^{-1}(y)\subset X$. \begin{theorem} \mylabel{isomorphic restriction} Let $\shE$ be a locally free sheaf of finite rank on $X$. Suppose that the local noetherian ring $R$ is complete, and that the $R$-modules $H^1(X,\shE)$ and $H^2(X,\shE)$ have finite length. Then there exists an infinitesimal neighborhood $E$ of the reduced closed fiber with the following property: For any locally free sheaf $\shF$ on $X$ with $\shF|E\simeq \shE|E$, we already have $\shF\simeq \shE$. \end{theorem} \proof Set $\shI=\idealm\O_X$. For each $k\geq 0$, let $E_k\subset X$ be the closed subscheme corresponding to $\shI^{k+1}\subset\O_X$; these are infinitesimal neighborhoods of the closed fiber $X_y=E_0$. We have inclusions of subschemes $E_0\subset E_1\subset \ldots$, and the subsheaf $\shI^{k+1}/\shI^{k+2}\subset\O_X/\shI^{k+2}$ is the ideal sheaf for $E_{k}\subset E_{k+1}$. Clearly, $\shI$ annihilates the coherent sheaf $\shI^{k+1}/\shI^{k+2}$, hence the schematic support $A_{k+1}\subset X$ of the latter is contained in $E_0$. Suppose for the moment that we already know that there is an integer $m\geq 0$ such that the groups $H^1(X,\underline{\End}(\shE_{A_{k+1}})\otimes_{\O_{A_{k+1}}} \shI^{k+1}/\shI^{k+2})$, which coincide with \begin{equation} \label{first cohomology} H^1(X,\underline{\End}(\shE)\otimes\shI^{k+1}/\shI^{k+2}) = H^1(X,\shI^{k+1}\underline{\End}(\shE)/\shI^{k+2}\underline{\End}(\shE)), \end{equation} vanish for all $k\geq m$.
We now check that $E=E_m$ has the desired property: Let $\shF$ be a locally free sheaf on $X$ with $\shF|E_m\simeq \shE|E_m$. In light of Corollary \ref{isomorphic increase}, which for the sake of the exposition is deferred to the end of this section, it follows by induction that $\shF|E_k\simeq\shE|E_k$ for all $k\geq m$. In turn, the isomorphism classes of $\shE$ and $\shF$ have the same image under the canonical map $$ H^1(X,\GL_r(\O_X))\lra \invlim_k H^1(E_k,\GL_r(\O_{E_k})). $$ Let $\foX$ be the formal completion of $X$ along the closed fiber. As explained in \cite{Artin 1969}, proof for Theorem 3.5, the canonical map $$ H^1(\foX,\GL_r(\O_\foX))\lra \invlim_k H^1(E_k,\GL_r(\O_{E_k})) $$ is bijective. In other words, the sheaves $\shE$ and $\shF$ are formally isomorphic. Since the local noetherian ring $R$ is complete, we may apply the Existence Theorem (\cite{EGA IIIa}, Theorem 5.1.4) and conclude that $\shE$ and $\shF$ are isomorphic. It remains to verify that the groups (\ref{first cohomology}) indeed vanish for all $k$ sufficiently large. This is a special case of the next assertion. \qed \begin{proposition} \mylabel{cohomology vanishes} Let $\shF$ be a coherent sheaf on our scheme $X$ and $p\geq 1 $ an integer such that the $R$-modules $H^p(X,\shF)$ and $H^{p+1}(X,\shF)$ have finite length. Then there is an integer $m\geq 0$ so that $H^p(X,\idealm^k\shF/\idealm^{k+1}\shF)=0$ for all $k\geq m$. \end{proposition} \proof Consider the Rees ring $S=\bigoplus_{k\geq 0} \maxid^k$ corresponding to the $\maxid$-adic filtration on $R$, which has invariant subring $S_0=R$ and irrelevant ideal $S_+=\bigoplus_{k\geq 1}\maxid^k$. The graded $S$-module $$ \bigoplus_{k\geq 0} H^p(X,\maxid^k\shF) $$ is finitely generated, according to the Generalized Finiteness Theorem (\cite{EGA IIIa}, Corollary 3.3.2). In particular, there is an integer $n\geq 1$ such that $H^p(X,\maxid^{n+i}\shF)= \maxid^i H^p(X,\maxid^n\shF)$ for all $i\geq 0$. The short exact sequence $$ 0\lra\maxid^n\shF\lra\shF\lra\shF/\maxid^n\shF\lra 0 $$ yields an exact sequence $$ H^{p-1}(X,\shF/\maxid^n\shF)\lra H^p(X,\maxid^n\shF)\lra H^p(X,\shF) $$ of finitely generated $R$-modules. The term on the right has finite length by assumption, and the term on the left is annihilated by $\maxid^n\subset R$, thus has finite length as well. So the term in the middle has finite length. In particular, $H^p(X,\maxid^n\shF)$ is annihilated by $\maxid^d$ for some integer $d\geq 0$. Consequently, $H^p(X,\maxid^{n+i}\shF)=0$ for all $i\geq d$. The same arguments apply in degree $p+1$ instead of $p$, and we thus have shown that there is an integer $m\geq 0$ such that $H^p(X,\maxid^k\shF)$ and $H^{p+1}(X,\maxid^k\shF)$ vanish for all $k\geq m$. The short exact sequence $$ 0\lra\maxid^{k+1}\shF\lra\maxid^k\shF\lra\maxid^k\shF/\maxid^{k+1}\shF\lra 0 $$ of sheaves yields an exact sequence $$ H^p(X,\maxid^k\shF)\ra H^p(X,\maxid^k\shF/\maxid^{k+1}\shF)\ra H^{p+1}(X,\maxid^{k+1}\shF) $$ of $R$-modules, and it follows that $H^p(X,\maxid^k\shF/\maxid^{k+1}\shF)$ vanishes for $k\geq m$. \qed \begin{remark} Theorem \ref{isomorphic restriction} remains true if $R$ is the henselization of a ring $A$ with respect to some prime ideal, provided that $A$ is finitely generated over some field or some excellent Dedekind ring. Indeed, by \cite{Artin 1969}, Theorem 3.5, the restriction map $$ H^1(X,\GL_r(\O_X))\ra H^1(\foX,\GL_r(\O_\foX)) $$ is injective, which relies on Artin's Approximation Theorem \cite{Artin 1969}, Theorem 1.12.
In light of Popescu's generalization \cite{Popescu 1985}, Theorem 1.3 (see also the discussions of Swan \cite{Swan 1998} and Conrad and de Jong \cite{Conrad; de Jong 2002}), it remains valid under the assumption that $R$ is any henselian excellent local ring. \end{remark} \begin{remark} \mylabel{no torsion} We may assume that the structure sheaf $\O_E$ of the infinitesimal neighborhood $f^{-1}(y)_\red\subset E$ contains no nonzero local sections whose support is finite. Indeed, if $\shJ\subset\O_E$ is the ideal of such local sections, then we have $H^1(E,\underline{\End}(\shE)\otimes\shJ)=0$, and Corollary \ref{isomorphic increase} below tells us that a locally free sheaf $\shF$ with $\shF|{E'}\simeq\shE|{E'}$ already has $\shF|E\simeq\shE|E$, where $E'\subset E$ denotes the closed subscheme defined by $\shJ$. \end{remark} \begin{remark} The proof for Theorem \ref{isomorphic restriction} reveals that one may choose the infinitesimal neighborhood as $E=X_y$ provided that the groups (\ref{first cohomology}) vanish for all $k\geq 0$. However, it appears difficult to give a natural interpretation for this condition if the ideal sheaf $\shI=\maxid\O_X$ is not invertible. \end{remark} \medskip In the proof for Theorem \ref{isomorphic restriction}, we have used Corollary \ref{isomorphic increase} below, and we now gather the necessary facts from non-abelian cohomology. Let $X$ be a scheme, $\shI\subset \O_X$ be a quasicoherent ideal sheaf with $\shI^2=0$, and $X'\subset X$ be the corresponding closed subscheme. Let $\shE$ be a locally free sheaf of finite rank on $X$, and $\shE'=\shE_{X'}=\shE\otimes_{\O_X}\O_{X'}=\shE/\shI\shE$ its restriction to $X'$. Each homomorphism $f:\shE\ra\shE$ necessarily has $f(\shI\shE)\subset\shI\shE$, and therefore induces a map $f':\shE'\ra\shE'$. We thus obtain a homomorphism of group-valued sheaves $$ \underline{\Aut}(\shE)\lra\underline{\Aut}(\shE'),\quad f\longmapsto f'. $$ Furthermore, each homomorphism $h:\shE\ra\shI\shE$ yields a homomorphism $$ \shE\lra\shE,\quad s\longmapsto s+h(s). $$ Using $\shI^2=0$, we see that $s\mapsto s-h(s)$ is an inverse, and that the resulting mapping $\underline{\Hom}(\shE,\shI\shE)\ra\underline{\Aut}(\shE)$ is a homomorphism of group-valued sheaves. We thus obtain a sequence \begin{equation} \label{automorphism sheaves} 0\lra \underline{\Hom}(\shE,\shI\shE)\lra\underline{\Aut}(\shE) \lra \underline{\Aut}(\shE')\lra 1 \end{equation} of group-valued sheaves. Note that the term on the left is commutative and written additively, whereas the other terms are in general non-commutative, and written multiplicatively. \begin{lemma} \mylabel{exact} The sequence (\ref{automorphism sheaves}) is exact. \end{lemma} \proof This is a local problem, so it suffices to treat the case that $\shE=\O_X^{\oplus r}$ is free. It follows immediately from the definition of the maps that the sequence is a complex. Now let $x\in X$ be a point, $x\in U\subset X$ an open neighborhood, and $A'\in \GL_r(\Gamma(U,\O_{X'}))$. Shrinking $U$, we may lift the entries of the invertible matrix $A'$ and obtain a matrix $A\in\Mat_r(\Gamma(U,\O_X))$. Then $\det(A)$ is a unit, because it is a unit modulo the nilpotent ideal $\Gamma(U,\shI)$. Hence the complex is exact at the term on the right. An element $A\in\Mat_r(\Gamma(U,\O_X))$ mapping to the identity matrix in $\GL_r(\Gamma(U,\O_{X'}))$ differs from the identity matrix by some $h\in\Mat_r(\Gamma(U,\shI))$, so the complex is exact in the middle.
Since hom functors are left exact in the second variable, the induced map $\underline{\Hom}(\shE,\shI\shE)\ra \underline{\Hom}(\shE,\shE)$ is injective, and therefore the corresponding map to $\underline{\Aut}(\shE)$ is injective as well. \qed \medskip One may simplify the term on the left in the short exact sequence (\ref{automorphism sheaves}) in rather general circumstances. In what follows, we tacitly suppose that the kernel $\shN$ of the canonical homomorphism $\O_X\ra\underline{\End}(\shI)$ is quasicoherent, which holds, for example, if $\shI$ is of finite presentation (\cite{EGA I}, Corollary 2.2.2), and in particular if $X$ is locally noetherian. Let $A\subset X$ be the corresponding closed subscheme, which is called the \emph{schematic support} for the sheaf $\shI$. Note that the $\O_X$-module $\shI$ is actually an $\O_A$-module. Since $\shI^2=0$, we have $\shN\supset\shI$, hence $A\subset X'$. \begin{lemma} \mylabel{identification} There is a canonical identification $$ \underline{\End}_{\O_A}(\shE_A)\otimes_{\O_A}\shI=\underline{\Hom}_{\O_X}(\shE,\shI\shE) $$ of quasicoherent $\O_X$-modules. \end{lemma} \proof The ideal sheaf $\shN\subset\O_X$ annihilates $\shI\shE$, hence the canonical injection $$ \underline{\Hom}_{\O_A}(\shE/\shN\shE,\shI\shE) \lra\underline{\Hom}_{\O_X}(\shE,\shI\shE) $$ is bijective. Using the identifications $\shE/\shN\shE=\shE_A$ and $\shI\shE=\shI\otimes_{\O_A}\shE_A$ and the fact that $\shE_A$ is locally free on $A$, we obtain an identification $$ \underline{\Hom} (\shE/\shN\shE,\shI\shE)= \shE_A^\vee\otimes \shE_A\otimes\underline{\Hom} (\O_A,\shI)= \underline{\End} (\shE_A)\otimes\shI, $$ where all tensor products and hom sheaves are over $\O_A$. \qed \medskip Now recall that $X'\subset X$ is a closed subscheme whose ideal $\shI$ has square zero, and $A\subset X'$ is the closed subscheme whose ideal is the kernel of $\O_X\ra\underline{\End}(\shI)$. Under these assumptions, we get the following result by combining the preceding lemmas: \begin{proposition} \mylabel{matrix exact} There is a short exact sequence $$ 0\lra\underline{\End}_{\O_A}(\shE_A)\otimes_{\O_A}\shI\lra\underline{\Aut}(\shE) \lra \underline{\Aut}(\shE_{X'})\lra 1 $$ of group-valued sheaves. \end{proposition} This has the following consequence, which was used in a crucial way for the proof for Theorem \ref{isomorphic restriction}: \begin{corollary} \mylabel{isomorphic increase} Assumptions as above. If $H^1(X,\underline{\End}_{\O_A}(\shE_A)\otimes_{\O_A}\shI)=0$, then a locally free sheaf $\shF$ on $X$ is isomorphic to $\shE$ if and only if $\shF_{X'}\simeq\shE_{X'}$. \end{corollary} \proof The short exact sequence of group-valued sheaves in the Proposition yields an exact sequence $$ H^1(X,\underline{\End}_{\O_A}(\shE_A)\otimes_{\O_A}\shI)\lra H^1(X,\underline{\Aut}(\shE))\lra H^1(X,\underline{\Aut}(\shE_{X'})) $$ of pointed sets, by the machinery of non-abelian cohomology developed in \cite{Giraud 1971}, Chapter III, \S3. The term in the middle is the set of isomorphism classes of $\O_X$-modules that are locally isomorphic to $\shE$, which coincides with the set of isomorphism classes of locally free sheaves of rank $r=\rank(\shE)$. Exactness means that the image of the map on the left is the set of isomorphism classes whose restrictions to $X'$ become isomorphic to $\shE_{X'}$. By assumption, the term on the left consists of a single element. \qed \section{An equivalence of categories} \mylabel{Equivalence} Let $X$ be a scheme.
We denote by $\Vect(X)$ the exact category of locally free sheaves of finite rank on $X$. Given a closed subscheme $E\subset X$, we write $\Vect_E(X)\subset\Vect(X)$ for the full subcategory of all locally free sheaves $\shE$ on $X$ whose restriction to $E$ is free, that is, $\shE|E \simeq\O_E^{\oplus r}$, with $r=\rank(\shE)$. Note that if $X$ is not connected, one has to regard the rank as a locally constant function $x\mapsto\rank_x(\shE)$. More generally, if $\Phi=\left\{E_\alpha\right\}_{\alpha\in I}$ is a collection of closed subschemes (``family of supports''), we denote by $$ \Vect_\Phi(X)\subset \Vect(X) $$ the full subcategory of the $\shE$ that become free on each $E_\alpha\in\Phi$. Now let $Y$ be a noetherian scheme, and $f:X\ra Y$ be a proper morphism with $\O_Y=f_*(\O_X)$. Suppose that the coherent sheaves $R^1f_*(\O_X)$ and $R^2f_*(\O_X)$ have finite supports. Applying Theorem \ref{isomorphic restriction}, we choose for each closed point $y\in Y$ with $\dim f^{-1}(y)\geq 1$ an infinitesimal neighborhood $f^{-1}(y)_\red\subset E_y$ so that locally free sheaves of finite rank on $X\otimes_{\O_{Y}}\O_{Y,y}^\wedge$ that become free on $E_y$ are already free. Let $\Phi$ be the collection of these $E_y$. We thus obtain a functor $f^*:\Vect(Y)\ra\Vect_\Phi(X)$. \begin{theorem} \mylabel{equivalence} Let $f:X\ra Y$ be a proper morphism of noetherian schemes with $\O_Y=f_*(\O_X)$ such that $R^1f_*(\O_X)$ and $R^2f_*(\O_X)$ have finite supports, and let $\Phi$ be the collection of closed subschemes defined above. Then for every $\shE\in\Vect_\Phi(X)$, the coherent $\O_Y$-module $f_*(\shE)$ is locally free, and the functors $$ f^*:\Vect(Y)\lra\Vect_\Phi(X)\quadand f_*:\Vect_\Phi(X)\lra\Vect(Y) $$ are quasi-inverse equivalences of categories. \end{theorem} \proof Suppose first that $Y=\Spec(R)$ is the spectrum of a complete local noetherian ring, with closed point $y\in Y$. The assertion is trivial if the closed fiber is zero-dimensional, because then $f:X\ra Y$ is an isomorphism. Suppose now that $\dim f^{-1}(y)\geq 1$. Let $\shE$ be a locally free sheaf of finite rank on $X$ whose restriction to $E_y\subset X$ is free. This implies, by the choice of $E_y$, that $\shE$ is free. Using the assumption $\O_Y= f_*(\O_X)$, we infer that $f_*(\shE)$ is free. To see that $f^*f_*$ and $f_*f^*$ are isomorphic to the respective identity functors, it thus suffices to verify this for the structure sheaves $\O_X$ and $\O_Y$, which again follows from $\O_Y= f_*(\O_X)$. Thus the assertion holds in this special case. We now come to the general case. Let $\shE\in\Vect_\Phi(X)$. To verify that the coherent sheaf $f_*(\shE)$ is locally free, it suffices to check that its stalks at closed points are free. Fix a closed point $y\in Y$. In order to check that $f_*(\shE)_y$ is free, we may assume that $Y=\Spec(R)$ is the spectrum of a local ring. By faithfully flat descent (see \cite{SGA 1}, Expose VIII, Corollary 1.11), it suffices to treat the case that $R$ is complete, which indeed holds by the preceding paragraph. Summing up, for each $\shE\in\Vect_\Phi(X)$, the coherent sheaf $f_*(\shE)$ is locally free. To see that the natural adjunction map $f^*(f_*(\shE))\ra \shE$ is bijective for each $\shE\in\Vect_\Phi(X)$, it again suffices to treat the case that $Y$ is the spectrum of a complete local noetherian ring. It then follows that $\shE$ is free, and bijectivity follows from $\O_Y= f_*(\O_X)$. 
Finally, checking the bijectivity of the natural adjunction map $\shF\ra f_*(f^*(\shF))$ with $\shF\in\Vect(Y)$ is a local problem, so we may assume that $\shF$ is free, and then conclude with $\O_Y= f_*(\O_X)$. \qed \begin{remark} Suppose that $Y$ is normal and admits a resolution of singularities, and that $f:X\ra Y$ is proper and birational. The assumption that $R^1f_*(\O_X)$ and $R^2f_*(\O_X)$ have finite support holds in particular if the local schemes $\Spec(\O_{Y,y})$ have only rational singularities, for all nonclosed points $y\in Y$. \end{remark} \medskip We are mainly interested in the following situation: Suppose that $Y$ is a noetherian scheme and $f:X\ra Y$ is a proper morphism with $\O_Y=f_*(\O_X)$. Let $E\subset X$ be the \emph{exceptional locus}, that is, the set of points $x\in X$ where $\dim_xf^{-1}(f(x))>0$, which is closed by Chevalley's Semicontinuity Theorem (\cite{EGA IVc}, Theorem 13.1.3). This $E$ can be viewed as the union of all irreducible curves mapping to points. Its image $f(E)\subset Y$, which is a closed set, is called the \emph{critical locus}. The exceptional locus $E\subset X$ can also be regarded as the set of points where the morphism $f:X\ra Y$ is ramified in the sense of \cite{EGA IVd}, Definition 17.3.1. Thus there is a canonical scheme structure on $E$, being the support of the coherent sheaf $\Omega^1_{X/Y}$. In the following, however, we shall regard the exceptional locus either as a closed subset, or choose an infinitesimal neighborhood that makes the exceptional locus large enough in the following sense: \begin{corollary} \mylabel{critical finite} Let $Y$ be a noetherian scheme, $f:X\ra Y$ a proper morphism with $\O_Y=f_*(\O_X)$, whose critical locus is finite. Then there is an infinitesimal neighborhood $E$ of the exceptional locus with the following property: For every $\shE\in\Vect_E(X)$, the coherent sheaf $f_*(\shE)$ is locally free, and the functors $$ f^*:\Vect(Y)\lra\Vect_E(X)\quadand f_*:\Vect_E(X)\lra\Vect(Y) $$ are quasi-inverse equivalences of categories. \end{corollary} \proof Let $y_1,\ldots,y_r\in Y$ be the points comprising the critical locus $C\subset Y$. Clearly, the coherent sheaves $R^pf_*(\O_X)$, $p\geq 1$, are supported by the finite set $C$. Moreover, the $f^{-1}(y_i)$, $1\leq i\leq r$ are precisely the fibers that are not zero-dimensional, and their union is the exceptional locus for $f:X\ra Y$. We then choose infinitesimal neighborhoods $f^{-1}(y_i)\subset E_{y_i}$ as in the Theorem, and for the union $E=E_{y_1}\cup\ldots\cup E_{y_r}$ the assertion follows. \qed \section{Elementary transformations} \mylabel{Elementary Transformations} Fix an infinite ground field $k$, let $Y$ be a proper scheme, and let $f:X\ra Y$ be a proper birational morphism with $\O_Y=f_*(\O_X)$. Let $E\subset X$ be the exceptional locus. \begin{theorem} \mylabel{infinitely classes} Assumptions as above. Suppose that the following three conditions hold: \begin{enumerate} \item The critical locus $f(E)\subset Y$ is finite. \item There is an effective Cartier divisor $D\subset X$ such that $D\cap E$ is finite. \item The proper scheme $D$ is projective. \end{enumerate} Then there are infinitely many isomorphism classes of locally free sheaves on $Y$ of rank $n=\dim(Y)$. \end{theorem} \proof Let us first discuss the case $\dim(D)=0$. Then each irreducible component $X'\subset X$ intersecting $D$ is 1-dimensional. Since $X\ra Y$ is birational, $X'\cap E$ is finite. It follows that the irreducible component $C=f(X')$ of $Y$ is a curve.
Since proper curves are projective, and every invertible sheaf on $C$ can be represented by a Cartier divisor whose support is disjoint from the closure of $Y\smallsetminus C\subset Y$, one easily sees that the restriction map $\Pic(Y)\ra\Pic(C)$ is surjective. Hence the locally free sheaves on $Y$ of the form $\shE=\shL^{\oplus n}$, with $\shL$ invertible and $(\shL\cdot C)=\deg(\shL_C)>0$, yield the assertion. Suppose now that $\dim(D)\geq 1$. According to Corollary \ref{critical finite}, we may choose a suitable infinitesimal neighborhood $E$ of the exceptional set so that the pullback functor $f^*$ induces an equivalence between the category $\Vect(Y)$ of locally free sheaves of finite rank on $Y$ and the category $\Vect_E(X)$ of locally free sheaves on $X$ whose restriction to $E$ is free. This allows us to work entirely on $X$ rather than $Y$. In light of Remark \ref{no torsion}, we additionally may assume that the structure sheaf $\O_E$ contains no nontrivial local sections supported on the finite set $D\cap E$. Let $\shE$ be some locally free sheaf of rank $n=\dim(Y)=\dim(X)$ on $X$ that becomes free on $E$. For example, we could take the free sheaf $\shE=\O_X^{\oplus n}$. The following construction yields other locally free sheaves $\shE'$ on $X$ that become free on $E$. To start with, fix an ample invertible sheaf $\O_D(1)$. For simplicity, we write $\shE_D=\shE|D$ for the induced locally free sheaf on the effective Cartier divisor $D\subset X$. By Proposition \ref{schematic support} below applied to the projective scheme $D$, which has dimension $\leq n-1$, and the empty closed subscheme $A=\emptyset$, there is some integer $s\geq 1$ so that there exists a surjection $\shE_D\ra\O_D(s)$. Note that here our assumption that the ground field $k$ is infinite enters. Composing with the canonical projection, we get a surjection $\shE\ra\O_D(s)$. The short exact sequence $$ 0\lra\shF\lra\shE\lra\O_D(s)\lra 0 $$ defines a coherent sheaf $\shF$ on $X$, which is locally free because the stalks of $\O_D(s)$ have projective dimension $\leq 1$ as modules over the stalks of $\O_X$. Such $\shF$ are called \emph{elementary transformations} of $\shE$. One may recover the latter from the former: Dualizing the preceding short exact sequence yields $$ 0\lra\shE^\vee\lra\shF^\vee\lra\underline{\Ext}^1(\O_D(s),\O_X)\lra 0. $$ This is exact, because the sheaves $\underline{\Hom}(\O_D(s),\O_X)$ and $\underline{\Ext}^1(\shE,\O_X)$ vanish. Now we view $\underline{\Ext}^p(\O_D(s),\shM)$ and $\underline{\Ext}^p(\O_D,\shM)\otimes\O_D(-s)$ as $\delta$-functors in $\shM$. Obviously, both are exact and vanish on injective $\O_X$-modules $\shM$, hence are universal. Moreover, we have a canonical bijection $\underline{\Hom}(\O_D,\shM)\otimes\O_D(-s)\ra \underline{\Hom}(\O_D(s),\shM)$ given by $h_1\otimes h_2\mapsto h_1\circ h_2$, where we regard the local section $h_2$ of $\O_D(-s)$ as a homomorphism $\O_D(s)\ra \O_D$. In turn, our two universal $\delta$-functors are isomorphic (\cite{Grothendieck 1957}, Section 2.1). Using the resulting identification $\underline{\Ext}^1(\O_D(s),\O_X)=\shN_D(-s)$, where $\shN_D=\O_D(D)$ is the \emph{normal sheaf} of the effective Cartier divisor, which is an invertible sheaf on $D$, we rewrite the preceding short exact sequence as \begin{equation} \label{first sequence} 0\lra\shE^\vee\lra\shF^\vee\lra\shN_D(-s)\lra 0, \end{equation} and denote the surjective map on the right by $\phi:\shF^\vee\ra\shN_D(-s)$.
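For orientation, the identification $\underline{\Ext}^1(\O_D(s),\O_X)=\shN_D(-s)$ used here can also be checked by a direct computation; the following is a sketch assuming only the defining sequence of the effective Cartier divisor $D\subset X$. Applying $\underline{\Hom}(-,\O_X)$ to $0\ra\O_X(-D)\ra\O_X\ra\O_D\ra 0$ yields the exact sequence $$ 0\lra \underline{\Hom}(\O_D,\O_X)\lra\O_X\lra\O_X(D)\lra\underline{\Ext}^1(\O_D,\O_X)\lra 0, $$ whence $\underline{\Ext}^1(\O_D,\O_X)\simeq\O_X(D)/\O_X=\O_D(D)=\shN_D$; twisting by $\O_D(-s)$ then recovers $\underline{\Ext}^1(\O_D(s),\O_X)=\shN_D(-s)$.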
Now suppose that $\psi:\shF^\vee\ra\shN_D(t-s)$ is another surjection for some integer $t\geq 0$. Then the short exact sequence \begin{equation} \label{second sequence} 0\lra \shE'^\vee\lra\shF^\vee\stackrel{\psi}{\lra}\shN_D(t-s)\lra 0 \end{equation} defines a new locally free sheaf $\shE'=\shE'_{t,\psi}$ of rank $n$, whose dual is isomorphic to $\ker(\psi)$. In the special case $t=0$ and $\psi=\phi$, we obviously have $\shE'=\shE$. In general, however, the exact sequences (\ref{first sequence}) and (\ref{second sequence}) yield $$ \chi(\shE'^\vee)- \chi(\shE^\vee)= P(t-s) - P(-s), $$ where $P(t)=\chi(\shN_D(t))=\sum_i(-1)^ih^i(\shN_D(t))$ is the Hilbert polynomial of the invertible sheaf $\shN_D$ on $D$ with respect to the ample sheaf $\O_D(1)$. This Hilbert polynomial has degree $\dim(D)\geq 1$, hence $P(t-s)-P(-s)$ is a nonzero polynomial in $t$, of the same degree. It follows that the locally free sheaves $\shE'=\shE'_{t,\psi}$ are pairwise non-isomorphic for $t$ sufficiently large. It remains to choose $t\geq 0$ and $\psi$ so that the locally free sheaf $\shE'$ becomes free on the closed subscheme $E\subset X$. Restricting the short exact sequence of sheaves (\ref{first sequence}) to the subscheme $E$, one obtains an exact sequence $$ \underline{\Tor}_1^{\O_X}(\O_E,\shN_D(-s))\lra\shE^\vee_E\lra\shF^\vee_E\lra\shN_D(-s)| E\lra 0. $$ The map on the left vanishes, because the tor sheaf is supported by the finite set $D\cap E$, and $\shE^\vee_E$, regarded as a locally free sheaf on $E$, has no local sections supported by $D\cap E$. The latter holds, because we have arranged things so that $\O_E$ contains no such local sections. The upshot is that we get a short exact sequence $$ 0\lra\shE^\vee_E\lra\shF^\vee_E\stackrel{\phi_E}{\lra}\shN_D(-s)| E\lra 0. $$ Note that the term on the right is supported by $D\cap E$, which is finite. From (\ref{second sequence}) we likewise get a short exact sequence $$ 0\lra\shE'^\vee_E\lra\shF^\vee_E\stackrel{\psi_E}{\lra}\shN_D(t-s)| E\lra 0. $$ Now suppose that $t\geq 0$ and $\psi$ are chosen so that there is an isomorphism of skyscraper sheaves $u:\shN_D(-s)| E\ra \shN_D(t-s)| E$ with $\psi_E=u\circ\phi_E$. It then follows that $\shE'^\vee_E=\ker(\psi_E)$ is isomorphic to $\shE^\vee_E=\ker(\phi_E)$, hence the restriction $\shE'_E\simeq\shE_E\simeq\O_E^{\oplus n}$ is free. This can be achieved as follows: Choose an integer $t_0\geq 0$ so that $\O_D(t)$ is globally generated for all $t\geq t_0$. According to Proposition \ref{schematic support} below, there is an integer $t_1\geq 0$ so that for all $t\geq t_1$, there is a homomorphism $\shF^\vee_D\ra\shN_D(t-s)$ whose cokernel has as schematic support an infinitesimal neighborhood of $D\cap E\subset D$. Note that here again the assumption that the ground field $k$ is infinite enters. Now suppose that we have an integer $t$ satisfying $t\geq\max(t_0,t_1)$. First, choose a homomorphism $\shF^\vee_D\ra\shN_D(t-s)$ as above, and regard it as a homomorphism $h:\shF^\vee\ra\shN_D(t-s)$ whose cokernel has an infinitesimal neighborhood of $D\cap E\subset D$ as schematic support. Thus the base-change of $h$ to $E$ yields an exact sequence $$ \shF^\vee_{E}\stackrel{h_{E}}{\lra}\shN_D(t-s)|E\lra\O_{D\cap E}\lra 0. $$ The two terms on the right are invertible $\O_{D\cap E}$-modules, therefore the surjection on the right is bijective, such that $h_E=0$.
Second, choose a global section $g\in H^0(D,\O_D(t))$ that does not vanish at the finite subset $D\cap E$, and regard it as a map $g:\O_X\ra\O_D(t)$, which is surjective at $D\cap E$. Now consider the homomorphism $$ \psi=\phi\otimes g + h:\shF^\vee\lra\shN_D(t-s). $$ On the subsheaf $\shE^\vee\subset\shF^\vee$, the map $\phi$ obviously vanishes and the map $\psi$ coincides with $h$. The latter is surjective outside the subscheme $D\cap E\subset X$. Base-changed to the subscheme $E\subset X$, the map $h$ vanishes and the map $\psi$ coincides with $\phi\otimes g$, which is surjective at all points of $D\cap E$. The upshot is that $\psi$ is surjective and thus qualifies for our construction: its kernel is locally free of rank $n$, hence of the form $\shE'^\vee$ for some locally free sheaf $\shE'$. We just saw that the base-change $\psi_E:\shF^\vee_E\lra\shN_D(t-s)|E$ coincides with $(\phi\otimes g)_E$. Thus there is a commutative diagram $$ \begin{CD} \shF^\vee_E @>\phi_E>> \shN_D(-s)|E \\ @V\id VV @VV\id\otimes gV\\ \shF^\vee_E @>>\psi_E> \shN_D(t-s)|E. \end{CD} $$ As discussed above, this implies that $\shE'_E\simeq\shE_E\simeq\O_{E}^{\oplus n}$. The upshot is that for each $t\geq\max(t_0,t_1)$, we have indeed found some $\psi$ giving a locally free sheaf $\shE'=\shE'_{t,\psi}$ on $X$ whose restriction to $E$ is free. This yields infinitely many isomorphism classes of locally free sheaves $\shE'$ on $X$ of rank $n$ that are free on $E\subset X$. \qed \medskip In the preceding proof, we have used the following fact: \begin{proposition} \mylabel{schematic support} Let $X$ be a quasiprojective scheme, $\O_X(1)$ an ample invertible sheaf, $\shE$ a locally free sheaf of finite rank $r>\dim(X)$, and $A\subset X$ a finite closed subscheme. Then there is an integer $t_0$ so that for all $t\geq t_0$, there is a homomorphism $\shE\ra\O_X(t)$ such that the schematic support of the cokernel is an infinitesimal neighborhood of $A$. \end{proposition} \proof We first reduce to the case that $X$ is projective: Choose an embedding $X\subset\PP^n$, consider the schematic closure $X'=\bar{X}$, and extend $\shE$ to a coherent sheaf $\shE'$ on $X'$. According to a result of Moishezon (\cite{Moishezon 1969}, Lemma 3.5, see also \cite{Bogomolov; Landia 1990}), there is a blowing-up $X''\ra X'$ so that the strict transform $\shE''$ of $\shE'$ becomes locally free. Moreover, we may assume that the center of the blowing-up is disjoint from $X$, and that $\O_X(1)$ extends to some ample invertible sheaf on $X''$. Thus, it suffices to treat the case that $X$ is projective. Next, we reduce to the case that $A$ is disjoint from the set of associated points $\Ass(\O_X)\subset X$: Let $\shI\subset \O_X$ be a quasicoherent ideal consisting of torsion sections supported by a single point $a\in A$, with $\length(\shI_a)=1$, such that $\shI_a\simeq\kappa(a)$. In other words, $\shI_a$ is a 1-dimensional vector subspace inside the socle of $\O_{X,a}$. Let $X'\subset X$ be the corresponding closed subscheme, and $A'=A\cap X'$. Then there is a module decomposition $\O_{X,a}\simeq\O_{X',a}\oplus\shI_a$, which globalizes to $\O_X\simeq\O_{X'}\oplus\shI$. Let $\shE'=\shE/\shI\shE$, and suppose there is an integer $t_0$ so that for each $t\geq t_0$, we have a homomorphism $\shE'\ra\O_{X'}(t)$ such that the cokernel has an infinitesimal neighborhood of $A'\subset X'$ as support.
Using the module decomposition, we obtain a homomorphism $\shE\ra\O_{X'}(t)\subset\O_{X}(t)$ whose cokernel has as schematic support an infinitesimal neighborhood of $A'$ that is strictly larger than $A'$, and thus must contain $A$. Inductively, we are reduced to the situation that $A$ contains no associated point of $X$. We now proceed by induction on the rank $r=\rank(\shE)$. The case $r=1$ is trivial, because then the scheme $X$ is zero-dimensional, the closed subscheme $A\subset X$ is also open, and every locally free sheaf is free. Suppose now that $r\geq 2$, and that the assertion is true for $r-1$. According to the Atiyah--Serre Theorem (see \cite{Kleiman 1969}, Theorem 4.7), there is an invertible sheaf $\shL\subset \shE$ that is locally a direct summand, such that $\shF=\shE/\shL$ is locally free of rank $r-1$. We thus have a short exact sequence \begin{equation} \label{serre extension} 0\lra\shL\lra\shE\lra\shF\lra 0. \end{equation} Choose an integer $n\geq 0$ so that $\shL^\vee(t)$ and $\O_X(t)$ are globally generated for all $t\geq n$, and regular global sections $$ s\in H^0(X,\shL^\vee(n))\quadand s_i\in H^0(X,\O_X(n+i)) $$ for $i=0,\ldots,n-1$ that vanish on $A\subset X$. Let $D,D_i\subset X$ be the corresponding effective Cartier divisors and set $X_i=D\cup D_0\cup D_i$, which contain $A$ and have $\dim(X_i)<\dim(X)$, and in particular $\rank(\shF_{X_i})>\dim(X_i)$. Let $\shI\subset \O_X$ be the ideal sheaf of $A\subset X$. Choose an integer $n'\geq 0$ so that for all $t\geq n'$, the groups \begin{gather*} \Ext^1(\shF,\shI(t)) = H^1(X,\shI\otimes\shF^\vee(t) ), \\ \Ext^1(\shE,\shL(t-3n-i))=H^1(X, \shE^\vee\otimes\shL(t-3n-i)) \end{gather*} vanish for all $i=0,\ldots,n-1$, and that furthermore there are homomorphisms $\shF_{X_i}\ra\O_{X_i}(t)$ whose cokernels have an infinitesimal neighborhood of $A\subset X_i$ as schematic support. The latter is possible by the induction hypothesis applied to the locally free sheaves $\shF_{X_i}$ on $X_i$ for $i=0,\ldots,n-1$. We claim that $t_0=\max(3n,n')$ does the job: Suppose $t\geq t_0$. Write this integer as $t=n+mn+(n+i)$ for some $m\geq 1$ and some $0\leq i\leq n-1$, and regard the sections $s,s_i$ as homomorphisms $s:\shL\ra\O_X(n)$ and $s_i:\O_X\ra\O_X(n+i)$. Their tensor product yields a homomorphism $$ f=s\otimes s_0^m\otimes s_i:\shL\lra\O_X(t), $$ whose cokernel is an invertible sheaf on some Cartier divisor. By construction, this Cartier divisor contains $A$, and is an infinitesimal neighborhood of $X_i$, the latter being defined by $s\otimes s_0\otimes s_i$. It follows that $f_{X_i}=0$. The short exact sequence (\ref{serre extension}) yields a long exact sequence $$ \Hom(\shE,\shI(t))\lra \Hom(\shL,\shI(t))\lra\Ext^1(\shF,\shI(t)), $$ and the term on the right vanishes. Thus we may extend the homomorphism $f$ to a homomorphism $f:\shE\ra\O_X(t)$, denoted by the same letter, which factors over $\shI(t)$, such that the cokernel is annihilated by $\shI$. Now choose a homomorphism $\shF_{X_i}\ra\O_{X_i}(t)$ so that the schematic support of the cokernel is an infinitesimal neighborhood of $A\subset X_i$, and let $$ g:\shE\lra\shF\lra\shF_{X_i}\lra\O_{X_i}(t) $$ be the composite map. Arguing as above, we may lift this to a map $g:\shE\ra\O_X(t)$, denoted by the same letter. It remains to check that the sum $h=f+g:\shE\ra\O_X(t)$ has a cokernel whose schematic support is an infinitesimal neighborhood of $A\subset X$. On the subsheaf $\shL\subset\shE$, the map $g$ obviously vanishes and $f,h$ coincide.
Since $f$ is surjective outside $X_i$, the same holds for $h$. On the closed subscheme $X_i\subset X$, the base-change $f_{X_i}$ vanishes, such that $h_{X_i}=g_{X_i}$. By construction, the cokernel of $g_{X_i}$ has as schematic support an infinitesimal neighborhood of $A\subset X_i$, so the same holds for $h_{X_i}$, and thus for $h$, because $A\subset X_i$. \qed \begin{remark} We have used the assumption that the ground field $k$ is infinite to apply the Atiyah--Serre Theorem on the existence of invertible subsheaves that are locally direct summands, provided that the rank of the vector bundle exceeds the dimension of the scheme (compare \cite{Serre 1958}, \cite{Atiyah 1957}, \cite{Kleiman 1969}, and also \cite{Okonek; Schneider; Spindler 1980}). We do not know whether this holds true for finite fields. There might be relations to the Bertini Theorem on smooth hyperplane sections over finite fields due to Gabber \cite{Gabber 2001} and Poonen \cite{Poonen 2004}. However, Theorem \ref{infinitely classes} holds true for finite ground fields $k$ if one allows larger ranks: There is an integer $d\geq 1$ so that there are infinitely many isomorphism classes of locally free sheaves of rank $r=d\dim(Y)$. Indeed, one takes a suitable finite field extension $k\subset k'$ so that the construction exists over $Y'=Y\otimes_kk'$, and obtains the desired locally free sheaves on $Y$ as push-forwards of the locally free sheaves on $Y'$. \end{remark} \begin{remark} The proper morphism $f:X\ra Y$ in Theorem \ref{infinitely classes} induces a proper morphism $D\ra f(D)$ whose exceptional set is finite. Thus $D\ra f(D)$ is finite, and there is an ample effective divisor $H\subset D$ disjoint from the exceptional set. Consequently, $f(H)\subset f(D)$ is an effective divisor whose preimage on $D$ is ample. It follows with \cite{EGA IIIa}, Proposition 2.6.2 that $f(H)$ is ample, such that the proper scheme $f(D)$ is projective. \end{remark} \begin{remark} In Theorem \ref{infinitely classes}, the invertible sheaf $\shL=\O_X(D)$ is relatively semiample over $Y$ by the Zariski--Fujita Theorem \cite{Fujita 1983}. This holds because the relative base locus, which is contained in $D\cap E$, is finite over $Y$. Replacing $X$ by the relative homogeneous spectrum of the sheaf of graded $\O_Y$-algebras $\shA=f_*(\bigoplus_{t\geq 0}\shL^{\otimes t})$, one thus may as well assume that the exceptional set $E\subset X$ is 1-dimensional. \end{remark} \begin{remark} The arguments in this section carry over verbatim to algebraic spaces and complex spaces. \end{remark} \section{Computation of top Chern classes} \mylabel{Chern Classes} In this section, we show that the vector bundles $\shE'=\shE'_{t,\psi}$ constructed in the proof for Theorem \ref{infinitely classes} attain infinitely many Chern classes. In fact, their top Chern classes, regarded as numbers, become arbitrarily large. Naturally, Chern classes for coherent sheaves show up in this computation. Some care has to be taken for the definition of such Chern classes, because in our situation it is not permissible to assume that all coherent sheaves are quotients of locally free sheaves. For a noetherian scheme $X$, one writes $\Coh(X)$ for the abelian category of coherent sheaves, and $K^\circ(X)_\naif$ for the $K$-group of the exact category $\Vect(X)$ of locally free sheaves of finite rank (compare \cite{SGA 6}, Expose IV, Section 2). We write $[\shE]\in K^\circ(X)_\naif$ for the class of $\shE\in\Vect(X)$.
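To fix ideas, recall (compare \cite{SGA 6}, Expose IV, Section 2) that $K^\circ(X)_\naif$ may be described as the free abelian group on the isomorphism classes of objects of $\Vect(X)$, modulo the relations $$ [\shE]=[\shE']+[\shE''] $$ for every short exact sequence $0\ra\shE'\ra\shE\ra\shE''\ra 0$ in $\Vect(X)$.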
The next observation allows us to extend this to certain $\shF\in\Coh(X)$. \begin{lemma} \mylabel{well-defined} Let $X$ be a noetherian scheme, and $0\ra\shE_1\ra\shE_0\ra\shF\ra 0$ a short exact sequence of coherent sheaves, with $\shE_0$ and $\shE_1$ locally free. Then the difference $[\shE_0]-[\shE_1]\in K^\circ(X)_\naif$ depends only on the isomorphism class of $\shF\in\Coh(X)$. \end{lemma} \proof We have to check that the arguments of Borel and Serre (\cite{Borel; Serre 1958}, Section 4), which work for exact sequences of arbitrary length but rely on the existence of global resolutions for \emph{all} coherent sheaves, carry over. Suppose $0\ra\shE_1'\ra\shE'_0\ra\shF\ra 0$ is another short exact sequence. If there is a commutative diagram $$ \begin{CD} 0 @>>> \shE_1' @>>> \shE'_0 @>>> \shF @>>> 0\\ @. @Vf_1VV @Vf_0VV @VVgV\\ 0 @>>> \shE_1 @>>> \shE_0 @>>> \shF@>>> 0\\ \end{CD} $$ with $f_0,f_1$ surjective and $g$ bijective, the Snake Lemma implies that the induced map $\ker(f_1)\ra\ker(f_0)$ is bijective, and thus $[\shE_0]-[\shE_1]=[\shE'_0]-[\shE'_1]$. To exploit this in the general case, consider the fiber product $\shE''_0=\shE_0\times_\shF\shE'_0$, which can be defined by the exact sequence $$ 0\lra\shE_0''\lra\shE_0\oplus\shE'_0\stackrel{p-p'}{\lra}\shF\lra 0 $$ where $p:\shE_0\ra\shF$ and $p':\shE_0'\ra\shF$ are the canonical maps. The coherent sheaf $\shE''_0$ is already locally free, because the stalks of $\shF$ have projective dimension $\pd(\shF_x)\leq 1$. One easily sees that the map $\shE_0''\ra\shF$ given by $p\circ \pr_1=p'\circ \pr_2 $ is surjective, and its kernel $\shE''_1$ is thus also locally free. Moreover, the canonical projection $\shE_0''\ra\shE_0$ is surjective. The Snake Lemma ensures that the induced map $\shE_1''\ra\shE_1$ is surjective as well. The preceding paragraph thus gives $[\shE''_0]-[\shE''_1]=[\shE_0]-[\shE_1]$. By symmetry, we also get $[\shE''_0]-[\shE''_1]=[\shE'_0]-[\shE'_1]$. In turn, the differences in question depend only on the isomorphism class of $\shF\in\Coh(X)$. \qed \medskip The following ad hoc terminology will be useful throughout: Let us call a coherent sheaf $\shF$ \emph{admissible} if for each $x\in X$, the projective dimension of the stalk is $\pd(\shF_x)\leq 1$, and there is surjection $\shE_0\ra\shF$ for some locally free sheaf $\shE_0$ of finite rank. This ensures that the kernel $\shE_1\subset\shE_0$ is locally free as well. By the preceding lemma, the class $[\shF]=[\shE_0]-[\shE_1]\in K^\circ(X)_\naif$ is well-defined. \begin{lemma} \mylabel{additive} Let $0\ra\shF'\ra\shF\ra\shF''\ra 0$ be a short exact sequence of admissible coherent sheaves. Then $[\shF]=[\shF']+[\shF'']$ in the group $K^\circ(X)_\naif$. \end{lemma} \proof Choose surjections $p:\shE_0'\ra\shF'$ and $q:\shE\ra\shF$ with $\shE'_0$ and $\shE$ locally free of finite rank. Composing $q$ with the canonical projection $\pr:\shF\ra\shF''$ yields a surjection $\pr\circ q:\shE\ra\shF''$. We then obtain a commutative diagram $$ \begin{CD} 0 @>>> \shE'_0 @>\can>> \shE'_0\oplus\shE @>\can>> \shE @>>> 0\\ @. @VpVV @VVp+qV @VV\pr\circ qV\\ 0 @>>> \shF' @>>> \shF @>>> \shF'' @>>> 0, \end{CD} $$ whose rows are exact and whose vertical maps are surjective. Applying the Snake Lemma, one easily gets the assertion. 
\qed \medskip Given an admissible coherent sheaf $\shF$, it is thus possible to define the \emph{total Chern class} $$ c_\bullet(\shF)=1+ c_1(\shF) + c_2(\shF) + \ldots = c_\bullet(\shE_0)/c_\bullet(\shE_1)\in A^\bullet(X), $$ where $A^\bullet(X)$ is any suitable cohomology theory with Chern classes for locally free sheaves satisfying the usual axioms, compare \cite{Borel; Serre 1958}. In light of Lemma \ref{additive}, the \emph{Whitney Sum Formula} $c_\bullet(\shF)=c_\bullet(\shF')c_\bullet(\shF'')$ holds true for short exact sequences of admissible sheaves $0\ra\shF'\ra\shF\ra\shF''\ra 0$. We now examine the following situation: Let $k$ be an infinite ground field, $X$ an irreducible proper scheme of dimension $n=\dim(X)$, and $\shN$ an invertible sheaf on $X$. Suppose that $D\subset X$ is an effective Cartier divisor, and $\shL_D$ is an invertible sheaf on $D$ that is the quotient of a locally free sheaf of finite rank on $X$. Then the same holds for the tensor products $\shN_D\otimes\shL^{\otimes t}_D$ for all $t\geq 0$. Let us assume that there is a single locally free sheaf $\shA$ of finite rank, having surjections $\shA\ra \shN_D\otimes\shL^{\otimes t}_D$ for all $t\geq 0$. Denote by $\shE_t\subset\shA$ the kernel of the surjection $\shA\ra\shN_D\otimes\shL^{\otimes t}_D$, which is locally free of the same rank, such that we have a short exact sequence \begin{equation} \label{single vector bundle} 0\lra\shE_t\lra\shA\lra \shN_D\otimes\shL^{\otimes t}_D\lra0. \end{equation} We seek to express the total Chern class $c_\bullet(\shN_D\otimes\shL_D^{\otimes t})\in A^\bullet(X)$, or rather its inverse, as a function of $t\geq 0$. For simplicity, consider $l$-adic cohomology $A^i(X)=H^{2i}(X,\QQ_l(i))$, where $l$ denotes a prime number different from the characteristic of the ground field. We make the identification $A^n(X)=H^{2n}(X,\QQ_l(n))=\QQ_l$ and regard cohomology classes in top degree as numbers. Moreover, we consider the descending filtration $\Fil^jA^\bullet(X)=\bigoplus_{i\geq j} A^i(X)$. \begin{theorem} \mylabel{total chern class} Assumptions as above. Suppose that $\shL_D$ is globally generated. Then there is a proper birational morphism $f:X'\ra X$ and classes $\alpha_j\in \Fil^{j+1}A^\bullet(X')$, $1\leq j\leq n-1$, such that $$ c_\bullet(f^*(\shN_D\otimes\shL_D^{\otimes t}))^{-1} = 1+\alpha_1t+\ldots+\alpha_{n-1}t^{n-1} $$ for all integers $t\geq 0$. Moreover, the coefficient $\alpha_{n-1}\in \Fil^nA^\bullet(X')=A^n(X')$ is given by $\alpha_{n-1}=(-1)^nc_1^{n-1}(\shL_D)$. \end{theorem} \proof First, we consider the special case that $\shL_D\in\Pic(D)$ is the restriction of some $\shL\in\Pic(X)$. The exact sequence $$ 0\lra\shN\otimes\shL^{\otimes t}(-D)\lra\shN\otimes\shL^{\otimes t}\lra\shN_D\otimes\shL_D^{\otimes t}\lra 0 $$ shows that the inverse of the total Chern class for $\shN_D\otimes\shL_D^{\otimes t}$ is $$ (1+N+tL-D)/(1+N+tL) = 1 - D\sum_{i=0}^{n-1} (-1)^i(N+tL)^i. $$ Here $N,L\in A^1(X)$ are the first Chern classes of the invertible sheaves $\shN$ and $\shL$, respectively. Using the equality of numbers $D\cdot L^{n-1}=c_1^{n-1}(\shL_D)$, the statement already follows with $X'=X$. Second, consider the special case that $D=D_1\cup D_2$ is the schematic union of two effective Cartier divisors without common irreducible components. Interpreting the intersection number $c_1^{n-1}(\shL_D)/(n-1)!$ as the coefficient in degree $n-1$ of the Hilbert polynomial $\chi(\shL_D^{\otimes t})$, we deduce $c_1^{n-1}(\shL_{D})=c_1^{n-1}(\shL_{D_1})+c_1^{n-1}(\shL_{D_2})$.
Moreover, $D_1\cap D_2$ is an effective Cartier divisor in both $D_1,D_2$, and we have a short exact sequence $0\ra\O_{D_2}(-D_1)\ra\O_D\ra\O_{D_1}\ra 0$. Clearly, the restrictions $\shL_{D_1}, \shL_{D_2}$ are globally generated and admissible. Now suppose that our statement is already true for the effective Cartier divisors $D_1,D_2\subset X$. Applying the Whitney Sum Formula to the short exact sequence $$ 0\lra\shN'_{D_2}\otimes\shL_{D_2}^{\otimes t} \lra\shN_D\otimes\shL_D^{\otimes t}\lra \shN_{D_1}\otimes\shL_{D_1}^{\otimes t}\lra 0, $$ where $\shN'=\shN(-D_1)$, easily yields the assertion. We now come to the general situation. We proceed by induction on the \emph{Kodaira--Iitaka dimension} $k=\kod(\shL_D)$ of the invertible sheaf $\shL_D\in\Pic(D)$. For globally generated invertible sheaves, this is the dimension of the image of the morphism $D\ra\PP^m$ coming from the linear system $H^0(D,\shL_D)$, where $m+1=h^0(\shL_D)$. Moreover, it coincides with the \emph{numerical Kodaira--Iitaka dimension}, which is the largest integer $k\geq 0$ so that the intersection number $c_1^k(\shL_D)\cdot V$ is nonzero for some integral closed subscheme $V\subset D$ of dimension $k=\dim(V)$. In the case $k=0$, we have $\shL_D=\O_D$, and the assertion holds by the first special case. Now suppose $k\geq 1$, and that the assertion is true for $k-1$. Choose a regular global section of $\shL_D$, and let $A\subset D$ be its zero locus, such that $\shL_D=\O_D(A)$. Let $f:X'\ra X$ be the blowing-up with center $A\subset X$. Since $f$ is birational, the locally free sheaves $\shE_t$ and $\shE'_t=f^*(\shE_t)$ have the same top Chern numbers. Furthermore, the exact sequence (\ref{single vector bundle}) induces an exact sequence $$ 0\lra \shE_t'\lra\shA'\lra f^*(\shN_D\otimes\shL^{\otimes t}_D)\lra 0, $$ where $\shA'=f^*(\shA)$. The latter is indeed exact, because $\O_{X'}$ has no torsion sections supported by the effective Cartier divisor $E=f^{-1}(A)$. Thus the coherent sheaf $\shF_t=f^*(\shN_D\otimes\shL^{\otimes t}_D)$ is admissible. Consider the effective Cartier divisor $D'=f^{-1}(D)$. The universal property for blowing-ups gives a partial section $\sigma:D\ra X'$ for the structure morphism $f:X'\ra X$. We thus obtain a short exact sequence $$ 0\lra\shF_t(-\sigma(D))|E \lra \shF_t \lra \shF_t|{\sigma(D)}\lra 0, $$ where the term on the right is invertible on the effective Cartier divisor $\sigma(D)\subset X'$, and the term on the left is invertible on the effective Cartier divisor $E\subset X'$. This follows from Lemma \ref{blowing-up} below. We now define another locally free sheaf $\tilde{\shE}'_t$ as the kernel of the composite surjection $\shA'\ra\shF_t\ra\shF_t|\sigma(D)$, such that we have a commutative diagram $$ \begin{CD} 0 @>>> \shE'_t @>>> \shA'@>>> \shF_t @>>> 0\\ @. @VVV @VV\id V @VV\can V \\ 0 @>>> \tilde{\shE}'_t @>>> \shA'@>>> \shF_t|{\sigma(D)} @>>> 0. \end{CD} $$ By the Snake Lemma, the vertical map on the left is injective, with cokernel $$ \shF_t(-\sigma(D))|E = f^*(\shN_A \otimes\O_A(tA))(-\sigma(D)|E). $$ In turn, this sheaf is admissible. The Cartier divisor $D'=f^{-1}(D)\subset X'$ is the union of the two effective Cartier divisors $E,\sigma(D)$, which have no common irreducible component. Consider the globally generated invertible sheaf $\shL_{D'}=f^*(\shL_D)=f^*(\O_D(A))$ on $D'$. Its restriction to $\sigma(D)$ is nothing but $\O_{\sigma(D)}(E)$, which is the restriction of the invertible sheaf $\O_{X'}(E)$.
Thus the assertion holds for $\shL_{D'}|\sigma(D)$ by the first special case. Moreover, the restriction $\shL_{D'}|E$ equals $f^*(\O_A(A))$. This sheaf has Kodaira--Iitaka dimension smaller than that of $\shL_D=\O_D(A)$, by its interpretation via intersection numbers. We thus may apply the induction hypothesis to $\shL_{D'}|E$. Using the second special case, we infer the assertion for $\shL_{D'}$. \qed \medskip In the preceding proof, we have used the following fact: \begin{lemma} \mylabel{blowing-up} Let $X$ be a noetherian scheme, $D\subset X$ an effective Cartier divisor, $A\subset D$ an effective Cartier divisor, and $f:X'\ra X$ the blowing-up with center $A$. Let $\sigma:D\ra X'$ be the canonical partial section, $E=f^{-1}(A)$ the exceptional divisor, and $D'=f^{-1}(D)$. Then $E,D', \sigma(D)\subset X'$ are effective Cartier divisors, the subschemes $E,\sigma(D)\subset X'$ have no common irreducible components, their schematic union is $D'$, and there is a short exact sequence \begin{equation} \label{subscheme sequence} 0\lra \O_E(-\sigma(D))\lra\O_{D'}\lra\O_{\sigma(D)}\lra 0. \end{equation} \end{lemma} \proof We give a sketch and leave some details to the reader. The problem is local, so it suffices to treat the case that $X=\Spec(R)$ is affine, and that there is a regular sequence $f,g\in R$ such that $D\subset X$ is defined by the ideal $fR$, and $A\subset X$ is defined by the ideal $I=(f,g)$. Consider the Rees ring $S=\bigoplus I^n$, such that $X'=\Proj(S)$. The closed subscheme $D'\subset X'$ can be identified with the homogeneous spectrum of $S\otimes_R(R/fR)=\bigoplus I^n/fI^n$, the exceptional divisor $E\subset X'$ with that of $S\otimes_R(R/I)=\bigoplus I^n/I^{n+1}$, and the section $\sigma(D)\subset X'$ with that of the graded $R/fR$-algebra $\bigoplus J^n$, where $J=I\cdot R/fR$ is the induced invertible ideal. The homogeneous components of the latter can be rewritten as $J^n=(I^n+fR)/fR=I^n/(fR\cap I^n)$. Thus, in order to verify $D'=E\cup\sigma(D)$, it suffices to check that for each degree $n\geq 1$, the sequence \begin{equation} \label{homogeneous components} 0\lra I^n/fI^n \lra I^n/I^{n+1} \times I^n/(fR\cap I^n) \lra I^n/(I^{n+1}+(fR\cap I^n))\lra 0 \end{equation} is exact. Consider first the special case that $R=\ZZ[f,g]$, where $f,g$ are indeterminates. One easily sees that each term in (\ref{homogeneous components}) is a free $\ZZ$-module, with basis given by certain monomials in $f$ and $g$, from which one easily infers exactness. Moreover, it follows from \cite{Eagon; Hochster 1974}, Theorem 2 that if $M$ is a module over $\ZZ[f,g]$ such that $f,g$ is a regular sequence on $M$, then $\Tor_p(\ideala/\idealb,M)=0$ for all $p>0$ and all monomial ideals $\ideala,\idealb\subset \ZZ[f,g]$ occurring in the terms of (\ref{homogeneous components}), for example $\ideala=I^n$ and $\idealb=fI^n$. Using the long exact sequences for Tor, we infer that the exactness of (\ref{homogeneous components}) for the ring $\ZZ[f,g]$ implies the exactness for any local ring $R$ with regular sequence $f,g\in R$. This indeed shows that $D'=E\cup\sigma(D)$ holds in general. By the universal property of blowing-ups, the closed subscheme $E\subset X'$ is an effective Cartier divisor. Since $f$ is regular on $R$, the same holds for the polynomial ring $R[T]$ and the subalgebra $S\subset R[T]$, such that $D'\subset X'$ is an effective Cartier divisor. Localizing at the generic points of $D\subset X$, one easily sees that $\sigma(D)$ and $E$ have no common irreducible component.
Using that there are no associated points on $X$ and $X'$ contained in the Cartier divisors $D$ and $E$, respectively, we infer that $\sigma(D) = D'-E$ is an effective Cartier divisor. Finally, the Snake Lemma applied to the diagram $$ \begin{CD} 0 @>>> \O_{X'}(-E-\sigma(D)) @>>> \O_{X'} @>>> \O_{D'} @>>> 0\\ @. @VVV @VVV @VVV\\ 0 @>>> \O_{X'}(-\sigma(D)) @>>> \O_{X'} @>>> \O_{\sigma(D)} @>>> 0 \end{CD} $$ yields the short exact sequence (\ref{subscheme sequence}). \qed \medskip As an application, we can now compute the top Chern class for the locally free sheaves constructed in the course of the proof for Theorem \ref{infinitely classes}. We work in the following set-up: Let $X$ be a proper irreducible scheme of dimension $n=\dim(X)$. Suppose that $D\subset X$ is an effective Cartier divisor, $\shN$ is an invertible sheaf on $X$, and $\shL_D$ is a globally generated invertible sheaf on $D$. Suppose we have a locally free sheaf $\shA$ of finite rank, and for each $t\geq 0$ some surjection $\shA\ra\shN_D\otimes\shL_D^{\otimes t}$. Define the locally free sheaf $\shE_t$ by the short exact sequence \begin{equation} \label{defining sequence} 0\lra\shE_t^\vee\lra\shA\lra \shN_D\otimes\shL_D^{\otimes t}\lra 0, \end{equation} and regard its top Chern class $c_n(\shE_t)\in A^n(X)=H^{2n}(X,\QQ_l(n))=\QQ_l$ as a number. \begin{proposition} \mylabel{chern number} Assumptions as above. Then there are $\beta_0,\ldots,\beta_{n-1}\in\QQ_l$ with $$ c_n(\shE_t)=\beta_{n-1}t^{n-1}+\beta_{n-2}t^{n-2}+\ldots+\beta_0, $$ for all $t\geq 0$, and the coefficient in degree $n-1$ is given by $\beta_{n-1}= c_1^{n-1}(\shL_D)$. \end{proposition} \proof Applying the Whitney Sum Formula to the short exact sequence (\ref{defining sequence}) and using Theorem \ref{total chern class}, we see that $$ c_\bullet(\shE_t^\vee)=(1+c_1(\shA)+\ldots+c_n(\shA)) \cdot (1+\alpha_1t+\ldots+\alpha_{n-1}t^{n-1}) $$ for certain cohomology classes $\alpha_j\in\Fil^{j+1}A^\bullet(X)$, with $\alpha_{n-1}=(-1)^nc_1^{n-1}(\shL_D)$. Strictly speaking, the classes $\alpha_j$ lie on $X'$ for some proper birational morphism $X'\ra X$, but this can be neglected because the canonical map $A^n(X)\ra A^n(X')$ is bijective. We conclude that $c_n(\shE_t^\vee)$ is a polynomial function of degree $\leq n-1$ in $t\geq 0$. Its coefficient in degree $n-1$ is $(-1)^nc_1^{n-1}(\shL_D)$, because $c_i(\shA)\cdot\alpha_{n-1}\in\Fil^{n+i}A^\bullet(X)=0$ for all $i\geq 1$. The statement follows from the general fact that $c_i(\shE^\vee)=(-1)^ic_i(\shE)$ for any locally free sheaf $\shE$ of finite rank. \qed \medskip Combining this computation with the proof for Theorem \ref{infinitely classes}, we obtain the following, which is the main result of this paper: \begin{theorem} \mylabel{main result} Let $Y$ be a proper scheme. Suppose there are a proper birational morphism $X\ra Y$ and a Cartier divisor $D\subset X$ that intersects the exceptional locus in a finite set, and suppose that the proper scheme $D$ is projective. Then there are locally free sheaves $\shE$ of rank $n=\dim(Y)$ on $Y$ with Chern number $c_n(\shE)$ arbitrarily large. \end{theorem} Over the field of complex numbers $k=\CC$, we see that there are infinitely many algebraic vector bundles of rank $n=\dim(Y)$ that are non-isomorphic as topological vector bundles. Of course, in this situation one could use singular cohomology $A^i(X)=H^{2i}(X^\an,\ZZ)$ of the associated complex space $X^\an$ rather than $l$-adic cohomology $A^i(X)=H^{2i}(X,\QQ_l(i))$.
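To illustrate the preceding computation in the lowest relevant dimension, suppose that $n=3$ and that, as in the proof for Theorem \ref{infinitely classes}, the sheaf $\shL_D=\O_D(1)$ is ample on the surface $D$; this merely spells out Proposition \ref{chern number} in a special case. Then $$ c_3(\shE_t)=c_1^2(\shL_D)\,t^2+\beta_1t+\beta_0, $$ and the leading coefficient $c_1^2(\shL_D)$ is the degree of the projective surface $D$ with respect to $\O_D(1)$, hence positive. In particular, the top Chern numbers $c_3(\shE_t)$ grow quadratically in $t$ and thus become arbitrarily large.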
\section{Toric varieties} \mylabel{Toric Varieties} In this section we study toric varieties and formulate a condition ensuring that a given toric prime divisor becomes $\QQ$-Cartier on a small modification. If furthermore the toric divisor is projective, we conclude with Theorem \ref{main result} that there are non-trivial vector bundles. It turns out that this condition automatically holds in dimension $n=3$. For general facts on toric varieties, we refer to \cite{Cox; Little; Schenck 2011} and \cite{Kempf et al. 1973}. Fix an infinite ground field $k$. As customary, we denote by $T = \mathbb{G}_m^n$ the standard torus, $M=\ZZ^{\oplus n}$ its character group and $N = \Hom_\ZZ(M, \ZZ)$ the dual group of $1$-parameter subgroups. Moreover, set $N_\RR = N \otimes_\ZZ \RR$ and $M_\RR = M \otimes_\ZZ \RR$. Any \emph{strictly convex rational polyhedral cone} $\sigma \subset N_\RR $ is of the form $\sum_{i = 1}^t \RR_{\geq 0} v_i$, where $v_1, \dots, v_t\in N\subset N_\RR$ are lattice vectors, such that $\sigma$ does not contain non-trivial linear subspaces. Its dual cone is given by $\check{\sigma} = \{m \in M_\RR \mid n(m) \geq 0$ for all $n \in \sigma\}$. The affine toric variety $U_\sigma$ associated to $\sigma$ is the spectrum of the monoid ring $k[\check{\sigma} \cap M]$. The \emph{linear span} of $\sigma$ is the linear subspace of $N_\RR$ generated by $\sigma$. Its dimension is also called the dimension of the cone $\sigma$. Inside this linear span, we can distinguish between the interior and the boundary of $\sigma$. We call the former the {\em relative interior} of $\sigma$. A {\em face} of $\sigma$ is either $\sigma$ itself or given by an intersection $\sigma \cap H$, where $H\subset N_\RR$ is a hyperplane disjoint from the relative interior of $\sigma$. A proper face (i.e.\ a face $\neq \sigma$) is again a strictly convex rational polyhedral cone contained in $H$. The set of faces of $\sigma$ is closed under intersection and partially ordered, where we write $\tau \preceq \eta$ if and only if $\tau \subseteq \eta$. For any integer $l\geq 0$ we denote by $\sigma(l)$ the set of $l$-dimensional faces of $\sigma$. The most important faces are the {\em rays} $\rho\in\sigma(1)$ and the {\em facets} $\eta\in\sigma(d - 1)$, where $d = \dim \sigma$. Note that $\sigma = \sum_{\rho \in \sigma(1)} \rho$. From now on, we assume that $\dim \sigma = n$. Given a ray $\rho \in \sigma(1)$, we shall formulate certain conditions on the pair $(\sigma, \rho)$. For this, we use the {\em beneath-and-beyond method} from convex geometry (see \cite{Edelsbrunner 1987}, \S 8.4). In its original form, it deals with polytopes rather than cones. However, we can always choose an affine hyperplane $H$ which passes through the interior of $\sigma$ such that the cross-section $P = \sigma \cap H$ is a compact polytope and $\sigma$ coincides with the $\RR_{\geq 0}$-linear span of $P$. Moreover, there is a one-to-one correspondence between the non-zero faces of $\sigma$ and those of $P$, where the former are the $\RR_{\geq 0}$-linear spans of the latter. This way, the corresponding terminology in \cite{Edelsbrunner 1987} straightforwardly carries over to our setting. Consider a facet $\eta = \sigma \cap H_\eta$. Then the hyperplane $H_\eta$, which is unique for facets, separates $N_\RR$ into two half spaces, one of which contains the interior of $\sigma$.
We say that $x\in N_\RR$ is {\em beneath} $H_\eta$ if it is contained in the same half space that contains the relative interior of $\sigma$; if it is contained in the opposite half space, we call $x$ {\em beyond} $H_\eta$. Now set $$ \sigma' = \sum_{\rho' \in \sigma(1) \smallsetminus \{\rho\}} \rho'. $$ One can distinguish two cases: First, $\dim \sigma' = n - 1$ and therefore $\sigma'\subset\sigma$ is a proper face. Second, $\dim \sigma' = n$, i.e.\ $\sigma'$ is a cone contained in $\sigma$ and, in general, will share some of its faces with $\sigma$. The beneath-and-beyond method is an inductive procedure to describe the faces of $\sigma$ in terms of the faces of $\sigma'$ and the relative position of $\rho$ with respect to the hyperplanes of the facets of $\sigma'$. Here, we are interested in the following special case: \begin{definition} Let $\rho$, $\sigma' \subset \sigma$ be as before. We say that $\sigma$ is a {\em pyramidal extension} of $\sigma'$ by $\rho$, if one of the following holds: \begin{enumerate} \item $\dim \sigma' = n - 1$. \item $\dim \sigma' = n$ and there exists precisely one facet $\eta\subset\sigma'$ such that the relative interior of $\rho$ is beyond $H_\eta$, and beneath every other facet of $\sigma'$. \end{enumerate} In the first case, we set $\sigma'' = \sigma = \sigma' + \rho$. In the second case we write $\sigma'' = \eta + \rho$. \end{definition} Note that in both cases, $\sigma''$ is an $n$-dimensional cone, which in \cite{Edelsbrunner 1987} is called \emph{pyramidal update} of $\sigma'$ or $\eta$, respectively. \begin{example}\label{pyramid} Let $\tau$ be an $(n - 1)$-dimensional cone and $\kappa = \RR_{\geq 0} x$ for some $x \in N_\RR \smallsetminus H_\tau$. Then $\sigma = \tau + \kappa$ is a pyramidal extension with $\sigma' = \tau$, $\sigma'' = \sigma$ and $\rho = \kappa$. \end{example} \begin{example}\label{simplex} Recall that an $n$-dimensional cone is called {\em simplicial} if it is generated by $n$ rays. In this case, for any $\rho \in \sigma(1)$, the cone $\sigma'$ as above is $(n - 1)$-dimensional and therefore $\sigma$ is a pyramidal extension of $\sigma'$ by $\rho$. \end{example} \begin{example}\label{threedimensionalcones} Let $\sigma$ be a non-simplicial $3$-dimensional cone and $\rho$ be any element of $\sigma(1)$. Then $\rho = \tau_1 \cap \tau_2$ for two facets $\tau_1, \tau_2 \in \sigma(2)$ and there are $\kappa_1, \kappa_2 \in \sigma(1)$ such that $\tau_1 = \rho + \kappa_1$ and $\tau_2 = \rho + \kappa_2$. Then it is elementary to see that $\sigma$ is a pyramidal extension of the $3$-dimensional cone $\sigma' = \sum_{\rho' \in \sigma(1) \smallsetminus\{\rho\}} \rho'$ with respect to the facet $\eta = \kappa_1 + \kappa_2$. Together with Example \ref{simplex}, this shows that every $3$-dimensional cone is a pyramidal extension. \end{example} We now paraphrase from \cite{Edelsbrunner 1987}, Lemmas 8.6 and 8.7 the classification of faces of $\sigma$ and $\sigma''$ in terms of those of $\sigma'$ and $\eta$ for pyramidal extensions. \begin{lemma}\label{pyramidalfaces} Let $\rho, \sigma', \sigma'' \subset \sigma$ be as before. \begin{enumerate} \item\label{pyramidalfacesi} If $\dim \sigma' = n - 1$, then every face of $\sigma'$ is a face of $\sigma''$. Moreover, any cone of the form $\tau + \rho$ with $\tau \preceq \sigma'$ is a face of $\sigma''$ and $\sigma''$ has no other faces. \item\label{pyramidalfacesii} If $\dim \sigma' = n$, then every face of $\eta$ is a face of $\sigma''$.
Moreover, any cone of the form $\tau + \rho$ with $\tau \preceq \eta$ is a face of $\sigma''$ and $\sigma''$ has no other faces. \item\label{pyramidalfacesiii} If $\dim \sigma' = n$, then every proper face of $\sigma'$ is a face of $\sigma$ if it does not coincide with $\eta$. For every proper face $\tau$ of $\eta$, the cone $\tau + \rho$ is a proper face of $\sigma$. There are no other proper faces of $\sigma$. \end{enumerate} \end{lemma} Note that the characterization (i) will be convenient later on though it is somewhat redundant, because $\sigma'' = \sigma$ holds. Recall that a collection $\Delta$ of cones is called a {\em fan} if it is closed under taking intersections and faces. \begin{proposition}\label{subdivisionfan} If $\dim \sigma' = n$, then $\sigma' \cap \sigma'' = \eta$, and $\sigma' \cup \sigma'' = \sigma$, and the faces of $\sigma'$ and $\sigma''$ form a fan. \end{proposition} To any toric variety $Y$ there is associated a fan $\Delta$ that encodes the orbit structure of the $T$-action on $Y$. More precisely, every $\sigma \in \Delta$ corresponds to an open affine subset $U_\sigma = \Spec k[\check{\sigma} \cap M]$ which, in turn, contains a unique minimal orbit $\orb(\sigma)$. This correspondence is compatible with inclusions, i.e.\ $\tau \preceq \sigma$ if and only if $U_\tau \subseteq U_\sigma$ if and only if $\orb(\sigma)$ is contained in the closure of $\orb(\tau)$ in $Y$. An orbit closure $V(\sigma) = \overline{\orb(\sigma)}$ by itself is again a toric variety with respect to the action of the torus $T / T_\sigma$, where $T_\sigma \subset T$ denotes the stabilizer subgroup of $\orb(\sigma)$. Its orbit structure can be described with the help of the {\em star} $\Star(\sigma) = \{\tau \in \Delta \mid \sigma \preceq \tau\}$. All $\tau \in \Star(\sigma)$ have $\sigma$ as a common face and the sets $\bar{\tau} = (\tau + \langle \sigma \rangle_\RR) / \langle \sigma \rangle_\RR$ form a fan $\bar{\Delta}_\sigma$ in $N_\RR / \langle \sigma \rangle_\RR$ which encodes the toric structure of $V(\sigma)$. Again, we are only interested in two special situations. The first is the case where $\tau$ is $(n - 1)$-dimensional. Then $\dim V(\tau) = 1$ and there are only three possibilities for what $V(\tau)$ can look like, depending on whether $\tau$ is contained in none, one, or two $n$-dimensional cones which correspond to $V(\tau) \simeq \mathbb{A}^1 \smallsetminus \{0\}$, $V(\tau) \simeq \mathbb{A}^1$, and $V(\tau) \simeq \PP^1$, respectively. The second case is where $\rho$ is a ray in $\Delta$ such that $V(\rho) = D_\rho$ is a {\em toric Weil divisor}. In what follows, we write $\Delta(1)$ for the collection of rays in the fan $\Delta$, as customary. \begin{lemma}\label{pyramidalcartier} Let $\rho \in \Delta(1)$ be such that for every $n$-dimensional cone $\sigma \in \Star(\rho)$, we have $\dim \sigma' = n - 1$. Then $D_\rho \subset Y$ is $\QQ$-Cartier. \end{lemma} \begin{proof} We have to find an integer $c\neq 0$ so that $cD_\rho\subset Y$ is Cartier. Any toric Weil divisor is of the form $D = \sum_{\kappa \in \Delta(1)} a_\kappa D_\kappa$ with $a_\kappa \in \ZZ$, and $D$ is Cartier if and only if for every maximal cone $\sigma \in \Delta$ there exists $m_\sigma \in M$ such that $m_\sigma(l_\kappa) = -a_\kappa$ for every $\kappa \in \sigma(1)$, where $l_\kappa \in N$ denotes the \emph{primitive generator} of $\kappa$. For our divisor $D_\rho$, we have $a_\kappa = 0$ for $\kappa \neq \rho$ and $a_\rho = 1$. Therefore, for any $\sigma \in \Delta$ which does not contain $\rho$, we can choose $m_\sigma = 0$.
If $\rho \in \sigma(1)$ then, because the rays of $\sigma'$ lie in a hyperplane in $N_\RR$, we can choose $m'_\sigma \in M_\RR$ so that $m'_\sigma(l_\kappa) = 0$ for every $\kappa \in \sigma'(1)$ and $m'_\sigma(l_\rho) = 1$. Then for a suitable multiple $c$, we have $m_\sigma = c \cdot m'_\sigma \in M$ for all $\sigma \in \Star(\rho)$ and hence the $m_\sigma$ describe the Cartier divisor $c D_\rho$. \end{proof} Under the assumptions of Proposition \ref{subdivisionfan}, the cones $\sigma'$ and $\sigma''$ generate a fan which is supported on $\sigma$. It arises from the fan generated by $\sigma$ by removing the cone $\sigma$ and introducing three new cones $\sigma',\sigma'',\sigma' \cap \sigma''$. This implies that $\sigma' \cup \sigma''$ extends to a refinement $\Delta'$ of the fan $\Delta$ such that the associated toric morphism $X \rightarrow Y$ corresponds to the contraction of a toric subvariety $V(\eta) \simeq \mathbb{P}^1$. Moreover, if $D_\rho'$ denotes the strict transform of $D_\rho$ on $X$, then $D'_\rho \cap V(\eta)$ consists of precisely one point. \begin{definition} We say that $\rho \in \Delta(1)$ is in {\em Egyptian position} if every $n$-dimensional cone $\sigma \in \Star(\rho)$ is a pyramidal extension of $\sigma'$ by $\rho$. \end{definition} If $\Delta$ contains a ray $\rho$ in Egyptian position, then for every $\sigma \in \Star(\rho)$ with $\dim \sigma' = n$, we can consider the modification $X \rightarrow Y$ corresponding to inserting the extra facets $\sigma' \cap \sigma''$ for every such $\sigma$. By our discussion above, the exceptional locus $E\subset X$ is a disjoint union of copies of $\mathbb{P}^1$, such that $\dim E \cap D_\rho' = 0$. Summing up: \begin{proposition}\label{Egyptiandivisor} Let $Y$ be an $n$-dimensional toric variety associated to a fan $\Delta$ in $N$ and $\rho \in \Delta(1)$ in Egyptian position. For the corresponding toric modification $f : X \ra Y$ with exceptional set $E\subset X$, denote by $D'_\rho \subset X$ the strict transform of the toric prime divisor $D_\rho \subset Y$ associated to $\rho$. Then $E$ is a curve, $D_\rho' \cap E$ is finite, and the induced morphism $D'_\rho \ra D_\rho$ is an isomorphism. \end{proposition} In general, multiples of a divisor that is quasiprojective as a scheme are not necessarily quasiprojective. The following shows that this problem does not occur for toric prime divisors. \begin{proposition}\label{multiples} Let $Y$ be an $n$-dimensional toric variety with fan $\Delta$ and $D = D_\rho$ a toric prime divisor for some $\rho \in \Delta(1)$. If the scheme $D$ is quasiprojective, the same holds for $cD$ for all integers $c > 0$. \end{proposition} \begin{proof} The divisor $D\subset Y$ is contained in the open subset $U = \bigcup_{\sigma \in \Star(\rho)} U_\sigma$. It is enough to show that $U$ admits an ample invertible sheaf. By construction, there is a toric morphism $\pi: U \ra D_\rho$ which is induced by a map of fans $\pi': \Delta_\rho \ra \bar{\Delta}_\rho$, where $\Delta_\rho$ denotes the fan generated by $\Star(\rho)$ and $\bar{\Delta}_\rho$ is the fan describing $D_\rho$ as a toric variety. The map $\pi'$ is induced by the projection $N_\RR \ra N_\RR / \langle\rho\rangle_\RR $. In particular, there is a one-to-one correspondence between maximal cones in $\Delta_\rho$ and maximal cones in $\bar{\Delta}_\rho$, given by $\sigma \mapsto \pi'(\sigma) = \bar{\sigma}$.
Consequently, for every maximal cone $\bar{\sigma}$ and the corresponding open toric variety $U_{\bar{\sigma}}$, we have $\pi^{-1}(U_{\bar{\sigma}}) = U_\sigma$. Hence the morphism $\pi$ is affine. Therefore, by \cite{EGA II}, Proposition 5.1.6, the structure sheaf $\mathcal{O}_U$ is $\pi$-ample and with \cite{EGA II}, Proposition 4.6.13 we conclude that $U$ admits an ample invertible sheaf. \end{proof} \begin{remark} The fact that $U$ as in the preceding proof is quasiprojective if and only if $D_\rho$ is, has a nice interpretation in terms of the toric combinatorics. Recall that a very ample toric divisor $D = \sum_{\xi \in \Delta_\rho(1)} c_\xi D_\xi$ on $U$ corresponds to an integral polyhedron $P_D = \{m \in M_\RR \mid l_\xi(m) \geq -c_\xi\}$ whose face lattice is dual to that of $\Delta_\rho$. The restriction of $\mathcal{O}(D)$ to $D_\rho$ corresponds to a divisor $D'$ on $D_\rho$ with associated polyhedron $P_{D'} \subset \rho^\bot \cap M$. Up to rational equivalence, we can always assume that $c_\rho = 0$ and then we can identify in a natural way $P_{D'}$ with $\rho^\bot \cap P_D$, which is the face of $P_D$ orthogonal to $\rho$. Conversely, every ray $\bar{\tau}$ in $\bar{\Delta}_\rho$ is the image of a two-dimensional cone $\tau$ in $\Star(\rho)$, which in turn is generated by $\rho$ and another ray $\xi \in \Delta_\rho(1)$. However, the generator of $\xi$ might not map to a generator of $\bar{\tau}$. So, if $D' = \sum_{\bar{\tau} \in \bar{\Delta}_\rho(1)} c_{\bar{\tau}} D_{\bar{\tau}}$ is an ample toric divisor on $D_\rho$ with associated polyhedron $P_{D'} \subset \rho^\bot \cap M$, the naturally defined polyhedron $P_D$ with $P_{D'} = P_D \cap \rho^\bot$ might only represent an ample $\QQ$-Cartier divisor which can be made integral by passing from $D'$ to an appropriate multiple. \end{remark} We now come to the main result of this section: \begin{theorem}\label{manybundles} Let $Y$ be a proper toric variety with associated fan $\Delta$. Suppose there is a ray $\rho \in \Delta(1)$ in Egyptian position such that the corresponding toric prime divisor $D_\rho$ is projective. Then there are locally free sheaves $\shE$ on $Y$ of rank $n=\dim(Y)$ with Chern number $c_n(\shE)$ arbitrarily large. \end{theorem} \begin{proof} By Proposition \ref{Egyptiandivisor}, we can subdivide $\Delta$ so that the corresponding modification $X \rightarrow Y$ has $1$-dimensional exceptional set, whose intersection with the strict transform of $D_\rho$ on $X$ is zero-dimensional. By Lemma \ref{pyramidalcartier}, $D_\rho$ is $\QQ$-Cartier and therefore there exists a multiple $c > 0$ such that $c D_\rho$ is Cartier. Moreover, $c D_\rho$ remains projective by Proposition \ref{multiples}. Hence, we can apply Theorem \ref{main result}, which proves the assertion. \end{proof} \begin{corollary}\label{manybundles3d} On every $3$-dimensional proper toric variety $Y$ there are locally free sheaves $\shE$ of rank $3$ with arbitrarily large Chern number $c_3(\shE)$. \end{corollary} \begin{proof} We saw in Example \ref{threedimensionalcones} that in a $3$-dimensional fan every ray is in Egyptian position. Moreover, every toric prime divisor is a toric surface and therefore projective, so Theorem \ref{manybundles} applies. \end{proof} \section{Examples with trivial Picard group} \mylabel{Examples trivial} In this section we will construct in any dimension $n \geq 3$ an explicit family of toric varieties with trivial Picard group which admit a ray in Egyptian position.
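Since all the incidence and face-structure arguments for this family reduce to linear relations among explicit lattice vectors, they are easy to double-check by machine. The following script is only an illustration and not part of the proofs; it assumes Python with numpy, the names are ours, and it verifies, for the sample choice $n=4$ and $u=2$, the circuit relations among the generators $e,f_i,g_i,h$ stated in the next paragraph.

\begin{verbatim}
import numpy as np

n, u = 4, 2  # sample dimension and parameter; any n >= 3, u > 0 works

E = np.eye(n, dtype=int)                    # standard basis of N = Z^n
e = E[n - 1]                                # e = e_n
f = [E[i] for i in range(n - 1)]            # f_i = e_i for 1 <= i < n
f.append(-sum(E[i] for i in range(n - 1)))  # f_n = -(e_1 + ... + e_{n-1})
h = -e
g = [h - f[i] for i in range(n - 1)]        # g_i = h - f_i for 1 <= i < n
g.append(u * h - f[n - 1])                  # g_n = u*h - f_n

# circuit relations for the generators of the cones sigma_i
for i in range(n - 1):
    assert np.array_equal(e + g[i], sum(f[j] for j in range(n) if j != i))
assert np.array_equal(u * e + g[n - 1], sum(f[j] for j in range(n - 1)))

# circuit relations for the generators of the cones sigma_{ij}
for i in range(n):
    for j in range(i + 1, n):
        rhs = sum(f[k] for k in range(n) if k not in (i, j))
        coef = 2 if j < n - 1 else u + 1
        assert np.array_equal(g[i] + g[j], coef * h + rhs)

print("all circuit relations hold for n =", n, "and u =", u)
\end{verbatim}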
Let $e_1, \dots, e_n$ be the standard basis of $N = \ZZ^n$ and $u > 0$ an integer. Consider the following $2n + 2$ primitive vectors: \begin{align*} e & = e_n, \quad f_i = e_i \text{ for } 1 \leq i < n, \quad f_n = -\sum_{i = 1}^{n - 1} e_i,\\ h & = -e, \quad g_i = h - f_i \text{ for } 1 \leq i < n, \quad g_n = uh - f_n. \end{align*} With these vectors, we define the following $\binom{n + 1}{2}$ cones of dimension $n$: \begin{align*} \sigma_i & = \langle e, g_i, f_k \mid k \neq i \rangle_{\RR_{\geq 0}} \text{ for every } 1 \leq i \leq n,\\ \sigma_{ij} & = \langle h, g_i, g_j, f_k \mid k \neq i, j \rangle_{\RR_{\geq 0}} \text{ for every pair } 1 \leq i \neq j \leq n. \end{align*} Let us now show that these cones generate a fan. We start by analyzing their face structure. Every cone has precisely $n + 1$ generators and it is easy to see that the generators of a cone form a \emph{circuit}, i.e.\ a minimally linearly dependent set of lattice vectors. In particular, the generators of the $\sigma_i$ satisfy the following relations: $$ e + g_i = \sum_{j \neq i} f_j \text{ for } 1 \leq i < n, \quad \text{ and }\quad ue + g_n = \sum_{j = 1}^{n - 1} f_j, $$ and for the generators of the $\sigma_{ij}$ we get: $$ g_i + g_j = \begin{cases} 2h + \sum_{k \neq i, j} f_k & \text{ if } i,j\neq n; \\ (u+1)h + \sum_{k \neq i, j} f_k & \text{ else}. \end{cases} $$ The face structures of the cones $\sigma_i, \sigma_{ij}$ can easily be read off from these relations. In particular, every facet is simplicial (for details we refer to \cite{GKZ}, \S 7). The $2n - 2$ facets of $\sigma_i$ are: \begin{align*} \langle e, f_j \mid j \neq i, k \rangle_{\RR_{\geq 0}} \quad \text{ and } \quad \langle g_i, f_j \mid j \neq i, k \rangle_{\RR_{\geq 0}} \text{ for every } 1 \leq k \neq i \leq n, \end{align*} and the $2n - 2$ facets of $\sigma_{ij}$ are: \begin{align*} &\langle h, g_i, f_l \mid l \neq i, j, k \rangle_{\RR_{\geq 0}} \quad \langle h, g_j, f_l \mid l \neq i, j, k \rangle_{\RR_{\geq 0}} \text{ for } 1 \leq k \neq i, j \leq n\\ \text{ and } & \langle g_i, f_k \mid k \neq i, j \rangle_{\RR_{\geq 0}}, \quad \langle g_j, f_k \mid k \neq i, j \rangle_{\RR_{\geq 0}}. \end{align*} We have the following intersections of codimension one among the $\sigma_i, \sigma_{ij}$: \begin{align*} \sigma_i \cap \sigma_j & = \langle e, f_k \text{ with } k \neq i, j \rangle_{\RR_{\geq 0}} \text{ for } i \neq j,\\ \sigma_{ik} \cap \sigma_{jk} & = \langle h, g_k, f_l, l \neq i, j, k \rangle_{\RR_{\geq 0}} \text{ for } i \neq j,\\ \sigma_i \cap \sigma_{ij} & = \langle g_i, f_k, k \neq i, j \rangle_{\RR_{\geq 0}} \text{ for } i \neq j. \end{align*} The remaining intersections are all of codimension three: \begin{align*} \sigma_{ij} \cap \sigma_{pq} & = \langle h, f_k, k \neq i, j, p, q \rangle_{\RR_{\geq 0}} \text{ if } \{i, j\} \cap \{p, q\} = \emptyset,\\ \sigma_i \cap \sigma_{jk} & = \langle f_l, l \neq i, j, k \rangle_{\RR_{\geq 0}} \text{ if } i \neq j, k. \end{align*} So we see that any two cones intersect in a proper face and every facet is the intersection of two maximal cones. It follows that the cones $\sigma_i$, $\sigma_{ij}$ generate a complete fan $\Delta_u$. We denote by $Y_u$ the corresponding proper toric variety. \begin{proposition}\label{trivialpic} The toric variety $Y_1$ is projective, whereas $\Pic(Y_u) = 0$ for $u > 1$.
\end{proposition} \proof In general, on an $n$-dimensional proper toric variety $Y$ with fan $\Delta$, a toric Weil divisor $D = \sum_{\rho \in \Delta(1)} c_\rho D_\rho$ is Cartier if and only if there exists a collection of characters $(m_\sigma)_{\sigma \in \Delta(n)}$ such that $c_\rho = -m_\sigma(l_\rho)$ for every $\sigma \in \Delta$ with $\rho \in \sigma(1)$. Here we identify $M$ with the dual of $N$ and write $m_\sigma(l_\rho)$ for the evaluation of $m_\sigma$ at the primitive vector $l_\rho$ generating the ray $\rho$. Note that if $(m_\sigma)_{\sigma \in \Delta(n)}$ corresponds to a toric Cartier divisor, then so does $(m_\sigma + m)_{\sigma \in \Delta(n)}$ for any $m \in M$; this corresponds to a change of linearization of the line bundle $\mathcal{O}_Y(D)$ by a global twist with $m$. In our situation, we denote the toric prime divisors by $D_e, D_{f_i}, D_{g_i}, D_h$ and consider a family of characters $m_i, m_{ij}$ corresponding to the cones $\sigma_i, \sigma_{ij}$. For $u>1$, the task is to show that if such a collection of characters corresponds to a Cartier divisor then there is an $m \in M$ with $m_i = m_{ij} = m$ for all $i, j$. We can assume $m_n = 0$ without loss of generality, such that the corresponding toric Cartier divisor is of the form $-c D_{f_n} - \sum_{i = 1}^{n - 1} c_i D_{g_i} - c_h D_h$. Then it follows that $m_i(f_n) = c$ for every $1 \leq i < n$. Moreover, for $1 \leq j \neq i < n$ we have $m_i(f_j) = 0$. Altogether we have a complete set of linearly independent conditions which determine $m_i$ and we obtain $c_i = m_i(g_i) = c$ for every $1 \leq i < n$. Next we consider any cone $\sigma_{ij}$, with $1 \leq i \neq j < n$. Then we have $m_{ij}(g_i) = m_{ij}(g_j) = m_{ij}(f_n) = c$, and $m_{ij}(f_k) = 0$ for every $k \neq i, j, n$. It follows that $m_{ij}(g_i) = m_{ij}(h - f_i) = m_{ij}(g_j) = m_{ij}(h - f_j) = c$, hence $m_{ij}(f_i) = m_{ij}(f_j) = m_{ij}(h) - c$ and therefore $m_{ij}(f_n) = -2m_{ij}(f_i) = c$, so we get $c_h = m_{ij}(h) = c / 2$. At this point, we have shown that $\Pic(Y_u)$ is exhausted by the parameter $c$ and therefore has rank at most one. Now we consider $\sigma_{in}$ for any $1 \leq i < n$. Again, we have $m_{in}(g_i) = c$, but $m_{in}(g_n) = m_{in}(f_k) = 0$ for $1 \leq k \neq i < n$. With $m_{in}(g_i) = m_{in}(h) - m_{in}(f_i) = c/2 - m_{in}(f_i) = c$ it follows $m_{in}(f_i) = -c/2$. Then $m_{in}(g_n) = uc/2 - m_{in}(f_n) = uc/2 + m_{in}(f_i) = (u-1)c/2$. By our original assumption, we had $m_n(g_n) = m_{in}(g_n) = 0$, so for $u > 1$ we necessarily have $c = 0$ and hence a toric Cartier divisor is linearly equivalent to zero. For the case $u = 1$, it is straightforward to check that for $c > 0$ the corresponding characters $m_i, m_{ij}$ constitute a strictly convex piecewise linear function on $\Delta_1$; we leave this as an exercise for the reader. \qed \begin{proposition}\label{Egyptianposition} The ray generated by $e$ is in Egyptian position and the corresponding divisor $D_e$ on $Y_u$ is projective. \end{proposition} \proof For any $1 \leq i \leq n$, the cone $\sigma_i'$ is generated by $f_j$, $j \neq i$, and $g_i$ and hence, by construction, $\sigma_i$ is a pyramidal extension of $\sigma_i'$ by $e$. So, the ray $\RR_{\geq 0} e$ is in Egyptian position. The star $\Star(\RR_{\geq 0} e)$ consists of the maximal cones $\sigma_1, \dots, \sigma_n$ and, again by construction, the fan in $N_\RR / \RR e$ generated by the images of the $\sigma_i$ under the projection map is the fan associated to $\mathbb{P}^{n - 1}$ and therefore we have $D_e \simeq \mathbb{P}^{n - 1}$.
\qed \medskip Now, by putting together Propositions \ref{trivialpic} and \ref{Egyptianposition} and Theorem \ref{manybundles}, we obtain: \begin{theorem} For all $n\geq 3$ and $u > 1$, the toric variety $Y_u$ has no nontrivial invertible sheaves but admits locally free sheaves $\shE$ of rank $n=\dim(Y_u)$ with arbitrarily large top Chern class $c_n(\shE)$. \end{theorem} \section{Projective divisors on threefolds} \mylabel{Divisors on threefolds} Let $k$ be a ground field. Theorem \ref{infinitely classes} triggers the following question: Under what conditions does a proper scheme $X$ contain a divisor $D\subset X$ so that the proper scheme $D$ is projective? As far as we see, the existence of such a projective divisor is open for smooth proper threefolds that are non-projective. In this direction, we have a partial result: \begin{proposition} Let $X$ be an integral, normal, proper threefold that is $\QQ$-factorial, and let $S\subset X$ be an irreducible closed subscheme of dimension $\dim(S)=2$. Suppose there is a quasiprojective open subset $U\subset X$ containing all points $x\in S$ of codimension $\dim(\O_{S,x})=1$. Then the proper scheme $S$ is projective. \end{proposition} \proof First note that we may assume that the structure sheaf $\O_S$ contains no nontrivial torsion sections. This follows inductively from the following observation: If $\shJ\subset\O_S$ is a quasicoherent ideal sheaf with $\dim(\shJ)\leq 1$, defining a closed subscheme $S'\subset S$, we have a short exact sequence of abelian sheaves $$ 1\lra 1+\shJ\lra\O_{S}^\times\lra\O_{S'}^\times\lra 1. $$ In the long exact sequence $$ \Pic(S)\lra\Pic(S')\lra H^2(S,1+\shJ), $$ the term on the right vanishes because the sheaf $1+\shJ$ is supported on a closed subset of dimension $<2$. It follows from \cite{EGA II}, Proposition 4.5.13 that $S$ is projective provided that $S'$ is projective. According to Chow's Lemma (in the refined form of \cite{Deligne 2010}, Corollary 1.4), there is a proper morphism $f:\tilde{X}\ra X$ with $\tilde{X}$ projective and $f^{-1}(U)\ra U$ an isomorphism. Clearly, we may also assume that $\tilde{X}$ is integral and normal. Let $R\subset \tilde{X}$ be the exceptional locus, which we regard as a reduced closed subscheme. After replacing $\tilde{X}$ by a blowing-up with center $R$, we may assume that $R\subset\tilde{X}$ is a Cartier divisor. Let $\tilde{S}\subset \tilde{X}$ be the strict transform of the surface $S$, that is, the schematic closure of $f^{-1}(U\cap S)\subset\tilde{X}$. Choose an ample sheaf $\shL\in\Pic(\tilde{X})$. Replacing $\shL$ by a suitable multiple, we may assume that $\shN=\shL(-R)$ is ample as well. Let $R=R_1\cup\ldots\cup R_t$ be the irreducible components. Since $\tilde{S}$ intersects $f^{-1}(U)$, the scheme $\tilde{S}$ contains none of the $R_i$, hence there are closed points $r_i\in R_i\smallsetminus\tilde{S}$. Let $\shI\subset\O_{\tilde{X}}$ be the quasicoherent ideal corresponding to the closed subscheme $\tilde{S}\cup A\subset\tilde{X}$, where $A=\left\{r_1,\ldots,r_t\right\}$, and the union is disjoint. Choose some $n_0$ so that for all $n\geq n_0$, $H^1(\tilde{X},\shN^{\otimes n}\otimes\shI)=0$. Next, choose some $n\geq n_0$ so that there is a regular section $s'\in H^0(\tilde{S},\shN^{\otimes n}_{\tilde{S}})$ whose zero set $(s'=0)\subset\tilde{S}$ is irreducible.
The short exact sequence $$ 0\lra\shI\otimes\shN^{\otimes n}\lra \shN^{\otimes n}\lra \shN^{\otimes n}|_{\tilde{S}\cup A}\lra 0 $$ yields an exact sequence $$ H^0(\tilde{X},\shN^{\otimes n})\lra H^0(\tilde{S}\cup A,\shN^{\otimes n}|_{\tilde{S}\cup A})\lra H^1(\tilde{X},\shN^{\otimes n}\otimes\shI), $$ where the term on the right vanishes, and the term in the middle is a sum corresponding to the disjoint union $\tilde{S}\cup A$. Therefore we may extend $s'$ to a section $s$ over $\tilde{X}$ that is nonzero at each generic point of $R$. It thus defines an ample Cartier divisor $\tilde{H}\subset\tilde{X}$ whose intersection with $\tilde{S}$ is irreducible, and that contains no irreducible component of the exceptional divisor $R$. We now consider its image $H=f(\tilde{H})$, which is a closed subscheme of $X$. \medskip {\bf Claim 1:} Each irreducible component of $H$ is of codimension one in $X$. Indeed: The irreducible components $\tilde{H}_i\subset\tilde{H}$ are of codimension one in $\tilde{X}$, and their generic points lie in $f^{-1}(U)=U$. It follows that their images $H_i=f(\tilde{H}_i)$ have codimension one. \medskip {\bf Claim 2:} The scheme $S\cap H$ is irreducible and 1-dimensional. Clearly, we have a union $$ S\cap H = f(\tilde{S}\cap\tilde{H}) \cup \bigcup_{s\in S} f((f^{-1}(s)\smallsetminus \tilde{S})\cap \tilde{H}). $$ By construction, $\tilde{S}\cap\tilde{H}$ is irreducible, and so is its image $f(\tilde{S}\cap\tilde{H})$. The sets on the right $(f^{-1}(s)\smallsetminus \tilde{S})\cap \tilde{H}$ can be nonempty only if $s\in S$ is a critical point for $f:\tilde{X}\ra X$, that is, in the image of the exceptional set $R\subset\tilde{X}$, thus contained in $S\smallsetminus U$, which is finite. The upshot is that $S\cap H$ is a disjoint union of the irreducible closed subset $f(\tilde{S}\cap\tilde{H})$ and finitely many closed points $x_1,\ldots,x_r\in S$. Now we use the assumption that $X$ is $\QQ$-factorial: The closed subset $H\subset X$ is the support of some Cartier divisor, so $S\cap H\subset S$ is the support of some Cartier divisor. In turn, each irreducible component of $S\cap H$ is purely of codimension one. It follows that $S\cap H = f(\tilde{S}\cap\tilde{H})$, and this must be 1-dimensional. \medskip {\bf Claim 3:} The scheme $S\smallsetminus (S\cap H)$ is affine. To see this, note that $$ H=f(\tilde{H})=f(\tilde{H}\cup R). $$ This is because $\tilde{H}$ intersects each curve on $\tilde{X}$, in particular those mapping to points in $X$, and the latter cover $R$. Since $\shN^{\otimes n}(nR)=\shL^{\otimes n}$ is ample, the effective Cartier divisor $\tilde{H}\cup R$ is ample, hence its complement $\tilde{X}\smallsetminus(\tilde{H} \cup R)$ is affine. Clearly, $\tilde{H}\cup R$ is saturated with respect to the map $f:\tilde{X}\ra X$, thus $\tilde{H}\cup R=f^{-1}f(\tilde{H}\cup R)=f^{-1}(H)$. By construction, $$ X\smallsetminus f(R) \supset X\smallsetminus f(\tilde{H}\cup R) = X\smallsetminus H. $$ We conclude that $f:\tilde{X}\ra X$ is an isomorphism over $X\smallsetminus H$, and that $$ \tilde{X}\smallsetminus f^{-1}(H) \simeq \tilde{X}\smallsetminus (\tilde{H}\cup R)\simeq X\smallsetminus H $$ is affine. \medskip {\bf Claim 4:} The scheme $S$ is projective. Since $X$ is $\QQ$-factorial, we may endow the closed subset $H\subset X$ whose irreducible components are 1-codimensional with a suitable scheme structure so that it becomes an effective Cartier divisor. Then $S\cap H$ is an effective Cartier divisor, which is moreover irreducible, and has affine complement.
According to Goodman's Theorem (\cite{Goodman 1969}, Theorem 2 on page 168), there is an ample divisor on $S$ supported by $S\cap H$, in particular $S$ is projective. Goodman formulated his result under the assumption that the local rings $\O_{S,s}$ are factorial for all $s\in S$, but the proof goes through with only minor modification under our assumption of $\QQ$-factoriality. Compare also \cite{Hartshorne 1970}, Chapter II, \S 4, Theorem 4.2 for a nice exposition of Goodman's arguments. \qed \medskip The following observation emphasizes that the existence of large quasiprojective open subsets is a delicate condition: \begin{proposition} Let $X$ be an integral, normal, proper $n$-fold that is $\QQ$-factorial but does not admit an ample invertible sheaf. Then there is no quasiprojective open subset $U\subset X$ containing all points $x\in X$ of codimension $\dim(\O_{X,x})=n-1$. \end{proposition} \proof Suppose there were such an open subset $U\subset X$. Choose a very ample divisor $H_U\subset U$, and let $H\subset X$ be its closure. Since $X$ is $\QQ$-factorial, we may assume that $H\subset X$ is Cartier. Let $\shL=\O_X(H)$ be the corresponding invertible sheaf. Obviously, its base locus is contained in $A=X\smallsetminus U$, which is finite. By the Zariski--Fujita Theorem \cite{Fujita 1983}, we may replace $\shL$ by some tensor power and assume that $\shL$ is globally generated. Let $f:X\ra\PP^m$ be the morphism coming from the linear system $H^0(X,\shL)$, with $m+1=h^0(\shL)$. Clearly, the morphism $f$ is injective on $U$, thus has finite fibers. Then it follows from \cite{EGA IIIa}, Proposition 2.6.2 that the scheme $X$ is projective, a contradiction. \qed
\section{Introduction} Since leptoquark (LQ) bosons connect the lepton and quark sectors, LQ models potentially explain several phenomena beyond the standard model (SM); {\it e.g.}, the lepton (muon or electron) anomalous magnetic dipole moment ($g-2$)~\cite{Bauer:2015knc, Chen:2016dip, Chen:2017hir, Nomura:2021oeu,ColuccioLeskow:2016dox}, $B$ meson decays such as $b\to s\mu\bar\mu$~\cite{Sahoo:2015wya,Becirevic:2016yqi,Chen:2016dip, Chen:2017hir,Nomura:2021oeu,Cai:2017wry} and $b\to c\ell\bar\nu_\ell$ ($\ell=e,\mu,\tau$)~\cite{Sakaki:2013bfa,Becirevic:2016yqi,Chen:2017hir,Cai:2017wry}~\footnote{The anomaly in $b\to c\ell\bar\nu_\ell$ processes is observed in experiments~\cite{Huschle:2015rga, Lees:2012xj,Lees:2013uzd,Hirose:2016wfn,Abdesselam:2016cgx,Aaij:2015yra,Aaij:2017deq}, and the LQ model is one of the most promising explanations of this anomaly.}, and nonzero neutrino masses~\cite{AristizabalSierra:2007nf, Cheung:2016fjo,Cai:2017wry}. In particular, the muon $g-2$ anomaly was recently reported by the E989 experiment at Fermilab, combining the BNL result~\cite{Abi:2021gix}, and its value deviates from the SM by $4.2\sigma$ as follows: \begin{align} \Delta a_\mu = (25.1\pm 5.9)\times 10^{-10}. \end{align} Also, the LHCb collaboration~\cite{Aaij:2021vac} recently reported an anomaly in the rare $B$ meson decay $b\to s\mu\bar\mu$ that is understood as a violation of lepton universality. The updated result is given by \begin{align} \frac{BR(B^+\to K^+\mu^-\mu^+)}{BR(B^+\to K^+e^-e^+)}= 0.846_{-0.039-0.012}^{+0.042+0.013}\quad (1.1 {\rm GeV}^2 < q^2 < 6 {\rm GeV}^2), \end{align} where the first (second) uncertainty is statistical (systematic) and $q^2$ is the invariant mass squared of the dilepton. In addition to the above phenomenology, interestingly, a nonzero Majorana neutrino mass can be realized at one loop without any additional symmetries by introducing appropriate LQs~\cite{Cheung:2016fjo}. This may be a natural realization of a tiny neutrino mass due to loop suppression. Considering the above issues, one finds that the Yukawa flavor structure is also very important for explaining them. Recently, powerful symmetries that restrict the number of parameters in Yukawa couplings, the so-called ``modular flavor symmetries'', were proposed by the authors of refs.~\cite{Feruglio:2017spp,deAdelhartToorop:2011re}, in which non-Abelian discrete flavor symmetries of modular origin are applied to the quark and lepton sectors. One remarkable advantage of applying these symmetries is that the dimensionless couplings of a model can transform as non-trivial representations under those symmetries, and all the dimensionless values are uniquely fixed once the modulus is determined in the fundamental region. We then do not need extra scalar fields to obtain a predictive mass matrix.
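As background for the weight bookkeeping used throughout this paper, let us recall the standard transformation rules of the formalism; this is a textbook fact of modular flavor symmetry rather than a result specific to our model. Under $\gamma=\begin{pmatrix} a & b \\ c & d \end{pmatrix}\in SL(2,Z)$ acting on the modulus $\tau$, a modular multiplet $Y^{(k)}(\tau)$ of weight $k$ and a matter multiplet $\phi$ of modular weight $-k_\phi$ transform as \begin{align} \tau \to \frac{a\tau+b}{c\tau+d},\quad Y^{(k)}(\tau)\to (c\tau+d)^{k}\,\rho_Y(\gamma)\,Y^{(k)}(\tau),\quad \phi\to (c\tau+d)^{-k_\phi}\,\rho_\phi(\gamma)\,\phi, \end{align} where $\rho_Y$ and $\rho_\phi$ are unitary representations of the finite modular group, here $\Gamma_3\simeq A_4$. Invariance then requires each term in the Lagrangian to carry total modular weight zero, which is the bookkeeping behind the $-k_I$ assignments introduced below.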
Along the lines of this idea, a vast literature has recently appeared, {\it e.g.}, $A_4$~\cite{Feruglio:2017spp, Criado:2018thu, Kobayashi:2018scp, Okada:2018yrn, Nomura:2019jxj, Okada:2019uoy, deAnda:2018ecu, Novichkov:2018yse, Nomura:2019yft, Okada:2019mjf,Ding:2019zxk, Nomura:2019lnr,Kobayashi:2019xvz,Asaka:2019vev,Zhang:2019ngf, Gui-JunDing:2019wap,Kobayashi:2019gtp,Nomura:2019xsb, Wang:2019xbo,Okada:2020dmb,Okada:2020rjb, Behera:2020lpd, Behera:2020sfe, Nomura:2020opk, Nomura:2020cog, Asaka:2020tmo, Okada:2020ukr, Nagao:2020snm, Okada:2020brs, Yao:2020qyy, Chen:2021zty, Kobayashi:2021bgy, Kashav:2021zir, Okada:2021qdf}, $S_3$ \cite{Kobayashi:2018vbk, Kobayashi:2018wkl, Kobayashi:2019rzp, Okada:2019xqk, Mishra:2020gxg, Du:2020ylx}, $S_4$ \cite{Penedo:2018nmg, Novichkov:2018ovf, Kobayashi:2019mna, King:2019vhv, Okada:2019lzv, Criado:2019tzk, Wang:2019ovr, Zhao:2021jxg, King:2021fhl, Ding:2021zbg, Zhang:2021olk, gui-jun}, $A_5$~\cite{Novichkov:2018nkm, Ding:2019xna,Criado:2019tzk}, double covering of $A_5$~\cite{Wang:2020lxk, Yao:2020zml, Wang:2021mkw}, larger groups~\cite{Baur:2019kwi}, multiple modular symmetries~\cite{deMedeirosVarzielas:2019cyj}, and double covering of $A_4$~\cite{Liu:2019khw, Chen:2020udk}, $S_4$~\cite{Novichkov:2020eep, Liu:2020akv}, and other types of groups \cite{Kikuchi:2020nxn, Almumin:2021fbk, Ding:2021iqp, Feruglio:2021dte, Kikuchi:2021ogn, Novichkov:2021evw}, in which masses, mixings, and CP phases for the quarks and/or leptons have been predicted~\footnote{For interested readers, we provide some literature reviews, which are useful for understanding non-Abelian groups and their applications to flavor structure~\cite{Altarelli:2010gt, Ishimori:2010au, Ishimori:2012zz, Hernandez:2012ra, King:2013eh, King:2014nza, King:2017guk, Petcov:2017ggy}.}. Moreover, a systematic approach to understanding the origin of CP transformations has been discussed in Ref.~\cite{Baur:2019iai}, CP/flavor violation in models with modular symmetry was discussed in Refs.~\cite{Kobayashi:2019uyt,Novichkov:2019sqv,Kobayashi:2021bgy,1869542}, and a possible correction from the K\"ahler potential was discussed in Ref.~\cite{Chen:2019ewa}. Furthermore, a systematic analysis of the fixed points (stabilizers) has been given in Ref.~\cite{deMedeirosVarzielas:2020kji}. The very recent paper of Ref.~\cite{Ishiguro:2020tmo} finds a favorable fixed point $\tau=\omega$ among the three fixed points in the fundamental domain of PSL$(2,Z)$, by systematically analyzing the stabilized moduli values in the possible configurations of flux compactifications as well as investigating the probabilities of the moduli values. It is then interesting to discuss a LQ model within the framework of modular flavor symmetry: since a LQ connects the lepton and quark sectors, we are motivated to consider them together, and predictions in both sectors can be expected. In this paper, we focus on the quark and lepton masses and mixings based on the LQ model in ref.~\cite{Cheung:2016fjo}, introducing the modular $A_4$ symmetry to reduce the free parameters of the Yukawa couplings. Since the quark sector connects to the lepton sector via the LQs, charge assignments for quarks (leptons) directly affect the leptons (quarks). In this sense, it provides a good motivation towards the unification of quark and lepton flavor under the $A_4$ modular symmetry. This paper is organized as follows. In Sec.~II, we review our model of quarks and leptons. In Sec.~III, we perform a numerical analysis and show several results for the normal and inverted hierarchies.
We conclude in Sec.~IV. In the appendix, we summarize several features of the modular $A_4$ symmetry. \section{Model} \begin{table}[t!] \begin{tabular}{|c||c|c|} \hline\hline & ~$\eta$~ & ~$\Delta$~ \\\hline $SU(3)_C$ & $\bm{3}$ & $\bar{\bm3}$ \\\hline $SU(2)_L$ & $\bm{2}$ & $\bm{3}$ \\\hline $U(1)_Y$ & $\frac16$ & $\frac13$ \\\hline $A_4$ & $\bm{1}$ & $\bm{1}$ \\\hline $-k_I$ & $-2$ & $-2$ \\\hline \end{tabular} \caption{\small Charge assignments of the LQ bosons $\eta$ and $\Delta$ under $SU(3)_C \times SU(2)_L\times U(1)_Y\times A_4$, where $-k_I$ is the modular weight.} \label{tab:1} \end{table} \begin{center} \begin{table}[t!] \begin{tabular}{|c||c|c|c|c|c|}\hline\hline & \multicolumn{5}{c|}{Fermions} \\ \hline \hline & ~$Q_L$~& ~$\bar u_R$~ & ~$\bar d_R$~& ~$L_L$~& ~$\bar e_R$~ \\\hline\hline $SU(3)_C$ & $\bm{3}$ & $\bar{\bm{3}}$ & $\bar{\bm{3}}$ & $\bm{1}$ & $\bm{1}$ \\\hline $SU(2)_L$ & $\bm{2}$ & $\bm{1}$ & $\bm{1}$ & $\bm{2}$ & $\bm{1}$ \\\hline $U(1)_Y$ & $\frac16$ & $-\frac23$ & $\frac13$ & $-\frac12$ & $1$ \\\hline $A_4$ & $\bm{3}$ & $\bm{1,1'',1'}$ & $\bm{1,1'',1'}$ & $\bm3$ & $\bm{1,1'',1'}$ \\ \hline $-k_I$ & $-2$ & $-4$ & $0$ & $-2$ & $0$ \\ \hline \end{tabular} \caption{Charge assignments of the SM fermions under $SU(3)_C \times SU(2)_L\times U(1)_Y\times A_4$, where $-k_I$ is the modular weight.} \label{tab:fields-inverse} \end{table} \end{center} In this section, we review our model. It is known that introducing proper leptoquarks leads to a radiative seesaw model without any additional symmetries such as $Z_2$. Here, we introduce two types of leptoquarks $\eta$ and $\Delta$ based on Ref.~\cite{Cheung:2016fjo}. The color-triplet $\eta$ is an $SU(2)_L$ doublet with $1/6$ hypercharge, and the color-antitriplet $\Delta$ is an $SU(2)_L$ triplet with $1/3$ hypercharge; these new bosons and their charges are summarized in Table~\ref{tab:1}. Then, the Lagrangian that induces the quark and lepton mass matrices is given by \begin{align} -\mathcal{L}_{Y}^{q} &= y^u_{ij} \bar u_{R_i} (i\sigma_2) H^* Q_{L_j} + y^d_{ij} \bar d_{R_i} H Q_{L_j} + {\rm h.c.}, \label{Eq:lag-quark}\\ -\mathcal{L}_{Y}^{\ell} &= h_{ij} \bar e_{R_i} H L_{L_j} + {\rm h.c.}. \label{Eq:lag-lepton} \end{align} The Lagrangian for the mixing between the quarks and leptons and the nontrivial potential term are given by \begin{align} -\mathcal{L}_{Y}^{mix} &= f_{ij} \overline{d_{R_i}} \eta^T (i\sigma_2) L_{L_j} + g_{ij} \overline{ Q^c_{L_i}} (i\sigma_2) \Delta L_{L_j} + {\rm h.c.},\\ \mathcal{V} &\supset -\mu H^\dag \Delta \eta+ {\rm h.c.}, \label{Eq:lag-flavor} \end{align} where $(i,j)=1-3$ are family indices, $\sigma_2$ is the second Pauli matrix, and $H$ is the SM Higgs field that develops a nonzero VEV, which is symbolized by $\langle H\rangle\equiv v/\sqrt2\approx 246/\sqrt2$ GeV, and $H$ has zero modular weight. Here, we parameterize the components of the scalars as follows: \begin{align} &H =\left[ \begin{array}{c} w^+\\ \frac{v+\phi+iz}{\sqrt2} \end{array}\right],\quad \eta =\left[ \begin{array}{c} \eta_{2/3}\\ \eta_{-1/3} \end{array}\right],\quad \Delta =\left[ \begin{array}{cc} \frac{\delta_{1/3}}{\sqrt2} & \delta_{4/3} \\ \delta_{-2/3} & -\frac{\delta_{1/3}}{\sqrt2} \end{array}\right], \label{component} \end{align} where the subscripts of the fields represent the electric charge, and $w^+$ and $z$ are absorbed by the longitudinal components of the $W^+$ and $Z$ bosons, respectively.
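Before proceeding, we sketch where the mixing angles $a_i$ introduced in the next paragraph come from. The following is schematic only: we suppress $O(1)$ Clebsch and normalization factors, and $m^2_\eta$, $m^2_\delta$ denote the (otherwise unspecified) diagonal mass terms of $\eta$ and $\Delta$. After electroweak symmetry breaking, the $\mu$ term in Eq.~(\ref{Eq:lag-flavor}) generates an off-diagonal entry of order $\mu v$ in the squared mass matrix of each pair of charged components, \begin{align} \mathcal{M}^2_{i/3}\sim \left[\begin{array}{cc} m^2_\eta & \mu v \\ \mu v & m^2_\delta \end{array}\right],\qquad \tan 2a_i \sim \frac{2\mu v}{m^2_\delta-m^2_\eta}, \end{align} which is diagonalized by the orthogonal rotations $O_i$ defined below.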
Due to the $\mu$ term in Eq.~(\ref{Eq:lag-flavor}), the charged components with $1/3$ and $2/3$ electric charges mix with each other. Here, we parametrize their mixing matrices and mass eigenstates as follows: \begin{align} &\left[\begin{array}{c} \eta_{i/3} \\ \delta_{i/3} \end{array}\right] = O_i \left[\begin{array}{c} A_i \\ B_i \end{array}\right],\quad O_i\equiv \left[\begin{array}{cc} c_{a_i} & s_{a_i} \\ -s_{a_i} & c_{a_i} \end{array}\right], \quad (i=1,2), \end{align} where their masses are denoted as $m_{A_i}$ and $m_{B_i}$, respectively. The interactions in terms of the mass eigenstates can be written as \begin{align} - \mathcal{L}_{Y}^q \approx & \ m_{u_{ij}} \bar u_{R_i} u_{L_j} + m_{d_{ij}} \bar d_{R_i} d_{L_j}+ {\rm h.c.}, \label{eq:quark}\\ - \mathcal{L}_{Y}^\ell \approx & \ m_{\ell_{ij}}\bar e_{R_i} \ell_{L_j} + {\rm h.c.},\\ - \mathcal{L}_{Y}^{mix} \approx & \ f_{ij} \overline{ d_{R_i}} \nu_{L_j} (c_{a_1} A_1^* +s_{a_1} B_1^*) -\frac{g_{ij}}{\sqrt2} \overline{ d_{L_i}^c} \nu_{L_j} (-s_{a_1} A_1 + c_{a_1} B_1) \label{eq:neut} \\ & -f_{ij} \overline{d_{R_i}} \ell_{L_j} (c_{a_2} A_2 +s_{a_2} B_2) -\frac{g_{ij}}{\sqrt2} \overline{u_{L_i}^c} \ell_{L_j} (-s_{a_1} A_1 + c_{a_1} B_1) \label{eq:lfvs-1}\\ & - {g_{ij}} \overline{d_{L_i}^c} \ell_{L_j}\delta_{4/3} \; + {g_{ij}} \overline{u_{L_i}^c} \nu_{L_j} (-s_{a_2} A_{2}^* + c_{a_2} B_{2}^*) + {\rm h.c.}, \label{eq:lfvs-2} \end{align} where we define $m_{u_{ij}}\equiv \frac{v y^u_{ij}}{\sqrt2} $, $m_{d_{ij}}\equiv \frac{v y^d_{ij}}{\sqrt2} $, and $m_{\ell_{ij}}\equiv \frac{v h_{ij}}{\sqrt2} $. The next task is to determine the matrices $y^u,y^d,h,f,g$ via the modular $A_4$ symmetry. In the quark sector, we assign $Q_L$ to be ${\bf 3}$ and $-2$, $\bar u_R$ to be $\{\bf 1,1'',1'\}$ and $-4$, and $\bar d_R$ to be $\{{\bf 1,1'',1'}\}$ and $0$ under $A_4$ and $-k$, respectively. This assignment is the same as the one in ref.~\cite{Okada:2020rjb}, and the allowed region is already known~\cite{Okada:2019uoy}. Thus, we will work in the same $\tau$ region for the lepton sector in our numerical analysis. The up-type quark mass matrix is written as: \begin{align} \begin{aligned} y^u= \begin{pmatrix} a_u & 0 & 0 \\ 0 &a_c & 0\\ 0 & 0 &a_t \end{pmatrix} \left [ \begin{pmatrix} f_1 & f_3 & f_2 \\ f_2 & f_1 & f_3 \\ f_3 & f_2 & f_1 \end{pmatrix} + \begin{pmatrix} g_{u1} & 0 & 0 \\ 0 &g_{u2} & 0\\ 0 & 0 &g_{u3} \end{pmatrix} \begin{pmatrix} f'_1 & f'_3 & f'_2 \\ f'_2 & f'_1 & f'_3 \\ f'_3 & f'_2 & f'_1 \end{pmatrix} \right ], \end{aligned} \label{matrix6} \end{align} where $Y^{(6)}_{3}\equiv[f_1,f_2,f_3]^T$ and $Y^{(6)}_{3'}\equiv[f'_1,f'_2,f'_3]^T$, $g_{u1}=\alpha'_u/\alpha_u$, $g_{u2}=\beta'_u/\beta_u$ and $g_{u3}=\gamma'_u/\gamma_u$ are complex parameters, and $a_u$, $a_c$ and $a_t$ can be used to fit the masses of the up-type quarks. The explicit forms of $f_i$ and $f'_i$ are summarized in the Appendix. Then $m_u$ is diagonalized by two unitary matrices as $D_u=V_{u_R}^\dag m_u V_{u_L}$, where $D_u\equiv {\rm diag}(m_u,m_c,m_t)$ contains the mass eigenvalues. Therefore, we find $|D_u|^2=V^\dag_{u_L} m_u^\dag m_u V_{u_L}$. On the other hand, the down-type quark mass matrix is given as: \begin{align} &\begin{aligned} y^d= \begin{pmatrix} a_d & 0 & 0 \\ 0 &a_s & 0\\ 0 & 0 &a_b \end{pmatrix} \begin{pmatrix} y_1 & y_3& y_2\\ y_2 & y_1 & y_3 \\ y_3 & y_2& y_1 \end{pmatrix} \,, \end{aligned} \label{down} \end{align} where $a_d$, $a_s$ and $a_b$ can be used to fit the masses of the down-type quarks, and $Y^{(2)}_{3}\equiv[y_1,y_2,y_3]^T$ is given in the Appendix.
Then $m_d$ is diagonalized by two unitary matrices as $D_d=V_{d_R}^\dag m_d V_{d_L}$, where $D_d\equiv {\rm diag}(m_d,m_s,m_b)$ contains the mass eigenvalues. Therefore, we find $|D_d|^2=V^\dag_{d_L} m_d^\dag m_d V_{d_L}$. Finally, we get the observable mixing matrix $V_{CKM}$ as follows: \begin{align} V_{CKM}= V^\dag_{u_L} V_{d_L}. \end{align} \subsection{Lepton sector} Now let us move on to the lepton sector. We assign $L_L$ to be ${\bf 3}$ and $-2$ and $\bar e_R$ to be $\{\bf 1,1'',1'\}$ and $0$ under $A_4$ and $-k$, respectively. Here, both of the leptoquark scalars are assigned to be true $A_4$ singlets with $-2$ modular weight. The assignments of $A_4$ and $-k$ are also summarized in Tables~\ref{tab:1} and \ref{tab:fields-inverse}. Under these assignments, we can write down the concrete matrices as follows: \begin{align} h&= \left[\begin{array}{ccc} a_\ell &0 &0 \\ 0 &b_\ell &0 \\ 0 &0 &c_\ell \\ \end{array}\right] \left[\begin{array}{ccc} y_1 & y_3 &y_2 \\ y_2 &y_1 &y_3 \\ y_3 &y_2 &y_1 \\ \end{array}\right] ,\\ f&= \left[\begin{array}{ccc} a_\eta &0 &0 \\ 0 &b_\eta &0 \\ 0 &0 &c_\eta \\ \end{array}\right] \left[\begin{array}{ccc} y'_1 & y'_3 &y'_2 \\ y'_2 &y'_1 &y'_3 \\ y'_3 &y'_2 &y'_1 \\ \end{array}\right] ,\\ g&= a Y^{(6)}_1\left[\begin{array}{ccc} 1 &0 &0 \\ 0 &0 &1 \\ 0 &1 &0 \\ \end{array}\right] +\frac{b}{3} \left[\begin{array}{ccc} 2 f_1 &- f_3& - f_2 \\ - f_3 & 2 f_2 & - f_1 \\ - f_2 &- f_1 & 2 f_3 \\ \end{array}\right] +\frac{c}{2} \left[\begin{array}{ccc} 0 & f_3 & - f_2 \\ - f_3 &0 & f_1 \\ f_2 &- f_1 &0 \\ \end{array}\right] \nonumber\\ &+\frac{b^{'}}{3} \left[\begin{array}{ccc} 2 f^{'}_1 &- f^{'}_3& - f^{'}_2 \\ - f^{'}_3 & 2 f^{'}_2 & - f^{'}_1 \\ - f^{'}_2 &- f^{'}_1 & 2 f^{'}_3 \\ \end{array}\right] +\frac{c^{'}}{2} \left[\begin{array}{ccc} 0 & f^{'}_3 & - f^{'}_2 \\ - f^{'}_3 &0 & f^{'}_1 \\ f^{'}_2 &- f^{'}_1 &0 \\ \end{array}\right] \end{align} where $Y^{(4)}_{3}\equiv[y'_1,y'_2,y'_3]^T$ is given in the Appendix. \begin{figure}[tb] \begin{tikzpicture}[/tikzfeynman] \begin{feynman} \vertex (i){$\nu_{L_i}$}; \vertex[right = 1.5 cm of i](v1); \vertex[right = 3. cm of v1](v2); \vertex[above right = 2. cm of v1](l1); \vertex[right = 1. cm of v2](j){$\nu_{L_j}$}; \diagram*[large]{ (i)--[fermion](v1), (v1)--[charged scalar,edge label=$A_1 / B_1$](l1)--[charged scalar](v2), (v1)--[fermion, edge label=$d_a$](v2), (v2)--[anti fermion](j) }; \end{feynman} \end{tikzpicture} \caption{ One-loop diagram for generating the neutrino mass matrix.} \label{fig:neutrino} \end{figure} The charged-lepton mass term after spontaneous symmetry breaking is given by \begin{align} - \mathcal{L}_{Y}^\ell = \frac{v h_{ij}}{\sqrt2}\, \bar e_{R_i} \ell_{L_j}+{\rm h.c.} . \label{eq:chlep} \end{align} Then $m_\ell(\equiv v h/ \sqrt2)$ is diagonalized by two unitary matrices as $D_\ell=V_{\ell_R}^\dag m_\ell V_{\ell_L}$, where $D_\ell \equiv {\rm diag}(m_e,m_\mu,m_\tau)$ contains the mass eigenvalues. Therefore, we find $|D_\ell|^2=V^\dag_{\ell_L} m_\ell^\dag m_\ell V_{\ell_L}$. The active neutrino mass matrix $m_\nu$ is generated at one-loop level through the following interactions: \begin{align} - \mathcal{L}_{Y}^\nu = F_{aj} \overline{ d'_{R_a}} \nu_{L_j} (c_{a_1} A_1 +s_{a_1} B_1) -G_{aj} \overline{ d'^c_{L_a}} \nu_{L_j} (-s_{a_1} A_1 + c_{a_1} B_1) , \label{eq:nu-int} \end{align} where $F\equiv V^\dag_{d_R}f$, $G\equiv V^T_{d_L}g$, and $d'$ is the mass eigenstate.
Then, the neutrino mass matrix in Fig.~\ref{fig:neutrino} is given at one-loop level as follows: \begin{align} &(m_{\nu})_{ij} = s_{2a_1}\frac{3}{4(4\pi)^2} \left[1-\frac{m^2_{A_1}}{m^2_{B_1}}\right] \sum_{a=1}^3 \left[F^T_{ia} D_{d_a} G_{aj}+ G^T_{ia} D_{d_a} F_{aj} \right] F_I(r_{A_1}, r_{D_{d_a}}),\\ &F_I(r_1, r_2) = \frac{r_1(r_2-1)\ln r_1 - r_2(r_1-1)\ln r_2}{(r_1-1)(r_2-1)(r_1-r_2)},\quad (r_1\neq 1), \end{align} where we define $r_{A_1}\equiv (m_{A_1}/m_{B_1})^2$ and $r_{D_{d_a}}\equiv (D_{d_a}/m_{B_1})^2$. $m_\nu$ is diagonalized by a unitary matrix $V_{\nu}$; $D_\nu\equiv V_{\nu}^T m_\nu V_{\nu}$. Here, we define a modified neutrino mass matrix as $\tilde m_\nu \equiv m_\nu / s_{2a_1} $. Then, we rewrite this diagonalization in terms of the modified form $\tilde D_\nu\equiv V_{\nu}^T \tilde m_\nu V_{\nu}$. Thus, we fix $s_{2a_1}$ by \begin{align} ({\rm NH}):\ s_{2a_1}^2= \frac{|\Delta m_{\rm atm}^2|}{\tilde D_{\nu_3}^2-\tilde D_{\nu_1}^2}, \quad ({\rm IH}):\ s_{2a_1}^2= \frac{|\Delta m_{\rm atm}^2|}{\tilde D_{\nu_2}^2-\tilde D_{\nu_3}^2}, \end{align} where $\tilde m_\nu$ is diagonalized by $V^\dag_\nu (\tilde m_\nu^\dag \tilde m_\nu)V_\nu={\rm diag}(\tilde D_{\nu_1}^2,\tilde D_{\nu_2}^2,\tilde D_{\nu_3}^2)$ and $\Delta m_{\rm atm}^2$ is the atmospheric neutrino mass-squared difference. Here, NH and IH stand for the normal hierarchy and the inverted hierarchy, respectively. Subsequently, the solar neutrino mass-squared difference is expressed in terms of $s_{2a_1}$ as follows: \begin{align} \Delta m_{\rm sol}^2= {s_{2a_1}^2}({\tilde D_{\nu_2}^2-\tilde D_{\nu_1}^2}). \end{align} This should be within the experimental range, where we adopt NuFit 5.0~\cite{Esteban:2020cvm} in our numerical analysis later. The effective mass for neutrinoless double beta decay is also given by \begin{align} \langle m_{ee}\rangle=s_{2a_1}|\tilde D_{\nu_1} \cos^2\theta_{12} \cos^2\theta_{13}+\tilde D_{\nu_2} \sin^2\theta_{12} \cos^2\theta_{13}e^{i\alpha_{2}}+\tilde D_{\nu_3} \sin^2\theta_{13}e^{i(\alpha_{3}-2\delta_{CP})}|, \end{align} which may be observed by KamLAND-Zen in the future~\cite{KamLAND-Zen:2016pfg}. The observed mixing matrix of the lepton sector~\cite{Maki:1962mu} is given by $V_{\rm PMNS}\equiv V^\dag_{\ell_L} V_{\nu}$. \section{Numerical analysis \label{sec:numerical}} Here, we perform the numerical analysis. Before searching for the allowed region, we fix some mass parameters as $m_{A_2}=m_{A_1}$ and $m_{B_2}=m_{\delta}=m_{B_1}$, where we require degenerate masses for the components of $\eta$ and $\Delta$ to suppress the oblique parameters $\Delta S$ and $\Delta T$. Notice here that our theoretical parameters $a_{u,c,t}, a_{d,s,b}, a_\ell, b_\ell,c_\ell$ are used to reproduce the experimental masses of the quarks and charged leptons. Thus, only the following input parameters are randomly selected in the range of \begin{align} & (m_{A_1},m_{B_1}) \in [1\,, 100\,]\ \text{TeV},\nonumber\\ & |g_{u1,u2,u3}| \in \left[10^{-5},\ 1.5\,\right] ,\quad (|a_\eta|, |b_\eta|, |c_\eta|, |a|, |b|, |c|, |b'|, |c'|) \in \left[10^{-5},\ 10\,\right]. \label{range_scanning} \end{align} Over this range, we perform the numerical analysis for the quark and lepton sectors, where the experimental data in the quark sector should be reproduced within 3$\sigma$, while the lepton sector is discussed within the 3$\sigma$ (yellow dots) and 5$\sigma$ (red dots) intervals, applying the $\chi^2$ analysis with NuFit 5.0.
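Since all the ingredients of the one-loop formula above are explicit, it is straightforward to evaluate numerically. The following sketch is included only for concreteness; it assumes Python with numpy, and the input numbers are placeholders rather than the fitted values of this paper.

\begin{verbatim}
import numpy as np

def F_I(r1, r2):
    # loop function F_I(r1, r2) of the text (r1 != 1, r2 != 1, r1 != r2)
    num = r1 * (r2 - 1.0) * np.log(r1) - r2 * (r1 - 1.0) * np.log(r2)
    return num / ((r1 - 1.0) * (r2 - 1.0) * (r1 - r2))

def m_nu(F, G, Dd, mA1, mB1, s2a1):
    # one-loop neutrino mass matrix; F, G are the 3x3 couplings rotated
    # to the down-quark mass basis, Dd the down-quark masses (GeV)
    rA = (mA1 / mB1) ** 2
    pref = s2a1 * 3.0 / (4.0 * (4.0 * np.pi) ** 2) * (1.0 - rA)
    m = np.zeros((3, 3), dtype=complex)
    for a in range(3):
        rd = (Dd[a] / mB1) ** 2
        m += pref * Dd[a] * (np.outer(F[a], G[a])
                             + np.outer(G[a], F[a])) * F_I(rA, rd)
    return m

# placeholder inputs, for illustration only
F = 1e-2 * np.ones((3, 3))
G = 1e-3 * np.ones((3, 3))
Dd = np.array([4.7e-3, 0.093, 4.18])   # rough m_d, m_s, m_b in GeV
print(m_nu(F, G, Dd, mA1=18e3, mB1=6e3, s2a1=4.6e-9))
\end{verbatim}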
\subsection{NH} For the NH case, we show our results for the lepton sector in Figs.~\ref{tau_LQMA4_nh},~\ref{sum-dcp_LQMA4_nh}, \ref{m1-mee_LQMA4_nh}, and \ref{majo_LQMA4_nh}. In Fig.~\ref{tau_LQMA4_nh}, the allowed values of $\tau$ are shown, where yellow (red) points represent the values within 3 (5)$\sigma$. One finds that the allowed space is rather localized. In particular, the region near $\tau\sim1.75i$ is interesting since it is close to the fixed point that has a remnant $Z_3$ symmetry. In Fig.~\ref{sum-dcp_LQMA4_nh}, we show the allowed region of $\delta_{CP}$ in terms of $\sum m_i$. $\sum m_i$ is rather localized at $0.06-0.08$ eV, while the whole range of $\delta_{CP}$ is allowed. Moreover, almost all the points are within the cosmological constraint $\sim0.12$ eV~\cite{pdg}. In Fig.~\ref{m1-mee_LQMA4_nh}, we present the allowed region of the neutrinoless double beta decay mass $\langle m_{ee}\rangle$ in terms of the lightest neutrino mass $m_1$. $\langle m_{ee}\rangle$ is allowed up to $0.025$ eV, while $m_1$ is allowed up to $0.0025$ eV. Moreover, the allowed region of $m_1$ is localized at around $10^{-6}$ eV, indicating a tiny lightest neutrino mass. In Fig.~\ref{majo_LQMA4_nh}, we depict the allowed region of the Majorana phases $\alpha_{21}$ and $\alpha_{31}$. Both are allowed over the whole range, but there is a tendency for $\alpha_{21}$ to be localized at around 180$^\circ$. \begin{figure}[h] \begin{minipage}[]{0.4\linewidth} \vspace{0mm} \includegraphics[{width=\linewidth}] {tau_LQMA4_nh.pdf} \caption{Allowed values of $\tau$. Yellow and red points represent the values within 3 and 5$\sigma$, respectively.} \label{tau_LQMA4_nh} \end{minipage} \hspace{5mm} \begin{minipage}[]{0.4\linewidth} \vspace{2mm} \includegraphics[{width=\linewidth}]{sum-dcp_LQMA4_nh.pdf} \caption{Allowed region of $\delta_{CP}$ in terms of $\sum m_i$.} \label{sum-dcp_LQMA4_nh} \end{minipage} \hspace{5mm} \begin{minipage}[]{0.4\linewidth} \vspace{2mm} \includegraphics[{width=\linewidth}]{m1-mee_LQMA4_nh.pdf} \caption{Allowed region of the neutrinoless double beta decay mass in terms of the lightest neutrino mass.} \label{m1-mee_LQMA4_nh} \end{minipage} \hspace{5mm} \begin{minipage}[]{0.4\linewidth} \vspace{2mm} \includegraphics[{width=\linewidth}]{majo_LQMA4_nh.pdf} \caption{Allowed region of the Majorana phase $\alpha_{21}$ in terms of $\alpha_{31}$. } \label{majo_LQMA4_nh} \end{minipage} \end{figure} In addition to the lepton sector, we search for the allowed region of the quark sector in Figs.~\ref{vub-dcp-nh}, \ref{vub-vtd-nh}, and \ref{vcb-vub-nh}. Here, the dotted red lines show the 3$\sigma$ interval, while the black line is the best-fit value. The yellow (red) points correspond to the 3 (5)$\sigma$ interval of the lepton sector, where $\tau$ is commonly used. In Fig.~\ref{vub-dcp-nh}, we show the quark CP phase $\delta$ in terms of the (1,3) component of the CKM matrix, $|V_{ub}|$, and find that the whole region is allowed within the 3$\sigma$ interval. In Fig.~\ref{vub-vtd-nh}, we show $|V_{ub}|$ and $|V_{td}|$, and find that there is a weak linear correlation between them. In Fig.~\ref{vcb-vub-nh}, we show $|V_{cb}|$ and $|V_{ub}|$, and find that there is also a weak linear correlation between them. \begin{figure}[t] \begin{minipage}[]{0.4\linewidth} \vspace{0mm} \includegraphics[{width=\linewidth}] {vub-dcp_LQMA4_nh.pdf} \caption{The quark CP phase $\delta$ versus the (1,3) component of the CKM matrix. The red dashed lines represent the 3$\sigma$ experimental bounds.
} \label{vub-dcp-nh} \end{minipage} \hspace{5mm} \begin{minipage}[]{0.4\linewidth} \vspace{2mm} \includegraphics[{width=\linewidth}]{vub-vtd_LQMA4_nh.pdf} \caption{$|V_{ub}|$ versus $|V_{td}|$.} \label{vub-vtd-nh} \end{minipage} \hspace{5mm} \begin{minipage}[]{0.4\linewidth} \vspace{2mm} \includegraphics[{width=\linewidth}]{vcb-vub_LQMA4_nh.pdf} \caption{$|V_{cb}|$ versus $|V_{ub}|$.} \label{vcb-vub-nh} \end{minipage} \end{figure} {\it Benchmark point for NH}: We also give a benchmark point that satisfies the quark and lepton masses and mixings as well as the phases in the left columns of Tables~\ref{nh-bp-L} and~\ref{nh-bp-Q}, where we extracted a value near $\tau=1.75i$. The corresponding lepton and neutrino mixings are given by \begin{align} V_{\ell_L}&= \left[\begin{array}{ccc} -0.75 + 0.00017 i & 0.35 - 0.000026 i & -0.56 \\ 0.20 - 0.000012 i & -0.68 + 0.000074 i & -0.70 - 0.000040 i \\ -0.63 + 0.00013 i & -0.64 + 0.000077 i & 0.44 + 0.000044 i \\ \end{array}\right] , \\ V_{\nu}&= \left[\begin{array}{ccc} -0.91 - 0.20 i & 0.26 - 0.075 i & -0.22 - 0.027 i \\ 0.23 + 0.018 i & -0.045 - 0.28 i & -0.87 - 0.33 i \\ -0.13 + 0.24 i &-0.25 + 0.88 i & -0.25 - 0.15 i \\ \end{array}\right]. \end{align} The quark mixings are given by \begin{align} V_{u_L}&= \left[\begin{array}{ccc} -0.75 - 0.057 i & 0.41 + 0.24 i & -0.46 + 0.0034 i \\ -0.62 + 0.017 i & -0.31 - 0.068 i & 0.71 - 0.0045 i \\ 0.19 - 0.071 i & 0.76 + 0.30 i & 0.53 - 0.0015 i \\ \end{array}\right] , \\ V_{d_L}&= \left[\begin{array}{ccc} -0.64 + 0.000077 i & -0.63 + 0.00013 i & -0.44 - 0.000044 i \\ -0.68 + 0.000074 i & 0.20 - 0.000012 i & 0.70 + 0.000040 i \\ 0.35 - 0.000026 i & -0.75 + 0.00017 i & 0.56 \\ \end{array}\right] . \end{align} \begin{table}[h] \centering \begin{tabular}{|c|c||c|c|} \hline \rule[14pt]{0pt}{0pt} Lepton & NH($\tau\approx 1.75 i$) & IH($\tau\approx 1.06 i$) & IH($\tau\approx 1.76 i$)\\ \hline \rule[14pt]{0pt}{0pt} $\tau$& $ -0.0000945 + 1.75 i$ & $-0.000689 + 1.06 i$ & $-0.000829+ 1.76 i$\\ \rule[14pt]{0pt}{0pt} $a_\eta$ &$-0.23 - 1.4 i$ & $-0.31 + 0.013 i $ & $-4.1 + 4.3 i $ \\ \rule[14pt]{0pt}{0pt} $b_\eta$ & $-0.38 + 1.3 i$ & $-0.045 - 0.027 i$ & $-0.0014 + 0.0032 i $\\ \rule[14pt]{0pt}{0pt} $c_\eta$ & $0.0077 - 0.031 i $ & $0.0014 - 0.000047 i $ & $-0.0023 + 0.0035 i $\\ \rule[14pt]{0pt}{0pt} $a$ & $0.00016 + 0.00011 i$ & $3.0 + 0.98 i$ & $0.0085 + 0.025 i $\\ \rule[14pt]{0pt}{0pt} $b$ & $-0.017 + 0.0013 i$ & $0.00014 + 0.00015 i $ & $-0.096 + 0.041 i $\\ \rule[14pt]{0pt}{0pt} $c$ & $-0.00030 - 0.00010 i $ & $0.0056 - 0.0021 i $ & $-1.8 + 4.5 i $\\ \rule[14pt]{0pt}{0pt} $b'$ & $0.00014 - 0.000016 i $ & $0.000024 + 0.000016 i $ & $-0.00011 + 0.00011 i $\\ \rule[14pt]{0pt}{0pt} $c'$ & $-0.21 - 0.27 i $ & $-0.15 + 0.031 i $ & $-0.0000034 - 0.000060 i $\\ \rule[14pt]{0pt}{0pt} $[\alpha_e, \beta_e,\gamma_e]$ & $[0.0002,9.3\times10^{-7},0.003]$ &$[0.0005,10^{-5},0.007]$ & $[9.1\times10^{-7},1.9\times10^{-4},0.003]$\\ \rule[14pt]{0pt}{0pt} $\sin^2\theta_{12}$ & $ 0.32$& $0.28$ & $0.33$\\ \rule[14pt]{0pt}{0pt} $\sin^2\theta_{23}$ & $ 0.56$& $0.46$ & $0.58$\\ \rule[14pt]{0pt}{0pt} $\sin^2\theta_{13}$ & $ 0.024$&$0.024$ & $0.022$\\ \rule[14pt]{0pt}{0pt} $\delta_{CP}^\ell$ & $328^\circ$& $ 170^\circ$ & $335^\circ$\\ \rule[14pt]{0pt}{0pt} $[\alpha_{21},\,\alpha_{31}]$ & $[169^\circ,\, 336^\circ]$ & $[ 167^\circ,\, 159^\circ]$ & $[ 157^\circ,\, 130^\circ]$ \\ \rule[14pt]{0pt}{0pt} $\sum m_i$ & $0.071$\,eV & $0.11$\,eV & $0.11$\, eV \\ \rule[14pt]{0pt}{0pt} $s_{2a_1}$ & $4.6\times10^{-9}$ & $5.7\times10^{-5}$ &
$1.9\times10^{-9}$ \\
\rule[14pt]{0pt}{0pt} $\langle m_{ee} \rangle$ & $3.2$\,meV& $21$\,meV & $19$\,meV \\
\rule[14pt]{0pt}{0pt} $[m_{A_1},m_{B_1}]$ & $[18,6.0]$\,TeV & $[37, 37]$\,TeV & $[33, 39]$\,TeV \\
\rule[14pt]{0pt}{0pt} $\sqrt{\chi^2}$ & $2.9$ & $4.8$ & $4.5$\\ \hline
\end{tabular}
\caption{Numerical values of parameters and observables in the lepton sector at the benchmark points for NH and IH.}
\label{nh-bp-L}
\end{table}
\begin{table}[h]
\centering
\begin{tabular}{|c|c||c|c|} \hline
\rule[14pt]{0pt}{0pt} Quark & NH($\tau\approx 1.75 i$) & IH($\tau\approx 1.06 i$) & IH($\tau\approx 1.76 i$)\\ \hline
\rule[14pt]{0pt}{0pt} $\tau$& $ -0.0000945 + 1.75 i $ & $-0.000689 + 1.06 i $ & $-0.000829+ 1.76 i $\\
\rule[14pt]{0pt}{0pt} $a_u$ &$1.3\times 10^{-7}$ & $8.2\times 10^{-8}$ & $1.1\times 10^{-5}$ \\
\rule[14pt]{0pt}{0pt} $a_c$ & $4.6\times 10^{-5}$ & $4.2\times 10^{-5}$ & $0.0007$\\
\rule[14pt]{0pt}{0pt} $a_t$ & $0.017 $ & $0.017$ & $0.22$\\
\rule[14pt]{0pt}{0pt} $g_{u1}$ &$0.00066 + 0.0016 i $ & $-0.00013 + 0.013 i$ & $0.57 - 0.35 i $ \\
\rule[14pt]{0pt}{0pt} $g_{u2}$ & $0.053 + 0.30 i $ & $0.094 + 0.24 i $ & $-0.060 - 0.41 i $\\
\rule[14pt]{0pt}{0pt} $g_{u3}$ & $0.11 + 0.0061 i $ & $0.091 + 0.018 i $ & $0.046 + 0.021 i $\\
\rule[14pt]{0pt}{0pt} $a_d$ & $0.00018$ & $0.00014$ & $0.00047$\\
\rule[14pt]{0pt}{0pt} $a_s$ & $1.2 \times 10^{-5} $ & $8.7\times 10^{-6}$ & $5.2\times 10^{-5}$\\
\rule[14pt]{0pt}{0pt} $a_b$ & $0.011$ & $0.011$ & $0.025$\\
\rule[14pt]{0pt}{0pt} $|V_{us}|$ & $ 0.23$& $0.22$ & $0.22$\\
\rule[14pt]{0pt}{0pt} $|V_{cb}|$ & $ 0.033$& $0.027$ & $0.042$\\
\rule[14pt]{0pt}{0pt} $|V_{ub}|$ & $ 0.0031$&$0.0020$ & $0.0039$\\
\rule[14pt]{0pt}{0pt} $\delta_{CP}$ & $58^\circ$& $ 51^\circ$ & $83^\circ$\\ \hline
\end{tabular}
\caption{Numerical values of parameters and observables in the quark sector at the benchmark points for NH and IH.}
\label{nh-bp-Q}
\end{table}
\subsection{IH}
In the case of IH, we obtain fewer allowed parameter points than for NH, since it is more difficult to fit the data. Since there are no points within the 3$\sigma$ region but a few points within the 5$\sigma$ region, we describe the tendencies instead of showing scatter plots. The value of $\tau$ is, interestingly, localized near the two fixed points $i$ and $1.74i$, which have remnant $Z_2$ and $Z_3$ symmetries, respectively. $\sum m_i$ is localized at $0.10-0.12$ eV, while $\delta_{CP}$ is allowed in the range $150^\circ-360^\circ$. Moreover, almost all the points satisfy the cosmological constraint of $\sim0.12$ eV, similarly to the NH case. $\langle m_{ee}\rangle$ is localized at around $0.016-0.024$ eV, while the lightest neutrino mass $m_3$ is allowed up to $1.2\times10^{-4}$ eV. Moreover, $m_3$ is also localized around $10^{-6}$ eV. $\alpha_{21}$ is localized around 180$^\circ$, while $\alpha_{31}$ is allowed in the range $100^\circ-360^\circ$.
In addition to the lepton sector, we discuss the allowed region of the quark sector. Even though the allowed points are not numerous, we can infer the following tendencies from our analysis. As for the quark CP phase $\delta$ in terms of the (1,3) component of the CKM matrix, $|V_{ub}|$, we find that the whole region is allowed within the 3$\sigma$ interval. As for $|V_{ub}|$ and $|V_{td}|$, we find a weak linear correlation between them. As for $|V_{cb}|$ and $|V_{ub}|$, we find that there is also a weak linear correlation between them.

{\it Benchmark points for IH}: We give two interesting benchmark points, $\tau\approx 1.06 i$ and $\tau\approx 1.76 i$, that satisfy the quark and lepton masses, mixings, and phases, shown in the center and right columns of Tables~\ref{nh-bp-L} and~\ref{nh-bp-Q}. The lepton and neutrino mixings are given by
\begin{align}
\tau\approx 1.06 i:&\nonumber\\
V_{\ell_L}&=
\left[\begin{array}{ccc}
-0.65 + 0.0068 i & 0.72 + 0.00024 i & -0.25 + 0.00061 i \\
-0.47 + 0.0067 i & -0.64 + 0.0012 i &-0.61 + 0.00072 i \\
-0.60 + 0.0068 i & -0.28 + 0.00098 i & 0.75 + 0.000042 i \\
\end{array}\right] , \\
V_{\nu}&=
\left[\begin{array}{ccc}
-0.63 + 0.12 i &0.053 - 0.029 i & -0.13 - 0.75 i \\
0.090 + 0.11 i &0.14 + 0.98 i & 0.015 - 0.089 i \\
-0.75 + 0.10 i &0.12 + 0.097 i & 0.068 + 0.63 i \\
\end{array}\right] ,
\end{align}
\begin{align}
\tau\approx 1.76 i:&\nonumber\\
V_{\ell_L}&=
\left[\begin{array}{ccc}
-0.21 - 0.000031 i & 0.80 - 0.0015 i & 0.56 + 0.000052 i \\
0.63 - 0.00065 i & -0.33 + 0.00026 i &0.70 + 0.00032 i \\
0.75 - 0.00091 i &0.50 - 0.0010 i &-0.44 - 0.00036 i \\
\end{array}\right] , \\
V_{\nu}&=
\left[\begin{array}{ccc}
-0.010 + 0.016 i &0.063 - 0.098 i &-0.87 + 0.48 i \\
0.28 - 0.037 i &-0.77 + 0.56 i & -0.12 + 0.014 i \\
0.53 + 0.80 i &0.25 + 0.13 i & 0.016 + 0.0094 i \\
\end{array}\right] .
\end{align}
The quark mixings are given by
\begin{align}
\tau\approx 1.06 i:&\nonumber\\
V_{u_L}&=
\left[\begin{array}{ccc}
-0.59 + 0.26 i & -0.18 - 0.067 i & -0.71 - 0.23 i \\
-0.55 + 0.21 i & -0.50 - 0.030 i & 0.61 + 0.18 i \\
-0.41 + 0.28 i & 0.84 - 0.062 i & 0.20 + 0.080 i \\
\end{array}\right] , \\
V_{d_L}&=
\left[\begin{array}{ccc}
-0.60 + 0.0068 i & -0.28 + 0.00096 i & -0.75 - 0.000044 i \\
-0.47 + 0.0067 i & -0.64 + 0.0011 i & 0.61 - 0.00072 i \\
-0.65 + 0.0068 i & 0.72 + 0.00022 i & 0.25 - 0.00061 i \\
\end{array}\right] ,
\end{align}
\begin{align}
\tau\approx 1.76 i:&\nonumber\\
V_{u_L}&=
\left[\begin{array}{ccc}
-0.76 - 0.042 i & 0.43 + 0.19 i & -0.45 + 0.018 i\\
-0.62 + 0.015 i & -0.33- 0.052 i & 0.71 - 0.026 i \\
0.19 - 0.054 i & 0.78 + 0.23 i & 0.54 - 0.014 i \\
\end{array}\right] , \\
V_{d_L}&=
\left[\begin{array}{ccc}
-0.64 + 0.00066 i & -0.63 + 0.0011 i & -0.44 - 0.00039 i \\
-0.68 + 0.00064 i & 0.21 - 0.00013 i & 0.70 + 0.00037 i \\
0.35 - 0.00023 i & -0.75 + 0.0015 i & 0.56 + 0.000089 i \\
\end{array}\right].
\end{align}
\section{Conclusions}
We have proposed a LQ model to explain the masses and mixings of quarks and leptons, introducing a modular $A_4$ symmetry. Due to the nature of the LQ model, in which leptons (quarks) directly connect to quarks (leptons) via the LQ, a single modulus has to be applied; this provides good motivation towards a unification of quark and lepton flavor in the $A_4$ modular symmetry. After assigning the quark sector so as to reproduce the experimental results within the 3$\sigma$ interval, we have constructed the lepton sector, where the neutrino mass matrix is induced at one-loop level with the down-quark sector running in the loop and a unified value of $\tau$ is used for quarks and leptons. Then, we have performed a numerical analysis to search for the region satisfying the experimental measurements for both the quark and lepton sectors, depending on NH and IH. In the case of NH, we have found a rather wide allowed space within the 3$\sigma$ interval and obtained tendencies of observables for quarks and leptons. In particular, we have found an allowed region near $\tau=1.75 i$, which is close to the fixed point $\tau=i\infty$; we have therefore also shown a promising benchmark point around this solution. In the case of IH, we did not find an allowed region within the 3$\sigma$ interval, but we did find points within the 5$\sigma$ interval. Although the number of allowed points is small, we have found that all the allowed regions are localized near $\tau=i$ and $1.76i$, both of which are near fixed points. We have shown these as benchmark points. These predictions could be tested in the near future.
\section*{Acknowledgments}
\vspace{0.5cm}
{\it This research was supported by an appointment to the JRG Program at the APCTP through the Science and Technology Promotion Fund and Lottery Fund of the Korean Government. This was also supported by the Korean Local Governments - Gyeongsangbuk-do Province and Pohang City (H.O.), and the European Regional Development Fund-Project Engineering Applications of Microworld Physics (Grant No. CZ.02.1.01/0.0/0.0/16\_019/0000766) (Y.O.). H. O.
is sincerely grateful for KIAS membership.}
\section*{Appendix}
The modular forms of weight 2, $Y^{(2)}_{\bf3} = [y_{1},y_{2},y_{3}]^T$, transforming as a triplet of $A_4$, are written in terms of the Dedekind eta-function $\eta(\tau)$ and its derivative:
\begin{eqnarray}
\label{eq:Y-A4}
y_{1}(\tau) &=& \frac{i}{2\pi}\left( \frac{\eta'(\tau/3)}{\eta(\tau/3)} +\frac{\eta'((\tau +1)/3)}{\eta((\tau+1)/3)} +\frac{\eta'((\tau +2)/3)}{\eta((\tau+2)/3)} - \frac{27\eta'(3\tau)}{\eta(3\tau)} \right), \nonumber \\
y_{2}(\tau) &=& \frac{-i}{\pi}\left( \frac{\eta'(\tau/3)}{\eta(\tau/3)} +\omega^2\frac{\eta'((\tau +1)/3)}{\eta((\tau+1)/3)} +\omega \frac{\eta'((\tau +2)/3)}{\eta((\tau+2)/3)} \right) , \label{eq:Yi} \\
y_{3}(\tau) &=& \frac{-i}{\pi}\left( \frac{\eta'(\tau/3)}{\eta(\tau/3)} +\omega\frac{\eta'((\tau +1)/3)}{\eta((\tau+1)/3)} +\omega^2 \frac{\eta'((\tau +2)/3)}{\eta((\tau+2)/3)} \right), \nonumber\\
\eta(\tau) &=& q^{1/24}\prod_{n=1}^\infty (1-q^n), \quad q=e^{2\pi i \tau}, \quad \omega=e^{2\pi i /3} . \nonumber
\end{eqnarray}
Then, any multiplet of higher weight is constructed by the multiplication rules of $A_4$, and one finds the following:
\begin{align}
&Y^{(4)}_{\bf1}=y^2_1+2y_2y_3,\quad Y^{(4)}_{\bf3}\equiv
\left[\begin{array}{c}
y'_1 \\ y'_2 \\ y'_3 \\
\end{array}\right]
=
\left[\begin{array}{c}
y^2_1-y_2y_3 \\ y^2_3-y_1y_2 \\ y^2_2-y_1y_3 \\
\end{array}\right], \quad
Y^{(6)}_{\bf 1}= y_1^3 + y_2^3 + y_3^3 - 3 y_1 y_2 y_3,
\end{align}
\begin{align}
&Y^{(6)}_{\bf3}\equiv
\left[\begin{array}{c}
f_1 \\ f_2 \\ f_3 \\
\end{array}\right]
=
\left[\begin{array}{c}
y^3_1 + 2 y_1 y_2 y_3 \\ y^2_1 y_2 +2 y_2^2 y_3 \\ y^2_1 y_3 + 2 y_3^2 y_2 \\
\end{array}\right],\quad
Y^{(6)}_{\bf 3'}\equiv
\left[\begin{array}{c}
f'_1 \\ f'_2 \\ f'_3 \\
\end{array}\right]
=
\left[\begin{array}{c}
y^3_3 + 2 y_1 y_2 y_3 \\ y^2_3 y_1 +2 y_1^2 y_2 \\ y^2_3 y_2 + 2 y_2^2 y_1 \\
\end{array}\right].
\end{align}
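For reference, the weight-2 triplet above can be evaluated numerically from its $q$-expansion. The following minimal Python sketch (ours, purely illustrative and not part of the original analysis) uses the standard identity $\eta'(\tau)/\eta(\tau) = (i\pi/12)E_2(\tau)$, with $E_2(\tau) = 1 - 24\sum_{n\geq1}\sigma_1(n)q^n$ and $q=e^{2\pi i\tau}$:
\begin{verbatim}
# Illustrative sketch (not the authors' code): evaluates the weight-2
# A4 triplet [y1, y2, y3] via eta'(tau)/eta(tau) = (i*pi/12)*E2(tau).
import cmath

def eta_log_deriv(tau, nmax=200):
    q = cmath.exp(2j * cmath.pi * tau)
    # E2(tau) = 1 - 24 * sum_{n>=1} sigma_1(n) q^n (rapidly convergent)
    E2 = 1 - 24 * sum(sum(d for d in range(1, n + 1) if n % d == 0) * q**n
                      for n in range(1, nmax))
    return 1j * cmath.pi / 12 * E2

def y_triplet(tau):
    w = cmath.exp(2j * cmath.pi / 3)  # omega = e^{2 pi i / 3}
    h = [eta_log_deriv((tau + k) / 3) for k in range(3)]
    h3 = eta_log_deriv(3 * tau)
    y1 = (1j / (2 * cmath.pi)) * (h[0] + h[1] + h[2] - 27 * h3)
    y2 = (-1j / cmath.pi) * (h[0] + w**2 * h[1] + w * h[2])
    y3 = (-1j / cmath.pi) * (h[0] + w * h[1] + w**2 * h[2])
    return y1, y2, y3

# Near the NH benchmark tau ~ 1.75i, y2 and y3 are suppressed relative
# to y1, as expected close to the fixed point tau = i*infinity.
print(y_triplet(complex(-0.0000945, 1.75)))
\end{verbatim}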
\section{Introduction}
The $\Lambda$CDM cosmogony, while successful in describing the large scale structure of our universe, still suffers from potential discrepancies in modeling the properties on small scales, primarily for dark matter halos that are expected to host the observed dwarf galaxies. For example, Milky-Way satellites have significantly lower dark matter densities in the inner regions compared to the corresponding subhalos in cosmological $N$-body simulations --- this is known as the {\it Too Big To Fail} problem \citep{boylan2011too}. A potentially related issue concerns the inner dark matter density profiles inferred from the rotation curves of small disk galaxies, many of which are observed to be cored/flat, while simulated $\Lambda$CDM halos are cusped/rising --- this is the {\it Cusp-core} problem \citep{flores1994,moore1994,de2010core}. Feedback from star formation can potentially explain this discrepancy in larger dwarf galaxies \citep{governato2010bulgeless,pontzen2012supernova}. However, if dark matter cores exist within galaxies that have had too little star formation ($M_\star \lesssim 10^{6}\ M_\odot$) to affect the dark matter density slopes \citep{di2013dependence,chan2015impact,tollet2016nihao}, then this could be an indication that the dark matter is something other than CDM \citep[see][and references therein]{bullock2017small}.

Though particularly important, the question of whether or not the smallest galaxies have cusps or cores is notoriously difficult to answer, owing to the fact that they are dispersion supported. While it is possible to quantify the detailed mass profiles of spheroidal galaxies through the use of kinematic measurements of individual stars in 3D \citep[e.g.][]{wilkinson2002dark,strigari2007astrometry}, until recently we have been limited to data sets that include only 1D velocities along the line-of-sight. This introduces a degeneracy between the inferred mass profile slope and the underlying velocity dispersion anisotropy parameter $\beta$, which quantifies the intrinsic difference between the radial and tangential velocity dispersions.

One robust measurement that is possible with line-of-sight velocities is the integrated mass within a single characteristic radius for each galaxy. This idea was first emphasized by \cite{walker2009universal}, who used spherical Jeans modelling to show that the integrated mass within an effective radius is independent of the assumed $\beta$ for a wide variety of assumptions and galaxies. \cite{wolf2010accurate} extended this idea, also using Jeans modeling, to show that there exists, analytically, an idealized radius within which the mass inferred from line-of-sight velocities is formally insensitive to $\beta$. Under mild assumptions, this radius is where the log-slope of the stellar tracer profile is equal to $-3$. Both the Walker and Wolf mass estimators do remarkably well when compared to \textit{ab initio} cosmological simulations of (non-spherical) dwarf galaxies \citep{campbell2017knowing,gonzalez2017dwarf}. They are also used extensively to interpret observed line-of-sight velocity dispersion measurements \citep[see][and references therein]{simon2019}.

We are entering a new era of astrometry, in which the internal proper motions of distant dwarf spheroidal galaxies are becoming measurable with the advent of {\tt GAIA} \citep{brown2016gaia,prusti2016gaia,brown2018gaia,helmi2018gaia}. Additionally, {\tt LSST} may provide similar advances \citep{lsst2009}.
Measurements of stellar velocities along the plane of the sky promise an important new window into the mass and density structure of dwarf galaxies. The results of \cite{massari20173d} and \cite{massari2019stellar} provide an exciting first look at what we expect to measure in the coming years by providing plane-of-the-sky velocity dispersion measurements for Sculptor and Draco, respectively.

The article is outlined as follows: In Section~\ref{sec:preliminaries}, we briefly introduce the spherical Jeans equation and the coordinate system used as the basis of our analysis. Section~\ref{sec:estimators} derives the mass estimators by combining the Jeans equation with proper motions measured in the plane of the sky, and states the key assumptions considered therein. Section~\ref{sec:observation.results} demonstrates the use of the combined mass estimators to provide an implied mass-density slope for the currently available proper motions of Draco and Sculptor. Section~\ref{sec:mock.observations} assesses our estimators with mock observations constructed from high-resolution simulations. In Section~\ref{sec:discussion}, we discuss possible biases that might arise due to Jeans modelling, and, finally, Section~\ref{sec:conclusion} summarizes our results and provides concluding remarks.

\section{Preliminaries}
\label{sec:preliminaries}
In what follows, lower case $r$ represents the (physical) three-dimensional radius and upper case $R$ represents the (physical) two-dimensional projected radius.

\subsection{The Spherical Jeans Equation}
\label{sec:spherical.jean.equation}
For a spherically symmetric steady-state system, the first moment of the collisionless Boltzmann equation for a stellar phase-space distribution takes the form of the spherical Jeans equation \citep{binney2011galactic}:
\begin{align}
-\frac{d\Phi(r)}{dr} = \frac{1}{n_{\star}(r)}\frac{d}{dr}\left(n_{\star} \sigma_{r}^{2}(r)\right) + \frac{2\beta\sigma^{2}_{r}(r)}{r} \, ,
\label{eq:spherical_jeans}
\end{align}
which relates the total gravitational potential, $\Phi(r)$, of the stellar system to its two tracers: the intrinsic radial velocity dispersion, $\sigma_{r}^{2} := \langle v_{r}^{2}\rangle - \langle v_{r} \rangle^{2}$, and the three-dimensional stellar number density, $n_{\star}(r)$. The quantity
\begin{align}
\beta(r) & := 1 - \frac{\sigma_{\theta}^{2} + \sigma_{\phi}^{2}}{2\sigma_{r}^{2}} \,
\end{align}
is a measure of the velocity dispersion {\it anisotropy} of the tracer population, where $\sigma_{\theta}$ and $\sigma_{\phi}$ are the intrinsic velocity dispersions tangential to radius $r$. We will assume that $\sigma_{\theta} = \sigma_{\phi}$. Radially biased systems tend to have $\beta \rightarrow 1$, while $\beta \rightarrow -\infty$ corresponds to tangentially biased systems. In addition, the total intrinsic velocity dispersion follows
\begin{align}
\sigma_{\rm tot}^{2}(r) = \sigma_{r}^{2} + \sigma_{\theta}^{2} + \sigma_{\phi}^{2} = (3-2\beta)\sigma_{r}^{2}(r) \, .
\label{eq:total.dispersion}
\end{align}
The total mass profile of the dynamical system is an implied quantity of Eq.~\eqref{eq:spherical_jeans}, such that
\begin{align}
M(r|\beta) = \frac{r \sigma_{r}^{2}(r)}{G} \left( \gamma_{\star} + \gamma_{\sigma} - 2\beta \right) \, ,
\label{eq:jeans_mass}
\end{align}
where $G$ is Newton's gravitational constant and the logarithmic slopes are defined as
\begin{align}
\gamma_{\star} := -\frac{d\log n_{\star}}{d\log r} ;&\qquad
\gamma_{\sigma} := -\frac{d\log \sigma_{r}^{2}}{d\log r}; \\ \nonumber
\gamma_{\beta} &:= -\frac{d\log \beta}{d\log r} \, .
\end{align}

\subsection{Coordinate System of Measurements}
\label{sec:coordinates}
We use the coordinate system discussed in \cite{strigari2007astrometry}, in which the three-dimensional velocity of a star in a spherical, steady-state system is composed of a radial component, $v_{r}$, and transverse components, $v_{\theta}$ and $v_{\phi}$. The projected proper motions are composed of these three-dimensional velocities; along the measured {\it line-of-sight},
\begin{align}
v_{\rm los} &= v_{r}\cos\theta + v_{\theta}\sin\theta \, ,
\end{align}
where $\vec{r}\cdot \vec{z} = \cos\theta$ and $\vec{z}$ is the line-of-sight direction, while along the plane of the sky, the components {\it parallel} and {\it transverse} to the projected radius $R$ are
\begin{align}
v_{\mathcal{R}} &= v_{r}\sin\theta + v_{\theta}\cos\theta \,\quad \mathrm{and}\quad v_{\mathcal{T}} = v_{\phi} \, ,
\end{align}
respectively. Here, the variances of the velocity dispersions are given by $\sigma_{i}^{2} := \langle v_{i}^{2} \rangle$, and $\sigma_{\phi} = \sigma_{\theta}$ is assumed. The mappings of the observable proper motions to the deprojected, three-dimensional tracer profiles are
\begin{align}
\Sigma_{\star} \sigma_{\rm los}^{2}(R) &= \int_{R^{2}}^{\infty} \frac{dr^{2}}{\sqrt{r^{2} - R^{2}}} \left[ 1- \frac{R^{2}}{r^{2}}\beta \right] n_{\star}\sigma_{r}^{2}\, ,
\label{eq:sig_los.mapping} \\
\Sigma_{\star} \sigma_{\mathcal{R}}^{2}(R) &= \int_{R^{2}}^{\infty} \frac{dr^{2}}{\sqrt{r^{2} - R^{2}}} \left[ 1 - \beta + \frac{R^{2}}{r^{2}}\beta \right] n_{\star}\sigma_{r}^{2} \, ,
\label{eq:sig_projR.mapping} \\
\Sigma_{\star} \sigma_{\mathcal{T}}^{2}(R) &= \int_{R^{2}}^{\infty} \frac{dr^{2}}{\sqrt{r^{2} - R^{2}}} \left[ 1-\beta\right] n_{\star}\sigma_{r}^{2} \, .
\label{eq:sig_projT.mapping}
\end{align}
The combination of the proper motions also satisfies $\sigma_{\rm tot}^{2} = \sigma_{\mathcal{R}}^{2} + \sigma_{\mathcal{T}}^{2} + \sigma_{\rm los}^{2}$. For an observed galaxy, $\Sigma_{\star}(R)$ is the projected stellar density, which is related to the three-dimensional $n_{\star}(r)$ via an Abel inversion, Eq.~\eqref{eq:abel.inversion}. In the remainder of the text, we will focus on the additional constraints imposed by the plane-of-the-sky components, Eqs.~\eqref{eq:sig_projR.mapping} and~\eqref{eq:sig_projT.mapping}.

\section{Mass Estimators from Proper Motions}
\label{sec:estimators}
In this section, quantities enclosed in brackets with an asterisk, $\langle\cdots\rangle^{*}$, indicate {\it luminosity-weighted} measurements, $r_{1/2}$ is the three-dimensional, deprojected half-light radius, and $R_{e}$ is the two-dimensional effective radius.
\input{tables/observations.tex}
\subsection{Measurements along the Line-of-sight}
Here, we rederive the main results of \cite{wolf2010accurate} using the assumptions discussed therein: Consider a velocity dispersion-supported stellar system that is well studied, such that $\Sigma_{\star}(R)$ and $\sigma_{\rm los}(R)$ are determined accurately by observations.
In this system, all of the stars are assumed to be bound, with no dynamical interlopers. If we model the system's mass profile using the Jeans equation, any viable solution will keep the combination $\Sigma_{\star}\sigma_{\rm los}^{2}(R)$ fixed to within the allowable errors. We start with the mapping of $\sigma_{\rm los}$ to $\sigma_{r}$. To utilize Eq.~\eqref{eq:spherical_jeans}, Eq.~\eqref{eq:sig_los.mapping} is brought to an invertible form that is amenable to an Abel inversion:
\begin{align}
\Sigma_{\star}\sigma_{\rm los}^{2}(R) &= \int_{R^{2}}^{\infty} \frac{dr^{2}}{\sqrt{r^{2} - R^{2}}} \left[ n_{\star}\sigma_{r}^{2}(1-\beta) + \int_{r^{2}}^{\infty} d\tilde{r}^{2} \frac{\beta n_{\star} \sigma_{r}^{2}}{2\tilde{r}^{2}} \right] \, . \nonumber
\label{eq:sig_los.mapping.inv}
\end{align}
The term in the brackets on the right-hand side has to be a well-defined quantity, as the left-hand side is an accurately observed quantity that is ignorant of the form of $\beta$. Therefore, we are allowed to compare different forms of $\beta$ with one another; we equate the isotropic form of the integrand, $\beta = 0$, with an integrand that depends on some arbitrary $\beta$, this being the simplest comparison one can consider:
\begin{align}
n_{\star}\sigma_{r}^{2}\big|_{\beta = 0} &= n_{\star}\sigma_{r}^{2}(1-\beta) + \int_{r^{2}}^{\infty} d\tilde{r}^{2} \frac{\beta n_{\star} \sigma_{r}^{2}}{2\tilde{r}^{2}} \, .
\end{align}
By then differentiating with respect to $\log r$ and introducing a factor of $r\sigma_{r}^{2}/G$ on both sides, we can bring the left-hand and right-hand sides into the respective forms of Eq.~\eqref{eq:jeans_mass} and evaluate the difference:
\begin{align}
M(r|\beta) - M(r|0) &= \frac{r \sigma_{r}^{2} \beta}{G} \left(\gamma_{\star} + \gamma_{\sigma} + \gamma_{\beta} - 3 \right) \, .
\label{eq:los_mass_diff}
\end{align}
From this expression, we see that there can exist a radius, $r_{\rm eq}$, where the term in the parentheses vanishes when mapping projected line-of-sight measurements to the intrinsic quantities of the system, that is,
\begin{align}
\gamma_{\star}(r_{\rm eq}) &= 3 - \gamma_{\sigma}(r_{\rm eq}) - \gamma_{\beta}(r_{\rm eq}) \, .
\label{eq:wolf_constraint}
\end{align}
Moreover, if $\sigma_{r}^{2}(r)$ and $\beta(r)$ are slowly varying, such that the log-slope profiles are approximately zero, i.e., $\gamma_{\sigma}(r_{\rm eq}) \simeq 0$ and $\gamma_{\beta}(r_{\rm eq}) \simeq 0$, then the degeneracy of the mass profile written in Eq.~\eqref{eq:jeans_mass} is effectively minimized, and the right-hand side of Eq.~\eqref{eq:los_mass_diff} vanishes. Furthermore, if $\gamma_{\star}(r_{\rm eq}) \simeq 3$, then $r_{\rm eq} \simeq r_{-3}$, the radius where the differential log-gradient of the stellar tracer profile is $-3$. To determine the value of $M(r_{\rm eq})$, Eq.~\eqref{eq:sig_los.mapping.inv} is deprojected via an Abel inversion to isolate the combination $n_{\star}\sigma_{r}^{2}$ \citep[Eq. A5;][]{wolf2010accurate}.
Differentiating this with respect to $\log r$ and inserting the result into Eq.~\eqref{eq:jeans_mass}, we obtain
\begin{align}
\frac{GM(r)}{r} &= (3-2\beta)\sigma_{r}^{2}(r) + \left( \gamma_{\star} + \gamma_{\sigma} - 3 \right)\sigma_{r}^{2}(r) \nonumber \\
&= \sigma_{\rm tot}^{2}(r) + \left( \gamma_{\star} + \gamma_{\sigma} - 3 \right)\sigma_{r}^{2}(r) \, ,
\end{align}
where we have used the total intrinsic velocity dispersion of Eq.~\eqref{eq:total.dispersion}. From the prior assumptions, if $\gamma_{\sigma}(r_{\rm eq}) \ll 3$, then the parenthetical term vanishes and $r_{\rm eq}\simeq r_{-3}$, giving us
\begin{align}
M(r_{-3}) \simeq \frac{\sigma_{\rm tot}^{2}(r_{-3}) r_{-3}}{G} \, .
\end{align}
\cite{wolf2010accurate} showed that, to a good approximation, $\sigma_{\rm tot}^{2}(r_{-3}) \simeq \langle \sigma_{\rm tot}^{2} \rangle^{*}$ for models that match observations. Furthermore, spherical symmetry demands that the line-of-sight dispersion obeys $\langle \sigma_{\rm tot}^{2} \rangle^{*} = 3\langle \sigma_{\rm los}^{2} \rangle^{*}$ (see Section \ref{sec:virial}). This leads us to an {\it idealized} estimator at $r_{-3}$:\footnote{Throughout, we refer to an {\it idealized} solution as one that considers the quintessential case of $\gamma_{\beta} = \gamma_{\sigma} = 0$ at the radius that minimizes the anisotropy dependence. We do not expect physical systems to match this behavior perfectly, but we presume these log-slopes to be relatively small at the expected radius. We will remain agnostic on this point until later in the article.}
\begin{align}
M_{-3}^{\rm ideal} &\equiv M(r_{-3}) = \frac{3\langle \sigma_{\rm los}^{2} \rangle^{*} r_{-3}}{G} \, .
\label{eq:wolf_mass}
\end{align}
Additionally, with the foundations of spherical symmetry, the implied circular velocity at $r_{-3}$ is particularly simple:
\begin{align}
V_{\rm circ}(r_{-3}) &= \sqrt{\ 3 \langle \sigma_{\rm los}^{2} \rangle^{*}} \, .
\label{eq:wolf_vcirc}
\end{align}
\cite{wolf2010accurate} showed that for a variety of analytical stellar profiles, $r_{-3}$ is close to $r_{1/2} \simeq 4R_{e}/3$, giving
\begin{align}
M(r_{-3}) &\simeq \frac{3\langle \sigma_{\rm los}^{2} \rangle^{*} r_{-3}}{G} ;\qquad M(4R_{e}/3) \simeq \frac{4\langle \sigma_{\rm los}^{2} \rangle^{*} R_{e}}{G} \, .
\end{align}
In the coming sections, we utilize the arguments stipulated here in the derivation of Eq.~\eqref{eq:wolf_mass}, and seek to determine whether another radius that is independent of the anisotropy, and independent of the $r_{\rm eq}$ found previously, exists for the two other proper-motion mappings. From here on, we will refer to Eq.~\eqref{eq:wolf_mass} as $M_{-3}$ and to Eq.~\eqref{eq:wolf_vcirc} as $V_{-3}$.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/fig1.pdf}
\caption{--- {\bf \emph{Radii of minimized uncertainty for idealized models}}. Curves depict the cumulative mass profiles derived in Appendix~\ref{sec:mass.profiles} based on fixed line-of-sight (top panel), parallel (middle panel), and transverse (bottom panel) velocity dispersions, all of which use the median values for Sculptor from Table~\ref{tab:1}. The results assume that the observed dispersion profile for each component is constant with $R$. The lines correspond to several choices of constant intrinsic anisotropy $\beta(r) = \beta_{0}$, as indicated by the colors. We also assume that the underlying tracer profile follows a Plummer model.
The small white circles in the top and bottom panels show the masses predicted by the $M_{-3}$ and $M_{-2}$ estimators, respectively. These points intersect the region where the inferred mass is independent of the anisotropy. Note that the parallel component has no such intersection, as anticipated in Section~\ref{sec:parallel}. The dotted lines give the characteristic log-slope radii of the tracer profile, while the dashed line shows the standard mapping to the projected observable, $R_{e}$.
}
\label{fig:1}
\end{figure}

\subsection{\emph{Plane of the sky}: Measurements Parallel to \emph{R}}
\label{sec:parallel}
Consider a dispersion-supported stellar system that is well studied, such that $\sigma_{\mathcal{R}}(R)$ is determined accurately through observations. We begin by relating the projected measurement of $\sigma_{\mathcal{R}}(R)$ to the intrinsic quantities via Eq.~\eqref{eq:sig_projR.mapping}. This is then rewritten in an invertible form (see Appendix~\ref{sec:mass.profiles}):
\begin{align}
\Sigma_{\star} \sigma_{\mathcal{R}}^{2}(R) &= \int_{R^{2}}^{\infty} \frac{dr^{2}}{\sqrt{r^{2} - R^{2}}} \left[ n_{\star}\sigma_{r}^{2} - \int_{r^{2}}^{\infty} d\tilde{r}^{2} \frac{\beta n_{\star} \sigma_{r}^{2}}{2\tilde{r}^{2}} \right] \, .
\label{eq:sig_projR.mapping.inv}
\end{align}
In this invertible form, the left-hand side is an accurate, observable quantity that is ignorant of the form of $\beta$; the term in the brackets must therefore be well defined regardless of the form of $\beta$ chosen. We are thus allowed to consider the simple case of equating the isotropic integrand with an integrand that depends on some arbitrary anisotropy:
\begin{align}
n_{\star}\sigma_{r}^{2}\big|_{\beta = 0} &= n_{\star}\sigma_{r}^{2} - \int_{r^{2}}^{\infty} d\tilde{r}^{2} \frac{\beta n_{\star} \sigma_{r}^{2}}{2\tilde{r}^{2}} \, .
\end{align}
By then differentiating with respect to $\log r$ and introducing a factor of $r\sigma_{r}^{2}/G$ on both sides, the left-hand and right-hand sides can be rewritten in the form of the integrated Jeans masses, allowing us to express the difference:
\begin{align}
M(r|\beta) - M(r|0) &= \frac{r \sigma_{r}^{2} \beta}{G} \, .
\label{eq:parallel_mass_diff}
\end{align}
Importantly, this expression lacks the parenthetical term seen in Eq.~\eqref{eq:los_mass_diff}. We conclude that a radius that minimizes the anisotropy dependence, at least for the assumptions we consider in the Jeans-modeled measurements of $\sigma_{\mathcal{R}}(R)$, {\it does not} exist for any limiting case of $\beta$ we might impose, since the anisotropy remains a relevant quantity throughout the mass profile.

\subsection{\emph{Plane of the sky}: Measurements Transverse to \emph{R}}
\label{sec:tangential}
Consider a dispersion-supported stellar system that is well studied, such that $\sigma_{\mathcal{T}}(R)$ is determined accurately through observations, in which all stars are bound inside the system. We begin by relating $\sigma_{\mathcal{T}}(R)$ to $\sigma_{r}(r)$ via Eq.~\eqref{eq:sig_projT.mapping}. Fortunately, this is already in an invertible form; we now equate its isotropic and general anisotropic forms,
\begin{align}
n_{\star}\sigma_{r}^{2}\big|_{\beta = 0} &= n_{\star}\sigma_{r}^{2}(1-\beta) \, ,
\end{align}
differentiate with respect to $\log r$, and algebraically manipulate to obtain
\begin{align}
M(r|\beta) - M(r|0) &= \frac{r \sigma_{r}^{2} \beta}{G} \left(\gamma_{\star} + \gamma_{\sigma} + \gamma_{\beta} - 2 \right) \, .
\label{eq:projT_mass_diff}
\end{align}
We see that {\it there can} exist a radius, which we denote $\tilde{r}_{\mathrm{eq}}$,\footnote{This radius is not to be associated with the $r_{\rm eq}$ seen in the derivation of $M_{-3}$, as that $r_{\rm eq}$ is tied to measurements of $\sigma_{\rm los}$. The $r_{\rm eq}$ of Eq.~\eqref{eq:wolf_constraint} and the $\tilde{r}_{\rm eq}$ of Eq.~\eqref{eq:lazar_constraint} are taken to be nonequivalent.} where the parenthetical term vanishes. The possible existence of $\tilde{r}_{\mathrm{eq}}$ therefore minimizes the dependence on $\beta$ in the region around $\tilde{r}_{\mathrm{eq}}$ for measurements based solely on $\sigma_{\mathcal{T}}(R)$, such that
\begin{align}
\gamma_{\star}(\tilde{r}_{\mathrm{eq}}) &= 2 - \gamma_{\sigma}(\tilde{r}_{\mathrm{eq}}) - \gamma_{\beta}(\tilde{r}_{\mathrm{eq}}) \, .
\label{eq:lazar_constraint}
\end{align}
Moreover, unless galaxies have large variations in $\sigma_{r}^{2}$ and in $\beta$ with radius, we may expect $\gamma_\sigma(\tilde{r}_{\rm eq}) + \gamma_\beta(\tilde{r}_{\rm eq}) \ll 2$, at least for radii in the vicinity of $\tilde{r}_{\rm eq} < r_{-3} \simeq r_{1/2}$ for commonly assumed stellar density profiles. Therefore, we can expect that, to a good approximation, $\tilde{r}_{\rm eq} \simeq r_{-2}$, where $r_{-2}$ is the radius at which the log-slope of the tracer profile is equal to $-2$. As before, we now consider the integrated Jeans mass. The dependence on $\beta$ can be absorbed into the definition of the intrinsic total velocity dispersion. Moreover, the formulation of Eq.~\eqref{eq:total.dispersion} allows $(1-\beta)\sigma_{r}^{2} = \sigma_{\theta}^{2} = \sigma_{\mathcal{T}}^{2}$ under the assumption of spherical symmetry. The Jeans equation becomes
\begin{align}
\frac{GM(r)}{r} &= 2\sigma_{\theta}^2(r) + (\gamma_{\star} + \gamma_{\sigma} - 2)\sigma_{r}^{2}(r) \, .
\end{align}
If in fact $\gamma_\star + \gamma_\sigma \approx \gamma_\star \simeq 2$, the term in parentheses vanishes at the radius $r_{-2}$. The remaining term on the right-hand side depends only on the intrinsic transverse component, $\sigma_{\theta} = \sigma_\mathcal{T}$, which is an observable.\footnote{Note that the term in brackets in Eq.~\eqref{eq:sig_projT.mapping} is constrained by observables. Specifically, the intrinsic transverse dispersion, $\sigma_{\theta}$, is related to the transverse component along the plane of the sky via $(1-\beta)\sigma_{r}^{2} = \sigma_{\theta}^{2} = \sigma_{\mathcal{T}}^{2}$. This is what allows proper motion measurements to constrain the anisotropy \citep{strigari2007astrometry}.} Finally, we obtain the idealized estimator
\begin{align}
M_{-2}^{\rm ideal} &\equiv M(r_{-2}) = \frac{2 \langle \sigma_{\mathcal{T}}^{2} \rangle^{*} r_{-2}}{G} \, ,
\label{eq:lazar_mass}
\end{align}
where we have assumed $\sigma_{\theta}^{2}(r_{-2}) \simeq \sigma_{\mathcal{T}}^{2}(r_{-2}) \simeq \langle \sigma_{\mathcal{T}}^{2} \rangle^{*}$. The implied circular velocity at $r_{-2}$ is particularly succinct:
\begin{align}
V_{\rm circ}(r_{-2}) &= \sqrt{\ 2 \langle \sigma_{\mathcal{T}}^{2} \rangle^{*}}
\label{eq:lazar_vcirc} \, .
\end{align}
From here on, we will refer to Eq.~\eqref{eq:lazar_mass} as $M_{-2}$ and to Eq.~\eqref{eq:lazar_vcirc} as $V_{-2}$.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/fig2.pdf}
\caption{--- {\bf \emph{Mass measurements for Draco and Sculptor}}.
For each galaxy, points correspond to the line-of-sight mass (magenta), $M_{-3}$, and the projected tangential mass (cyan), $M_{-2}$, at the two characteristic radii. Lines show representative NFW mass profiles of fixed $M_{\rm vir}$ with median concentration set by subhalos in the {\tt Phat-ELVIS} simulations.
}
\label{fig:2}
\end{figure}

\subsection{Overview of Assumptions}
\label{sec:assumptions}
We have made a few assumptions in the derivation of $M_{-3}$ and $M_{-2}$. In addition to the strong assumption that galaxies are spherical, we have assumed that the velocity dispersions are relatively flat, such that $\sigma_{\rm tot}^{2}(r_{-3}) \simeq \langle \sigma_{\rm tot}^{2} \rangle^{*} = 3\langle \sigma_{\rm los}^{2} \rangle^{*}$ and $\sigma_{\rm \theta}^{2}(r_{-2}) = \sigma_{\mathcal{T}}^{2}(r_{-2}) \simeq \langle \sigma_{\mathcal{T}}^{2} \rangle^{*}$. \cite{wolf2010accurate} showed that the assumption for the line-of-sight component is excellent for a variety of models that match line-of-sight data; yet, for the transverse component, not enough data are available to test this assumption. Some justification comes from Section~\ref{sec:mock.observations}, where we use a set of cosmological simulations of dwarf galaxies in mock observations and find that these assumptions hold to better than $10\%$. Second, we have assumed that the intrinsic radial velocity dispersion varies minimally with radius compared to the tracer profile out to $r_{-3}$. More specifically, we assume that the log-slopes of the tracer velocity dispersion profiles are small compared to $3$ and $2$ at $r_{-3}$ and $r_{-2}$, respectively. Third, we assume that the velocity dispersion anisotropy varies slowly with radius compared to the light profile. If $\beta(r)$ and $\sigma_{r}(r)$ vary quickly as functions of radius $r$, then the mass estimators will break down.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/fig3.pdf}
\caption{--- {\bf \emph{Observed circular velocities of Draco and Sculptor}}. Circular velocity curves for NFW subhalos of a given $V_{\rm max}$ are shown for the two characteristic radii. Each assumes a median $r_{\rm max}$ as derived from the {\tt Phat-ELVIS} simulations.
}
\label{fig:3}
\end{figure}
In order to map $M_{-3}$ and $M_{-2}$ to observables measured in two dimensions, the characteristic radii of the tracer profile, $r_{-2}$ and $r_{-3}$, must be mapped to the observed projected tracer profile. If we assume that the three-dimensional profile is well described by a \cite{plummer1911problem} profile, then $r_{-3} \simeq 4R_{e}/3$ and $r_{-2} \simeq 4R_e/5$. That is,
\begin{align}
M(3r_{1/2}/5) &\simeq \frac{6 \langle \sigma_{\mathcal{T}}^{2}\rangle^{*} r_{1/2}}{5G}; \qquad M(4R_{e}/5) \simeq \frac{8 \langle \sigma_{\mathcal{T}}^{2}\rangle^{*} R_{e}}{5G} \, .
\end{align}
To clarify, {\em if the underlying three-dimensional tracer profile is not well described by a Plummer profile, then this mapping will founder}. Ultimately, the mapping between the three-dimensional characteristic radii and the observed two-dimensional radii will obey another relationship that depends on the underlying profile. Fig.~\ref{fig:1} provides a test and illustration of the derivation presented above using a full mass profile analysis derived in Appendix~\ref{sec:mass.profiles}.
Shown are the mass profiles implied by various choices of constant velocity dispersion anisotropy, constrained by the dispersion components along the line-of-sight (Eq.~\ref{eq:los.mass.profile.constant}; top panel), parallel (Eq.~\ref{eq:projR.mass.profile.constant}; middle panel), and tangential (Eq.~\ref{eq:projT.mass.profile.constant}; bottom panel) directions under the assumption of constant $\beta$ (denoted by $\beta_{0}$). We assume that the velocity dispersion for each component is constant with $R$, and set them equal to the luminosity-weighted median values observed for Sculptor (9, 11.5, and 8.5 km s$^{-1}$, respectively). We also assume that the tracer profile follows a Plummer profile, again matched to the median value for Sculptor given in Table~\ref{tab:1}. The white circles show the estimators $M_{-3}$ and $M_{-2}$ in the top and bottom panels, respectively. Encouragingly, they intersect the regions where all of the varying-$\beta_{0}$ mass profiles converge. As anticipated in Section~\ref{sec:parallel}, the constraints imposed by the parallel component, $\sigma_{\mathcal{R}}$, show no convergence point. This figure shows that the mass estimators we have derived work under reasonable, but idealized, assumptions. In the last section of Appendix~\ref{sec:mass.profiles}, we show a similar analysis that allows for parametric forms of $\beta(r)$ commonly used in Jeans modeling analyses. We show that the idealized mass estimators work well unless $\beta(r)$ varies rapidly with radius (as expected).

Of course, real galaxies will not obey these assumptions with absolute precision. Perhaps most importantly, no galaxy is perfectly spherical. We expect real galaxies to have velocity dispersion profiles that vary with radius to some degree. Galaxies also have light tracer profiles that will not necessarily obey convenient functional characterizations such as the Plummer model in three dimensions, which makes determining $r_{-2}$ and $r_{-3}$ more difficult. We test these assumptions, along with our estimators, in Section \ref{sec:mock.observations} using cosmological dwarf galaxy simulations.

\section{Modeling from Observations}
\label{sec:observation.results}
We now apply our mass estimators using kinematic measurements for the spheroidal galaxies Draco and Sculptor. Table~\ref{tab:1} lists the observed properties that we adopt. We assume that each galaxy's stellar distribution obeys a \cite{plummer1911problem} profile in deprojection and in projection, and we use the radial conversions for a Plummer profile given in \cite{wolf2010accurate}.

\subsection{The Internal Structure of Draco and Sculptor}
Fig.~\ref{fig:2} plots the implied masses of Draco (squares) and Sculptor (circles) using both $M_{-3}$ (magenta) and $M_{-2}$ (cyan). With the data currently available, the masses implied from well-studied line-of-sight measurements have smaller error bars, while the masses implied from the tangential dispersions along the plane of the sky have relatively larger error bars. Also plotted are NFW \citep{navarro1997universal} mass profiles at fixed halo masses, $M_{\rm vir}= 2\times 10^{10}$ and $3\times 10^{9}\ M_{\odot}$. Concentrations are set to $16.3$ and $20.2$, respectively, based on the median values for subhalos of these masses in the $z=0$ dark-matter-only results of the {\tt Phat-ELVIS} simulations \citep{kelley2019phat}. The subhalo masses were chosen so that, at the median value of the concentration, the profiles intersect the line-of-sight mass points.
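The mass points themselves follow from simple arithmetic. A minimal sketch of this evaluation is given below (in Python, with placeholder inputs standing in for the measured values and uncertainties of Table~\ref{tab:1}):
\begin{verbatim}
# Illustrative sketch of the two estimators (placeholder inputs only;
# the plotted points use the Table 1 values with error propagation).
G = 4.30091e-6  # Newton's constant in kpc (km/s)^2 / Msun

def mass_m3(sig_los, R_e):
    # M(r_-3) = 3 <sig_los^2> r_-3 / G with r_-3 = 4 R_e / 3 (Plummer)
    return 4.0 * sig_los**2 * R_e / G

def mass_m2(sig_T, R_e):
    # M(r_-2) = 2 <sig_T^2> r_-2 / G with r_-2 = 4 R_e / 5 (Plummer)
    return 8.0 * sig_T**2 * R_e / (5.0 * G)

# Sculptor-like dispersions (9 and 8.5 km/s, as quoted in the text);
# R_e = 0.3 kpc is a placeholder, not the measured effective radius.
print(mass_m3(9.0, 0.3), mass_m2(8.5, 0.3))
\end{verbatim}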
In principle, comparing the location of the tangentially-derived masses to the extrapolated NFW curves allows us to determine whether the predictions are consistent with a cuspy profile. Both galaxies appear consistent with sitting within a typical CDM halo. Note that this result for Draco is in agreement with the results of \cite{read2018case}, who find Draco to be cusped over the same radial range.
\input{tables/halos.tex}
Fig.~\ref{fig:3} provides an alternative view by plotting the observed circular velocities using $V_{-3}$ and $V_{-2}$. The rotation curves for NFW profiles at fixed values of $V_{\rm max}=$ 19 and $34\ \rm km\ s^{-1}$ are also plotted, with median values of $r_{\rm max} = 1.67$ and $4.71$ kpc, respectively, for the same subhalos of {\tt Phat-ELVIS}. As seen previously in Fig.~\ref{fig:2}, both measurements are consistent with the expectations for an NFW profile. Sculptor's median does fall below the extrapolated NFW curve, though it is easily consistent within the errors. If Sculptor has a cored inner density, this could have interesting implications. With a stellar mass of $M_{\star} \simeq 4\times 10^{6}\ M_{\odot}$, this galaxy lies near the low-mass edge of where feedback may be able to produce significant cores \citep{bullock2017small}. This motivates the acquisition of additional data to provide a more precise measure of $\langle \sigma_{\mathcal{T}}^{2} \rangle^{*}$.

\section{Mock Observations}
\label{sec:mock.observations}
We are now interested in testing the mass estimators discussed previously, including the one derived here for the first time. We use simulations that have been previously published, with data kindly supplied by the authors \citep{fitts2017fire,robles2017fire}. The simulations were run as part of the Feedback in Realistic Environments (\texttt{FIRE}) project and include galaxies simulated in both Cold Dark Matter (CDM) and Self-Interacting Dark Matter (SIDM). Table \ref{tab:2} lists the global parameters of the galaxies considered herein, as well as references describing the specific physics used when running the {\tt FIRE-2} algorithm \citep{hopkins2014galaxies,hopkins2015new,hopkins2018fire}. We have specifically chosen low-mass, dispersion-supported galaxies that resemble dwarf spheroidals. The values of $M_{\star}/M_{\rm vir}$ for the CDM galaxies do not produce enough energy to transform cusps into cores and thus provide a good test for ``cuspy" underlying profiles \citep{di2013dependence,chan2015impact,tollet2016nihao,bose2019bursty}, while the SIDM halos are naturally core-like. Their stellar masses are low enough that episodic gas outflows do not bias estimates away from equilibrium when using Jeans modeling \citep{elbardy2016breathing,elbadry2017jeans}. In summary, we consider two types of simulations:
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{figures/fig4.pdf}
\caption{--- {\bf \emph{Mock observations of galaxies in the plane of the sky}}: The stellar surface density of the stars for our dwarf galaxies (given in columns), realized along a random orientation of the plane ($X,Y$) looking along the line-of-sight, $z$, in CDM (top row) and SIDM (bottom row). The center of mass of the galaxy determined from this plane of observation is placed at the origin. This shows how elongated several galaxies can appear in projection when viewed in the plane of the sky.
}
\label{fig:4}
\end{figure*}
\begin{itemize}
\item[] {\bf \emph{CDM}}: Dark matter is considered to be collisionless.
The sample of galaxies simulated in CDM comprises m10b, m10c, m10d, and m10e, which were first presented in \cite{fitts2017fire} and explored further in \cite{fitts2018assembly,fitts2019baryons}. The fiducial CDM simulations have a baryonic particle mass of $m_{\rm b} = 500\ M_{\odot}$ with force resolution $\epsilon_{b} = 2\ \rm pc$, and a dark matter particle mass of $m_{\rm DM} = 2500\ M_{\odot}$ with softening $\epsilon_{\rm DM} = 35\ \rm pc$. These galaxies have dark halos that form cusps by $z=0$.
\vspace{1ex}
\item[] {\bf \emph{SIDM}}: This uses the CDM power spectrum but with an imposed, velocity-independent self-interaction cross section of $\sigma/m = 1\ \rm cm^{2}\ g^{-1}$. The galaxies considered here are the SIDM analogs of the CDM galaxies m10b, m10c, m10d, and m10e. The SIDM analogs of m10d and m10b were first presented in \cite{robles2017fire} and further explored, together with m10c and m10e, in \cite{fitts2019baryons}. A key result is that all halos have formed appreciable cores by $z=0$.
\end{itemize}

\subsection{Methodology}
\subsubsection{Properties in three-dimensions}
The center positions of the galaxies are determined using an iterative ``shrinking spheres'' method \citep{power2003inner,navarro2004inner}. That is, the center of mass of the star particles is successively computed in a sphere whose radius is then reduced by $50\%$ and re-centered on the new center of mass. This is done iteratively until a thousand particles are enclosed by the minimized sphere. The center-of-mass velocity is then computed using all of the star particles within the final minimized radius. The three-dimensional positions and velocities of all the star particles associated with that galaxy are then transformed to be relative to the center-of-mass position and velocity, respectively.

The stellar profiles are assembled using 25 log-spaced radial bins starting from the center of mass of the stars out to $4 \times r_{1/2}$. To quantify the characteristic radii $r_{-2}$ and $r_{-3}$, the stellar profiles are smoothed using a third-order spline fit, as the profiles tend to be noisy. From there, $r_{-2}$ and $r_{-3}$ are interpolated from the log-gradients of the resulting fits. In constructing the intrinsic dispersion profiles, the Cartesian velocities relative to the center of mass are transformed to spherical coordinates and evaluated using the same spherical bin spacing. In each bin shell of $r$, the relative velocities are weighted by their associated stellar particle mass. This includes both random and streaming motions. We also compared a sample containing {\it only} star particles bound to the dynamical system with another sample containing {\it both} bound and unbound star particles. Results for these two samples were found to be indistinguishable, as unbound star particles comprise only $1\%$ of the galaxies' stellar population. The final results presented here include both bound and unbound stars.

\subsubsection{Idealized Mock Observations}
For each galaxy, we construct 1000 mock observations. That is, mock observations are made along 1000 random orientations, with each orientation evaluated as follows: the relative Cartesian positions and velocities of the galaxy's stellar particles are rotated into a new orientation denoted by primed coordinates, such that the star particles along the new line-of-sight axis, $z'$, with velocity $v_{z'} \equiv v_{\rm los}$, are stacked along the projected $x'$--$y'$ plane.
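A minimal sketch of this rotation and projection step (illustrative variable names and stand-in particle data; not the actual analysis pipeline) is:
\begin{verbatim}
# Illustrative sketch of a single mock projection (stand-in data).
import numpy as np
from scipy.spatial.transform import Rotation

rng = np.random.default_rng(0)
pos = rng.normal(scale=0.5, size=(1000, 3))  # stand-in positions [kpc]
vel = rng.normal(scale=9.0, size=(1000, 3))  # stand-in velocities [km/s]

rot = Rotation.random(random_state=0).as_matrix()  # random orientation
pos_r, vel_r = pos @ rot.T, vel @ rot.T

X, V = pos_r[:, :2], vel_r[:, :2]  # plane-of-the-sky components
v_los = vel_r[:, 2]                # line-of-sight velocities
R = np.linalg.norm(X, axis=1)
v_R = np.einsum('ij,ij->i', X, V) / R                # (X . V) / R
v_T = np.abs(X[:, 0]*V[:, 1] - X[:, 1]*V[:, 0]) / R  # |X ^ V| / R
\end{verbatim}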
From the galaxy projected on this plane, the center position is determined by re-implementing the iterative ``shrinking spheres'' method. This again determines the center-of-mass position and velocity of the stars in the $x'$--$y'$ plane. We define this as the center of the galaxy when analyzing its projection in two dimensions, and we label the center position and velocity as $\boldsymbol{X} = (X,Y)$ and $\boldsymbol{V} = (V_{X},V_{Y})$, respectively. Hereafter, we drop the prime notation for the line-of-sight axis.

Fig.~\ref{fig:4} illustrates a single mock observation, projecting the stellar particles of each galaxy using the method discussed in the previous paragraph. These images were made after the transformation of coordinates, placing the origin at the center of mass of the projected distribution of stars. Note that in both CDM and SIDM the galaxies are not spherical, but they do have morphologies comparable to actual observed dwarf spheroidals. That is, dwarf galaxies can appear elongated in the plane of the sky (the $X$--$Y$ plane in the figure).
\input{tables/halos2.tex}
The stellar surface profile is then assembled using circular bins of $R=\sqrt{X^{2}+Y^{2}}$, where we use 25 log-spaced concentric bins starting from the projected center of mass. From this profile, we fit a projected Plummer profile out to $R=4\times r_{1/2}$ in order to obtain the effective radius, $R_{e}$. That is, the best-fit parameters are determined by adjusting the parameters of a projected Plummer profile to minimize a figure-of-merit function. The dispersion profiles are evaluated using the same bin spacing. The relative velocities found in projection are transformed to cylindrical components corresponding to the coordinate system used in Section~\ref{sec:preliminaries}. That is, the velocity components parallel and transverse to the radius $R$ follow $v_{\mathcal{R}} = (\boldsymbol{X}\cdot\boldsymbol{V})/R$ and $v_{\mathcal{T}} = |\boldsymbol{X}\wedge \boldsymbol{V}|/R$, respectively. In each bin shell of $R$, the relative velocities in projection are weighted by their associated stellar particle mass. Finally, the stellar mass-weighted velocity dispersion of the entire galaxy is measured within $4\times R_{e}$ for the value of $R_{e}$ determined from the surface density fit. We consider both random and streaming motions.

\subsection{Results}
Our key results are presented in Table~\ref{tab:3} and Fig.~\ref{fig:5}. In the table, we first list quantities measured to test the assumptions discussed in Section~\ref{sec:assumptions}. We start with columns 1 and 2, which give the log-slope of the intrinsic radial velocity dispersion, $\gamma_{\sigma}$, at $r_{-2}$ and $r_{-3}$, respectively. These values are not precisely zero (as assumed in our idealized estimators), but they are small compared to the log-slopes of the tracer profile ($2$ and $3$) at these radii and are therefore roughly in line with our assumptions. This behavior is found for all of our galaxies, regardless of whether a dark matter core or cusp lies beneath. The radial anisotropy is similarly slowly varying, though we do not summarize it here.
Columns 3 and 4 test the flatness of the observable velocity dispersion profiles via the ratios $\widetilde{\sigma}_{\mathcal{T},-2} := \sigma_{\mathcal{T}}(r_{-2})/\langle \sigma_{\mathcal{T}}\rangle^{*}$ and $\widetilde{\sigma}_{\rm tot,-3} := \sigma_{\rm tot}(r_{-3})/(\sqrt{3}\langle \sigma_{\rm los}\rangle^{*})$, respectively. For the component transverse to the projected radius $R$, the median results are well approximated by $\sigma_{\mathcal{T}}(r_{-2}) \simeq \langle \sigma_{\mathcal{T}}\rangle^{*}$ to within $10\%$, even when considering the $68\%$ dispersion, for galaxies with either cusps or cores. Interestingly, the uncertainties are well constrained for all of the galaxies in our sample when simply binning in circles of projected radius $R$. Turning to the relation argued for in \cite{wolf2010accurate} and here for the total intrinsic velocity dispersion (column 4), the median results are well approximated by $\sigma_{\rm tot}(r_{-3}) \simeq \sqrt{3}\langle \sigma_{\rm los}\rangle^{*}$ to better than about $10\%$ for the cusped galaxies. For the galaxies with cores, this approximation is accurate to $15-20\%$ when including the $1\sigma$ deviations.
\begin{figure*}
\centering
\includegraphics[width=0.875\textwidth]{figures/fig5.pdf}
\caption{--- {\bf \emph{Measurements from mock observations}}. The rotation curves of our galaxies compared to the estimators at the characteristic radii $r_{-2}$ and $r_{-3}$ of the stellar density profile. Black and gray lines show the rotation curves for each system simulated in CDM (cusps) and SIDM (cores), respectively. The estimators $V_{-2}$ (cyan) and $V_{-3}$ (magenta) are plotted at $r_{-2}$ and $r_{-3}$, respectively, where circles denote the estimators for the CDM galaxies and squares are for galaxies in SIDM. Error bars are the $1\sigma$ dispersion over all 1000 projections. Note that while the estimators are not perfect, they are accurate enough to discriminate between SIDM and CDM models in each case, especially when the two estimators are combined. Estimates of the shapes of the rotation curves will be more uncertain than the overall circular velocity normalization at each radius. Given large enough galaxy samples, measurements should enable a strong discriminant between CDM and SIDM based on normalization alone.
}
\label{fig:5}
\end{figure*}
Shown in Fig.~\ref{fig:5} are the actual circular velocity curves compared to the combined measurements of the estimators at their characteristic radii, $V_{-2}$ at $r_{-2}$ (cyan points) and $V_{-3}$ at $r_{-3}$ (magenta points). The vertical error bars of the estimators depict the $1\sigma$ dispersion over all 1000 mock projections. The total circular velocity profiles are given by the black curves for CDM and the gray curves for SIDM. Columns 5 and 6 list the ratios of these velocity estimators to the true circular velocities at the respective characteristic radii. The CDM galaxies perform remarkably well, predicting {\em both} the actual circular velocity at $r_{-2}$ and at $r_{-3}$ to within $10\%$, including uncertainties. The SIDM galaxies are good to $20\%$ when including the $1\sigma$ dispersions. By examining the outliers, we see that the worst offsets stem from difficulties in determining $r_{-2}$ and $r_{-3}$ from the simulated stellar density profile: these profiles are inherently noisy, which makes measuring the log-gradient profiles without smoothing the density profile problematic.
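To illustrate the role of smoothing, a minimal sketch of how $r_{-2}$ and $r_{-3}$ can be read off a spline-smoothed log-gradient is given below (with an analytic Plummer profile standing in for the noisy binned data):
\begin{verbatim}
# Illustrative sketch: extract r_-2 and r_-3 from a smoothed log-slope.
import numpy as np
from scipy.interpolate import UnivariateSpline

a = 0.4                              # Plummer scale radius [kpc]
r = np.logspace(-2, 1, 25)
n_star = (1 + (r / a)**2) ** (-2.5)  # stand-in for binned n_star(r)

# spline in log-log space; gamma(r) = -dlog(n_star)/dlog(r)
spl = UnivariateSpline(np.log(r), np.log(n_star), k=3, s=0.0)
gamma = -spl.derivative()(np.log(r))

r_m2 = np.exp(np.interp(2.0, gamma, np.log(r)))  # where gamma = 2
r_m3 = np.exp(np.interp(3.0, gamma, np.log(r)))  # where gamma = 3
# For a Plummer profile these approach a*(2/3)**0.5 and a*(3/2)**0.5.
print(r_m2, r_m3)
\end{verbatim}
For a noisy simulated profile, a nonzero smoothing parameter \texttt{s} would be chosen; it is the absence of this smoothing step that makes the raw log-gradients problematic.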
Since the idealized estimators, $V_{-2}$ and $V_{-3}$, predict the values of the dynamical profile to acceptable accuracy, we now examine how well the predictions hold up when using characterizations modeled from the Plummer profile. Columns 9-12 in Table~\ref{tab:3} give the results of fitting a Plummer profile to the projected surface density in each mock observation. The resulting values of $R_{e}$ are used to measure the stellar mass-weighted median dispersions depicted in Fig.~\ref{fig:5}. Columns 9 and 10 give the ratios obtained when using the estimators with $R_{e}$, while columns 11 and 12 compare the characteristic radii to the predicted mapping. We see that for many galaxies, the Plummer fits do not provide precise enough characterizations to infer the values of $r_{-2}$ and $r_{-3}$ to better than $\sim 20 \%$.

As for modeling the slope of the underlying profile, we expect the local inner density to behave like a power law, $\rho \propto r^{-\alpha}$, such that the integrated mass scales as $M \propto r^{3-\alpha}$. This leads to the expected behavior of the circular velocity in relation to the local dark matter density: $V_{\rm circ}^{2} \propto r^{2-\alpha}$. We derive the slope implied by the estimators by treating the circular velocity as a power law, defined as $\xi :=\Delta \log V_{\rm circ}/\Delta\log r$, which then relates to the power law of the density profile via $\alpha = 2(1-\xi)$.\footnote{\cite{di2013dependence} and \cite{tollet2016nihao} define cusps as $\alpha \simeq [1 - 1.5]$, which maps to $\xi \simeq 0.5$, and define cores as $\alpha \simeq [0 - 0.5]$, which maps to $\xi \simeq 0.75$.} The slope implied by the circular velocity estimators is given by the dashed red line in Fig.~\ref{fig:5}. In columns 7 and 8, we give the implied slope of the combined estimators, $\xi_{\rm est}$, and the true slope of the {\it dynamical} profile found between $r_{-2}$ and $r_{-3}$, $\xi_{\rm true}$, respectively. Without considering the $1\sigma$ dispersion of the measurements, estimates for the galaxies in CDM are recovered to within $20 \%$, while the SIDM analogs are off by almost $50\%$. While the cuspy profiles are reasonably well measured, the SIDM core profiles are estimated to be too cuspy by this method. This is unfortunate, as this precision is not enough to distinguish between a cusp and a core. However, the accuracy of the normalization ($V_{-2}$ at $r_{-2}$) is good enough to discriminate between the absolute core densities expected for CDM vs. SIDM. With large enough data sets, this will provide important constraints on models of this kind.

\section{Discussion}
\label{sec:discussion}
We have used the spherical Jeans equation to derive two idealized mass estimators that depend on the stellar proper motions measured in observations. Specifically, there are two radii, independent of one another, that potentially minimize the anisotropy dependence of the mass profile: one radius based on measurements of the velocity dispersions {\it along} the line-of-sight, and another radius from measurements of the dispersions transverse to the projected radius in the plane of the sky.

\subsection{Constraints from the Virial Theorem}
\label{sec:virial}
The scalar virial theorem has historically been utilized to provide approximate mass constraints for spheroidal galaxies \citep[e.g.][]{tully1997method,busarello1997virial}.
That is, the scalar virial theorem is observationally applicable, such that dispersion-supported systems can probe the integrated mass profile within the stellar extent without the degeneracies introduced by the anisotropy. It is constructed from the diagonalized components of the velocity dispersion tensor, which describes the {\it local} distribution of velocities at each point in space. The trace of the diagonal components provides an extended scalar virial theorem \citep{errani2018virial}: \begin{align} \langle \sigma_{\alpha}^{2} \rangle^{*} + \langle \sigma_{\delta}^{2} \rangle^{*} + \langle \sigma_{\rm los}^{2} \rangle^{*} &= 4\pi G \int_{0}^{\infty} dr\ r n_{\star}(r)M(r) \label{eq:extended.SVT} \\ &\equiv \langle \sigma_{\rm tot}^{2} \rangle^{*} \nonumber \, \end{align} where $\langle \sigma_{\alpha}^{2} \rangle^{*}$ and $\langle \sigma_{\delta}^{2} \rangle^{*}$ are defined as the luminosity-averaged velocity dispersions of the two velocity components tangential to the line-of-sight. By design, Eq.~\eqref{eq:extended.SVT} provides a good integral constraint on the dynamical mass, as the entire expression is independent of the anisotropy. The line-of-sight component can be utilized as a constraint via the projected virial theorem \citep[e.g.][]{agnello2012core,errani2018virial}. Adding dispersions in the $\alpha$ and $\delta$ directions would enable a tighter constraint on $\langle \sigma_{\rm tot}^{2} \rangle^{*}$. Note, however, that when written this way we obtain no additional constraint on $\beta$. Working in a Cartesian coordinate system, such that $\mathrm{los} \rightarrow z $, $\alpha \rightarrow x$ and $\delta \rightarrow y$, spherical symmetry would demand that each component of the velocity dispersion be equal: $\langle \sigma_{x}^{2} \rangle^{*} = \langle \sigma_{y}^{2} \rangle^{*} = \langle \sigma_{z}^{2}\rangle^{*}$. The coordinate system introduced in Section~\ref{sec:preliminaries} does not force this equality and allows for separate components of the luminosity-averaged velocity dispersion to constrain the velocity dispersion anisotropy $\beta$ \citep{strigari2007astrometry}. The two components, $\sigma_{\mathcal{R}}$ and $\sigma_{\mathcal{T}}$, depend on $\beta$ differently and are not necessarily equal.\footnote{Though symmetry demands $\langle \sigma_{\mathcal{T}}^{2} \rangle^{*} + \langle \sigma_{\mathcal{R}}^{2} \rangle^{*} = \frac{2}{3} \langle \sigma_{\rm tot}^{2} \rangle^{*} = 2 \langle \sigma_{\rm los}^{2} \rangle^{*}$.} Note that when Eqs.~\eqref{eq:sig_los.mapping},~\eqref{eq:sig_projR.mapping},~and~\eqref{eq:sig_projT.mapping} are added together we find $\langle \sigma_{\mathcal{T}}^{2} \rangle^{*} + \langle \sigma_{\mathcal{R}}^{2} \rangle^{*} + \langle \sigma_{\rm los}^{2} \rangle^{*} = \langle \sigma_{\rm tot}^{2} \rangle^{*}$ such that Eq.~\eqref{eq:extended.SVT} can be satisfied. By examining the components separately we obtain mass estimators that provide information at a different radius than the one probed by line-of-sight motions alone. \subsection{Possible Biases in Jeans Modeled Mass Estimates} Our mass estimates rely on the fact that dispersion-supported systems are approximately in dynamical equilibrium and are accurately modeled by the spherical Jeans equation. Non-steady-state systems, i.e., ones that deviate significantly from dynamical equilibrium, can lead to systematically biased mass estimates \citep[e.g.][]{amorisco2011phase,errani2018virial}.
For the simulated galaxy sizes considered in our analysis, mass estimates are non-trivially biased by short time-scale fluctuations of the potential well \citep{elbadry2017jeans,gonzalez2017dwarf}. For the largest dispersion-supported systems, those with stellar masses of $M_{\star} \approx 10^{8-10} M_{\odot}$, the uncertainties can be as large as $20 \%$ of the dynamical mass. To minimize the variability of energetic outflows, mass estimates are best focused on dwarf spheroidals at around the threshold of lowest detectability, i.e., low-mass dwarfs, as this should reduce the likelihood of potential fluctuations biasing the stellar tracers. Using simulated data, we have shown that our estimate at $r_{-2}$ is able to obtain the normalization to better than $20\%$ when using $V_{-2}$ for low-mass dwarf galaxies. As for the applicability to observations, it is important that careful measurements of the highest precision are obtained in order to discriminate between possible models in the presence of systematic errors. Although our simulated galaxies are analogous to those in the field, Local Group satellites also experience tidal stripping by the main halo, which can preferentially bias the estimates of the satellites' dynamical masses. However, the analysis of \cite{klimentowski2007tidal} has already shown that velocity dispersions are well modeled by the Jeans equation even in the case of mildly tidally disrupted dwarf galaxies, as long as unbound, interloping stars are properly accounted for in the stellar sample. For the case of Draco and Sculptor considered here, they are both satellites of the Milky Way and are therefore, in principle, subjected to tidal forces that could render mass models from the Jeans equation inadequate. However, no sign of strong tidal influence is apparent \citep{piatek2002draco,coleman2005tidal}. \section{Concluding Remarks} \label{sec:conclusion} Using the spherical Jeans equation, we have derived a mass estimator that depends on stellar kinematics measured along the plane of the sky, specifically the velocity dispersion tangential to the projected radius $R$. We have shown that under idealized but reasonable assumptions, Eq.~\eqref{eq:lazar_mass} provides the cumulative mass within a characteristic radius, $r_{-2}$, independent of the stellar velocity dispersion anisotropy $\beta$. This ideal radius is where the log-slope of the underlying tracer profile is $-2$. For Plummer profiles $r_{-2} \simeq 4R_{e}/5 \simeq 3r_{1/2}/5$. We also showed that a $\beta$-independent estimator does not exist for the velocity dispersion component parallel to the projected radius in the plane of the sky. Fig.~\ref{fig:1} summarizes this result. Our derivation followed the approach in \cite{wolf2010accurate}, and relied on similar assumptions: that the stellar velocity dispersion profiles $\sigma_{r}(r)$ and $\beta(r)$ vary slowly compared to the tracer profile itself out to $r_{1/2}$. To test our assumptions and our estimators, we employ previously published simulations of dwarf galaxies done for both CDM and SIDM dark matter physics. We find that $\sigma_{\mathcal{T}}$ is indeed flat in the vicinity of $r_{-2}$ for dwarf galaxies in both CDM and SIDM, and that our mass estimator is accurate in quantifying the enclosed mass at $r_{-2}$. For CDM, the estimates of the rotation curves are found to be accurate to $10\%$ for both estimators, while for SIDM they are accurate to $15\%$. This level of absolute mass accuracy is good enough to discriminate between expected core densities in SIDM and CDM models.
Unfortunately, this level of accuracy is {\em not} good enough to regularly measure slopes at the precision required to differentiate between cusps and cores in real data without deeper priors to help us understand the underlying tracer profile shape in real galaxies. However, the difference in absolute circular velocity predicted between SIDM and CDM at these radii is larger than the normalization uncertainties of the estimators (see Fig.~\ref{fig:5}). As an example of the applicability of our estimator, we have combined it with the \cite{wolf2010accurate} estimator at $r_{-3}$ for line-of-sight velocities to explore the mass profiles of Draco and Sculptor. Both galaxies are consistent with inhabiting cuspy NFW subhalos with densities consistent with those expected in CDM with $V_{\rm max} \simeq 34$ and $19\ \rm km\ s^{-1}$, respectively, though current uncertainties allow for a variety of inner profile slopes and are consistent with SIDM densities given the sparsity of the data. In the coming era of precision-based measurements of stellar proper motions, we expect the internal structure of dwarf galaxies to be revealed with more clarity. \section*{Acknowledgements} We are thankful to the anonymous referee for their invaluable feedback that helped improve the earlier version of this article. We would like to thank Victor Robles and Alex Fitts for facilitating access to their simulations. We are thankful to Michael Boylan-Kolchin and Josh Simon for comments on early versions of the article. Lazar and Bullock are supported by the NSF grant AST-1910965. The analysis in this article made extensive use of the python packages \texttt{NumPy} \citep{van2011numpy}, \texttt{SciPy} \citep{oliphant2007python}, and \texttt{Matplotlib} \citep{hunter2007matplotlib}; we are thankful to the developers of these tools. This research has made intensive use of NASA's Astrophysics Data System (\url{https://ui.adsabs.harvard.edu/}) and the arXiv eprint service (\url{http://arxiv.org}). \subsection{The Spherical Jeans Equation} \label{sec:jeans} For a spherically symmetric steady-state system, the first moment of the collisionless Boltzmann equation for a stellar phase-space distribution takes the form of the spherical Jeans equation \begin{align} -\frac{d\Phi}{dr} = \frac{1}{n_{\star}}\frac{d}{dr}\left(n_{\star} \sigma_{r}^{2}\right) + \frac{2\beta\sigma^{2}_{r}}{r} \, , \label{eq:spherical_jeans} \end{align} which relates the total gravitational potential, $\Phi(r)$, of a spherically symmetric, dispersion-supported system to its two tracers: the radial velocity dispersion, $\sigma_{r}(r)$, and the 3D stellar number density, $n_{\star}(r)$. The quantity $\beta(r) := 1 - (\sigma_{\theta}^{2} + \sigma_{\phi}^{2})/2\sigma_{r}^{2}$ is a measure of the velocity dispersion anisotropy, in which $\sigma_{\theta,\phi}^{2}$ are the velocity variances in spherical coordinates tangential to the radius $r$. Larger values of $\beta$ imply that the velocity dispersion is larger in the radial direction than in the tangential direction. The total mass profile of the dynamical system is implied by (\ref{eq:spherical_jeans}) \begin{align} M(r|\beta) = \frac{r \sigma_{r}^{2}(r)}{G} \left( \gamma_{\star} + \gamma_{\sigma} - 2\beta \right) \, , \label{eq:jeans_mass} \end{align} where the logarithmic slopes are $\gamma_{\star} := -d\log n_{\star}/d\log r$ and $\gamma_{\sigma} := -d\log \sigma_{r}^{2}/d\log r$.
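As a concrete illustration of Eq.~(\ref{eq:jeans_mass}) applied to tabulated profiles, the following sketch (an illustrative check under stated assumptions, not part of the analysis in this paper) recovers the analytic mass profile of a self-gravitating, isotropic ($\beta = 0$) Plummer sphere, for which $\sigma_{r}^{2}(r) = GM_{\rm tot}/(6\sqrt{a^{2}+r^{2}})$ and $M(r) = M_{\rm tot}\,r^{3}/(a^{2}+r^{2})^{3/2}$ are known in closed form:
\begin{verbatim}
import numpy as np

G = 4.30091e-6                       # kpc (km/s)^2 / Msun
a, M_tot = 1.0, 1.0e8                # Plummer radius [kpc], total mass [Msun]

r = np.logspace(-1, 1, 400)
n_star = (1.0 + (r / a) ** 2) ** (-2.5)            # tracer density shape
sig_r2 = G * M_tot / (6.0 * np.sqrt(a**2 + r**2))  # sigma_r^2(r)
beta = 0.0

ln_r = np.log(r)
gam_star = -np.gradient(np.log(n_star), ln_r)      # -dln(n_*)/dln(r)
gam_sig = -np.gradient(np.log(sig_r2), ln_r)       # -dln(sigma_r^2)/dln(r)

# M(r|beta) = (r sigma_r^2 / G)(gam_star + gam_sig - 2 beta)
M_jeans = r * sig_r2 / G * (gam_star + gam_sig - 2.0 * beta)
M_true = M_tot * r**3 / (a**2 + r**2) ** 1.5
print(np.max(np.abs(M_jeans / M_true - 1.0)))  # small; finite-difference error
\end{verbatim}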
\subsection{Measurements Along the Line-of-Sight} \label{sec:los} With these fundamentals, we follow \citet{wolf2010accurate} by considering a velocity dispersion-supported stellar system that is well studied, such that $\Sigma_{\star}(R)$ and $\sigma_{\rm los}(R)$ are determined accurately by observations. If we model the system's mass profile using the Jeans equation, any viable solution will keep the quantity $\Sigma_{\star}\sigma_{\rm los}^{2}(R)$ fixed to within allowable errors. To utilize the Jeans equation, we must relate $\sigma_{\rm los}$ to $\sigma_{r}$ \begin{align} \Sigma_{\star} \sigma_{\rm los}^{2}(R) &= \int_{R^{2}}^{\infty} \frac{dr^{2}}{\sqrt{r^{2} - R^{2}}} \left[ 1- \frac{R^{2}}{r^{2}}\beta(r) \right] n_{\star}\sigma_{r}^{2}(r) \, . \label{eq:sig_los-sig_r} \end{align} This is then massaged into an {\it invertible} form \begin{align} \Sigma_{\star} \sigma_{\rm los}^{2}(R) &= \int_{R^{2}}^{\infty} \frac{dr^{2}}{\sqrt{r^{2} - R^{2}}} \left[ n_{\star}\sigma_{r}^{2}(1-\beta) + \int_{r^{2}}^{\infty} d\tilde{r}^{2} \frac{\beta n_{\star} \sigma_{r}^{2}}{2\tilde{r}^{2}} \right] \, . \label{eq:sig_los-sig_r_inv} \end{align} In doing so, the left-hand side is an observable quantity and independent of $\beta$. Therefore, the term in the brackets has to be a {\it well-defined} quantity regardless of the $\beta$ chosen. With this, we are allowed to equate the isotropic integrand, where $\beta = 0$, with an integrand that is dependent on some arbitrary anisotropy $\beta$, whether constant or varying. By then taking a radial logarithmic derivative and introducing a factor of $r\sigma_{r}^{2}/G$ on both sides, we can rewrite the resulting equation in the form of (\ref{eq:jeans_mass}) \begin{align} M(r|\beta) - M(r|0) &= \frac{r \sigma_{r}^{2} \beta}{G} \left(\gamma_{\star} + \gamma_{\sigma} + \gamma_{\beta} - 3 \right) \, , \label{eq:los_mass_diff} \end{align} where $\gamma_{\beta} := - d\log \beta /d\log r$. From (\ref{eq:los_mass_diff}), we see that there can exist a radius, $r_{\mathrm{eq}}$, determined from line-of-sight measurements, where the term in the parentheses vanishes. At this radius the enclosed mass $M(r_{\mathrm{eq}})$ is minimally affected by the form of $\beta(r)$: $\gamma_{\star}(r_{\mathrm{eq}}) = 3 - \gamma_{\sigma}(r_{\mathrm{eq}}) - \gamma_{\beta}(r_{\mathrm{eq}})$. As discussed in \cite{wolf2010accurate}, $\gamma_{\beta}$ and $\gamma_{\sigma}$ are expected to be relatively small for $r \lesssim r_{\rm eq}$ in realistic cases of observed dwarf galaxies, which tend to have $\sigma_{\rm los}^{2}(R)$ roughly constant with radius at $R \sim R_{e}$. This implies that to a good approximation, $\gamma_{\star}(r_{\mathrm{eq}}) \simeq 3$ and $r_{\rm eq} \simeq r_{-3}$, which is the radius at which the log derivative of the stellar tracer profile is equal to $-3$. \cite{wolf2010accurate} used a general Bayesian analysis of published dwarf spheroidal velocity data that allowed for radially variable $\beta$ to show that uncertainties were indeed minimized at the expected radius. In order to determine the value of $M(r_{\rm eq})$, \citet{wolf2010accurate} de-projected (\ref{eq:sig_los-sig_r_inv}) via an Abel inversion to isolate out $n_{\star}\sigma_{r}^{2}$ (their Equation A5). They then relied on the same assumptions discussed above to derive $r_{\rm eq} \simeq r_{-3}$ and used (\ref{eq:jeans_mass}) to determine the enclosed mass inside $r_{-3}$ \begin{align} M(r_{-3}) &= \frac{3 \sigma_{\rm los}^{2}(r_{-3}) r_{-3}}{G} \, .
\label{eq:wolf_mass} \end{align} Finally, \citet{wolf2010accurate} showed that $\sigma_{\rm los}^{2}(r_{-3}) \approx \langle \sigma_{\rm los}^{2} \rangle$ when $\sigma_{\rm los}(R)$ varies weakly with $R$. The implied circular velocity at $r_{-3}$ is particularly simple \begin{align} V_{\rm circ}(r_{-3}) &= \sqrt{\ 3 \langle \sigma_{\rm los}^{2}\rangle} \, . \label{eq:wolf_vcirc} \end{align} Note that $r_{-3} \simeq r_{1/2} \simeq 4R_{e}/3$ for a wide range of stellar profiles \citep{wolf2010accurate}, and this makes the above equations easy to apply in practice. \subsection{Measurements Along the Projected Radius} \label{sec:projected} We now apply the steps used in the preceding section to the measured velocity dispersions along the plane of the sky parallel to $R$, $\sigma_{\mathcal{R}}$. We begin by relating $\sigma_{\mathcal{R}}(R)$ to $\sigma_{r}(r)$ via \cite{strigari2007astrometry}: \begin{align} \Sigma_{\star} \sigma_{\mathcal{R}}^{2}(R) &= \int_{R^{2}}^{\infty} \frac{dr^{2}}{\sqrt{r^{2} - R^{2}}} \left[ 1 - \beta(r) + \frac{R^{2}}{r^{2}}\beta(r) \right] n_{\star}\sigma_{r}^{2}(r) \, . \label{eq:sig_projR-sig_r} \end{align} This can then be re-written in an invertible form.\footnote{The result will exactly be that of (\ref{eq:sig_los-sig_r_inv}) with $n_{\star}\sigma_{r}^{2}(1-\beta) \longrightarrow n_{\star}\sigma_{r}^{2}$.} Following the same arguments discussed previously for (\ref{eq:sig_los-sig_r_inv}), we equate the isotropic and anisotropic integrands to one another, differentiate, and algebraically massage to acquire \begin{align} M(r|\beta) - M(r|0) &= \frac{r \sigma_{r}^{2}\beta}{G} \, . \label{eq:projR_mass_diff} \end{align} Importantly, this equation lacks the parenthetical term we found in (\ref{eq:los_mass_diff}). We conclude that a radius, $r_{\rm eq}$, that minimizes the anisotropy dependence {\it does not} exist for any limiting case of $\beta$ we might impose, since the mass profile remains dependent on the anisotropy at all radii. \raggedbottom \subsection{Measurements Along the Projected Tangential} \label{sec:tangential} Moving on to the projected tangential velocity dispersion in the plane of the sky, $\sigma_{\mathcal{T}}(R)$, we relate it to $\sigma_{r}(r)$ as in \cite{strigari2007astrometry}: \begin{align} \Sigma_{\star} \sigma_{\mathcal{T}}^{2}(R) &= \int_{R^{2}}^{\infty} \frac{dr^{2}}{\sqrt{r^{2} - R^{2}}} \left[ 1-\beta(r)\right] n_{\star}\sigma_{r}^{2}(r) \, . \label{eq:sig_projT-sig_r} \end{align} This is already in an invertible form. We now equate its isotropic and anisotropic forms to one another, differentiate, and algebraically manipulate to acquire \begin{align} M(r|\beta) - M(r|0) &= \frac{r \sigma_{r}^{2} \beta}{G} \left(\gamma_{\star} + \gamma_{\sigma} + \gamma_{\beta} - 2 \right) \, . \label{eq:projT_mass_diff} \end{align} We see that as in (\ref{eq:los_mass_diff}), there can exist a radius, $r_{\mathrm{eq}}$, where the term in parentheses is zero, thus minimizing the dependence on $\beta$. As discussed previously, we expect $\gamma_\sigma + \gamma_\beta \ll 2$ at $r \simeq r_{1/2}$ for galaxies that resemble observed dwarfs. This means that, to a good approximation, $r_{\rm eq} \simeq r_{-2}$, where $r_{-2}$ is the radius at which $\gamma_{\star} = 2$. Consider now the integrated mass, given by (\ref{eq:jeans_mass}). The dependence of $\beta$ can be absorbed into the definition of $\sigma_{\rm tot}^{2} = (3-2\beta)\sigma_{r}^{2}$.
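Spelling out this absorption explicitly (a bridging step stated here for clarity; it follows directly from the definitions above), Eq.~(\ref{eq:jeans_mass}) can be rearranged by writing $-2\beta\sigma_{r}^{2} = 2(1-\beta)\sigma_{r}^{2} - 2\sigma_{r}^{2}$, so that
\begin{align}
\frac{GM(r)}{r} = \sigma_{r}^{2}\left(\gamma_{\star} + \gamma_{\sigma}\right) - 2\beta\sigma_{r}^{2} = 2(1-\beta)\sigma_{r}^{2} + \sigma_{r}^{2}\left(\gamma_{\star} + \gamma_{\sigma} - 2\right) \, ,
\end{align}
with the entire $\beta$-dependence now residing in the combination $2(1-\beta)\sigma_{r}^{2}$.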
Moreover, $\sigma_{\rm tot}^{2} - \sigma_{r}^{2}= 2(1-\beta)\sigma_{r}^{2} = 2\sigma_{\mathcal{T}}^{2}$ \citep[e.g.][]{strigari2007astrometry}, which implies \begin{align} \frac{GM(r)}{r} &= 2\sigma_{\mathcal{T}}^2(r) + \sigma_{r}^{2}(r)(\gamma_{\star} + \gamma_{\sigma} - 2) \, . \end{align} We see that at the radius $r_{-2}$, we can utilize our previous assumptions to argue $\gamma_\star + \gamma_\sigma \simeq \gamma_\star \simeq 2$, such that the term in parentheses vanishes. The remaining term on the right-hand side depends only on $\sigma_{\mathcal{T}}(r)$, which is constrained by observables, independent of the form of $\beta$, using the fact that $(1-\beta)\sigma_{r}^{2} = \sigma_{\mathcal{T}}^{2}$ in (\ref{eq:sig_projT-sig_r}). Finally, we obtain \begin{align} M(r_{-2}) &= \frac{2 \langle \sigma_{\mathcal{T}}^{2} \rangle r_{-2}}{G} \, , \label{eq:lazar_mass} \end{align} where we have assumed $\sigma_{\mathcal{T}}^{2}(r_{-2}) \simeq \langle \sigma_{\mathcal{T}}^{2} \rangle$. The implied circular velocity at $r_{-2}$ is particularly succinct \begin{align} V_{\rm circ}(r_{-2}) &= \sqrt{\ 2 \langle \sigma_{\mathcal{T}}^{2} \rangle} \, . \label{eq:lazar_vcirc} \end{align} \subsection{Testing of Equation (\ref{eq:lazar_mass})} Figure \ref{fig:mass_profiles} provides explicit examples of the accuracy of (\ref{eq:lazar_mass}) applied to Draco (left) and Sculptor (right). For demonstration purposes, we set $\sigma_{\mathcal{T}}^{2}$ precisely to the best-fit value and further assume that it is constant as a function of projected radius $R$ (which is applicable given the lack of available data). We use (\ref{eq:projT_mass_profile}) to derive the mass profiles for several fixed values of $\beta$ and see that they all intersect at $r_{-2}$, regardless of the chosen $\beta$, as expected. For a Plummer profile $r_{-2} \simeq 4R_{e}/5 \simeq 3r_{1/2}/5$, and the vertical dotted lines in each panel mark this radius. The white dots mark the prediction of (\ref{eq:lazar_mass}) using $r_{-2} \simeq 4R_{e}/5$. Therefore, to a well-established approximation, the following expression is accurate \begin{align} M(r_{-2}) &\simeq \frac{6 \langle \sigma_{\mathcal{T}}^{2}\rangle r_{1/2}}{5G} \simeq \frac{8 \langle \sigma_{\mathcal{T}}^{2}\rangle R_{e}}{5G} \, . \end{align} \subsection{The Internal Structure of Draco and Sculptor} \begin{figure} \centering \includegraphics[width=\columnwidth]{figures/dwarf_mass} \caption{ {\bf Mass measurements for Draco and Sculptor at two characteristic radii along with NFW profiles}. For each galaxy, points correspond to the line-of-sight mass (\ref{eq:wolf_mass}, smaller errors) and the projected tangential mass (\ref{eq:lazar_mass}). Lines show representative NFW mass profiles of fixed $M_{\rm vir}$ with median concentration set by subhalos in the {\tt Phat-ELVIS} simulations. } \label{fig:nfw_mass} \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{figures/dwarf_vcirc} \caption{ {\bf Observed circular velocities of Draco and Sculptor at two characteristic radii}. Circular velocity curves for NFW subhalos of a given $V_{\rm max}$ are shown. Each assumes a median $r_{\rm max}$ as derived from the {\tt Phat-ELVIS} simulations. } \label{fig:nfw_vcirc} \end{figure} Figure \ref{fig:nfw_mass} plots the implied mass of Draco (squares) and Sculptor (circles) using both (\ref{eq:wolf_mass}) and (\ref{eq:lazar_mass}).
The masses from line-of-sight measurements are the right-most points with the small error bars, while the masses from the projected tangential velocity dispersion are the left-most with larger error bars. Also plotted are the NFW \citep{navarro1997universal} mass profiles at fixed halo mass, $M_{\rm vir}= 2\times 10^{10}$ and $3\times 10^{9}\ M_{\odot}$. Concentrations are set to $16.3$ and $20.2$, respectively, based on the median values for subhalos of this mass in the $z=0$ dark-matter-only results of the {\tt Phat-ELVIS} simulations \citep{kelley2019phat}. The subhalo masses plotted were chosen so that, at the median value of the concentration, the profiles intersect the line-of-sight mass points. In principle, comparing the location of the tangentially-derived masses to the extrapolated NFW curves allows us to determine whether the predictions are consistent with a cuspy profile. Both galaxies are consistent with sitting within typical CDM halos. Note that our result for Draco is in agreement with results by \cite{read2018case}, who find Draco to be cusped around the same radial range. Figure \ref{fig:nfw_vcirc} provides an alternative view by plotting observed circular velocities using (\ref{eq:wolf_vcirc}) and (\ref{eq:lazar_vcirc}). The rotation curves for NFW profiles at fixed values of $V_{\rm max}=$ 19 and $34\ \rm km\ s^{-1}$ are also plotted, with median values of $r_{\rm max} = 1.67$ and $4.71$ kpc, respectively, for the same subhalos of {\tt Phat-ELVIS}. As seen previously in Figure \ref{fig:nfw_mass}, both measurements are consistent with the expectations for an NFW profile. Sculptor's median does fall below the extrapolated NFW, though it is easily consistent within error. If Sculptor has a cored inner density, it could have interesting implications. With a stellar mass of $M_{\star} \simeq 4\times 10^{6}\ M_{\odot}$, this galaxy lies near the low-mass edge of where feedback may be able to produce significant cores \citep{bullock2017small}. This motivates the acquisition of additional data to provide a more precise measure of $\langle \sigma_{\mathcal{T}}^{2} \rangle$. \end{document}
\section{Introduction} Since unitary quantum theory almost inevitably makes it the case that for most times, nearly all macroscopic objects have large uncertainties in their positions relative to other objects not rigidly attached to them, there has been the mystery of why our conscious perceptions are generally of such objects being perceived as having relatively precise relative locations. In other words, although our universe certainly seems to be quantum, our conscious observations seem to be almost entirely classical. Decoherence \cite{Zeh1970,Zeh1973,Z1981,Z1982,Z1984,JZ1985,Z1989,Z1991,Albrecht1992,PHZ1993,ZHP1993,Z1993,Anderson, PZ1993,Albrecht1993,ZP1994,ZP1995,AZ1996,APZ1997,Z1998a,HSZ1998,Z1998b,Z1998c,PZ1999,Z2003a,Z2003b, Schloss2004,DDZ2005,Schloss2007,P2021} and Quantum Darwinism \cite{Z2000,OPZ1,OPZ2,BZ1,BZ2,BZ3,Z1,PR2009,ZQZ2009,ZQZ2010,RZ2010,RZ2011,Z2,Z3,TYGDZ,Z4,Ball} are properties of the quantum state and its evolution at least in a region of the universe sufficiently like ours, particularly in having a strong thermodynamic arrow of time and having subsystems that exist moderately stably for time periods long compared with the decoherence timescale that is often microscopically short. They can explain why a system or object can be redundantly recorded, in a suitable basis called the {\it pointer basis} \cite{Z1981,Z1982}, by the environment, especially by the photons that scatter off the object. Observers who intercept a small fraction of this environment gain access to the redundant information that the environment records and presumably can also record this information redundantly. However, the quantum state and dynamics do not by themselves logically imply what (if any) conscious perceptions occur, or how much. (I use conscious perception and sentient experience interchangeably, meaning all that one is consciously aware of at once.) Presumably there are rules for getting the sentient experiences and their measure from the quantum state, but we do not yet know much about what these rules might be. These measures, after being normalized, would be needed as substitutes for the likelihood of a theory for the quantum state and for these rules in complete theories that are deterministic and hence do not really have any probabilities from true randomness \cite{P2006,P2017}. \newpage Perhaps the simplest framework for the rules giving the measures of the sentient experiences would be that the unnormalized measures are the expectation values, in the quantum state of the universe, of a certain set of positive operators, one for each possible sentient experience, which I shall call the Awareness Operator corresponding to the sentient experience \cite{P1994a,P1994b,P1995a,P1995b,P1996,P2001,P2006,P2011,P2017,P2020}. (Of course, the measures might be more complicated functionals of the quantum state, such as non-unit powers of expectation values or other nonlinear functionals, but expectation values, which are linear in the quantum state density matrix, appear to be the simplest possibility. For the unnormalized measures to be truly linear in the quantum state density matrix, the Awareness Operators themselves should be independent of the quantum state.) Let us make the plausible assumption that most of the contribution to the expectation values of the Awareness Operators comes from conditions inside conscious beings.
For conscious humans, it seems virtually certain that most of the contribution comes from inside their brains (so that, for example, it was just a figure of speech when Einstein said, ``One feels in one’s bones the significance of blood ties''). Therefore, for brevity, I shall refer to ``brains'' as the locations whose conditions give the dominant contributions for the measure of conscious perceptions, without meaning to prejudice the question of what gives the dominant contributions over the entire universe. For sighted humans, it seems that most (but certainly not all) of their conscious awareness is usually of visual images, which are mostly mediated by photons scattering off objects, entering the eyes, and exciting retinal neurons that then process and transmit the signals to the brain, where more processing occurs before the resulting state in some region of the brain presumably contributes to the expectation value of the Awareness Operators. Quantum Darwinism can help explain why stable records of the effect of the photons give information about the angular and spectral distribution of the photons entering the eyes, and hence about the approximate location and colors of objects, rather than about superpositions of greatly different locations. However, if the Awareness Operators are determined {\it a priori} by the laws of psycho-physical parallelism (which, as a physicist, I shall call laws of physics, even though they are not laws logically deducible from any of the laws of physics so far discovered, so to distinguish them from the presently known laws of physics, I shall call them the augmented laws of physics), they would not be determined by the conditions outside the brain. (The Awareness Operators themselves would also not be determined by the conditions inside the brain, but the expectation values they produce would of course depend on the conditions inside the brain that are themselves highly influenced by the signals coming in from the outside, even though I am assuming that the conditions outside the brain do not contribute directly to the expectation values.) So this raises the question of why our conscious perceptions seem to be of aspects of the external world that are determined by Quantum Darwinism. It seems mysterious why the state-independent Awareness Operators should be well tuned to the information recorded redundantly by Quantum Darwinism. Here I shall propose that indeed the set of all Awareness Operators is not itself tuned to the conditions redundantly recorded by Quantum Darwinism, but that instead there are many subsets of Awareness Operators that are each tuned to different forms of multiple copies of information, and it is the Awareness Operators that are tuned to the form of multiple copies that actually exist in the brain that will receive the dominant expectation values and hence make the dominant contribution to consciousness. There may indeed be some measure of conscious perceptions that do not correspond to classical components of the quantum state, but if that measure is sufficiently suppressed in comparison with the measure corresponding to classical components, that would be sufficient to explain why our observations are predominantly classical. 
\section{Model for the Brain and Awareness Operators} As discussed above, assume that most of the contributions to the expectation values of the Awareness Operators (which I am assuming give the unnormalized measures for the corresponding conscious perception, one Awareness Operator for each conscious perception) come from the conditions inside brains, and plausibly from conditions in certain subregions of brains, conditions that have low measures of occurring elsewhere. For example, one might think of each Awareness Operator as being essentially an integral over spacetime of a localized projection operator onto a certain set of configurations that predominantly occur only inside certain regions of brains when they have the corresponding conscious perception. (Each Awareness Operator thus does not have any fixed location but only picks up expectation value from regions that do have an element of one of the set of configurations corresponding to the localized projection operator that is integrated over the spacetime to give the full Awareness Operator.) Here let us model this region of the brain by a certain set of $N \geq 1$ qubits. Although these qubits need not be spins or have any particular directional properties, I shall visualize them as if they were spin-half particles (with no positional degrees of freedom) that would each, if it were in a pure state, have its spin point in some direction given by a unit vector ${\bf n}$ on the Bloch sphere; such a state can be characterized as having the density operator $\rho_{\bf n} = |{\bf n}\rangle\langle{\bf n}|$, the rank-1 projection operator onto this pure state. Then I can write the probability that such a pure state labeled by its spin direction ${\bf n}$ would be measured to have spin direction ${\bf m}$ at an angle $\theta$ from ${\bf n}$, with $\cos{\theta} = {\bf m}\cdot{\bf n}$, as \begin{equation} P({\bf m},{\bf n}) = Tr(\rho_{\bf m}\rho_{\bf n}) = \cos^2{(\theta/2)} = (1/2)(1 + {\bf m}\cdot{\bf n}). \label{prob} \end{equation} (Incidentally, if one visualized the spin-half particle in the pure state $\rho_{\bf n} = |{\bf n}\rangle\langle{\bf n}|$ as being a small opaque ball with the hemisphere surface within $90^\circ$ of the direction ${\bf n}$ painted white and the opposite hemisphere painted black, then this probability is the fraction of the total solid angle that is seen as being white from a long way away in the direction ${\bf m}$. If anyone knows a previous publication of this fact I could cite, please let me know.) I shall assume that these qubits occur mainly only inside certain regions of brains. Let the projection operator ${\bf P}_k$ (before integrating over the spacetime) corresponding to the Awareness Operator ${\bf A}_k$ (with $1\leq k \leq M$ labeling which of the $M$ total Awareness Operators of this type is being considered) be the tensor product of $N$ projection operators for each of the spins to be ``up'' along the same direction, given by the unit vector ${\bf m}_k$, but with that direction varying with $k$ that labels the Awareness Operator ${\bf A}_k$. If all of the qubits were pointing in the direction ${\bf n}$, then the expectation value of this projection operator would be $\langle{\bf P}_k\rangle = P({\bf m}_k,{\bf n})^N = [(1/2)(1 + {\bf m_k}\cdot{\bf n})]^N$. 
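Equation~(\ref{prob}) and the resulting $N$-qubit projector expectation value are easy to verify numerically; the following sketch (purely illustrative, using random directions, and of course not meant to suggest that the brain qubits are literal numerical arrays) checks the Bloch-sphere overlap formula and evaluates $P({\bf m}_k,{\bf n})^N$:
\begin{verbatim}
import numpy as np

def bloch_state(n):
    # Density operator (1/2)(I + n . sigma) for Bloch unit vector n.
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]])
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    return 0.5 * (np.eye(2, dtype=complex)
                  + n[0] * sx + n[1] * sy + n[2] * sz)

rng = np.random.default_rng(1)
m, n = rng.normal(size=3), rng.normal(size=3)
m, n = m / np.linalg.norm(m), n / np.linalg.norm(n)

p_trace = np.trace(bloch_state(m) @ bloch_state(n)).real
p_formula = 0.5 * (1.0 + m @ n)
print(p_trace, p_formula)          # agree: (1/2)(1 + m.n)

# N independent qubits all pointing along n: the projector onto all
# spins "up" along m has expectation value P(m, n)**N.
N = 20
print(p_formula ** N)
\end{verbatim}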
If the state of the $N$ spins were a maximally mixed state, the expectation value of the total projection operator at the location of this set of $N$ qubits would be $\langle{\bf P}_k\rangle = 2^{-N}$, which I am assuming is very small, since I am taking $N \gg 1$. Now suppose that Quantum Darwinism leads the density matrix of the $N$ qubits to be essentially an incoherent sum of the tensor product of the $N$ spin projection operators that has all the spins pointing along a particular direction given by the unit vector ${\bf n}$, with nonnegative real coefficient $p$, and of the tensor product of the $N$ spin projection operators that has all the spins pointing in the opposite direction, given by the unit vector ${\bf -n}$, with nonnegative real coefficient $1-p$. In other words, the mixed state of the $N$ qubits is an incoherent mixture with probability $p$ that all of the spins are pointed in the ${\bf n}$-direction and probability $1-p$ that all of the spins are pointing in the opposite direction, given by the following density operator, \begin{equation} \rho = p|{\bf n},{\bf n},\ldots\rangle\langle{\bf n},{\bf n},\ldots|+(1-p)|{-\bf n},{-\bf n},\ldots\rangle\langle{-\bf n},{-\bf n},\ldots|, \label{qubit-state} \end{equation} with $N$ qubits in each ket and bra of this density operator. \newpage However, assume that the augmented laws of physics do not restrict the Awareness Operators ${\bf A}_k$ to have just one single direction for their projection operators, but rather that they correspond to a whole set of directions ${\bf m}_k$. Then the expectation value of the Awareness Operator ${\bf A}_k$ would be proportional to the expectation value of its corresponding projection operator \begin{equation} {\bf P}_k = |{\bf m}_k,{\bf m}_k,\ldots\rangle\langle{\bf m}_k,{\bf m}_k,\ldots| \label{Pk} \end{equation} for all of the qubits to be pointing in the direction ${\bf m_k}$, which is \begin{equation} \langle{\bf P}_k\rangle \equiv Tr({\bf P}_k\rho) = p[(1/2)(1 + {\bf m}_k\cdot{\bf n})]^N + (1-p)[(1/2)(1 - {\bf m}_k\cdot{\bf n})]^N. \label{Pexp} \end{equation} For a complete set of laws of physics, one would need not only the quantum state of the universe and the Awareness Operators whose expectation values in that quantum state give the unnormalized measures of the corresponding conscious perceptions, but also the content of each perception. How the content of the perception is related to the corresponding Awareness Operator is far beyond the scope of my ability to predict, but surely there is some strong correspondence. Here I shall just assume that if the direction ${\bf m}_k$ corresponding to the projection operator ${\bf P}_k$ that is integrated over spacetime to give the Awareness Operator ${\bf A}_k$ is closely aligned with the direction ${\bf n}$ the spins would have if they corresponded to a classical pointer basis, then the content of the corresponding conscious perception would be an awareness of a classical perception, without large quantum uncertainties. In other words, the conscious perception would include an awareness of an image that appears to be of one or more objects at definite directions from the observer.
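To get a quantitative feel for how sharply Eq.~(\ref{Pexp}) is peaked around the two pointer directions, one can simply tabulate it (the values of $p$ and $N$ below are arbitrary illustrative choices):
\begin{verbatim}
import numpy as np

# <P_k> from the mixed state, as a function of c = m_k . n
def P_exp(c, p=0.7, N=50):
    return p * ((1 + c) / 2) ** N + (1 - p) * ((1 - c) / 2) ** N

for c in (1.0, 0.99, 0.9, 0.0, -0.99, -1.0):
    print(f"c = {c:+.2f}:  <P_k> = {P_exp(c):.3e}")
\end{verbatim}
For $N = 50$, $\langle{\bf P}_k\rangle$ falls from $p$ at $c = 1$ to of order $10^{-15}$ at $c = 0$, so only Awareness Operators with ${\bf m}_k$ nearly parallel or antiparallel to ${\bf n}$ pick up appreciable measure.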
Let us assume that an Awareness Operator ${\bf A}_k$ with its projection operator ${\bf P}_k$ being a tensor product of $N$ spin-half projection operators each having a direction ${\bf m}_k$ close to that of ${\bf n}$ would correspond to a classical awareness of the visual image recorded redundantly by the $N$ qubits all pointing in the direction ${\bf n}$ (with probability $p$, there also being probability $1-p$ that the $N$ qubits are all pointing in the opposite direction, ${-\bf n}$, which I am assuming would lead to a complementary classical awareness, so that an ${\bf A}_k$ with ${\bf m}_k$ close to that of ${-\bf n}$ would contribute significant measure for the complementary classical awareness). By ${\bf m}_k$ being ``close'' to ${\bf n}$ or to ${-\bf n}$, I mean that either ${\bf n}\cdot{\bf m}_k$ or ${-\bf n}\cdot{\bf m}_k$ is within $f$ of unity for $f\ll 1$, i.e., that $1-f \leq |{\bf n}\cdot{\bf m}_k| \leq 1$. If there are many ${\bf m}_k$'s uniformly distributed over the unit sphere, $f$ is the fraction of them that are ``close'' to ${\bf n}$ or to ${-\bf n}$ in this sense. This implies that if the brain qubits corresponded to directions distributed uniformly over the unit sphere, only a fraction $f$ of the total measure for all the Awareness Operators of this form would correspond to classical awarenesses. \section{Fraction of the Measure That Is Classical} If the $N$ relevant brain qubits are in the mixed state given by Eq.\ (\ref{qubit-state}), as plausibly given by Quantum Darwinism, then the situation is quite different from having brain qubits distributed randomly in direction. If $c = \cos{\theta} = {\bf n}\cdot{\bf m}_k$ with $\theta$ being the angle between ${\bf n}$ and ${\bf m}_k$, then the contribution of the fraction $f$ of the Awareness Operator directions that are ``close'' to ${\bf n}$ or ${-\bf n}$ (have $1-f\leq |c|\leq 1$ and hence are assumed to give the measure for a classical awareness) to the total measure given by all the Awareness Operators is, with $\Theta$ being the Heaviside step function, \begin{eqnarray} F&\equiv&\frac{\int_{-1}^1 dc\, \Theta(|c|-1+f)\langle{\bf P}_k\rangle}{\int_{-1}^1 dc \langle{\bf P}_k\rangle} =\frac{\int_{-1}^1 dc\, \Theta(|c|-1+f)\left[p\left(\frac{1+c}{2}\right)^N+(1-p)\left(\frac{1-c}{2}\right)^N\right]} {\int_{-1}^1 dc\, \left[p\left(\frac{1+c}{2}\right)^N+(1-p)\left(\frac{1-c}{2}\right)^N\right]}\nonumber \\ &=& 1-\left(1-\frac{1}{2}f\right)^{N+1}+\left(\frac{1}{2}f\right)^{N+1}=1-\delta. \label{fraction} \end{eqnarray} Therefore, all but a fraction $\delta = 1-F = (1-f/2)^{N+1}-(f/2)^{N+1}$ of the measure for this type of conscious perception will be perceived as classical in this toy model. For a very small fraction $f$ of the Awareness Operator qubit directions ${\bf m}_k$ that correspond to classical observations, and for a large number $N$ of qubits where the information is stored redundantly, \begin{equation} \delta \approx (1-f/2)^{N+1} = e^{(N+1)\ln{(1-f/2)}} \approx e^{-\frac{N+1}{2}f}. \label{delta} \end{equation} This implies that if $f\ll 1$, to get at least all but a fraction $\delta$ of the measure of these conscious perceptions to be classical, the number of qubits $N$ in the projection operator ${\bf P}_k$ that is integrated over the spacetime to give the Awareness Operator ${\bf A}_k$ must be at least $N\approx (2/f)\ln{(1/\delta)}-1$.
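The closed form in Eq.~(\ref{fraction}) and the exponential approximation in Eq.~(\ref{delta}) are easily checked by direct quadrature; a minimal sketch (with arbitrary illustrative values of $p$, $N$, and $f$) is:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def integrand(c, p, N):
    return p * ((1 + c) / 2) ** N + (1 - p) * ((1 - c) / 2) ** N

p, N, f = 0.7, 50, 0.05

# Numerator: |c| >= 1 - f; denominator: all c in [-1, 1].
num = (quad(integrand, 1 - f, 1, args=(p, N))[0]
       + quad(integrand, -1, -1 + f, args=(p, N))[0])
den = quad(integrand, -1, 1, args=(p, N))[0]

F_quad = num / den
F_closed = 1 - (1 - f / 2) ** (N + 1) + (f / 2) ** (N + 1)
print(F_quad, F_closed)                        # agree
print(1 - F_closed, np.exp(-(N + 1) * f / 2))  # delta vs. its approximation
\end{verbatim}
Note that, as the closed form shows, $F$ is independent of $p$; the quadrature confirms this for any choice of $p$.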
Alternatively, the fraction of the solid angle $f$ given by the directions ${\bf m}_k$ that make classical perceptions have a measure that is at least a fraction $F = 1-\delta$ of the total must be at least $f \approx [2/(N+1)]\ln{(1/\delta)}$. Since $\ln{(1/\delta)}$ is only logarithmically large, it is not too hard to get $N$ large enough that the solid angle fraction $f$ can be small, even for small $\delta$ that is the fraction of the measure of these conscious perceptions that are not classical. \section{Conclusions} Even if the Awareness Operators, whose expectation values in the quantum state of the universe I have postulated give the unnormalized measures of the corresponding conscious perceptions, are specified by augmented laws of physics independently of the quantum state, they can have a form such that, for a quantum state in which consciousness occurs primarily in a region where decoherence and Quantum Darwinism lead to high redundancy of information, most of the measure is for conscious perceptions that are classical (e.g., perceiving visual images of objects to have fairly well defined directions from the observer), instead of having large quantum uncertainties. I do need to assume that a fraction not too small of the Awareness Operators get large expectation values from quantum states of the brain in which information is stored redundantly by Quantum Darwinism, though this fraction can be considerably smaller than unity if the information is stored sufficiently redundantly. I also need to assume that there is a strong correspondence between the content of conscious perceptions and the quantum states that produce the measures of these perceptions as the expectation values of the corresponding Awareness Operators. \section{Acknowledgments} I am grateful especially for email discussions with James Hartle and Wojciech Zurek, which motivated this analysis. My research was funded by the Natural Sciences and Engineering Research Council of Canada.
\section{Introduction} The production of the strange vector meson $K^{*}(892)$ in proton-proton collisions was studied rather extensively at high collision energies \cite{Ammosov:1975cn, Kichimi:1979cs, Drijard:1981ab, Brick:1982ke, Aziz:1985gf, Bogolyubsky:1988ei, AguilarBenitez:1991yy}, with the lowest-energy data point at $\sqrt{s} = 4.93$~GeV \cite{Bockmann:1979fr}. No data, however, exist in the region close to the production threshold $\sqrt{s_{thr}} = 2.95$~GeV. This is in contrast to the situation with the kaon ground state, the production of which has been measured both inclusively and exclusively in a broad range of energies, including the very vicinity of its production threshold. The production of kaons and their excitations in proton-proton collisions is governed by the conservation of strangeness, so the simplest possible reaction reads \begin{equation} p + p \to N + Y + K/K^{*}(892), \end{equation} where $N$ stands for the nucleon and $Y$ for the ground-state hyperons $\Lambda(1116)$ or $\Sigma(1189)$. The 3-body kaon production has been studied in detail at low excess energies \cite{AbdelBary:2010pc, AbdelBary:2012zz}, and it was established that, for energies close to the production threshold, the kaon production is mainly accompanied by a $\Lambda(1116)$-hyperon rather than a $\Sigma(1189)$. It is of interest, therefore, to identify the preferred formation mechanism of the $K^{*}$-meson as well. In this paper we discuss the study of the $K^{*}(892)^{+}$ production in proton-proton collisions performed by the HADES collaboration. The deep sub-threshold $K^{*}$ production was analyzed in \cite{Agakishiev:2013nta} for Ar+KCl reactions at a beam energy of 1.756 GeV. In view of future experiments at the Facility for Antiproton and Heavy-Ion Research (FAIR) exploring heavy ion collisions at energies of 2-8 GeV/nucleon, new data from proton-proton reactions are essential as reference measurements and input for transport models. This work complements our previous studies of inclusive and exclusive strangeness production in proton-proton reactions at 3.5 GeV, namely $K^{0}$ \cite{Agakishiev:2013yyy}, $\Sigma(1385)^{+}$ \cite{Agakishiev:2011qw}, $\Lambda(1405)$ \cite{Agakishiev:2012qx, Agakishiev:2012xk}, and $pK^{+}\Lambda$ \cite{Agakishiev:2014dha}. The paper is organized as follows. Section II gives brief information about the experimental setup. The particle selection and $K^{*}$ reconstruction procedure is described in Section III. Section IV contains the obtained results, their interpretation within the two-channel model, and a discussion of the spin alignment measurement. The summary can be found in Section V. \section{The experiment} The experimental data stem from the High-Acceptance Di-Electron Spectrometer (HADES), installed at the SIS18 synchrotron (GSI Helmholtzzentrum, Darmstadt). The detector tracking system consists of a superconducting magnet and four planes of Multiwire Drift Chambers (MDC). The particle identification capabilities are extended by the Time-of-Flight wall and a Ring Imaging Cherenkov (RICH) detector. The detector, as implied by its name, is characterized by a large acceptance both in the polar (from $18^{\circ}$ to $85^{\circ}$) and azimuthal angles. The detector sub-systems are described in detail in \cite{Agakishiev:2009am}.
In 2007 a measurement of proton-proton collisions at a kinetic beam energy of 3.5 GeV was performed: the beam with an average intensity of about $1 \times 10^{7}$ particles/s was incident on a liquid hydrogen target with an area density of 0.35~g$/$cm$^{2}$, corresponding to a total interaction probability of $\sim 0.7 \%$. In total, $1.2 \times 10^{9}$ events were collected. The first-level trigger (LVL1) required at least three hits in the Time-of-Flight wall in order to suppress elastic scattering events. \section{Data analysis} \subsection{$K^{*}$ reconstruction} The $K^{*}(892)^{+}$-meson decays strongly, at the primary pp-reaction vertex, into a kaon-pion pair. The decay mode $K^{*}(892)^{+} \to K^{0} \pi^{+}$ with a branching ratio of 2/3 is particularly well suited for the analysis, since the short-lived component of the $K^{0}$, the $K^{0}_{S}$, decays weakly into a $\pi^{+}\pi^{-}$ pair ($c\tau = 2.68$~cm, branching ratio 69.2\%). The considered final state of the $K^{*}(892)^{+}$ decay is, therefore, composed of three charged pions, two of which are emitted from a secondary vertex. Therefore, as a first step of the analysis, we select events with (at least) three pions, identified by a selection on the $(dE/dx)_{MDC}$-momentum plane (Fig.~\ref{fig:dEdx}). The variable $(dE/dx)_{MDC}$ is the cumulative specific energy loss of a charged particle in the four MDC planes. \begin{figure}[h] \includegraphics[width=0.39\textwidth, angle=90]{MDCdEdx_Exp_140417.pdf} \caption{\label{fig:dEdx} (Color online) Specific energy loss of charged particles in MDC chambers as a function of the momentum times polarity. Solid curves show graphical cuts for the $\pi^{\pm}$ selection. The Bethe-Bloch curves are shown by dashed curves.} \end{figure} In the next step we consider all triplets of charged pions ($\pi^{+}\pi^{+}\pi^{-}$) that were found in one event. Since two positively charged pions are available, two $K^{0}_S$-candidates are constructed for each triplet. Afterwards, to ensure that the $K^{0}_S$ decayed away from the primary vertex, and, thus, reduce the combinatorial background, a set of topological cuts was applied to the $\pi^{+}\pi^{-}$ pair that forms the $K^{0}_S$ candidate. These were: i) a cut on the distance between the primary and the secondary vertex $d(K^{0}_S - V)>28$~mm, ii) a cut on the distance of closest approach between two pion tracks, $d_{\pi^{+}-\pi^{-}}<13$~mm, and iii) a cut on the distance of closest approach between either of the extrapolated pion tracks and the primary vertex $DCA^{K^{0}}_{\pi} > 8$~mm. In addition, a cut on the distance of closest approach between the pion track \emph{not associated} with the $K^{0}_S$-candidate (i.e. stemming from the $K^{*}$-decay) and the primary vertex, $DCA^{K^{*}}_{\pi} < 6$~mm, has been introduced. The application of the topological cuts reduces the double counting probability (i.e. identifying more than one $K^{0}_S$-candidate in one event, which is kinematically forbidden at this collision energy) to 6\%. The invariant mass spectrum of the $\pi^{+}\pi^{-}$ pairs that passed the topological cuts is shown in Fig.~\ref{fig:K0S}. A prominent $K^{0}_S$ signal is visible. In total, about $24 \times 10^{3}$ $K^{0}_S$-candidates were reconstructed in events with at least three charged pions. \begin{figure}[h] \includegraphics[width=0.48\textwidth]{IM_K0S.eps} \caption{\label{fig:K0S} (Color online) $\pi^{+}\pi^{-}$ invariant mass distribution from events with at least three pions used for the $K^{0}_S$ reconstruction.
The legend delivers information on the fit p-value, the number of $K^{0}_{S}$ over background, extracted mass, width, and signal to background ratio.} \end{figure} In the next step we applied a cut on the invariant mass of the $\pi^{+}\pi^{-}$ pairs, i.e. a 19.4~MeV$/c^{2}$ interval ($\pm 1\sigma$) centered at the $K^{0}_S$ mass peak at 495.5~MeV$/c^{2}$ and combined the pairs passing the cut with the remaining positively charged pion. The resulting invariant mass distributions of the $K^{0}_{S}-\pi^{+}$ system are shown in Fig.~\ref{fig:KstarIM} for the total sample (upper left panel) and separately in five transverse momentum bins. A clear signal of the sought-after $K^{*}$ is visible on top of the combinatorial background that can be divided into two classes: i) $\pi^{+}\pi^{+}\pi^{-}$ triplets produced in reactions without strangeness involvement, and ii) non-resonant production of $K^{0}_{S}\pi^{+}$ pairs. The total statistics of about 1700 $K^{*}$ allows for a one-dimensional differential analysis. Below we discuss the extraction of the transverse momentum spectra in detail; the procedure can be applied to any kinematical variable. The extraction of the raw (i.e. neither corrected for the limited geometrical acceptance of the HADES detector nor for the efficiency of the analysis procedure) $K^{*}$ yields is carried out with fits of the experimental invariant mass distributions. After a careful study of the best way to approximate the experimental distributions, we used the following function \begin{equation} \label{eq:fit_fun} f \left( M \right) = F_{PS}\left(M\right) \times V(M;\Gamma,\sigma) + P_{3}\left(M\right), \end{equation} where $M$ is the invariant mass of $K^{0}_{S}\pi^{+}$ pairs, $F_{PS}\left(M\right)$ --- the factor that takes into account phase-space limitations, $V\left(M; \Gamma,\sigma \right)$ --- the Voigt function, and $P_{3}\left(M\right)$ --- a third degree polynomial that models the non-resonant background. The parameters of the Voigt function are: i) $\Gamma$ --- internal width of $K^{*}$ that was fixed to the PDG value of 50.8~MeV \cite{Agashe:2014kda} and ii) $\sigma$ --- the detector resolution (instrumental width) that was determined (and fixed as well) to be about $11$~MeV by simulating a sample of zero-width $K^{*}$'s. The phase space distortion factor $F_{PS}\left(M\right)$ depends on the production channel ($\Lambda$- or $\Sigma$-associated), so first we shall introduce the two-channel simulation model. \begin{figure*}[t] \includegraphics[width=0.88\textwidth]{IMS_pTcut429.eps} \caption{\label{fig:KstarIM} (Color online) Invariant mass spectra of $K^{0}_S-\pi^{+}$ pairs (symbols) for the total sample (upper left) and in five transverse momentum bins. Solid curves are for the fits with \ref{eq:fit_fun}, dashed curves depict the background, separately. The boxes deliver information on the fit p-value, the number of $K^{*}$ over background, mean mass and signal to background ratio.} \end{figure*} \subsection{Two channel model} Due to the quite low beam energy, the three following 3-body channels are expected to dominate $K^{*}$ production: \begin{equation} \label{eq:lch} p + p \to p + \Lambda + K^{*}(892)^{+}, \end{equation} \begin{equation} \label{eq:sch} p + p \to p + \Sigma^{0} + K^{*}(892)^{+}, \end{equation} \begin{equation} p + p \to n + \Sigma^{+} + K^{*}(892)^{+}. \end{equation} Here we neglect the contribution of the 4-body channels with an additional pion ($p+p \to N + \pi + Y + K^{*}$). 
They are energetically possible, but are expected to be suppressed in comparison with the 3-body channels. As will be shown below, this assumption is confirmed by the experimental data. We make another simplification, namely we consider only one $\Sigma$-associated channel ($p\Sigma^{0}K^{*}$) out of the two that are allowed. Up to the small differences in the masses of the reaction products, both channels have exactly the same kinematics and are indistinguishable in the inclusive analysis of $K^{*}$. Hence, we employ hereafter a two-channel model that includes the $\Lambda$- and the $\Sigma$-channels, where the latter has to be understood as the sum of the two isospin-split sub-channels. The invariant mass distributions of the simulated $K^{*}$'s were reconstructed for both channels of the model. Then, the phase-space factor $F_{PS}\left(M\right)$ was determined, which takes into account the deviation from an ideal Breit-Wigner shape. As the contributions of both channels are not known a priori, the phase-space factor is constructed as the sum of the two individual contributions from the $\Lambda$- and $\Sigma$-channel: \begin{equation} F_{PS} \left(M \right) = A_{\Lambda} \times F^{\Lambda}_{PS} \left( M \right) + A_{\Sigma} \times F^{\Sigma}_{PS} \left( M \right). \end{equation} It was found that the fitting procedure is not sensitive to the exact contribution of each channel (i.e. to the weights $A_{\Lambda}$ and $A_{\Sigma}$). \subsection{Raw spectra and corrections} Fits of the experimental data, performed with Eq.~(\ref{eq:fit_fun}), allow the extraction of the raw yields (affected by the finite acceptance of the detector and the efficiency of the analysis procedure). As an example, the raw $p_{t}$-spectrum of the $K^{*}$ mesons is shown in Fig.~\ref{fig:pt_raw}. Also shown are $K^{*}$ distributions corresponding to the $\Lambda$- and $\Sigma$-channels of the two-channel model. They were obtained in the following way. A set of events (four-vectors of the reaction products) corresponding to both channels (\ref{eq:lch}) and (\ref{eq:sch}) was simulated with the {\sc pluto} Monte Carlo generator \cite{Frohlich:2007bi}. A uniform population of the 3-body phase space has been assumed: no angular anisotropy has been implemented, nor has any contribution from an $N^{*}$- or $\Delta$-resonance coupling to the $K^{*}-Y$ pair been considered. Afterwards, these events served as input for the full-scale simulation procedure that includes the propagation of particles in the detector (the {\sc geant3} code was employed), track reconstruction, etc. Finally, the simulated data sample was analyzed in the same way as the experimental one. All simulated curves shown in Fig.~\ref{fig:pt_raw} are normalized to the integral of the experimental distribution. Furthermore, an optimal mixture of the two channels has been determined by means of a $\chi^{2}$-analysis, delivering the value for the relative $\Sigma$-channel contribution of $0.4\pm0.2$, where $0.2$ is the dominating systematic uncertainty determined by the variation of the experimental cuts, as will be explained below. The resulting mixed spectrum is shown in Fig.~\ref{fig:pt_raw} as well. A contribution of the 4-body channels ($N\pi Y K^{*}$ final state) would produce an even softer $p_{t}$-spectrum as compared to the one generated by the $\Sigma$-channel and, therefore, is completely negligible in this analysis.
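For illustration, the $\chi^{2}$ determination of the channel mixture can be sketched as a one-parameter template fit; the binned $p_{t}$ templates and yields below are hypothetical stand-ins for the {\sc pluto}+{\sc geant3} spectra and the measured raw spectrum, not the actual data of this analysis:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical normalized p_t templates (Lambda harder, Sigma softer)
t_lam = np.array([0.30, 0.28, 0.22, 0.14, 0.06])
t_sig = np.array([0.38, 0.30, 0.19, 0.10, 0.03])
data = np.array([330., 290., 210., 125., 45.])   # made-up raw yields
err = np.sqrt(data)

def chi2(w_sig):
    model = data.sum() * ((1 - w_sig) * t_lam + w_sig * t_sig)
    return np.sum(((data - model) / err) ** 2)

res = minimize_scalar(chi2, bounds=(0.0, 1.0), method="bounded")
print(f"best-fit Sigma-channel weight: {res.x:.2f}")   # ~0.4 here
\end{verbatim}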
Finally, we note that an analysis of the missing mass to the $K^{*}p$ final state, potentially more selective with respect to the $\Lambda$- or $\Sigma$-contributions, is not feasible due to the limited statistics. \begin{figure}[h] \includegraphics[width=0.49\textwidth]{UncorrSpectrum_pT_3channels.eps} \caption{\label{fig:pt_raw} (Color online) Raw $p_{t}$-spectrum of $K^{*}$'s produced in proton-proton reactions. Markers --- experimental data with statistical uncertainties, curves --- expectations from the channels (3), (4), and their mixture ``$0.6 \times \Lambda + 0.4 \times \Sigma$''.} \end{figure} As the constructed model (``$0.6 \times \Lambda + 0.4 \times \Sigma$'') describes the data very well, we can use it to correct the raw experimental yields. Due to the limited statistics of the experimental data we perform a one-dimensional correction. The efficiency and acceptance corrections thus depend on a single kinematical variable. We exemplify the procedure for the $p_{t}$-variable; all other variables are treated analogously. For the purpose of the efficiency correction we prepare a histogram $I$, which corresponds to the $p_{t}$-spectrum of the simulated $K^{*}$'s in the full solid angle, not affected by the limited geometrical acceptance of the detector and efficiency of the analysis procedure. A histogram $O$ corresponds to the $p_{t}$-spectrum of $K^{*}$'s that went through the full simulation and analysis chain. Finally, the ratio $\epsilon(p_{t}) = O/I$ is the efficiency histogram. Dividing bin-wise the raw experimental $p_{t}$-spectrum by the $\epsilon(p_{t})$ histogram we obtain the acceptance- and efficiency-corrected spectrum. \section{Results and discussion} Figure~\ref{fig:pt_corr} shows the acceptance and efficiency corrected $p_{t}$-spectrum of $K^{*}$. The experimental data are normalized absolutely based on the analysis of the proton-proton elastic scattering channel in the HADES acceptance, as has been done for the inclusive di-electron analysis \cite{HADES:2011ab}. Unsurprisingly (cf. Fig.~\ref{fig:pt_raw}), the corrected spectrum is very well described by the two-channel model that assumes a 40\% contribution of the $\Sigma$-channel. \begin{figure}[h] \includegraphics[width=0.5\textwidth]{CorrSpectrum_pT.eps} \caption{\label{fig:pt_corr} (Color online) $K^{*}$ transverse momentum spectrum. Markers --- experimental data with statistical uncertainties, empty boxes --- systematical uncertainties. The two-channel model (``$0.6 \times \Lambda + 0.4 \times \Sigma$'') final state phase space distribution is shown by the solid curve.} \end{figure} As mentioned already, a one-dimensional analysis, exemplified above with the transverse momentum variable, can be performed for any chosen kinematical variable. For completeness, Figs.~\ref{fig:ptot_corr} and \ref{fig:cost_corr} show corrected spectra for the total momentum and angular distribution in the pp centre-of-mass reference frame, respectively. \begin{figure}[h] \includegraphics[width=0.5\textwidth]{CorrSpectrum_PKstarCM.eps} \caption{\label{fig:ptot_corr} (Color online) Same as Fig.~\ref{fig:pt_corr} for the momentum spectrum in the pp centre-of-mass reference frame.} \end{figure} \begin{figure}[h] \includegraphics[width=0.5\textwidth]{CorrSpectrum_CosThetaKstarCM.eps} \caption{\label{fig:cost_corr} (Color online) Same as Fig.~\ref{fig:pt_corr} for the angular spectrum in the pp-centre-of-mass reference frame.
The dashed curve corresponds to the fit with Legendre polynomials (only the 0th and 2nd orders are used due to symmetry arguments; the resulting coefficients are shown in the inset).} \end{figure} Remarkably, all three selected ($p_{t}$, $p_{c.m.}$, and $\cos{\theta_{c.m.}}$) projections of a three-dimensional single-particle phase space are well described by the two-channel model. No significant angular anisotropy is observed for the $K^{*}$ emission in proton-proton collisions: a Legendre-polynomial fit (dashed curve in Fig.~\ref{fig:cost_corr}) does not deliver a better $\chi^{2}$ as compared to the isotropic (by construction) distribution of the two-channel model. The integration of the experimental $p_{t}$-spectrum (Fig.~\ref{fig:pt_corr}) allows us to extract the total cross section of the inclusive $K^{*+}$ production: \begin{equation} \label{eq:sigma_tot} \sigma(p+p \to K^{*}(892)^{+} + X) = 9.5 \pm 0.9 ^{+1.1}_{-0.9} \pm 0.7~\mu\text{b}, \end{equation} where the statistical (first), systematic (second) and normalization (third) uncertainties are given. The systematic uncertainty was estimated by a variation of the experimental cuts (topological cuts plus $K^{0}_S$ selection via an invariant mass constraint). The values of the cuts used for these variations are listed in Table~\ref{table:table1}. In total, 1200 cut combinations have been tested. For each cut combination, new invariant mass fits were performed along with new efficiency corrections. Afterwards, from the distribution of the total cross section a central interval covering 68\% of the outcomes has been identified. The borders of this interval---asymmetric with respect to the median value---define the systematic uncertainty quoted in Eq.~\ref{eq:sigma_tot}. \begin{table}[h] \caption{\label{table:table1}% Topological and $K^{0}_S$ invariant mass cut variations used to estimate the systematic uncertainty. For all cut combinations the condition $DCA^{K^{*}}_{\pi} \leq DCA^{K^{0}}_{\pi}$ has been demanded, reducing 1800 cut combinations to 1200.} \begin{ruledtabular} \begin{tabular}{llll} Observable & Lower value & Upper value & Steps \\ \colrule $d(K^{0}_S - V)$~[mm] & $> 24$ & $> 40$ & 5\\ $d_{\pi^{+}-\pi^{-}}$~[mm] & $<7$ & $<13$ & 4\\ $DCA^{K^{0}}_{\pi}$~[mm] & $>5.6$ & $>16$ & 5\\ $DCA^{K^{*}}_{\pi}$~[mm] & $<3$ & $<16$ & 6\\ \colrule $\Delta M_{K^{0}_S}$~[MeV/$c^{2}$]& 9.7(1$\sigma$) & 19.4(2$\sigma$) & 3\\ \end{tabular} \end{ruledtabular} \end{table} The extracted total cross section value complements the $K^{*}$ excitation function shown in Fig.~\ref{fig:cs} in the low-energy region, where measurements were not available until now. The HADES data point is consistent with the trend set by the measurements at higher energies. \begin{figure}[h] \includegraphics[width=0.52\textwidth]{cs_180515.eps} \caption{\label{fig:cs} (Color online) Energy ($\sqrt{s} - \sqrt{s_{thr}}$) dependence of the total cross section for the processes: i) $pp \to K^{*}(892)^{+}X$ (red squares --- world data \cite{Ammosov:1975cn, Bockmann:1979fr, Bogolyubsky:1988ei, Brick:1982ke, Aziz:1985gf, AguilarBenitez:1991yy, Kichimi:1979cs}, red triangle --- present work), ii) $pp \to K^{*}(892)^{-}X$ (empty green crosses) \cite{Bockmann:1979fr, Bogolyubsky:1988ei, Brick:1982ke, Aziz:1985gf, AguilarBenitez:1991yy, Kichimi:1979cs}, and iii) $pp \to K^{+}X$ (empty circles) (\cite{AbdelBary:2010pc, AbdelBary:2012zz, Reed:1968zza} and references therein). 
The solid (dashed) line is a fit to the $K^{*}(892)^{+}$ ($K^{+}$) data with $f\left( x \right) = C \left( 1 - \left(D/x \right)^{\mu} \right)^{\nu}$, where $x = \sqrt{s}$. The numerical values are $C = 3.22 \times 10^{6} (1.04 \times 10^{5})$, $D = 2.89 (2.55)$~GeV, $\mu = 1.19 \times 10^{-2} (1.16 \times 10^{-1})$, $\nu = 1.86 (1.67)$. } \end{figure} In comparison to the ground state, the $K^{*}$ carries spin one, so we proceed with the discussion of its polarisation properties as probed in proton-proton collisions. The spin configuration of the $K^{*}$ in the final state is described by the spin density matrix $\rho_{mm'}$. The diagonal elements $\rho_{11}$, $\rho_{00}$, and $\rho_{1-1}$ define, respectively, the probabilities of the $+1$, 0 and $-1$ spin projections on the quantisation axis. The $\rho_{00}$ element can be extracted from the angular distribution of the decay products ($K^{0}$ or $\pi^{+}$) in the rest frame of $K^{*}$ \cite{Donoghue:1978bx} via \begin{equation} \label{eq:spinalfit} W \left(\vartheta \right) = \frac{3}{4} \left[ \left(1 - \rho_{00} \right) + \left(3\rho_{00} - 1\right)\cos^{2}(\vartheta)\right]. \end{equation} The situation with $\rho_{00} \neq 1/3$ is referred to as the \emph{spin-alignment} case, i.e. unequal populations of the $\pm 1$ and 0 spin projections. The angular distribution of interest is shown in Fig.~\ref{fig:spinal}. Our fit with Eq.~(\ref{eq:spinalfit}) gives the following result: \begin{equation} \rho_{00} = 0.39 \pm 0.09(\text{stat.})^{+0.10}_{-0.09}(\text{syst.}). \end{equation} Within uncertainties our measurement is fully consistent with $\rho_{00} = 1/3$, i.e. no spin-alignment of $K^{*}$ is observed. \begin{figure}[h] \includegraphics[width=0.485\textwidth]{CorrSpectrum_SpinAlCosThetaK0S.eps} \caption{\label{fig:spinal} (Color online) Angular distribution of $K^{0}_S$ in the $K^{*}$ rest frame. Markers --- experimental data with statistical uncertainties, empty boxes --- systematic uncertainties. The two-channel-model (``$0.6 \times \Lambda + 0.4 \times \Sigma$'') final state phase space distribution as simulated with {\sc pluto} without any spin alignment is shown by the solid curve. The dashed curve --- fit of Eq. (\ref{eq:spinalfit}) to the data with $\rho_{00} = 0.39$.} \end{figure} \section{Summary and conclusions} To summarize, we presented here the hitherto lowest-energy measurement of the $K^{*}(892)^{+}$ production in proton-proton collisions. The relative contribution of the channel $p + p \to p + \Sigma + K^{*}(892)^{+}$ has been estimated as $0.4\pm0.2$. Within the uncertainties of the experimental data, no deviations from a 3-body phase-space population have been identified in the kinematical distributions of $K^{*}$, and no convincing signal of a spin-alignment has been observed. This measurement sets a baseline for future studies of $K^{*}$ production in proton-nucleus and heavy-ion collisions. For instance, by comparing the present data with the proton-niobium collisions previously measured by HADES at the same beam energy, cold nuclear matter effects affecting the production of the $K^{*}$'s can be extracted. 
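For completeness, the extraction of $\rho_{00}$ via Eq.~(\ref{eq:spinalfit}) is a two-parameter least-squares fit that is easy to reproduce. A minimal sketch in Python with {\tt scipy} follows; the data points are synthetic placeholders rather than the measured angular distribution:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def W(cos_t, rho00, norm):
    # The decay angular distribution quoted above, with a free
    # overall normalisation.
    return norm * 0.75 * ((1.0 - rho00)
                          + (3.0 * rho00 - 1.0) * cos_t ** 2)

# Synthetic stand-in for the corrected angular distribution.
rng = np.random.default_rng(1)
cos_t = np.linspace(-0.9, 0.9, 10)
err = np.full_like(cos_t, 0.05)
y = W(cos_t, 1.0 / 3.0, 1.0) + rng.normal(0.0, err)

popt, pcov = curve_fit(W, cos_t, y, p0=[1.0 / 3.0, 1.0],
                       sigma=err, absolute_sigma=True)
rho00, rho00_err = popt[0], float(np.sqrt(pcov[0, 0]))
\end{verbatim}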
\begin{acknowledgments} The HADES collaboration gratefully acknowledges the support by the grants LIP Coimbra, Coimbra (Portugal) PTDC/FIS/113339/2009; SIP JUC Cracow, Cracow (Poland): NCN Poland, 2013/10/M/ST2/00042; Helmholtz-Zentrum Dresden-Rossendorf (HZDR), Dresden (Germany) BMBF 05P12CRGHE; TU M\"unchen, Garching (Germany) MLL M\"unchen: DFG EClust 153, VH-NG-330 BMBF 06MT9156 TP5 GSI TMKrue 1012; NPI AS CR, Rez, Rez (Czech Republic) M100481202 and GACR 13-06759S; USC - S. de Compostela, Santiago de Compostela (Spain) CPAN:CSD2007-00042, Goethe University, Frankfurt (Germany): HA216/EMMI HIC for FAIR (LOEWE) BMBF:06FY9100I GSI F\&E EU Contract No. HP3-283286. \end{acknowledgments}
\section{Introduction} \label{sec:intro} The potential of difference image analysis (DIA) as a powerful tool for unveiling short period variable stars, or small amplitude variations in Blazhko RR Lyraes, in the densely populated central regions of globular clusters (GC), has been demonstrated in recent papers \citep[e.g.][ etc]{kains12+05, af12+04, af11+04, bramich11+03, corwin06+06, strader02+02}. Multi-colour time-series CCD photometry allows the identification of variable stars in specific regions of the colour-magnitude diagram (CMD). The Fourier decomposition of RR Lyrae light curves enables us to derive stellar parameters. In the Blue Straggler region, SX Phe stars are often found and depending on the number of them in the cluster, their Period-Luminosity relation (P-L) can be calibrated or it can be used to obtain an independent estimate of the distance to the cluster. In the present paper we focus on the globular cluster NGC~7492. This is a rather sparse outer-halo cluster ($R_{GC} \sim$25~kpc; \cite{harris96} (2010 edition)) for which detailed spectroscopic abundances exist ([Fe/H] $\sim-$1.8) for four stars at the tip of the red giant branch (RGB) \citep{cohen05+01}. Hence the cluster offers a good opportunity to compare the spectroscopic results with the metallicity derived from the light curve Fourier decomposition approach for RR Lyrae stars. According to the Catalogue of Variable Stars in Globular Clusters (CVSGC) \citep{clement01}, only four variable stars are known in this cluster; namely one RR0 (V1), discovered by \cite{shapley20XVII}, two RR1 (V2 and V3) and one long period variable (LPV; V4), all discovered by \cite{barnes68}. Although \cite{buonanno87+03} suggest the presence of a population of blue stragglers in this cluster and numerous blue stragglers have been identified by \cite{cote91+02}, no investigation into the variability of these otherwise faint stars ($V\sim$20~mag) has been reported. Taking advantage of our time-series CCD photometry of the cluster and our capability of performing precise photometry via DIA, we explore the field of the cluster for new variables. In $\S$ \ref{sec:ObserRed} we describe the observations, data reduction, and transformation of the photometry to the Johnson-Kron-Cousins standard system. In $\S$ \ref{sec:var_find_technique} we present a detailed discussion of the strategies employed for the identification of new variables. In $\S$ \ref{sec:RRLyrae} the physical parameters of the RR0 star as derived from the Fourier decomposition of its light curve and the Blazhko effect for the RR1 star are discussed. In $\S$ \ref{sec:newRGB}, we make brief comments on the long term variables. In $\S$ \ref{sec:newsx} we present a discussion for the newly found SX Phe stars and candidates and in $\S$ \ref{sec:Concl} we summarize our results. \section{Observations and reductions} \label{sec:ObserRed} The observations employed in the present work were obtained, using the Johnson-Kron-Cousins $V$, $R$ and $I$ filters, on the dates listed in table \ref{tab:date}. We used the 2.0m telescope of the Indian Astronomical Observatory (IAO) at Hanle, India, located at 4500m above sea level. The typical seeing was $\sim$1.3 arcsec. The detector was a Thompson CCD of 2048 $\times$ 2048 pixels with a pixel scale of 0.296 arcsec/pix and a field of view of $\sim10.1 \times 10.1$ arcmin$^2$. 
However, for this cluster we can only apply DIA to smaller images that cover an area of $\sim$6.4$\times$5.5 arcmin$^2$ centred on the cluster because of a lack of sources towards the detector edges for use in the kernel solutions. Our data set consists of 119 images in $V$, 54 images in $R$, and 10 images in $I$. The images were calibrated via standard overscan bias level and flat-field correction procedures, and difference image analysis (DIA) was performed with the aim of extracting high precision time-series photometry of the stars in the field of NGC~7492. We used the {\tt DanDIA}\footnote{ {\tt DanDIA} is built from the DanIDL library of IDL routines available at http://www.danidl.co.uk} pipeline for the data reduction process which models the convolution kernel matching the point-spread function (PSF) of a pair of images of the same field as a discrete pixel array \citep{bramich08,Bramichetal2012}. A brief summary of the {\tt DanDIA} pipeline can be found in \cite{af11+04} while a detailed description of the procedure and its caveats is available in \cite{bramich11+03}. \begin{table} \scriptsize \caption{Distribution of observations of NGC 7492 for each filter.} \centering \begin{tabular}{ccccccc} \hline Date &$N_V$&$t_V$(s) &$N_R$&$t_R$(s)& $N_I$ & $t_I$(s)\\ \hline 20041004 & 7 & 60-200 & 7 & 150-180& -- & -- \\ 20041005 & 16 & 60-120 & 14 & 100 & -- & -- \\ 20060801 & 8 & 120 & 7 & 100 & -- & -- \\ 20070804 & 6 & 240 & 7 & 100-180& -- & -- \\ 20070805 & 4 & 240 & 3 & 120-180& -- & -- \\ 20070904 & 8 & 180-240 & 8 & 120-180& -- & -- \\ 20070905 & 8 & 180 & 8 & 120 & -- & -- \\ 20090107 & 4 & 100 & -- & -- & 5 & 100-300 \\ 20090108 & 5 & 300 & -- & -- & 5 & 220-300 \\ 20120628 & 37 & 90-180 & -- & -- & -- & -- \\ 20120629 & 16 & 200 & -- & -- & -- & -- \\ \hline TOTAL: & 119 & & 54 & & 10 & \\ \hline \end{tabular} \tablefoot{ \tablefoottext{a}{$N_V$, $N_R$ and $N_I$ are the number of images taken for the filters $V$, $R$, and $I$, respectively.} \tablefoottext{b}{$t_V$, $t_R$ and $t_I$ are the exposure times, or range of exposure times, employed during each night for each filter.}} \label{tab:date} \end{table} The reference image for each filter was constructed by registering and stacking the best-seeing calibrated images such that all images used were taken on a single night. This resulted in 2, 4, and 1 images being stacked with total exposure times of 120, 400 and 100 s for the filters $V$, $R$ and $I$, respectively. The light curve data in all three filters for all of the variable stars are provided in table \ref{tab:vri_phot}. In addition to the star magnitudes, we supply the difference fluxes $f_{\mbox{\scriptsize diff}}(t)$ (ADU/s), the reference flux $f_{\mbox{\scriptsize ref}}$ (ADU/s) and the photometric scale factor $p(t)$, at time $t$, as provided by the {\tt DanDIA} pipeline. These quantities are linked to the instrumental magnitudes $m_{\mbox{\scriptsize ins}}$ via the equations \begin{equation} f_{\mbox{\scriptsize tot}}(t) = f_{\mbox{\scriptsize ref}} + \frac{f_{\mbox{\scriptsize diff}}(t)}{p(t)} \label{eq:totflux} \end{equation} \begin{equation} m_{\mbox{\scriptsize ins}}(t) = 25.0 - 2.5 \log (f_{\mbox{\scriptsize tot}}(t)). \label{eq:mag} \end{equation} \subsection{Transformation to the $VRI$ standard system} \label{subs:transf_std} The instrumental $v$, $r$ and $i$ magnitudes were converted to the Johnson-Kron-Cousins photometric system \citep{landolt92} by using the standard stars in the field of NGC 7492. 
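Both steps of the photometric calibration are straightforward to reproduce. The sketch below (Python with {\tt numpy}) converts the tabulated difference-flux quantities into instrumental magnitudes via Equations~\ref{eq:totflux} and \ref{eq:mag}, and fits a linear colour-dependent transformation of the kind described next; the variable names are illustrative and this is not our actual reduction code:
\begin{verbatim}
import numpy as np

def instrumental_mag(f_ref, f_diff, p):
    # f_tot = f_ref + f_diff/p ;  m_ins = 25 - 2.5 log10(f_tot)
    return 25.0 - 2.5 * np.log10(f_ref + f_diff / p)

# The first V-filter row of V1 in the photometry table
# reproduces m_ins ~ 18.014 mag:
m = instrumental_mag(558.913, 64.366, 1.0059)

def fit_colour_transformation(v_ins, r_ins, V_std):
    # Least-squares fit of V - v = a + b (v - r) to the standards.
    colour = v_ins - r_ins
    A = np.vstack([np.ones_like(colour), colour]).T
    (a, b), *_ = np.linalg.lstsq(A, V_std - v_ins, rcond=None)
    return a, b
\end{verbatim}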
The $V$, $R$ and $I$ standard stars and their magnitudes are available in the catalogue of Stetson (2000)\footnote{http://www3.cadc-ccda.hia-iha.nrc-cnrc.gc.ca/community/STETSON/standards}. Fig. \ref{trans} displays the relations between the instrumental and standard magnitude systems as a function of instrumental ($v-r$) colour, where we found mild colour dependencies. The standard stars that we used span the colour ranges $-0.02<V-R<0.53$~mag and $0.02<V-I<1.10$~mag, which cover the range of colours of the stars in our field of view. To convert the instrumental magnitudes to standard magnitudes for the stars without an instrumental $v-r$ colour, we assumed a value of $v-r = 0.286$~mag corresponding to the centre of the spread of instrumental $v-r$ star colours. \begin{figure}[!t] \begin{center} \includegraphics[scale=0.47]{transf.VRI.colour.eps} \caption{Photometric transformation relations between the instrumental $v, r, i$ and the standard $V, R, I$ magnitudes using the standard stars from Stetson (2000).} \label{trans} \end{center} \end{figure} \subsection{Astrometry and finding chart} \label{sub:finding_chart} We derived a linear astrometric solution for the $V$ filter reference image by matching 37 hand-picked stars with the UCAC3 star catalogue \citep{ucac3} using a field overlay in the image display tool GAIA \citep{Draper2000}. We achieved a radial RMS scatter in the residuals of $\sim$0.24~arcsec, which is equivalent to $\sim$0.82 pixels. To facilitate the identification of the variable stars in this cluster in future studies, we have produced a finding chart, which we present in figure \ref{fig:N7492id}. In addition, in Table \ref{tab:astrom} we present the equatorial J2000 celestial coordinates of all of the variable stars discussed in this work. This astrometric solution is in excellent agreement with the astrometry given by \cite{stetson00+00} for the standard stars in this cluster. \begin{table*}[htp] \caption{Celestial coordinates for all of the confirmed and candidate variables in our field of view, except V3 which lies outside of our field of view. The coordinates correspond to the epoch of the $V$ reference image, which is the heliocentric Julian date $\sim$2453284.26~d. 
We also include in this table, the epoch, period, mean $V$ magnitude, $V-R$ colour and the full amplitude of each variable.} \centering \begin{tabular}{cccccccccc} \hline ID & type & RA & DEC & Epoch & P & $V$ &$V-R$ & Amp (V) &Amp(R)\\ & & (J2000) & (J2000) & (days) & (days) & (mag) &(mag) & (mag) &(mag) \\ \hline V1 & RR0 & 23:08:26.68 &-15:34:58.5 & 2453949.4046 & 0.805012 & 17.303\tablefootmark{b} &0.264\tablefootmark{b}&0.511 &0.408\\ V2 & RR1 & 23:08:25.00 &-15:35:52.9 & 2453284.2652 &0.411764\tablefootmark{a}& 17.256\tablefootmark{b} &0.188\tablefootmark{b}&0.251 &0.140\\ V4 & LPV & 23:08:23.19 &-15:39:06.0 & -- &$\sim$21.7 & 14.271\tablefootmark{c} & -- &$\sim$0.18& -- \\ V5 & LPV & 23:08:39.08 &-15:34:36.3 & -- &-- & 14.258\tablefootmark{c} & -- &$>$0.33 & -- \\ V6 & SX Phe & 23:08:29.16 &-15:36:51.1 & 2454318.3687 &0.0565500 & 19.235\tablefootmark{b} &0.166\tablefootmark{b}&0.136 &0.058\\ V7 & SX Phe & 23:08:19.83 &-15:37:33.6 & 2454839.0479 &0.0725859 & 19.363\tablefootmark{b} &0.448\tablefootmark{b}&0.050 & -- \\ CSX1& SX Phe?& 23:08:32.73 &-15:35:03.4 & -- & -- & 19.499\tablefootmark{c} &0.210\tablefootmark{c}&$\sim$0.15&0.09?\\ \hline \end{tabular} \tablefoot{\tablefoottext{a}{If we consider a secular period change then the period is P$_0=0.412119$~d at the epoch $E=$2453284.2652~d and the period change rate is $\beta\approx$ 47~d~Myr$^{-1}$. }\tablefoottext{b}{Intensity weighted magnitude calculated from the light curve model.}\tablefoottext{c}{Mean magnitude from our data.}} \label{tab:astrom} \end{table*} \begin{figure*}[!htp] \begin{center} \includegraphics[scale=0.75]{Finding.Charts.ps} \caption{Finding charts constructed from our $V$ reference image; north is up and east is to the right. The cluster image is 6.36$\times$5.52 arcmin$^{2}$, and the image stamps are of size 23.7$\times$23.7 arcsec$^{2}$. Each variable (except V4 and V5) lies at the centre of its corresponding image stamp and is marked by a cross-hair.} \label{fig:N7492id} \end{center} \end{figure*} \begin{table*} \caption{Time-series $V$, $R$ and $I$ photometry for all of the confirmed and candidate variables in our field of view. Note that V3 lies outside of our field of view. The standard $M_{\mbox{\scriptsize std}}$ and instrumental $m_{\mbox{\scriptsize ins}}$ magnitudes are listed in columns 4 and 5, respectively, corresponding to the variable star, filter, and heliocentric Julian Date of mid-exposure listed in columns 1-3, respectively. The uncertainty on $m_{\mbox{\scriptsize ins}}$ is listed in column 6, which also corresponds to the uncertainty on $M_{\mbox{\scriptsize std}}$. For completeness, we also list the quantities $f_{\mbox{\scriptsize ref}}$, $f_{\mbox{\scriptsize diff}}$ and $p$ from Equation~\ref{eq:totflux} in columns 7, 9 and 11, along with the uncertainties $\sigma_{\mbox{\scriptsize ref}}$ and $\sigma_{\mbox{\scriptsize diff}}$ in columns 8 and 10. This is an extract from the full table, which is available with the electronic version of the article. 
} \centering \begin{tabular}{ccccccccccc} \hline Variable & Filter & HJD & $M_{\mbox{\scriptsize std}}$ & $m_{\mbox{\scriptsize ins}}$ & $\sigma_{m}$ & $f_{\mbox{\scriptsize ref}}$ & $\sigma_{\mbox{\scriptsize ref}}$ & $f_{\mbox{\scriptsize diff}}$ & $\sigma_{\mbox{\scriptsize diff}}$ & $p$ \\ Star ID & & (d) & (mag) & (mag) & (mag) & (ADU s$^{-1}$) & (ADU s$^{-1}$) & (ADU s$^{-1}$) & (ADU s$^{-1}$) & \\ \hline V1 & $V$ & 2453283.25245 & 17.400 & 18.014 & 0.009 & 558.913& 0.904 & 64.366& 4.992& 1.0059 \\ V1 & $V$ & 2453283.28310 & 17.444 & 18.058 & 0.005 & 558.913& 0.904 & 39.065& 2.618& 0.9944 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots& \vdots&\vdots& \vdots \\ V1 & $R$ & 2453283.27903 & 17.162 & 17.769 & 0.004 & 786.605& 0.821 & -5.906 & 3.044& 1.0210 \\ V1 & $R$ & 2453283.28689 & 17.153 & 17.759 & 0.004 & 786.605& 0.821 & 0.832 & 3.034& 1.0184 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots& \vdots&\vdots& \vdots \\ V1 & $I$ & 2454839.03764 & 16.575 & 17.636 & 0.010 & 865.889& 3.651 & 16.109& 8.462& 0.9974 \\ V1 & $I$ & 2454839.04150 & 16.577 & 17.639 & 0.007 & 865.889& 3.651 & 14.141& 5.362& 0.9945 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots& \vdots&\vdots& \vdots \\ \hline \end{tabular} \label{tab:vri_phot} \end{table*} \section{Variable star search strategies} \label{sec:var_find_technique} \subsection{Standard deviation} \label{subs:SD} $V$ light curves were produced for 1623 stars in the field of our images. The mean magnitude, computed using inverse variance weights, and the RMS were calculated for each light curve. Fig. \ref{fig:rms} shows the RMS as a function of the mean magnitude for the $V$, $R$ and $I$ filters and indicates the precision of our photometry. Stars with a large dispersion for a given mean magnitude, in principle, are good candidate variables. However, it is possible that a light curve has a large RMS due to occasional bad measurements of the corresponding star in some images, in which case the variability could be spurious. We have used the RMS values as a guide to our search for variables. The two known RR Lyrae stars (V1 and V2) and the known red giant variable (V4) are highlighted with colours as indicated in the caption of Fig. \ref{fig:rms}. They clearly stand out from the general trend. V3 is not in the field of our images. With this method we have also identified another long period variable discussed later in this paper which we have labelled as V5. While this method is useful for detecting bright variables, it is not useful for detecting shorter period faint variables in the blue straggler region. It is clear from the red points, which correspond to two new SX Phe and one candidate SX Phe, that they do not stand out in the plot relative to other faint non-variable stars. For these variables, we have used a different approach described in the following sections. \begin{figure}[!t] \begin{center} \includegraphics[scale=0.485]{rmsVRI.eps} \caption{RMS magnitude deviation as a function of the mean magnitude in the filters $V$, $R$, and $I$. Known variables are labelled. V1 (dark blue point) and V2 (green point) are two known RR Lyrae stars in the field of the cluster. Red circles correspond to two newly identified SX Phe stars and one candidate SX Phe. The variable V4 is saturated in our $R$ reference image and the newly identified variable V5 is saturated in the $R$ and $I$ reference images. Hence these stars are not shown in the corresponding plots. 
The cyan points correspond to the blue stragglers identified by \cite{cote91+02}.} \label{fig:rms} \end{center} \end{figure} \subsection{String-length period search} \label{subs:SQ} The light curves of the 1623 stars measured in each of the 119 $V$ images were analyzed by the string-length minimization approach \citep{burke70+02,Dworetsky83}. In this analysis, the light curve is phased with numerous test periods within a given range. For each period the dispersion parameter $S_Q$ is calculated. When $S_Q$ is at a minimum, the corresponding period produces a phased light curve with the minimum possible dispersion and it is adopted as the best-fit period for that light curve. Bona fide variable stars should have a value of $S_Q$ below a certain threshold. Similar analyses of clusters with numerous variables have shown that all periodic variables with long periods and large amplitudes are likely to have $S_Q \leq 0.3$. However, short-period small-amplitude variables like SX Phe are often missed by this approach \citep{af04+05,af06+02}. Fig. \ref{fig:SQ} shows the distribution of $S_Q$ values for all stars measured in the $V$ images, plotted as a function of an arbitrary star number. We explored individually the light curves below the indicated threshold of $S_Q=0.3$. With this method we recover the two RR Lyrae stars in the field of our images (V1 and V2) and the LPV V4, and we discovered the LPV V5. The other stars below this threshold were not found to display true variations. \begin{figure}[!ht] \begin{center} \includegraphics[scale=0.475]{SQ.V.eps} \caption{$S_Q$ values for all the stars in the $V$ images as a function of an arbitrary star number. The blue line is the threshold below which RR Lyrae stars tend to be found. The known variables V1, V2 and V4 are labelled as well as the newly identified long period variable V5, the new SX Phe V6 and V7 and one candidate SX Phe star (red points). The cyan points correspond to the stars identified by \cite{cote91+02} as blue stragglers.} \label{fig:SQ} \end{center} \end{figure} \subsection{Colour-magnitude diagram} \label{subs:CMD} The CMD is very useful for separating groups of stars that are potential variables, e.g. in the horizontal branch (HB), the RGB and the blue straggler region. Fig. \ref{fig:CMD} shows the $V$ versus $(V-R)$ diagram. The known RR0 and RR1 stars contained in our field of view are shown as dark blue and green circles, respectively. The known red giant variable V4 is not shown because it is saturated in our $R$ reference image and V5 is saturated in the $R$ and $I$ reference images. The blue straggler region has been arbitrarily defined by the red box in this CMD. We selected the faint limit such that the photometric uncertainty is below 0.1 mag and the red limit so that the region is not too contaminated by the main sequence. It is worth noting that in the HB, the RR Lyrae region is populated only by two of the three already known RR Lyrae stars and one more star labelled as C in the figure. The light curve of star C does not show signs of variation at the precision of our data ($\sim$0.02~mag); hence the star may be a field object. Furthermore, there are no saturated stars in the field of view of our $V$ filter images. Therefore, with a typical precision of $\sim$0.01-0.02~mag in our $V$ light curves at the magnitude of the HB ($\sim$17.3~mag), we can be sure that there are no more RR Lyrae stars in the cluster in our field of view (Fig. \ref{fig:N7492id}). 
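For reference, the two statistics used so far can be implemented in a few lines. The following sketch (Python with {\tt numpy}) computes the inverse-variance weighted mean and RMS of Section~\ref{subs:SD} and a string-length statistic in the spirit of Section~\ref{subs:SQ}; the normalisation of $S_Q$ may differ in detail from our implementation:
\begin{verbatim}
import numpy as np

def mean_and_rms(mags, errs):
    # Inverse-variance weighted mean magnitude and RMS.
    w = 1.0 / errs ** 2
    mean = np.sum(w * mags) / np.sum(w)
    return mean, np.sqrt(np.mean((mags - mean) ** 2))

def string_length(t, mags, period):
    # Total length of the "string" joining the phase-ordered
    # points, with magnitudes rescaled to the unit interval.
    phase = np.mod(t / period, 1.0)
    idx = np.argsort(phase)
    ph, mg = phase[idx], mags[idx]
    mg = (mg - mg.min()) / np.ptp(mg)
    dph = np.diff(np.append(ph, ph[0] + 1.0))   # wrap in phase
    dmg = np.diff(np.append(mg, mg[0]))
    return float(np.sum(np.hypot(dph, dmg)))

def best_period(t, mags, trial_periods):
    sq = [string_length(t, mags, p) for p in trial_periods]
    return trial_periods[int(np.argmin(sq))]
\end{verbatim}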
\begin{figure}[!ht] \begin{center} \includegraphics[scale=0.495]{dcm.VR.eps} \caption{CMD of NGC 7492. Two known RR Lyrae variables V1 and V2 are marked with dark blue and green symbols respectively. The new SX Phe variables V6 and V7 and one candidate SX Phe are shown with red symbols. The cyan points correspond to the blue stragglers identified by \cite{cote91+02}. The red box is an arbitrarily defined Blue Straggler region (see text).} \label{fig:CMD} \end{center} \end{figure} \subsection{Variability detection statistic $\cal S_B$} \label{subs:SB} We also analysed the light curves for variability via the detection statistic $\cal S_B$ defined by \cite{af12+04} and employed by these authors to detect amplitude modulations in RR Lyrae stars attributed to the Blazhko effect. The variability detection statistic $\cal S_B$ was inspired by the alarm statistic $\cal A$ defined by \cite{tamuz06+02}, designed originally for improving the fitting of eclipsing binary light curves. The advantages of redefining the alarm statistic as: \begin{equation} S_B=\left(\frac{1}{NM}\right)\sum_{i=1}^{M}\left(\frac{r_{i,1}}{\sigma_{i,1}} +\frac{r_{i,2}}{\sigma_{i,2}}+...+\frac{r_{i,k_{i}}}{\sigma_{i,k_{i}}} \right)^2, \label{eq:fourier} \end{equation} \noindent have been discussed by \cite{af12+04}. In this equation, $N$ represents the total number of data points in the light curve and $M$ is the number of groups of time-consecutive residuals of the same sign from a constant-brightness light curve model. The residuals $r_{i,1}$ to $r_{i,k_i}$ form the $i$th group of $k_i$ time-consecutive residuals of the same sign with corresponding uncertainties $\sigma_{i,1}$ to $\sigma_{i,k_i}$. Our $\cal S_B$ statistic may therefore be interpreted as a measure of the systematic deviation per data point of the light curve from a non-variable (constant-brightness) model. We note that in \cite{af12+04} the residuals $r_{i,j}$ are calculated relative to the Fourier decomposition light curve model rather than relative to a constant-brightness model used in this work. It is this difference in application that makes $\cal S_B$ a detection statistic for the Blazhko effect in \cite{af12+04} and a detection statistic for variability in this paper. Equation~\ref{eq:fourier} has been modified from the corresponding equation in \cite{af12+04} by further normalising the $\cal S_B$ statistic by $M$. This modification serves to improve the discriminative power of the statistic because variable stars, as opposed to non-variable stars, have longer time-consecutive runs of light curve data points that are brighter or fainter than the constant-brightness model, and therefore smaller values of $M$ (for a given light curve $N$). We calculated $\cal S_B$ for each of our $V$ and $R$ light curves and we made plots of $\cal S_B$ versus magnitude in each filter. The variables detected so far by the methods discussed in Sections \ref{subs:SD}-\ref{subs:CMD} (V1, V2, V4 and V5) stand out in these diagrams with very large $\cal S_B$ values compared to the other stars. However, we found that we could make these differences in the $\cal S_B$ values between variable and non-variable stars even larger by calculating $\cal S_B$ for the combined $VR$ light curves. 
In this case, for each star, we adjust the $R$ light curve so that its mean magnitude matches that of the corresponding $V$ light curve, and then we calculate $\cal S_B$ for the combined $VR$ light curve\footnote{This procedure is valid for the variable stars in our data because the data points in our light curves generally alternate between the two filters and therefore the light curve data in each filter has approximately the same phase coverage.}. \begin{figure}[!t] \begin{center} \includegraphics[scale=0.48]{alarm.VR.eps} \caption{$\cal S_B$ statistic as a function of $V$ mean magnitude for the $VR$ combined light curves. The RR Lyrae stars V1 and V2 are labelled, as are the new SX Phe stars V6 and V7 and the one candidate SX Phe star. The cyan points correspond to the blue stragglers identified by \cite{cote91+02}. The long period variables V4 and V5 do not appear on this plot because they are saturated in the $R$ filter. The solid blue curve is the median (50\%) curve determined from our simulations and adjusted to fit the real $\cal S_B$ data above $V \sim$19~mag. The dashed red curve represents our variable star detection threshold in $\cal S_B$ set using our simulations to limit our false alarm rate to $\sim$0.1\%. The solid red curve represents our adopted variable star detection threshold when we take into account the systematic errors. We further limited our variable star search to stars brighter than $V =$20.6~mag (vertical dashed red line).} \label{fig:alarm} \end{center} \end{figure} In Fig. \ref{fig:alarm} we plot $\cal S_B$ for each of the combined $VR$ light curves as a function of the $V$ mean magnitude. The RR Lyrae stars V1 and V2 clearly have $\cal S_B$ values among the largest in the light curve sample. It is interesting to note that $\cal S_B$ generally scatters around a constant value ($\sim$0.03) for $V$ fainter than $\sim$19~mag. For stars brighter than $V \sim$19~mag, the $\cal S_B$ values show an exponential increase (which appears linear on the log-scale of Fig. \ref{fig:alarm}). This feature can be explained by considering the systematic errors that exist at some level in all the light curves. However, we defer the relevant discussion of this topic until later in this section. To detect new variable stars, we need to define a detection threshold that optimises our sensitivity to real variables while being set high enough to minimise the number of false alarms (i.e. classification of non-variable stars as variable). Without setting this detection threshold carefully, one runs the risk of publishing suspected variable stars of which the majority may be refuted in subsequent photometric campaigns (see \cite{safonova11+01} and \cite{bramich12+03} for good examples). We decided to determine the threshold for our $\cal S_B$ statistic through the use of simulations. For each combined $VR$ light curve in our sample, we performed 10$^{6}$ simulations. Each simulation consists of generating a random light curve $m_{i}$ using the real light curve data point uncertainties $\sigma_{i}$ via: \begin{equation} m_{i} = \overline{V} + \lambda_{i} \sigma_{i}, \label{eqn:randomlc} \end{equation} where the $\lambda_{i}$ are a set of random deviates drawn from a normal distribution with zero mean and unit $\sigma$, and $\overline{V}$ is the mean $V$ magnitude of the real light curve. We calculated $\cal S_B$ for the simulated light curves and obtained a distribution of 10$^{6}$ $\cal S_B$ values from which we determined the median (50\%) and 99.9\% percentile. 
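For reference, the computation of $\cal S_B$ and of its simulated null distribution can be sketched as follows (Python with {\tt numpy}); a reduced number of simulations is used here for brevity, whereas the analysis above uses $10^{6}$ per star:
\begin{verbatim}
import numpy as np

def SB(mags, errs, model):
    # Group the normalised residuals from the model into runs of
    # time-consecutive points of the same sign; sum the squared
    # run totals and normalise by N*M (the equation above).
    r = (mags - model) / errs
    breaks = np.flatnonzero(np.diff(np.sign(r)) != 0) + 1
    groups = np.split(r, breaks)
    return sum(g.sum() ** 2 for g in groups) / (len(r) * len(groups))

def sb_threshold(errs, mean, n_sim=10000, pct=99.9, seed=0):
    # Null-hypothesis light curves m_i = mean + lambda_i sigma_i
    # with pure Gaussian noise; the 99.9th percentile of the S_B
    # distribution corresponds to a 0.1% false-alarm rate.
    rng = np.random.default_rng(seed)
    model = np.full_like(errs, mean)
    sims = [SB(mean + rng.normal(0.0, errs), errs, model)
            for _ in range(n_sim)]
    return np.median(sims), np.percentile(sims, pct)
\end{verbatim}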
We found that the median values of the $\cal S_B$ distributions are approximately the same (to within the noise of the finite number of simulations) for all of our stars, as are the 99.9\% percentile values, which implies that for light curves with the same number of data points, the actual distribution of data point uncertainties has no impact on the threshold to be chosen for $\cal S_B$. We found that for our combined $VR$ light curves, the mean of the $\cal S_B$ distribution medians is $\sim$0.0256, and the mean of the 99.9\% percentiles is $\sim$0.0525. These lines are plotted in Fig. \ref{fig:alarm} as the horizontal dashed blue and red lines. Looking again at Fig. \ref{fig:alarm} we now see that the $\cal S_B$ values for the real light curves scatter close to the median line from the simulations for stars fainter than $V \sim$19~mag, which implies that for these stars the simulations provide a reasonably good model for the noise in the real light curves. However, for the stars brighter than $V \sim$19~mag, the $\cal S_B$ values for the real light curves increase exponentially with increasing brightness and are much larger than what we would expect as determined from our light curve simulations with pure Gaussian noise. We can explain this by considering that the systematic errors in the light curves, which correlate over groups of time-consecutive data points and therefore mimic real variability, increasingly dominate the noise in our real light curves with increasing star brightness. To account for the systematic errors, we need to adjust our median and 99.9\% percentile curves in Fig. \ref{fig:alarm}, which we do by fitting a linear relation to the log-$\cal S_B$ values for $V$ brighter than 19~mag and merging this fit with the constant median curve for $V$ fainter than 19~mag (solid blue curve). We then shift this curve to larger $\cal S_B$ values so that the horizontal part matches that of the 99.9\% percentile (solid red curve). Finally we adopt the solid red curve as our detection threshold for new variables. By choosing a variable star detection threshold set to the 99.9\% percentile of $\cal S_B$ from our simulations of light curves that have only pure Gaussian noise, we have set our false alarm rate to 0.1\%, which implies that with 1585 stars with combined $VR$ light curves we should expect only $\sim$1.6 non-variable stars to fall above our threshold. However, since we are fully aware that the systematic errors may affect some light curves more than others for various reasons (e.g. near a saturated star, cosmic ray hits, etc.) we must still exercise caution with all candidate variable stars that lie above our detection threshold in $\cal S_B$. We observe that for stars fainter than $V =$20.6~mag, the $\cal S_B$ values have a larger number of high outliers than is typical and therefore we further limit our variable star search to stars brighter than $V =$20.6~mag (vertical dashed red line in Fig. \ref{fig:alarm}). We note that the two RR Lyrae stars V1 and V2 have $\cal S_B$ values much greater than our adopted detection threshold and are therefore recovered by this method. We have explored the appearance of the light curves of all stars with $\cal S_B$ above our detection threshold and we have found convincing indications of variability in two stars in the blue straggler region. These SX Phe stars are discussed in Section~\ref{sec:newsx} along with one other candidate SX Phe star that also lies above our detection threshold in $\cal S_B$. 
The remaining stars with $\cal S_B$ values above our threshold do not show convincing light curve variability either on inspection of their light curves or when analysed with the string-length minimisation approach. If we compare this method with the others used in this paper for detecting variable stars (see sec. \ref{subs:SD}, \ref{subs:SQ}), then it becomes clear that this method is the only one that has been used to successfully detect all of the previously known and new variables in this cluster. \section{The RR Lyrae stars} \label{sec:RRLyrae} Of the three known RR Lyrae stars in the cluster, the RR1 star V3 is not in the field of our images. {\bf V1}. This is a clear fundamental mode pulsator or RR0. Our data are neatly phased with a period of 0.805012~d (Fig.~\ref{fig:V1}). The light curve was fitted with 4 Fourier harmonics (red continuous line in Fig. \ref{fig:V1}) of the form of eq.~\ref{FOUfit}. \begin{equation}\label{FOUfit} m(t) = A_0 + \sum_{k=1}^{N}{A_k \cos\left( \frac{2\pi}{P} k (t-E) + \phi_k \right)}. \end{equation} \begin{figure} \begin{center} \includegraphics[scale=0.51]{V1.eps} \caption{Light curve of the RR0 star V1 in the $V$ filter (top) and the $R$ filter (bottom) phased with the period 0.805012~d. The data point colours represent the different epochs listed in table \ref{tab:colour_code}. The red line corresponds to the Fourier fit of equation \ref{FOUfit} with four harmonics. The typical uncertainties in the $V$ and $R$ magnitudes are $\sim$0.007 and 0.005~mag, respectively.} \label{fig:V1} \end{center} \end{figure} We noted that using more than four harmonics results in over-fitting. The decomposition of the light curve in Fourier harmonics was used to estimate the iron abundance [Fe/H] and the absolute magnitude $M_V$, and hence the distance. These calculations were made using the semi-empirical calibrations available in the literature. To calculate [Fe/H] we employed the calibration of \cite{jurcsik96+01} valid for RR0 stars. \begin{equation}\label{eq:JK96} {\rm [Fe/H]}_{J} = -5.038 ~-~ 5.394~P ~+~ 1.345~\phi^{(s)}_{31}. \end{equation} The Fourier parameter $\phi^{(s)}_{31}$ comes from fitting a sine series to the light curve of star V1 rather than a cosine series as in eq.~\ref{FOUfit}. However, the corresponding cosine parameter $\phi^{(c)}_{31}$ is related by $\phi^{(s)}_{31} = \phi^{(c)}_{31} - \pi$. The above equation gives [Fe/H]$_J$ with a standard deviation from this calibration of 0.14 dex \citep{jurcsik98}. The Jurcsik metallicity scale can be transformed to the \cite{zinn84+01} scale [Fe/H]$_{ZW}$ through the relation [Fe/H]$_{\rm J}$ = 1.43 [Fe/H]$_{ZW}$ + 0.88 \citep{Jurcsik95}. We note that the deviation parameter $Dm$ \citep{jurcsik96+01} for this star when fitting higher order Fourier series is greater than the recommended value. Hence, our metallicity estimate should be treated with caution. We also calculate the metallicity on the UVES scale using the equation of \cite{carretta09+04}: \begin{equation}\label{eq:Fe.uves.carretta09} {\rm [Fe/H]}_{\rm{UVES}} = -0.413 +0.130\rm{[Fe/H]_{ZW}}-0.356\rm{[Fe/H]}^2_{\rm{ZW}}. \end{equation} From our light curve fit, we find $\phi^{(c)}_{31}=$8.987 and obtain [Fe/H]$_{\rm{ZW}}=-1.68\pm$0.10 or [Fe/H]$_{\rm{UVES}}=$-1.64$\pm$0.13. These metallicity values are in good agreement with the mean spectroscopic values of [Fe/H]=-1.82$\pm$0.05 and [Fe/H]=-1.79$\pm$0.06 determined by \cite{cohen05+01} from Fe~I and Fe~II lines, respectively, in four bright red giants in the cluster. 
The Fe abundances were derived using high resolution (R=$\lambda/\delta\lambda$=35,000) spectra obtained with HIRES at the Keck Observatory. Similarly they are in good agreement with the latest spectroscopic metallicities from \cite{saviane12+06} (see table \ref{tab:Fe.values}). Other metallicity estimates for this cluster are [Fe/H]=-1.70$\pm$0.06 from \cite{rutledge97+02} determined using moderate dispersion spectroscopy in the region of the infrared Ca triplet, [Fe/H]=-1.5$\pm$0.3 from \cite{zinn84+01} using the narrow band Q39 photometric system, and [Fe/H]=-1.34$\pm$0.25 by \cite{smith84} using the $\Delta$S method for two RR Lyraes in the cluster. Thus the metallicity estimated via the Fourier decomposition technique agrees well with the other independent estimates (see table \ref{tab:Fe.values}). \begin{table*} \caption{Metallicity estimates for NGC 7492 on the ZW scale and their respective values on the UVES scale and vice versa as found from the literature search.} \centering \begin{tabular}{cccc} \hline [Fe/H]$_{\rm{ZW}}$ & [Fe/H]$_{\rm{UVES}}$ & Reference & Method \\ \hline -1.68$\pm$0.10 &-1.64$\pm$0.13\tablefootmark{c} & This work & Fourier light-curve decomposition of the RR Lyrae stars\\ &-1.72$\pm$0.07 &\cite{saviane12+06} & Ca$_{\rm{II}}$ triplet using the FORS2 imager and spectrograph at the VLT\\ &-1.69$\pm$0.08 &\cite{saviane12+06} & Ca$_{\rm{II}}$ triplet using the FORS2 imager and spectrograph at the VLT\\ &-1.69$\pm$0.08 & \cite{carretta09+04}& Weighted average of several metallicities\tablefootmark{b}\\ -1.82$\pm$0.05 &-1.83$\pm$0.07\tablefootmark{c} &\cite{cohen05+01} & Fe$_{\rm{I}}$ line in bright red giants in this cluster\\ -1.79$\pm$0.06 &-1.79$\pm$0.08\tablefootmark{c} &\cite{cohen05+01} & Fe$_{\rm{II}}$ line in bright red giants in this cluster\\ -1.70$\pm$0.06 &-1.66$\pm$0.08\tablefootmark{c} &\cite{rutledge97+02} & Infrared Ca triplet\\ &-1.78 & \cite{harris96} &Globular cluster catalogue\tablefootmark{a}\\ -1.5$\pm$0.3 &-1.41$\pm$0.36\tablefootmark{c} & \cite{zinn84+01} & Narrow band Q39 photometric system\\ -1.34$\pm$0.25 &-1.23$\pm$0.27\tablefootmark{c} & \cite{smith84} & $\Delta$S method for two RR Lyraes in the cluster\\ \hline \hline \end{tabular} \tablefoot{\tablefoottext{a}{The catalogue version used is the updated 2010 version available at http://www.physics.mcmaster.ca/Globular.html.} \tablefoottext{b}{\cite{carretta09+04}, \cite{carretta97+01}, \cite{kraft03+01}, and the recalibration of the Q39 and W$^{''}$ indices.} \tablefoottext{c}{Converted from column 1 using Eq. \ref{eq:Fe.uves.carretta09} \citep{carretta09+04}.}} \label{tab:Fe.values} \end{table*} For the determination of the absolute magnitude of V1 we employed the calibration of \cite{kovacs01+01}, \begin{equation}\label{eq:KW01} M_V = ~-1.876~\log~P ~-1.158~A_1 ~+0.821~A_3 + K, \end{equation} \noindent which has a standard deviation of 0.04 mag. From our fit of equation \ref{FOUfit} to the light curve of V1, we derive $A_1$=0.206 and $A_3$=0.034 mag. We adopt $K$=0.41 mag in order to be consistent with a true distance modulus for the Large Magellanic Cloud (LMC) of $\mu_0$=18.5 mag \citep[see the discussion in][ in their section 4.2]{af10+02}. We obtain $M_V=$0.376$\pm$0.040 mag which is equivalent to the luminosity $\log(L/L_{\odot})=$1.762$\pm$0.016. Assuming $E(B-V)=$0.0 mag \citep{harris96}, the true distance modulus is $\mu_0=$16.927$\pm$0.040 mag, equivalent to a distance of $24.3\pm0.5$~kpc. 
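The chain from the light curve fit to metallicity and distance is compact enough to sketch in full. The fragment below (Python with {\tt numpy}) fits eq.~\ref{FOUfit} by linear least squares and then applies eq.~\ref{eq:JK96} and eq.~\ref{eq:KW01}; it is an illustration of the procedure rather than the code actually used:
\begin{verbatim}
import numpy as np

def fourier_fit(t, m, period, epoch, n_harm=4):
    # Linear least-squares fit of the Fourier series, solved in
    # the equivalent a_k cos + b_k sin basis.
    x = 2.0 * np.pi * (t - epoch) / period
    cols = [np.ones_like(x)]
    for k in range(1, n_harm + 1):
        cols += [np.cos(k * x), np.sin(k * x)]
    c, *_ = np.linalg.lstsq(np.array(cols).T, m, rcond=None)
    A = [np.hypot(c[2*k - 1], c[2*k]) for k in range(1, n_harm + 1)]
    phi = [np.arctan2(-c[2*k], c[2*k - 1])
           for k in range(1, n_harm + 1)]
    # cosine-series phases; phi31_cos = (phi[2] - 3*phi[0]) mod 2pi
    return c[0], np.array(A), np.array(phi)

def feh_zw_and_distance(P, phi31_cos, A1, A3, mean_V, K=0.41):
    phi31_s = phi31_cos - np.pi            # sine-series convention
    feh_j = -5.038 - 5.394 * P + 1.345 * phi31_s
    feh_zw = (feh_j - 0.88) / 1.43         # Jurcsik -> ZW scale
    MV = -1.876 * np.log10(P) - 1.158 * A1 + 0.821 * A3 + K
    mu0 = mean_V - MV                      # E(B-V) = 0 assumed
    return feh_zw, MV, 10.0 ** (1.0 + mu0 / 5.0) / 1000.0   # kpc

# V1: P = 0.805012 d, phi31_cos = 8.987, A1 = 0.206, A3 = 0.034,
# <V> = 17.303 reproduces [Fe/H]_ZW ~ -1.68 and d ~ 24.3 kpc.
\end{verbatim}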
\cite{cote91+02} estimated a distance of 26.18$\pm$2.41~kpc to NGC 7492 using as a reference the cluster NGC 6752, whose distance was estimated by \cite{PD86}. Our distance estimate using V1 agrees within the uncertainties. \begin{figure}[!t] \begin{center} \includegraphics[scale=0.51]{V2.eps} \caption{Light curve of the RR1 star V2 in the $V$ filter (top) and the $R$ filter (bottom) phased with the period 0.411764~d. The data point colours represent the different epochs listed in table \ref{tab:colour_code}. The typical uncertainties in the $V$ and $R$ magnitudes are $\sim$0.007 and 0.005~mag, respectively.} \label{fig:V2} \end{center} \end{figure} {\bf V2}. This RR1 star shows a complex light curve. The period derived by \cite{barnes68} of 0.292045~d fails to phase our light curve properly. Using the string-length minimisation method on our light curve, we determine a period of 0.411764~d, which produces the phased light curve shown in Fig. \ref{fig:V2}. The period is towards the upper limit for an RR1 type star. However, long periods like this are not uncommon in Oosterhoff type II clusters, which typically have a metallicity and horizontal branch morphology similar to those of NGC~7492 \citep{lee90+00,clement01}. We note that our light curve does not phase well at this period and so we searched for a second period. Since the ratio $P_1/P_0=$0.746$\pm$0.001 in RRd stars \citep{catelan09+00, cox83+02}, we expect the second period to be $\sim$0.5519~d when searching the residuals from the first period. We could only find a non-significant second period with $P_1$=0.4365~d. The light curve shows nightly amplitude changes that resemble those found by \cite{af12+04} in the majority of RR1 stars in NGC~5024. We have attempted to model the light curve with a secular period change which we have parameterised as in \cite{bramich11+03}, i.e. \begin{equation} \label{eq:chanceperiod} \phi(t)=\frac{t-E}{P(t)}-\left\lfloor \frac{t-E}{P(t)}\right\rfloor \end{equation} \begin{equation} P(t)=P_0+\beta(t-E), \end{equation} where $\phi(t)$ is the phase at time $t$, $P(t)$ is the period at time $t$, $P_0$ is the period at the epoch $E$, and $\beta$ is the rate of period change. We searched the parameter space at fixed epoch $E$, whose value is arbitrary, for the best-fitting values of $P_0$ and $\beta$, using as a criterion the minimum string-length statistic of the light curve. The search is done in a small range of periods around the previously determined best-fitting period. This is the only type of period change that we can consider modelling given our limited photometric data. We found a period of P$_0=$0.412119~d at the epoch $E=$2453284.2652~d and a period change rate of $\beta\approx$47~d~Myr$^{-1}$. The light curve phased with $\phi(t)$ from eq.~\ref{eq:chanceperiod} is shown in Fig. \ref{fig:V2_PCH}. Clearly the phased light curve is now much improved, but we still observe possible amplitude modulations, which may be due to the Blazhko effect. This period change rate is higher than other values found in the literature by a factor of two or more \citep{leborgne07+08, lee91+00, jurcsik01+03}, but given our limited data we cannot speculate on the cause. 
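The phasing with a secular period change is equally simple to reproduce; a sketch of eq.~\ref{eq:chanceperiod} in Python follows, where the only subtlety is expressing $\beta$ in days per day:
\begin{verbatim}
import numpy as np

def phase_with_period_change(t, P0, beta, epoch):
    # P(t) = P0 + beta (t - E);  phi(t) = frac((t - E) / P(t))
    P = P0 + beta * (t - epoch)
    x = (t - epoch) / P
    return x - np.floor(x)

beta = 47.0 / (1.0e6 * 365.25)   # ~47 d/Myr in days per day
# phases = phase_with_period_change(hjd, 0.412119, beta,
#                                   2453284.2652)
\end{verbatim}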
\begin{figure}[!t] \begin{center} \includegraphics[scale=0.51]{V2.chain.eps} \caption{ Same as figure \ref{fig:V2} except that now the light curve of V2 is phased with the period P$_0$=0.412119~d at the epoch $E=$2453284.2652~d with an ephemeris that includes a period change rate $\beta\approx$47~d~Myr$^{-1}$.} \label{fig:V2_PCH} \end{center} \end{figure} \begin{table} \caption{The data point colours used to mark different observing runs in Figs.~\ref{fig:V1}, \ref{fig:V2}, \ref{fig:V2_PCH}, \ref{fig:V6} and \ref{fig:V7}.} \centering \begin{tabular}{cc} \hline Dates & Colour \\ \hline 20041004 - 20041005 & Black \\ 20060801 & Red \\ 20070804 - 20070805 & Blue \\ 20070904 - 20070905 & Cyan \\ 20090107 - 20090108 & Magenta\\ 20120628 - 20120629 & Green \\ \hline \end{tabular} \label{tab:colour_code} \end{table} \section{Long period variables} \label{sec:newRGB} {\bf V4}. This red giant variable, discovered by \cite{barnes68}, clearly stands out as a variable in Figs. \ref{fig:rms} and \ref{fig:SQ}. \cite{barnes68} estimated a period of 17.9 days but pointed out that the observations did not cover the whole period. Our data set for this star consists of 119 $V$ filter epochs distributed over a baseline of 8 years (top panel of Fig. \ref{fig:RDG_VAR}). Thus our data are less than ideal for estimating an accurate period. Nevertheless, using the {\tt Period04} program \citep{period04}, we find a period of $\sim$21.7~d. In the $V$ versus $(V-I)$ diagram (not shown in this paper), the star is situated in the upper region of the red giant branch. Exploring the Catalogue of Variable Stars in Galactic Globular Clusters \citep{clement01} one finds LBs (slow irregular variables of types K, M, C and S; see the General Catalog of Variable Stars \citep{kholopov96+09} for classifications of variables) with periods of 13-20~d and amplitudes 0.1-0.4 mag. See for example V8 and V10 in NGC 2419. See also V109 in NGC 5024, listed as a semi-regular variable with a period of 21.93~d and amplitude of 0.05 mag. Our data for V4 are consistent with its classification as a long period variable. {\bf V5}. From Figs. \ref{fig:rms} and \ref{fig:SQ} we discover this new long period variable. Its light curve is shown in the bottom panel of Fig. \ref{fig:RDG_VAR}. This star is saturated in our $R$ and $I$ images and hence we have not been able to plot it in the CMD and determine its classification (e.g. as a red giant). It is evident from the light curve that the star undergoes a long term dimming. We note that for both V4 and V5, the formal uncertainties on the data points are typically $\sim$0.001 mag. However, these stars are very bright and their light curves suffer from systematic errors that correlate during the nightly observations, leading to a relatively large intra-night scatter (see Fig. \ref{fig:RDG_VAR}). \begin{figure}[htp] \begin{center} \includegraphics[scale=0.51]{V4V5.eps} \caption{Light curves of the variables V4 and V5 in the $V$ filter.} \label{fig:RDG_VAR} \end{center} \end{figure} \section{SX Phoenicis stars and candidates} \label{sec:newsx} We have discovered two new SX Phe stars which we label V6 and V7, and one candidate SX Phe star which we label CSX1. {\bf V6}. This variable was found above our detection threshold for the $\cal S_B$ statistic in section \ref{subs:SB} (see figure \ref{fig:alarm}). In the CMD it is placed well inside the blue straggler region (see figure \ref{fig:CMD}). In fact, this star is a blue straggler as found by \cite{cote91+02}. 
We analysed the $V$ light curve with {\tt Period04} and found a clear frequency at 17.683477~cycles~d$^{-1}$ (or a period of 0.0565500~d). We did not find any further significant frequencies. Based on the blue straggler status and the detected period, we can be sure that this is an SX Phe star. In figure \ref{fig:V6}, we present the phased light curve in the $V$ and $R$ filters. We overplot the best fit sine curve as a solid black line. As expected, the $R$ filter light curve shows variations with the same period and phase as the $V$ filter light curve but with smaller amplitude. There is a hint that the amplitude of V6 changed between different observing runs (compare the black points from 2004 with the green points from 2012). There are previous studies about SX Phe stars that show period change and also amplitude change. See for example figure 25 and section 4.2 in \cite{nemec95+03} and also section 5.2 in \cite{af10+02}. SX Phe stars are well known as distance indicators through their P-L relation \citep[e.g.][]{jeon03+03}. By adopting the P-L relation for the fundamental mode recently calculated by \cite{cohen12+01} for a sample of 77 double mode SX Phe stars in Galactic globular clusters, which is of the form $M_{V}=-1.640(\pm0.110)-3.389(\pm0.090)\log(P_{f})$, we may calculate $M_V=$2.588$\pm$0.157~mag for V6 assuming that it is pulsating in the fundamental mode. Given this, and assuming $E(B-V)=$0.0~mag, we obtain a true distance modulus $\mu_0=$16.644$\pm$0.157~mag, which translates to a distance of $\sim$21.3$\pm$1.5~kpc. Hence, if V6 is pulsating in the fundamental mode, it cannot be a cluster member. However, if we assume that V6 is pulsating in the first overtone (1H), then we may ``fundamentalise'' the detected frequency by multiplying it by the frequency ratio $f_1/f_2=$0.783 \citep[see][]{santolomazza01+05,jeon03+03, poretti05+15}. Using the \cite{cohen12+01} P-L relation as before, we obtain $M_V=$2.228$\pm$0.149~mag, $\mu_0=$17.004$\pm$0.149~mag, and a distance of $\sim$25.2$\pm$1.8~kpc. Hence, if V6 is pulsating in the first overtone, then it is most likely a cluster member. Unfortunately, without detecting two frequencies in the light curve of V6, we cannot further speculate on the pulsation mode of this star. {\bf V7}. Again, this variable was found above our detection threshold for the $\cal S_B$ statistic. In the CMD, it lies on the RGB at the edge of the blue straggler region. We analysed the $V$ light curve with {\tt Period04} and found a candidate frequency at 13.776775~cycles~d$^{-1}$ (or a period of 0.072586~d). We did not find any further significant frequencies. In figure \ref{fig:V7}, we present the phased light curve in the $V$ and $R$ filters along with the best fit sine curve in $V$ (solid black curve). The variations at an amplitude of $\sim$0.05~mag are barely visible in the phased $V$ light curve. In order to quantify our classification of V7 as a variable with the detected period, we calculate the improvement in chi-squared $\Delta \chi^{2}$ when fitting the sine curve compared to a constant magnitude. Under the null hypothesis that the light curve is not variable, the $\Delta \chi^{2}$ statistic follows a chi-squared distribution with two degrees of freedom. We set our threshold for rejection of the null hypothesis at 1\%, which is equivalent to $\Delta \chi^{2} \ga 9.21$. The $V$ light curve of V7 has $\Delta \chi^{2} \approx 14.40$ which supports our conclusion that it is variable. 
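The P-L distance estimates and the $\Delta \chi^{2}$ criterion used above are simple to re-evaluate; the sketch below (Python with {\tt numpy} and {\tt scipy}) only re-applies the published calibration and the standard chi-squared threshold:
\begin{verbatim}
import numpy as np
from scipy.stats import chi2

def sx_phe_distance(freq, mean_V, first_overtone=False):
    # P-L relation quoted above: M_V = -1.640 - 3.389 log10(P_f).
    # A first-overtone frequency is "fundamentalised" with the
    # ratio 0.783 before applying the relation.
    f = freq * 0.783 if first_overtone else freq
    MV = -1.640 - 3.389 * np.log10(1.0 / f)
    mu0 = mean_V - MV                       # E(B-V) = 0 assumed
    return MV, 10.0 ** (1.0 + mu0 / 5.0) / 1000.0   # kpc

print(sx_phe_distance(17.683477, 19.235))        # V6, F:  ~21.3 kpc
print(sx_phe_distance(17.683477, 19.235, True))  # V6, 1H: ~25.2 kpc

def delta_chi2(mags, errs, sine_model):
    # Improvement of the sine fit over a constant magnitude; under
    # the non-variable null hypothesis this statistic follows
    # chi-squared with two degrees of freedom.
    mean = np.average(mags, weights=1.0 / errs ** 2)
    return (np.sum(((mags - mean) / errs) ** 2)
            - np.sum(((mags - sine_model) / errs) ** 2))

threshold = chi2.isf(0.01, df=2)   # ~9.21, the 1% rejection level
\end{verbatim}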
We note that in the $R$ filter, the light curve is not detected as showing variability by the $\Delta \chi^{2}$ test. Using the \cite{cohen12+01} P-L relation for the detected frequency, we obtain $M_V=$2.221$\pm$0.152~mag, $\mu_0=$17.142$\pm$0.152~mag, and a distance of $\sim$26.8$\pm$1.8~kpc, which is consistent with the distance to the cluster. Considering all the evidence we have discussed, we classify this star as an SX Phe star that is most likely a cluster member pulsating in the fundamental mode. {\bf CSX1}. This star was detected above the threshold for the $\cal S_B$ statistic, and it lies in the blue straggler region in the CMD. We searched for frequencies in the $V$ light curve using {\tt Period04}, but we found no clear peaks. However, on our inspection of the light curve on individual nights, we found clear cyclical variations on the time scale of $\sim$1~hour (see figure \ref{fig:CSXPhe}). Since we have been unable to detect a pulsation frequency, we classify this star as a candidate SX Phe for which follow-up observations would be desirable. \begin{figure}[!t] \begin{center} \includegraphics[scale=0.51]{V6.eps} \caption{Light curve of the newly discovered SX Phe star V6 in the $V$ filter (top) and $R$ filter (bottom) phased with the period 0.0565500~d. The data point colours represent the different epochs listed in Table \ref{tab:colour_code}. The solid black curves represent the best fit sine curves at the phasing period. The typical uncertainties in the $V$ and $R$ magnitudes are $\sim$0.02~mag.} \label{fig:V6} \end{center} \end{figure} \begin{figure}[!t] \begin{center} \includegraphics[scale=0.51]{V7.eps} \caption{Light curve of the newly discovered SX Phe star V7 in the $V$ filter (top) and $R$ filter (bottom) phased with the period 0.0725859~d. The data point colours represent the different epochs listed in Table \ref{tab:colour_code}. The solid black curve represents the best fit sine curve at the phasing period in the $V$ filter. The typical uncertainties in the $V$ and $R$ magnitudes are $\sim$0.02~mag.} \label{fig:V7} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[scale=0.465]{CSX1.eps} \caption{Light curve of the candidate SX Phe star CSX1. Black points correspond to the $V$ magnitudes and red points correspond to the $R$ magnitudes (which are shifted in mean magnitude to match the mean magnitude of the $V$ data). Some data points fall outside of the plot magnitude range.} \label{fig:CSXPhe} \end{center} \end{figure} Periods and amplitudes for the two new SX Phe and one candidate are given in Table \ref{tab:SXPHE}. They are also labelled in the CMD of Fig. \ref{fig:CMD} and their equatorial coordinates (J2000) are listed in Table \ref{tab:astrom}. We note that the $(V-R)$ colours of V6 and V7, converted to $(B-V)$ using the colour transformations of \cite{VandenBerg03+01}, are consistent (within 1$\sigma$) with the $(B-V)$ colour-period relation of \cite{McNamara11}. \begin{table} \caption{Detected pulsation frequencies for the new SX Phe variables discovered in NGC 7492. 
The numbers in parentheses indicate the uncertainty on the last decimal place.} \centering \begin{tabular}{lccccl} \hline ID & $A_0$\tablefootmark{a} & Label & Frequency & $A_V$\tablefootmark{b} & Mode \\ &($V$ mag) & &(d$^{-1}$) & (mag) & \\ \hline V6 & 19.235(4) & $f_1$ &17.683477(13)& 0.123(10)& $1H$?\tablefootmark{c} \\ V7 & 19.363(4) & $f_1$ &13.776775(32)& 0.030(10) & $F$?\tablefootmark{c}\\ \hline \end{tabular} \tablefoot{\tablefoottext{a}{Mean $V$ magnitude $A_0$} \tablefoottext{b}{Full amplitude $A_V$ in the $V$ filter} \tablefoottext{c}{If we assume that these stars are cluster members, then they are likely to be pulsating in the suggested mode.}} \label{tab:SXPHE} \end{table} \section{Conclusions} \label{sec:Concl} Precise time series differential CCD $V, R, I$ photometry with a baseline of $\sim$8~years has been performed to detect brightness variations in stars with $14.0<V<19.5$~mag in the field of NGC~7492. We found the following: \begin{enumerate} \item We identified one new long period variable (V5) and two SX Phe stars (V6 and V7). We present one candidate SX Phe (CSX1), which requires more data of high precision to finally establish its nature. \item With the $\cal S_B$ variability statistic, it was possible to recover all previously known variables and also to find the new variables presented in this work. \item Our photometric precision at the magnitude of the horizontal branch, combined with the consideration of the CMD, allows us to be confident that there are no undetected RR Lyrae cluster members in the field of view of our images. \item For the RR0 V1, we improve the period estimate and perform a Fourier analysis to estimate a cluster metallicity of [Fe/H]$_{\rm{ZW}}=$-1.68$\pm$0.10 or [Fe/H]$_{\rm{UVES}}=$-1.64$\pm$0.13 and a distance of $\sim$24.3$\pm$0.5~kpc. \item We found that the RR1 star V2 is undergoing a period change at a rate of $\beta\approx$47~d~Myr$^{-1}$. We also found tentative evidence for the presence of the Blazhko effect in the light curve. \item By assuming that the SX Phe stars are cluster members (which is consistent with their position in the CMD), we have used the SX Phe P-L relation to speculate on the mode of oscillation of each star. We also obtain independent distance estimates to the cluster of $\sim$25.2$\pm$1.8 and $\sim$26.8$\pm$1.8~kpc. \item The cluster metallicity and distance estimates that we derive in this paper are all consistent with previous estimates in the literature. \end{enumerate} \begin{acknowledgements} AAF acknowledges financial support from a DGAPA-UNAM grant through project IN104612. We are thankful to the CONACyT (M\'exico) and the Department of Science and Technology (India) for financial support under the Indo-Mexican collaborative project DST/INT/MEXICO/RP001/2001. We thank the staff at IAO and at the remote control station at CREST, Hosakote for assistance during the observations. This work has made extensive use of the SIMBAD and ADS services, for which we are thankful. \end{acknowledgements}
\section{Introduction} The classical momentum map for an action of a Lie group on a Poisson manifold provides a mathematical formalization of the notion of conserved quantity associated to symmetries of a dynamical system. The standard definition of momentum map only requires a canonical Lie algebra action, and its existence is guaranteed whenever the infinitesimal generators of the Lie algebra action are Hamiltonian vector fields (modulo vanishing of a certain Lie algebra cohomology class). In this paper we focus on a generalization of the momentum map provided by Lu \cite{Lu3}, \cite{Lu1}. The detailed construction of this generalized momentum map and its basic properties are recalled in the following section. The basic structure is as follows. Given a Poisson Lie group $(G,\pi_G )$ one introduces the dual Poisson Lie group $(G^*,\pi_{G^*})$ and, under fairly general conditions, $G^*$ carries a Poisson action of $G$ (and vice versa). The Lie algebra ${\mathfrak g}$ of $G$ is naturally identified with the space of $G^*$-(left-)invariant one-forms on $G^*$: $$ \alpha : {\mathfrak g}\to \Omega^1 (G^* )^{G^*}. $$ Given a Poisson manifold $(M,\pi)$ with a Poisson action of $G$, a momentum map is a smooth Poisson map $$ \boldsymbol{\mu} : M \to G^* $$ satisfying $$ X_\xi = \pi^\sharp (\boldsymbol{\mu}^*(\theta_{\xi})), $$ where $\xi\mapsto X_\xi$ is the map ${\mathfrak g} \to Vect (M)$ induced by the action of $G$ on $M$. A canonical example of a momentum map is the identity map $G^*\to G^*$, in which case $\alpha$ coincides with the Maurer-Cartan form $\theta \in \Omega^1 (G^*, {\mathfrak g}^* )^{G^*}$ of the Lie group $G^*$. The Poisson structure on $G$ gives its Lie algebra a structure of a Lie bialgebra $({\mathfrak g},[\cdot,\cdot], \delta )$ and hence a structure of Gerstenhaber algebra on $\wedge^{\bullet}{\mathfrak g}$. On the other hand, the Poisson bracket on $M$ gives $\Omega^{\bullet}(M)[1]$ a structure of Lie algebra with bracket $[\cdot,\cdot]_{\pi}$, which induces a structure of Gerstenhaber algebra on $\Omega^{\bullet}(M)$. The map $\alpha$ from above lifts to a morphism of Gerstenhaber algebras $$ \alpha : (\wedge^{\bullet}\mathfrak{g} ,\delta, [\;,\;])\longrightarrow (\Omega^{\bullet} (M), d_{\mathrm{dR}},[\;,\;]_\pi ), $$ which we will call an {\em infinitesimal momentum map} (cf. subsection \ref{sub: structure} and Proposition \ref{prop:gerst}). The existence of the infinitesimal momentum map has been discussed in \cite{Gi1}. The main subject of this paper is the study of the properties of this infinitesimal momentum map and its relation to the usual momentum map. In particular, we show under which conditions it integrates to the usual momentum map. The fact that $\alpha$ is a morphism of Gerstenhaber algebras reduces to two equations, $$ \alpha_{[\xi,\eta]}=[\alpha_{\xi},\alpha_\eta ]_{\pi} \quad\mbox{ and }\quad d\alpha_{\xi} +\frac{1}{2}\alpha\wedge\alpha\circ\delta(\xi)=0 . $$ The second is a Maurer-Cartan type equation; in fact, in the case when $M=G^*$ it is precisely the Maurer-Cartan equation for the Lie group $G^*$. In the case when $\Omega^\bullet (M)$ is formal, the second equation admits an explicit solution modulo gauge equivalence (cf. Theorem \ref{thm:formal}). \medskip \noindent {\bf Theorem} {\em Suppose that $M$ is a K\"{a}hler manifold.
The set of gauge equivalence classes of $\alpha\in \Omega^1 (M,\mathfrak{g}^*)$ satisfying the equation \begin{equation} d\alpha_{\xi} +\frac{1}{2}\alpha\wedge\alpha\circ\delta(\xi)=0 \end{equation} is in bijective correspondence with the set of the cohomology classes $c\in H^1(M,\mathfrak{g}^*)$ satisfying \begin{equation} [c,c]=0. \end{equation}} \medskip The following describes conditions under which an infinitesimal momentum map integrates to the usual momentum map (cf. Theorem \ref{thm: rec} for the details). \medskip \noindent {\bf Theorem }{\em Let $(M,\pi )$ be a Poisson manifold and $\alpha : {\mathfrak g}\to \Omega^1 (M)$ an infinitesimal momentum map. Suppose that $M$ and $G$ are simply connected and $G$ is compact. Then ${\mathcal D} = \{ \alpha_{\xi}-\theta_{\xi},\ \xi\in {\mathfrak g}\}$ generates an involutive distribution on $M \times G^*$, and a leaf $\mathcal{F}$ of $\mathcal D$ is the graph of a momentum map $\boldsymbol{\mu}_{\mathcal{F}}$ if \begin{equation} \pi (\alpha_{\xi} ,\alpha_{\eta})-\pi_{G^*}(\theta_{\xi},\theta_{\eta})|_{\mathcal F} = 0 , \quad \xi ,\eta \in {\mathfrak g} . \end{equation} } \medskip In Section \ref{sec: rec} we study concrete cases of this globalization question and prove the existence and uniqueness/nonuniqueness of a momentum map associated to a given infinitesimal momentum map in the particular cases when the dual Poisson Lie group is abelian and when it is the Heisenberg group. For the second case the result is as follows (cf. Theorem \ref{thm:heisenberg}). \medskip \noindent {\bf Theorem} {\em Let $G$ be a Poisson Lie group acting on a Poisson manifold $M$ with an infinitesimal momentum map $\alpha$ and such that $G^*$ is the Heisenberg group. Let $\xi,\eta,\zeta$ denote the basis of $\mathfrak{g}$ dual to the standard basis $x,y,z$ of $\mathfrak{g}^*$, with $z$ central and $[x,y]=z$. Then \begin{equation} \pi(\alpha_{\xi}, \alpha_{\eta})=c , \end{equation} where $c$ is a constant on $M$. The form $\alpha$ lifts to a momentum map $\boldsymbol{\mu}:M\rightarrow G^*$ if and only if $c=0$. When $c=0$ the set of momentum maps with given $\alpha$ is one dimensional with a free transitive action of $\mathbb{R}$. } \medskip Finally, in the last section, we study the question of infinitesimal deformations of a given momentum map. The main result is Theorem \ref{thm: idef}, which describes explicitly the space tangent to the space of momentum maps at a given point. It can be formulated as the statement that the space of momentum maps has the structure of a flat manifold (in an appropriate $C^\infty$ topology). \medskip \noindent {\bf Theorem} {\em Infinitesimal deformations of a momentum map are given by smooth maps $H:M\rightarrow {\mathfrak g}^*$ satisfying, for all $\xi, \eta \in \mathfrak g$, the equations \begin{align} X_{\xi} H (\eta)-X_{\eta} H (\xi) & = H( [\xi,\eta]),\\ \{ H(\xi),\ \cdot \} & = -X_{ad^*_H\xi}. \end{align} } This theorem has the following corollary (cf. Corollary \ref{cor:unique}). \medskip \noindent{\bf Corollary} {\em Suppose that $G$ is a compact and semisimple Poisson Lie group with a Poisson action on a Poisson manifold $M$ and with a momentum map $\boldsymbol \mu$. Any smooth deformation of $\boldsymbol \mu$ is given by integrating a Hamiltonian flow on $M$ commuting with the action of $G$.} \section{Preliminaries: Poisson actions and Momentum maps}\label{sec: pam} In this section we give a brief summary of the notions of Poisson action and momentum map in the Poisson context.
We discuss the dressing transformations as an example of Poisson actions, which will allow us to introduce the concept of Hamiltonian action. Recall that a Poisson Lie group $(G,\pi_G)$ is a Lie group equipped with a multiplicative Poisson structure $\pi_G$. By the Drinfeld theorem \cite{Drinfeld}, given a Poisson Lie group $(G,\pi_G)$, the linearization of $\pi_G$ at $e$ defines a Lie algebra structure on $\mathfrak{g}^*$ such that $(\mathfrak{g},\mathfrak{g}^*)$ forms a Lie bialgebra. For this reason, in the following we always assume that $G$ is connected and simply connected. \begin{definition} The action of $(G,\pi_G)$ on $(M,\pi)$ is called a \textbf{Poisson action} if the map $\Phi:G\times M\rightarrow M$ is Poisson, where $G\times M$ carries the product Poisson structure $\pi_G\oplus\pi$. \end{definition} Given an action $\Phi:G\times M\to M$, we denote by $\xi\mapsto X_{\xi}$ the Lie algebra anti-homomorphism from $\mathfrak{g}$ to the vector fields on $M$ which assigns to $\xi$ the infinitesimal generator of this action. \begin{proposition} Assume that $(G,\pi_G)$ is a connected Poisson Lie group. Then the action $\Phi: G\times M\to M$ is a Poisson action if and only if \begin{equation}\label{eq: pa} \mathcal{L}_{X_{\xi}}(\pi) = (\Phi\wedge\Phi)(\delta(\xi)), \end{equation} for any $\xi\in\mathfrak{g}$, where $\delta=d_e\pi_G:\mathfrak{g}\to \mathfrak{g}\wedge \mathfrak{g}$ is the derivative of $\pi_G$ at $e$. \end{proposition} The proof of this proposition can be found in \cite{LW}. Motivated by this fact, we introduce the following definition. \begin{definition} A Lie algebra action $\xi\mapsto X_{\xi}$ is called an \textbf{infinitesimal Poisson action} of the Lie bialgebra $(\mathfrak{g},\delta)$ on $(M,\pi)$ if it satisfies eq. (\ref{eq: pa}). \end{definition} In this formalism the definition of momentum map reads (Lu, \cite{Lu3}, \cite{Lu1}): \begin{definition}\label{def: mm} A \textbf{momentum map} for the Poisson action $\Phi:G\times M\to M$ is a map $\boldsymbol{\mu}: M\rightarrow G^*$ such that \begin{equation}\label{eq: mmp} X_{\xi} = \pi^{\sharp}(\boldsymbol{\mu}^*(\theta_{\xi})), \end{equation} where $\theta_{\xi}$ is the left invariant 1-form on $G^*$ defined by the element $\xi\in\mathfrak{g}=(T_e G^*)^*$ and $\boldsymbol{\mu}^*$ is the cotangent lift $T^* G^*\rightarrow T^*M$. \end{definition} \subsection{Dressing Transformations}\label{sec: dressing} One of the most important examples of a Poisson action is the dressing action of $G$ on $G^*$. Consider a Poisson Lie group $(G,\pi_G)$, its dual $(G^*,\pi_{G^*})$ and its double $\mathcal{D}$, with Lie algebras $\mathfrak{g}$, $\mathfrak{g}^*$ and $\mathfrak{d}$, respectively. Let $L_{\xi}$ be the vector field on $G^*$ defined by \begin{equation}\label{eq: idr} L_{\xi} = \pi_{G^*}^{\sharp}(\theta_{\xi}) \end{equation} for each $\xi\in\mathfrak{g}$. Here $\theta_{\xi}$ is the left invariant 1-form on $G^*$ defined by $\xi\in\mathfrak{g}=(T_eG^*)^*$. The map $\xi\mapsto L_{\xi}$ is a Lie algebra anti-homomorphism, and using the Maurer-Cartan equation for $G^*$, \begin{equation}\label{eq: mct} d\theta_{\xi} + \frac{1}{2}\theta\wedge\theta\circ\delta(\xi) = 0 , \end{equation} one shows that the action $\xi\mapsto L_{\xi}$ is an infinitesimal Poisson action of the Lie bialgebra $\mathfrak{g}$ on the Poisson Lie group $G^*$, called the left infinitesimal dressing action (see, for example, \cite{Lu3}).
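As an illustration of these notions, consider the simplest special case, which is classical and serves only as a sanity check of the definitions: suppose that the Poisson structure $\pi_G$ is trivial. Then $\delta =0$, the bracket on $\mathfrak{g}^*$ vanishes, so $G^*=\mathfrak{g}^*$ as an abelian group, and $\pi_{G^*}$ is the Lie-Poisson structure $$ \{f,g\}(z) = \langle z, [d_z f, d_z g]\rangle , \qquad z\in\mathfrak{g}^* . $$ The left invariant form $\theta_{\xi}$ is the constant form $d(ev_{\xi})$, where $ev_{\xi}(z)=z(\xi)$, and the dressing vector field (\ref{eq: idr}) acts on functions by $$ L_{\xi}g\,(z) = \langle z, [\xi , d_z g]\rangle , $$ i.e. (up to the sign convention for $ad^*$) the dressing action is the coadjoint action. Moreover, $\boldsymbol{\mu}^*(\theta_{\xi}) = d\langle \boldsymbol{\mu},\xi\rangle$, so the condition (\ref{eq: mmp}) reduces to the classical momentum map condition $X_{\xi} = \pi^{\sharp}(d\langle \boldsymbol{\mu},\xi\rangle )$.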
Similarly, the right infinitesimal dressing action of $\mathfrak{g}$ on $G^*$ is defined by $R_{\xi}=-\pi_{G^*}^{\sharp}(\theta_{\xi})$, where $\theta_{\xi}$ is now the right invariant 1-form on $G^*$. Let $L_{\xi}$ (resp. $R_{\xi}$) be a left (resp. right) dressing vector field on $G^*$. If all the dressing vector fields are complete, we can integrate the $\mathfrak{g}$-action into a Poisson $G$-action on $G^*$, called the \textbf{dressing action}; its elements are called dressing transformations. The orbits of the dressing action are precisely the symplectic leaves in $G^*$ (see \cite{We1}, \cite{Lu3}). The momentum map for the dressing action of $G$ on $G^*$ is the opposite of the identity map from $G^*$ to itself. \begin{definition} A multiplicative Poisson tensor $\pi$ on $G$ is complete if each left (equivalently, right) dressing vector field is complete on $G$. \end{definition} It has been proved in \cite{Lu3} that a Poisson Lie group is complete if and only if its dual Poisson Lie group is complete. Assume that $G$ is a complete Poisson Lie group. We denote the left (resp. right) dressing action of $G$ on its dual $G^*$ by $g\mapsto L_g$ (resp. $g\mapsto R_g$). \begin{definition} A momentum map $\boldsymbol{\mu}:M\rightarrow G^*$ for a left (resp. right) Poisson action $\Phi$ is called \textbf{$G$-equivariant} if it is equivariant with respect to the left (resp. right) dressing action of $G$ on $G^*$, that is, $\boldsymbol{\mu}\circ \Phi_g = L_g\circ \boldsymbol{\mu}$ (resp. $\boldsymbol{\mu}\circ \Phi_g = R_g\circ \boldsymbol{\mu}$). \end{definition} A momentum map is $G$-equivariant if and only if it is a Poisson map, i.e. $\boldsymbol{\mu}_*\pi=\pi_{G^*}$. Given this generalization of the concept of equivariance introduced for Lie group actions, it is natural to call a Poisson action induced by an equivariant momentum map a \textbf{Hamiltonian action}. \section{The infinitesimal momentum map}\label{sec: uni} In this section we study the conditions for the existence and the uniqueness of the momentum map. In particular, we give a new definition of the momentum map, called infinitesimal, in terms of one-forms, and we study the conditions under which the infinitesimal momentum map determines a momentum map in the usual sense. We describe the theory of reconstruction of the momentum map from the infinitesimal one in two explicit cases. Finally, we provide the conditions which ensure the uniqueness of the momentum map. \subsection{The structure of a momentum map} \label{sub: structure} Recall that, for the Poisson Lie group $G^*$, we identify $\mathfrak{g}$ with the space of left invariant 1-forms on $G^*$; this space is closed under the bracket defined by $\pi_{G^*}$ and, under this identification, the induced bracket on $\mathfrak{g}$ coincides with the original Lie bracket on $\mathfrak{g}$ (see \cite{We}). \begin{proposition} Let $\theta_{\xi},\theta_{\eta}$ be two left invariant 1-forms on $G^*$ such that $\theta_\xi(e)=\xi$ and $\theta_{\eta}(e)=\eta$. Then \begin{equation}\label{eq: theta} \theta_{[\xi,\eta]} = [\theta_{\xi},\theta_{\eta}]_{\pi_{G^*}} \end{equation} and \begin{equation}\label{eq: theta2} \mathcal{L}_X\pi_{G^*}(\theta_{\xi},\theta_{\eta}) = x([\xi,\eta]) + \pi_{G^*}(\theta_{ad^*_x \xi},\theta_{\eta}) + \pi_{G^*}(\theta_{\xi},\theta_{ad^*_x\eta}) . \end{equation} \end{proposition} \begin{proof} Let us consider an element $x\in\mathfrak{g}^*$ and the corresponding left invariant vector field $X$ on $G^*$.
Recall that, given a Poisson manifold, the Poisson structure always induces a Lie bracket on the space of one-forms on the manifold (see \cite{V}) by \begin{equation}\label{eq: bff} [\alpha,\beta]_{\pi} = \mathcal{L}_{\pi^{\sharp}(\alpha)}\beta-\mathcal{L}_{\pi^{\sharp}(\beta)}\alpha-d(\pi(\alpha,\beta)). \end{equation} Using this explicit formula for $[\theta_{\xi},\theta_{\eta}]_{\pi_{G^*}}$ we can see that \begin{equation} \iota_X [\theta_{\xi},\theta_{\eta}]_{\pi_{G^*}} = (\mathcal{L}_X\pi_{G^*})(\theta_{\xi},\theta_{\eta}). \end{equation} This proves that $[\theta_{\xi},\theta_{\eta}]_{\pi_{G^*}}$ is a left invariant 1-form. In particular, since $(\mathcal{L}_X\pi_{G^*})(e) = {}^{t}\delta(x)$, eq. (\ref{eq: theta}) is proved\footnote{This relation has already been claimed in \cite{YK}}. Moreover, we have \begin{equation} \begin{split} \mathcal{L}_X\pi_{G^*}(\theta_{\xi},\theta_{\eta}) & =(\mathcal{L}_X\pi_{G^*})(\theta_{\xi},\theta_{\eta}) + \pi_{G^*}(\mathcal{L}_X\theta_{\xi}, \theta_{\eta}) + \pi_{G^*}(\theta_{\xi},\mathcal{L}_X\theta_{\eta})\\ & = {}^{t}\delta(x)(\xi,\eta) + \pi_{G^*}(\theta_{ad^*_x \xi},\theta_{\eta}) + \pi_{G^*}(\theta_{\xi},\theta_{ad^*_x\eta}), \end{split} \end{equation} since $\mathcal{L}_X\theta_{\xi} = \theta_{ad^*_x \xi}$. From ${}^{t}\delta(x)(\xi,\eta) = x([\xi,\eta])$, eq. (\ref{eq: theta2}) follows. \end{proof} As a direct consequence, recalling that the pullback and the differential commute and using the equivariance of the momentum map, we have the following proposition: \begin{proposition}\label{prop:gerst} Given a Poisson action $\Phi:G\times M\to M$ with equivariant momentum map $\boldsymbol{\mu}:M\to G^*$, the forms $\alpha_{\xi}=\boldsymbol{\mu}^*(\theta_{\xi})$ satisfy the following identities: \begin{align} \label{eq: ah} \alpha_{[\xi,\eta]} & = [\alpha_{\xi},\alpha_\eta ]_{\pi}\\ \label{eq: mca} d\alpha_{\xi} & + \frac{1}{2}\alpha\wedge\alpha\circ\delta(\xi)=0 \end{align} \end{proposition} This motivates the following definition. \begin{definition}\label{def: inf} Let $M$ be a Poisson manifold and $G$ a Poisson Lie group. An \textbf{infinitesimal momentum map} is a morphism of Gerstenhaber algebras \begin{equation}\label{eq: imm} \alpha : (\wedge^{\bullet}\mathfrak{g} ,\delta, [\;,\;])\longrightarrow (\Omega^{\bullet} (M), d_{\mathrm{dR}},[\;,\;]_\pi ). \end{equation} \end{definition} The following theorem describes the conditions under which an infinitesimal momentum map determines a momentum map in the usual sense. \begin{theorem}\label{thm: rec} Let $(M,\pi )$ be a Poisson manifold and $\alpha:{\mathfrak g}\to \Omega^1 (M)$ a linear map which satisfies the conditions (\ref{eq: ah})-(\ref{eq: mca}). Then: \begin{itemize} \item[(i)] The set $\{ \alpha_{\xi}-\theta_{\xi},\, \xi\in {\mathfrak g}\}$ generates an involutive distribution $\mathcal D$ on $M \times G^*$. \item[(ii)] If $M$ is connected and simply connected, the leaves $\mathcal{F}$ of $\mathcal D$ coincide with the graphs of the maps $\boldsymbol{\mu}_{\mathcal{F}}: M\to G^*$ satisfying $\alpha = \boldsymbol{\mu}^*_{\mathcal{F}} (\theta)$, and $G^*$ acts freely and transitively on the space of leaves by left multiplication on the second factor. \item[(iii)] The vector fields $\pi^{\sharp} (\alpha_{\xi})$ define a homomorphism from $\mathfrak{g}$ to the vector fields on $M$. If they integrate to an action $\Phi : G\times M\to M$ (e.g.
when $M$ is compact and $G$ is simply connected), then $\Phi$ is a Poisson action and $\boldsymbol{\mu}_{\mathcal{F}}$ is its momentum map if and only if the functions \begin{equation}\label{eq:vp} \varphi (\xi,\eta) = \pi (\alpha_{\xi} ,\alpha_{\eta}) - \pi_{G^*}(\theta_{\xi},\theta_{\eta}) \end{equation} satisfy \begin{equation}\label{eq:vanishing} \varphi (\xi,\eta)\vert_{\mathcal{F}}=0 \end{equation} for all $\xi,\eta\in \mathfrak{g}$. \end{itemize} \end{theorem} \begin{proof} \begin{itemize} \item[(i)] Using eqs. (\ref{eq: mct}) and (\ref{eq: mca}), the $\mathfrak{g}$-valued form $\alpha -\theta$ on $M\times G^*$ satisfies $ d(\alpha -\theta ) = (\alpha -\theta )\wedge (\alpha -\theta ); $ as a consequence, by the Frobenius theorem, it defines an involutive distribution on $M\times G^*$. Let $\mathcal{F}$ be any of its leaves and let $p_i$, $i=1,2$, denote the projection onto the first (resp. second) factor in $M\times G^*$. Since the linear span of the $\theta_{\xi}$, $\xi\in \mathfrak{g}$, at any point $u\in G^*$ coincides with $T_u^*{G^*}$, the restriction of the projection $p_1 : M\times G^* \to M$ to $\mathcal{F}$ is an immersion. Finally, since $dim(M) = dim(\mathcal{F})$, $p_1|_{\mathcal{F}}$ is a covering map. \item[(ii)] Under the hypothesis that $M$ is simply connected, $p_1|_{\mathcal{F}}$ is a diffeomorphism and $$ \boldsymbol{\mu}_{\mathcal{F}} = p_2 \circ p_1^{-1} $$ is a smooth map whose graph coincides with $\mathcal{F}$. It is immediate that $\alpha =\boldsymbol{\mu}_{\mathcal{F}}^* (\theta)$. Moreover, since the $\theta$'s are left invariant, it follows immediately that the action of $G^*$ on the space of leaves by left multiplication on the second factor is free and transitive. \item[(iii)] Suppose that the condition (\ref{eq:vanishing}) is satisfied. Then $$ \pi (\alpha_{\xi},\alpha_{\eta}) = \boldsymbol{\mu}^*_{\mathcal{F}} (\pi_{G^*}(\theta_{\xi},\theta_{\eta})) $$ and $Ker {\boldsymbol{\mu}_{\mathcal{F}}}_*$ coincides with the set of zeros of the $\alpha_{\xi},\ {\xi}\in \mathfrak{g}$. Hence, $\boldsymbol{\mu}_{\mathcal{F}}$ is a Poisson map and, in particular, $$ {\boldsymbol{\mu}_{\mathcal{F}}}_* ( \pi^{\sharp} (\alpha_{\xi})) = \pi_{G^*}^\sharp (\theta_{\xi}), $$ i.e. it is a $G$-equivariant map. \end{itemize} \end{proof} The second of the equations in Proposition \ref{prop:gerst}, i.e. the equation \begin{equation}\label{eq:MC} d\alpha_{\xi} + \frac{1}{2}\alpha \wedge \alpha \circ \delta(\xi) = 0 , \end{equation} can be solved explicitly in the case when $M$ is a K\"{a}hler manifold. Before stating the result, we need to introduce the concept of gauge equivalence of solutions of (\ref{eq:MC}): \begin{definition} Two solutions $\alpha$ and $\alpha'$ of eq. (\ref{eq:MC}) are said to be \textbf{gauge equivalent} if there exists a smooth function $H:M\rightarrow \mathfrak{g}^*$ such that \begin{equation} \alpha' = \exp(ad H)(\alpha)+\int_0^1 dt \exp t(ad H)(dH) . \end{equation} \end{definition} \begin{theorem}\label{thm:formal} Suppose that $M$ is a K\"{a}hler manifold. The set of gauge equivalence classes of $\alpha\in \Omega^1 (M,\mathfrak{g}^*)$ satisfying the equation \begin{equation} d\alpha_{\xi} +\frac{1}{2}\alpha\wedge\alpha\circ\delta(\xi)=0 \end{equation} is in bijective correspondence with the set of the cohomology classes $c\in H^1(M,\mathfrak{g}^*)$ satisfying \begin{equation} [c,c]=0. \end{equation} \end{theorem} \begin{proof} Since $M$ is a K\"{a}hler manifold, $(\Omega^{\bullet}(M),d)$ is a formal CDGA (commutative differential graded algebra) \cite{GDMS}.
As a consequence, \begin{equation} \left( \Omega^{\bullet} (M)\otimes {\mathfrak g}^* ,d,[\cdot ,\cdot ]\right) \end{equation} is a formal DGLA and, in particular, there exists a bijection between the gauge equivalence classes of Maurer-Cartan elements of $(\Omega^{\bullet} (M)\otimes {\mathfrak g}^*, d ,[\cdot ,\cdot ])$ and those of $(H_{dR}^{\bullet} (M)\otimes {\mathfrak g}^*,[\cdot ,\cdot ])$. A Maurer-Cartan element in $(H_{dR}^{\bullet} (M)\otimes {\mathfrak g}^*,[\cdot ,\cdot ])$ is an element $c$ in $H^1(M,\mathfrak{g}^*)$ satisfying \begin{equation} [c,c]=0, \end{equation} and the claim is proved. \end{proof} \subsection{The reconstruction problem} \label{sec: rec} In this section we discuss the conditions under which the distribution $\mathcal D$ defined in Theorem \ref{thm: rec} admits a leaf satisfying eq. (\ref{eq:vanishing}). In particular, we analyze the case where $G^*$ is abelian and the case where $G^*$ is the Heisenberg group. In the following we keep the assumption that $M$ is connected and simply connected. \subsubsection{The abelian case} Suppose that $G^*=\mathfrak{g}^*$ is abelian. Then the forms $\alpha_{\xi}$ satisfy $d\alpha_{\xi}=0$, hence $\alpha_{\xi}=dH_{\xi}$ (since $H^1 (M)=0$) for some $H_{\xi}\in C^{\infty}(M)$. Let us denote by $ev_{\xi}$ the linear function $\mathfrak{g}^* \ni z\rightarrow z(\xi)$. Then $\theta_{\xi}=d(ev_{\xi})$ and the leaves of the distribution $\mathcal D$ coincide with the level sets (on $M\times \mathfrak{g}^*$) of the functions \begin{equation} \{ H_{\xi} - ev_{\xi} \mid \xi\in\mathfrak{g} \}. \end{equation} Furthermore, we have \begin{equation} \varphi (\xi,\eta)(m,z) = \{ H_{\xi} ,H_{\eta} \} - z([\xi,\eta]). \end{equation} In this case, the identity (\ref{eq: ah}), combined with (\ref{eq: bff}), reduces to \begin{equation} d\{ H_{\xi} ,H_{\eta} \}=dH_{[\xi,\eta]}, \end{equation} hence \begin{equation} \{ H_{\xi} ,H_{\eta} \}-H_{[\xi,\eta]}=c(\xi,\eta), \end{equation} for some constants $c(\xi,\eta)$. By the Jacobi identity, the constants $c(\xi,\eta)$ define a class $[c]\in H^2 (\mathfrak{g} ,\mathbb{R})$. Suppose that this class vanishes (for example, if $\mathfrak{g}$ is semisimple). Then there exists a $z_0\in\mathfrak{g}^*$ such that $c(\xi,\eta) = z_0([\xi,\eta])$. Hence, given a leaf $\mathcal{F}$, \begin{equation} \varphi (\xi,\eta)|_{\mathcal{F}} = 0 \end{equation} if and only if $\mathcal{F}$ is given by \begin{equation} H_{\xi}-ev_{\xi} - z_0(\xi) = 0. \end{equation} In other words, the space of leaves of $\mathcal D$ which give a momentum map is an affine space modeled on $\{ z\in \mathfrak{g}^* : z|_{[\mathfrak{g} ,\mathfrak{g} ]}=0\}$ (which again vanishes when $\mathfrak{g}$ is semisimple). This proves the following theorem. \begin{theorem} Suppose that $G$ is a connected and simply connected Lie group with trivial Poisson structure and $M$ is compact. Then an infinitesimal momentum map is a map $H:\mathfrak{g} \rightarrow C^\infty (M): \xi\mapsto H_{\xi}$ such that \begin{equation} d\{ H_{\xi} ,H_{\eta}\} = dH_{[\xi,\eta]} \quad\forall \xi,\eta\in \mathfrak{g} . \end{equation} The element $c(\xi,\eta)=\{ H_{\xi} ,H_{\eta}\} - H_{[\xi,\eta]} $ is a two-cocycle on $\mathfrak{g}$ with values in $\mathbb{R}$. The infinitesimal momentum map $H$ is generated by a momentum map $\boldsymbol{\mu}$ if this cocycle vanishes and, in this case, $\boldsymbol{\mu}$ is unique. \end{theorem} \subsubsection{The Heisenberg group case} Suppose now that $G^*$ is the Heisenberg group.
Let $x,y,z$ be a basis for $\mathfrak{g}^*$, where $z$ is central and $[x,y]=z$. Let $\xi,\eta,\zeta$ be the dual basis of $\mathfrak{g}$. The cocycle $\delta$ on $\mathfrak{g}$ is given by \begin{equation} \delta (\xi)=\delta (\eta)=0 \quad\mbox{ and }\quad \delta (\zeta)=\xi\wedge \eta, \end{equation} so that \begin{equation} \begin{split} d\alpha_{\xi} & = d\alpha_{\eta} = 0\\ d\alpha_{\zeta}& = \alpha_{\xi}\wedge \alpha_{\eta} . \end{split} \end{equation} There are essentially two possibilities for the Lie bialgebra structure on $\mathfrak{g}^*$, which give the following two possibilities for the Lie algebra structure on $\mathfrak{g}$. Either \begin{equation} [\xi,\eta] = 0 ,\quad [\xi,\zeta] = \xi, \quad [\eta,\zeta] = \eta \end{equation} or \begin{equation} [\xi,\eta] = 0, \quad [\xi,\zeta] = \eta, \quad [\eta,\zeta] = -\xi. \end{equation} The result below will turn out to be independent of the choice (the computations will be done using the second choice, which corresponds to $G = \mathbb{R} \ltimes \mathbb{R}^2$, with $\mathbb{R}$ acting by rotation on $\mathbb{R}^2$). Below we use the notation \begin{equation} \delta(\xi) = \sum_i \xi^1_i\wedge \xi^2_i. \end{equation} Applying the Cartan formula $\mathcal{L} =[\iota ,d]$ and the identity $[ \alpha_{\xi}, \alpha_{\eta}]_\pi =\alpha_{[\xi,\eta]}$ to the basic equation (\ref{eq: bff}), we get \begin{equation} \sum_i \pi (\alpha_\eta ,\alpha_{\xi_i^1})\alpha_{\xi^2_i} - \sum_i \pi (\alpha_\xi ,\alpha_{\eta^1_i})\alpha_{\eta^2_i} = \alpha_{[\eta,\xi]} - d \pi (\alpha_\eta,\alpha_\xi). \end{equation} In our case this gives the following equations, \begin{equation} \begin{split} d\pi (\alpha_{\xi}, \alpha_{\eta}) & = \alpha_{[\xi,\eta]}\\ d\pi(\alpha_{\zeta}, \alpha_{\eta}) & = \alpha_{[\zeta,\eta]}+\pi(\alpha_{\eta}, \alpha_{\xi}) \alpha_{\eta}\\ d\pi (\alpha_{\zeta}, \alpha_{\xi}) & = \alpha_{[\zeta,\xi]}-\pi(\alpha_{\xi} ,\alpha_{\eta})\alpha_{\xi} , \end{split} \end{equation} which are also satisfied after replacing $\alpha$ with $\theta$. Let $\mathcal I $ denote the ideal of forms defining our distribution $\mathcal D$. Then, from the above, \begin{equation}\label{eq:constant} d\varphi (\xi,\eta)\in \mathcal I \end{equation} and \begin{equation} \varphi (\xi,\eta)|_{\mathcal{F}} = 0 \quad \Longrightarrow \quad d\varphi (\zeta,\eta)|_{\mathcal{F}} \quad\mbox{ and }\quad d\varphi (\zeta,\xi)|_{\mathcal{F}}\in\mathcal I. \end{equation} Here, as before, $\mathcal{F}$ is a leaf of $\mathcal D$. Using the relation (\ref{eq: theta2}), we get \begin{equation} \begin{split} & \mathcal{L}_z^*( \pi_{G^*}(\theta_{\xi},\theta_{\eta})) = \mathcal{L}_x^*( \pi_{G^*}(\theta_{\xi},\theta_{\eta})) = \mathcal{L}_y^*( \pi_{G^*}(\theta_{\xi},\theta_{\eta})) = 0\\ &\mathcal{L}_z^*( \pi_{G^*}(\theta_{\xi},\theta_{\zeta})) = \mathcal{L}_y^*( \pi_{G^*}(\theta_{\xi},\theta_{\zeta})) = 0\\ &\mathcal{L}_z^*( \pi_{G^*}(\theta_{\eta},\theta_{\zeta})) = \mathcal{L}_x^*( \pi_{G^*}(\theta_{\eta},\theta_{\zeta})) = 0\\ &\mathcal{L}_x^*( \pi_{G^*}(\theta_{\xi},\theta_{\zeta})) = 1\\ &\mathcal{L}_y^*( \pi_{G^*}(\theta_{\eta},\theta_{\zeta})) = 1 . \end{split} \end{equation} In particular, $\pi_{G^*} (\theta_{\xi},\theta_{\eta})$ is invariant under left translations. Since $\pi_{G^*}$ is zero at the identity, we get \begin{equation}\label{eq:piz} \pi_{G^*} (\theta_{\xi},\theta_{\eta}) = 0 . \end{equation} Since $ d\varphi (\xi,\eta)\in \mathcal I$, the function $\varphi (\xi,\eta)$ is leafwise constant.
Using the definition (\ref{eq:vp}) and the equation (\ref{eq:piz}), it follows that also $\pi(\alpha_{\xi}, \alpha_{\eta})$ is leafwise constant. Hence we have \begin{lemma} $\pi (\alpha_{\xi}, \alpha_{\eta})=c$ is constant on $M$, and a necessary condition for the existence of the momentum map is $c=0$. \end{lemma} Assuming that $c=0$, for a given leaf $\mathcal{F}$, eq. (\ref{eq:constant}) gives \begin{equation} \varphi(\eta,\zeta)|_{\mathcal{F}} = c_1 \quad \text{and} \quad \varphi(\xi,\zeta)|_{\mathcal{F}}=c_2 \end{equation} for some constants $c_1$ and $c_2$. Replacing $\mathcal{F}$ by $\mathcal{F}_1 = (\mathrm{id} \times \exp(c_1 x) \exp (c_2 y))\,\mathcal{F}$, we get \begin{equation} \varphi(\eta,\zeta)|_{\mathcal{F}_1} = \varphi(\xi,\zeta)|_{\mathcal{F}_1} = \varphi(\xi,\eta)|_{\mathcal{F}_1} = 0. \end{equation} The final result is as follows. \begin{theorem}\label{thm:heisenberg} Let $G$ be a Poisson Lie group acting on a Poisson manifold $M$ with an infinitesimal momentum map $\alpha$ and such that $G^*$ is the Heisenberg group. Let $\xi,\eta,\zeta$ denote the basis of $\mathfrak{g}$ dual to the standard basis $x,y,z$ of $\mathfrak{g}^*$, with $z$ central and $[x,y] = z$. Then \begin{equation} \pi(\alpha_{\xi}, \alpha_{\eta}) = c , \end{equation} where $c$ is a constant on $M$. The form $\alpha$ lifts to a momentum map $\boldsymbol{\mu}: M\to G^*$ if and only if $c=0$. When $c=0$ the set of momentum maps with given $\alpha$ is one dimensional with a free transitive action of $\mathbb{R}$. \end{theorem} \section{Infinitesimal deformations of a momentum map} \label{sec: inf} In the following we study infinitesimal deformations of a given momentum map. Let $(M,\pi)$ be a Poisson manifold with a Poisson action of a Poisson Lie group $(G,\pi_G)$ generated by the momentum map $\boldsymbol{\mu}:M \to G^*$. In the following we denote by $\exp :{\mathfrak g}^*\to G^*$ the exponential map. Suppose that $[-\epsilon ,\epsilon ]\ni t\to \boldsymbol{\mu}_t : M \to G^*$, $\epsilon >0$, is a differentiable path of momentum maps for this action. We can assume that $\boldsymbol{\mu}_t(m)$ is of the form \begin{equation}\label{eq:mut} \boldsymbol{\mu} (m)\exp (t H_m +t^2\lambda(t,m)) \end{equation} for some differentiable maps $H : M \to {\mathfrak g}^*: m \mapsto H(m) $ and $\lambda:[-\epsilon ,\epsilon ]\times M\to {\mathfrak g}^*$. \begin{theorem}\label{thm: idef} In the notation above, the following identities hold for all $\xi, \eta \in \mathfrak g$: \begin{align} \label{eq: inf1} X_{\xi} H (\eta)-X_{\eta} H (\xi) & = H( [\xi,\eta])\\ \label{eq: inf2} \{ H(\xi),\ \cdot \} & = -X_{ad^*_H\xi}. \end{align} \end{theorem} \begin{proof} Let us compute $$ \beta_\xi = \left.\frac{d}{dt}\right|_{t=0}\langle d\boldsymbol{\mu}_t,\theta_\xi \rangle = \left.\frac{d}{dt}\right|_{t=0}\langle d(\boldsymbol{\mu}\exp (t H)),\theta_\xi \rangle. $$ First note that \begin{equation} d(\boldsymbol{\mu}\exp (t H)) = (r_{\exp (t H)})_*d\boldsymbol{\mu} + (l_{\boldsymbol{\mu}})_*d \exp (t H), \end{equation} where $r$ and $l$ denote the right and left multiplication, respectively.
Calculating the derivative $\left.\frac{d}{dt}\right|_{t=0}$ we get: \begin{equation} \begin{split} \left.\frac{d}{dt}\right|_{t=0}\langle (r_{\exp (t H)})_*d\boldsymbol{\mu},\theta_{\xi} \rangle &=\left.\frac{d}{dt}\right|_{t=0}\langle d\boldsymbol{\mu},(r_{\exp (t H)})^*\theta_{\xi}\rangle\\ & = \langle d\boldsymbol{\mu},\mathcal{L}_H\theta_{\xi}\rangle\\ & = \langle d\boldsymbol{\mu},\theta_{ad^*_H\xi}\rangle = \alpha_{ad^*_H\xi} \end{split} \end{equation} and \begin{equation} \begin{split} \left.\frac{d}{dt}\right|_{t=0}\langle (l_{\boldsymbol{\mu}})_*d \exp (t H),\theta_{\xi} \rangle & =\left.\frac{d}{dt}\right|_{t=0}\langle d \exp (t H),(l_{\boldsymbol{\mu}})^*\theta_{\xi} \rangle\\ & = \left.\frac{d}{dt}\right|_{t=0}\langle d \exp (t H),\theta_{\xi} \rangle . \end{split} \end{equation} The differential of the exponential map $\exp: {\mathfrak g}^*\rightarrow G^*$ is a map from the tangent bundle of ${\mathfrak g}^*$ to the tangent bundle of $G^*$, and it can be trivialized as $d\exp:{\mathfrak g}^*\times {\mathfrak g}^*\rightarrow G^*\times {\mathfrak g}^* $. Composing with $(\exp^{-1},id): G^*\times {\mathfrak g}^*\rightarrow {\mathfrak g}^*\times {\mathfrak g}^*$, the resulting map ${\mathfrak g}^*\times {\mathfrak g}^*\rightarrow {\mathfrak g}^*\times {\mathfrak g}^*$ is, to first order in $t$, given by $tH$. We get \begin{equation} \left.\frac{d}{dt}\right|_{t=0}\langle d \exp (t H),\theta_{\xi} \rangle = \left.\frac{d}{dt}\right|_{t=0}\langle d(tH+o(t)),\theta_{\xi} \rangle = d\langle H,\theta_{\xi} \rangle = d\langle H,\xi \rangle \end{equation} and finally \begin{equation}\label{eq: beta} \beta_\xi = \alpha_{ad^*_H\xi } + dH (\xi ). \end{equation} Since $\pi^{\sharp} (\alpha^t_\xi) = X_{\xi}$ is independent of $t$, $\pi^\sharp \beta_\xi =0$ and we get the identity (\ref{eq: inf2}). In order to prove the relation (\ref{eq: inf1}), recall that, since $\boldsymbol{\mu}_t$ is a family of Poisson maps, one has \begin{equation} \pi (\alpha^t_\xi ,\alpha^t_\eta)(m) = \pi_{G^*}(\theta_\xi ,\theta_\eta)(\boldsymbol{\mu}_t (m)). \end{equation} Applying $\left.\frac{d}{dt}\right|_{t=0}$ to both sides, we get \begin{equation} \pi (\beta_\xi ,\alpha_\eta)(m) + \pi (\alpha_\xi ,\beta_\eta)(m) = \mathcal{L}_H (\pi_{G^*}(\theta_\xi ,\theta_\eta))(\boldsymbol{\mu} (m)). \end{equation} Substituting the expression (\ref{eq: beta}) for the $\beta$'s and using the identity \begin{equation} \mathcal{L}_H (\pi_{G^*}(\theta_\xi ,\theta_\eta))(\boldsymbol{\mu} (m)) = H([\xi ,\eta ]) + \pi_{G^*}(\theta_{ad^*_H\xi} ,\theta_\eta) + \pi_{G^*}(\theta_\xi ,\theta_{ad^*_H\eta}) , \end{equation} the claimed equality follows. \end{proof} \begin{corollary}\label{cor:unique} Suppose that $M$ is a Poisson manifold with a Poisson action of a compact semisimple Poisson Lie group $G$. Then any infinitesimal deformation of a momentum map $\boldsymbol{\mu} : M \to G^*$ as above is generated by a one parameter family of gauge transformations. \end{corollary} \begin{proof} The relation (\ref{eq: inf1}) states that $\xi\mapsto H(\xi)$ is a one-cocycle on ${\mathfrak g}$ with values in $C^\infty (M)$; since $G$ is compact and semisimple, $H$ is a Lie coboundary, i.e. there exists a function $$ \Phi : M \to \mathbb{R} $$ such that \begin{equation}\label{eq:last1} X_{\xi}\Phi = H(\xi). \end{equation} In particular, it is easy to check that $X_{ad^*_H\xi}f = \sum_i X_{\xi'_i}\Phi\,\xi''_i(f)$, where $\delta(\xi)=\sum_i\xi'_i\otimes \xi''_i$.
Now observe that \begin{equation} X_{\xi}\{\Phi,f\} = X_{\xi}\,\pi(d\Phi,df) = (\mathcal{L}_{X_{\xi}}\pi)(d\Phi,df)+\{X_{\xi}\Phi,f\} + \{\Phi,X_{\xi}f\} , \end{equation} hence \begin{equation}\label{eq:last2} \begin{split} \{H(\xi),f\}& = \{X_{\xi}\Phi,f\}=X_{\xi}\{\Phi,f\}-(\mathcal{L}_{X_{\xi}}\pi)(d\Phi,df)-\{\Phi,X_{\xi}f\}\\ & = X_{\xi}\{\Phi,f\}-\delta(\xi)(\Phi,f)-\{\Phi,X_{\xi}f\}\\ & = X_{\xi}\{\Phi,f\}-\sum_i X_{\xi'_i}\Phi\,\xi''_i(f)-\{\Phi,X_{\xi}f\}. \end{split} \end{equation} Substituting eqs. (\ref{eq:last1}) and (\ref{eq:last2}) in (\ref{eq: inf2}) we get \begin{equation} X_{\xi}\{\Phi,f\} - \{\Phi,X_{\xi}f\} = 0. \end{equation} In other words, the Hamiltonian vector field associated to $\Phi $ commutes with the group action and generates the derivative of $\boldsymbol \mu_t$ at $t=0$, as claimed. \end{proof} \bibliographystyle{plain}
\section{Introduction} \label{sec:intro} Big Bang nucleosynthesis produced only hydrogen, helium and trace amounts of lithium in the first minutes of the Universe. All metals, as astronomers call elements heavier than helium, were produced in stars and distributed into the interstellar medium (ISM) by their violent deaths. Therefore, the first generation of stars (Population~III, or Pop~III) formed from hydrogen and helium only, and the average metallicity of the Universe built up cumulatively over time. Compared to the present-day Universe, where metals and dust provide the main cooling channels during protostellar collapse, primordial gas cools less efficiently \citep{omukai05,bovino16}. The lower cooling rates and higher temperatures in a Pop~III star-forming region result in a higher Jeans mass and therefore in larger fragment masses \citep{silk83,bromm99,abel00,bromm02,abel02,yoshida03}. Hence, it is generally expected that the first stars are more massive than present-day stars, although this has not yet been confirmed by direct observations \citep[see reviews by][]{glover05,bromm09,greif15}. The initial mass function (IMF) of the first stars is crucial to understand the buildup of the first galaxies: the Pop~III stars shape the first galaxies with their radiative and chemical feedback, they may provide the seeds for supermassive black holes, and they contribute to reionisation. Their relative contribution to these processes depends on their mass and therefore on the Pop~III IMF. Despite intensive research in recent years, no study has conclusively derived the IMF of the first stars; existing works only provide indirect constraints. No metal-free Pop~III survivor has been discovered in the Milky Way (MW). This implies that most primordial stars accreted metal-rich ISM over their lifetime (\citealt{shen16}, however see \citealt{tanaka17}) or have lifetimes that are shorter than the age of the Universe, which limits the mass of Pop~III stars to $\gtrsim 0.8\,\ensuremath{\mathrm{M}_\odot}$ \citep{2007Salvadori,hartwig15,ishiyama16,magg19}. This also means that we have to rely on indirect observational constraints. The direct observation of Pop~III-dominated galaxies at high redshift is very challenging, even with next-generation telescopes \citep{zackrisson11,barrow18,dayal18}. Supernova (SN) explosions of the first stars are very rare, and upcoming surveys will likely only place upper limits on the number density of massive Pop~III stars \citep{hummel12,hartwig18b,rydberg20}. Gravitational waves from the remnants of the first stars may be detectable with the current sensitivity of ground-based detectors \citep{kinugawa14,hartwig16,belczynski17}. However, discriminating Pop~III remnant black holes from other formation channels requires a statistically sound sample of several hundred detections over the next decades. Recently, the sky-averaged 21\,cm signal from the EDGES experiment \citep{edges} has triggered discussions on the contribution of the first stars to gas heating at high redshift \citep{barkana18,fialkov18,mirocha18,schauer19,liu19}. Numerical simulations of Pop~III star formation are another important tool to predict their nature and IMF \citep{stacy10,greif11b,clark11,hirano14,susa19,shrada20}. However, current simulations can only approximately treat the gas accretion and mergers of protostars until the stars reach stable hydrogen burning, which limits their predictive power.
The most informative approach to study the nature of the first stars is stellar archaeology \citep{beers05,ji15,frebel15}. As the name suggests, one extracts information from stellar fossils; in this case, one connects the chemical signatures of metal-poor stars in the MW to the nucleosynthetic yields of the first SNe \citep{salvadori10,milos18}. Various groups have tried to derive the Pop~III IMF by fitting progenitor stars of different masses to observed extremely metal-poor (EMP) stars, with different results: \citet{fraser17} find that their sample of $\sim 30$ EMP stars is best reproduced by a Salpeter IMF for Pop~III stars. However, \citet{ishigaki18} show, based on $R>28000$ spectroscopic abundances of $\sim 200$ EMP stars, that their sample is better fitted by a log-normal Pop~III IMF peaking around $\sim 25\,\ensuremath{\mathrm{M}_\odot}$. In both cases, the method can only probe mass ranges of the Pop~III IMF in which we expect SNe to explode, approximately $10-40\,\ensuremath{\mathrm{M}_\odot}$ and $140-260\,\ensuremath{\mathrm{M}_\odot}$ \citep{heger02}. We therefore analyse the metallicity distribution function (MDF), which is the result of an interplay of chemical, radiative, and mechanical feedback from the first and second generations of stars and is therefore sensitive to the complete Pop~III IMF \citep{deB17}. The MDF is the distribution of stellar metallicities, which is observationally traced by [Fe/H]\footnote{Defined as $[\mathrm{A}/\mathrm{B}] = \log_{10}(m_\mathrm{A}/m_\mathrm{B})-\log_{10}(m_{\mathrm{A},\odot}/m_{\mathrm{B},\odot})$, where $m_\mathrm{A}$ and $m_\mathrm{B}$ are the abundances of elements A and B and $m_{\mathrm{A},\odot}$ and $m_{\mathrm{B},\odot}$ are the solar abundances of these elements.}. If we assume that the metallicity of a star is defined by the metallicity of the molecular cloud out of which it forms, then the stellar metallicity depends on one main ingredient: the ratio of metals to hydrogen in the stellar birth cloud. To understand the MDF and to use it as a tool to study the first stars, we therefore need to understand how metals mix with hydrogen after the first SN explosions. For simplicity, previous semi-analytical approaches have assumed that metals and hydrogen within the virial radius are homogeneously mixed \citep{deB17,graziani17,visbal18}, that all Pop~II stars have the same metallicity \citep{trenti09,dayal14}, or they have applied heuristic models to imitate inhomogeneous mixing \citep{salvadori10,hartwig18a,cote18}. We investigate and improve sub-grid models for metal mixing in this paper, because such simplistic models limit the reliability of the predictions. Hydrodynamical simulations have studied the mixing of metals from the first SNe in individual haloes. While \citet{joggerst11} find that the degree of mixing depends on the type of SN, \cite{ritter15} conclude that the abundances of gas clouds differ from the progenitor SN yields due to inhomogeneous mixing. \cite{sluder16} study the evolution of a minihalo after its first Pop~III SN. They conduct a cosmological simulation and find that abundance biases exist, concluding that ``to fully exploit the stellar-archaeological programme of constraining the Pop~III IMF from the observed Pop II abundances, considering these hydrodynamical transport effects is crucial''. These pioneering studies have highlighted the importance of inhomogeneous metal mixing, but our present study is the first attempt to investigate this effect for a cosmological sample of haloes.
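As a concrete reading of the bracket notation defined in the footnote above, consider the following minimal helper (the function name is ours; in practice the solar reference abundances would be taken from, e.g., \citealt{asplund09}):
\begin{verbatim}
import numpy as np

def bracket(m_A, m_B, m_A_sun, m_B_sun):
    """[A/B] = log10(m_A/m_B) - log10(m_A_sun/m_B_sun)."""
    return np.log10(m_A / m_B) - np.log10(m_A_sun / m_B_sun)

# A star with one tenth of the solar Fe-to-H ratio has [Fe/H] = -1:
print(bracket(0.1, 1.0, 1.0, 1.0))  # -1.0
\end{verbatim}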
The main scientific question is to understand metal mixing in the first galaxies and how to predict the metallicity of the star-forming gas. These results will help to improve existing semi-analytical models and to deepen our understanding of metal mixing, star formation at high redshift, and the build-up of the MDF. \section{Methodology: Semi-Analytical Model} In this section, we summarise the semi-analytical model (SAM) that we apply to simulate the MDF. In the next section, we then present our improved model of metal mixing, derived from 3D cosmological simulations. The SAM used here has been used previously \citep{hartwig15,hartwig16,hartwig18a,magg16,magg18} and has recently been named \textsc{a-sloth} (Ancient Stars and Local Observables by Tracing Haloes)\footnote{\url{http://www-utap.phys.s.u-tokyo.ac.jp/~hartwig/A-SLOTH}}. It is based on MW-like dark matter merger trees from the Caterpillar simulation \citep{griffen16}, which represent the hierarchical formation and growth of gravitationally bound structure over time. On top of the dark matter distribution, we model the formation of stars and their chemical and radiative feedback processes. Here, we summarise the main ingredients of \textsc{a-sloth} needed to reproduce the MDF. More details on the model can be found in previous publications \citep{hartwig15,hartwig18a,magg18}. \subsection{Pop~III star formation} The main condition to form Pop~III stars in a minihalo is that molecular hydrogen is the main coolant. Sufficient molecular hydrogen can form if the halo mass is above the critical mass \citep{hummel12,glover13} \begin{equation} M_\mathrm{crit} = 3\times 10^6 \,\ensuremath{\mathrm{M}_\odot} \left( \frac{1+z}{10} \right)^{-3/2}, \end{equation} which corresponds to a virial temperature of $T_\mathrm{vir}=2200\,\mathrm{K}$. We do not include the effect of baryonic streaming, which can increase this critical halo mass for gas collapse at high redshift \citep{schauer19b}, but plan to do so in a future study. Lyman-Werner (LW) photons can photodissociate molecular hydrogen and therefore delay or even prevent the gas collapse in the first haloes. Therefore, we require a second, LW-flux-dependent mass threshold for gas collapse \citep{OShea08} and model how the LW background builds up over time based on \citet{greif06}. We also follow the evolution of H{\sc ii} regions around star-forming galaxies and allow star formation in other haloes in these ionised regions only if their virial temperature is $>10^4$\,K. We allow Pop~III star formation only if the gas is sufficiently metal-poor. \cite{2017Chiaki_criterion} argue that we have never observed stars with \begin{equation} 10^{\mathrm{[C/H]}-2.30} + 10^{\mathrm{[Fe/H]}} \leq 10^{-5.07} \label{eq:transition_criterion} \end{equation} and suggest that this truncation is caused by the absence of dust cooling. The C abundance traces the amount of carbon dust, and the Fe abundance traces the amount of silicate dust. We use Eq.~\ref{eq:transition_criterion} as the transition criterion from Pop~III to Pop~II star formation. If a halo is identified to form Pop~III stars, we define the total stellar mass of this halo as \begin{equation} M_* = \eta_{III} \frac{\Omega _b}{\Omega _m} M_h, \end{equation} where the star formation efficiency $\eta_{III}$ needs to be calibrated. This parameter is fully degenerate with the baryon fraction of the haloes.
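The two thresholds that gate Pop~III star formation in \textsc{a-sloth} can be summarised in a short sketch (a minimal reading of the two equations above; the function and variable names are ours):
\begin{verbatim}
def m_crit(z):
    """Critical halo mass for H2 cooling, in Msun."""
    return 3.0e6 * ((1.0 + z) / 10.0) ** -1.5

def pop3_allowed(c_over_h, fe_over_h):
    """True if the gas is still metal-poor enough for Pop III star
    formation according to the Chiaki et al. transition criterion."""
    return 10.0 ** (c_over_h - 2.30) + 10.0 ** fe_over_h <= 10.0 ** -5.07

print(m_crit(19.0))              # ~1.1e6 Msun at z = 19
print(pop3_allowed(-6.0, -6.0))  # True: essentially pristine gas
print(pop3_allowed(-2.0, -4.0))  # False: dust cooling becomes possible
\end{verbatim}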
For a globally reduced baryon fraction in the haloes, e.g., due to streaming velocities (see \cite{schauer19b}), the same $\eta_\mathrm{III}$ parameter would correspond to a higher physical star formation efficiency. Then, we draw individual stars from a pre-defined IMF (see below) and assign them to this halo until the total mass of stars is $\geq M_*$. The mass of a Pop~III star defines its lifetime \citep{marigo01,schaerer02} and its final fate \citep{karlsson13}. If the star explodes as a SN, we adopt tabulated SN metal yields \citep{kobayashi11,nomoto13} and assume that a fraction, $f_\mathrm{faint}$, of core-collapse SNe explode as faint SNe with the corresponding yields from \citet{ishigaki14}. Based on \citet{ritter15}, we assume that a fraction of the metals, $f_\mathrm{fallback}$, remains in the halo (i.e. falls back after some time) and that the remaining fraction, $f_\mathrm{eject} = 1 - f_\mathrm{fallback}$, escapes the gravitational potential of the halo. We assume that these parameters do not depend on the halo mass, and we calibrate $f_\mathrm{fallback}$ based on the metallicity distribution function and other constraints such as the external enrichment fraction. For Pop~III stars, we treat these fractions as free parameters, whereas for SNe from Pop~II stars we calculate the ejected gas mass self-consistently (see below). For the fraction of the metals that escapes a Pop~III-forming halo, we assume that these metal winds have a constant velocity of $110\,\mathrm{km}\,\mathrm{s}^{-1}$ and may externally enrich neighbouring haloes. After internal enrichment, i.e. once a Pop~III SN has produced sufficient metals (Eq.~\ref{eq:transition_criterion}), we allow Pop~II star formation after the recovery time $t_\mathrm{rec}$, which we calibrate with priors in the range $10\,\mathrm{Myr} \leq t_\mathrm{rec} \leq 100\,\mathrm{Myr}$ \citep{jeon14,chiaki18}. If a previously pristine halo is sufficiently enriched by external enrichment, we trigger Pop~II star formation in this halo one freefall time after the external enrichment event, which is of the order of $100\,$Myr at the redshifts of interest. \subsection{Pop~II star formation} We model the formation of metal-enriched stars with a bathtub model, which will be described in more detail in Magg~et~al. (in prep.). This improved way of simulating Pop~II star formation represents a major update to \textsc{a-sloth} as compared to, e.g., \citet{hartwig18a}. The baryonic matter in each metal-enriched, star-forming halo is split into four components: \begin{itemize} \item $M_\mathrm{*,II}$: the mass in metal-enriched stars, \item $M_\mathrm{hot}$: the mass of hot gas in the halo, \item $M_\mathrm{cold}$: the mass of cold, star-forming gas in the centre of the halo and \item $M_\mathrm{out}$: the mass of the outflows, i.e. baryonic gas that has been unbound from the halo. \end{itemize} The transition of matter from one component to the other is prescribed according to characteristic time-scales and efficiencies. We aim to keep the sum of these four components equal to $(\Omega_\mathrm{b}/\Omega_\mathrm{m}) M_\mathrm{vir}$: \begin{equation} M_\mathrm{*,II}+M_\mathrm{hot}+M_\mathrm{cold}+M_\mathrm{out} = \frac{\Omega_\mathrm{b}}{\Omega_\mathrm{m}} M_\mathrm{vir}. \end{equation} We adapt this general structure from \citet{agarwal12}, i.e., the idea of having a four-component bathtub model, where we keep the total baryonic mass fraction equal to the cosmic mean \citep[see also][]{cote16}.
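Before turning to the details of the Pop~II algorithm, the stochastic Pop~III sampling step described in the previous subsection can be sketched as follows (the power-law IMF, its slope, and its mass limits are placeholders for illustration; in \textsc{a-sloth} the IMF is a calibrated model ingredient):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(42)

def draw_pop3_stars(m_target, slope=-2.35, m_min=1.0, m_max=300.0):
    """Draw stars from dN/dM ~ M^slope (inverse-transform sampling)
    until their total mass reaches m_target; all masses in Msun."""
    masses, total = [], 0.0
    a = slope + 1.0  # assumes slope != -1
    while total < m_target:
        u = rng.uniform()
        m = (m_min**a + u * (m_max**a - m_min**a)) ** (1.0 / a)
        masses.append(m)
        total += m
    return np.array(masses)

# e.g. a 3e6 Msun minihalo, Ob/Om ~ 0.16, and an illustrative eta_III = 0.01:
stars = draw_pop3_stars(0.01 * 0.16 * 3.0e6)
print(stars.size, stars.sum())
\end{verbatim}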
The exact time-scales according to which the matter moves from one phase to the other and the efficiency of outflows were revised. Each time a halo forms Pop~II stars, the following algorithm is used: \begin{enumerate} \item The halo is initialized. All four baryonic matter components are set to the sum of these components over all progenitors. Thus the mass accretion rate during a time-step of length $\Delta t$ is \begin{equation} \begin{split} \dot{M}_\mathrm{acc} =& \frac{\Omega_\mathrm{b}}{\Omega_\mathrm{m}\,\Delta t}M_\mathrm{vir}\\-& \left(M_\mathrm{*,II}+M_\mathrm{hot}+M_\mathrm{cold}+M_\mathrm{out}\right)\frac{1}{\Delta t}.\\ \end{split} \end{equation} \item Compute the relevant time-scales and the outflow efficiency: \begin{itemize} \item $t_\mathrm{ff}$: the free-fall time at the 192-fold cosmic over-density, i.e. the density of the halo, for the conversion of hot diffuse gas to cold dense gas. It is defined by \begin{equation} \begin{split} t_\mathrm{ff} =& \sqrt{\frac{1}{(1+z)^3\,192\,G\,\rho_\mathrm{m}}}\\ =& 5.2\,\mathrm{Gyr} \left(1+z \right)^{-\frac{3}{2}}, \end{split} \end{equation} where $\rho_\mathrm{m}$ is the cosmic matter density at $z=0$. At the redshifts and halo masses of interest, this time-scale is longer than e.g. the cooling time of the gas, and thus the gas collapses essentially in free-fall. \item $t_\mathrm{dyn}$: the dynamical time-scale of the halo centre, for star formation from cold gas. We assume that cold gas and stars are concentrated in the central 5 per cent of the virial radius $R_\mathrm{vir}$ of the halo \citep{mo98}. The dynamical time-scale can then be computed as the ratio of the radius of this region and its dynamical velocity, i.e. \begin{equation} \begin{split} t_\mathrm{dyn} =& \frac{0.05\, R_\mathrm{vir}}{v_\mathrm{dyn}}\\ =& \sqrt{\frac{\left(0.05\, R_\mathrm{vir}\right)^3}{G\,\left(M_\mathrm{cold}+M_\mathrm{*,II}\right)}},\\ \end{split} \end{equation} where the dynamical velocity in the centre of the halo is \begin{equation} v_\mathrm{dyn} =\sqrt{\frac{G\,\left(M_\mathrm{cold}+M_\mathrm{*,II}\right)}{0.05\, R_\mathrm{vir}}}. \end{equation} \item $\gamma_\mathrm{out}$: the outflow efficiency, which is computed from the energy balance between the injected energy per solar mass of star formation and the kinetic energy needed for outflows (per solar mass of outflowing gas). The effective specific binding energy is \begin{equation} \frac{1}{2} v_\mathrm{eff}^2 = \frac{1}{2}\left(v_\mathrm{dyn}^2+v_\mathrm{circ}^2\right). \end{equation} The circular velocity $v_\mathrm{circ}$ of the halo is defined analogously to the dynamical velocity, \begin{equation} v_\mathrm{circ} =\sqrt{\frac{G\,M_\mathrm{vir}}{R_\mathrm{vir}}}. \end{equation} For the specific energy input of SNe we find \begin{equation} e_\mathrm{SN} = 8.7\times 10^{15}\,\mathrm{erg}\,\mathrm{g}^{-1}, \end{equation} which corresponds to one $10^{51}\,\mathrm{erg}$ SN per $57\,\,\ensuremath{\mathrm{M}_\odot}$ of stars formed. For ionizing radiation we find \begin{equation} e_\mathrm{ion}= 5.2\times 10^{16}\,\mathrm{erg}\,\mathrm{g}^{-1}, \end{equation} which is the equivalent of 30000 ionizing photons per stellar baryon \citep{greif06}, a thermal energy injection of 2.0\,eV for each of these photons, and an escape fraction of 10 per cent.
We assume that feedback by ionizing photons is only efficient in haloes that are not too far above the atomic cooling limit and smoothly join the two regimes together: \begin{equation} \gamma_\mathrm{out} = \begin{cases} 2\frac{e_\mathrm{SN}}{16 \left(v_\mathrm{eff} - 20\,\mathrm{km}\,\mathrm{s}^{-1}\right)^2} & \mathrm{if}\ v_\mathrm{eff}>20\,\mathrm{km}\,\mathrm{s}^{-1}\\ 2\frac{e_\mathrm{SN}+e_\mathrm{ion}}{\left(10\,\mathrm{km}\,\mathrm{s}^{-1}\right)^2} & \mathrm{else}.\\ \end{cases} \end{equation} This assumes that outflows move at 10\,km\,s$^{-1}$ if they are primarily ionization driven and a few times as fast as the escape velocity if they are SN driven. The choice of the exact transition between the two regimes is calibrated to reproduce, e.g., the stellar mass to halo mass relation of the MW (see below). \end{itemize} \item We solve the following differential equations with a simple forward Euler method: \begin{equation} \dot{M}_\mathrm{*,II} = \eta_{II} \frac{M_\mathrm{cold}}{t_\mathrm{dyn}} \end{equation} \begin{equation} \dot{M}_\mathrm{out} = \gamma_\mathrm{out}\,\dot{M}_\mathrm{*,II} \end{equation} \begin{equation} \dot{M}_\mathrm{cold} = - \dot{M}_\mathrm{*,II} -\dot{M}_\mathrm{out} + \frac{M_\mathrm{hot}}{t_\mathrm{ff}} \end{equation} \begin{equation} \dot{M}_\mathrm{hot} = -\frac{M_\mathrm{hot}}{t_\mathrm{ff}} + \dot{M}_\mathrm{acc}, \end{equation} where the Pop~II star formation efficiency $\eta_{II}=0.1$ was calibrated to reproduce the stellar mass to halo mass relation (see below). \item Afterwards, we update the produced metals and the feedback: \begin{itemize} \item We reduce the metal mass because of outflows, \begin{equation} M_\mathrm{metals, t+\Delta t} = M_\mathrm{metals, t} \frac{M_\mathrm{cold}+M_\mathrm{hot}}{M_\mathrm{gas}}, \end{equation} where the total gas mass is \begin{equation} \begin{split} M_\mathrm{gas}= &M_\mathrm{cold}+M_\mathrm{hot}\\&+\Delta M_\mathrm{*,II, tot}+ \Delta M_\mathrm{out,tot},\\ \end{split} \end{equation} and $\Delta M_\mathrm{*,II, tot}$ and $\Delta M_\mathrm{out,tot}$ are the masses of stars and outflows formed during the whole time-step. \item We add the metals generated during this time-step, \begin{equation} M_\mathrm{metals, t+\Delta t} = M_\mathrm{metals, t} + 0.05 \Delta M_\mathrm{*,II, tot} , \end{equation} where we took the IMF-averaged metal yield of 0.05\,\ensuremath{\mathrm{M}_\odot}\ per 1\,\ensuremath{\mathrm{M}_\odot}\ of star formation from \citet{vincenzo16}. \item We use a snowplough algorithm to compute the size of the metal-enriched region and propagate the ionization front as described in \citet{magg18}. \end{itemize} \end{enumerate} A compact code sketch of one update step of this scheme is given below. This new model comes with several free parameters. We calibrate these parameters mainly against the stellar mass to halo mass relation over a mass range of 2 dex. For more details see Sec.~\ref{sec:results}. \section{Methodology: Metal Mixing} In this section, we explain our approach to investigate the inhomogeneity of the metallicity in the high-redshift ISM. Although its importance has been pointed out in previous research \citep{cote18}, metallicity inhomogeneity has often been neglected in archaeological approaches. However, without understanding this effect we cannot properly connect the observational information on EMP stars and the theoretical efforts on SN yields. We take the inhomogeneity into account by post-processing the metallicity of each star-formation event in {\sc a-sloth}, according to the distribution extracted from cosmological simulations.
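For reference, the bathtub update cycle of the previous section (steps iii and iv) condenses into a few lines. The sketch below is a minimal reading of the scheme; the rates are assumed to be precomputed as described there, and the exact ordering of the metal update relative to the mass update is our assumption:
\begin{verbatim}
def bathtub_step(M_star, M_hot, M_cold, M_out, M_metals,
                 mdot_acc, t_ff, t_dyn, gamma_out, dt, eta_II=0.1):
    """One forward-Euler step; masses in Msun, times in consistent units."""
    dM_star = eta_II * M_cold / t_dyn * dt   # star formation
    dM_out = gamma_out * dM_star             # SN/ionisation-driven outflow
    dM_cool = M_hot / t_ff * dt              # hot -> cold condensation
    M_star += dM_star
    M_out += dM_out
    M_cold += dM_cool - dM_star - dM_out
    M_hot += mdot_acc * dt - dM_cool
    # metal bookkeeping: dilution by outflows, then newly produced metals
    M_gas = M_cold + M_hot + dM_star + dM_out
    M_metals *= (M_cold + M_hot) / M_gas
    M_metals += 0.05 * dM_star               # IMF-averaged yield
    return M_star, M_hot, M_cold, M_out, M_metals
\end{verbatim}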
\subsection{Cosmological Simulation} To study metal mixing in high-redshift galaxies, we analyse the Renaissance simulations \citep{xu16}. The simulations were conducted with the adaptive mesh refinement (AMR) code \textsc{enzo} \citep{bryan14}. The box size is 28.4\,cMpc/h, the mass of the dark matter particles is $2.9\times 10^{4} \,\ensuremath{\mathrm{M}_\odot}$, and the spatial resolution is $19$ comoving pc in the most refined region. This corresponds to a physical resolution of a few parsec at the redshifts of interest. The Renaissance simulations are suited for our purpose because they have a specific treatment for Pop~III stars (including the IMF), and the mass resolution is sufficient to resolve the $3 \times 10^{6} \,\ensuremath{\mathrm{M}_\odot}$ haloes in which the formation of Pop~III stars takes place \citep{xu16}. In less massive haloes, Pop~III star formation may be delayed or suppressed by baryonic streaming velocities \citep{tseliakhovich10,schauer19b}. The simulations are described in detail in \citet{Oshea15} and \citet{xu16}. The simulations consist of three distinct boxes, ``Rarepeak'', ``Normal'', and ``Void'', with the names representing the overdensity on super-halo scales. Since we want to study the properties of galaxies in a cosmologically representative environment, we select the ``Normal'' region as the main analysis set. At $z=12$, 2223 haloes contain Pop~III stars. \subsection{Sample selection} We select haloes to analyse based on two criteria: First, we select haloes that contain gas denser than $1\,\mathrm{cm}^{-3}$ in order to compare the metallicity between dense gas and average gas within a halo. In the following, we define ``dense gas'' as gas denser than $1\,\mathrm{cm}^{-3}$. We chose $1\,\mathrm{cm}^{-3}$ to improve statistics, because not many of the simulated galaxies have gas denser than 10$\,\mathrm{cm}^{-3}$ (1122 haloes) or 100$\,\mathrm{cm}^{-3}$ (433 haloes), whereas 2733 haloes contain gas cells denser than 1$\,\mathrm{cm}^{-3}$. If we only considered cells that form stars in the next time-step, only six haloes would contain more than 10 cells. For all the haloes with gas above a density of 100$\,\mathrm{cm}^{-3}$ we verified that the metallicities above all three density thresholds are similar. Second, to make sure that the haloes are well resolved, we apply two conditions: haloes must include more than 1000 cells within their virial radius, and the dense gas must be resolved by at least 10 resolution elements. The numbers of analysed haloes are 2733, 6150, and 1145 for Normal $z=12$, Rarepeak $z=15$, and Normal $z=15$, respectively. \subsection{Metallicity Shift dZ} To predict the metallicity of star-forming gas we introduce the ``metallicity shift'' $dZ$ as \begin{equation} dZ=Z_\mathrm{dense}-Z_\mathrm{all}, \label{eq:dZ} \end{equation} where $dZ$ quantifies the difference of the metallicity $Z$\footnote{Defined as $Z = \log_{10}(m_\mathrm{metal}/m_\mathrm{H})-\log_{10}(m_{\mathrm{metal},\odot}/m_{\mathrm{H},\odot})$, where $m_\mathrm{metal}$ is the abundance of metals, $m_\mathrm{H}$ is the abundance of hydrogen, and $m_{\mathrm{metal},\odot}$ and $m_{\mathrm{H},\odot}$ are the solar abundances of these \citep{asplund09}.} between the dense gas and all gas inside each halo. Positive $dZ$ represents a situation where the dense gas is metal-rich compared to the average gas of the halo. In Fig.~\ref{fig:Sliceplot} we show two exemplary slice plots of haloes with negative and positive $dZ$.
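In practice, $dZ$ can be evaluated for each halo directly from the simulation cells, e.g. along the following lines. This is a sketch that assumes mass-weighted sums over all cells inside the virial radius; the array names are ours.
\begin{verbatim}
import numpy as np

def metallicity(m_metal, m_H, solar_ratio):
    """[metal/H] in dex relative to solar (see the footnote above)."""
    return np.log10(np.sum(m_metal) / np.sum(m_H)) - np.log10(solar_ratio)

def metallicity_shift(m_metal, m_H, n, solar_ratio, n_dense=1.0):
    """dZ = Z_dense - Z_all for one halo; n is the number density
    of each cell in cm^-3."""
    dense = n > n_dense          # "dense gas": n > 1 cm^-3
    return (metallicity(m_metal[dense], m_H[dense], solar_ratio)
            - metallicity(m_metal, m_H, solar_ratio))
\end{verbatim}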
The mean gas metallicity, $Z_\mathrm{all}$, is available in many models with coarse resolution. Together with an analytical prediction of $dZ$, this allows one to calculate the actual metallicity of the star-forming gas. \begin{figure} \includegraphics[width=\columnwidth]{Z_densitycontour_galaxy2312.pdf} \includegraphics[width=\columnwidth]{Z_densitycontour_galaxy1858.pdf} \caption{Slice plots showing the metallicity distribution of haloes with dense gas at redshift $z=12$. In both panels, the large black circle represents the virial radius of the halo, and the white/black curves are iso-density contours. The top panel illustrates a halo that has negative $dZ$ ($dZ = -0.28$), suggesting that gas collapse occurs before metal enrichment, and metals cannot penetrate into the dense gas. The bottom panel illustrates a halo that has positive $dZ$ ($dZ = 0.14$), suggesting that gas collapse occurs after metal enrichment. The masses of the dark matter haloes presented in the top and bottom panels are $7.4\times 10^{7} \,\ensuremath{\mathrm{M}_\odot}$ and $3.0\times 10^{8} \,\ensuremath{\mathrm{M}_\odot}$, respectively.} \label{fig:Sliceplot} \end{figure} \subsubsection{Physical interpretation} Various processes are at play to produce positive or negative values of $dZ$. A positive shift can be related to metal line cooling: since metal-rich gas cools efficiently by metal and dust cooling, it is likely to become dense. There are two processes with the opposite effect. One process is shielding: metal-rich winds cannot penetrate dense gas clumps easily \citep{jeon14,chen17b,chiaki18}. If gas cooling and clump formation occur earlier than metal enrichment, the metallicity shift may take a negative value. The other process is the common origin of feedback: metal-rich gas naturally tends to be hot, because the sources of thermal and chemical feedback are the same \citep{emerick18}. The interplay of these processes, together with the inhomogeneous mixing of metals with pristine gas, manifests itself in the distribution of $dZ$. \subsubsection{Correlation analysis} The goal is to find an analytical expression to predict $dZ$ for a given halo. We expect $dZ$ to be correlated with other halo properties. For example, the stellar mean age traces the time that has passed since a star formation event; galaxies with older stellar populations have had more time to mix after the energy injection. If we wait longer than the cooling time of the metal-rich gas, one could expect $dZ$ to take a positive value through the first process explained above (while the third process fades away). We calculate Pearson correlation coefficients between halo properties and the metallicity shift. We include the number of Pop~III SNe (which traces the energy injected by Pop~III stars, pop3SNcount), halo mass (halo mass), Pop~III mass that went into SNe (pop3 mass), Pop~II stellar mass (which traces injected energy, pop2 mass), gas mass (gas mass), metallicity of all gas (Z$_\mathrm{all}$), mean temperature (temperature), stellar mean age (which traces the time passed since a star formation event, stellar mean age), metallicity of dense gas (Z$_\mathrm{dense}$), and mass of dense gas (mass of dense gas). First, we calculate the correlation matrix among these quantities in search of explanatory variables for $dZ$. Pearson's correlation coefficients are presented in Fig.~\ref{fig:non_correlation}. The two strongest correlations with $dZ$ are the dense gas metallicity and the average gas metallicity, which is natural because $dZ$ is defined based on these two quantities.
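For reference, these coefficients can be obtained directly from a per-halo catalogue along the following lines (a sketch; the dictionary of arrays is a hypothetical stand-in for the actual data):
\begin{verbatim}
import numpy as np

def correlations_with_dZ(catalogue):
    """Pearson coefficient of each halo property with dZ.

    `catalogue` maps property names to equal-length arrays
    (one entry per halo) and must contain a "dZ" key.
    """
    dZ = catalogue["dZ"]
    return {name: np.corrcoef(values, dZ)[0, 1]
            for name, values in catalogue.items() if name != "dZ"}
\end{verbatim}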
The absence of any other strong correlations demonstrates that metal mixing in the first galaxies is an intrinsically stochastic process. The mass of Pop~II stars shows the next strongest correlation. When we take a close look at the actual scatter plot between $dZ$ and the mass of all stars, two distinct clusters are observed (Fig.~\ref{fig:dZ_Mstars_bimodality}). $dZ$ behaves very differently between haloes with stars and those without stars. This bimodality also explains the other correlations with $dZ$, namely the number of Pop~III SNe and the Pop~III mass that went into SNe. If a halo has a finite metallicity but did not experience SNe before, it must have been enriched externally by at least one SN from a nearby halo. We do not see any other quantities that are correlated with $dZ$. In particular, we do not find a dependence of the metallicity shift on redshift or halo mass. \begin{figure} \includegraphics[width=\columnwidth]{Fig2_new.pdf} \caption{Pearson's correlation coefficients between $dZ$ (the metallicity shift defined in Eq.~(\ref{eq:dZ})) and other halo properties. $dZ$ has a strong correlation with the metallicity of the dense gas and the metallicity of all gas, which is natural because $dZ$ is defined via these quantities. $dZ$ also shows a mild correlation with the Pop~II stellar mass and the Pop~III progenitor stellar mass. These correlations come from the bimodality of $dZ$, distinguished by whether haloes contain stars or not. } \label{fig:non_correlation} \end{figure} \begin{figure} \includegraphics[width=\columnwidth]{dZ_Mstar_forpaper.pdf} \caption{Scatter plot between $dZ$ and the mass of all stars in a halo. This figure clearly shows a bimodality, suggesting that internal enrichment and external enrichment behave very differently in terms of $dZ$. We label haloes without stars as ``external enrichment'' because they are dominated by external enrichment, and haloes with stars as ``internal enrichment'' because they are dominated by internal enrichment. Since the horizontal axis is logarithmic, we artificially set 1$\,\ensuremath{\mathrm{M}_\odot}$ for haloes without any stars for illustration purposes. On the right we show the histogram of $dZ$ for externally enriched haloes. } \label{fig:dZ_Mstars_bimodality} \end{figure} \begin{figure} \includegraphics[width=\columnwidth]{Z_dZ_forpaper.pdf} \caption{ Scatter plot between the average metallicity of all gas and the metallicity difference between dense gas and average gas, $dZ$. The trend suggests that gas at [Fe/H]$\,\gtrsim -4$ is dominated by internal enrichment, where the SN energy can efficiently mix all gas components, whereas metal-poor gas is dominated by external enrichment. [Fe/H]$\,\simeq -4$ corresponds to the critical metallicity for Pop~II star formation in the Renaissance simulation.} \label{fig:Z_dZ_overall} \end{figure} \subsubsection{Overall Trend} Fig.~\ref{fig:Z_dZ_overall} shows a scatter plot between the average metallicity inside each halo and $dZ$. Internally enriched haloes reside in the relatively high-metallicity region, while externally enriched haloes are widely scattered in the figure. In internal enrichment, where the direct energy injection by SNe strongly disrupts the host halo and creates turbulence, the metals produced by SNe mix efficiently with the surrounding gas. It is expected that in such haloes $dZ$ is close to zero. On the other hand, in external enrichment, where the produced metals are ``just'' accreted onto the gas cloud, mixing is not efficient.
In cases where the gas collapses before the external enrichment takes place, $dZ$ is expected to be negative, because metals cannot penetrate into gas clouds that are already dense. Since internal enrichment pollutes each halo to higher metallicity compared to external enrichment, it is natural to interpret the increasing trend in $dZ$ as the transition of the enrichment mode from external to internal enrichment. \subsection{Internal Enrichment} We identify as internally enriched those haloes that contain at least one star that went into a SN. In internally enriched haloes, almost no correlation is observed between metallicity and $dZ$; we therefore regard the distribution as metallicity-independent and fit it with a Gaussian distribution function with $\mu = -0.03, \sigma = 0.15$: \begin{equation} p(x) = \frac{1}{\sqrt{2\pi (0.15)^{2}}}\exp\biggl[-\frac{(x+0.03)^2}{2 (0.15)^{2}}\biggr]. \label{eq:Gaussian} \end{equation} The mean $dZ$ is almost zero, though slightly negative, suggesting that on average the star-forming gas has almost the same metallicity as the average gas of the halo. However, the distribution is somewhat skewed, with a longer tail on the negative end. This can also be seen by calculating the mean and standard deviation directly from the data, instead of fitting a Gaussian distribution: the mean and standard deviation of the data points are $-0.08$ and 0.27\,dex, respectively. \subsection{External Enrichment} We identify as externally enriched those haloes that do not contain any stars that went into a SN. The importance of such external enrichment for the formation of EMP stars has already been pointed out by \citet{smith15}, although \citet{jaacks18} show that external enrichment alone may not be sufficient to reach the critical metallicity (Eq.~\ref{eq:transition_criterion}) to trigger Pop~II star formation. In externally enriched haloes, an obvious increasing trend is observed between metallicity and $dZ$. We therefore bin the metallicity with $\Delta Z = 1$ in the range $-5 < Z \leq -1$, and group all haloes with $Z \leq -5$ together. We calculate the mean and standard deviation in each bin. We fit the distribution functions of $dZ$ at different metallicities with exponentially modified Gaussian distributions. The distribution has three free parameters, $(\mu, \sigma, \lambda)$, and the probability distribution function $p(x; \mu, \sigma, \lambda)$ is \begin{equation} p(x; \mu, \sigma, \lambda)= \frac{\lambda}{2} \exp\biggl[\frac{\lambda}{2}(2\mu+\lambda\sigma^{2}-2x)\biggr]{\rm erfc}\biggl(\frac{\mu+\lambda\sigma^2-x}{\sqrt{2}\sigma}\biggr), \label{eq:PdZ_external} \end{equation} where \begin{eqnarray} {\rm erfc}(x) &=& 1 - {\rm erf}(x)\\ &=& \frac{2}{\sqrt{\pi}}\int^{\infty}_{x}e^{-t^{2}}dt. \label{eq:erfc} \end{eqnarray} The fitting results, which can be used to implement a sub-grid model for incomplete mixing, are presented in Table~\ref{tab:fitting_result}. The table shows the evolution of the distribution function of $dZ$ with mean metallicity. \begin{table} \centering \caption{Best-fitting parameters for external enrichment.
These parameter sets are for ``$-dZ$'', not $dZ$ itself.} \label{tab:fitting_result} \begin{tabular}{l|lll} \hline Metallicity & $\mu$ & $\lambda$ & $\sigma$ \\ \hline $-2<Z\leq -1$ & -0.07 & 0.14 & 2.78\\ $-3<Z\leq -2$ & -0.01 & 0.24 & 1.41\\ $-4<Z\leq -3$ & 0.08 & 0.20 & 0.77\\ $-5<Z\leq -4$ & 0.16 & 0.19 & 0.38\\ $-8<Z\leq -5$ & 1.63 & 1.01 & 0.35\\ \end{tabular} \end{table} \subsection{Implementation} We implement this new recipe for metal mixing in \textsc{a-sloth}, which constitutes a major improvement compared to earlier versions of the code. We discriminate between internally and externally enriched haloes by the stellar mass inside the haloes. If a halo has already experienced star formation (either Pop~II or Pop~III), we apply the internal enrichment formula. Otherwise, we apply the external enrichment formula, equation~(\ref{eq:PdZ_external}), based on pre-calculated look-up tables. \subsection{Comparison to other research} One of the limitations of our metal-mixing model is the finite resolution of the Renaissance simulations, which may not allow us to capture inhomogeneities at the highest densities. In the high-resolution simulations of \cite{greif10} the metallicity of the recollapsing halo becomes uniform, implying an almost zero metallicity shift for internally enriched haloes. \cite{chiaki18} show several cases of internal enrichment where the metallicity of the recollapsing gas is much lower than the average metallicity of the halo, implying a negative metallicity shift that is potentially larger than what we find. Both simulations, as well as \cite{chen17b} and \cite{smith15}, show that the densest parts of externally enriched haloes are usually not enriched efficiently. This is consistent with the strongly negative metallicity shift we find for externally enriched haloes. However, all simulations of this type focus on one or a few haloes. The low-number statistics of the high-resolution simulations do not yet allow a statistical description of the inherently chaotic process of metal mixing. In Fig.~\ref{fig:metal_mixing_distribution} we compare our new implementation of the metallicity shift to previous approaches. In one SAM \citep{deB17} this metallicity shift was not taken into consideration, and their treatment corresponds to $dZ = 0$ for all haloes. In another SAM \citep{cote18} metallicity inhomogeneity is taken into account by convolving the final MDF with a Gaussian with $\mu = 0,\ \sigma = 0.2$. \begin{figure} \includegraphics[width=\columnwidth]{dZ_dist_comparison.pdf} \caption{Comparison of our new metallicity shift treatment (green, blue) to previous implementations. Our sample is taken from the Normal $z=12$ dataset (histograms). The green curve is the fitted distribution function of $dZ$ among externally enriched haloes, exemplarily for the metallicity range $-4<\mathrm{Z}\leq -3$. The blue curve is the fitted distribution function of $dZ$ among internally enriched haloes. For comparison, we also show the $dZ$ distribution of a SAM considering inhomogeneous metallicity inside each galaxy \citep{cote18} and of a homogeneous model \citep[$\delta$-function at zero, e.g. used in][]{deB17}. Our mean metallicity shift is negative for both internal and external enrichment, and we see a prominent long tail towards the negative end for external enrichment. } \label{fig:metal_mixing_distribution} \end{figure} \citet{cote18} report that they need this convolution to reproduce with their SAM the MDF extracted from a hydrodynamical simulation.
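For completeness, the sampling step of our implementation can be sketched as follows. This is an illustrative Python snippet, not the actual \textsc{a-sloth} code: it uses the fits of Table~\ref{tab:fitting_result} (which describe $-dZ$), assumes $Z_\mathrm{all} \leq -1$, and relies on the fact that \texttt{scipy}'s shape parameter for the exponentially modified Gaussian is $K = 1/(\lambda\sigma)$.
\begin{verbatim}
import numpy as np
from scipy.stats import exponnorm

# (mu, lambda, sigma) fits for -dZ, keyed by the upper bin edge in Z
EXTERNAL_FITS = {-1: (-0.07, 0.14, 2.78), -2: (-0.01, 0.24, 1.41),
                 -3: (0.08, 0.20, 0.77), -4: (0.16, 0.19, 0.38),
                 -5: (1.63, 1.01, 0.35)}

def sample_dZ(Z_all, has_stars, rng=np.random.default_rng()):
    """Draw one metallicity shift dZ for a halo with Z_all <= -1."""
    if has_stars:                          # internal enrichment
        return rng.normal(-0.03, 0.15)
    edge = max(-5, int(np.ceil(Z_all)))    # metallicity bin of the table
    mu, lam, sigma = EXTERNAL_FITS[edge]
    K = 1.0 / (lam * sigma)                # scipy's shape parameter
    return -exponnorm.rvs(K, loc=mu, scale=sigma, random_state=rng)
\end{verbatim}
Tail statistics, such as the fraction of externally enriched haloes with $dZ < -1$ quoted in the discussion below, can be computed analogously from the survival function \texttt{exponnorm.sf}.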
In our previous implementation, we assumed that the produced metals do not mix with all the hydrogen, and therefore the metallicity shifts were positive \citep{hartwig18a}. The cosmological simulation, however, suggests that the mean metallicity shift is negative for both internal and external enrichment. Previous authors did not try to derive such a metallicity shift. However, their approaches and implementations can be interpreted in our new framework as metallicity shifts. In this comparison, the implementation of \citet{cote18} matches our method for internal enrichment very well. \citet{sarmento17,sarmento19} and \citet{safarzadeh18} use elaborate sub-grid models to keep track of the pristine gas fraction in each cell. Their models let Pop~III stars form even in enriched cells. Such a treatment is suited to following ``residual'' Pop~III star formation in enriched regions. However, the main purpose of our $dZ$ is to predict the metallicity of stars (or of their progenitor, the star-forming gas), taking the inhomogeneity of the metallicity into account. For this purpose, the direct analysis of cosmological simulations is the best way to extract this information. \citet{hirai17} included a sub-grid metal diffusion recipe. Their model calculates the amount of metals diffused to neighbouring cells from the metallicity gradient, the shear tensor of the cells, and a scaling factor for metal diffusion. They use elemental abundance patterns to calibrate their sub-grid model and conclude that the metal mixing timescale is less than 40\,Myr, shorter than the dynamical time of typical dwarf galaxies. This comparatively short mixing timescale means that gas and metals are well mixed, which is consistent with our overall trend that the typical $dZ$ is close to zero. A halo with a very negative $dZ$ can be produced if the collapse of a gas cloud happens earlier than the mixing timescale after the first SNe. Such haloes also exist in the simulation, see e.g. the top panel of Fig.~\ref{fig:Sliceplot}. An alternative approach is to describe metal mixing as a diffusion process \citep{karlsson08, komiya20}. These works assume diffusion coefficients that allow the galactic gas to be mixed well within a short period of time ($\simeq 30$\,Myr). This is consistent with our finding that in internally enriched haloes the gas is mixed quite well. \section{Results} \label{sec:results} In this section, we will first present the calibration of our model based on the MDF and discuss the effect of metal mixing. Then, we will demonstrate that this calibrated model is also able to reproduce additional, independent observations. \subsection{MDF} To calibrate our theoretically predicted MDF we use the de-biased MDF in the range $-4 \leq \mathrm{[Fe/H]} \leq -3$ provided by \cite{2020Youakim_MDF}. This MDF is based on the photometric Pristine survey \citep{starkenburg17}, corrected for all major biases. This metallicity range is strongly affected by the properties of Pop~III stars and dominates the statistical comparison. For our model prediction, we exclude stars in simulated satellite galaxies to guarantee that we compare halo stars to halo stars. To quantify the fit quality, we calculate the Kolmogorov-Smirnov (KS) distance for the MDF from each merger tree, i.e., the maximum distance between the observed and modelled cumulative MDF in the range $-4 \leq \mathrm{[Fe/H]} \leq -3$. We use 30 independent MW-like halo trees from the Caterpillar simulation. First, we execute the model and obtain the predicted MDF for each tree.
Next, we calculate the KS distance for each MDF. Finally, we use the average of the 30 KS distances to quantify the agreement between observation and model prediction. \begin{table} \centering \caption{Parameter values in our fiducial model. This set of parameters best reproduces the MDF at $-4\leq$ [Fe/H] $\leq-3$, as we show below. We fixed the Pop~III metal fallback fraction at $f_\mathrm{fallback} = 1 - f_\mathrm{eject}$.} \label{tab:fiducial parameter} \begin{tabular}{ll} \hline Parameter & Value\\ \hline Pop~III SFE & $\eta_{\rm{III}} = 1\times 10^{-2}$\\ Pop~III metal ejection fraction & $f_{\rm{eject}} = 80\%$\\ lower IMF limit & $M_{\rm{min}} = 2\,{\rm M}_{\odot}$\\ upper IMF limit & $M_{\rm{max}} = 180\,{\rm M}_{\odot}$\\ IMF slope & $\alpha = 0.5$\\ recovery time & $t_\mathrm{recov} = 30$ Myr \end{tabular} \end{table} We calibrate the model parameters by minimizing the average of the KS distances to the observed de-biased MDF \citep{2020Youakim_MDF}. We present the parameters of our fiducial model in Table~\ref{tab:fiducial parameter}. We find that a top-heavy IMF with slope $\alpha = 0.5$ in the mass range from $2\,\ensuremath{\mathrm{M}_\odot}$ to $180\,\ensuremath{\mathrm{M}_\odot}$, with a Pop~III SFE of $1\%$, best reproduces the MDF. For this set of parameters, the average KS distance is 0.074. We also estimate p-values from the KS distances assuming an observational sample size of 2762, based on the sum of the ``corrected'' row in Table~A1 of \cite{2020Youakim_MDF}. The average p-value over the 30 merger trees is 0.018. This means that our calibrated MDF and the observation are in some tension, but the difference is not statistically significant at the 99\% confidence level. This tension is partly due to the fact that not all merger trees are equally representative of the merger history of the MW. The highest p-value for one MW-like merger tree with the fiducial parameters is 0.49, showing the importance of variations in the merger history. In Fig.~\ref{fig:IMF comparison} we compare our derived fiducial IMF with an independent IMF obtained from numerical simulations \citep{hirano15}. The green region shows the range of best-fitting IMFs, i.e., those with a p-value larger than 90\% of that of the fiducial model. The yellow region illustrates the marginally well-fitting IMFs, i.e., those with a p-value larger than 10\% of that of the fiducial model. The red region represents the disfavoured IMFs, i.e., those with a p-value of less than 10\% of that of the fiducial model. The homogeneous model is within the best-fitting region; therefore, it is statistically indistinguishable from our best-fitting model with inhomogeneity. All the best-fitting IMFs are more top-heavy than the Salpeter IMF. Our IMF favours stars with masses of 2--200$\,\ensuremath{\mathrm{M}_\odot}$. A large fraction of PISNe from Pop~III stars is not favoured (see also \citet{deB17} and \citet{salvadori19}). Our calibration is not very sensitive to the lower mass limit of the Pop~III IMF, because such low-mass stars do not contribute to chemical enrichment and therefore do not directly affect the MDF. Also, our calibration based on the KS test is not very sensitive to the low-metallicity tail of the MDF, because of the small number of observed stars in this range. Therefore, our improved model for metal mixing, which mostly affects star formation at very low metallicities, does not affect the IMF calibration significantly. \begin{figure} \includegraphics[width=\columnwidth]{IMF_comparison_new_sum.pdf} \caption{Comparison of the predicted primordial IMF.
The black line illustrates our fiducial model, which minimizes the average KS distance for the 30 trees. The blue line illustrates the calibration assuming homogeneous metal mixing. The green/yellow shaded region is the IMF range that has an average p-value of more than 90\%/10\% of the one obtained with the fiducial parameters. The red shaded region includes all other tested IMFs. The dashed line represents the Salpeter slope to guide the eye. We also overplot (red line) the Pop~III IMF by \citet{hirano15} derived from numerical simulations. Taking inhomogeneity into account slightly modifies the IMF prediction, but the effect is within the uncertainty of the model.} \label{fig:IMF comparison} \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{MDF_all.pdf} \caption{Averaged MDF of the 30 merger trees with the fiducial model parameters. The black histogram is the model prediction. The green histogram is the metallicity distribution function from external enrichment only. The red curve is the de-biased MDF obtained by \citet{2020Youakim_MDF} and the blue curve is the MDF from the SAGA database \citep{saga}.} \label{fig:MDF_example} \end{figure} In Fig.~\ref{fig:MDF_example} we compare the calibrated MDF with the observed MDF. Internal enrichment is dominant at [Fe/H]$\,> -4.5$, and the metallicity inhomogeneity only plays a minor role in this metallicity range. In our calibration we only compare the MDF in the range [Fe/H]$\,> -4$. Therefore, the metallicity inhomogeneity only has a small effect on the Pop~III IMF. \begin{figure} \includegraphics[width=\columnwidth]{MDF_all_only_linear.pdf} \caption{MDF comparison among various model parameters. The black dots represent the observed MDF \citep{2020Youakim_MDF}. To show the approximate error from sampling, we plot Poisson errors derived from the number of observed stars in each bin. The black curve represents the fiducial model MDF and the shaded region shows the one-sigma scatter of the 30 trees. The other curves are model predictions with one parameter modified from its fiducial value. The modelled MDFs are normalised by the mass of stars at [Fe/H]$\,< -3$. The observed MDF is normalised by the number of stars at [Fe/H]$\,< -3$. We can see that $M_\mathrm{max} = 100\,\ensuremath{\mathrm{M}_\odot}$ and $\eta _{III} = 10^{-3}$ predict too many stars at [Fe/H]$\,\lesssim -4.0$. These models tend to produce fewer metals per Pop~III star formation event, and such small metal mass events contribute too much to the formation of stars at [Fe/H]$\,< -4.0$. On the other hand, $M_\mathrm{max} = 260\,\ensuremath{\mathrm{M}_\odot}$ and $\eta _{III} = 10^{-1}$ predict too many stars at [Fe/H]$\,\gtrsim -3.25$, although they are consistent with the fiducial MDF within the scatter of the Milky Way-like merger trees. To eliminate such parameter sets we need to resort to different information sources.} \label{fig:MDFParameterExploration} \end{figure} In Fig.~\ref{fig:MDFParameterExploration} we show how a different choice of model parameters affects the MDF. For some choices of parameters (the $M_\mathrm{max} = 100\,\ensuremath{\mathrm{M}_\odot}$ and $\eta_{\rm III} = 10^{-3}$ models, presented with blue curves), the models predict too many stars at [Fe/H]$\,\lesssim -4.0$. This is the consequence of the decrease in the amount of metals produced by Pop~III stars. Stars with lower masses convert pristine gas to metals less efficiently than higher-mass stars. Also, a lower Pop~III star formation efficiency decreases the overall metal production from Pop~III stars.
The decrease in the metal mass from Pop~III stars consequently decreases the metallicity of second-generation stars, and therefore the number of stars at [Fe/H]$\,< -4.0$ increases. For other choices of parameters (the $M_\mathrm{max} = 260\,\ensuremath{\mathrm{M}_\odot}$ and $\eta_{\rm III} = 10^{-1}$ models, presented with red curves), the opposite trend is observed. These models produce too much metal from Pop~III stars, directly enriching the host galaxy to [Fe/H]$\,\gtrsim -3$. In these models the fraction of stars at [Fe/H]$\,\lesssim -3.5$ is smaller than in the observed stellar metallicity distribution function, although the difference is typically smaller than the scatter among different merger trees. This comparison suggests that the lower limits of $M_\mathrm{max}$ and $\eta_\mathrm{III}$ can be constrained well by our method; to constrain the upper limits of these parameters we need additional information, such as the (non-)detection of stars with a PISN abundance pattern \citep[see e.g.][]{salvadori19}. \subsection{Additional Observables} In Fig.~\ref{fig:SMHM} we show the stellar mass to halo mass relation at the present day. The dots are the stellar masses at $z=0$ calculated with our fiducial model. The trees are sampled randomly. The solid line is the abundance matching relation derived by comparing the number of satellites around the MW with dark-matter-only simulations of MW-like haloes \citep{gk14}. In the shaded region (stellar mass $<10^{5} \,\ensuremath{\mathrm{M}_\odot}$), the abundance matching relation is not reliable due to poor sampling in the observation \citep{gk17}. Our fiducial model reproduces the stellar mass to halo mass relation at $z=0$ reasonably well. Our model also reproduces the fraction of externally enriched haloes as a function of redshift that is found in the Renaissance simulation: it increases from $\sim 10\%$ at $z=18$ to $\sim 23\%$ at $z=12$. While this is not an observable, it is an additional independent crosscheck for our approach. \begin{figure} \centering \includegraphics[width=\hsize]{SMHM_5trees.pdf} \caption{Comparison of stellar mass to halo mass. The dots are the simulated galaxies at $z=0$ in \textsc{a-sloth} with fiducial parameters. Five different colours correspond to five different merger trees, showing the tree-to-tree scatter. The solid line is the abundance matching relation from observations \citep{gk14} and the grey region indicates the stellar mass range in which the abundance matching prediction becomes unreliable.} \label{fig:SMHM} \end{figure} \begin{figure} \centering \includegraphics[width=\hsize]{dZ_distribution.pdf} \includegraphics[width=\hsize]{dZ_distribution_Mhalo.pdf} \includegraphics[width=\hsize]{dZ_distribution_redshift.pdf} \caption{Distribution of $dZ$ as a function of various physical properties of stars. Top panel: for stars with [Fe/H]$\,< -4.5$, external enrichment is the dominant channel. Therefore, inhomogeneity is important for such ultra metal-poor stars. Middle panel: in massive, mature haloes ($M_\mathrm{halo} > 10^{8} \,\ensuremath{\mathrm{M}_\odot}$) star formation has already begun, and therefore they are internally enriched. External enrichment is important when we consider star formation in small galaxies ($\sim 10^{7} \,\ensuremath{\mathrm{M}_\odot}$). Bottom panel: for star formation events at very high redshift, external enrichment plays a role.} \label{fig:dZ_distributions} \end{figure} \subsection{When is $dZ$ important?} In Fig.~\ref{fig:dZ_distributions} we show the distribution of $dZ$ as a function of various physical parameters.
Since $dZ$ is most important in external enrichment, these panels indicate under what conditions external enrichment becomes important. For some stars with low [Fe/H] ($\lesssim -4$), $dZ$ is strongly negative. This means that the metallicities of these stars would have been much higher had we assumed homogeneity. The clear cut-off at [Fe/H]$\,\sim -5.1$ comes from our Pop~II star formation criterion. For stars with [Fe/H]$\,> -5.1$, we do not require carbon enhancement to form stars, whereas for [Fe/H]$\,< -5.1$ we do require carbon enhancement (see Eq.~\ref{eq:transition_criterion}). A strongly negative $dZ$ means a high (naively calculated) [Fe/H], and such gas is therefore less likely to be carbon-enhanced and allowed to form stars. The middle panel shows that in small haloes $dZ$ can be strongly negative. In massive galaxies ($\gtrsim 10^{8} \,\ensuremath{\mathrm{M}_\odot}$), star formation has already begun; therefore, they are internally enriched and well mixed. Combining this with our earlier analysis of the $dZ$ distribution, we find that the metallicity inhomogeneity is less significant in massive galaxies. The bottom panel shows that $dZ$ is only important at high (although not the highest) redshifts. Low-redshift galaxies are homogeneous for the same reason as the massive galaxies. The highest-redshift galaxies ($z \gtrsim 20$) are homogeneous because they have not had enough time after the first star formation events to experience external enrichment. In summary, inhomogeneous metal mixing is important in low-mass haloes at high redshift that are about to form EMP stars. \section{Discussion} We have performed the first cosmologically representative analysis of metal mixing in high-redshift galaxies. We derived a physically motivated estimate of $dZ$, the metallicity difference between star-forming and all gas, and find that the distribution of $dZ$ is very different between haloes with stars and haloes without stars. This bimodality can be understood by assuming that two processes are at work: internal enrichment and external enrichment. Haloes without stars have not experienced any star-formation events, so they can be identified as externally enriched haloes. In external enrichment, the momentum of the metal-rich wind is not strong. If a galaxy already has a dense gas cloud when the external enrichment takes place, it is expected that metals cannot penetrate into the dense gas. In such cases, $dZ$ is expected to be negative, indicating incomplete mixing. On the other hand, haloes with stars have experienced star formation at least once. The energy injection from SNe mixes up the ISM in the host halo, which results in a more homogeneous mixing between hydrogen and metals. The formulae we have obtained for $dZ$ suggest that the metallicity difference between the star-forming gas cloud and the average gas can be large in external enrichment (Eq.~\ref{eq:Gaussian} and Eq.~\ref{eq:PdZ_external}). We show that in this case 39 per cent of haloes have $dZ$ less than $-1$, i.e. their dense gas is more than 10 times more metal-poor than the average gas. A naive estimate of the metallicity of stars formed via external enrichment can therefore overestimate the actual stellar metallicity. On the other hand, in internal enrichment, the average inhomogeneity is small: the fitted distribution of $dZ$ has a mean of $\mu = -0.03$ and a standard deviation of $\sigma=0.15$.
The absence of any variable, other than metallicity, that correlates strongly with $dZ$ leads us to the conclusion that the metallicity difference between dense gas and average halo gas is intrinsically stochastic. One explanation is the lack of stability in star formation and gas circulation. The stochasticity of star formation in small haloes has been pointed out by \citet{xu16} and \citet{sharma19}. Since small haloes have shallow gravitational potential wells, they easily lose their gas by stellar feedback. The stochasticity of the metallicity difference between star-forming gas and average gas can be related to the stochasticity of star formation. Moreover, the first galaxies had not yet had enough time to develop self-regulated star formation and, therefore, correlations between the involved physical quantities. Despite the large metallicity difference in external enrichment, the predicted IMF is not sensitive to this difference. This can be understood because we compare the MDF mainly at $-4 < \mathrm{[Fe/H]} < -3$, where internal enrichment is the dominant channel and we mostly apply the corresponding distribution with a mean value of $\mu = -0.03$. Therefore, the statistical average over many haloes cancels out the inhomogeneity of each halo. Taking inhomogeneous metal mixing into account does not have a significant influence on current observables. However, we show that the MDF at [Fe/H]$\,< -4.5$ is affected by inhomogeneous metal mixing in the first galaxies (compare Fig.~\ref{fig:MDF_example}). Future observations of more ultra metal-poor stars can confirm or falsify our model of inhomogeneous metal mixing by discriminating between MDFs at [Fe/H]$\,< -4.5$. Our derived IMF is in general agreement with the model by \citet{sarmento19}, who show that the Pop~III IMF was dominated by stars in the mass range $20-120\,\ensuremath{\mathrm{M}_\odot}$ by comparing the radiative and chemical imprint of the first stars to observations. Our upper mass limit of $M_\mathrm{max} = 180\,\ensuremath{\mathrm{M}_\odot}$ is slightly higher than earlier estimates by \citet{deB17}, who show that metal enrichment of EMP stars from PISNe is very rare, and by \citet{jeon17}, who simulate the chemical composition of MW satellites by adopting a Pop~III IMF up to $M_\mathrm{max}= 150\,\ensuremath{\mathrm{M}_\odot}$. We also examined the fraction of carbon-enhanced metal-poor (CEMP) stars as a function of metallicity. CEMP stars are a very prominent sub-class of EMP stars with [C/Fe]$>0.7$ \citep{beers92,aoki07,lee13,salvadori15,sharma18}. The fraction of CEMP stars increases with decreasing metallicity \citep{yong13,placco14,yoon16}, which places them as prototypes of second-generation stars \citep{hansen16}. While we reproduce the general trend of the CEMP-no (CEMP stars without enhancement of neutron-capture elements) fraction, we do not reproduce the fraction of CEMP-no stars in the metallicity range $-5\lesssim$[Fe/H]$\lesssim -3$ with our fiducial model. Our model predicts that only 0.4 per cent of stars with $-4.5 < \mathrm{[Fe/H]} < -3$ are CEMP-no stars, which is below the value reported by \citet{norris19}, who derived that 12 per cent are truly CEMP-no stars even after 3D NLTE corrections. Recently, \citet{komiya20} also suggested that it is quite difficult to reproduce both the MDF and the fraction of CEMP-no stars if faint SNe are the only considered channel for the formation of CEMP-no stars.
While we include faint SNe based on \citet{ishigaki14}, we miss, for example, mass transfer from a binary companion \citep{arentsen19}, carbon enrichment from rotating massive stars \citep{choplin19b}, differential mixing of carbon and iron \citep{frebel14,hartwig19}, or aspherical SN explosions \citep{tominaga07,ezzeddine19}. We have not yet included these additional channels because their nature and relative contributions are not well understood and are a topic of ongoing research \citep{yoon19}. Therefore, we expect that our current model can only provide a lower limit to the CEMP fraction. \subsection{Caveats} Hydrodynamical simulations are limited by numerical resolution. We confirmed that the resolution is sufficient to capture the metallicity structure of dense gas up to $100 \,\mathrm{cm}^{-3}$, because we see almost no difference among different choices of the density threshold in the range [$1\,\mathrm{cm}^{-3}, 100\,\mathrm{cm}^{-3}$]. However, in order to follow the metallicity difference between stars and gas completely, one should analyse the metallicity of denser gas, up to protostar formation. In our work, we could not follow the dense gas cloud phase, where stars are formed, due to the limited resolution. Furthermore, \citet{schauer19b} have shown that around 1000 resolution elements per halo are required to properly resolve the onset of star formation. Thus, star formation in the Renaissance simulations is likely to be artificially delayed, and the mixing behaviour in the smallest haloes may not be captured by our model. The absolute and relative metal yields from Pop~III SNe are subject to uncertainties. The amount of ejected metals depends on, e.g., the explosion energy or rotation. Our model yields for Pop~III SNe are based on \citet{nomoto13}. The mass-dependent explosion energies for these SNe are calibrated based on observed explosion energies at higher metallicity. However, the explosion energies of Pop~III SNe, and therefore the effective metal yields, may not be a monotonic function of the progenitor mass, but rather follow a distribution of explosion energies \citep{ishigaki18}. Hence, while the IMF-averaged yields are more reliable, the metal yields for individual stars may differ from our implementation due to stochastic differences in the explosion energy or stellar rotation. \section{Conclusions} In this work, we have investigated the effect of inhomogeneous metal mixing on the metallicities of EMP stars. The inhomogeneity of the metallicity has not been well understood and is often ignored in semi-analytical approaches. We analysed the cosmological hydrodynamic ``Renaissance Simulations'' \citep{Oshea15} to gain insight into the metallicity of star-forming gas in the first galaxies. The aim is to predict the stellar metallicity based on the metal mass and the gas mass in the halo, taking inhomogeneity into account. The analysis of the hydrodynamical simulations shows that the metallicity of star-forming gas can be different from the average metallicity in a halo. Our analysis suggests that the difference of metallicity between dense gas and average gas, $dZ$, behaves systematically differently for haloes with and without stars (Fig.~\ref{fig:dZ_Mstars_bimodality}). For starless haloes, $dZ$ is strongly negative (typically about 1\,dex or more), and it increases with overall metallicity, suggesting that it is difficult to enrich already dense gas clouds by external enrichment.
For haloes with stars, $dZ$ tends to be close to zero \citep[see also][]{cote18}, though slightly negative, with a 0.15\,dex scatter, which is comparable to observational uncertainties. We do not find other correlations with $dZ$, highlighting its stochastic nature. The small metallicity difference $dZ$ in internal enrichment suggests that the assumption of homogeneity inside haloes made in many existing SAMs is a good approximation. However, one should be cautious with externally enriched haloes, where the difference between the metallicity of all gas and that of the star-forming gas exceeds 1\,dex in 39 per cent of the cases. Taking the metallicity inhomogeneity into account, we calibrated our SAM, \textsc{a-sloth}, and explored various sets of Pop~III IMF-generating parameters. The best-fitting IMF follows $dN/dM \propto M^{-0.5}$ in the range 2--180$\,\ensuremath{\mathrm{M}_\odot}$. The predicted IMF does not change significantly when taking inhomogeneity into account. The uncertainty and degeneracy in other parameters, such as the Pop~III star formation efficiency, can change the prediction of the Pop~III IMF \citep{cote17}. This degeneracy can be resolved if we can obtain an independent estimate of the Pop~III star formation efficiency, such as direct observations of Pop~III-dominated galaxies at high redshift with next-generation telescopes. \section*{Acknowledgements} Numerical computations were carried out on Cray XC50 at Center for Computational Astrophysics, National Astronomical Observatory of Japan. We thank the anonymous referee for helpful and constructive comments on this manuscript. YT is grateful for the hospitality of the Department of Astrophysical Sciences at Princeton University. YT's visit was supported by the University of Tokyo-Princeton strategic partnership grant. This work was supported by JSPS KAKENHI Grant Number 19K23437. MM was supported by the Max-Planck-Gesellschaft via the fellowship of the International Max Planck Research School for Astronomy and Cosmic Physics at the University of Heidelberg (IMPRS-HD). We are grateful to the Renaissance collaboration for kindly sharing their simulation and to the Caterpillar collaboration for providing their dark matter merger trees. We also thank Naoki Yoshida, John Wise, and Conor Omand for stimulating discussions and helpful comments on the paper draft, and we thank Kris Youakim for sharing his MDF. The majority of the plots were made with \textsc{yt}. YT wishes to thank the \textsc{yt} community for their hard work, which facilitated this research. We acknowledge the work by Takuma Suda to maintain the SAGA database. \bibliographystyle{aasjournal}
\section{Introduction} Tropospheric lightning produces an electromagnetic field that influences the lower ionosphere. This lightning-generated electromagnetic field can heat and accelerate ionospheric electrons, triggering a cascade of chemical reactions that results in fast optical emissions, known as Transient Luminous Events (TLEs). TLEs are an optical manifestation of the coupling between atmospheric layers. The existence of TLEs was first proposed by \cite{Wilson1925/PPhSocLon} in 1925. However, they were not officially discovered until 1989 by \cite{Franz1990/Sci}, who reported the detection of a sprite, a type of TLE formed by a complex structure of streamers. Since the discovery of sprites, other types of TLEs have been added to the list of these electrical phenomena in the upper atmosphere. Nowadays, the most important TLEs can be divided into sprites, halos, elves, blue jets, and giant jets \citep{Pasko2012/SSR}. The chemical impact and optical signatures of TLEs have been widely investigated by several authors \citep{Sentman2008/JGRD/1, Gordillo-Vazquez2008/JPhD, Parra-Rojas/JGR, Parra-Rojas/JGR2015, Kuo2007/JGRA, Winkler2015/JASTP}. There has also been ground-, balloon-, aircraft- and space-based instrumentation devoted to the study of TLEs, such as the ``Imager of Sprites and Upper Atmospheric Lightning'' (ISUAL) \citep{Chern2003/JASTP, hsu2017/TAOC} of the National Space Organization (NSPO), Taiwan, which was in operation between May 2004 and June 2016, the ``Global LIghtning and sprite MeasurementS'' (GLIMS) \citep{sato2015overview, Adachi2016/JASTP} of the Japan Aerospace Exploration Agency (JAXA), operating between 2012 and 2015, and the ``GRAnada Sprite Spectrograph and Polarimeter'' (GRASSP) \citep{Parra-Rojas2013/JGR, Passas2014/IEEE, Passas2016/APO, Gordillo-Vazquez2018/JGR} together with the high-speed ground-based photometer array known as PIPER \citep{Marshall2008/ITGRS}, both of them currently in operation. Despite the valuable advances in the knowledge of TLEs in the last decades, there are still several open questions about the inception and evolution of these events and their global chemical influence on the atmosphere. For instance, we do not fully understand their relation with the parent lightning. Space-based missions such as the ``Atmosphere-Space Interactions Monitor'' (ASIM) of the European Space Agency (ESA) \citep{Neubert2006/ILWS}, launched on April 2, 2018, and the future ``Tool for the Analysis of RAdiations from lightNIngs and Sprites'' (TARANIS) of the Centre National d'\'Etudes Spatiales (CNES), France \citep{Blanc2007/AdSpR}, will provide new information about these events. The aim of this paper is to contribute to the scientific goals of these space missions. For this purpose, we have developed spectroscopic diagnostic methods to derive the peak reduced electric field in halos and elves. The value of the electric field determines the coupling produced by these TLEs between the troposphere, the mesosphere and the lower ionosphere. Let us now describe the different characteristics and physical mechanisms of these two types of TLEs. Halos are driven by the quasielectrostatic field produced by lightning at altitudes between 75 and 85~km. Halos are disk-shaped optical emissions with a diameter of more than 100~km and lasting less than 10~ms \citep{Pasko1996/GeoRL, Barrington-Leigh2001/JGR, Bering2002/AdSpR,Bering2004/AdSpR,Bering2004/GeoRL,Frey2007/GeoRL, Pasko2012/SSR}.
Elves are a consequence of the electron heating produced by lightning-emitted electromagnetic pulses (EMP) at about 88~km of altitude, with a lateral extension of about 200~km \citep{Boeck1992/GRL,Moudry2003/JASTP, Chang2010/JGRA,Adachi2016/JASTP, van_der_Velde2016/GRL}. These types of TLEs are often observed as ring-shaped optical emissions lasting less than 1~ms. Halos and elves emit light predominantly in the first and second positive systems of neutral molecular nitrogen (1PS~N$_2$ and 2PS~N$_2$, or simply FPS and SPS), the first negative system of the molecular nitrogen ion (N$_2$$^+$-1NS or simply FNS), the Meinel band of the molecular nitrogen ion (Meinel N$_2$$^+$) and the Lyman-Birge-Hopfield (LBH) band of neutral molecular nitrogen. Some authors have used the recorded intensity ratios of these spectral bands to estimate the electric field that produces molecular excitation in air discharges. In particular, \cite{Morrill2002/GeoRL, Kuo2005/GRL, Kuo2009/JGR, Kuo2013/JGRA, Paris2005/JPhD, Adachi2006/GeoRL, Liu2006/GeoRL, Pasko2010/JGRA, Celestin2010/GeoRL, Bonaventura2011/PSST, Holder2016/PSST} based their analyses on the optical emissions from the first negative system and the second positive system of molecular nitrogen at 391.4~nm and 337~nm, respectively. \cite{simek2014optical} discussed the possibility of using other spectral bands (specifically, the FPS and LBH) to estimate the electric field in air discharges below 100~Td. To the best of our knowledge, the FPS and LBH spectral bands have not been employed to estimate the reduced electric field in TLEs. In this work, we extend the analysis of TLEs to these spectral bands. Spacecraft devoted to the observation of TLEs are often equipped with photometers collecting photons corresponding to particular transitions from vibrational level $v^{\prime}$ to $v^{\prime\prime}$, denoted ($v^{\prime}$, $v^{\prime\prime}$), such as FPS(3,0) at 760~nm, SPS(0,0) at 337~nm and FNS(0,0) at 391.4~nm, as well as with photometers covering the LBH spectral band between about 150~nm and about 280~nm. For this reason, we will focus on the spectroscopic diagnostics of halos and elves from the optical signals detected in these bands of the optical spectrum. In this work, we develop two different diagnostic methods to extract physical information from the optical signals emitted by TLEs. The first method is based on the comparison of the measured ratio of optical signals emitted in two different spectral bands with the corresponding theoretical predictions, allowing us to estimate the reduced electric field in halos and elves. The second method is an inversion procedure useful to derive the temporal evolution of the number of photons emitted by elves from the signals recorded by space-based photometers. As a first application of these methods, we apply them to the predicted (synthetic) optical signatures of halos and elves obtained with previously developed electrodynamical models \citep{PerezInvernon2018/JGR}. Afterwards, we test the developed methods with several optical signatures of halos and elves reported by the ISUAL and GLIMS spacecraft, respectively. The organization of this paper is as follows: we present the general spectroscopic diagnostic methods in section~\ref{sect:signal}. Firstly, we apply these methods to the synthetic optical emissions of halos and elves derived with the electrodynamical models described in section~\ref{sect:electrodynamical}. Secondly, the developed diagnostic methods are applied to optical signals of halos and elves recorded by spacecraft.
Results and discussion of the application of these methods to synthetic and real optical signals from halos and elves are presented in section~\ref{sec:results}. The conclusions are finally presented in section~\ref{sec:conclusions}. \section{Optical signal treatment} \label{sect:signal} In this section we describe two spectroscopic diagnostic methods to analyze the optical signals emitted by halos and elves. We present in subsection~\ref{sec:opticalanalysis} a method to derive the reduced electric field in halos and elves from their optical emissions. In the case of elves, the relation between their short duration and large spatial extension implies that photons are not necessarily observed in the same order as they were emitted. We present in subsection~\ref{telvesignal} an inversion method to derive the temporal evolution of the emitted number of photons from the recorded optical signal. \subsection{Deduction of the peak reduced electric field} \label{sec:opticalanalysis} The aim of this section is to describe an optical diagnostic method to extract the reduced electric field from the observation of light emitted by TLEs in the lower ionosphere. We explore the possibility of using this procedure to analyze the optical emissions from TLEs to be recorded by ASIM and the future TARANIS space mission. Let us define $i(t)$ as the temporal evolution of an observed intensity at a particular wavelength or interval of wavelengths. The density of the emitting species, $N_s(t)$, can be estimated from the decay constant $A$ of the transitions that produce photons at the considered wavelength as \begin{linenomath*} \begin{equation} N_s(t) = \frac{i(t)}{A}. \label{densities} \end{equation} \end{linenomath*} In the case of halos, elves and sprite streamers, the gas temperature is low enough to consider that the plasma is far from thermodynamic equilibrium. Hence, we can use the continuity equation of the emitting species to write their temporal production rate $S(t)$ due to electron impact as \begin{linenomath*} \begin{equation} S(t) = \frac{dN_s(t)}{dt} + A N_s(t) + Q N_s(t) \times N - C N^{\prime}(t) + O(t), \label{production} \end{equation} \end{linenomath*} where $A$ is in s$^{-1}$, $Q$ represents all the rate constants for quenching of the considered species by air molecules, in cm$^{3}$\,s$^{-1}$, and $N$ is the density of air in cm$^{-3}$ at the emission altitude. $N^{\prime}(t)$, in cm$^{-3}$, accounts for the density of all the species that populate the considered species by radiative cascade with rate constants $C$, in s$^{-1}$. Finally, the term $O(t)$, in cm$^{-3}$\,s$^{-1}$, includes the remaining loss processes, such as intersystem processes or vibrational redistribution, which are usually negligible compared to quenching. In a first approach, we use equations~(\ref{densities}) and~(\ref{production}) to obtain the ratio of the productions of two different species (1 and 2) at a fixed time $t_i$, given by $S_{12} = \frac{S_1(t_i)}{S_2(t_i)}$. We use the magnitude $S_{12}$ and the theoretical electric field dependent ratio of the electron-impact productions of species 1 and 2, given by $\nu_{12} = \frac{\nu_1(E/N)}{\nu_2(E/N)}$, to estimate the reduced electric field (the ratio of the electric field and the air density) that satisfies the equation \begin{linenomath*} \begin{equation} \frac{S_1(t_i)}{S_2(t_i)} \simeq \frac{\nu_1(E/N)}{\nu_2(E/N)}. \label{equality} \end{equation} \end{linenomath*} The values of $\nu_i(E/N)$ for all the considered species are calculated using BOLSIG+ for air \citep{Hagelaar2005/PSST}.
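As an illustration, this first approach can be evaluated numerically along the following lines. This is a sketch: the observed time series $i(t)$, the calibration constants and the BOLSIG+ rate coefficients tabulated on a grid of $E/N$ are assumed as inputs, and the cascade term $C N^{\prime}(t)$ and the minor losses $O(t)$ are neglected.
\begin{verbatim}
import numpy as np

def production_rate(i, t, A, QN):
    """S(t) from the continuity equation above, neglecting the
    C and O(t) terms. i: observed intensity, t: time grid (s),
    A: Einstein coefficient (1/s), QN: quenching frequency Q*N (1/s)."""
    N_s = i / A                      # emitting-species density
    return np.gradient(N_s, t) + (A + QN) * N_s

def reduced_field_from_ratio(S12, EN_grid, k1, k2):
    """Invert nu_1/nu_2 = S12 on a tabulated grid of E/N (in Td).
    Assumes k1/k2 grows monotonically with E/N, as is the case for,
    e.g., the FNS over the SPS."""
    return np.interp(S12, k1 / k2, EN_grid)
\end{verbatim}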
The reduced electric field is usually given in Townsend units, defined as 10$^{-17}$~V~cm$^{2}$. This first approach assumes an electric field homogeneously distributed in space. However, halo and elve emissions are produced by an inhomogeneous electric field that varies on the scale of kilometers. We propose below a method to improve this first approach and account for the spatial distribution of the electric field. We define the function $H\left(\frac{E^{\prime}}{N}\right)$ as the number of electrons under the influence of a reduced electric field larger than $E^{\prime}/N$, weighted by the air density $N$, \begin{linenomath*} \begin{equation} H\left(\frac{E^{\prime}}{N}\right) = \int d\vec{r} N(\vec{r}) n_e(\vec{r}) \Theta\left( \frac{E}{N}(\vec{r}) - \frac{E^{\prime}}{N} \right), \label{nen} \end{equation} \end{linenomath*} where $n_e(\vec{r})$ and $\frac{E}{N}(\vec{r})$ are, respectively, the electron density and the reduced electric field spatial distributions. The symbol $\Theta$ corresponds to the step function, being 1 if $E/N > E^{\prime}/N$ and 0 in any other case. The function defined by equation~(\ref{nen}) is monotonically decreasing. In addition, we know that $H\left(\frac{E_{max}}{N}\right) = 0 $ by definition. Therefore, we approximate it to first order as \begin{linenomath*} \begin{equation} H\left(\frac{E^{\prime}}{N}\right) \simeq \alpha \left( \frac{E_{max}}{N} - \frac{E^{\prime}}{N} \right), \label{NEN_line} \end{equation} \end{linenomath*} where $\alpha$ is the slope of the linear approximation. The total excitation of species $i$ by electron impact can be written as \begin{linenomath*} \begin{equation} \nu_i = \int_0^{\frac{E_{max}}{N}} d\left( \frac{E^{\prime}}{N} \right) \left| \frac{dH}{d\left( \frac{E^{\prime}}{N} \right)} \right| k_i\left( \frac{E^{\prime}}{N} \right), \label{productioni_1} \end{equation} \end{linenomath*} where $k_i\left( \frac{E^{\prime}}{N} \right)$ is the electron-impact excitation rate coefficient of species $i$, again calculated using BOLSIG+ \citep{Hagelaar2005/PSST}. Using the derivative of equation~(\ref{NEN_line}), equation~(\ref{productioni_1}) can be expressed as \begin{linenomath*} \begin{equation} \nu_i = \alpha \int_0^{\frac{E_{max}}{N}} d\left( \frac{E^{\prime}}{N} \right) k_i\left( \frac{E^{\prime}}{N} \right), \label{productioni} \end{equation} \end{linenomath*} and the ratio of the productions of two species by electron impact, $S_{12} = \frac{S_1(E/N)}{S_2(E/N)}$, can finally be written as \begin{linenomath*} \begin{equation} S_{12} = \frac{S_1(E/N)}{S_2(E/N)} \simeq \frac{ \int_0^{\frac{E_{max}}{N}} d\left( \frac{E^{\prime}}{N} \right) k_1\left( \frac{E^{\prime}}{N} \right)} {\int_0^{\frac{E_{max}}{N}} d\left( \frac{E^{\prime}}{N} \right) k_2\left( \frac{E^{\prime}}{N} \right)}. \label{production_ratio} \end{equation} \end{linenomath*} Equation~(\ref{production}) allows us to calculate the species production by electron impact from observed intensities, while equation~(\ref{production_ratio}) gives the theoretical reduced electric field dependence of these productions. We can then use these two equations to estimate the maximum reduced electric field underlying halo and elve optical emissions. \subsubsection{Accuracy of the method} \label{sec:signalscomparison} Let us now evaluate the usefulness of each ratio of optical emissions from different excited species for estimating the electric field.
The accuracy of the developed method is limited by the sensitivity to the electric field of the production coefficients in equation~(\ref{equality}). If the ratio between the pair of considered emitted species does not depend significantly on the electric field, this method could be inaccurate. Errors introduced by measurements or by the assumed spatial distribution of the electric field (see equation~(\ref{NEN_line})) could also produce a large uncertainty in the electric field that satisfies equation~(\ref{equality}). We plot in figure~\ref{fig:ratio_Ered} the reduced electric field dependence of the ratio between several electronic emission systems of excited N$_2$. We note that the ratio of any system to the FNS depends strongly on the reduced electric field. However, the ratios of SPS to FPS, FPS to LBH and SPS to LBH do not depend significantly on the reduced electric field for high ($>$150~Td) values. Therefore, we can conclude that the ratios of FPS, SPS or LBH to FNS are the most adequate to estimate the reduced electric field causing optical emissions in TLEs. The ratio of FPS to SPS could also be accurate for reduced electric field values below $\sim$150-200~Td. These results are in agreement with \cite{simek2014optical}, who discussed the possibility of using different spectral bands to estimate the electric field in air discharges. The accuracy of the method is also determined by the goodness of the linear approximation of equation~(\ref{NEN_line}). Figure~\ref{fig:HE} clearly shows that the goodness of the linear approximation is not constant. \begin{figure}[ht] \centering \includegraphics[width=12cm]{ratio_Ered.pdf} \caption{Electric field dependence of the ratio between the rates of excitation of different states of electronically excited N$_2$. We consider excited states whose optical emissions correspond to transitions within the entire First Positive System (FPS), Second Positive System (SPS), First Negative System (FNS) and Lyman-Birge-Hopfield (LBH) band. These rates have been obtained using the Boltzmann solver BOLSIG+ \citep{Hagelaar2005/PSST} and the cross sections used in \cite{PerezInvernon2018/JGR}.} \label{fig:ratio_Ered} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=12cm]{HE.pdf} \caption{Number of electrons under the influence of a reduced electric field larger than $E/N$ and weighted by the air density. This plot corresponds to a simulation of a halo \citep{PerezInvernon2018/JGR} at different times after its onset.} \label{fig:HE} \end{figure} \subsection{Treatment of the optical signal emitted by an elve} \label{telvesignal} The relation between the short duration (less than 1~ms) and the large spatial extension ($\sim$300~km) of elves favors the simultaneous reception by an observer of photons that were not emitted at the same time from the elve. Therefore, the method developed in the previous section to derive the reduced electric field cannot be directly applied to the case of an optical signal from an elve. In this section, we describe an inversion method to deduce the temporal evolution of the source optical emissions of an elve knowing the signal observed by a spacecraft. Firstly, we use the source emissions of a modeled elve to calculate the observed signal as seen from a spacecraft (direct method). Then, we describe an inversion method to recover the emission source.
\subsubsection{Observed signal} \label{observedsignal} \begin{figure} \centering \includegraphics[width=12cm]{geometry.png} \caption{Geometry for calculating the observed optical signal from ASIM. The elve and the spacecraft are located at altitudes $h_0$ and $h_1$ from the ground, respectively. $I(t)$ and $i(t)$ are the temporal evolution of the observed and emitted intensities, respectively. The center of the elve, denoted as $O$, is located at a horizontal distance $l$ from ASIM. $R(t)$ corresponds to the temporal dependence of the elve radius; the elve is assumed to be radially symmetric. The blue line ($s$) represents the path of an observed photon emitted from the elve at a point given by ($R$, $\theta$). } \label{fig:geometry} \end{figure} The aim of this section is to describe an approximate procedure to calculate the observed signal of an elve from a spacecraft given the temporal profile of emitted photons. Using a cylindrically symmetrical two-dimensional coordinate system, the elve center is located right above the lightning discharge, at an altitude $h_0$. Let us suppose that the spacecraft is located at an altitude $h_1$ and horizontally separated from the elve center by a distance $l$, as illustrated in figure~\ref{fig:geometry}. We can calculate the emitted photons per second $i(t)$ using an electrodynamical model of elves \citep{PerezInvernon2018/JGR}. It is important to note that the optical emissions are ring-shaped, with a radius that increases in time according to $R(t) = (c^{2}t^{2} + 2ct(h_{0}-h_{lightning}))^{1/2}$, where $c$ is the velocity of light, the time $t$ = 0 corresponds to the zero radius of the elve and $h_{lightning}$ is the considered altitude of the parent lightning, ranging between 0~km and the length of the lightning channel. Then, the distance $s(t)$ between an elve emitting point and the spacecraft is given by \begin{linenomath*} \begin{equation} s(t) = \left[(h_1-h_0)^{2} + (l-R(t)\cos(\theta))^{2} + R^2(t)\sin^2(\theta)\right]^{1/2}, \label{s} \end{equation} \end{linenomath*} where $\theta$ is the angle between the $r$-axis and the emitting point. We can now calculate the observed signal at a time $\tau$ under the approximation of isotropic emission and assuming that the elve is a thin ring. To do that, we take into account that the photons detected at a given time $\tau$ are those whose time of flight ($s(t)/c$) plus time of emission ($t$) equals $\tau$ \begin{linenomath*} \begin{equation} \hat{I}(\tau) = \frac{A_{ph}}{4\pi} \int_{-\infty}^{\tau} i(t) R(t) dt \int_{-\pi}^{\pi} s^{-2}(t) \delta \left[ \tau - \left( t + \frac{s(t)}{c} \right) \right] d\theta , \label{Itau} \end{equation} \end{linenomath*} where $A_{ph}$ is the area of the detector in the photometer. Firstly, we calculate the angular integration, given by \begin{linenomath*} \begin{equation} K(\tau, t) = \int_{-\pi}^{\pi} s^{-2}(t) \delta \left[ \tau - \left( t + \frac{s(t)}{c} \right) \right] d\theta . \label{intK} \end{equation} \end{linenomath*} For this purpose, we can use the Dirac delta function property \begin{linenomath*} \begin{equation} \int_{a}^{b} f(x) \delta (G(x)) dx = \sum\limits_{i} \frac{f(x_i)}{|G^\prime(x_i)|} \label{deltaprop} \end{equation} \end{linenomath*} where $x_i$ are the zeros of $G(x)$ in the interval ($a$, $b$), assuming no zeros at $a$ or $b$. In our case, the relevant function of $\theta$ is \begin{linenomath*} \begin{equation} G(\theta)= \tau - \left( t + \frac{s(t)}{c} \right). \label{G}
\end{equation} \end{linenomath*} Solving $G(\theta)=0$ we obtain \begin{linenomath*} \begin{equation} \theta = \pm \arccos\left( \frac{(h_1-h_0)^{2}+l^2+R^2(t)-c^2(\tau-t)^2}{2R(t)l} \right), \label{theta} \end{equation} \end{linenomath*} while the derivative of equation~(\ref{G}) is \begin{linenomath*} \begin{equation} G^\prime(\theta) = -\frac{R(t)l}{c^2(\tau - t)} \sin(\theta). \label{Gpr} \end{equation} \end{linenomath*} We can now combine the last four equations and replace $s \rightarrow c(\tau-t)$ to solve the angular integration under the assumption of isotropic emission \begin{linenomath*} \begin{equation} K(\tau,t) = \frac{2}{l(\tau-t)R(t)\sin(\theta) } . \label{kernel} \end{equation} \end{linenomath*} Finally, we can replace (\ref{kernel}) in equation (\ref{Itau}) and integrate in time to obtain the observed signal. We distinguish between two possible cases: \begin{enumerate} \item If the center of the elve is located just below the spacecraft, the horizontal distance $l$ is equal to zero. In this particular case the integrand of equation (\ref{intK}) does not depend on the angle $\theta$, and can be analytically expressed as \begin{linenomath*} \begin{equation} K(\tau, t) = 2 \pi s^{-2}(t) \delta \left[ \tau - \left( t + \frac{s(t)}{c} \right) \right]. \label{Kl0} \end{equation} \end{linenomath*} As a consequence, the integration given by equation (\ref{Itau}) can be solved analytically using the properties of the Dirac delta function to obtain the observed signal $\hat{I}(\tau)$. \item In a more general case, there exists a non-zero horizontal distance $l$ between the elve center and the spacecraft, and therefore the integration~(\ref{Itau}) must be solved numerically. It is important to integrate carefully over the angle $\theta$ near the singularities of $K(\tau, t)$ in equation~(\ref{kernel}), given by $\theta$=0, $\pi$. We refer to the values of $t$ in equation~(\ref{theta}) producing these $\theta$ as $t_{inf}$ and $t_{sup}$. The values of $t_{inf}$ and $t_{sup}$ can be obtained by setting $\cos \theta = \pm 1$ in equation~(\ref{theta}) and solving for $t$, that is, \begin{linenomath*} \begin{equation} \pm 1 = \frac{(h_1-h_0)^{2}+l^2+R^2(t)-c^2(\tau-t)^2}{2R(t)l}. \label{thetasing} \end{equation} \end{linenomath*} The kernel, represented by equation~(\ref{kernel}), contains integrable singularities at $t_{inf}$ and $t_{sup}$ as a consequence of the vanishing of $\sin(\theta)$ at these points. The integration is then performed assuming a piecewise-constant emitted intensity as \begin{linenomath*} \begin{equation} \hat{I}(\tau) = \frac{A_{ph}}{4\pi} \int_{-\infty}^{\tau} K(\tau, t) i(t) dt \simeq \frac{A_{ph}}{4\pi} \sum\limits_j i_j \int_{\max(t_{inf}, t_{j-\frac{1}{2}})}^{\min(t_{sup}, t_{j+\frac{1}{2}})} K(\tau, t) dt \label{piecewise} \end{equation} \end{linenomath*} \end{enumerate} Elves are extensive structures of light with radii of more than 200~km, and therefore it is possible that some emitted photons are out of the photometer field of view (FOV). Assuming a circular photometer aperture with a given FOV angle and knowing the horizontal and vertical separation between the elve and the photometer, we can calculate the maximum distance $s_0$ between an elve emitting point and the photometer as \begin{linenomath*} \begin{equation} s_0 = \frac{h_1 - h_0}{\cos \left( \frac{FOV}{2} \right)}. \label{s0} \end{equation} \end{linenomath*} We can then calculate the observed intensity excluding the photons that come from distances greater than $s_0$ using the Heaviside function $\Theta$.
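A simple numerical check of this construction is to discretize $\theta$ and bin the photon arrival times $\tau = t + s(t)/c$, which evaluates equation~(\ref{Itau}) directly, including the field-of-view cut just introduced, without handling the singular kernel analytically. This binning approach is a robust, if less efficient, alternative to the piecewise quadrature of equation~(\ref{piecewise}); the geometry and sampling below are placeholders, and the detector area $A_{ph}$ is omitted.

\begin{verbatim}
import numpy as np

c = 299792.458  # speed of light [km/s]

def observed_signal(t, i_t, h0, h1, l, h_lightning, tau_edges,
                    s0=np.inf, n_theta=2000):
    """Evaluate equation (Itau) by binning arrival times tau = t + s/c
    over a discretized ring angle; distances in km, times in s.
    Returns photons/s per unit detector area (A_ph omitted)."""
    theta = np.linspace(-np.pi, np.pi, n_theta, endpoint=False)
    dtheta = 2.0 * np.pi / n_theta
    dt = t[1] - t[0]
    R = np.sqrt(c**2 * t**2 + 2.0 * c * t * (h0 - h_lightning))
    I_hat = np.zeros(len(tau_edges) - 1)
    for tk, Rk, ik in zip(t, R, i_t):
        s = np.sqrt((h1 - h0)**2 + (l - Rk * np.cos(theta))**2
                    + (Rk * np.sin(theta))**2)        # equation (s)
        w = ik * Rk * dt * dtheta / (4.0 * np.pi * s**2)
        w[s > s0] = 0.0                               # FOV cut
        hist, _ = np.histogram(tk + s / c, bins=tau_edges, weights=w)
        I_hat += hist
    return I_hat / np.diff(tau_edges)
\end{verbatim}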
Including this field-of-view cut, equation (\ref{Itau}) becomes \begin{linenomath*} \begin{equation} \hat{I}(\tau) = \frac{A_{ph}}{4\pi} \int_{-\infty}^{\tau} i(t) R(t) dt \int_{-\pi}^{\pi} s^{-2}(t) \delta \left[ \tau - \left( t + \frac{s(t)}{c} \right) \right] \Theta(s_{0}-s(t)) d\theta . \label{Itaus0} \end{equation} \end{linenomath*} This method is valid if the emissions are concentrated on a thin ring. According to \cite{Rakov2003/ligh.book}, the typical rise time of CG lightning is of the order of microseconds or tens of microseconds, which corresponds to wavelengths of the order of hundreds of meters or a few kilometers. The ionization front that causes the elve then has a radial thickness that can range between hundreds of meters and a few kilometers. We consider this thickness negligible (compared to the elve's size) and approximate the elve ionization front as a thin ring. This approximation is justified after considering that the most impulsive CG lightning discharges are responsible for most of the observed elves, as the absolute value of the reduced electric field in the pulse is inversely related to the rise time of the discharge. However, the molecules excited by the lightning-radiated pulse do not decay instantaneously. These emitting species decay according to a radiative decay constant $\nu$. Therefore, the elve would be seen as a ring with a thickness and a radial brightness dependency determined by the radiative decay constant of each species. We can then approximate the elve as a sequence of thin rings that emit with different intensities (see figure~\ref{fig:rings}), resulting in an observed intensity $I(\tau)$ that can be calculated as the convolution of each ring intensity with its corresponding decay function as \begin{linenomath*} \begin{equation} I(\tau) = \int_{0}^{\tau} \exp(-\nu t) \hat{I} (\tau - t) dt , \label{rings} \end{equation} \end{linenomath*} \begin{figure} \centering \includegraphics[width=12cm]{rings.png} \caption{Approximation of an elve as a succession of thin rings. $I(t)$ corresponds to the signal observed from ASIM, while $\hat{I}(t)$ would be the signal observed if all the emissions were concentrated in an instantaneous and, consequently, thin ring.} \label{fig:rings} \end{figure} Finally, the atmospheric absorption at each wavelength can be applied to the observed signal $I(\tau)$ in case it is necessary. \subsubsection{Inversion of the signal} \label{inversion} Following our notation, the observed optical signal from a spacecraft is denoted by $I(\tau)$. In this section we describe a procedure to invert this signal and obtain the emitting source $i(t)$ that defines the elve. Firstly, we need to deconvolve the total signal $I(\tau)$ to obtain an individual ring-shaped source $\hat{I}(\tau)$ using the Wiener deconvolution in the frequency domain. We assume a signal-to-noise ratio given by \begin{linenomath*} \begin{equation} SNR(\tau) = \frac{\sqrt{I(\tau) \Delta t}}{\Delta t} , \label{SNR} \end{equation} \end{linenomath*} where $\Delta t$ is the integration time of the observed signal. Now we define the Fourier transform of the signal-to-noise ratio as \begin{linenomath*} \begin{equation} SNR_f(f) = \mathcal{F}[SNR]. \label{SNR_f} \end{equation} \end{linenomath*} As we explained before, the size of the ring-shaped emissions is a consequence of the spatial distribution of the electric field and of the radiative decay constant ($\nu$) of the emitting species.
We calculate the Fourier transform of this decay as \begin{linenomath*} \begin{equation} D_f(f) = \mathcal{F}[\exp(-\nu t)], \label{decayf} \end{equation} \end{linenomath*} and we define the Fourier transform of the observed signal as \begin{linenomath*} \begin{equation} I_f(f) = \mathcal{F}[I(\tau)]. \label{If} \end{equation} \end{linenomath*} We can now obtain the observed signal of each individual ring-shaped source in the frequency domain ($\hat{I}_f(f)$) using the Wiener deconvolution as \begin{linenomath*} \begin{equation} \hat{I}_f(f) = \frac{I_f(f)}{D_f(f)} \left[ \frac{|D_f(f)|^2}{|D_f(f)|^2+SNR_f(f)^{-1}} \right]. \label{Ihat} \end{equation} \end{linenomath*} Finally, we can derive $\hat{I}(\tau)$ as the inverse Fourier transform of $\hat{I}_f(f)$ \begin{linenomath*} \begin{equation} \hat{I}(\tau) = \mathcal{F}^{-1}[\hat{I}_f(f)]. \label{Ihattau} \end{equation} \end{linenomath*} The next step of this inversion process is more complex and has to be accomplished numerically, since the goal is to obtain the function $i(t)$ from the integral equation (\ref{Itaus0}). Solving this kind of equation, known as a Fredholm integral equation of the first kind, is a common problem in mathematics. We use the numerical method proposed by \cite{Hanson1971/SIAM} to solve the equation using singular values. We detail the resolution of this equation in Appendix~\ref{ap:A}. \section{Electrodynamical models} \label{sect:electrodynamical} We use a halo model based on the impact of lightning-produced quasielectrostatic fields on the lower ionosphere using a cylindrically symmetrical scheme. The time evolution of the electric field is coupled with the transport of charged particles and with an extended set of chemical reactions. This model allows us to set the characteristics of the parent lightning that triggers the halo. For a complete description of the model, we refer to \cite{Luque2009/NatGe,Neubert2011/JGRA, Pasko2012/SSR, Qin2014/NatCo,Liu2015/NatCo,PerezInvernon2016/GRL, PerezInvernon2016/JGR, PerezInvernon2018/JGR}. The model of elves is based on the resolution of the Maxwell equations and a modified Ohm's equation using a cylindrically symmetrical scheme. As in the model of halos, we can choose the characteristics of the lightning discharge that produces the elve. The details of this elve model can be found in \cite{Inan1991/GRL, Taranenko1993/GRL, Kuo2007/JGRA, Marshall2010/JGRA/2, Inan2011/Book, Luque2014/JGRA, Marshall2015/GRL, PerezInvernon2017/JGR, Liu2017/JGRAinpress, PerezInvernon2018/JGR}. We couple the electrodynamical models of halos \citep{PerezInvernon2016/GRL} and elves \citep{PerezInvernon2016/JGR} with a set of chemical reactions collected from \cite{Gordillo-Vazquez2008/JPhD,Sentman2008/JGRD/1, Gordillo-Vazquez2009/PSST, Parra-Rojas/JGR} and \cite{Parra-Rojas/JGR2015}. We also include the molecular nitrogen vibrational kinetics proposed by \cite{Gordillo-Vazquez2010/JGRA, Luque2011/JGRA}. The synthetic optical emissions predicted by these models in \cite{Gordillo-Vazquez2010/JGRA} and \cite{PerezInvernon2018/JGR} allow us to estimate the temporal evolution of the intensities observed by space-based photometers. Hence, we can use these predicted intensities together with the reduced electric field calculated by the electrodynamical models in order to test the accuracy of the spectroscopic diagnostic methods.
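Before turning to the results, the Wiener step of the inversion chain of subsection~\ref{inversion} can be summarized in a minimal Python sketch. The discrete decay kernel and the regularization of the noise spectrum below are simplifications of the expressions above, not the exact implementation used in this work.

\begin{verbatim}
import numpy as np

def wiener_deconvolve(I_tau, dt, nu):
    """Thin-ring signal I_hat(tau) from the observed I(tau), equations
    (SNR)-(Ihattau): Wiener deconvolution with the kernel exp(-nu t)."""
    n = len(I_tau)
    D = np.fft.rfft(np.exp(-nu * dt * np.arange(n))) * dt  # eq. (decayf)
    I_f = np.fft.rfft(I_tau)                               # eq. (If)
    snr = np.sqrt(np.maximum(I_tau, 0.0) * dt) / dt        # eq. (SNR)
    snr_f = np.maximum(np.abs(np.fft.rfft(snr)), 1e-12)    # eq. (SNR_f)
    filt = np.abs(D)**2 / (np.abs(D)**2 + 1.0 / snr_f)     # eq. (Ihat)
    with np.errstate(divide="ignore", invalid="ignore"):
        I_hat_f = np.where(np.abs(D) > 0, I_f / D, 0.0) * filt
    return np.fft.irfft(I_hat_f, n=n)                      # eq. (Ihattau)
\end{verbatim}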
\section{Results and Discussion} \label{sec:results} The methods described above can be applied to the modeled optical emissions of elves and halos as well as to the optical signals recorded by spacecraft. This section is divided into two parts. In subsection~\ref{sec:signalmodels} we apply the analysis methods to the modeled optical emissions of halos and elves. This approach allows us to compare the inferred reduced electric field with the self-consistently calculated fields given by the models. Then, we discuss in subsection~\ref{sec:missions} the possibility of applying our procedures to signals reported by ISUAL and GLIMS as well as to the future observations by ASIM and TARANIS. \subsection{Analysis of the signals obtained with the halo and elve models} \label{sec:signalmodels} \subsubsection{Reduced electric field in halos} \label{efieldhalos} Spacecraft devoted to the observation of TLEs are often equipped with photometers collecting photons from the FPS(3,0) at 760~nm, from the SPS(0,0) at 337~nm and from the FNS(0,0) at 391.4~nm, as well as from the spectral (LBH) band between about 150~nm and about 280~nm. In this section, we discuss the possibility of using the observed optical emissions comprised in these wavelengths to deduce the reduced electric field inside a simulated halo triggered by a CG lightning discharge with a CMC of 560~C~km. The halo model allows us to obtain, among others, the temporal evolution of the optical emissions in the vibronic bands centered at 760~nm, 337~nm and 391.4~nm from the First Positive and Second Positive Systems of N$_2$ and from the First Negative System of N$_2^+$. The model also computes the optical emissions in the entire LBH band, including the spectrum between 150~nm and 280~nm where space-based photometers usually record light from TLEs. We denote the intensities of these optical emissions as $I_{FPS(3,0)}(t)$, $I_{SPS(0,0)}(t)$, $I_{FNS(0,0)}(t)$ and $I_{LBH}(t)$, respectively. We can then use equations~(\ref{densities}) and~(\ref{production}) to deduce the temporal production rate of these emitting species using their kinetic rates. However, in the case of halos, the use of the observed LBH band to deduce the production rate of all the molecules emitting in these wavelengths is not possible as a consequence of their quenching rates. As halos are descending events, we cannot use a fixed altitude to estimate the quenching rates of the molecules emitting in the LBH band. In addition, we cannot neglect these quenching rates, as the quenching altitudes of some of them are located above the halo, that is, at altitudes above 80~km (see figure~\ref{fig:quenching_altitude}). Therefore, we cannot use the observed intensity of the LBH band in order to deduce the reduced electric field inside halos. \begin{figure}[ht] \centering \includegraphics[width=12cm]{quenching_altitude.pdf} \caption{Difference between the quenching and the radiative decay characteristic times of each vibrational level of N$_2$(a$^1$ $\Pi _g$ , v = 0, ..., 12). The altitude at which the quenching time is similar to the radiative decay time is called the quenching altitude. The quenching of each state can be neglected for altitudes significantly above the quenching altitude. The rate constants used in this plot are from \cite{PerezInvernon2018/JGR}.
} \label{fig:quenching_altitude} \end{figure} After obtaining the production rate ratios of the emitting species from their corresponding observed optical emissions, we particularize equation~(\ref{production_ratio}) to the considered emitting species to obtain the theoretical reduced electric field dependence of these ratios. To obtain these theoretical production rates, we have to include in equation~(\ref{production_ratio}) the production rates (denoted as $k$) of each species. Let us discuss the particularities of the theoretical production rate of each emitting species depending on the spectral band where the emission is produced, following the kinetic scheme proposed by \cite{Gordillo-Vazquez2008/JPhD,Sentman2008/JGRD/1,Parra-Rojas/JGR, Gordillo-Vazquez2010/JGRA, Luque2011/JGRA} and \cite{Parra-Rojas/JGR2015}: \begin{enumerate} \item Emissions in the vibronic band centered at 391.4~nm $\left(I_{N_2 ^+ (B^2 \Sigma ^+ _u, v = 0)}\right)$ are produced by the radiative decay process N$_2 ^+$ (B$^2$ $\Sigma ^+ _u$, v = 0) $\rightarrow$ N$_2$$^{+}$(X$^1\Sigma^+_g$, v = 0) + $h\nu$. We can estimate the number of particles of the emitting state as \begin{linenomath*} \begin{equation} n_{N_2 ^+ (B^2 \Sigma ^+ _u, v = 0)} = \frac{I_{N_2 ^+ (B^2 \Sigma ^+ _u, v = 0)}}{A_{N_2 ^+ (B^2 \Sigma ^+ _u, v = 0)}}, \end{equation} \end{linenomath*} where $A_{N_2 ^+ (B^2 \Sigma ^+ _u, v = 0)}$ is the rate of the radiative decay process N$_2 ^+$ (B$^2$ $\Sigma ^+ _u$, v = 0) $\rightarrow$ N$_2$$^{+}$(X$^1\Sigma^+_g$, v = 0) + $h\nu$. The only process that contributes to populate this state is direct electron impact ionization of N$_2$ molecules. Therefore, we can calculate the production by electron impact using equation~(\ref{production}) and exclusively considering the radiative decay process as \begin{linenomath*} \begin{equation} S_{N_2 ^+ (B^2 \Sigma ^+ _u, v = 0)} = \frac{dn_{N_2 ^+ (B^2 \Sigma ^+ _u, v = 0)}}{dt} + A_{N_2 ^+ (B^2 \Sigma ^+ _u, v = 0)} n_{N_2 ^+ (B^2 \Sigma ^+ _u, v = 0)}. \end{equation} \end{linenomath*} Then, we use the rate coefficient of the reaction e + N$_2$(X$^1$ $\Sigma _g ^+$, v = 0) $\rightarrow$ e + e + N$_2 ^+$ (B$^2$ $\Sigma ^+ _u$, v = 0), in cm$^{3}$s$^{-1}$, to calculate the theoretical production of N$_2 ^+$ (B$^2$ $\Sigma ^+ _u$, v = 0) in equation~(\ref{productioni}). \item Emissions in the vibronic band centered at 337~nm $\left(I_{N_2(C^3 \Pi _u , v = 0)}\right)$ are produced by the radiative decay process N$_2$(C$^3$ $\Pi _u$ , v = 0) $\rightarrow$ N$_2$(B$^3$ $\Pi _g$ , v = 0) + h$\nu$. The number of particles in the emitting state can be estimated as \begin{linenomath*} \begin{equation} n_{N_2(C^3 \Pi _u , v = 0)} = \frac{I_{N_2(C^3 \Pi _u , v = 0)}}{A_{N_2(C^3 \Pi _u, v = 0)}}, \end{equation} \end{linenomath*} where $A_{N_2(C^3 \Pi _u , v = 0)}$ is the rate of the radiative decay process N$_2$(C$^3$ $\Pi _u$ , v = 0) $\rightarrow$ N$_2$(B$^3$ $\Pi _g$ , v = 0) + h$\nu$. There are two processes that contribute to populate the state N$_2$(C$^3$ $\Pi _u$ , v = 0): the electron impact excitation of N$_2$ and the radiative decay process N$_2$(E$^3$ $\Sigma ^+ _g$) $\rightarrow$ N$_2$(C$^3$ $\Pi _u$ , v = 0) + $h\nu$.
Therefore, we include this radiative decay process in equation~(\ref{production}) to calculate the production of N$_2$(C$^3$ $\Pi _u$ , v = 0) by electron impact as \begin{linenomath*} \begin{equation} \begin{split} S_{N_2(C^3 \Pi _u , v = 0)} = \frac{dn_{N_2(C^3 \Pi _u , v = 0)}}{dt} + A_{N_2(C^3 \Pi _u , v = 0)} n_{N_2(C^3 \Pi _u , v = 0)} - \\ A_{N_2(E^3 \Sigma ^+ _g)} n_{N_2(E^3 \Sigma ^+ _g)}, \end{split} \end{equation} \end{linenomath*} where $A_{N_2(E^3 \Sigma ^+ _g)}$ is the rate of the radiative decay process N$_2$(E$^3$ $\Sigma ^+ _g$) $\rightarrow$ N$_2$(C$^3$ $\Pi _u$ , v = 0) + $h\nu$. However, it is necessary to estimate the number of particles of N$_2$(E$^3$ $\Sigma ^+ _g$). To do that, we assume that the production rate of N$_2$(E$^3$ $\Sigma ^+ _g$) is proportional to the production rate of N$_2$(C$^3$ $\Pi _u$ , v = 0) as $S_{N_2(E^3 \Sigma ^+ _g)} = \chi S_{N_2(C^3 \Pi _u, v = 0)}$, where $\chi$ is a constant. To obtain the value of this constant, we calculate with BOLSIG+ \citep{Hagelaar2005/PSST} the rate coefficients of the reactions e + N$_2$(X$^1$ $\Sigma _g ^+$, v = 0) $\rightarrow$ e + N$_2$(C$^3$ $\Pi _u$ , v = 0) and e + N$_2$(X$^1$ $\Sigma _g ^+$, v = 0) $\rightarrow$ e + N$_2$(E$^3$ $\Sigma ^+ _g$). We obtain that the ratio between these two rate coefficients is $\chi \sim 0.02$ and does not depend significantly on the reduced electric field. We can then write the time derivative of $n_{N_2(E^3 \Sigma ^+ _g)}$ as \begin{linenomath*} \begin{equation} \begin{split} \frac{dn_{N_2(E^3 \Sigma ^+ _g)}}{dt} = \chi \left( \frac{dn_{N_2(C^3 \Pi _u , v = 0)}}{dt} + A_{N_2(C^3 \Pi _u , v = 0)} n_{N_2(C^3 \Pi _u , v = 0)} \right) \\ - (\chi + 1 ) A_{N_2(E^3 \Sigma ^+ _g)} n_{N_2(E^3 \Sigma ^+ _g)}, \end{split} \end{equation} \end{linenomath*} which can be solved as \begin{linenomath*} \begin{equation} \begin{split} n_{N_2(E^3 \Sigma ^+ _g)} = \chi \exp \left(-( 1 + \chi ) A_{N_2(E^3 \Sigma ^+ _g)} t \right) \int^t_0 dt^{\prime} \exp \left(( 1 + \chi ) A_{N_2(E^3 \Sigma ^+ _g)} t^{\prime}\right) \\ \left[ \frac{dn_{N_2(C^3 \Pi _u , v = 0)}}{dt} + A_{N_2(C^3 \Pi _u , v = 0)} n_{N_2(C^3 \Pi _u , v = 0)}\right]. \end{split} \end{equation} \end{linenomath*} Finally, we use the rate coefficient of the reaction e + N$_2$(X$^1$ $\Sigma _g ^+$, v = 0) $\rightarrow$ e + N$_2$(C$^3$ $\Pi _u$ , v = 0), in cm$^{3}$s$^{-1}$, to calculate the theoretical production of N$_2$(C$^3$ $\Pi _u$ , v = 0) in equation~(\ref{productioni}). \item Emissions in the vibronic band centered at 760~nm $\left(I_{N_2(B^3 \Pi _g , v = 3)}\right)$ are produced by the radiative decay process N$_2$(B$^3$ $\Pi _g$ , v = 3) $\rightarrow$ N$_2$(A$^3$ $\Sigma _u ^+$ , v = 1) + $h\nu$. We can estimate the number of particles of N$_2$(B$^3$ $\Pi _g$ , v = 3) as \begin{linenomath*} \begin{equation} n_{N_2(B^3 \Pi _g , v = 3)} = \frac{I_{N_2(B^3 \Pi _g , v = 3)}}{A_{N_2(B^3 \Pi _g , v = 3)}}, \end{equation} \end{linenomath*} where $A_{N_2(B^3 \Pi _g , v = 3)}$ is the rate of the radiative decay process N$_2$(B$^3$ $\Pi _g$ , v = 3) $\rightarrow$ N$_2$(A$^3$ $\Sigma _u ^+$ , v = 1) + $h\nu$. There are several processes that contribute to populate the state N$_2$(B$^3$ $\Pi _g$ , v = 3): the electron impact excitation of N$_2$ and the radiative decay processes N$_2$(C$^3$ $\Pi _u$ , v = 0, ..., 4) $\rightarrow$ N$_2$(B$^3$ $\Pi _g$ , v = 3) + $h\nu$.
Therefore, we have to include these radiative decay processes in equation~(\ref{production}) to calculate the production of N$_2$(B$^3$ $\Pi _g$ , v = 3) by electron impact as \begin{linenomath*} \begin{equation} \begin{split} S_{N_2(B^3 \Pi _g , v = 3)} = \frac{dn_{N_2(B^3 \Pi _g , v = 3)}}{dt} + A_{N_2(B^3 \Pi _g , v = 3)} n_{N_2(B^3 \Pi _g , v = 3)} - \\ A_{N_2(C^3 \Pi _u, v = 0, ..., 4)} n_{N_2(C^3 \Pi _u , v = 0, ..., 4)}, \end{split} \end{equation} \end{linenomath*} where $A_{N_2(C^3 \Pi _u , v = 0, ..., 4)}$ are the rates of the radiative decay processes N$_2$(C$^3$ $\Pi _u$ , v = 0, ..., 4) $\rightarrow$ N$_2$(B$^3$ $\Pi _g$ , v = 3) + $h\nu$. As we have only deduced the number of particles of N$_2$(C$^3$ $\Pi _u$ , v = 0), we have to use the Vibrational Distribution Function (VDF) of the species N$_2$(C$^3$ $\Pi _u$ , v = 0, ..., 4) to estimate the number of particles of N$_2$(C$^3$ $\Pi _u$ , v = 1, ..., 4). The VDF of N$_2$(C$^3$ $\Pi _u$ , v = 0, ..., 4) is sensitive to the electric field \citep{simek2014optical}, especially for low fields below 200~Td. As a first approximation, we can obtain this VDF using the chemical scheme of \cite{Gordillo-Vazquez2008/JPhD,Sentman2008/JGRD/1,Parra-Rojas/JGR, Gordillo-Vazquez2010/JGRA, Luque2011/JGRA} and \cite{Parra-Rojas/JGR2015}: \begin{linenomath*} \begin{equation} \label{VDFC3P} \begin{split} VDF\left( N_2(C^3 \Pi _u, v = 0, ..., 4) \right) = \\ \left(0.69, 0.15, 0.12, 0.03, 0.01 \right), \end{split} \end{equation} \end{linenomath*} where the numbers correspond to the relative population of each vibrational level. We can also use as an approximation to this VDF the Franck-Condon factors given in Table~25 of \cite{Gilmore1992/JPCRD}. Then, we use the rate coefficient of the reaction e + N$_2$(X$^1$ $\Sigma _g ^+$, v = 0) $\rightarrow$ e + N$_2$(B$^3$ $\Pi _g$ , v = 3), in cm$^{3}$s$^{-1}$, to calculate the theoretical production of N$_2$(B$^3$ $\Pi _g$ , v = 3) in equation~(\ref{productioni}). \end{enumerate} Finally, we can calculate the reduced electric field necessary to match the observed and theoretical ratios of production at each time. The results are plotted in figure~\ref{fig:Estimated_Ered} together with the maximum reduced electric field given by the simulation at each particular time, using the VDF of expression~(\ref{VDFC3P}). The use of the VDF approximated by the Franck-Condon factors given in Table~25 of \cite{Gilmore1992/JPCRD} produces a different electric field (25 \% greater than the one plotted in figure~\ref{fig:Estimated_Ered}) for the case of the FPS/SPS ratio. However, it does not influence the electric field obtained from the rest of the ratios. The comparison of the electric fields derived from optical band intensity ratios with the electric field given by the model allows us to test the accuracy of the proposed methods for the optical diagnosis of halos using ASIM and TARANIS optical data. \begin{figure} \centering \includegraphics[width=14cm]{Estimated_Ered.pdf} \caption{Temporal evolution of the maximum reduced electric field inside a halo (left) and an elve (right). The blue line corresponds to the maximum reduced electric field according to the halo or elve model. The rest of the points correspond to the inferred reduced electric field using the ratios of the observed FPS(3,0), SPS(0,0), FNS(0,0) and LBH band of N$_2$ following subsection~\ref{sec:opticalanalysis}.
We have used the VDF given by expression~(\ref{VDFC3P}).} \label{fig:Estimated_Ered} \end{figure} \subsubsection{Reduced electric field in elves} Let us now apply the previous electric field deduction method to a simulated elve triggered by a lightning stroke with a current peak of 220~kA. As in the case of the halo model, the elve model allows us to calculate the intensities of the optical emissions $I_{FPS(3,0)}(t)$, $I_{SPS(0,0)}(t)$, $I_{FNS(0,0)}(t)$ and $I_{LBH}(t)$. Again, the first step to derive the reduced electric field inside the TLE is to use equations~(\ref{densities}) and~(\ref{production}) to deduce the temporal production rate of these emitting species using their kinetic rates. Elves are always produced at altitudes of about 88~km, where the quenching of all the vibrational states emitting in the LBH band is less important than the radiative decay (see figure~\ref{fig:quenching_altitude}). Therefore, we can now neglect in our calculations the quenching of all these species at a fixed altitude of 88~km and use the intensities observed in the LBH band to deduce the reduced electric field. The second step is to particularize equation~(\ref{production_ratio}) to the case of the considered emitting species to obtain the theoretical reduced electric field dependence of their ratios. We use the same steps enumerated in section~\ref{efieldhalos} to deduce the theoretical production rate of each emitting species, with the following exceptions: \begin{enumerate} \item Emissions in the LBH band $\left(I_{N_2(a^1 \Pi _g , v = 0, ..., 15)}\right)$ are produced by the radiative decay processes N$_2$(a$^1$ $\Pi _g$ , v = 0, ..., 15) $\rightarrow$ N$_2$(X$^1$ $\Sigma _g ^+$, v = 0, ..., 8) + $h\nu$. We can estimate the number of particles in the emitting states by neglecting the quenching at the elve altitude as \begin{linenomath*} \begin{equation} n_{N_2(a^1 \Pi _g , v = 0, ..., 15)} = \frac{I_{N_2(a^1 \Pi _g , v = 0, ..., 15)}}{A_{N_2(a^1 \Pi _g , v = 0, ..., 15)}}, \end{equation} \end{linenomath*} where $A_{N_2(a^1 \Pi _g , v = 0, ..., 15)}$ is the total rate of the radiative decay processes N$_2$(a$^1$ $\Pi _g$ , v = 0, ..., 15) $\rightarrow$ N$_2$(X$^1$ $\Sigma _g ^+$, v = 0, ..., 8) + $h\nu$. The production of N$_2$(a$^1$ $\Pi _g$ , v = 0, ..., 15) by electron impact can be calculated using equation~(\ref{production}) \begin{linenomath*} \begin{equation} S_{N_2(a^1 \Pi _g , v = 0, ..., 15)} = \frac{dn_{N_2(a^1 \Pi _g , v = 0, ..., 15)}}{dt} + A_{N_2(a^1 \Pi _g , v = 0, ..., 15)} n_{N_2(a^1 \Pi _g , v = 0, ..., 15)}. \end{equation} \end{linenomath*} Finally, the rate coefficient of the reaction e + N$_2$(X$^1$ $\Sigma _g ^+$, v = 0) $\rightarrow$ e + N$_2$(a$^1$ $\Pi _g$ , v = 0, ..., 15), in cm$^{3}$s$^{-1}$, is used to calculate the theoretical production of N$_2$(a$^1$ $\Pi _g$ , v = 0, ..., 15) in equation~(\ref{productioni}). \end{enumerate} As in the case of halos, the next step is to calculate the reduced electric field necessary to match the observed and theoretical ratio of production at each time. The results are plotted in figure~\ref{fig:Estimated_Ered} together with the maximum reduced electric field given by the simulation. The comparison of the deduced electric fields with the electric field given by the model allows us to test the accuracy of these methods for elves. The best fit between the maximum electric field given by the model and the values deduced using the ratios of different pairs of emitted intensities is reached when the FNS is used.
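As an aside to the two previous subsections, the cascade correction introduced in item 2 of the halo analysis, namely the population of N$_2$(E$^3$ $\Sigma ^+ _g$) fed by a fraction $\chi$ of the C-state production, can be integrated numerically. The following is a minimal sketch using an explicit Euler step; the sampling and rate constants are placeholders, and a time step well below $1/[(1+\chi)A_E]$ is assumed for stability.

\begin{verbatim}
import numpy as np

def cascade_population(n_C, dt, A_C, A_E, chi=0.02):
    """n_E(t) from dn_E/dt = chi (dn_C/dt + A_C n_C)
    - (1 + chi) A_E n_E, integrated with an explicit Euler step."""
    f = chi * (np.gradient(n_C, dt) + A_C * n_C)   # cascade source term
    n_E = np.zeros_like(n_C)
    for k in range(len(n_C) - 1):
        n_E[k + 1] = n_E[k] + dt * (f[k] - (1 + chi) * A_E * n_E[k])
    return n_E
\end{verbatim}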
\subsubsection{Emitting source of elves} In the previous section we have deduced the reduced electric field of halos and elves considering that the observed emissions and the emitting source follow the same temporal evolution. However, as we discussed before, this assumption does not hold in the case of elves. Therefore, it is necessary to invert the observed signal in order to obtain the emitting source before deducing the reduced electric field in the elve. In this section, we apply the methods described in subsection~\ref{observedsignal} to calculate how a spacecraft would observe a simulated elve optical emission. Afterwards, we invert this signal following the process detailed in subsection~\ref{inversion} to recover the emitting source. We use as source of the optical emissions a simulated elve triggered by a lightning stroke with a current peak of 154~kA. We plot in the first row of figure~\ref{fig:Source_originalandoneringTARANIS} the emitting source that we will treat in this section. \begin{figure} \centering \includegraphics[width=13cm]{Source_originalandoneringTARANIS.pdf} \caption{Optical emissions at different wavelengths of a simulated elve triggered by a lightning discharge with a current peak of 154~kA (first row). Optical signals convolved with their corresponding decay functions (second row).} \label{fig:Source_originalandoneringTARANIS} \end{figure} The method developed in subsection~\ref{observedsignal} to calculate the signal observed by a spacecraft receives as input the optical emissions of a thin ring-shaped elve. We convolve the emissions shown in the first row of figure~\ref{fig:Source_originalandoneringTARANIS} with their corresponding decay functions (see subsection~\ref{observedsignal}) to obtain the observed emissions due to an instantaneous and thin ring (second row of figure~\ref{fig:Source_originalandoneringTARANIS}). Let us now follow the method of subsection~\ref{observedsignal} to calculate the hypothetical signals observed from TARANIS and ASIM. We have assumed that the observation instruments are located at an altitude of 410~km and at a horizontal distance from the center of the elve of 80~km. We plot in figure~\ref{fig:Received_TARANIS_noise} the calculated received signals. \begin{figure} \centering \includegraphics[width=14cm]{Received_TARANIS_noise.pdf} \includegraphics[width=14cm]{Received_ASIM_noise.pdf} \caption{Hypothetical signals from an elve received by photometers on board (upper row) TARANIS and (lower row) ASIM. We assume that both spacecraft are located at an altitude of 410~km and at a horizontal distance from the center of the elve of 80~km. The blue lines correspond to the signals without noise, while the orange asterisks represent the signals with noise. We have assumed that the photometers have a FOV of 55$^{\circ}$ for the case of TARANIS and 61.4$^{\circ}$ for ASIM. The sampling frequencies are 20~kHz for TARANIS and 100~kHz for ASIM, while the circular aperture total area is 0.04~m$^{2}$.} \label{fig:Received_TARANIS_noise} \end{figure} The inversion method described in subsection~\ref{inversion} can be directly applied to the received signals in order to recover the emitting sources. However, before applying the inversion method, we turn to a more realistic case by adding some artificial noise to the received signals as follows. We denote as $S_i$ the $i$th point of a received signal and as $S_{max}$ the maximum value of the signal.
Then, we define a parameter $g$ that will control the noise: \begin{linenomath*} \begin{equation} g =\frac{a}{S_{max}}, \label{g} \end{equation} \end{linenomath*} where $a$ is an arbitrary number that controls the noise level. Afterwards, we use this parameter ($g$) to generate a random number $b_i$ following a Poisson distribution with mean value $gS_i$. Finally, the $S_i$ value of the signal is modified to be \begin{linenomath*} \begin{equation} S_i^{\prime} =\frac{b_i}{g}. \label{Sip} \end{equation} \end{linenomath*} Setting the $a$ parameter to 50 for the case of TARANIS and 5$\times$10$^3$ for ASIM, we obtain the noisy signals shown in figure~\ref{fig:Received_TARANIS_noise} (orange asterisks). We can now apply the inversion method developed in subsection~\ref{inversion} to the received signals with noise in order to compare with the signals given by the models. We plot the results in figure~\ref{fig:Source_Hanson_TARANIS}. \begin{figure} \centering \includegraphics[width=14cm]{Source_Hanson_TARANIS.pdf} \includegraphics[width=14cm]{Source_Hanson_ASIM.pdf} \caption{Result of inverting the signals plotted in figure~\ref{fig:Received_TARANIS_noise} (orange asterisks). We also plot the emitted signal given by the FDTD elve model (blue line). } \label{fig:Source_Hanson_TARANIS} \end{figure} \subsection{Analysis of signals recorded from space} \label{sec:missions} In this section we analyze the particularities of the photometers integrated in ISUAL, GLIMS, ASIM and TARANIS (see table~\ref{tab:missions}) for the investigation of TLEs. In addition, we apply the inversion method described in subsection~\ref{inversion} to one LBH signal emitted by an elve detected by GLIMS. \tiny \begin{longtable}{|c|c|c|c|c|c|} \hline \multicolumn{1}{|c}{\textbf{Mission}} & \multicolumn{1}{|c}{\textbf{Photometer}} & \multicolumn{1}{|c}{\textbf{Bandwidth (if relevant)}} & \multicolumn{1}{|c}{\textbf{FOV}} & \multicolumn{1}{|c|}{\textbf{Frequency}} & \multicolumn{1}{c|}{\textbf{Observation mode}} \\ \endfirsthead \multicolumn{1}{|c}{\textbf{Mission}} & \multicolumn{1}{|c}{\textbf{Photometer}} & \multicolumn{1}{|c}{\textbf{Bandwidth (if relevant)}} & \multicolumn{1}{|c}{\textbf{FOV}} & \multicolumn{1}{|c|}{\textbf{Frequency}} & \multicolumn{1}{c|}{\textbf{Observation mode}} \\ \hline \endhead \hline & SP1: 150~-~290~nm & - & 20 deg (H) $\times$ 5 deg (V) & 10~kHz & Limb \\ & SP2: 337~nm & 5.6~nm & 20 deg (H) $\times$ 5 deg (V) & 10~kHz & Limb \\ & SP3: 391~nm & 4.2~nm & 20 deg (H) $\times$ 5 deg (V) & 10~kHz & Limb \\ ISUAL & SP4: 608.9~-~753.4~nm & - & 20 deg (H) $\times$ 5 deg (V) & 10~kHz & Limb \\ & SP5: 777.4~nm & - & 20 deg (H) $\times$ 5 deg (V) & 10~kHz & Limb \\ & SP6: 250~-~390~nm & - & 20 deg (H) $\times$ 5 deg (V) & 10~kHz & Limb \\ & AP1 (16 CH): 370~-~450~nm & - & 20 deg (H) $\times$ 5 deg (V) & 0.2, 2 or 20~kHz & Limb \\ & AP2 (16 CH): 530~-~650~nm & - & 20 deg (H) $\times$ 5 deg (V) & 0.2, 2 or 20~kHz & Limb \\ \hline & PH1: 150~-~280~nm & - & 42.7$^{\circ}$ & 20~kHz & Nadir \\ & PH2: 332~-~342~nm & - & 42.7$^{\circ}$ & 20~kHz & Nadir \\ GLIMS & PH3: 755~-~766~nm & - & 42.7$^{\circ}$ & 20~kHz & Nadir \\ & PH4: 599~-~900~nm & - & 86.8$^{\circ}$ & 20~kHz & Nadir \\ & PH5: 310~-~321~nm & - & 42.7$^{\circ}$ & 20~kHz & Nadir \\ & PH6: 386~-~397~nm & - & 42.7$^{\circ}$ & 20~kHz & Nadir \\ \hline & PH1: 145~-~230~nm & - & 61.4$^{\circ}$ & 100~kHz & Nadir \\ ASIM & PH2: 337~nm & 5~nm & 61.4$^{\circ}$ & 100~kHz & Nadir \\ & PH3: 777.4~nm & 5~nm & 61.4$^{\circ}$ & 100~kHz & Nadir \\ \hline & PH1: 145~-~280~nm & - &
55$^{\circ}$ & 20~kHz & Nadir \\ TARANIS & PH2: 332~-~342~nm & - & 55$^{\circ}$ & 20~kHz & Nadir \\ & PH3: 757~-~765~nm & - & 55$^{\circ}$ & 20~kHz & Nadir \\ & PH4: 600~-~800~nm & - & 100$^{\circ}$ & 20~kHz & Nadir \\ \hline \caption{Optical characteristics of the photometers onboard ISUAL \citep{Chern2003/JASTP}, GLIMS \citep{sato2015overview,Adachi2016/JASTP}, ASIM \citep{Neubert2006/ILWS} and TARANIS \citep{Blanc2007/AdSpR}. The observation mode (limb or nadir) of each photometer is indicated.} \label{tab:missions} \\ \end{longtable} \normalsize The photons emitted by TLEs can travel from the source to a nadir-pointing photometer without suffering significant atmospheric absorption. However, the signal of TLEs observed at the nadir can be contaminated by photons emitted by the parent lightning discharge. An exception to this are the optical emissions in the LBH band, as the photons emitted by lightning at these short wavelengths are totally absorbed by the atmosphere \citep{Mende2005/JGRA}. For this reason, the signals detected in the FPS(3,0), SPS(0,0) or FNS(0,0) cannot be analyzed following our inversion methods unless the parent lightning is out of the FOV. In addition, recorded signals from TLEs taking place behind the limb would not be contaminated by the photons emitted by their lightning stroke. If the absorption of the atmosphere is used to correct the observed signals, it is then possible to apply our inversion methods to the analysis of behind-the-limb TLEs. \subsubsection{Deduction of the reduced electric field in a halo reported by ISUAL} \begin{figure} \centering \includegraphics[width=11cm]{60764.pdf} \caption{Halo recorded by ISUAL at universal time 0626:38.806 on 31 July, 2006. The exposure time of each frame (a-f) is $\sim$30~ms. Image adapted from \cite{Kuo2013/JGRA}.} \label{fig:halokuo} \end{figure} \cite{Kuo2013/JGRA} investigated a halo without a visible sprite reported by ISUAL on 31 July, 2006 (see figure~\ref{fig:halokuo}). The parent lightning was below the limb, at a distance of about 4100~km. Therefore, the photometric recordings were not contaminated by possible optical emissions from lightning. \cite{Kuo2013/JGRA} estimated the maximum reduced electric field in this halo using the ratio of the FNS(0,0) of N$_2^+$ to the SPS(0,0) of N$_2$, obtaining a value ranging between 275~Td and 325~Td. In this section, we analyze the reported intensities of the ISUAL photometers SP2, SP3 and SP4 to estimate the reduced electric field using the methods developed in subsection~\ref{sec:opticalanalysis}. As proposed by \cite{Kuo2013/JGRA}, we calculate the emissions in the SPS(0,0) of N$_2$, the FNS(0,0) of N$_2^+$ and the FPS(3,0) of N$_2$ from the SP2, SP3 and SP4 photometric recordings, respectively. We do that by considering the percentages of the respective band emissions that fit into each photometer, the atmospheric transmittances and the blueward shifts, as proposed by \cite{Kuo2013/JGRA}. We plot in figure~\ref{fig:Ered} the resulting reduced electric fields using the ratios of FPS(3,0) to SPS(0,0), SPS(0,0) to FNS(0,0) and FPS(3,0) to FNS(0,0). The value of the maximum reduced electric field calculated from the emission ratio of SPS(0,0) to FNS(0,0) agrees with the value obtained by \cite{Kuo2013/JGRA}. The reduced electric field calculated from the emission ratio of FPS(3,0) to FNS(0,0) is slightly below that value, while the field obtained from the ratio of FPS(3,0) to SPS(0,0) is about a factor of 2 below.
The reason for the underestimation of the reduced electric field using the ratio of SPS(0,0) to FPS(3,0) is that this ratio does not depend significantly on the electric field when the field is high (above 150~Td), as explained in subsection~\ref{sec:signalscomparison}. The reduced electric field obtained from the FNS(0,0) is only shown between 0.2~ms and 0.26~ms, because the signal in the FNS(0,0) is very noisy outside that range. \begin{figure} \centering \includegraphics[width=11cm]{Ered.pdf} \caption{Temporal evolution of the reduced electric field in the halo without a visible sprite reported by \cite{Kuo2013/JGRA} from the ratios of FPS(3,0) to SPS(0,0), SPS(0,0) to FNS(0,0) and FPS(3,0) to FNS(0,0). Optical emissions in the FNS(0,0) before 0.2~ms and after 0.3~ms cannot be used as a consequence of the signal-to-noise ratio. The value of the reduced electric field reported in \cite{Kuo2013/JGRA} ranged between 275~Td and 325~Td. We have used the VDF given by expression~(\ref{VDFC3P}).} \label{fig:Ered} \end{figure} \subsubsection{Deduction of the source emissions of an elve reported by GLIMS} \label{sourceGLIMS} We apply the inversion method described in subsection~\ref{inversion} to the LBH signal of an elve reported by GLIMS at 16:28:04 (UT) on December 13, 2012. At the moment of the detection, the instrument was located at an altitude of 422~km. The optical camera Lightning and Sprite Imager (LSI) onboard GLIMS has a FOV of 28.3$^{\circ}$ and is equipped with two frequency filters, a broadband (768~nm~-~830~nm) filter (LSI-1) and a narrowband (760~nm~-~775~nm) filter (LSI-2). Before applying the inversion method to the elve LBH signal, we have to deduce the horizontal separation ($l$) between the photometers and the center of the elve (see figure~\ref{fig:geometry}). For this purpose, we use the images taken by the LSI camera onboard GLIMS. Knowing the altitude (422~km) of GLIMS at the moment of the elve detection, assuming that the elve took place at an altitude of 88~km and knowing that the FOV of the camera is 28.3$^{\circ}$, the LSI camera can observe a square with a lateral dimension of 168~km in the plane of the elve (at 88~km of altitude). If we assume that the parent lightning is located just below the center of the elve, we can use this information together with the number of pixels between the center of the camera FOV and the parent lightning to calculate the horizontal separation between the elve center and the photometer. In this case, this separation is 55~km. We also need to estimate the moment at which the source started its emissions in relation to the moment of detection of the first photon ($t_E$), that is, the difference between the time of the elve detection and the time of the elve onset. The deduction of this time is not direct, as the radius of the elve expands faster than the speed of light, as can be demonstrated using the scheme plotted in figure~\ref{fig:elve_velocity}. \begin{figure} \centering \includegraphics[width=13cm]{elve_velocity.png} \caption{Expansion of the elve wavefront and the elve radius during a time of $\tau$~=~$\tau_2$~-~$\tau_1$. Lightning and elve are located at altitudes of $h_{lightning}$ and $h_0$, respectively.
The elve wavefront propagates at the velocity of light.} \label{fig:elve_velocity} \end{figure} We can calculate the expansion of the elve radius during a time~$\tau$~=~$\tau_2$~-~$\tau_1$ knowing that the pulse radiated by the lightning discharge travels at the speed of light and that the difference of altitudes between the elve and the lightning stroke is $h_0 - h_{lightning}$. At the moment $\tau_1$, the distance $r_1$ between the stroke and the intersection of the wavefront with the ionosphere is given by \begin{linenomath*} \begin{equation} r_1 =c \tau_1, \label{r1} \end{equation} \end{linenomath*} then we can write the distance $A$, corresponding to the radius of the elve at a time $\tau_1$, as \begin{linenomath*} \begin{equation} A =\sqrt{c^{2}\tau_1^2 - (h_0 - h_{lightning})^2}. \label{A} \end{equation} \end{linenomath*} If we now call $B$ the radius of the elve at a time $\tau_2 > \tau_1$, we can calculate the expansion of the elve radius during the time $\tau_2 - \tau_1$ as \begin{linenomath*} \begin{equation} B - A =\sqrt{c^{2}\tau_2^2 - (h_0 - h_{lightning})^2} - \sqrt{c^{2}\tau_1^2 - (h_0 - h_{lightning})^2} > d, \label{B-A} \end{equation} \end{linenomath*} while the elve wavefront would have advanced a distance of only $d = c(\tau_2 - \tau_1)$. Therefore, the first observed photon would be the one that travels the minimum path from the source to the photometer. We can calculate the time of flight of that photon by minimizing equation~(\ref{s}), obtaining the minimum optical path ($s_{min}$). The time at which $s(t) = s_{min}$ is the time after the elve onset at which the photon was emitted ($t_{min}$). The time of flight of that photon is then given by $\frac{s_{min}}{c}$. We can finally estimate the moment at which the source started its emissions in relation to the moment of detection of the first photon as the sum of the time of flight of the first detected photon and the time at which it was emitted as \begin{linenomath*} \begin{equation} t_E =\frac{s_{min}}{c} + t_{min}. \label{te} \end{equation} \end{linenomath*} The last step before applying the inversion method is to convert the detected signal from $W/m^2$ to photons/s. To do that, we multiply the signal by the area of the detector, of radius 12~mm. We also divide by the energy of the received photons. The photons received by the photometer PH1 have wavelengths between 150~nm and 280~nm. As most of the photons emitted by the elve have wavelengths closer to 150~nm than to 280~nm \citep{Gordillo-Vazquez2010/JGRA, PerezInvernon2018/JGR}, we can calculate this energy as the corresponding energy of a photon with a wavelength of about 180~nm. The observed signal in photons/s is shown in figure~\ref{fig:I_decon_a2012-12-13_16280487397}. We also show in figure~\ref{fig:I_decon_a2012-12-13_16280487397} the signal obtained after applying the Wiener deconvolution. \begin{figure} \centering \includegraphics[width=11cm]{I_decon_a2012-12-13_16280487397.pdf} \caption{Photons received by GLIMS in the 150~nm~-~280~nm band from an elve (orange dots). Signal after the application of the Wiener deconvolution in order to approximate the elve as a thin ring (blue asterisks).} \label{fig:I_decon_a2012-12-13_16280487397} \end{figure} Then, we can apply the Hanson method to the (blue) signal of figure~\ref{fig:I_decon_a2012-12-13_16280487397} in order to obtain the temporal evolution of the signal emitted by a thin ring. We plot the source optical emission in figure~\ref{fig:F_a2012-12-13_16280487397}.
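Two auxiliary steps of this analysis admit a short numerical sketch: the onset offset $t_E$ of equation~(\ref{te}), obtained by minimizing the optical path (which, for $l, R > 0$, is shortest at $\theta = 0$), and the conversion of the recorded power density to photons per second. The detector radius and the representative wavelength follow the values quoted in the text; everything else is a placeholder.

\begin{verbatim}
import numpy as np

C = 2.998e8                      # speed of light [m/s]
H = 6.626e-34                    # Planck constant [J s]

def onset_offset(t, h0, h1, l, h_lightning):
    """t_E of equation (te): time of flight of the first detected
    photon plus its emission time; s(t) is evaluated at theta = 0."""
    R = np.sqrt((C * t)**2 + 2.0 * C * t * (h0 - h_lightning))
    s = np.sqrt((h1 - h0)**2 + (l - R)**2)   # equation (s), theta = 0
    k = np.argmin(s)
    return s[k] / C + t[k]

def power_to_photons(P, radius=12e-3, wavelength=180e-9):
    """W/m^2 -> photons/s for a circular detector, assuming a
    representative photon wavelength (180 nm for the LBH band)."""
    area = np.pi * radius**2
    return P * area / (H * C / wavelength)
\end{verbatim}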
\begin{figure} \centering \includegraphics[width=11cm]{F_a2012-12-13_16280487397.pdf} \caption{Emitting source for an instantaneous thin ring.} \label{fig:F_a2012-12-13_16280487397} \end{figure} However, the obtained emitting source corresponds to that of an instantaneous thin ring. Therefore, we can obtain the real emitting source by convolving it with its corresponding decay function. The final emitting source is shown in figure~\ref{fig:comparison_GLIMS_FDTD}, together with the simulated intensity emitted by an elve triggered by a CG lightning discharge with a current peak of 184~kA, a rise time of \SI{10}{\micro\second} and a total time of 0.5~ms. \begin{figure} \centering \includegraphics[width=11cm]{comparison_GLIMS_FDTD.pdf} \caption{Source plotted in figure~\ref{fig:F_a2012-12-13_16280487397} after the convolution with the corresponding decay function in order to obtain the emitting source (orange asterisks). This source emits photons with wavelengths between 150~nm and 280~nm. We also plot the simulated intensity emitted by an elve triggered by a CG lightning discharge with a current peak of 184~kA, a rise time of \SI{10}{\micro\second} and a total time of 0.5~ms. The simulated emitted intensity has been calculated using an FDTD elve model.} \label{fig:comparison_GLIMS_FDTD} \end{figure} \subsubsection{Remarks about the possibility of applying the inversion method to signals reported by other space missions} In section~\ref{sourceGLIMS} we have applied the inversion method previously described in subsection~\ref{inversion} to an elve signal reported by GLIMS in a range of wavelengths between 150~nm and 280~nm. We mentioned that, in general, it is not possible to apply this method to the optical signals detected at other wavelengths, as the lightning flash would contaminate them. However, an elve detected with its parent lightning out of the photometer FOV could be analyzed with our method provided that the position of the lightning stroke is known. A lightning detection network (local or global) could provide this information. In order to estimate the electric fields in halos, it would also be necessary to limit the analysis to halos whose parent lightning is outside the FOV. In the case of ASIM and TARANIS, the FOVs of the photometers and the optical cameras are the same. Therefore, if an elve is detected without its parent lightning, the image of the elve taken by the optical cameras could be useful to estimate the position of the elve center and that of the lightning. In the case of ASIM, the photometers and the optical cameras are part of the Modular Multispectral Imaging Array (MMIA). The cameras record at 12~fps. However, the weak intensity of elves observed at the nadir makes their detection by the MMIA cameras difficult. If the optical cameras of ASIM and TARANIS are not sensitive enough to distinguish the shape of the elves, a detection network must provide the position of the lightning stroke, as was done in the case of GLIMS. The case of ISUAL is different, as it is capable of reporting elves taking place behind the limb without the contamination of the parent lightning. However, our inversion method is highly dependent on the elve position. \cite{Kuo2007/JGRA} developed a procedure to obtain the distance between ISUAL and the elve center by counting pixels in the image recorded by the ISUAL optical cameras. However, we think that this procedure is not accurate enough to be used together with our inversion method.
Therefore, a lightning detection network would probably have to provide the position of the elve center, determined by the location of the parent lightning. Finally, as the bandwidths of the ASIM and TARANIS photometers are quite similar to the bandwidths of the ISUAL photometers, we propose following the method of \cite{Kuo2013/JGRA} to calculate possible overlaps between different bands. \section{Conclusions} \label{sec:conclusions} We have developed a general method to estimate the reduced electric field in upper atmospheric TLEs from space-based photometers. The first step of the proposed method makes use of the observed emission ratio of two different spectral bands together with the continuity equation of the emitting species in order to estimate the rate of production by electron impact of the considered species. Then, the obtained ratio is compared with the theoretical electric-field-dependent value calculated with BOLSIG+ \citep{Hagelaar2005/PSST}. Finally, we calculate the reduced electric field that matches the observed emission ratio with the theoretical one. The recorded optical signals of elves do not have the same temporal evolution as the emitting source. Therefore, we have developed an inversion procedure to calculate the emitting source of elves before applying the method to estimate the reduced electric field. However, this procedure is exclusively valid if the optical signal produced by the parent lightning does not contaminate the optical signal of the elve. We have successfully applied this inversion method to the optical signal emitted by an elve within the LBH band and recorded by GLIMS. We have firstly applied this procedure to the predicted emissions of simulated halos and elves. In the case of a halo with a maximum electric field of 140~Td, we have estimated the reduced electric field using the ratios of FPS(3,0) to SPS(0,0), FPS(3,0) to FNS(0,0) and SPS(0,0) to FNS(0,0). The obtained reduced electric field overestimates the maximum field given by the model by less than 10\%. The LBH band of N$_2$ is not considered in the case of halos because this TLE descends through a region of the atmosphere where the quenching altitude of the vibrational levels of N$_2$(a$^1$ $\Pi _g$ , v = 0, ..., 12) by N$_2$ and O$_2$ changes rapidly. In the case of a simulated elve with a maximum electric field of 210~Td, we have estimated the reduced electric field using the ratios of FPS(3,0) to SPS(0,0), FPS(3,0) to FNS(0,0), SPS(0,0) to FNS(0,0), FPS(3,0) to LBH, SPS(0,0) to LBH and FNS(0,0) to LBH. We have obtained a good agreement between the field given by the model and the estimated field using the observed ratios that include the FNS(0,0). The use of the ratio of FPS(3,0) to SPS(0,0) underestimates the value of the reduced electric field by about 15~\%. Finally, we have concluded that the ratios of FPS(3,0) to LBH and SPS(0,0) to LBH are not adequate to deduce the electric field in elves, as the results differ by more than 50~\% with respect to the expected quantity. The ratio of SPS(0,0) to FNS(0,0) has been widely used to estimate reduced electric fields. We have found that the ratio of FPS(3,0) to FNS(0,0) leads to reduced electric field values that agree with those obtained by the SPS(0,0) to FNS(0,0) ratio. The application of our method to deduce the electric field of a halo reported by ISUAL \citep{Kuo2013/JGRA} has confirmed that the ratio of SPS(0,0) to FPS(3,0) emissions is not reliable if the electric field is above $\sim$150-200~Td.
\section{Introduction} In many functional data analysis (FDA) settings, one wishes to test either a null hypothesis \begin{equation}\label{onehyp}H_0: f(s)=0\mbox{ for all }s\in\mathcal{S},\end{equation} for a function $f$ defined on a domain $\mathcal{S}$, or alternatively a family of null hypotheses \begin{equation}\label{h0s}\{H_0(s): s\in\mathcal{S}\}\end{equation} where for each $s$, $H_0(s)$ is the pointwise hypothesis $f(s)=0$. For example, $f$ may refer to \begin{enumerate}[(i)] \item a group difference $f(s)=g_1(s)-g_2(s)$, where $g_1,g_2$ denote mean functions in two subsets of a population, or \item a coefficient function $f(s)=\beta(s)$ in a functional linear model. \end{enumerate} Clearly the global hypothesis $H_0$ in \eqref{onehyp} is just the intersection over all $s$ of the pointwise hypotheses $H_0(s)$ in \eqref{h0s}. The difference is that whereas \eqref{onehyp} refers to a single test, for which a single $p$-value would be appropriate, the family \eqref{h0s} gives rise to a collection of $p$-values. The latter setup is appropriate when the values of $f(s)$ for different $s$ carry distinct scientific meaning. For example, in \secref{appsec} below we test for sex-related differences in the thickness of the human cerebral cortex as a function of age $s$. In this context, age-specific results may have implications for the study of brain development. Previous work has tended to focus either on distribution-free tests of the global hypothesis \eqref{onehyp} (see \secref{envsec} below), or on multiplicity-adjusted parametric pointwise tests for the family \eqref{h0s}. As we show in \secref{adjsec}, it is straightforward to combine the advantages of both approaches---that is, to derive pointwise adjusted $p$-values without having to specify a null statistic distribution. In \secref{morepower}, we present two alternative pointwise $p$-value adjustments that offer improved power. \section{Setup} We let $T(s)$ ($s\in\mathcal{S}$) denote a functional test statistic for null hypothesis \eqref{onehyp}, and take as given a group of permutations of the data, along with the null hypothesis that the joint distribution of $T(s)$, $s\in\mathcal{S}$, is invariant to such permutations. This hypothesis may be stronger than \eqref{onehyp}, but for the sake of a brief and general presentation, we ignore that distinction here. Let $T_0$ be the test statistic function computed with the real data, and $T_1,\ldots,T_{M-1}$ be test statistic functions that are computed with randomly permuted data sets and thus constitute a simulated null distribution. We consider $T_0(s),\ldots,T_{M-1}(s)$ only for $s\in\mathcal{G}$, for a finite set $\mathcal{G}\subset\mathcal{S}$ (e.g., a grid of points spanning $\mathcal{S}$, if the latter is a subinterval of the real line). We assume $\mathcal{G}$ to be an adequate approximation to $\mathcal{S}$, in the sense that the difference between a minimum over $\mathcal{G}$ versus over $\mathcal{S}$ is negligible \citep[see][for a relevant treatment of grid approximations in functional hypothesis testing]{cox2008}. We further assume that there are no pointwise ties, i.e., ties among $T_0(s),\ldots,T_{M-1}(s)$ for a given $s\in\mathcal{G}$. \section{Envelope tests}\label{envsec} Hypotheses regarding spatial point patterns are commonly tested by functions $T(s)$ of interpoint distance $s$, such as the $K$ function of \cite{ripley1977}. Such functions typically have unknown null distributions, and hence are most readily tested via Monte Carlo methods.
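Concretely, the simulated null $T_1,\ldots,T_{M-1}$ of the previous section might be generated as follows for setting (i) of the Introduction, with a pointwise mean difference as the test statistic. The sketch is our own illustration on toy data, not code from any package.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def perm_stats(y, groups, M):
    # T[m, s]: mean-difference statistic at grid point s;
    # row 0 uses the real labels, rows 1..M-1 permuted labels.
    T = np.empty((M, y.shape[1]))
    g = groups.copy()
    for m in range(M):
        if m > 0:
            rng.shuffle(g)
        T[m] = y[g == 1].mean(axis=0) - y[g == 0].mean(axis=0)
    return T

# Toy example: 40 curves observed on a grid of 100 points.
y = rng.standard_normal((40, 100))
T = perm_stats(y, np.repeat([0, 1], 20), M=1000)
\end{verbatim}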
Such Monte Carlo comparisons are the motivation for graphical or envelope tests \citep{ripley1977,davison1997,baddeley2014}, which have recently been formalized, extended, and applied to functional data \citep{myllymaki2017,mrkvicka2018}. The global envelope test (GET) of \cite{myllymaki2017} is based on the ranks $R_m^*(s)$ of $T_m(s)$ among $T_0(s),\ldots,T_{M-1}(s)$ for $s\in\mathcal{G}$. Here rank is defined in such a way that low rank indicates maximal inconsistency with the null hypothesis. Thus, depending on the test, $R_m^*(s)$ may be the rank from smallest to largest, the rank from largest to smallest, or, for a two-sided test, the smaller of the two. The minimum rank attained by $T_m$, $R_m=\min_{s\in\mathcal{G}}R_m^*(s)$, is a functional depth \citep{lopez2009}, which we may call the min-rank depth. The GET $p$-value is then defined as \begin{equation}\label{pp}p_+=\frac{\sum_{m=1}^{M-1} \mathbb{I}(R_m \leq R_0) +1}{M}.\end{equation} This $p$-value has a graphical interpretation in terms of envelopes, which we define here in a manner that is consistent with \cite{myllymaki2017}, but that relates to $p$-values rather than a specified level $\alpha$. For $j\geq 1$, let $\kappa_j=\sum_{m=0}^{M-1}\mathbb{I}(R_m\leq j)$, and let $E^{\kappa_j}$ be the envelope defined by the set of $M-\kappa_j$ curves $\{T_m: R_m > j\}$, that is, the range from $\ubar{T}^{\kappa_j}(s)=\min_{m: R_m > j}T_m(s)$ to $\bar{T}^{\kappa_j}(s)=\max_{m: R_m > j}T_m(s)$ for each $s$. We say that $T_0$ exits this envelope at $s$ if $T_0(s)\notin[\ubar{T}^{\kappa_j}(s),\bar{T}^{\kappa_j}(s)]$. Arguing as in \cite{myllymaki2017}, one can show that $p_+\leq \kappa_j/M$ if and only if $T_0$ exits $E^{\kappa_j}$ at some $s$. \section{Adjusted $p$-values}\label{adjsec} Turning from the single hypothesis \eqref{onehyp} to the family \eqref{h0s} of pointwise hypotheses, the na\"ive or raw permutation-based $p$-values are \begin{equation}\label{rawp}p(s)=R_0^*(s)/M\end{equation} for each $s$. These $p$-values, however, require adjustment for multiplicity \citep{wright1992} in order to control the overall type-I error rate, usually taken as the family-wise error rate (FWER). Strictly speaking, since the GET is a single test as opposed to a multiple testing procedure, adjusted $p$-values with respect to the GET are undefined. But it is natural to define the GET-adjusted $p$-value at $s$, in the notation of \secref{envsec}, as the smallest value $\kappa_j/M$ such that $T_0$ exits the envelope $E^{\kappa_j}$ at $s$. It can be shown that an equivalent definition is \begin{equation}\label{pr}\tilde{p}(s)=\frac{\sum_{m=1}^{M-1} \mathbb{I}[R_m \leq R^*_0(s)] +1}{M};\end{equation} and that, as we would expect, the adjusted $p$-values $\tilde{p}(s)$ control the FWER. The adjusted $p$-value \eqref{pr} is not really new. The \texttt{fda} package \citep{ramsay2009} for R \citep{R} offers permutation $t$- and $F$-tests for settings (i) and (ii), respectively, of the Introduction \citep[and similar permutation $F$-tests are described by][]{reiss2010}. These tests yield pointwise adjusted $p$-values that are related to \eqref{pr}, but there are two differences. First, in the terminology of \cite{ge2003}, the \texttt{fda} package offers \emph{max T} adjusted $p$-values, whereas \eqref{pr} is more akin to \emph{min P} adjusted $p$-values, which are more appropriate when one cannot assume the null distribution of $T(s)$ to be identical across $s$. Second, \cite{ramsay2009} adopt a different permutation $p$-value convention in which the numerator and denominator are reduced by 1, leading to the zero $p$-value problem criticized by \cite{phipson2010}.
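Given a matrix of statistics such as the one generated above, the min-rank depths, the GET $p$-value \eqref{pp} and the single-step adjusted $p$-values \eqref{pr} take only a few lines to compute. The following sketch is ours; it assumes the no-ties setting above and a two-sided test.
\begin{verbatim}
import numpy as np
from scipy.stats import rankdata

def get_pvalues(T):
    # T: (M x |G|) matrix of statistics, row 0 observed.
    M = T.shape[0]
    up = rankdata(T, axis=0)              # 1 = smallest value
    r_star = np.minimum(up, M + 1 - up)   # two-sided rank; low = extreme
    R = r_star.min(axis=1)                # min-rank depths R_m
    p_plus = (np.sum(R[1:] <= R[0]) + 1) / M            # GET p-value
    p_tilde = (np.sum(R[1:, None] <= r_star[0], axis=0) + 1) / M
    return p_plus, p_tilde                # global and adjusted p-values
\end{verbatim}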
\section{More powerful $p$-value adjustments}\label{morepower} We describe next two alternative adjusted $p$-values that are bounded above by \eqref{pr} and thus offer greater power. \subsection{Step-down adjustment}\label{stepsub} In the language of multiple testing, the adjusted $p$-values \eqref{pr} are of \emph{single-step} type, suggesting that an analogous \emph{step-down} procedure \citep{westfall1993,ge2003,romano2016} would be more powerful. Define $S_i=\{s\in\mathcal{G}: R_0^*(s)\geq i\}$ for $i=1,2,\ldots$, and $R_{m;U}=\min_{s\in U}R_m^*(s)$ for $m\in\{0,\ldots,M-1\}$ and $U\subset \mathcal{G}$. We can then define the step-down adjusted $p$-value at $s$ as \begin{equation}\label{pss} \tilde{p}^{\text{stepdown}}(s)=\max_{i\in\{1,\ldots,R^*_0(s)\}}\frac{\sum_{m=1}^{M-1} \mathbb{I}(R_{m;S_i} \leq i) +1}{M}. \end{equation} This expression is readily shown to be less than or equal to $\tilde{p}(s)$ in \eqref{pr}. Thus the step-down adjusted $p$-values offer greater power than their single-step counterparts, but they can be shown to retain control of the FWER. \subsection{Extreme rank length adjustment}\label{erlsub} The min-rank depth $R_m$ of \secref{envsec} tends to be strongly affected by ties. In particular, typically $\kappa_1>1$ of the $M$ functions attain rank 1 at some point and thus have $R_m=1$, with the result that $\kappa_1/M$ is the smallest attainable value of either $p_+$ or $\tilde{p}(s)$. An alternative functional depth, the \emph{extreme rank length} (ERL), largely eliminates ties and thus leads to a more powerful variant of the GET. A formal definition of ERL appears in \cite{myllymaki2017}, but the basic idea is to break the tie among curves with the same min-rank depth $R_m$ by ordering from longest to shortest extent of the region over which that minimum rank is attained. For example, four curves in Fig.~\ref{erlfig} attain pointwise rank~1 (from the top) somewhere in the domain and thus all have $R_m=1$; the ERL depths $R_m^{\text{ERL}}=1,\ldots,4$, indicated in the figure, are based on the widths of these curves' regions of attaining rank~1. \begin{figure}[b] \centering \includegraphics[width=\textwidth]{crossover} \caption{An illustration of one-sided (higher = more extreme) ERL depths, and associated pointwise adjusted $p$-values. Here $M=100$ and the numerals 1--4 denote ERL depths for the four curves with $R_m=1$; the thickest curve represents the real data, so that $R_0^{\text{ERL}}=1$. The raw $p$-values \eqref{rawp} satisfy $p(s_1)>p(s_2)$, but ERL adjustment reverses the order, i.e., $\tilde{p}^{\text{ERL}}(s_1)<\tilde{p}^{\text{ERL}}(s_2)$.} \label{erlfig} \end{figure} An ERL envelope $E^{\kappa_j;\text{ERL}}$ \citep{mrkvicka2018} can be defined as in \secref{envsec}, but in terms of $R_m^{\text{ERL}}$ rather than $R_m$. We can then proceed as in \secref{adjsec}, and define $\tilde{p}^{\text{ERL}}(s)$, the ERL-adjusted $p$-value at $s$, as $\kappa_j/M$ for the smallest $\kappa_j$ such that $T_0(s)$ lies outside $E^{\kappa_j;\text{ERL}}$. This adjusted $p$-value is bounded above by \eqref{pr}, and hence offers improved power. However, unlike most $p$-value adjustments, the ERL adjustment is not order-preserving, in the sense that $p(s_1)>p(s_2)$ does not guarantee that $\tilde{p}^{\text{ERL}}(s_1)\geq\tilde{p}^{\text{ERL}}(s_2)$. A counterexample, that is, a pair of points $s_1,s_2$ for which $p(s_1)>p(s_2)$ but $\tilde{p}^{\text{ERL}}(s_1)<\tilde{p}^{\text{ERL}}(s_2)$, appears in Fig.~\ref{erlfig}. Some might argue that this non-order-preserving behavior vitiates the use of ERL-adjusted $p$-values altogether.
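For reference, the step-down adjustment \eqref{pss} can be transcribed directly into code. The sketch below is ours; \texttt{r\_star} is the matrix of pointwise ranks $R_m^*(s)$ from the previous sketch, with row 0 observed.
\begin{verbatim}
import numpy as np

def stepdown_adjust(r_star):
    # r_star: (M x |G|) pointwise ranks R_m^*(s), row 0 observed.
    M, G = r_star.shape
    p = np.empty(G)
    for s in range(G):
        vals = []
        for i in range(1, int(r_star[0, s]) + 1):
            S_i = r_star[0] >= i                  # the set S_i
            R_m_Si = r_star[1:, S_i].min(axis=1)  # R_{m;S_i}
            vals.append((np.sum(R_m_Si <= i) + 1) / M)
        p[s] = max(vals)                          # maximum over i
    return p
\end{verbatim}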
\section{Application: Age-varying sex difference in cortical thickness}\label{appsec} We consider cortical thickness (CT) measurements from a longitudinal magnetic resonance imaging study at the US National Institute of Mental Health, which were previously analyzed by \cite{reiss2018}. Specifically, we examine CT in the right superior temporal gyrus in 131 males with a total of 355 observations, and 114 females with 300 observations, over the age range 5--25 years (displayed in the left panel of Fig.~\ref{fig67}). Viewing the observations as sparse functional data, we fit the model $y_i(s)=\beta_0(s)+\tau_i\beta_1(s)+\varepsilon_i(s)$, in which $y_i(s)$ is the $i$th participant's CT at age $s$; $\tau_i=0,1$ if this participant is male or female, respectively; and $\varepsilon_i(s)$ denotes error. We focus on testing whether the age-varying sex effect $\beta_1(s)$ (female minus male) equals zero; see the right panel of Fig.~\ref{fig67} for an estimate of this coefficient function, along with pointwise 95\% confidence intervals. \begin{figure}[b] \centering \includegraphics[width=\textwidth]{CT-67} \caption{Left: Cortical thickness in the right superior temporal gyrus for the NIMH sample. Right: Coefficient function estimate $\hat{\beta}_1(s)$ representing sex effect (female minus male), along with approximate pointwise 95\% confidence interval.} \label{fig67} \end{figure} \begin{figure}[b] \centering \includegraphics[width=\textwidth]{CT-pvals-67} \caption{Above: Standardized coefficient functions $\hat{\beta}_1(s)/\widehat{\textsc{se}}[\hat{\beta}_1(s)]$ for the real data (black curve and circles) and for 3999 permuted data sets (grey curves), adapted from the R package GET \citep{myllymaki2017}. Dashed lines indicate envelope for testing at the 5\% level. Below: Pointwise adjusted $p$-values $\tilde{p}(s)$ (single-step), $\tilde{p}^{\text{stepdown}}(s)$ and $\tilde{p}^{\text{ERL}}(s)$.} \label{p67} \end{figure} The model was fitted by the \texttt{pffr} function \citep{ivanescu2015}, part of the R package \texttt{refund} \citep{refund}, with both the real data and $M-1=3999$ data sets with the sex labels permuted. The upper panel of Fig.~\ref{p67} displays standardized coefficient functions $\hat{\beta}_1(s)/\widehat{\textsc{se}}[\hat{\beta}_1(s)]$ for the real and permuted data sets, along with a two-sided envelope for testing at the 5\% level. The GET $p$-value \eqref{pp} based on min-rank depth is $p_+=.003$; if we instead use the ERL depth, the GET $p$-value falls to .00025 ($=1/M$). But to quantify the evidence of a sex effect in an age-specific manner, we require pointwise $p$-values. The lower panel of Fig.~\ref{p67} shows the pointwise adjusted $p$-values $\tilde{p}(s)$ \eqref{pr}, along with the step-down and ERL-based adjusted $p$-values of \secref{morepower}, for an evenly spaced grid of 100 ages. Judging from the values of $\tilde{p}(s)$, there is only weak evidence of a CT difference between girls and boys up to age~9. The step-down $p$-values in this age range, on the other hand, are markedly lower and consistently below the conventional .05 level.
The ERL-adjusted $p$-values are closer to $\tilde{p}(s)$ in this lower age range but, somewhat less visibly, are the lowest of the three $p$-values for age 16 and higher. Thus neither of the two adjustments of \secref{morepower} consistently dominates the other. It must be acknowledged that the right superior temporal gyrus was specifically selected for the purpose of illustrating differences that may arise among the $p$-value adjustments. Comparable analyses for most other brain regions would have yielded less prominent differences. \section{Discussion} Expression \eqref{pr} defines distribution-free pointwise adjusted $p$-values with respect to the global envelope test of \cite{myllymaki2017}. A pointwise $p$-value approach such as this, which is agnostic with respect to the distribution of $T(s)$, is particularly valuable in analyses that go beyond pointwise $t$- or $F$-tests. For example, we are currently developing flexible pointwise tests for group differences in a measure of interest, based on estimating each group's density at each $s$, and then referring the distance between group-specific densities to a permutation distribution for each $s$; this distribution has no known analytic form under the null hypothesis. The step-down and ERL-based adjusted $p$-values of \secref{morepower} offer more powerful alternatives to \eqref{pr}, but some might question the suitability of the ERL adjustment since it is not order-preserving in general. The cortical thickness analysis of \secref{appsec} illustrates the power gains that the step-down and ERL adjustments may provide in some applications. Simulation studies will further elucidate the relative performance of alternative $p$-value adjustments in FDA settings. \begin{acknowledgement} This work was supported by Israel Science Foundation grant 1777/16. We thank Aaron Alexander-Bloch, Jay Giedd and Armin Raznahan for providing the cortical thickness data, and for advice on processing these data. \end{acknowledgement} \bibliographystyle{chicago}
\part{Paper} \section{Introduction} \input{Paper/intro} \subsection*{Related Work} \input{Paper/related} \section{Preliminaries} \input{Paper/prelim} \section{Equilibrium for General Utility Function} \input{Paper/general} \section{Equilibrium for Anonymous Utility Function} \input{Paper/anonymous} \section{Conclusions} \input{Paper/conclusions} \subsection{Anonymous Strictly Supermodular Utility Function in Two-receiver Scenarios} \label{sec:two receiver supermodular} \input{Paper/two-supermodular} \subsection{Anonymous Strictly Submodular Utility Function in Two-receiver Scenarios} \label{sec:two receiver submodular} \input{Paper/two-submodular} \subsection{Anonymous Strictly Supermodular (resp.\ Submodular) Utility Function in Multi-receiver Scenarios} \label{sec:multi receiver} \input{Paper/multi-receiver} \subsection{Omitted Proof for \Cref{Thm: small_prior_multi_receiver}} \label{apx:proof_small_prior_multi_receiver} \multipossmall* \begin{proof} We first characterize the PoS for the supermodular function, and then proceed to the PoS for the submodular function. Both discussions hinge on an equilibrium characterization that achieves the optimal welfare when $\lambda \le \sfrac{1}{2}$. \xhdr{Price of stability for supermodular function when $\lambda \le \sfrac{1}{2}$} When $\lambda \le \sfrac{1}{2}$, the following signaling policy $G$ is a welfare-optimal equilibrium for any anonymous strictly supermodular utility function. \begin{align*} g(\mathbf{\signal}) = \frac{1}{2\lambda} \cdot \indicator{q_j = q, \forall j \in [n], q\in[0, 2\lambda]} ~~~~ \forall \mathbf{\signal} \in [0, 1]^n~. \end{align*} Clearly, the above strategy is Bayes-plausible for each receiver. The supporting hyperplane function can be characterized as $\overline{\payoff}(\mathbf{q}, G) = \boldsymbol{\alpha} \cdot \mathbf{q}$, where $\boldsymbol{\alpha} = (\frac{\val(n)}{2n\lambda}, \ldots, \frac{\val(n)}{2n\lambda})$. It is easy to see that for all $\mathbf{q} = (q, \ldots, q) \in {\tt supp}{G}$, we have $\Pi(\mathbf{q}, G) = \frac{q\val(n)}{2\lambda} = \overline{\payoff}(\mathbf{q}, G)$. For $\mathbf{q} \notin {\tt supp}{G}$, first note that $\Pi(\mathbf{q}, G) = \Pi((\min(q_j, 2\lambda))_{j\in[n]}, G)$. Thus, it is without loss of generality to focus on points $\mathbf{q}$ whose coordinates are all at most $2\lambda$. For a given point $\mathbf{q} = (q_j)_{j\in[n]} \preceq (2\lambda, \ldots, 2\lambda)$, let $q_j^m$ be the $j$-th minimum value of $(q_j)_{j\in[n]}$ (here we focus on verifying points where there is no tie among the values $q_j$; the proof readily generalizes to the case of ties). Define $q_0^m = 0$. Then the payoff $\Pi(\mathbf{q}, G)$ at this point is given as follows: $\Pi(\mathbf{q}, G) = \sum_{j = 1}^{n} \frac{q_j^m - q_{j-1}^m}{2\lambda} \cdot \val(n - j + 1)$. By strict supermodularity of $\val(\cdot)$, we have $\Pi(\mathbf{q}, G) < \overline{\payoff}(\mathbf{q}, G)$. The price of stability $\Gamma(\lambda, \val)$ of the above equilibrium strategy is also straightforward to compute: note that $\texttt{WEL}(G,G) = 2\int_0^{2\lambda} \frac{1}{2\lambda} \Pi(\mathbf{q}, G)\,dq = 2\int_0^{2\lambda} \frac{1}{2\lambda} \overline{\payoff}(\mathbf{q}, G)\,dq = \val(n)$ where $\mathbf{q} = (q, \ldots, q)$ for $q\in[0, 2\lambda]$. Since the optimal welfare is $\val(n)$, we can conclude $\Gamma(\lambda, \val) = 1$.
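The hyperplane domination above is easy to confirm numerically. The following sketch (ours) takes $\val(k)=k^\tau$ with $\tau>1$ as a stand-in for a strictly supermodular utility and checks that $\Pi(\mathbf{q},G)\le\boldsymbol{\alpha}\cdot\mathbf{q}$ at random points, with equality on the diagonal support:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, lam, tau = 4, 0.4, 2.0          # lam <= 1/2; v(k) = k**tau
v = lambda k: float(k) ** tau
alpha = v(n) / (2 * n * lam)       # slope of the hyperplane

def payoff(q):
    # Pi(q, G) against the uniform-diagonal policy, via order statistics.
    qs = np.sort(np.minimum(q, 2 * lam))
    gaps = np.diff(np.concatenate(([0.0], qs)))
    return sum(g / (2 * lam) * v(n - j) for j, g in enumerate(gaps))

for _ in range(10000):
    q = rng.uniform(0, 2 * lam, n)
    assert payoff(q) <= alpha * q.sum() + 1e-9    # hyperplane dominates
q = rng.uniform(0, 2 * lam)
assert abs(payoff(np.full(n, q)) - alpha * n * q) < 1e-9  # binds on support
\end{verbatim}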
\xhdr{Price of stability for submodular function when $\lambda \le \sfrac{1}{2}$} When $\lambda \le \sfrac{1}{2}$, the following signaling policy $G$ is a welfare-optimal equilibrium for any anonymous strictly monotone submodular utility function. \begin{align*} g(\mathbf{\signal}) = \frac{1}{2\lambda} \cdot \indicator{q_j = q, \forall j \in [\sfrac{n}{2}], q_j = 2\lambda - q, \forall j \in [\sfrac{n}{2}+1, \ldots, n]} ~~~~ \forall \mathbf{\signal} \in [0, 1]^n~, \text{when $n$ is even};\\ g(\mathbf{\signal}) = \frac{1}{2\lambda} \cdot \indicator{q_j = q, \forall j \in [\sfrac{(n-1)}{2}], q_j = 2\lambda - q, \forall j \in [\sfrac{(n+1)}{2}, \ldots, n]} ~~~~ \forall \mathbf{\signal} \in [0, 1]^n~, \text{when $n$ is odd}. \end{align*} Then it suffices to verify that the above strategy is indeed a symmetric equilibrium. Clearly, the strategy presented above is Bayes-plausible for each receiver. Observe that given an opponent sender's strategy $G$ that follows the above structure, for an induced signal vector $\mathbf{q} = (q, \ldots, q, 2\lambda - q, \ldots, 2\lambda - q)$, we have \begin{alignat}{2} \Pi(\mathbf{q}, G) & = \frac{q}{2\lambda} \cdot v\left(\frac{n}{2}\right) + \left(1 - \frac{q}{2\lambda}\right)\cdot v\left(\frac{n}{2}\right) = v\left(\frac{n}{2}\right), ~~ & \text{when $n$ is even};\nonumber\\ \Pi(\mathbf{q}, G) & = \frac{q}{2\lambda}\cdot v\left(\frac{n-1}{2}\right) + \left(1- \frac{q}{2\lambda}\right)\cdot v\left(\frac{n+1}{2}\right), & \text{when $n$ is odd}.\nonumber \end{alignat} To satisfy the condition in~\Cref{prop:payoff envelope}, the hyperplane function can be characterized as $\overline{\payoff}(\mathbf{q}, G) = \boldsymbol{\alpha}^\top \mathbf{q}$, where \begin{alignat}{2} \boldsymbol{\alpha} & = \left(\frac{v\left(\frac{n}{2}\right)}{n\lambda}, \ldots, \frac{v\left(\frac{n}{2}\right)}{n\lambda}\right), ~~ & \text{when $n$ is even};\nonumber\\ \boldsymbol{\alpha} & = \left(\frac{v\left(\frac{n-1}{2}\right)}{\lambda(n-1)}, \ldots, \frac{v\left(\frac{n-1}{2}\right)}{\lambda(n-1)}, \frac{v\left(\frac{n+1}{2}\right)}{\lambda(n+1)}, \ldots, \frac{v\left(\frac{n+1}{2}\right)}{\lambda(n+1)}\right), & \text{when $n$ is odd}. \nonumber \end{alignat} By construction, it is easy to see that for any $\mathbf{q}\in{\tt supp}{G}$, we have $\Pi(\mathbf{q}, G) = \overline{\payoff}(\mathbf{q}, G)$. For any $\mathbf{q} \notin {\tt supp}{G}$, we also have $\Pi(\mathbf{q}, G) \le \overline{\payoff}(\mathbf{q}, G)$. Following the same analysis as in the two-receiver setting, the above equilibrium is also a welfare-optimal equilibrium for any $\lambda \le \sfrac{1}{2}$. To see this, note that since the two senders are ex ante symmetric, the optimal welfare is $2v\left(\sfrac{n}{2}\right)$ when $n$ is even, and $v\left(\frac{n-1}{2}\right) + v\left(\frac{n+1}{2}\right)$ when $n$ is odd. By simple algebra, it can be verified that the above equilibrium indeed achieves the optimal welfare. Note that the above equilibrium in either case is not the unique welfare-optimal equilibrium, as a sender can arbitrarily choose $\sfrac{n}{2}$ (or $\sfrac{n-1}{2}$) receivers and then generate negatively correlated signals for the remaining receivers. \end{proof} \subsection{Omitted Proof for \Cref{Thm: large_prior_multi_receiver}} \label{apx:large_prior_multi_receiver} \multilargesup* \begin{proof} We characterize the upper bound of the PoS for the supermodular utility function by construction.
In particular, we consider the following signaling policy $(G, G)$ \begin{align*} g(\mathbf{\signal}) = \frac{1 - \mass_{\texttt{s}}}{\hat{p}} \cdot \indicator{q_1 = \cdots = q_n = q \;\; \text{and}\;\; q \in [0,\hat{p}]} + \mass_{\texttt{s}}\cdot \indicator{q_1 = \cdots = q_n = 1} ~~~~ \forall \mathbf{\signal} \in [0, 1]^n~, \end{align*} where $\mass_{\texttt{s}}, \hat{p}$ are parameters to be given shortly. Below, we argue that there exists a valid choice of $\mass_{\texttt{s}}, \hat{p}$ such that $(G, G)$ is indeed an equilibrium. Clearly, the above strategy is Bayes-plausible for each receiver. Since $\Pi(\boldsymbol{0}_{[n]}, G) = 0$ and $\boldsymbol{0}_{[n]}\in{\tt supp}{G}$, we can conclude $\overline{\payoff}(\boldsymbol{0}_{[n]}, G) = 0$. Thus, the supporting hyperplane function can be characterized as $\overline{\payoff}(\mathbf{q}, G) = \boldsymbol{\alpha} \cdot \mathbf{q}$, where $\boldsymbol{\alpha} = (\alpha, \ldots, \alpha)$. Let $\mu, \hat{p}$ be the structure parameters of the equilibrium. Observe that by the Bayes-plausibility constraint and \Cref{prop:payoff envelope}, we have \begin{alignat}{2} \text{Bayes-plausibility} \Rightarrow ~ & \frac{1}{2}\hat{p} \cdot (1 -\mu) +\mu = \lambda; \nonumber\\ \Pi((\hat{p},\ldots, \hat{p}), G) = \overline{\payoff}((\hat{p},\ldots, \hat{p}), G) \Rightarrow ~ & (1 -\mu) \val(n) = n\alpha\hat{p}; \nonumber\\ \Pi(\boldsymbol{1}_{[n]}, G) = \overline{\payoff}(\boldsymbol{1}_{[n]}, G) \Rightarrow ~ &\sum_{j=1}^n \frac{\mu C_j^n}{2^n}\cdot \val(j) + \left(1 -\mu \right)\cdot \val(n) = n\alpha.\nonumber \end{alignat} The above equations help us pin down the value of $\mu$ given $\lambda$ and $\val(\cdot)$: \begin{align*} \mu^2\left(2T(n) - \val(n)\right) +\mu\cdot 2\lambda\left(\val(n) - T(n)\right) + \val(n)\left(1 - 2\lambda\right) = 0 \end{align*} where $T(n) = \sum_{j=1}^n \frac{C_j^n}{2^n}\cdot \val(j)$. In the analysis below, we focus on the scenario where $\val(n) = n^\tau$ for $\tau \ge 1$. By inspection, it is easy to see that $\forall \lambda > \sfrac{1}{2}, \tau \ge 1$, the above equation always has a solution \begin{align} \mass_{\texttt{s}} = \frac{2\lambda T(n) - 2\lambda\val(n) + \sqrt{4\lambda^2(\val(n) - T(n))^2 - 4\val(n)(2T(n) - \val(n))(1 - 2\lambda)}}{2(2T(n) - \val(n))} \label{eq: mass_choice_sup_multi_r} \end{align} We can now pin down the values of $\alpha, \hat{p}$: \begin{align*} \alpha = \frac{\mu T(n) + (1 - \mu)\val(n)}{n}, ~~ \hat{p} = \frac{(1 -\mass_{\texttt{s}})\val(n)}{n\alpha}, ~~ \beta = 0. \end{align*} To verify that $\Pi(\mathbf{q}, G) \le \overline{\payoff}(\mathbf{q}, G)$ for all points $\mathbf{q} \in [0, 1]^n$, note that for all $\mathbf{q} \prec \boldsymbol{1}_{[n]}$ the verification is the same as in the supermodular case with $\lambda \le \sfrac{1}{2}$. Thus, in this proof, we focus on the verification when some coordinates of $(q_j)_{j\in[n]}$ equal $1$. For a point $\mathbf{q}$, assume that the last $s$ coordinates of $(q_j)_{j\in[n]}$ equal $1$ (i.e., $q_{n-s + 1} = \ldots = q_n = 1$), and that the remaining $q_j$s are all at most $\hat{p}$ (note that if there exists some $j$ with $1 > q_j > \hat{p}$, we may without loss of generality verify the point $\mathbf{q}'$ whose $j$-th coordinate equals $\hat{p}$). Let $q_j^m$ be the $j$-th minimum value among $(q_j)_{j \in [1, \ldots, n-s]}$. Define $q_0^m = 0$.
Then the payoff $\Pi(\mathbf{q}, G)$ at this point is given as follows: \begin{align*} \Pi(\mathbf{q}, G) & = \sum_{j = 1}^{n-s} \frac{(1- \mu)(q_j^m - q_{j-1}^m)}{\hat{p}} \cdot \val(n - j + 1) + \sum_{j = 1}^{s} \frac{\mu C_j^{s}}{2^{s}}\cdot \val(j) \\ & = \sum_{j = 1}^{n-s} \frac{n\alpha(q_j^m - q_{j-1}^m)}{\val(n)} \cdot \val(n - j + 1) + \sum_{j = 1}^{s} \frac{\mu C_j^{s}}{2^{s}}\cdot \val(j). \end{align*} Meanwhile, $\overline{\payoff}(\mathbf{q}, G) = \sum_{j=1}^{n-s} \alpha q_j + \sum_{j = n-s+1}^n \alpha$. By strict supermodularity of $\val(\cdot)$, we have $\Pi(\mathbf{q}, G) < \overline{\payoff}(\mathbf{q}, G)$. Given the probability mass $\mu$ on the point $\boldsymbol{1}_{[n]}$, we can compute the social welfare $\texttt{WEL}(G,G)$ of this equilibrium structure as a function of $\mu$: \begin{align*} \texttt{WEL}(G,G) & = 2\int_0^{\hat{p}} \frac{1 -\mu}{\hat{p}}\cdot n\alpha q \,dq + 2\mu\cdot \left((1 - \mu)\cdot\val(n) + \mu\sum_{j=1}^n \frac{C_j^n}{2^n}\val(j)\right) \\ & =(1 - \mu)^2 \val(n) + 2\mu((1 - \mu)\cdot\val(n) + \mu T(n)). \end{align*} Thus, choosing $\mu$ as the value given in~\eqref{eq: mass_choice_sup_multi_r}, the PoS can be upper bounded by \begin{align*} \Gamma(\lambda, \val) \le \frac{\val(n)}{\texttt{WEL}(G,G)} = \frac{\val(n)}{(1 - \mass_{\texttt{s}}^2)\val(n) + 2\mass_{\texttt{s}}^2 T(n) }. \end{align*} \end{proof} \subsection{Omitted Proof for \Cref{Thm: large_prior_multi_receiver_sub}} \label{apx:large_prior_multi_receiver_sub} \multilargesub* \begin{proof} When $\lambda > \sfrac{1}{2}$ and $n$ is even, let $\mathbf{q}\in[0, 1]^n$ denote the induced signal random variable. We consider the following structure of signaling policy: \begin{equation} \label{multi_r_equilibrium_large_prior_sub} \begin{aligned} g(\mathbf{\signal}) = & \frac{\mu}{\ell} \cdot \indicator{q_j = q, \forall j \in [\sfrac{n}{2}],~ \text{for } q \in [0, \ell], q_j = 1, \forall j \in [\sfrac{n}{2}+1, \ldots, n]} + \\ & \frac{1 - 2\mu}{\hat{p} - \ell} \cdot \indicator{q_j = q, \forall j \in [\sfrac{n}{2}], q_j = \hat{p} + \ell - q, \forall j \in [\sfrac{n}{2}+1, \ldots, n], ~ \text{for } q \in [\ell, \hat{p}]} + \\ & \frac{\mu}{\ell} \cdot \indicator{q_j = 1, \forall j \in [\sfrac{n}{2}], q_j = q, \forall j \in [\sfrac{n}{2}+1, \ldots, n],~ \text{for } q \in [0, \ell]} \end{aligned} \end{equation} where $\ell, \hat{p}$ are parameters to be given shortly. We consider the supporting hyperplane function $\overline{\payoff}(\mathbf{q}, G) = \boldsymbol{\alpha}^\top \mathbf{q} + \beta$, where $\boldsymbol{\alpha} = (\alpha, \ldots, \alpha)$. We first write $\alpha, \beta, \ell$ and $\hat{p}$ as functions of $\mu$; then, given a choice of $\mu$, we verify whether it is indeed an equilibrium. \begin{align*} \text{Bayes-plausibility} \Rightarrow ~ & \frac{\ell \mu }{2} + \frac{1}{2} (\hat{p} + \ell) (1 - 2\mu) + \mu = \lambda\\ (0, \ldots, 0, 1, \ldots, 1) \in {\tt supp}{G} \Rightarrow ~ & \sum_{j = 1}^{\frac{n}{2}}\frac{\mu C^{\frac{n}{2}}_j}{2^{\frac{n}{2}}}\cdot v(j) + (1 - \mu )v\left(\frac{n}{2}\right) = \frac{\alpha n}{2} + \beta\\ (\ell, \ldots, \ell, 1, \ldots, 1) \in {\tt supp}{G} \Rightarrow ~ & \sum_{j = 0}^{\frac{n}{2}}\frac{\mu C^{\frac{n}{2}}_j}{2^{\frac{n}{2}}}\cdot v\left(j+\frac{n}{2}\right) + (1 - \mu )v\left(\frac{n}{2}\right) = \frac{\alpha\ell n}{2} + \frac{\alpha n}{2} + \beta\\ (\ell, \ldots, \ell, \hat{p}, \ldots, \hat{p}) \in {\tt supp}{G} \Rightarrow ~ & v\left(\frac{n}{2}\right) = \frac{\alpha\ell n}{2} + \frac{\alpha\hat{p} n}{2} + \beta.
\end{align*} We get \begin{align} \alpha & = \frac{\mu^2\left(\frac{v(\frac{n}{2})}{2^{\frac{n}{2}} }+ \overline{T}(n) - T(n)\right) + \mu (1 - 2\mu)\left(v(\frac{n}{2}) - T(n)\right)}{(2\lambda - 1) \cdot \frac{n}{2}}, \nonumber\\ \beta & =\mu T(n) + (1 - \mu )v\left(\frac{n}{2}\right) - \frac{\alpha n}{2}, \nonumber\\ \ell & = \frac{2}{\alpha n}\cdot \left(\frac{\mu v(\frac{n}{2})}{2^{\frac{n}{2}}} + \mu (\overline{T}(n) - T(n))\right), \label{eq_multi_r_ell}\\ \hat{p} & = \frac{2}{\alpha n}\cdot \left(\mu v\left(\frac{n}{2}\right) -\mu T(n) + \frac{\alpha n}{2}\right) - \ell, \label{eq_multi_r_qhat} \end{align} where $T(n):= \sum_{j = 1}^{\frac{n}{2}} \frac{C_j^{\frac{n}{2}}}{2^{\frac{n}{2}}}v(j), \overline{T}(n) := \sum_{j = 1}^{\frac{n}{2}} \frac{C_j^{\frac{n}{2}}}{2^{\frac{n}{2}}}v(j+\frac{n}{2})$. Given $\lambda > 0.5$, we have the following constraints to ensure that the strategy has a valid structure \begin{align} \alpha \ge 0 \Rightarrow ~ & \mu S(n)\ge T(n) - v\left(\frac{n}{2}\right) \nonumber\\ \beta\ge 0 \Rightarrow ~ & \mu^2S(n) - \mu \cdot 2\lambda\left(T(n) - v\left(\frac{n}{2}\right) \right) - (2\lambda - 1)v\left(\frac{n}{2}\right) \le 0 \nonumber\\ \ell \in (0, 1) \Rightarrow ~ & \mu S(n) \ge T(n) - v\left(\frac{n}{2}\right) + (2\lambda - 1) \left(\overline{T}(n) - T(n) + \frac{v\left(\frac{n}{2}\right)}{2^{\frac{n}{2}}}\right) \nonumber\\ \hat{p} \in (\ell, 1) \Rightarrow ~ & \mu S(n) \ge T(n) - v\left(\frac{n}{2}\right) + (2\lambda - 1) \left(2\overline{T}(n) - T(n) + \frac{v\left(\frac{n}{2}\right)(2 - 2^{\frac{n}{2}})}{2^{\frac{n}{2}}}\right) \label{rhs} \end{align} where $S(n):= T(n) + \overline{T}(n) + v\left(\frac{n}{2}\right) \left(2^{-\frac{n}{2}} - 2\right)$. In addition to the above constraints, we also need to ensure that $\Pi(\mathbf{q}, G) \le \overline{\payoff}(\mathbf{q}, G)$ for all $\mathbf{q} \in [0, 1]^n$. It is easy to verify that the following two conditions suffice to ensure $\Pi(\mathbf{q}, G) \le \overline{\payoff}(\mathbf{q}, G), \forall \mathbf{q} \in [0, 1]^n$. \begin{align*} \Pi(\boldsymbol{1}_{[n]}, G) \le \overline{\payoff}(\boldsymbol{1}_{[n]}) \Rightarrow ~ & (1 - 2\mu)v(n) + 2\mu \cdot \left(\overline{T}(n) + \frac{v(\frac{n}{2})}{2^{\frac{n}{2}}}\right) \le n\alpha + \beta \\ \Pi(\boldsymbol{\ell}_{[n]}, G) \le \overline{\payoff}(\boldsymbol{\ell}_{[n]}) \Rightarrow ~ & 2\mu v\left(\frac{n}{2}\right) \le n\alpha\ell + \beta = \frac{2\mu v\left(\frac{n}{2}\right)}{2^{\frac{n}{2}}} + 2\mu (\overline{T}(n) - T(n)) + \beta \end{align*} Consider a utility function $v(k) = k^\tau$ for any $\tau \in(0, 1]$. It is easy to check that $S(n) \le 0$ (equality holds when $\tau = 1$). Let $R(n)$ be the right-hand side of~\eqref{rhs}; then the above conditions reduce to \begin{align*} & \mu \le \frac{R(n)}{S(n)} ~~ \text{ and } ~~ \mu^2S(n) - \mu \cdot 2\lambda\left(T(n) - v\left(\frac{n}{2}\right) \right) - (2\lambda - 1)v\left(\frac{n}{2}\right) \le 0 \\ & -\mu^2 S(n) + \mu \left((2\lambda - 1)(2\overline{T}(n) - T(n) - 2v(n)+\frac{v\left(\frac{n}{2}\right)(2^{\frac{n}{2}} + 2)}{2^{\frac{n}{2}}}) - v(\sfrac{n}{2}) + T(n)\right)+ \\ & (2\lambda - 1)(v(n) - v\left(\sfrac{n}{2}\right)) \le 0 \\ & \mu^2S(n) + \mu \left(v(\sfrac{n}{2}) - T(n) + (2\lambda - 1)\left(v(\sfrac{n}{2})\left(3 - \frac{2}{2^{\frac{n}{2}}}\right) - 2\overline{T}(n) + T(n)\right)\right) - (2\lambda - 1)v(\sfrac{n}{2}) \le 0 \end{align*} The above conditions give us an instance-specific range $\cI(\lambda, \val, n)$ of possible values of $\mu$.
For any $\mu\in\cI(\lambda, \val, n)$, we can find an equilibrium that has the structure stated above and whose marginal mass on the point $1$ is $\mu$. To ensure a feasible mass, we also need $\cI(\lambda, \val, n) \subseteq (0, \sfrac{1}{2})$. Thus, given a prior $\lambda > \sfrac{1}{2}$, the requirement $\cI(\lambda, \val, n) \subseteq (0, \sfrac{1}{2})$ yields a parameter region $\cC(\lambda, n) \subseteq(0, 1]$ such that for any $\tau\in \cC(\lambda, n)$, any $\mu\in\cI(\lambda, \val, n)$ forms an equilibrium with the structure stated above. Let $(G, G)$ be the equilibrium whose marginal mass on the point $1$ is $\mass_{\texttt{sub}} = \min\{\mu:\mu\in\cI(\lambda, \val, n)\}$. We can now give an upper bound on the price of stability for the instance $(\lambda, \val, n)$: \begin{align*} \Gamma(\lambda, \val) & \le \frac{2\val(\sfrac{n}{2})}{2\int_{\mathbf{q}} g(\mathbf{q}) \Pi(\mathbf{q}, G)d\mathbf{q}} \\ & = \frac{2\val(\sfrac{n}{2})}{2\int_{\mathbf{q}} g(\mathbf{q}) \overline{\payoff}(\mathbf{q}, G)d\mathbf{q}} \\ & = \frac{2\val(\sfrac{n}{2})}{2\left(2\int_0^\ell \frac{\mass_{\texttt{sub}}}{\ell}\cdot (\frac{n\alpha q}{2} + \frac{n \alpha}{2} + \beta)dq + \int_\ell^{\hat{p}} \frac{1 - 2\mass_{\texttt{sub}}}{\hat{p} - \ell}\cdot(\frac{n\alpha q}{2} + \frac{n\alpha (\hat{p} + \ell - q) }{2}+ \beta)dq\right)} \\ & = \frac{2\val(\sfrac{n}{2})}{2\cdot\left(2\mass_{\texttt{sub}}\cdot(\frac{n\alpha}{2} + \beta) + \frac{n\mass_{\texttt{sub}}\alpha \ell}{2} + (1 - 2\mass_{\texttt{sub}})\cdot(\frac{n\alpha(\ell+\hat{p})}{2} + \beta)\right)} \\ & = \frac{2\val(\sfrac{n}{2})}{2\val(\sfrac{n}{2}) + 2\mass_{\texttt{sub}}^2 S(n)}. \end{align*} \begin{figure}[ht] \centering \begin{subfigure}{0.4\linewidth} \includegraphics[width=\linewidth]{Paper/plots/eq_region_sub_muliple_r.pdf} \end{subfigure} \caption{The layout of the parameter region $\cC(\lambda, n)$ (above each curve) for $\val(n) = n^\tau$ with $\tau \in (0, 1]$ that admits the existence of an equilibrium with the structure stated in~\eqref{multi_r_equilibrium_large_prior_sub}.} \label{tau_multi_r_sub} \end{figure}
When $n$ is odd, for ease of presentation, we define $\underline{n} = \frac{n-1}{2}$ and $\overline{n} = \frac{n+1}{2}$. The structure we consider is a bit more involved than that of the even-$n$ case. In particular, we consider the following structure of signaling policy \begin{equation} \label{multi_r_equilibrium_large_prior_sub_odd} \begin{aligned} g(\mathbf{\signal}) = & \frac{\mu_1}{\ell_1} \cdot \indicator{q_j = q, \forall j \in [\overline{n}],~ \text{for } q \in [0, \ell_1], q_j = 1, \forall j \in [\overline{n}+1, \ldots, n]} + \\ & \frac{1 - \mu_1 - \mu_2}{\hat{p}_1 - \ell_1} \cdot \indicator{q_j = q, \forall j \in [\overline{n}], q_j = f(q), \forall j \in [\overline{n}+1, \ldots, n], ~ \text{for } q \in [\ell_1, \hat{p}_1]} + \\ & \frac{\mu_2}{\ell_2} \cdot \indicator{q_j = 1, \forall j \in [\overline{n}], q_j = q, \forall j \in [\overline{n}+1, \ldots, n],~ \text{for } q \in [0, \ell_2]} \end{aligned} \end{equation} where $f(q) = \frac{(q - \ell_1)(\ell_2 - \hat{p}_2)}{\hat{p}_1 - \ell_1} + \hat{p}_2$, and $\hat{p}_1, \hat{p}_2, \mu_1, \mu_2, \ell_1, \ell_2$ are parameters to be given shortly. We consider a supporting hyperplane function $\overline{\payoff}(\mathbf{q}, G) = \boldsymbol{\alpha} \cdot \mathbf{q} + \beta$, where $\boldsymbol{\alpha} = (\underbrace{\alpha_1, \ldots, \alpha_1}_{[\overline{n}]}, \underbrace{\alpha_2, \ldots, \alpha_2}_{[n]\setminus [\overline{n}]})$. According to Bayes-plausibility and \Cref{prop:payoff envelope}, we have \begin{align*} \text{Bayes-plausibility} \Rightarrow ~ & \ell_1 \mu_1 /{2} + (\hat{p}_1 + \ell_1) (1 - \mu_1 - \mu_2)/2 + \mu_2 = \lambda\\ & \ell_2 \mu_2 /{2} + (\hat{p}_2 + \ell_2) (1 - \mu_1 - \mu_2)/2 + \mu_1 = \lambda\\ (0, \ldots, 0, 1, \ldots, 1) \in {\tt supp}{G} \Rightarrow ~ & \sum_{j=1}^{\overline{n}} \frac{C_j^{\overline{n}}}{2^{\overline{n}}}\val(j)+ (1 - \mu_2)\val(\overline{n}) = \frac{(n-1)\alpha_2}{2} + \beta\\ (1, \ldots, 1, 0, \ldots, 0) \in {\tt supp}{G} \Rightarrow ~ & \sum_{j=1}^{\underline{n}} \frac{C_j^{\underline{n}}}{2^{\underline{n}}}\val(j)+ (1 - \mu_1)\val(\underline{n}) = \frac{(n+1)\alpha_1}{2} + \beta\\ (\ell_1, \ldots, \ell_1, 1, \ldots, 1) \in {\tt supp}{G} \Rightarrow ~ & \mu_1\sum_{j=0}^{\overline{n}} \frac{C_j^{\overline{n}}}{2^{\overline{n}}}\val(j+\underline{n}) + (1- \mu_1)\val(\underline{n}) = \frac{(n+1)\ell_1\alpha_1 + (n-1)\alpha_2}{2} + \beta \\ (1, \ldots, 1, \ell_2, \ldots, \ell_2) \in {\tt supp}{G} \Rightarrow ~ & \mu_2\sum_{j=0}^{\underline{n}} \frac{C_j^{\underline{n}}}{2^{\underline{n}}}\val(j+\overline{n}) + (1- \mu_2)\val(\overline{n}) = \frac{(n+1)\alpha_1 + (n-1)\ell_2\alpha_2}{2} + \beta\\ (\ell_1, \ldots, \ell_1, \hat{p}_2, \ldots, \hat{p}_2) \in {\tt supp}{G} \Rightarrow ~ & \mu_1\val(\overline{n}) + (1 - \mu_1)\val(\underline{n}) = \frac{\alpha_1\ell_1 (n+1)}{2} + \frac{\alpha_2\hat{p}_2 (n-1)}{2} + \beta \\ (\hat{p}_1, \ldots, \hat{p}_1, \ell_2, \ldots, \ell_2) \in {\tt supp}{G} \Rightarrow ~ & \mu_2\val(\underline{n}) + (1 - \mu_2)\val(\overline{n}) = \frac{\alpha_1\hat{p}_1 (n+1)}{2} + \frac{\alpha_2\ell_2 (n-1)}{2} + \beta.
\end{align*} The above equations are enough for us to express $\hat{p}_1, \hat{p}_2, \ell_1, \ell_2, \alpha_1, \alpha_2, \beta, \mu_2$ as functions of $\mu_1$. To guarantee a valid equilibrium, we also need to ensure that the above parameters lie in valid ranges, as in the earlier discussions. Furthermore, following \Cref{prop:payoff envelope}, we also need to ensure $\overline{\payoff}(\mathbf{q}, G) \ge \Pi(\mathbf{q}, G)$ for all $\mathbf{q} \in [0, 1]^n$. Similar to the discussion in the even-$n$ case, it suffices to ensure $\overline{\payoff}((\ell_1, \ldots, \ell_1, \ell_2, \ldots, \ell_2), G) \ge \Pi((\ell_1, \ldots, \ell_1, \ell_2, \ldots, \ell_2), G)$. Together with the constraints on $\hat{p}_1, \hat{p}_2, \ell_1, \ell_2, \alpha_1, \alpha_2, \beta, \mu_2$, we are able to pin down a parameter region $\cC(\lambda, n)$ for $\val$ such that for any $\val \in \cC(\lambda, n)$, we can identify a feasible range $\cI(\lambda, \val, n)$ for $\mu_1$ such that for any $\mu_1\in \cI(\lambda, \val, n)$, we can find an equilibrium that has the structure in~\eqref{multi_r_equilibrium_large_prior_sub_odd}. \end{proof} \subsection{Numerical Results for \Cref{sec:multi receiver}} \label{appendix: numerical results} Below we present numerical results for a general anonymous utility function $\val(n) = n^\tau$. Given this utility function, we have $R(n) = 0, T(n) = 0$ only when $\tau = 1$. When $\tau\in (1, +\infty)$, $\val(\cdot)$ is a strictly supermodular utility function. As presented in Fig.~\ref{Fig: pos_multi_r_sup}, the upper bound of the PoS increases as the prior $\lambda$ and $\tau$ increase. Note that a smaller $\tau$ means that $\val$ is closer to additive. Thus, the results are consistent with our findings in the two-receiver setting. Moreover, the behavior of the upper bound of the PoS appears stable as $n$ increases. When $\tau\in (0, 1]$, $\val$ is a strictly submodular utility function, and the above parameter region $\cC(\lambda, n)$ in \Cref{Thm: large_prior_multi_receiver_sub} reduces to a feasible range of $\tau$.\footnote{We also present a figure (see Fig.~\ref{tau_multi_r_sub}) in the appendix on how $\cC(\lambda, n)$ changes with $\lambda$ and $n$.} Similar to our results in the two-receiver setting for submodular utility functions, the impact of the shape of $\val$ (i.e., $\tau$) on the upper bound of the PoS is subtle, and such subtleties also appear as $n$ increases.
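For completeness, the supermodular upper bound plotted in Fig.~\ref{Fig: pos_multi_r_sup} can be reproduced directly from the quadratic root \eqref{eq: mass_choice_sup_multi_r} and the welfare expression above. The following sketch is ours, with $\val(k) = k^\tau$:
\begin{verbatim}
import numpy as np
from math import comb

def pos_upper_bound(n, lam, tau):
    # Upper bound on the PoS for v(k) = k**tau (tau > 1) and lam > 1/2.
    v = lambda k: float(k) ** tau
    T = sum(comb(n, j) * v(j) for j in range(1, n + 1)) / 2 ** n
    a = 2 * T - v(n)                 # quadratic a*mu^2 + b*mu + c = 0
    b = 2 * lam * (v(n) - T)
    c = v(n) * (1 - 2 * lam)
    mu = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)   # root in [0, 1]
    return v(n) / ((1 - mu ** 2) * v(n) + 2 * mu ** 2 * T)

for lam in (0.6, 0.75, 0.9):
    print(lam, round(pos_upper_bound(4, lam, 2.0), 4))
\end{verbatim}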
\begin{figure}[ht] \centering \begin{subfigure}{0.3\linewidth} \includegraphics[width=\linewidth]{Paper/plots/pos_sup_large_prior_n=4.pdf} \caption{} \label{sup_multi_r_n_4} \end{subfigure} \hspace{1.0cm} \begin{subfigure}{0.3\linewidth} \includegraphics[width=\linewidth]{Paper/plots/pos_sup_large_prior_n=8.pdf} \caption{} \label{sup_multi_r_n_8} \end{subfigure} \caption{The illustration of how the upper bound of the PoS changes with the parameters $(\lambda, \val, n)$ for $\val(n) = n^\tau$ where $\tau > 1$.} \label{Fig: pos_multi_r_sup} \end{figure} \begin{figure}[ht] \centering \begin{subfigure}{0.3\linewidth} \includegraphics[width=\linewidth]{Paper/plots/pos_sub_large_prior_n=4.pdf} \caption{} \label{sub_multi_r_n_4} \end{subfigure} \hspace{1.0cm} \begin{subfigure}{0.3\linewidth} \includegraphics[width=\linewidth]{Paper/plots/pos_sub_large_prior_n=8.pdf} \caption{} \label{sub_multi_r_n_8} \end{subfigure} \caption{The illustration of how the upper bound of the PoS changes with the parameters $(\lambda, \val, n)$ for $\val(n) = n^\tau$ where $\tau \le 1$.} \label{Fig: pos_multi_r_sub} \end{figure} \subsection{Omitted Proof for \Cref{thm: pos-two-sup-large-prior}} \tworeceiversup* \begin{proof} Consider a supporting hyperplane function $\overline{\payoff}(\mathbf{q}, G) = \boldsymbol{\alpha} \cdot \mathbf{q} + \beta$, where $\boldsymbol{\alpha} = (\alpha, \alpha)$. Let $\mu$ denote the mass on the point $(1, 1)$. We first write $\alpha, \beta$ and $\hat{p}$ as functions of $\mu$; then, given a choice of $\mu$, we verify whether it is indeed an equilibrium. For notational simplicity, let $t\triangleq\val(1), r\triangleq\val(2)$. Observe that by the Bayes-plausibility constraint and \Cref{prop:payoff envelope}, we have \begin{alignat}{2} \text{Bayes-plausibility} \Rightarrow ~ & \frac{1}{2}\hat{p} \cdot (1 -\mu) +\mu = \lambda; \nonumber\\ \Pi((0,0), G) = \overline{\payoff}((0,0), G) \Rightarrow ~ & 0 = \beta; \nonumber\\ \Pi((\hat{p},\hat{p}), G) = \overline{\payoff}((\hat{p},\hat{p}), G) \Rightarrow ~ & (1 -\mu) r = 2\alpha\hat{p} + \beta; \nonumber\\ \Pi(\boldsymbol{1}_{[2]}, G) = \overline{\payoff}(\boldsymbol{1}_{[2]}, G) \Rightarrow ~ & \frac{1}{4}\mu \cdot t \cdot 2 + \left(1 -\mu + \frac{1}{4}\mu\right)\cdot r = 2\alpha + \beta.\nonumber \end{alignat} The above equations help us pin down the value of $\mu$ given $\lambda$ and $\rho$: \begin{align*} \mu^2\left(2t - r\right) +\mu\cdot \lambda\left(3 r - 2t\right) + r\left(2 - 4\lambda\right) = 0 \end{align*} By inspection, it is easy to see that $\forall \lambda > \sfrac{1}{2}, r > 2t$, the above equation always has a solution \begin{align} \mass_{\texttt{s}} = \frac{\lambda(3 - 2\rho) - \sqrt{\lambda^2(3-2\rho)^2 - 4(2\rho - 1)(2-4\lambda)}}{2(1 - 2\rho)} \in [0, 1] \label{eq: mass_choice_sup} \end{align} where $\rho = \sfrac{t}{r} \le \sfrac{1}{2}$, and we can now pin down the values of $\alpha, \beta, \hat{p}$: \begin{align*} \alpha = \left(\frac{1}{2} - \frac{3}{8}\mass_{\texttt{s}}\right) \cdot r + \frac{1}{4}\mass_{\texttt{s}} t, ~~ \hat{p} = \frac{(1 -\mass_{\texttt{s}})r}{2\alpha}, ~~ \beta = 0.
\end{align*} To verify that $\Pi(\mathbf{q}, G) \le \overline{\payoff}(\mathbf{q}, G)$ for all points $\mathbf{q} = (q_1, q_2) \in [0, 1]^2$, observe that $\Pi(\mathbf{q}, G) = r(1 -\mass_{\texttt{s}})$ for all $(\hat{p}, \hat{p}) \preceq \mathbf{q} \prec (1, 1)$ and $\Pi((q_1, q_2), G) = \Pi((\hat{p}, q_2), G)$ for all $q_1 \in [\hat{p} , 1), \forall q_2 \in[0, 1)$; thus it suffices to argue that \begin{align*} & \Pi((\hat{p}, q_2), G)\le \overline{\payoff}((\hat{p}, q_2), G), ~~ \forall q_2\in[0, \hat{p}]\\ \Rightarrow ~~ & \frac{q_2}{\hat{p}}(1 -\mass_{\texttt{s}}) r + \frac{\hat{p} - q_2}{\hat{p}}(1 -\mass_{\texttt{s}}) t \le \alpha(\hat{p} + q_2) , ~~ \forall q_2\in[0, \hat{p}]\\ & \Pi((1, q_2), G)\le \overline{\payoff}((1, q_2), G), ~~ \forall q_2\in[0, \hat{p}]\\ \Rightarrow ~~ & \frac{q_2}{\hat{p}}(1 -\mass_{\texttt{s}}) r + \frac{\hat{p} - q_2}{\hat{p}}(1 -\mass_{\texttt{s}}) t + \sfrac{1}{2}\mass_{\texttt{s}} t \le \alpha(1 + q_2), ~~ \forall q_2\in[0, \hat{p}] \end{align*} It is easy to verify that the above conditions hold for any $\lambda>\sfrac{1}{2}$ and $\rho\in(0, \sfrac{1}{2})$. Given the probability mass $\mu$ on the point $(1, 1)$, we can compute the social welfare $\texttt{WEL}(G,G)$ of this equilibrium structure as a function of $\mu$ \begin{align*} \texttt{WEL}(G,G) & = 2\int_0^{\hat{p}} \frac{1 -\mu}{\hat{p}}\cdot 2\alpha q \,dq + 2\mu\cdot \left(\left(1 - \frac{3}{4}\mu\right)\cdot r + \frac{1}{2}\mu t\right) \\ & = r \cdot \left(1 -\mu^2\left(\frac{1}{2} - \rho \right)\right) \end{align*} Thus, choosing $\mu$ as the value given in~\eqref{eq: mass_choice_sup}, the PoS can be upper bounded by \begin{align} \Gamma(\lambda, \rho) \le \frac{r}{\texttt{WEL}(G,G)} = \frac{1}{1 -\mass_{\texttt{s}}^2\left(\sfrac{1}{2} - \rho \right)}. \end{align} With the value of $\mass_{\texttt{s}}$ in~\eqref{eq: mass_choice_sup}, we further have $\frac{1}{1 -\mass_{\texttt{s}}^2\left(\sfrac{1}{2} - \rho \right)} \le 2$ for any $\rho, \lambda$. \end{proof} \subsection{Omitted Proof for \Cref{Lem: infeasible_equilibrium_sub}} \twosubinfeasible* \begin{proof} We prove this lemma by showing that there exists no equilibrium with this simple structure. Let $\mu$ be the mass on the point $(1, 1)$, and consider $\overline{\payoff}(\mathbf{q}) = \boldsymbol{\alpha} \cdot \mathbf{q} + \beta$, where $\boldsymbol{\alpha} = (\alpha, \alpha)$. Let $t = \val(1), r = \val(2)$. Similarly to the earlier proof, we first write $\alpha, \beta, \hat{p}$ as functions of $\mu$, and then, given a choice of $\mu$, verify whether it is indeed an equilibrium. \begin{alignat}{2} \text{Bayes-plausibility} \Rightarrow ~ & \frac{1}{2}\hat{p} \cdot (1 -\mu) +\mu = \lambda; \nonumber\\ \Pi((\hat{p},0),G) = \overline{\payoff}((\hat{p},0), G) \Rightarrow ~ & (1 -\mu) t = \alpha\hat{p} + \beta; \label{point_qhat_0_eq}\\ \Pi(\boldsymbol{1}_{[2]},G) = \overline{\payoff}(\boldsymbol{1}_{[2]}, G) \Rightarrow ~ & \frac{1}{4}\mu \cdot t \cdot 2 + \left(1 -\mu + \frac{1}{4}\mu\right)\cdot r = 2\alpha + \beta. \label{point_1_1_eq} \end{alignat} In addition to the above equations, we also need to ensure that for all $\mathbf{q}\in[0, 1]^2$, we have $\Pi(\mathbf{q},G) \le \overline{\payoff}(\mathbf{q}, G)$. Observe that for all $q\in[\hat{p}, 1)$ and all $q_2 \in [0, \hat{p}]$, we have $\Pi((q, q_2),G) = \Pi((\hat{p}, q_2), G)$, and for all $(\hat{p}, \hat{p}) \preceq \mathbf{q} \prec(1, 1)$, we have $\Pi(\mathbf{q},G) = \Pi((\hat{p}, \hat{p}), G)$.
Thus, to ensure $\Pi(\mathbf{q},G) \le \overline{\payoff}(\mathbf{q}, G)$ for all $\mathbf{q}\in[0, 1]^2$, it suffices to show that \begin{align} \Pi((\hat{p}, \hat{p}),G) \le \overline{\payoff}((\hat{p}, \hat{p}), G) & \Rightarrow ~~ (1 -\mu)r \le 2\alpha\hat{p} + \beta \label{verification_1}\\ \Pi((1, 0),G) \le \overline{\payoff}((1, 0), G) & \Rightarrow ~~ (1 -\mu)t + \frac{1}{2}\mu t \le \alpha + \beta \label{verification_2}. \end{align} From~\eqref{point_1_1_eq}, we know that $(1 -\mu) r = 2\alpha + \beta - \frac{1}{4}\mu r - \frac{1}{2}\mu t$. Thus, with~\eqref{verification_1}, we have \begin{align} 2\alpha + \beta - \frac{1}{4}\mu r - \frac{1}{2}\mu t & \le 2\alpha \hat{p} + \beta \nonumber\\ \Rightarrow ~ 2\alpha(1 - \hat{p}) & \le \frac{1}{4}\mu r + \frac{1}{2}\mu t \label{helper_1} \end{align} From~\eqref{point_qhat_0_eq} we know that $\beta = (1 -\mu)t-\alpha \hat{p}$. Together with~\eqref{verification_2}, we have \begin{align} \left(1 - \frac{1}{2}\mu\right)t & \le \alpha + (1 -\mu)t-\alpha \hat{p} \nonumber\\ \Rightarrow ~\mu t & \le 2\alpha(1 - \hat{p}). \label{helper_2} \end{align} With~\eqref{helper_1} and~\eqref{helper_2}, we have \begin{align*} \mu t \le \frac{1}{4}\mu r + \frac{1}{2}\mu t \end{align*} which implies that $2\mu t\le\mu r$. This inequality holds only when $\mu = 0$ or $2t \le r$, which contradicts the strict submodularity of $V$. When $2t = r$, we have \begin{align*} \mu = \frac{2\lambda - 1}{\lambda}, ~ \alpha = \frac{t}{2\lambda},~ \beta = 0, ~ \hat{p} = 2(1 - \lambda). \end{align*} \end{proof} \subsection{Omitted Proof for \Cref{Lem: special_equilibrium_sub}} \twosubfeasible* \begin{proof} Consider a supporting hyperplane function $\overline{\payoff}(\mathbf{q}, G) = \boldsymbol{\alpha} \cdot \mathbf{q} + \beta$, where $\boldsymbol{\alpha} = (\alpha, \alpha)$. We first write $\alpha, \beta, \ell$ and $\hat{p}$ as functions of $\mu$; then, given a choice of $\mu$, we verify whether it is indeed an equilibrium. For notational simplicity, let $t = \val(1), r = \val(2)$. Observe that by Bayes-plausibility and \Cref{prop:payoff envelope}, we have \begin{align*} \text{Bayes-plausibility} \Rightarrow ~ & \frac{\ell \mu}{2} + \frac{1}{2} (\hat{p} + \ell) (1 - 2\mu) + \mu = \lambda.\\ \Pi((1, 0), G) = \overline{\payoff}((1, 0), G) \Rightarrow ~ & \left(1 - \frac{\mu }{2}\right)t = \alpha + \beta.\\ \Pi((1,\ell), G) = \overline{\payoff}((1, \ell), G) \Rightarrow ~ & (1 - \mu)t + \frac{1}{2}\mu t + \frac{1}{2}\mu r = \alpha + \alpha\ell + \beta.\\ \Pi((\hat{p},\ell), G) = \overline{\payoff}((\hat{p}, \ell), G) \Rightarrow ~ & (1 - \mu) t + \mu t = \alpha\hat{p} + \alpha\ell + \beta. \end{align*} We get \begin{align*} \alpha = \frac{t\mu - \mu^2(2t - r) }{4\lambda - 2}, ~~ \beta = \left(1 - \frac{\mu }{2}\right)t - \alpha, ~~\ell = \frac{\mu r}{2\alpha}, ~~ \hat{p} = \frac{2\alpha - \mu(r - t)}{2\alpha}. \end{align*} For all $\mathbf{q} = (q_1, q_2)\in {\tt supp}{G}$, we are able to verify that $\Pi(\mathbf{q}, G) = \overline{\payoff}(\mathbf{q}, G)$.
To see this, for $q_1 \in [\ell, \hat{p}]$ and $q_2 = \ell+\hat{p} - q_1$, we have \begin{align*} \Pi((q_1, q_2), G) & = \left(\frac{\hat{p} - q_1}{\hat{p} - \ell}\cdot(1 -2\mu) + \mu\right)\cdot \val(1) + \left(\frac{\hat{p} - q_2}{\hat{p} - \ell}\cdot(1 -2\mu) + \mu\right)\cdot \val(1)\\ & = t\cdot\left(\frac{2\hat{p} - (q_1 + q_2)}{\hat{p} - \ell}\cdot (1 - 2\mu) + 2\mu\right) = t (1 - 2\mu + 2\mu ) = t.\\ \overline{\payoff}((q_1, q_2), G) & = \alpha(q_1 + q_2) + \beta = \alpha (\hat{p} + \ell) + \beta = \frac{\mu t}{2} + \alpha + \beta = \frac{\mu t}{2} + \left(1 - \frac{\mu }{2}\right)t = t\\ & \Rightarrow ~ \Pi((q_1, q_2), G) = \overline{\payoff}((q_1, q_2), G). \end{align*} For $\mathbf{q} = (1, q)$, where $q\in[0, \ell]$, we have \begin{align*} \Pi((1, q), G) & = \left(1 -2\mu + \mu + \frac{\mu }{2}\cdot \frac{\ell - q}{\ell}\right)\cdot \val(1) + \frac{\mu }{2}\cdot \frac{q}{\ell}\cdot \val(2) + \frac{\mu }{2}\cdot \frac{q}{\ell}\cdot \val(1)\\ & = \left(1 - \frac{\mu }{2}\right)t + q\alpha\\ \overline{\payoff}((1, q), G) & = \alpha(1 + q) + \beta = q\alpha + \left(1 - \frac{\mu }{2}\right)t\\ & \Rightarrow ~ \Pi((1, q), G) = \overline{\payoff}((1, q), G). \end{align*} Similarly, we can also show that $\Pi((q, 1), G) = \overline{\payoff}((q, 1), G)$ for $q\in[0, \ell]$. Now, given $\lambda > \sfrac{1}{2}$, we have the following conditions on the choice of $\mu$ to ensure that the complementary slackness constraints are satisfied: \begin{align*} \alpha \ge 0 \Rightarrow ~ & \mu \le \frac{t}{2t - r}. \\ \beta \ge 0 \Rightarrow ~ & \mu^2(2t-r) - \mu \cdot 2\lambda t + 2t(2\lambda - 1) \ge 0.\\ \ell \in [0, 1) \Rightarrow ~ & \mu \le \frac{t - r (2\lambda - 1)}{2t - r}. \\ \hat{p} \in (\ell, 1] \Rightarrow ~ & \mu \le \frac{2\lambda t - 2r(2\lambda - 1)}{2t - r}. \end{align*} In addition to the above constraints, we also need to ensure that $\Pi(\mathbf{q}, G) \le \overline{\payoff}(\mathbf{q}, G)$ for all $\mathbf{q} \in [0, 1]^2$. It is easy to verify that the following two conditions suffice to ensure $\Pi(\mathbf{q}, G) \le \overline{\payoff}(\mathbf{q}, G), \forall \mathbf{q} \in [0, 1]^2$. \begin{align} \Pi((1, 1), G) \le \overline{\payoff}((1, 1), G) \Rightarrow ~ & r - \mu( r - t) \le 2\alpha + \beta = \alpha + \left(1 - \frac{\mu }{2}\right)t.\\ \Pi((\ell, \ell), G) \le \overline{\payoff}((\ell, \ell), G) \Rightarrow ~ & 2\mu t \le 2\alpha\ell + \beta = \mu r + \beta. \end{align} Since $\lambda \ge \sfrac{1}{2}$, we have $t - r (2\lambda - 1) - (2\lambda t - 2r(2\lambda - 1)) = (r- t)(2\lambda -1) \ge 0$, so the above conditions reduce to \begin{alignat}{2} \hat{p} \in (\ell, 1] \Rightarrow ~ & \mu \le \frac{2\lambda t - 2r(2\lambda - 1)}{2t - r}.\nonumber\\ \beta \ge 0 \Rightarrow ~ & \mu^2(2t-r) - \mu \cdot 2t\lambda + t(4\lambda - 2) \ge 0.\nonumber\\ \Pi((1, 1), G) \le \overline{\payoff}((1, 1), G) \Rightarrow ~ & \mu^2(2t - r) -\mu (2t(1 - \lambda) + (4\lambda - 2)(r - t)) + (r - t)(4\lambda - 2) \le 0 \label{m_lb}.\\ \Pi((\ell, \ell), G) \le \overline{\payoff}((\ell, \ell), G) \Rightarrow ~ & \mu^2(2t - r) - \mu((4\lambda -2)(2.5t - r) + t) + t(4\lambda -2) \ge 0. \label{m_ub} \end{alignat} Let $\cI(\lambda, \rho) \subseteq \mathbb{R}$ denote the set of all $\mu$ that satisfy the above conditions. Note that $\cI(\lambda, \rho)$ depends on the values of $\rho \in (\sfrac{1}{2}, 1]$ and $\lambda\in(\sfrac{1}{2}, 1]$.
Let $\mu_{lb}$ and $\mu_{ub}$ denote the smaller roots of the right-hand sides of~\eqref{m_lb} and~\eqref{m_ub}, respectively; then by inspection, we have $\cI(\lambda, \rho) = [\mu_{lb}, \mu_{ub}]$. Also observe that to have a valid $\mu$, we must ensure $\cI(\lambda, \rho) \subseteq (0, \sfrac{1}{2}]$. Thus, for each prior $\lambda > \sfrac{1}{2}$, we can identify a parameter region $\cC(\lambda) \subseteq (\sfrac{1}{2}, 1]$ of $\rho$ which admits the existence of an equilibrium that attains the structure in Fig.~\ref{Fig: special_equilibrium_sub}. When $\rho = \sfrac{1}{2}$, we can show that an equilibrium with the structure in Fig.~\ref{Fig: special_equilibrium_sub} exists only when $\lambda \in [\sfrac{1}{2}, \sfrac{2}{3})$; in this case we have $\mu = \sfrac{(2\lambda - 1)}{\lambda}$. \end{proof} \subsection{Omitted Proof of \Cref{thm: pos_special_equilibrium_sub}} \twosubpos* \begin{proof}[Proof of \Cref{thm: pos_special_equilibrium_sub}] The proof is immediate from the analysis in the proof of~\cref{Lem: special_equilibrium_sub}. Note that for any $\mathbf{q} \in{\tt supp}{G}$, we have $\Pi(\mathbf{q}, G) = \overline{\payoff}(\mathbf{q}, G)$. Thus, given a prior $\lambda >\sfrac{1}{2}$, for any $\rho\in \cC(\lambda)$, we have \begin{align*} \Gamma(\lambda,\rho) & \le \frac{2\val(1)}{2\int_{\mathbf{q}} g(\mathbf{q})\Pi(\mathbf{q}, G) d\mathbf{q} }\\ & = \frac{2\val(1)}{2\int_{\mathbf{q}} g(\mathbf{q})\overline{\payoff}(\mathbf{q}, G) d\mathbf{q} }\\ & = \frac{2\val(1)}{2 \left(2\int_{0}^\ell \frac{\mass_{\texttt{s}}}{\ell} (\alpha + \alpha q + \beta) dq + \int_{\ell}^{\hat{p}}\frac{1-2\mass_{\texttt{s}}}{\hat{p} - \ell} (\alpha q + \alpha(\hat{p} +\ell - q) + \beta)dq\right)} \\ & = \frac{2\val(1)}{2 \left(2\mass_{\texttt{s}}(\alpha + \beta) + \mass_{\texttt{s}} \alpha\ell + (1 - 2\mass_{\texttt{s}})(\alpha(\hat{p} + \ell) + \beta)\right)} \\ & = \frac{2\val(1)}{2 \left(\frac{1}{2}\mass_{\texttt{s}}^2\val(2) + ( 1-2\mass_{\texttt{s}})\cdot \frac{1}{2}\mass_{\texttt{s}} \val(1) + \alpha+\beta\right)} \\ & = \frac{2\rho}{2\rho - \mass_{\texttt{s}}^2(2\rho - 1)}. \end{align*} \end{proof} \subsection{Linear Program Characterization of Best Response} We first show that, given sender $2$'s signaling policy $F$, sender $1$'s best response can be expressed as a linear program. Fixing sender $2$'s signaling policy $F$ defined on $[0, 1]^n$, for any signal $\mathbf{\signal}\in[0, 1]^n$ and subset $S\subseteq[n]$, we denote by $P(\mathbf{\signal},S, F)$ the probability that sender $1$ wins the set $S$ of receivers when his realized signal is $\mathbf{\signal}$, i.e., \begin{align*} P(\mathbf{\signal}, S, F) = \displaystyle\sum_{T\subseteq S} \frac{1}{2^{|S| - |T|}} \prob[\mathbf{\signalother}\sim F]{ \text{$\forall j \in T: p_j < q_j$ and $\forall j \in S\backslash T: p_j = q_j$ and $\forall j \in [n]\backslash S: p_j > q_j$}}~. \end{align*} Thus, sender $1$'s expected utility $\Pi(\mathbf{\signal}, F)$ when his realized signal is $\mathbf{\signal}$ can be written as \begin{align*} \Pi(\mathbf{\signal}, F) = \displaystyle\sum\nolimits_{S\subseteq[n]} P(\mathbf{\signal}, S, F) V(S)~. \end{align*} Since the value function $V(\cdot)$ is monotone and $V(\emptyset) = 0$, we know that $\Pi(\mathbf{\signal}, F)$ is weakly increasing and non-negative as well. \begin{lemma} \label{lem:monotone payoff} Fix an arbitrary signaling policy $F$. Then $\Pi(\mathbf{\signal}, F)$ is weakly increasing and non-negative in all dimensions.
\end{lemma} \begin{proof} Fix an arbitrary receiver $j\in[n]$, and any $\mathbf{\signal},\mathbf{\signal}^\dagger\in[0,1]^n$ such that $q_\ell = q_\ell^\dagger$ for all $\ell\not=j$ and $q_j < q_j^\dagger$. Consider \begin{align*} \Pi(\mathbf{\signal}^\dagger, F) - \Pi(\mathbf{\signal},F) &= \displaystyle\sum_{ S\subseteq [n]\backslash\{j\} } \left( V(S\cup\{j\}) - V(S) \right) \left( P(\mathbf{\signal}^\dagger, S\cup\{j\},F) - P(\mathbf{\signal}, S\cup\{j\},F) \right)~. \end{align*} Note that $ V(S\cup\{j\}) - V(S)\geq 0$ because of \ref{eq:monotone}, and $P(\mathbf{\signal}^\dagger, S\cup\{j\},F) - P(\mathbf{\signal}, S\cup\{j\},F)\geq 0$ by definition. Thus, we conclude that $\Pi(\mathbf{\signal}^\dagger, F) - \Pi(\mathbf{\signal},F)\geq 0$, which finishes the proof. \end{proof} Let $g:[0, 1]^n \rightarrow \mathbb{R}_{\geq 0}$ be the generalized density function of sender $1$'s signaling policy $G$. His best response problem can be formulated as the following optimization program: \begin{align} \label{eq:best-reponse} \tag{$\mathcal{P}^\text{BR}$} \begin{array}{lll} \max\limits_{g(\cdot)} & \displaystyle\int_{\mathbf{\signal}} g(\mathbf{\signal}) \Pi(\mathbf{\signal}, F)\,d\mathbf{\signal} & \text{s.t.} \vspace{1.5mm}\\ \vspace{1.5mm} & \displaystyle\int_{\mathbf{\signal}} g(\mathbf{\signal})\, \mathbf{\signal}\,d\mathbf{\signal} = \lambda\,\boldsymbol{1}_{[n]}~, \\ \vspace{1.5mm} & \displaystyle\int_{\mathbf{\signal}} g(\mathbf{\signal}) \,d\mathbf{\signal} = 1~, \\ & g(\mathbf{\signal}) \geq 0 & \mathbf{\signal} \in [0,1]^n~. \end{array} \end{align} where $\boldsymbol{1}_{[n]}\in \mathbb{R}^n$ is the vector with $1$ in all $n$ dimensions. Consider the Lagrangian of program~\ref{eq:best-reponse}: \begin{align*} \mathcal{L} = \displaystyle\int_{\mathbf{\signal}} g(\mathbf{\signal})\Pi(\mathbf{\signal}, F) \,d\mathbf{\signal} - \boldsymbol{\alpha}\cdot \left( \displaystyle\int_{\mathbf{\signal}} g(\mathbf{\signal})\,\mathbf{\signal} \,d\mathbf{\signal} - \lambda\,\boldsymbol{1}_{[n]} \right) - \beta\left( \displaystyle\int_{\mathbf{\signal}} g(\mathbf{\signal}) \,d\mathbf{\signal} - 1 \right) + \displaystyle\int_{\mathbf{\signal}} \gamma(\mathbf{\signal})g(\mathbf{\signal}) \,d\mathbf{\signal} \end{align*} where the Lagrange multipliers are $\boldsymbol{\alpha} = (\alpha_1, \dots, \alpha_n) \in \mathbb{R}^n$, $\beta \in \mathbb{R}$, and $\gamma(\mathbf{\signal})\in \mathbb{R}_{+}$ for all $\mathbf{\signal}\in [0,1]^n$. The stationarity condition with respect to $g(\mathbf{\signal})$ requires that for all $\mathbf{\signal} \in [0, 1]^n$, \begin{align*} \Pi(\mathbf{\signal},F) - \boldsymbol{\alpha}\cdot \mathbf{\signal} - \beta + \gamma(\mathbf{\signal}) = 0~. \end{align*} From now on, we abuse notation and let $\boldsymbol{\alpha},\beta,\gamma$ and $g$ (as well as $G$) denote the optimal solution from the Lagrangian relaxation. Let ${\tt supp}{G}$ denote the support of the distribution $G$. Note that complementary slackness implies that either $\mathbf{\signal}\notin {\tt supp}{G}$ (i.e., $g(\mathbf{\signal}) = 0$) or $\gamma(\mathbf{\signal}) = 0$ is always satisfied. Hence, we define the following supporting hyperplane function: \begin{align*} \overline{\payoff}(\mathbf{\signal}, F) = \boldsymbol{\alpha}\cdot \mathbf{\signal} + \beta.
\end{align*} The above analysis implies that \begin{align*} \Pi(\mathbf{\signal},F) \labelrel\leq{condition4} \overline{\payoff}(\mathbf{\signal},F) \end{align*} where~\eqref{condition4} holds with equality if $\mathbf{\signal} \in {\tt supp}{G}$. By \Cref{lem:monotone payoff}, we further know that both $\boldsymbol{\alpha}$ and $\beta$ are non-negative. We summarize this discussion in \Cref{prop:payoff envelope}. \begin{proposition} \label{prop:payoff envelope} Fix an arbitrary signaling policy $F$. Let $G$ be the best response to $F$ and $\Pi(\mathbf{\signal},F)$ be sender $1$'s expected utility given that his realized signal is $\mathbf{\signal}$. Then there exists a hyperplane $\overline{\payoff}(\mathbf{\signal}, F) = \boldsymbol{\alpha}\cdot \mathbf{\signal} + \beta$ with $\boldsymbol{\alpha}\in \mathbb{R}_{\geq 0}^n$ and $\beta\in \mathbb{R}_{\geq 0}$ such that $\Pi(\mathbf{\signal},F) \leq \overline{\payoff}(\mathbf{\signal}, F) $ for all $\mathbf{\signal} \in [0,1]^n$, and the inequality binds if $\mathbf{\signal} \in {\tt supp}{G}$. \end{proposition} \subsection{Necessary Conditions for Symmetric Equilibrium} With \Cref{prop:payoff envelope}, we can derive the following necessary conditions for symmetric equilibrium. \begin{lemma} \label{lem:necessary condition i} Let signaling policies $(G,G)$ be an arbitrary symmetric equilibrium. Then there is no mass point in $G$ except a possible mass on the point $\boldsymbol{1}_{[n]}$. \end{lemma} \begin{proof} We prove by contradiction. Suppose there exists a mass point $\mathbf{\massmarg} = (m_1, m_2, \dots, m_n) \in [0, 1]^n \backslash\{\boldsymbol{1}_{[n]}\}$ with probability $\mu = \prob[\mathbf{\signal}\sim G] {\mathbf{\signal}=\mathbf{\massmarg}}$. Without loss of generality, we assume $m_1 < 1$. Let $\epsilon\in(0,1-m_1)$ be an arbitrary positive constant, and $\mathbf{\massmarg}^\dagger = (m_1 + \epsilon, m_2, \dots, m_n)$. We claim that there exists a constant $c > 0$ such that $\Pi(\mathbf{\massmarg}^\dagger, G) \geq \Pi(\mathbf{\massmarg},G) + c$ for all $\epsilon \in (0, 1 - m_1)$. To see this, consider \begin{align*} \Pi(\mathbf{\massmarg}^\dagger, G) - \Pi(\mathbf{\massmarg},G) &= \displaystyle\sum_{ S\subseteq [n]\backslash\{1\} } \left( V(S\cup\{1\}) - V(S) \right) \left( P(\mathbf{\massmarg}^\dagger, S\cup\{1\},G) - P(\mathbf{\massmarg}, S\cup\{1\},G) \right)~. \end{align*} Note that for all $S\subseteq[n]\backslash\{1\}$, $V(S\cup\{1\}) - V(S) \geq 0$ because of the monotonicity of $V(\cdot)$. Since $\mathbf{\massmarg}$ is a mass point with probability $\mu$, we have $P(\mathbf{\massmarg}^\dagger, S\cup\{1\},G) - P(\mathbf{\massmarg}, S\cup\{1\},G) \geq \frac{\mu}{2^n}$ by the definition of $P(\cdot,\cdot,\cdot)$. Invoking \ref{eq:non-degenerate} of $V(\cdot)$, we know that there exists $S^*\subseteq [n]\backslash\{1\}$ such that $V(S^*\cup\{1\}) - V(S^*) > 0$. Thus, for all $\epsilon\in (0,1-m_1)$, \begin{align*} \Pi(\mathbf{\massmarg}^\dagger, G) - \Pi(\mathbf{\massmarg},G) &\geq \frac{\mu}{2^n} \left( V(S^*\cup\{1\}) - V(S^*)\right) \triangleq c > 0~. \end{align*} Recall that \Cref{prop:payoff envelope} implies that there exists a hyperplane $\overline{\payoff}(\cdot)$ such that $\Pi(\mathbf{\signal},G) \leq \overline{\payoff}(\mathbf{\signal},G)$ for all $\mathbf{\signal} \in[0,1]^n$ and $\Pi(\mathbf{\massmarg},G) = \overline{\payoff}(\mathbf{\massmarg},G)$.
Note that this leads to a contradiction, since \begin{align*} \inf_{\epsilon\in (0, 1-m_1)} \Pi(\mathbf{\massmarg}^\dagger,G) \leq \lim_{\epsilon\rightarrow 0} \overline{\payoff}(\mathbf{\massmarg}^\dagger, G) = \overline{\payoff}(\mathbf{\massmarg},G) = \Pi(\mathbf{\massmarg},G) \leq \inf_{\epsilon\in (0, 1-m_1)} \Pi(\mathbf{\massmarg}^\dagger,G) - c ~. ~~~~ \qedhere \end{align*} \end{proof} The next three necessary conditions for symmetric equilibrium rely on the assumption that the utility function $V(\cdot)$ satisfies \ref{eq:strict monotone}. \begin{lemma} \label{lem:necessary condition ii} Suppose senders' utility function $V(\cdot)$ satisfies \ref{eq:strict monotone}. Let signaling policies $(G,G)$ be an arbitrary symmetric equilibrium. Then the following three necessary conditions hold. \begin{enumerate}[label=(\roman*)] \item Let $G_j$ be the marginal distribution for dimension (or receiver) $j$. Then there is no mass point in $G_j$ except a possible mass on point $1$. \item Let $\overline{\payoff}(\mathbf{\signal}, G) = \boldsymbol{\alpha} \cdot \mathbf{\signal} + \beta$ be a hyperplane as defined in \Cref{prop:payoff envelope}. Then $\alpha_j > 0$ for all $j\in[n]$. \item ${\tt supp}{G_j}$ is a single interval $[0, \widehat{q}]$, with or without a mass point on $1$, for some $\widehat{q}\in (0, 1)$. \end{enumerate} \end{lemma} \begin{proof} \noindent\emph{Condition (\rom{1}).} The argument is similar to \Cref{lem:necessary condition i}. We prove by contradiction. Suppose there exists a receiver $j$ and a mass point $m\in [0, 1)$ in $G_j$. Let $\mathbf{\signal} = (\mathbf{\signal}_{-j}, m)$ be the signal such that the signal for receiver $j$ is $m$, and the probability $$\prob[\mathbf{\signalother}\sim G] {\text{$\forall \ell\in S^*:p_\ell\leq q_\ell$ and $p_j=q_j = m$ and $\forall \ell \in [n]\backslash(S^*\cup\{j\}): p_\ell > q_\ell$}}$$ is a positive constant (denoted by $\mu$) for some $S^* \subseteq[n]\backslash\{j\}$. Let $\epsilon\in(0,1-m)$ be an arbitrary positive constant, and let $\mathbf{\signal}^\dagger = (\mathbf{\signal}_{-j},m+\epsilon)$ be the signal such that the signal for receiver $j$ is $m+\epsilon$.
We claim that there exists a constant $c > 0$ such that $\Pi(\mathbf{\signal}^\dagger, G) \geq \Pi(\mathbf{\signal},G) + c$ for all $\epsilon \in (0, 1 - m)$. To see this, consider \begin{align*} \Pi(\mathbf{\signal}^\dagger, G) - \Pi(\mathbf{\signal},G) &= \displaystyle\sum_{ S\subseteq [n]\backslash\{j\} } \left( V(S\cup\{j\}) - V(S) \right) \left( P(\mathbf{\signal}^\dagger, S\cup\{j\},G) - P(\mathbf{\signal}, S\cup\{j\},G) \right)~. \end{align*} Note that for all $S\subseteq[n]\backslash\{j\}$, $V(S\cup\{j\}) - V(S) > 0$ because of \ref{eq:strict monotone}. Also, $P(\mathbf{\signal}^\dagger, S\cup\{j\},G) - P(\mathbf{\signal}, S\cup\{j\},G) \geq 0$ for all $S\subseteq[n]\backslash\{j\}$, and $P(\mathbf{\signal}^\dagger, S^*\cup\{j\},G) - P(\mathbf{\signal}, S^*\cup\{j\},G) \geq \frac{\mu}{2^n} > 0$. Thus, for all $\epsilon\in (0,1-m)$, \begin{align*} \Pi(\mathbf{\signal}^\dagger, G) - \Pi(\mathbf{\signal},G) &\geq \frac{\mu}{2^n} \left( V(S^*\cup\{j\}) - V(S^*)\right) \triangleq c > 0~. \end{align*} Invoking \Cref{prop:payoff envelope} leads to a contradiction, since \begin{align*} \inf_{\epsilon\in (0, 1-m)} \Pi(\mathbf{\signal}^\dagger,G) \leq \lim_{\epsilon\rightarrow 0} \overline{\payoff}(\mathbf{\signal}^\dagger, G) = \overline{\payoff}(\mathbf{\signal},G) = \Pi(\mathbf{\signal},G) \leq \inf_{\epsilon\in (0, 1-m)} \Pi(\mathbf{\signal}^\dagger,G) - c ~. \end{align*} where $\overline{\payoff}$ is the hyperplane defined in \Cref{prop:payoff envelope}. \noindent\emph{Condition (\rom{2}).} Fix an arbitrary receiver $j\in[n]$. Since $\lambda < 1$, there exists a closed set $I \subseteq[0, 1]^{n}$ such that the generalized density function satisfies $g(\mathbf{\signal}) > 0$ for all $\mathbf{\signal}$ in $I$. Due to condition~(\rom{1}), we can assume that there exist $\mathbf{\signal} = (q_1, \dots, q_n), \mathbf{\signal}^\dagger = (q^\dagger_1, \dots, q^\dagger_n) \in I$ such that $q_j < q^\dagger_j$. Let $\mathbf{\signal}^{\ddagger} = (q_1, \dots, q^\dagger_j, \dots, q_n)$ be the signal where the signal for each receiver $\ell\not=j$ is the same as in $\mathbf{\signal}$ and the signal for receiver $j$ is $q_j^\dagger$. We claim that $\Pi(\mathbf{\signal},G) < \Pi(\mathbf{\signal}^{\ddagger},G)$, which proves condition~(\rom{2}) since \begin{align*} \boldsymbol{\alpha}\cdot \mathbf{\signal} + \beta = \overline{\payoff}(\mathbf{\signal},G) \overset{(a)}= \Pi(\mathbf{\signal},G) < \Pi(\mathbf{\signal}^{\ddagger},G) \leq \overline{\payoff}(\mathbf{\signal}^{\ddagger},G) = \boldsymbol{\alpha}\cdot \mathbf{\signal}^{\ddagger} + \beta \end{align*} where (a) holds since $\mathbf{\signal}\in{\tt supp}{G}$, and $\boldsymbol{\alpha}\cdot \mathbf{\signal}^{\ddagger} + \beta - (\boldsymbol{\alpha}\cdot \mathbf{\signal} + \beta) = \alpha_j(q_j^{\ddagger} - q_j)$. To see why $\Pi(\mathbf{\signal},G) < \Pi(\mathbf{\signal}^{\ddagger},G)$ holds, note that \begin{align*} \Pi(\mathbf{\signal}^{\ddagger}, G) - \Pi(\mathbf{\signal},G) &= \displaystyle\sum_{ S\subseteq [n]\backslash\{j\} } \left( V(S\cup\{j\}) - V(S) \right) \left( P(\mathbf{\signal}^{\ddagger}, S\cup\{j\},G) - P(\mathbf{\signal}, S\cup\{j\},G) \right)~. \end{align*} Similarly to the argument for condition (\rom{1}), it is sufficient to argue that there exists $S^*\subseteq[n]\backslash\{j\}$ such that $P(\mathbf{\signal}^{\ddagger}, S^*\cup\{j\},G) - P(\mathbf{\signal}, S^*\cup\{j\},G) > 0$.
Note that the existence of $\mathbf{\signal}^\dagger$ ensures that the requirement is satisfied by taking $S^* = \{\ell \in [n]\backslash\{j\}: q^\dagger_\ell \leq q_\ell\}$. \noindent\emph{Condition (\rom{3}).} It is sufficient to show that for every receiver $j\in[n]$ and any $q^\dagger \in (0, 1)$, if $q^\dagger \in {\tt supp}{G_j}$, then $q\in {\tt supp}{G_j}$ for all $q \in [0, q^\dagger]$. Again, we prove this by contradiction. Suppose there exist a receiver $j$ and $q, q^\dagger\in [0, 1)$ such that $q < q^\dagger$, $q\not\in{\tt supp}{G_j}$ and $q^\dagger\in{\tt supp}{G_j}$. Without loss of generality, we assume that $q^{\ddagger}\not\in{\tt supp}{G_j}$ for all $q^{\ddagger} \in[q,q^\dagger)$. Let $\mathbf{\signal}^\dagger=(\mathbf{\signal}_{-j},q^\dagger)$ be a signal such that $\mathbf{\signal}^\dagger \in {\tt supp}{G}$ and the signal for receiver $j$ is $q^\dagger$. Let $\mathbf{\signal} = (\mathbf{\signal}_{-j}, q)$ be the signal such that the signal for receiver $j$ is $q$ and the signals for the other receivers are $\mathbf{\signal}_{-j}$ (the same as in $\mathbf{\signal}^\dagger$). By condition~(\rom{1}), there is no mass point whose $j$-th coordinate is $q^\dagger$. Thus, $P(\mathbf{\signal}^\dagger,S,G) =P(\mathbf{\signal},S,G) $ for all $S\subseteq[n]$, and $\Pi(\mathbf{\signal}^\dagger,G) = \Pi(\mathbf{\signal},G)$. Recall that \Cref{prop:payoff envelope} implies that there exists a hyperplane $\overline{\payoff}(\cdot)$ such that $\Pi(\mathbf{\signal},G) \leq \overline{\payoff}(\mathbf{\signal},G)$ for all $\mathbf{\signal} \in[0,1]^n$ and $\Pi(\mathbf{\signal}^\dagger,G) = \overline{\payoff}(\mathbf{\signal}^\dagger,G)$. Note that this leads to a contradiction, since \begin{align*} \Pi(\mathbf{\signal},G) = \Pi(\mathbf{\signal}^\dagger,G) = \overline{\payoff}(\mathbf{\signal}^\dagger,G) \overset{(a)}> \overline{\payoff}(\mathbf{\signal},G) \geq \Pi(\mathbf{\signal},G)~. \end{align*} where inequality~(a) holds because of condition~(\rom{2}). \end{proof} In \Cref{example:necessity of strictly monotone}, we show that conditions~(\rom{2}) and (\rom{3}) need not hold without \ref{eq:strict monotone}. \begin{example} \label{example:necessity of strictly monotone} Consider an instance with two senders and two receivers. The common prior $\lambda$ is $\sfrac{1}{2}$. Senders' utility function $V(\cdot)$ is $V(S) = 1$ for all $S\in 2^{[n]} \backslash\{\emptyset\}$ and $V(\emptyset) = 0$. Fix an arbitrary constant $c \in (0, \sfrac{1}{2}]$. Consider the signaling policy $G$ with density function $$g(\mathbf{\signal}) = \frac{1}{2c} \cdot \indicator{q_1 + q_2 = 2\lambda} \cdot \indicator{\lambda - c \leq q_1\leq \lambda + c} ~~~~\forall \mathbf{\signal}\in[0,1]^2$$ where $\indicator{\cdot}$ is the indicator function. Then $(G,G)$ is an equilibrium, and the expected utility $\Pi(\cdot, G)$ satisfies $$ \Pi(\mathbf{\signal},G) = \indicator{q_1 + q_2 \geq 2\lambda} ~~~~\forall \mathbf{\signal}\in[0,1]^2~. $$ \end{example} \begin{figure}[ht] \centering \begin{subfigure}{0.25\linewidth} \includegraphics[width=\linewidth]{Paper/plots/example_3_1_layout.pdf} \end{subfigure} \caption{The equilibrium structure (i.e., ${\tt supp}{G}$) described in \Cref{example:necessity of strictly monotone}.} \label{Fig: equilibrium_support_layout_ex_3_1} \end{figure} \begin{remark} Our analysis for \Cref{prop:payoff envelope}, \Cref{lem:necessary condition i} and \Cref{lem:necessary condition ii} generalizes the results of \citet{BC-18,AK-19,AK-20}, who study information disclosure in multi-sender, single-receiver settings.
In particular, our model and the characterization of symmetric equilibrium exactly generalize \citet{AK-20} from a single receiver to multiple receivers. \end{remark} \begin{remark} When there is only a single receiver, \citet{AK-20} shows that the symmetric equilibrium is unique. However, when there are multiple receivers, there may exist multiple symmetric equilibria. See \Cref{sec:anonymous} for more discussion. \end{remark} \subsection*{Main Results} \paragraph{Characterization of Equilibrium Structure} We characterize the equilibrium structure for senders with monotone utility functions. We start with the observation that, given an arbitrary strategy of the opponent sender, the best response problem can be formalized as a linear program. Thus, by studying the duality of this linear program, we provide a characterization of the interim utility and the signaling policy in equilibrium. Our characterization generalizes the earlier works \citep{BC-18,AK-20}, which study the competition between senders against a single receiver. Unlike the single-receiver settings, where the characterization above pins down the unique equilibrium, we show that the equilibrium is not necessarily unique in multi-receiver settings (there may even exist an infinite number of equilibria). Nonetheless, the characterization of equilibrium becomes a useful tool for our later analysis.
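To make the linear-programming view concrete, the program~\ref{eq:best-reponse} can be solved numerically after discretization. The following Python sketch is ours and purely illustrative: it assumes a single receiver ($n=1$), a uniform grid on $[0,1]$, and a placeholder for the true interim utility $\Pi(\cdot, F)$, which \Cref{lem:monotone payoff} shows to be weakly increasing. \begin{verbatim}
# Discretized sketch of the best-response LP (illustrative only).
import numpy as np
from scipy.optimize import linprog

K = 101
grid = np.linspace(0.0, 1.0, K)
lam = 0.4                       # prior: Bayes-plausibility mean
Pi_vals = grid ** 2             # placeholder for the interim utility Pi(q, F)

res = linprog(
    c=-Pi_vals,                             # maximize sum_i g_i * Pi_i
    A_eq=np.vstack([grid, np.ones(K)]),     # mean = lam, total mass = 1
    b_eq=np.array([lam, 1.0]),
    bounds=[(0.0, None)] * K,
)
g = res.x                       # discretized best-response density
print("best-response value:", -res.fun)
\end{verbatim} Replacing the placeholder with the true interim utility computed from the opponent's policy $F$ recovers an approximate best response.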
\paragraph{Price of Stability for Anonymous Supermodular (resp.\ Submodular) Utility Functions} Since the receiver has no outside option in our model (as in \citealp{AK-19,AK-20}), in the single-receiver setting studied in the literature, the welfare in the unique equilibrium matches the optimal welfare. When there are multiple receivers and senders' utility function is not additive, the welfare differs across equilibria, and it is unclear whether there exists an equilibrium whose welfare matches the optimal welfare. In light of this observation, our focus for competition in multi-receiver settings diverges from the literature on competition in single-receiver settings, which examines whether competition increases the extent of information revelation. We analyze the welfare in equilibrium and how well it approximates the optimal welfare. In particular, we study the price of stability (PoS), the ratio between the best welfare of the senders in one of the equilibria of the two-sender game and that of an optimal outcome, and we study the relation between the PoS, the ex ante quality $\lambda$, and the shape of the senders' utility function. To this end, we further restrict our attention to two families of utility functions: anonymous supermodular set functions and anonymous submodular set functions. A set function is anonymous if two sets have the same function value whenever their cardinalities are the same. These two families of utility functions are also studied in \citet{AB-19} for the single-sender, multi-receiver Bayesian persuasion game. For both families of utility functions, we show that $\text{PoS} = 1$ when the ex ante quality $\lambda$ is weakly smaller than $\sfrac{1}{2}$; that is, there exists an equilibrium that achieves the welfare of the optimal outcome. The proof hinges on a construction of an equilibrium strategy in which each sender fully positively (resp.\ negatively) correlates his information across the receivers if his utility function is supermodular (resp.\ submodular). On the other hand, with the characterization of equilibrium, we also prove that $\text{PoS} > 1$ when the ex ante quality $\lambda$ is larger than $\sfrac{1}{2}$; that is, no equilibrium achieves the welfare of the optimal outcome. Informally, the central insight here is that even though each sender is free to choose any flexible information disclosure policy, the policy is subject to a budget-like constraint.\footnote{Please see the formal definition here:~\ref{eq:bayes plausible}.} To achieve optimal welfare, a sender needs to fully positively (resp.\ negatively) correlate his information across receivers to avoid winning a suboptimal set of receivers. However, when the ex ante quality $\lambda > \sfrac{1}{2}$, this is not feasible while simultaneously satisfying the above constraint. Furthermore, we consider a specific equilibrium structure for ex ante quality $\lambda > \sfrac{1}{2}$, which is a natural generalization of the unique equilibrium structure in single-receiver settings. We analyze its welfare as a function of the ex ante quality $\lambda$ and the supermodularity (resp.\ submodularity) of the utility function. Note that this provides an upper bound on the PoS for $\lambda > \sfrac{1}{2}$. Our analysis indicates that the upper bound becomes worse as the ex ante quality $\lambda$ increases or the utility function becomes more supermodular (resp.\ submodular).
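To give a quantitative feel for this degradation, the welfare-ratio bound $\Gamma(\lambda,\rho) \le \sfrac{2\rho}{(2\rho - \mass_{\texttt{s}}^2(2\rho - 1))}$ derived in the proof of \Cref{thm: pos_special_equilibrium_sub} can be evaluated directly. The following short Python sketch is ours and purely illustrative; \texttt{m\_s} plays the role of the boundary mass $\mass_{\texttt{s}}$. \begin{verbatim}
# Evaluate the bound 2*rho / (2*rho - m_s^2 * (2*rho - 1)); illustrative.
def ratio_bound(rho: float, m_s: float) -> float:
    return 2 * rho / (2 * rho - m_s ** 2 * (2 * rho - 1))

for m_s in (0.0, 0.25, 0.5):
    print(f"m_s = {m_s}: bound = {ratio_bound(0.75, m_s):.4f}")
\end{verbatim}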
\section{Introduction} Puncturable encryption (PE), proposed by Green and Miers \cite{GM15} in 2015, is a kind of public key encryption, which can also be seen as tag-based encryption (TBE), where both encryption and decryption are controlled by tags. Similarly to TBE, a plaintext in PE is encrypted together with tags, which are called \textit{ciphertext tags}. In addition, the puncturing property of PE allows one to produce new punctured secret keys associated with some \textit{punctures} (or \textit{punctured tags}). Although the new keys (\textit{puncture keys}) differ from the old ones, they still allow recipients to decrypt old ciphertexts as long as the chosen \textit{punctured tags} differ from the tags embedded in the ciphertext. The puncturing property is very useful when the current decryption key is compromised. In such a situation, a recipient merely needs to update his key using the puncturing mechanism. PE is also useful when there is a need to revoke decryption capability from many users in order to protect some sensitive information (e.g., a time period or user identities). In this case, the puncturing mechanism is invoked on time periods or user identities. Also, PE can provide forward security at a fine-grained level. Forward security, formulated in \cite{Gun90} in the context of key-exchange protocols, is a desired security property that helps to reduce the security risk caused by key exposure attacks. In particular, forward secure encryption (FSE) guarantees confidentiality of old messages when the current secret key has been compromised. Compared to PE, FSE provides limited support for revocation of decryption capability. For instance, it is difficult for FSE to control decryption capability for any individual ciphertext (or all ciphertexts) produced during a certain time period, which, in contrast, can easily be done with PE. Due to the aforementioned advantages, PE has become more and more popular and has been used in many important applications such as asynchronous messaging transport systems \cite{GM15}, forward-secure zero round-trip time (0-RTT) key-exchange protocols \cite{GHJ+17, DJSS18}, public-key watermarking schemes \cite{CHN+16} and forward-secure proxy re-encryption \cite{DKL+18}.\\ \noindent \textbf{Related Works.} Green and Miers \cite{GM15} propose the notion of PE and also present a specific ABE-based PE instantiation. The instantiation is based on the decisional bilinear Diffie-Hellman (DBDH) assumption in bilinear groups and is proven to be CPA secure in the random oracle model (ROM). Following the work \cite{GM15}, many other constructions have been proposed, such as \cite{CHN+16, CRRV17, GHJ+17, DJSS18, SSS+20} (see Table \ref{tab2} for a summary). For instance, G{\"u}nther et al. \cite{GHJ+17} have provided a generic PE construction from \textit{any} selectively secure hierarchical identity-based key encapsulation (HIBEKEM) combined with \textit{any} one-time signature (OTS). In fact, the authors of \cite{GHJ+17} claim that their framework can be instantiated as the first post-quantum PE. Also, in \cite{GHJ+17}, the authors present the first PE-based forward-secret zero round-trip time protocol with full forward secrecy. However, they instantiate a PE that is secure in the standard model (SDM) by combining a DDH-based HIBE with an OTS based on the discrete logarithm problem. The construction supports a predetermined number of ciphertext tags as well as a limited number of punctures. Derler et al.
\cite{DJSS18} introduce the notion of Bloom filter encryption (BFE), which can be converted to PE. They show how to instantiate BFE using identity-based encryption (IBE), with a specific construction that assumes the intractability of the bilinear computational Diffie-Hellman (BCDH) problem. Later, Derler et al. \cite{DGJ+18-ePrint} extend the result of \cite{DJSS18} and give a generic BFE construction from identity-based broadcast encryption (IBBE). The instantiation in \cite{DGJ+18-ePrint} is based on a generalization of the Diffie-Hellman exponent (GDDHE) assumption in pairings. However, the construction based on BFE suffers from a \textit{non-negligible correctness error}. This excludes it from applications that require a negligible correctness error, as discussed in \cite{SSS+20}. Most recently, Sun et al. \cite{SSS+20} have introduced a new concept, which they call key-homomorphic identity-based revocable key encapsulation mechanism (KH-IRKEM) with extended correctness, from which they obtain a modular design of PE with \textit{negligible correctness errors}. In particular, they describe four modular and compact instantiations of PE, which are secure in the SDM. However, all of them are based on hard problems in pairings, namely the $q$-decision bilinear Diffie-Hellman exponent ($q$-DBDHE) problem, the decision bilinear Diffie-Hellman (DBDH) problem, the $q$-decisional multi-exponent bilinear Diffie-Hellman ($q$-MEBDH) problem and the decisional linear (DLIN) problem. We emphasize that all existing instantiations mentioned above are insecure against quantum adversaries. Some other works, e.g., \cite{CRRV17, CHN+16}, base PE on the notion of indistinguishability obfuscation, which is still impractical. The reader is referred to \cite{SSS+20} for a state-of-the-art discussion. To the best of our knowledge, there has been no specific lattice-based PE instantiation that simultaneously enjoys a negligible correctness error and post-quantum security in the standard model. \subsubsection{Our Contribution.} We first give a \textit{generic} construction of PE from the \textit{delegatable fully key-homomorphic encryption} (DFKHE) framework. The framework is a generalisation of fully key-homomorphic encryption (FKHE) \cite{BGG+14}, obtained by adding a key delegation mechanism. The framework is closely related to functional encryption \cite{BSW11}. We also present an explicit PE construction based on lattices. Our design is obtained from an LWE-based DFKHE that we build using FKHE for the learning with errors (LWE) setting \cite{BGG+14}, combined with the key delegation ability supplied by the lattice trapdoor techniques \cite{GPV08, CHKP10, ABB10}. Our lattice PE construction has the following characteristics: \begin{itemize} \item It supports a predetermined number of ciphertext tags per ciphertext. The ciphertext size is short and depends linearly on the number of ciphertext tags, which is fixed in advance. However, we note that, following the work of Brakerski and Vaikuntanathan \cite{BV16}, our construction might be extended to obtain a variant that supports an unbounded number of ciphertext tags (see Section \ref{unbounded} for a detailed discussion). \item It works for a predetermined number of punctures.
The size of decryption keys (i.e., puncture keys) increases quadratically with the number of punctured tags. \item It offers selective CPA security in the standard model (which can be converted into full CPA security using the complexity leveraging technique, as discussed in \cite{CHK04,Kil06,BB11,BGG+14}). This is due to the CPA security of the underlying LWE-based DFKHE (following the security proof for the generic framework). \item It enjoys post-quantum security and a negligible correctness error. \end{itemize} Table \ref{tab2} compares our work with the results obtained by other authors. At first sight, the PE framework based on key-homomorphic identity-based revocable key encapsulation (KH-IRKEM) \cite{SSS+20} looks similar to ours. However, the two frameworks are different. While key-homomorphism in our sense means the capacity of transforming (as claimed in \cite[Subsection 1.1]{BGG+14}) ``\textit{an encryption under key $\mathbf{x}$ into an encryption under key $f(\mathbf{x})$}", key-homomorphism defined in \cite[Definition 8]{SSS+20} reflects the ability to preserve the algebraic structure of (mathematical) groups. \\ \noindent \textbf{Overview and Techniques.} We start with a high-level description of fully key-homomorphic encryption (FKHE), which was proposed by Boneh et al. \cite{BGG+14}. Afterwards, we introduce what we call \textit{delegatable fully key-homomorphic encryption} (DFKHE). At a high level, FKHE possesses a mechanism that allows one to convert a ciphertext $ct_\mathbf{x}$ (associated with a public variable $\mathbf{x}$) into an evaluated one $ct_f$ for the same plaintext (associated with the pair $(y,f)$), where $f$ is an efficiently computable function and $f(\mathbf{x})=y$. In other words, FKHE requires a special key-homomorphic evaluation algorithm, called $\mathsf{Eval}$, such that $ct_f \leftarrow \mathsf{Eval}(f, ct_\mathbf{x})$. In order to successfully decrypt an evaluated ciphertext, the decryptor needs to evaluate the initial secret key $sk$ to get $sk_f$. An extra algorithm, called $\mathsf{KHom}$, is needed to do this, i.e., $sk_f \leftarrow \mathsf{KHom}(sk,(y,f))$. A drawback of FKHE is that it supports only a single function $f$. Actually, we would like to perform key-homomorphic evaluation for many functions $\{f_1,\cdots, f_k\}$ that belong to a family $\mathcal{F}$. To meet this requirement and obtain DFKHE, we generalise FKHE by endowing it with two algorithms $\mathsf{ExtEval}$ and $\mathsf{KDel}$. The first algorithm transforms ($ct_\mathbf{x}, \mathbf{x}$) into ($ct_{f_1,\cdots, f_k}, (y,f_1, \cdots f_k)$), where $f_1(\mathbf{x})=\cdots=f_k(\mathbf{x})=y$. This is written as $ct_{f_1,\cdots, f_k} \leftarrow \mathsf{ExtEval}(f_1,\cdots, f_k, ct_\mathbf{x})$. The second algorithm allows one to delegate the secret key step by step for the next function, i.e., $sk_{f_1, \cdots, f_k} \leftarrow \mathsf{KDel}(sk_{f_1, \cdots, f_{k-1}}, (y,f_k))$. \begin{table}[h] \centering \medskip \smallskip \small\addtolength{\tabcolsep}{0pt} \begin{tabular}{ c | c| c|c|c|c |c|c} \hline Literature&From& Assumption&\makecell{Security\\ Model} &\#Tags &\#Punctures &\makecell{Post-\\quantum} &\makecell{Negl.\\ Corr.
\\Error} \\ \hline \hline Green \cite{GM15}&ABE& DBDH& ROM &$<\infty$ &$\infty$ & $\times$ &$\checkmark$\\ \hline G{\"u}nther \cite{GHJ+17} &\makecell{any HIBE \\+ any OTS}&\makecell{DDH (HIBE)\\+ DLP (OTS) }&SDM & $<\infty$ &$<\infty$ &$\times$ &$\checkmark$\\ \hline Derler \cite{DGJ+18-ePrint} &BFE (IBBE)& GDDHE& \textbf{ROM$^*$} &1& $< \infty$&$\times$ &$\times$\\ \hline Derler \cite{DJSS18} &BFE (IBE)&BCDH&ROM &1& $<\infty$&$\times$&$\times$\\ \hline Sun \cite{SSS+20} &KH-IRKEM&\makecell{$q$--DBDHE\\DBDH\\ $q$--MEBDH\\ DLIN} &SDM& \makecell{$<\infty$\\$<\infty$\\$\infty$\\$<\infty$}&\makecell{$\infty$\\$\infty$\\$\infty$\\$\infty$} &\makecell{$\times$\\$\times$\\$\times$\\$\times$}&$\checkmark$\\ \hline \hline \textbf{This work}& DFKHE & DLWE& SDM& \makecell{$< \infty$}&$< \infty$&$\checkmark$ &$\checkmark$\\ \hline \hline \end{tabular} \caption{Comparison of some existing PE constructions in the literature with ours. Note that all works here are considered in the CPA security setting. The notation ``$<\infty$" means ``bounded" or ``predetermined", while ``$\infty$" means ``unlimited" or ``arbitrary". The column entitled ``Post-quantum" states whether the specific construction in each framework is post-quantum secure, regardless of its generic framework. The last column indicates whether the construction supports a negligible correctness error. \textbf{ROM$^*$}: For the BFE-based PE relying on the IBBE instantiation of Derler et al. \cite{DGJ+18-ePrint}, we note that the IBBE instantiation can be modified to remove the ROM, as claimed by Delerabl\'ee in \cite[Subsection 3.2]{Del07}.} \label{tab2} \end{table} Our generic PE framework is inspired by a simple but subtle observation: the puncturing property requires checking equality between ciphertext tags and punctures. This check can be provided by functions that are efficiently computable by arithmetic circuits. We call such functions \textit{equality test functions}. Note that for PE, ciphertext tags play the role of the variables $\textbf{x}$, and equality test functions act as the functions $f$ defined in FKHE. For PE, each additional puncture defines one extra equality test function, which needs a delegation mechanism to take the function into account. We note that this requirement can easily be met using the same idea as the key delegation mentioned above. In order to employ the idea of DFKHE for $(y_0, \mathcal{F})$ in PE, we define an efficiently computable family $\mathcal{F}$ of equality test functions $f_{t^*}(\mathbf{t})$ allowing us to compare the puncture $t^*$ with the ciphertext tags $\mathbf{t}=(t_1, \cdots, t_d)$, under the definition that $f_{t^*}(\mathbf{t})=y_0$ iff $t^* \neq t_j$ for all $j\in [d]$, for some fixed value $y_0$.
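To make the equality-test family concrete, the following minimal Python sketch is ours and purely illustrative; it mirrors the definition above with $y_0 = 0$, as instantiated later for the LWE-based construction, and the modulus is an arbitrary example. \begin{verbatim}
# Illustrative sketch of the equality-test family over Z_q with y0 = 0:
# f_{t*}(t_1..t_d) = sum_j eq_{t*}(t_j) mod q is 0 iff t* avoids all tags.
q = 2**16 + 1   # illustrative modulus

def eq(t_star, t):
    return 1 if t == t_star else 0

def f(t_star, tags):
    return sum(eq(t_star, t) for t in tags) % q

def punctured_key_decrypts(punctures, tags):
    # decryption succeeds iff every punctured function evaluates to y0 = 0
    return all(f(t_star, tags) == 0 for t_star in punctures)

assert punctured_key_decrypts([7, 9], [1, 2, 3])    # disjoint: decrypts
assert not punctured_key_decrypts([2], [1, 2, 3])   # overlap: fails
\end{verbatim}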
For concrete DFKHE and PE constructions, we employ the LWE-based FKHE proposed in \cite{BGG+14}. In this system, the ciphertext is $ct= (\textbf{c}_{\textsf{in}}, \textbf{c}_1, \cdots, \textbf{c}_d, \textbf{c}_{\textsf{out}})$, where $\textbf{c}_{i}=(t_i\textbf{G}+\textbf{B}_i)^T \textbf{s}+\textbf{e}_{i}$ for $i\in [d]$. Here the gadget matrix $\textbf{G}$ is a special matrix whose associated trapdoor $\textbf{T}_\textbf{G}$ (i.e., a short basis for the $q$-ary lattice $\Lambda_q^{\bot}(\textbf{G})$) is publicly known (see \cite{MP12} for details). Also, there exist three evaluation algorithms named $\textsf{Eval}_{\textsf{pk}}$, $\textsf{Eval}_{\textsf{ct}}$ and $\textsf{Eval}_{\textsf{sim}}$ \cite{BGG+14}, which help us to homomorphically evaluate a circuit (function) on a ciphertext $ct$. More specifically, from $\textbf{c}_i:=[t_i\textbf{G}+\textbf{B}_i]^T\textbf{s}+\textbf{e}_i$, where $\| \textbf{e}_i \|<\delta $ for all $i\in [d]$, and a function $f:(\mathbb{Z}_q)^d \rightarrow \mathbb{Z}_q$, we get $\textbf{c}_f=[f(t_1, \cdots, t_d)\textbf{G}+\textbf{B}_f]^T\textbf{s}+\textbf{e}_f$ with $\| \textbf{e}_f \|<\Delta$, where $ \mathbf{B}_f \leftarrow\mathsf{Eval}_\mathsf{pk}(f, (\mathbf{B}_i )_{i=1}^d) $, $ \mathbf{c}_f \leftarrow \mathsf{Eval}_\mathsf{ct}(f, ((t_i, \mathbf{B}_i,\mathbf{c}_i))_{i=1}^d)$, and $\Delta < \delta \cdot \beta$ for some sufficiently small $\beta$. The algorithm $\textsf{ExtEval}$ mentioned above can be implemented by calling $\mathsf{Eval}_\mathsf{pk}$ and $\textsf{Eval}_{\textsf{ct}}$ repeatedly, once for each function. Meanwhile, $\mathsf{Eval}_\mathsf{sim}$ is only useful in the simulation for the security proof. In the LWE-based DFKHE construction, secret keys are trapdoors for $q$-ary lattices of the form $\Lambda_q^{\bot}([\textbf{A}|\textbf{B}_{f_1}|\cdots| \textbf{B}_{f_k}])$. For the key delegation $\mathsf{KDel}$, we can utilize the trapdoor techniques of \cite{GPV08, ABB10, CHKP10}. For the LWE-based PE instantiation, we employ the equality test functions with $y_0:=0 \text{ (mod } q)$. Namely, for a puncture $t^*$ and a list of ciphertext tags $t_1, \cdots, t_d$, we define $f_{t^*}(t_1, \cdots, t_d):=eq_{t^*}(t_1)+\cdots+ eq_{t^*}(t_d)$, where $eq_{t^*}: \mathbb{Z}_q\rightarrow \mathbb{Z}_q$ satisfies, for all $t\in \mathbb{Z}_q$, that $eq_{t^*}(t)=1 \text{ (mod } q)$ iff $t=t^*$, and $eq_{t^*}(t)=0 \text{ (mod } q)$ otherwise. Such functions have also been employed in \cite{BKM17} to construct a privately puncturable pseudorandom function. It follows from the generic construction that our PE instantiation is selectively CPA-secure.\\ \noindent \textbf{Efficiency.} Table \ref{tab21} summarizes the asymptotic bit-sizes of the public key, secret key, punctured keys and ciphertext. We can see that the public key size is a linear function of the number of ciphertext tags (i.e., $d$). The (initial) secret key size is independent of both $d$ and $\eta$ (the number of punctures). The punctured key (decryption key) size is a quadratic function of $\eta$. Lastly, the ciphertext size is a linear function of $d$.\\ \noindent \textbf{On unbounded ciphertext tags.} We believe that our framework can be extended to support an unbounded number of ciphertext tags by exploiting the interesting technique of \cite{BV16}. The key idea of \cite{BV16} is to use homomorphic evaluation of a family of pseudorandom functions. This helps to stretch a predetermined parameter (e.g., the length of a seed) to an arbitrary number of ciphertext tags.
The predetermined parameter will be used to generate the other public parameters (e.g., public matrices). More details are given in Section \ref{unbounded}.\\ \noindent \textbf{Paper Organization.} In Section \ref{pre}, we review some background related to this work. Our main contributions are presented in Section \ref{generic} and Section \ref{instan}. We formally define DFKHE and the generic PE construction from DFKHE in Section \ref{generic}. Section \ref{instan} is dedicated to the LWE-based instantiation of DFKHE and the induced PE. Section \ref{unbounded} discusses the feasibility of transforming our proposed LWE-based PE to work with an unbounded number of ciphertext tags. This work is concluded in Section \ref{conclude}. \begin{table}[pt] \centering \medskip \smallskip \small\addtolength{\tabcolsep}{0pt} \begin{tabular}{ c| c} \hline Public key size& $O((d+1)\cdot n^2 \log^2 q)$\\ Secret key size& $O(n^2 \log^2 q\cdot \log( n\log q))$ \\ Punctured key size& $(\eta+1) \cdot n \log q \cdot( O(\log(\beta_{\mathcal{F}})+\eta\cdot \log (n \log q)))$\\ Ciphertext size& $O((d+2)\cdot n\log^2q))$\\ \hline \end{tabular} \caption{Sizes of keys and ciphertexts in our LWE-based PE as functions of the number of ciphertext tags $d$ and the number of punctures $\eta$.} \label{tab21} \end{table} \section{Preliminaries} \label{pre} \subsection{Framework of Puncturable Encryption} \label{peer} \noindent \textbf{Syntax of puncturable encryption.} For a security parameter $\lambda$, let $d=d(\lambda)$, $\mathcal{M}=\mathcal{M}(\lambda)$ and $\mathcal{T}=\mathcal{T}(\lambda)$ be the maximum number of tags per ciphertext, the space of plaintexts and the set of valid tags, respectively. Puncturable encryption (PE) is a collection of the following four algorithms \textsf{KeyGen}, \textsf{Encrypt}, \textsf{Puncture} and \textsf{Decrypt}: \begin{itemize} \item \underline{$(pk, sk_0) \leftarrow \textsf{KeyGen}(1^\lambda, d)$: } For a security parameter $\lambda$ and the maximum number $d$ of tags per ciphertext, the probabilistic polynomial time (PPT) algorithm \textsf{KeyGen} outputs a public key $pk$ and an initial secret key $sk_0$. \item \underline{ $ct \leftarrow \textsf{Encrypt}(pk,\mu, \{t_1, \cdots, t_d\})$: } For a public key $pk$, a message $\mu$, and a list of tags $t_1, \cdots, t_d$, the PPT algorithm \textsf{Encrypt} returns a ciphertext $ct$. \item \underline{$sk_{i} \leftarrow \textsf{Puncture}(pk, sk_{i-1}, t^*_{i})$: } For any $i\geq 1$, on input $pk$, $sk_{i-1}$ and a tag $t^*_i$, the PPT algorithm \textsf{Puncture} outputs a punctured key $sk_{i}$ that can decrypt any ciphertext, except for those encrypted under a list of tags containing any punctured tag $t^*_j$ with $j\leq i$. \item \underline{ $\mu/\bot \leftarrow \textsf{Decrypt}(pk, sk_{i},(ct, \{t_1, \cdots, t_d\}))$: } For input $pk$, a ciphertext $ct$, a secret key $sk_{i}$, and a list of tags $\{t_1, \cdots, t_d\}$, the deterministic polynomial time (DPT) algorithm \textsf{Decrypt} outputs either a message $\mu$ if the decryption succeeds or $\bot$ if it fails.
\end{itemize} \noindent \textbf{Correctness.} The correctness requirement for PE is as follows: \\ For all $\lambda, d, \eta\geq 0$, $t^*_1, \cdots, t^*_\eta ,t_1, \cdots, t_d \in \mathcal{T}$, $(pk, sk_0) \leftarrow \textsf{KeyGen}(1^\lambda, d)$, $sk_i\gets\textsf{Puncture}(pk, sk_{i-1},t^*_i)$ for all $ i \in [\eta]$, and $ct=\textsf{Encrypt}(pk,\mu, \{t_1, \cdots, t_d\})$, we have \begin{itemize} \item If $\{t^*_1, \cdots, t^*_\eta\}\cap \{t_1, \cdots, t_d\}=\emptyset $, then $\forall i\in \{0,\cdots, \eta\}$, \[ \mathrm{Pr}[\textsf{Decrypt}(pk, sk_i, (ct, \{t_1, \cdots, t_d\}))=\mu]\geq 1-\textsf{negl}(\lambda). \] \item If there exist $j\in[d]$ and $k\in [\eta]$ such that $t^*_k= t_j$, then $\forall i \in \{k,\cdots, \eta\}$, \[ \mathrm{Pr}[\textsf{Decrypt}(pk, sk_i, (ct, \{t_1, \cdots, t_d\}))=\mu]\leq \textsf{negl}(\lambda). \] \end{itemize} \begin{definition}[Selective Security of PE] PE is IND-sPUN-ATK secure if the advantage of any PPT adversary $\mathcal{A}$ in the game $\mathsf{IND}$-$\mathsf{sPUN}$-$\mathsf{ATK}^{\mathsf{sel},\mathcal{A}}_{\mathsf{PE}}$ is negligible, where ATK $\in $ \{CPA, CCA\}. Formally, $$\mathsf{Adv}_{\mathsf{PE}}^{\mathsf{IND}\text{-}\mathsf{sPUN}\text{-}\mathsf{ATK}}(\mathcal{A})=|\mathrm{Pr}[b'=b]-\frac{1}{2}| \leq \mathsf{negl}(\lambda).$$ \end{definition} The game $\mathsf{IND}$-$\mathsf{sPUN}$-$\mathsf{ATK}^{\mathsf{sel},\mathcal{A}}_{\mathsf{PE}}$ proceeds as follows. \begin{enumerate} \item \textbf{Initialize.} The adversary announces the target tags $\{\widehat{t_1}, \cdots, \widehat{t_d}\}$. \item \textbf{Setup.} The challenger initializes a set of punctured tags $\mathcal{T}^* \leftarrow \emptyset$, a counter $i\leftarrow 0$ that counts the current number of punctured tags in $\mathcal{T}^*$, and a set of corrupted tags $\mathcal{C}^* \leftarrow \emptyset$ containing all punctured tags at the time of the first corruption query. Then, it runs $(pk, sk_0) \leftarrow \textsf{KeyGen}(1^\lambda, d)$. Finally, it gives $pk$ to the adversary. \item \textbf{Query 1.} \begin{itemize} \item Once the adversary makes a puncture key query PQ($t^*$), the challenger updates $i \leftarrow i+1$, returns $sk_{i} \leftarrow \textsf{Puncture}(pk, sk_{i-1},t^*)$ and adds $t^*$ to $\mathcal{T}^*$. \item The first time the adversary makes a corruption query CQ(), the challenger returns $\bot$ if it finds that $\{\widehat{t_1}, \cdots, \widehat{t_d}\} \cap \mathcal{T}^*= \emptyset$. Otherwise, the challenger returns the most recent punctured key $sk_\eta$ and then sets $\mathcal{C}^*$ to the most recent $\mathcal{T}^*$ (i.e., $\mathcal{C}^* \leftarrow \mathcal{T}^*=\{t_1^*, \cdots, t^*_\eta\}$). All subsequent puncture key queries and corruption queries are answered with $\bot$. \item If $ATK=CCA$: once the adversary makes a decryption query DQ$(ct,\{t_1, $ $\cdots, t_d\})$, the challenger runs $\textsf{Decrypt}(pk, sk_{\eta}, (ct,\{t_1, \cdots, t_d\}))$ using the most recent punctured key $sk_\eta$ and returns its output. \\ If $ATK=CPA$: the challenger returns $\bot$. \end{itemize} \item \textbf{Challenge.} The adversary submits two messages $\mu_0, \mu_1$. The challenger rejects the challenge if it finds that $\{\widehat{t_1}, \cdots, \widehat{t_d}\} \cap \mathcal{C}^*= \emptyset$\footnote{Note that, after making some queries that are all different from the target tags, the adversary might skip the corruption query, go directly to the challenge phase and trivially win the game. This rejection prevents the adversary from such a trivial win.
It also forces the adversary to make the corruption query before challenging.}. Otherwise, the challenger chooses $b \xleftarrow{\$} \{0,1\}$ and returns $\widehat{ct} \leftarrow \textsf{Encrypt}(pk,\mu_b, \{\widehat{t_1}, \cdots, \widehat{t_d}\})$. \item \textbf{Query 2.} The same as Query 1, with the restriction that for DQ$(ct,\{t_1, \cdots, t_d\})$, the challenger returns $\bot$ if $(ct,\{t_1, \cdots, t_d \})$ $=(\widehat{ct}, \{\widehat{t_1}, \cdots, \widehat{t_d}\} )$. \item \textbf{Guess.} The adversary outputs $b'\in \{0,1\}$. It wins if $b'=b$. \end{enumerate} The full security for PE is defined in the same way, except that the adversary may choose the target tags in the Challenge phase, i.e., after getting the public key and after the Query 1 phase. In this case, the challenger does not need to check the condition $\{\widehat{t_1}, \cdots, \widehat{t_d}\} \cap \mathcal{T}^*= \emptyset$ in the first corruption query CQ() of the adversary in the Query 1 phase. \subsection{Background on Lattices} \label{background} In this work, all vectors are written as columns. The transpose of a vector $\textbf{b}$ (resp., a matrix $\textbf{A}$) is denoted by $\textbf{b}^T$ (resp., $\textbf{A}^T$). The Gram-Schmidt (GS) orthogonalization of $\textbf{S}:=[\bf{s}_1,\cdots, \bf{s}_k]$ is denoted by $\widetilde{\textbf{S}}:=[\widetilde{\bf{s}}_1,\cdots,\widetilde{\bf{s}}_k ]$, in the same order. \noindent \textbf{Lattices.} A lattice is a set $\mathcal{L}=\mathcal{L}(\textbf{B}):=\left\{\sum_{i=1}^m\bf{b}_ix_i : x_i\in\mathbb{Z}~\forall i\in[m ] \right\}\subseteq\mathbb{Z}^n$ generated by a basis $\textbf{B}=[\textbf{b}_1|\cdots |\textbf{b}_m]\in \mathbb{Z}^{n\times m}.$ We are interested in the following lattices:\\ $ \Lambda^{\bot}_q(\textbf{A}) :=\left\{ \bf{e}\in\mathbb{Z}^m \text{ s.t. } \textbf{A}\bf{e}=0 ~(\text{mod } q) \right\}, $ $\Lambda_q^{\textbf{u}}(\textbf{A}) := \left\{ \bf{e}\in\mathbb{Z}^m~\mathrm{s.t.}~ \textbf{A}\bf{e}=\textbf{u}~(\text{mod } q) \right\},$\\ $\Lambda_q^{\textbf{U}}(\textbf{A}) := \left\{ \mathbf{R}\in\mathbb{Z}^{m\times k}~\mathrm{s.t.}~ \textbf{A}\mathbf{R}=\textbf{U} (\text{mod } q) \right\},$ where $\textbf{A}\xleftarrow{\$}\mathbb{Z}^{n\times m}$, $\textbf{u}\in\mathbb{Z}_q^n$ and $\textbf{U}\in\mathbb{Z}_q^{n \times k}$. For a vector $\textbf{s}=(s_1,\cdots, s_n)$, $\|\textbf{s}\|:=\sqrt{s_1^2+\cdots+s_n^2}$ and $\|\textbf{s}\|_\infty:=\max_{i \in [n]}|s_i|$. For a matrix $\textbf{S}=[\bf{s}_1\cdots\bf{s}_k]$ and any vector $\textbf{x}=(x_1,\cdots, x_k)$, we define $\|\textbf{S}\|:=\max_{i\in [k]}\|\bf{s}_i\|$, the GS norm of $\textbf{S}$ as $\|\widetilde{\textbf{S}}\|$, and the sup norm as $\| \mathbf{S}\|_{sup}=\sup_{\textbf{x}}\frac{\| \textbf{S} \textbf{x}\|}{\| \textbf{x}\|}$. This yields, for all $\textbf{x}$, that $\| \textbf{S} \textbf{x}\| \leq \| \textbf{S}\|_{sup} \cdot \|\textbf{x}\|$. We call a basis $\textbf{S}$ of some lattice \textit{short} if $\|\widetilde{\textbf{S}}\|$ is small. \\ \noindent \textbf{Gaussian Distributions.} Assume $m\geq 1$, $\mathbf{v}\in \mathbb{R}^m$, $\sigma>0$, and $\mathbf{x}\in \mathbb{R}^m$. We define the function $\rho_{\sigma,\mathbf{v}}(\mathbf{x})= \exp({{-\pi \Vert \mathbf{x}-\mathbf{v}\Vert^2 }/{ \sigma^2}})$. \begin{definition}[Discrete Gaussians]\label{def12} Suppose that $\mathcal{L}\subseteq\mathbb{Z}^m$ is a lattice, $\mathbf{v}\in\mathbb{R}^m$ and $\sigma>0$.
The discrete Gaussian distribution over $\mathcal{L}$ with center $\mathbf{v}$ and parameter $\sigma$ is defined by $\mathcal{D}_{\mathcal{L},\sigma,\mathbf{v}}(\mathbf{x})=\frac{\rho_{\sigma,\mathbf{v}}(\mathbf{x})}{\rho_{\sigma,\mathbf{v}}(\mathcal{L})}$ for $\mathbf{x}\in\mathcal{L},$ where $ \rho_{\sigma,\mathbf{v}}(\mathcal{L}):=\sum_{\bf{x}\in\mathcal{L}}\rho_{\sigma,\mathbf{v}}(\bf{x}).$ \end{definition} \begin{lemma}[{\cite[Lemma 4.4]{MR04}}]\label{thm:Gauss} Let $q> 2$ and let $\mathbf{A}$ be a matrix in $\mathbb{Z}_q^{n\times m}$ with $m>n$. Let $\mathbf{T}_\mathbf{A}$ be a basis for $\Lambda^{\perp}_q(\mathbf{A})$. Then, for $\sigma\geq\|\widetilde{\mathbf{T}_\mathbf{A}}\|\cdot \omega(\sqrt{\log n})$, $\mathrm{Pr}[\bf{x}\gets \mathcal{D}_{\Lambda_q^{\bot}(\mathbf{A}),\sigma}:~\|\bf{x}\|>\sigma\sqrt{m}]\leq\mathsf{negl}(n).$ \end{lemma} \subsubsection{Learning with Errors.} The security of our construction relies on the decision variant of the learning with errors (LWE) problem, defined below. \begin{definition}[DLWE, {\cite{Reg05}}] \label{lwe} Suppose that $n$ is a positive integer, $q$ is prime, and $\chi$ is a distribution over $\mathbb{Z}_q$. The $(n, m, q, \chi)$-$\mathsf{DLWE}$ problem requires one to distinguish $(\mathbf{A}, \mathbf{A}^T\mathbf{s}+\mathbf{e})$ from $(\mathbf{A}, \mathbf{c}),$ where $\mathbf{A} \xleftarrow{\$}\mathbb{Z}_q^{n \times m} , \bf{s} \xleftarrow{\$} \mathbb{Z}_q^n, \mathbf{e}\gets \chi^m, \mathbf{c}\xleftarrow{\$} \mathbb{Z}_q^m.$ \end{definition} Let $\chi$ be a $\chi_0$-bounded noise distribution, i.e., its support belongs to $[-\chi_0,\chi_0]$. The hardness of DLWE is measured by $q/\chi_0$, which is always greater than $1$, as $\chi_0$ is chosen such that $\chi_0<q$. Specifically, the smaller $q/\chi_0$ is, the harder DLWE is. (See \cite[Subsection 2.2]{BGG+14} and \cite[Section 3]{BV16} for further discussions.) \begin{lemma}[{\cite[Corollary 3.2]{BV16}}] \label{dlwehard} For all $\epsilon>0$, there exist functions $q=q(n)\leq 2^n$, $m=\Theta(n\log q)=\mathsf{poly}(n)$, $\chi=\chi(n)$ such that $\chi$ is $\chi_0$-bounded for some $\chi_0=\chi_0(n)$, $q/\chi_0 \geq 2^{n^\epsilon}$, and such that $DLWE_{n, m, q, \chi}$ is at least as hard as the classical hardness of GapSVP$_{\gamma}$ and the quantum hardness of SIVP$_\gamma$ for $\gamma=2^{\Omega(n^\epsilon)}$.
\end{lemma} The GapSVP$_{\gamma}$ problem is, given a basis of a lattice and a positive number $d$, to distinguish between two cases: (i) the lattice has a nonzero vector shorter than $d$, and (ii) all nonzero lattice vectors have length greater than $\gamma\cdot d$. SIVP$_\gamma$ is the problem of, given a basis of a lattice of rank $n$, finding a set of $n$ ``short'' linearly independent lattice vectors.\\ \noindent \textbf{Leftover Hash Lemma.} The following variant of the so-called leftover hash lemma will be used in this work. \begin{lemma}[{\cite[Lemma 13]{ABB10}}] \label{lhl} Let $m,n,q$ be such that $m>(n+1)\log_2 q+ \omega(\log n)$ and that $q> 2$ is prime. Let $\mathbf{A}$ and $\mathbf{B}$ be chosen uniformly from $\mathbb{Z}_q^{n \times m}$ and $\mathbb{Z}_q^{n \times k}$, respectively. Then, for any matrix $\mathbf{S}$ chosen uniformly from $\{-1, 1\}^{m \times k} \text{ (mod } q)$ and for all vectors $\mathbf{e}\in \mathbb{Z}_q^{m}$, $$(\mathbf{A}, \mathbf{A}\mathbf{S}, \mathbf{S}^T\mathbf{e}) \stackrel{\text{s}}{\approx} (\mathbf{A}, \mathbf{B}, \mathbf{S}^T\mathbf{e}).$$ \end{lemma} We conclude this section with some standard results regarding the trapdoor mechanisms often used in lattice-based cryptography. \\ \noindent \textbf{Lattice Trapdoor Mechanism.} In our context, a (lattice) trapdoor is a short basis $\textbf{T}_\textbf{A}$ for the $q$-ary lattice $\Lambda^{\bot}_q(\textbf{A})$, i.e., $\textbf{A}\cdot\textbf{T}_\textbf{A}=0 \text{ (mod } q)$ (see \cite{GPV08}). We call $\textbf{T}_\textbf{A}$ the associated trapdoor for $\Lambda^{\bot}_q(\textbf{A})$, or even for $\textbf{A}$. \begin{lemma}\label{trapdoor} Let $n, m, q>0$ and $q$ be prime. \begin{enumerate} \item $(\mathbf{A},\mathbf{T}_\mathbf{A}) \leftarrow \mathsf{TrapGen}(n,m,q)$ (\cite{AP09}, \cite{MP12}): This is a PPT algorithm that outputs a pair $(\mathbf{A},\mathbf{T}_\mathbf{A}) \in \mathbb{Z}_q^{n \times m}\times \mathbb{Z}_q^{m \times m}$, where $\mathbf{T}_\mathbf{A}$ is a trapdoor for $\Lambda^{\bot}_q(\mathbf{A})$, such that $\mathbf{A}$ is negligibly close to uniform and $\| \widetilde{\mathbf{T}_\mathbf{A}} \|=O(\sqrt{n \log q})$. The algorithm works if $m=\Theta(n \log q)$. \item $\mathbf{T}_\mathbf{D}\leftarrow \mathsf{ExtBasisRight}(\mathbf{D}:=[\mathbf{A}|\mathbf{A}\mathbf{S}+\mathbf{B}], \mathbf{T}_\mathbf{B})$ (\cite{ABB10}): This is a DPT algorithm that, for the input $(\mathbf{D}, \mathbf{T}_\mathbf{B})$, outputs a trapdoor $\mathbf{T}_\mathbf{D}$ for $\Lambda^{\bot}_q(\mathbf{D})$ such that $\| \widetilde{\mathbf{T}_\mathbf{D}}\| \leq \| \widetilde{\mathbf{T}_\mathbf{B}}\|(1+\|\mathbf{S}\|_{sup})$, where $\mathbf{A}, \mathbf{B} \in \mathbb{Z}_q^{n \times m}$. \item $\mathbf{T}_\mathbf{E}\leftarrow \mathsf{ExtBasisLeft}(\mathbf{E}:=[\mathbf{A}|\mathbf{B}], \mathbf{T}_\mathbf{A})$ (\cite{CHKP10}): This is a DPT algorithm that, for $\mathbf{E}$ of the form $\mathbf{E}:=[\mathbf{A}|\mathbf{B}]$ and a trapdoor $\mathbf{T}_\mathbf{A}$ for $\Lambda^{\bot}_q(\mathbf{A})$, outputs a trapdoor $\mathbf{T}_\mathbf{E}$ for $\Lambda^{\bot}_q(\mathbf{E})$ such that $\| \widetilde{\mathbf{T}_\mathbf{E}}\| =\| \widetilde{\mathbf{T}_\mathbf{A}}\|$, where $\mathbf{A}, \mathbf{B} \in \mathbb{Z}_q^{n \times m}$.
\item $\mathbf{R}\leftarrow\mathsf{SampleD}(\mathbf{A},\mathbf{T}_\mathbf{A}, \mathbf{U}, \sigma)$ (\cite{GPV08}): This is a PPT algorithm that takes a matrix $\mathbf{A} \in \mathbb{Z}_q^{n \times m}$, its associated trapdoor $\mathbf{T}_\mathbf{A} \in \mathbb{Z}^{m \times m}$, a matrix $\mathbf{U} \in \mathbb{Z}_q^{n\times k}$ and a real number $\sigma>0$, and returns a short matrix $\mathbf{R} \in \mathbb{Z}_q^{m \times k} $ chosen randomly according to a distribution that is statistically close to $\mathcal{D}_{\Lambda^{\mathbf{U}}_q(\mathbf{A}),\sigma}$. The algorithm works if $\sigma=\| \widetilde{\mathbf{T}_\mathbf{A}}\|\cdot \omega(\sqrt{\log m})$. Furthermore, $\|\textbf{R}^T\|_{sup} \leq \sigma\sqrt{mk}$ and $\|\textbf{R}\|_{sup} \leq \sigma\sqrt{mk}$ (see also \cite[Lemma 2.5]{BGG+14}). \item $\mathbf{T}'_\mathbf{A} \leftarrow\mathsf{RandBasis}(\mathbf{A},\mathbf{T}_\mathbf{A}, \sigma)$ (\cite{CHKP10}): This is a PPT algorithm that takes a matrix $\mathbf{A} \in \mathbb{Z}_q^{n \times m}$, its associated trapdoor $\mathbf{T}_\mathbf{A} \in \mathbb{Z}^{m \times m}$, and a real number $\sigma>0$, and returns a new basis $\mathbf{T}'_\mathbf{A}$ for $\Lambda^{\bot}_q(\mathbf{A})$ chosen randomly according to a distribution that is statistically close to $(\mathcal{D}_{\Lambda^{\bot}_q(\mathbf{A}),\sigma})^m$, with $\| \widetilde{\mathbf{T}'_\mathbf{A}}\| \leq \sigma\sqrt{m}$. The algorithm works if $\sigma=\| \widetilde{\mathbf{T}_\mathbf{A}}\|\cdot \omega(\sqrt{\log m})$. \end{enumerate} \end{lemma} \section{Generic PE Construction from DFKHE} \label{generic} \subsection{Delegatable Fully Key-homomorphic Encryption} Delegatable fully key-homomorphic encryption (DFKHE) can be viewed as a generalisation of the so-called fully key-homomorphic encryption (FKHE) \cite{BGG+14} augmented with a key delegation mechanism \cite{BGG+14}. Informally, FKHE enables one to transform an encryption, say $ct_\mathbf{x}$, of a plaintext $\mu$ under a public variable $\mathbf{x}$ into an encryption, say $ct_f$, of the same $\mu$ under some value/function pair $(y, f)$, with the restriction that one is only able to decrypt the ciphertext $ct_f$ if $f(\mathbf{x})=y$. Similarly, DFKHE together with the key delegation mechanism allows one to do the same but with more functions, i.e., $(y, f_1, \cdots, f_k)$, and the condition for successful decryption is that $f_1(\mathbf{x})=\cdots=f_k(\mathbf{x})=y$. \begin{definition}[DFKHE] \label{dfkhe2} Let $\lambda, d=d(\lambda) \in \mathbb{N}$ be two positive integers and let $\mathcal{T}=\mathcal{T}(\lambda)$ and $\mathcal{Y}=\mathcal{Y} (\lambda)$ be two finite sets. Define $\mathcal{F}=\mathcal{F}(\lambda)=\{f| f: \mathcal{T}^{d} \rightarrow \mathcal{Y} \}$ to be a family of efficiently computable functions. A $(\lambda, d,$ $\mathcal{T}, \mathcal{Y}, \mathcal{F})$--DFKHE is a tuple consisting of the following algorithms. \begin{description} \item \underline{$( \mathsf{dfkhe}.pk, \mathsf{dfkhe}.sk) \leftarrow \mathsf{DFKHE.KGen}(1^\lambda,\mathcal{F} )$: } This PPT algorithm takes as input a security parameter $\lambda$ and the family $\mathcal{F}$, and outputs a public key $\mathsf{dfkhe}.pk$ and a secret key $\mathsf{dfkhe}.sk$. \item\underline{ $\mathsf{dfkhe}.sk_{y,f} \leftarrow \mathsf{DFKHE.KHom}(\mathsf{dfkhe}.sk, (y,f) )$: } This PPT algorithm takes as input the secret key $\mathsf{dfkhe}.sk$ and a pair $(y,f)\in \mathcal{Y} \times \mathcal{F} $, and returns a secret homomorphic key $sk_{y,f}$.
\item \underline{ $\mathsf{dfkhe}.sk_{y, f_1,\cdots, f_{k+1}} \leftarrow \mathsf{DFKHE.KDel}(\mathsf{dfkhe}.pk, \mathsf{dfkhe}.sk_{y, f_1,\cdots, f_{k}}, (y,f_{k+1}) )$:} This PPT algorithm takes as input the public key $\mathsf{dfkhe}.pk$, a function $f_{k+1}\in \mathcal{F}$ and the secret key $\mathsf{dfkhe}.sk_{y, f_1,\cdots, f_{k}}$, and returns the delegated secret key $\mathsf{dfkhe}.sk_{y, f_1,\cdots, f_{k+1}}$. Here, the key $\mathsf{dfkhe}.sk_{y, f_1,\cdots, f_{k}}$ is produced either by $\mathsf{DFKHE.KHom}$ if $k=1$, or iteratively by $ \mathsf{DFKHE.KDel}$ if $k>1$. \item \underline{$(\mathsf{dfkhe}.ct,\mathbf{t} ) \leftarrow \mathsf{DFKHE.Enc}(\mathsf{dfkhe}.pk, \mu, \mathbf{t} )$:} This PPT algorithm takes as input the public key $\mathsf{dfkhe}.pk$, a plaintext $\mu$ and a variable $\mathbf{t} \in \mathcal{T}^d$, and returns a ciphertext $\mathsf{dfkhe}.ct$--an encryption of $\mu$ under the variable $\mathbf{t} $. \item \underline{$\mathsf{dfkhe}.ct_{f_1,\cdots, f_k} \leftarrow \mathsf{DFKHE.ExtEval}(f_1,\cdots, f_{k},(\mathsf{dfkhe}.ct,\mathbf{t} ))$: } This DPT algorithm takes as input functions $f_1, \cdots, f_k\in \mathcal{F}$, a ciphertext $\mathsf{dfkhe}.ct$ and the associated variable $\mathbf{t} \in \mathcal{T}^d$, and returns an evaluated ciphertext $\mathsf{dfkhe}.ct_{f_1,\cdots, f_k}$. If $f_1(\mathbf{t})=\cdots=f_k(\mathbf{t})=y$, then we say that $\mathsf{dfkhe}.ct_{f_1, \cdots, f_k}$ is an encryption of $\mu$ under the public key $(y,f_1, \cdots, f_k)$. \item \underline{$\mu/\bot \leftarrow \mathsf{DFKHE.Dec}(\mathsf{dfkhe}.sk_{y,f_1, \cdots, f_k}, (\mathsf{dfkhe}.ct,\mathbf{t}))$:} This DPT algorithm takes as input a delegated secret key $\mathsf{dfkhe}.sk_{y,f_1, \cdots, f_k}$ and a ciphertext $\mathsf{dfkhe}.ct$ associated with $ \mathbf{t}\in \mathcal{T}^d$, and recovers a plaintext $\mu$. It succeeds if $f_i(\mathbf{t})=y$ for all $i\in[k]$; otherwise, it fails and returns $\bot$. To recover $\mu$, the algorithm first calls $ \mathsf{DFKHE.ExtEval}(f_1,\cdots, f_{k},(\mathsf{dfkhe}.ct,\mathbf{t} ))$ to get $\mathsf{dfkhe}.ct_{f_1,\cdots, f_k}$, and then uses $\mathsf{dfkhe}.sk_{y, f_1, \cdots, f_k}$ to open $\mathsf{dfkhe}.ct_{f_1,\cdots, f_k}$.
\end{description} \end{definition} Obviously, DFKHE from Definition \ref{dfkhe2} is identical to FKHE \cite{BGG+14} when $k=1$.\\ \noindent \textbf{Correctness.} For all $\mu \in \mathcal{M}$, all $k\in \mathbb{N}$, all $f_1, \cdots, f_k\in \mathcal{F}$, $\mathbf{t} \in \mathcal{T}^d$ and $y\in \mathcal{Y}$, over the randomness of $(\mathsf{dfkhe}.pk, \mathsf{dfkhe}.sk) \leftarrow \mathsf{DFKHE.KGen}(1^\lambda,\mathcal{F} )$, $(\mathsf{dfkhe}.ct,\mathbf{t} ) \leftarrow \mathsf{DFKHE.Enc}(\mathsf{dfkhe}.pk, \mu, \mathbf{t} )$, $\mathsf{dfkhe}.sk_{y,f_1} \leftarrow \mathsf{DFKHE.KHom}(\mathsf{dfkhe}.sk, (y,f_1) )$, \\ $\mathsf{dfkhe}.sk_{y, f_1,\cdots, f_{i}}$ $\leftarrow \mathsf{DFKHE.KDel}(\mathsf{dfkhe}.pk, \mathsf{dfkhe}.sk_{y, f_1,\cdots, f_{i-1}}, $ $(y, f_{i} ))$ for all $ i\in \{2,\cdots, k\}$, and $\mathsf{dfkhe}.ct_{f_1,\cdots, f_k} \leftarrow \mathsf{DFKHE.ExtEval}$ $(f_1,\cdots, f_{k},(\mathsf{dfkhe}.ct,\mathbf{t} ))$, the following hold: \begin{itemize} \item $ \mathrm{Pr}[\mathsf{DFKHE.Dec}(\mathsf{dfkhe}.sk, (\mathsf{dfkhe}.ct, \mathbf{t}))=\mu]\geq 1-\mathsf{negl}(\lambda),$ \item if $y=f_1(\mathbf{t})=\cdots=f_k(\mathbf{t})$, then \begin{eqnarray*} && \mathrm{Pr}[\mathsf{DFKHE.Dec}(\mathsf{dfkhe}.sk, (\mathsf{dfkhe}.ct_{f_1,\cdots, f_k}, \mathbf{t}))=\mu]\geq 1-\mathsf{negl}(\lambda),\\ && \mathrm{Pr}[\mathsf{DFKHE.Dec}(\mathsf{dfkhe}.sk_{y,f_1,\cdots, f_i}, (\mathsf{dfkhe}.ct, \mathbf{t}))=\mu]\geq 1-\mathsf{negl}(\lambda), \forall i\in [k], \end{eqnarray*} \item for any $ i\in [k]$, if $ y\neq f_i(\mathbf{t}),$ \[ \mathrm{Pr}[\mathsf{DFKHE.Dec}(\mathsf{dfkhe}.sk_{y,f_1,\cdots, f_j}, (\mathsf{dfkhe}.ct, \mathbf{t}))=\mu]\leq \mathsf{negl}(\lambda), \forall j\in \{i, \cdots, k\}. \] \end{itemize} \noindent \textbf{Security.} Security of DFKHE is similar to that of FKHE from \cite{BGG+14}, with an extra evaluation that includes the key delegation mechanism. \begin{definition}[Selectively-secure CPA of DFKHE] DFKHE is IND-sVAR-CPA if for any polynomial-time adversary $\mathcal{B}$ in the game $\mathsf{IND}$-$\mathsf{sVAR}$-$\mathsf{CPA}^{\mathsf{sel},\mathcal{B}}_{\mathsf{DFKHE}}$, the adversary's advantage satisfies $\mathsf{Adv}_{\mathsf{DFKHE}}^{\mathsf{IND}\text{-}\mathsf{sVAR}\text{-}\mathsf{CPA}}(\mathcal{B})=|\mathrm{Pr}[b'=b]-\frac{1}{2}| \leq \mathsf{negl}(\lambda).$ \end{definition} The $\mathsf{IND}$-$\mathsf{sVAR}$-$\mathsf{CPA}^{\mathsf{sel},\mathcal{B}}_{\mathsf{DFKHE}}$ game is as follows. \begin{enumerate} \item \textbf{Initialize.} Given the security parameter $\lambda$ and the $\lambda$--dependent tuple $(d, (\mathcal{T}, \mathcal{Y}, \mathcal{F}))$, $\mathcal{B}$ releases the target variable $\widehat{\mathbf{t}}=(\widehat{t_1}, \cdots, \widehat{t_d})\in \mathcal{T}^d$. \item \textbf{Setup.} The challenger runs $(\textsf{dfkhe}.pk, \textsf{dfkhe}.sk) \leftarrow \textsf{DFKHE.KGen}(1^\lambda, \mathcal{F} )$. Then, it gives $\textsf{dfkhe}.pk$ to $\mathcal{B}$. \item \textbf{Query.} $\mathcal{B}$ adaptively makes delegated key queries DKQ($y,(f_1, \cdots, f_k)$) to get the corresponding delegated secret keys. Specifically, $\mathcal{B}$ is allowed access to the oracle $KG(\mathsf{dfkhe}.sk,\widehat{\mathbf{t}},y,(f_1, \cdots, f_k))$, which takes as input $\mathsf{dfkhe}.sk,$ $\widehat{\mathbf{t}},$ a list of functions $f_1, \cdots, f_k\in \mathcal{F}$ and $y\in \mathcal{Y}$, and returns $\bot$ if $f_j(\widehat{\mathbf{t}})=y$ for all $j\in[k]$, or the delegated secret key $\mathsf{dfkhe}.sk_{y, f_1,\cdots, f_{k}}$ otherwise.
The delegated secret key $\mathsf{dfkhe}.sk_{y, f_1,\cdots, f_{k}}$ is computed by calling $\mathsf{dfkhe}.sk_{y, f_1}: =\mathsf{DFKHE.KHom}$ $(\mathsf{dfkhe}.sk, (y,f_1))$ and\\ $\mathsf{dfkhe}.sk_{y, f_1,\cdots, f_{i}}\leftarrow \mathsf{DFKHE.KDel}$ $(\mathsf{dfkhe}.pk, \mathsf{dfkhe}.sk_{y, f_1,\cdots, f_{i-1}}, (y,f_{i} )),~ \forall i\in \{2, \cdots, k\}$. \item \textbf{Challenge.} The adversary submits two messages $\mu_0, \mu_1$ (with $\widehat{\mathbf{t}}$). The challenger in turn chooses $b \xleftarrow{\$} \{0,1\}$ and returns the output $(\mathsf{dfkhe}.\widehat{ct}, \widehat{\mathbf{t}})$ of $ \textsf{DFKHE.Enc}(\mathsf{dfkhe}.pk,\mu_b, $ $\widehat{\mathbf{t}})$. \item \textbf{Guess.} The adversary outputs $b'\in \{0,1\}$. It wins if $b'=b$. \end{enumerate} \subsection{Generic PE Construction from DFKHE.} The main idea behind our construction is the observation that ciphertext tags can be treated as variables $\textbf{t}=(t_1, \cdots, t_d) \in \mathcal{T}^d$. The puncturing property, which is related to ``equality'', suggests constructing a family $\mathcal{F}$ of equality test functions, allowing one to compare each pair of ciphertext tags and punctures. Using this idea, we can then obtain a PE construction from DFKHE. Let $\lambda, d=d(\lambda) \in \mathbb{N}$ be two positive integers. Let $\mathcal{T}=\mathcal{T}({\lambda})$ be a finite set (henceforth called the \textit{tag space}) and let $\mathcal{Y}=\mathcal{Y}({\lambda})$ also be a finite set. In addition, let $y_0\in \mathcal{Y}$ be some fixed special element. Define the family of all equality test functions indexed by $\mathcal{T}$, \begin{equation}\label{eq10} \mathcal{F}=\mathcal{F}({\lambda}):=\left \{ f_{t^*}: \mathcal{T}^d \rightarrow \mathcal{Y} ~\middle|~ t^* \in \mathcal{T} \right \}, \end{equation} where, for $\mathbf{t}=(t_1, \cdots, t_d)\in \mathcal{T}^d$, $f_{t^*}(\mathbf{t}):=y_0$ if $t^* \neq t_i$ for all $i\in[d]$, and $f_{t^*}(\mathbf{t}):= y_{t^*,\mathbf{t}}\in \mathcal{Y}\setminus \{y_0\}$ otherwise (a toy programmatic sketch of this family is given below).
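To make the equality test family concrete, the following toy Python sketch (an illustration only, not part of the formal construction) implements $f_{t^*}$ for the special case $\mathcal{Y}=\{0,1\}$ with $y_0=0$; the encoding of tags as integers and the value $1$ returned in the ``some tag matches'' case are assumptions made purely for illustration.
\begin{verbatim}
# Toy model of the equality-test family F (illustration only).
# Assumptions: tags are integers, Y = {0, 1}, y0 = 0, and the
# "t* equals some tag" case simply returns 1 (any value != y0 works).
Y0 = 0

def make_f(t_star):
    """Return the equality-test function f_{t*} on d-tuples of tags."""
    def f(tags):
        # f_{t*}(t) = y0  iff  t* differs from every coordinate of t.
        return Y0 if all(t_star != t for t in tags) else 1
    return f

# Decryption under the puncture key for {t*_1, ..., t*_i} succeeds on a
# ciphertext with tags t exactly when f_{t*_j}(t) = y0 for every j.
tags = (2, 3, 5)
print(all(make_f(ts)(tags) == Y0 for ts in [7, 13]))  # True: no overlap
print(all(make_f(ts)(tags) == Y0 for ts in [7, 5]))   # False: 5 is a tag
\end{verbatim}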
In Equation \eqref{eq10}, $y_{t^*,\mathbf{t}}$ denotes a value that depends on $t^*$ and $\mathbf{t}$. Now, let $\Pi=(\mathsf{DFKHE.KGen}, \mathsf{DFKHE.KHom}, $ $ \mathsf{DFKHE.Enc}, $ $ \mathsf{DFKHE.ExtEval},$ $ \mathsf{DFKHE.KDel},$ $\mathsf{DFKHE.Dec} )$ be a $(\lambda, d,$ $\mathcal{T}, \mathcal{Y}, \mathcal{F})$--DFKHE. Using $\Pi$, we can construct a PE system $\Psi=(\mathsf{PE.key}, \mathsf{PE.enc}, \mathsf{PE.pun},$ $ \mathsf{PE.dec} )$ whose tags and punctures both reside in $ \mathcal{T}$. The description of $\Psi$ follows: \begin{description} \item \underline{$(\mathsf{pe}.pk, \mathsf{pe}.sk_0) \leftarrow \mathsf{PE.key}(1^\lambda, d)$:} For input a security parameter $\lambda$ and the maximum number $d$ of tags per ciphertext, run $(\mathsf{dfkhe}.pk, \mathsf{dfkhe}.sk) $ $\leftarrow \mathsf{DFKHE.KGen}(1^\lambda, \mathcal{F})$, and return $\mathsf{pe}.pk:=\mathsf{dfkhe}.pk$ and $ \mathsf{pe}.sk_0:=\mathsf{dfkhe}.sk$. \item \underline{$\mathsf{pe}.ct \leftarrow \mathsf{PE.enc}(\mathsf{pe}.pk,\mu, \mathbf{t}=(t_1, \cdots, t_d))$:} For a public key $\mathsf{pe}.pk$, a message $\mu$, and ciphertext tags $\mathbf{t}=(t_1, \cdots, t_d)$, return $\mathsf{pe}.ct \leftarrow \mathsf{DFKHE.Enc}(\mathsf{pe}.pk, \mu, \mathbf{t} )$. \item \underline{$\mathsf{pe}.sk_{i} \leftarrow \mathsf{PE.pun}(\mathsf{pe}.pk, \mathsf{pe}.sk_{i-1}, t^*_{i})$:} For input $\mathsf{pe}.pk$, $\mathsf{pe}.sk_{i-1}$ and a punctured tag $t^*_i$: \begin{itemize} \item If $i=1$: run $\mathsf{dfkhe}.sk_{y_0,f_{t^*_1}} \leftarrow \mathsf{DFKHE.KHom}(\mathsf{pe}.sk_{0}, (y_0,f_{t^*_1}))$ and set $\mathsf{pe}.sk_{1}:=\mathsf{dfkhe}.sk_{y_0,f_{t^*_1}} $. \item If $i \geq 2$: compute $\mathsf{pe}.sk_{i} \leftarrow \mathsf{DFKHE.KDel}(\mathsf{dfkhe}.pk, \mathsf{pe}.sk_{i-1},(y_0,f_{t^*_{i}}) ).$ \item Finally, output $\mathsf{pe}.sk_{i}.$ \end{itemize} \item \underline{$\mu/\bot \leftarrow \mathsf{PE.dec}(\mathsf{pe}.pk, (\mathsf{pe}.sk_{i}, (t^*_1, \cdots, t^*_i)),(\mathsf{pe}.ct, \mathbf{t}))$:} For input the public key $\mathsf{pe}.pk$, a puncture key $\mathsf{pe}.sk_{i}$ together with punctures $(t^*_1, \cdots, t^*_i)$, a ciphertext $\mathsf{pe}.ct$ and its associated tags $\mathbf{t}=(t_1, \cdots, t_d)$, the algorithm first checks whether $f_{t^*_1}(\mathbf{t})=\cdots=f_{t^*_i}(\mathbf{t})=y_0$. If not, it returns $\bot$. Otherwise, it returns the output of $ \mathsf{DFKHE.Dec}(\mathsf{pe}.sk_{i}, (\mathsf{pe}.ct, \mathbf{t}))$. \end{description} \noindent \textbf{Correctness.} Remark that, over the choice of $(\lambda, d, \eta, (t^*_1, \cdots, t^*_\eta), (t_1, \cdots, t_d))$, where $\eta\geq 0$, $t^*_1, \cdots, t^*_\eta \in \mathcal{T}$ and $t_1, \cdots, t_d \in \mathcal{T}\setminus \{t^*_1, \cdots, t^*_\eta\}$, we have $f_{t^*_j}(\mathbf{t})=y_0$ for all $j\in [\eta].$ It is then clear that the induced PE $\Psi$ is correct if and only if the underlying DFKHE $\Pi$ is correct. \begin{theorem} \label{pe} The PE $\Psi$ is selectively CPA-secure assuming that the underlying DFKHE $\Pi$ is selectively CPA-secure. \end{theorem} \begin{proof} \label{peproof} Assume that there exists an adversary $\mathcal{A}$ that is able to break the selective security of $\Psi$ with probability $\delta$. We construct a simulator $\mathcal{S}$ that takes advantage of $\mathcal{A}$ and breaks the selective security of $\Pi$ with the same probability.
\begin{description} \item \textbf{Initialize.} $\mathcal{S}$ would like to break the selective security of the $(\lambda, d, \mathcal{T}, \mathcal{Y}, \mathcal{F})$--DFKHE system $\Pi=( \mathsf{DFKHE.KGen}, \mathsf{DFKHE.KHom}, \mathsf{DFKHE.Enc},$ $ \mathsf{DFKHE.Dec} ,$ $ \mathsf{DFKHE.ExtEval}, $ $\mathsf{DFKHE.KDel})$, where $\lambda, d, \mathcal{T}, $ $\mathcal{Y},$ $ \mathcal{F}$ are specified as in and around Equation \eqref{eq10}. \item \textbf{Targeting.} $\mathcal{S}$ calls $\mathcal{A}$ to get the target tags $(\widehat{t}_1, \cdots, \widehat{t}_d)$ in the game for $\Psi$, and sets $\widehat{\mathbf{t}}:=(\widehat{t}_1, \cdots, \widehat{t}_d)$, which plays the role of the target variable in the game for $\Pi$. \item \textbf{Setup.} $\mathcal{S}$ initializes a set of punctured tags $\mathcal{T}^* \leftarrow \emptyset$ and a set of corrupted tags $\mathcal{C}^* \leftarrow \emptyset$, the latter meant to contain all tags punctured by the time of the first corruption query. It then runs $(\mathsf{dfkhe}.pk, $ $\mathsf{dfkhe}.sk) \leftarrow \mathsf{DFKHE.KGen}(1^\lambda, \mathcal{F})$ and gives $\mathsf{dfkhe}.pk$ to $\mathcal{A}$. Note that $ \mathsf{PE.key}(1^\lambda, d)$ $\equiv \mathsf{DFKHE.KGen}(1^\lambda, \mathcal{F})$ by construction. \item \textbf{Query 1.} In this phase, $\mathcal{A}$ adaptively makes puncture queries PQ($k, t^*_k$), where $k$ implicitly counts the number of PQ queries made so far, and corruption queries CQ(). To reply to PQ($k, t^*_k$), $\mathcal{S}$ simply returns the output of $\mathsf{DFKHE.KDel}(\mathsf{dfkhe}.pk, \mathsf{dfkhe}.sk_{y_0,f_{t^*_1},\cdots, f_{t^*_{k-1}}}, (y_0,f_{t^*_{k}}))$, noting that for $k=1$ we have $\mathsf{dfkhe}.sk_{y_0,f_{t^*_1},\cdots, f_{t^*_{k-1}}}:=\mathsf{dfkhe}.sk$ and $\mathsf{DFKHE.KDel}\equiv \mathsf{DFKHE.KHom}$; finally, $\mathcal{S}$ appends $t^*_k$ to $\mathcal{T}^*$. The simulator only cares about the time at which the first CQ() is made. At that time, $\mathcal{S}$ saves the value of the counter $k$, collects $\mathcal{A}$'s puncture queries into a list of functions $\{f_{t^*_1},\cdots, f_{t^*_{k}}\}$, and sets $\mathcal{C}^* \leftarrow \mathcal{T}^* $. We can consider that $\mathcal{A}$ has made a sequence of $k$ queries to the $KG(\mathsf{dfkhe}.sk,\widehat{\mathbf{t}},y,(f_1, \cdots, f_k))$ oracle in the DFKHE security game. Recall that the requirement for a $KG$ query to be accepted is that \textit{not} all $j\in [k]$ satisfy $f_j(\widehat{\mathbf{t}})=y$. This requirement is essentially fulfilled thanks to the condition in the PE security game that there is at least one $t^*_j\in \{\widehat{t}_1, \cdots, \widehat{t}_d\}\cap \mathcal{C}^*$. \item \textbf{Challenge.} $\mathcal{A}$ submits two messages $\mu_0, \mu_1$ (with $\widehat{\mathbf{t}}$). $\mathcal{S}$ in turn chooses $b \xleftarrow{\$} \{0,1\}$ and returns $(\mathsf{dfkhe}.\widehat{ct},\widehat{\mathbf{t}}) \leftarrow \textsf{DFKHE.Enc}(\mathsf{dfkhe}.pk,\mu_b, \widehat{\mathbf{t}})$. \item \textbf{Query 2.} The same as Query 1. \item \textbf{Guess.} $\mathcal{S}$ outputs the same $b'\in \{0,1\}$ as $\mathcal{A}$ has guessed. \end{description} It is clear that the PE adversary $\mathcal{A}$ is actually playing the DFKHE game; however, $\mathcal{A}$ cannot distinguish the DFKHE game from the PE one, as the simulated environment for $\mathcal{A}$ is \textit{perfect}.
This concludes the proof.\qed \end{proof} \section{DFKHE and PE Construction from Lattices} \label{instan} First, in Subsection \ref{gad} below, we review the key-homomorphic mechanism, which is an important ingredient of our lattice-based construction. \subsection{Key-homomorphic Mechanism for Arithmetic Circuits} \label{gad} Let $n, q>0$, $k:=\lceil \log q \rceil$ and $m:=n\cdot k$. We exploit the gadget matrix $\textbf{G}$ and its associated trapdoor $\textbf{T}_{\textbf{G}}$. According to \cite[Section 4]{MP12}, the matrix $\textbf{G}:=\textbf{I}_n\otimes\textbf{g}^T\in \mathbb{Z}_q^{n \times m}$, where $\textbf{g}^T=[1 \; 2\; 4 \; \cdots\; 2^{k-1}]$. The associated trapdoor $\textbf{T}_{\textbf{G}}\in \mathbb{Z}^{m \times m}$ is publicly known and satisfies $\| \widetilde{\textbf{T}_{\textbf{G}}}\| \leq \sqrt{5}$ (see \cite[Theorem 4.1]{MP12}). \noindent \textbf{Key-homomorphic Mechanism.} We recap some basic facts useful for the construction of evaluation algorithms for the family of polynomial-depth, unbounded fan-in arithmetic circuits (see \cite[Section 4]{BGG+14} for details). Let $\textbf{G}\in \mathbb{Z}_q^{n\times m}$ be the gadget matrix given above. For $x\in \mathbb{Z}_q$, $\textbf{B} \in \mathbb{Z}_q^{n\times m}$, $\textbf{s} \in \mathbb{Z}_q^{n}$ and $\delta>0$, define the set $ E_{\textbf{s},\delta}(x, \textbf{B}):=\{(x\textbf{G}+\textbf{B})^T\textbf{s}+\textbf{e} : \| \textbf{e}\|<\delta\}.$ More details can be found in \cite{BGG+14}. \begin{lemma}[{\cite[Section 4]{BGG+14}}]\label{eval} Let $n$, $q=q(n)$, $m=\Theta(n\log q)$ be positive integers, $\mathbf{x}=(x_1, \cdots, x_d) \in \mathbb{Z}_q^d$, $\mathbf{x}^*=(x_1^*, \cdots, x^*_d) \in \mathbb{Z}_q^d$, $\mathbf{B}_i\in \mathbb{Z}_q^{n\times m}$, $\mathbf{c}_i \in E_{\mathbf{s},\delta}(x_i, \mathbf{B}_i)$ for some $\mathbf{s}\in \mathbb{Z}_q^n$ and $\delta>0$, and $ \mathbf{S}_i \in \mathbb{Z}_q^{m\times m}$ for all $i\in [d]$. Also, let $\beta_{\mathcal{F}}=\beta_{\mathcal{F}}(n):\mathbb{Z} \rightarrow \mathbb{Z}$ be a positive integer-valued function, and let $\mathcal{F}=\{f:(\mathbb{Z}_q)^d \rightarrow \mathbb{Z}_q\}$ be a family of functions, each of which can be computed by some circuit in a family $(C_{\lambda})_{\lambda\in \mathbb{N}}$ of depth-$\tau$, polynomial-size arithmetic circuits. Then there exist DPT algorithms $\mathsf{Eval}_\mathsf{pk}$, $\mathsf{Eval}_\mathsf{ct}$, $ \mathsf{Eval}_\mathsf{sim}$ associated with $\beta_{\mathcal{F}}$ and $\mathcal{F}$ such that the following properties hold. \begin{enumerate} \item If $\mathbf{B}_f \leftarrow \mathsf{Eval}_\mathsf{pk}(f\in \mathcal{F}, (\mathbf{B}_i )_{i=1}^d )$, then $\mathbf{B}_f\in \mathbb{Z}_q^{n\times m}$.
\item If $\mathbf{c}_f \leftarrow \mathsf{Eval}_\mathsf{ct}(f\in \mathcal{F}, ((x_i, \mathbf{B}_i,\mathbf{c}_i))_{i=1}^d)$, then $\mathbf{c}_f \in E_{\mathbf{s},\Delta}(f(\mathbf{x}), \mathbf{B}_f)$, in which $\mathbf{B}_f \leftarrow \mathsf{Eval}_\mathsf{pk}(f, (\mathbf{B}_i)_{i=1}^d)$ and $\Delta < \delta \cdot \beta_{\mathcal{F}}.$ \item The output $\mathbf{S}_f \leftarrow \mathsf{Eval}_\mathsf{sim}(f\in \mathcal{F}, ((x_i^*,\mathbf{S}_i))_{i=1}^d, \mathbf{A})$ satisfies the relation $\mathbf{A}\mathbf{S}_f-f(\mathbf{x}^*)\mathbf{G}=\mathbf{B}_f$ and $\|\mathbf{S}_f \|_{sup} <\beta_{\mathcal{F}}$ with overwhelming probability, where $\mathbf{B}_f \leftarrow \mathsf{Eval}_\mathsf{pk}(f, (\mathbf{A}\mathbf{S}_i-x_i^*\mathbf{G})_{i=1}^d) $. In particular, if $\mathbf{S}_1,\cdots, \mathbf{S}_d \xleftarrow{\$} \{-1,1\}^{m\times m}$, then $\|\mathbf{S}_f \|_{sup} <\beta_{\mathcal{F}}$ with all but negligible probability for all $f\in \mathcal{F}$. \end{enumerate} \end{lemma} In general, for a family $\mathcal{F}$ of functions represented by polynomial-size, unbounded fan-in circuits of depth $\tau$, the function $\beta_{\mathcal{F}}$ is given by the following lemma. \begin{lemma}[{\cite[Lemma 5.3]{BGG+14}}]\label{eval2} Let $n$, $q=q(n)$, $m=\Theta(n\log q)$ be positive integers. Let $\mathcal{C}_{\lambda}$ be a family of polynomial-size arithmetic circuits of depth $\tau$, and let $\mathcal{F}=\{f:(\mathbb{Z}_q)^d \rightarrow \mathbb{Z}_q\}$ be the set of functions $f$ that can be computed by some circuit $\mathcal{C} \in \mathcal{C}_{\lambda}$, as stated in Lemma \ref{eval}. Also, suppose that all (but possibly one) of the input values to the multiplication gates are bounded by $p<q$. Then, $\beta_{\mathcal{F}}=(\frac{p^d-1}{p-1}\cdot m)^\tau \cdot 20\sqrt{m}=O((p^{d-1}m)^\tau\sqrt{m})$. \end{lemma} \begin{definition}[FKHE enabling functions] The tuple $(\mathsf{Eval}_\mathsf{pk}$, $\mathsf{Eval}_\mathsf{ct}$, $ \mathsf{Eval}_\mathsf{sim})$ together with the family $\mathcal{F}$ and the function $\beta_{\mathcal{F}}=\beta_{\mathcal{F}}(n)$ in Lemma \ref{eval} is called $\beta_{\mathcal{F}}$-FKHE enabling for the family $\mathcal{F}$. \end{definition} \subsection{LWE-based DFKHE Construction} \label{dfkhe} Our LWE-based DFKHE construction $\Pi$ is adapted from the LWE--based FKHE and the key delegation mechanism, both of which were proposed in \cite{BGG+14}. Roughly speaking, the key delegation mechanism in the lattice setting is realized using the algorithms $\mathsf{ExtBasisLeft}$, $\mathsf{ExtBasisRight}$ and $\mathsf{RandBasis}$ from Lemma \ref{trapdoor}. Formally, the LWE-based DFKHE $\Pi$ consists of the following algorithms: \begin{description} \item \underline{\textbf{Parameters}:} Let $\lambda \in \mathbb{N}$ be a security parameter. Fix $n=n(\lambda)$, $q=q(\lambda)$ and $d=d(\lambda)$ such that $d<q$. Let $\eta \in \mathbb{N}$ be the maximum number of functions that can be delegated, and let $\sigma_1, \cdots, \sigma_{\eta}$ be Gaussian parameters. Also, we choose a constant $\epsilon \in (0,1)$, as mentioned in Lemma \ref{dlwehard}; this constant determines the tradeoff between the security level and the efficiency of the system. Let $\mathcal{F}:=\{ f| f: ( \mathbb{Z}_q)^d \rightarrow \mathbb{Z}_q\}$ be a family of efficiently computable functions over $\mathbb{Z}_q$, each of which can be computed by some circuit in a family $(C_{\lambda})_{\lambda\in \mathbb{N}}$ of depth-$\tau$, polynomial-size arithmetic circuits. Take the algorithms $(\mathsf{Eval}_\mathsf{pk}$, $\mathsf{Eval}_\mathsf{ct}$, $ \mathsf{Eval}_\mathsf{sim})$ together with a function $\beta_{\mathcal{F}}=\beta_{\mathcal{F}}(n)$ to be $\beta_{\mathcal{F}}$--FKHE enabling for $\mathcal{F}$. \item \underline{$\textsf{DFKHE.KGen}(1^\lambda, \mathcal{F} )$:} For the input pair (a security parameter $\lambda \in \mathbb{N}$ and a family $\mathcal{F}$) \footnote{Here, $d$ also appears implicitly as an input.}, do the following: \begin{enumerate} \item Choose $m=\Theta{(n\log q)}$. The plaintext space is $\mathcal{M}:= \{0,1\}^m$ and the variable space is $\mathcal{T}:=\mathbb{Z}_q$. Additionally, let $\chi$ be a $\chi_0$--bounded noise distribution (i.e., its support belongs to $[-\chi_0, \chi_0]$) for which the $(n,2m, q,\chi)$--DLWE is hard. \item Generate $(\textbf{A},\textbf{T}_\textbf{A}) \leftarrow \textsf{TrapGen}(n,m,q)$ and sample $\textbf{U}, \textbf{B}_1, \cdots ,\textbf{B}_d \xleftarrow{\$} \mathbb{Z}_q^{n \times m}$. \item Output the public key $pk=\{\textbf{A},\textbf{B}_1, \cdots, \textbf{B}_d , \textbf{U}\}$ and the initial secret key $sk=\{\textbf{T}_\textbf{A}\}$. \end{enumerate} \item \underline{$\textsf{DFKHE.KHom}(sk, (y,f_1) )$:} For the input pair (the initial secret key $sk$ and a pair $(y, f_1) \in \mathbb{Z}_q \times \mathcal{F}$), do the following: \begin{enumerate} \item $\textbf{B}_{f_1} \leftarrow \textsf{Eval}_\textsf{pk}(f_1, (\textbf{B}_k)_{k=1}^d)$, $\textbf{E}_{y,f_1} \leftarrow \textsf{ExtBasisLeft}([\textbf{A}|y\mathbf{G}+\textbf{B}_{f_1}], \textbf{T}_\textbf{A})$. \item $\textbf{T}_{y,f_1} \leftarrow \textsf{RandBasis}([\textbf{A}|y\mathbf{G}+\textbf{B}_{f_1}],\textbf{E}_{y,f_1}, \sigma_1)$; output the secret key $sk_{y,f_1}=\{\textbf{T}_{y,f_1}\}$. Here, we set $\sigma_1=\omega(\beta_\mathcal{F}\cdot \sqrt{\log (2m)})$ for the security proof to work. \end{enumerate} \item \underline{$\textsf{DFKHE.KDel}(sk_{y,f_1,\cdots, f_{\eta-1}}, (y, f_{\eta}) )$: } For the input pair (the delegated secret key $sk_{y,f_1,\cdots, f_{\eta-1}} $ and a pair $(y, f_\eta) \in \mathbb{Z}_q \times \mathcal{F}$), do the following: \begin{enumerate} \item $\textbf{B}_{f_\eta} \leftarrow \textsf{Eval}_\textsf{pk}(f_\eta, (\textbf{B}_k)_{k=1}^d)$. \item $\textbf{E}_{y,f_1,\cdots, f_{\eta}} \leftarrow \textsf{ExtBasisLeft}([\textbf{A}|y\mathbf{G}+\textbf{B}_{f_1}|\cdots|y\mathbf{G}+\textbf{B}_{f_{\eta-1}}|y\mathbf{G}+\textbf{B}_{f_\eta}], \textbf{T}_{y,f_1,\cdots, f_{\eta-1}})$.
\item $\textbf{T}_{y,f_1,\cdots, f_{\eta}} \leftarrow \textsf{RandBasis}([\textbf{A}|y\mathbf{G}+\textbf{B}_{f_1}|\cdots|y\mathbf{G}+\textbf{B}_{f_{\eta-1}}|y\mathbf{G}+\textbf{B}_{f_\eta}], \textbf{E}_{y,f_1,\cdots, f_{\eta}}, \sigma_\eta)$. \item Output the secret key $sk_{y,f_1,\cdots, f_{\eta}}=\{\textbf{T}_{y,f_1,\cdots, f_{\eta}}\}$.\\ We set $\sigma_\eta=\sigma_1\cdot(\sqrt{m\log m})^{\eta-1}$ and discuss the parameter setting in detail later. \end{enumerate} \item \underline{$\textsf{DFKHE.Enc}(\mu, pk, \mathbf{t})$: } For the input consisting of (a message $\mu=(\mu_1, \cdots, \mu_m)\in \mathcal{M}$, the public key $pk$ and ciphertext tags $\mathbf{t}=(t_1, \cdots, t_d) \in \mathcal{T}^d$), perform the following steps: \begin{enumerate} \item Sample $\textbf{s} \xleftarrow{\$}\mathbb{Z}_q^{n}$, $\mathbf{e}_{\textsf{out}} , \textbf{e}_\textsf{in} \leftarrow \chi^m$, and $\textbf{S}_1, \cdots, \textbf{S}_{d} \xleftarrow{\$} \{-1,1\}^{m \times m}$. \item Compute $\textbf{e}\leftarrow (\textbf{I}_m|\textbf{S}_1|\cdots|\textbf{S}_{d})^T\textbf{e}_\textsf{in}=(\textbf{e}_{ \textsf{in}}^T, \textbf{e}_{1}^T, \cdots, \textbf{e}_{d}^T)^T $. \item Form $\textbf{H}\leftarrow [\textbf{A}|t_1\textbf{G}+\textbf{B}_1|\cdots |t_d \textbf{G}+\textbf{B}_d]$ and compute $\textbf{c}=\textbf{H}^T\textbf{s}+\textbf{e} \in \mathbb{Z}_q^{(d+1)m}$, written as $\textbf{c}=[\textbf{c}_\textsf{in}|\textbf{c}_1|\cdots |\textbf{c}_d]$, where $\textbf{c}_{\textsf{in}}=\textbf{A}^T \textbf{s}+\textbf{e}_{\textsf{in}}$ and $\textbf{c}_{i}=(t_i\textbf{G}+\textbf{B}_i)^T \textbf{s}+\textbf{e}_{i}$ for $i\in [d]$. \item Compute $\textbf{c}_{\textsf{out}} \leftarrow \textbf{U}^T \textbf{s}+\textbf{e}_{\textsf{out}}+\mu \lceil \frac{q}{2} \rceil$. \item Output the ciphertext $(ct_\mathbf{t}= (\textbf{c}_{\textsf{in}}, \textbf{c}_1, \cdots, \textbf{c}_d, \textbf{c}_{\textsf{out}}), \mathbf{t}) $. \end{enumerate} \item \underline{$\textsf{DFKHE.ExtEval}(f_1,\cdots, f_\eta, ct_\mathbf{t})$: } For the input (a ciphertext $ct_\mathbf{t}=(\textbf{c}_{\textsf{in}}, \textbf{c}_1, $ $ \cdots, \textbf{c}_{d}, \textbf{c}_{\textsf{out}}) $ with its associated tags $\textbf{t}=(t_1, \cdots, t_d)$, and a list of functions $f_1,\cdots, f_\eta \in \mathcal{F}$), execute the following steps: \begin{enumerate} \item Evaluate $\textbf{c}_{f_j}\leftarrow \textsf{Eval}_\textsf{ct}(f_j, ((t_k, \textbf{B}_k, \textbf{c}_{k}))_{k=1}^{d})$ for $j\in[\eta]$. \item Output the evaluated ciphertext $\textbf{c}_{f_1,\cdots, f_\eta}:=(\textbf{c}_{f_1},\cdots, \textbf{c}_{f_\eta})$. \end{enumerate} \item \underline{$\textsf{DFKHE.Dec}(ct_\mathbf{t}, sk_{y,f_1,\cdots, f_\eta} )$:} For the input (a ciphertext $ct_\mathbf{t}=(\textbf{c}_{\textsf{in}}, \textbf{c}_1, $ $ \cdots, \textbf{c}_{d}, \textbf{c}_{\textsf{out}}) $ with its associated tags $\textbf{t}=(t_1, \cdots, t_d)$, and a delegated secret key $sk_{y,f_1,\cdots, f_\eta}$), execute the following steps: \begin{enumerate} \item If $\exists j\in [\eta]$ s.t. $f_{j}(\mathbf{t}) \neq y$, then output $\bot$. Otherwise, go to Step 2.
\item Sample $\textbf{R} \leftarrow \textsf{SampleD}([\textbf{A} |y\mathbf{G}+\textbf{B}_{f_{1}}|\cdots |y\mathbf{G}+\textbf{B}_{f_{\eta}}], \textbf{T}_{y,f_1,\cdots, f_{\eta}}, \textbf{U}, \sigma_\eta)$. \item Evaluate $(\textbf{c}_{f_1},\cdots, \textbf{c}_{f_\eta}) \leftarrow \textsf{DFKHE.ExtEval}(f_1,\cdots, f_\eta, ct_\mathbf{t})$. \item Compute $\bar{\mu}:=(\bar{\mu}_1,\cdots, \bar{\mu}_m) \leftarrow \textbf{c}_{\textsf{out}}-\textbf{R}^T(\textbf{c}_\textsf{in}|\textbf{c}_{f_{1}}|\cdots|\textbf{c}_{f_\eta})$. \item For $\ell \in [m]$, if $ |\bar{\mu}_\ell | <q/4$ then output $\mu_\ell=0$; otherwise, output $\mu_\ell=1$. \end{enumerate} \end{description} In the following, we demonstrate the correctness and the security of the LWE-based DFKHE $\Pi$. \begin{theorem}[Correctness of $\Pi$] \label{theo2} The proposed DFKHE $\Pi$ is correct if the condition \begin{equation}\label{key} (\eta+1)^2\cdot \sqrt{m}\cdot \omega( (\sqrt{m\log m})^{\eta})\cdot \beta_{\mathcal{F}}^2+2<\frac{1}{4}(q/\chi_0) \end{equation} holds, assuming that $f_j(\mathbf{t})=y$ for all $j\in [\eta]$. \end{theorem} \begin{proof} We have $ \bar{\mu}= \textbf{c}_{\textsf{out}}-\textbf{R}^T(\textbf{c}_\textsf{in}|\textbf{c}_{f_{1}}|\cdots|\textbf{c}_{f_{\eta}})=\mu \lceil \frac{q}{2} \rceil+ \textbf{e}_{\textsf{out}} -\textbf{R}^T(\textbf{e}_{\textsf{in}}|\textbf{e}_{f_1}|\cdots|\textbf{e}_{f_\eta}),$ where the second equality holds since $f_{j}(\mathbf{t})=y$ for all $j \in [\eta]$ (so that $\textbf{c}_{f_j}=(y\textbf{G}+\textbf{B}_{f_j})^T\textbf{s}+\textbf{e}_{f_j}$) and since $[\textbf{A} |y\mathbf{G}+\textbf{B}_{f_{1}}|\cdots |y\mathbf{G}+\textbf{B}_{f_{\eta}}]\cdot\textbf{R}=\textbf{U}~(\text{mod } q)$ by the choice of $\textbf{R}$. Next, we evaluate the norm of $\textbf{e}_{\textsf{out}} -\textbf{R}^T(\textbf{e}_{\textsf{in}}|\textbf{e}_{f_1}|\cdots|\textbf{e}_{f_\eta})$. Since $\textbf{c}_{f_j} \in E_{\textbf{s}, \Delta}(y, \textbf{B}_{f_j})$ for all $j \in [\eta]$, where $\Delta<\chi_0\cdot \beta_{\mathcal{F}}$, we have $\|(\textbf{e}_{\textsf{in}}|\textbf{e}_{f_1}|\cdots|\textbf{e}_{f_\eta})\| \leq \eta\cdot \Delta+\chi_0\leq (\eta\cdot \beta_{\mathcal{F}}+1)\chi_0.$ Then \begin{equation*} \begin{split} \|\mathbf{e}_{\textsf{out}} -\textbf{R}^T(\textbf{e}_{\textsf{in}}|\textbf{e}_{f_1}|\cdots|\textbf{e}_{f_\eta})\|_\infty &\leq \|\mathbf{e}_{\textsf{out}}\|_\infty+ \| \textbf{R}^T\|_{sup} \cdot\|(\textbf{e}_{\textsf{in}}|\textbf{e}_{f_1}|\cdots|\textbf{e}_{f_\eta})\|\\ &\leq ((\eta+1)^2\cdot \sqrt{m}\cdot \omega( (\sqrt{m\log m})^{\eta})\cdot \beta_{\mathcal{F}}^2+2)\cdot \chi_0,\\ \end{split} \end{equation*} where $\| \textbf{R}^T\|_{sup} \leq (\eta+1)m\sigma_\eta$ by Item 4 of Lemma \ref{trapdoor} and $\sigma_\eta=\sigma_1\cdot(\sqrt{m\log m})^{\eta-1}=\omega(\beta_{\mathcal{F}}\cdot \sqrt{\log m})\cdot(\sqrt{m\log m})^{\eta-1}$. Choosing parameters such that $((\eta+1)^2\cdot \sqrt{m}\cdot \omega( (\sqrt{m\log m})^{\eta})\cdot \beta_{\mathcal{F}}^2+2)\cdot \chi_0<q/4$, which is exactly Condition \eqref{key}, guarantees successful decryption. \qed \end{proof} \begin{theorem}[IND-sVAR-CPA of $\Pi$] \label{varcpa} Assuming the hardness of $(n,2m,q,\chi)$--$\mathsf{DLWE}$, the proposed DFKHE $\Pi$ is IND-sVAR-CPA.
\end{theorem} \begin{proof} The proof consists of a sequence of four games, in which the first, Game 0, is the original $\mathsf{IND}$-$\mathsf{sVAR}$-$\mathsf{CPA}^{\mathsf{sel},\mathcal{A}}_{\Pi}$ game. The last game chooses the challenge ciphertext uniformly at random; hence, the advantage of the adversary in the last game is zero. Games 2 and 3 are indistinguishable thanks to a reduction from the DLWE hardness. \begin{description} \item \textbf{Game 0.} This is the original $\mathsf{IND}$-$\mathsf{sVAR}$-$\mathsf{CPA}^{\mathsf{sel},\mathcal{A}}_{\Pi}$ game played by an adversary $\mathcal{A}$ and a challenger. At the initial phase, $\mathcal{A}$ announces a target variable $\widehat{\mathbf{t}}=(\widehat{t_1}, \cdots, \widehat{t_d})$. Note that the challenger has to reply to delegated key queries DKQ$(y, f_1,\cdots, f_k)$; however, if $(y,(f_1,\cdots, f_k)) \in \mathbb{Z}_q\times \mathcal{F}^k$ is such that $f_1(\widehat{\mathbf{t}})=\cdots=f_k(\widehat{\mathbf{t}})=y$, then the query will be aborted. At the setup phase, the challenger generates $pk=\{\textbf{A},\textbf{B}_1, \cdots, \textbf{B}_d , \textbf{U}\}$ and the initial secret key $sk=\{\textbf{T}_\textbf{A}\}$, where $\textbf{B}_1, \cdots, \textbf{B}_d \xleftarrow{\$} \mathbb{Z}_q^{n \times m}$, $\textbf{U} \xleftarrow{\$} \mathbb{Z}_q^{n\times m}$, $(\textbf{A}, \textbf{T}_\textbf{A})$ $\leftarrow$ $\textsf{TrapGen}(n,m,q)$. The challenger then sends $pk$ to the adversary, while it keeps $sk$ secret. Also, in order to produce the challenge ciphertext $\widehat{ct}$ in the challenge phase, $\widehat{\textbf{S}}_1, \cdots, \widehat{\textbf{S}}_{d} \in \{-1,1\}^{m \times m}$ are generated (Step 1 of \textsf{DFKHE.Enc}). \item \textbf{Game 1.} This game slightly changes the way $\textbf{B}_1, \cdots, \textbf{B}_d$ are generated in the setup phase. Instead of being sampled in the challenge phase, $\widehat{\textbf{S}}_1, \cdots, \widehat{\textbf{S}}_{d} \in \{-1,1\}^{m \times m}$ are sampled in the setup phase. This allows the challenger to compute $\textbf{B}_i:=\textbf{A}\widehat{\textbf{S}}_i-\widehat{t_i}\textbf{G}$ for $i\in [d]$. The rest of the game is the same as Game 0. \\ Game 1 and Game 0 are indistinguishable thanks to the leftover hash lemma (i.e., Lemma \ref{lhl}). \item \textbf{Game 2.} In this game, the matrix $\textbf{A}$ is not generated by \textsf{TrapGen} but chosen uniformly at random from $\mathbb{Z}_q^{n \times m}$. The matrices $\textbf{B}_1, \cdots, \textbf{B}_d$ are constructed as in Game 1. The secret key is $sk_0=\{\textbf{T}_\textbf{G}\}$ instead. The challenger replies to a delegated key query DKQ$(y, f_1,\cdots, f_k)$ as follows: \begin{enumerate} \item If $f_1(\widehat{\mathbf{t}})=\cdots=f_k(\widehat{\mathbf{t}})=y$, the challenger aborts and restarts the game until there exists at least one $j$ with $f_j(\widehat{\mathbf{t}})\neq y$. Without loss of generality, we can assume that $f_k(\widehat{\mathbf{t}})\neq y$. \item For all $i\in [k]$, compute $\widehat{\textbf{S}}_{f_i}\leftarrow \textsf{Eval}_\textsf{sim}(f_{i}, ((\widehat{t_j},\widehat{\textbf{S}}_j))_{j=1}^d , \textbf{A})$, and let $\textbf{B}_{f_{i}}=\textbf{A}\widehat{\textbf{S}}_{f_i}-f_{i}(\widehat{\mathbf{t}})\textbf{G}$. Remark that $\textbf{B}_{f_{i}}=\textsf{Eval}_\textsf{pk}(f_{i}, (\textbf{B}_j)_{j=1}^d)$. For choosing Gaussian parameters, note that $\|\widehat{\textbf{S}}_{f_i}\|_{sup} \leq \beta_{\mathcal{F}}$ due to Item 3 of Lemma \ref{eval}.
\item $\textbf{E}_{y,f_1, \cdots, f_k} \leftarrow \textsf{ExtBasisRight}([\textbf{A}|\textbf{A}\widehat{\textbf{S}}_{f_1}+(y-f_1(\widehat{\mathbf{t}}))\textbf{G}|\cdots |\textbf{A}\widehat{\textbf{S}}_{f_k}+(y-f_k(\widehat{\mathbf{t}}))\textbf{G}], \textbf{T}_{\textbf{G}}).$ Note that $\| \widetilde{\textbf{E}}_{y,f_1, \cdots, f_k}\| \leq \| \widetilde{\mathbf{T}_\mathbf{G}}\|(1+\|\widehat{\mathbf{S}}_{f_k}\|_{sup})=\sqrt{5}(1+\beta_{\mathcal{F}})$ for all $k \in [\eta]$ by Item 2 of Lemma \ref{trapdoor}. \item $\textbf{T}_{y,f_1, \cdots, f_k} \leftarrow \textsf{RandBasis}([\textbf{A}|\textbf{A}\widehat{\textbf{S}}_{f_1}+(y-f_1(\widehat{\mathbf{t}}))\textbf{G}|\cdots |\textbf{A}\widehat{\textbf{S}}_{f_k}+(y-f_k(\widehat{\mathbf{t}}))\textbf{G}],$ $ \textbf{E}_{y,f_1, \cdots, f_k} , \sigma_k).$ \item Return $sk_{y,f_1, \cdots, f_k}:=\{\textbf{T}_{y, f_1, \cdots, f_k} \}$. \end{enumerate} Game 2 and Game 1 are indistinguishable. The reason is that the distributions of $\textbf{A}$ in both games are statistically close, and the challenger's response to the adversary's queries is in both games an output of \textsf{RandBasis}. \item \textbf{Game 3.} This game is similar to Game 2, except that the challenge ciphertext $\widehat{ct}$ is chosen uniformly at random. Therefore, the advantage of the adversary $\mathcal{A}$ in Game 3 is zero.\\ We now show that Games 2 and 3 are indistinguishable using a reduction from DLWE. \item \textbf{Reduction from DLWE.} Suppose that $\mathcal{A}$ can distinguish Game 2 from Game 3 with non-negligible advantage. Using $\mathcal{A}$, we construct a DLWE solver $\mathcal{B}$. The reduction is as follows: \begin{itemize} \item $(n,2m,q,\chi)$--\textbf{DLWE instance.} $\mathcal{B}$ is given a matrix $\textbf{F} \xleftarrow{\$}\mathbb{Z}_q^{n \times 2m}$ and a vector $\mathbf{c}\in \mathbb{Z}_q^{2m}$, where either (i) $\textbf{c}$ is random, or (ii) $\textbf{c}$ is in the LWE form $\textbf{c}=\textbf{F}^T \textbf{s}+\textbf{e}$ for some random vector $\textbf{s}\in \mathbb{Z}_q^{n}$ and $\textbf{e} \leftarrow \chi^{2m}$. The goal of $\mathcal{B}$ is to decide whether $\textbf{c}$ is random or generated from LWE. \item \textbf{Initial.} $\mathcal{B}$ parses $[\mathbf{c}_{\textsf{in}}^T|\mathbf{c}_{\textsf{out}}^T]^T \leftarrow \mathbf{c}$, where $\mathbf{c}_{\textsf{in}}, \mathbf{c}_{\textsf{out}}\in \mathbb{Z}_q^{m}$; $[\mathbf{e}_{\textsf{in}}^T|\mathbf{e}_{\textsf{out}}^T]^T \leftarrow \mathbf{e}$, where $\mathbf{e}_{\textsf{in}}, \mathbf{e}_{\textsf{out}}\leftarrow \chi^{m}$; and $[\mathbf{A} |\mathbf{U}] \leftarrow \mathbf{F}$, where $\mathbf{A}, \textbf{U}\in \mathbb{Z}_q^{n \times m}$. That is, in case (ii), \begin{equation}\label{key89} \textbf{c}_{\textsf{in}}=\textbf{A}^T \textbf{s}+\textbf{e}_{\textsf{in}}, \quad \textbf{c}_{\textsf{out}}=\textbf{U}^T \textbf{s}+\textbf{e}_{\textsf{out}}. \end{equation} Now $\mathcal{B}$ calls $\mathcal{A}$ to get the target variable $\widehat{\mathbf{t}}=(\widehat{t_1}, \cdots, \widehat{t_d})$ to be challenged. \item \textbf{Setup.} $\mathcal{B}$ generates the keys as in Game 2. That is, $\widehat{\textbf{S}}_1, \cdots, \widehat{\textbf{S}}_{d} \xleftarrow{\$} \{-1,1\}^{m \times m}$ and $\textbf{B}_i:=\textbf{A}\widehat{\textbf{S}}_i-\widehat{t_i}\textbf{G}$ for $i\in [d]$. Finally, $\mathcal{B}$ sends $\mathcal{A}$ the public key $pk=(\textbf{A}, \textbf{B}_1, \cdots, \textbf{B}_d, \textbf{U})$. Also, $\mathcal{B}$ keeps $sk=\{\textbf{T}_\textbf{G}\}$ as the initial secret key.
\item \textbf{Query.} Once $\mathcal{A}$ makes a delegated key query, $\mathcal{B}$ replies as in Game 2. \item \textbf{Challenge.} Once $\mathcal{A}$ submits two messages $\mu_0$ and $\mu_1$, $\mathcal{B}$ chooses uniformly at random $b \xleftarrow{\$} \{0,1\}$, then computes $\widehat{\textbf{c}}\leftarrow [\textbf{I}_m|\widehat{\textbf{S}}_1|\cdots|\widehat{\textbf{S}}_{d}]^T\textbf{c}_\textsf{in} \in \mathbb{Z}_q^{(d+1)m}$ and $\widehat{\textbf{c}}_{\textsf{out}} \leftarrow \textbf{c}_{\textsf{out}}+\mu_b \lceil \frac{q}{2} \rceil \in \mathbb{Z}_q^m$. \begin{itemize} \item Suppose $\textbf{c}$ is generated by LWE, i.e., $\textbf{c}_{\textsf{in}}$, $ \textbf{c}_{\textsf{out}}$ satisfy Equation \eqref{key89}. In the \textsf{DFKHE.Enc} algorithm, $\textbf{H}=[\textbf{A}|\widehat{t_1}\textbf{G}+\textbf{B}_1|\cdots |\widehat{t_d} \textbf{G}+\textbf{B}_d]=[\textbf{A}|\textbf{A}\widehat{\textbf{S}}_1|\cdots|\textbf{A}\widehat{\textbf{S}}_d]$. Then $$ \widehat{\textbf{c}}= [\textbf{I}_m|\widehat{\textbf{S}}_1|\cdots|\widehat{\textbf{S}}_{d}]^T(\textbf{A}^T\textbf{s}+\textbf{e}_\textsf{in})=\textbf{H}^T\textbf{s}+\widehat{\textbf{e}}, $$ where $\widehat{\textbf{e}}=[\textbf{I}_m|\widehat{\textbf{S}}_1|\cdots|\widehat{\textbf{S}}_{d}]^T\textbf{e}_\textsf{in}$. It is easy to see that $\widehat{\textbf{c}}$ is computed as in Game 2. Additionally, $\widehat{\textbf{c}}_{\textsf{out}} =\textbf{U}^T \textbf{s}+\textbf{e}_{\textsf{out}}+\mu_b \lceil \frac{q}{2} \rceil \in \mathbb{Z}_q^m$. Then $\widehat{ct}:=(\widehat{\textbf{c}}, \widehat{\textbf{c}}_{\textsf{out}} ) \in \mathbb{Z}_q^{(d+2)m}$ is a valid ciphertext of $\mu_b$. \item If $\textbf{c}_{\textsf{in}}$, $ \textbf{c}_{\textsf{out}}$ are random, then $\widehat{\textbf{c}}$ is random (following a standard leftover hash lemma argument). And since $\widehat{\textbf{c}}_{\textsf{out}}$ is also random, $\widehat{ct}:=(\widehat{\textbf{c}}, \widehat{\textbf{c}}_{\textsf{out}})$ is random in $\mathbb{Z}_q^{(d+2)m}$, which corresponds to Game 3. \end{itemize} \item \textbf{Guess.} Eventually, once $\mathcal{A}$ outputs its guess of whether it is interacting with Game 2 or Game 3, $\mathcal{B}$ outputs its decision for the DLWE problem. \end{itemize} We have shown that $\mathcal{B}$ can solve the $(n,2m,q,\chi)$--$\textsf{DLWE}$ instance. \qed \end{description} \end{proof} \noindent \textbf{Setting Parameters.} In order to choose parameters, we should take the following into consideration: \begin{itemize} \item For the hardness of DLWE, by Lemma \ref{dlwehard}, we choose $\epsilon, n, q, \chi$, where $\chi$ is a $\chi_0$-bounded distribution, such that $q/\chi_0\geq 2^{n^\epsilon}$. We also note that the hardness of DLWE via the traditional worst-case reduction (e.g., Lemma \ref{dlwehard}) does not help us much in proposing concrete parameters for lattice-based cryptosystems. Instead, a more conservative methodology commonly used in the literature is the so-called ``core-SVP hardness''; see \cite[Subsection 5.2.1]{ABD+20} for a detailed reference. \item Setting Gaussian parameters: \begin{enumerate} \item \textit{First approach:} Leaving the security proof aside, for the trapdoor algorithms to work, we can set $\sigma_1=\| \widetilde{\textbf{T}_{\textbf{A}}}\|\cdot\omega(\sqrt{\log (2m)})$, with $\| \widetilde{\textbf{T}_{\textbf{A}}}\|=O(\sqrt{n\log q})$ by Item 1 of Lemma \ref{trapdoor}. Note that, in \textsf{DFKHE.KHom}, we have $\| \widetilde{\textbf{T}}_{y,f_1}\| <\sigma_1 \cdot \sqrt{2m}$ by Item 5 of Lemma \ref{trapdoor}.
Then, $\sigma_2=\| \widetilde{\textbf{T}}_{y,f_1}\| \cdot \omega(\sqrt{\log (3m)})=\sigma_1 \cdot \omega(\sqrt{m\log m}).$ Similarly, we can set $\sigma_k=\sigma_1\cdot(\sqrt{m\log m})^{k-1}$ for all $k \in [\eta]$. \item \textit{Second approach:} For the security proof to work, we have to be careful in choosing Gaussian parameters $\sigma_1, \cdots, \sigma_{\eta}$. Indeed, we have to choose $\sigma_1=\omega(\beta_\mathcal{F}\cdot \sqrt{\log m})$. In fact, we remarked in Step 2 of \textbf{Game 2} of the proof for Theorem \ref{varcpa} that $\|\widehat{\textbf{S}}_{f_i}\|_{sup} \leq \beta_{\mathcal{F}}$ for all $i$. And for a generic $k$ we still obtain $\|\widetilde{ \textbf{E}}_{y,f_1, \cdots, f_k}\| \leq \| \widetilde{\mathbf{T}_\mathbf{G}}\|(1+\|\mathbf{S}_{f_k}\|_{sup})=\sqrt{5}(1+\beta_{\mathcal{F}})$ as we just exploit $\textbf{T}_\textbf{G}$ as the secret key. Hence, $\sigma_k=\| \widetilde{\textbf{E}}_{y,f_1, \cdots, f_k}\|\cdot\omega(\sqrt{\log ((k+1)m)})=\omega(\beta_\mathcal{F}\cdot \sqrt{\log m})$ for all $k \in [\eta]$. \item Compared with $\sigma_k$ of the first approach, $\sigma_k$'s of the second approach are essentially smaller. Therefore, in order for both trapdoor algorithms and the security to work, we should set $\sigma_1=\omega(\beta_\mathcal{F}\cdot \sqrt{\log m})$ and choose $\beta_{\mathcal{F}} >\| \widetilde{\textbf{T}_{\textbf{A}}}\|=\sqrt{n\log m}$ and then follow the first approach in setting Gaussian parameters. Recall that, $\beta_{\mathcal{F}}=(\frac{p^d-1}{p-1}\cdot m)^\tau \cdot 20\sqrt{m}=O((p^{d-1}m)^\tau\sqrt{m})$ by Lemma \ref{eval2}. \end{enumerate} \item For the correctness: We need Condition \eqref{key} to hold, i.e., $ (\eta+1)^2\cdot \sqrt{m}\cdot \omega( (\sqrt{m\log m})^{\eta})\cdot \beta_{\mathcal{F}}^2+2<\frac{1}{4}(q/\chi_0)$. \end{itemize} \iffalse Let consider $\sigma_1$. We will call $\textsf{ExtBasisRight}([\textbf{A}|\textbf{A}\widehat{\textbf{S}}_{f_1}+(y-f_1(\widehat{\mathbf{t}}))\textbf{G}], \textbf{T}_\textbf{G})$ to output $\textbf{E}_{y,f_1}$, such that $\| \widetilde{\textbf{E}_{y,f_1}}\| \leq \| \widetilde{\mathbf{T}_\mathbf{G}}\|(1+\|\mathbf{S}_{f_1}\|_{sup})=\sqrt{5}(1+\beta_{\mathcal{F}})$. After that $\textbf{E}_{y,f_1}$ will be the input of \textsf{RandBasis} with Gaussian parameter $\sigma_1$. Then by Item 5 of Lemma \ref{trapdoor}, $\sigma_1=\| \widetilde{\textbf{E}_{y,f_1}}\|\cdot\omega(\sqrt{\log m})=\omega(\beta_\mathcal{F}\cdot \sqrt{\log m})$. \textbf{Concrete parameters.} The hardness of DLWE via the worst-case reduction does not help us much in proposing concrete parameters for lattice-based cryptosystems. Instead, a more conservative methodology that has been usually used in the literature is the so-called ``core-SVP hardness". We follow the ProdoKEM paper \cite[Subsection 5.2.1]{ABD+20} which is a second-round NIST PQC candidate, in analysing to choose parameters for our LWE-based PE scheme. Let $m\geq 1$, a vector $\mathbf{c}\in \mathbb{R}^m$ and a positive parameter $s$, for $\mathbf{x}\in \mathbb{R}^m$ define $\rho_{s,\mathbf{c}}(\mathbf{x})= \exp({{-\pi \Vert \mathbf{x}-\mathbf{c}\Vert^2 }/{ s^2}})$. The continuous Gaussian distribution on $\mathbb{R}^m$ with mean $\textbf{c}$ and with width parameter $s$ is proportional to $\rho_{s,\mathbf{c}}(\mathbf{x})$. Let $\mathbb{T}=\mathbb{R}/\mathbb{Z}$ be the additive group of real numbers modulo 1. Given $\alpha>0$ and $m=1$, we denote by $\Psi_{\alpha}$ the continuous Gaussian distribution on $\mathbb{T}$ of mean $0$ and width parameter $s:=\alpha$. 
\noindent \textbf{Sizes of Keys and Ciphertext.} Recall that, throughout this work, we set $m=\Theta(n\log q)$. The public key corresponding to $d$ variables consists of $d+1$ matrices of dimension $n\times m$ over $\mathbb{Z}_q$, so the public key size is $O((d+1)\cdot n^2 \log^2 q)$. The initial secret key is the short trapdoor matrix $\textbf{T}_\textbf{A}$ of dimension $m\times m$ generated by \textsf{TrapGen} such that $\|\textbf{T}_\textbf{A}\| \leq O(\sqrt{n \log q})$; its size is therefore $O(n^2 \log^2 q\cdot \log( n\log q))$. The secret key after delegating $\eta$ functions is the trapdoor matrix $\textbf{T}_{y,f_1, \cdots, f_\eta}$ of dimension $(\eta+1)m \times (\eta+1)m$ with $\| \textbf{T}_{y,f_1, \cdots, f_\eta} \| <\sigma_\eta\cdot \sqrt{(\eta+1)m}=\beta_{\mathcal{F}}\cdot \omega((\sqrt{m\log m})^{\eta})$ with overwhelming probability by Lemma \ref{thm:Gauss}. Therefore its size is $(\eta+1) \cdot n \log q \cdot( O(\log(\beta_{\mathcal{F}})+\eta\cdot \log (n \log q)))$. The ciphertext is a tuple of $(d+2)$ vectors in $\mathbb{Z}^m_q$, hence its size is $O((d+2)\cdot n\log^2 q)$.
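To make these asymptotic counts concrete, the following Python sketch tallies the number of $\mathbb{Z}_q$ elements in each object. It is only an illustration of the formulas above; the instantiation $m = n\lceil \log_2 q\rceil$ and the toy inputs are our own assumptions, not recommended parameters:

\begin{verbatim}
import math

def sizes_in_Zq_elements(n, q, d, eta):
    # m = Theta(n log q); here we simply take m = n * ceil(log2 q).
    m = n * math.ceil(math.log2(q))
    pk = (d + 2) * n * m           # A, B_1, ..., B_d and U: n x m matrices
    ct = (d + 2) * m               # d+2 vectors in Z_q^m
    sk_eta = ((eta + 1) * m) ** 2  # trapdoor of dim. (eta+1)m x (eta+1)m
    return pk, ct, sk_eta

# Toy inputs only -- not a concrete security claim.
print(sizes_in_Zq_elements(n=640, q=2**15, d=4, eta=2))
\end{verbatim}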
\subsection{LWE-based PE Construction from DFKHE} \label{cons} We define the family of equality functions $\mathcal{F}:=\{ f_{t^*}: \mathbb{Z}_q^d \rightarrow \mathbb{Z}_q| t^* \in \mathbb{Z}_q\}$, where $f_{t^*}(\mathbf{t}):=eq_{t^*}(t_1)+\cdots+ eq_{t^*}(t_d)$, $\mathbf{t}=(t_1, \cdots, t_d)$, and $eq_{t^*}: \mathbb{Z}_q\rightarrow \mathbb{Z}_q$ satisfies, for all $t\in \mathbb{Z}_q$, $eq_{t^*}(t)=1 \text{ (mod } q)$ if $t=t^*$, and $eq_{t^*}(t)=0 \text{ (mod } q)$ otherwise. Provided that $d<q$, we then have $f_{t^*}(\textbf{t})=0 \text{ (mod } q)$ iff $eq_{t^*}(t_i)=0 \text{ (mod } q)$ for all $i\in [d]$. By applying the generic framework in Section \ref{generic} to the DFKHE demonstrated in Subsection \ref{dfkhe} and modifying the resulting PE, we come up with the LWE-based \textsf{PE} construction $\Psi=\{\textsf{PE.key},$ $ \textsf{PE.enc}, $ $\textsf{PE.pun}, \textsf{PE.dec}\}$ presented below: \begin{description} \item \underline{$\textsf{PE.key}(1^\lambda )$:} For the input security parameter $\lambda$, do the following: \begin{enumerate} \item Choose $n=n(\lambda)$, a prime $q=q(\lambda)$, and the maximum number of tags $d=d(\lambda)$ per ciphertext such that $d<q$. \item Choose $m=\Theta{(n\log q)}$. The plaintext space is $\mathcal{M}:= \{0,1\}^m$ and the tag space is $\mathcal{T}:=\mathbb{Z}_q$. Additionally, let $\chi$ be a $\chi_0$--bounded noise distribution (i.e., its support belongs to $[-\chi_0, \chi_0]$) for which the $(n,2m, q,\chi)$--DLWE is hard. Set $\sigma=\omega(\beta_\mathcal{F}\cdot \sqrt{\log m}).$ \item Sample $(\textbf{A},\textbf{T}_\textbf{A}) \leftarrow \textsf{TrapGen}(n,m,q)$, $\textbf{U},\textbf{B}_1, \cdots ,\textbf{B}_d \xleftarrow{\$} \mathbb{Z}_q^{n \times m}$. \item Output $pk=\{\textbf{A},\textbf{B}_1, \cdots, \textbf{B}_d , \textbf{U}\}$ and $sk_0=\{\textbf{T}_\textbf{A}\}$. \end{enumerate} \item \underline{$\textsf{PE.enc}(\mu, pk, \{t_1, \cdots, t_d\})$:} For the input consisting of a message $\mu$, the public key $pk$ and ciphertext tags $(t_1, \cdots, t_d) \in \mathcal{T}^d$, perform the following steps: \begin{enumerate} \item Sample $\textbf{s} \xleftarrow{\$}\mathbb{Z}_q^{n}$, $\textbf{e}_\textsf{out}, \textbf{e}_\textsf{in} \leftarrow \chi^{m}$, $\textbf{S}_1, \cdots, \textbf{S}_{d} \xleftarrow{\$} \{-1,1\}^{m \times m}$. \item Compute $\textbf{e}\leftarrow (\textbf{I}_m|\textbf{S}_1|\cdots|\textbf{S}_{d})^T\textbf{e}_\textsf{in}=(\textbf{e}_{ \textsf{in}}^T, \textbf{e}_{1}^T, \cdots, \textbf{e}_{d}^T)^T $. \item Form $\textbf{H}\leftarrow [\textbf{A}|t_1\textbf{G}+\textbf{B}_1|\cdots |t_d \textbf{G}+\textbf{B}_d]$ and compute $\textbf{c}=\textbf{H}^T\textbf{s}+\textbf{e} \in \mathbb{Z}_q^{(d+1)m}$,\\ $\textbf{c}=[\textbf{c}_\textsf{in}|\textbf{c}_1|\cdots |\textbf{c}_d]$, where $\textbf{c}_{\textsf{in}}=\textbf{A}^T \textbf{s}+\textbf{e}_{\textsf{in}}$ and $\textbf{c}_{i}=(t_i\textbf{G}+\textbf{B}_i)^T \textbf{s}+\textbf{e}_{i}$ for $i\in [d]$. \item Compute $\textbf{c}_{\textsf{out}} \leftarrow \textbf{U}^T \textbf{s}+\textbf{e}_{\textsf{out}}+\mu \lceil \frac{q}{2} \rceil$ and output $(ct= (\textbf{c}_{\textsf{in}}, \textbf{c}_1, \cdots, \textbf{c}_d, \textbf{c}_{\textsf{out}}), (t_1, $ $ \cdots, t_d)) $. \end{enumerate}
\item \underline{$\textsf{PE.pun}(sk_{\eta-1},t^*_\eta)$: } For the input (a puncture key $sk_{\eta-1}$ and a punctured tag $t^*_\eta \in \mathcal{T}$), do: \begin{enumerate} \item Evaluate $\textbf{B}_{eq_\eta} \leftarrow \textsf{Eval}_\textsf{pk}(f_{t^*_{\eta}}, (\textbf{B}_k)_{k=1}^d)$. \item Compute $\textbf{E}_{eq_{\eta}} \leftarrow \textsf{ExtBasisLeft}([\textbf{A}|\textbf{B}_{eq_{1}}|\cdots |\textbf{B}_{eq_{{\eta-1}}}|\textbf{B}_{eq_{{\eta}}}], \textbf{T}_{eq_{{\eta-1}}})$. \item $\textbf{T}_{eq_{\eta}} \leftarrow \textsf{RandBasis}([\textbf{A}|\textbf{B}_{eq_{1}}|\cdots |\textbf{B}_{eq_{{\eta-1}}}|\textbf{B}_{eq_{{\eta}}}], \textbf{E}_{eq_{\eta}}, \sigma_\eta)$. \item Output $sk_{\eta}:=(\textbf{T}_{eq_{\eta}},(t^*_1,\cdots, t^*_{\eta}), (\textbf{B}_{eq_{1}}, \cdots, \textbf{B}_{eq_{\eta}}))$. \end{enumerate} \item \underline{$\textsf{PE.dec}(ct, \textbf{t}, (sk_\eta, \{t^*_1, \cdots, t^*_{\eta}\}))$: } For the input (a ciphertext $ct=(\textbf{c}_{\textsf{in}}, \textbf{c}_1, \cdots, \textbf{c}_{d},$ $ \textbf{c}_{\textsf{out}}) $, the associated tags $\textbf{t}=(t_1, \cdots, t_d)$, a puncture key $sk_{\eta}$ and the associated punctured tags $\{t^*_1, \cdots, t^*_{\eta}\} \subset \mathcal{T}$), execute the following steps: \begin{enumerate} \item If there exists $j\in [\eta]$ such that $f_{t^*_j}(\textbf{t}) \neq 0$, then output $\bot$. Otherwise, go to Step 2. \item Parse $sk_{\eta}:=(\textbf{T}_{eq_{\eta}},(t^*_1,\cdots, t^*_{\eta}), (\textbf{B}_{eq_{1}}, \cdots, \textbf{B}_{eq_{\eta}}))$. \item Sample $\textbf{R} \leftarrow \textsf{SampleD}([\textbf{A} |\textbf{B}_{eq_{1}}|\cdots |\textbf{B}_{eq_{{\eta}}}], \textbf{T}_{eq_{{\eta}}}, \textbf{U}, \sigma_\eta)$. \item Evaluate $\textbf{c}_{eq_j}\leftarrow \textsf{Eval}_\textsf{ct}(f_{t^*_j}, ((t_k, \textbf{B}_k, \textbf{c}_{k}))_{k=1}^{d})$, for $j\in [\eta]$. \item Compute $\bar{\mu}=(\bar{\mu}_1, \cdots, \bar{\mu}_m) \leftarrow \textbf{c}_{\textsf{out}}-\textbf{R}^T(\textbf{c}_\textsf{in}|\textbf{c}_{eq_{1}}|\cdots|\textbf{c}_{eq_\eta})$. \item For $\ell \in [m]$, if $ |\bar{\mu}_\ell | <q/4$ then output $\mu_\ell=0$; otherwise, output $\mu_\ell=1$. \end{enumerate} \end{description} We remark that all the analysis done for the LWE-based DFKHE in Subsection \ref{dfkhe} applies verbatim to our LWE-based PE; we therefore do not repeat it in this section. For completeness, we only state the two main theorems below. \begin{theorem}[Correctness of $\Psi$] The proposed $\mathsf{PE}$ $\Psi$ is correct if $(\eta+1)^2\cdot m^{1+\frac{\eta}{2}}\cdot \omega( (\sqrt{\log m})^{\eta+1})\cdot \beta_{\mathcal{F}}^2+2<\frac{1}{4}(q/\chi_0),$ assuming that $t^*_j \neq t_k$ for all $(j,k)\in [\eta]\times [d]$. \end{theorem} \begin{theorem}[IND-sPUN-CPA] The proposed PE scheme $\Psi$ is IND-sPUN-CPA secure, thanks to the IND-sVAR-CPA security of the underlying DFKHE $\Pi$. \end{theorem}
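As a sanity check of the puncturing logic (Step 1 of \textsf{PE.dec}), the following toy Python sketch implements the equality functions $f_{t^*}$ over $\mathbb{Z}_q$ and the resulting accept/reject rule. It mimics only the combinatorics of the scheme, not the lattice-based evaluation:

\begin{verbatim}
def f(t_star, tags, q):
    # f_{t*}(t) = eq_{t*}(t_1) + ... + eq_{t*}(t_d) mod q.
    # Since d < q, the sum cannot wrap around to 0 mod q.
    return sum(1 for t in tags if t == t_star) % q

def rejected(ciphertext_tags, punctured_tags, q):
    # Decryption outputs "bot" iff some punctured tag t*_j
    # appears among the ciphertext tags, i.e. f_{t*_j}(t) != 0.
    return any(f(ts, ciphertext_tags, q) != 0 for ts in punctured_tags)

q = 2**15
assert rejected([3, 7, 42], punctured_tags=[42], q=q)      # tag 42 punctured
assert not rejected([3, 7, 42], punctured_tags=[99], q=q)  # still decryptable
\end{verbatim}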
\section{Discussion on Unbounded Number of Ciphertext Tags} \label{unbounded} The idea of \cite{BV16} might help us to extend the LWE-based DFKHE construction from Subsection \ref{dfkhe} (resp., the PE from Subsection \ref{cons}) to a variant that supports an arbitrary number of variables (resp., ciphertext tags). We call this variant \textit{\textsf{unDFKHE}}. Although the original idea of \cite{BV16} is applied to ABE with attributes belonging to $\{0,1\}$ using the XOR operation, we believe that it might be adapted to work well with our DFKHE with variables and punctures over $\mathbb{Z}_q$ using the addition modulo $q$ (denoted $\oplus_q$). In \textsf{unDFKHE}, the maximum number of ciphertext tags $d$ is not fixed in advance. Then, in the key generation algorithm, we cannot generate $\textbf{B}_1, \cdots, \textbf{B}_d$ and make them public. In order to solve this issue, we utilize a family of pseudorandom functions \textsf{PRF}=(\textsf{PRF.Gen}, \textsf{PRF.Eval}), where $\textsf{PRF.Gen}(1^\lambda)$ takes as input a security parameter $\lambda$ and outputs a seed $\textbf{s} \in \mathbb{Z}_q^\ell$ of length $\ell=\ell({\lambda})$ (which depends on $\lambda$), and $\textsf{PRF.Eval}(\textbf{s},\textbf{x})$ takes as input a seed $\textbf{s} \in \mathbb{Z}_q^{\ell}$ and a variable $\textbf{x}\in \mathbb{Z}_q^*$ of \textit{arbitrary length} and returns an element in $\mathbb{Z}_q$. The family of pseudorandom functions helps us to stretch a variable of fixed length $\ell$ to one of arbitrary length $d$ as follows. In \textsf{unDFKHE.KGen}, for a variable $\textbf{t}$ of length $d=|\textbf{t}|$, instead of $\textbf{B}_1, \cdots, \textbf{B}_d$, we generate $\overline{\textbf{B}}_1, \cdots, \overline{\textbf{B}}_\ell$ and use them to produce $\textbf{B}_1, \cdots, \textbf{B}_d$ later. This can be done by running $\textsf{Eval}_\textsf{pk}(\textsf{PRF.Eval}(\cdot,i), (\overline{\textbf{B}}_k)_{k=1}^\ell)$, for $i\in [d]$, where $\textsf{PRF.Eval}(\cdot,i)$ acts as a function that can be evaluated by $\textsf{Eval}_\textsf{pk}$. Accordingly, any function $f \in \mathcal{F}$ will also be transformed to $f_{\Delta}$, defined by $f_{\Delta}(\textbf{t}):=f(\textbf{t}\oplus_q\Delta_{\le d})$, before entering any computation later on. Here $\Delta_i:=\textsf{PRF.Eval}(\textbf{s},i)$ for $i\in [d]$ and $\Delta_{\le d}=(\Delta_1,\cdots, \Delta_d)$.
Also remark that $f_{\Delta}(\textbf{t}\oplus_q (q_{\le d}-\Delta_{\le d}))=f(\textbf{t})$, where $q_{\le d}=(q, \cdots, q) \in \mathbb{Z}^d$. Therefore, in \textsf{unDFKHE.KHom}, we compute $\textbf{B}_{f} \leftarrow \textsf{Eval}_\textsf{pk}(f_{\Delta}, (\textbf{B}_k)_{k=1}^d)$. There is still a lot of work left to be done; due to space limitations, we leave the details of this section for the full version of this paper. \section{Conclusion and Future Works} \label{conclude} In this paper, we show that puncturable encryption can be constructed from the so-called delegatable fully key-homomorphic encryption. From this framework, we instantiate our puncturable encryption construction using LWE. Our puncturable encryption enjoys selective indistinguishability under chosen plaintext attacks, which can be converted into adaptive indistinguishability under chosen ciphertext attacks using well-known standard techniques. For future work, there are a few research directions worth pursuing, such as the design of: (i) puncturable lattice-based ABE as in \cite{PNXW18}, (ii) efficient puncturable forward-secure encryption schemes as proposed in \cite{GM15}, or (iii) puncturable encryption schemes whose puncture key size is constant, or puncturable encryption schemes that support an unlimited number of punctures. \subsubsection{Acknowledgment.} We thank Sherman S.M. Chow and the anonymous reviewers for their insightful comments, which improved the content and presentation of the manuscript. This work is supported by the Australian Research Council Linkage Project LP190100984. Huy Quoc Le has been sponsored by a CSIRO Data61 PhD Scholarship and CSIRO Data61 Top-up Scholarship. Josef Pieprzyk has been supported by the Australian ARC grant DP180102199 and Polish NCN grant 2018/31/B/ST6/03003.
\section{Introduction} Network representation of complex interactions among elements is an overarching framework heavily used in many fields of science~\cite{Newman2010book,Barabasi2012NatPhys,Barabasi2016book}. For social systems, the dynamics of interactions between individuals (whether electronic, online or face-to-face) can be represented as time-varying networks, often called temporal networks, in which nodes come and go and edges are activated or deactivated as time goes on~\cite{Holme2012PhysRep,Holme2015}. Many essential features of human behaviour encoded in the representation of temporal networks have been revealed over the past decade, such as burstiness~\cite{Jo2011PlosOne,Karsai2011PhysRevE,Karsai2012SciRep}, circadian/diurnal rhythms~\cite{Jo2012NewJPhys}, temporal communities~\cite{gauvin2014detecting}, higher-order interactions~\cite{scholtes2016higher,lambiotte2019networks}, etc. While the studies of temporal networks shed light on the time-varying nature of interactions between nodes, dynamics in social systems emerge not only at the local level~\cite{Gautreau2009PNAS}, but also at the global level. In a wide variety of social contexts, the network size (\emph{i.e.}, the number of active nodes) and the number of edges observed at a given point in time are very often not constant, and accordingly the average degree increases or decreases~\cite{Leskovec2007CA_Full,Kobayashi2020}. In fact, the numbers of aggregate nodes and edges have been shown to obey a scaling relationship known as the densification power law or densification scaling~\cite{Leskovec2007CA_Full}. In temporal networks (\emph{i.e.}, a sequence of snapshot networks), any variation in the number of active nodes $N$ and the number of edges $M$ can be \emph{a priori} attributed to changes in (i) the population in the system (\emph{e.g.}, the number of students present in a school, the number of attendees in a conference, etc.); (ii) the probability of two nodes being connected; or (iii) both. With a constant probability of edge creation, $N$ and $M$ will increase if more nodes enter the system, since each node will have a higher chance of finding partners. Likewise, for a given population, if the probability of two nodes being connected increases, $M$ will surely increase, and $N$ will rise as well, since isolated nodes, if they exist, will be more likely to get connected. These two mechanisms are fundamental factors that bring about the dynamics of $N$ and $M$, yet separating their contributions based on the dynamical behaviour of $N$ and $M$ is a challenging problem. In a wide variety of social and economic systems, network dynamics are likely to be driven by a mixture of these two mechanisms, and moreover their relative importance may occasionally change as the network evolves~\cite{Kobayashi2020}. In theory, each of these two mechanisms leads to a distinctive type of densification scaling: the first one, generated by the evolution of the population, is a scaling behaviour similar to the typical densification scaling in which the number of edges $M$ scales with the number of active nodes $N$ with a constant exponent $\alpha$, \emph{i.e.}, $M\propto N^\alpha$~\cite{Leskovec2007CA_Full}. The second one is an accelerating growth of $M$, which is caused by the evolution of the probability of edge creation~\cite{Kobayashi2020}. In fact, for the human contact networks we study, neither of these two types of scaling is observed in its original form.
Rather, we observe a ``mixed'' scaling behaviour which appears to be a composite of the two types and therefore cannot be explained by a single scaling law. Here, we develop a Bayesian statistical method to identify the source of the dynamics generating network densification and sparsification based on the sequence of $N$ and $M$. To take into account possible changes in the source of dynamics, we derive two specifications (\emph{i.e.}, ``regimes'') for the solution of a simple generative model, namely a dynamic hidden variable model, each of which captures one of the two fundamental mechanisms. By fitting the two specifications simultaneously to the observed mixed scaling relationship using a unified estimation framework, known as the Markov regime-switching model~\cite{Hamilton1994book,hamilton2010regime}, we are able to estimate the probability that the dynamical source of densification or sparsification at a given point in time is attributed to a particular mechanism. At the same time, the Bayesian inference also allows us to trace the paths of the time-varying parameters directly related to the dynamical source, \emph{i.e.}, the population in the system and the activity level of nodes. An important advantage of the regime-switching model is that it allows the ``true'' model specification to occasionally switch, possibly depending on the social context. In this work we analyse networks of face-to-face human interactions collected by the SocioPatterns collaboration~\cite{SocioPatterns}. We focus on four datasets: contact networks in two scientific conferences, a hospital and a workplace. Such networks can indeed be affected by the two fundamental mechanisms at the same time, because (i) individuals can always enter and exit the system, and (ii) the presence of a time schedule could facilitate or inhibit face-to-face interactions (\emph{e.g.}, attendees of a conference are more likely to have interactions during coffee breaks than during keynote talks). In particular, using data on academic conferences has an important advantage, as it allows us to compare the dynamical regimes detected by the proposed method with the ``ground-truth'' conference time schedules. We find indeed that during keynote talks, parallel sessions and coffee breaks, the temporal densification and sparsification in the contact networks formed by conference attendees are mainly related to shifts in the chance of contacts being made between attendees present at the venue. On the other hand, shifts in the population are the main driving force of densification and sparsification during registration and poster sessions. This result is consistent with our intuition that the number of attendees in the middle of the program would be mostly constant, while it is more likely to change during registration, which is held in the morning, and during poster sessions, in which not all of the attendees participate. For contact networks in a hospital and a workplace, this kind of comparison with a prespecified time schedule is not possible because there are no such rigorous time constraints to follow. Nevertheless, in all the systems we examined, the proposed method reveals that the main driving force of network densification and sparsification is occasionally switching, suggesting that the formation of social ties in physical space generally involves multiple dynamical sources.
\section{Results} \subsection{Empirical evidence on mixed densification scaling} We focus our analysis on temporal contact networks taken from the following four datasets: \begin{itemize} \item {\bf{WS-16}}: Contacts between participants of the Computational Social Science Winter Symposium 2016 at GESIS in Cologne on November 30, 2016~\cite{Genois2019}. \item {\bf{IC2S2-17}}: Contacts between participants of the International Conference on Computational Social Science 2017 at GESIS in Cologne on July 12, 2017~\cite{Genois2019}. \item {\bf{Hospital}}: Contacts among patients, nurses, doctors and staff in a hospital in Lyon on December 8, 2010~\cite{Vanhems:2013}. \item {\bf{Workplace}}: Contacts between workers in an office building in France on June 27, 2015~\cite{genois2015data}. \end{itemize} These data consist of contacts between individuals collected every 20 seconds using RFID sensors~\cite{Cattuto2010PLOS,SocioPatterns}. A ``contact'' is here defined as a physical, face-to-face proximity event. The datasets thus give us temporal networks in which nodes are individuals and edges encode the contacts occurring between them. All datasets exhibit large and abrupt fluctuations of the number of edges that are typical in these non-stationary systems (see Fig.~\ref{fig:scaling_data}, lower panels). In these particular contexts of social interactions, these transitions between high and low activity periods are often related to specified schedules: from talk sessions to coffee breaks in the conferences, changes in shifts in the hospital, from desk work to meetings in the workplace. \begin{figure*}[tb] \centering \includegraphics[width=17.5cm]{Figure/NMscaling_data.pdf} \caption{Dynamical behaviour of the number of active nodes $N$ and the number of active edges $M$. In the upper panels, the dynamical relationship between $N$ and $M$ is shown. Each dot represents a snapshot network created over a 10-minute time window. Gray dashed and dotted lines respectively denote $N/2$ (\emph{i.e.}, the lower bound for $M$) and $N(N-1)/2$ (\emph{i.e.}, the upper bound for $M$). Lower panels show the behaviour of $M$ over time.} \label{fig:scaling_data} \end{figure*} In many social and economic dynamical networks, the numbers of aggregate edges and nodes have a superlinear scaling relationship called the ``densification power law''~\cite{Leskovec2005HepAS,Leskovec2007CA_Full,bettencourt2009scientific}, in which the average degree increases with the number of nodes, \emph{i.e.}, ``densification''. For temporal networks, where there is a sequence of network snapshots, a similar type of scaling emerges from the dynamics of the population, in which nodes enter and leave the system while the chance of two nodes being connected is kept constant~\cite{kobayashi2018social,Kobayashi2020}. However, another type of scaling emerges in real-world systems for which the population is fixed. In such systems densification is ``explosive'', with the scaling exponent increasing with $N$~\cite{Kobayashi2020}. While these two classes of scaling could be differentiated and identified from data if we observed a specific type of scaling~\cite{Kobayashi2020}, in general there may exist a mixture of them that cannot be easily classified as one of the two classes. Indeed, in the four datasets we study, no clear scaling relationship appears (Fig.~\ref{fig:scaling_data}, upper panels). In the following we show that the mixed shape of the empirical densification behaviour reflects a mixing of both classes of scaling.
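For concreteness, the following Python sketch shows how the $(N_t, M_t)$ sequences plotted in Fig.~\ref{fig:scaling_data} can be computed from a SocioPatterns-style contact list. The file name and the three-column format $(t, i, j)$, one row per 20-second contact, are assumptions made for illustration:

\begin{verbatim}
import pandas as pd

contacts = pd.read_csv("contacts.csv", names=["t", "i", "j"])
WINDOW = 600  # 10-minute windows, in seconds
contacts["win"] = (contacts["t"] - contacts["t"].min()) // WINDOW

def nm(g):
    # Distinct undirected pairs in the window -> edges M;
    # endpoints of those pairs -> active nodes N.
    pairs = {tuple(sorted(p)) for p in g[["i", "j"]].itertuples(index=False)}
    nodes = {u for p in pairs for u in p}
    return pd.Series({"N": len(nodes), "M": len(pairs)})

NM = contacts.groupby("win").apply(nm)  # one (N, M) point per window
# NM.plot.scatter(x="N", y="M") qualitatively reproduces the upper panels.
\end{verbatim}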
\subsection{Two dynamical regimes in the dynamic hidden-variable model} To explore the temporal dynamics of densification and sparsification, we consider a dynamic version of the hidden variable model. The probability that two nodes $i$ and $j$ are in contact within a given time window $t$ is: \begin{align} \mathcal{P}_{ij,t} = \kappa_t a_{i} a_{j}, \;\;\; i,j=1,\ldots, N_{{\rm p},t}, \;\; t = 1,\ldots, T, \label{eq:prob_ij} \end{align} where $a_{i}$ is the ``fitness'' that represents the intrinsic activity level of node $i$~\cite{Caldarelli2002PRL,Boguna2003PRE,DeMasi2006PRE}, and $T$ denotes the last time window in the data. There are two time-varying parameters in the model. The first one is $\kappa_{t}>0$, which modulates the overall activity rhythm of nodes. A variation in $\kappa$ would reflect the time schedule of a conference or a school, working hours in an office or a hospital, or the circadian rhythm of individuals~\cite{Cattuto2010PLOS,Jo2012NewJPhys,aledavood2015digital,kobayashi2019structured}. The second time-varying parameter, $N_{{\rm p},t}$, denotes the potential number of active nodes at time $t$, \emph{i.e.}, the total of active and inactive nodes that are in the room or the building. It should be noted that although the number of active nodes (\emph{i.e.}, nodes having at least one edge) $N_t$ is always observable from the data, the potential number of nodes $N_{{\rm p},t}$ is not. We do not usually know how many people were actually in the room at a given time, because people could enter and exit the room at any time without interacting with any other individual. We can observe the number of active nodes that appear in the record of contacts, but in many cases there is no record of nodes without any interaction. We assume that the activity $a_i$ is uniformly distributed on $[0,1]$, because i) we do not have any prior information about the full distribution of the activity levels of all nodes including isolated ones, and ii) introducing a more general distribution prohibits us from obtaining an analytical solution, which makes it difficult to implement parameter estimation. The average numbers of active nodes $N$ and edges $M$ are analytically given as (see section~\ref{sec:analytical_derivation_N_M} in Methods for the derivation): \begin{align} N &= N_{\rm p} \left[ 1- \frac{2}{\kappa N_{\rm p}}\left(1-\left( 1-\frac{\kappa}{2}\right)^{N_{\rm p}}\right) \right],\label{eq:N_main}\\ M &= \frac{1}{8} \kappa N_{\rm p}(N_{\rm p}-1),\label{eq:M_main} \end{align} where we drop the time subscript $t$ for brevity. From these expressions, it is clear that the two parameters $\kappa$ and $N_{\rm p}$ play different roles in the determination of $N$ and $M$, but it is not obvious how $N$ and $M$ correlate. To see the direct relationship between $N$ and $M$, we eliminate one of the two parameters in Eq.~\eqref{eq:N_main}, using Eq.~\eqref{eq:M_main}. By doing this, we can effectively endogenise either $\kappa$ or $N_{\rm p}$. Depending on whether we endogenise $\kappa$ or $N_{\rm p}$, we obtain different functional forms that connect $N$ and $M$.
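Before turning to the two regimes, Eqs.~\eqref{eq:N_main} and \eqref{eq:M_main} can be checked directly by Monte Carlo simulation. The Python sketch below (with arbitrarily chosen $\kappa$ and $N_{\rm p}$) samples snapshots from Eq.~\eqref{eq:prob_ij} and compares the sample means of $N$ and $M$ with the analytical values:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def snapshot(kappa, Np):
    a = rng.uniform(0.0, 1.0, size=Np)  # intrinsic activities on [0, 1]
    P = kappa * np.outer(a, a)          # valid as probabilities if kappa <= 1
    upper = np.triu(rng.uniform(size=(Np, Np)) < P, k=1)
    deg = (upper | upper.T).sum(axis=1)
    return (deg > 0).sum(), upper.sum()  # (N, M)

def N_theory(kappa, Np):  # average number of active nodes
    return Np * (1 - 2 / (kappa * Np) * (1 - (1 - kappa / 2) ** Np))

def M_theory(kappa, Np):  # average number of edges
    return kappa * Np * (Np - 1) / 8

kappa, Np = 0.05, 150
sims = np.array([snapshot(kappa, Np) for _ in range(1000)])
print(sims.mean(axis=0), (N_theory(kappa, Np), M_theory(kappa, Np)))
\end{verbatim}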
\subsubsection{Regime 1: $N_{\rm p}$-driven dynamics} First, let us consider the case of a time-varying $N_{\rm p}$. This is a situation in which the dynamics of $N$ and $M$ are fully driven by changes in the population. We say that the system is in ``Regime 1'' or ``state 1'': \begin{theo} {A system is in Regime 1 if $N_{\rm p}$ is time-varying and $\kappa$ is constant, in which case the dynamical relationship between $N$ and $M$ is given by:} \begin{align} N_t &= N_{\rm p}(M_t,\kappa) \left[ 1- \frac{2}{\kappa N_{\rm p}(M_t,\kappa)}\left(1-\left( 1-\frac{\kappa}{2}\right)^{N_{\rm p}(M_t,\kappa)}\right) \right] \notag \\ & \equiv h^1(M_t;\kappa), \label{eq:model1} \end{align} where the time-varying $N_{\rm p}$ value is expressed as a function of $M_t$ and $\kappa$: $N_{\rm p}(M_t,\kappa) \equiv \frac{1+ \sqrt{1+{32M_t/{\kappa}}}}{2}$ (see Eq.~\ref{eq:M_main}). \end{theo} For the purpose of parameter estimation, we introduce an error term as $N_t=h^1(M_t;\widehat\kappa) + \varepsilon_{1,t}, \label{eq:h1}$ where $\widehat{\kappa}$ denotes the estimated value of $\kappa$, and $\varepsilon_{1,t}$ is a residual term following a normal distribution with mean zero and standard deviation $\sigma_1$. The estimated value of $N_{{\rm p},t}$ when the system is in Regime 1 is then given by: \begin{equation} \widehat N_{{\rm p},t}|_{S_t=1} = \frac{1+ \sqrt{1+{32M_{t}/{\widehat\kappa}}}}{2}, \end{equation} where $S_t=1$ denotes the fact that the system is in Regime 1 at time $t$. In Regime 1, the network dynamics is totally driven by the time-varying nature of the population, what we call ``$N_{\rm p}$-driven'' dynamics. For a given $\kappa$, the slope of the densification scaling is close to constant, while different $\kappa$ yield different slopes (Fig.~\ref{fig:schematic}, lower left). \begin{figure*}[tb] \centering \includegraphics[width=17.5cm]{Figure/RegimeSwitching_Schematic.pdf} \caption{Schematic of the identification method. Empirical densification is fitted to the regime-switching model in which the model switches from Regime 1 to Regime 2 (resp. from Regime 2 to Regime 1) with probability $p_{12}$ (resp. $p_{21}$). Then, the estimated parameters are used to infer the probability of the system being in Regime 1 at a given time $t$. For the panels at lower left and lower middle, different colours denote different $N_{\rm p}$, and different symbols denote different $\kappa$ (see Eqs.~\ref{eq:N_main} and \ref{eq:M_main}). If the scaling is $N_{\rm p}$-driven (resp. $\kappa$-driven), the time variation of $N$ and $M$ is fully caused by shifts in $N_{\rm p}$ (resp. $\kappa$).} \label{fig:schematic} \end{figure*} \subsubsection{Regime 2: $\kappa$-driven dynamics} Next, let us consider the case of a time-varying $\kappa$. This corresponds to a situation in which the dynamics of the system is fully driven by changes in the overall activity of individuals. We say that the system is in ``Regime 2'' or ``state 2'': \begin{theo} {A system is in Regime 2 if $\kappa$ is time-varying and $N_{\rm p}$ is constant, in which case the dynamical relationship between $N$ and $M$ is given by} \begin{align} N &= N_{\rm p} \left[ 1- \frac{2}{\kappa(M,N_{\rm p})N_{\rm p}}\left(1-\left( 1-\frac{\kappa(M,N_{\rm p})}{2}\right)^{N_{\rm p}}\right) \right] \notag \\ & \equiv h^2(M;N_{\rm p}), \label{eq:model2} \end{align} where the time-varying value of $\kappa$ is expressed as a function of $M_t$ and $N_{\rm p}$: $\kappa(M_t,N_{\rm p}) \equiv \frac{8M_t}{N_{\rm p}(N_{\rm p}-1)}$ (see Eq.~\ref{eq:M_main}).
\end{theo} For estimation, we add an error term as $N_t=h^2(M_t;\widehat{N}_{\rm p}) + \varepsilon_{2,t}, \label{eq:h2}$ where $\widehat{N}_{\rm p}$ denotes the estimated value of $N_{\rm p}$, and $\varepsilon_{2,t}$ is a residual term following a normal distribution with mean zero and standard deviation $\sigma_2$. The estimated value of $\kappa$ at time $t$ when the system is in Regime 2 is then given by: \begin{equation} \widehat\kappa_t|_{S_t=2} = \frac{8M_t}{\widehat{N}_{\rm p}(\widehat{N}_{\rm p}-1)}. \end{equation} In Regime 2, the network dynamics is fully driven by the individuals' time-varying activity levels, what we call ``$\kappa$-driven'' dynamics, and the slope of the densification scaling in fact increases with $N$ (Fig.~\ref{fig:schematic}, lower middle). This kind of accelerating growth of $M$ naturally happens when edges are created in a fixed-population system, in which case the network tends to become denser as the number of inactive nodes vanishes. \subsection{Analysis of switching dynamics behind temporal densification and sparsification} \subsubsection{A Markov regime switching model} In real-world networks, the mechanism of densification and sparsification may occasionally change depending on the context, such as the working schedule, coffee breaks, lunch time, etc. To incorporate such a possibility, we propose a unified framework based on the Markov regime switching model in which the hidden state of a system can switch from Regime 1 to Regime 2 (respectively from Regime 2 to Regime 1) with probability $p_{12}$ (resp. $p_{21}$)~\cite{Hamilton1994book,hamilton2010regime}. An important advantage of the regime switching model is that it allows us to calculate the probability of a system being in Regime $s\in\{1,2\}$ at time $t$ for a given parameter set $\vect{\theta} =\{N_{\rm p},\kappa,\sigma_1,\sigma_2,p_{11},p_{22}\}$. This probability of the system being in Regime $s$ can then be interpreted as the relevancy of each mechanism in explaining the densification dynamics at a given time (Fig.~\ref{fig:schematic}). We employ a Bayesian approach for the estimation of the parameters, using Markov chain Monte Carlo (MCMC) to obtain posterior distributions (see Methods~\ref{sec:method_regime_switch} for the estimation method). In the following, we use the smoothed probability ${\rm Pr}(S_t=s|\psi_T;\vect{\theta})$, which is calculated conditional on all the information available at time $T$, denoted by $\psi_T$ (see Methods~\ref{sec:smoothed} for the full derivation)~\cite{kim1994smoothed}. Validation analyses using synthetic networks show that the proposed method correctly detects the switching of regimes and estimates the model parameters quite accurately (Table~\ref{tab:validation_params}, Figs.~\ref{fig:validation_prob_scatter} and \ref{fig:validation_Np_kap} in Supporting Information (SI)). Given the probability of being in Regime $s\in\{1,2\}$, we can estimate the dynamical parameters $N_{{\rm p},t}$ and $\kappa_{t}$ as: \begin{align} \widehat N_{{\rm p},t} &= {\rm Pr}(S_t=1|\psi_{T};\widehat{\vect{\theta}})\cdot\widehat N_{{\rm p},t}|_{S_t=1} \; +\; {\rm Pr}(S_t=2|\psi_{T};\widehat{\vect{\theta}})\cdot\widehat{N}_{\rm p}, \\ \widehat \kappa_{t} &= {\rm Pr}(S_t=1|\psi_{T};\widehat{\vect{\theta}})\cdot\widehat{\kappa} \; + \; {\rm Pr}(S_t=2|\psi_{T};\widehat{\vect{\theta}})\cdot\widehat\kappa_t|_{S_t=2}, \end{align} where $\widehat{\vect{\theta}}$ denotes the set of estimated parameters, which is summarised in Table~\ref{tab:parameters} in Methods.
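For readers who wish to reproduce the filtering step that underlies these probabilities, a minimal Python sketch is given below. It evaluates $h^1$ and $h^2$ (Eqs.~\eqref{eq:model1} and \eqref{eq:model2}) and runs the standard Hamilton filter for a two-state Markov chain, assuming Gaussian residuals and a known parameter set $\vect{\theta}$; in the actual analysis the parameters are sampled by MCMC and the filtered probabilities are then smoothed:

\begin{verbatim}
import numpy as np
from scipy.stats import norm

def h1(M, kappa):  # Regime 1: N as a function of M, with kappa fixed
    Np = (1 + np.sqrt(1 + 32 * M / kappa)) / 2
    return Np * (1 - 2 / (kappa * Np) * (1 - (1 - kappa / 2) ** Np))

def h2(M, Np):     # Regime 2: N as a function of M, with Np fixed
    # (1 - kappa/2)**Np is well defined for non-integer Np if kappa <= 2.
    kappa = 8 * M / (Np * (Np - 1))
    return Np * (1 - 2 / (kappa * Np) * (1 - (1 - kappa / 2) ** Np))

def hamilton_filter(N, M, kappa, Np, s1, s2, p11, p22):
    # Filtered probabilities Pr(S_t = 1 | data up to t).
    P = np.array([[p11, 1 - p11], [1 - p22, p22]])          # transitions
    prob = np.array([1 - p22, 1 - p11]) / (2 - p11 - p22)   # stationary dist.
    out = np.empty(len(N))
    for t in range(len(N)):
        pred = P.T @ prob                                   # prediction step
        lik = np.array([norm.pdf(N[t], h1(M[t], kappa), s1),
                        norm.pdf(N[t], h2(M[t], Np), s2)])
        prob = pred * lik / (pred * lik).sum()              # update step
        out[t] = prob[0]
    return out
\end{verbatim}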
\subsubsection{Classification of network dynamics} \begin{figure*}[tb] \centering \includegraphics[width=17.5cm]{Figure/Regime_Scatter_pale.pdf} \caption{Identification of the dynamical regime. Upper panels show the smoothed probability of being in Regime 1 (\emph{i.e.}, $N_{\rm p}$-driven dynamics) at each time window. The 95\,\% credible interval is indicated by shading. Lower panels show $N$-$M$ plots with the classified regimes denoted by different colours and symbols. We identify a snapshot network as being in Regime 1 (resp. Regime 2) if the estimated probability of being in Regime 1 (resp. Regime 2) is greater than 0.5 in more than 95\,\% of the MCMC samples. Otherwise, a network is considered as being in an undetermined ``gray area''.} \label{fig:regime_NM_result} \end{figure*} The Bayesian estimation of the parameters suggests that the empirical systems' dynamics are indeed occasionally switching between $N_{\rm p}$-driven and $\kappa$-driven (Fig.~\ref{fig:regime_NM_result}, upper panels). For the conference data, a common feature is that the probability of being in Regime 1 is almost 1 prior to the first session and after the last keynote session of the day, and mostly zero in between (see Fig.~\ref{fig:schedule} for the correspondence between the dynamics and the schedule of the conferences). For WS-16, we see further fluctuations between the two regimes, one linked to the lunch break, the other to the poster session which closed the day. This suggests that the dynamics during the oral sessions, keynote talks and breaks are mainly driven by changes in the activity level of participants, while in the ``open'' time slots, such as registration, closing and the poster session, the dynamics are explained by a time-varying population. The same patterns linked to the schedule are found on the other days of the conferences (see Fig.~\ref{fig:SI_regime_NM_result}a--c). For the Workplace data we see a roughly similar pattern (Fig.~\ref{fig:regime_NM_result}, top right). The dynamics in the early morning and evening are driven by a variation in $N_{\rm p}$, as well as around lunch time and the coffee break, and changes in the activity level are the main source of dynamics in between. This is, of course, not necessarily a general property of contact networks in physical space. We also see days in which the regime remains almost constant for most of the day (Fig.~\ref{fig:SI_regime_NM_result}e), and days in which the regime changes constantly throughout the day (Fig.~\ref{fig:SI_regime_NM_result}f). In the case of the Hospital data, there is no clear tendency in the regime-switching pattern (Figs.~\ref{fig:regime_NM_result}, third column, and \ref{fig:SI_regime_NM_result}d), which seems natural for such an open environment with visitors and medical workers coming and going, and no general, fixed schedule for working hours. We next attempt to classify the snapshot networks into two groups based on their probability of being in a particular regime. We identify a snapshot network at $t$ as being in Regime 1 (resp. Regime 2) if more than 95\,\% of the samples for the value of ${\rm Pr}(S_t=1|\psi_T;\vect{\theta})$ generated by MCMC are greater than 0.5 (resp. lower than 0.5), \emph{i.e.}, if in more than 95\,\% of the parameter samples the dynamics at $t$ is attributed to Regime 1 (resp. Regime 2). Otherwise, the system is considered as being in an undetermined ``gray area''.
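The classification rule just described can be implemented directly on the MCMC output; a short sketch, assuming an array of posterior draws of the smoothed probabilities, reads:

\begin{verbatim}
import numpy as np

def classify(prob_draws, level=0.95):
    # prob_draws: shape (n_mcmc, T), draws of Pr(S_t = 1 | psi_T).
    frac1 = (prob_draws > 0.5).mean(axis=0)  # share of draws for Regime 1
    labels = np.zeros(prob_draws.shape[1], dtype=int)  # 0 = "gray area"
    labels[frac1 >= level] = 1          # Regime 1 in >= 95% of draws
    labels[frac1 <= 1 - level] = 2      # Regime 2 in >= 95% of draws
    return labels
\end{verbatim}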
As seen in the lower panels of Fig.~\ref{fig:regime_NM_result}, the location of snapshot networks in the $N$-$M$ space is strongly related to which regime they belong to. As expected, the snapshots in Regime 1 exhibit a scaling whose slope is almost constant (\emph{i.e.}, $N_{\rm p}$-driven scaling), while the snapshots in Regime 2 exhibit accelerating growth patterns (\emph{i.e.}, $\kappa$-driven scaling). Classifying each time window according to the underlying dynamical mechanism is essentially equivalent to identifying patterns in the $N$-$M$ space. \subsubsection{Temporal dynamics of population and activity level} \begin{figure*}[tb] \centering \includegraphics[width=17.5cm]{Figure/WS_IC2S2_Np_kappa_22.pdf} \caption{Estimation of $N_{{\rm p},t}$ and $\kappa_t$ for (a) WS-16 and (b) IC2S2-17. $\widehat N_{{\rm p},t}$ and $\widehat \kappa_{t}$ are shown in the upper and the lower panels, respectively, and the 95\,\% credible interval is indicated by shading. Upper panels also show the number of active nodes (dashed blue line) at each time; the difference between the two lines thus represents the number of isolated nodes. Lower panels also show the number of edges at each time (dashed red line). Vertical dotted lines indicate the time windows of the scheduled sessions, with the labels in the middle.} \label{fig:schedule} \end{figure*} We also examine the evolution of the dynamical parameters for both regimes (Fig.~\ref{fig:schedule}). For the two conferences (WS-16 and IC2S2-17), the estimated population size $\widehat N_{{\rm p},t}$ increases at the beginning of the day and decreases at the end, consistent with the dynamics of participants entering and exiting the venue. The estimated activity parameter $\widehat \kappa_{t}$ is high during these periods, and the level is consistent with the levels seen in highly active windows during social breaks. During the main program, the population is virtually constant and its size is consistent with the number of attendees ($\sim$ 120 for WS-16, $\sim$ 200 for IC2S2-17). The variation of the network size is thus mainly driven by the schedule, which constrains the participants' networking activity. In the case of WS-16, the fluctuations of $\widehat N_{{\rm p},t}$ during the lunch break and the poster session are worth noting, since the variation of the observed network size $N$ seems to be driven by both mechanisms; we see slight reductions in the estimated population while the overall activity is still high in these time windows. This demonstrates the ability of the proposed method to extract mixed-regime periods in which both of the two mechanisms are at work (see Fig.~\ref{fig:schematic}, right, for a schematic). Similar patterns are also found on the other days (see Fig.~\ref{fig:SI_conf_schedule}). \begin{figure*}[tb] \centering \includegraphics[width=17.5cm]{Figure/Main_HosWork_schedule.pdf} \caption{Estimation of $N_{{\rm p},t}$ and $\kappa_t$ for (a) Hospital and (b) Workplace.} \label{fig:schedule_hos_work} \end{figure*} In the Hospital data, the regime-switching dynamics is much less periodic, with lots of transitions and mixed periods (Fig.~\ref{fig:schedule_hos_work}a). This is however not surprising, because there is no fixed schedule regulating either the activity or the number of people present. For the Workplace data, we also do not expect \emph{a priori} to see a clear segmentation of regimes, because of the absence of a rigid schedule, as in the hospital.
However, the dynamics uncovered by our method indicate that the situation is much simpler than that for the Hospital, as there seems to be less variation in population size, aside from the ``opening'' and ``closing'' effects and a reduction in population around lunch time (Fig.~\ref{fig:schedule_hos_work}b). The day that exhibited many regime switches presents, however, many episodes of small variations in population size (see Fig.~\ref{fig:SI_HosWork_schedule}), similar to the dynamics observed in the Hospital. \subsubsection{Non-monotonic behaviour of network density} Since both types of scaling emerging from the two different dynamics exhibit superlinearity, the average degree is always increasing in $N$. However, the density of networks, defined by $2M/(N(N-1))$, is not always increasing with $N$ (Fig.~\ref{fig:density_main}). In fact, when the dynamics is $N_{\rm p}$-driven, the network density mostly decreases as the network size $N$ increases (Fig.~\ref{fig:density_main}, blue circles). So, a rise in $N$ causes the density to become smaller when the engine of the dynamics is a change in population. In contrast, when changes in $\kappa$ play a dominant role, the network density may increase when the network size is sufficiently large (Fig.~\ref{fig:density_main}, pale-red crosses). This is because when the number of active nodes $N$ is close to its upper bound $N_{\rm p}$, at which point the activity levels of the remaining inactive nodes are fairly low, the overall activity $\kappa$ needs to be large enough for those low-activity nodes to get at least one edge. This necessarily increases the total number of edges in the network to a large extent, which leads to a ``true'' densification of networks. \begin{figure*}[tb] \centering \includegraphics[width=17.5cm]{Figure/density_main.pdf} \caption{Density versus the number of active nodes. Classification of the dynamical regimes is conducted in the same way as in Fig.~\ref{fig:regime_NM_result}.} \label{fig:density_main} \end{figure*} These properties are also confirmed by the analytical equation for the average network density~\cite{Kobayashi2020}: \begin{align} \frac{2M}{N(N-1)} = \frac{\kappa}{4} \left( \frac{1}{1-q_0(\kappa,{N}_{\rm p})}\right)^2 \left( 1+\frac{q_0(\kappa,{N}_{\rm p})}{N-1}\right), \label{eq:density} \end{align} where $q_0$ denotes the fraction of isolated nodes in the system (see Eq.~\ref{eq:q0_uniform} in Methods~\ref{sec:analytical_derivation_N_M}). If the system is in Regime 1, in which $\kappa$ is constant, the density monotonically approaches $\kappa/4$ as $N_{\rm p}\to \infty$ (\emph{i.e.}, $q_{0}\to 0$ and $N\to \infty$). On the other hand, if the system is in Regime 2, in which $N_{\rm p}$ is constant, there is no \emph{a priori} upper bound, and the density exhibits a non-monotonic behaviour. In Regime 2, a change in $\kappa$ has two opposing effects on the network density. First, an increase in $\kappa$ directly increases the density through a rise in the probability of edges being created. Second, a shift in $\kappa$ would also increase $N$, which reduces the density through the third term in Eq.~\ref{eq:density}. Since $q_0\to 0$ as $N$ becomes sufficiently large, the latter effect vanishes, and therefore the density begins to rise with $N$ for a sufficiently large $N$.
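A quick numerical illustration of this non-monotonicity, obtained by fixing $N_{\rm p}$ and sweeping $\kappa$ in Eq.~\eqref{eq:density} (the particular values below are arbitrary), is the following Python sketch:

\begin{verbatim}
import numpy as np

def q0(kappa, Np):       # fraction of isolated nodes (see Methods)
    return 2 / (kappa * Np) * (1 - (1 - kappa / 2) ** Np)

def density(kappa, Np):  # average network density
    q = q0(kappa, Np)
    N = (1 - q) * Np
    return kappa / 4 * (1 / (1 - q)) ** 2 * (1 + q / (N - 1))

Np = 100                 # Regime 2: fixed population
for kappa in [0.01, 0.05, 0.2, 0.8]:
    N = (1 - q0(kappa, Np)) * Np
    print(f"N = {N:5.1f}   density = {density(kappa, Np):.4f}")
# The density first falls, then rises as N approaches Np.
\end{verbatim}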
A key finding of this work is that the relative importance of each of these two dynamical factors occasionally changes, depending on the social context under study. By fitting the model to the observed scaling relations, we can detect the main factor that is relevant at a given point in time. Shifts in $N_{\rm p}$ and/or $\kappa$ affect the activity of all individuals equally, so these parameters could be considered as effective ``temperatures'' of the system. While in this work we studied face-to-face networks of individuals, thanks to its versatility the proposed method could also be used for a wide variety of dynamical systems. There are some remaining issues for future research. First, the baseline model, a dynamic hidden variable model, relies on a ``homogeneous mixing'' hypothesis, which implies that nodes are connected to each other at random, given their activity levels. If we look at the structural properties of networks, such as triadic closure and community structure, such a hypothesis ---especially for social contexts--- would be unrealistic. However, the fact that the proposed method works remarkably well indicates that, as long as we look at network dynamics at a sufficiently coarse scale, keeping local properties aside, this homogeneous mixing assumption is a good approximation. In fact, introducing a non-random structure would easily make it impossible to obtain the analytical expressions that are needed for identification. Second, we assumed that the distribution of intrinsic node activities is uniform for simplicity. Ideally, one would need to set this distribution based on empirical evidence. However, measuring the empirical intrinsic activity levels of individuals is extremely difficult because one needs the activity levels of totally inactive individuals as well. Furthermore, this parameter might very well have its own temporal evolution. If available, such rich information would allow for a refinement of the method. Third, while the current method works well for temporal networks whose dynamical regime is occasionally switching, for fixed-regime systems in which the whole dynamics could be explained by either an $N_{\rm p}$-driven \emph{or} a $\kappa$-driven regime, the proposed regime-switching model is unnecessary. In such cases, one would fit the empirical scaling to each of the two models separately, and then find out which model fits better~\cite{Kobayashi2020}. In many cases, examining the source of network dynamics at the level of each individual would be prohibitively difficult because each individual has his/her own circumstances, and privacy issues often prohibit researchers from obtaining enough information to reveal particular individuals' behaviour. In contrast, global quantities, such as the total numbers of nodes and edges, are much more widely accessible, and therefore utilising these quantities will be inevitable when high-resolution data are difficult to collect. A contribution of this work is that the proposed model allows us to detect the role of the two fundamental dynamical factors just by using information on the global network dynamics. Any dynamical processes occurring \emph{on} networks, regardless of whether they are micro- or macro-phenomena, would be largely affected by the underlying dynamics \emph{of} networks. This is in particular the case for spreading processes such as epidemics.
A better understanding of the dynamics of densification and sparsification could thus benefit public health policies, which are of central importance for modern social systems. \section{Methods} \subsection{Analytical expression for $N$ and $M$}\label{sec:analytical_derivation_N_M} In this section we derive Eqs.~\eqref{eq:N_main} and \eqref{eq:M_main}. The numbers of active nodes $N$ and edges $M$ can be expressed as functions of the parameters $\kappa$ and $N_{\rm p}$ (we drop the time subscript $t$ for brevity): \begin{align} \begin{cases} N &= (1- q_0(\kappa,N_{\rm p})) N_{\rm p},\\ M &= \frac{\overline{k}(\kappa, N_{\rm p}) N_{\rm p}}{2}, \end{cases} \label{eq:NM_main} \end{align} where $\overline{k}(\kappa,N_{\rm p})$ denotes the average degree over all the existing nodes including isolated ones, and $q_0(\kappa,N_{\rm p})$ denotes the fraction of isolated nodes or, equivalently, the probability that a randomly chosen node is isolated. Let $\rho(a)$ be the density of node activities, and define $u(a,a^\prime)$ as the probability that there is an edge between two nodes having activity levels $a$ and $a^\prime$. The average degree $\overline{k}(\kappa,N_{\rm p})$ is given by the number of possible partners times the average of $u(a,a^\prime)$ (see section~\ref{sec:SI_derivation} in the SI for a full derivation): \begin{align} \overline{k}(\kappa,N_{\rm p}) = (N_{\rm p}-1) \int \int d a\, d a^\prime \rho(a) \rho(a^\prime) u(a, a^\prime). \label{eq:k_avg} \end{align} It should be noted that Eq.~\eqref{eq:k_avg} is equivalent to the average degree in the standard fitness model~\cite{Boguna2003PRE} if $N_{\rm p} -1$ is replaced with $N$, which is only asymptotically true in our model. The fraction of isolated nodes in the system is given by (see section~\ref{sec:SI_derivation} in the SI): \begin{align} q_0(\kappa,N_{\rm p}) = \int d a^\prime \rho(a^\prime) \left[ 1 - \int u(a^\prime, a) \rho(a) d a \right]^{N_{\rm p}-1}. \label{eq:q_0} \end{align} Substituting $\rho(a) = 1$ (\emph{i.e.}, uniform distribution on $[0,1]$) and $u(a, a^\prime) = \kappa a a^\prime$ into Eq.~(\ref{eq:k_avg}) leads to: \begin{align} \overline{k}(\kappa,N_{\rm p}) = \frac{\kappa }{4} (N_{\rm p}-1). \end{align} Similarly, $q_0$ is given by: \begin{align} q_0(\kappa,N_{\rm p}) &= \int_0^1 \left( 1 - \frac{\kappa a^\prime}{2} \right)^{N_{\rm p}-1}d a^\prime. \end{align} By defining a variable $x \equiv 1 - \frac{\kappa a^\prime}{2}$, we have: \begin{align} q_0(\kappa,N_{\rm p}) &= \frac{2}{\kappa}\int_{1-\frac{\kappa}{2}}^1 x^{N_{\rm p}-1} dx \notag \\ &= \frac{2}{\kappa N_{\rm p}}\left[1-\left( 1-\frac{\kappa}{2}\right)^{N_{\rm p}}\right]. \label{eq:q0_uniform} \end{align} Combining these results with Eq.~(\ref{eq:NM_main}), we have: \begin{align} N &= N_{\rm p} \left[ 1- \frac{2}{\kappa N_{\rm p}}\left(1-\left( 1-\frac{\kappa}{2}\right)^{N_{\rm p}}\right) \right],\label{eq:N}\\ M &= \frac{1}{8} \kappa N_{\rm p}(N_{\rm p}-1).\label{eq:M} \end{align} It should be noted that if $|1-\kappa/2| < 1$ and $N_{\rm p}$ is sufficiently large, then $q_0(\kappa,N_{\rm p}) \simeq 0$ and thereby $N \simeq N_{\rm p}$ and $M \propto N^2$, as is shown in the study of the static fitness model~\cite{Caldarelli2002PRL,Boguna2003PRE,DeMasi2006PRE}. \subsection{Bayesian estimation}\label{sec:method_regime_switch} This section describes how we can infer the model parameters and the dynamical regime in a given time interval $t$.
Let ${\rm Pr}(S_t=s|\psi_{t-1};\vect\theta)$ be the probability that a network is in state $s$ (\emph{i.e.}, in Regime $s$) conditional on information available at the end of time interval $t-1$, denoted by $\psi_{t-1}$, for a given set of parameters $\vect\theta=\{N_{\rm p},\kappa,\sigma_1,\sigma_2,p_{11},p_{22}\}$. The likelihood function is then given by: \begin{align} L(\{\vect{D}_t\}|\vect{\theta}) = \prod_{t=1}^T \sum_{s=1}^2f(\vect{D}_t|S_t=s,\psi_{t-1};\vect\theta){\rm Pr}(S_t=s|\psi_{t-1};\vect\theta), \end{align} where $\{\vect{D}_t\}$ denotes the sequence of observations $\vect{D}_t = (N_t,M_t)$, and $f$ is given by: \begin{align} f(\vect{D}_t|S_t=s,\psi_{t-1};\vect\theta) = \frac{1}{\sqrt{2\pi\sigma_s^2}}\exp{\left(-\frac{(N_t-h^s)^2}{2\sigma_s^2} \right)}, \;\; s= 1,2. \end{align} The log-likelihood function leads to: \begin{align} \log{L}(\{\vect{D}_t\}|\vect{\theta}) &= \sum_{t=1}^T \log\sum_{s=1}^2f(\vect{D}_t|S_t=s,\psi_{t-1};\vect\theta){\rm Pr}(S_t=s|\psi_{t-1};\vect\theta), \notag \\ &= \sum_{t=1}^T \log\sum_{s=1}^2\sum_{r=1}^2f(\vect{D}_t|S_t=s,\psi_{t-1};\vect\theta){\rm Pr}(S_{t-1}=r|\psi_{t-1};\vect\theta)p_{rs}. \end{align} Bayesian inference is conducted based on the relationship $p(\vect{\theta}|\{\vect{D}_t\})\propto L(\{\vect{D}_t\}|\vect{\theta})p(\vect{\theta})$, where $p(\vect{\theta}|\{\vect{D}_t\})$ and $p(\vect{\theta})$ are posterior and prior densities, respectively. For each parameter we collect 20,000 samples (four chains, 5,000 samples after 5,000 burn-in for each chain) generated from the posterior using Markov chain Monte Carlo (MCMC). We implement MCMC using Pystan ver.~2.19.0~\cite{Pystan}, which runs the No-U-Turn sampler (NUTS)~\cite{Nuts2014}. The mean parameter values are summarised in Table~\ref{tab:parameters}. \begin{table}[tb] \centering \caption{Estimated parameters. For each parameter, mean and 95\% credible interval obtained by MCMC are shown at the upper and lower rows, respectively. $N_{\rm max}$ denotes $\max_t\{{N_t}\}$.} \begin{tabular}{cccccc} \hline & WS-16 & IC2S2-17 & Hospital & Workplace & Prior distribution \\ \hline $N_{\rm p}$ & 121.152 & 196.236 & 28.087& 109.694 & Uniform($N_{\rm max},2N_{\rm max}$)\\ & $[121.005, 121.555]$ & $[196.005,196.884]$ & $[28.002,28.306]$ & $[106.449,112.984]$& \\ $\kappa$ & 0.467 & 0.222& 0.495& 0.087& Uniform$(0,1)$\\ & $[0.430, 0.512]$ & $[0.215,0.228]$ &$[0.450,0.541]$ & $[0.075,0.101]$& \\ $p_{11}$ & 0.930 & 0.969& 0.919& 0.917& Beta$(5,1)$\\ & $[0.853, 0.980]$ & $[0.914,0.996]$ & $[0.857,0.965]$& $[0.836,0.973]$& \\ $p_{22}$ & 0.969 & 0.988& 0.926& 0.974& Beta$(5,1)$\\ & $[0.932, 0.992]$ & $[0.966,0.999]$ & $[0.861,0.971]$ & $[0.944,0.993]$& \\ $\sigma_{1}$ & 8.587 & 5.358& 1.617& 5.815& Cauchy$(0,2)$\\ &$[4.186, 12.618]$ & $[4.360,6.598]$ &$[1.395,1.870]$ & $[4.820,7.064]$& \\ $\sigma_{2}$ & 5.828 & 15.776& 1.760& 2.913& Cauchy$(0,2)$\\ &$[4.330,7.520]$ & $[14.096,17.714]$ &$[1.532,2.007]$ & $[2.602,3.270]$ & \\ \hline \end{tabular} \label{tab:parameters} \end{table} Now we describe how information is updated in each period. 
The probability of being in state $s$ conditional on information at time $t$ is written as: \begin{align} {\rm Pr}(S_t=s|\psi_{t};\vect{\theta}) &= \frac{f(\vect{D}_t|S_t=s,\psi_{t-1})\cdot{\rm Pr}(S_t=s|\psi_{t-1})}{\sum_{s}f(\vect{D}_t|S_t=s,\psi_{t-1})\cdot{\rm Pr}(S_t=s|\psi_{t-1})}, \notag \\ &= \frac{\sum_{r}f(\vect{D}_t|S_t=s,\psi_{t-1})\cdot{\rm Pr}(S_{t-1}=r|\psi_{t-1})p_{rs}}{\sum_{s}\sum_{r}f(\vect{D}_t|S_t=s,\psi_{t-1})\cdot{\rm Pr}(S_{t-1}=r|\psi_{t-1})p_{rs}}, \label{eq:prob_S_update} \end{align} where we drop the argument $\vect\theta$ in $f$ for brevity. Given the initial guess for ${\rm Pr}(S_0=r|\psi_{0})$, we can recursively update the probability of being in state $s$. \subsection{Smoothed probability}\label{sec:smoothed} The probability ${\rm Pr}(S_t=s|\psi_{t};\vect{\theta})$ obtained in Eq.~\eqref{eq:prob_S_update} is based on information available at time $t$ for a given parameter set $\vect{\theta}$. We can also obtain the probability based on all information, represented by the information set $\psi_{T}$. Let $\vect\xi_{t|T}\equiv [{\rm Pr}(S_t=1|\psi_{T};\vect{\theta}),{\rm Pr}(S_t=2|\psi_{T};\vect{\theta})]^{\prime}$ be the vector of probabilities conditional on information at $T$. $\vect\xi_{t|T}$ can be calculated by conducting a backward iteration from $T$~\cite{Hamilton1994book}: \begin{align} \vect\xi_{T-1|T} &= \vect\xi_{T-1|T-1}\odot\{\vect{P}^\prime[\vect\xi_{T|T}(\div)\vect\xi_{T|T-1}]\},\notag \\ \vect\xi_{T-2|T} &= \vect\xi_{T-2|T-2}\odot\{\vect{P}^\prime[\vect\xi_{T-1|T}(\div)\vect\xi_{T-1|T-2}]\},\notag \\ &\vdots \notag \\ \vect\xi_{t|T} &= \vect\xi_{t|t}\odot\{\vect{P}^\prime[\vect\xi_{t+1|T}(\div)\vect\xi_{t+1|t}]\}, \end{align} where $\odot$ and $(\div)$ denote element-by-element multiplication and element-by-element division, respectively, and $\vect{P}=(p_{rs})$ is the transition matrix. Note that all the terms on the RHS of the first equality are already known from the previous estimation procedure. After calculating $\vect\xi_{T-1|T}$, we use it to calculate the RHS of the second line. We repeat this until we obtain $\vect\xi_{t|T}$. \subsection{Validation}\label{sec:validation} We check the accuracy of the inference method based on synthetic network data generated by the regime-switching hidden variable model. For given parameters $N_{\rm p}$, $\kappa$, $p_{11}$, and $p_{22}$, and time-varying variables $\{N_{{\rm p},t}\}$ and $\{\kappa_{t}\}$, we generate sequences of $\{N_t\}$ and $\{M_t\}$ in the way prescribed by the model. When the network at $t$ is in Regime 1 (Regime 2), the true $N_{{\rm p},t}$ ($\kappa_{t}$) is given by $N_{{\rm p},t} = 0.95N_{{\rm p}, t-1}$ ($\kappa_{t}=0.95\kappa_{t-1}$), and $N_{{\rm p},t}=N_{\rm p}$ ($\kappa_{t}=\kappa$) otherwise. We assume that the initial probability of being in Regime 1 is set at 0.5, and $N_{\rm p{,0}}=N_{\rm p}$ and $\kappa_{0}=\kappa$. For each parameter, we collect 20,000 samples by MCMC (5,000 samples from four chains after 5,000 burn-in iterations). The estimated parameters under different sets of ground-truth $\vect\theta$ are summarised in Table~\ref{tab:validation_params} in the SI. The estimated smoothed probabilities match the true states of the generated networks well (Fig.~\ref{fig:validation_prob_scatter} in the SI).
We also group the generated networks based on the probability of being in Regime 1: for each time period, if more than 95\% of the sampled values for ${\rm Pr}(S_t=1|\psi_{t};\widehat{\vect{\theta}})$ are higher (lower) than 0.5, then we classify the corresponding snapshot as being in Regime 1 (Regime 2). If it is not classified as Regime 1 or 2, the network is considered to be in a ``gray area''. As shown in the middle and the right columns of Fig.~\ref{fig:validation_prob_scatter}, the classification of generated networks based on the estimated parameters is consistent with the ground truth, while some networks fall in the gray area, especially when the observed pairs of $(N_t,M_t)$ overlap between the two regimes. A comparison between the estimated and the true paths of $N_{{\rm p},t}$ and $\kappa_{t}$ is also shown in Fig.~\ref{fig:validation_Np_kap}.
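To make the synthetic-data generation concrete, the following minimal Python sketch (illustrative only; the values of $\kappa$ and $N_{\rm p}$ are arbitrary) draws snapshots from the hidden variable model within a single time window, where an edge between nodes with activities $a_i$ and $a_j$ is created with probability $\kappa a_i a_j$, and compares the empirical averages of $N$ and $M$ with the analytical values from Eqs.~\eqref{eq:N} and \eqref{eq:M}:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def sample_snapshot(kappa, Np):
    # activities a_i ~ U(0,1); edge between i and j with prob. kappa*a_i*a_j
    a = rng.uniform(0.0, 1.0, Np)
    u = kappa * np.outer(a, a)
    adj = np.triu(rng.random((Np, Np)) < u, k=1)   # each pair drawn once
    deg = adj.sum(axis=0) + adj.sum(axis=1)
    return (deg > 0).sum(), adj.sum()              # (N, M)

kappa, Np = 0.2, 200
NM = np.array([sample_snapshot(kappa, Np) for _ in range(1000)])
N_emp, M_emp = NM.mean(axis=0)

# analytical predictions, Eqs. (q0_uniform), (N) and (M)
q0 = 2.0 / (kappa * Np) * (1.0 - (1.0 - kappa / 2.0) ** Np)
N_th = Np * (1.0 - q0)
M_th = kappa * Np * (Np - 1) / 8.0
print(N_emp, N_th)   # empirical vs. analytical number of active nodes
print(M_emp, M_th)   # empirical vs. analytical number of edges
\end{verbatim}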
\section{Introduction} The theory of open quantum dynamics has attracted significant interest recently due to the fast development of new experimental techniques to study, and even design, the interaction between a quantum system and its environment. In many applications, the dynamics of an open system interacting with its reservoir can be described by a quantum Markov process. Specifically, let us consider a finite-dimensional quantum system described in a Hilbert space $\mathcal{H}\simeq\mathbb{C}^N.$ The state of the system is represented by a \emph{density operator} $\rho$ on $\mathcal{H}$ with $\rho\geq 0$ and $\textrm{trace}(\rho)=1$. Density operators form a convex set $\mathfrak{D}(\mathcal{H})\subset\mathfrak{H}(\mathcal{H})$ with one-dimensional projectors corresponding to extreme points (\emph{pure states}). We denote by $\mathfrak{B}(\mathcal{H})$ the set of linear operators on $\mathcal{H}$, with $\mathfrak{H}(\mathcal{H})$ denoting the real subspace of Hermitian operators. Throughout the paper we will use $\dag$ to denote the adjoint, $*$ for the complex conjugate, and $[X,Y]=XY-YX,$ $\{X,Y\}=XY+YX$ to represent the commutator and the anticommutator, respectively. The Markovian quantum dynamics is described by the following Lindblad-Gorini-Kossakowski-Sudarshan master equation (LME)~\cite{lindblad,gorini-k-s}: \begin{equation} \label{gen} \dot\rho = -i[H,\rho]+\mathcal{D}(L,\rho) = -i[H,\rho]+L\rho L^\dag -\tfrac{1}{2}\{L^\dag L,\rho\}, \end{equation} where $H\in\mathfrak{H}(\mathcal{H})$ is the Hamiltonian and $L\in\mathfrak{B}(\mathcal{H})$ describes the dissipation due to the environment, accounting for the non-unitary part of the evolution. The mathematical problem we consider is the following: \emph{Find a pair $(H,L)$ that makes a given density matrix $\rho$ globally asymptotically stable (GAS) assuming dissipative dynamics of Lindblad type} (for the standard definition of GAS see~\cite{khalil}). This type of dynamic stabilization of quantum states is important in quantum information processing applications~\cite{viola-qubit,nielsen-chuang}. The main result of this paper is a constructive procedure to design a pair $(H,L)$ that renders an arbitrary mixed state $\rho$ GAS. This problem of GAS has been considered before for pure states, i.e., rank-$1$ projectors~\cite{wang-wiseman,ticozzi-QDS,ticozzi-attractive}. Stabilizing pure states is an important task, but it is not always possible to stabilize a desired pure state. However, there always exists a mixed state arbitrarily close to the target state that can be stabilized, hence attaining {\em practical stability}, that is, stability of a neighborhood of the target state. Mixed-state stabilization was considered in~\cite{schirmer-wang}, but without a constructive procedure to find the required dynamics $(H,L)$. The $L$ and $H$ we explicitly construct in this Note are in a simple form in an eigenbasis of $\rho$, tri- and quintdiagonal, respectively, with off-diagonal elements determined by the spectrum of $\rho$. The diagonal elements $a_n$ and $h_n$ of $L$ and $H$ are free variables.
Further analysis shows that $\rho$ is GAS for almost all choices of the diagonal elements $a_n$ and $h_n$, and we give explicit conditions on $h_n$ to guarantee GAS of $\rho$. In Section \ref{qubit}, we illustrate by an example how the Lindblad generator $L$ obtained this way can be implemented by reservoir engineering via direct (Markovian) feedback~\cite{wiseman-feedback, wang-wiseman,vitali-optomechanical, barchielli-gregoratti}, where the controlled dynamics has the form of a Wiseman-Milburn Markovian \emph{Feedback Master Equation} (FME)~\cite{wiseman-feedback, wiseman-milburn}: \begin{equation} \label{eq:FME} \dot{\rho}_t= -i[H+ H_C+\tfrac{1}{2}(FM+M^\dag F),\,\rho_t] +\mathcal{D}(M-iF,\rho_t)+\mathcal{D}(L_0,\rho_t). \end{equation} The drift Hamiltonian $H\in\mathfrak{H}(\mathcal{H}),$ the measurement operator $M \in\mathfrak{B}(\mathcal{H})$ and the noise operator $L_0$ are assumed to be given, while the open-loop and the feedback Hamiltonians $H_C, F\in\mathfrak{H}(\mathcal{H})$ are our control parameters. \section{Main Results} \subsection{Preliminaries and design assumptions} \label{preliminaries} We recall two results on subspace stabilization that have been derived in \cite{ticozzi-attractive,schirmer-wang}, which will be used in the rest of the paper. \begin{theorem} [Feedback-attractive subspaces~\cite{ticozzi-attractive}] \label{T-V-feedback} Let $\mathcal{H}_I=\mathcal{H}_S\oplus\mathcal{H}_R$ with $\Pi_S$ being the orthogonal projection on $\mathcal{H}_S.$ If we can freely choose both the open-loop and the feedback Hamiltonian then, for any measurement operator $M$, there exist a feedback Hamiltonian $F$ and a Hamiltonian compensation $H_c$ that make the subsystem supported by $\mathcal{H}_S$ invariant and attractive for the FME \eqref{eq:FME} iff \begin{equation} \label{stabilize} [\Pi_S,(M+M^\dag)] \neq 0. \end{equation} \end{theorem} A constructive proof is provided in \cite{ticozzi-attractive}. We will also make use of the following characterization of the global attractivity of a state: \begin{theorem} [Uniqueness equivalent to GAS \cite{schirmer-wang}] \label{unique-attractive} A steady state of \eqref{gen} is GAS if and only if it is unique. \end{theorem} Finally, we require a few basic observations and a simple lemma. Consider an orthogonal decomposition $\mathcal{H}=\mathcal{H}_S\oplus\mathcal{H}_R$ with $d_s=\operatorname{dim}(\mathcal{H}_S),$ $d_r=\operatorname{dim}(\mathcal{H}_R)$, and let $\{\ket{\phi_j^{S}}\}_{j=1}^{d_s},\{\ket{\phi_l^{R}}\}_{l=1}^{d_r}$ be orthonormal bases for $\mathcal{H}_{S},\,\mathcal{H}_R,$ respectively.
The basis \begin{equation*} \{\ket{\varphi_m}\} =\{\ket{\phi_j^{S}}\}_{j=1}^{d_s}\cup\{\ket{\phi_l^{R}}\}_{l=1}^{d_r}, \end{equation*} induces a block decomposition for matrices representing operators acting on $\mathcal{H}$: \begin{equation} \label{eq:blocks} X=\left[\begin{array}{c|c} X_S & X_P \\\hline X_Q & X_R \end{array}\right], \end{equation} and we have the following: \begin{lemma} Assume $\rho=\left[\begin{smallmatrix} \rho_S & 0 \\ 0 & \rho_R \end{smallmatrix}\right],$ divided into blocks according to some orthogonal Hilbert space decomposition $\mathcal{H}=\mathcal{H}_S\oplus\mathcal{H}_R.$ Then $\rho$ is invariant for the dynamics \eqref{gen} if and only if: \begin{subequations} \begin{align} 0=& -i[H_S,\rho_S]+L_S\rho_S L_S^\dag -\tfrac{1}{2}\{L_S^\dag L_S,\rho_S\}+L_P\rho_R L_P^\dag\nonumber\\ & -\tfrac{1}{2}\{L_Q^\dag L_Q,\rho_S\} \label{condS}\\ 0=&-i(H_P\rho_R-\rho_S H_P)+L_S\rho_S L_Q^\dag -\tfrac{1}{2}\rho_S(L_S^\dag L_P+L_Q^\dag L_R) \nonumber\\ & +L_P\rho_R L_R^\dag -\tfrac{1}{2}( L_S^\dag L_P + L_Q^\dag L_R)\rho_R\label{condP}\\ 0=&-i[H_R,\rho_R]+L_R\rho_R L_R^\dag -\tfrac{1}{2}\{L_R^\dag L_R,\rho_R\}+L_Q\rho_S L_Q^\dag\nonumber\\ & -\tfrac{1}{2}\{L_P^\dag L_P,\rho_R\} \label{condR} \end{align} \end{subequations} \end{lemma} \begin{IEEEproof} By direct computation of the generator \eqref{gen} one finds its $S$, $P$ and $R$-blocks to be the l.h.s. of \eqref{condS}-\eqref{condR}, respectively. A given state $\rho$ is stationary if and only if $\mathcal{L}(\rho)=0,$ and hence if and only if its blocks are all zero. \end{IEEEproof} \noindent We know from \cite{ticozzi-QDS, ticozzi-attractive} that for $\rho$ to be invariant, its support $\mathcal{H}_\rho$ must be an invariant subspace, and using the constructive procedure from the proof of Theorem \ref{T-V-feedback}, we can construct a block-upper-triangular $L$ that stabilizes the $\mathcal{H}_\rho$ subspace. There are many possible choices to do that, e.g. \begin{equation*} L= \left[ \begin{array}{c|c} L_\rho & L_{\rho,P} \\\hline 0 & L_{\rho,R} \\ \end{array}\right]\; \end{equation*} with blocks \begin{equation*} L_{\rho,P}= \begin{bmatrix} 0 & 0 & \cdots & 0 \\ \vdots & 0 & \cdots & 0 \\ \ell_1 & 0 & \cdots & 0 \\ \end{bmatrix},\; L_{\rho,R}= \begin{bmatrix} 0 & \ell_2 & 0 & 0 \\ 0 & 0 & \ell_3 & \ddots \\ \vdots & & \ddots & \ddots \\ \end{bmatrix} \end{equation*} with $\ell_1,\ell_2,\ldots \neq 0$. Therefore, we can focus on the dynamics restricted to the invariant support $\mathcal{H}_\rho$, and restrict our attention here to full-rank states $\rho=\operatorname{diag} (p_1,\ldots,p_N)$ with $p_1,\ldots,p_{N}>0$. To develop a constructive procedure to build a stabilizing pair $(H,L)$ with a simple structure, we make a series of assumptions and design choices: {\em Assumption 1.} \emph{The \emph{spectrum} of $\rho$ is {non-degenerate} (generic case).} \noindent A state with non-degenerate spectrum can be chosen \emph{arbitrarily close to any state}. Without loss of generality, we can choose a basis such that $\rho$ is diagonal, $\rho=\operatorname{diag}(p_1,\ldots,p_N)$ with $p_1>\ldots>p_{N}>0$. This assumption is instrumental for the construction of $L$ but can actually be relaxed, as we remark after Theorem \ref{main}.
Consider the decomposition $\mathcal{H}=\mathcal{H}_S\oplus\mathcal{H}_R,$ with $\operatorname{dim}(\mathcal{H}_R)=1,$ such that the corresponding block decomposition for $\rho$ is $\rho=\left[\begin{smallmatrix} \rho_S & 0 \\ 0 & \rho_R \\ \end{smallmatrix}\right],$ and divide $H, L$ accordingly. {\em Assumption 2.} \emph{$\rho_S$ satisfies \begin{equation} \label{hypS} -i[H_S,\rho_S]+L_S\rho_S L_S^\dag -\tfrac{1}{2}\{L_S^\dag L_S,\rho_S\}=0. \end{equation}} \noindent Condition \eqref{hypS} is clearly not satisfied in general, but it will be ensured at each step of the iterative procedure outlined in the next Section. Given this, conditions \eqref{condS}-\eqref{condR} can be rewritten as: \begin{equation} \label{syst} \left\{ \begin{array}{l} 0=L_Q\rho_S L_Q^\dag - \rho_R L_P^\dag L_P\\ 0=L_P\rho_R L_P^\dag - \tfrac{1}{2}\{L_Q^\dag L_Q,\rho_S\}. \end{array} \right. \end{equation} Since we assumed $\mathcal{H}_R$ to be one-dimensional, $L_P$ and $L_Q^\dag$ are both $(N-1)$-dimensional vectors. Call $\Pi_P=L_P L_P^\dag/(L_P^\dag L_P), \Pi_Q=L_Q^\dag L_Q/(L_Q L_Q^\dag).$ Then the second equation in \eqref{syst} reads: \begin{equation} \label{Pis} \rho_R(L_P^\dag L_P) \Pi_P =(L_Q L_Q^\dag)\tfrac{1}{2}(\Pi_Q\rho_S+\rho_S\Pi_Q). \end{equation} {\em Assumption 3.} \emph{Choose $\Pi_Q$ to be the orthogonal projector on the eigenspace corresponding to the smallest eigenvalue of $\rho_S$}. \noindent Choose an ordered spectral decomposition with rank-one projectors, $\rho=\sum_{i=1}^{N-1}p_i\Pi_{S,i}+\rho_R\Pi_R,$ where $p_i> p_{i+1}.$ As $\rho$ is in diagonal form, this means that we are choosing: \begin{equation*} L_Q= \begin{bmatrix}0&\ldots &0 & \ell_Q \end{bmatrix}. \end{equation*} We thus get: \begin{equation} \label{key} \rho_R(L_P^\dag L_P) \Pi_P =(L_Q L_Q^\dag)p_{N-1}\Pi_Q, \end{equation} which can be satisfied if and only if $\Pi_P=\Pi_Q$ and $\rho_R(L_P^\dag L_P)=(L_Q L_Q^\dag)p_{N-1}.$ We can choose $L_P^\dag L_P=p_{N-1},$ obtaining $(L_Q L_Q^\dag)={p_N}.$ Hence $\ell_Q$ can in particular be chosen to be real, $\ell_Q=\sqrt{p_N}$. We have thus constructed a Lindblad term of the form: \begin{equation*} L = \left[ \begin{array}{c|c} L_S & \begin{array}{c} 0\\ \vdots\\ 0\\ \sqrt{p_{N-1}} \end{array}\\ \hline \begin{array}{cccc} 0& \cdots & 0 & \sqrt{p_{N}} \end{array} & L_R\\ \end{array} \right]. \end{equation*} As $\rho_R$ is a scalar, we can rewrite \eqref{condP} as: \begin{equation} \label{condHp} -i(\rho_R I_S - \rho_S)H_P +K=0, \end{equation} where $K = L_P\rho_R L_R^\dag -\tfrac{1}{2}( L_S^\dag L_P + L_Q^\dag L_R)\rho_R -\tfrac{1}{2}\rho_S(L_S^\dag L_P + L_Q^\dag L_R)+L_S\rho_SL_Q^\dag.$ Once $\rho_S,\rho_R,L_S,L_R,L_P,L_Q$ are fixed, \eqref{condHp} is a linear system in $H_P$ which always admits a unique solution, $H_P=-i(\rho_R I_S - \rho_S)^{-1}K$ (the inverse exists since $\rho_R=p_N$ is strictly smaller than every eigenvalue of $\rho_S$); this is a necessary condition for the invariance of $\rho$. We are now in a position to present an inductive procedure for constructing stabilizing generators.
Given the previous observations, we can render the given $\rho$ invariant for the dynamics if we can enact dissipation driven by a Lindblad operator of the form \begin{equation} \label{2level} L= \begin{bmatrix} a_1 & \sqrt{p_{1}} \\ \sqrt{p_{2}} & a_2 \end{bmatrix} \end{equation} and a Hamiltonian $H$ whose off-diagonal elements satisfy \eqref{condHp}. In the $N$-level case let $\rho=\operatorname{diag}(p_1,\ldots ,p_N)$. We can iterate our procedure by induction on the dimension of $\mathcal{H}_S.$ We have just found the $2\times 2$ upper-left blocks of $L$ and $H$ such that $\rho^{(2)}=\operatorname{diag}(p_1,p_2)$ is stable and attractive for the dynamics driven by the reduced matrices. Assume that we have some $m\times m$ upper-left blocks of $L$ and $H$ such that $\rho^{(m)}=\operatorname{diag}(p_1,\ldots,p_m)$ is invariant for the reduced dynamics. This is exactly Assumption 2 above. Let $\mathcal{H}_S^{(m)}=\operatorname{supp} (\sum_{j=1}^{m}p_j\Pi_j),$ $\mathcal{H}_R^{(m)}=\operatorname{supp}(p_{m+1}\Pi_{m+1}).$ If we want to stabilize $\rho^{(m+1)}=\operatorname{diag}(p_1,\ldots,p_{m+1})$ for the dynamics restricted to $\mathcal{H}_S^{(m)}\oplus\mathcal{H}_R^{(m)}$, we can then proceed building $L_Q^{(m)},L_P^{(m)}$ as in Assumption 3 above. Solving \eqref{condHp} then yields the off-diagonal terms of the Hamiltonian, while we can pick $H_R$ to assume any value, since it does not enter the procedure. By iterating until $m=N,$ we obtain a tridiagonal matrix $L$ with $L_{n,n+1}=\sqrt{p}_n$, $L_{n+1,n}=\sqrt{p}_{n+1}$ and $L_{n,n}=a_n$, i.e., \begin{equation} \label{eq:L} L = \begin{bmatrix} a_1 & \sqrt{p}_1 & 0 & \cdots & 0 & 0\\ \sqrt{p}_2 & a_2 & \sqrt{p}_2 & \cdots & 0 & 0\\ 0 & \sqrt{p}_3 & a_3 & \cdots & 0 & 0\\ 0 & 0 & \ddots & \ddots & \ddots & \vdots\\ 0 & 0 & \cdots & \sqrt{p}_{N-1} & a_{N-1} & \sqrt{p}_{N-1}\\ 0 & 0 & \cdots & 0 & \sqrt{p}_N & a_N \end{bmatrix} \end{equation} and $H$ becomes a quintdiagonal Hermitian matrix, i.e., \begin{equation} \label{eq:H} \begin{split} H_{nn} & = h_n, \\ H_{n,n+1}&=H_{n+1,n}^* = -\frac{i}{2} \frac{\sqrt{p}_n-\sqrt{p}_{n+1}}{\sqrt{p}_n+\sqrt{p}_{n+1}} (a_n \sqrt{p}_n+ a_{n+1}\sqrt{p}_{n+1}),\\ H_{n,n+2}& =H_{n+2,n}^* = -\frac{i}{2} {p}_{n+1} \frac{\sqrt{p}_n-\sqrt{p}_{n+2}}{\sqrt{p}_n+\sqrt{p}_{n+2}}. \end{split} \end{equation} \begin{theorem} \label{main} If $(H,L)$ are chosen as in (\ref{eq:L})--(\ref{eq:H}) then $\rho=\operatorname{diag}({p}_1,\ldots,{p}_N)$ is a stationary state of the LME $\dot\rho(t)=-i[H,\rho]+\mathcal{D}(L,\rho).$ $\rho$ is GAS for most choices of the diagonal elements $\vec{a}$ and $\vec{h}$ of $L$ and $H$, respectively; in particular there exists $M_0\geq 0$ so that $\rho$ is GAS for $\vec{h}=(M,0,\ldots,0)$ for all $M>M_0.$ \end{theorem} \begin{IEEEproof} Given the generator, we can verify by direct calculation that $\rho$ is a steady state. Setting $B=\mathcal{D}(L,\rho)$, direct calculation shows that $B_{m,n}=0$ except for \begin{align*} B_{n,n+1} = B_{n+1,n} &=-\frac{1}{2} (\sqrt{p}_n-\sqrt{p}_{n+1})^2 (a_n \sqrt{p}_n + a_{n+1}\sqrt{p}_{n+1}),\\ B_{n,n+2} = B_{n+2,n} &=-\frac{1}{2} {p}_{n+1}(\sqrt{p}_n-\sqrt{p}_{n+2})^2, \end{align*} and setting $A=i[H,\rho]$ shows that the Hamiltonian term exactly cancels the non-zero elements of $B$, i.e., $-A+B=0$. Thus $\rho$ is a steady state of the system. To show how to make $\rho$ the unique, and hence attractive, stationary state, assume that $\rho'$ is another stationary state in the support of $\rho$.
Let $\mathcal{X}=\{X|X=x\rho+y\rho',\,x,y\in\mathbb{R}\}.$ Then any state in $\mathcal{X}\cap \mathfrak{D}(\mathcal{H})$ is also stationary. Since $\mathcal{X}$ is unbounded while $\mathfrak{D}(\mathcal{H})$ is compact, there must be a stationary state $\rho_1$ at the boundary of $\mathfrak{D}(\mathcal{H})$, i.e., with rank strictly smaller than that of $\rho$. Then the support of $\rho_1$, $\mathcal{H}_1=\operatorname{supp}(\rho_1)$, must be invariant \cite{ticozzi-attractive}, and $H,L$ must exhibit the following block-decompositions with respect to the orthogonal decomposition $\mathcal{H}=\mathcal{H}_1\oplus\mathcal{H}_2:$ \begin{equation*} \label{eq:ss_bd3} H = \begin{bmatrix} H_{11} & H_{12} \\ H_{12}^\dag & H_{22} \\ \end{bmatrix}, \quad L = \begin{bmatrix} L_{11} & L_{12} \\ 0 & L_{22} \\ \end{bmatrix}, \end{equation*} with $H_{12}=-i\frac{1}{2} L_{11}^\dag L_{12}.$ If $L_{12}\neq 0$, then it is straightforward to show that $\operatorname{Tr}(\Pi_2\rho),$ where $\Pi_2$ is the orthogonal projection onto $\mathcal{H}_2$, is strictly decreasing~\cite{ticozzi-QDS}, and thus $\mathcal{H}_2$ is not invariant. Since $\rho$ is also stationary, and is a full-rank state, this is not possible and we must therefore have $L_{12}=0$ and thus $H_{12}=0$. This implies that $H$ and $L$ are block-diagonal. Both $\mathcal{H}_1$ and $\mathcal{H}_2$ must contain at least one eigenvector of $L,$ say $\vec{v}_1$ and $\vec{v}_2.$ Orthogonality of the subspaces $\mathcal{H}_1$ and $\mathcal{H}_2$ along with the block-decomposition of $H$ and $L$ imply that any pair of vectors $\vec{v}_1\in\mathcal{H}_1$, $\vec{v}_2\in\mathcal{H}_2$ must satisfy: \begin{equation} \label{eqs} \vec{v}_2^\dag L \vec{v}_1 = \vec{v}_2^\dag H \vec{v}_1 = \vec{v}_2^\dag \vec{v}_1 = 0. \end{equation} These are {\em necessary conditions} for the existence of another stationary state, and hence for making $\rho$ not attractive. From the results in the appendix (Theorem \ref{thm:NOE}), a tridiagonal $L$ of the form (\ref{eq:L}) has $N$ distinct eigenvalues corresponding to $N$ eigenvectors $\vec{v}_k$ with real entries whose first components are all different from zero. Without loss of generality, we can assume that the first element of each (unnormalized) eigenvector $\vec{v}_j$ equals $1$ for all $j.$ If the eigenvectors are mutually non-orthogonal, i.e., $\vec{v}_k^T\vec{v}_\ell\neq 0$ for all $k,\ell$, then the third equality in \eqref{eqs} is automatically violated, and hence $\rho$ must be the unique stationary state. This condition will almost always be satisfied in practice, and it is easy to check that it is always true when $N=2$ (see appendix). However, even if $L$ has orthogonal eigenvectors, we can use our freedom of choice in the diagonal elements $h_{nn}$ of $H$ to render $\rho$ the unique stationary state. Assume $\vec{v}_j^\dag \vec{v}_k = 0$ for some pair of eigenvectors of $L$. Let $H_0$ be the Hamiltonian corresponding to $\vec{h}=\vec{0}$ and $M_0=\max_{j,k}|\vec{v}_j^\dag H_0 \vec{v}_k|$. Choose $M>M_0$ and let $H$ be the Hamiltonian corresponding to $\vec{h}=(M,0,\ldots)$, i.e., $H=H_0+\operatorname{diag}(M,0,\ldots,0)$. Recalling that the first component of $\vec{v}_j$ is $1$ for all $j$, we then have \begin{equation*} \vec{v}_j^\dag H \vec{v}_k = M + \vec{v}_j^\dag H_0 \vec{v}_k > 0, \end{equation*} so the second equality in \eqref{eqs} is violated, and $\rho$ is the unique stationary state.
\end{IEEEproof} {\em Remark:} Theorems \ref{main} and \ref{thm:NOE} further show that {Assumption 1} on the spectrum of $\rho$ can be relaxed, since the fact that the spectrum is non-degenerate plays no role in the proof. The construction is effective for {\em any} full-rank state on the desired support. \section{Feedback stabilization of encoded qubit states} \label{qubit} Consider a system whose evolution is governed by the FME~(\ref{eq:FME}) with drift Hamiltonian $H=-H_C'$, $M=M^\dag,$ and $L_0$ admitting an eigenspace $\mathcal{H}_S$ of dimension 2 for some eigenvalue $\lambda_S$. Assume we can switch off the measurement and the feedback Hamiltonian, $F=M=0.$ With this choice, \eqref{eq:FME} admits a two-dimensional \emph{noiseless (or decoherence-free) subspace}~\cite{lidar-DFS, lidar-initializationDFS,ticozzi-QDS}, which can be effectively used to encode a quantum bit protected from noise. We now face the problem of {\em initializing} the quantum state {\em inside the DFS}: we thus wish to construct $H_C$, $F$ and $M,$ such that a given state $\rho$ \emph{of the encoded qubit} is GAS on the full Hilbert space. Setting $\mathcal{H}=\mathcal{H}_S\oplus\mathcal{H}_R$ and choosing an appropriate basis for $\mathcal{H}_S$, the encoded state to be stabilized takes the form \begin{equation*} \rho = \begin{bmatrix} \rho_S & 0\\ 0 & 0\end{bmatrix}, \quad \rho_S= \begin{bmatrix} p_1 & 0\\ 0 & p_2 \end{bmatrix}. \end{equation*} The dynamical generators can be partitioned accordingly, \begin{gather*} L_0 = \begin{bmatrix} \lambda_S I_2 & L_P\\ 0 & L_R \end{bmatrix}, \quad H_C= \begin{bmatrix} H_S & H_P\\ H_P^\dag & H_R \end{bmatrix}, \\ F = \begin{bmatrix} F_S & F_P\\ F_P^\dag & F_R \end{bmatrix}, \quad M = \begin{bmatrix} M_S & M_P\\ M_P^\dag & M_R \end{bmatrix} \end{gather*} where $I_2$ is the $2\times 2$ identity matrix. We compensate the feedback-correction to the Hamiltonian by choosing $H_C=H_C'-\frac{1}{2}(FM+M^\dag F)$. For $L_P\neq0$ we use the constructive algorithm described above to render $\mathcal{H}_S$ attractive by choosing $F_P=-iM_P$, constructing an $H_R'$ for $H_C'$ so that no invariant state has support in $\mathcal{H}_R$ (see Theorem 12 in \cite{ticozzi-attractive}), and imposing $H'_P=0$. For $L_P=0$ we need to choose an observable such that $M_P\neq0$. We are thus left with freedom in the choice of $H_S,F_S,$ which can now be used to stabilize the desired $\rho_S$ in the {controlled invariant subspace} $\mathcal{H}_S.$ Denote the elements of the upper-left $2\times 2$ blocks as \begin{equation} M_S= \begin{bmatrix} M_1 & M_3\\ M_3^* & M_2\end{bmatrix}, \quad F_S= \begin{bmatrix} F_1 & F_3\\ F_3^* & F_2\end{bmatrix} \end{equation} If $M_3\neq 0$, set $k_M:=2M_3/(\sqrt{p}_1+\sqrt{p}_{2})$, so that $M_{3}=\frac{k_M}{2}(\sqrt{p}_1+\sqrt{p}_{2})$, and choose $F_{3}=-\frac{ik_M}{2}(\sqrt{p}_2-\sqrt{p}_{1})$. Then, up to the multiplicative constant $k_M$, the two block matrices are such that $M_S-iF_S$ is of the form \eqref{eq:L}, Theorem \ref{main} applies, and the desired state is GAS. Notice that if $M_3=0,$ and $M$ has non-degenerate spectrum, it is easy to find another state $\rho'$, arbitrarily close to $\rho,$ such that, in the basis in which $\rho'$ is diagonal, we have $M_3'\neq 0.$ Thus we can attain practical stabilization of any state of the encoded qubit. \vspace{-5mm} \section{Conclusions and outlook} Efficient quantum state preparation is crucial to most of the physical implementations of quantum information technologies.
Here we have shown how \emph{quantum noise} can be designed to stabilize arbitrary quantum states. The main interest in such a result is motivated by direct feedback design for quantum optical and opto-mechanical systems and for quantum information processing applications. As an example, we have demonstrated how to devise the (open- and closed-loop) control Hamiltonians in order to asymptotically stabilize a state of a qubit encoded in a noiseless subspace of a larger system. Further study is under way to address similar problems for multiple qubits, in the presence of structural constraints on the measurement, control and feedback operators, and to optimize the speed of convergence to the target state. \section*{Acknowledgments} We thank Arieh Iserles, Lorenza Viola and Maria Jose Cantero for interesting and fruitful discussions.
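As an aside, the construction of Theorem \ref{main} lends itself to a simple numerical sanity check (a minimal sketch, not part of the original results; the spectrum $p$ and the free diagonal elements below are arbitrary choices). It builds the tridiagonal $L$ of \eqref{eq:L}, recovers the off-diagonal part of $H$ from the invariance condition $-i[H,\rho]+\mathcal{D}(L,\rho)=0$ (cf.\ the discussion around \eqref{condHp}), and verifies that $\rho$ is stationary:
\begin{verbatim}
import numpy as np

# spectrum of rho (p_1 > ... > p_N > 0) and free diagonal a_n; arbitrary values
p = np.array([0.4, 0.3, 0.2, 0.1])
a = np.array([0.5, -0.3, 0.8, 0.1])
N = len(p)
rho = np.diag(p).astype(complex)

# tridiagonal L as in Eq. (L): row n has sqrt(p_n) in both off-diagonal slots
L = np.diag(a).astype(complex)
for n in range(N - 1):
    L[n, n + 1] = np.sqrt(p[n])
    L[n + 1, n] = np.sqrt(p[n + 1])

def dissipator(L, rho):
    LdL = L.conj().T @ L
    return L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL)

# off-diagonal H from -i[H,rho] + D(L,rho) = 0; for diagonal rho this gives
# H_jk = -i B_jk / (p_k - p_j) for j != k, with B = D(L,rho)
B = dissipator(L, rho)
H = np.zeros((N, N), dtype=complex)
for j in range(N):
    for k in range(N):
        if j != k:
            H[j, k] = -1j * B[j, k] / (p[k] - p[j])
H[0, 0] = 10.0   # free diagonal h = (M, 0, ..., 0) enforcing uniqueness

rho_dot = -1j * (H @ rho - rho @ H) + dissipator(L, rho)
print(np.abs(rho_dot).max())   # ~1e-16: rho is indeed stationary
\end{verbatim}
The resulting $H$ is nonzero only on the first two off-diagonals, i.e., it is quintdiagonal, in agreement with \eqref{eq:H}.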
\section{Introduction}\label{sec:intro} \input{1_introduction.tex} \section{Related Work}\label{sec:related} \input{2_relatedwork.tex} \section{Proposed Method}\label{sec:method} \input{3_method.tex} \section{Experimental Results}\label{sec:evaluation} \input{4_experimentalresults.tex} \section{Discussion \& Future Work}\label{sec:conclusions} \input{5_conclusion.tex} \bibliographystyle{IEEEtran} \subsection{Point Cloud Accumulation}\label{sec:supercloud} To facilitate object detection from sparse LiDAR scans (Figure~\ref{fig:images}A), such as those obtained from low-cost LiDARs with fewer beams like the Velodyne VLP-16, multiple LiDAR scans are accumulated to generate a dense representation (Figure~\ref{fig:images}B). To accumulate LiDAR scans, a sliding window of size \(N_c\) is maintained. Upon query at a defined frequency \(f\) to generate a dense representation, all point clouds in the sliding window are transformed to the latest LiDAR frame using the robot odometry information. Furthermore, in addition to time, only point clouds satisfying a minimum motion criterion of translation \(d_{min}\) or rotation \(\theta_{min}\) are added to the sliding window to ensure good coverage and a dense representation of the environment. \subsection{Point Cloud Projection}\label{sec:image} The accumulated point cloud is converted to an ordered image representation, with rows and columns corresponding to elevation and azimuth angles respectively, to enable fast point-wise information lookup and to facilitate the application of neighborhood-based image processing operations. In addition to point coordinates, range, LiDAR intensity, and surface normals images are generated. For all images, a fixed resolution of $180$ rows and $1200$ columns, corresponding to a vertical and horizontal angular coverage of $60\degree$ and $360\degree$ respectively, is used. In the case of accumulated LiDAR scans, as the vertical resolution covered is variable and discrete, the generated images can contain empty rows, seen as black lines in Figure~\ref{fig:images}C. To fill in missing data using neighboring scan lines, bilinear interpolation~\cite{VPR} is performed. To reduce the effect of noise and aliasing, the result is smoothed using a Gaussian kernel. The resulting filtered range and intensity images are shown in Figure~\ref{fig:images}D and \ref{fig:images}E, respectively. In addition to depth and intensity differences, the surface normals distribution is used to detect and segment objects from the environment background surface. Characteristically, the surface normals distribution of an object has a higher standard deviation, due to its irregular surface, than that of a typical environment surface, such as a wall or floor, which commonly has a consistent surface normals distribution. To calculate a surface normals image in a computationally efficient manner, the ordered structure of the image projection is exploited, and vector cross-products between neighboring points are calculated to compute a mean surface normal for all points in the accumulated point cloud~\cite{li2019net}. The computed range, intensity, and surface normals images are then used for object clustering in the next step. \subsection{Point Clustering for Object Segmentation} \label{sec:clustering} For segmentation of objects from the environment, points belonging to the same object are clustered using the range, intensity, and surface normals information.
First, using range image information, points belonging to the ground are removed using the approach described in~\cite{PRBonn2}. Next, points belonging to the same object are clustered together using the angular resolution of the range image $\alpha$, the depth disparity, and the incidence angle difference $\beta$, similar to the approach proposed in \cite{PRBonn1}, by clustering together points having a $\beta$ value larger than a threshold $\beta_{min}$. The identified clusters are then refined by checking for intensity consistency, the intuition being that LiDAR returns from the same object yield similar intensity values. Algorithm \ref{alg:clustering} summarizes the proposed method used for point clustering and object segmentation. It can be noted from lines 17-19 that a neighboring point must satisfy three conditions in order to be added to the same cluster. First, the point should satisfy the depth angle threshold $\beta_{min}$ as discussed above. Second, the intensity of the point must satisfy a minimum threshold $I_{min}$. Third, the intensity of the point must be within a certain intensity range $\pm I_n$ with respect to its neighbors. \begin{algorithm} \caption{Image Labeling}\label{alg:clustering} \begin{algorithmic}[1] \Procedure{LabelRangeImage}{} \State \texttt{Label} $\gets 2$, $R \gets$ range img, $I \gets$ intensity img \State $L \gets zeros(R_{rows} \times R_{cols})$ \For{ $r = 1...R_{rows}$} \For{ $c = 1...R_{cols}$} \If{$L(r,c) = 0 $} \State \texttt{LabelComponentBFS(r,c,Label)} \State \texttt{Label $\gets$ Label$+ 1$} \EndIf \EndFor \EndFor \EndProcedure \Procedure{LabelComponentBFS}{$r,c,$\texttt{Label}} \State \texttt{queue.push(\{$r,c$\})} \While{ \texttt{queue} is not empty} \State \{$r,c$\} $\gets$ \texttt{queue.top()} \State $L(r,c) \gets$ \texttt{Label} \For{ \{$r_n,c_n$\} $\in$ \texttt{Neighborhood\{$r,c$\}}} \State $d_1 \gets max(R(r,c), R(r_n,c_n))$ \State $d_2 \gets min(R(r,c), R(r_n,c_n))$ \If{$\arctan\frac{d_2\sin\alpha}{d_1 - d_2\cos\alpha} > \beta_{min}$} \If{$I(r_n,c_n) > I_{min}$} \If{$ \mid I(r_n,c_n) - I(r,c) \mid < I_n$} \State \texttt{queue.push(\{$r_n,c_n$\})} \EndIf \Else \State $L(r_n,c_n) \gets$ 1 \Comment{background label} \EndIf \EndIf \EndFor \State \texttt{queue.pop()} \EndWhile \EndProcedure \end{algorithmic} \end{algorithm} Once object clusters have been formed, the next step is to remove the object clusters which do not belong to the predefined objects of interest for the mission. Utilizing loose prior knowledge about object properties, such as the range of acceptable size and volume, the clusters are filtered. A cluster is considered valid if it has a volume between $V_{min}$ and $V_{max}$, and the number of points in the cluster is within the range [$n_{min}$, $n_{max}$]. Moreover, the surface normal standard deviation of the cluster points must lie above a certain threshold \(\sigma_{min}\). The clusters before and after the filtering step are shown in Figure~\ref{fig:images}F and~\ref{fig:images}G, respectively. If multiple clusters lie nearby such that they can be jointly observed within the field-of-view of the camera, they are combined (cluster merging) to minimize the number of waypoints generated for object candidates to be observed by the camera. \begin{figure} \centering \includegraphics[width=\columnwidth]{figures/LD_imgs_1.jpg} \caption{Visualization of the output of each stage of the proposed method.
Sub-figures show the single point cloud obtained from the VLP-16 LiDAR (A), the accumulated point cloud (B), the range image generated from point cloud projection (C), the filtered range image obtained after bilinear interpolation and Gaussian blur (D), the corresponding intensity image (E), the segmented object point clusters (F), and the final result obtained after applying various object cluster filters (G). Colored bounding boxes have been added to indicate different objects of interest detected and to establish correspondence between sub-figures for better understanding.} \label{fig:images} \vspace{-3ex} \end{figure} \subsection{Object Proposal Waypoint Generation} Once an object cluster has been identified, a pose waypoint is generated for the articulated PTZ camera to observe the object cluster and perform object detection. The distance between the object cluster and the camera origin is used to adjust the required zoom level for object observation. Furthermore, to keep track of object clusters that have already been observed, a voxel-based global map of the environment is maintained using Voxblox~\cite{oleynikova2017voxblox}. In addition to spatial information, each voxel contains a temporal uniqueness boolean and a timestamp variable to indicate if the camera observed the voxel in the past. If the number of unobserved voxels in the field-of-view of the PTZ camera is above a certain threshold, the cluster origin in the LiDAR frame is used to generate yaw and tilt angle commands for the PTZ camera in the camera frame for observation. Finally, to detect and classify the observed objects, the learned architecture described in~\cite{dang2020autonomous} is applied to the acquired image from the PTZ camera. \subsection{Experimental Setup} The suitability of the proposed method for real-world disaster response applications was evaluated by collecting sensor data during multiple autonomous field deployments conducted in various underground environments. During the experiments, an ANYmal quadruped robot~\cite{hutter2016anymal}, equipped with a Velodyne VLP-16 LiDAR, a PTZ camera system, and four static cameras covering the forward hemisphere of the robot, was used. Furthermore, each camera was equipped with an LED flashlight aligned with the camera axis to provide illumination in visually-degraded underground environments. It should be noted that this work's main contribution is generating object presence proposals and not object classification. For object proposal generation, LiDAR data was used, and the proposals were compared against classification results and the detection range of the static cameras. For mission autonomy, the robot relied on CompSLAM~\cite{khattak2020complementary} for localization and mapping, with high-level exploration planning provided by GBPlanner~\cite{dang2020graph}. Further details about the hardware and software architecture are provided in~\cite{cerberus}. To evaluate the object presence proposal performance in complex underground environments, various objects associated with search-and-rescue missions, such as backpacks, human survivors, drills, helmets, etc., were placed in the environment during field tests. The objects of interest were selected following the ``artifacts'' specifications provided by the DARPA SubT challenge guidelines\footnote{\href{https://www.subtchallenge.com/resources.html}{https://www.subtchallenge.com/resources.html}}.
To evaluate the accuracy of proposals, each unique object proposal generated by the proposed approach was manually evaluated by a human expert, and labeled as an artifact (object of interest), a non-artifact (an object but not of interest, such as wooden boxes, dustbins, water dispensers, traffic lights, etc., plenty of which were present in the DARPA SubT Finals environment), or a false positive (part of the environment incorrectly proposed to be an object, e.g., rocks, walls, etc.). The precision, defined as the ratio of true object proposals to the total number of proposals, is reported for the experiments, where both artifact and non-artifact detections were considered as true object proposals. Furthermore, the detection range of the proposed method was compared against that of static camera object detections by measuring the distance of each detection in the robot's map of the environment. The set of parameters used for the evaluations is presented in Table~\ref{tab:params}. \begin{table}[h] \centering \caption{Parameter values used during evaluation.} \begin{tabular}{l|c|l|c} \hline \multicolumn{1}{c|}{Parameter} & Value & \multicolumn{1}{c|}{Parameter} & Value \\ \hline \(N_c\) & 10 & \(I_{min}\) & 25 \\ \(\theta_{min}\)($\degree$) & 30 & \(I_n\) & 60 \\ \(d_{min} (\si{\meter})\) & 0.15 & \(V_{min}, V_{max}\)(\si{\meter^{3}}) & 0.01, 0.8 \\ \(f (Hz)\) & 2 & \(n_{min}, n_{max}\) & 50, 5000 \\ \(row, col\) & 180, 1200 & \(\sigma_{min}\) & 0.01 \\ \(V_{FOV}\)($\degree$) & 60 & \(\beta_{min}\) ($\degree$) & 14 \\ \hline \end{tabular} \label{tab:params} \vspace{-3ex} \end{table} \begin{figure*} \centering \includegraphics[width=\textwidth]{figures/LD_rviz.jpg} \caption{An instance during the Seemuhle underground mine experiment using the proposed method to detect three artifacts. Sub-figures show the filtered range image (A), the filtered intensity image (B), the segmented object points clusters (C), the surface normal image (D), images from left (E), center (F), and right (G) static cameras for reference. The colored bounding boxes correspond to the same detected artifacts across images and are visualized to provide a better understanding.} \label{fig:rviz} \vspace{-3ex} \end{figure*} \subsection{Seemuhle Underground Mine Deployment}\label{sec:seemuhle} For evaluation, an autonomous robot exploration mission was conducted at the Seemühle Borner Walenstadt mine in Switzerland in complete darkness. During this mission, a total of 34 unique object proposals were generated and provided to the PTZ camera for classification, with a precision of 85.3\%. To provide a qualitative understanding, Figure~\ref{fig:rviz} shows the output of each sub-module of the proposed work for an instance during the experiment when three artifacts (two human survivors and a backpack) were detected to be in the vicinity of the robot. Quantitative results comparing the object detection distance of the proposed approach against static cameras are summarized in Table~\ref{tab:seemuehle}. If multiple instances of the artifact are detected, a detection distance range instead of a single value is provided. It can be clearly noted that the proposed method's detection ranges exceed those of the static cameras. It can be argued that the overall detection ranges are not very large for either method; however, the maximum observation ranges are limited due to the complex topology of the underground mine. This is clearly demonstrated when multiple instances of the same artifact are present.
The proposed method can detect the artifact when it first becomes observable, thus providing multiple detection distances, whereas the static cameras can only detect them at a fixed distance when the robot is very near to the object. It can also be noted that during this experiment our method was not able to detect the rope artifact. This can be attributed to the small size of, and relatively low-intensity LiDAR returns obtained from, this specific artifact. Nevertheless, the robot is still able to detect the object at a closer distance using the static cameras, showcasing the usefulness of our approach in augmenting the current artifact detection pipeline. To demonstrate the advantage of the proposed method over current state-of-the-art LiDAR depth-only object segmentation methods, a comparison with the approach proposed in~\cite{PRBonn1} is performed. Furthermore, to demonstrate the effectiveness of utilizing auxiliary LiDAR data (intensity returns) and the proposed filtering steps, an ablation study is conducted. The results for the comparison and ablation study are presented in Figure~\ref{fig:False_positives}, which compares the number of false positive (FP) proposals for four types of artifacts detected over a horizon of $100$ point clouds. For this study, four short sequences of the dataset were used, wherein only one artifact was present near the robot, for ease of calculating the FPs associated with each artifact. The efficacy of the proposed approach and of all its individual steps is clearly demonstrated by the low number of FP proposals reported. \begin{figure} \centering \includegraphics[width=\columnwidth]{figures/LD_FPv6.png} \caption{Comparison of the number of false positive proposals generated for different object point cluster segmentation methods and strategies for three types of artifacts detected during the Seemuhle underground mine mission.} \label{fig:False_positives} \vspace{-3ex} \end{figure} \subsection{DARPA SubT Challenge Finals}\label{sec:darpa} The DARPA SubT Challenge's Final Event took place at the Louisville Mega Cavern in Kentucky on $21-24$ September 2021. During its hour-long final run, team CERBERUS deployed four ANYmal-C robots to autonomously explore the unknown underground environment containing caves, tunnels, and urban structures, collectively travelling a distance of over \SI{1.7}{\km}. For the evaluation of this work, we selected the longest robot trajectory, for which a total of 334 unique object proposals were generated, with a precision of 53.4\%. It can be noted that, compared to the previous result, the precision of object proposals is lower. One probable reason is the inconsistency of the LiDAR intensity returns obtained in this artificially created finals course compared to those obtained in the natural environment of the previous experiment. Nevertheless, even in such unfavourable conditions, the proposed method is able to demonstrate its efficacy by consistently detecting objects at a farther distance than the static cameras, as shown quantitatively in Table \ref{tab:seemuehle}.
\begin{table}[] \centering \caption{Object detection range (\si{\meter}) comparison} \begin{tabular}{l|l|c|c} \hline \multicolumn{1}{c|}{\textbf{Dataset}} & \multicolumn{1}{c|}{\textbf{Artifact Name}} & \textbf{\begin{tabular}[c]{@{}c@{}}Proposed \\ Method\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Static \\ Camera\end{tabular}} \\ \hline \multirow{5}{*}{Seemuehle} & Backpack & 5-10 & 3.5 \\ & Human Survivor & 4-9 & 3 \\ & Power Drill & 3.5-6.5 & 5 \\ & Rope & - & 2.5 \\ & Fire Extinguisher & 4.5-6 & 3.5 \\ \hline \multirow{4}{*}{DARPA} & Backpack & 4.5-13 & 3.5 \\ & Human Survivor & 2.5 & 2.5 \\ & Power Drill & 4.5 & 2.5 \\ & Vent & 3.5-5 & 1.5 \\ \hline \end{tabular} \label{tab:seemuehle} \vspace{-3ex} \end{table}
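For concreteness, the neighborhood test at the core of Algorithm~\ref{alg:clustering} can be sketched as follows (an illustrative Python sketch, not the deployed implementation; the horizontal angular resolution $\alpha = 360\degree/1200 = 0.3\degree$ follows from the image width, and the thresholds are taken from Table~\ref{tab:params}):
\begin{verbatim}
import math

# thresholds from Table (tab:params); alpha follows from the image width
BETA_MIN_DEG = 14.0
I_MIN = 25
I_N = 60
ALPHA = math.radians(360.0 / 1200.0)

def joins_cluster(r, r_n, i, i_n):
    """Neighborhood test of Algorithm 1 (lines 17-19) for two adjacent
    range-image pixels with ranges r, r_n and intensities i, i_n."""
    d1, d2 = max(r, r_n), min(r, r_n)
    beta = math.atan2(d2 * math.sin(ALPHA), d1 - d2 * math.cos(ALPHA))
    return (math.degrees(beta) > BETA_MIN_DEG
            and i_n > I_MIN
            and abs(i_n - i) < I_N)
\end{verbatim}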
\section{Introduction}\label{sec:1} Many methods have been developed in order to estimate free energy differences, ranging from thermodynamic integration \cite{Kirkwood1935,Gelman1998}, path sampling \cite{Minh2009}, free energy perturbation \cite{Zwanzig1954}, umbrella sampling \cite{Torrie1977,Chen1997,Oberhofer2008}, adiabatic switching \cite{Watanabe1990}, dynamic methods \cite{Sun2003,Ytreberg2004,Jarzynski2006,Ahlers2008}, optimal protocols \cite{Then2008,Geiger2010}, asymptotic tails \cite{vonEgan-Krieger2008}, to targeted and escorted free energy perturbation \cite{Meng2002,Jarzynski2002,Oberhofer2007,Vaikuntanathan2008,Hahn2009a}. Yet, the reliability and efficiency of these approaches have not been considered in full depth. Fundamental questions remain unanswered \cite{Lu2007}, e.g., which method is best for evaluating the free energy? Is the free energy estimate reliable, and what is the error in it? How can one assess the quality of the free energy result when the true answer is unknown? Generically, free energy estimators are strongly biased for finite sample sizes, such that the bias constitutes the main source of error of the estimates. Moreover, the bias can manifest itself in a seeming convergence of the calculation to a stable value which is, however, far from the desired true value. Therefore, it is of considerable interest to have reliable criteria for the convergence of free energy calculations. Here we focus on the convergence of Bennett's acceptance ratio method. In doing so, we will only be concerned with the intrinsic statistical errors of the method and assume uncorrelated and unbiased samples from the work densities. For the incorporation of instrument noise, see Ref. \cite{Maragakis2008}. With emerging results from nonequilibrium stochastic thermodynamics, Bennett's acceptance ratio method \cite{Bennett1976,Meng1996,Kong2003,Shirts2008} has attracted renewed interest. Recent research has shown that the isothermal free energy difference $\Delta f=f_1-f_0$ of two thermal equilibrium states $0$ and $1$, both at the same temperature $T$, can be determined by externally driven nonequilibrium processes connecting these two states. In particular, if we start the process with the initial thermal equilibrium state $0$ and perturb it towards $1$ by varying the control parameter according to a predefined protocol, the work $w$ applied to the system will be a fluctuating random variable distributed according to a probability density $p_{0}(w)$. This direction will be denoted as \textit{forward}. Reversing the process by starting with the initial equilibrium state $1$ and perturbing the system towards $0$ by the time-reversed protocol, the work $w$ done \emph{by} the system in the \textit{reverse} process will be distributed according to a density $p_{1}(w)$. Under some quite general conditions, the forward and reverse work densities $p_{0}(w)$ and $p_{1}(w)$ are related to each other by the Crooks fluctuation theorem \cite{Crooks1999,Campisi2009} \begin{align}\label{fth} \frac{p_{0}(w)}{p_{1}(w)}=e^{w-\Delta f}. \end{align} Throughout the paper, all energies are understood to be measured in units of the thermal energy $kT$, where $k$ is Boltzmann's constant. The fluctuation theorem relates the equilibrium free energy difference $\Delta f$ to the nonequilibrium work fluctuations, which permits calculation (estimation) of $\Delta f$ using samples of work values measured either in only one direction (\textit{one-sided} estimation) or in both directions (\textit{two-sided} estimation).
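As a simple illustration of Eq.~\gl{fth}, consider the standard example of a Gaussian forward work density $p_{0}(w)$ with mean $\mu$ and variance $\sigma^2$ (this worked example is illustrative and not tied to any particular system). The fluctuation theorem then gives
\begin{align*}
p_{1}(w) = p_{0}(w)\, e^{\Delta f - w} \propto \exp\left(-\frac{\left(w-(\mu-\sigma^2)\right)^2}{2\sigma^2}\right),
\end{align*}
i.e.\ a Gaussian with the same variance and mean $\mu-\sigma^2$, and the normalization of $p_{1}$ fixes $\Delta f = \mu - \sigma^2/2$. The dissipated work in forward direction is thus $\mu-\Delta f=\sigma^2/2$: the broader the work distribution, the farther from equilibrium the process is.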
The one-sided estimators rely on the Jarzynski relation \cite{Jarzynski1997} $e^{-\Delta f}=\int e^{-w} p_{0}(w) dw$, which is a direct consequence of Eq.~\gl{fth}, and the free energy is estimated by calculating the sample mean of the exponential work. In general, however, it is of great advantage to employ optimal two-sided estimation with Bennett's acceptance ratio method \cite{Bennett1976}, although one has to measure work values in both directions. The work fluctuations necessarily allow for events which ``violate'' the second law of thermodynamics such that $w<\Delta f$ holds in the forward direction and $w>\Delta f$ in the reverse direction, and the accuracy of any free energy estimate solely based on knowledge of Eq.~\gl{fth} will strongly depend on the extent to which these events are observed. The fluctuation theorem indicates that such events will in general be exponentially rare; at least, it yields the inequality $\left\langle w \right\rangle_1 \leq \Delta f \leq \left\langle w \right\rangle_0$ \cite{Jarzynski1997}, which states the second law in terms of the average work $\left\langle w\right\rangle_0$ and $\left\langle w\right\rangle_1$ in the forward and reverse direction, respectively. Reliable free energy calculations become harder the larger the dissipated work $\left\langle w \right\rangle_0 - \Delta f$ and $\Delta f - \left\langle w \right\rangle_1$ in the two directions is \cite{Hahn2009a}, i.e.\ the farther from equilibrium the process is carried out, resulting in an increasing number $N$ of work values needed for a converging estimate of $\Delta f$. This difficulty can also be expressed in terms of the overlap area $\mathcal{A}=\int\min\{p_{0}(w),p_{1}(w)\}dw \leq 1$ of the work densities, which is just the sum of the probabilities $\int_{-\infty}^{\Delta f}p_0 dw$ and $\int_{\Delta f}^{\infty}p_1 dw$ of observing second-law ``violating'' events in the two directions. Hence, $N$ has to be larger than $1/\mathcal{A}$. However, an \textit{a priori} determination of the number $N$ of work values required will be impossible in situations of practical interest. Instead, it may be possible to determine \textit{a posteriori} whether a given calculation of $\Delta f$ has converged. The present paper develops a criterion for the convergence of two-sided estimation which relies on monitoring the value of a suitably bounded quantity $a$, the convergence measure. As a key feature, the convergence measure $a$ checks whether the relevant second-law ``violating'' events are observed sufficiently and in the right proportion for obtaining an accurate and precise estimate of $\Delta f$.
Two-sided free energy estimation, i.e.\ Bennett's acceptance ratio method, incorporates a pair of samples of both directions: given a sample $\{w^0_k\}$ of $n_0$ forward work values, drawn independently from $p_{0}(w)$, together with a sample $\{w^1_l\}$ of $n_1$ reverse work values drawn from $p_{1}(w)$, the asymptotically optimal estimate $\widehat{\df}$ of the free energy difference $\Delta f$ is the unique solution of \cite{Bennett1976,Meng1996,Kong2003,Shirts2008} \begin{align}\label{benest} \frac{1}{n_0} \sum\limits_{k=1}^{n_0} \frac{1}{\beta + \alpha e^{w^0_k-\widehat{\df}}} = \frac{1}{n_1} \sum\limits_{l=1}^{n_1} \frac{1}{\alpha + \beta e^{-w^1_l+\widehat{\df}}}, \end{align} where $\alpha$ and $\beta\in(0,1)$ are the fractions of forward and reverse work values used, respectively, \begin{align} \alpha=\frac{n_0}{N} \quad \text{and} \quad \beta=\frac{n_1}{N}, \end{align} with the total sample size $N=n_0+n_1$. Originally found by Bennett \cite{Bennett1976} in the context of free energy perturbation \cite{Zwanzig1954}, with ``work'' being simply an energy difference, the two-sided estimator \gl{benest} was generalized by Crooks \cite{Crooks2000} to actual work of nonequilibrium finite-time processes. We note that the two-sided estimator has remarkably good properties \cite{Bennett1976,Meng1996,Shirts2003,Hahn2009a}. Although in general biased for small sample sizes $N$, the bias \begin{align}\label{bias} b = \left\langle \widehat{\df} - \Delta f \right\rangle \end{align} asymptotically vanishes for $N\to\infty$, and the estimator is the one with least mean square error (viz.\ variance) in the limit of large sample sizes $n_0$ and $n_1$ within a wide class of estimators. In fact, it is the optimal estimator if no further knowledge on the work densities besides the fluctuation theorem is given \cite{Hahn2009a,Maragakis2008}. It comprises the one-sided Jarzynski estimators as limiting cases for $\alpha\to0$ and $\alpha\to1$, respectively. Recently \cite{Hahn2009b}, the asymptotic mean square error has been shown to be a convex function of $\alpha$ for fixed $N$, indicating that typically two-sided estimation is superior to one-sided estimation. In the limit of large $N$, the mean square error \begin{align}\label{msefull} m = \left\langle(\widehat{\df}-\Delta f)^2\right\rangle \end{align} converges to its asymptotics \begin{align}\label{mse} X(N,\alpha) = \frac{1}{N} \frac{1}{\alpha\beta} \big( \frac{1}{U_{\al}}-1 \big), \end{align} where the overlap (integral) $U_{\al}$ is given by \begin{align}\label{Udef} U_{\al} = \int\limits \frac{p_{0}p_{1}}{\alpha p_{0}+\beta p_{1}} dw. \end{align} Likewise, in the large $N$ limit the probability density of the estimates $\widehat{\df}$ (for fixed $N$ and $\alpha$) converges to a Gaussian density with mean $\Delta f$ and variance $X(N,\alpha)$ \cite{Meng1996}. Thus, within this regime a reliable confidence interval for a particular estimate $\widehat{\df}$ is obtained with an estimate $\widehat{X}(N,\alpha)$ of the variance, \begin{align}\label{Xhat} \widehat{X}(N,\alpha) := \frac{1}{N\alpha\beta} \big( \frac{1}{\widehat{U}_\alpha}-1 \big), \end{align} where the overlap estimate $\widehat{U}_\alpha$ is given through \begin{align}\label{Uhat} \widehat{U}_\alpha := \frac{1}{n_0} \sum\limits_{k=1}^{n_0} \frac{1}{\beta + \alpha e^{w^0_k-\widehat{\df}}} = \frac{1}{n_1} \sum\limits_{l=1}^{n_1} \frac{1}{\alpha + \beta e^{-w^1_l+\widehat{\df}}}.
\end{align} To get some feeling for when the large $N$ limit ``begins'', we state a close connection between the asymptotic mean square error and the overlap area $\mathcal{A}$ of the work densities as follows: \begin{align}\label{mseineq} \frac{1-2\mathcal{A}}{N\mathcal{A}} < X(N,\alpha) \leq \frac{1-\mathcal{A}}{\alpha\beta N\mathcal{A}}, \end{align} see Appendix \ref{appendix.A}. Using $\alpha\approx0.5$ and assuming that the estimator has converged once $X<1$, we find the ``onset'' of the large $N$ limit for $N>\frac{1}{\mathcal{A}}$. However, this onset may actually be one or more orders of magnitude larger. \begin{figure} \includegraphics{figure1.eps} \caption{\label{fig:1}Displayed are free energy estimates $\widehat{\df}$ as a function of the sample size $N$, reaching a seemingly stable plateau if $N$ is restricted to $N=1000$ (top panel). Another stable plateau is reached if the sample size is increased up to $N=100\,000$ (bottom panel). Has the estimate finally converged? The answer is given by the corresponding graph of the convergence measure $a$ which is shown in the inset. The fluctuations around zero indicate convergence. The exact value of the free energy difference is visualized by the dashed horizontal line.} \end{figure} If we do not know whether the large $N$ limit is reached, we cannot state a reliable confidence interval of the free energy estimate: a problem which is encountered frequently in free energy calculations is that the estimates ``converge'' towards a stable plateau. While the sample variance can become small, it remains unclear whether the reached plateau represents the correct value of $\Delta f$. Possibly, the plateau found is subject to some large bias, i.e.\ far off the correct value. A typical situation is displayed in Fig.~\ref{fig:1}, which shows successive two-sided free energy estimates as a function of the sample size $N$. The errorbars are obtained with an error-propagation formula for the variance of $\widehat{\df}$ which reflects the sample variances, see Appendix \ref{appendix.C}, which builds on Sec.~\ref{sec:3}. If we take a look at the top panel of Fig.~\ref{fig:1}, we might have the impression that the free energy estimate has already converged at $N\approx300$, while the bottom panel reaches out to larger sample sizes where it becomes visible that the ``convergence'' in the top panel was merely apparent. Finally, we may ask whether the estimates shown in the bottom panel have converged at $N\gtrsim10000$. As we know the true value of $\Delta f$, which is depicted in the figure as a dashed line, we can conclude that convergence actually happened. The main result of the present paper is the statement of a convergence criterion for two-sided free energy estimation in terms of the \textit{behavior} of the convergence measure $a$. As will be seen, $a$ converges to zero. Moreover, this happens almost simultaneously with the convergence of $\widehat{\df}$ to $\Delta f$. The procedure is as follows: while drawing an increasing number of work values in both directions (with fixed fraction $\alpha$ of forward draws), successive estimates $\widehat{\df}$ and corresponding values of $a$, based on the present samples of work, are calculated. The values of $a$ are displayed graphically as a function of $N$, preferably on a log-scale.
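For concreteness, the two-sided estimate and the accompanying variance estimate can be obtained numerically along the following lines. This is a minimal Python sketch under the stated assumptions (uncorrelated samples, work in units of $kT$), not the code used for the figures; the function name, the root bracketing, and the safety margin are our own choices:
\begin{verbatim}
# Minimal sketch of the two-sided estimator, Eq. (benest), and of the
# variance estimate, Eq. (Xhat); work values are in units of kT.
import numpy as np
from scipy.optimize import brentq

def bennett_estimate(w0, w1):
    """Return (df_hat, U_hat, X_hat) from forward/reverse work samples."""
    n0, n1 = len(w0), len(w1)
    N = n0 + n1
    alpha, beta = n0 / N, n1 / N

    def balance(df):  # LHS minus RHS of Eq. (benest);
        with np.errstate(over='ignore'):  # strictly increasing in df
            lhs = np.mean(1.0 / (beta + alpha * np.exp(w0 - df)))
            rhs = np.mean(1.0 / (alpha + beta * np.exp(df - w1)))
        return lhs - rhs

    # the second law brackets the root, <w>_1 <= Delta f <= <w>_0;
    # the margin of 50 kT is an ad hoc safety widening
    df_hat = brentq(balance, np.mean(w1) - 50.0, np.mean(w0) + 50.0)
    with np.errstate(over='ignore'):
        U_hat = np.mean(1.0 / (beta + alpha * np.exp(w0 - df_hat)))
    X_hat = (1.0 / U_hat - 1.0) / (N * alpha * beta)  # Eq. (Xhat)
    return df_hat, U_hat, X_hat
\end{verbatim}
Running estimates such as those of Fig.~\ref{fig:1} would then correspond to repeated calls of this routine on growing samples.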
Then the typical situation observed is that $a$ is close to its upper bound for small sample sizes $N<\frac{1}{\mathcal{A}}$, which indicates a lack of the ``rare events'' which are required in the averages of Eq.~\gl{benest} (i.e.\ those events which ``violate'' the second law). Once $N$ becomes comparable to $\frac{1}{\mathcal{A}}$, single observations of rare events happen and change the values of $\widehat{\df}$ and $a$ rapidly. In this regime of $N$, rare events are likely to be observed either disproportionately often or seldom, resulting in strong fluctuations of $a$ around zero. This indicates the transition region to the large $N$ limit. Finally, for some $N\gg\frac{1}{\mathcal{A}}$, the large $N$ limit is reached, and $a$ typically fluctuates closely around zero, cf.\ the inset of Fig.~\ref{fig:1}. The paper is organized as follows. In Sec.~\ref{sec:2}, we first consider a simple model for the source of bias of two-sided estimation, which is intended to provide some insight into the convergence properties of two-sided estimation. The convergence measure $a$, which is introduced in Sec.~\ref{sec:3}, however, will not depend on this specific model. As the convergence measure is based on a sample of forward and reverse work values, it is itself a random variable, raising the question of reliability once again. Using numerically simulated data, the statistical properties of the convergence measure will be elaborated in Sec.~\ref{sec:4}. The convergence criterion is stated in Sec.~\ref{sec:5}, and Sec.~\ref{sec:6} presents an application to the estimation of the chemical potential of a Lennard-Jones fluid. \section{Neglected tail model for two-sided estimation}\label{sec:2} To obtain some first qualitative insight into the relation between the convergence of Eq.~\gl{Uhat} and the bias of the estimated free energy difference, we adopt the neglected tails model \cite{Wu2004} originally developed for one-sided free energy estimation. Two-sided estimation of $\Delta f$ essentially means estimating the overlap $U_{\al}$ from two sides, however in a dependent manner, as $\widehat{\df}$ is adjusted such that both estimates are equal in Eq.~\gl{Uhat}. \begin{figure} \includegraphics{figure2.eps} \caption{\label{fig:2}Schematic diagram of reverse $p_{1}$, overlap $p_{\al}$, and forward $p_{0}$ work densities (top). Schematic histograms of finite samples from $p_{0}$ and $p_{1}$, where in particular the latter is imperfectly sampled, resulting in a biased estimate $\widehat{\df}$ of the free energy difference (bottom).} \end{figure} Consider the (normalized) overlap density $p_{\al}(w)$, defined as the harmonic mean of $p_{0}$ and $p_{1}$: \begin{align}\label{pa} p_{\al}(w) = \frac{1}{U_{\al}} \frac{p_{0}(w)p_{1}(w)}{\alpha p_{0}(w)+\beta p_{1}(w)}. \end{align} For $\alpha\to0$ and $\alpha\to1$, $p_{\al}$ converges to $p_{0}$ and $p_{1}$, respectively. The dominant contributions to $U_{\al}$ come from the overlap region of $p_{0}$ and $p_{1}$ where $p_{\al}$ has its main probability mass, see Fig.~\ref{fig:2} (top). In order to obtain an accurate estimate of $\Delta f$ with the two-sided estimator \gl{benest}, the sample $\{w^0_k\}$ drawn from $p_{0}$ has to be representative for $p_{0}$ up to the \textit{overlap region} in the left tail of $p_{0}$, and the sample $\{w^1_l\}$ drawn from $p_{1}$ has to be representative for $p_{1}$ up to the overlap region in the right tail of $p_{1}$.
For small $n_0$ and $n_1$, however, we will have certain effective cut-off values $w^0_c$ and $w^1_c$ for the samples from $p_{0}$ and $p_{1}$, respectively, beyond which we typically will not find any work values, see Fig.~\ref{fig:2} (bottom). We introduce a model for the bias \gl{bias} of two-sided free energy estimation as follows. Assuming a ``semi-large'' $N=n_0+n_1$, the \textit{effective} behavior of the estimator for fixed $n_0$ and $n_1$ is modeled by substituting the sample averages appearing in the estimator \gl{benest} with ensemble averages, however truncated at $w^0_c$ and $w^1_c$, respectively: \begin{align}\label{negtail} \int\limits_{w^0_c}^{\infty} \frac{p_{0}(w)}{\beta + \alpha e^{w-\left\langle\widehat{\df}\right\rangle}} dw = \int\limits_{-\infty}^{w^1_c}\frac{p_{1}(w)}{\alpha + \beta e^{-w+\left\langle\widehat{\df}\right\rangle}} dw . \end{align} Here, the cut-off values $w^i_c$ are regarded as fixed (depending only on $n_0$ and $n_1$), and the expectation $\left\langle \widehat{\df} \right\rangle$ is understood to be the unique root of Eq.~\gl{negtail}, thus being a function of the cut-off values $w^i_c$, $i=0,1$. In order to elaborate the implications of this model, we rewrite Eq.~\gl{negtail} with the use of the fluctuation theorem \gl{fth} such that the integrands are equal, \begin{align}\label{negtail2} e^{\left\langle\widehat{\df}-\Delta f\right\rangle} = \frac{\int\limits_{-\infty}^{w^1_c} \frac{p_{0}(w)}{\alpha e^{w-\left\langle\widehat{\df}\right\rangle} + \beta} dw}{ \int\limits_{w^0_c}^{\infty} \frac{p_{0}(w)}{\alpha e^{w-\left\langle\widehat{\df}\right\rangle} + \beta} dw}, \end{align} and consider two special cases: \begin{enumerate} \item \textit{Large $n_1$ limit}: Assume the sample size $n_1$ is large enough to ensure that the overlap region is fully and accurately sampled (large $n_1$ limit). Thus, $w^1_c$ can safely be set equal to $\infty$ in Eq.~\gl{negtail2}, and the r.h.s.\ becomes larger than unity. Accordingly, our model predicts a positive bias. \item \textit{Large $n_0$ limit}: Conversely, using $w^0_c=-\infty$ in Eq.~\gl{negtail2}, the model implies a negative bias. \end{enumerate} In essence, $\left\langle\widehat{\df}\right\rangle$ is shifted away from $\Delta f$ towards the insufficiently sampled density. In general, when neither of the densities is sampled sufficiently, the bias will be a trade-off between the two cases. Qualitatively, from the neglected tails model, we find that the main source of bias results from a different convergence behavior of the forward and reverse estimates \gl{Uhat} of $U_{\al}$. The task of the next section will be to develop a quantitative measure of convergence. \section{The convergence measure}\label{sec:3} In order to check convergence, we propose a measure which relies on a consistency check between estimates based on first and second moments of the Fermi functions that appear in the two-sided estimator \gl{Uhat}. In a recent study \cite{Hahn2009a}, we already used this measure for the special case $\alpha=\frac{1}{2}$. Here, we give a generalization to arbitrary $\alpha$, study the convergence measure in greater detail, and justify its validity and usefulness. In the following, we will assume that the densities $p_0$ and $p_1$ have the same support. It was discussed in the preceding section that the large $N$ limit is reached, and hence the bias of two-sided estimation vanishes, if the overlap $U_{\al}$ is (on average) correctly estimated from both sides, $0$ and $1$.
We define the complementary Fermi functions $t_c(w)$ and $b_c(w)$ (for given $\alpha$) by \begin{align} &t_c(w) = \frac{1}{\alpha + \beta e^{-w+c}}, \nonumber \\ &b_c(w) = \frac{1}{\alpha e^{w-c} + \beta}, \label{fermidef} \end{align} such that $\alpha t_c(w)+\beta b_c(w)=1$ and $t_c(w)=e^{w-c}b_c(w)$ hold. The overlap \gl{Udef} can be expressed in terms of first moments, \begin{align}\label{Udef2} U_{\al} = \int t_{\scriptscriptstyle\Delta f}(w) p_1(w) dw = \int b_{\scriptscriptstyle\Delta f}(w) p_0(w) dw, \end{align} and the overlap estimate $\widehat{U}_\alpha$, Eq.~\gl{Uhat}, is simply obtained by replacing in Eq.~\gl{Udef2} the ensemble averages by sample averages, \begin{align}\label{Uhatnew} \widehat{U}_\alpha = \widebar{t_{\scriptscriptstyle\widehat{\df}}}^{\scriptscriptstyle (1)} = \widebar{ b_{\scriptscriptstyle\widehat{\df}}}^{\scriptscriptstyle(0)}. \end{align} According to Eq.~\gl{benest}, the value of $\widehat{\df}$ is defined such that the above relation holds. Note that $\widehat{\df} = \widehat{\df}(w^0_1,\ldots,w^1_{n_1})$ is a single-valued function depending on all work values used in both directions. The overbar with index $(i)$ denotes an average with a sample $\{w^i_k\}$ drawn from $p_i$, $i=0,1$. For an arbitrary function $g(w)$ it explicitly reads \begin{align}\label{sampleaverage} \widebar{g}^{\scriptscriptstyle(i)} = \frac{1}{n_i} \sum\limits_{k=1}^{n_i} g(w^i_k). \end{align} Interestingly, $U_{\al}$ can also be expressed in terms of second moments of the Fermi functions such that it reads \begin{align}\label{Udef3} U_{\al} = \alpha \int t_{\scriptscriptstyle\Delta f}^2 p_1 dw + \beta \int b_{\scriptscriptstyle\Delta f}^2 p_0 dw. \end{align} A useful test of self-consistency is to compare the first-order estimate $\widehat{U}_\alpha$ with the second-order estimate $\Uhat^{\scriptscriptstyle(I\!I)}_\alpha$, where the latter is defined by replacing the ensemble averages in Eq.~\gl{Udef3} with sample averages: \begin{align}\label{Uhat2} \Uhat^{\scriptscriptstyle(I\!I)}_\alpha = \alpha \widebar{t_{\scriptscriptstyle\widehat{\df}}^2}^{\scriptscriptstyle (1)} + \beta \widebar{ b_{\scriptscriptstyle\widehat{\df}}^2}^{\scriptscriptstyle(0)}. \end{align} Here, the estimates $\widehat{\df}$, $\widehat{U}_\alpha$, and $\Uhat^{\scriptscriptstyle(I\!I)}_\alpha$ are understood to be calculated with the same pair of samples $\{w^0_k\}$ and $\{w^1_l\}$. The relative difference of this comparison results in the definition of the convergence measure, \begin{align}\label{a} a = \frac{\widehat{U}_\alpha-\Uhat^{\scriptscriptstyle(I\!I)}_\alpha}{\widehat{U}_\alpha}, \end{align} for all $\alpha\in(0,1)$. Clearly, in the large $N$ limit, $a$ will converge to zero, as then $\widehat{\df}$ converges to $\Delta f$ and thus $\widehat{U}_\alpha$ as well as $\Uhat^{\scriptscriptstyle(I\!I)}_\alpha$ converge to $U_{\al}$. As argued below, it is the estimate $\Uhat^{\scriptscriptstyle(I\!I)}_\alpha$ that converges last; hence $a$ converges somewhat later than $\widehat{\df}$. Below the large $N$ limit, $a$ will deviate from zero. From the general inequality \begin{align}\label{Uineq} \widehat{U}_\alpha^2 \le \Uhat^{\scriptscriptstyle(I\!I)}_\alpha < 2 \widehat{U}_\alpha \end{align} (see Appendix \ref{appendix.B}) follow upper and lower bounds on $a$, which read \begin{align}\label{aineq1} -1 < a \le 1-\widehat{U}_\alpha < 1.
\end{align} The behavior of $a$ with increasing sample size $N=n_0+n_1$ (while keeping the fraction $\alpha=\frac{n_0}{N}$ constant) can roughly be characterized as follows: $a$ ``starts'' close to its upper bound for small $N$ and decreases towards zero with increasing $N$. Finally, $a$ begins to fluctuate around zero when the large $N$ limit is reached, i.e.\ when the estimate $\widehat{\df}$ converges. \begin{figure} \includegraphics{figure3.eps} \caption{\label{fig:3}Schematic plot which shows that the forward work density, $p_{0}(w)$, samples the Fermi function $b_{\scriptscriptstyle\Delta f}(w)=1/(\beta+\alpha e^{w-\Delta f})$ somewhat earlier than its square.} \end{figure} To see this qualitatively, we note that the second-order estimate $\Uhat^{\scriptscriptstyle(I\!I)}_\alpha$ converges later than the first-order estimate $\widehat{U}_\alpha$, as the former requires sampling the tails of $p_{0}$ and $p_{1}$ to a somewhat wider extent than the latter, cf.\ Fig.~\ref{fig:3}. For small $N$, both $\widehat{U}_\alpha$ and $\Uhat^{\scriptscriptstyle(I\!I)}_\alpha$ will typically underestimate $U_{\al}$, as the ``rare events'' which contribute substantially to the averages \gl{Uhatnew} and \gl{Uhat2} are quite likely not to be observed sufficiently, if at all. For the same reason, generically $\Uhat^{\scriptscriptstyle(I\!I)}_\alpha < \widehat{U}_\alpha$ will hold, since $b_{\scriptscriptstyle\widehat{\df}}(w^0)^2\leq b_{\scriptscriptstyle\widehat{\df}}(w^0)$ holds for $w^0\geq\widehat{\df}$ and similarly $t_{\scriptscriptstyle\widehat{\df}}(w^1)^2\leq t_{\scriptscriptstyle\widehat{\df}}(w^1)$ for $w^1\leq\widehat{\df}$. Therefore, $a$ is typically positive for small $N$. In particular, if $N$ is so small that \textit{all} work values of the forward sample are larger than $\widehat{\df}$ and all work values of the reverse sample are smaller than $\widehat{\df}$, then $\Uhat^{\scriptscriptstyle(I\!I)}_\alpha$ becomes much smaller than $\widehat{U}_\alpha$, resulting in $a\approx 1$. Analytic insight into the behavior of $a$ for small $N$ results from the fact that $n \widebar{x}^2 \geq \widebar{x^2}$ for any set $\{x_1,\ldots x_n\}$ of positive numbers $x_k$. Using this in Eq.~\gl{Uhat2} yields \begin{align}\label{Uineq2} \Uhat^{\scriptscriptstyle(I\!I)}_\alpha \leq 2N\alpha\beta\widehat{U}_\alpha^2 \end{align} and \begin{align}\label{aineq2} 1 - 2\alpha\beta N\widehat{U}_\alpha \le a \le 1 - \widehat{U}_\alpha. \end{align} This shows that as long as $N\widehat{U}_\alpha \ll 1$ holds, $a$ is close to its upper bound $1-\widehat{U}_\alpha \approx 1$. In particular, if $\alpha=\frac{1}{2}$ and $N=2$, then $a=1-\widehat{U}_\alpha$ holds exactly. Averaging the inequality for some $N$ sufficiently large to ensure $\left\langle a \right\rangle \approx 0$ and $\left\langle \widehat{U}_\alpha \right\rangle \approx U_{\al}$, we get a lower bound on $N$ which reads $N\geq \frac{1}{2\alpha\beta U_{\al}}$. Again, this bound can be related to the overlap area $\mathcal{A}$: taking $\alpha=\frac{1}{2}$ and using $U_{\frac{1}{2}}\leq 2\mathcal{A}$ (see Appendix \ref{appendix.A}), we obtain $N\geq\frac{1}{\mathcal{A}}$, in concordance with the lower bound for the large $N$ limit stated in Sec.~\ref{sec:1}.
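Numerically, the convergence measure is as cheap to evaluate as the estimate itself. A sketch complementing the estimator sketch above (same conventions; \texttt{df\_hat} denotes the solution of Eq.~\gl{benest}) might read:
\begin{verbatim}
# Sketch of Eqs. (Uhatnew), (Uhat2), and (a), evaluated with the same
# pair of samples used for df_hat; the function name is ours.
import numpy as np

def convergence_measure(w0, w1, df_hat):
    n0, n1 = len(w0), len(w1)
    alpha, beta = n0 / (n0 + n1), n1 / (n0 + n1)
    with np.errstate(over='ignore'):
        b0 = 1.0 / (alpha * np.exp(w0 - df_hat) + beta)  # b_df at forward w
        t1 = 1.0 / (alpha + beta * np.exp(df_hat - w1))  # t_df at reverse w
    U1 = 0.5 * (np.mean(b0) + np.mean(t1))   # both sides agree at df_hat
    U2 = alpha * np.mean(t1 ** 2) + beta * np.mean(b0 ** 2)  # Eq. (Uhat2)
    return (U1 - U2) / U1                                    # Eq. (a)
\end{verbatim}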
Lastly, we note that the convergence measure $a$ can also be understood as a measure of the sensitivity of relation \gl{benest} with respect to the value of $\widehat{\df}$: in the low $N$ regime, the relation is highly sensitive to the value of $\widehat{\df}$, resulting in large values of $a$, whereas in the limit of large $N$, relation \gl{benest} becomes insensitive to small perturbations of $\widehat{\df}$, corresponding to $a\approx0$. The details are summarized in Appendix \ref{appendix.D}. \section{Study of statistical properties of the convergence measure} \label{sec:4} \begin{figure} \includegraphics{figure4.eps} \caption{\label{fig:4}Exponential (left panel) and Gaussian (right panel) work densities.} \end{figure} In order to demonstrate the validity of $a$ as a measure of convergence of two-sided free energy estimation, we apply it to two qualitatively different types of work densities, namely exponential and Gaussian, see Fig.~\ref{fig:4}. Samples from these densities are easily generated with standard pseudo-random number generators. Statistical properties of $a$ are obtained by means of independent repeated calculations of $\widehat{\df}$ and $a$. While the two types of densities used are fairly simple, they are entirely different and general enough to reflect the statistical properties of the convergence measure. \subsection{Exponential work densities} The first example uses exponential work densities, i.e.\ \begin{align}\label{expdist} p_i(w) = \frac{1}{\mu_i} e^{-\frac{w}{\mu_i}}, \quad w \geq 0, \end{align} $\mu_i>0$, $i=0,1$. According to the fluctuation theorem \gl{fth}, the mean values $\mu_i$ of $p_{0}$ and $p_{1}$ are related to each other by $\mu_1=\frac{\mu_0}{1+\mu_0}$, and the free energy difference is known to be $\Delta f=\ln(1+\mu_0)$. \begin{figure} \includegraphics{figure5.eps} \caption{\label{fig:5}Statistics of two-sided free energy estimation (exponential work densities): shown are averaged estimates of $\Delta f$ as a function of the total sample size $N$. The errorbars reflect the standard deviation. The dashed line shows the exact value of $\Delta f$, and the inset the details for large $N$ (top). Statistics of the convergence measure $a$ corresponding to the estimates of the top panel: shown are the average values of $a$ together with their standard deviation as a function of the sample size $N$. Note the characteristic convergence of $a$ towards zero in the large $N$ limit (bottom).} \end{figure} \begin{figure} \includegraphics{figure6.eps} \caption{\label{fig:6}Mean values of the overlap estimates $\widehat{U}_\alpha$ and $\Uhat^{\scriptscriptstyle(I\!I)}_\alpha$ of first and second order, respectively. The slightly slower convergence of $\Uhat^{\scriptscriptstyle(I\!I)}_\alpha$ towards $U_{\al}$ results in the characteristic properties of the convergence measure $a$. To enhance clarity, data points belonging to the same value of $N$ are spread.} \end{figure} \begin{figure} \includegraphics{figure7.eps} \caption{\label{fig:7}(Color online) Double-logarithmic scatter plot of $\Uhat^{\scriptscriptstyle(I\!I)}_\alpha$ versus $\widehat{U}_\alpha$ for many individual estimates as a function of the sample size $N$. The dotted lines mark the exact value of $U_{\al}$ on the axes, and the dashed line is the bisectrix $\Uhat^{\scriptscriptstyle(I\!I)}_\alpha=\widehat{U}_\alpha$.
The approximately linear relation between the logarithms of $\Uhat^{\scriptscriptstyle(I\!I)}_\alpha$ and $\widehat{U}_\alpha$ continues up to the smallest observed values ($< 10^{-100}$, not shown here).} \end{figure} \begin{figure} \includegraphics{figure8.eps} \caption{\label{fig:8}The average convergence measure $\left\langle a\right\rangle$ plotted against the corresponding mean square error $\left\langle(\widehat{\df}-\Delta f)^2\right\rangle$ of the two-sided free energy estimator. The inset shows an enlargement for small values of $\left\langle a\right\rangle$.} \end{figure} \begin{figure} \includegraphics{figure9.eps} \caption{\label{fig:9}(Color online) A scatter plot of the deviation $\widehat{\df}-\Delta f$ versus the convergence measure $a$ for many individual estimates as a function of the sample size $N$. Note that the majority of estimates belonging to $N=32$ and $N=100$ have large values of $\widehat{\df}-\Delta f$ well outside the displayed range, with $a$ being close to one.} \end{figure} \begin{figure} \includegraphics{figure10.eps} \caption{\label{fig:10}Estimated constrained probability densities $p(\widehat{\df}|a\!<\!0.9)$ (black) and $p(\widehat{\df}|a\!\ge\!0.9)$ (grayscale) for two different sample sizes $N$, plotted versus the deviation $\widehat{\df}-\Delta f$. The inset shows averaged estimates of $\Delta f$ as a function of the total sample size $N$, subject to the constraints $a\ge0.9$ and $a<0.9$, respectively.} \end{figure} Choosing $\mu_0=1000$ and $\alpha=\frac{1}{2}$, i.e.\ $n_0=n_1$, we calculate free energy estimates $\widehat{\df}$ according to Eq.~\gl{benest} together with the corresponding values of $a$ according to Eq.~\gl{a} for different total sample sizes $N=n_0+n_1$. An example of a single running estimate and the corresponding values of the convergence measure is depicted in Fig.~\ref{fig:1}. Ten thousand repetitions for each value of $N$ yield the results presented in Figs.~\ref{fig:5}--\ref{fig:10}. To begin with, the top panel of Fig.~\ref{fig:5} shows the averaged free energy estimates as a function of $N$, where the errorbars show $\pm$ the estimated square root of the variance $\left\langle(\widehat{\df}-\langle\widehat{\df}\rangle)^2\right\rangle$. For small $N$, the bias $\left\langle\widehat{\df}-\Delta f\right\rangle$ of the free energy estimates is large, but becomes negligible compared to the standard deviation for $N\gtrsim 5000$. This is a prerequisite of the large $N$ limit; therefore, we will view $N \approx 5000$ as the onset of the large $N$ limit. The bottom panel of Fig.~\ref{fig:5} shows the averaged values of the convergence measure $a$ corresponding to the free energy estimates of the top panel. Again, the errorbars are $\pm$ one standard deviation $\sqrt{\left\langle a^2\right\rangle -\left\langle a\right\rangle^2}$, except that the upper limit is truncated for small $N$, as $a<1$ holds. The trend of the averaged convergence measure $\left\langle a \right\rangle$ is in full agreement with the general considerations given in the previous section: for small $N$, $\left\langle a \right\rangle$ starts close to its upper bound, decreases monotonically with increasing sample size, and converges towards zero in the large $N$ limit. At the same time, its standard deviation converges to zero, too. This indicates that single values of $a$ corresponding to single estimates $\widehat{\df}$ will typically be found close to zero in the large $N$ regime.
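This experimental setup is straightforward to reproduce. The following sketch reuses the \texttt{bennett\_estimate} and \texttt{convergence\_measure} sketches from above; the random seed, the integration grid, and the repetition count are arbitrary choices of ours, and the overlap area $\mathcal{A}$ is computed numerically so that $1/\mathcal{A}$ can be compared with the empirical onset of the large $N$ limit:
\begin{verbatim}
# Sketch of the exponential-density experiment: mu0 = 1000, alpha = 1/2;
# mu1 and Delta f are fixed by the fluctuation theorem.
import numpy as np
rng = np.random.default_rng(0)          # seed chosen arbitrarily

mu0 = 1000.0
mu1 = mu0 / (1.0 + mu0)
df_exact = np.log(1.0 + mu0)

w = np.linspace(0.0, 2.0e4, 2_000_001)  # grid resolves the scale of p1
p0 = np.exp(-w / mu0) / mu0
p1 = np.exp(-w / mu1) / mu1
A = np.minimum(p0, p1).sum() * (w[1] - w[0])  # overlap area (Riemann sum)
print("1/A =", 1.0 / A)

def one_run(N):
    w0 = rng.exponential(mu0, N // 2)   # forward sample
    w1 = rng.exponential(mu1, N // 2)   # reverse sample
    df_hat, _, _ = bennett_estimate(w0, w1)
    return df_hat, convergence_measure(w0, w1, df_hat)

runs = np.array([one_run(2000) for _ in range(10_000)])
print("bias =", runs[:, 0].mean() - df_exact, " <a> =", runs[:, 1].mean())
\end{verbatim}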
Noting that $a$ is defined as the relative difference of the overlap estimators $\widehat{U}_\alpha$ and $\Uhat^{\scriptscriptstyle(I\!I)}_\alpha$ of first and second order, respectively, we can understand the trend of the average convergence measure by taking into consideration the average values $\left\langle\widehat{U}_\alpha\right\rangle$ and $\left\langle\Uhat^{\scriptscriptstyle(I\!I)}_\alpha\right\rangle$, which are shown in Fig.~\ref{fig:6}. For small sample sizes, $U_{\al}$ is \textit{typically} underestimated by both $\widehat{U}_\alpha$ and $\Uhat^{\scriptscriptstyle(I\!I)}_\alpha$, with $\Uhat^{\scriptscriptstyle(I\!I)}_\alpha<\widehat{U}_\alpha$. The convergence measure takes advantage of the different convergence times of the overlap estimators: $\Uhat^{\scriptscriptstyle(I\!I)}_\alpha$ converges somewhat slower than $\widehat{U}_\alpha$, ensuring that $a$ approaches zero right after $\widehat{\df}$ has converged. The large standard deviations shown as errorbars in Fig.~\ref{fig:6} do not carry over to the standard deviation of $a$, because $\widehat{U}_\alpha$ and $\Uhat^{\scriptscriptstyle(I\!I)}_\alpha$ are strongly correlated, as is strikingly visible in Fig.~\ref{fig:7}. The estimated correlation coefficient \begin{align} \frac{ \left\langle \big(\Uhat^{\scriptscriptstyle(I\!I)}_\alpha-\langle\Uhat^{\scriptscriptstyle(I\!I)}_\alpha\rangle \big) \big(\widehat{U}_\alpha-\langle\widehat{U}_\alpha\rangle\big) \right\rangle }{ \sqrt{\operatorname{Var}\big(\Uhat^{\scriptscriptstyle(I\!I)}_\alpha\big) \operatorname{Var}\big(\widehat{U}_\alpha\big)} } \end{align} is about $0.97$ (!) for the entire range of sample sizes $N$. To a good approximation, $\widehat{U}_\alpha$ and $\Uhat^{\scriptscriptstyle(I\!I)}_\alpha$ are related to each other according to a power law, $\Uhat^{\scriptscriptstyle(I\!I)}_\alpha \approx c_N \widehat{U}_\alpha^{\ \gamma_N}$, where the exponent $\gamma_N$ and the prefactor $c_N$ depend on the sample size $N$ (and $\alpha$). We note that $\gamma_N$ shows a phase-transition-like behavior: for small $N$, it stays approximately constant near two; right before the onset of the large $N$ limit, it switches suddenly to a value close to one, where it finally remains. Figure \ref{fig:8} highlights the decrease of the average $\left\langle a\right\rangle$ with decreasing mean square error \gl{msefull} of two-sided estimation. The small $N$ behavior is given by the upper right part of the graph, where $\left\langle a\right\rangle$ is close to its upper bound together with a large mean square error of $\widehat{\df}$. With increasing sample size, the mean square error starts to drop somewhat sooner than $\left\langle a\right\rangle$; however, at the onset of the large $N$ limit, they both drop and suggest a linear relation, as can be seen in the inset for small values of $\left\langle a\right\rangle$. The latter shows that $\left\langle a \right\rangle$ decreases to zero proportionally to $\frac{1}{N}$ for large $N$ (this is confirmed by a direct check, but not shown here). The next point is to clarify the correlation of single values of the convergence measure with their corresponding free energy estimates. For this issue, Fig.~\ref{fig:9} is most informative, showing the deviations $\widehat{\df}-\Delta f$ as a function of the corresponding values of $a$ for many individual observations.
The figure makes clear that there is a \textit{strong relation, but no one-to-one correspondence} between $a$ and $\widehat{\df}-\Delta f$: for large $N$, both $a$ and $\widehat{\df}-\Delta f$ approach zero with very weak correlations between them. However, the situation is different for small sample sizes $N$, where the bias $\left\langle\widehat{\df}-\Delta f\right\rangle$ is considerable. There, the typically observed large deviations occur together with values of $a$ close to the upper bound, whereas the atypical events with small (negative) deviations come together with values of $a$ well below the upper limit. Therefore, small values of $a$ detect exceptional events if $N$ is well below the large $N$ limit, and ordinary events if $N$ is large. To make this relation more visible, we split the estimates $\widehat{\df}$ into the mutually exclusive events $a\geq 0.9$ and $a< 0.9$. The statistics of the $\widehat{\df}$ values within these cases are depicted in Fig.~\ref{fig:10}, where normalized histograms, i.e.\ estimates of the constrained probability densities $p(\widehat{\df}|a\!\geq\!0.9)$ and $p(\widehat{\df}|a\!<\!0.9)$, are shown. The unconstrained probability density of $\widehat{\df}$ can be reconstructed from a likelihood-weighted sum of the constrained densities, $p(\widehat{\df}) = p(\widehat{\df}|a\!\geq\!0.9) p_{\scriptscriptstyle a\geq0.9} + p(\widehat{\df}|a\!<\!0.9) p_{\scriptscriptstyle a<0.9}$. The likelihood ratios read $p_{\scriptscriptstyle a\geq0.9}/p_{\scriptscriptstyle a<0.9} = 6.2$ and $0.002$ for $N=32$ and $1000$, respectively. Finally, the inset of Fig.~\ref{fig:10} shows the average values of the constrained estimates $\widehat{\df}$ over $N$ with errorbars of $\pm$ one standard deviation, depending on the condition on $a$. \subsection{Gaussian work densities} For the second example, the work densities are chosen to be Gaussian, \begin{align} p_i(w) = \frac{1}{\sigma\sqrt{2\pi}} e^{-\frac{(w-\mu_i)^2}{2\sigma^2}}, \quad w\in\mathds{R}, \end{align} $i=0,1$. The fluctuation theorem \gl{fth} requires both densities to have the same variance $\sigma^2$, with mean values $\mu_0=\Delta f+\frac{1}{2}\sigma^2$ and $\mu_1=\Delta f-\frac{1}{2}\sigma^2$. Hence, $p_{0}$ and $p_{1}$ are symmetric to each other with respect to $\Delta f$, $p_{0}(\Delta f+w)=p_{1}(\Delta f-w)$. As a consequence of this symmetry, the two-sided estimator with equal sample sizes $n_0$ and $n_1$, i.e.\ $\alpha=0.5$, is unbiased for any $N$. However, this does not mean that the limit of large $N$ is reached immediately. In analogy to the previous example, we proceed by presenting the statistical properties of $a$. Choosing $\sigma = 6$ and, without loss of generality, $\Delta f=0$, we carry out $10^4$ estimations of $\Delta f$ over a range of sample sizes $N$. The forward fraction is chosen as $\alpha=0.5$ and, for comparison, $\alpha=0.999$ and $\alpha=0.99999$. In the latter two cases, the two-sided estimator is biased for small $N$. We note that $\alpha=0.5$ is always the optimal choice for symmetric work densities, as it minimizes the asymptotic mean square error \gl{mse} with respect to $\alpha$ \cite{Hahn2009b}. \begin{figure} \includegraphics{figure11.eps} \caption{\label{fig:11}Gaussian work densities result in the displayed averaged estimates of $\Delta f$. For comparison, three different fractions $\alpha$ of forward work values are used (top).
Average values of the convergence measure $a$ corresponding to the estimates of the top panel (bottom).} \end{figure} Comparing the top and the bottom panel of Fig.~\ref{fig:11}, which show the statistics (mean value and standard deviation as errorbars) of the observed estimates $\widehat{\df}$ and of the corresponding values of $a$, we find a coherent behavior for all three values of $\alpha$. The trend of the average $\left\langle a\right\rangle$ shows the same features in all cases, in agreement with the trend found for exponential work densities. \begin{figure} \includegraphics{figure12.eps} \caption{\label{fig:12}Mean values of the overlap estimates $\widehat{U}_\alpha$ and $\Uhat^{\scriptscriptstyle(I\!I)}_\alpha$ of first and second order ($\alpha=0.5$).} \end{figure} As before, the characteristics of $a$ are understood by the slower convergence of $\Uhat^{\scriptscriptstyle(I\!I)}_\alpha$ compared to that of $\widehat{U}_\alpha$, as can be seen in Fig.~\ref{fig:12}. A scatter plot of $\Uhat^{\scriptscriptstyle(I\!I)}_\alpha$ versus $\widehat{U}_\alpha$ looks qualitatively like Fig.~\ref{fig:7}, but is not shown here. \begin{figure} \includegraphics{figure13.eps} \caption{\label{fig:13}The average convergence measure $\left\langle a\right\rangle$ plotted against the corresponding mean square error $\left\langle(\widehat{\df}-\Delta f)^2\right\rangle$ of the free energy estimates as a function of $N$.} \end{figure} Figure \ref{fig:13} compares the average convergence measures as functions of the mean square error of $\widehat{\df}$ for the three values of $\alpha$. For the range of small $\left\langle a\right\rangle$, all three curves agree and are linear. Again, $\left\langle a \right\rangle$ decreases proportionally to $\frac{1}{N}$ for large $N$. Noticeable for small $N$ is the shift of $\left\langle a \right\rangle$ towards smaller values with increasing $\alpha$. This results from the definition of $a$: the upper bound $1-\widehat{U}_\alpha$ of $a$ tends to zero in the limits $\alpha\to 0,1$, as then $\widehat{U}_\alpha\to1$. \begin{figure} \includegraphics{figure14.eps} \caption{\label{fig:14}(Color online) A scatter plot of the deviation $\widehat{\df}-\Delta f$ versus the convergence measure $a$ for many individual estimates as a function of the sample size $N$ ($\alpha=0.5$).} \end{figure} The relation of single free energy estimates $\widehat{\df}$ to the corresponding $a$ values can be seen in the scatter plot of Fig.~\ref{fig:14}. The mirror symmetry of the plot originates from the symmetry of the work densities and the choice $\alpha=0.5$, i.e.\ from the unbiasedness of the two-sided estimator. In contrast to the foregoing example, the correlation between $\widehat{\df}-\Delta f$ and $a$ vanishes for any value of $N$. Despite the lack of any correlation, the figure reveals a strong relation between the deviation $\widehat{\df}-\Delta f$ and the value of $a$: they converge to zero together for large $N$. \begin{figure} \includegraphics{figure15.eps} \caption{\label{fig:15}Averaged two-sided estimates of $\Delta f$ as a function of the total sample size $N$ for the constraints $a\ge0.9$, $a$ unconstrained, and $a<0.9$ ($\alpha=0.99999$).} \end{figure} Lastly, Fig.~\ref{fig:15} shows averages of constrained $\Delta f$ estimates for the mutually exclusive conditions $a\geq0.9$ and $a<0.9$, now with $\alpha=0.99999$ in order to incorporate some bias. For reference, the sampling setup of this subsection is sketched below.
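This sketch again reuses the estimator and measure sketches from above; the seed is arbitrary, and the clamping of $n_0$ merely keeps both samples non-empty for extreme $\alpha$ at small $N$:
\begin{verbatim}
# Sketch of the Gaussian setup: sigma = 6 and Delta f = 0 imply
# mu0 = +sigma^2/2 and mu1 = -sigma^2/2; the forward fraction alpha
# controls the split of the total sample size N.
import numpy as np
rng = np.random.default_rng(1)          # seed chosen arbitrarily

sigma, df_exact = 6.0, 0.0
mu0 = df_exact + 0.5 * sigma ** 2
mu1 = df_exact - 0.5 * sigma ** 2

def one_run(N, alpha=0.5):
    n0 = max(1, min(N - 1, int(round(alpha * N))))
    w0 = rng.normal(mu0, sigma, n0)     # forward sample
    w1 = rng.normal(mu1, sigma, N - n0) # reverse sample
    df_hat, _, _ = bennett_estimate(w0, w1)
    return df_hat, convergence_measure(w0, w1, df_hat)
\end{verbatim}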
We observe the same characteristics as before, cf.\ the inset of Fig.~\ref{fig:10}: the condition $a<0.9$ filters the estimates $\widehat{\df}$ which are closer to the true value. \subsection{The general case} The characteristics of the convergence measure are dominated by contributions of the work densities inside and near the region where the overlap density $p_{\al}(w)$, Eq.~\gl{pa}, has most of its mass. We call this region the overlap region. In the overlap region, the work densities may show one of the following characteristic relations of shape: \begin{enumerate} \item Having their maxima at larger and smaller values of work, respectively, the forward and reverse work densities both drop towards the overlap region. Hence, both densities sample the overlap region only through rare events, which are responsible for the behavior of the convergence measure. \item Both densities decrease with increasing $w$, and the overlap region is well sampled by the forward work density compared with the reverse density. In particular, the ``rare'' events $w<\Delta f$ in the forward direction are much more readily available than the rare events $w>\Delta f$ in the reverse direction. Hence, more or less typical events of one direction together with atypical events of the other direction are responsible for the behavior of the convergence measure. The same holds if both densities increase with $w$. \item More generally, the work densities are some kind of interpolation between the above two cases. \item Finally, there remain some exceptional cases, for instance if the forward and reverse work densities have different supports or do not obey the fluctuation theorem at all. \end{enumerate} In the exceptional cases, the convergence measure fails to work, since it requires that the forward and reverse work densities have the same support and that the densities are related to each other via the fluctuation theorem \gl{fth}. In all other cases, the convergence measure will certainly work and will show a similar behavior, regardless of the detailed nature of the densities. This can be explained as follows. In the preceding subsections, we have investigated exponential and Gaussian work densities, two examples that differ in their very nature. Exponential work densities cover case two and Gaussians cover case one, yet they show the same characteristics of $a$. This means that the characteristics of the convergence measure are insensitive to the individual nature of the work densities, as long as they have the same support and obey the fluctuation theorem. At this point, we wish to point out some subtleties in the present paper. While the measure of convergence is robust with respect to the nature of the work densities, some heuristic or pedagogic explanations in the text are written with regard to the typical case one, where the overlap region is sampled only by rare events. This concerns mainly Sec.~\ref{sec:2}, where we speak about effective cut-off values in the context of the neglected tail model. These effective cut-off values would become void if we tried to explain the bias of exponential work densities qualitatively via the neglected tail model. The explanations in the next section, too, are mainly focused on the typical case one; this concerns the passages where we speak about rare events. Nevertheless, the main and essential statements are valid for all cases.
The most important property of $a$ is its almost \textit{simultaneous} convergence with the free energy estimator $\widehat{\df}$ to an \textit{a priori} known value. This fact is used to develop a convergence criterion in the next section. \section{The convergence criterion}\label{sec:5} Having elaborated the statistical properties of the convergence measure, we are finally interested in the convergence of a \textit{single} free energy estimate. In contrast to averages of many independent running estimates, estimates based on an individual realization are not smooth in $N$; see e.g.\ Fig.~\ref{fig:1}. For small $N$, typically $\Uhat^{\scriptscriptstyle(I\!I)}_\alpha$ underestimates $U_{\al}$ more than $\widehat{U}_\alpha$ does, pushing $a$ close to its upper bound. With increasing $N$, $\widehat{\df}$ starts to ``converge'', typically in a non-smooth manner. The convergence of $\widehat{\df}$ is triggered by the occurrence of rare events. Whenever such a rare event in the important tails of the work densities gets sampled, $\widehat{\df}$ jumps, and between such jumps, $\widehat{\df}$ rather stays on a stable plateau. The measure $a$ is triggered by the same rare events, but the changes in $a$ are smaller, unless convergence starts happening. Typically, the rare events that bring $\widehat{\df}$ near to its true value are the rare events which change the value of $a$ drastically. In the typical case, these rare events let $a$ even undershoot below zero before $\widehat{\df}$ and $a$ finally converge. The features of the convergence measure, \begin{enumerate} \item it is bounded, $a\in(-1,\,1-\widehat{U}_\alpha]$, \item it starts for small $N$ at its upper bound, \item it converges to a known value, $a\to0$, \item and typically it converges almost simultaneously with $\widehat{\df}$, \end{enumerate} simplify the task of monitoring the convergence significantly, since it is far easier to compare estimates of $a$ with the known value zero than to monitor the convergence of $\widehat{\df}$ to an unknown target value. The characteristics of the convergence measure enable us to state: typically, if $a$ is close to zero, $\widehat{\df}$ has converged. Deviations from the typical situation are possible. For instance, $\widehat{\df}$ may not show such clear jumps, and neither may $a$. Occasionally, $\widehat{\df}$ and $a$ may also fluctuate exceedingly strongly. Thus, a single value of $a$ close to zero does not guarantee convergence of the free energy estimate, as can be seen from a few individual events in the scatter plot of Fig.~\ref{fig:14} that fail to give a correct estimate while $a$ is close to zero. A single random realization may give rise to a fluctuation that brings $a$ close to zero by chance, a fact that needs to be distinguished from $a$ having converged to zero. The difference between random chance and convergence is revealed by increasing the sample size, since it is highly unlikely that $a$ stays close to zero by chance. It is the \textit{behavior} of $a$ with increasing $N$ that needs to be taken into account in order to establish an equivalence between $a\to0$ and $\widehat{\df}\to\Delta f$. This allows us to state the convergence criterion: \begin{itemize} \item[] \textit{if $a$ fluctuates close around zero, convergence is assured}, \end{itemize} implying that if $a$ fluctuates around zero, $\widehat{\df}$ fluctuates around its true value $\Delta f$, the bias vanishes, and the mean square error reaches its asymptotics, which can be estimated using Eq.~\gl{Xhat}. A sketch of how this criterion can be monitored in practice is given below.
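The following sketch computes running estimates from growing prefixes of one pair of work samples on a logarithmic grid of $N$, reusing the estimator and measure sketches from above; the tolerance and the grid are ad hoc choices of ours, not part of the criterion itself:
\begin{verbatim}
# Sketch of monitoring the convergence criterion in practice.
import numpy as np

def monitor(w0_all, w1_all, alpha=0.5, tol=0.05):
    N_max = len(w0_all) + len(w1_all)
    history = []
    for N in np.unique(np.logspace(1, np.log10(N_max), 40).astype(int)):
        n0 = min(len(w0_all), max(1, int(round(alpha * N))))
        n1 = min(len(w1_all), max(1, N - n0))
        w0, w1 = w0_all[:n0], w1_all[:n1]   # growing prefixes
        df_hat, _, X_hat = bennett_estimate(w0, w1)
        a = convergence_measure(w0, w1, df_hat)
        history.append((n0 + n1, df_hat, X_hat, a))
    # accept convergence only if a stays small over the last decade of N
    tail = [a for (N, _, _, a) in history if N >= N_max / 10]
    return history, all(abs(a) < tol for a in tail)
\end{verbatim}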
Here, ``$a$ fluctuating close around zero'' means that it does so over a suitable range of sample sizes, extending over an order of magnitude or more. \section{Application}\label{sec:6} As an example, we apply the convergence criterion to the calculation of the excess chemical potential $\mu^{ex}$ of a Lennard-Jones fluid. Using Metropolis Monte Carlo simulation \cite{Metropolis1953} of a fluid of $N_p$ particles, the forward work is defined as the energy increase when a particle is inserted at random into a given configuration \cite{Widom1963}, whereas the reverse work is defined as the energy decrease when a random particle is deleted from a given $(N_p+1)$-particle configuration. The densities $p_0(w)$ and $p_1(w)$ of forward and reverse work obey the fluctuation theorem \gl{fth} with $\Delta f=\mu^{ex}$ \cite{Hahn2009a}. Thus, Bennett's acceptance ratio method can be applied to the calculation of the chemical potential. Details of the simulation are reported in Ref.~\cite{Hahn2009a}. Here, the parameter values chosen read: $N_p=120$, reduced temperature $T^*=1.2$, and reduced density $\rho^*=0.5$. \begin{figure} \includegraphics{figure16.eps} \caption{\label{fig:16} Running estimates of the excess chemical potential $\mu^{ex}$ as a function of the sample size $N$ ($\alpha=0.9$). The inset displays the corresponding values of the convergence measure $a$.} \end{figure} Drawing work values up to a total sample size of $10^6$ with fraction $\alpha=0.9$ of forward draws (a near-optimal choice \cite{Hahn2009b}), the successive estimates of the chemical potential together with the corresponding values of the convergence measure are shown in Fig.~\ref{fig:16}. The dashed horizontal line does not show the exact value of $\mu^{ex}$, which is unknown, but rather the value of the last estimate with $N=10^6$. Taking a closer look at the behavior of the convergence measure with increasing $N$, we observe $a$ near unity for $N\leq 10^2$, indicating the low $N$ regime and the lack of rare events. Then, a sudden drop to near zero occurs at $N=10^2$, which coincides with a large jump of the estimate of $\mu^{ex}$, followed by large fluctuations of $a$ with strong negative values in the regime $N=10^2$ to $10^4$. This behavior indicates that the important but rare events which trigger the convergence of the $\mu^{ex}$ estimate are now sampled, but with strongly fluctuating relative frequency, which in specific cases causes the negative values of $a$ (because of too many rare events!). Finally, with $N > 10^4$, $a$ equilibrates and converges to zero. The latter is observed over two orders of magnitude, such that we can conclude that the latest estimate of $\mu^{ex}$ with $N=10^6$ has surely converged and yields a reliable value of the chemical potential. The confidence interval of the estimate can safely be calculated as the square root of Eq.~\gl{Xhat} (one standard deviation), and we obtain explicitly $\widehat{\mu^{ex}} = -2.451 \pm 0.005$. \begin{figure} \includegraphics{figure17.eps} \caption{\label{fig:17} Statistics of estimates of the excess chemical potential: shown are the average value and the standard deviation (as errorbars) as a function of the sample size $N$.
The statistics of the corresponding values of the convergence measure are shown in the inset.} \end{figure} Interested in the statistical behavior of $a$ for the present application, we carried out 270 simulation runs up to $N=10^4$ to obtain the average values and standard deviations of $\widehat{\mu^{ex}}$ and $a$, which are depicted in Fig.~\ref{fig:17}. The dashed line marks the same value as that in Fig.~\ref{fig:16}. Again, we observe the same qualitative behavior of $a$ as in the foregoing examples of Sec.~\ref{sec:4}, in particular a positive average value $\left\langle a \right\rangle$ and a convergence to zero which occurs simultaneously with the convergence of Bennett's acceptance ratio method. \section{Conclusions}\label{sec:7} Since their formulation a decade ago, the Jarzynski equation and the Crooks fluctuation theorem have given rise to intensified research on nonequilibrium techniques for free energy calculations. Despite the variety of new methods, in general little is known about their statistical properties. In particular, it is often unclear whether the methods actually converge to the desired value of the free energy difference $\Delta f$, and if so, it remains in question whether convergence happened within a given calculation. This is of great concern, as the calculations are usually strongly biased before convergence starts happening. In consequence, it is impossible to state the result of a single calculation of $\Delta f$ with a reliable confidence interval unless a convergence measure is evaluated. In this paper, we presented and tested a quantitative measure of convergence for two-sided free energy estimation, i.e.\ Bennett's acceptance ratio method, which is intimately related to the fluctuation theorem. From this follows a criterion for convergence which relies on monitoring the convergence measure $a$ within a running estimation of $\Delta f$. The heart of the convergence criterion is the nearly simultaneous convergence of the free energy calculation and the convergence measure $a$. Whereas the former converges towards the unknown value $\Delta f$, which makes it difficult or even impossible to decide when convergence actually takes place, the latter converges to an \textit{a priori} known value. If convergence is detected with the convergence criterion, the calculation results in a reliable estimate of the free energy difference together with a precise confidence interval.
\section{Introduction} \label{sec1} Transfer reactions below Coulomb barrier energies are known to be a powerful technique to determine asymptotic properties of the overlap between the initial and final state wave functions, essentially free from uncertainties associated with optical potentials and the structural complexity of wave functions in the nuclear interior region.~\cite{Satchler} Recently, subbarrier $\alpha$ transfer reactions have been used to indirectly measure cross sections of $\alpha$-induced reactions of astrophysical interest.~\cite{Eric1,Eric2} In Ref.~\citen{Eric1}, Johnson and collaborators determined the reaction rate of $^{13}$C($\alpha,n$)$^{16}$O by measuring the $^{13}$C($^{6}$Li$,d$)$^{17}$O(6.356~MeV, $1/2^+$) reaction; for simplicity, we henceforth denote the final state of $^{17}$O as $^{17}$O$^*$. The $^{13}$C($\alpha,n$)$^{16}$O reaction is considered to be important as a neutron source for the slow neutron capture process (s-process) taking place in asymptotic giant branch (AGB) stars.~\cite{Iben} In the cross section formula, Eq.~(1) of Ref.~\citen{Eric1}, of the $^{13}$C($\alpha,n$)$^{16}$O reaction based on the $R$-matrix approach~\cite{TM}, the asymptotic normalization coefficient (ANC) for $\alpha + ^{13}$C $\longrightarrow$ $^{17}$O$^*$, $C_{\alpha ^{13}\mbox{\scriptsize C}}^{^{17}\mbox{\scriptsize O}^*}$, is the only missing quantity. Throughout this study we consider the Coulomb-modified ANC,~\cite{Eric1} i.e., the value divided by the Gamma function $\Gamma (2+\eta)$, where $\eta$ is the Sommerfeld parameter for the $\alpha$-$^{13}$C system. In Ref.~\citen{Eric1}, the $\alpha$ transfer reaction $^{13}$C($^{6}$Li$,d$)$^{17}$O$^*$ was analyzed with the distorted-wave Born approximation (DWBA), disregarding the breakup effects of $^6$Li and $^{17}$O$^*$, and $(C_{\alpha ^{13}\mbox{\scriptsize C}}^{^{17}\mbox{\scriptsize O}^*})^2 =0.89 \pm 0.23$~fm$^{-1}$ was extracted. The ground state energy of $^{6}$Li is, however, just 1.47 MeV below the $\alpha+d$ threshold. Furthermore, the binding energy of $^{17}$O$^*$, i.e., $^{17}$O(6.356~MeV, $1/2^+$), with respect to the $\alpha+^{13}$C threshold is only 3~keV. Therefore, to extract a reliable value of $C_{\alpha ^{13}\mbox{\scriptsize C}}^{^{17}\mbox{\scriptsize O}^*}$, one should investigate how important the breakup of $^6$Li and $^{17}$O$^*$ is in the $\alpha$ transfer reaction. The purpose of the present Letter is to analyze the $^{13}$C($^{6}$Li$,d$)$^{17}$O$^*$ reaction at 3.6 MeV (the incident energy of $^{6}$Li) with the three-body ($\alpha+d+^{13}$C) model and to determine $C_{\alpha ^{13}\mbox{\scriptsize C}}^{^{17}\mbox{\scriptsize O}^*}$ accurately. The roles of $^6$Li breakup in the initial channel and $^{17}$O breakup in the final channel are investigated with the continuum-discretized coupled-channels method (CDCC)~\cite{CDCC1,CDCC2}. As shown in \S\ref{sec3-2}, the former is found to be important as a large back-coupling to the elastic channel, while the latter is confirmed to be much less important. CDCC was proposed and developed by the Kyushu group and has been highly successful in quantitatively reproducing observables of reaction processes in which virtual or real breakup effects of the projectile are significant.~\cite{Ogata1,Ogata2} CDCC treats continuum states of the projectile nonperturbatively, with reasonable truncation and discretization, and thus can describe the breakup effects with very high accuracy. Note that the theoretical foundation of CDCC was established in Refs.~\citen{AYK,AKY,Piya}.
The transition from the $^{6}$Li$+^{13}$C channel to the $d+^{17}$O$^*$ channel is described with the Born approximation; the breakup states of $^6$Li are explicitly taken into account in the calculation of the transfer process. The ANC thus extracted is compared with the result of the previous DWBA analysis. This paper is organized as follows. In \S\ref{sec2} we formulate the three-body wave functions in the initial and final channels and the transfer cross section of the $^{13}$C($^{6}$Li$,d$)$^{17}$O$^*$ reaction. The numerical setting is described in \S\ref{sec3-1}. Breakup effects of $^6$Li and $^{17}$O are investigated in \S\ref{sec3-2}, and the transfer cross section is analyzed and the ANC is extracted in \S\ref{sec3-3}. In \S\ref{sec3-4} we examine the convergence with respect to the model space of CDCC, and in \S\ref{sec3-5} we discuss the present result in comparison with the previous DWBA result. Finally, we give a summary in \S\ref{sec4}. \section{Formulation} \label{sec2} \begin{figure}[htb] \begin{center} \includegraphics[width=0.9\textwidth,clip]{fig1.eps} \caption{Illustration of the three-body system in the initial and final channels.} \label{fig1} \end{center} \end{figure} In the present calculation, we work with the $\alpha+d+^{13}$C model shown in Fig.~\ref{fig1}. The transition matrix ($T$ matrix) for the transfer reaction $^{13}$C($^{6}$Li$,d$)$^{17}$O$^*$ is given by \begin{equation} T_{fi}=S_{\rm exp}^{1/2} \left\langle \Psi_f^{(-)} \left| V_{\rm tr} \right| \Psi_i^{(+)} \right\rangle, \label{tfi} \end{equation} where $\Psi_i^{(+)}$ and $\Psi_f^{(-)}$ are the three-body wave functions of the system in the initial and final channels, respectively, and $V_{\rm tr}$ is the transition operator of the transfer process. We put a normalization constant $S_{\rm exp}^{1/2}$ in $T_{fi}$, the physical meaning of which is discussed below. The three-body wave function $\Psi_i^{(+)}$ in the initial state satisfies the Schr\"odinger equation \begin{equation} (H_i-E)\Psi_i^{(+)}({\bm r}, {\bm R})=0, \label{sch1} \end{equation} where $E$ is the total energy of the system in the center-of-mass (c.m.) frame and ${\bm r}$ (${\bm R}$) is the coordinate of $\alpha$ ($^6$Li) relative to $d$ ($^{13}$C). The Hamiltonian $H_i$ is given by \begin{equation} H_i=T_{\bm R}+V_{d {\rm C}}^{({\rm N})}(R_{d{\rm C}}) +V_{\alpha {\rm C}}^{({\rm N})}(R_{\alpha {\rm C}}) +V^{\rm Coul}(R) +h_i, \label{hi} \end{equation} where $T_{\bm R}$ is the kinetic energy operator associated with ${\bm R}$ and $h_i$ is the internal Hamiltonian of $^6$Li. We use $V_{\rm XY}^{({\rm N})}$ for the nuclear interaction between X and Y; each of X and Y represents a particle, i.e., $d$, $\alpha$, or C ($^{13}$C). Similarly, ${\bm R}_{\rm XY}$ denotes the relative coordinate between X and Y. $V^{\rm Coul}$ is the Coulomb interaction between $^6$Li and $^{13}$C. Note that we neglect the Coulomb breakup of $^6$Li, which can be justified by the fact that the effective charge of the $\alpha+d$ system for the electric dipole transition is almost zero. Furthermore, as shown in \S\ref{sec3-2}, it is numerically confirmed that Coulomb breakup processes due to electric quadrupole and higher multipoles are negligibly small.
As the partial wave $\Psi_{i;JM}$ of $\Psi_i^{(+)}$, we adopt the following CDCC wave function: \begin{equation} \Psi_{i;JM}^{\rm CDCC}({\bm r}, {\bm R}) = \sum_{j=0}^{j_{\rm max}} \sum_{\ell=0}^{\ell_{\rm max}} \sum_{L=|J-\ell|}^{J+\ell} \frac{\hat{\phi}_{j,\ell} (r)}{r} \frac{\hat{\chi}_{j,\ell,L}^J (R)}{R} \left[ i^\ell Y_\ell (\hat{\bm r}) \otimes i^L Y_L (\hat{\bm R}) \right]_{JM}, \label{cdccwf} \end{equation} where $J$ and $M$ are the total angular momentum and its $z$-component, respectively, and $\ell$ ($L$) is the orbital angular momentum between $\alpha$ and $d$ ($^6$Li and $^{13}$C). We disregard the intrinsic spin of each particle for simplicity. The radial part of the $^6$Li wave function is denoted by $\hat{\phi}_{j,\ell} (r)/r$, where $j$ is the energy index; $j=0$ corresponds to the ground state and $j \neq 0$ to discretized continuum states obtained by the momentum-bin discretization.~\cite{CDCC1} The internal wave function $\hat{\Phi}_{j,\ell,m}$ given by \begin{equation} \hat{\Phi}_{j,\ell,m}({\bm r}) = \frac{\hat{\phi}_{j,\ell} (r)}{r} i^\ell Y_{\ell m}(\hat{\bm r}) \end{equation} satisfies \begin{equation} \left\langle \hat{\Phi}_{j',\ell',m'}({\bm r}) \left| h_i \right| \hat{\Phi}_{j,\ell,m}({\bm r}) \right\rangle = \epsilon_{j,\ell}\delta_{j'j}\delta_{\ell' \ell}\delta_{m'm}. \label{onp} \end{equation} Inserting Eqs.~(\ref{hi}) and (\ref{cdccwf}) into Eq.~(\ref{sch1}), making use of Eq.~(\ref{onp}), and denoting the channel indices $\{j,\ell,L \}$ collectively by $c$, one obtains the following CDCC equation: \begin{equation} \left[ -\frac{\hbar^2}{2\mu}\frac{d^2}{dR^2} +\frac{\hbar^2}{2\mu}\frac{L(L+1)}{R^2} +V^{\rm Coul}(R) -E_{j,\ell} \right] \hat{\chi}_{c}^J (R) = -\sum_{c'} F_{cc'}(R) \hat{\chi}_{c'}^J (R), \label{cdcceq} \end{equation} where $\mu$ is the reduced mass of the $^6$Li-$^{13}$C system, $E_{j,\ell}=E-\epsilon_{j,\ell}$, and \begin{equation} F_{cc'}(R)= \left\langle \frac{\hat{\phi}_{j,\ell} (r)}{r} \left[ i^\ell Y_\ell \otimes i^L Y_L \right]_{JM} \left| V_{d {\rm C}}^{({\rm N})} +V_{\alpha {\rm C}}^{({\rm N})} \right| \frac{\hat{\phi}_{j',\ell'} (r)}{r} \left[ i^{\ell'} Y_{\ell'} \otimes i^{L'} Y_{L'} \right]_{JM} \right\rangle_{{\bm r},\hat{\bm R}}. \label{ff} \end{equation} The CDCC equation is solved numerically up to $R=R_{\rm max}$, and $\hat{\chi}_{c}^J$ is matched to the usual boundary condition \begin{equation} \hat{\chi}_{c}^J(R) \to \left\{ \begin{array} [c]{ll}% U^{(-)}_{L,\eta_{j,\ell}} (K_{j,\ell} R)\delta_{cc_0} -\sqrt{K_{0,\ell_0}/K_{j,\ell}} \hat{S}_{cc_0}^J U^{(+)}_{L,\eta_{j,\ell}} (K_{j,\ell} R) & {\rm for } \; E_{j,\ell} \ge 0 \\ -\hat{S}_{cc_0}^J W_{-\eta_{j,\ell},L+1/2} (-2i K_{j,\ell} R) & {\rm for } \; E_{j,\ell} < 0 \end{array} \right., \label{bc} \end{equation} where $K_{j,\ell}=\sqrt{2 \mu E_{j,\ell}}/\hbar$, $U^{(-)}_{L,\eta_{j,\ell}}$ ($U^{(+)}_{L,\eta_{j,\ell}}$) is the incoming (outgoing) Coulomb wave function with the Sommerfeld parameter $\eta_{j,\ell}$, and $W_{-\eta_{j,\ell},L+1/2}$ is the Whittaker function. The subscript 0 of $\ell$ and $c$ represents the incident channel. With the $S$-matrix elements $\hat{S}_{cc_0}^J$ in Eq.~(\ref{bc}), one may obtain any physical quantities with the standard procedure, except that the discrete results must be smoothed when breakup observables are investigated.
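The momentum-bin discretization entering Eq.~(\ref{cdccwf}) can be illustrated with a short numerical sketch: square-integrable bin states are built by averaging continuum wave functions over momentum intervals of width $\Delta k$ and normalizing by $1/\sqrt{\Delta k}$. The continuum wave function below is a free $s$-wave, a placeholder assumed only to keep the sketch self-contained; in the actual calculation it solves the $\alpha$-$d$ scattering problem:
\begin{verbatim}
# Minimal sketch of momentum-bin discretization: each discretized continuum
# state is the average of scattering states phi(k, r) over one momentum bin,
# normalized by 1/sqrt(dk). phi below is a free s-wave placeholder, NOT the
# actual alpha-d scattering solution used in the CDCC calculation.
import numpy as np

k_max, dk = 2.0, 0.02                     # fm^-1, values quoted in Sec. 3.1
r = np.linspace(0.01, 60.0, 2000)         # radial grid up to r_max = 60 fm

def phi(k, r):
    return np.sin(k * r)                  # placeholder continuum wave

def bin_state(k_lo, k_hi, nq=50):
    ks = np.linspace(k_lo, k_hi, nq)
    avg = np.trapz([phi(k, r) for k in ks], ks, axis=0)
    return avg / np.sqrt(k_hi - k_lo)     # square-integrable bin state

edges = np.linspace(0.0, k_max, int(round(k_max / dk)) + 1)
states = [bin_state(a, b) for a, b in zip(edges[:-1], edges[1:])]
print(len(states), "bin states per partial wave")   # 100, matching j_max
\end{verbatim}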
Since the CDCC wave function $\Psi_i^{{\rm CDCC}}$ can be regarded, with very high accuracy, as an exact solution to Eq.~(\ref{sch1}) in the evaluation of $T$-matrix elements that contain a short-range interaction, one may define $V_{\rm tr}$ by \begin{equation} V_{\rm tr}=V_{\alpha d}(r) +V_{d {\rm C}}(R_{d{\rm C}}) +V_{\alpha {\rm C}}(R_{\alpha {\rm C}}) -V_{\rm aux} \label{vres} \end{equation} with any choice of the auxiliary potential $V_{\rm aux}$. In Eq.~(\ref{vres}), $V_{\alpha d}$, $V_{d {\rm C}}$, and $V_{\alpha {\rm C}}$ contain both nuclear and Coulomb parts. Note that $V_{\rm aux}$ determines the final state wave function $\Psi_f^{(-)}$. In the present study, we adopt \begin{equation} V_{\rm aux}=V_{d {\rm C}}(R_{d{\rm C}}) +V_{\alpha {\rm C}}(R_{\alpha {\rm C}}) +V_{\alpha d}^{({\rm C})}(r), \label{vaux} \end{equation} which trivially gives \begin{equation} V_{\rm tr}=V_{\alpha d}^{({\rm N})}(r). \end{equation} The superscript (C) of $V_{\alpha d}$ in Eq.~(\ref{vaux}) represents the Coulomb part of the interaction. We then have \begin{equation} (H_f-E)\Psi_f^{(+)}({\bm R}_{\alpha {\rm C}}, {\bm R}_{d {\rm O}})=0 \label{sch2} \end{equation} with \begin{equation} H_f= T_{{\bm R}_{d {\rm O}}} +V_{d {\rm C}}(R_{d{\rm C}}) +V_{\alpha d}^{({\rm C})}(r) +h_f, \label{hf} \end{equation} where $T_{{\bm R}_{d {\rm O}}}$ is the kinetic energy operator associated with ${\bm R}_{d {\rm O}}$ and $h_f$ is the internal Hamiltonian of $^{17}$O. Note that we here consider a Schr\"odinger equation for $\Psi_f^{(+)}$, which is the time-reversal of $\Psi_f^{(-)}$. One can easily obtain the form of $\Psi_f^{(+)}$ based on CDCC, $\Psi_f^{{\rm CDCC}(+)}$, just in the same way as in the initial channel, except that i) we should include Coulomb breakup of $^{17}$O, ii) we have no nuclear part of $V_{\alpha d}$, and iii) the bound state of $^{17}$O at 6.356~MeV is a p-wave state that generates both monopole and quadrupole interactions between $d$ and $^{17}$O; the latter also causes a change in the $d$-$^{17}$O orbital angular momentum, an effect called reorientation. Note that $V_{d {\rm C}}$ in Eq.~(\ref{hf}) contains both nuclear and Coulomb parts, as mentioned above. It is shown in \S\ref{sec3-2} that the $^{17}$O breakup channels have very small ($\sim 5$\%) effects on the $d$-$^{17}$O elastic scattering. Furthermore, the quadrupole interaction is found to be negligibly small (see Fig.~\ref{fig2}). Then we can approximate \begin{equation} \Psi_f^{{\rm CDCC}(-)} \approx \varphi_0({\bm R}_{\alpha {\rm C}})\xi_0^{(-)}({\bm R}_{d {\rm O}}) \equiv \Psi_f^{{\rm 1ch}(-)}, \label{final-wf} \end{equation} where $\varphi_0({\bm R}_{\alpha {\rm C}})$ is the relative wave function between $\alpha$ and $^{13}$C in $^{17}$O$^*$, and $\xi_0^{(-)}({\bm R}_{d {\rm O}})$ is the distorted wave function obtained by the single-channel calculation, in which both the breakup channels and the aforementioned quadrupole interaction are switched off. In the calculation of $T_{fi}$, we make the zero-range approximation; the strength $D_{j,\ell}$ of the zero-range $\alpha$-$d$ interaction is given by \begin{equation} D_{j,\ell}= \int \hat{\phi}^*_{j,\ell} (r) V_{\alpha d}^{({\rm N})}(r) \hat{\phi}_{j,\ell} (r) dr. \end{equation} The finite-range correction to the zero-range calculation of $T_{fi}$ is made with the standard prescription.~\cite{Satchler} The validity of this approximation can be assessed from the magnitude of the correction. We use $\Psi_i^{(+)}$ calculated with CDCC, while $\Psi_f^{{\rm 1ch}(-)}$ of Eq.~(\ref{final-wf}) is adopted as $\Psi_f^{(-)}$, in the evaluation of $T_{fi}$.
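For orientation, the zero-range strength defined above is straightforward to evaluate numerically once $\hat{\phi}_{j,\ell}$ and $V_{\alpha d}^{({\rm N})}$ are at hand. The sketch below uses the $\ell=0$ Gaussian interaction quoted in \S\ref{sec3-1}, but with a normalized placeholder in place of the actual $^6$Li ground-state wave function, which would come from the structure calculation:
\begin{verbatim}
# Sketch: zero-range strength D_{j,l} = int phi* V phi dr for j = l = 0.
# V is the l = 0 Gaussian alpha-d potential quoted in Sec. 3.1; phi is a
# normalized PLACEHOLDER, not the actual 6Li ground-state wave function.
import numpy as np

r = np.linspace(0.0, 60.0, 6000)

def V(r):
    return -105.5 * np.exp(-(r / 2.191)**2) + 46.22 * np.exp(-(r / 1.607)**2)

phi = r * np.exp(-r / 2.0)              # placeholder radial function
phi /= np.sqrt(np.trapz(phi**2, r))     # normalize: int |phi|^2 dr = 1

D00 = np.trapz(phi * V(r) * phi, r)     # MeV
print(f"D_00 = {D00:.2f} MeV (placeholder wave function)")
\end{verbatim}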
\section{Results and discussion} \label{sec3} \subsection{Numerical input} \label{sec3-1} The $\alpha$-$d$ wave function in $\Psi_{i}^{\rm CDCC}$ is constructed by following Ref.~\citen{Sakuragi}, except that we do not use the orthogonality condition model (OCM) but exclude the Pauli-forbidden states by hand. We include $\ell=0$, 1, and 2 states. As for the nuclear part of the $\alpha$-$d$ interaction for $\ell=0$, we use \begin{equation} V_{\alpha d; \ell=0}^{({\rm N})}(r) = -105.5 \exp[-(r/2.191)^2] +46.22 \exp[-(r/1.607)^2]. \end{equation} For $\ell=2$, \begin{equation} V_{\alpha d; \ell=2}^{({\rm N})}(r) = -85.00 \exp[-(r/2.377)^2] +30.00 \exp[-(r/1.852)^2] \end{equation} is adopted. We neglect the intrinsic spin $S$ of $d$, and we then have only one resonance state, at 3.474~MeV (measured from the ground state energy) with a width of 0.45~MeV. It is found, however, that if we include $S$ and a spin-orbit interaction that reproduces the $1^+$, $2^+$, and $3^+$ resonance states, the resulting value of the ANC shown below changes by only 0.2\%. Thus, the separation of the $\ell=2$ resonance state into the $1^+$, $2^+$, and $3^+$ resonance states by the spin-orbit interaction plays no role in the present subbarrier $\alpha$ transfer reaction. For $\ell=1$, we adopt~\cite{Matsumoto} \begin{equation} V_{\alpha d}^{({\rm N})}(r) =-74.19 \exp[-(r/2.236)^2], \end{equation} which is used also for $\ell>2$ when we check the convergence of the CDCC calculation with respect to $\ell_{\rm max}$ (see \S\ref{sec3-4}). The Coulomb interaction between $\alpha$ and $d$ is evaluated by assuming a uniformly charged sphere with a charge radius $R_{\mathrm{C}}$ of 3.0~fm; see Eq.~(\ref{vcoul}) below. We take the maximum value $k_{\rm max}$ ($r_{\rm max}$) of the relative wave number $k$ (coordinate $r$) between $\alpha$ and $d$ to be 2.0~fm$^{-1}$ (60~fm); the maximum relative energy $\epsilon_{\rm max}$ is 62.4~MeV. We use $j_{\rm max}=100$ for each of the $\ell=0$, 1, and 2 states, and the width $\Delta k$ of the momentum bin is thus 0.02~fm$^{-1}$. The number of channels, $N_{\rm ch}$, in the CDCC equation (\ref{cdcceq}) is 601. When examining the effects of Coulomb breakup in Fig.~\ref{fig2}, we take $r_{\rm max}=300$~fm. As for the interactions of the $\alpha$-$^{13}$C and $d$-$^{13}$C systems, we use the parameters shown in Table~\ref{tab1}. The standard Woods-Saxon form is adopted: \begin{equation} V(x)=-V_0 f_{\rm V} (x) -iW_0 f_{\rm W} (x) + V_{\rm C}(x), \end{equation} where $f_{\rm V} (x)=(1+\exp[(x-R_{\rm V})/a_{\rm V}])^{-1}$ and $f_{\rm W} (x)=(1+\exp[(x-R_{\rm W})/a_{\rm W}])^{-1}$. The Coulomb interaction $V_{\rm C}(x)$ is given by \begin{equation} V_{\rm C}(x)= \left\{ \begin{array} [c]{ll}% \displaystyle\frac {Z_1 Z_2e^{2}}{2R_{\mathrm{C}}}\left( 3-\frac{x^{2}}{R_{\mathrm{C}}^{2}}\right) & \quad x\leq R_{\mathrm{C}}\\ \displaystyle\frac{Z_1 Z_2e^{2}}{x} & \quad x>R_{\mathrm{C}}% \end{array} \right., \label{vcoul} \end{equation} where $Z_1 Z_2$ is the product of the atomic numbers of the interacting particles. These parameters are used in the calculation of both the initial and final state wave functions. The parameter set for the $d$-$^{13}$C system is determined to reproduce the elastic scattering cross section obtained with the parameters in Ref.~\citen{Eric1}, which contain a spin-orbit part. We determine $V_0$ for the $\alpha$-$^{13}$C system to reproduce $\varepsilon_0$ (defined below), assuming that the orbital angular momentum is 1 and the number of forbidden states is 2.
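As a cross-check of the $\ell=0$ interaction above, one can integrate the radial Schr\"odinger equation numerically and locate its $s$-wave eigenvalues; the lowest one is the Pauli-forbidden state excluded by hand, and the next one should come out near the physical $^6$Li ground state, 1.47~MeV below the $\alpha+d$ threshold. The sketch below assumes standard $\alpha$ and deuteron masses (not quoted in this Letter):
\begin{verbatim}
# Sketch: s-wave eigenvalues of the l = 0 Gaussian alpha-d potential above,
# via outward Numerov integration and bisection on u(r_max; E).
import numpy as np

hbarc = 197.327
mu = 3727.4 * 1875.6 / (3727.4 + 1875.6)   # reduced mass (MeV), assumed
fac = 2.0 * mu / hbarc**2                  # 2 mu / hbar^2 (MeV^-1 fm^-2)
r = np.linspace(1e-6, 25.0, 3000)
h = r[1] - r[0]
Vr = -105.5 * np.exp(-(r / 2.191)**2) + 46.22 * np.exp(-(r / 1.607)**2)

def u_end(E):                              # value of u at r_max
    f = 1.0 + h**2 * fac * (E - Vr) / 12.0
    u0, u1 = 0.0, 1e-6
    for i in range(1, len(r) - 1):
        u0, u1 = u1, ((12.0 - 10.0 * f[i]) * u1 - f[i - 1] * u0) / f[i + 1]
    return u1

Es = np.linspace(-60.0, -0.05, 300)
vals = [u_end(E) for E in Es]
for a, b, va, vb in zip(Es[:-1], Es[1:], vals[:-1], vals[1:]):
    if va * vb < 0:                        # eigenvalue bracketed
        for _ in range(40):
            m = 0.5 * (a + b)
            if va * u_end(m) <= 0:
                b = m
            else:
                a, va = m, u_end(m)
        print(f"bound state at E = {0.5 * (a + b):7.3f} MeV")
\end{verbatim}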
Note that we use Eq.~(\ref{vcoul}) with $R_{\rm C}=2.94$~fm for the $^6$Li-$^{13}$C Coulomb interaction unless we include Coulomb breakup of $^6$Li. \begin{table}[htb] \caption{ Potential parameters used in the present calculation. } \begin{center} \begin{tabular}{cccccccc} \hline \hline System & $V_0$ & $R_{\rm V}$ & $a_{\rm V}$ & $W_0$ & $R_{\rm W}$ & $a_{\rm W}$ & $R_{\rm C}$ \\ & (MeV) & (fm) & (fm) & (MeV) & (fm) & (fm) & (fm) \\ \hline $\alpha$+$^{13}$C & 69.30 & 2.939 & 0.670 & --- & --- & --- & 2.969 \\ $d$+$^{13}$C & 73.05 & 3.128 & 0.780 & 10.50 & 2.986 & 0.800 & 2.969 \\ \hline \hline \end{tabular} \label{tab1} \end{center} \end{table} In the calculation of $\Psi_{i;JM}^{\rm CDCC}$, we use $R_{\rm max}=15$~fm and $J_{\rm max}=7$. Note that we explicitly include closed channels, in which $E_{j,\ell}<0$, in the CDCC calculations. In the evaluation of $T_{fi}$, we set the maximum value of $R_{d {\rm C}}$ to 30~fm; we use the asymptotic form of $\hat{\chi}_{c}^J$, Eq.~(\ref{bc}), to obtain $\Psi_{i;JM}^{\rm CDCC}$ for $R > 15$~fm. When we include Coulomb breakup, we set $R_{\rm max}$ to 200~fm. For the final channel, the relative energy between $\alpha$ and $^{13}$C in the $1/2^+$ state at 6.356~MeV is $\varepsilon_0=-3$~keV relative to the $\alpha$-$^{13}$C threshold. In the calculation of $\Psi_f^{{\rm CDCC}(-)}$, we include the p-wave bound state and the s-, p-, and d-continua of the $\alpha$+$^{13}$C system up to the relative momentum of 1.2~fm$^{-1}$ (relative energy of 39.6~MeV), with momentum bins of a common width of 0.06~fm$^{-1}$. The maximum values of $R_{\alpha {\rm C}}$ and $R_{d {\rm O}}$ are both set to 100~fm, and we put $J_{\rm max}=10$. We include all closed channels in the CDCC calculations, as in the initial channel. \subsection{Breakup effects of $^6$Li and $^{17}$O} \label{sec3-2} \begin{figure}[htbp] \begin{center} \includegraphics[width=1.0\textwidth,clip]{fig2.eps} \caption{ (color online) Elastic cross sections of $^6$Li-$^{13}$C at 3.6~MeV (left panel) and $d$-$^{17}$O$^*$ at 1.1~MeV (right panel). In each panel, the dashed line shows the result of CDCC and the solid line is the result without breakup channels. The dotted line in the left panel is the result of CDCC with both nuclear and Coulomb breakup, while that in the right panel shows the result without breakup channels and with only the monopole interaction between $d$ and $^{17}$O. } \label{fig2} \end{center} \end{figure} Figure~\ref{fig2} shows the elastic cross sections of $^6$Li-$^{13}$C at 3.6~MeV (left panel) and $d$-$^{17}$O$^*$ at 1.1~MeV (right panel), corresponding to the initial and final channels, respectively, of the $^{13}$C($^{6}$Li$,d$)$^{17}$O$^*$ reaction. In each panel, the dashed line shows the result of CDCC and the solid line is the result without breakup channels. One sees from the left panel significant breakup effects on the elastic cross section, i.e., a large back-coupling to the elastic channel. Another finding is that the inclusion of Coulomb breakup (the dotted line in the left panel) hardly affects the cross section. One can thus infer that nuclear breakup plays an important role in the $^{13}$C($^{6}$Li$,d$)$^{17}$O$^*$ reaction, and conclude that the neglect of Coulomb breakup in the calculation of $\Psi_{i;JM}^{\rm CDCC}$ is justified. On the other hand, in the final channel, the effects of nuclear and Coulomb breakup are found to be very small, as shown in the right panel; they change the cross section for $\theta \mathrel{\mathpalette\fun >} 60^\circ$ by 5\% at most.
We further investigate the breakup effects on the $d$-$^{17}$O$^*$ wave function in the elastic channel. The absolute value (argument) of the wave function for $J=0$ at $R_{d {\rm O}}=10$~fm, which is found to give the main contribution to the transfer amplitude, is 0.982 and 0.956 ($278.8^\circ$ and $276.7^\circ$) when the breakup states of $^{17}$O are included and neglected, respectively; the breakup effects are about 3\%. Therefore, we can disregard the breakup channels of the $d$-$^{17}$O system in the calculation of $T_{fi}$ with an error of 5\% at most. The dotted line in the right panel shows the result obtained by neglecting both the breakup channels and the quadrupole interaction between $d$ and $^{17}$O, which is almost identical to the solid line. Thus, one can use Eq.~(\ref{final-wf}) in the calculation of the final state wave function; we estimate the error due to this approximation to be 5\%, as mentioned above. It should be noted that the breakup cross sections in the initial and final channels are both found to be smaller than the nuclear part of the elastic cross section by about four orders of magnitude. The very small breakup effects in the final channel arise because the incoming energy of $d$ is well below the Coulomb barrier, and the interaction that causes breakup in Eq.~(\ref{hf}) is significantly weaker than that in Eq.~(\ref{hi}); note that $V_{\alpha d}^{({\rm N})}(r)$ is defined as $V_{\rm tr}$ and does not appear in Eq.~(\ref{hf}). \subsection{Transfer cross section and ANC} \label{sec3-3} \begin{figure}[htbp] \begin{center} \includegraphics[width=0.6\textwidth,clip]{fig3.eps} \caption{ (color online) Cross section of the transfer reaction $^{13}$C($^{6}$Li$,d$)$^{17}$O$^*$ at 3.6~MeV. The solid line is the result of calculation with $S_{\rm exp}=1$. The dashed line is the result of the $\chi^2$ fit to the experimental data taken from Ref.~\citen{Eric1}. } \label{fig3} \end{center} \end{figure} We show in Fig.~\ref{fig3} the cross section of the transfer reaction $^{13}$C($^{6}$Li$,d$)$^{17}$O$^*$ at 3.6 MeV as a function of the outgoing angle $\theta$ of $d$ in the c.m. frame. The solid line represents the result with $S_{\rm exp}=1$ and the dashed line shows the result of the $\chi^2$ fit to the experimental data.~\cite{Eric1} The resulting value of $S_{\rm exp}$ is 0.357. Note that $S_{\rm exp}$ cannot be regarded as a spectroscopic factor. Indeed, $S_{\rm exp}$ has a strong dependence on the model wave function of the $\alpha$-$^{13}$C system; typically, it varies by a factor of 2 when the geometric parameters of $V_{\alpha {\rm C}}^{({\rm N})}$ are changed by 30\%. This clearly shows that it is not feasible to determine $S_{\rm exp}$ from the present analysis of the experimental data. On the other hand, the ANC $C_{\alpha ^{13}\mbox{\scriptsize C}}^{^{17}\mbox{\scriptsize O}^*}$ given by \begin{equation} C_{\alpha ^{13}\mbox{\scriptsize C}}^{^{17}\mbox{\scriptsize O}^*} = S_{\rm exp}^{1/2}\, C_{\alpha ^{13}\mbox{\scriptsize C}}^{\rm sp}, \end{equation} with $C_{\alpha ^{13}\mbox{\scriptsize C}}^{\rm sp}$ the single-particle ANC of the $\alpha$-$^{13}$C wave function, is robust against changes in the potential parameters. This shows that the reaction process considered is peripheral with respect to $R_{\alpha {\rm C}}$, i.e., only the tail of the $\alpha$-$^{13}$C wave function contributes to the transition amplitude.
Note that $C_{\alpha ^{13}\mbox{\scriptsize C}}^{\rm sp}$ is defined by \begin{equation} C_{\alpha ^{13}\mbox{\scriptsize C}}^{\rm sp} = \dfrac{ R_{\alpha {\rm C}}\,{\bar{\varphi}_0(R_{\alpha {\rm C}})} } { W_{-\bar{\eta},3/2}(2 \kappa_0 R_{\alpha {\rm C}}) \Gamma(2+\bar{\eta}) } \;\; {\rm at} \;\; R_{\alpha {\rm C}} \gg R_{\rm N}, \end{equation} where $\bar{\varphi}_0$ is the radial part of $\varphi_0$, $\bar{\eta}$ is the Sommerfeld parameter of the $\alpha$-$^{13}$C system, $\kappa_0=\sqrt{-2\mu_{\alpha ^{13}\mbox{\scriptsize C}}\,\varepsilon_0}/\hbar$ with $\mu_{\alpha ^{13}\mbox{\scriptsize C}}$ the reduced mass of $\alpha$ and $^{13}$C, $\Gamma$ is the Gamma function, and $R_{\rm N}$ represents the range of $V_{\alpha {\rm C}}^{({\rm N})}$. The value of $(C_{\alpha ^{13}\mbox{\scriptsize C}}^{^{17}\mbox{\scriptsize O}^*})^2$ extracted by the present calculation is 1.03~fm$^{-1}$. We then evaluate the uncertainty of $(C_{\alpha ^{13}\mbox{\scriptsize C}}^{^{17}\mbox{\scriptsize O}^*})^2$ associated with the $\alpha$-$^{13}$C and $d$-$^{13}$C potential parameters shown in Table~\ref{tab1} by changing each value by 30\%. Note that $V_0$ for the $\alpha$-$^{13}$C system is constrained to reproduce $\varepsilon_0$. The uncertainty is found to be 22\%. We also take into account the uncertainty due to the use of Eq.~(\ref{final-wf}) (5\%) and that coming from the zero-range approximation to $V_{\alpha d}^{({\rm N})}$ (8\%), and conclude that the total theoretical uncertainty is 24\%. Including also the experimental uncertainty,~\cite{Eric1} we finally obtain \begin{equation} (C_{\alpha ^{13}\mbox{\scriptsize C}}^{^{17}\mbox{\scriptsize O}^*})^2 =1.03 \pm 0.25 \; {\rm (theor)}\; \pm 0.15 \; {\rm (expt)}~{\rm fm}^{-1}, \label{ANC-result} \end{equation} where (theor) and (expt) respectively represent the theoretical and experimental uncertainties. \subsection{Convergence of the CDCC wave function in the initial channel} \label{sec3-4} \begin{figure}[htbp] \begin{center} \includegraphics[width=1.0\textwidth,clip]{fig4.eps} \caption{ (color online) The dependence of the cross section for $^{13}$C($^{6}$Li$,d$)$^{17}$O$^*$ at 3.6~MeV on the model space of CDCC for $\Psi_i^{{\rm CDCC}}$. In the left panel, $k_{\rm max}$ is varied. The values of $k_{\rm max}$ are shown in units of fm$^{-1}$ and the corresponding values of $\epsilon_{\rm max}$ are given in parentheses in units of MeV. Here, the $\ell=0$, 1, and 2 breakup continua are taken with $\Delta k=0.02$~fm$^{-1}$. In the right panel, the dashed line stands for the result of the $\ell=0$, 1, and 2 breakup continua with $k_{\rm max}=2.0$~fm$^{-1}$ and $\Delta k=0.01$~fm$^{-1}$. The dotted line shows the result of the $0 \le \ell \le 5$ continua with $k_{\rm max}=2.0$~fm$^{-1}$ and $\Delta k=0.02$~fm$^{-1}$. The thick solid line (the solid line) in the left (right) panel is the result of the $\ell=0$, 1, and 2 breakup continua with $k_{\rm max}=2.0$~fm$^{-1}$ and $\Delta k=0.02$~fm$^{-1}$, and is the same as the solid line in Fig.~\ref{fig3}. The result without breakup channels is also shown by the dash-dotted line in each panel. } \label{fig4} \end{center} \end{figure} Figure~\ref{fig4} shows the dependence of the cross section for $^{13}$C($^{6}$Li$,d$)$^{17}$O$^*$ at 3.6~MeV on the model space of CDCC for $\Psi_i^{{\rm CDCC}}$. In the left panel, we show the convergence of the cross section with respect to increasing $k_{\rm max}$, where the $\ell=0$, 1, and 2 breakup continua are taken with $\Delta k=0.02$~fm$^{-1}$.
One can see that the convergence is very slow and is reached only at $k_{\rm max}=2.0$~fm$^{-1}$. In usual CDCC calculations, one takes only the open channels, i.e., channels with $E_{j,\ell}>0$. The result thus obtained (the thin solid line) is, however, sizably different from the converged one (the thick dotted line), at backward angles in particular. Thus, the inclusion of the closed channels is important. In the right panel of Fig.~\ref{fig4}, the dashed line is the result including the $\ell=0$, 1, and 2 breakup continua with $\Delta k=0.01$~fm$^{-1}$ and $k_{\rm max}=2.0$~fm$^{-1}$ ($N_{\rm ch}=1201$), and the dotted line is the result including the $\ell=0,$ 1, 2, 3, 4, and 5 breakup continua with $\Delta k=0.02$~fm$^{-1}$ and $k_{\rm max}=2.0$~fm$^{-1}$ ($N_{\rm ch}=2101$). The dashed and dotted lines both agree well with the solid line, which is the same as in Fig.~\ref{fig3}. In fact, the resulting values of $(C_{\alpha ^{13}\mbox{\scriptsize C}}^{^{17}\mbox{\scriptsize O}^*})^2$ differ from each other by less than 1\%. Thus, the model space used for the solid line of Fig.~\ref{fig3} gives good convergence of the calculated cross section, and hence of $C_{\alpha ^{13}\mbox{\scriptsize C}}^{^{17}\mbox{\scriptsize O}^*}$. \subsection{Discussion on the comparison with the previous DWBA analysis} \label{sec3-5} \begin{figure}[htbp] \begin{center} \includegraphics[width=0.6\textwidth,clip]{fig5.eps} \caption{ (color online) $^6$Li-breakup effect on the transfer reaction $^{13}$C($^{6}$Li$,d$)$^{17}$O$^*$ at 3.6~MeV. The solid and dash-dotted lines show the results of calculations with and without $^6$Li breakup channels, respectively. The dashed line stands for the result of the elastic transfer process only. The solid and dash-dotted lines are respectively the same as those in the right panel of Fig.~\ref{fig4}. } \label{fig5} \end{figure} Figure~\ref{fig5} shows the $^6$Li-breakup effect on the transfer reaction. The solid and dash-dotted lines are the results of the calculations with and without the breakup channels, respectively; they are shown also in the right panel of Fig.~\ref{fig4}. The two results deviate largely from each other, indicating that the breakup effect is important, as mentioned in \S\ref{sec3-4}. This does not necessarily mean, however, that DWBA is inadequate, as discussed below. The transition matrix elements of the transfer reaction can be separated into two parts, \begin{eqnarray} T_{fi}=S_{\rm exp}^{1/2} \left[ T_{fi}^{\rm (el)}+T_{fi}^{\rm (br)} \right] \label{tfi-separation-1} \end{eqnarray} with \begin{eqnarray} T_{fi}^{\rm (el)}&=& \left\langle \Psi_f^{(-)} \left| V_{\rm tr} \right| \Psi_{\rm el}^{(+)} \right\rangle , \\ T_{fi}^{\rm (br)}&=& \left\langle \Psi_f^{(-)} \left| V_{\rm tr} \right| \Psi_{\rm br}^{(+)} \right\rangle , \label{tfi-separation-2} \end{eqnarray} where $\Psi_{\rm el}^{(+)}$ and $\Psi_{\rm br}^{(+)}$ are the elastic and breakup parts of the CDCC wave function $\Psi_i^{{\rm CDCC}}$, respectively. The transition matrix $T_{fi}^{\rm (el)}$ describes the transfer reaction from the elastic channel, i.e., the elastic transfer process, which includes the back-coupling effect of the breakup channels on the elastic channel. On the other hand, $T_{fi}^{\rm (br)}$ describes the transfer reaction from the $^6$Li breakup channels, i.e., the breakup transfer process. Thus, there are two kinds of breakup effects on the transfer reaction; one is the back-coupling effect in the elastic transfer process and the other is the presence of the breakup transfer process.
The dashed line is the result of the elastic transfer process only. The result agrees with the solid line, indicating that the breakup transfer process is much weaker than the elastic transfer one. This is consistent with the small breakup cross section of $^6$Li by $^{13}$C mentioned in \S\ref{sec3-2}. Hence, only the back-coupling effect is important in the present subbarrier transfer reaction. In DWBA, the back-coupling effect is expected to be included by using the $^6$Li optical potential, which describes the elastic scattering by definition, as the distorting potential. The ANC, $(C_{\alpha ^{13}\mbox{\scriptsize C}}^{^{17}\mbox{\scriptsize O}^*})^2$, extracted in the preceding DWBA calculation~\cite{Eric1} is $0.89 \pm 0.23$~fm$^{-1}$. This value agrees well with the present result, Eq.~(\ref{ANC-result}), within the uncertainties. \section{Summary} \label{sec4} In summary, we have analyzed the $^{13}$C($^{6}$Li$,d$)$^{17}$O(6.356~MeV, $1/2^+$) reaction at 3.6~MeV with the three-body ($\alpha+d+^{13}$C) model. The breakup effects of $^6$Li and $^{17}$O are investigated by CDCC. Those of $^6$Li are found to be important as a large back-coupling to the elastic channel, while those of $^{17}$O turn out to be negligible, with an error of 5\%. The transfer cross section is calculated with the Born approximation for the transition interaction, including only the breakup of $^6$Li. The ANC extracted by the three-body reaction model is $ (C_{\alpha ^{13}\mbox{\scriptsize C}}^{^{17}\mbox{\scriptsize O}^*})^2 = 1.03 \pm 0.25 \; {\rm (theor)}\; \pm 0.15 \; {\rm (expt)} $~fm$^{-1}$. The back-coupling effect of $^{6}$Li breakup on the transfer reaction is large, while the breakup transfer process is negligible compared with the elastic transfer process. The preceding DWBA calculation implicitly treated the back-coupling effect by using, as the distorting potential, a $^{6}$Li optical potential that describes the elastic scattering. The value of $(C_{\alpha ^{13}\mbox{\scriptsize C}}^{^{17}\mbox{\scriptsize O}^*})^2$ extracted by DWBA is $0.89 \pm 0.23$~fm$^{-1}$, which is consistent with the present value within the uncertainties. It can be conjectured that in the DWBA calculation the aforementioned back-coupling effect in the initial channel was properly included. However, this will not always be the case, since the optical potential is determined phenomenologically. Furthermore, breakup transfer processes may be important in other subbarrier $\alpha$ transfer reactions. The present three-body approach, therefore, should be applied systematically to these reactions. From a theoretical point of view, the inclusion of CDCC wave functions in both the initial and final channels will be an important subject; to achieve this, one should treat, in principle, a very large coordinate space in the calculation of the $T$ matrix, since there is no damping in the overlap kernel. It will be interesting to use four-body CDCC~\cite{Matsumoto2} based on a $p+n+\alpha+^{13}$C model to obtain the wave function in the initial channel. At this stage, however, the model space required is too large for four-body CDCC to be applied. \vspace{3mm} One of the authors (K.~O.) wishes to thank G.~V.~Rogachev and E.~D.~Johnson for valuable discussions and for providing detailed information on their DWBA calculation. The authors are grateful to Y.~Iseri for providing a computer code {\sc rana} for the calculation of transfer processes.
The computation was carried out using the computer facilities at the Research Institute for Information Technology, Kyushu University.
\section{Introduction} \label{intro} \subsection{Background} A basic observation made by Thomas Schelling while studying the mechanisms leading to social segregation in the United States \cite{schelling1969models,schelling1971dynamic} was that individuals in a social network interact with their friends and neighbors rather than with the entire population, and this often triggers global effects that were neither intended nor desired. Schelling proposed a simple stochastic model to predict these global outcomes, which has become popular in the social sciences. Two types of agents are randomly placed at the vertices of a two-dimensional grid and interact with a small subset of nodes located in their local neighborhood. Based on these interactions, the Boolean state of each agent is determined as follows. All agents have a common intolerance threshold, indicating the minimum fraction of agents of their same type that must be located in their neighborhood to make them happy. Unhappy agents randomly move to vacant locations where they will be happy. A peculiar effect observed when simulating several variants of this model is that, once the system reaches a stable state, large areas of segregated agents of the same type appear, for a wide range of values of the intolerance threshold. Individuals, Schelling concluded, tend to spontaneously self-segregate. See Figure~\ref{Fig:Sim_results_1} for a simulation of this behavior. \begin{figure} \begin{center} {\includegraphics[width=\textwidth]{all_gb.png}} \end{center} \caption{{\footnotesize Self-segregation arising over time for a value of the intolerance $\tau=0.42$ on a grid of size $1000 \times 1000$ and neighborhood size $441$. Green and blue indicate areas of ``happy'' agents of type (+1) and (-1), respectively. White and yellow indicate areas of ``unhappy'' agents of type (+1) and (-1), respectively. Initial configuration (a), intermediate configurations (b)-(c), final configuration (d). When the process terminates all agents are happy, but large segregated regions can be observed. }} \label{Fig:Sim_results_1} \end{figure} Similar models had been considered in the statistical physics literature well before Schelling's observation. For an intolerance value of $1/2$, for example, agents take the same type as the majority of their neighbors, and self-organized segregation in the Schelling model corresponds to spontaneous magnetization in the Ising model at zero temperature, where spins align along the direction of the local field~\cite{stauffer2007ising, castellano2009}. In computation theory, mathematics, physics, complexity theory, theoretical biology, and microstructure modeling, the model is known as a two-dimensional, two-state Asynchronous Cellular Automaton (ACA) with extended Moore neighborhoods and exponential waiting times \cite{chopard1998cellular}. Other related models have appeared in epidemiology \cite{hethcote2000mathematics, draief2010epidemics}, economics \cite{jackson2002formation}, and engineering and computer science \cite{kleinberg2007cascading, easley2010networks}. Mathematically, all of these models fall within the general area of interacting particle systems, or contact processes, and exhibit phase transitions~\cite{liggett2012interacting,liggett2013stochastic}. Schelling-type models can be roughly divided into two classes. A \textit{Kawasaki dynamics} model assumes there are no vacant positions in the underlying graph, and a pair of unhappy agents swap their locations if this will make both of them happy.
A \textit{Glauber dynamics} model assumes that single agents simply flip their type if this makes them happy. This flipping action indicates that the agent has moved out of the system and a new agent has occupied its location. While in a Kawasaki model the system is ``closed'' and the number of agents of the same type is fixed, in a Glauber model the system is ``open'' and the number of agents of the same type may change over time. Sometimes the model dynamics are defined as having unhappy agents swap (or flip) regardless of whether this makes them happy or not. Throughout, we assume Glauber dynamics, with agents flipping only if this makes them happy. Another possible variant is to assume that agents have a small probability of acting differently from what the general rule prescribes; other variants consider multiple intolerance levels, multiple agent types, different agent distributions, and time-varying intolerance \cite{young2001individual, zhang2004dynamic, zhang2004residential, zhang2011tipping, mobius2000formation, meyer2003immigration,bhakta2014clustering, schulze2005potts,barmpalias2015minority,barmpalias2015tipping}. \subsection{Contribution} We focus on the case of two types of agents placed uniformly at random on a two-dimensional grid according to a Bernoulli distribution of parameter $p = 1/2$ and having a single intolerance level $0<\tau<1$, and study the range of intolerance leading to the formation of large segregated regions. Even for the one-dimensional version of this problem, rigorous results appeared only recently. Brandt et al.~\cite{brandt2012analysis} considered a ring graph for the Kawasaki model of evolution. In this setting, letting the neighborhood of an agent be the set of nearby agents that is used to determine whether the agent is happy or not, they showed that for an intolerance level $\tau=1/2$, the expected size of the largest segregated region containing an arbitrary agent in steady state is polynomial in the size of the neighborhood. Barmpalias et al.~\cite{barmpalias2014digital} showed that there exists a value $\tau^*\approx 0.35$ such that for all $\tau<\tau^*$ the initial configuration remains static with high probability (w.h.p.), while for all $\tau^*<\tau<1/2$ the size of the largest segregated region in steady state becomes exponential in the size of the neighborhood w.h.p. On the other hand, for all $\tau>1/2$ the system evolves w.h.p.\ towards a state with only two segregated components. For the Glauber model the behavior is similar, but symmetric around $\tau=1/2$, with a first transition from a static configuration to exponential segregation occurring at $\tau \approx 0.35$, a special point $\tau=1/2$ where the largest segregated region has expected polynomial size, then again exponential segregation until $\tau \approx 0.65$, and finally a static configuration for larger values of $\tau$. In a two-dimensional grid graph on a torus, the case $\tau=1/2$ is open. Immorlica et al.~\cite{immorlica2015exponential} have shown for the Glauber model the existence of a value $\tau^*< 1/2$ such that for all $\tau^*<\tau<1/2$ the expected size of the largest segregated region is exponential in the size of the neighborhood. This shows that segregation is expected in the small interval $\tau \in (1/2-\epsilon, 1/2)$. Note that this does not imply exponential segregation w.h.p., but only expected segregated regions of exponential size.
Barmpalias et al.~\cite{barmpalias2016unperturbed} considered a model in which each type of agent has a different intolerance, i.e., $\tau_1$ and $\tau_2$. For the special case of $\tau_1 = \tau_2 = \tau$, they have shown that when $\tau>3/4$ or $\tau<1/4$, the initial configuration remains static w.h.p. Our main contribution is depicted in Figure~\ref{fig:tau}. We consider the Glauber model for the two-dimensional grid graph on a torus. First, we enlarge the intolerance interval that leads to the formation of large segregated regions from the known size $\epsilon>0$ to size $\approx 0.134$; namely, we show that when $0.433 < \tau < 1/2$ (and by symmetry $1/2<\tau<0.567$), the expected size of the largest segregated region is exponential in the size of the neighborhood. Second, we further extend the interval leading to large segregated regions to size $\approx 0.312$. In this case, the main contribution is that we consider ``almost segregated'' regions, namely regions where the ratio of the number of agents of one type to the number of agents of the other type quickly vanishes as the size of the neighborhood grows, and show that for $0.344 < \tau \leq 0.433$ (and by symmetry for $0.567 \leq \tau<0.656$) the expected size of the largest almost segregated region is exponential in the size of the neighborhood. As shown for the one-dimensional case in~\cite{barmpalias2014digital} and conjectured for the two-dimensional case in~\cite{barmpalias2016unperturbed}, we show that as the intolerance parameter gets farther from one half, in both directions, the average size of both the segregated and almost segregated regions gets larger: higher tolerance in our model does not necessarily lead to less segregation. On the contrary, it can increase the size of the segregated areas. This result is depicted in Figure~\ref{fig:aaprime}. The intuitive explanation is that highly tolerant agents are seldom unhappy in the initial configuration, so the segregated regions of opposite types that unhappy agents may ignite are likely to start far apart, and may grow larger before meeting at their boundaries. Finally, the exponential upper bound that we provide on the expected size of the largest segregated region implies that complete segregation, where agents of a single type cover the whole grid, does not occur w.h.p.\ for the range of intolerance considered. In contrast, Fontes et al.~\cite{siv} have shown the existence of a critical probability $1/2<p^*<1$ for the initial Bernoulli distribution of the agents such that for $\tau =1/2$ and $p > p^*$ the Glauber model on the $d$-dimensional grid converges to a state where only one type of agent is present. This shows that complete segregation occurs w.h.p.\ for $\tau=1/2$ and $p \in (1-\epsilon,1)$. Morris~\cite{morris2011zero} has shown that $p^*$ converges to $1/2$ as $d \rightarrow \infty$. Caputo and Martinelli \cite{caputo2006phase} have shown the same result for $d$-regular trees, while Kanoria and Montanari~\cite{montanaritree} derived it for $d$-regular trees in a synchronous setting where flips occur simultaneously, and obtained lower bounds on $p^*(d)$ for small values of $d$. The case $d = 1$ was first investigated by Erd\H{o}s and Ney~\cite{erdos1974some}, and Arratia~\cite{arratia1983site} has proven that $p^*(1)=1$.
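Before describing our techniques, the Glauber dynamics considered in this paper can be made concrete with a minimal simulation sketch. The parameters below are illustrative and much smaller than those of Figure~\ref{Fig:Sim_results_1}, and the fixed number of steps stands in for the true stopping rule (namely, that no unhappy agent can become happy by flipping):
\begin{verbatim}
# Minimal sketch of the Glauber dynamics on an n x n torus: each step
# emulates a Poisson clock ring at a uniformly random agent, which flips
# its type iff it is unhappy and flipping makes it happy. Unoptimized;
# a production run would vectorize the neighborhood counts.
import numpy as np

rng = np.random.default_rng(0)
n, w, tau = 100, 2, 0.42
grid = rng.choice([-1, 1], size=(n, n))
N = (2 * w + 1) ** 2                      # neighborhood size (self included)

def like_fraction(x, y):
    idx = np.arange(-w, w + 1)
    block = grid[np.ix_((x + idx) % n, (y + idx) % n)]
    return np.count_nonzero(block == grid[x, y]) / N

for _ in range(2_000_000):
    x, y = rng.integers(n), rng.integers(n)
    if like_fraction(x, y) < tau:         # agent is unhappy
        grid[x, y] = -grid[x, y]          # tentative flip ...
        if like_fraction(x, y) < tau:     # ... does not make it happy
            grid[x, y] = -grid[x, y]      # undo (only matters for tau > 1/2)
\end{verbatim}
For $\tau < 1/2$ the inner check never fails, consistent with the first observation made after the dynamics are defined in Section~\ref{Sec:Model} below.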
\subsection{Techniques} Our proofs are based on a typicality argument showing a self-similar structure of the neighborhoods in the initial state of the process, and on the identification of geometric configurations igniting a cascading process that leads to segregation. We make extensive use of tools from percolation theory, including the exponential decay of the radius of the open cluster below criticality~\cite{grimmett1999percolation}, concentration bounds on the passage time~\cite{kesten1993speed} (see also \cite{damron2014subdiffusive, talagrand1995concentration}), and bounds on the chemical distance between percolation sites~\cite{garet2007large}. We also make frequent use of renormalization and of correlation inequalities for contact processes~\cite{liggett2010stochastic}. In this framework, we provide an extension of the Fortuin-Kasteleyn-Ginibre (FKG) inequality in a dynamical setting that can be of independent interest. \begin{figure}[!t] \centering \includegraphics[width=3in]{tau_contribution1_HM.pdf} \caption{We enlarge the width of the intolerance interval for which the expected size of the largest segregated region containing an arbitrary agent is exponential in the size of the neighborhood from the known value $\epsilon>0$ to $\approx 0.134$ (grey region). We also show that the expected size of the largest almost segregated region containing an arbitrary agent is exponential in the size of the neighborhood for an intolerance interval of width $\approx 0.312$ (grey plus black region). } \label{fig:tau} \end{figure} The paper is organized as follows. In section \ref{Sec:Model} we introduce the model, state our results, and give a summary of the proof construction. In section~\ref{Sec:Triggering} we study the initial configuration and derive some properties of the sub-neighborhoods of the unhappy agents. In section \ref{Sec:Segregation} we study the dynamics of the segregation process and derive the main results. Concluding remarks are given in section \ref{Sec:Conclusion}. \section{Model and main results} \label{Sec:Model} \subsection{The Model} \textit{Initial Configuration.} We consider an $n \times n$ grid graph $G_n$ embedded on a torus $\mathbb{T}=[0,n)\times[0,n)$, an integer $w \in O(\sqrt{\log n})$ called the \emph{horizon}, and a rational $0 \leq \tau \leq 1$ called the \emph{intolerance}. All arithmetic operations over the coordinates are performed modulo $n$, i.e., $(x,y)=(x+n,y)=(x,y+n)$. We place an agent at each node of the grid and choose its type independently at random to be (+1) or (-1) according to a Bernoulli distribution of parameter $p = 1/2$. \ A \textit{neighborhood} is a connected sub-graph of $G_n$. A \textit{neighborhood of radius ${\radius}$} is the set of all agents with $l_\infty$ distance at most ${\radius}$ from a central node, and is denoted by $\mathcal{N}_{\radius}$. The \textit{size} of a neighborhood is the number of agents in it. The \textit{neighborhood of an agent $u$} is a neighborhood of radius equal to the horizon and centered at $u$, and is denoted by $\mathcal{N}(u)$. \ \textit{Dynamics.} We let the intolerance be the rational $\tau = \lceil \tilde{\tau} {\size} \rceil/{\size}$, where $\tilde{\tau} \in [0,1]$ and ${\size} = (2w+1)^2$ is the size of the neighborhood of an agent. The integer $\tau {\size}$ represents the minimum number of agents of the same type as $u$ that must be present in $\mathcal{N}(u)$ to make $u$ \emph{happy}.
More precisely, for every agent $u$, we let $s(u)$ be the ratio between the number of agents of the same type as $u$ in its neighborhood and the size of the neighborhood. At any point in continuous time, if ${s(u) \ge \tau}$ then $u$ is labeled \textit{happy}, otherwise it is labeled \textit{unhappy}. We assign independent and identical Poisson clocks to all agents, and every time a clock rings the type of the corresponding agent is flipped if and only if the agent is unhappy and the flip will make it happy. Two observations are now in order. First, for $\tau< 1/2$ flipping its type will always make an unhappy agent happy, but this is not the case for $\tau>1/2$. Second, the process dynamics are equivalent to those of a discrete-time model where at each discrete time step one unhappy agent is chosen uniformly at random and its type is flipped if this will make the agent happy. \ \textit{Termination.} The process continues until there are no unhappy agents left that can become happy by flipping their type. By defining a Lyapunov function to be the sum over all agents $u$ of the number of agents of the same type as $u$ present in its neighborhood, it is easy to argue that the process indeed terminates. \ \textit{Segregation.} The \textit{monochromatic region} of an agent $u$ is the neighborhood with the largest radius that contains only agents of a single type, and that contains $u$, when the process stops. Let $\epsilon>0$ and ${\size}=(2 w+1)^2$. The \textit{almost monochromatic region} of an agent $u$ is the neighborhood with the largest radius such that the ratio of the number of agents of one type to the number of agents of the other type is bounded by $e^{-{\size}^{\epsilon}}$ and that also contains $u$ when the process stops. \ Throughout the paper we use the terminology \textit{with high probability} (w.h.p.), meaning that the probability of an event approaches one as ${\size}$ approaches infinity. \subsection{The Results} To state our results, we let $\tau_1 \approx 0.433$ be the solution of \begin{align} \frac{3}{4} \left[1-H\left(\frac{4}{3} \tau_1 \right)\right]- \left[1-H\left( \tau_1 \right) \right] = 0, \end{align} where $H$ is the binary entropy function \begin{align} H(x) = - x \log_2 x - (1-x) \log_2 (1-x), \end{align} and $\tau_2 \approx 0.344$ be the larger of the two roots of \begin{align}\label{Eq:tau2_eq} 1024\tau_2^2-384\tau_2+11= 0. \end{align} We also let $M$ and $M'$ be the sizes of the monochromatic and almost monochromatic regions of an arbitrary agent, respectively. We consider values of the intolerance $\tau \in (\tau_2,1-\tau_2) \setminus\{1/2\}$. Most of the work is devoted to the study of the intervals $(\tau_2, \tau_1]$ and $(\tau_1,1/2)$; a symmetry argument extends the analysis to the intervals $(1/2,1-\tau_1)$ and $[1-\tau_1,1-\tau_2)$. The following theorems show that segregation occurs for values of $\tau$ in the grey region of Figure~\ref{fig:tau}, where we expect an exponential monochromatic region, and in the black region of Figure~\ref{fig:tau}, where we expect an exponential almost monochromatic region. \begin{theorem*}\label{Thrm:first_theorem} For all $\tau \in (\tau_1,1-\tau_1) \setminus \{1/2\}$ and for sufficiently large ${\size}$, we have \begin{align} 2^{a(\tau){\size}-o({\size})} \le \mathbb{E}[M] \le 2^{b(\tau){\size} +o({\size})}, \end{align} where $a$ and $b$ are decreasing functions of $\tau$ for $\tau < 1/2$ and increasing for $\tau > 1/2$.
\end{theorem*} \begin{theorem*}\label{Thrm:second_theorem} For all $\tau \in (\tau_2,\tau_1]\cup[1-\tau_1,1-\tau_2)$ and for sufficiently large ${\size}$, we have \begin{align} 2^{a(\tau){\size}-o({\size})} \le \mathbb{E}[M'] \le 2^{b(\tau){\size} +o({\size})}, \end{align} where $a$ and $b$ are decreasing functions of $\tau$ for $\tau < 1/2$ and increasing for $\tau > 1/2$. \end{theorem*} \begin{figure} \centering \includegraphics[width=3.5in]{aaprimeunified_2.pdf} \caption{Exponent multipliers $a(\tau)$ and $b(\tau)$ for the lower and upper bounds on the expected size of the largest segregated region $\mathbb{E}[M]$ and on the expected size of the largest almost segregated region $\mathbb{E}[M']$. } \label{fig:aaprime} \end{figure} The numerical values of $a(\tau)$ and $b(\tau)$ derived in the proofs of the above theorems are plotted in Figure \ref{fig:aaprime}. For $\tau \in (\tau_1,1-\tau_1)\setminus \{1/2\}$, as the intolerance gets farther from one half in both directions, larger monochromatic regions are expected. \subsection{Proof Outline} \label{Subsec:proof_const} The main idea of the proof is to identify a local initial configuration that can potentially trigger a cascading process leading to segregation. We then bound the probability of occurrence of such a configuration in the initial state, and of the conditions required to trigger segregation. To identify this local configuration, we study the relationship between the typical neighborhood of an unhappy agent and the sub-neighborhoods contained within this neighborhood, showing a self-similar structure. Namely, the number of agents of a given type, when rescaled by the size of the neighborhood, remains roughly the same (Proposition~\ref{Prop:firstprop}). We then define a \emph{radical region} that contains a nucleus of unhappy agents (Lemma~\ref{Lemma:unhappy_region}), and using the self-similar structure of the neighborhoods we construct a geometric configuration where a sequence of flips can lead to the formation of a neighborhood of agents of the same type inside a radical region (Lemma~\ref{Lemma:Trigger}). Finally, we provide a lower bound for the probability of occurrence of this configuration, which can initiate the segregation process, in the initial state of the system (Lemma~\ref{Lemma:expandable}). The second part of the proof is concerned with the process dynamics, and shows a cascading effect ignited by the radical regions that leads to the formation of exponentially large segregated areas. We consider an indestructible and impenetrable structure around a radical region called a \textit{firewall} and show that once formed it remains static and protects the radical region inside it from vanishing (Lemma~\ref{Lemma:firewall}). Conditioned on certain events occurring in the area surrounding the radical region, including the formation of the initial configuration described in the first part of the proof, we show that an agent close to the radical region will be trapped w.h.p.\ inside an exponentially large firewall whose interior becomes monochromatic (Lemma~\ref{Lemma:fw}); see Figure~\ref{fig:aaprime1}(a). We then obtain a lower bound on the joint probability of the conditioning events, and this leads to a lower bound on the probability that an agent is eventually contained in a monochromatic region of exponential size. Since the lower bound holds for both types of agents, we expect to have both types of exponential monochromatic regions in a large area by the end of the process.
This leads to an exponential upper bound on the expected size of the largest monochromatic region of each type. To perform our computations, we rely on a bound on the passage time on the square lattice~\cite{kesten1993speed} to upper bound the rate of spread of other monochromatic regions outside the firewall, and to ensure that they do not interfere with its formation during the dynamics of the process. \begin{figure}[!t] \centering \includegraphics[width=\textwidth]{firewall_intro.pdf} \caption{An arbitrary agent $u$ that is close to a radical region will be trapped inside a firewall of exponential size whose interior will eventually become monochromatic (a), or almost monochromatic (b).} \label{fig:aaprime1} \end{figure} The construction described above works for all $\tau_1< \tau<1/2$. For smaller values of $\tau$, agents are more tolerant and this may cause the construction of a firewall to fail, since tolerant agents do not easily become unhappy and flip their type, igniting the cascading process. In order to overcome this difficulty, we introduce a \emph{chemical firewall} through a comparison with a Bernoulli site percolation model; see Figure~\ref{fig:aaprime1}(b). This firewall is constructed through renormalization and is initially made of \emph{good} blocks that occur independently and with probability above the critical threshold for site percolation on the square grid. Using a theorem in~\cite{garet2007large} on the chemical distance between good blocks, we show that they form a large cycle that, once it becomes monochromatic, isolates its interior. Finally, using the exponential decay of the size of the clusters of bad blocks~\cite{grimmett1999percolation}, we show that the region inside the chemical firewall becomes \emph{almost monochromatic}; namely, for all $\tau_2 < \tau \le \tau_1$ we expect the formation of exponentially large regions where the ratio of the number of agents of one type to the number of agents of the other type quickly vanishes. All results are extended to the interval $1/2 <\tau < 1-\tau_2$ using a symmetry argument. Compared to the proof in~\cite{immorlica2015exponential}, our derivation differs in the following aspects. The definition of a radical region is fundamentally different from that of the viral nodes considered in~\cite{immorlica2015exponential}, and the identification of the radical regions gives us an immediate understanding of the arrangement of the agents in the initial configuration in terms of self-similarity arising at different scales. Our definition of an annular firewall that forms quickly enough eliminates the need for additional arguments from first passage percolation that are used in~\cite{immorlica2015exponential}, allows for a wider range of intolerance parameters, and is easily generalized to the notion of a chemical firewall using the results from \cite{garet2007large}. The renormalization of the grid for the study of the growth of the monochromatic regions is also different from that in~\cite{immorlica2015exponential} and works for a wider range of the intolerance. The idea of considering almost monochromatic regions is new, and so are the approaches that we use from percolation theory to argue the existence of the chemical firewall and to bound the size of the minority clusters. Finally, we rigorously apply a variation of the FKG inequality to show positive correlation of certain events, while in~\cite{immorlica2015exponential} it is often informally argued that similar correlations exist in their setting.
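For reference, the thresholds $\tau_1$ and $\tau_2$ defined in the previous subsection, as well as the function $f(\tau)$ appearing in Lemma~\ref{Lemma:Trigger} below, are easy to evaluate numerically; a minimal sketch (assuming NumPy and SciPy are available):
\begin{verbatim}
# Numerical evaluation of the thresholds tau_1 and tau_2 of Section 2.2,
# and of the function f(tau) used later in the proof construction.
import numpy as np
from scipy.optimize import brentq

H = lambda x: -x * np.log2(x) - (1 - x) * np.log2(1 - x)  # binary entropy

# tau_1 solves (3/4)[1 - H(4t/3)] - [1 - H(t)] = 0
tau1 = brentq(lambda t: 0.75 * (1 - H(4 * t / 3)) - (1 - H(t)), 0.40, 0.45)

# tau_2 is the larger root of 1024 t^2 - 384 t + 11 = 0
tau2 = max(np.roots([1024, -384, 11]))

f = lambda t: (3 * (t - 0.5) + np.sqrt(9 * (t - 0.5)**2
              - 7 * (t - 0.5) * (3 * t + 0.5))) / (2 * (3 * t + 0.5))

print(f"tau1 = {tau1:.4f}, tau2 = {tau2:.5f}, f(0.45) = {f(0.45):.4f}")
# -> tau1 ~ 0.433 and tau2 = 0.34375, matching the values quoted above
\end{verbatim}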
\section{Triggering configuration} \label{Sec:Triggering} We start our analysis by considering the initial configuration of the system. Proposition~\ref{Prop:firstprop} shows a similarity relationship between the neighborhood of an agent and its sub-neighborhoods. This relationship is exploited in Lemma~\ref{Lemma:Trigger} to construct an initial configuration of agents that can trigger the segregation process. Lemma~\ref{Lemma:expandable} provides a bound on the probability of occurrence of this triggering configuration. Let $\mathcal{N}(u)$ be the neighborhood of an arbitrary agent $u$ containing ${\size}$ agents. Consider a sub-neighborhood $\mathcal{N}'(u) \subset \mathcal{N}(u)$ containing ${\size}'$ agents and let $\gamma$ be the scaling factor ${{\size}'}/{{\size}}$. Let $W$ and $W'$ be the random variables representing the number of (-1) agents in $\mathcal{N}(u)$ and $\mathcal{N}'(u)$, respectively. The following proposition shows that, conditioned on $W$ being less than $\tau {\size}$, $W'$ is very close to the rescaled quantity $ \gamma \tau {\size}$, with overwhelming probability as ${\size} \rightarrow \infty$. \begin{proposition*}\label{Prop:firstprop} For any $\epsilon \in (0,1/2)$ and $c \in \mathbb{R}^+$ there exists $c' \in \mathbb{R}^{+}$ such that for all $\size \ge 1$ \begin{align*} P\left(|W' - \gamma \tau {\size}| < c{\size}^{1/2+\epsilon} \given[\Big] W < \tau {\size}\right) \ge 1 - e^{-c'{\size}^{2\epsilon}}. \end{align*} \end{proposition*} To prove this proposition (the two constants $\epsilon$ and $c$ are introduced for technical convenience in its later applications), we need the following three lemmas. \begin{lemma*}\label{Lemma:lemma_0} Let $\mathcal{N}$ be a set of ${\size}$ agents of types $(+1)$ and $(-1)$ in the grid, exactly $K$ of which are of type $(-1)$ and ${\size}-K$ of type $(+1)$. Then, if we choose a subset $\mathcal{N}'$ of ${\size}'$ agents uniformly at random from $\mathcal{N}$, we have \begin{align} \label{eq:Azuma1} P(W' \ge \gamma K + t) \le e^{\frac{-t^2}{2{\size}'}} , \end{align} and \begin{align}\label{eq:Azuma2} P(W' \le \gamma K - t) \le e^{\frac{-t^2}{2{\size}'}}, \end{align} where $W'$ is the random variable indicating the number of $(-1)$ agents in $\mathcal{N}'$, and $\gamma = {{\size}'}/{{\size}}$. \end{lemma*} \begin{proof} Let $W'_i$ be a random variable indicating the type of the $i$th agent in $\mathcal{N}'$, namely $W'_i$ is one if the type is (-1) and zero otherwise. Let $\mathcal{F}_i = \sigma(W'_1,...,W'_i)$, where $\sigma(X)$ denotes the sigma field generated by the random variable $X$. It is easy to see that for all $n \in \{1,...,{\size}' \}$, $M_n = \mathbb{E}[W'|\mathcal{F}_n]$ is a martingale. It is also easy to see that $M_0 = \mathbb{E}[W'] = \gamma K$, and $M_{{\size}'} = W'$. For all $n \in\{1,2,...,{\size}'\}$, we also have \begin{align*} |M_n - M_{n-1}| &= \left|\mathbb{E}\left(\sum_{i=1}^{{\size}'}W'_i\given[\Big]\mathcal{F}_n\right) - \mathbb{E}\left(\sum_{i=1}^{{\size}'}W'_i\given[\Big]\mathcal{F}_{n-1}\right)\right| \\ &= \left|W'_n + \frac{K-\sum_{i=1}^{n}W'_i}{{\size}-n}({\size}'-n) - \frac{K-\sum_{i=1}^{n-1}W'_i}{{\size}-(n-1)}[{\size}'-(n-1)]\right| \\ &\le 1. \end{align*} Now, using Azuma's inequality \cite{janson2011random}, we have \begin{align*} P\left(W' \ge \gamma K + t\right) = P\left(M_{{\size}'} \ge M_0 + t\right) \le e^{\frac{-t^2}{2{\size}'}}. \end{align*} With the same argument we can derive (\ref{eq:Azuma2}).
\end{proof} \begin{lemma*}\label{Lemma:lemma_1} Let $\epsilon \in (0,1/2)$ and $c \in \mathbb{R}^+$. There exists $c' \in \mathbb{R}^{+}$ such that for all $\size \ge 1$ \begin{align*} {P\left(W' < \gamma \tau {\size} + c{\size}^{1/2+\epsilon} \given[\Big] W < \tau {\size}\right) \ge 1 - e^{-c'{\size}^{2\epsilon}}}. \end{align*} \end{lemma*} \begin{proof} Let us denote $c{\size}^{1/2+\epsilon}$ by $v({\size})$. We let \begin{align*} p_w &= P\left(W' \ge\gamma \tau {\size} + v({\size}) \given[\Big] W < \tau {\size}\right) \\ &\le P\left(W' \ge \gamma \tau {\size} + v({\size}) \given[\Big] W \le \tau {\size}\right) \\ &\le P\left(W' \ge \gamma \tau {\size} + v({\size})\given[\Big] W = \tau {\size}\right). \end{align*} The first inequality is trivial. The second inequality follows from \begin{align*} P\left(W' \ge \gamma \tau {\size} + v({\size}) \given[\Big]W \le \tau {\size}\right) \end{align*} being the probability of choosing at least $\gamma \tau {\size} + v({\size})$ agents of type $(-1)$ from a set with at most $\tau {\size}$ such agents. It is easy to see that this probability can only increase if we have $W = \tau {\size}$. The result follows by applying Lemma~\ref{Lemma:lemma_0}. \end{proof} Let $\mathcal{N}''(u) = \mathcal{N}(u) \setminus \mathcal{N}'(u)$. Let us denote the number of agents in $\mathcal{N}''(u)$ by ${\size}''$. Let $W''$ denote the random variable representing the number of (-1) agents in $\mathcal{N}''(u)$. \begin{lemma*}\label{Lemma:lemma_2} Let $\epsilon \in (0,1/2)$ and $c \in \mathbb{R}^+$. There exists $c' \in \mathbb{R}^{+}$ such that for all $\size \ge 1$ \begin{align*} {P\left(W' > \gamma \tau {\size} - c{\size}^{1/2+\epsilon} \given[\Big] W < \tau {\size}\right) \ge 1 - e^{-c'{\size}^{2\epsilon}}}. \end{align*} \end{lemma*} \begin{proof} Let us denote $c{\size}^{1/2+\epsilon}$ by $v({\size})$, and $ \tau {\size} - 1$ by ${\size}_\tau$. Let \begin{align} p_w &= P\left(W' \le \tau\gamma {\size} - v({\size}) | W < \tau {\size}\right) \nonumber \\ &= P\left(W' \le \tau {\size}' - v({\size}) | W' + W'' < \tau {\size}\right) \nonumber \\ &\le \ddfrac{P\left(W' \le \tau {\size}' - v({\size}), W'+W'' \le {\size}_\tau \right)}{P(W \le {\size}_\tau)} \nonumber \\ &\le \ddfrac{\sum\limits_{k=0}^{\lfloor \tau {\size}' - v({\size}) \rfloor}P(W' = k)\sum\limits_{m=0}^{\min\{{\size}_\tau - k,{\size}''\}}P(W''=m)}{P(W \le {\size}_\tau)} \nonumber \\ &= \ddfrac{\sum\limits_{k=0}^{\lfloor \tau {\size}' - v({\size}) \rfloor}{{\size}' \choose k}\sum\limits_{m=0}^{\min\{{\size}_\tau - k,{\size}''\}}{{\size}'' \choose m}}{\sum\limits_{n=0}^{{\size}_\tau}{{\size} \choose n}}. \label{denominator} \end{align} We use the following inequality, valid for all $a \in (0,0.5)$: \begin{equation*} {{{\size} \choose a{\size}} \le\sum_{m=0}^{a{\size}}{{\size} \choose m} \le \frac{1-a}{1-2a}{{\size} \choose a{\size}}}. \end{equation*} Since $\tau < 1/2$, it follows that ${{\size} \choose {\size}_\tau}$ is a lower bound for the denominator of (\ref{denominator}). We also have the following upper bound for the numerator: \begin{align*} \sum\limits_{k=0}^{\lfloor \tau {\size}' - v({\size}) \rfloor} & {{\size}' \choose k}\sum\limits_{m=0}^{\min\{{\size}_\tau - k,{\size}''\}}{{\size}'' \choose m} \le \\ &\sum\limits_{k=0}^{\lfloor \tau {\size}' - v({\size}) \rfloor}c^k{{\size}' \choose k}{{\size}'' \choose {\min\{{\size}_\tau - k,\lfloor {\size}''/2 \rfloor \}}}, \end{align*} where $\{c^k\}$ are positive constants for $k=0,1,...,\lfloor \tau {\size}' - v({\size}) \rfloor$.
Since for all $l \in \{0,1,...,\lfloor \tau {\size}' - v({\size}) \rfloor\}$, we have \begin{align*} \ddfrac{{{\size}' \choose \lfloor \tau {\size}' - v({\size}) \rfloor}{{\size}'' \choose {\min\{{\size}_\tau - \lfloor \tau {\size}' - v({\size}) \rfloor,\lfloor {\size}''/2 \rfloor \}}}}{{{\size}' \choose \lfloor \tau {\size}' - v({\size}) \rfloor - l}{{\size}'' \choose {\min\{{\size}_\tau - \lfloor \tau {\size}' - v({\size}) \rfloor + l,\lfloor {\size}''/2 \rfloor \}}}} \ge 1, \end{align*} it follows that there exists a constant $c_1 \in \mathbb{R}^+$ such that \begin{align*} c_1{\size}{{\size}' \choose \lfloor\tau {\size}' - v({\size})\rfloor}{{\size}'' \choose {\size}_\tau - \lfloor\tau {\size}' - v({\size})\rfloor} \end{align*} is an upper bound for the numerator. Putting things together, we have \begin{align*} p_w &\le c_1{\size}\ddfrac{{{\size}' \choose \lfloor\tau {\size}' - v({\size})\rfloor}{{\size}'' \choose {\size}_\tau - \lfloor\tau {\size}' - v({\size})\rfloor}}{{{\size} \choose {\size}_\tau}} \\ &\le c_1{\size} P(W'\le \tau {\size}'-v({\size})|W= {\size}_\tau ). \end{align*} Using the same argument as in Lemma \ref{Lemma:lemma_1}, we now have \begin{align*} p_w \le e^{-c'{\size}^{2\epsilon}}, \end{align*} where $c' \in \mathbb{R}^+$ is a constant. \end{proof} \begin{proof}[Proof of Proposition~\ref{Prop:firstprop}] Let \begin{align*} A=\left\{\tau\gamma {\size} - c{\size}^{1/2+\epsilon} < W'\right\}, \\ B = {\left\{W' < \tau\gamma {\size} + c{\size}^{1/2+\epsilon}\right\}}, \\ C=\left\{W < \tau {\size}\right\}. \end{align*} By Lemmas \ref{Lemma:lemma_1} and \ref{Lemma:lemma_2} there exist constants $c_1,c_2>0$ such that we have \begin{align*} P(A\cap B|C) &= 1 - P\left(A^C\cup B^C\given[\Big]C\right) \\ &\ge 1 - \left(P\left(A^C\given[\Big]C\right)+P\left(B^C\given[\Big]C\right)\right) \\ &\ge 1 - \left(e^{-c_1{\size}^{2\epsilon}}+e^{-c_2{\size}^{2\epsilon}}\right). \end{align*} Hence, there exists a constant $c' \in \mathbb{R}^+$ such that \begin{align*} P\left(A\cap B\given[\Big]C\right) \ge 1 - e^{-c'{\size}^{2\epsilon}}, \end{align*} and the proof is complete. \end{proof} We now identify a configuration that has the potential to trigger a cascading process. We show that a neighborhood that is slightly larger than the neighborhood of an agent and that contains a fraction of same type agents that is slightly less than $\tau$ has the desired configuration. For any $\epsilon, \epsilon' \in {(0,1/2)}$ let ${\hat{\tau} = \tau [1- 1/ (\tau {\size}^{1/2-\epsilon})]}$ and define a \textit{radical region} $\mathcal{N}_{(1+\epsilon')w}$ to be a neighborhood of radius ${(1+\epsilon')w}$ containing less than $\hat{\tau}(1+\epsilon')^2{\size} $ agents of type (-1). We also define an \textit{unhappy region} $\mathcal{N}_{\epsilon'w}$ to be a neighborhood of radius $\epsilon' w$, containing at least $\lfloor \tau \epsilon'^2 {\size}-{\size}^{1/2+\epsilon} \rfloor$ unhappy agents of type (-1). \begin{lemma*} \label{Lemma:unhappy_region} A radical region $\mathcal{N}_{(1+\epsilon')w}$ contains an unhappy region $\mathcal{N}_{\epsilon'w}$ at its center w.h.p. \end{lemma*} \begin{proof} Let $\epsilon \in (0,1/2)$. We show that w.h.p. the region $\mathcal{N}_{\epsilon'w}$ co-centered with $\mathcal{N}_{(1+\epsilon')w}$ has at least $\lfloor \tau\epsilon'^2{\size} - {\size}^{1/2+\epsilon} \rfloor$ agents of type (-1) such that all of them are unhappy.
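To see where the count $\tau\epsilon'^2{\size} - {\size}^{1/2+\epsilon}$ comes from, note that the scaling factor between $\mathcal{N}_{\epsilon'w}$ and the radical region is
\begin{align*}
\gamma = \frac{\epsilon'^2{\size}}{(1+\epsilon')^2{\size}} = \frac{\epsilon'^2}{(1+\epsilon')^2},
\end{align*}
so the rescaled quantity appearing in Proposition~\ref{Prop:firstprop} equals $\gamma \hat{\tau}(1+\epsilon')^2{\size} = \hat{\tau}\epsilon'^2{\size}$; since $\hat{\tau} = \tau - {\size}^{\epsilon-1/2}$ by definition, this differs from $\tau\epsilon'^2{\size}$ by $\epsilon'^2{\size}^{1/2+\epsilon} \le {\size}^{1/2+\epsilon}$.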
Let $A$ be the event that there are less than $\tau \epsilon'^2 {\size} - {\size}^{1/2+\epsilon}$ agents of type (-1) in $\mathcal{N}_{\epsilon'w}$, which contains ${\size}'$ agents. By Proposition~\ref{Prop:firstprop}, there exist $c_1, c_2>0$ such that \begin{align*} P(A) \le P\left(W'\le\hat{\tau} {\size}' - c_1{\size}^{1/2+\epsilon}\given[\Big] W_{(1+\epsilon')w} < (1+\epsilon')^2\hat{\tau} {\size}\right) \le e^{-c_2{\size}^{2\epsilon}}, \end{align*} where $W_{(1+\epsilon')w}$ represents the number of (-1) agents in $\mathcal{N}_{(1+\epsilon')w}$. Let $\mathcal{I}$ denote the set of the positions of all the agents in $\mathcal{N}_{\epsilon'w}$, and let $B_i$ be the event that a (-1) agent positioned at $i\in \mathcal{I}$ is happy. By Proposition \ref{Prop:firstprop}, there exists $c_3>0$ such that, for all $i\in\mathcal{I}$ \begin{align*} P(B_i) = P\left(W_i\ge \hat{\tau} {\size} + c_u{\size}^{1/2+\epsilon}\given[\Big]W_{(1+\epsilon')w} < (1+\epsilon')^2\hat{\tau} {\size}\right) \le e^{-c_3{\size}^{2\epsilon}}, \end{align*} where $W_i$ is the number of (-1) agents in the neighborhood of $i$ and $c_u>0$ is chosen so that the threshold for being happy is met. It follows that there exists $c>0$ such that \begin{align*} P\left(A^C\cap B_1^C \cap ... \cap B_{|\mathcal{I}|}^C\right) \ge 1 - {\size}e^{-c{\size}^{2\epsilon}}, \end{align*} where $|\mathcal{I}|$ denotes the cardinality of $\mathcal{I}$. \end{proof} A radical region is \emph{expandable} if there is a sequence of at most $(w+1)^2$ possible flips inside it that can make the neighborhood $\mathcal{N}_{w/2}$ at its center monochromatic. We consider a geometric configuration where a radical region, and neighborhoods $\mathcal{N}_{\epsilon'w}$ , $\mathcal{N}_{w/2}$ and $\mathcal{N}_{{\radius}}$ with ${\radius}>3w$, are all co-centered. We consider the process dynamics and let $u^+$ denote an arbitrary (+1) agent and \begin{align} T({\radius}) = \inf\left\{t: \exists v\in \mathcal{N}_{\radius}, \; u^{+} {\normalfont \mbox{ would be unhappy at the location of $v$} } \right\}. \label{Tinfdef} \end{align} The next lemma shows that the radical region in this configuration is expandable w.h.p., provided that $\epsilon'$ is large enough and no (+1) agent at the location of any agent in $\mathcal{N}_{\radius}$ is unhappy. The main idea is that the (-1) agents in the unhappy region at the center of the radical region can trigger a process that leads to a monochromatic (+1) region of radius $w/2$. \begin{lemma*} \label{Lemma:Trigger} For all $\epsilon' > f(\tau)$, where \begin{align} f(\tau) = \frac{3(\tau - 0.5)+\sqrt{9(\tau-0.5)^2-7(\tau-0.5)(3\tau+0.5)}}{2(3\tau+0.5)}, \label{eq:ftau} \end{align} there exists w.h.p. a sequence of at most $(w+1)^2$ possible flips in $\mathcal{N}_{(1+\epsilon')w}$ such that if they happen before $T({\radius})$, then all the agents inside $\mathcal{N}_{w/2}$ will become of type (+1). \end{lemma*} \begin{figure}[!t] \centering \includegraphics[width=\textwidth]{lemma5.pdf} \caption{Regions discussed in Lemma~\ref{Lemma:Trigger}. $\mathcal{N}_{\epsilon'w}$ is an unhappy region w.h.p., the dashed box is $\mathcal{N}_{w/2}$, $u$ is a corner agent in $\mathcal{N}_{w/2}$, and finally $\mathcal{N}(u)$ is the neighborhood of agent $u$.} \label{fig:thrm41s} \end{figure} \begin{proof} Let $\epsilon \in (0,1/2)$. Let us denote the neighborhood with radius $\epsilon'w$ and co-centered with the radical region by $\mathcal{N}_{\epsilon'w}$, see Figure \ref{fig:thrm41s}.
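Before proceeding, we record the elementary geometric computation behind the scaling factor $\gamma''$ used below (a routine verification from the definitions, with the configuration of Figure~\ref{fig:thrm41s}). The neighborhood of the corner agent $u$ of $\mathcal{N}_{w/2}$ is a square of side $2w$ whose center is at distance $w/2$, in each coordinate, from the common center of the regions, while the radical region is a square of side $2(1+\epsilon')w$. For $\epsilon' \in (0,1/2)$ their intersection is a square of side $(3/2+\epsilon')w$, so the fraction of the radical region that it covers is
\begin{align*}
\frac{(3/2+\epsilon')^2w^2}{4(1+\epsilon')^2w^2} = \frac{(3/2+\epsilon')^2}{4(1+\epsilon')^2},
\end{align*}
up to boundary corrections of order $1/\sqrt{{\size}}$.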
By Lemma \ref{Lemma:unhappy_region}, with probability at least $1-e^{-O({\size}^{2\epsilon})}$ there are at least $\lfloor \tau\epsilon'^2{\size} - {\size}^{1/2+\epsilon} \rfloor$ agents of type (-1) inside $\mathcal{N}_{\epsilon'w}$ such that all of them are unhappy. Next, we show that if these unhappy agents flip before $T(\radius)$, all the (-1) agents inside the neighborhood $\mathcal{N}_{w/2}$ will be unhappy w.h.p., which gives the desired result. First, we notice that if there is a flip of an unhappy (-1) agent in $\mathcal{N}_{\radius}\setminus \mathcal{N}_{w/2}$ it can only increase the probability of the existence of the sequence of flips we are looking for; hence, conditioned on having these flips before $T(\radius)$, the worst case is when these flips occur with the initial configuration of $\mathcal{N}_{\radius}\setminus \mathcal{N}_{w/2}$. Since a corner agent in $\mathcal{N}_{w/2}$ shares the least number of agents with the radical region, it is more likely for it to have the largest number of (-1) agents in its neighborhood compared to other agents in $\mathcal{N}_{w/2}$. Hence, as a worst case, we may consider a corner agent in $\mathcal{N}_{w/2}$ which is co-centered with the radical region. Let us assume that $\epsilon' \in (0,1/2)$; in this case $\mathcal{N}_{\epsilon'w}$ is completely contained in the neighborhood of each of the agents in $\mathcal{N}_{w/2}$. Let us denote the neighborhood shared between the neighborhood of the agent $u$ at the corner of $\mathcal{N}_{w/2}$ and the radical region by $\mathcal{N}''(u)$. Also, let us denote the scaling factor corresponding to this shared neighborhood by $\gamma''$. We have \begin{align*} \gamma''= \frac{(3/2+\epsilon')^2}{4(1+\epsilon')^2} \pm O\left(\frac{1}{\sqrt{{\size}}}\right). \end{align*} By Proposition \ref{Prop:firstprop} it follows that with probability at least ${1-e^{-O({\size}^{2\epsilon})}}$ there are at most \begin{align*} \frac{(3/2+\epsilon')^2\tau}{4}{\size} + o({\size}), \end{align*} agents of type (-1) in $\mathcal{N}''(u)$. Hence, we can conclude that, for any agent in $\mathcal{N}_{w/2}$, w.h.p., there are at most this many (-1) agents in the intersection of the neighborhood of this agent and the radical region. Also, using Lemma \ref{Lemma:balanced} of the Appendix, with probability at least $1-e^{-O({\size}^{2\epsilon})}$ we have at most \begin{align*} { \frac{1}{2}\left(1-(3/2+\epsilon')^2/4\right){\size} + o({\size}) }, \end{align*} agents of type (-1) in the part of the neighborhood of the corner agent $u$ in $\mathcal{N}_{w/2}$ that is also not in the radical region. Combining the above results, we can conclude that with probability at least $1-e^{-O({\size}^{2\epsilon})}$ there are at most \begin{align*} \frac{(3/2+\epsilon')^2\tau}{4}{\size} + \frac{1}{2}\left(1-\frac{(3/2+\epsilon')^2}{4}\right){\size} + o({\size}), \end{align*} agents of type (-1) in the neighborhood of an agent in $\mathcal{N}_{w/2}$. Let us denote this event for the corner agent~$u$ by $A_1$. Let us denote the events of having at most this many (-1) agents in the neighborhoods of other agents in $\mathcal{N}_{w/2}$ by $A_2, ..., A_{|\mathcal{N}_{w/2}|}$, where $|\mathcal{N}_{w/2}| = (w+1)^2$ denotes the number of agents in $\mathcal{N}_{w/2}$. We have \begin{align*} P(A_1 \cap ... \cap A_{|\mathcal{N}_{w/2}|}) &\ge 1 - P(A_1^C \cup ... \cup A^C_{|\mathcal{N}_{w/2}|}) \\ &\ge 1 - (w+1)^2 P(A^C_1) \\ & \ge 1 - e^{-O({\size}^{2\epsilon})}.
\end{align*} The goal is now to find the range of $\epsilon'$ for which $\mathcal{N}_{\epsilon'w}$ is large enough that once all of its unhappy agents flip, all the agents in $\mathcal{N}_{w/2}$ become unhappy w.h.p. It follows that we need \begin{align*} \frac{(3/2+\epsilon')^2\tau}{4}{\size} + \frac{1}{2}\left(1-\frac{(3/2+\epsilon')^2}{4}\right){\size} - \tau\epsilon'^2{\size} + o({\size}) < \tau {\size}, \end{align*} to hold w.h.p. Dividing by ${\size}$ and letting ${\size}$ go to infinity, this condition becomes \begin{align*} \left(3\tau+\tfrac{1}{2}\right)\epsilon'^2 - 3\left(\tau-\tfrac{1}{2}\right)\epsilon' + \tfrac{7}{4}\left(\tau-\tfrac{1}{2}\right) > 0. \end{align*} Since the constant term is negative for $\tau<1/2$, this quadratic in $\epsilon'$ has one negative and one positive root, and the condition holds precisely when \begin{align}\label{Eq:eps1} \epsilon' > \frac{3(\tau - 0.5)+\sqrt{9(\tau-0.5)^2-7(\tau-0.5)(3\tau+0.5)}}{2(3\tau+0.5)} = f(\tau), \end{align} where $f(\tau) < 1/2$ for $\tau \in (\tau_2,1/2)$, as desired. \end{proof} \begin{figure}[!t] \centering \includegraphics[width=4.5in]{epsilon1.pdf} \caption{The infimum of $\epsilon'$ to potentially trigger a cascading process.} \label{fig:epsilon} \end{figure} Figure \ref{fig:epsilon} depicts $f(\tau)$ as a function of $\tau$. When $\tau$ is close to one half, it is sufficient to have an $\epsilon'$ close to zero to potentially trigger a segregation process; for instance, $f(\tau) \rightarrow 0$ as $\tau \rightarrow 1/2$, while already for $\tau = 0.45$ we have $f(\tau) \approx 0.18$. In this case, a small number of agents located in a small unhappy region are needed to flip in order to make other agents in the radical region unhappy. However, as $\tau$ decreases and agents become more tolerant, a larger number of agents must flip in the unhappy region in order to make other agents in the radical region unhappy, and hence larger values of $\epsilon'$ are needed. Using Lemma~\ref{Lemma:Trigger}, we obtain an exponential bound on the probability of having an expandable radical region inside a sufficiently large neighborhood. This shows that the probability that an expandable radical region is sufficiently close to an arbitrary agent $u$ in the initial configuration is not too small. \begin{lemma*} \label{Lemma:expandable} Let $r = 2^{[1-H(\tau')]{\size}/2-o({\size})}$, where $\tau'=(\tau {\size} -2)/({\size}-1)$. Let \begin{align*} C = \left\{\text{$\mathcal{N}_r$ \emph{contains an expandable radical region at} $t=0$}\right\}. \end{align*} For all $\epsilon' > f(\tau)$ and sufficiently large ${\size}$, we have \begin{align*} P(C) \ge 2^{-[1-H(\tau')](2\epsilon'+\epsilon'^2){\size} -o({\size})}. \end{align*} \end{lemma*} \begin{proof} Let $\mathcal{N}_r$ be an arbitrary neighborhood of radius $r= 2^{[1-H(\tau')]{\size}/2-o({\size})}$ and let $\mathcal{N}_{\radius}$ be a neighborhood of radius $\radius = r + w$ and with the same center as $\mathcal{N}_r$. Let \begin{align*} A = \{\forall v\in \mathcal{N}_{\radius}, \; u^{+} \mbox{ would be happy at the location of } v \mbox{ at time } t=0\}, \end{align*} \begin{align*} C = \text{\{$\mathcal{N}_r$ {\normalfont contains an expandable radical region at time $t=0$\}}}, \end{align*} \begin{align*} S_{\epsilon'} = \text{\{$\mathcal{N}_r$ contains a radical region of radius $(1+\epsilon')w$ at time $t=0$\}}. \end{align*} We have \begin{align*} P(C) &\ge P(C\cap S_{\epsilon'} \cap A) \\ &= P\left(C\given[\Big]A,S_{\epsilon'}\right)P(S_{\epsilon'}\cap A). \end{align*} Using the FKG inequality and since $S_{\epsilon'}$ and $A$ are increasing events, we have \begin{align*} P(C) \ge P\left(C\given[\Big]A,S_{\epsilon'}\right)P(S_{\epsilon'})P(A). \end{align*} By Lemma~\ref{Lemma:Trigger}, $P\left(C\given[\Big]A,S_{\epsilon'}\right)$ tends to one as ${\size} \rightarrow \infty$.
By Lemmas \ref{Lemma:R_unhappy} and \ref{Lemma:r_unhappy} of the Appendix we have that \begin{align*} P(S_{\epsilon'}) \ge 2^{-[1-H(\tau')][2\epsilon'+\epsilon'^2]{\size} -o({\size})}. \end{align*} Finally, $P(A)$ tends to one as ${\size} \rightarrow \infty$, which leads to the desired result. \end{proof} So far, we have identified a local configuration (radical region) that can lead to the formation of a small monochromatic neighborhood w.h.p. In the following section we show that this monochromatic neighborhood is in fact capable of making a large region monochromatic or almost monochromatic. \section{The segregation process} \label{Sec:Segregation} We now consider the dynamics of the segregation process and show that for all $\tau \in (\tau_1, 1/2)$ the expected size of the monochromatic region in steady state is exponential, while for all $\tau \in (\tau_2,\tau_1]$ the expected size of the almost monochromatic region is exponential. \subsection{Monochromatic region} We need the following definitions and preliminary results for proving the first part of Theorem~\ref{Thrm:first_theorem}. A \textit{firewall} of radius $r$ and center $u$ is a set of agents of the same type contained in an annulus \begin{align*} A_r(u) = \left\{y: r-\sqrt{2}w \leq \|u-y\| \leq r\right\}, \end{align*} where $\|.\|$ denotes Euclidean distance and $r\ge 3w$. By Lemma \ref{Lemma:firewall}, once formed, a firewall of sufficiently large radius remains static, and since its width is $\sqrt{2} w$ the agents inside the inner circle are not going to be affected by the configuration outside the firewall. We now call a neighborhood with radius $w/2$ a $w$-block. Consider the grid graph $G_n$. Let us renormalize this grid into $w$-blocks and denote the resulting graph by $G'_n$, each vertex of which is a $w$-block. Consider i.i.d. random variables $\{t(v):v\in G'_n\}$, each attached to a vertex of $G'_n$. Let $F$ denote the common distribution of these random variables and assume $F(0^{-}) = 0$, $\int_{[0,\infty)}xF(dx) < \infty$, and that $F$ is not concentrated on one point. Consider a path $\eta$ consisting of the vertices $v_1,...,v_k \in G'_n$ and define the passage time of this path \begin{equation*} T^*(\eta) = \sum_{i=1}^k t(v_i). \end{equation*} We also define \begin{align*} T_k &= \inf_{\eta \in (0 \leftrightarrow k\zeta_1)} \{T^*(\eta) \}, \end{align*} where $\zeta_1$ is a coordinate vector and $(0 \leftrightarrow k\zeta_1)$ indicates the set of paths between the origin and $k\zeta_1$. The following theorem, originally stated for bond percolation, also holds for site percolation and appears as Theorem~1 in \cite{kesten1993speed}. \begin{theorem*} [Kesten] \label{Thrm:Kesten_thrm1} Let $F(0) < p_c(\mathbb{Z}^d)$, where $p_c$ is the critical probability for site percolation on $\mathbb{Z}^d$, and $\int e^{\gamma x}F(dx)<\infty$ for some $\gamma > 0$. Then, there exist $c_1,c_2,c_3,c_4 \in \mathbb{R}^+$ independent of $k$ such that \begin{align*} {P\left(|T_k - \mathbb{E}[T_k]|>x\sqrt{k}\right)<c_1e^{-c_2x}}, \end{align*} for $x<c_3k$ and $c_4k^{-2} \le \mathbb{E}[T_k]/{k}-\mu$, where $\mu = \lim_{k\rightarrow \infty} {T_k}/{k}$. \end{theorem*} Using the above theorem, we obtain the following lower bound on the conditional probability that the spread of unhappy agents takes a sufficiently large amount of time. \begin{lemma*} \label{Lemma:Unhappy_growth} Let $\mathcal{N}_{\radius}$ be a neighborhood with radius ${\radius}>{\size}^3$ and let $u^+$ denote an arbitrary (+1) agent.
Let \begin{align*} A = \left\{\forall v\in \mathcal{N}_{\radius}, \; u^{+} \mbox{ {\normalfont would be happy at the location of $v$ at time $t=0$}}\right\}. \end{align*} There exist constants $c,c',c'' \in \mathbb{R}^+$ independent of ${\size}$, such that for all ${\size \ge 1}$, \begin{align*} P\left(T(\radius/2)>c''\frac{{\radius}}{{\size}^{3/2}}\given[\Big] A\right) > 1 - {c{\radius}^2}e^{-c' {\radius}^{1/3}}, \end{align*} where $T(\radius)$ is defined in (\ref{Tinfdef}). \end{lemma*} \begin{figure}[!t] \centering \includegraphics[width=\textwidth]{thrm_us.pdf} \caption{Neighborhoods described in the proof of Lemma~\ref{Lemma:Unhappy_growth}.} \label{fig:Unhappy_growth} \end{figure} \begin{proof} We renormalize the grid into $w$-blocks starting with the block at the center of $\mathcal{N}_{\radius}$ and construct $G'_n$ as described above. Let $\mathcal{N}_U$ be the set of all the $w$-blocks on the outside boundary of $\mathcal{N}_{\radius}$ (these are the blocks that are connected to $\mathcal{N}_{\radius}$ in $G'_n$). In order to find an upper bound for the speed of the spread of the unhappy agents, assume that all the (+1) agents in a $w$-block will become unhappy with a single flip in one of its eight $l_\infty$ closest neighboring $w$-blocks. Also assume that all the agents in $\mathcal{N}_U$ are unhappy of type (+1). Finally, denote the $w$-blocks on the outside boundary of $\mathcal{N}_{{\radius}/2}$ by $\mathcal{N}_{U'}$. We show that the speed of the spread of unhappy blocks, i.e., $w$-blocks containing unhappy agents, is independent of the configuration of the agents outside the neighborhood $\mathcal{N}_{\radius} \cup \mathcal{N}_U$ and then use Theorem~\ref{Thrm:Kesten_thrm1} to obtain the final result. Consider $G'_n$ in which each vertex is a $w$-block as described above. Here we attach i.i.d. random variables $\{t(v):v\in G'_n\}$ to each vertex. Let these random variables have a common exponential distribution with mean $1/{\size}$. Consider a path $\eta$ consisting of the vertices $v_1,...,v_k$ and the passage time $T^*(\eta) = \sum_{i=1}^k t (v_i)$. Let \begin{align*} T' = \inf_{\eta \in (\mathcal{N}_U \leftrightarrow \mathcal{N}_{U'})} T^*(\eta), \end{align*} where $(\mathcal{N}_U \leftrightarrow \mathcal{N}_{U'})$ is the set of paths connecting $\mathcal{N}_U$ to $\mathcal{N}_{U'}$. It is easy to see that $T' \le T({\radius}/2)$. We now argue that regardless of the configuration of agents in the blocks of the graph $G'_n$ containing $\mathcal{N}_{\radius} \cup \mathcal{N}_U$, the path with the smallest $T^*(\eta)$ consists only of $w$-blocks inside $\mathcal{N}_{\radius} \cup \mathcal{N}_U$. Assume that this is not the case; then some $w$-block of $\eta$ is not in $\mathcal{N}_{\radius} \cup \mathcal{N}_U$. There needs to be a path from this block to a block in $\mathcal{N}_{U'}$. This path has to cross $\mathcal{N}_U$, and as a result there is another path from $\mathcal{N}_U$ to $\mathcal{N}_{U'}$ that is at least as short as $\eta$. It follows that the shortest path from $\mathcal{N}_U$ to $\mathcal{N}_{U'}$ only consists of blocks from $\mathcal{N}_{\radius}$. Now we can assume that $\mathcal{N}_{\radius} \cup \mathcal{N}_U$ is in an infinite lattice of blocks $\mathbb{L}$, where i.i.d. random variables $\{t(v):v\in\mathbb{L}\}$ are attached to its nodes. Let $B_{U}$ and $B_{U'}$ be two blocks in $\mathcal{N}_{U}$ and $\mathcal{N}_{U'}$ that have the minimum $l_1$ distance.
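Let us also briefly motivate the choice of mean $1/{\size}$ for the passage times (a heuristic consistency check rather than a step of the proof): if, as in the firewall estimates of Lemma~\ref{Lemma:fw} below, each unhappy agent flips after an independent exponential waiting time with mean one, then the first flip among the at most ${\size}$ unhappy agents of a $w$-block occurs after an exponential time with mean at least $1/{\size}$, the minimum of at most ${\size}$ independent unit-mean exponentials. Assigning i.i.d.\ passage times with mean exactly $1/{\size}$ can therefore only speed up the spread, which is the conservative direction when lower bounding $T({\radius}/2)$.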
We let \begin{align*} T'' = \inf_{\eta \in (B_U \leftrightarrow B_{U'})} T^*(\eta). \end{align*} By Theorem~\ref{Thrm:Kesten_thrm1} and since the neighborhood is divided into $w$-blocks so that $k$ is proportional to ${\radius}/\sqrt{{\size}}$, we conclude that there exists a constant $c'' \in \mathbb{R}^+$ such that for any pair of $w$-blocks in $\mathcal{N}_U$ and $\mathcal{N}_{U'}$, there exist constants $c,c' \in \mathbb{R}^+$ such that for all ${\size \ge 1}$ \begin{align*} P\left(T'' \le c''\frac{{\radius}}{{\size}^{3/2}} \given[\Big] A \right) &\le P\left(T'' \le \frac{{\radius}}{{\size}^{1/2}}\frac{\mu}{{\size}}-x\sqrt{\frac{{\radius}}{\sqrt{{\size}}}} \given[\Big] A\right) \\ &\le P\left(T'' \le \mathbb{E}[T'']-x\sqrt{\frac{{\radius}}{\sqrt{{\size}}}} \given[\Big] A \right) \\ & \le ce^{-c'{({\radius})^{1/3}}}, \end{align*} where $x={\radius}^{1/3}$ and we have used the fact that if for a first passage percolation process with exponential distribution with unit mean we have $\lim_{n\rightarrow \infty}{T_n}/{n} = \mu$, then for the passage times of our process, which is assumed to be exponential with mean ${1}/{{\size}}$, we have $\lim_{n\rightarrow \infty} {T_n}/{n} = {\mu}/{{\size}}$. Finally, by the union bound, the probability that any of the unhappy agents in $\mathcal{N}_U$ affects an agent in $\mathcal{N}_{U'}$ before or at time $c''{{\radius}}/{{\size}^{3/2}}$ is at most $c{(4{\radius})}{(8{\radius})}e^{-c'({\radius})^{1/3}}$. Hence, we have \begin{align*} P\left(T({\radius}/2)>c''\frac{{\radius}}{{\size}^{3/2}}\given[\Big] A\right) &\ge P\left(T'>c''\frac{{\radius}}{{\size}^{3/2}}\given[\Big] A\right) \\ &> 1 - c{(4{\radius})}{(8{\radius})}e^{-c'({\radius})^{1/3}}, \end{align*} which tends to one as $\size \rightarrow \infty$. \end{proof} Call a \emph{region of expansion} any neighborhood whose configuration is such that by placing a neighborhood $\mathcal{N}_{w/2}$ of type (+1) agents anywhere inside it, all the (-1) agents on the outside boundary of $\mathcal{N}_{w/2}$ become unhappy with probability one. \begin{lemma*} \label{Lemma:monoch_spread_1} Let $\tau \in (\tau_1,1/2)$ and let $\mathcal{N}_{4r}$ be a neighborhood of radius $4r=2^{[1-H(\tau')]{\size}/2-o({\size})}$ such that ${\radius} > 8r$. Let \begin{align*} D = \left\{\text{$\forall t < T({\radius}/2), \; \mathcal{N}_{4r}${ \normalfont is a region of expansion}}\right\}, \end{align*} then $D$ occurs w.h.p. \end{lemma*} \begin{proof} Since $D$ is increasing with respect to flips of (-1) agents, we can focus on the case when the initial configuration is preserved. In this case, for the configuration to be expandable we need to make sure that any (-1) agent right outside the boundary of a monochromatic $w$-block will be unhappy. We obtain a lower bound for the probability of this event. With the same argument as in the proof of Lemma \ref{Lemma:unhappyprob} of the Appendix, a lower bound for the probability that a given (-1) agent right outside the boundary of a monochromatic neighborhood $\mathcal{N}_{w/2}$ is unhappy, is \begin{align*} 1-2^{-[1-H(\frac{4}{3}\tau)]\frac{3}{4}{\size}-o({\size})}. \end{align*} Let us denote the latter event for the (-1) agents right outside the boundary of $\mathcal{N}_{w/2}$ by $A_1, ... , A_L$, where $L$ is the number of (-1) agents right outside the boundary of $\mathcal{N}_{w/2}$. It is easy to see that these are all increasing events and using the FKG inequality we conclude that \begin{align*} P(A_1\cap ...
\cap A_L) \ge P(A_1)P(A_2)...P(A_L) \ge (1-2^{-[1-H(\frac{4}{3}\tau)]\frac{3}{4}{\size}-o({\size})})^L. \end{align*} Now, for any $v \in \mathcal{N}_{4r}$ let $B_v$ be the event that all the (-1) agents on the outside boundary of $\mathcal{N}_{w/2}$ centered at $v$ are unhappy. It is also easy to see that the $B_v$'s are increasing events. Hence, with another application of the FKG inequality we have \begin{align*} P\left(\bigcap\limits_{v \in \mathcal{N}_{4r}} B_v\right) \ge \left(1-2^{-[1-H(\frac{4}{3}\tau)]\frac{3}{4}{\size}-o({\size})}\right)^{2^{[1-H(\tau)]{\size}+o({\size})}}, \end{align*} where we have used the fact that $L < \size$; the right-hand side tends to one for $\tau \in (\tau_1,1/2)$, as required. \end{proof} Consider a disc of radius $r$, centered at an agent such that all the agents inside the disc are of the same type. It is easy to see that if $r$ is sufficiently large then all the agents inside the disc will remain happy regardless of the configuration of the agents outside the disc. Lemma 6 in \cite{immorlica2015exponential} shows that for $r>w^3$ this would be the case for sufficiently large $w$. Here we state a similar lemma, but for an annulus, i.e., a firewall, without proof. \begin{lemma*} \label{Lemma:firewall} Let $A_r(u)$ be the set of agents contained in an annulus of outer radius $r \ge w^3$ and of width $\sqrt{2} w$ centered at $u$. For all $\tau \in (\tau_2,1/2)$ and for a sufficiently large constant $w$, if $A_r(u)$ is monochromatic at time $t$, then it will remain monochromatic at all times $t'>t$. \end{lemma*} \begin{lemma*}\label{Lemma:fw} Let $\mathcal{N}_{\radius}$, $\mathcal{N}_{{\radius}/2}$, $\mathcal{N}_{4r}$, and $\mathcal{N}_{r}$ be all centered at $u$ with ${\radius} = 2^{[1-H(\tau')]{\size}/2}$ and $r = 2^{[1-H(\tau')]{\size}/2-o({\size})}$, $r<{\radius}/8$. Let $u^+$ denote an arbitrary (+1) agent, $T({\radius})$ be as defined in (\ref{Tinfdef}), and $\kappa$ be such that $\kappa r {\size}^{1/2}$ is the sum of the number of agents in a firewall with radius $2r$ and the number of agents in a line of width $w+1$ that connects the center to the boundary of the firewall and includes $\mathcal{N}_{w/2}$ at its center. Conditioned on the following events, w.h.p. the monochromatic region of $u$ will have radius at least $r$. \begin{enumerate} \item $A = \left\{\forall v\in \mathcal{N}_{\radius}, \; u^{+} \mbox{ {\normalfont would be happy at the location of $v$ at $t=0$}}\right\}, $ \item $B = \{T({\radius}/2) > 2\kappa r {\size}^{1/2} \}$, \item $C = \left\{\text{$\mathcal{N}_r$ {\normalfont contains an expandable radical region at }$t=0$}\right\}$, \item $D = \text{\{$\forall t < T({\radius}/2), \;\; \mathcal{N}_{4r}$ {\normalfont is a region of expansion}\}}$. \end{enumerate} \end{lemma*} \begin{figure}[!t] \centering \includegraphics[width=\textwidth]{thrm_fw.pdf} \caption{Neighborhoods described in the proof of Lemma~\ref{Lemma:fw}.} \label{fig:thrm_fw} \end{figure} \begin{proof} Conditioned on events $A, B$, $C$, and $D$, an expandable radical region contained in $\mathcal{N}_r$ can lead to the formation of a firewall of radius $2r$ centered at this region. Let $M(r)$ denote the event that the radius of the monochromatic region of $u$ is at least $r$. Let $T_f$ be the time at which this firewall forms, meaning that all the agents contained in the annulus become of the same type. We have \begin{align*} P\left(M(r)\given[\Big]A, B , C , D\right) \ge P\left(T_f<2\kappa r \sqrt{{\size}} \given[\Big]A,B,C,D\right). \end{align*} Let $T'_f$ be the sum of $\kappa r {\size}^{1/2}$ exponential random variables with mean one.
It is easy to see that $T'_f$ is an upper bound for the time it takes until the firewall is formed, since the worst case scenario for the formation of the firewall is when the $\kappa r {\size}^{1/2}$ agents flip to (+1), one by one. Hence, we have \begin{align*} P\left(T_f<2\kappa r \sqrt{{\size}} \given[\Big]A,B,C,D\right) \ge P\left(T'_f<2\kappa r \sqrt{{\size}}\right). \end{align*} Next, we bound this probability. We have \begin{align*} P\left(T'_f\ge 2\kappa r \sqrt{{\size}}\right) &\le P\left(|T'_f-\mathbb{E}[T'_f]|\ge \kappa r\sqrt{{\size}}\right). \end{align*} By Chebyshev's inequality, we have \begin{align*} P\left(T'_f\ge 2\kappa r \sqrt{{\size}}\right) = O\left(\frac{\mbox{Var }(T'_f)}{(r\sqrt{{\size}})^2}\right) = O\left(\frac{r\sqrt{{\size}}}{(r\sqrt{{\size}})^2}\right) = O\left(\frac{1}{r\sqrt{{\size}}}\right). \end{align*} It follows that w.h.p. agent $u$ will be trapped inside a firewall together with an expandable radical region, and the interior of the firewall will be a region of expansion until the end of the process. Hence this interior will eventually become monochromatic and, as a result, agent $u$ will have a monochromatic region of size at least proportional to $r^2$, as desired. \end{proof} We can now give the proof for the first part of Theorem~\ref{Thrm:first_theorem}. \noindent {\bf Proof of Theorem~\ref{Thrm:first_theorem}} {{\normalfont (for $\tau_1< \tau< 1/2$)}}: First, we derive the lower bound in the theorem letting \begin{equation} a(\tau) = \left[1-(2\epsilon'+\epsilon'^2)\right]\left[1-H(\tau')\right], \end{equation} where $\epsilon' > f(\tau)$, and $\tau'=(\tau {\size} -2)/({\size}-1)$. We consider neighborhoods $\mathcal{N}_{\radius}$, $\mathcal{N}_{\radius/2}$, and $\mathcal{N}_r$, with ${\radius} = 2^{[1-H(\tau')]{\size}/2}$ and $r< \radius/8$, all centered at agent $u$ as depicted in Figure~\ref{fig:thrm1}. We let $u^{+}$ be an arbitrary (+1) agent, and consider the following event in the initial configuration \begin{align} A = \left\{\forall v\in \mathcal{N}_{\radius}, \; u^{+} \mbox{ {\normalfont would be happy at the location of $v$ at $t=0$}}\right\}. \end{align} By Lemma \ref{Lemma:R_unhappy} of the Appendix, we have \begin{align} P(A) \rightarrow 1,\ \mbox{ as } \ {\size} \rightarrow \infty. \label{pa} \end{align} We then consider a firewall of radius $2r$ centered anywhere inside $\mathcal{N}_r$, and choose $\kappa >0$ so that $\kappa r {\size}^{1/2}$ is the sum of the number of agents in it and the number of agents in a line of width $w+1$ that connects its center to its boundary and includes $\mathcal{N}_{w/2}$ at its center. Consider the event \begin{align*} B = \left\{\text{$T({\radius}/2) > 2\kappa r {\size}^{1/2}$}\right\}, \end{align*} where $T({\radius})$ is defined in (\ref{Tinfdef}). By Lemma~\ref{Lemma:Unhappy_growth}, we can choose $r$ proportional to ${\radius}/{\size}^2$ so that \begin{align} P(B|A) \rightarrow 1,\ \mbox{ as } \ {\size} \rightarrow \infty. \label{pea} \end{align} With this choice, we also have \begin{align*} r &= 2^{[1-H(\tau')]{\size}/2-o({\size})}, \end{align*} and if we consider the event \begin{align*} C = \left\{\text{$\mathcal{N}_r$ contains an expandable radical region at $t=0$}\right\}, \end{align*} by Lemma \ref{Lemma:expandable}, we have for ${\size}$ sufficiently large \begin{align} P(C) \geq 2^{-[1-H(\tau')](2\epsilon'+\epsilon'^2){\size} -o({\size})}.
\label{pc} \end{align} Consider a neighborhood $\mathcal{N}_{4r}$ also centered at $u$ and the event \begin{align*} D = \left\{\text{$\forall t < T({\radius}/2), \; \mathcal{N}_{4r}$ is a region of expansion}\right\}. \end{align*} By Lemma \ref{Lemma:monoch_spread_1}, we have \begin{align} P(D) \rightarrow 1,\ \mbox{ as } \ {\size} \rightarrow \infty. \label{pd} \end{align} \begin{figure}[!t] \centering \includegraphics[width=\textwidth]{thrm11.pdf} \caption{Neighborhoods described in the proof of Theorem~\ref{Thrm:first_theorem}.} \label{fig:thrm1} \end{figure} We now note that $A$, $B$, $C$, $D$ are increasing events with respect to a partial ordering on their outcomes. More precisely, consider two outcomes of the sample space $\omega, \omega' \in \Omega$. We define a partial ordering on the outcomes such that $\omega' \ge \omega$ if for all time steps, the set of agents of type (+1) in $\omega$ is a subset of the set of agents of type (+1) in $\omega'$. An event $E$ is increasing if $\omega' \ge \omega$ implies $1_E(\omega') \ge 1_E(\omega)$, where $1_E$ is the indicator function of the event $E$. According to this definition, $A$, $B$, $C$, $D$ are increasing events. By combining (\ref{pa}), (\ref{pea}), (\ref{pc}), and (\ref{pd}), and using a version of the FKG inequality adapted to our dynamic process, stated in Lemma~\ref{Lemma:FKG_Harris} of the Appendix, it follows that for ${\size}$ sufficiently large \begin{align} P(A \cap B \cap C \cap D) &\ge P(A)P(B)P(C)P(D) \nonumber \\ &\ge P(A)P(B\cap A)P(C)P(D) \nonumber \\ &= P(B|A)[P(A)]^2P(C)P(D) \nonumber \\ &\ge 2^{-[1-H(\tau')][2\epsilon'+\epsilon'^2]{\size} -o({\size})}.\label{Eq:intersect1} \end{align} Since by Lemma~\ref{Lemma:fw} we have that conditioning on $A, B, C,$ and $D$, at the end of the process w.h.p. agent $u$ will be part of a monochromatic region with radius at least $r$, it follows that (\ref{Eq:intersect1}) is also a lower bound for the probability that the monochromatic neighborhood of agent $u$ will have size at least proportional to $r^2$. The desired lower bound on the expected size of the monochromatic region now easily follows by multiplying (\ref{Eq:intersect1}) by the size of a neighborhood of radius $r$: since $r^2 = 2^{[1-H(\tau')]{\size}-o({\size})}$, the product is $2^{\left[1-(2\epsilon'+\epsilon'^2)\right]\left[1-H(\tau')\right]{\size}-o({\size})} = 2^{a(\tau){\size}-o({\size})}$. Next, we show the corresponding upper bound, letting \begin{align*} b(\tau)= \left[\frac{3}{2}(1+\epsilon')^2\right][1-H(\tau')], \end{align*} and $\epsilon'$ and $\tau'$ as defined above. For any $\delta>0$, consider a neighborhood $\mathcal{N}_{{\radius}'}$ such that \begin{align*} {\radius}' = 2^{(1+\epsilon')^2[1-H(\tau')]{\size}/2+ \delta {\size}/2}, \end{align*} and divide $\mathcal{N}_{{\radius}'}$ into blocks of radius ${\radius}$ in the obvious way. Let $M_{+1}$ and $M_{-1}$ denote the events of $\mathcal{N}_{{\radius}'}$ being monochromatic of type (+1) and (-1) respectively. Also let $E_{+1}$ and $E_{-1}$ be the events of having a monochromatic region of type (+1) and (-1) inside a firewall of radius $2r$ centered anywhere inside $\mathcal{N}_{{\radius}'}$. We have that for ${\size}$ sufficiently large \begin{align} P(M_{+1}\cup M_{-1}) &\le P(M_{+1})+P(M_{-1}) \nonumber \\ &=P(M_{+1}\cap E^C_{-1}) + P(M_{-1}\cap E^C_{+1}) \nonumber \\ & \le P(E^C_{-1}) + P(E^C_{+1}) \nonumber \\ & = 2P(E^C_{-1}) \nonumber \\ & \le 2(1-2^{-[1-H(\tau')]\left(2\epsilon'+\epsilon'^2 \right){\size}-o({\size})})^{{\radius}'^2/{\radius}^2} \nonumber \\ &\le e^{-2^{\delta {\size} - o({\size})}}.
\label{finalM} \end{align} Considering the set of all the neighborhoods of radius $\radius'$ sharing agent $u$, by the union bound the probability that at least one of them will be monochromatic is also bounded by (\ref{finalM}). We now consider the expected size of the monochromatic region of agent $u$, which is bounded as \begin{align*} \mathbb{E}[M] \le \sum_{m=1}^{n}m^2p_m, \end{align*} where $p_m$ denotes the probability of having a monochromatic region of size $m^2$ containing $u$. We let \begin{align*} {\radius}'' = 2^{(1+\epsilon')^2[1-H(\tau')]{\size}/2+ o({\size})}, \end{align*} and divide the series into two parts \begin{align}\label{Eq:upper_bound} \mathbb{E}[M] &\le \sum_{m=1}^{{\radius}''}m^2p_m + \sum_{m={\radius}''+1}^{n}m^2p_m \nonumber \\ &\le 2^{\frac{3}{2}(1+\epsilon')^2[1-H(\tau')]{\size}+ o({\size})} + \sum_{m={\radius}''+1}^{n}m^2p_m, \end{align} where the second inequality follows from $p_m \leq 1$ and $\sum_{m=1}^{{\radius}''}m^2 \le {\radius}''^3$. Since by (\ref{finalM}), for all $m \geq {\radius}'$ the probability of having a monochromatic region of size $m^2$ containing $u$ is at most double exponentially small, the tail of the remaining series in (\ref{Eq:upper_bound}) converges to a constant, while for sufficiently large ${\size}$ the sum of the first $\radius'-\radius''-1$ terms is smaller than the first term of (\ref{Eq:upper_bound}), and the proof is complete. \subsection{Almost monochromatic region} We now turn our attention to the case where $\tau \in (\tau_2,\tau_1]$. We define an $\mbox{$m$-block}$ to be a neighborhood of radius $m/2$. Let $\mathcal{I}$ be the collection of sets of agents in the possible intersections of a $w$-block with an $m$-block on the grid in the initial configuration. Also, let $W_I$ be the random variable representing the number of (-1)'s in $I \in \mathcal{I}$, and ${\size}_I$ be the total number of agents in $I \in \mathcal{I}$. \ \begin{figure}[!t] \centering \includegraphics[width=2.5in]{percolation.pdf} \caption{Part of the grid renormalized into $m$-blocks. Green and gray indicate good and bad blocks respectively. } \label{fig:goodbad} \end{figure} \textit{Good block.} For any $\epsilon \in (0,1/2)$, a \textit{good $m$-block} is an $m$-block such that for all $I\in \mathcal{I}$ we have $W_I-{\size}_I/2 < {\size}^{1/2+\epsilon}$. The $m$-blocks that do not satisfy this property are called \textit{bad $m$-blocks} (see Fig. \ref{fig:goodbad}). It is easy to see that all the blocks contained in a good $m$-block are also good blocks. \ For the following two definitions, we assume that the grid is renormalized into $m$-blocks. In this setting each $m$-block is horizontally or vertically adjacent to four other $m$-blocks. \textit{$m$-path}. An $m$-path is an ordered set of $m$-blocks such that each pair of consecutive $m$-blocks are either horizontally or vertically adjacent and no $m$-block appears more than once in the set. The \textit{length} of the path is the number of $m$-blocks in the path. Two $m$-blocks are \textit{connected} if there exists an $m$-path between them. \ \textit{$m$-cycle}. An $m$-cycle is a closed path in which the last $m$-block in its ordered set is adjacent to the first $m$-block. An $m$-cycle divides the $m$-blocks of the grid into two sets of $m$-blocks referred to as its \textit{interior} and its \textit{exterior}. \ \textit{$r$-chemical path}. Renormalize the grid into $6w^3$-blocks starting from the block centered at agent $u$.
To define an $r$-chemical path, consider two neighborhoods $\mathcal{N}_{3r}$ and $\mathcal{N}_{r}$ with radii $3r$ and $r$ respectively, both centered at an agent $u$. Let $r>12w^3$. An \textit{$r$-chemical path} centered at $u$ is the union of a $6w^3$-cycle of good $6w^3$-blocks contained in $\mathcal{N}_{3r} \setminus \mathcal{N}_{r}$ such that $u$ is in its interior, and a path of good $6w^3$-blocks from the $6w^3$-block at the center of $\mathcal{N}_r$ to a $6w^3$-block in the $6w^3$-cycle, such that the total length of the $6w^3$-cycle and the $6w^3$-path is proportional to $r/(6w^3)$ (see Fig. \ref{fig:chemical_in_r}). \ \begin{figure}[!t] \centering \includegraphics[width=2.5in]{chemical_firewall.pdf} \caption{ Larger blocks are $6w^3$-blocks and smaller ones are $2w^3$-blocks. The red cycle indicates the chemical firewall which is in the cycle of an $r$-chemical path (orange). } \label{fig:chemical_in_r} \end{figure} \textit{Chemical firewall.} Renormalize the grid into $2w^3$-blocks starting from the block centered at agent $u$ and consider the $r$-chemical path defined above in this setting. A \textit{chemical firewall} with radius $r$ is a $2w^3$-cycle contained in the cycle of the $r$-chemical path such that agent $u$ is in its interior and all the agents in the $2w^3$-cycle are of the same type (see Fig. \ref{fig:chemical_in_r}). Although the structure of a chemical firewall is very different from the annular firewall defined before, the sizes of the $m$-blocks are chosen so that it is easy to see that, with arguments similar to those given for Lemma \ref{Lemma:firewall}, it acts as a firewall, i.e., the flips of the agents in its exterior cannot affect the agents in its interior. \ An \textit{$r$-expandable radical region} of type (-1) is a radical region such that it is expandable and it is located at the center of an $r$-chemical path. Before proceeding with the first part of the proof of Theorem~\ref{Thrm:second_theorem}, we need the following results. The following lemma gives a lower bound for the probability that an arbitrary $m$-block with $m \le \size^3$ is a good $m$-block. Using this lemma, by renormalizing the grid into $m$-blocks we will argue that the probability that a block is a bad block can be made arbitrarily small for sufficiently large $\size$. \begin{lemma*} \label{Lemma:goodblock} Let $\epsilon \in (0,1/2)$ and $m\le \size^3$. Then there exists a constant $c>0$ such that, with probability at least \begin{align*} 1 - e^{-c{\size}^{2\epsilon}+o({\size}^{2\epsilon})}, \end{align*} we have $W_I-{\size}_I/2 < {\size}^{1/2+\epsilon}$ for all $I\in \mathcal{I}$. \end{lemma*} \begin{proof} By Lemma \ref{Lemma:balanced} of the Appendix, for an arbitrary $I\in \mathcal{I}$ we have \begin{align*} P\left(W_I-{\size}_I/2\ge {\size}^{1/2+\epsilon}\right) < e^{-c{\size}^{2\epsilon}}, \end{align*} where $\epsilon \in (0,1/2)$ and $c>0$. Since there are fewer than ${\size}^3$ elements in $\mathcal{I}$, we have \begin{align*} P\left(W_I-{\size}_I/2< {\size}^{1/2+\epsilon} \mbox{ for all }I\in \mathcal{I}\right) \ge 1 - {\size}^{3}e^{-c{\size}^{2\epsilon}}. \end{align*} \end{proof} Let us consider a neighborhood consisting of an exponentially large number of $m$-blocks, where $m\le \size^3$. Based on the following lemma, the ratio between bad blocks and good blocks in this neighborhood is exponentially small w.h.p. \begin{lemma*} \label{Lemma:bad_ratio} Let $c$ be a positive constant and $\epsilon \in (0,1/2)$. Let $\mathcal{N}_{\radius}$ be a neighborhood consisting of $m$-blocks and with $2^{c{\size}}$ agents.
The ratio between the number of bad blocks and the number of good blocks in $\mathcal{N}_{\radius}$ is less than $e^{-{\size}^\epsilon}$ w.h.p. \end{lemma*} \begin{proof} By Lemma \ref{Lemma:goodblock}, the probability of having a bad block is less than $e^{-{\size}^{2\epsilon}+o({\size}^{2\epsilon})}$. It is easy to show (e.g., by Markov's inequality) that the number of bad blocks is less than $2^{c{\size}}e^{-{\size}^{2\epsilon}+o({\size}^{2\epsilon})}$ w.h.p. Hence, the ratio between the number of bad blocks and the number of good blocks is less than $e^{-{\size}^\epsilon}$ w.h.p., see Figure \ref{fig:goodbad}. \end{proof} We now want to argue that the formation of a chemical firewall is likely. We first notice that a monochromatic $w$-block located inside a good $6w^3$-block can make at least a $2w^3$-block at the center of the good block monochromatic. This means that a monochromatic $w$-block at the center of the $r$-chemical path can create a chemical firewall (see Fig. \ref{fig:chemical_in_r}). Our next goal is to show that the existence of an $r$-chemical path is likely. The critical step is to show that the length of the $r$-chemical path is proportional to $r/6w^3$. We use a result from percolation theory \cite{garet2007large}, restated in the following. Consider site percolation on the square lattice in the supercritical regime. Let $D(0,x) = \inf_{\Gamma} |\Gamma|$, where $\Gamma$ is a path from the origin to the vertex $x$ and $|\Gamma|$ is the number of vertices in the path. Let $0\leftrightarrow x$ denote that $0$ and $x$ belong to the same connected component. The following is Theorem~1.4 from \cite{garet2007large}, and it asserts that the length of the shortest path between the origin and an arbitrary vertex $x$ cannot be much larger than the $l_1$ distance $\| x \|_1$, see Figure~\ref{fig:chemical}. \begin{figure}[!t] \centering \includegraphics[width=2.5in]{percolation_chemical.pdf} \caption{ The length of the shortest path of good blocks between two arbitrary vertices denoted by X is w.h.p. not much larger than the $l_1$-distance between them in the supercritical regime. } \label{fig:chemical} \end{figure} \begin{theorem*}[Garet and Marchand] \label{Thrm:Garet} For all $\alpha > 0$, there exists $p'(\alpha) \in (p_c(d),1)$ such that for all $p \in (p'(\alpha),1]$, we have: \begin{align*} \limsup_{\|x\|_1\rightarrow +\infty} \frac{\ln P_p\left(0\leftrightarrow x,D(0,x) \ge (1+\alpha)\|x\|_1\right)}{\|x\|_1} < 0. \end{align*} \end{theorem*} Now consider a two dimensional lattice which consists of good $6w^3$-blocks and bad $6w^3$-blocks. The probability of a site being good is then at least the value computed in Lemma \ref{Lemma:goodblock}, hence for sufficiently large $\size$ we are dealing with a percolation problem in the super-critical regime. Let us denote a radical region of radius $(1+\epsilon')w$ by $\epsilon'$-\textit{radical region}. \begin{lemma*} \label{Lemma:firewall_path} W.h.p. an $\epsilon'$-radical region is at the center of an $r$-chemical path at time $t=0$, where $r < n/10$. \end{lemma*} \begin{proof} Since an $r$-chemical path is contained in a neighborhood of radius $3r$, without loss of generality we can assume that this neighborhood is contained in a $\mathbb{Z}^2$ lattice. It is also clear that the flip of a (-1) agent can only increase the probability of formation of the $r$-chemical path. Divide the resulting lattice into $m$-blocks such that the $\epsilon'$-radical region is at the center of an $m$-block and call the resulting renormalized lattice $\mathbb{L'}$.
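Note that whether a given $6w^3$-block is good depends only on the types of the agents it contains, so the good/bad states of disjoint blocks are independent and the renormalized model is indeed a site percolation model on $\mathbb{L'}$. Moreover, by Lemma~\ref{Lemma:goodblock} each site is open (good) with probability at least $1-e^{-c{\size}^{2\epsilon}+o({\size}^{2\epsilon})}$, which exceeds the threshold $p'(\alpha)$ of Theorem~\ref{Thrm:Garet} once ${\size}$ is sufficiently large.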
Consider performing site percolation on this lattice by considering good $6w^3$-blocks as open sites of $\mathbb{L'}$ and bad $6w^3$-blocks as its closed sites. As discussed above, for sufficiently large $\size$ we are dealing with a percolation problem in its super-critical regime. Consider two blocks containing agents $(2r,2r)$ and $(-2r,2r)$ in the original lattice, denoted by $0$ and $x$ respectively. By Theorem~\ref{Thrm:Garet} we conclude that for sufficiently large ${\size}$ there exists a constant $c>0$ such that \begin{align*} P_p\left(0\leftrightarrow x,D(0,x) \ge (1.25)\| x \|_1\right) \le e^{-c\|x\|_1}, \end{align*} where $\|x\|_1$ is the $l_1$ distance of $x$ from $0$ and we have set $\alpha = 0.25$. By the union bound and the FKG inequality, we have \begin{align*} P_p\left(D(0,x) < 1.25\| x \|_1\right) &\ge P\left(0\leftrightarrow x\right) - P\left(0\leftrightarrow x,D(0,x) \ge (1.25) \| x \|_1\right) \\ &\ge \theta(p)^2 - e^{-c \| x \|_1}, \end{align*} where $\theta(p)$ is the probability that a node belongs to an infinite cluster and we have used the FKG inequality to conclude that $P(0\leftrightarrow x)\ge \theta(p)^2$. Now, using Lemma \ref{Lemma:goodblock} it is easy to see that for sufficiently large values of ${\size}$ this lower bound is as close as we want to one. The above argument holds for each pair of corner agents of $\mathcal{N}_{2r}$ on the same side. A similar argument also holds for the existence of a path from the center of $\mathcal{N}_r$ to an arbitrary block on the boundary of $\mathcal{N}_{3r}$, i.e., a $6w^3$-block which contains agents with $l_\infty$-distance of $3r$ from the center of $\mathcal{N}_{3r}$. It is also easy to see that these events are all increasing events, i.e., their indicator functions can only increase by changing a closed site to an open site, in this case, a bad $6w^3$-block to a good $6w^3$-block. Hence, by the FKG inequality, the joint probability of the existence of the above paths is at least the product of their probabilities, which can be made arbitrarily close to one for large values of ${\size}$. \end{proof} We need to show that w.h.p. the radical region located inside the firewall can make the interior of the firewall almost monochromatic by the end of the process. We show that there are no clusters of bad blocks of radius larger than a polynomial function of ${\size}$ in a neighborhood with exponential size in $\size$. To show this we first restate a result from \cite{grimmett1999percolation}. Let $S(k)$ be the ball of radius $k$ with center at the origin, i.e., $S(k)$ is the set of all vertices $x$ in $\mathbb{Z}^2$ for which $\Delta(0,x) \le k$, where $\Delta$ denotes the $l_1$ distance. Let $\partial S(k)$ denote the surface of $S(k)$, i.e., the set of all $x$ such that $\Delta(0,x)=k$. Let $A_k$ be the event that there exists an open path joining the origin to some vertex in $\partial S(k)$. Let the \textit{radius} of a bad cluster be defined as \begin{align*} \sup \{\Delta(0,x): x \in \mbox{bad cluster}\}. \end{align*} The following result is Theorem~5.4 in \cite{grimmett1999percolation}. \begin{theorem*} [Grimmett] \label{Thrm:grimmett_bad_cluster} \textsl{(Exponential tail decay of the radius of an open cluster.)} If $p<p_c$, there exists $\psi(p) >0 $ such that \begin{align*} P_p(A_k) < e^{-k\psi(p)}, \quad \mbox{for all } k. \end{align*} \end{theorem*} \begin{lemma*}\label{Lemma:bad_cluster} W.h.p.
there are no clusters of bad $6w^3$-blocks with radius greater than ${\size}^2$ blocks in a neighborhood with radius $4r= 2^{[1-H(\tau')]{\size}/2-o({\size})}$ at time $t=0$. \end{lemma*} \begin{proof} Let $p$ be the probability of having a bad $6w^3$-block, which by Lemma~\ref{Lemma:goodblock} is smaller than $p_c$ for ${\size}$ sufficiently large, and let $k={\size}^2$. By Theorem~\ref{Thrm:grimmett_bad_cluster}, the probability that a given block belongs to a cluster of bad $6w^3$-blocks with $l_1$-radius greater than ${\size}^2$ blocks is at most $e^{-{\size}^2\psi(p)}$; since a neighborhood with radius exponential in ${\size}$ contains only $2^{O({\size})}$ blocks, the claim follows from the union bound. \end{proof} \ It is easy to check that for $\tau > 3/8$, a monochromatic $w$-block in a good block can make the whole block monochromatic (except for possibly a margin of $w$ at the borders). On the other hand, Lemma~\ref{Lemma:Sufficient2} shows that the same condition of Lemma~\ref{Lemma:Trigger} leads to the formation of a monochromatic $3w/2$-block for $\tau \in (\tau_1,3/8)$, because once the $\epsilon'$-radical region leads to a monochromatic $w$-block at its center, it can w.h.p.\ also lead to a monochromatic $3w/2$-block. Lemma \ref{Lemma:Monoch_spread2} then shows that the spread of the monochromatic $3w/2$-blocks is indeed possible. \begin{lemma*} \label{Lemma:Sufficient2} Consider the neighborhood $\mathcal{N}_S$ of radius $S=(1+\epsilon')w$ considered in Lemma \ref{Lemma:Trigger}, co-centered with a neighborhood $\mathcal{N}_{\radius}$ of radius ${\radius}>{\size}$ with the property that no (+1) agent inside $\mathcal{N}_{\radius}$ will become unhappy until some time $T(\radius)$. Then w.h.p. there exists a set of flips with the following property: if they happen before $T(\radius)$ then all the agents inside a neighborhood with radius $3w/2$ concentric with $\mathcal{N}_{\radius}$ will be of the same type. \end{lemma*} \begin{proof} By Lemma \ref{Lemma:Trigger}, w.h.p. there exists a set of flips that if they happen before $T(\radius)$ will make a $w$-block at the center of $\mathcal{N}_{\radius}$ monochromatic. By Proposition \ref{Prop:firstprop}, it follows that this monochromatic block will make all the (-1) agents in four identical trapezoids outside the $w$-block whose larger bases are the sides of the $w$-block unhappy, and hence these trapezoids become monochromatic w.h.p. Now, with another application of Proposition \ref{Prop:firstprop} we have that for $\tau > \tau_1$, all the (-1) agents in a $3w/2$-block with the same center as the $w$-block will be unhappy, hence the $3w/2$-block can become monochromatic w.h.p. \end{proof} \begin{figure}[!t] \centering \includegraphics[width=\textwidth]{lemma16.pdf} \caption{Neighborhoods described in the proof of Lemma \ref{Lemma:Monoch_spread2}.} \label{fig:Lemma_monoch_spread2} \end{figure} \begin{lemma*} \label{Lemma:Monoch_spread2} Consider a good block at the center of $\mathcal{N}_{\radius}$ with ${\radius} > m$. A $3w/2$-block with (+1) agents at the center of a $7w/2$-block contained in the good block will make all (-1) agents right outside the $3w/2$-block unhappy with probability one and with at most $(3w/4+1)^2$ flips happening before $T({\radius})$, for sufficiently large ${\size}$. \end{lemma*} \begin{proof} Consider four identical isosceles trapezoids outside the $3w/2$-block whose larger bases are the sides of the $3w/2$-block (see Figure \ref{fig:Lemma_monoch_spread2}). Let $\zeta = (3-8\tau)/2$ and $\nu = (16\tau -5)/6$. Let the smaller bases of the above trapezoids be $2(3/4-2\zeta)w$ and their heights be $2 \nu w$.
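As an illustrative sanity check on these choices (not needed for the proof), note that $\zeta$ and $\nu$ are both positive precisely when $5/16 < \tau < 3/8$; for instance, $\tau = 0.36$ gives $\zeta = 0.06$ and $\nu \approx 0.127$, so the smaller bases have length $1.26w$ and the heights are roughly $0.25w$.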
For $\tau > 0.3463$, since these trapezoids are located inside a good block, for sufficiently large $\size$ all the agents of type (-1) in these trapezoids will be unhappy with probability one. Consider the case where these trapezoids have become monochromatic after the flips of (-1) agents happening before $T(\radius)$. Now consider four identical rectangles located outside the trapezoids. Let one side of each of these rectangles be at the center of one of the smaller bases of each of the four trapezoids and of length $2(1/8-\nu)w$, and let the other sides of the rectangles be $w/4$. For $\tau > \tau_1$, all the agents of type (-1) located inside these rectangles will be unhappy. Now, as a worst case, let us consider an agent outside the $3w/2$-block and next to its corner, which shares the smallest number of agents with the monochromatic regions. When the unhappy agents in the rectangles flip before $T(\radius)$, for this agent to be unhappy we need to have \begin{align*} \left[1-\frac{1}{4}- \left(\frac{1}{4}+\frac{1}{2}-\zeta \right)\nu - \frac{1}{4}\left(\frac{1}{8}-\nu \right)\right]\frac{1}{2} + \frac{o({\size})}{{\size}}<\tau, \end{align*} which can be simplified to (\ref{Eq:tau2_eq}). This means that for $\tau < \tau_1$ and for sufficiently large $\size$ this agent will be unhappy with probability one. Since all the other agents of type (-1) right outside the $3w/2$-block share at least as many agents with the monochromatic regions, we have that for sufficiently large $\size$, all the (-1) agents right outside the $3w/2$-block will be unhappy with probability one. \end{proof} \begin{figure}[!t] \centering \includegraphics[width=\textwidth]{thrm_fw2.pdf} \caption{Neighborhoods described in the proof of Lemma~\ref{Lemma:fw2}. Agent $u$ is depicted by the circle in the red square and the $\epsilon'$-radical region is depicted by the small orange square in the red square.} \label{fig:thrm_fw2} \end{figure} \ The following lemma, which can be thought of as the counterpart of Lemma \ref{Lemma:fw} for $\tau \in (\tau_2,\tau_1]$, shows that conditional on some events, the size of the almost monochromatic region of an arbitrary agent is exponential in $\size$. Unless otherwise stated, by a good block we mean a good $6w^3$-block and by a bad block we mean a bad $6w^3$-block. \begin{lemma*}\label{Lemma:fw2} Let $\mathcal{N}_{\radius}$, $\mathcal{N}_{{\radius}/2}$, $\mathcal{N}_{4r}$, and $\mathcal{N}_{r}$ be all centered at $u$ with \begin{align*} {\radius} = 2^{[1-H(\tau')]{\size}/2}, \\ r = 2^{[1-H(\tau')]{\size}/2-o({\size})}, \end{align*} and $r<{\radius}/8$. Let $u^+$ denote an arbitrary (+1) agent, $T({\radius})$ be as defined in (\ref{Tinfdef}), and $\kappa>0$ be such that $\kappa r {\size}^{3/2}$ is the total number of agents in a $2r$-chemical path. Conditioned on the following events, w.h.p. the almost monochromatic region of $u$ will have radius at least $r$.
\begin{enumerate} \item $A = \left\{\forall v\in \mathcal{N}_{\radius}, \; u^{+} \mbox{ {\normalfont would be happy at the location of $v$ at $t=0$}}\right\}, $ \item $B = \{T({\radius}/2) > 2\kappa r {\size}^{3/2}\}$, \item $C = \text{\{$\mathcal{N}_r$ {\normalfont contains a $2r$-expandable radical region at $t=0$\}}},$ \item $D = \mbox{\{{$\not \exists$ cluster of bad blocks with $l_1$-radius} $r' > {\size}^2$ { in} $\mathcal{N}_{4r}$ at $t=0$\}},$ \item $E = \text{\{${\size}_B/{\size}_G < e^{-{\size}^\epsilon}$ in $\mathcal{N}_{r}$ at $t=0$\}}$, where ${\size}_B$ is the number of bad blocks and ${\size}_G$ is the number of good blocks in $\mathcal{N}_{r}$. \end{enumerate} \begin{proof} Conditional on events $A$, $B$, and $C$, w.h.p. a $2r$-expandable radical region will lead to the formation of a firewall that contains $\mathcal{N}_r$. Conditioning additionally on events $D$ and $E$, once the firewall is formed the expandable radical region will turn the interior of the firewall, which contains $\mathcal{N}_r$, almost monochromatic by the end of the process. Let $M(r)$ denote the event that the radius of the almost monochromatic region of $u$ is at least $r$. Let $T_f$ be the time at which the firewall forms, i.e., its agents become monochromatic. We have \begin{align*} P\left(M(r)\given[\Big]A, B , C , D, E \right) \ge P\left(T_f<2\kappa r {\size}^{3/2} \given[\Big]A,B,C,D,E \right). \end{align*} Let $T'_f$ be the sum of $\kappa r {\size}^{3/2}$ exponential random variables with mean one, where $\kappa r {\size}^{3/2}$ is the total number of agents in the $2r$-chemical path. It is easy to see that $T'_f$ is an upper bound for the time it takes until the firewall is formed, since in the worst case the agents of the $2r$-chemical path flip to (+1) one by one. Hence, we have \begin{align*} P\left(M(r)\given[\Big]A,B,C,D,E\right) \ge P\left(T'_f<2\kappa r {\size}^{3/2}\right). \end{align*} Next we bound this probability. We have \begin{align*} P\left(T'_f\ge 2\kappa r {\size}^{3/2}\right) &\le P\left(|T'_f-\mathbb{E}[T'_f]|\ge \kappa r {\size}^{3/2}\right). \end{align*} By Chebyshev's inequality we have \begin{align*} P\left(T'_f\ge 2\kappa r {\size}^{3/2}\right) = O\left(\frac{\mbox{Var }(T'_f)}{(r{\size}^{3/2})^2}\right) =O\left(\frac{r{\size}^{3/2}}{(r{\size}^{3/2})^2}\right) = O \left(\frac{1}{r{\size}^{3/2}}\right), \end{align*} leading to the desired result. \end{proof} With the above definitions and results, we can proceed to the first part of the proof of Theorem~\ref{Thrm:second_theorem} (for $\tau_2< \tau \leq \tau_1$). \ \noindent {\bf Proof of Theorem~\ref{Thrm:second_theorem} {\normalfont (for $\tau_2< \tau \le \tau_1$)}:} First, we derive the lower bound in the theorem letting \begin{equation} a(\tau) = \left[1-(2\epsilon'+\epsilon'^2)\right]\left[1-H(\tau')\right], \end{equation} where $\epsilon' > f(\tau)$, and $\tau'=(\tau {\size} -2)/({\size}-1)$. We consider neighborhoods $\mathcal{N}_{\radius}$, $\mathcal{N}_{\radius/2}$, and $\mathcal{N}_r$, with ${\radius} = 2^{[1-H(\tau')]{\size}/2}$ and $r< \radius/8$, all centered at agent $u$ as depicted in Figure~\ref{fig:thrm2}. We let $u^{+}$ be an arbitrary (+1) agent and consider the following event in the initial configuration \begin{align*} A = \{\forall v\in \mathcal{N}_{\radius}, u^{+} \mbox{ would be happy at the location of } v \mbox{ at } t=0 \}. \end{align*} By Lemma \ref{Lemma:R_unhappy} of the Appendix, we have \begin{align} P(A) \rightarrow 1,\ \mbox{ as } \ {\size} \rightarrow \infty.
\label{pa2} \end{align} We then consider a chemical firewall of radius $2r$ centered anywhere inside $\mathcal{N}_r$, let $\kappa >0$ so that $\kappa r {\size}^{3/2}$ is an upper bound on the total number of agents in the $2r$-chemical path containing it, and consider the event \begin{align*} \textit{B = \{$T({\radius}/2) > 2\kappa r {\size}^{3/2}$\}}, \end{align*} where $T({\radius})$ is defined in (\ref{Tinfdef}). By Lemma~\ref{Lemma:Unhappy_growth}, we can choose $r$ proportional to ${\radius}/({\size}^3)$ so that \begin{align} P\left(B\given A\right) \rightarrow 1,\ \mbox{ as } \ {\size} \rightarrow \infty. \label{pea2} \end{align} With this choice, we also have \begin{align*} r &= 2^{[1-H(\tau')]{\size}/2-o({\size})}, \end{align*} and if we consider the event \begin{align*} C = \text{\{$\mathcal{N}_r$ contains a $2r$-expandable radical region at $t=0$\}}, \end{align*} by Lemma \ref{Lemma:expandable} and Lemma \ref{Lemma:firewall_path} and the FKG inequality, since $\epsilon'>f(\tau)$ we conclude that for sufficiently large ${\size}$ \begin{align} P(C) \ge 2^{-[1-H(\tau')][2\epsilon'+(\epsilon')^2]{\size} -o({\size})}, \label{pc2} \end{align} and there is a $2r$-expandable radical region surrounding $u$. Let us divide the grid into $m$-blocks in the obvious way. Let the \textit{radius} of a bad cluster be defined as \begin{align*} \sup \{\Delta(0,x): x \in \mbox{bad cluster}\}, \end{align*} where $\Delta$ denotes the $l_1$ distance. Let \begin{align*} D = \mbox{\{{$\not \exists$ cluster of bad blocks with $l_1$-radius} $r' > {\size}^2$ { blocks in} $\mathcal{N}_{4r}$ at $t=0$\}}. \end{align*} By Lemma \ref{Lemma:bad_cluster}, we have \begin{align} P(D) \rightarrow 1,\ \mbox{ as } \ {\size} \rightarrow \infty. \label{pd2} \end{align} Finally, let $\epsilon \in (0,1/2)$ and let ${\size}_B$ and ${\size}_G$ denote the total number of bad blocks sharing at least one agent with $\mathcal{N}_r$ and good blocks contained in $\mathcal{N}_r$ respectively and let \begin{align*} E = \text{\{${\size}_B/{\size}_G < e^{-{\size}^\epsilon}$ in $\mathcal{N}_{r}$ at $t=0$\}}. \end{align*} By an application of Lemma \ref{Lemma:bad_ratio}, we also have \begin{align} P(E) \rightarrow 1,\ \mbox{ as } \ {\size} \rightarrow \infty. \label{pe2} \end{align} See Figure \ref{fig:thrm2} for a visualization of the neighborhoods defined above. \begin{figure}[!t] \centering \includegraphics[width=\textwidth]{thrm12.pdf} \caption{Neighborhoods described in the proof of Theorem~\ref{Thrm:second_theorem}.} \label{fig:thrm2} \end{figure} Now it is easy to see that the events $A$, $B$, $C$, $D$, and $E$ are increasing. By combining (\ref{pa2}), (\ref{pea2}), (\ref{pc2}), (\ref{pd2}), and (\ref{pe2}), and using a version of the Fortuin-Kasteleyn-Ginibre (FKG) inequality adapted to our dynamic process described in Lemma~\ref{Lemma:FKG_Harris} of the Appendix, it follows that for ${\size}$ sufficiently large \begin{align} P(A \cap B \cap C \cap D \cap E) &\ge P(A)P(B)P(C)P(D)P(E) \nonumber \\ &\ge P(A)P(A\cap B)P(C)P(D)P(E) \nonumber \\ &= P(B|A)[P(A)]^2P(C)P(D)P(E) \nonumber \\ &\ge 2^{-[1-H(\tau')](2\epsilon'+(\epsilon')^2){\size} -o({\size})}. \label{Eq:intersect2} \end{align} Since by Lemma~\ref{Lemma:fw2} we have that conditional on $A, B, C ,D, E$, at the end of the process w.h.p. agent $u$ will be part of an almost monochromatic region with radius at least $r$, it follows that (\ref{Eq:intersect2}) is also a lower bound for the probability that the monochromatic neighborhood of agent $u$ will have size at least proportional to $r^2$.
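Concretely, an almost monochromatic region of radius $r$ contains a number of agents proportional to $r^2$, and with $r = 2^{[1-H(\tau')]{\size}/2-o({\size})}$ a direct computation gives \begin{align*} 2^{-[1-H(\tau')][2\epsilon'+(\epsilon')^2]{\size} -o({\size})} \cdot r^2 = 2^{\left[1-(2\epsilon'+\epsilon'^2)\right]\left[1-H(\tau')\right]{\size}-o({\size})} = 2^{a(\tau){\size}-o({\size})}. \end{align*}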
The desired lower bound on the expected size of the monochromatic region follows. The second part of the proof follows the same argument as the second part of the proof of Theorem~\ref{Thrm:first_theorem}. \subsection{Extension to the interval $1/2<\tau<1-\tau_2$} We call \textit{super-unhappy agents} the unhappy agents that can potentially become happy once they flip their type. While for $\tau<1/2$ unhappy agents can always become happy by flipping their type, for $\tau>1/2$ this is only true for the super-unhappy agents. It follows that for $\tau > 1/2$ super-unhappy agents act in the same way as unhappy agents do for $\tau < 1/2$. We let $\bar{\tau} = 1 - \tau + 2/{\size}$. A \textit{super-unhappy agent} of type (-1) is an agent for which $W < \bar{\tau}{\size}$, where $W$ is the number of (-1) agents in its neighborhood. The reason for adding the term $2/{\size}$ in the definition is to account for the strict inequality that is needed for being unhappy and the flip of the agent at the center of the neighborhood which adds one agent of its type to the neighborhood. A \textit{super-radical region} is a neighborhood $\mathcal{N}_S$ of radius $S=(1+\epsilon')w$ such that $W_S < \bar{\tau}'(1+\epsilon')^2{\size} $, where $\epsilon \in (0,1/2)$ and \begin{equation*} \bar{\tau}' = \left(1-\frac{1}{\bar{\tau} {\size}^{1/2-\epsilon}}\right)\bar{\tau}. \end{equation*} By replacing $\tau$ with $\bar{\tau}$, ``unhappy agent'' with ``super-unhappy agent'' and ``radical region'' with ``super-radical region,'' it can be checked that all proofs extend to the interval $1/2<\tau<1-\tau_2$. \section{Concluding Remarks} \label{Sec:Conclusion} The main lesson learned from our study is that even a small amount of intolerance can lead to segregation at the large scale. We remark, however, that the model is somewhat naturally biased towards segregation because agents can flip their type when a sufficiently large number of their neighbors are different from themselves, but they never flip when a large number of their neighbors are of their same type. Variations where agents could potentially flip in both situations, namely they are ``uncomfortable'' being both a minority or a majority in a largely segregated area, would be of interest. Another direction of further study could be the investigation of how the parameter of the initial distribution of the agents influences segregation, since it is only known that complete segregation occurs w.h.p.\ for $\tau=1/2$ and $p \in (1-\epsilon,1)$, while we have shown that for $0.344 < \tau < 1/2$ and $p=1/2$ the size of the monochromatic region is at most exponential in the size of its neighborhood, w.h.p. We also point out that for $\tau = 1/2$ and for $\tau \in [1/4,\tau_2] \cup [1-\tau_2,3/4]$ the behavior of the model is unknown. Finally, our results only show lower bounds on the expected size of the monochromatic region containing a given agent, but they do not show that in the steady state every agent ends up in an exponentially large monochromatic region with high probability. A possibility that is consistent with these results (but inconsistent with the simulation results) is that only an exponentially small fraction of the nodes are contained in large monochromatic regions at the end of the process, but that those regions are so large that the expected radius of the monochromatic region containing any node is exponentially large.
Proving an exponential lower bound on the size of the monochromatic region w.h.p., rather than in expectation, would rule out this possibility. \section*{Acknowledgment} The authors thank Prof. Jason Schweinsberg of the Mathematics Department of University of California at San Diego for providing invaluable feedback on earlier drafts of the paper and for suggesting some improved proofs. \printbibliography \newpage \section{Appendix} \subsection{Concentration bound on the number of agents in the initial configuration} \label{Sec:A_Sufficient} \begin{lemma*}\label{Lemma:balanced} Let $\epsilon \in (0,1/2)$, let $\mathcal{N}$ be an arbitrary neighborhood in the grid with ${\size}$ agents, and let $W$ denote the number of (-1) agents in $\mathcal{N}$. There exist $c,c' \in \mathbb{R}^+$, such that \begin{align}\label{Eq:balanced} P\left(|W - {\size}/2| < c{\size}^{1/2+\epsilon}\right) \ge 1-2e^{-c'{\size}^{2\epsilon}}. \end{align} \end{lemma*} \begin{proof} Let $W_i$ be the random variable associated with the type of the $i$'th agent in $\mathcal{N}$ such that it is one whenever the type is (-1) and zero otherwise, so that $W = \sum_{i=1}^{{\size}} W_i$. Let $\mathcal{F}_i = \sigma(W_1,...,W_i)$. Then it is easy to see that $M_n = \mathbb{E}[W|\mathcal{F}_n]$ for $n=1,...,{\size}$ is a martingale. It is also easy to see that $M_0 = \mathbb{E}[W] = {\size}/2$, and $M_{{\size}} = W$. We also have \begin{align*} |M_n - M_{n-1}| &= \left|\mathbb{E}\left(\sum_{i=1}^{{\size}}W_i|\mathcal{F}_n\right) - \mathbb{E}\left(\sum_{i=1}^{{\size}}W_i|\mathcal{F}_{n-1}\right)\right| \\ &= \left|W_n+ ({\size}-n)/2 - [{\size}-(n-1)]/2\right| \\ &\le \left|W_n - \frac{1}{2}\right| \le 1/2, \end{align*} for $n=1,2,...,{\size}$. Now using Azuma's inequality, there exist constants $c_1,c_2 \in \mathbb{R}^+$ such that \begin{align*} P\left(W - {\size}/2 \ge c{\size}^{1/2+\epsilon}\right) \le e^{-c_1{\size}^{2\epsilon}}, \end{align*} and \begin{align*} P\left(W - {\size}/2 \le -c{\size}^{1/2+\epsilon}\right) \le e^{-c_2{\size}^{2\epsilon}}. \end{align*} It follows by an application of Boole's inequality that there exists a constant $c'\in \mathbb{R}^+$ such that (\ref{Eq:balanced}) holds. \end{proof} \subsection{Preliminary results for the proof of Theorem~\ref{Thrm:first_theorem}} \label{Sec:A_proof_theorem_1} First, we give a bound on the probability of having an unhappy agent in the initial configuration; we then extend this bound to a radical region. \begin{lemma*} \label{Lemma:unhappyprob} Let $p_u$ be the probability of being unhappy for an arbitrary agent in the initial configuration. There exist positive constants $c_l$ and $c_u$ which depend only on $\tau$ such that \begin{align*} c_{l}\frac{2^{-[1-H(\tau')]{\size}}}{\sqrt{{\size}}} \le p_u \le c_{u}\frac{2^{-[1-H(\tau')]{\size}}}{\sqrt{{\size}}}, \end{align*} where $\tau' = \frac{\tau {\size} - 2}{{\size}-1}$, and $H$ is the binary entropy function. \end{lemma*} \begin{proof} We have \begin{align} \label{eq:pu} p_u = \frac{1}{2^{\size}} \sum_{k = 0 }^{\tau {\size} - 2}{{{\size}-1}\choose{k}} + \frac{1}{2^{\size}} \sum_{k = 0 }^{\tau {\size} - 2}{{{\size}-1}\choose{k}}, \end{align} where the two (identical) summands account for the two possible types of the agent, and the two-unit reduction in the upper limit accounts for the strict inequality and the agent at the center of the neighborhood. Let $\tau' = \frac{\tau {\size} - 2}{{\size}-1}$.
After some algebra we have \begin{align*} {{\size}-1 \choose \tau' ({\size}-1)} \le \sum_{k = 0 }^{\tau' ({\size}-1)}{{{\size}-1}\choose{k}} \le \frac{1-\tau'}{1-2\tau'}{{\size}-1 \choose \tau' ({\size}-1)}, \end{align*} and using Stirling's formula, there exist constants $c,c' \in \mathbb{R}^+$ such that \begin{align*} c\frac{2^{-[1-H(\tau')]({\size}-1)}}{\sqrt{({\size}-1)\tau'(1-\tau')}} \le \frac{1}{2^{{\size}-1}}{{\size}-1 \choose \tau' ({\size}-1)} \le c'\frac{2^{-[1-H(\tau')]({\size}-1)}}{\sqrt{({\size}-1)\tau'(1-\tau')}}. \end{align*} The result follows by combining the above inequalities. \end{proof} \begin{lemma*} \label{Lemma:super_unhappy_prob} There exist positive constants $c_l$ and $c_u$ which depend only on $\tau$ such that in the initial configuration, an arbitrary neighborhood with radius $(1+\epsilon')w$ is a radical region with probability $p_{\epsilon'}$ where we have \begin{align*} c_{l}\,2^{-[1-H(\tau'')](1+\epsilon')^2{\size}-o({\size})} \le p_{\epsilon'} \le c_{u}\, 2^{-[1-H(\tau'')](1+\epsilon')^2{\size}+o({\size})}, \end{align*} where $\tau'' = (\lfloor \hat{\tau}(1+\epsilon')^2{\size} \rfloor - 1) / (1+\epsilon')^2{\size} $, $\hat{\tau} = (1-{1}/{(\tau {\size}^{1/2-\epsilon})})\tau$, and $H$ is the binary entropy function. \end{lemma*} \begin{proof} The proof follows the same lines as in the proof of Lemma \ref{Lemma:unhappyprob}. \end{proof} \begin{lemma*} \label{Lemma:R_unhappy} Let ${\radius} = 2^{[1-H(\tau')]{\size}/2}$ and \begin{align*} A = \left\{\forall v\in \mathcal{N}_{\radius}, \; u^{+} \mbox{ {\normalfont would be happy at the location of $v$ at $t=0$}}\right\}. \end{align*} Then $A$ occurs w.h.p. \end{lemma*} \begin{proof} Let $U_i$ for $i=1,2,...,|\mathcal{N}_{\radius}|$ be the event that agent $u^+$ would be unhappy at the location of the $i$'th agent of $\mathcal{N}_{\radius}$. It is easy to see that $P(U_i)=p_u$ (see~(\ref{eq:pu})). Hence we have \begin{align*} P(A) &= P\left(U_1^C \cap ... \cap U^C_{|\mathcal{N}_{\radius}|}\right) \\ &= 1 - P\left(U_1 \cup ... \cup U_{|\mathcal{N}_{\radius}|}\right) \\ &\ge 1 - c_u|\mathcal{N}_{\radius}|\frac{2^{-[1-H(\tau')]{\size}}}{\sqrt{{\size}}} \\ &\ge 1 - \frac{c}{\sqrt{{\size}}} \end{align*} for some constant $c \in \mathbb{R}^+$, which tends to one as ${\size} \rightarrow \infty$. \end{proof} The following lemma gives a simple lower bound for the probability of having a radical region inside a neighborhood which has radius $r= 2^{[1-H(\tau')]{\size}/2-o({\size})}$. We call a radical region with radius $(1+\epsilon')w$ an $\epsilon'$-radical region. \begin{lemma*} \label{Lemma:r_unhappy} Any arbitrary neighborhood $\mathcal{N}_r$ with radius $r= 2^{[1-H(\tau')]{\size}/2-o({\size})}$ in the initial configuration has at least one $\epsilon'$-radical region in it with probability at least $2^{-[1-H(\tau')](2\epsilon'+\epsilon'^2){\size} -o({\size})}$. \end{lemma*} \begin{proof} Divide the neighborhood into $2(1+\epsilon')w$-blocks, and let ${\size}_b$ denote the number of blocks in $\mathcal{N}_r$. Define the events \begin{align*} Q_i = \{ \text{The $i$-th block of } \mathcal{N}_r \text{ is an } \epsilon'\text{-radical region} \}, \\ Q = \{\text{There is an } \epsilon'\text{-radical region in }\mathcal{N}_r \}. \end{align*} Since the blocks are disjoint, the events $Q_i$ are independent. Using Lemma \ref{Lemma:super_unhappy_prob}, it follows that \begin{align*} P(Q) \ge& \ P\left(Q_1 \cup ... \cup Q_{{\size}_b}\right) \\ =& \ 1 - P\left(Q_1^C \cap ...
\cap Q^C_{{\size}_b}\right) \\ \ge& \ \frac{4r^2}{(1+\epsilon')^2{\size}}2^{-[1-H(\tau'')](1+\epsilon')^2{\size}-o({\size})}\\ =& \ 2^{-[1-H(\tau')][2\epsilon'+\epsilon'^2]{\size} -[H(\tau')-H(\tau'')](1+\epsilon')^2{\size}-o({\size})} \\ =& \ 2^{-[1-H(\tau')][2\epsilon'+\epsilon'^2]{\size} -o({\size})}, \end{align*} where the inequality uses $1 - \prod_i P(Q_i^C) = 1-(1-p_{\epsilon'})^{{\size}_b} \ge {\size}_b \, p_{\epsilon'}/2$ for sufficiently large ${\size}$, with the constant absorbed into the $o({\size})$ term. \end{proof} \subsection{FKG-Harris inequality} \label{FKGus} The following is Theorem~4 in \cite{liggett2010stochastic}, which is originally due to Harris \cite{harris1977correlation}. Let $\sigma_t$ be the configuration of the agents on the grid at time $t$. Let $\mathbb{E}^{\sigma_0}[X]$ be the expected value of the random variable $X$, when the initial state of the system is $\sigma_0$. A probability distribution $\mu$ on $\{0,1\}^{\mathbb{Z}^d}$ is said to be positively associated if for all increasing $f$ and $g$ we have \begin{align*} \mathbb{E}[f(\sigma)g(\sigma)] \ge \mathbb{E}[f(\sigma)]\mathbb{E}[g(\sigma)]. \end{align*} \begin{theorem*}[Harris] \label{Thrm:harris} Assume the process satisfies the following two properties: (a) Individual transitions affect the state at only one site. (b) For every continuous increasing function $f$ and every $t>0$, the function $\sigma_0 \rightarrow \mathbb{E}^{\sigma_0}[f(\sigma_t)]$ is increasing. Then, if the initial distribution is positively associated, so is the distribution at all later times. \end{theorem*} The following is a version of the FKG inequality~\cite{fortuin1971correlation} in our setting. The original inequality holds for a static setting and is extended here to our time-dynamic setting using Theorem~\ref{Thrm:harris}. \begin{lemma*}[FKG-Harris] \label{Lemma:FKG_Harris} Let $A$ and $B$ be two increasing events defined on our process on the grid. We have \begin{align*} P(A\cap B) \ge P(A)P(B). \end{align*} \end{lemma*} \begin{proof} Assume $A$ and $B$ are increasing events which depend only on the states of the sites $v_1,v_2,...,v_k$ at the first time step. We proceed by induction on $k$. First, let $k=1$. Let $\omega(v_1)$ be the realization of the site $v_1$. We then have \begin{align*} \left(1_A(\omega_1)-1_A(\omega_2)\right) \left(1_B(\omega_1)-1_B(\omega_2) \right) \ge 0, \end{align*} for all pairs $\omega_1$ and $\omega_2$ from the sample space. We have \begin{align*} 0 &\le \sum_{\omega_1,\omega_2} \left(1_A(\omega_1)-1_A(\omega_2)\right) \left(1_B(\omega_1)-1_B(\omega_2)\right)P(\omega(v_1)=\omega_1)P(\omega(v_1)=\omega_2) \\ &=2\left(P(A\cap B)-P(A)P(B)\right), \end{align*} as required. Assume now that the result is valid for all values $k < n$. Then \begin{align*} P(A\cap B) &= \mathbb{E}\left[P\left(A\cap B \given[\Big]\omega(v_1),...,\omega(v_{n-1})\right) \right] \\ &\ge \mathbb{E}\left[P\left(A\given[\Big]\omega(v_1),...,\omega(v_{n-1})\right)P\left(B\given[\Big]\omega(v_1),...,\omega(v_{n-1})\right) \right], \end{align*} since, given $\omega(v_1),...,\omega(v_{n-1})$, $1_A$ and $1_B$ are increasing in the single variable $\omega(v_n)$. Now since $P\left(A|\omega(v_1),...,\omega(v_{n-1})\right)$ and $P\left(B|\omega(v_1),...,\omega(v_{n-1})\right)$ are increasing in the space of the $n-1$ sites, it follows from the induction hypothesis that \begin{align} \label{Eq:fkg_1} P(A\cap B) &\ge \mathbb{E}\left[P\left(A\given[\Big]\omega(v_1),...,\omega(v_{n-1})\right)\right]\mathbb{E}\left[P\left(B\given[\Big]\omega(v_1),...,\omega(v_{n-1})\right) \right] \nonumber \\ &= P(A)P(B).
\end{align} Next, assume $A$ and $B$ are increasing events which depend only on the states of the sites in the first $k$ time steps. We proceed by induction on $k$, where $K$ denotes the final time step over all the realizations. First, let $k=0$. Let $\omega(t_0)$ be the configuration of the grid at the first time step. We have \begin{align*} P(A\cap B) \ge P(A)P(B), \end{align*} by the above result. Assume now that the result is valid for all values of $k$ satisfying $k<K$. Then, since our process satisfies the conditions of Theorem~\ref{Thrm:harris} and given $\omega(t_0),...,\omega(t_{K-1})$, $1_A$ and $1_B$ are increasing in $\omega(t_{K})$, we have \begin{align*} P(A\cap B) &= \mathbb{E}\left[P\left(A\cap B \given[\Big]\omega(t_0),...,\omega(t_{K-1})\right) \right] \\ &\ge \mathbb{E}\left[P\left(A\given[\Big]\omega(t_0),...,\omega(t_{K-1})\right)P\left(B\given[\Big]\omega(t_0),...,\omega(t_{K-1})\right) \right]. \end{align*} Now, since $P\left(A|\omega(t_0),...,\omega(t_{K-1})\right)$ and $P\left(B|\omega(t_0),...,\omega(t_{K-1})\right)$ are increasing in the space of the configurations of the grid in the first $K-1$ time steps, it follows from the induction hypothesis that \begin{align*} P(A\cap B) &\ge \mathbb{E}\left[P\left(A\given[\Big]\omega(t_0),...,\omega(t_{K-1})\right)\right]\mathbb{E}\left[P\left(B\given[\Big]\omega(t_0),...,\omega(t_{K-1})\right) \right]\\ &= P(A)P(B). \end{align*} \end{proof} \end{document}
\section{Introduction} What are all the ways to produce the simplest closed 3-manifolds by the simplest 3-dimensional topological operation? From the cut-and-paste point of view, the simplest 3-manifolds are the {\em lens spaces} $L(p,q)$, these being the spaces (besides $S^3$ and $S^1 \times S^2$) that result from identifying two solid tori along their boundaries, and the simplest operation is {\em Dehn surgery} along a knot $K \subset S^3$. With these meanings in place, the opening question goes back forty years to Moser \cite{moser:torus}, and its definitive answer remains unknown. By definition, a {\em lens space knot} is a knot $K \subset S^3$ that admits a lens space surgery. Moser observed that all torus knots are lens space knots and classified their lens space surgeries. Subsequently, Bailey-Rolfsen \cite{baileyrolfsen} and Fintushel-Stern \cite{fs:lenssurgery} gave more examples of lens space knots. The production of examples culminated in an elegant construction due to Berge that at once subsumed all the previous ones and generated many more classes \cite{berge:lens}. Berge's examples are the knots that lie on a Heegaard surface $\Sigma$ of genus two for $S^3$ and that represent a primitive element in the fundamental group of each handlebody. For this reason, such knots are called {\em doubly primitive}. Berge observed that performing surgery along such a knot $K$, with (integer) framing specified by a push-off of $K$ on $\Sigma$, produces a lens space. Furthermore, he enumerated several different types of doubly primitive knots. By definition, the {\em Berge knots} are the doubly primitive knots that Berge specifically enumerated in \cite{berge:lens}. They are reproduced in Subsection \ref{ss: list} (more precisely, the {\em dual} Berge knots are reported there). The most prominent question concerning lens space surgeries is the {\em Berge conjecture}. \begin{conj}[Problem 1.78, \cite{kirby:problems}]\label{conj: berge} If integer surgery along a knot $K \subset S^3$ produces a lens space, then it arises from Berge's construction. \end{conj} Complementing Conjecture \ref{conj: berge} is the cyclic surgery theorem of Culler-Gordon-Luecke-Shalen \cite{cgls:cyclic}, which implies that if a lens space knot $K$ is not a torus knot, then the surgery coefficient is an integer. Therefore, an affirmative answer to the Berge conjecture would settle Moser's original question. We henceforth restrict attention to {\em integer} slope surgeries as a result. Reflecting $K$ if necessary, we may further assume that the slope is positive. Thus, in what follows, we attach to every lens space knot $K$ a positive integer $p$ for which $p$-surgery along $K$ produces a lens space, and denote the surgered manifold by $K_p$. Using monopole Floer homology, Kronheimer-Mrowka-Ozsv\'ath-Szab\'o related the knot genus and the surgery slope via the inequality \begin{equation}\label{e: kmos} 2g(K)-1 \leq p \end{equation} \cite[Corollary 8.5]{kmos}. Their argument utilizes the fact that the Floer homology of a lens space is as simple as possible: ${\mathrm{rk}} \; \widehat{HF}(Y) = |H_1(Y;{\mathbb Z})|$. A space with this property is called an {\em L-space}, and a knot with a positive L-space surgery is an {\em L-space knot}. Their proof adapts to the setting of Heegaard Floer homology as well \cite{os:absgr}, the framework in place for the remainder of this paper. 
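As a quick illustration of \eqref{e: kmos}, consider the right-hand trefoil: it has genus $1$ and admits a lens space surgery at slope $p = 5$ (the exceptional case appearing in Theorem \ref{t: berge bound} below), and indeed $2g(K)-1 = 1 \leq 5$.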
Ozsv\'ath-Szab\'o established a significant constraint on the knot Floer homology groups $\widehat{HFK}(K)$ and hence the Alexander polynomial $\Delta_K$ \cite[Theorem 1.2 and Corollary 1.3]{os:lens}. Utilizing this result, Ni proved that $K$ is fibered \cite[Corollary 1.3]{ni:fibered}. As indicated by Berge, it is often preferable to take the perspective of surgery along a knot in a lens space. Corresponding to a lens space knot $K \subset S^3$ is a dual knot $K' \subset K_p$, the core of the surgery solid torus. Reversing the surgery, it follows that $K'$ has a positive integer surgery producing $S^3$. Following custom, we refer to the dual of a Berge knot as a Berge knot as well, and stress the ambient manifold to prevent confusion. As demonstrated by Berge \cite[Theorem 2]{berge:lens}, the dual to a doubly primitive knot takes a particularly pleasant form: it is an example of a {\em simple knot}, of which there is a unique one in each homology class in $L(p,q)$. Thus, each Berge knot in a lens space is specified by its homology class, and this is what we report in Subsection \ref{ss: list}. This point of view is taken up by Baker-Grigsby-Hedden \cite{bgh:lens} and J. Rasmussen \cite{r:Lspace}, who have proposed programs to settle Conjecture \ref{conj: berge} by studying knots in lens spaces with simple knot Floer homology. \subsection{Results.}\label{ss: results} \noindent A derivative of the Berge conjecture is the {\em realization problem}, which asks for those lens spaces that arise by integer surgery along a knot in $S^3$. Closely related is the question of whether the Berge knots account for all the doubly primitive knots. Furthermore, the Berge conjecture raises the issue of tightly bounding the knot genus $g(K)$ from above in terms of the surgery slope $p$. The present work answers these three questions. \begin{thm}\label{t: main} Suppose that positive integer surgery along a knot $K \subset L(p,q)$ produces $S^3$. Then $K$ lies in the same homology class as a Berge knot $B \subset L(p,q)$. \end{thm} \noindent The resolution of the realization problem follows at once. As explained in Section \ref{s: completion}, the same result holds with $S^3$ replaced by any L-space homology sphere with $d$-invariant $0$. As a corollary, we obtain the following result. \begin{thm}\label{c: main} Suppose that $K \subset S^3$, $p$ is a positive integer, and $K_p$ is a lens space. Then there exists a Berge knot $B \subset S^3$ such that $B_p \cong K_p$ and $\widehat{HFK}(B) \cong \widehat{HFK}(K)$. Furthermore, every doubly primitive knot in $S^3$ is a Berge knot. \end{thm} \noindent Based on well-known properties of the knot Floer homology groups, it follows that $K$ and $B$ have the same Alexander polynomial, genus, and four-ball genus. Furthermore, the argument used to establish Theorem \ref{t: main} leads to a tight upper bound on the knot genus $g(K)$ in relation to the surgery slope. \begin{thm}\label{t: berge bound} Suppose that $K \subset S^3$, $p$ is a positive integer, and $K_p$ is a lens space. Then \begin{equation}\label{e: berge bound} 2g(K)-1 \leq p - 2 \sqrt{(4p+1)/5}, \end{equation} unless $K$ is the right-hand trefoil and $p = 5$. Moreover, this bound is attained by the type VIII Berge knots specified by the pairs $(p,k) = (5n^2+5n+1, 5n^2-1)$. \end{thm} \noindent Theorem \ref{t: berge bound} was announced without proof in \cite[Theorem 1.2]{greene:cabling} (cf. \cite{saito:lens}). 
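To see why these particular pairs attain the bound, note the following direct check: for $p = 5n^2+5n+1$, we have $4p+1 = 20n^2+20n+5 = 5(2n+1)^2$, so the right-hand side of \eqref{e: berge bound} equals \[ p - 2\sqrt{(4p+1)/5} = 5n^2+5n+1 - 2(2n+1) = 5n^2+n-1, \] and the knots in this family realize precisely this value of $2g(K)-1$.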
As indicated in \cite{greene:cabling}, for $p \gg 0$, Theorem \ref{t: berge bound} significantly improves on the bound $2g(K)-1 \leq p-9$ conjectured by Goda-Teragaito \cite{gt:lenssurgery} for a hyperbolic knot $K$, and can be used to show that the conjectured bound holds for all but at most two values $p \in \{ 14, 19 \}$. In addition, one step involved in both approaches to the Berge conjecture outlined in \cite{bgh:lens,r:Lspace} is to argue the non-existence of a non-trivial knot $K$ for which $K_{2g(K)-1}$ is a lens space. This fact follows immediately from Theorem \ref{t: berge bound}. \subsection{Berge knots in lens spaces.}\label{ss: list} J. Rasmussen concisely tabulated the Berge knots $B \subset L(p,q)$ \cite[Section 6.2]{r:Lspace}. To describe those with a {\em positive} $S^3$ surgery, select a positive integer $k$ and produce a positive integer $p$ in terms of it according to the table below. The value $k \pmod p$ represents the homology class of $B$ in $H_1(L(p,q)) \cong {\mathbb Z} / p {\mathbb Z}$, where $q \equiv -k^2 \pmod p$, as described at the end of Section \ref{s: topology}. We reproduce the tabulation here. \smallskip \noindent {\bf Berge Type I$_{\pm}$:} $p = ik \pm 1, \quad \gcd(i,k) = 1$; \noindent {\bf Berge Type II$_{\pm}$:} $p = ik \pm 1, \quad \gcd(i,k) = 2$, $i,k \geq 4$; \noindent {\bf Berge Type III:} $ \begin{cases} (a)_{\pm} \quad p \equiv \pm(2k-1)d \pmod{k^2}, \quad d \; | \; k+1, \; {k+1 \over d} \text{ odd}; \\ (b)_{\pm} \quad p \equiv \pm(2k+1)d \pmod{k^2}, \quad d \; | \; k-1, \; {k-1 \over d} \text{ odd}; \end{cases}$ \noindent {\bf Berge Type IV:} $ \begin{cases} (a)_{\pm} \quad p \equiv \pm(k-1)d \pmod{k^2}, \quad d \; | \; 2k+1; \\ (b)_{\pm} \quad p \equiv \pm(k+1)d \pmod{k^2}, \quad d \; | \; 2k-1; \end{cases}$ \noindent {\bf Berge Type V:} $ \begin{cases} (a)_{\pm} \quad p \equiv \pm(k+1)d \pmod{k^2}, \quad d \; | \; k+1, d \text{ odd}; \\ (b)_{\pm} \quad p \equiv \pm(k-1)d \pmod{k^2}, \quad d \; | \; k-1, d \text{ odd}; \end{cases}$ \noindent {\bf Berge Type VII:} $k^2+k+1 \equiv 0 \pmod{p}$; \noindent {\bf Berge Type VIII:} $k^2-k-1 \equiv 0 \pmod{p}$; \noindent {\bf Berge Type IX:} $p = {1 \over 11}(2k^2 + k + 1), k \equiv 2 \pmod{11}$; \noindent {\bf Berge Type X:} $p = {1 \over 11}(2k^2 + k + 1), k \equiv 3 \pmod{11}$. \smallskip \noindent As indicated by J. Rasmussen, type VI occurs as a special case of type V, and types XI and XII result from allowing negative values for $k$ in IX and X, respectively. \subsection{Overview and organization.}\label{ss: methods} We now provide a detailed overview of the general strategy we undertake to establish the main results. We hope that this account will satisfy the interests of most readers and clarify the intricate combinatorial arguments that occupy the main body of the text. Our approach draws inspiration from a remarkable pair of papers by Lisca \cite{lisca:lens1,lisca:lens2}, in which he classified the sums of lens spaces that bound a smooth, rational homology ball. Lisca began with the observation that the lens space $L(p,q)$ naturally bounds a smooth, negative definite plumbing 4-manifold $X(p,q)$ (Section \ref{s: topology}). If $L(p,q)$ bounds a rational ball $W$, then the 4-manifold $Z := X(p,q) \cup -W$ is a smooth, closed, negative definite 4-manifold with $b_2(Z) = b_2(X) =: n$. According to Donaldson's celebrated ``Theorem A'', the intersection pairing on $H_2(Z;{\mathbb Z})$ is isomorphic to {\em minus} the standard Euclidean integer lattice $-{\mathbb Z}^n$ \cite{d:thma}.
As a result, it follows that the intersection pairing on $X(p,q)$, which we henceforth denote by $-\Lambda(p,q)$, embeds as a full-rank sublattice of $-{\mathbb Z}^n$. Lisca solved the combinatorial problem of determining the pairs $(p,q)$ for which there exists an embedding $\Lambda(p,q) \hookrightarrow {\mathbb Z}^n$, subject to a certain additional constraint on the pair $(p,q)$. By consulting an earlier tabulation of Casson-Gordon \cite[p. 188]{cg:cobordism}, he observed that the embedding exists iff $\pm L(p,q)$ belongs to a family of lens spaces already known to bound a special type of rational ball. The classification of lens spaces that bound rational balls follows at once. Pushing this technique further, Lisca obtained the classification result for sums of lens spaces as well. In our situation, we seek the pairs $(p,q)$ for which $L(p,q)$ arises as positive integer surgery along a knot $K \subset S^3$. Thus, suppose that $K_p \cong L(p,q)$, and form a smooth 4-manifold $W_p(K)$ by attaching a $p$-framed 2-handle to $D^4$ along $K \subset {\partial} D^4$. This space has boundary $K_p$, so we obtain a smooth, closed, negative definite 4-manifold by setting $Z = X(p,q) \cup - W_p(K)$, where $b_2(Z) = n+1$. By Donaldson's Theorem, it follows that $\Lambda(p,q)$ embeds as a codimension one sublattice of ${\mathbb Z}^{n+1}$. However, this restriction is too weak: it is easy to produce pairs $(p,q)$ that fulfill this condition, while $L(p,q)$ does not arise as a knot surgery (for example, $L(33,2)$). Thankfully, we have another tool to work with: the {\em correction terms} in Heegaard Floer homology (\cite[Section 2]{greene:cabling}). Ozsv\'ath-Szab\'o defined these invariants and subsequently used them to phrase a necessary condition on the pair $(p,q)$ in order for $L(p,q)$ to arise as a positive integer surgery \cite[Corollary 7.5]{os:absgr}. Using a computer, they showed that this condition is actually sufficient for $p \leq 1500$: every pair that fulfills it appears on Berge's list \cite[Proposition 1.13]{os:lens}. Later, J. Rasmussen extended this result to all $p \leq 100,000$ \cite[end of Section 6]{r:Lspace}. Following their work, it stood to reason that the Ozsv\'ath-Szab\'o condition is both necessary and sufficient for {\em all} $(p,q)$. However, it remained unclear how to manipulate the correction terms effectively towards this end. The key idea here is to use the correction terms in tandem with Donaldson's Theorem. The result is an enhanced lattice embedding condition (cf. \cite{greene:3braids, greene:cabling, gj:slicepretzels}). In order to state it, we first require a combinatorial definition. \begin{defin}\label{d: change} A vector $\sigma = (\sigma_0, \dots, \sigma_n) \in {\mathbb Z}^{n+1}$ with $1 = \sigma_0 \leq \sigma_1 \leq \cdots \leq \sigma_n$ is a {\em changemaker} if for all $0 \leq k \leq \sigma_0 + \cdots + \sigma_n$, there exists a subset $A \subset \{0,\dots,n\}$ such that $\sum_{i \in A} \sigma_i = k$. Equivalently, $\sigma_i \leq \sigma_0 + \cdots + \sigma_{i-1} + 1$ for all $1 \leq i \leq n$. \end{defin} \noindent If we imagine the $\sigma_i$ as values of coins, then Definition \ref{d: change} asserts a necessary and sufficient condition under which one can make exact change from the coins in any amount up to their total value. The reader may find it amusing to establish this condition; its proof appears in both \cite{brown:changemaker} and \cite[Lemma 3.2]{greene:cabling}. 
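For instance, $\sigma = (1,1,2,4)$ is a changemaker: the inequality $\sigma_i \leq \sigma_0 + \cdots + \sigma_{i-1} + 1$ holds at every index, and indeed every value $0 \leq k \leq 8$ arises as a subset sum. By contrast, $(1,1,4)$ is not a changemaker, since no subset of its entries sums to $3$.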
Note that Definition \ref{d: change} differs slightly from the one used in \cite{greene:cabling}, since here we require that $\sigma_0 = 1$. Our lattice embedding condition now reads as follows. Again, we phrase it from the perspective of surgery along a knot in a lens space. \begin{thm}\label{t: main technical} Suppose that positive integer surgery along a knot $K \subset L(p,q)$ produces $S^3$. Then $\Lambda(p,q)$ embeds as the orthogonal complement to a changemaker $\sigma \in {\mathbb Z}^{n+1}$, $n = b_2(X)$. \end{thm} \noindent Our strategy is now apparent: determine the list of pairs $(p,q)$ which pass this refined embedding obstruction, and check that it coincides with Berge's list. Indeed, this is the case. \begin{thm}\label{t: linear changemakers} At least one of the pairs $(p,q)$, $(p,q')$, where $q q' \equiv 1 \pmod p$, appears on Berge's list iff $\Lambda(p,q)$ embeds as the orthogonal complement to a changemaker in ${\mathbb Z}^{n+1}$. \end{thm} \noindent Furthermore, when $\Lambda(p,q)$ embeds, we recover a value $k \pmod p$ that represents the homology class of a Berge knot $K \subset L(p,q)$ (Proposition \ref{p: homology}). Theorem \ref{t: main} follows easily from this result. To give a sense of the proof of Theorem \ref{t: linear changemakers}, we first reflect on the lattice embedding problem that Lisca solved. He made use of the fact that $\Lambda(p,q)$ admits a special basis; in our language, it is a {\em linear lattice} with a distinguished {\em vertex basis} (Subsection \ref{ss: linear lattice}). He showed that any embedding of a linear lattice as a full-rank sublattice of ${\mathbb Z}^n$ (subject to the extra constraint he posited) can be built from one of a few small embeddings by repeatedly applying a basic operation called {\em expansion}. Following this result, the identification of the relevant pairs $(p,q)$ follows from a manipulation of continued fractions. The precise details of Lisca's argument are involved, but ultimately elementary and combinatorial in nature. One is tempted to carry out a similar approach to Theorem \ref{t: linear changemakers}. Thus, one might first attempt to address the problem of embedding $\Lambda(p,q)$ as a codimension one sublattice of ${\mathbb Z}^n$, and then analyze which of these are complementary to a changemaker. However, getting started in this direction is difficult, since Lisca's techniques do not directly apply. More profitable, it turns out, is to turn this approach on its head. Thus, we begin with a study of the lattices of the form $(\sigma)^\perp \subset {\mathbb Z}^n$ for some changemaker $\sigma$; by definition, these are the {\em changemaker lattices}. A changemaker lattice is best presented in terms of its {\em standard basis} (Subsection \ref{ss: changemaker}). The question then becomes: when is a changemaker lattice isomorphic to a linear lattice? That is, how do we recognize whether there exists a change of basis from its standard basis to a vertex basis? The key notion in this regard is that of an {\em irreducible} element in a lattice $L$. By definition, an element $x \in L$ is {\em reducible} if $x = y+z$, where $y,z \in L$ are non-zero and ${\langle} y,z {\rangle} \geq 0$; it is irreducible otherwise. Here ${\langle}\, ,\, {\rangle}$ denotes the pairing on $L$. As we show, the standard basis elements of a changemaker lattice are irreducible (Lemma \ref{l: change irred}), as are the vertex basis elements of a linear lattice.
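A small illustrative example: $\sigma = (1,1,1,1,1) \in {\mathbb Z}^5$ is a changemaker, and $(\sigma)^\perp$ is spanned by the irreducible vectors $e_{i-1} - e_i$, $1 \leq i \leq 4$, whose Gram matrix is exactly that of the linear lattice $\Lambda(5,4)$: each generator has norm $2$, and consecutive generators pair to $-1$. This is the embedding underlying $5$-surgery along the right-hand trefoil (cf. Theorem \ref{t: berge bound}).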
Furthermore, the irreducible elements in a linear lattice take a very specific form (Corollary \ref{c: linear irred}). This leads to a variety of useful Lemmas, collected in Subsection \ref{ss: int graph}. For example, if a changemaker lattice is isomorphic to a linear lattice, then its standard basis does not contain three elements, each of norm $\geq 3$, such that any two pair together non-trivially (Lemma \ref{l: no triangle}). Thus, we proceed as follows. First, choose a standard basis $S \subset {\mathbb Z}^n$ for a changemaker lattice $L$, and suppose that $L$ is isomorphic to a linear lattice. Then apply the combinatorial criteria of Subsection \ref{ss: int graph} to deduce the specific form that $S$ must take. Standard basis elements come in three distinct flavors -- {\em gappy}, {\em tight}, and {\em just right} (Definition \ref{d: tight, gappy, just right}) -- and our case analysis decomposes according to whether $S$ contains no gappy or tight vectors (Section \ref{s: just right}), a gappy vector but not a tight one (Section \ref{s: gappy, no tight}), or a tight vector (Section \ref{s: tight}). In addition, Section \ref{s: decomposable} addresses the case in which $L$ is isomorphic to a (direct) sum of linear lattices. This case turns out to be the easiest to address, and the subsequent Sections \ref{s: just right} - \ref{s: tight} rely on it, while increasing in order of complexity. The net result of Sections \ref{s: decomposable} - \ref{s: tight} is a collection of several structural Propositions that enumerate the possible standard bases for a changemaker lattice isomorphic to a linear lattice or a sum thereof. Section \ref{s: cont fracs} takes up the problem of converting these standard bases into vertex bases, extracting the relevant pairs $(p,q)$ for each family of linear lattices, as well as the value $k \pmod p$ of Proposition \ref{p: homology}. Here, as in Lisca's work, we make some involved calculations with continued fractions. Table \ref{table: A} gives an overview of the correspondence between the structural Propositions and the Berge types. Lastly, Section \ref{s: completion} collects the results of the earlier Sections to prove the Theorems stated above. The remaining introductory Sections discuss various related topics. \subsection{Related progress.}\label{ss: partial} A number of authors have recently addressed both the realization problem and the classification of doubly primitive knots in $S^3$. S. Rasmussen established Theorem \ref{t: main} under the constraint that $k^2 < p$ \cite[Theorem 1.0.3]{sr:thesis}. This condition is satisfied precisely by Berge types I-V. Tange established Theorem \ref{t: main} under a different constraint relating the values $k$ and $p$ \cite[Theorem 6]{tange:realization}. The two constraints are not complementary, however, so the full statement of Theorem \ref{t: main} does not follow by combining these results. In a different direction, Berge showed by direct topological methods that all doubly primitive knots are Berge knots; equivalently, a simple knot in a lens space has an $S^3$ surgery iff it is a (dual) Berge knot \cite{berge:ppknots}. \subsection{Comparing Berge's and Lisca's lists.}\label{ss: berge lisca} Lisca's list of lens spaces that bound rational balls bears a striking resemblance to the list of Berge knots of type I-V (Subsection \ref{ss: list}). J. Rasmussen has explained this commonality by way of the knots $K$ in the solid torus $S^1 \times D^2$ that possess integer $S^1 \times D^2$ surgeries.
The classification of these knots is due to Berge and Gabai \cite{berge:solidtorus,gabai:solidtorus}. Given such a knot, we obtain a knot $K' \subset S^1 \times S^2$ via the standard embedding $S^1 \times D^2 \subset S^1 \times S^2$. Performing the induced surgery along $K'$ produces a lens space $L(p,q)$, which we can effect by attaching a 2-handle along $K' \subset {\partial} (S^1 \times D^3)$. The resulting 4-manifold is a rational ball built from a single 0-, 1-, and 2-handle, and it has boundary $L(p,q)$. As observed by J. Rasmussen, Lisca's theorem shows that every lens space that bounds a rational ball must bound one built in this way. On the other hand, we obtain a knot $K'' \subset S^3$ from $K$ via the standard embedding $S^1 \times D^2 \subset S^3$. Performing the induced surgery along $K''$ produces a lens space $L(r,s)$ and a dual knot representing some homology class $k \pmod r$. The Berge knots of type I-V arise in this way. The pair $(p,q)$ comes from setting $p = k^2$ and $q = r$; in this way, we reconstruct Lisca's list (but not his result!) from Berge's. Analogous to the Berge conjecture, Lisca's theorem raises the following conjecture. \begin{conj} If a knot in $S^1 \times S^2$ admits an integer lens space surgery, then it arises from a knot in $S^1 \times D^2$ with an integer $S^1 \times D^2$ surgery. \end{conj} \subsection{4-manifolds with small $b_2$.} Which lens spaces bound a smooth 4-manifold built from a single 0- and 2-handle? Theorem \ref{t: main} can be read as an answer to this question. Which lens spaces bound a smooth, simply-connected 4-manifold $W$ with $b_2(W) = 1$? Are there examples beyond those coming from Theorem \ref{t: main}? The answers to these questions are unknown. By contrast, the situation in the topological category is much simpler: a lens space $L(p,q)$ bounds a topological, simply-connected 4-manifold with $b_2 = b^+_2=1$ iff $-q$ is a square $({\textup{mod} \;} p)$. Similarly, we ask: which lens spaces bound a smooth rational homology ball? Which bound one built from a single 0-, 1-, and 2-handle? As addressed in Subsection \ref{ss: berge lisca}, the answers to these two questions are the same. Furthermore, Lisca showed that a two-bridge link is smoothly slice if and only if its branched double-cover (a lens space) bounds a smooth rational homology ball. Which lens spaces bound a topological rational homology ball? The answer to this question is unknown. For that matter, it is unknown which two-bridge links $L$ are topologically slice. Note that if $L$ is topologically slice, then the lens space that arises as its branched double-cover bounds a topological rational homology ball. However, the converse is unknown: is it the case that a lens space bounds a topological rational homology ball iff the corresponding two-bridge link is topologically slice? Are the answers to these questions the same as in the smooth category? \subsection{The Poincar\'e sphere.}\label{ss: poincare} Tange constructed several families of simple knots in lens spaces with integer surgeries producing the Poincar\'e sphere $P^3$ \cite[Section 5]{tange:poincare}. J. Rasmussen verified that Tange's knots account for all such simple knots in $L(p,q)$ with $|p| \leq 100,000$ \cite[end of Section 6]{r:Lspace}. Furthermore, he observed that in the homology class of each type VII Berge knot, there exists a $(1,1)$-knot $T_L$ as constructed by Hedden \cite[Figure 3]{hedden:berge}, and it admits an integer $P^3$-surgery for values $p \leq 39$ \cite[end of Section 5]{r:Lspace}.
Combining conjectures of Hedden \cite[Conjecture 1.7]{hedden:berge} and J. Rasmussen \cite[Conjecture 1]{r:Lspace}, it would follow that Tange's knots and the knots $T_L$ homologous to type VII Berge knots are precisely the knots in lens spaces with an integer $P^3$-surgery. Conjecture \ref{conj: poincare} below is the analogue to the realization problem in this setting. \begin{conj}\label{conj: poincare} Suppose that integer surgery along a knot $K \subset L(p,q)$ produces $P^3$.\footnote{Or, more generally, any L-space homology sphere with $d$-invariant $-2$.} Then either $2g(K) - 1 < p$, and $K$ lies in the same homology class as a Tange knot, or else $2g(K)-1 = p$, and $K$ lies in the same homology class as a Berge knot of type VII. \end{conj} \noindent Tange has obtained partial progress on Conjecture \ref{conj: poincare} \cite{tange:realization}. The methodology developed here to establish Theorem \ref{t: main} suggests a similar approach to Conjecture \ref{conj: poincare}, making use of an unpublished variant on Donaldson's theorem due to Fr{\o}yshov \cite[Proposition 2 and the remark thereafter]{froyshov:poincare}. Lastly, we remark that the determination of {\em non}-integral $P^3$-surgeries along knots in lens spaces seems tractable, although it falls outside the scope of the cyclic surgery theorem. \section*{Acknowledgments} Thanks to John Baldwin for sharing the meal of paneer bhurji that kicked off this project, and to him, Cameron Gordon, Matt Hedden, John Luecke, and Jake Rasmussen for helpful conversations. Paolo Lisca's papers \cite{lisca:lens1,lisca:lens2} and a lecture by Dusa McDuff on her joint work with Felix Schlenk \cite{ms:ellipsoids} were especially influential along the way. The bulk of this paper was written at the Mathematical Sciences Research Institute in Spring 2010. Thanks to everyone connected with that institution for providing an ideal working environment. \section{Topological Preliminaries}\label{s: topology} Given relatively prime integers $p > q > 0$, the lens space $L(p,q)$ is the oriented manifold obtained from $- p/q$ Dehn surgery along the unknot. It bounds a plumbing manifold $X(p,q)$, which has the following familiar description. Expand $p/q$ in a Hirzebruch-Jung continued fraction \[ p/q = [a_1,a_2,\dots,a_n]^- = a_1 - \cfrac{1}{a_2 - \cfrac{1}{\ddots - \cfrac{1}{a_n}}} \; ,\] with each $a_i$ an integer $\geq 2$. Form the disk bundle $X_i$ of Euler number $-a_i$ over $S^2$, plumb together $X_i$ and $X_{i+1}$ for $i = 1, \dots, n-1$, and let $X(p,q)$ denote the result. The manifold $X(p,q)$ is {\em sharp} \cite[Section 2]{greene:cabling}. It also admits a Kirby diagram given by the framed chain link $\mathbb{L} = L_1 \cup \cdots \cup L_n \subset S^3$, in which each $L_i$ is a planar unknot framed by coefficient $-a_i$, oriented so that consecutive components link once positively. To describe the intersection pairing on $X(p,q)$, we state a definition. \begin{defin}\label{d: linear lattice} The {\em linear lattice} $\Lambda(p,q)$ is the lattice freely generated by elements $x_1,\dots,x_n$ with inner product given by \[ \langle x_i, x_j \rangle = \begin{cases} a_i, & \text{ if } i=j; \\ -1, & \text{ if } |i-j|=1; \\ 0, & \text{ if } |i-j| > 1. \end{cases}\] \end{defin} \noindent A more detailed account about lattices (in particular, the justification for calling $\Lambda(p,q)$ a lattice) appears in Section \ref{s: lattices}.
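To fix ideas with a small example: $7/2 = [4,2]^-$, since $4 - \frac{1}{2} = \frac{7}{2}$. Thus $X(7,2)$ is the plumbing of a pair of disk bundles of Euler numbers $-4$ and $-2$, and $\Lambda(7,2)$ has Gram matrix \[ \begin{pmatrix} 4 & -1 \\ -1 & 2 \end{pmatrix}, \] of determinant $7 = p$. Note also that $7/4 = [2,4]^-$ and $2 \cdot 4 \equiv 1 \pmod 7$, in accordance with the reversal property noted below.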
It follows at once that the inner product space $(H_2(X(p,q)), Q_X)$ equals {\em minus} $\Lambda(p,q)$; here and throughout, we take homology groups with integer coefficients. We note that $p/q' = [a_n,\dots,a_1]^-$, where $0 < q' < p$ and $q q' \equiv 1 \pmod p$ (Lemma \ref{l: cont frac basics}(4)). Thus, we obtain $\Lambda(p,q) \cong \Lambda(p,q')$ on the algebraic side, and $L(p,q) \cong L(p,q')$ on the topological side (cf. Proposition \ref{p: gerstein}). Now suppose that positive integer surgery along a knot $K \subset L(p,q)$ produces $S^3$. Let $W$ denote the associated 2-handle cobordism from $L(p,q)$ to $S^3$, capped off with a $4$-handle. Orienting $K$ produces a canonical generator $[\Sigma] \in H_2(-W)$ defined by the condition that ${\langle} [C],[\Sigma] {\rangle} = +1$, where $C$ denotes the core of the 2-handle attachment. Form the closed, oriented, smooth, negative definite 4-manifold $Z = X(p,q) \cup - W$. By \cite[Theorem 3.3]{greene:cabling}, it follows that $\Lambda(p,q)$ embeds in the orthogonal complement $(\sigma)^\perp \subset {\mathbb Z}^{n+1}$, where the changemaker $\sigma$ corresponds to the class $[\Sigma]$. {\em A priori} $\sigma$ could begin with a string of zeroes as in \cite{greene:cabling}, but Theorem \ref{t: main technical} rules this out, and moreover shows that $\Lambda(p,q) \cong (\sigma)^\perp$. We establish Theorem \ref{t: main technical} once we develop a bit more about lattices (cf. Subsection \ref{ss: changemaker}), and we make use of it in the remainder of this section. We now focus on the issue of recovering the homology class $[K] \in H_1(L(p,q))$ from this embedding. Regard $\mathbb{L}$ as a surgery diagram for $L(p,q)$, and let $\mu_i, \lambda_i \subset {\partial} (nd(L_i))$ denote a meridian, Seifert-framed longitude pair for $L_i$, oriented so that $\mu_i \cdot \lambda_i = +1$. Let $T_i$ denote the $i^{th}$ surgery solid torus. The boundary of $T_n$ is a Heegaard torus for $L(p,q)$; denote by $a$ the core of $T_n$ and by $b$ the core of the complementary solid torus $T_n'$. We compute the self-linking number of $b$ as $-q'/p \pmod 1$ (cf. \cite[Section 2]{r:Lspace}, bearing in mind the opposite orientation convention in place there). Thus, if $[K] = \pm k \, [b]$, then the self-linking number of $K$ is $- k^2 q'/p \pmod 1$. The condition that $K$ has a positive integer homology sphere surgery amounts to the condition that $-k^2 q' \equiv 1 \pmod p$ (ibid.), from which we derive $q \equiv -k^2 \pmod p$. Define \begin{equation}\label{e: x} x := \sum_{i=1}^n p_{i-1} x_i \in \Lambda(p,q), \end{equation} where the values $p_i$ are inductively defined by $p_{-1} = 0$, $p_0 = 1$, and $p_i = a_i p_{i-1} - p_{i-2}$ (cf. Definition \ref{d: cont frac} and Lemma \ref{l: cont frac basics}(1)). We identify the elements $x_i$ and $x$ with their images under the embedding $\Lambda(p,q) \oplus (\sigma) \hookrightarrow {\mathbb Z}^{n+1}$. \begin{prop}\label{p: homology} Suppose that positive integer surgery along the oriented knot $K \subset L(p,q)$ produces $S^3$, let $\Lambda(p,q) \oplus (\sigma) \hookrightarrow {\mathbb Z}^{n+1}$ denote the corresponding embedding, and set $k = {\langle} e_0, x {\rangle}$. Then \[ [K] = k \, [b] \in H_1(L(p,q)).\] \end{prop} \begin{proof} (I) We first express the homology class of a knot $\kappa \subset L(p,q)$ from the three-dimensional point of view. To that end, we construct a compressing disk $D \subset T_n'$ which is related to the class $x$.
Let $P_i$ denote the planar surface in $S^3 - \mathbb{L}$ with $[{\partial} P_i] = [\lambda_i] - [\mu_{i-1}] - [\mu_{i+1}]$ (taking $\mu_0, \mu_{n+1} = \varnothing$). Choose $p_{i-1}$ disjoint copies of $P_i$, and form the oriented cut-and-paste $P$ of all these surfaces. We calculate \begin{eqnarray*} [{\partial} P] &=& \sum_{i=1}^n p_{i-1} [{\partial} P_i] = \sum_{i=1}^n p_{i-1} ([\lambda_i] - [\mu_{i-1}] - [\mu_{i+1}]) \\ &=& \sum_{i=1}^{n-1} (p_{i-1}[\lambda_i] - (p_{i-2} + p_i)[\mu_i]) \; + \; (p_{n-1}[\lambda_n] - p_{n-2}[\mu_n]) \\ &=& \sum_{i=1}^{n-1} p_{i-1}[\lambda_i - a_i \mu_i] \; + \; (p_{n-1}[\lambda_n] - p_{n-2}[\mu_n]). \end{eqnarray*} Let $D_i$ denote a compressing disk for $T_i$. Since $[{\partial} D_i] = [\lambda_i - a_i \mu_i]$, it follows that we can form the union of $P$ with $p_{i-1}$ copies of $-D_i$ for $i = 1,\dots, n-1$ to produce a properly embedded, oriented surface $D \subset T_n'$. The boundary ${\partial} D$ represents $p_{n-1}[\lambda_n] - p_{n-2}[\mu_n] \in H_1({\partial} (nd (L_n)))$, and since $\gcd(p_{n-1},p_{n-2}) = 1$, it follows that ${\partial} D$ has a single component. Furthermore, a simple calculation shows that $\chi(D) = 1$. Therefore, $D$ provides the desired compressing disk. Since $b \cdot D = 1$, we calculate \begin{equation}\label{e: kappa class} [\kappa] = (\kappa \cdot D) [b] \in H_1(L(p,q)) \end{equation} for an oriented knot $\kappa \subset L(p,q)$ supported in $T_n'$ and transverse to $D$. \noindent (II) Now we pull in the four-dimensional point of view. Given $\alpha \in H_2(X, {\partial} X)$, represent the class ${\partial}_* \alpha$ by a knot $\kappa \subset L(p,q)$, isotop it into the complement of the surgery tori $T_1 \cup \cdots \cup T_n$, and regard it as a knot in ${\partial} D^4$. Choose a Seifert surface for it and push its interior slightly into $D^4$, producing a surface $F$. Consider the Kirby diagram of $X$. We can represent the class $x$ by the closed surface $\mathcal{S}$ obtained by pushing the interior of $P$ slightly into $D^4$, producing a surface $P'$, and capping off ${\partial} P'$ with $p_{i-1}$ copies of the core of the handle attachment along $L_i$, for $i= 1,\dots,n$. It is clear that \begin{equation}\label{e: int nos} \kappa \cdot D = \kappa \cdot P = F \cdot P' = F \cdot \mathcal{S} = {\langle} [F] , x {\rangle}. \end{equation} Since ${\partial}_* \alpha = {\partial}_* [F] = [\kappa]$, it follows that $\alpha -[F]$ represents an absolute class in $H_2(X)$. Since ${\langle} x_i , x {\rangle} = 0$ for $i < n$ and ${\langle} x_n , x {\rangle} = p_n = p$, pairing with $x$ takes values in $p \cdot {\mathbb Z} = |H_1({\partial} X)| \cdot {\mathbb Z}$ on classes in the image of $H_2(X)$, and it follows that ${\langle} \alpha , x {\rangle} \equiv {\langle} [F] , x {\rangle} \pmod{p}$. Comparing with \eqref{e: kappa class} and \eqref{e: int nos}, we obtain \begin{equation}\label{e: alpha class} {\partial}_*\alpha = {\langle} \alpha , x {\rangle} [b]. \end{equation} \noindent (III) At last we use the 2-handle cobordism $W$ and the closed manifold $Z$. Given $\beta \in H_2(-W, - {\partial} W)$, write $\beta = m \, [C]$, where $C$ denotes the core of the handle attachment along $K$. Since ${\partial}_*[C] = [K]$, it follows that \begin{equation}\label{e: beta class} {\partial}_*\beta = {\langle} \beta , [\Sigma] {\rangle} [K].
\end{equation} Finally, consider the commutative diagram \[ \xymatrix @R=.5pc{ & H_2(Z,-W) \ar[r]^\sim_{exc.} & H_2(X,{\partial} X) \ar[dr]^{{\partial}_*} & \cr H_2(Z) \ar[ur] \ar [dr] & & & H_1(L(p,q))\cr & H_2(Z,X) \ar[r]^(.42)\sim_(.42){exc.} & H_2(-W, -{\partial} W) \ar[ur]^{{\partial}_*} & } \] Proceeding along the top, the image of a class $\gamma \in H_2(Z)$ in $H_1(L(p,q))$ is given by ${\langle} \gamma , x {\rangle} [b]$ according to \eqref{e: alpha class}. Similarly, proceeding along the bottom, its image in $H_1(L(p,q))$ is given by ${\langle} \gamma , \sigma {\rangle} [K]$ according to \eqref{e: beta class}, switching to the use of $\sigma$ for $[\Sigma]$. Thus, taking $\gamma = e_0$, we have \[ k [b] = {\langle} e_0 , x {\rangle} [b] = {\langle} e_0 , \sigma {\rangle} [K] = [K],\] using the fact that $\sigma_0 = 1$. This completes the proof of the Proposition. \end{proof} Thus, for an {\em unoriented} knot $K \subset L(p,q)$, we obtain a pair of values $\pm k \pmod p$ that specify a pair of homology classes in $H_1(L(p,q))$, one for each orientation on $K$. Note that had we used the reversed basis $\{x_n,\dots,x_1\}$, we would have expressed $[K]$ as a multiple $k'[a] \in H_1(L(p,q))$. Since $[a] = q [b]$, we obtain $k' \equiv k q' \equiv k^{-1} \pmod p$, which is consistent with $q' \equiv - (k')^2 \pmod p$ and $L(p,q') \cong L(p,q)$. Thus, given a value $k \pmod p$, we represent equivalent (unoriented) knots by choosing any of the values $\{ \pm k, \pm k^{-1} \} \pmod p$. For the latter Berge types listed in Subsection \ref{ss: list}, we use a judicious choice of $k$. For example, Berge types IX and X involve a concise quadratic expression for $p$ in terms of $k$, but there does not exist such a nice expression for it in terms of the least positive residue of $-k$ or $k^{-1} \pmod p$. \section{Lattices}\label{s: lattices} \subsection{Generalities.}\label{ss: generalities} A {\em lattice} $L$ consists of a finitely-generated free abelian group equipped with a positive-definite, symmetric bilinear pairing ${\langle} \; , \; {\rangle} : L \times L \to \mathbb{R}$. It is {\em integral} if the image of its pairing lies in ${\mathbb Z}$. In this case, its {\em dual lattice} is the lattice \[ L^* := \{ x \in L \otimes \mathbb{R} \; | \; {\langle} x, y {\rangle} \in {\mathbb Z} \; \text{for all} \; y \in L \},\] and its {\em discriminant} ${\textup{disc}}(L)$ is the index $[L^*:L]$. All lattices will be assumed integral henceforth. Given a vector $v \in L$, its {\em norm} is the value $| v | := {\langle} v,v {\rangle}$. It is {\em reducible} if $v = x+y$ for some non-zero $x,y \in L$ with ${\langle} x,y {\rangle} \geq 0$, and {\em irreducible} otherwise. It is {\em breakable} if $v = x + y$ for some $x,y \in L$ with $|x|,|y| \geq 3$ and ${\langle} x,y {\rangle} = -1$, and {\em unbreakable} otherwise. A lattice $L$ is {\em decomposable} if it is an orthogonal direct sum $L = L_1 \oplus L_2$ with $L_1, L_2 \ne (0)$, and {\em indecomposable} otherwise. Observe that any lattice $L$ has a basis $S = \{v_1,\dots,v_n\}$ of irreducible vectors, gotten by first selecting a non-zero vector $v_1$ of minimal norm, and then inductively selecting $v_i$ as a vector of minimal norm not contained in the span of $v_1,\dots,v_{i-1}$. 
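To verify irreducibility, a one-line check suffices: if some $v_i = x + y$ with $x,y$ non-zero and ${\langle} x,y {\rangle} \geq 0$, then $|x|, |y| < |v_i|$, so by the minimality in the selection both $x$ and $y$ lie in the span of $v_1,\dots,v_{i-1}$, and hence so does $v_i$, a contradiction.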
Given such a basis $S$, we define its {\em pairing graph} \[ \widehat{G}(S) = (S,E), \quad E = \{ (v_i,v_j) \; | \; i \ne j \text{ and } \langle v_i, v_j \rangle \ne 0 \}.\footnote{More on graph notation in Subsection \ref{ss: graph lattices}.} \] Let $G_k$ denote a connected component of $\widehat{G}(S)$ and $L_k \subset L$ the sublattice spanned by $V(G_k)$. If $L_k = L' \oplus L''$, then each vector in $V(G_k)$ must belong to one of $L'$ or $L''$ by irreducibility. Since $G_k$ is connected, it follows that they must all belong to the same summand, whence $L_k$ is indecomposable. A basic result, due to Eichler, asserts that this decomposition $L \cong \bigoplus_k L_k$ into indecomposable summands is unique up to reordering of its factors \cite[Theorem II.6.4]{milnor+husemoller}. \subsection{Graph lattices.}\label{ss: graph lattices} Let $G = (V,E)$ denote a finite, loopless, undirected graph. Write $v \sim w$ to denote $(v,w) \in E$. A {\em subgraph} of $G$ takes the form $H = (V',E')$, where $V' \subset V$ and $E' \subset \{ (v,w) \in E \; | \: v, w \in V' \}$; it is {\em induced} if ``$=$'' holds in place of ``$\subset$'', in which case we write $H = G|V'$. For a pair of disjoint subsets $T,T' \subset V$, write $E(T,T')$ for the set of edges between $T$ and $T'$, $e(T,T')$ for its cardinality, and set $d(T) = e(T,V-T)$. In particular, the {\em degree} of a vertex $v \in V$ is the value $d(v)$. Form the abelian group $\overline{\Gamma}(G)$ freely generated by classes $[v], \; v \in V$, and define a symmetric, bilinear pairing by \[ \langle [v], [w] \rangle = \begin{cases} d(v), & \text{ if } v=w; \\ -e(v,w), & \text{ if } v \ne w. \end{cases}\] Let \[[T] := \sum_{v \in T} \; [v]\] and note that \[\langle [T], [T'] \rangle = e(T \cap T', V - (T \cup T')) - e(T - T', T' - T).\] In particular, $\langle [T], [T] \rangle = d(T)$, and $\langle [T], [T'] \rangle = -e(T,T')$ for disjoint $T, T'$. Given $x \in \overline{\Gamma}(G)$, write $x = \sum_{v \in V} x_v [v]$, and observe that $|x| = \sum_{e \in E} (x_v - x_w)^2$, where $v$ and $w$ denote the endpoints of the edge $e$. It follows that $|x| \geq 0$, so the pairing on $\overline{\Gamma}(G)$ is positive semi-definite. Let $V_1,\dots,V_k$ denote the vertex sets of the connected components of $G$. It is easy to see that $|x| = 0$ iff $x$ belongs to the span of $[V_1],\dots,[V_k]$, and moreover that these elements generate $Z(G) := \{ x \in \overline{\Gamma}(G) \; | \; {\langle} x,y {\rangle} = 0 \text{ for all } y \in \overline{\Gamma}(G) \}$. It follows that the quotient $\Gamma(G) := \overline{\Gamma}(G) / Z(G)$ is a lattice. \begin{defin}\label{d: graph lattice} The {\em graph lattice} associated to $G$ is the lattice $\Gamma(G)$. \end{defin} Now assume that $G$ is connected. For a choice of {\em root} $r \in V$, every element in $\overline{\Gamma}(G)$ is equivalent $({\textup{mod} \;} Z(G))$ to a unique element in the subspace of $\overline{\Gamma}(G)$ spanned by the set $\{ [v] \; | \; v \in V - r \}$. In what follows, we keep a choice of root fixed, and identify $\Gamma(G)$ with this subspace. We reserve the notation $[T]$ for $T \subset V - r$. \begin{defin}\label{d: vertex basis} The set $\{ [v] \; | \; v \in V - r \}$ constitutes a {\em vertex basis} for $\Gamma(G)$. \end{defin} \begin{prop}\label{p: graph irred} The irreducible elements of $\Gamma(G)$ take the form $\pm [T]$, where $T$ and $V - T$ induce connected subgraphs of $G$.
\end{prop} \begin{proof} Suppose that $0 \ne x = \sum_{v \in V - r} c_v [v] \in \Gamma(G)$ is irreducible. Replacing $x$ by $-x$ if necessary, we may assume that $c := \max_v c_v \geq 1$. Let $T = \{ v \; | \; c_v = c \}$; then \begin{eqnarray*} \langle [T], x - [T] \rangle &=& \langle [T], (c-1)[T] \rangle + \langle [T], \sum_{v \in V - T} c_v [v] \rangle \\ &=& (c-1) \cdot d(T) -\sum_{v \in V - T} c_v \cdot e(v,T) \\ &=& \sum_{v \in V - T} (c-1-c_v) \cdot e(v,T) \geq 0.\end{eqnarray*} Since $x$ is irreducible, it follows that $x = [T]$. Next, we argue that $[T]$ is irreducible if and only if the induced subgraphs $G|T$ and $G|(V-T)$ are connected. Write $y = \sum_{v \in V} y_v [v] \in \overline{\Gamma}(G)$. Then \begin{equation}\label{e: y} \langle y, y-[T] \rangle = \sum_C \sum_{(u,v) \in E(C)} (y_u - y_v)^2 + \sum_{(u,v) \in E(T,V-T)} (y_u - y_v)(y_u - y_v -1), \end{equation} where $C$ ranges over the connected components of $G|T$ and $G|(V-T)$. Each summand appearing in \eqref{e: y} is non-negative. It follows that \eqref{e: y} vanishes identically if and only if (a) $y_u$ is constant on each component $C$ and (b) if a component $C_1 \subset G|T$ has an edge $(u,v)$ to a component $C_2 \subset G|(V-T)$, then $y_u = y_v$ or $y_v + 1$. Now pass to the quotient $\Gamma(G)$. This has the effect of setting $y_r = 0$ in \eqref{e: y}. If $G|(V-T)$ were disconnected, then we may choose a component $C$ such that $r \notin V(C)$ and set $y_u = -1$ for all $u \in V(C)$ and $0$ otherwise. Then $y$ and $[T] -y $ are non-zero, orthogonal, and sum to $[T]$, so $[T]$ is reducible. Similarly, if $G|T$ were disconnected, then we could choose an arbitrary component $C$ and set $y_u = 1$ if $u \in V(C)$ and $0$ otherwise, and conclude once more that $[T]$ is reducible. Otherwise, both $G|T$ and $G|(V-T)$ are connected, and $y$ vanishes on $G|(V-T)$ and equals $0$ or $1$ on $G|T$. Thus, $y = 0$ or $[T]$, and it follows that $[T]$ is irreducible. \end{proof} \begin{prop}\label{p: graph break} Suppose that $G$ does not contain a cut-edge, and suppose that $[T] = y + z$ with ${\langle} y, z {\rangle} = -1$. Then either \begin{enumerate} \item $G|T$ contains a cut-edge $e$, $V(G|T \, - \, e) = T_1 \cup T_2$, and $\{y,z\} = \{[T_1], [T_2]\}$; or \item $G(V-T)$ contains a cut-edge $e$, $V(G|(V-T) - e) = T_1 \cup T_2$, $r \in T_2$, and $\{y,z\}= \{ [T_1 \cup T], - [T_1] \}$. \end{enumerate} \end{prop} \begin{proof} Reconsider \eqref{e: y}. In the case at hand, the inner product is $1$. Each term $(y_u - y_v)(y_u - y_v - 1)$ is either 0 or $\geq 2$, so it must be the case that each such term vanishes and there exists a unique edge $e \in E(T) \cup E(V-T)$, $e=(u,v)$, for which $(y_u - y_v)^2 =1$ and all other terms vanish. In particular, it follows that $e$ is a cut-edge in either (a) $G|T$ or (b) $G|(V-T)$. In case (a), write $T_1$ and $T_2$ for the vertex sets of the components of $G|T \, - \, e$. Then $y$ is constant on $T_1$, $T_2$, and $V-T$; furthermore, it vanishes on $V-T$ and its values on $T_1$ and $T_2$ differ by one. Since $e$ is not a cut-edge in $G$, it follows that $E(V-T,T_1), E(V-T,T_2) \ne \varnothing$, so the values on $T_1$ and $T_2$ differ from the value on $V-T$ by at most one. It follows that these values are $0$ and $1$ in some order. This results in (1). In case (b), write $T_1$ and $T_2$ for the vertex sets of the components of $G|(V-T) - e$ with $r \in T_2$. Now $y$ is constant on $T_1, T_2$, and $T$; furthermore, it vanishes on $T_2$ and its values on $T_1$ and $T_2$ differ by one. 
Hence the value on $T_2$ is $1$ or $-1$. Since $e$ is not a cut-edge in $G$, it follows that $E(T,T_1), E(T,T_2) \ne \varnothing$, so the value on $T$ is $0$ or $1$ more than the values on $T_1$ and $T_2$. Thus, either the value on $T_1 \cup T$ is 1, or the value on $T_1$ is $-1$ and the value on $T$ is 0. This results in (2). \end{proof} \subsection{Linear lattices.}\label{ss: linear lattice} Observe that a sum of linear lattices $L = \bigoplus_k L_k$ occurs as a special case of a graph lattice. Indeed, construct a graph $G$ whose vertex set consists of one vertex for each generator $x_i$ of $L_k$ (Definition \ref{d: linear lattice}), as well as one additional vertex $r$. For a pair of generators $x_i, x_j$, declare $(x_i,x_j) \in E$ if and only if ${\langle} x_i, x_j {\rangle} = -1$, and define as many parallel edges between $r$ and $x_i$ as necessary so that $d(x_i) = a_i$. It is clear that $\Gamma(G) \cong L$, and this justifies the term linear {\em lattice}. Furthermore, the $x_i$ comprise a vertex basis for $L$. Given a linear lattice $L$ and a subset of consecutive integers $\{i,\dots,j\} \subset \{1,\dots,n\}$, we obtain an {\em interval} $\{x_i,\dots,x_j\}$. Two distinct intervals $T = \{x_i,\dots,x_j\}$ and $T' = \{x_k,\dots,x_l\}$ {\em share a common endpoint} if $i = k$ or $j = l$ and are {\em distant} if $k > j+1$ or $i > l+1$. If $T$ and $T'$ share a common endpoint and $T \subset T'$, then write $T \prec T'$. If $i = k+1$ or $l = j+1$, then $T$ and $T'$ are {\em consecutive} and write $T \dagger T'$. They {\em abut} if they are either consecutive or share a common endpoint. Write $T \pitchfork T'$ if $T \cap T' \ne \varnothing$ and $T$ and $T'$ do not share a common endpoint. Observe that if $T \pitchfork T'$, then the symmetric difference $(T - T') \cup (T' - T)$ is the union of a pair of distant intervals. \begin{cor}\label{c: linear irred} Let $L = \bigoplus_k L_k$ denote a sum of linear lattices. \begin{enumerate} \item The irreducible vectors in $L$ take the form $\pm[T]$, where $T$ is an interval in some $L_k$; \item each $L_k$ is indecomposable; \item if $T \pitchfork T'$, then $[T-T'] \pm [T'-T]$ is reducible; \item $[T]$ is unbreakable iff $T$ contains at most one vertex of degree $\geq 3$. \end{enumerate} \end{cor} \begin{proof} (1) follows from Proposition \ref{p: graph irred}, noting that for $T \subset V - \{r\}$, if $T$ and $V-T$ induce connected subgraphs of the graph $G$ corresponding to $L$, then $T$ is an interval in some $L_k$. (2) follows since the elements of the vertex basis for $L_k$ are irreducible and their pairing graph is connected. For item (3), write $(T-T') \cup (T'-T) = T_1 \cup T_2$ as a union of distant intervals. Then $[T-T']\pm[T'-T] = \epsilon_1 [T_1] + \epsilon_2 [T_2]$ for suitable signs $\epsilon_1,\epsilon_2 \in \{ \pm 1 \}$, and ${\langle} \epsilon_1 [T_1], \epsilon_2 [T_2] {\rangle} = 0$. For (4), we establish the contrapositive in two steps. \noindent ($\implies$) If an interval $T$ contains a pair of vertices $x_i,x_j$ of degree $\geq 3$, then it breaks into consecutive intervals $T = T_i \cup T_j$ with $x_i \in T_i$ and $x_j \in T_j$. It follows that $[T]$ is breakable, since $[T] = [T_i]+[T_j]$ with ${\langle} [T_i],[T_j] {\rangle} = -1$ and $d(T_i), d(T_j) \geq 3$. \noindent ($\impliedby$) If $[T] = y+z$ is breakable, then Proposition \ref{p: graph break} applies. 
Observe that case (2) does not hold since breakability entails $|z| \geq 3$, while a cut-edge in $G(V-T)$ separates it into $T_1 \cup T_2$ with $d(T_1) = 2$ and $r \in T_2$. Thus, case (1) holds, and it follows that $d(T_1),d(T_2) \geq 3$, so both $T_1$ and $T_2$ contain a vertex of degree $\geq 3$, which shows that $T$ contains at least two such. \end{proof} Next we turn to the question of when two linear lattices are isomorphic. Let $I$ denote the set of irreducible elements in $L$, and given $y \in I$, let \[I(y) = \{ z \in I \: | \; {\langle} y,z {\rangle} = -1, y + z \in I \}.\] To unpack the meaning of this definition, suppose that $y \in I$, $|y| \geq 3$, and write $y = \epsilon_y [T_y]$. If $z \in I(y)$ with $z = \epsilon_z [T_z]$, then either $\epsilon_y = \epsilon_z$ and $T_y \dagger T_z$, or else $|z|=2$, $\epsilon_y = - \epsilon_z$, and $T_z \prec T_y$. Now suppose that $z \in I(y)$ with $|z| = 3$. Choose elements $x_i \in T_y$ and $x_j \in T_z$ of norm $\geq 3$ so that the open interval $(x_i,x_j)$ contains no vertex of degree $\geq 3$. If $w \in I(y) \cap (-I(z))$ with $w = \epsilon_w [T_w]$, then $T_w \subset (x_i,x_j)$, and either $T_w \prec T_y$ and $\epsilon_w = - \epsilon_y$, or else $T_w \prec T_z$ and $\epsilon_w = \epsilon_z$. It follows that $|I(y) \cap (-I(z))| = |(x_i,x_j)| = |i-j|-1$. The following is the main result of \cite{gerstein}. \begin{prop}[Gerstein]\label{p: gerstein} If $\Lambda(p,q) \cong \Lambda(p',q')$, then $p = p'$, and $q = q'$ or $q q' \equiv 1 (\textup{mod } p)$. \end{prop} \begin{proof} Let $L$ denote a linear lattice with standard basis $S = \{ x_1,\dots,x_n \}$. The Proposition follows once we show that $L$ uniquely determines the sequence of norms ${\bf x} = (|x_1|,\dots,|x_n|)$ up to reversal, noting that if $p/q = [a_1,\dots,a_n]^-$ and $p/q' = [a_n,\dots,a_1]^-$, then $q q' \equiv 1 \pmod p$ (Lemma \ref{l: cont frac basics}(4)). Suppose that $I$ contains an element of norm $\geq 3$. In this case, select $y_1 \in I$ with minimal norm $\geq 3$ subject to the condition that there does not exist a pair of orthogonal elements in $I(y_1)$. It follows that $y_1 = \epsilon [T_1]$, where $T_1$ contains exactly one element $x_{j_1} \in S$ of norm $\geq 3$, and $j_1$ is the smallest or largest index of an element in $S$ with norm $\geq 3$. Inductively select $y_i \in I(y_{i-1})$ with minimal norm $\geq 3$ subject to the condition that ${\langle} y_i, y_j {\rangle} = 0$ for all $j < i$, until it is no longer possible to do so, terminating in some element $y_k$. It follows that $y_i = \epsilon [T_i]$ for all $i$, where $\epsilon \in \{ \pm 1 \}$ is independent of $i$; each $T_i$ contains a unique $x_{j_i} \in S$ of norm $\geq 3$; $T_i \dagger T_{i+1}$ for $i < k$; and each $x_j \in S$ of norm $\geq 3$ occurs as some $x_{j_i}$. Therefore, up to reversal, the (possibly empty) sequence $(|y_1|,\dots,|y_k|) = (|x_{j_1}|,\dots,|x_{j_k}|)$ coincides with ${\bf x}$ with every occurrence of 2 omitted. To recover ${\bf x}$ completely, assume for notational convenience that $j_1 < \cdots < j_k$. Set $n_i: = | I(y_i) \cap (-I(y_{i+1})) |$ for $i=1,\dots,k-1$, so that $n_i = j_{i+1}-j_i -1$. If $k \geq 2$, then set $n_0 = | I(y_1) - (-I(y_2))$, $n_k = |I(y_k) - (-I(y_{k-1}))|$, and observe that $n_0 = j_1-1$ and $n_k = n-j_k$. If $k = 1$, then decompose $I(y) = I_0 \cup I_1$, where ${\langle} z_i, z'_i {\rangle} \ne 0$ for all $z_i,z'_i \in I_i$, $i = 0,1$. In this case, set $n_i = |I_i|$, and observe that $\{n_0,n_1\} = \{ j_1 - 1 , n-j_1 \}$. 
Lastly, if $k = 0$, then set $n_0 = n$. Letting $2^{[t]}$ denote the sequence of 2's of length $t$, it follows that ${\bf x} = (2^{[n_0]},|y_1|,2^{[n_1]},\dots,2^{[n_{k-1}]},|y_k|,2^{[n_k]})$. Since the elements $y_1,\dots,y_k$ and the values $n_0,\dots,n_k$ depend solely on $L$ for their definition, it follows that ${\bf x}$ is determined uniquely up to reversal, and the Proposition follows. \end{proof} The following Definition and Lemma anticipate our discussion of the intersection graph in Subsection \ref{ss: int graph} (esp. Lemma \ref{l: cycle}). \begin{defin}\label{d: abut graph} Given a collection of intervals ${\mathcal T} =\{ T_1,\dots,T_k \}$ whose classes are linearly independent, define a graph \[ G({\mathcal T}) = ({\mathcal T}, {\mathcal E}), \quad {\mathcal E} = \{ (T_i,T_j) \; | \; T_i \text{ abuts } T_j \}.\] \end{defin} \begin{lem}\label{l: triangle} Given a cycle $C \subset G({\mathcal T})$, the intervals in $V(C)$ abut pairwise at a common end. That is, there exists an index $j$ such that each $T_i \in V(C)$ has left endpoint $x_{j+1}$ or right endpoint $x_j$. In particular, $V(C)$ induces a complete subgraph of $G({\mathcal T})$. \end{lem} \begin{proof} Relabeling as necessary, write $V(C) = \{T_1,\dots,T_k\}$, where $(T_i,T_{i+1}) \in E(C)$ for $i = 1,\dots, k$, subscripts $({\textup{mod} \;} k)$. We proceed by induction on the number of edges $n \geq k$ in the subgraph induced on $V(C)$. When $n = k$, $C$ is an induced cycle. In this case, if some of three of the intervals abut at a common end, then they span a cycle, $k=3$, and we are done. If not, then define a sign $\epsilon_i = \pm 1$ by the rule that $\epsilon_i = 1$ if and only if $T_i \dagger T_{i-1}$ and $T_i$ lies to the right of $T_{i-1}$, or if $T_i$ and $T_{i-1}$ share a common left endpoint. Fix a vertex $x_j$, suppose that $x_j \in T_i$ for some $i$, and choose the next index $l \pmod{k}$ for which $x_j \in T_l$. Observe, crucially, that $\epsilon_i = - \epsilon_l$. It follows that ${\langle} x_j, \sum_{i=1}^k \epsilon_i [T_i] {\rangle} = 0$. As $x_j$ was arbitrary, we obtain the linear dependence $\sum_{i=1}^k \epsilon_i [T_i] = 0$, a contradiction. It follows that if $n = k$, then $k=3$ and the three intervals abut at a common end. Now suppose that $n > k$. Thus, there exists an edge $(T_i,T_j) \in E(C)$ for some pair of non-consecutive indices $i, j \pmod{k}$. Split $C$ into two cycles $C_1$ and $C_2$ along $(T_i,T_j)$. By induction, every interval in $V(C_1)$ and $V(C_2)$ abuts at the same end as $T_i$ and $T_j$, so the same follows at once for $V(C)$. \end{proof} \subsection{Changemaker lattices.}\label{ss: changemaker} Fix an orthonormal basis $\{e_0,\dots,e_n\}$ for ${\mathbb Z}^{n+1}$. \begin{defin}\label{d: changemaker lattice} A {\em changemaker lattice} is any lattice isomorphic to $(\sigma)^\perp \subset {\mathbb Z}^{n+1}$ for some changemaker $\sigma$ (Definition \ref{d: change}). \end{defin} \begin{lem}\label{l: change disc} Suppose that $L = (\sigma)^\perp \subset {\mathbb Z}^{n+1}$ is a changemaker lattice. Then ${\textup{disc}}(L) = |\sigma|$. \end{lem} \begin{proof} (cf. \cite[proof of Lemma II.1.6]{milnor+husemoller}) Consider the map \[ \varphi: {\mathbb Z}^{n+1} \to {\mathbb Z} / | \sigma | {\mathbb Z}, \quad \varphi(x) = {\langle} x, \sigma {\rangle} \pmod{| \sigma |}.\] As $\sigma_0 = 1$, the map $\varphi$ is onto, so $K : = \ker(\varphi)$ has discriminant $[{\mathbb Z}^{n+1}:K]^2 = | \sigma |^2$. 
On the other hand, $K = L \oplus (\sigma)$, so ${\textup{disc}}(L) = {\textup{disc}}(K) / | \sigma | = | \sigma |$. \end{proof} \begin{proof}[Proof of Theorem \ref{t: main technical}] We invoke \cite[Theorem 3.3]{greene:cabling} with $X = X(p,q)$. It follows that $\Lambda(p,q)$ embeds as a full-rank sublattice of $(\sigma)^\perp \subset {\mathbb Z}^{n+1}$, where $|\sigma| = p$ and $\sigma$ is a changemaker according to the convention of \cite{greene:cabling}. It stands to verify that $\sigma_0 = 1$, and furthermore that $\Lambda(p,q)$ actually equals $(\sigma)^\perp$ on the nose. First, if $\sigma_0 = 0$, then $\Lambda(p,q)$ would have a direct summand isomorphic to $(e_0) \cong {\mathbb Z}$, in contradiction to its indecomposability. Hence $\sigma_0 = 1$. Second, ${\textup{disc}}(\Lambda(p,q)) = p = |\sigma| = {\textup{disc}}((\sigma)^\perp)$, using Lemma \ref{l: change disc} at the last step. Since ${\mathrm{rk}} \; \Lambda(p,q) = {\mathrm{rk}} \; (\sigma)^\perp$, the two lattices coincide. \end{proof} We construct a basis for a changemaker lattice $L$ as follows. Fix an index $1 \leq j \leq n$, and suppose that $\sigma_j = 1 + \sum_{i=0}^{j-1} \sigma_i$. In this case, set $v_j = -e_j + 2e_0 + \sum_{i=1}^{j-1} e_i \in L$. Otherwise, $\sigma_j \leq \sum_{i=0}^{j-1} \sigma_i$. It follows that there exists a subset $A \subset \{ 0,\dots,j-1\}$ such that $\sigma_j = \sum_{i \in A} \sigma_i$. Amongst all such subsets, choose the one maximal with respect to the total order $<$ on subsets of $\{ 0, 1, \dots, n \}$ defined by declaring $A' < A$ if the largest element in $(A \cup A') \setminus (A \cap A')$ lies in $A$; equivalently, $\sum_{i \in A'} 2^i < \sum_{i \in A} 2^i$. Then set $v_j = -e_j+\sum_{i \in A} e_i \in L$. If $v = -e_j + \sum_{i \in A'} e_i$ for some $A' < A$, then write $v \ll v_j$. The vectors $v_1,\dots,v_n$ are clearly linearly independent. The fact that they span $L$ is straightforward to verify, too: given $w \in L$, add suitable multiples of $v_n,\dots,v_1$ to $w$ in turn to produce a sequence of vectors with {\em support} decreasing to $\varnothing$. Recall that the support of a vector $v \in {\mathbb Z}^{n+1}$ is the set ${\textup{supp}}(v) = \{ i \; | \; {\langle} v, e_i {\rangle} \ne 0 \}$. For future reference, we also define \[ {\textup{supp}}^+(v) = \{ i \; | \; {\langle} v, e_i {\rangle} > 0 \} \quad \text{and} \quad {\textup{supp}}^-(v) = \{ i \; | \; {\langle} v, e_i {\rangle} < 0 \}. \] \begin{defin}\label{d: tight, gappy, just right} The set $S = \{v_1,\dots,v_n\}$ constitutes the {\em standard basis} for $L$. A vector $v_j \in S$ is \begin{itemize} \item {\em tight}, if $v_j = -e_j + 2e_0 + \sum_{i=1}^{j-1} e_i$; \smallskip \item {\em gappy}, if $v_j = -e_j+\sum_{i \in A} e_i$ and $A$ does not consist of consecutive integers; and \smallskip \item {\em just right}, if $v_j = -e_j+\sum_{i \in A} e_i$ and $A$ consists of consecutive integers. \end{itemize} A {\em gappy index} for a gappy vector $v_j$ is an index $k \in A$ such that $k+1 \notin A \cup \{j\}$. \end{defin} \noindent Thus, every element of $S$ belongs to exactly one of these three types. We record a few basic observations before proceeding to some more substantial facts about changemaker lattices. Write $v_{jk} = {\langle} v_j, e_k {\rangle}$. 
\begin{lem}\label{l: basic} The following hold: \begin{enumerate} \item $v_{jj} = -1$ for all $j$, and $v_{j,j-1}=1$ unless $j=1$ and $v_{1,0} = 2$; \item for any pair $v_i, v_j$, we have ${\langle} v_i, v_j {\rangle} \geq -1$; \item if $k$ is a gappy index for some $v_j$, then $|v_{k+1}| \geq 3$; \item given $z = \sum_{i=0}^n z_i e_i \in L$ with $|z| \geq 3$, ${\textup{supp}}^-(z)=\{j\}$, and $z_j = -1$, it follows that $j = \max({\textup{supp}}(z))$. \end{enumerate} \end{lem} \begin{proof} (1) is clear, using the maximality of $A$ for the second part. (2) is also clear. (3) follows from maximality, as otherwise $v_j \ll v_j - v_{k+1}$. For (4), suppose not, and select $k > j$ for which $z_k > 0$. We obtain the contradiction $0 = {\langle} z, \sigma {\rangle} > \sigma_k - \sigma_j \geq 0$, where the inequality is strict because $|z| \geq 3$. \end{proof} \begin{lem}\label{l: change irred} The standard basis elements of a changemaker lattice are irreducible. \end{lem} \begin{proof} Choose a standard basis element $v_j \in S$ and suppose that $v_j = x +y$ for $x,y \in L$ with $\langle x,y \rangle \geq 0$. In order to prove that $v_j$ is irreducible, it stands to show that one of $x$ and $y$ equals $0$. Write $x = \sum_{i=0}^n x_i e_i$ and $y = \sum_{i=0}^n y_i e_i$. {\em Case 1.} $v_j$ is not tight. In this case, $|v_{ji}| \leq 1$ for all $i$. We claim that $x_i y_i = 0$ for all $i$. For suppose not. Since $\langle x, y \rangle \geq 0$ there exists an index $i$ so that $x_i y_i > 0$. Then $|v_{ji}| = |x_i + y_i| \geq 2$, a contradiction. Since all but one coordinate of $v_j$ is non-negative, it follows that one of $x$ and $y$ has all its coordinates non-negative. But the only such element in $L$ is $0$. It follows that $v_j$ is irreducible. Notice that this same argument applies to any vector of the form $-e_j + \sum_{i \in A} e_i$. {\em Case 2.} $v_j$ is tight. We repeat the previous argument up to the point of locating an index $i$ such that $x_i y_i > 0$. Now, however, we conclude that $x_0 = y_0 =1$, and $x_i y_i \leq 0$ for all other indices $i$. In particular, there is at most one index $k$ for which $x_k y_k = -1$, and $x_i y_i =0$ for $i \ne 0,k$. If there were no such $k$, then we conclude as above that one of $x$ and $y$ has all its coordinates non-negative, and $v_j$ is irreducible as before. Otherwise, we may assume that $x_k = 1$. Then ${\textup{supp}}^-(y) = \{k\}$ and ${\textup{supp}}^-(x) = \{j\}$. Since $x_i + y_i = v_{ji} \ne 0$ for all $i \leq j$, it follows that $k > j$. But then $|x| \geq 3$ and $k = \max({\textup{supp}}(x)) > j$, in contradiction to Lemma \ref{l: basic}(4). Again it follows that $v_j$ is irreducible. \end{proof} We collect a few more useful cases of irreducibility (cf. Lemma \ref{l: tight}). \begin{lem}\label{l: more irred} Suppose that $v_t \in L$ is tight. \begin{enumerate} \item If $v_j$ is tight, $j \ne t$, then $v_j - v_t$ is irreducible. \item If $v_j = -e_j+e_{j-1}+e_t$, $j > t$, then $v_j + v_t$ is irreducible. \item If $v_{t+1} = -e_{t+1}+e_t+\cdots+e_0$, then $v_{t+1} - v_t$ is irreducible. \end{enumerate} \end{lem} \begin{proof} In each case, we assume that the vector in question is expressed as a sum of non-zero vectors $x$ and $y$ with ${\langle} x,y {\rangle} \geq 0$. Recall that both $x$ and $y$ have entries of both signs. (1) Assume without loss of generality that $j > t$. Thus, $v_j - v_t = -e_j + 2e_t + \sum_{i=t+1}^{j-1} e_i$, where the summation could be empty. 
As in the proof of Lemma \ref{l: change irred}, it quickly follows that $x_t = y_t = 1$, $x_k y_k = -1$ for some $k$, and otherwise $x_i y_i = 0$. Without loss of generality, $x_k = 1$. Thus, ${\textup{supp}}^-(x) = \{j\}$ and ${\textup{supp}}^-(y) = \{ k \}$. By Lemma \ref{l: basic}(4), it follows that $j = \max({\textup{supp}}(x)) > k$. As $x_k+y_k = 0$, it follows that $k < t$. Now $0 = {\langle} y, \sigma {\rangle} \geq \sigma_t - \sigma_k > 0$, a contradiction. Therefore, $v_j - v_t$ is irreducible. (2) It follows that $x_0 = y_0 = 1$, $x_k = -y_k = \pm 1$ for some value $k \geq t$, and otherwise $x_i y_i = 0$. Without loss of generality, say $x_k = 1$. Thus, ${\textup{supp}}^-(x) = \{j\}$ and ${\textup{supp}}^-(y) = \{ k \}$. By Lemma \ref{l: basic}(4), $j = \max ({\textup{supp}}(x))$. In particular, it follows that $k < j$. Another application of Lemma \ref{l: basic}(4) implies that $k = \max({\textup{supp}}(y))$. It follows that $y = -e_k + \sum_{i \in A} e_i$ for some $A \subset \{0,\dots,t-1\}$. But then $0 = {\langle} y, \sigma {\rangle} = -\sigma_k + \sum_{i \in A} \sigma_i < -\sigma_k + 1 + \sum_{i=0}^{t-1} \sigma_i = -\sigma_k + \sigma_t \leq 0$, a contradiction. Therefore, $v_j + v_t$ is irreducible. (3) We have $v_{t+1} - v_t = -e_{t+1} + 2e_t - e_0$. It follows that $x_t = y_t = 1$. If $x_i y_i = 0$ for every other index $i$, then $\{ x,y \} = \{ -e_{t+1} + e_t, e_t - e_0 \}$, but ${\langle} e_t - e_0, \sigma {\rangle} = \sigma_t - \sigma_0 > 0$, a contradiction. It follows that there is a unique index $k$ for which $x_k y_k = -1$, and otherwise $x_i y_i = 0$. Without loss of generality, say $x_k = 1$. Hence $y = e_t - \sum_{i \in A} e_i$ for some $A \subset \{0,k,t+1\}$. But $A$ cannot contain an index $> t$, for then $0 = {\langle} y, \sigma {\rangle} \leq \sigma_t - \sigma_{t+1} < 0$, nor can it just contain indices $< t$, for then $0 = {\langle} y, \sigma {\rangle} \geq \sigma_t - \sum_{i=0}^{t-1} \sigma_i = 1$. Therefore, $v_{t+1} - v_t$ is irreducible. \end{proof} \begin{lem}\label{l: change unbreakable} If $v_j \in S$ is not tight, then it is unbreakable. \end{lem} \begin{proof} Suppose that $v_j$ were breakable and choose $x$ and $y$ accordingly. From the conditions that $\langle x, y \rangle = -1$ and $v_{j0 } \ne 2$ it follows that $x_k y_k = -1$ for a single index $k$, and otherwise $x_i y_i = 0$. Without loss of generality, say $x_k = -1$. Then ${\textup{supp}}^-(x) = \{ k \}$ and ${\textup{supp}}^-(y) = \{ j \}$. By Lemma \ref{l: basic}(4), it follows that $k = \max ({\textup{supp}}(x))$ and $j = \max( {\textup{supp}}(y))$. In particular, it follows that $k < j$, and that $y_i = v_{ji}$ for all $i > j$. On the other hand, $j \in {\textup{supp}}^+(y) - {\textup{supp}}^+(v_i)$. It follows that $v_j \ll y$, a contradiction. Hence $v_i$ is unbreakable, as claimed. \end{proof} \section{Comparing linear lattices and changemaker lattices}\label{s: prep work} In this section we collect some preparatory results concerning when a changemaker lattice is isomorphic to a sum of one or more linear lattices. Thus, for the entirety of this section, let $L$ denote a changemaker lattice with standard basis $S = \{ v_1, \dots, v_n \}$, and suppose that $L$ is isomorphic to a linear lattice or a sum thereof. By Corollary \ref{c: linear irred} and Lemma \ref{l: change irred}, it follows that $v_i = \epsilon_i [T_i]$ for some sign $\epsilon_i = \pm 1$ and interval $T_i$. Let ${\mathcal T} = \{T_1,\dots,T_n\}$. 
If $v_i$ is not tight, then Corollary \ref{c: linear irred} and Lemma \ref{l: change unbreakable} imply that $T_i$ contains at most one vertex of degree $\geq 3$. If $[T_i]$ is unbreakable and $d(T_i) \geq 3$, then let $z_i$ denote its unique vertex of degree $\geq 3$. \subsection{Standard basis elements and intervals.} Tight vectors, especially breakable ones, play an involved role in the analysis (Section \ref{s: tight}). We begin with some basic observations about them. \begin{lem}\label{l: tight 2} Suppose that $v_t$ is tight, $j \ne t$, and $|v_j| \geq 3$. Then ${\langle} v_t, v_j {\rangle}$ equals \begin{enumerate} \item $|v_j| - 1$, iff $T_j \prec T_t$; \item $|v_j| -2$, iff $z_j \in T_t$ and $T_j \pitchfork T_t$, or $|v_j| = 3, T_j \dagger T_t$, and $\epsilon_j \ne \epsilon_t$; \item $\epsilon \in \{\pm 1\}$, iff $T_j \dagger T_t$ and $\epsilon_j \epsilon_t \ne\epsilon$, or $|v_j| = 3$, $z_j \in T_t$, $T_j \pitchfork T_t$, and $\epsilon_j \epsilon_t = \epsilon$; or \item $0$, iff $z_j \notin T_t$ and either $T_j$ and $T_t$ are distant or $T_j \pitchfork T_t$. \end{enumerate} If $|v_j| = 2$, then $|{\langle} v_t, v_j {\rangle}| \leq 1$, with equality iff $T_t$ and $T_j$ abut. \end{lem} \begin{proof}[Proof sketch.] Observe that $-1 \leq {\langle} v_i, v_j {\rangle} \leq |v_j|-1$ for any pair of distinct $i,j$. Assuming that $|v_j| \geq 3$, the result follows by using the fact that $T_j$ is unbreakable, and conditioning on how $T_j$ meets $T_t$ and whether or not $d(T_j) > 3$. \end{proof} \begin{lem}\label{l: tight} Suppose that $v_t \in S$ is tight. \begin{enumerate} \item No other standard basis vector is tight. \item If $v_j = -e_j+e_{j-1}+e_t$, $j > t+1$, then $T_t \dagger T_j$. \item If $v_{t+1} = -e_{t+1}+e_t+\cdots+e_0$, then $t=1$ and $T_1 \dagger T_2$. \end{enumerate} \end{lem} \begin{proof} We apply each case of Lemma \ref{l: more irred} in turn. (1) Suppose that there were another index $j$ for which $v_j$ is tight. Without loss of generality, we may assume that $j> t$. Then $\langle v_t, v_j \rangle = | v_t | - 2 \geq 3.$ It follows that $\epsilon_j = \epsilon_t$ and $T_j \pitchfork T_t$. Thus, $[T_j - T_t] - [T_t - T_j]$ is reducible, but it also equals $\epsilon_j(v_j - v_t)$, which is irreducible according to Lemma \ref{l: more irred}(1). This yields the desired contradiction. (2) We have ${\langle} v_t, v_j {\rangle} = -1$ and $|v_j | = 3$, so either the desired conclusion holds, or else $z_j \in T_t$, $T_t \pitchfork T_j$, and $\epsilon_t \epsilon_j = -1$. If the latter possibility held, then $[T_j - T_t] - [T_t - T_j]$ is reducible, but it also equals $\epsilon_j v_j - \epsilon_t v_t = \epsilon_j (v_j + v_t)$, which is irreducible according to Lemma \ref{l: more irred}(2). It follows that $T_t \dagger T_j$. (3) We have ${\langle} v_t, v_{t+1} {\rangle} = |v_{t+1}|-2$, so either the desired conclusion holds, or else $z_{t+1} \in T_t$, $T_t \pitchfork T_{t+1}$, and $\epsilon_t = \epsilon_{t+1}$. If the latter possibility held, then again $[T_{t+1} - T_t]-[T_t-T_{t+1}]$ is reducible, but it also equals $\epsilon_t (v_{t+1}-v_t)$, which is irreducible according to Lemma \ref{l: more irred}(3). It follows that $t = 1$ and $T_1 \dagger T_2$. \end{proof} Lemmas \ref{l: change unbreakable} and \ref{l: tight}(1) immediately imply the following result. \begin{cor}\label{c: one breakable} A standard basis $S$ contains at most one unbreakable vector, and it is tight. 
\qed \end{cor} The following important Lemma provides essential information about when two standard basis elements can pair non-trivially together: unless one is breakable or has norm 2, then they correspond to consecutive intervals. \begin{lem}\label{l: cool} Given a pair of unbreakable vectors $v_i, v_j \in S$ with $| v_i |, | v_j | \geq 3$, we have $| {\langle} v_i, v_j {\rangle} | \leq 1$, with equality if and only if $T_i \dagger T_j$ and $\epsilon_i \epsilon_j = - \langle v_i, v_j \rangle$. \end{lem} \begin{proof} The Lemma follows easily once we establish that ${\langle} [T_i], [T_j] {\rangle} \leq 0$. Thus, we assume that ${\langle} [T_i], [T_j] {\rangle} \geq 1$ and derive a contradiction. Since these classes are unbreakable and they pair positively, it follows that $z_i = z_j$. In particular, $d:= |v_i| = d(T_i) = d(T_j) =|v_j| $. Now, either $T_i \pitchfork T_j$, in which case ${\langle} [T_i], [T_j] {\rangle} = d-2$, or else $T_i$ and $T_j$ share a common endpoint, in which case ${\langle} [T_i], [T_j] {\rangle} = d-1$. Let us first treat the case in which $i = t$ and $v_t$ is tight. By Corollary \ref{c: one breakable}, it follows that $v_j$ is not tight. Thus, $d = t+4$, and ${\textup{supp}}(v_j)$ contains at least three values $> t$. If $v_{jt}=1$, then ${\langle} v_t, v_j {\rangle} \leq d-3$, while if $v_{jt} = 0$, then ${\textup{supp}}(v_j)$ contains at least four values $>t$, and again ${\langle} v_j, v_t {\rangle} \leq d-3$. As ${\langle} v_t, v_j {\rangle} \geq -1$ and $d \geq 5$, we have $| {\langle} v_t, v_j {\rangle} | \leq d-3$, whereas $|{\langle} [T_t], [T_j] {\rangle}| \geq d-2$. This yields the desired contradiction to the assumption that ${\langle} [T_t], [T_j] {\rangle} \geq 1$ in this case. Thus, we may assume that neither $v_i$ nor $v_j$ is tight, and without loss of generality that $j > i$. Suppose that $\epsilon: = \epsilon_j = \epsilon_i$. Thus $\langle v_i, v_j \rangle = \langle [T_i], [T_j] \rangle \geq 1$. If $v_{ji} = 1$, then $\langle v_j, v_i \rangle \leq d -2$, with equality possible iff $v_{jk} = 1$ whenever $v_{ik} = 1$. But then $| v_j | > | v_i |$, a contradiction. Hence $v_{ji} = 0$. If $\langle v_j, v_i \rangle = d-1$, then again $v_{jk} = 1$ whenever $v_{ik} = 1$. But then $v_j - v_i = -e_j + e_i \gg v_j$, a contradiction. Still assuming that $\epsilon_j = \epsilon_i$, we are left to consider the case that $v_{ji} = 0$ and $\langle v_j, v_i \rangle = d - 2$, and therefore $T_i \pitchfork T_j$. In this case, ${\textup{supp}}(v_j) - {\textup{supp}}(v_i) = \{j,k\}$ and ${\textup{supp}}(v_i) - {\textup{supp}}(v_j) = \{i,l\}$ for some indices $k, l$. If $\sigma_j = \sigma_k$, then $v_j = -e_j +e_k$, in contradiction to $|v_j| \geq 3$. If $\sigma_j = \sigma_i$, then either $i < k$, in which case we derive the contradiction $v_j = -e_j + e_k$ again, or else $k < i$, in which case we derive the contradiction $-e_j + e_i \gg v_j$. Therefore, $\sigma_j \ne \sigma_i, \sigma_k$. It easily follows that $v_j - v_i = -e_j + e_k + e_i - e_l$ is irreducible. On the other hand, $v_j - v_i = \epsilon ( [T_j] - [T_i] ) = \epsilon[T_j - T_i] - \epsilon[T_i - T_j]$ is reducible, a contradiction. It follows that $\epsilon: = \epsilon_j = - \epsilon_i$. Hence $\langle [T_i], [T_j] \rangle = - \langle v_i, v_j \rangle \leq 1$. In case of equality, we have $d = 3$ and $T_i \pitchfork T_j$. So on the one hand, $v_j = -e_j + e_i + e_p$ and $v_i = -e_i + e_q + e_s$ for distinct indices $i,j,p,q,s$, whence $v_j + v_i = -e_j + e_p + e_q + e_s$ is irreducible (cf. 
the proof of Lemma \ref{l: change irred}, Case 1). On the other hand, it equals $\epsilon([T_j]-[T_i]) = \epsilon[T_j - T_i] - \epsilon[T_i - T_j]$, which is reducible, a contradiction. In total, ${\langle} [T_i], [T_j] {\rangle} \leq 0$ in every case, and the Lemma follows. \end{proof} \begin{cor}\label{c: distinct z_i} If $T_i$ and $T_j$ are distinct unbreakable intervals with $d(T_i),d(T_j) \geq 3$, then $z_i \ne z_j$. \qed \end{cor} \subsection{The intersection graph.}\label{ss: int graph} This subsection defines the key notion of the intersection graph, and establishes the most important properties about it that are necessary to carry out the combinatorial analysis of Sections \ref{s: decomposable} - \ref{s: tight}. Recall that from the standard basis $S$ we obtain a collection of intervals ${\mathcal T}$. Let $\overline{S} \subset S$ denote the subset of unbreakable elements of $S$; thus, $\overline{S} = S - v_t$ if $S$ contains a breakable element $v_t$, and $\overline{S} = S$ otherwise. \begin{defin}[Compare Definition \ref{d: abut graph}]\label{d: int graph} The {\em intersection graph} is the graph \[ G(S) = (S,E), \quad E = \{ (v_i,v_j) \; | \; T_i \text{ abuts } T_j \}.\] Write $G(S')$ to denote the subgraph induced by a subset $S' \subset S$. If $(v_i, v_j) \in E$ with $i < j$, then $v_i$ is a {\em smaller neighbor} of $v_j$. \end{defin} Observe that $G(S)$ is a subgraph of the pairing graph $\widehat{G}(S)$ (Subsection \ref{ss: generalities}), and Lemma \ref{l: cool} implies that they coincide unless $S$ contains a breakable element $v_t$. Furthermore, if $v_t \in S$ is breakable, then Lemma \ref{l: tight 2} implies that $(v_t,v_j) \in E$ iff ${\langle} v_t, v_j {\rangle} \in \{|v_j|-1,1,-1\}$, except in the special case that $|v_j| = 3$, $z_j \in T_t$, $T_j \pitchfork T_t$, and $\epsilon_j \epsilon_t = {\langle} v_t, v_j {\rangle}$. Therefore, $G(S)$ is determined by the pairings of vectors in $S$ except in this special case, which fortunately arises just once in our analysis (Proposition \ref{p: breakable 3}). We now collect several fundamental properties about the intersection graph $G(S)$. \begin{defin}\label{d: claw} The {\em claw} $(i;j,k,l)$ is the graph $Y = (V,E)$ with \[ V = \{i,j,k,l\} \quad \text{and} \quad E = \{ (i,j), (i,k), (i,l) \}.\] A graph $G$ is {\em claw-free} if it does not contain an induced subgraph isomorphic to $Y$. \end{defin} \noindent Equivalently, if three vertices in $G$ neighbor a fourth, then some two of them neighbor. \begin{lem}\label{l: no claw} $G(S)$ is claw-free. \end{lem} \begin{proof} If $T_i$ abuts three intervals $T_j, T_k, T_l$, then it abuts some two at the same end, and then those two abut. \end{proof} \begin{defin}\label{d: heavy} A {\em heavy triple} $(v_i,v_j,v_k)$ consists of distinct vectors of norm $\geq 3$ contained in the same component of $G(\overline{S})$, none of which separates the other two in $G(\overline{S})$. In particular, if $(v_i,v_j,v_k)$ spans a triangle, then it spans a {\em heavy triangle}. \end{defin} \begin{lem}\label{l: no triangle} $G(\overline{S})$ does not contain a heavy triple. \end{lem} \begin{proof} Since $v_i,v_j,v_k$ belong to the same component of $G(\overline{S})$, the intervals $T_i,T_j,T_k$ are subsets of some path $P \subset G - r$. Assume without loss of generality that $z_i$ lies between $z_j$ and $z_k$ on $P$. Every unbreakable interval in $P$ that avoids $z_i$ lies to one side of it, and $T_j$ and $T_k$ lie to opposite sides by assumption. 
As each element in $\overline{S}$ is unbreakable and $T_i$ is the unique interval containing $z_i$, it follows that $v_j$ and $v_k$ lie in separate components of $G(\overline{S}) -v_i$. \end{proof} \begin{lem}\label{l: cycle} Every cycle in $G(\overline{S})$ has length $3$ and contains a unique vector $v_i$ of norm $2$. Furthermore, if $(v_i,v_j,v_k)$ is a cycle with $i < j < k$ and $v_k$ is not gappy, then $|v_l| = 2$ for all $l \leq i$ if $\overline{S} = S$, or for all $t < l \leq i$ otherwise. \end{lem} \begin{proof} Choose a cycle $C \subset G(\overline{S})$. By Lemma \ref{l: triangle}, it follows that $V(C)$ induces a complete subgraph. Thus, $V(C)$ cannot contain three vectors of norm $\geq 3$, for then they would span a heavy triangle, in contradiction to Lemma \ref{l: no triangle}. Note also that the vectors of norm $2$ in $S$ induce a union of paths in $G(S)$. Therefore, $V(C)$ cannot contain more than two such vectors. If it did contain two, then they must take the form $v_{i+1} = -e_{i+1}+e_i$ and $v_i = -e_i + e_{i-1}$. Choose any other $v_j \in V(C)$. Then either $v_{j,i} = 0$ and $v_{j,i\pm1} = 0$, or else $v_{j,i} = 0$ and $v_{j,i\pm1} = 1$. In the first case, $v_j \ll v_j - v_{i+1}$, and in the second, $v_j \ll v_j - v_i$, both of which entail contradictions. It follows that $V(C)$ contains at most two vectors of norm $\geq 3$ and at most one vector of norm $2$, hence exactly that many of each. This establishes the first part of the Lemma. For the second part, it follows at once that $\min({\textup{supp}}(v_k)) = i$ and $|v_i|=2$. If $|v_l| \geq 3$ for some largest value $l < i$ and $l \ne t$, then $(v_l,v_{l+1},\dots,v_i)$ induces a path in $G(\overline{S})$, and then $(v_l,v_j,v_k)$ forms a heavy triple. The second part now follows as well. \end{proof} \begin{cor}\label{c: cycle} If $C \subset S$ spans a cycle in $G(S)$, then it induces a complete subgraph and $|V(C)| \leq 4$, with equality iff $C$ contains a breakable vector $v_t$. \qed \end{cor} \begin{defin}\label{d: triangle sign} If $(v_i,v_j,v_k)$ spans a triangle in $G(S)$, then it is {\em positive} or {\em negative} according to the sign of ${\langle} v_i,v_j {\rangle} \cdot {\langle} v_j,v_k {\rangle} \cdot {\langle} v_k,v_i {\rangle}$. \end{defin} \begin{lem}\label{l: signs} If $(v_i,v_j,v_k)$ spans a triangle in $G(S)$ and some pair of $T_i, T_j, T_k$ are consecutive, then the triangle is positive. \end{lem} \begin{proof} Observe that ${\langle} v_i,v_j {\rangle} \cdot {\langle} v_j,v_k {\rangle} \cdot {\langle} v_k,v_i {\rangle} = (\epsilon_i \epsilon_j \epsilon_k)^2 \cdot {\langle} [T_i],[T_j] {\rangle} \cdot {\langle} [T_j],[T_k] {\rangle} \cdot {\langle} [T_k],[T_i] {\rangle}$. Two pairs of $T_i,T_j,T_k$ are consecutive and the other pair shares a common endpoint, so the right-hand side of this equation is positive. \end{proof} Most of the case analysis to follow in Sections \ref{s: decomposable} - \ref{s: tight} involves arguing that elements of $S$ must take a specific form, for otherwise we would obtain a contradiction to one of the preceding Lemmas. In such cases, we typically just state something to the effect of ``$(v_i,v_j,v_k)$ forms a negative triple" without the obvious conclusion ``a contradiction", to spare the use of this phrase several dozen times. We conclude with one last basic observation. \begin{lem}\label{l: index s} Suppose that $v_s \in S$ has norm $\geq 3$ with $s$ chosen smallest, and that it is not tight. Then $v_s$ is just right, and $|v_s| \in \{s, s+1\}$. 
\end{lem} \begin{proof} Recall that if $v_g$ is gappy, then $|v_{k+1}| \geq 3$ for a gappy index $k$. By minimality of $s$, it follows that $v_s$ is just right. If $|v_s| < s$, then $v_s = -e_s+e_{s-1}+\cdots + e_k$ for some $2 \leq k \leq s-2$, and $(v_k;v_{k-1},v_{k+1},v_s)$ induces a claw, a contradiction. \end{proof} \section{A decomposable lattice}\label{s: decomposable} The goal of this section is to classify the changemaker lattices isomorphic to a sum of more than one linear lattice (Proposition \ref{p: decomposable structure}). We begin with a basic result. \begin{lem}\label{l: two summands} A changemaker lattice has at most two indecomposable summands. If it has two indecomposable summands, then there exists an index $s > 1$ for which $v_s = -e_s + \sum_{i=0}^{s-1} e_i$, $|v_i| = 2$ for all $1 \leq i < s$, and $v_s$ and $v_1$ belong to separate summands. \end{lem} \begin{proof} For the first statement, it suffices to show that $\widehat{G}(S)$ has at most two connected components (cf. Subsection \ref{ss: generalities}). Thus, suppose that $\widehat{G}(S)$ has more than one component, fix a component $C$ that does not contain $v_1$, and choose $s > 1$ smallest such that $v_s \in V(C)$. Thus, $v_s \not\sim v_i$ for all $1 \leq i < s$. Let $k = \min({\textup{supp}}(v_s))$. Then $k = 0$, since otherwise $v_s \sim v_k$. Furthermore, $v_s$ is not gappy, for if $l$ is a gappy index, then $v_s \sim v_{l+1}$. Therefore, $v_s = -e_s + \sum_{i=0}^{s-1} e_i$. If $|v_i| \geq 3$ for some $i < s$, then $v_s \sim v_i$. It follows that $|v_i| = 2$ for all $i < s$. As $s$ is uniquely determined, it follows that $C$ is as well, so $\widehat{G}(S)$ contains exactly two components. The statement of the Lemma now follows. \end{proof} For the remainder of the section, suppose that $L$ is a changemaker lattice isomorphic to a sum of two linear lattices. \begin{lem}\label{l: just right} All elements of $S$ are just right. \end{lem} \noindent Thus, $G(S) = \widehat{G}(S)$. \begin{proof} Suppose that $v_t \in S$ were tight. Then $\langle v_t, v_1 \rangle = 1$ and $\langle v_t, v_s \rangle \geq 1$ would imply that $v_1$ and $v_s$ belong to the same component of $\widehat{G}(S)$, a contradiction. Next, suppose that $v_g \in S$ were gappy with $g$ chosen minimal. Note that $| v_g | \geq 3$. Let $k$ denote the minimal gappy index for $v_g$. Since $v_{k+1}$ is not gappy, it follows that $v_{k+1,k-1} \ne 0$. Since $\langle v_g, v_{k+1} \rangle \leq 1$ by Lemma \ref{l: cool}, it follows that $v_{g,k-1} = 0$, and now minimality of $k$ implies that $k = \min({\textup{supp}}(v_g))$. This implies that $v_g \sim v_k$. We cannot have $v_{k+1} \sim v_k$, since this would force $| v_k | \geq 3$, and then $(v_k,v_{k+1},v_j)$ forms a heavy triangle, in contradiction to Lemma \ref{l: no triangle}. Hence $v_k \not\sim v_{k+1}$. As $G(S)$ does not contain a cycle of length $> 3$, it follows that $v_k$ and $v_{k+1}$ belong to separate components of $G(S_{g-1})$. Since $G(S_{g-1})$ has at most two components, it follows that $G(S_g)$ is connected, hence $G(S)$ is as well, a contradiction. \end{proof} \begin{lem}\label{l: v_m of degree 2} Suppose that $v_m$ has multiple smaller neighbors. Then $m > s+1$, and \begin{itemize} \item $v_m = -e_m + e_{m-1} + \cdots + e_{s-1}$, \item $v_{s+1} = -e_{s+1}+e_s+e_{s-1}$, \item $v_s = -e_s + e_{s-1}+ \cdots + e_0$, and \item $|v_k| = 2$ for all other $k < m$. \end{itemize} \end{lem} \begin{proof} Suppose that $v_m \sim v_i, v_j$ with $i < j < m$. 
As in the proof of Lemma \ref{l: just right}, the vectors $v_i$ and $v_j$ cannot belong to separate components of $G(S_{m-1})$, for then $G(S)$ would be connected. Furthermore, $v_i \sim v_j$, since otherwise $G(S_m)$ would contain a cycle of length $> 3$. By Lemma \ref{l: cycle}, it follows that $|v_l| = 2$ for all $l \leq i$. Hence $s \geq i+1$. From $v_j \sim v_i$, it follows that $j \geq s+1$ and $v_j = -e_j + e_{j-1} + \cdots + e_i$. As ${\langle} v_j, v_m {\rangle} \leq 1$, it follows that $j = i+2$. Thus, $s = i+1$. As $i$ and $j$ are uniquely determined, it follows that $v_m \not\sim v_k$ for all $s+2 < k < m$, so $|v_k| = 2$ for all such $k$. \end{proof} The following definition is essential to describe the way in which we build families of standard bases. The terminology borrows from \cite[Definition 3.4]{lisca:lens1}, although its meaning differs somewhat. \begin{defin}\label{d: expansion} We call $S_m$ an {\em expansion} of $S_{m-1}$ if $v_m = -e_m+ e_{m-1} + \cdots + e_k$ for some $k$, $|v_i| = 2$ for all $k+ 1 < i < m$, and $| v_{k+1} | \geq 3$ in case $m > k+1$. \end{defin} \begin{lem}\label{l: v_m of degree 1} Suppose that $v_m$ has a single smaller neighbor. Then either \begin{enumerate} \item $s = 2$, $|v_k|=2$ for $s < k < m$, and $v_m = -e_m + e_{m-1} + \cdots + e_0$; \item $|v_k| = 2$ for $s < k < m$ and $v_m = -e_m + e_{m-1} + \cdots + e_s$; or \item $S_m$ is an expansion of $S_{m-1}$. \end{enumerate} \end{lem} \begin{proof} Suppose first that $v_{m0} = 1$. It follows by assumption on $v_m$ that $|v_k|=2$ for all $1 \leq k < n$ except for a single $v_i$, for which $|v_i | = 3$. On the other hand, $| v_s | = s+1$. It follows that $s = 2$, and (1) holds. Thus, we may assume that $v_m = -e_m + e_{m-1} + \cdots + e_i$ for some $i > 0$. Now $v_m \sim v_i$, so $| v_k | = 2$ for all $i+1 < k < m$. Suppose that $i+1 < m$ and $| v_{i+1} | = 2$. If $v_i \sim v_l$ with $l < i$, then $(v_i; v_l, v_{i+1}, v_m)$ induces a claw, in contradiction to Lemma \ref{l: no claw}. It follows that $i = s$ and (2) holds. The remaining cases that $m = i+1$, or that $m > i+1$ and $|v_{i+1}| > 2$, both result in (3). \end{proof} \begin{defin} Let $A_{s,m}$ denote the family of standard bases enumerated in Lemma \ref{l: v_m of degree 2} and $B_m$, $C_{s,m}$ the families enumerated in Lemma \ref{l: v_m of degree 1}(1) and (2), respectively. \end{defin} Combining Lemmas \ref{l: v_m of degree 2} and \ref{l: v_m of degree 1} and induction, we obtain the following structural result. \begin{prop}\label{p: decomposable structure} Suppose that a changemaker lattice is isomorphic to a sum of more than one linear lattice. Then its standard basis is built by a sequence of (possibly zero) expansions to $A_{s,m}$, $B_m$, $C_{s,m}$, or $\varnothing$, for some $m > s \geq 2$. \qed \end{prop} In fact, we have established somewhat more: if a changemaker lattice is isomorphic to a linear lattice or a sum thereof, and $G(S_{n'})$ is disconnected for some $n' \leq n$, then $S_{n'}$ takes the form appearing in Proposition \ref{p: decomposable structure}. We will utilize Proposition \ref{p: decomposable structure} in this stronger form on several occasions in Sections \ref{s: just right} - \ref{s: tight}. 
We obtain vertex bases for the families in Proposition \ref{p: decomposable structure} as follows: \begin{enumerate} \item[$A_{s,m}$:] $\{v_{m-1},\dots,v_{s+1},v_{s-1},\dots,v_1,-(v_m+v_1+\cdots+v_{s-1})\} \cup \{v_s\}$; \item[$B_m$:] $\{ v_{m-1} \dots,v_2,-v_m\} \cup \{ v_1 \}$; \item[$C_{s,m}$:] $\{ v_1, \dots, v_{s-1} \} \cup \{ v_{m-1}, \dots, v_s, v_m \}$. \end{enumerate} \noindent For expansion on $\varnothing$, the standard basis is, up to reordering, a vertex basis (cf. Proposition \ref{p: expansion}). \section{All vectors just right}\label{s: just right} Before proceeding further, we briefly comment on the purpose of this and the next two sections, and establish some notation. Just as Proposition \ref{p: decomposable structure} describes the structure of the standard basis for a changemaker lattice isomorphic to a sum of more than one linear lattice, our goal in Sections \ref{s: just right} - \ref{s: tight} is to produce a comprehensive collection of {\em structural Propositions} that do the same thing for the case of a single linear lattice. In each Proposition we enumerate a specific family of standard bases, and in Section \ref{s: cont fracs} we verify that each basis does, in fact, span a linear lattice by converting it into a vertex basis. {\em A posteriori}, each standard basis $S = \{v_1,\dots,v_n\}$ contains at most one tight vector and two gappy vectors. We always denote the tight vector by $v_t$. We denote the gappy vector with the smaller index by $v_g$, which always takes the form $e_k + e_j + e_{j+1} + \cdots + e_{g-1}-e_g$ with $k < j+1$. When there are two gappy vectors (\ref{p: breakable 1}(1), \ref{p: breakable 2}(2), \ref{p: breakable 3}(1,2)), we specifically notate the one with the larger index. We write $s = \min \{ i \; | \; |v_i| > 2 \}$ when there is no tight vector, and $s =\min \{ i > t \; | \; |v_i| > 2 \}$ when there is one. Otherwise, every standard basis element $v_i$ is just right, so is completely determined by $i$ and its norm, which we report iff $|v_i| \geq 3$. In \ref{p: gappy structure}(2) and \ref{p: breakable 1}-\ref{p: breakable 3} we report some families of standard bases {\em up to truncation}. Thus in \ref{p: gappy structure}(2), we may truncate by taking $n = g$ and disregarding $v_i$ for $i \geq g+1$. \noindent {\em Example.} The first structural Proposition \ref{p: just right 1}(1) reports the family of standard bases parametrized by $s \geq 2$, where \begin{itemize} \item $v_i = e_{i-1}-e_i$ for $i=1,\dots,s-1$; \item $v_s = e_0 + \cdots + e_{s-1} - e_s$; \item $v_{s+1} = e_{s-1} + e_s - e_{s+1}$; \item $v_{s+2} = e_s + e_{s+1} - e_{s+2}$; and \item $v_n = v_{s+3} = e_{s-1} + e_s + e_{s+1} + e_{s+2} - e_{s+3}$. \end{itemize} \medskip For the remainder of this section, assume that $L$ is a changemaker lattice isomorphic to a linear lattice, and that every element of $S$ is just right (hence also unbreakable). \subsection{$G(S)$ contains a triangle.} A {\em sun} is a graph consisting of a triangle $\Delta$ on vertices $\{a_1,a_2,a_3\}$ together with three vertex-disjoint paths $P_1,P_2,P_3$ such that $a_i$ is an endpoint of $P_i$, $i = 1,2,3$. The other endpoints of the $P_i$ are the {\em extremal vertices} of the graph. \begin{lem}\label{l: sun} If $G(S)$ contains a triangle $\Delta$, then $G(S)$ is a sun and $V(\Delta) = \{v_i,v_{i+2},v_m\}$ for some $i+2 < m$. Furthermore, $|v_l| = 2$ for all $v_l$ along the path containing $v_i$. 
\end{lem} \begin{proof} Choose a triangle $\Delta \subset G(S)$ with $V(\Delta) = \{ v_i,v_j,v_m \}$, $i < j < m$. By Lemma \ref{l: cycle}, $|v_l| = 2$ for all $l \leq i$ and $|v_j|,|v_m| \geq 3$. Since $v_j$ and $v_m $ are just right, we have $v_j = -e_j+\cdots+e_i$ and $v_m = -e_m+ \cdots + e_i$. Since ${\langle} v_m, v_j {\rangle} \leq 1$, it follows that $j = i+2$. Suppose by way of contradiction that $\Delta' \subset G(S)$ were another triangle with $V(\Delta') = \{ v_{i'},v_{j'},v_{m'} \}$, $i' < j' < m'$. Then $|v_l| = 2$ for all $l \leq i'$ and $j' = i'+2$, so $i' \in \{i-1,i,i+1\}$. If $i' = i$, then ${\langle} v_{m'},v_m {\rangle} \geq 2$, which cannot occur. If $i' = i + 1$, then $(v_{i+3},v_{i+1},v_i)$ is a path in $G(S)$, which implies that $(v_{i+2},v_{i+3},v_m)$ forms a heavy triple, a contradiction. By symmetry, $i = i' + 1$ cannot occur either. Consequently, $G(S)$ contains a unique triangle $\Delta$. Furthermore, $G(S)$ is claw-free by Lemma \ref{l: no claw}. It follows that $G(S)$ is a sun. If some vector $v_l$ on the path containing $v_i$ had norm $\geq 3$, then $(v_l,v_k,v_m)$ forms a heavy triple, so this does not occur. \end{proof} \begin{prop}\label{p: just right 1} Suppose that every element in $S$ is just right, and that $G(S)$ contains a triangle. Then either \begin{enumerate} \item $n = s+3$, $|v_s| = s+1$, $|v_{s+1}|=|v_{s+2}|=3$, and $|v_{s+3}|=5$; \item $n = s+3$, $|v_s| = s+1$, $|v_{s+1}|=3$, and $|v_{s+2}| = |v_{s+3}|=4$; or \item $|v_s| = s = 3$, $|v_m| = m$ for some $m > 3$, $|v_i| = 2$ for all $i < m$, $i \ne 3$, and $S$ is built from $S_m$ by a sequence of expansions. \end{enumerate} \end{prop} \begin{proof} We apply Lemma \ref{l: sun}, keeping the notation therein. \noindent (I) {\em Suppose that $|v_{i+1}| > 2$.} \noindent In this case, $s = i+1$ and $v_s$ has no smaller neighbor, so $|v_s| = s+1$. Since $G(S)$ is connected, $v_s$ has some neighbor $v_j = -e_j+\cdots+e_l$. Note that $v_s \not\sim v_{s+1},v_m$, so $j \ne s+1,m$. If $l < s$, then in fact $l < s-1$ and $(v_{s+1},v_j,v_m)$ forms a heavy triangle, so it follows that $l = s$. Since $1 \geq {\langle} v_j,v_m {\rangle} = \min\{m,j\} - (s+1) \geq 1$, it follows that $\min\{m,j\} = s+2$. \noindent (I.1) {\em Suppose that $j = s+2$.} \noindent The subgraph $H$ of $G(S)$ induced on $\{v_1,\dots,v_{s+2},v_m\}$ is a sun with extremal vertices $v_1,v_{s},v_{s+1}$. We claim that $G(S) = H$. For if not, then there exists some vector $v_l = -e_l + \cdots + e_k$ with a single edge to $H$, meeting it in an extremal vertex. This forces $k \leq s+1$, but then ${\langle} v_l, v_m {\rangle} \geq 1$, a contradiction. It follows that $n=m = s+3$, and case (1) results. \noindent (I.2) {\em Suppose that $m = s+2$.} \noindent The subgraph $H$ induced on $\{v_1,\dots,v_{s+2},v_j\}$ is a sun with extremal vertices $v_1,v_s,v_{s+1}$ like before. The argument just given (with $v_j$ in place of $v_m$) applies to show that $G(S) = H$. It follows that $n = j = s+3$, and case (2) results. \noindent (II) {\em Suppose that $|v_{i+1}| = 2$.} \noindent In this case, $s = i+2 = 3$ and $|v_3| = 3$. If $|v_j| \geq 3$ for some $3 < j < m$ chosen smallest, then the subgraph induced on $V' = \{v_1,\dots,v_{j-1},v_m\}$ is a sun in which $v_j$ has multiple neighbors, which cannot occur in the sun $G(S)$. Thus, $|v_j| = 2$ for all $3 < j < m$. Next, choose any $v_j = -e_j+\cdots+e_k$ with $j > m$. Then $G(S_{j-1})$ is a sun, so $v_j$ has exactly one smaller neighbor. 
It easily follows that $k \geq 2$, $v_j \sim v_k$, and $v_k$ has some smaller neighbor $v_l$. If $|v_j| \geq 3$, then $|v_{k+1}| \geq 3$, since otherwise $(v_k;v_l,v_{k+1},v_j)$ induces a claw. It follows that $G(S_j)$ is an expansion of $G(S_{j-1})$. By induction, $G(S)$ is a sequence of expansions applied to $G(S_m)$, and case (3) results. \end{proof} \subsection{$G(S)$ does not contain a triangle.} In this case, $G(S)$ is a path. \subsubsection{Some vertex has multiple smaller neighbors.} Suppose that $v_m \in S$ has multiple smaller neighbors. Since $G(S)$ is a path, it follows that $G(S_{m-1})$ consists of a union of two paths and that $v_m$ is adjacent precisely to one endpoint of each. Therefore, $m$ is the minimal index for which $G(S_m)$ is connected, which establishes that $m$ is unique. \begin{lem}\label{l: little} Suppose that every element in $S$ is just right, $G(S)$ does not contain a triangle, $v_m \in S$ has multiple smaller neighbors, and $v_{m-1}$ is not an endpoint of $G(S)$. Then $m = n$. \end{lem} \begin{proof} Suppose by way of contradiction that $n > m$, and consider $v_{m+1}$. Its unique smaller neighbor $v_j$ is an endpoint of the path $G(S_m)$, and $j < m-1$ by hypothesis. Therefore, $|v_{m+1}| \geq 4$, but this implies that ${\langle} v_{m+1}, v_m {\rangle} \geq 1$, a contradiction. \end{proof} \begin{prop}\label{p: just right 2} Suppose that every element in $S$ is just right, $G(S)$ does not contain a triangle, and some vector $v_m \in S$ has multiple smaller neighbors. Then either \begin{enumerate} \item $m=n=4$, $|v_2| = |v_3| = 3$, and $|v_4| = 5$; \item $m=n=4$, $|v_2|=3$, and $|v_3|=|v_4|=4$; \item $m=n = s+3$, $|v_s| = s+1$, $|v_{s+2}|=3$, and $|v_{s+3}|=5$; \item $m=n = s+3$, $|v_s| = s+1$, and $|v_{s+2}| = |v_{s+3}| = 4$; or \item $s=3$, $|v_3|=4$, $|v_m|= m > 3$, and $|v_{m+1}| =3$ in case $n > m$. \end{enumerate} \end{prop} \begin{proof} Write $v_m = -e_m + \cdots + e_k$. \noindent (I) {\em Suppose that $k=0$.} \noindent Then ${\langle} v_m, v_s {\rangle} = s-1 \geq 1$ forces $s = 2$. Further, $v_m$ is not adjacent to $v_1$, but as $v_m$ has a neighbor in the component of $G(S_{m-1})$ containing it, $v_1$ must have some neighbor $v_i = -e_i + \cdots + e_1$ with $i \geq 3$. Then ${\langle} v_m, v_i {\rangle} \geq i-2 \geq 1$, so $i=3$. Hence $v_m$ neighbors $v_2$ and $v_3$ but no other $v_j$, $j < m$. It follows that $|v_j| = 2$ for $3<j<m$. If $m > 4$, then $(v_3;v_1,v_4,v_m)$ induces a claw. Hence $m=4$. By Lemma \ref{l: little}, it follows that $m = n$, and case (1) results. \noindent (II) {\em Suppose that $k > 0$.} Hence $v_m \sim v_k, v_j$ for some $j \geq k+2$. \noindent (II.1) {\em Suppose that $j > k+2$.} \noindent In this case, $|v_j|=3$ and $|v_{j-1}|=2$, so $v_{j-2}$ neighbors $v_j$ and $v_{j-1}$ and no other vector. It follows that $j-2 \in \{1,s\}$. But $1=j-2 > k >0$ cannot occur, so $j-2=s$. Since $0={\langle} v_m,v_s {\rangle} = s-k-1$, it follows that $s = k+1$. Moreover, since $v_j$ is an endpoint of its path in $G(S_{m-1})$, it follows that $m=j+1$. By Lemma \ref{l: little}, it follows that $m = n$, and case (3) results. \noindent (II.2) {\em Suppose that $j = k+2$.} Since $v_j$ and $v_k$ belong to different components of $G(S_{m-1})$, it follows that $|v_k|=2$ and $|v_{k+2}| \geq 4$. \noindent (II.2.i) {\em Suppose that $v_{k+2}$ has no smaller neighbor.} \noindent In this case, $|v_{k+1}|=2$ and $k+2 = s$. 
As $v_k \sim v_{k+1}$ and $v_k$ is an endpoint of its path in $G(S_{m-1})$, it follows that $v_k$ has no smaller neighbor, hence $k = 1$. It follows that $s=3$, $|v_3| = 4$, $|v_m| = m$, and $|v_i| = 2$ for all other $i < m$. If $n = m$, then we land in case (5). If $n > m$, then consider $v_{m+1}$. Its unique smaller neighbor $v_j$ is an endpoint of $G(S_m)$. It follows that $j = m-1$ and $|v_{m+1}|=3$. By a similar argument, it follows that $|v_i|=2$ for all $i > m+1$. Therefore, case (5) results. \noindent (II.2.ii) {\em Suppose that $v_{k+2}$ has a smaller neighbor.} Thus, it has no larger neighbor in $G(S_m)$ besides $v_m$, so $m = k+3$. \noindent (II.2.ii$'$) {\em Suppose that $|v_{k+1}| \geq 3$.} \noindent It follows that $v_{k+1} \sim v_{k+2}$, and $v_{k+2}$ cannot have any other smaller neighbor. Hence $\min({\textup{supp}}(v_{k+2}))=0$, $s = k+1$, and $1 \geq |{\langle} v_{k+1},v_k {\rangle}| = k$, so $k =1$. By Lemma \ref{l: little}, it follows that $m = n$, and case (2) results. \noindent (II.2.ii$''$) {\em Suppose that $|v_{k+1}|=2$.} \noindent In this case, $v_k \sim v_{k+1}$, so $v_k$ has no smaller neighbor. Hence $k \in \{1,s\}$. However, if $k = 1$, then $v_{k+2}$ would not have a smaller neighbor. Hence $k = s$, and as ${\langle} v_{k+2}, v_s {\rangle} = 0$ but $v_{k+2}$ has a smaller neighbor, it follows that $|v_{k+2}| = 4$. By Lemma \ref{l: little}, it follows that $m = n$, and case (4) results. \end{proof} \subsubsection{No vector has multiple smaller neighbors.} \begin{prop}\label{p: just right 3} Suppose that every element in $S$ is just right, $G(S)$ does not contain a triangle, and no vector in $S$ has multiple smaller neighbors. Then either \begin{enumerate} \item $|v_i| = 2$ for all $i$; \item $s = 3, |v_3|=3,|v_4|=5$; or \item $|v_s| = s$ and $S$ is built from $S_s$ by a sequence of expansions. \end{enumerate} \end{prop} \begin{proof} It must be the case that $G(S_j)$ is connected for all $j$, since $G(S)$ is connected, and if $G(S_{m-1})$ were disconnected for some $m > 0$, then $v_m$ would have multiple smaller neighbors. Thus, unless case (1) occurs, it follows that $|v_s| = s \geq 3$. \noindent (I) {\em Suppose that there exists an index $m > s$ for which $\min({\textup{supp}}(v_m))=0$.} \noindent In this case, ${\langle} v_m, v_s {\rangle} = s-2 \geq 1$. It follows that $s = 3$ and $v_3$ is the unique smaller neighbor of $v_m$. If $m > 4$, then $(v_3;v_2,v_4,v_m)$ induces a claw. Hence $m = 4$. If $n > 4$, then choose $j$ maximal for which $|v_5|=\cdots=|v_j|=2$. Thus, $G(S_j)$ has endpoints $v_j$ and $v_2$. If $n > j$, then $v_{j+1}$ must neighbor one of $v_j$ and $v_2$. However, $v_{j+1} \sim v_2$ implies that ${\langle} v_{j+1}, v_4 {\rangle} \geq 1$, while $v_{j+1} \sim v_j$ implies that $|v_{j+1}|=2$. Both result in contradictions, so it follows that $n = j$, and case (2) results. \noindent (II) {\em Suppose that $\min({\textup{supp}}(v_m)) > 0$ for all $m > s$.} \noindent Consider $v_m = -e_m + \cdots + e_k$ with $m > s$ and $k > 0$. Thus, $v_k$ is the unique smaller neighbor of $v_m$, so it is an endpoint of $G(S_{m-1})$, and $|v_i| = 2$ for all $k+1 < i < m$. If $m > k+1$ and $|v_{k+1}| = 2$, then $v_k \sim v_{k+1}$. As $v_k$ is an endpoint of $G(S_{m-1})$, it follows that $v_k$ has no smaller neighbor. But then $k = 1$ and $|v_i| = 2$ for all $i < m$, in contradiction to the assumption that $m > s$. It follows that $|v_{k+1}| \geq 3$, or else $m = k+1$ and $|v_m| = 2$. Hence $S_m$ is an expansion on $S_{m-1}$. 
By induction on $m$, it follows that case (3) results. \end{proof} \section{A gappy vector, but no tight vector}\label{s: gappy, no tight} In this section, assume that $L$ is a changemaker lattice isomorphic to a linear lattice, $S$ does not contain a tight vector, and it does contain a gappy vector $v_g$. Again, every vector in $S$ is unbreakable, and $G(S) = \widehat{G}(S)$. For use in Section \ref{s: tight}, the following Lemma allows the possibility that $S$ contains a tight, unbreakable vector. \begin{lem}\label{l: one gappy} Suppose that $v_g \in S$ is gappy, and that $S$ contains no breakable vector. Then $v_g$ is the unique gappy vector, $v_g = -e_g + e_{g-1} + \cdots + e_j + e_k$ for some $k +1 < j < g$, and $v_k$ and $v_{k+1}$ belong to distinct components of $G(S_{g-1})$. \end{lem} \begin{proof} Choose $v_g$ with $g$ minimal, and choose a minimal gappy index $k$ for $v_g$. Then $|v_{k+1}| \geq 3$, and since $v_{k+1}$ is not gappy, it follows that $v_{k+1,k-1} = 1$. Thus, $v_{g,k-1}=0$, since otherwise ${\langle} v_g, v_{k+1} {\rangle} \geq 2$. It follows that $v_g \sim v_{k+1},v_k$. If $v_k \sim v_{k+1}$, then either $|v_k| \geq 3$, or else $k = 1$ and $v_2$ is tight. In the first case, the triangle $(v_k,v_{k+1},v_g)$ is heavy, and in the second case, it is negative. Hence $|v_k| = 2$ and $v_k \not\sim v_{k+1}$. If $v_k$ and $v_{k+1}$ were in the same component of $G(S_{g-1})$, then a shortest path between them, together with $v_g$, would span a cycle of length $> 3$ in $G(S)$. It follows that $G(S_{g-1})$ has two components, and $v_k$ and $v_{k+1}$ belong to separate components. Suppose by way of contradiction that $l > k$ were another gappy index. Then $|v_{l+1}| \geq 3$, so $v_{k+1} \not\sim v_{l+1}$, since otherwise $(v_{k+1},v_{l+1},v_g)$ forms a heavy triangle. Furthermore, $v_{l+1,k} = 0$, since otherwise ${\langle} v_g, v_{l+1} {\rangle} \geq 2$. It follows that $v_{l+1} \not\sim v_k$, too. But then $(v_g; v_k,v_{k+1},v_{l+1})$ induces a claw. Hence no other index $l$ exists, and $v_g$ takes the stated form. Lastly, suppose by way of contradiction that $v_h$ were another gappy vector, with $h > g$ chosen smallest. Note that $G(S_{h-1})$ is connected. It follows that $v_h$ has at most two smaller neighbors and that they are adjacent, since otherwise there would exist a cycle of length $> 3$ in $G(S)$. Choose a minimal gappy index $k'$ for $v_h$ and let $l = \min({\textup{supp}}(v_h))$. Then $|v_{k'+1}| \geq 3$, and since ${\langle} v_g, v_{k'+1} {\rangle} \leq 1$, it follows that $v_{k'+1,i} = 0$ for $i = l,\dots,k'-1$. Thus, $v_{k'+1,i}=1$ for some $i < l$, whence $l > 0$. Thus, $v_h \sim v_{k'+1}, v_l$, so $v_{k'+1} \sim v_l$. However, $|v_l|=2$, so it follows that $v_{k'+1,l-1} = 1$; but then $v_{k'+1} \ll v_{k'+1} - v_l$, a contradiction. It follows that $v_g$ is the unique gappy vector, as claimed. \end{proof} By Lemma \ref{l: one gappy}, it follows that $G(S_{g-1})$ is disconnected, so $S_{g-1}$ must take one of the forms described by Proposition \ref{p: decomposable structure}. Lemmas \ref{l: S_g nothing} and \ref{l: S_g not nothing} condition on these possible forms to determine the structure of $S_g$. \begin{lem}\label{l: S_g nothing} Suppose that $S_{g-1}$ is built from $\varnothing$ by a sequence of expansions. 
Then $S_g$ takes one of the following forms: \begin{enumerate} \item $k = s-1$, $j = s+1$, $|v_s| = s+1$, and $|v_{s+2}|=4$; \item $j = k+2$, $|v_i| = 2$ for all $i >k+1$, and otherwise $S_{g-1}$ is arbitrary; \item $k = s$, $j = s+2$, $|v_s| = s+1$, $|v_{s+1}|=3$, and $|v_{s+2}|=3$; \item $k = 1$, $s = 2$, $|v_2| = 3$, $|v_j|=j$, and $|v_{j+1}|=3$; or \item $k = 1$, $s = 2$, $|v_2|=3$, and $|v_j| = j$. \end{enumerate} \end{lem} \begin{proof} \noindent (I) {\em Suppose that $v_g \sim v_j$.} \noindent Thus, $v_{jk} = 0$. If $v_j \not\sim v_{k+1}$, then $(v_g;v_k,v_{k+1},v_j)$ induces a claw. Hence $v_j \sim v_{k+1}$. If $|v_j| \geq 3$, then $(v_{k+1},v_j,v_g)$ forms a heavy triangle. Hence $|v_j|=2$ and $j = k+2$. \noindent (I.1) {\em Suppose that $v_l \sim v_k$ with $k < l < g$.} \noindent Thus, $v_l = -e_l+\cdots+e_k$. Then $l > k+2$ and ${\langle} v_g, v_l {\rangle} \leq 1$, which implies that $l = k+3$ and $|v_{k+3}|=4$. If $|v_i| \geq 3$ for some largest value $i \leq k$, then $(v_i,v_{k+3},v_g)$ forms a heavy triple. Hence $|v_i| = 2$ for all $i \leq k$, and $s = k+1$. If $v_l \sim v_{s+1}$ for some $s+1 < l < g$, then $(v_g;v_{s-1},v_s,v_l)$ induces a claw. It follows that $|v_l| = 2$ for all $s+1 < l < g$, and case (1) results. \noindent (I.2) {\em Suppose that $v_l \not\sim v_k$ for all $k+1 < l < g$.} \noindent It follows that $|v_l|=2$ for all such $l$, and case (2) results. \noindent (II) {\em Suppose that $v_g \not\sim v_j$.} \noindent Thus, $v_{jk} = 1$. Furthermore, the assumption on $S_{g-1}$ implies that $k = \min({\textup{supp}}(v_j))$ and that $|v_i| = 2$ for all $k+1<i<j$. If $v_k$ has a smaller neighbor $v_l$, then $(v_k;v_l,v_j,v_g)$ induces a claw. It follows that $k \in \{1,s\}$. \noindent (II.1) {\em Suppose that $k = s$.} \noindent Since $|v_s| \geq 3$, it follows that $|v_{s+1}|=3$. If $j > s+2$, then $|v_{s+2}|=2$ and $(v_{s+1};v_{s-1},v_{s+2},v_g)$ induces a claw. Hence $j = s+2$. If $v_{s+1} \sim v_l$ for some $s+2 < l < g$, then either $l = s+3$, in which case $(v_s,v_l,v_g)$ forms a heavy triangle, or else $l > s+3$, in which case ${\langle} v_g,v_l {\rangle} \geq 2$. It follows that $|v_l|=2$ for all $s+2 < l < g$, and case (3) results. \noindent (II.2) {\em Suppose that $k = 1$.} It follows that $s = 2$. \noindent (II.2$'$) {\em Suppose that $v_l \sim v_{j-1}$ for some $j < l < g$.} \noindent If $v_g \sim v_l$, then $(v_{j-1},v_l,v_g)$ forms a heavy triple. Hence $v_g \not\sim v_l$, so $l = j+1$ and $|v_{j+1}|=3$. If $v_i \sim v_j$ for some $j < i < g$, then $(v_j,v_i,v_g)$ forms a heavy triple. Hence $|v_i| = 2$ for all $j+1 < i < g$, and case (4) results. \noindent (II.2$''$) {\em Suppose that $v_l \not\sim v_{j-1}$ for all $j < l < g$.} \noindent It follows that $|v_l|=2$ for all $j < l < g$, and case (5) results. \end{proof} \begin{lem}\label{l: S_g not nothing} Suppose that $S_{g-1}$ is not built from $\varnothing$ by a sequence of expansions. Then $S_g$ takes one of the following forms: \begin{enumerate} \item $k=s$, $j=s+2$, $|v_s| = s+1$, $|v_{s+1}|=3$, and $|v_{s+2}| = 4$; \item $k=1$, $j=3$, $s = 2$, $|v_2| = 3$, and $|v_3| =4$; or \item $k = s-1$, $j = s+1$, $|v_s| = s+1$, and $|v_{s+2}|=3$. \end{enumerate} \end{lem} \begin{proof} Since $S_{g-1}$ is not obtained from $\varnothing$ by a sequence of expansions, Proposition \ref{p: decomposable structure} implies that $S_{g-1}$ is built by applying a sequence of expansions to $A_{s,m}, B_m$, or $C_{s,m}$, for some $m > s \geq 2$. We consider these three possibilities in turn. 
\noindent (I) $S_m = A_{s,m}$. \noindent In this case, $v_s$ is a singleton in $G(S_{g-1})$. It follows that $s \in \{ k,k+1 \}$. If $s = k+1$, then since $(v_{s-1},v_s,v_m)$ spans a triangle and $v_{s-1} \sim v_g$, it follows that $(v_s,v_m,v_g)$ forms a heavy triple. Therefore, $s = k$. If $m \ne j$, then $(v_{s+1},v_m,v_g)$ forms a heavy triangle. Therefore, $m = j$. If $m > s+2$, then $|v_{s+1}|=2$, and $(v_{s+1};v_{s-1},v_{s+2},v_g)$ induces a claw. Therefore, $m = s+2$. It follows that $S_{g-1}$ is built from $A_{s,s+2}$ by a sequence of expansions. If $v_l \sim v_{k+1}$ for some $s+2<l\leq g-1$, then either $l = s+3$ and $(v_{s+1};v_{s-1},v_{s+3},v_g)$ induces a claw, or else $l > s+3$ and $(v_{s+1},v_l,v_g)$ forms a heavy triple. Consequently, no such $l$ exists, and therefore $|v_i| = 2$ for all $s+2 < i \leq g-1$. This results in case (1). \noindent (II) $S_m = B_m$. \noindent In this case, $v_1$ is a singleton in $G(S_{g-1})$, so $k = 1$. If $m \ne j$, then $(v_2,v_m,v_g)$ forms a heavy triangle. Therefore, $m = j$. If $m > 3$, then $(v_2;v_3,v_m,v_g)$ induces a claw. Hence $m = 3$. It follows that $S_{g-1}$ is built from $B_4$ by a sequence of expansions. If $v_l \sim v_2$ for some $3 < l \leq g-1$, then either $(v_3,v_l,v_g)$ forms a heavy triangle, or $(v_3;v_4,v_l,v_g)$ induces a claw. It follows that $|v_i|=2$ for all $4 < i \leq g-1$. This results in case (2). \noindent (III) $S_m = C_{s,m}$. \noindent In this case, $(v_1,\dots,v_{s-1})$ spans a component of $G(S_{g-1})$. It follows that $k = s-1$. If $v_m \sim v_g$, then $(v_s,v_m,v_g)$ forms a heavy triangle. Hence $v_m \not\sim v_g$. If $j > s+1$, then $(v_s;v_{s+1},v_m,v_g)$ induces a claw. Hence $j = s+1$. Since $v_m \not\sim v_g$, it follows that $m = s+2$. Therefore, $S_{g-1}$ is built from $C_{s,s+2}$ by a sequence of expansions. If $v_l \sim v_{s+1}$ for some $s+2 < l \leq g-1$, then $l = s+3$ since ${\langle} v_g, v_l {\rangle} \leq 1$, and then $(v_g;v_{s-1},v_s,v_{s+3})$ induces a claw. It follows that $|v_l|=2$ for all $s+2 < l < g$. This results in case (3). \end{proof} \begin{lem}\label{l: gappy triangle} Suppose that there exists $v_m \in S$ with multiple smaller neighbors, and $m > g$. Then $m = g+1$, $g = s+2$, $|v_s|=s+1$, $v_{s+2} = -e_{s+2} + e_{s+1} + e_{s-1}$, $|v_{s+3}| = 5$, and $|v_i|=2$ for $i =1,\dots,s-1,s+1$. \end{lem} \begin{proof} Since $G(S_{m-1})$ is connected and $G(S_m)$ does not contain a cycle of length $>3$, it follows that $v_m$ has precisely two smaller neighbors $v_a, v_b$ with $a < b$, and $(v_a,v_b,v_m)$ spans a triangle. By Lemma \ref{l: cycle}, it follows that $v_m = -e_m+ \cdots + e_a$, $|v_l|=2$ for all $l \leq a$, and $|v_b|, |v_m| \geq 3$. Furthermore, $|v_l|=2$ for all $l < m$, $l \ne s, b$. As $|v_s|,|v_g| \geq 3,$ it follows that $s = a+1$ and $b = g$; since $|v_{k+1}| \geq 3$, it follows that $k = s$; and since ${\langle} v_m, v_g {\rangle} \leq 1$, it follows that $v_g = -e_g + e_{g-1} +e_{s-1}$ for some $g \geq s+2$. If $g < m-1$, then $(v_g;v_{g-1},v_{g+1},v_m)$ induces a claw, and if $g > s+2$, then $(v_g;v_s,v_{g-1},v_m)$ induces a claw. It follows that $m = g+1$, $g = s+2$, and $S_m$ takes the stated form. \end{proof} \begin{prop}\label{p: gappy structure} Suppose that $S$ contains a gappy vector $v_g$ but no tight vector.
Then $S$ takes one of the following forms: \begin{enumerate} \item $n = g$ and $S$ is as in Lemma \ref{l: S_g nothing}(2); \item $n \geq g$, and up to truncation, $|v_{g+1}|=3$, $|v_i|=2$ for all $g+1 < i \leq n$, and $S_g$ is as in Lemmas \ref{l: S_g nothing} or \ref{l: S_g not nothing}, except for Lemma \ref{l: S_g nothing}(2); or \item $S_{g+1}$ is as in Lemma \ref{l: gappy triangle} and $|v_i| = 2$ for all $g+1 < i \leq n$. \end{enumerate} \end{prop} \begin{proof} If $n = g$ then the result is immediate. Thus, suppose that $n > g$, and select any $g < m \leq n$. If $v_m$ has multiple smaller neighbors, then $m = g+1$ and $S_{g+1}$ takes the form stated in Lemma \ref{l: gappy triangle}. Assuming this is not the case, $v_m$ has a unique smaller neighbor. If $l := \min({\textup{supp}}(v_m)) = 0$, then $v_m \sim v_s, v_g$, a contradiction. Hence $l > 0$, and $v_l$ is the unique smaller neighbor of $v_m$. Observe that $l \ne g$, since then $(v_g;v_k,v_{k+1},v_m)$ induces a claw. It follows that $v_g$ has no larger neighbor. If $|v_{l+1}| = 2$, then $s < g \leq l$, so $v_l$ has a smaller neighbor $v_i$, and then $(v_l;v_i,v_{l+1},v_g)$ induces a claw. It follows that $|v_{l+1}| \geq 3$. Consequently, if $m > g$ is chosen minimal with $|v_m| \geq 3$, then $m = g+1$ and either $S_{g+1}$ is as in Lemma \ref{l: gappy triangle}, or else $|v_{g+1}| = 3$. Furthermore, there does not exist any $m' > m$ with $|v_{m'}| \geq 3$, since then $\min({\textup{supp}}(v_{m'})) = g$ and $v_{m'} \sim v_g$, which does not occur. Therefore, $|v_i| = 2$ for all $g+1 < i \leq n$. Finally, $S_g$ cannot take the form stated in Lemma \ref{l: S_g nothing}(2), for then $(v_{k+1},v_g,v_{g+1})$ forms a heavy triple. The statement of the Proposition now follows. \end{proof} \section{A tight vector}\label{s: tight} Suppose that $S$ contains a tight vector $v_t$. By Lemma \ref{l: tight}(1), the index $t$ is unique. The arguments in this Section reach slightly beyond the criteria laid out in Subsection \ref{ss: int graph} that sufficed to carry out the analysis in Sections \ref{s: decomposable} - \ref{s: gappy, no tight}. Nevertheless, the basic ideas are the same as before. \subsection{All vectors unbreakable.}\label{ss: tight unbreakable} Propositions \ref{p: berge viii} and \ref{p: tight unbreakable} describe the structure of a standard basis that contains a tight, unbreakable element. However, we do not make any assumption on $v_t$ just yet, as these results will apply in Subsection \ref{ss: tight breakable}. \begin{lem} $S_{t-1}$ is built from $\varnothing$ by a sequence of expansions. \end{lem} \begin{proof} If $|v_i|=2$ for all $i < t$, then the result is immediate, so suppose that $|v_s| \geq 3$ with $s < t$ chosen smallest. Thus, $|v_s|=s$ or $s+1$. Let us rule out the first possibility. If $|v_s| = s$, then $s \geq 3$ and ${\langle} v_t, v_s {\rangle} = s-2 \geq 1$. Hence either $T_s \pitchfork T_t$, or else $s = 3$ and $T_s \dagger T_t$. In the first case, $(v_1;v_2,v_s,v_t)$ induces a claw, and in the second case, $(v_1,v_s,v_t)$ forms a negative triangle. Therefore, $|v_s| = s+1$. It follows that ${\langle} v_t, v_s {\rangle} = |v_s|-1 \geq 2$, so that $T_s \prec T_t$. As ${\langle} v_1, v_s {\rangle} = 0$, it follows that $T_1$ and $T_s$ abut $T_t$ at opposite ends. We claim that $v_1$ and $v_s$ belong to separate components of $G(S_{t-1})$. For suppose the contrary, and choose a shortest path between them.
Together with $v_t$ they span a cycle of length $\geq 4$ in $G(S)$ that is missing the edge $(v_1,v_s)$, contradicting Corollary \ref{c: cycle}. Therefore, $G(S_{t-1})$ is disconnected. It follows by Proposition \ref{p: decomposable structure} that $S_{t-1}$ is built from $A_{s,m}, B_m, C_{s,m}$, or $\varnothing$ by a sequence of expansions. Let us rule out the first three possibilities in turn. \noindent (a) $A_{s,m}.$ Since $|v_m| \geq 4$ and ${\langle} v_t, v_m {\rangle} = |v_m| -2$, it follows that $T_m \pitchfork T_t$. Since $v_{s+1} \sim v_m$, it follows that $T_{s+1} \dagger T_m$, whence $T_{s+1} \; \cancel{\dagger} \; T_t$ since otherwise $T_m \dagger T_t$. Hence $T_{s+1} \pitchfork T_t$ as well. In particular, $z_{s+1},z_m \in T_t$. On the other hand, $(v_1,\dots,v_{s-1},v_{s+1},v_m)$ induces a sun, with $|v_{s+1}|,|v_m| \geq 3$. It follows that $T_1$ is contained in the open interval with endpoints $z_{s+1}$ and $z_m$, so that $T_1$ and $T_t$ do not abut, in contradiction to $v_1 \sim v_t$. \noindent (b) $B_m.$ In this case, $T_2$ and $T_m$ both abut $T_t$, and at the opposite end as $T_1$. As $|v_2|, |v_m| \geq 3$, it follows that both $T_2, T_m \prec T_t$. Hence one of $T_2$, $T_m$ contains the other, in contradiction to their unbreakability. \noindent (c) $C_{s,m}.$ Now $T_s \prec T_t$. If $T_m \pitchfork T_t$, then $T_m$ and $T_t$ abut $T_s$ at opposite ends. However, $T_{s+1}$ abuts $T_s$ as well, but $v_s \not\sim v_t, v_{s+1}$. It follows that $m = s+2$ and $T_m \dagger T_t$. But then $(v_s,v_m,v_t)$ forms a negative triangle. It follows that $S_{t-1}$ is built from $\varnothing$ by a sequence of expansions, as desired. \end{proof} \begin{prop}\label{p: berge viii} Suppose that $|v_i| \ne 2$ for some $i < t$. Then $S = S_t$. \end{prop} \begin{proof} We proceed by way of contradiction. Thus, suppose that $S \ne S_t$, and consider $v_{t+1}$. Since $G(S_t)$ is a path, Lemma \ref{l: one gappy} implies that $v_{t+1}$ is not gappy. Set $k := \min({\textup{supp}}(v_{t+1}))$. By Lemma \ref{l: tight}(3), it follows that $k > 0$. Hence ${\langle} v_t, v_{t+1} {\rangle} \leq |v_{t+1}| - 3$, so $|v_{t+1}| \in \{2,3,4\}$. If $|v_{t+1}| = 2$, then $(v_t;v_1,v_s,v_{t+1})$ induces a claw. Similarly, if $|v_{t+1}| = 4$, then $(v_t;v_1,v_s,v_{t+1})$ induces a claw unless $k \in \{1,s\}$; but if $k \in \{1,s\}$, then $(v_k,v_t,v_{t+1})$ forms a negative triangle. It remains to consider the case that $|v_{t+1}|=3$. In this case, $v_{t-1}$ is the unique smaller neighbor of $v_{t+1}$, and $v_t \not\sim v_{t+1}$, so $z_{t+1} \notin T_t$. Let $P \subset G(S)$ denote the induced path with consecutive vertices $(v_{i_1},\dots, v_{i_l})$, where $i_1 = t$ and $i_l =t+1$. Thus, $i_2 \in \{1,s\}$ and $i_{l-1} = t-1$. Observe that if $m < t$ is maximal with the property that $|v_m| \geq 3$, then $v_m \in V(P)$; in fact, $i_j = m$, where $j+m = t+1$. Note that $z_{i_j} \notin T_{i_h}$ for all $h \ne 1,j$. Let $x$ denote the endpoint of $T_t$ at which $T_{i_1} = T_t$ and $T_{i_2}$ abut. Without loss of generality, suppose that $x$ is the left endpoint of $T_t$. Thus, $T_{i_{j-1}}$ abuts the left endpoint of $T_m$. It follows that $T_{i_{j+1}}$ abuts the right endpoint of $T_{i_j}$, since otherwise $(v_{i_{j-1}},v_{i_j},v_{i_{j+1}})$ induces a triangle, while $P$ is a path. Hence $z_m$ separates $x$ from all $T_{i_h}$ with $h > j$. In particular, $z_m$ separates $x$ from $z_{t+1} \in T_{t+1} = T_{i_l}$. As $z_{t+1} \notin T_t$, it follows that $z_{t+1}$ lies to the right of $T_t$.
Hence $T_t \subset T := \bigcup_{h=2}^l T_{i_h}$. However, $d(T_t) = t+4 > t+1 \geq d(T)$, a contradiction. It follows that $v_{t+1}$ cannot exist, so $S = S_t$, as desired. \end{proof} Henceforth we assume that $|v_i|=2$ for all $i < t$. \begin{prop}\label{p: tight unbreakable} Suppose that $z_i \notin T_t$ for all $i \leq n' \leq n$ with $|v_i| \geq 3$. Then $S_{n'}$ takes one of the following forms: \begin{enumerate} \item $t=1$, $|v_s| = s+1$ for some $s > 1$, $|v_i| = 2$ for all $1 < i < s$, and $S_{n'}$ is built from $S_s$ by a sequence of expansions; \item $t=1$, $|v_s| = s$ for some $s > 1$, $|v_i| = 2$ for all $1 < i < s$, and $S_{n'}$ is built from $S_s$ by a sequence of expansions; or \item $t > 1$, $|v_i| = 2$ for all $i < t$, and $S_{n'}$ is built from $S_t$ by a sequence of expansions. \end{enumerate} \end{prop} \noindent Notice that Proposition \ref{p: tight unbreakable}(1) allows the possibility that $s=2$, a slight divergence from our convention on the use of $s$ stated at the outset of Section \ref{s: just right}. Under the assumption that $n = n'$, Proposition \ref{p: tight unbreakable} produces three broad families of examples. Assuming instead that $n > n'$, Propositions \ref{p: breakable 1}, \ref{p: breakable 2}, and \ref{p: breakable 3} utilize this result to produce even more. \begin{proof} By Lemma \ref{l: one gappy}, $S_{n'}$ does not contain a gappy vector. Choose any $m > t$, and suppose by way of contradiction that $v_m$ had multiple smaller neighbors. Since $G(S_{m-1})$ is connected and $G(S_m)$ does not contain a cycle of length $>3$, it follows that $v_m$ has exactly two smaller neighbors $v_k$ and $v_j$, $k < j$, and $v_k \sim v_j$. Therefore, $|v_i| = 2$ for all $k+1 < i < m$, $i \ne j$, and since $G(S_m)$ does not contain a heavy triple, it follows that $|v_i| = 2$ for all $i \leq k$. Hence $t \in \{k+1, j\}$. However, if $t = k+1$, then $(v_t,v_j,v_m)$ forms a heavy triple, while if $t = j$, then $k = 1, t=3$, and $(v_k,v_t,v_m)$ forms a negative triangle. Therefore, $v_m$ has exactly one smaller neighbor. Set $k := \min({\textup{supp}}(v_m))$ and suppose that $k = 0$. Then ${\langle} v_t, v_m {\rangle} = t$, so it follows that $t = 1$. Since $v_m$ has no other smaller neighbor, it follows that $|v_i| = 2$ for all $1 < i < m$. Thus, $S_m$ takes the form stated in (1) with $m = s$. Suppose instead that $|v_{k+1}| = 2$. Then $v_k$ has no smaller neighbor $v_i$, since then $(v_k;v_i,v_{k+1},v_m)$ induces a claw. As $t \leq k$, $G(S_k)$ is connected, so $k = t = 1$. Thus, $S_m$ takes the form stated in (2) with $m = s$. If neither $k = 0$ nor $|v_{k+1}|=2$, then it follows that $S_m$ is an expansion on $S_{m-1}$. By induction, it follows that $S_{n'}$ takes one of the forms stated in the Proposition. \end{proof} \subsection{A tight, breakable vector.}\label{ss: tight breakable} Now we treat the case that $v_t$ is breakable. This is the final and most arduous step in the case analysis, resulting in Propositions \ref{p: breakable 1}, \ref{p: breakable 2}, and \ref{p: breakable 3}. \begin{lem}\label{l: breakable gappy} Suppose that $v_t$ is breakable, $g \ne t$, $|v_g| \geq 3$, and $z_g \in T_t$. Then $g > t+1$ and either $t > 1$, $v_g = -e_g+e_{g-1}+e_{t-1}$, and $T_g \pitchfork T_t$, or else $v_g = -e_g+e_{g-1}+e_{t-1}+\cdots+e_0$ and $T_g \prec T_t$. \end{lem} \noindent Note that we do not assume {\em a priori} that $v_g$ is gappy. \begin{proof} \noindent (a) $g > t+1$.
\noindent Otherwise, ${\langle} v_t, v_g {\rangle} \in \{ |v_g| - 3, |v_g|-2 \}$, with the second possibility iff $\min({\textup{supp}}(v_g)) = 0$. Lemma \ref{l: tight 2} rules out the first possibility and Lemma \ref{l: tight}(3) the second. It follows that ${\textup{supp}}(v_g)$ contains at least two values $> t$. \noindent (b) $v_{gt}=0$. \noindent Otherwise, $1 \leq {\langle} v_t, v_g {\rangle} \leq |v_g|-3$. By Lemma \ref{l: tight 2}, we must have ${\langle} v_t, v_g {\rangle} = 1$ and $|v_g| = 3$, so $v_g = -e_g + e_{g-1} + e_t$. Now Lemma \ref{l: tight}(2) implies that $z_g \notin T_t$, a contradiction. As $z_g \in T_t$, it follows that ${\langle} v_t, v_g {\rangle} > 0$, so $v_g$ is gappy and there exists a gappy index $k < t$. Since $|v_{k+1}| \geq 3$, it follows that $k = t-1$, and ${\textup{supp}}(v_g) \cap \{0,\dots,t-1\}$ consists of consecutive integers. \noindent (c) ${\textup{supp}}(v_g)$ contains exactly two values $>t$. \noindent Otherwise, $0 \leq {\langle} v_t, v_g {\rangle} \leq |v_g| -2$, where the latter inequality is attained precisely when $v_g = -e_g+e_{g-1}+e_m+e_{t-1}+\cdots+e_0$ for some $t < m < g-1$. Thus, $T_g \pitchfork T_t$, $\epsilon_g = \epsilon_t$, and $\epsilon_g([T_g - T_t] - [T_t - T_g])$ is reducible. However, this equals $v_g - v_t = -e_g + e_{g-1} + e_m + e_t -e_0$. Since every non-zero entry in this vector is $\pm 1$, a decomposition $v_g - v_t = x + y$ with ${\langle} x,y {\rangle} = 0$ satisfies $x_i y_i = 0$ for all $i$, and both $x$ and $y$ have a negative coordinate. Without loss of generality, $x_g = -1$ and $y_0 = -1$. Then $0 = {\langle} y, \sigma {\rangle} \geq -1 + \sigma_i$ for some $i \in \{ t, m, g-1 \}$; but $\sigma_i \geq \sigma_t = t+1 > 1$, a contradiction. It follows that $v_g = -e_g + e_{g-1} + e_{t-1} + \cdots + e_l$ for some $0 \leq l \leq t-1$. Suppose by way of contradiction that $0 < l < t-1$. Then $(v_l;v_i,v_{l+1},v_g)$ induces a claw in $G(S)$, where $i = l-1$ if $l > 1$, and $i = t$ if $l = 1$. Therefore, $l \in \{0, t-1\}$, and the statement of the Lemma follows on consideration of ${\langle} v_t, v_g {\rangle}$. \end{proof} Observe that if $v_t$ is breakable, $z_i \notin T_t$ for all $i < t$, and $g$ is chosen minimally as in Lemma \ref{l: breakable gappy}, then $S_{g-1}$ takes one of the forms stated in Proposition \ref{p: tight unbreakable}. We assume henceforth that this is the case, and $g > t+1$ is chosen minimally with $z_g \in T_t$. \begin{lem}\label{l: engulf} Suppose that $T_t$ is breakable, $T_i \prec T_t$, and let $C = \{ v_t \} \; \cup \; \{ v_j \; | \; T_j \dagger T_i,T_t \}$. Then $C$ separates $v_i$ in $G(S)$ from every other $v_l$ of norm $\geq 3$ for which $z_l \notin T_t$. \end{lem} \begin{proof} For suppose the contrary, and choose an induced path $P$ in $G(S) - C$ with distinct endpoints $v_i, v_l$ such that $|v_l| \geq 3$ and every vector interior to $P$ has norm $2$. Set $T = \bigcup_{v_k \in V(P)} T_k$. Since $V(P) \cap C = \varnothing$ and $z_l \notin T_t$, it follows that $T_t \subset T$ and $T_t - T_j$ contains no vertex of degree $\geq 3$. But then $T_t$ is unbreakable, a contradiction. \end{proof} \begin{prop}\label{p: breakable 1} Suppose that $S_{g-1}$ is as in Proposition \ref{p: tight unbreakable}(1). 
Then $s=2$, $n \geq g = 3$, and $S$ takes one of the following forms (up to truncation): \begin{enumerate} \item $|v_m| = m-1$ for some $m \geq 4$; or \item $v_4 = -e_4+e_3+e_0$ and $|v_m| = m-1$ for some $m \geq 5$. \end{enumerate} \end{prop} \begin{proof} Lemma \ref{l: breakable gappy} implies that $v_g = -e_g+e_{g-1}+e_0$, $T_g \prec T_t$, and furthermore that $G(S) = \widehat{G}(S)$ in this case (see the end of the paragraph following Definition \ref{d: int graph}). \noindent (a) {\em $g = s+1$, and $s=2$.} \noindent If $g > s+1$, then $G(S_{g-1})$ is a path, and $v_g$ neighbors $v_1,v_s,$ and $v_{g-1}$, so $(v_1,\dots,v_g)$ spans a cycle in $G(S)$ missing the edge $(v_1,v_{g-1})$, in contradiction to Lemma \ref{l: triangle}. If $s > 2$, then $(v_1;v_2,v_s,v_g)$ induces a claw. Let $h$ denote the maximum index of a vector $v_h$ for which $|v_h| \geq 3$ and $z_h \in T_1$. \noindent (b) {\em $v_h = -e_h + e_{h-1}+e_0$ and $h \in \{3,4\}$.} \noindent The first statement follows from Lemma \ref{l: breakable gappy}, which also implies that $\epsilon_h = \epsilon_1 = \epsilon_g$. If $h > 4$, then ${\langle} v_g, v_h {\rangle} = 1$. But both $v_g$ and $v_h$ are unbreakable, so ${\langle} v_g, v_h {\rangle} = \epsilon_g \epsilon_h {\langle} [T_g], [T_h] {\rangle} = {\langle} [T_g], [T_h] {\rangle} \leq 0$, a contradiction. \noindent (c) {\em $v_{m0}=0$ for all $m>h$.} \noindent For suppose that $v_{m0} = 1$ for some $m > h$. Then $v_{m1} = 1$ by Lemma \ref{l: breakable gappy} and the definition of $h$, which implies that ${\langle} v_m, v_2 {\rangle} \geq 1$. It follows that $T_1,T_2,T_m$ abut in pairs. But this cannot occur, since $T_m \not\prec T_1$ by assumption, and $T_2$ and $T_m$ are both unbreakable. \noindent (d) {\em $v_{m1}=0$ for all $m > h$.} \noindent For suppose that $v_{m1} = 1$ for some $m > h$. Thus, $T_m \dagger T_1$. If $v_{m2} = 0$, then $(v_1,v_2,v_m)$ forms a negative triangle. If $v_{m2}=1$, then $T_m$ abuts $T_1$ at the same end as $T_3$, so $v_{m3} = 0$, and then $(v_1,v_3,v_m)$ forms a negative triangle. It follows that $\min({\textup{supp}}(v_m)) \geq 2$ for all $m > h$. In particular, ${\langle} v_t, v_m {\rangle} = 0$. \noindent (e) {\em There is no $m > h$ for which $v_m$ is gappy.} \noindent Suppose by way of contradiction that $v_m$ is gappy for some smallest $m > h$, and choose a minimal gappy index $k$ for $v_m$. Take $i = 3$ in Lemma \ref{l: engulf}. Then $C = \{v_1\}$, $v_k \not\sim v_3$, and so $k > 2$. If $h = 4$, then take $i=4$ in Lemma \ref{l: engulf}. Then $C = \{v_1,v_2\}$, $v_k \not\sim v_h$, and so $k > 3$. In any event, it follows that $k > h-1$, so $v_{k+1}$ is not gappy. It follows as in the proof of Lemma \ref{l: one gappy} that $v_m \sim v_k, v_{k+1}$. Now either $v_k \sim v_{k+1}$, in which case $(v_k,v_{k+1},v_m)$ forms a heavy triangle, or else $v_k \not\sim v_{k+1}$, and then the connectivity of $G(S_{m-1})$ implies that $G(S_m)$ contains an induced cycle of length $> 3$. Either case results in a contradiction. Thus, if $|v_m| \geq 3$ for some $m > h$, then Lemma \ref{l: engulf} implies that $v_m$ does not lie in the same component of $G(S) - \{v_1,v_2\}$ as $v_g$ or $v_h$. It quickly follows that there is at most one index $m > h$ for which $|v_m| \geq 3$, and if so, then $v_m \sim v_2$. It then follows that $S$ takes one of the forms stated in the Proposition. \end{proof} \begin{prop}\label{p: breakable 2} Suppose that $S_{g-1}$ is as in Proposition \ref{p: tight unbreakable}(2).
Then $n \geq g = s+1$, and $S$ takes one of the following forms (up to truncation): \begin{enumerate} \item $s=2$, $v_4 = -e_4+e_3+e_0$, and $|v_m| = m-1$ for some $m \geq 5$; \item $|v_m| = m-g+2$ for some $m \geq g$; or \item $|v_m| = m-g+3$ for some $m \geq g$. \end{enumerate} \end{prop} \begin{proof} As in Proposition \ref{p: breakable 1}, Lemma \ref{l: breakable gappy} implies that $G(S) = \widehat{G}(S)$. In particular, if ${\langle} v_j,v_t {\rangle} = \pm 1$, then $T_j$ abuts $T_t$. Observe that $v_1 \sim v_2, v_s$, but $v_2 \not\sim v_s$. It follows that $T_1 \dagger T_2, T_s$, and $T_2$ and $T_s$ are distant. If $g > s+1$, then $(v_1;v_2,v_s,v_g)$ induces a claw. Hence $g = s+1$, $T_g \prec T_1$, and $T_g$ abuts $T_1$ at the same end as $T_s$. \noindent (I) {\em Suppose that $v_{j0} = 1$ for some $j > g$.} \noindent (a) {\em $v_{j1} = 0$.} \noindent If $v_{j1}=1$, then $T_j \dagger T_1$. As $T_s$ and $T_g$ abut $T_1$ at the same end, it follows that either $v_j \not\sim v_s, v_g$, or else $v_j \sim v_s, v_g$ and $s = |v_s| = 2$. Furthermore, $v_{ji} = 1$ for all $1 \leq i \leq s-1$, since otherwise $v_j \ll v_j - v_i$ for any such $i$ with $v_{ji} = 0$. Now, if $v_j \not\sim v_g$, then $v_{js} = 0$, which implies that ${\langle} v_s, v_j {\rangle} = s-1 \geq 1$ and $v_j \sim v_s$, a contradiction. If instead $v_j \sim v_s$ and $s =2$, then $(v_1,v_2,v_j)$ forms a negative triangle. Therefore, $v_{j1} = 0$. It follows that $T_j \prec T_1$, so $v_j = -e_j + e_{j-1} + e_0$ according to Lemma \ref{l: breakable gappy}. Now, $T_j$ and $T_g$ abut $T_1$ at opposite ends, so $v_j \not\sim v_g$. It follows that $j = g+1$. Furthermore, $s = 2$, since otherwise $T_2$ abuts $T_1$ at the same end as $T_j$, but $v_j \not\sim v_2$. In summary, $s =2$, $g=3$, $j = 4$, $v_4 = -e_4+e_3 + e_0$, $T_4 \prec T_1$, and $T_4$ abuts $T_1$ at the opposite end as do $T_2$ and $T_3$. In particular, $v_{m0} = 0$ for all $m > 4$. Now suppose that there exists $m > 4$ with $|v_m| \geq 3$. \noindent (b) {\em $v_{m1}=0$, and $v_{m2} = v_{m3} = v_{m4}$.} \noindent For suppose that $v_{m1}=1$. Then $v_{m2}=1$ since $|v_2|=2$. Thus, $v_m \sim v_1$ and $v_m \not\sim v_2$. It follows that $T_m$ abuts $T_1$ at the same end as $T_4$. Thus, $v_m \not\sim v_3$, so $v_{m3}=1$, and $v_m \sim v_4$, so $v_{m4}=0$. But now $(v_1,v_4,v_m)$ forms a negative triangle, a contradiction. Thus, $v_{m1} = 0$. It follows that $v_m \not\sim v_1$, so $v_m \not\sim v_3, v_4$. Thus, $v_{m4} = v_{m3} = v_{m2}$. Let us further suppose that $m > 4$ is minimal subject to $|v_m| \geq 3$. If $k := \min({\textup{supp}}(v_m)) > 4$, then $(v_k;v_{k-1},v_{k+1},v_m)$ induces a claw. Hence $k = 2$. Since $|v_i|=2$ for $4 < i < m$, it follows that $v_m$ is not gappy, and $v_m = -e_m + e_{m-1} + \cdots + e_2$. \noindent (c) {\em There does not exist $m' > m$ such that $|v_{m'}| \geq 3$.} For suppose otherwise, and choose $m'$ minimal with this property. Thus, (b) implies that $v_{m'1} = 0$ and $v_{m'2} = v_{m'3} = v_{m'4}$. If these values all equal $1$, then ${\langle} v_m, v_{m'} {\rangle} \geq 2$, a contradiction. Hence $k' := \min({\textup{supp}}(v_{m'})) > 4$. Now, Lemma \ref{l: engulf} implies that $k' \geq m$, taking $i = 4$ and $l = m'$ therein. But now $|v_{k'+1}|=2$ and $v_{k'}$ has a smaller neighbor $v_i$, so $(v_{k'};v_i,v_{k'+1},v_{m'})$ induces a claw. This is a contradiction. In summary, (I) leads to case (1) of the Proposition. 
\noindent (II) {\em Suppose that $v_{m0} = 0$ for all $m > g$.} \noindent (d) {\em If $s > 2$, then $v_{m1} = 0$ for all $m > g$.} \noindent Assume the contrary, and choose $m$ accordingly. It follows that $v_m \sim v_1$, and moreover that $T_m \dagger T_1$. Since $v_s$ is unbreakable, it follows that $T_m$ abuts $T_1$ at the same end as $T_2$. Thus, $v_m \not\sim v_s, v_g$. From $v_m \not\sim v_s$ it follows that $v_{m2}= \cdots = v_{m,s-1} = 0$ and $v_{ms}=1$, and from $v_m \not\sim v_g$ it subsequently follows that $v_{m0}=0$ and $v_{mg} = 1$. But then $(v_1,v_2,v_m)$ forms a negative triangle. It follows that if $v_{m1} = 1$, then $s = 2$ and $T_m \dagger T_1$; otherwise $v_m \not\sim v_1$. We henceforth drop any assumption about $s$. \noindent (e) {\em $v_m$ is not gappy for any $m > g$.} \noindent For suppose some $v_m$ were, choose $m$ minimal with this property, and choose a minimal gappy index $k$ for $v_m$. If $g = k+1$, then $v_m \sim v_g$. If $v_{m1}=0$, then Lemma \ref{l: engulf} implies a contradiction with $i = g$ and $l = m$, while if $v_{m1}=1$, then $s=2$, and since ${\langle} v_3, v_m {\rangle} \geq 0$ and $T_m \dagger T_1$, it follows that $(v_1,v_3,v_m)$ forms a negative triangle. It follows in either case that $g \ne k+1$, and since $m$ is chosen minimal, it follows that $v_{k+1}$ is not gappy. It follows at once that $v_{m,k-1} = 0$, whence $v_m \sim v_k, v_{k+1}$. Furthermore, $k \ne 1$ since $|v_{k+1}| \geq 3$. Since $k \geq 2$, both $v_k$ and $v_{k+1}$ are unbreakable. Now, if $v_k \sim v_{k+1}$, then $|v_k| \geq 3$, and $(v_k,v_{k+1},v_m)$ forms a heavy triangle. Hence $v_k \not\sim v_{k+1}$; but then a shortest path between them in $G(S_{k+1})$, together with $v_m$, results in an induced cycle of length $> 3$, a contradiction. It follows that $v_m$ is not gappy. \noindent (f) {\em $v_m$ does not have multiple smaller neighbors for any $m > g$.} \noindent For suppose that $v_m \sim v_j$ for some $j > k := \min({\textup{supp}}(v_m)) \geq 1$. Note that $j \ne g$ because of the form $v_g$ takes, so $v_j$ is not gappy, and it follows that $j = k+2$ is uniquely determined. In particular, it follows that $k > 1$. Hence $v_j \sim v_k$, since otherwise $G(S)$ contains an induced cycle of length $> 3$. As $|v_j| \geq 3$, it follows that $|v_i| = 2$ for all $1 < i \leq k$, since otherwise $(v_i,v_j,v_m)$ forms a heavy triple for some such $i$. Moreover, $|v_{k+1}| \geq 3$, since otherwise $(v_k;v_{k-1},v_{k+1},v_m)$ induces a claw. It follows that $k = s-1$, but then $j = g$ and $v_j \not\sim v_m$. Therefore, $v_m$ does not have multiple smaller neighbors. Thus, $v_m \sim v_k$ and $v_m$ has no other smaller neighbor. Furthermore, $|v_{k+1}| \geq 3$, as argued in the last paragraph. If $|v_m| \geq 3$ for some smallest $m > g$, then $k \in \{s-1, s\}$, and $v_{m-1}$ lies in the same component of $G(S) - \{v_1,v_s\}$ as $v_g$. Suppose by way of contradiction that there exists some smallest $m' > m$ for which $|v_{m'}| \geq 3$. It follows from the foregoing that $\min({\textup{supp}}(v_{m'})) + 1 = m$. But then $v_{m'}$ lies in the same component of $G(S)-\{v_1,v_s\}$ as $v_g$, in contradiction to Lemma \ref{l: engulf}. Therefore, $|v_m| \geq 3$ for at most one value $m > g$, and in this case, $v_m$ is not gappy, and $\min({\textup{supp}}(v_m)) \in \{s-1,s\}$. The two possibilities lead to cases (2) and (3), respectively. \end{proof} \begin{prop}\label{p: breakable 3} Suppose that $S_{g-1}$ is as in Proposition \ref{p: tight unbreakable}(3).
Then $n \geq g = t+2$, $v_{t+2} = -e_{t+2}+e_{t+1}+e_{t-1}+\cdots+e_0$, and $S$ takes one of the following forms (up to truncation): \begin{enumerate} \item $|v_{t+1}| = 2$, $v_{t+3} = -e_{t+3}+e_{t+2}+e_{t-1}$, and $|v_m| = m-t$ for some $m \geq t+4$; \item $|v_{t+1}| = 3$, $v_{t+3} = -e_{t+3}+e_{t+2}+e_{t-1}$, and $|v_m| = m-t$ for some $m \geq t+4$; \item $|v_{t+1}|=2$ and $|v_m| = m-t$ for some $m \geq t+3$; or \item $|v_{t+1}|=3$ and $|v_m| = m-t$ for some $m \geq t+3$. \end{enumerate} \end{prop} \begin{proof} \noindent (a) {\em $v_g = -e_g + e_{g-1} + e_{t-1} + \cdots + e_0$ for some $g \geq t+2$.} \noindent By Lemma \ref{l: breakable gappy}, it follows that $v_g = -e_g+e_{g-1} + e_{t-1}$ or $-e_g + e_{g-1} + e_{t-1} + \cdots + e_0$, and $g \geq t+2$. Let us rule out the first possibility. Thus, assume by way of contradiction that this is the case. It follows that $v_g \not\sim v_t$ in $G(S)$. Note that $G(S_{g-1})$ is a path, and that $v_g \sim v_{t-1}$. Suppose that $v_g \sim v_{g-1}$. It follows that $v_{g-1} \sim v_{t-1}$, since otherwise $G(S)$ contains an induced cycle of length $> 3$. However, since $S_{g-1}$ is built from $S_t$ by a sequence of expansions, it follows that $t-1 = \min({\textup{supp}}(v_{g-1}))$. But this implies that $v_g \not\sim v_{g-1}$, a contradiction. Hence $v_g \not\sim v_{g-1}$. But then $(v_{t-1};v_i,v_{g-1},v_g)$ induces a claw, where $i = t$ if $t = 2$, and $i = t-2$ if $t > 2$. This contradiction shows that $v_g = -e_g + e_{g-1} + e_{t-1} + \cdots + e_0$, as desired. \noindent (b) {\em $g = t+2$.} \noindent Observe that $|v_{t+1}| \in \{2,3\}$ since $S_{t+1}$ is an expansion on $S_t$. If $|v_{t+1}| = 2$, then $v_{t+1} \sim v_g$ since otherwise $(v_t;v_1,v_{t+1},v_g)$ induces a claw. If $|v_{t+1}|=3$, then $v_{t+1} \not\sim v_g$, since otherwise $(v_g,v_t,v_1,\cdots,v_{t-1})$ induces a cycle of length $>3$ in $G(S)$. It follows in either case that $g = t+2$, as desired. \noindent (c) {\em If $v_h = -e_h + e_{h-1} + e_{t-1} + \cdots + e_0$, then $h = g$.} \noindent For if $h \ne g$, then $T_g, T_h \prec T_t$ and $T_g$ and $T_h$ are distant, and $(v_t;v_1,v_g,v_h)$ induces a claw. \noindent (d) {\em If $v_h = -e_h + e_{h-1} + e_{t-1}$, then $h = g+1 = t+3$.} \noindent For if $h > g+1$, then $(v_h,v_g,v_t,v_1,\dots,v_{t-1})$ spans a cycle in $G(S)$ that is missing the edge $(v_h,v_t)$. Henceforth we write $h = g+1$ if $v_{g+1}$ takes the form in article (d), and $h = g$ otherwise. It follows from Lemma \ref{l: breakable gappy} and articles (c) and (d) that if $|v_m| \geq 3$ for some $m > h$, then $z_m \notin T_t$. In particular, $v_m \sim v_i$ iff $T_m$ and $T_i$ abut for all $m > h$. \noindent (e) {\em If $m > h$, then $v_m$ is not gappy.} \noindent For suppose that $v_m$ were gappy for some minimal $m > h$, let $k = \min({\textup{supp}}(v_m))$, and let $j$ denote a minimal gappy index for $v_m$. Then $k > 0$, since otherwise $|v_{j+1}| \geq 3$ implies that ${\langle} v_{j+1},v_g {\rangle} \geq 2$, and then $t = j+1$ and $z_g \in T_t$, a contradiction. Now ${\langle} v_m,v_k {\rangle} = -1$ and ${\langle} v_m, v_{j+1} {\rangle} =1$, so $v_m \sim v_k, v_{j+1}$, and since $G(S_{m-1})$ is connected, it follows that $v_k \sim v_{j+1}$. Furthermore, $T_g \dagger T_{j+1}$ since $|v_g|, |v_{j+1}| \geq 3$, and since $(v_k,v_{j+1},v_g)$ is a positive triangle, it follows that ${\langle} v_k,v_{j+1} {\rangle} = -1$. Thus, $v_{j+1,k} = 1$, and since ${\langle} v_{j+1},v_g {\rangle} \leq 1$, it follows that $j = k$.
Now, $v_{k,k-1} \ne 0$, so it follows that $v_{k+1,k-1} = 0$. Consequently, $v_{k+1}$ is gappy. Since $m > h$ was chosen minimal, it follows that $k+1 \in \{g,h\}$. However, the only way that this can occur and satisfy ${\langle} v_k, v_{k+1} {\rangle} = -1$ is if $k+1 = g = t+2$ and $|v_{t+1}|=2$. But in this case, $(v_t,v_{t+1},v_m,v_{t+2})$ spans a cycle that is missing the edge $(v_t,v_m)$, a contradiction. It follows that no such $m$ exists, as desired. Thus, $z_m \notin T_t$. \noindent (f) {\em $\min({\textup{supp}}(v_m)) = t+1$ or $\geq t+3$ for all $m > h$.} \noindent Let $k = \min({\textup{supp}}(v_m))$. Since ${\langle} v_g, v_m {\rangle} \leq 1$, it follows that $k \geq t-1$. Lemma \ref{l: engulf} with $i = g$ and $l = m$ implies that $k \notin \{t-1,t+2\}$. Finally, $k \ne t$, since otherwise $(v_t;v_1,v_g,v_m)$ induces a claw. Suppose that there exists a minimal $m > h$ such that $|v_m| \geq 3$. \noindent (g) {\em $k = t+1$.} \noindent For if $k \geq t+3$, then $v_k$ has a smaller neighbor $v_i$, and $(v_k;v_i,v_{k+1},v_m)$ induces a claw. \noindent (h) {\em There does not exist $m' > m$ for which $|v_{m'}| \geq 3$.} \noindent Suppose otherwise, and let $k' = \min({\textup{supp}}(v_{m'}))$. If $k' = t+1$, then either $(v_{t+1},v_m,v_{m'})$ or $(v_g,v_m,v_{m'})$ forms a heavy triple, depending on $|v_{t+1}| \in \{2,3\}$. From (f) it follows that $k' \geq t+3$. Thus, $k'+1 = m$, since otherwise $|v_{k'+1}|=2$, $v_{k'}$ has a smaller neighbor $v_i$, and $(v_{k'};v_i,v_{k'+1},v_{m'})$ induces a claw. Thus, $(v_h,\dots,v_{m-1},v_{m'})$ induces a path. If $h = t+2$, then we obtain a contradiction to Lemma \ref{l: engulf} with $i = g$ and $l = m'$. If $h = t+3$, then we obtain a similar contradiction with a bit more work. Specifically, considering the path $(v_t,v_1,\dots,v_{t-1},v_{t+3},\dots,v_{m-1},v_{m'})$, it follows that the interval $T_{t+3} \cup \bigcup_{i=1}^{t-1}T_i$ contains one endpoint of $T_t$ and the interval $T_{m'} \cup \bigcup_{i=t+3}^{m-1} T_i$ contains the other. As $z_{m'} \notin T_t$, it follows that $z_{t+3}$ is the unique vertex of degree $\geq 3$ in $T_t$, a contradiction, since $z_{t+2} \in T_t$ as well. The four cases stated in the Proposition now follow from the possibilities $|v_{t+1}| \in \{2,3\}$ and $h \in \{t+2,t+3\}$. \end{proof} \section{Producing the Berge types}\label{s: cont fracs} The goal of this section is to show how the families of linear lattices enumerated in the structural Propositions of Sections \ref{s: just right}, \ref{s: gappy, no tight}, and \ref{s: tight} give rise to the homology classes of Berge knots tabulated in Subsection \ref{ss: list}. Subsection \ref{ss: methodology} describes the methodology and Table \ref{table: A} collects the results. Subsection \ref{ss: cont fracs} contains the necessary background material about continued fractions, and Subsections \ref{ss: large} and \ref{ss: small} carry out the details. \subsection{Methodology}\label{ss: methodology} Given a standard basis $S$ expressed in one of the structural Propositions, we show that the changemaker lattice it spans is isomorphic to a linear lattice $\Lambda(p,q)$ by converting $S$ into a vertex basis $B = \{x_1,\dots,x_n\}$ for it. Letting $\nu$ denote the sequence of norms $(|x_1|,\dots,|x_n|)$, we recover $p$ as the numerator $N[\nu]^- = N[|x_1|,\dots,|x_n|]^-$ of the continued fraction. We recover the value $k$ of Proposition \ref{p: homology}, and hence $q \equiv - k^2 \pmod p$, in the following way.
Let $B^\star$ denote the elements in $B$ that pair non-trivially with $e_0$, let $\nu_i = (|x_1|,\dots,|x_i|)$, and let $p_i = N[\nu_i]^-$. Then \begin{equation}\label{e: k} k = \sum_{x_i \in B^\star} p_{i-1} {\langle} x_i, e_0 {\rangle} \end{equation} according to \eqref{e: x}, Proposition \ref{p: homology}, and Lemma \ref{l: cont frac basics}(1). In practice, $B^\star$ contains at most three elements, and each value ${\langle} x_i, e_0 {\rangle}$ is typically $\pm 1$ (in case of Proposition \ref{p: tight unbreakable}, it can equal $\pm 2$). \medskip \noindent {\em Example.} As an illustrative example, consider a standard basis $S$ as in Proposition \ref{p: just right 1}(1). By inspection, $\widehat{G}(S)$ is nearly a path, which suggests that $S$ is not far off from a vertex basis. Indeed, a little manipulation shows that \[ B = \{-v_s^\star,-v_{s+2},v_{s+3},v_{s-1},\dots,v_1^\star, -(v_{s+1}+v_{s-1}+\cdots+v_1)^\star\} \] is a vertex basis for the lattice spanned by $S$. The elements denoted by a star ($\star$) belong to $B^\star$. From $B$ we obtain the sequence of norms \[ \nu = (s+1,3,5,2^{[s-1]},3), \] using $2^{[t]}$ as a shorthand for a sequence of $t$ 2's. In order to determine $p$, we calculate \[p = N[s+1,3,5,2^{[s-1]},3]^- = N[s+1,3,4,-s,2]^- = N[s+1,-3,4,s,-2]^+ = 22s^2+31s+11,\] using Lemma \ref{l: cont frac basics 2}(1) for the second equality and Mathematica \cite{mathematica} for the last one. In order to determine $k$, we consider the substrings \[ \nu_0 = \varnothing, \quad \nu_{s+1} = (s+1,3,5,2^{[s-2]}), \quad \nu_{s+2} = (s+1,3,5,2^{[s-1]}). \] Weight the numerator of each $[\nu_{i-1}]^-$ by ${\langle} x_i, e_0 {\rangle}$, which equals the sign $\pm 1$ appearing on the leading term in the starred expression, and add them up to obtain the value $k$. Thus, \begin{eqnarray*} k &=& -N[\varnothing]^- + N[s+1,3,5,2^{[s-2]}]^- - N[s+1,3,5,2^{[s-1]}]^- \\ &=& -1 + N[s+1,3,4,-(s-1)]^- - N[s+1,3,4,-s]^- \\ &=& -1 + N[s+1,-3,4,s-1]^+ - N[s+1,-3,4,s]^+ \\ &=& -1 + (11s^2-s-5) - (11s^2+10s+2) \\ &=& 11(-s-1)+3. \end{eqnarray*} Since $s \geq 2$ and $p = (2k^2+k+1)/11$, it follows that the standard bases of \ref{p: just right 1}(1) correspond to Berge type X with $k \leq 11(-3)+3$. The result of this example appears in Table \ref{table: A} and as the first entry in Table \ref{table: 1}. \medskip In this manner we extract a linear lattice, described by the pair $(p,k)$, from each standard basis expressed in the structural Propositions. In the process, we show that these values account for precisely the pairs $(p,k)$ tabulated in Subsection \ref{ss: list}. Table \ref{table: A} displays the results. Note that Proposition \ref{p: gappy structure}(2) gets reported in terms of its constituents, Lemmas \ref{l: S_g nothing}(1,3,4,5) and \ref{l: S_g not nothing}. 
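\medskip The computation in the example above is readily mechanized. The following minimal Python sketch (a convenience check only, not part of the argument; the helper name \texttt{N} is ad hoc) evaluates numerators via the recurrence $p_j = a_j p_{j-1} - p_{j-2}$ of Lemma \ref{l: cont frac basics}(1) below, and confirms the stated values of $p$ and $k$ for small $s$:

\begin{verbatim}
# Verification sketch: compute the numerator N[a_1,...,a_n]^- of a
# negative continued fraction via p_j = a_j p_{j-1} - p_{j-2}, with
# p_0 = 1 and p_{-1} = 0.
def N(nu):
    p_prev, p = 0, 1
    for a in nu:
        p_prev, p = p, a * p - p_prev
    return p

# Check nu = (s+1, 3, 5, 2^[s-1], 3) against the example's formulas.
for s in range(2, 50):
    nu = [s + 1, 3, 5] + [2] * (s - 1) + [3]
    p = N(nu)
    # k = -N[nu_0]^- + N[nu_{s+1}]^- - N[nu_{s+2}]^-, as in the text
    k = -N([]) + N(nu[:s + 1]) - N(nu[:s + 2])
    assert p == 22 * s**2 + 31 * s + 11
    assert k == 11 * (-s - 1) + 3
    assert 11 * p == 2 * k**2 + k + 1
\end{verbatim}
\medskip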
\begin{table}[h] \caption{Structural Propositions sorted by Berge type.} \label{table: A} \begin{center} {\small \begin{tabular}{l l l l} {\bf I$_+$} \ref{p: just right 3}(1,3) & {\bf I$_-$} \ref{p: tight unbreakable}(1,3) & {\bf II$_+$} \ref{p: tight unbreakable}(2) & {\bf II$_-$} \ref{p: just right 1}(3) \\ \cr {\bf III}(a)$_+$ \ref{p: breakable 2}(1), \ref{p: breakable 3}(1) & (a)$_-$ \ref{p: just right 3}(2), \ref{l: S_g not nothing}(1,2), \ref{p: gappy structure}(3) & (b)$_+$ \ref{p: breakable 1}(1), \ref{p: breakable 3}(4) & (b)$_-$ \ref{l: S_g not nothing}(3) \\ \cr {\bf IV}(a)$_+$ \ref{p: breakable 1}(2), \ref{p: breakable 3}(2) & (a)$_-$ \ref{l: S_g nothing}(3) & (b)$_+$ \ref{p: breakable 3}(3) & (b)$_-$ \ref{p: just right 2}(5), \ref{l: S_g nothing}(1), \ref{p: gappy structure}(3) \\ \cr {\bf V}(a)$_+$ \ref{p: breakable 2}(2) & (a)$_-$ \ref{p: just right 2}(5), \ref{p: just right 3}(2), \ref{l: S_g nothing}(4) & (b)$_+$ \ref{p: breakable 2}(3) & (b)$_-$ \ref{l: S_g nothing}(5) \\ \cr {\bf VII} \ref{p: gappy structure}(1) & {\bf VIII} \ref{p: berge viii} & {\bf IX} \ref{p: just right 1}(2), \ref{p: just right 2}(2,3) & {\bf X} \ref{p: just right 1}(1), \ref{p: just right 2}(1,4) \end{tabular} } \end{center} \end{table} We mention one caveat, which amounts to the overlap between different Berge types. For example, in type III(a)$_+$, \ref{p: breakable 2}(1) and \ref{p: breakable 3}(1) account for the cases that $d =2$, $(k+1)/d \geq 5$ and $d=3$, $(k+1)/d \geq 3$, respectively (Table \ref{table: 2}). What happens when $(d, (k+1)/d) \in \{ (2,3), (1, *), (*,1) \}$? For $(2,3)$, notice that we obtain the same family of examples by setting $d = 3, (k+1)/d = 2$ in type V(a)$_+$. This is covered by \ref{p: breakable 2}(2). Moreover, \ref{p: breakable 2}(2) fills out most of V(a)$_+$, while only this sliver of it applies to III(a)$_+$. For that reason, we only report \ref{p: breakable 2}(2) next to V(a)$_+$ in Table \ref{table: A}. Similarly, the cases of $(1,*)$ and $(*,1)$ correspond to II$_-$ with $i=2$ and I$_-$ with $i=1$, respectively. In general, it is not difficult to identify the overlaps of this sort and use Table \ref{table: A} to obtain a complete correspondence between structural Propositions and Berge types. In a few places the overlap is explicit: \ref{p: just right 2}(5), \ref{p: just right 3}(2), and \ref{p: gappy structure}(3) each appear twice in Table \ref{table: A}. The correspondence between structural Propositions and Berge types exhibits some interesting features. For example, amongst the ``small'' families (defined just below), and excluding the special cases of \ref{p: just right 2}(5) and \ref{p: just right 3}(1), \begin{itemize} \item all elements of $S$ are just right iff $L$ is an exceptional type (IX or X); \item $S$ has a gappy vector but no tight one iff $L$ is of $-$ type; \item $S$ has a tight vector iff $L$ is of $+$ type. \end{itemize} It would be interesting to examine the geometric significance of this correspondence. In determining the values $(p,k)$ from the structural Propositions, it is useful to partition these families into two broad classes: {\em large families}, those that involve a sequence of expansions, and {\em small families}, those that do not. The large families (along with \ref{p: just right 3}(1)) correspond to Berge types I, II, VII, and VIII in Table \ref{table: A}. Determining the relevant values $(p,k)$ for these families occupies Subsection \ref{ss: large}.
The small families, while more numerous, are considerably simpler to address. We take them up in Subsection \ref{ss: small}. Excluding \ref{p: just right 3}(1), they correspond to Berge types III, IV, V, IX, and X in Table \ref{table: A}. Lastly, we remark that the determination of the isomorphism types of the sums of linear lattices enumerated in Proposition \ref{p: decomposable structure} follows as well, and involves far fewer cases. As it turns out, they correspond precisely to the sums of lens spaces that arise by surgery along a torus knot or a cable thereof. For example, Proposition \ref{p: expansion} enumerates the sums of linear lattices spanned by standard bases built from $\varnothing$ by a sequence of expansions. They correspond to the connected sums $- (L(p,q) \# L(q,p))$ that result from $pq$-surgery along the positive $(p,q)$-torus knots. In fact, \cite[Theorem 1.5]{greene:cabling} asserts a much stronger conclusion: if surgery along a knot produces a connected sum of lens spaces, then it is either a torus knot or a cable thereof. We refer to \cite{greene:cabling} for further details. \subsection{Minding $p$'s and $q$'s.}\label{ss: cont fracs} Given a basis $C = \{v_1,\dots,v_n\}$ built from $\varnothing$ by a sequence of expansions, augment $C$ by a vector $v'_{n+1} := \sum_{i=k}^n e_i$, where $k = 0$ if $|v_i| = 2$ for all $v_i \in C$; $k = n-1$ if $|v_n| \geq 3$; and $k$ is the maximum index of a vector in $C$ with norm $\geq 3$ otherwise. Observe that $C':= C \cup \{v'_{n+1}\}$ spans a lattice isomorphic to a sum of two non-zero linear lattices for which $C'$ is a vertex basis. More precisely, partition $C' = \{v_{i_1},\dots,v_{i_l}\} \cup \{v_{j_1},\dots,v_{j_m}\}$ into vertex bases for the two summands, where $i_1 > \cdots > i_l$ and $n+1=j_1 > \cdots > j_m$, and write $(a_1,\dots,a_l) = (|v_{i_1}|,\dots,|v_{i_l}|)$ and $(b_1,\dots,b_m) = (|v_{j_1}|,\dots,|v_{j_m}|)$. Then ${\langle} C' {\rangle} \cong \Lambda(p,q) \oplus \Lambda(p',q')$, where $p/q = [a_1,\dots,a_l]^-$ and $p'/q' = [b_1,\dots,b_m]^-$. The following result sharpens this statement. \begin{lem}\label{l: aug exp} The lattice spanned by $C$ is isomorphic to $\Lambda(p,q) \oplus \Lambda(p,p-q)$ for some $p > q > 0$. \end{lem} Note that Lemma \ref{l: aug exp} implies a relationship between the Hirzebruch-Jung continued fraction expansions of $p/q$ and $p/(p-q)$. This is nicely expressed by the Riemenschneider point rule (see the German original, \cite[pp. 222-223]{riemenschneider}; or \cite[pp. 2158-2159]{lisca:lens2}). \begin{proof} We proceed by induction on $n=|C|$. When $n=1$, we have $C' = \{e_0-e_1,e_0+e_1\}$, and $C'$ spans a lattice isomorphic to $\Lambda(2,1) \oplus \Lambda(2,1)$, from which the Lemma follows with $p = 2$ and $q = 1$. For $n > 1$, observe that $C'$ is constructed from $C'_{n-1}$ by either setting $v_n = v'_n-e_n$ and $v'_{n+1} = e_{n-1}+e_n$, or else $v_n = e_{n-1}-e_n$ and $v'_{n+1} = v'_n + e_n$. By induction, $C'_{n-1}$ determines two strings of integers $(a_1,\dots,a_l)$ and $(b_1,\dots,b_m)$, and ${\langle} C'_{n-1} {\rangle} \cong \Lambda(p,q) \oplus \Lambda(p,p-q)$, where $p/q = [a_1,\dots,a_l]^-$ and $p/(p-q) = [b_1,\dots,b_m]^-$. Swapping the roles of $q$ and $p-q$ if necessary, $C'$ determines the strings $(2,a_1,\dots,a_l)$ and $(b_1+1,\dots,b_m)$, for which we calculate $[2,a_1,\dots,a_l]^- = 2 - 1/(p/(p-q)) = (p+q)/p$ and $[b_1+1,\dots,b_m]^- = 1 + p/(p-(p-q)) = (p+q)/q$.
Therefore, ${\langle} C' {\rangle} \cong \Lambda(p+q,q) \oplus \Lambda(p+q,p)$, which takes the desired form and completes the induction step. \end{proof} \begin{prop}\label{p: expansion} Suppose that $C$ is built from $\varnothing$ by a sequence of expansions. If $|v_i|=2$ for all $v_i \in C$, then $L \cong \Lambda(n+1,n)$. Otherwise, $L \cong \Lambda(p,q) \oplus \Lambda(r,s)$ for some $p > q > 0$, where $r = p-q$ and $s$ denotes the least positive residue of $-p \pmod r$. \end{prop} \begin{proof} Augment $C$ to $C'$ as above and write ${\langle} C' {\rangle} \cong \Lambda(p,q) \oplus \Lambda(p,p-q)$ according to Lemma \ref{l: aug exp}. Then $C$ determines the strings $(a_1,\dots,a_l)$ and $(b_2,\dots,b_m)$, where the second string is empty in case $m = 1$. We have $b_1 = \lceil p/(p-q) \rceil$, from which it easily follows that $[b_2,\dots,b_m]^- =r/s$ when $m > 1$, with the values $r$ and $s$ as above. Thus, $L$ takes the desired form in this case. Furthermore, when $m=1$, it follows easily that $L \cong \Lambda(n+1,n)$. This establishes the Proposition. \end{proof} \begin{defin}\label{d: cont frac} Given integers $a_1,\dots,a_l \geq 2$, write $p_j / q_j = [a_1,\dots,a_j]^-$, $r_j = p_j - q_j$, and $p_0 = 1$, and define integers $b_1,\dots,b_m \geq 2$ by $[b_1,\dots,b_m]^- = p_l / r_l$. \end{defin} \noindent Thus, Definition \ref{d: cont frac} relates the strings $(a_1,\dots,a_l)$ and $(b_1,\dots,b_m)$ preceding Lemma \ref{l: aug exp}. \begin{lem}\label{l: cont frac basics} Given integers $a_1,\dots,a_n \geq 2$ and an indeterminate $x$, the following hold: \begin{enumerate} \item $p_j = p_{j-1} a_j - p_{j-2}$ and $q_j = q_{j-1} a_j - q_{j-2}$; \item $[a_1,\dots,a_n,x]^- = (p_n x-p_{n-1})/(q_n x - q_{n-1})$; \item $[a_j,\dots,a_1]^- = p_j / p_{j-1}$; \item $p_{j-1}$ is the least positive residue of $q_j^{-1} \pmod{p_j}$; \item $q_{j-1}$ is the least positive residue of $-p_j^{-1} \pmod{q_j}$; \item $r_{j-1}$ is the least positive residue of $p_j^{-1} \equiv q_j^{-1} \pmod{r_j}$. \end{enumerate} \end{lem} \begin{proof}[Proof sketch.] Item (1) follows by induction on $j$, using the identity \[[a_1,\dots,a_j,a_{j+1}]^- = [a_1,\dots,a_j - 1/a_{j+1}]^-.\] Item (2) follows at once from (1). Item (3) follows from $[a_{j+1},\dots,a_1]^- = a_{j+1}-p_{j-1}/p_j$ and (1). The identity \[p_{j-1} q_j - p_j q_{j-1} = 1\] follows from (1) and induction; the inequalities $0 < p_{j-1} < p_j$ and $0 < q_{j-1} < q_j$ follow from the fact that $a_j \geq 2$; and items (4) and (5) follow from these observations. From the preceding identity we obtain \[ p_j r_{j-1} - p_{j-1} r_j = 1 \quad \text{and} \quad q_j r_{j-1}- q_{j-1} r_j = 1,\] and (1) implies that $0 < r_{j-1} < r_j$. Item (6) now follows as well. \end{proof} We collect a few more useful facts whose routine proofs follow from Lemma \ref{l: cont frac basics}.
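\medskip \noindent {\em Example.} Before stating these, we illustrate Definition \ref{d: cont frac} and Lemma \ref{l: cont frac basics} on a small string: take $(a_1,a_2,a_3) = (2,3,4)$. Item (1) gives $(p_1,p_2,p_3) = (2,5,18)$ and $(q_1,q_2,q_3) = (1,3,11)$, so $[2,3,4]^- = 18/11$ and $(r_1,r_2,r_3) = (1,2,7)$. Expanding $p_3/r_3 = 18/7$ produces $(b_1,b_2,b_3,b_4) = (3,3,2,2)$. In agreement with items (4), (5), and (6), $q_3^{-1} \equiv 5 = p_2 \pmod{18}$, since $11 \cdot 5 \equiv 1 \pmod{18}$; $-p_3^{-1} \equiv -8 \equiv 3 = q_2 \pmod{11}$; and $p_3^{-1} \equiv q_3^{-1} \equiv 2 = r_2 \pmod{7}$, since $18 \equiv 11 \equiv 4 \pmod{7}$ and $4 \cdot 2 \equiv 1 \pmod{7}$. \medskip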
Following Lisca, we use the shorthand \[(\dots,2^{[t]},\dots) := (\dots,\underbrace{2,\dots,2}_t,\dots).\] \begin{lem}\label{l: cont frac basics 2} The following identities hold: \begin{enumerate} \item $[\dots,b+1,2^{[a-1]},c+1,\dots]^- = [\dots,b,-a,c,\dots]^-$; \item $[2^{[a-1]},b+1,\dots]^- = p/q \implies [-a,b,\dots]^- = -p/(p-q)$; \item $[\dots,b+1,2^{[a-1]}]^- = [\dots,b,-a]^-$; \item $[b_m,\dots,b_2]^- = r_l/(r_l - r_{l-1})$; \item $[a_1,\dots,a_l,t+1,b_m,\dots,b_2]^- = (p_l r_l t+1)/(q_l r_l t + 1)$; \item $[a_1,\dots,a_l+1,2^{[t-2]},b_m+1,\dots,b_2]^- = (p_l r_l t - 1)/(q_l r_l t -1)$; \item $[a_1,\dots,a_l,b_m,\dots,b_1]^- = (p_l^2 -p_l p_{l-1} + p_{l-1}^2)/(p_l q_l -p_l q_{l-1} + p_{l-1} q_{l-1})$; \item $[a_1,\dots,a_l,b_m,\dots,b_2]^- = (p_l r_l - p_{l-1} r_l + p_{l-1} r_{l-1})/(q_l r_l -q_{l-1} r_l + q_{l-1} r_{l-1})$; \item $[a_1,\dots,a_l+b_m+1,\dots,b_1]^- = (p_l^2 + p_l p_{l-1} - p_{l-1}^2)/(q_l p_l + q_{l-1} p_l - q_{l-1} p_{l-1} - 1)$; \item $[a_1,\dots,a_l+b_m+1,\dots,b_2]^- = (p_l r_l + p_{l-1} r_l - p_{l-1} r_{l-1} -1)/(q_l r_l + q_{l-1} r_l - q_{l-1} r_{l-1} - 1)$. \qed \end{enumerate} \end{lem} \subsection{Large families}\label{ss: large} For each standard basis $S$ occurring in a large family, we alter at most one $v_i \in S$ to another $\overline{v}_i$ such that $S - \{v_i\} \cup \{ \overline{v}_i \}$ is a vertex basis, up to reordering and negating some elements. In each case, there exists a unique partition $\{1,\dots,n\} = \{i_1,\dots,i_\lambda \} \cup \{j_1,\dots,j_\mu \}$ with the following properties: \begin{itemize} \item $i_1 = n$, and in the case of Propositions \ref{l: S_g nothing}(2) and \ref{p: berge viii}, $j_1 = n-1$; \item $i_1 > \cdots > i_\lambda$ and $j_1 > \cdots > j_\mu$; \item $\{i_\lambda,j_\mu\} = \left\{1, \min\{ j > 1 \; | \; |v_j| > 2 \} \right\}$; \item the subgraphs of $\widehat{G}(S - \{v_i\} \cup \{ \overline{v}_i \})$ induced on $\{v_{i_1},\dots,v_{i_\lambda }\}$ and $\{v_{j_1},\dots,v_{j_\mu}\}$ are paths with vertices appearing in consecutive order (replacing $v_i$ by $\overline{v}_i$). \end{itemize} For the first five families below, we modify $S$ to a related subset $C$ built from $\varnothing$ by a sequence of expansions. We obtain a pair of strings $(a_1,\dots,a_l)$, $(b_1,\dots,b_m)$ from $C'$, and we express the sequence of norms $\nu$ in terms of them. The values $l$ and $m$ are related to the values $\lambda$ and $\mu$. To determine $p$ and $k$ from $\nu$, we apply Lemmas \ref{l: cont frac basics} and \ref{l: cont frac basics 2}. Frequently it is easier to recover $k'$ instead of $k$ by reversing the order of the basis. \smallskip {\bf Proposition \ref{p: just right 3}(3).} \noindent $B = \{v_{i_1},\dots,v_{i_\lambda},v_{j_\mu},\dots,v_{j_1} \}$ and $B^\star = \{ v_1 \}$; \noindent $C = S - \{v_1\} \subset \text{span}{\langle} e_1,\dots,e_n {\rangle}$; \noindent $\nu = (a_1,\dots,a_l,2,b_m,\dots,b_2)$; \noindent $p = p_l r_l+1$ by \ref{l: cont frac basics 2}(5) with $t=1$; \noindent $k = p_l$ if $j_\mu=1$; $k' = r_l$ if $i_\lambda = 1$. \noindent Note that $p_l \geq 3$ and $r_l \geq 2$. With $\{i,k\} = \{p_l,r_l\}$, it follows that $p = ik +1$ with $i,k \geq 2$ and $\gcd(i,k)=1$. The values $i$ and $k$ are unconstrained besides these conditions. In summary, \ref{p: just right 3}(3) accounts for Berge type I$_+$ with $i,k \geq 2$.
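\medskip Identities (5) and (6), which recur throughout this subsection, are also easy to test by machine. The following minimal Python sketch (a convenience check only, not part of the argument; the helpers \texttt{N} and \texttt{hj} are ad hoc names) derives $(b_1,\dots,b_m)$ from a random string $(a_1,\dots,a_l)$ as in Definition \ref{d: cont frac} and verifies the numerators in Lemma \ref{l: cont frac basics 2}(5) and (6), in particular the value $p = p_l r_l + 1$ just computed:

\begin{verbatim}
# Verification sketch for Lemma (cont frac basics 2)(5),(6).
import random

def N(nu):                       # numerator of [nu]^-
    p_prev, p = 0, 1
    for a in nu:
        p_prev, p = p, a * p - p_prev
    return p

def hj(p, q):                    # digits of p/q = [c_1,...,c_m]^-
    out = []
    while q:
        c = -(-p // q)           # exact ceiling of p/q
        out.append(c)
        p, q = q, c * q - p
    return out

random.seed(0)
for _ in range(1000):
    a = [random.randint(2, 5) for _ in range(random.randint(1, 6))]
    p_l, q_l = N(a), N(a[1:])    # q_l = N[a_2,...,a_l]^-
    r_l = p_l - q_l
    rev = hj(p_l, r_l)[1:][::-1] # (b_m,...,b_2)
    for t in range(1, 5):        # item (5)
        assert N(a + [t + 1] + rev) == p_l * r_l * t + 1
    for t in range(2, 5):        # item (6)
        head = a[:-1] + [a[-1] + 1] + [2] * (t - 2)
        tail = [rev[0] + 1] + rev[1:] if rev else []
        assert N(head + tail) == p_l * r_l * t - 1
\end{verbatim}
\medskip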
{\bf Proposition \ref{p: tight unbreakable}(1).} \noindent $B = \{ -v_{i_1},\dots,-v_{i_\lambda}^\star,v_{j_\mu}^\star,\dots,v_{j_1} \}$; \noindent $C = S \cup \{v',v_t'\} - \{v_t\} \subset \text{span} {\langle} e', e_0, \dots, e_n {\rangle}$, where $v' = -e_0+e'$ and $v_t' = v_t + v'$; \noindent $\nu = (a_1,\dots,a_l+b_m,\dots,b_2)$; \noindent $p = p_l r_l - 1$ by \ref{l: cont frac basics 2}(5) with $t=-1$; \noindent $k = p_l$ if $j_\mu = 1$, using $a_l = 2$ and \ref{l: cont frac basics}(1); $k' = r_l$ if $i_\lambda = 1$ in the same way. \noindent With $\{i,k\} = \{p_l,r_l\}$, it follows that $p = ik -1$ with $i,k \geq 3$ and $\gcd(i,k) = 1$. The values $i$ and $k$ obey two further constraints coming from $\max\{a_l, b_m\} = 3$ and $\max\{a_{l-1}, \dots, a_1, b_{m-1}, \dots, b_2\} \geq s+1 \geq 3$. In summary, \ref{p: tight unbreakable}(1) accounts for part of Berge type I$_-$ with $i,k \geq 3$. {\bf Proposition \ref{p: tight unbreakable}(3).} \noindent The argument is identical to the case of \ref{p: tight unbreakable}(1), switching the conclusions in the case of $i_\lambda = 1$ and $j_\mu = 1$. Now we have the constraint that $\max\{a_l, b_m\} = t+2 \geq 4$. In summary, \ref{p: tight unbreakable}(3) accounts for another part of Berge type I$_-$ with $i,k \geq 3$. {\bf Proposition \ref{p: tight unbreakable}(2).} \noindent $B = \{ v_{i_1},\dots,v_{i_\lambda},v_{j_\mu},\dots,v_{j_1} \}$ and $B^\star = \{ v_1 \}$; \noindent $C = S - \{v_1\} \subset \text{span} {\langle} e_1, \dots, e_n {\rangle}$; \noindent $\nu = (a_1,\dots,a_l,5,b_m,\dots,b_2)$; \noindent $p = 4p_l r_l +1$ by \ref{l: cont frac basics 2}(5) with $t=4$; \noindent $k = 2p_l$ if $j_\mu = 1$; $k' = 2r_l$ if $i_\lambda = 1$. \noindent With $\{i,k\} = \{2p_l,2r_l\}$, it follows that $p = ik+1$ with $i,k \geq 2$ and $\gcd(i,k) = 2$. The values $i,k$ are unconstrained besides these conditions. However, the case $\min\{i,k\}=2$ (which occurs when $s=2$) accounts for Berge I$_-$ with $\min\{i,k\}=2$. In summary, \ref{p: tight unbreakable}(2) accounts for Berge type II$_+$ and this special case of Berge I$_-$. {\bf Proposition \ref{p: just right 1}(3).} \noindent $B = \{ -v_{i_1},\dots,-\overline{v}_{i_{\lambda-2}}^{\; \star},v_{i_{\lambda - 1}},v_{i_\lambda}^\star,v_{j_\mu},\dots,v_{j_1} \}$ if $i_\lambda = 1$, in which case $i_{\lambda-2} = m$, $\overline{v}_{i_{\lambda-2}} = v_m + v_1 + v_2$, $i_{\lambda-1} = 2$, and $j_\mu = 3$; \noindent $C = S \cup \{v_3',v_m'\} - \{v_1,v_2,v_3,v_m\} \subset \text{span} {\langle} e_2,\dots,e_n {\rangle}$, where $v_3' = v_3 - e_1$ and $v_m' = v_m -e_1$; \noindent $\nu = (a_1,\dots,a_l+1,2,2,b_m+1,\dots,b_2)$; \noindent $p = 4p_l r_l -1$ by \ref{l: cont frac basics 2}(6) with $t=4$; \noindent $k = 2p_l$ if $i_\lambda = 1$; $k' = 2r_l$ if $j_\mu = 1$. \noindent With $\{i,k\} = \{2p_l,2r_l\}$, it follows that $p = ik - 1$ where $i,k \geq 4$ and $\gcd(i,k) = 2$. A similar argument applies in case $j_\mu = 1$. In summary, \ref{p: just right 1}(3) accounts for Berge type II$_-$. \smallskip For the two remaining large families, we modify $S$ directly into a subset $C'$ as in Subsection \ref{ss: cont fracs}. Let $(a_1',\dots,a_l')$ and $(b_1',\dots,b_m')$ denote its corresponding strings, and let $(a_1,\dots,a_l)$ and $(b_1,\dots,b_m)$ denote their reversals ($a_i = a_{l+1-i}'$ and $b_j = b_{m+1-j}'$). This notational hiccup results in cleaner expressions for $p$ and $k$. 
Note that these values are still related in the manner of Definition \ref{d: cont frac}, so Lemma \ref{l: cont frac basics 2} applies. \smallskip {\bf Proposition \ref{p: gappy structure}(1).} Here $n = g$. \noindent $B = \{ -v_{i_\lambda}^\star,\dots,-v_{i_1},\overline{v}_{j_1},\dots,v_{j_\mu}^\star \}$ if $i_\lambda = 1$, where $j_1 = n$ and $\overline{v}_n = \overline{v}_g = v_g + v_{g-1} + \cdots + v_{k+2}$;\footnote{Apology: this is the $k$ of Lemma \ref{l: S_g nothing}, not the homology class of $K$.} \noindent $C' = S \cup \{v_n'\} - \{v_n\} \subset \text{span} {\langle} e_0,\dots,e_{n-1} {\rangle}$, where $v_n' = \overline{v}_n + e_n + e_{n-1}$; \noindent $\nu = (a_1,\dots,a_l,b_m,\dots,b_1)$; \noindent $p = p_l^2 -p_l p_{l-1} + p_{l-1}^2$ by \ref{l: cont frac basics 2}(7); \noindent $k = p_l r_l - p_{l-1} r_l + p_{l-1} r_{l-1} - 1$ by \ref{l: cont frac basics 2}(8); also, observe that the difference between the numerator and denominator in $[\nu]^-$ is $D := r_l^2 - r_l r_{l-1} + r_{l-1}^2$. \noindent Now we use the identity $(a^2-ab+b^2)(c^2-cd+d^2) = (e^2-ef+f^2)$, where $e= ac-bc+bd$ and $f = ad-bc$. We apply this identity with $a = p_l, b = p_{l-1}, c = r_l$, and $d = r_{l-1}$, noting that $f = 1$. It follows that $p \cdot D = (k+1)^2 - (k+1) + 1 = k^2+k+1$. Up to renaming variables, the same argument applies in case $j_\mu = 1$. In summary, \ref{l: S_g nothing}(2) accounts for Berge type VII. {\bf Proposition \ref{p: berge viii}.} Here $n = t$. \noindent $B' = \{ v_{i_\lambda}^\star,\dots, v_{i_1},\overline{v}_{j_1},\dots,v_{j_\mu}^\star \}$ if $i_\lambda = 1$, where $j_1 = n$ and $\overline{v}_n = \overline{v}_t = v_t -(v_{t-1} + \cdots + v_1)$; \noindent $C' = S \cup \{v_n', v_{n+1}' \} - \{ v_n \}$, where $v_n' = \overline{v}_n - e_{n-1}$ and $v_{n+1}' = e_{n-1}+e_n$; \noindent $\nu = (a_1,\dots,a_l+b_m + 1, \dots, b_1)$; \noindent $p = p_l^2 + p_l p_{l-1} - p_{l-1}^2$ by \ref{l: cont frac basics 2}(9); \noindent $k = p_l r_l +p_{l-1} r_l - p_{l-1} r_{l-1}$ by \ref{l: cont frac basics 2}(10); and the difference between the numerator and denominator in $[\nu]^-$ is $D = r_l^2 + r_l r_{l-1} - r_{l-1}^2$. \noindent Now we use the identity $(a^2+ab-b^2)(c^2+cd-d^2) = (e^2+ef-f^2)$, where $e = ac-bd+bc$ and $f = ad-bc$. As before, we apply it with $a = p_l, b = p_{l-1}, c = r_l$, and $d = r_{l-1}$, noting that $f = 1$. It follows that $p \cdot D = k^2 + k - 1$. Again, the same conclusion holds if instead $j_\mu = 1$. Replacing $k$ by $-k$, it follows in summary that \ref{p: berge viii} accounts for Berge type VIII. \subsection{Small families}\label{ss: small} {\scriptsize \begin{table} \caption{Small families and their Berge types} \label{table: 1} \begin{center} \makebox[0pt]{ \begin{tabular}{c | l l} Prop$^{\text{n}}$. 
& Berge type & $B$ and $B^\star$; $\nu$; $k$, $p$ \\[.1cm] \hline \hline \\[-.3cm] \ref{p: just right 1}(1) & X, $k \leq 11(-3)+3$ & $\{-v_s^\star,-v_{s+2},v_{s+3},v_{s-1},\dots,v_1^\star, -(v_{s+1}+v_{s-1}+\cdots+v_1)^\star$\} \\[.1cm] && $(a+1,3,5,2^{[a-1]},3)$, $a=s \geq 2$ \\[.1cm] && $k = 11(-a-1)+3$, $p=(2k^2+k+1)/11$ \\[.1cm] \cline{2-3} \\[-.3cm] \ref{p: just right 1}(2) & IX, $k \leq 11(-3)+2$ & $\{-v_s^\star, -v_{s+3}, v_{s+2}, v_{s-1}, \dots, v_1^\star, -(v_{s+1}+v_{s-1}+\cdots+v_1)^\star\}$ \\[.1cm] && $(a+1,4,4,2^{[a-1]},3)$, $a=s \geq 2$ \\[.1cm] && $k=11(-a-1)+2$, $p= (2k^2+k+1)/11$ \\[.1cm] \cline{1-3} \\[-.3cm] \ref{p: just right 2}(1) & X, $k = 11(-2)+3$ & $\{-v_1^\star,-v_3,v_4^\star,-v_2^\star\}$ \\[.1cm] && $(2,3,5,3)$ \\[.1cm] && $k = -19$, $p=(2k^2+k+1)/11=64$ \\[.1cm] \cline{2-3} \\[-.3cm] \ref{p: just right 2}(2) & IX, $k = 11(-2)+2$ & $\{-v_1^\star,-v_4,v_3^\star, -v_2^\star\}$ \\[.1cm] && $(2,4,4,3)$ \\[.1cm] && $k=-20$, $p=(2k^2+k+1)/11=71$ \\[.1cm] \cline{2-3} \\[-.3cm] \ref{p: just right 2}(3) & IX, $k \geq 11(2)+2$ & $\{-v_1^\star, \dots, -v_{s-1}, -v_{s+3}, v_{s+2}, v_s^\star, v_{s+1}\}$ \\[.1cm] && $(2^{[a-1]},5,3,a+1,2), a=s \geq 2$ \\[.1cm] && $k=11a+2$, $p= (2k^2+k+1)/11$ \\[.1cm] \cline{2-3} \\[-.3cm] \ref{p: just right 2}(4) & X, $k \geq 11(2)+3$ & $\{-v_1^\star,\dots,-v_{s-1},-v_{s+2}, v_{s+3}, v_s^\star,v_{s+1}\}$ \\[.1cm] && $(2^{[a-1]},4,4,a+1,2)$, $a=s \geq 2$ \\[.1cm] && $k=11a+3$, $p=(2k^2+k+1)/11$ \\[.1cm] \cline{2-3} \\[-.3cm] \ref{p: just right 2}(5) & IV(b)$_-, d = 3, {2k-1 \over d} \geq 5$ & $\{v_2, v_1^\star, v_m, -v_3^\star, \dots, -v_{m-1},-v_{m+1}, \dots,-v_n\}$ \\[.1cm] & {\em and} & $(2,2,a+3, 4, 2^{[a-1]}, 3, 2^{[b-1]})$, $a = m-3 \geq 1, b = n-m \geq 0$ \\[.1cm] & V(a)$_-, d = 3, {k+1 \over d} \geq 3$ & $k=3a+5$, $p=(b+1)k^2-3(k+1)$ \\[.1cm] \cline{1-3} \\[-.3cm] \ref{p: just right 3}(1) & any type with $k=1$ & $\{v_1^\star, \dots,v_n\}$ \\[.1cm] & & $(2^{[n]})$ \\[.1cm] && $k=1$, $p=n+1$ \\[.1cm] \cline{2-3} \\[-.3cm] \ref{p: just right 3}(2) & III(a)$_-$, $d=2, {k+1 \over d}=3$ & $\{-v_2,-v_1^\star,v_3,v_4^\star,\dots,v_n\}$ \\[.1cm] & {\em and} & $(2,2,3,5,2^{[a-1]})$, $a \geq 1$ \\[.1cm] & V(a)$_-$, $d=3, {k+1 \over d} = 2$& $k = 5$, $p = 25(a+1)-18$ \\[.1cm] \cline{1-3} \\[-.3cm] \ref{l: S_g nothing}(1) & IV(b)$_-, d \geq 5, {2k-1 \over d} \geq 5$ & $\{v_s^\star,v_{s+1},-(v_g+v_1+\cdots+v_{s-1}+v_{s+1})^\star, v_1^\star,\dots,v_{s-1},v_{s+2},\dots,v_{g-1}, v_{g+1},\dots,v_n\}$ \\[.1cm] && $(a+1,2,b+3, 2^{[a-1]}, 4,2^{[b-1]},3,2^{[c-1]})$, $a=s \geq 2, b=g-s-2 \geq 1, c=n-g \geq 0$ \\[.1cm] && $k=2ab+3a+b+2$, $p=(c+1)k^2- (2a+1)(k+1)$ \\[.1cm] \cline{2-3} \\[-.3cm] \ref{l: S_g nothing}(3) & IV(a)$_-, d \geq 5, {2k+1 \over d} \geq 5$ & $\{-v_1^\star,\dots,-v_{s-1},-v_{s+1},v_g,v_s^\star, v_{s+2}, \dots,v_{g-1},v_{g+1}, \dots, v_n\}$ \\[.1cm] && $(2^{[a-1]},3,b+2,a+1, 3,2^{[b-1]},3,2^{[c-1]})$, $a=s \geq 2, b=g-s-2 \geq 1, c=n-g \geq 0$ \\[.1cm] && $k=2ab+3a+b+1$, $p=(c+1)k^2 - (2a+1)(k-1)$ \\[.1cm] \cline{2-3} \\[-.3cm] \ref{l: S_g nothing}(4) & V(a)$_-, d \geq 5, {k+1 \over d} \geq 3$ & $\{-v_j,-v_1^\star,-v_g,v_2^\star,\dots,v_{j-1}, v_{j+1}, \dots, v_{g-1},v_{g+1},\dots,v_n\}$ \\[.1cm] && $(a+2,2,b+3,3,2^{[a-1]}, 3,2^{[b-1]},3,2^{[c-1]})$, $a = j-2 \geq 1, b= g-j-1 \geq 1, c=n-g \geq 0$ \\[.1cm] && $k=2ab+4a+3b+5$, $p=(c+1)k^2- (2a+3)(k+1)$ \\[.1cm] \cline{2-3} \\[-.3cm] \ref{l: S_g nothing}(5) & V(b)$_-, d \geq 3, {k-1 \over d} \geq 2$ & $\{-v_{j-1},\dots,-v_2^\star,v_g,v_1^\star,v_j, \dots,v_{g-1},v_{g+1},\dots,v_n\}$ 
\\[.1cm] && $(2^{[a-1]},3,b+2,2,a+2,2^{[b-1]},3,2^{[c-1]})$, $a=j-2 \geq 1, b=g-j \geq 1, c=n-g \geq 0$ \\[.1cm] && $k=2ab+2a+b+2$, $p=(c+1)k^2- (2a+1)(k-1)$ \end{tabular} } \end{center} \end{table} } {\scriptsize \begin{table} \caption{Small families (cont$^\text{d}$)} \label{table: 2} \begin{center} \makebox[0pt]{ \begin{tabular}{c | l l} \hline \hline \\[-.3cm] \ref{l: S_g not nothing}(1) & III(a)$_-, d \geq 3, {k+1 \over d} \geq 5$ & $\{v_s^\star,v_g,-v_{s+1},-v_{s-1},\dots,-v_1^\star, (v_{s+2}+v_1+\cdots+v_{s-1})^\star, \dots, v_{g-1}, v_{g+1},\dots,v_n\}$ \\[.1cm] && $(a+1,b+2,3,2^{[a-1]}, 4,2^{[b-1]},3,2^{[c-1]})$, $a=s \geq 2, b=g-s-2 \geq 1, c=n-g \geq 0$ \\[.1cm] && $k=2ab+3a+2b+2$, $p=(c+1)k^2 - (a+1)(2k-1)$ \\[.1cm] \cline{2-3} \\[-.3cm] \ref{l: S_g not nothing}(2) & III(a)$_-, d=2, {k+1 \over d} \geq 5$ & $\{v_1^\star,v_g,-v_2^\star,v_3^\star,\dots,v_{g-1},v_{g+1},\dots,v_n\}$ \\[.1cm] && $(2,a+2,3,4,2^{[a-1]}, 3,2^{[b-1]})$, $a = g-3\geq 1, b = n-g\geq 0$ \\[.1cm] && $k=4a+5$, $p=(b+1)k^2 - 2(2k-1)$ \\[.1cm] \cline{2-3} \\[-.3cm] \ref{l: S_g not nothing}(3) & III(b)$_-, d \geq 2, {k-1 \over d} \geq 5$ & $\{-v_1^\star,\dots,-v_{s-1},-v_g,-v_{s+1}, (v_s+v_{s+1})^\star, v_{s+2},\dots,v_{g-1}, v_{g+1},\dots, -v_n\}$ \\[.1cm] && $(2^{[a-1]},b+3,2,a+1,3, 2^{[b-1]},3,2^{[c-1]})$, $a=s \geq 2, b=g-s-2 \geq 1, c=n-g \geq 0$ \\[.1cm] && $k=2ab+3a+1$, $p=(c+1)k^2 - a(2k+1)$ \\[.1cm] \cline{1-3} \\[-.3cm] \ref{p: gappy structure}(3) & III(a)$_-, d \geq 2, {k+1 \over d} = 3$ & $\{-v_s^\star,-v_{s+1},-(v_{s+2}-v_1-\cdots-v_{s-1})^\star, v_1^\star, \dots, v_{s-1}, v_{s+3}, \dots, v_n\}$ \\[.1cm] & {\em and} & $(a+1,2,3,2^{[a-1]},5,2^{[b-1]})$, $a=s \geq 2, b=n-s-2 \geq 0$ \\[.1cm] & IV(b)$_-, d \geq 3, {2k-1 \over d} =3$ & $k=3a+2$, $p=(b+1)k^2- (k+1)(2k-1)/3$ \\[.1cm] \cline{1-3} \\[-.3cm] \ref{p: breakable 1}(1) & III(b)$_+$, $d = 2$, ${k-1 \over d} \geq 3$ & $\{-v_3^\star,\dots,-v_{m-1},-(v_1-v_3-\cdots-v_{m-1})^\star, v_2^\star,v_m,\dots,v_n\}$ \\[.1cm] & & $(3,2^{[a-1]},4,3,a+2,2^{[b-1]})$, $a=m-3 \geq 1, b=n-m+1 \geq 0$ \\[.1cm] & & $k=4a+3$, $p=bk^2 + 2(2k+1)$ \\[.1cm] \cline{2-3} \\[-.3cm] \ref{p: breakable 1}(2) & IV(a)$_+$, $d =5$, ${2k+1 \over d} \geq 5$ & $\{-v_3^\star,-(v_1-v_3-v_4-\cdots-v_{m-1})^\star,-v_{m-1},\dots,-v_4^\star,v_2^\star,v_m,\dots,v_n\}$ \\[.1cm] & & $(3,3,2^{[a-1]},3,3,a+3, 2^{[b-1]})$, $a=m-4 \geq 1, b=n-m+1 \geq 0$ \\[.1cm] & & $k=5a+7$, $p=bk^2 + 5(k-1)$ \\[.1cm] \cline{1-3} \\[-.3cm] \ref{p: breakable 2}(1) & III(a)$_+$, $d = 2$, ${k+1 \over d} \geq 5$ & $\{v_4,\dots,v_{m-1},(v_1-v_3 -v_4-\cdots - v_{m-1}), (v_3+v_2)^\star,-v_2,v_m,\dots,v_n\}$ \\[.1cm] & & $(3,2^{[a-1]},3,3,2,a+3,2^{[b-1]})$, $a = m-4 \geq 1, b=n-m+1 \geq 0$ \\[.1cm] & & $k=4a+5$, $p=bk^2 + 2(2k-1)$ \\[.1cm] \cline{2-3} \\[-.3cm] \ref{p: breakable 2}(2) & V(a)$_+$, $d \geq 3$, ${k+1 \over d} \geq 2$ & $\{v_s,\dots,v_2,(v_1-v_2-\cdots-v_s-v_{s+2})^\star, v_{m-1},\dots,v_{s+2}^\star,v_{s+1},v_m,\dots, v_n\}$ \\[.1cm] & & $(2^{[a-1]},4,2^{[b-1]},3,a+1, b+2,2^{[c-1]})$, $a=s \geq 1, b=m-s-2 \geq 1, c=n-m+1 \geq 0$ \\[.1cm] & & $k=2ab+2a+b$, $p=ck^2 + (2a+1)(k+1)$ \\[.1cm] \cline{2-3} \\[-.3cm] \ref{p: breakable 2}(3) & V(b)$_+$, $d \geq 3$, ${k-1 \over d} \geq 2$ & $\{v_{s+1},v_{s+2}^\star,\dots,v_{m-1}, (v_1-v_{s+2}-\cdots-v_{m-1})^\star, \dots, v_s,v_m,\dots,v_n\}$ \\[.1cm] & & $(a+1,3,2^{[b-1]}, 4, 2^{[a-1]},b+3, 2^{[c-1]})$, $a=s \geq 1, b=m-s-2 \geq 1, c=n-m+1 \geq 0$ \\[.1cm] & & $k=2ab+2a+b+2$, $p=ck^2 + (2a+1)(k-1)$ \\[.1cm] \cline{1-3} \\[-.3cm] \ref{p: breakable 3}(1) & III(a)$_+$, 
$d \geq 3$, ${k+1 \over d} \geq 3$ & $\{v_1^\star,\dots,v_{t-1},v_{t+3},\dots,v_{m-1}, v_t-v_1-\cdots-v_{t-1}-v_{t+2}-\cdots-v_{m-1},$ \\[.1cm] && $v_{t+2}^\star,v_{t+1},v_m,\dots,v_n\}$ \\[.1cm] & & $(2^{[a-1]},3,2^{[b-1]},3,a+2,2,b+3,2^{[c-1]})$, $a = t \geq 2, b = m-t-3 \geq 0, c=n-m+1 \geq 0$ \\[.1cm] & & $k=2ab+3a+2b+2$, $p=ck^2 + (a+1)(2k-1)$ \\[.1cm] \cline{2-3} \\[-.3cm] \ref{p: breakable 3}(2) & IV(a)$_+$, $d \geq 7$, ${2k+1 \over d} \geq 3$ & $\{-v_{t+2}^\star,-v_t + v_1 + \cdots +v_{t-1}+v_{t+2}+\cdots+v_{m-1}, -v_{m-1},\dots,$ \\[.1cm] && $(-v_{t+3}+v_1+\cdots+v_{t-1})^\star,v_1^\star,\dots,v_{t-1}, v_{t+1},v_m,\dots,v_n\}$ \\[.1cm] & & $(a+2,3,2^{[b-1]},3,2^{[a-1]},3,b+3,2^{[c-1]})$, $a=t \geq 2, b=m-t-3 \geq 0, c = n-m+1\geq 0$\\[.1cm] & & $k=2ab+3a+3b+4$, $p=ck^2 + (2a+3)(k-1)$ \\[.1cm] \cline{2-3} \\[-.3cm] \ref{p: breakable 3}(3) & IV(b)$_+$, $d \geq 5$, ${2k-1 \over d} \geq 3$ & $\{-v_{t-1},\dots,-v_1^\star, (-v_t+v_{t+2}+ \cdots+v_{m-1})^\star, v_{m-1}, \cdots, v_{t+2}^\star,v_{t+1}, v_m,\dots, v_n\}$ \\[.1cm] & & $(2^{[a-1]},4,2^{[b-1]},a+2,2,b+2,2^{[c-1]})$, $a=t \geq 2, b = m-t-2 \geq 1, c=n-m+1 \geq 0$ \\[.1cm] & & $k=2ab+a+b+1$, $p=ck^2 + (2a+1)(k+1)$ \\[.1cm] \cline{2-3} \\[-.3cm] \ref{p: breakable 3}(4) & III(b)$_+$, $d \geq 3$, ${k-1 \over d} \geq 3$ & $\{-v_{t+2}^\star,\dots,-v_{m-1}, (-v_t+v_{t+2}+\cdots+v_{m-1})^\star, v_1^\star, \dots, v_{t-1},v_{t+1},v_m, \dots, v_n\}$ \\[.1cm] & & $(a+2,2^{[b-1]},4,2^{[a-1]},3,b+2,2^{[c-1]})$, $a=t \geq 2, b=m-t-2 \geq 1, c = n-m+1 \geq 0$ \\[.1cm] & & $k=2ab+2a+b+2$, $p=ck^2+(a+1)(2k+1)$ \end{tabular} } \end{center} \end{table} } The 26 small families fall to a straightforward, though somewhat lengthy, analysis. In each case, converting the standard basis $S$ into a vertex basis $B$ usually involves altering just one element from $S$ into a sum of several such, and then permuting these elements and replacing some of them by their negatives. In two cases (\ref{p: breakable 2}(1) and \ref{p: breakable 3}(2)) there are two such alterations, and in a handful there are none. From the sequence of norms $\nu$, it is straightforward to obtain the values $p$ and $k$ as in the example of Subsection \ref{ss: methodology}. Lemmas \ref{l: cont frac basics 2}(1,2,3) help reduce the number of terms appearing in the continued fraction expansions under consideration; note that although Lemma \ref{l: cont frac basics 2}(2) relates two different fractions, their numerators are opposite one another. In this way, we reduce each string to one with at most three variables ($a,b,c$) and eight entries, which a computer algebra package or a tenacious person can evaluate. We used Mathematica \cite{mathematica} to perform these evaluations, relying on the command {\tt FromContinuedFraction} and the conversion $[\dots,a_i, \dots]^- = [\dots,(-1)^{i+1}a_i, \dots]^+$. Tables \ref{table: 1} and \ref{table: 2} report the results. We use variables $a,b,c$ (instead of $g,m,n,s,t$) to keep notation uniform across different families. As in Table \ref{table: A}, we report Lemmas \ref{l: S_g nothing}(1,3,4,5) and \ref{l: S_g not nothing} in place of Proposition \ref{p: gappy structure}(2). Certain degenerations in our notation deserve mention. The string $(\dots,x,y,2^{[-1]})$ should be understood as $(\dots,x)$. Thus, in \ref{p: just right 2}(5), taking $b=0$, we obtain the string $(2,2,a+3,4,2^{[a-1]})$. Furthermore, it follows that $n = m-1$ in this case, so the vertex basis truncates to $\{v_2,v_1,v_m,-v_3,\dots,-v_{m-1}\}$.
In \ref{p: breakable 3}(1) and (2), there are two degenerations that can occur. In these cases, the degeneration $b=0$ can occur only if $c=0$. If $b = c = 0$, then we obtain the strings $(2^{[a-1]},4,a+2,2)$ and $(a+2,4,2^{[a-1]},3)$, respectively. \section{Proofs of the main results}\label{s: completion} Recall that we established Theorem \ref{t: main technical} in Subsection \ref{ss: changemaker} using \cite[Theorem 3.3]{greene:cabling}. \begin{proof}[Proof of Theorem \ref{t: linear changemakers}] Suppose that $(p,q)$ appears on Berge's list. Then $\Lambda(p,q)$ embeds as the orthogonal complement to a changemaker by Theorem \ref{t: main technical}. On the other hand, suppose that $\Lambda(p,q)$ is isomorphic to a changemaker lattice $L =(\sigma)^\perp \subset {\mathbb Z}^{n+1}$. Then $L$ has a standard basis $S$ appearing in one of the structural Propositions of Sections \ref{s: just right} - \ref{s: tight}. Section \ref{s: cont fracs} in turn exhibits an isomorphism $L \cong \Lambda(p',q')$, where the pair $(p',q')$ appears on Berge's list. By Proposition \ref{p: gerstein}, $p' = p$, and either $q' = q$ or $q q' \equiv 1 \pmod p$. Hence at least one of the pairs $(p,q)$, $(p,q')$ appears on Berge's list. \end{proof} \begin{proof}[Proof of Theorem \ref{t: main}] This follows from Theorems \ref{t: main technical} and \ref{t: linear changemakers}, using Proposition \ref{p: homology} and the analysis of Section \ref{s: cont fracs} to pin down the homology class of the knot $K$. \end{proof} We note that the statement of Theorem \ref{t: main} holds with $S^3$ replaced by an arbitrary L-space homology sphere $Y$ with $d$-invariant $0$. The only modification in the set-up is to use the 2-handle cobordism $W$ from $L(p,q)$ to $Y$. The space $X(p,q) \cup W$ is negative-definite and has boundary $Y$. By \cite[Corollary 9.7]{os:absgr} and Elkies' Theorem \cite{elkies}, its intersection pairing is diagonalizable, so \cite[Theorem 3.3]{greene:cabling} and the remainder of the proof go through unchanged. \begin{proof}[Proof of Theorem \ref{c: main}] Suppose that $K_p = L(p,q)$, and let $K' \subset L(p,q)$ denote the induced knot following the surgery. By Theorem \ref{t: main}, $[K'] = [B'] \in H_1(L(p,q);{\mathbb Z})$ for some Berge knot $B \subset S^3$. By \cite[Theorem 2]{r:Lspace}, it follows that $\widehat{HFK}(K') \cong \widehat{HFK}(B')$. By \cite[Proposition 3.1 and the remark thereafter]{r:Lspace}, it follows that $\Delta_{K'} = \Delta_{B'}$, where $\Delta$ denotes the Alexander polynomial. Since $\Delta$ depends only on the knot complement, it follows that $\Delta_K= \Delta_B$. By \cite[Theorem 1.2]{os:lens}, $\Delta_K$ and $\Delta_B$ determine $\widehat{HFK}(K)$ and $\widehat{HFK}(B)$; therefore, these groups are isomorphic. Next, suppose that $K$ is doubly primitive. As remarked in the introduction, both $K'$ and $B'$ are simple knots, and since they are homologous, they are isotopic. Thus, the same follows for $K$ and $B$, whence every doubly primitive knot is a Berge knot. \end{proof} \begin{proof}[Proof sketch of Theorem \ref{t: berge bound}] The main idea is to analyze the changemakers implicit in the structural Propositions and apply Proposition \ref{p: genus}, which restates the essential content of \cite[Proposition 3.1]{greene:cabling}. The use of weight expansions draws inspiration from \cite{ms:ellipsoids}. \begin{prop}\label{p: genus} Suppose that $K_p =L(p,q)$, and let $\sigma$ denote the corresponding changemaker. Then \[ 2g(K) = p - |\sigma|_1. 
\] \qed \end{prop} \begin{defin}\label{d: wt expansion} A {\em weight expansion} is a vector of the form \[w = (\underbrace{a_0,\dots,a_0}_{m_0}, \underbrace{a_1,\dots,a_1}_{m_1}, \dots, \underbrace{a_j,\dots,a_j}_{m_j}) \] where each $m_i \geq 1$, $a_{-1}:=0, a_0=1$, and $a_i = m_{i-1} a_{i-1} + a_{i-2}$ for $i = 1,\dots,j$. \end{defin} It is an amusing exercise to show that the entries of $w$ form the sequence of side lengths of squares that tile an $a_j \times a_{j+1}$ rectangle. \begin{figure}[h] \centering \includegraphics[width=2.8in]{rectangle} \put(-135,65){$25$} \put(-48,70){$9$} \put(-48,20){$9$} \put(-53,110){$7$} \put(-30,118){$2$} \put(-30,107.5){$2$} \put(-30,97){$2$} \put(0,118){$1$} \caption{The tiling specified by the weight expansion $w = (1,1,2,2,2,7,9,9,25)$.} \label{f: rectangle} \end{figure} \noindent Another useful observation is $a_t = 1 + \sum_{i=0}^{t-1} m_i a_i - a_{t-1}$, for all $t \geq 1$. Thus, \begin{equation}\label{e: wt exp} |w| = a_j \cdot a_{j+1} \quad \text{and} \quad |w|_1 = a_j + a_{j+1} - 1, \end{equation} which together with $a_{j+1} \geq a_j+1$ leads to the bound \begin{equation}\label{e: wt bound} (|w|_1 + 1)^2 \geq 4 |w| + 1. \end{equation} Observe that a weight expansion is a special kind of changemaker. In fact, it is easy to check that a changemaker $\sigma$ is a weight expansion iff the changemaker lattice $L = (\sigma)^\perp \subset {\mathbb Z}^{n+1}$ is built from $\varnothing$ by a sequence of expansions. Such lattices occur as one case of Proposition \ref{p: decomposable structure}. Indeed, by inspection, for each changemaker lattice that appears in one of the structural Propositions of Sections \ref{s: decomposable} - \ref{s: tight}, the changemaker $\sigma$ is just a slight variation on a weight expansion. For example, the changemakers implicit in Proposition \ref{p: berge viii} are obtained by augmenting a weight expansion by $a_j + a_{j+1}$, while those in Proposition \ref{p: tight unbreakable}(1,3) are obtained by deleting the first entry in a weight expansion with $m_0 \geq 2$. Using Proposition \ref{p: genus}, we obtain estimates on the genera of knots appearing in these families. For the changemakers $\sigma$ specified by Proposition \ref{p: berge viii}, \eqref{e: wt exp} easily leads to the inequality \begin{equation}\label{e: jimmy} (|\sigma|_1 + 1)^2 \geq (4/5) \cdot (4|\sigma|+1) \end{equation} in the same way as \eqref{e: wt bound}. Furthermore, equality in \eqref{e: jimmy} occurs precisely for changemakers $\sigma$ of the form $(1,\dots,1,n,2n+1)$, with $1$ repeated $n$ times. By Proposition \ref{p: genus}, it follows that the bound \eqref{e: berge bound} stated in Theorem \ref{t: berge bound} holds for type VIII knots, with equality attained precisely by knots $K$ specified by the pairs $(p,k) = (5n^2+5n+1,5n^2-1)$. Similarly, for the changemakers $\sigma$ specified by Proposition \ref{p: tight unbreakable}(1,3), \eqref{e: wt exp} easily leads to the bound $(|\sigma|_1 + 2)^2 \geq 4|\sigma|+5$. Furthermore, equality occurs precisely for changemakers $\sigma$ of the form $(1,\dots,1,n+1)$, with $1$ repeated $n$ times. By Proposition \ref{p: genus}, it follows that the bound \begin{equation}\label{e: I- bound} 2g(K)-1 \leq p + 1 - \sqrt{4p+5} \end{equation} holds for type I$_-$ knots, with equality attained precisely by knots $K$ specified by the pairs $(p,k) = (n^2 + 3n + 1, n+1)$. In fact, \eqref{e: jimmy} holds for all the changemakers of Proposition \ref{p: tight unbreakable}(1,3) with the single exception of $(1,2)$.
This corresponds to $5$-surgery along a genus one L-space knot, which must be the right-hand trefoil by a theorem of Ghiggini \cite{ghiggini}. Thus, the bound \eqref{e: berge bound} holds for all the type I$_-$ knots with the sole exception of $5$-surgery along the right-hand trefoil. The changemakers in the other structural Propositions fall to the same basic analysis. Due to the abundance of cases, we omit the details, and instead happily report that the bound \eqref{e: berge bound} is strict for the remaining lens space knots. This completes the proof sketch of Theorem \ref{t: berge bound}. \end{proof} \nocite{ni:erratum} \bibliographystyle{plain}
1,108,101,564,809
arxiv
\section{Introduction} \label{sec:intro} Machine learning (ML) is set to revolutionize the way that individuals and society interact with and utilize machines. These advances, however, are predicated on delivering high-performance platforms for training models during the training phase. The trained model is then used to evaluate unseen data, a.k.a. the inference phase. Training ML models is significantly compute-intensive and, at the same time, puts substantial pressure on the memory~\cite{eyeriss:isca:2016, shidiannao:isca:2015, imagenet:neural:2012, very:arxiv:2014, cnvlutin:rchnews:2016, tetris:asplos:2017, pipelayer:hpca:2017, graphpim:hpca:2017, tom:isca:2016, isaac:archnews:2016, neurocube:isca:2016, chameleon:micro:2016, prime:archnews:2016, pim:nda:hpca:2015, pim:graph:isca:2015, pim:drama:cal:2015, pim:sparse:hpec:2013, neuflow:cvprw:2011}. Given these characteristics, in-memory acceleration~\cite{isaac:archnews:2016, pipelayer:hpca:2017, prime:archnews:2016, tetris:asplos:2017, graphpim:hpca:2017, tom:isca:2016, neurocube:isca:2016, chameleon:micro:2016, pim:nda:hpca:2015, pim:graph:isca:2015, pim:drama:cal:2015, pim:sparse:hpec:2013, neuflow:cvprw:2011, CMP-PIM_DAC118, PIM-training_Micro18} is a natural fit for accelerating ML algorithms. With the advent of 3D-stacked memories~\cite{micro:hmc:spec, MICRON, phd:hmc, soc:dic3:2010, smartrefresh}, in-memory acceleration~\cite{tetris:asplos:2017,graphpim:hpca:2017,tom:isca:2016,neurocube:isca:2016,chameleon:micro:2016,pim:nda:hpca:2015,pim:graph:isca:2015,pim:drama:cal:2015,pim:sparse:hpec:2013,neuflow:cvprw:2011, CMP-PIM_DAC118, PIM-training_Micro18} becomes a feasible solution. Various pieces of inspiring work have devised in-memory accelerators for ML algorithms but mostly focused on the inference phase~\cite{tetris:asplos:2017, graphpim:hpca:2017, tom:isca:2016, chameleon:micro:2016, pim:nda:hpca:2015, pim:graph:isca:2015, pim:drama:cal:2015, pim:sparse:hpec:2013, neuflow:cvprw:2011, machine:neurocomputing:2017} or the training phase of special kinds of ML algorithms~\cite{neurocube:isca:2016} that can be executed with the compute units of the inference phase, i.e., multiply-accumulator (MAC) units. An ideal in-memory accelerator for training ML algorithms should be (1) \codebold{general} to support different kinds of ML algorithms, as there are variations in the compute patterns of different ML algorithms, and (2) \codebold{efficient} to capture the available bandwidth of 3D-stacked memories~\cite{phd:hmc, MICRON, micro:hmc:spec}, while meeting the limited power and area budgets of these memories. We observe that previous work limits the potential capability of in-memory accelerators, as none of them provides all the necessary features (see \S~\ref{sec:motiv} for more detail). As an example, integrating general-purpose units~\cite{tabla:hpca:2016} inside the 3D-stacked memory only captures up to \xx{16\%} of the available bandwidth (see \S~\ref{sec:motiv} for more detail). We set out to explore in-memory acceleration with heterogeneous compute units to support a wide range of ML algorithms. Investigating a wide range of popular ML algorithms, we observe that ML algorithms share common compute patterns, each of which can be executed on a specialized compute unit with low area and power overheads. The combination of these compute units can execute any type of ML algorithm.
Constrained by the limited area and power budgets of a 3D-stacked memory, even these highly optimized compute units capture only \xx{47\%} of the memory bandwidth (see \S~\ref{sec:motiv} for more details). Although the captured bandwidth is much larger than that of the general-purpose units, it is still lower than the total available bandwidth. We conclude that in-memory accelerators, alone, cannot utilize the whole available bandwidth even if we use light-weight compute units due to the limited area and power budgets of the 3D-stacked memory. To capture all the available bandwidth, we aim to enable a split execution between the light-weight heterogeneous in-memory engines and an out-of-the-memory compute platform. We observe that existing 3D interfaces can transfer about 63\% of the internal bandwidth to an out-of-memory compute platform~\cite{MICRON, micro:hmc:spec}. Inside the memory, we integrate as many compute units as the area and power budgets allow to capture the 3D-stacked memory bandwidth. We drive an out-of-the-memory compute platform by the unused portion of the bandwidth. To fully utilize the two platforms, we observe that ML algorithms are composed of many parallel regions, which facilitate execution of ML algorithms over the two platforms, in-memory accelerators and an out-of-the-memory platform, with minimal inter-communications. \hpim\footnote{The small number of basic origami folds can be combined in a variety of ways to make intricate designs.} is a hardware-software solution that combines compute patterns with a heterogeneous set of in-memory accelerators, and splits the execution over the in-memory accelerators and an out-of-the-memory platform. \hpim~translates the common compute patterns of different ML algorithms into heterogeneous compute engines that should be integrated on the logic layer of 3D-stacked memories. Moreover, \hpim~efficiently distributes parts of the computation of an ML algorithm to an out-of-the-memory compute platform to capture all of the available memory bandwidth provided by a 3D-stacked DRAM. This paper makes the following contributions: \begin{itemize}[leftmargin=2.5mm,itemsep=0mm,parsep=0mm,topsep=0mm] \item We extract common compute patterns and parallelism types of a set of different ML algorithms. \item We propose \hpim~that benefits from a set of heterogeneous in-memory accelerators derived from the identified compute patterns, and splits the computation over the in-memory accelerators and an out-of-the-memory compute platform using the identified parallelism types. \item We show that \hpim~outperforms the state-of-the-art solution in terms of performance and energy-delay product (EDP) by \xxs{1.5$\times$} and \xxs{29$\times$} (up to \xxs{1.6$\times$} and \xxs{31$\times$}), respectively. Moreover, \hpim~is within a 1\% margin of an ideal system, which has unlimited compute resources on the logic layer of a 3D-stacked memory. \end{itemize} \section{Motivation} \label{sec:motiv} There are two phases in processing ML algorithms: (1) a \emph{training phase} that optimizes the model parameters over a training dataset, and (2) an \emph{inference phase} where the trained model is deployed to process new unseen data. While both phases are computationally intensive, the training phase demands more compute resources due to two reasons. First, the training phase is a superset of the inference phase. 
The inference phase just includes multiply-accumulator (MAC) operations, but the training phase, which optimizes different objective functions, includes additional operations, such as non-linear operations. Second, to achieve high accuracy for the trained model, ML algorithms require copious amounts of processing power to iterate over vast amounts of training data~\cite{eyeriss:isca:2016, shidiannao:isca:2015, imagenet:neural:2012, very:arxiv:2014, isaac:archnews:2016, cnvlutin:rchnews:2016}. Intrinsic parallelism of ML algorithms has inspired both academia and industry to explore accelerating platforms such as FPGAs~\cite{FPGADeep:FPGA:2015, dnnweaver:micro:2016,tabla:hpca:2016, CHiMPS:FPGA:2008, large:journal:2011, cosmic:Micro:2017, resourcePartitioning:ISCA:2017}, GPUs~\cite{convGPU:2017, Oh:2004, Guzhva:2009, saberlda:asplos:2017}, and ASICs~\cite{bitfusion:isca:2018, snapea:isca:2018, ganax:isca:2018, eyeriss:isca:2016, cnvlutin:rchnews:2016, shidiannao:isca:2015, stripes:micro:2016, scnn:isca:2017, eie:isca:2016, cambrion:micro:2016, dadiannao:micro:2014, pudiannao:asplos:2015, tpu:isca:2017, cosmic:Micro:2017, scalpel:ISCA:2017, scaledeep:ISCA:2017}. However, the high memory footprints of ML algorithms limit the potential performance benefits of acceleration. \subsection{Memory Bandwidth Bottleneck} To keep compute resources busy, accelerators need to transfer huge amounts of data, which makes the memory subsystem a serious bottleneck in terms of bandwidth and energy~\cite{eyeriss:isca:2016, shidiannao:isca:2015, imagenet:neural:2012, very:arxiv:2014, cnvlutin:rchnews:2016, tetris:asplos:2017, pipelayer:hpca:2017, graphpim:hpca:2017, tom:isca:2016, isaac:archnews:2016, neurocube:isca:2016, chameleon:micro:2016, prime:archnews:2016, pim:nda:hpca:2015, pim:graph:isca:2015, pim:drama:cal:2015, pim:sparse:hpec:2013, neuflow:cvprw:2011, tabla:hpca:2016, dnnweaver:micro:2016, machine:neurocomputing:2017, saberlda:asplos:2017}. A large body of work has explored in-memory processing, built upon DRAM~\cite{dadiannao:micro:2014}, SRAM~\cite{PIM-SRAM:VLSI:2016}, non-volatile memories~\cite{prime:archnews:2016, isaac:archnews:2016, pipelayer:hpca:2017}, and 3D-stacked memories~\cite{tetris:asplos:2017, graphpim:hpca:2017, tom:isca:2016, neurocube:isca:2016, chameleon:micro:2016, pim:nda:hpca:2015, pim:graph:isca:2015, pim:drama:cal:2015, pim:sparse:hpec:2013, neuflow:cvprw:2011, machine:neurocomputing:2017} for performance improvements and energy savings. Many pieces of prior work have proposed in-memory accelerators built upon 3D-stacked memories, as these memories are commercially available (e.g., HMC~\cite{MICRON, micro:hmc:spec}) and in-memory processing within them is feasible. 3D-stacked memories stack multiple DRAM dies on top of each other inside a package. These dies are vertically connected via thousands of low-capacitance through-silicon vias (TSVs) to a logic die in which the memory controllers are located. 3D-stacked memories use high-speed signaling circuits to connect the logic die to the active die (e.g., CPU, GPU, or FPGA) outside the memory. Putting it all together, 3D-stacked memories provide massive bandwidth with access energy 3 to 5 times lower than that of conventional DRAMs. \subsection{Challenges of In-memory Acceleration} While in-memory accelerators have the potential to address the memory bandwidth bottleneck, they have two main challenges that should be addressed. First, there are many types of ML algorithms and an in-memory accelerator should be able to effectively accelerate them.
Second, there is a significant constraint on the area and power usage of the logic die in 3D-stacked memories~\cite{graphpim:hpca:2017, soc:dic3:2010, pim:isca:2015, pim:graph:isca:2015, machine:neurocomputing:2017, isaac:archnews:2016, neurocube:isca:2016, pipelayer:hpca:2017, prime:archnews:2016}. To evaluate in-memory accelerators, we define two parameters: (1) \textit{Generality}: how flexible the architecture is in supporting different kinds of ML algorithms; (2) \textit{Efficiency}: how well the architecture can utilize the available bandwidth subject to area and power constraints. To achieve generality in accelerating a wide range of ML algorithms with different objective functions, an in-memory accelerator may use general-purpose execution units to execute different operations, including various non-linear operations (e.g., Sigmoid) in the training phase. General-purpose execution units can provide generality. However, they are expensive in terms of area and power. On the other hand, to achieve efficiency, we need to integrate as many general-purpose execution units as needed to capture the whole available bandwidth. Prior in-memory accelerators based on 3D-stacked memories either support the inference phase~\cite{tetris:asplos:2017, graphpim:hpca:2017, tom:isca:2016, chameleon:micro:2016, pim:nda:hpca:2015, pim:graph:isca:2015, pim:drama:cal:2015, pim:sparse:hpec:2013, neuflow:cvprw:2011, machine:neurocomputing:2017} or the training phase of special kinds of ML algorithms such as Convolutional Neural Networks (CNNs)~\cite{neurocube:isca:2016}. As there is \textit{no} previous in-memory accelerator that supports the training phase of different types of ML algorithms, we implement general-purpose units similar to those in prior work~\cite{tabla:hpca:2016, cosmic:Micro:2017}. Considering the available power and area budgets of 3D-stacked memories, these general-purpose units only capture \xx{80~GB/s} (\xx{16\%}) of the available bandwidth (out of \newtext{\xx{512~GB/s}}, more details in \S~\ref{sec:eval}). The captured bandwidth is much lower than the total bandwidth of 3D-stacked memories, which shows that general-purpose in-memory accelerators fail to utilize the available bandwidth. \subsection{Holistic In-memory Approach} To alleviate the bandwidth bottleneck and accelerate the training phase of a wide range of ML algorithms, we propose a holistic in-memory approach, called \hpim, which satisfies the requirements of ML algorithms while respecting the limitations of 3D-stacked memories. While we focus on accelerating the training phase of ML algorithms, the proposed idea can also be applied to the inference phase, as the training phase is a superset of the inference phase. \hpim~benefits from low-overhead compute engines as in-memory accelerators on the 3D-stacked memory to capture as much 3D-stacked memory bandwidth as possible (240~GB/s of 512~GB/s). \hpim~uses the rest of the bandwidth (272~GB/s of 512~GB/s) to drive an out-of-the-memory compute platform, which can be an ASIC, GPU, FPGA, TPU, or any other type of compute platform. We build \hpim~upon two key ideas: \niparagraph{Pattern-Aware Execution.} \hpim~exploits pattern-aware execution to accelerate different ML algorithms using low-overhead heterogeneous compute engines on the logic die of a 3D-stacked memory. First, \hpim~identifies these compute patterns. Second, \hpim~implements each compute pattern with a specific hardware unit, called a compute engine, with low area and power overheads.
The combination of these heterogeneous compute engines can accelerate different types of ML algorithms. \niparagraph{Split Execution.} 3D-stacked memories provide the active die with external bandwidth of up to \xx{320~GB/s}. This observation motivates us to integrate as many compute engines as possible on the logic die and feed the unused portion of the available bandwidth to the active die. The combination of accelerators on the logic die and the compute platform on the active die utilizes the whole 3D-stacked DRAM bandwidth. To partition an ML algorithm efficiently between the in-memory accelerators and the out-of-the-memory platform, a partitioning algorithm should have three features. (1) \emph{Concurrency:} Partitioned parts should be able to run simultaneously. (2) \emph{Minimum inter-communications:} Partitioned parts should have no or minimum inter-platform communications. (3) \emph{Load Balancing:} Partitioned parts should be proportional to the compute capabilities of the two platforms. Compute capabilities are limited by two factors. First, compute throughput, which depends on the speed of the hardware. Second, memory throughput, which depends on the available memory bandwidth (see the illustrative sketch below). To have all of these features, we observe that there are various parallelism types inside ML algorithms. \hpim~extracts three parallelism types from ML algorithms, which guarantee concurrency and minimum inter-communications. Then, \hpim~exploits an assignment algorithm to partition these concurrent parts based on the compute capabilities of the two platforms, which guarantees load balancing. We compare some prior work~\cite{tabla:hpca:2016, scalpel:ISCA:2017, scaledeep:ISCA:2017, resourcePartitioning:ISCA:2017, neurocube:isca:2016, PIM-training_Micro18, CMP-PIM_DAC118} in Table~\ref{tab:previous}. The \emph{In-memory} and \emph{Training} columns show whether the method uses an in-memory accelerator and targets the training phase, respectively. The other five columns show whether the method offers generality, split execution, concurrency, load-balancing, and minimum inter-communications, respectively. As summarized in Table~\ref{tab:previous}, pieces of prior work that use in-memory accelerators~\cite{CMP-PIM_DAC118, neurocube:isca:2016, PIM-training_Micro18} do not support execution of the training phase of different kinds of ML algorithms. While \emph{TABLA}~\cite{tabla:hpca:2016} is a general method to accelerate the training phase of ML algorithms, it suffers from the memory bandwidth problem. Other pieces of work~\cite{scalpel:ISCA:2017, scaledeep:ISCA:2017, resourcePartitioning:ISCA:2017, PIM-training_Micro18} benefit from split execution but do not support acceleration of the training phase of different kinds of ML algorithms. \emph{Scalpel}~\cite{scalpel:ISCA:2017}, \emph{Proger PIM}~\cite{PIM-training_Micro18}, and \emph{Resource partitioning}~\cite{resourcePartitioning:ISCA:2017} execute only special parts of algorithms over multiple resources and run the rest on just one compute resource. Such techniques fail to provide concurrency and load balancing. Solutions such as partitioning ML algorithms based on the types of layers, e.g., \emph{Scaledeep}~\cite{scaledeep:ISCA:2017}, which assign memory-intensive parts to the in-memory platform and compute-intensive parts to the out-of-the-memory platform, neglect minimum inter-communications and do not always distribute the computation in a load-balanced manner. As shown in the table, none of the prior work has all the required features.
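To make the load-balancing criterion concrete, the sketch below (our own illustration; \hpim's actual assignment algorithm is not reproduced here) splits a pool of independent, equally sized blocks between the two platforms in proportion to their effective throughputs, each taken as the minimum of compute throughput and memory throughput. The memory figures (240~GB/s in-memory, 272~GB/s external) follow the HMC-based bandwidth split used in this paper, while the compute figures are placeholder assumptions.
\begin{verbatim}
# Minimal sketch (illustrative only) of bandwidth-proportional load
# balancing between the in-memory engines and the host platform.
# Throughputs are in GB/s; the compute numbers are assumed values.

def effective_throughput(compute_gbps, memory_gbps):
    # A platform can consume data no faster than either its compute
    # units or its share of the memory bandwidth allows.
    return min(compute_gbps, memory_gbps)

def split_blocks(num_blocks, pim_eff, host_eff):
    # Assign independent blocks in proportion to effective throughput,
    # so both platforms finish at roughly the same time.
    pim_share = round(num_blocks * pim_eff / (pim_eff + host_eff))
    return pim_share, num_blocks - pim_share

pim = effective_throughput(compute_gbps=250.0, memory_gbps=240.0)
host = effective_throughput(compute_gbps=300.0, memory_gbps=272.0)
print(split_blocks(64, pim, host))  # -> (30, 34)
\end{verbatim}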
\begin{table}[!ht] \centering % \caption{Characteristics of previous ML accelerators in supporting: in-memory acceleration (In-memory), training phase (Training), generality (Generality), split execution (Split), concurrency (Conc), load-balancing (L-B), and minimum inter-communications (Min I-C).} % \includegraphics[width=0.5\textwidth]{previous.pdf} % \label{tab:previous} % \end{table} \section{Evaluation} \label{sec:eval} \subsection{Experimental Setup} \label{subsec:exp} \begin{table*}[!ht] \centering % \caption{Benchmarks.} % \includegraphics[width=0.85\textwidth]{benchmark.pdf} % \label{tab:bench} % \end{table*} \niparagraph{Benchmarks and Datasets.} Table~\ref{tab:bench} summarizes the set of benchmarks that are used for evaluation of \hpim, and their descriptions including model topology, number of features, and number of input vectors. To evaluate the sensitivity of \hpim's performance improvement to the size of the model used by a given ML algorithm, we use two distinct models (shown as \bench{M1} and \bench{M2} in Table~\ref{tab:bench}) for each evaluated benchmark. The benchmarks include the state-of-the-art ML algorithms. The Back-propagation (\bench{BProp}) algorithm trains models to detect handwritten digits ~\cite{mnist:2010, mnist8m} and speech ~\cite{acoustic:2010}. The Linear Regression (\bench{LinReg}) algorithm is widely used in finance and image processing to predict prices~\cite{stock-exchange:2008} and texture of images~\cite{texture1:2016}. The Logistic Regression (\bench{LogReg}) algorithm trains models to detect tumors~\cite{tumor:2003}, and cancer~\cite{cancer1:2002}. The Support Vector Machine (\bench{SVM}) algorithm is used in computer vision and medical diagnosis domains to detect human faces~\cite{face:2000} and cancer~\cite{cancer2}. The Recommender Systems (\bench{Reco}) algorithm is widely used in processing movie datasets such as \emph{Movielens} datasets~\cite{movielens:hetrec:2011, movielens_web:2017} and the \emph{Netflix Prize} datasets~\cite{netflix}. The 2D-Regression (\bench{2D-Reg}) algorithm trains models to detect different kinds of tumors~\cite{tumor2} and cancers~\cite{cancer2}. \niparagraph{FPGA Platform.} We evaluate \hpim~in the context of a 3D-stacked memory on top of an active die that includes a \code{Virtex UltraScale+ (DS923) VU13P} FPGA. Table~\ref{tab:arch} reports the key FPGA parameters. We synthesize the hardware in the FPGA platform with \code{Vivado Design Suite v2017.2} to extract the FPGA design parameters. \niparagraph{ASIC Implementation.} We use \code{Synopsys Design Compiler (L-2016.03-SP5)} and \code{TSMC\,45-nm} standard cell library at \xx{313} MHz frequency, the frequency of HMC stacked memory~\cite{hybridcube:vlsit:2012,neurocube:isca:2016,micro:hmc:spec,MICRON}, to synthesize the accelerators and obtain the area, delay, and energy numbers. We use \code{CACTI-P}~\cite{cactip} to measure the area and power of the registers and on-chip SRAMs. \niparagraph{Memory Model.} The 3D-stacked memory is modeled after an HMC stacked memory~\cite{hybridcube:vlsit:2012,neurocube:isca:2016,micro:hmc:spec,MICRON}. Each vault delivers up to 16 GB/s bandwidth to the logic die and 10 GB/s bandwidth to the active die~\cite{micro:hmc:spec,MICRON}. The available area to accelerators in each vault is \code{1.5\,mm\textsuperscript{2}}\cite{micro:hmc:spec, tetris:asplos:2017}. We extract the 3D-stacked memory model parameters from the data sheet~\cite{micro:hmc:spec}. 
Table~\ref{tab:arch} reports the parameters of the memory model used in our evaluations. \niparagraph{Cycle-Level Simulation.} Using the ASIC and FPGA synthesis numbers and the configurations of the memory models, we develop a cycle-level architectural simulator to measure the performance and energy consumption of \hpim. The \hpim~simulator includes the timing of the memory accesses and faithfully models the parameters of the ASIC and FPGA implementations. Table~\ref{tab:origami} lists the major micro-architectural parameters of \hpim. \begin{table}[h] \centering % \caption{Major parameters of FPGA and 3D-stacked Memory~\cite{neurocube:isca:2016}.} % \includegraphics[width=0.5\textwidth]{arch.pdf} % \label{tab:arch} \vspace{-8pt} % \end{table} \begin{table}[h] \centering % \caption{Parameters of \hpim.} % \includegraphics[width=0.28\textwidth]{origami.pdf} % \label{tab:origami} \vspace{-5pt} % \end{table} \niparagraph{Comparison Metrics.} We evaluate benefits of \hpim~with six ML algorithms, listed in Table~\ref{tab:bench}, in terms of performance and energy-delay product (EDP). \niparagraph{Comparison Points.} We compare six different platforms, namely (1) \codebold{FPGA}, (2) \codebold{PIM-GU}, (3) \codebold{PIM-CE}, (4) \codebold{\hpim}~(our approach), (5) \codebold{\hpim-IIC}, and (6) \codebold{PIM-GU-Unlimited}. \codebold{\hpim}~represents our approach, in which, we use both the FPGA and the compute engines on the logic die of the 3D-stacked memory. \hpim~assigns as much of the internal bandwidth as possible to the compute engines and delivers the rest of the bandwidth to the FPGA (See Table~\ref{tab:arch}). Table~\ref{tab:origami} also lists the available resources on the logic die that \hpim~uses for computation. \codebold{\hpim-IIC} evaluates an \hpim~which exploits an ideal inter-platform communications with no delay and bandwidth usage. The \codebold{\hpim-IIC} exploits the same configuration as \codebold{\hpim}. We use this comparison point to evaluate the effect of inter-platform communications' delay and bandwidth usage on \hpim's effectiveness. The state-of-the-art FPGA-based accelerator to train different ML algorithms is \emph{TABLA}~\cite{tabla:hpca:2016}. It has been shown that it outperforms GPU and CPU implementations~\cite{tabla:hpca:2016}. We implement ALUs of \emph{TABLA} in an FPGA connected to a 3D-stacked memory. We refer to this design as \codebold{FPGA}. In-memory accelerators focus on the inference phase~\cite{tetris:asplos:2017, graphpim:hpca:2017, tom:isca:2016, chameleon:micro:2016, pim:nda:hpca:2015, pim:graph:isca:2015, pim:drama:cal:2015, pim:sparse:hpec:2013, neuflow:cvprw:2011, machine:neurocomputing:2017} or the training phase of a restricted set of ML algorithms such as CNNs~\cite{neurocube:isca:2016}. As there is no previous in-memory accelerator that supports the training phase of different types of ML algorithms, we compare \hpim~with a design that uses the general-purpose ALUs similar to those in prior work ~\cite{tabla:hpca:2016, cosmic:Micro:2017} on the logic die of the 3D-stacked memory. We refer to this design as \codebold{PIM-GU}. To understand the limitations of previous in-memory accelerator, we compare the results to an ideal but impractical platform, \codebold{PIM-GU-Unlimited}, with enough general-purpose ALUs on the logic die of the 3D-stacked memory to fully utilize the available bandwidth. \codebold{PIM-CE} evaluates an \hpim~which only benefits from the compute engines on the logic die. 
We use this comparison point to show the importance of split execution. Moreover, this comparison point shows the effectiveness of heterogeneous compute engines as compared to general-purpose ALUs. Table~\ref{tab:syn} shows the specifications of \hpim~accelerators and the ALU of prior work~\cite{tabla:hpca:2016, cosmic:Micro:2017}. The table shows that we can only include \xx{32} ALUs on the logic die, as the area of one ALU is 1.2~$mm^{2}$ and the available area in a single vault is \code{1.5\,mm\textsuperscript{2}}. Consequently, \codebold{PIM-GU} exploits 80~GB/s of the available bandwidth. \\ \begin{table}[h] \centering % \caption{Area, power, and latency of the compute engines of \hpim~and a general-purpose ALU in a \code{45-nm} technology node.} % \includegraphics[width=0.35\textwidth]{synthesis-units.pdf} % \label{tab:syn} % \end{table} \subsection{Experimental Results} \label{subsec:results} \begin{figure*}[h] \centering % \includegraphics[width=1\textwidth]{speedup-complexGU.pdf} % \caption{Speedup of the competing compute platforms over \codebold{FPGA}.} % \label{fig:speedup} \end{figure*} \begin{figure*}[h] \centering % \includegraphics[width=1\textwidth]{EDP-Reduction-complexGU.pdf} % \caption{EDP reduction of the competing platforms, normalized to \codebold{FPGA}.} % \label{fig:energy} \end{figure*} \niparagraph{Performance Analysis.} To evaluate the effect of \hpim~on accelerating the training phase of ML algorithms, we measure the execution time across the evaluated benchmarks. Figure~\ref{fig:speedup} shows the speedup (higher is better) of different platforms with respect to \codebold{FPGA}. We make four key observations. First, \codebold{\hpim}~outperforms \codebold{FPGA} in terms of execution time, by \xxs{1.55$\times$} on average (up to \xxs{1.6$\times$}). \codebold{\hpim}~exploits all the available bandwidth, while \codebold{FPGA} only captures the external bandwidth. Second, the speedup of \codebold{\hpim}~is within \xx{$\approx$1\%} of \codebold{PIM-GU-Unlimited}. The reason for this level of speedup is the effectiveness of \codebold{\hpim}~in capturing all the available memory bandwidth. \codebold{\hpim}~maximally utilizes the memory bandwidth by judiciously distributing the computations between the FPGA and the accelerators on the logic die of the 3D-stacked memory. Third, \codebold{\hpim}~offers the same speedup as \codebold{\hpim-IIC}, which shows how well our partitioning algorithm minimizes the inter-platform communications. Our evaluations show that the bandwidth overhead in \codebold{\hpim}~is less than \xxs{0.001\%} and \codebold{\hpim}~effectively hides the delay overhead. Fourth, \codebold{PIM-CE} outperforms \codebold{PIM-GU} by \xxs{2.9$\times$}, which shows the effectiveness of the heterogeneous compute engines as compared to general-purpose units. Moreover, \hpim~combines heterogeneous compute engines with split execution to capture the whole available bandwidth, and hence, outperforms~\codebold{PIM-CE} by \xxs{2.1$\times$}.
It is due to two reasons: (1) in \codebold{\hpim}, a portion of data communications is local, as it executes a portion of computations inside the 3D-stacked memory, and (2) \codebold{\hpim}~exploits light-weight compute engines in both in-memory and out-of-the-memory platforms, which consume less energy than the general-purpose ALUs of \codebold{FPGA}. Second, EDP of \codebold{\hpim}~is \xxs{86.1$\times$} and \xxs{2.1$\times$} lower than \codebold{PIM-GU}'s and \codebold{PIM-GU-Unlimited}'s, respectively. Although \codebold{PIM-GU} and \codebold{PIM-GU-Unlimited} perform all the communications and computations inside the 3D-stacked memory, their general-purpose ALUs consume more energy than compute engines of \codebold{\hpim}. Third, \codebold{PIM-CE} outperforms both \codebold{PIM-GU} and \codebold{PIM-GU-Unlimited} by \xxs{67.3$\times$} and \xxs{1.7$\times$}, on average, respectively. This is because \codebold{PIM-CE} exploits heterogeneous compute engines whose energy usage is significantly lower than general-purpose ALUs in \codebold{PIM-GU} and \codebold{PIM-GU-Unlimited}. Fourth, \codebold{\hpim}~and \codebold{\hpim-IIC}~offer very close EDP due to low inter-platform communications offered by split execution of \codebold{\hpim}. \begin{figure}[h] \centering % \includegraphics[width=0.35\textwidth]{origami-speedup-EDP-reduction-complexGU.pdf} % \caption{Breakdown of speedup and EDP reduction between pattern-aware execution and split execution of \hpim.} % \label{fig:breakdown} \end{figure} \begin{figure*}[h] \centering % \includegraphics[width=1\textwidth]{parallel-speedup-complexGU.pdf} % \caption{Speedup sensitivity to the three parallelism types.} % \label{fig:speedup-parallel} \end{figure*} \begin{figure*}[h] \centering % \includegraphics[width=1\textwidth]{parallel-EDP-Reduction-complexGU.pdf} % \caption{EDP sensitivity to the three parallelism types.} % \label{fig:energy-parallel} \end{figure*} \niparagraph{Sources of Benefit.} \codebold{\hpim} exploits two execution techniques, \emph{pattern-aware execution} and \emph{split execution}, to effectively run different kinds of ML algorithms. To shed light on the importance of these two techniques, we compare \codebold{\hpim} against \codebold{PIM-GU}. As compared to \codebold{PIM-GU}, \codebold{\hpim} offers two advantages: (1) heterogeneous compute engines instead of general-purpose ALUs, and (2) split execution over the compute engines and the out-of-the-memory platform. Figure~\ref{fig:breakdown} shows the contribution of each technique to the speedup and EDP reduction of \codebold{\hpim} as compared to \codebold{PIM-GU}. The figure shows that heterogeneous compute engines are responsible for 48\% of the speedup and 78\% of the EDP reduction. Likewise, split execution is responsible for 52\% of the speedup and 22\% of the EDP reduction. These results clearly show that both techniques are necessary for the success of \codebold{\hpim}. \niparagraph{Sensitivity to Parallelism Types.} To better illustrate the source of benefit in \codebold{\hpim}, Figures~\ref{fig:speedup-parallel} and ~\ref{fig:energy-parallel} show the effect of different parallelism types (i.e., \emph{block\_{level}}, \emph{partial\_{level}}, and \emph{model\_{level}}) on the speedup and EDP across the evaluated benchmarks. The first bar, \codebold{block\_{level}}, shows the results when only \emph{block\_{level}} parallelism is enabled. 
The second bar, \codebold{partial\_{level}}, shows the speedup and EDP when both \emph{block\_{level}} and \emph{partial\_{level}} parallelism types are enabled. Finally, the last bar, \codebold{model\_{level}}, illustrates the results when all three parallelism types (the default mode in \codebold{\hpim}) are enabled. We make three observations. First, by enabling \emph{block\_{level}} parallelism, \codebold{\hpim}~outperforms \fpga in terms of speedup and EDP, by \xxs{1.52$\times$} and \xxs{25.24$\times$}, on average, respectively. This is because all benchmarks benefit from \emph{block\_{level}} parallelism; thus, by leveraging \emph{block\_{level}} parallelism, \codebold{\hpim}~splits the execution over both compute engines in the 3D-stacked memory and the out-of-the-memory FPGA platform. Out of twelve benchmarks, six benchmarks (\bench{LinReg (M1), LinReg (M2), LogReg (M1), LogReg (M2), SVM (M1), and SVM (M2)}) have only \emph{block\_{level}} parallelism. Thus, these benchmarks see no speedup or improvement in EDP by enabling \emph{partial\_{level}} and \emph{model\_{level}} parallelism types. Second, by enabling both~\codebold{block\_{level}}~and~\codebold{partial\_{level}}, on average, \codebold{\hpim}~achieves \xxs{1.56$\times$} and \xxs{28.10$\times$} improvement in execution time and EDP over \codebold{\fpga}. \bench{BProp}, \bench{Reco}, and \bench{2D-Reg} achieve higher speedup and lower EDP over \codebold{block\_{level}}. As an example, \codebold{partial\_{level}} improves the speedup and EDP of \bench{BProp} by \xxs{$\approx$10\%} and \xxs{26.75\%}, respectively, over \codebold{block\_{level}}. Third, by enabling all three parallelism types, \codebold{\hpim} improves speedup and EDP of \bench{Reco} by \xxs{1.6$\times$} and \xxs{31.0$\times$}, on average, respectively, over \codebold{\fpga}. Other benchmarks do not have \codebold{model\_{level}} parallelism and achieve no improvements by enabling \codebold{model\_{level}}. \bench{Reco} has two independent models and achieves the highest speedup and the lowest EDP by \codebold{model\_{level}}. \codebold{model\_{level}} improves the speedup and EDP reduction of \bench{Reco} as compared to other parallelism types by \xxs{6\%} and \xxs{1.50$\times$}, on average, respectively. Although \bench{BProp} includes three models, they are not independent; thus, \bench{BProp} does not have \codebold{model\_{level}} parallelism. These results assert the importance of exploring various types of parallelism to fully benefit from \codebold{\hpim}. \section{Related Work} \label{sec:related} Our proposal, \hpim, is fundamentally different from prior work in the following directions: (1) \hpim~extracts compute patterns of ML algorithms and translates them into heterogeneous compute engines on the logic die of a 3D-stacked memory, (2) \hpim~splits execution of ML algorithms over the heterogeneous compute engines and an out-of-the-memory compute platform to utilize all the available bandwidth, and (3) \hpim~exploits an optimization algorithm to split the computation of ML algorithms between two platforms in a load-balanced manner and with minimum inter-platform communications to maximize resource utilization.
There has been a wealth of architectures for in-memory accelerators that integrate logic and memory onto a single die to enable higher memory bandwidth and lower access energy~\cite{tetris:asplos:2017,pipelayer:hpca:2017,tom:isca:2016,isaac:archnews:2016,neurocube:isca:2016,chameleon:micro:2016,prime:archnews:2016,pim:nda:hpca:2015,pim:drama:cal:2015,pim:sparse:hpec:2013,neuflow:cvprw:2011, bitfusion:isca:2018, snapea:isca:2018, ganax:isca:2018, saberlda:asplos:2017, eyeriss:isca:2016, cnvlutin:rchnews:2016, shidiannao:isca:2015, stripes:micro:2016, scnn:isca:2017, eie:isca:2016, cambrion:micro:2016, dadiannao:micro:2014, pudiannao:asplos:2015, FPGADeep:FPGA:2015, dnnweaver:micro:2016,tabla:hpca:2016, CHiMPS:FPGA:2008, large:journal:2011, tensorflow, convGPU:2017, Oh:2004, Guzhva:2009, tpu:isca:2017, CMP-PIM_DAC118, PIM-training_Micro18}. Most of these in-memory architectures accelerate only the inference phase of ML algorithms; the few in-memory accelerators that accelerate both the training and inference phases, such as Neurocube~\cite{neurocube:isca:2016} and Proger PIM~\cite{PIM-training_Micro18}, target only CNNs and do not work for other ML algorithms. Prior work exploits ASIC~\cite{bitfusion:isca:2018, snapea:isca:2018, ganax:isca:2018, eyeriss:isca:2016, cnvlutin:rchnews:2016, shidiannao:isca:2015, stripes:micro:2016, scnn:isca:2017, eie:isca:2016, cambrion:micro:2016, dadiannao:micro:2014, pudiannao:asplos:2015, tpu:isca:2017}, GPU~\cite{convGPU:2017, Oh:2004, Guzhva:2009, saberlda:asplos:2017}, FPGA~\cite{FPGADeep:FPGA:2015, dnnweaver:micro:2016,tabla:hpca:2016, CHiMPS:FPGA:2008, large:journal:2011}, and multi-computing-node~\cite{tensorflow, tpu:isca:2017} platforms to accelerate ML algorithms. While effective, these techniques do not benefit from in-memory processing. Some prior work uses split execution to accelerate ML algorithms. Shen et al.~\cite{resourcePartitioning:ISCA:2017} partition FPGA resources to process different subsets of the convolutional layers of CNNs. Scalpel~\cite{scalpel:ISCA:2017} customizes DNN pruning by combining SIMD-aware weight pruning and node pruning. Park et al.~\cite{cosmic:Micro:2017} distribute only the optimization part of the training phase of different ML algorithms over an FPGA and \emph{n} ASIC units; their technique does not offer load balancing. Scaledeep~\cite{scaledeep:ISCA:2017} uses heterogeneous processing tiles that are customized for the compute-intensive and memory-intensive parts of training DNNs. Proger PIM~\cite{PIM-training_Micro18} uses a CPU, a fixed-function PIM unit, and a programmable PIM unit for training different CNN models. These techniques are fundamentally different from our proposed technique, since most of them do not use in-memory processing and none of them offers all the features needed for efficient in-memory processing. \section{Conclusion} \label{sec:concl} During the training phase, ML algorithms iteratively process large amounts of data, which consumes significant bandwidth and energy. Although in-memory accelerators provide high memory bandwidth and consume less energy, they suffer from a lack of generality or efficiency. We propose \hpim, a holistic approach that exploits heterogeneous compute engines on the logic die to efficiently cover a wide range of ML algorithms and splits the execution of ML algorithms over the in-memory compute engines and an out-of-memory compute platform to use all the available bandwidth.
The evaluation results show that \hpim~outperforms the best-performing prior work in terms of performance and energy-delay product (EDP) by up to \xxs{1.6$\times$} and \xxs{31$\times$}, respectively. \hpim~also improves average performance and energy efficiency by \xxs{1.5$\times$} and \xxs{21$\times$}, respectively. \section{Pattern-Aware Execution} \label{sec:pattern} To accelerate different ML algorithms, one solution is to use general-purpose execution units. However, as we discussed in \S~\ref{sec:motiv}, although general-purpose execution units (e.g.,~\cite{tabla:hpca:2016}) can accelerate a wide range of ML algorithms, they utilize only a small fraction of the available memory bandwidth provided by a 3D-stacked memory due to their large area overhead. To address this problem, this work proposes to integrate light-weight accelerators whose power and area are much smaller than those of general-purpose accelerators, while at the same time providing enough generality to accelerate different kinds of ML algorithms. Our key idea is to identify common compute patterns of different ML algorithms and map them to light-weight hardware accelerators. \subsection{Compute Patterns} \label{subsec:Blocks} We thoroughly examine the compute graphs of different ML algorithms and break the graphs down into several compute patterns. We observe that some of these compute patterns are common across different ML algorithms. Each ML algorithm has an objective function, \emph{(f)}, and a set of weights, \emph{(w)}, a.k.a., models,\footnote{We use weight and model interchangeably in this paper.} that map the elements of an input vector, \emph{${(X)}$}, to the output, \emph{${(Y)}$}, as shown in Equation~\ref{eq:objFunc}. \begin{equation} \label{eq:objFunc} \min_{W}\sum_{i} f(W, X_{i}, Y_{i}) \end{equation} The objective function is a cost function that measures the distance between the predicted output and the actual output for the corresponding input dataset. Solving an optimization problem over the training data, ML algorithms gradually minimize the objective function. Stochastic Gradient Descent (SGD) is a widely-used algorithm to gradually minimize objective functions~\cite{de2017understanding, bottou1991stochastic, bottou2012stochastic, asgd, async-p2p, 1bit-sgd, efficient-mini-batch-sgd}. Due to the popularity of SGD, in this paper, we consider SGD as the optimization algorithm; however, our work is general and can accommodate other optimization algorithms as well. Equation~\ref{eq:sgd} shows how SGD solves the optimization problem defined in Equation~\ref{eq:objFunc}. \begin{equation} \label{eq:sgd} W^{(t+1)} = W^{(t)} - \mu \times \frac{\partial \left(\sum_{i}f(W^{(t)}, X_{i}, Y_{i})\right)}{\partial W^{(t)}} \end{equation} SGD updates \( W^{(t)} \), by computing \( W^{(t+1)}\), in the reverse direction of the gradient, \(\partial f\), which speeds up the minimization of the objective function. ML algorithms use the updated weights with other $m$ input vectors during the next iterations. Parameter \(\mu\) is the learning rate of the ML algorithm. Considering the objective functions and parameters of different ML algorithms, we extract four types of compute patterns in the compute graphs of different ML algorithms. The combination of these compute patterns optimizes the objective function. Three of these compute patterns are common among different ML algorithms, and one of them is algorithm dependent.
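To make these recurring operations concrete before naming them, the following minimal sketch (Python/NumPy; purely illustrative and not part of the \hpim~toolchain) performs one SGD iteration of logistic regression. The dot product, the non-linearity, the comparison against the known output, and the weight update correspond one-to-one to the four compute patterns enumerated next.
\begin{verbatim}
import numpy as np

def sigmoid(s):
    # Non-linear operation (algorithm-dependent pattern).
    return 1.0 / (1.0 + np.exp(-s))

def sgd_step(w, x, y, mu):
    # One SGD iteration of logistic regression.
    s = np.dot(w, x)            # dot product of input and weights
    pred = sigmoid(s)           # algorithm-dependent non-linearity
    delta = pred - y            # compare predicted vs. known output
    return w - mu * delta * x   # update weights against the gradient

rng = np.random.default_rng(0)
w = rng.normal(size=4)          # 4 weights (the "model")
x = rng.normal(size=4)          # one input vector
w = sgd_step(w, x, y=1.0, mu=0.01)
\end{verbatim}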
\begin{itemize}[leftmargin=1mm,itemsep=0mm,parsep=0mm,topsep=0mm] \item \textbf{Common Compute Patterns.} \begin{enumerate}[leftmargin=1.5mm,itemsep=0mm,parsep=0mm,topsep=0mm] \item \textbf{\macblock~Compute Pattern}. The first compute pattern of different ML algorithms calculates the dot product of the input vector, \emph{(X)}, and the weight vector, \emph{(W)}. We refer to this dot product, \( \sum_{i} X_{i} W_{{j}{i}};\ i \in [0, k),\ j \in [0, n)\), as \macblock. This compute pattern is needed to compute the predicted output. \item \textbf{\compblock~Compute Pattern}. The predicted output of an ML algorithm is compared against a threshold or the known output, \emph{(Y)}, usually using a subtraction operation. We refer to this compute pattern as \compblock. Using this compute pattern, the output of the objective function, \emph{delta}, is calculated. \item \textbf{\optblock~Compute Pattern}. Using an optimization method, an ML algorithm updates the models to minimize the output of the objective function, \emph{delta}. This compute pattern, \(W^{(t+1)} = W^{(t)} - \mu \times delta \), is referred to as \optblock. \end{enumerate} \item \textbf{Algorithm-Dependent Compute Pattern.} In addition to the aforementioned compute patterns, some ML algorithms need to perform extra operations in their objective functions to calculate the predicted output and the delta value. These extra operations differ from one ML algorithm to another, and include basic operations (e.g., $-$, $+$, $*$, $<$, and $>$) and non-linear operations (e.g., Sigmoid, Gaussian, Sigmoid Symmetric, and Log). We refer to this compute pattern as \emph{algorithm-dependent} (a.k.a., \specblock). \end{itemize} \begin{table*}[h] \centering \caption{Compute patterns for several ML algorithms.} \includegraphics[width=0.75\textwidth]{pattern.pdf} \label{tab:pattern} \end{table*} \begin{figure}[h] \setstretch{1} \centering \includegraphics[width=0.3\textwidth]{logistic.pdf} \caption{Compute patterns of the Logistic Regression (LogReg) algorithm.}\label{fig:logistic} \end{figure} Table~\ref{tab:pattern} shows the compute patterns of the ML algorithms that we considered. For example, the compute patterns of \bench{LogReg}, as shown in Figure~\ref{fig:logistic}, include \macblock~(\circledb{1}), \sigblock~(\circledb{2}), \compblock~(\circledb{3}), and \optblock~(\circledb{4}). This table shows that different ML algorithms share the common compute patterns \macblock, \compblock, and \optblock. Note that \bench{LogReg} has the same compute patterns as \bench{2D-Reg}. The reason is that \bench{LogReg} and \bench{2D-Reg} exploit the same objective function to optimize one-dimensional and two-dimensional models, respectively. \subsection{Light-Weight Compute Engines} \label{subsec:accelerators} The compute patterns in the compute graphs of ML algorithms can easily be mapped to a set of \emph{heterogeneous compute engines}, such as \macunit, \compunit, \optunit, and \nonlinunit, where each compute engine is customized to execute one particular compute pattern efficiently. Each compute engine (accelerator), as shown in Figure~\ref{fig:arch_detail}, is named after its corresponding compute pattern. \niparagraph{Reduction Compute Engine (\macunit).} To perform the \macblock~compute pattern, we need an array of multipliers, along with an arrangement of adders to accumulate the products into one final output.
The \macunit~compute engine of size $k$, denoted $k$/RU, is an array of $k$ multipliers, each evaluating the product of $x_i$ and $w_i$, followed by as many adders as required to aggregate the sum of all the products into one final output, saving the output in the \emph{sum register}. Figure~\ref{fig:arch_detail}~\circledb{1} shows a \macunit~of size 8, ``8/RU'', in which eight multiplications are performed, and the products are accumulated using several levels of adders. \niparagraph{Comparator Compute Engine (\compunit).} \compunit, labeled as \circledb{2} in Figure~\ref{fig:arch_detail}, performs a subtraction between the predicted output, \emph{PO}, and the expected output, \emph{EO}, and generates a difference to be used for updating the models. \niparagraph{Optimization Compute Engine (\optunit).} \optunit, labeled as \circledb{3} in Figure~\ref{fig:arch_detail}, is a serial chain of two multiplier units and a subtractor unit, which updates an element of the \emph{model} array. To update $n$ \emph{weight}s, we need to integrate $n$ instances of the \optunit. \niparagraph{Non-Linearity Compute Engine (\nonlinunit).} The last compute pattern, \specblock, is algorithm-specific and mostly performs non-linear operations. The linear operations of \specblock~are performed by the other compute engines. Regarding the non-linear operations, one realization of this accelerator is a general-purpose ALU that has a special unit for non-linear operations. However, this realization is expensive in terms of area and power. To address this problem, we borrow the idea of implementing non-linear functions using lookup tables from prior work~\cite{kaufmann1998signal, engel2001high, keahey1996techniques, mielikainen2006lossless}, but instead of dedicated lookup tables on the logic die, we use the die-stacked DRAM to hold the outputs of the non-linear functions. This is mainly because the area and power budgets of the logic die are limited and we prefer to dedicate the available area and power to the other compute engines. \begin{figure}[t] \centering \includegraphics[width=0.5\textwidth]{origami-arch.pdf} \caption{Reduction, Comparator, and Optimization compute engines.}\label{fig:arch_detail} \end{figure} \section{Split Execution} \label{sec:split} As the area and power budgets of the logic layer of 3D-stacked memories are limited, in-memory accelerators cannot utilize the whole bandwidth offered by 3D-stacked DRAM, even if we use light-weight compute engines. One way to address this limitation is to use out-of-the-memory resources on the active die in parallel with the in-memory accelerators. The out-of-the-memory resource can be an ASIC, GPU, FPGA, TPU, or any other type of compute platform. To this end, we need to split the execution over the two platforms. Split execution offers a performance improvement only if both platforms are fully utilized and the inter-platform communications are minimal. Analyzing the compute graphs of ML algorithms, we devise a platform-aware partitioning mechanism. We observe that there are three different types of parallelism inside ML algorithms. Based on these parallelism types and the specifications of the two platforms (the heterogeneous accelerators on the logic die and the out-of-the-memory compute platform), we partition the compute graph over the platforms to maximize resource utilization with minimal inter-platform communications.
\begin{figure}[t] \centering \includegraphics[width=0.48\textwidth]{parallel.pdf} \caption{Three different types of parallelism, namely \emph{block\_{level}}, \emph{partial\_{level}}, and \emph{model\_{level}}.}\label{fig:parallel} \end{figure} \vspace{-5pt} \subsection{Parallelism Types}\label{subsec:parallel} We observe that there are up to three types of parallelism, as shown in Figure~\ref{fig:parallel}, in the compute graph of an ML algorithm. \begin{enumerate}[leftmargin=2mm,itemsep=0mm,parsep=0mm,topsep=0mm] \item \textbf{Model\_{level} parallelism}. Some ML algorithms optimize more than one \emph{weight} array, and each \emph{weight} array works on completely distinct data arrays. For example, the Recommender Systems (\bench{Reco}) algorithm optimizes two independent weights: movie\_{feature} and users\_{feature}. Thus, its compute graph consists of independent compute subgraphs, labeled \circledb{1}~in Figure~\ref{fig:parallel}. \item \textbf{Partial\_{level} parallelism}. The \emph{weight} arrays of some ML algorithms, such as Recommender Systems (\bench{Reco}), 2D-Regression (\bench{2D-Reg}), and Back-propagation (\bench{BProp}), have more than one dimension, \( W_{{j}{i}};\) \(i \in [0, k)\) and \(j \in [0, n)\), where \(n>1\). In such cases, the compute graph consists of up to ${n}$ independent compute subgraphs. These compute subgraphs work on all elements of an \emph{input} array and a portion of the elements of the \emph{weight} array, as labeled \circledb{2}~in Figure~\ref{fig:parallel}. With this type of parallelism, we can create two subgraphs that work on two distinct portions of the \emph{weight} array, \( \emph{model[a][i]} \) and \( \emph{model[b][i]} \), where \( a \in [0, n_1) \), \( b \in [n_1, n) \), and \(i \in [0, k) \). \item \textbf{Block\_{level} parallelism}. There is internal parallelism in the \macblock~and \optblock~compute patterns, as they operate on the \emph{weight} and \emph{input} arrays, \( W_{i} \) and \( X_{i} \), where \( i \in [0, k) \). As shown by the compute subgraphs labeled \circledb{3}~in Figure~\ref{fig:parallel}, these compute patterns can be broken into two parts, which operate on distinct elements of the \emph{input} and \emph{weight} arrays. While the two parts of the \optblock~compute pattern are independent of each other, the two parts of the \macblock~compute pattern generate partial sums that need to be aggregated (i.e., inter-platform communications). To benefit from this source of parallelism, we partition the \emph{input} and \emph{weight} arrays into two parts. The \macblock~compute pattern can then be executed on both partitions in parallel on the two platforms. When the two partitions are executed, the two results are aggregated. To minimize the inter-platform communications, the \optblock~computation is executed on the same two partitions at each platform. \end{enumerate} As ML algorithms always include the \macblock~and \optblock~compute patterns, our analysis reveals that there is at least one source of parallelism in an ML algorithm (while some algorithms benefit from two or even all three sources of parallelism). \subsection{Platform-Aware Partitioning}\label{subsec:partition} Using the three types of parallelism, we partition an ML algorithm using Algorithm~\ref{alg:patternAlg}.
The partitioning algorithm receives the compute graph,~\emph{G}, and the specifications of the two platforms (\codebold{in-memory} and \codebold{out-of-the-memory}, labeled as \codebold{MEM} and \codebold{External} in the algorithm, respectively), and statically partitions the compute graph into two subgraphs to be assigned to the two platforms. The partitioning algorithm attempts to maximize resource utilization using two key ideas: (1) minimizing inter-platform communications, and (2) splitting the execution over the two platforms in a load-balanced manner. As a result, the algorithm makes sure that the two platforms finish their execution at about the same time. The static nature of the partitioning algorithm relieves the hardware control unit of the overhead of runtime load balancing. The \emph{platform-aware partitioning} algorithm follows two steps: \begin{itemize} [leftmargin=3mm,itemsep=0mm,parsep=0mm,topsep=0mm] \item \textbf{Minimizing inter-platform communications.} First, we extract all the available types of parallelism in an ML algorithm. Second, out of the available types of parallelism, we pick the best one to partition the compute graph into two subgraphs (line 5). The \emph{model\_{level}} parallelism has the highest priority, as it consists of two independent subgraphs working on different weights and inputs. The next priority belongs to the \emph{partial\_{level}} parallelism, in which the compute graph is divided into two independent subgraphs working on distinct parts of the weight array; unlike \emph{model\_{level}} parallelism, the two partitions operate on the same input. The \emph{block\_{level}} parallelism has the lowest priority because it requires inter-platform communications, while the other two types of parallelism (i.e., \emph{model\_{level}} and \emph{partial\_{level}}) have none. \item \textbf{Providing load-balanced partitioning.} To offer a load-balanced partitioning between the two compute platforms, we set the size of each partition (subgraph) based on the throughput of the corresponding platform (lines 7-26). To this end, we calculate the throughput of a platform using Equation~\ref{eq:throughput}: \begin{equation} \label{eq:throughput} Throughput = \min(Memory\,\,BW,\ Compute\,\,BW) \end{equation} For the memory bandwidth, we assign as much bandwidth as needed to the heterogeneous compute engines on the logic die; the rest of the bandwidth is given to the out-of-the-memory platform. To calculate the compute bandwidth, we use Equation~\ref{eq:computeBW}: \begin{equation} \label{eq:computeBW} Compute\,\,BW = \sum_{i}\sum_{j} I_{{i}{j}} \times data\,\,size \times frequency \end{equation} where \(I_{{i}{j}}\) indicates the number of inputs of the \emph{i}$_{th}$ resource, \emph{data size} indicates the size of each input in bytes, and \emph{frequency} is the operating frequency of the platform (lines 7-9). Assume that a platform has a $1~GHz$ frequency, $320~GB/s$ \emph{memory BW}, and 1000 32-bit multipliers. Its \emph{compute bandwidth} is \emph{8~TB/s} (1000 multipliers $\times$ 2 inputs $\times$ 4 bytes $\times$ 1~GHz), so its \emph{throughput} is the minimum of the two, i.e., 320~GB/s. To offer a load-balanced partitioning, we partition the compute graph of an ML algorithm based on the throughput ratio of the two platforms, ${rateThroughput}$.
\begin{enumerate}[leftmargin=2mm,itemsep=0mm,parsep=0mm,topsep=0mm] \item{\textbf{Block\_{level} parallelism.}} We divide the $k$ weights into two parts in proportion to $rateThroughput$, \emph{(k = num1+num2; num1 = rateThroughput $\times$ num2)}, (lines 12-15). \item{\textbf{Partial\_{level} parallelism.}} We divide the second dimension of the \emph{weight} array, whose size is $n$, into two parts of size $n_1$ and $n_2$ in proportion to ${rateThroughput}$, \emph{($n = n_1 + n_2$; $n_1 = rateThroughput \times n_2$)} (lines 16-19). \item{\textbf{Model\_{level} parallelism.}} We sort the independent models based on their size. Starting from the largest, we assign the models, one by one, to the platform with the larger throughput until the partitioning ratio reaches $rateThroughput$. Since \emph{model\_{level}} parallelism only allows coarse-grained partitioning (i.e., assigning a whole model to a platform), the partitioning ratio might not exactly match $rateThroughput$. Consequently, after the \emph{model\_{level}} partitioning is done, we apply the other available types of parallelism (e.g., \emph{block\_{level}} or \emph{partial\_{level}}) to bring the ratio to $rateThroughput$. \end{enumerate} \end{itemize} \begin{algorithm}[htp] \setstretch{0.80} \small \SetAlgoLined\DontPrintSemicolon \SetKwFunction{algo}{Platform-Aware Partitioning}\SetKwFunction{proc}{pattern} \SetKwProg{myalg}{Algorithm}{}{} \myalg{\algo{}}{ \SetKwInOut{Input}{input} \SetKwInOut{Output}{output} \Input{ \hspace*{0.1cm}$G$: Compute Graph \\ \hspace*{0.1cm}$External$-$Spec$: Out-of-the-memory Specifications \\ \hspace*{0.1cm}$MEM$-$Spec$: In-memory Specifications \\ } \Output{ \hspace*{0.1cm}$External$-$Partitions$: subgraphs assigned to the out-of-the-memory platform \\ \hspace*{0.1cm}$MEM$-$Partitions$: subgraphs assigned to in-memory compute engines \\ } External-Partitions $\leftarrow$ empty() \\ MEM-Partitions $\leftarrow$ empty() \\ queue $\leftarrow$ empty() \\ pMode = parallelismAnalyzer(G) \\ queue.push(G) \\ External-Spec.throughput = min ( External-Spec.memoryBW, External-Spec.computeBW) \\ MEM-Spec.throughput = min (MEM-Spec.memoryBW, MEM-Spec.computeBW) \\ rateThroughput = External-Spec.throughput / MEM-Spec.throughput \\ \While {(!queue.empty())}{ G = queue.pop() \\ \If{(pMode == Block\_{level})}{ num1 $\leftarrow$ num-model * (rateThroughput/(1+rateThroughput))\\ num2 $\leftarrow$ num-model - num1 \\ {G1, G2} += partition(G, num1, num2) \\ } \ElseIf{ (pMode == Partial\_{level}) }{ dim1 $\leftarrow$ ${N\_{dim}}$ * (rateThroughput/(1+rateThroughput))\\ dim2 $\leftarrow$ ${N\_{dim}}$ - dim1\\ {G1, G2} += partition(G, dim1, dim2) \\ } \ElseIf{(pMode == Model\_{level})}{ {g1, g2} $\leftarrow$ partition(G, rateThroughput) \\ G1 += g1 \\ rateThroughput = update(g1, g2, rateThroughput)\\ pMode = parallelismAnalyzer(g2) \\ queue.push(g2) \\ } } External-Partitions.insert(G1)\\ MEM-Partitions.insert(G2) }{} \caption{Platform-Aware Partitioning}\label{alg:patternAlg} \end{algorithm} \section{Origami} \label{sec:workflow} \begin{figure*}[ht] \centering \includegraphics[width=1\linewidth]{workflow.pdf} \caption{{\hpim}~Workflow.}\label{fig:platform} \end{figure*} \vspace{-5pt} We propose a heterogeneous split architecture for in-memory acceleration of ML algorithms, called \hpim.
\hpim~is a holistic approach that benefits from pattern-aware and split execution to accelerate different ML algorithms over a set of heterogeneous compute engines on the logic die and a compute platform on the active die. \hpim~is a hardware-software solution that spans different abstraction levels, including the \emph{programming layer} (\S~\ref{subsec:PL}), \emph{compiler layer} (\S~\ref{subsec:CL}), \emph{architecture layer} (\S~\ref{subsec:AL}), and \emph{hardware layer} (\S~\ref{subsec:HL}). Figure~\ref{fig:platform} shows the main components of \hpim. \\ \subsection{Programming Layer}\label{subsec:PL} The \emph{programming layer} includes a \emph{programming interface} and a \emph{graph extractor} unit to translate the high-level specification of an ML algorithm into its corresponding \emph{compute graph}, as shown in Figure~\ref{fig:platform}, labeled \circledb{1}. \textbf{Programming Interface.} The \emph{programming interface} receives a high-level specification which includes the learning parameters, data declaration, and mathematical declaration of an ML algorithm. The learning parameters include the learning rate and the number of features. The data declaration specifies the various types of data, such as the training input vectors (a.k.a., \emph{input} or \emph{model\_input}), the real outputs (a.k.a., \emph{model\_output}), and the weights (a.k.a., \emph{model\_parameters} or \emph{model}). The mathematical declaration specifies how the objective function of the ML algorithm is computed, using mathematical operations, to update the \emph{weights}. The mathematical operations fall into three categories: (1) \emph{basic operations}, such as \(-, +, *, <, >\), (2) \emph{group operations}, such as \(\sum\), \(\lVert \cdot \rVert \), \(\Pi\), and (3) \emph{non-linear operations}, such as \(Sigmoid\), \(Gaussian\), \(Sigmoid~Symmetric\), and \(Log\). \textbf{Graph Extractor.} Receiving the high-level specification of an ML algorithm, the \emph{graph extractor} unit extracts the corresponding compute graph. \subsection{Compiler Layer}\label{subsec:CL} At the \emph{compiler layer}, \hpim~performs five operations: (1) extracts the compute patterns, (2) detects the parallelism types, (3) partitions the compute graph into two load-balanced parts with minimum inter-part communications, (4) assigns each part to a compute platform, and (5) schedules the execution of the algorithm over the two platforms. Managing these goals at the compiler layer alleviates the runtime overhead and facilitates the management of simultaneous execution, which simplifies the control mechanism in the 3D-stacked memory. The \emph{compiler layer} consists of two key components, as shown in Figure~\ref{fig:platform}: \circledb{2}~\codebold{Pattern extractor} proceeds in three steps: (1) it creates three pattern subgraphs for the three \emph{common compute patterns}; (2) it runs a pattern-matching algorithm, adapted from graph algorithms~\cite{graph-partitioning:kit:2016, graph-clustering:mathematics:2004}, to find all instances of these compute patterns in the compute graph, the remaining parts of the compute graph being instances of the \emph{\specblock~compute pattern}; and (3) it clusters all the nodes of each instance into a coarse-grained node in the pattern compute graph.
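As a rough illustration of the pattern extractor's output, the following sketch (Python; the graph encoding, op names, and engine labels are ours for illustration and not \hpim's internal representation) tags each node of a toy LogReg compute graph with the compute engine its pattern maps to. The real extractor matches multi-node subgraphs; here each pattern is collapsed to a single op for brevity.
\begin{verbatim}
# Toy compute graph for LogReg: node -> (op, input nodes).
GRAPH = {
    "s":     ("dot",     ["x", "w"]),      # reduction pattern
    "pred":  ("sigmoid", ["s"]),           # algorithm-dependent pattern
    "delta": ("sub",     ["pred", "y"]),   # comparator pattern
    "w_new": ("opt",     ["w", "delta"]),  # optimization pattern
}

# Common patterns map to dedicated engines; anything else falls
# through to the non-linearity engine (DRAM lookup table).
ENGINE_OF = {"dot": "RU", "sub": "COM", "opt": "OU"}

def extract_pattern_graph(graph):
    # Cluster each matched instance into a coarse-grained node
    # labeled with its target compute engine.
    return {node: (ENGINE_OF.get(op, "NLU"), inputs)
            for node, (op, inputs) in graph.items()}

print(extract_pattern_graph(GRAPH))
\end{verbatim}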
\circledb{3}~\codebold{Partition analyzer} detects the parallelism types in the pattern compute graph of an ML algorithm and partitions the graph into two parts to be executed on the heterogeneous compute engines on the logic die and an out-of-the-memory compute platform (i.e., an FPGA in this paper\footnote{\hpim~is general and can benefit from different kinds of out-of-the-memory compute platforms, such as FPGAs and GPUs. Without loss of generality, in this work, we assume that the compute platform on the active die is an FPGA. We use the FPGA as an example and leave the examination of other platforms for future work.}). The \emph{partition analyzer} follows Algorithm~\ref{alg:patternAlg} to split the pattern compute graph into two load-balanced partitions with minimum inter-platform communications. \subsection{Architecture Layer}\label{subsec:AL} The \emph{code generator} uses our proposed \emph{instruction set architecture (ISA)} to prepare the executable code and a static schedule for the heterogeneous compute engines. \subsubsection{ISA}\label{subsubsec:ISA} The proposed ISA is a RISC instruction set that consists of two flags, two types of registers, \emph{computation} and \emph{synchronization}, and three types of instructions, \emph{communication}, \emph{computation}, and \emph{synchronization}. The input and output of each compute engine are hardwired to dedicated \emph{computation} registers. In addition to the \emph{computation} registers, there are two registers for synchronizing the execution of the compute engines and the out-of-the-memory platform. \textbf{Communication Instructions} transfer data from memory locations to registers and vice versa (\codebold{mov \%src, \%des}). \textbf{Computation Instructions} use the compute engines to perform computation in the 3D-stacked memory. There are three computation instructions: \begin{enumerate}[leftmargin=1.5mm,itemsep=0mm,parsep=0mm,topsep=0mm] \item \codebold{reduce \%Num}: enables the \codebold{Num}$_{th}$ \macblock~compute engine to operate on its input registers and store the results in the output register. \item \codebold{comparator \%Num}: enables the \codebold{Num}$_{th}$ \compblock~compute engine to operate on its input registers and store the results in the output register. \item \codebold{optimization \%Num}: enables the \codebold{Num}$_{th}$ \optblock~compute engine to operate on its input registers and store the results in the output register. \end{enumerate} \textbf{Synchronization Instructions.} Synchronization instructions handle the required interactions between the two compute platforms. \hpim~utilizes three parallelism types to split the execution of the compute graph over the two platforms. With the \emph{model\_level} and \emph{partial\_level} parallelism types, there are no inter-platform communications, hence there is no need for synchronization. However, with the \emph{block\_level} parallelism type, as shown in Figure~\ref{fig:parallel}, there is a need for inter-platform communications. In \emph{block\_level}, the heterogeneous compute engines on the logic die execute a part of the reduction and optimization compute patterns, while the out-of-the-memory compute platform executes the rest. With the reduction compute pattern, the partial sums of the two platforms need to be aggregated. The optimization compute engine cannot start its execution until the two partial sums are aggregated.
For this purpose, one platform (called the~\emph{master}) is in charge of aggregating the partial sums and generating the final result, while the other platform (called the \emph{slave}) transfers its partial sum to the \emph{master}. When the \emph{master} is done with its partial sum, it waits to receive the partial sum of the \emph{slave}. Upon receiving the partial sum, the \emph{master} aggregates the partial sums and sends the final result to the \emph{slave}, which is waiting for the result to start the execution of the optimization compute pattern. To this end, the proposed \emph{ISA} uses two flags, \codebold{M\_{ready}} and \codebold{S\_{ready}}, two synchronization registers, \codebold{M\_{delta}} and \codebold{S\_{psum}}, and three instructions, \codebold{set}, \codebold{wait}, and \codebold{clr}. Without loss of generality, we assume that the out-of-the-memory compute platform is the \emph{master}. \begin{enumerate}[leftmargin=1.5mm,itemsep=0mm,parsep=0mm,topsep=0mm] \item \codebold{set \%f}: sets the value of flag \codebold{\%f}.\\ After preparing the partial sum, the in-memory controller writes the partial sum to \codebold{S\_{psum}} and sets \codebold{S\_{ready}} using the \codebold{set} instruction. The \emph{master} checks the \codebold{S\_{ready}} flag and reads the partial sum from \codebold{S\_{psum}} when the flag indicates it is ready. \item \codebold{wait \%f}: waits for the flag \codebold{\%f} to be set.\\ The in-memory controller waits for \codebold{M\_{ready}} to be set and then reads the value of the \codebold{M\_{delta}} register. The \emph{master} computes the \textit{delta}, which is needed for the optimization compute pattern, writes it to the \codebold{M\_{delta}} register, and sets the \codebold{M\_{ready}} flag. \item \codebold{clr \%f}: resets the value of flag \codebold{\%f}.\\ \end{enumerate} \vspace{-5pt} \subsubsection{Static Scheduling} The \emph{code generator} unit receives the part of the compute graph that needs to be executed on the heterogeneous compute engines on the logic die, and transforms it into a sequence of instructions to be executed by the in-memory controller, as we explain in \S~\ref{subsec:HL}. \subsection{Hardware Layer}\label{subsec:HL} \hpim~adds a set of \emph{heterogeneous compute engines} and an \emph{in-memory controller} to the logic die of a 3D-stacked memory. The \emph{in-memory controller} is a light-weight unit that executes the instructions of \S~\ref{subsubsec:ISA}.
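To make the handshake concrete, the following behavioral model (Python threads standing in for the two platforms; the flag and register names mirror the ISA above, everything else is illustrative) reproduces the \emph{block\_level} synchronization sequence executed by the in-memory controller and the master:
\begin{verbatim}
import threading

# Flags and registers modeling M_ready/S_ready and M_delta/S_psum.
S_ready, M_ready = threading.Event(), threading.Event()
regs = {"S_psum": 0.0, "M_delta": 0.0}

def slave(partial_sum):
    # In-memory side: publish the partial sum, then wait for the delta.
    regs["S_psum"] = partial_sum
    S_ready.set()                    # set %S_ready
    M_ready.wait()                   # wait %M_ready
    delta = regs["M_delta"]
    M_ready.clear()                  # clr %M_ready
    print("slave starts optimization pattern with delta =", delta)

def master(partial_sum, expected_output):
    # Out-of-the-memory side: aggregate partial sums, broadcast delta.
    S_ready.wait()                   # wait %S_ready
    total = partial_sum + regs["S_psum"]
    S_ready.clear()                  # clr %S_ready
    regs["M_delta"] = total - expected_output   # comparator pattern
    M_ready.set()                    # set %M_ready

t = threading.Thread(target=slave, args=(1.5,))
t.start()
master(partial_sum=2.5, expected_output=3.0)
t.join()
\end{verbatim}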
\section{Introduction} Electromagnetic form factors probe the internal structure of hadrons, mapping their charge and magnetic distributions. The slope of the electric and magnetic form factors at zero momentum yields the electric and magnetic root mean square radius, while the value of the form factors at zero momentum gives the electric charge and magnetic moment. Extensive electron scattering experiments have been carried out since the fifties for the precise determination of the nucleon form factors, including recent experiments at Jefferson Lab, MIT-Bates and Mainz. For a recent review of electron elastic scattering experiments, see Ref.~\cite{Punjabi:2015bba}. The proton radius can also be obtained spectroscopically, namely via the Lamb shifts of the hydrogen atom and of muonic hydrogen~\cite{Pohl:2010zza} and via transition frequencies of electronic and muonic deuterium. In these measurements, including a recent experiment using muonic deuterium~\cite{Pohl1:2016xoo}, discrepancies are observed in the resulting proton radius between hydrogen and deuterium and their corresponding muonic equivalents. Whether the discrepancy is due to new physics or to errors in the theoretical or experimental analyses, a first-principles calculation of the electromagnetic form factors of the nucleon can provide valuable insight. Although nucleon electromagnetic form factors have been extensively studied in lattice QCD, most of these studies have been carried out at higher than physical pion masses, requiring extrapolations to the physical point, which for the case of baryons carry a large systematic uncertainty. In this paper, we calculate the electromagnetic form factors of the nucleon using an ensemble of two degenerate light quarks (N$_\textrm{f}=2$) tuned to reproduce a pion mass of about 130~MeV, in a volume with $m_\pi L\simeq$3~\cite{Abdel-Rehim:2015pwa}. We use the twisted mass fermion action with clover improvement~\cite{Frezzotti:2003ni,Frezzotti:2000nk}. We employ $\mathcal{O}(10^5)$ measurements to reduce the statistical errors, and multiple sink-source separations to study excited state effects using three different analyses. We extract the momentum dependence of the electric and magnetic Sachs form factors for both the isovector and isoscalar combinations, i.e. for both the difference ($p-n$) and sum ($p+n$) of the proton and neutron form factors. For the latter we compute the computationally demanding disconnected contributions and find them to be smaller than the statistical errors of the connected contributions. To fit the momentum dependence we use both a dipole form as well as the z-expansion~\cite{Hill:2010yb}. From these fits we extract the electric and magnetic radii, as well as the magnetic moments of the proton, the neutron and the isovector and isoscalar combinations. For the electric root mean squared (rms) radius of the proton we find $\sqrt{\langle r_E^2\rangle_p}=0.767(25)(21)$~fm, where the first error is statistical and the second a systematic due to excited states. Although this value is closer to the value of 0.84087(39)~fm extracted from muonic hydrogen~\cite{Pohl1:2016xoo}, a more complete analysis of systematic errors using multiple ensembles is required to accurately assess all lattice artifacts. The remainder of this paper is organized as follows: in Section~\ref{sec:setup} we provide details of the lattice set-up for this calculation and in Section~\ref{sec:results} we present our results.
In Section~\ref{sec:others} we compare our results with other lattice calculations and in Section~\ref{sec:conclusions} we summarize our findings and conclude. \section{Setup and lattice parameters} \label{sec:setup} \subsection{Electromagnetic form factors} The electromagnetic form factors are extracted from the electromagnetic nucleon matrix element given by \begin{align} \langle &N(p', s')|\mathcal{O}^{V}_\mu|N(p,s)\rangle = \nonumber\\ &\sqrt{\frac{m^2_N}{E_N(\vec{p}')E_N(\vec{p})}}\bar{u}_N(p',s')\Lambda^{V}_\mu(q^2)u_N(p,s) \label{eq:matrix element} \end{align} with $N(p,s)$ the nucleon state of momentum $p$ and spin $s$, $E_N(\vec{p}) = p_0$ its energy and $m_N$ its mass, $\vec q=\vec{p}\,^\prime-\vec p$, the spatial momentum transfer from initial ($\vec p$) to final ($\vec{p}\,'$) momentum, $u_N$ the nucleon spinor and $\mathcal{O}^{V}$ the vector current. In the isospin limit, where an exchange between up and down quarks ($u\leftrightarrow d$) and between proton and neutron ($p \leftrightarrow n$) is a symmetry, the isovector matrix element can be related to the difference between proton and neutron form factors as follows: \begin{align} \langle p | \frac{2}{3}\bar{u}\gamma_\mu u - \frac{1}{3}\bar{d}\gamma_\mu d | p \rangle - \langle n | \frac{2}{3}\bar{u}\gamma_\mu u - \frac{1}{3}\bar{d}\gamma_\mu d | n \rangle\nonumber\\\xrightarrow[p\leftrightarrow n]{u\leftrightarrow d} \langle p | \bar{u}\gamma_\mu u - \bar{d}\gamma_\mu d | p \rangle. \label{eq:isovector} \end{align} Similarly, for the isoscalar combination we have \begin{align} \langle p | \frac{2}{3}\bar{u}\gamma_\mu u - \frac{1}{3}\bar{d}\gamma_\mu d | p \rangle + \langle n | \frac{2}{3}\bar{u}\gamma_\mu u - \frac{1}{3}\bar{d}\gamma_\mu d | n \rangle\nonumber\\\xrightarrow[p\leftrightarrow n]{u\leftrightarrow d} \frac{1}{3}\langle p | \bar{u}\gamma_\mu u + \bar{d}\gamma_\mu d | p \rangle. \label{eq:isoscalar} \end{align} We will use these relations to compare our lattice results, obtained for the isovector and isoscalar combinations, with the experimental data for the proton and neutron matrix elements. We use the symmetrized lattice conserved vector current, $\mathcal{O}^{V}_\mu$ = $\frac{1}{2}[j_\mu(x)+j_\mu(x-\hat{\mu})]$, with \begin{align} j_\mu(x) = \frac{1}{2}[&\bar{\psi}(x+\hat{\mu})U^\dagger_\mu(x)(1+\gamma_\mu)\tau_a\psi(x) \nonumber\\ -&\bar{\psi}(x)U_\mu(x)(1-\gamma_\mu)\tau_a\psi(x+\hat{\mu})], \end{align} where $\bar{\psi}=(\bar{u}, \bar{d})$ and $\tau_a$ acts in flavor space. We consider $\tau_a=\tau_3$, the third Pauli matrix, for the isovector case, and $\tau_a=\sfrac{\mathbb{1}}{3}$ for the isoscalar case. $\hat{\mu}$ is the unit vector in direction $\mu$ and $U_\mu(x)$ is the gauge link connecting site $x$ with $x+\hat{\mu}$. Using the conserved lattice current means that no renormalization of the vector operator is required. The matrix element of the vector current can be decomposed in terms of the Dirac $F_1$ and Pauli $F_2$ form factors as \begin{align} \Lambda_\mu^V(q^2) = \gamma_\mu F_1(q^2)+\frac{i\sigma_{\mu\nu}q^\nu}{2m_N}F_2(q^2). \end{align} $F_1$ and $F_2$ can also be expressed in terms of the nucleon electric $G_E$ and magnetic $G_M$ Sachs form factors via the relations \begin{align} G_E(q^2) =& F_1(q^2)+\frac{q^2}{(2m_N)^2}F_2(q^2),\,\textrm{and}\nonumber\\ G_M(q^2) =& F_1(q^2)+F_2(q^2). 
\label{eq:f1f2} \end{align} \subsection{Lattice extraction of form factors} On the lattice, after Wick rotation to Euclidean time, the extraction of matrix elements requires the calculation of a three-point correlation function, shown schematically in Fig.~\ref{fig:thrp}. For simplicity we will take $x_0=(\vec{0}, 0)$ from here on. We use sequential inversions through the sink, fixing the sink momentum $\vec{p}\,^\prime$ to zero, which constrains $\vec{p}=-\vec{q}$: \begin{widetext} \begin{align} G_\mu(\Gamma;\vec{q};t_s,t_{\rm ins}) = & \sum_{\vec{x}_s\vec{x}_{\rm ins}}e^{-i\vec q .\vec{x}_{\rm ins}}\Gamma^{\alpha\beta}\langle \bar{\chi}^\beta_N(\vec{x}_s;t_s) | \mathcal{O}^\mu(\vec{x}_{\rm ins};t_{\rm ins}) | \chi^\alpha_N(\vec{0};0)\rangle\nonumber \\ \xrightarrow[t_{\rm ins} \rightarrow \infty]{t_{\rm s}-t_{\rm ins}\rightarrow\infty} & \sum_{ss'}\Gamma^{\alpha\beta}\langle \chi_N^\beta | N(0,s') \rangle \langle N(p,s)| \bar{\chi}_N^\alpha\rangle\langle N(0,s')|\mathcal{O}_\mu(q)|N(p,s)\rangle e^{-E_N(\vec{p})t_{\rm ins}} e^{-m_N(t_{\rm s}-t_{\rm ins})}, \label{eq:three-point} \end{align} \end{widetext} where $\Gamma$ is a matrix acting on Dirac indices $\alpha$ and $\beta$ and $\chi_N$ is the standard nucleon interpolating operator given by \begin{equation} \chi_N^\alpha(\vec{x},t)=\epsilon^{abc}u^a_\alpha(x)[u^{b\intercal}(x)C\gamma_5d^c(x)], \end{equation} with $C=\gamma_0\gamma_2$ the charge conjugation matrix. In the second line of Eq.~(\ref{eq:three-point}) we have inserted twice a complete set of states with the quantum numbers of the nucleon, of which, at large time separations, only the nucleon ground state survives, with higher-energy states exponentially suppressed. We use Gaussian smeared point-sources~\cite{Alexandrou:1992ti, Gusken:1989ad} to increase the overlap with the nucleon state, with APE smearing applied to the gauge links, using the same parameters as in Ref.~\cite{Abdel-Rehim:2015owa}, tuned so as to yield an rms radius of about 0.5~fm. These are the same parameters as in Ref.~\cite{Abdel-Rehim:2016won}, namely $(N_G,\alpha_G) = (50, 4)$ for the Gaussian smearing and $(N_{\rm APE},\alpha_{\rm APE}) = (50, 0.5)$ for the APE smearing. \begin{figure} \includegraphics[width=1\linewidth]{three-point.pdf} \caption{Three-point nucleon correlation function with source at $x_0$, sink at $x_s$ and current insertion $\mathcal{O}_\mu$ at $x_{\rm ins}$. The connected contribution is shown in the upper panel and the disconnected contribution in the lower panel. } \label{fig:thrp} \end{figure} We construct an optimized ratio by dividing $G_\mu$ by a combination of two-point functions. The optimized ratio $R_\mu$ is given by \begin{align} R_\mu(\Gamma;\vec{q};t_s;t_{\rm ins}) =& \frac{G_{\mu}(\Gamma;\vec{q};t_s;t_{\rm ins})}{G(\vec{0};t_s)}\times\nonumber\\ &\Biggl[ \frac{G(\vec{0};t_s)G(\vec{q};t_s-t_{\rm ins})G(\vec{0};t_{\rm ins})} {G(\vec{q};t_s)G(\vec{0};t_s-t_{\rm ins})G(\vec{q};t_{\rm ins})} \Biggr ]^{\frac{1}{2}} \label{eq:ratio} \end{align} with the two-point function given by \begin{align} G(\vec{p};t)=\sum_{\vec{x}} e^{-i\vec{p}\vec{x}}\Gamma^{\alpha\beta}_0\langle \bar{\chi}^\beta_N(\vec{x};t) | \chi^\alpha_N(\vec{0};0)\rangle. \label{eq:two-point} \end{align} $\Gamma_0$ is the unpolarized projector, $\Gamma_0=\frac{1+\gamma_0}{4}$.
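For illustration, the ratio of Eq.~(\ref{eq:ratio}) is straightforward to evaluate once the correlators are in hand. The following sketch (Python/NumPy; the array layout is an assumption of ours, and a real analysis would propagate errors over jackknife or bootstrap samples rather than mean correlators) computes $R_\mu$ at fixed sink time:
\begin{verbatim}
import numpy as np

def optimized_ratio(G3, G2_0, G2_q, ts):
    # G3[t_ins]: three-point function at fixed sink time ts;
    # G2_0[t], G2_q[t]: zero- and finite-momentum two-point functions.
    tins = np.arange(ts + 1)
    factor = np.sqrt(
        (G2_0[ts] * G2_q[ts - tins] * G2_0[tins])
        / (G2_q[ts] * G2_0[ts - tins] * G2_q[tins])
    )
    return G3[tins] / G2_0[ts] * factor

# Toy usage with synthetic single-exponential correlators.
t = np.arange(32)
G2_0 = 1.0 * np.exp(-0.45 * t)     # a*m_N ~ 0.45 (illustrative)
G2_q = 0.8 * np.exp(-0.50 * t)     # a*E_N(q) ~ 0.50 (illustrative)
ts = 14
tins = np.arange(ts + 1)
G3 = 0.5 * np.exp(-0.50 * tins) * np.exp(-0.45 * (ts - tins))
print(optimized_ratio(G3, G2_0, G2_q, ts))
\end{verbatim}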
After taking the large time limit, the unknown overlaps and energy exponentials cancel in the ratio, leading to the time-independent quantity $\Pi_\mu(\Gamma;\vec{q})$, defined via: \begin{align} R_\mu(\Gamma;\vec{q};t_s;t_{\rm ins})\xrightarrow[t_{\rm ins}\rightarrow\infty]{t_s-t_{\rm ins}\rightarrow\infty}\Pi_\mu(\Gamma;\vec{q}). \end{align} Having $\Pi_\mu(\Gamma;\vec{q})$, different combinations of current insertion directions ($\mu$) and nucleon polarizations determined by $\Gamma$ yield different expressions for the form factors~\cite{Alexandrou:2006ru,Alexandrou:2011db}. Namely, we have \begin{align} \Pi_0(\Gamma_0;\vec{q}) =& \mathcal{C}\frac{E_N+m_N}{2m_N}G_E(Q^2), \nonumber \\ \Pi_i(\Gamma_0;\vec{q}) =& \mathcal{C}\frac{q_i}{2m_N}G_E(Q^2),\nonumber\\ \Pi_i(\Gamma_k;\vec{q}) =& \mathcal{C}\frac{\epsilon_{ijk}q_j}{2m_N}G_M(Q^2),\label{eq:ffs} \end{align} where $Q^2=-q^2$ is the Euclidean momentum transfer squared, $\mathcal{C}=\sqrt{\frac{2m_N^2}{E_N(E_N+m_N)}}$, and the polarized projector is given by $\Gamma_k=i\gamma_5\gamma_k\Gamma_0$, with $i,k=1,2,3$. In what follows we will use three methods to extract $\Pi_\mu$ from lattice data: \noindent i) \textit{Plateau method.} We seek to identify a range of values of $t_{\rm ins}$ where the ratio $R_\mu$ is time-independent (plateau region). We fit, within this window, $R_\mu$ to a constant and use multiple $t_s$ values. Excited states are considered suppressed when our result does not change with $t_s$. \noindent ii) \textit{Two-state fit method.} We fit the time dependence of the three- and two-point functions keeping contributions up to the first excited state. Namely, we truncate the two-point function of Eq.~(\ref{eq:two-point}) keeping only the ground and first excited states to obtain \begin{equation} G(\vec{p};t) = c_0(\vec{p}) e^{-E(\vec{p})t} [1 + c_1(\vec{p}) e^{-\Delta E_1(\vec{p})t}+ {\cal O}(e^{-\Delta E_2(\vec{p})t})]. \end{equation} Similarly, the three-point function of Eq.~(\ref{eq:three-point}) becomes \begin{align} G_\mu(\Gamma;\vec{q};t_s,t_{\rm ins}) =& a^\mu_{00}(\Gamma;\vec{q})e^{-m (t_s-t_{\rm ins})} e^{-E(\vec{q}) t_{\rm ins}}\times \nonumber\\ \bigg [1+&a^\mu_{01}(\Gamma;\vec{q}) e^{-\Delta E_1(\vec{q}) t_{\rm ins}}\nonumber\\ +&a^\mu_{10}(\Gamma;\vec{q})e^{-\Delta m_1 (t_s-t_{\rm ins})} \nonumber\\ +&a^\mu_{11}(\Gamma;\vec{q})e^{-\Delta m_1 (t_s-t_{\rm ins})} e^{-\Delta E_1(\vec{q}) t_{\rm ins}}\nonumber\\ +&\mathcal{O}[\min(e^{-\Delta m_2 (t_s-t_{\rm ins})}, e^{-\Delta E_2(\vec{q})t_{\rm ins}})]\bigg], \label{eq:two state} \end{align} where $\Delta E_k(\vec{p})=E_k(\vec{p})-E(\vec p)$ is the energy difference between the $k^{\rm th}$ nucleon excited state and the ground state at momentum $\vec{p}$ and $m=E(\vec{0})$ and $\Delta m_k=\Delta E_k(\vec{0})$. The desired ground state matrix element is given by \begin{equation} \Pi_\mu(\Gamma;\vec{q}) = \frac{a^\mu_{00}(\Gamma;\vec{q})}{\sqrt{c_0(\vec{0})c_0(\vec{q})}}. \end{equation} In practice, we simultaneously fit the three-point function and the finite- and zero-momentum two-point functions in a twelve-parameter fit to determine $m$, $E(\vec{q})$, $\Delta m_1$, $\Delta E_1(\vec{q})$, $c_0(\vec{q})$, $c_0(\vec{0})$, $c_1(\vec{q})$, $c_1(\vec{0})$, $a^\mu_{00}(\Gamma;\vec{q})$, $a^\mu_{01}(\Gamma;\vec{q})$, $a^\mu_{10}(\Gamma;\vec{q})$ and $a^\mu_{11}(\Gamma;\vec{q})$. The two-point function is evaluated using the maximum statistics available at time separation $t_s/a=18$. \noindent iii) \textit{Summation method}.
We sum the ratio of Eq.~(\ref{eq:ratio}) over the insertion time-slices. From the expansion up to the first excited state in Eq.~(\ref{eq:two state}), one sees that a geometric sum arises, which yields: \begin{equation} \sum_{t_{\rm ins}=a}^{t_s-a}R_\mu(\Gamma;\vec{q};t_s;t_{\rm ins})\xrightarrow{t_s\rightarrow\infty}c+\Pi_{\mu}(\Gamma;\vec{q})t_s+\mathcal{O}(t_se^{-\Delta m_1t_s}). \label{eq:summation} \end{equation} The summed ratio is then fitted to a linear form and the slope is taken as the desired matrix element. We note that, in quoting final results, we do not use the values extracted from the summation method. However, it does provide an additional consistency check for the plateau values. \subsection{Lattice setup} The simulation parameters of the ensemble we use are tabulated in Table~\ref{table:sim}. We use an N$_\textrm{f}=2$ ensemble of twisted mass fermion configurations with clover improvement, with the quarks tuned to maximal twist, yielding a pion mass of about 130~MeV. The lattice volume is 48$^3\times$96 and the lattice spacing is determined to be $a=$0.0938$(3)$~fm, yielding a physical box length of about 4.5~fm. The value of the lattice spacing is determined using the nucleon mass, as explained in Ref.~\cite{Alexandrou:2017xwd}. Details of the simulation and first results using this ensemble were presented in Refs.~\cite{Abdel-Rehim:2015owa, Abdel-Rehim:2015pwa}. \begin{table} \caption{Simulation parameters of the ensemble used in this calculation, first presented in Ref.~\cite{Abdel-Rehim:2015pwa}. The nucleon and pion mass and the lattice spacing have been determined in Ref.~\cite{Alexandrou:2017xwd}.} \label{table:sim} \begin{tabular}{c|r@{=}l} \hline\hline \multicolumn{3}{c}{$\beta$=2.1, $c_{\rm SW}$=1.57751, $a$=0.0938(3)~fm, $r_0/a$=5.32(5)} \\ \hline \multirow{4}{*}{48$^3\times$96, $L$=4.5~fm} & $a\mu$ & 0.0009 \\ & $m_\pi$ & 0.1304(4)~GeV\\ & $m_\pi L$ & 2.98(1) \\ & $m_N$ & 0.932(4)~GeV \\ \hline\hline \end{tabular} \end{table} The parameters used for the calculation of the correlation functions are given in Table~\ref{table:statistics}. We use increasing statistics with increasing sink-source separation so that the statistical errors are kept approximately constant. Furthermore, as will be discussed in Section~\ref{sec:results}, $G_E(Q^2)$ is found to be more susceptible to excited states than $G_M(Q^2)$, requiring larger separations to ensure their suppression. Therefore, we carry out sequential inversions for five sink-source separations using the unpolarized projector $\Gamma_0$, which yields $G_E(Q^2)$ according to Eq.~(\ref{eq:ffs}). To obtain $G_M(Q^2)$, we carry out three additional sequential inversions, one for each polarized projector $\Gamma_k$, $k=1,2,3$, for each of the three smallest separations. \begin{table} \caption{Parameters of the calculation of the form factors. The first column shows the sink-source separations used, the second column the sink projectors and the last column the total statistics ($N_{\rm st}$) obtained using $N_{\rm cnf}$ configurations times $N_{\rm src}$ source-positions per configuration.} \label{table:statistics} \begin{tabular}{ccr@{ = }r} \hline\hline $t_s$ [$a$] & Proj.
& $N_{\rm cnf}\cdot N_{\rm src}$&$N_{\rm st}$ \\ \hline 10,12,14 & $\Gamma_0$, $\Gamma_k$ & 578$\cdot$16& 9248 \\ 16 & $\Gamma_0$ & 530$\cdot$88& 46640 \\ 18 & $\Gamma_0$ & 725$\cdot$88& 63800 \\ \hline\hline \end{tabular} \end{table} \section{Results} \label{sec:results} \subsection{Analysis} \subsubsection{Isovector contributions} We use the three methods described in the previous section to analyze the contributions due to excited states and extract the desired nucleon matrix element. We demonstrate the quality of our data and two-state fits in Figs.~\ref{fig:gev two-state} and~\ref{fig:gmv two-state} for the isovector contributions to $G_E(Q^2)$ and $G_M(Q^2)$, respectively, for three momentum transfers, namely the first, second and fourth non-zero $Q^2$ values of our setup. In these figures we show the ratio after the appropriate combinations of Eq.~(\ref{eq:ffs}) are taken to yield either $G^{u-d}_E(Q^2)$ or $G^{u-d}_M(Q^2)$. We indeed observe larger excited state contamination in the case of $G^{u-d}_E(Q^2)$, which is the reason for considering larger values of $t_s$ for this case. We note that for the plateau and summation methods, the ratios of Eq.~(\ref{eq:ratio}) are constructed from two- and three-point functions with the same source positions and gauge configurations. For the two-state fit, as already mentioned, we use the two-point correlation function at the maximum statistics available, namely 725 configurations times 88 source positions, as indicated in Table~\ref{table:statistics}. These are the ratios shown in Figs.~\ref{fig:gev two-state} and~\ref{fig:gmv two-state}, which differ from those used for the plateau fits. \begin{figure} \includegraphics[width=\linewidth]{gEv-two-state.pdf} \caption{Ratio yielding the isovector electric Sachs form factor. We show results for three representative $Q^2$ values, namely the first, second and fourth non-zero $Q^2$ values from top to bottom, for $t_s=12a$ (open circles), $t_s=14a$ (filled squares), $t_s=16a$ (filled circles) and $t_s=18a$ (filled triangles). The curves are the results from the two-state fits, with the fainter points excluded from the fit. The band is the form factor value extracted using the two-state fit.} \label{fig:gev two-state} \end{figure} \begin{figure} \includegraphics[width=\linewidth]{gMv-two-state.pdf} \caption{Ratio yielding the isovector magnetic Sachs form factor. We show results for three representative $Q^2$ values, namely the first, second and fourth non-zero $Q^2$ values from top to bottom, for $t_s=10a$ (open squares), $t_s=12a$ (open circles), and $t_s=14a$ (filled squares). The curves are the results from the two-state fits, with the fainter points excluded from the fit. The band is the form factor value extracted using the two-state fit.} \label{fig:gmv two-state} \end{figure} The investigation of excited states is facilitated further by Figs.~\ref{fig:gev fits} and~\ref{fig:gmv fits}. These plots indicate that excited state contributions are present in $G^{u-d}_E(Q^2)$ for the first three sink-source separations of $t_s/a=10$, $12$ and~$14$, in particular for larger momentum transfers. For the two larger sink-source separations we see convergence of the results extracted from the plateau method, which are in agreement with those from the summation method and the two-state fits when the lower fit range is $t_s^{\rm low}=12a=1.1$~fm.
For $G^{u-d}_M(Q^2)$, all results from the three sink-source separations are in agreement and consistent with the summation and two-state fit methods within their errors. The values obtained at $t_s=18a=1.7$~fm for the case of $G^{u-d}_E(Q^2)$ and at $t_s=14a=1.3$~fm for the case of $G^{u-d}_M(Q^2)$ are shown in Figs.~\ref{fig:gev fits} and~\ref{fig:gmv fits} with open symbols and an associated error band, demonstrating consistency with the values extracted using the summation and two-state fit methods. \begin{figure} \includegraphics[width=\linewidth]{gEv-ratio-fits.pdf} \caption{Isovector electric form factor, for four non-zero $Q^2$ values, extracted from the plateau method (squares), the summation method (circles) and the two-state fit method (triangles). The plateau method results are plotted as a function of the sink-source separation while the summation and two-state fit results are plotted as a function of $t_s^{\rm low}$, i.e. of the smallest sink-source separation included in the fit, with $t_s^{\rm high}$ kept fixed at $t_s=18a=1.7$~fm. The open square and band show the selected value and its statistical error used to obtain our final results.} \label{fig:gev fits} \end{figure} \begin{figure} \includegraphics[width=\linewidth]{gMv-ratio-fits.pdf} \caption{Isovector magnetic form factor. The notation is the same as that in Fig.~\ref{fig:gev fits}. For the summation and two-state fit methods, the largest sink-source separation included in the fit is kept fixed at $t^{\rm high}_s=14a=1.3$~fm.} \label{fig:gmv fits} \end{figure} Our results for the isovector electric Sachs form factor extracted using all available $t_s$ values and from the summation and two-state fit methods are shown in Fig.~\ref{fig:gev all ts}. On the same plot we show the curve obtained from a parameterization of the experimental data for $G^p_E(Q^2)$ and $G_E^n(Q^2)$ according to Ref.~\cite{Kelly:2004hm}, using the parameters obtained in Ref.~\cite{Alberico:2008sz}, and taking the isovector combination $G^p_E(Q^2)-G_E^n(Q^2)$. We see that, as the sink-source separation is increased, our results tend towards the experimental curve. The results from the two-state fit method using $t_s^{\rm low}$=$1.1$~fm are consistent with those extracted from the plateau for $t_s=1.7$~fm for all $Q^2$ values. Results extracted using the summation method are consistent, within their large errors, with those obtained from fitting the plateau for $t_s=1.7$~fm. \begin{figure} \includegraphics[width=\linewidth]{gEv-ff-ts.pdf} \caption{Isovector electric Sachs form factor as a function of the momentum transfer squared ($Q^2$). Symbols for the plateau method follow the notation of Figs.~\ref{fig:gev two-state} and~\ref{fig:gmv two-state}. Results from the summation method are shown with open diamonds and for the two-state fit method with the crosses. The solid line shows $G_E^p(Q^2)-G_E^n(Q^2)$ using Kelly's parameterization of the experimental data~\cite{Kelly:2004hm} with parameters taken from Alberico \textit{et al.}~\cite{Alberico:2008sz}.} \label{fig:gev all ts} \end{figure} In Fig.~\ref{fig:gmv all ts} we show our results for the isovector magnetic form factor. We observe that excited state effects are milder than in the case of $G^{u-d}_E(Q^2)$, corroborating the conclusion drawn from Fig.~\ref{fig:gmv fits}. We also see agreement with the experimental curve for $Q^2$ values larger than $\sim$0.2~GeV$^2$. However, our lattice results underestimate the experimental ones at the two lowest $Q^2$ values.
Excited state effects are seen to be small for this quantity, and are thus unlikely to be the cause of this discrepancy, given the consistency of our results at three separations, as well as with those extracted using the summation and two-state fit methods. This small discrepancy could be due to pion cloud effects suppressed by the finite volume, which could be more significant at low momentum transfer. For example, a study of the magnetic dipole form factor $G_{M1}$ in the $N\rightarrow \Delta$ transition using the Sato-Lee model predicts larger pion cloud contributions at low momentum transfer~\cite{Sato:2003rq}. Lattice QCD computations also observe a discrepancy at lower $Q^2$ for $G_{M1}$ when compared to experiment~\cite{Alexandrou:2010uk}. Analysis on a larger volume is ongoing to investigate volume effects, not only in $G_M(Q^2)$ but also in other nucleon matrix elements, and the results will be reported in subsequent publications. Our results for the form factors at all sink-source separations and using the summation and two-state fit methods are included in Appendix~\ref{sec:appendix results} in Tables~\ref{table:results gev} to~\ref{table:results gms}. Preliminary results for the isovector electromagnetic form factors of this ensemble have been presented in Refs.~\cite{Alexandrou:2017msl,Abdel-Rehim:2015jna}. \begin{figure}[!h] \includegraphics[width=\linewidth]{gMv-ff-ts.pdf} \caption{Isovector magnetic Sachs form factor as a function of the momentum transfer squared. The notation is the same as that of Fig.~\ref{fig:gev all ts}.} \label{fig:gmv all ts} \end{figure} \subsubsection{Isoscalar contributions} We perform a similar analysis for the isoscalar contributions, denoted by $G^{u+d}_E(Q^2)$ and $G^{u+d}_M(Q^2)$. As mentioned, we use the combination $\sfrac{(u+d)}{3}$ in the matrix element for the isoscalar case, such that it yields $G_{E,M}^{u+d}(Q^2) = G^p_{E,M}(Q^2) + G^n_{E,M}(Q^2)$. Combined with the isovector combination $G_{E,M}^{u-d}(Q^2)=G^p_{E,M}(Q^2)-G^n_{E,M}(Q^2)$, this allows the individual proton and neutron form factors to be extracted. While isovector matrix elements receive no disconnected contributions, since these cancel in the isospin limit, the isoscalar form factors do include disconnected fermion loops, shown schematically in Fig.~\ref{fig:thrp}. These disconnected contributions are included here for the first time at the physical point to obtain the isoscalar form factors. The connected isoscalar three-point function is computed using the same procedure as in the isovector case. We show results for the connected contribution to $G_E^{u+d}(Q^2)$ and $G_M^{u+d}(Q^2)$ in Figs.~\ref{fig:ges fits} and~\ref{fig:gms fits}, respectively. These results are for the same momentum transfer values as used in Figs.~\ref{fig:gev fits} and~\ref{fig:gmv fits}. In the case of the isoscalar electric form factor, we observe contributions due to excited states that are similar to those observed in the isovector case. Namely, we find that a separation of about $t_s$=1.7~fm is required for their suppression. For the isoscalar magnetic form factor, we observe that the values extracted from fitting the plateau at time separations $t_s=1.1$~fm and $t_s=1.3$~fm are consistent and also in agreement with the values extracted using the two-state fit and summation methods.
\begin{figure} \includegraphics[width=\linewidth]{gEs-ratio-fits.pdf} \caption{Connected contribution to the $G^{u+d}_E(Q^2)$ form factor, for four non-zero $Q^2$ values, extracted from the plateau method (squares), the summation method (filled circles) and the two-state fit method (filled triangles). The notation is the same as in Fig.~\ref{fig:gev fits}.} \label{fig:ges fits} \end{figure} \begin{figure} \includegraphics[width=\linewidth]{gMs-ratio-fits.pdf} \caption{Connected contribution to the $G^{u+d}_M(Q^2)$ form factor. The notation is the same as in Fig.~\ref{fig:gmv fits}. } \label{fig:gms fits} \end{figure} The disconnected diagrams of the electromagnetic form factors are particularly susceptible to statistical fluctuations, even at the larger pion mass of 370~MeV, as reported in Refs.~\cite{Abdel-Rehim:2013wlz} and~\cite{Alexandrou:2012zz}. Here we show results, at the physical pion mass, for the disconnected contribution to $G_E^{u+d}(Q^2)$ and $G_M^{u+d}(Q^2)$ in Fig.~\ref{fig:gem disc} for the first non-zero momentum transfer. The results are obtained using the same ensemble used for the connected contributions, detailed in Table~\ref{table:sim}, using 2120 configurations, with two-point functions computed on 100 randomly chosen source positions per configuration. A total of 2250 stochastic noise vectors is used for estimating the fermion loop. Averaging the proton and neutron two-point functions and the forward- and backward-propagating nucleons yields a total of $8\cdot 10^5$ measurements. More details of this calculation are presented in Ref.~\cite{Alexandrou:2017hac}, where results for the axial form factors are shown. In the case of the electric form factor, we obtain $G^{u+d,\textrm{disc.}}_E(Q^2=0.074~\textrm{GeV}^2) = -0.002(3)$, which is consistent with zero, about 0.2\% of the value of the connected contribution, and four times smaller than its statistical error. For the magnetic form factor, fitting to the plateau we obtain $G^{u+d,\textrm{disc.}}_M(Q^2=0.074~\textrm{GeV}^2) = -0.016(7)$, which is 2\% of the value of the connected contribution at this $Q^2$ and half the value of the statistical error. These values are consistent with a dedicated study of the disconnected contributions using an ensemble of clover fermions with pion mass of 317~MeV~\cite{Green:2015wqa} and a recent result at the physical point presented in Ref.~\cite{Sufian:2017osl}. There it was shown that $G^{u+d,\textrm{disc.}}_M(Q^2)$ is negative and largest in magnitude at $Q^2=0$, while $G^{u+d,\textrm{disc.}}_E(Q^2)$ is largest at around $Q^2=0.4$~GeV$^2$. In our case, at our largest momentum transfer, we find $G^{u+d,\textrm{disc.}}_E(Q^2=0.280~\textrm{GeV}^2) = -0.0056(40)$, which is 1\% of the value of the connected contribution at this momentum transfer and smaller than the associated statistical error. Investigation of methods for increasing the precision at the physical point is ongoing, with preliminary results presented in Ref.~\cite{Abdel-Rehim:2016pjw} for the ensemble used here, and will be reported in a separate work. \begin{figure} \includegraphics[width=\linewidth]{gE-disc-plat.pdf}\\ \includegraphics[width=\linewidth]{gM-disc-plat.pdf} \caption{Disconnected contribution to the electric (upper panel) and magnetic (lower panel) isoscalar Sachs form factors for sink-source separation $t_s=8a=0.75$~fm (inverted triangles) and $t_s=10a=0.94$~fm (squares) for the first non-zero momentum transfer of $Q^2=0.074$~GeV$^2$. 
The horizontal bands show the values obtained after fitting with the plateau method to the results at $t_s=10a=0.94$~fm.} \label{fig:gem disc} \end{figure} We show our results for the connected contribution to the isoscalar electric and magnetic form factors in Figs.~\ref{fig:ges all ts} and~\ref{fig:gms all ts}, extracted from the plateau method for all available sink-source separations, and from the summation and the two-state fit methods. The isoscalar electric form factor tends to decrease as the sink-source separation increases, approaching the experimental parameterization. This may indicate residual excited state effects, which need to be further investigated by going to larger time separations. For the isoscalar magnetic form factor, we observe a weaker dependence on $t_s$, pointing to less severe excited state effects. \begin{figure} \includegraphics[width=\linewidth]{gEs-ff-ts.pdf} \caption{Connected contribution to the isoscalar electric Sachs form factor as a function of the momentum transfer, using the notation of Fig.~\ref{fig:gev all ts}. The solid line shows $G_E^p(Q^2)+G_E^n(Q^2)$ using the Kelly parameterization of experimental data from Ref.~\cite{Kelly:2004hm} with parameters taken from Alberico \textit{et al.}~\cite{Alberico:2008sz}.} \label{fig:ges all ts} \end{figure} \begin{figure}[!h] \includegraphics[width=\linewidth]{gMs-ff-ts.pdf} \caption{Connected contribution to the isoscalar magnetic Sachs form factor as a function of the momentum transfer. The notation is the same as in Fig.~\ref{fig:ges all ts}.} \label{fig:gms all ts} \end{figure} \subsection{$Q^2$-dependence of the form factors} \subsubsection{Isovector and isoscalar form factors} \begin{figure} \includegraphics[width=\linewidth]{kmax.pdf} \caption{Results for the z-expansion fit parameters $a_1^E$ (lower panel), $a_1^M$ (center panel) and $a_0^M$ (top panel) of Eq.~(\ref{eq:zexp fit}), as a function of $k_{\rm max}$.} \label{fig:zexp fit kmax} \end{figure} We fit $G_E(Q^2)$ and $G_M(Q^2)$ to both a dipole Ansatz and the z-expansion form. The truncated z-expansion is expected to better model the low-$Q^2$ dependence of the form factors~\cite{Hill:2010yb}, while the dipole form is motivated by vector-meson pole contributions to the form factors~\cite{Perdrisat:2006hj}. For the case of the dipole fits, we use \begin{equation} G_i(Q^2) = \frac{G_i(0)}{(1+\frac{Q^2}{M_i^2})^2}, \label{eq:dipole fit} \end{equation} with $i=E,\,M$, allowing both $G_M(0)$ and $M_M$ to vary for the case of the magnetic form factor, while constraining $G_E(0)=1$ for the case of the electric form factor. For the z-expansion, we use the form~\cite{Hill:2010yb} \begin{align} G_i(Q^2) &= \sum_{k=0}^{k_{\rm max}} a_k^{i}z^k,\,\textrm{where}\nonumber\\ z&= \frac{\sqrt{t_{\rm cut}+Q^2} - \sqrt{t_{\rm cut}}}{\sqrt{t_{\rm cut}+Q^2} + \sqrt{t_{\rm cut}}} \label{eq:zexp fit} \end{align} and take $t_{\rm cut}=4m_\pi^2$. For both isovector and isoscalar $G_E(Q^2)$ we fix $a^E_0=1$, while for $G_M(Q^2)$ we allow all parameters to vary. We use Gaussian priors for $a^i_k$ for $k\ge2$ with width $w=5\max(|a_0^i|, |a_1^i|)$ as proposed in Ref.~\cite{Epstein:2014zua}. We observe larger errors when fitting with the z-expansion compared to the dipole form. In Fig.~\ref{fig:zexp fit kmax} we show $a^M_0$ and $a^M_1$ from fits to the magnetic isovector form factor and $a^E_1$ from fits to the electric one, as a function of $k_\textrm{max}$, and observe no significant change in the fitted parameters for $k_{\rm max}\ge2$. 
We also note that the resulting values for $a^{i}_k$ for $k\ge2$ are well within the Gaussian priors, i.e. $|a^{i}_k|\ll 5\max(|a_0^i|, |a_1^i|)$. We therefore quote results using $k_{\rm max}=2$ from here on. \begin{figure} \includegraphics[width=\linewidth]{gEv-with-fits.pdf} \caption{Isovector electric Sachs form factor as a function of the momentum transfer extracted from the plateau method at $t_s=18a=1.7$~fm (triangles). We show fits using the dipole form (left) and the z-expansion (right). The black points are obtained using experimental data for $G_E^p(Q^2)$ from Ref.~\cite{Bernauer:2013tpr} and for $G_E^n(Q^2)$ from Refs.~\cite{Golak:2000nt, Becker:1999tw, Eden:1994ji, Meyerhoff:1994ev, Passchier:1999cj, Warren:2003ma, Zhu:2001md, Plaster:2005cx, Madey:2003av, Rohe:1999sh, Bermuth:2003qh, Glazier:2004ny, Herberg:1999ud, Schiavilla:2001qe, Ostrick:1999xa}.} \label{fig:gev ff fit} \end{figure} \begin{figure} \includegraphics[width=\linewidth]{gMv-with-fits.pdf} \caption{Isovector magnetic Sachs form factor as a function of the momentum transfer extracted from the plateau method at $t_s=14a=1.3$~fm (squares). We show fits using the dipole form (left) and the z-expansion (right). The smaller error band corresponds to fitting to all $Q^2$ values, while the larger band is obtained after omitting the two smallest values. The black points are obtained using experimental data for $G_M^p(Q^2)$ from Ref.~\cite{Bernauer:2013tpr} and for $G_M^n(Q^2)$ from Refs.~\cite{Anderson:2006jp, Gao:1994ud, Anklin:1994ae, Anklin:1998ae, Kubon:2001rj, Alarcon:2007zza}.} \label{fig:gmv ff fit} \end{figure} \begin{figure} \includegraphics[width=\linewidth]{gMs-disc-with-fits.pdf} \caption{Disconnected contribution to the isoscalar magnetic Sachs form factor as a function of the momentum transfer for $t_s=8a=0.7$~fm (inverted triangles) and $t_s=10a=0.9$~fm (squares). The bands show fits to the z-expansion form with $k_{\rm max}=1$.} \label{fig:gms disc ff fit} \end{figure} \begin{figure} \includegraphics[width=\linewidth]{gEs-with-fits.pdf} \caption{Isoscalar electric Sachs form factor with fits to the dipole form (left) and to the z-expansion (right). We show with triangles the sum of connected and disconnected contributions, with the plateau result for $t_s=18a=1.7$~fm for the connected and for $t_s=10a=0.9$~fm for the disconnected. The black points show experiment using the same data as for Fig.~\ref{fig:gev ff fit}. } \label{fig:ges ff fit} \end{figure} \begin{figure} \includegraphics[width=\linewidth]{gMs-with-fits.pdf} \caption{Isoscalar magnetic Sachs form factor with fits to the dipole form (left) and to the z-expansion (right). We show with triangles the sum of connected and disconnected contributions, with the plateau result for $t_s=14a=1.3$~fm for the connected and for $t_s=10a=0.9$~fm for the disconnected. The black points show experiment using the same data as for Fig.~\ref{fig:gmv ff fit}.} \label{fig:gms ff fit} \end{figure} Fits to the $Q^2$ dependence of $G^{u-d}_E(Q^2)$ are shown in Fig.~\ref{fig:gev ff fit} using the values extracted from the plateau at $t_s=18a=1.7$~fm. The line and error band are the result of fitting either the dipole form or the z-expansion to all available $Q^2$ values. Both the dipole and z-expansion forms describe the lattice QCD results well. 
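For illustration, the two fit Ans\"atze of Eqs.~(\ref{eq:dipole fit}) and~(\ref{eq:zexp fit}) can be implemented in a few lines of Python. This is a sketch with placeholder data arrays and an illustrative pion-mass value, not the fitting code of this work; the radius relations used in the comments are the ones given below, following Eq.~(\ref{eq:ff deriv}).
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

TCUT = 4 * 0.135**2   # t_cut = 4*m_pi^2 in GeV^2; m_pi value illustrative

def dipole(Q2, G0, M2):
    """Dipole Ansatz: G(Q^2) = G(0) / (1 + Q^2/M^2)^2, with M2 = M^2."""
    return G0 / (1.0 + Q2 / M2)**2

def z_of(Q2):
    """Conformal variable z(Q^2) of the z-expansion."""
    a, b = np.sqrt(TCUT + Q2), np.sqrt(TCUT)
    return (a - b) / (a + b)

def zexp(Q2, a0, a1, a2):
    """Truncated z-expansion with k_max = 2."""
    z = z_of(Q2)
    return a0 + a1*z + a2*z**2

# Q2, G, dG below are placeholder arrays of lattice data; for G_E one
# would additionally constrain G0 = 1 (a0 = 1), e.g. by fixing that
# argument with functools.partial.
# pd, _ = curve_fit(dipole, Q2, G, sigma=dG, p0=[1.0, 0.7])
# pz, _ = curve_fit(zexp, Q2, G, sigma=dG, p0=[1.0, -1.0, 0.0])
# r2_dipole = 12.0 / pd[1]                    # <r^2> = 12 / M^2
# r2_zexp = -6.0 * pz[1] / (4*TCUT*pz[0])     # <r^2> = -6 a1 / (4 t_cut a0)
\end{verbatim}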
In Fig.~\ref{fig:gev ff fit} we also show results from experiment, using data for $G_E^p(Q^2)$ obtained from Ref.~\cite{Bernauer:2013tpr} and data for $G_E^n(Q^2)$ from Refs.~\cite{Golak:2000nt, Becker:1999tw, Eden:1994ji, Meyerhoff:1994ev, Passchier:1999cj, Warren:2003ma, Zhu:2001md, Plaster:2005cx, Madey:2003av, Rohe:1999sh, Bermuth:2003qh, Glazier:2004ny, Herberg:1999ud, Schiavilla:2001qe, Ostrick:1999xa}. To subtract the two form factors and obtain the isovector combination, we linearly interpolate the more accurate experimental data of $G_E^p(Q^2)$ to the $Q^2$ values for which $G_E^n(Q^2)$ is available. For both the dipole and z-expansion fits, the resulting curve lies about one standard deviation above the experimental data. This small discrepancy may be due to small residual excited state effects, which would require a significant increase in statistics at larger sink-source separations to identify. Having only performed the calculation using one ensemble, we cannot check directly for finite volume and cut-off effects. However, in a previous study employing $N_\textrm{f}=2$ twisted mass fermions at heavier-than-physical pion masses and three values of the lattice spacing, we found no detectable cut-off effects in these quantities for a lattice spacing similar to the one used here~\cite{Alexandrou:2011db}. We have also performed a volume assessment using the aforementioned heavier-mass twisted mass ensembles with $m_\pi L$ values ranging from 3.27 to 5.28. Namely, we found no volume dependence, within our statistical accuracy, between two ensembles with similar pion mass of $m_\pi\simeq300$~MeV and $m_\pi L=3.27$ and $m_\pi L=4.28$, respectively. We plan to carry out a high-accuracy analysis of the volume dependence at the physical point on a lattice size of $64^3\times 128$, keeping the other parameters fixed, in a forthcoming publication. The same analysis carried out for $G^{u-d}_E(Q^2)$ is also performed for $G^{u-d}_M(Q^2)$ in Fig.~\ref{fig:gmv ff fit}, where we use the result from fitting to the plateau at the largest sink-source separation available, namely $t_s=14a=1.3$~fm. As for the case of $G_E^{u-d}(Q^2)$, both the dipole Ansatz and the z-expansion describe the lattice QCD data well. The plots show two bands, one when including all $Q^2$ values, resulting in the smaller error band, and one in which the first two $Q^2$ values are omitted, resulting in the larger band. The experimental data shown are obtained using $G_M^p(Q^2)$ from the same experiment as for $G_E^p(Q^2)$ shown in Fig.~\ref{fig:gev ff fit}, namely Ref.~\cite{Bernauer:2013tpr}, and $G_M^n(Q^2)$ from Refs.~\cite{Anderson:2006jp, Gao:1994ud, Anklin:1994ae, Anklin:1998ae, Kubon:2001rj, Alarcon:2007zza}. In both the dipole and z-expansion fits of $G_M^{u-d}(Q^2)$ we find that the $Q^2$ dependence is consistent with experiment for $Q^2\gtrsim0.2$~GeV$^2$. We suspect that the deviation at the two smallest $Q^2$ values is due to finite volume effects. As already mentioned, we plan to further investigate this using an ensemble of $N_\textrm{f}=2$ twisted mass fermions on a larger volume of $64^3\times 128$. As can be seen, discarding the two lowest $Q^2$ values results in a larger error for $G^{u-d}_M(0)$, in particular in the case of the z-expansion. We show the momentum dependence of the disconnected contribution to $G_M^{u+d}(Q^2)$ in Fig.~\ref{fig:gms disc ff fit}. The large errors do not permit as thorough an analysis as for the connected contribution. 
Since the disconnected isoscalar contributions do not follow a dipole form, and in the absence of any theoretically motivated form for the disconnected contributions, we use a z-expansion fit, with $k_{\rm max}=2$ and $a_0$ fixed to zero for $G_E^{u+d,\textrm{disc.}}(Q^2)$, and with $k_{\rm max}=1$ for $G_M^{u+d,\textrm{disc.}}(Q^2)$, allowing both $a_0$ and $a_1$ to vary. For the case of $G_E^{u+d,\textrm{disc.}}(Q^2)$ we find results consistent with zero. For the magnetic case, the disconnected contribution decreases the form factor by at most 3\% at $Q^2=0$. We add connected and disconnected contributions to obtain the isoscalar contributions shown in Figs.~\ref{fig:ges ff fit} and~\ref{fig:gms ff fit}. There are small discrepancies between our lattice data and experiment at larger $Q^2$ values. Whether these are due to volume effects or other lattice artifacts will be investigated in a follow-up study. The slope of the form factors at $Q^2=0$ is related to the electric and magnetic radii as follows \begin{equation} \frac{\partial}{\partial Q^2}G_i(Q^2)|_{Q^2=0}=-\frac{1}{6}G_i(0)\langle r_{i}^2\rangle, \label{eq:ff deriv} \end{equation} with $i=E,M$ for the electric and magnetic form factors, respectively. For the z-expansion, the radius is given by \begin{equation} \langle r_{i}^2\rangle = -\frac{6}{4 t_{\rm cut}} \frac{a^i_1}{a^i_0} \end{equation} and for the dipole fit by \begin{equation} \langle r_{i}^2\rangle = \frac{12}{M_i^2}. \end{equation} Furthermore, the nucleon magnetic moment is defined as $\mu=G_M(0)$ and is obtained directly from the fitted parameter in both cases. As for the form factors, we will denote the isovector radii and magnetic moment with the $u-d$ superscript and the isoscalar ones with $u+d$. We tabulate our results for the isovector radii and magnetic moment from both dipole and z-expansion fits in Tables~\ref{table:fit params gev} and~\ref{table:fit params gmv}, and from fits to the isoscalar form factors in Tables~\ref{table:fit params ges} and~\ref{table:fit params gms}. For the isoscalar results shown in Tables~\ref{table:fit params ges} and~\ref{table:fit params gms}, we show two results for each case, namely the result of fitting only the connected contribution in the first column and the total contribution, combining connected and disconnected, in the second column. \begin{table} \caption{Results for the isovector electric charge radius of the nucleon ($\langle r_E^2\rangle^{u-d}$) from fits to $G^{u-d}_E(Q^2)$. 
In the first column we show $t_s$ for the plateau method and the $t_s$ fit range for the summation and two-state fit methods.} \label{table:fit params gev} \begin{tabular}{cr@{.}lcr@{.}lc} \hline\hline \multirow{2}{*}{$t_s$ [fm]} & \multicolumn{3}{c}{dipole} & \multicolumn{3}{c}{z-expansion}\\ & \multicolumn{2}{c}{$\langle r_E^2\rangle^{u-d}$ [fm$^2$]} & $\frac{\chi^2}{\rm d.o.f}$ & \multicolumn{2}{c}{$\langle r_E^2\rangle^{u-d}$ [fm$^2$]} & $\frac{\chi^2}{\rm d.o.f}$ \\ \hline \multicolumn{7}{c}{Plateau}\\ 0.94 & 0&523(08) & 2.0 & 0&562(19) & 1.2 \\ 1.13 & 0&562(14) & 1.9 & 0&677(37) & 0.7 \\ 1.31 & 0&580(26) & 1.2 & 0&718(75) & 0.7 \\ 1.50 & 0&666(33) & 0.9 & 0&61(10) & 0.3 \\ 1.69 & 0&653(48) & 0.6 & 0&52(14) & 0.2 \\ \hline \multicolumn{7}{c}{Summation}\\ 0.9-1.7 & 0&744(55) & 0.3 & 0&79(14) & 0.2 \\ \hline \multicolumn{7}{c}{Two-state}\\ 1.1-1.7 & 0&623(33) & 1.0 & 0&56(10) & 0.8 \\ \hline\hline \end{tabular} \end{table} \begin{table} \caption{Results for the isovector magnetic charge radius of the nucleon ($\langle r_M^2\rangle^{u-d}$) and the isovector magnetic moment $G_M(0) = \mu^{u-d}$ from fits to $G^{u-d}_M(Q^2)$. In the first column we show $t_s$ for the plateau method and the $t_s$ fit range for the summation and two-state fit methods. The two smallest $Q^2$ values are omitted from the fit.} \label{table:fit params gmv} \begin{tabular}{cr@{.}lcr@{.}lc} \hline\hline \multirow{2}{*}{$t_s$ [fm]} & \multicolumn{3}{c}{dipole} & \multicolumn{3}{c}{z-expansion}\\ & \multicolumn{2}{c}{$\langle r_M^2\rangle^{u-d}$ [fm$^2$]} & $\frac{\chi^2}{\rm d.o.f}$ & \multicolumn{2}{c}{$\langle r_M^2\rangle^{u-d}$ [fm$^2$]} & $\frac{\chi^2}{\rm d.o.f}$ \\ \hline \multicolumn{7}{c}{Plateau}\\ 0.94 & 0&404(10) & 0.3 & 0&59(13) & 0.3 \\ 1.13 & 0&434(22) & 0.3 & 0&82(23) & 0.3 \\ 1.31 & 0&536(52) & 0.3 & 0&79(40) & 0.3 \\ \hline \multicolumn{7}{c}{Summation}\\ 0.9-1.3 & 0&68(16) & 0.1 & 1&83(49) & 0.1\\ \hline \multicolumn{7}{c}{Two-state}\\ 0.9-1.3 & 0&470(31) & 0.3 & 1&15(25) & 0.3\\ \hline\hline \end{tabular} \begin{tabular}{cr@{.}lcr@{.}lc} \hline\hline \multirow{2}{*}{$t_s$ [fm]} & \multicolumn{3}{c}{dipole} & \multicolumn{3}{c}{z-expansion}\\ & \multicolumn{2}{c}{$G^{u-d}_M(0)$} & $\frac{\chi^2}{\rm d.o.f}$ & \multicolumn{2}{c}{$G^{u-d}_M(0)$} & $\frac{\chi^2}{\rm d.o.f}$ \\ \hline \multicolumn{7}{c}{Plateau}\\ 0.94 & 3&548(52) & 0.3 & 3&85(16) & 0.3 \\ 1.13 & 3&595(90) & 0.3 & 4&13(31) & 0.3 \\ 1.31 & 4&02(21) & 0.3 & 4&31(57) & 0.3 \\ \hline \multicolumn{7}{c}{Summation}\\ 0.9-1.3 & 4&32(57) & 0.1 & 6&35(1.35) & 0.1\\ \hline \multicolumn{7}{c}{Two-state}\\ 0.9-1.3 & 3&74(14) & 0.3 & 4&71(42) & 0.3\\ \hline\hline \end{tabular} \end{table} \begin{table} \caption{Results for the isoscalar electric charge radius of the nucleon ($\langle r_E^2\rangle^{u+d}$). In the first column we show $t_s$ for the plateau method and the $t_s$ fit range for the summation and two-state fit methods. 
For each $t_s$ and for each fit Ansatz, we give the result from fitting to the connected contribution in the first column and to the total contribution of connected plus disconnected in the second column.} \label{table:fit params ges} \begin{tabular}{cr@{.}lr@{.}lcr@{.}lr@{.}lc} \hline\hline \multirow{3}{*}{$t_s$ [fm]} & \multicolumn{5}{c}{dipole} & \multicolumn{5}{c}{z-expansion}\\ & \multicolumn{4}{c}{$\langle r_E^2\rangle^{u+d}$ [fm$^2$]} & $\frac{\chi^2}{\rm d.o.f}$ & \multicolumn{4}{c}{$\langle r_E^2\rangle^{u+d}$ [fm$^2$]} & $\frac{\chi^2}{\rm d.o.f}$ \\ & \multicolumn{2}{c}{Connected}& \multicolumn{2}{c}{Total}& &\multicolumn{2}{c}{Connected}& \multicolumn{2}{c}{Total}& \\ \hline \multicolumn{11}{c}{Plateau}\\ 0.94 & 0&440(3) & 0&449(49) & 4.5 & 0&418(9) & 0&427(49) & 0.9 \\ 1.13 & 0&469(6) & 0&478(49) & 1.9 & 0&464(17) & 0&474(52) & 0.7\\ 1.31 & 0&494(12) & 0&503(50) & 0.9 & 0&485(34) & 0&495(59) & 0.5\\ 1.50 & 0&502(14) & 0&512(50) & 0.3 & 0&494(41) & 0&503(63) & 0.4\\ 1.69 & 0&527(22) & 0&537(53) & 0.9 & 0&493(60) & 0&503(77) & 0.8\\ \hline \multicolumn{11}{c}{Summation}\\ 0.9-1.7 & 0&565(20) & 0&576(53) & 0.9 & 0&555(54) & 0&564(72) & 0.6\\ \hline \multicolumn{11}{c}{Two-state}\\ 1.1-1.7 & 0&490(16) & 0&499(51) & 0.5 & 0&453(77) & 0&462(91) & 0.7\\ \hline\hline \end{tabular} \end{table} \begin{table} \caption{Results for the isoscalar magnetic charge radius of the nucleon ($\langle r_M^2\rangle^{u+d}$) and the isoscalar magnetic moment $G^{u+d}_M(0)$. The notation is as in Table~\ref{table:fit params ges}.} \label{table:fit params gms} \begin{tabular}{cr@{.}lr@{.}lcr@{.}lr@{.}lc} \hline\hline \multirow{3}{*}{$t_s$ [fm]} & \multicolumn{5}{c}{dipole} & \multicolumn{5}{c}{z-expansion}\\ & \multicolumn{4}{c}{$\langle r_M^2\rangle^{u+d}$ [fm$^2$]} & $\frac{\chi^2}{\rm d.o.f}$ & \multicolumn{4}{c}{$\langle r_M^2\rangle^{u+d}$ [fm$^2$]} & $\frac{\chi^2}{\rm d.o.f}$ \\ & \multicolumn{2}{c}{Connected}& \multicolumn{2}{c}{Total}& &\multicolumn{2}{c}{Connected}& \multicolumn{2}{c}{Total}& \\ \hline \multicolumn{11}{c}{Plateau}\\ 0.94 & 0&392(13) & 0&302(34) & 0.2 & 0&41(19) & 0&32(20) & 0.2\\ 1.13 & 0&419(29) & 0&329(47) & 0.1 & 0&84(28) & 0&78(32) & 0.1\\ 1.31 & 0&476(59) & 0&394(82) & 0.4 & 0&4(1.0) & 0&4(1.1) & 0.5\\ \hline \multicolumn{11}{c}{Summation}\\ 0.9-1.3 & 0&50(18) & 0&42(24) & 0.2 & 1&94(92) & 2&0(1.3) & 0.2\\ \hline \multicolumn{11}{c}{Two-state}\\ 0.9-1.3 & 0&439(44) & 0&353(65) & 0.2 & 0&89(47) & 0&83(52) & 0.2\\ \hline\hline \end{tabular} \begin{tabular}{cr@{.}lr@{.}lcr@{.}lr@{.}lc} \hline\hline \multirow{3}{*}{$t_s$ [fm]} & \multicolumn{5}{c}{dipole} & \multicolumn{5}{c}{z-expansion}\\ & \multicolumn{4}{c}{$G^{u+d}_M(0)$} & $\frac{\chi^2}{\rm d.o.f}$ & \multicolumn{4}{c}{$G^{u+d}_M(0)$} & $\frac{\chi^2}{\rm d.o.f}$ \\ & \multicolumn{2}{c}{Connected}& \multicolumn{2}{c}{Total}& &\multicolumn{2}{c}{Connected}& \multicolumn{2}{c}{Total}& \\ \hline \multicolumn{11}{c}{Plateau}\\ 0.94 & 0&838(16) & 0&808(18) & 0.2 & 0&867(50) & 0&837(50) & 0.2\\ 1.13 & 0&841(29) & 0&811(30) & 0.1 & 0&981(90) & 0&951(90) & 0.1\\ 1.31 & 0&900(59) & 0&870(60) & 0.4 & 0&90(19) & 0&87(19) & 0.5\\ \hline \multicolumn{11}{c}{Summation}\\ 0.9-1.3 & 0&88(16) & 0&85(16) & 0.2 & 1&51(45) & 1&48(45) & 0.2\\ \hline \multicolumn{11}{c}{Two-state}\\ 0.9-1.3 & 0&861(47) & 0&831(48) & 0.2 & 1&01(14) & 0&98(14) & 0.2\\ \hline\hline \end{tabular} \end{table} For our final result for the isovector electric charge radius, we use the central value and statistical error of the result from the plateau method at 
$t_s=18a=1.7$~fm using a dipole fit to all $Q^2$ values. We also include a systematic error from the difference of the central values when comparing with the two-state fit method, to account for excited state effects. Similarly, for the magnetic radius and moment, we take the result from the dipole fits to our largest sink-source separation, which in this case is $t_s=14a=1.31$~fm. As for the electric charge radius, we take the difference from the two-state fit method as an additional systematic error. In this case, the values at the two lowest momenta are not included in the fit. Our final values for the isovector radii and isovector nucleon magnetic moment are: \begin{align} \langle r_E^2\rangle^{u-d} &= 0.653(48)(30)~\textrm{fm}^2, \nonumber\\ \langle r_M^2\rangle^{u-d} &= 0.536(52)(66)~\textrm{fm}^2,\,\textrm{and} \nonumber\\ \mu^{u-d} &= 4.02(21)(28), \end{align} where the first error is statistical and the second error is a systematic obtained when comparing the plateau method to the two-state fit method as a measure of excited state effects. For the isoscalar radii and moment we follow a similar analysis after adding the disconnected contribution from the plateau method for $t_s=10a=0.9$~fm. We obtain \begin{align} \langle r_E^2\rangle^{u+d} &= 0.537(53)(38)~\textrm{fm}^2, \nonumber\\ \langle r_M^2\rangle^{u+d} &= 0.394(82)(42)~\textrm{fm}^2,\,\textrm{and} \nonumber\\ \mu^{u+d} &= 0.870(60)(39). \end{align} \subsection{Proton and neutron form factors} Having the isovector and isoscalar contributions to the form factors, we can obtain the proton ($G^p(Q^2)$) and neutron ($G^n(Q^2)$) form factors via the linear combinations taken from Eqs.~(\ref{eq:isovector}) and~(\ref{eq:isoscalar}), assuming isospin symmetry between up and down quarks and between proton and neutron. Namely, we have: \begin{align} G^p(Q^2) &= \frac{1}{2}[G^{u+d}(Q^2) + G^{u-d}(Q^2)] \nonumber\\ G^n(Q^2) &= \frac{1}{2}[G^{u+d}(Q^2) - G^{u-d}(Q^2)] \end{align} where $G^p(Q^2)$ ($G^n(Q^2)$) is either the electric or magnetic proton (neutron) form factor. In Figs.~\ref{fig:gep ff fit} and~\ref{fig:gmp ff fit} we show results for the proton electric and magnetic Sachs form factors, respectively. As for the isoscalar case, the disconnected contributions have been included. The bands are from fits to the dipole form of Eq.~(\ref{eq:dipole fit}). In these plots we compare to experimental results from the A1 collaboration~\cite{Bernauer:2013tpr}. We observe a similar behavior when comparing to experiment as for the case of the isovector form factors. Namely, the dipole fit to the lattice data has a smaller slope at small values of $Q^2$ compared to experiment, while $G_M^p(Q^2)$ reproduces the experimental momentum dependence for $Q^2>0.2$~GeV$^2$. \begin{figure} \includegraphics[width=\linewidth]{gEp-with-fits.pdf} \caption{Proton electric Sachs form factor as a function of the momentum transfer. We show with triangles the sum of connected and disconnected contributions, with the plateau result for $t_s=18a=1.7$~fm for the connected and for $t_s=10a=0.9$~fm for the disconnected. The band is a fit to the dipole form. The black points show experimental data from Ref.~\cite{Bernauer:2013tpr}.} \label{fig:gep ff fit} \end{figure} \begin{figure} \includegraphics[width=\linewidth]{gMp-with-fits.pdf} \caption{Proton magnetic Sachs form factor as a function of the momentum transfer. 
We show with squares the sum of connected and disconnected contributions, with the plateau result for $t_s=14a=1.3$~fm for the connected and for $t_s=10a=0.9$~fm for the disconnected. The band is a fit to the dipole form. The black points show experimental data from Ref.~\cite{Bernauer:2013tpr}.} \label{fig:gmp ff fit} \end{figure} In Figs.~\ref{fig:gen ff fit} and~\ref{fig:gmn ff fit} we show the corresponding results for the neutron form factors. For the neutron electric form factor we fit to the form~\cite{Kelly:2004hm}: \begin{equation} G^{n}_E(Q^2) = \frac{\tau A}{1 + \tau B}\frac{1}{(1+\frac{Q^2}{\Lambda^2})^2} \label{eq:dipole neutron} \end{equation} with $\tau=Q^2/(2m_N)^2$ and $\Lambda^2=0.71$~GeV$^2$, allowing $A$ and $B$ to vary. This Ansatz reproduces our data well. We compare to a collection of experimental data from Refs.~\cite{Golak:2000nt, Becker:1999tw, Eden:1994ji, Meyerhoff:1994ev, Passchier:1999cj, Warren:2003ma, Zhu:2001md, Plaster:2005cx, Madey:2003av, Rohe:1999sh, Bermuth:2003qh, Glazier:2004ny, Herberg:1999ud, Schiavilla:2001qe, Ostrick:1999xa}. For $G_M^n(Q^2)$, we agree with the experimental data for $Q^2>0.2$~GeV$^2$; however, we underestimate the magnetic moment by about 20\%. Experimental data for $G_M^n(Q^2)$ shown in Fig.~\ref{fig:gmn ff fit} are taken from Refs.~\cite{Anderson:2006jp, Gao:1994ud, Anklin:1994ae, Anklin:1998ae, Kubon:2001rj, Alarcon:2007zza}. \begin{figure} \includegraphics[width=\linewidth]{gEn-with-fits.pdf} \caption{Neutron electric Sachs form factor as a function of the momentum transfer. Triangles are from the sum of connected and disconnected contributions, with the plateau result for $t_s=18a=1.7$~fm for the connected and for $t_s=10a=0.9$~fm for the disconnected. The band is a fit to the form of Eq.~(\ref{eq:dipole neutron}). Experimental data are shown with the black points, obtained from Refs.~\cite{Golak:2000nt, Becker:1999tw, Eden:1994ji, Meyerhoff:1994ev, Passchier:1999cj, Warren:2003ma, Zhu:2001md, Plaster:2005cx, Madey:2003av, Rohe:1999sh, Bermuth:2003qh, Glazier:2004ny, Herberg:1999ud, Schiavilla:2001qe, Ostrick:1999xa}.} \label{fig:gen ff fit} \end{figure} \begin{figure} \includegraphics[width=\linewidth]{gMn-with-fits.pdf} \caption{Neutron magnetic Sachs form factor as a function of the momentum transfer. We show with squares the sum of connected and disconnected contributions, with the plateau result for $t_s=14a=1.3$~fm for the connected and for $t_s=10a=0.9$~fm for the disconnected. The band is a fit to the dipole form. The black points show experimental data from Refs.~\cite{Anderson:2006jp, Gao:1994ud, Anklin:1994ae, Anklin:1998ae, Kubon:2001rj, Alarcon:2007zza}.} \label{fig:gmn ff fit} \end{figure} We use Eq.~(\ref{eq:ff deriv}) to obtain the radii from the dipole fits. For the case of $G_E^n(Q^2)$, the neutron electric radius is obtained via $\langle r^2_E\rangle^n = -\frac{3A}{2m_N^2}$, where $A$ is the parameter of Eq.~(\ref{eq:dipole neutron}). In all cases we have combined connected and disconnected contributions. 
We obtain: \begin{align} \langle r_E^2\rangle^{p} &= 0.589(39)(33)~\textrm{fm}^2, \nonumber\\ \langle r_M^2\rangle^{p} &= 0.506(51)(42)~\textrm{fm}^2,\,\textrm{and} \nonumber\\ \mu_{p} &= 2.44(13)(14), \end{align} for the proton, and: \begin{align} \langle r_E^2\rangle^{n} &= -0.038(34)(6)~\textrm{fm}^2, \nonumber\\ \langle r_M^2\rangle^{n} &= \phantom{-}0.586(58)(75)~\textrm{fm}^2,\,\textrm{and} \nonumber\\ \mu_{n} &= -1.58(9)(12), \end{align} for the neutron, where, as for the isoscalar and isovector quantities, the first error is statistical and the second is a systematic obtained when comparing the plateau method to the two-state fit method as a measure of excited state effects. \section{Comparison with other results} \label{sec:others} \subsection{Comparison of isovector and isoscalar form factors} Recent lattice calculations for the electromagnetic form factors of the nucleon include an analysis from the Mainz group~\cite{Capitani:2015sba} using $N_\textrm{f}=2$ clover fermions down to a pion mass of 193~MeV, results from the PNDME collaboration~\cite{Bhattacharya:2013ehc} using clover valence fermions on $N_\textrm{f}=2+1+1$ HISQ sea quarks down to a pion mass of $\sim$220~MeV, and $N_\textrm{f}=2+1+1$ results from the ETM collaboration down to a pion mass of 213~MeV~\cite{Alexandrou:2013joa}. Simulations directly at the physical point have become possible only recently. The LHPC has published results in Ref.~\cite{Green:2014xba} using $N_\textrm{f}=2+1$ HEX-smeared clover fermions, which include an ensemble with $m_\pi=149$~MeV. Preliminary results for electromagnetic nucleon form factors at physical or near-physical pion masses have also been reported by the PNDME collaboration in Ref.~\cite{Jang:2016kch} using clover valence quarks on HISQ sea quarks at a pion mass of $130$~MeV and by the RBC/UKQCD collaboration using Domain Wall fermions at $m_\pi=172$~MeV in Ref.~\cite{Abramczyk:2016ziv}. In Fig.~\ref{fig:gev comparison} we compare our results for $G^{u-d}_E(Q^2)$ from the plateau method using $t_s=18a=1.7$~fm to published results. We show results from Ref.~\cite{Green:2014xba} extracted from the summation method using three sink-source separations from 0.93 to 1.39~fm for their ensemble at the near-physical pion mass of $m_\pi=149$~MeV. We note that their statistics of 7752 are about six times smaller than ours at the sink-source separation we use in this plot (see Table~\ref{table:statistics}). \begin{figure} \includegraphics[width=\linewidth]{gEv-with-LHPC.pdf} \caption{Comparison of $G^{u-d}_E(Q^2)$ between results from this work (circles) denoted by ETMC and from the LHPC taken from Ref.~\cite{Green:2014xba} (squares). The dashed line shows the parameterization of the experimental data.} \label{fig:gev comparison} \end{figure} In Fig.~\ref{fig:gmv comparison} we plot our results for $G^{u-d}_M(Q^2)$ from the plateau method using $t_s=14a=1.3$~fm and compare to those from LHPC. At this sink-source separation the statistics are similar, namely 7752 for the LHPC data and 9248 for the results from this work; however, their errors are larger, possibly because the summation method is used for their final quoted results. Within errors, we see consistent results at all $Q^2$ values. \begin{figure} \includegraphics[width=\linewidth]{gMv-with-LHPC.pdf} \caption{Comparison of $G^{u-d}_M(Q^2)$ between results from this work (circles) and Ref.~\cite{Green:2014xba} (squares). 
The dashed line shows the parameterization of the experimental data.} \label{fig:gmv comparison} \end{figure} In Figs.~\ref{fig:f1v comparison} and~\ref{fig:f2v comparison} we compare our results for the isovector Dirac and Pauli form factors $F^{u-d}_1(Q^2)$ and $F^{u-d}_2(Q^2)$ with those from Ref.~\cite{Green:2014xba}. We use Eq.~(\ref{eq:f1f2}) to obtain $F_1^{u-d}(Q^2)$ and $F^{u-d}_2(Q^2)$ from $G^{u-d}_E(Q^2)$ and $G^{u-d}_M(Q^2)$ extracted from the plateau method at the same sink-source separations used in Figs.~\ref{fig:gev comparison} and~\ref{fig:gmv comparison}. As in the case of $G^{u-d}_E(Q^2)$ and $G^{u-d}_M(Q^2)$, we see agreement between these two calculations. We also note that the discrepancy with experiment of $G^{u-d}_M(Q^2)$ at low $Q^2$ values carries over to $F^{u-d}_2(Q^2)$. \begin{figure} \includegraphics[width=\linewidth]{f1v-with-LHPC.pdf} \caption{Comparison of $F^{u-d}_1(Q^2)$ between results from this work (circles) and Ref.~\cite{Green:2014xba} (squares). The dashed line shows the parameterization of the experimental data.} \label{fig:f1v comparison} \end{figure} \begin{figure} \includegraphics[width=\linewidth]{f2v-with-LHPC.pdf} \caption{Comparison of $F^{u-d}_2(Q^2)$ between results from this work (circles) and Ref.~\cite{Green:2014xba} (squares). The dashed line shows the parameterization of the experimental data.} \label{fig:f2v comparison} \end{figure} For the isoscalar case, we compare the connected contributions to the Sachs form factors with Ref.~\cite{Green:2014xba} in Figs.~\ref{fig:ges comparison} and~\ref{fig:gms comparison}. The agreement between the two lattice formulations is remarkable given that the results have not been corrected for finite volume or cut-off effects. The gauge configurations used by LHPC were generated with the same spatial lattice size as ours but with a coarser lattice spacing, yielding $m_\pi L=4.2$ compared to our $m_\pi L=3$. Although the LHPC results for the isovector magnetic form factor at low $Q^2$ are in agreement with experiment, they carry large statistical errors that do not allow us to draw any conclusion as to whether the origin of the discrepancy in our much more accurate data is due to the smaller $m_\pi L$ value. \begin{figure} \includegraphics[width=\linewidth]{gEs-with-LHPC.pdf} \caption{Comparison of $G^{u+d}_E(Q^2)$ between results from this work (circles) and Ref.~\cite{Green:2014xba} (squares). The dashed line shows the parameterization of the experimental data.} \label{fig:ges comparison} \end{figure} \begin{figure} \includegraphics[width=\linewidth]{gMs-with-LHPC.pdf} \caption{Comparison of $G^{u+d}_M(Q^2)$ between results from this work (circles) and Ref.~\cite{Green:2014xba} (squares). The dashed line shows the parameterization of the experimental data.} \label{fig:gms comparison} \end{figure} For the radii and magnetic moment, we compare our results to recently published results, available for the isovector case, from Refs.~\cite{Green:2014xba,Bhattacharya:2013ehc,Capitani:2015sba,Alexandrou:2013joa}. We quote their values obtained before extrapolation to the physical point, using the smallest pion mass available. In Fig.~\ref{fig:rEv comparison} we see that the two results at physical or near-physical pion mass, namely the result of this work and that of LHPC, are within one standard deviation of the spectroscopic determination of the charge radius using muonic hydrogen~\cite{Pohl:2010zza}. 
\begin{figure} \includegraphics[width=\linewidth]{rEv-with-others.pdf} \caption{Our result for $\langle r^2_E\rangle^{u-d}$ at $m_\pi=$130~MeV (circle) compared to recent lattice results from LHPC~\cite{Green:2014xba} at $m_\pi=149$~MeV (square), PNDME~\cite{Bhattacharya:2013ehc} at $m_\pi=220$~MeV (triangle), the Mainz group~\cite{Capitani:2015sba} at $m_\pi=193$~MeV (diamond) and ETMC~\cite{Alexandrou:2013joa} at $m_\pi=213$~MeV (pentagon). We show two error bars when systematic errors are available, with the smaller denoting the statistical error and the larger denoting the combination of statistical and systematic errors added in quadrature. The vertical band denoted with $\mu$H is the experimental result using muonic hydrogen from Ref.~\cite{Pohl:2010zza} and the band denoted with CODATA is from Ref.~\cite{Mohr:2015ccw}.} \label{fig:rEv comparison} \end{figure} \begin{figure} \includegraphics[width=\linewidth]{rMv-with-others.pdf} \caption{Comparison of results for $\langle r^2_M\rangle^{u-d}$ with the notation of Fig.~\ref{fig:rEv comparison}. The experimental band is from Ref.~\cite{Mohr:2015ccw}.} \label{fig:rMv comparison} \end{figure} \begin{figure} \includegraphics[width=\linewidth]{muv-with-others.pdf} \caption{Comparison of results for the isovector nucleon magnetic moment $\mu^{u-d}$ with the notation of Fig.~\ref{fig:rEv comparison}.} \label{fig:muv comparison} \end{figure} A similar comparison is shown in Fig.~\ref{fig:rMv comparison} for the magnetic radius. We see that all lattice results underestimate the experimental band by at most 2$\sigma$, with the exception of the LHPC value, which used the summation method. Similar conclusions are drawn for the isovector magnetic moment $G_M(0)=\mu^{u-d}=\mu_p-\mu_n$, which we show in Fig.~\ref{fig:muv comparison}. \subsection{Comparison of proton form factors} \begin{figure} \includegraphics[width=\linewidth]{gEp-with-LHPC.pdf} \caption{Comparison of $G^p_E(Q^2)$ between results from this work (circles) and Ref.~\cite{Green:2014xba} (squares). The dashed line shows the parameterization of the experimental data.} \label{fig:gep comparison} \end{figure} Published lattice QCD results for the proton form factors at physical or near-physical pion masses are available from LHPC~\cite{Green:2014xba}. We compare our results in Figs.~\ref{fig:gep comparison} and~\ref{fig:gmp comparison} for the proton electric and magnetic Sachs form factors, respectively. We see agreement with their results and note that their relatively larger errors at small $Q^2$ for the case of the magnetic form factor are consistent with both the experimentally determined curve and our results. \begin{figure} \includegraphics[width=\linewidth]{gMp-with-LHPC.pdf} \caption{Comparison of $G^p_M(Q^2)$ between results from this work (circles) and Ref.~\cite{Green:2014xba} (squares). The dashed line shows the parameterization of experimental data.} \label{fig:gmp comparison} \end{figure} \section{Summary and conclusions} \label{sec:conclusions} A first calculation of the isovector and isoscalar electromagnetic Sachs nucleon form factors including the disconnected contributions is presented directly at the physical point, using an ensemble of $N_\textrm{f}=2$ twisted mass fermions at maximal twist with a volume corresponding to $m_\pi L\simeq 3$. Using five sink-source separations for $G_E(Q^2)$ between 0.94~fm and 1.69~fm, we confirm our previous findings that excited state contributions require a separation larger than $\sim$1.5~fm to be sufficiently suppressed. 
For the case of $G_M(Q^2)$ we use three sink-source separations between 0.94~fm and 1.31~fm and observe that for the isovector no excited state effects are present within statistical errors, while for the connected isoscalar, the largest separation of $t_s=1.31$~fm is sufficient for their suppression. Our results for both the isovector and isoscalar $G_E(Q^2)$ lie about one standard deviation above experiment. This may be due to small residual excited state contamination, since this difference is found to decrease as the sink-source separation increases. Our results for $G^{u-d}_M(Q^2)$ at the two lowest $Q^2$ values underestimate the experimental ones but are in agreement for $Q^2>0.2$~GeV$^2$. Volume effects are being investigated to determine whether these could be responsible for this discrepancy. The isoscalar matrix element requires both connected and disconnected contributions, the latter requiring an order of magnitude more statistics. We have computed the disconnected contributions to $G_E^{u+d}(Q^2)$ and $G_M^{u+d}(Q^2)$ for the first four non-zero momentum transfers up to $Q^2= 0.28$~GeV$^2$ and find that their magnitude is smaller than or comparable to the statistical error of the connected contribution. We include the disconnected contributions to combine isovector and isoscalar matrix elements and obtain the proton and neutron electromagnetic Sachs form factors at the physical point. We have used two methods to fit the $Q^2$-dependence of our data, both a dipole Ansatz and the z-expansion. These two methods yield consistent results; however, the latter yields parameters with larger statistical errors. Using the dipole fits to determine the electric and magnetic radii, as well as the magnetic moment, we find agreement with other recent lattice QCD results for the isovector case, and are within 2$\sigma$ of the experimental determinations. Our result for the proton electric charge radius, $\langle r^2_E\rangle^p = 0.589(39)(33)$~fm$^2$, is two standard deviations smaller than the muonic hydrogen determination~\cite{Antognini:1900ns} of $\langle r^2_p\rangle = 0.7071(4)(5)$~fm$^2$; this may be due to remaining excited state or volume effects, which will be investigated further. Our final results are collected in Table~\ref{table:intro results}. \begin{table} \caption{Our final results for the isovector ($p-n$), isoscalar ($p+n$), proton ($p$) and neutron ($n$) electric radius ($\langle r_E^2\rangle$), magnetic radius ($\langle r_M^2\rangle$) and magnetic moment ($\mu$). The first error is statistical and the second a systematic due to excited state contamination.} \label{table:intro results} \begin{tabular}{lr@{.}lr@{.}lr@{.}l} \hline\hline &\multicolumn{2}{c}{$\langle r_E^2\rangle$ [fm$^2$]} & \multicolumn{2}{c}{$\langle r_M^2\rangle$ [fm$^2$]} & \multicolumn{2}{c}{$\mu$} \\ \hline $p$-$n$& 0&653(48)(30) & 0&536(52)(66) & 4&02(21)(28) \\ $p$+$n$& 0&537(53)(38) & 0&394(82)(42) & 0&870(60)(39)\\ $p$ & 0&589(39)(33) & 0&506(51)(42) & 2&44(13)(14) \\ $n$ &-0&038(34)(6) & 0&586(58)(75) &-1&58(9)(12) \\ \hline\hline \end{tabular} \end{table} We plan to analyze the electromagnetic form factors using both an ensemble of $N_\textrm{f}=2$ twisted mass clover-improved fermions simulated at the same pion mass and lattice spacing as the ensemble analyzed in this work but with a lattice size of $64^3\times 128$, yielding $m_\pi L=4$, and an $N_\textrm{f}=2+1+1$ ensemble with a finer lattice spacing. 
In addition, we are investigating improved techniques for the computation of the disconnected quark loops at the physical point. These future calculations will allow for further checks of lattice artifacts and resolve the remaining small tension between lattice QCD and experimental results for these important benchmark quantities. \textit{Acknowledgments:} We would like to thank the members of the ETM Collaboration for a most enjoyable collaboration. We acknowledge funding from the European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie Grant Agreement No. 642069. Results were obtained using Jureca, via a John-von-Neumann-Institut für Computing (NIC) allocation ECY00, HazelHen at H\"ochstleistungsrechenzentrum Stuttgart (HLRS) and SuperMUC at the Leibniz-Rechenzentrum (LRZ), via Gauss allocations with ids 44066 and 10862, Piz Daint at Centro Svizzero di Calcolo Scientifico (CSCS), via projects with ids s540, s625 and s702, and resources at Centre Informatique National de l’Enseignement Supérieur (CINES) and Institute for Development and Resources in Intensive Scientific Computing (IDRIS) under allocation 52271. We thank the staff of these centers for access to the computational resources and for their support. \bibliographystyle{unsrtnat}
\section{Introduction} New ideas are created by (re)combining existing ideas and applications \cite{poincare, weitzman98, burt04}. Business opportunities and jobs are found amidst heterogeneous offers and demands \cite{burt92}. Both novelty and economic welfare depend on information diversity, be it different kinds of information for different kinds of opportunities \cite{page07}. Seen from a network perspective, and without much knowledge about the content of information sources, it's a challenge to model diversity such that economic, scientific, artistic, and other kinds of success can be predicted. In a recent paper in \emph{Science}, Eagle, Macy and Claxton \cite{eagle10}, EMC for short, found support for the relation between diversity and economic well-being in a network study of British communities. They had almost complete telephone data over a month in 2005, obviously stripped of content. Interestingly, the nodes in this network were the communities, as sources and recipients of information, not individuals, for whom numerous benefits of diversity had already been shown in other studies \cite{granovetter83}. However, EMC's measure did not indicate diversity of sources, but diversity of time (volume of calls) spent on any given number of sources instead. This choice seems puzzling at first sight, and is not explained in their paper. I will go into their measure in some detail, and then proceed with network diversity in general. \section{Tie strength diversity} In the normalized Shannon entropy measure EMC propose, $p_{ij}$ is the proportional strength, or value, of the tie (\emph{arc}) between focal node $i$ and contact node $j$, such that \( \sum_{j=1}^{k_i} p_{ij}=1, \) and $k_i$ is the number of $i$'s contacts (\emph{degree}). In their study, $p_{ij}$ is community $i$'s proportional volume of calls to community $j$. Although in general, $p_{ij} \ne p_{ji}$, in phone conversations and in many other social relations, information goes in both directions. Relevant exceptions are written sources of information, which can be cited but not influenced by their readers. {\bf Normalized entropy} is defined as \begin{equation} D(i) = \frac{-\sum_{j=1}^{k_i} p_{ij} \log(p_{ij})}{\log(k_i)} \label{entropy} \end{equation} An index of economic welfare did correlate with more equally divided attention across sources as indicated by Eq.1 ($r = 0.73$).\footnote{Eq.1 was also used to measure diversity across geographic areas, which correlated with economic welfare as well ($r=0.58$).} This is intriguing, but we want comprehension, not just correlation. Only in the extreme case of spending almost all time on one source and almost neglecting others is it obvious that diversity of time spent reduces diversity of information. Otherwise, and net of institutions and cognitive limitations, having more sources is better, at least according to Ron Burt's theory of brokerage \cite{burt92} on which EMC build (see Scott Page's additional arguments \cite{page07}). For sources to indeed provide diverse information and opportunities, they should not be connected to each other directly, nor be connected indirectly other than via the focal node itself \cite{burt92}; see Fig.1. Neither of these effects is represented in EMC's measure, though, whereas other measures that they used suggested that numbers of sources ($r=0.44$) and their lacking direct links (Burt's brokerage, $r=0.72$) are important. (Burt's measure incorporates both effects.) 
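Eq.1 is straightforward to compute. The following minimal Python sketch (function and variable names are mine, for illustration only) also accepts a reflexive tie $p_{ii}$ as one of the entries, a point I return to below.
\begin{verbatim}
import math

def normalized_entropy(p):
    """Normalized Shannon entropy (Eq.1) of a node's proportional tie
    strengths p_ij; p must sum to 1 and have k >= 2 entries. A reflexive
    tie p_ii is simply included as one of the entries."""
    k = len(p)
    h = -sum(q * math.log(q) for q in p if q > 0)
    return h / math.log(k)

# Equal attention over four sources gives maximal diversity:
print(normalized_entropy([0.25, 0.25, 0.25, 0.25]))   # 1.0
# Near-exclusive attention to one source gives low diversity:
print(normalized_entropy([0.97, 0.01, 0.01, 0.01]))   # ~0.12
\end{verbatim}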
\begin{figure} \begin{center} \usetikzlibrary{positioning} \begin{tikzpicture}[xscale=6, yscale=8,>=stealth] \tikzstyle{v}=[circle, minimum size=1mm,draw,thick] \node[v] (a) {$\mathbf{A}$}; \node[v] (b) [right=of a] {$C$}; \node[v] (c) [below=of a] {$D$}; \node[v] (d) [below=of b] {$B$}; \node[v] (e) [right=of d] {$E$}; \draw[thick,-] (a) to node {} (c); \draw[thick,-] (a) to node {} (b); \draw[thick,-] (c) to node {} (d); \draw[thick,-] (d) to node {} (e); \draw[thick,-] (b) to node {} (d); \end{tikzpicture} \caption{\small If all nodes divide their attention equally among their contacts, focal node $A$ should have a lower score on diversity than node $B$, as in Burt's measure for brokerage; EMC's scores (eq.1) are the same for both. Furthermore, $A$'s score should be lower if $C$ and $D$ were connected directly, increasing redundancy rather than diversity for $A$; this is expressed by Burt's measure, whereas EMC's scores stay equal. Finally, $B$ exchanging information with $C$ and $D$ reduces opportunities for $A$, which goes unnoticed by both EMC's and Burt's measures.} \end{center} \end{figure} The network approach helps us to make parsimonious theories that abstract away as much as possible from the content of the ties to make predictions as general as possible. Content matters, obviously, and a balance has to be found. It seems that to comprehend EMC's findings on the diversity of tie strength, we have to take into account some broad characteristic of their content. To this end it might help if we contrast generic phone conversations between residential communities with information transmission in the fields of research and innovation, mostly not by phone. In a comparable network, nodes are then scientific communities or technology domains \cite{davismarquis05}. There, individual inventors must have a skillful command of their sources, for example scientific literature, patents, or experts, which takes much more time and effort than maintaining business relations or asking about jobs on the phone. To cross-fertilize sophisticated knowledge successfully, knowledge brokerage must be preceded, and followed, by a phase of specialization in these sources \cite{carnabuci09}. An innovation-dedicated community can also self-specialize, indicated by a strong tie from the node to itself that summarizes a myriad of individuals collaborating with, or citing, each other. EMC's measure should therefore incorporate reflexive ties as well, \emph{i.e.} the index $j$ in Eq.1 should also allow the case $j = i$. Specialization thus can happen in multiple ways that have in common an accumulation of more densely interrelated knowledge wherein shortcuts and workarounds are discovered. Diversity of tie strength in cross-sectional data reflects combinations or alternations of specialization and brokerage, which, co-depending on network dynamics (see \cite{carnabuci09}), indicates good fortune rather than misfortune. The content of British phone conversations we do not know, but it's clear that innovations are by far outnumbered by more mundane exchanges of information. To transfer complex information, strong ties are necessary \cite{hansen99}, as they are for specialization, while for most interactions in daily life, like searching for a job or selling an item, weak ties will do \cite{granovetter83}. For all those more common cases, strong ties indicate redundancy rather than progressive knowledge refinement. 
Dedicating much attention to a few sources therefore has no advantages, or only brief ones, while it precludes people and their communities from getting non-redundant information elsewhere. It seems that this explains what EMC found. We may thus conclude that their measure is very useful indeed, and focuses on one important aspect of diversity that was previously not studied separately.\footnote{Some credit should go to James Coleman \cite{coleman64}, who presented a measure of entropy (different from EMC's) in his well-known Introduction to Mathematical Sociology. Ron Burt used that measure for tie strength diversity, not to assess knowledge source specialization, though, but to show that women in an organization got early promotion if they had one strong tie to a ``sponsor," a higher manager in the organization other than their own \cite{burt98}---status specialization, for short.} \section{Source diversity} To predict opportunities to create and trade, we should also come to terms with source diversity. As said, the optimal situation for a focal node is to have as many different sources as possible, as far as cognition enables and institutions allow. Our challenge is to appropriately deal with direct and indirect links between these sources. For focal node $A$ in Fig.1, if nodes $C$ and $D$ exchange information directly, $A$ receives more redundant and less diverse information than if $C$ and $D$ are unconnected. Consequently, chances for $A$ to recombine information from them to create new opportunities decrease, whether for business, innovation, or otherwise. Moreover, $C$ and $D$ no longer need $A$ (or other nodes like $B$) in order to communicate; $A$ is then out-competed with respect to benefits resulting from combinations of, and transactions between, $C$ and $D$. In the case of direct links between sources, reduced diversity and increased competition are two sides of the same coin. Empirically they differ; diversity of information and other resources can in principle be observed in social interactions, while competition---if nodes do not show direct rivalry in their behavior---can't be observed but has to be inferred, from performance reduced by it. Burt's measure does a good job at capturing the effects of both tie strength diversity and direct links between sources in one stroke. But it correlates slightly more weakly with economic success than normalized entropy does \cite{eagle10}, and it overlooks nodes one step removed from the focal node that draw information or other resources from the same sources as the focal node does. If we look again at focal node $A$ in Fig.1, $B$ is a case in point. Suppose $B$ provides information to $C$ and $D$ that is useful to them; then they have the advantage first, while $A$ still waits or never hears about it. If, on the other hand, $B$ uses information from $C$ and $D$, the ideas $B$ produces will be more similar to $A$'s than if $B$ used sources unrelated to $A$. $B$ does not necessarily reduce $A$'s diversity but it reduces $A$'s chances for novelty. In a rare email network study where the content of the messages was also known to the researchers, the effect of indirect links (structural equivalence) on information diversity was indeed insignificant \cite{aral07}. Other studies (based on patent data) showed that the effect of this competition on performance (citation impact) was significantly negative, though \cite{podolnystuart95,carnabuci10}. 
There, competition was not for information itself, which does not deplete with usage \cite{arrow62}, but for the novelty that could be created with it and valued by others. What we should measure is not diversity per se, but potentially useful diversity, to which as few competitors as possible have access. In sum, like the direct links between sources discussed above, indirect links also increase competition for a focal node. \emph{Betweenness centrality} \cite{freeman77} is the simplest measure that captures the number of sources under both constraints, namely their lacking direct as well as indirect links. To broker diverse information, a focal node should sit astride multiple \emph{paths} (concatenations of ties) between places where ``useful bits of information are likely to air, and provide a reliable flow of information to and from those places" \cite{burt92}. The basic intuition dates back to 1948 \cite{bavelas48}, and its formalization came independently in 1971 \cite{anthonisse71} and in 1977 \cite{freeman77}. Due to its simplicity, betweenness is easier to comprehend (after working through a few examples), to communicate, and to apply than more sophisticated measures, for which it's generally not understood what the underlying social mechanisms would be. For betweenness, of all paths through a focal node from here to there, only the shortest paths count. But shortest paths can still be long. Unchangeable information, like chain letters, can travel long distances \cite{liben-novell08}, but response times to information are heterogeneously distributed, and the ``fat tail" of slow responders strongly slows down diffusion processes \cite{iribarren09}. In our case, shortest paths may not be short enough, because strategically relevant and manipulable information is much less reliable over longer paths, and before it reaches a focal node it has probably already been used by another middle(wo)man along its way. Diversity is by far the most useful where---and when---the news breaks \cite{burt92}, while ``second hand brokerage" is not \cite{burt07}. Moreover, long network paths strongly affect a node's betweenness scores, whereas they rarely matter for brokerage. We should therefore constrain betweenness to paths of at most three ties in a row, and call it 3-betweenness for short. It thereby fits squarely into Fowler and Christakis' \cite{christakis09} ``three degrees rule," a stylized fact that various sorts of social influence do not reach further than path lengths of three. In Fig.1, $B$ has exclusive access to $E$, and is the gatekeeper with the power to speed up, interrupt, or distort information from or to $E$ \cite{boissevain74, freeman80}; $B$ thus enjoys the full benefit of paths from $E$ to other nodes. Our focal node $A$, in contrast, has no exclusive sources. There is one (shortest) path through it from $C$ to $D$ and another path from $C$ to $D$ of equal length that does not pass through $A$. Without any further information about the network, our initial best bet is that roughly half of the information exchange between $C$ and $D$ will pass through $A$. Assortment (``homophily" \cite{rogers03}), sympathy, and other factors may bias one channel in favor of another, but the associated tie strength diversity around the focal node we have already captured with EMC's measure. We can complement the latter with betweenness, which focuses on a different aspect of diversity. 
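To fix ideas before the formal definition (Eq.2 below), here is a minimal Python sketch of this length-limited path counting; the implementation and its application to Fig.1 are illustrative only, and assume undirected, unweighted ties.
\begin{verbatim}
from collections import deque
from itertools import combinations

def bfs(adj, s):
    """Breadth-first search returning, for every reachable node, its
    distance from s and the number of shortest paths from s to it."""
    dist, sigma, queue = {s: 0}, {s: 1}, deque([s])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in dist:            # first time we reach w
                dist[w], sigma[w] = dist[v] + 1, 0
                queue.append(w)
            if dist[w] == dist[v] + 1:   # v lies on a shortest path to w
                sigma[w] += sigma[v]
    return dist, sigma

def k_betweenness(adj, k=3):
    """For every node i, sum over pairs (j, l) with 0 < d(j,l) <= k the
    fraction of shortest j-l paths that pass through i."""
    info = {v: bfs(adj, v) for v in adj}
    score = {v: 0.0 for v in adj}
    for j, l in combinations(adj, 2):
        dist_j, sigma_j = info[j]
        if l not in dist_j or dist_j[l] > k:
            continue
        dist_l, sigma_l = info[l]
        for i in adj:
            if i == j or i == l or i not in dist_j or i not in dist_l:
                continue
            if dist_j[i] + dist_l[i] == dist_j[l]:   # i is on a geodesic
                score[i] += sigma_j[i] * sigma_l[i] / sigma_j[l]
    return score

# The network of Fig.1:
fig1 = {'A': {'C', 'D'}, 'B': {'C', 'D', 'E'},
        'C': {'A', 'B'}, 'D': {'A', 'B'}, 'E': {'B'}}
print(k_betweenness(fig1))  # A: 0.5, B: 3.5, C: 1.0, D: 1.0, E: 0.0
\end{verbatim}
Consistent with the reasoning above, $A$ receives 0.5 from the pair $C$ and $D$, while $B$, with its exclusive access to $E$, scores far higher.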
From a focal node's point of view, diversity of tie strength (or anything else) further afield is less relevant, so there we may trade off realism for parsimony. This is what betweenness does, by abstracting away from tie strength. For the presence or absence of ties, a threshold value should be established depending on the field of application. Below the threshold, information transfer is insignificantly weak and is ignored. Generalizing these intuitions about exclusive and shared access, {\bf 3-betweenness} of focal node $i$ is the ratio of the number of shortest paths, $g_{jil}$, from $j$ through $i$ to $l$ (under the distance constraint discussed above) to the number of all shortest paths between these two nodes, $g_{j.l}$, summed over all pairs of nodes in the network.\footnote{Notice that if ties are (strictly) asymmetric, a path in one direction is not necessarily the same as a path in the opposite direction. Alternatives to 3-betweenness are 2-betweenness, 4-betweenness, and so on. It remains an empirical question whether 3-betweenness predicts best. In R's igraph package, 3-betweenness for a graph $G$ can be computed by \texttt{betweenness.estimate(G, cutoff=3)}.} Formally, \begin{equation} C_B(i) = \sum_{j} \sum_{l>j} \frac{g_{jil}}{g_{j.l}}, \hspace{1cm} j \ne i \ne l, \hspace{.2cm} d(j,l) \leq 3. \label{betweenness} \end{equation} The reader may verify that if the number of direct or indirect links between $i$'s sources increases, its 3-betweenness score decreases, and that direct links have a stronger impact than indirect links have; a computational sketch is given at the end of this section. \section{Test} I tested the two measures on a network of ``invisible colleges'' of US inventors ($n = 417$), analogous to the British communities of citizens. In this case the ties consist of patent citations, which represent knowledge flows \cite{henderson05}, for which I used all patents in the USA (about two million) over the period 1975--1999. The administrative units corresponding to the colleges of inventors are technology domains, wherein patents are categorized. Performance is here measured as citation impact (number of citations) over the entire period. Domains' self-specialization is a prominent knowledge strategy; on average a domain has 214 source domains, but $p_{ii} = 0.53$, which is much higher than it would have been in an equal division of citations over source domains (0.005). To compare this network with the British community network for the effect of diversity on performance, I simplify by leaving out network dynamics (elaborated in \cite{carnabuci09}). As the average path length is short (1.49), there is no difference between betweenness and 3-betweenness. Both correlate 0.77 with performance, whereas normalized entropy correlates $-0.22$. The most successful technology domains thus combine brokerage with specialization, which we can clearly see by using these two measures.\footnote{A preliminary regression model also featured a significant $(p<0.01)$ interaction effect, suggesting that brokering and specializing \emph{at the same time} are beneficial for collectives in complex environments.} In Burt's measure, tie strength diversity and topological diversity are combined, and since in this case they point in opposite directions (low entropy, high brokerage), that measure correlates much more weakly with performance, at 0.19, and is less informative. Only if they point in the same direction (high entropy, high brokerage), as in EMC's study, is Burt's measure adequate.
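To make the computation in \eqref{betweenness} concrete, the following is a minimal sketch in Python, using the networkx library as an alternative to the igraph call in the footnote. The edge set is an assumption reconstructed from the description of Fig.1 above ($A$--$C$, $A$--$D$, $B$--$C$, $B$--$D$, $B$--$E$) and serves only as an illustration.

\begin{verbatim}
import networkx as nx
from itertools import combinations

def three_betweenness(G, cutoff=3):
    # For every unordered pair (j, l) with d(j, l) <= cutoff, credit each
    # intermediary i with the fraction g_jil / g_j.l of the shortest
    # j-l paths that pass through i.
    scores = dict.fromkeys(G.nodes, 0.0)
    for j, l in combinations(G.nodes, 2):
        try:
            paths = list(nx.all_shortest_paths(G, j, l))
        except nx.NetworkXNoPath:
            continue
        if len(paths[0]) - 1 > cutoff:  # path length counted in ties
            continue
        for i in G.nodes:
            if i in (j, l):
                continue
            g_jil = sum(1 for p in paths if i in p)
            scores[i] += g_jil / len(paths)
    return scores

# Hypothetical edge set, read off from the description of Fig.1:
G = nx.Graph([("A", "C"), ("A", "D"), ("B", "C"), ("B", "D"), ("B", "E")])
print(three_betweenness(G))  # A: 0.5, B: 3.5, C: 1.0, D: 1.0, E: 0.0
\end{verbatim}

On this toy graph the sketch reproduces the reasoning above: $A$ sits on only one of the two equally short paths between $C$ and $D$ and scores $0.5$, whereas $B$, with its exclusive access to $E$, scores far higher.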
Interestingly, though, Burt's discursive theory matches the entropy and 3-betweenness measures better than his own measure. Additional tests in a variety of fields should show whether we now indeed have both correlation and comprehension. \section{Brokerage and Specialization} To assess network diversity for economic development and other accomplishments, we may start out with the elegant and simple measure of 3-betweenness. For valued graphs we complement it with normalized entropy, which should also take reflexive ties into account, if present. Subsequently, it's important to know whether the field one is about to investigate is complex for its inhabitants, such that progressive knowledge or skill refinement yields benefits for them, or relatively simple, such that we may neglect small bursts of specialization. We already know that the fields of technology and science are complex, to which we may add sport, architecture, haute cuisine, art, law, and any other field where extensive schooling or training is required. (And if we don't know, we can figure it out through the effect of normalized entropy.) In all those fields we will find individuals who spend years specializing, and have intensive contacts with relatively few and interconnected sources of knowledge, such as teachers, books, and peers. In the special case of repeated complex tasks, like building aircraft, specialization follows the well-known learning curve \cite{argote90}. We would not want to say that all those learners are wasting their time and should only muster diversity instead. Specialization to the level of mastery makes it possible to use the acquired knowledge (partly) routinely, and also enhances individuals' as well as organizations' absorptive capacity \cite{levinthal90}. This enables them to notice valuable information amidst redundancy and noise, including brokering opportunities that laymen overlook. ``Chance favors the prepared mind,'' as Louis Pasteur said. There is of course no guarantee whatsoever that trained specialists become good brokers, or continue to be successful specialists, and they run the risk, individually and collectively, of getting stuck in a local optimum of their specialization \cite{page07}---their competency trap. In complex fields, we should expect to see the best outcome in the long run for those individuals and collectives who oscillate between, or dynamically combine, specialization and brokerage, and do not stay permanently at either strategy or some place in between \cite{carnabuci09}.\footnote{The cognitive processes associated with brokerage and specialization are exploration and exploitation, respectively. The human brain has different parts for each \cite{daw06}, and noradrenaline helps regulate the dynamic balance between the two \cite{cohenjones05}. When the temporal aspect is overlooked, paradoxes may result. When a broker gets to know her contacts, or sources, well, she may exploit them, whereas progressive specialization is only possible through exploring more efficient shortcuts or (re)combinations. The paradoxes vanish when time is taken into account.} Collectives, such as large business companies with an R\&D department, may employ each strategy in a different part of their organization, and teams may broker through a composition of non-overlapping specialists \cite{guimera05}.
As we have seen, the most successful technology domains combine brokerage, to collect diverse information, with specialization, to accumulate and integrate this information and exploit it well.\footnote{Normalized entropy as discussed here captures both \emph{self} and \emph{source specialization}. As a refinement of source specialization, nodes can also exchange information dyadically, which we might call \emph{mutualistic specialization}. For technology domains, it correlated positively with performance. A more interrelated knowledge base also results from \emph{cluster specialization}~\cite{burt04}, i.e. when a focal node's sources draw ideas from each other, either with or without the focal node's own doing. When a node's local clustering increases, betweenness necessarily decreases. Furthermore, there is \emph{geographic specialization} in specific, often proximate, areas, which also holds for patent citations \cite{jaffe93}. Finally, there is \emph{status specialization} (footnote 2), i.e. a preference for linking to high-status nodes. In the case of technology domains, it coincides with auto-regression (see main text).} We now have the tools to measure tie strength diversity and topological diversity, know more about the underlying mechanisms, can predict when specialization matters, and tell why. Finally, we should not forget that irrespective of diversity, good sources of information are substantially more beneficial than arbitrary sources are. This holds throughout society, from technology domains \cite{carnabuci10} to philosophers \cite{collins00}. High-performing nodes thus have an important spillover of higher-quality knowledge for their network neighbors who specialize in them (an instance of network \emph{auto-regression}). On this note, we may end with some practical advice. First, have good sources of information, and keep in mind that having a few good sources is better than merely having many. Second, make sure they are diverse. Third, integrate and master complex information from these sources through specialization.
\section{Introduction} \thispagestyle{empty} Well-posedness of final value problems for a large class of parabolic differential equations was recently obtained in a joint work of the author, and given an ample description for a broad audience in \cite{ChJo18ax}, after the announcement in \cite{ChJo18}. The present paper substantiates the indications made in the concise review \cite{JJ19}, namely, that the abstract parts in \cite{ChJo18ax} extend from $V$-elliptic Lax--Milgram operators $A$ to those that are merely $V$-\emph{coercive}---despite the fact that such $A$ may be non-injective. As an application, the final value heat conduction problem with the homogeneous Neumann condition is shown to be well-posed. \bigskip The basic analysis is made for a Lax--Milgram operator $A$ defined in $H$ from a $V$-coercive sesquilinear form $a(\cdot,\cdot)$ in a Gelfand triple, i.e., three separable, densely injected Hilbert spaces $V\hookrightarrow H\hookrightarrow V^*$ having the norms $\|\cdot\|$, $|\cdot|$ and $\|\cdot\|_*$, respectively. Hereby $V$ is the form domain of $a$, and $V^*$ the antidual of $V$. Specifically there are constants $C_j>0$ and $k\in{\mathbb{R}}$ such that all $u, v\in V$ satisfy $\| v\|_*\le C_1|v|\le C_2 \| v\|$ and \begin{equation} \label{coerciv-id} |a(u,v)|\le C_3\|u\|\,\|v\|\,, \qquad \Re a(v,v)\ge C_4\|v\|^2-k|v|^2. \end{equation} In fact, $D(A)$ consists of those $u\in V$ for which $a(u,v)=\scal{f}{v}$ for some $f\in H$ holds for all $v\in V$, and $Au=f$; hereby $\scal{u}{v}$ denotes the inner product in $H$. There is also an extension $A\in{\mathbb{B}}(V,V^*)$ given by $\dual{Au}{v}=a(u,v)$ for $u,v\in V$. This is uniquely determined as $D(A)$ is dense in $V$. Both $a$ and $A$ are referred to as $V$-elliptic if the above holds for $k=0$; then $A\in{\mathbb{B}}(V,V^*)$ is a bijection. One may consult the book of Grubb \cite{G09} or that of Helffer \cite{Hel13}, or \cite{ChJo18ax}, for more details on the set-up and basic properties of the unbounded, but closed operator $A$ in $H$. In particular, $A$ is self-adjoint in $H$ if and only if $a(v,w)=\overline{a(w,v)}$, which is not assumed; $A$ may also be nonnormal in general. In the framework of such a triple $(H,V,a)$, the general final value problem is the following: \emph{for given data $f\in L_2(0,T; V^*)$ and $u_T\in H$, determine the $u\in{\cal D}'(0,T;V)$ such that} \begin{equation} \label{fvA-intro} \left. \begin{aligned} \partial_tu +Au &=f &&\quad \text{in ${\cal D}'(0,T;V^*)$} \\ u(T)&=u_T &&\quad\text{in $H$} \end{aligned} \right\} \end{equation} By definition of Schwartz' vector distribution space ${\cal D}'(0,T;V^*)$ as the space of continuous linear maps $C_0^\infty(]0,T[)\to V^*$, cf.\ \cite{Swz66}, the above equation means that for every scalar test function $\varphi\in C_0^\infty(]0,T[)$ the identity $\dual{u}{-\varphi'}+\dual{A u}{\varphi}=\dual{f}{\varphi}$ holds in $V^*$. As is well known, a wealth of parabolic Cauchy problems with homogeneous boundary conditions has been treated via triples $(H,V,a)$ and the ${\cal D}'(0,T;V^*)$ set-up in \eqref{fvA-intro}; cf.\ the work of Lions and Magenes~\cite{LiMa72}, Tanabe~\cite{Tan79}, Temam~\cite{Tem84}, Amann \cite{Ama95} etc. The theoretical analysis made in \cite{ChJo18,ChJo18ax, JJ19} shows that, in the $V$-elliptic case, the problem in \eqref{fvA-intro} is well posed, i.e., it has \emph{existence, uniqueness} and \emph{stability} of a solution $u\in X$ for given data $(f,u_T)\in Y$, in certain Hilbertable spaces $X$, $Y$ that were described explicitly.
Hereby the data space $Y$ is defined in terms of a particular compatibility condition, which was introduced for the purpose in \cite{ChJo18,ChJo18ax}. More precisely, there is even a linear homeomorphism $X\longleftrightarrow Y$, which yields well-posedness in a strong form. This has seemingly closed a gap in the theory, which had remained open since the 1950s, even though the well-posedness is decisive for the interpretation and accuracy of numerical schemes for the problem (the work of John~\cite{John55} was pioneering, but also Eld\'en \cite{Eld87} could be mentioned). In rough terms, the results are derived from a useful structure on the reachable set for a general class of parabolic differential equations. The main example treated in \cite{ChJo18,ChJo18ax} is the heat conduction problem of characterising the $u(t,x)$ that in a $C^\infty$-smooth bounded open set $\Omega\subset{\mathbb{R}}^{n}$ with boundary $\Gamma=\partial\Omega$ fulfil the equations (for $\Delta=\partial_{x_1}^2+\dots+\partial_{x_n}^2$), \begin{equation} \label{heat-intro} \left. \begin{aligned} \partial_tu(t,x)-\operatorname{\Delta} u(t,x)&=f(t,x) &&\quad\text{for $t\in\,]0,T[\,$, $x\in\Omega$} \\ u(t,x)&=g(t,x) &&\quad\text{for $t\in\,]0,T[\,$, $x\in\Gamma$} \\ u(T,x)&=u_T(x) &&\quad\text{for $x\in\Omega$} \end{aligned} \right\} \end{equation} A possible application could be a nuclear power plant hit by a power failure at $t=0$: after power is regained at $t=T>0$ and the reactor temperatures $u_T(x)$ are measured, a calculation backwards in time could settle whether the temperatures $u(t_0,x)$ at some $t_0<T$ could have caused damage to the fuel rods. However, the Dirichlet condition $u=g$ at the boundary $\Gamma$ is of limited physical importance, so an extension to, e.g., the Neumann condition, which represents controlled heat flux at $\Gamma$, is desirable; this makes it natural to work out an extension to $V$-coercive Lax--Milgram operators $A$. In this connection it should be noted that when $A$ is $V$-coercive (that is, satisfies \eqref{coerciv-id} only for some $k>0$), it is possible that $0\in\sigma(A)$, the spectrum of $A$, for example because $\lambda=0$ is an eigenvalue of $A$. In fact, this is the case for the Neumann realisation $-\!\operatorname{\Delta}_N$, which has the space of constant functions $\C1_\Omega$ as the null space. In Section~\ref{Neu-sect} below, well-posedness is obtained for the heat problem \eqref{heat-intro} with the Dirichlet condition replaced by the homogeneous Neumann condition. \bigskip At first glance, it may seem surprising that the possible non-injectivity of the coercive operator $A$ is \emph{inconsequential} for the well-posedness of the final value problem \eqref{fvA-intro}. In particular this means that the backward uniqueness---$u(T)=0$ in $H$ implies $u(t)=0$ in $H$ for $0\le t<T$---of the equation $u'+Au=f$ will hold regardless of whether $A$ is injective or not. This can be seen from the extensions of the abstract theory given below; in particular when the results are applied in Section~\ref{Neu-sect} to the case $A=-\!\operatorname{\Delta}_N$. The point of departure is to make a comparison of \eqref{fvA-intro} with the corresponding Cauchy problem for the equation $u'+Au=f$. For this it is classical to seek solutions $u$ in the Banach space \begin{equation} \begin{split} X=&L_2(0,T;V)\bigcap C([0,T];H) \bigcap H^1(0,T;V^*), \\ \|u\|_X=&\big(\int_0^T \|u(t)\|^2\,dt+\sup_{0\le t\le T}|u(t)|^2+\int_0^T (\|u(t)\|_*^2 +\|u'(t)\|_*^2)\,dt\big)^{1/2}.
\end{split} \label{eq:X} \end{equation} In fact, the following result is essentially known from the work of Lions and Magenes \cite{LiMa72}: \begin{proposition} \label{LiMa-prop} Let $V$ be a separable Hilbert space with $V \subseteq H$ algebraically, topologically and densely, and let $A$ denote the Lax--Milgram operator induced by a $V$-coercive, bounded sesquilinear form on $V$, as well as its extension $A\in{\mathbb{B}}(V,V^*)$. When $u_0 \in H$ and $f \in L_2(0,T; V^*)$ are given, then there is a uniquely determined solution $u$ belonging to $X$, cf.\ \eqref{eq:X}, of the Cauchy problem \begin{equation} \left. \begin{aligned} \partial_tu +Au &=f \quad \text{in ${\cal D}'(0,T;V^*)$} \\ \qquad u(0)&=u_0 \quad\text{in $H$} \end{aligned} \right\} \label{ivA-intro} \end{equation} The solution operator $(f,u_0)\mapsto u$ is continuous $L_2(0,T; V^*)\oplus H\to X$, and problem \eqref{ivA-intro} is well-posed. \end{proposition} Remarks on the classical reduction from the $V$-coercive case to the elliptic one will follow in Section~\ref{aproof-sect}. The stated continuity of the solution operator is well known to the experts. But for the reader's convenience, in Proposition~\ref{pest-prop} below, the continuity is shown by explicit estimates using Gr{\"o}nwall's lemma; these may be of independent interest. Whilst the expression below for the solution is hardly surprising, it seems not to have been obtained hitherto in the present context of $V$-coercive Lax--Milgram operators $A$ and general triples $(H,V,a)$: \begin{proposition} \label{Duhamel-prop} The unique solution $u$ in $X$ provided by Proposition~\ref{LiMa-prop} is given by Duhamel's formula, \begin{equation} \label{u-id} u(t) = e^{-tA}u_0 + \int_0^t e^{-(t-s)A}f(s) \,ds \qquad\text{for } 0\leq t\leq T. \end{equation} Here each of the three terms belongs to $X$. \end{proposition} As shown in Section~\ref{aproof-sect} below, to obtain \eqref{u-id} it suffices to reinforce the classical integrating factor technique with the \emph{injectivity} of the semigroup $e^{-tA}$. In fact, it is exploited in \eqref{u-id} and throughout that $-A$ generates an analytic semigroup $e^{-zA}$ in ${\mathbb{B}}(H)$. As a consequence of the analyticity, the family of operators $e^{-zA}$ was shown in \cite{ChJo18ax} to consist of \emph{injections} on $H$ in case $A$ is $V$-elliptic. This extends to general $V$-coercive $A$, as accounted for in Proposition~\ref{inj-prop} below. Hence, also in the present paper, $e^{-tA}$ has the inverse $e^{tA}:=(e^{-tA})^{-1}$ for $t>0$. For $t=T$, the Duhamel formula \eqref{u-id} now obviously yields a \emph{bijection} $u(0)\longleftrightarrow u(T)$ between the initial and terminal states (for fixed $f$), as one can solve for $u_0$ by means of the inverse $e^{TA}$. In particular, backwards uniqueness of the solutions to $u'+Au=f$ holds in the large class $X$. \bigskip Returning to the final value problem \eqref{fvA-intro}, it would be natural to seek solutions $u$ in the same space $X$. This turns out to be possible only when the data $(f, u_T)$ are subjected to substantial further conditions. To formulate these, it is noted that the above inverse $e^{tA}$ enters the theory through its domain, which in the algebraic sense is simply a range, namely $D(e^{tA})=R(e^{-tA})$; but this domain has the structural advantage of being a Hilbert space under the graph norm $\|u\|=(|u|^2+|e^{tA}u|^2)^{1/2}$.
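For orientation, these domains can be made explicit in the simplest spectral case; this is only an illustration, as the operators $A$ considered here need be neither self-adjoint nor normal. If $A$ is self-adjoint with an orthonormal basis $(e_j)_{j\in{\mathbb{N}}}$ of eigenvectors, $Ae_j=\lambda_j e_j$ with $\lambda_j\nearrow\infty$, then $e^{-tA}u=\sum_{j} e^{-t\lambda_j}\scal{u}{e_j}e_j$, whence \begin{equation} D(e^{tA})=R(e^{-tA}) =\Set{u\in H}{\sum_{j\in{\mathbb{N}}} e^{2t\lambda_j}|\scal{u}{e_j}|^2<\infty}. \end{equation} Membership thus requires decay of the coefficients $\scal{u}{e_j}$ at the rate $e^{-t\lambda_j}$, a requirement that is increasingly restrictive for larger $t$; this gives a first indication of the strictly descending chain of domains met below.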
For $t=T$ the domains $D(e^{T A})$ have a decisive role in the well-posedness result below, where condition \eqref{eq:cc-intro} is a fundamental clarification for the final value problem in \eqref{fvA-intro} and the parabolic problems it represents. Another ingredient in \eqref{eq:cc-intro} is the full yield $y_f$ of the source term $f\colon \,]0,T[\, \to V^*$, namely \begin{equation} \label{yf-eq} y_f= \int_0^T e^{-(T-t)A}f(t)\,dt. \end{equation} Hereby it is used that $e^{-tA}$ extends to an analytic semigroup in $V^*$, as the extension $A\in{\mathbb{B}}(V,V^*)$ is an unbounded operator in the Hilbertable space $V^*$ satisfying the necessary estimates (cf.\ Remark~\ref{domain-rem}; and also \cite[Lem.\ 5]{ChJo18ax} for the extension). So $y_f$ is a priori a vector in $V^*$, but in fact $y_f$ lies in $H$, as Proposition~\ref{Duhamel-prop} shows it equals the final state of a solution in $C([0,T],H)$ of a Cauchy problem having $u_0=0$. These remarks on $y_f$ make it clear that in the following main result of the paper---which relaxes the assumption of $V$-ellipticity in \cite{ChJo18,ChJo18ax} to $V$-coercivity---the difference in \eqref{eq:cc-intro} is a member of $H$: \begin{theorem} \label{intro-thm} Let $A$ be a $V$-\emph{coercive} Lax--Milgram operator defined from a triple $(H,V,a)$ as above. Then the abstract final value problem \eqref{fvA-intro} has a solution $u(t)$ belonging to the space $X$ in \eqref{eq:X}, if and only if the data $(f,u_T)$ belong to the subspace \begin{equation} Y\subset L_2(0,T; V^*)\oplus H \end{equation} defined by the condition \begin{equation} \label{eq:cc-intro} u_T-\int_0^T e^{-(T-t)A}f(t)\,dt \ \in\ D(e^{TA}). \end{equation} In the affirmative case, the solution $u$ is uniquely determined in $X$ and \begin{equation} \label{eq:Y-intro} \begin{split} \|u\|_{X}& \le c \Big(|u_T|^2+\int_0^T\|f(t)\|_*^2\,dt+\Big|e^{TA}\big(u_T-\int_0^Te^{-(T-t)A}f(t)\,dt\big)\Big|^2\Big)^{1/2} \\ &=c \|(f,u_T)\|_Y, \end{split} \end{equation} whence the solution operator $(f,u_T)\mapsto u$ is continuous $Y\to X$. Moreover, \begin{equation} \label{eq:fvp_solution} u(t) = e^{-tA}e^{TA}\Big(u_T-\int_0^T e^{-(T-s)A}f(s)\,ds\Big) + \int_0^t e^{-(t-s)A}f(s) \,ds, \end{equation} where all terms belong to $X$ as functions of $t\in[0,T]$, and the difference in \eqref{eq:cc-intro} equals $e^{-TA}u(0)$ in $H$. \end{theorem} The norm on the data space $Y$ in \eqref{eq:Y-intro} is seen at once to be the graph norm of the composite map \begin{equation} L_2(0,T; V^*)\oplus H \xrightarrow[\;]{\quad \Phi\quad} H\xrightarrow[\;]{\quad e^{TA}\quad} H \end{equation} given by $(f,u_T)\mapsto u_T-y_f\mapsto e^{TA}(u_T-y_f)$, whereby $\Phi(f,u_T)=u_T-y_f$. In fact, the solvability criterion \eqref{eq:cc-intro} means that $e^{TA}\Phi$ must be defined at $(f,u_T)$, so the data space $Y$ is its domain. Being an inverse, $e^{TA}$ is a closed operator in $H$, and so is $e^{TA}\Phi$; hence $Y=D(e^{TA}\Phi)$ is complete. Now, as the Banach space $V^*$ in \eqref{eq:Y-intro} is Hilbertable, so is $Y$. Thus the unbounded operator $e^{TA}\Phi$ is a key ingredient in the rigorous treatment of the final value problem \eqref{fvA-intro}. In control theoretic terms, the role of $e^{TA}\Phi$ is to provide the unique initial state given by \begin{equation} u(0)=e^{TA}\Phi(f,u_T)=e^{TA}(u_T-y_f), \end{equation} which is steered by $f$ to the final state $u(T)=u_T$ at time $T$; cf.\ the Duhamel formula \eqref{u-id}.
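As a simple consistency check, consider the source-free case $f=0$: then $y_f=0$, so the criterion \eqref{eq:cc-intro} reduces to the requirement $u_T\in D(e^{TA})$, the initial state is $u(0)=e^{TA}u_T$, and the solution formula \eqref{eq:fvp_solution} collapses to \begin{equation} u(t)=e^{-tA}e^{TA}u_T\qquad\text{for } 0\le t\le T. \end{equation} The general case perturbs this by the yield $y_f$, which is subtracted from $u_T$ before the backward propagation.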
Criterion \eqref{eq:cc-intro} is a generalised \emph{compatibility} condition on the data $(f,u_T)$; such conditions have long been known in the theory of parabolic problems, cf.\ Remark~\ref{GS-rem}. The presence of $e^{-(T-t)A}$ and the integral over $[0,T]$ make \eqref{eq:cc-intro} \emph{non-local} in both space and time. This aspect is further complicated by the reference to $D(e^{TA})$, which for larger final times $T$ typically gives increasingly strict conditions: \begin{proposition} If the spectrum $\sigma(A)$ of $A$ is not contained in the strip $\set{z\in{\mathbb{C}}}{-k\le \Re z\le k}$, where $k$ is the constant from \eqref{coerciv-id}, then the domains $D(e^{tA})$ form a strictly descending chain, that is, \begin{equation} \label{dom-intro} H\supsetneq D(e^{tA})\supsetneq D(e^{t' A}) \qquad\text{ for $0<t<t'$}. \end{equation} \end{proposition} This results from the injectivity of $e^{-tA}$ via well-known facts for semigroups reviewed in \cite[Thm.\ 11]{ChJo18ax} (with reference to \cite{Paz83}). In fact, the arguments given for $k=0$ in \cite[Prop.\ 11]{ChJo18ax} apply mutatis mutandis. Now, \eqref{u-id} also shows that $u(T)$ has two radically different contributions, even if $A$ has nice properties. First, for $t=T$ the integral equals $y_f$, which can be \emph{anywhere} in $H$. Indeed, $f\mapsto y_f$ is a continuous \emph{surjection} $L_2(0,T;V^*)\to H$. This was shown for $k=0$ via the Closed Range Theorem in \cite[Prop.\ 5]{ChJo18ax}, and for $k>0$ surjectivity follows from this case as $e^{-(T-s)A}f(s)=e^{-(T-s)(A+kI)}e^{(T-s)k}f(s)$ in \eqref{yf-eq}, whereby $A+kI$ is $V$-elliptic and $f\mapsto e^{(T-s)k}f$ is a bijection on $L_2(0,T;V^*)$. Secondly, $e^{-tA}u(0)$ solves $u'+Au=0$, and for $u(0)\ne0$ and $V$-elliptic $A$ it is a precise property in non-selfadjoint dynamics that the ``height'' $h(t)= |e^{-tA}u(0)|$ is \begin{align*} &\text{strictly positive ($h>0$)},\\ &\text{strictly decreasing ($h'<0$)},\\ &\text{\emph{strictly convex} (since $h''>0$)} . \end{align*} Whilst this holds if $A$ is self-adjoint or normal, it was emphasized in \cite{ChJo18ax} that it suffices that $A$ is just hyponormal (i.e., $D(A)\subset D(A^*)$ and $|Ax|\ge|A^*x|$ for $x\in D(A)$, following Janas \cite{Jan94}). Recently this was followed up by the author in \cite{18logconv}, where the stronger logarithmic convexity of $h(t)$ was proved \emph{equivalent} to the formally weaker property of $A$ that, for $x\in D(A^2)$, \begin{equation} \label{Alogconv-id} 2(\Re\scal{A x}{x})^2\le \Re\scal{A^2x}{x}|x|^2+|A x|^2|x|^2 . \end{equation} For $V$-coercive $A$ only the strict decrease may need to be relinquished. Indeed, the strict positivity $h(t)>0$ follows from the injectivity of $e^{-tA}$ in Proposition~\ref{inj-prop} below. Moreover, the characterisation in \cite[Lem.\ 2.2]{18logconv} of the log-convex $C^2$-functions $f(t)$ on $[0,\infty[\,$ as the solutions of the differential inequality $f''\cdot f\ge (f')^2$ and the resulting criterion for $A$ in \eqref{Alogconv-id} apply \emph{verbatim} to the coercive case; hereby the differential calculus in Banach spaces is exploited in a classical derivation of the formulae for $u(t)=e^{-tA}u(0)$, \begin{align} h'(t)&=-\frac{\Re\scal{Au(t)}{u(t)}}{|u(t)|}, \\ h''(t)&=\frac{\Re\scal{A^2u(t)}{u(t)}+|Au(t)|^2}{|u(t)|} -\frac{(\Re\scal{Au(t)}{u(t)})^2}{|u(t)|^3}.
\end{align} But it is due to the strict positivity $|e^{-tA}u(0)|>0$ for $t\ge0$ in the denominators that the expressions make sense, so injectivity of $e^{-tA}$ also enters crucially at this point. Similarly the singularity of $|\cdot|$ at the origin poses no problems for the mere differentiation of $h(t)$. Therefore it is likely that the natural formulas for $h'$, $h''$ have not been rigorously proved before \cite{JJ19}. These remarks also shed light on the usefulness of Proposition~\ref{inj-prop} below. However, the stiffness intrinsic to \emph{strict} convexity, hence to log-convexity, corresponds well with the fact that $u(T)=e^{-TA}u(0)$ is in any case confined to a dense, but very small space, as by the analyticity \begin{equation} \label{DAn-cnd} u(T)\in \textstyle{\bigcap_{n\in{\mathbb{N}}}} D(A^n). \end{equation} For $u'+Au=f$, the possible $u_T$ will hence be a sum of some arbitrary $y_f\in H$ and a stiff term $e^{-TA}u(0)$. Thus $u_T$ can be prescribed in the affine space $y_f+D(e^{TA})$. As any $y_f\ne0$ will shift $D(e^{TA})\subset H$ in an arbitrary direction, $u(T)$ can be expected \emph{anywhere} in $H$ (unless $y_f\in D(e^{TA})$ is known). So neither \eqref{DAn-cnd} nor $u(T)\in D(e^{TA})$ can be expected to hold if $y_f\ne0$---not even if $|y_f|$ is much smaller than $|e^{-TA}u(0)|$. Hence it seems best for final value problems to consider inhomogeneous problems from the outset. \begin{remark} To give some background, two classical observations for the homogeneous case $f=0$, $g=0$ in \eqref{heat-intro} are recalled. First there is the smoothing effect for $t>0$ of parabolic Cauchy problems, which means that $u(t,x)\in C^\infty(\,]0,T]\times\overline\Omega)$ whenever $u_0\in L_2(\Omega)$. (Rauch \cite[Thm.\ 4.3.1]{Rau91} has a version for $\Omega={\mathbb{R}}^{n}$; Evans \cite[Thm.\ 7.1.7]{Eva10} gives the stronger result $u\in C^\infty([0,T]\times\overline\Omega)$ when $f\in C^\infty([0,T]\times\overline\Omega)$, $g=0$ and $u_0\in C^\infty(\overline\Omega)$ fulfil the classical compatibility conditions at $\{0\}\times\Gamma$---which for $f=0$, $g=0$ gives the $C^\infty$ property on $[\varepsilon, T]\times\overline{\Omega}$ for any $\varepsilon>0$, hence on $\,]0, T]\times\overline{\Omega}$). Therefore $u(T,\cdot)\in C^\infty(\overline\Omega)$; whence \eqref{heat-intro} with $f=0$, $g=0$ cannot be solved if $u_T$ is prescribed arbitrarily in $L_2(\Omega)$. But this just indicates an asymmetry in the properties of the initial and final value problems. Secondly, there is a phenomenon of $L_2$-instability in case $f=0$, $g=0$ in \eqref{heat-intro}, which was perhaps first described by Miranker \cite{Miranker61}. The instability is found via the Dirichlet realization of the Laplacian, $-\!\operatorname{\Delta}_D$, and its $L_2(\Omega)$-orthonormal basis $e_1(x), e_2(x),\dots$ of eigenfunctions associated to the usual ordering of its eigenvalues $0<\lambda_1\le\lambda_2\le\dots$, which via Weyl's law for the counting function, cf.\ \cite[Ch.~6.4]{CuHi53}, gives \begin{equation} \label{Weyl-id} \lambda_j={\cal O}(j^{2/n})\quad\text{ for $j\to\infty$}. \end{equation} This basis gives rise to a sequence of final value data $u_{T,j}(x)=e_j(x)$ lying on the unit sphere in $L_2(\Omega)$ as $\|u_{T,j}\|= \|e_j\|=1$ for $j\in{\mathbb{N}}$.
But the corresponding solutions to $u'-\!\operatorname{\Delta} u=0$, i.e.\ $u_j(t,x)=e^{(T-t)\lambda_j}e_j(x)$, have initial states $u_j(0,x)$ whose $L_2$-norms, by \eqref{Weyl-id}, grow \emph{rapidly} with the index $j$, \begin{equation} \|u_j(0,\cdot)\| = e^{T\lambda_j}\|e_j\| = e^{T\lambda_j}\nearrow\infty. \end{equation} This $L_2$-instability cannot be removed, of course, but it only indicates that the $L_2(\Omega)$-norm is an insensitive choice for problem \eqref{heat-intro}. The task is hence to obtain a norm on $u_T$ giving better control over the backward calculations of $u(t,x)$---for the inhomogeneous heat problem \eqref{heat-intro}, an account of this was given in \cite{ChJo18ax}. \end{remark} \begin{remark} Almog, Grebenkov, Helffer, Henry \cite{AlHe15,GrHelHen17,GrHe17} recently studied the complex Airy operator $-\!\operatorname{\Delta}+\operatorname{i} x_1$ via triples $(H,V,a)$, leading to Dirichlet, Neumann, Robin and transmission boundary conditions, in bounded and unbounded regions. Theorem~\ref{intro-thm} is expected to apply to final value problems for those of their realisations that satisfy the coercivity condition in \eqref{coerciv-id}. However, $-\!\operatorname{\Delta}+\operatorname{i} x_1$ has empty spectrum on ${\mathbb{R}}^{n}$, cf.\ the fundamental paper of Herbst \cite{Her79}, so it remains to be seen for which of the regions in \cite{AlHe15,GrHelHen17,GrHe17} there is a strictly descending chain of domains as in \eqref{dom-intro}. \end{remark} \section{Preliminaries: Injectivity of Analytic Semigroups} As indicated in the introduction, it is central to the analysis of final value problems that an analytic semigroup of operators, like $e^{t\operatorname{\Delta}_D}$, always consists of \emph{injections}. This shows up both at the technical and conceptual level, that is, both in the proofs and in the objects that enter the theorem. A few aspects of semigroup theory in a complex Banach space $B$ are therefore recalled. Besides classical references by Davies~\cite{Dav80}, Pazy~\cite{Paz83}, Tanabe~\cite{Tan79} or Yosida~\cite{Yos80}, a more recent account is given in \cite{ABHN11}. The generator is $\mathbf{A} x=\lim_{t\to0^+}\frac1t(e^{t\mathbf{A}}x-x)$, where $x$ belongs to the domain $D(\mathbf{A})$ when the limit exists. $\mathbf{A}$ is a densely defined, closed linear operator in $B$ that for some $\omega \geq 0$, $M \geq 1$ satisfies the resolvent estimates $\|(\mathbf{A}-\lambda)^{-n}\|_{{\mathbb{B}}(B)}\le M/(\lambda-\omega)^n$ for $\lambda>\omega$, $n\in{\mathbb{N}}$. The corresponding $C_0$-semigroup of operators $e^{t\mathbf{A}}\in{\mathbb{B}}(B)$ is of type $(M,\omega)$: it fulfils that $e^{t\mathbf{A}}e^{s \mathbf{A}}=e^{(s+t)\mathbf{A}}$ for $s,t\ge0$, $e^{0\mathbf{A}}=I$ and $\lim_{t\to0^+}e^{t \mathbf{A}}x=x$ for $x\in B$; whilst \begin{align} \|e^{t\mathbf{A}}\|_{{\mathbb{B}}(B)} \leq M e^{\omega t} \quad \text{ for } 0 \leq t < \infty. \end{align} Indeed, the Laplace transformation $(\lambda I-\mathbf{A})^{-1}=\int_0^\infty e^{-t\lambda}e^{t\mathbf{A}}\,dt$ gives a bijection of the semigroups of type $(M,\omega)$ onto (the resolvents of) the stated class of generators. To elucidate the role of \emph{injectivity}, recall that if $e^{t\mathbf{A}}$ is analytic, $u'=\mathbf{A} u$, $u(0)=u_0$ is uniquely solved by $u(t)=e^{t\mathbf{A}}u_0$ for \emph{every} $u_0\in B$.
Here injectivity of $e^{t\mathbf{A}}$ is equivalent to the important geometric property that the trajectories of two solutions $e^{t\mathbf{A}}v$ and $e^{t\mathbf{A}}w$ of $u'=\mathbf{A} u$ have no confluence point in $B$ for $v\ne w$. Nevertheless, the literature seems to have focused on examples of semigroups with non-invertibility of $e^{t\mathbf{A}}$, like \cite[Ex.~2.2.1]{Paz83}; these necessarily concern non-analytic cases. The well-known result below gives a criterion for $\mathbf{A}$ to generate a $C_0$-semigroup $e^{z\mathbf{A}}$ that is defined and analytic for $z$ in the open sector \begin{equation} S_{\theta}= \Set{z\in{\mathbb{C}}}{z\ne0,\ |\arg z | < \theta}. \end{equation} It is formulated in terms of the spectral sector \begin{equation} \Sigma_{\theta} =\left\{ 0 \right\}\cup \Set{ \lambda \in{\mathbb{C}}}{ |\arg\lambda | < \frac{\pi}{2} + \theta} . \end{equation} \begin{proposition} \label{Pazy'-prop} If $\mathbf{A}$ generates a $C_0$-semigroup of type $(M,\omega)$ and $\omega\in\rho(\mathbf{A})$, the following properties are equivalent for each $\theta \in\,]0,\frac{\pi}{2}[\,$: \begin{itemize} \item[{\rm (i)}] The resolvent set $\rho(\mathbf{A})$ contains $\omega+\Sigma_{\theta}$ and \begin{equation} \sup\Set{|\lambda-\omega|\cdot\|(\lambda I - \mathbf{A})^{-1} \|_{{\mathbb{B}}(B)}}{\lambda\in\omega+\Sigma_{\theta}, \ \lambda \neq \omega} <\infty. \end{equation} \item[{\rm (ii)}] The semigroup $e^{t \mathbf{A}}$ extends to an analytic semigroup $e^{z \mathbf{A}}$ defined for $z\in S_{\theta}$ with \begin{equation} \sup\Set{ e^{-z\omega}\|e^{z\mathbf{A}}\|_{{\mathbb{B}}(B)}}{z\in \overline{S}_{\theta'}}<\infty \quad \text{whenever $0<\theta'<\theta$}. \end{equation} \end{itemize} In the affirmative case, $e^{t \mathbf{A}}$ is differentiable in ${\mathbb{B}}(B)$ for $t>0$ with derivative $(e^{t\mathbf{A}})' = \mathbf{A} e^{t\mathbf{A}}$, and for every $\eta$ such that $\alpha(\mathbf{A})<\eta<\omega$ one has \begin{align} \label{eta-est} \sup_{t>0} e^{-t\eta}\|e^{t\mathbf{A}}\|_{{\mathbb{B}}(B)}+\sup_{t>0} te^{-t\eta}\|\mathbf{A} e^{t\mathbf{A}}\|_{{\mathbb{B}}(B)} <\infty, \end{align} whereby $\alpha(\mathbf{A})=\sup\Re\sigma(\mathbf{A})$ denotes the spectral abscissa of $\mathbf{A}$ (here $\alpha(\mathbf{A})<\omega$, as $0\in\Sigma_\theta$). \end{proposition} In case $\omega=0$, the equivalence is just a review of the main parts of Theorem~2.5.2 in \cite{Paz83}. For general $\omega\ge0$, one can reduce to this case, since $\mathbf{A}=\omega I+(\mathbf{A}-\omega I)$ yields the operator identity $e^{t\mathbf{A}}=e^{t\omega}e^{t(\mathbf{A}-\omega I)}$, where $e^{t(\mathbf{A}-\omega I)}$ is of type $(M,0)$ for some $M$. Indeed, the right-hand side is easily seen to be a $C_0$-semigroup, which since $e^{t\omega}=1+t\omega+o(t)$ has $\mathbf{A}$ as its generator, so the identity results from the bijectiveness of the Laplace transform. In this way, (i)$\iff$(ii) follows straightforwardly from the case $\omega=0$, using for both implications that $e^{z\mathbf{A}}=e^{z\omega}e^{z(\mathbf{A}-\omega I)}$ holds in $S_\theta$ by unique analytic extension. Since $\omega\in\rho(\mathbf{A})$ implies $\alpha(\mathbf{A})<\omega$, the above translation method gives $e^{t\mathbf{A}}=e^{t\eta}e^{t(\mathbf{A}-\eta I)}$, where $e^{t(\mathbf{A}-\eta I)}$ is of type $(M,0)$ whenever $\alpha(\mathbf{A})<\eta<\omega$. 
This yields the first part of \eqref{eta-est}, and the second now follows from this and the case $\omega=0$ by means of the splitting $\mathbf{A}=\eta' I+(\mathbf{A}-\eta' I)$ for $\alpha(\mathbf{A})<\eta'<\eta$. The reason for stating Proposition~\ref{Pazy'-prop} for general type $(M,\omega)$ semigroups is that it shows explicitly that cases with $\omega>0$ differ only in the estimates on ${\mathbb{R}}_+$ or in the closed subsectors $\overline{S}_{\theta'}$---the mere analyticity in $S_{\theta}$ is unaffected by the translation by $\omega I$. Hence one has the following improved version of \cite[Prop.\ 1]{ChJo18ax}: \begin{proposition} \label{inj-prop} If a $C_0$-semigroup $e^{t\mathbf{A}}$ of type $(M,\omega)$ on a complex Banach space $B$ has an analytic extension $e^{z\mathbf{A}}$ to $S_{\theta}$ for some $\theta>0$, then $e^{z\mathbf{A}}$ is \emph{injective} for every $z \in S_\theta$. \end{proposition} \begin{proof} Let $e^{z_0 \mathbf{A}} u_0 = 0$ hold for some $u_0 \in B$ and $z_0 \in S_{\theta}$. Analyticity of $e^{z\mathbf{A}}$ in $S_{\theta}$ carries over by the differential calculus in Banach spaces to the map $f(z)= e^{z\mathbf{A}}u_0$. So for $z$ in a suitable open ball $B(z_0,r)\subset S_{\theta}$, a Taylor expansion and the identity $f^{(n)}(z_0) = \mathbf{A}^n e^{z_0 \mathbf{A}}u_0$ for analytic semigroups (cf.~\cite[Lem.~2.4.2]{Paz83}) give \begin{align} f(z) = \sum_{n=0}^{\infty} \frac{1}{n!}(z-z_0)^n f^{(n)}(z_0)=\sum_{n=0}^{\infty} \frac{1}{n!}(z-z_0)^n \mathbf{A}^n e^{z_0 \mathbf{A}}u_0\equiv 0. \end{align} Hence $f\equiv 0$ on $S_{\theta}$ by unique analytic extension. Now, as $e^{t\mathbf{A}}$ is strongly continuous at $t=0$, we have $u_0 = \lim_{t \rightarrow 0^+} e^{t\mathbf{A}} u_0 = \lim_{t \rightarrow 0^+}f(t) = 0$. Thus the null space of $e^{z_0\mathbf{A}}$ is trivial. \end{proof} \begin{remark} The injectivity in Proposition~\ref{inj-prop} was claimed in \cite{Sho74} for $z>0$, $\theta\le \pi/4$ and $B$ a Hilbert space (but not quite obtained, as noted in \cite[Rem.~1]{ChJo18ax}; cf.\ the details in Lemma 3.1 and Remark 3 in \cite{18logconv}). A local version for the Laplacian on ${\mathbb{R}}^{n}$ was given by Rauch \cite[Cor.~4.3.9]{Rau91}. \end{remark} As a consequence of the above injectivity, for an \emph{analytic} semigroup $e^{t\mathbf{A}}$ we may consider its inverse that, consistently with the case in which $e^{t\mathbf{A}}$ forms a group in $\mathbb{B}(B)$, may be denoted for $t>0$ by $e^{-t\mathbf{A}} = (e^{t\mathbf{A}})^{-1}$. Clearly $e^{-t\mathbf{A}}$ maps $D(e^{-t\mathbf{A}})=R(e^{t\mathbf{A}})$ bijectively onto $B$, and it is an unbounded, but closed operator in $B$. Specialising to a Hilbert space $B=H$, also $(e^{t\mathbf{A}})^*=e^{t\mathbf{A}^*}$ is analytic, so $Z(e^{t\mathbf{A}^*})=\{0\}$ holds for its null space by Proposition~\ref{inj-prop}; whence $D(e^{-t\mathbf{A}})$ is dense in $H$. Some further basic properties are: \begin{proposition}{\cite[Prop.\;2]{ChJo18ax}} \label{inverse-prop} The above inverses $e^{-t\mathbf{A}}$ form a semigroup of unbounded operators in $H$, \begin{equation} e^{-s\mathbf{A}}e^{-t\mathbf{A}}= e^{-(s+t)\mathbf{A}} \qquad \text{for $t, s\ge0$}. \end{equation} This extends to $(s,t)\in{\mathbb{R}}\times \,]-\infty,0]$, whereby $e^{-(t+s)\mathbf{A}}$ may be unbounded for $t+s>0$.
Moreover, as unbounded operators the $e^{-t\mathbf{A}}$ commute with $e^{s \mathbf{A}}\in {\mathbb{B}}(H)$, that is, $e^{s \mathbf{A}}e^{-t\mathbf{A}}\subset e^{-t\mathbf{A}}e^{s\mathbf{A}}$ for $t,s\ge0$, and have a descending chain of domains, $H\supset D(e^{-t\mathbf{A}}) \supset D(e^{-t'\mathbf{A}})$ for $0<t<t'$. \end{proposition} \begin{remark} \label{domain-rem} The above domains serve as basic structures for the final value problem \eqref{heat-intro}. They apply for $\mathbf{A}=-A$ that generates an analytic semigroup $e^{-zA}$ in ${\mathbb{B}}(H)$ defined in $S_{\theta}$ for $\theta=\operatorname{arccot}(C_3/C_4)>0$. Indeed, this was shown in \cite[Lem.\ 4]{ChJo18ax} with a concise argument using $V$-ellipticity of $A$; the $V$-coercive case follows easily from this via the formula $e^{-zA}=e^{kz}e^{-z(A+kI)}$ that results for $z\ge 0$ from the translation trick after Proposition~\ref{Pazy'-prop}; and then it defines $e^{-zA}$ by the right-hand side for every $z\in S_\theta$. (A rather more involved argument was given in \cite[Thm.\ 7.2.7]{Paz83} in a context of uniformly strongly elliptic differential operators.) \end{remark} \section{Proof of Theorem~\ref{intro-thm}} \label{aproof-sect} To clarify a redundancy in the set-up, it is remarked here that in Proposition~\ref{LiMa-prop} the solution space $X$ is a Banach space, which can have its norm in \eqref{eq:X} rewritten in the following form, using the Sobolev space $H^1(0,T;V^*)=\Set{u\in L_2(0,T;V^*)}{\partial_t u\in L_2(0,T;V^*)}$, \begin{align} \label{eq:Xnorm} \|u\|_{X} = \big(\|u\|^2_{L_2(0,T;V)} + \sup_{0 \leq t \leq T}|u(t)|^2 + \|u\|^2_{H^1(0,T;V^*)}\big)^{1/2}. \end{align} Here there is a well-known inclusion $L_2(0,T;V)\cap H^1(0,T;V^*)\subset C([0,T];H)$ and an associated Sobolev inequality for vector functions (\cite{ChJo18ax} has an elementary proof) \begin{equation} \label{Sobolev-ineq} \sup_{0\le t\le T}| u(t)|^2\le (1+\frac{C_2^2}{C_1^2T})\int_0^T \|u(t)\|^2\,dt+\int_0^T \|u'(t)\|_*^2\,dt. \end{equation} Hence one can safely omit the space $C([0,T];H)$ in \eqref{eq:X} and remove $\sup_{[0,T]}|u|$ from $\| \cdot\|_{X}$. Similarly $\int_0^T\|u(t)\|_*^2\,dt$ is redundant in \eqref{eq:X} because $\|\cdot\|_*\le C_2\|\cdot\|$, so an equivalent norm on $X$ is given by \begin{equation} \label{Xnorm-id} {|\hspace{-1.6pt}|\hspace{-1.6pt}|} u{|\hspace{-1.6pt}|\hspace{-1.6pt}|}_X = \big(\int_0^T \|u(t)\|^2\,dt +\int_0^T \|u'(t)\|_*^2\,dt\big)^{1/2}. \end{equation} Thus $X$ is more precisely a Hilbertable space, as $V^*$ is so. But the form given in \eqref{eq:X} is preferred in order to emphasize the properties of the solutions. As a note on the equation $u'+Au=f$ with $u\in X$, the continuous function $u\colon[0,T]\to H$ fulfils $u(t)\in V$ for a.e.\ $t\in\,]0,T[\,$, so the extension $A\in {\mathbb{B}}(V,V^*)$ applies for a.e.\ $t$. Hence $Au(t)$ belongs to $L_2(0,T;V^*)$. \subsection{Concerning Proposition~\ref{LiMa-prop}} The existence and uniqueness statements in Proposition~\ref{LiMa-prop} are essentially special cases of the classical theory of Lions and Magenes, cf.\ \cite[Sect.~3.4.4]{LiMa72} on $t$-dependent $V$-elliptic forms $a(t;u,v)$. 
Indeed, because of the fixed final time $T\in\,]0,\infty[\,$, their indicated extension to $V$-coercive forms works well here: since $u\mapsto e^{\pm tk}u$ and $f\mapsto e^{\pm tk}f$ are bijections on $L_2(0,T;V)$ and $L_2(0,T;V^*)$, respectively, the auxiliary problem $v'+(A+kI)v=e^{-kt}f$, $v(0)=u_0$ has a solution $v\in X$ according to the statement for the $V$-elliptic operator $A+kI$ in \cite[Sect.~3.4.4]{LiMa72}, when $k$ is the coercivity constant in \eqref{coerciv-id}; and since multiplication by the scalar $e^{kt}$ commutes with $A$ for each $t$, it follows from the Leibniz rule in ${\cal D}'(0,T;V^*)$ that the function $u(t)=e^{kt}v(t)$ is in $X$ and satisfies \begin{equation} u'+Au=f,\qquad u(0)=u_0. \end{equation} Moreover, the uniqueness of a solution $u\in X$ follows from that of $v$, for if $u'+Au=0$, $u(0)=0$, then it is seen at once that $v=e^{-kt}u$ solves $v'+(A+kI)v=0$, $v(0)=0$; so that $v\equiv0$, hence $u\equiv0$. In the $V$-elliptic case, the well-posedness in Proposition~\ref{LiMa-prop} is a known corollary to the proofs in \cite{LiMa72}. For coercive $A$, the above exponential factors should also be handled, which can be done explicitly using \begin{lemma}[Gr{\"o}nwall] \label{Gronwall-lem} When $\varphi$, $k$ and $E$ are positive Borel functions on $[0,T]$, and $E(t)$ is increasing, then validity on $[0,T]$ of the first of the following inequalities implies that of the second: \begin{equation} \varphi(t)\le E(t)+\int_0^t k(s)\varphi(s)\,ds, \qquad \varphi(t)\le E(t)\cdot\exp\Big(\int_0^t k(s)\,ds\Big). \end{equation} \end{lemma} The reader is referred to the proof of Lemma 6.3.6 in \cite{H97}, which actually covers the slightly sharper statement above. Using this, one finds in a classical way a detailed estimate on each subinterval $[0,t]$: \begin{proposition} \label{pest-prop} The unique solution $u\in X$ of \eqref{ivA-intro}, cf.\ Proposition~\ref{LiMa-prop}, fulfils in terms of the boundedness and coercivity constants $C_3$, $C_4$ and $k$ of $a(\cdot,\cdot)$ that for $0\leq t\leq T$, \begin{equation} \label{u-est} \begin{split} \int_0^t \|u(s)\|^2 \,ds +\sup_{0\le s\le t}|u(s)|^2 &+ \int_0^t \|u'(s) \|_{*}^2 \,ds \\ &\leq (2+ \frac{2C_3^2+C_4+1}{C_4^2}e^{2kt})\big(C_4|u_0|^2 + \int_0^t\|f(s)\|_{*}^2\,ds\big). \end{split} \end{equation} For $t=T$, this entails boundedness $ L_2(0,T;V^*)\oplus H\to X $ of the solution operator $(f,u_0)\mapsto u$. \end{proposition} \begin{proof} As $u \in L_2(0,T;V)$, while $f$ and $A u$ and hence also $u'=f-Au$ belong to the dual space $L_2(0,T;V^*)$, one has in $L_1(0,T)$ the identity \begin{equation} \Re\dual{\partial_t u}{u} + \Re a(u,u) = \Re\dual{f}{u}. \end{equation} Here a classical regularisation yields $\partial_t |u|^2=2\Re\dual{\partial_t u}{u}$, cf.\ \cite[Lem.\ III.1.2]{Tem84} or \cite[Lem.\ 2]{ChJo18ax}, so by Young's inequality and the $V$-coercivity, \begin{align} \partial_t |u|^2 + 2(C_4 \|u\|^2 -k|u|^2)\leq 2 |\dual{f}{u}| \leq C_4^{-1} \|f\|_{*}^2 + C_4 \|u\|^2. \end{align} Integration of this yields, since $|u|^2$ and $\partial_t |u|^2= 2\Re\dual{\partial_t u}{u}$ are in $L_1(0,T)$, \begin{align} \label{VH-est} |u(t)|^2 + C_4 \int_0^t \|u(s)\|^2 \,ds \leq |u_0|^2 + C_4^{-1}\int_0^t\|f(s)\|_{*}^2\,ds +2k\int_0^t |u(s)|^2\,ds.
\end{align} Ignoring the second term on the left-hand side, it follows from Lemma~\ref{Gronwall-lem} that, for $0\le t\le T$, \begin{align} \label{H-est} |u(t)|^2 \leq \Big(|u_0|^2 + C_4^{-1}\int_0^t\|f(s)\|_{*}^2\,ds\Big)\cdot\exp(2kt); \end{align} and since the right-hand side is increasing, one even has \begin{align} \label{H-est'} \sup_{0\le s\le t} |u(s)|^2 \leq \Big(|u_0|^2 + C_4^{-1}\int_0^t\|f(s)\|_{*}^2\,ds\Big)\cdot\exp(2kt). \end{align} In addition it follows in a crude way, from \eqref{VH-est} and an integrated version of \eqref{H-est}, that \begin{equation} \label{V-est} \begin{split} C_4\int_0^t \|u(s)\|^2 \,ds &\leq \big(|u_0|^2 + C_4^{-1}\int_0^t\|f(s)\|_{*}^2\,ds\big) \big(1+\int_0^t (e^{2ks})'\,ds\big) \\ &= e^{2kt}\big(|u_0|^2 + C_4^{-1}\int_0^t\|f(s)\|_{*}^2\,ds\big). \end{split} \end{equation} Moreover, as $u$ solves \eqref{ivA-intro}, it is clear that $\|\partial_t u \|_{*}^2 \leq (\|f\|_{*} + \|A u \|_{*})^2\leq 2\|f\|_{*}^2 + 2\|A u \|_{*}^2$, and since $\|A\|\leq C_3$ holds for the norm in ${\mathbb{B}}(V,V^*)$, the above estimates entail \begin{equation} \label{V*-est} \begin{split} \int_0^t \|u'(s) \|_{*}^2 \,ds &\leq 2 \int_0^t \|f(s)\|_{*}^2 \,ds + 2 C_3^2 \int_0^t \|u(s)\|^2 \,ds \\ &\leq 2(C_4+ \frac{C_3^2}{C_4} e^{2kt})\big(|u_0|^2 + C_4^{-1}\int_0^t\|f(s)\|_{*}^2\,ds\big). \end{split} \end{equation} Finally the stated estimate \eqref{u-est} follows from \eqref{H-est'}, \eqref{V-est} and \eqref{V*-est}. \end{proof} \subsection{On the proof of the Duhamel formula} As a preparation, a small technical result is recalled from Proposition 3 in \cite{ChJo18ax}, where a detailed proof can be found: \begin{lemma} \label{Leibniz-lem} When $\mathbf{A}$ generates an analytic semigroup on the complex Banach space $B$ and $w\in H^1(0,T;B)$, then the Leibniz rule \begin{equation} \partial_t e^{(T-t)\mathbf{A}}w(t)= (-\mathbf{A})e^{(T-t)\mathbf{A}}w(t)+e^{(T-t)\mathbf{A}}\partial_t w(t) \end{equation} is valid in ${\cal D}'(0,T;B)$. \end{lemma} In Proposition~\ref{Duhamel-prop}, equation \eqref{u-id} is of course just the Duhamel formula from analytic semigroup theory. However, since $X$ also contains non-classical solutions, \eqref{u-id} requires a proof in the present context---but as noted, it suffices just to reinforce the classical argument by the injectivity of $e^{-tA}$ in Proposition~\ref{inj-prop}: \begin{proof}[Proof of Proposition~\ref{Duhamel-prop}] To address the last statement first, once \eqref{u-id} has been shown, Proposition~\ref{LiMa-prop} yields $e^{-tA}u_0\in X$ for $f=0$. For general $(f,u_0)$ one has $u\in X$, so the term containing $f$, being the difference of $u$ and $e^{-tA}u_0$, also belongs to $X$. To obtain \eqref{u-id} in the above set-up, note that all terms in $\partial_t u+A u=f$ are in $L_2(0,T;V^*)$. Therefore $e^{-(T-t)A}$ applies for a.e.\ $t\in[0,T]$ to both sides as an integrating factor, so as an identity in $L_2(0,T;V^*)$, \begin{equation} \partial_t(e^{-(T-t)A}u(t))=e^{-(T-t)A}\partial_t u(t)+ e^{-(T-t)A}A u(t)=e^{-(T-t)A}f(t). \end{equation} Indeed, on the left-hand side $e^{-(T-t)A}u(t)$ is in $L_1(0,T;V^*)$ and its derivative in ${\cal D}'(0,T;V^*)$ follows the Leibniz rule in Lemma~\ref{Leibniz-lem}, since $u\in H^1(0,T; V^*)$ as a member of $X$. As $C([0,T];H)\subset L_2(0,T;V^*)\subset L_1(0,T;V^*)$, it is seen in the above that $e^{-(T-t)A}u(t)$ and $e^{-(T-t)A}f(t)$ both belong to $L_1(0,T;V^*)$.
So when the Fundamental Theorem for vector functions (cf.\ \cite[Lem.\ III.1.1]{Tem84}, or \cite[Lem.\ 1]{ChJo18ax}) is applied and followed by use of the semigroup property and a commutation of $e^{-(T-t)A}$ with the integral, using Bochner's identity, cf.\ Remark~\ref{Bochner-rem} below, one finds that \begin{equation} \label{eq:identityT} \begin{split} e^{-(T-t)A}u(t)&=e^{-TA}u_0+\int_0^t e^{-(T-s)A}f(s)\,ds \\ &=e^{-(T-t)A}e^{-tA}u_0+e^{-(T-t)A}\int_0^t e^{-(t-s)A}f(s)\,ds. \end{split} \end{equation} Since $e^{-(T-t)A}$ is linear and injective, cf.~Proposition~\ref{inj-prop}, equation \eqref{u-id} now results at once. \end{proof} \begin{remark} \label{Bochner-rem} It is recalled that for $f\in L_1(0,T; B)$, where $B$ is a Banach space, it is a basic property that for every functional $\varphi$ in the dual space $B'$, one has Bochner's identity: $\dual{\int_0^T f(t)\,dt}{\varphi}= \int_0^T\dual{ f(t)}{\varphi}\,dt$. \end{remark} \subsection{Concerning Theorem~\ref{intro-thm}} As all terms in \eqref{u-id} are in $C([0,T];H)$, it is safe to evaluate at $t=T$, which in view of \eqref{yf-eq} gives that $u(T)=e^{-TA}u(0)+y_f$. This is the flow map \begin{equation} \label{flow-id} u(0)\mapsto u(T). \end{equation} Owing once again to the injectivity of $e^{-TA}$, and since Duhamel's formula implies that $u(T)-y_f=e^{-TA}u(0)$, which clearly belongs to $D(e^{TA})$, this flow is inverted by \begin{equation} \label{u0uT-id} u(0)=e^{TA}(u(T)-y_f). \end{equation} In other words, not only are the solutions in $X$ to $u'+Au=f$ parametrised by the initial states $u(0)$ in $H$ (for fixed $f$) according to Proposition~\ref{LiMa-prop}, but also the final states $u(T)$ are parametrised by the $u(0)$. Departing from this observation, one may give an intuitive \begin{proof}[Proof of Theorem~\ref{intro-thm}] If \eqref{fvA-intro} is solved by $u \in X$, then $u(T)=u_T$ is reached from the unique initial state $u(0)$ in \eqref{u0uT-id}. But the argument for \eqref{u0uT-id} showed that $u_T-y_f = e^{-TA} u(0)\in D(e^{TA})$, so \eqref{eq:cc-intro} is necessary. Given data $(f,u_T)$ that fulfil \eqref{eq:cc-intro}, the vector $u_0 = e^{TA}(u_T - y_f)$ is well defined in $H$, so Proposition~\ref{LiMa-prop} yields a function $u\in X$ solving $u' +Au = f$ and $u(0)=u_0$. By the flow \eqref{flow-id}, this $u(t)$ has final state $u(T)=e^{-TA}e^{TA}(u_T-y_f)+y_f=u_T$, hence satisfies both equations in \eqref{fvA-intro}. Thus \eqref{eq:cc-intro} suffices for solvability. In the affirmative case, \eqref{eq:fvp_solution} results for any solution $u\in X$ by inserting formula \eqref{u0uT-id} for $u(0)$ into \eqref{u-id}. Uniqueness of $u$ in $X$ is seen from the right-hand side of \eqref{eq:fvp_solution}, where all terms depend only on the given $f$, $u_T$, $A$ and $T>0$. That each term in \eqref{eq:fvp_solution} is a function belonging to $X$ was seen in Proposition~\ref{Duhamel-prop}. Moreover, the solution can be estimated in $X$ by substituting the expression \eqref{u0uT-id} for $u_0$ into the inequality in Proposition~\ref{pest-prop} for $t=T$. For the norm in \eqref{Xnorm-id} this gives \begin{equation} \begin{split} {|\hspace{-1.6pt}|\hspace{-1.6pt}|} u{|\hspace{-1.6pt}|\hspace{-1.6pt}|}_X^2 &\leq (2+ \frac{2C_3^2+C_4+1}{C_4^2}e^{2kT})\max(C_4,1)\big(|u_0|^2 + \int_0^T\|f(s)\|_{*}^2\,ds\big) \\ &\le c(|e^{TA}(u_T-y_f)|^2+\|f\|_{L_2(0,T;V^*)}^2). \end{split} \end{equation} Here one may add $|u_T|^2$ on the right-hand side to arrive at the expression for $\|(f,u_T)\|_Y$ in Theorem~\ref{intro-thm}.
\end{proof} \begin{remark} It is easy to see from the definitions and proofs that ${\cal P}u=(\partial_t u+ Au, u(T))$ is a bounded operator $X\to Y$. The statement in Theorem~\ref{intro-thm} means that the solution operator ${\cal R}(f,u_T)=u$ (is well defined and) satisfies ${\cal P}{\cal R}=I$, but by the uniqueness also ${\cal R}{\cal P}=I$ holds. Hence ${\cal R}$ is a linear homeomorphism $Y\to X$. \end{remark} \section{The heat problem with the Neumann condition} \label{Neu-sect} In the sequel $\Omega$ stands for a $C^\infty$-smooth, open bounded set in ${\mathbb{R}}^{n}$, $n\ge2$, as described in \cite[App.~C]{G09}. In particular $\Omega$ is locally on one side of its boundary $\Gamma=\partial \Omega$. For such $\Omega$, the problem is to characterise the $u(t,x)$ satisfying \begin{equation} \label{heatN_fvp} \left. \begin{aligned} \partial_tu(t,x) -\Delta u(t,x) &= f(t,x) &&\text{ in } \, ]0,T[ \times \Omega \\ \gamma_1 u(t,x) &= 0 && \text{ on } \, ]0,T[\, \times \Gamma \\ r_T u(x) &= u_T(x) && \text{ at } \left\{ T \right\} \times \Omega \end{aligned} \right\} \end{equation} While $r_Tu(x)=u(T,x)$, the Neumann trace on $\Gamma$ is written in the operator notation $\gamma_1u= (\nu\cdot\nabla u)|_{\Gamma}$, where $\nu$ is the outward-pointing normal vector at $x\in\Gamma$. Similarly $\gamma_1$ is used for traces on $\, ]0,T[\, \times \Gamma$. Moreover, $H^m(\Omega)$ denotes the usual Sobolev space that is normed by $\|u\|_m = \big(\sum_{|\alpha|\le m}\int_\Omega |\partial^\alpha u|^2\,dx\big)^{1/2}$, which up to equivalent norms equals the space $H^{m}(\overline{\Omega})$ of restrictions to $\Omega$ of $H^{m}({\mathbb{R}}^{n})$ endowed with the infimum norm. Correspondingly the dual of e.g.\ $H^1(\overline{\Omega})$ has an identification with the closed subspace of $H^{-1}({\mathbb{R}}^{n})$ given by the support condition in \begin{equation} H^{-1}_0(\overline{\Omega})=\Set{ u\in H^{-1}({\mathbb{R}}^{n})}{\operatorname{supp} u\subset \overline{\Omega}}. \end{equation} For these matters the reader is referred to \cite[App.\ B.2]{H}. Chapter 6 and (9.25) in \cite{G09} could also serve as references for this and basic facts on boundary value problems; cf.\ also \cite{Eva10, Rau91}. The main result in Theorem~\ref{intro-thm} applies to \eqref{heatN_fvp} for $V= H^1(\overline\Omega)$, $H = L_2(\Omega)$ and $V^* \simeq H^{-1}_0(\overline{\Omega})$, for which there are inclusions $H^1(\overline\Omega)\subset L_2(\Omega)\subset H^{-1}_0(\overline{\Omega})$, when each $g\in L_2(\Omega)$ is identified, via $e_\Omega$ (extension by zero outside $\Omega$), with $e_\Omega g$ belonging to $H^{-1}_0(\overline{\Omega})$. The Dirichlet form \begin{align} \label{sform-id} s(u,v) = \sum_{j=1}^n \scal{\partial_j u}{\partial_j v}_{L_2(\Omega)} = \sum_{j=1}^n \int_\Omega {\partial_j u}\overline{\partial_j v}\, dx \end{align} satisfies $|s(v,w)|\le \|v\|_1\|w\|_1$, and the coercivity in \eqref{coerciv-id} holds for $C_4=1$, $k=1$ since $s(v,v)=\|v\|_1^2-\|v\|_0^2$. The induced Lax--Milgram operator is the Neumann realisation $-\!\operatorname{\Delta}_N$, which is self-adjoint due to the symmetry of $s$ and has its domain given by $D(\operatorname{\Delta}_N)=\Set{u\in H^2(\Omega)}{\gamma_1 u=0}$. This is a classical but non-trivial result (cf.\ the remarks prior to Theorem 4.28 in \cite{G09}, or Section 11.3 ff.\ there; or \cite{Rau91}). Thus the homogeneous boundary condition is imposed via the condition $u(t)\in D(\operatorname{\Delta}_N)$ for $0<t<T$.
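To indicate concretely why the non-injectivity of $A=-\!\operatorname{\Delta}_N$ is harmless for the semigroup, one may use the eigenfunction expansion available in this self-adjoint case (a mere illustration of the general theory): denoting by $0=\mu_1<\mu_2\le\mu_3\le\dots$ the Neumann eigenvalues with $L_2(\Omega)$-orthonormal eigenfunctions $\varphi_j$, whereby $\varphi_1$ is constant for connected $\Omega$, one has \begin{equation} e^{t\operatorname{\Delta}_N}u_0=\scal{u_0}{\varphi_1}\varphi_1 +\sum_{j\ge2} e^{-t\mu_j}\scal{u_0}{\varphi_j}\varphi_j . \end{equation} The null space $\C1_\Omega$ of $A$ contributes only the first term, on which the semigroup acts as the identity for all $t$; and since no factor $e^{-t\mu_j}$ vanishes, no mode is annihilated, in accordance with the injectivity in Proposition~\ref{inj-prop}. The flow thus conserves the mean value of $u_0$, whilst all other modes decay.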
By the coercivity, $-A = \operatorname{\Delta}_N$ generates an analytic semigroup of injections $e^{z\operatorname{\Delta}_N}$ in ${\mathbb{B}}(L_2(\Omega))$, and the bounded extension $\tilde\operatorname{\Delta}\colon H^{1}(\overline\Omega) \rightarrow H^{-1}_0(\overline{\Omega})$ induces the analytic semigroup $e^{z\tilde\operatorname{\Delta}}$ on $H^{-1}_0(\overline{\Omega})$; both are defined for $z\in S_{\pi/4}$. As previously, $(e^{t\operatorname{\Delta}_N})^{-1} = e^{-t\operatorname{\Delta}_N}$. The action of $\tilde \operatorname{\Delta}$ is (slightly surprisingly) given by $\tilde\operatorname{\Delta} u=\operatorname{div}(e_\Omega\operatorname{grad} u)$ for each $u\in H^{1}(\overline\Omega)$; indeed, when $w\in H^{1}({\mathbb{R}}^{n})$ coincides with $v$ in $\Omega$, then \eqref{sform-id} gives \begin{equation} \begin{split} \Dual{-\tilde\operatorname{\Delta} u}{v}=s(u,v) & = \sum_{j=1}^n \int_{{\mathbb{R}}^{n}} e_\Omega(\partial_j u)\cdot\overline{\partial_j w}\, dx \\ & =\sum_{j=1}^n\Dual{-\partial_j(e_\Omega\partial_j u)}{w}_{H^{-1}({\mathbb{R}}^{n})\times H^{1}({\mathbb{R}}^{n})} \\ &=\Dual{\sum_{j=1}^n-\partial_j(e_\Omega\partial_j u)}{v}_{H^{-1}_0(\overline{\Omega})\times H^{1}(\overline\Omega)}. \end{split} \end{equation} To make a further identification one may recall the formula $\partial_j(u\chi_\Omega)=(\partial_j u)\chi_\Omega-\nu_j(\gamma_0u)dS$, valid for $u\in C^{1}({\mathbb{R}}^{n})$ when $\chi_\Omega$ denotes the characteristic function of $\Omega$, and $\gamma_0$, $S$ the restriction to $\Gamma$ and the surface measure at $\Gamma$, respectively; a proof is given in \cite[Thm.\ 3.1.9]{H}. Replacing $u$ by $\partial_j u$ for some $u\in C^{2}(\overline\Omega)$, and using that $\nu(x)$ is a smooth vector field around $\Gamma$, we obtain that $\partial_j(e_\Omega\partial_ju)=e_\Omega(\partial_j^{2} u)-(\gamma_0\nu_j\partial_j u)dS$. This now extends to all $u\in H^{2}(\overline\Omega)$ by density and continuity, and by summation one finds that in ${\cal D}'({\mathbb{R}}^{n})$, \begin{equation} \label{tlap-id} \tilde\operatorname{\Delta} u=\operatorname{div}(e_\Omega\operatorname{grad} u)=e_\Omega(\operatorname{\Delta} u)-(\gamma_1 u)dS. \end{equation} Clearly the last term vanishes for $u\in D(\operatorname{\Delta}_N)$; whence $\operatorname{div}(e_\Omega\operatorname{grad} u)$ identifies in $\Omega$ with the $L_2$-function $\operatorname{\Delta} u$ for such $u$. But for general $u$ in the form domain $H^{1}(\overline\Omega)$, neither of the terms on the right-hand side makes sense. The solution space for \eqref{heatN_fvp} amounts to \begin{equation} \begin{split} X_0 &= L_2(0,T;H^1(\Omega)) \bigcap C([0,T]; L_2(\Omega)) \bigcap H^1(0,T; H^{-1}_0(\overline{\Omega})), \label{X0-id} \\ \|u\|_{X_0}&= \Big(\int_0^T\|u(t)\|^2_{H^{1}(\Omega)}\,dt \\ &\hphantom{= \Big(\int_0^T\|u(t)\|^2} +\sup_{t\in[0,T]}\int_\Omega |u(x,t)|^2\,dx+ \int_0^T\|\partial_t u(t)\|^2_{H^{-1}_0(\overline{\Omega})}\,dt \Big)^{1/2}. \end{split} \end{equation} The corresponding data space is here given in terms of $y_f=\int_0^T e^{(T-t)\tilde\operatorname{\Delta}}f(t)\,dt$, cf.\ \eqref{yf-eq}, as \begin{equation} \begin{split} Y_0&= \left\{ (f,u_T) \in L_2(0,T;H^{-1}_0(\overline{\Omega})) \oplus L_2(\Omega) \Bigm| u_T - y_f \in D(e^{-T\operatorname{\Delta}_N}) \right\}, \label{Y0-id} \\ \| (f,u_T) \|_{Y_0} &= \Big(\int_0^T\|f(t)\|^2_{H^{-1}_0(\overline{\Omega})}\,dt \\ &\hphantom{= \Big(\int_0^T\|u(t)\|^2}+ \int_\Omega\big(|u_T(x)|^2+|e^{-T\operatorname{\Delta}_N}(u_T - y_f )(x)|^2\big)\,dx\Big)^{1/2}.
\end{split} \end{equation} With this framework, Theorem~\ref{intro-thm} at once gives the following new result on a classical problem: \begin{theorem} \label{heatN-thm} Let $A=-\!\operatorname{\Delta}_N$ be the Neumann realisation of the Laplacian in $L_2(\Omega)$ and $-\tilde\operatorname{\Delta}=-\operatorname{div}(e_\Omega\operatorname{grad}\cdot)$ its extension $H^{1}(\overline{\Omega})\to H^{-1}_0(\overline{\Omega})$. When $u_T \in L_2(\Omega)$ and $f \in L_2(0,T;H^{-1}_0(\overline{\Omega}))$, then there exists a solution $u\in X_0$ of \begin{equation} \partial_t u-\operatorname{div}(e_\Omega\operatorname{grad} u)=f,\qquad r_Tu=u_T \end{equation} if and only if the data $(f,u_T)$ are given in $Y_0$, i.e.\ if and only if \begin{equation} \label{heat-ccc} u_T - \int_0^T e^{(T-s)\tilde\operatorname{\Delta}}f(s) \,ds\quad \text{ belongs to }\quad D(e^{-T \operatorname{\Delta}_N})=R(e^{T\operatorname{\Delta}_N}). \end{equation} In the affirmative case, $u$ is uniquely determined in $X_0$ and satisfies the estimate $\|u\|_{X_0} \leq c \| (f,u_T) \|_{Y_0}$. It is given by the formula, in which all terms belong to $X_0$, \begin{equation} u(t) = e^{t\operatorname{\Delta}_N}e^{-T\operatorname{\Delta}_N}\Big(u_T-\int_0^T e^{(T-s)\tilde\operatorname{\Delta}}f(s)\,ds\Big) + \int_0^t e^{(t-s)\tilde\operatorname{\Delta}}f(s) \,ds. \end{equation} Furthermore the difference in \eqref{heat-ccc} equals $e^{T\operatorname{\Delta}_N}u(0)$ in $L_2(\Omega)$. \end{theorem} Besides the fact that $\tilde\operatorname{\Delta}=\operatorname{div}(e_\Omega\operatorname{grad}\cdot)$ appears in the differential equation (instead of $\operatorname{\Delta}$), it is noteworthy that there is no information on the boundary condition. However, there is at least one simple remedy for this, for it is well known in analytic semigroup theory, cf.\ \cite[Thm.\ 4.2.3]{Paz83} and \cite[Cor.\ 4.3.3]{Paz83}, that if the source term $f(t)$ is valued in $H$ and satisfies a global condition of H{\"o}lder continuity, that is, for some $\sigma\in\,]0,1[\,$, \begin{equation} \sup\Set{|f(t)-f(s)|\cdot|t-s|^{-\sigma}}{0\le s<t\le T}<\infty, \end{equation} then the integral in \eqref{u-id} takes values in $D(A)$ for $0<t< T$ and $A\int_0^t e^{-(t-s)A}f(s)\,ds$ is continuous $\,]0,T[\,\to H$. When this is applied in the above framework, the additional H{\"o}lder continuity yields $u(t)\in D(\operatorname{\Delta}_N)=\Set{v\in H^2(\Omega)}{\gamma_1 v=0}$ for $t>0$, so the homogeneous Neumann condition is fulfilled and $\tilde\operatorname{\Delta} u$ identifies with $\operatorname{\Delta} u$, as noted after \eqref{tlap-id}. Therefore one has the following novelty: \begin{theorem} If $u_T\in L_2(\Omega)$ and $f\colon\,[0,T]\to L_2(\Omega)$ is H{\"o}lder continuous of some order $\sigma\in\,]0,1[\,$, and if $u_T-y_f$ fulfils the criterion \eqref{heat-ccc}, then the homogeneous Neumann heat conduction final value problem \eqref{heatN_fvp} has a uniquely determined solution $u$ in $X_0$, satisfying $u(t)\in \Set{v\in H^2(\Omega)}{\gamma_1 v=0}$ for $t>0$, and depending continuously on $(f,u_T)$ in $Y_0$. Hence the problem is well posed in the sense of Hadamard. \end{theorem} It would be desirable, of course, to show the well-posedness in a strong form, with an isomorphism between the data and solution spaces.
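For orientation, the criterion \eqref{heat-ccc} may also be recorded in an elementary spectral form (it is not used in the proofs). Taking an orthonormal basis $(e_j)_{j\ge 0}$ of $L_2(\Omega)$ consisting of Neumann eigenfunctions, $-\!\operatorname{\Delta}_N e_j=\mu_j e_j$ with $0=\mu_0\le\mu_1\le\dots\to\infty$, the condition that $u_T-y_f$ belongs to $D(e^{-T\operatorname{\Delta}_N})$ reads \begin{equation} \sum_{j=0}^{\infty} e^{2T\mu_j}\,\big|\scal{u_T-y_f}{e_j}_{L_2(\Omega)}\big|^{2}<\infty, \end{equation} and in the affirmative case $u(0)=e^{-T\operatorname{\Delta}_N}(u_T-y_f)=\sum_{j\ge0} e^{T\mu_j}\scal{u_T-y_f}{e_j}_{L_2(\Omega)}\,e_j$. This makes explicit that existence requires very rapid decay of the data's coefficients along the high eigenmodes.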
\section{Final remarks} \begin{remark} \label{GS-rem} Grubb and Solonnikov~\cite{GrSo90} systematically treated a large class of \emph{initial}-boundary problems of parabolic pseudo-differential equations and worked out compatibility conditions characterising well-posedness in full scales of anisotropic $L_2$-Sobolev spaces (such conditions have a long history in the differential operator case, going back at least to work of Lions and Magenes \cite{LiMa72} and Ladyzenskaya, Solonnikov and Ural'ceva \cite{LaSoUr68}). Their conditions are explicit and local at the curved corner $\Gamma\times\{0\}$, except for half-integer values of the smoothness $s$ that were shown to require so-called coincidence, which is expressed in integrals over the Cartesian product of the two boundaries $\{0\}\times\Omega$ and $\,]0,T[\,\times\,\Gamma$; hence coincidence is also a non-local condition. Whilst the conditions of Grubb and Solonnikov are decisive for the solution's regularity, condition \eqref{eq:cc-intro} in Theorem~\ref{intro-thm} is in comparison crucial for the \emph{existence} question. \end{remark} \begin{remark} Injectivity of the linear map $u(0)\mapsto u(T)$ for the homogeneous equation $u'+Au=0$, or equivalently its backwards uniqueness, was proved much earlier for problems with $t$-dependent sesquilinear forms $a(t;u,v)$ by Lions and Malgrange~\cite{LiMl60}. In addition to some $C^1$-regularity properties in $t$, they assumed that (the principal part of) $a(t;u,v)$ is symmetric and uniformly $V$-coercive in the sense that $a(t;v,v)+\lambda|v|^2\ge \alpha\|v\|^2$ for certain fixed $\lambda\in{\mathbb{R}}$, $\alpha>0$ and all $v\in V$. In Problem~3.4 of \cite{LiMl60}, they asked whether backward uniqueness can be shown without assuming symmetry (i.e., for non-selfadjoint operators $A(t)$ in the principal case), or more precisely under the hypothesis $\Re a(t;v,v)+\lambda|v|^2\ge \alpha\|v\|^2$. \linebreak[4] The present paper gives an affirmative answer for the $t$-independent case of their problem. \end{remark}
\section{Proofs for \Cref{sec:problem}} \subsection{Gradient norm lower bound for the input to a network block} \begin{proof}[Proof of \Cref{thm:gradient-norm-activation}] We use $f_{i\to j}$ to denote the composition $f_j\circ f_{j-1}\circ \cdots\circ f_i$, so that $\mathbf{z} = f_{i\to L}(\mathbf{x}_{i-1})$ for all $i\in [L]$. Note that $\mathbf{z}$ is p.h. with respect to the input of each network block, i.e. $f_{i\to L}((1+\epsilon)\mathbf{x}_{i-1}) = (1+\epsilon)f_{i\to L}(\mathbf{x}_{i-1})$ for $\epsilon>-1$. This allows us to compute the gradient of the cross-entropy loss with respect to the scaling factor $\epsilon$ at $\epsilon=0$ as \begin{equation} \left.\frac{\partial}{\partial\epsilon} \ell(f_{i\to L}((1+\epsilon)\mathbf{x}_{i-1}), \mathbf{y})\right|_{\epsilon=0} = \frac{\partial\ell}{\partial\mathbf{z}}\frac{\partial f_{i\to L}}{\partial\epsilon} = - \mathbf{y}^T \mathbf{z} + \mathbf{p}^T \mathbf{z} = \ell(\mathbf{z},\mathbf{y}) - H(\mathbf{p}). \end{equation} Since the gradient $L_2$ norm $\left\|\nicefrac{\partial\ell}{\partial\mathbf{x}_{i-1}}\right\|$ is at least the directional derivative $\frac{\partial}{\partial t}\ell(f_{i\to L}(\mathbf{x}_{i-1} + t\frac{\mathbf{x}_{i-1}}{\|\mathbf{x}_{i-1}\|}), \mathbf{y})$, by defining $\epsilon=\nicefrac{t}{\|\mathbf{x}_{i-1}\|}$ we have \begin{equation} \left\|\frac{\partial\ell}{\partial\mathbf{x}_{i-1}}\right\| \ge \frac{\partial}{\partial \epsilon}\ell(f_{i\to L}(\mathbf{x}_{i-1} + \epsilon\mathbf{x}_{i-1}), \mathbf{y})\frac{\partial\epsilon}{\partial t} = \frac{\ell(\mathbf{z},\mathbf{y}) - H(\mathbf{p})}{\|\mathbf{x}_{i-1}\|}. \end{equation} \end{proof} \subsection{Gradient norm lower bound for positively homogeneous sets} \begin{proof}[Proof of \Cref{thm:gradient-norm-weight}] The proof idea is similar. Recall that if $\theta_{ph}$ is a p.h. set, then $\bar{f}^{(m)}(\theta_{ph})\triangleq f(\mathbf{x}^{(m)}; \theta\setminus \theta_{ph}, \theta_{ph})$ is a p.h. function. We therefore have \begin{equation} \left.\frac{\partial}{\partial\epsilon} \ell_{\mathrm{avg}}(\mathcal{D}_M; (1+\epsilon)\theta_{ph})\right|_{\epsilon=0} = \frac{1}{M} \sum_{m=1}^M \frac{\partial\ell}{\partial\mathbf{z}^{(m)}}\frac{\partial \bar{f}^{(m)}}{\partial\epsilon} = \frac{1}{M}\sum_{m=1}^M \ell(\mathbf{z}^{(m)},\mathbf{y}^{(m)}) - H(\mathbf{p}^{(m)}), \end{equation} hence we again invoke the directional derivative argument to show \begin{equation} \left\|\frac{\partial\ell_{\mathrm{avg}}}{\partial\theta_{ph}}\right\| \ge \frac{1}{M\|\theta_{ph}\|} \sum_{m=1}^M \ell(\mathbf{z}^{(m)},\mathbf{y}^{(m)}) - H(\mathbf{p}^{(m)}) \triangleq G(\theta_{ph}). \end{equation} In order to estimate the scale of this lower bound, recall that the FC layer weights are i.i.d. sampled from a symmetric, mean-zero distribution; therefore $\mathbf{z}$ has a symmetric probability density function with mean $\mathbf{0}$. We hence have \begin{equation} \mathbb{E}\ell(\mathbf{z},\mathbf{y}) = \mathbb{E}[-\mathbf{y}^T(\mathbf{z}-\texttt{logsumexp}(\mathbf{z}))] \ge \mathbb{E}[\mathbf{y}^T(\textstyle\max_{i\in[c]}z_i-\mathbf{z})] = \mathbb{E}[\textstyle\max_{i\in[c]}z_i] \end{equation} where the inequality uses the fact that $\texttt{logsumexp}(\mathbf{z})\ge\max_{i\in[c]}z_i$; the last equality is due to $\mathbf{y}$ and $\mathbf{z}$ being independent at initialization and $\mathbb{E}\mathbf{z}=\mathbf{0}$.
Using the trivial bound $\mathbb{E}H(\mathbf{p})\le\log(c)$, we get \begin{equation} \mathbb{E}G(\theta_{ph}) \ge \frac{\mathbb{E}[\max_{i\in[c]}z_i]-\log(c)}{\|\theta_{ph}\|} \end{equation} which shows that the gradient norm of a p.h. set is of the order $\Omega(\mathbb{E}[\max_{i\in[c]}z_i])$ at initialization. \end{proof} \section{Proofs for \Cref{sec:zeroinit}} \subsection{Residual branches update the network in sync} \label{sec:sync-updates} A common theme in previous analysis of residual networks is the scale of activation and gradient \citep{balduzzi2017shattered,yang2017mean,hanin2018start}. However, it is more important to consider the scale of actual change to the network function made by a (stochastic) gradient descent step. If the updates to different layers cancel out each other, the network would be stable as a whole despite drastic changes in different layers; if, on the other hand, the updates to different layers align with each other, the whole network may incur a drastic change in one step, even if each layer only changes a tiny amount. We now provide analysis showing that the latter scenario more accurately describes what happens in reality at initialization. For our result in this section, we make the following assumptions: \begin{itemize}[noitemsep,topsep=0pt,parsep=0pt,partopsep=0pt, leftmargin=25pt] \item $f$ is a sequential composition of network blocks $\{f_i\}_{i=1}^L$, i.e. $f(\mathbf{x}_0) = f_L(f_{L-1}(\dots f_1(\mathbf{x}_0)))$, consisting of fully-connected weight layers, ReLU activation functions and residual branches. \item $f_L$ is a fully-connected layer with weights i.i.d. sampled from a zero-mean distribution. \item There is no bias parameter in $f$. \end{itemize} For $l<L$, let $\mathbf{x}_{l-1}$ be the input to $f_l$ and $F_l(\mathbf{x}_{l-1})$ be a branch in $f_l$ with $m_l$ layers. Without loss of generality, we study the following specific form of network architecture: \begin{align*} F_l(\mathbf{x}_{l-1}) &~=~ (\overbrace{\text{ReLU}\circ W_l^{(m_l)}\circ\dots\circ\text{ReLU}\circ W_l^{(1)}}^{\mathclap{m_l\text{~ReLU}}})(\mathbf{x}_{l-1}), \\ f_l(\mathbf{x}_{l-1}) &~=~ \mathbf{x}_{l-1} + F_l(\mathbf{x}_{l-1}). \end{align*} For the last block we denote $m_L=1$ and $f_L(\mathbf{x}_{L-1}) = F_L(\mathbf{x}_{L-1})=W_L^{(1)}\mathbf{x}_{L-1}$. Furthermore, we always choose $0$ as the gradient of ReLU when its input is $0$. As such, with input $\mathbf{x}$, the output and gradient of $\text{ReLU}(\mathbf{x})$ can be simply written as $D_{\mathbbm{1}[\mathbf{x}>0]}\mathbf{x}$, where $D_{\mathbbm{1}[\mathbf{x}>0]}$ is a diagonal matrix with diagonal entries corresponding to $\mathbbm{1}[\mathbf{x}>0]$. Denote the preactivation of the $i$-th layer (i.e. the input to the $i$-th ReLU) in the $l$-th block by $\mathbf{x}_l^{(i)}$. 
We define the following terms to simplify our presentation: \begin{align*} F_l^{(i-)}~&\triangleq~ D_{\mathbbm{1}[\mathbf{x}_l^{(i-1)}>0]}W_l^{(i-1)}\cdots D_{\mathbbm{1}[\mathbf{x}_l^{(1)}>0]}W_l^{(1)}\mathbf{x}_{l-1}, \quad l<L, i\in[m_l] \\ F_l^{(i+)}~&\triangleq~ D_{\mathbbm{1}[\mathbf{x}_l^{(m_l)}>0]}W_l^{(m_l)}\cdots D_{\mathbbm{1}[\mathbf{x}_l^{(i)}>0]}, \quad l<L, i\in[m_l]\\ F_L^{(1-)}~&\triangleq~ \mathbf{x}_{L-1} \\ F_L^{(1+)}~&\triangleq~ I \end{align*} We have the following result on the gradient update to $f$: \begin{theorem} With the above assumptions, suppose we update the network parameters by $\Delta\theta = -\eta\frac{\partial}{\partial\theta} \ell(f(\mathbf{x}_0;\theta),\mathbf{y})$, then the update to the network output $\Delta f(\mathbf{x}_0)\triangleq f(\mathbf{x}_0;\theta+\Delta\theta) - f(\mathbf{x}_0;\theta)$ is \begin{equation} \Delta f(\mathbf{x}_0) = -\eta\sum_{l=1}^{L}\left[\sum_{i=1}^{m_l}\overbrace{\left\|F_l^{(i-)}\right\|^2 \left(\frac{\partial f}{\partial\mathbf{x}_l}\right)^T F_l^{(i+)}\left(F_l^{(i+)}\right)^T\left(\frac{\partial f}{\partial\mathbf{x}_l}\right)}^{\mathclap{\triangleq J_l^i}}\right]\frac{\partial\ell}{\partial\mathbf{z}} + O(\eta^2), \end{equation} where $\mathbf{z}\triangleq f(\mathbf{x}_0)\in\mathbb{R}^c$ denotes the logits. \end{theorem} Let us discuss the implication of this result before delving into the proof. As each $J_l^i$ is a $c\times c$ real symmetric positive semi-definite matrix, the trace norm of each $J_l^i$ equals its trace. Similarly, the trace norm of $J\triangleq\sum_l\sum_i J_l^i$ equals the trace of the sum of all $J_l^i$ as well, which scales linearly with the number of residual branches $L$. Since the output $\mathbf{z}$ has no (or little) correlation with the target $\mathbf{y}$ at the start of training, $\frac{\partial\ell}{\partial\mathbf{z}}$ is a vector of some random direction. It then follows that the expected update scale is proportional to the trace norm of $J$, which is proportional to $L$ as well as the average trace of $J_l^i$. Simply put, to allow the whole network to be updated by $\Theta(\eta)$ per step independent of depth, we need to ensure that each residual branch contributes only a $\Theta(\eta/L)$ update on average. \begin{proof} The first insight to prove our result is to note that, conditioning on a specific input $\mathbf{x}_0$, we can replace each ReLU activation layer by a diagonal matrix without changing the forward and backward pass. (In fact, this is valid even after we apply a gradient descent update, as long as the learning rate $\eta>0$ is sufficiently small so that all positive preactivations remain positive. This observation will be essential for our later analysis.) The gradient w.r.t. the $i$-th weight layer in the $l$-th block is thus \begin{equation} \label{eq:gradient-W} \frac{\partial\ell}{\partial \text{Vec}(W_l^{(i)})} = \frac{\partial\mathbf{x}_l}{\partial \text{Vec}(W_l^{(i)})}\cdot \frac{\partial f}{\partial\mathbf{x}_l}\cdot \frac{\partial\ell}{\partial\mathbf{z}} = \left(F_l^{(i-)}\otimes I_l^{(i)}\right)\left(F_l^{(i+)}\right)^T\frac{\partial f}{\partial\mathbf{x}_l}\cdot \frac{\partial\ell}{\partial\mathbf{z}}, \end{equation} where $\otimes$ denotes the Kronecker product. The second insight is to note that with our assumptions, a network block and its gradient w.r.t. its input have the following relation: \begin{equation} \label{eq:block-gradient-relation} f_l(\mathbf{x}_{l-1}) = \frac{\partial f_l}{\partial\mathbf{x}_{l-1}}\cdot\mathbf{x}_{l-1}.
\end{equation} We then plug in \Cref{eq:gradient-W} to the gradient update $\Delta\theta = -\eta\frac{\partial}{\partial\theta} \ell(f(\mathbf{x}_0;\theta),\mathbf{y})$, and recalculate the forward pass $f(\mathbf{x}_0;\theta+\Delta\theta)$. The theorem follows by applying \Cref{eq:block-gradient-relation} and a first-order Taylor series expansion in a small neighborhood of $\eta=0$ where $f(\mathbf{x}_0;\theta+\Delta\theta)$ is smooth w.r.t. $\eta$. \end{proof} \subsection{What scalar branch has $\Theta(\eta/L)$ updates?} \label{sec:scalar-branch} For this section, we focus on the proper initialization of a scalar branch $F(x) = (\prod_{i=1}^m a_i)x$. We have the following result: \begin{theorem} Assuming $\forall i, a_i\ge 0, x=\Theta(1)$ and $\frac{\partial\ell}{\partial F(x)}=\Theta(1)$, then $\Delta F(x)\triangleq F(x;\theta-\eta\frac{\partial\ell}{\partial\theta}) - F(x;\theta)$ is $\Theta(\eta/L)$ \emph{if and only if} \begin{equation} \label{eq:scalar-constraint} \left(\prod_{k\in[m]\setminus\{j\}} a_k\right)x = \Theta\left(\frac{1}{\sqrt{L}}\right),\quad \text{where}\quad j\in\arg\min_k a_k \end{equation} \end{theorem} \begin{proof} We start by calculating the gradient of each parameter: \begin{equation} \frac{\partial\ell}{\partial a_i} = \frac{\partial\ell}{\partial F}\left(\prod_{k\in[m]\setminus\{i\}} a_k\right)x \end{equation} and a first-order approximation of $\Delta F(x)$: \begin{equation} \label{eq:Delta-F} \Delta F(x) = -\eta \frac{\partial\ell}{\partial F(x)}\left(F(x)\right)^2\sum_{i=1}^m \frac{1}{a_i^2} \end{equation} where we conveniently abuse some notations by defining \begin{equation} F(x)\frac{1}{a_i}\triangleq\left(\prod_{k\in[m]\setminus\{i\}}a_k\right)x, \quad \text{if~} a_i = 0. \end{equation} Denote $\sum_{i=1}^m \frac{1}{a_i^2}$ as $M$ and $\min_k a_k$ as $A$, we have \begin{equation} \label{eq:A-M-bound} (F(x))^2\cdot\frac{1}{A^2}\le (F(x))^2M\le (F(x))^2\cdot\frac{m}{A^2} \end{equation} and therefore by rearranging \Cref{eq:Delta-F} and letting $\Delta F(x)=\Theta(\eta/L)$ we get \begin{equation} (F(x))^2\cdot\frac{1}{A^2} = \Theta\left(\frac{\Delta F(x)}{\eta\frac{\partial\ell}{\partial F(x)}}\right) = \Theta\left(\frac{1}{L}\right) \end{equation} i.e. $F(x)/A = \Theta(1/\sqrt{L})$. Hence the ``only if'' part is proved. For the ``if'' part, we apply \Cref{eq:A-M-bound} to \Cref{eq:Delta-F} and observe that by \Cref{eq:scalar-constraint} \begin{equation} \Delta F(x) = \Theta\left(\eta(F(x))^2\cdot\frac{1}{A^2}\right) = \Theta\left(\frac{\eta}{L}\right) \end{equation} \end{proof} The result of this theorem provides useful guidance on how to rescale the standard initialization to achieve the desired update scale for the network function. \section{Additional experiments} \subsection{Ablation studies of Fixup} \label{sec:ablation} In this section we present the training curves of different architecture designs and initialization schemes. Specifically, we compare the training accuracy of batch normalization, Fixup, as well as a few ablated options: (1) removing the bias parameters in the network; (2) use $0.1$x the suggested initialization scale and no bias parameters; (3) use $10$x the suggested initialization scale and no bias parameters; and (4) remove all the residual branches. The results are shown in \Cref{fig:cifar10-train-acc}. 
We see that initializing the residual branch layers at a smaller scale (or all zero) slows down learning, whereas training fails when initializing them at a larger scale; we also see the clear benefit of adding bias parameters in the network. \begin{figure}[htp] \centering \includegraphics[width=0.6\textwidth]{figures/cifar10_train_acc.pdf} \caption{Minibatch training accuracy of ResNet-110 on CIFAR-10 dataset with different configurations in the first 3 epochs. We use minibatch size of 128 and smooth the curves using 10-step moving average.} \label{fig:cifar10-train-acc} \end{figure} \subsection{CIFAR and SVHN with better regularization} \label{sec:additional-cifar} We perform additional experiments to validate our hypothesis that the gap in test error between Fixup and batch normalization is primarily due to overfitting. To combat overfitting, we use Mixup \citep{zhang2017mixup} and Cutout \citep{devries2017cutout} with default hyperparameters as additional regularization. On the CIFAR-10 dataset, we perform experiments with WideResNet-40-10 and on SVHN we use WideResNet-16-12 \citep{zagoruyko2016wide}, all with the default hyperparameters. We observe in \Cref{tbl:cifar-svhn} that models trained with Fixup and strong regularization are competitive with state-of-the-art methods on CIFAR-10 and SVHN, as well as our baseline with batch normalization. \begin{table}[htp] \centering \begin{tabular}{ llcr } \toprule Dataset & Model & Normalization & Test Error ($\%$) \\ \midrule \multirow{5}{*}{CIFAR-10} & \citep{zagoruyko2016wide} & \multirow{3}{*}{Yes} & $3.8$ \\ & \citep{yamada2018shakedrop} & & {\bf 2.3} \\ & BatchNorm + Mixup + Cutout & & 2.5\\ \cmidrule(lr){2-4} & \citep{graham2014fractional} & \multirow{2}{*}{No} & 3.5 \\ & Fixup-init + Mixup + Cutout & & {\bf 2.3} \\ \midrule \multirow{5}{*}{SVHN} & \citep{zagoruyko2016wide} & \multirow{3}{*}{Yes} & 1.5 \\ & \citep{devries2017cutout} & & {\bf 1.3} \\ & BatchNorm + Mixup + Cutout & & 1.4 \\ \cmidrule(lr){2-4} & \citep{lee2016generalizing} & \multirow{2}{*}{No} & 1.7 \\ & Fixup-init + Mixup + Cutout & & {\bf 1.4} \\ \bottomrule \end{tabular} \caption{Additional results on CIFAR-10, SVHN datasets.} \label{tbl:cifar-svhn} \end{table} \subsection{Training and test curves on ImageNet} \label{sec:additional-imagenet} Figure \ref{fig:imagenet} shows that without additional regularization Fixup fits the training set very well, but overfits significantly. We see in \Cref{fig:imagenet_mixup} that Fixup is competitive with networks trained with normalization when the Mixup regularizer is used. \begin{figure}[htp] \centering \begin{subfigure}{0.45\textwidth} \includegraphics[width=\textwidth]{figures/imagenet_train.pdf} \end{subfigure} \hfill \begin{subfigure}{0.45\textwidth} \includegraphics[width=\textwidth]{figures/imagenet_test.pdf} \end{subfigure} \caption{Training and test errors on ImageNet using ResNet-50 without additional regularization. We observe that Fixup is able to better fit the training data and that leads to overfitting - more regularization is needed. Results of BatchNorm and GroupNorm reproduced from \citep{wu2018group}.} \label{fig:imagenet}\vspace{-10pt} \end{figure} \begin{figure}[htp] \centering \includegraphics[width=0.7\textwidth]{figures/imagenet_mixup.pdf} \caption{Test error of ResNet-50 on ImageNet with Mixup \citep{zhang2017mixup}. 
Fixup closely matches the final results yielded by the use of GroupNorm, without any normalization.} \label{fig:imagenet_mixup} \end{figure} \section{Additional references: A brief history of normalization methods} \label{sec:normalization-history} The first use of normalization in neural networks appears in the modeling of the biological visual system and dates back at least to \citet{heeger1992normalization} in neuroscience and to \citet{pinto2008real,lyu2008nonlinear} in computer vision, where each neuron output is divided by the sum (or norm) of all of the outputs, a module called divisive normalization. Recent popular normalization methods, such as local response normalization \citep{krizhevsky2012imagenet}, batch normalization \citep{ioffe2015batch} and layer normalization \citep{ba2016layer}, mostly follow this tradition of dividing the neuron activations by certain of their summary statistics, often also with the activation mean subtracted. An exception is weight normalization \citep{salimans2016weight}, which instead divides the weight parameters by their statistics, specifically the weight norm; weight normalization also adopts the idea of activation normalization for weight initialization. The recently proposed actnorm \citep{kingma2018glow} removes the normalization of weight parameters, but still uses activation normalization to initialize the affine transformation layers. \section{Conclusion} In this work, we study how to train a deep residual network reliably without normalization. Our theory in \Cref{sec:problem} suggests that the exploding gradient problem at initialization in a positively homogeneous network such as ResNet is directly linked to the blowup of logits. In \Cref{sec:zeroinit} we develop Fixup initialization to ensure that the whole network, as well as each residual branch, gets updates of proper scale, based on a top-down analysis. Extensive experiments on real world datasets demonstrate that Fixup matches normalization techniques in training deep residual networks, and achieves state-of-the-art test performance with proper regularization. Our work opens up new possibilities for both theory and applications. Can we analyze the training dynamics of Fixup, which may potentially be simpler than analyzing models with batch normalization? Could we apply or extend the initialization scheme to other applications of deep learning? It would also be very interesting to understand the regularization benefits of various normalization methods, and to develop better regularizers to further improve the test performance of Fixup. \section{Experiments} \label{sec:experiments} \subsection{Training at increasing depth} One of the key advantages of BatchNorm is that it leads to fast training even for very deep models \citep{ioffe2015batch}. Here we will determine if we can match this desirable property by relying only on proper initialization. We propose to evaluate how each method affects training very deep nets by \emph{measuring the test accuracy after the first epoch as we increase depth}. In particular, we use the wide residual network (WRN) architecture with width 1 and the default weight decay $\expnumber{5}{-4}$ \citep{zagoruyko2016wide}. We specifically use the default learning rate of $0.1$ because the ability to use high learning rates is considered to be important to the success of BatchNorm.
We compare Fixup against three baseline methods --- (1) rescale the output of each residual block by $\frac{1}{\sqrt{2}}$ \citep{balduzzi2017shattered}, (2) post-process an orthogonal initialization such that the output variance of each residual block is close to 1 (Layer-sequential unit-variance orthogonal initialization, or LSUV) \citep{mishkin2015all}, (3) batch normalization \citep{ioffe2015batch}. We use the default batch size of 128 up to 1000 layers, with a batch size of 64 for 10,000 layers. We limit our budget of epochs to 1 due to the computational strain of evaluating models with up to 10,000 layers. \begin{figure}[htp] \centering \includegraphics[width=0.65\textwidth]{figures/depth_cifar10.pdf} \caption{Depth of residual networks versus test accuracy at the first epoch for various methods on CIFAR-10 with the default BatchNorm learning rate. We observe that Fixup is able to train very deep networks with the same learning rate as batch normalization. (Higher is better.)} \label{fig:depth_cifar} \vspace{-3mm} \end{figure} Figure \ref{fig:depth_cifar} shows the test accuracy at the first epoch as depth increases. Observe that Fixup matches the performance of BatchNorm at the first epoch, even with 10,000 layers. LSUV and $\sqrt{^1/_2}$-scaling are not able to train with the same learning rate as BatchNorm past 100 layers. \subsection{Image classification} \label{sec:image-classification} In this section, we evaluate the ability of Fixup to replace batch normalization in image classification applications. On the CIFAR-10 dataset, we first test on ResNet-110 \citep{he2016deep} with default hyper-parameters; results are shown in \Cref{tbl:cifar-resnet-110}. Fixup obtains a 7\% relative improvement in test error compared with standard initialization; however, we note a substantial difference in the difficulty of training. While a network with Fixup is trained with the same learning rate and converges as fast as a network with batch normalization, we fail to train a Xavier-initialized ResNet-110 even with $0.1$x the maximal learning rate.\footnote{Personal communication with the authors of \citep{shang2017exploring} confirms our observation, and reveals that the Xavier-initialized network needs more epochs to converge.} The test error gap in \Cref{tbl:cifar-resnet-110} is likely due to the regularization effect of BatchNorm rather than difficulty in optimization; when we train Fixup networks with better regularization, the test error gap disappears and we obtain state-of-the-art results on CIFAR-10 and SVHN without normalization layers (see \Cref{sec:additional-cifar}). \begin{table}[htp] \centering {\small \begin{tabular}{ llccr } \toprule Dataset & ResNet-110 & Normalization & Large $\eta$ & Test Error ($\%$) \\ \midrule \multirow{3}{*}{CIFAR-10} & w/ BatchNorm \citep{he2016deep} & \cmark & \cmark & 6.61 \\ \cmidrule(lr){2-5} & w/ Xavier Init \citep{shang2017exploring} & \xmark & \xmark & 7.78 \\ & w/ Fixup-init & \xmark & \cmark & 7.24 \\ \bottomrule \end{tabular} } \caption{Results on CIFAR-10 with ResNet-110 (mean/median of 5 runs; lower is better).} \label{tbl:cifar-resnet-110} \end{table} On the ImageNet dataset, we benchmark Fixup with the ResNet-50 and ResNet-101 architectures \citep{he2016deep}, trained for 100 epochs and 200 epochs respectively.
Similar to our finding on the CIFAR-10 dataset, we observe that (1) training with Fixup is fast and stable with the default hyperparameters, (2) Fixup alone significantly improves the test error of standard initialization, and (3) there is a large test error gap between Fixup and BatchNorm. Further inspection reveals that Fixup-initialized models obtain significantly lower training error compared with BatchNorm models (see \Cref{sec:additional-imagenet}), i.e., Fixup suffers from overfitting. We therefore apply stronger regularization to the Fixup models using Mixup \citep{zhang2017mixup}. We find it is beneficial to reduce the learning rate of the scalar multiplier and bias by $10$x when additional large regularization is used. The best Mixup coefficients are found through cross-validation: they are $0.2$, $0.1$ and $0.7$ for BatchNorm, GroupNorm \citep{wu2018group} and Fixup respectively. We present the results in \Cref{tbl:imagenet}, noting that with better regularization, the performance of Fixup is on par with GroupNorm. \begin{table}[htp] \centering {\small \begin{tabular}{ llcr } \toprule Model & Method & Normalization & Test Error ($\%$) \\ \midrule \multirow{6}{*}{ResNet-50} & BatchNorm \citep{goyal2017accurate} & \multirow{3}{*}{\cmark} & 23.6 \\ & BatchNorm + Mixup \citep{zhang2017mixup} & & {\bf 23.3} \\ & GroupNorm + Mixup & & 23.9 \\ \cmidrule(lr){2-4} & Xavier Init \citep{shang2017exploring} & \multirow{3}{*}{\xmark} & 31.5 \\ & Fixup-init & & 27.6 \\ & Fixup-init + Mixup & & {\bf 24.0} \\ \midrule \multirow{4}{*}{ResNet-101} & BatchNorm \citep{zhang2017mixup} & \multirow{3}{*}{\cmark} & 22.0 \\ & BatchNorm + Mixup \citep{zhang2017mixup} & & {\bf 20.8} \\ & GroupNorm + Mixup & & 21.4 \\ \cmidrule(lr){2-4} & Fixup-init + Mixup & \xmark & 21.4 \\ \bottomrule \end{tabular} } \caption{ImageNet test results using the ResNet architecture. (Lower is better.)} \label{tbl:imagenet} \end{table} \subsection{Machine translation} \label{sec:machine-translation} To demonstrate the generality of Fixup, we also apply it to replace layer normalization \citep{ba2016layer} in Transformer \citep{vaswani2017attention}, a state-of-the-art neural network for machine translation. Specifically, we use the $\text{fairseq}$ library \citep{gehring2017convolutional} and follow the Fixup template in \Cref{sec:zeroinit} to modify the baseline model. We evaluate on two standard machine translation datasets, IWSLT German-English (de-en) and WMT English-German (en-de), following the setup of \cite{ott2018scaling}. For the IWSLT de-en dataset, we cross-validate the dropout probability from $\{0.3, 0.4, 0.5, 0.6\}$ and find $0.5$ to be optimal for both Fixup and the LayerNorm baseline. For the WMT'16 en-de dataset, we use dropout probability $0.4$. All models are trained for $200$k updates. It was reported \citep{chen2018best} that ``Layer normalization is most critical to stabilize the training process... removing layer normalization results in unstable training runs''. However, we find training with Fixup to be very stable and as fast as the baseline model. Results are shown in \Cref{table:machine-translation}. Surprisingly, we find the models do not suffer from overfitting when LayerNorm is replaced by Fixup, thanks to the strong regularization effect of dropout. Instead, Fixup matches or surpasses the state-of-the-art results using the Transformer model on both datasets.
\begin{table}[htp] \centering {\small \begin{tabular}{ llcr } \toprule Dataset & Model & Normalization & BLEU \\ \midrule \multirow{3}{*}{IWSLT DE-EN} & \citep{deng2018latent} & \multirow{2}{*}{\cmark} & 33.1 \\ & LayerNorm & & $34.2$\\ \cmidrule(lr){2-4} & Fixup-init & \xmark & {\bf 34.5} \\ \midrule \multirow{3}{*}{WMT EN-DE} & \citep{vaswani2017attention} & \multirow{2}{*}{\cmark} & 28.4\\ & LayerNorm \citep{ott2018scaling} & & {\bf 29.3}\\ \cmidrule(lr){2-4} & Fixup-init & \xmark & {\bf 29.3}\\ \bottomrule \end{tabular} } \caption{Comparing Fixup vs. LayerNorm for machine translation tasks. (Higher is better.)} \label{table:machine-translation} \vspace{-10pt} \end{table} \section{Introduction} Artificial intelligence applications have witnessed major advances in recent years. At the core of this revolution is the development of novel neural network models and their training techniques. For example, since the landmark work of \cite{he2016deep}, most of the state-of-the-art image recognition systems are built upon a deep stack of network blocks consisting of convolutional layers and additive skip connections, with some normalization mechanism (e.g., batch normalization \citep{ioffe2015batch}) to facilitate training and generalization. Besides image classification, various normalization techniques \citep{ulyanov2016instance, ba2016layer, salimans2016weight, wu2018group} have been found essential to achieving good performance on other tasks, such as machine translation \citep{vaswani2017attention} and generative modeling \citep{zhu2017unpaired}. They are widely believed to have multiple benefits for training very deep neural networks, including stabilizing learning, enabling higher learning rate, accelerating convergence, and improving generalization. Despite the enormous empirical success of training deep networks with normalization, and recent progress on understanding the working of batch normalization \citep{santurkar2018does}, there is currently no general consensus on why these normalization techniques help training residual neural networks. Intrigued by this topic, in this work we study \begin{enumerate}[noitemsep,topsep=0pt,parsep=0pt,partopsep=0pt, leftmargin=*, label=(\roman*)] \item \emph{without} normalization, can a deep residual network be trained reliably? (And if so,) \item \emph{without} normalization, can a deep residual network be trained with the same learning rate, converge at the same speed, and generalize equally well (or even better)? \end{enumerate} Perhaps surprisingly, we find the answers to both questions are \emph{Yes}. In particular, we show: \begin{itemize}[leftmargin=*] \item {\bf Why normalization helps training.} We derive a lower bound for the gradient norm of a residual network at initialization, which explains why with standard initializations, normalization techniques are \emph{essential} for training deep residual networks at maximal learning rate. (\Cref{sec:problem}) \item {\bf Training without normalization.} We propose Fixup, a method that rescales the standard initialization of residual branches by adjusting for the network architecture. Fixup enables training very deep residual networks stably at maximal learning rate without normalization. (\Cref{sec:zeroinit}) \item {\bf Image classification.} We apply Fixup to replace batch normalization on image classification benchmarks CIFAR-10 (with Wide-ResNet) and ImageNet (with ResNet), and find Fixup with proper regularization matches the well-tuned baseline trained with normalization. 
(\Cref{sec:image-classification}) \item {\bf Machine translation.} We apply Fixup to replace layer normalization on machine translation benchmarks IWSLT and WMT using the Transformer model, and find it outperforms the baseline and achieves new state-of-the-art results on the same architecture. (\Cref{sec:machine-translation}) \end{itemize} \begin{figure}[htp] \centering \includegraphics[width=0.8\linewidth]{figures/zeroinit_denormalize.pdf} \caption{{\bf Left:} ResNet basic block. Batch normalization \citep{ioffe2015batch} layers are marked in red. {\bf Middle:} A simple network block that trains stably when stacked together. {\bf Right:} Fixup further improves by adding bias parameters. (See \Cref{sec:zeroinit} for details.)} \label{fig:basic_block} \vspace{-10pt} \end{figure} In the remainder of this paper, we first analyze the exploding gradient problem of residual networks at initialization in \Cref{sec:problem}. To solve this problem, we develop Fixup in \Cref{sec:zeroinit}. In \Cref{sec:experiments} we quantify the properties of Fixup and compare it against state-of-the-art normalization methods on real world benchmarks. A comparison with related work is presented in \Cref{sec:related}. \subsubsection*{Acknowledgments} The authors would like to thank Yuxin Wu, Kaiming He, Aleksander Madry and the anonymous reviewers for their helpful feedback. \section{Problem: ResNet with Standard Initializations Leads to Exploding Gradients}\label{sec:problem} Standard initialization methods \citep{glorot2010understanding,he2015delving,xiao2018dynamical} attempt to set the initial parameters of the network such that the activations neither vanish nor explode. Unfortunately, it has been observed that without normalization techniques such as BatchNorm they do not account properly for the effect of residual connections, and this causes exploding gradients. \citet{balduzzi2017shattered} characterizes this problem for ReLU networks, and we will generalize this to residual networks with positively homogeneous activation functions. A plain (i.e. without normalization layers) ResNet with residual blocks $\{F_0, \ldots, F_{L-1}\}$ and input ${\bf x}_0$ computes the activations as \begin{equation}\label{eqn:resnet} {\bf x}_l = {\bf x}_0 + \sum_{i=0}^{l-1} F_i({\bf x}_i). \end{equation} \paragraph{ResNet output variance grows exponentially with depth.} Here we only study the network at initialization: we view the input ${\bf x}_0$ as fixed and consider the randomness of the weight initialization. We analyze the variance of each layer ${\bf x}_l$, denoted by $\mathrm{Var}[{\bf x}_{l}]$ (technically defined as the sum of the variances of all the coordinates of ${\bf x}_l$). For simplicity we assume the blocks are initialized to be zero mean, i.e., $\mathbb{E} [F_l({\bf x}_l) \mid {\bf x}_l] = 0$. By ${\bf x}_{l+1}= {\bf x}_{l}+ F_l({\bf x}_l)$ and the law of total variance, we have $\mathrm{Var}[{\bf x}_{l+1}] = \mathbb{E}[\mathrm{Var}[F_l({\bf x}_l) \vert {\bf x}_l]] + \mathrm{Var}[{\bf x}_l]$. The ResNet structure prevents ${\bf x}_l$ from vanishing by forcing the variance to grow with depth, i.e. $\mathrm{Var}[{\bf x}_{l}] < \mathrm{Var}[{\bf x}_{l+1}]$ if $\mathbb{E}[\mathrm{Var}[F_l({\bf x}_l) \vert {\bf x}_l]] > 0$.
Yet, combined with initialization methods such as \cite{he2015delving}, the output variance of each residual branch $\mathrm{Var}[F_l({\bf x}_l)\vert {\bf x}_l]$ will be about the same as its input variance $\mathrm{Var}[{\bf x}_l]$, and thus $\mathrm{Var}[{\bf x}_{l+1}] \approx 2 \mathrm{Var}[{\bf x}_{l}]$. This causes the output variance to explode exponentially with depth without normalization \citep{hanin2018start} for positively homogeneous blocks (see Definition \ref{def:first-degree-positively-homogeneous}). This is detrimental to learning because it can in turn cause gradient explosion. As we will show, at initialization, the gradient norm of certain activations and weight tensors is \emph{lower bounded} by the cross-entropy loss up to some constant. Intuitively, this implies that blowup in the logits will cause gradient explosion. Our result applies to convolutional and linear weights in a neural network with ReLU nonlinearity (e.g., feed-forward network, CNN), possibly with skip connections (e.g., ResNet, DenseNet), but without any normalization. Our analysis utilizes properties of positively homogeneous functions, which we now introduce. \begin{mydef}[positively homogeneous function of first degree] \label{def:first-degree-positively-homogeneous} A function $f:\mathbb{R}^m\to \mathbb{R}^n$ is called \emph{positively homogeneous (of first degree)} (p.h.) if for any input $\mathbf{x}\in\mathbb{R}^m$ and $\alpha > 0$, $f(\alpha\mathbf{x}) = \alpha f(\mathbf{x})$. \end{mydef} \begin{mydef}[positively homogeneous set of first degree] Let $\theta=\{\theta_i\}_{i\in S}$ be the set of parameters of $f(\mathbf{x})$ and $\theta_{ph}=\{\theta_i\}_{i\in S_{ph}\subset S}$. We call $\theta_{ph}$ a \emph{positively homogeneous set (of first degree)} (p.h. set) if for any $\alpha > 0$, $f(\mathbf{x};\theta\setminus \theta_{ph}, \alpha \theta_{ph}) = \alpha f(\mathbf{x};\theta\setminus \theta_{ph}, \theta_{ph})$, where $\alpha \theta_{ph}$ denotes $\{\alpha \theta_i\}_{i\in S_{ph}}$. \end{mydef} Intuitively, a p.h. set is a set of parameters $\theta_{ph}$ in function $f$ such that for any fixed input $\mathbf{x}$ and fixed parameters $\theta\setminus \theta_{ph}$, $\bar{f}(\theta_{ph})\triangleq f(\mathbf{x}; \theta\setminus \theta_{ph}, \theta_{ph})$ is a p.h. function. Examples of p.h. functions are ubiquitous in neural networks, including various kinds of linear operations without bias (fully-connected (FC) and convolution layers, pooling, addition, concatenation and dropout etc.) as well as ReLU nonlinearity. Moreover, we have the following claim: \begin{prop} \label{thm:ph-composition} A function that is the composition of p.h. functions is itself p.h. \end{prop} We study classification problems with $c$ classes and the cross-entropy loss. We use $f$ to denote a neural network function except for the softmax layer. Cross-entropy loss is defined as $\ell(\mathbf{z},\mathbf{y})\triangleq -\mathbf{y}^T(\mathbf{z}-\texttt{logsumexp}(\mathbf{z}))$ where $\mathbf{y}$ is the one-hot label vector, $\mathbf{z}\triangleq f(\mathbf{x})\in \mathbb{R}^c$ is the logits where $z_i$ denotes its $i$-th element, and $\texttt{logsumexp}(\mathbf{z})\triangleq\log\left(\sum_{i\in[c]}\exp(z_i)\right)$. 
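Before stating the formal results, the blowup described above is easy to reproduce numerically. The following minimal NumPy sketch (with arbitrary width, depth and seed; it is an illustration, not the experimental setup of this paper) stacks He-initialized residual blocks of the form used above and tracks the activation variance and the resulting cross-entropy, both of which grow rapidly with depth:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def he(fan_in, fan_out):
    # He initialization, the standard choice for ReLU layers.
    return rng.normal(0.0, np.sqrt(2.0 / fan_in), (fan_out, fan_in))

d, L, c = 256, 50, 10      # width, number of residual blocks, classes
x = rng.normal(size=d)     # fixed input, unit variance per coordinate

for l in range(L):
    W1, W2 = he(d, d), he(d, d)
    # x_{l+1} = x_l + F_l(x_l) with a two-layer ReLU branch F_l.
    x = x + np.maximum(W2 @ np.maximum(W1 @ x, 0.0), 0.0)
    if (l + 1) % 10 == 0:
        # The variance grows exponentially with depth.
        print(f"block {l+1:3d}: Var[x] ~ {x.var():.2e}")

z = he(d, c) @ x                       # logits of a final FC layer
y = np.zeros(c); y[0] = 1.0            # an arbitrary one-hot label
lse = z.max() + np.log(np.exp(z - z.max()).sum())  # stable logsumexp
print("cross-entropy:", float(-y @ z + lse))       # blows up with Var[x]
\end{verbatim}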
Consider a minibatch of training examples $\mathcal{D}_M=\{(\mathbf{x}^{(m)},\mathbf{y}^{(m)})\}_{m=1}^M$ and the average cross-entropy loss $\ell_{\mathrm{avg}}(\mathcal{D}_M)\triangleq\frac{1}{M}\sum_{m=1}^M\ell(f(\mathbf{x}^{(m)}), \mathbf{y}^{(m)})$, where we use $^{(m)}$ to index quantities referring to the $m$-th example. $\|\cdot\|$ denotes any valid norm. We only make the following assumptions about the network $f$: \begin{enumerate}[noitemsep,topsep=0pt,parsep=0pt,partopsep=0pt, leftmargin=25pt] \item $f$ is a sequential composition of network blocks $\{f_i\}_{i=1}^L$, i.e. $f(\mathbf{x}_0) = f_L(f_{L-1}(\dots f_1(\mathbf{x}_0)))$, each of which is composed of p.h. functions. \item Weight elements in the FC layer are i.i.d. sampled from a zero-mean symmetric distribution. \end{enumerate} These assumptions hold at initialization if we remove all the normalization layers in a residual network with ReLU nonlinearity, assuming all the biases are initialized at $0$. Our results are summarized in the following two theorems, whose proofs are listed in the appendix: \begin{theorem}\label{thm:gradient-norm-activation} Denote the input to the $i$-th block by $\mathbf{x}_{i-1}$. With Assumption 1, we have \begin{equation} \left\|\frac{\partial\ell}{\partial\mathbf{x}_{i-1}}\right\| \ge \frac{\ell(\mathbf{z},\mathbf{y}) - H(\mathbf{p})}{\|\mathbf{x}_{i-1}\|}, \end{equation} where $\mathbf{p}$ is the softmax probabilities and $H$ denotes the Shannon entropy. \end{theorem} Since $H(\mathbf{p})$ is upper bounded by $\log(c)$ and $\|\mathbf{x}_{i-1}\|$ is small in the lower blocks, blowup in the loss will cause large gradient norm with respect to the lower block input. Our second theorem proves a lower bound on the gradient norm of a p.h. set in a network. \begin{theorem}\label{thm:gradient-norm-weight} With Assumption 1, we have \begin{equation} \left\|\frac{\partial\ell_{\mathrm{avg}}}{\partial\theta_{ph}}\right\| \ge \frac{1}{M\|\theta_{ph}\|} \sum_{m=1}^M \ell(\mathbf{z}^{(m)},\mathbf{y}^{(m)}) - H(\mathbf{p}^{(m)}) \triangleq G(\theta_{ph}). \end{equation} Furthermore, with Assumptions 1 and 2, we have \begin{equation} \mathbb{E}G(\theta_{ph}) \ge \frac{\mathbb{E}[\max_{i\in[c]}z_i]-\log(c)}{\|\theta_{ph}\|}. \end{equation} \vspace{-10pt} \end{theorem} It remains to identify such p.h. sets in a neural network. In \Cref{fig:ph-set-example} we provide three examples of p.h. sets in a ResNet without normalization. \Cref{thm:gradient-norm-weight} suggests that these layers would suffer from the exploding gradient problem, if the logits $\mathbf{z}$ blow up at initialization, which unfortunately would occur in a ResNet without normalization if initialized in a traditional way. This motivates us to introduce a new initialization in the next section. \begin{figure}[htp] \centering \includegraphics[width=0.5\linewidth]{figures/ph-set-example.pdf} \caption{Examples of p.h. sets in a ResNet without normalization: (1) the first convolution layer before max pooling; (2) the fully connected layer before softmax; (3) the union of a spatial downsampling layer in the backbone and a convolution layer in its corresponding residual branch.} \label{fig:ph-set-example} \vspace{-10pt} \end{figure} \section{Related Work}\label{sec:related} \paragraph{Normalization methods.} Normalization methods have enabled training very deep residual networks, and are currently an essential building block of the most successful deep learning architectures. 
All normalization methods for training neural networks explicitly normalize (i.e. standardize) some component (activations or weights) by dividing it by a real number computed from its statistics and/or subtracting a real number computed from the activation statistics (typically the mean) from the activations.\footnote{For reference, we include a brief history of normalization methods in \Cref{sec:normalization-history}.} In contrast, Fixup does not compute statistics (mean, variance or norm) at initialization or during any phase of training, hence is not a normalization method. \paragraph{Theoretical analysis of deep networks.} Training very deep neural networks is an important theoretical problem. Early works study the propagation of variance in the forward and backward pass for different activation functions \citep{glorot2010understanding, he2015delving}. Recently, the study of \emph{dynamical isometry} \citep{saxe2013exact} provides a more detailed characterization of the forward and backward signal propagation at initialization \citep{pennington2017resurrecting,hanin2018neural}, enabling training 10,000-layer CNNs from scratch \citep{xiao2018dynamical}. For residual networks, activation scale \citep{hanin2018start}, gradient variance \citep{balduzzi2017shattered} and the dynamical isometry property \citep{yang2017mean} have been studied. Our analysis in \Cref{sec:problem} leads to a conclusion similar to that of previous work: the standard initialization for residual networks is problematic. However, our use of positive homogeneity for lower bounding the gradient norm of a neural network is novel, and applies to a broad class of neural network architectures (e.g., ResNet, DenseNet) and initialization methods (e.g., Xavier, LSUV) with simple assumptions and proof. \citet{hardt2016identity} analyze the optimization landscape (loss surface) of linearized residual nets in a neighborhood of the zero initialization, where all the critical points are proved to be global minima. \citet{yang2017mean} study the effect of the initialization of residual nets on the test performance and point out that the Xavier or He initialization schemes are not optimal. In this paper, we give a concrete recipe for an initialization scheme with which we can successfully train deep residual networks without batch normalization. \paragraph{Understanding batch normalization.} Despite its popularity in practice, batch normalization has not been well understood. \citet{ioffe2015batch} attributed its success to ``reducing internal covariate shift'', whereas \citet{santurkar2018does} argued that its effect may be ``smoothing loss surface''. Our analysis in Section 2 corroborates the latter idea of \citet{santurkar2018does} by showing that standard initialization leads to a very steep loss surface at initialization. Moreover, we empirically showed in Section 3 that the steep loss surface may be alleviated for residual networks by using a smaller initialization than standard ones, such as Xavier or He initialization, in the residual branches. \citet{van2017l2,hoffer2018norm} studied the effect of (batch) normalization and weight decay on the effective learning rate. Their results inspire us to include a multiplier in each residual branch. \paragraph{ResNet initialization in practice.} \cite{gehring2017convolutional,balduzzi2017shattered} proposed to address the initialization problem of residual nets by using the recurrence ${\bf x}_l = \sqrt{^1/_2}({\bf x}_{l-1} + F_l({\bf x}_{l-1}))$.
\cite{mishkin2015all} proposed a data-dependent initialization to mimic the effect of batch normalization in the first forward pass. While both methods limit the scale of activation and gradient, they would fail to train stably at the maximal learning rate for very deep residual networks, since they fail to consider the accumulation of highly correlated updates contributed by different residual branches to the network function (\Cref{sec:sync-updates}). \cite{srivastava2015highway,hardt2016identity,goyal2017accurate,kingma2018glow} found that initializing the residual branches at (or close to) zero helped optimization. Our results support their observation in general, but \Cref{eq:scalar-branch-constraint} suggests additional subtleties when choosing a good initialization scheme. \section{Fixup: Update a Residual Network $\Theta(\eta)$ per SGD Step} \label{sec:zeroinit} Our analysis in the previous section points out the failure mode of standard initializations for training deep residual network: the gradient norm of certain layers is in expectation lower bounded by a quantity that increases indefinitely with the network depth. However, escaping this failure mode does not necessarily lead us to successful training --- after all, it is \emph{the whole network as a function} that we care about, rather than a layer or a network block. In this section, we propose a top-down design of a new initialization that ensures proper update scale to the network function, by simply rescaling a standard initialization. To start, we denote the learning rate by $\eta$ and set our goal: \begin{tcolorbox}[enhanced,attach boxed title to top center={yshift=-3mm,yshifttext=-1mm}, title=,colback=white, colframe=white!75!blue, coltitle=black, colbacktitle=white, fonttitle=\bfseries] \emph{$f(\mathbf{x};\theta)$ is updated by $\Theta(\eta)$ per SGD step after initialization as $\eta\to 0$.} \\ That is, $\|\Delta f(\mathbf{x})\| = \Theta(\eta)$ where $\Delta f(\mathbf{x})\triangleq f(\mathbf{x};\theta - \eta\frac{\partial}{\partial\theta}\ell(f(\mathbf{x}),\mathbf{y})) - f(\mathbf{x};\theta)$. \end{tcolorbox} Put another way, our goal is to design an initialization such that SGD updates to the network function are in the right scale and independent of the depth. We define the \emph{Shortcut} as the shortest path from input to output in a residual network. The Shortcut is typically a shallow network with a few trainable layers.\footnote{For example, in the ResNet architecture (e.g., ResNet-50, ResNet-101 or ResNet-152) for ImageNet classification, the Shortcut is always a 6-layer network with five convolution layers and one fully-connected layer, irrespective of the total depth of the whole network.} We assume the Shortcut is initialized using a standard method, and focus on the initialization of the residual branches. \paragraph{Residual branches update the network in sync.} To start, we first make an important observation that the SGD update to each residual branch changes the network output in highly correlated directions. This implies that if a residual network has $L$ residual branches, then an SGD step to each residual branch should change the network output by $\Theta(\eta/L)$ on average to achieve an overall $\Theta(\eta)$ update. We defer the formal statement and its proof until \Cref{sec:sync-updates}. \paragraph{Study of a scalar branch.} Next we study how to initialize a residual branch with $m$ layers so that its SGD update changes the network output by $\Theta(\eta/L)$. 
We assume $m$ is a small positive integer (e.g., 2 or 3). As we are only concerned about the scale of the update, it is sufficiently instructive to study the scalar case, i.e., $F(x) = \left(\prod_{i=1}^m a_i\right) x$ where $a_1,\dots,a_m,x\in\mathbb{R}^+$. For example, the standard initialization methods typically initialize each layer so that the output (after nonlinear activation) preserves the input variance, which can be modeled as setting $\forall i\in[m], a_i=1$. In turn, setting $a_i$ to a positive number other than $1$ corresponds to rescaling the i-th layer by $a_i$. Through deriving the constraints for $F(x)$ to make $\Theta(\eta/L)$ updates, we will also discover how to rescale the weight layers of a standard initialization as desired. In particular, we show the SGD update to $F(x)$ is $\Theta(\eta/L)$ \emph{if and only if} the initialization satisfies the following constraint: \begin{equation} \label{eq:scalar-branch-constraint} \left(\prod_{i\in[m]\setminus\{j\}} a_i\right)x = \Theta\left(\frac{1}{\sqrt{L}}\right),\quad \text{where}\quad j\in\arg\min_k a_k \end{equation} We defer the derivation until \Cref{sec:scalar-branch}. \Cref{eq:scalar-branch-constraint} suggests new methods to initialize a residual branch through \emph{rescaling the standard initialization of i-th layer in a residual branch by its corresponding scalar $a_i$}. For example, we could set $\forall i\in[m], a_i=L^{-\frac{1}{2m-2}}$. Alternatively, we could start the residual branch as a zero function by setting $a_m = 0$ and $\forall i\in[m-1], a_i=L^{-\frac{1}{2m-2}}$. In the second option, the residual branch does not need to ``unlearn'' its potentially bad random initial state, which can be beneficial for learning. Therefore, we use the latter option in our experiments, unless otherwise specified. \paragraph{The effects of biases and multipliers.} With proper rescaling of the weights in all the residual branches, a residual network is supposed to be updated by $\Theta(\eta)$ per SGD step --- our goal is achieved. However, in order to match the training performance of a corresponding network with normalization, there are two more things to consider: biases and multipliers. Using biases in the linear and convolution layers is a common practice. In normalization methods, bias and scale parameters are typically used to restore the representation power after normalization.\footnote{For example, in batch normalization gamma and beta parameters are used to affine-transform the normalized activations per each channel.} Intuitively, because the preferred input/output mean of a weight layer may be different from the preferred output/input mean of an activation layer, it also helps to insert bias terms in a residual network without normalization. Empirically, we find that inserting just one scalar bias before each weight layer and nonlinear activation layer significantly improves the training performance. Multipliers scale the output of a residual branch, similar to the scale parameters in batch normalization. They have an interesting effect on the learning dynamics of weight layers in the same branch. Specifically, as the stochastic gradient of a layer is typically almost orthogonal to its weight, learning rate decay tends to cause the weight norm equilibrium to shrink when combined with L2 weight decay \citep{van2017l2}. In a branch with multipliers, this in turn causes the growth of the multipliers, increasing the effective learning rate of other layers. 
In particular, we observe that inserting just one scalar multiplier per residual branch mimics the weight norm dynamics of a network with normalization, and spares us the search for a new learning rate schedule. Put together, we propose the following method to train residual networks without normalization: \begin{tcolorbox}[enhanced,attach boxed title to top center={yshift=-3mm,yshifttext=-1mm}, title=Fixup initialization (or: How to train a deep residual network without normalization), colback=white, colframe=white!75!blue, coltitle=black, colbacktitle=white] \begin{enumerate}[leftmargin=*] \item Initialize the classification layer and the last layer of each residual branch to $0$. \item Initialize every other layer using a standard method (e.g., \cite{he2015delving}), and scale only the weight layers inside residual branches by $L^{-\frac{1}{2m-2}}$. \item Add a scalar multiplier (initialized at 1) in every branch and a scalar bias (initialized at 0) before each convolution, linear, and element-wise activation layer. \end{enumerate} \end{tcolorbox} It is important to note that Rule 2 of Fixup is the essential part, as predicted by \Cref{eq:scalar-branch-constraint}. Indeed, we observe that using Rule 2 alone is necessary and sufficient for training extremely deep residual networks. On the other hand, Rule 1 and Rule 3 make further improvements for training so as to match the performance of a residual network with normalization layers, as explained above.\footnote{It is worth noting that the design of Fixup is a simplification of the common practice, in that we only introduce $O(K)$ parameters beyond convolution and linear weights (since we remove bias terms from convolution and linear layers), whereas the common practice includes $O(KC)$ \citep{ioffe2015batch,salimans2016weight} or $O(KCWH)$ \citep{ba2016layer} additional parameters, where $K$ is the number of layers, $C$ is the max number of channels per layer and $W, H$ are the spatial dimensions of the largest feature maps.} We find that ablation experiments confirm our claims (see \Cref{sec:ablation}). Our initialization and network design are consistent with recent theoretical work~\cite{hardt2016identity,li2017algorithmic}, which, in much simplified settings such as linearized residual nets and quadratic neural nets, proposes that small initialization tends to stabilize optimization and helps generalization. However, our approach suggests that more delicate control of the scale of the initialization is beneficial.\footnote{For example, a learning rate smaller than our choice would also stabilize the training, but lead to a lower convergence rate.}
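For concreteness, the three rules can be written down in a few lines. The following PyTorch-style sketch (assuming a two-layer fully-connected branch, i.e. $m=2$; module structure and names are illustrative, not the exact released implementation) initializes one residual block accordingly:

\begin{verbatim}
import torch
import torch.nn as nn

class FixupBranch(nn.Module):
    """One residual block with an m = 2 fully-connected branch."""
    def __init__(self, dim, L):
        super().__init__()
        self.w1 = nn.Linear(dim, dim, bias=False)
        self.w2 = nn.Linear(dim, dim, bias=False)
        # Rule 3: one scalar multiplier per branch, and a scalar bias
        # before each weight layer and each activation layer.
        self.b1 = nn.Parameter(torch.zeros(1))
        self.b2 = nn.Parameter(torch.zeros(1))
        self.b3 = nn.Parameter(torch.zeros(1))
        self.scale = nn.Parameter(torch.ones(1))
        # Rule 2: standard init, rescaled by L^{-1/(2m-2)} (= L^{-1/2}
        # for m = 2), where L is the number of residual branches.
        nn.init.kaiming_normal_(self.w1.weight)
        with torch.no_grad():
            self.w1.weight.mul_(L ** (-1.0 / (2 * 2 - 2)))
        # Rule 1: the last layer of the branch starts at zero
        # (the classification layer is likewise zero-initialized).
        nn.init.zeros_(self.w2.weight)

    def forward(self, x):
        out = torch.relu(self.w1(x + self.b1) + self.b2)
        out = self.w2(out + self.b3)
        return x + self.scale * out
\end{verbatim}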
\section{Introduction} KDD Cup 2022 proposes a dynamic wind power forecasting competition to encourage the development of data mining and machine learning techniques in the energy domain \cite{zhou2022sdwpf}. In this challenge, wind power data are sampled every 10 minutes for each of the 134 turbines of a wind farm. Essential features consist of external characteristics (\textit{e.g.} wind speed) and internal ones (\textit{e.g.} pitch angle), as well as the relative locations of all wind turbines. Given this dataset, we aim to provide accurate predictions for the future wind generation of each turbine on various timescales. In this paper, we present the solution of Team 88VIP. In general, we ensemble two types of models: a gradient boosting decision tree to memorize basic data patterns \cite{friedman2001gbdt, guyon2017lightgbm}, and a recurrent neural network to capture deep and latent probabilistic transitions \cite{mikolov2010recurrent,chung2014gru}. Ensemble learning of these two types of models, which characterize different aspects of sequential fluctuation, contributes to improved robustness and better predictive performance \cite{dong2020ensemble}. Specifically, compared to general time-series forecasting tasks, wind power forecasting exhibits distinguishing properties: \textit{i) Spatial dynamics}: Predictions for each of the 134 turbines are required. The distributions of any two turbines do not fully match, but share a certain spatial correlation \cite{damousis2004spatial}. \textit{ii) Timescale variation}: 288-length predictions are required for the 48 hours ahead, covering both short-term and long-term prediction scenarios \cite{WANG2021117766}. \textit{iii) Concept drift}: Due to the large time gap between the test set and the training set, the distribution discrepancy cannot be ignored \cite{lu2019drift}. To tackle these specific challenges, corresponding model components are included. First, the spatial distribution of the turbines is considered when clustering data and imputing missing values. Second, submodel training and continual training are proposed to handle the heterogeneous timescales, for tree-based and neural-network-based methods, respectively. Third, the distribution drift is adjusted during the inference stage, by comparing the mean of the predicted values with the mean of the ground truth. Empirical results demonstrate the effectiveness of the proposed solution. In all, the proposed solution achieves an overall online score of \textbf{-45.213} in Phase 3. \section{Method} \subsection{Solution overview} In this section, we introduce the overall pipeline of the proposed solution, as illustrated in Figure \ref{fig:pipeline}. Four essential stages are involved: turbine clustering, data preprocessing, model training, and post-processing. Two major models are combined through ensemble learning: the Gated Recurrent Unit (\textbf{GRU}) and the Gradient Boosting Decision Tree (\textbf{GBDT}). \begin{figure*}[h] \centering \includegraphics[width=0.95\linewidth]{images/arch_bw.pdf} \caption{Total pipeline of the proposed solution.} \label{fig:pipeline} \end{figure*} \subsection{Solution details} \subsubsection{Major models} \paragraph{Gated Recurrent Unit} GRU is a type of recurrent neural network with a gating mechanism \cite{cho-etal-2014-properties}. In recent years, it has shown good performance in many sequential tasks, including signal processing and natural language processing \cite{10.1016/j.neucom.2019.04.044,athiwaratkun2017malware}. 
Though less explored in the energy domain, studies show that GRU performs well on smaller datasets compared to other recurrent network variants \cite{chung2014empirical,gruber2020gru}. More formally, at time $t$, given an element $x_t$ in the input sequence, the GRU layer updates the hidden state from $h_{t-1}$ to $h_t$ as follows: \begin{equation*} \label{eq:gru} \begin{split} r_{t}= &\sigma\left(W_{i r} x_{t}+b_{i r}+W_{h r} h_{(t-1)}+b_{h r}\right)\,, \\ z_{t}= & \sigma\left(W_{i z} x_{t}+b_{i z}+W_{h z} h_{(t-1)}+b_{h z}\right)\,, \\ n_{t}= & \tanh \left(W_{i n} x_{t}+b_{i n}+r_{t} \circ \left(W_{h n} h_{(t-1)}+b_{h n}\right)\right)\,, \\ h_{t}= &\left(1-z_{t}\right) \circ n_{t}+z_{t} \circ h_{(t-1)}\,, \end{split} \end{equation*} where $r_t$, $z_t$, and $n_t$ represent the reset, update, and new gates, respectively; $\sigma$ is the sigmoid function and $\circ$ denotes the Hadamard product. \paragraph{Gradient boosting decision tree} GBDT is an ensemble of weak learners, \textit{i.e.}, decision trees \cite{hastie2009elements}. Boosting iteratively increases the accuracy of the tree-based learners to form a stronger model \cite{mason1999boosting}. In addition, both model performance and interpretability can be achieved \cite{wu2008top}. \subsubsection{Main stages} \paragraph{Turbine clustering} As demonstrated in Figure \ref{fig:pipeline}, the first stage is clustering the 134 turbines. Data in each cluster can then be processed and trained individually to improve the efficiency and effectiveness of training. In terms of clustering method, neighboring turbines can be placed into the same cluster according to their relative locations. Alternatively, the statistical correlation of the wind power series can also be used. Empirically, we find that spatial clustering outperforms correlation-based clustering, as further detailed in Section \ref{sec:exp-cluster}. \paragraph{Data preprocessing} In the second stage, we preprocess the dataset. To deal with the large amount of missing and abnormal values, data imputation techniques are employed, either by averaging within the cluster or by a linear interpolation along the time axis. For GRU, a data scaler is applied to the numeric features. Details are provided in Section \ref{sec:exp-pre}. For GBDT, feature engineering is performed to reconstruct the features used in model training, including the rolling mean, max, min, and standard deviation of the historical sequences. More precisely, the rolling window lengths range from 6 to 144, \textit{i.e.}, from one hour to one day. In addition, value differences between the last timestamp and historical points, ranging from the last 10 minutes to two days, are also taken into account. \paragraph{Model training} GRU and GBDT are trained individually for each turbine cluster. For both, the mean squared error (MSE) is used as the training loss. To deal with the heterogeneous timescales in the 288-length outputs, we develop separate training techniques for the two models. For GRU, we first pretrain the model with an input length of 72 and an output length of 288 for 20 epochs; after that, we finetune it with a relatively small input length of 36 and output length of 36, such that the finetuned model focuses more on short-term prediction. Finally, we replace the first 36 points of the pretrained outputs with the finetuned ones to obtain the final prediction. 
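As an illustration, this splicing step can be sketched as follows, assuming NumPy arrays sampled every 10 minutes; the two model objects and their \texttt{predict} interface are placeholders rather than the competition code.
\begin{verbatim}
import numpy as np

def predict_288_steps(pretrained_gru, finetuned_gru, history):
    # 288-step forecast from the pretrained long-range model (input length 72).
    long_pred = pretrained_gru.predict(history[-72:])
    # 36-step forecast from the finetuned short-range model (input length 36).
    short_pred = finetuned_gru.predict(history[-36:])
    # Replace the first 36 points of the long-range forecast with the
    # short-range one, which is tuned for short-term accuracy.
    return np.concatenate([short_pred, long_pred[36:]])
\end{verbatim}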
For GBDT, different submodels are trained for different output timescales. In this way, each submodel can extract the important features separately for short-term and long-term prediction. Detailed empirical studies are described in Section \ref{sec:exp-train}. \paragraph{Post-processing} During inference, regular post-processing consists of clipping the predicted output to a reasonable range and smoothing the prediction curve to a certain level. Specifically, due to the large time gap between the test set and the training set, the distribution discrepancy cannot be ignored, especially in the online prediction task. In this case, adjusting the prediction results toward the average of the ground truth improves the global predictive performance. A simple yet efficient way is to multiply each predicted value by a constant $\alpha > 0$. Intuitively, if the average of all ground truth is greater than the average of all predictions, then the optimal $\alpha > 1$; otherwise, $\alpha < 1$. More formally, \begin{equation*} \alpha^\star = \arg \min_{\alpha} \ell(\hat{y}, y; \alpha)\,, \quad \ell(\hat{y}, y; \alpha) = \sum_i{(\alpha \hat{y}_i - y_i)^2} + \sum_i{|\alpha \hat{y}_i - y_i|} \,, \end{equation*} where $\hat{y}$ denotes the predicted values and $y$ the ground truth. The existence of an optimal solution is guaranteed since the loss function $\ell$ is convex \textit{w.r.t.} $\alpha$. Empirical experiments related to the adjustment parameter are described in Section \ref{sec:exp-alpha}. \section{Experiments} We conduct comprehensive experiments to demonstrate the effectiveness of our solution. In this section, we focus on \textit{i)} providing the overall predictive performance on both online and offline datasets; \textit{ii)} providing a detailed ablation study to analyze the effectiveness of the model components, mainly in the offline setting. \subsection{Data preprocessing}\label{sec:exp-pre} First, we introduce how the official training and test data are preprocessed. \paragraph{Invalid cases imputation} We encode the invalid conditions, including both abnormal and missing values, as ``NA'' (not available). Abnormal values include cases where the wind power is negative or equals zero while the wind speed is large \cite{zhou2022sdwpf}. As summarized in Table \ref{table:stats}, invalid conditions account for about a third of the total samples. To handle these values, we perform a two-step imputation strategy to reduce uncertainty and bias in the prediction \cite{rubin1988overview, schafer1999multiple}. First, we substitute the missing values with the average within the cluster; second, for those not sharing any valid conditions within the cluster, we perform a linear interpolation along the time axis. \begin{table}[h] \centering \begin{threeparttable} \caption{Dataset statistics.} \begin{tabular}{lrrrr} \toprule & \# days & \# samples & \# abnormal & \# missing \\ \midrule Training & 245 & 4\,727\,520 & 1\,354\,025 & 49\,518 \\ Test & 16 & 308\,736 & 104\,625 & 2\,663 \\ \bottomrule \end{tabular} \label{table:stats} \end{threeparttable} \end{table} \paragraph{Data scaling} Following data imputation, we apply a feature scaler to keep the relative scales of the features comparable. Considering that the data have outliers, as illustrated in Figure \ref{fig:data_distr}, a robust scaler is applied, which removes the median and scales the data according to the range between the $1^{\text{st}}$ and $3^{\text{rd}}$ quartiles \cite{scikit-learn}. 
More formally, for each column $x$: $$x_{\text{scaled}} = \frac{x - x_{\text{median}}}{x_{q75} - x_{q25}} \,.$$ \begin{figure}[h] \centering \begin{subfigure}{0.235\textwidth} \includegraphics[width=1\linewidth]{images_exp/patv.pdf} \caption{Wind power.} \end{subfigure} \hfill \begin{subfigure}{0.235\textwidth} \includegraphics[width=1\linewidth]{images_exp/wspd.pdf} \caption{Wind speed.} \end{subfigure} \caption{Visualizing data distribution. (\textbf{a}) Wind power. (\textbf{b}) Wind speed. Values less than 0 are removed in the plots.} \label{fig:data_distr} \end{figure} \paragraph{Test set reconstruction} The official test set defines a 2-day-ahead forecasting evaluation given a 14-day historical sequence. However, a single evaluation of one 288-length output is far from reliable. In addition, in our solution, the maximum input length is set to 288 (2 days). Thus we reconstruct the test set by sampling rolling windows over the 16 days, turning a single evaluation into 30 evaluations. In this way, multiple evaluations guarantee the reliability of the offline results (a minimal sketch of this splitting is given at the end of this subsection). \begin{table*}[t] \centering \caption{Online and offline forecasting performance of GBDT, GRU, and the final ensemble model. The execution time is also provided. The \textit{Improv} row shows the absolute improvements of the ensemble model over the best-performed single contributing model.} \label{tab:performance} \begin{tabular}{lrrrrrrrrrr} \toprule & \multicolumn{2}{c}{\textbf{Online}} & \multicolumn{6}{c}{\textbf{Offline}} & \multicolumn{2}{c}{\textbf{Time}} \\ \cmidrule(lr{0.5em}){2-3} \cmidrule(lr{0.5em}){4-9} \cmidrule(lr{0.5em}){10-11} & Phase 3 & Phase 2 & Overall Score & RMSE & MAE & 6Hours Score & Day1 Score & Day2 Score & $\quad$ Train & Eval\\ \midrule GBDT & - & -44.430 & -49.788 & 53.977 & 45.600 & -32.537 & -46.526 & -53.032 & 40 min & 25 min\\ GRU & - & -44.370 & -49.797 & 53.924 & 45.670 & -32.946 & -46.434 & -53.051 & 100 min & 10 min\\ \midrule Ensemble & \textbf{-45.213} & -44.195 & -49.646 & 53.775 & 45.517 & -32.406 & -46.302 & -52.935 & - & 35 min \\ \textit{Improv} & - & \textit{+0.175} & \textit{+0.142} & \textit{+0.149} & \textit{+0.083} & \textit{+0.131} & \textit{+0.132} & \textit{+0.097} & - & -\\ \bottomrule \end{tabular} \end{table*} \begin{figure*}[t] \centering \includegraphics[width=1\linewidth]{images_exp/plot_pred.pdf} \vskip -0.30in \caption{An extract of 288-length prediction results.} \label{fig:plot_pred} \end{figure*} \subsection{Hyperparameter setting} The hyperparameters for GRU are listed in Table \ref{tab:hpgru}. Adam is applied for model optimization \cite{DBLP:journals/corr/KingmaB14}. For GBDT, we choose the first 200 days to train the model, while the remaining 45 days are used for hyperparameter tuning and early stopping. The maximum number of boosting rounds is set to 1000 and the early stopping step is 20. The other essential hyperparameters, including the number of leaves, the learning rate, and the bagging parameters, are individually tuned for each submodel; please refer to the code for their precise values. We conduct all experiments on a machine equipped with an Intel(R) Xeon(R) Platinum 8163 CPU @ 2.50GHz and an Nvidia Tesla V100 GPU. 
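As a concrete sketch of the test set reconstruction described above, the following minimal example assumes the 16-day test series is stored as a NumPy array sampled every 10 minutes (144 points per day); the function name and the random sampling scheme are illustrative rather than the competition code.
\begin{verbatim}
import numpy as np

def rolling_eval_splits(series, n_splits=30, input_len=288,
                        output_len=288, seed=0):
    # Randomly place n_splits rolling windows over the 16-day series;
    # each split pairs an input history with the following 288 targets.
    rng = np.random.default_rng(seed)
    max_start = len(series) - input_len - output_len
    starts = rng.choice(max_start, size=n_splits, replace=False)
    return [(series[s:s + input_len],
             series[s + input_len:s + input_len + output_len])
            for s in starts]
\end{verbatim}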
\begin{table}[h] \centering \begin{threeparttable} \caption{Main hyperparameter settings for GRU.} \begin{tabular}{lr} \toprule Hyperparameter Names & Values \\ \midrule GRU layers & 2 \\ GRU hidden units & 48 \\ Numeric embedding dimension & 42 \\ Time embedding dimension & 6 \\ ID embedding dimension & 6 \\ Dropout rate & 0.05 \\ Learning rate & $10^{-4}$\\ \bottomrule \end{tabular} \label{tab:hpgru} \end{threeparttable} \end{table} \subsection{Overall performance} Table \ref{tab:performance} illustrates the overall performance of the investigated solution. The improvement of the ensemble model over the best-performed single contributing model is also reported. For evaluation metrics, apart from RMSE, MAE, and the overall score of the total 288-length prediction, we also consider the scores at various output timescales, including the first 6 hours (6Hours Score), the first day (Day1 Score), and the second day (Day2 Score). According to Table \ref{tab:performance}, the overall results are comparable between the two single contributing models. However, in the offline setting, we can observe slight variations among different timescales: on average, GBDT performs better on the first 6 hours of prediction, while GRU surpasses GBDT on Day 1, which indicates that the two models contribute diversely to the prediction. Meanwhile, we observe consistent improvements of the ensemble model on all metrics, both offline and online. For instance, the ensemble model gains an absolute improvement of 0.142 in offline score and 0.175 in online score (Phase 2) over the best single contributing model. The improvements come from the reduction of prediction bias and variance by aggregating two types of diverse models. For a more intuitive comprehension, a representative example of forecasting results is shown in Figure \ref{fig:plot_pred}. In addition, offline training and evaluation times are also reported in Table \ref{tab:performance}. \subsection{Effectiveness of model components} We now analyze how the various model components impact the forecasting performance. \subsubsection{Ablation study of turbine clustering}\label{sec:exp-cluster} One of the most essential model components is clustering the turbines to achieve more effective and efficient training. More precisely, we compare two ways of clustering turbines, either according to their spatial relative positions or to the statistical correlation of the wind power time series. As demonstrated in Figure \ref{fig:cluster}, in practice, 24 clusters based on turbine location result in relatively good performance for GRU. Similarly, 4 clusters according to turbine location are the best setup for GBDT. \begin{figure}[t] \centering \begin{subfigure}{0.47\textwidth} \begin{tikzpicture} \begin{axis}[linestyle,width=\textwidth, title={}, xlabel={},bar width=2.9mm, symbolic x coords = {134, 24, 8, 4, 1}, legend style={draw=none, at={(0.4,0.8)}, anchor=west, ymax=-48.5, ymin=-55, nodes={scale=0.9, transform shape}, legend image post style={scale=0.5}}] \addplot [color=myblue1, mark=*] coordinates { (134, -50.67) (24, -49.797) (8, -51.91) (4, -52.98) (1, -53.58) }; \end{axis} \end{tikzpicture} \captionof{figure}{Number of turbine clusters (spatial positions). Specifically, number 134 indicates the case of turbine-level individual modelling; number 1 corresponds to a unified model for all turbines.} 
\end{subfigure} \vskip 0.15in \begin{subfigure}{0.28\textwidth} \begin{tikzpicture} \begin{axis}[barstyle,width=\textwidth, title={}, xlabel={},bar width=2mm, symbolic x coords = {spatial positions, statistical correlation}, legend style={draw=none, at={(0.4,0.8)}, anchor=west, ymax=-49.5, nodes={scale=0.6, transform shape}, legend image post style={scale=0.9}}] \addplot [color=myblue1, fill=myblue1] coordinates { (spatial positions, -49.797) (statistical correlation, -49.959) }; \end{axis} \end{tikzpicture} \captionof{figure}{Clustering methods.} \end{subfigure} \caption{Effectiveness of turbine clustering (GRU). (\textbf{a}) Comparison of the number of turbine clusters, according to spatial positions. (\textbf{b}) Comparison of clustering methods.} \label{fig:cluster} \end{figure} \begin{table}[h] \centering \begin{threeparttable} \caption{Important features for submodels of GBDT.} \begin{tabular}{ccc} \toprule Submodels & Timescales& Top 3 important features \\ \midrule \multirow{3}{*}{0 - 3} & \multirow{3}{*}{0-30 min (short-term)} & Time \\ & & Patv \\ & & Patv diff \\ \midrule \multirow{3}{*}{18 - 36} & \multirow{3}{*}{3-6h (mid-term)} & Time \\ & & Wspd \\ & & Patv mean rolling 72 \\ \midrule \multirow{3}{*}{72 - 288} & \multirow{3}{*}{12h-2days (long-term)} & Wspd max rolling 144 \\ & & Wspd std rolling 144\\ & & Patv std rolling 144 \\ \bottomrule \end{tabular} \label{tab:importfeat} \end{threeparttable} \end{table} \subsubsection{Ablation study of heterogeneous timescales training}\label{sec:exp-train} To deal with the heterogeneous timescales in the 288-length outputs, we train submodels for GBDT and perform continual training for GRU, respectively. The effects of these training techniques are shown in Figure \ref{fig:timescales}. More specifically, we investigate the most important features for each GBDT submodel. Table \ref{tab:importfeat} shows that short-term and long-term predictions rely on different types of features. For example, short-term predictions depend mostly on the latest values in the historical sequences, while long-term predictions focus more on periodic statistical features such as the average wind power over the past 24 hours. \begin{figure}[t] \centering \begin{subfigure}{0.43\textwidth} \begin{tikzpicture} \begin{axis}[barstyle,width=\textwidth, title={}, xlabel={},bar width=2.9mm, symbolic x coords = {7, 6, 2}, legend style={draw=none, at={(0.4,0.8)}, anchor=west, ymax=-49, ymin=-50.2, nodes={scale=0.9, transform shape}, legend image post style={scale=0.5}}] \addplot [color=myblue1, fill=myblue1] coordinates { (7, -50.006) (6, -49.788) (2, -49.989) }; \end{axis} \end{tikzpicture} \captionof{figure}{Number of submodels for GBDT. Number 6 corresponds to six submodels for output timescales $1-3-9-18-36-72-288$. Based on that, number 7 corresponds to splitting the last part into $72-144-288$, and number 2 to two submodels $1-72-288$.} \end{subfigure} \vskip 0.1in \begin{subfigure}{0.28\textwidth} \begin{tikzpicture} \begin{axis}[barstyle, width=\textwidth, title={}, xlabel={},bar width=2mm, symbolic x coords = {with continual train, without continual train}, legend style={draw=none, at={(0.4,0.8)}, anchor=west, ymax=-49.5, nodes={scale=0.6, transform shape}, legend image post style={scale=0.9}}] \addplot [color=myblue1, fill=myblue1] coordinates { (with continual train, -49.797) (without continual train, -49.980) }; \end{axis} \end{tikzpicture} \captionof{figure}{Continual training for GRU.} \end{subfigure} \caption{Effectiveness of training on heterogeneous timescales. 
(\textbf{a}) Comparison of the number of submodels for GBDT. (\textbf{b}) Effectiveness of continual training for GRU.} \label{fig:timescales} \end{figure} \subsubsection{Ablation study of the concept drift in prediction} \label{sec:exp-alpha} To correct the concept drift in prediction, an ablation study of the adjustment parameter $\alpha$ is conducted online in Phase 3, as shown in Table \ref{tab:alpha}. \begin{table}[h] \centering \begin{threeparttable} \caption{Effectiveness of the adjustment parameter for online inference. The \textit{Improvement} column indicates the absolute improvement \textit{w.r.t.} the score without adjustment ($\alpha=1$).} \begin{tabular}{ccc} \toprule Adjustment parameter $\alpha$ & Online Score (Phase 3) & \textit{Improvement}\\ \midrule 0.95 & -45.750 & \textit{-0.300}\\ 1 (no adjustment) & -45.450 & - \\ 1.05 & -45.274 & \textit{+0.176} \\ 1.10 & -45.213 & \textit{+0.237} \\ \bottomrule \end{tabular} \label{tab:alpha} \end{threeparttable} \end{table} Note that the ablation studies of each component are conducted with all other hyperparameters held equal. After testing all possible combinations offline, the optimal configuration is summarized in Table \ref{tab:performance}. Finally, regarding reproducibility, we have repeated the offline experiments multiple times, and the variation of the evaluation results is strictly controlled under 0.2\%. \section{Conclusion} In summary, GBDT memorizes the basic data patterns and GRU captures the deep and latent sequential transitions. Ensemble learning of these two models, which characterize different aspects of sequential fluctuation, leads to enhanced model robustness. The detailed design of each stage, including turbine clustering, data preprocessing, model training, and post-processing, jointly contributes to better predictive performance, as empirically verified both offline and online. There are several possible directions to explore in future work. First, more ensemble strategies can be considered, beyond averaging the prediction results. Second, a farm-level forecasting task, instead of the current turbine-level one, may be interesting and beneficial in practice. Last but not least, uncertainty forecasting, such as quantile regression or other probabilistic models, is another center of new-energy AI research worth exploring. \bibliographystyle{ACM-Reference-Format}